
Archive for the ‘Statistical Methods for Research Evaluation’ Category

Eight Subcellular Pathologies driving Chronic Metabolic Diseases – Methods for Mapping Bioelectronic Adjustable Measurements as potential new Therapeutics: Impact on Pharmaceuticals in Use

Curators:

 

THE VOICE of Aviva Lev-Ari, PhD, RN

In this curation we wish to present two breakthrough goals:

Goal 1:

Exposition of a new direction of research leading to a more comprehensive understanding of the Metabolic Dysfunctional Diseases that are implicated in the emergence of the two leading causes of human mortality worldwide in 2023: (a) Cardiovascular Diseases, and (b) Cancer

Goal 2:

Development of Methods for Mapping Bioelectronic Adjustable Measurements as potential new Therapeutics for these eight subcellular causes of chronic metabolic diseases. It is anticipated that this work will have a potential impact on the Pharmaceuticals used in the future, a change from the current treatment protocols for Metabolic Dysfunctional Diseases.

According to Dr. Robert Lustig, MD, an American pediatric endocrinologist and Professor Emeritus of Pediatrics in the Division of Endocrinology at the University of California, San Francisco, where he specialized in neuroendocrinology and childhood obesity, there are eight subcellular pathologies that drive chronic metabolic diseases.

These eight subcellular pathologies cannot be measured at the present time.

In this curation we will attempt to explore methods of measurement for each of these eight pathologies by harnessing the promise of the emerging field known as Bioelectronics.

The eight currently unmeasurable subcellular pathologies that drive chronic metabolic diseases

  1. Glycation
  2. Oxidative Stress
  3. Mitochondrial dysfunction [beta-oxidation Ac CoA malonyl fatty acid]
  4. Insulin resistance/sensitivity [more important than BMI], a known driver of cancer development
  5. Membrane instability
  6. Inflammation in the gut [mucin layer and tight junctions]
  7. Epigenetics/Methylation
  8. Autophagy [AMPKbeta1 improvement in health span]
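
To make Goal 2's mapping exercise concrete, the following is a minimal sketch of how the curation's working table could be organized in code. It is purely a hypothetical scaffold: the eight pathology names come from the list above, while the measurement slots are deliberately left empty, since the point of the curation is that no validated bioelectronic measurement yet exists for them.

```python
# Hypothetical scaffold for the mapping task described in Goal 2: each of the
# eight subcellular pathologies is a key; the candidate bioelectronic measurement
# is left as None because no such measurement is established yet. Nothing here
# asserts that a particular measurement exists - it only organizes the question.
EIGHT_PATHOLOGIES = {
    "Glycation": None,
    "Oxidative Stress": None,
    "Mitochondrial dysfunction (beta-oxidation, Ac-CoA, malonyl, fatty acid)": None,
    "Insulin resistance/sensitivity (more important than BMI; driver of cancer)": None,
    "Membrane instability": None,
    "Inflammation in the gut (mucin layer and tight junctions)": None,
    "Epigenetics/Methylation": None,
    "Autophagy (AMPK-beta1; health span)": None,
}

unmapped = [p for p, measurement in EIGHT_PATHOLOGIES.items() if measurement is None]
print(f"{len(unmapped)} of {len(EIGHT_PATHOLOGIES)} pathologies still lack a proposed bioelectronic measurement")
```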

Diseases that are not Diseases: there are no drugs for them; only diet modification will help

Image source

Robert Lustig, M.D. on the Subcellular Processes That Belie Chronic Disease

https://www.youtube.com/watch?v=Ee_uoxuQo0I

 

Exercise will not undo Unhealthy Diet

Image source

Robert Lustig, M.D. on the Subcellular Processes That Belie Chronic Disease

https://www.youtube.com/watch?v=Ee_uoxuQo0I

 

These eight Subcellular Pathologies driving Chronic Metabolic Diseases become our focus for exploring the promise of Bioelectronics in two pursuits:

  1. Will Bioelectronics be deemed helpful in measuring each of the eight pathological processes that underlie and drive the chronic metabolic syndrome(s) and disease(s)?
  2. IF we are able to suggest new measurements for currently unmeasurable health-harming processes, THEN we will attempt to conceptualize new therapeutic targets and new modalities for therapeutics delivery – WE ARE HOPEFUL

In the Bioelectronics domain we are inspired by the work of the following three research sources:

  1. Biological and Biomedical Electrical Engineering (B2E2) at Cornell University, School of Engineering https://www.engineering.cornell.edu/bio-electrical-engineering-0
  2. Bioelectronics Group at MIT https://bioelectronics.mit.edu/
  3. The work of Michael Levin @Tufts, The Levin Lab
Michael Levin is an American developmental and synthetic biologist at Tufts University, where he is the Vannevar Bush Distinguished Professor. Levin is a director of the Allen Discovery Center at Tufts University and Tufts Center for Regenerative and Developmental Biology. Wikipedia
Born: 1969 (age 54 years), Moscow, Russia
Education: Harvard University (1992–1996), Tufts University (1988–1992)
Awards: Cozzarelli prize (2020)
Doctoral advisor: Clifford Tabin
Most recent 20 Publications by Michael Levin, PhD

SOURCE

SCHOLARLY ARTICLES

  1. The nonlinearity of regulation in biological networks. npj Systems Biology and Applications 9(1), 1 Dec 2023. Co-authors: Manicka S, Johnson K, Levin M.
  2. Toward an ethics of autopoietic technology: Stress, care, and intelligence. BioSystems 231, 1 Sep 2023. Co-authors: Witkowski O, Doctor T, Solomonova E.
  3. Closing the Loop on Morphogenesis: A Mathematical Model of Morphogenesis by Closed-Loop Reaction-Diffusion. Frontiers in Cell and Developmental Biology 11:1087650, 14 Aug 2023. Co-authors: Grodstein J, McMillen P, Levin M.
  4. Biochim Biophys Acta Gen Subj 1867(10):130440, 30 Jul 2023. Co-authors: Cervera J, Levin M, Mafe S.
  5. Regulative development as a model for origin of life and artificial life studies. BioSystems 229, 1 Jul 2023. Co-authors: Fields C, Levin M.
  6. The Yin and Yang of Breast Cancer: Ion Channels as Determinants of Left–Right Functional Differences. International Journal of Molecular Sciences 24(13), 1 Jul 2023. Co-authors: Masuelli S, Real S, McMillen P.
  7. Bioelectricidad en agregados multicelulares de células no excitables – modelos biofísicos [Bioelectricity in multicellular aggregates of non-excitable cells – biophysical models]. Revista Española de Física 32(2), Jun 2023. Co-authors: Cervera J, Levin M, Mafé S.
  8. Bioelectricity: A Multifaceted Discipline, and a Multifaceted Issue! Bioelectricity 5(2):75, 1 Jun 2023. Co-authors: Djamgoz MBA, Levin M.
  9. Control Flow in Active Inference Systems – Part I: Classical and Quantum Formulations of Active Inference. IEEE Transactions on Molecular, Biological, and Multi-Scale Communications 9(2):235-245, 1 Jun 2023. Co-authors: Fields C, Fabrocini F, Friston K.
  10. Control Flow in Active Inference Systems – Part II: Tensor Networks as General Models of Control Flow. IEEE Transactions on Molecular, Biological, and Multi-Scale Communications 9(2):246-256, 1 Jun 2023. Co-authors: Fields C, Fabrocini F, Friston K.
  11. Darwin’s agential materials: evolutionary implications of multiscale competency in developmental biology. Cellular and Molecular Life Sciences 80(6), 1 Jun 2023. Co-authors: Levin M.
  12. Morphoceuticals: Perspectives for discovery of drugs targeting anatomical control mechanisms in regenerative medicine, cancer and aging. Drug Discovery Today 28(6), 1 Jun 2023. Co-authors: Pio-Lopez L, Levin M.
  13. Cellular signaling pathways as plastic, proto-cognitive systems: Implications for biomedicine. Patterns 4(5), 12 May 2023. Co-authors: Mathews J, Chang A, Devlin L.
  14. Making and breaking symmetries in mind and life. Interface Focus 13(3), 14 Apr 2023. Co-authors: Safron A, Sakthivadivel DAR, Sheikhbahaee Z.
  15. The scaling of goals from cellular to anatomical homeostasis: an evolutionary simulation, experiment and analysis. Interface Focus 13(3), 14 Apr 2023. Co-authors: Pio-Lopez L, Bischof J, LaPalme JV.
  16. The collective intelligence of evolution and development. Collective Intelligence 2(2):263391372311683, Apr 2023 (SAGE Publications). Co-authors: Watson R, Levin M.
  17. Bioelectricity of non-excitable cells and multicellular pattern memories: Biophysical modeling. Physics Reports 1004:1-31, 13 Mar 2023. Co-authors: Cervera J, Levin M, Mafe S.
  18. There’s Plenty of Room Right Here: Biological Systems as Evolved, Overloaded, Multi-Scale Machines. Biomimetics 8(1), 1 Mar 2023. Co-authors: Bongard J, Levin M.
  19. Transplantation of fragments from different planaria: A bioelectrical model for head regeneration. Journal of Theoretical Biology 558, 7 Feb 2023. Co-authors: Cervera J, Manzanares JA, Levin M.
  20. Bioelectric networks: the cognitive glue enabling evolutionary scaling from physiology to mind. Animal Cognition, 1 Jan 2023. Co-authors: Levin M.
  21. Biological Robots: Perspectives on an Emerging Interdisciplinary Field. Soft Robotics, 1 Jan 2023. Co-authors: Blackiston D, Kriegman S, Bongard J.
  22. Cellular Competency during Development Alters Evolutionary Dynamics in an Artificial Embryogeny Model. Entropy 25(1), 1 Jan 2023. Co-authors: Shreesha L, Levin M. (5 total citations on Dimensions; Altmetric score of 16.)
  23. Biological Journal of the Linnean Society 138(1):141, 1 Jan 2023. Co-authors: Clawson WP, Levin M.
  24. Future medicine: from molecular pathways to the collective intelligence of the body. Trends in Molecular Medicine, 1 Jan 2023. Co-authors: Lagasse E, Levin M.

THE VOICE of Dr. Justin D. Pearlman, MD, PhD, FACC

PENDING

THE VOICE of  Stephen J. Williams, PhD

Ten TakeAway Points of Dr. Lustig’s talk on the role of diet in the incidence of Type II Diabetes

 

  1. 25% of US children have fatty liver
  2. Type II diabetes can be manifested from fatty liver, with 151 million people worldwide affected, projected to rise to 568 million in 7 years
  3. A common myth is that diabetes is due to an overweight condition driving the metabolic disease
  4. There is a trend of ‘lean’ diabetes, or diabetes in lean people; therefore body mass index is not a reliable biomarker of diabetes risk
  5. Thirty percent of ‘obese’ people just have high subcutaneous fat. The visceral fat is more problematic
  6. There are people who are ‘fat’ but insulin sensitive while having growth hormone receptor defects. This points to other issues related to metabolic state besides insulin, and potentially the insulin-like growth factors
  7. At any BMI some patients are insulin sensitive while some are resistant
  8. Visceral fat accumulation may be more due to a chronic stress condition
  9. Fructose can decrease liver mitochondrial function
  10. A methionine- and choline-deficient diet can lead to rapid NASH development

 

Read Full Post »

Thriving at the Survival Calls during Careers in the Digital Age – An AGE like no Other, also known as, DIGITAL, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

Thriving at the Survival Calls during Careers in the Digital Age – An AGE like no Other, also known as, DIGITAL

Author and Curator: Aviva Lev-Ari, PhD, RN

 

The inspiration for writing this curation is described in

Survival Calls during Careers in the Digital Age

https://pharmaceuticalintelligence.com/2018/06/13/survival-calls-during-careers-in-the-digital-age/

 

In this curation, I present the following concepts in three parts:

  1. Part 1: Authenticity of Careers in the Digital Age: In Focus, the BioTechnology Sector
  2. Part 2: Top 10 books to help you survive the Digital Age

  3. Part 3: A case study on Thriving at the Survival Calls during Careers in the Digital Age: Aviva Lev-Ari, UCB, PhD’83; HUJI, MA’76 

 

Part 1: Authenticity of Careers in the Digital Age: 

In Focus, the BioTechnology Sector

 

Lisa LaMotta, Senior Editor, BioPharma Dive wrote in Conference edition | June 11, 2018

Unlike that little cancer conference in Chicago last week, the BIO International convention is not about data, but about the people who make up the biopharma industry.

The meeting brings together scientists, board members, business development heads and salespeople, from the smallest virtual biotechs to the largest of pharmas. It allows executives at fledgling biotechs to sit at the same tables as major decision-makers in the industry — even if it does look a little bit like speed dating.

But it’s not just a partnering meeting.

This year’s BIO also sought to shine a light on pressing issues facing the industry. Among those tackled included elevating the discussion on gender diversity and how to bring more women to the board level; raising awareness around suicide and the need for more mental health treatments; giving a voice to patient advocacy groups; and highlighting the need for access to treatments in developing nations.

Four days of meetings and panel discussions are unlikely to move the needle for many of these challenges, but debate can be the first step toward progress.

I attended the meetings on June 4–6, 2018 and covered in real time the sessions I attended. At the links below, Tweets, Re-Tweets and Likes mirror the feelings and the opinions of the attendees as expressed in real time on the Twitter.com platform. This BioTechnology event manifested the AUTHENTICITY of Careers in the Digital Age – An AGE like no Other, also known as, DIGITAL.

The entire event is covered on twitter.com by the following hash tag and two handles:

 

I covered the events on two tracks via two Twitter handles; each handle has its own followers:

The official LPBI Group Twitter.com account

The Aviva Lev-Ari, PhD, RN Twitter.com account

Track A:

  • Original Tweets by @Pharma_BI and by @AVIVA1950 for #BIO2018 @IAmBiotech @BIOConvention – BIO 2018, Boston, June 4-7, 2018, BCEC

Curator: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/06/11/original-tweets-by-pharma_bi-and-by-aviva1950-from-bio2018-iambiotech-bioconvention-bio-2018-boston-june-4-7-2018-bcec/

 

  • Reactions to Original Tweets by @Pharma_BI and by @AVIVA1950 from #BIO2018

Curator: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/06/12/reactions-to-original-tweets-by-pharma_bi-and-by-aviva1950-from-bio2018/

Track B:

  • Re-Tweets and Likes by @Pharma_BI and by @AVIVA1950 from #BIO2018 @IAmBiotech @BIOConvention – BIO 2018, Boston, June 4-7, 2018, BCEC

Curator: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/06/11/re-tweets-and-likes-by-pharma_bi-aviva1950-from-bio2018-iambiotech-bioconvention-bio-2018-boston-june-4-7-2018-bcec/

Part 2: Top 10 books to help you survive the digital age

From Philip K Dick’s obtuse robots to Mark O’Connell’s guide to transhumanism, novelist Julian Gough picks essential reading for a helter skelter world

Here are 10 of the books that did help me [novelist Julian Gough]: they might also help you understand, and survive, our complicated, stressful, digital age.

  1. Marshall McLuhan Unbound by Marshall McLuhan (2005)
    The visionary Canadian media analyst predicted the internet, and coined the phrase the Global Village, in the early 1960s. His dense, complex, intriguing books explore how changes in technology change us. This book presents his most important essays as 20 slim pamphlets in a handsome, profoundly physical, defiantly non-digital slipcase.
  2. Ubik by Philip K Dick (1969)
    Pure pulp SF pleasure; a deep book disguised as a dumb one. Dick shows us, not a dystopia, but a believably shabby, amusingly human future. The everyman hero, Joe Chip, wakes up and argues with his robot toaster, which refuses to toast until he sticks a coin in the slot. Joe can’t do this, because he’s broke. He then has a stand-up row with his robot front door, which won’t open, because he owes it money too … Technology changes: being human, and broke, doesn’t. Warning: Dick wrote Ubik at speed, on speed. But embedded in the pulpy prose are diamonds of imagery that will stay with you for ever.
  3. The Singularity Is Near by Ray Kurzweil (2005)
    This book is what Silicon Valley has instead of a bible. It’s a visionary work that predicts a technological transformation of the world in our lifetime. Kurzweil argues that computer intelligence will soon outperform human thought. We will then encode our minds, upload them, and become one with our technology, achieving the Singularity. At which point, the curve of technological progress starts to go straight up. Ultimately – omnipotent, no longer mortal, no longer flesh – we transform all the matter in the universe into consciousness; into us.
  4. To Be a Machine by Mark O’Connell (2017)
    This response to Kurzweil won this year’s Wellcome prize. It’s a short, punchy tour of transhumanism: the attempt to meld our minds with machines, to transcend biology and escape death. He meets some of the main players, and many on the fringes, and listens to them, quizzically. It is a deliberately, defiantly human book, operating in that very modern zone between sarcasm and irony, where humans thrive and computers crash.
  5. A Visit from the Goon Squad by Jennifer Egan (2011)
    This intricately structured, incredibly clever novel moves from the 60s right through to a future maybe 15 years from now. It steps so lightly into that future you hardly notice the transition. It has sex and drugs and rock’n’roll, solar farms, social media scams and a stunningly moving chapter written as a PowerPoint presentation. It’s a masterpiece. Life will be like this.
  6. What Technology Wants by Kevin Kelly (2010)
    Kelly argues that we scruffy biological humans are no longer driving technological progress. Instead, the technium, “the greater, global, massively interconnected system of technology vibrating around us”, is now driving its own progress, faster and faster, and we are just caught up in its slipstream. As we accelerate down the technological waterslide, there is no stopping now … Kelly’s vision of the future is scary, but it’s fun, and there is still a place for us in it.
  7. The Meme Machine by Susan Blackmore (1999)
    Blackmore expands powerfully and convincingly on Richard Dawkins’s original concept of the meme. She makes a forceful case that technology, religion, fashion, art and even our personalities are made of memes – ideas that replicate, mutate and thus evolve over time. We are their replicators (if you buy my novel, you’ve replicated its memes); but memes drive our behaviour just as we drive theirs. It’s a fascinating book that will flip your world upside down.
  8. Neuromancer by William Gibson (1984)
    In the early 1980s, Gibson watched kids leaning into the screens as they played arcade games. They wanted to be inside the machines, he realised, and they preferred the games to reality. In this novel, Gibson invented the term cyberspace; sparked the cyberpunk movement (to his chagrin); and vividly imagined the jittery, multi-screened, anxious, technological reality that his book would help call into being.
  9. You Are Not a Gadget: A Manifesto by Jaron Lanier (2010)
    Lanier, an intense, brilliant, dreadlocked artist, musician and computer scientist, helped to develop virtual reality. His influential essay Digital Maoism described early the downsides of online collective action. And he is deeply aware that design choices made by (mainly white, young, male) software engineers can shape human behaviour globally. He argues, urgently, that we need to question those choices, now, because once they are locked in, all of humanity must move along those tracks, and we may not like where they take us. Events since 2010 have proved him right. His manifesto is a passionate argument in favour of the individual voice, the individual gesture.
  10. All About Love: New Visions by bell hooks (2000)
    Not, perhaps, an immediately obvious influence on a near-future techno-thriller in which military drones chase a woman and her son through Las Vegas. But hooks’s magnificent exploration and celebration of love, first published 18 years ago, will be far more useful to us, in our alienated digital future, than the 10,000 books of technobabble published this year. All About Love is an intensely practical roadmap, from where we are now to where we could be. When Naomi and Colt find themselves on the run through a militarised American wilderness of spirit, when GPS fails them, bell hooks is their secret guide.

SOURCE

https://www.theguardian.com/books/2018/may/30/top-10-books-to-help-you-survive-the-digital-age?utm_source=esp&utm_medium=Email&utm_campaign=Bookmarks+-+Collections+2017&utm_term=277690&subid=25658468&CMP=bookmarks_collection

Part 3: A case study on Thriving at the Survival Calls during Careers in the Digital Age:  Aviva Lev-Ari, UCB, PhD’83; HUJI, MA’76

 

On June 10, 2018

 

Following is a case study about an alumna of HUJI and UC Berkeley as an inspirational role model – an alumna’s profile in the context of dynamic careers in the digital age. It has great timeliness and relevance to PhD-level graduate students at UC Berkeley and beyond, and at all other top-tier universities in the US and Europe, as presented in the following curations:

Professional Self Re-Invention: From Academia to Industry – Opportunities for PhDs in the Business Sector of the Economy

https://pharmaceuticalintelligence.com/2018/05/22/professional-self-re-invention-from-academia-to-industry-opportunities-for-phds-in-the-business-sector-of-the-economy/

 

Pioneering implementations of analytics to business decision making: contributions to domain knowledge conceptualization, research design, methodology development, data modeling and statistical data analysis: Aviva Lev-Ari, UCB, PhD’83; HUJI, MA’76 

https://pharmaceuticalintelligence.com/2018/05/28/pioneering-implementations-of-analytics-to-business-decision-making-contributions-to-domain-knowledge-conceptualization-research-design-methodology-development-data-modeling-and-statistical-data-a/

 

This alumna is Editor-in-Chief of a Journal that has another 173 articles in the category Scientist: Career Considerations

https://pharmaceuticalintelligence.com/category/scientist-career-considerations/

 

In a 5/22/2018 article, Ways to Pursue Science Careers in Business After a PhD by Ankita Gurao,

https://bitesizebio.com/38498/ways-to-pursue-the-business-of-science-after-a-ph-d/?utm_source=facebook&utm_medium=social&utm_campaign=SocialWarfare

In the article, which includes unemployment figures for PhDs by field of science, Ankita Gurao identifies the following four alternative careers for PhDs in the non-academic world:

  1. Science Writer/Journalist/Communicator
  2. Science Management
  3. Science Administration
  4. Science Entrepreneurship

My career, as presented in Reflections on a Four-phase Career: Aviva Lev-Ari, PhD, RN, March 2018

https://pharmaceuticalintelligence.com/2018/03/06/reflections-on-a-four-phase-career-aviva-lev-ari-phd-rn-march-2018/

has the following phases:

  • Phase 1: Research, 1973 – 1983
  • Phase 2: Corporate Applied Research in the US, 1985 – 2005
  • Phase 3: Career Reinvention in Health Care, 2005 – 2012
  • Phase 4: Electronic Scientific Publishing, 4/2012 to present

These four phases map readily onto the four alternative careers for PhDs in the non-academic world. One can draw parallels between the four career opportunities, labeled A, B, C, D below, and each one of the four phases in my own career.

Namely, I have identified A,B,C,D as early as 1985, and pursued each of them in several institutional settings, as follows:

A. Science Writer/Journalist/Communicator – see link above for Phase 4: Electronic Scientific Publishing, 4/2012 to present 

B. Science Management – see link above for Phase 2: Corporate Applied Research in the US, 1985 – 2005 and Phase 3: Career Reinvention in Health Care, 2005 – 2012 

C. Science Administration – see link above for Phase 2: Corporate Applied Research in the US, 1985 – 2005 and Phase 4: Electronic Scientific Publishing, 4/2012 to present

D. Science Entrepreneurship – see link above for Phase 4: Electronic Scientific Publishing, 4/2012 to present  

Impressions of my days at Berkeley are presented in Recollections: Parts 1 and 2, below.

  • Recollections: Part 1 – My days at Berkeley, 9/1978 – 12/1983 –About my doctoral advisor, Allan Pred, other professors and other peers

https://pharmaceuticalintelligence.com/2018/03/15/recollections-my-days-at-berkeley-9-1978-12-1983-about-my-doctoral-advisor-allan-pred-other-professors-and-other-peer/

  • Recollections: Part 2 – “While Rolling” is preceded by “While Enrolling” Autobiographical Alumna Recollections of Berkeley – Aviva Lev-Ari, PhD’83

https://pharmaceuticalintelligence.com/2018/05/24/recollections-part-2-while-rolling-is-preceded-by-while-enrolling-autobiographical-alumna-recollections-of-berkeley-aviva-lev-ari-phd83/

The topic of Careers in the Digital Age is closely related to my profile, see chiefly: Four-phase Career, Reflections, Recollections Parts 1 & 2 and information from other biographical sources, below.

Other sources for my biography

 

Read Full Post »

Decline in Sperm Count – Epigenetics, Well-being and the Significance for Population Evolution and Demography

Dr. Marc Feldman, Expert Opinion on the significance of Sperm Count Decline on the Future of Population Evolution and Demography

Dr. Sudipta Saha, Effects of Sperm Quality and Quantity on Human Reproduction

Dr. Aviva Lev-Ari, Psycho-Social Effects of Poverty, Unemployment and Epigenetics on Male Well-being, Physiological Conditions affecting Sperm Quality and Quantity

 

Updated on 10/6/2022

“Red states are now less healthy than blue states, a reversal of what was once the case,” Anne Case and Angus Deaton, economists at Princeton, argue in a paper they published in April, “The Great Divide: Education, Despair, and Death.”

Carol Graham, a senior fellow at Brookings, described the erosion of economic and social status for whites without college degrees in a 2021 paper:

From 2005 to 2019, an average of 70,000 Americans died annually from deaths of despair (suicide, drug overdose, and alcohol poisoning). These deaths are concentrated among less than college educated middle-aged whites, with those out of the labor force disproportionately represented. Low-income minorities are significantly more optimistic than whites and much less likely to die of these deaths. This despair reflects the decline of the white working class. Counties with more respondents reporting lost hope in the years before 2016 were more likely to vote for Trump.

A 2010 Pew Research Center study that examined the effects of the Great Recession on Black and white Americans reported that Black Americans consistently suffered more in terms of unemployment, work cutbacks and other measures, but remained far more optimistic about the future than whites. Twice as many Black as white Americans were forced during the 2008 recession to work fewer hours, to take unpaid leave or to switch to part-time, and Black unemployment rose from 8.9 to 15.5 percent from April 2007 to April 2009, compared with an increase from 3.7 to 8 percent for whites.

Despite experiencing more hardship, 81 percent of Black Americans agreed with the statement “America will always continue to be prosperous and make economic progress,” compared with 59 percent of whites; 45 percent of Black Americans said the country was still in recession compared with 57 percent of whites

In “Trends in Extreme Distress in the United States, 1993-2019,” David G. Blanchflower and Andrew J. Oswald, economists at Dartmouth and the University of Warwick in Britain, note that “the proportion of the U.S. population in extreme distress rose from 3.6 percent in 1993 to 6.4 percent in 2019. Among low-education midlife white persons, the percentage more than doubled, from 4.8 percent to 11.5 percent.”

In her 2020 paper, “Trends in U.S. Working-Age Non-Hispanic White Mortality: Rural-Urban and Within-Rural Differences,” Shannon M. Monnat, a professor of sociology at Syracuse University’s Maxwell School, explained that “between 1990-92 and 2016-18, the mortality rates among non-Hispanic whites increased by 9.6 deaths per 100,000 population among metro males and 30.5 among metro females but increased by 70.1 and 65.0 among nonmetro (rural and exurban) males and females, respectively.”

Three economists, David Autor, David Dorn and Gordon Hanson of M.I.T., the University of Zurich and Harvard, reported in their 2018 paper, “When Work Disappears: Manufacturing Decline and the Falling Marriage Market Value of Young Men,” on the debilitating consequences for working-class men of the “China shock.”

There is some evidence that partisanship correlates with mortality rates.

In their June 2022 paper, “The Association Between Covid-19 Mortality and the County-Level Partisan Divide in the United States,” Neil Jay Sehgal, Dahai Yue, Elle Pope, Ren Hao Wang and Dylan H. Roby, public health experts at the University of Maryland, found in their study of county-level Covid-19 mortality data from Jan. 1, 2020, to Oct. 31, 2021, that “majority Republican counties experienced 72.9 additional deaths per 100,000 people.”

Anne Case wrote in her email that the United States is fast approaching a point where

Education divides everything, including connection to the labor market, marriage, connection to institutions (like organized religion), physical and mental health, and mortality. It does so for whites, Blacks and Hispanics. There has been a profound (not yet complete) convergence in life expectancy by education. There are two Americas now: one with a B.A. and one without.

SOURCE

 

UPDATED on 2/20/2021

Count Down

How Our Modern World Is Threatening Sperm Counts, Altering Male and Female Reproductive Development, and Imperiling the Future of the Human Race

https://www.simonandschuster.com/books/Count-Down/Shanna-H-Swan/9781982113667

Aside from the decline in sperm counts, growing numbers of sperm appear defective — there’s a boom in two-headed sperm — while others loll aimlessly in circles, rather than furiously swimming in pursuit of an egg. And infants who have had greater exposures to a kind of endocrine disruptor called phthalates have smaller penises, Swan found.

Still, the Endocrine Society, the Pediatric Endocrine Society, the President’s Cancer Panel and the World Health Organization have all warned about endocrine disruptors, and Europe and Canada have moved to regulate them.

Scientists are concerned by falling sperm counts and declining egg quality. Endocrine-disrupting chemicals may be the problem.

Opinion Columnist

https://www.nytimes.com/2021/02/20/opinion/sunday/endocrine-disruptors-sperm.html?campaign_id=45&emc=edit_nk_20210220&instance_id=27333&nl=nicholas-kristof&regi_id=65713389&segment_id=52055&te=1&user_id=edf020ada5f25f6d6c4b0b32ac4a1ee9

 

UPDATED on 2/3/2018

Nobody Really Knows What Is Causing the Overdose Epidemic, But Here Are A Few Theories

https://www.buzzfeed.com/danvergano/whats-causing-the-opioid-crisis?utm_term=.kbJPMgaQo4&utm_source=BrandeisNOW%2BWeekly&utm_campaign=58ada49a84-EMAIL_CAMPAIGN_2018_01_29&utm_medium=email#.uugW6mx1dG

 

Recent studies, via rigorous and comprehensive analysis, found that Sperm Count (SC) declined 52.4% between 1973 and 2011 among unselected men from western countries, with no evidence of a ‘leveling off’ in recent years. Declining mean SC implies that an increasing proportion of men have sperm counts below any given threshold for sub-fertility or infertility. The high proportion of men from western countries with concentration below 40 million/ml is particularly concerning given the evidence that SC below this threshold is associated with a decreased monthly probability of conception.

1. Temporal trends in sperm count: a systematic review and meta-regression analysis

Hagai Levine, Niels Jørgensen, Anderson Martino‐Andrade, Jaime Mendiola, Dan Weksler-Derri, Irina Mindlis, Rachel Pinotti, Shanna H Swan. Human Reproduction Update, July 25, 2017, doi:10.1093/humupd/dmx022.

Link: https://academic.oup.com/humupd/article-lookup/doi/10.1093/humupd/dmx022.

2. Sperm Counts Are Declining Among Western Men – Interview with Dr. Hagai Levine

https://news.afhu.org/news/sperm-counts-are-declining-among-western-men?utm_source=Master+List&utm_campaign=dca529d919-EMAIL_CAMPAIGN_2017_07_27&utm_medium=email&utm_term=0_343e19a421-dca529d919-92801633

3. Trends in Sperm Count – Biological Reproduction Observations

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

4. Long, mysterious strips of RNA contribute to low sperm count – Long non-coding RNAs can be added to the group of possible non-structural effects, possibly epigenetic, that might regulate sperm counts.

http://casemed.case.edu/cwrumed360/news-releases/release.cfm?news_id=689

https://scienmag.com/long-mysterious-strips-of-rna-contribute-to-low-sperm-count/

Dynamic expression of long non-coding RNAs reveals their potential roles in spermatogenesis and fertility

Published: 29 July 2017
Thus, we postulated that some lncRNAs may also impact mammalian spermatogenesis and fertility. In this study, we identified a dynamic expression pattern of lncRNAs during murine spermatogenesis. Importantly, we identified a subset of lncRNAs and very few mRNAs that appear to escape meiotic sex chromosome inactivation (MSCI), an epigenetic process that leads to the silencing of the X- and Y-chromosomes at the pachytene stage of meiosis. Further, some of these lncRNAs and mRNAs show strong testis expression pattern suggesting that they may play key roles in spermatogenesis. Lastly, we generated a mouse knock out of one X-linked lncRNA, Tslrn1 (testis-specific long non-coding RNA 1), and found that males carrying a Tslrn1 deletion displayed normal fertility but a significant reduction in spermatozoa. Our findings demonstrate that dysregulation of specific mammalian lncRNAs is a novel mechanism of low sperm count or infertility, thus potentially providing new biomarkers and therapeutic strategies.

This article presents two perspectives on the potential effects of Sperm Count decline.

One Perspective identifies Epigenetics and male well-being conditions

  1. as a potential explanation of the Sperm Count decline, and
  2. as evidence of a decline in White male longevity in certain US geographies since the mid-1980s.

The other Perspective evaluates whether the Sperm Count Decline would or would not have significant long-term effects on Population Evolution and Demography.

The Voice of Prof. Marc Feldman, Stanford University – Long term significance of Sperm Count Decline on Population Evolution and Demography

Poor sperm count appears to be associated with such demographic statistics as life expectancy (1), infertility (2), and morbidity (3,4). The meta-analysis by Levine et al. (5) focuses on the change in sperm count of men from North America, Europe, Australia, and New Zealand, and shows a more than 50% decline between 1973 and 2011. Although there is no analysis of potential environmental or lifestyle factors that could contribute to the estimated decline in sperm count, Levine et al. speculate that this decline could be a signal for other negative changes in men’s health.

Because this study focuses mainly on Western men, this remarkable decline in sperm count is difficult to associate with any change in actual fertility, that is, number of children born per woman. The total fertility rate in Europe, especially Italy, Spain, and Germany, has slowly declined, but age at first marriage has increased at the same time, and this increase may be more due to economic factors than physiological changes.

Included in Levine et al.’s analysis was a set of data from “Other” countries from South America, Asia, and Africa. Sperm count in men from these countries did not show significant trends, which is interesting because there have been strong fertility declines in Asia and Africa over the same period, with corresponding increases in life expectancy (once HIV is accounted for).

What can we say about the evolutionary consequences for humans of this decrease? The answer depends on the minimal number of sperm/ml/year that would be required to maintain fertility (per woman) at replacement level, say 2.1 children, over a woman’s lifetime. Given the smaller number of ova produced per woman, a change in the ovulation statistics of women would be likely to play a larger role in the total fertility rate than the number of sperm/ejaculate/man. In other words, sperm count alone, absent other effects on mortality during male reproductive years, is unlikely to tell us much about human evolution.

Further, the major declines in fertility over the 38-year period covered by Levine et al. occurred in China, India, and Japan. Chinese fertility has declined to less than 1.5 children per woman, and in Japan it has also been well below 1.5 for some time. These declines have been due to national policies and economic changes, and are therefore unlikely to signal genetic changes that would have evolutionary ramifications. It is more likely that cultural changes will continue to be the main drivers of fertility change.

The fastest growing human populations are in the Muslim world, where fertility control is not nearly as widely practiced as in the West or Asia. If this pattern were to continue for a few more generations, the cultural evolutionary impact would swamp any effects of potentially declining sperm count.

On the other hand, if the decline in sperm count were to be discovered to be associated with genetic and/or epigenetic phenotypic effects on fetuses, newborns, or pre-reproductive humans, for example, due to stress or obesity, then there would be cause to worry about long-term evolutionary problems. As Levine et al. remark, “decline in sperm count might be considered as a ‘canary in the coal mine’ for male health across the lifespan”. But to date, there is little evidence that the evolutionary trajectory of humans constitutes such a “coal mine”.

References

  1. Jensen TK, Jacobsen R, Christensen K, Nielsen NC, Bostofte E. 2009. Good semen quality and life expectancy: a cohort study of 43,277 men. Am J Epidemiol 170: 559-565.
  2. Eisenberg ML, Li S, Behr B, Cullen MR, Galusha D, Lamb DJ, Lipshultz LI. 2014. Semen quality, infertility and mortality in the USA. Hum Reprod 29: 1567-1574.
  3. Eisenberg ML, Li S, Cullen MR, Baker LC. 2016. Increased risk of incident chronic medical conditions in infertile men: analysis of United States claims data. Fertil Steril 105: 629-636.
  4. Latif T, Kold Jensen T, Mehlsen J, Holmboe SA, Brinth L, Pors K, Skouby SO, Jorgensen N, Lindahl-Jacobsen R. Semen quality is a predictor of subsequent morbidity. A Danish cohort study of 4,712 men with long-term follow-up. Am J Epidemiol. doi: 10.1093/aje/kwx067. [Epub ahead of print]
  5. Levine H, Jorgensen N, Martino-Andrade A, Mendiola J, Weksler-Derri D, Mindlis I, Pinotti R, Swan SH. 2017. Temporal trends in sperm count: a systematic review and meta-regression analysis. Hum Reprod Update pp. 1-14. doi: 10.1093/humupd/dmx022.

SOURCE

From: Marcus W Feldman <mfeldman@stanford.edu>

Date: Monday, July 31, 2017 at 8:10 PM

To: Aviva Lev-Ari <aviva.lev-ari@comcast.net>

Subject: Fwd: text of sperm count essay

Psycho-Social Effects of Poverty, Unemployment and Epigenetics on Male Well-being, Physiological Conditions as POTENTIAL effects on Sperm Quality and Quantity and Evidence of its effects on Male Longevity

The Voice of Carol Graham, Sergio Pinto, and John Juneau II, Monday, July 24, 2017, Report from the Brookings Institution

  1. The IMPACT of Well-being, Stress induced by Worry, Pain, and Perception of Hope related to Employment and Lack of Employment on deterioration of Physiological Conditions, as evidenced by Decreased Longevity

  2. Epigenetics and Environmental Factors

The geography of desperation in America

Carol Graham, Sergio Pinto, and John Juneau II, Monday, July 24, 2017, Report from the Brookings Institution

In recent work based on our well-being metrics in the Gallup polls and on the mortality data from the Centers for Disease Control and Prevention, we find a robust association between lack of hope (and high levels of worry) among poor whites and the premature mortality rates, both at the individual and metropolitan statistical area (MSA) levels. Yet we also find important differences across places. Places come with different economic structures and identities, community traits, physical environments and much more. In the maps below, we provide a visual picture of the differences in hope for the future, worry, and pain across race-income cohorts across U.S. states. We attempted to isolate the specific role of place, controlling for economic, socio-demographic, and other variables.

One surprise is the low level of optimism and high level of worry in the minority dense and generally “blue” state of California, and high levels of pain and worry in the equally minority dense and “blue” states of New York and Massachusetts. High levels of income inequality in these states may explain these patterns, as may the nature of jobs that poor minorities hold.

We cannot answer many questions at this point. What is it about the state of Washington, for example, that is so bad for minorities across the board? Why is Florida so much better for poor whites than it is for poor minorities? Why is Nevada “good” for poor white optimism but terrible for worry for the same group? One potential issue—which will enter into our future analysis—is racial segregation across places. We hope that the differences that we have found will provoke future exploration. Readers of this piece may have some contributions of their own as they click through the various maps, and we welcome their input. Better understanding the role of place in the “crisis” of despair facing our country is essential to finding viable solutions, as economic explanations, while important, alone are not enough.

https://www.brookings.edu/research/the-geography-of-desperation-in-america/?utm_medium=social&utm_source=facebook&utm_campaign=global
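
The Brookings authors describe isolating the role of place by controlling for economic, socio-demographic and other variables. As a rough sketch of what such a specification can look like (not their actual Gallup-based model or data), the example below regresses an optimism score on race-income cohort with state fixed effects; all variable names and numbers are synthetic and purely illustrative.

```python
# Illustrative sketch only: regress a well-being outcome (optimism) on race-income
# cohort, with state fixed effects and individual controls, mimicking the idea of
# "isolating the role of place." Data are synthetic; this is not the Brookings model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "cohort": rng.choice(["poor_white", "poor_black", "poor_hispanic"], size=n),
    "state":  rng.choice(["CA", "NY", "FL", "WV", "NV"], size=n),
    "age":    rng.integers(25, 65, size=n),
    "income": rng.normal(25000, 5000, size=n),
})
# Synthetic outcome on a 0-10 optimism scale (purely illustrative numbers).
df["optimism"] = (5
                  + 0.8 * (df["cohort"] == "poor_black")
                  - 0.5 * (df["state"] == "WV")
                  + rng.normal(0, 1.5, size=n)).clip(0, 10)

model = smf.ols("optimism ~ C(cohort) + C(state) + age + income", data=df).fit()
print(model.summary().tables[1])  # cohort effects net of state (place) fixed effects
```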

 

Read Full Post »

Trends in Sperm Count

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

 

There has been a genuine decline in semen quality over the past 50 years. There is a lot of controversy about this, as there are limitations in the studies that have attempted to address it. Sperm count is of considerable public health importance for several reasons. First, sperm count is closely linked to male fecundity and is a crucial component of semen analysis, the first step to identify male factor infertility.

Reduced sperm count is associated with cryptorchidism, hypospadias and testicular cancer. It may be associated with multiple environmental influences, including endocrine disrupting chemicals, pesticides, heat and lifestyle factors, including diet, stress, smoking and BMI. Therefore, sperm count may sensitively reflect the impacts of the modern environment on male health throughout the life span.

This study provided a systematic review and meta-regression analysis of recent trends in sperm counts as measured by sperm concentration (SC) and total sperm count (TSC), and their modification by fertility and geographic group. Analyzing trends by birth cohorts instead of year of sample collection may aid in assessing the causes of the decline (prenatal or in adult life) but was not feasible owing to lack of information.
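
The term "meta-regression" here means regressing study-level mean sperm concentration on the year of sample collection, weighting each study by its precision. The sketch below shows a minimal weighted-least-squares version of that idea with made-up study-level numbers; it is far simpler than the multivariable model Levine et al. actually fit, which adjusted for fertility status, geographic group, and other covariates.

```python
# Minimal illustration of a meta-regression of sperm concentration (SC) on
# sample-collection year: study-level means regressed by weighted least squares,
# with weights proportional to each study's precision (1 / SE^2).
# All numbers below are invented for illustration; they are not the Levine et al. data.
import numpy as np
import statsmodels.api as sm

year    = np.array([1975, 1980, 1985, 1990, 1995, 2000, 2005, 2010])  # collection year
mean_sc = np.array([99.0, 94.0, 88.0, 81.0, 76.0, 68.0, 62.0, 55.0])  # million/ml (illustrative)
se      = np.array([6.0, 5.5, 5.0, 4.5, 4.0, 4.0, 3.5, 3.5])          # standard errors (illustrative)

X = sm.add_constant(year)
fit = sm.WLS(mean_sc, X, weights=1.0 / se**2).fit()

slope = fit.params[1]                       # change in SC (million/ml) per year
print(f"estimated slope: {slope:.2f} million/ml per year")
print(f"implied change 1973-2011: {slope * (2011 - 1973):.1f} million/ml")
```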

This rigorous and comprehensive analysis found that SC declined 52.4% between 1973 and 2011 among unselected men from western countries, with no evidence of a ‘leveling off’ in recent years. Declining mean SC implies that an increasing proportion of men have sperm counts below any given threshold for sub-fertility or infertility. The high proportion of men from western countries with concentration below 40 million/ml is particularly concerning given the evidence that SC below this threshold is associated with a decreased monthly probability of conception.

Declines in sperm count have implications beyond fertility and reproduction. The decline reported in this study is consistent with reported trends in other male reproductive health indicators, such as testicular germ cell tumors, cryptorchidism, onset of male puberty and total testosterone levels. The public health implications are even wider. Recent studies have shown that poor sperm count is associated with overall morbidity and mortality. While the current study is not designed to provide direct information on the causes of the observed declines, sperm count has been plausibly associated with multiple environmental (including unwanted chemical exposure in alarming levels) and lifestyle influences, both prenatally and in adult life. In particular, endocrine disruption from chemical exposures or maternal smoking during critical windows of male reproductive development may play a role in prenatal life, while lifestyle changes and exposure to pesticides may play a role in adult life.

These findings strongly suggest a significant decline in male reproductive health, which has serious implications beyond fertility concerns. Research on causes and implications of this decline is urgently needed.

 

REFERENCES

Temporal trends in sperm count: a systematic review and meta-regression analysis 

Hagai Levine, Niels Jørgensen, Anderson Martino‐Andrade, Jaime Mendiola, Dan Weksler-Derri, Irina Mindlis, Rachel Pinotti, Shanna H Swan. Human Reproduction Update, July 25, 2017, doi:10.1093/humupd/dmx022.

Link: https://academic.oup.com/humupd/article-lookup/doi/10.1093/humupd/dmx022.

Sperm Counts Are Declining Among Western Men – Interview with Dr. Hagai Levine

https://news.afhu.org/news/sperm-counts-are-declining-among-western-men?utm_source=Master+List&utm_campaign=dca529d919-EMAIL_CAMPAIGN_2017_07_27&utm_medium=email&utm_term=0_343e19a421-dca529d919-92801633

J Urol. 1983 Sep;130(3):467-75.

A critical method of evaluating tests for male infertility.

https://www.ncbi.nlm.nih.gov/pubmed/6688444

Hum Reprod. 1993 Jan;8(1):65-70.

Estimating fertility potential via semen analysis data.

https://www.ncbi.nlm.nih.gov/pubmed/8458929

Lancet. 1998 Oct 10;352(9135):1172-7.

Relation between semen quality and fertility: a population-based study of 430 first-pregnancy planners.

https://www.ncbi.nlm.nih.gov/pubmed/9777833

Hum Reprod Update. 2010 May-Jun;16(3):231-45. doi: 10.1093/humupd/dmp048. Epub 2009 Nov 24.

World Health Organization reference values for human semen characteristics.

https://www.ncbi.nlm.nih.gov/pubmed/19934213

J Nutr. 2016 May;146(5):1084-92. doi: 10.3945/jn.115.226563. Epub 2016 Apr 13.

Intake of Fruits and Vegetables with Low-to-Moderate Pesticide Residues Is Positively Associated with Semen-Quality Parameters among Young Healthy Men.

https://www.ncbi.nlm.nih.gov/pubmed/27075904

Reprod Toxicol. 2003 Jul-Aug;17(4):451-6.

Semen quality of Indian welders occupationally exposed to nickel and chromium.

https://www.ncbi.nlm.nih.gov/pubmed/12849857

Fertil Steril. 1996 May;65(5):1009-14.

Semen analyses in 1,283 men from the United States over a 25-year period: no decline in quality.

https://www.ncbi.nlm.nih.gov/pubmed/8612826

 

https://www.euronews.com/next/2022/06/10/research-into-falling-sperm-counts-finds-alarming-levels-of-chemicals-in-male-urine-sample

 

Read Full Post »

A New Computational Method illuminates the Heterogeneity and Evolutionary Histories of cells within a Tumor, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)

A New Computational Method illuminates the Heterogeneity and Evolutionary Histories of cells within a Tumor

Reporter: Aviva Lev-Ari, PhD, RN

 

Start Quote

Numerous computational approaches aimed at inferring tumor phylogenies from single or multi-region bulk sequencing data have recently been proposed. Most of these methods utilize the variant allele fraction or cancer cell fraction for somatic single-nucleotide variants restricted to diploid regions to infer a two-state perfect phylogeny, assuming an infinite-site model such that each site can mutate only once and persists. In practice, convergent evolution could result in the acquisition of the same mutation more than once, thereby violating this assumption. Similarly, mutations could be lost due to loss of heterozygosity. Indeed, both single-nucleotide variants and copy number alterations arise during tumor evolution, and both the variant allele fraction and cancer cell fraction depend on the copy number state whose inference reciprocally relies on the relative ordering of these alterations such that joint analysis can help resolve their ancestral relationship (Figure 1). To tackle this outstanding problem, El-Kebir et al. (2016) formulated the multi-state perfect phylogeny mixture deconvolution problem to infer clonal genotypes, clonal fractions, and phylogenies by simultaneously modeling single-nucleotide variants and copy number alterations from multi-region sequencing of individual tumors. Based on this framework, they present SPRUCE (Somatic Phylogeny Reconstruction Using Combinatorial Enumeration), an algorithm designed for this task. This new approach uses the concept of a ‘‘character’’ to represent the status of a variant in the genome.

Commonly, binary characters have been used to represent single-nucleotide variants— that is, the variant is present or absent. In contrast, El-Kebir et al. use multi-state characters to represent copy number alterations, which may be present in zero, one, two, or more copies in the genome.

SPRUCE outperforms existing methods on simulated data, yielding higher recall rates under a variety of scenarios. Moreover, it is more robust to noise in variant allele frequency estimates, which is a significant feature of tumor genome sequencing data. Importantly, El-Kebir and colleagues demonstrate that there is often an ensemble of phylogenetic trees consistent with the underlying data. This uncertainty calls for caution in deriving definitive conclusions about the evolutionary process from a single solution.”

End Quote
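
The quoted passage notes that the variant allele fraction (VAF) of a somatic mutation depends on the copy-number state of the clones that carry it, which is why SPRUCE models single-nucleotide variants and copy-number alterations jointly. The toy function below illustrates that dependence only; it is not the SPRUCE algorithm, and the clone fractions and copy numbers shown are hypothetical.

```python
# Toy illustration: expected VAF of an SNV given a mixture of cell populations,
# where each population has a fraction of the sample, a total copy number at the
# locus, and a number of mutated copies. This shows why VAF depends on copy-number
# state (the point made in the quoted passage); it is NOT the SPRUCE method.
from dataclasses import dataclass

@dataclass
class Population:
    fraction: float        # proportion of cells in the sample
    total_copies: int      # total copy number at the mutated locus
    mutated_copies: int    # copies carrying the SNV

def expected_vaf(populations):
    """Expected variant allele fraction, averaged over the cell mixture."""
    mutated = sum(p.fraction * p.mutated_copies for p in populations)
    total = sum(p.fraction * p.total_copies for p in populations)
    return mutated / total

# Hypothetical sample: normal contamination plus two tumor clones, one of which
# has amplified the mutated allele (a copy-number alteration on top of the SNV).
sample = [
    Population(fraction=0.40, total_copies=2, mutated_copies=0),  # normal cells
    Population(fraction=0.35, total_copies=2, mutated_copies=1),  # diploid clone with SNV
    Population(fraction=0.25, total_copies=4, mutated_copies=3),  # clone with amplification
]

print(f"expected VAF: {expected_vaf(sample):.3f}")
# Changing only the copy-number states changes the VAF even if clone fractions stay fixed.
```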

 

From Original Paper

Inferring Tumor Phylogenies from Multi-region Sequencing

Zheng Hu1,2 and Christina Curtis1,2,*

1Departments of Medicine and Genetics

2Stanford Cancer Institute

Stanford University School of Medicine, Stanford, CA 94305, USA

*Correspondence: cncurtis@stanford.edu

http://dx.doi.org/10.1016/j.cels.2016.07.007

Read Full Post »

Warfarin and Dabigatran, Similarities and Differences

Author and Curator: Danut Dragoi, PhD

 

What do anticoagulants do?

An anticoagulant helps your body control how fast your blood clots; therefore, it prevents clots from forming inside your arteries, veins or heart during certain medical conditions.

If you have a blood clot, an anticoagulant may prevent the clot from getting larger. It also may prevent a piece of the clot from breaking off and traveling to your lungs, brain or heart. The anticoagulant medication does not dissolve the blood clot. With time, however, this clot may dissolve on its own.

Blood tests you will need

The blood tests for clotting time are called prothrombin time (Protime, PT) and international normalized ratio (INR). These tests help determine if your medication is working. The tests are performed at a laboratory, usually once a week to once a month, as directed by your doctor. Your doctor will help you decide which laboratory you will go to for these tests.

The test results help the doctor decide the dose of warfarin (Coumadin) that you should take to keep a balance between clotting and bleeding.

Important things to keep in mind regarding blood tests include:

  • Have your INR checked when scheduled.
  • Go to the same laboratory each time. (There can be a difference in results between laboratories).
  • If you are planning a trip, talk with your doctor about using another laboratory while traveling.

Dosage

The dose of medication usually ranges from 1 mg to 10 mg once daily. The doctor will prescribe one strength and change the dose as needed (your dose may be adjusted with each INR).

The tablet is scored and breaks in half easily. For example: if your doctor prescribes a 5 mg tablet and then changes the dose to 2.5 mg (2½ mg), which is half the strength, you should break one of the 5 mg tablets in half and take the half-tablet. If you have any questions about your dose, talk with your doctor or pharmacist.

What warfarin (Coumadin) tablets look like

Warfarin is made by several different drug manufacturers and is available in many different shapes. Each color represents a different strength, measured in milligrams (mg). Each tablet has the strength imprinted on one side, and is scored so you can break it in half easily to adjust your dose as your doctor instructed.

https://my.clevelandclinic.org/health/drugs_devices_supplements/hic_Understanding_Coumadin

Today, on the basis of 4 clinical trials involving over 9,000 patients, PRADAXA is approved to treat blood clots in the veins of your legs (deep vein thrombosis, or DVT) or lungs (pulmonary embolism, or PE) in patients who have been treated with blood thinner injections, and to reduce the risk of them occurring again.

In these trials, PRADAXA was compared to warfarin or to placebo (sugar pills) for the treatment of DVT and PE patients.

https://www.pradaxa.com/pradaxa-vs-warfarin?gclid=CMaRq7al9ssCFUxZhgodZuoC5w

Warfarin (NB-which goes by the brand name Coumadin, see link in here) reduces the risk of stroke in patients with atrial fibrillation (NB- atrial fibrillation (also called AFib or AF) is a quivering or irregular heartbeat (arrhythmia) that can lead to blood clots, stroke, heart failure and other heart-related complications. Some people refer to AF as a quivering heart, see link here) but increases the risk of hemorrhage and is difficult to use.

Dabigatran is a new oral direct thrombin inhibitor (NB – direct thrombin inhibitors are a class of medication that act as anticoagulants by directly inhibiting the enzyme thrombin; some are in clinical use, while others are undergoing clinical development), see link in here.

Some large international clinical trials, see link in here, show that for patients with atrial fibrillation, dabigatran given at a dose of 110 mg was associated with rates of stroke and systemic embolism similar to those associated with warfarin, as well as lower rates of major hemorrhage. Dabigatran administered at a dose of 150 mg, as compared with warfarin, was associated with lower rates of stroke and systemic embolism but similar rates of major hemorrhage.

The picture below shows a deep vein thrombosis, which is a blood clot that forms inside a vein, usually deep within the leg. About half a million Americans get one every year, and up to 100,000 die because of it. The danger is that part of the clot can break off and travel through your bloodstream. It could get stuck in your lungs and block blood flow, causing organ damage or death, see link in here.

Blood Clot

Image SOURCE: http://www.webmd.com/heart-disease/guide/warfarin-other-blood-thinners

The behaviour of blood thinning drugs is dependent on their physico-chemical properties, and since a significant proportion of drugs contain ionisable centers, a knowledge of their pKa is essential, see link in here. (NB – pKa was introduced as an index to express the acidity of weak acids. For example, the Ka constant for acetic acid (CH3COOH) is 0.0000158 (= 10^-4.8), but the pKa is 4.8, which is a simpler expression. In addition, the smaller the pKa value, the stronger the acid, see link in here.) The pKa is defined as the negative log of the dissociation constant, see link in here:

pKa = -log10(Ka)              (1)

where the dissociation constant for a weak acid HA is defined thus:

Ka = [A-][H+] / [HA]

Most drugs have pKa in the range 0–12, and whilst it is possible to calculate pKa, it is desirable to experimentally measure the value for representative examples. There are a number of instruments capable of measuring pKa, such as the Sirius T3 instrument, see link in here.

Table 1 below shows the pKa values for warfarin, see link in here, and dabigatran, see link in here.

Table 1

==========================

Anticoagulant           pKa

warfarin                4.99

dabigatran              4.24        11.51*

==========================

* Dabigatran possesses both acidic and basic functionality. Both groups are ionized at blood pH, and the molecule exists as a zwitterionic structure, see link in here.

Considering the physico-chemical features of the anticoagulants used against blood clots is important for a better understanding of the de-blocking process within the veins when anticoagulants are utilized.
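
To connect the pKa values in Table 1 to the statement that dabigatran is zwitterionic at blood pH, the short sketch below applies the Henderson–Hasselbalch relationship at pH 7.4. It assumes that warfarin's 4.99 and dabigatran's 4.24 are acidic pKa values and that dabigatran's 11.51 belongs to its basic group, an assignment consistent with the footnote to Table 1 but not stated explicitly there; the output is an illustrative calculation, not measured data.

```python
# Fraction ionized at physiological pH from pKa, via Henderson-Hasselbalch.
# Assumptions (stated in the lead-in): warfarin 4.99 and dabigatran 4.24 are
# treated as acidic pKa values; dabigatran 11.51 as the pKa of its basic group.
BLOOD_PH = 7.4

def fraction_ionized_acid(pka, ph=BLOOD_PH):
    """Fraction of an acidic group present in the ionized (deprotonated) form."""
    return 1.0 / (1.0 + 10 ** (pka - ph))

def fraction_ionized_base(pka, ph=BLOOD_PH):
    """Fraction of a basic group present in the ionized (protonated) form."""
    return 1.0 / (1.0 + 10 ** (ph - pka))

print(f"warfarin (acid, pKa 4.99):       {fraction_ionized_acid(4.99):.4f} ionized")
print(f"dabigatran acidic group (4.24):  {fraction_ionized_acid(4.24):.4f} ionized")
print(f"dabigatran basic group (11.51):  {fraction_ionized_base(11.51):.4f} ionized")
# Both dabigatran groups come out essentially fully ionized at pH 7.4, consistent
# with the zwitterionic structure noted under Table 1.
```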

SOURCE

http://theochem.chem.rug.nl/publications/PDF/ft683.pdf

http://www.rsc.org/chemical-sciences-repository/articles/article/dr000000003197?doi=10.1039/c5ra04680g

http://pubs.rsc.org/en/content/articlelanding/2015/ra/c5ra11623f#!divAbstract

http://www.cambridgemedchemconsulting.com/resources/physiochem/pka.html

http://www.webmd.com/heart-disease/guide/warfarin-other-blood-thinners

https://www.google.com/#q=define+atrial+fibrillation

https://www.researchgate.net/profile/Lars_Wallentin/publication/26777612_Dabigatran_versus_Warfarin_in_Patients_with_Atrial_Fibrillation/links/02bfe50c8c2fa639c0000000.pdf


 

Other related articles published in this Open Access Online Scientific Journal, include the following:

Coagulation N=69

http://pharmaceuticalintelligence.com/?s=Coagulation

Peripheral Arterial Disease N=43

http://pharmaceuticalintelligence.com/?s=Peripheral

Antiarrhythmic drugs

http://pharmaceuticalintelligence.com/?s=Antiarrhythmic+drugs

A-Fib

http://pharmaceuticalintelligence.com/?s=a-fib

Electrophysiology N = 80

http://pharmaceuticalintelligence.com/?s=Electrophysiology

 

Read Full Post »

Huge Data Network Bites into Cancer Genomics

Larry H. Bernstein, MD, FCAP, Curator

LPBI

 

Closer to a Cure for Gastrointestinal Cancer

Suzanne Tracy, Editor-in-Chief, Scientific Computing and HPC Source
http://www.scientificcomputing.com/news/2015/11/closer-cure-gastrointestinal-cancer

In order to streamline workflows and keep pace with data-intensive discovery demands, CCS integrated its HPC environment with data capture and analytics capabilities, allowing data to move transparently between research steps, and driving discoveries such as a link between certain viruses and gastrointestinal cancers.

 

SANTA CLARA, CA — At the University of Miami’s Center for Computational Science (CCS), more than 2,000 internal researchers and a dozen expert collaborators across academic and industry sectors worldwide are working together in workflow management, data management, data mining, decision support, visualization and cloud computing. CCS maintains one of the largest centralized academic cyberinfrastructures in the country, which fuels vital and critical discoveries in Alzheimer’s, Parkinson’s, gastrointestinal cancer, paralysis and climate modeling, as well as marine and atmospheric science research.

In order to streamline workflows and keep pace with data-intensive discovery demands, CCS integrated its high performance computing (HPC) environment with data capture and analytics capabilities, allowing data to move transparently between research steps. To speed scientific discoveries and boost collaboration with researchers around the world, the center deployed high-performance DataDirect Networks (DDN) GS12K scale-out file storage. CCS now relies on GS12K storage to handle bandwidth-driven workloads while serving very high IOPS demand resulting from intense user interaction, which simplifies data capture and analysis. As a result, the center is able to capture, store and distribute massive amounts of data generated from multiple scientific models running different simulations on 15 Illumina HiSeq sequencers simultaneously on DDN storage. Moreover, number-crunching time for genome mapping and SNP calling has been reduced from 72 to 17 hours.

“DDN enabled us to analyze thousands of samples for the Cancer Genome Atlas, which amounts to nearly a petabyte of data,” explained Dr. Nicholas Tsinoremas, director of the Center for Computational Sciences at the University of Miami. “Having a robust storage platform like DDN is essential to driving discoveries, such as our recent study that revealed a link between certain viruses and gastrointestinal cancers. Previously, we couldn’t have done that level of computation.”

In addition to providing significant storage processing power to meet both high I/O and interactive processing requirements, CCS needed a flexible file system that could support large parallel and short serial jobs. The center also needed to address “data in flight” challenges that result from major data surges during analysis, and which often cause a 10x spike in storage. The system’s performance for genomics assembly, alignment and mapping is enabling CCS to support all its application needs, including the use of BWA and Bowtie for initial mapping, as well as SamTools and GATK for variant analysis and SNP calling.
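For readers less familiar with this kind of workflow, the sketch below shows, in schematic Python, the generic mapping-and-variant-calling sequence named above (BWA for read alignment, SAMtools for sorting and indexing, GATK for SNP/variant calling). The file names are hypothetical, the sketch assumes the reference genome has already been indexed and that the BAM carries read-group tags, exact flags vary by tool version, and this is not CCS's production pipeline.

```python
import subprocess

# Hypothetical inputs; real CCS runs process output from 15 Illumina HiSeq sequencers at scale.
REF = "reference.fa"           # assumed to be prepared (bwa index, samtools faidx, sequence dictionary)
READS_1 = "sample_R1.fastq"
READS_2 = "sample_R2.fastq"

def run(cmd):
    """Run one pipeline step and stop if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Map paired-end reads to the reference with BWA-MEM, writing SAM output.
with open("sample.sam", "w") as sam_out:
    subprocess.run(["bwa", "mem", REF, READS_1, READS_2], stdout=sam_out, check=True)

# 2. Coordinate-sort and index the alignments with SAMtools.
run(["samtools", "sort", "-o", "sample.sorted.bam", "sample.sam"])
run(["samtools", "index", "sample.sorted.bam"])

# 3. Call SNPs and indels with GATK's HaplotypeCaller.
run(["gatk", "HaplotypeCaller", "-R", REF, "-I", "sample.sorted.bam", "-O", "sample.vcf.gz"])
```

Steps like these, run in parallel across thousands of samples, are the kind of workload behind the mixed bandwidth-and-IOPS demands described above.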

“Our arrangement is to share data or make it available to anyone asking, anywhere in the world,” added Tsinoremas. “Now, we have the storage versatility to attract researchers from both within and outside the HPC community … we’re well-positioned to generate, analyze and integrate all types of research data to drive major scientific discoveries and breakthroughs.”

About DDN

DataDirect Networks is a big data storage supplier to data-intensive, global organizations. For more than 15 years, the company has designed, developed, deployed and optimized systems, software and solutions that enable enterprises, service providers, universities and government agencies to generate more value and to accelerate time to insight from their data and information, on premise and in the cloud. Organizations leverage DDN technology and the technical expertise of its team to capture, store, process, analyze, collaborate on and distribute data, information and content at the largest scale in the most efficient, reliable and cost-effective manner. DDN customers include financial services firms and banks, healthcare and life science organizations, manufacturing and energy companies, government and research facilities, and web and cloud service providers.

 

“Where DDN really stood out is in the ability to adapt to whatever we would need. We have both IOPS-centric storage and the deep, slower I/O pool at full bandwidth. No one else could do that.”

Joel P. Zysman

Director of High Performance Computing

Center for Computational Science at the University of Miami

The University of Miami maintains one of the largest centralized, academic, cyber infrastructures in the US, which is integral to addressing and solving major scientific challenges. At its Center for Computational Science (CCS), more than 2,000 researchers, faculty, staff and students across multiple disciplines collaborate on diverse and interdisciplinary projects requiring HPC resources.

With 50% of the center’s users coming from the University of Miami’s Miller School of Medicine, with ongoing projects at the Hussman Institute for Human Genomics, the explosion of next-generation sequencing has had a major impact on compute and storage demands. At CCS, the heavy I/O required to create four billion reads from one genome in a couple of days only intensifies when the data from those reads needs to be managed and analyzed.

Aside from providing sufficient storage power to meet both high I/O and interactive processing demands, CCS needed a powerful file system that was flexible enough to handle very large parallel jobs as well as smaller, shorter serial jobs. CCS also needed to address storage spikes of as much as 10X during analysis, so it was critical to scale and support petabytes of machine-generated data without adding a layer of complexity or creating inefficiencies.

Read their success story to learn how high-performance DDN® Storage I/O has helped the University of Miami:

  • Establish links between certain viruses and gastrointestinal cancers discovered with computation that were not possible before
  • Reduce genomics compute and analysis time from 72 to 17 hours
CHALLENGES

  • Diverse, interdisciplinary research projects required massive compute and storage power as well as integrated data lifecycle movement and management
  • Highly demanding I/O and heavy interactivity requirements from next-gen sequencing intensified data generation, analysis and management
  • Handle large parallel jobs and smaller, shorter serial jobs
  • Data surges during analysis created “data-in-flight” challenges

SOLUTION

An end-to-end, high performance DDN GRIDScaler® solution featuring a GS12K™ scale-out appliance with an embedded IBM® GPFS™ parallel file system

TECHNICAL BENEFITS

  • Centralized storage with an embedded file system makes it easy to add storage where needed—in the high-performance, high-transaction or slower storage pools—and then manage it all through a single pane of glass
  • DDN’s transparent data movement enables using one platform for data capture, download, analysis and retention
  • The ability to maintain an active archive of storage lets the center accommodate different types of analytics with varied I/O needs

Read Full Post »

Diet and Exercise

Writer and Curator: Larry H. Bernstein, MD, FCAP 

 

Introduction

In the last several decades there has been a transformation in the diet of Americans, and much debate about obesity, type 2 diabetes mellitus, hyperlipidemia, and the shift of medical practice toward a greater emphasis on preventive medicine. This comes at a time when Western countries account for a large portion of the obesity epidemic, which diverts attention from the larger burden of malnutrition in parts of Africa, Asia, and, to a greater extent, India. This does not mean that obesity or malnutrition is confined to any one part of the world. But there are factors at play involving social conditions, poverty, education, cognition, anxiety, eating behaviors, food preferences, food balance, and activities of daily living. The obesity epidemic also involves the development of serious long-term health problems, such as type 2 diabetes mellitus, sarcopenia, fracture risk, pulmonary disease (sleep apnea in particular), and cardiovascular and stroke risk. Nevertheless, this generation of Western society is also experiencing a longer life span than its predecessors. In this article I shall explore the published work on diet and exercise.

 

‘‘Go4Life’’ exercise counseling, accelerometer feedback, and activity levels in older people

Warren G. Thompson, CL Kuhle, GA Koepp, SK McCrady-Spitzer, JA Levine
Archives of Gerontology and Geriatrics 58 (2014) 314–319
http://dx.doi.org/10.1016/j.archger.2014.01.004

Older people are more sedentary than other age groups. We sought to determine if providing an accelerometer with feedback about activity and counseling older subjects using Go4Life educational material would increase activity levels. Participants were recruited from independent living areas within assisted living facilities and the general public in the Rochester, MN area. 49 persons aged 65–95 (79.5 ± 7.0 years) who were ambulatory but sedentary and overweight participated in this randomized controlled crossover trial for one year. After a baseline period of 2 weeks, group 1 received an accelerometer and counseling using Go4Life educational material (www.Go4Life.nia.nih.gov) for 24 weeks and the accelerometer alone for the next 24 weeks. Group 2 had no intervention for the first 24 weeks and then received an accelerometer and Go4Life-based counseling for 24 weeks. There were no significant baseline differences between the two groups. The intervention was not associated with a significant change in activity, body weight, % body fat, or blood parameters (p > 0.05). Older (80–93) subjects were less active than younger (65–79) subjects (p = 0.003). Over the course of the 48-week study, an increase in activity level was associated with a decline in % body fat (p = 0.008). Increasing activity levels benefits older patients. However, providing an accelerometer and a Go4Life-based exercise counseling program did not result in a 15% improvement in activity levels in this elderly population. Alternate approaches to exercise counseling may be needed in elderly people of this age range.

It is generally recommended that older adults be moderately or vigorously active for 150 min each week. A systematic review demonstrated that only 20–60% of older people are achieving this goal. These studies determined adherence to physical activity recommendations by questionnaire. Using NHANES data, it has been demonstrated that older people meet activity recommendations 62% of the time using a self-report questionnaire, compared to 9.6% of the time when measured by accelerometry. Thus, objective measures suggest that older people are falling even further short of the goal than previously thought. Most studies have measured moderate and vigorous activity. However, light activity or NEAT (non-exercise activity thermogenesis) also has an important effect on health. For example, increased energy expenditure was associated with lower mortality in community-dwelling older adults. More than half of the extra energy expenditure in the high energy expenditure group came from non-exercise (light) activity. In addition to reduced total mortality, increased light and moderate activity has been associated with better cognitive function, reduced fracture rate (Gregg et al., 1998), less cardiovascular disease, and weight loss in older people. A meta-analysis of middle-aged and older adults has demonstrated greater all-cause mortality with increased sitting time. Thus, any strategy that can increase activity (whether light or more vigorous) has the potential to save lives and improve quality of life for older adults. A variety of devices have been used to measure physical activity.

A tri-axial accelerometer measures movement in three dimensions. Studies comparing tri-axial accelerometers with uniaxial accelerometers and pedometers demonstrate that only certain tri-axial accelerometers provide a reliable assessment of energy expenditure. This is usually due to failure to detect light activity. Since light activity accounts for a substantial portion of older people’s energy expenditure, measuring activity with a questionnaire or measuring steps with a pedometer do not provide an accurate reflection of activity in older people.

A recent review concluded that there is only weak evidence that physical activity can be improved. Since increasing both light and moderate activity benefits older people, studies demonstrating that physical activity can be improved are urgently needed. Since accelerometry is the best way to accurately assess light activity, we performed a study to determine whether an activity counseling program, combined with an accelerometer that gives feedback on physical activity, can result in an increase in light and moderate activity in older people. We also sought to determine whether counseling and accelerometer feedback would result in weight loss or changes in % body fat, glucose, hemoglobin A1c, insulin, and fasting lipid profile.

The main results of the study are that both the experimental and control groups lost weight (about 1 kg) at 6 months (p = 0.04 and 0.02, respectively). The experimental group was less active at 6 months, but not significantly so, while the control group was significantly less active at 6 months (p = 0.006) than at baseline. The experimental group had a modest decline in cholesterol (p = 0.03) and an improvement in Get Up & Go time (p = 0.03), while the control group had a slight improvement in HgbA1c (p = 0.01). However, the main finding of the study was that there were no differences between the two groups on any of these variables. Thus, providing this group of older participants with an accelerometer and Go4Life-based counseling resulted in no increase in physical activity, no weight loss, and no change in glucose, lipids, blood pressure, or body fat. There were no differences within either group or between groups from 6 to 12 months on any of the variables (data not shown). While age was correlated with baseline activity, it did not affect activity change, indicating that younger participants did not respond to the program better than older participants. Performance on the Get Up & Go test and season of the year did not influence the change in activity. There were no differences in physical activity levels at 3 or 9 months.

There was a significant correlation (r = -0.38, p = 0.006) between change in activity and change in body fat over the course of the study. Those subjects (whether in the experimental or control group) who increased their activity over the course of the year were likely to have a decline in % body fat over the year, while those whose activity declined were likely to have increased % body fat. There was no correlation between change in activity and any of the other parameters, including weight and waist circumference (data not shown).
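To make the statistic behind this finding concrete, the following sketch (with synthetic data, not the trial's) shows how such a Pearson correlation between change in activity and change in % body fat would typically be computed.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Synthetic per-subject changes over a 48-week period (49 subjects, as in the trial);
# the negative slope is chosen only to mimic the direction of the reported association.
change_in_activity = rng.normal(0.0, 1.0, 49)
change_in_body_fat = -0.4 * change_in_activity + rng.normal(0.0, 1.0, 49)

r, p = pearsonr(change_in_activity, change_in_body_fat)
print(f"r = {r:.2f}, p = {p:.3f}")   # the paper itself reports r = -0.38, p = 0.006
```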

Older adults are the fastest growing segment of the population in the US, but few meet the minimum recommended 30 min of moderate activity on 5 days or more per week (Centers for Disease Control and Prevention, 2002). Our study found that within the geriatric population, activity declines as people age. We saw a 2.4% decline per year cross-sectionally. This finding agrees with a recent cohort study (Bachman et al., 2014). In that study, the annual decline accelerated with increasing age. Thus, there is a need to increase activity particularly in the oldest age groups. The United States Preventive Services Task Force concluded that the evidence that counseling improves physical activity is weak (Moyer and US Preventive Services Task Force, 2012). The American Heart Association reached similar conclusions (Artinian et al., 2010). Thus, new ways of counseling older patients to counter the natural decline in activity with age are urgently needed.

Applying health behavior theory to multiple behavior change: Considerations and approaches

Seth M. Noar, Melissa Chabot, Rick S. Zimmerman
Preventive Medicine 46 (2008) 275–280
http://dx.doi.org:/10.1016/j.ypmed.2007.08.001

Background.There has been a dearth of theorizing in the area of multiple behavior change. The purpose of the current article was to examine how health behavior theory might be applied to the growing research terrain of multiple behavior change. Methods. Three approaches to applying health behavior theory to multiple behavior change are advanced, including searching the literature for potential examples of such applications. Results. These three approaches to multiple behavior change include

(1) a behavior change principles approach;

(2) a global health/behavioral category approach, and

(3) a multiple behavioral approach.

Each approach is discussed and explicated and examples from this emerging literature are provided. Conclusions. Further study in this area has the potential to broaden our understanding of multiple behaviors and multiple behavior change. Implications for additional theory-testing and application of theory to interventions are discussed.

Many of the leading causes of death in the United States are behavior-related and thus preventable. While a number of health behaviors are a concern individually, increasingly the impact of multiple behavioral risks is being appreciated. As newer initiatives funded by the National Institutes of Health and Robert Wood Johnson Foundation begin to stimulate research in this important area, a critical question emerges: How can we understand multiple health behavior change from a theoretical standpoint? While multiple behavior change interventions are beginning to be developed and evaluated, to date there have been few efforts to garner a theory-based understanding of the process of multiple health behavior change. Given that so little theoretical work currently exists in this area, our main purpose is to advance the conversation on how health behavior theory can help us to achieve a greater understanding of multiple behavior change. The approaches discussed have implications for both theory-testing as well as intervention design.

A critical question that must be asked is whether there is a common set of principles of health behavior change that transcend individual health behaviors. This is an area where much data already exists, as health behavior theories have been tested across numerous health behaviors. The integration of findings from studies across diverse behavioral areas is not what it could be. Godin and Kok (1996) reviewed studies of the TPB applied to numerous health-related behaviors. Across seven categories of health behaviors, they found TPB components to offer similar prediction of intention but inconsistent prediction of behavior. They concluded that the nature of differing health behaviors may require additional constructs to be added to the TPB, such as actual (versus perceived) behavioral control. Prochaska et al. (1994) examined decisional balance across stages of change for 12 health-related behaviors. Similar patterns were found across nearly all of these health behaviors, with the “pros” of changing generally increasing across the stages, the “cons” decreasing, and a pro/con crossover occurring in the contemplation or preparation stages of change. Prochaska et al. (1994) concluded that clear commonalities exist across these differing health behaviors, which were examined in differing samples. Finally, Rosen (2000) examined change processes from the TTM across six behavioral categories, examining whether the trajectory of change processes is similar or different across stages of change in those health areas. He found that for smoking cessation, cognitive change processes were used more in earlier stages of change than behavioral processes, while for physical activity and dietary change, both categories of change processes increased together.

A second approach is the following: Rather than applying theoretical concepts to specific behaviors, such concepts might be applied at the general or global level. A general orientation toward health may not lead directly to specific health behaviors, but it may increase the chances of particular health-related attitudes, which may in turn lead to specific health behaviors. In fact, although Ajzen and Timko (1986) found general health attitudes to be poor predictors of behavior, such attitudes were significantly related to specific health attitudes and perceived behavioral control over specific behaviors. It is likely that when we consider multiple behaviors that we may discover an entire network of health attitudes and beliefs that are interrelated. In fact, studies of single behaviors essentially take those behaviors out of the multi-attitude and multi-behavioral context in which they are embedded. For instance, although attitudes toward walking may be a better predictor of walking behavior than attitudes toward physical activity, walking behavior is part of a larger “physical activity” behavioral category. While predicting that particular behavior may be best served by the specific measure, the larger category is both relevant and of interest. Thus, it may be that there are higher order constructs to be understood here.

A third approach is a multiple behavioral approach, one which focuses on the linkages among health behaviors. It shares some similarities with the approach just described, with the focus more strictly on how particular behaviors are linked to one another. In the related area of eHealth, interventions were superior to comparison groups in 21 of 41 (51%) studies (3 physical activity, 7 diet, 11 weight loss/physical activity and diet). Twenty-four studies had indeterminate results, and in four studies the comparison conditions outperformed eHealth interventions. Conclusions: Published studies of eHealth interventions for physical activity and dietary behavior change are in their infancy. Results indicated mixed findings related to the effectiveness of eHealth interventions. Interventions that feature interactive technologies need to be refined and more rigorously evaluated to fully determine their potential as tools to facilitate health behavior change.

 

A prospective evaluation of the Transtheoretical Model of Change applied to exercise in young people 

Patrick Callaghan, Elizabeth Khalil, Ioannis Morres
Intl J Nursing Studies 47 (2010) 3–12
http://dx.doi.org:/10.1016/j.ijnurstu.2009.06.013

Objectives: To investigate the utility of the Transtheoretical Model of Change in predicting exercise in young people. Design: A prospective study; assessments were done at baseline and at follow-up 6 months later. Method: Using stratified random sampling, 1055 Chinese high school pupils living in Hong Kong, 533 of whom were followed up at 6 months, completed measures of stage of change (SCQ), self-efficacy (SEQ), perceptions of the pros and cons of exercising (DBQ) and processes of change (PCQ). Data were analyzed using one-way ANOVA, repeated measures ANOVA and independent sample t tests.
Results: The utility of the TTM to predict exercise in this population is not strong; increases in self-efficacy and decisional balance discriminated between those remaining active at baseline and follow-up, but not in changing from an inactive state (e.g., Precontemplation or Contemplation) to an active state (e.g., Maintenance), as one would anticipate given the staging algorithm of the TTM.
Conclusion: The TTM is a modest predictor of future stage of change for exercise in young Chinese people. Where there is evidence that TTM variables may shape movement over time, self-efficacy, pros and behavioral processes of change appear to be the strongest predictors.

 

A retrospective study on changes in residents’ physical activities, social interactions, and neighborhood cohesion after moving to a walkable community

Xuemei Zhu,Chia-Yuan Yu, Chanam Lee, Zhipeng Lu, George Mann
Preventive Medicine 69 (2014) S93–S97
http://dx.doi.org/10.1016/j.ypmed.2014.08.013

Objective. This study examines changes in residents’ physical activities, social interactions, and neighborhood cohesion after they moved to a walkable community in Austin, Texas.
Methods. Retrospective surveys (N=449) were administered in 2013–2014 to collect pre- and post-move data about the outcome variables and relevant personal, social, and physical environmental factors. Walkability of each resident’s pre-move community was measured using the Walk Score. T tests were used to examine the pre–post move differences in the outcomes in the whole sample and across subgroups with different physical activity levels, neighborhood conditions, and neighborhood preferences before the move. Results. After the move, total physical activity increased significantly in the whole sample and all subgroups except those who were previously sufficiently active; lived in communities with high walkability, social interactions, or neighborhood cohesion; or had moderate preference for walkable neighborhoods. Walking in the community increased in the whole sample and all subgroups except those who were previously sufficiently active, moved from high-walkability communities, or had little to no preference for walkable neighborhoods. Social interactions and neighborhood cohesion increased significantly after the move in the whole sample and all subgroups.
Conclusion. This study explored potential health benefits of a walkable community in promoting physically and socially active lifestyles, especially for populations at higher risk of obesity. The initial result is promising, suggesting the need for more work to further examine the relationships between health and community design using pre–post assessments.

 

Application of the transtheoretical model to identify psychological constructs influencing exercise behavior: A questionnaire survey

Young-Ho Kim
Intl J Nursing Studies 44 (2007) 936–944
http://dx.doi.org:/10.1016/j.ijnurstu.2006.03.008

Background: Current research on exercise behavior has largely attempted to identify the relationship between psychological attributes and the initiation or adherence of exercise behavior, based on psychological theories. Limited data are available on the psychological predictors of exercise behavior in public health. Objectives: The present study examined the theorized association of TTM of behavior change constructs by stage of change for exercise behavior. Methods: A total of 228 college students selected from 2 universities in Seoul were surveyed. Four Korean-version questionnaires were used to identify the stage of exercise behavior and psychological attributes of adolescents. Data were analyzed by frequency analysis, MANOVA, correlation analysis, and discriminant function analysis.
Results: The multivariate F-test indicated that behavioral and cognitive processes of change, exercise efficacy, and pros differentiated participants across the stages of exercise behavior. Furthermore, exercise behavior was significantly correlated with the TTM constructs, and overall classification accuracy across the stages of change was 61.0%. Conclusions: The present study supports the internal and external validity of the Transtheoretical Model for explaining exercise behavior. As this study highlights, dissemination must not only increase awareness but also influence perceptions regarding theoretically based and practically important exercise strategies for public health professionals.
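The discriminant function analysis reported here, classifying participants into stages of change from their TTM construct scores, can be sketched with scikit-learn's LinearDiscriminantAnalysis. The data below are synthetic and the variable names hypothetical; the sketch only illustrates how an overall classification accuracy figure such as the 61.0% above is obtained.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 228  # the study surveyed 228 college students

# Synthetic TTM construct scores (e.g., processes of change, exercise efficacy, pros, cons)
# and a 5-level stage-of-change label; these are not the actual survey data.
X = rng.normal(0.0, 1.0, (n, 4))
stage = np.clip(np.round(2 + X @ np.array([0.6, 0.5, 0.4, 0.3]) + rng.normal(0.0, 1.0, n)),
                0, 4).astype(int)

# Fit the discriminant functions and estimate classification accuracy by cross-validation.
lda = LinearDiscriminantAnalysis()
accuracy = cross_val_score(lda, X, stage, cv=5).mean()
print(f"cross-validated classification accuracy: {accuracy:.1%}")
```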

 

 

Does more education lead to better health habits? Evidence from the school reforms in Australia?

Jinhu Li, Nattavudh Powdthavee
Social Science & Medicine 127 (2015) 83-91
http://dx.doi.org/10.1016/j.socscimed.2014.07.021

The current study provides new empirical evidence on the causal effect of education on health-related behaviors by exploiting historical changes in the compulsory schooling laws in Australia. Since World War II, Australian states increased the minimum school leaving age from 14 to 15 in different years. Using differences in the laws regarding minimum school leaving age across different cohorts and across different states as a source of exogenous variation in education, we show that more education improves people’s diets and their tendency to engage in more regular exercise and drinking moderately, but not necessarily their tendency to avoid smoking and to engage in more preventive health checks. The improvements in health behaviors are also reflected in the estimated positive effect of education on some health outcomes. Our results are robust to alternative measures of education and different estimation methods.
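The identification strategy described here is essentially a two-way fixed-effects (difference-in-differences style) design: staggered increases in the minimum school-leaving age across states and cohorts provide the exogenous variation in education. The sketch below illustrates the reduced-form version of that idea on a synthetic dataset; all variable names, coefficients, and the adoption rule are hypothetical and are not taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000

# Synthetic individuals identified by state and birth cohort.
df = pd.DataFrame({
    "state": rng.integers(0, 6, n),
    "cohort": rng.integers(1940, 1970, n),
})
# Hypothetical staggered adoption: the higher leaving age binds for later cohorts,
# with the cut-off cohort differing by state.
df["law"] = (df["cohort"] > 1945 + 3 * df["state"]).astype(int)
df["years_schooling"] = 9 + df["law"] + rng.normal(0, 1, n)
df["regular_exercise"] = (0.05 * df["years_schooling"] + rng.normal(0, 1, n) > 0.5).astype(int)

# Reduced form: regress the health behaviour on the law dummy with state and cohort fixed effects.
fe_model = smf.ols("regular_exercise ~ law + C(state) + C(cohort)", data=df).fit()
print(fe_model.params["law"])
```

The paper uses the law changes as a source of exogenous variation in completed schooling; the reduced-form regression above is only the simplest illustration of that design.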

Read Full Post »

Anorexia Nervosa and Related Eating Disorders

Writer and Curator: Larry H. Bernstein, MD, FCAP 

 

Introduction

Anorexia nervosa is a stress-related disorder that occurs mainly in women. It is closely related to bulimia and to self-esteem, or to a preoccupation with how the individual would like to see themselves. It is not necessarily driven by conscious motive, but lies in midbrain activities that govern hormonal activity and social behavior.

 

Eating disorders

Christopher G Fairburn, Paul J Harrison
Lancet 2003; 361: 407–16

Eating disorders are an important cause of physical and psychosocial morbidity in adolescent girls and young adult women. They are much less frequent in men. Eating disorders are divided into three diagnostic categories: anorexia nervosa, bulimia nervosa, and the atypical eating disorders. However, the disorders have many features in common and patients frequently move between them, so for the purposes of this Seminar we have adopted a transdiagnostic perspective. The cause of eating disorders is complex and badly understood. There is a genetic predisposition, and certain specific environmental risk factors have been implicated. Research into treatment has focused on bulimia nervosa, and evidence-based management of this disorder is possible. A specific form of cognitive behavior therapy is the most effective treatment, although few patients seem to receive it in practice. Treatment of anorexia nervosa and atypical eating disorders has received remarkably little research attention.

Eating disorders are of great interest to the public, of perplexity to researchers, and a challenge to clinicians. They feature prominently in the media, often attracting sensational coverage. Their cause is elusive, with social, psychological, and biological processes all seeming to play a major part, and they are difficult to treat, with some patients actively resisting attempts to help them.

Anorexia nervosa and bulimia nervosa are united by a distinctive core psychopathology, which is essentially the same in female and male individuals: patients overevaluate their shape and weight. Whereas most of us assess ourselves on the basis of our perceived performance in various domains—eg, relationships, work, parenting, sporting prowess—patients with anorexia nervosa or bulimia nervosa judge their self-worth largely, or even exclusively, in terms of their shape and weight and their ability to control them. Most of the other features of these disorders seem to be secondary to this psychopathology and to its consequences—for example, self-starvation. Thus, in anorexia nervosa there is a sustained and determined pursuit of weight loss and, to the extent that this pursuit is successful, this behavior is not seen as a problem. Indeed, these patients tend to view their low weight as an accomplishment rather than an affliction. In bulimia nervosa, equivalent attempts to control shape and weight are undermined by frequent episodes of uncontrolled overeating (binge eating), with the result that patients often describe themselves as failed anorexics. The core psychopathology has other manifestations; for example, many patients mislabel certain adverse physical and emotional states as feeling fat, and some repeatedly scrutinize aspects of their shape, which could contribute to them overestimating their size.

Panel 1: Classification and diagnosis of eating disorders

Definition of an eating disorder

  • There is a definite disturbance of eating habits or weight- control behavior
  • Either this disturbance, or associated core eating disorder features, results in a clinically significant impairment of physical health or psychosocial functioning (core eating disorder features comprise the disturbance of eating and any associated over-evaluation of shape or weight)
  • The behavioral disturbance should not be secondary to any general medical disorder or to any other psychiatric condition

Classification of eating disorders

  • Anorexia nervosa
  • Bulimia nervosa
  • Atypical eating disorders (or eating disorder not otherwise specified)

Principal diagnostic criteria

  • Anorexia nervosa
  1. Over-evaluation of shape and weight—ie, judging self-worth largely, or exclusively, in terms of shape and weight
  2. Active maintenance of an unduly low bodyweight—eg, body-mass index ≤17.5 kg/m2
  3. Amenorrhea in post-menarche females who are not taking an oral contraceptive. The value of the amenorrhea criterion can be questioned, since most female patients who meet the other two diagnostic criteria are amenorrheic, and those who menstruate seem to resemble closely those who do not
  • Bulimia nervosa
  1. Over-evaluation of shape and weight—ie, judging self-worth largely, or exclusively, in terms of shape and weight
  2. Recurrent binge eating—i.e., recurrent episodes of uncontrolled overeating
  3. Extreme weight-control behavior—e.g., strict dietary restriction, frequent self-induced vomiting or laxative misuse
  4. Diagnostic criteria for anorexia nervosa are not met
  • Atypical eating disorders

Eating disorders of clinical severity that do not conform to the diagnostic criteria for anorexia nervosa or bulimia nervosa

Research into the pathogenesis of the eating disorders has focused almost exclusively on anorexia nervosa and bulimia nervosa. There is undoubtedly a genetic predisposition and a range of environmental risk factors, and there is some information with respect to the identity and relative importance of these contributions. However, virtually nothing is known about the individual causal processes involved, or about how they interact and vary across the development and maintenance of the disorders.

 

Panel 3: Main risk factors for anorexia nervosa and bulimia nervosa

  • General factors
  1. Female
  2. Adolescence and early adulthood
  3. Living in a Western society
  • Individual-specific factors

Family history

  • Eating disorder of any type
  • Depression
  • Substance misuse, especially alcoholism (bulimia nervosa)
  • Obesity (bulimia nervosa)

Premorbid experiences

  • Adverse parenting (especially low contact, high expectations, parental discord)
  • Sexual abuse
  • Family dieting
  • Critical comments about eating, shape, or weight from family and others
  • Occupational and recreational pressure to be slim

Premorbid characteristics

  • Low self-esteem
  • Perfectionism (anorexia nervosa and to a lesser extent bulimia nervosa)
  • Anxiety and anxiety disorders
  • Obesity (bulimia nervosa)
  • Early menarche (bulimia nervosa)

There has been extensive research into the neurobiology of eating disorders. This work has focused on neuropeptide and monoamine (especially 5-HT) systems thought to be central to the physiology of eating and weight regulation. Of the various central and peripheral abnormalities reported, many are likely to be secondary to the aberrant eating and associated weight loss. However, some aspects of 5-HT function remain abnormal after recovery, leading to speculation that there is a trait monoamine abnormality that might predispose to the development of eating disorders or to associated characteristics such as perfectionism. Furthermore, normal dieting in healthy women alters central 5-HT function, providing a potential mechanism by which eating disorders might be precipitated in women vulnerable for other reasons.

Specific psychological theories have been proposed to account for the development and maintenance of eating disorders. Most influential in terms of treatment have been cognitive behavioral theories. In brief, these theories propose that the restriction of food intake that characterizes the onset of many eating disorders has two main origins, both of which may operate. The first is a need to feel in control of life, which gets displaced onto controlling eating. The second is over-evaluation of shape and weight in those who have been sensitized to their appearance. In both instances, the resulting dietary restriction is highly reinforcing. Subsequently, other processes begin to operate and serve to maintain the eating disorder.

 

Depression, coping, hassles, and body dissatisfaction: Factors associated with disordered eating

Rose Marie Ward, M. Cameron Hay
Eating Behaviors 17 (2015) 14–18
http://dx.doi.org/10.1016/j.eatbeh.2014.12.002

The objective was to explore what predicts first-year college women’s disordered eating tendencies when they arrive on campus. The 215 first-year college women completed the surveys within the first 2 weeks of classes. A structural model examined how much the Helplessness, Hopelessness, Haplessness Scale, the Brief COPE, the Brief College Student Hassle Scale, and the Body Shape Questionnaire predicted eating disordered tendencies (as measured by the Eating Attitudes Test). The Body Shape Questionnaire, the Helplessness, Hopelessness, Haplessness Scale (inversely), and the Denial subscale of the Brief COPE significantly predicted eating disorder tendencies in first-year college women. In addition, the Planning and Self-Blame subscales of the Brief COPE and the Helplessness, Hopelessness, Haplessness Scale predicted the Body Shape Questionnaire. In general, higher levels on the Helplessness, Hopelessness, Haplessness Scale and higher levels on the Brief College Student Hassle Scale related to higher levels on the Brief COPE. Coping seems to remove the direct path from stress and depression to disordered eating and body dissatisfaction.

Eating disorders and disordered eating on college campuses are a pervasive problem. Research estimates that approximately 8–13.5% of college women meet the criteria for clinically diagnosed eating disorders such as anorexia nervosa, bulimia nervosa, or eating disorders not otherwise specified. In addition, negative moods and stress seem to be related to eating disorders. Diagnosable eating disorders emerge in the broader context of disordered eating, that is, engaging in practices such as restricting calories, eating less fat, skipping meals, using nonprescription diet pills, using laxatives, or inducing vomiting. Whereas disordered eating is broadly associated with the dynamics of human development in adolescence in the United States and the socio-cultural pressure to be thin, college environments may particularly predispose young women to disordered eating. In a national survey, 57% of female college students reported trying to lose weight, while only 38% of female college students categorized themselves as overweight.

The mean for the overall EAT scale was 8.89 (SD = 9.26, mode = 2, median = 6, range 0 to 60). Over 13% (n = 22) of the sample met the criteria for potential eating disorders with overall scores of 20 or greater. One primary model was tested using the quantitative measurement data. The model fit the data: χ2(df = 72, n = 191) = 89.33, p = .08, CFI > .99, TLI = .99, and RMSEA = .035.

Note: Only significant paths shown; *p < .05; **p < .01; ***p < .001; HHH = Helplessness, Hopelessness, Haplessness Scale; Hassles = Brief College Student Hassle Scale; EAT = Eating Attitudes Test-26; BSQ = Body Satisfaction Questionnaire; CFI = Comparative Fit Index; TLI = Tucker-Lewis Index; RMSEA = Root Mean Squared Error of Approximation.
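The reported fit can be partly cross-checked by hand. RMSEA is derived from the model chi-square as RMSEA = sqrt(max(χ² − df, 0) / (df · (N − 1))); the sketch below applies this to the values reported above (a discrepancy in the third decimal place can arise because some programs divide by N rather than N − 1).

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Squared Error of Approximation computed from the model chi-square."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Values reported for the structural model: chi-square(72) = 89.33 with n = 191.
print(f"RMSEA = {rmsea(89.33, 72, 191):.3f}")   # ~0.036 here; the paper reports .035
```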

Figure: Structural modeling predicting eating disorder tendencies (only significant paths shown; abbreviations as in the note above).

By identifying the risk factors through research, interventions can be developed that empower people to take control of their own eating behavior. This kind of intervention is supported by the finding that those students with more agentive, active coping styles, or who did not report frequent experiences of helplessness, haplessness, and hopelessness were less likely to have disordered eating behaviors. Whereas active coping has been associated with lower disordered eating in some studies (e.g., Ball & Lee, 2000), others suggest a more complicated relationship between denial or avoidant coping and disordered eating.

 

The cognitive behavioral model for eating disorders: A direct evaluation in children and adolescents with obesity

Veerle Decaluwe, Caroline Braet
Eating Behaviors 6 (2005) 211–220
http://dx.doi.org:/10.1016/j.eatbeh.2005.01.006

Objective: The cognitive behavioural model of bulimia nervosa [Fairburn, Cooper, & Cooper (1986). The clinical features and maintenance of bulimia nervosa. In K.D. Brownell and J.P. Foreyt (Eds.), Handbook of eating disorders: physiology, psychology and treatment of obesity, anorexia and bulimia (pp. 389–404). New York: Basic Books] provides the theoretical framework for cognitive behavior therapy of Bulimia Nervosa. For a long time it was assumed that the model can also be used to understand the mechanism of binge eating among obese individuals. The present study aimed to test whether the specific hypotheses derived from the cognitive behavioral theory of bulimia nervosa are also valid for children and adolescents with obesity. Method: The prediction of the model was tested using structural equation modeling. Data were collected from 196 children and adolescents. Results: In line with the model, the results suggest that a lower self-esteem predicts concerns about eating, weight and shape, which in turn predict dietary restraint, which in turn is predictive of binge eating.
Discussion: The findings suggest that the mechanisms specified in the model of bulimia nervosa are also operational among obese youngsters. The cognitive behavioral model of Bulimia Nervosa (BN), outlined by Fairburn, Cooper, and Cooper (1986), provides the theoretical framework for cognitive behavior therapy of BN (Fairburn, Marcus, & Wilson, 1993; Wilson, Fairburn, & Agras, 1997). According to this model, over-evaluation of eating, weight and shape plays a central role in the maintenance of BN. It is assumed that over-concern in combination with a low self-esteem can lead to dietary restraint (e.g. strict dieting and other weight-control behavior). However, the rigid and unrealistic dietary rules are difficult to follow, and the eating behavior is seen as a failure. Moreover, minor dietary slips are considered as evidence of lack of control and can lead to an all-or-nothing reaction in which all efforts to control eating are abandoned. This condition makes people vulnerable to binge eating. In order to minimize weight gain as a result of overeating, some patients practice compensatory purging (compensatory vomiting or laxative misuse).

The present study aimed to directly evaluate the model among a population of children and adolescents suffering from obesity. It is justified to study this model in a group at risk. Binge eating is not restricted to adulthood and is recognized among children with obesity as well (Decaluwé & Braet, 2003). Even in childhood, associated eating and shape concerns and comorbid psychopathology are manifest. Until now, little is known about how the risk factors for BED operate. A case-control study by Fairburn et al. (1998) reported a number of adverse factors in childhood carrying a higher risk of developing BED, including negative self-evaluation, parental depression, adverse experiences (sexual or physical abuse and parental problems), overweight and repeated exposure to negative comments about shape, weight and eating. Moreover, it seems that childhood obesity is not only a risk factor for developing BED, but also one of the risk factors for the development of BN (Fairburn, Welch, Doll, Davies, & O’Connor, 1997). If Fairburn’s model is able to predict binge eating in an obese population, we can discover how the risk factors are related to one another and how they operate to predict disordered eating among obese youngsters.

To conclude, in the present study we were interested in whether the cognitive behavioral theory would predict disordered eating in a young obese population. Because the study focuses on subjects at risk for developing binge-eating problems, BED or BN, we considered the cognitive behavioral theory as a risk factor model for eating disorders rather than a model for the maintenance of eating disorders.

2. Method

2.1. Design

The prediction of the models was evaluated using structural equation modeling (LISREL 8.50; Jöreskog & Sörbom, 2001). The dependent variables were binge eating, over-evaluation of eating, shape and weight, and dietary restraint. The independent variable was self-esteem. Purging behavior was not included in the structural equation modeling since binge eating among children occurs in the absence of compensatory behavior. Next, it is worth noting that the concept of self-esteem is implicit in the original cognitive model of BN. In order to compare the present research with the study of Byrne and McLean (2002), self-esteem was included in the evaluation of the model.

A sample of 196 children and adolescents with obesity (78 boys and 118 girls) between the ages of 10 and 16 participated in the study (M=12.73 years, SD=1.75). All subjects were seeking help for obesity. The sample consisted of children seeking inpatient or outpatient treatment. All children seeking inpatient or outpatient treatment between July 1999 and December 2001 were invited to participate. The response rate was 72%. Children younger than 10 or older than 16 and mentally retarded children were excluded from the study. All participating children obtained a diagnosis of primary obesity. The group had a mean overweight of 172.69% (SD=27.09) with a range of 120–253%. The study was approved by the local research ethics committee. The subjects were visited at their homes before they entered into treatment. Informed consent was obtained from both the children and their parents. Two subjects (1%), both female, met the full diagnostic criteria for BED and 18 subjects (9.2%) experienced at least one binge-eating episode over the previous three months (overeating with loss of control), but did not endorse all of the other DSM-IV criteria that are required for a diagnosis of BED.


A two-step procedure was followed to construct the measurement model. We first conducted a confirmatory factor analysis on the variance–covariance matrix of the items of the exogenous construct (independent latent variable) “self-esteem”. The construct “self-esteem” is composed of 5 items of the Global Self-Worth subscale of the SPPA. Goodness-of-fit statistics were generated by the analysis. Items with poor loading (absolute t-value < 1.96) were removed. This resulted in a satisfactory model, χ2(2) = 6.23, p = 0.04, GFI = 0.97, AGFI = 0.87, after omitting 1 item. The parameter estimates between the observed items and the latent variable ranged from 0.49 to 0.88.

Self-esteem was highly negatively correlated with over-evaluation of eating, weight and shape (standardized γ = -0.59, t = -5.05), indicating that higher levels of concerns about eating, weight and shape were associated with a lower self-esteem. Over-evaluation of eating, weight and shape, in turn, was shown to be significantly related to dietary restraint (standardized β = 0.70, t = 2.71), indicating that more concerns about eating, weight or shape were associated with higher levels of dietary restraint. Finally, dietary restraint was significantly associated with binge eating (standardized β = 0.45, t = 2.14), indicating that higher levels of dietary restraint were associated with a higher level of binge eating. The feedback path from binge eating to over-evaluation of eating, weight and shape was not significant. Overall, the results appeared to suggest that a lower self-esteem predicts concerns over eating, weight and shape, which in turn predict dietary restraint. This would then be predictive of binge eating.

To our knowledge, this was the first study that directly evaluated the CBT model of BN among children. Overall, the model was found to be a good fit of the data. The main predictions of the model were confirmed. We can conclude that the CBT model provides a relatively valid explanation of the prediction of binge-eating problems in a young obese sample. Three findings supported the model and one finding did not confirm the model.

First, in line with the model, the construct self-esteem was a predictor of the over-evaluation of eating, weight and shape. This finding is also consistent with findings of Byrne and McLean (2002) and previous research in children and adolescents, which also found an association between over-concern with weight and shape and a lower self-esteem.

Second, the over-evaluation of eating, weight and shape, in turn, was a direct predictor of dietary restraint. Our findings were in line with prospective studies that found that thin-ideal internalization and body dissatisfaction (components of the over-evaluation of shape and weight) had a significant effect on dieting. Our findings also support the cross sectional study of Womble et al. (2001), who found a direct association between body dissatisfaction and dietary restraint among obese women. As in adults, children seem to respond in the same manner by dieting to lose weight. To our knowledge, the relationship between over-evaluation and dietary restraint has never been explored before among children with obesity.

Third, in accordance with the CBT model of BN, the key pathway between dietary restraint and binge eating was confirmed: higher levels of dietary restraint were associated with higher rates of binge eating. It seems that the subjects of this study were not able to maintain their dietary restraint.

 

Transdiagnostic Theory and Application of Family-Based Treatment for Youth With Eating Disorders

Katharine L. Loeb, James Lock, Rebecca Greif, Daniel le Grange
Cognitive and Behavioral Practice 19 (2012) 17-30

This paper describes the transdiagnostic theory and application of family-based treatment (FBT) for children and adolescents with eating disorders. We review the fundamentals of FBT, a transdiagnostic theoretical model of FBT and the literature supporting its clinical application, adaptations across developmental stages and the diagnostic spectrum of eating disorders, and the strengths and challenges of this approach, including its suitability for youth. Finally, we report a case study of an adolescent female with eating disorder not otherwise specified (EDNOS) for whom FBT was effective. We conclude that FBT is a promising outpatient treatment for anorexia nervosa, bulimia nervosa, and their EDNOS variants. The transdiagnostic model of FBT posits that while the etiology of an eating disorder is unknown, the pathology affects the family and home environment in ways that inadvertently allow for symptom maintenance and progression. FBT directly targets and resolves family level variables,  including secrecy, blame, internalization of illness, and extreme active or passive parental responses to the eating disorder. Future research will test these mechanisms, which are currently theoretical.

 

The Evolution of “Enhanced” Cognitive Behavior Therapy for Eating Disorders: Learning From Treatment Nonresponse

Zafra Cooper and Christopher G. Fairburn
Cognitive and Behavioral Practice 18 (2011) 394–402

In recent years there has been widespread acceptance that cognitive behavior therapy (CBT) is the treatment of choice for bulimia nervosa. The cognitive behavioral treatment of bulimia nervosa (CBT-BN) was first described in 1981. Over the past decades the theory and treatment have evolved in response to a variety of challenges. The treatment has been adapted to make it suitable for all forms of eating disorder—thereby making it “transdiagnostic” in its scope— and treatment procedures have been refined to improve outcome. The new version of the treatment, termed enhanced CBT (CBT-E) also addresses psychopathological processes “external” to the eating disorder, which, in certain subgroups of patients, interact with the disorder itself. In this paper we discuss how the development of this broader theory and treatment arose from focusing on those patients who did not respond well to earlier versions of the treatment.

In recent years there has been widespread acceptance that cognitive behavior therapy (CBT) is the treatment of choice for bulimia nervosa (National Institute for Health and Clinical Excellence, 2004; Wilson, Grilo, & Vitousek, 2007; Shapiro et al., 2007). The cognitive behavioral treatment of bulimia nervosa (CBT-BN) was first described in 1981 (Fairburn). Several years later, Fairburn (1985) described further procedural details along with a more complete exposition of the theory upon which the treatment was based (1986). This theory has since been extensively studied and the treatment derived from it, CBT-BN (Fairburn et al., 1993), has been tested in a series of treatment trials (e.g., Agras, Crow, et al., 2000; Agras, Walsh, et al., 2000; Fairburn, Jones, et al., 1993). A detailed treatment manual was published in 1993 (Fairburn, Jones, et al.). In 1997 a supplement to the manual was published (Wilson, Fairburn, & Agras) and the theory was elaborated in the same year (Fairburn).

According to the cognitive behavioral theory of bulimia nervosa, central to the maintenance of the disorder is the patient’s over-evaluation of shape and weight, the so-called “core psychopathology” [Fig. 1, not shown: a schematic of the core eating disorder maintaining mechanisms (modified from Fairburn, Cooper, & Shafran, 2003)]. Most other features can be understood as stemming directly from this psychopathology, including the dietary restraint and restriction, the other forms of weight-control behavior, the various forms of body checking and avoidance, and the preoccupation with thoughts about shape, weight, and eating (Fairburn, 2008).

The only feature of bulimia nervosa that is not obviously a direct expression of the core psychopathology is binge eating. The cognitive behavioral theory proposes that binge eating is largely a product of a form of dietary restraint (attempts to restrict eating), which may or may not be accompanied by dietary restriction (actual undereating). Rather than adopting general guidelines about how they should eat, patients try to adhere to multiple demanding, and highly specific, dietary rules and tend to react in an extreme and negative fashion to the (almost inevitable) breaking of these rules.

A substantial body of evidence supports CBT-BN, and the findings indicate that CBT-BN is the leading treatment. However, at best, half the patients who start treatment make a full and lasting response. Between 30% and 50% of patients cease binge eating and purging, and a further proportion show some improvement, while others drop out of treatment or fail to respond. These findings led us to ask the question, “Why aren’t more people getting better?”

In the light of our experience with patients, we proposed that in certain patients one or more of four additional maintaining processes interact with the core eating disorder maintaining mechanisms and that when this occurs they constitute further obstacles to change. The first of these maintaining mechanisms concerns the influence of extreme perfectionism (“clinical perfectionism”). The second concerns difficulty coping with intense mood states (“mood intolerance”). Two other mechanisms concern the impact of unconditional and pervasive low self-esteem (“core low self-esteem”), and marked interpersonal problems (“interpersonal difficulties”).  This new theory represents an extension of the original theory illustrated in Fig. 1. Fig. 2 shows in schematic form both the core maintaining mechanisms and the four hypothesized additional mechanisms.

This program of work illustrates the value of focusing attention on those patients who benefit least from treatment. Doing so resulted in the enhanced form of CBT, which appears to be markedly more effective and more useful (in terms of the full range of patients treated) than its forerunner, CBT-BN.

 

A novel measure of compulsive food restriction in anorexia nervosa: Validation of the Self-Starvation Scale (SS)

Lauren R. Godier, Rebecca J. Park
Eating Behaviors 17 (2015) 10–13
http://dx.doi.org/10.1016/j.eatbeh.2014.12.004

The characteristic relentless self-starvation behavior seen in Anorexia Nervosa (AN) has been described as evidence of compulsivity, with increasing suggestion of transdiagnostic parallels with addictive behavior. There is a paucity of standardized self-report measures of compulsive behavior in eating disorders (EDs). Measures that index the concept of compulsive self-starvation in AN are needed to explore the suggested parallels with addictions. With this aim, a novel measure of self-starvation was developed (the Self-Starvation Scale, SS). 126 healthy participants and 78 individuals with experience of AN completed the new measure along with existing measures of eating disorder symptoms, anxiety, and depression. Initial validation in the healthy sample indicated good reliability and construct validity, and incremental validity in predicting eating disorder symptoms. The psychometric properties of the SS scale were replicated in the AN sample. The ability of this scale to predict ED symptoms was particularly strong in individuals currently suffering from AN. These results suggest the SS may be a useful index of compulsive food restriction in AN. The concept of ‘starvation dependence’ in those with eating disorders, as a parallel with addiction, may be of clinical and theoretical importance.

The compulsive nature of Anorexia Nervosa (AN) has increasingly been compared to the maladaptive cycle of compulsive drug-seeking behavior (Barbarich-Marsteller, Foltin, & Walsh, 2011). Individuals with AN engage in persistent weight loss behavior, such as extreme self-starvation and excessive exercise, to modulate anxiety associated with ingestion of food, in a similar way to the use of mood altering drugs in substance dependence. Substance dependence is described as a persistent state in which there is a lack of control over compulsive drug-seeking, and lack of regard for the risk of serious negative consequences, which may parallel the relentlessness with which individuals with AN pursue weight loss despite profoundly negative physiological and psychological consequences.

Considering the parallels suggested between AN and substance dependence, it may be useful to use the concept of ‘dependence’ on starvation when measuring compulsive behaviors in eating disorders (EDs) such as AN. For that reason, a novel measure of self-starvation, the Self-Starvation Scale (SS) was derived, in part by adapting the Yale Food Addiction Scale (YFAS) (Gearhardt, Corbin, & Brownell, 2009) for this construct.

The set of online questionnaires was created using Bristol Online Surveys (BOS; Institute of Learning and Research Technology, University of Bristol, UK). In addition to the new measure described below, ED symptoms were measured using the Eating Disorder Examination-Questionnaire (EDE-Q) (Fairburn & Beglin, 2008), and the Clinical Impairment Assessment (CIA) (Bohn & Fairburn, 2008). Depression symptoms were measured using the Patient Health Questionnaire-9 (PHQ-9) (Kroenke, Spitzer, & Williams, 2001). Anxiety symptoms were measured using the Generalized Anxiety Disorder Assessment-7 (GAD-7) (Spitzer, Kroenke, Williams, & Lowe, 2006). The mirror image concept of ‘food addiction’ was measured using the YFAS (Gearhardt et al., 2009). Excessive exercise was measured using the Compulsive Exercise Test (CET) (Taranis, Touyz, & Meyer, 2011). Impulsivity was measured using the Barratt Impulsivity Scale-11 (BIS-11) (Patton, Stanford, & Barratt, 1995). Substance abuse symptoms were measured using the Leeds Dependence Questionnaire (LDQ) (Raistrick et al., 1994).
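To make the validation analyses summarized above concrete, here is a minimal sketch (not the authors’ code; the DataFrame columns such as ss_1…ss_10, ss_total, edeq_global, gad7 and phq9 are hypothetical names) of how internal-consistency reliability and incremental validity over anxiety and depression are commonly computed:

# Minimal sketch, not the authors' analysis code.  Assumes a pandas DataFrame
# with hypothetical columns: SS items "ss_1".."ss_10", the total "ss_total",
# "edeq_global" (ED symptoms), "gad7" (anxiety), and "phq9" (depression).
import pandas as pd
import statsmodels.api as sm

def cronbach_alpha(items: pd.DataFrame) -> float:
    # Internal-consistency reliability of a set of scale items.
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def incremental_r2(df: pd.DataFrame) -> float:
    # Variance in ED symptoms explained by the SS total beyond anxiety and depression.
    base = sm.OLS(df["edeq_global"], sm.add_constant(df[["gad7", "phq9"]])).fit()
    full = sm.OLS(df["edeq_global"],
                  sm.add_constant(df[["gad7", "phq9", "ss_total"]])).fit()
    return full.rsquared - base.rsquared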

The results of this study suggest that using the criteria of dependence to capture compulsive self-starvation behavior in AN may have some validity. The utility of these criteria in capturing compulsive behavior across disorders, including AN, suggests that compulsivity as a construct of behavior may have transdiagnostic application (Godier & Park, 2014; Robbins, Gillan, Smith, de Wit, & Ersche, 2012), on which disorder-specific themes are superimposed.

Read Full Post »

The Evolution of Clinical Chemistry in the 20th Century

Curator: Larry H. Bernstein, MD, FCAP

Article ID #164: The Evolution of Clinical Chemistry in the 20th Century. Published on 12/13/2014

WordCloud Image Produced by Adam Tubman

This is a subchapter in the series on developments in diagnostics in the period from 1880 to 1980.

Otto Folin: America’s First Clinical Biochemist

(Extracted from Samuel Meites, AACC History Division; Apr 1996)

Forward by Wendell T. Caraway, PhD.

The first introduction to Folin comes with the Folin-Wu protein-free filtrate, a technique for removing proteins from whole blood or plasma that resulted in water-clear solutions suitable for the determination of glucose, creatinine, uric acid, non-protein nitrogen, and chloride. The major active ingredient used in the precipitation of protein was sodium tungstate prepared “according to Folin”. Folin-Wu sugar tubes were used for the determination of glucose. From these and subsequent encounters, we learned that Folin was a pioneer in methods for the chemical analysis of blood. The determination of uric acid in serum was by the Benedict method, in which protein-free filtrate was mixed with solutions of sodium cyanide and arsenophosphotungstic acid and then heated in a water bath to develop a blue color. A thorough review of the literature revealed that Folin and Denis had published, in 1912, a method for uric acid in which they used sodium carbonate, rather than sodium cyanide, which was modified and largely superseded the “cyanide” method.

Notes from the author.

Modern clinical chemistry began with the application of 20th century quantitative analysis and instrumentation to measure constituents of blood and urine, and relating the values obtained to human health and disease. In the United States, the first impetus propelling this new area of biochemistry was provided by the 1912 papers of Otto Folin. The only precedent for these stimulating findings was his own earlier and certainly classic papers on the quantitative composition of urine, the laws governing its composition, and studies on the catabolic end products of protein, which led to his ingenious concept of endogenous and exogenous metabolism. He had already determined blood ammonia in 1902. This work preceded the entry of Stanley Benedict and Donald Van Slyke into biochemistry. Once all three of them were active contributors, the future of clinical biochemistry was ensured. Those who would consult the early volumes of the Journal of Biological Chemistry will discover the direction that the work of Otto Folin gave to biochemistry. This modest, unobtrusive man of Harvard was a powerful stimulus and inspiration to others.

Quantitatively, in the years of his scientific productivity, 1897-1934, Otto Folin published 151 (+ 1) journal articles including a chapter in Abderhalden’s handbook and one in Hammarsten’s Festschrift, but excluding his doctoral dissertation, his published abstracts, and several articles in the proceedings of the Association of Life Insurance Directors of America. He also wrote one monograph on food preservatives and produced five editions of his laboratory manual. He published four articles while studying in Europe (1896-98), 28 while at the McLean Hospital (1900-7), and 119 at Harvard (1908-34). In his banner year of 1912 he published 20 papers. His peak period from 1912-15 included 15 papers, the monograph, and most of the work on the first edition of his laboratory manual.

The quality of Otto Folin’s life’s work relates to its impact on biochemistry, particularly clinical biochemistry. Otto’s two brilliant collaborators, Willey Denis and Hsien Wu, must be acknowledged. Without Denis, Otto could not have achieved so rapidly the introduction and popularization of modern blood analysis in the U.S. It would be pointless to conjecture how far Otto would have progressed without this pair.

His work provided the basis of the modern approach to the quantitative analysis of blood and urine through improved methods that reduced the body fluid volume required for analysis. He also applied these methods to metabolic studies on tissues as well as body fluids. Because his interests lay in protein metabolism, his major contributions were directed toward measuring nitrogenous waste or end products. His most dramatic achievement is illustrated by the study of blood nitrogen retention in nephritis and gout.

Folin introduced colorimetry, turbidimetry, and the use of color filters into quantitative clinical biochemistry. He initiated and applied ingeniously conceived reagents and chemical reactions that paved the way for a host of studies by his contemporaries. He introduced the use of phosphomolybdate for detecting phenolic compounds, and phosphotungstate for uric acid. These, in turn, led to the quantitation of epinephrine, and of tyrosine, tryptophane, and cystine in protein. The molybdate suggested to Fiske and SubbaRow the determination of phosphate as phosphomolybdate, and the tungsten led to the use of tungstic acid as a protein precipitant. Phosphomolybdate became the key reagent in the blood sugar method. Folin resurrected the abandoned Jaffe reaction and established creatine and creatinine analysis. He also laid the groundwork for the discovery of creatine phosphate. Clinical chemistry owes to him the introduction of Nessler’s reagent, permutit, Lloyd’s reagent, gum ghatti, and preservatives for standards, such as benzoic acid and formaldehyde. Among his distinguished graduate investigators were Bloor, Doisy, Fiske, Shaffer, SubbaRow, Sumner, and Wu.

A Golden Age of Clinical Chemistry: 1948–1960

Louis Rosenfeld
Clinical Chemistry 2000; 46(10): 1705–1714

The 12 years from 1948 to 1960 were notable for introduction of the Vacutainer tube, electrophoresis, radioimmunoassay, and the Auto-Analyzer. Also appearing during this interval were new organizations, publications, programs, and services that established a firm foundation for the professional status of clinical chemists. It was a golden age.

Except for photoelectric colorimeters, the clinical chemistry laboratories in 1948—and in many places even later—were not very different from those of 1925. The basic technology and equipment were essentially unchanged. There was lots of glassware of different kinds—pipettes, burettes, wooden racks of test tubes, funnels, filter paper, cylinders, flasks, and beakers—as well as visual colorimeters, centrifuges, water baths, an exhaust hood for evaporating organic solvents after extractions, a microscope for examining urine sediments, a double-pan analytical beam balance for weighing reagents and standard chemicals, and perhaps a pH meter. The most complicated apparatus was the Van Slyke volumetric gas device—manually operated. The emphasis was on classical chemical and biological techniques that did not require instrumentation.

The unparalleled growth and wide-ranging research that began after World War II and have continued into the new century, often aided by government funding for biomedical research and development as civilian health has become a major national goal, have impacted the operations of the clinical chemistry laboratory. The years from 1948 to 1960 were especially notable for the innovative technology that produced better methods for the investigation of many diseases, in many cases leading to better treatment.

AUTOMATION IN CLINICAL CHEMISTRY: CURRENT SUCCESSES AND TRENDS FOR THE FUTURE
Pierangelo Bonini
Pure & Appl. Chem., 1982; 54(11): 2017–2030

The history of automation in clinical chemistry is the history of how and when technological progress in the field of analytical methodology, as well as in the field of instrumentation, has helped clinical chemists to mechanize their procedures and to control them.

GENERAL STEPS OF A CLINICAL CHEMISTRY PROCEDURE –
1 – PRELIMINARY TREATMENT (DEPROTEINIZATION)
2 – SAMPLE + REAGENT(S)
3 – INCUBATION
4 – READING
5 – CALCULATION
Fig. 1 General steps of a clinical chemistry procedure
Especially in the classic clinical chemistry methods, a preliminary treatment of the sample (in most cases a deproteinization) was an essential step. This was a major constraint on the first tentative steps in automation, and we will see how this problem was faced and which new problems arose from avoiding deproteinization. Mixing samples and reagents is the next step; then there is a more or less long incubation at different temperatures, and finally reading, which means detection of modifications of some physical property of the mixture; in most cases the development of a colour reveals the reaction but, as is well known, many other possibilities exist; finally the result is calculated.

Some 25 years ago, Skeggs (1) presented his paper on continuous flow automation, which became the basis of very successful instruments still used all over the world. In continuous flow automation, the reactions take place in a hydraulic route common to all samples.

Standards and samples enter the analytical stream segmented by air bubbles and, as they circulate, specific chemical reactions and physical manipulations continuously take place in the stream. Finally, after the air bubbles are vented, the colour intensity, proportional to the solute molecules, is monitored in a detector flow cell.

It is evident that the most important aim of automation is to correctly process as many samples in as short a time as possible. This result can be obtained thanks to many technological advances, both from the analytical point of view and in instrument technology.

ANALYTICAL METHODOLOGY –
– VERY ACTIVE ENZYMATIC REAGENTS
– SHORTER REACTION TIME
– KINETIC AND FIXED TIME REACTIONS
– NO NEED OF DEPROTEINIZATION
– SURFACTANTS
– AUTOMATIC SAMPLE BLANK CALCULATION
– POLYCHROMATIC ANALYSIS

The introduction of very active enzymatic reagents for the determination of substrates resulted in shorter reaction times and, in many cases, made it possible to avoid deproteinization. Reaction times are also reduced by using kinetic and fixed-time reactions instead of end points. In this case, the measurement of the sample blank does not need a separate tube with a separate reaction mixture. Deproteinization can also be avoided by using surfactants in the reagent mixture. An automatic calculation of sample blanks is also possible by using polychromatic analysis. As we can see from this figure, reduction of reaction times and elimination of tedious operations like deproteinization are the main results of this analytical progress.

Many relevant improvements in mechanics and optics over the last twenty years and the tremendous advance in electronics have largely contributed to the instrumental improvement of clinical chemistry automation.

A recent interesting innovation in the field of centrifugal analyzers consists in the possibility of adding another reagent to an already mixed sample-reagent solution. This innovation allows a preincubation to be made and sample blanks to be read before adding the starter reagent. The possibility of measuring absorbances in cuvettes positioned longitudinally to the light path, realized in a recent model of centrifugal analyzers, is claimed to be advantageous for reading absorbances in non-homogeneous solutions, for avoiding any influence of reagent volume errors on the absorbance, and for obtaining more suitable calculation factors. Interest in fluorimetric assays is growing, especially in connection with drug immunofluorimetric assays. This technology has recently been applied to centrifugal analyzers as well. A xenon lamp generates high-energy light, reflected by a mirror onto a holographic grating operated by a stepping motor. The selected wavelength of the exciting light passes through a slit and reaches the rotating cuvettes. Fluorescence is then filtered, read by means of a photomultiplier, and compared to the continuously monitored fluorescence of an appropriate reference compound. In this way, any instability due either to the electro-optical devices or to changes in the physicochemical properties of the solution is corrected.

…more…

Dr. Yellapragada Subbarow – ATP – Energy for Life

One of the observations Dr. SubbaRow made while testing the phosphorus method seemed to provide a clue to the mystery of what happens to blood sugar when insulin is administered. Biochemists began investigating the problem when Frederick Banting showed that injections of insulin, the pancreatic hormone, keep blood sugar under control and keep diabetics alive.

SubbaRow worked for 18 months on the problem, often dieting and starving along with animals used in experiments. But the initial observations were finally shown to be neither significant nor unique and the project had to be scrapped in September 1926.

Out of the ashes of this project however arose another project that provided the key to the ancient mystery of muscular contraction. Living organisms resist degeneration and destruction with the help of muscles, and biochemists had long believed that a hypothetical inogen provided the energy required for the flexing of muscles at work.

Two researchers at Cambridge University in the United Kingdom confirmed that lactic acid is formed when muscles contract, and Otto Meyerhof of Germany showed that this lactic acid is a breakdown product of glycogen, the animal starch stored all over the body, particularly in liver, kidneys and muscles. When Professor Archibald Hill of University College London demonstrated that conversion of glycogen to lactic acid partly accounts for heat produced during muscle contraction, everybody assumed that glycogen was the inogen. And the 1922 Nobel Prize for medicine and physiology was divided between Hill and Meyerhof.

But how is glycogen converted to lactic acid? Embden, another German biochemist, advanced the hypothesis that blood sugar and phosphorus combine to form a hexose phosphoric ester which breaks down glycogen in the muscle to lactic acid.

In the midst of the insulin experiments, it occurred to Fiske and SubbaRow that Embden’s hypothesis would be supported if normal persons were found to have more hexose phosphate in their muscle and liver than diabetics. For diabetes is the failure of the body to use sugar. There would be little reaction between sugar and phosphorus in a diabetic body. If Embden was right, hexose (sugar) phosphate level in the muscle and liver of diabetic animals should rise when insulin is injected.

Fiske and SubbaRow rendered some animals diabetic by removing their pancreas in the spring of 1926, but they could not record any rise in the organic phosphorus content of muscles or livers after insulin was administered to the animals. Sugar phosphates were indeed produced in their animals but they were converted so quickly by enzymes to lactic acid that Fiske and SubbaRow could not detect them with methods then available. This was fortunate for science because, in their mistaken belief that Embden was wrong, they began that summer an extensive study of organic phosphorus compounds in the muscle “to repudiate Meyerhof completely”.

The departmental budget was so poor that SubbaRow often waited on the back streets of Harvard Medical School at night to capture cats he needed for the experiments. When he prepared the cat muscles for estimating their phosphorus content, SubbaRow found he could not get a constant reading in the colorimeter. The intensity of the blue colour went on rising for thirty minutes. Was there something in muscle which delayed the colour reaction? If yes, the time for full colour development should increase with the increase in the quantity of the sample. But the delay was not greater when the sample was 10 c.c. instead of 5 c.c. The only other possibility was that muscle had an organic compound which liberated phosphorus as the reaction in the colorimeter proceeded. This indeed was the case, it turned out. It took a whole year.

The mysterious colour delaying substance was a compound of phosphoric acid and creatine and was named Phosphocreatine. It accounted for two-thirds of the phosphorus in the resting muscle. When they put muscle to work by electric stimulation, the Phosphocreatine level fell and the inorganic phosphorus level rose correspondingly. It completely disappeared when they cut off the blood supply and drove the muscle to the point of “fatigue” by continued electric stimulation. And, presto! It reappeared when the fatigued muscle was allowed a period of rest.

Phosphocreatine created a stir among the scientists present when Fiske unveiled it before the American Society of Biological Chemists at Rochester in April 1927. The Journal of the American Medical Association hailed the discovery in an editorial. The Rockefeller Foundation awarded a fellowship that helped SubbaRow to live comfortably for the first time since his arrival in the United States. All of Harvard Medical School was caught up with an enthusiasm that would be a life-time memory for contemporary students. The students were in awe of the medium-sized, slightly stoop-shouldered, “coloured” man regarded as one of the School’s top research workers.

SubbaRow’s carefully conducted series of experiments disproved Meyerhof’s assumptions about the glycogen-lactic acid cycle. His calculations fully accounted for the heat output during muscle contraction. Hill had not been able to fully account for this in terms of Meyerhof’s theory. Clearly the Nobel Committee was in haste in awarding the 1922 physiology prize, but the biochemistry orthodoxy led by Meyerhof and Hill themselves was not too eager to give up their belief in glycogen as the prime source of muscular energy.

Fiske and SubbaRow were fully upheld and the Meyerhof-Hill theory finally rejected in 1930 when a Danish physiologist showed that muscles can work to exhaustion without the aid of glycogen or the stimulation of lactic acid.

Fiske and SubbaRow had meanwhile followed a substance that was formed by the combination of phosphorus, liberated from Phosphocreatine, with an unidentified compound in muscle. SubbaRow isolated it and identified it as a chemical in which adenylic acid was linked to two extra molecules of phosphoric acid. By the time he completed the work to the satisfaction of Fiske, it was August 1929 when Harvard Medical School played host to the 13th International Physiological Congress.

ATP was presented to the gathered scientists before the Congress ended. To the dismay of Fiske and SubbaRow, a few days later arrived in Boston a German science journal, published 16 days before the Congress opened. It carried a letter from Karl Lohmann of Meyerhof’s laboratory, saying he had isolated from muscle a compound of adenylic acid linked to two molecules of phosphoric acid!

While Archibald Hill never adjusted himself to the idea that the basis of his Nobel Prize work had been demolished, Otto Meyerhof and his associates had seen the importance of Phosphocreatine discovery and plunged themselves into follow-up studies in competition with Fiske and SubbaRow. Two associates of Hill had in fact stumbled upon Phosphocreatine about the same time as Fiske and SubbaRow but their loyalty to Meyerhof-Hill theory acted as blinkers and their hasty and premature publications reveal their confusion about both the nature and significance of Phosphocreatine.

The discovery of ATP and its significance helped reveal the full story of muscular contraction: Glycogen arriving in muscle gets converted into lactic acid which is siphoned off to liver for re-synthesis of glycogen. This cycle yields three molecules of ATP and is important in delivering usable food energy to the muscle. Glycolysis or break up of glycogen is relatively slow in getting started and in any case muscle can retain ATP only in small quantities. In the interval between the beginning of muscle activity and the arrival of fresh ATP from glycolysis, Phosphocreatine maintains ATP supply by re-synthesizing it as fast as its energy terminals are used up by muscle for its activity.

Muscular contraction made possible by ATP helps us not only to move our limbs and lift weights but keeps us alive. The heart is after all a muscle pouch and millions of muscle cells embedded in the walls of arteries keep the life-sustaining blood pumped by the heart coursing through body organs. ATP even helps get new life started by powering the sperm’s motion toward the egg as well as the spectacular transformation of the fertilized egg in the womb.

Archibald Hill for long denied any role for ATP in muscle contraction, saying ATP has not been shown to break down in the intact muscle. This objection was also met in 1962 when University of Pennsylvania scientists showed that muscles can contract and relax normally even when glycogen and Phosphocreatine are kept under check with an inhibitor.

Michael Somogyi

Michael Somogyi was born in Reinsdorf, Austria-Hungary, in 1883. He received a degree in chemical engineering from the University of Budapest, and after spending some time there as a graduate assistant in biochemistry, he immigrated to the United States. From 1906 to 1908 he was an assistant in biochemistry at Cornell University.

Returning to his native land in 1908, he became head of the Municipal Laboratory in Budapest, and in 1914 he was granted his Ph.D. After World War I, the politically unstable situation in his homeland led him to return to the United States where he took a job as an instructor in biochemistry at Washington University in St. Louis, Missouri. While there he assisted Philip A. Shaffer and Edward Adelbert Doisy, Sr., a future Nobel Prize recipient, in developing a new method for the preparation of insulin in sufficiently large amounts and of sufficient purity to make it a viable treatment for diabetes. This early work with insulin helped foster Somogyi’s lifelong interest in the treatment and cure of diabetes. He was the first biochemist appointed to the staff of the newly opened Jewish Hospital, and he remained there as the director of their clinical laboratory until his retirement in 1957.

Arterial Blood Gases.  Van Slyke.

The test is used to determine the pH of the blood, the partial pressure of carbon dioxide and oxygen, and the bicarbonate level. Many blood gas analyzers will also report concentrations of lactate, hemoglobin, several electrolytes, oxyhemoglobin, carboxyhemoglobin and methemoglobin. ABG testing is mainly used in pulmonology and critical care medicine to assess gas exchange across the alveolar-capillary membrane.
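The three quantities named above are tied together by the Henderson-Hasselbalch relationship for the bicarbonate buffer system. A minimal sketch of that arithmetic follows (illustrative only, using the textbook constants pKa = 6.1 and a CO2 solubility of 0.03 mmol/L per mmHg):

# Henderson-Hasselbalch sketch linking pH, pCO2, and bicarbonate.
# Constants are the usual textbook values; the inputs are illustrative.
import math

def arterial_ph(hco3_mmol_per_l: float, pco2_mmhg: float) -> float:
    return 6.1 + math.log10(hco3_mmol_per_l / (0.03 * pco2_mmhg))

print(arterial_ph(24.0, 40.0))  # about 7.40 for typical normal values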

DONALD DEXTER VAN SLYKE died on May 4, 1971, after a long and productive career that spanned three generations of biochemists and physicians. He left behind not only a bibliography of 317 journal publications and 5 books, but also more than 100 persons who had worked with him and distinguished themselves in biochemistry and academic medicine. His doctoral thesis, with Gomberg at the University of Michigan, was published in the Journal of the American Chemical Society in 1907. Van Slyke received an invitation from Dr. Simon Flexner, Director of the Rockefeller Institute, to come to New York for an interview. In 1911 he spent a year in Berlin with Emil Fischer, who was then the leading chemist of the scientific world. He was particularly impressed by Fischer’s performing all laboratory operations quantitatively, a procedure Van followed throughout his life. Prior to going to Berlin, he published the classic nitrous acid method for the quantitative determination of primary aliphatic amino groups, the first of the many gasometric procedures devised by Van Slyke, which made possible the determination of amino acids. It was the primary method used to study amino acid composition of proteins for years before chromatography. Thus, his first seven postdoctoral years were centered around the development of better methodology for protein composition and amino acid metabolism.

With his colleague G. M. Meyer, he first demonstrated that amino acids, liberated during digestion in the intestine, are absorbed into the bloodstream, that they are removed by the tissues, and that the liver alone possesses the ability to convert the amino acid nitrogen into urea.  From the study of the kinetics of urease action, Van Slyke and Cullen developed equations that depended upon two reactions: (1) the combination of enzyme and substrate in stoichiometric proportions and (2) the reaction of the combination into the end products. Published in 1914, this formulation, involving two velocity constants, was similar to that arrived at contemporaneously by Michaelis and Menten in Germany in 1913.
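In modern notation, the saturation rate law that both formulations lead to is usually written as

v = \frac{V_{\max}\,[S]}{K_m + [S]}

where v is the initial velocity, V_max the limiting velocity at saturating substrate, [S] the substrate concentration, and K_m the constant that combines the two velocity constants referred to above; this is the standard textbook form rather than a quotation from the 1914 paper.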

He transferred to the Hospital of the Rockefeller Institute in 1914, under Dr. Rufus Cole, where “Men who were studying disease clinically had the right to go as deeply into its fundamental nature as their training allowed, and in the Rockefeller Institute’s Hospital every man who was caring for patients should also be engaged in more fundamental study”. The study of diabetes was already under way by Dr. F. M. Allen, but patients inevitably died of acidosis. Van Slyke reasoned that if incomplete oxidation of fatty acids in the body led to the accumulation of acetoacetic and beta-hydroxybutyric acids in the blood, then a reaction would result between these acids and the bicarbonate ions that would lead to a lower-than-normal bicarbonate concentration in blood plasma. The problem thus became one of devising an analytical method that would permit the quantitative determination of bicarbonate concentration in small amounts of blood plasma. He ingeniously devised a volumetric glass apparatus that was easy to use and required less than ten minutes for the determination of the total carbon dioxide in one cubic centimeter of plasma. It also was soon found to be an excellent apparatus by which to determine blood oxygen concentrations, thus leading to measurements of the percentage saturation of blood hemoglobin with oxygen. This found extensive application in the study of respiratory diseases, such as pneumonia and tuberculosis. It also led to the quantitative study of cyanosis and a monograph on the subject by C. Lundsgaard and Van Slyke.

In all, Van Slyke and his colleagues published twenty-one papers under the general title “Studies of Acidosis,” beginning in 1917 and ending in 1934. They included not only chemical manifestations of acidosis, but Van Slyke, in No. 17 of the series (1921), elaborated and expanded the subject to describe in chemical terms the normal and abnormal variations in the acid-base balance of the blood. This was a landmark in understanding acid-base balance pathology.  Within seven years after Van moved to the Hospital, he had published a total of fifty-three papers, thirty-three of them coauthored with clinical colleagues.

In 1920, Van Slyke and his colleagues undertook a comprehensive investigation of gas and electrolyte equilibria in blood. McLean and Henderson at Harvard had made preliminary studies of blood as a physico-chemical system, but realized that Van Slyke and his colleagues at the Rockefeller Hospital had superior techniques and the facilities necessary for such an undertaking. A collaboration thereupon began between the two laboratories, which resulted in rapid progress toward an exact physico-chemical description of the role of hemoglobin in the transport of oxygen and carbon dioxide, of the distribution of diffusible ions and water between erythrocytes and plasma, and of factors such as degree of oxygenation of hemoglobin and hydrogen ion concentration that modified these distributions. In this Van Slyke revised his volumetric gas analysis apparatus into a manometric method.  The manometric apparatus proved to give results that were from five to ten times more accurate.

A series of papers on the CO2 titration curves of oxy- and deoxyhemoglobin, of oxygenated and reduced whole blood, and of blood subjected to different degrees of oxygenation, and on the distribution of diffusible ions in blood resulted. These developed equations that predicted the change in distribution of water and diffusible ions between blood plasma and blood cells when there was a change in pH of the oxygenated blood. A significant contribution of Van Slyke and his colleagues was the application of the Gibbs-Donnan Law to the blood, regarded as a two-phase system in which one phase (the erythrocytes) contained a high concentration of nondiffusible negative ions, i.e., those associated with hemoglobin, and cations, which were not freely exchangeable between cells and plasma. By changing the pH through varying the CO2 tension, the concentration of negative hemoglobin charges changed by a predictable amount. This, in turn, changed the distribution of diffusible anions such as Cl- and HCO3- in order to restore the Gibbs-Donnan equilibrium. Redistribution of water occurred to restore osmotic equilibrium. The experimental results confirmed the predictions of the equations.
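The equilibrium condition being invoked is the standard Gibbs-Donnan statement that, at equilibrium, all freely diffusible monovalent anions share the same cells-to-plasma concentration ratio (written here in modern notation, not as it appears in the original papers):

r = \frac{[\mathrm{Cl^-}]_{cells}}{[\mathrm{Cl^-}]_{plasma}} = \frac{[\mathrm{HCO_3^-}]_{cells}}{[\mathrm{HCO_3^-}]_{plasma}}

so a pH-driven change in the charge carried by hemoglobin forces a compensating shift of chloride and bicarbonate, and of water to restore osmotic balance, which is the behavior described above.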

As a spin-off from the physico-chemical study of the blood, Van undertook, in 1922, to put the concept of buffer value of weak electrolytes on a mathematically exact basis.
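In the standard modern form (a sketch of the result, not a quotation), the buffer value is the slope of the titration curve, beta = dB/d(pH), where B is the amount of strong base added; for a monovalent weak acid of total concentration C and dissociation constant K_a it works out to

\beta = 2.303\left(\frac{C\,K_a\,[\mathrm{H^+}]}{\left(K_a + [\mathrm{H^+}]\right)^2} + [\mathrm{H^+}] + [\mathrm{OH^-}]\right)

with the last two terms representing the buffering of water itself.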

This proved to be useful in determining buffer values of mixed, polyvalent, and amphoteric electrolytes, and put the understanding of buffering on a quantitative basis. A monograph in Medicine entitled “Observation on the Courses of Different Types of Bright’s Disease, and on the Resultant Changes in Renal Anatomy,” was a landmark that related the changes occurring at different stages of renal deterioration to the quantitative changes taking place in kidney function. During this period, Van Slyke and R. M. Archibald identified glutamine as the source of urinary ammonia. During World War II, Van and his colleagues documented the effect of shock on renal function and, with R. A. Phillips, developed a simple method, based on specific gravity, suitable for use in the field.

Over 100 of Van’s 300 publications were devoted to methodology. The importance of Van Slyke’s contribution to clinical chemical methodology cannot be overestimated. These included the blood organic constituents (carbohydrates, fats, proteins, amino acids, urea, nonprotein nitrogen, and phospholipids) and the inorganic constituents (total cations, calcium, chlorides, phosphate, and the gases carbon dioxide, carbon monoxide, and nitrogen). It was said that a Van Slyke manometric apparatus was almost all the special equipment needed to perform most of the clinical chemical analyses customarily performed prior to the introduction of photocolorimeters and spectrophotometers for such determinations.

The progress made in the medical sciences in genetics, immunology, endocrinology, and antibiotics during the second half of the twentieth century obscures at times the progress that was made in basic and necessary biochemical knowledge during the first half. Methods capable of giving accurate quantitative chemical information on biological material had to be painstakingly devised; basic questions on chemical behavior and metabolism had to be answered; and, finally, those factors that adversely modified the normal chemical reactions in the body so that abnormal conditions arise that we characterize as disease states had to be identified.

Viewed in retrospect, he combined in one scientific lifetime (1) basic contributions to the chemistry of body constituents and their chemical behavior in the body, (2) a chemical understanding of physiological functions of certain organ systems (notably the respiratory and renal), and (3) how such information could be exploited in the understanding and treatment of disease. That outstanding additions to knowledge in all three categories were possible was in large measure due to his sound and broadly based chemical preparation, his ingenuity in devising means of accurate measurements of chemical constituents, and the opportunity given him at the Hospital of the Rockefeller Institute to study disease in company with physicians.

In addition, he found time to work collaboratively with Dr. John P. Peters of Yale on the classic, two-volume Quantitative Clinical Chemistry. In 1922, John P. Peters, who had just gone to Yale from Van Slyke’s laboratory as an Associate Professor of Medicine, was asked by a publisher to write a modest handbook for clinicians describing useful chemical methods and discussing their application to clinical problems. It was originally to be called “Quantitative Chemistry in Clinical Medicine.” He soon found that it was going to be a bigger job than he could handle alone and asked Van Slyke to join him in writing it. Van agreed, and the two men proceeded to draw up an outline and divide up the writing of the first drafts of the chapters between them. They also agreed to exchange each chapter until it met the satisfaction of both. At the time it was published in 1931, it contained practically all that could be stated with confidence about those aspects of disease that could be and had been studied by chemical means. It was widely accepted throughout the medical world as the “Bible” of quantitative clinical chemistry, and to this day some of the chapters have not become outdated.

Paul Flory

Paul J. Flory was born in Sterling, Illinois, in 1910. He attended Manchester College, an institution for which he retained an abiding affection. He did his graduate work at Ohio State University, earning his Ph.D. in 1934. He was awarded the Nobel Prize in Chemistry in 1974, largely for his work in the area of the physical chemistry of macromolecules.

Flory worked as a newly minted Ph.D. for the DuPont Company in the Central Research Department with Wallace H. Carothers. This early experience with practical research instilled in Flory a lifelong appreciation for the value of industrial application. His work with the Air Force Office of Scientific Research and his later support for the Industrial Affiliates program at Stanford University demonstrated his belief in the need for theory and practice to work hand-in-hand.

Following the death of Carothers in 1937, Flory joined the University of Cincinnati’s Basic Science Research Laboratory. After the war Flory taught at Cornell University from 1948 until 1957, when he became executive director of the Mellon Institute. In 1961 he joined the chemistry faculty at Stanford, where he would remain until his retirement.

Among the high points of Flory’s years at Stanford were his receipt of the National Medal of Science (1974), the Priestley Award (1974), the J. Willard Gibbs Medal (1973), the Peter Debye Award in Physical Chemistry (1969), and the Charles Goodyear Medal (1968). He also traveled extensively, including working tours to the U.S.S.R. and the People’s Republic of China.

Abraham Savitzky

Abraham Savitzky was born on May 29, 1919, in New York City. He received his bachelor’s degree from the New York State College for Teachers in 1941. After serving in the U.S. Air Force during World War II, he obtained a master’s degree in 1947 and a Ph.D. in 1949 in physical chemistry from Columbia University.

In 1950, after working at Columbia for a year, he began a long career with the Perkin-Elmer Corporation. Savitzky started with Perkin-Elmer as a staff scientist who was chiefly concerned with the design and development of infrared instruments. By 1956 he was named Perkin-Elmer’s new product coordinator for the Instrument Division, and as the years passed, he continued to gain more and more recognition for his work in the company. Most of his work with Perkin-Elmer focused on computer-aided analytical chemistry, data reduction, infrared spectroscopy, time-sharing systems, and computer plotting. He retired from Perkin-Elmer in 1985.

Abraham Savitzky holds seven U.S. patents pertaining to computerization and chemical apparatus. During his long career he presented numerous papers and wrote several manuscripts, including “Smoothing and Differentiation of Data by Simplified Least Squares Procedures.” This paper, which is the collaborative effort of Savitzky and Marcel J. E. Golay, was published in volume 36 of Analytical Chemistry, July 1964. It is one of the most famous, respected, and heavily cited articles in its field. In recognition of his many significant accomplishments in the field of analytical chemistry and computer science, Savitzky received the Society of Applied Spectroscopy Award in 1983 and the Williams-Wright Award from the Coblenz Society in 1986.
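As an illustration of the idea in that paper (fit a low-order polynomial to a sliding window by least squares and take the fitted value or derivative at the window centre), the following sketch applies the implementation SciPy provides as savgol_filter to a synthetic noisy trace; all numbers are illustrative:

# Savitzky-Golay smoothing and differentiation on a synthetic signal.
import numpy as np
from scipy.signal import savgol_filter

x = np.linspace(0, 10, 200)
noisy = np.sin(x) + np.random.normal(scale=0.15, size=x.size)

smoothed = savgol_filter(noisy, window_length=11, polyorder=3)
derivative = savgol_filter(noisy, window_length=11, polyorder=3,
                           deriv=1, delta=x[1] - x[0])  # estimate of d/dx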

Samuel Natelson

Samuel Natelson attended City College of New York and received his B.S. in chemistry in 1928. As a graduate student, Natelson attended New York University, receiving a Sc.M. in 1930 and his Ph.D. in 1931. After receiving his Ph.D., he began his career teaching at Girls Commercial High School. While maintaining his teaching position, Natelson joined the Jewish Hospital of Brooklyn in 1933. Working as a clinical chemist for Jewish Hospital, Natelson first conceived of the idea of a society by and for clinical chemists. Natelson worked to organize the nine charter members of the American Association of Clinical Chemists, which formally began in 1948. A pioneer in the field of clinical chemistry, Samuel Natelson has become a role model for the clinical chemist. Natelson developed the usage of microtechniques in clinical chemistry. During this period, he served as a consultant to the National Aeronautics and Space Administration in the 1960s, helping analyze the effect of weightless atmospheres on astronauts’ blood. Natelson spent his later career as chair of the biochemistry department at Michael Reese Hospital and as a lecturer at the Illinois Institute of Technology.

Arnold Beckman

Arnold Orville Beckman (April 10, 1900 – May 18, 2004) was an American chemist, inventor, investor, and philanthropist. While a professor at Caltech, he founded Beckman Instruments based on his 1934 invention of the pH meter, a device for measuring acidity, later considered to have “revolutionized the study of chemistry and biology”.[1] He also developed the DU spectrophotometer, “probably the most important instrument ever developed towards the advancement of bioscience”.[2] Beckman funded the first transistor company, thus giving rise to Silicon Valley.[3]

He earned his bachelor’s degree in chemical engineering in 1922 and his master’s degree in physical chemistry in 1923. For his master’s degree he studied the thermodynamics of aqueous ammonia solutions, a subject introduced to him by T. A. White. Beckman decided to go to Caltech for his doctorate. He stayed there for a year, before returning to New York to be near his fiancée, Mabel. He found a job with Western Electric’s engineering department, the precursor to the Bell Telephone Laboratories. Working with Walter A. Shewhart, Beckman developed quality control programs for the manufacture of vacuum tubes and learned about circuit design. It was here that Beckman discovered his interest in electronics.

In 1926 the couple moved back to California and Beckman resumed his studies at Caltech. He became interested in ultraviolet photolysis and worked with his doctoral advisor, Roscoe G. Dickinson, on an instrument to find the energy of ultraviolet light. It worked by shining the ultraviolet light onto a thermocouple, converting the incident heat into electricity, which drove a galvanometer. After receiving a Ph.D. in photochemistry in 1928 for this application of quantum theory to chemical reactions, Beckman was asked to stay on at Caltech as an instructor and then as a professor. Linus Pauling, another of Roscoe G. Dickinson’s graduate students, was also asked to stay on at Caltech.

During his time at Caltech, Beckman was active in teaching at both the introductory and advanced graduate levels. Beckman shared his expertise in glass-blowing by teaching classes in the machine shop. He also taught classes in the design and use of research instruments. Beckman dealt first-hand with the chemists’ need for good instrumentation as manager of the chemistry department’s instrument shop. Beckman’s interest in electronics made him very popular within the chemistry department at Caltech, as he was very skilled in building measuring instruments.

Over the time that he was at Caltech, the focus of the department increasingly moved towards pure science and away from chemical engineering and applied chemistry. Arthur Amos Noyes, head of the chemistry division, encouraged both Beckman and chemical engineer William Lacey to be in contact with real-world engineers and chemists, and Robert Andrews Millikan, Caltech’s president, referred technical questions from government and business to Beckman.

Sunkist Growers was having problems with its manufacturing process. Lemons that were not saleable as produce were made into pectin or citric acid, with sulfur dioxide used as a preservative. Sunkist needed to know the acidity of the product at any given time. Chemist Glen Joseph at Sunkist was attempting to measure the hydrogen-ion concentration in lemon juice electrochemically, but sulfur dioxide damaged hydrogen electrodes, and non-reactive glass electrodes produced weak signals and were fragile.

Joseph approached Beckman, who proposed that instead of trying to increase the sensitivity of his measurements, he amplify his results. Beckman, familiar with glassblowing, electricity, and chemistry, suggested a design for a vacuum-tube amplifier and ended up building a working apparatus for Joseph. The glass electrode used to measure pH was placed in a grid circuit in the vacuum tube, producing an amplified signal which could then be read by an electronic meter. The prototype was so useful that Joseph requested a second unit.
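The measuring chain just described amounts to amplifying a glass-electrode potential and converting it to a pH reading. A minimal sketch of that conversion follows, assuming a two-point buffer calibration and a roughly Nernstian electrode (about -59 mV per pH unit at 25 °C); the numbers are illustrative, not taken from Beckman’s instrument:

# Two-point calibration of an amplified glass-electrode signal, then read-out.
def calibrate(e1_mv: float, ph1: float, e2_mv: float, ph2: float):
    # Slope (mV per pH unit) and offset from two buffer readings.
    slope = (e2_mv - e1_mv) / (ph2 - ph1)
    offset = e1_mv - slope * ph1
    return slope, offset

def read_ph(e_sample_mv: float, slope: float, offset: float) -> float:
    return (e_sample_mv - offset) / slope

slope, offset = calibrate(177.5, 4.01, 0.0, 7.00)   # pH 4 and pH 7 buffers
print(read_ph(100.0, slope, offset))                # unknown sample, about pH 5.3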

Beckman saw an opportunity and, rethinking the project, decided to create a complete chemical instrument which could be easily transported and used by nonspecialists. By October 1934, he had registered patent application U.S. Patent No. 2,058,761 for his “acidimeter”, later renamed the pH meter. Although it was expensive at $195, roughly the starting monthly wage for a chemistry professor at that time, it was significantly cheaper than the estimated cost of building a comparable instrument from individual components, about $500. The original pH meter weighed in at nearly 7 kg, but was a substantial improvement over a benchful of delicate equipment. The earliest meter had a design glitch, in that the pH readings changed with the depth of immersion of the electrodes, but Beckman fixed the problem by sealing the glass bulb of the electrode. The pH meter is an important device for measuring the pH of a solution, and by 11 May 1939, sales were successful enough that Beckman left Caltech to become the full-time president of National Technical Laboratories. By 1940, Beckman was able to take out a loan to build his own 12,000-square-foot factory in South Pasadena.

In 1940, the equipment needed to analyze emission spectra in the visible spectrum could cost a laboratory as much as $3,000, a huge amount at that time. There was also growing interest in examining ultraviolet spectra beyond that range. In the same way that he had created a single easy-to-use instrument for measuring pH, Beckman made it a goal to create an easy-to-use instrument for spectrophotometry. Beckman’s research team, led by Howard Cary, developed several models.

The new spectrophotometers used a prism to spread light into a spectrum and a phototube to “read” the light transmitted at each wavelength and generate electrical signals, creating a standardized absorption “fingerprint” for the material tested. With Beckman’s model D, later known as the DU spectrophotometer, National Technical Laboratories successfully created the first easy-to-use single instrument containing both the optical and electronic components needed for ultraviolet-absorption spectrophotometry. The user could insert a sample, dial up the desired wavelength, and read the amount of absorption at that wavelength from a simple meter. It produced accurate absorption spectra in both the ultraviolet and the visible regions of the spectrum with relative ease and repeatable accuracy. The National Bureau of Standards ran tests to certify that the DU’s results were accurate and repeatable and recommended its use.
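The quantity such an instrument reports relates to concentration through the Beer-Lambert law, A = log10(I0/I) = epsilon * l * c. A small sketch of that arithmetic follows; the molar absorptivity used is the commonly quoted value for NADH at 340 nm, and the intensities are illustrative:

# Beer-Lambert sketch: absorbance from intensities, then concentration.
import math

def absorbance(i_incident: float, i_transmitted: float) -> float:
    return math.log10(i_incident / i_transmitted)

def concentration(a: float, molar_absorptivity: float, path_cm: float = 1.0) -> float:
    return a / (molar_absorptivity * path_cm)

a = absorbance(100.0, 25.0)                       # A ~ 0.60
print(concentration(a, molar_absorptivity=6220))  # mol/L, e.g. NADH at 340 nm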

Beckman’s DU spectrophotometer has been referred to as the “Model T” of scientific instruments: “This device forever simplified and streamlined chemical analysis, by allowing researchers to perform a 99.9% accurate biological assessment of a substance within minutes, as opposed to the weeks required previously for results of only 25% accuracy.” Nobel laureate Bruce Merrifield is quoted as calling the DU spectrophotometer “probably the most important instrument ever developed towards the advancement of bioscience.”

Development of the spectrophotometer also had direct relevance to the war effort. The role of vitamins in health was being studied, and scientists wanted to identify Vitamin A-rich foods to keep soldiers healthy. Previous methods involved feeding rats for several weeks, then performing a biopsy to estimate Vitamin A levels. The DU spectrophotometer yielded better results in a matter of minutes. The DU spectrophotometer was also an important tool for scientists studying and producing the new wonder drug penicillin. By the end of the war, American pharmaceutical companies were producing 650 billion units of penicillin each month. Much of the work done in this area during World War II was kept secret until after the war.

Beckman also developed the infrared spectrophotometer, first the IR-1; then, in 1953, he redesigned the instrument. The result was the IR-4, which could be operated using either a single or a double beam of infrared light. This allowed a user to take both the reference measurement and the sample measurement at the same time.

Beckman Coulter Inc., is an American company that makes biomedical laboratory instruments. Founded by Caltech professor Arnold O. Beckman in 1935 as National Technical Laboratories to commercialize a pH meter that he had invented, the company eventually grew to employ over 10,000 people, with $2.4 billion in annual sales by 2004. Its current headquarters are in Brea, California.

In the 1940s, Beckman changed the name to Arnold O. Beckman, Inc. to sell oxygen analyzers, the Helipot precision potentiometer, and spectrophotometers. In the 1950s, the company name changed to Beckman Instruments, Inc.

Beckman was contacted by Paul Rosenberg, who worked at MIT’s Radiation Laboratory. The lab was part of a secret network of research institutions in both the United States and Britain that were working to develop radar, “radio detection and ranging”. The project was interested in Beckman because of the high quality of the tuning knobs, or “potentiometers”, used on his pH meters. Beckman had trademarked the design of the pH meter knobs under the name “helipot”, for “helical potentiometer”. Rosenberg had found that the helipot was more precise, by a factor of ten, than other knobs. The knob was redesigned with a continuous groove, in which the sliding contact could not be jarred loose.

Beckman instruments were also used by the Manhattan Project to measure radiation in gas-filled, electrically charged ionization chambers in nuclear reactors. The pH meter was adapted to do the job with a relatively minor adjustment – substituting an input-load resistor for the glass electrode. As a result, Beckman Instruments developed a new product, the micro-ammeter.

After the war, Beckman developed oxygen analyzers that were used to monitor conditions in incubators for premature babies. Doctors at Johns Hopkins University used them to determine recommendations for healthy oxygen levels for incubators.

Beckman himself was approached by California governor Goodwin Knight to head a Special Committee on Air Pollution, to propose ways to combat smog. At the end of 1953, the committee made its findings public. The “Beckman Bible” advised key steps to be taken immediately:

In 1955, Beckman established the seminal Shockley Semiconductor Laboratory as a division of Beckman Instruments to begin commercializing the semiconductor transistor technology invented by Caltech alumnus William Shockley. The Shockley Laboratory was established in nearby Mountain View, California, and thus, “Silicon Valley” was born.

Beckman also saw that computers and automation offered a myriad of opportunities for integration into existing instruments and for the development of new ones.

The Arnold and Mabel Beckman Foundation was incorporated in September 1977.  At the time of Beckman’s death, the Foundation had given more than 400 million dollars to a variety of charities and organizations. In 1990, it was considered one of the top ten foundations in California, based on annual gifts. Donations chiefly went to scientists and scientific causes as well as Beckman’s alma maters. He is quoted as saying, “I accumulated my wealth by selling instruments to scientists,… so I thought it would be appropriate to make contributions to science, and that’s been my number one guideline for charity.”

Wallace H. Coulter

Engineer, Inventor, Entrepreneur, Visionary

Wallace Henry Coulter was an engineer, inventor, entrepreneur and visionary. He was co-founder and Chairman of Coulter® Corporation, a worldwide medical diagnostics company headquartered in Miami, Florida. The two great passions of his life were applying engineering principles to scientific research, and embracing the diversity of world cultures. The first passion led him to invent the Coulter Principle™, the reference method for counting and sizing microscopic particles suspended in a fluid.
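A hedged sketch of the read-out side of that principle: each particle passing the sensing aperture produces a voltage pulse whose height is roughly proportional to the particle’s volume, so counting pulses above a threshold counts cells and the pulse heights give a volume distribution. The trace, threshold, and millivolt-per-femtolitre scale factor below are purely illustrative:

# Pulse counting and sizing from a digitized aperture-voltage trace.
import numpy as np

def count_and_size(trace_mv: np.ndarray, threshold_mv: float, mv_per_fl: float):
    # Pad with False so every pulse has a matching rising and falling edge.
    mask = np.r_[False, trace_mv > threshold_mv, False]
    starts = np.flatnonzero(mask[1:] & ~mask[:-1])   # pulse start indices
    ends = np.flatnonzero(~mask[1:] & mask[:-1])     # pulse end indices (exclusive)
    volumes_fl = [trace_mv[s:e].max() / mv_per_fl for s, e in zip(starts, ends)]
    return len(volumes_fl), volumes_fl               # cell count, per-cell volumes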

This invention served as the cornerstone for automating the labor intensive process of counting and testing blood. With his vision and tenacity, Wallace Coulter, was a founding father in the field of laboratory hematology, the science and study of blood. His global viewpoint and passion for world cultures inspired him to establish over twenty international subsidiaries. He recognized that it was imperative to employ locally based staff to service his customers before this became standard business strategy.

Wallace’s first attempts to patent his invention were turned away by more than one attorney who believed “you cannot patent a hole”. Persistent as always, Wallace finally applied for his first patent in 1949 and it was issued on October 20, 1953. That same year, two prototypes were sent to the National Institutes of Health for evaluation. Shortly after, the NIH published its findings in two key papers, citing improved accuracy and convenience of the Coulter method of counting blood cells. That same year, Wallace publicly disclosed his invention in his one and only technical paper at the National Electronics Conference, “High Speed Automatic Blood Cell Counter and Cell Size Analyzer”.

Leonard Skeggs was the inventor of the first continuous flow analyser, way back in 1957. This groundbreaking event completely changed the way that chemistry was carried out. Many of the laborious tests that dominated lab work could be automated, increasing productivity and freeing personnel for other, more challenging tasks.

Continuous flow analysis and its offshoots and descendants remain an integral part of modern chemistry. It might therefore be some consolation to Leonard Skeggs to know that, in addition to bearing a name with a long and fascinating history, he created a revolution in wet chemistry that is still with us today.

Technicon

The AutoAnalyzer is an automated analyzer using a flow technique called continuous flow analysis (CFA), first made by the Technicon Corporation. The instrument was invented in 1957 by Leonard Skeggs, PhD, and commercialized by Jack Whitehead’s Technicon Corporation. The first applications were for clinical analysis, but methods for industrial analysis soon followed. The design is based on separating a continuously flowing stream with air bubbles.

In continuous flow analysis (CFA), a continuous stream of material is divided by air bubbles into discrete segments in which chemical reactions occur. The continuous stream of liquid samples and reagents is combined and transported in tubing and mixing coils. The tubing passes the samples from one apparatus to the next, each performing a different function, such as distillation, dialysis, extraction, ion exchange, heating, incubation, and, finally, recording of a signal. An essential principle of the system is the introduction of air bubbles. The bubbles segment each sample into discrete packets and act as barriers between packets to prevent cross-contamination as they travel down the length of the tubing. They also assist mixing by creating turbulent (bolus) flow, and they give operators a quick and easy visual check of the flow characteristics of the liquid. Samples and standards are treated identically as they travel the length of the tubing, which eliminates the need to reach a steady-state signal. However, because the bubbles create an almost square-wave profile, bringing the system to steady state does not significantly reduce throughput (third-generation CFA analyzers average 90 or more samples per hour), and it is desirable because steady-state signals (chemical equilibrium) are more accurate and reproducible.

A continuous flow analyzer (CFA) consists of different modules, including a sampler, pump, mixing coils, optional sample-treatment stages (dialysis, distillation, heating, etc.), a detector, and a data generator. Most continuous flow analyzers depend on color reactions measured with a flow-through photometer, although methods have also been developed that use ion-selective electrodes (ISE), flame photometry, ICAP, fluorometry, and so forth.
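To see why the air segmentation matters, consider a toy carryover model: without bubbles, a fraction of each sample lingers in the tubing and contaminates the next reading, while segmentation sharply reduces that fraction. The sketch below is a deliberately simplified illustration with assumed carryover fractions, not a model of any real analyzer.

```python
# Toy model of sample-to-sample carryover in a flowing stream.
# The carryover fractions are assumed for illustration only.

def observed_signals(true_concentrations, carryover):
    """Each observed value mixes in a fraction of the previous reading."""
    observed, previous = [], 0.0
    for c in true_concentrations:
        mixed = (1 - carryover) * c + carryover * previous
        observed.append(mixed)
        previous = mixed
    return observed

samples = [100.0, 5.0, 100.0, 5.0]                        # alternating high/low samples
unsegmented = observed_signals(samples, carryover=0.10)   # assumed 10% carryover, no bubbles
segmented = observed_signals(samples, carryover=0.01)     # assumed 1% carryover with segmentation
print("true:       ", samples)
print("unsegmented:", [round(x, 1) for x in unsegmented])
print("segmented:  ", [round(x, 1) for x in segmented])
```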

Flow injection analysis (FIA) was introduced in 1975 by Ruzicka and Hansen.
Jaromir (Jarda) Ruzicka is Professor of Chemistry (Emeritus at the University of Washington and Affiliate at the University of Hawaii) and a member of the Danish Academy of Technical Sciences. Born in Prague in 1934, he graduated from the Department of Analytical Chemistry, Faculty of Sciences, Charles University. In 1968, when Soviet forces occupied Czechoslovakia, he emigrated to Denmark. There he joined the Technical University of Denmark, where, ten years later, he received a newly created Chair in Analytical Chemistry. When Jarda met Elo Hansen, they invented Flow Injection.

The first generation of FIA technology, termed flow injection (FI), was inspired by the AutoAnalyzer technique invented by Skeggs in the early 1950s. While Skeggs’ AutoAnalyzer uses air segmentation to divide a flowing stream into numerous discrete segments, establishing a long train of individual samples moving through a flow channel, FIA systems separate each sample from the next with a carrier reagent. And while the AutoAnalyzer mixes sample homogeneously with reagents, in all FIA techniques sample and reagents merge to form a controlled concentration gradient that yields the analytical result.
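Ruzicka and Hansen characterized that controlled concentration gradient with a dispersion coefficient, D = C0 / Cmax, the ratio of the injected concentration to the peak concentration reaching the detector. A short sketch of that calculation follows; the concentration values are illustrative assumptions, not published data.

```python
# Dispersion coefficient in flow injection analysis: D = C0 / Cmax,
# where C0 is the injected concentration and Cmax the detected peak.
# The numbers below are illustrative assumptions.

def dispersion_coefficient(c_injected, c_peak):
    if c_peak <= 0:
        raise ValueError("peak concentration must be positive")
    return c_injected / c_peak

c0 = 10.0    # mmol/L injected (assumed)
cmax = 4.0   # mmol/L at the detector peak (assumed)
D = dispersion_coefficient(c0, cmax)
# Conventionally, D of 1-3 is "limited", 3-10 "medium", >10 "large" dispersion.
print(f"D = {D:.1f}")
```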

Arthur Karmen

Dr. Karmen was born in New York City in 1930. He graduated from the Bronx High School of Science in 1946 and earned an A.B. and M.D. in 1950 and 1954, respectively, from New York University. In 1952, while a medical student working on a summer project at Memorial-Sloan Kettering, he used paper chromatography of amino acids to demonstrate the presence of glutamic-oxaloacetic and glutamic-pyruvic transaminases (aspartate and alanine aminotransferases) in serum and blood. In 1954, he devised the spectrophotometric method for measuring aspartate aminotransferase in serum, which, with minor modifications, is still used for diagnostic testing today. When developing this assay, he studied the reaction of NADH with serum and demonstrated the presence of lactate and malate dehydrogenases, both of which were also later used in diagnosis. Using the spectrophotometric method, he found that aspartate aminotransferase increased in the period immediately after an acute myocardial infarction and did the pilot studies that showed its diagnostic utility in heart and liver diseases. In the diagnosis of acute myocardial infarction, the assay became as important as the EKG; it was later displaced in cardiology by the MB isoenzyme of creatine kinase, driven by Burton Sobel’s work on infarct size, and subsequently by the troponins.
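Karmen's kinetic approach follows the oxidation of NADH at 340 nm: as the coupled malate dehydrogenase reaction consumes NADH, absorbance falls, and the rate of that fall is converted to enzyme activity with the Beer-Lambert law and the NADH millimolar absorptivity (about 6.22 L·mmol⁻¹·cm⁻¹). The sketch below shows that routine conversion; the assay volumes and absorbance slope are hypothetical examples, not values from Karmen's paper.

```python
# Converting a 340 nm absorbance decrease into AST activity (U/L).
# Uses the Beer-Lambert relation with the NADH millimolar absorptivity.
# Assay volumes and the absorbance slope below are hypothetical examples.

EPSILON_NADH = 6.22   # L * mmol^-1 * cm^-1 at 340 nm
LIGHT_PATH_CM = 1.0

def ast_activity_u_per_l(delta_a_per_min, total_vol_ml, sample_vol_ml):
    """Activity (U/L) = (dA/min * Vt * 1000) / (epsilon * d * Vs)."""
    return (delta_a_per_min * total_vol_ml * 1000.0) / (
        EPSILON_NADH * LIGHT_PATH_CM * sample_vol_ml
    )

# Hypothetical run: absorbance falls 0.015/min, 3.0 mL reaction mix, 0.2 mL serum
activity = ast_activity_u_per_l(0.015, total_vol_ml=3.0, sample_vol_ml=0.2)
print(f"AST activity ~ {activity:.0f} U/L")
```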


Nathan Gochman: Developer of Automated Chemistries

Nathan Gochman, PhD, has over 40 years of experience in the clinical diagnostics industry. This includes academic teaching and research, and 30 years in the pharmaceutical and in vitro diagnostics industry. He has managed R & D, technical marketing and technical support departments. As a leader in the industry he was President of the American Association for Clinical Chemistry (AACC) and the National Committee for Clinical Laboratory Standards (NCCLS, now CLSI). He is currently a Consultant to investment firms and IVD companies.

William Sunderman

A doctor and scientist who lived a remarkable century and beyond — making medical advances, playing his Stradivarius violin at Carnegie Hall at 99 and being honored as the nation’s oldest worker at 100.

He developed a method for measuring glucose in the blood, the Sunderman Sugar Tube, and was one of the first doctors to use insulin to bring a patient out of a diabetic coma. He established quality-control techniques for medical laboratories that ended the wide variation in the results of laboratories doing the same tests.

He taught at several medical schools and founded and edited the journal Annals of Clinical and Laboratory Science. In World War II, he was a medical director for the Manhattan Project, which developed the atomic bomb.

Dr. Sunderman was president of the American Society of Clinical Pathologists and a founding governor of the College of American Pathologists. He also helped organize the Association of Clinical Scientists and was its first president.

Yale Department of Laboratory Medicine

The roots of the Department of Laboratory Medicine at Yale can be traced back to John Peters, the head of what he called the “Chemical Division” of the Department of Internal Medicine, subsequently known as the Section of Metabolism, who co-authored with Donald Van Slyke the landmark 1931 textbook Quantitative Clinical Chemistry; and to Pauline Hald, research collaborator of Dr. Peters who subsequently served as Director of Clinical Chemistry at Yale-New Haven Hospital for many years. In 1947, Miss Hald reported the very first flame photometric measurements of sodium and potassium in serum. This study helped to lay the foundation for modern studies of metabolism and their application to clinical care.

The Laboratory Medicine program at Yale had its inception in 1958 as a section of Internal Medicine under the leadership of David Seligson. In 1965, Laboratory Medicine achieved autonomous section status and in 1971, became a full-fledged academic department. Dr. Seligson, who served as the first Chair, pioneered modern automation and computerized data processing in the clinical laboratory. In particular, he demonstrated the feasibility of discrete sample handling for automation that is now the basis of virtually all automated chemistry analyzers. In addition, Seligson and Zetner demonstrated the first clinical use of atomic absorption spectrophotometry. He was one of the founding members of the major Laboratory Medicine academic society, the Academy of Clinical Laboratory Physicians and Scientists.

The discipline of clinical chemistry and the broader field of laboratory medicine, as they are practiced today, are attributed in no small part to Seligson’s vision and creativity.

Born in Philadelphia in 1916, Seligson graduated from the University of Maryland and received a D.Sc. from Johns Hopkins University and an M.D. from the University of Utah. In 1953, he served as a captain in the U.S. Army and as chief of the Hepatic and Metabolic Disease Laboratory at Walter Reed Army Medical Center.

Recruited to Yale and Grace-New Haven Hospital in 1958 from the University of Pennsylvania as professor of internal medicine at the medical school and the first director of clinical laboratories at the hospital, Seligson subsequently established the infrastructure of the Department of Laboratory Medicine, creating divisions of clinical chemistry, microbiology, transfusion medicine (blood banking) and hematology – each with its own strong clinical, teaching and research programs.

Challenging the continuous flow approach, Seligson designed, built and validated “discrete sample handling” instruments wherein each sample was treated independently, which allowed better choice of methods and greater efficiency. Today continuous flow has essentially disappeared and virtually all modern automated clinical laboratory instruments are based upon discrete sample handling technology.

Seligson was one of the early visionaries who recognized the potential for computers in the clinical laboratory. One of the first applications of a digital computer in the clinical laboratory occurred in Seligson’s department at Yale, and shortly thereafter data were being transmitted directly from the laboratory computer to data stations on the patient wards. Now, such laboratory information systems represent the standard of care.

He was also among the first to highlight the clinical importance of test specificity and accuracy, as distinct from simple reproducibility. One of his favorite slides showed almost perfectly reproducible results for 10 successive measurements of blood sugar obtained with what was then the most widely used and popular analytical instrument. The results, he would note, were nonetheless wrong: the assay was reproducible but not accurate.
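Seligson's distinction is easy to make concrete: a method can return nearly identical numbers every time (high precision) while every one of them is biased away from the true value (poor accuracy). The sketch below uses made-up glucose values purely to illustrate the point.

```python
# Precision vs. accuracy: ten tightly clustered results can still all be wrong.
# The reference value and the repeated measurements are made-up illustrations.
from statistics import mean, stdev

reference = 100.0                       # mg/dL, assumed "true" glucose value
repeats = [121.8, 122.1, 121.9, 122.0, 122.2,
           121.7, 122.1, 121.9, 122.0, 122.3]   # ten repeat measurements

precision_sd = stdev(repeats)           # spread around their own mean
bias = mean(repeats) - reference        # systematic error vs. the true value
print(f"SD = {precision_sd:.2f} mg/dL (highly reproducible)")
print(f"bias = {bias:+.1f} mg/dL (but inaccurate)")
```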

Seligson established one of the nation’s first residency programs focused on laboratory medicine or clinical pathology, and also developed a teaching curriculum in laboratory medicine for medical students. In so doing, he created a model for the modern practice of laboratory medicine in an academic environment, and his trainees spread throughout the country as leaders in the field.

Ernest Cotlove

Ernest Cotlove’s scientific and medical career started at NYU where, after completing his medical studies in 1943, he pursued studies in renal physiology and chemistry. His outstanding ability to acquire knowledge and conduct innovative investigations earned him an invitation from James Shannon, then Director of the National Heart Institute at NIH. He continued studies of renal physiology and chemistry until 1953, when he became Head of the Clinical Chemistry Laboratories in the new Department of Clinical Pathology being developed by George Z. Williams during the construction of the NIH Clinical Center. Dr. Cotlove seized the opportunity to design and equip the most advanced and functional clinical chemistry facility in the country.

Dr. Cotlove’s career exemplified the progress seen in medical research and technology. He designed the electronic chloridometer that bears his name, in spite of published reports that such an approach was theoretically impossible. He used this innovative skill to develop new instruments and methods at the Clinical Center. Many recognized him as an expert in clinical chemistry, computer programming, systems design for laboratory operations, and automation of analytical instruments.

Effects of Automation on Laboratory Diagnosis

George Z. Williams

There are four primary effects of laboratory automation on the practice of medicine: The range of laboratory support is being greatly extended to both diagnosis and guidance of therapeutic management; the new feasibility of multiphasic periodic health evaluation promises effective health and manpower conservation in the future; and substantially lowered unit cost for laboratory analysis will permit more extensive use of comprehensive laboratory medicine in everyday practice. There is, however, a real and growing danger of naive acceptance of and overconfidence in the reliability and accuracy of automated analysis and computer processing without critical evaluation. Erroneous results can jeopardize the patient’s welfare. Every physician has the responsibility to obtain proof of accuracy and reliability from the laboratories which serve his patients.

Mario Werner

Dr. Werner received his medical degree from the University of Zurich, Switzerland, in 1956. After specializing in internal medicine at the University Clinic in Basel, he came to the United States, as a fellow of the Swiss Academy of Medical Sciences, to work at NIH and at the Rockefeller University. From 1964 to 1966, he served as chief of the Central Laboratory at the Klinikum Essen, Ruhr-University, Germany. In 1967, he returned to the US, joining the Division of Clinical Pathology and Laboratory Medicine at the University of California, San Francisco, as an assistant professor. Three years later, he became Associate Professor of Pathology and Laboratory Medicine at Washington University in St. Louis, where he was instrumental in establishing the training program in laboratory medicine. In 1972, he was appointed Professor of Pathology at The George Washington University in Washington, DC.

Norbert Tietz

Professor Norbert W. Tietz received the degree of Doctor of Natural Sciences from the Technical University Stuttgart, Germany, in 1950. In 1954 he immigrated to the United States where he subsequently held positions or appointments at several Chicago area institutions including the Mount Sinai Hospital Medical Center, Chicago Medical School/University of Health Sciences and Rush Medical College.

Professor Tietz is best known as the editor of the Fundamentals of Clinical Chemistry. This book, now in its sixth edition, remains a primary information source for both students and educators in laboratory medicine. It was the first modern textbook to integrate clinical chemistry with the basic sciences and pathophysiology.

Throughout his career, Dr. Tietz taught a range of students from the undergraduate through the postgraduate level, including (1) medical technology students, (2) medical students, (3) clinical chemistry graduate students, (4) pathology residents, and (5) practicing chemists. For example, in the late 1960s he began the first Master of Science degree program in clinical chemistry in the United States at the Chicago Medical School. This program subsequently evolved into one of the first Ph.D. programs in clinical chemistry.

Automation and other recent developments in clinical chemistry.

Griffiths J.

http://www.ncbi.nlm.nih.gov/pubmed/1344702

The decade 1980 to 1990 was the most progressive period in the short, but turbulent, history of clinical chemistry. New techniques and the instrumentation needed to perform assays have opened a chemical Pandora’s box. Multichannel analyzers, the spectrophotometric backbone of automated laboratories, have become almost perfect. The extended use of the antigen–monoclonal antibody reaction with increasingly sensitive labels has extended analyte detection routinely into the picomole/liter range. Devices that aid the automation of serum processing and distribution of specimens are emerging. Laboratory computerization has significantly matured, permitting better integration of laboratory instruments, improving communication between laboratory personnel and the patient’s physician, and facilitating the use of expert systems and robotics in the chemistry laboratory.

Automation and Expert Systems in a Core Clinical Chemistry Laboratory
Streitberg, GT, et al.  JALA 2009;14:94–105

Clinical pathology or laboratory medicine has a great influence on clinical decisions, and 60–70% of the most important decisions on admission, discharge, and medication are based on laboratory results. As we learn more about clinical laboratory results and incorporate them in outcome optimization schemes, the laboratory will play a more pivotal role in the management of patients and their eventual outcomes. It has been stated that the development of information technology and automation in laboratory medicine has allowed laboratory professionals to keep pace with the growth in workload.

The reasons to automate and the impacts of automation are similar: reduction in errors, increased productivity, and improved safety. Advances in clinical chemistry technology, including total laboratory automation, call for changes in job responsibilities to include skills in information technology, data management, instrumentation, patient preparation for diagnostic analysis, interpretation of pathology results, dissemination of knowledge and information to patients and other health staff, as well as skills in research.

The clinical laboratory has become so productive, particularly in chemistry and immunology, and its labor, instrument, and reagent costs so well determined, that today a physician’s medical decisions are said to be 80% determined by the clinical laboratory. Medical information systems have lagged far behind. Why is that? Because the decision to purchase an MIS has historically been based on billing capture. Moreover, the historical use of chemical profiles was quite good at validating health status in an outpatient population, but the profiles became restricted under Diagnostic Related Groups. Thus it came to be that diagnostics were considered a “commodity.” To remain competitive, a laboratory had to provide “high complexity” tests that were drawn in by a large volume of “moderate complexity” tests.
