Science Has A Systemic Problem, Not an Innovation Problem

Curator: Stephen J. Williams, Ph.D.

A recent email asking me to submit a survey got me thinking about the malaise that scientists and industry professionals frequently bemoan: that innovation has been stymied for some reason, and that all sorts of convoluted processes must be altered to spur this mythical void of great new discoveries….. and it got me thinking about our current state of science, what the perceived issue is, and whether this desert of innovation actually exists or is instead a more fundamental problem we have created.

The email was from an NIH committee asking for opinions on recreating the grant review process…. this arrived on the same day someone complained to me about a shoddy and perplexing grant review they had received.

The following email was sent out to multiple researchers involved on either side of NIH grant review, as well as to those who had been involved in previous questionnaires and studies on grant review and bias. The email asked researchers to fill out a survey on the grant review process and how best to change it to increase both innovation of ideas and inclusivity. In recent years there have been multiple survey requests on these matters, accompanied by multiple confusing procedural changes to grant format and content requirements, adding more administrative burden to scientists.

The email from the Center for Scientific Review (one of the divisions a grant will go to before review; it sets up the review study sections and decides which section a grant should be assigned to) was as follows:

Update on Simplifying Review Criteria: A Request for Information

https://www.csr.nih.gov/reviewmatters/2022/12/08/update-on-simplifying-review-criteria-a-request-for-information/

NIH has issued a request for information (RFI) seeking feedback on revising and simplifying the peer review framework for research project grant applications. The goal of this effort is to facilitate the mission of scientific peer review – identification of the strongest, highest-impact research. The proposed changes will allow peer reviewers to focus on scientific merit by evaluating 1) the scientific impact, research rigor, and feasibility of the proposed research without the distraction of administrative questions and 2) whether or not appropriate expertise and resources are available to conduct the research, thus mitigating the undue influence of the reputation of the institution or investigator.

Currently, applications for research project grants (RPGs, such as R01s, R03s, R15s, R21s, R34s) are evaluated based on five scored criteria: Significance, Investigators, Innovation, Approach, and Environment (derived from NIH peer review regulations 42 C.F.R. Part 52h.8; see Definitions of Criteria and Considerations for Research Project Grant Critiques for more detail) and a number of additional review criteria such as Human Subject Protections.

NIH gathered input from the community to identify potential revisions to the review framework. Given longstanding and often-heard concerns from diverse groups, CSR decided to form two working groups to the CSR Advisory Council—one on non-clinical trials and one on clinical trials. To inform these groups, CSR published a Review Matters blog, which was cross-posted on the Office of Extramural Research blog, Open Mike. The blog received more than 9,000 views by unique individuals and over 400 comments. Interim recommendations were presented to the CSR Advisory Council in a public forum (March 2020 video, slides; March 2021 video, slides). Final recommendations from the CSRAC (report) were considered by the major extramural committees of the NIH that included leadership from across NIH institutes and centers. Additional background information can be found here. This process produced many modifications and the final proposal presented below. Discussions are underway to incorporate consideration of a Plan for Enhancing Diverse Perspectives (PEDP) and rigorous review of clinical trials RPGs (~10% of RPGs are clinical trials) within the proposed framework.

Simplified Review Criteria

NIH proposes to reorganize the five review criteria into three factors, with Factors 1 and 2 receiving a numerical score. Reviewers will be instructed to consider all three factors (Factors 1, 2 and 3) in arriving at their Overall Impact Score (scored 1-9), reflecting the overall scientific and technical merit of the application.

  • Factor 1: Importance of the Research (Significance, Innovation), numerical score (1-9)
  • Factor 2: Rigor and Feasibility (Approach), numerical score (1-9)
  • Factor 3: Expertise and Resources (Investigator, Environment), assessed and considered in the Overall Impact Score, but not individually scored

Within Factor 3 (Expertise and Resources), Investigator and Environment will be assessed in the context of the research proposed. Investigator(s) will be rated as “fully capable” or “additional expertise/capability needed”. Environment will be rated as “appropriate” or “additional resources needed.” If a need for additional expertise or resources is identified, written justification must be provided. Detailed descriptions of the three factors can be found here.
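
For illustration, the sketch below (a hypothetical Python data model, not any actual NIH or CSR system) captures the structure described above: two numerically scored factors, a third factor rated qualitatively, and a written justification required only when a need is flagged.

```python
from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class SimplifiedReview:
    """Illustrative model of the proposed three-factor framework (not an NIH system)."""
    factor1_importance: int         # Factor 1: Significance + Innovation, scored 1-9
    factor2_rigor_feasibility: int  # Factor 2: Approach, scored 1-9
    overall_impact: int             # reviewer's Overall Impact Score, 1-9, reflecting Factors 1-3
    investigator_rating: Literal["fully capable",
                                 "additional expertise/capability needed"]
    environment_rating: Literal["appropriate", "additional resources needed"]
    factor3_justification: Optional[str] = None  # required only if a need is flagged

    def __post_init__(self):
        for score in (self.factor1_importance, self.factor2_rigor_feasibility, self.overall_impact):
            if not 1 <= score <= 9:
                raise ValueError("scores use the 1-9 scale")
        needs_flagged = (self.investigator_rating != "fully capable"
                         or self.environment_rating != "appropriate")
        if needs_flagged and not self.factor3_justification:
            raise ValueError("written justification required when additional expertise or resources are identified")
```

For example, `SimplifiedReview(3, 2, 2, "fully capable", "appropriate")` would represent an application judged strong on both scored factors with no Factor 3 concerns flagged.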

Now, looking at the comments, some were very illuminating:

I strongly support streamlining the five current main review criteria into three, and the present five additional criteria into two. This will bring clarity to applicants and reduce the workload on both applicants and reviewers. Blinding reviewers to the applicants’ identities and institutions would be a helpful next step, and would do much to reduce the “rich-getting-richer” / “good ole girls and good ole boys” / “big science” elitism that plagues the present review system, wherein pedigree and connections often outweigh substance and creativity.

I support the proposed changes. The shift away from “innovation” will help reduce the tendency to create hype around a proposed research direction. The shift away from Investigator and Environment assessments will help reduce bias toward already funded investigators in large well-known institutions.

As a reviewer for 5 years, I believe that the proposed changes are a step in the right direction, refocusing the review on whether the science SHOULD be done and whether it CAN BE DONE WELL, while eliminating burdensome and unhelpful sections of review that are better handled administratively. I particularly believe that the de-emphasis of innovation (which typically focuses on technical innovation) will improve evaluation of the overall science, and de-emphasis of review of minor technical details will, if implemented correctly, reduce the “downward pull” on scores for approach. The above comments reference blinded reviews, but I did not see this in the proposed recommendations. I do not believe this is a good idea for several reasons: 1) Blinding of the applicant and institution is not likely feasible for many of the reasons others have described (e.g., self-referencing of prior work), 2) Blinding would eliminate the potential to review investigators’ biosketches and budget justifications, which are critically important in review, 3) Making review blinded would make determination of conflicts of interest harder to identify and avoid, 4) Evaluation of “Investigator and Environment” would be nearly impossible.

Most of the comments were in favor of the proposed changes; however, many admitted that the proposal adds further confusion on top of the many recent administrative changes to the format and content of grant sections.

Being a Stephen Covey devotee, and having just listened to The 4 Disciplines of Execution, I realized that the issues hindering many great ideas from coming to fruition, especially in science, are the result of systemic problems in the process, not problems at the level of individual researchers or small companies trying to get their innovations funded or noticed. In summary, Dr. Covey states that most issues related to the success of any initiative lie NOT in the strategic planning, but in the failure to adhere to a few EXECUTION principles. Primary among these failures of strategic plans is a lack of accounting for what Dr. Covey calls the ‘whirlwind’: those important but recurring tasks that take us away from achieving the wildly important goals. In addition, a failure to determine lead and lag measures of success hinders such plans.

In this case, the lag measure is INNOVATION. It appears we have created such a whirlwind, and such a focus on lag measures, that we are incapable of translating great discoveries into INNOVATION.

In the following post, I will focus on how issues relating to Open Access publishing and the dissemination of scientific discovery may be costing us TIME to INNOVATION. And it appears that there are systemic reasons why we seem stuck in a rut, so to speak.

The first indication is from a paper published by Johan Chu and James Evans in 2021 in PNAS:

 

Slowed canonical progress in large fields of science

Chu JSG, Evans JA. Slowed canonical progress in large fields of science. Proc Natl Acad Sci U S A. 2021 Oct 12;118(41):e2021636118. doi: 10.1073/pnas.2021636118. PMID: 34607941; PMCID: PMC8522281

 

Abstract

In many academic fields, the number of papers published each year has increased significantly over time. Policy measures aim to increase the quantity of scientists, research funding, and scientific output, which is measured by the number of papers produced. These quantitative metrics determine the career trajectories of scholars and evaluations of academic departments, institutions, and nations. Whether and how these increases in the numbers of scientists and papers translate into advances in knowledge is unclear, however. Here, we first lay out a theoretical argument for why too many papers published each year in a field can lead to stagnation rather than advance. The deluge of new papers may deprive reviewers and readers the cognitive slack required to fully recognize and understand novel ideas. Competition among many new ideas may prevent the gradual accumulation of focused attention on a promising new idea. Then, we show data supporting the predictions of this theory. When the number of papers published per year in a scientific field grows large, citations flow disproportionately to already well-cited papers; the list of most-cited papers ossifies; new papers are unlikely to ever become highly cited, and when they do, it is not through a gradual, cumulative process of attention gathering; and newly published papers become unlikely to disrupt existing work. These findings suggest that the progress of large scientific fields may be slowed, trapped in existing canon. Policy measures shifting how scientific work is produced, disseminated, consumed, and rewarded may be called for to push fields into new, more fertile areas of study.

So the summary of this paper is:

  • The authors examined 1.8 billion citations among 90 million papers across 241 subjects
  • They found that the growing corpus of papers does not lead to turnover of new ideas in a field, but rather to the ossification or entrenchment of canonical (older) ideas
  • This is mainly because older papers are cited more frequently than new papers with new ideas, potentially because authors are trying to get their own papers cited more often for funding and exposure purposes
  • The authors suggest that “fundamental progress may be stymied if quantitative growth of scientific endeavors is not balanced by structures fostering disruptive scholarship and focusing attention on novel ideas”

The authors note that, in most cases, science policy reinforces this “more is better” philosophy, where the metrics of publication productivity are either the number of publications or impact as measured by citation rankings. However, using an analysis of citation changes in large versus smaller fields, it becomes apparent that this process favors older, more established papers and a recirculation of older canonical ideas.
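
As a rough illustration of the kind of citation-concentration analysis Chu and Evans perform, the sketch below uses synthetic data (not their 1.8-billion-citation corpus) and a simple rich-get-richer citation process to show how a purely preferential mechanism concentrates attention on already well-cited papers.

```python
import random

def top_share(citation_counts, top_frac=0.01):
    """Fraction of all citations received by the top `top_frac` most-cited papers."""
    ranked = sorted(citation_counts, reverse=True)
    k = max(1, int(len(ranked) * top_frac))
    total = sum(ranked)
    return sum(ranked[:k]) / total if total else 0.0

# Synthetic field grown by preferential attachment: each new citation goes to a
# paper with probability proportional to its current citation count, the
# rich-get-richer mechanism the authors argue entrenches canonical papers.
random.seed(0)
citations = [1] * 1000            # 1,000 papers, each seeded with one citation
for _ in range(50_000):           # 50,000 new citations arrive over time
    idx = random.choices(range(len(citations)), weights=citations, k=1)[0]
    citations[idx] += 1

print(f"Top 1% of papers receive {top_share(citations):.1%} of all citations")
```

The printed share is only for the simulated field; the paper's empirical point is that real large fields behave in this rich-get-richer way, so the list of most-cited papers ossifies.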

“Rather than resulting in faster turnover of field paradigms, the massive amounts of new publications entrenches the ideas of top-cited papers.” New ideas are pushed to the bottom of the citation list and potentially lost in the literature. The authors suggest that this problem will intensify as the “annual mass” of new publications in each field grows, especially in large fields. The issue is exacerbated by the deluge of new online ‘open access’ journals, in which authors tend to focus on citing the most highly cited literature.

We may be at a critical juncture where, if many papers are published in a short time, new ideas will not be considered as carefully as older ideas. In addition,

the proliferation of journals and the blurring of journal hierarchies due to online article-level access can exacerbate this problem.

As a counterpoint, the authors do note that even though many of molecular biology’s most highly cited articles were published in 1976, there has been a great deal of innovation since then; however, it may now take many more experiments and much more money to reach the citation levels those papers attained, and hence scientific productivity appears lower.

This issue is seen in the field of economics as well:

Ellison, Glenn. “Is peer review in decline?” Economic Inquiry, vol. 49, no. 3, July 2011, pp. 635+. Gale Academic OneFile, link.gale.com/apps/doc/A261386330/AONE?u=temple_main&sid=bookmark-AONE&xid=f5891002. Accessed 12 Dec. 2022.

Abstract

Over the past decade, there has been a decline in the fraction of papers in top economics journals written by economists from the highest-ranked economics departments. This paper documents this fact and uses additional data on publications and citations to assess various potential explanations. Several observations are consistent with the hypothesis that the Internet improves the ability of high-profile authors to disseminate their research without going through the traditional peer-review process. (JEL A14, O30)

The facts part of this paper documents two main facts:

1. Economists in top-ranked departments now publish very few papers in top field journals. There is a marked decline in such publications between the early 1990s and early 2000s.

2. Comparing the early 2000s with the early 1990s, there is a decline in both the absolute number of papers and the share of papers in the top general interest journals written by Harvard economics department faculty.

Although the second fact just concerns one department, I see it as potentially important to understanding what is happening because it comes at a time when Harvard is widely regarded (I believe correctly) as having ascended to the top position in the profession.

The “decline-of-peer-review” theory I allude to in the title is that the necessity of going through the peer-review process has lessened for high-status authors: in the old days peer-reviewed journals were by far the most effective means of reaching readers, whereas with the growth of the Internet high-status authors can now post papers online and exploit their reputation to attract readers.

Many alternate explanations are possible. I focus on four theories: the decline-in-peer-review theory and three alternatives.

1. The trends could be a consequence of top-school authors’ being crowded out of the top journals by other researchers. Several such stories have an optimistic message, for example, there is more talent entering the profession, old pro-elite biases are being broken down, more schools are encouraging faculty to do cutting-edge research, and the Internet is enabling more cutting-edge research by breaking down informational barriers that had hampered researchers outside the top schools. (2)

2. The trends could be a consequence of the growth of revisions at economics journals discussed in Ellison (2002a, 2002b). In this more pessimistic theory, highly productive researchers must abandon some projects and/or seek out faster outlets to conserve the time now required to publish their most important works.

3. The trends could simply reflect that field journals have declined in quality in some relative sense and become a less attractive place to publish. This theory is meant to encompass also the rise of new journals, which is not obviously desirable or undesirable.

The majority of this paper is devoted to examining various data sources that provide additional details about how economics publishing has changed over the past decade. These are intended both to sharpen understanding of the facts to be explained and to provide tests of auxiliary predictions of the theories. Two main sources of information are used: data on publications and data on citations. The publication data include department-level counts of publications in various additional journals, an individual-level dataset containing records of publications in a subset of journals for thousands of economists, and a very small dataset containing complete data on a few authors’ publication records. The citation data include citations at the paper level for 9,000 published papers and less well-matched data that is used to construct measures of citations to authors’ unpublished works, to departments as a whole, and to various journals.

Inside Job or Deep Impact? Extramural Citations and the Influence of Economic Scholarship

Angrist J, Azoulay P, Ellison G, Hill R, Lu SF. Inside Job or Deep Impact? Extramural Citations and the Influence of Economic Scholarship. Journal of Economic Literature. 2020;58(1):3–52.

So, if the innovation is there but buried under a massive amount of heavily cited older literature, do we see evidence of this in other fields, like medicine?

Why Isn’t Innovation Helping Reduce Health Care Costs?

 
 

National health care expenditures (NHEs) in the United States continue to grow at rates outpacing the broader economy: Inflation- and population-adjusted NHEs have increased 1.6 percent faster than the gross domestic product (GDP) between 1990 and 2018. US national health expenditure growth as a share of GDP far outpaces comparable nations in the Organization for Economic Cooperation and Development (17.2 versus 8.9 percent).

Multiple recent analyses have proposed that growth in the prices and intensity of US health care services—rather than in utilization rates or demographic characteristics—is responsible for the disproportionate increases in NHEs relative to global counterparts. The consequences of ever-rising costs amid ubiquitous underinsurance in the US include price-induced deferral of care leading to excess morbidity relative to comparable nations.

These patterns exist despite a robust innovation ecosystem in US health care—implying that novel technologies, in isolation, are insufficient to bend the health care cost curve. Indeed, studies have documented that novel technologies directly increase expenditure growth.

Why is our prolific innovation ecosystem not helping reduce costs? The core issue relates to its apparent failure to enhance net productivity—the relative output generated per unit resource required. In this post, we decompose the concept of innovation to highlight situations in which inventions may not increase net productivity. We begin by describing how this issue has taken on increased urgency amid resource constraints magnified by the COVID-19 pandemic. In turn, we describe incentives for the pervasiveness of productivity-diminishing innovations. Finally, we provide recommendations to promote opportunities for low-cost innovation.
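
To make the net-productivity argument concrete, here is a minimal sketch with purely hypothetical numbers (not from the article): an innovation can improve the output numerator yet still lower system-level productivity if it inflates the resource denominator more.

```python
def net_productivity(output_units: float, resource_cost: float) -> float:
    """Net productivity = relative output generated per unit resource required."""
    return output_units / resource_cost

# Hypothetical baseline care model: 100 outcome units for $1.0M of resources.
baseline = net_productivity(100, 1.0)

# Hypothetical additive, performance-enhancing innovation: +10% outcomes,
# but it layers onto the existing workflow and adds 30% more resources.
additive = net_productivity(110, 1.3)

# Hypothetical substitutive, cost-reducing innovation: same outcomes,
# but it replaces part of the workflow and trims 20% of resources.
substitutive = net_productivity(100, 0.8)

print(f"baseline:     {baseline:.1f} outcome units per $M")
print(f"additive:     {additive:.1f}  (productivity falls despite better outcomes)")
print(f"substitutive: {substitutive:.1f} (productivity rises at equal outcomes)")
```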

 

 

Net Productivity During The COVID-19 Pandemic

The issue of productivity-enhancing innovation is timely, as health care systems have been overwhelmed by COVID-19. Hospitals in Italy, New York City, and elsewhere have lacked adequate capital resources to care for patients with the disease, sufficient liquidity to invest in sorely needed resources, and enough staff to perform all of the necessary tasks.

The critical constraint in these settings is not technology: In fact, the most advanced technology required to routinely treat COVID-19—the mechanical ventilator—was invented nearly 100 years ago in response to polio (the so-called iron lung). Rather, the bottleneck relates to the total financial and human resources required to use the technology—the denominator of net productivity. The clinical implementation of ventilators has been illustrative: Health care workers are still required to operate ventilators on a nearly one-to-one basis, just like in the mid-twentieth century. 

High levels of resources required for implementation of health care technologies constrain the scalability of patient care—such as during respiratory disease outbreaks such as COVID-19. Thus, research to reduce health care costs is the same kind of research we urgently require to promote health care access for patients with COVID-19.

Types Of Innovation And Their Relationship To Expenditure Growth

The widespread use of novel medical technologies has been highlighted as a central driver of NHE growth in the US. We believe that the continued expansion of health care costs is largely the result of innovation that tends to have low productivity (exhibit 1). We argue that these archetypes—novel widgets tacked on to existing workflows to reinforce traditional care models—are exactly the wrong properties to reduce NHEs at the systemic level.

Exhibit 1: Relative productivity of innovation subtypes

Source: Authors’ analysis.

Content Versus Process Innovation

Content (also called technical) innovation refers to the creation of new widgets, such as biochemical agents, diagnostic tools, or therapeutic interventions. Contemporary examples of content innovation include specialty pharmaceuticals, molecular diagnostics, and advanced interventions and imaging.

These may be contrasted with process innovations, which address the organized sequences of activities that implement content. Classically, these include clinical pathways and protocols. They can address the delivery of care for acute conditions, such as central line infections, sepsis, or natural disasters. Alternatively, they can target chronic conditions through initiatives such as team-based management of hypertension and hospital-at-home models for geriatric care. Other processes include hiring staff, delegating labor, and supply chain management.

Performance-Enhancing Versus Cost-Reducing Innovation

Performance-enhancing innovations frequently create incremental outcome gains in diagnostic characteristics, such as sensitivity or specificity, or in therapeutic characteristics, such as biomarkers for disease status. Their performance gains often lead to higher prices compared to existing alternatives.  

Performance-enhancing innovations can be compared to “non-inferior” innovations capable of achieving outcomes approximating those of existing alternatives, but at reduced cost. Industries outside of medicine, such as the computing industry, have relied heavily on the ability to reduce costs while retaining performance.

In health care though, this pattern of innovation is rare. Since passage of the 2010 “Biosimilars” Act aimed at stimulating non-inferior innovation and competition in therapeutics markets, only 17 agents have been approved, and only seven have made it to market. More than three-quarters of all drugs receiving new patents between 2005 and 2015 were “reissues,” meaning they had already been approved, and the new patent reflected changes to the previously approved formula. Meanwhile, the costs of approved drugs have increased over time, at rates between 4 percent and 7 percent annually.

Moreover, the preponderance of performance-enhancing diagnostic and therapeutic innovations tend to address narrow patient cohorts (such as rare diseases or cancer subtypes), with limited clear clinical utility in broader populations. For example, the recently approved eculizumab is a monoclonal antibody approved for paroxysmal nocturnal hemoglobinuria—which affects 1 in 10 million individuals. At the time of its launch, eculizumab was priced at more than $400,000 per year, making it the most expensive drug in modern history. For clinical populations with no available alternatives, drugs such as eculizumab may be cost-effective, pending society’s willingness to pay, and morally desirable, given a society’s values. But such drugs are certainly not cost-reducing.

Additive Versus Substitutive Innovation

Additive innovations are those that append to preexisting workflows, while substitutive innovations reconfigure preexisting workflows. In this way, additive innovations increase the use of precedent services, whereas substitutive innovations decrease precedent service use.

For example, previous analyses have found that novel imaging modalities are additive innovations, as they tend not to diminish use of preexisting modalities. Similarly, novel procedures tend to incompletely replace traditional procedures. In the case of therapeutics and devices, off-label uses in disease groups outside of the approved indication(s) can prompt innovation that is additive. This is especially true, given that off-label prescriptions classically occur after approved methods are exhausted.

Eculizumab once again provides an illustrative example. As of February 2019, the drug had been used for 39 indications (it had been approved for three of those by that time), 69 percent of which lacked any form of evidence of real-world effectiveness. Meanwhile, the drug generated nearly $4 billion in sales in 2019. Again, these expenditures may be something for which society chooses to pay—but they are nonetheless additive, rather than substitutive.

Sustaining Versus Disruptive Innovation

Competitive market theory suggests that incumbents and disruptors innovate differently. Incumbents seek sustaining innovations capable of perpetuating their dominance, whereas disruptors pursue innovations capable of redefining traditional business models.

In health care, while disruptive innovations hold the potential to reduce overall health expenditures, often they run counter to the capabilities of market incumbents. For example, telemedicine can deliver care asynchronously, remotely, and virtually, but large-scale brick-and-mortar medical facilities invest enormous capital in the delivery of synchronous, in-house, in-person care (incentivized by facility fees).

The connection between incumbent business models and the innovation pipeline is particularly relevant given that 58 percent of total funding for biomedical research in the US is now derived from private entities, compared with 46 percent a decade prior. It follows that the growing influence of eminent private organizations may favor innovations supporting their market dominance—rather than innovations that are societally optimal.

Incentives And Repercussions Of High-Cost Innovation

Taken together, these observations suggest that innovation in health care is preferentially designed for revenue expansion rather than for cost reduction. While offering incremental improvements in patient outcomes, therefore creating theoretical value for society, these innovations rarely deliver incremental reductions in short- or long-term costs at the health system level.

For example, content-based, performance-enhancing, additive, sustaining innovations tend to add layers of complexity to the health care system—which in turn require additional administration to manage. The net result is employment growth in excess of outcome improvement, leading to productivity losses. This gap leads to continuously increasing overall expenditures in turn passed along to payers and consumers.

Nonetheless, high-cost innovations are incentivized across health care stakeholders (exhibit 2). From the supply side of innovation, for academic researchers, “breakthrough” and “groundbreaking” innovations constitute the basis for career advancement via funding and tenure. This is despite stakeholders’ frequent inability to generalize early successes to become cost-effective in the clinical setting. As previously discussed, the increasing influence of private entities in setting the medical research agenda is also likely to stimulate innovation benefitting single stakeholders rather than the system.

Exhibit 2: Incentives promoting low-value innovation

Source: Authors’ analysis adapted from Hofmann BM. Too much technology. BMJ. 2015 Feb 16.

From the demand side of innovation (providers and health systems), a combined allure (to provide “cutting-edge” patient care), imperative (to leave “no stone unturned” in patient care), and profit-motive (to amplify fee-for-service reimbursements) spur participation in a “technological arms-race.” The status quo thus remains as Clay Christensen has written: “Our major health care institutions…together overshoot the level of care actually needed or used by the vast majority of patients.”

Christensen’s observations have been validated during the COVID-19 epidemic, as treatment of the disease requires predominantly century-old technology. By continually adopting innovation that routinely overshoots the needs of most patients, layer by layer, health care institutions are accruing costs that quickly become the burden of society writ large.

Recommendations To Reduce The Costs Of Health Care Innovation

Henry Aaron wrote in 2002 that “…the forces that have driven up costs are, if anything, intensifying. The staggering fecundity of biomedical research is increasing…[and] always raises expenditures.” With NHEs spiraling ever-higher, urgency to “bend the cost curve” is mounting. Yet, since much biomedical innovation targets the “flat of the [productivity] curve,” alternative forms of innovation are necessary.

The shortcomings in net productivity revealed by the COVID-19 pandemic highlight the urgent need for redesign of health care delivery in this country, and reevaluation of the innovation needed to support it. Specifically, efforts supporting process redesign are critical to promote cost-reducing, substitutive innovations that can inaugurate new and disruptive business models.

Process redesign rarely involves novel gizmos, so much as rejiggering the wiring of, and connections between, existing gadgets. It targets operational changes capable of streamlining workflows, rather than technical advancements that complicate them. As described above, precisely these sorts of “frugal innovations” have led to productivity improvements yielding lower costs in other high-technology industries, such as the computing industry.

Shrank and colleagues recently estimated that nearly one-third of NHEs—almost $1 trillion—were due to preventable waste. Four of the six categories of waste enumerated by the authors—failure in care delivery, failure in care coordination, low-value care, and administrative complexity—represent ripe targets for process innovation, accounting for $610 billion in waste annually, according to Shrank.

Health systems adopting process redesign methods such as continuous improvement and value-based management have exhibited outcome enhancement and expense reduction simultaneously. Internal processes addressed have included supply chain reconfiguration, operational redesign, outlier reconciliation, and resource standardization.

Despite the potential of process innovation, focus on this area (often bundled into “health services” or “quality improvement” research) occupies only a minute fraction of wallet- or mind-share in the biomedical research landscape, accounting for 0.3 percent of research dollars in medicine. This may be due to a variety of barriers beyond minimal funding. One set of barriers is academic, relating to negative perceptions around rigor and a lack of outlets in which to publish quality improvement research. To achieve health care cost containment over the long term, this dimension of innovation must be destigmatized relative to more traditional manners of innovation by the funders and institutions determining the conditions of the research ecosystem.

Another set of barriers is financial: Innovations yielding cost reduction are less “reimbursable” than are innovations fashioned for revenue expansion. This is especially the case in a fee-for-service system where reimbursement is tethered to cost, which creates perverse incentives for health care institutions to overlook cost increases. However, institutions investing in low-cost innovation will be well-positioned in a rapidly approaching future of value-based care—in which the solvency of health care institutions will rely upon their ability to provide economically efficient care.

Innovating For Cost Control Necessitates Frugality Over Novelty

Restraining US NHEs represents a critical step toward health promotion. Innovation for innovation’s sake—that is content-based, incrementally effective, additive, and sustaining—is unlikely to constrain continually expanding NHEs.

In contrast, process innovation offers opportunities to reduce costs while maintaining high standards of patient care. As COVID-19 stress-tests health care systems across the world, the importance of cost control and productivity amplification for patient care has become apparent.

As such, frugality, rather than novelty, may hold the key to health care cost containment. Redesigning the innovation agenda to stem the tide of ever-rising NHEs is an essential strategy to promote widespread access to care—as well as high-value preventive care—in this country. In the words of investors across Silicon Valley: cost-reducing innovation is no longer a “nice-to-have,” but a “need-to-have” for the future of health and overall well-being in this country.

So Do We Need A New Way of Disseminating Scientific Information?  Can Curation Help?

We had high hopes for Science 2.0, in particular the smashing of data and knowledge silos. However, the digital age, along with 2.0 platforms, seems somehow to have exacerbated the problem. We are still critically short on analysis!



Old Science 1.0 is still the backbone of all scientific discourse, built on the massive body of experimental and review literature. However, that literature was in analog format, and we have since moved to a more accessible, digital, open-access format for both publications and raw data. Whereas 1.0 had an organizing structure, such as the Dewey decimal system and indexing, 2.0 made science more accessible and easier to search thanks to the newer digital formats. Yet both needed an organizing structure: for 1.0 it was the scientific method of data and literature organization, with libraries as the indexers; 2.0 relied mostly on an army of volunteers who had little incentive to co-curate and organize the findings and the massive literature.



The Internet and the Web are rapidly adopting a new “Web 3.0” format, with decentralized networks, enhanced virtual experiences, and greater interconnection between people. Here we start the discussion of what the move from Science 2.0, in which dissemination of scientific findings was revolutionized by piggybacking on Web 2.0 and social media, to a Science 3.0 format will look like. What will it involve, and which paradigms will be turned upside down?

We have discussed this in other posts such as

Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?

and

Curation Methodology – Digital Communication Technology to mitigate Published Information Explosion and Obsolescence in Medicine and Life Sciences

For years the pharmaceutical industry has toyed with the idea of creating innovation networks and innovation hubs.

It has been the main focus of whole conferences:

Tales from the Translational Frontier – Four Unique Approaches to Turning Novel Biology into Investable Innovations @BIOConvention #BIO2018

However, it still seems these strategies have not worked.

Is it because we did not have an Execution plan? Or we did not understand the lead measures for success?

Other Related Articles on this Open Access Scientific Journal Include:

Old Industrial Revolution Paradigm of Education Needs to End: How Scientific Curation Can Transform Education

Analysis of Utilizing LPBI Group’s Scientific Curation Platform as an Educational Tool: New Paradigm for Student Engagement

Global Alliance for Genomics and Health Issues Guidelines for Data Siloing and Sharing

Multiple Major Scientific Journals Will Fully Adopt Open Access Under Plan S

eScientific Publishing a Case in Point: Evolution of Platform Architecture Methodologies and of Intellectual Property Development (Content Creation by Curation) Business Model 


The Evolution of Clinical Chemistry in the 20th Century

Curator: Larry H. Bernstein, MD, FCAP

This is a subchapter in the series on developments in diagnostics in the period from 1880 to 1980.

Otto Folin: America’s First Clinical Biochemist

(Extracted from Samuel Meites, AACC History Division; Apr 1996)

Forward by Wendell T. Caraway, PhD.

The first introduction to Folin comes with the Folin-Wu protein-free filtrate, a technique for removing proteins from whole blood or plasma that resulted in water-clear solutions suitable for the determination of glucose, creatinine, uric acid, non-protein nitrogen, and chloride. The major active ingredient used in the precipitation of protein was sodium tungstate prepared “according to Folin”. Folin-Wu sugar tubes were used for the determination of glucose. From these and subsequent encounters, we learned that Folin was a pioneer in methods for the chemical analysis of blood. The determination of uric acid in serum was the Benedict method, in which protein-free filtrate was mixed with solutions of sodium cyanide and arsenophosphotungstic acid and then heated in a water bath to develop a blue color. A thorough review of the literature revealed that Folin and Denis had published, in 1912, a method for uric acid in which they used sodium carbonate rather than sodium cyanide; this was modified and largely superseded the “cyanide” method.

Notes from the author.

Modern clinical chemistry began with the application of 20th century quantitative analysis and instrumentation to measure constituents of blood and urine, and relating the values obtained to human health and disease. In the United States, the first impetus propelling this new area of biochemistry was provided by the 1912 papers of Otto Folin. The only precedent for these stimulating findings was his own earlier and certainly classic papers on the quantitative composition of urine, the laws governing its composition, and studies on the catabolic end products of protein, which led to his ingenious concept of endogenous and exogenous metabolism. He had already determined blood ammonia in 1902. This work preceded the entry of Stanley Benedict and Donald Van Slyke into biochemistry. Once all three of them were active contributors, the future of clinical biochemistry was ensured. Those who consult the early volumes of the Journal of Biological Chemistry will discover the direction that the work of Otto Folin gave to biochemistry. This modest, unobtrusive man of Harvard was a powerful stimulus and inspiration to others.

Quantitatively, in the years of his scientific productivity, 1897–1934, Otto Folin published 151 (+ 1) journal articles, including a chapter in Abderhalden’s handbook and one in Hammarsten’s Festschrift, but excluding his doctoral dissertation, his published abstracts, and several articles in the proceedings of the Association of Life Insurance Directors of America. He also wrote one monograph on food preservatives and produced five editions of his laboratory manual. He published four articles while studying in Europe (1896–98), 28 while at the McLean Hospital (1900–07), and 119 at Harvard (1908–34). In his banner year of 1912 he published 20 papers. His peak period from 1912–15 included 15 papers, the monograph, and most of the work on the first edition of his laboratory manual.

The quality of Otto Folin’s life’s work relates to its impact on biochemistry, particularly clinical biochemistry. Otto’s two brilliant collaborators, Willey Denis and Hsien Wu, must be acknowledged. Without Denis, Otto could not have achieved so rapidly the introduction and popularization of modern blood analysis in the U.S. It would be pointless to conjecture how far Otto would have progressed without this pair.

His work provided the basis of the modern approach to the quantitative analysis of blood and urine through improved methods that reduced the body fluid volume required for analysis. He also applied these methods to metabolic studies on tissues as well as body fluids. Because his interests lay in protein metabolism, his major contributions were directed toward measuring nitrogenous waste or end products. His most dramatic achievement is illustrated by the study of blood nitrogen retention in nephritis and gout.

Folin introduced colorimetry, turbidimetry, and the use of color filters into quantitative clinical biochemistry. He initiated and applied ingeniously conceived reagents and chemical reactions that paved the way for a host of studies by his contemporaries. He introduced the use of phosphomolybdate for detecting phenolic compounds, and phosphotungstate for uric acid. These, in turn, led to the quantitation of epinephrine, tyrosine, tryptophan, and cystine in protein. The molybdate suggested to Fiske and SubbaRow the determination of phosphate as phosphomolybdate, and the tungsten led to the use of tungstic acid as a protein precipitant. Phosphomolybdate became the key reagent in the blood sugar method. Folin resurrected the abandoned Jaffe reaction and established creatine and creatinine analysis. He also laid the groundwork for the discovery of creatine phosphate. Clinical chemistry owes to him the introduction of Nessler’s reagent, permutit, Lloyd’s reagent, gum ghatti, and preservatives for standards, such as benzoic acid and formaldehyde. Among his distinguished graduate investigators were Bloor, Doisy, Fiske, Shaffer, SubbaRow, Sumner, and Wu.

A Golden Age of Clinical Chemistry: 1948–1960

Louis Rosenfeld
Clinical Chemistry 2000; 46(10): 1705–1714

The 12 years from 1948 to 1960 were notable for introduction of the Vacutainer tube, electrophoresis, radioimmunoassay, and the Auto-Analyzer. Also appearing during this interval were new organizations, publications, programs, and services that established a firm foundation for the professional status of clinical chemists. It was a golden age.

Except for photoelectric colorimeters, the clinical chemistry laboratories in 1948—and in many places even later—were not very different from those of 1925. The basic technology and equipment were essentially unchanged. There was lots of glassware of different kinds—pipettes, burettes, wooden racks of test tubes, funnels, filter paper, cylinders, flasks, and beakers—as well as visual colorimeters, centrifuges, water baths, an exhaust hood for evaporating organic solvents after extractions, a microscope for examining urine sediments, a double-pan analytical beam balance for weighing reagents and standard chemicals, and perhaps a pH meter. The most complicated apparatus was the Van Slyke volumetric gas device—manually operated. The emphasis was on classical chemical and biological techniques that did not require instrumentation.

The unparalleled growth and wide-ranging research that began after World War II and have continued into the new century, often aided by government funding for biomedical research and development as civilian health has become a major national goal, have impacted the operations of the clinical chemistry laboratory. The years from 1948 to 1960 were especially notable for the innovative technology that produced better methods for the investigation of many diseases, in many cases leading to better treatment.

AUTOMATION IN CLINICAL CHEMISTRY: CURRENT SUCCESSES AND TRENDS FOR THE FUTURE

Pierangelo Bonini
Pure & Appl. Chem. 1982; 54(11): 2017–2030

The history of automation in clinical chemistry is the history of how and when technological progress in the field of analytical methodology, as well as in the field of instrumentation, has helped clinical chemists to mechanize their procedures and to control them.

Fig. 1. General steps of a clinical chemistry procedure:

1. Preliminary treatment (deproteinization)
2. Sample + reagent(s)
3. Incubation
4. Reading
5. Calculation
Especially in the classic clinical chemistry methods, a preliminary treatment of the sample (in most cases a deproteinization) was an essential step. This was a major constraint on the first tentative steps in automation, and we will see how this problem was faced and which new problems arose from avoiding deproteinization. Mixing samples and reagents is the next step; then there is a more or less long incubation at different temperatures; and finally reading, which means detection of modifications of some physical property of the mixture. In most cases the development of a colour reveals the reaction but, as is well known, many other possibilities exist. Finally the result is calculated.
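
Purely as an illustration of the workflow in Fig. 1 (a hypothetical rendering, not any particular analyzer's software), the steps can be read as a simple pipeline, with deproteinization as the preliminary treatment that early automation had to work around:

```python
def deproteinize(sample: dict) -> dict:
    """Step 1: preliminary treatment -- remove proteins that would interfere with the reaction."""
    return {**sample, "protein_free": True}

def add_reagent(sample: dict, reagent: str) -> dict:
    """Step 2: mix sample with reagent(s)."""
    return {**sample, "reagent": reagent}

def incubate(mixture: dict, minutes: float, temp_c: float) -> dict:
    """Step 3: incubate at a set temperature for a set time."""
    return {**mixture, "incubated": (minutes, temp_c)}

def read_absorbance(mixture: dict) -> float:
    """Step 4: reading -- detect a change in a physical property (here, colour/absorbance)."""
    return 0.42  # placeholder instrument reading

def calculate(absorbance: float, calibration_factor: float) -> float:
    """Step 5: calculation -- convert the raw reading into a concentration."""
    return absorbance * calibration_factor

# One classic (deproteinization-based) determination, end to end:
sample = {"id": "S-001", "analyte": "glucose"}
result = calculate(
    read_absorbance(incubate(add_reagent(deproteinize(sample), "colour reagent"),
                             minutes=10, temp_c=37)),
    calibration_factor=250.0,
)
print(f"{sample['analyte']} result: {result:.0f} (illustrative numbers only)")
```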

Some 25 years ago, Skeggs (1) presented his paper on continuous flow automation, which was the basis of very successful instruments still used all over the world. In continuous flow automation, the reactions take place in a hydraulic route common to all samples.

Standards and samples enter the analytical stream segmented by air bubbles and, as they circulate, specific chemical reactions and physical manipulations continuously take place in the stream. Finally, after the air bubbles are vented, the colour intensity, proportional to the solute molecules, is monitored in a detector flow cell.

It is evident that the most important aim of automation is to correctly process as many samples in as short a time as possible. This result can be obtained thanks to many technological advances, both in analytical methodology and in instrument technology.

ANALYTICAL METHODOLOGY:

  • Very active enzymatic reagents
  • Shorter reaction times
  • Kinetic and fixed-time reactions
  • No need for deproteinization
  • Surfactants
  • Automatic sample blank calculation
  • Polychromatic analysis

The introduction of very active enzymatic reagents for the determination of substrates resulted in shorter reaction times and, in many cases, the possibility of avoiding deproteinization. Reaction times are also reduced by using kinetic and fixed-time reactions instead of end points; in this case, the measurement of the sample blank does not need a separate tube with a separate reaction mixture. Deproteinization can also be avoided by using some surfactants in the reagent mixture. An automatic calculation of sample blanks is also possible by using polychromatic analysis. As we can see from this figure, reduction of reaction times and elimination of tedious operations like deproteinization are the main results of this analytical progress.

Many relevant improvements in mechanics and optics over the last twenty years, and the tremendous advance in electronics, have largely contributed to the instrumental improvement of clinical chemistry automation.

A recent interesting innovation in the field of centrifugal analyzers consists of the possibility of adding another reagent to an already mixed sample–reagent solution. This innovation allows a preincubation to be made and sample blanks to be read before adding the starter reagent.

The possibility of measuring absorbances in cuvettes positioned longitudinally to the light path, realized in a recent model of centrifugal analyzer, is claimed to be advantageous for reading absorbances in non-homogeneous solutions, for avoiding any influence of reagent volume errors on the absorbance, and for obtaining more suitable calculation factors. Interest in fluorimetric assays is growing more and more, especially in connection with drug immunofluorimetric assays. This technology has recently been applied to centrifugal analyzer technology as well. A xenon lamp generates high-energy light, reflected by a mirror onto a holographic grating operated by a stepping motor. The selected wavelength of the exciting light passes through a slit and reaches the rotating cuvettes. Fluorescence is then filtered, read by means of a photomultiplier, and compared to the continuously monitored fluorescence of an appropriate reference compound. In this way, any instability due either to the electro-optical devices or to changes in the physicochemical properties of the solution is corrected.

…more…

Dr. Yellapragada Subbarow – ATP – Energy for Life

One of the observations Dr. SubbaRow made while testing the phosphorus method seemed to provide a clue to the mystery of what happens to blood sugar when insulin is administered. Biochemists began investigating the problem when Frederick Banting showed that injections of insulin, the pancreatic hormone, keep blood sugar under control and keep diabetics alive.

SubbaRow worked for 18 months on the problem, often dieting and starving along with animals used in experiments. But the initial observations were finally shown to be neither significant nor unique and the project had to be scrapped in September 1926.

Out of the ashes of this project however arose another project that provided the key to the ancient mystery of muscular contraction. Living organisms resist degeneration and destruction with the help of muscles, and biochemists had long believed that a hypothetical inogen provided the energy required for the flexing of muscles at work.

Two researchers at Cambridge University in the United Kingdom confirmed that lactic acid is formed when muscles contract, and Otto Meyerhof of Germany showed that this lactic acid is a breakdown product of glycogen, the animal starch stored all over the body, particularly in the liver, kidneys, and muscles. When Professor Archibald Hill of University College London demonstrated that the conversion of glycogen to lactic acid partly accounts for the heat produced during muscle contraction, everybody assumed that glycogen was the inogen. And the 1922 Nobel Prize for medicine and physiology was divided between Hill and Meyerhof.

But how is glycogen converted to lactic acid? Embden, another German biochemist, advanced the hypothesis that blood sugar and phosphorus combine to form a hexose phosphoric ester which breaks down glycogen in the muscle to lactic acid.

In the midst of the insulin experiments, it occurred to Fiske and SubbaRow that Embden’s hypothesis would be supported if normal persons were found to have more hexose phosphate in their muscle and liver than diabetics. For diabetes is the failure of the body to use sugar. There would be little reaction between sugar and phosphorus in a diabetic body. If Embden was right, hexose (sugar) phosphate level in the muscle and liver of diabetic animals should rise when insulin is injected.

Fiske and SubbaRow rendered some animals diabetic by removing their pancreas in the spring of 1926, but they could not record any rise in the organic phosphorus content of muscles or livers after insulin was administered to the animals. Sugar phosphates were indeed produced in their animals but they were converted so quickly by enzymes to lactic acid that Fiske and SubbaRow could not detect them with methods then available. This was fortunate for science because, in their mistaken belief that Embden was wrong, they began that summer an extensive study of organic phosphorus compounds in the muscle “to repudiate Meyerhof completely”.

The departmental budget was so poor that SubbaRow often waited on the back streets of Harvard Medical School at night to capture cats he needed for the experiments. When he prepared the cat muscles for estimating their phosphorus content, SubbaRow found he could not get a constant reading in the colorimeter. The intensity of the blue colour went on rising for thirty minutes. Was there something in muscle which delayed the colour reaction? If yes, the time for full colour development should increase with the increase in the quantity of the sample. But the delay was not greater when the sample was 10 c.c. instead of 5 c.c. The only other possibility was that muscle had an organic compound which liberated phosphorus as the reaction in the colorimeter proceeded. This indeed was the case, it turned out. It took a whole year.

The mysterious colour delaying substance was a compound of phosphoric acid and creatine and was named Phosphocreatine. It accounted for two-thirds of the phosphorus in the resting muscle. When they put muscle to work by electric stimulation, the Phosphocreatine level fell and the inorganic phosphorus level rose correspondingly. It completely disappeared when they cut off the blood supply and drove the muscle to the point of “fatigue” by continued electric stimulation. And, presto! It reappeared when the fatigued muscle was allowed a period of rest.

Phosphocreatine created a stir among the scientists present when Fiske unveiled it before the American Society of Biological Chemists at Rochester in April 1927. The Journal of the American Medical Association hailed the discovery in an editorial. The Rockefeller Foundation awarded a fellowship that helped SubbaRow live comfortably for the first time since his arrival in the United States. All of Harvard Medical School was caught up in an enthusiasm that would be a lifetime memory for contemporary students. The students were in awe of the medium-sized, slightly stoop-shouldered, “coloured” man regarded as one of the School’s top research workers.

SubbaRow’s carefully conducted series of experiments disproved Meyerhof’s assumptions about the glycogen-lactic acid cycle. His calculations fully accounted for the heat output during muscle contraction. Hill had not been able to fully account for this in terms of Meyerhof’s theory. Clearly the Nobel Committee was in haste in awarding the 1922 physiology prize, but the biochemistry orthodoxy led by Meyerhof and Hill themselves was not too eager to give up their belief in glycogen as the prime source of muscular energy.

Fiske and SubbaRow were fully upheld and the Meyerhof-Hill theory finally rejected in 1930 when a Danish physiologist showed that muscles can work to exhaustion without the aid of glycogen or the stimulation of lactic acid.

Fiske and SubbaRow had meanwhile followed a substance that was formed by the combination of phosphorus, liberated from Phosphocreatine, with an unidentified compound in muscle. SubbaRow isolated it and identified it as a chemical in which adenylic acid was linked to two extra molecules of phosphoric acid. By the time he completed the work to the satisfaction of Fiske, it was August 1929 when Harvard Medical School played host to the 13th International Physiological Congress.

ATP was presented to the gathered scientists before the Congress ended. To the dismay of Fiske and SubbaRow, a few days later arrived in Boston a German science journal, published 16 days before the Congress opened. It carried a letter from Karl Lohmann of Meyerhof’s laboratory, saying he had isolated from muscle a compound of adenylic acid linked to two molecules of phosphoric acid!

While Archibald Hill never adjusted himself to the idea that the basis of his Nobel Prize work had been demolished, Otto Meyerhof and his associates had seen the importance of Phosphocreatine discovery and plunged themselves into follow-up studies in competition with Fiske and SubbaRow. Two associates of Hill had in fact stumbled upon Phosphocreatine about the same time as Fiske and SubbaRow but their loyalty to Meyerhof-Hill theory acted as blinkers and their hasty and premature publications reveal their confusion about both the nature and significance of Phosphocreatine.

The discovery of ATP and its significance helped reveal the full story of muscular contraction: glycogen arriving in muscle gets converted into lactic acid, which is siphoned off to the liver for re-synthesis of glycogen. This cycle yields three molecules of ATP and is important in delivering usable food energy to the muscle. Glycolysis, or the breakup of glycogen, is relatively slow in getting started, and in any case muscle can retain ATP only in small quantities. In the interval between the beginning of muscle activity and the arrival of fresh ATP from glycolysis, Phosphocreatine maintains the ATP supply by re-synthesizing it as fast as its energy terminals are used up by the muscle for its activity.

Muscular contraction made possible by ATP helps us not only to move our limbs and lift weights but keeps us alive. The heart is after all a muscle pouch and millions of muscle cells embedded in the walls of arteries keep the life-sustaining blood pumped by the heart coursing through body organs. ATP even helps get new life started by powering the sperm’s motion toward the egg as well as the spectacular transformation of the fertilized egg in the womb.

Archibald Hill long denied any role for ATP in muscle contraction, arguing that ATP had not been shown to break down in intact muscle. This objection was finally met in 1962, when University of Pennsylvania scientists showed that muscles can contract and relax normally even when glycogen and Phosphocreatine are held in check with an inhibitor.

Michael Somogyi

Michael Somogyi was born in Reinsdorf, Austria-Hungary, in 1883. He received a degree in chemical engineering from the University of Budapest, and after spending some time there as a graduate assistant in biochemistry, he immigrated to the United States. From 1906 to 1908 he was an assistant in biochemistry at Cornell University.

Returning to his native land in 1908, he became head of the Municipal Laboratory in Budapest, and in 1914 he was granted his Ph.D. After World War I, the politically unstable situation in his homeland led him to return to the United States where he took a job as an instructor in biochemistry at Washington University in St. Louis, Missouri. While there he assisted Philip A. Shaffer and Edward Adelbert Doisy, Sr., a future Nobel Prize recipient, in developing a new method for the preparation of insulin in sufficiently large amounts and of sufficient purity to make it a viable treatment for diabetes. This early work with insulin helped foster Somogyi’s lifelong interest in the treatment and cure of diabetes. He was the first biochemist appointed to the staff of the newly opened Jewish Hospital, and he remained there as the director of their clinical laboratory until his retirement in 1957.

Arterial Blood Gases.  Van Slyke.

The test is used to determine the pH of the blood, the partial pressures of carbon dioxide and oxygen, and the bicarbonate level. Many blood gas analyzers will also report concentrations of lactate, hemoglobin, several electrolytes, oxyhemoglobin, carboxyhemoglobin, and methemoglobin. ABG testing is mainly used in pulmonology and critical care medicine to assess gas exchange across the alveolar-capillary membrane.

DONALD DEXTER VAN SLYKE died on May 4, 1971, after a long and productive career that spanned three generations of biochemists and physicians. He left behind not only a bibliography of 317 journal publications and 5 books, but also more than 100 people who had worked with him and distinguished themselves in biochemistry and academic medicine. His doctoral thesis, with Gomberg at the University of Michigan, was published in the Journal of the American Chemical Society in 1907. Van Slyke then received an invitation from Dr. Simon Flexner, Director of the Rockefeller Institute, to come to New York for an interview. In 1911 he spent a year in Berlin with Emil Fischer, who was then the leading chemist of the scientific world. He was particularly impressed by Fischer’s insistence on performing all laboratory operations quantitatively, a practice Van followed throughout his life. Before going to Berlin, he had published the classic nitrous acid method for the quantitative determination of primary aliphatic amino groups, the first of the many gasometric procedures devised by Van Slyke, which made possible the determination of amino acids. It was the primary method used to study the amino acid composition of proteins for years before chromatography. Thus, his first seven postdoctoral years were centered on the development of better methodology for protein composition and amino acid metabolism.

With his colleague G. M. Meyer, he first demonstrated that amino acids, liberated during digestion in the intestine, are absorbed into the bloodstream, that they are removed by the tissues, and that the liver alone possesses the ability to convert the amino acid nitrogen into urea. From the study of the kinetics of urease action, Van Slyke and Cullen developed equations that depended upon two reactions: (1) the combination of enzyme and substrate in stoichiometric proportions and (2) the decomposition of that combination into the end products. Published in 1914, this formulation, involving two velocity constants, was similar to the one arrived at contemporaneously by Michaelis and Menten in Germany in 1913.
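In modern notation, and simplifying the two-constant treatment of Van Slyke and Cullen, both formulations lead to a rate law of the familiar saturation form:

\[ v \;=\; \frac{V_{\max}[S]}{K + [S]} \]

where the constant K is interpreted differently in the two treatments (a ratio of the two velocity constants for Van Slyke and Cullen, a dissociation constant for Michaelis and Menten), but the hyperbolic dependence of velocity on substrate concentration is the same.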

He transferred to the Rockefeller Institute’s Hospital in 1914, under Dr. Rufus Cole, where “Men who were studying disease clinically had the right to go as deeply into its fundamental nature as their training allowed, and in the Rockefeller Institute’s Hospital every man who was caring for patients should also be engaged in more fundamental study”. The study of diabetes was already under way by Dr. F. M. Allen, but patients inevitably died of acidosis. Van Slyke reasoned that if incomplete oxidation of fatty acids in the body led to the accumulation of acetoacetic and beta-hydroxybutyric acids in the blood, then these acids would react with bicarbonate ions and lead to a lower-than-normal bicarbonate concentration in blood plasma. The problem thus became one of devising an analytical method that would permit the quantitative determination of bicarbonate concentration in small amounts of blood plasma. He ingeniously devised a volumetric glass apparatus that was easy to use and required less than ten minutes for the determination of the total carbon dioxide in one cubic centimeter of plasma. It was soon found to be an excellent apparatus for determining blood oxygen concentrations as well, thus leading to measurements of the percentage saturation of blood hemoglobin with oxygen. This found extensive application in the study of respiratory diseases, such as pneumonia and tuberculosis. It also led to the quantitative study of cyanosis and a monograph on the subject by C. Lundsgaard and Van Slyke.
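The logic of using plasma bicarbonate (or total CO2) as an index of acidosis is captured by the bicarbonate buffer relation, written here in its modern clinical form (the Henderson-Hasselbalch equation) rather than in Van Slyke’s original notation:

\[ \mathrm{pH} \;=\; pK' + \log_{10}\frac{[\mathrm{HCO_3^-}]}{0.03 \times P_{\mathrm{CO_2}}}, \qquad pK' \approx 6.1 \]

with PCO2 in mm Hg and 0.03 mmol/L per mm Hg as the solubility coefficient of CO2 in plasma. Keto acids consume bicarbonate, and the pH falls unless ventilation lowers PCO2 in compensation.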

In all, Van Slyke and his colleagues published twenty-one papers under the general title “Studies of Acidosis,” beginning in 1917 and ending in 1934. They dealt not only with the chemical manifestations of acidosis; in No. 17 of the series (1921), Van Slyke elaborated and expanded the subject to describe in chemical terms the normal and abnormal variations in the acid-base balance of the blood. This was a landmark in the understanding of acid-base pathology. Within seven years of his move to the Hospital, Van had published a total of fifty-three papers, thirty-three of them coauthored with clinical colleagues.

In 1920, Van Slyke and his colleagues undertook a comprehensive investigation of gas and electrolyte equilibria in blood. McLean and Henderson at Harvard had made preliminary studies of blood as a physico-chemical system, but realized that Van Slyke and his colleagues at the Rockefeller Hospital had superior techniques and the facilities necessary for such an undertaking. A collaboration thereupon began between the two laboratories, which resulted in rapid progress toward an exact physico-chemical description of the role of hemoglobin in the transport of oxygen and carbon dioxide, of the distribution of diffusible ions and water between erythrocytes and plasma, and of factors, such as the degree of oxygenation of hemoglobin and the hydrogen ion concentration, that modified these distributions. In the course of this work Van Slyke converted his volumetric gas analysis apparatus into a manometric one; the manometric apparatus proved to give results that were five to ten times more accurate.

A series of papers followed on the CO2 titration curves of oxy- and deoxyhemoglobin, of oxygenated and reduced whole blood, and of blood subjected to different degrees of oxygenation, and on the distribution of diffusible ions in blood. These developed equations that predicted the change in distribution of water and diffusible ions between blood plasma and blood cells when there was a change in pH of the oxygenated blood. A significant contribution of Van Slyke and his colleagues was the application of the Gibbs-Donnan Law to the blood, regarded as a two-phase system in which one phase (the erythrocytes) contained a high concentration of nondiffusible negative ions, i.e., those associated with hemoglobin, and cations, which were not freely exchangeable between cells and plasma. By changing the pH through varying the CO2 tension, the concentration of negative hemoglobin charges changed by a predictable amount. This, in turn, changed the distribution of diffusible anions such as Cl− and HCO3− in order to restore the Gibbs-Donnan equilibrium. Redistribution of water occurred to restore osmotic equilibrium. The experimental results confirmed the predictions of the equations.
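In outline, the Gibbs-Donnan condition requires that the diffusible ions distribute between cells and plasma with a common ratio r, fixed by the nondiffusible hemoglobin anion:

\[ r \;=\; \frac{[\mathrm{Cl^-}]_{\text{cells}}}{[\mathrm{Cl^-}]_{\text{plasma}}} \;=\; \frac{[\mathrm{HCO_3^-}]_{\text{cells}}}{[\mathrm{HCO_3^-}]_{\text{plasma}}} \;=\; \frac{[\mathrm{H^+}]_{\text{plasma}}}{[\mathrm{H^+}]_{\text{cells}}} \]

so that a change in the hemoglobin charge (through CO2 tension and pH) shifts r, and with it the chloride and bicarbonate distributions described above.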

As a spin-off from the physico-chemical study of the blood, Van undertook, in 1922, to put the concept of buffer value of weak electrolytes on a mathematically exact basis.
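The quantity Van Slyke defined, now called the buffer value, is the amount of strong base dB needed to raise the pH of a solution by dpH; for a single weak acid of total concentration C and dissociation constant Ka (neglecting the contribution of water itself) it takes the form:

\[ \beta \;=\; \frac{dB}{d\,\mathrm{pH}} \;=\; 2.303\, C\, \frac{K_a\,[\mathrm{H^+}]}{\left(K_a + [\mathrm{H^+}]\right)^{2}} \]

which is maximal when pH = pKa, the familiar rule that a buffer works best near its pK.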

This proved useful in determining the buffer values of mixed, polyvalent, and amphoteric electrolytes and put the understanding of buffering on a quantitative basis. Later, a monograph in Medicine entitled “Observations on the Courses of Different Types of Bright’s Disease, and on the Resultant Changes in Renal Anatomy” was a landmark that related the changes occurring at different stages of renal deterioration to the quantitative changes taking place in kidney function. During this period, Van Slyke and R. M. Archibald identified glutamine as the source of urinary ammonia. During World War II, Van and his colleagues documented the effect of shock on renal function and, with R. A. Phillips, developed a simple method, based on specific gravity, suitable for use in the field.

Over 100 of Van’s more than 300 publications were devoted to methodology. The importance of Van Slyke’s contribution to clinical chemical methodology cannot be overestimated. His methods covered the blood’s organic constituents (carbohydrates, fats, proteins, amino acids, urea, nonprotein nitrogen, and phospholipids) and its inorganic constituents (total cations, calcium, chlorides, phosphate, and the gases carbon dioxide, carbon monoxide, and nitrogen). It was said that a Van Slyke manometric apparatus was almost all the special equipment needed to perform most of the clinical chemical analyses customarily performed before the introduction of photocolorimeters and spectrophotometers.

The progress made in the medical sciences in genetics, immunology, endocrinology, and antibiotics during the second half of the twentieth century obscures at times the progress that was made in basic and necessary biochemical knowledge during the first half. Methods capable of giving accurate quantitative chemical information on biological material had to be painstakingly devised; basic questions on chemical behavior and metabolism had to be answered; and, finally, the factors that adversely modify the normal chemical reactions in the body, producing the abnormal conditions we characterize as disease states, had to be identified.

Viewed in retrospect, he combined in one scientific lifetime (1) basic contributions to the chemistry of body constituents and their chemical behavior in the body, (2) a chemical understanding of physiological functions of certain organ systems (notably the respiratory and renal), and (3) how such information could be exploited in the understanding and treatment of disease. That outstanding additions to knowledge in all three categories were possible was in large measure due to his sound and broadly based chemical preparation, his ingenuity in devising means of accurate measurements of chemical constituents, and the opportunity given him at the Hospital of the Rockefeller Institute to study disease in company with physicians.

In addition, he found time to work collaboratively with Dr. John P. Peters of Yale on the classic, two-volume Quantitative Clinical Chemistry. In 1922, John P. Peters, who had just gone to Yale from Van Slyke’s laboratory as an Associate Professor of Medicine, was asked by a publisher to write a modest handbook for clinicians describing useful chemical methods and discussing their application to clinical problems. It was originally to be called “Quantitative Chemistry in Clinical Medicine.” He soon found that it was going to be a bigger job than he could handle alone and asked Van Slyke to join him in writing it. Van agreed, and the two men proceeded to draw up an outline and divide the writing of the first drafts of the chapters between them. They also agreed to exchange each chapter until it met the satisfaction of both. At the time it was published in 1931, it contained practically all that could be stated with confidence about those aspects of disease that could be and had been studied by chemical means. It was widely accepted throughout the medical world as the “Bible” of quantitative clinical chemistry, and to this day some of the chapters have not become outdated.

Paul Flory

Paul J. Flory was born in Sterling, Illinois, in 1910. He attended Manchester College, an institution for which he retained an abiding affection. He did his graduate work at Ohio State University, earning his Ph.D. in 1934. He was awarded the Nobel Prize in Chemistry in 1974, largely for his work in the area of the physical chemistry of macromolecules.

Flory worked as a newly minted Ph.D. for the DuPont Company in the Central Research Department with Wallace H. Carothers. This early experience with practical research instilled in Flory a lifelong appreciation for the value of industrial application. His work with the Air Force Office of Scientific Research and his later support for the Industrial Affiliates program at Stanford University demonstrated his belief in the need for theory and practice to work hand in hand.

Following the death of Carothers in 1937, Flory joined the University of Cincinnati’s Basic Science Research Laboratory. After the war Flory taught at Cornell University from 1948 until 1957, when he became executive director of the Mellon Institute. In 1961 he joined the chemistry faculty at Stanford, where he would remain until his retirement.

Among the high points of Flory’s years at Stanford were his receipt of the National Medal of Science (1974), the Priestley Award (1974), the J. Willard Gibbs Medal (1973), the Peter Debye Award in Physical Chemistry (1969), and the Charles Goodyear Medal (1968). He also traveled extensively, including working tours to the U.S.S.R. and the People’s Republic of China.

Abraham Savitzky

Abraham Savitzky was born on May 29, 1919, in New York City. He received his bachelor’s degree from the New York State College for Teachers in 1941. After serving in the U.S. Air Force during World War II, he obtained a master’s degree in 1947 and a Ph.D. in 1949 in physical chemistry from Columbia University.

In 1950, after working at Columbia for a year, he began a long career with the Perkin-Elmer Corporation. Savitzky started with Perkin-Elmer as a staff scientist who was chiefly concerned with the design and development of infrared instruments. By 1956 he was named Perkin-Elmer’s new product coordinator for the Instrument Division, and as the years passed, he continued to gain more and more recognition for his work in the company. Most of his work with Perkin-Elmer focused on computer-aided analytical chemistry, data reduction, infrared spectroscopy, time-sharing systems, and computer plotting. He retired from Perkin-Elmer in 1985.

Abraham Savitzky holds seven U.S. patents pertaining to computerization and chemical apparatus. During his long career he presented numerous papers and wrote several manuscripts, including “Smoothing and Differentiation of Data by Simplified Least Squares Procedures.” This paper, a collaborative effort of Savitzky and Marcel J. E. Golay, was published in volume 36 of Analytical Chemistry in July 1964 and is one of the most famous, respected, and heavily cited articles in its field. In recognition of his many significant accomplishments in analytical chemistry and computer science, Savitzky received the Society for Applied Spectroscopy Award in 1983 and the Williams-Wright Award from the Coblentz Society in 1986.
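The idea of the paper is that smoothing (and differentiation) can be done by least-squares fitting a low-order polynomial to each sliding window of points, which reduces to a simple convolution with fixed coefficients. As a brief illustration, not code from the paper, the same procedure is available today in SciPy’s savgol_filter:

import numpy as np
from scipy.signal import savgol_filter

# Synthetic noisy signal standing in for a spectrum or chromatogram.
x = np.linspace(0.0, 2.0 * np.pi, 200)
noisy = np.sin(x) + np.random.normal(scale=0.1, size=x.size)

# Smooth with an 11-point window and a cubic polynomial, comparable to
# the coefficient tables in the 1964 paper.
smoothed = savgol_filter(noisy, window_length=11, polyorder=3)

# The same window and polynomial also give a least-squares estimate of
# the first derivative (deriv=1), scaled by the point spacing.
first_deriv = savgol_filter(noisy, window_length=11, polyorder=3,
                            deriv=1, delta=x[1] - x[0])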

Samuel Natelson

Samuel Natelson attended the City College of New York and received his B.S. in chemistry in 1928. As a graduate student he attended New York University, receiving an Sc.M. in 1930 and his Ph.D. in 1931. After receiving his Ph.D., he began his career teaching at Girls Commercial High School. While maintaining his teaching position, Natelson joined the Jewish Hospital of Brooklyn in 1933. Working as a clinical chemist for Jewish Hospital, Natelson first conceived of the idea of a society by and for clinical chemists, and he worked to organize the nine charter members of the American Association of Clinical Chemists, which formally began in 1948. A pioneer in the field of clinical chemistry, Natelson became a role model for the clinical chemist and developed the use of microtechniques in clinical chemistry. He also served as a consultant to the National Aeronautics and Space Administration in the 1960s, helping analyze the effect of weightlessness on astronauts’ blood. Natelson spent his later career as chair of the biochemistry department at Michael Reese Hospital and as a lecturer at the Illinois Institute of Technology.

Arnold Beckman

Arnold Orville Beckman (April 10, 1900 – May 18, 2004) was an American chemist, inventor, investor, and philanthropist. While a professor at Caltech, he founded Beckman Instruments based on his 1934 invention of the pH meter, a device for measuring acidity, later considered to have “revolutionized the study of chemistry and biology”.[1] He also developed the DU spectrophotometer, “probably the most important instrument ever developed towards the advancement of bioscience”.[2] Beckman funded the first transistor company, thus giving rise to Silicon Valley.[3]

He earned his bachelor’s degree in chemical engineering in 1922 and his master’s degree in physical chemistry in 1923. For his master’s degree he studied the thermodynamics of aqueous ammonia solutions, a subject introduced to him by T. A. White. Beckman then decided to go to Caltech for his doctorate. He stayed there for a year before returning to New York to be near his fiancée, Mabel. He found a job with Western Electric’s engineering department, the precursor to the Bell Telephone Laboratories. Working with Walter A. Shewhart, Beckman developed quality control programs for the manufacture of vacuum tubes and learned about circuit design. It was here that Beckman discovered his interest in electronics.

In 1926 the couple moved back to California and Beckman resumed his studies at Caltech. He became interested in ultraviolet photolysis and worked with his doctoral advisor, Roscoe G. Dickinson, on an instrument to find the energy of ultraviolet light. It worked by shining the ultraviolet light onto a thermocouple, converting the incident heat into electricity, which drove a galvanometer. After receiving a Ph.D. in photochemistry in 1928 for this application of quantum theory to chemical reactions, Beckman was asked to stay on at Caltech as an instructor and then as a professor. Linus Pauling, another of Roscoe G. Dickinson’s graduate students, was also asked to stay on at Caltech.

During his time at Caltech, Beckman was active in teaching at both the introductory and advanced graduate levels. Beckman shared his expertise in glass-blowing by teaching classes in the machine shop. He also taught classes in the design and use of research instruments. Beckman dealt first-hand with the chemists’ need for good instrumentation as manager of the chemistry department’s instrument shop. Beckman’s interest in electronics made him very popular within the chemistry department at Caltech, as he was very skilled in building measuring instruments.

Over the time that he was at Caltech, the focus of the department moved increasingly toward pure science and away from chemical engineering and applied chemistry. Arthur Amos Noyes, head of the chemistry division, encouraged both Beckman and chemical engineer William Lacey to stay in contact with real-world engineers and chemists, and Robert Andrews Millikan, Caltech’s president, referred technical questions from government and businesses to Beckman.

Sunkist Growers was having problems with its manufacturing process. Lemons that were not saleable as produce were made into pectin or citric acid, with sulfur dioxide used as a preservative. Sunkist needed to know the acidity of the product at any given time. Glen Joseph, a chemist at Sunkist, was attempting to measure the hydrogen-ion concentration in lemon juice electrochemically, but sulfur dioxide damaged hydrogen electrodes, and non-reactive glass electrodes produced weak signals and were fragile.

Joseph approached Beckman, who proposed that instead of trying to increase the sensitivity of his measurements, he amplify his results. Beckman, familiar with glassblowing, electricity, and chemistry, suggested a design for a vacuum-tube amplifier and ended up building a working apparatus for Joseph. The glass electrode used to measure pH was placed in a grid circuit in the vacuum tube, producing an amplified signal which could then be read by an electronic meter. The prototype was so useful that Joseph requested a second unit.
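The difficulty, and the reason amplification rather than a more sensitive galvanometer was the answer, lies in the glass electrode itself: its ideal (Nernstian) response is only about 59 mV per pH unit at room temperature, and its internal resistance is so high that almost no current can be drawn from it.

\[ E \;=\; E^{0} - \frac{2.303\,RT}{F}\,\mathrm{pH} \;\approx\; E^{0} - (0.0592\ \mathrm{V})\times\mathrm{pH} \quad (25\,^{\circ}\mathrm{C}) \]

A high-input-impedance vacuum-tube circuit could read this small potential without loading the electrode, which is exactly what Beckman’s design provided.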

Beckman saw an opportunity and, rethinking the project, decided to create a complete chemical instrument that could be easily transported and used by nonspecialists. By October 1934, he had filed the patent application that became U.S. Patent No. 2,058,761 for his “acidimeter,” later renamed the pH meter. Although it was expensive at $195, roughly the starting monthly wage for a chemistry professor at that time, it was significantly cheaper than the estimated cost of building a comparable instrument from individual components, about $500. The original pH meter weighed nearly 7 kg, but it was a substantial improvement over a benchful of delicate equipment. The earliest meter had a design glitch, in that the pH readings changed with the depth of immersion of the electrodes, but Beckman fixed the problem by sealing the glass bulb of the electrode. By 11 May 1939, sales were successful enough that Beckman left Caltech to become the full-time president of National Technical Laboratories. By 1940, Beckman was able to take out a loan to build his own 12,000-square-foot factory in South Pasadena.

In 1940, the equipment needed to analyze emission spectra in the visible spectrum could cost a laboratory as much as $3,000, a huge amount at that time. There was also growing interest in examining ultraviolet spectra beyond that range. In the same way that he had created a single easy-to-use instrument for measuring pH, Beckman made it a goal to create an easy-to-use instrument for spectrophotometry. Beckman’s research team, led by Howard Cary, developed several models.

The new spectrophotometers used a prism to disperse light into its component wavelengths and a phototube to “read” the transmitted light and generate electrical signals, creating a standardized absorption “fingerprint” for the material tested. With Beckman’s model D, later known as the DU spectrophotometer, National Technical Laboratories successfully created the first easy-to-use single instrument containing both the optical and electronic components needed for ultraviolet-absorption spectrophotometry. The user could insert a sample, dial up the desired wavelength, and read the amount of absorption at that wavelength from a simple meter. It produced accurate absorption spectra in both the ultraviolet and the visible regions of the spectrum with relative ease and repeatable accuracy. The National Bureau of Standards ran tests to certify that the DU’s results were accurate and repeatable and recommended its use.
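The quantity the DU reads from its meter is governed by the Beer-Lambert law, which makes absorbance directly proportional to the concentration of the absorbing substance:

\[ A \;=\; \log_{10}\frac{I_0}{I} \;=\; \varepsilon\, l\, c \]

where I0 and I are the incident and transmitted intensities, ε the molar absorptivity at the chosen wavelength, l the path length of the cuvette, and c the concentration; this linearity is what made a single meter reading a quantitative assay.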

Beckman’s DU spectrophotometer has been referred to as the “Model T” of scientific instruments: “This device forever simplified and streamlined chemical analysis, by allowing researchers to perform a 99.9% accurate biological assessment of a substance within minutes, as opposed to the weeks required previously for results of only 25% accuracy.” Nobel laureate Bruce Merrifield is quoted as calling the DU spectrophotometer “probably the most important instrument ever developed towards the advancement of bioscience.”

Development of the spectrophotometer also had direct relevance to the war effort. The role of vitamins in health was being studied, and scientists wanted to identify Vitamin A-rich foods to keep soldiers healthy. Previous methods involved feeding rats for several weeks, then performing a biopsy to estimate Vitamin A levels. The DU spectrophotometer yielded better results in a matter of minutes. The DU spectrophotometer was also an important tool for scientists studying and producing the new wonder drug penicillin. By the end of the war, American pharmaceutical companies were producing 650 billion units of penicillin each month. Much of the work done in this area during World War II was kept secret until after the war.

Beckman also developed infrared spectrophotometers, first the IR-1 and then, in 1953, a redesigned instrument, the IR-4, which could be operated using either a single or a double beam of infrared light. This allowed a user to take the reference measurement and the sample measurement at the same time.

Beckman Coulter Inc., is an American company that makes biomedical laboratory instruments. Founded by Caltech professor Arnold O. Beckman in 1935 as National Technical Laboratories to commercialize a pH meter that he had invented, the company eventually grew to employ over 10,000 people, with $2.4 billion in annual sales by 2004. Its current headquarters are in Brea, California.

In the 1940s, Beckman changed the name to Arnold O. Beckman, Inc. to sell oxygen analyzers, the Helipot precision potentiometer, and spectrophotometers. In the 1950s, the company name changed to Beckman Instruments, Inc.

Beckman was contacted by Paul Rosenberg, who worked at MIT’s Radiation Laboratory. The lab was part of a secret network of research institutions in the United States and Britain working to develop radar (“radio detecting and ranging”). The project was interested in Beckman because of the high quality of the tuning knobs, or potentiometers, used on his pH meters. Beckman had trademarked the design of the pH meter knobs under the name “Helipot,” for “helical potentiometer.” Rosenberg had found that the Helipot was more precise, by a factor of ten, than other potentiometers. The knob was then redesigned with a continuous groove so that the sliding contact could not be jarred out of place.

Beckman instruments were also used by the Manhattan Project to measure radiation in gas-filled, electrically charged ionization chambers in nuclear reactors. The pH meter was adapted to do the job with a relatively minor adjustment, substituting an input-load resistor for the glass electrode. As a result, Beckman Instruments developed a new product, the micro-ammeter.

After the war, Beckman developed oxygen analyzers that were used to monitor conditions in incubators for premature babies. Doctors at Johns Hopkins University used them to determine recommendations for healthy oxygen levels for incubators.

Beckman himself was approached by California governor Goodwin Knight to head a Special Committee on Air Pollution and propose ways to combat smog. At the end of 1953, the committee made its findings public in a report, nicknamed the “Beckman Bible,” that advised key steps to be taken immediately.

In 1955, Beckman established the seminal Shockley Semiconductor Laboratory as a division of Beckman Instruments to begin commercializing the semiconductor transistor technology co-invented by Caltech alumnus William Shockley. The Shockley laboratory was established in Mountain View, California, and thus “Silicon Valley” was born.

Beckman also saw that computers and automation offered a myriad of opportunities for integration into instruments, and the development of new instruments.

The Arnold and Mabel Beckman Foundation was incorporated in September 1977.  At the time of Beckman’s death, the Foundation had given more than 400 million dollars to a variety of charities and organizations. In 1990, it was considered one of the top ten foundations in California, based on annual gifts. Donations chiefly went to scientists and scientific causes as well as Beckman’s alma maters. He is quoted as saying, “I accumulated my wealth by selling instruments to scientists,… so I thought it would be appropriate to make contributions to science, and that’s been my number one guideline for charity.”

Wallace H. Coulter

Engineer, Inventor, Entrepreneur, Visionary

Wallace Henry Coulter was an engineer, inventor, entrepreneur and visionary. He was co-founder and Chairman of Coulter® Corporation, a worldwide medical diagnostics company headquartered in Miami, Florida. The two great passions of his life were applying engineering principles to scientific research, and embracing the diversity of world cultures. The first passion led him to invent the Coulter Principle™, the reference method for counting and sizing microscopic particles suspended in a fluid.

This invention served as the cornerstone for automating the labor-intensive process of counting and testing blood. With his vision and tenacity, Wallace Coulter was a founding father of the field of laboratory hematology, the science and study of blood. His global viewpoint and passion for world cultures inspired him to establish over twenty international subsidiaries, and he recognized, before it became standard business strategy, that it was imperative to employ locally based staff to serve his customers.
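The Coulter Principle itself is simple: cells drawn through a small aperture displace conducting electrolyte, and each passage produces a voltage pulse whose height is, to a first approximation, proportional to the cell’s volume. A toy sketch in Python (the threshold, calibration factor, and pulse values are made up for illustration) shows how counting and sizing follow from the pulse train:

import numpy as np

def count_and_size(pulse_heights_mV, threshold_mV=5.0, mV_per_fL=0.1):
    """Count pulses above a noise threshold and estimate cell volumes.

    Assumes pulse height is linearly proportional to particle volume
    (mV_per_fL is a hypothetical calibration constant).
    """
    pulses = np.asarray(pulse_heights_mV, dtype=float)
    detected = pulses[pulses > threshold_mV]      # reject baseline noise
    volumes_fL = detected / mV_per_fL             # height -> volume (fL)
    mean_volume = volumes_fL.mean() if detected.size else 0.0
    return detected.size, mean_volume

# Five simulated aperture transits: three cells and two noise blips.
count, mcv = count_and_size([12.0, 0.8, 9.5, 11.2, 0.3])
print(count, mcv)   # -> 3 cells, mean cell volume of about 109 fL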

Wallace’s first attempts to patent his invention were turned away by more than one attorney who believed “you cannot patent a hole.” Persistent as always, Wallace finally applied for his first patent in 1949, and it was issued on October 20, 1953. That same year, two prototypes were sent to the National Institutes of Health for evaluation, and shortly afterward the NIH published its findings in two key papers, citing the improved accuracy and convenience of the Coulter method of counting blood cells. Also in 1953, Wallace publicly disclosed his invention in his one and only technical paper at the National Electronics Conference, “High Speed Automatic Blood Cell Counter and Cell Size Analyzer.”

Leonard Skeggs was the inventor of the first continuous flow analyzer in 1957. This groundbreaking event changed the way that clinical chemistry was carried out: many of the laborious tests that dominated laboratory work could be automated, increasing productivity and freeing personnel for other, more challenging tasks.

Continuous flow analysis and its offshoots and descendants are an integral part of modern chemistry. It might therefore be some consolation to Leonard Skeggs to know that not only was he the beneficiary of an appellation with a long and fascinating history, he also created a revolution in wet chemistry that is still with us today.

Technicon

The AutoAnalyzer is an automated analyzer using a flow technique called continuous flow analysis (CFA), first made by the Technicon Corporation. The instrument was invented in 1957 by Leonard Skeggs, PhD, and commercialized by Jack Whitehead’s Technicon Corporation. The first applications were for clinical analysis, but methods for industrial analysis soon followed. The design is based on segmenting a continuously flowing stream with air bubbles.

In continuous flow analysis (CFA) a continuous stream of material is divided by air bubbles into discrete segments in which chemical reactions occur. The continuous stream of liquid samples and reagents is combined and transported in tubing and mixing coils. The tubing passes the samples from one apparatus to the next, with each apparatus performing a different function, such as distillation, dialysis, extraction, ion exchange, heating, incubation, and subsequent recording of a signal. An essential principle of the system is the introduction of air bubbles. The air bubbles segment each sample into discrete packets and act as barriers between packets to prevent cross-contamination as they travel down the length of the tubing. The air bubbles also assist mixing by creating turbulent (bolus) flow and give operators a quick and easy check of the flow characteristics of the liquid. Samples and standards are treated in an exactly identical manner as they travel the length of the tubing, eliminating the necessity of reaching a steady-state signal. Nevertheless, because the bubbles create an almost square-wave profile, bringing the system to steady state does not significantly decrease throughput (third-generation CFA analyzers average 90 or more samples per hour), and it is desirable in that steady-state signals (chemical equilibrium) are more accurate and reproducible.

A continuous flow analyzer (CFA) consists of different modules, including a sampler, pump, mixing coils, optional sample-treatment steps (dialysis, distillation, heating, etc.), a detector, and a data generator. Most continuous flow analyzers depend on color reactions read with a flow-through photometer; however, methods have also been developed that use ion-selective electrodes, flame photometry, ICAP, fluorometry, and so forth.

Flow injection analysis (FIA) was introduced in 1975 by Ruzicka and Hansen.
Jaromir (Jarda) Ruzicka is a Professor of Chemistry (Emeritus at the University of Washington and Affiliate at the University of Hawaii) and a member of the Danish Academy of Technical Sciences. Born in Prague in 1934, he graduated from the Department of Analytical Chemistry, Faculty of Sciences, Charles University. In 1968, when the Soviets occupied Czechoslovakia, he emigrated to Denmark. There he joined the Technical University of Denmark, where, ten years later, he received a newly created Chair in Analytical Chemistry. When Jarda met Elo Hansen, they invented Flow Injection.

The first generation of FIA technology, termed flow injection (FI), was inspired by the AutoAnalyzer technique invented by Skeggs in the early 1950s. While Skeggs’ AutoAnalyzer uses air segmentation to separate a flowing stream into numerous discrete segments, establishing a long train of individual samples moving through a flow channel, FIA systems separate each sample from the next with a carrier reagent. And while the AutoAnalyzer mixes sample homogeneously with reagents, in all FIA techniques sample and reagents merge to form a concentration gradient that yields the analytical result.
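Ruzicka and Hansen characterized an FIA manifold by its dispersion coefficient, the dilution the injected sample zone undergoes between injection and detection:

\[ D \;=\; \frac{C^{0}}{C_{\max}} \]

where C0 is the concentration of the injected sample and Cmax the concentration at the peak maximum seen by the detector; manifolds are then loosely classed as limited, medium, or large dispersion according to how much mixing with the carrier the chemistry requires.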

Arthur Karmen.

Dr. Karmen was born in New York City in 1930. He graduated from the Bronx High School of Science in 1946 and earned an A.B. and an M.D. from New York University in 1950 and 1954, respectively. In 1952, while a medical student working on a summer project at Memorial Sloan-Kettering, he used paper chromatography of amino acids to demonstrate the presence of glutamic-oxaloacetic and glutamic-pyruvic transaminases (aspartate and alanine aminotransferases) in serum and blood. In 1954, he devised the spectrophotometric method for measuring aspartate aminotransferase in serum, which, with minor modifications, is still used for diagnostic testing today. When developing this assay, he studied the reaction of NADH with serum and demonstrated the presence of lactate and malate dehydrogenases, both of which were also later used in diagnosis. Using the spectrophotometric method, he found that aspartate aminotransferase increased in the period immediately after an acute myocardial infarction and did the pilot studies that showed its diagnostic utility in heart and liver diseases. In the diagnosis of myocardial infarction the test became as important as the EKG. In cardiology it was later replaced by the MB isoenzyme of creatine kinase, a change driven by Burton Sobel’s work on infarct size, and later still by the troponins.
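The assay follows the disappearance of NADH, which absorbs at 340 nm, as the coupled indicator reaction proceeds. In the form used today (the conventions, not necessarily Karmen’s original arithmetic), the activity is obtained from the rate of absorbance change and the NADH millimolar absorptivity of 6.22 L·mmol⁻¹·cm⁻¹:

\[ \text{Activity (U/L)} \;=\; \frac{(\Delta A_{340}/\text{min}) \times V_{\text{total}} \times 1000}{6.22 \times d \times V_{\text{sample}}} \]

with Vtotal and Vsample the reaction and specimen volumes in mL and d the light path in cm.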


Nathan Gochman.  Developer of Automated Chemistries.

Nathan Gochman, PhD, has over 40 years of experience in the clinical diagnostics industry. This includes academic teaching and research, and 30 years in the pharmaceutical and in vitro diagnostics industry. He has managed R & D, technical marketing and technical support departments. As a leader in the industry he was President of the American Association for Clinical Chemistry (AACC) and the National Committee for Clinical Laboratory Standards (NCCLS, now CLSI). He is currently a Consultant to investment firms and IVD companies.

William Sunderman

A doctor and scientist who lived a remarkable century and beyond — making medical advances, playing his Stradivarius violin at Carnegie Hall at 99 and being honored as the nation’s oldest worker at 100.

He developed a method for measuring glucose in the blood, the Sunderman Sugar Tube, and was one of the first doctors to use insulin to bring a patient out of a diabetic coma. He established quality-control techniques for medical laboratories that ended the wide variation in the results of laboratories doing the same tests.

He taught at several medical schools and founded and edited the journal Annals of Clinical and Laboratory Science. In World War II, he was a medical director for the Manhattan Project, which developed the atomic bomb.

Dr. Sunderman was president of the American Society of Clinical Pathologists and a founding governor of the College of American Pathologists. He also helped organize the Association of Clinical Scientists and was its first president.

Yale Department of Laboratory Medicine

The roots of the Department of Laboratory Medicine at Yale can be traced back to John Peters, the head of what he called the “Chemical Division” of the Department of Internal Medicine, subsequently known as the Section of Metabolism, who co-authored with Donald Van Slyke the landmark 1931 textbook Quantitative Clinical Chemistry; and to Pauline Hald, research collaborator of Dr. Peters who subsequently served as Director of Clinical Chemistry at Yale-New Haven Hospital for many years. In 1947, Miss Hald reported the very first flame photometric measurements of sodium and potassium in serum. This study helped to lay the foundation for modern studies of metabolism and their application to clinical care.

The Laboratory Medicine program at Yale had its inception in 1958 as a section of Internal Medicine under the leadership of David Seligson. In 1965, Laboratory Medicine achieved autonomous section status and in 1971, became a full-fledged academic department. Dr. Seligson, who served as the first Chair, pioneered modern automation and computerized data processing in the clinical laboratory. In particular, he demonstrated the feasibility of discrete sample handling for automation that is now the basis of virtually all automated chemistry analyzers. In addition, Seligson and Zetner demonstrated the first clinical use of atomic absorption spectrophotometry. He was one of the founding members of the major Laboratory Medicine academic society, the Academy of Clinical Laboratory Physicians and Scientists.

The discipline of clinical chemistry and the broader field of laboratory medicine, as they are practiced today, are attributed in no small part to Seligson’s vision and creativity.

Born in Philadelphia in 1916, Seligson graduated from the University of Maryland and received a D.Sc. from Johns Hopkins University and an M.D. from the University of Utah. In 1953, he served as a captain in the U.S. Army and as chief of the Hepatic and Metabolic Disease Laboratory at Walter Reed Army Medical Center.

Recruited to Yale and Grace-New Haven Hospital in 1958 from the University of Pennsylvania as professor of internal medicine at the medical school and the first director of clinical laboratories at the hospital, Seligson subsequently established the infrastructure of the Department of Laboratory Medicine, creating divisions of clinical chemistry, microbiology, transfusion medicine (blood banking) and hematology – each with its own strong clinical, teaching and research programs.

Challenging the continuous flow approach, Seligson designed, built and validated “discrete sample handling” instruments wherein each sample was treated independently, which allowed better choice of methods and greater efficiency. Today continuous flow has essentially disappeared and virtually all modern automated clinical laboratory instruments are based upon discrete sample handling technology.

Seligson was one of the early visionaries who recognized the potential for computers in the clinical laboratory. One of the first applications of a digital computer in the clinical laboratory occurred in Seligson’s department at Yale, and shortly thereafter data were being transmitted directly from the laboratory computer to data stations on the patient wards. Now, such laboratory information systems represent the standard of care.

He was also among the first to highlight the clinical importance of test specificity and accuracy, as compared to simple reproducibility. One of his favorite slides was one that showed almost perfectly reproducible results for 10 successive measurements of blood sugar obtained with what was then the most widely used and popular analytical instrument. However, he would note, the answer was wrong; the assay was not accurate.

Seligson established one of the nation’s first residency programs focused on laboratory medicine or clinical pathology, and also developed a teaching curriculum in laboratory medicine for medical students. In so doing, he created a model for the modern practice of laboratory medicine in an academic environment, and his trainees spread throughout the country as leaders in the field.

Ernest Cotlove

Ernest Cotlove’s scientific and medical career started at NYU where, after finishing his medical studies in 1943, he pursued work in renal physiology and chemistry. His outstanding ability to acquire knowledge and conduct innovative investigations earned him an invitation from James Shannon, then Director of the National Heart Institute at NIH. He continued studies of renal physiology and chemistry until 1953, when he became Head of the Clinical Chemistry Laboratories in the new Department of Clinical Pathology being developed by George Z. Williams during the Clinical Center’s construction. Dr. Cotlove seized the opportunity to design and equip the most advanced and functional clinical chemistry facility in the country.

Dr. Cotlove’s career exemplified the progress seen in medical research and technology. He designed the electronic chloridometer that bears his name, in spite of published reports that such an approach was theoretically impossible. He used this innovative skill to develop new instruments and methods at the Clinical Center. Many recognized him as an expert in clinical chemistry, computer programming, systems design for laboratory operations, and automation of analytical instruments.
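The chloridometer is a coulometric titrator: silver ions generated at constant current from a silver electrode precipitate chloride as AgCl, and the time to the amperometrically detected end point gives the amount of chloride directly through Faraday’s law (stated here in outline):

\[ n_{\mathrm{Cl^-}} \;=\; n_{\mathrm{Ag^+}} \;=\; \frac{I\,t}{F}, \qquad F = 96{,}485\ \mathrm{C\ mol^{-1}} \]

so that, in principle, the measurement requires only a constant current source and a clock rather than standardized reagents.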

Effects of Automation on Laboratory Diagnosis

George Z. Williams

There are four primary effects of laboratory automation on the practice of medicine: the range of laboratory support is being greatly extended, both for diagnosis and for guidance of therapeutic management; the new feasibility of multiphasic periodic health evaluation promises effective health and manpower conservation in the future; substantially lowered unit cost for laboratory analysis will permit more extensive use of comprehensive laboratory medicine in everyday practice; and there is a real and growing danger of naive acceptance of, and overconfidence in, the reliability and accuracy of automated analysis and computer processing without critical evaluation. Erroneous results can jeopardize the patient’s welfare. Every physician has the responsibility to obtain proof of accuracy and reliability from the laboratories which serve his patients.

Mario Werner

Dr. Werner received his medical degree from the University of Zurich, Switzerland, in 1956. After specializing in internal medicine at the University Clinic in Basel, he came to the United States, as a fellow of the Swiss Academy of Medical Sciences, to work at NIH and at the Rockefeller University. From 1964 to 1966, he served as chief of the Central Laboratory at the Klinikum Essen, Ruhr-University, Germany. In 1967, he returned to the US, joining the Division of Clinical Pathology and Laboratory Medicine at the University of California, San Francisco, as an assistant professor. Three years later, he became Associate Professor of Pathology and Laboratory Medicine at Washington University in St. Louis, where he was instrumental in establishing the training program in laboratory medicine. In 1972, he was appointed Professor of Pathology at The George Washington University in Washington, DC.

Norbert Tietz

Professor Norbert W. Tietz received the degree of Doctor of Natural Sciences from the Technical University Stuttgart, Germany, in 1950. In 1954 he immigrated to the United States where he subsequently held positions or appointments at several Chicago area institutions including the Mount Sinai Hospital Medical Center, Chicago Medical School/University of Health Sciences and Rush Medical College.

Professor Tietz is best known as the editor of the Fundamentals of Clinical Chemistry. This book, now in its sixth edition, remains a primary information source for both students and educators in laboratory medicine. It was the first modern textbook that integrated clinical chemistry with the basic sciences and pathophysiology.

Throughout his career, Dr. Tietz taught a range of students from the undergraduate through the post-graduate level, including (1) medical technology students, (2) medical students, (3) clinical chemistry graduate students, (4) pathology residents, and (5) practicing chemists. For example, in the late 1960s he began the first master of science degree program in clinical chemistry in the United States at the Chicago Medical School. This program subsequently evolved into one of the first Ph.D. programs in clinical chemistry.

Automation and other recent developments in clinical chemistry.

Griffiths J.

http://www.ncbi.nlm.nih.gov/pubmed/1344702

The decade 1980 to 1990 was the most progressive period in the short, but turbulent, history of clinical chemistry. New techniques and the instrumentation needed to perform assays have opened a chemical Pandora’s box. Multichannel analyzers, the base spectrophotometric key to automated laboratories, have become almost perfect. The extended use of the antigen-monoclonal antibody reaction with increasingly sensitive labels has extended analyte detection routinely into the picomole/liter range. Devices that aid the automation of serum processing and distribution of specimens are emerging. Laboratory computerization has significantly matured, permitting better integration of laboratory instruments, improving communication between laboratory personnel and the patient’s physician, and facilitating the use of expert systems and robotics in the chemistry laboratory.

Automation and Expert Systems in a Core Clinical Chemistry Laboratory
Streitberg, GT, et al.  JALA 2009;14:94–105

Clinical pathology or laboratory medicine has a great influence on clinical decisions, and 60–70% of the most important decisions on admission, discharge, and medication are based on laboratory results.1 As we learn more about clinical laboratory results and incorporate them in outcome optimization schemes, the laboratory will play a more pivotal role in the management of patients and their eventual outcomes.2 It has been stated that the development of information technology and automation in laboratory medicine has allowed laboratory professionals to keep pace with the growth in workload.

The reasons to automate and the impacts of automation are similar: reduction in errors, increase in productivity, and improvement in safety. Advances in technology in clinical chemistry, which have included total laboratory automation, call for changes in job responsibilities to include skills in information technology, data management, instrumentation, patient preparation for diagnostic analysis, interpretation of pathology results, dissemination of knowledge and information to patients and other health staff, as well as skills in research.

The clinical laboratory has become so productive, particularly in chemistry and immunology, and the labor, instrument, and reagent costs so well determined, that today a physician’s medical decisions are 80% determined by the clinical laboratory. Medical information systems have lagged far behind. Why is that? Because the decision for an MIS has historically been based on billing capture. Moreover, the historical use of chemical profiles was quite good at validating healthy status in an outpatient population, but the profiles became restricted under Diagnosis Related Groups. Thus it came to be that diagnostics were considered a “commodity,” and in order to be competitive, a laboratory had to offer “high complexity” tests alongside a large volume of “moderate complexity” tests.


Pentose Shunt, Electron Transfer, Galactose, more Lipids in brief


Reviewer and Curator: Larry H. Bernstein, MD, FCAP

Pentose Shunt, Electron Transfer, Galactose, and other Lipids in brief

This is a continuation of the series of articles that spans the horizon of the genetic code and the progression in complexity from genomics to proteomics, which must be completed before proceeding to metabolomics and multi-omics. At this point we have covered genomics, transcriptomics, signaling, and carbohydrate metabolism in considerable detail. In carbohydrates, a few topics still need attention:
(1) the pentose phosphate shunt;
(2) H+ transfer;
(3) galactose;
(4) more lipids.
Then we move on to proteins and proteomics.

Summary of this series:

The outline of what I am presenting in this series is as follows:

  1. Signaling and Signaling Pathways
    http://pharmaceuticalintelligence.com/2014/08/12/signaling-and-signaling-pathways/
  2. Signaling transduction tutorial.
    http://pharmaceuticalintelligence.com/2014/08/12/signaling-transduction-tutorial/
  3. Carbohydrate metabolism
    http://pharmaceuticalintelligence.com/2014/08/13/carbohydrate-metabolism/

Selected References to Signaling and Metabolic Pathways published in this Open Access Online Scientific Journal include the following:

http://pharmaceuticalintelligence.com/2014/08/14/selected-references-to-signaling-and-metabolic-pathways-in-leaders-in-pharmaceutical-intelligence/

  4. Lipid metabolism

4.1  Studies of respiration lead to Acetyl CoA
http://pharmaceuticalintelligence.com/2014/08/18/studies-of-respiration-lead-to-acetyl-coa/

4.2 The multi-step transfer of phosphate bond and hydrogen exchange energy
http://pharmaceuticalintelligence.com/2014/08/19/the-multi-step-transfer-of-phosphate-bond-and-hydrogen-exchange-energy/

5. Pentose shunt, electron transfers, galactose, and other lipids in brief

6. Protein synthesis and degradation

7.  Subcellular structure

8. Impairments in pathological states: endocrine disorders; stress
hypermetabolism; cancer.

Section I. Pentose Shunt

Bernard L. Horecker’s Contributions to Elucidating the Pentose Phosphate Pathway

Nicole Kresge, Robert D. Simoni, and Robert L. Hill

The Enzymatic Conversion of 6-Phosphogluconate to Ribulose-5-Phosphate and Ribose-5-Phosphate (Horecker, B. L., Smyrniotis, P. Z., and Seegmiller, J. E. J. Biol. Chem. 1951; 193: 383–396)

Bernard Horecker

Bernard Leonard Horecker (born 1914) began his training in enzymology in 1936 as a
graduate student at the University of Chicago in the laboratory of T. R. Hogness.
His initial project involved studying succinic dehydrogenase from beef heart using
the Warburg manometric apparatus. However, when Erwin Hass arrived from Otto
Warburg’s laboratory he asked Horecker to join him in the search for an enzyme
that would catalyze the reduction of cytochrome c by reduced NADP. This marked
the beginning of Horecker’s lifelong involvement with the pentose phosphate pathway.

During World War II, Horecker left Chicago and got a job at the National Institutes of
Health (NIH) in Frederick S. Brackett’s laboratory in the Division of Industrial Hygiene.
As part of the wartime effort, Horecker was assigned the task of developing a method
to determine the carbon monoxide hemoglobin content of the blood of Navy pilots
returning from combat missions. When the war ended, Horecker returned to research
in enzymology and began studying the reduction of cytochrome c by the succinic
dehydrogenase system.

Shortly after he began these investigations, Horecker was approached by
future Nobel laureate Arthur Kornberg, who was convinced that enzymes were the
key to understanding intracellular biochemical processes. Kornberg suggested
they collaborate, and the two began to study the effect of cyanide on the succinic
dehydrogenase system. Cyanide had previously been found to inhibit enzymes
containing a heme group, with the exception of cytochrome c. However, Horecker
and Kornberg found that

  • cyanide did in fact react with cytochrome c and concluded that
  • previous groups had failed to perceive this interaction because
    • the shift in the absorption maximum was too small to be detected by
      visual examination.

Two years later, Kornberg invited Horecker and Leon Heppel to join him in setting up
a new Section on Enzymes in the Laboratory of Physiology at the NIH. Their Section on Enzymes eventually became part of the new Experimental Biology and Medicine
Institute and was later renamed the National Institute of Arthritis and Metabolic
Diseases.

Horecker and Kornberg continued to collaborate, this time on

  • the isolation of DPN and TPN.

By 1948 they had amassed a huge supply of the coenzymes and were able to
present Otto Warburg, the discoverer of TPN, with a gift of 25 mg of the coenzyme
when he came to visit. Horecker also collaborated with Heppel on 

  • the isolation of cytochrome c reductase from yeast and 
  • eventually accomplished the first isolation of the flavoprotein from
    mammalian liver.

Along with his lab technician Pauline Smyrniotis, Horecker began to study

  • the enzymes involved in the oxidation of 6-phosphogluconate and the
    metabolic intermediates formed in the pentose phosphate pathway.

Joined by Horecker’s first postdoctoral student, J. E. Seegmiller, they worked
out a new method for the preparation of glucose 6-phosphate and 6-phosphogluconate, 
both of which were not yet commercially available.
As reported in the Journal of Biological Chemistry (JBC) Classic reprinted here, they

  • purified 6-phosphogluconate dehydrogenase from brewer’s yeast (1), and
  • by coupling the reduction of TPN to its reoxidation by pyruvate in
    the presence of lactic dehydrogenase,
  • they were able to show that the first product of 6-phosphogluconate oxidation,
    in addition to carbon dioxide, was ribulose 5-phosphate.
  • This pentose ester was then converted to ribose 5-phosphate by a
    pentose-phosphate isomerase.

They were able to separate ribulose 5-phosphate from ribose 5-phosphate and
demonstrate their interconversion using a recently developed nucleotide separation
technique called ion-exchange chromatography. Horecker and Seegmiller later
showed that 6-phosphogluconate metabolism by enzymes from mammalian
tissues also produced the same products.

Bernard Horecker

http://www.jbc.org/content/280/29/e26/F1.small.gif

Over the next several years, Horecker played a key role in elucidating the

  • remaining steps of the pentose phosphate pathway.

His total contributions included the discovery of three new sugar phosphate esters,
ribulose 5-phosphate, sedoheptulose 7-phosphate, and erythrose 4-phosphate, and
three new enzymes, transketolase, transaldolase, and pentose-phosphate 3-epimerase.
The outline of the complete pentose phosphate cycle was published in 1955
(2). Horecker’s personal account of his work on the pentose phosphate pathway can
be found in his JBC Reflection (3).1

Horecker’s contributions to science were recognized with many awards and honors
including the Washington Academy of Sciences Award for Scientific Achievement in
Biological Sciences (1954) and his election to the National Academy of Sciences in
1961. Horecker also served as president of the American Society of Biological
Chemists (now the American Society for Biochemistry and Molecular Biology) in 1968.

Footnotes

  • 1 All biographical information on Bernard L. Horecker was taken from Ref. 3.

References

  1. Horecker, B. L., and Smyrniotis, P. Z. (1951) Phosphogluconic acid dehydrogenase
     from yeast. J. Biol. Chem. 193, 371–381
  2. Gunsalus, I. C., Horecker, B. L., and Wood, W. A. (1955) Pathways of carbohydrate
     metabolism in microorganisms. Bacteriol. Rev. 19, 79–128
  3. Horecker, B. L. (2002) The pentose phosphate pathway. J. Biol. Chem. 277, 47965–47971

The Pentose Phosphate Pathway (also called Phosphogluconate Pathway, or Hexose
Monophosphate Shunt) is depicted with structures of intermediates in Fig. 23-25
p. 863 of Biochemistry, by Voet & Voet, 3rd Edition. The linear portion of the pathway
carries out oxidation and decarboxylation of glucose-6-phosphate, producing the
5-C sugar ribulose-5-phosphate.

Glucose-6-phosphate Dehydrogenase catalyzes oxidation of the aldehyde
(hemiacetal), at C1 of glucose-6-phosphate, to a carboxylic acid in ester linkage
(lactone). NADP+ serves as the electron acceptor.

6-Phosphogluconolactonase catalyzes hydrolysis of the ester linkage (lactone)
resulting in ring opening. The product is 6-phosphogluconate. Although ring opening
occurs in the absence of a catalyst, 6-Phosphogluconolactonase speeds up the
reaction, decreasing the lifetime of the highly reactive, and thus potentially
toxic, 6-phosphogluconolactone.

Phosphogluconate Dehydrogenase catalyzes oxidative decarboxylation of
6-phosphogluconate to yield the 5-C ketose ribulose-5-phosphate. The
hydroxyl at C3 (C2 of the product) is oxidized to a ketone. This promotes loss
of the carboxyl at C1 as CO2.  NADP+ again serves as oxidant (electron acceptor).
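For reference, the three oxidative-branch reactions just described can be summarized as follows (a standard textbook formulation, not quoted from the source above):

glucose-6-P + NADP+  →  6-phosphoglucono-δ-lactone + NADPH + H+        (Glucose-6-phosphate Dehydrogenase)
6-phosphoglucono-δ-lactone + H2O  →  6-phosphogluconate                (6-Phosphogluconolactonase)
6-phosphogluconate + NADP+  →  ribulose-5-P + CO2 + NADPH              (Phosphogluconate Dehydrogenase)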

pglucose hd

https://www.rpi.edu/dept/bcbp/molbiochem/MBWeb/mb2/part1/images/pglucd.gif

Reduction of NADP+ (as with NAD+) involves transfer of 2e- plus 1H+ to the
nicotinamide moiety.

nadp

NADPH, a product of the Pentose Phosphate Pathway, functions as a reductant in
various synthetic (anabolic) pathways, including fatty acid synthesis.

NAD+ serves as electron acceptor in catabolic pathways in which metabolites are
oxidized. The resultant NADH is reoxidized by the respiratory chain, producing ATP.

nadnadp

https://www.rpi.edu/dept/bcbp/molbiochem/MBWeb/mb2/part1/images/nadnadp.gif

Regulation: 
Glucose-6-phosphate Dehydrogenase is the committed step of the Pentose
Phosphate Pathway. This enzyme is regulated by availability of the substrate NADP+.
As NADPH is utilized in reductive synthetic pathways, the increasing concentration of
NADP+ stimulates the Pentose Phosphate Pathway, to replenish NADPH.

The remainder of the Pentose Phosphate Pathway accomplishes conversion of the
5-C ribulose-5-phosphate to the 5-C product ribose-5-phosphate, or to the 3-C
glyceraldehyde-3-phosphate and the 6-C fructose-6-phosphate (reactions 4 to 8,
p. 863).
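A compact way to keep track of the carbon bookkeeping in this non-oxidative branch is the overall stoichiometry (standard textbook form, not quoted from Voet & Voet):

3 ribulose-5-P  ⇌  2 fructose-6-P + glyceraldehyde-3-P

i.e., 3 × 5 carbons = (2 × 6) + 3 carbons, carried out by the isomerase, epimerase, transketolase, and transaldolase steps.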

Transketolase utilizes as prosthetic group thiamine pyrophosphate (TPP), a
derivative of vitamin B1.

tpp

https://www.rpi.edu/dept/bcbp/molbiochem/MBWeb/mb2/part1/images/tpp.gif

Thiamine pyrophosphate binds at the active sites of enzymes in a “V” conformation. The amino group of the aminopyrimidine moiety is close to the dissociable proton,
and serves as the proton acceptor. This proton transfer is promoted by a glutamate
residue adjacent to the pyrimidine ring.

The positively charged N in the thiazole ring acts as an electron sink, promoting
C–C bond cleavage. The 3-C aldose glyceraldehyde-3-phosphate is released, while
a 2-C fragment remains bound to TPP.
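The transketolase reaction that this TPP chemistry makes possible can be written, in its standard textbook form, as:

xylulose-5-P + ribose-5-P  ⇌  sedoheptulose-7-P + glyceraldehyde-3-P

with the 2-C (glycolaldehyde) fragment carried on TPP between the donor and acceptor sugars.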

FASEB J. 1996 Mar;10(4):461-70.   http://www.ncbi.nlm.nih.gov/pubmed/8647345

Reviewer

The importance of this pathway can easily be underestimated.  The main source of
energy in respiration was considered to be tied to the

  • high-energy phosphate bond formed in oxidative phosphorylation, whereas the
    pentose shunt generates NADPH, which synthetic pathways utilize and convert back to NADP+.

Glycolysis in skeletal muscle is, in the short term, dependent on conversion of muscle
glycogen to glucose, and there is a buildup of lactic acid – used as fuel by the heart.  This
pathway accounts for roughly 5% of metabolic needs, varying between tissues,
depending on their priority for synthetic functions, such as endocrine or nucleic
acid production.

The mature erythrocyte and the ocular lens are both enucleate.  About 85% of their
metabolic energy needs are met by anaerobic glycolysis.  Consider the erythrocyte
somewhat different from the lens because it has iron-based hemoglobin, which
exchanges O2 and CO2 in the pulmonary alveoli and, in that role, is a rapid
regulator of H+ and pH in the circulation (carbonic anhydrase reaction), and also, to
a lesser extent, in the kidney cortex, where H+ is removed from the circulation to
the urine, making the blood less acidic, except when there is a reciprocal loss of K+.
This is why we need a nomogram to distinguish respiratory from renal acidosis or
alkalosis.  In the case of chronic renal disease, there is substantial loss of
functioning nephrons, loss of the countercurrent multiplier, and a reduced capacity to
remove H+.  So there is both a metabolic acidosis and a hyperkalemia, with increased
serum creatinine; but the creatinine derives only from muscle mass – not accurately
reflecting total body mass, which includes the visceral organs.  The only accurate
measure of lean body mass would be through its linear relationship with circulating,
hepatically produced transthyretin (TTR).

The pentose phosphate shunt is essential for

  • the generation of nucleic acids, in regeneration of red cells and lens – requiring NADPH.

Insofar as the red blood cell is engaged in O2 exchange, the lactic dehydrogenase
isoenzyme composition is the same as that of the heart. What about the lens and cornea
of the eye, and platelets?  The explanation does appear to be more complex than
has been proposed and is not discussed here.

Section II. Mitochondrial NADH – NADP+ Transhydrogenase Reaction

There is also another consideration for the balance of di- and tri-phosphopyridine
nucleotides in their oxidized and reduced forms.  I have brought this into the
discussion because of the centrality of hydride transfer to mitochondrial oxidative
phosphorylation and its energetics – for catabolism and synthesis.

The role of transhydrogenase in the energy-linked reduction of TPN 

Fritz Hommes, Ronald W. Estabrook

The Wenner-Gren Institute, University of Stockholm,
Stockholm, Sweden
Biochemical and Biophysical Research Communications 11 (1), 2 Apr 1963, pp. 1–6
http://dx.doi.org/10.1016/0006-291X(63)90017-2

In 1959, Klingenberg and Slenczka (1) made the important observation that incubation of isolated

  • liver mitochondria with DPN-specific substrates or succinate in the absence of phosphate
    acceptor resulted in a rapid and almost complete reduction of the intramitochondrial TPN.

These and related findings led Klingenberg and co-workers (1-3) to postulate

  • the occurrence of an ATP-controlled transhydrogenase reaction catalyzing the reduction of
    mitochondrial TPN by DPNH. A similar conclusion was reached by Estabrook and Nissley (4).

The present paper describes the demonstration and some properties of an

  • energy-dependent reduction of TPN by DPNH, catalyzed by submitochondrial particles.

Preliminary reports of some of these results have already appeared (5, 6), and a
complete account is being published elsewhere (7). We have studied the energy-dependent
reduction of TPN by DPNH with submitochondrial particles from both
rat liver and beef heart. Rat liver particles were prepared essentially according to
the method of Kielley and Bronk (8), and beef heart particles by the method of
Low and Vallin (9).

PYRIDINE NUCLEOTIDE TRANSHYDROGENASE. II. DIRECT EVIDENCE FOR
AND MECHANISM OF THE TRANSHYDROGENASE REACTION*

BY NATHAN O. KAPLAN, SIDNEY P. COLOWICK, AND ELIZABETH F. NEUFELD
(From the McCollum-Pratt Institute, The Johns Hopkins University, Baltimore,
Maryland)  J. Biol. Chem. 1952, 195:107–119.
http://www.jbc.org/content/195/1/107.citation

NO Kaplan

Sidney Colowick

Elizabeth Neufeld

Kaplan studied carbohydrate metabolism in the liver under David M. Greenberg at the
University of California, Berkeley medical school. He earned his Ph.D. in 1943. From
1942 to 1944, Kaplan participated in the Manhattan Project. From 1945 to 1949,
Kaplan worked with Fritz Lipmann at Massachusetts General Hospital to study
coenzyme A. He worked at the McCollum-Pratt Institute of Johns Hopkins University
from 1950 to 1957. In 1957, he was recruited to head a new graduate program in
biochemistry at Brandeis University. In 1968, Kaplan moved to the University of
California, San Diego, where he studied the role of lactate dehydrogenase in cancer.
He also founded a colony of nude mice, a strain of laboratory mice useful in the study
of cancer and other diseases. He was a member of the National Academy of Sciences.
One of Kaplan’s students at the University of California was genomic researcher
Craig Venter. He was, with Sidney Colowick, a founding editor of the scientific book
series Methods in Enzymology.

http://books.nap.edu/books/0309049768/xhtml/images/img00009.jpg

Colowick became Carl Cori’s first graduate student and earned his Ph.D. at
Washington University in St. Louis in 1942, continuing to work with the Coris (who
later shared a Nobel Prize) for 10 years. At the age of 21, he published his first paper
on the classical studies of glucose 1-phosphate (2), and a year later he was the sole
author on a paper on the synthesis of mannose 1-phosphate and galactose 1-phosphate (3).
Both papers were published in the JBC. During his time in the Cori lab,
Colowick was involved in many projects. Along with Herman Kalckar he discovered
myokinase (distinguished from adenylate kinase from liver), which is now known as
adenyl kinase. This discovery proved to be important in understanding transphosphorylation
reactions in yeast and animal cells. Colowick’s interest then turned to
the conversion of glucose to polysaccharides, and he and Earl Sutherland (who
will be featured in an upcoming JBC Classic) published an important paper on the
formation of glycogen from glucose using purified enzymes (4). In 1951, Colowick
and Nathan Kaplan were approached by Kurt Jacoby of Academic Press to do a
series comparable to Methoden der Fermentforschung. Colowick and Kaplan
planned and edited the first 6 volumes of Methods in Enzymology, launching in 1955
what became a series of well-known and useful handbooks. He continued as
Editor of the series until his death in 1985.

http://bioenergetics.jbc.org/highwire/filestream/9/field_highwire_fragment_image_s/0/F1.small.gif

The Structure of NADH: the Work of Sidney P. Colowick

Nicole Kresge, Robert D. Simoni and Robert L. Hill

On the Structure of Reduced Diphosphopyridine Nucleotide

(Pullman, M. E., San Pietro, A., and Colowick, S. P. (1954)

J. Biol. Chem. 206, 129–141)

Elizabeth Neufeld
·  Born: September 27, 1928 (age 85), Paris, France
·  Education: Queens College, City University of New York; University of California,
Berkeley

http://fdb5.ctrl.ucla.edu/biological-chemistry/institution/photo?personnel%5fid=45290&max_width=155&max_height=225

In Paper I (1), indirect evidence was presented for the following transhydrogenase
reaction, catalyzed by an enzyme present in extracts of Pseudomonas
fluorescens:

TPNH2 + DPN → TPN + DPNH2

The evidence was obtained by coupling TPN-specific dehydrogenases with the
transhydrogenase and observing the reduction of large amounts of diphosphopyridine
nucleotide (DPN) in the presence of catalytic amounts of triphosphopyridine
nucleotide (TPN).

In this paper, data will be reported showing the direct

  • interaction between TPNH2 and DPN, in the presence of transhydrogenase alone,
  • to yield products having the properties of TPN and DPNH2.

Information will be given indicating that the reaction involves

  • a transfer of electrons (or hydrogen) rather than a transfer of phosphate.

Experiments dealing with the kinetics and reversibility of the reaction, and with the
nature of the products, suggest that the reaction is a complex one, not fully described
by the above formulation.

Materials and Methods [edited]

The TPN and DPN used in these studies were preparations of approximately 75
percent purity and were prepared from sheep liver by the chromatographic procedure
of Kornberg and Horecker (unpublished). Reduced DPN was prepared enzymatically
with alcohol dehydrogenase as described elsewhere (2). Reduced TPN was prepared
by treating TPN with hydrosulfite. This treated mixture contained 2 μmoles of TPNH2 per ml.
The preparations of desamino DPN and reduced desamino DPN have been
described previously (2, 3). Phosphogluconate was a barium salt which was kindly
supplied by Dr. B. L. Horecker. Cytochrome c was obtained from the Sigma Chemical Company.

Transhydrogenase preparations with an activity of 250 to 7000 units per mg. were
used in these studies. The DPNase was a purified enzyme, which was obtained
from zinc-deficient Neurospora and had an activity of 5500 units per mg. (4). The
alcohol dehydrogenase was a crystalline preparation isolated from yeast according to the procedure of Racker (5).

Phosphogluconate dehydrogenase from yeast and a 10 per cent pure preparation of
the TPN-specific cytochrome c reductase from liver (6) were gifts of Dr. B. L. Horecker.

DPN was assayed with alcohol and crystalline yeast alcohol dehydrogenase. TPN was
determined by the specific phosphogluconic acid dehydrogenase from yeast and also
by the specific isocitric dehydrogenase from pig heart. Reduced DPN was
determined by the use of acetaldehyde and the yeast alcohol dehydrogenase.
All of the above assays were based on the measurement of optical density changes
at 340 mμ. TPNH2 was determined with the TPN-specific cytochrome c reductase
system. The assay followed the increase in optical density at 550 mμ as a measure of
the reduction of cytochrome c after cytochrome c reductase was added to initiate the
reaction. The changes at 550 mμ are plotted for different concentrations of TPNH2 in
Fig. 3, a. The method is an extremely sensitive and accurate assay for reduced TPN.
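These optical-density assays rest on the Beer–Lambert law. As a minimal illustrative sketch (not code from the paper), one can convert an observed absorbance change at 340 mμ (nm) into micromoles of reduced pyridine nucleotide, using the standard extinction coefficient of NAD(P)H at 340 nm (about 6.22 mM-1 cm-1); the cuvette volume and path length below are assumed values for illustration:

# Minimal sketch: converting an absorbance change at 340 nm into the amount
# of reduced pyridine nucleotide (DPNH2 / TPNH2) formed in a cuvette assay.
# Assumes the standard extinction coefficient for NAD(P)H at 340 nm.

EPSILON_340_mM = 6.22   # mM^-1 cm^-1, reduced pyridine nucleotide at 340 nm
PATH_LENGTH_CM = 1.0    # assumed standard cuvette path length

def reduced_nucleotide_umol(delta_a340: float, volume_ml: float) -> float:
    """Return micromoles of reduced nucleotide from an absorbance change."""
    conc_mM = delta_a340 / (EPSILON_340_mM * PATH_LENGTH_CM)  # mM = umol per ml
    return conc_mM * volume_ml

# Example: a decrease of 0.31 at 340 nm in an assumed 3 ml reaction mixture
print(round(reduced_nucleotide_umol(0.31, 3.0), 3), "umol")  # ~0.15 umol

The same arithmetic applies to the 550 mμ cytochrome c assay, with the appropriate extinction coefficient for reduced cytochrome c substituted.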

Results
[No Figures or Table shown]

Formation of DPNH2 from TPNH2 and DPN – Fig. 1, a illustrates the direct reaction
between TPNH2 and DPN to form DPNH2. The reaction was carried out by incubating
TPNH2 with DPN in the presence of the transhydrogenase, yeast alcohol dehydrogenase,
and acetaldehyde. Since the yeast dehydrogenase is specific for DPN,

  • a decrease in absorption at 340 mμ can only be due to the formation of reduced DPN. It can
    be seen from the curves in Fig. 1, a that a decrease in optical density occurs only in the
    presence of the complete system.

The Pseudomonas enzyme is essential for the formation of DPNH2. It is noteworthy
that, under the conditions of reaction in Fig. 1, a,

  • approximately 40 per cent of the TPNH2 reacted with the DPN.

Fig. 1, a also indicates that magnesium is not required for transhydrogenase activity.
The reaction between TPNH2 and DPN takes place in the absence of alcohol
dehydrogenase and acetaldehyde. This can be demonstrated by incubating the
two pyridine nucleotides with the transhydrogenase for varying periods (4 to 36
minutes) and then assaying for reduced DPN by the yeast alcohol dehydrogenase technique.

FIG. 1. Evidence for enzymatic reaction of TPNH2 with DPN. (a) Rate of formation of DPNH2.
(b) DPN disappearance and TPN formation.
(c) Identification of desamino DPNH2 as product of reaction of TPNH2 with desamino DPN.

Table I (Experiment 1) summarizes the results of such experiments in which TPNH2
was added with varying amounts of DPN.

  • In the absence of DPN, no DPNH2 was formed. This eliminates the possibility that TPNH2 is
    converted to DPNH2 by removal of the monoester phosphate grouping.

The data also show that the extent of the reaction is

  • dependent on the concentration of DPN.

Even with a large excess of DPN, only approximately 40 per cent of the TPNH2 reacts
to form reduced DPN. It is of importance to emphasize that in the above
experiments, which were carried out in phosphate buffer, the extent of the reaction

  • is the same in the presence or absence of acetaldehyde and alcohol dehydrogenase.

With an excess of DPN and different levels of TPNH2,

  • the amount of reduced DPN which is formed is
  • dependent on the concentration of TPNH2 (Table I, Experiment 2).
  • In all cases, the amount of DPNH2 formed is approximately
    40 per cent of the added reduced TPN.

Formation of TPN – The reaction between TPNH2 and DPN should yield TPN as well as DPNH2.
The formation of TPN is demonstrated in Table I and in Fig. 1, b. In this experiment,
TPNH2 was allowed to react with DPN in the presence of the transhydrogenase
(Ps.), and then alcohol and alcohol dehydrogenase were added. This
would result in reduction of the residual DPN, and the sample incubated with the
transhydrogenase contained less DPN. After the completion of the alcohol
dehydrogenase reaction, phosphogluconate and phosphogluconic dehydrogenase
(PGAD) were added to reduce the TPN. The addition of this TPN-specific
dehydrogenase results in an

  • increase in optical density in the enzymatically treated sample.
  • This change represents the amount of TPN formed.

It is of interest to point out that, after addition of both dehydrogenases,

  • the total optical density change is the same in both samples.

Therefore it is evident that

  • for every mole of DPN disappearing, a mole of TPN appears.

Balance of All Components of Reaction

Table II (Experiment 1) shows that,

  • if measurements for all components of the reaction are made, one can demonstrate
    that there is
  • a mole-for-mole disappearance of TPNH2 and DPN, and
  • a stoichiometric appearance of TPN and DPNH2.
  1. The oxidized forms of the nucleotides were assayed as described above,
  2. the reduced form of TPN was determined by the TPNH2-specific cytochrome c reductase, and
  3. the DPNH2 by means of yeast alcohol dehydrogenase plus acetaldehyde.
This stoichiometric balance is true, however,

  • only when the analyses for the oxidized forms are determined directly on the reaction

When analyses are made after acidification of the incubated reaction mixture,

  • the values found for DPN and TPN are much lower than those obtained by direct analysis.

This discrepancy in the balance when analyses for the oxidized nucleotides are
carried out in acid is indicated in Table II (Experiment 2). The results, when
compared with the findings in Experiment 1, are quite striking.
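In equation form, the balance experiments amount to the statement (restated here for clarity, not quoted from the paper) that the moles of TPNH2 and DPN consumed equal the moles of TPN and DPNH2 formed:

TPNH2 consumed = DPN consumed = TPN formed = DPNH2 formed

with the proviso that the oxidized nucleotides must be assayed directly, since acidification lowers the recovered TPN and DPN.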

Reaction of TPNHz with Desamino DPN

Desamino DPN

  • reacts with the transhydrogenase system at the same rate as does DPN (2).

This was of value in establishing the fact that

  • the transhydrogenase catalyzes a transfer of hydrogen rather than a phosphate transfer reaction.

The reaction between desamino DPN and TPNH2 can be written in two ways:

TPNH2 + desamino DPN → TPN + desamino DPNH2

TPNH2 + desamino DPN → DPNH2 + desamino TPN

If the reaction involved an electron transfer,

  • desamino DPNH2 would be formed, whereas
  • phosphate transfer would result in the production of reduced DPN (DPNH2).

Desamino DPNH2 can be distinguished from DPNH2 by its

  • slower rate of reaction with yeast alcohol dehydrogenase (2, 3).

Fig. 1, c illustrates that, when desamino DPN reacts with TPNH2, 

  • the product of the reaction is desamino DPNHZ.

This is indicated by the slow rate of oxidation of the product by yeast alcohol
dehydrogenase and acetaldehyde.

From the above evidence phosphate transfer 

  • has been ruled out as a possible mechanism for the transhydrogenase reaction.

Inhibition by TPN

As mentioned in Paper I and as will be discussed later in this paper,

  • the transhydrogenase reaction does not appear to be readily reversible.

This is surprising, particularly since only approximately 

  • 40 per cent of the TPNHz undergoes reaction with DPN
    under the conditions described above. It was therefore thought that
  • the TPN formed might inhibit further transfer of electrons from TPNH2.

Table III summarizes data showing the

  • strong inhibitory effect of TPN on the reaction between TPNH2 and DPN.

It is evident from the data that

  • TPN concentration is a factor in determining the extent of the reaction.

Effect of Removal of TPN on Extent of Reaction

A purified DPNase from Neurospora has been found

  • to cleave the nicotinamide riboside linkages of the oxidized forms of both TPN and DPN
  • without acting on the reduced forms of both nucleotides (4).

It has been found, however, that

  • the DPNase hydrolyzes desamino DPN at a very slow rate (3).

In the reaction between TPNH2 and desamino DPN to give TPN and desamino DPNH2,

  • TPN is the only component of this reaction attacked by the Neurospora enzyme
    at an appreciable rate.

It was thought that addition of the DPNase to the TPNH2–desamino DPN
transhydrogenase reaction mixture

  • would split the TPN formed and permit the reaction to go to completion.

This, indeed, proved to be the case, as indicated in Table IV, where addition of
the DPNase with desamino DPN results in almost

  • a stoichiometric formation of desamino DPNHz
  • and a complete disappearance of TPNH2.

Extent of Reaction in Buffers Other Than Phosphate

All the reactions described above were carried out in phosphate buffer of pH 7.5.
If the transhydrogenase reaction between TPNH2 and DPN is run at the same pH
in tris(hydroxymethyl)aminomethane buffer (TRIS buffer)

  • with acetaldehyde and alcohol dehydrogenase present,
  • the reaction proceeds much further toward completion
  • than is the case under the same conditions in a phosphate medium (Fig. 2, a).

The importance of phosphate concentration in governing the extent of the reaction
is illustrated in Fig. 2, b.

In the presence of TRIS the transfer reaction

  • seems to go further toward completion in the presence of acetaldehyde
    and 
    alcohol dehydrogenase
  • than when these two components are absent.

This is not true of the reaction in phosphate,

  • in which the extent is independent of the alcohol dehydrogenase system.

Removal of one of the products of the reaction (DPNH2) in TRIS thus

  • appears to permit the reaction to approach completion, whereas
  • in phosphate this removal is without effect on the final course of the reaction.

The extent of the reaction in TRIS in the absence of alcohol dehydrogenase
and acetaldehyde is

  • somewhat greater than when the reaction is run in phosphate.

TPN also inhibits the reaction of TPNHz with DPN in TRIS medium, but the inhibition

  • is not as marked as when the reaction is carried out in phosphate buffer.

Reversibility of Transhydrogenase Reaction;

Reaction between DPNHz and TPN

In Paper I, it was mentioned that no reversal of the reaction could be achieved in a system containing alcohol, alcohol dehydrogenase, TPN, and catalytic amounts of
DPN.

When DPNH2 and TPN are incubated with the purified transhydrogenase, there is
also

  • no evidence for reversibility.

This is indicated in Table V, which shows that

  • there is no disappearance of DPNH2 in such a system.

It was thought that removal of the TPNH2, which might be formed in the reaction,
could promote the reversal of the reaction. Hence,

  • by using the TPNH2-specific cytochrome c reductase, one could
  1. not only accomplish the removal of any reduced TPN,
  2. but also follow the course of the reaction.

A system containing DPNH2, TPN, the transhydrogenase, the cytochrome c
reductase, and cytochrome c, however, gives

  • no reduction of the cytochrome c.

This is true for either TRIS or phosphate buffers.

Some positive evidence for the reversibility has been obtained by using a system
containing

  • DPNH2, TPNH2, cytochrome c, and the cytochrome c reductase in TRIS buffer.

In this case, there is, of course, reduction of cytochrome c by TPNH2, but,

  • when the transhydrogenase is present, there is
  • additional reduction over and above that due to the added TPNH2.

This additional reduction suggests that some reversibility of the reaction occurred
under these conditions. Fig. 3, b shows

  • the necessity of DPNH2 for this additional reduction.

Interaction of DPNHz with Desamino DPN-

If desamino DPN and DPNHz are incubated with the purified Pseudomonas enzyme,
there appears

  • to be a transfer of electrons to form desamino DPNHz.

This is illustrated in Fig. 4, a, which shows the

  • decreased rate of oxidation by the alcohol dehydrogenase system
  • after incubation with the transhydrogenase.
  • Incubation of desamino DPNH2 with DPN results in the formation of DPNH2,
  • which is detected by the faster rate of oxidation by the alcohol dehydrogenase system
  • after reaction of the pyridine nucleotides with the transhydrogenase (Fig. 4, b).

It is evident from the above experiments that

the transhydrogenase catalyzes an exchange of hydrogens between

  • the adenylic and inosinic pyridine nucleotides.

However, it is difficult to obtain any quantitative information on the rate or extent of
the reaction by the method used, because

  • desamino DPNHz also reacts with the alcohol dehydrogenase system,
  • although at a much slower rate than does DPNH2.

DISCUSSION

The results of the balance experiments seem to offer convincing evidence that
the transhydrogenase catalyzes the following reaction.

TPNH2 + DPN → DPNH2 + TPN

Since desamino DPNH2 is formed from TPNH2 and desamino DPN,

  • the reaction appears to involve an electron (or hydrogen) transfer
  • rather than a transfer of the monoester phosphate grouping of TPN.

A number of the findings reported in this paper are not readily understandable in
terms of the above simple formulation of the reaction. It is difficult to understand
the greater extent of the reaction in TRIS than in phosphate when acetaldehyde
and alcohol dehydrogenase are present.

One possibility is that an intermediate may be involved which is more easily converted
to reduced DPN in the TRIS medium. The existence of such an intermediate is also
suggested by the discrepancies noted in balance experiments, in which

  • analyses of the oxidized nucleotides after acidification showed
  • much lower values than those found by direct analysis.

These findings suggest that the reaction may involve

  • a 1-electron rather than a 2-electron transfer, with
  • the formation of acid-labile free radicals as intermediates.

The transfer of hydrogens from DPNHz to desamino DPN

  • to yield desamino DPNHz and DPN and the reversal of this transfer
  • indicate the unique role of the transhydrogenase
  • in promoting electron exchange between the pyridine nucleotides.

In this connection, it is of interest that alcohol dehydrogenase and lactic
dehydrogenase cannot duplicate this exchange between the DPN and
the desamino systems.  If one assumes that desamino DPN behaves
like DPN,

  • one might predict that the transhydrogenase would catalyze an
    exchange of electrons (or hydrogen) between DPNH2 and added DPN.

Since alcohol dehydrogenase alone

  • does not catalyze an exchange of electrons between the adenylic
    and inosinic pyridine nucleotides, this rules out the possibility
  • that the dehydrogenase is converted to a reduced intermediate
  • during electron exchange between DPNH2 and added DPN.

It is hoped to investigate this possibility with isotopically labeled DPN.
Experiments to test the interaction between TPN and desamino TPN are
also now in progress.

It seems likely that the transhydrogenase will prove capable of

  • catalyzing an exchange between TPN and TPNH2, as well as between DPN and DPNH2.
The observed inhibition by TPN of the reaction between TPNHz and DPN may
therefore

  • be due to a competition between DPN and TPN for the TPNH2.

SUMMARY

  1. Direct evidence for the following transhydrogenase reaction, catalyzed by an
     enzyme from Pseudomonas fluorescens, is presented.

TPNH2 + DPN → TPN + DPNH2

Balance experiments have shown that for every mole of TPNH2 disappearing,
1 mole of TPN appears, and that for each mole of DPNH2 generated, 1 mole of
DPN disappears. The oxidized nucleotides found at the end of the reaction,
however, show anomalous lability toward acid.

  2. The transhydrogenase also promotes the following reaction.

TPNH2 + desamino DPN → TPN + desamino DPNH2

This rules out the possibility that the transhydrogenase reaction involves a
phosphate transfer and indicates that the

  • enzyme catalyzes a shift of electrons (or hydrogen atoms).

The reaction of TPNH2 with DPN in 0.1 M phosphate buffer is strongly
inhibited by TPN; thus

  • it proceeds only to the extent of about 40 per cent or less, even
  • when DPNH2 is removed continuously by means of acetaldehyde
    and alcohol dehydrogenase.
  • In other buffers, in which TPN is less inhibitory, the reaction proceeds
    much further toward completion under these conditions.
  • The reaction in phosphate buffer proceeds to completion when TPN
    is removed as it is formed.
  3. DPNH2 does not react with TPN to form TPNH2 and DPN in the presence
     of transhydrogenase. Some evidence, however, has been obtained for
     the reversibility by using the following system:
  • DPNH2, TPNH2, cytochrome c, the TPNH2-specific cytochrome c reductase,
    and the transhydrogenase.
  4. Evidence is cited for the following reversible reaction, which is catalyzed
     by the transhydrogenase.

DPNH2 + desamino DPN ⇌ DPN + desamino DPNH2

  5. The results are discussed with respect to the possibility that the
     transhydrogenase reaction may
  • involve a 1-electron transfer with the formation of free radicals as intermediates.

 

BIBLIOGRAPHY

  1. Colowick, S. P., Kaplan, N. O., Neufeld, E. F., and Ciotti, M. M., J. Biol. Chem., 196, 95 (1952).
  2. Pullman, M. E., Colowick, S. P., and Kaplan, N. O., J. Biol. Chem., 194, 593 (1952).
  3. Kaplan, N. O., Colowick, S. P., and Ciotti, M. M., J. Biol. Chem., 194, 579 (1952).
  4. Kaplan, N. O., Colowick, S. P., and Nason, A., J. Biol. Chem., 191, 473 (1951).
  5. Racker, E., J. Biol. Chem., 184, 313 (1950).
  6. Horecker, B. L., J. Biol. Chem., 183, 593 (1950).

Section III.

Luis Federico Leloir, young

The Leloir pathway: a mechanistic imperative for three enzymes to change
the stereochemical configuration of a single carbon in galactose.

Frey PA.
FASEB J. 1996 Mar;10(4):461-70.    http://www.fasebj.org/content/10/4/461.full.pdf
PMID:8647345

The biological interconversion of galactose and glucose takes place only by way of
the Leloir pathway and requires the three enzymes galactokinase, galactose-1-P
uridylyltransferase, and UDP-galactose 4-epimerase.
The only biological importance of these enzymes appears to be to

  • provide for the interconversion of galactosyl and glucosyl groups.

Galactose mutarotase also participates by producing the galactokinase substrate
alpha-D-galactose from its beta-anomer. The galacto/gluco configurational change takes place at the level of the nucleotide sugar by an oxidation/reduction
mechanism in the active site of the epimerase NAD+ complex. The nucleotide portion
of UDP-galactose and UDP-glucose participates in the epimerization process in two ways:

1) by serving as a binding anchor that allows epimerization to take place at glycosyl-C-4 through weak binding of the sugar, and

2) by inducing a conformational change in the epimerase that destabilizes NAD+ and
increases its reactivity toward substrates.

Reversible hydride transfer is thereby facilitated between NAD+ and carbon-4
of the weakly bound sugars.
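For orientation, the Leloir-pathway steps named above (plus the mutarotase step) can be written in their standard textbook form:

beta-D-galactose ⇌ alpha-D-galactose                              (galactose mutarotase)
alpha-D-galactose + ATP → galactose-1-P + ADP                     (galactokinase)
galactose-1-P + UDP-glucose ⇌ glucose-1-P + UDP-galactose         (galactose-1-P uridylyltransferase)
UDP-galactose ⇌ UDP-glucose                                       (UDP-galactose 4-epimerase)

The net effect is conversion of galactose to glucose-1-P at the cost of one ATP, with UDP-glucose acting catalytically.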

The structure of the enzyme reveals many details of the binding of NAD+ and
inhibitors at the active site.

The essential roles of the kinase and transferase are to attach the UDP group
to galactose, allowing for its participation in catalysis by the epimerase. The
transferase is a Zn/Fe metalloprotein, in which the metal ions stabilize the
structure rather than participating in catalysis. The structure is interesting
in that

  • it consists of a single beta-sheet with 13 antiparallel strands and 1 parallel strand
    connected by 6 helices.

The mechanism of UMP attachment at the active site of the transferase is a double
displacement, with the participation of a covalent UMP-His 166-enzyme intermediate
in the Escherichia coli enzyme. The evolution of this mechanism appears to have
been guided by the principle of economy in the evolution of binding sites.
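Written out, the double-displacement (ping-pong) chemistry described above proceeds through two half-reactions via the covalent UMP-histidine intermediate (a standard rendering, using the His 166 residue named in the text):

E–His166 + UDP-glucose  ⇌  E–His166–UMP + glucose-1-P
E–His166–UMP + galactose-1-P  ⇌  E–His166 + UDP-galactose

so the enzyme shuttles the UMP group between the two sugar-1-phosphates without releasing free UMP.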

PMID: 8647345

Section IV.

More on Lipids – Role of lipids – classification

  • Energy
  • Energy Storage
  • Hormones
  • Vitamins
  • Digestion
  • Insulation
  • Membrane structure: Hydrophobic properties

Lipid types

lipid types

nat occuring FAs in mammals


Prologue to Cancer – e-book Volume One – Where are we in this journey?

Author and Curator: Larry H. Bernstein, MD, FCAP

Consulting Reviewer and Contributor:  Jose Eduardo de Salles Roselino, MD

 

LH Bernstein

Jose Eduardo de Salles Roselino
This is a preface to the fourth in the ebook series of Leaders in Pharmaceutical Intelligence, a collaboration of experienced doctorate medical and pharmaceutical professionals.  The topic is of great current interest, and it entails a significant part of current medical expenditure by a group of neoplastic diseases that may develop at different periods in life, and have come to supercede infections or even eventuate in infectious disease as an end of life event.  The articles presented are a collection of the most up-to-date accounts of the state of a now rapidly emerging field of medical research that has benefitted enormously by progress in immunodiagnostics,  radiodiagnostics, imaging, predictive analytics, genomic and proteomic discovery subsequent to the completion of the Human Genome Project, advances in analytic methods in qPCR, gene sequencing, genome mapping, signaling pathways, exome identification, identification of therapeutic targets in inhibitors, activators, initiators in the progression of cell metabolism, carcinogenesis, cell movement, and metastatic potential.  This story is very complicated because we are engaged in trying to evoke from what we would like to be similar clinical events, dissimilar events in their expression and classification, whether they are within the same or different anatomic class.  Thus, we are faced with constructing an objective evidence-based understanding requiring integration of several disciplinary approaches to see a clear picture.  The failure to do so creates a high risk of failure in biopharmaceutical development.

The chapters that follow cover novel and important research and development in cancer related research, development, diagnostics and treatment, and in balance, present a substantial part of the tumor landscape, with some exceptions.  Will there ever be a unifying concept, as might be hoped for? I certainly can’t see any such prediction on the horizon.  Part of the problem is that disease classification is a human construct to guide us, and so are treatments that have existed and are reexamined for over 2,000 years.  In that time, we have changed, our afflictions have been modified, and our environment has changed with respect to the microorganisms within and around us, viruses, the soil, and radiation exposure, and the impacts of war and starvation, and access to food.  The outline has been given.  Organic and inorganic chemistry combined with physics has given us a new enterprise in biosynthetics that is and will change our world.  But let us keep in mind that this is a human construct, just as drug target development is such a construct, workable with limitations.

What Molecular Biology Gained from Physics

We need greater clarity and completeness in defining the carcinogenetic process.  It is the beginning, but not the end.  But we must first examine the evolution of the scientific structure that leads to our present understanding. This was preceded by the studies of anatomy, physiology, and embryology that had to occur as a first step, which was followed by the researches into bacteriology, fungi, sea urchins and the evolutionary creatures that could be studied having more primary development in scale.  They are still major objects of study, with the expectation that we can derive lessons about comparative mechanisms that have been passed on through the ages and have common features with man.  This became the serious intent of molecular biology, the discipline that turned to find an explanation for genetics, and to carry out controlled experiments modelled on the discipline that already had enormous success in physics, mathematics, and chemistry. In 1900, when Max Planck hypothesized that the frequency of light emitted by the black body depended on the frequency of the oscillator that emitted it, it had important ramifications for chemistry and biology (See Appendix II and Footnote 1, Planck equation, energy and oscillation).  The leading idea is to search below the large-scale observations of classical biology.

The central dogma of molecular biology, in which genetic material is transcribed into RNA and then translated into protein, provides a starting point, but the construct is undergoing revision in light of emerging novel roles for RNA and signaling pathways. The term “molecular biology” was coined by Warren Weaver (director of Natural Sciences for the Rockefeller Foundation), who observed an emergence of significant change given recent advances in fields such as X-ray crystallography. Molecular biology also plays an important role in understanding the formation, action, and regulation of various parts of cells, which can be used efficiently for targeting new drugs, diagnosis of disease, and the physiology of the cell. The Nobel Prize in Physiology or Medicine in 1969 was shared by Max Delbrück, Alfred D. Hershey, and Salvador E. Luria, whose work with viral replication gave impetus to the field. Delbrück was a physicist who trained in Copenhagen under Bohr and specifically committed himself to the same rigor in biology as there was in physics.

Dorothy Hodgkin, protein crystallography

Rosalind Franklin, crystallographer, double helix

Max Delbruck, molecular biology

Max Planck, Quantum Physics
We then stepped back from classical (descriptive) physiology, with the endless complexity, to molecular biology.  This led us to the genetic code, with a double helix model.  It has recently been found insufficiently explanatory, with the recent construction of triplex and quadruplex models. They have a potential to account for unaccounted for building blocks, such as inosine, and we don’t know whether more than one model holds validity under different conditions .  The other major field of development has been simply unaccounted for in the study of proteomics, especially in protein-protein interactions, and in the energetics of protein conformation, first called to our attention by the work of Jacob, Monod, and Changeux (See Footnote 2).  Proteins are not just rigid structures stamped out by the monotonously simple DNA to RNA to protein concept.  Nothing is ever quite so simple. Just as there are epigenetic events, there are posttranslational events, and yet more.

JP Changeux
The Emergence of Molecular Biology

I now return the discussion to the topic of medicine, the emergence of molecular biology and the need for convergence with biochemistry in the mid-20th century. Jose Eduardo de Salles Roselino recalls: “I was previously allowed to make use of conformational energy as treated by R. Marcus in his revised Nobel lecture (J. of Electroanalytical Chemistry 438 (1997), pp. 251–259; see Footnote 1). His description of the energetic coordinates of a landscape of a chemical reaction is only a two-dimensional cut of what in fact is a volcano crater (in three dimensions), with nuclear coordinates on the abscissa and solvational plus vibrational energy on the ordinate (each one varies, but the sum of the two is constant at 100%). In case we could represent it by research methods that allow us to discriminate, one degree at a time, different pairs of energy, we would most likely have 360 other similar representations of the same phenomenon. The real representation would take into account all those 360 representations together. In case our methodology was not that fine – for instance, if it discriminates only differences of a minimal 10 degrees out of 360 possible – we will have 36 partial representations of something that, to be perfectly represented, will require all 36 being taken together. Can you reconcile it with ATGC? Yet, when complete genome sequences were presented, they were described as though we would know everything about this living being. The most important problems in biology will always be viewed with limited vision, and awareness of this limitation is something we should acknowledge and teach. Therefore, our knowledge is made up of partial representations. Even if we had the entire genome data, the most intricate biological problems are still not amenable to this level of reductionism. But going from general views of signals and symptoms we could get to the most detailed molecular view, and in this case the genome provides an anchor.”

“Warburg Effect” describes the preference of glycolysis and lactic acid fermentation rather than oxidative phosphorylation for energy production in cancer cells. Mitochondrial metabolism is an important and necessary component in the functioning and maintenance of the cell, and accumulating evidence suggests that dysfunction of mitochondrial metabolism plays a role in cancer. Progress has demonstrated the mechanisms of the mitochondrial metabolism-to-glycolysis switch in cancer development and how to target this metabolic switch.

 

 

Glycolysis

glycolysis

 

Otto Heinrich Warburg (1883–1970)

Otto Warburg

Louis Pasteur
The expression “Pasteur effect” was coined by Warburg when, inspired by Pasteur’s findings in yeast cells, he investigated this metabolic observation (Pasteur effect) in cancer cells. In yeast cells, Pasteur had found that the velocity of sugar use was greatly reduced in the presence of oxygen. Not to be confused with this, in the “Crabtree effect” the velocity of sugar metabolism was greatly increased, a reversal, when yeast cells were transferred from an aerobic to an anaerobic condition. Thus, the velocity of sugar metabolism of yeast cells was shown to be under metabolic regulatory control in response to changes in environmental oxygen conditions during growth. Warburg had to verify whether cancer cells and the related normal mammalian cells and tissues also have a similar control mechanism. He found that this control was present in the normal cells studied, but was absent in cancer cells. Strikingly, cancer cells continue to have higher anaerobic glycolysis despite the presence of oxygen in their culture media (See Footnote 3).

Taking this a step further, food is digested and supplied to cells in vertebrates mainly in the form of glucose, which is metabolized to produce adenosine triphosphate (ATP) by two pathways. Glycolysis occurs via anaerobic metabolism in the cytoplasm and is of major significance for making ATP quickly, but in a minuscule amount (2 molecules per glucose). In the presence of oxygen, the breakdown process continues in the mitochondria via the Krebs cycle coupled with oxidative phosphorylation, which is more efficient for ATP production (about 36 molecules per glucose). Cancer cells seem to depend on glycolysis. In the 1920s, Otto Warburg first proposed that cancer cells show increased levels of glucose consumption and lactate fermentation even in the presence of ample oxygen (known as the “Warburg Effect”). Based on this theory, oxidative phosphorylation switches to glycolysis, which promotes the proliferation of cancer cells. Many studies have demonstrated glycolysis as the main metabolic pathway in cancer cells.
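As a back-of-the-envelope illustration of the yields just quoted (2 ATP per glucose by glycolysis alone versus roughly 36 with full oxidation; figures from the text, and the oxidative figure varies with the accounting used), the glucose flux a cell must sustain to meet the same ATP demand can be compared:

# Rough comparison of glucose consumption needed to meet a fixed ATP demand
# via glycolysis alone versus full oxidative metabolism.
# Yields per glucose are those quoted in the text (2 vs ~36 ATP); they are
# approximate, not exact stoichiometric constants.

ATP_PER_GLUCOSE_GLYCOLYSIS = 2
ATP_PER_GLUCOSE_OXIDATIVE = 36

def glucose_needed(atp_demand: float, atp_per_glucose: float) -> float:
    """Glucose molecules required to generate a given number of ATP."""
    return atp_demand / atp_per_glucose

demand = 1.0e6  # arbitrary ATP demand (molecules), for illustration only
g_glyc = glucose_needed(demand, ATP_PER_GLUCOSE_GLYCOLYSIS)
g_ox = glucose_needed(demand, ATP_PER_GLUCOSE_OXIDATIVE)
print(f"glycolysis only : {g_glyc:.0f} glucose")
print(f"oxidative       : {g_ox:.0f} glucose")
print(f"ratio           : {g_glyc / g_ox:.0f}x more glucose needed by glycolysis")

This 18-fold difference is why a glycolysis-dependent cell must run a much higher glucose flux, which is the observation underlying the Warburg effect discussed above.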

Albert Szent-Györgyi (Warburg’s student) and Otto Meyerhof both studied striated skeletal muscle metabolism in vertebrates, and they found the changes observed in yeast by Pasteur. The description of the anaerobic pathway was largely credited to Embden and Meyerhof. Whenever there is an increase in muscle work and the energy need is above what can be provided by the blood supply, cell metabolism changes from aerobic (where acetyl CoA provides the chemical energy for aerobic production of ATP) to anaerobic metabolism of glucose. In this condition, glucose is obtained directly from muscle glycogen stores (not from hepatic glycogenolysis). This is the sole source of chemical energy that is independent of oxygen supplied to the cell. It is a physiological change in muscle metabolism that favors autonomy. It does not depend upon blood oxygen for aerobic metabolism or blood sources of carbon metabolites borne out from adipose tissue (free fatty acids) or muscle proteins (branched-chain amino acids), or vascular delivery of glucose. In that condition, the muscle can perform contraction with its internal source of ATP and uses conversion of pyruvate to lactate in order to regenerate much-needed NAD+ (by hydride transfer to pyruvate) as a replacement for this mitochondrial function. This regulatory change keeps glycolysis going at a fast rate in order to meet the ATP needs of the cell under a low-yield condition (only two or three ATP for each glucose converted into two lactate molecules). Therefore, it cannot last for long periods of time. This regulatory metabolic change is made in seconds or minutes and therefore happens with the proteins that are already present in the cell. It does not require the effect of transcription factors and/or changes in gene expression (See Footnotes 1, 2).

In other types of mammalian cells, like those from the lens of the eye (86% glycolysis + pentose shunt) and red blood cells (RBC) [both lacking mitochondria], and also in the deep medullary layer of the kidneys – for lack of mitochondria in the first two cases, and normally reduced blood perfusion in the third, a condition required for the countercurrent mechanism and our ability to concentrate urine – there is also a permanently higher anaerobic metabolism. In the case of the RBC, this includes the ability to produce, in a shunt of the glycolytic pathway, 2,3-diphosphoglycerate, which is required to place the hemoglobin macromolecule in an unstable equilibrium between its two forms (R and T – here presented as simplified according to the model of Monod, Wyman and Changeux; the final model would be even more complex; see, for instance, the H-W and K review, Nature 2007, vol. 450, pp. 964–972).
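The simplified two-state (R/T) description invoked above is usually summarized by the Monod–Wyman–Changeux saturation function; its standard form (a textbook formula, not taken from the review cited) for a protein with n ligand-binding sites is:

\bar{Y} = \frac{\alpha(1+\alpha)^{n-1} + L\,c\,\alpha\,(1+c\alpha)^{n-1}}{(1+\alpha)^{n} + L\,(1+c\alpha)^{n}},
\qquad \alpha = \frac{[S]}{K_R}, \quad c = \frac{K_R}{K_T}, \quad L = \frac{[T_0]}{[R_0]}

For hemoglobin n = 4; 2,3-diphosphoglycerate acts by binding preferentially to the T state, in effect raising L and lowering oxygen affinity.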

Any tissue under a condition of ischemia that is required for some medical procedures (open heart surgery, organ transplants, etc) displays this fast regulatory mechanism (See Footnote 1, 2). A display of these regulatory metabolic changes can be seen in: Cardioplegia: the protection of the myocardium during open heart surgery: a review. D. J. Hearse J. Physiol., Paris, 1980, 76, 751-756 (Fig 1).  The following points are made:

1-       It is a fast regulatory response. Therefore, no genetic mechanism can be taken into account.

2-       It moves from a reversible to an irreversible condition, while the cells are still alive. Death can be seen at the bottom end of the arrow. Therefore, it cannot be reconciled with some of the molecular biology assumptions:

A-       The gene and genes reside inside the heart muscle cells but, in order to preserve intact, the source of coded genetic information that the cell reads and transcribes, DNA must be kept to a minimal of chemical reactivity.

B-       In case sequence determines conformation, activity and function , elevated potassium blood levels could not cause cardiac arrest.

In comparison with those conditions here presented, cancer cells keep the two metabolic options for glucose metabolism at the same time. These cells can use glucose that our body provides to them or adopt temporarily, an independent metabolic form without the usual normal requirement of oxygen (one or another form for ATP generation).  ATP generation is here, an over-simplification of the metabolic status since the carbon flow for building blocks must also be considered and in this case oxidative metabolism of glucose in cancer cells may be viewed as a rich source of organic molecules or building blocks that dividing cells always need.

JES Roselino has conjectured that “most of the Krebs cycle reaction works as ideal reversible thermodynamic systems that can supply any organic molecule that by its absence could prevent cell duplication.” In the vision of Warburg, cancer cells have a defect in Pasteur-effect metabolic control. In case it was functioning normally, it will indicate which metabolic form of glucose metabolism is adequate for each condition. What more? Cancer cells lack differentiated cell function. Any role for transcription factors must be considered as the role of factors that led to the stable phenotypic change of cancer cells. The failure of Pasteur effect must be searched for among the fast regulatory mechanisms that aren’t dependent on gene expression (See Footnote 3).

Extending the thoughts of JES Roselino (Hepatology 1992;16: 1055-1060), reduced blood flow caused by increased hydrostatic pressure in extrahepatic cholestasis decreases mitochondrial function (quoted in Hepatology) and as part of Pasteur effect normal response, increased glycolysis in partial and/or functional anaerobiosis and therefore blocks the gluconeogenic activity of hepatocytes that requires inhibited glycolysis. In this case, a clear energetic link can be perceived between the reduced energetic supply and the ability to perform differentiated hepatic function (gluconeogenesis). In cancer cells, the action of transcription factors that can be viewed as different ensembles of kaleidoscopic pieces (with changing activities as cell conditions change) are clearly linked to the new stable phenotype. In relation to extrahepatic cholestasis mentioned above it must be reckoned that in case a persistent chronic condition is studied a secondary cirrhosis is installed as an example of persistent stable condition, difficult to be reversed and without the requirement for a genetic mutation. (See Footnote 4).

 The Rejection of Complexity

Most of our reasoning about genes was derived from scientific work in microorganisms. These works have provided great advances in biochemistry.

DNA diagram showing base pairing

double helix
genome cartoon

Triple helix
formation of a triplex DNA structure
1-      The “Gelehrter idea”: No matter what you are doing you will always be better off, in case you have a gene (In chapter 7 Principles of Medical Genetics Gelehrter and Collins Williams & Wilkins 1990).

2-      The idea that everything could be found following one gene one enzyme relationship that works fine for our understanding of the metabolism, in all biological problems.

3-      The idea that everything that explains biochemistry in microorganisms also explains it for every living being (J Nirenberg).

4-      The idea that biochemistry may not require that time should be also taken into account. Time must be considered only for genetic and biological evolution studies (S Luria. In Life- The unfinished experiment 1977 C Scribner´s sons NY).

5-      Finally, the idea that everything in biology could be found in the genome, since all information in biology goes from DNA through RNA to proteins – or, alternatively, that it is all in the DNA, in case the strict line that includes RNA is not considered.

This last point can be accepted in case it is considered that ALL GENETIC information is in our DNA. Genetics as part of life and not as its total expression.

For example, when our body is informed that the ambient temperature is too low or alternatively is too high, our body is receiving an information that arrives from our environment. This external information will affect our proteins and eventually, in case of longer periods in a new condition will cause adaptive response that may include conformational changes in transcription factors (proteins) that will also, produce new readings on the DNA. However, it is an information that moves from outside, to proteins and not from DNA to proteins. The last pathway, when transcription factors change its conformation and change DNA reading will follow the dogmatic view as an adaptive response (See Footnotes 1-3).

However, in case, time is taken into account, the first reactions against cold or warmer temperatures will be the ones that happen through change in protein conformation, activities and function before any change in gene expression can be noticed at protein level. These fast changes, in seconds, minutes cannot be explained by changes in gene expression and are strongly linked to what is needed for the maintenance of life.

“It is possible, and desirable,” says Roselino, “to explain all these fast biochemical responses to changes in a living being’s condition as the sound foundation of medical practice without a single mention of DNA. In case a failure in any mechanism necessary to life is found to be genetic in its origin, the genome, in context with this huge set of transcription factors, must be taken into account. This is the biochemical line of reasoning that I have learned with Houssay and Leloir. It would be an honor to see it restored in modern terms.”

More on the Mechanism of Metabolic Control

It was important that genomics would play such a large role in medical research for the last 70 years. There is also good reason to rethink the objections of the Nobelists James Watson and Randy Schekman in the past year, whatever discomfort it brings.  Molecular biology has become a tautology, and as a result deranged scientific rigor inside biology.

Crick & Watson with their DNA model, 1953

Watson and Crick

Randy Schekman, Berkeley
According to JES Roselino, “consider that glycolysis is oscillatory thanks to the kinetic behavior of Phosphofructokinase. Further, by its effect upon Pyruvate kinase through Fructose 1,6 diphosphate oscillatory levels, the inhibition of gluconeogenesis is also oscillatory. When the carbon flow through glycolysis is led to a maximal level gluconeogenesis will be almost completely blocked. The reversal of the Pyruvate kinase step in liver requires two enzymes (Pyruvate carboxylase (maintenance of oxaloacetic levels) + phosphoenolpyruvate carboxykinase (E.C. 4.1.1.32)) and energy requiring reactions that most likely could not as an ensemble, have a fast enough response against pyruvate kinase short period of inhibition during high frequency oscillatory periods of glycolytic flow. Only when glycolysis oscillates at low frequency the opposite reaction could enable gluconeogenic carbon flow.”
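A minimal way to see how PFK kinetics alone can make glycolytic flux oscillate is the classic dimensionless Sel'kov model – an illustrative textbook toy, not Roselino's formulation – in which x stands for the PFK product pool (ADP-like) and y for the PFK substrate pool (F6P-like):

# Illustrative sketch only: the dimensionless Sel'kov model of glycolytic
# oscillations (x ~ ADP-like product, y ~ F6P-like PFK substrate).
# Parameters a=0.08, b=0.6 are a standard textbook choice that gives a
# limit cycle; this is not a fit to any data discussed in the text.

def selkov(x: float, y: float, a: float = 0.08, b: float = 0.6):
    dx = -x + a * y + x * x * y        # product formed autocatalytically by PFK
    dy = b - a * y - x * x * y         # substrate supplied at constant rate b
    return dx, dy

def simulate(steps: int = 200_000, dt: float = 0.001):
    x, y = 1.0, 1.0                    # arbitrary initial condition
    trace = []
    for i in range(steps):
        dx, dy = selkov(x, y)
        x, y = x + dt * dx, y + dt * dy
        if i % 5000 == 0:
            trace.append((round(i * dt, 1), round(x, 3), round(y, 3)))
    return trace

for t, x, y in simulate():
    print(t, x, y)                     # x and y rise and fall periodically

The point of the sketch is qualitative: with PFK-type autocatalysis, flux through the pathway rises and falls periodically, which is the oscillatory behavior Roselino invokes when arguing that gluconeogenesis can only re-engage when glycolysis oscillates at low frequency.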

If this can be shown in a convincing way, the same reasoning could be applied to understand how simple replicative signals inducing the G0 to G1 transition in cells could easily overcome the more complex signals required for cell differentiation and differentiated function.

Perhaps the problem of overextending the equivalence between the DNA and what happens to the organism is also related to the initial reliance on a single-cell model to relieve the complexity, a simplification which does not fully hold.

For instance, consider this fragment:
“Until only recently it was assumed that all proteins take on a clearly defined three-dimensional structure – i.e. they fold in order to be able to assume these functions.”
J.C. Seidel and J. Gergely. Investigation of conformational changes in spin-labeled myosin: model for muscle contraction. Cold Spring Harbor Symp. Quant. Biol. 1973: 187-193.
Huxley, A.F. 1971. Proc. Roy. Soc. (London) B 178:1.
Huxley, A.F. and R.M. Simmons. 1971. Nature 233:633.
J.C. Haselgrove. X-ray evidence for a conformational change in the actin-containing filaments. Cold Spring Harbor Symp. Quant. Biol. 1972; 37:341-352.

This is only a very small sample indicating otherwise. Proteins were held to be interacting macromolecules, changing their conformation in regulatory response to changes in the microenvironment (see Footnote 2). DNA was the opposite: a non-interacting macromolecule, required to be as stable as a library must be.

The dogma held that the properties of proteins could be read from DNA alone. Consequently, the few examples quoted above must be ignored, and everyone must believe that DNA alone, without any role for environmental factors, controls protein amino acid sequence (correct), conformation (not true), activity (not true) and function (not true).

From the dogma it appeared, naively, correct to conclude from an interpretation of your genome: “You have a 50% increased risk of developing the following disease” (a deterministic statement). The correct form must be: “You belong to a population that has a 50% increase in the risk of…”, followed by what you can do to avoid increasing your personal risk and the care you should take if you want a longer, healthy life. Instead, genetic and non-genetic diseases were treated as the same, and medical foundations were reinforced by magical considerations (dogmas) in a way that was very profitable for everyone involved besides the patient.
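The distinction between the deterministic and the population-level statement becomes concrete as soon as relative risk is separated from absolute risk: what a "50% increased risk" means for an individual depends entirely on the baseline prevalence, which such reports rarely foreground. A small sketch, with baseline risks that are invented purely for illustration:

```python
# Relative vs absolute risk: the same "50% increased risk" statement translates
# into very different absolute changes depending on the baseline risk.
def carrier_risk(baseline, relative_increase=0.5):
    return baseline * (1 + relative_increase)

for baseline in (0.001, 0.02, 0.20):  # hypothetical baseline lifetime risks
    risk = carrier_risk(baseline)
    print(f"baseline {baseline:.1%} -> carrier risk {risk:.1%} "
          f"(absolute increase {risk - baseline:.1%})")
```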

 Footnotes:

  1. There is a link between electricity and ions in biology, and between electricity and the oscillatory behavior of some electrical discharges. In addition, the oscillatory form of electrical discharges may have allowed Planck to relate high energy content with higher frequencies and, conversely, low energy content with low-frequency oscillatory events. One may think of high density as an indication of a great amount of matter inside a volume of space. This helps in understanding Planck’s idea as a high density of energy in time for a high-frequency phenomenon.
  2. Take into account a protein that may have its conformation restricted by an S-S bridge. This protein may move to another, more flexible conformation when the S-S bridge is broken and it is in the HS HS condition. Consider also that it takes some time for a protein to move from one conformation, for instance the restricted (S-S) conformation, to other conformations, and a few seconds or minutes to return to the S-S conformation. (This is Daniel Koshland’s concept of induced fit and relaxation time, used by him to explain the allosteric behavior of monomeric proteins; the Monod, Wyman and Changeux model requires tetrameric or at least dimeric proteins.)
  3. In case glycolysis oscillates at a frequency much higher than the relaxation time allows, the high-NADH effect prevails, leading to a high HS/HS condition; at low glycolytic frequency, the S-S condition predominates and affects protein conformation. With a predominance of the NAD effect upon protein S-S, the opposite result is obtained. The enormous effort to display the effect of citrate (and of ATP) upon phosphofructokinase conformation was made by others. Take into account that ATP action as an inhibitor is, in this case, rather unusual, since it is a substrate of the reaction, and, together with the activating action of F1,6-P (or its equivalent F2,6-P), it explains the oscillatory behaviour of glycolysis (Goldhammer, A.R. and Paradies: PFK structure and function, Curr. Top. Cell. Reg. 1979; 15:109-141).
  4. The results presented in our Hepatology work must be viewed in the following way: in case the hepatic (oxygenated) arterial blood flow is preserved, the bile secretory cells of the liver receive well-oxygenated blood (the arterial branches bathe the secretory cells, while the branches originating from the portal vein irrigate the hepatocytes). During extrahepatic cholestasis the low-pressure portal blood flow is reduced and the hepatocytes do not receive the oxygen required to produce the ATP that gluconeogenesis demands. The hepatic artery does not replace this flow, since its branches join the portal blood only after the arterial pressure has been reduced to a low venous pressure, at the point where the hepatic vein is formed. Otherwise, the flow in the portal vein would be reversed, from the liver to the intestine. It is of no help to invoke possible valves in this reasoning, since minimal arterial pressure is well above maximal venous pressure and this difference would keep such a valve permanently closed. In the low portal blood flow condition the hepatocyte increases pyruvate kinase activity, and with increased pyruvate kinase activity gluconeogenesis is forbidden (see the Walsh & Cooper review quoted in the Hepatology paper as ref. 23). For the hemodynamic considerations and the role of arteries and veins in the hepatic portal system, see references 44 and 45 (Rappaport and Schneiderman; Rappaport).

 

 Appendix I.

metabolic pathways

Signals Upstream and Targets Downstream of Lin28 in the Lin28 Pathway

1.  Functional Proteomics Adds to Our Understanding

Ben Schuler’s research group at the Institute of Biochemistry of the University of Zurich has now established that an increase in temperature leads to unfolded proteins collapsing and becoming smaller. Other environmental factors can trigger the same effect; the crowded environment inside cells also leads the proteins to shrink. As these proteins interact with other molecules in the body and bring other proteins together, understanding these processes is essential “as they play a major role in many processes in our body, for instance in the onset of cancer”, comments study coordinator Ben Schuler.

Measurements using the “molecular ruler”

“The fact that unfolded proteins shrink at higher temperatures is an indication that cell water does indeed play an important role as to the spatial organisation eventually adopted by the molecules”, comments Schuler with regard to the impact of temperature on protein structure. For their studies the biophysicists use what is known as single-molecule spectroscopy. Small colour probes in the protein enable the observation of changes with an accuracy of more than one millionth of a millimetre. With this “molecular yardstick” it is possible to measure how molecular forces impact protein structure.
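The "molecular ruler" referred to here is single-molecule Förster resonance energy transfer (FRET), whose efficiency falls off with the sixth power of the inter-dye distance. The short sketch below shows why small changes in chain dimensions are detectable; the Förster radius is an assumed, typical value, not the one used by the Schuler group.

```python
# FRET efficiency as a distance readout: E = 1 / (1 + (r / R0)**6)
R0 = 5.4  # assumed Förster radius in nm for a typical dye pair (illustrative)

def fret_efficiency(r_nm, r0_nm=R0):
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

for r in (4.0, 5.0, 6.0, 7.0):
    print(f"inter-dye distance {r:.1f} nm -> transfer efficiency {fret_efficiency(r):.2f}")
```

Near R0 the efficiency changes steeply with distance, which is what makes a modest temperature- or crowding-induced compaction of an unfolded chain measurable.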

With computer simulations the researchers have mimicked the behaviour of disordered proteins. They want to use them in future for more accurate predictions of their properties and functions.

Correcting test tube results

That’s why it’s important, according to Schuler, to monitor the proteins not only in the test tube but also in the organism. “This takes into account the fact that it is very crowded on the molecular level in our body as enormous numbers of biomolecules are crammed into a very small space in our cells”, says Schuler. The biochemists have mimicked this “molecular crowding” and observed that in this environment disordered proteins shrink, too.

Given these results many experiments may have to be revisited as the spatial organisation of the molecules in the organism could differ considerably from that in the test tube according to the biochemist from the University of Zurich. “We have, therefore, developed a theoretical analytical method to predict the effects of molecular crowding.” In a next step the researchers plan to apply these findings to measurements taken directly in living cells.

More information: Andrea Soranno, Iwo Koenig, Madeleine B. Borgia, Hagen Hofmann, Franziska Zosel, Daniel Nettels, and Benjamin Schuler. Single-molecule spectroscopy reveals polymer effects of disordered proteins in crowded environments. PNAS, March 2014. DOI: 10.1073/pnas.1322611111

 

Effects of Hypoxia on Metabolic Flux

  2. Glucose-6-phosphate dehydrogenase regulation in the hepatopancreas of the anoxia-tolerant marine mollusc, Littorina littorea

JL Lama, RAV Bell and KB Storey

Glucose-6-phosphate dehydrogenase (G6PDH) gates flux through the pentose phosphate pathway and is key to cellular antioxidant defense due to its role in producing NADPH. Good antioxidant defenses are crucial for anoxia-tolerant organisms that experience wide variations in oxygen availability. The marine mollusc, Littorina littorea, is an intertidal snail that experiences daily bouts of anoxia/hypoxia with the tide cycle and shows multiple metabolic and enzymatic adaptations that support anaerobiosis. This study investigated the kinetic, physical and regulatory properties of G6PDH from hepatopancreas of L. littorea to determine if the enzyme is differentially regulated in response to anoxia, thereby providing altered pentose phosphate pathway functionality under oxygen stress conditions.

Several kinetic properties of G6PDH differed significantly between aerobic and 24 h anoxic conditions; compared with the aerobic state, anoxic G6PDH (assayed at pH 8) showed a 38% decrease in the Km for G6P and enhanced inhibition by urea, whereas in assays at pH 6 the Km for NADP and the maximal activity changed significantly.

All these data indicated that the aerobic and anoxic forms of G6PDH were the high and low phosphate forms, respectively, and that phosphorylation state was modulated in response to selected endogenous protein kinases (PKA or PKG) and protein phosphatases (PP1 or PP2C). Anoxia-induced changes in the phosphorylation state of G6PDH may facilitate sustained or increased production of NADPH to enhance antioxidant defense during long term anaerobiosis and/or during the transition back to aerobic conditions when the reintroduction of oxygen causes a rapid increase in oxidative stress.

Lama et al. PeerJ 2013. http://dx.doi.org/10.7717/peerj.21
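To see why a 38% drop in the Km for G6P matters, a plain Michaelis-Menten comparison at a sub-saturating substrate concentration is enough. Only the 38% change is taken from the paper; the aerobic Km and the substrate level below are assumed, illustrative numbers.

```python
# Effect of a 38% decrease in Km on Michaelis-Menten velocity at sub-saturating substrate.
def mm_velocity(s, km, vmax=1.0):
    return vmax * s / (km + s)

km_aerobic = 0.20                      # assumed aerobic Km for G6P (mM), illustrative
km_anoxic = km_aerobic * (1 - 0.38)    # 38% decrease reported for the anoxic enzyme
s = 0.05                               # assumed sub-saturating G6P concentration (mM)

gain = mm_velocity(s, km_anoxic) / mm_velocity(s, km_aerobic)
print(f"relative velocity gain of the anoxic enzyme at low [G6P]: {gain:.2f}x")
```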

 

  3. Structural Basis for Isoform-Selective Inhibition in Nitric Oxide Synthase

    T.L. Poulos and H. Li

In the cardiovascular system, the enzyme nitric oxide synthase (NOS) converts L-arginine into L-citrulline and releases the important signaling molecule nitric oxide (NO). NO produced by endothelial NOS (eNOS) relaxes smooth muscle, which controls vascular tone and blood pressure. Neuronal NOS (nNOS) produces NO in the brain, where it influences a variety of neural functions such as neurotransmitter release. NO can also support the immune system, serving as a cytotoxic agent during infections. Even with all of these important functions, NO is a free radical and, when overproduced, it can cause tissue damage. This mechanism can operate in many neurodegenerative diseases, and as a result the development of drugs targeting nNOS is a desirable therapeutic goal.

However, the active sites of all three human isoforms are very similar, and designing inhibitors specific for nNOS is a challenging problem. It is critically important, for example, not to inhibit eNOS owing to its central role in controlling blood pressure. In this Account, we summarize our efforts in collaboration with Rick Silverman at Northwestern University to develop drug candidates that specifically target NOS using crystallography, computational chemistry, and organic synthesis. As a result, we have developed aminopyridine compounds that are 3800-fold more selective for nNOS than eNOS, some of which show excellent neuroprotective effects in animal models. Our group has solved approximately 130 NOS–inhibitor crystal structures, which have provided the structural basis for our design efforts. Initial crystal structures of nNOS and eNOS bound to selective dipeptide inhibitors showed that a single amino acid difference (Asp in nNOS and Asn in eNOS) results in much tighter binding to nNOS. The NOS active site is open and rigid, which produces few large structural changes when inhibitors bind. However, we have found that relatively small changes in the active site and inhibitor chirality can account for large differences in isoform-selectivity. For example, we expected that the aminopyridine group on our inhibitors would form a hydrogen bond with a conserved Glu inside the NOS active site. Instead, in one group of inhibitors, the aminopyridine group extends outside of the active site where it interacts with a heme propionate. For this orientation to occur, a conserved Tyr side chain must swing out of the way. This unanticipated observation taught us about the importance of inhibitor chirality and active site dynamics. We also successfully used computational methods to gain insights into the contribution of the state of protonation of the inhibitors to their selectivity. Employing the lessons learned from the aminopyridine inhibitors, the Silverman lab designed and synthesized symmetric double-headed inhibitors with an aminopyridine at each end, taking advantage of their ability to make contacts both inside and outside of the active site. Crystal structures provided yet another unexpected surprise. Two of the double-headed inhibitor molecules bound to each enzyme subunit, and one molecule participated in the generation of a novel Zn site that required some side chains to adopt alternate conformations. Therefore, in addition to achieving our specific goal, the development of nNOS-selective compounds, we have learned how subtle differences in structure can control protein–ligand interactions, often in unexpected ways.
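It can help to put the 3800-fold selectivity into energetic terms. If the fold-selectivity is taken as an approximate ratio of inhibition constants, it corresponds to a binding free-energy difference of only a few kilocalories per mole, consistent with the Account's point that a single Asp/Asn difference and inhibitor chirality can carry the effect. A back-of-the-envelope conversion at 298 K (the assumption being that the fold-selectivity approximates a Ki ratio):

```python
# Convert an n-fold selectivity (approximated as a Ki ratio) into a binding free-energy difference.
import math

R = 1.987e-3   # gas constant in kcal/(mol*K)
T = 298.0      # temperature in K

def delta_delta_G(fold_selectivity):
    return R * T * math.log(fold_selectivity)

print(f"3800-fold selectivity ~ {delta_delta_G(3800):.1f} kcal/mol difference in binding free energy")
```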

 

Nitric oxide synthase

arginine-NO-citrulline cycle

active site of eNOS (PDB_1P6L) and nNOS (PDB_1P6H).

NO – muscle, vasculature, mitochondria

Figure: (A) Structure of one of the early dipeptide lead compounds, 1, that exhibits excellent isoform selectivity. (B, C) show the crystal structures of the dipeptide inhibitor 1 in the active site of eNOS (PDB: 1P6L) and nNOS (PDB: 1P6H). In nNOS, the inhibitor “curls”, which enables the inhibitor R-amino group to interact with both Glu592 and Asp597. In eNOS, Asn368 is the homologue to nNOS Asp597.

Acc Chem Res 2013; 46(2): 390-98.

  4. Jamming a Protein Signal

Interfering with a single cancer-promoting protein and its receptor can counter a resistance mechanism by initiating autophagy of the affected cells, according to researchers at The University of Texas MD Anderson Cancer Center, reporting in the journal Cell Reports. According to Dr. Anil Sood and Yunfei Wen, senior and first authors, blocking prolactin, a potent growth factor for ovarian cancer, sets off downstream events that result in cell death by autophagy, the process that recycles damaged organelles and proteins through the phagolysosome for new use by the cell. This, in turn, provides a clinical rationale for blocking prolactin and its receptor to initiate sustained autophagy as an alternative strategy for treating cancers.

Steep reductions in tumor weight

Prolactin (PRL) is a hormone previously implicated in ovarian, endometrial and other cancer development and progression. When PRL binds to its cell membrane receptor, PRLR, activation of cancer-promoting cell signaling pathways follows. A variant of normal prolactin called G129R blocks the reaction between prolactin and its receptor. Sood and colleagues treated mice that had two different lines of human ovarian cancer, both expressing the prolactin receptor, with G129R. Tumor weights fell by 50 percent for mice with either type of ovarian cancer after 28 days of treatment with G129R, and adding the taxane-based chemotherapy agent paclitaxel cut tumor weight by 90 percent. They surmise that higher doses of G129R may result in even greater therapeutic benefit.

 

3D experiments show death by autophagy

 


Next, the team used the prolactin-mimicking peptide to treat cultures of cancer spheroids, which sharply reduced their numbers and blocked the activation of JAK2 and STAT signaling pathways.

Protein analysis of the treated spheroids showed increased presence of autophagy factors and genomic analysis revealed increased expression of a number of genes involved in autophagy progression and cell death.  Then a series of experiments using fluorescence and electron microscopy showed that the cytosol of treated cells had large numbers of cavities caused by autophagy.

The team also connected the G129R-induced autophagy to the activity of PEA-15, a known cancer inhibitor. Analysis of tumor samples from 32 ovarian cancer patients showed that tumors express higher levels of the prolactin receptor and lower levels of phosphorylated PEA-15 than normal ovarian tissue. However, patients with low levels of the prolactin receptor and higher PEA-15 had longer overall survival than those with high PRLR and low PEA-15.

Source: MD Anderson Cancer Center

 

  5. Chemists’ Work with Small Peptide Chains of Enzymes

Korendovych and his team designed seven simple peptides, each containing seven amino acids. They then allowed the molecules of each peptide to self-assemble, or spontaneously clump together, to form amyloids. (Zinc, a metal with catalytic properties, was introduced to speed up the reaction.) What they found was that four of the seven peptides catalyzed the hydrolysis of molecules known as esters, compounds that react with water to produce alcohols and acids—a feat not uncommon among certain enzymes.

“It was the first time that a peptide this small self-assembled to produce an enzyme-like catalyst,” says Korendovych. “Each enzyme has to be an exact fit for its respective substrate,” he says, referring to the molecule with which an enzyme reacts. “Even after millions of years, nature is still testing all the possible combinations of enzymes to determine which ones can catalyze metabolic reactions. Our results make an argument for the design of self-assembling nanostructured catalysts.”

Source: Syracuse University

Here are three articles emphasizing the value of combinatorial analysis, which can be performed on genomic, clinical, and proteomic data sets.

 

  6. Comparative analysis of differential network modularity in tissue specific normal and cancer protein interaction networks

    F Islam, M Hoque, RS Banik, S Roy, SS Sumi, et al.

As most biological networks show modular properties, the analysis of differential modularity between normal and cancer protein interaction networks can be a good way to understand cancer more deeply. Two aspects of biological network modularity have been considered in this regard: detection of molecular complexes (potential modules or clusters) and identification of the crucial nodes forming the overlapping modules.

The computational analysis of previously published protein interaction networks (PINs) has been conducted to identify the molecular complexes and crucial nodes of the networks. Protein molecules involved in ten major cancer signal transduction pathways were used to construct the networks based on expression data of five tissues e.g. bone, breast, colon, kidney and liver in both normal and cancer conditions.

Cancer PINs show a higher level of clustering (formation of molecular complexes) than the normal ones. In contrast, a lower level of modular overlap is found in cancer PINs than in the normal ones. Thus a proposition can be made that some giant nodes of very high degree form in the cancer networks, resulting in reduced overlap among the network modules even though the predicted number of molecular complexes is higher in cancer conditions.

Islam et al. Journal of Clinical Bioinformatics 2013, 3:19-32
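The modularity comparison summarized above can be reproduced in outline with standard graph tooling: build one protein interaction network per condition, detect communities, and compare clustering and module overlap. The sketch below uses networkx with invented toy edge lists; it is not the published bone/breast/colon/kidney/liver networks, only the shape of the analysis.

```python
# Sketch of a differential-modularity comparison between two protein interaction networks.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

normal_edges = [("EGFR", "GRB2"), ("GRB2", "SOS1"), ("SOS1", "KRAS"),
                ("KRAS", "RAF1"), ("RAF1", "MAP2K1"), ("MAP2K1", "MAPK1")]
# In the "cancer" network one hub acquires many extra links, mimicking a giant node.
cancer_edges = normal_edges + [("EGFR", "KRAS"), ("EGFR", "RAF1"),
                               ("EGFR", "MAP2K1"), ("EGFR", "MAPK1")]

for label, edges in (("normal", normal_edges), ("cancer", cancer_edges)):
    g = nx.Graph(edges)
    communities = greedy_modularity_communities(g)
    print(f"{label}: average clustering {nx.average_clustering(g):.2f}, "
          f"{len(communities)} detected modules")
```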

  7. A new 12-gene diagnostic biomarker signature of melanoma revealed by integrated microarray analysis

    Wanting Liu, Yonghong Peng and Desmond J. Tobin
    PeerJ 1:e49; http://dx.doi.org/10.7717/peerj.49

Here we present an integrated microarray analysis framework, based on a genome-wide relative significance (GWRS) and genome-wide global significance (GWGS) model. When applied to five microarray datasets on melanoma published between 2000 and 2011, this method revealed a new signature of 200 genes. When these were linked to so-called ‘melanoma driver’ genes involved in MAPK, Ca2+, and WNT signaling pathways we were able to produce a new 12-gene diagnostic biomarker signature for melanoma (i.e., EGFR, FGFR2, FGFR3, IL8, PTPRF, TNC, CXCL13, COL11A1, CHP2, SHC4, PPP2R2C, and WNT4). We have begun to experimentally validate a subset of these genes involved in MAPK signaling at the protein level, including CXCL13, COL11A1, PTPRF and SHC4, and found these to be overexpressed in metastatic and primary melanoma cells in vitro and in situ compared to melanocytes cultured from healthy skin epidermis and normal healthy human skin.
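The exact GWRS/GWGS weighting is defined in the paper, but the flavour of integrating several microarray datasets can be sketched as a rank aggregation: rank genes within each dataset, then score each gene by how consistently it ranks near the top across datasets. The gene names and p-values below are invented placeholders, not the melanoma data.

```python
# Toy cross-dataset rank aggregation: rank genes by p-value within each dataset,
# then score each gene by its mean rank across datasets (lower = more consistent signal).
datasets = {
    "study_A": {"EGFR": 1e-5, "TNC": 2e-4, "IL8": 0.03, "WNT4": 0.2},
    "study_B": {"EGFR": 3e-4, "TNC": 0.01, "IL8": 5e-4, "WNT4": 0.5},
    "study_C": {"EGFR": 2e-3, "TNC": 1e-3, "IL8": 0.07, "WNT4": 0.04},
}

ranks = {}
for pvals in datasets.values():
    ordered = sorted(pvals, key=pvals.get)           # most significant gene first
    for rank, gene in enumerate(ordered, start=1):
        ranks.setdefault(gene, []).append(rank)

mean_rank = {gene: sum(r) / len(r) for gene, r in ranks.items()}
for gene, score in sorted(mean_rank.items(), key=lambda kv: kv[1]):
    print(f"{gene}: mean rank {score:.2f}")
```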

 

catalytic amyloid forming particle

  8. PanelomiX: A threshold-based algorithm to create panels of biomarkers

X Robin, N Turck, A Hainard, N Tiberti, et al.
Translational Proteomics 2013. http://dx.doi.org/10.1016/j.trprot.2013.04.003

The PanelomiX toolbox combines biomarkers and evaluates the performance of panels to classify patients better than single markers or other classifiers. The ICBT algorithm proved to be an efficient classifier, the results of which can easily be interpreted.
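The decision rule behind a threshold-based panel is simple to state: a patient is called positive when at least k of the selected markers cross their cut-offs. The sketch below shows only that rule; marker names, thresholds and values are invented, and the actual ICBT search, which also optimises the thresholds and the marker subset, is described in the paper.

```python
# Threshold-based panel rule: positive if at least k markers exceed their cut-offs.
def panel_positive(sample, thresholds, k):
    votes = sum(sample[m] >= cutoff for m, cutoff in thresholds.items())
    return votes >= k

thresholds = {"marker_A": 3.2, "marker_B": 150.0, "marker_C": 0.8}  # hypothetical cut-offs

patients = [
    {"marker_A": 4.1, "marker_B": 90.0,  "marker_C": 1.1},
    {"marker_A": 2.0, "marker_B": 200.0, "marker_C": 0.5},
]
for i, patient in enumerate(patients, start=1):
    print(f"patient {i}: panel positive = {panel_positive(patient, thresholds, k=2)}")
```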

Here are two current examples of the immense role played by signaling pathways in carcinogenic mechanisms and in treatment targeting, which is also confounded by acquired resistance.

 

  9. Triple-Negative Breast Cancer

  1. epidermal growth factor receptor (EGFR or ErbB1) and
  2. high activity of the phosphatidylinositol 3-kinase (PI3K)–Akt pathway

are both targeted in triple-negative breast cancer (TNBC).

  • activation of another EGFR family member [human epidermal growth factor receptor 3 (HER3) (or ErbB3)] may limit the antitumor effects of these drugs.

This study found that TNBC cell lines cultured with the EGFR or HER3 ligand EGF or heregulin, respectively, and treated with either an Akt inhibitor (GDC-0068) or a PI3K inhibitor (GDC-0941) had increased abundance and phosphorylation of HER3.

The phosphorylation of HER3 and EGFR in response to these treatments

  1. was reduced by the addition of a dual EGFR and HER3 inhibitor (MEHD7945A).
  2. MEHD7945A also decreased the phosphorylation (and activation) of EGFR and HER3 and
  3. the phosphorylation of downstream targets that occurred in response to the combination of EGFR ligands and PI3K-Akt pathway inhibitors.

In culture, inhibition of the PI3K-Akt pathway combined with either MEHD7945A or knockdown of HER3

  1. decreased cell proliferation compared with inhibition of the PI3K-Akt pathway alone.
  2. Combining either GDC-0068 or GDC-0941 with MEHD7945A inhibited the growth of xenografts derived from TNBC cell lines or from TNBC patient tumors, and
  3. this combination treatment was also more effective than combining either GDC-0068 or GDC-0941 with cetuximab, an EGFR-targeted antibody.
  4. After therapy with EGFR-targeted antibodies, some patients had residual tumors with increased HER3 abundance and EGFR/HER3 dimerization (an activating interaction).

Thus, we propose that concomitant blockade of EGFR, HER3, and the PI3K-Akt pathway in TNBC should be investigated in the clinical setting.

Reference: Antagonism of EGFR and HER3 Enhances the Response to Inhibitors of the PI3K-Akt Pathway in Triple-Negative Breast Cancer. JJ Tao, P Castel, N Radosevic-Robin, M Elkabets, et al. Sci. Signal., 25 March 2014; 7(318): ra29. http://dx.doi.org/10.1126/scisignal.2005125

 

  10. Metastasis in RAS Mutant or Inhibitor-Resistant Melanoma Cells

The protein kinase BRAF is mutated in about 40% of melanomas, and BRAF inhibitors improve progression-free and overall survival in these patients. However, after a relatively short period of disease control, most patients develop resistance because of reactivation of the RAF–ERK (extracellular signal–regulated kinase) pathway, mediated in many cases by mutations in RAS. We found that BRAF inhibition induces invasion and metastasis in RAS mutant melanoma cells through a mechanism mediated by the reactivation of the MEK (mitogen-activated protein kinase kinase)–ERK pathway.

Reference: BRAF Inhibitors Induce Metastasis in RAS Mutant or Inhibitor-Resistant Melanoma Cells by Reactivating MEK and ERK Signaling. B Sanchez-Laorden, A Viros, MR Girotti, M Pedersen, G Saturno, et al., Sci. Signal., 25 March 2014;  7(318), p. ra30  http://dx.doi.org/10.1126/scisignal.2004815

Appendix II.

The world of physics in the twentieth century saw the end of the determinism established by Newton, which was characterized by discrete laws describing natural observations of gravity and of electricity. In an early phase of investigation, an era of galvanic or voltaic electricity represented a revolutionary break from the historical focus on frictional electricity. Alessandro Volta discovered that chemical reactions could be used to create positively charged anodes and negatively charged cathodes. In 1790, Prof. Luigi Aloisio Galvani of Bologna, while conducting experiments on “animal electricity”, noticed the twitching of a frog’s legs in the presence of an electric machine. He observed that a frog’s muscle, suspended on an iron balustrade by a copper hook passing through its dorsal column, underwent lively convulsions without any extraneous cause, the electric machine being at this time absent. Volta communicated a description of his pile to the Royal Society of London, and shortly thereafter Nicholson and Carlisle (1800) produced the decomposition of water by means of the electric current, using Volta’s pile as the source of electromotive force.

Siméon Denis Poisson attacked the difficult problem of induced magnetization, and his results provided  a first approximation. His innovation required the application of mathematics to physics.  His memoirs on the theory of electricity and magnetism created a new branch of mathematical physics.  The discovery of electromagnetic induction was made almost simultaneously and independently by Michael Faraday and Joseph Henry. Michael Faraday, the successor of Humphry Davy, began his epoch-making research relating to electric and electromagnetic induction in 1831. In his investigations of the peculiar manner in which iron filings arrange themselves on a cardboard or glass in proximity to the poles of a magnet, Faraday conceived the idea of magnetic “lines of force” extending from pole to pole of the magnet and along which the filings tend to place themselves. On the discovery being made that magnetic effects accompany the passage of an electric current in a wire, it was also assumed that similar magnetic lines of force whirled around the wire. He also posited that iron, nickel, cobalt, manganese, chromium, etc., are paramagnetic (attracted by magnetism), whilst other substances, such as bismuth, phosphorus, antimony, zinc, etc., are repelled by magnetism or are diamagnetic.

Around the mid-19th century, Fleeming Jenkin’s work on ‘Electricity and Magnetism’ and Clerk Maxwell’s ‘Treatise on Electricity and Magnetism’ were published. About 1850 Kirchhoff published his laws relating to branched or divided circuits. He also showed mathematically that, according to the then prevailing electrodynamic theory, electricity would be propagated along a perfectly conducting wire with the velocity of light. Hermann Helmholtz investigated the effects of induction on the strength of a current and deduced mathematical equations, which experiment confirmed. In 1853 Sir William Thomson (later Lord Kelvin) predicted, as a result of mathematical calculations, the oscillatory nature of the electric discharge of a condenser circuit. Joseph Henry, in 1842, had already discerned the oscillatory nature of the Leyden jar discharge.

In 1864 James Clerk Maxwell announced his electromagnetic theory of light, which was perhaps the greatest single step in the world’s knowledge of electricity. Maxwell had studied and commented on the field of electricity and magnetism as early as 1855/6, when On Faraday’s lines of force was read to the Cambridge Philosophical Society. The paper presented a simplified model of Faraday’s work, and how the two phenomena were related. He reduced all of the current knowledge into a linked set of differential equations with 20 equations in 20 variables. This work was later published as On Physical Lines of Force in 1861. In order to determine the force which is acting on any part of the machine we must find its momentum, and then calculate the rate at which this momentum is being changed. This rate of change will give us the force. The method of calculation which it is necessary to employ was first given by Lagrange, and afterwards developed, with some modifications, by Hamilton’s equations. Now Maxwell logically showed how these methods of calculation could be applied to the electro-magnetic field. The energy of a dynamical system is partly kinetic, partly potential. Maxwell supposes that the magnetic energy of the field is kinetic energy, the electric energy potential. Around 1862, while lecturing at King’s College, Maxwell calculated that the speed of propagation of an electromagnetic field is approximately that of the speed of light. Maxwell’s electromagnetic theory of light obviously involved the existence of electric waves in free space, and his followers set themselves the task of experimentally demonstrating the truth of the theory. By 1871, he presented the Remarks on the mathematical classification of physical quantities.

A Wave-Particle Dilemma at the Century End

In 1896 J.J. Thomson performed experiments indicating that cathode rays really were particles, found an accurate value for their charge-to-mass ratio e/m, and found that e/m was independent of cathode material. He made good estimates of both the charge e and the mass m, finding that cathode ray particles, which he called “corpuscles”, had perhaps one thousandth of the mass of the least massive ion known (hydrogen). He further showed that the negatively charged particles produced by radioactive materials, by heated materials, and by illuminated materials were universal. In the late 19th century, the Michelson–Morley experiment was performed by Albert Michelson and Edward Morley at what is now Case Western Reserve University. It is generally considered to be the evidence against the theory of a luminiferous aether. The experiment has also been referred to as “the kicking-off point for the theoretical aspects of the Second Scientific Revolution.” Primarily for this work, Albert Michelson was awarded the Nobel Prize in 1907.

Wave–particle duality is a theory that proposes that all matter exhibits the properties of not only particles, which have mass, but also waves, which transfer energy. A central concept of quantum mechanics, this duality addresses the inability of classical concepts like “particle” and “wave” to fully describe the behavior of quantum-scale objects. Standard interpretations of quantum mechanics explain this paradox as a fundamental property of the universe, while alternative interpretations explain the duality as an emergent, second-order consequence of various limitations of the observer. This treatment focuses on explaining the behavior from the perspective of the widely used Copenhagen interpretation, in which wave–particle duality serves as one aspect of the concept of complementarity, that one can view phenomena in one way or in another, but not both simultaneously. Through the work of Max Planck, Albert Einstein, Louis de Broglie, Arthur Compton, Niels Bohr, and many others, current scientific theory holds that all particles also have a wave nature (and vice versa).

Beginning in 1670 and progressing over three decades, Isaac Newton argued that the perfectly straight lines of reflection demonstrated light’s particle nature, but Newton’s contemporaries Robert Hooke and Christiaan Huygens—and later Augustin-Jean Fresnel—mathematically refined the wave viewpoint, showing that if light traveled at different speeds in different media, refraction could be easily explained. The resulting Huygens–Fresnel principle was supported by Thomas Young’s discovery of double-slit interference, the beginning of the end for the particle light camp. The final blow against corpuscular theory came when James Clerk Maxwell discovered that he could combine four simple equations, along with a slight modification, to describe self-propagating waves of oscillating electric and magnetic fields. When the propagation speed of these electromagnetic waves was calculated, the speed of light fell out. While the 19th century had seen the success of the wave theory at describing light, it had also witnessed the rise of the atomic theory at describing matter.

Matter and Light

In 1789, Antoine Lavoisier secured chemistry by introducing rigor and precision into his laboratory techniques. By discovering diatomic gases, Avogadro completed the basic atomic theory, allowing the correct molecular formulae of most known compounds—as well as the correct weights of atoms—to be deduced and categorized in a consistent manner. The final stroke in classical atomic theory came when Dimitri Mendeleev saw an order in recurring chemical properties, and created a table presenting the elements in unprecedented order and symmetry.   Chemistry was now an atomic science.

Black-body radiation, the emission of electromagnetic energy due to an object’s heat, could not be explained from classical arguments alone. The equipartition theorem of classical mechanics, the basis of all classical thermodynamic theories, stated that an object’s energy is partitioned equally among the object’s vibrational modes. This worked well when describing thermal objects, whose vibrational modes were defined as the speeds of their constituent atoms, and the speed distribution derived from egalitarian partitioning of these vibrational modes closely matched experimental results. Speeds much higher than the average speed were suppressed by the fact that kinetic energy is quadratic—doubling the speed requires four times the energy—thus the number of atoms occupying high energy modes (high speeds) quickly drops off. Since light was known to be waves of electromagnetism, physicists hoped to describe this emission via classical laws. This became known as the black body problem. The Rayleigh–Jeans law, while correctly predicting the intensity of long wavelength emissions, predicted infinite total energy, as the intensity diverges to infinity for short wavelengths.

The solution arrived in 1900 when Max Planck hypothesized that the frequency of light emitted by the black body depended on the frequency of the oscillator that emitted it, and the energy of these oscillators increased linearly with frequency (according to his constant h, where E = hν). By demanding that high-frequency light must be emitted by an oscillator of equal frequency, and further requiring that this oscillator occupy higher energy than one of a lesser frequency, Planck avoided any catastrophe; giving an equal partition to high-frequency oscillators produced successively fewer oscillators and less emitted light. And as in the Maxwell–Boltzmann distribution, the low-frequency, low-energy oscillators were suppressed by the onslaught of thermal jiggling from higher energy oscillators, which necessarily increased their energy and frequency. Planck had intentionally created an atomic theory of the black body, but had unintentionally generated an atomic theory of light, where the black body never generates quanta of light at a given frequency with energy less than hν.

In 1905 Albert Einstein took Planck’s black body model and saw in it a solution to another outstanding problem of the day: the photoelectric effect, the phenomenon where electrons are emitted from atoms when they absorb energy from light. Only by increasing the frequency of the light, and thus increasing the energy of the photons, can one eject electrons with higher energy. Thus, using Planck’s constant h to determine the energy of the photons based upon their frequency, the energy of ejected electrons should also increase linearly with frequency, the gradient of the line being Planck’s constant. These results were not confirmed until 1915, when Robert Andrews Millikan produced experimental results in perfect accord with Einstein’s predictions. While the energy of ejected electrons reflected Planck’s constant, the existence of photons was not explicitly proven until the discovery of the photon antibunching effect. When Einstein received his Nobel Prize in 1921, it was for the photoelectric effect, the suggestion of quantized light. Einstein’s “light quanta” represented the quintessential example of wave–particle duality. Electromagnetic radiation propagates following linear wave equations, but can only be emitted or absorbed as discrete elements, thus acting as a wave and a particle simultaneously.
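The linear relation that Millikan confirmed can be written compactly; the statements below are the standard textbook form of Planck's quantum and Einstein's photoelectric prediction, with φ the work function of the metal.

```latex
E_{\text{photon}} = h\nu, \qquad KE_{\max} = h\nu - \phi, \qquad eV_{\text{stop}} = h\nu - \phi
```

A plot of stopping potential against frequency is then a straight line whose slope is h/e, which is how the photoelectric data return Planck's constant.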

Radioactivity Changes the Scientific Landscape

The turn of the century also featured radioactivity, which later came to the forefront with the activities of World War II, the Manhattan Project, the discovery of the chain reaction, and, later, Hiroshima and Nagasaki.

Marie Curie

Marie Skłodowska-Curie was a Polish and naturalized-French physicist and chemist who conducted pioneering research on radioactivity. She was the first woman to win a Nobel Prize, the only woman to win in two fields, and the only person to win in multiple sciences. She was also the first woman to become a professor at the University of Paris, and in 1995 became the first woman to be entombed on her own merits in the Panthéon in Paris. She shared the 1903 Nobel Prize in Physics with her husband Pierre Curie and with physicist Henri Becquerel. She won the 1911 Nobel Prize in Chemistry. Her achievements included a theory of radioactivity (a term that she coined), techniques for isolating radioactive isotopes, and the discovery of polonium and radium. She named the first chemical element that she discovered, polonium, which she first isolated in 1898, after her native country. Under her direction, the world’s first studies were conducted into the treatment of neoplasms using radioactive isotopes. She founded the Curie Institutes in Paris and in Warsaw, which remain major centres of medical research today. During World War I, she established the first military field radiological centres. Curie died in 1934 of aplastic anemia brought on by exposure to radiation, mainly, it seems, during her World War I service in the mobile X-ray units she created.

 

Read Full Post »

The Cost to Value Conundrum in Cardiovascular Healthcare Provision

Author: Larry H. Bernstein, MD, FCAP

I write this introduction to Volume 2 of the e-series on Cardiovascular Diseases, which curates the basic structure and physiology of the heart, the vasculature, and related structures, e.g., the kidney, with respect to:

1. Pathogenesis
2. Diagnosis
3. Treatment

Curation is an introductory portion of Volume Two, necessary to introduce the methodological design used to create the following articles. Little more needs to be said about the methodology, which will become clear, except that the content curated changes based on the success or failure of available diagnostic and treatment technologies, as well as of the systems needed to support ongoing advances. Curation requires:

  • meaningful selection,
  • enrichment, and
  • sharing, combining sources, and
  • creation of new synthesis

Curators have to create a new perspective or idea on top of the existing media that supports the original content. The curator has to select from the myriad options available, to re-share and critically view the work. A search can return an overwhelming volume of output, but the curator has to successfully pluck the best material straight out of that noise.

Part 1 is a highly important treatment that is not technological, but about the now-outdated system supporting our healthcare system, the most technologically advanced in the world, with major problems in the availability of care related to economic disparities. It is not about technology, per se, but about how we allocate healthcare resources; about individuals’ roles in a partial list of lifestyle-maintenance options for self-care; and about the important advances emerging out of the Affordable Care Act (ACA), impacting enormously on Medicaid, which depends on state-level acceptance, and on community hospital, ambulatory, and home-care or hospice restructuring, which includes the reduction of management overhead by the formation of regional healthcare alliances and the incorporation of physicians into hospital-based practices (with the hospital collecting and distributing the Part B reimbursement to the physician, with “performance-based” targets for privileges and payment, essential to the success of an Accountable Care Organization (ACO)). One problem that the ACA has definitively addressed is the elimination of the exclusion of patients based on preconditions. One problem that has been left unresolved is the continuing existence of private policies that meet the financial capabilities of the contract to provide, but which provide little value to the “purchaser” of care; this is a holdout that persists in for-profit managed care as an option. A physician response to the new system of care, largely fostered by a refusal to accept Medicaid, is the formation of direct physician-patient contracted care without an intermediary.

In this respect, the problem is not simple, but it is resolvable. A proposal for improved economic stability has been prepared by Edward Ingram. A concern for American families and businesses is substantially addressed in a macroeconomic design concept covering financial services such as housing, government and business finance, and savings and pensions, boosting confidence at every level and giving everyone a better chance of success in planning their personal savings and their lifetime and business finances.

http://macro-economic-design.blogspot.com/p/book.html

Part 2 is a collection of scientific articles on current advances in cardiac care by the best-trained physicians the world has known, with mastery of the most advanced vascular instrumentation for medical or surgical interventions and of the latest diagnostic ultrasound and imaging tools, which are becoming outdated before the useful lifetime of the capital investment has been completed. If we tie together Part 1 and Part 2, there is ample room for considering clinical outcomes based on the individual and organizational factors behind best performance. This can really only be realized with considerable improvement in the information infrastructure, which has miles to go. Why should this be? Because, for generations, IT support systems have historically focused on billing and have made insignificant inroads into the front-end needs of the clinical staff.

Read Full Post »

Risk of Bias in Translational Science

Author: Larry H. Bernstein, MD, FCAP

and

Curator: Aviva Lev-Ari, PhD, RN

 

Assessment of risk of bias in translational science

Andre Barkhordarian1, Peter Pellionisz2, Mona Dousti1, Vivian Lam1, Lauren Gleason1, Mahsa Dousti1, Josemar Moura3 and Francesco Chiappelli1,4*

1Oral Biology & Medicine, School of Dentistry, UCLA, Evidence-Based Decisions Practice-Based Research Network, Los Angeles, USA

2Pre-medical program, UCLA, Los Angeles, CA

3School of Medicine, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil

4Evidence-Based Decisions Practice-Based Research Network, UCLA School of Dentistry, Los Angeles, CA

Journal of Translational Medicine 2013, 11:184   http://dx.doi.org/10.1186/1479-5876-11-184
http://www.translational-medicine.com/content/11/1/184

This is an Open Access article distributed under the terms of the Creative Commons Attribution License 
http://creativecommons.org/licenses/by/2.0

Abstract

Risk of bias in translational medicine may take one of three forms:

  1. a systematic error of methodology as it pertains to measurement or sampling (e.g., selection bias),
  2. a systematic defect of design that leads to estimates of experimental and control groups, and of effect sizes that substantially deviate from true values (e.g., information bias), and
  3. a systematic distortion of the analytical process, which results in a misrepresentation of the data with consequential errors of inference (e.g., inferential bias).

Risk of bias can seriously adulterate the internal and the external validity of a clinical study, and, unless it is identified and systematically evaluated, can seriously hamper the process of comparative effectiveness and efficacy research and analysis for practice. The Cochrane Group and the Agency for Healthcare Research and Quality have independently developed instruments for assessing the meta-construct of risk of bias. The present article begins to discuss this dialectic.

Background

As recently discussed in this journal [1], translational medicine is a rapidly evolving field. In its most recent conceptualization, it consists of two primary domains:

  • translational research proper and
  • translational effectiveness.

This distinction arises from a cogent articulation of the fundamental construct of translational medicine in particular, and of translational health care in general.

The Institute of Medicine’s Clinical Research Roundtable conceptualized the field as being composed by two fundamental “blocks”:

  • one translational “block” (T1) was defined as “…the transfer of new understandings of disease mechanisms gained in the laboratory into the development of new methods for diagnosis, therapy, and prevention and their first testing in humans…”, and
  • the second translational “block” (T2) was described as “…the translation of results from clinical studies into everyday clinical practice and health decision making…” [2].

These are clearly two distinct facets of one meta-construct, as outlined in Figure 1. As signaled by others, “…Referring to T1 and T2 by the same name—translational research—has become a source of some confusion. The 2 spheres are alike in name only. Their goals, settings, study designs, and investigators differ…” [3].

Fig 1. TM construct

Figure 1. Schematic representation of the meta-construct of translational health care in general, and translational medicine in particular, which consists of two fundamental constructs: the T1 “block” (as per Institute of Medicine’s Clinical Research Roundtable nomenclature), which represents the transfer of new understandings of disease mechanisms gained in the laboratory into the development of new methods for diagnosis, therapy, and prevention as well as their first testing in humans, and the T2 “block”, which pertains to translation of results from clinical studies into everyday clinical practice and health decision making [3]. The two “blocks” are inextricably intertwined because they jointly strive toward patient-centered research outcomes (PCOR) through the process of comparative effectiveness and efficacy research/review and analysis for clinical practice (CEERAP). The domain of each construct is distinct, since the “block” T1 is set in the context of a laboratory infrastructure within a nurturing academic institution, whereas the setting of “block” T2 is typically community-based (e.g., patient-centered medical/dental home/neighborhoods [4]; “communities of practice” [5]).

For the last five years at least, the Federal responsibilities for “block” T1 and T2 have been clearly delineated. The National Institutes of Health (NIH) predominantly concerns itself with translational research proper – the bench-to-bedside enterprise (T1); the Agency for Healthcare Research and Quality (AHRQ) focuses on the result-translation enterprise (T2). Specifically: “…the ultimate goal [of AHRQ] is research translation—that is, making sure that findings from AHRQ research are widely disseminated and ready to be used in everyday health care decision-making…” [6]. The terminology of translational effectiveness has emerged as a means of distinguishing the T2 block from T1.

Therefore, the bench-to-bedside enterprise pertains to translational research, and the result-translation enterprise describes translational effectiveness. The meta-construct of translational health care (viz., translational medicine) thus consists of these two fundamental constructs:

  • translational research and
  • translational effectiveness,

which have distinct purposes, protocols and products, while both converging on the same goal of new and improved means of

  • individualized patient-centered diagnostic and prognostic care.

It is important to note that the U.S. Patient Protection and Affordable Care Act (PPACA, 23 March 2010) has created an environment that facilitates the pursuit of translational health care because it emphasizes patient-centered outcomes research (PCOR). That is to say, it fosters the transaction between translational research (i.e., “block” T1)(TR) and translational effectiveness (i.e., “block” T2)(TE), and favors the establishment of communities of practice-research interaction. The latter, now recognized as practice-based research networks, incorporate three or more clinical practices in the community into

  • a community of practices network coordinated by an academic center of research.

Practice-based research networks may be a third “block” (T3)(PBTN) in translational health care and they could be conceptualized as a stepping-stone, a go-between bench-to-bedside translational research and result-translation translational effectiveness [7]. Alternatively, practice-based research networks represent the practical entities where the transaction between

  • translational research and translational effectiveness can most optimally be undertaken.

It is within the context of the practice-based research network that the process of bench-to-bedside can best seamlessly proceed, and it is within the framework of the practice-based research network that

  • the best evidence of results can be most efficiently translated into practice and
  • be utilized in evidence-based clinical decision-making, viz. translational effectiveness.

Translational effectiveness

As noted, translational effectiveness represents the translation of the best available evidence in the clinical practice to ensure its utilization in clinical decisions. Translational effectiveness fosters evidence-based revisions of clinical practice guidelines. It also encourages

  • effectiveness-focused,
  • patient-centered and
  • evidence-based clinical decision-making.

Translational effectiveness rests not only on the expertise of the clinical staff and the empowerment of patients, caregivers and stakeholders, but also, and

  • most importantly on the best available evidence [8].

The pursuit of the best available evidence is the foundation of

  • translational effectiveness and more generally of
  • translational medicine in evidence-based health care.

The best available evidence is obtained through a systematic process driven by

  • a research question/hypothesis that is articulated about clearly stated criteria that pertain to the
  • patient (P), the interventions (I) under consideration (C), for the sought clinical outcome (O), within a given timeline (T) and clinical setting (S).

PICOTS is tested on the appropriate bibliometric sample, with tools of measurements designed to establish the level (e.g., CONSORT) and the quality of the evidence. Statistical and meta-analytical inferences, often enhanced by analyses of clinical relevance [9], converge into the formulation of the consensus of the best available evidence. Its dissemination to all stakeholders is key to increase their health literacy in order to ensure their full participation

  • in the utilization of the best available evidence in clinical decisions, viz., translational effectiveness.

To be clear, translational effectiveness – and, in the perspective discussed above, translational health care – is anchored on obtaining the best available evidence,

  • which emerges from highest quality research.
  • which is obtained when errors are minimized.

In an early conceptualization [10], errors in research were presented as

  • those situations that threaten the internal and the external validity of a research study –

that is, conditions that impede either the study’s reproducibility, or its generalization. In point of fact, threats to internal and external validity [10] represent specific aspects of systematic errors (i.e., bias) in the

  • research design,
  • methodology and
  • data analysis.

Thence emerged a branch of science that seeks to

  • understand,
  • control and
  • reduce risk of bias in research.

Risk of bias and the best available evidence

It follows that the best available evidence comes from research with the fewest threats to internal and to external validity – that is to say, the fewest systematic errors: the lowest risk of bias. Quality of research, as defined in the field of research synthesis [11], has become synonymous with

  • low bias and contained risk of bias [12-15].

Several years ago, the Cochrane group embarked on a new strategy for assessing the quality of research studies by examining potential sources of bias. Certain original areas of potential bias in research were identified, which pertain to

(a) the sampling and the sample allocation process, to measurement, and to other related sources of errors (reliability of testing),

(b) design issues, including blinding, selection and drop-out, and design-specific caveats, and

(c) analysis-related biases.

A Risk of Bias tool was created (Cochrane Risk of Bias), which covered six specific domains:

1. selection bias,

2. performance bias,

3. detection bias,

4. attrition bias,

5. reporting bias, and

6. other research protocol-related biases.

Assessments were made within each domain by one or more items specific for certain aspects of the domain. Each item was scored in two distinct steps:

1. the support for judgment was intended to provide a succinct free-text description of the domain being queried;

2. each item was scored as being at high, low, or unclear risk of material bias (defined here as “…bias of sufficient magnitude to have a notable effect on the results or conclusions…” [16]).

It was advocated that assessments across items in the tool should be critically summarized for each outcome within each report. These critical summaries were to inform the investigator so that the primary meta-analysis could be performed either

  • only on studies at low risk of bias, or for
  • the studies stratified according to risk of bias [16].

This is a form of acceptable sampling analysis designed to yield increased homogeneity of meta-analytical outcomes [17]. Alternatively, the homogeneity of the meta-analysis can be further enhanced by means of the more direct quality-effects meta-analysis inferential model [18].
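The stratification strategy described above is straightforward to operationalise: carry the risk-of-bias judgment as a study-level label and pool either the low-risk stratum alone or each stratum separately. The sketch below is a minimal inverse-variance, fixed-effect pooling with invented effect sizes and standard errors; it is not the Cochrane or quality-effects software, only the mechanics of restricting the synthesis to low-risk studies.

```python
# Inverse-variance fixed-effect pooling, restricted to studies judged at low risk of bias.
import math

studies = [  # (label, effect estimate, standard error, risk-of-bias judgment)
    ("study 1", 0.42, 0.10, "low"),
    ("study 2", 0.35, 0.15, "low"),
    ("study 3", 0.80, 0.12, "high"),
    ("study 4", 0.50, 0.20, "unclear"),
]

def pooled(rows):
    weights = [1.0 / se ** 2 for _, _, se, _ in rows]
    estimate = sum(w * eff for w, (_, eff, _, _) in zip(weights, rows)) / sum(weights)
    std_err = math.sqrt(1.0 / sum(weights))
    return estimate, std_err

low_risk = [s for s in studies if s[3] == "low"]
for label, rows in (("all studies", studies), ("low risk of bias only", low_risk)):
    est, se = pooled(rows)
    print(f"{label}: pooled effect {est:.2f} (SE {se:.2f})")
```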

Clearly, one among the major drawbacks of the Cochrane Risk of Bias tool is

  • the subjective nature of its assessment protocol.

In an effort to correct for this inherent weakness of the instrument, the Cochrane group produced

  • detailed criteria for making judgments about the risk of bias from each individual item [16], and
  • a requirement that judgments be made independently by at least two people, with any discrepancies resolved by discussion [16].

This approach to increase the reliability of measurement in research synthesis protocols

  • is akin to that described by us [19,20] and by AHRQ [21].

In an effort to aid clinicians and patients in making effective health care decisions, AHRQ developed an alternative Risk of Bias instrument to enable the systematic evaluation of evidence reporting [22]. The AHRQ Risk of Bias instrument was created to monitor four primary domains:

1. risk of bias: design, methodology, and analysis (scored low, medium, or high)

2. consistency: extent of similarity in effect sizes across studies within a bibliome (scored consistent, inconsistent, or unknown)

3. directness: unidirectional link between the interventions of interest and the sought outcome, as opposed to multiple links in a causal chain (scored direct or indirect)

4. precision: extent of certainty of the estimate of effect with respect to the outcome (scored precise or imprecise)

In addition, four secondary domains were identified:

a. Dose-response association: pattern of a larger effect with greater exposure (Present/Not Present/Not Applicable or Not Tested)

b. Confounders: consideration of confounding variables (Present/Absent)

c. Strength of association: likelihood that the observed effect is large enough that it cannot have occurred solely as a result of bias from potential confounding factors (Strong/Weak)

d. Publication bias

The AHRQ Risk of Bias instrument is also designed to yield an overall grade of the estimated risk of bias in quality reporting:

• Strength of Evidence Grade (scored as high, moderate, low, or insufficient)

This global assessment, in addition to incorporating the assessments above, also rates:

• major benefit

• major harm

• jointly benefits and harms

• outcomes most relevant to patients, clinicians, and stakeholders (a schematic representation of the full instrument follows this list)
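
For readers who prefer to see the instrument's skeleton laid out, the following is a hedged, schematic sketch of one assessment record; the field names mirror the primary domains, secondary domains, and overall grade listed above, but the class itself (and its example values) is illustrative, not part of the AHRQ specification.

```python
from dataclasses import dataclass

@dataclass
class AHRQAssessment:
    # primary domains
    risk_of_bias: str                 # "low" | "medium" | "high"
    consistency: str                  # "consistent" | "inconsistent" | "unknown"
    directness: str                   # "direct" | "indirect"
    precision: str                    # "precise" | "imprecise"
    # secondary domains
    dose_response: str = "not tested" # "present" | "not present" | "not applicable" | "not tested"
    confounders: str = "absent"       # "present" | "absent"
    strength_of_association: str = "weak"       # "strong" | "weak"
    publication_bias: bool = False
    # overall Strength of Evidence grade
    strength_of_evidence: str = "insufficient"  # "high" | "moderate" | "low" | "insufficient"

# Illustrative record for a single body of evidence.
example = AHRQAssessment(
    risk_of_bias="low",
    consistency="consistent",
    directness="direct",
    precision="precise",
    strength_of_evidence="high",
)
print(example)
```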

The AHRQ Risk of Bias instrument suffers from the same two major limitations as the Cochrane tool:

1. lack of formal psychometric validation, like most other tools in the field [21], and

2. providing a subjective and not quantifiable assessment.

To begin the process of engaging in a systematic dialectic of the two instruments in terms of their respective construct and content validity, it is necessary to validate each for reliability and validity, either by means of classic psychometric theory or by generalizability (G) theory, which allows

  • the simultaneous estimation of multiple sources of measurement error variance (i.e., facets)
  • while generalizing the main findings across the different study facets.

G theory is particularly useful in clinical care analysis of this type because it permits the assessment of the reliability of clinical assessment protocols:

  • the reliability and minimal detectable changes across varied combinations of these facets are then readily calculated [23], although
  • it is recommended that G theory determination follow classic-theory psychometric assessment.

Therefore, we have commenced a process of revising the AHRQ Risk of Bias instrument by rendering the questions in the primary domains quantifiable (scaled 1–4),

  • which established the intra-rater reliability (r = 0.94, p < 0.05), and
  • the criterion validity (r = 0.96, p < 0.05) for this instrument (Figure 2).


Figure 2. Proportion of shared variance in criterion validity (A) and inter-rater reliability (B) in the AHRQ Risk of Bias instrument revised as described. Two raters were trained and standardized [20] with the revised AHRQ Risk of Bias instrument and with the R-Wong instrument, which has been previously validated [24]. Each rater independently produced ratings on a sample of research reports with both instruments on two separate occasions, 1–2 months apart. Pearson correlation coefficients were used to compute the respective associations. The figure shows Venn diagrams illustrating the intersection between the two sets of data used in each correlation. The overlap between the sets in each panel represents the proportion of shared variance for that correlation; the percent of unexplained variance is given in the inset of each panel.
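
The arithmetic behind such panels can be reproduced with a short sketch: a Pearson correlation between two sets of ratings, with the proportion of shared variance (the overlap of the Venn diagrams) taken as r squared. The score vectors below are hypothetical and are not the data underlying Figure 2.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical quality scores for the same set of reports, rated on two occasions
# (intra-rater reliability) or by the two instruments being compared (criterion validity).
ratings_1 = np.array([3.2, 2.8, 3.9, 1.7, 2.5, 3.1, 3.6, 2.2])
ratings_2 = np.array([3.1, 2.9, 3.8, 1.9, 2.4, 3.3, 3.5, 2.1])

r, p_value = pearsonr(ratings_1, ratings_2)
shared_variance = r ** 2          # overlap of the two sets, as in the Venn diagrams
unexplained = 1 - shared_variance # reported in the inset of each panel

print(f"r = {r:.2f}, p = {p_value:.4f}")
print(f"shared variance = {shared_variance:.1%}, unexplained variance = {unexplained:.1%}")
```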

A similar revision of the Cochrane Risk of Bias tool may also yield promising validation data. G theory validation of both tools will follow. Together, these results will enable a critical and systematic dialectical comparison of the Cochrane and the AHRQ Risk of Bias measures.

Discussion

The critical evaluation of the best available evidence is essential to patient-centered care, because biased research findings are fundamentally invalid and potentially harmful to the patient. Depending upon the tool of measurement, the validity of an instrument in a study is established by means of criterion validity, expressed through correlation coefficients. Criterion validity refers to the extent to which one measure predicts the value of another measure or quality based on a previously well-established criterion. Other domains of validity, such as construct validity and content validity, are rather more descriptive than quantitative. Reliability, by contrast, describes the consistency of a measure, that is, the extent to which a measurement is repeatable; it is commonly assessed quantitatively by correlation coefficients. Inter-rater reliability is rendered as a Pearson correlation coefficient between two independent readers and establishes the equivalence of ratings produced by independent observers or readers. Intra-rater reliability is determined by repeated measurements performed by the same rater/reader at two different points in time, assessing the correlation or strength of association between the two sets of scores.

To establish the reliability of research quality assessment tools it is necessary, as we previously noted [20]:

•a) to train multiple readers in sharing a common view for the cognitive interpretation of each item. Readers must possess declarative knowledge (a factual form of information, static in nature): a certain depth of knowledge and understanding of the facts about which they are reviewing the literature. They must also have procedural knowledge (imperative knowledge that can be directly applied to a task): in this case, a clear understanding of the fundamental concepts of research methodology, design, analysis, and inference.

•b) to train the readers to read and evaluate the quality of a set of papers independently and blindly. They must also be trained to self-monitor and self-assess their skills for the purpose of ensuring quality control.

•c) to refine the process until the inter-rater correlation coefficient and the Cohen coefficient of agreement are about 0.9 (over 81% shared variance). This establishes that the degree of agreement attained among well-trained readers is beyond chance.

•d) to obtain independent and blind reading assessments from readers on reports under study.

•e) to compute the mean and standard deviation of the scores for each question across the reports, and to repeat the process if the coefficient of variation is greater than 5% (i.e., to require less than 5% error among the readers for each question), as sketched below.
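
Step (e) lends itself to a brief computational sketch. The reader-by-question score matrix below is hypothetical; the code computes, for each question, the mean, standard deviation, and coefficient of variation across readers, and flags any question whose coefficient of variation exceeds 5% for another round of training and re-reading.

```python
import numpy as np

# rows = independent trained readers, columns = questions on the quality instrument
scores = np.array([
    [3.0, 2.5, 4.0, 3.5],
    [3.1, 2.9, 3.9, 3.4],
    [2.9, 2.2, 4.0, 3.6],
])

means = scores.mean(axis=0)
sds = scores.std(axis=0, ddof=1)
cv = sds / means                          # coefficient of variation per question

for q, (m, s, c) in enumerate(zip(means, sds, cv), start=1):
    verdict = "repeat the process" if c > 0.05 else "acceptable"
    print(f"Question {q}: mean = {m:.2f}, sd = {s:.2f}, CV = {c:.1%} -> {verdict}")
```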

The quantification provided by instruments validated in this manner, which assess the quality and the relative lack of bias of the research evidence, allows the scores to be analyzed by means of the acceptable sampling protocol. Acceptance sampling is a statistical procedure for determining whether a given lot (in this case, the evidence gathered from an identified set of published reports) should be accepted or rejected [12,25]. Acceptable sampling of the best available evidence can be obtained by:

• convention: accept the top 10th percentile of papers based on the score of the quality of the evidence (e.g., low risk of bias);

• confidence interval (CI95): accept the papers whose scores fall at or beyond the upper 95% confidence limit obtained from the mean and variance of the scores of the entire bibliome;

• statistical analysis: accept the papers that sustain sequential repeated Friedman analysis (a sketch of the first two criteria follows this list).
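
A minimal sketch of the first two acceptance criteria, on hypothetical quality scores, is given below. Note that the upper 95% confidence limit is computed here for the mean of the whole bibliome, which is one plausible reading of the criterion stated above.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(loc=3.0, scale=0.5, size=40)   # hypothetical quality scores for a bibliome

# (i) convention: retain the top 10th percentile of reports by quality score
decile_cutoff = np.percentile(scores, 90)
top_decile = scores[scores >= decile_cutoff]

# (ii) CI95: retain reports whose scores fall at or beyond the upper 95%
#      confidence limit of the bibliome mean
mean, sd, n = scores.mean(), scores.std(ddof=1), scores.size
upper_cl = mean + 1.96 * sd / np.sqrt(n)
retained_ci95 = scores[scores >= upper_cl]

print(f"Top-decile cutoff: {decile_cutoff:.2f} ({top_decile.size} reports retained)")
print(f"Upper 95% confidence limit: {upper_cl:.2f} ({retained_ci95.size} reports retained)")
```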

To be clear, the Friedman test is a non-parametric counterpart of the repeated-measures analysis of variance. The procedure follows the 4-E process outlined below:

• establishing a significant Friedman outcome, which indicates significant differences in scores among the individual reports being tested for quality;

• examining marginal means and standard deviations to identify inconsistencies and to identify the uniformly strong reports across all the domains tested by the quality instrument;

• excluding those reports that show quality weakness or bias;

• executing the Friedman analysis again, and repeating the 4-E process as many times as necessary, in a statistical process akin to hierarchical regression, to eliminate the evidence reports that exhibit egregious weakness based on the analysis of the marginal values, and to retain only the group of reports that harbors homogeneously strong evidence (a sketch of this iterative loop follows).
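
The 4-E loop can be sketched as follows on a hypothetical matrix of quality scores (reports in rows, instrument domains in columns). The exclusion rule used here, dropping the report with the weakest marginal mean at each significant pass, is one simple reading of the procedure above rather than a prescribed algorithm.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# rows = evidence reports, columns = quality domains scored by the instrument
scores = np.array([
    [3.8, 3.9, 3.7, 3.8],
    [3.6, 3.7, 3.8, 3.6],
    [2.1, 2.4, 2.0, 2.3],   # a clearly weak report
    [3.7, 3.8, 3.6, 3.9],
    [2.8, 2.6, 2.9, 2.7],   # another weaker report
])
reports = list(range(scores.shape[0]))          # indices of reports still retained

while len(reports) >= 3:                        # the Friedman test needs at least 3 samples
    samples = [scores[i] for i in reports]      # one sample per report, across domains
    stat, p = friedmanchisquare(*samples)
    if p >= 0.05:
        break                                   # remaining reports are homogeneously strong
    margins = scores[reports].mean(axis=1)      # marginal mean score per retained report
    weakest = reports[int(np.argmin(margins))]
    print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}; excluding report {weakest}")
    reports.remove(weakest)

print("Reports retained as homogeneously strong evidence:", reports)
```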

Taken together, and considering the domains and the structure of both tools, we expect these analyses to confirm that the two instruments are related entities, each measuring distinct aspects of bias. We anticipate that future research will establish that both tools assess complementary sub-constructs of one and the same archetypal meta-construct of research quality.

References

  1. Jiang F, Zhang J, Wang X, Shen X: Important steps to improve translation from medical research to health policy. J Transl Med 2013, 11:33.

  2. Sung NS, Crowley WF Jr, Genel M, Salber P, Sandy L, Sherwood LM, Johnson SB, Catanese V, Tilson H, Getz K, Larson EL, Scheinberg D, Reece EA, Slavkin H, Dobs A, Grebb J, Martinez RA, Korn A, Rimoin D: Central challenges facing the national clinical research enterprise. JAMA 2003, 289:1278-1287.

  3. Woolf SH: The meaning of translational research and why it matters. JAMA 2008, 299(2):211-213.

  4. Chiappelli F: From translational research to translational effectiveness: the “patient-centered dental home” model. Dental Hypotheses 2011, 2:105-112.

  5. Maida C: Building communities of practice in comparative effectiveness research. In Comparative effectiveness and efficacy research and analysis for practice (CEERAP): applications for treatment options in health care. Edited by Chiappelli F, Brant X, Cajulis C. Heidelberg: Springer-Verlag; 2012: Chapter 1.

  6. Agency for Healthcare Research and Quality: Budget estimates for appropriations committees, fiscal year (FY) 2008: performance budget submission for congressional justification. Performance budget overview 2008. http://www.ahrq.gov/about/cj2008/cjweb08a.htm#Statement. Accessed 11 May 2013.

  7. Westfall JM, Mold J, Fagnan L: Practice-based research—“blue highways” on the NIH roadmap. JAMA 2007, 297:403-406.

  8. Chiappelli F, Brant X, Cajulis C: Comparative effectiveness and efficacy research and analysis for practice (CEERAP): applications for treatment options in health care. Heidelberg: Springer-Verlag; 2012.

  9. Dousti M, Ramchandani MH, Chiappelli F: Evidence-based clinical significance in health care: toward an inferential analysis of clinical relevance. Dental Hypotheses 2011, 2:165-177.

  10. Campbell D, Stanley J: Experimental and quasi-experimental designs for research. Chicago, IL: Rand-McNally; 1963.

  11. Littell JH, Corcoran J, Pillai V: Research synthesis reports and meta-analysis. New York, NY: Oxford University Press; 2008.

  12. Chiappelli F: The science of research synthesis: a manual of evidence-based research for the health sciences. Hauppauge, NY: NovaScience Publishers, Inc; 2008.

  13. Higgins JPT, Green S: Cochrane handbook for systematic reviews of interventions, version 5.0.1. Chichester, West Sussex, UK: John Wiley & Sons, The Cochrane Collaboration; 2008.

  14. CRD: Systematic reviews: CRD’s guidance for undertaking reviews in health care. York, UK: Centre for Reviews and Dissemination, University of York, National Institute for Health Research (NIHR); 2009.

  15. McDonald KM, Chang C, Schultz E: Closing the quality gap: revisiting the state of the science. Summary report. AHRQ publication No. 12(13)-E017. Rockville, MD: Agency for Healthcare Research and Quality, U.S. Department of Health & Human Services; 2013.


Read Full Post »
