
The Evolution of Clinical Chemistry in the 20th Century

Curator: Larry H. Bernstein, MD, FCAP

This is a subchapter in the series on developments in diagnostics in the period from 1880 to 1980.

Otto Folin: America’s First Clinical Biochemist

(Extracted from Samuel Meites, AACC History Division; Apr 1996)

Foreword by Wendell T. Caraway, PhD.

The first introduction to Folin comes with the Folin-Wu protein-free filtrate, a technique for removing proteins from whole blood or plasma that resulted in water-clear solutions suitable for the determination of glucose, creatinine, uric acid, non-protein nitrogen, and chloride. The major active ingredient used in the precipitation of protein was sodium tungstate prepared “according to Folin”. Folin-Wu sugar tubes were used for the determination of glucose. From these and subsequent encounters, we learned that Folin was a pioneer in methods for the chemical analysis of blood. The determination of uric acid in serum was performed by the Benedict method, in which protein-free filtrate was mixed with solutions of sodium cyanide and arsenophosphotungstic acid and then heated in a water bath to develop a blue color. A thorough review of the literature revealed that Folin and Denis had published, in 1912, a method for uric acid in which they used sodium carbonate rather than sodium cyanide; this method was modified and largely superseded the “cyanide” method.

Notes from the author.

Modern clinical chemistry began with the application of 20th century quantitative analysis and instrumentation to measure constituents of blood and urine, and with relating the values obtained to human health and disease. In the United States, the first impetus propelling this new area of biochemistry was provided by the 1912 papers of Otto Folin. The only precedent for these stimulating findings was his own earlier and certainly classic papers on the quantitative composition of urine, the laws governing its composition, and studies on the catabolic end products of protein, which led to his ingenious concept of endogenous and exogenous metabolism. He had already determined blood ammonia in 1902. This work preceded the entry of Stanley Benedict and Donald Van Slyke into biochemistry. Once all three of them were active contributors, the future of clinical biochemistry was ensured. Those who consult the early volumes of the Journal of Biological Chemistry will discover the direction that the work of Otto Folin gave to biochemistry. This modest, unobtrusive man of Harvard was a powerful stimulus and inspiration to others.

Quantitatively, in the years of his scientific productivity, 1897-1934, Otto Folin published 151 (+ 1) journal articles, including a chapter in Abderhalden's handbook and one in Hammarsten's Festschrift, but excluding his doctoral dissertation, his published abstracts, and several articles in the proceedings of the Association of Life Insurance Directors of America. He also wrote one monograph on food preservatives and produced five editions of his laboratory manual. He published four articles while studying in Europe (1896-98), 28 while at the McLean Hospital (1900-07), and 119 at Harvard (1908-34). In his banner year of 1912 he published 20 papers. His peak period, 1912-15, included 15 papers, the monograph, and most of the work on the first edition of his laboratory manual.

The quality of Otto Folin's life's work relates to its impact on biochemistry, particularly clinical biochemistry. Otto's two brilliant collaborators, Willey Denis and Hsien Wu, must be acknowledged. Without Denis, Otto could not have achieved so rapidly the introduction and popularization of modern blood analysis in the U.S. It would be pointless to conjecture how far Otto would have progressed without this pair.

His work provided the basis of the modern approach to the quantitative analysis of blood and urine through improved methods that reduced the body fluid volume required for analysis. He also applied these methods to metabolic studies on tissues as well as body fluids. Because his interests lay in protein metabolism, his major contributions were directed toward measuring nitrogenous waste or end products. His most dramatic achievement is illustrated by the study of blood nitrogen retention in nephritis and gout.

Folin introduced colorimetry, turbidimetry, and the use of color filters into quantitative clinical biochemistry. He initiated and applied ingeniously conceived reagents and chemical reactions that paved the way for a host of studies by his contemporaries. He introduced the use of phosphomolybdate for detecting phenolic compounds, and phosphotungstate for uric acid. These, in turn, led to the quantitation of epinephrine, and of tyrosine, tryptophan, and cystine in protein. The molybdate suggested to Fiske and SubbaRow the determination of phosphate as phosphomolybdate, and the tungsten led to the use of tungstic acid as a protein precipitant. Phosphomolybdate became the key reagent in the blood sugar method. Folin resurrected the abandoned Jaffe reaction and established creatine and creatinine analysis. He also laid the groundwork for the discovery of creatine phosphate. Clinical chemistry owes to him the introduction of Nessler's reagent, Permutit, Lloyd's reagent, gum ghatti, and preservatives for standards, such as benzoic acid and formaldehyde. Among his distinguished graduate investigators were Bloor, Doisy, Fiske, Shaffer, SubbaRow, Sumner, and Wu.

A Golden Age of Clinical Chemistry: 1948–1960

Louis Rosenfeld
Clinical Chemistry 2000; 46(10): 1705–1714

The 12 years from 1948 to 1960 were notable for introduction of the Vacutainer tube, electrophoresis, radioimmunoassay, and the Auto-Analyzer. Also appearing during this interval were new organizations, publications, programs, and services that established a firm foundation for the professional status of clinical chemists. It was a golden age.

Except for photoelectric colorimeters, the clinical chemistry laboratories in 1948—and in many places even later—were not very different from those of 1925. The basic technology and equipment were essentially unchanged. There was lots of glassware of different kinds—pipettes, burettes, wooden racks of test tubes, funnels, filter paper, cylinders, flasks, and beakers—as well as visual colorimeters, centrifuges, water baths, an exhaust hood for evaporating organic solvents after extractions, a microscope for examining urine sediments, a double-pan analytical beam balance for weighing reagents and standard chemicals, and perhaps a pH meter. The most complicated apparatus was the Van Slyke volumetric gas device—manually operated. The emphasis was on classical chemical and biological techniques that did not require instrumentation.

The unparalleled growth and wide-ranging research that began after World War II and have continued into the new century, often aided by government funding for biomedical research and development as civilian health has become a major national goal, have impacted the operations of the clinical chemistry laboratory. The years from 1948 to 1960 were especially notable for the innovative technology that produced better methods for the investigation of many diseases, in many cases leading to better treatment.

AUTOMATION IN CLINICAL CHEMISTRY: CURRENT SUCCESSES AND TRENDS FOR THE FUTURE
Pierangelo Bonini
Pure & Appl. Chem. 1982; 54(11): 2017–2030.

The history of automation in clinical chemistry is the history of how and when technological progress in the field of analytical methodology, as well as in the field of instrumentation, has helped clinical chemists to mechanize their procedures and to control them.

GENERAL STEPS OF A CLINICAL CHEMISTRY PROCEDURE
1 – PRELIMINARY TREATMENT (DEPROTEINIZATION)
2 – SAMPLE + REAGENT(S)
3 – INCUBATION
4 – READING
5 – CALCULATION
Fig. 1 General steps of a clinical chemistry procedure
Especially in the classic clinical chemistry methods, a preliminary treatment of the sample (in most cases a deproteinization) was an essential step. This was a major constraint on the first tentative steps in automation, and we will see how this problem was faced and which new problems arose from avoiding deproteinization. Mixing samples and reagents is the next step; then there is a longer or shorter incubation at various temperatures, and finally the reading, which means detection of changes in some physical property of the mixture. In most cases the development of a colour reveals the reaction but, as is well known, many other possibilities exist. Finally, the result is calculated.
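To make these steps concrete, here is a minimal sketch in Python of how a manual, end-point colorimetric procedure maps onto the five steps of Fig. 1. All function names, sensitivities, and concentrations below are invented for illustration and are not taken from Bonini's paper.

```python
# Illustrative walk-through of the five steps in Fig. 1 for a manual,
# end-point colorimetric method. All names and numbers are hypothetical.

def incubate_with_reagent(analyte_conc_mg_dl: float, sensitivity_a_per_mg_dl: float) -> float:
    """Steps 2-3: sample + reagent, then incubation; the colour formed is
    assumed proportional to the analyte concentration (end-point reaction)."""
    return sensitivity_a_per_mg_dl * analyte_conc_mg_dl

def calculate(sample_abs: float, standard_abs: float, standard_conc: float) -> float:
    """Step 5: classical single-point calculation against a standard of
    known concentration carried through the identical procedure."""
    return standard_conc * sample_abs / standard_abs

# Step 1 (preliminary treatment) is assumed here to have removed the protein,
# so that it contributes nothing to the colour that is read in step 4.
SENSITIVITY = 0.005          # hypothetical: absorbance units per mg/dL
STANDARD_CONC = 100.0        # mg/dL standard

standard_abs = incubate_with_reagent(STANDARD_CONC, SENSITIVITY)   # step 4: read standard
sample_abs = incubate_with_reagent(90.0, SENSITIVITY)              # step 4: read sample

print(f"Result: {calculate(sample_abs, standard_abs, STANDARD_CONC):.1f} mg/dL")  # -> 90.0
```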

Some 25 years ago, Skeggs (1) presented his paper on continuous flow automation, which was the basis of very successful instruments still used all over the world. In continuous flow automation, reactions take place in a hydraulic route common to all samples.

Standards and samples enter the analytical stream segmented by air bubbles
and, as they circulate, specific chemical reactions and physical manipulations
continuously take place in the stream. Finally, after the air bubbles are vented,
the colour intensity, proportional to the solute molecules, is monitored in a
detector flow cell.

It is evident that the most important aim of automation is to process correctly as many samples as possible in as short a time as possible. This result has been obtained thanks to many technological advances, both in analytical methodology and in instrument technology.

ANALYTICAL METHODOLOGY
– VERY ACTIVE ENZYMATIC REAGENTS
– SHORTER REACTION TIME
– KINETIC AND FIXED-TIME REACTIONS
– NO NEED OF DEPROTEINIZATION
– SURFACTANTS
– AUTOMATIC SAMPLE BLANK CALCULATION
– POLYCHROMATIC ANALYSIS

The introduction of very active enzymatic reagents for the determination of substrates resulted in shorter reaction times and, in many cases, made it possible to avoid deproteinization. Reaction times are also reduced by using kinetic and fixed-time reactions instead of end points; in this case, the measurement of the sample blank does not require a separate tube with a separate reaction mixture. Deproteinization can also be avoided by using surfactants in the reagent mixture. An automatic calculation of sample blanks is also possible by using polychromatic analysis. As can be seen from this figure, reduction of reaction times and elimination of tedious operations like deproteinization are the main results of this analytical progress.
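As a hedged illustration of the kinetic (rate) approach just described, the sketch below fits a straight line to a few absorbance readings taken at fixed intervals and converts the slope into a result. The readings, times, and calibration factor are invented for the example and are not taken from the paper.

```python
# Minimal sketch of a kinetic (rate) measurement: instead of waiting for an
# end point, the analyzer takes several absorbance readings during the linear
# phase of the reaction and uses the rate of change. Values are illustrative.

def fit_slope(times_s, absorbances):
    """Least-squares slope dA/dt from paired (time, absorbance) readings."""
    n = len(times_s)
    mean_t = sum(times_s) / n
    mean_a = sum(absorbances) / n
    num = sum((t - mean_t) * (a - mean_a) for t, a in zip(times_s, absorbances))
    den = sum((t - mean_t) ** 2 for t in times_s)
    return num / den                      # absorbance units per second

times = [0, 15, 30, 45, 60]                         # hypothetical readings every 15 s
abs_readings = [0.100, 0.142, 0.185, 0.226, 0.268]

rate_per_min = fit_slope(times, abs_readings) * 60
calibration_factor = 300.0                          # hypothetical (result units per A/min)
print(f"dA/dt = {rate_per_min:.3f} A/min -> result = {rate_per_min * calibration_factor:.0f} units")
```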

Many relevant improvements in mechanics and optics over the last
twenty years and the tremendous advance in electronics have largely
contributed to the instrumental improvement of clinical chemistry automation.

A recent and interesting innovation in the field of centrifugal analyzers is the possibility of adding another reagent to an already mixed sample-reagent solution. This innovation allows a preincubation to be performed and sample blanks to be read before the starter reagent is added.
The possibility of measuring absorbances in cuvettes positioned longitudinally to the light path, realized in a recent model of centrifugal analyzer, is claimed to be advantageous for reading absorbances in non-homogeneous solutions, for avoiding any influence of reagent volume errors on the absorbance, and for providing more suitable calculation factors. Interest in fluorimetric assays is growing steadily, especially in connection with immunofluorimetric drug assays. This technology has recently been applied to centrifugal analyzer technology as well. A xenon lamp generates high-energy light, which is reflected by a holographic mirror grating driven by a stepping motor. The selected wavelength of the exciting light passes through a slit and reaches the rotating cuvettes. The fluorescence is then filtered, read by means of a photomultiplier, and compared with the continuously monitored fluorescence of an appropriate reference compound. In this way, any instability due either to the electro-optical devices or to changes in the physicochemical properties of the solution is corrected.

…more…

Dr. Yellapragada Subbarow – ATP – Energy for Life

One of the observations Dr SubbaRow made while testing the phosphorus method seemed to provide a clue to the mystery of what happens to blood sugar when insulin is administered. Biochemists began investigating the problem when Frederick Banting showed that injections of insulin, the pancreatic hormone, keep blood sugar under control and keep diabetics alive.

SubbaRow worked for 18 months on the problem, often dieting and starving along with animals used in experiments. But the initial observations were finally shown to be neither significant nor unique and the project had to be scrapped in September 1926.

Out of the ashes of this project however arose another project that provided the key to the ancient mystery of muscular contraction. Living organisms resist degeneration and destruction with the help of muscles, and biochemists had long believed that a hypothetical inogen provided the energy required for the flexing of muscles at work.

Two researchers at Cambridge University in the United Kingdom confirmed that lactic acid is formed when muscles contract, and Otto Meyerhof of Germany showed that this lactic acid is a breakdown product of glycogen, the animal starch stored all over the body, particularly in the liver, kidneys and muscles. When Professor Archibald Hill of University College London demonstrated that conversion of glycogen to lactic acid partly accounts for the heat produced during muscle contraction, everybody assumed that glycogen was the inogen. And the 1922 Nobel Prize in Physiology or Medicine was divided between Hill and Meyerhof.

But how is glycogen converted to lactic acid? Embden, another German biochemist, advanced the hypothesis that blood sugar and phosphorus combine to form a hexose phosphoric ester which breaks down glycogen in the muscle to lactic acid.

In the midst of the insulin experiments, it occurred to Fiske and SubbaRow that Embden’s hypothesis would be supported if normal persons were found to have more hexose phosphate in their muscle and liver than diabetics. For diabetes is the failure of the body to use sugar. There would be little reaction between sugar and phosphorus in a diabetic body. If Embden was right, hexose (sugar) phosphate level in the muscle and liver of diabetic animals should rise when insulin is injected.

Fiske and SubbaRow rendered some animals diabetic by removing their pancreas in the spring of 1926, but they could not record any rise in the organic phosphorus content of muscles or livers after insulin was administered to the animals. Sugar phosphates were indeed produced in their animals but they were converted so quickly by enzymes to lactic acid that Fiske and SubbaRow could not detect them with methods then available. This was fortunate for science because, in their mistaken belief that Embden was wrong, they began that summer an extensive study of organic phosphorus compounds in the muscle “to repudiate Meyerhof completely”.

The departmental budget was so poor that SubbaRow often waited on the back streets of Harvard Medical School at night to capture cats he needed for the experiments. When he prepared the cat muscles for estimating their phosphorus content, SubbaRow found he could not get a constant reading in the colorimeter. The intensity of the blue colour went on rising for thirty minutes. Was there something in muscle which delayed the colour reaction? If yes, the time for full colour development should increase with the increase in the quantity of the sample. But the delay was not greater when the sample was 10 c.c. instead of 5 c.c. The only other possibility was that muscle had an organic compound which liberated phosphorus as the reaction in the colorimeter proceeded. This indeed was the case, it turned out. It took a whole year.

The mysterious colour delaying substance was a compound of phosphoric acid and creatine and was named Phosphocreatine. It accounted for two-thirds of the phosphorus in the resting muscle. When they put muscle to work by electric stimulation, the Phosphocreatine level fell and the inorganic phosphorus level rose correspondingly. It completely disappeared when they cut off the blood supply and drove the muscle to the point of “fatigue” by continued electric stimulation. And, presto! It reappeared when the fatigued muscle was allowed a period of rest.

Phosphocreatine created a stir among the scientists present when Fiske unveiled it before the American Society of Biological Chemists at Rochester in April 1927. The Journal of the American Medical Association hailed the discovery in an editorial. The Rockefeller Foundation awarded a fellowship that helped SubbaRow live comfortably for the first time since his arrival in the United States. All of Harvard Medical School was caught up with an enthusiasm that would be a lifetime memory for contemporary students. The students were in awe of the medium-sized, slightly stoop-shouldered, “coloured” man regarded as one of the School's top research workers.

SubbaRow’s carefully conducted series of experiments disproved Meyerhof’s assumptions about the glycogen-lactic acid cycle. His calculations fully accounted for the heat output during muscle contraction. Hill had not been able to fully account for this in terms of Meyerhof’s theory. Clearly the Nobel Committee was in haste in awarding the 1922 physiology prize, but the biochemistry orthodoxy led by Meyerhof and Hill themselves was not too eager to give up their belief in glycogen as the prime source of muscular energy.

Fiske and SubbaRow were fully upheld and the Meyerhof-Hill theory finally rejected in 1930 when a Danish physiologist showed that muscles can work to exhaustion without the aid of glycogen or the stimulation of lactic acid.

Fiske and SubbaRow had meanwhile followed a substance that was formed by the combination of phosphorus, liberated from Phosphocreatine, with an unidentified compound in muscle. SubbaRow isolated it and identified it as a chemical in which adenylic acid was linked to two extra molecules of phosphoric acid. By the time he completed the work to the satisfaction of Fiske, it was August 1929 when Harvard Medical School played host to the 13th International Physiological Congress.

ATP was presented to the gathered scientists before the Congress ended. To the dismay of Fiske and SubbaRow, a German science journal arrived in Boston a few days later, published 16 days before the Congress opened. It carried a letter from Karl Lohmann of Meyerhof's laboratory, saying he had isolated from muscle a compound of adenylic acid linked to two molecules of phosphoric acid!

While Archibald Hill never adjusted himself to the idea that the basis of his Nobel Prize work had been demolished, Otto Meyerhof and his associates had seen the importance of the Phosphocreatine discovery and plunged into follow-up studies in competition with Fiske and SubbaRow. Two associates of Hill had in fact stumbled upon Phosphocreatine at about the same time as Fiske and SubbaRow, but their loyalty to the Meyerhof-Hill theory acted as blinkers, and their hasty and premature publications reveal their confusion about both the nature and significance of Phosphocreatine.

The discovery of ATP and its significance helped reveal the full story of muscular contraction: glycogen arriving in muscle gets converted into lactic acid, which is siphoned off to the liver for re-synthesis of glycogen. This cycle yields three molecules of ATP and is important in delivering usable food energy to the muscle. Glycolysis, or breakup of glycogen, is relatively slow in getting started, and in any case muscle can retain ATP only in small quantities. In the interval between the beginning of muscle activity and the arrival of fresh ATP from glycolysis, Phosphocreatine maintains the ATP supply by re-synthesizing it as fast as its energy terminals are used up by muscle for its activity.

Muscular contraction made possible by ATP helps us not only to move our limbs and lift weights but keeps us alive. The heart is after all a muscle pouch and millions of muscle cells embedded in the walls of arteries keep the life-sustaining blood pumped by the heart coursing through body organs. ATP even helps get new life started by powering the sperm’s motion toward the egg as well as the spectacular transformation of the fertilized egg in the womb.

Archibald Hill for long denied any role for ATP in muscle contraction, saying ATP has not been shown to break down in the intact muscle. This objection was also met in 1962 when University of Pennsylvania scientists showed that muscles can contract and relax normally even when glycogen and Phosphocreatine are kept under check with an inhibitor.

Michael Somogyi

Michael Somogyi was born in Reinsdorf, Austria-Hungary, in 1883. He received a degree in chemical engineering from the University of Budapest, and after spending some time there as a graduate assistant in biochemistry, he immigrated to the United States. From 1906 to 1908 he was an assistant in biochemistry at Cornell University.

Returning to his native land in 1908, he became head of the Municipal Laboratory in Budapest, and in 1914 he was granted his Ph.D. After World War I, the politically unstable situation in his homeland led him to return to the United States where he took a job as an instructor in biochemistry at Washington University in St. Louis, Missouri. While there he assisted Philip A. Shaffer and Edward Adelbert Doisy, Sr., a future Nobel Prize recipient, in developing a new method for the preparation of insulin in sufficiently large amounts and of sufficient purity to make it a viable treatment for diabetes. This early work with insulin helped foster Somogyi’s lifelong interest in the treatment and cure of diabetes. He was the first biochemist appointed to the staff of the newly opened Jewish Hospital, and he remained there as the director of their clinical laboratory until his retirement in 1957.

Arterial Blood Gases.  Van Slyke.

The test is used to determine the pH of the blood, the partial pressure of carbon dioxide and oxygen, and the bicarbonate level. Many blood gas analyzers will also report concentrations of lactate, hemoglobin, several electrolytes, oxyhemoglobin, carboxyhemoglobin and methemoglobin. ABG testing is mainly used in pulmonology and critical care medicine to assess gas exchange across the alveolar-capillary membrane.

DONALD DEXTER VAN SLYKE died on May 4, 1971, after a long and productive career that spanned three generations of biochemists and physicians. He left behind not only a bibliography of 317 journal publications and 5 books, but also more than 100 persons who had worked with him and distinguished themselves in biochemistry and academic medicine. His doctoral thesis, with Gomberg at the University of Michigan, was published in the Journal of the American Chemical Society in 1907. Van Slyke received an invitation from Dr. Simon Flexner, Director of the Rockefeller Institute, to come to New York for an interview. In 1911 he spent a year in Berlin with Emil Fischer, who was then the leading chemist of the scientific world. He was particularly impressed by Fischer's performing all laboratory operations quantitatively—a procedure Van followed throughout his life. Before going to Berlin, he published the classic nitrous acid method for the quantitative determination of primary aliphatic amino groups, the first of the many gasometric procedures devised by Van Slyke, which made possible the determination of amino acids. It was the primary method used to study the amino acid composition of proteins for years before chromatography. Thus, his first seven postdoctoral years were centered on the development of better methodology for protein composition and amino acid metabolism.

With his colleague G. M. Meyer, he first demonstrated that amino acids, liberated during digestion in the intestine, are absorbed into the bloodstream, that they are removed by the tissues, and that the liver alone possesses the ability to convert the amino acid nitrogen into urea. From the study of the kinetics of urease action, Van Slyke and Cullen developed equations that depended upon two reactions: (1) the combination of enzyme and substrate in stoichiometric proportions, and (2) the conversion of that combination into the end products. Published in 1914, this formulation, involving two velocity constants, was similar to that arrived at contemporaneously by Michaelis and Menten in Germany in 1913.
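For reference, the 1913 Michaelis-Menten rate law to which this formulation is compared can be written in modern notation (not the notation of either original paper) as

\[ v = \frac{V_{\max}\,[S]}{K_m + [S]} \]

where \(v\) is the initial reaction velocity, \([S]\) the substrate concentration, \(V_{\max}\) the limiting velocity at saturating substrate, and \(K_m\) the substrate concentration at which \(v = V_{\max}/2\); the Van Slyke-Cullen treatment of urease arrived at a similar saturation behaviour from its two velocity constants.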

He transferred to the Rockefeller Institute's Hospital in 1914, under Dr. Rufus Cole, where “Men who were studying disease clinically had the right to go as deeply into its fundamental nature as their training allowed, and in the Rockefeller Institute's Hospital every man who was caring for patients should also be engaged in more fundamental study”. The study of diabetes was already under way by Dr. F. M. Allen, but patients inevitably died of acidosis. Van Slyke reasoned that if incomplete oxidation of fatty acids in the body led to the accumulation of acetoacetic and beta-hydroxybutyric acids in the blood, then a reaction would result between these acids and bicarbonate ions that would lead to a lower-than-normal bicarbonate concentration in blood plasma. The problem thus became one of devising an analytical method that would permit the quantitative determination of bicarbonate concentration in small amounts of blood plasma. He ingeniously devised a volumetric glass apparatus that was easy to use and required less than ten minutes for the determination of the total carbon dioxide in one cubic centimeter of plasma. It was also soon found to be an excellent apparatus with which to determine blood oxygen concentrations, thus leading to measurements of the percentage saturation of blood hemoglobin with oxygen. This found extensive application in the study of respiratory diseases, such as pneumonia and tuberculosis. It also led to the quantitative study of cyanosis and a monograph on the subject by C. Lundsgaard and Van Slyke.
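The underlying relationship, stated here in modern form for context rather than as Van Slyke wrote it, is the Henderson-Hasselbalch equation for the bicarbonate system:

\[ \mathrm{pH} = pK_1' + \log_{10}\frac{[\mathrm{HCO_3^-}]}{s \cdot P_{\mathrm{CO_2}}} \]

where, for plasma at 37 °C, \(pK_1' \approx 6.1\) and \(s \approx 0.03\ \mathrm{mmol\,L^{-1}\,mmHg^{-1}}\) is the solubility coefficient of CO2. When ketoacids consume bicarbonate, the numerator falls and the pH falls with it, which is why the total CO2 of plasma served as a practical index of acidosis.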

In all, Van Slyke and his colleagues published twenty-one papers under the general title “Studies of Acidosis,” beginning in 1917 and ending in 1934. They included not only chemical manifestations of acidosis, but Van Slyke, in No. 17 of the series (1921), elaborated and expanded the subject to describe in chemical terms the normal and abnormal variations in the acid-base balance of the blood. This was a landmark in understanding acid-base balance pathology.  Within seven years after Van moved to the Hospital, he had published a total of fifty-three papers, thirty-three of them coauthored with clinical colleagues.

In 1920, Van Slyke and his colleagues undertook a comprehensive investigation of gas and electrolyte equilibria in blood. McLean and Henderson at Harvard had made preliminary studies of blood as a physico-chemical system, but realized that Van Slyke and his colleagues at the Rockefeller Hospital had superior techniques and the facilities necessary for such an undertaking. A collaboration thereupon began between the two laboratories, which resulted in rapid progress toward an exact physico-chemical description of the role of hemoglobin in the transport of oxygen and carbon dioxide, of the distribution of diffusible ions and water between erythrocytes and plasma, and of factors, such as the degree of oxygenation of hemoglobin and the hydrogen ion concentration, that modified these distributions. In the course of this work, Van Slyke revised his volumetric gas analysis apparatus into a manometric one. The manometric apparatus proved to give results that were five to ten times more accurate.

A series of papers resulted on the CO2 titration curves of oxy- and deoxyhemoglobin, of oxygenated and reduced whole blood, and of blood subjected to different degrees of oxygenation, and on the distribution of diffusible ions in blood. These developed equations that predicted the change in distribution of water and diffusible ions between blood plasma and blood cells when there was a change in pH of the oxygenated blood. A significant contribution of Van Slyke and his colleagues was the application of the Gibbs-Donnan law to the blood—regarded as a two-phase system, in which one phase (the erythrocytes) contained a high concentration of nondiffusible negative ions, i.e., those associated with hemoglobin, and cations, which were not freely exchangeable between cells and plasma. By changing the pH through varying the CO2 tension, the concentration of negative hemoglobin charges changed by a predictable amount. This, in turn, changed the distribution of diffusible anions such as Cl− and HCO3− in order to restore the Gibbs-Donnan equilibrium. Redistribution of water occurred to restore osmotic equilibrium. The experimental results confirmed the predictions of the equations.
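In modern shorthand (not the authors' original notation), the Gibbs-Donnan condition used here requires that, at equilibrium, the diffusible anions distribute between erythrocyte water (c) and plasma water (p) in a single common ratio,

\[ r \;=\; \frac{[\mathrm{Cl^-}]_c}{[\mathrm{Cl^-}]_p} \;=\; \frac{[\mathrm{HCO_3^-}]_c}{[\mathrm{HCO_3^-}]_p} \]

so a change in the net negative charge on hemoglobin, for example through a change in pH or CO2 tension, forces a predictable redistribution of chloride, bicarbonate, and water between cells and plasma.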

As a spin-off from the physico-chemical study of the blood, Van undertook, in 1922, to put the concept of buffer value of weak electrolytes on a mathematically exact basis.

This proved to be useful in determining buffer values of mixed, polyvalent, and amphoteric electrolytes, and put the understanding of buffering on a quantitative basis. A monograph in Medicine entitled “Observation on the Courses of Different Types of Bright’s Disease, and on the Resultant Changes in Renal Anatomy,” was a landmark that related the changes occurring at different stages of renal deterioration to the quantitative changes taking place in kidney function. During this period, Van Slyke and R. M. Archibald identified glutamine as the source of urinary ammonia. During World War II, Van and his colleagues documented the effect of shock on renal function and, with R. A. Phillips, developed a simple method, based on specific gravity, suitable for use in the field.
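As a sketch of the buffer value concept mentioned above, written in today's notation rather than Van Slyke's 1922 symbols: the buffer value is the amount of strong base B needed to raise the pH by one unit,

\[ \beta = \frac{dB}{d\,\mathrm{pH}} = 2.303\left([\mathrm{H^+}] + \frac{K_w}{[\mathrm{H^+}]} + \frac{C\,K_a\,[\mathrm{H^+}]}{\left(K_a + [\mathrm{H^+}]\right)^2}\right) \]

for a single weak acid of total concentration \(C\) and dissociation constant \(K_a\), with \(K_w\) the ion product of water. Buffer values of the components of a mixture are additive, which is what makes the treatment applicable to mixed, polyvalent, and amphoteric electrolytes.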

Over 100 of Van’s 300 publications were devoted to methodology. The importance of Van Slyke’s contribution to clinical chemical methodology cannot be overestimated. These included the blood organic constituents (carbohydrates, fats, proteins, amino acids, urea, nonprotein nitrogen, and phospholipids) and the inorganic constituents (total cations, calcium, chlorides, phosphate, and the gases carbon dioxide, carbon monoxide, and nitrogen). It was said that a Van Slyke manometric apparatus was almost all the special equipment needed to perform most of the clinical chemical analyses customarily performed prior to the introduction of photocolorimeters and spectrophotometers for such determinations.

The progress made in the medical sciences in genetics, immunology, endocrinology, and antibiotics during the second half of the twentieth century obscures at times the progress that was made in basic and necessary biochemical knowledge during the first half. Methods capable of giving accurate quantitative chemical information on biological material had to be painstakingly devised; basic questions on chemical behavior and metabolism had to be answered; and, finally, those factors that adversely modified the normal chemical reactions in the body so that abnormal conditions arise that we characterize as disease states had to be identified.

Viewed in retrospect, he combined in one scientific lifetime (1) basic contributions to the chemistry of body constituents and their chemical behavior in the body, (2) a chemical understanding of physiological functions of certain organ systems (notably the respiratory and renal), and (3) how such information could be exploited in the understanding and treatment of disease. That outstanding additions to knowledge in all three categories were possible was in large measure due to his sound and broadly based chemical preparation, his ingenuity in devising means of accurate measurements of chemical constituents, and the opportunity given him at the Hospital of the Rockefeller Institute to study disease in company with physicians.

In addition, he found time to work collaboratively with Dr. John P. Peters of Yale on the classic, two-volume Quantitative Clinical Chemistry. In 1922, John P. Peters, who had just gone to Yale from Van Slyke's laboratory as an Associate Professor of Medicine, was asked by a publisher to write a modest handbook for clinicians describing useful chemical methods and discussing their application to clinical problems. It was originally to be called “Quantitative Chemistry in Clinical Medicine.” He soon found that it was going to be a bigger job than he could handle alone and asked Van Slyke to join him in writing it. Van agreed, and the two men proceeded to draw up an outline and divide up the writing of the first drafts of the chapters between them. They also agreed to exchange each chapter until it met the satisfaction of both. At the time it was published in 1931, it contained practically all that could be stated with confidence about those aspects of disease that could be and had been studied by chemical means. It was widely accepted throughout the medical world as the “Bible” of quantitative clinical chemistry, and to this day some of the chapters have not become outdated.

Paul Flory

Paul J. Flory was born in Sterling, Illinois, in 1910. He attended Manchester College, an institution for which he retained an abiding affection. He did his graduate work at Ohio State University, earning his Ph.D. in 1934. He was awarded the Nobel Prize in Chemistry in 1974, largely for his work in the area of the physical chemistry of macromolecules.

Flory worked as a newly minted Ph.D. for the DuPont Company in the Central Research Department with Wallace H. Carothers. This early experience with practical research instilled in Flory a lifelong appreciation for the value of industrial application. His work with the Air Force Office of Scientific Research and his later support for the Industrial Affiliates program at Stanford University demonstrated his belief in the need for theory and practice to work hand-in-hand.

Following the death of Carothers in 1937, Flory joined the University of Cincinnati’s Basic Science Research Laboratory. After the war Flory taught at Cornell University from 1948 until 1957, when he became executive director of the Mellon Institute. In 1961 he joined the chemistry faculty at Stanford, where he would remain until his retirement.

Among the high points of Flory’s years at Stanford were his receipt of the National Medal of Science (1974), the Priestley Award (1974), the J. Willard Gibbs Medal (1973), the Peter Debye Award in Physical Chemistry (1969), and the Charles Goodyear Medal (1968). He also traveled extensively, including working tours to the U.S.S.R. and the People’s Republic of China.

Abraham Savitzky

Abraham Savitzky was born on May 29, 1919, in New York City. He received his bachelor’s degree from the New York State College for Teachers in 1941. After serving in the U.S. Air Force during World War II, he obtained a master’s degree in 1947 and a Ph.D. in 1949 in physical chemistry from Columbia University.

In 1950, after working at Columbia for a year, he began a long career with the Perkin-Elmer Corporation. Savitzky started with Perkin-Elmer as a staff scientist who was chiefly concerned with the design and development of infrared instruments. By 1956 he was named Perkin-Elmer’s new product coordinator for the Instrument Division, and as the years passed, he continued to gain more and more recognition for his work in the company. Most of his work with Perkin-Elmer focused on computer-aided analytical chemistry, data reduction, infrared spectroscopy, time-sharing systems, and computer plotting. He retired from Perkin-Elmer in 1985.

Abraham Savitzky holds seven U.S. patents pertaining to computerization and chemical apparatus. During his long career he presented numerous papers and wrote several manuscripts, including “Smoothing and Differentiation of Data by Simplified Least Squares Procedures.” This paper, a collaborative effort of Savitzky and Marcel J. E. Golay, was published in volume 36 of Analytical Chemistry, July 1964. It is one of the most famous, respected, and heavily cited articles in its field. In recognition of his many significant accomplishments in the fields of analytical chemistry and computer science, Savitzky received the Society for Applied Spectroscopy Award in 1983 and the Williams-Wright Award from the Coblentz Society in 1986.
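As an aside, the smoothing and differentiation described in that 1964 paper survive today in standard numerical libraries. The snippet below is a minimal, hedged illustration using SciPy's implementation on synthetic data; the signal, window length, and polynomial order are invented for the example and are not drawn from the original paper.

```python
# Minimal illustration of Savitzky-Golay smoothing and differentiation
# applied to noisy synthetic data. Parameter choices are arbitrary examples.
import numpy as np
from scipy.signal import savgol_filter

x = np.linspace(0, 2 * np.pi, 200)
noisy = np.sin(x) + np.random.normal(scale=0.15, size=x.size)   # synthetic "spectrum"

smoothed = savgol_filter(noisy, window_length=21, polyorder=3)              # smoothing
derivative = savgol_filter(noisy, 21, 3, deriv=1, delta=x[1] - x[0])        # differentiation

print(f"residual std after smoothing: {np.std(smoothed - np.sin(x)):.3f}")
```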

Samuel Natelson

Samuel Natelson attended the City College of New York and received his B.S. in chemistry in 1928. As a graduate student, Natelson attended New York University, receiving an Sc.M. in 1930 and his Ph.D. in 1931. After receiving his Ph.D., he began his career teaching at Girls Commercial High School. While maintaining his teaching position, Natelson joined the Jewish Hospital of Brooklyn in 1933. Working as a clinical chemist for the Jewish Hospital, Natelson first conceived of the idea of a society by and for clinical chemists. Natelson worked to organize the nine charter members of the American Association of Clinical Chemists, which formally began in 1948. A pioneer in the field of clinical chemistry, Samuel Natelson became a role model for the clinical chemist. Natelson developed the use of microtechniques in clinical chemistry. He also served as a consultant to the National Aeronautics and Space Administration in the 1960s, helping analyze the effect of weightlessness on astronauts' blood. Natelson spent his later career as chair of the biochemistry department at Michael Reese Hospital and as a lecturer at the Illinois Institute of Technology.

Arnold Beckman

Arnold Orville Beckman (April 10, 1900 – May 18, 2004) was an American chemist, inventor, investor, and philanthropist. While a professor at Caltech, he founded Beckman Instruments based on his 1934 invention of the pH meter, a device for measuring acidity, later considered to have “revolutionized the study of chemistry and biology”.[1] He also developed the DU spectrophotometer, “probably the most important instrument ever developed towards the advancement of bioscience”.[2] Beckman funded the first transistor company, thus giving rise to Silicon Valley.[3]

He earned his bachelor's degree in chemical engineering in 1922 and his master's degree in physical chemistry in 1923. For his master's degree he studied the thermodynamics of aqueous ammonia solutions, a subject introduced to him by T. A. White. Beckman decided to go to Caltech for his doctorate. He stayed there for a year before returning to New York to be near his fiancée, Mabel. He found a job with Western Electric's engineering department, the precursor to the Bell Telephone Laboratories. Working with Walter A. Shewhart, Beckman developed quality control programs for the manufacture of vacuum tubes and learned about circuit design. It was here that Beckman discovered his interest in electronics.

In 1926 the couple moved back to California and Beckman resumed his studies at Caltech. He became interested in ultraviolet photolysis and worked with his doctoral advisor, Roscoe G. Dickinson, on an instrument to find the energy of ultraviolet light. It worked by shining the ultraviolet light onto a thermocouple, converting the incident heat into electricity, which drove a galvanometer. After receiving a Ph.D. in photochemistry in 1928 for this application of quantum theory to chemical reactions, Beckman was asked to stay on at Caltech as an instructor and then as a professor. Linus Pauling, another of Roscoe G. Dickinson’s graduate students, was also asked to stay on at Caltech.

During his time at Caltech, Beckman was active in teaching at both the introductory and advanced graduate levels. Beckman shared his expertise in glass-blowing by teaching classes in the machine shop. He also taught classes in the design and use of research instruments. Beckman dealt first-hand with the chemists’ need for good instrumentation as manager of the chemistry department’s instrument shop. Beckman’s interest in electronics made him very popular within the chemistry department at Caltech, as he was very skilled in building measuring instruments.

Over the time that he was at Caltech, the focus of the department moved increasingly toward pure science and away from chemical engineering and applied chemistry. Arthur Amos Noyes, head of the chemistry division, encouraged both Beckman and chemical engineer William Lacey to stay in contact with real-world engineers and chemists, and Robert Andrews Millikan, Caltech's president, referred technical questions from government and business to Beckman.

Sunkist Growers was having problems with its manufacturing process. Lemons that were not saleable as produce were made into pectin or citric acid, with sulfur dioxide used as a preservative. Sunkist needed to know the acidity of the product at any given time. Glen Joseph, a chemist at Sunkist, was attempting to measure the hydrogen-ion concentration in lemon juice electrochemically, but sulfur dioxide damaged hydrogen electrodes, and non-reactive glass electrodes produced weak signals and were fragile.

Joseph approached Beckman, who proposed that instead of trying to increase the sensitivity of his measurements, he amplify his results. Beckman, familiar with glassblowing, electricity, and chemistry, suggested a design for a vacuum-tube amplifier and ended up building a working apparatus for Joseph. The glass electrode used to measure pH was placed in a grid circuit in the vacuum tube, producing an amplified signal which could then be read by an electronic meter. The prototype was so useful that Joseph requested a second unit.
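As background, stated here for context rather than drawn from the source: an ideal glass electrode develops a potential that follows the Nernst relation,

\[ E \;=\; E^{0} - \frac{2.303\,RT}{F}\,\mathrm{pH} \;\approx\; E^{0} - 59\ \mathrm{mV}\times\mathrm{pH} \quad (25\ ^{\circ}\mathrm{C}) \]

but the glass membrane's very high electrical resistance means that almost no current can be drawn from it. That is why a conventional meter saw only a "weak signal", and why placing the electrode in the grid circuit of a vacuum tube, which demands essentially no current, solved the problem.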

Beckman saw an opportunity and, rethinking the project, decided to create a complete chemical instrument which could be easily transported and used by nonspecialists. By October 1934, he had registered patent application U.S. Patent No. 2,058,761 for his “acidimeter”, later renamed the pH meter. Although it was priced at a steep $195, roughly the starting monthly wage for a chemistry professor at that time, it was significantly cheaper than the estimated cost of building a comparable instrument from individual components, about $500. The original pH meter weighed in at nearly 7 kg, but was a substantial improvement over a benchful of delicate equipment. The earliest meter had a design glitch, in that the pH readings changed with the depth of immersion of the electrodes, but Beckman fixed the problem by sealing the glass bulb of the electrode. By 11 May 1939, sales were successful enough that Beckman left Caltech to become the full-time president of National Technical Laboratories. By 1940, Beckman was able to take out a loan to build his own 12,000-square-foot factory in South Pasadena.

In 1940, the equipment needed to analyze emission spectra in the visible spectrum could cost a laboratory as much as $3,000, a huge amount at that time. There was also growing interest in examining ultraviolet spectra beyond that range. In the same way that he had created a single easy-to-use instrument for measuring pH, Beckman made it a goal to create an easy-to-use instrument for spectrophotometry. Beckman’s research team, led by Howard Cary, developed several models.

The new spectrophotometers used a prism to spread light into its spectrum and a phototube to “read” the spectrum and generate electrical signals, creating a standardized “fingerprint” for the material tested. With Beckman's model D, later known as the DU spectrophotometer, National Technical Laboratories successfully created the first easy-to-use single instrument containing both the optical and electronic components needed for ultraviolet-absorption spectrophotometry. The user could insert a sample, dial up the desired wavelength, and read the amount of absorption at that wavelength from a simple meter. It produced accurate absorption spectra in both the ultraviolet and the visible regions of the spectrum with relative ease and repeatable accuracy. The National Bureau of Standards ran tests to certify that the DU's results were accurate and repeatable and recommended its use.
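The quantity such an instrument reads out is governed by the Beer-Lambert law, stated here for context rather than quoted from the source:

\[ A \;=\; \log_{10}\frac{I_0}{I} \;=\; \varepsilon\, l\, c \]

where \(I_0\) and \(I\) are the incident and transmitted intensities, \(\varepsilon\) the molar absorptivity at the selected wavelength, \(l\) the path length, and \(c\) the concentration. This proportionality between absorbance and concentration is what made a single dial-and-read measurement quantitative.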

Beckman’s DU spectrophotometer has been referred to as the “Model T” of scientific instruments: “This device forever simplified and streamlined chemical analysis, by allowing researchers to perform a 99.9% accurate biological assessment of a substance within minutes, as opposed to the weeks required previously for results of only 25% accuracy.” Nobel laureate Bruce Merrifield is quoted as calling the DU spectrophotometer “probably the most important instrument ever developed towards the advancement of bioscience.”

Development of the spectrophotometer also had direct relevance to the war effort. The role of vitamins in health was being studied, and scientists wanted to identify Vitamin A-rich foods to keep soldiers healthy. Previous methods involved feeding rats for several weeks, then performing a biopsy to estimate Vitamin A levels. The DU spectrophotometer yielded better results in a matter of minutes. The DU spectrophotometer was also an important tool for scientists studying and producing the new wonder drug penicillin. By the end of the war, American pharmaceutical companies were producing 650 billion units of penicillin each month. Much of the work done in this area during World War II was kept secret until after the war.

Beckman also developed the infrared spectrophotometer, first the IR-1; then, in 1953, he redesigned the instrument. The result was the IR-4, which could be operated using either a single or a double beam of infrared light. This allowed a user to take both the reference measurement and the sample measurement at the same time.

Beckman Coulter Inc., is an American company that makes biomedical laboratory instruments. Founded by Caltech professor Arnold O. Beckman in 1935 as National Technical Laboratories to commercialize a pH meter that he had invented, the company eventually grew to employ over 10,000 people, with $2.4 billion in annual sales by 2004. Its current headquarters are in Brea, California.

In the 1940s, Beckman changed the name to Arnold O. Beckman, Inc. to sell oxygen analyzers, the Helipot precision potentiometer, and spectrophotometers. In the 1950s, the company name changed to Beckman Instruments, Inc.

Beckman was contacted by Paul Rosenberg. Rosenberg worked at MIT's Radiation Laboratory. The lab was part of a secret network of research institutions in both the United States and Britain that were working to develop radar, “radio detecting and ranging”. The project was interested in Beckman because of the high quality of the tuning knobs, or “potentiometers”, used on his pH meters. Beckman had trademarked the design of the pH meter knobs under the name “helipot”, for “helical potentiometer”. Rosenberg had found that the helipot was more precise, by a factor of ten, than other potentiometers. He redesigned the knob to have a continuous groove, so that the contact could not be jarred loose.

Beckman instruments were also used by the Manhattan Project to measure radiation in gas-filled, electrically charged ionization chambers in nuclear reactors. The pH meter was adapted to do the job with a relatively minor adjustment – substituting an input-load resistor for the glass electrode. As a result, Beckman Instruments developed a new product, the micro-ammeter.

After the war, Beckman developed oxygen analyzers that were used to monitor conditions in incubators for premature babies. Doctors at Johns Hopkins University used them to determine recommendations for healthy oxygen levels for incubators.

Beckman himself was approached by California governor Goodwin Knight to head a Special Committee on Air Pollution and propose ways to combat smog. At the end of 1953, the committee made its findings public. The “Beckman Bible” advised key steps to be taken immediately.

In 1955, Beckman established the seminal Shockley Semiconductor Laboratory as a division of Beckman Instruments to begin commercializing the semiconductor transistor technology invented by Caltech alumnus William Shockley. The Shockley Laboratory was established in nearby Mountain View, California, and thus, “Silicon Valley” was born.

Beckman also saw that computers and automation offered a myriad of opportunities for integration into instruments, and the development of new instruments.

The Arnold and Mabel Beckman Foundation was incorporated in September 1977.  At the time of Beckman’s death, the Foundation had given more than 400 million dollars to a variety of charities and organizations. In 1990, it was considered one of the top ten foundations in California, based on annual gifts. Donations chiefly went to scientists and scientific causes as well as Beckman’s alma maters. He is quoted as saying, “I accumulated my wealth by selling instruments to scientists,… so I thought it would be appropriate to make contributions to science, and that’s been my number one guideline for charity.”

Wallace H. Coulter

Engineer, Inventor, Entrepreneur, Visionary

Wallace Henry Coulter was an engineer, inventor, entrepreneur and visionary. He was co-founder and Chairman of Coulter® Corporation, a worldwide medical diagnostics company headquartered in Miami, Florida. The two great passions of his life were applying engineering principles to scientific research, and embracing the diversity of world cultures. The first passion led him to invent the Coulter Principle™, the reference method for counting and sizing microscopic particles suspended in a fluid.

This invention served as the cornerstone for automating the labor-intensive process of counting and testing blood. With his vision and tenacity, Wallace Coulter was a founding father in the field of laboratory hematology, the science and study of blood. His global viewpoint and passion for world cultures inspired him to establish over twenty international subsidiaries. He recognized that it was imperative to employ locally based staff to service his customers before this became standard business strategy.
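To illustrate what the Coulter Principle measures, here is a toy sketch, not Coulter's circuitry: particles transiting a small aperture displace conducting electrolyte and produce pulses whose number gives the count and whose height tracks particle volume. The trace, baseline, and threshold below are invented for the example.

```python
# Toy illustration of Coulter-style counting and sizing: each particle passing
# the aperture produces a pulse; pulse height is a proxy for particle volume.
# The "signal" below is fabricated for demonstration purposes.

baseline = 0.02
signal = [0.02, 0.03, 0.02, 0.45, 0.50, 0.03, 0.02, 0.80, 0.85, 0.79,
          0.03, 0.02, 0.02, 0.40, 0.44, 0.02, 0.03]
threshold = 0.10                      # discriminator level above baseline

pulses = []
current_peak = 0.0
in_pulse = False
for v in signal:
    if v > threshold:
        in_pulse = True
        current_peak = max(current_peak, v)     # track the pulse peak
    elif in_pulse:
        pulses.append(current_peak - baseline)  # pulse ended; record its height
        in_pulse = False
        current_peak = 0.0

print(f"count = {len(pulses)}, relative pulse heights = {[round(p, 2) for p in pulses]}")
# -> count = 3; the tallest pulse corresponds to the largest particle
```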

Wallace’s first attempts to patent his invention were turned away by more than one attorney who believed “you cannot patent a hole”. Persistent as always, Wallace finally applied for his first patent in 1949 and it was issued on October 20, 1953. That same year, two prototypes were sent to the National Institutes of Health for evaluation. Shortly after, the NIH published its findings in two key papers, citing improved accuracy and convenience of the Coulter method of counting blood cells. That same year, Wallace publicly disclosed his invention in his one and only technical paper at the National Electronics Conference, “High Speed Automatic Blood Cell Counter and Cell Size Analyzer”.

Leonard Skeggs was the inventor of the first continuous flow analyser, way back in 1957. This groundbreaking event completely changed the way that chemistry was carried out. Many of the laborious tests that dominated lab work could be automated, increasing productivity and freeing personnel for other, more challenging tasks.

Continuous flow analysis and its offshoots and descendants are an integral part of modern chemistry. It might therefore be some consolation to Leonard Skeggs to know that not only was he the beneficiary of an appellation with a long and fascinating history, he also created a revolution in wet chemistry that is still with us today.

Technicon

The AutoAnalyzer is an automated analyzer using a flow technique called continuous flow analysis (CFA), first made by the Technicon Corporation. The instrument was invented in 1957 by Leonard Skeggs, PhD, and commercialized by Jack Whitehead's Technicon Corporation. The first applications were for clinical analysis, but methods for industrial analysis soon followed. The design is based on segmenting a continuously flowing stream with air bubbles.

In continuous flow analysis (CFA) a continuous stream of material is divided by air bubbles into discrete segments in which chemical reactions occur. The continuous stream of liquid samples and reagents is combined and transported in tubing and mixing coils. The tubing passes the samples from one apparatus to the next, with each apparatus performing a different function, such as distillation, dialysis, extraction, ion exchange, heating, incubation, and subsequent recording of a signal. An essential principle of the system is the introduction of air bubbles. The air bubbles segment each sample into discrete packets and act as a barrier between packets to prevent cross-contamination as they travel down the length of the tubing. The air bubbles also assist mixing by creating turbulent flow (bolus flow), and provide operators with a quick and easy check of the flow characteristics of the liquid. Samples and standards are treated in an exactly identical manner as they travel the length of the tubing, eliminating the necessity of a steady-state signal. However, since the presence of bubbles creates an almost square-wave profile, bringing the system to steady state does not significantly decrease throughput (third-generation CFA analyzers average 90 or more samples per hour) and is desirable, because steady-state signals (chemical equilibrium) are more accurate and reproducible.

A continuous flow analyzer (CFA) consists of different modules, including a sampler, pump, mixing coils, optional sample treatments (dialysis, distillation, heating, etc.), a detector, and a data generator. Most continuous flow analyzers depend on color reactions using a flow-through photometer; however, methods have also been developed that use ion-selective electrodes (ISE), flame photometry, ICAP, fluorometry, and so forth.

Flow injection analysis (FIA) was introduced in 1975 by Ruzicka and Hansen.
Jaromir (Jarda) Ruzicka is a Professor of Chemistry (Emeritus at the University of Washington and Affiliate at the University of Hawaii) and a member of the Danish Academy of Technical Sciences. Born in Prague in 1934, he graduated from the Department of Analytical Chemistry, Faculty of Sciences, Charles University. In 1968, when the Soviets occupied Czechoslovakia, he emigrated to Denmark. There, he joined the Technical University of Denmark, where, ten years later, he received a newly created Chair in Analytical Chemistry. When Jarda met Elo Hansen, they invented Flow Injection.

The first generation of FIA technology, termed flow injection (FI), was inspired by the AutoAnalyzer technique invented by Skeggs in the early 1950s. While Skeggs' AutoAnalyzer uses air segmentation to separate a flowing stream into numerous discrete segments to establish a long train of individual samples moving through a flow channel, FIA systems separate each sample from the subsequent sample with a carrier reagent. While the AutoAnalyzer mixes sample homogeneously with reagents, in all FIA techniques sample and reagents are merged to form a concentration gradient that yields the analysis results.

Arthur Karmen.

Dr. Karmen was born in New York City in 1930. He graduated from the Bronx High School of Science in 1946 and earned an A.B. and M.D. in 1950 and 1954, respectively, from New York University. In 1952, while a medical student working on a summer project at Memorial Sloan-Kettering, he used paper chromatography of amino acids to demonstrate the presence of glutamic-oxaloacetic and glutamic-pyruvic transaminases (aspartate and alanine aminotransferases) in serum and blood. In 1954, he devised the spectrophotometric method for measuring aspartate aminotransferase in serum, which, with minor modifications, is still used for diagnostic testing today. When developing this assay, he studied the reaction of NADH with serum and demonstrated the presence of lactate and malate dehydrogenases, both of which were also later used in diagnosis. Using the spectrophotometric method, he found that aspartate aminotransferase increased in the period immediately after an acute myocardial infarction and did the pilot studies that showed its diagnostic utility in heart and liver diseases. This became as important as the EKG. It was replaced in cardiology usage by the MB isoenzyme of creatine kinase, which was driven by Burton Sobel's work on infarct size, and later by the troponins.
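A hedged sketch of the arithmetic behind NADH-linked spectrophotometric assays of this kind (illustrative numbers, not Karmen's original protocol): enzyme activity is derived from the rate of disappearance of NADH, followed as the fall in absorbance at 340 nm.

```python
# Converting an observed absorbance change at 340 nm into enzyme activity (U/L)
# for an NADH-coupled assay. All numerical values are illustrative only.

EPSILON_NADH = 6220.0     # L mol^-1 cm^-1, molar absorptivity of NADH at 340 nm
PATH_CM = 1.0             # cuvette path length

def activity_u_per_l(delta_a_per_min: float, total_vol_ml: float, sample_vol_ml: float) -> float:
    """International units per litre of sample: micromoles of NADH consumed per
    minute, referred back to one litre of the original serum."""
    rate_mol_per_l_min = delta_a_per_min / (EPSILON_NADH * PATH_CM)   # rate in the cuvette
    rate_umol_per_l_min = rate_mol_per_l_min * 1e6                    # mol -> micromol
    return rate_umol_per_l_min * (total_vol_ml / sample_vol_ml)       # correct for dilution

# Example: absorbance falls by 0.050 per minute in a 3.0 mL reaction containing 0.2 mL serum
print(f"{activity_u_per_l(0.050, 3.0, 0.2):.0f} U/L")   # -> about 121 U/L
```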

History of Laboratory Medicine at Yale University.

The roots of the Department of Laboratory Medicine at Yale can be traced back to John Peters, the head of what he called the “Chemical Division” of the Department of Internal Medicine, subsequently known as the Section of Metabolism, who co-authored with Donald Van Slyke the landmark 1931 textbook Quantitative Clinical Chemistry (2,3); and to Pauline Hald, research collaborator of Dr. Peters, who subsequently served as Director of Clinical Chemistry at Yale-New Haven Hospital for many years. In 1947, Miss Hald reported the very first flame photometric measurements of sodium and potassium in serum (4). This study helped to lay the foundation for modern studies of metabolism and their application to clinical care.

The Laboratory Medicine program at Yale had its inception in 1958 as a section of Internal Medicine under the leadership of David Seligson. In 1965, Laboratory Medicine achieved autonomous section status and in 1971, became a full-fledged academic department. Dr. Seligson, who served as the first Chair, pioneered modern automation and computerized data processing in the clinical laboratory. In particular, he demonstrated the feasibility of discrete sample handling for automation that is now the basis of virtually all automated chemistry analyzers. In addition, Seligson and Zetner demonstrated the first clinical use of atomic absorption spectrophotometry. He was one of the founding members of the major Laboratory Medicine academic society, the Academy of Clinical Laboratory Physicians and Scientists.

Nathan Gochman.  Developer of Automated Chemistries.

Nathan Gochman, PhD, has over 40 years of experience in the clinical diagnostics industry. This includes academic teaching and research, and 30 years in the pharmaceutical and in vitro diagnostics industry. He has managed R & D, technical marketing and technical support departments. As a leader in the industry he was President of the American Association for Clinical Chemistry (AACC) and the National Committee for Clinical Laboratory Standards (NCCLS, now CLSI). He is currently a Consultant to investment firms and IVD companies.

William Sunderman

William Sunderman was a doctor and scientist who lived a remarkable century and beyond, making medical advances, playing his Stradivarius violin at Carnegie Hall at 99, and being honored as the nation’s oldest worker at 100.

He developed a method for measuring glucose in the blood, the Sunderman Sugar Tube, and was one of the first doctors to use insulin to bring a patient out of a diabetic coma. He established quality-control techniques for medical laboratories that ended the wide variation in the results of laboratories doing the same tests.

He taught at several medical schools and founded and edited the journal Annals of Clinical and Laboratory Science. In World War II, he was a medical director for the Manhattan Project, which developed the atomic bomb.

Dr. Sunderman was president of the American Society of Clinical Pathologists and a founding governor of the College of American Pathologists. He also helped organize the Association of Clinical Scientists and was its first president.

Yale Department of Laboratory Medicine

The roots of the Department of Laboratory Medicine at Yale can be traced back to John Peters, the head of what he called the “Chemical Division” of the Department of Internal Medicine, subsequently known as the Section of Metabolism, who co-authored with Donald Van Slyke the landmark 1931 textbook Quantitative Clinical Chemistry; and to Pauline Hald, research collaborator of Dr. Peters who subsequently served as Director of Clinical Chemistry at Yale-New Haven Hospital for many years. In 1947, Miss Hald reported the very first flame photometric measurements of sodium and potassium in serum. This study helped to lay the foundation for modern studies of metabolism and their application to clinical care.

The Laboratory Medicine program at Yale had its inception in 1958 as a section of Internal Medicine under the leadership of David Seligson. In 1965, Laboratory Medicine achieved autonomous section status and in 1971, became a full-fledged academic department. Dr. Seligson, who served as the first Chair, pioneered modern automation and computerized data processing in the clinical laboratory. In particular, he demonstrated the feasibility of discrete sample handling for automation that is now the basis of virtually all automated chemistry analyzers. In addition, Seligson and Zetner demonstrated the first clinical use of atomic absorption spectrophotometry. He was one of the founding members of the major Laboratory Medicine academic society, the Academy of Clinical Laboratory Physicians and Scientists.

The discipline of clinical chemistry and the broader field of laboratory medicine, as they are practiced today, are attributed in no small part to Seligson’s vision and creativity.

Born in Philadelphia in 1916, Seligson graduated from the University of Maryland and received a D.Sc. from Johns Hopkins University and an M.D. from the University of Utah. In 1953, he served as a captain in the U.S. Army and as chief of the Hepatic and Metabolic Disease Laboratory at Walter Reed Army Medical Center.

Recruited to Yale and Grace-New Haven Hospital in 1958 from the University of Pennsylvania as professor of internal medicine at the medical school and the first director of clinical laboratories at the hospital, Seligson subsequently established the infrastructure of the Department of Laboratory Medicine, creating divisions of clinical chemistry, microbiology, transfusion medicine (blood banking) and hematology – each with its own strong clinical, teaching and research programs.

Challenging the continuous flow approach, Seligson designed, built and validated “discrete sample handling” instruments wherein each sample was treated independently, which allowed better choice of methods and greater efficiency. Today continuous flow has essentially disappeared and virtually all modern automated clinical laboratory instruments are based upon discrete sample handling technology.

Seligson was one of the early visionaries who recognized the potential for computers in the clinical laboratory. One of the first applications of a digital computer in the clinical laboratory occurred in Seligson’s department at Yale, and shortly thereafter data were being transmitted directly from the laboratory computer to data stations on the patient wards. Now, such laboratory information systems represent the standard of care.

He was also among the first to highlight the clinical importance of test specificity and accuracy, as compared with simple reproducibility. One of his favorite slides showed almost perfectly reproducible results for 10 successive measurements of blood sugar obtained with what was then the most widely used and popular analytical instrument. However, he would note, the answer was wrong; the assay was reproducible but not accurate.
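Seligson’s distinction can be put in numbers: an assay may show an excellent coefficient of variation yet carry a large bias against a reference value. The following is a small illustrative sketch using invented replicate glucose results, not data from the slide he showed.

```python
import numpy as np

# Sketch: a precise but inaccurate assay. Ten replicate "blood sugar" results
# (invented) cluster tightly, yet all sit far from the reference value.
replicates = np.array([132, 131, 133, 132, 132, 131, 133, 132, 132, 131])  # mg/dL
reference = 100.0   # hypothetical reference-method value

cv = replicates.std(ddof=1) / replicates.mean() * 100   # imprecision (reproducibility)
bias = replicates.mean() - reference                     # inaccuracy

print(f"CV = {cv:.2f}%   (excellent reproducibility)")
print(f"Bias = {bias:+.1f} mg/dL   (the answer is still wrong)")
```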

Seligson established one of the nation’s first residency programs focused on laboratory medicine or clinical pathology, and also developed a teaching curriculum in laboratory medicine for medical students. In so doing, he created a model for the modern practice of laboratory medicine in an academic environment, and his trainees spread throughout the country as leaders in the field.

Ernest Cotlove

Ernest Cotlove’s scientific and medical career started at NYU where, after finishing his medical degree in 1943, he pursued studies in renal physiology and chemistry. His outstanding ability to acquire knowledge and conduct innovative investigations earned him an invitation from James Shannon, then Director of the National Heart Institute at NIH. He continued studies of renal physiology and chemistry until 1953, when he became Head of the Clinical Chemistry Laboratories in the new Department of Clinical Pathology being developed by George Z. Williams during the Clinical Center’s construction. Dr. Cotlove seized the opportunity to design and equip the most advanced and functional clinical chemistry facility in the country.

Dr. Cotlove’s career exemplified the progress seen in medical research and technology. He designed the electronic chloridometer that bears his name, in spite of published reports that such an approach was theoretically impossible. He used this innovative skill to develop new instruments and methods at the Clinical Center. Many recognized him as an expert in clinical chemistry, computer programming, systems design for laboratory operations, and automation of analytical instruments.

Effects of Automation on Laboratory Diagnosis

George Z. Williams

There are four primary effects of laboratory automation on the practice of medicine: (1) the range of laboratory support is being greatly extended, to both diagnosis and guidance of therapeutic management; (2) the new feasibility of multiphasic periodic health evaluation promises effective health and manpower conservation in the future; (3) substantially lowered unit cost for laboratory analysis will permit more extensive use of comprehensive laboratory medicine in everyday practice; and (4) there is a real and growing danger of naive acceptance of, and overconfidence in, the reliability and accuracy of automated analysis and computer processing without critical evaluation. Erroneous results can jeopardize the patient’s welfare. Every physician has the responsibility to obtain proof of accuracy and reliability from the laboratories which serve his patients.

Mario Werner

Dr. Werner received his medical degree from the University of Zurich, Switzerland in 1956. After specializing in internal medicine at the University Clinic in Basel, he came to the United States–as a fellow of the Swiss Academy of Medical Sciences–to work at NIH and at the Rockefeller University. From 1964 to 1966, he served as chief of the Central Laboratory at the Klinikum Essen, Ruhr-University, Germany. In 1967, he returned to the US, joining the Division of Clinical Pathology and Laboratory Medicine at the University of California, San Francisco, as an assistant professor. Three years later, he became Associate Professor of Pathology and Laboratory Medicine at Washington University in St. Louis, where he was instrumental in establishing the training program in laboratory medicine. In 1972, he was appointed Professor of Pathology at The George Washington University in Washington, DC.

Norbert Tietz

Professor Norbert W. Tietz received the degree of Doctor of Natural Sciences from the Technical University Stuttgart, Germany, in 1950. In 1954 he immigrated to the United States where he subsequently held positions or appointments at several Chicago area institutions including the Mount Sinai Hospital Medical Center, Chicago Medical School/University of Health Sciences and Rush Medical College.

Professor Tietz is best known as the editor of the Fundamentals of Clinical Chemistry. This book, now in its sixth edition, remains a primary information source for both students and educators in laboratory medicine. It was the first modern textbook that integrated clinical chemistry with the basic sciences and pathophysiology.

Throughout his career, Dr. Tietz taught a range of students from the undergraduate through post-graduate level including (1) medical technology students, (2) medical students, (3) clinical chemistry graduate students, (4) pathology residents, and (5) practicing chemists. For example, in the late 1960’s he began the first master of science degree program in clinical chemistry in the United States at the Chicago Medical School. This program subsequently evolved into one of the first Ph.D. programs in clinical chemistry.

Automation and other recent developments in clinical chemistry.

Griffiths J.

http://www.ncbi.nlm.nih.gov/pubmed/1344702

The decade 1980 to 1990 was the most progressive period in the short, but turbulent, history of clinical chemistry. New techniques and the instrumentation needed to perform assays have opened a chemical Pandora’s box. Multichannel analyzers, the base spectrophotometric key to automated laboratories, have become almost perfect. The extended use of the antigen-monoclonal antibody reaction with increasingly sensitive labels has extended analyte detection routinely into the picomole/liter range. Devices that aid the automation of serum processing and distribution of specimens are emerging. Laboratory computerization has significantly matured, permitting better integration of laboratory instruments, improving communication between laboratory personnel and the patient’s physician, and facilitating the use of expert systems and robotics in the chemistry laboratory.

Automation and Expert Systems in a Core Clinical Chemistry Laboratory
Streitberg, GT, et al.  JALA 2009;14:94–105

Clinical pathology or laboratory medicine has a great influence on clinical decisions, and 60-70% of the most important decisions on admission, discharge, and medication are based on laboratory results [1]. As we learn more about clinical laboratory results and incorporate them in outcome optimization schemes, the laboratory will play a more pivotal role in management of patients and the eventual outcomes [2]. It has been stated that the development of information technology and automation in laboratory medicine has allowed laboratory professionals to keep pace with the growth in workload.

The reasons to automate and the impacts of automation have similarities: these include reduction in errors, increase in productivity, and improvement in safety. Advances in technology in clinical chemistry, including total laboratory automation, call for changes in job responsibilities to include skills in information technology, data management, instrumentation, patient preparation for diagnostic analysis, interpretation of pathology results, dissemination of knowledge and information to patients and other health staff, as well as skills in research.

The clinical laboratory has become so productive, particularly in chemistry and immunology, and its labor, instrument, and reagent costs are so well determined, that today a physician’s medical decisions are 80% determined by the clinical laboratory.  Medical information systems have lagged far behind.  Why is that?  Because the decision for an MIS has historically been based on billing capture.  Moreover, the historical use of chemical profiles was quite good at validating healthy status in an outpatient population, but the profiles became restricted under Diagnostic Related Groups.  Thus, diagnostics came to be considered a “commodity.”  In order to be competitive, a laboratory had to provide “high complexity” tests that were drawn in by a large volume of “moderate complexity” tests.

Read Full Post »

8:00AM 11/13/2014 – 10th Annual Personalized Medicine Conference at the Harvard Medical School, Boston

REAL TIME Coverage of this Conference by Dr. Aviva Lev-Ari, PhD, RN – Director and Founder of LEADERS in PHARMACEUTICAL BUSINESS INTELLIGENCE, Boston http://pharmaceuticalintelligence.com

8:00 A.M. Welcome from Gary Gottlieb, M.D.

Opening Remarks:

Partners HealthCare is the largest healthcare organization in Massachusetts and whose founding members are Brigham and Women’s Hospital and Massachusetts General Hospital. Dr. Gottlieb has long been a supporter of personalized medicine and he will provide his vision on the role of genetics and genomics in healthcare across the many hospitals that are part of Partners HealthCare.

Opening Remarks and Introduction

Scott Weiss, M.D., M.S. @PartnersNews
Scientific Director, Partners HealthCare Personalized Medicine;
Associate Director, Channing Laboratory/
Professor of Medicine, Harvard Medical School 
@harvardmed

Welcome

Engine of innovations

  • lower cost – Accountable care
  • robust IT infrastructure on the Unified Medical Records
  • Lab Molecular Medicine and Biobanks
  • 1. Lab Molecular medicine
  • 2. Biobank
  • 3. Translational Genomics: RNA Sequencing
  • 4. Medical Records integration of coded diagnosis linked to Genomics

BIOBANKS – Samples and contact patients, return actionable procedures

LIFE STYLE SURVEY – supplements the medical record

GENOTYPING and SEQUENCING – less than $50 per sequence, available to researchers / investigators

RECRUITMENT – subject to biobank, own Consents – e-mail patient – consent online consenting — collects 16,000 patients per month – very successful Online Consent

LAB Molecular Medicine – CLIA — genomics testing and clinical care – EGFR identified as a biomarker in cancer; within 3 months a test was available. Best curated medical exon databases: Emory Genetics Lab (EMVClass) and CHOP (BioCreative and MitoMAP and MitoMASTER). Labs are renowned in pharmacogenomics and interpretability.

IT – GeneInsight – IT goal Clinicians empowered by a workflow geneticist assign cases, data entered into knowledge base, case history, GENEINSIGHT Lab — geneticists enter info in a codified way will trigger a report for the Geneticist – adding specific knowledge standardized report enters Medical Record. Available in many Clinics of Partners members.

Example: Management of Patient genetic profiles – Relationships built between the lab and the Clinician

Variety of Tools are in development

GenInsight Team –>> Pathology –>> Sunquest Relationship

The Future

Genetic testing –>> other info (Pathology, Exams, Life Style Survey, Meds, Imaging) — Integrated Medical Record

Clinic of the Future –>> Diagnostics – Genomics data and Variants integrated at the Clinician desk

Gary Gottlieb, M.D. @PartnersNews
President and CEO, Partners HealthCare

Translational Science
Partners 6,000 MDs, MGH – 200 years as Teaching Hospital of HMS, BWH – magnets in HealthCare

2001 – Center for Genomics was started at Partners; 2008 – Genomics and Other Omics, Population Health, PM – Innovations at Partners.

Please Click on Link  Video on 20 years of PartnersHealthcare

Video of Dr. Gottlieb at ECRI conference 2012

Why is personalized medicine  important to Partners?

From Healthcare system to the Specific Human Conditions

  • Lab translate results to therapy
  • Biobank +50,000 specimens links to Medical Records of patients – relevant to Clinician, Genomics to Clinical Applications

Questions from the Podium

  • test results are not yet available online for patients
  • clinicians and liability – delays from Lab to decide a variant needs to be reclassified – alert is triggered. Lab needs time to accumulate knowledge before reporting a change in state.
  • Training Clinicians in above type of IT infrastructure: Labs around the Nations deal with VARIANT RECLASSIFICATION- physician education is a must, Clinicians have access to REFERENCE links.
  • All clinicians accessing this IT infrastructure — are trained. Most are not yet trained
  • Coordination within Countries and Across Nations — Platforms are Group specific – PARTNERS vs the US IT Infrastructure — Genomics access to EMR — from 20% to 70% Nationwide during the Years of the Obama Adm.
  • Shakeout in SW linking Genetic Labs to reach Gold Standard

Click to see Advanced Medical Education Partners Offers

 

– See more at: http://personalizedmedicine.partners.org/Education/Personalized-Medicine-Conference/Program.aspx

@HarvardPMConf

#PMConf

@SachsAssociates

@PartnersNews

@MassGeneral

@HarvardHealth

@harvardmed

@BrighamWomens

Read Full Post »

Landscape of Cardiac Biomarkers for Improved Clinical Utilization

Curator and Author: Larry H Bernstein, MD, FCAP

Curation

This reviewer has been engaged in the development, the application, and the validation of cardiac biomarkers for over 30 years. There has been a nonlinear introduction of new biomarkers in that period, with an explosion of methods discovery and large studies to validate them in concert with clinical trials. The improvement of interventional methods, imaging methods, and the unraveling of patient characteristics associated with emerging cardiovascular disease is both cause for alarm (technology costs) and cause for raised expectations for prevention, risk reduction, and treatment. What is strikingly missing is the kind of data analysis of the population database that could alleviate the burden of physician overload. It is an urgent requirement for the EHR, and it needs to be put in place to facilitate patient care.

Introduction

This is a journey through the current status of biochemical markers in cardiac evaluation. 

In the traditional use of cardiac biomarkers, there is timed blood sampling from the antecubital fossa. This was the case with aspartate aminotransferase (AST, formerly SGOT), creatine kinase (CK) or its isoenzyme MB, and lactic dehydrogenase (or its isoenzyme-1). The time of sampling was based on time to appearance from the time of damage, and the release of the biomarker is a stochastic process. The earliest studies of CK-MB appearance, peak height, and disappearance were by Burton Sobel and associates, were related to measuring the extent of damage, and determined that reperfusion had an effect. A significant reason for using a combination of CK-MB and LD-1 was that a patient who is a late arrival might have a CK-MB on the decline (peak at 18 h) while the LD-1 is rising (peak at 48 h).

The introduction of the troponins was accompanied by serial 4 h measurements, usually for 4 draws (0, 4, 8, 12 h). The computational power of laboratory information systems was limited until recently, so it is somewhat surprising, given what we have seen – in addition to published work in the 1980’s – that this capability is not in use today, when regression and nonparametric classification algorithms are so advanced that they would enable much improved and effective communication to the physician needing the information.
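As a generic illustration of that point (and not a reproduction of the published information-theoretic or GOLDminer models cited below), the sketch that follows fits a logistic regression to two serial CK-MB draws on synthetic data, so that both the level and the serial change contribute to the classification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic illustration on synthetic data (not the published models): classify MI
# vs non-MI from two serial CK-MB draws rather than from a single value and cutoff.
rng = np.random.default_rng(0)
n = 200

mi = rng.integers(0, 2, n)                         # 1 = infarction, 0 = not (simulated)
draw0 = rng.normal(12 + 25 * mi, 6)                # CK-MB at presentation (U/L, invented)
draw1 = draw0 + rng.normal(5 + 20 * mi, 6)         # CK-MB 4 h later: rising in MI

X = np.column_stack([draw0, draw1 - draw0])        # level plus serial change
model = LogisticRegression(max_iter=1000).fit(X, mi)

new_patient = np.array([[18.0, 22.0]])             # modest level but a steep rise
print(f"P(MI) = {model.predict_proba(new_patient)[0, 1]:.2f}")
```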

J Adan, LH Bernstein, J Babb. Can peak CK-MB segregate patients with acute myocardial infarction into different outcome classes? Clin Chem 1985; 31(2):996-997. ICID: 844986.

RA Rudolph, LH Bernstein, J Babb. Information induction for predicting acute myocardial infarction. Clin Chem 1988; 34(10):2031-2038. ICID: 825568.

LH Bernstein, IJ Good, GI Holtzman, ML Deaton, J Babb. Diagnosis of acute myocardial infarction from two measurements of creatine kinase isoenzyme MB with use of nonparametric probability estimation. Clin Chem 1989; 35(3):444-447. ICID: 825570.

L H Bernstein, A Qamar, C McPherson, S Zarich, R Rudolph. Diagnosis of myocardial infarction: integration of serum markers and clinical descriptors using information theory. Clin Chem 1999; 72(1):5-13. ICID: 825618

Vermunt, J.K. & Magidson, J. (2000a). “Latent Class Cluster Analysis”, chapter 3 in J.A. Hagenaars and A.L. McCutcheon (eds.), Advances in Latent Class Analysis. Cambridge University Press.

Vermunt, J.K. & Magidson, J. (2000b). Latent GOLD 2.0 User’s Guide. Belmont, MA: Statistical Innovations Inc.

LH Bernstein, A Qamar, C McPherson, S Zarich. Evaluating a new graphical ordinal logit method (GOLDminer) in the diagnosis of myocardial infarction utilizing clinical features and laboratory data. Yale J Biol Med. 1999; 72(4):259-268. ICID: 825617.

L Bernstein, K Bradley, S Zarich. GOLDmineR: improving models for classifying patients with chest pain.
Yale J Biol Med. 2002; 75(4):183-198. ICID: 825624

SA Haq, M Tavakol, LH Bernstein, J Kneifati-Hayek, M Schlefer, et al. The ACC/ESC Recommendation for 99th Percentile of the Reference Normal Troponin I Overestimates the Risk of an Acute Myocardial Infarction: a novel enhancement in the diagnostic performance of troponins. “6th Scientific Forum on Quality of Care and Outcomes Research in Cardiovascular Disease and Stroke.” Circulation 2005; 111(20):e313-313. ICID: 939931.

LH Bernstein, MY Zions, SA Haq, S Zarich, J Rucinski, B Seamonds, …., John F Heitner. Effect of renal function loss on NT-proBNP level variations. Clin Biochem 2009; 42(10-11):1091-1098. ICID: 937529

SA Haq, M Tavakol, S Silber, L Bernstein, J Kneifati-Hayek, et al. Enhancing the diagnostic performance of troponins in the acute care setting. J Emerg Med 2008; ICID: 937619

Gil David, Larry H Bernstein, Ronald Coifman. Generating Evidence Based Interpretation of Hematology Screens via Anomaly Characterization. OCCJ 2011; 4(1):10-16. ICID: 939928

The use and limitations of high-sensitivity cardiac troponin and natriuretic peptide concentrations in at risk populations

Background: High-sensitivity cardiac troponin (hs-cTn) assays are now available that can detect measurable troponin in significantly more individuals in the general population than conventional assays. The clinical use of these hs-cTn assays depends on the development of proper reference values. However, even with a univariate biomarker for risk and/or severity of ischemic heart disease, a single reference value for the cardiac biomarker does not discriminate the probabilities between 2 or 3 different cardiac disorders, or identify any combination of these, such as heart failure, or renal disease > stage 2 plus acute coronary syndrome. True, the physician has a knowledge of the history and presentation as a guide. Do we know how adequate the information is in a patient who has an atypical presentation? Again, the same problem arises with the use of the natriuretic peptides, but the value of these tests is improved over the previous generation of tests. Let us parse through the components of this diagnostic problem, which is critical for reaching the best decisions under the circumstances.

Issue 1. The use of the clinical information, such as, patient age, gender, past medical history, known medical illness, CHEST PAIN, ECG, medications, are the basis of longstanding clinical practice. These may be sufficient in a patient who presents with acute coronary syndrome and a Q-wave not previously seen, or with ST-elevation, ST-depression, T-wave inversion, or rhythm abnormality. Many patients don’t present that way.

Issue 2. The use of a single ‘decision value’ for the critical situations described leaves us with a yes-no answer. If you use a receiver-operator characteristic curve, all of the patients used to construct the sensitivity/specificity analysis have to be decisively identified. Otherwise, one might just take the median of a very large population, and the median represents the best value for a data set that does not follow a normal distribution. However, the ROC method may inform about an acute event, if that is the purpose, but with a single value for a single variable, it can’t identify a likelihood of an event in the next six months.
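To make the trade-off concrete, here is a small sketch on synthetic troponin values (the cutoffs and distributions are invented) showing how any single decision value fixes one sensitivity/specificity pair and nothing more.

```python
import numpy as np

# Sketch (synthetic data): sensitivity and specificity of a single troponin cutoff,
# and why one decision value yields only a yes-no answer.
rng = np.random.default_rng(1)

event = np.r_[np.ones(100), np.zeros(400)].astype(bool)   # 100 MI, 400 non-MI (simulated)
tn = np.r_[rng.lognormal(3.5, 0.6, 100),                  # troponin, ng/L, MI patients
           rng.lognormal(1.5, 0.7, 400)]                  # non-MI patients (invented)

for cutoff in (10, 26, 50):                                # candidate decision values (ng/L)
    positive = tn >= cutoff
    sens = (positive & event).sum() / event.sum()
    spec = (~positive & ~event).sum() / (~event).sum()
    print(f"cutoff {cutoff:>3} ng/L: sensitivity {sens:.2f}, specificity {spec:.2f}")
```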

Issue 3. There are several quantitative biomarkers that are considerably better than what was available 15 years prior to this discussion. These can be used alone, but preferably in combination, for diagnostic evaluation, for predicting prognosis, and for therapeutic decision-making. What is now available was unimagined 20 years ago, both in test selection and in treatment selection.

Cardiac troponin assays were recently reviewed in Clin Chem by Fred Apple and Amy Seenger. (The State of Cardiac Troponin Assays: Looking Bright and Moving in the Right Direction).

Cardiac troponin assays have evolved substantially over 20 years, owing to the efforts of manufacturers to make them more precise and sensitive. These enhancements have led to high-sensitivity cardiac troponin assays, which ideally would give measurable values above the limit of detection (LoD) for 100% of healthy individuals and demonstrate an imprecision (CV) of ≤10% at the 99th percentile.
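The two criteria just quoted can be checked directly against a reference population. The sketch below uses an entirely hypothetical healthy cohort and an invented limit of detection; it is meant only to show the arithmetic, not any specific assay’s performance.

```python
import numpy as np

# Sketch: checking the two hs-cTn criteria on a hypothetical healthy reference
# population: (1) fraction of results above the limit of detection (LoD),
# (2) imprecision (CV) at the 99th percentile. All numbers are invented.
rng = np.random.default_rng(2)

lod = 1.2                                                  # assay LoD (ng/L, hypothetical)
healthy = rng.lognormal(mean=1.0, sigma=0.6, size=500)     # hs-cTnI in healthy subjects

pct_measurable = (healthy > lod).mean() * 100
p99 = np.percentile(healthy, 99)                           # nonparametric 99th percentile

# Replicates of a pool near the 99th percentile, used to estimate the CV at that level
pool = rng.normal(loc=p99, scale=0.08 * p99, size=20)
cv_at_p99 = pool.std(ddof=1) / pool.mean() * 100

print(f"measurable above LoD: {pct_measurable:.0f}%")
print(f"99th percentile: {p99:.1f} ng/L, CV at that level: {cv_at_p99:.1f}%")
```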

As laboratorians, we wish to comment on the recently published “ACCF 2012 Expert Consensus Document on Practical Clinical Considerations in the Implementation of Troponin Elevations”. Our purpose is to address 8 analytical issues that we believe have the potential to cause confusion and that therefore deserve clarification.

Since the initial publications by the National Academy of Clinical Biochemistry (NACB) in 1999 and by the European Society of Cardiology/American College of Cardiology in 2000, when both organizations endorsed cardiac troponin I (cTnI) or cTnT as the preferred biomarker for the detection of myocardial infarction, numerous other organizations have followed suit and promoted the sole use of cardiac troponin in this clinical application. The American College of Cardiology Foundation (ACCF) 2012 Expert Consensus Document summarizes the recently published 2012 Third Universal Definition of Myocardial Infarction by the Global Task Force, thus providing some practical recommendations on the use and interpretation of cardiac troponin in clinical practice.

This commentator has already expressed the view that there is no ‘silver bullet’, and that the potential for confusion is not going to be resolved soon. The potential for greater accuracy in diagnosis is bolstered by currently available imaging.

Current strength of cardiac biomarker opportunities:

A recent study measured hs-cTnI in 1716 (93%) of the community-based study cohort and 499 (88%) of the healthy reference cohort. Parameters that significantly contributed to higher hs-cTnI concentrations in the healthy reference cohort included age, male sex, systolic blood pressure, and left ventricular mass. Glomerular filtration rate and body mass index were not independently associated with hs-cTnI in the healthy reference cohort. Individuals with diastolic and systolic dysfunction, hypertension, and coronary artery disease (but not impaired renal function) had significantly higher hs-cTnI values than the healthy reference cohort.

The authors concluded that the hs-cTnI assay, used with the aid of echocardiographic imaging in a large, well-characterized community-based cohort, demonstrated hs-cTnI to be remarkably sensitive in the general population, and that there are important sex and age differences among healthy reference individuals. Even though the results have important implications for defining hs-cTnI reference values and identifying disease, the reference value is not presented, and the question remains about how many subjects in the 88% (499) healthy reference cohort had elevated systolic blood pressure or left ventricular hypertrophy (LVH) measured by imaging. Furthermore, while impaired renal function dropped out as an independent predictor of associated hs-cTnI, one would expect it to have a strong association with LVH.

Defining High-Sensitivity Cardiac Troponin Concentrations in the Community.
PM McKie, DM Heublein, CG. Scott, ML Gantzer, …and AS Jaffe.
Depart Med & Lab Med and Pathology, Mayo Clinic and Foundation, Rochester, MN; Siemens Diagnostics, Newark, DE. Clin Chem 2013.

hs-cTnI with NSTEMI

Another study looks at the prognostic performance of an hs-cTnI assay in non-STEMI. High-sensitivity assays for cardiac troponin enable more precise measurement of very low concentrations and improved diagnostic accuracy. However, the prognostic value of these measurements, particularly at low concentrations, is less well defined. (This is the sensitivity vs specificity dilemma raised with regard to the improved hs-cTn assays.) But the value of low measured concentrations is a matter for prognostic evaluation, based on the hypothesis that any cTnI that is measured in serum has leaked from cardiomyocytes. This assay evaluation used the Abbott ARCHITECT. The data were 4695 patients with non–ST-segment elevation acute coronary syndromes (NSTE-ACS) from the EARLY-ACS (Early Glycoprotein IIb/IIIa Inhibition in NSTE-ACS) and SEPIA-ACS1-TIMI 42 (Otamixaban for the Treatment of Patients with NSTE-ACS–Thrombolysis in Myocardial Infarction 42) trials. The primary endpoint was cardiovascular death or new myocardial infarction (MI) at 30 days. Baseline cardiac troponin was categorized at the 99th percentile reference limit (26 ng/L for hs-cTnI; 10 ng/L for cTnT) and at sex-specific 99th percentiles for hs-cTnI.

All patients at baseline had detectable hs-cTnI compared with 94.5% with detectable cTnT. With adjustment for all other elements of the TIMI risk score, patients with hs-cTnI ≥99th percentile had a 3.7-fold higher adjusted risk of cardiovascular death or MI at 30 days relative to patients with hs-cTnI <99th percentile (9.7% vs 3.0%; odds ratio, 3.7; 95% CI, 2.3–5.7; P < 0.001). Similarly, when stratified by categories of hs-cTnI, very low concentrations demonstrated a graded association with cardiovascular death or MI (P-trend < 0.001). Thus, application of this hs-cTnI assay identified a clinically relevant higher risk of recurrent events among patients with NSTE-ACS, even at very low troponin concentrations.

Prognostic Performance of a High-Sensitivity Cardiac Troponin I Assay in Patients with Non–ST-Elevation Acute Coronary Syndrome. EA Bohula May, MP Bonaca, P Jarolim, EM Antman, …and DA Morrow. Clin Chem 2013.

Combination testing with cTnT and a natriuretic peptide

The next study looks at the value of a combination of cTnT and N-terminal pro-B-type natriuretic peptide (NT-proBNP) to predict heart failure risk. Recall that NT-proBNP has been a stand-alone biomarker for CHF. The study was done with the consideration that heart failure (HF) is projected to have the largest increases in incidence over the coming decades. Therefore, would cardiac troponin T (cTnT) measured with a high-sensitivity assay and N-terminal pro-B–type natriuretic peptide (NT-proBNP), biomarkers strongly associated with incident HF, improve HF risk prediction in the Atherosclerosis Risk in Communities (ARIC) study?

Using sex-specific models, we added cTnT and NT-proBNP to age and race (“laboratory report” model) and to the ARIC HF model (includes age, race, systolic blood pressure, antihypertensive medication use, current/former smoking, diabetes, body mass index, prevalent coronary heart disease, and heart rate) in 9868 participants without prevalent HF; area under the receiver operating characteristic curve (AUC), integrated discrimination improvement, net reclassification improvement (NRI), and model fit were described.

Over a mean follow-up of 10.4 years, 970 participants developed incident HF. Adding cTnT and NT-proBNP to the ARIC HF model significantly improved all statistical parameters (AUCs increased by 0.040 and 0.057; the continuous NRIs were 50.7% and 54.7% in women and men, respectively). Interestingly, the simpler laboratory report model was statistically no different than the ARIC HF model.

Troponin T and N-Terminal Pro-B–Type Natriuretic Peptide: A Biomarker Approach to Predict Heart Failure Risk: The Atherosclerosis Risk in Communities Study. V Nambi, X Liu, LE Chambless, JA de Lemos, SS Virani, et al.
Clin Chem 2013.
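For readers unfamiliar with these statistics, the sketch below shows, on simulated data that have nothing to do with the ARIC cohort, how the AUC and the continuous NRI compare a base risk model with one augmented by two biomarkers; the marker names in the comments are stand-ins only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic sketch on simulated data (not the ARIC analysis): compare a base risk
# model with a biomarker-augmented model using the AUC and the continuous NRI.
rng = np.random.default_rng(3)
n = 2000

age = rng.normal(55, 10, n)
tnt = rng.normal(0, 1, n)           # stand-in for (log) cTnT
bnp = rng.normal(0, 1, n)           # stand-in for (log) NT-proBNP
logit = -6 + 0.06 * age + 0.8 * tnt + 0.6 * bnp
hf = rng.random(n) < 1 / (1 + np.exp(-logit))    # simulated incident heart failure

def auc(score, outcome):
    """AUC via the rank (Mann-Whitney) statistic."""
    ranks = np.empty(len(score))
    ranks[score.argsort()] = np.arange(1, len(score) + 1)
    n_pos = outcome.sum()
    return (ranks[outcome].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * (len(score) - n_pos))

X_base = age.reshape(-1, 1)
X_full = np.column_stack([age, tnt, bnp])
p_base = LogisticRegression(max_iter=1000).fit(X_base, hf).predict_proba(X_base)[:, 1]
p_full = LogisticRegression(max_iter=1000).fit(X_full, hf).predict_proba(X_full)[:, 1]

# Continuous NRI = [P(up|event) - P(down|event)] + [P(down|nonevent) - P(up|nonevent)]
up = p_full > p_base
nri = (up[hf].mean() - (~up)[hf].mean()) + ((~up)[~hf].mean() - up[~hf].mean())

print(f"AUC: base {auc(p_base, hf):.3f} -> augmented {auc(p_full, hf):.3f}")
print(f"Continuous NRI: {nri:.2f}")
```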

BCM Researchers Discover Simpler, Improved Biomarkers to Predict Heart Failure As Accurate As Complex Models     Posted by: Anna Ishibashi Sep 17, 2013

Biomarkers for heart failure

Researchers at the Baylor College of Medicine and the Michael E. DeBakey Veterans Affairs hospital discovered two improved biomarkers in the bloodstream that predict who is at higher risk of having heart failure in 10 years. The study was published in the journal Clinical Chemistry.

In the Atherosclerosis Risk in Communities (ARIC) clinical study, researchers measured the blood concentration of troponin T and N-terminal-pro-B-type natriuretic peptide (NT-proBNP) in the models, while also collecting age and race data. The important point taken from the study was that researchers did not find any statistically significant difference in the accuracy of heart failure risk prediction between this simpler test and the traditional, more complex one, which includes information on age, race, systolic blood pressure, antihypertensive medication use, smoking status, diabetes, body-mass index, prevalent coronary heart disease and heart rate.

Troponin T is an indicator of damaged heart muscle and can be detected in low levels even in individuals with no symptoms through this simpler, improved testing method. Similarly, NT-proBNP is a by-product of brain natriuretic peptide (BNP), which is a small neuropeptide hormone that has been shown to be effective in diagnosing congestive heart failure.

The critical issue that we must now address is what lifestyle and drug therapies can prevent the development of heart failure for individuals who are at high risk – according to Dr. Christie Ballantyne, professor of medicine and section chief of cardiology and cardiovascular research at BCM and the Houston Methodist Center for Cardiovascular Disease Prevention.

Although chest pain is widely considered a key symptom in the diagnosis of myocardial infarction (MI), not all patients with MI present with chest pain. This study was done to determine the frequency with which patients with MI present without chest pain and to examine their subsequent management and outcome. A total of 434,877 patients with confirmed MI were enrolled from June 1994 to March 1998 in the National Registry of Myocardial Infarction, which includes 1674 hospitals in the United States. Outcome measures were prevalence of presentation without chest pain; and clinical characteristics, treatment, and mortality among MI patients without chest pain vs those with chest pain.

Of all patients diagnosed as having MI, 142,445 (33%) did not have chest pain on presentation to the hospital. This group of MI patients was, on average, 7 years older than those with chest pain (74.2 vs 66.9 years), with a higher proportion of women (49.0% vs 38.0%) and patients with diabetes mellitus (32.6% vs 25.4%) or prior heart failure (26.4% vs 12.3%). Also, MI patients without chest pain had a longer delay before hospital presentation (mean, 7.9 vs 5.3 hours), were less likely to be diagnosed as having confirmed MI at the time of admission (22.2% vs 50.3%), and were less likely to receive thrombolysis or primary angioplasty (25.3% vs 74.0%), aspirin (60.4% vs 84.5%), β-blockers (28.0% vs 48.0%), or heparin (53.4% vs 83.2%). Myocardial infarction patients without chest pain had a 23.3% in-hospital mortality rate compared with 9.3% among patients with chest pain (adjusted odds ratio for mortality, 2.21 [95% confidence interval, 2.17-2.26]).

We tested the hypotheses that MI patients without chest pain compared with those with chest pain would present later for medical attention, would be less likely to be diagnosed as having acute MI on initial evaluation, and would receive fewer appropriate medical treatments within the first 24 hours. We also evaluated the association between the presence of atypical presenting symptoms and hospital mortality related to MI.

Our results suggest that patients without chest pain on presentation represent a large segment of the MI population and are at increased risk for delays in seeking medical attention, less aggressive treatments, and in-hospital mortality.

Prevalence, Clinical Characteristics, and Mortality Among Patients With Myocardial Infarction Presenting Without Chest Pain. JG Canto, MG Shlipak, WJ Rogers, JA Malmgren, PD Frederick, et al. JAMA 2000; 283(24):3223-3229. http://dx.doi.org/10.1001/jama.283.24.3223

cTnT degraded forms in circulation

This recent study questions whether degraded cTnT forms circulate in the patient’s blood. Separation of cTnT forms by gel filtration chromatography (GFC) was performed in sera from 13 AMI patients to examine cTnT degradation. The GFC eluates were subjected to Western blot analysis with the original antibodies from the Roche immunoassay used to mimic the clinical cTnT assay. GFC analysis of AMI patients’ sera revealed 2 cTnT peaks with retention volumes of 5 and 21 mL. Western blot analysis identified these peaks as cTnT fragments of 29 and 14–18 kDa, respectively. Furthermore, the performance of direct Western blots on standardized serum samples demonstrated a time-dependent degradation pattern of cTnT, with fragments ranging between 14 and 40 kDa. Intact cTnT (40 kDa) was present in only 3 patients within the first 8 h after hospital admission.

Time-Dependent Degradation Pattern of Cardiac Troponin T Following Myocardial Infarction. EPM Cardinaels, AMA Mingels, T van Rooij, PO Collinson, FW Prinzen and MP van Dieijen-Visser. Clin Chem 2013.

Older patients with higher cTnI

One of the problems in the interpretation of cTnI is the relationship of age to the 99th percentile in the elderly. cTnI was measured using a high-sensitivity assay (Abbott Diagnostics) in 814 community-dwelling individuals at both 70 and 75 years of age. The cTnI 99th percentiles were determined separately using nonparametric methods in the total sample, in men and women, and in individuals with and without CVD.

The cTnI 99th percentile at baseline was 55.2 ng/L for the total cohort. Higher 99th percentiles were noted in men (69.3 ng/L) and individuals with CVD (74.5 ng/L). The cTnI 99th percentile in individuals free from CVD at baseline (n = 498) increased by 51% from 38.4 to 58.0 ng/L during the 5-year observation period. Relative increases ranging from 44% to 83% were noted across all subgroups. Male sex [odds ratio, 5.3 (95% CI, 1.5–18.3)], log-transformed N-terminal pro-B-type natriuretic peptide [odds ratio, 1.9 (95% CI, 1.2–3.0)], and left-ventricular mass index [odds ratio, 1.3 (95% CI, 1.1–1.5)] predicted increases in cTnI concentrations from below the 99th percentile (i.e., 38.4 ng/L) at baseline to concentrations above the 99th percentile at the age of 75 years.

cTnI concentration and its 99th percentile threshold depend strongly on the characteristics of the population being assessed. Among elderly community dwellers, higher concentrations were seen in men and individuals with prevalent CVD. Aging contributes to increasing concentrations, given the pronounced changes seen with increasing age across all subgroups. These findings should be taken into consideration when applying cTnI decision thresholds in clinical settings.

KM Eggers, Lars Lind, Per Venge and Bertil Lindahl. Factors Influencing the 99th Percentile of Cardiac Troponin I Evaluated in Community-Dwelling Individuals at 70 and 75 Years of Age. Clin Chem 2013.
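The subgroup-specific thresholds described above come from simple nonparametric percentile estimation. The sketch below repeats that calculation on invented hs-cTnI values, purely to show how the 99th percentile shifts between subgroups; none of the numbers correspond to the study.

```python
import numpy as np

# Sketch (invented values): nonparametric 99th percentiles computed separately by
# subgroup, illustrating why one fixed threshold may not fit an elderly cohort.
rng = np.random.default_rng(4)

men   = rng.lognormal(1.6, 0.8, 400)    # hs-cTnI, ng/L (synthetic)
women = rng.lognormal(1.3, 0.7, 414)
cvd   = rng.lognormal(1.9, 0.8, 300)    # subjects with prevalent CVD (synthetic)

for label, values in [("total", np.r_[men, women]), ("men", men),
                      ("women", women), ("prevalent CVD", cvd)]:
    print(f"{label:>14}: 99th percentile = {np.percentile(values, 99):.1f} ng/L")
```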

Background: Atrial natriuretic peptide (ANP) has antihypertrophic and antifibrotic properties that are relevant to atrial fibrillation (AF) substrates. The −G664C and rs5065 ANP single nucleotide polymorphisms (SNP) have been described in association with clinical phenotypes, including hypertension and left ventricular hypertrophy. A recent study assessed the association of early AF and rs5065 SNPs in low-risk subjects. In a Caucasian population with moderate-to-high cardiovascular risk profile and structural AF, we conducted a case-control study to assess whether the ANP −G664C and rs5065 SNPs associate with nonfamilial structural AF.
Methods: 168 patients with nonfamilial structural AF and 168 age- and sex-matched controls were recruited. The rs5065 and −G664C ANP SNPs were genotyped.
Results: The study population had a moderate-to-high cardiovascular risk profile, with 86% having hypertension, 23% diabetes, 26% previous myocardial infarction, and 23% left ventricular systolic dysfunction. Patients with AF had greater left atrial diameter (44 ± 7 vs. 39 ± 5 mm; P < 0.001) and higher plasma NT-proANP levels (6240 ± 5317 vs. 3649 ± 2946 pmol/mL; P < 0.01). Odds ratios (ORs) for the rs5065 and −G664C gene variants were 1.1 (95% confidence interval [CI], 0.7–1.8; P = 0.71) and 1.2 (95% CI, 0.3–3.2; P = 0.79), respectively, indicating no association with AF. There were no differences in baseline clinical characteristics among carriers and noncarriers of the −664C and rs5065 minor allele variants.
Conclusions: We report lack of association between the rs5065 and −G664C ANP gene SNPs and AF in a Caucasian population of patients with structural AF. Further studies will clarify whether these or other ANP gene variants affect the risk of different subphenotypes of AF driven by distinct pathophysiological mechanisms.

P Francia, A Ricotta, A Frattari, R Stanzione, A Modestino, et al.
Atrial Natriuretic Peptide Single Nucleotide Polymorphisms in Patients with Nonfamilial Structural Atrial Fibrillation.
Clinical Medicine Insights: Cardiology 2013:7 153–159   http://dx.doi.org/10.4137/CMC.S12239  http://www.la-press.com/atrial-natriuretic-peptide-single-nucleotide-polymorphisms-in-patients-article-a3882

Cystatin C and eGFR predict AMI or CVD mortality

BACKGROUND: The estimated glomerular filtration rate (eGFR) independently predicts cardiovascular death or myocardial infarction (MI) and can be estimated by creatinine and cystatin C concentrations. We evaluated 2 different cystatin C assays, alone or combined with creatinine, in patients with acute coronary syndrome.
METHODS: We analyzed plasma cystatin C, measured with assays from Gentian and Roche, and serum creatinine in 16 279 patients from the PLATelet Inhibition and Patient Outcomes (PLATO) trial. We evaluated Pearson correlation and agreement (Bland–Altman) between methods, as well as prognostic value in relation to cardiovascular death or MI during 1 year of follow up by multivariable logistic regression analysis including clinical variables, biomarkers, c-statistics, and relative integrated discrimination improvement (IDI).
RESULTS: Median cystatin C concentrations (interquartile intervals) were 0.83 (0.68–1.01) mg/L (Gentian) and 0.94 (0.80–1.14) mg/L (Roche). Overall correlation was 0.86 (95% CI 0.85–0.86). The level of agreement was within 0.39 mg/L (2 SD) (n = 16 279).
The areas under the curve (AUCs) in the multivariable risk prediction model with cystatin C (Gentian, Roche) or Chronic Kidney Disease Epidemiology Collaboration eGFR (CKD-EPI) added were 0.6914, 0.6913, and 0.6932. Corresponding relative IDI values were 2.96%, 3.86%, and 4.68% (n = 13 050). Addition of eGFR by the combined creatinine–cystatin C equation yielded AUCs of 0.6923 (Gentian) and 0.6924 (Roche) with relative IDI values of 3.54% and 3.24%.
CONCLUSIONS: Despite differences in cystatin C concentrations, overall correlation between the Gentian and Roche assays was good, while agreement was moderate. The combined creatinine–cystatin C equation did not outperform risk prediction by CKD-EPI.
A Åkerblom, L Wallentin, A Larsson, A Siegbahn, et al.
Cystatin C– and Creatinine-Based Estimates of Renal Function and Their Value for Risk Prediction in Patients with Acute Coronary Syndrome: Results from the PLATelet Inhibition and Patient Outcomes (PLATO) Study.
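Method-comparison statistics of the kind used here (Pearson correlation plus Bland-Altman limits of agreement) are straightforward to compute. The sketch below does so for two hypothetical cystatin C assays on simulated paired results; the offset built into the second assay is invented for illustration.

```python
import numpy as np

# Sketch (synthetic paired results): Pearson correlation and Bland-Altman
# limits of agreement for two cystatin C assays measuring the same samples.
rng = np.random.default_rng(5)

true_val = rng.lognormal(-0.1, 0.3, 300)             # "true" cystatin C, mg/L (simulated)
assay_a = true_val + rng.normal(0.00, 0.06, 300)     # hypothetical assay A
assay_b = true_val + rng.normal(0.10, 0.06, 300)     # hypothetical assay B, reads ~0.1 mg/L higher

r = np.corrcoef(assay_a, assay_b)[0, 1]              # Pearson correlation

diff = assay_b - assay_a
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

print(f"Pearson r = {r:.2f}")
print(f"Bland-Altman bias = {bias:.2f} mg/L, limits of agreement {loa[0]:.2f} to {loa[1]:.2f} mg/L")
```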
 

T2Dm has many subphenotypes in the prediabetic phase

For decades, glucose, hemoglobin A1c, insulin, and C peptide have been the laboratory tests of choice to detect and monitor diabetes. However, these tests do not identify individuals at risk for developing type 2 diabetes (T2Dm) (so-called prediabetic individuals and the subphenotypes therein), which would be a prerequisite for individualized prevention. Nor are these parameters suitable to identify T2Dm subphenotypes, a prerequisite for individualized therapeutic interventions. The oral glucose tolerance test (oGTT) is still the only means for the early and reliable identification of people in the prediabetic phase with impaired glucose tolerance (IGT). This procedure, however, is very time-consuming and expensive and is unsuitable as a screening method in a doctor’s office. Hence, there is an urgent need for innovative laboratory tests to simplify the early detection of alterations in glucose metabolism.
The search for diabetic risk genes was the first and most intensively pursued approach for individualized diabetes prevention and treatment. Over the last 20 years cohorts of tens of thousands of people have been analyzed, and more than 70 susceptibility loci associated with T2Dm and related metabolic traits have been identified. But despite extensive replication, no susceptibility loci or combinations of loci have proven suitable for diagnostic purposes.
Why did the genomic studies fail? One reason might be that T2Dm is a polygenic disease, but there is another more important reason. The large diabetes cohorts investigated in these studies were very heterogeneous, consisting of poorly characterized individuals who were usually selected because they had an increase in blood glucose. Subsequently it has become clear that many different subphenotypes already exist in the prediabetic phase.
Metabolomics represents a new potential approach to move the diagnosis of diabetes beyond the application of the classical diabetic laboratory tests.
Rainer Lehmann. Diabetes Subphenotypes and Metabolomics: The Key to Discovering Laboratory Markers for Personalized Medicine?
 

Ca2+/calmodulin-dependent protein kinase II (CaMKII) has recently emerged as a ROS-activated proarrhythmic signal

Background—Atrial fibrillation is a growing public health problem without adequate therapies. Angiotensin II (Ang II) and reactive oxygen species (ROS) are validated risk factors for atrial fibrillation (AF) in patients, but the molecular pathway(s) connecting ROS and AF is unknown. The Ca2+/calmodulin-dependent protein kinase II (CaMKII) has recently emerged as a ROS-activated proarrhythmic signal, so we hypothesized that oxidized CaMKIIδ (ox-CaMKII) could contribute to AF.
Methods and Results—We found ox-CaMKII was increased in atria from AF patients compared to patients in sinus rhythm and from mice infused with Ang II compared with saline. Ang II treated mice had increased susceptibility to AF compared to saline treated WT mice, establishing Ang II as a risk factor for AF in mice. Knock-in mice lacking critical oxidation sites in CaMKIIδ (MM-VV) and mice with myocardial-restricted transgenic over-expression of methionine sulfoxide reductase A (MsrA TG), an enzyme that reduces ox-CaMKII, were resistant to AF induction after Ang II infusion.
 
[Figures not reproduced: RyR and Ca2+ release from the SR; autonomic (ANS) innervation of the heart; regulation of cardiac Ca2+ cycling by the ANS; cardiac contraction.]

Serum levels of MAA differentiated stable CAD from MI. For IgM antibodies to MAA, results were consistent with IgG antibodies to MAA.

Conclusions—Our studies suggest that CaMKII is a molecular signal that couples increased ROS with AF and that therapeutic strategies to decrease ox-CaMKII may prevent or reduce AF.
Key words: atrial fibrillation, calcium/calmodulin-dependent protein kinase II, angiotensin II, reactive oxygen species, arrhythmia (mechanisms)
A Purohit, AG Rokita, X Guan, B Chen, et al.  Oxidized CaMKII Triggers Atrial Fibrillation.  Circulation. Sep 12, 2013;
 

Microparticles (MP)s give clues about vascular endothelial injury

BACKGROUND: Endothelial dysfunction is an early event in the development and progression of a wide range of cardiovascular diseases. Various human studies have identified that measures of endothelial dysfunction may offer prognostic information with respect to vascular events. Microparticles (MPs) are a heterogeneous population of small membrane fragments shed from various cell types. The endothelium is one of the primary targets of circulating MPs, and MPs isolated from blood have been considered biomarkers of vascular injury and inflammation.
CONTENT: This review summarizes current knowledge of the potential functional role of circulating MPs in promoting endothelial dysfunction. Cells exposed to different stimuli such as shear stress, physiological agonists, proapoptotic stimulation, or damage release MPs, which contribute to endothelial dysfunction and the development of cardiovascular diseases. Numerous studies indicate that MPs may trigger endothelial dysfunction by disrupting production of nitric oxide release from vascular endothelial cells and subsequently modifying vascular tone. Circulating MPs affect both proinflammatory and proatherosclerotic processes in endothelial cells. In addition, MPs can promote coagulation and inflammation or alter angiogenesis and apoptosis in endothelial cells.
SUMMARY: MPs play an important role in promoting endothelial dysfunction and may prove to be true biomarkers of disease state and progression.
Fina Lovren and Subodh Verma.  Evolving Role of Microparticles in the Pathophysiology of Endothelial Dysfunction.
 
Outcomes of STEMI and NSTEMI differently predicted by NPs after MI
Patients with increased blood concentrations of natriuretic peptides (NPs) have poor cardiovascular outcomes after myocardial infarction (MI). Data from 41 683 patients with non–ST-segment elevation MI (NSTEMI) and 27 860 patients with ST-segment elevation MI (STEMI) at 309 US hospitals were collected as part of the ACTION Registry®–GWTG™ (Acute Coronary Treatment and Intervention Outcomes Network Registry–Get with the Guidelines) (AR-G) between July 2008 and September 2009.

B-type natriuretic peptide (BNP) or N-terminal pro-BNP (NT-proBNP) was measured in 19 528 (47%) of NSTEMI and 9220 (33%) of STEMI patients. Patients in whom NPs were measured were older and had more comorbidities, including prior heart failure or MI. There was a stepwise increase in the risk of in-hospital mortality with increasing BNP quartiles for both NSTEMI (1.3% vs 3.2% vs 5.8% vs 11.1%) and STEMI (1.9% vs 3.9% vs 8.2% vs 17.9%). The addition of BNP to the AR-G clinical model improved the C statistic from 0.796 to 0.807 (P < 0.001) for NSTEMI and from 0.848 to 0.855 (P = 0.003) for STEMI. The relationship between NPs and mortality was similar in patients without a history of heart failure or cardiogenic shock on presentation and in patients with preserved left ventricular function.

NPs are measured in almost 50% of patients in the US admitted with MI and appear to be used in patients with more comorbidities. Higher NP concentrations were strongly and independently associated with in-hospital mortality in the almost 30 000 patients in whom NPs were assessed, including patients without heart failure.

BM Scirica, MB Kadakia, JA de Lemos, MT Roe, DA Morrow, et al. Association between Natriuretic Peptides and Mortality among Patients Admitted with Myocardial Infarction: A Report from the ACTION Registry®–GWTG™.

Predictive value of processed forms of BNP in circulation

B-type natriuretic peptide (BNP) is secreted in response to pathologic stress from the heart. Its use as a biomarker of heart failure is well known; however, its diagnostic potential in ischemic heart disease is less explored. Recently, it has been reported that processed forms of BNP exist in the circulation. We characterized processed forms of BNP by a newly developed mass spectrometry–based detection method combined with immunocapture using commercial anti-BNP antibodies.

Measurements of processed forms of BNP by this assay were found to be strongly associated with presence of restenosis. Reduced concentrations of the amino-terminal processed peptide BNP(5–32) relative to BNP(3–32) [as the index parameter BNP(5–32)/BNP(3–32) ratio] were seen in patients with restenosis [median (interquartile range) 1.19 (1.11–1.34), n = 22] vs without restenosis [1.43 (1.22–1.61), n = 83; P < 0.001] in a cross-sectional study of 105 patients undergoing follow-up coronary angiography. A sensitivity of 100% to rule out the presence of restenosis was attained at a ratio of 1.52. Processed forms of BNP may serve as viable potential biomarkers to rule out restenosis.

H Fujimoto, T Suzuki, K Aizawa, D Sawaki, J Ishida, et al. Processed B-Type Natriuretic Peptide Is a Biomarker of Postinterventional Restenosis in Ischemic Heart Disease. Clin Chem 2013.

Circulating proteins from patients requiring revascularization

More than a million diagnostic cardiac catheterizations are performed annually in the US for evaluation of coronary artery anatomy and the presence of atherosclerosis. Nearly half of these patients have no significant coronary lesions or do not require mechanical or surgical revascularization. Consequently, the ability to rule out clinically significant coronary artery disease (CAD) using low cost, low risk tests of serum biomarkers in even a small percentage of patients with normal coronary arteries could be highly beneficial. METHODS: Serum from 359 symptomatic subjects referred for catheterization was interrogated for proteins involved in atherogenesis, atherosclerosis, and plaque vulnerability. Coronary angiography classified 150 patients without flow-limiting CAD who did not require percutaneous intervention (PCI) while 209 required coronary revascularization (stents, angioplasty, or coronary artery bypass graft surgery). Continuous variables were compared across the two patient groups for each analyte including calculation of false discovery rate (FDR ≤1%) and Q value (P value for statistical significance adjusted to ≤0.01).

Significant differences were detected in circulating proteins from patients requiring revascularization including increased apolipoprotein B100 (APO-B100), C-reactive protein (CRP), fibrinogen, vascular cell adhesion molecule 1 (VCAM-1), myeloperoxidase (MPO), resistin, osteopontin, interleukin (IL)-1beta, IL-6, IL-10 and N-terminal fragment protein precursor brain natriuretic peptide (NT-pBNP) and decreased apolipoprotein A1 (APO-A1). Biomarker classification signatures comprising up to 5 analytes were identified using a tunable scoring function trained against 239 samples and validated with 120 additional samples. A total of 14 overlapping signatures classified patients without significant coronary disease (38% to 59% specificity) while maintaining 95% sensitivity for patients requiring revascularization. Osteopontin (14 times) and resistin (10 times) were most frequently represented among these diagnostic signatures. The most efficacious protein signature in validation studies comprised osteopontin (OPN), resistin, matrix metalloproteinase 7 (MMP7) and interferon gamma (IFNγ) as a four-marker panel, while the addition of either CRP or adiponectin (ACRP-30) yielded comparable results in five-protein signatures.
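The idea of a tunable scoring function held at 95% sensitivity can be illustrated generically. In the sketch below (synthetic data; the four markers are stand-ins, not the published panel or scoring function), a logistic score is thresholded at whatever cutoff preserves roughly 95% sensitivity for revascularization, and the achievable specificity is read off.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic sketch (synthetic data; not the published panel or scoring function):
# combine four serum markers into one logistic score, hold sensitivity near 95%
# for patients who required revascularization, and read off the specificity.
rng = np.random.default_rng(6)
n = 359

revasc = rng.random(n) < 0.58                                  # ~209/359 required revascularization
markers = rng.normal(0, 1, (n, 4)) + 0.7 * revasc[:, None]     # OPN, resistin, MMP7, IFN-gamma stand-ins

score = LogisticRegression(max_iter=1000).fit(markers, revasc).predict_proba(markers)[:, 1]

cutoff = np.percentile(score[revasc], 5)      # keep ~95% of true positives above the cutoff
test_positive = score >= cutoff

sensitivity = test_positive[revasc].mean()
specificity = (~test_positive)[~revasc].mean()
print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f} at score cutoff {cutoff:.2f}")
```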

Proteins in the serum of CAD patients predominantly reflected

  1. a positive acute-phase inflammatory response, and

  2. alterations in lipid metabolism, transport, peroxidation and accumulation.

There were surprisingly few indicators of growth factor activation or extracellular matrix remodeling in the serum of CAD patients, apart from elevated OPN. These data suggest that many symptomatic patients without significant CAD could be identified by a targeted multiplex serum protein test without cardiac catheterization, thereby eliminating exposure to ionizing radiation and decreasing the economic burden of angiographic testing for these patients.

WA Laframboise, R Dhir, LA Kelly, P Petrosko, JM Krill-Burger, et al. Serum protein profiles predict coronary artery disease in symptomatic patients referred for coronary angiography.
BMC Medicine 12/2012; 10(1):157. http://dx.doi.org/10.1186/1741-7015-10-157
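
A multi-marker panel of the kind described in this study can be sketched as follows. This is not the authors' tunable scoring function: logistic regression over the four named markers is used as a stand-in, the data are simulated, and only the train/validation split sizes and the 95% sensitivity constraint follow the abstract.

```python
# Hedged sketch (simulated data): a four-marker panel classifier with its
# threshold tuned to keep 95% sensitivity for revascularization, loosely
# following the train/validation split above. Logistic regression is a
# stand-in for the study's tunable scoring function.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
y = np.r_[np.zeros(150), np.ones(209)]            # 0 = no flow-limiting CAD, 1 = revascularized
X = rng.normal(size=(359, 4)) + 0.6 * y[:, None]  # columns: OPN, resistin, MMP7, IFN-gamma (simulated)

# 239 training samples, 120 validation samples, as in the abstract
X_tr, X_va, y_tr, y_va = train_test_split(X, y, train_size=239, stratify=y, random_state=1)

model = LogisticRegression().fit(X_tr, y_tr)

# Choose the score threshold so ~95% of revascularization cases in training score above it
scores_tr = model.predict_proba(X_tr)[:, 1]
threshold = np.quantile(scores_tr[y_tr == 1], 0.05)

scores_va = model.predict_proba(X_va)[:, 1]
pred = scores_va >= threshold
sensitivity = (pred & (y_va == 1)).mean() / (y_va == 1).mean()
specificity = (~pred & (y_va == 0)).mean() / (y_va == 0).mean()
print(f"validation sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```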

miRNAs in CAD

MicroRNAs are small RNAs that control gene expression. Besides their cell-intrinsic functions, recent studies have reported that microRNAs are released by cultured cells and can be detected in the blood. The aim of this study was to address the regulation of circulating microRNAs in patients with stable coronary artery disease. To determine the regulation of microRNAs, we performed a microRNA profile using RNA isolated from n=8 healthy volunteers and n=8 patients with stable coronary artery disease who received state-of-the-art pharmacological treatment. Interestingly, most of the highly expressed microRNAs that were lower in the blood of patients with coronary artery disease are known to be expressed in endothelial cells (e.g., miR-126 and members of the miR-17~92 cluster). To prospectively confirm these data, we measured selected microRNAs in the plasma of 36 patients with coronary artery disease and 17 healthy volunteers by quantitative PCR. Consistent with the profiling data, circulating levels of miR-126, miR-17, miR-92a, and the inflammation-associated miR-155 were significantly reduced in patients with coronary artery disease compared with healthy controls. Likewise, the smooth muscle-enriched miR-145 was significantly reduced. In contrast, cardiac muscle-enriched microRNAs (miR-133a, miR-208a) tended to be higher in patients with coronary artery disease. These results were validated in a second cohort of 31 patients with documented coronary artery disease and 14 controls. Circulating levels of vascular and inflammation-associated microRNAs are significantly downregulated in patients with coronary artery disease.

S Fichtlscherer, S De Rosa, H Fox, T Schwietz, A Fischer, et al. Circulating microRNAs in patients with coronary artery disease. Circulation Research 09/2010; 107(5):677-84.
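
For readers unfamiliar with how circulating miRNA levels are compared by quantitative PCR, the sketch below shows one common relative-quantification scheme (2^-ΔCt against a normalizer) followed by a nonparametric group comparison. The normalizer and all Ct values are assumptions for illustration; the study's exact normalization procedure is not restated here.

```python
# Minimal sketch (assumed scheme, hypothetical Ct values): relative
# quantification of a circulating miRNA by qPCR (2^-dCt against a
# normalizer) followed by a nonparametric comparison between groups.
import numpy as np
from scipy import stats

def relative_level(ct_target: np.ndarray, ct_normalizer: np.ndarray) -> np.ndarray:
    """2^-(Ct_target - Ct_normalizer); higher values mean more circulating miRNA."""
    return 2.0 ** -(ct_target - ct_normalizer)

# Hypothetical Ct values for miR-126 (a higher Ct means less template)
controls_ct   = np.array([26.1, 25.8, 26.4, 25.9])
cad_ct        = np.array([27.3, 27.9, 27.1, 27.6])
controls_norm = np.full(4, 20.0)   # assumed normalizer (e.g., a spiked-in control)
cad_norm      = np.full(4, 20.0)

controls = relative_level(controls_ct, controls_norm)
cad = relative_level(cad_ct, cad_norm)
p = stats.mannwhitneyu(controls, cad, alternative="two-sided").pvalue
fold = np.median(cad) / np.median(controls)
print(f"median fold change (CAD/control): {fold:.2f}, p = {p:.3f}")
```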

Imaging modalities compared

This review compares the noninvasive anatomical imaging modalities of coronary artery calcium scoring and coronary CT angiography with the functional assessment modality of myocardial perfusion imaging (MPI) in the diagnosis and prognostication of significant CAD in symptomatic patients. A large number of studies investigating this subject are analyzed with a critical look at the evidence, underlining their strengths and limitations. Although the overall findings of the presented studies favor the use of CT-based anatomical imaging modalities over MPI in the diagnosis and prognostication of CAD, the lack of a large number of large-scale, multicenter randomized controlled studies limits the generalizability of this early evidence. Further studies comparing the short- and long-term clinical outcomes and cost-effectiveness of these tests are required to determine their optimal role in the management of symptomatic patients with suspected CAD.

Y Hacioglu, M Gupta, MJ Budoff. Noninvasive anatomical coronary artery imaging versus myocardial perfusion imaging: which confers superior diagnostic and prognostic information?
Journal of Computer Assisted Tomography 34(5):637-44.

Three Dimensional In-Room Imaging (3DCA) in PCI

Introduction: Coronary angiography is a two-dimensional (2D) imaging modality and thus is limited in its ability to represent complex three-dimensional (3D) vascular anatomy. Lesion length, bifurcation angles/lesions, and tortuosity are often inadequately assessed with 2D angiography because of vessel overlap and foreshortening. 3D Rotational Angiography (3DRA) with subsequent reconstruction generates models of the coronary vasculature from which lesion-length measurements and Optimal View Maps (OVM), defining the amount of vessel foreshortening for each gantry angle, can be derived. This study sought to determine whether 3DRA-assisted percutaneous coronary interventions yielded improved procedural results by minimizing foreshortening and optimizing stent selection.
 Rotational angiographic acquisitions were performed and a 3D model was generated from two images greater than 30° apart. An optimal view map identifying the least amount of vessel foreshortening and overlap was derived from the 3D model.
The clinical validation of in-room image-processing tools such as 3DCA and optimal view maps is important, since FDA approval of these tools does not require the presentation of data on clinical experience or impact on clinical outcomes. While the technology of 3DRA and optimal view calculations has been well validated by the work of Chen and colleagues, this study is important in demonstrating how clinical care may be affected [4,5,7]. This study was biased toward minimizing the impact of these tools on clinical decision-making, since the study site, cardiologists, and staff have extensive experience in rotational angiography, 3D modeling and reconstruction, and the impact of foreshortening on the assessment of lesion length and choice of stent size.
3DRA assistance significantly reduced target-vessel foreshortening compared with the operator's choice of working view for PCI (2.99 ± 2.96% vs. 9.48 ± 7.56%, p = 0.0001). The operators concluded that 3DRA recommended a better optimal view for PCI in 14 of 26 (54%) cases. In 9 of 26 (35%) cases, 3DRA assistance facilitated stent positioning, and 3DRA-based imaging prompted stent-length changes in 4 of 26 patients (15%).
MH Eng, PA Hudson, AJ Klein, SYJ Chen, … , JA Garcia. Impact of Three Dimensional In-Room Imaging (3DCA) in the Facilitation of Percutaneous Coronary Interventions. J Cardio Vasc Med 2013; 1: 1-5.
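
The geometric idea behind an optimal view map can be illustrated briefly: for a reconstructed 3D centerline, foreshortening in a given gantry view is the fraction of true vessel length lost when the centerline is projected onto the plane perpendicular to the viewing direction, and the OVM scans candidate gantry angles for the view that minimizes it. The sketch below uses a synthetic centerline and a simplified gantry parameterization; it is not the vendor's or the authors' implementation.

```python
# Hedged sketch (synthetic centerline, assumed gantry convention): percent
# foreshortening of a 3D vessel centerline for a given viewing direction,
# and a brute-force scan over gantry angles to find the least-foreshortened view.
import numpy as np

def foreshortening(centerline: np.ndarray, view_dir: np.ndarray) -> float:
    """Percent loss of apparent length when the centerline is projected
    onto the plane perpendicular to the viewing direction."""
    d = view_dir / np.linalg.norm(view_dir)
    seg = np.diff(centerline, axis=0)             # 3D segment vectors along the centerline
    true_len = np.linalg.norm(seg, axis=1).sum()
    proj = seg - np.outer(seg @ d, d)             # remove the component along the view direction
    proj_len = np.linalg.norm(proj, axis=1).sum()
    return 100.0 * (1.0 - proj_len / true_len)

def gantry_direction(rao_lao_deg: float, cran_caud_deg: float) -> np.ndarray:
    """Simplified two-angle gantry parameterization (convention assumed for illustration)."""
    a, b = np.radians(rao_lao_deg), np.radians(cran_caud_deg)
    return np.array([np.sin(a) * np.cos(b), np.sin(b), np.cos(a) * np.cos(b)])

# Synthetic centerline of a target segment (coordinates in mm)
t = np.linspace(0.0, 1.0, 50)
centerline = np.c_[30.0 * t, 10.0 * np.sin(2.0 * t), 15.0 * t]

# Optimal view map idea: scan candidate gantry angles, keep the view with least foreshortening
angles = [(a, b) for a in range(-60, 61, 5) for b in range(-40, 41, 5)]
best = min(angles, key=lambda ab: foreshortening(centerline, gantry_direction(*ab)))
print("least-foreshortened view (deg):", best,
      f"-> {foreshortening(centerline, gantry_direction(*best)):.2f}% foreshortening")
```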

 

Related References from PharmaceuticalIntelligence.com:

Genomics & Genetics of Cardiovascular Disease Diagnoses: A Literature Survey of AHA’s Circulation Cardiovascular Genetics, 3/2010 – 3/2013
Curators: Aviva Lev-Ari, PhD, RN and Larry H. Bernstein, MD, FCAP
http://pharmaceuticalintelligence.com/2013/03/07/genomics-genet…cs-32010-32013/
http://wp.me/p2kEDv-2Jp

Prognostic Marker Importance of Troponin I in Acute Decompensated Heart Failure (ADHF)
Larry H Bernstein and  Aviva Lev-Ari
http://pharmaceuticalintelligence.com/2013/06/30/troponin-i-in-…-heart-failure
http://wp.me/p2kEDv-41S

A Changing expectation from cardiac biomarkers.
Larry H Bernstein
http://pharmaceuticalintelligence.com/2012/12/25/assessing-card…ith-biomarkers/
http://wp.me/p2kEDv-1DN

Dealing with the Use of the High Sensitivity Troponin (hs cTn) Assays
Larry H Bernstein and Aviva Lev-Ari
http://pharmaceuticalintelligence.com/2013/05/18/dealing-with-t…-hs-ctn-assays/
http://wp.me/p2kEDv-3rN

For Disruption of Calcium Homeostasis in Cardiomyocyte Cells, see

Part VI: Calcium Cycling (ATPase Pump) in Cardiac Gene Therapy: Inhalable Gene Therapy for Pulmonary Arterial Hypertension and Percutaneous Intra-coronary Artery Infusion for Heart Failure: Contributions by Roger J. Hajjar, MD

Aviva Lev-Ari, PhD, RN

http://pharmaceuticalintelligence.com/2013/08/01/calcium-molecule-in-cardiac-gene-therapy-inhalable-gene-therapy-for-pulmonary-arterial-hypertension-and-percutaneous-intra-coronary-artery-infusion-for-heart-failure-contributions-by-roger-j-hajjar/

Part VII: Cardiac Contractility & Myocardium Performance: Ventricular Arrhythmias and Non-ischemic Heart Failure – Therapeutic Implications for Cardiomyocyte Ryanopathy (Calcium Release-related Contractile Dysfunction) and Catecholamine Responses

Justin Pearlman, MD, PhD, FACC, Larry H Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN

http://pharmaceuticalintelligence.com/2013/08/28/cardiac-contractility-myocardium-performance-ventricular-arrhythmias-and-non-ischemic-heart-failure-therapeutic-implications-for-cardiomyocyte-ryanopathy-calcium-release-related-contractile/

Part VIII: Disruption of Calcium Homeostasis: Cardiomyocytes and Vascular Smooth Muscle Cells: The Cardiac and Cardiovascular Calcium Signaling Mechanism

Justin Pearlman, MD, PhD, FACC, Larry H Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN

http://pharmaceuticalintelligence.com/2013/09/12/disruption-of-calcium-homeostasis-cardiomyocytes-and-vascular-smooth-muscle-cells-the-cardiac-and-cardiovascular-calcium-signaling-mechanism/

