The Evolution of Clinical Chemistry in the 20th Century

Curator: Larry H. Bernstein, MD, FCAP

This is a subchapter in the series on developments in diagnostics in the period from 1880 to 1980.

Otto Folin: America’s First Clinical Biochemist

(Extracted from Samuel Meites, AACC History Division; Apr 1996)

Foreword by Wendell T. Caraway, PhD.

The first introduction to Folin comes with the Folin-Wu protein-free filtrate, a technique for removing proteins from whole blood or plasma that resulted in water-clear solutions suitable for the determination of glucose, creatinine, uric acid, non-protein nitrogen, and chloride. The major active ingredient used in the precipitation of protein was sodium tungstate prepared “according to Folin”. Folin-Wu sugar tubes were used for the determination of glucose. From these and subsequent encounters, we learned that Folin was a pioneer in methods for the chemical analysis of blood. The determination of uric acid in serum was the Benedict method, in which protein-free filtrate was mixed with solutions of sodium cyanide and arsenophosphotungstic acid and then heated in a water bath to develop a blue color. A thorough review of the literature revealed that Folin and Denis had published, in 1912, a method for uric acid in which they used sodium carbonate rather than sodium cyanide; this method was modified and largely superseded the “cyanide” method.

Notes from the author.

Modern clinical chemistry began with the application of 20th century quantitative analysis and instrumentation to measure constituents of blood and urine, and relating the values obtained to human health and disease. In the United States, the first impetus propelling this new area of biochemistry was provided by the 1912 papers of Otto Folin.  The only precedent for these stimulating findings was his own earlier and certainly classic papers on the quantitative composition of urine, the laws governing its composition, and studies on the catabolic end products of protein, which led to his ingenious concept of endogenous and exogenous metabolism.  He had already determined blood ammonia in 1902.  This work preceded the entry of Stanley Benedict and Donald Van Slyke into biochemistry.  Once all three of them were active contributors, the future of clinical biochemistry was ensured. Those who would consult the early volumes of the Journal of Biological Chemistry will discover the direction that the work of Otto Folin gave to biochemistry.  This modest, unobtrusive man of Harvard was a powerful stimulus and inspiration to others.

Quantitatively, in the years of his scientific productivity, 1897-1934, Otto Folin published 151 (+ 1) journal articles including a chapter in Abderhalden’s handbook and one in Hammarsten’s Festschrift, but excluding his doctoral dissertation, his published abstracts, and several articles in the proceedings of the Association of Life Insurance Directors of America. He also wrote one monograph on food preservatives and produced five editions of his laboratory manual. He published four articles while studying in Europe (1896-98), 28 while at the McLean Hospital (1900-7), and 119 at Harvard (1908-34). In his banner year of 1912 he published 20 papers. His peak period from 1912-15 included 15 papers, the monograph, and most of the work on the first edition of his laboratory manual.

The quality of Otto Folin’s life’s work relates to its impact on biochemistry, particularly clinical biochemistry.  Otto’s two brilliant collaborators, Willey Denis and Hsien Wu, must be acknowledged.  Without Denis, Otto could not have achieved so rapidly the introduction and popularization of modern blood analysis in the U.S. It would be pointless to conjecture how far Otto would have progressed without this pair.

His work provided the basis of the modern approach to the quantitative analysis of blood and urine through improved methods that reduced the body fluid volume required for analysis. He also applied these methods to metabolic studies on tissues as well as body fluids. Because his interests lay in protein metabolism, his major contributions were directed toward measuring nitrogenous waste or end products. His most dramatic achievement is illustrated by the study of blood nitrogen retention in nephritis and gout.

Folin introduced colorimetry, turbidimetry, and the use of color filters into quantitative clinical biochemistry. He initiated and applied ingeniously conceived reagents and chemical reactions that paved the way for a host of studies by his contemporaries. He introduced the use of phosphomolybdate for detecting phenolic compounds, and phosphotungstate for uric acid.  These, in turn, led to the quantitation of epinephrine and of tyrosine, tryptophan, and cystine in protein. The molybdate suggested to Fiske and SubbaRow the determination of phosphate as phosphomolybdate, and the tungsten led to the use of tungstic acid as a protein precipitant.  Phosphomolybdate became the key reagent in the blood sugar method.  Folin resurrected the abandoned Jaffe reaction and established creatine and creatinine analysis. He also laid the groundwork for the discovery of creatine phosphate. Clinical chemistry owes to him the introduction of Nessler’s reagent, permutit, Lloyd’s reagent, gum ghatti, and preservatives for standards, such as benzoic acid and formaldehyde. Among his distinguished graduate investigators were Bloor, Doisy, Fiske, Shaffer, SubbaRow, Sumner, and Wu.

A Golden Age of Clinical Chemistry: 1948–1960

Louis Rosenfeld
Clinical Chemistry 2000; 46(10): 1705–1714

The 12 years from 1948 to 1960 were notable for introduction of the Vacutainer tube, electrophoresis, radioimmunoassay, and the Auto-Analyzer. Also appearing during this interval were new organizations, publications, programs, and services that established a firm foundation for the professional status of clinical chemists. It was a golden age.

Except for photoelectric colorimeters, the clinical chemistry laboratories in 1948, and in many places even later, were not very different from those of 1925. The basic technology and equipment were essentially unchanged. There was lots of glassware of different kinds: pipettes, burettes, wooden racks of test tubes, funnels, filter paper, cylinders, flasks, and beakers, as well as visual colorimeters, centrifuges, water baths, an exhaust hood for evaporating organic solvents after extractions, a microscope for examining urine sediments, a double-pan analytical beam balance for weighing reagents and standard chemicals, and perhaps a pH meter. The most complicated apparatus was the manually operated Van Slyke volumetric gas device. The emphasis was on classical chemical and biological techniques that did not require instrumentation.

The unparalleled growth and wide-ranging research that began after World War II and have continued into the new century, often aided by government funding for biomedical research and development as civilian health has become a major national goal, have impacted the operations of the clinical chemistry laboratory. The years from 1948 to 1960 were especially notable for the innovative technology that produced better methods for the investigation of many diseases, in many cases leading to better treatment.

AUTOMATION IN CLINICAL CHEMISTRY: CURRENT SUCCESSES AND TRENDS
FOR THE FUTURE
Pierangelo Bonini
Pure & Appl. Chem. 1982; 54(11): 2017–2030

The history of automation in clinical chemistry is the history of how and when technological progress, in the field of analytical methodology as well as in the field of instrumentation, has helped clinical chemists to mechanize their procedures and to control them.

GENERAL STEPS OF A CLINICAL CHEMISTRY PROCEDURE –
1 – PRELIMINARY TREATMENT (DEPROTEINIZATION)
2 – SAMPLE + REAGENT(S)
3 – INCUBATION
4 – READING
5 – CALCULATION
Fig. 1 General steps of a clinical chemistry procedure
Especially in the classic clinical chemistry methods, a preliminary treatment of the sample (in most cases a deproteinization) was an essential step. This was a major constraint on the first tentative steps in automation, and we will see how this problem was faced and which new problems arose from avoiding deproteinization. Mixing samples and reagents is the next step; then there is a more or less long incubation at different temperatures, and finally the reading, which means detection of modifications of some physical property of the mixture. In most cases the development of a colour reveals the reaction but, as is well known, many other possibilities exist; finally the result is calculated.
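The five steps of Fig. 1 can be sketched as a small pipeline. This is purely illustrative: the function names, the mock sample, and the calibration constant are invented for the sketch, not taken from the paper.

```python
CALIBRATION_SLOPE = 0.005  # absorbance units per mg/dL; hypothetical value

def deproteinize(sample):
    """Step 1: preliminary treatment, e.g. tungstic-acid precipitation."""
    return {**sample, "protein_free": True}

def add_reagent(sample, reagent):
    """Step 2: mix the sample with the colour reagent."""
    return {**sample, "reagent": reagent}

def incubate(mixture, minutes=10):
    """Step 3: let the colour develop."""
    return {**mixture, "incubated_min": minutes}

def read_absorbance(mixture):
    """Step 4: photometric reading, assumed linear in the analyte."""
    return CALIBRATION_SLOPE * mixture["analyte_mg_dl"]

def run_procedure(sample, reagent):
    """Steps 1-5 chained; returns the analyte concentration in mg/dL."""
    mixture = incubate(add_reagent(deproteinize(sample), reagent))
    return read_absorbance(mixture) / CALIBRATION_SLOPE  # step 5: calculation

print(run_procedure({"analyte_mg_dl": 90.0}, "phosphomolybdate"))
```

Mechanization, as discussed below, amounts to replacing each of these manual steps with a hardware stage while keeping the same logical sequence.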

Some 25 years ago, Skeggs (1) presented his paper on continuous flow automation, which was the basis of very successful instruments still used all over the world. In continuous flow automation, reactions take place in a hydraulic route common to all samples.

Standards and samples enter the analytical stream segmented by air bubbles
and, as they circulate, specific chemical reactions and physical manipulations
continuously take place in the stream. Finally, after the air bubbles are vented,
the colour intensity, proportional to the solute molecules, is monitored in a
detector flow cell.

It is evident that the most important aim of automation is to process correctly as many samples as possible in as short a time as possible. This result has been obtained thanks to many technological advances, both from the analytical point of view and in instrument technology.

ANALYTICAL METHODOLOGY –
– VERY ACTIVE ENZYMATIC REAGENTS
– SHORTER REACTION TIMES
– KINETIC AND FIXED TIME REACTIONS
– NO NEED OF DEPROTEINIZATION
– SURFACTANTS
– AUTOMATIC SAMPLE BLANK CALCULATION
– POLYCHROMATIC ANALYSIS

The introduction of very active enzymatic reagents for the determination of substrates resulted in shorter reaction times and, in many cases, in the possibility of avoiding deproteinization. Reaction times are also reduced by using kinetic and fixed-time reactions instead of end-point reactions; in this case, the measurement of the sample blank does not need a separate tube with a separate reaction mixture. Deproteinization can also be avoided by using surfactants in the reagent mixture. Automatic calculation of sample blanks is also possible by using polychromatic analysis. As we can see from this list, reduction of reaction times and elimination of tedious operations like deproteinization are the main results of this analytical progress.

Many relevant improvements in mechanics and optics over the last
twenty years and the tremendous advance in electronics have largely
contributed to the instrumental improvement of clinical chemistry automation.

A recent and interesting innovation in the field of centrifugal analyzers is the possibility of adding another reagent to an already mixed sample-reagent solution. This innovation allows a preincubation to be performed and sample blanks to be read before the starter reagent is added.
The possibility of measuring absorbances in cuvettes positioned longitudinally to the light path, realized in a recent model of centrifugal analyzer, is claimed to be advantageous for reading absorbances in non-homogeneous solutions, for avoiding any influence of reagent-volume errors on the absorbance, and for obtaining more suitable calculation factors. Interest in fluorimetric assays is growing steadily, especially in connection with immunofluorimetric drug assays. This technology has recently been applied to centrifugal analyzers as well. A xenon lamp generates a high-energy light beam, reflected by a mirror onto a holographic grating driven by a stepping motor.
The selected wavelength of the exciting light passes through a slit and reaches the rotating cuvettes. Fluorescence is then filtered, read by means of a photomultiplier, and compared to the continuously monitored fluorescence of an appropriate reference compound. In this way, any instability due either to the electro-optical devices or to changes in the physicochemical properties of the solution is corrected.

…more…

Dr. Yellapragada Subbarow – ATP – Energy for Life

One of the observations Dr SubbaRow made while testing the phosphorus method seemed to provide a clue to the mystery of what happens to blood sugar when insulin is administered. Biochemists began investigating the problem when Frederick Banting showed that injections of insulin, the pancreatic hormone, keep blood sugar under control and keep diabetics alive.

SubbaRow worked for 18 months on the problem, often dieting and starving along with animals used in experiments. But the initial observations were finally shown to be neither significant nor unique and the project had to be scrapped in September 1926.

Out of the ashes of this project however arose another project that provided the key to the ancient mystery of muscular contraction. Living organisms resist degeneration and destruction with the help of muscles, and biochemists had long believed that a hypothetical inogen provided the energy required for the flexing of muscles at work.

Two researchers at Cambridge University in the United Kingdom confirmed that lactic acid is formed when muscles contract, and Otto Meyerhof of Germany showed that this lactic acid is a breakdown product of glycogen, the animal starch stored all over the body, particularly in the liver, kidneys and muscles. When Professor Archibald Hill of University College London demonstrated that the conversion of glycogen to lactic acid partly accounts for the heat produced during muscle contraction, everybody assumed that glycogen was the inogen. And the 1922 Nobel Prize for medicine and physiology was divided between Hill and Meyerhof.

But how is glycogen converted to lactic acid? Embden, another German biochemist, advanced the hypothesis that blood sugar and phosphorus combine to form a hexose phosphoric ester which breaks down glycogen in the muscle to lactic acid.

In the midst of the insulin experiments, it occurred to Fiske and SubbaRow that Embden’s hypothesis would be supported if normal persons were found to have more hexose phosphate in their muscle and liver than diabetics. For diabetes is the failure of the body to use sugar. There would be little reaction between sugar and phosphorus in a diabetic body. If Embden was right, hexose (sugar) phosphate level in the muscle and liver of diabetic animals should rise when insulin is injected.

Fiske and SubbaRow rendered some animals diabetic by removing their pancreas in the spring of 1926, but they could not record any rise in the organic phosphorus content of muscles or livers after insulin was administered to the animals. Sugar phosphates were indeed produced in their animals but they were converted so quickly by enzymes to lactic acid that Fiske and SubbaRow could not detect them with methods then available. This was fortunate for science because, in their mistaken belief that Embden was wrong, they began that summer an extensive study of organic phosphorus compounds in the muscle “to repudiate Meyerhof completely”.

The departmental budget was so poor that SubbaRow often waited on the back streets of Harvard Medical School at night to capture cats he needed for the experiments. When he prepared the cat muscles for estimating their phosphorus content, SubbaRow found he could not get a constant reading in the colorimeter. The intensity of the blue colour went on rising for thirty minutes. Was there something in muscle which delayed the colour reaction? If so, the time for full colour development should increase with the quantity of the sample. But the delay was not greater when the sample was 10 c.c. instead of 5 c.c. The only other possibility was that muscle had an organic compound which liberated phosphorus as the reaction in the colorimeter proceeded. This indeed turned out to be the case, though it took a whole year to establish.

The mysterious colour delaying substance was a compound of phosphoric acid and creatine and was named Phosphocreatine. It accounted for two-thirds of the phosphorus in the resting muscle. When they put muscle to work by electric stimulation, the Phosphocreatine level fell and the inorganic phosphorus level rose correspondingly. It completely disappeared when they cut off the blood supply and drove the muscle to the point of “fatigue” by continued electric stimulation. And, presto! It reappeared when the fatigued muscle was allowed a period of rest.

Phosphocreatine created a stir among the scientists present when Fiske unveiled it before the American Society of Biological Chemists at Rochester in April 1927. The Journal of the American Medical Association hailed the discovery in an editorial. The Rockefeller Foundation awarded a fellowship that helped SubbaRow to live comfortably for the first time since his arrival in the United States. All of Harvard Medical School was caught up with an enthusiasm that would be a life-time memory for contemporary students. The students were in awe of the medium-sized, slightly stoop shouldered, “coloured” man regarded as one of the School’s top research workers.

SubbaRow’s carefully conducted series of experiments disproved Meyerhof’s assumptions about the glycogen-lactic acid cycle. His calculations fully accounted for the heat output during muscle contraction, which Hill had not been able to do in terms of Meyerhof’s theory. Clearly the Nobel Committee had acted in haste in awarding the 1922 physiology prize, but the biochemical orthodoxy, led by Meyerhof and Hill themselves, was not eager to give up its belief in glycogen as the prime source of muscular energy.

Fiske and SubbaRow were fully upheld and the Meyerhof-Hill theory finally rejected in 1930, when a Danish physiologist showed that muscles can work to exhaustion without the aid of glycogen or the stimulation of lactic acid.

Fiske and SubbaRow had meanwhile followed a substance that was formed by the combination of phosphorus, liberated from Phosphocreatine, with an unidentified compound in muscle. SubbaRow isolated it and identified it as a chemical in which adenylic acid was linked to two extra molecules of phosphoric acid. By the time he completed the work to the satisfaction of Fiske, it was August 1929 when Harvard Medical School played host to the 13th International Physiological Congress.

ATP was presented to the gathered scientists before the Congress ended. To the dismay of Fiske and SubbaRow, a few days later a German science journal arrived in Boston, published 16 days before the Congress opened. It carried a letter from Karl Lohmann of Meyerhof’s laboratory, saying he had isolated from muscle a compound of adenylic acid linked to two molecules of phosphoric acid!

While Archibald Hill never adjusted himself to the idea that the basis of his Nobel Prize work had been demolished, Otto Meyerhof and his associates had seen the importance of Phosphocreatine discovery and plunged themselves into follow-up studies in competition with Fiske and SubbaRow. Two associates of Hill had in fact stumbled upon Phosphocreatine about the same time as Fiske and SubbaRow but their loyalty to Meyerhof-Hill theory acted as blinkers and their hasty and premature publications reveal their confusion about both the nature and significance of Phosphocreatine.

The discovery of ATP and its significance helped reveal the full story of muscular contraction: Glycogen arriving in muscle gets converted into lactic acid, which is siphoned off to the liver for re-synthesis of glycogen. This cycle yields three molecules of ATP and is important in delivering usable food energy to the muscle. Glycolysis, the breakup of glycogen, is relatively slow in getting started, and in any case muscle can retain ATP only in small quantities. In the interval between the beginning of muscle activity and the arrival of fresh ATP from glycolysis, Phosphocreatine maintains the ATP supply by re-synthesizing it as fast as its energy terminals are used up by muscle for its activity.

Muscular contraction made possible by ATP helps us not only to move our limbs and lift weights but keeps us alive. The heart is after all a muscle pouch and millions of muscle cells embedded in the walls of arteries keep the life-sustaining blood pumped by the heart coursing through body organs. ATP even helps get new life started by powering the sperm’s motion toward the egg as well as the spectacular transformation of the fertilized egg in the womb.

Archibald Hill for long denied any role for ATP in muscle contraction, saying ATP has not been shown to break down in the intact muscle. This objection was also met in 1962 when University of Pennsylvania scientists showed that muscles can contract and relax normally even when glycogen and Phosphocreatine are kept under check with an inhibitor.

Michael Somogyi

Michael Somogyi was born in Reinsdorf, Austria-Hungary, in 1883. He received a degree in chemical engineering from the University of Budapest, and after spending some time there as a graduate assistant in biochemistry, he immigrated to the United States. From 1906 to 1908 he was an assistant in biochemistry at Cornell University.

Returning to his native land in 1908, he became head of the Municipal Laboratory in Budapest, and in 1914 he was granted his Ph.D. After World War I, the politically unstable situation in his homeland led him to return to the United States where he took a job as an instructor in biochemistry at Washington University in St. Louis, Missouri. While there he assisted Philip A. Shaffer and Edward Adelbert Doisy, Sr., a future Nobel Prize recipient, in developing a new method for the preparation of insulin in sufficiently large amounts and of sufficient purity to make it a viable treatment for diabetes. This early work with insulin helped foster Somogyi’s lifelong interest in the treatment and cure of diabetes. He was the first biochemist appointed to the staff of the newly opened Jewish Hospital, and he remained there as the director of their clinical laboratory until his retirement in 1957.

Arterial Blood Gases.  Van Slyke.

The test is used to determine the pH of the blood, the partial pressures of carbon dioxide and oxygen, and the bicarbonate level. Many blood gas analyzers will also report concentrations of lactate, hemoglobin, several electrolytes, oxyhemoglobin, carboxyhemoglobin and methemoglobin. ABG testing is mainly used in pulmonology and critical care medicine to assess gas exchange across the alveolar-capillary membrane.

DONALD DEXTER VAN SLYKE died on May 4, 1971, after a long and productive career that spanned three generations of biochemists and physicians. He left behind not only a bibliography of 317 journal publications and 5 books, but also more than 100 persons who had worked with him and distinguished themselves in biochemistry and academic medicine. His doctoral thesis, with Gomberg at the University of Michigan, was published in the Journal of the American Chemical Society in 1907.  Van Slyke received an invitation from Dr. Simon Flexner, Director of the Rockefeller Institute, to come to New York for an interview. In 1911 he spent a year in Berlin with Emil Fischer, who was then the leading chemist of the scientific world. He was particularly impressed by Fischer’s performing all laboratory operations quantitatively, a procedure Van followed throughout his life. Prior to going to Berlin, he published the classic nitrous acid method for the quantitative determination of primary aliphatic amino groups, the first of the many gasometric procedures devised by Van Slyke, which made possible the determination of amino acids. It was the primary method used to study the amino acid composition of proteins for years before chromatography. Thus, his first seven postdoctoral years were centered around the development of better methodology for protein composition and amino acid metabolism.

With his colleague G. M. Meyer, he first demonstrated that amino acids, liberated during digestion in the intestine, are absorbed into the bloodstream, that they are removed by the tissues, and that the liver alone possesses the ability to convert the amino acid nitrogen into urea.  From the study of the kinetics of urease action, Van Slyke and Cullen developed equations that depended upon two reactions: (1) the combination of enzyme and substrate in stoichiometric proportions and (2) the reaction of the combination into the end products. Published in 1914, this formulation, involving two velocity constants, was similar to that arrived at contemporaneously by Michaelis and Menten in Germany in 1913.
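The two-constant formulation Van Slyke and Cullen arrived at is closely related to what is now written as the Michaelis-Menten rate law. As a minimal numerical sketch (the modern form, not code from the 1914 paper):

```python
def reaction_rate(substrate, v_max, k_m):
    """Initial enzyme reaction rate in the Michaelis-Menten form,
    v = v_max * [S] / (K_m + [S]); the Van Slyke-Cullen treatment of
    urease kinetics (1914) leads to a closely related two-constant law."""
    return v_max * substrate / (k_m + substrate)

# When [S] equals K_m, the rate is exactly half of v_max:
print(reaction_rate(substrate=1.0, v_max=2.0, k_m=1.0))  # 1.0
```

At high substrate concentration the rate saturates toward `v_max`, reflecting the stoichiometric enzyme-substrate combination that both derivations assume.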

He transferred to the Rockefeller Institute’s Hospital in 1914, under Dr. Rufus Cole, where “Men who were studying disease clinically had the right to go as deeply into its fundamental nature as their training allowed, and in the Rockefeller Institute’s Hospital every man who was caring for patients should also be engaged in more fundamental study”.  The study of diabetes was already under way by Dr. F. M. Allen, but patients inevitably died of acidosis.  Van Slyke reasoned that if incomplete oxidation of fatty acids in the body led to the accumulation of acetoacetic and beta-hydroxybutyric acids in the blood, then a reaction would result between these acids and the bicarbonate ions that would lead to a lower-than-normal bicarbonate concentration in blood plasma. The problem thus became one of devising an analytical method that would permit the quantitative determination of bicarbonate concentration in small amounts of blood plasma.  He ingeniously devised a volumetric glass apparatus that was easy to use and required less than ten minutes for the determination of the total carbon dioxide in one cubic centimeter of plasma.  It also was soon found to be an excellent apparatus by which to determine blood oxygen concentrations, thus leading to measurements of the percentage saturation of blood hemoglobin with oxygen. This found extensive application in the study of respiratory diseases, such as pneumonia and tuberculosis. It also led to the quantitative study of cyanosis and a monograph on the subject by C. Lundsgaard and Van Slyke.

In all, Van Slyke and his colleagues published twenty-one papers under the general title “Studies of Acidosis,” beginning in 1917 and ending in 1934. They included not only chemical manifestations of acidosis, but Van Slyke, in No. 17 of the series (1921), elaborated and expanded the subject to describe in chemical terms the normal and abnormal variations in the acid-base balance of the blood. This was a landmark in understanding acid-base balance pathology.  Within seven years after Van moved to the Hospital, he had published a total of fifty-three papers, thirty-three of them coauthored with clinical colleagues.
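Van Slyke’s reasoning, that acids consume plasma bicarbonate and depress it in acidosis, is quantified today by the Henderson-Hasselbalch relation for the bicarbonate buffer system. A sketch using that standard physiology formula (not drawn from the papers above):

```python
import math

def plasma_ph(hco3_mmol_per_l, pco2_mmhg):
    """Henderson-Hasselbalch for the bicarbonate buffer system:
    pH = 6.1 + log10([HCO3-] / (0.03 * pCO2)), with pCO2 in mmHg."""
    return 6.1 + math.log10(hco3_mmol_per_l / (0.03 * pco2_mmhg))

# Typical normal values (24 mmol/L bicarbonate, 40 mmHg pCO2) give ~pH 7.40;
# metabolic acidosis (bicarbonate consumed, e.g. 12 mmol/L) lowers the pH.
print(round(plasma_ph(24.0, 40.0), 2))  # 7.4
print(round(plasma_ph(12.0, 40.0), 2))  # 7.1
```

Measuring total CO2 in a small plasma sample, as Van Slyke’s apparatus did, is exactly what makes the bicarbonate term of this relation clinically accessible.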

In 1920, Van Slyke and his colleagues undertook a comprehensive investigation of gas and electrolyte equilibria in blood. McLean and Henderson at Harvard had made preliminary studies of blood as a physico-chemical system, but realized that Van Slyke and his colleagues at the Rockefeller Hospital had superior techniques and the facilities necessary for such an undertaking. A collaboration thereupon began between the two laboratories, which resulted in rapid progress toward an exact physico-chemical description of the role of hemoglobin in the transport of oxygen and carbon dioxide, of the distribution of diffusible ions and water between erythrocytes and plasma, and of factors such as the degree of oxygenation of hemoglobin and hydrogen ion concentration that modified these distributions. In the course of this work Van Slyke revised his volumetric gas analysis apparatus into a manometric one.  The manometric apparatus proved to give results that were five to ten times more accurate.

A series of papers resulted on the CO2 titration curves of oxy- and deoxyhemoglobin, of oxygenated and reduced whole blood, and of blood subjected to different degrees of oxygenation, and on the distribution of diffusible ions in blood.  These developed equations that predicted the change in distribution of water and diffusible ions between blood plasma and blood cells when there was a change in pH of the oxygenated blood. A significant contribution of Van Slyke and his colleagues was the application of the Gibbs-Donnan law to the blood, regarded as a two-phase system in which one phase (the erythrocytes) contained a high concentration of nondiffusible negative ions, i.e., those associated with hemoglobin, and cations that were not freely exchangeable between cells and plasma. By changing the pH through varying the CO2 tension, the concentration of negative hemoglobin charges changed by a predictable amount. This, in turn, changed the distribution of diffusible anions such as Cl- and HCO3- in order to restore the Gibbs-Donnan equilibrium. Redistribution of water occurred to restore osmotic equilibrium. The experimental results confirmed the predictions of the equations.
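The Gibbs-Donnan constraint just described can be illustrated numerically: at equilibrium, every diffusible anion takes the same cells-to-plasma concentration ratio, so the measured chloride ratio predicts the bicarbonate distribution. A sketch with hypothetical concentrations, not data from the Rockefeller papers:

```python
def donnan_ratio(anion_cells, anion_plasma):
    """Cells/plasma ratio r shared by all diffusible anions at equilibrium."""
    return anion_cells / anion_plasma

def predict_cell_concentration(anion_plasma, r):
    """Predicted erythrocyte concentration of another diffusible anion."""
    return r * anion_plasma

# Hypothetical values: chloride fixes r, which then predicts bicarbonate.
r = donnan_ratio(anion_cells=52.0, anion_plasma=104.0)  # r = 0.5
hco3_cells = predict_cell_concentration(24.0, r)        # 12.0 mmol/L in cells
print(r, hco3_cells)
```

Changing the CO2 tension shifts the hemoglobin charge and hence r, which is why the chloride and bicarbonate shifts between cells and plasma track each other.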

As a spin-off from the physico-chemical study of the blood, Van undertook, in 1922, to put the concept of buffer value of weak electrolytes on a mathematically exact basis.

This proved to be useful in determining buffer values of mixed, polyvalent, and amphoteric electrolytes, and put the understanding of buffering on a quantitative basis.

A monograph in Medicine entitled “Observation on the Courses of Different Types of Bright’s Disease, and on the Resultant Changes in Renal Anatomy” was a landmark that related the changes occurring at different stages of renal deterioration to the quantitative changes taking place in kidney function. During this period, Van Slyke and R. M. Archibald identified glutamine as the source of urinary ammonia. During World War II, Van and his colleagues documented the effect of shock on renal function and, with R. A. Phillips, developed a simple method, based on specific gravity, suitable for use in the field.
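The buffer value Van Slyke put on an exact basis in 1922 is, for a monovalent weak acid with the water terms neglected, beta = dB/dpH = 2.303 C Ka[H+]/(Ka + [H+])^2. A small numerical sketch of that formula (the concentrations are illustrative only):

```python
def buffer_value(c_total, ph, pka):
    """Van Slyke buffer value (mol per litre per pH unit) of a monovalent
    weak acid: beta = 2.303 * C * Ka*[H+] / (Ka + [H+])**2, with the
    contributions of water itself neglected."""
    ka = 10.0 ** (-pka)
    h = 10.0 ** (-ph)
    return 2.303 * c_total * ka * h / (ka + h) ** 2

# Buffering is maximal when pH == pKa, where beta = 2.303 * C / 4:
print(round(buffer_value(c_total=0.1, ph=4.76, pka=4.76), 4))  # 0.0576
```

The same expression, summed over each dissociating group, extends directly to the mixed and polyvalent electrolytes mentioned above.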

Over 100 of Van’s 300 publications were devoted to methodology. The importance of Van Slyke’s contribution to clinical chemical methodology cannot be overestimated. These included the blood organic constituents (carbohydrates, fats, proteins, amino acids, urea, nonprotein nitrogen, and phospholipids) and the inorganic constituents (total cations, calcium, chlorides, phosphate, and the gases carbon dioxide, carbon monoxide, and nitrogen). It was said that a Van Slyke manometric apparatus was almost all the special equipment needed to perform most of the clinical chemical analyses customarily performed prior to the introduction of photocolorimeters and spectrophotometers for such determinations.

The progress made in the medical sciences in genetics, immunology, endocrinology, and antibiotics during the second half of the twentieth century obscures at times the progress that was made in basic and necessary biochemical knowledge during the first half. Methods capable of giving accurate quantitative chemical information on biological material had to be painstakingly devised; basic questions on chemical behavior and metabolism had to be answered; and, finally, those factors that adversely modified the normal chemical reactions in the body so that abnormal conditions arise that we characterize as disease states had to be identified.

Viewed in retrospect, he combined in one scientific lifetime (1) basic contributions to the chemistry of body constituents and their chemical behavior in the body, (2) a chemical understanding of physiological functions of certain organ systems (notably the respiratory and renal), and (3) how such information could be exploited in the understanding and treatment of disease. That outstanding additions to knowledge in all three categories were possible was in large measure due to his sound and broadly based chemical preparation, his ingenuity in devising means of accurate measurements of chemical constituents, and the opportunity given him at the Hospital of the Rockefeller Institute to study disease in company with physicians.

In addition, he found time to work collaboratively with Dr. John P. Peters of Yale on the classic, two-volume Quantitative Clinical Chemistry. In 1922, John P. Peters, who had just gone to Yale from Van Slyke’s laboratory as an Associate Professor of Medicine, was asked by a publisher to write a modest handbook for clinicians describing useful chemical methods and discussing their application to clinical problems. It was originally to be called “Quantitative Chemistry in Clinical Medicine.” He soon found that it was going to be a bigger job than he could handle alone and asked Van Slyke to join him in writing it. Van agreed, and the two men proceeded to draw up an outline and divide up the writing of the first drafts of the chapters between them. They also agreed to exchange each chapter until it met the satisfaction of both. At the time it was published in 1931, it contained practically all that could be stated with confidence about those aspects of disease that could be and had been studied by chemical means. It was widely accepted throughout the medical world as the “Bible” of quantitative clinical chemistry, and to this day some of the chapters have not become outdated.

Paul Flory

Paul J. Flory was born in Sterling, Illinois, in 1910. He attended Manchester College, an institution for which he retained an abiding affection. He did his graduate work at Ohio State University, earning his Ph.D. in 1934. He was awarded the Nobel Prize in Chemistry in 1974, largely for his work in the area of the physical chemistry of macromolecules.

Flory worked as a newly minted Ph.D. for the DuPont Company in the Central Research Department with Wallace H. Carothers. This early experience with practical research instilled in Flory a lifelong appreciation for the value of industrial application. His work with the Air Force Office of Scientific Research and his later support for the Industrial Affiliates program at Stanford University demonstrated his belief in the need for theory and practice to work hand-in-hand.

Following the death of Carothers in 1937, Flory joined the University of Cincinnati’s Basic Science Research Laboratory. After the war Flory taught at Cornell University from 1948 until 1957, when he became executive director of the Mellon Institute. In 1961 he joined the chemistry faculty at Stanford, where he would remain until his retirement.

Among the high points of Flory’s years at Stanford were his receipt of the National Medal of Science (1974), the Priestley Award (1974), the J. Willard Gibbs Medal (1973), the Peter Debye Award in Physical Chemistry (1969), and the Charles Goodyear Medal (1968). He also traveled extensively, including working tours to the U.S.S.R. and the People’s Republic of China.

Abraham Savitzky

Abraham Savitzky was born on May 29, 1919, in New York City. He received his bachelor’s degree from the New York State College for Teachers in 1941. After serving in the U.S. Air Force during World War II, he obtained a master’s degree in 1947 and a Ph.D. in 1949 in physical chemistry from Columbia University.

In 1950, after working at Columbia for a year, he began a long career with the Perkin-Elmer Corporation. Savitzky started with Perkin-Elmer as a staff scientist chiefly concerned with the design and development of infrared instruments. By 1956 he was named Perkin-Elmer’s new product coordinator for the Instrument Division, and over the years he gained increasing recognition for his work within the company. Most of his work with Perkin-Elmer focused on computer-aided analytical chemistry, data reduction, infrared spectroscopy, time-sharing systems, and computer plotting. He retired from Perkin-Elmer in 1985.

Abraham Savitzky holds seven U.S. patents pertaining to computerization and chemical apparatus. During his long career he presented numerous papers and wrote several manuscripts, including “Smoothing and Differentiation of Data by Simplified Least Squares Procedures.” This paper, which is the collaborative effort of Savitzky and Marcel J. E. Golay, was published in volume 36 of Analytical Chemistry, July 1964. It is one of the most famous, respected, and heavily cited articles in its field. In recognition of his many significant accomplishments in the field of analytical chemistry and computer science, Savitzky received the Society of Applied Spectroscopy Award in 1983 and the Williams-Wright Award from the Coblenz Society in 1986.
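The Savitzky-Golay procedure replaces each data point with the value of a local least-squares polynomial fit, which reduces to a simple weighted moving average. A minimal sketch in Python, using the published 5-point quadratic/cubic convolution weights (-3, 12, 17, 12, -3)/35; the function name and the choice to leave the endpoints untouched are illustrative, not from the paper:

```python
# Illustrative 5-point Savitzky-Golay smoother (quadratic/cubic fit),
# using the convolution weights (-3, 12, 17, 12, -3) / 35.
def savgol5(y):
    """Smooth the interior points of y; the two points at each end
    are left unchanged for simplicity."""
    w = [-3, 12, 17, 12, -3]
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(c * y[i + k - 2] for k, c in enumerate(w)) / 35.0
    return out

# Noisy measurements around a smooth trend:
noisy = [0.0, 1.1, 3.9, 9.2, 15.8, 25.1, 36.0, 48.9, 64.2, 80.8]
smoothed = savgol5(noisy)
```

A useful property of these weights is that they reproduce any polynomial up to cubic exactly, so smoothing distorts a slowly varying signal far less than a plain moving average would.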

Samuel Natelson

Samuel Natelson attended the City College of New York and received his B.S. in chemistry in 1928. As a graduate student, Natelson attended New York University, receiving an Sc.M. in 1930 and his Ph.D. in 1931. After receiving his Ph.D., he began his career teaching at Girls Commercial High School. While maintaining his teaching position, Natelson joined the Jewish Hospital of Brooklyn in 1933. Working as a clinical chemist there, he first conceived of the idea of a society by and for clinical chemists, and he worked to organize the nine charter members of the American Association of Clinical Chemists, which formally began in 1948. A pioneer in the field of clinical chemistry, Natelson became a role model for the clinical chemist and developed the use of microtechniques in the field. In the 1960s he served as a consultant to the National Aeronautics and Space Administration, helping analyze the effect of weightlessness on astronauts’ blood. Natelson spent his later career as chair of the biochemistry department at Michael Reese Hospital and as a lecturer at the Illinois Institute of Technology.

Arnold Beckman

Arnold Orville Beckman (April 10, 1900 – May 18, 2004) was an American chemist, inventor, investor, and philanthropist. While a professor at Caltech, he founded Beckman Instruments based on his 1934 invention of the pH meter, a device for measuring acidity, later considered to have “revolutionized the study of chemistry and biology”.[1] He also developed the DU spectrophotometer, “probably the most important instrument ever developed towards the advancement of bioscience”.[2] Beckman funded the first transistor company, thus giving rise to Silicon Valley.[3]

He earned his bachelor’s degree in chemical engineering in 1922 and his master’s degree in physical chemistry in 1923. For his master’s degree he studied the thermodynamics of aqueous ammonia solutions, a subject introduced to him by T. A. White. Beckman decided to go to Caltech for his doctorate. He stayed there for a year before returning to New York to be near his fiancée, Mabel. He found a job with Western Electric’s engineering department, the precursor to the Bell Telephone Laboratories. Working with Walter A. Shewhart, Beckman developed quality control programs for the manufacture of vacuum tubes and learned about circuit design. It was here that Beckman discovered his interest in electronics.

In 1926 the couple moved back to California and Beckman resumed his studies at Caltech. He became interested in ultraviolet photolysis and worked with his doctoral advisor, Roscoe G. Dickinson, on an instrument to find the energy of ultraviolet light. It worked by shining the ultraviolet light onto a thermocouple, converting the incident heat into electricity, which drove a galvanometer. After receiving a Ph.D. in photochemistry in 1928 for this application of quantum theory to chemical reactions, Beckman was asked to stay on at Caltech as an instructor and then as a professor. Linus Pauling, another of Roscoe G. Dickinson’s graduate students, was also asked to stay on at Caltech.

During his time at Caltech, Beckman was active in teaching at both the introductory and advanced graduate levels. Beckman shared his expertise in glass-blowing by teaching classes in the machine shop. He also taught classes in the design and use of research instruments. Beckman dealt first-hand with the chemists’ need for good instrumentation as manager of the chemistry department’s instrument shop. Beckman’s interest in electronics made him very popular within the chemistry department at Caltech, as he was very skilled in building measuring instruments.

Over the time that he was at Caltech, the focus of the department moved increasingly toward pure science and away from chemical engineering and applied chemistry. Arthur Amos Noyes, head of the chemistry division, encouraged both Beckman and chemical engineer William Lacey to stay in contact with real-world engineers and chemists, and Robert Andrews Millikan, Caltech’s president, referred technical questions from government and business to Beckman.

Sunkist Growers was having problems with its manufacturing process. Lemons that were not saleable as produce were made into pectin or citric acid, with sulfur dioxide used as a preservative. Sunkist needed to know the acidity of the product at any given time. Chemist Glen Joseph at Sunkist was attempting to measure the hydrogen-ion concentration in lemon juice electrochemically, but sulfur dioxide damaged hydrogen electrodes, and non-reactive glass electrodes produced weak signals and were fragile.

Joseph approached Beckman, who proposed that instead of trying to increase the sensitivity of his measurements, he amplify his results. Beckman, familiar with glassblowing, electricity, and chemistry, suggested a design for a vacuum-tube amplifier and ended up building a working apparatus for Joseph. The glass electrode used to measure pH was placed in a grid circuit in the vacuum tube, producing an amplified signal which could then be read by an electronic meter. The prototype was so useful that Joseph requested a second unit.

Beckman saw an opportunity and, rethinking the project, decided to create a complete chemical instrument that could be easily transported and used by nonspecialists. By October 1934, he had filed the patent application for his “acidimeter”, later renamed the pH meter, which issued as U.S. Patent No. 2,058,761. Although priced at $195, roughly the starting monthly wage for a chemistry professor at that time, it was significantly cheaper than the estimated cost of building a comparable instrument from individual components, about $500. The original pH meter weighed nearly 7 kg, but it was a substantial improvement over a benchful of delicate equipment. The earliest meter had a design glitch, in that the pH readings changed with the depth of immersion of the electrodes, but Beckman fixed the problem by sealing the glass bulb of the electrode. By 11 May 1939, sales were successful enough that Beckman left Caltech to become the full-time president of National Technical Laboratories. By 1940, Beckman was able to take out a loan to build his own 12,000-square-foot factory in South Pasadena.
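The quantity the meter’s amplifier makes readable is small: at 25 °C a glass electrode’s potential changes by only about 59.16 mV per pH unit (the Nernst slope). A hedged sketch of the voltage-to-pH arithmetic follows; the single-point calibration values and the sign convention are assumptions for illustration:

```python
import math

# Nernst slope at 25 C: (R*T/F) * ln(10), about 0.05916 V per pH unit.
R, T, F = 8.314, 298.15, 96485.0
NERNST_SLOPE_V = R * T / F * math.log(10)

def ph_from_emf(e_measured_v, e_cal_v=0.0, ph_cal=7.0):
    """Convert a glass-electrode EMF (volts) to pH against a
    single-point calibration; e_cal_v and ph_cal are hypothetical
    calibration values, not from any real instrument."""
    return ph_cal + (e_cal_v - e_measured_v) / NERNST_SLOPE_V

# A reading about 59 mV below the pH-7 calibration point lands
# near pH 8 under these assumptions.
ph = ph_from_emf(-0.05916)
```

The tiny slope is exactly why Joseph’s unamplified glass electrodes gave weak signals, and why a vacuum-tube amplifier made the measurement practical.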

In 1940, the equipment needed to analyze emission spectra in the visible spectrum could cost a laboratory as much as $3,000, a huge amount at that time. There was also growing interest in examining ultraviolet spectra beyond that range. In the same way that he had created a single easy-to-use instrument for measuring pH, Beckman made it a goal to create an easy-to-use instrument for spectrophotometry. Beckman’s research team, led by Howard Cary, developed several models.

The new spectrophotometers used a prism to spread light into its component wavelengths and a phototube to “read” the transmitted light and generate electrical signals, creating a standardized “fingerprint” for the material tested. With Beckman’s model D, later known as the DU spectrophotometer, National Technical Laboratories created the first easy-to-use single instrument containing both the optical and electronic components needed for ultraviolet-absorption spectrophotometry. The user could insert a sample, dial up the desired wavelength, and read the amount of absorption at that wavelength from a simple meter. It produced accurate absorption spectra in both the ultraviolet and the visible regions of the spectrum with relative ease and repeatable accuracy. The National Bureau of Standards ran tests to certify that the DU’s results were accurate and repeatable and recommended its use.

Beckman’s DU spectrophotometer has been referred to as the “Model T” of scientific instruments: “This device forever simplified and streamlined chemical analysis, by allowing researchers to perform a 99.9% accurate biological assessment of a substance within minutes, as opposed to the weeks required previously for results of only 25% accuracy.” Nobel laureate Bruce Merrifield is quoted as calling the DU spectrophotometer “probably the most important instrument ever developed towards the advancement of bioscience.”
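The quantitative basis of such absorption measurements is the Beer-Lambert law, A = εlc: absorbance is proportional to molar absorptivity, path length, and concentration. A minimal sketch of the arithmetic; the numbers in the example are illustrative (the NADH absorptivity at 340 nm is a standard literature value, not something measured on a DU):

```python
import math

def absorbance(transmittance):
    """A = -log10(T), with transmittance T as a fraction (0 < T <= 1)."""
    return -math.log10(transmittance)

def concentration_mol_per_l(a, molar_absorptivity, path_cm=1.0):
    """Beer-Lambert law: A = eps * l * c, so c = A / (eps * l)."""
    return a / (molar_absorptivity * path_cm)

# Illustrative numbers: 10% transmittance gives A = 1.0; with
# eps = 6220 L mol^-1 cm^-1 (NADH at 340 nm) and a 1 cm cell,
# c = 1.0 / 6220 mol/L, about 0.16 mmol/L.
a = absorbance(0.10)
c = concentration_mol_per_l(a, 6220.0)
```

This single relationship is what let a dial-and-read instrument replace days of bioassay: once ε is known for a substance at a wavelength, one absorbance reading yields a concentration.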

Development of the spectrophotometer also had direct relevance to the war effort. The role of vitamins in health was being studied, and scientists wanted to identify Vitamin A-rich foods to keep soldiers healthy. Previous methods involved feeding rats for several weeks, then performing a biopsy to estimate Vitamin A levels. The DU spectrophotometer yielded better results in a matter of minutes. The DU spectrophotometer was also an important tool for scientists studying and producing the new wonder drug penicillin. By the end of the war, American pharmaceutical companies were producing 650 billion units of penicillin each month. Much of the work done in this area during World War II was kept secret until after the war.

Beckman also developed infrared spectrophotometers, first the IR-1; then, in 1953, he redesigned the instrument. The result was the IR-4, which could be operated using either a single or a double beam of infrared light. This allowed a user to take both the reference measurement and the sample measurement at the same time.

Beckman Coulter, Inc. is an American company that makes biomedical laboratory instruments. Founded by Caltech professor Arnold O. Beckman in 1935 as National Technical Laboratories to commercialize the pH meter he had invented, the company eventually grew to employ over 10,000 people, with $2.4 billion in annual sales by 2004. Its current headquarters are in Brea, California.

In the 1940s, Beckman changed the name to Arnold O. Beckman, Inc. to sell oxygen analyzers, the Helipot precision potentiometer, and spectrophotometers. In the 1950s, the company name changed to Beckman Instruments, Inc.

Beckman was contacted by Paul Rosenberg of MIT’s Radiation Laboratory. The lab was part of a secret network of research institutions in the United States and Britain working to develop radar, “radio detecting and ranging”. The project was interested in Beckman because of the high quality of the tuning knobs, or potentiometers, used on his pH meters. Beckman had trademarked the design under the name “Helipot”, for “helical potentiometer”. Rosenberg had found that the Helipot was ten times more precise than other potentiometers. Beckman redesigned it with a continuous groove so that the sliding contact could not be jarred loose.

Beckman instruments were also used by the Manhattan Project to measure radiation in gas-filled, electrically charged ionization chambers in nuclear reactors. The pH meter was adapted to do the job with a relatively minor adjustment, substituting an input-load resistor for the glass electrode. As a result, Beckman Instruments developed a new product, the micro-ammeter.

After the war, Beckman developed oxygen analyzers that were used to monitor conditions in incubators for premature babies. Doctors at Johns Hopkins University used them to determine recommendations for healthy oxygen levels for incubators.

Beckman himself was approached by California governor Goodwin Knight to head a Special Committee on Air Pollution and propose ways to combat smog. At the end of 1953, the committee made its findings public. The “Beckman Bible” advised key steps to be taken immediately.

In 1955, Beckman established the seminal Shockley Semiconductor Laboratory as a division of Beckman Instruments to begin commercializing the semiconductor transistor technology invented by Caltech alumnus William Shockley. The Shockley Laboratory was established in nearby Mountain View, California, and thus, “Silicon Valley” was born.

Beckman also saw that computers and automation offered myriad opportunities, both for integration into existing instruments and for the development of new ones.

The Arnold and Mabel Beckman Foundation was incorporated in September 1977.  At the time of Beckman’s death, the Foundation had given more than 400 million dollars to a variety of charities and organizations. In 1990, it was considered one of the top ten foundations in California, based on annual gifts. Donations chiefly went to scientists and scientific causes as well as Beckman’s alma maters. He is quoted as saying, “I accumulated my wealth by selling instruments to scientists,… so I thought it would be appropriate to make contributions to science, and that’s been my number one guideline for charity.”

Wallace H. Coulter

Engineer, Inventor, Entrepreneur, Visionary

Wallace Henry Coulter was an engineer, inventor, entrepreneur and visionary. He was co-founder and Chairman of Coulter® Corporation, a worldwide medical diagnostics company headquartered in Miami, Florida. The two great passions of his life were applying engineering principles to scientific research, and embracing the diversity of world cultures. The first passion led him to invent the Coulter Principle™, the reference method for counting and sizing microscopic particles suspended in a fluid.

This invention served as the cornerstone for automating the labor-intensive process of counting and testing blood. With his vision and tenacity, Wallace Coulter was a founding father of the field of laboratory hematology, the science and study of blood. His global viewpoint and passion for world cultures inspired him to establish over twenty international subsidiaries. He recognized that it was imperative to employ locally based staff to service his customers before this became standard business strategy.
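In the Coulter Principle, each particle passing through a small aperture displaces conducting electrolyte and produces a brief resistance pulse whose height is roughly proportional to the particle’s volume, so counting and sizing reduce to counting and measuring pulses. A toy sketch of that logic; the threshold and the millivolts-to-femtoliters calibration factor are hypothetical values, not from any Coulter instrument:

```python
def count_and_size(pulses_mv, threshold_mv=5.0, fl_per_mv=10.0):
    """Count pulses above a noise threshold and convert pulse height
    to particle volume in femtoliters. threshold_mv and fl_per_mv are
    hypothetical stand-ins for a real instrument calibration."""
    hits = [p for p in pulses_mv if p >= threshold_mv]
    volumes_fl = [p * fl_per_mv for p in hits]
    return len(hits), volumes_fl

# A hypothetical pulse train: three particles plus baseline noise.
count, volumes = count_and_size([0.4, 9.1, 0.2, 8.7, 0.3, 9.4])
```

Because pulse height tracks volume, the same pass over the signal yields both a cell count and a size distribution, which is what made the method so much faster than visual counting in a hemocytometer.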

Wallace’s first attempts to patent his invention were turned away by more than one attorney who believed “you cannot patent a hole”. Persistent as always, Wallace finally applied for his first patent in 1949, and it was issued on October 20, 1953. That same year, two prototypes were sent to the National Institutes of Health for evaluation. Shortly after, the NIH published its findings in two key papers, citing the improved accuracy and convenience of the Coulter method of counting blood cells. Also in 1953, Wallace publicly disclosed his invention in his one and only technical paper, presented at the National Electronics Conference: “High Speed Automatic Blood Cell Counter and Cell Size Analyzer”.

Leonard Skeggs invented the first continuous flow analyzer in 1957. This groundbreaking event completely changed the way that chemistry was carried out. Many of the laborious tests that dominated lab work could be automated, increasing productivity and freeing personnel for other, more challenging tasks.

Continuous flow analysis and its offshoots and descendants are an integral part of modern chemistry. It might therefore be some consolation to Leonard Skeggs to know that not only was he the beneficiary of an appellation with a long and fascinating history, he also created a revolution in wet chemistry that is still with us today.

Technicon

The AutoAnalyzer is an automated analyzer using a flow technique called continuous flow analysis (CFA), first made by the Technicon Corporation. The instrument was invented in 1957 by Leonard Skeggs, PhD, and commercialized by Jack Whitehead’s Technicon Corporation. The first applications were for clinical analysis, but methods for industrial analysis soon followed. The design is based on segmenting a continuously flowing stream with air bubbles.

In continuous flow analysis (CFA), a continuous stream of material is divided by air bubbles into discrete segments in which chemical reactions occur. The continuous stream of liquid samples and reagents is combined and transported in tubing and mixing coils. The tubing passes the samples from one apparatus to the next, with each apparatus performing a different function, such as distillation, dialysis, extraction, ion exchange, heating, incubation, and subsequent recording of a signal. An essential principle of the system is the introduction of air bubbles. The air bubbles segment each sample into discrete packets and act as barriers between packets to prevent cross-contamination as they travel down the length of the tubing. The air bubbles also assist mixing by creating turbulent (bolus) flow, and they give operators a quick and easy visual check of the flow characteristics of the liquid. Samples and standards are treated in exactly the same manner as they travel the length of the tubing, eliminating the necessity of a steady-state signal. However, because the bubbles create an almost square-wave profile, bringing the system to steady state does not significantly decrease throughput (third-generation CFA analyzers average 90 or more samples per hour), and it is desirable in that steady-state signals (chemical equilibrium) are more accurate and reproducible.

A continuous flow analyzer (CFA) consists of different modules, including a sampler, pump, mixing coils, optional sample-treatment stages (dialysis, distillation, heating, etc.), a detector, and a data generator. Most continuous flow analyzers depend on color reactions read with a flow-through photometer, but methods have also been developed that use ion-selective electrodes (ISE), flame photometry, ICAP, fluorometry, and so forth.

Flow injection analysis (FIA) was introduced in 1975 by Ruzicka and Hansen.
Jaromir (Jarda) Ruzicka is a Professor of Chemistry (Emeritus at the University of Washington and Affiliate at the University of Hawaii) and a member of the Danish Academy of Technical Sciences. Born in Prague in 1934, he graduated from the Department of Analytical Chemistry, Faculty of Sciences, Charles University. In 1968, when the Soviets occupied Czechoslovakia, he emigrated to Denmark. There he joined the Technical University of Denmark, where, ten years later, he received a newly created Chair in Analytical Chemistry. When Jarda met Elo Hansen, they invented Flow Injection.

The first generation of FIA technology, termed flow injection (FI), was inspired by the AutoAnalyzer technique invented by Skeggs in the early 1950s. While Skeggs’ AutoAnalyzer uses air segmentation to separate a flowing stream into numerous discrete segments, establishing a long train of individual samples moving through a flow channel, FIA systems separate each sample from the next with a carrier reagent. And while the AutoAnalyzer mixes sample homogeneously with reagents, in all FIA techniques sample and reagents are merged to form a concentration gradient that yields the analysis results.

Arthur Karmen

Dr. Karmen was born in New York City in 1930. He graduated from the Bronx High School of Science in 1946 and earned an A.B. and M.D. in 1950 and 1954, respectively, from New York University. In 1952, while a medical student working on a summer project at Memorial Sloan-Kettering, he used paper chromatography of amino acids to demonstrate the presence of glutamic-oxaloacetic and glutamic-pyruvic transaminases (aspartate and alanine aminotransferases) in serum and blood. In 1954, he devised the spectrophotometric method for measuring aspartate aminotransferase in serum, which, with minor modifications, is still used for diagnostic testing today. When developing this assay, he studied the reaction of NADH with serum and demonstrated the presence of lactate and malate dehydrogenases, both of which were also later used in diagnosis. Using the spectrophotometric method, he found that aspartate aminotransferase increased in the period immediately after an acute myocardial infarction and did the pilot studies that showed its diagnostic utility in heart and liver diseases. The test became as important as the EKG. It was later replaced in cardiology usage by the MB isoenzyme of creatine kinase, a change driven by Burton Sobel’s work on infarct size, and later still by the troponins.
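NADH-linked rate assays of this kind quantify enzyme activity from the fall in absorbance at 340 nm as NADH is consumed. A hedged sketch of the standard rate calculation, taking ε(NADH, 340 nm) as 6.22 mL·µmol⁻¹·cm⁻¹; the reaction volumes are illustrative assumptions, not Karmen’s published conditions:

```python
def ast_activity_u_per_l(delta_a_per_min, total_vol_ml=3.0,
                         sample_vol_ml=0.1, path_cm=1.0):
    """NADH-linked rate calculation: one unit (U) of enzyme converts
    one micromole of NADH per minute. epsilon(NADH, 340 nm) is taken
    as 6.22 mL * umol^-1 * cm^-1; the default volumes are illustrative
    stand-ins for a real assay protocol."""
    eps = 6.22
    return (delta_a_per_min * total_vol_ml * 1000.0
            / (eps * path_cm * sample_vol_ml))

# A fall of 0.0622 A/min in this illustrative system works out to
# roughly 300 U/L.
activity = ast_activity_u_per_l(0.0622)
```

The appeal of the method is visible in the arithmetic: activity follows directly from a slope on a chart recorder, with no stopped reaction or color development step.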

Nathan Gochman.  Developer of Automated Chemistries.

Nathan Gochman, PhD, has over 40 years of experience in the clinical diagnostics industry. This includes academic teaching and research, and 30 years in the pharmaceutical and in vitro diagnostics industry. He has managed R & D, technical marketing and technical support departments. As a leader in the industry he was President of the American Association for Clinical Chemistry (AACC) and the National Committee for Clinical Laboratory Standards (NCCLS, now CLSI). He is currently a Consultant to investment firms and IVD companies.

William Sunderman

A doctor and scientist who lived a remarkable century and beyond — making medical advances, playing his Stradivarius violin at Carnegie Hall at 99 and being honored as the nation’s oldest worker at 100.

He developed a method for measuring glucose in the blood, the Sunderman Sugar Tube, and was one of the first doctors to use insulin to bring a patient out of a diabetic coma. He established quality-control techniques for medical laboratories that ended the wide variation in the results of laboratories doing the same tests.

He taught at several medical schools and founded and edited the journal Annals of Clinical and Laboratory Science. In World War II, he was a medical director for the Manhattan Project, which developed the atomic bomb.

Dr. Sunderman was president of the American Society of Clinical Pathologists and a founding governor of the College of American Pathologists. He also helped organize the Association of Clinical Scientists and was its first president.

Yale Department of Laboratory Medicine

The roots of the Department of Laboratory Medicine at Yale can be traced back to John Peters, the head of what he called the “Chemical Division” of the Department of Internal Medicine, subsequently known as the Section of Metabolism, who co-authored with Donald Van Slyke the landmark 1931 textbook Quantitative Clinical Chemistry; and to Pauline Hald, research collaborator of Dr. Peters who subsequently served as Director of Clinical Chemistry at Yale-New Haven Hospital for many years. In 1947, Miss Hald reported the very first flame photometric measurements of sodium and potassium in serum. This study helped to lay the foundation for modern studies of metabolism and their application to clinical care.

The Laboratory Medicine program at Yale had its inception in 1958 as a section of Internal Medicine under the leadership of David Seligson. In 1965, Laboratory Medicine achieved autonomous section status and in 1971, became a full-fledged academic department. Dr. Seligson, who served as the first Chair, pioneered modern automation and computerized data processing in the clinical laboratory. In particular, he demonstrated the feasibility of discrete sample handling for automation that is now the basis of virtually all automated chemistry analyzers. In addition, Seligson and Zetner demonstrated the first clinical use of atomic absorption spectrophotometry. He was one of the founding members of the major Laboratory Medicine academic society, the Academy of Clinical Laboratory Physicians and Scientists.

The discipline of clinical chemistry and the broader field of laboratory medicine, as they are practiced today, are attributed in no small part to Seligson’s vision and creativity.

Born in Philadelphia in 1916, Seligson graduated from the University of Maryland and received a D.Sc. from Johns Hopkins University and an M.D. from the University of Utah. In 1953, he served as a captain in the U.S. Army and chief of the Hepatic and Metabolic Disease Laboratory at Walter Reed Army Medical Center.

Recruited to Yale and Grace-New Haven Hospital in 1958 from the University of Pennsylvania as professor of internal medicine at the medical school and the first director of clinical laboratories at the hospital, Seligson subsequently established the infrastructure of the Department of Laboratory Medicine, creating divisions of clinical chemistry, microbiology, transfusion medicine (blood banking) and hematology – each with its own strong clinical, teaching and research programs.

Challenging the continuous flow approach, Seligson designed, built and validated “discrete sample handling” instruments wherein each sample was treated independently, which allowed better choice of methods and greater efficiency. Today continuous flow has essentially disappeared and virtually all modern automated clinical laboratory instruments are based upon discrete sample handling technology.

Seligson was one of the early visionaries who recognized the potential for computers in the clinical laboratory. One of the first applications of a digital computer in the clinical laboratory occurred in Seligson’s department at Yale, and shortly thereafter data were being transmitted directly from the laboratory computer to data stations on the patient wards. Now, such laboratory information systems represent the standard of care.

He was also among the first to highlight the clinical importance of test specificity and accuracy, as compared with simple reproducibility. One of his favorite slides showed almost perfectly reproducible results for 10 successive measurements of blood sugar obtained with what was then the most widely used and popular analytical instrument. However, he would note, the answer was wrong; the assay was reproducible but not accurate.

Seligson established one of the nation’s first residency programs focused on laboratory medicine or clinical pathology, and also developed a teaching curriculum in laboratory medicine for medical students. In so doing, he created a model for the modern practice of laboratory medicine in an academic environment, and his trainees spread throughout the country as leaders in the field.

Ernest Cotlove

Ernest Cotlove’s scientific and medical career started at NYU where, after finishing his medical studies in 1943, he pursued studies in renal physiology and chemistry. His outstanding ability to acquire knowledge and conduct innovative investigations earned him an invitation from James Shannon, then Director of the National Heart Institute at NIH. He continued studies of renal physiology and chemistry until 1953, when he became Head of the Clinical Chemistry Laboratories in the new Department of Clinical Pathology being developed by George Z. Williams during the Clinical Center’s construction. Dr. Cotlove seized the opportunity to design and equip the most advanced and functional clinical chemistry facility in the country.

Dr. Cotlove’s career exemplified the progress seen in medical research and technology. He designed the electronic chloridometer that bears his name, in spite of published reports that such an approach was theoretically impossible. He used this innovative skill to develop new instruments and methods at the Clinical Center. Many recognized him as an expert in clinical chemistry, computer programming, systems design for laboratory operations, and automation of analytical instruments.

Effects of Automation on Laboratory Diagnosis

George Z. Williams

Laboratory automation has four primary effects on the practice of medicine: the range of laboratory support is being greatly extended, both to diagnosis and to guidance of therapeutic management; the new feasibility of multiphasic periodic health evaluation promises effective health and manpower conservation in the future; and substantially lowered unit cost for laboratory analysis will permit more extensive use of comprehensive laboratory medicine in everyday practice. There is, however, a real and growing danger of naive acceptance of, and overconfidence in, the reliability and accuracy of automated analysis and computer processing without critical evaluation. Erroneous results can jeopardize the patient's welfare. Every physician has the responsibility to obtain proof of accuracy and reliability from the laboratories which serve his patients.

Mario Werner

Dr. Werner received his medical degree from the University of Zurich, Switzerland in 1956. After specializing in internal medicine at the University Clinic in Basel, he came to the United States–as a fellow of the Swiss Academy of Medical Sciences–to work at NIH and at the Rockefeller University. From 1964 to 1966, he served as chief of the Central Laboratory at the Klinikum Essen, Ruhr-University, Germany. In 1967, he returned to the US, joining the Division of Clinical Pathology and Laboratory Medicine at the University of California, San Francisco, as an assistant professor. Three years later, he became Associate Professor of Pathology and Laboratory Medicine at Washington University in St. Louis, where he was instrumental in establishing the training program in laboratory medicine. In 1972, he was appointed Professor of Pathology at The George Washington University in Washington, DC.

Norbert Tietz

Professor Norbert W. Tietz received the degree of Doctor of Natural Sciences from the Technical University Stuttgart, Germany, in 1950. In 1954 he immigrated to the United States where he subsequently held positions or appointments at several Chicago area institutions including the Mount Sinai Hospital Medical Center, Chicago Medical School/University of Health Sciences and Rush Medical College.

Professor Tietz is best known as the editor of the Fundamentals of Clinical Chemistry. This book, now in its sixth edition, remains a primary information source for both students and educators in laboratory medicine. It was the first modern textbook that integrated clinical chemistry with the basic sciences and pathophysiology.

Throughout his career, Dr. Tietz taught a range of students from the undergraduate through post-graduate level including (1) medical technology students, (2) medical students, (3) clinical chemistry graduate students, (4) pathology residents, and (5) practicing chemists. For example, in the late 1960’s he began the first master’s of science degree program in clinical chemistry in the United States at the Chicago Medical School. This program subsequently evolved into one of the first Ph.D. programs in clinical chemistry.

Automation and other recent developments in clinical chemistry.

Griffiths J.

http://www.ncbi.nlm.nih.gov/pubmed/1344702

The decade 1980 to 1990 was the most progressive period in the short, but turbulent, history of clinical chemistry. New techniques and the instrumentation needed to perform assays have opened a chemical Pandora's box. Multichannel analyzers, the spectrophotometric base of automated laboratories, have become almost perfect. The extended use of the antigen-monoclonal antibody reaction with increasingly sensitive labels has extended analyte detection routinely into the picomole/liter range. Devices that aid the automation of serum processing and distribution of specimens are emerging. Laboratory computerization has significantly matured, permitting better integration of laboratory instruments, improving communication between laboratory personnel and the patient's physician, and facilitating the use of expert systems and robotics in the chemistry laboratory.

Automation and Expert Systems in a Core Clinical Chemistry Laboratory
Streitberg, GT, et al.  JALA 2009;14:94–105

Clinical pathology or laboratory medicine has a great influence on clinical decisions: 60-70% of the most important decisions on admission, discharge, and medication are based on laboratory results.[1] As we learn more about clinical laboratory results and incorporate them in outcome optimization schemes, the laboratory will play a more pivotal role in the management of patients and their eventual outcomes.[2] It has been stated that the development of information technology and automation in laboratory medicine has allowed laboratory professionals to keep pace with the growth in workload.

The reasons to automate and the impacts of automation are similar, and they include reduction in errors, increase in productivity, and improvement in safety. Advances in technology in clinical chemistry, including total laboratory automation, call for changes in job responsibilities to include skills in information technology, data management, instrumentation, patient preparation for diagnostic analysis, interpretation of pathology results, dissemination of knowledge and information to patients and other health staff, as well as skills in research.

The clinical laboratory has become so productive, particularly in chemistry and immunology, and its labor, instrument, and reagent costs so well determined, that today a physician's medical decisions are 80% determined by the clinical laboratory. Medical information systems have lagged far behind. Why is that? Because the decision for an MIS has historically been based on billing capture. Moreover, chemical profiles were historically quite good at validating healthy status in an outpatient population, but the profiles became restricted under Diagnostic Related Groups. Thus, it came to be that diagnostics was considered a "commodity". In order to be competitive, a laboratory had to provide "high complexity" tests that were drawn in by a large volume of "moderate complexity" tests.



Landscape of Cardiac Biomarkers for Improved Clinical Utilization

Curator and Author: Larry H Bernstein, MD, FCAP

Curation

This reviewer has been engaged in the development, application, and validation of cardiac biomarkers for over 30 years. There has been a nonlinear introduction of new biomarkers in that period, with an explosion of methods discovery and large studies to validate them in concert with clinical trials. The improvement of interventional and imaging methods, and the unraveling of patient characteristics associated with emerging cardiovascular disease, are both cause for alarm (technology costs) and cause for raised expectations for prevention, risk reduction, and treatment. What is strikingly missing is the kind of analysis of the population database that could alleviate the burden of physician overload. This is an urgent requirement for the EHR, and it needs to be put in place to facilitate patient care.

Introduction

This is a journey through the current status of biochemical markers in cardiac evaluation. 

In the traditional use of cardiac biomarkers, there is a timed blood sampling from the antecubital fossa. This was the case with aspartate aminotransferase (AST, then called SGOT), creatine kinase (CK) or its MB isoenzyme, and lactate dehydrogenase (or its isoenzyme LD-1). The time of sampling was based on time to appearance from the time of damage, and the release of the biomarker is a stochastic process. The earliest studies of CK-MB appearance, peak height, and disappearance were by Burton Sobel and associates, who related these to measuring the extent of damage and determined that reperfusion had an effect. A significant reason for using a combination of CK-MB and LD-1 was that a patient who arrived late might have a CK-MB on the decline (peak at 18 h) while the LD-1 was still rising (peak at 48 h).

The introduction of the troponins was accompanied by serial measurements every 4 h, usually for 4 draws (0, 4, 8, 12 h). The computational power of laboratory information systems was limited until recently, so it is somewhat surprising, given what we have seen in published work since the 1980s, that this capability is not in use today, when regression and nonparametric classification algorithms are so advanced that they would enable much improved and effective communication to the physician needing the information.
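A minimal sketch of how serial draws might feed a simple classification rule follows. The cutoff and the relative-change criterion are illustrative assumptions only, not the nonparametric estimators of the papers cited below and not guideline values:

```python
def serial_delta(draws, cutoff=26.0, min_rel_change=0.5):
    """Hedged sketch: flag a serial troponin series (ng/L at 0, 4, 8, 12 h)
    as consistent with acute myocardial injury when any value exceeds the
    (assumed) 99th-percentile cutoff AND the series shows a substantial
    rise or fall, distinguishing acute release from chronic elevation."""
    peak, trough = max(draws), min(draws)
    above_cutoff = peak >= cutoff
    rel_change = (peak - trough) / trough if trough > 0 else float("inf")
    return above_cutoff and rel_change >= min_rel_change

serial_delta([8, 40, 120, 95])   # rising pattern above cutoff -> True
serial_delta([30, 31, 30, 29])   # chronically elevated but flat -> False
```

Even this toy rule uses the shape of the series, which is the information a single draw discards.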

J Adan, LH Bernstein, J Babb. Can peak CK-MB segregate patients with acute myocardial infarction into different outcome classes? Clin Chem 1985; 31(2):996-997. ICID: 844986.

RA Rudolph, LH Bernstein, J Babb. Information induction for predicting acute myocardial infarction. Clin Chem 1988; 34(10):2031-2038. ICID: 825568.

LH Bernstein, IJ Good, GI Holtzman, ML Deaton, J Babb. Diagnosis of acute myocardial infarction from two measurements of creatine kinase isoenzyme MB with use of nonparametric probability estimation. Clin Chem 1989; 35(3):444-447. ICID: 825570.

L H Bernstein, A Qamar, C McPherson, S Zarich, R Rudolph. Diagnosis of myocardial infarction: integration of serum markers and clinical descriptors using information theory. Clin Chem 1999; 72(1):5-13. ICID: 825618

Vermunt, J.K. & Magidson, J. (2000a). "Latent Class Cluster Analysis", chapter 3 in J.A. Hagenaars and A.L. McCutcheon (eds.), Advances in Latent Class Analysis. Cambridge University Press.

Vermunt, J.K. & Magidson, J. (2000b). Latent GOLD 2.0 User’s Guide. Belmont, MA: Statistical Innovations Inc.

LH Bernstein, A Qamar, C McPherson, S Zarich. Evaluating a new graphical ordinal logit method (GOLDminer) in the diagnosis of myocardial infarction utilizing clinical features and laboratory data. Yale J Biol Med. 1999; 72(4):259-268. ICID: 825617.

L Bernstein, K Bradley, S Zarich. GOLDmineR: improving models for classifying patients with chest pain.
Yale J Biol Med. 2002; 75(4):183-198. ICID: 825624

SA Haq, M Tavakol, LH Bernstein, J Kneifati-Hayek, M Schlefer, et al. The ACC/ESC Recommendation for 99th Percentile of the Reference Normal Troponin I Overestimates the Risk of an Acute Myocardial Infarction: a novel enhancement in the diagnostic performance of troponins. "6th Scientific Forum on Quality of Care and Outcomes Research in Cardiovascular Disease and Stroke." Circulation 2005; 111(20):e313-313. ICID: 939931.

LH Bernstein, MY Zions, SA Haq, S Zarich, J Rucinski, B Seamonds, …., John F Heitner. Effect of renal function loss on NT-proBNP level variations. Clin Biochem 2009; 42(10-11):1091-1098. ICID: 937529

SA Haq, M Tavakol, S Silber, L Bernstein, J Kneifati-Hayek, et al. Enhancing the diagnostic performance of troponins in the acute care setting. J Emerg Med 2008; ICID: 937619

Gil David, LarryH Bernstein, Ronald Coifman. Generating Evidence Based Interpretation of Hematology Screens via Anomaly Characterization. OCCJ 2011; 4(1):10-16. ICID: 939928

The use and limitations of high-sensitivity cardiac troponin and natriuretic peptide concentrations in at-risk populations

Background: High-sensitivity cardiac troponin (hs-cTn) assays are now available that can detect measurable troponin in significantly more individuals in the general population than conventional assays. The clinical use of these hs-cTn assays depends on the development of proper reference values. However, even with a univariate biomarker for the risk and/or severity of ischemic heart disease, a single reference value for the cardiac biomarker does not discriminate the probabilities among two or three different cardiac disorders, or identify any combination of them, such as heart failure, renal disease > stage 2, and acute coronary syndrome. True, the physician has knowledge of the history and presentation as a guide. But do we know how adequate that information is in a patient who has an atypical presentation? The same problem arises with the use of the natriuretic peptides, although the value of these tests is improved over the previous generation. Let us parse through the components of this diagnostic problem, which is critical for reaching the best decisions under the circumstances.

Issue 1. The use of the clinical information, such as, patient age, gender, past medical history, known medical illness, CHEST PAIN, ECG, medications, are the basis of longstanding clinical practice. These may be sufficient in a patient who presents with acute coronary syndrome and a Q-wave not previously seen, or with ST-elevation, ST-depression, T-wave inversion, or rhythm abnormality. Many patients don’t present that way.

Issue 2. The use of a single "decision value" for the critical situations described leaves us with a yes-or-no answer. If you use a receiver operating characteristic curve, all of the patients used to construct the sensitivity/specificity analysis have to be decisively identified. Otherwise, one might just take the median of a very large population, the median being the best central value for a data set that is not normally distributed. The ROC method may inform about an acute event, if that is the purpose, but with a single value for a single variable it cannot identify the likelihood of an event in the next six months.
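The point about decisively known cases can be illustrated with a minimal ROC construction. The data here are hypothetical; the sketch only shows that every patient must carry a known outcome label before a single ROC point can be computed:

```python
def roc_points(values, labels):
    """Build ROC coordinates from biomarker values with known outcomes
    (1 = disease, 0 = no disease). Each point is just a pair of counts
    over the labelled data at one candidate cutoff, which is why the
    labels must be decisively known for every case."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    points = []
    for cutoff in sorted(set(values)):
        sens = sum(v >= cutoff for v in pos) / len(pos)   # sensitivity
        fpr = sum(v >= cutoff for v in neg) / len(neg)    # 1 - specificity
        points.append((cutoff, sens, fpr))
    return points

# Hypothetical data: at cutoff 3, sensitivity 1.0 with no false positives.
roc_points([1, 2, 3, 4], [0, 0, 1, 1])
```

Nothing in this construction, note, says anything about events six months out; it only ranks a single variable against a single labelled endpoint.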

Issue 3. There are several quantitative biomarkers that are considerably better than those available 15 years before this discussion. These can be used alone, but preferably in combination, for diagnostic evaluation, for predicting prognosis, and for therapeutic decision-making. What is now available was unimagined 20 years ago, both in test selection and in treatment selection.

Cardiac troponin assays were recently reviewed in Clin Chem by Fred Apple and Amy Saenger (The State of Cardiac Troponin Assays: Looking Bright and Moving in the Right Direction).

Cardiac troponin assays have evolved substantially over 20 years, owing to the efforts of manufacturers to make them more precise and sensitive. These enhancements have led to high-sensitivity cardiac troponin assays, which ideally would give measureable values above the limit of detection (LoD) for 100% of healthy individuals and demonstrate an imprecision (CV) of ≤10% at the 99th percentile.
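A hedged sketch of those two criteria follows, using the nearest-rank nonparametric 99th percentile; the reference values and replicate results are hypothetical:

```python
import statistics

def percentile_99(reference_values):
    """Nonparametric (nearest-rank) 99th percentile of a healthy
    reference cohort; values are hypothetical hs-cTnI results, ng/L."""
    ordered = sorted(reference_values)
    rank = max(1, -(-99 * len(ordered) // 100))  # ceil(0.99 * n)
    return ordered[rank - 1]

def meets_hs_criteria(replicates_at_cutoff, detectable_fraction):
    """Check the two stated criteria: imprecision (CV) <= 10% at the
    99th-percentile concentration, and measurable values above the LoD
    in (ideally) 100% of healthy individuals."""
    cv = (statistics.stdev(replicates_at_cutoff)
          / statistics.mean(replicates_at_cutoff) * 100)
    return cv <= 10 and detectable_fraction >= 1.0

percentile_99(list(range(1, 101)))               # nearest-rank percentile
meets_hs_criteria([10, 10.5, 9.5, 10.2, 9.8], 1.0)
```

The nearest-rank method is one common nonparametric choice; other estimators (interpolated percentiles, bootstrap limits) give slightly different cutoffs on small cohorts.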

As laboratorians, we wish to comment on the recently published “ACCF 2012 Expert Consensus Document on Practical Clinical Considerations in the Implementation of Troponin Elevations”. Our purpose is to address 8 analytical issues that we believe have the potential to cause confusion and that therefore deserve clarification.

Since the initial publications by the National Academy of Clinical Biochemistry (NACB) in 1999 and by the European Society of Cardiology/American College of Cardiology in 2000, when both organizations endorsed cardiac troponin I (cTnI) or cTnT as the preferred biomarker for the detection of myocardial infarction, numerous other organizations have followed suit and promoted the sole use of cardiac troponin in this clinical application. The American College of Cardiology Foundation (ACCF) 2012 Expert Consensus Document summarizes the recently published 2012 Third Universal Definition of Myocardial Infarction by the Global Task Force, thus providing some practical recommendations on the use and interpretation of cardiac troponin in clinical practice.

This commentator has already expressed the view that there is no "silver bullet", and the potential for confusion will not soon be resolved. The potential for greater accuracy in diagnosis is bolstered by currently available imaging.

Current strength of cardiac biomarker opportunities:

A recent study measured hs-cTnI in 1716 (93%) of the community-based study cohort and 499 (88%) of the healthy reference cohort. Parameters that significantly contributed to higher hs-cTnI concentrations in the healthy reference cohort included age, male sex, systolic blood pressure, and left ventricular mass. Glomerular filtration rate and body mass index were not independently associated with hs-cTnI in the healthy reference cohort. Individuals with diastolic and systolic dysfunction, hypertension, and coronary artery disease (but not impaired renal function) had significantly higher hs-cTnI values than the healthy reference cohort.

The authors concluded that the hs-cTnI assay, applied with the aid of echocardiographic imaging in a large, well-characterized community-based cohort, demonstrated hs-cTnI to be remarkably sensitive in the general population, with important sex and age differences among healthy reference individuals. Even though the results have important implications for defining hs-cTnI reference values and identifying disease, the reference value is not presented, and the question remains how many subjects in the 88% (499) healthy reference cohort had elevated systolic blood pressure or left ventricular hypertrophy (LVH) measured by imaging. Furthermore, while impaired renal function dropped out as an independent predictor of associated hs-cTnI, one would expect it to have a strong association with LVH.

Defining High-Sensitivity Cardiac Troponin Concentrations in the Community.
PM McKie, DM Heublein, CG. Scott, ML Gantzer, …and AS Jaffe.
Depart Med & Lab Med and Pathology, Mayo Clinic and Foundation, Rochester, MN; Siemens Diagnostics, Newark, DE. Clin Chem 2013.

hs-cTnI in NSTEMI

Another study looked at the prognostic performance of an hs-cTnI assay in non-STEMI. High-sensitivity assays for cardiac troponin enable more precise measurement of very low concentrations and improved diagnostic accuracy. However, the prognostic value of these measurements, particularly at low concentrations, is less well defined. (This is the sensitivity-versus-specificity dilemma raised with regard to the improved hs-cTn assays.) But the value of low measured values is a matter for prognostic evaluation, based on the hypothesis that any cTnI measured in serum has leaked from cardiomyocytes. This assay evaluation used the Abbott ARCHITECT. The data were from 4695 patients with non–ST-segment elevation acute coronary syndromes (NSTE-ACS) in the EARLY-ACS (Early Glycoprotein IIb/IIIa Inhibition in NSTE-ACS) and SEPIA-ACS1-TIMI 42 (Otamixaban for the Treatment of Patients with NSTE-ACS–Thrombolysis in Myocardial Infarction 42) trials. The primary endpoint was cardiovascular death or new myocardial infarction (MI) at 30 days. Baseline cardiac troponin was categorized at the 99th percentile reference limit (26 ng/L for hs-cTnI; 10 ng/L for cTnT) and at sex-specific 99th percentiles for hs-cTnI.

All patients at baseline had detectable hs-cTnI, compared with 94.5% with detectable cTnT. With adjustment for all other elements of the TIMI risk score, patients with hs-cTnI ≥99th percentile had a 3.7-fold higher adjusted risk of cardiovascular death or MI at 30 days relative to patients with hs-cTnI <99th percentile (9.7% vs 3.0%; odds ratio, 3.7; 95% CI, 2.3–5.7; P < 0.001). Similarly, when stratified by categories of hs-cTnI, very low concentrations demonstrated a graded association with cardiovascular death or MI (P-trend < 0.001). Thus, application of this hs-cTnI assay identified a clinically relevant higher risk of recurrent events among patients with NSTE-ACS, even at very low troponin concentrations.

Prognostic Performance of a High-Sensitivity Cardiac Troponin I Assay in Patients with Non–ST-Elevation Acute Coronary Syndrome. EA Bohula May, MP Bonaca, P Jarolim, EM Antman, …and DA Morrow. Clin Chem 2013.
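The unadjusted version of such a risk comparison reduces to a 2×2 table at the 99th-percentile cutoff. This sketch cannot reproduce the study's TIMI-adjusted odds ratio of 3.7; the counts below are invented for illustration:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and 95% CI from a 2x2 table:
    a = events among high-troponin,  b = non-events among high-troponin,
    c = events among low-troponin,   d = non-events among low-troponin.
    Uses the standard log-odds standard error; purely illustrative."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 10/100 events above the cutoff, 5/100 below.
odds_ratio_ci(10, 90, 5, 95)
```

Adjusted estimates like the published 3.7 require a multivariable model over the full risk-score covariates, which a single table cannot capture.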

Combination testing with cTnT and a natriuretic peptide

The next study looks at the value of a combination of cTnT and N-terminal pro-B-type natriuretic peptide (NT-proBNP) to predict heart failure risk. Recall that NT-proBNP has been a stand-alone biomarker for CHF. The study was done with the consideration that heart failure (HF) is projected to have the largest increases in incidence over the coming decades. Therefore, would cardiac troponin T (cTnT) measured with a high-sensitivity assay, and NT-proBNP, biomarkers strongly associated with incident HF, improve HF risk prediction in the Atherosclerosis Risk in Communities (ARIC) study?

Using sex-specific models, we added cTnT and NT-proBNP to age and race (“laboratory report” model) and to the ARIC HF model (includes age, race, systolic blood pressure, antihypertensive medication use, current/former smoking, diabetes, body mass index, prevalent coronary heart disease, and heart rate) in 9868 participants without prevalent HF; area under the receiver operating characteristic curve (AUC), integrated discrimination improvement, net reclassification improvement (NRI), and model fit were described.

Over a mean follow-up of 10.4 years, 970 participants developed incident HF. Adding cTnT and NT-proBNP to the ARIC HF model significantly improved all statistical parameters (AUCs increased by 0.040 and 0.057; the continuous NRIs were 50.7% and 54.7% in women and men, respectively). Interestingly, the simpler laboratory report model was statistically no different than the ARIC HF model.

Troponin T and N-Terminal Pro-B–Type Natriuretic Peptide: A Biomarker Approach to Predict Heart Failure Risk: The Atherosclerosis Risk in Communities Study. V Nambi, X Liu, LE Chambless, JA de Lemos, SS Virani, et al.
Clin Chem 2013.
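The continuous NRI figures quoted above (50.7% and 54.7%) follow the standard Pencina-style definition, which can be sketched as follows on toy data (not the ARIC cohort):

```python
def continuous_nri(risk_old, risk_new, events):
    """Continuous net reclassification improvement: for each person, did
    the expanded model move the predicted risk in the right direction?
    events[i] is 1 if person i developed heart failure, else 0."""
    up_e = down_e = up_n = down_n = 0
    n_events = sum(events)
    n_nonevents = len(events) - n_events
    for old, new, y in zip(risk_old, risk_new, events):
        if new > old:                      # risk revised upward
            up_e, up_n = up_e + y, up_n + (1 - y)
        elif new < old:                    # risk revised downward
            down_e, down_n = down_e + y, down_n + (1 - y)
    # Events should move up, non-events down:
    nri_events = (up_e - down_e) / n_events
    nri_nonevents = (down_n - up_n) / n_nonevents
    return nri_events + nri_nonevents

# Toy case where the new model reclassifies everyone correctly (NRI = 2.0,
# the theoretical maximum, i.e. 200%):
continuous_nri([0.1, 0.1, 0.2, 0.2], [0.2, 0.05, 0.3, 0.1], [1, 0, 1, 0])
```

Because the statistic sums two proportions, its range is -2 to +2; published values such as 0.507 are reported as percentages of that scale.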

BCM Researchers Discover Simpler, Improved Biomarkers to Predict Heart Failure As Accurate As Complex Models. Posted by: Anna Ishibashi, Sep 17, 2013

Researchers at the Baylor College of Medicine and the Michael E. DeBakey Veterans Affairs hospital discovered two improved biomarkers in the bloodstream that predict who is at higher risk of having heart failure in 10 years. The study was published in the journal Clinical Chemistry.

In the Atherosclerosis Risk in Communities (ARIC) clinical study, researchers measured the blood concentration of troponin T and N-terminal-pro-B-type natriuretic peptide (NT-proBNP) in the models, while also collecting age and race data. The important point taken from the study was that researchers did not find any difference in the accuracy of heart failure risk prediction statistically between this simpler test and the traditional, more complex one, which includes information of age, race, systolic blood pressure, antihypertensive medication use, smoking status, diabetes, body-mass index, prevalent coronary heart disease and heart rate.

Troponin T is an indicator of damaged heart muscle and can be detected at low levels, even in individuals with no symptoms, through this simpler, improved testing method. Similarly, NT-proBNP is released, along with brain natriuretic peptide (BNP), when the prohormone proBNP is cleaved; BNP is a small peptide hormone that has been shown to be effective in diagnosing congestive heart failure.

The critical issue that we must now address is what lifestyle and drug therapies can prevent the development of heart failure in individuals who are at high risk, according to Dr. Christie Ballantyne, professor of medicine and section chief of cardiology and cardiovascular research at BCM and the Houston Methodist Center for Cardiovascular Disease Prevention.

Although chest pain is widely considered a key symptom in the diagnosis of myocardial infarction (MI), not all patients with MI present with chest pain. This study was done to determine the frequency with which patients with MI present without chest pain and to examine their subsequent management and outcome. A total of 434,877 patients with confirmed MI were enrolled from June 1994 to March 1998 in the National Registry of Myocardial Infarction, which includes 1674 hospitals in the United States. Outcome measures were the prevalence of presentation without chest pain, and the clinical characteristics, treatment, and mortality among MI patients without chest pain vs those with chest pain.

Of all patients diagnosed as having MI, 142,445 (33%) did not have chest pain on presentation to the hospital. This group of MI patients was, on average, 7 years older than those with chest pain (74.2 vs 66.9 years), with a higher proportion of women (49.0% vs 38.0%) and patients with diabetes mellitus (32.6% vs 25.4%) or prior heart failure (26.4% vs 12.3%). Also, MI patients without chest pain had a longer delay before hospital presentation (mean, 7.9 vs 5.3 hours), were less likely to be diagnosed as having confirmed MI at the time of admission (22.2% vs 50.3%), and were less likely to receive thrombolysis or primary angioplasty (25.3% vs 74.0%), aspirin (60.4% vs 84.5%), β-blockers (28.0% vs 48.0%), or heparin (53.4% vs 83.2%). Myocardial infarction patients without chest pain had a 23.3% in-hospital mortality rate compared with 9.3% among patients with chest pain (adjusted odds ratio for mortality, 2.21 [95% confidence interval, 2.17-2.26]).

We tested the hypotheses that MI patients without chest pain compared with those with chest pain would present later for medical attention, would be less likely to be diagnosed as having acute MI on initial evaluation, and would receive fewer appropriate medical treatments within the first 24 hours. We also evaluated the association between the presence of atypical presenting symptoms and hospital mortality related to MI.

Our results suggest that patients without chest pain on presentation represent a large segment of the MI population and are at increased risk for delays in seeking medical attention, less aggressive treatments, and in-hospital mortality.

Prevalence, Clinical Characteristics, and Mortality Among Patients With Myocardial Infarction Presenting Without Chest Pain. JG Canto, MG Shlipak, WJ Rogers, JA Malmgren, PD Frederick, et al. JAMA 2000; 283(24):3223-3229. http://dx.doi.org/10.1001/jama.283.24.3223

cTnT degraded forms in circulation

This recent study questions whether degraded cTnT forms circulate in the patient’s blood. Separation of cTnT forms by gel filtration chromatography (GFC) was performed in sera from 13 AMI patients to examine cTnT degradation. The GFC eluates were subjected to Western blot analysis with the original antibodies from the Roche immunoassay used to mimic the clinical cTnT assay. GFC analysis of AMI patients’ sera revealed 2 cTnT peaks with retention volumes of 5 and 21 mL. Western blot analysis identified these peaks as cTnT fragments of 29 and 14–18 kDa, respectively. Furthermore, the performance of direct Western blots on standardized serum samples demonstrated a time-dependent degradation pattern of cTnT, with fragments ranging between 14 and 40 kDa. Intact cTnT (40 kDa) was present in only 3 patients within the first 8 h after hospital admission.

Time-Dependent Degradation Pattern of Cardiac Troponin T Following Myocardial Infarction. EPM Cardinaels, AMA Mingels T van Rooij, PO Collinson, FW Prinzen and MP van Dieijen-Visser. Clin Chem 2013.

Older patients with higher cTnI

One of the problems in interpreting cTnI is the relationship of age to the 99th percentile in the elderly. cTnI was measured using a high-sensitivity assay (Abbott Diagnostics) in 814 community-dwelling individuals at both 70 and 75 years of age. The cTnI 99th percentiles were determined separately, using nonparametric methods, in the total sample, in men and women, and in individuals with and without CVD.

The cTnI 99th percentile at baseline was 55.2 ng/L for the total cohort. Higher 99th percentiles were noted in men (69.3 ng/L) and individuals with CVD (74.5 ng/L). The cTnI 99th percentile in individuals free from CVD at baseline (n = 498) increased by 51% from 38.4 to 58.0 ng/L during the 5-year observation period. Relative increases ranging from 44% to 83% were noted across all subgroups. Male sex [odds ratio, 5.3 (95% CI, 1.5–18.3)], log-transformed N-terminal pro-B-type natriuretic peptide [odds ratio, 1.9 (95% CI, 1.2–3.0)], and left-ventricular mass index [odds ratio, 1.3 (95% CI, 1.1–1.5)] predicted increases in cTnI concentrations from below the 99th percentile (i.e., 38.4 ng/L) at baseline to concentrations above the 99th percentile at the age of 75 years.

cTnI concentration and its 99th percentile threshold depend strongly on the characteristics of the population being assessed. Among elderly community dwellers, higher concentrations were seen in men and individuals with prevalent CVD. Aging contributes to increasing concentrations, given the pronounced changes seen with increasing age across all subgroups. These findings should be taken into consideration when applying cTnI decision thresholds in clinical settings.

KM Eggers, L Lind, P Venge, B Lindahl. Factors Influencing the 99th Percentile of Cardiac Troponin I Evaluated in Community-Dwelling Individuals at 70 and 75 Years of Age. Clin Chem 2013.

Background: Atrial natriuretic peptide (ANP) has antihypertrophic and antifibrotic properties that are relevant to AF substrates. The −G664C and rs5065 ANP single nucleotide polymorphisms (SNP) have been described in association with clinical phenotypes, including hypertension and left ventricular hypertrophy. A recent study assessed the association of early AF and rs5065 SNPs in low-risk subjects. In a Caucasian population with moderate-to-high cardiovascular risk profile and structural AF, we conducted a case-control study to assess whether the ANP −G664C and rs5065 SNP associate with nonfamilial structural AF.
Methods: 168 patients with nonfamilial structural AF and 168 age- and sex-matched controls were recruited. The rs5065 and −G664C ANP SNPs were genotyped.
Results: The study population had a moderate-to-high cardiovascular risk profile with 86% having hypertension, 23% diabetes, 26% previous myocardial infarction, and 23% left ventricular systolic dysfunction. Patients with AF had greater left atrial diameter (44 ± 7 vs. 39 ± 5 mm; P < 0.001) and higher plasma NT-proANP levels (6240 ± 5317 vs. 3649 ± 2946 pmol/mL; P < 0.01). Odds ratios (ORs) for rs5065 and −G664C gene variants were 1.1 (95% confidence interval [CI], 0.7–1.8; P = 0.71) and 1.2 (95% CI, 0.3–3.2; P = 0.79), respectively, indicating no association with AF. There were no differences in baseline clinical characteristics among carriers and noncarriers of the −664C and rs5065 minor allele variants.
Conclusions: We report lack of association between the rs5065 and −G664C ANP gene SNPs and AF in a Caucasian population of patients with structural AF. Further studies will clarify whether these or other ANP gene variants affect the risk of different subphenotypes of AF driven by distinct pathophysiological mechanisms.

P Francia, A Ricotta, A Frattari, R Stanzione, A Modestino, et al.
Atrial Natriuretic Peptide Single Nucleotide Polymorphisms in Patients with Nonfamilial Structural Atrial Fibrillation.
Clinical Medicine Insights: Cardiology 2013:7 153–159   http://dx.doi.org/10.4137/CMC.S12239  http://www.la-press.com/atrial-natriuretic-peptide-single-nucleotide-polymorphisms-in-patients-article-a3882

Cystatin C and eGFR predict AMI or CVD mortality

BACKGROUND: The estimated glomerular filtration rate (eGFR) independently predicts cardiovascular death or myocardial infarction (MI) and can be estimated by creatinine and cystatin C concentrations. We evaluated 2 different cystatin C assays, alone or combined with creatinine, in patients with acute coronary syndrome.
METHODS: We analyzed plasma cystatin C, measured with assays from Gentian and Roche, and serum creatinine in 16 279 patients from the PLATelet Inhibition and Patient Outcomes (PLATO) trial. We evaluated Pearson correlation and agreement (Bland–Altman) between methods, as well as prognostic value in relation to cardiovascular death or MI during 1 year of follow up by multivariable logistic regression analysis including clinical variables, biomarkers, c-statistics, and relative integrated discrimination improvement (IDI).
RESULTS: Median cystatin C concentrations (interquartile intervals) were 0.83 (0.68–1.01) mg/L (Gentian) and 0.94 (0.80–1.14) mg/L (Roche). Overall correlation was 0.86 (95% CI 0.85–0.86). The level of agreement was within 0.39 mg/L (2 SD) (n = 16 279).
The areas under the curve (AUCs) in the multivariable risk prediction model with cystatin C (Gentian, Roche) or Chronic Kidney Disease Epidemiology Collaboration eGFR (CKD-EPI) added were 0.6914, 0.6913, and 0.6932. Corresponding relative IDI values were 2.96%, 3.86%, and 4.68% (n = 13 050). Addition of eGFR by the combined creatinine–cystatin C equation yielded AUCs of 0.6923 (Gentian) and 0.6924 (Roche) with relative IDI values of 3.54% and 3.24%.
CONCLUSIONS: Despite differences in cystatin C concentrations, overall correlation between the Gentian and Roche assays was good, while agreement was moderate. The combined creatinine–cystatin C equation did not outperform risk prediction by CKD-EPI.
A Åkerblom, L Wallentin, A Larsson, A Siegbahn, et al.
Cystatin C– and Creatinine-Based Estimates of Renal Function and Their Value for Risk Prediction in Patients with Acute Coronary Syndrome: Results from the PLATelet Inhibition and Patient Outcomes (PLATO) Study.
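The Bland–Altman agreement reported between the two assays is the mean bias ± 2 SD of the paired differences. A minimal sketch with hypothetical paired results, not the PLATO data:

```python
from statistics import mean, stdev

def bland_altman_limits(x, y):
    """Mean bias and limits of agreement (bias +/- 2 SD of the paired
    differences), the agreement measure used to compare the assays."""
    diffs = [b - a for a, b in zip(x, y)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 2 * sd, bias + 2 * sd

# Hypothetical paired cystatin C results in mg/L (Gentian vs. Roche):
gentian = [0.70, 0.83, 0.95, 1.01, 1.20, 0.68]
roche = [0.81, 0.94, 1.05, 1.14, 1.33, 0.80]
bias, lower, upper = bland_altman_limits(gentian, roche)
```

A consistent positive bias with narrow limits, as sketched here, matches the study's picture: systematically higher Roche values but good overall correlation.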
 

T2Dm has many subphenotypes in the prediabetic phase

For decades, glucose, hemoglobin A1c, insulin, and C peptide have been the laboratory tests of choice to detect and monitor diabetes. However, these tests do not identify individuals at risk for developing type 2 diabetes (T2Dm) (so-called prediabetic individuals and the subphenotypes therein), which would be a prerequisite for individualized prevention. Nor are these parameters suitable to identify T2Dm subphenotypes, a prerequisite for individualized therapeutic interventions. The oral glucose tolerance test (oGTT) is still the only means for the early and reliable identification of people in the prediabetic phase with impaired glucose tolerance (IGT). This procedure, however, is very time-consuming and expensive and is unsuitable as a screening method in a doctor's office. Hence, there is an urgent need for innovative laboratory tests to simplify the early detection of alterations in glucose metabolism.
The search for diabetic risk genes was the first and most intensively pursued approach for individualized diabetes prevention and treatment. Over the last 20 years cohorts of tens of thousands of people have been analyzed, and more than 70 susceptibility loci associated with T2Dm and related metabolic traits have been identified. But despite extensive replication, no susceptibility loci or combinations of loci have proven suitable for diagnostic purposes.
Why did the genomic studies fail? One reason might be that T2Dm is a polygenic disease, but there is another, more important reason. The large diabetes cohorts investigated in these studies were very heterogeneous, consisting of poorly characterized individuals who were usually selected because they had an increase in blood glucose. Subsequently it has become clear that many different subphenotypes already exist in the prediabetic phase.
Metabolomics represents a new potential approach to move the diagnosis of diabetes beyond the application of the classical diabetic laboratory tests.
Rainer Lehmann. Diabetes Subphenotypes and Metabolomics: The Key to Discovering Laboratory Markers for Personalized Medicine?
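For context, the glycemic categories referred to above (IGT and impaired fasting glucose) are conventionally assigned from standard ADA cut points. The thresholds in this sketch come from that general guidance, not from the article itself:

```python
def glucose_category(fasting_mgdl, ogtt_2h_mgdl):
    """Classify glycemic status from fasting plasma glucose and the
    2-h oGTT value using standard ADA cut points (mg/dL). Thresholds
    are from general ADA guidance, an assumption not stated in the
    article."""
    if fasting_mgdl >= 126 or ogtt_2h_mgdl >= 200:
        return "diabetes"
    if 140 <= ogtt_2h_mgdl < 200:
        return "IGT"    # impaired glucose tolerance (prediabetic)
    if 100 <= fasting_mgdl < 126:
        return "IFG"    # impaired fasting glucose (prediabetic)
    return "normal"
```

Note that the prediabetic categories overlap poorly with any single test, which is exactly the article's point: only the oGTT reliably detects IGT.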
 

Ca2+/calmodulin-dependent protein kinase II (CaMKII) has recently emerged as a ROS activated proarrhythmic signal

Background—Atrial fibrillation (AF) is a growing public health problem without adequate therapies. Angiotensin II (Ang II) and reactive oxygen species (ROS) are validated risk factors for AF in patients, but the molecular pathway(s) connecting ROS and AF is unknown. The Ca2+/calmodulin-dependent protein kinase II (CaMKII) has recently emerged as a ROS-activated proarrhythmic signal, so we hypothesized that oxidized CaMKIIδ (ox-CaMKII) could contribute to AF.
Methods and Results—We found ox-CaMKII was increased in atria from AF patients compared to patients in sinus rhythm and from mice infused with Ang II compared with saline. Ang II-treated mice had increased susceptibility to AF compared to saline-treated WT mice, establishing Ang II as a risk factor for AF in mice. Knock-in mice lacking critical oxidation sites in CaMKIIδ (MM-VV) and mice with myocardial-restricted transgenic overexpression of methionine sulfoxide reductase A (MsrA TG), an enzyme that reduces ox-CaMKII, were resistant to AF induction after Ang II infusion.
 
[Figures: RyR-mediated Ca2+ release from the sarcoplasmic reticulum; autonomic (ANS) innervation of the heart; regulation of cardiac Ca2+ cycling by the ANS (mongillo_fig1); cardiac contraction (jce561317.fig3)]

Serum levels of antibodies to MAA differentiated stable CAD from MI; results for IgM antibodies to MAA were consistent with those for IgG antibodies to MAA.
 
 
 
Conclusions—Our studies suggest that CaMKII is a molecular signal that couples increased ROS with AF and that therapeutic strategies to decrease ox-CaMKII may prevent or reduce AF.
Key words: atrial fibrillation, calcium/calmodulin-dependent protein kinase II, angiotensin II, reactive oxygen species, arrhythmia (mechanisms)
A Purohit, AG Rokita, X Guan, B Chen, et al.  Oxidized CaMKII Triggers Atrial Fibrillation.  Circulation. Sep 12, 2013;
 

Microparticles (MP)s give clues about vascular endothelial injury

BACKGROUND: Endothelial dysfunction is an early event in the development and progression of a wide range of cardiovascular diseases. Various human studies have identified that measures of endothelial dysfunction may offer prognostic information with respect to vascular events. Microparticles (MPs) are a heterogeneous population of small membrane fragments shed from various cell types. The endothelium is one of the primary targets of circulating MPs, and MPs isolated from blood have been considered biomarkers of vascular injury and inflammation.
CONTENT: This review summarizes current knowledge of the potential functional role of circulating MPs in promoting endothelial dysfunction. Cells exposed to different stimuli such as shear stress, physiological agonists, proapoptotic stimulation, or damage release MPs, which contribute to endothelial dysfunction and the development of cardiovascular diseases. Numerous studies indicate that MPs may trigger endothelial dysfunction by disrupting nitric oxide release from vascular endothelial cells and subsequently modifying vascular tone. Circulating MPs affect both proinflammatory and proatherosclerotic processes in endothelial cells. In addition, MPs can promote coagulation and inflammation or alter angiogenesis and apoptosis in endothelial cells.
SUMMARY: MPs play an important role in promoting endothelial dysfunction and may prove to be true biomarkers of disease state and progression.
Fina Lovren and Subodh Verma.  Evolving Role of Microparticles in the Pathophysiology of Endothelial Dysfunction.
 
Outcomes of STEMI and NSTEMI differentially predicted by NPs after MI
Patients with increased blood concentrations of natriuretic peptides (NPs) have poor cardiovascular outcomes after myocardial infarction (MI). Data from 41 683 patients with non–ST-segment elevation MI (NSTEMI) and 27 860 patients with ST-segment elevation MI (STEMI) at 309 US hospitals were collected as part of the ACTION Registry®–GWTG™ (Acute Coronary Treatment and Intervention Outcomes Network Registry–Get with the Guidelines) (AR-G) between July 2008 and September 2009.

B-type natriuretic peptide (BNP) or N-terminal pro-BNP (NT-proBNP) was measured in 19 528 (47%) of NSTEMI and 9220 (33%) of STEMI patients. Patients in whom NPs were measured were older and had more comorbidities, including prior heart failure or MI. There was a stepwise increase in the risk of in-hospital mortality with increasing BNP quartiles for both NSTEMI (1.3% vs 3.2% vs 5.8% vs 11.1%) and STEMI (1.9% vs 3.9% vs 8.2% vs 17.9%). The addition of BNP to the AR-G clinical model improved the C statistic from 0.796 to 0.807 (P < 0.001) for NSTEMI and from 0.848 to 0.855 (P = 0.003) for STEMI. The relationship between NPs and mortality was similar in patients without a history of heart failure or cardiogenic shock on presentation and in patients with preserved left ventricular function.

NPs are measured in almost 50% of patients in the US admitted with MI and appear to be used in patients with more comorbidities. Higher NP concentrations were strongly and independently associated with in-hospital mortality in the almost 30 000 patients in whom NPs were assessed, including patients without heart failure.

BM Scirica, MB Kadakia, JA de Lemos, MT Roe, DA Morrow, et al. Association between Natriuretic Peptides and Mortality among Patients Admitted with Myocardial Infarction: A Report from the ACTION Registry®–GWTG™.
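The stepwise mortality comparison above rests on assigning each patient's BNP value to a quartile of the cohort distribution. A sketch of that binning with hypothetical concentrations, not registry data:

```python
import numpy as np

def quartile_bins(values):
    """Assign each value to a quartile (1-4) of the sample
    distribution, mirroring the registry's quartile-based mortality
    comparison. Input values here are hypothetical."""
    values = np.asarray(values, float)
    q1, q2, q3 = np.percentile(values, [25, 50, 75])
    # side="right" sends values equal to a cut point to the upper bin
    return np.searchsorted([q1, q2, q3], values, side="right") + 1

# Hypothetical BNP concentrations (pg/mL):
bnp = [40, 90, 150, 300, 600, 1200, 80, 2500]
bins = quartile_bins(bnp)
```

In-hospital mortality can then be tabulated per bin, as in the reported 1.3%/3.2%/5.8%/11.1% gradient for NSTEMI.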

Predictive value of processed forms of BNP in circulation

B-type natriuretic peptide (BNP) is secreted in response to pathologic stress from the heart. Its use as a biomarker of heart failure is well known; however, its diagnostic potential in ischemic heart disease is less explored. Recently, it has been reported that processed forms of BNP exist in the circulation. We characterized processed forms of BNP by a newly developed mass spectrometry–based detection method combined with immunocapture using commercial anti-BNP antibodies.

Measurements of processed forms of BNP by this assay were found to be strongly associated with presence of restenosis. Reduced concentrations of the amino-terminal processed peptide BNP(5–32) relative to BNP(3–32) [as the index parameter BNP(5–32)/BNP(3–32) ratio] were seen in patients with restenosis [median (interquartile range) 1.19 (1.11–1.34), n = 22] vs without restenosis [1.43 (1.22–1.61), n = 83; P < 0.001] in a cross-sectional study of 105 patients undergoing follow-up coronary angiography. A sensitivity of 100% to rule out the presence of restenosis was attained at a ratio of 1.52. Processed forms of BNP may serve as viable potential biomarkers to rule out restenosis.

H Fujimoto, T Suzuki, K Aizawa, D Sawaki, J Ishida, et al. Processed B-Type Natriuretic Peptide Is a Biomarker of Postinterventional Restenosis in Ischemic Heart Disease. Clin Chem 2013.
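The rule-out logic described above can be sketched directly: in the cited cohort, a BNP(5–32)/BNP(3–32) ratio at or above 1.52 ruled out restenosis with 100% sensitivity. This is illustrative only, not a validated clinical decision rule:

```python
def restenosis_ruled_out(bnp_5_32, bnp_3_32, cutoff=1.52):
    """Rule-out sketch for the BNP(5-32)/BNP(3-32) index parameter.
    Ratios at or above the reported 1.52 cutoff ruled out restenosis
    in the cited cohort; illustrative, not a clinical tool."""
    ratio = bnp_5_32 / bnp_3_32
    return ratio >= cutoff, ratio

# Median ratio was 1.19 with restenosis vs. 1.43 without:
ruled_out, ratio = restenosis_ruled_out(1.60, 1.00)
```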

Circulating proteins from patients requiring revascularization

More than a million diagnostic cardiac catheterizations are performed annually in the US for evaluation of coronary artery anatomy and the presence of atherosclerosis. Nearly half of these patients have no significant coronary lesions or do not require mechanical or surgical revascularization. Consequently, the ability to rule out clinically significant coronary artery disease (CAD) using low cost, low risk tests of serum biomarkers in even a small percentage of patients with normal coronary arteries could be highly beneficial. METHODS: Serum from 359 symptomatic subjects referred for catheterization was interrogated for proteins involved in atherogenesis, atherosclerosis, and plaque vulnerability. Coronary angiography classified 150 patients without flow-limiting CAD who did not require percutaneous intervention (PCI) while 209 required coronary revascularization (stents, angioplasty, or coronary artery bypass graft surgery). Continuous variables were compared across the two patient groups for each analyte including calculation of false discovery rate (FDR ≤1%) and Q value (P value for statistical significance adjusted to ≤0.01).

Significant differences were detected in circulating proteins from patients requiring revascularization including increased apolipoprotein B100 (APO-B100), C-reactive protein (CRP), fibrinogen, vascular cell adhesion molecule 1 (VCAM-1), myeloperoxidase (MPO), resistin, osteopontin, interleukin (IL)-1beta, IL-6, IL-10 and N-terminal fragment protein precursor brain natriuretic peptide (NT-pBNP) and decreased apolipoprotein A1 (APO-A1). Biomarker classification signatures comprising up to 5 analytes were identified using a tunable scoring function trained against 239 samples and validated with 120 additional samples. A total of 14 overlapping signatures classified patients without significant coronary disease (38% to 59% specificity) while maintaining 95% sensitivity for patients requiring revascularization. Osteopontin (14 times) and resistin (10 times) were most frequently represented among these diagnostic signatures. The most efficacious protein signature in validation studies comprised osteopontin (OPN), resistin, matrix metalloproteinase 7 (MMP7) and interferon gamma (IFNgamma) as a four-marker panel while the addition of either CRP or adiponectin (ACRP-30) yielded comparable results in five protein signatures.

Proteins in the serum of CAD patients predominantly reflected:

  1. a positive acute-phase, inflammatory response and

  2. alterations in lipid metabolism, transport, peroxidation and accumulation.

There were surprisingly few indicators of growth factor activation or extracellular matrix remodeling in the serum of CAD patients except for elevated OPN. These data suggest that many symptomatic patients without significant CAD could be identified by a targeted multiplex serum protein test without cardiac catheterization, thereby eliminating exposure to ionizing radiation and decreasing the economic burden of angiographic testing for these patients.

WA Laframboise, R Dhir, LA Kelly, P Petrosko, JM Krill-Burger, et al. Serum protein profiles predict coronary artery disease in symptomatic patients referred for coronary angiography.
BMC Medicine (impact factor: 6.03). 12/2012; 10(1):157. http://dx.doi.org/10.1186/1741-7015-10-157
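The analyte screen above controlled the false discovery rate at ≤1%. A generic sketch of Benjamini–Hochberg FDR control, which is the standard procedure for this kind of multi-analyte comparison (the study's exact statistics pipeline and tunable scoring function are not reproduced here):

```python
import numpy as np

def benjamini_hochberg(pvals, fdr=0.01):
    """Benjamini-Hochberg step-up procedure: boolean mask of p-values
    declared significant at the given false-discovery rate."""
    p = np.asarray(pvals, float)
    n = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Compare each ranked p-value to its BH line fdr * rank / n
    below = ranked <= fdr * np.arange(1, n + 1) / n
    mask = np.zeros(n, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()   # largest rank passing the line
        mask[order[:k + 1]] = True
    return mask

sig = benjamini_hochberg([0.0001, 0.0004, 0.002, 0.03, 0.2], fdr=0.01)
```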

miRNAs in CAD

MicroRNAs are small RNAs that control gene expression. Besides their cell-intrinsic function, recent studies reported that microRNAs are released by cultured cells and can be detected in the blood. We sought to address the regulation of circulating microRNAs in patients with stable coronary artery disease. To determine the regulation of microRNAs, we performed a microRNA profile using RNA isolated from n=8 healthy volunteers and n=8 patients with stable coronary artery disease who received state-of-the-art pharmacological treatment. Interestingly, most of the highly expressed microRNAs that were lower in the blood of patients with coronary artery disease are known to be expressed in endothelial cells (eg, miR-126 and members of the miR-17~92 cluster). To prospectively confirm these data, we detected selected microRNAs in plasma of 36 patients with coronary artery disease and 17 healthy volunteers by quantitative PCR. Consistent with the data obtained by the profile, circulating levels of miR-126, miR-17, miR-92a, and the inflammation-associated miR-155 were significantly reduced in patients with coronary artery disease compared with healthy controls. Likewise, the smooth muscle-enriched miR-145 was significantly reduced. In contrast, cardiac muscle-enriched microRNAs (miR-133a, miR-208a) tended to be higher in patients with coronary artery disease. These results were validated in a second cohort of 31 patients with documented coronary artery disease and 14 controls. Circulating levels of vascular and inflammation-associated microRNAs are significantly downregulated in patients with coronary artery disease.

S Fichtlscherer, S De Rosa, H Fox, T Schwietz, A Fischer, et al. Circulating microRNAs in patients with coronary artery disease. Circulation Research 09/2010; 107(5):677-84.
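Circulating miRNA levels measured by quantitative PCR, as in this study, are conventionally compared with the 2^-ΔΔCt method. A sketch with hypothetical Ct values; the choice of normalizer is an assumption, not stated above:

```python
def fold_change_ddct(ct_target_case, ct_ref_case,
                     ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method, the usual way
    qPCR-measured circulating miRNAs (e.g., miR-126) are compared
    between patients and controls. Ct values here are hypothetical."""
    dct_case = ct_target_case - ct_ref_case   # normalize to reference
    dct_ctrl = ct_target_ctrl - ct_ref_ctrl
    ddct = dct_case - dct_ctrl
    return 2 ** (-ddct)

# Target appears 2 cycles later in patients => ~4-fold lower level:
fc = fold_change_ddct(ct_target_case=28.0, ct_ref_case=20.0,
                      ct_target_ctrl=26.0, ct_ref_ctrl=20.0)
```

A fold change below 1, as here, corresponds to the reduced endothelial miRNA levels reported in CAD patients.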

Imaging modalities compared

This review compares the noninvasive anatomical imaging modalities of coronary artery calcium scoring and coronary CT angiography to the functional assessment modality of MPI in the diagnosis and prognostication of significant CAD in symptomatic patients. A large number of studies investigating this subject are analyzed with a critical look at the evidence, underlining the strengths and limitations. Although the overall findings of the presented studies favor the use of CT-based anatomical imaging modalities over MPI in the diagnosis and prognostication of CAD, the lack of a high number of large-scale, multicenter randomized controlled studies limits the generalizability of this early evidence. Further studies comparing the short- and long-term clinical outcomes and cost-effectiveness of these tests are required to determine their optimal role in the management of symptomatic patients with suspected CAD.

Y Hacioglu, M Gupta, Matthew J Budoff. Noninvasive anatomical coronary artery imaging versus myocardial perfusion imaging: which confers superior diagnostic and prognostic information?
Journal of computer assisted tomography 34(5):637-44.

Three Dimensional In-Room Imaging (3DCA) in PCI

Introduction: Coronary angiography is a two-dimensional (2D) imaging modality and thus is limited in its ability to represent complex three-dimensional (3D) vascular anatomy. Lesion length, bifurcation angles/lesions, and tortuosity are often inadequately assessed using 2D angiography due to vessel overlap and foreshortening. 3D Rotational Angiography (3DRA) with subsequent reconstruction generates models of the coronary vasculature from which lesion length measurements and Optimal View Maps (OVM) defining the amount of vessel foreshortening for each gantry angle can be derived. This study sought to determine if 3DRA-assisted percutaneous coronary interventions resulted in improved procedural results by minimizing foreshortening and optimizing stent selection.
 Rotational angiographic acquisitions were performed and a 3D model was generated from two images greater than 30° apart. An optimal view map identifying the least amount of vessel foreshortening and overlap was derived from the 3D model.
The clinical validation of in-room image-processing tools such as 3DCA and optimal view maps is important since FDA approval of these tools does not require the presentation of any data on clinical experience and impact on clinical outcomes. While the technology of 3DRA and optimal view calculations has been well validated by the work of Chen and colleagues, this study is important in demonstrating how clinical care may be impacted [4,5,7]. This study was biased toward minimizing the impact of these tools on clinical decision-making since the study site, cardiologists, and staff have extensive experience in rotational angiography, 3-D modeling and reconstruction, and the impact of foreshortening on the assessment of lesion length and choice of stent size.
3DRA assistance significantly reduced target vessel foreshortening when compared to operator’s choice of working view for PCI (2.99% ± 2.96 vs. 9.48% ± 7.56, p=0.0001). The operators concluded that 3DRA recommended better optimal view selection for PCI in 14 of 26 (54%) total cases. In 9 (35%) of 26 cases 3DRA assistance facilitated stent positioning. 3DRA based imaging prompted stent length changes in 4/26 patients (15%).
MH. Eng, PA Hudson, AJ Klein, SYJ Chen, … , JA Garcia. Impact of Three Dimensional In-Room Imaging (3DCA) in the Facilitation of Percutaneous Coronary Interventions. J Cardio Vasc Med 2013; 1: 1-5.
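Percent foreshortening of a straight vessel segment follows from simple projection geometry: the apparent length shrinks with the angle between the vessel axis and the imaging plane. A geometric sketch of the quantity the optimal view map minimizes, not the vendor's algorithm:

```python
import math

def foreshortening_pct(vessel_dir, view_dir):
    """Percent foreshortening of a straight vessel segment for a given
    X-ray beam direction: projected length = true length * sin(angle
    between vessel axis and beam). A geometric sketch only."""
    dot = sum(v * w for v, w in zip(vessel_dir, view_dir))
    nv = math.sqrt(sum(v * v for v in vessel_dir))
    nw = math.sqrt(sum(w * w for w in view_dir))
    cos_to_beam = abs(dot) / (nv * nw)          # vessel vs. beam axis
    projected_fraction = math.sqrt(1 - cos_to_beam ** 2)
    return 100 * (1 - projected_fraction)

# Vessel perpendicular to the beam: no foreshortening.
f0 = foreshortening_pct((1, 0, 0), (0, 0, 1))
```

The single-digit foreshortening achieved with 3DRA assistance (2.99% vs. 9.48% for operator-chosen views) corresponds to near-perpendicular beam angles in this model.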

 

Related References from PharmaceuticalIntelligence.com:

Genomics & Genetics of Cardiovascular Disease Diagnoses: A Literature Survey of AHA’s Circulation Cardiovascular Genetics, 3/2010 – 3/2013
Curators: Aviva Lev-Ari, PhD, RN and Larry H. Bernstein, MD, FCAP
https://pharmaceuticalintelligence.com/2013/03/07/genomics-genet…cs-32010-32013/
http://wp.me/p2kEDv-2Jp

Prognostic Marker Importance of Troponin I in Acute Decompensated Heart Failure (ADHF)
Larry H Bernstein and  Aviva Lev-Ari
https://pharmaceuticalintelligence.com/2013/06/30/troponin-i-in-…-heart-failure
http://wp.me/p2kEDv-41S

A Changing expectation from cardiac biomarkers.
Larry H Bernstein
https://pharmaceuticalintelligence.com/2012/12/25/assessing-card…ith-biomarkers/
http://wp.me/p2kEDv-1DN

Dealing with the Use of the High Sensitivity Troponin (hs cTn) Assays
Larry H Bernstein and Aviva Lev-Ari
https://pharmaceuticalintelligence.com/2013/05/18/dealing-with-t…-hs-ctn-assays/
http://wp.me/p2kEDv-3rN

For Disruption of Calcium Homeostasis in Cardiomyocyte Cells, see

Part VI: Calcium Cycling (ATPase Pump) in Cardiac Gene Therapy: Inhalable Gene Therapy for Pulmonary Arterial Hypertension and Percutaneous Intra-coronary Artery Infusion for Heart Failure: Contributions by Roger J. Hajjar, MD

Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2013/08/01/calcium-molecule-in-cardiac-gene-therapy-inhalable-gene-therapy-for-pulmonary-arterial-hypertension-and-percutaneous-intra-coronary-artery-infusion-for-heart-failure-contributions-by-roger-j-hajjar/

Part VII: Cardiac Contractility & Myocardium Performance: Ventricular Arrhythmias and Non-ischemic Heart Failure – Therapeutic Implications for Cardiomyocyte Ryanopathy (Calcium Release-related Contractile Dysfunction) and Catecholamine Responses

Justin Pearlman, MD, PhD, FACC, Larry H Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2013/08/28/cardiac-contractility-myocardium-performance-ventricular-arrhythmias-and-non-ischemic-heart-failure-therapeutic-implications-for-cardiomyocyte-ryanopathy-calcium-release-related-contractile/

Part VIII: Disruption of Calcium Homeostasis: Cardiomyocytes and Vascular Smooth Muscle Cells: The Cardiac and Cardiovascular Calcium Signaling Mechanism

Justin Pearlman, MD, PhD, FACC, Larry H Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2013/09/12/disruption-of-calcium-homeostasis-cardiomyocytes-and-vascular-smooth-muscle-cells-the-cardiac-and-cardiovascular-calcium-signaling-mechanism/

Read Full Post »


Dealing with the Use of the High Sensitivity Troponin (hs cTn) Assays: Preparing the United States for High-Sensitivity Cardiac Troponin Assays

Author and Curator: Larry H Bernstein, MD, FCAP
Author and Curator: Aviva Lev-Ari, PhD, RN

In this article we shall address the two following papers:
  1. Acute Chest Pain/ER Admission: Three Emerging Alternatives to Angiography and PCI – Corus CAD, hs cTn, CCTA
  2. Preparing the United States for High-Sensitivity Cardiac Troponin Assays, by Frederick K. Korley, MD, and Allan S. Jaffe, MD. J Am Coll Cardiol 2013;61(17):1753-1758.

In a previous posting I commented on the problem of hs cTn use and the on-site ED performance of cardiac treadmill testing (done in Europe) prior to a decision on CT scanning (not done in the US).

Acute Chest Pain/ER Admission: Three Emerging Alternatives to Angiography and PCI – Corus CAD, hs cTn, CCTA

We examine the emergence of alternatives to angiography and PCI, the most common strategy for ER admissions with a listed cause of acute chest pain. The goal is to use methods that improve the process of identifying, for an interventional procedure, only those patients for whom PCI is essential.

Alternative #1: Corus®  CAD

Alternative #2: High-Sensitivity Cardiac Troponins in Acute Cardiac Care

Alternative #3: Coronary CT Angiography for Acute Chest Pain
After presenting the three alternatives, the editorial by R.F. Redberg, Division of Cardiology, UCSF, will be analyzed.
  • Alternative #1:  First-Line Test to Help Clinicians Exclude Obstructive CAD as a Cause of the Patient’s Symptoms

Corus®  CAD, a blood-based  gene expression test, demonstrated high accuracy with both a high negative predictive value (96 percent) and high sensitivity (89 percent) for assessing  obstructive coronary artery disease  (CAD) in a population of patients referred for stress testing with myocardial perfusion imaging (MPI).

COMPASS enrolled stable patients with symptoms suggestive of CAD who had been referred for MPI at 19 U.S. sites. A blood sample was obtained in all 431 patients prior to MPI, and Corus CAD gene expression testing was performed with study investigators blinded to Corus CAD test results. Following MPI, patients underwent either invasive coronary angiography or coronary CT angiography, gold-standard anatomical tests for the diagnosis of coronary artery disease.

A Blood Based Gene Expression Test for Obstructive Coronary Artery Disease Tested in Symptomatic Non-Diabetic Patients Referred for Myocardial Perfusion Imaging: The COMPASS Study

https://pharmaceuticalintelligence.com/2012/08/14/obstructive-coronary-artery-disease-diagnosed-by-rna-levels-of-23-genes-cardiodx-heart-disease-test-wins-medicare-coverage/

  • Alternative #2: High-Sensitivity Cardiac Troponins in Acute Cardiac Care

Recommendations for the use of cardiac troponin (cTn) measurement in acute cardiac care have recently been published.[1] Subsequently, a high-sensitivity (hs) cTn T assay was introduced into routine clinical practice.[2] This assay, like others termed highly sensitive, permits measurement of cTn concentrations in significant numbers of apparently illness-free individuals. These assays can measure cTn in the single-digit range of nanograms per litre (= picograms per millilitre), and some research assays even allow detection of concentrations <1 ng/L.[2–4] Thus, they provide a more precise calculation of the 99th percentile of cTn concentration in reference subjects (the recommended upper reference limit [URL]). These assays measure the URL with a coefficient of variation (CV) <10%.[2–4] The high precision of hs-cTn assays increases their ability to determine small differences in cTn over time. Many assays currently in use have a CV >10% at the 99th percentile URL, limiting that ability.[5–7] However, the less precise cTn assays do not cause clinically relevant false-positive diagnoses of acute myocardial infarction (AMI), and a CV <20% at the 99th percentile URL is still considered acceptable.[8]

We believe that hs-cTn assays, if used appropriately, will improve clinical care. We propose criteria for the clinical interpretation of test results based on the limited evidence available at this time.
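The two assay-performance quantities discussed above, the 99th-percentile URL and the CV at that concentration, are straightforward to compute. A sketch with hypothetical values:

```python
import numpy as np

def url_99th(reference_values):
    """99th-percentile upper reference limit from a healthy reference
    population, as recommended for cTn assays."""
    return np.percentile(np.asarray(reference_values, float), 99)

def cv_percent(replicates):
    """Coefficient of variation (%) of replicate measurements at one
    concentration; hs assays achieve CV < 10% at the 99th-percentile
    URL, and CV < 20% there is still considered acceptable."""
    r = np.asarray(replicates, float)
    return 100 * r.std(ddof=1) / r.mean()

# Hypothetical replicate hs-cTn results near the URL, in ng/L:
cv = cv_percent([14.0, 13.2, 14.6, 13.8, 14.4])
```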

References

1. Thygesen K, Mair J, Katus H, Plebani M, Venge P, Collinson P, Lindahl B, Giannitsis E, Hasin Y, Galvani M, Tubaro M, Alpert JS, Biasucci LM, Koenig W, Mueller C, Huber K, Hamm C, Jaffe AS; Study Group on Biomarkers in Cardiology  of the ESC Working Group on Acute Cardiac Care. Recommendations  for the use of cardiac troponin measurement in acute cardiac care. Eur Heart J 2010;31:2197–2204.

2. Saenger AK, Beyrau R, Braun S, Cooray R, Dolci A, Freidank H, Giannitsis E, Gustafson S, Handy B, Katus H, Melanson SE, Panteghini M, Venge P, Zorn M, Jarolim P, Bruton D, Jarausch J, Jaffe AS. Multicenter analytical evaluation of a high sensitivity troponin T assay. Clin Chim Acta 2011;412:748–754.

3. Zaninotto M, Mion MM, Novello E, Moretti M, Delprete E, Rocchi MB, Sisti D, Plebani M. Precision performance at low levels and 99th percentile concentration of the Access AccuTnI assay on two different platforms. Clin Chem Lab Med 2009; 47:367–371.

4. Todd J, Freese B, Lu A, Held D, Morey J, Livingston R, Goix P. Ultrasensitive flow based immunoassays using single-molecule counting. Clin Chem 2007; 53:1990–1995.

5. van de Kerkhof D, Peters B, Scharnhorst V. Performance of Advia Centaur second-generation troponin assay TnI-Ultra compared with the first-generation cTnI assay. Ann Clin Biochem 2008; 45:316–317.

6. Lam Q, Black M, Youdell O, Spilsbury H, Schneider HG. Performance evaluation and subsequent clinical experience with the Abbott automated Architect STAT Troponin-I assay. Clin Chem 2006; 52:298–300.

7. Tate JR, Ferguson W, Bais R, Kostner K, Marwick T, Carter A. The determination of the 99th percentile level for troponin assays in an Australian reference population. Ann Clin Biochem 2008; 45:275–288.

8. Jaffe AS, Apple FS, Morrow DA, Lindahl B, Katus HA. Being rational about (im)-precision: a statement from the Biochemistry Subcommittee of the Joint European Society of Cardiology/American College of Cardiology Foundation/American Heart Association/World Heart Federation Task Force for the definition of myocardial infarction. Clin Chem 2010; 56:921–943.

To the Editor:

Hoffmann et al. (July 26 issue)1 conclude that, among patients with low-to-intermediate-risk acute coronary syndromes, the incorporation of coronary computed tomographic angiography (CCTA) improves the standard evaluation strategy.2 However, it may be difficult to generalize their results, owing to different situations on the two sides of the Atlantic and the availability of high-sensitivity troponin T assays in Europe. In the United States, the Food and Drug Administration has still not approved a high-sensitivity troponin test, and patients in the Rule Out Myocardial Infarction/Ischemia Using Computer Assisted Tomography (ROMICAT-II) trial only underwent testing with the conventional troponin T test. As we found in the biomarker substudy in the ROMICAT-I trial, a single high-sensitivity troponin T test at the time of CCTA accurately ruled out acute myocardial infarction (negative predictive value, 100%) (Table 1. Results of High-Sensitivity Troponin T Testing for the Diagnosis of Acute Coronary Syndromes in ROMICAT-I).3 In addition, patients with acute myocardial infarction can be reliably identified, with up to 100% sensitivity, with the use of two high-sensitivity measurements of troponin T within 3 hours after admission.4,5

It seems plausible to assume that the incorporation of high-sensitivity troponin T assays in this trial would have outperformed CCTA. Therefore, it is important to assess the performance of such testing and compare it with routine CCTA testing in terms of length of stay in the hospital and secondary end points, especially cumulative costs and major adverse coronary events at 28 days.

Mahir Karakas, M.D.
Wolfgang Koenig, M.D.
University of Ulm Medical Center, Ulm, Germany
wolfgang.koenig@uniklinik-ulm.de

References

  1. Hoffmann U, Truong QA, Schoenfeld DA, et al. Coronary CT angiography versus standard evaluation in acute chest pain. N Engl J Med 2012;367:299-308

  2. Redberg RF. Coronary CT angiography for acute chest pain. N Engl J Med 2012;367:375-376

  3. Januzzi JL Jr, Bamberg F, Lee H, et al. High-sensitivity troponin T concentrations in acute chest pain patients evaluated with cardiac computed tomography. Circulation 2010;121:1227-1234

  4. Keller T, Zeller T, Ojeda F, et al. Serial changes in highly sensitive troponin I assay and early diagnosis of myocardial infarction. JAMA 2011;306:2684-2693

  5. Thygesen K, Mair J, Giannitsis E, et al. How to use high-sensitivity cardiac troponins in acute cardiac care. Eur Heart J 2012;33:2252-2257

Author/Editor Response

In response to Karakas and Koenig: we agree that high-sensitivity troponin T assays may permit more efficient care of low-risk patients presenting to the emergency department with acute chest pain1 and may also have the potential to identify patients with unstable angina because cardiac troponin T levels are associated with the degree and severity of coronary artery disease.2 Hence, high-sensitivity troponin T assays performed early may constitute an efficient and safe gatekeeper for imaging. CCTA, however, may be useful for ruling out coronary artery disease in patients who have cardiac troponin T levels above the 99th percentile but below levels that are diagnostic for myocardial infarction. The hypothesis that high-sensitivity troponin T testing followed by CCTA, as compared with other strategies, may enable safe and more efficient treatment of patients in the emergency department who are at low-to-moderate risk warrants further assessment. The generalizability of our data to clinical settings outside the United States may also be limited because of differences in the risk profile of emergency-department populations and the use of nuclear stress imaging.3

Udo Hoffmann, M.D., M.P.H.
Massachusetts General Hospital, Boston, MA
uhoffmann@partners.org

W. Frank Peacock, M.D.
Baylor College of Medicine, Houston, TX

James E. Udelson, M.D.
Tufts Medical Center, Boston, MA

Since publication of their article, the authors report no further potential conflict of interest.

References

  1. Than M, Cullen L, Reid CM, et al. A 2-h diagnostic protocol to assess patients with chest pain symptoms in the Asia-Pacific region (ASPECT): a prospective observational validation study. Lancet 2011;377:1077-1084

  2. Januzzi JL Jr, Bamberg F, Lee H, et al. High-sensitivity troponin T concentrations in acute chest pain patients evaluated with cardiac computed tomography. Circulation 2010;121:1227-1234

  3. Peacock WF. The value of nothing: the consequence of a negative troponin test. J Am Coll Cardiol 2011;58:1340-1342

  • Alternative #3: Coronary CT Angiography for Acute Chest Pain

The Study concluded:

There was increased diagnostic testing and higher radiation exposure in the CCTA group, with no overall reduction in the cost of care. 

Coronary CT Angiography versus Standard Evaluation in Acute Chest Pain

Udo Hoffmann, M.D., M.P.H., Quynh A. Truong, M.D., M.P.H., David A. Schoenfeld, Ph.D., Eric T. Chou, M.D., Pamela K. Woodard, M.D., John T. Nagurney, M.D., M.P.H., J. Hector Pope, M.D., Thomas H. Hauser, M.D., M.P.H., Charles S. White, M.D., Scott G. Weiner, M.D., M.P.H., Shant Kalanjian, M.D., Michael E. Mullins, M.D., Issam Mikati, M.D., W. Frank Peacock, M.D., Pearl Zakroysky, B.A., Douglas Hayden, Ph.D., Alexander Goehler, M.D., Ph.D., Hang Lee, Ph.D., G. Scott Gazelle, M.D., M.P.H., Ph.D., Stephen D. Wiviott, M.D., Jerome L. Fleg, M.D., and James E. Udelson, M.D. for the ROMICAT-II Investigators

N Engl J Med 2012; 367:299-308 July 26, 2012  http://dx.doi.org/10.1056/NEJMoa1201161

BACKGROUND

It is unclear whether an evaluation incorporating coronary computed tomographic angiography (CCTA) is more effective than standard evaluation in the emergency department in patients with symptoms suggestive of acute coronary syndromes.

METHODS

In this multicenter trial, we randomly assigned patients 40 to 74 years of age with symptoms suggestive of acute coronary syndromes but without ischemic electrocardiographic changes or an initial positive troponin test to early CCTA or to standard evaluation in the emergency department on weekdays during daylight hours between April 2010 and January 2012. The primary end point was length of stay in the hospital. Secondary end points included rates of discharge from the emergency department, major adverse cardiovascular events at 28 days, and cumulative costs. Safety end points were undetected acute coronary syndromes.

RESULTS

The rate of acute coronary syndromes among 1000 patients with a mean (±SD) age of 54±8 years (47% women) was 8%. After early CCTA, as compared with standard evaluation, the mean length of stay in the hospital was reduced by 7.6 hours (P<0.001) and more patients were discharged directly from the emergency department (47% vs. 12%, P<0.001). There were no undetected acute coronary syndromes and no significant differences in major adverse cardiovascular events at 28 days. After CCTA, there was more downstream testing and higher radiation exposure. The cumulative mean cost of care was similar in the CCTA group and the standard-evaluation group ($4,289 and $4,060, respectively; P=0.65).

CONCLUSIONS

In patients in the emergency department with symptoms suggestive of acute coronary syndromes, incorporating CCTA into a triage strategy improved the efficiency of clinical decision making, as compared with a standard evaluation in the emergency department, but it resulted in an increase in downstream testing and radiation exposure with no decrease in the overall costs of care. (Funded by the National Heart, Lung, and Blood Institute; ROMICAT-II ClinicalTrials.gov number, NCT01084239.)

http://www.nejm.org/doi/full/10.1056/NEJMoa1201161#t=abstract

REFERENCES

  1. Roe MT, Harrington RA, Prosper DM, et al. Clinical and therapeutic profile of patients presenting with acute coronary syndromes who do not have significant coronary artery disease. Circulation 2000;102:1101-1106

  2. Miller JM, Rochitte CE, Dewey M, et al. Diagnostic performance of coronary angiography by 64-row CT. N Engl J Med 2008;359:2324-2336

  3. Budoff MJ, Dowe D, Jollis JG, et al. Diagnostic performance of 64-multidetector row coronary computed tomographic angiography for evaluation of coronary artery stenosis in individuals without known coronary artery disease: results from the prospective multicenter ACCURACY (Assessment by Coronary Computed Tomographic Angiography of Individuals Undergoing Invasive Coronary Angiography) trial. J Am Coll Cardiol 2008;52:1724-1732

  4. Marano R, De Cobelli F, Floriani I, et al. Italian multicenter, prospective study to evaluate the negative predictive value of 16- and 64-slice MDCT imaging in patients scheduled for coronary angiography (NIMISCAD-Non Invasive Multicenter Italian Study for Coronary Artery Disease). Eur Radiol 2009;19:1114-1123
  5. Meijboom WB, Meijs MF, Schuijf JD, et al. Diagnostic accuracy of 64-slice computed tomography coronary angiography: a prospective, multicenter, multivendor study. J Am Coll Cardiol 2008;52:2135-2144
  6. Hoffmann U, Bamberg F, Chae CU, et al. Coronary computed tomography angiography for early triage of patients with acute chest pain: the ROMICAT (Rule Out Myocardial Infarction using Computer Assisted Tomography) trial. J Am Coll Cardiol 2009;53:1642-1650

  7. Hollander JE, Chang AM, Shofer FS, et al. One-year outcomes following coronary computerized tomographic angiography for evaluation of emergency department patients with potential acute coronary syndrome. Acad Emerg Med 2009;16:693-698

  8. Rubinshtein R, Halon DA, Gaspar T, et al. Usefulness of 64-slice cardiac computed tomographic angiography for diagnosing acute coronary syndromes and predicting clinical outcome in emergency department patients with chest pain of uncertain origin. Circulation 2007;115:1762-1768

  9. Schlett CL, Banerji D, Siegel E, et al. Prognostic value of CT angiography for major adverse cardiac events in patients with acute chest pain from the emergency department: 2-year outcomes of the ROMICAT trial. JACC Cardiovasc Imaging 2011;4:481-491

  10. Goldstein JA, Chinnaiyan KM, Abidov A, et al. The CT-STAT (Coronary Computed Tomographic Angiography for Systematic Triage of Acute Chest Pain Patients to Treatment) trial. J Am Coll Cardiol 2011;58:1414-1422

  11. Litt HI, Gatsonis C, Snyder B, et al. CT angiography for safe discharge of patients with possible acute coronary syndromes. N Engl J Med 2012;366:1393-1403

  12. Shreibati JB, Baker LC, Hlatky MA. Association of coronary CT angiography or stress testing with subsequent utilization and spending among Medicare beneficiaries. JAMA 2011;306:2128-2136

  13. Hoffmann U, Truong QA, Fleg JL, et al. Design of the Rule Out Myocardial Ischemia/Infarction Using Computer Assisted Tomography: a multicenter randomized comparative effectiveness trial of cardiac computed tomography versus alternative triage strategies in patients with acute chest pain in the emergency department. Am Heart J 2012;163:330-338

  14. Abbara S, Arbab-Zadeh A, Callister TQ, et al. SCCT guidelines for performance of coronary computed tomographic angiography: a report of the Society of Cardiovascular Computed Tomography Guidelines Committee. J Cardiovasc Comput Tomogr 2009;3:190-204

  15. Gerber TC, Carr JJ, Arai AE, et al. Ionizing radiation in cardiac imaging: a science advisory from the American Heart Association Committee on Cardiac Imaging of the Council on Clinical Cardiology and Committee on Cardiovascular Imaging and Intervention of the Council on Cardiovascular Radiology and Intervention. Circulation 2009;119:1056-1065

  16. von Ballmoos MW, Haring B, Juillerat P, Alkadhi H. Meta-analysis: diagnostic performance of low-radiation-dose coronary computed tomography angiography. Ann Intern Med 2011;154:413-420. [Erratum, Ann Intern Med 2011;154:848.]

  17. Achenbach S, Marwan M, Ropers D, et al. Coronary computed tomography angiography with a consistent dose below 1 mSv using prospectively electrocardiogram-triggered high-pitch spiral acquisition. Eur Heart J 2010;31:340-346

  18. Than M, Cullen L, Reid CM, et al. A 2-h diagnostic protocol to assess patients with chest pain symptoms in the Asia-Pacific region (ASPECT): a prospective observational validation study. Lancet 2011;377:1077-1084

In the EDITORIAL by Redberg RF. Dr. Redberg, Cardiology Division, UCSF made the following points in:

Coronary CT angiography for acute chest pain. N Engl J Med 2012;367:375-376

  • Six million people present to the ER annually with acute chest pain; most have conditions other than heart disease.
  • Current diagnostic methods lead to hospital admission, unnecessary stays, and overtreatment; improvement of outcomes is needed.
  • In Rule Out Myocardial Infarction Using Computer Assisted Tomography II (ROMICAT-II), 1000 patients were randomly assigned to a CCTA group or to a standard-evaluation group in the ER, which involved a stress test in 74%.

CRITIQUE and Study FLAWS in MGH Study:

  • ROMICAT-II enrolled patients only during weekday daytime hours, not on weekends or nights, when costs are higher.
  • The assumption that a diagnostic test must be done before discharge for low-to-intermediate-risk patients is unproven and probably unwarranted. There is no evidence that the tests performed led to improved outcomes.
  • Event rates for patients who underwent CCTA, a stress test, or no testing at all were less than 1% for MI, and no one died. Thus, it is impossible to assign a benefit to the CCTA group. Similarly low rates were observed in other studies.
  • CCTA patients were exposed to a substantial dose of radiation and to contrast dye.
  • Patients had an ECG and a negative troponin; there is no evidence that additional testing further reduced the risk.
  • Average age of patients: 54 years; 47% women. These are demographic characteristics with a low incidence of CAD. NEJM 1979;300:1350-8
  • The risk of cancer from radiation is higher in the younger population, and likewise in women.
  • In Hoffmann’s study, the radiation burden was clinically significant: standard-evaluation group, 4.7±8.4 mSv; CCTA group, 13.9±10.4 mSv. Exposures of 10 mSv have been projected to lead to 1 death from cancer per 2000 persons. Arch Intern Med 2009;169:2071-7
  • Middle-aged women have an increased risk of breast cancer from radiation. Arch Intern Med 2012 June 11 (ePub ahead of print)
  • In the ROMICAT-II study, the discharge diagnosis was acute coronary syndrome in less than 10%.
  • The CCTA group had more tests, more radiation, and more interventions than the standard-evaluation group.
  • Choosing Wisely Campaign: order a test only when the benefit will exceed the risks.

Dr. Redberg advocates ECG and troponin; if NORMAL, no further testing.

Epicrisis on Part 1

Redberg’s conclusions are correct for the initial screening. The issue has been whether to do further testing for low- or intermediate-risk patients.

The most intriguing finding, and not at all a surprising one, is that CCTA added very little in the suspect group with small or moderate risk. My original studies using a receiver operating characteristic (ROC) curve were very good, although some patients with CRF or ESRD had extremely high values. The ultrasensitive troponin threw the area under the ROC curve out the window, under the assumption that a perfect assay would exclude AMI, or any injury to the heart. The improved assay does pick up minor elevations of troponin in the absence of MI as a result of plaque rupture. It is possible that 50% of these elevations need medical attention, but then the question is an out-of-hospital referral versus admission and further workup. I have discussed this at some length on several occasions with Dr. Jaffe at the Mayo Clinic.

Many of those with minor or intermediate elevation have significant renal insufficiency, but they might also be in CKD Class 3 and not 1 or 2. The coexistence of Type 2 diabetes would go into the standard assessment, but is not mentioned in the study with respect to immediate admission or outcome 28 days after discharge.

The hs-troponin I assay has been in daily use on the Ortho J&J (formerly Kodak) platform for about 2 years, and the QC standards are very high. I expected the Roche hs-TnT assay to be in use in the US as well, but there may have been delays. Januzzi, Jaffe, and Fred Apple would be involved in the evaluation in the US, while Paul Collinson in the UK, Katus and Mair in Germany, and other European centers certainly have been using the Roche assay.

The biggest problem in these studies is, as my mentor called to my attention, that the frontrunners aren’t going to support a head-to-head up-front study. Given that a diagnosis requires more information at minimal cost, especially when cardiac diagnoses other than MI have to be evaluated as well, it is incomprehensible to me that such information as

  1. mean arterial blood pressure,
  2. natriuretic peptides,
  3. the calculated eGFR are not used in the evaluation.

It is quite impossible to clear the deck when you have patients who don’t have

  1. ST elevation,
  2. ST depression, or
  3. T-wave inversion

(not to mention long-QT abnormalities), who are seen for vague

  • precordial tightness or shortness of breath, or
  • pain that resembles gall bladder disease.

Is this an indication of the obsolescence of the RCT?

A Retrospective Quality- and Cost-Driven Audit on the Effect of an hs-cTn Assay with On-Site CT Follow-up (No Treadmill Availability)

A retrospective multisite study showed that doing the hs-cTn assay followed by on-site CT was a good choice for the US.

I also considered the selective release of

  • low- to moderate-risk patients to cardiology follow-up in a timely manner.

This report by Korley and Jaffe in Medscape is an excellent analysis of my point, and reflects several years of discussion

I have had with Dr. Jaffe at the Mayo Clinic. He pointed out the importance of

  • Type 1 and Type 2 AMI

at a discussion with Dr. Fred Apple at a meeting of the American Association for Clinical Chemistry, and he fully elaborates on it here.
It is really a refinement of other proposals that are being discussed. It is also timely because hs-cTnI is already being used
widely in the US, while there might be a holdup on the hs-cTnT.

Highlights

  1. Need for a Universally Accepted Nomenclature
  2. Defining Uniform Criteria for Reference Populations
  3. Discriminating Between Acute and Nonacute Causes of hs-cTn Elevations
  4. Distinguishing Between Type 1 and Type 2 AMI
  5. Analytical Imprecision in Cardiac Troponin Assays
  6. Ruling Out AMI
  7. Investigating the Causes of Positive Troponin Values in Non-AMI Patients
  8. Risk Stratifying Patients With Nonacute Coronary Syndrome Conditions
  9. Conclusions

Abstract

It is only a matter of time before the use of high-sensitivity cardiac
troponin assays (hs-cTn) becomes common throughout the United
States. In preparation for this inevitability, this article raises a number
of important issues regarding these assays that deserve consideration.

These include: the need for

  • the adoption of a universal nomenclature;
  • the importance of defining uniform criteria for reference populations;
  • the challenge of discriminating between acute and nonacute
    causes of hs-cTn elevations, and
  • between type 1 and type 2 acute myocardial infarction (AMI);

factors influencing the analytical precision of hs-cTn;

  • ascertaining the optimal duration  of the rule-out period for AMI;
  • the need for further evaluation to determine the causes
    of a positive hs-cTn in non-AMI patients; and
  • the use of hs-cTn to risk-stratify patients with disease conditions
    other than AMI.

This review elaborates on these critical issues  as a means of
educating clinicians and researchers about them.

Introduction

Recently, clinicians have begun to use the recommended cut-off values
for current generation cardiac troponin (cTn) assays:

  • the 99th percentile upper reference limit (URL).

Previously, there was reluctance to use these cut-off values because

  • of  cTn elevations from non-acute ischemic heart disease conditions.

Thus, there was a tendency to use cut-off values for troponin that equated with the

  • prior gold standard diagnosis developed with less sensitive markers
    • creatine kinase-MB isoenzyme (CK-MB) or
    • the lowest value at which the assay achieved a 10%
      coefficient of variation (CV),

which would reduce false-positive elevations (without plaque rupture).

The use of the 99th percentile URL increases the ability of these assays to detect both

  •   acute myocardial infarction (AMI) and
  •   structural cardiac morbidities.[1]

This change in practice should not be confused with

  •   newer-generation high-sensitivity assays.

Improvements in the analytic performance of cTn assays have resulted in

  •   superior sensitivity and precision.

Improved sensitivity occurs because of

  •   more sensitive antigen binding and detection antibodies,
  •   increases in the concentration of the detection probes on the tag antibodies,
  •   increases in sample volume, and buffer optimization.[2]

Assays now are able to measure

  •   10-fold lower concentrations with high precision

(a CV <10% at the 99th percentile  of the URL).

The high-sensitivity cardiac troponin T (hs-cTnT) assay is already in clinical use
throughout most of the world. It is only a matter of time before high- sensitivity
assays are approved for use in the United States. In preparation for this, as well as

  •   using the 99th percentile URL with contemporary assays,

there are a number of important issues that deserve consideration. Key concepts are included in (Table 1).

Table 1: Key Concepts

  • There is a need to develop a universal nomenclature for troponin assays.
  • There is a need for uniform criteria for selecting reference populations.
  • The optimal delta criteria for distinguishing between acute and chronic cardiac injury remain unclear and are likely to be assay-specific.
  • Distinguishing between type 1 and type 2 AMI is challenging, and more type 2 AMIs will be detected with hs-cTn assays.
  • Factors affecting the analytical precision of troponin assays (including how we collect samples) will become more important with the use of hs-cTn assays.
  • The optimal duration for ruling out AMI remains unclear;

  • novel approaches to this issue are being developed.

Elevated hs-cTn, regardless of the cause, has important

  • prognostic implications and deserves additional evaluation; 

Many cases of chronic elevations can be evaluated in an outpatient setting.

Hs-cTn can be used to

  • risk-stratify patients with non-ACS cardiovascular comorbidities.

Need for a Universally Accepted Nomenclature

The literature is replete with terms used to refer to cTn assays.
We advocate the use of the term “high-sensitivity cardiac troponin assays” (hs-cTn) for

  • cTn assays that measure cardiac troponin values
  • in at least 50% of a reference population.[2,3]

This policy has now been embraced by the journal Clinical Chemistry. High-sensitivity
assays can be further categorized as well (Table 2) with respect to generations of cTn.

Table 2. Classification of High-Sensitivity Cardiac Troponin Assays

Category                  Description
First generation          Able to measure cTn in 50%–75% of a reference population
Second generation         Able to measure cTn in 75%–95% of a reference population
Third generation          Able to measure cTn in >95% of a reference population

Adapted from Apple and Collinson.[3]
  • Ideally, assays should have a CV of <10% at the 99th percentile value.

Assays that do not achieve this level of precision are somewhat less sensitive, which protects against
false-positive results, and they can still be used.[4]
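The generation scheme in Table 2 can be expressed as a short sketch (illustrative only; the function name and the treatment of values exactly on the boundaries are my own assumptions):

```python
# Illustrative classifier for the Table 2 scheme: assign a troponin assay to a
# "generation" by the fraction of a reference population in which it returns
# a measurable value. Boundary handling is an assumption of this sketch.
def hs_assay_generation(measurable_fraction: float) -> str:
    """measurable_fraction: share of reference subjects with quantifiable cTn."""
    if measurable_fraction > 0.95:
        return "third generation"
    if measurable_fraction >= 0.75:
        return "second generation"
    if measurable_fraction >= 0.50:
        return "first generation"
    return "not high-sensitivity"

print(hs_assay_generation(0.60))  # first generation
print(hs_assay_generation(0.97))  # third generation
```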

Defining Uniform Criteria for Reference Populations
There is a lack of consistency in the types and numbers of subjects that constitute a reference
population.[2] Often, participants are included after simple screening by checklist but without a

  • physical examination,
  • electrocardiogram, or
  • laboratory testing.

At other times, a

  • normal creatinine and/or a normal natriuretic peptide value is required.
  • Imaging to detect structural heart disease is rarely used. 

Because it is known that

  • gender,
  • age,
  • race,
  • renal function,
  • heart failure, and
  • structural heart disease, including
  • increased left ventricular (LV) mass

are associated with increased cTn concentrations,[5,6,7] an assay’s 99th percentile value depends on the composition of the reference group. Thus, the more criteria used, the lower the reference values (Figure 1).[5]
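The dependence of the 99th percentile value on reference-group composition can be demonstrated with a toy simulation (synthetic numbers, not real assay data; the lognormal parameters are arbitrary assumptions chosen only to give comorbid subjects higher values):

```python
# Toy demonstration: screening comorbid subjects out of the reference group
# lowers the computed 99th percentile upper reference limit (URL).
import random
import statistics

random.seed(1)  # reproducible synthetic cohort

# Hypothetical cohort of 1000: 10% carry a comorbidity (e.g., CKD, LVH) and
# are simulated with systematically higher troponin-like values (arbitrary ng/L).
cohort = [(random.lognormvariate(1.0, 0.5), False) for _ in range(900)] + \
         [(random.lognormvariate(2.0, 0.5), True) for _ in range(100)]

def url_99(values):
    # statistics.quantiles(n=100) returns 99 cut points; index 98 is the 99th percentile
    return statistics.quantiles(values, n=100)[98]

all_subjects = [v for v, _ in cohort]
screened = [v for v, comorbid in cohort if not comorbid]

print(f"99th percentile URL, unscreened: {url_99(all_subjects):.1f} ng/L")
print(f"99th percentile URL, screened:   {url_99(screened):.1f} ng/L")
```

With the comorbid tail removed, the screened URL comes out lower, which is the "more criteria, lower reference values" point made above.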

http://img.medscape.com/article/803/159/803159-fig1.jpg

Have no history of

  • vascular disease or diabetes, and
  • not taking cardioactive drugs,
    • based on questionnaire.

“Normal” was defined as those individuals who had
  • no history of vascular or cardiovascular disease,
  • diabetes mellitus,
  • hypertension, or
  • heavy alcohol intake and who were
  • receiving no cardiac medication AND
  • had blood pressure ≤140/90 mmHg;
  • fasting glucose  <110 mg/dL;
  • eGFR >60mL/min;
  • LVEF > 50%; normal lung function; and no significant
  • valvular heart disease,
  • LVH,
  • diastolic HF, or
  • regional wall-motion abnormalities on ECHO.

The appropriate reference value to use clinically also is far from a settled issue.
It might be argued that

  • using a higher 99th percentile value for the elderly
  • allows comparison of the patient to his or her peers, but

in raising the cut-off value, if the increases are caused by comorbidities,

  • those who are particularly healthy will be disadvantaged.[8]

Gender and ethnicity are not comorbidities, and we would urge that these be taken into account.
Regardless of the assay, there will need to be

  • 99th percentile values for men that are different for women.[2]

The reference population for assay validation studies should ideally be based on demographic characteristics that mirror the U.S. population and include subjects whose
demographic characteristics that mirror the U.S. population and include subjects whose

  • blood pressure,
  • serum glucose, and
  • creatinine and
  • natriuretic peptide values are
  • within the normal reference range and
  • who take no cardiac  medications.

These subjects should be

  • free from structural heart disease,
  • documented by echocardiography,
  • cardiac magnetic resonance imaging (MRI) or
  • computed tomography (CT) angiography.

Meeting these criteria will be a major challenge, especially for older individuals.
A conjoint pool of samples collected with manufacturers’ support so that all methods were derived from an

  • identical patient population for their reference ranges would be ideal.

[However, the method of collection and possible freeze-thaw effects are unavoidable.]

One large national effort might be advantageous over multiple efforts.

 Discriminating Between Acute and Nonacute Causes of hs-cTn Elevations

With the ability to precisely measure small concentrations of cTn,

  • clinicians will be faced with the challenge of distinguishing patients
    • who have acute problems from those with chronic elevations from other causes.

Using the fourth-generation cTnT assay, approximately 0.7% of patients in
the general population have modest elevations >99th percentile URL.[11]

In the same population, this number was 2% with the hs-cTnT assay.[6]  Only

  • half of them had documentation (even with imaging) of cardiac abnormalities.

If the prevalence of a positive cTnT is 2% in the general population,

  • it will likely be 10% or 20% in the emergency department (ED)
  • and even higher in hospitalized patients, as
  • these patients often have cardiac comorbidities.

Measurement of changes in hs-cTn over time (δ hs-cTn)

  • improves the specificity of hs-cTn for the diagnosis of acute cardiac injury.[12,13]

However, it does so at the cost of sensitivity. With contemporary assays, differences

  • in analytical variation have been used to define an increasing pattern.

At elevated values, CV for most assays is in the range of 5% to 7%, so

  • a change of 20% ensures that a given change is not caused

by analytical variation alone.[10]
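The 20% figure can be checked against the usual significant-change calculation (a sketch of standard reference-change reasoning, here considering analytical variation only and assuming a 95% confidence z of 1.96):

```python
# Minimal check: two measurements each carry analytical CV_a, so their
# difference carries sqrt(2)*CV_a; a change exceeding z*sqrt(2)*CV_a is
# unlikely to be analytical noise alone (biological variation ignored here).
import math

def min_significant_change(cv_analytical: float, z: float = 1.96) -> float:
    return z * math.sqrt(2) * cv_analytical

for cv in (0.05, 0.07):  # the 5%-7% CV range quoted for elevated values
    print(f"CV_a = {cv:.0%}: change must exceed {min_significant_change(cv):.1%}")
```

For a CV between 5% and 7% this threshold lands between roughly 14% and 19.5%, so a 20% change clears analytical variation, consistent with the statement above.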

At values near the 99th percentile URL, higher change values are necessary.[13]  The situation with hs-cTn assays is much more complex, as follows:

1. Change criteria are unique for each assay.
2. It will be easy to misclassify patients with coronary artery disease who may present with a noncardiac cause of chest pain

  • but have elevated values.

They could be having unstable ischemia or elevations caused by structural cardiac abnormalities and noncardiac discomfort.

If hs-cTn is rising significantly, the issue is easy but

  • if the values are not rising, a diagnosis of AMI still might be made.
  • If so, some patients may be included as having AMI without a changing pattern.
  • This occurred in 14% of patients studied by Hammarsten et al.[14]

If patients with elevated hs-cTn without a changing pattern are not called AMI,

  • should they be called patients with “unstable angina and cardiac injury” or patients with structural heart disease and noncardiac chest pain?

Perhaps both exist?

3. The release of biomarkers is flow-dependent. Thus, there may not always be rapid access to the circulation. An area of injury distal to a totally occluded vessel (when collateral channels close) may be different in terms of the dynamics of

  • hs-cTn change from an intermittently occluded coronary artery.
4. Conjoint biological and analytical variation can be measured.

  • They are assay-dependent, and the reference change values range from 35% to 85%.[2]

The use of criteria less than that (which may be what is needed clinically) will thus
likely include individuals with changes caused by

  • conjoint biological and analytical variation alone.

This has been shown to be the case in

  • many patients with nonacute cardiovascular diagnoses.[14,15]
5. Most evaluations have attempted to define the optimal delta, often with receiver operating characteristic (ROC) curve analysis. Such an approach is based on the concept that sensitivity and specificity deserve equivalent weight. [But higher deltas improve specificity more and lower ones improve sensitivity, and it is not clear that all physicians want the same tradeoffs in this regard.] ED physicians often prefer high sensitivity so that their miss rate is low (<1%),[16] whereas hospital clinicians want increased specificity. This tension will need to be addressed in defining the optimal delta.
6. The delta associated with AMI may be different from that associated with other cardiac injury.[14] In addition, women have less marked elevations of cTn in response to coronary artery disease[17] and in earlier studies were less apt to have elevated values.[18] Given that their pathology is at times different,

  • it may be that different metrics will be necessary based on gender.
7. Some groups have assumed that if a change is of a given magnitude over 6 hours, it can be divided by 6 and the 1-h values can be used.

  • This approach is not data driven, and biomarker release is more likely to be discontinuous rather than continuous.[19]

In addition, the values obtained with this approach are too small to be distinguished from a lack of change with most assays.

These issues pose a major challenge even for defining the ideal delta change value and provide the reasons why

  • the use of this approach will reduce sensitivity[20,21] (Figure 2).

http://img.medscape.com/article/803/159/803159-fig2.jpg

Defining the Optimal Delta: Tension Between Sensitivity and Specificity

There is a reciprocal relationship between sensitivity and specificity. With marked percentage changes,

  • specificity is improved at the expense of sensitivity, and
  • at lower values, the opposite occurs.

In addition, there is controversy in regard to the metrics that should be used with high-sensitivity assays.
The Australian-New Zealand group proposed

  • a 50% change for hs-cTnT for values below 53 ng/L and
  • a 20% change above that value.[22]
  • The 20% change is much less than conjoint biological and analytical variation.
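That proposal can be written out as a small sketch (illustrative only, not clinical software; the function name and the handling of an undetectable initial value are my own assumptions):

```python
# Sketch of the quoted Australian-New Zealand hs-cTnT delta proposal:
# require a 50% relative change when the initial value is below 53 ng/L
# and a 20% relative change at or above it.
def significant_delta(initial_ng_l: float, followup_ng_l: float) -> bool:
    threshold = 0.50 if initial_ng_l < 53 else 0.20
    if initial_ng_l == 0:
        # undetectable baseline: treat any measurable follow-up as a change
        return followup_ng_l > 0
    relative_change = abs(followup_ng_l - initial_ng_l) / initial_ng_l
    return relative_change >= threshold

print(significant_delta(20, 32))  # 60% rise from a low baseline -> True
print(significant_delta(60, 68))  # ~13% rise above 53 ng/L -> False
```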

A number of publications have suggested the superiority of

  • absolute δ cTn compared to relative δ cTn in discriminating between AMI and non-AMI causes of elevated cTn.[23,24,25]
  • The utility of the absolute or relative δ cTn appears to depend on the initial cTn concentration, and
  • the major benefit may be at higher values.[23]

A recent publication by Apple et al.[26] calculates deltas in several different ways with a contemporary assay and

  • provides a template for how to do such studies optimally.[26]

If all studies were carried out in a similar fashion, it would help immensely. In the long run, institutions will need to
define the approach they wish to take. We believe this discussion is a critical one and should include

  • laboratory,
  • ED, and
  • cardiology professionals.

Distinguishing Between Type 1 and Type 2 AMI

Although δ cTn is helpful in distinguishing between AMI and nonacute causes of Tn release,

  • it may or may not be useful in discerning type 1 from type 2 AMI.

As assay sensitivity increases, it appears that the frequency of type 2 AMI increases.
Making this distinction is not easy.

Type 1 AMI is caused by a primary coronary event, usually plaque rupture.

  • It is managed acutely with aggressive anticoagulation and
  • revascularization (percutaneous coronary intervention or coronary artery bypass).[10]

Type 2 AMI typically evolves secondary to ischemia from an oxygen demand/supply mismatch, such as

  • severe tachycardia and
  • hypo- or hypertension and the like,
  • with or without a coronary abnormality.

These events usually are treated by addressing the underlying abnormalities.

They are particularly common in patients who are

  • critically ill and those who
  • are postoperative.[27]

However, autopsy studies from patients with postoperative AMI often manifest plaque rupture.[28]
Thus, the more important events, even if less common, may be type 1 AMIs. Type 2 events
seem more common in women,  who tend to have

  • more endothelial dysfunction,
  • more plaque erosion, and
  • less fixed coronary artery disease.[28-30]

Additional studies are needed to determine how best to make this clinical distinction.
For now, clinical judgment is recommended.

Analytical Imprecision in Cardiac Troponin Assays

All analytical problems will be more critical with hs-cTn assays. Cardiac troponin I (cTnI) and cardiac troponin T (cTnT) are measured using enzyme-linked immunosorbent assays.

  •   Quantification of hs-cTn can be influenced by interference from reagent antibodies with the analyte (cTn), leading to false-positive or false-negative results.[31]
  •   Autoantibodies to cTnI or cTnT are found in 5% to 20% of individuals and can reduce detection of cTn.[32,33]
  •   Additionally, fetal cTn isoforms can be re-expressed in diseased skeletal muscle and detected by the cTnT assays, resulting in false-positive values.[34]

Several strategies, including the use of

  •   blocking reagents,
  •  assay redesign, and
  •  antibody fragments,

have been used to reduce interference.[35,36]

There are differences in measured cTn values based on specimen type (serum versus heparinized plasma versus EDTA plasma).
In addition, hemolysis may affect the accuracy of cTn measurement,[37] as may blood draws from peripheral IV lines, which are common in the ICU.

Ruling Out AMI

Studies evaluating the diagnostic performance of hs-cTn assays for the early diagnosis of AMI usually define AMI on the basis of a rising and/or falling pattern of current-generation cTn values.[21,38]
However, defining AMI with the less sensitive current-generation assay results in

  • an underestimation of the true prevalence of AMI,
  • an overestimation of the negative predictive value of the experimental assay, and
  • an artificially shortened time to rule in all AMIs and to definitively exclude AMI, because it ignores the new AMIs detected more sensitively by the hs-cTn assay.

Thus, in the study by Hammarsten et al.,[14]

  • the time to exclude all AMIs was 8.5 hours when all of the AMIs detected
    with the high-sensitivity assay were included, whereas
  • others that do not include these additional events report this can be done
    in 3 to 4 hours.[21,29,38]

In our view, Hammarsten is correct.

This does not mean that hs-cTn cannot help in excluding AMI. Body et al.[39] reported that patients who present with undetectable values (below the limit of blank [LOB] of the hs-cTnT assay) were unlikely to have adverse events during follow-up. If that group of patients is added to those who present later than 6 hours after symptom onset, then perhaps a significant proportion of patients with possible acute coronary syndrome (ACS) could have that diagnosis excluded with the initial value.[40] Nonetheless, studies need to continue to evaluate cTn values for at least 6 hours to define the frequency of additional AMIs detected in that manner.

Studies that rely on follow-up evaluations of patients with low event rates, who are likely to receive additional care during the follow-up period, are likely to be underpowered.

Better initial risk stratification may help with this, as recently reported.[16,41]
Low-risk patients with reliable follow-up after an emergency department (ED) visit may be a group that can be released as early as 2 hours after presentation.[16]

Investigating the Causes of Positive Troponin Values in Non-AMI Patients

Elevated Tn values (including those obtained with high-sensitivity assays) are associated with

  • a 2-fold higher risk for longer-term all-cause mortality and
  • cardiovascular death than are negative troponin values.[6,42-44]

This association is dose-dependent.

  • If values are rising, they are indicative of acute cardiac injury.

Those patients should be admitted because the risk is often short-term. However,

  • if the values are stable, assuming the timing of any acute event would
    allow detection of a changing pattern,
  • the risk, although substantive, in our view, often plays out in the longer term.[44]
  • Many of these individuals, assuming they are doing well clinically, can be
    evaluated outside of the hospital, in our view.
  • However, because such elevations are an indicator of subclinical
    cardiovascular injury, such evaluations should be early and aggressive.
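The serial-value logic described above (rising values suggest acute injury; stable elevations suggest a chronic process) can be sketched in code. This is an illustrative sketch only: the function name and the 20% relative-change threshold are assumptions chosen for the example, not guideline values, and the URL must come from the specific assay in use.

```python
def classify_serial_ctn(first, second, url, rel_delta=0.20):
    """Illustrative triage of two serial cTn values against the
    99th-percentile URL. The 20% relative-change cutoff is an
    assumption for this sketch, not a validated criterion."""
    peak = max(first, second)
    if peak <= url:
        # Neither value exceeds the upper reference limit.
        return "below URL: AMI unlikely on these values alone"
    change = abs(second - first) / max(first, 1e-9)
    if change >= rel_delta:
        # Elevated with a rising/falling pattern: acute injury.
        return "elevated and changing: acute myocardial injury pattern"
    # Elevated but stable: more consistent with chronic elevation.
    return "elevated but stable: consider chronic elevation and early outpatient evaluation"

# Usage with a hypothetical URL of 14 ng/L:
#   classify_serial_ctn(5, 6, url=14)    # both values below the URL
#   classify_serial_ctn(30, 52, url=14)  # elevated and rising
#   classify_serial_ctn(28, 30, url=14)  # elevated but stable
```

In practice, timing of the draws relative to symptom onset matters as much as the values themselves, which a two-value sketch like this cannot capture.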

Data from several studies suggest that there may well be risk far below the 99th percentile URL value.
Thus, it may evolve that patients in the upper portion of the normal range also require some degree of cardiovascular evaluation.

Risk Stratifying Patients With Nonacute Coronary Syndrome Conditions

Patients who have a rising pattern of values have a higher risk of mortality than those with negative values regardless of the cause.
Investigations are ongoing to determine how well results from hs-cTn testing help to risk-stratify patients with

  • pulmonary embolism,[45]
  • congestive heart failure,[46]
  • sepsis,[47]
  • hypertensive emergency,[48] and
  • chronic obstructive pulmonary disease.[49]

Presently, the studies suggest that cTn values classify patients into clinically relevant risk subgroups. Studies are needed to evaluate the incremental prognostic benefit of hs-cTn.

Conclusions

Routine use of hs-cTn assays in the United States is inevitable. These assays hold
the promise of

  • improving the sensitivity of AMI diagnoses,
  • shortening the duration of AMI evaluation, and
  • improving the risk stratification of other noncardiac diagnoses.

However, to fully realize their potential, additional studies are needed to address the knowledge gaps we have identified. In the interim, clinicians need to learn how to use the 99th-percentile URL and the concept of changing values.
John Adan, MD, FACC

In 2008, CMS commissioned Yale University to analyze 30-day mortality after myocardial infarction in its hospitals.

The study was based on a review of medical records. Consensus criteria for the diagnosis of myocardial infarction include

  • clinical symptoms,
  • EKG,
  • troponins,
  • CK MB,
  • ECHO,
  • cath,
  • histopathology, etc.

How the reviewed hospitals performed diagnostic coding is unknown. In clinical practice we are bombarded by consults

  • for elevated troponins due to causes other than myocardial infarction, like
    • pneumonia,
    • accelerated hypertension,
    • arrhythmias,
    • renal failure, etc.

The 30-day mortality metric started out above 19%. Now it is below 15%, on average.

CT Angiography (CCTA) Reduced Medical Resource Utilization compared to Standard Care reported in JACC
Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2013/05/16/ct-angiography-ccta-reduced-medical-resource-utilization-compared-to-standard-care-reported-in-jacc/?goback=%2Egde_4346921_member_241569351

[Figure: typical changes in CK-MB and cardiac troponin in acute myocardial infarction (photo credit: Wikipedia)]

[Figure: phosphotungstic acid-haematoxylin staining demonstrating contraction band necrosis in an individual who had a myocardial infarction (heart attack) (photo credit: Wikipedia)]

[Figure: troponin, SVG version (photo credit: Wikipedia)]
