Posts Tagged ‘University of Washington’

The Life and Work of Allan Wilson

Curator: Larry H. Bernstein, MD, FCAP


Allan Charles Wilson (18 October 1934 – 21 July 1991) was a Professor of Biochemistry at the University of California, Berkeley, a pioneer in the use of molecular approaches to understand evolutionary change and reconstruct phylogenies, and a revolutionary contributor to the study of human evolution. He was one of the most controversial figures in post-war biology; his work attracted a great deal of attention both from within and outside the academic world. He is the only New Zealander to have won the MacArthur Fellowship.

He is best known for his experimental demonstration, with his doctoral student Vincent Sarich, of the molecular clock concept that Linus Pauling and Emile Zuckerkandl had postulated theoretically, and for revolutionary insights into the molecular anthropology of the higher primates and human evolution, notably the Mitochondrial Eve hypothesis (with his doctoral students Rebecca L. Cann and Mark Stoneking).

Allan Wilson was born in Ngaruawahia, New Zealand, and raised on his family’s rural dairy farm at Helvetia, Pukekohe, about twenty miles south of Auckland. At his local Sunday School, the vicar’s wife was impressed by young Allan’s interest in evolution and encouraged Allan’s mother to enroll him at the elite King’s College secondary school in Auckland. There he excelled in mathematics, chemistry, and sports.

Wilson already had an interest in evolution and biochemistry, but intended to be the first in his family to attend university by pursuing studies in agriculture and animal science. Wilson met Professor Campbell Percy McMeekan, a New Zealand pioneer in animal science, who suggested that Wilson attend the University of Otago in southern New Zealand to further his study in biochemistry rather than veterinary science. Wilson gained a BSc from the University of Otago in 1955, majoring in both zoology and biochemistry.

The bird physiologist Donald S. Farner met Wilson as an undergraduate at Otago and invited him to Washington State University at Pullman as his graduate student. Wilson obliged and completed a master’s degree in zoology at WSU under Farner in 1957, where he worked on the effects of photoperiod on the physiology of birds.

Wilson then moved to the University of California, Berkeley, to pursue his doctoral research. At the time the family thought Allan would only be gone two years. Instead, Wilson remained in the United States, gaining his PhD at Berkeley in 1961 under the direction of biochemist Arthur Pardee for work on the regulation of flavin biosynthesis in bacteria. From 1961 to 1964, Wilson studied as a post-doc under biochemist Nathan O. Kaplan at Brandeis University in Waltham, Massachusetts. In Kaplan’s lab, working with lactate and malate dehydrogenases, Wilson was first introduced to the nascent field of molecular evolution. Nate Kaplan was one of the very earliest pioneers to address phylogenetic problems with evidence from protein molecules, an approach that Wilson later famously applied to human evolution and primate relationships. After Brandeis, Wilson returned to Berkeley where he set up his own lab in the Biochemistry department, remaining there for the rest of his life.

Wilson joined the UC Berkeley faculty of biochemistry in 1964, and was promoted to full professor in 1972. His first major scientific contribution was published as Immunological Time-Scale For Hominid Evolution in the journal Science in December 1967. With his student Vincent Sarich, he showed that evolutionary relationships of the human species with other primates, in particular the Great Apes (chimpanzees, gorillas, and orangutans), could be inferred from molecular evidence obtained from living species, rather than solely from fossils of extinct creatures.

Their microcomplement fixation method (see complement system) measured the strength of the immune reaction between an antigen (serum albumin) from one species and an antibody raised against the same antigen in another species. The strength of the antibody-antigen reaction was known to be stronger between more closely related species; their innovation was to measure it quantitatively among many species pairs as an “immunological distance”. When these distances were plotted against the divergence times of species pairs with well-established evolutionary histories, the data showed that the molecular difference increased linearly with time, in what was termed a “molecular clock”. Given this calibration curve, the time of divergence between species pairs with unknown or uncertain fossil histories could be inferred. Most controversially, their data suggested that divergence times between humans, chimpanzees, and gorillas were on the order of 3–5 million years, far less than the estimates of 9–30 million years accepted by conventional paleoanthropologists from fossil hominids such as Ramapithecus. This ‘recent origin’ theory of human/ape divergence remained controversial until the discovery of the “Lucy” fossils in 1974.
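The calibration logic can be sketched numerically. The distances and dates below are invented for illustration and are not Sarich and Wilson’s actual data; the real clock was calibrated against fossil-dated primate divergences.

```python
# Least-squares fit of immunological distance vs. known divergence time,
# then inversion of the fitted line to date an uncalibrated species pair.
# All numbers are invented for illustration.

def fit_through_origin(times, distances):
    """Fit distance = rate * time (the clock line passes through the origin)."""
    num = sum(t * d for t, d in zip(times, distances))
    den = sum(t * t for t in times)
    return num / den  # immunological-distance units per million years

# Calibration pairs with fossil-dated divergences (time in Myr, distance)
calib_times = [10.0, 20.0, 40.0, 60.0]
calib_dists = [8.0, 16.0, 32.0, 48.0]   # perfectly clock-like toy data

rate = fit_through_origin(calib_times, calib_dists)   # 0.8 per Myr here

# Date a pair with no usable fossil record from its measured distance
unknown_distance = 4.0
estimated_divergence = unknown_distance / rate        # 5.0 Myr here
print(rate, estimated_divergence)
```

With real data the calibration points scatter around the line, but the inversion step is the same: a measured distance divided by the fitted rate yields an estimated divergence time.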

Wilson and another PhD student, Mary-Claire King, subsequently compared several lines of genetic evidence (immunology, amino acid differences, and protein electrophoresis) on the divergence of humans and chimpanzees, and showed that all methods agreed that the two species were more than 99% similar.[4][19] Given the large organismal differences between the two species in the absence of large genetic differences, King and Wilson argued that it was not structural gene differences that were responsible for species differences, but differences in gene regulation, that is, in the timing and manner in which near-identical gene products are assembled during embryology and development. In combination with the “molecular clock” hypothesis, this contrasted sharply with the accepted view that larger or smaller organismal differences were due to larger or smaller rates of genetic divergence.

In the early 1980s, Wilson further refined traditional anthropological thinking with his work with PhD students Rebecca Cann and Mark Stoneking on the so-called “Mitochondrial Eve” hypothesis.[20] In his efforts to identify informative genetic markers for tracking human evolutionary history, he focused on mitochondrial DNA (mtDNA) — genes that are found in mitochondria in the cytoplasm of the cell outside the nucleus. Because of its location in the cytoplasm, mtDNA is passed exclusively from mother to child, the father making no contribution, and in the absence of genetic recombination defines female lineages over evolutionary timescales. Because it also mutates rapidly, it is possible to measure the small genetic differences between individuals within species by restriction endonuclease gene mapping. Wilson, Cann, and Stoneking measured differences among many individuals from different human continental groups, and found that humans from Africa showed the greatest inter-individual differences, consistent with an African origin of the human species (the so-called “Out of Africa” hypothesis). The data further indicated that all living humans shared a common maternal ancestor, who lived in Africa only a few hundred thousand years ago.
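The core diversity comparison can be sketched with invented data. The 0/1 strings below stand in for restriction-site presence/absence patterns; the actual study scored such patterns across many individuals and many enzymes.

```python
# Mean pairwise difference among mtDNA restriction-site patterns: the
# within-group diversity statistic behind the argument that African
# populations, being the most variable, are the oldest. Patterns here
# are invented 0/1 strings (site present/absent).

from itertools import combinations

def pairwise_diff(a, b):
    """Count positions at which two site patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def mean_diversity(patterns):
    """Average pairwise difference over all pairs of individuals."""
    pairs = list(combinations(patterns, 2))
    return sum(pairwise_diff(a, b) for a, b in pairs) / len(pairs)

group_a = ["10110", "10010", "00111"]   # more variable (older) group
group_b = ["11100", "11100", "11101"]   # less variable group

print(mean_diversity(group_a) > mean_diversity(group_b))  # True
```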

This common ancestor became widely known in the media and popular culture as the Mitochondrial Eve. This had the unfortunate and erroneous implication that only a single female lived at that time, when in fact the occurrence of a coalescent ancestor is a necessary consequence of population genetic theory, and the Mitochondrial Eve would have been only one of many humans (male and female) alive at that time.[2][3] This finding was, like his earlier results, not readily accepted by anthropologists. The conventional hypothesis was that the various human continental groups had evolved from diverse ancestors over several million years since the divergence from chimpanzees. The mtDNA data, however, strongly suggested that all humans descended from a common, quite recent, African mother.

Wilson became ill with leukemia, and after a bone marrow transplant, died on Sunday, 21 July 1991, at the Fred Hutchinson Cancer Research Center in Seattle. He had been scheduled to give the keynote address at an international conference the same day. He was 56, at the height of his scientific recognition and powers.

Wilson’s success can be attributed to his strong interest and depth of knowledge in biochemistry and evolutionary biology, his insistence on the quantification of evolutionary phenomena, and his early recognition of new molecular techniques that could shed light on questions of evolutionary biology. After developing quantitative immunological methods, his lab was the first to recognize restriction endonuclease mapping analysis as a quantitative evolutionary genetic method, which led to his early use of DNA sequencing and the then-nascent technique of PCR to obtain large DNA sets for genetic analysis of populations. He trained scores of undergraduate, graduate (34 people, 17 men and 17 women, received their doctoral degrees in his lab), and post-doctoral students in molecular evolutionary biology, including sabbatical visitors from six continents. His lab published more than 300 technical papers, and was recognized as a mecca for those wishing to enter the field of molecular evolution in the 1970s and 1980s.

The Allan Wilson Centre for Molecular Ecology and Evolution was established in 2002 in his honour to advance knowledge of the evolution and ecology of New Zealand and Pacific plant and animal life, and of human history in the Pacific. The Centre is hosted by Massey University at Palmerston North, New Zealand, and is a national collaboration involving the University of Auckland, Victoria University of Wellington, the University of Otago, the University of Canterbury and the New Zealand Institute for Plant and Food Research.

A 41-minute documentary film of his life, entitled Allan Wilson, Evolutionary: Biochemist, Biologist, Giant of Molecular Biology, was released by Films Media Group in 2008.


Allan Charles Wilson. 18 October 1934 — 21 July 1991

Rebecca L. Cann

Department of Cell and Molecular Biology, University of Hawaii at Manoa, Biomedical Sciences Building T514, 1960 East–West Rd, Honolulu, HI 96822, USA


Allan Charles Wilson was born on 18 October 1934 at Ngaruawahia, New Zealand. He died in Seattle, Washington, on 21 July 1991 while undergoing treatment for leukemia. Allan was known as a pioneering and highly innovative biochemist, helping to define the field of molecular evolution and establish the use of a molecular clock to measure evolutionary change between living species. The molecular clock, a method of measuring the timescale of evolutionary change between two organisms on the basis of the number of mutations that they have accumulated since last sharing a common genetic ancestor, was an idea initially championed by Émile Zuckerkandl and Linus Pauling (Zuckerkandl & Pauling 1962), on the basis of their observations that the number of changes in an amino acid sequence was roughly linear with time in the aligned hemoglobin proteins of animals. Although it is now not unusual to see the words ‘molecular evolution’ and ‘molecular phylogeny’ together, when Allan formed his own biochemistry laboratory in 1964 at the University of California, Berkeley, many scientists in the field of evolutionary biology considered these ideas complete heresy. Allan’s death at the relatively young age of 56 years left behind his wife, Leona (deceased in 2009), a daughter, Ruth (b. 1961), and a son, David (b. 1964), as well as his mother, Eunice (deceased in 2002), a younger brother, Gary Wilson, and a sister, Colleen Macmillan, along with numerous nieces, nephews and cousins in New Zealand, Australia and the USA. In this short span of time, he trained more than 55 doctoral students and helped launch the careers of numerous postdoctoral fellows.

Allan Charles Wilson, Biochemistry; Molecular Biology: Berkeley



The sudden death of Allan Wilson, of leukemia, on 21 July 1991, at the age of 56, and at the height of his powers, robbed the Berkeley campus and the international scientific community of one of its most active and respected leaders.

Read Full Post »

Larry H Bernstein, MD, Reporter

Leaders in Pharmaceutical Intelligence

Special Achievement Award in Medical Science

Award Description

Mary-Claire King
For bold, imaginative, and diverse contributions to medical science and human rights — she discovered the BRCA1 gene locus that causes hereditary breast cancer and deployed DNA strategies that reunite missing persons or their remains with their families.

The 2014 Lasker~Koshland Award for Special Achievement in Medical Science honors a scientist who has made bold, imaginative, and diverse contributions to medical science and human rights. Mary-Claire King(University of Washington, Seattle) discovered the BRCA1 gene locus that causes hereditary breast cancer and deployed DNA strategies that reunite missing persons or their remains with their families. Her work has touched families around the world.

As a statistics graduate student in the late 1960s, King took the late Curt Stern’s genetics course just for fun. The puzzles she encountered there—problems posed by Stern—enchanted her. She was delighted to learn that people could be paid to solve such problems, and that mathematics holds their key. She decided to study genetics and never looked back.

During her Ph.D. work with the late Allan Wilson (University of California, Berkeley), King discovered that the sequences of human and chimpanzee proteins are, on average, more than 99 percent identical; DNA sequences that do not code for proteins differ only a little more. The two primates therefore are much closer cousins than suggested by fossil studies of the time. The genetic resemblance seemed to contradict obvious distinctions: Human brains outsize those of chimps; their limbs dwarf ours; and modes of communication, food gathering, and other lifestyle features diverge dramatically. King and Wilson proposed that these contrasts arise not from disparities in DNA sequences that encode proteins, but from a small number of differences in DNA sequences that turn the protein-coding genes on and off.

Just as genetic changes drive species in new directions, they also can propel cells toward malignancy. From an evolutionary perspective, the topic of breast cancer began to intrigue King. The illness runs in families and is clearly inherited, yet many affected women have no close relatives with the disease. It is especially deadly for women whose mothers succumbed to it—and risk increases for those who have a mother or sister with breast cancer, particularly if the cancer struck bilaterally or before menopause. Unlike the situation with lung cancer, no environmental exposure distinguishes sisters who get breast cancer from those who remain disease free.

By studying a rare familial cancer, Alfred Knudsen (Lasker Clinical Medical Research Award, 1998) had shown in the early 1970s how an inherited genetic defect could increase vulnerability to cancer. In the model he advanced, some families harbor a damaged version of a gene that normally encourages proper cellular behavior. Genetic mishaps occur during a person’s lifetime, and a second “hit” in a cell that already carries the inherited liability nudges the injured cell toward malignancy. A similar story might play out in families with a high incidence of breast cancer, King reasoned. She began to hunt for the theoretical pernicious gene in 1974.

The hunt
Many geneticists doubted that susceptibility to breast cancer would map to a single gene; even if it did, finding the culprit seemed unlikely for numerous reasons. First, most cases are not familial and the disease is common—so common that inherited and non-inherited cases could occur in the same families. Furthermore, the malady might not strike all women who carry a high-risk gene, and different families might carry different high-risk genes. Prevailing views held that the ailment arises from the additive effects of multiple undefined genetic and environmental insults and from complicated interactions among them. No one had previously tackled such complexities, and an attempt to unearth a breast cancer gene seemed woefully naïve.

To test whether she could find evidence that particular genes increase the odds of getting breast cancer, King applied mathematical methods to data from more than 1500 families of women younger than 55 years old with newly diagnosed breast cancer. The analysis, published in 1988, suggested that four percent of the families carry a single gene that predisposes individuals to the illness.

The most convincing way to validate this idea was to track down the gene. Toward this end, King analyzed DNA from 329 participating relatives with 146 cases of invasive breast cancer. In many of the 23 families to which the participants belonged, the scourge struck young women, often in both breasts, and in some families, even men.

In late 1990, King (by then a professor at the University of California, Berkeley) hit her quarry. She had zeroed in on a suspicious section of chromosome 17 that carried particular genetic markers in women with breast cancer in the most severely affected families. Somewhere in that stretch of DNA lay the gene, which she named BRCA1.
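The statistical evidence behind such a mapping result is conventionally summarized as a two-point LOD score: the log10 likelihood ratio of a proposed recombination fraction between marker and disease locus versus free recombination. The counts below are hypothetical, and phase-known meioses are assumed for simplicity; they are not King’s actual data.

```python
# Two-point LOD score for linkage between a marker and a disease locus,
# the kind of evidence used to localize BRCA1 on chromosome 17.
# Hypothetical counts; phase-known meioses assumed for simplicity.

import math

def lod(recombinants, nonrecombinants, theta):
    """log10 likelihood ratio: recombination fraction theta vs. 0.5."""
    n = recombinants + nonrecombinants
    l_theta = (theta ** recombinants) * ((1 - theta) ** nonrecombinants)
    l_null = 0.5 ** n
    return math.log10(l_theta / l_null)

# 2 recombinants out of 20 informative meioses at theta = 0.1
score = lod(2, 18, theta=0.1)
print(round(score, 2))  # 3.2
```

By the usual convention, a LOD score of 3 or more (odds of at least 1000:1 in favor of linkage) is taken as significant evidence that the marker and the disease gene are linked.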

This discovery spurred an international race to find the gene. Four years later, scientists at Myriad Genetics, Inc. isolated it. Alterations in either BRCA1 or a second breast-cancer susceptibility gene, BRCA2, found by Michael Stratton and colleagues (Institute of Cancer Research, UK) increase risk of ovarian as well as breast cancer. The proteins encoded by these genes help maintain cellular health by repairing marred DNA. When the BRCA1 or BRCA2 proteins fail to perform their jobs, genetic integrity is compromised, thus setting the stage for cancer.

About 12 percent of women in the general population get breast cancer at some point in their lives. In contrast, 65 percent of women who inherit an abnormal version of BRCA1 and about 45 percent of women who inherit an abnormal version of BRCA2 develop breast cancer by the time they are 70 years old. Individuals with troublesome forms of BRCA1 and BRCA2 can now be identified, monitored, counseled, and treated appropriately.
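The figures above imply a several-fold elevation in risk for mutation carriers; the arithmetic, using only the numbers quoted in the text:

```python
# Fold-increase in breast cancer risk implied by the figures above.
general_risk = 0.12   # lifetime risk in the general population
brca1_risk = 0.65     # risk by age 70 with an abnormal BRCA1
brca2_risk = 0.45     # risk by age 70 with an abnormal BRCA2

print(round(brca1_risk / general_risk, 1))  # roughly 5.4-fold
print(round(brca2_risk / general_risk, 1))  # roughly 3.8-fold
```

(The comparison is approximate, since the carrier risks are quoted to age 70 while the population figure is a full lifetime risk.)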

Harmful versions of other genes also predispose women to breast cancer, ovarian cancer, or both. Several years ago, King devised a scheme to screen for all of these genetic miscreants. This strategy allows genetic testing and risk determination for breast and ovarian cancer; it is already in clinical practice.

Genetic tools, human rights
King has applied her expertise to aid people who suffer from ills perpetrated by humans as well as genes. She helped find the “lost children” of Argentina—those who had been kidnapped as infants or born while their mothers were in prison during the military regime of the late 1970s and early 1980s. Some of these youngsters had been illegally adopted, many by military families. In 1983, King began identifying individuals, first with a technique that was originally designed to match potential organ transplant donors and recipients. She then developed an approach that relies on analysis of DNA from mitochondria—a cellular component that passes specifically from mother to child, and is powerful for connecting people to their female forebears. King helped prove genetic relationships and thus facilitated the reunion of more than 100 of the children with their families.

Later, the Argentinian government asked if she could help identify dead bodies of individuals thought to have been murdered. King harnessed the same method to figure out who had been buried in mass graves. She established that teeth, whose enamel coating protects DNA in the dental pulp from degradation, offer a valuable resource when attempting to trace remains in situations where long periods have elapsed since the time of death.

This and related approaches have been used to identify soldiers who went missing in action, including the remains of an American serviceman who was buried beneath the Tomb of the Unknowns in Arlington National Cemetery for 14 years, as well as victims of natural disasters and man-made tragedies such as 9/11.

Mary-Claire King has employed her intellect, dedication, and ethical sensibilities to generate knowledge that has catalyzed profound changes in health care, and she has applied her expertise to promote justice where nefarious governments have terrorized their citizens.

by Evelyn Strauss

Read Full Post »

Resuscitation From Sudden Cardiac Arrest: Common Variation in Fatty Acid Genes

Reporter: Aviva Lev-Ari, PhD, RN

Common Variation in Fatty Acid Genes and Resuscitation From Sudden Cardiac Arrest

Catherine O. Johnson, PhD, MPH, Rozenn N. Lemaitre, PhD, MPH, Carol E. Fahrenbruch, MSPH, Stephanie Hesselson, PhD, Nona Sotoodehnia, MD, MPH, Barbara McKnight, PhD, Kenneth M. Rice, PhD, Pui-Yan Kwok, MD, PhD, David S. Siscovick, MD, MPH and Thomas D. Rea, MD, MPH

Author Affiliations

From the Departments of Medicine (C.O.J., R.N.L., N.S., D.S.S., T.D.R.), Biostatistics (B.M., K.M.R.), and Epidemiology (D.S.S.), University of Washington, Seattle; King County Emergency Medical Services, Seattle, WA (C.E.F.); and Institute of Human Genetics, University of California San Francisco (S.H., P.-Y.K.).

Correspondence to Catherine O. Johnson, PhD, MPH, Department of Medicine, University of Washington, CHRU 1730 Minor Ave, Suite 1360, Seattle, WA 98101. E-mail johnsoco@uw.edu


Background—Fatty acids provide energy and structural substrates for the heart and brain and may influence resuscitation from sudden cardiac arrest (SCA). We investigated whether genetic variation in fatty acid metabolism pathways was associated with SCA survival.

Methods and Results—Subjects (mean age, 67 years; 80% male, white) were out-of-hospital SCA patients found in ventricular fibrillation in King County, WA. We compared subjects who survived to hospital admission (n=664) with those who did not (n=689), and subjects who survived to hospital discharge (n=334) with those who did not (n=1019). Associations between survival and genetic variants were assessed using logistic regression adjusting for age, sex, location, time to arrival of paramedics, whether the event was witnessed, and receipt of bystander cardiopulmonary resuscitation. Within-gene permutation tests were used to correct for multiple comparisons. Variants in 5 genes were significantly associated with SCA survival. After correction for multiple comparisons, single-nucleotide polymorphisms in ACSL1 and ACSL3 were significantly associated with survival to hospital admission. Single-nucleotide polymorphisms in ACSL3, AGPAT3, MLYCD, and SLC27A6 were significantly associated with survival to hospital discharge.

Conclusions—Our findings indicate that variants in genes important in fatty acid metabolism are associated with SCA survival in this population.
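The study used multivariable logistic regression with covariate adjustment and permutation correction; as a rough unadjusted sketch of how a single SNP-to-survival association is summarized, a 2×2 odds ratio with a Wald confidence interval can be computed. The counts below are invented, not the study’s data.

```python
# Unadjusted odds ratio with 95% Wald CI from a 2x2 table, as a toy
# stand-in for the paper's adjusted logistic-regression estimates.
# Counts are invented for illustration.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = carriers surviving, b = carriers not surviving,
       c = non-carriers surviving, d = non-carriers not surviving."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of the log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts for one SNP vs. survival to hospital admission
or_, lo, hi = odds_ratio_ci(120, 80, 100, 140)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

In the actual analysis each genotype term sits inside a logistic model alongside age, sex, location, response time, witnessed status, and bystander CPR, so the published estimates are adjusted rather than raw table ratios like this one.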


Circulation: Cardiovascular Genetics. 2012;5:422–429

Published online before print June 1, 2012

doi: 10.1161/CIRCGENETICS.111.961912


Read Full Post »

HeLa DNA: Lacks family and the N.I.H. settled on an agreement: the data from both studies should be stored in the institutes’ database of Genotypes and Phenotypes

Reporter: Aviva Lev-Ari, PhD, RN


A Family Consents to a Medical Gift, 62 Years Later

Henrietta Lacks was only 31 when she died of cervical cancer in 1951 in a Baltimore hospital. Not long before her death, doctors removed some of her tumor cells. They later discovered that the cells could thrive in a lab, a feat no human cells had achieved before.

Lacks Family, via The Henrietta Lacks Foundation — Henrietta Lacks in the 1940s.

Soon the cells, called HeLa cells, were being shipped from Baltimore around the world. In the 62 years since — twice as long as Ms. Lacks’s own life — her cells have been the subject of more than 74,000 studies, many of which have yielded profound insights into cell biology, vaccines, in vitro fertilization and cancer.

But Henrietta Lacks, who was poor, black and uneducated, never consented to her cells’ being studied. For 62 years, her family has been left out of the decision-making about that research. Now, over the past four months, the National Institutes of Health has come to an agreement with the Lacks family to grant them some control over how Henrietta Lacks’s genome is used.

“In 20 years at N.I.H., I can’t remember something like this,” Dr. Francis S. Collins, the institute’s director, said in an interview.

The agreement, which does not provide the family with the right to potential earnings from future research on Ms. Lacks’s genome, was prompted by two projects to sequence the genome of HeLa cells, the second of which was published Wednesday in the journal Nature.

Though the agreement, which was announced Wednesday, is a milestone in the saga of Ms. Lacks, it also draws attention to a lack of policies to balance the benefits of studying genomes with the risks to the privacy of people whose genomes are studied — as well as their relatives.

As the journalist Rebecca Skloot recounted in her 2010 best-seller, “The Immortal Life of Henrietta Lacks,” it was not until 1973, when a scientist called to ask for blood samples to study the genes her children had inherited from her, that Ms. Lacks’s family learned that their mother’s cells were, in effect, scattered across the planet.

Some members of the family tried to find more information. Some wanted a portion of the profits that companies were earning from research on HeLa cells. They were largely ignored for years.

Ms. Lacks is survived by children, grandchildren and great-grandchildren, many still living in or around Baltimore.

And this March they experienced an intense feeling of déjà vu.

Scientists at the European Molecular Biology Laboratory published the genome of a line of HeLa cells, making it publicly available for downloading. Another study, sponsored by the National Institutes of Health at the University of Washington, was about to be published in Nature. The Lacks family was made aware of neither project.

“I said, ‘No, this is not right,’ ” Jeri Lacks Whye, one of Henrietta Lacks’s grandchildren, said in an interview. “They should not have this up unless they have consent from the family.”

Officials at the National Institutes of Health now acknowledge that they should have contacted the Lacks family when researchers first applied for a grant to sequence the HeLa genome. They belatedly addressed the problem after the family raised its objections.

The European researchers took down their public data, and the publication of the University of Washington paper was stopped. Dr. Collins and Kathy L. Hudson, the National Institutes of Health deputy director for science, outreach and policy, made three trips to Baltimore to meet with the Lacks family to discuss the research and what to do about it.

“The biggest concern was privacy — what information was actually going to be out there about our grandmother, and what information they can obtain from her sequencing that will tell them about her children and grandchildren and going down the line,” Ms. Lacks Whye said.

The Lacks family and the N.I.H. settled on an agreement: the data from both studies should be stored in the institutes’ database of genotypes and phenotypes. Researchers who want to use the data can apply for access and will have to submit annual reports about their research. A so-called HeLa Genome Data Access working group at the N.I.H. will review the applications. Two members of the Lacks family will be members. The agreement does not provide the Lacks family with proceeds from any commercial products that may be developed from research on the HeLa genome.

With this agreement in place, the University of Washington researchers were then able to publish their results. Their analysis goes beyond the European study in several ways. Most important, they show precisely where each gene is situated in HeLa DNA.

A human genome is actually two genomes, each passed down from a parent. The two versions of a gene may be identical, or they may carry genetic variations setting them apart.

“If you think of the variations as beads on a string, you really have two strings,” said Dr. Jay Shendure, who led the Washington genome study. “The way we sequence genomes today, for the most part we just get a list of where the genes are located, but no information about which ones are on which string.”

Dr. Shendure and his colleagues have developed new methods that allow them to gather that information. By reconstructing both strings of the HeLa genome, they could better understand how Ms. Lacks’s healthy cells had been transformed over the past 60 years.

For example, they could see how Ms. Lacks got cancer. Cervical cancer is caused by human papillomavirus infections. The virus accelerates the growth of infected cells, which may go on to become tumors.

Dr. Shendure and his colleagues discovered the DNA of a human papillomavirus embedded in Ms. Lacks’s genome. By landing at a particular spot, Ms. Lacks’s virus may have given her cancer cells their remarkable endurance.

“That’s one of the frequent questions that I and the Lacks family get whenever we talk about this stuff,” Ms. Skloot said. “The answer was always, ‘We don’t know.’ Now, there’s at least somewhat of an answer: because it happened to land right there.”

Richard Sharp, the director of biomedical ethics at the Mayo Clinic, said he thought the agreement “was pretty well handled.” But he warned that it was only a “one-off solution,” rather than a broad policy to address the tension between genome research and the privacy of relatives, now that recent research has demonstrated that it is possible to reveal a person’s identity through sequencing.

Dr. Sharp considered it impractical to set up a working group of scientists and relatives for every genome with these issues. “There’s absolutely a need for a new policy,” he said.

Eric S. Lander, the founding director of the Broad Institute, a science research center at Harvard and M.I.T., said resolving these issues was crucial to taking advantage of the knowledge hidden in our genomes.

“If we are going to solve cancer, it’s going to take a movement of tens of thousands, or hundreds of thousands, of patients willing to contribute information from their cancer genomes towards a common good,” Dr. Lander said. “We are going to need to have ways to have patients feel comfortable doing that. We can’t do it without a foundation of respect and trust.”



Read Full Post »

Clinical Decision Support Systems for Management Decision Making of Cardiovascular Diseases

Author, and Content Consultant to e-SERIES A: Cardiovascular Diseases: Justin Pearlman, MD, PhD, FACC


Curator: Aviva Lev-Ari, PhD, RN


WordCloud Image Produced by Adam Tubman

Clinical Decision Support Systems (CDSS)

A clinical decision support system (CDSS) is an interactive decision support system (DSS). It generally relies on computer software designed to assist physicians and other health professionals with decision-making tasks, such as settling on a particular diagnosis or choosing further specific tests or treatments. A functional definition proposed by Robert Hayward of the Centre for Health Evidence defines CDSS as follows: “Clinical Decision Support systems link health observations with health knowledge to influence health choices by clinicians for improved health care”. CDSS is a major topic in artificial intelligence in medicine.

Vinod Khosla of Khosla Ventures, in a Fortune Magazine article of December 4, 2012, “Technology will replace 80% of what doctors do”, wrote about CDSS as a harbinger of science in medicine.

Computer-assisted decision support is in its infancy, but we have already begun to see meaningful impact on healthcare. Meaningful use of computer systems is now rewarded under the Affordable Care Act. Studies have demonstrated the ability of computerized clinical decision support systems to lower diagnostic errors of omission significantly, by directly countering cognitive bias. Isabel is a differential diagnosis tool and, according to a Stony Brook study, matched the diagnoses of experienced clinicians in 74% of complex cases. The match improved to 95% after a more rigorous entry of patient data. The IBM supercomputer, Watson, after beating all humans at the intelligence-based task of playing Jeopardy, is now turning its attention to medical diagnosis. It can process natural language questions and is fast at parsing high volumes of medical information, reading and understanding 200 million pages of text in 3 seconds.

Examples of CDSS

  1. DiagnosisPro
  2. DXplain
  3. MYCIN
  4. RODIA


“When Should a Physician Deviate from the Diagnostic Decision Support Tool and What Are the Associated Risks?”


Justin D. Pearlman, MD, PhD

A Decision Support System consists of one or more tools that help achieve good decisions. Decisions that can benefit from a DSS include: whether or not to undergo surgery, or to undergo a stress test first; whether or not to have an annual mammogram starting at a particular age, or a computed tomography (CT) scan to screen for lung cancer; and whether or not to utilize intensive care support (a ventilator, chest shocks, chest compressions, forced feeding, strong antibiotics, and so on) versus care directed to comfort measures only, without regard to longevity.

Any DSS can be viewed like a digestive tract, chewing on input, and producing output, and like the digestive tract, the output may only be valuable to a farmer. A well designed DSS is efficient in the input, timely in its processing and useful in the output. Mathematically, a DSS is a model with input parameters and an output variable or set of variables that can be used to determine an action. The input can be categorical (alive, dead), semi-quantitative (cold-warm-hot), or quantitative (temperature, systolic blood pressure, heart rate, oxygen saturation). The output can be binary (yes-no) or it can express probabilities or confidence intervals.

The process of defining specifications for a function and then deriving a useful function is called mathematical modeling. We will derive the function for “average” as an example. By way of specifications, we want to take a list of numbers as input and produce a single number that represents the middle of the pack, or “central tendency.” The order of the list should not matter, and if we change scales, the output should scale the same way. For example, if we use centimeters instead of inches, at 2.54 centimeters to an inch, then the output should increase by the multiplier 2.54. If the numbers in the list are all the same, then the output should be that same value. Representing these specifications symbolically:

1. order doesn’t matter: f(a,b) = f(b,a), where “a” and “b” are input values, “f” is the function.

2. multipliers pass through (linearity):  f(ka,kb)=k f(a,b), where k is a scalar e.g. 2.54 cm/inch.

3. identity:  f(a,a,a,…) = a

Properties 1 and 2 lead us to consider linear functions consisting of sums and multipliers: f(a,b,c)=Aa+Bb+Cc …, where the capital letters are multipliers by “constants” – numbers that are independent of the list values a,b,c, and since the order should not matter, we simplify to f(a,b,c)=K (a+b+c+…) because a constant multiplier K makes order not matter. Property 3 forces us to pick K = 1/N where N is the length of the list. These properties lead us to the mathematical solution: average = sum of list of numbers divided by the length of the list.
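The three properties can be checked numerically; a minimal Python sketch (the function name is ours, chosen for illustration):

```python
def average(values):
    """The unique function satisfying symmetry, linearity, and identity."""
    return sum(values) / len(values)

# Property 1: order doesn't matter, f(a, b) = f(b, a)
assert average([3, 7]) == average([7, 3])
# Property 2: multipliers pass through, f(ka, kb) = k f(a, b)
assert average([2 * 3, 2 * 5]) == 2 * average([3, 5])
# Property 3: identity on a constant list, f(a, a, a) = a
assert average([5, 5, 5]) == 5
```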

A coin flip is a simple DSS: heads I do it, tails I don’t. The challenge of a good DSS is to perform better than random choice and also perform better (more accurately, more efficiently, more reliably, more timely and/or under more adverse conditions) than unassisted human decision making.

Therefore, I propose the following guiding principles for DSS design: choose inputs wisely (accessible, timely, efficient, relevant), determine to what you want output to be sensitive AND to what you want output to be insensitive, and be very clear about your measures of success.

For example, consider designing a DSS to determine whether a patient should receive the full range of support capabilities of an intensive care unit (ICU), or not. Politicians have cited the large bump in the cost of the last year of life as an opportunity to reduce costs of healthcare, and now pay primary care doctors to encourage patients to establish advance directives not to use ICU services. From the DSS standpoint, the reasoning is flawed because the decision not to use ICU services should be sensitive to benefit as well as cost, commonly called cost-benefit analysis. If we measure success of ICU services by the benefit of quality life net gain (QLNG, “quailing”), measured in quality life-years (QuaLYs), and achieve 50% success, then the cost per QuaLY measures the cost-benefit of ICU services. In various cost-benefit decisions, the US Congress has decided to proceed if the cost is under $20,000 to $100,000 per QuaLY. If ICU services achieve such a cost-benefit, then it is not logical to summarily block such services in advance. Rather, the ways to reduce those costs include improving the cost efficiency of ICU care, and improving the decision-making about who will benefit.

An example of a DSS is the prediction of plane failure from a thousand measurements of strain and function of various parts of an airplane. The desired output is the probability of failure to complete the next flight safely. Cost-benefit analysis then establishes what threshold, or operating point, on that probability merits grounding the plane for further inspection and preventative maintenance repairs.

The notion of an operating point brings up another important concept in decision support. At first blush, one might think the success of a DSS is determined by its ability to correctly identify a predicted outcome, such as futility of ICU care (when will the end result be no quality life net gain). The flaw in that measure of success is that it depends on prevalence in the study group. As an extreme example, if you study a group of patients with fatal gunshot wounds to the head, none will benefit and the DSS requirement is trivial and any DSS that says no for that group has performed well. At the other extreme, if all patients become healthy, the DSS requirement is also trivial, just say yes. Therefore the proper assessment of a DSS should pay attention to the prevalence and the operating point.

The impact of prevalence and operating point on decision-making is addressed by receiver-operator curves. Consider looking at the blood concentration of Troponin-I (TnI) as the sole determinant to decide who is having a heart attack.  If one plots a graph with horizontal axis troponin level and vertical axis ultimate proof of heart attack, the percentage of hits will generally be higher for higher values of TnI. To create such a graph, we compute a “truth table” which reports whether the test was above or below a decision threshold operating point, and whether or not the disease (heart attack) was in fact present:


                   Disease                Not Disease
Test Positive      True Positive (TP)     False Positive (FP)
Test Negative      False Negative (FN)    True Negative (TN)
The sensitivity to the disease is the true positive rate (TPR), the percentage of all disease cases that are ranked by the decision support as positive: TPR = TP/(TP+FN). 100% sensitivity can be achieved trivially by lowering the threshold for a positive test to zero, but at a cost. While sensitivity is necessary for success, it is not sufficient. In addition to wanting sensitivity to disease, we want to avoid labeling non-disease as disease. That is often measured by specificity, the true negative rate (TNR), the percentage of those without disease who are correctly identified as not having disease: TNR = TN/(FP+TN). I propose we also define the complement of specificity, the anti-sensitivity, as the false positive rate (FPR): FPR = FP/(FP+TN) = 1 - TNR. Anti-sensitivity is the penalty cost of lowering the diagnostic threshold to boost sensitivity, as the concomitant rise in anti-sensitivity means a growing number of non-disease subjects are labeled as having disease. We want high sensitivity to true disease without high anti-sensitivity to false disease, and we want to be insensitive to common distractors. In these formulas, note that false negatives (FN) are cases that truly have disease, and false positives (FP) are cases that truly do not, so the denominator TP+FN totals all disease cases and FP+TN totals all non-disease cases.
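These definitions translate directly into code; a minimal sketch (the counts are illustrative):

```python
def rates(tp, fn, fp, tn):
    """Sensitivity (TPR), specificity (TNR), and anti-sensitivity (FPR)
    from the four cells of the truth table."""
    tpr = tp / (tp + fn)   # sensitivity: disease cases called positive
    tnr = tn / (fp + tn)   # specificity: non-disease cases called negative
    fpr = fp / (fp + tn)   # anti-sensitivity = 1 - specificity
    return tpr, tnr, fpr

# Illustrative counts: 90 TP, 10 FN, 20 FP, 80 TN
sens, spec, anti = rates(90, 10, 20, 80)   # 0.9, 0.8, 0.2
```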

The graph in figure 1 justifies the definition of anti-sensitivity. It is an ROC or “Receiver-Operator Curve” which is a plot of sensitivity versus anti-sensitivity for different diagnostic thresholds of a test (operating points). Note, higher sensitivity comes at the cost of higher anti-sensitivity. Where to operate (what threshold to use for diagnosis) can be selected according to cost-benefit analysis of sensitivity versus anti-sensitivity (and specificity).

Figure 1 ROC (Receiver-Operator Curve): Graph of sensitivity (true positive rate) versus anti-sensitivity (false positive rate) computed by changing the operating point (the threshold for declaring a test numeric value positive for disease). A high area under the curve (AUC) is favorable because it means less anti-sensitivity for high sensitivity (the upper left corner of the shaded area sits further left and higher). The dots on the curve are operating points. An inclusive operating point (high on the curve, high sensitivity) is used for screening tests, whereas an exclusive operating point (low on the curve, low anti-sensitivity) is used for definitive diagnosis.
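An ROC curve of this kind can be traced by sweeping the operating point across the observed test values; a sketch with made-up troponin-like scores (not clinical data):

```python
def roc_points(scores, labels):
    """Return (FPR, TPR) pairs, i.e. (anti-sensitivity, sensitivity),
    for each candidate operating point among the observed scores."""
    points = []
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        points.append((fp / (fp + tn), tp / (tp + fn)))
    return points

# Made-up scores with outcomes (1 = heart attack confirmed)
scores = [0.1, 0.2, 0.3, 0.8, 0.9, 1.2]
labels = [0,   0,   1,   0,   1,   1]
curve = roc_points(scores, labels)
# The lowest threshold labels everyone positive: (FPR, TPR) = (1.0, 1.0)
```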

Cost-benefit analysis is generally based on a semi-lattice, or upside-down branching tree, which represents all choices and outcomes. It is important to include all branches down to final outcomes. For example, if the test is a mammogram to screen for breast cancer, the cost is not just the cost of the test, nor is the benefit simply “early diagnosis.” The cost-benefit calculation forces us to put a numerical value on the impact, such as a financial cost to an avoidable death, or we can get a numerical result in terms of quality life years expected. The cost, however, is not just the cost of the mammogram, but also of downstream events such as the needle biopsies for the suspicious “positives” and so on.

Figure 2 Semi-lattice Decision Tree: Starting from all patients, create a branch point for your test result, and add further branch points for any subsequent step-wise outcomes until you reach the “bottom line.” Assign a value to each, resulting in a numerical net cost and net benefit. If tests invoke risks (for example, needle biopsy of lung can collapse a lung and require hospitalization for a chest tube) then insert branch points for whether the complication occurs or not, as the treatment of a complication counts as part of the cost. The intermediary nodes can have probability of occurrence as their numeric factor, and the bottom line can apply the net probability of the path leading to a value as a multiplier to the dollar value (a 10% chance of costing $10,000 counts as an expectation cost of 0.1 x 10,000 = $1,000).
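The bottom-line arithmetic can be sketched as a sum over terminal branches (the branch list below is an illustrative stand-in):

```python
def expected_cost(branches):
    """Expected cost of a decision path: each terminal branch carries
    (net probability along the path, dollar cost at the leaf)."""
    return sum(p * cost for p, cost in branches)

# A 10% chance of a $10,000 complication contributes about $1,000
cost = expected_cost([(0.1, 10_000)])
```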

A third area of discussion is the statistical power of a DSS: how reliable is it in the application you care about? DSS design commonly inverts the usual statistical setting, which addresses the significance of deviations in a small number of variables measured many times in a large population. Instead, a DSS often uses many variables to fully describe or characterize the status of a small population. For example, thousands of different measurements may be performed on a few dozen airplanes, aiming to predict when a plane should be grounded for repairs. A similar inversion of numbers, numerous variables and a small number of cases, is common in genomics studies.

The success of a DSS is measured by its predictive value compared to outcomes or other measures of success. Thus measures of success include positive predictive value, negative predictive value, and confidence. A major problem with DSS is the inversion of the usually desired ratio of repetitions to measurement variables. When you get a single medical lab test, you have a single measurement value such as potassium level and a large number of normal subjects for comparison. If we knew the mean μ and standard deviation σ that describe the distribution of normal values in the population at large, then we could compute the confidence in the decision to call our observed value abnormal, based on the normal distribution:

f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{ -\frac{(x-\mu)^2}{2\sigma^2} }

A value may be deemed distinctive based on a 95% confidence interval if it falls outside of the norm, say by more than twice the standard deviation σ, thereby establishing that it is unlikely to be random as the distance from the mean excludes 95% of the normal distribution.

The determination of confidence in an observed set of results stems from maximized likelihood estimates. Earlier in this article we described how to derive the mean, or center, of a set of measurements. A similar analysis can derive the standard deviation (the square root of the variance) as a measure of spread around the mean, as well as other descriptive statistics based on sample values. These formulas describe the distribution of sample values about the mean. The calculation is based on a simple inversion. If we knew the mean and variance of a population of values for a measurement, we could calculate the likelihood of each new measurement falling a particular distance from the mean, and we could calculate the combined likelihood for a set of observed values. Maximized Likelihood Estimation (MLE) simply inverts the method of calculation. Instead of treating the mean and variance as known, we treat the N sample observations x as the known data, and apply calculus to find the values of the unknown mean and unknown variance that maximize the joint likelihood of the observations under the frequency distribution above, which yields the following formulas:

\sigma = \sqrt{\frac{1}{N}\left[(x_1-\mu)^2 + (x_2-\mu)^2 + \cdots + (x_N - \mu)^2\right]}, {\rm \ \ where\ \ } \mu = \frac{1}{N} (x_1 + \cdots + x_N),

The frequency distribution (a function of mean and spread) reports the frequency of observing x if it is drawn from a population with the specified mean μ and standard deviation σ . We can invert that by treating the observations, x, as known and the mean μ and standard deviation σ unknown, then calculate the values μ and  σ that maximize the likelihood of our sample set as coming from the dynamically described population.
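The closed-form MLE estimates above are easy to compute directly; a sketch (the sample values are arbitrary):

```python
import math

def mle_mean_sigma(samples):
    """Maximum-likelihood estimates of the normal mean and standard
    deviation, matching the closed-form expressions above (note the
    1/N divisor, not 1/(N-1))."""
    n = len(samples)
    mu = sum(samples) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / n)
    return mu, sigma

mu, sigma = mle_mean_sigma([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
# mu = 5.0, sigma = 2.0
```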

In DSS there is typically an inversion of the usually requisite large number of samples (small versus large) and number of variables (large versus small). This inversion has major consequences for data confidence. If you measure 14 independent variables instead of one, each at 95% confidence, the net confidence drops exponentially to less than 50%: 0.95^14 ≈ 49%. In the airplane grounding screen tests, 1000 independent variables, at 95% confidence each, yield a net confidence of only 5 x 10^-23, which is 10 sextillion times less than 50% confidence. This same problem arises in genomics research, in which we have a large array of gene product measurements on a small number of patients. Standard statistical tools are problematic at high variable counts. One can turn to qualitative grouping tools such as exploratory factor analysis, or recover statistical robustness with HykGene, a combined cluster and ranking method devised by the author to improve dramatically the ability to identify distinctions with confidence when the number of variables is high.
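The exponential collapse is one line of arithmetic:

```python
def net_confidence(per_test, n_tests):
    """Joint confidence of a conjunction of independent tests,
    each held at the per-test confidence level."""
    return per_test ** n_tests

assert round(net_confidence(0.95, 14), 2) == 0.49   # already below 50%
assert net_confidence(0.95, 1000) < 1e-22           # about 5e-23
```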

Evolution of DSS

Aviva Lev-Ari, PhD, RN

The examples provided above refer to binary models, one family of DSS. Another type of DSS is multivariate in nature, where multiple multivariate scenarios constitute the alternative choice options. Development in the DSS field over the last decade included the design of recommendation engines, which take manifested preference functions and trade off multiple simultaneous objectives against a cost function. A game-theoretic context is embedded in recommendation engines. The output mentioned above is, in fact, an array of options, each with a probability of reward assigned by the recommendation engine.

Underlying Computation Engines

Methodological Basis of Clinical DSS

There are many different methodologies that can be used by a CDSS in order to provide support to the health care professional.[7]

The basic components of a CDSS include a dynamic (medical) knowledge base and an inference mechanism (usually a set of rules derived from the experts and evidence-based medicine) and implemented through medical logic modules based on a language such as Arden syntax. It could be based on Expert systems or artificial neural networks or both (connectionist expert systems).

Bayesian Network

The Bayesian network is a knowledge-based graphical representation of a set of variables, such as diseases and symptoms, and the probabilistic relationships between them. Bayesian networks are based on conditional probabilities, the probability of an event given the occurrence of another event, such as the interpretation of diagnostic tests. Bayes’ rule computes the probability of an event from more readily available information, and it consistently updates the options as new evidence is presented. In the context of CDSS, a Bayesian network can be used to compute the probabilities of the presence of possible diseases given their symptoms.
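A single-test instance of Bayes’ rule can be sketched as follows; the prevalence, sensitivity, and specificity figures are hypothetical, not taken from any system discussed here:

```python
def posterior(prior, sensitivity, specificity):
    """P(disease | positive test) by Bayes' rule."""
    p_pos_given_disease = sensitivity
    p_pos_given_healthy = 1 - specificity
    p_pos = prior * p_pos_given_disease + (1 - prior) * p_pos_given_healthy
    return prior * p_pos_given_disease / p_pos

# Hypothetical test: 1% prevalence, 90% sensitive, 95% specific
p = posterior(0.01, 0.90, 0.95)
# p is about 0.15: at low prevalence even a good test yields a modest posterior
```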

Some of the advantages of Bayesian networks are that they capture the knowledge and conclusions of experts in the form of probabilities, that they assist decision making as new information becomes available, and that they are based on unbiased probabilities applicable to many models.

Some of the disadvantages of Bayesian networks include the difficulty of obtaining the probability knowledge for possible diagnoses, and impracticality for large, complex systems with multiple symptoms. The Bayesian calculations on multiple simultaneous symptoms can be overwhelming for users.

An example of a Bayesian network in the CDSS context is the Iliad system, which uses Bayesian reasoning to calculate posterior probabilities of possible diagnoses given the symptoms provided. The system now covers about 1,500 diagnoses based on thousands of findings.

Another example is the DXplain system that uses a modified form of the Bayesian logic. This CDSS produces a list of ranked diagnoses associated with the symptoms.

A third example is SimulConsult, which began in the area of neurogenetics. By the end of 2010 it covered ~2,600 diseases in neurology and genetics, or roughly 25% of known diagnoses. It addresses the core issue of Bayesian systems, that of a scalable way to input data and calculate probabilities, by focusing specialty by specialty and achieving completeness. Such completeness allows the system to calculate the relative probabilities, rather than the person inputting the data. Using the peer-reviewed medical literature as its source, and applying two levels of peer-review to the data entries, SimulConsult can add a disease with less than a total of four hours of clinician time. It is widely used by pediatric neurologists today in the US and in 85 countries around the world.

Neural Network

Artificial Neural Networks (ANNs) are nonknowledge-based adaptive CDSSs that use a form of artificial intelligence, also known as machine learning, allowing the system to learn from past examples and recognize patterns in clinical information. An ANN consists of nodes, called neurons, and weighted connections that transmit signals between the neurons in a feed-forward or looped fashion. An ANN has three main layers: input (receives data or findings), output (communicates results or possible diseases), and hidden (processes data). The system becomes more accurate as it is trained on larger amounts of data with known results.

The advantages of ANNs include eliminating the need to program the systems and to obtain direct input from experts. An ANN CDSS can process incomplete data by making educated guesses about missing values, and it improves with every use because of its adaptive learning. Additionally, ANN systems do not require large databases to store outcome data with associated probabilities. Among the disadvantages, the training process may be time consuming, leading users not to make use of the systems effectively, and the ANN derives its own formulas for weighting and combining data from statistical patterns over time, which may be difficult to interpret and may lead users to doubt the system’s reliability.

Examples include the diagnosis of appendicitis, back pain, myocardial infarction, psychiatric emergencies and skin disorders. The ANN’s diagnostic predictions of pulmonary embolism were in some cases even better than physicians’ predictions. Additionally, ANN-based applications have been useful in the analysis of ECG (a.k.a. EKG) waveforms.

Genetic Algorithms

Genetic Algorithms (GAs) are a nonknowledge-based method, developed by John Holland at the University of Michigan in the 1970s, based on Darwin’s evolutionary theory of survival of the fittest. These algorithms recombine candidate solutions to form new combinations that outperform the previous solutions. Similar to neural networks, genetic algorithms derive their information from patient data.

An advantage of genetic algorithms is that they iterate toward an optimal solution; the fitness function determines which solutions are good and which can be eliminated. A disadvantage is the lack of transparency in the reasoning of the decision support system, making it less desirable for physicians. The main challenge in using genetic algorithms is defining the fitness criteria. In order to use a genetic algorithm, there must be many components available, such as multiple drugs, symptoms, treatment therapies and so on, in order to solve a problem. Genetic algorithms have proved to be useful in the diagnosis of female urinary incontinence.
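The select-recombine-mutate loop can be sketched in a few lines; the bit-string encoding and count-the-ones fitness below are illustrative stand-ins for a clinical fitness criterion, not part of any system named here:

```python
import random

def genetic_search(fitness, length=12, pop_size=20, generations=60, seed=1):
    """Minimal genetic algorithm: keep the fittest half of the population,
    then refill it by one-point crossover plus a single point mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # survival of the fittest
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)        # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(length)] ^= 1     # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: count of ones, standing in for a clinical score
best = genetic_search(fitness=sum)
```

The fitness function is the part the text identifies as the main design challenge; everything else in the loop is generic.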

Rule-Based System

A rule-based expert system attempts to capture the knowledge of domain experts in expressions that can be evaluated, known as rules; an example rule might read, “If the patient has high blood pressure, he or she is at risk for a stroke.” Once enough of these rules have been compiled into a rule base, the current working knowledge is evaluated against the rule base by chaining rules together until a conclusion is reached. Among the advantages of a rule-based expert system are that it makes it easy to store a large amount of information, and that formulating the rules helps to clarify the logic used in the decision-making process. However, it can be difficult for an expert to transfer their knowledge into distinct rules, and many rules can be required for a system to be effective.
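The chaining step can be sketched with a toy forward-chaining loop; both rules are hypothetical illustrations, not drawn from MYCIN or any real rule base:

```python
# Each rule pairs a set of required facts with the fact it concludes.
RULES = [
    ({"high blood pressure"}, "stroke risk"),
    ({"stroke risk", "atrial fibrillation"}, "anticoagulation candidate"),
]

def forward_chain(facts, rules):
    """Fire rules against working memory until no new conclusion appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

conclusions = forward_chain({"high blood pressure", "atrial fibrillation"}, RULES)
# chains: high blood pressure -> stroke risk -> anticoagulation candidate
```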

Rule-based systems can aid physicians in many different areas, including diagnosis and treatment. An example of a rule-based expert system in the clinical setting is MYCIN. Developed at Stanford University by Edward Shortliffe in the 1970s, MYCIN was based on around 600 rules and was used to help identify the type of bacteria causing an infection. While useful, MYCIN also demonstrates the scale such systems require: a rule base of some 600 rules for a comparatively narrow problem space.

The Stanford AI group subsequently developed ONCOCIN, another rules-based expert system coded in Lisp in the early 1980s.[8] The system was intended to reduce the number of clinical trial protocol violations, and reduce the time required to make decisions about the timing and dosing of chemotherapy in late phase clinical trials. As with MYCIN, the domain of medical knowledge addressed by ONCOCIN was limited in scope and consisted of a series of eligibility criteria, laboratory values, and diagnostic testing and chemotherapy treatment protocols that could be translated into unambiguous rules. ONCOCIN was put into production in the Stanford Oncology Clinic.

Logical Condition

The methodology behind a logical condition is fairly simple: given a variable and a bound, check whether the variable is within or outside the bound, and take action based on the result. An example statement might be “Is the patient’s heart rate less than 50 BPM?” Multiple statements can be linked together to form more complex conditions. Technology such as a decision table can be used to provide an easy-to-analyze representation of these statements.
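A bound check of this kind is a few lines of code; the function name and limits are illustrative, using the heart-rate example from the text:

```python
def check_bounds(name, value, low, high):
    """Return an alert string if the variable falls outside its bounds."""
    if value < low:
        return f"ALERT: {name} {value} below {low}"
    if value > high:
        return f"ALERT: {name} {value} above {high}"
    return None   # within bounds: no alert

# "Is the patient's heart rate less than 50 BPM?"
alert = check_bounds("heart rate", 42, 50, 120)
# -> "ALERT: heart rate 42 below 50"
```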

In the clinical setting, logical conditions are primarily used to provide alerts and reminders to individuals across the care domain. For example, an alert may warn an anesthesiologist that their patient’s heart rate is too low; a reminder could tell a nurse to isolate a patient based on their health condition; finally, another reminder could tell a doctor to make sure he discusses smoking cessation with his patient. Alerts and reminders have been shown to help increase physician compliance with many different guidelines; however, the risk exists that creating too many alerts and reminders could overwhelm doctors, nurses, and other staff and cause them to ignore the alerts altogether.

Causal Probabilistic Network

The primary basis behind the causal network methodology is cause and effect. In a clinical causal probabilistic network, nodes are used to represent items such as symptoms, patient states or disease categories. Connections between nodes indicate a cause and effect relationship. A system based on this logic will attempt to trace a path from symptom nodes all the way to disease classification nodes, using probability to determine which path is the best fit. Some of the advantages of this approach are the fact that it helps to model the progression of a disease over time and the interaction between diseases; however, it is not always the case that medical knowledge knows exactly what causes certain symptoms, and it can be difficult to choose what level of detail to build the model to.

The first clinical decision support system to use a causal probabilistic network was CASNET, used to assist in the diagnosis of glaucoma. CASNET featured a hierarchical representation of knowledge, splitting all of its nodes into one of three separate tiers: symptoms, states and diseases.

  1. ^ a b c d e “Decision support systems.” OpenClinical, 26 July 2005. Retrieved 17 Feb. 2009. <http://www.openclinical.org/dss.html>
  2. ^ a b c d e f g Berner, Eta S., ed. Clinical Decision Support Systems. New York, NY: Springer, 2007.
  3. ^ Khosla, Vinod (December 4, 2012). “Technology will replace 80% of what doctors do”. Retrieved April 25, 2013.
  4. ^ Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, et al. (2005). “Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review.” JAMA 293 (10): 1223–38. doi:10.1001/jama.293.10.1223. PMID 15755945.
  5. ^ Kawamoto K, Houlihan CA, Balas EA, Lobach DF (2005). “Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success.” BMJ 330 (7494): 765. doi:10.1136/bmj.38398.500764.8F. PMC 555881. PMID 15767266.
  6. ^ Gluud C, Nikolova D (2007). “Likely country of origin in publications on randomised controlled trials and controlled clinical trials during the last 60 years.” Trials 8: 7. doi:10.1186/1745-6215-8-7. PMC 1808475. PMID 17326823.
  7. ^ Wagholikar K, et al. “Modeling Paradigms for Medical Diagnostic Decision Support: A Survey and Future Directions”. Journal of Medical Systems. Retrieved 2012.
  8. ^ Shortliffe EH, Scott AC, Bischoff MB, Campbell AB, van Melle W, Jacobs CD. “ONCOCIN: An expert system for oncology protocol management.” Seventh International Joint Conference on Artificial Intelligence (IJCAI), Vancouver, B.C., 1981.

SOURCE for Computation Engines Section and REFERENCES:


Cardiovascular Diseases: Decision Support Systems (DSS) for Disease Management Decision Making – DSS analyzes information from hospital cardiovascular patients in real time and compares it with a database of thousands of previous cases to predict the most likely outcome.

Can aviation technology reduce heart surgery complications?

Algorithm for real-time analysis of data holds promise for forecasting
August 13, 2012

British researchers are working to adapt technology from the aviation industry to help prevent complications among heart patients after surgery. Up to 1,000 sensors aboard aircraft help airlines determine when a plane requires maintenance, reports The Engineer, serving as a model for the British risk-prediction system.

The system analyzes information from hospital cardiovascular patients in real time and compares it with a database of thousands of previous cases to predict the most likely outcome.

“There are vast amounts of clinical data currently collected which is not analyzed in any meaningful way. This tool has the potential to identify subtle early signs of complications from real-time data,” Stuart Grant, a research fellow in surgery at University Hospital of South Manchester, says in a hospital statement. Grant is part of the Academic Surgery Unit working with Lancaster University on the project, which is still in its early stages.

The software predicts the patient’s condition over a 24-hour period using four metrics: systolic blood pressure, heart rate, respiration rate and peripheral oxygen saturation, explains EE Times.

As a comparison tool, the researchers obtained a database of 30,000 patient records from the Massachusetts Institute of Technology and combined it with a smaller, more specialized database from Manchester.

In six months of testing, its accuracy is about 75 percent, The Engineer reports. More data and an improved algorithm could boost that rate to 85 percent, the researchers believe. Making the software web-based would allow physicians to access the data anywhere, even on tablets or phones, and could enable remote consultation with specialists.

In their next step, the researchers are applying for more funding and for ethical clearance for a large-scale trial.

U.S. researchers are working on a similar crystal ball, but one covering an array of conditions. Researchers from the University of Washington, MIT and Columbia University are using a statistical model that can predict future ailments based on a patient’s history–and that of thousands of others.

And the U.S. Department of Health & Human Services is using mathematical modeling to analyze effects of specific healthcare interventions.

Predictive modeling also holds promise to make clinical research easier by using algorithms to examine multiple scenarios based on different kinds of patient populations, specified health conditions and various treatment regimens.

To learn more:
– here’s the Engineer article
– check out the hospital report
– read the EE Times article

Related Articles:
Algorithm looks to past to predict future health conditions
HHS moves to mathematical modeling for research, intervention evaluation
Decision support, predictive modeling may speed clinical research


Can aviation technology reduce heart surgery complications? – FierceHealthIT http://www.fiercehealthit.com/story/can-aviation-technology-reduce-heart-surgery-complications/2012-08-13


Medical Decision Making Tools: Overview of DSS available to date  


Clinical Decision Support Systems – used for Cardiovascular Medical Decisions

Stud Health Technol Inform. 2010;160(Pt 2):846-50.

AALIM: a cardiac clinical decision support system powered by advanced multi-modal analytics.

Amir A, Beymer D, Grace J, Greenspan H, Gruhl D, Hobbs A, Pohl K, Syeda-Mahmood T, Terdiman J, Wang F.


IBM Almaden Research Center, San Jose, CA, USA.


Modern Electronic Medical Record (EMR) systems often integrate large amounts of data from multiple disparate sources. To do so, EMR systems must align the data to create consistency between these sources. The data should also be presented in a manner that allows a clinician to quickly understand the complete condition and history of a patient’s health. We develop the AALIM system to address these issues using advanced multimodal analytics. First, it extracts and computes multiple features and cues from the patient records and medical tests. This additional metadata facilitates more accurate alignment of the various modalities, enables consistency check and empowers a clear, concise presentation of the patient’s complete health information. The system further provides a multimodal search for similar cases within the EMR system, and derives related conditions and drugs information from them. We applied our approach to cardiac data from a major medical care organization and found that it produced results with sufficient quality to assist the clinician making appropriate clinical decisions.

PMID: 20841805 [PubMed – indexed for MEDLINE]
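The abstract above does not specify AALIM’s similarity metric. As a purely illustrative stand-in, a multimodal similar-case search can be reduced to representing each patient as a concatenation of per-modality feature vectors and ranking stored cases by cosine similarity; the feature values and case names below are made up.

```python
import numpy as np

# Purely illustrative: the AALIM abstract does not specify its similarity
# metric. Each case here is a concatenated per-modality feature vector
# (values and case names invented), ranked by cosine similarity.

def most_similar(query, cases, top=2):
    q = query / np.linalg.norm(query)
    sims = [(cid, float(v @ q) / float(np.linalg.norm(v)))
            for cid, v in cases.items()]
    return sorted(sims, key=lambda s: -s[1])[:top]

cases = {
    "case_a": np.array([0.9, 0.1, 0.4, 0.8]),  # e.g. ECG + echo + text features
    "case_b": np.array([0.1, 0.9, 0.7, 0.2]),
    "case_c": np.array([0.5, 0.5, 0.5, 0.5]),
}
query = np.array([0.85, 0.15, 0.45, 0.75])
print(most_similar(query, cases))
```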

DSS development for Enhancement of Heart Drug Compliance by Cardiac Patients 

A good example of a thorough and effective CDSS development process is an electronic checklist developed by Riggio et al. at Thomas Jefferson University Hospital (TJUH) [12]. TJUH had a computerized physician order-entry system in place. To meet congestive heart failure and acute myocardial infarction quality measures (e.g., use of aspirin, beta blockers, and angiotensin-converting enzyme (ACE) inhibitors), a multidisciplinary team including a focus group of residents developed a checklist, embedded in the computerized discharge instructions, that required resident physicians to prescribe the recommended medications or choose from a drop-down list of contraindications. The checklist was vetted by several committees, including the medical executive committee, and presented at resident conferences for feedback and suggestions. Implementation resulted in a dramatic improvement in compliance.
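The checklist logic described by Riggio et al. (each recommended medication must either be prescribed or matched to a contraindication from a fixed drop-down list before discharge instructions can be completed) can be sketched as below; the drug and contraindication names are illustrative, not TJUH’s actual lists.

```python
# Sketch of the discharge-checklist rule described above. The drug names
# and contraindication list are illustrative, not TJUH's actual content.

RECOMMENDED = ["aspirin", "beta_blocker", "ace_inhibitor"]
CONTRAINDICATIONS = {"allergy", "bradycardia", "renal_failure", "hypotension"}

def validate_discharge(orders, exceptions):
    """orders: set of prescribed drugs; exceptions: dict drug -> reason.
    Returns the recommended drugs still blocking discharge."""
    blocking = []
    for drug in RECOMMENDED:
        prescribed = drug in orders
        excused = exceptions.get(drug) in CONTRAINDICATIONS
        if not (prescribed or excused):
            blocking.append(drug)
    return blocking

print(validate_discharge({"aspirin"}, {"beta_blocker": "bradycardia"}))
```

The key design point is that the physician cannot simply skip a drug: every recommended medication requires an explicit action, which is what drove the compliance improvement.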


Early DSS Development at Stanford Medical Center in the 70s

MYCIN (1976)

MYCIN was a rule-based expert system designed to diagnose and recommend treatment for certain blood infections (antimicrobial selection for patients with bacteremia or meningitis). It was later extended to handle other infectious diseases. Clinical knowledge in MYCIN is represented as a set of IF-THEN rules with certainty factors attached to diagnoses. It was a goal-directed system, using a basic backward-chaining reasoning strategy (resulting in exhaustive depth-first search of the rule base for relevant rules, though with additional heuristic support to control the search for a proposed solution). MYCIN was developed in the mid-1970s by Ted Shortliffe and colleagues at Stanford University. It is probably the most famous early expert system, described by Mark Musen as being “the first convincing demonstration of the power of the rule-based approach in the development of robust clinical decision-support systems” [Musen, 1999].

The EMYCIN (Essential MYCIN) expert system shell, employing MYCIN’s control structures, was developed at Stanford in 1980. This domain-independent framework was used to build diagnostic rule-based expert systems such as PUFF, a system designed to interpret pulmonary function tests for patients with lung disease.
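The core mechanism named above (IF-THEN rules with certainty factors, evaluated by backward chaining from a goal) can be illustrated with a toy interpreter. MYCIN’s actual rule base and certainty-factor calculus were far richer; the rules, facts, and the simple multiplicative CF combination below are inventions for illustration only.

```python
# Toy illustration of MYCIN-style reasoning: IF-THEN rules with certainty
# factors (CFs), evaluated by backward chaining. The rules, facts, and the
# multiplicative CF combination are simplified inventions, not MYCIN's own.

RULES = [
    # (premises, conclusion, CF attached to the rule by the expert)
    (["gram_negative", "rod_shaped"], "enterobacteriaceae", 0.8),
    (["enterobacteriaceae", "lactose_fermenter"], "e_coli", 0.7),
]

FACTS = {"gram_negative": 1.0, "rod_shaped": 0.9, "lactose_fermenter": 0.8}

def cf(goal):
    """Backward-chain: to establish `goal`, find rules concluding it and
    recursively establish their premises; a premise set is only as strong
    as its weakest member (a common simplification of MYCIN's calculus)."""
    if goal in FACTS:
        return FACTS[goal]
    best = 0.0
    for premises, conclusion, rule_cf in RULES:
        if conclusion == goal:
            premise_cf = min(cf(p) for p in premises)  # weakest-link premise
            best = max(best, rule_cf * premise_cf)
    return best

print(round(cf("e_coli"), 3))
```

Starting from the goal rather than the data is what makes the search goal-directed: only rules relevant to the current diagnosis question are ever examined.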


ECG for Detection of MI: DSS Use in Cardiovascular Disease Management


Olsson and colleagues also showed that neural networks did a better job than two experienced cardiologists in detecting acute myocardial infarction in electrocardiograms with concomitant left bundle branch block.

Olsson SE, Ohlsson M, Ohlin H, Edenbrandt L. Neural networks—a diagnostic tool in acute myocardial infarction with concomitant left bundle branch block. Clin Physiol Funct Imaging 2002;22:295–299.


The prognosis of acute myocardial infarction (AMI) improves with early revascularization. However, the presence of left bundle branch block (LBBB) in the electrocardiogram (ECG) increases the difficulty in recognizing an AMI, and different ECG criteria for the diagnosis of AMI have proved to be of limited value. The purpose of this study was to detect AMI in ECGs with LBBB using artificial neural networks and to compare the performance of the networks to that of six sets of conventional ECG criteria and two experienced cardiologists. A total of 518 ECGs, recorded at an emergency department, with a QRS duration > 120 ms and an LBBB configuration, were selected from the clinical ECG database. Of this sample 120 ECGs were recorded on patients with AMI, the remaining 398 ECGs being used as a control group. Artificial neural networks of feed-forward type were trained to classify the ECGs as AMI or not AMI. The neural network showed higher sensitivities than both the cardiologists and the criteria when compared at the same levels of specificity. The sensitivity of the neural network was 12% (P = 0.02) and 19% (P = 0.001) higher than that of the cardiologists. Artificial neural networks can be trained to detect AMI in ECGs with concomitant LBBB more effectively than conventional ECG criteria or experienced cardiologists.
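The study trained feed-forward networks on measurements from 518 real ECGs. The sketch below only illustrates the architecture (one hidden layer, trained by full-batch gradient descent); the eight input features and the labels are synthetic stand-ins, not the study’s data.

```python
import numpy as np

# Architecture sketch only: the study used feed-forward networks on real
# ECG measurements. Here a one-hidden-layer network is trained on
# synthetic feature vectors purely to show the mechanics.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                    # 8 invented ECG features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # synthetic AMI / not-AMI label

W1 = rng.normal(scale=0.5, size=(8, 6)); b1 = np.zeros(6)
W2 = rng.normal(scale=0.5, size=(6, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(2000):                            # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)                     # hidden layer
    p = sigmoid(h @ W2 + b2).ravel()             # predicted AMI probability
    grad_out = (p - y)[:, None] / len(X)         # d(cross-entropy)/d(logit)
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)    # backprop through tanh
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * (X.T @ grad_h);   b1 -= lr * grad_h.sum(axis=0)

accuracy = float(((p > 0.5) == y).mean())
print(f"training accuracy: {accuracy:.2f}")
```

In the actual study, performance was compared against cardiologists at matched specificity rather than raw training accuracy.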


Comment of Note

During 1979–1983, Dr. Aviva Lev-Ari was part of Prof. Ronald A. Howard’s study team at Stanford University, the consulting group to Stanford Medical Center during MYCIN feature-enhancement development.

Professor Howard is one of the founders of the decision analysis discipline. His books on probabilistic modeling, decision analysis, dynamic programming, and Markov processes serve as major references for courses and research in these fields.


It was Prof. Howard of EES, Prof. Amos Tversky of Behavioral Science (advisor of Dr. Lev-Ari’s master’s thesis at HUJ), and Prof. Kenneth Arrow of Economics who, with 15 doctoral students in the early 1980s, formed the Interdisciplinary Decision Analysis Core Group at Stanford. Students of Prof. Howard, chiefly James E. Matheson, started the Decision Analysis Practice at Stanford Research Institute (SRI International) in Menlo Park, CA.


Dr. Lev-Ari was hired in March 1985 to head SRI’s effort in algorithm-based DSS development. The models she developed were applied in problem solving for SRI clients, among them pharmaceutical manufacturers Ciba-Geigy (now Novartis), DuPont, FMC, and Rhone-Poulenc (now Sanofi-Aventis).

Read Full Post »


Reporter: Aviva Lev-Ari, PhD, RN

The Men of Misconduct

January 24, 2013
After reviewing US Office of Research Integrity misconduct reports issued since 1994, the University of Washington’s Ferric Fang, Joan Bennett at Rutgers University, and Arturo Casadevall from the Albert Einstein College of Medicine found that 88 percent of faculty members who committed fraud were male, as were 69 percent of postdocs, 58 percent of students, and 42 percent of other research personnel, as they write in mBio.

Only nine of the 72 faculty members who committed research misconduct were female, which is “one-third of the number that would have been predicted from their overall representation among life sciences faculty,” the researchers write. They note, though, that they cannot rule out that women are less likely to get caught.

But what is behind this gender difference and why people committed research misconduct is unknown. Fang, Bennett, and Casadevall say that “while not excluding a role for biological factors, recent studies suggest an important contribution of social and cultural influences in the competitive tendencies of males and females” and note that “it is generally known that men are more likely to engage in risky behaviors than women.”


Males Are Overrepresented among Life Science Researchers Committing Scientific Misconduct

  1. Ferric C. Fang (a)
  2. Joan W. Bennett (b)
  3. Arturo Casadevall (c)

Author Affiliations

  a. Departments of Laboratory Medicine and Microbiology, University of Washington, School of Medicine, Seattle, Washington, USA
  b. Department of Plant Biology and Pathology, Rutgers University, New Brunswick, New Jersey, USA
  c. Departments of Microbiology & Immunology and Medicine, Albert Einstein College of Medicine, Bronx, New York, USA

Address correspondence to Ferric C. Fang, fcfang@u.washington.edu.

Editor: Françoise Dromer, Institut Pasteur


A review of the United States Office of Research Integrity annual reports identified 228 individuals who have committed misconduct, of which 94% involved fraud. Analysis of the data by career stage and gender revealed that misconduct occurred across the entire career spectrum from trainee to senior scientist and that two-thirds of the individuals found to have committed misconduct were male. This exceeds the overall proportion of males among life science trainees and faculty. These observations underscore the need for additional efforts to understand scientific misconduct and to ensure the responsible conduct of research.

IMPORTANCE As many of humanity’s greatest problems require scientific solutions, it is critical for the scientific enterprise to function optimally. Misconduct threatens the scientific enterprise by undermining trust in the validity of scientific findings. We have examined specific demographic characteristics of individuals found to have committed research misconduct in the life sciences. Our finding that misconduct occurs across all stages of career development suggests that attention to ethical aspects of the conduct of science should not be limited to those in training. The observation that males are overrepresented among those who commit misconduct implies a gender difference that needs to be better understood in any effort to promote research integrity.


With our colleague Grant Steen, two of us (F.F. and A.C.) recently studied all 2,047 retracted scientific articles indexed by PubMed as of 3 May 2012 (1). Unexpectedly, we found that misconduct is responsible for most retracted articles and that fraud or suspected fraud is the most common form of misconduct. Moreover, the incidence of retractions due to fraud is increasing, a trend that should be concerning to scientists and nonscientists alike. To devise effective strategies to reduce scientific misconduct, it will be essential to understand why scientists commit misconduct. However, deducing the motives for misconduct from the study of retractions alone is difficult, because retraction notices provide limited information, and many instances of misconduct do not result in retracted publications.

We therefore undertook an alternative approach by reviewing the findings of misconduct summarized in the annual reports of the U.S. Office of Research Integrity (ORI) (http://ori.hhs.gov/about-ori). The ORI is responsible for promoting the responsible conduct of research and overseeing the investigation of misconduct allegations relating to research supported by the Department of Health and Human Services. From 1994 to the present, the annual reports detail 228 individuals found by the ORI to have committed misconduct (2, 3). Fraud was involved in 215 (94%) of these cases. The total number of ORI investigations performed over this period is not known. However, data from the first ten years indicate that approximately one-half of ORI investigations conclude with a finding of misconduct (3). Although we expected most cases of misconduct to involve research trainees, we found that only 40% of instances of misconduct were attributed to a postdoctoral fellow (25%) or student (16%). Faculty members (32%) and other research personnel (28%) were responsible for the remaining instances of misconduct, and these included both junior and senior faculty members, research scientists, technicians, study coordinators, and interviewers.

We were able to determine the gender of the individual committing misconduct in all but a single case, and 149 (65%) were male. However, the gender predominance varied according to academic rank. An overwhelming 88% of faculty members committing misconduct were male, compared with 69% of postdocs, 58% of students, and 42% of other research personnel (Fig. 1). The male-female distribution of postdocs and students corresponds with the gender distribution of postdocs and students in science and engineering fields (4). However, nearly all instances of misconduct investigated by the ORI involved research in the life sciences, and the proportion of male trainees among those committing misconduct was greater than would be predicted from the gender distribution of life sciences trainees. Males also were substantially overrepresented among faculty committing misconduct in comparison to their proportion among science and engineering faculty overall, and the difference is even more pronounced for faculty in the life sciences (5). Of the 72 faculty members found to have committed misconduct, only 9 were female, or one-third of the number that would have been predicted from their overall representation among life sciences faculty. We cannot exclude the possibility that females commit research misconduct as frequently as males but are less likely to be detected.

FIG 1 Gender distribution of scientists committing misconduct. The percentage of scientists sanctioned by the U.S. Office of Research Integrity who are male, stratified by rank, is compared with the percentage of males in the overall United States scientific workforce (error bars show standard deviations) (blue and green bars are from NSF data, 1999–2006 [4, 5]).

What motivates individuals to commit research misconduct? Does competition for prestige and resources disproportionately drive misconduct among male scientists? Are women more sensitive to the threat of sanctions? Is gender a correlate of integrity?

The disparity between the number of men and women in academic science fields has been considered to be evidence of biologically driven gender differences (6). Thus, it may be tempting to explain the preponderance of male fraud in terms of various evolutionary theories about Y chromosome-driven competitiveness and aggressiveness (7). For example, for more than a century the male baboon has been used to symbolize male aggression. However, stereotypes of male baboon aggression and dominance have been called into question by primatologists focusing on female social networks and competitive strategies (8). Deterministic theories based in biology have been facilely used to explain the persistent gender gap in wages and other measures in the labor market (discussed in reference 9). The pitfalls associated with such simplistic generalizations have been extensively dissected by scholars of gender in science (see, for example, references 10 and 11 and citations therein). While not excluding a role for biological factors, recent studies suggest an important contribution of social and cultural influences in the competitive tendencies of males and females (12).

Nevertheless, it is generally known that men are more likely to engage in risky behaviors than women (13) and that crime rates for men are higher than those for women. Sociologists have hypothesized that as the roles of men and women become more similar, so will their crime rates (14). There is evidence for this “convergence hypothesis” in terms of arrests for robbery, burglary, and motor vehicle theft but not for homicide (15). Similarly, while most studies show that male students cheat more frequently than female students, recent data suggest that within similar areas of study, the gender differences are small. Women majoring in engineering self-report cheating at rates comparable to those reported by men majoring in engineering (16). We did not observe a significant convergence in scientific misconduct by males and females reported by the ORI over time (Fig. 2), although the analysis was limited by the small sample size. Interestingly, we also failed to observe an overall increase in research misconduct in the ORI findings, in contrast to an increase in retractions for fraud observed in our earlier study (1), with the caveat that the present study focused on a much smaller and incompletely overlapping subset of cases.

FIG 2 Gender distribution of scientists committing misconduct over time. The percentages of scientists sanctioned by the U.S. Office of Research Integrity who are male, female, or of unknown gender are shown for each reporting year. For the gender ratio in 1994–2002 (n = 120) compared with 2003–2012 (n = 108), χ² = 1.405 and P = 0.24 (calculated using the online tool at http://www.quantpsy.org/chisq/chisq.htm).
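The comparison in the Fig. 2 legend is a standard Pearson chi-square test on a 2×2 contingency table (period × gender). The per-period male/female counts are not broken out in the text, so the table below is hypothetical; the code simply shows how such a statistic is computed.

```python
# Pearson chi-square for a 2x2 contingency table, as used for the Fig. 2
# comparison. The counts below are hypothetical, since the per-period
# male/female breakdown is not given in the text.

def chi_square_2x2(table):
    """table: [[a, b], [c, d]] observed counts (rows: periods, cols: gender)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Hypothetical male/female counts for 1994-2002 (n=120) and 2003-2012 (n=108).
print(round(chi_square_2x2([[82, 38], [66, 42]]), 3))
```

The resulting statistic is then compared against the χ² distribution with one degree of freedom to obtain the P value.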

The predominant economic system in science is “winner-take-all” (17, 18). Such a reward system has the benefit of promoting competition and the open communication of new discoveries but has many perverse effects on the scientific enterprise (19). The scientific misconduct among both male and female scientists observed in this study may well reflect a darker side of competition in science. That said, the preponderance of males committing research misconduct raises a number of interesting questions. The overrepresentation of males among scientists committing misconduct is evident, even against the backdrop of male overrepresentation among scientists, a disparity more pronounced at the highest academic ranks, a parallel with the so-called “leaky pipeline.” There are multiple factors contributing to the latter, and considerable attention has been paid to factors such as the unique challenges facing young female scientists balancing personal and career interests (20), as well as bias in hiring decisions by senior scientists, who are mostly male (21). It is quite possible that, in at least some cases, misconduct at high levels may contribute to attrition of women from the senior ranks of academic researchers.

Our observations also raise the question of whether current efforts at ethics training are targeting the right individuals. The NIH currently mandates training in the responsible conduct of research for students and postdocs receiving support from training grants. However, these groups were responsible for only 40% of the misconduct documented in the ORI reports. The psychiatrist Donald Kornfeld has analyzed a subset of the ORI data (22) and observed “an intense fear of failure” in many trainees who committed misconduct, while some faculty members seemed to possess a “conviction that they could avoid detection.” This suggests that efforts to improve ethical conduct may also need to target faculty scientists, who in some cases are directly responsible for misconduct and in others may be unintentionally fostering a research environment in which trainees and other research personnel feel pressured to tailor results to meet expectations. Programs to help scientists become more effective mentors should be more widely implemented (23). The male predominance among senior scientists who commit misconduct also suggests that social expectations associated with gender may play a role in the likelihood of committing fraud and that the impact of culture and gender should be considered in ethics training. Curricula should become more sensitive to the heterogeneity of the target population because “one size does not fit all.”

The role of external influences on the scientific enterprise must not be ignored. With funding success rates at historically low levels, scientists are under enormous pressure to produce high-impact publications and obtain research grants. The importance of these influences is reflected in the burgeoning literature on research misconduct, including surveys that suggest that approximately 2% of scientists admit to having fabricated, falsified, or inappropriately modified results at least once (24). A substantial proportion of instances of faculty misconduct involve misrepresentation of data in publications (61%) and grant applications (72%); only 3% of faculty misconduct involved neither publications nor grant applications.

In summary, we emphasize two observations from this study: first, misconduct is distributed along the continuum from trainee to senior scientist. Second, men are overrepresented among scientists committing misconduct, with a skewed gender ratio being most pronounced for senior scientists. While we acknowledge that our observations were made from a relatively small database that focuses exclusively on research supported by the U.S. Department of Health and Human Services, we note that each case was extensively documented, and this case series may represent the most reliable information currently available. From our findings, new challenges are directed to the scientific community to maintain the integrity of the scientific enterprise. The occurrence of misconduct at every level of the scientific hierarchy indicates that misconduct is not a problem limited to trainees and requires careful attention to pressures placed on scientists during different stages of their careers. Male predominance is but another example of the scientific enterprise reflecting social and cultural contexts.

In closing, the vital importance of the ORI is acknowledged. Without public access to their investigations, it would have been impossible to carry out this study. All countries should have independent agencies with the authority and resources to ensure proper conduct of scientific research. Although our findings may cause concern regarding the scientific enterprise, recognition is a first step toward solving a problem. With so many of the world’s current challenges dependent on scientific solutions, science must look for new ways to ensure the responsible conduct of scientific research (25).


  • Citation Fang FC, Bennett JW, Casadevall A. 2013. Males are overrepresented among life science researchers committing scientific misconduct. mBio 4(1):e00640-12. doi:10.1128/mBio.00640-12.
  • Received 31 December 2012
  • Accepted 7 January 2013
  • Published 22 January 2013
  • Copyright © 2013 Fang et al.

This is an open-access article distributed under the terms of the Creative Commons Attribution-Noncommercial-ShareAlike 3.0 Unported license, which permits unrestricted noncommercial use, distribution, and reproduction in any medium, provided the original author and source are credited.


  1. Fang FC, Steen RG, Casadevall A. 2012. Misconduct accounts for the majority of retracted scientific publications. Proc. Natl. Acad. Sci. U. S. A. 109:17028–17033.

  2. Office of Research Integrity. 2012. http://ori.hhs.gov/.

  3. Rhoades LJ. 2004. ORI closed investigations into misconduct allegations involving research supported by the public health service: 1994–2003. Office of Research Integrity, Department of Health and Human Services, Washington, DC. http://ori.hhs.gov/content/ori-closed-investigations-misconduct-allegations-involving-research-supported-public-health-.

  4. National Science Foundation. 2012. Women, minorities, and persons with disabilities in science and engineering. National Science Foundation, National Center for Science and Engineering Statistics, Washington, DC. http://www.nsf.gov/statistics/wmpd/tables.cfm. Accessed 18 October 2012.

  5. Burrelli J. 2008. Thirty-three years of women in S&E faculty positions. NSF 08-308. National Science Foundation, National Center for Science and Engineering Statistics, Washington, DC. http://www.nsf.gov/statistics/infbrief/nsf08308/.

  6. National Academy of Sciences. 2007. Beyond biases and barriers: fulfilling the potential of women in academic science and engineering. National Academies Press, Washington, DC.

  7. Trivers R. 1972. Parental investment and sexual selection, p 136–179. In Campbell B (ed), Sexual selection and the descent of man. Aldine, Chicago, IL.

  8. Schiebinger L. 1999. Has feminism changed science? Harvard University Press, Cambridge, MA.

  9. Croson R, Gneezy U. 2009. Gender differences in preferences. J. Econ. Lit. 47:1–27.

  10. Keller EF. 1985. Reflections on gender and science. Yale University Press, New Haven, CT.

  11. Keller EF, Longino HE. 2006. Feminism and science. Oxford University Press, Oxford, United Kingdom.

  12. Anderson S, Ertac S, Gneezy U, List JA, Maximiano S. 2012. Gender, competitiveness and socialization at a young age: evidence from a matrilineal and patriarchal society. Rev. Econ. Stat. [Epub ahead of print.]

  13. Harris CR, Jenkins M, Glaser D. 2006. Gender differences in risk assessment: why do women take fewer risks than men? Judgm. Decis. Mak. 1:48–63.

  14. Adler F. 1975. Sisters in crime. McGraw-Hill, New York, NY.

  15. O’Brien R. 1999. Measuring the convergence/divergence of “serious crime” arrest rates for males and females, 1960–1995. J. Quant. Criminol. 15:97–114.

  16. McCabe DL, Trevino LK, Butterfield KD. 2001. Cheating in academic institutions: a decade of research. Ethics Behav. 11:219–232.

  17. Goodstein D. 2002. Scientific misconduct. Academe 88:18–21.

  18. Casadevall A, Fang FC. 2012. Winner takes all. Sci. Am. 307:13.

  19. Anderson MS, Ronning EA, De Vries R, Martinson BC. 2007. The perverse effects of competition on scientists’ work and relationships. Sci. Eng. Ethics 13:437–461.

  20. Goulden M, Mason MA, Frasch K. 2011. Keeping women in the science pipeline. Ann. Am. Acad. Pol. Soc. Sci. 638:141–162.

  21. Moss-Racusin CA, Dovidio JF, Brescoll VL, Graham MJ, Handelsman J. 2012. Science faculty’s subtle gender biases favor male students. Proc. Natl. Acad. Sci. U. S. A. 109:16474–16479.

  22. Kornfeld DS. 2012. Perspective: research misconduct: the search for a remedy. Acad. Med. 87:877–882.

  23. Handelsman J, Pfund C, Lauffer SM, Pribbenow CM. 2005. Entering mentoring: a seminar to train a new generation of scientists. Board of Regents of the University of Wisconsin, Madison, WI.

  24. Fanelli D. 2009. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One 4:e5738. http://dx.doi.org/10.1371/journal.pone.0005738.

  25. Fang FC, Casadevall A. 2012. Reforming science: structural reforms. Infect. Immun. 80:897–901.


Read Full Post »

Reporter: Aviva Lev-Ari, PhD, RN

Game On

September 2012

Combining gaming and genomics may sound odd, but play can produce useful data. By presenting complex biological problems as games and distributing those games far and wide, researchers can take advantage of a large network of virtual computation in the form of thousands of players. Games can also harness the natural human ability for pattern recognition and take advantage of the ways in which the human brain is better at that than even the most powerful supercomputer.

One of the first games to harness the power of the crowd was Foldit, which was developed by the University of Washington’s David Baker in 2008. Foldit encourages players to solve protein structure prediction problems by folding proteins into stable shapes.

Last September, an elite contingent of 15 Foldit players used molecular replacement to solve the crystal structure of a retroviral protease from the Mason-Pfizer Monkey Virus, which causes simian AIDS.

Baker’s team published a paper in Nature Structural & Molecular Biology describing how the solutions facilitated the identification of novel structural features that could provide a foundation for the design of new antiretroviral drugs. According to the authors, this marked the first time gamers solved a longstanding biological problem.

Then in December 2010, a team from McGill University rolled out Phylo, a Sudoku-like game that utilizes players’ abilities to match visual patterns between regions of similarity in multiple sequence alignments. Phylo’s designers reported in a PLOS One paper published in March that, since the game’s launch, they had received more than 350,000 solutions produced by more than 12,000 registered players.
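The task Phylo turns into a game is multiple-sequence-alignment refinement, which amounts to maximizing an alignment score. A heavily simplified scoring function is sketched below; real MSA scoring uses substitution matrices and affine gap penalties, and the scores here are arbitrary illustrative values.

```python
# Simplified sketch of the objective behind Phylo's puzzles: score a
# multiple sequence alignment column by column. Real MSA scoring uses
# substitution matrices and affine gaps; the weights here are arbitrary.

def alignment_score(rows, match=1, mismatch=-1, gap=-2):
    score = 0
    for col in zip(*rows):                 # walk the alignment column by column
        chars = [c for c in col if c != "-"]
        score += gap * (len(col) - len(chars))
        # all-against-all comparison within the column
        for i in range(len(chars)):
            for j in range(i + 1, len(chars)):
                score += match if chars[i] == chars[j] else mismatch
    return score

print(alignment_score(["ACG-T", "AC-GT", "ACGGT"]))
```

Players improve an alignment by sliding blocks to raise exactly this kind of score, exploiting visual pattern matching that is hard to encode in a heuristic.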

“We don’t know right now what other interesting problems we could solve using these crowdsourcing techniques. We still have a lack of deep understanding about when human intuition or help is very useful,” says Jérôme Waldispühl, an assistant professor at McGill University and lead developer of Phylo. “But what I like is the involvement of society digging into the most meaningful and deep scientific questions — you are trying to involve society into the very process of scientific discovery.”

Last year, a group from Carnegie Mellon University and Stanford University released an online game called EteRNA, the purpose of which is to help investigators envision RNA knots, polyhedra, and any other RNA shapes that have yet to be identified. Top designs are analyzed each week to determine if the molecules can fold themselves into the 3D shapes predicted by RNA modeling software.

Purposeful play

More “games with a purpose” — as they are sometimes called — aimed at solving biological problems are in development.

“No matter how big your super-computer is, you can’t try all gene combinations within a 20,000-gene space,” says Benjamin Good, a research associate at the Scripps Research Institute. “Humans have a role to play here because maybe we can do better than what happens when you just compute for just many, many random datasets.”

Good and his colleagues have developed a suite of games, called Gene Games, which they hope to use to build out genomic databases and to improve existing algorithms. This suite includes GenESP, a two-player game in which both players see the same disease and each must guess what gene the other is typing from a dropdown list of possible genes. This game is aimed at an audience with some knowledge of the field and takes advantage of players’ expertise.

Another game is Combo, which challenges players to find the ideal combination of genes for phenotype prediction. Combo players can choose to start at an introductory level where they separate mammals from other animals or divide the animal kingdom into five classes, or begin at a more challenging level where they have to identify gene expression signatures in tumors to predict survival or metastasis.

“The goal of issuing GenESP is to provide a new way of building out gene annotation databases, and Combo is specifically made to enrich a machine-learning algorithm. It’s all an attempt to use games to tap into large communities of people to get after what they know,” Good says.

But he is quick to point out that there is no specific critical mass of players that will provide the desired data — instead, it is about finding the right players just as the Foldit project seems to have done.

“I can’t say when we hit 1,000 users or 10,000 users, we’ll have X units of compute available to us — it just doesn’t work like that. But if we end up getting the right 100 people playing these games, we can make a lot of progress,” Good adds.

Diagnostic disport

Whether a crowdsourcing approach taps networks of thousands of players or reaches a handful of skilled ones, gaming can provide not only new data for researchers but also a handy diagnostic resource for clinicians.

In May, a group from the University of California, Los Angeles, released Molt, a game in which players aid in the diagnosis of malaria-infected red blood cells. After completing training, players are presented with frames of red blood cell images and use a virtual syringe tool to eliminate infected cells and then use a collect-all tool to designate the remaining cells in the frame as healthy. Results from the games are sent back to the point-of-care or hospital setting.
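The article does not spell out how individual players’ answers are combined, but crowdsourced diagnosis games of this kind typically pool many players’ labels for the same cell image. A minimal sketch of such majority-vote pooling — the function name, labels, and confidence score are illustrative assumptions, not Molt’s actual pipeline:

```python
from collections import Counter

def aggregate_labels(votes):
    """Pool several players' labels for one cell by majority vote.

    votes: a list of labels such as "infected" or "healthy".
    Returns the winning label and the fraction of players who chose it,
    which serves as a rough confidence score for the crowd's call.
    """
    counts = Counter(votes)
    label, n = counts.most_common(1)[0]
    return label, n / len(votes)

# One cell image shown to five players: four flag it as infected.
label, confidence = aggregate_labels(
    ["infected", "infected", "healthy", "infected", "infected"]
)
print(label, confidence)  # infected 0.8
```

In practice such systems also weight players by their accuracy on the training frames, but simple vote pooling already shows why many casual players can approach expert-level diagnosis.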

Popular online games are also providing models for new bioinformatics applications. A new software tool called ImageJS marries bioinformatics with pathology, which its developers say takes its cue from Rovio Entertainment’s popular Angry Birds game. Developed by a team at the University of Alabama at Birmingham, this free app allows pathologists to drag pathology slides into a Web app to identify malignancies based on color.

“There are two bioinformatics problems at the point of care that an Angry Birds-approach would solve. The first is that it delivers the code to the machine rather than forcing the data to travel, so the code does the traveling,” says Jonas Almeida, a professor at UAB. “The second problem it solves is that it doesn’t require installation of any application; they are written in JavaScript, which is a native language of Web development, and the application has no access to the local file system, so we have an application that does not make the IT people nervous.”
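ImageJS’s actual modules are not reproduced here, but the kind of color-based flagging the article describes can be sketched in a few lines. The pixel format, the use of red dominance as a stand-in for a malignancy stain, and the threshold are all illustrative assumptions — and the sketch is in Python rather than the app’s JavaScript:

```python
def fraction_flagged(pixels, threshold=0.6):
    """Score each RGB pixel by how strongly red dominates (a stand-in
    for a malignancy stain) and return the fraction of pixels whose
    score exceeds the threshold."""
    def redness(pixel):
        r, g, b = pixel
        total = r + g + b
        return r / total if total else 0.0
    flagged = sum(1 for p in pixels if redness(p) > threshold)
    return flagged / len(pixels)

# A toy 4-pixel "slide": two strongly red pixels, two neutral ones.
sample = [(200, 20, 20), (210, 30, 25), (120, 120, 120), (90, 100, 110)]
print(fraction_flagged(sample))  # 0.5
```

Because logic like this runs entirely in the browser on pixels already displayed, the code travels to the data and nothing has to be uploaded or installed — the two points Almeida makes above.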

Almeida and his team have also designed a module for ImageJS that can analyze genomes using the same visual interface as the malignancy-diagnosis module to provide clinicians with even better diagnostic accuracy. In the spirit of crowdsourcing, Almeida adds that the success of ImageJS will rely upon the willingness of its users to develop their own modules to solve their own specific problems using the game-like interface.

“Funding agencies are also interested in these out-of-the-box solutions. However, there is also some skepticism that will probably need more time to be overcome through clear success stories with these new gaming-based solutions to existing problems,” says Molt’s designer Aydogan Ozcan, an associate professor at UCLA.

Despite the funding limitations — and the formidable challenge of designing a game that people will want to play — gaming is gaining traction and enjoying a favorable reception from the research community.

“On a one-to-one basis, everyone seems to love it. They get the potential and are just waiting to see what is going to happen,” Scripps’ Good says. “But developing these games is an enormous challenge. Our background is in bioinformatics, it’s not in making things that are fun. It’s hard to make a fun game all by itself, let alone make one that will solve a difficult problem.”

Matthew Dublin is a senior writer at Genome Technology.



Related topic on this Scientific Web Site

‘Gamifying’ Drug R&D: Boehringer Ingelheim, Sanofi, Eli Lilly


Read Full Post »

Gaps, Tensions, and Conflicts in the FDA Approval Process: Implications for Clinical Practice

Reporter: Aviva Lev-Ari, PhD, RN


FDA 510(k) Approval Process


Gaps, Tensions, and Conflicts in the FDA Approval Process: Medical Devices

Author: Richard A. Deyo, MD, MPH, Departments of Medicine and Health Services and the Center for Cost and Outcomes Research, University of Washington, Seattle

The FDA’s approach to approving medical devices differs substantially from the approach to drugs, being in some ways both more complex and less stringent.[13] The FDA’s authority over devices dates only to 1976. Device legislation was a response, in part, to public outcry over some well-publicized device failures. The most prominent was the Dalkon Shield—an intrauterine contraceptive device associated with serious infections.[14] In contrast, the FDA’s authority over drugs dates to 1938, although it existed in weaker form starting in 1906.[15]

With few exceptions, given the timing of the FDA’s authority, devices introduced before 1976 were never required to undergo rigorous evaluation of safety and efficacy. With the huge volume of “things” that suddenly fell under its purview, the FDA had to prioritize its resources and efforts.

One way of prioritizing was to focus first on safety. Evaluation of effectiveness, in many cases, was reduced to engineering performance: does the device hold up under its intended uses, does it deliver an electric current as advertised? The potential benefits for relieving pain, improving function, or ameliorating disease did not generally have to be demonstrated.

Another way of prioritizing was to assign categories of risk associated with the devices. Rubber gloves seemed less risky than cardiac pacemakers, for example. So the agency assigned devices to 1 of 3 levels of scrutiny. Class I devices have low risk; oversight, performed mainly by industry itself, is to maintain high manufacturing quality standards, assure proper labeling, and prevent adulteration. Latex gloves are an example.

At the other extreme, class III devices are the highest risk. These include many implantable devices, things that are life-supporting, and diagnostic and treatment devices that pose substantial risk. Artificial heart valves and electrical catheters for ablating arrhythmogenic foci in the heart are examples. This class also includes any new technology that the FDA does not recognize or understand. New components or materials, for example, may suggest to FDA that it should perform a more formal evaluation. In general, these devices require a “premarket approval,” including data on performance in people (not just animals), extensive safety information, and extensive data on effectiveness. This evaluation comes closest to that required of drugs. In fact, Dr. Kessler says, these applications “look a lot like drug applications: big stacks of paper. They almost always require clinical data—almost always. And they often require randomized trials. Not always, but often” (L. Kessler, personal communication). These devices are often expensive and sometimes controversial because of their costs.

Class II devices are perhaps the most interesting. They comprise an intermediate group, generally requiring only performance standards. Examples would be biopsy forceps, surgical lasers, and some hip prostheses. The performance standards focus on the engineering characteristics of the device: does it deliver an electrical stimulus if it claims to, and is it in a safe range? Is it made of noncorrosive materials? Most of these devices get approved by the “510(k)” mechanism. The 510(k) approval requires demonstrating “substantial equivalence” to a device marketed before 1976. “And,” says Kessler, “the products that have been pushed through 510(k) are astonishing” (L. Kessler, personal communication).

Kessler points out, “For the first 5 to 10 years after 1976, this approach made sense. But in 2001, 25 years after the Medical Device Amendment, does it make sense? There was a lot of stuff on the market that wasn’t necessarily great in 1975—why would you put it back on the market now?” (L. Kessler, personal communication). The new device need not prove superiority to the older product—just functional equivalence. If a company wants to tout a new device as a breakthrough, why would it claim substantial equivalence to something 25 years old?

The reason is that the 510(k) process is easier and cheaper than seeking a premarket approval. The 510(k) process usually does not require clinical research. In the mid-1990s, a 510(k) application on average required 3 months for approval, and about $13 million. A premarket approval required, on average, about a year and $36 million. Both are modest compared with new drug approvals. The process by which the agency decides if something is “equivalent enough” to be approved by the 510(k) mechanism is subjective.

Because pre-1976 devices were not subject to any rigorous tests of clinical effectiveness, a newly approved device may be equivalent to something that has little or no therapeutic value. Doctors, patients, and payers therefore often have little ability to judge the value of new devices. As an example, the FDA still receives 510(k) applications for intermittent positive pressure breathing machines.[12] Yet a thorough review by the federal Agency for Health Care Policy and Research found that these devices offer no important benefits.[16]

How much do manufacturers take advantage of the easier 510(k) approach? Since 1976, nearly 98% of new devices entering the market in class II or III have been approved through the 510(k) process.[13] In 2002, the FDA reported 41 premarket approvals and 3708 approvals through the 510(k) process.[17]
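The 2002 figures are consistent with that near-98% cumulative share; a quick back-of-the-envelope check:

```python
# FDA device approvals reported for 2002 (figures from the text above).
pma = 41     # premarket approvals
k510 = 3708  # 510(k) clearances
share = k510 / (pma + k510)
print(f"{share:.1%} of 2002 device approvals went through the 510(k) process")
```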


Dr. Richard A. Deyo published an article on this topic in 2004. His observations and references are most valuable for our blog.

For the full article, go to:

JABFP March–April 2004 Vol.17 No.2 http://www.science.smith.edu/departments/Biochem/Chm_357/Articles/Drug%20Approval.pdf



Author: Richard A. Deyo, MD, MPH

Despite many successes, drug approval at the Food and Drug Administration (FDA) is subject to gaps, internal tensions, and conflicts of interest. Recalls of drugs and devices and studies demonstrating advantages of older drugs over newer ones highlight the importance of these limitations. The FDA does not compare competing drugs and rarely requires tests of clinical efficacy for new devices. It does not review advertisements before use, assess cost-effectiveness, or regulate surgery (except for devices). Many believe postmarketing surveillance of drugs and devices is inadequate. A source of tension within the agency is pressure for speedy approvals. This may have resulted in “burn-out” among medical officers and has prompted criticism that safety is ignored. Others argue, however, that the agency is unnecessarily slow and bureaucratic. Recent reports identify conflicts of interest (stock ownership, consulting fees, research grants) among some members of the FDA’s advisory committees. FDA review serves a critical function, but physicians should be aware that new drugs may not be as effective as old ones; that new drugs are likely to have undiscovered side effects at the time of marketing; that direct-to-consumer ads are sometimes misleading; that new devices generally have less rigorous evidence of efficacy than new drugs; and that value for money is not considered in approval. J Am Board Fam Pract 2004;17: 142–9.

The process of drug development and approval by the United States Food and Drug Administration (FDA) was recently reviewed by Lipsky and Sharp.1 Using clinical literature and web sites addressing FDA procedures, that review concisely described the FDA’s history, the official approval process, and recent developments in drug approval. However, it did not delve into common misconceptions about the FDA, tensions within the agency, or conflicts of interest in the drug approval process. The rapidly growing business of medical device development, distinct from the drug approval process, also was not addressed. Although most aspects of the FDA review process are highly successful, its limitations deserve careful consideration, because they may have important implications for choosing treatments in practice.

Recent recalls of drugs and devices call attention to limitations of the approval process.2–4 Recent news about complications of hormone replacement therapy5,6 and new data supporting the superiority of diuretic therapy over newer, more expensive alternatives for hypertension7 emphasize gaps in the process. Clinicians should be aware of regulatory limitations as they prescribe treatments and counsel patients, so they have realistic ideas about what FDA approval does and does not mean.

Because controversies relating to internal conflicts or political issues are infrequently reported in scientific journals, this discussion draws not only on scientific articles, but also internet resources, news accounts, and interviews. The goal was not to be exhaustive, but to provide examples of tensions, conflicts, and gaps in the FDA process. As Lipsky and Sharp noted, the FDA approves new drugs and devices (as well as assuring that foods and cosmetics are safe). It monitors over $1 trillion worth of products, which represents nearly a fourth of consumer spending.1 In the medical arena, the basic goal of the FDA is to prevent the marketing of treatments that are ineffective or harmful.

However, the agency faces limitations that result from many factors, including the agency’s legal mandate, pressures from industry, pressures from advocacy groups, funding constraints, and varied political pressures.

Pressures for Approval

Perhaps the biggest challenge and source of friction for the FDA is the speed of approvals for drugs and devices. Protecting the public from ineffective or harmful products would dictate a deliberate, cautious, thorough process. On the other hand, getting valuable new technology to the public—to save lives or improve quality of life—would argue for a speedy process. Some consumer protection groups claim the agency is far too hasty and lenient, bending to drug and device company pressure. On the other hand, manufacturers argue that the agency drags its feet and kills people waiting for new cures. Says Kessler: “That’s been the biggest fight between the industry, the Congress, and the FDA over the past decade: getting products out fast” (L. Kessler, personal communication).

To speed up the review process, Congress passed a law in 1992 that allowed the FDA to collect “user fees” from drug companies. This was in part a response to AIDS advocates, who demanded quick approval of experimental drugs that might offer even a ray of hope. These fees, over $300,000 for each new drug application, now account for about half the FDA’s budget for drug evaluation, and 12% of the agency’s overall $1.3 billion budget.18 The extra funds have indeed accelerated the approval process. By 1999, average approval time had dropped by about 20 months, to an average of a year. In 1988, only 4% of new drugs introduced worldwide were approved first by the FDA. By 1998, FDA was first in approving two thirds of new drugs introduced worldwide. The percentage of applications ultimately approved had also increased substantially.18 Nonetheless, industry complained that approval times slipped to about 14 months in 2001.19

In 2002, device makers announced an agreement with the FDA for similar user fees to expedite approval of new devices, and Congressional approval followed with the Medical Device User Fee and Modernization Act.20 Critics, such as 2 former editors of the New England Journal of Medicine, argue that the user fees create an obvious conflict of interest. So much of the FDA budget now comes from the industry it regulates that the agency must be careful not to alienate its corporate “sponsors.”21

FDA officials believe they remain careful but concede that user fees have imposed pressures that make review more difficult, according to The Wall Street Journal.22 An internal FDA report in 2002 indicated that a third of FDA employees felt uncomfortable expressing “contrary scientific opinions” to the conclusions reached in drug trials. Another third felt that negative actions against applications were “stigmatized.”

The report also said some drug reviewers stated “that decisions should be based more on science and less on corporate wishes.”22 The Los Angeles Times reported that agency drug reviewers felt that if drugs were not approved, drug companies would complain to Congress, which might retaliate by failing to renew the user fees18 (although they were just re-approved in summer 2002). This in turn would hamstring FDA operations and probably cost jobs.

Another criticism is that the approval process has allowed many dangerous drugs to reach the market. A recent analysis showed that of all new drugs approved from 1975 to 1999, almost 3% were subsequently withdrawn for safety reasons, and 8% acquired “black box warnings” of potentially serious side effects. Projections based on the pace of these events suggested that 1 in 5 approved drugs would eventually receive a black box warning or be withdrawn. The authors of the analysis, from Harvard Medical School and Public Citizen Health Research Group, suggested that the FDA should raise the bar for new drug approval when safe and effective treatments are already available or when the drug is for a non–life-threatening condition.2

According to The Los Angeles Times, 7 drugs withdrawn between 1993 and 2000 had been approved while the FDA disregarded “danger signs or blunt warnings from its own specialists. Then, after receiving reports of significant harm to patients, the agency was slow to seek withdrawals.” These drugs were suspected in 1002 deaths reported to FDA. None were life-saving drugs. They included, for example, one for heartburn (cisapride), a diet pill (dexfenfluramine), and a painkiller (bromfenac). The Times reported that the 7 drugs had US sales of $5 billion before they were recalled.18

After analysis, FDA officials concluded that the accelerated drug approval process is unrelated to the drug withdrawals. They pointed out that the number of drugs on the market has risen dramatically, the number of applications has increased, and the population is using more medications.3 More withdrawals are not surprising, in their view. Dr. Janet Woodcock, director of the FDA’s drug review center and one of the analysts, argued that “All drugs have risks; most of them have serious risks.”

She believes the withdrawn drugs were valuable and that their removal from the market was a loss, even if the removal was necessary, according to The Los Angeles Times.18 Nonetheless, many believe the pressures for approval are so strong that they contribute to employee burnout at FDA. In August 2002, The Wall Street Journal reported that 15% of the agency’s medical officer jobs were unfilled.22 Their attrition rate is higher than for medical officers at the National Institutes of Health or the Centers for Disease Control and Prevention. The Journal reported that the reasons, among others, included pressure to increase the pace of drug approvals and an atmosphere that discourages negative actions on drug applications.

Attrition caused by employee “burnout” is now judged to threaten the speed of the approval process. In 2000, even Dr. Woodcock acknowledged a “sweatshop environment that’s causing high staffing turnover.”18 FDA medical and statistical staff have echoed the need for speed and described insufficient time to master details.18,19 An opposing view of FDA function is articulated in an editorial from The Wall Street Journal, by Robert Goldberg of the Manhattan Institute. He wrote that the agency “protects people from the drugs that can save their lives” and needs to shift its role to “speedily put into the market place… new miracle drugs and technologies….” He argues that increasing approval times for new treatments are a result of “careless scientific reasoning” and “bureaucratic incompetence,” and that the FDA should monitor the impact of new treatments after marketing rather than wait for “needless clinical trials” that delay approvals.23

Thus, the FDA faces a constant “damned if it does, damned if it doesn’t” environment. No one has undertaken a comprehensive study of the speed of drug or device approval to determine the appropriate metrics for this process, much less the optimal speed. It remains unclear how best to balance the benefits of making new products rapidly available with the risks of unanticipated complications and recalls.

Postmarketing Surveillance of New Products

Although user fees have facilitated pre-approval evaluation of new drugs, the money cannot be used to evaluate the safety of drugs after they are marketed. Experts point out that approximately half of approved drugs have serious side effects not known before approval, and only postmarketing surveillance can detect them. But in the opinion of some, the FDA lacks the mandate, the money, and the staff to provide effective and efficient surveillance of over 5000 drugs already in the marketplace.24 Although reporting of adverse effects by manufacturers is mandatory, late reporting or nonreporting of cases by drug companies is a major problem. Some companies have been prosecuted for failure to report, and the FDA has issued several warning letters as a result of late reporting. Spontaneous reporting by practitioners is estimated to capture only 1% to 13% of serious adverse events.25 Widespread promotion of new drugs—before some of the serious effects are known—increases exposure of patients to the unknown risks. It is estimated that nearly 20 million patients (almost 10% of the US population) were exposed to the 5 drugs that were recalled in 1997 and 1998 alone.26 The new law allowing user fees for device manufacturers does not have the same restriction on postmarketing surveillance that has hampered drug surveillance.

Conflicts of Interest in the Approval Process

Another problem that has recently come to light in the FDA approval process is conflict of interest on the part of some members of the agency’s 18 drug advisory committees. These committees include about 300 members, and are influential in recommending whether drugs should be approved, whether they should remain on the market, how drug studies should be designed, and what warning labels should say. The decisions of these committees have enormous financial implications for drug makers.

A report by USA Today indicated that roughly half the experts on these panels had a direct financial interest in the drug or topic they were asked to evaluate. The conflicts of interest included stock ownership, consulting fees, and research grants from the companies whose products they were evaluating. In some cases, committee members had helped to develop the drugs they were evaluating. Although federal law tries to restrict the use of experts with conflicts of interest, USA Today reported that FDA had waived the rule more than 800 times between 1998 and 2000.

FDA does not reveal the magnitude of any financial interest or the drug companies involved.27 Nonetheless, USA Today reported that in considering 159 Advisory Committee meetings from 1998 through the first half of 2000, at least one member had a financial conflict of interest 92% of the time. Half or more of the members had conflicts at more than half the meetings. At 102 meetings that dealt specifically with drug approval, 33% of committee members had conflicts.27 The Los Angeles Times reported that such conflicts were present at committee reviews of some recently withdrawn drugs.18

The FDA official responsible for waiving the conflict-of-interest rules pointed out that the same experts who consult with industry are often the best for consulting with the FDA, because of their knowledge of certain drugs and diseases. But according to a summary of the USA Today survey reported in the electronic American Health Line, “even consumer and patient representatives on the committees often receive drug company money.”28 In 2001, Congressional staff from the House Government Reform Committee began examining the FDA advisory committees, to determine whether conflicts of interest were affecting the approval process.29


Despite derogatory comments from some politicians and some in the industries it regulates, the FDA does a credible job of trying to protect the public and to quickly review new drugs and devices. However, pressures for speed, conflicts of interest in decision-making, constrained legislative mandates, inadequate budgets, and often limited surveillance after products enter the market mean that scientific considerations are only part of the regulatory equation. These limitations can lead to misleading advertising of new drugs; promotion of less effective over more effective treatments; delays in identifying treatment risks; and perhaps unnecessary exposure of patients to treatments whose risks outweigh their benefits.

Regulatory approval provides many critical functions. However, it does not in itself help clinicians to identify the best treatment strategies. Physicians should be aware that new drugs may not be as effective as old ones; that new drugs are likely to have undiscovered side effects at the time they are marketed; that direct-to-consumer ads are sometimes misleading; that new devices generally have less rigorous evidence of efficacy than new drugs; and that value for money is not considered in the approval process. If clinicians are to practice evidence-based and cost-effective medicine, they must use additional skills and resources to evaluate new treatments. Depending exclusively on the regulatory process may lead to suboptimal care.


1. Lipsky MS, Sharp LK. From idea to market: the drug approval process. J Am Board Fam Pract 2001;14:362–7.

2. Lasser KE, Allen PD, Woolhandler SJ, Himmelstein DU, Wolfe SM, Bor DH. Timing of new black box warnings and withdrawals for prescription medications. JAMA 2002;287:2215–20.

3. Friedman MA, Woodcock J, Lumpkin MM, Shuren JE, Hass AE, Thompson LJ. The safety of newly approved medicines: do recent market removals mean there is a problem? JAMA 1999;281:1728–34.

4. Maisel WH, Sweeney MO, Stevenson WG, Ellison KE, Epstein LM. Recalls and safety alerts involving pacemakers and implantable cardioverter-defibrillator devices. JAMA 2001;286:793–9.

5. Rossouw JE, Anderson GL, Prentice RL, et al. Risks and benefits of estrogen plus progestin in healthy postmenopausal women: principal results from the Women’s Health Initiative randomized controlled trial. JAMA 2002;288:321–33.

6. Grady D, Herrington D, Bittner V, et al. Cardiovascular disease outcomes during 6.8 years of hormone therapy: Heart and Estrogen/progestin Replacement Study Follow-up (HERS II). JAMA 2002;288:49–57.

7. ALLHAT Officers and Coordinators for the ALLHAT Collaborative Research Group. Major outcomes in high-risk hypertensive patients randomized to angiotensin-converting enzyme inhibitor or calcium channel blocker vs diuretic: The Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT). JAMA 2002;288:2981–97.

8. Echt DS, Liebson PR, Mitchell LB, et al. Mortality and morbidity in patients receiving encainide, flecainide, or placebo. The Cardiac Arrhythmia Suppression Trial. N Engl J Med 1991;324:781–8.

9. Moore TJ. Deadly medicine: why tens of thousands of heart patients died in America’s worst drug disaster. New York: Simon and Schuster; 1995.

10. Petersen M. Diuretics’ value drowned out by trumpeting of newer drugs. The New York Times 2002 Dec 18;Sect. A:32.

11. Gorelick PB, Richardson D, Kelly M, et al. Aspirin and ticlopidine for prevention of recurrent stroke in black patients: a randomized trial. JAMA 2003;289:2947–57.

12. Gahart MT, Duhamel LM, Dievler A, Price R. Examining the FDA’s oversight of direct-to-consumer advertising. Health Aff (Millwood) 2003 Suppl W3-120–3.

13. Ramsey SD, Luce BR, Deyo R, Franklin G. The limited state of technology assessment for medical devices: facing the issues. Am J Manag Care 1998;4 Spec No:SP188–99.

14. Merrill RA. Modernizing the FDA: an incremental revolution. Health Aff (Millwood) 1999;18:96–111.

15. Milestones in US food and drug law history. United States Food and Drug Administration. http://www.fda.gov/opacom/backgrounders/miles.html, accessed 8/19/02.

16. Handelsman H. Intermittent positive pressure breathing (IPPB) therapy. Health Technol Assess Rep 1991;(1):1–9.

17. FDA Center for Devices and Radiological Health. Office of Device Evaluation annual report 2002. Available at: http://www.fda.gov/cdrh/annual/fy2002/ode/index.html.

18. Willman D. How a new policy led to seven deadly drugs. The Los Angeles Times 2000 Dec 20;Sect. A:1.

19. Adams C, Hensley S. Health and Technology: drug makers want FDA to move quicker. The Wall Street Journal 2002 Jan 29;Sect. B:12.

20. Adams C. FDA may start assessing fees on makers of medical devices. The Wall Street Journal 2002 May 21;Sect. D:6.

21. Angell M, Relman AS. Prescription for profit. The Washington Post 2001 Jun 20;Sect. A:27.

22. Adams C. FDA searches for an elixir for agency’s attrition rate. The Wall Street Journal 2002 Aug 19;Sect. A:4.

23. Goldberg R. FDA needs a dose of reform. The Wall Street Journal 2002 Sep 30;Sect. A:16. Available at: http://www.aei.brookings.org/policy/page.php?id113

24. Moore TJ, Psaty BM, Furberg CD. Time to act on drug safety. JAMA 1998;279:1571–3.

25. Ahmad SR. Adverse drug event monitoring at the Food and Drug Administration: your report can make a difference. J Gen Intern Med 2003;18:57–60.

26. Wood AJJ. The safety of new medicines: the importance of asking the right questions. JAMA 1999;281:

27. Cauchon D. FDA advisers tied to industry. USA Today 2000 Sep 25;Sect. A:1.

28. Cauchon D. Number of drug experts available is limited. Many waivers granted for those who have conflicts of interest. USA Today 2000 Sep 25;Sect. A:10.

29. Gribbin A. House investigates panels involved with drug safety. Mismanagement claims spur action. The Washington Times 2001 Jun 18;Sect. A:1.




Read Full Post »

Reporter: Prabodh Kandala, PhD

Mice and monkeys don’t develop diseases in the same way that humans do. Nevertheless, after medical researchers have studied human cells in a Petri dish, they have little choice but to move on to study mice and primates.

University of Washington bioengineers have developed the first structure to grow small human blood vessels, creating a 3-D test bed that offers a better way to study disease, test drugs and perhaps someday grow human tissues for transplant.

The findings are published this week in the Proceedings of the National Academy of Sciences.

“In clinical research you just draw a blood sample,” said first author Ying Zheng, a UW research assistant professor of bioengineering. “But with this, we can really dissect what happens at the interface between the blood and the tissue. We can start to look at how these diseases start to progress and develop efficient therapies.”

Zheng first built the structure out of the body’s most abundant protein, collagen, while working as a postdoctoral researcher at Cornell University. She created tiny channels and injected this honeycomb with human endothelial cells, which line human blood vessels.

Over a two-week period, the endothelial cells grew throughout the structure and formed tubes through the mold’s rectangular channels, just as they do in the human body.

When brain cells were injected into the surrounding gel, the cells released chemicals that prompted the engineered vessels to sprout new branches, extending the network. A similar system could supply blood to engineered tissue before transplant into the body.

After joining the UW last year, Zheng collaborated with the Puget Sound Blood Center to see how this research platform would work to transport real blood.

The engineered vessels could transport human blood smoothly, even around corners. And when treated with an inflammatory compound the vessels developed clots, similar to what real vessels do when they become inflamed.

The system also shows promise as a model for tumor progression. Cancer begins as a hard tumor but secretes chemicals that cause nearby vessels to bulge and then sprout. Eventually tumor cells use these blood vessels to penetrate the bloodstream and colonize new parts of the body.

When the researchers added to their system a signaling protein for vessel growth that’s overabundant in cancer and other diseases, new blood vessels sprouted from the originals. These new vessels were leaky, just as they are in human cancers.

“With this system we can dissect out each component or we can put them together to look at a complex problem. That’s a nice thing — we can isolate the biophysical, biochemical or cellular components. How do endothelial cells respond to blood flow or to different chemicals, how do the endothelial cells interact with their surroundings, and how do these interactions affect the vessels’ barrier function? We have a lot of degrees of freedom,” Zheng said.

The system could also be used to study malaria, which becomes fatal when diseased blood cells stick to the vessel walls and block small openings, cutting off blood supply to the brain, placenta or other vital organs.

“I think this is a tremendous system for studying how blood clots form on vessel walls, how the vessel responds to shear stress and other mechanical and chemical factors, and for studying the many diseases that affect small blood vessels,” said co-author Dr. José López, a professor of biochemistry and hematology at UW Medicine and chief scientific officer at the Puget Sound Blood Center.

Future work will use the system to further explore blood vessel interactions that involve inflammation and clotting. Zheng is also pursuing tissue engineering as a member of the UW’s Center for Cardiovascular Biology and the Institute for Stem Cell and Regenerative Medicine.

Ref: http://www.sciencedaily.com/releases/2012/05/120528154907.htm

Read Full Post »

A Word of Caution, Especially for Cardiac Patients

Zithromax (azithromycin) is not only more expensive than many other antibiotics; it may also carry a cost for the heart. Doctors should weigh other options for people already prone to heart problems, the researchers and other experts suggested. The drug is popular because it can be taken for fewer days than other antibiotics: a five-day course of Zithromax typically suffices, compared with about ten days for amoxicillin and similar drugs.


Azithromycin (Photo credit: Wikipedia)

It is widely used for bronchitis, sinus infections, pneumonia, and other common infections, but it appears to increase the chance of sudden, deadly heart problems: a rare but surprising risk uncovered in a 14-year study. Other antibiotics in the same class as Zithromax have also been linked with sudden cardiac death. In the current study, patients on Zithromax were about as healthy as those on other antibiotics, making it unlikely that an underlying condition explains the increased death risk, the researchers said.

An analysis by researchers at Vanderbilt University found 29 heart-related deaths among those who took Zithromax during the five days of treatment. Their risk of death while taking the drug was more than double that of patients on another antibiotic, amoxicillin, or of those who took no antibiotic at all.

To compare risks, the researchers calculated that the number of deaths per 1 million courses of antibiotics would be about 85 among Zithromax patients versus 32 among amoxicillin patients and 30 among those on no antibiotics. The highest risks were in Zithromax patients with existing heart problems. Patients in each group started out with comparable risks for heart trouble, the researchers said. The results suggest there would be 47 extra heart-related deaths per 1 million courses of treatment with Zithromax. The risk of cardiovascular death was significantly greater with azithromycin than with ciprofloxacin but did not differ significantly from that with levofloxacin.
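As a rough check on these figures, the quoted rates can be compared directly. This is a minimal sketch using only the numbers quoted above, not raw study data; the study’s “47 extra deaths” figure is an adjusted estimate, so it will not exactly match the crude difference between the quoted rates.

```python
# Back-of-the-envelope comparison of the death rates quoted in the article
# (deaths per 1 million courses of treatment).
deaths_per_million = {
    "azithromycin": 85,
    "amoxicillin": 32,
    "no antibiotic": 30,
}

# Rate ratio of azithromycin vs. amoxicillin: supports the claim that the
# risk was "more than double".
rate_ratio = deaths_per_million["azithromycin"] / deaths_per_million["amoxicillin"]
print(f"Rate ratio vs. amoxicillin: {rate_ratio:.2f}")  # ~2.66

# Crude excess deaths per million courses vs. amoxicillin. The study's
# adjusted estimate quoted above is 47; the crude difference is close but
# not identical because the published figure accounts for confounders.
excess = deaths_per_million["azithromycin"] - deaths_per_million["amoxicillin"]
print(f"Crude excess deaths per million courses: {excess}")  # 53
```

The crude difference (53 per million) lands near the study’s adjusted estimate of 47, which is consistent with the authors’ claim that the two patient groups started out with comparable baseline risks.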

Dr. Harlan Krumholz, a Yale University health outcomes specialist who was not involved in the study, said, “People need to recognize that the overall risk is low.” More research is needed to confirm the findings, but he said patients with heart disease “should probably be steered away” from Zithromax for now.

At the same time, Dr. Bruce Psaty, a professor of medicine at the University of Washington, said doctors and patients need to know about the potential risks. The results also raise concerns about long-term use of Zithromax, which other research suggests could benefit people with severe lung disease; additional research is needed to determine whether that kind of use could be dangerous, he said.

The study appears in the New England Journal of Medicine. The National Heart, Lung and Blood Institute helped pay for the research. Wayne Ray, a Vanderbilt professor of medicine, studied the drug’s risks because of evidence linking it with potential heart rhythm problems.

Pfizer issued a statement saying it would thoroughly review the study: “Patient safety is of the utmost importance to Pfizer, and we continuously monitor the safety and efficacy of our products to ensure that the benefits and risks are accurately described.”


Additional info on Zithromax

Reported by Dr. Venkat Karra, Ph.D

Read Full Post »
