
Chapter 2 in R&D Alliances between Big Pharma and Academic Research Centers: Pharma’s Realization that Internal R&D Groups Alone Aren’t Enough

Israel’s Innovation System: A Triple Helix with Four Sub-helices

Prof. Henry Etzkowitz

It is fitting that the Triple Helix, with universities as a key innovation actor alongside industry and government, has been taken up in Israel, a knowledge-based society rooted in Talmudic scholarship and scientific research. Biblical literature provided legitimation for the creation of the Jewish state, while science helped create the economic base that made state formation feasible. In this case, the establishment of a government, the third element in a triple helix, followed the creation of (agricultural) industry and academia. Nevertheless, a triple helix dynamic can be identified in the earliest phases of the formation of Israeli society, well before a formal state apparatus was constructed. Founding a state was a key objective of industry and academia, but these intertwined helical strands did not accomplish the objective without assistance from other sources. Nor is innovation in contemporary Israel, as in many other societies, solely a triple helix phenomenon.

Several analysts have identified additional helices as relevant to innovation (Drori, Ch. 1). However, if everything is relevant then nothing is especially significant, and a model that originally posited the transformation of the university from a secondary supporting institution of industrial society to a primary institution of a knowledge-based society is vitiated. A second academic revolution expanded academic tasks from education and research to include entrepreneurship as a third mission. An entrepreneurial university, interacting closely with industry and government, is the core of a Triple Helix. By engaging in such relations an academic sector may, depending upon its previous experience, maintain or gain relative independence. Triple Helix actors must also continually renew their commitment to entrepreneurship and innovation, lest they fall back into traditional roles and relationships.

What is the source of the Israeli Triple Helix? The contributors to this volume have identified seven helical strands as constitutive of the Israeli innovation system. I suggest that these strands may be grouped into primary and secondary categories: the primary strands are the classic triple helix (university-industry-government), while the secondary strands are supporting linkages, like the two diasporas (Israeli and foreign), or hybrid organizations like the military and non-governmental organizations (NGOs). Thus, the resulting Israeli innovation system takes the form of a Trivium and a Quadrivium consisting of three primary and four secondary strands, in a variety of relationships with each other in different historical periods. The Innovation Trivium and Quadrivium are the constellation of core and supporting actors that constitute a knowledge-based innovation system. [1]

2.1 Triple Helix Origins

The triple helix innovation model originated in the analysis of MIT’s role in the renewal of New England, a region suffering industrial decline from the early 20th century (Etzkowitz, 2002). MIT was founded in the mid-19th century, with industry and government support, to raise the technological level of the region’s industries, but by the time it had developed research capabilities many of those industries had already left the region to move closer to sources of raw materials, lines of distribution, and less expensive labor. It was in this context, during the 1920s, that the governors of New England called together the leadership of the region in a Council to address the region’s economic decline. Given a unique feature of the region, its extensive network of academic institutions, it is not surprising that the governors included the academic leadership of the region in their call.

However, their inclusion of academia had an unexpected consequence, transforming the usual public-private partnership model into a unique configuration: a proto-triple helix with a proclivity to originality. Triads are more flexible than dyads, which typically take a strong common direction or devolve into opposition and stasis (Simmel, 1950). Industry-government groups typically repeat timeworn strategies to attract industries from other regions in a zero-sum game, or attempt to revive local declining industries that may be beyond resuscitation. The inclusion of academia along with industry and government introduced an element of novelty into the government-industry dyad. A moment of collective creativity occurred during the discussions of the New England Council, inspired by the leadership of MIT’s President Karl Compton. A triple helix dynamic, with the university as a key actor in an innovation strategy, was instituted that was highly unusual at the time.

The Council made an analysis of the strengths and weaknesses of the New England region and invented the venture capital firm to fill a gap in its innovation system, expanding a previously sporadic and uneven process of firm formation from academic research into a powerful stream of start-ups and growth firms. A coalition of industry, government, and university leaders invented a new model of knowledge-based economic and social development, building upon the superior academic resources of the region. This was not an isolated development but built upon previous financial and organizational innovations in the whaling industry and in academia. In New England, industry and government, inspired by an academic entrepreneur and visionary, William Barton Rogers, had come together in the mid-19th century to found MIT, the first entrepreneurial university, thereby establishing the preconditions for a triple helix dynamic in that region.

2.2 From a Double to a Triple Helix

In a remote province of the Ottoman Empire in the early 20th century, Jewish agricultural settlements and an agricultural research institute created a triple helix dynamic that assisted the formation of the State of Israel. An industry-academia double helix provided the knowledge-based foundation for the Israeli triple helix. It preceded the founding of the state of Israel and indeed supplied many of the building blocks from which it was constructed. In a possibly unique configuration, state formation built upon scientific research and an agricultural industrial base. Before the Technion, the Weizmann Institute and the Hebrew University, there was the Jewish Agricultural Experiment Station in Atlit, founded in 1909 by agronomist Aaron Aaronsohn, with the support of Julius Rosenwald, an American-Jewish philanthropist (Florence, 2007).

Hints in the Bible of agricultural surplus, a land flowing with “milk and honey,” were investigated in an early 20th century context of desertification in Palestine. The station’s researchers hypothesized that a seeming desert had a greater carrying capacity than was expected and thus could support a much larger population. Aaronsohn and his colleagues’ advances in “arid zone agriculture” opened the way to the transformation of a network of isolated agricultural settlements into a modern urban society. The Atlit research program, conducted in collaboration with the US Department of Agriculture, was then introduced to California.

However, in California, arid zone methods were soon made superfluous by hydraulic transfer projects that moved enormous water resources from north to south. Arid agricultural methods remained relevant in the Israeli context of scarce water resources. Israel’s first high-tech industry was based upon the development of drip irrigation techniques in the late 1950s, preceding the IT wave by decades. Labor-saving methods of agricultural production were also driven by the ideological concern of not wanting to be dependent upon hired Arab labor. Science-based technology was thus at the heart of a developing Israeli society as well as a key link to a Diaspora that supplied infusions of support from abroad.

The Atlit agricultural research institute transformed itself into an intelligence network on behalf of the British during the First World War, betting that assisting the exit of Palestine from the Ottoman Empire could provide a pathway for the creation of a Jewish state (Florence, 2007). The Atlit network was uncovered, and some of its members perished, but it had already provided significant information on invasion routes that assisted the British takeover of Palestine. Its leader, Aaron Aaronsohn, died in a plane crash over the English Channel in 1919 while bringing maps to the post-war Paris peace conference. The Institute itself did not survive its repurposing, but its mission was taken up by other agricultural research units.

A linkage between helices and the translation of social capital from one sphere to another was another element of the state building project. The Balfour Declaration, issued by the British government in 1917, favored a “national home” for the Jewish people in Palestine, without prejudicing the rights of other peoples, and was the first such statement by a major power. Although the Declaration was part of a geopolitical balancing act to gain support for the British war effort, and may have occurred for that reason alone, British-Jewish scientist Chaim Weizmann’s accomplishments gave it a boost (Weizmann, 1949).

Weizmann’s invention of a bacterial method of producing the feedstock for explosives assisted the British war effort. Weizmann, a professor at Manchester University, was able to transmute this discovery into support for a projected Jewish state through his relationship with Arthur Balfour, the Foreign Secretary and an MP from Manchester. Weizmann’s dual roles as an eminent scientist and as a political leader in the Zionist movement coincided, and he used an achievement in one arena to advance his goals in another. The Diaspora, of which he was a member in that era, aggregated international support for the state-building project.

Science also served to legitimate the new state of Israel. Chaim Weizmann, a scientist, became the newly founded state’s first president, and upon his death the presidency was offered to another scientist, Albert Einstein, who declined. While the aura of Einstein’s renown was one reason for the offer, that fame was primarily based on his scientific achievements. The fact that the position was held by, and then offered to, scientists suggests that science was implicitly seen as legitimating the state, while also recognizing its role in the founding of Israel.

2.3 Innovation Trivium and Quadrivium

Identification of additional secondary contributors to innovation is a useful task, but their relationship to the primary helices, and the roles that they play, should be specified. For example, the Israeli military may be viewed as a hybrid entity. In addition to the usual functions of a military, the Israel Defense Forces also serves as an educational institution for virtually the entire society, intermediating between secondary and university education, and as an industrial development platform, spinning off aircraft and software industries. It remains a part of the state, but its hybrid elements give it some of the characteristics of an independent institutional sphere.

It is a significant actor in Israeli society, with a far higher profile than the militaries of most societies. Therefore, we locate it in the “Quadrivium” of support helices that comprise hybrid organizations or links with other societies. The military derived from the “Shomrim,” watches mounted by isolated settlements, while nascent governmental institutions were a confluence of the networks of settlements and more general support structures such as the Jewish Agency, a mix of local and Diaspora efforts. A proto-state was constructed from these elements prior to independence.

The Israeli Diaspora played a key role, along with government, in founding Israel’s venture capital industry. After several unsuccessful attempts at developing a venture industry, government hit on the idea of combining public and private elements, providing government funds to encourage private partners to participate by reducing their risk. Key to the effort’s success was the recruitment of members of the Israeli Diaspora, working in financial and venture capital firms in the US, to return to Israel and participate in the Yozma project and the funds that emanated from it. [2]

2.4 Israel: A Triple Helix Society

This volume, analyzing Israel’s innovation actors, makes a significant contribution to triple helix theory and practice by providing evidence of their relative salience. Identifying multiple contributors to the innovation project is a useful exercise, but not all helices are equal. A key contribution of the triple helix model is that it identified the increased significance of the university in a knowledge-based society and the fundamental importance of creative triple helix interactions and relationships to societies that wish to increase their innovation potential (Durrani et al., 2012).

We can also identify the qualities of an emergent social structure that encourages innovation. Multiple sources of initiative, organizational venues that combine different perspectives and experiences, and persons with dual roles across the helices are more likely to produce innovation and hybridization than isolated rigid structures, even ones with great resources behind them. The Israeli experience takes the triple helix model a step beyond organizational innovation by demonstrating the significance of triple helix roles and relationships to the creation of an innovative society.

 References

Durrani, Tariq, Jann Hidajat Tjakraatmadja, and Wawan Dhewanto, eds. 2012. 10th Triple Helix Conference 2012. Procedia – Social and Behavioral Sciences, Volume 52.

Etzkowitz, Henry. 2002. MIT and the Rise of Entrepreneurial Science. London: Routledge.

Etzkowitz, Henry, Marina Ranga and James Dzisah. 2012. “Whither the University? The Novum Trivium and the transition from industrial to knowledge society.” Social Science Information 51: 143–164.

Florence, Ronald. 2007. Lawrence and Aaronsohn: T. E. Lawrence, Aaron Aaronsohn, and the Seeds of the Arab-Israeli Conflict. New York: Viking.

Simmel, Georg. 1950. Conflict and the Web of Group Affiliations. Glencoe: Free Press.

Weizmann, Chaim. 1949. Trial and Error: The Autobiography of Chaim Weizmann. New York: Harper & Bros.

[1] The classic Trivium and Quadrivium were the core and supporting academic disciplines that constituted the knowledge-base of medieval Europe. See Etzkowitz, Ranga and Dzisah, 2012.

[2] Author discussion with Yozma founders at the 3rd Triple Helix Conference in Rio de Janeiro, 1999. FINEPE, the Brazil Development Agency, invited Yozma representatives to the conference and held side meetings to arrange transfer of the Yozma model to Brazil. FINEPE added an additional element, “FINEPE University,” a series of workshops held around the country to train entrepreneurs in “pitching” to venture firms.

 

Other articles by the same author published in this Open Access Online Scientific Journal include the following:

BEYOND THE “MALE MODEL”: AN ALTERNATIVE FEMALE MODEL OF SCIENCE, TECHNOLOGY AND INNOVATION

 Professor Henry Etzkowitz 8/1/2012

http://pharmaceuticalintelligence.com/2012/08/01/beyond-the-male-model-an-alternative-female-model-of-science-technology-and-innovation/


THE TRIPLE HELIX ASSOCIATION NEWSLETTER, VOLUME 1 ISSUE 3 JULY 2012

Hélice, www.triplehelixassociation.org; Triple Helix X, 2012, Bandung, Indonesia, www.th2012.org

by Professor Henry Etzkowitz, President of the Triple Helix Association; Senior Researcher, H-STAR Institute, Stanford University; Visiting Professor, Birkbeck, London University, and Edinburgh University Business School

henry.etzkowitz@stanford.edu

Professor Henry Etzkowitz’s paper is based on his Keynote Address to the FemTalent Conference, Barcelona, Spain, 2011.

 


Reporter: Aviva Lev-Ari, PhD, RN

 

Nature Genetics (2013) doi:10.1038/ng.2705

Independent specialization of the human and mouse X chromosomes for the male germ line

  1. Whitehead Institute, Cambridge, Massachusetts, USA.

    • Jacob L Mueller,
    • Helen Skaletsky,
    • Laura G Brown,
    • Sara Zaghlul &
    • David C Page
  2. Howard Hughes Medical Institute, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA.

    • Helen Skaletsky,
    • Laura G Brown &
    • David C Page
  3. The Genome Institute, Washington University School of Medicine, St. Louis, Missouri, USA.

    • Susan Rock,
    • Tina Graves,
    • Wesley C Warren &
    • Richard K Wilson
  4. The Wellcome Trust Sanger Institute, Wellcome Trust Genome Campus, Hinxton, Cambridge, UK.

    • Katherine Auger
  5. Department of Biology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA.

    • David C Page

Contributions

J.L.M., H.S., W.C.W., R.K.W. and D.C.P. planned the project. J.L.M. and L.G.B. performed BAC mapping. J.L.M. performed RNA deep sequencing. T.G., S.R., K.A. and S.Z. were responsible for finished BAC sequencing. J.L.M. and H.S. performed sequence analyses. J.L.M. and D.C.P. wrote the manuscript.

Competing financial interests

The authors declare no competing financial interests.


Received 11 February 2013; accepted 20 June 2013; published online 21 July 2013.

We compared the human and mouse X chromosomes to systematically test Ohno’s law, which states that the gene content of X chromosomes is conserved across placental mammals [1]. First, we improved the accuracy of the human X-chromosome reference sequence through single-haplotype sequencing of ampliconic regions. The new sequence closed gaps in the reference sequence, corrected previously misassembled regions and identified new palindromic amplicons. Our subsequent analysis led us to conclude that the evolution of human and mouse X chromosomes was bimodal. In accord with Ohno’s law, 94–95% of X-linked single-copy genes are shared by humans and mice; most are expressed in both sexes. Notably, most X-ampliconic genes are exceptions to Ohno’s law: only 31% of human and 22% of mouse X-ampliconic genes had orthologs in the other species. X-ampliconic genes are expressed predominantly in testicular germ cells, and many were independently acquired since divergence from the common ancestor of humans and mice, specializing portions of their X chromosomes for sperm production.

Refined X Chromosome Assembly Hints at Possible Role in Sperm Production

July 22, 2013

NEW YORK (GenomeWeb News) – A US and UK team that delved into previously untapped stretches of sequence on the mammalian X chromosome has uncovered clues that sequences on the female sex chromosome may play a previously unappreciated role in sperm production.

The work, published online yesterday in Nature Genetics, also indicated such portions of the X chromosome may be prone to genetic changes that are more rapid than those described over other, better-characterized X chromosome sequences.

“We view this as the double life of the X chromosome,” senior author David Page, director of the Whitehead Institute, said in a statement.

“[T]he story of the X has been the story of X-linked recessive diseases, such as color blindness, hemophilia, and Duchenne’s muscular dystrophy,” he said. “But there’s another side to the X, a side that is rapidly evolving and seems to be attuned to the reproductive needs of males.”

As part of a mouse and human X chromosome comparison intended to assess the sex chromosome’s similarities across placental mammals, Page and his colleagues used a technique called single-haplotype iterative mapping and sequencing, or SHIMS, to scrutinize human X chromosome sequence and structure in more detail than was available previously.

With the refined human X chromosome assembly and existing mouse data, the team did see cross-mammal conservation for many X-linked genes, particularly those present in single copies. But that was not the case for a few hundred species-specific genes, many of which fell in segmentally duplicated, or “ampliconic,” parts of the X chromosome. Moreover, those genes were prone to expression by germ cells in male testes tissue, pointing to a potential role in sperm production-related processes.

“X-ampliconic genes are expressed predominantly in testicular germ cells,” the study authors noted, “and many were independently acquired since divergence from the common ancestor of humans and mice, specializing portions of their X chromosomes for sperm production.”

The work was part of a larger effort to look at a theory known as Ohno’s law, which predicts extensive X-linked gene similarities from one placental mammal to the next. Page and company turned to the same SHIMS method they had used to get a more comprehensive view of the Y chromosome in previous studies.

Using that sequencing method, the group resequenced portions of the human X chromosome, originally assembled from a mishmash of sequence from the 16 or more individuals whose DNA was used to sequence the human X chromosome reference.

Their goal: to track down sections of segmental duplication, called ampliconic regions, that may have been missed or assembled incorrectly in the mosaic human X chromosome sequence.

“Ampliconic regions assembled from multiple haplotypes may have expansions, contractions, or inversions that do not accurately reflect the structure of any extant haplotype,” the study’s authors explained.

“To thoroughly test Ohno’s law,” they wrote, “we constructed a more accurate assembly of the human X chromosome’s ampliconic regions to compare the gene contents of the human and mouse X chromosomes.”

The team focused their attention on 29 predicted ampliconic regions of the human X chromosome, using SHIMS to generate millions of bases of non-overlapping X chromosome sequence.

With that sequence in hand, they went on to refine the human X chromosome assembly before comparing it with the reference sequence for the mouse X chromosome, which already represented just one mouse haplotype.

The analysis indicated that 144 of the genes on the human X chromosome don’t have orthologs in mice, while 197 X-linked mouse genes lack human orthologs.

A minority of those species-specific genes arose as the result of gene duplication or gene loss events since the human and mouse lineages split from one another around 80 million years ago, the researchers determined. But most appear to have resulted from retrotransposition or transposition events involving sequences from autosomal chromosomes.

And when the team used RNA sequencing and existing gene expression data to look at which mouse and human tissues flip on particular genes, it found that many of the species-specific genes on the X chromosome showed preferential expression in testicular cells known for their role in sperm production.

Based on such findings, the study’s authors concluded that “the gene repertoires of the human and mouse X chromosomes are products of two complementary evolutionary processes: conservation of single-copy genes that serve in functions shared by the sexes and ongoing gene acquisition, usually involving the formation of amplicons, which leads to the differentiation and specialization of X chromosomes for functions in male gametogenesis.”

The group plans to incorporate results of its SHIMS-based assembly into the X chromosome portion of the human reference genome.

“This is a collection of genes that has largely eluded medical geneticists,” the study’s first author Jacob Mueller, a post-doctoral researcher in Page’s Whitehead lab, said in a statement. “Now that we’re confident of the assembly and gene content of these highly repetitive regions on the X chromosome, we can start to dissect their biological significance.”

SOURCE

http://www.genomeweb.com/node/1256251

 

REFERENCES in the Nature Genetics article

  • Ohno, S. Sex Chromosomes and Sex-Linked Genes (Springer, Berlin, 1967).
  1. Kuroiwa, A. et al. Conservation of the rat X chromosome gene order in rodent species. Chromosome Res. 9, 61–67 (2001).
  2. Delgado, C.L., Waters, P.D., Gilbert, C., Robinson, T.J. & Graves, J.A. Physical mapping of the elephant X chromosome: conservation of gene order over 105 million years. Chromosome Res. 17, 917–926 (2009).
  3. Prakash, B., Kuosku, V., Olsaker, I., Gustavsson, I. & Chowdhary, B.P. Comparative FISH mapping of bovine cosmids to reindeer chromosomes demonstrates conservation of the X-chromosome. Chromosome Res. 4, 214–217 (1996).
  4. Ross, M.T. et al. The DNA sequence of the human X chromosome. Nature 434, 325–337 (2005).
  5. Veyrunes, F. et al. Bird-like sex chromosomes of platypus imply recent origin of mammal sex chromosomes. Genome Res. 18, 965–973 (2008).
  6. Watanabe, T.K. et al. A radiation hybrid map of the rat genome containing 5,255 markers. Nat. Genet. 22, 27–36 (1999).
  7. Raudsepp, T. et al. Exceptional conservation of horse-human gene order on X chromosome revealed by high-resolution radiation hybrid mapping. Proc. Natl. Acad. Sci. USA 101, 2386–2391 (2004).
  8. Band, M.R. et al. An ordered comparative map of the cattle and human genomes. Genome Res. 10, 1359–1368 (2000).
  9. Murphy, W.J., Sun, S., Chen, Z.Q., Pecon-Slattery, J. & O’Brien, S.J. Extensive conservation of sex chromosome organization between cat and human revealed by parallel radiation hybrid mapping. Genome Res. 9, 1223–1230 (1999).
  10. Spriggs, H.F. et al. Construction and integration of radiation-hybrid and cytogenetic maps of dog chromosome X. Mamm. Genome 14, 214–221 (2003).
  11. Palmer, S., Perry, J. & Ashworth, A. A contravention of Ohno’s law in mice. Nat. Genet. 10, 472–476 (1995).
  12. Rugarli, E.I. et al. Different chromosomal localization of the Clcn4 gene in Mus spretus and C57BL/6J mice. Nat. Genet. 10, 466–471 (1995).
  13. She, X. et al. Shotgun sequence assembly and recent segmental duplications within the human genome. Nature 431, 927–930 (2004).
  14. Olivier, M. et al. A high-resolution radiation hybrid map of the human genome draft sequence. Science 291, 1298–1302 (2001).
  15. Dietrich, W.F. et al. A comprehensive genetic map of the mouse genome. Nature 380, 149–152 (1996).
  16. Church, D.M. et al. Lineage-specific biology revealed by a finished genome assembly of the mouse. PLoS Biol. 7, e1000112 (2009).
  17. Tishkoff, S.A. & Kidd, K.K. Implications of biogeography of human populations for ‘race’ and medicine. Nat. Genet. 36, S21–S27 (2004).
  18. Bovee, D. et al. Closing gaps in the human genome with fosmid resources generated from multiple individuals. Nat. Genet. 40, 96–101 (2008).
  19. Kidd, J.M. et al. Mapping and sequencing of structural variation from eight human genomes. Nature 453, 56–64 (2008).
  20. Skaletsky, H. et al. The male-specific region of the human Y chromosome is a mosaic of discrete sequence classes. Nature 423, 825–837 (2003).
  21. Hughes, J.F. et al. Chimpanzee and human Y chromosomes are remarkably divergent in structure and gene content. Nature 463, 536–539 (2010).
  22. Kuroda-Kawaguchi, T. et al. The AZFc region of the Y chromosome features massive palindromes and uniform recurrent deletions in infertile men. Nat. Genet. 29, 279–286 (2001).
  23. Bellott, D.W. et al. Convergent evolution of chicken Z and human X chromosomes by expansion and gene acquisition. Nature 466, 612–616 (2010).
  24. Lindblad-Toh, K. et al. Genome sequence, comparative analysis and haplotype structure of the domestic dog. Nature 438, 803–819 (2005).
  25. Wade, C.M. et al. Genome sequence, comparative analysis, and population genetics of the domestic horse. Science 326, 865–867 (2009).
  26. International Chicken Genome Sequencing Consortium. Sequence and comparative analysis of the chicken genome provide unique perspectives on vertebrate evolution. Nature 432, 695–716 (2004).
  27. Wang, E.T. et al. Alternative isoform regulation in human tissue transcriptomes. Nature 456, 470–476 (2008).
  28. Mortazavi, A., Williams, B.A., McCue, K., Schaeffer, L. & Wold, B. Mapping and quantifying mammalian transcriptomes by RNA-Seq. Nat. Methods 5, 621–628 (2008).
  29. Bradley, R.K., Merkin, J., Lambert, N.J. & Burge, C.B. Alternative splicing of RNA triplets is often regulated and accelerates proteome evolution. PLoS Biol. 10, e1001229 (2012).
  30. Handel, M.A. & Eppig, J.J. Sertoli cell differentiation in the testes of mice genetically deficient in germ cells. Biol. Reprod. 20, 1031–1038 (1979).
  31. Mueller, J.L. et al. The mouse X chromosome is enriched for multicopy testis genes showing postmeiotic expression. Nat. Genet. 40, 794–799 (2008).
  32. Coyne, J.A. & Orr, H.A. Speciation (Sinauer Associates, Sunderland, MA, 2004).
  33. Elliott, R.W. et al. Genetic analysis of testis weight and fertility in an interspecies hybrid congenic strain for chromosome X. Mamm. Genome 12, 45–51 (2001).
  34. Elliott, R.W., Poslinski, D., Tabaczynski, D., Hohman, C. & Pazik, J. Loci affecting male fertility in hybrids between Mus macedonicus and C57BL/6. Mamm. Genome 15, 704–710 (2004).
  35. Storchová, R. et al. Genetic analysis of X-linked hybrid sterility in the house mouse. Mamm. Genome 15, 515–524 (2004).
  36. Fujita, P.A. et al. The UCSC Genome Browser database: update 2011. Nucleic Acids Res. 39, D876–D882 (2011).
  37. Schwartz, S. et al. Human-mouse alignments with BLASTZ. Genome Res. 13, 103–107 (2003).
  38. Bailey, J.A. et al. Recent segmental duplications in the human genome. Science 297, 1003–1007 (2002).
  39. Osoegawa, K. et al. A bacterial artificial chromosome library for sequencing the complete human genome. Genome Res. 11, 483–496 (2001).
  40. Salido, E.C. et al. Cloning and expression of the mouse pseudoautosomal steroid sulphatase gene (Sts). Nat. Genet. 13, 83–86 (1996).
  41. Yeh, R.F., Lim, L.P. & Burge, C.B. Computational inference of homologous gene structures in the human genome. Genome Res. 11, 803–816 (2001).
  42. Altschul, S.F., Gish, W., Miller, W., Myers, E.W. & Lipman, D.J. Basic local alignment search tool. J. Mol. Biol. 215, 403–410 (1990).
  43. Thornton, K. & Long, M. Rapid divergence of gene duplicates on the Drosophila melanogaster X chromosome. Mol. Biol. Evol. 19, 918–925 (2002).
  44. Trapnell, C., Pachter, L. & Salzberg, S.L. TopHat: discovering splice junctions with RNA-Seq. Bioinformatics 25, 1105–1111 (2009).
  45. Trapnell, C. et al. Transcript assembly and quantification by RNA-Seq reveals unannotated transcripts and isoform switching during cell differentiation. Nat. Biotechnol. 28, 511–515 (2010).
  46. Brawand, D. et al. The evolution of gene expression levels in mammalian organs. Nature 478, 343–348 (2011).
  47. Deng, X. et al. Evidence for compensatory upregulation of expressed X-linked genes in mammals, Caenorhabditis elegans and Drosophila melanogaster. Nat. Genet. 43, 1179–1185 (2011).

 


Reporter: Aviva Lev-Ari, PhD, RN

 

Researchers have identified a transcription factor, known as ZEB1, that is capable of converting non-aggressive basal-type cancer cells into highly malignant, tumor-forming cancer stem cells. [Press release from the Whitehead Institute for Biomedical Research discussing online prepublication in Cell]

http://wi.mit.edu/news/archive/2013/scientists-identify-gene-controls-aggressiveness-breast-cancer-cells

Whitehead Institute is a world-renowned non-profit research institution dedicated to improving human health through basic biomedical research. Wholly independent in its governance, finances, and research programs, Whitehead shares a close affiliation with Massachusetts Institute of Technology through its faculty, who hold joint MIT appointments.


SCIENTISTS IDENTIFY GENE THAT CONTROLS AGGRESSIVENESS IN BREAST CANCER CELLS

Figure: Diagram of the mechanism cancer cells use to convert into cancer stem cells (described in the text below).

JULY 3, 2013

TAGS: WEINBERG LAB, YOUNG LAB, CANCER

CAMBRIDGE, Mass. – In a discovery that sheds new light on the aggressiveness of certain breast cancers, Whitehead Institute researchers have identified a transcription factor, known as ZEB1, that is capable of converting non-aggressive basal-type cancer cells into highly malignant, tumor-forming cancer stem cells (CSCs). Intriguingly, luminal breast cancer cells, which are associated with a much better clinical prognosis, carry this gene in a state in which it seems to be permanently shut down.

The researchers, whose findings are published this week in the journal Cell, report that the ZEB1 gene is held in a poised state in basal non-CSCs, such that it can readily respond to environmental cues that consequently drive those non-CSCs into the dangerous CSC state. Basal-type breast carcinoma is a highly aggressive form of breast cancer. According to a 2011 epidemiological study, the 5-year survival rate for patients with basal breast cancer is 76%, compared with a roughly 90% 5-year survival rate among patients with other forms of breast cancer.

“We may have found a root source, maybe the root source, of what ultimately determines the destiny of breast cancer cells—their future benign or aggressive clinical behavior,” says Whitehead Founding Member Robert Weinberg, who is also a professor of biology at MIT and Director of the MIT/Ludwig Center for Molecular Oncology.

Transcription factors are proteins that control the expression of genes, and therefore have a significant impact on cell activities. In the case of ZEB1, it has an important role in the so-called epithelial-to-mesenchymal transition (EMT), during which epithelial cells acquire the traits of mesenchymal cells. Unlike the tightly packed epithelial cells that stick to one another, mesenchymal cells are loose and free to move around a tissue. Previous work in the Weinberg lab showed that adult cancer cells passing through an EMT are able to self-renew and to seed new tumors with high efficiency, hallmark traits of CSCs.

Other earlier work led by Christine Chaffer, a postdoctoral researcher in the Weinberg lab, demonstrated that cancer cells are able to spontaneously become CSCs. Now Chaffer and Nemanja Marjanovic have pinpointed ZEB1, a key player in the EMT, as a gene critical for this conversion in breast cancer cells.

Breast cancers are categorized into at least five different subgroups based on their molecular profiles. More broadly, these groups can be subdivided into the less aggressive ‘luminal’ subgroup or the more aggressive ‘basal’ subgroup. The aggressive basal-type breast cancers often metastasize, seeding new tumors in distant parts of the body. Patients with basal breast cancer generally have a poorer prognosis than those with the less aggressive luminal-type breast cancer.

Chaffer and Marjanovic, a former research assistant in the Weinberg lab, studied non-CSCs from luminal- and basal-type cancers and determined that cells from basal cancers are able to switch relatively easily into the CSC state, unlike luminal breast cancer cells, which tend to remain in the non-CSC state.

The scientists determined that the difference in ZEB1’s effects is due to the way the gene is marked in the two types of cancers. In luminal breast cancer cells, the ZEB1 gene is occupied with modifications that shut it down. But in basal breast cancer cells, ZEB1’s state is more tenuous, with repressing and activating markers coexisting on the gene. When these cells are exposed to certain signals, including those from TGFß, the repressive marks are removed and ZEB1 is expressed, thereby converting the basal non-CSCs into CSCs.

So what does this new insight mean for treating basal breast cancer?

“Well, we know that these basal breast cancer cells are very plastic and we need to incorporate that kind of thinking into treatment regimes,” says Chaffer. “As well as targeting cancer stem cells, we also need to think about how we can prevent the non-cancer stem cells from continually replenishing the pool of cancer stem cells. For example, adjuvant therapies that inhibit this type of cell plasticity may be a very effective way to keep metastasis at bay.”

Marjanovic agrees but cautions that the model may not be applicable to every cancer.

“This is an example of how adaptable cancer cells can be,” says Marjanovic, who is currently a research assistant at the Broad Institute. “We have yet to determine if ZEB1 plays a similar role in all cancer types, but the idea that cancer cells reside in a poised state that enables them to adapt to changing environments may be a mechanism used by many cancers to increase their aggressiveness.”

 

This work is supported by the Advanced Medical Research Foundation (AMRF), the Breast Cancer Research Foundation, and National Institutes of Health (NIH) grants HG002668 and CA146445.

Written by Nicole Giese Rura

* * *

Robert Weinberg’s primary affiliation is with Whitehead Institute for Biomedical Research, where his laboratory is located and all his research is conducted. He is also a professor of biology at Massachusetts Institute of Technology and Director of the MIT/Ludwig Center for Molecular Oncology.

* * *

Full Citation:

“Poised chromatin at the ZEB1 promoter enables breast cancer cell plasticity and enhances tumorigenicity”

Cell, July 3, 2013.

Christine L Chaffer (1*), Nemanja D Marjanovic (1*), Tony Lee (1), George Bell (1), Celina G Kleer (2), Ferenc Reinhardt (1), Ana C D’Alessio (1), Richard A Young (1,3), and Robert A Weinberg (1,4).

1. Whitehead Institute for Biomedical Research, Cambridge, MA 02142, USA

2. University of Michigan Medical School, Department of Pathology, Ann Arbor, MI 48109, USA

3. Department of Biology, Massachusetts Institute of Technology, Cambridge, MA 02139, USA

4. Ludwig MIT Center for Molecular Oncology, Cambridge, MA 02139, USA

*These authors contributed equally to this work.

 

CONTACT

Communications and Public Affairs
Phone: 617-258-6851
Email: newsroom@wi.mit.edu



 


Precision Medicine: The Future of Medicine?

Reporter: Aviva Lev-Ari, PhD, RN

Dr. Laurie Glimcher, dean of Weill Cornell Medical College, and Dr. Robert Langer, the Koch Institute Professor at MIT, talk to the “CBS This Morning” co-hosts about what’s next in the fight against diseases like Alzheimer’s, cancer, and diabetes.

VIEW VIDEO

http://www.cbsnews.com/video/watch/?id=50149783n

Free Webinar:

The Economics of Precision Medicine: How Personalizing Treatment can Bend the Cost Curve by Improving the Value Delivered by Healthcare Innovations

In a world where it is clear that healthcare costs must be contained, how can we afford to pay for innovation? This webinar will explore how personalizing treatment can offer an escape from the innovation-cost conundrum. By simultaneously increasing clinical development efficiency and treatment effectiveness, targeting clinical innovations to the patients most likely to benefit can improve healthcare value per dollar spent while maintaining the ROI levels needed to support investment in innovation. We believe precision medicine should play a more prominent role in the cost containment discussion of healthcare reform.

By attending this Webinar, you will learn how to:

Help clients develop product development and commercialization strategies that get leverage from the benefits of precision medicine 

Support positioning of innovations as part of the healthcare solution, not the problem 

Understand and communicate the value proposition of precision medicine for payers, government decision makers, and legislators


Thursday, July 25, 2013

11:30 am PDT / 2:30 pm EDT

1 hour

Who should attend:

Franchise and Marketing Leaders

Therapeutic Area Leads

Medical Affairs

Government Affairs/Public Policy

Health Economics and Market Access

Webinar agenda:

Is the high cost of healthcare innovation incompatible with control of healthcare costs?

Cost-effectiveness criteria and how they can be met

Taking cost out of clinical development

Case Example: How everyone can win

Practical impact on development and commercialization strategies

Q&A

Speaker information: 

David Parker, Ph.D., Vice President, Market Access Strategy, Precision for Medicine

Vicki L. Seyfert-Margolis, Chief Scientific and Strategy Officer, Precision for Medicine

Harry Glorikian, Managing Director, Strategy, Precision for Medicine

Cambridge Healthtech Institute, 250 First Avenue, Suite 300, Needham, MA 02494

Tel: 781-972-5400 | Fax: 781-972-5425


Larry H Bernstein, MD, Writer, Curator
http://pharmaceuticalintelligence.com/2013/06/22/demythologizing-sharks-cancer-and-shark-fins/lhbern


Word Cloud By Danielle Smolyar

Sharks have survived some 400 million years on Earth. Could their longevity be due in part to an extraordinary resistance to cancer and other diseases? If so, humans might someday benefit from the shark’s secrets—but leading researchers caution that today’s popular shark cartilage “cancer cures” aren’t part of the solution.

The belief that sharks do not get cancer is not supported by fact, but it is the basis for decimating a significant part of the shark population for shark fins and for medicinal use. The unfortunate result is that sharks are killed in great numbers with no benefit to show for it.

One basis for this thinking is that sharks have been fished commercially since the late 1800s, and there have been few reports of anything out of the ordinary when removing internal organs or preparing meat for the marketplace. In addition, pre-medical students may have dissected dogfish sharks in comparative anatomy, but reports of cancerous tumors do not appear.

Carl Luer of the Mote Marine Laboratory’s Center for Shark Research in Sarasota, Florida, has been studying sharks’ cancer resistance for some 25 years. Systematic surveys of sharks are difficult to conduct, as capturing the animals in large numbers is time-consuming, and cancer tests would likely require the deaths of large numbers of sharks. Of the thousands of fish tumors in the collections of the Smithsonian Institution, only about 15 are from elasmobranchs, and only two of these are thought to have been malignant.

Scientists have been studying cancerous tumors in sharks for 150 years.

The first chondrichthyan (cartilaginous fish, a group that includes sharks) tumor was found on a skate and recorded by Deslongchamps in 1853. The first shark tumor was recorded in 1908. Scientists have since discovered benign and cancerous tumors in 18 of the 1,168 species of sharks. The scarcity of studies on shark physiology has perhaps allowed this myth to be accepted as fact for so many years.

In April 2000, John Harshbarger and Gary Ostrander countered this shark myth with a presentation on 40 benign and cancerous tumors known to be found in sharks, and soon after a blue shark was found with cancerous tumors in both its liver and testes. Several years later a cancerous gingival tumor was removed from the mouth of a captive sand tiger shark, Carcharias taurus. Advances in shark research continue to produce studies on types of cancer found in various species of shark. Sharks, like fish, encounter and take in large quantities of environmental pollutants, which may actually make them more susceptible to tumorous growth. Despite recorded cases of shark cancer and evidence that shark cartilage has no curative powers against cancer, sharks continue to be harvested for their cartilage.

Sharks and their relatives, the skates and rays, have enjoyed tremendous success during their nearly 400 million years of existence on earth, according to Dr. Luer. He points out that one reason for this certainly is their uncanny ability to resist disease. Sharks do get sick, but their incidence of disease is much lower than among the other fishes. While statistics are not available on most diseases in fishes, reptiles, amphibians, and invertebrates, tumor incidence in these animals is carefully monitored by the Smithsonian Institution in Washington, D.C.

The Smithsonian’s enormous database, called the Registry of Tumors in Lower Animals, catalogs tissues suspected of being tumorous, including cancers, from all possible sources throughout the world. Of the thousands of tissues in the Registry, most are from fish, but only a few are from elasmobranchs. Only 8 to 10 legitimate tumors have been found among all the shark and ray tissues examined, and only two of these are thought to have been malignant.

An observation by Gary Ostrander, a professor at Johns Hopkins University, is that there may be fundamental differences in shark immune systems that make them less prone to cancer. The major thrust of Mote’s research focuses on the immunity of sharks and their relatives, the skates and rays. While skates aren’t as interesting to the public as their shark relatives, their similar biochemical immunology and their ability to breed in captivity make them perhaps more vital to Luer’s lab work. The aim is to study their differences from and similarities to the higher animals, and what the role of the immune system might be in their low incidence of disease.

This low incidence of tumors among the sharks and their relatives has prompted biochemists and immunologists at Mote Marine Laboratory (MML) to explore the mechanisms that may explain the unusual disease resistance of these animals. To do this, they established the nurse shark and clearnose skate as laboratory animals. They designed experiments to see whether tumors could be induced in the sharks and skates by exposing them to potent carcinogenic (cancer-causing) chemicals, and then monitored pathways of metabolism or detoxification of the carcinogens in the test animals. While there were similarities and differences in the responses when compared with mammals, no changes in the target tissues or their genetic material ever resulted in cancerous tumor formation in the sharks or skates.

The chemical exposure studies led to investigations of the shark immune system. As with mammals, including humans, the immune system of sharks probably plays a vital role in the overall health of these animals. But there are some important differences between the immune arsenals of mammals and sharks. The immune system of mammals typically consists of two parts which utilize a variety of immune cells as well as several classes of proteins called immunoglobulins (antibodies).

Compared to the mammalian system, which is quite specialized, the shark immune system appears primitive but remarkably effective. Sharks apparently possess immune cells with the same functions as those of mammals, but the shark cells appear to be produced and stimulated differently. Furthermore, in contrast to the variety of immunoglobulins produced in the mammalian immune system, sharks have only one class of immunoglobulin (termed IgM). This immunoglobulin normally circulates in shark blood at very high levels and appears to be ready to attack invading substances at all times.

Another difference lies in the fact that sharks, skates, and rays lack a bony skeleton, and so do not have bone marrow. In mammals, immune cells are produced and mature in the bone marrow and other sites, and, after a brief lag time, these cells are mobilized to the bloodstream to fight invading substances. In sharks, the immune cells are produced in the spleen, thymus and unique tissues associated with the gonads (epigonal organ) and esophagus (Leydig organ). Some maturation of these immune cells occurs at the sites of cell production, as with mammals. But a significant number of immune cells in these animals actually mature as they circulate in the bloodstream. Like the ever-present IgM molecule, immune cells already in the shark’s blood may be available to respond without a lag period, resulting in a more efficient immune response.

Research was being carried out during the 1980s at the Massachusetts Institute of Technology (MIT) and at Mote Marine Laboratory designed to understand how cartilage is naturally able to resist penetration by blood capillaries. If the basis for this inhibition could be identified, it was reasoned, it might lead to the development of a new drug therapy. Such a drug could control the spread of blood vessels feeding a cancerous tumor, or the inflammation associated with arthritis.

The results of the research showed only that a very small amount of an active material, with limited ability to control blood vessel growth, can be obtained from large amounts of raw cartilage. The cartilage must be subjected to several weeks of harsh chemical procedures to extract and concentrate the active ingredients. Once this is done, the resulting material is able to inhibit blood vessel growth in laboratory tests on animal models, when the concentrated extract is directly applied near the growing blood vessels.  One cannot assume that comparable material in sufficient amount and strength is released passively from cartilage when still in the animal to inhibit blood vessel growth anywhere in the body.

Tumors release chemicals stimulating capillary growth so that a nutrient-rich blood supply is created to feed the tumorous cells. This process is called angiogenesis. If scientists could control angiogenesis, they could limit tumor growth. Cartilage lacks capillaries running through it. Why should this be a surprise? Cartilage cells are called chondrocytes, and they function to produce an acellular interstitial matrix consisting of hyaluronan (a complex carbohydrate formed from hyaluronic acid and chondroitin sulfate) that is protective of interlaced collagen. Early research into the anti-angiogenesis properties of cartilage revealed that tiny amounts of proteins could be extracted from cartilage and, when applied in concentration to animal tumors, the formation of capillaries and the spread of tumors was inhibited.

Henry Brem and Judah Folkman first noted that cartilage prevented the growth of new blood vessels into tissues in the 1970s. The creation of a blood supply, called angiogenesis, is a characteristic of malignant tumors, as the rapidly dividing cells need lots of nutrients to continue growing. It is valuable to consider that these neovascular generating cells are not of epithelial derivation, but are endothelial and mesenchymal. To support their very high metabolism, tumors secrete a hormone called ‘angiogenin’, which causes nearby blood vessels to grow new branches that surround the tumor, bringing in nutrients and carrying away waste products.

Brem and Folkman began studying cartilage to search for anti-angiogenic compounds. They reasoned that since all cartilage lacks blood vessels, it must contain some signaling molecules or enzymes that prevent capillaries from forming. They found that inserting cartilage from baby rabbits alongside tumors in experimental animals completely prevented the tumors from growing. Further research showed calf cartilage, too, had anti-angiogenic properties.

A young researcher by the name of Robert Langer repeated the initial rabbit cartilage experiments, except this time using shark cartilage. Indeed, shark cartilage, like calf and rabbit cartilage, inhibited blood vessels from growing toward tumors. Research by Dr. Robert Langer of M.I.T. and other workers revealed a promising anti-tumor agent obtainable in quantity from shark cartilage. This compound, antagonistic to the effects of angiogenin and called ‘angiogenin inhibitor’, inhibits the formation of the new blood vessels (neovascularization) essential for supporting cancer growth.

The consequence of the “shark myth” is not surprising. An inhabitant of the open ocean, the Silky Shark is ‘hit’ hard by the shark fin and shark cartilage industries, away from the prying eyes of a mostly land-bound public. As a consequence of this ‘invisibility’, mortality of Silkies is difficult to estimate or regulate. North American populations of sharks have decreased by up to 80% in the past decade, as cartilage companies harvest up to 200,000 sharks every month in US waters to create their products. One American-owned shark cartilage plant in Costa Rica is estimated to destroy 2.8 million sharks per year. Sharks are slow-growing compared to other fish, and simply cannot reproduce fast enough to survive such sustained, intense fishing pressure. Unless fishing is dramatically decreased worldwide, a number of shark species will go extinct before we even notice.

Sources:

1. National Geographic News: nationalgeographic.com/news

2. Do Sharks Hold Secret to Human Cancer Fight? by Brian Handwerk for National Geographic News, August 20, 2003.

3. Busting Marine Myths: Sharks DO Get Cancer! by Christie Wilcox, November 9, 2009.

Sand tiger shark (Carcharias taurus) at the Newport Aquarium. (Photo credit: Wikipedia)

Angiogenesis (Photo credit: Wikipedia)

The immune response (Photo credit: Wikipedia)


Biomaterials Technology: Models of Tissue Engineering for Reperfusion and Implantable Devices for Revascularization

Author and Curator: Larry H Bernstein, MD, FACP

and

Curator: Aviva Lev-Ari, PhD, RN

http://pharmaceuticalintelligence.com/5_04_2013/bernstein_lev-ari/Bioengineering_of_Vascular_and_Tissue_Models

This is the THIRD of a three-part series on the evolution of vascular biology and the studies of the effects of biomaterials in vascular reconstruction and on drug delivery. The work has embraced a collaboration of cardiologists at Harvard Medical School, its affiliated hospitals, and MIT, requiring cardiovascular scientists at the PhD and MD level, physicists, and computational biologists working in concert, and an exploration of the depth of the contributions by a distinguished physician, scientist, and thinker.

The FIRST part – Vascular Biology and Disease – covered the advances in the research on

Drug Eluting Stents: On MIT’s Edelman Lab’s Contributions to Vascular Biology and its Pioneering Research on DES

  • vascular biology,
  • signaling pathways,
  • drug diffusion across the endothelium and
  • the interactions with the underlying muscularis (media),
  • with additional considerations for type 2 diabetes mellitus.

The SECOND part – Stents and Drug Delivery – covered the

Vascular Repair: Stents and Biologically Active Implants

  • purposes,
  • properties and
  • evolution of stent technology with
  • the acquired knowledge of the pharmacodynamics of drug interactions and drug distribution.

In this THIRD part, on the Problems and Promise of Biomaterials Technology, we cover the biomaterials used, the design of the cardiovascular devices, extensions of their uses, and opportunities for improvement.

Biomaterials Technology: Tissue Engineering and Vascular Models –

Problems and Promise

We have thus far elaborated on developments in the last 15 years that have led to significant improvements in cardiovascular health.

First, there has been development of smaller sized catheters that can be introduced into

  • not only coronary arteries, but into the carotid and peripheral vasculature;

Second, there has been specific design of coated-stents that can be placed into an artery

  • for delivery of a therapeutic drug.

This began with a focus on restenosis, a serious problem after vascular repair, starting from the difficult
problem of controlling the activity of intravenously given heparin, and was
extended to modifying the heparan-sulfate molecular structure

  • to diminish vascular endothelial hyperplasia,
  • concurrent with restriction of the anticoagulant activity.

Third, the ability to place stents with medicated biomaterials locally has extended to

  • the realm of chemotherapy, and we shall see where this progresses.

The Engineered Arterial Blood Flow Models

Biomedical engineers, in collaboration with physicians, biologists, chemists, physicists, and
mathematicians, have developed models to predict vascular repair by knowledge of

  • the impact of interventions on blood flow.

These models have become increasingly sophisticated and precise, and they propel us
toward optimization of cardiovascular therapeutics in general and personalizing treatments
for patients with cardiovascular disease. (1)
The science of vascular biology has been primarily stimulated by the clinical imperative to

  • combat complications that ensue from vascular interventions.

Thus, when a novel vascular biological finding or cardiovascular medical/surgical technique
is presented, we are required to ask the 2-fold question:

  • what have we learned about the biology of the blood vessel?
  • how might this knowledge be used to enhance clinical perspective and treatment?

The innovative method of engineering arterial conduits presented by Campbell et al. in
Circulation Research presents us with just such a challenge, and we deal here with its biological and clinical ramifications.

Each of four pivotal studies in vascular tissue engineering has been an important advance
in the progression to a tissue-engineered blood vessel that can serve as a

  • living graft, responsive to the biological environment, and as
  • a self-renewing tissue with an inherent healing potential.

Weinberg and Bell taught us that a tissue-engineered graft could be constructed and could be composed of human cells.

L’Heureux et al. demonstrated that the mechanical strength of such a material derives in major part from the extracellular matrix, and that matrix production and the integrity of cellular sheets could be enhanced by alterations in culture conditions.

Niklason et al. noted that grafts are optimally formed

  • when incubated within environmental conditions that they will confront in vivo
  • or would have experienced if formed naturally.

Campbell et al. now demonstrate that it is possible to remove the immune reaction and acute rejection that may follow cell-based grafting by culturing tissues in the anticipated host, and they address the fundamental issue of whether cell source or site of cell placement dictates function after cell implantation.

It appears that the vascular matrix can be remodeled by the body according to the needs of the environment. It may
very well be that the ultimate configuration of an autologous cell-based vascular graft need not be determined at the
outset by the cells that comprise the device, but rather

  • by a dynamics that is established by environmental needs, wherein the body molds
  • tissue-engineered constructs to meet
    • local flow,
    • metabolic, and
    • inflammatory requirements.

In other words, cell source for tissue reconstruction may be secondary to
cell pliability to environmental influence.

Endothelial and smooth muscle cells from many, perhaps any,

  • vascular bed can be used to create new grafts and will then
  • achieve secondary function once in place in the artery.

The environmental remodeling observed after implantation

  • may modify limitations of grafts that are composed of nonvascular peritoneal cells whose initial structure is neither venous nor arterial. (2)
  • The trilaminate vascular architecture provides biochemical regulation and mechanical integrity.
  • Yet regulatory control can be regained after injury without recapitulating tertiary structure.

Tissue-engineered (TE) endothelium controls repair even when

  • placed in the perivascular space of injured vessels.

It remains unclear from vascular repair studies whether endothelial implants recapitulate the vascular endothelial lining or expose injured tissues to endothelial cells (ECs) with unique healing potential, because ECs line both the vessel lumen and the vasa vasorum.

The authors examined this issue in a nonvascular tubular system, asking whether airway repair is controlled by

  • bronchial epithelial cells (EPs) or by
  • endothelial cells (ECs) of the perfusing bronchial vasculature.

Localized bronchial denuding injury damaged epithelium, narrowed the bronchial lumen, and led to mesenchymal cell hyperplasia, hypervascularity, and inflammatory cell infiltration. Peribronchial TE constructs embedded with EPs or ECs limited airway injury, although optimum repair was obtained when both cell types were present in TE matrices.

EC and EP expression of PGE2, TGF-β1, TGF-β2, GM-CSF, IL-8, MCP-1, and soluble VCAM-1 and ICAM-1 was altered by matrix embedding, and most significantly so when both cell types, EC and EP, were present simultaneously.

EPs may provide functional control of organ injury and the fibrous response, and ECs may provide preservation of tissue perfusion and of the epithelium in particular.

Together the two cell types optimize functional restoration and healing, suggesting that multiple cells of a tissue contribute to its differentiated biochemical function and repair, but need not assume a fixed, ordered architectural relationship, as in intact tissues, to achieve these effects. (3)

Matrix-embedded Endothelial Cells (MEECs) Implants

The implantation of matrix-embedded endothelial cells (MEECs)

  • is considered to have therapeutic potential in controlling the vascular response to injury and
  • maintaining patency in arteriovenous anastomoses.

The authors considered the 3-dimensional microarchitecture of the tissue engineering scaffold to be
a key regulator of endothelial behavior in MEEC constructs.

Notably, they found that ECs in a porous collagen scaffold had a markedly altered cytoskeletal structure, with oriented actin fibers and rearranged focal adhesion proteins, in comparison to cells grown on 2D surfaces.

Examination of the immunomodulatory capabilities of MEECs revealed that MEECs reduced the recruitment of monocytes to an inflamed endothelial monolayer 5-fold compared with ECs on 2D surfaces.

An analysis of secreted factors from the cells revealed

  • an 8-fold lower release of Monocyte Chemotactic Protein-1 (MCP-1) from MEECs.

Differences between 3D- and 2D-cultured cells were abolished in the presence of inhibitors of the focal adhesion-associated signaling molecule Src, suggesting that adhesion-mediated signaling is essential in controlling the potent immunomodulatory effects of MEECs. (4)

Cardiogenesis is regulated by a complex interplay between transcription factors. How do these interactions
regulate the transition from mesodermal precursors to cardiac progenitor cells (CPCs)?

Yin Yang 1 (YY1), a member of the GLI-Krüppel family of DNA-binding zinc finger transcription factors (TFs), can activate or inhibit transcription in a context-dependent manner.

Bioinformatic-based Transcription Factor Genome-wide Sequencing Analysis

These investigators performed a bioinformatic, genome-wide transcription factor binding-site analysis on upstream promoter regions of genes that are enriched in embryonic stem cell–derived CPCs, to identify novel regulators of the mesodermal cardiac lineage.

From 32 candidate transcription factors screened, they found that

  • Yin Yang 1 (YY1), a repressor of sarcomeric gene expression, is present in CPCs.

They uncovered the ability of YY1 to transcriptionally activate Nkx2.5, a key marker of early cardiogenic commitment. YY1 regulates Nkx2.5 expression via a 2.1-kb cardiac-specific enhancer, as demonstrated by

  1. in vitro luciferase-based assays,
  2. in vivo chromatin immunoprecipitation, and
  3. genome-wide sequencing analysis.

Furthermore, the ability of YY1 to activate Nkx2.5 expression depends on its cooperative interaction with Gata4.

Cardiac mesoderm–specific loss of function of YY1 resulted in early embryonic lethality.

This was corroborated in vitro by embryonic stem cell–based assays, which showed that overexpression of YY1 enhanced the cardiogenic differentiation of embryonic stem cells into CPCs.

The results indicate an essential and unexpected role for YY1

  • to promote cardiogenesis as a transcriptional activator of Nkx2.5
  • and other CPC-enriched genes. (5)

Proportional Hazards Models to Analyze First-onset of Major
Cardiovascular Disease Events

Various measures of arterial stiffness and wave reflection are considered to be cardiovascular risk markers.

Prior studies have not assessed the relations of a comprehensive panel of stiffness measures to prognosis.

The authors used proportional hazards models to analyze first onset of major cardiovascular disease events –

  • myocardial infarction,
  • unstable angina,
  • heart failure, or
  • stroke

in relation to arterial stiffness measured by

  • pulse wave velocity [PWV],
  • wave reflection (augmentation index [AI]),
  • carotid-brachial pressure amplification [PPA], and
  • central pulse pressure [CPP]

in 2232 participants (mean age, 63 years; 58% women) in the Framingham Heart Study.

During median follow-up of 7.8 (range, 0.2 to 8.9) years,

  • 151 of 2232 participants (6.8%) experienced an event.

In multivariable models adjusted for

  • age,
  • sex,
  • systolic blood pressure,
  • use of antihypertensive therapy,
  • total and high-density lipoprotein cholesterol concentrations,
  • smoking, and
  • presence of diabetes mellitus,

higher aortic PWV was associated with a 48% increase in cardiovascular disease risk
(hazard ratio per SD, 1.48; 95% confidence interval, 1.16 to 1.91; P = 0.002).

After PWV was added to a standard risk factor model,

  • integrated discrimination improvement was 0.7%
    (95% confidence interval, 0.05% to 1.3%; P < 0.05).

In contrast, AI, CPP, and PPA were not related to

  • cardiovascular disease outcomes in multivariable models.

(1) Higher aortic stiffness assessed by PWV is associated with

  • increased risk for a first cardiovascular event.

(2) Aortic PWV improves risk prediction when added to standard risk factors

  • and may represent a valuable biomarker of CVD risk in the community. (6)
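
For readers who want to see the shape of such an analysis, the following is a minimal, self-contained sketch in Python using the lifelines library with synthetic data; the data frame, column names, and effect sizes are illustrative assumptions patterned on the covariates listed above, not the Framingham data.

```python
# Minimal sketch of a Cox proportional hazards analysis of first-onset CVD
# events vs. standardized aortic PWV, patterned on the analysis summarized
# above. All data below are synthetic; lifelines is assumed to be installed.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "pwv_sd":   rng.normal(size=n),          # aortic PWV, per-SD units
    "age":      rng.uniform(50, 80, n),
    "male":     rng.integers(0, 2, n),
    "sbp":      rng.normal(135, 18, n),      # systolic BP, mm Hg
    "diabetes": rng.integers(0, 2, n),
})
# Synthetic follow-up times: hazard rises with PWV (true HR ~ exp(0.4) ~ 1.5)
risk = 0.4 * df["pwv_sd"] + 0.03 * (df["age"] - 63)
time = rng.exponential(scale=20 * np.exp(-risk))
df["followup_years"] = np.minimum(time, 8.9)   # administrative censoring
df["event"] = (time <= 8.9).astype(int)        # 1 = first major CVD event

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="event")
cph.print_summary()  # exp(coef) for pwv_sd estimates the HR per SD of PWV
```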

1. Engineered arterial models to correlate blood flow to tissue biological response. J Martorell, P Santoma, JJ Molins,
AA García-Granada, JA Bea, et al. Ann NY Acad Sci 2012; 1254:51–56. (Issue: Evolving Challenges in Promoting
Cardiovascular Health)  http://dx.doi.org/10.1111/j.1749-6632.2012.06518.x

2.  Vascular Tissue Engineering. Designer Arteries. Elazer R. Edelman. Circ Res. 1999; 85:1115-1117
http://www.circresaha.org  http://dx.doi.org/10.1161/01.RES.85.12

3.  Tissue-engineered endothelial and epithelial implants differentially and synergistically regulate airway repair.
BG Zani, K Kojima, CA Vacanti, and ER Edelman.   PNAS 13, 2008; 105(19):7046–7051.
http://www.pnas.org/cgi/doi/10.1073/pnas.0802463105

4.  The role of scaffold microarchitecture in engineering endothelial cell immunomodulation.
L Indolfi, AB Baker, ER Edelman. Biomaterials 2012; http://dx.doi.org/10.1016/j.biomaterials.2012.06.052

5.  Essential and Unexpected Role of Yin Yang 1 to Promote Mesodermal Cardiac Differentiation. S Gregoire, R Karra,
D Passer, Marcus-André Deutsch, et al.  Circ Res. 2013;112:900-910. http://dx.doi.org/10.1161/CIRCRESAHA.113.259259
http://circres.ahajournals.org/doi:10.1161/CIRCRESAHA.113.259259

6.  Arterial Stiffness and Cardiovascular Events. The Framingham Heart Study.
GF Mitchell, Shih-Jen Hwang, RS Vasan, MG Larson, et al.  Circulation. 2010;121:505-511.
http://circ.ahajournals.org/doi/10.1161/CIRCULATIONAHA.109.886655

Cardiology Diagnosis of ACS and Stents – 2012

The Year in Cardiology 2012: Acute Coronary Syndromes.

Nick E.J. West      http://www.medscape.com/viewarticle/779039

The European Society of Cardiology (ESC) produced updated guidance on management of STEMI in 2012.
It also produced a third version of the Universal Definition of Myocardial Infarction.
The importance of early diagnosis is stressed, with first ECG in patients

  • with suspected STEMI recommended within 10 min of first medical contact (FMC)
  • and primary percutaneous coronary intervention (PPCI) for STEMI
  • ideally within 90 min (rated ‘acceptable’ out to a maximum of 120 min).

The guidance highlights the importance of collaborative networks to facilitate achievement of such targets, and of prompt assessment and management of atypical presentations not always considered under the umbrella of STEMI, including

    • left bundle branch block (LBBB),
    • paced rhythms, and
    • isolated ST-segment elevation in lead aVR,

especially when accompanied by symptoms consistent with myocardial ischaemia.

Therapeutic hypothermia is now recommended for all resuscitated patients with STEMI complicated by cardiac arrest, with immediate coronary angiography (with a view to follow-on PPCI) when the ECG demonstrates persistent ST-segment elevation.

In the light of recently published studies and meta-analyses, including that of Kalesan et al., drug-eluting stents (DES) are now routinely preferred to bare metal stents (BMS), in view of the reduced need for repeat revascularization and the absence of the previously perceived hazard of stent thrombosis.

The more potent antiplatelet agents prasugrel and ticagrelor are also preferred to clopidogrel for all STEMI cases, with dual antiplatelet therapy (DAPT) ideally continued for 1 year, and a strict minimum of 6 months for patients receiving DES.

The Third Universal Definition of Myocardial Infarction was published
simultaneously with the STEMI guidance. This guideline endorses

  • cardiac troponin as the biomarker of choice to detect myocardial necrosis
  • with spontaneously occurring myocardial infarction (MI) defined as an
  • elevation above the 99th percentile upper reference value for the assay.

There is further development and clarification of MI in different settings

  • to allow standardization across trials and registries

in particular after revascularization procedures. After CABG in patients with a normal baseline troponin,

  • MI is defined as a rise to a value more than 10 times the 99th percentile upper reference limit within the first 48 h, and
  • after PCI, as a rise to more than 5 times the 99th percentile upper reference limit

in patients with a normal baseline level (or a ≥20% rise when troponin is elevated and stable or falling pre-procedure).
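
To make the biomarker thresholds concrete, here is a schematic encoding of the post-procedural criteria just described, as a Python sketch; the function and argument names are hypothetical, and the biomarker criterion alone does not diagnose MI (the guideline also requires ancillary clinical evidence).

```python
# Illustrative encoding of the procedure-related troponin thresholds described
# above. Function and argument names are hypothetical; url_99th = 99th
# percentile upper reference limit of the troponin assay.
def procedural_mi_by_troponin(procedure: str,
                              baseline_tn: float,
                              peak_tn_48h: float,
                              url_99th: float,
                              stable_or_falling_pre: bool = True) -> bool:
    """Return True if the troponin pattern meets the biomarker criterion."""
    normal_baseline = baseline_tn <= url_99th
    if procedure == "CABG":
        # Rise to >10x the 99th percentile URL within 48 h, normal baseline
        return normal_baseline and peak_tn_48h > 10 * url_99th
    if procedure == "PCI":
        if normal_baseline:
            # Rise to >5x the 99th percentile URL
            return peak_tn_48h > 5 * url_99th
        # Elevated baseline: >=20% rise if stable or falling pre-procedure
        return stable_or_falling_pre and peak_tn_48h >= 1.2 * baseline_tn
    raise ValueError("procedure must be 'CABG' or 'PCI'")

# Example: PCI patient, normal baseline, peak at 6x the URL -> criterion met
print(procedural_mi_by_troponin("PCI", baseline_tn=0.01,
                                peak_tn_48h=0.30, url_99th=0.05))
```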

The ACCF/AHA updated its guidance on the management of unstable angina/non-STEMI:

angiography with a view to revascularization

  • is now recommended within 12–24 h of presentation, with
  • DAPT pre-loading prior to PCI procedures also now advocated.

Ticagrelor and prasugrel are cited as acceptable alternatives to clopidogrel.
The maintenance dose of aspirin recommended for the majority of cases is 81 mg daily.
This guideline brings about transatlantic agreement in most areas.

Risk Stratification

Identification and appropriate triage of patients presenting to emergency departments
with acute chest pain remain a difficult dilemma:

  • many are low-risk and have a non-cardiac origin, while
  • a significant minority with coronary artery disease may not be picked up on clinical grounds, even when assessed with appropriate tests, including ECG and biomarker estimation used in conjunction with a clinical risk score (e.g. GRACE, TIMI).

As endorsed in ESC guidance, there has been increasing interest in non-typical ECG patterns for the diagnosis of STEMI beyond LBBB, which is an accepted surrogate.

Widimsky et al. retrospectively analysed 6742 patients admitted to hospital with acute MI: in patients presenting with right bundle branch block, a blocked epicardial vessel was more common (51.7 vs. 39.4%; P < 0.001), and the incidences of both shock and mortality were comparable with LBBB (14.3 vs. 13.1% and 15.8 vs. 15.4%, respectively; both P = NS).

Wong et al. demonstrated the importance of ST-elevation in lead aVR, often viewed as indicative of left main stem occlusion, which carried increased mortality in patients presenting with both inferior and anterior infarction.

Perhaps the most important data regarding the ECG in 2012 were also the most simple: Antoni et al. highlighted a powerful and very simple method of risk stratification; heart rate measured on a 12-lead ECG at discharge after primary PCI (PPCI) is an independent predictor of mortality at 1 and 4 years of follow-up.

Patients with a discharge heart rate of ≥70 b.p.m. had a two-fold higher mortality at both follow-up time points, with every increase of 5 b.p.m. in heart rate equating to a 29% increase in mortality at 1 year and 24% at 4 years.

These findings have important implications for the optimization of patient therapies after MI (including the use of
rate-limiting agents such as beta-blockers, calcium channel-blockers, and ivabradine), although large randomized
trials are needed to confirm that

  • interventions to reduce heart rate will replicate the benefits observed in this study.
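
As a purely arithmetic illustration of what those per-5-b.p.m. figures imply, the short sketch below compounds the reported relative risks across a range of discharge heart rates; the reference heart rate is an assumption chosen for illustration only.

```python
# Arithmetic illustration of the Antoni et al. finding quoted above:
# each 5 b.p.m. increment in discharge heart rate ~ 29% higher mortality
# at 1 year and 24% at 4 years. The 60 b.p.m. reference is arbitrary.
RR_PER_5BPM_1Y = 1.29
RR_PER_5BPM_4Y = 1.24
REF_HR = 60  # hypothetical reference discharge heart rate, b.p.m.

for hr in (60, 70, 80, 90):
    steps = (hr - REF_HR) / 5.0
    rr_1y = RR_PER_5BPM_1Y ** steps
    rr_4y = RR_PER_5BPM_4Y ** steps
    print(f"{hr} b.p.m.: relative risk x{rr_1y:.2f} (1 y), x{rr_4y:.2f} (4 y)")
```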

http://img.medscape.com/article/779/039/779039-thumb1.png

Figure 1.  Kaplan–Meier time-to-event plots for heart rate at discharge divided by quartiles and all-cause mortality
(A and C) and cardiovascular mortality (B and D) at 1-year (A and B) and 4-year (C and D) follow-up,
demonstrating relationship between discharge heart rate and mortality after PPCI for STEMI.
Modified from Antoni et al.

Coronary Intervention and Cardioprotection in Acute Coronary Syndromes

Microvascular obstruction during PCI for ACS/STEMI is associated with increased infarct size and adverse prognosis;
its pathophysiology is thought to be a combination of

  • mechanical distal embolization of thrombus and plaque constituents during PCI,  coupled with
  • enhanced constriction/hyperreactivity of the distal vascular bed.

The most novel Strategy to Reduce Infarct Size

is the use of a Bare Metal Stent (BMS) covered on its outer surface with a mesh micronet designed to
trap and hold potentially friable material that might embolize distally at the time of PCI.

The MASTER study randomized 433 STEMI patients to PPCI

  • with conventional BMS or DES at the operator’s discretion vs.
  • the novel MGuard stent (InspireMD, Tel Aviv, Israel);

the primary endpoint of complete ST-segment resolution was better

  • in patients receiving MGuard (57.8 vs. 44.7%; P = 0.008), as was
  • the achievement of TIMI grade 3 flow in the treated vessel (91.7 vs. 82.9%; P = 0.006).

Nevertheless, median ST-segment resolution did not differ between treatment groups, myocardial blush grade was no different, and neither were safety outcomes at 30 days (death, adverse events) nor overall MRI-determined infarct mass.

Higher target vessel revascularization (TVR) rates may accrue with a BMS platform when compared with current-generation DES (as now endorsed for PPCI in ESC guidance).

In comparing the four studies in cardioprotection, there remains little to choose between strategies as evidenced by

  • the relatively minor differences between surrogate endpoints employed regardless of
  • therapeutic intervention chosen (Figure 2).

http://img.medscape.com/article/779/039/779039-fig2.jpg

Figure 2.  Comparison of study endpoints for reduction in infarct size in STEMI.
Study endpoints listed on the x-axis. STR, ST-segment resolution; TIMI 3, thrombolysis in
myocardial infarction grade 3 antegrade flow; myocardial blush grade 2/3 (MBG 2/3).

Recent advances in

  • PCI equipment,
  • peri-procedural pharmacology,
  • technique, and safety, as well as
  • convergence of national guidance,

are leading to the point where

  • even in the highest risk patients such as those presenting with ACS, small improvements
  • may be difficult to discern despite large well-designed and -conducted studies.

References

  1. a. The Task Force on the management of ST-segment elevation acute myocardial infarction
    of the European Society of Cardiology. ESC guidelines for the management of acute
    myocardial infarction in patients presenting with ST-segment elevation. Eur Heart J
    2012;33:2569–2619.  b. Management of acute myocardial infarction in patients presenting
    with ST-segment elevation. The Task Force on the Management of Acute Myocardial
    Infarction of the European Society of Cardiology.  Eur Heart J 2003; 24 (1): 28-66.
    http://dx.doi.org/10.1093/eurheartj/ehs215
  2. ESC Guidelines for the management of acute coronary syndromes in patients presenting
    without persistent ST-segment elevation: The Task Force for the management of acute
    coronary syndromes (ACS) in patients presenting without persistent ST-segment elevation
    of the European Society of Cardiology (ESC).  http://dx.doi.org/10.1093/eurheartj/ehr236
  3. Thygesen K, Alpert JS, Jaffe AS, Simoons ML, Chaitman BS, White HD. The Writing Group on
    behalf of the Joint ESC/ACCF/AHA/WHF Task Force for the Universal Definition of
    Myocardial Infarction. Third universal definition of myocardial infarction.
    Eur Heart J 2012;33:2551–2567.  http://dx.doi.org/10.1093/eurheartj/ehm355
  4. Kalesan B, Pilgrim T, Heinimann K, Raber L, Stefanini GG, et al. Comparison of drug-eluting
    stents with bare metal stents in patients with ST-segment elevation myocardial infarction.
    Eur Heart 2012;33:977–987.
  5. Jneid H, Anderson JL, Wright RS, Adams CS, et al. 2012 ACCF/AHA Focused Update of the
    Guideline for the Management of Patients with Unstable Angina/Non-ST-Elevation Myocardial
    Infarction (Updating the 2007 Guideline and Replacing the 2011 Focused Update). A Report
    of the American College of Cardiology Foundation/American Heart Association Task Force
    on Practice Guidelines. J Am Coll Cardiol 2012;60:645–681.
  6. Widimsky P, Rohác F, Stásek J, Kala P, Rokyta R, et al. Primary angioplasty in acute myocardial
    infarction with right bundle branch block: should new onset right bundle branch block be added
    to future guidelines as an indication for reperfusion therapy? Eur Heart J 2012;33:86–95.
  7. Wong CK, Gao W, Stewart RA, French JK, and the HERO-2 Investigators. The prognostic meaning of
    the full spectrum of aVR ST-segment changes in acute myocardial infarction.
    Eur Heart J 2012;33:384–392.
  8. Antoni L, Boden H, Delgado V, Boersma E, et al. Relationship between discharge heart rate and mortality
    in patients after myocardial infarction treated with primary percutaneous coronary intervention.
    Eur Heart J 2012;33:96–102.
  9. Stone GW, Abizaid A, Silber S, Dizon JM, Merkely B, et al. Prospective, randomised, multicenter evaluation
    of a polyethylene terephthalate micronet mesh-covered stent (MGuard) in ST-segment elevation myocardial
    infarction. The MASTER Trial. J Am Coll Cardiol 2012. http://dx.doi.org/10.1016/j.jacc.2012.09.004
  10. Zhou C, Yao Y, Zheng Z, Gong J, Wang W, Hu S, Li L. Stenting technique, gender, and age are associated with
    cardioprotection by ischaemic postconditioning in primary coronary intervention: a systematic review of
    10 randomized trials. Eur Heart J 2012;33:3070–3077.

Resistant Hypertension.

Robert M. Carey.
Hypertension. 2013;61:746-750.  http://dx.doi.org/10.1161/HYPERTENSIONAHA.111.00601

Resistant hypertension is defined as failure to achieve goal blood pressure (BP) <140/90 mm Hg
(or <130/80 mm Hg in patients with diabetes mellitus or chronic kidney disease) in patients with

  • hypertension who are compliant with maximum tolerated doses of an appropriate antihypertensive drug regimen consisting of a minimum of 3 agents of different classes, including a diuretic.
  • Patients who meet the criteria for resistant hypertension but whose BP can be controlled on maximum tolerated
    doses of ≥4 antihypertensive agents are classified as having controlled resistant hypertension.

Although the number of failed antihypertensive drugs required for the classification of resistant hypertension is arbitrary,

  • this diagnosis identifies patients at high risk for having a potentially curable form of hypertension, and
  • those who may benefit from specific therapeutic approaches to lower BP.
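
The definitions above reduce to a small decision rule; the sketch below encodes them in Python, with hypothetical function and field names, purely for illustration.

```python
# Schematic encoding of the resistant-hypertension definitions quoted above.
# Function and argument names are illustrative only.
def classify_resistant_htn(sbp: float, dbp: float, n_agents: int,
                           includes_diuretic: bool,
                           diabetes_or_ckd: bool = False) -> str:
    """Classify per the BP goal and drug-regimen criteria described above."""
    goal_sbp, goal_dbp = (130, 80) if diabetes_or_ckd else (140, 90)
    at_goal = sbp < goal_sbp and dbp < goal_dbp
    adequate_regimen = n_agents >= 3 and includes_diuretic
    if not adequate_regimen:
        return "not classifiable as resistant (regimen criteria unmet)"
    if not at_goal:
        return "resistant hypertension"
    if n_agents >= 4:
        return "controlled resistant hypertension"
    return "controlled hypertension"

# Example: above goal on 3 drug classes including a diuretic -> resistant
print(classify_resistant_htn(sbp=148, dbp=92, n_agents=3,
                             includes_diuretic=True))
```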

Summary

The first portion of this document shows the impact that ER Edelman and his peers have had on the development of interventional cardiology, carrying out studies to test, validate, or reject assumptions about the interaction of biomaterials with vascular and smooth muscle tissue in the repair of vessels injured by

  1. trauma,
  2. inflammatory injury, or
  3. stent placement.

In the second portion of this discussion, I introduce current views about complications in implanted devices, evolving
standards, and the current definitions of stable, unstable, and previously unclassified ACS risk.

Pushing Drug-Eluting Stents Into Uncharted Territory

Simpler Than You Think—More Complex Than You Imagine

Campbell Rogers, MD; Elazer R. Edelman, MD, PhD.  Circulation 2006; 113: 2262-2265.
http://dx.doi.org/10.1161/CIRCULATIONAHA.106.623470

Mechanical failure is a characteristic of a material or a device and not necessarily an indication of inadequacy. All devices
will fail under some specific stress. It is only failure at the lowest levels of stress that may represent inadequacy. Stress on
a material, for example, rises with strain until a critical load is exceeded, at which point the material fatigues and loses
mechanical integrity. Failure analysis, the science by which these conditions are rigorously defined, is an important
component of device design, development, and use. Once the transition point to failure is identified, material use can be
restricted to the zone of safety or modified so as to have this zone expanded. Just as the characterization of a material is
incomplete unless pushed to the limits of load bearing, characterization of an implantable device is incomplete unless preclinical and clinical environments test the limits of device functionality. It was in this light, in 1999, that the authors noted the impossibility of defining the functional limits of novel bare metal stents in head-to-head trials, which, by necessity, could only include lesions into which the predicate device (the Palmaz-Schatz stent, Cordis, Warren, NJ) could have been placed.

New School Percutaneous Interventions

Over the past 5 years, the number of percutaneous interventions has grown by 40%. This expansion derives from an increased breadth of cases, as percutaneous interventions are now routinely performed in diabetic, small-vessel, multilesion, diffuse disease, and acute coronary syndrome settings. Contemporaneously, widespread adoption of drug-eluting stents has emboldened clinicians and provided greater security in the use of these devices in lesions and patients previously thought to be beyond their reach.

Head-to-head randomized trial data have accumulated so that analysis may demonstrate differences among drug-eluting stents. A level playing field of prospective randomized trials could add to the weight of evidence on the unanswered question of which underlying factors determine device failure.

Complexity Simplified

Drug-eluting stent “failure” can be defined operationally in the same way as material failure:

  • inadequate function in the setting of a given load or strain.

The inability to withstand stress may take many forms that can change over time. Failure may be manifest acutely as

  • the inability to deliver a stent to the desired location,
  • subacutely as stent thrombosis or
  • postprocedural myonecrosis, and later as
  • restenosis

“Simple lesions” are those in which few devices should fail; “complex” lesions have a heightened risk of failure. To be of value, each scale of advancing complexity must provoke higher failure rates, and a given device may fail sooner than another along one such “complexity” scale and later along another. As advanced drug-eluting stent designs have enhanced deliverability and reduced restenosis rates, 7 randomized trials comparing directly the two Food and Drug Administration (FDA)-approved drug-eluting stents, Cypher (Cordis-Johnson and Johnson) and Taxus (Boston Scientific, Boston, Mass), have been reported. These trials report a broad range of restenotic failure as evidenced by the need for revascularization. Across these trials, driven by a variety of factors, revascularization rates vary quite widely.

The clinical end point of target lesion revascularization (TLR) becomes

  • a single measure of device failure.

When the 7 trials are depicted in order of increasing TLR, the rate of failure increases more slowly with one device than the other. This gives two regression plots, for Taxus vs. Cypher, with different slopes as complexity increases, and the

  • separation between the failure rates of the two devices broadens when plotted against the “degree of complexity” assigned by the slopes of the lines.

Finally, the correlation between TLR rates for the Taxus and Cypher stents indicates that trial-specific events and conditions determined TLR (with a sharp slope of Taxus vs. Cypher; r² = 0.85). The ratio of TLR rates (the slope) was greater than 3, suggesting that although both devices are subject to increasing failure as complexity increases,

  • one device becomes ever-more likely than the other to fail when applied in settings with ever-higher TLR risk.
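
The slope comparison described above can be reproduced schematically as follows; the TLR percentages in the sketch are made-up placeholders standing in for the 7 trials, not reported results.

```python
# Sketch of the slope comparison described above: TLR rates for the two DES
# across 7 head-to-head trials, ordered by overall complexity. The TLR
# percentages below are hypothetical placeholders, not trial results.
import numpy as np

taxus_tlr  = np.array([3.0, 4.5, 6.0, 8.5, 11.0, 14.0, 18.0])  # hypothetical
cypher_tlr = np.array([2.0, 2.5, 3.0, 3.5,  4.5,  5.0,  6.0])  # hypothetical

# Regress Taxus TLR on Cypher TLR; the slope estimates how much faster one
# device's failure rate grows as trial "complexity" (TLR risk) rises.
slope, intercept = np.polyfit(cypher_tlr, taxus_tlr, 1)
corr = np.corrcoef(cypher_tlr, taxus_tlr)[0, 1]
print(f"slope = {slope:.2f}, r^2 = {corr**2:.2f}")
# A slope well above 1 (the article reports a ratio >3 with r^2 = 0.85)
# indicates the failure-rate gap widens as complexity increases.
```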

In other words, composite medical devices with a wide range of

  • structural,
  • geometric, and
  • pharmacological differences
    • can be shown to produce different clinical effects
    • as the environments in which they are tested become increasingly complex.

What the Individual Trials Cannot Tell Us

The progressive difference between the performances of the 2 FDA-approved drug-eluting stents as they are pushed into
more complex settings is precisely what one would anticipate from medical devices with different performance signatures.
Most randomized trials, even if they include high complexity, are unable to identify predictors of failure because of the low numbers of patients enrolled, and the problem worsens as the number of subsets increases. Consequently, for device development and clinical practice, knowing which patient or lesion characteristics confer higher failure rates is critical.
This analysis has centered on restenosis. Other failure modes to be considered are

  • stent thrombosis,
  • postprocedural myonecrosis,
  • late plaque rupture,
  • vascular disease away from the site, and
  • heightened inflammatory reaction;

these are no less critical and may be determined by completely different device or patient characteristics.

Well-executed registry or pooled data

It is in this light that the registry report of Kastrati et al. in the current issue of Circulation is of greatest value. There are
two ways in which well-executed registry or pooled data can be most complementary to randomized trials.

First, large numbers of patients provide a higher incidence of rare failure modes and allow more granular determination of lesion- or patient-specific predictors of failure (meta-analysis or, better, a combined data file). A pooled analysis of several head-to-head randomized bare metal stent trials allowed identification of clear risk factors for stent thrombosis that had eluded analysis of the individual (smaller) trials.

Second, registry or pooled data may incorporate a broader range of patient characteristics, allowing greater discrimination between devices. The report of Kastrati et al may fall into this category as well, as it includes “high risk” populations from several randomized trials. They report on more than 2000 lesions in 1845 patients treated with either Taxus or Cypher drug-eluting stents at two hospitals.  The study population is from a series of randomized trials comparing Taxus and Cypher stents.   Using multivariate analysis to identify what lesion and patient characteristics predict failure (restenosis), they identified risk factors that included

  • prior history of coronary bypass surgery
  • calcification
  • smaller vessel size
  • greater degree of prestent and poststent stenosis.

Use of a Cypher rather than Taxus stent was independently associated with lower restenosis risk.

An interesting negative finding was the absence of diabetes as a significant predictor, at odds with strong suggestions from several other analyses. A better understanding from preclinical or clinical studies of the effect of diabetic states on restenosis is critical.

Author’s opinion voiced:

This author (LHB) considers the study underpowered to answer that question, because of further partitioning across several variables. Pooled data with

  • rigorous ascertainment and
  • careful statistical methodology, taken
  • together with randomized trial data, open a door to device choice based on the knowledge that risk of failure (complexity) does vary, and
  • the higher the complexity, the greater the incremental benefit of choosing one device over another.

A decision algorithm is therefore possible, whereby multiple failure modes and risk factors are weighed, and

  • an optimum stent choice made which balances
  • safety and efficacy based on the totality of evidence, rather than anecdote and loose comparisons of disparate subgroups from individual trials.

Evaluating Clinical Trials

The subject of trial(s) is difficult… the aim and meaning of all the trials… is

  • to let people know what they ought to do or what they must believe

It was perhaps naïve to imagine that devices as different one from another as the two current FDA-approved drug-eluting
stents would produce identical clinical results. If so, it ought not to come as a surprise that head-to-head randomized trial
data from many different countries in complex settings are now indicating just how differently the 2 devices may perform.

Future trials should be designed and evaluated to examine why these differences exist. Trials residing
only in previous safety and complexity domains

  • are unlikely to offer deeper insights into
    1. device performance,
    2. patient care decisions, or
    3. discrimination of alternative therapies.

We look forward to more trials that will examine what we currently believe to be the limits of

  • drug-eluting stents and interventional cardiology and to

help define in simple terms differences

  • between complex devices applied to complex problems.

This 2006 article was an excellent demonstration of comparing two commonly used coated stents, and then extending the argument to the need for more data to further delineate the factors that explain the differences found. In the previous article, the SECOND in the three-article series, Stents and Drug Delivery –

Vascular Repair: Stents and Biologically Active Implants

we concentrated on stents and drug delivery, and not on stent failure. The following article in J Control Release is another example of this explanatory approach to the problem.

Lesion Complexity Determines Arterial Drug Distribution After Local Drug Delivery

AR Tzafriri, N Vukmirovic, VB Kolachalama, I Astafieva, ER Edelman. J Control Release 2010; 142(3):332–338.
http://dx.doi.org/10.1016/j.jconrel.2009.11.007    PMCID: PMC2994187

Local drug delivery from endovascular stents has transformed how we treat coronary artery disease. Yet, few drugs are in fact effective when delivered from endovascular implants, and those that are possess a narrow therapeutic window. The width of this window is predicated to a great degree upon the extent of drug deposition and distribution through the arterial wall.

  • Drugs that are retained within the blood vessel are far more effective than those that are not.

Thus, for example, heparin regulates virtually every aspect of the vascular response to injury, but it is so soluble and diffusible that it simply cannot stay in the artery for more than minutes after release.

  • Heparin has no effect on intimal hyperplasia when eluted from a stent.
  • Paclitaxel and sirolimus in contradistinction are far smaller compounds with perhaps more narrow and specific effects than heparin.

These drugs bind tenaciously to tissue protein elements and specific intracellular targets and remain beneath stent struts long after release.

The clinical efficacy of paclitaxel and sirolimus at reducing coronary artery restenosis rates following elution from stents appears incontrovertible. Emerging clinical and preclinical data suggest, however, that the benefit of local release of these drugs is beset by significant complications that rise with lesion complexity, as the native composition and layered ultrastructure of the artery become more significantly disrupted.

Virmani and others have hypothesized that the attraction of lipophilic drugs like paclitaxel and sirolimus to fat should affect their retention within and effects upon atheromatous lesions.

Though stents are deployed in diseased arteries, drug distribution has only been quantified in intact, non-diseased vessels.

The authors at MIT correlated steady-state arterial drug distribution with tissue ultrastructure and composition in abdominal aortae from atherosclerotic human autopsy specimens and from rabbits with lesions induced by dietary manipulation and controlled injury.

Drug and compositional metrics were quantified and correlated at a compartmental level, in each of the tunica layers, or at an intra-compartmental level. All images were processed to

  • eliminate backgrounds and artifacts, and
  • pixel values between thresholds were extracted for all zones of interest.

Specific algorithms analyzed each of the histo/immuno-stained arterial structures. Intra-compartmental analyses were

  • performed by sub-dividing arterial cross-sections into 2–64 equal sectors and
  • evaluating the pixel-average luminosity for each sector.
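
A minimal sketch of that sector analysis, assuming a pre-thresholded 2D luminosity image, might look like the following in Python (the function name and conventions are illustrative, not the study's code):

```python
# Minimal sketch of the intra-compartmental sector analysis described above:
# divide an arterial cross-section image into N equal angular sectors around
# its centroid and average pixel luminosity per sector. Background removal
# and thresholding are assumed to have been done already (zeros = masked).
import numpy as np

def sector_mean_luminosity(img: np.ndarray, n_sectors: int = 16) -> np.ndarray:
    """img: 2D array of pixel luminosities; returns per-sector means."""
    rows, cols = np.nonzero(img)
    cy, cx = rows.mean(), cols.mean()            # centroid of the section
    theta = np.arctan2(rows - cy, cols - cx)     # angle of each pixel
    sector = ((theta + np.pi) / (2 * np.pi) * n_sectors).astype(int)
    sector = np.clip(sector, 0, n_sectors - 1)
    vals = img[rows, cols]
    return np.array([vals[sector == k].mean() if np.any(sector == k) else 0.0
                     for k in range(n_sectors)])

# Usage: compute sector means for a drug-signal image and a compositional
# stain image, then regress one against the other, as described above.
```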

Linear regression of drug versus compositional luminosities asymptotically approached steady state after subdivision into 16 sectors. This system controlled delivered dose and removed the significant unpredictability in release that is imposed by variability

  • in stent position relative to the arterial wall,
  • inflation techniques and stent geometry.
As steady-state tissue distribution results were obtained under constant source conditions, without washout by flowing blood, they constitute upper bounds for arterial drug distribution following transient modes of in vivo drug delivery, wherein only a fraction of the eluted dose is absorbed by the artery.

Paclitaxel, everolimus, and sirolimus deposition in human aortae was maximal in the media and scaled inversely with lipid content.

Net tissue paclitaxel and everolimus levels were indistinguishable in mildly injured rabbit arteries independent of diet. Yet, serial sectioning of cryopreserved arterial segments demonstrated

  • a differential transmural deposition pattern that was amplified with disease and
  • correlated with expression of their intracellular targets, tubulin and FKBP-12.

Tubulin distribution and paclitaxel binding increased with vascular injury and macrophage infiltration, and were reduced with increasing lipid content.

Sirolimus analogues and their specific binding target FKBP-12 were less sensitive to alterations of diet
in mildly injured arteries, presumably reflecting a faster transient response of FKBP-12 to injury.

The idea that drug deposition after balloon inflation and stent implantation within diseased, atheromatous and sclerotic vessels tracks so precisely with specific tissue elements is

  • an important consideration of drug-eluting technologies and
  • may well require that we consider diseased rather than naïve tissues in preclinical evaluations.

Another publication from this group reveals the immense analytical power used in understanding the complexities of drug-eluting stents.

Luminal Flow Amplifies Stent-Based Drug Deposition in Arterial Bifurcations

Kolachalama VB, Levine EG, Edelman ER.    PLoS ONE 2009; 4(12): e8105.
 http://dx.doi.org/10.1371/journal.pone.0008105

Treatment of arterial bifurcation lesions using drug-eluting stents (DES) is now common clinical practice.
Arterial drug distribution patterns become challenging to analyze if the lesion involves more than one vessel,
as in the case of bifurcations. As use extends to non-straightforward lesions and complex geometries,
questions abound

  • regarding DES longevity and safety.

Indeed, there is no consensus on best stent placement scenario, no understanding as to

  • whether DES will behave in bifurcations as they do in straight segments, and
  • whether drug from a main-branch (MB) stent can be deposited within a side-branch (SB).

It is not evident how to efficiently determine the efficacy of local drug delivery, how to quantify zones of excessive drug that are harbingers of vascular toxicity and thrombosis, or how to identify areas of depletion that are associated with tissue overgrowth and luminal re-narrowing.

Geometry modeling and governing equations

The authors at MIT constructed two-phase computational models of stent-deployed arterial bifurcations, simulating blood flow and drug transport to investigate the factors modulating drug distribution when the main-branch (MB) was treated using a DES.

The framework for constructing physiologically realistic three dimensional computational models of single
and bifurcated arterial vessels was SolidWorks (Dassault Systemes) (Figs. 1A–1B, Movie S1). The geometry
generation algorithm allowed for controlled alteration of several parameters including

  • stent location
  • strut dimensions
  • stent-cell shape
  • lumen diameter to arterial tissue thickness ratio
  • lengths of the arterial branches
  • extent of stent apposition and
  • the bifurcation angle.

For the current study, equal lengths (2LS) were assumed for the proximal and distal sections of the MB from the bifurcation. The SB was constructed at an angle of 30°. The inlet conditions were based on

  • mean blood flow and
  • diameter measurements

obtained from human left anterior descending coronary artery (LAD).

The diameter of the lumen (DMB) and the wall thickness (TMB) for the MB were defined such that DMB/TMB ≈ 10, and this ratio was also maintained for the SB.
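
A back-of-the-envelope check of the implied flow regime can be made from these specifications; the flow and diameter figures below are typical literature values for a human LAD, inserted here as assumptions, not the study's exact inputs.

```python
# Rough Reynolds number for the inlet conditions described above. Density
# matches the model; the viscosity, diameter, and flow values are assumed
# literature-typical figures for a human LAD, for illustration only.
import math

rho = 1060.0   # blood density, kg/m^3 (as in the model)
mu = 3.5e-3    # representative blood viscosity, Pa*s (assumed)
D = 3.0e-3     # LAD lumen diameter, m (assumed ~3 mm)
Q = 1.0e-6     # mean LAD volumetric flow, m^3/s (assumed ~60 mL/min)

v = Q / (math.pi * (D / 2) ** 2)   # mean velocity from volumetric flow
Re = rho * v * D / mu
print(f"mean velocity ~ {v:.3f} m/s, Re ~ {Re:.0f}")  # laminar regime
```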

Schematics of the computational models used for the study. A stent of length LS is placed at the upstream section of the arterial vessel in the (A) absence and in the (B) presence of a bifurcation, respectively.

  • Insets in (B) denote the delta wing stent design (i),
  • strut thickness (d) (ii), and
  • the outlets of the side-branch (iii) and
  • the main-branch (iv).

A delta wing-shaped cell design belonging to the class of slotted-tube stents was used for all simulations.
The length (LS) and diameter (DS) were

  • fixed at 9×10⁻³ m and 3×10⁻³ m, respectively, for the MB stent.

All stents were assumed to be perfectly apposed to the lumen of the MB, and the intrinsic strut shape was modeled as

  • square with side length 10⁻⁴ m.

The continuity and momentum equations were solved within the arterial lumen, where v_f, ρ (≈1060 kg/m³), P, and μ are the velocity, density, pressure, and viscosity of blood, respectively.

In order to capture boundary layer effects at the lumen-wall (mural) surface, a Carreau model was employed for all the simulations, to account for the shear-thinning behavior of blood at low shear rates.
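
The Carreau model itself is compact enough to state in a few lines; the parameter values below are commonly quoted literature constants for blood and are assumptions here, not necessarily the study's values.

```python
# Carreau shear-thinning viscosity model referenced above. Parameter values
# are commonly quoted literature constants for blood (assumptions here).
import numpy as np

MU_0 = 0.056      # zero-shear viscosity, Pa*s
MU_INF = 0.00345  # infinite-shear viscosity, Pa*s
LAM = 3.313       # relaxation time, s
N = 0.3568        # power-law index

def carreau_viscosity(gamma_dot: np.ndarray) -> np.ndarray:
    """mu = mu_inf + (mu_0 - mu_inf) * [1 + (lam*gamma')^2]^((n-1)/2)"""
    return MU_INF + (MU_0 - MU_INF) * (1 + (LAM * gamma_dot) ** 2) ** ((N - 1) / 2)

shear_rates = np.logspace(-2, 3, 6)    # 0.01 to 1000 1/s
print(carreau_viscosity(shear_rates))  # viscosity falls as shear rate rises
```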

In the arterial lumen, drug transport followed an advection–diffusion process. Similar to the momentum transport in the arterial lumen, the continuity equation was solved within the arterial wall by treating the wall as a porous medium.
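
To make the transport equation concrete, here is a toy one-dimensional finite-volume advection–diffusion solve; the real model is three-dimensional with coupled luminal and mural domains, so every number below is illustrative only.

```python
# Toy 1D explicit solve of dc/dt + v dc/dx = D d2c/dx2, a cartoon of the
# luminal drug transport described above. All values are illustrative.
import numpy as np

nx, L = 200, 0.01                 # number of cells; domain length 1 cm
dx = L / nx
v, D = 0.1, 1e-9                  # advection velocity (m/s), diffusivity (m^2/s)
dt = 0.4 * min(dx / v, dx**2 / (2 * D))   # stable explicit time step

c = np.zeros(nx)
c[0] = 1.0                        # constant drug source at the inlet

for _ in range(250):
    cm, cp = np.roll(c, 1), np.roll(c, -1)
    adv = -v * (c - cm) / dx              # first-order upwind (v > 0)
    dif = D * (cp - 2 * c + cm) / dx**2   # central second difference
    c = c + dt * (adv + dif)
    c[0], c[-1] = 1.0, c[-2]              # inlet source; zero-gradient outlet

print(f"front has advected to x ~ {dx * np.argmin(c > 0.5):.4f} m")
```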

A finite volume solver (Fluent, ANSYS Inc.) was utilized to perform the coupled flow and drug transport simulations. The semi-implicit method for pressure-linked equations-consistent (SIMPLEC) algorithm was used with second order spatial accuracy. A second order discretization scheme was used to solve the pressure equation and second order  upwind schemes were used for the momentum and concentration variables.

Simulations for each case were performed

  • for at least 2500 iterations or
  • until there was a 10⁻⁸ reduction in the mass transport residual.

Drug distribution in non-bifurcating vessels

Constant flow simulations generate local recirculation zones juxtaposed to the stent which in turn act as

  • secondary sources of drug deposition and
  • induce an asymmetric tissue drug distribution profile in the longitudinal flow direction.

Our 3D computational model predicts a far more extensive fluid mechanic effect on drug deposition than previously appreciated in two-dimensional (2D) domains.

Within the stented region, drug deposition on the mural interface, quantified as the area-weighted average drug concentration (AWAC), is 12% higher in the distal segment of the stent than in the proximal segment.

Total drug uptake in the arterial wall, denoted as the volume-weighted average concentration (VWAC), is highest in the middle segment of the stent and 5% higher than in the proximal stent region.
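
The two metrics are simple weighted averages over mesh faces and cells; a minimal sketch follows, with hypothetical arrays standing in for mesh data.

```python
# Sketch of the two deposition metrics defined above: area-weighted average
# concentration (AWAC) on the mural surface and volume-weighted average
# concentration (VWAC) in the wall. Arrays are hypothetical per-element
# concentrations with their face areas / cell volumes from a mesh.
import numpy as np

def awac(c_face: np.ndarray, area: np.ndarray) -> float:
    """Area-weighted average drug concentration over mural surface faces."""
    return float(np.sum(c_face * area) / np.sum(area))

def vwac(c_cell: np.ndarray, vol: np.ndarray) -> float:
    """Volume-weighted average drug concentration over arterial-wall cells."""
    return float(np.sum(c_cell * vol) / np.sum(vol))

# Hypothetical comparison across stent segments, as in the analysis above:
rng = np.random.default_rng(0)
for seg in ("proximal", "middle", "distal"):
    c = rng.uniform(0.2, 1.0, 500)   # placeholder face concentrations
    a = rng.uniform(0.9, 1.1, 500)   # placeholder face areas
    print(seg, f"AWAC = {awac(c, a):.3f}")
```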

Increased mural drug deposition along the flow direction in a non-bifurcating arterial vessel.

Inset shows a high magnification image of drug pattern in the distal stent segment outlined by black dashed line.
The entire stent is divided into three equal sections denoted as proximal, middle and distal sections, respectively
and the same notation is followed for subsequent analyses.

http://dx.doi.org/10.1371/journal.pone.0008105.g002

These observations indicate that the flow-mediated effect induced by the presence of the stent in the artery

  • is maximal on the mural surface and
  • increases in the longitudinal flow direction.

Further, these results suggest that transmural diffusion-mediated transport sequesters drug from both

  • the proximal and distal portions of the stent
  • into the central segment of the arterial wall beneath the stent.

Predicted levels of average drug concentration varied exponentially with linear increments of inlet flow rate, but maintained a similar relationship between the inter-segment concentration levels within the stented region.

Stent position influences drug distribution in bifurcated beds

The location of the stent directly modulates

  • the extent to which drug is deposited on the arterial wall as well as
  • spatial gradients that are established in arterial drug distribution.

Similar to the non-bifurcating vessel case,

  • peaks in drug deposition occur directly beneath the stent struts regardless of the relative location of the SB with respect to the stent. However,
  • drug distribution and corresponding spatial heterogeneity within inter-strut regions depend on the stent location with respect to the flow divider.
  • Mural drug deposition is a function of relative stent position with respect to the side-branch and Reynolds number in arterial bifurcations.

Impact of flow on drug distribution in bifurcations

One can appreciate how blood flow and flow dividers affect arterial drug deposition, especially inter-strut drug deposition.

  • Drug deposition within the stented-region of MB  and the entire SB significantly decreases with flow acceleration regardless of stent placement.

Local endovascular drug delivery was long assumed to be governed by diffusion alone; the impact of flow was thought to be restricted to systemic dilution. Earlier 2D computational models had suggested a complex interplay between the stent and blood flow. The present simulations predicted:

  1. Arterial drug deposition is a function of stent location.   http://dx.doi.org/10.1371/journal.pone.0008105.g005
  2. Arterial drug deposition is mediated by flow in bifurcated beds.
    http://dx.doi.org/10.1371/journal.pone.0008105.g006
  3. There is extensive flow-mediated drug delivery in bifurcated vascular beds, where the drug distribution patterns are heterogeneous and sensitive to relative stent position and luminal flow.

A single DES in the MB, coupled with large retrograde luminal flow on the lateral wall of the side-branch (SB), can provide drug deposition on the SB lumen-wall interface, except when the MB stent is downstream of the SB flow divider. Conversely, the presence of the SB affects drug distribution in the stented MB, where fluid mechanic effects play an even greater role, especially when the DES lies across and downstream of the flow divider, in a manner dependent upon the Reynolds number.

Summary

We presented the hemodynamic effects on drug distribution patterns using a

  • simplified uniform-cell stent design, though our methodology is adaptable to
    several types of stents with variable design features.

Variability in arterial drug distribution due to other geometric and morphologic aspects such as

  • bifurcation angle, arterial taper as well as presence of a trifurcation can also be understood using our computational framework.

Further, performance of a candidate DES using other commonly used stenting procedures for bifurcation lesions such as culotte and crush techniques can be quantified based on their resulting drug distribution patterns.

Other Related Articles that were published on this Open Access Online Scientific Journal include the following:

Vascular Repair: Stents and Biologically Active Implants

Larry H Bernstein, MD, FACP and Aviva Lev-Ari, RN, PhD, 5/4/2013

Modeling Targeted Therapy

Larry H Bernstein, MD, FACP 3/2/2013

Quantum Biology And Computational Medicine

Larry H Bernstein, MD, FACP 4/3/2013

Virtual Biopsy – is it possible?

Larry H Bernstein, MD, FACP 3/3/2013

Reprogramming cell fate

Larry H Bernstein, MD, FACP 3/2/2013

How Methionine Imbalance with Sulfur-Insufficiency Leads to Hyperhomocysteinemia

Larry H Bernstein, MD, FACP 4/4/2013

Amyloidosis with Cardiomyopathy

Larry H Bernstein, MD, FACP 3/31/2013

Nitric Oxide, Platelets, Endothelium and Hemostasis

Larry H Bernstein, MD, FACP 11/8/2012

Mitochondrial Damage and Repair under Oxidative Stress

Larry H Bernstein, MD, FACP 10/28/2012

Endothelial Function and Cardiovascular Disease

Larry H Bernstein, MD, FACP 10/25/2012

Prostacyclin and Nitric Oxide: Adventures in Vascular Biology – A Tale of Two Mediators

Aviva Lev-Ari, RN, PhD, 4/30/2013

Genetics of Conduction Disease: Atrioventricular (AV) Conduction Disease (block): Gene Mutations – Transcription, Excitability, and Energy Homeostasis

Aviva Lev-Ari, PhD, 4/28/2013

http://pharmaceuticalintelligence.com/2013/04/28/genetics-of-conduction-disease-atrioventricular-av-conduction-disease-block-gene-mutations-transcription-excitability-and-energy-homeostasis/

Revascularization: PCI, Prior History of PCI vs CABG

Aviva Lev-Ari, PhD, 4/25/2013

http://pharmaceuticalintelligence.com/2013/04/25/revascularization-pci-prior-history-of-pci-vs-cabg/

Cholesteryl Ester Transfer Protein (CETP) Inhibitor: Potential of Anacetrapib to treat Atherosclerosis and CAD

Aviva Lev-Ari, PhD, RN 4/7/2013

http://pharmaceuticalintelligence.com/2013/04/07/cholesteryl-ester-transfer-protein-cetp-inhibitor-potential-of-anacetrapib-to-treat-atherosclerosis-and-cad/

Hypertriglyceridemia concurrent Hyperlipidemia: Vertical Density Gradient Ultracentrifugation a Better Test to Prevent Undertreatment of High-Risk Cardiac Patients

Aviva Lev-Ari, PhD, RN 4/4/2013

http://pharmaceuticalintelligence.com/2013/04/04/hypertriglyceridemia-concurrent-hyperlipidemia-vertical-density-gradient-ultracentrifugation-a-better-test-to-prevent-undertreatment-of-high-risk-cardiac-patients/

Fight against Atherosclerotic Cardiovascular Disease: A Biologics not a Small Molecule – Recombinant Human lecithin-cholesterol acyltransferase (rhLCAT) attracted AstraZeneca to acquire AlphaCore

Aviva Lev-Ari, PhD, RN 4/3/2013

http://pharmaceuticalintelligence.com/2013/04/03/fight-against-atherosclerotic-cardiovascular-disease-a-biologics-not-a-small-molecule-recombinant-human-lecithin-cholesterol-acyltransferase-rhlcat-attracted-astrazeneca-to-acquire-alphacore/

High-Density Lipoprotein (HDL): An Independent Predictor of Endothelial Function & Atherosclerosis, A Modulator, An Agonist, A Biomarker for Cardiovascular Risk

Aviva Lev-Ari, PhD, RN 3/31/2013

http://pharmaceuticalintelligence.com/2013/03/31/high-density-lipoprotein-hdl-an-independent-predictor-of-endothelial-function-artherosclerosis-a-modulator-an-agonist-a-biomarker-for-cardiovascular-risk/

Acute Chest Pain/ER Admission: Three Emerging Alternatives to Angiography and PCI

Aviva Lev-Ari, PhD, RN 3/10/2013

http://pharmaceuticalintelligence.com/2013/03/10/acute-chest-painer-admission-three-emerging-alternatives-to-angiography-and-pci/

Genomics & Genetics of Cardiovascular Disease Diagnoses: A Literature Survey of AHA’s Circulation Cardiovascular Genetics, 3/2010 – 3/2013

Lev-Ari, A. and L H Bernstein 3/7/2013

http://pharmaceuticalintelligence.com/2013/03/07/genomics-genetics-of-cardiovascular-disease-diagnoses-a-literature-survey-of-ahas-circulation-cardiovascular-genetics-32010-32013/

The Heart: Vasculature Protection – A Concept-based Pharmacological Therapy including THYMOSIN

Aviva Lev-Ari, PhD, RN 2/28/2013

http://pharmaceuticalintelligence.com/2013/02/28/the-heart-vasculature-protection-a-concept-based-pharmacological-therapy-including-thymosin/

Arteriogenesis and Cardiac Repair: Two Biomaterials – Injectable Thymosin beta4 and Myocardial Matrix Hydrogel

Aviva Lev-Ari, PhD, RN 2/27/2013

http://pharmaceuticalintelligence.com/2013/02/27/arteriogenesis-and-cardiac-repair-two-biomaterials-injectable-thymosin-beta4-and-myocardial-matrix-hydrogel/

Coronary artery disease in symptomatic patients referred for coronary angiography: Predicted by Serum Protein Profiles

Aviva Lev-Ari, PhD, RN 12/29/2012

http://pharmaceuticalintelligence.com/2012/12/29/coronary-artery-disease-in-symptomatic-patients-referred-for-coronary-angiography-predicted-by-serum-protein-profiles/

Special Considerations in Blood Lipoproteins, Viscosity, Assessment and Treatment

Bernstein, HL and Lev-Ari, A. 11/28/2012

http://pharmaceuticalintelligence.com/2012/11/28/special-considerations-in-blood-lipoproteins-viscosity-assessment-and-treatment/

Peroxisome proliferator-activated receptor (PPAR-gamma) Receptors Activation: PPARγ transrepression for Angiogenesis in Cardiovascular Disease and PPARγ transactivation for Treatment of Diabetes

Aviva Lev-Ari, PhD, RN 11/13/2012

http://pharmaceuticalintelligence.com/2012/11/13/peroxisome-proliferator-activated-receptor-ppar-gamma-receptors-activation-pparγ-transrepression-for-angiogenesis-in-cardiovascular-disease-and-pparγ-transactivation-for-treatment-of-dia/

Clinical Trials Results for Endothelin System: Pathophysiological role in Chronic Heart Failure, Acute Coronary Syndromes and MI – Marker of Disease Severity or Genetic Determination?

Aviva Lev-Ari, PhD, RN 10/19/2012

http://pharmaceuticalintelligence.com/2012/10/19/clinical-trials-results-for-endothelin-system-pathophysiological-role-in-chronic-heart-failure-acute-coronary-syndromes-and-mi-marker-of-disease-severity-or-genetic-determination/

Endothelin Receptors in Cardiovascular Diseases: The Role of eNOS Stimulation

Aviva Lev-Ari, PhD, RN 10/4/2012

http://pharmaceuticalintelligence.com/2012/10/04/endothelin-receptors-in-cardiovascular-diseases-the-role-of-enos-stimulation/

Inhibition of ET-1, ETA and ETA-ETB, Induction of NO production, stimulation of eNOS and Treatment Regime with PPAR-gamma agonists (TZD): cEPCs Endogenous Augmentation for Cardiovascular Risk Reduction – A Bibliography

Aviva Lev-Ari, PhD, RN 10/4/2012

http://pharmaceuticalintelligence.com/2012/10/04/inhibition-of-et-1-eta-and-eta-etb-induction-of-no-production-and-stimulation-of-enos-and-treatment-regime-with-ppar-gamma-agonists-tzd-cepcs-endogenous-augmentation-for-cardiovascular-risk-reduc/

Positioning a Therapeutic Concept for Endogenous Augmentation of cEPCs — Therapeutic Indications for Macrovascular Disease: Coronary, Cerebrovascular and Peripheral

Aviva Lev-Ari, PhD, RN 8/29/2012

http://pharmaceuticalintelligence.com/2012/08/29/positioning-a-therapeutic-concept-for-endogenous-augmentation-of-cepcs-therapeutic-indications-for-macrovascular-disease-coronary-cerebrovascular-and-peripheral/

Cardiovascular Outcomes: Function of circulating Endothelial Progenitor Cells (cEPCs): Exploring Pharmaco-therapy targeted at Endogenous Augmentation of cEPCs

Aviva Lev-Ari, PhD, RN 8/28/2012

http://pharmaceuticalintelligence.com/2012/08/28/cardiovascular-outcomes-function-of-circulating-endothelial-progenitor-cells-cepcs-exploring-pharmaco-therapy-targeted-at-endogenous-augmentation-of-cepcs/

Endothelial Dysfunction, Diminished Availability of cEPCs, Increasing CVD Risk for Macrovascular Disease – Therapeutic Potential of cEPCs

Aviva Lev-Ari, PhD, RN 8/27/2012

http://pharmaceuticalintelligence.com/2012/08/27/endothelial-dysfunction-diminished-availability-of-cepcs-increasing-cvd-risk-for-macrovascular-disease-therapeutic-potential-of-cepcs/

Vascular Medicine and Biology: CLASSIFICATION OF FAST ACTING THERAPY FOR PATIENTS AT HIGH RISK FOR MACROVASCULAR EVENTS Macrovascular Disease – Therapeutic Potential of cEPCs

Aviva Lev-Ari, PhD, RN 8/24/2012

http://pharmaceuticalintelligence.com/2012/08/24/vascular-medicine-and-biology-classification-of-fast-acting-therapy-for-patients-at-high-risk-for-macrovascular-events-macrovascular-disease-therapeutic-potential-of-cepcs/

Cardiovascular Disease (CVD) and the Role of agent alternatives in endothelial Nitric Oxide Synthase (eNOS) Activation and Nitric Oxide Production

Aviva Lev-Ari, PhD, RN 7/19/2012

http://pharmaceuticalintelligence.com/2012/07/19/cardiovascular-disease-cvd-and-the-role-of-agent-alternatives-in-endothelial-nitric-oxide-synthase-enos-activation-and-nitric-oxide-production/

Resident-cell-based Therapy in Human Ischaemic Heart Disease: Evolution in the PROMISE of Thymosin beta4 for Cardiac Repair

Aviva Lev-Ari, PhD, RN 4/30/2012

http://pharmaceuticalintelligence.com/2012/04/30/93/

Triple Antihypertensive Combination Therapy Significantly Lowers Blood Pressure in Hard-to-Treat Patients with Hypertension and Diabetes

Aviva Lev-Ari, PhD, RN 5/29/2012

http://pharmaceuticalintelligence.com/2012/05/29/445/

Macrovascular Disease – Therapeutic Potential of cEPCs: Reduction Methods for CV Risk

Aviva Lev-Ari, PhD, RN 7/2/2012

http://pharmaceuticalintelligence.com/2012/07/02/macrovascular-disease-therapeutic-potential-of-cepcs-reduction-methods-for-cv-risk/

Mitochondria Dysfunction and Cardiovascular Disease – Mitochondria: More than just the “powerhouse of the cell”

Aviva Lev-Ari, PhD, RN 7/9/2012

http://pharmaceuticalintelligence.com/2012/07/09/mitochondria-more-than-just-the-powerhouse-of-the-cell/

Bystolic’s generic Nebivolol – positive effect on circulating Endothelial Progenitor Cells endogenous augmentation

Aviva Lev-Ari, PhD, RN 7/16/2012

http://pharmaceuticalintelligence.com/2012/07/16/bystolics-generic-nebivolol-positive-effect-on-circulating-endothilial-progrnetor-cells-endogenous-augmentation/

Cardiac Surgery Theatre in China vs. in the US: Cardiac Repair Procedures, Medical Devices in Use, Technology in Hospitals, Surgeons’ Training and Cardiac Disease Severity

Aviva Lev-Ari, PhD, RN 1/8/2013

http://pharmaceuticalintelligence.com/2013/01/08/cardiac-surgery-theatre-in-china-vs-in-the-us-cardiac-repair-procedures-medical-devices-in-use-technology-in-hospitals-surgeons-training-and-cardiac-disease-severity/

Heart Remodeling by Design – Implantable Synchronized Cardiac Assist Device: Abiomed’s Symphony

Aviva Lev-Ari, PhD, RN 7/23/2012

http://pharmaceuticalintelligence.com/2012/07/23/heart-remodeling-by-design-implantable-synchronized-cardiac-assist-device-abiomeds-symphony/

Dilated Cardiomyopathy: Decisions on implantable cardioverter-defibrillators (ICDs) using left ventricular ejection fraction (LVEF) and Midwall Fibrosis: Decisions on Replacement using late gadolinium enhancement cardiovascular MR (LGE-CMR)

Aviva Lev-Ari, PhD, RN 3/10/2013
http://pharmaceuticalintelligence.com/2013/03/10/dilated-cardiomyopathy-decisions-on-implantable-cardioverter-defibrillators-icds-using-left-ventricular-ejection-fraction-lvef-and-midwall-fibrosis-decisions-on-replacement-using-late-gadolinium/

FDA Pending 510(k) for The Latest Cardiovascular Imaging Technology

Aviva Lev-Ari, PhD, RN 1/28/2013
http://pharmaceuticalintelligence.com/2013/01/28/fda-pending-510k-for-the-latest-cardiovascular-imaging-technology/

PCI Outcomes, Increased Ischemic Risk associated with Elevated Plasma Fibrinogen not Platelet Reactivity

Aviva Lev-Ari, PhD, RN 1/10/2013
http://pharmaceuticalintelligence.com/2013/01/10/pci-outcomes-increased-ischemic-risk-associated-with-elevated-plasma-fibrinogen-not-platelet-reactivity/

The ACUITY-PCI score: Will it Replace Four Established Risk Scores — TIMI, GRACE, SYNTAX, and Clinical SYNTAX

Aviva Lev-Ari, PhD, RN 1/3/2013
http://pharmaceuticalintelligence.com/2013/01/03/the-acuity-pci-score-will-it-replace-four-established-risk-scores-timi-grace-syntax-and-clinical-syntax/

Heart Renewal by pre-existing Cardiomyocytes: Source of New Heart Cell Growth Discovered

Aviva Lev-Ari, PhD, RN 12/23/2012
http://pharmaceuticalintelligence.com/2012/12/23/heart-renewal-by-pre-existing-cardiomyocytes-source-of-new-heart-cell-growth-discovered/

Cardiovascular Risk Inflammatory Marker: Risk Assessment for Coronary Heart Disease and Ischemic Stroke – Atherosclerosis.

Aviva Lev-Ari, PhD, RN 10/30/2012
http://pharmaceuticalintelligence.com/2012/10/30/cardiovascular-risk-inflammatory-marker-risk-assessment-for-coronary-heart-disease-and-ischemic-stroke-atherosclerosis/

To Stent or Not? A Critical Decision

Aviva Lev-Ari, PhD, RN 10/23/2012
http://pharmaceuticalintelligence.com/2012/10/23/to-stent-or-not-a-critical-decision/

New Definition of MI Unveiled, Fractional Flow Reserve (FFR)CT for Tagging Ischemia

Aviva Lev-Ari, PhD, RN 8/27/2012
http://pharmaceuticalintelligence.com/2012/08/27/new-definition-of-mi-unveiled-fractional-flow-reserve-ffrct-for-tagging-ischemia/

Ethical Considerations in Studying Drug Safety — The Institute of Medicine Report

Aviva Lev-Ari, PhD, RN 8/23/2012
http://pharmaceuticalintelligence.com/2012/08/23/ethical-considerations-in-studying-drug-safety-the-institute-of-medicine-report/

New Drug-Eluting Stent Works Well in STEMI

Aviva Lev-Ari, PhD, RN 8/22/2012
http://pharmaceuticalintelligence.com/2012/08/22/new-drug-eluting-stent-works-well-in-stemi/

Expected New Trends in Cardiology and Cardiovascular Medical Devices

Aviva Lev-Ari, PhD, RN 8/17/2012
http://pharmaceuticalintelligence.com/2012/08/17/expected-new-trends-in-cardiology-and-cardiovascular-medical-devices/

Coronary Artery Disease – Medical Devices Solutions: From First-In-Man Stent Implantation, via Medical Ethical Dilemmas to Drug Eluting Stents

Aviva Lev-Ari, PhD, RN 8/13/2012

http://pharmaceuticalintelligence.com/2012/08/13/coronary-artery-disease-medical-devices-solutions-from-first-in-man-stent-implantation-via-medical-ethical-dilemmas-to-drug-eluting-stents/

Percutaneous Endocardial Ablation of Scar-Related Ventricular Tachycardia

Aviva Lev-Ari, PhD, RN 7/18/2012

http://pharmaceuticalintelligence.com/2012/07/18/percutaneous-endocardial-ablation-of-scar-related-ventricular-tachycardia/

Competition in the Ecosystem of Medical Devices in Cardiac and Vascular Repair: Heart Valves, Stents, Catheterization Tools and Kits for Open Heart and Minimally Invasive Surgery (MIS)

Aviva Lev-Ari, PhD, RN 6/22/2012

http://pharmaceuticalintelligence.com/2012/06/22/competition-in-the-ecosystem-of-medical-devices-in-cardiac-and-vascular-repair-heart-valves-stents-catheterization-tools-and-kits-for-open-heart-and-minimally-invasive-surgery-mis/

Global Supplier Strategy for Market Penetration & Partnership Options (Niche Suppliers vs. National Leaders) in the Massachusetts Cardiology & Vascular Surgery Tools and Devices Market for Cardiac Operating Rooms and Angioplasty Suites

Aviva Lev-Ari, PhD, RN 6/22/2012

http://pharmaceuticalintelligence.com/2012/06/22/global-supplier-strategy-for-market-penetration-partnership-options-niche-suppliers-vs-national-leaders-in-the-massachusetts-cardiology-vascular-surgery-tools-and-devices-market-for-car/


Read Full Post »

Clinical Decision Support Systems for Management Decision Making of Cardiovascular Diseases

Author, and Content Consultant to e-SERIES A: Cardiovascular Diseases: Justin Pearlman, MD, PhD, FACC

and

Curator: Aviva Lev-Ari, PhD, RN

Clinical Decision Support Systems (CDSS)

A clinical decision support system (CDSS) is an interactive decision support system (DSS). It generally relies on computer software designed to assist physicians and other health professionals with decision-making tasks, such as settling on a particular diagnosis or choosing further specific tests or treatments. A functional definition proposed by Robert Hayward of the Centre for Health Evidence defines CDSS as follows: “Clinical Decision Support systems link health observations with health knowledge to influence health choices by clinicians for improved health care.” CDSS is a major topic in artificial intelligence in medicine.

Vinod Khosla, founder of Khosla Ventures, wrote about CDSS as a harbinger of science in medicine in a Fortune Magazine article, “Technology will replace 80% of what doctors do,” on December 4, 2012.

Computer-assisted decision support is in its infancy, but we have already begun to see meaningful impact on healthcare. Meaningful use of computer systems is now rewarded under federal health IT incentive programs. Studies have demonstrated the ability of computerized clinical decision support systems to lower diagnostic errors of omission significantly, by directly countering cognitive bias. Isabel is a differential diagnosis tool and, according to a Stony Brook study, matched the diagnoses of experienced clinicians in 74% of complex cases. The system improved to a 95% match after a more rigorous entry of patient data. The IBM supercomputer, Watson, after beating all human contestants at the intelligence-based task of playing Jeopardy, is now turning its attention to medical diagnosis. It can process natural-language questions and is fast at parsing high volumes of medical information, reading and understanding 200 million pages of text in 3 seconds.

Examples of CDSS

  1. CADUCEUS
  2. DiagnosisPro
  3. Dxplain
  4. MYCIN
  5. RODIA

VIEW VIDEO

“When Should a Physician Deviate from the Diagnostic Decision Support Tool and What Are the Associated Risks?”

Introduction

Justin D. Pearlman, MD, PhD

A Decision Support System consists of one or more tools to help achieve good decisions. For example, decisions that can benefit from DSS include whether or not to undergo surgery, whether or not to undergo a stress test first, whether or not to have an annual mammogram starting at a particular age, or a computed tomography (CT) to screen for lung cancer, whether or not to utilize intensive care support such as a ventilator, chest shocks, chest compressions, forced feeding, strong antibiotics and so on versus care directed to comfort measures only without regard to longevity.

Any DSS can be viewed like a digestive tract, chewing on input, and producing output, and like the digestive tract, the output may only be valuable to a farmer. A well designed DSS is efficient in the input, timely in its processing and useful in the output. Mathematically, a DSS is a model with input parameters and an output variable or set of variables that can be used to determine an action. The input can be categorical (alive, dead), semi-quantitative (cold-warm-hot), or quantitative (temperature, systolic blood pressure, heart rate, oxygen saturation). The output can be binary (yes-no) or it can express probabilities or confidence intervals.

The process of defining specifications for a function and then deriving a useful function is called mathematical modeling. We will derive the function for “average” as an example. By way of specifications, we want to take a list of numbers as input and come out with a single number that represents the middle of the pack, or “central tendency.” The order of the list should not matter, and if we change scales, the output should scale the same way. For example, if we use centimeters instead of inches, at 2.54 centimeters to an inch, then the output should increase by the multiplier 2.54. If the numbers in the list are all the same, then the output should be that same value. Representing these specifications symbolically:

1. order doesn’t matter: f(a,b) = f(b,a), where “a” and “b” are input values, “f” is the function.

2. multipliers pass through (linearity):  f(ka,kb)=k f(a,b), where k is a scalar e.g. 2.54 cm/inch.

3. identity:  f(a,a,a,…) = a

Properties 1 and 2 lead us to consider linear functions consisting of sums and multipliers: f(a,b,c)=Aa+Bb+Cc …, where the capital letters are multipliers by “constants” – numbers that are independent of the list values a,b,c, and since the order should not matter, we simplify to f(a,b,c)=K (a+b+c+…) because a constant multiplier K makes order not matter. Property 3 forces us to pick K = 1/N where N is the length of the list. These properties lead us to the mathematical solution: average = sum of list of numbers divided by the length of the list.
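
The derivation above can be checked directly. Below is a minimal sketch (not from the original text) that implements the average and verifies the three specifications; the numbers are arbitrary illustrations:

```python
# A minimal sketch checking that the arithmetic mean satisfies the three
# specifications derived above.

def average(values):
    """Central tendency: sum of the list divided by its length."""
    return sum(values) / len(values)

# 1. Order doesn't matter: f(a, b) == f(b, a)
assert average([3, 7]) == average([7, 3])

# 2. Multipliers pass through (linearity): f(k*a, k*b) == k * f(a, b)
inches = [2.0, 4.0, 6.0]
cm = [2.54 * x for x in inches]            # 2.54 cm per inch
assert abs(average(cm) - 2.54 * average(inches)) < 1e-12

# 3. Identity: f(a, a, a) == a
assert average([5, 5, 5]) == 5
```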

A coin flip is a simple DSS: heads I do it, tails I don’t. The challenge of a good DSS is to perform better than random choice and also perform better (more accurately, more efficiently, more reliably, more timely and/or under more adverse conditions) than unassisted human decision making.

Therefore, I propose the following guiding principles for DSS design: choose inputs wisely (accessible, timely, efficient, relevant), determine to what you want output to be sensitive AND to what you want output to be insensitive, and be very clear about your measures of success.

For example, consider designing a DSS to determine whether a patient should receive the full range of support capabilities of an intensive care unit (ICU), or not. Politicians have cited the large bump in the cost of the last year of life as an opportunity to reduce costs of healthcare, and now pay primary care doctors to encourage patients to establish advance directives not to use ICU services. From the DSS standpoint, the reasoning is flawed, because the decision not to use ICU services should be sensitive to benefit as well as cost, commonly called cost-benefit analysis. If we measure the success of ICU services by the benefit of quality life net gain (QLNG, “quailing”), measured in quality life-years (QuaLYs), and achieve 50% success with that, then the cost per QuaLY measures the cost-benefit of ICU services. In various cost-benefit decisions, the US Congress has decided to proceed if the cost is under $20,000–$100,000/QuaLY. If ICU services are achieving such a cost-benefit, then it is not logical to summarily block such services in advance. Rather, the ways to reduce those costs include improving the cost efficiency of ICU care and improving the decision-making of who will benefit.

An example of a DSS is the prediction of plane failure from a thousand measurements of strain and function of various parts of an airplane. The desired output is probability of failure to complete the next flight safely. Cost-Benefit analysis then establishes what threshold or operating point merits grounding the plane for further inspection and preventative maintenance repairs. If a DSS reports probability of failure, then the decision (to ground the plane) needs to establish a threshold at which a certain probability triggers the decision to ground the plane.

The notion of an operating point brings up another important concept in decision support. At first blush, one might think the success of a DSS is determined by its ability to correctly identify a predicted outcome, such as futility of ICU care (when the end result will be no quality life net gain). The flaw in that measure of success is that it depends on prevalence in the study group. As an extreme example, if you study a group of patients with fatal gunshot wounds to the head, none will benefit, the DSS requirement is trivial, and any DSS that says no for that group has performed well. At the other extreme, if all patients become healthy, the DSS requirement is also trivial: just say yes. Therefore the proper assessment of a DSS should pay attention to the prevalence and the operating point.

The impact of prevalence and operating point on decision-making is addressed by receiver-operator curves. Consider looking at the blood concentration of Troponin-I (TnI) as the sole determinant to decide who is having a heart attack.  If one plots a graph with horizontal axis troponin level and vertical axis ultimate proof of heart attack, the percentage of hits will generally be higher for higher values of TnI. To create such a graph, we compute a “truth table” which reports whether the test was above or below a decision threshold operating point, and whether or not the disease (heart attack) was in fact present:

TRUTH TABLE

                   Disease        Not Disease
Test Positive      TP             FP
Test Negative      FN             TN
Total              TP+FN          FP+TN

The sensitivity to the disease is the true positive rate (TPR), the percentage of all disease cases that are ranked by the decision support as positive: TPR = TP/(TP+FN). 100% sensitivity can be achieved trivially by lowering the threshold for a positive test to zero, at a cost.  While sensitivity is necessary for success it is not sufficient. In addition to wanting sensitivity to disease, we want to avoid labeling non-disease as disease. That is often measured by specificity, the true negative rate (TNR), the percentage of those without disease who are correctly identified as not having disease: TNR = TN/(FP+TN). I propose also we define the complement to specificity, the anti-sensitivity, as the false positive rate (FPR), FPR = FP/(FP+TN) = 1 – TNR. Anti-sensitivity is a penalty cost of lowering the diagnostic threshold to boost sensitivity, as the concomitant rise in anti-sensitivity means a growing number of non-disease subjects are labeled as having disease. We want high sensitivity to true disease without high anti-sensitivity to false disease, and we want to be insensitive to common distractors. In these formulas, note that false negatives (FN) are True for disease, and false positives (FP) are False for disease, so the denominators add FN to TP for total True disease, and add FP to TN for total False for disease.
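
As a concrete illustration of these formulas, here is a short sketch computing sensitivity, specificity and anti-sensitivity from truth-table counts; the counts are invented for illustration:

```python
# Sketch using the truth-table quantities defined above.
# TP, FP, FN, TN are illustrative numbers, not data from the article.

TP, FP, FN, TN = 90, 40, 10, 360

sensitivity = TP / (TP + FN)           # true positive rate, TPR
specificity = TN / (FP + TN)           # true negative rate, TNR
anti_sensitivity = FP / (FP + TN)      # false positive rate, FPR = 1 - TNR

assert abs(anti_sensitivity - (1 - specificity)) < 1e-12

print(f"sensitivity      = {sensitivity:.2f}")        # 0.90
print(f"specificity      = {specificity:.2f}")        # 0.90
print(f"anti-sensitivity = {anti_sensitivity:.2f}")   # 0.10
```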

The graph in figure 1 justifies the definition of anti-sensitivity. It is an ROC or “Receiver-Operator Curve” which is a plot of sensitivity versus anti-sensitivity for different diagnostic thresholds of a test (operating points). Note, higher sensitivity comes at the cost of higher anti-sensitivity. Where to operate (what threshold to use for diagnosis) can be selected according to cost-benefit analysis of sensitivity versus anti-sensitivity (and specificity).

Figure 1 ROC (Receiver-Operator Curve): Graph of sensitivity (true positive rate) versus anti-sensitivity (false positive rate) computed by changing the operating point (threshold for declaring a test numeric value positive for disease). High area under the curve (AUC) is favorable because it means less anti-sensitivity for high sensitivity (upper left corner of shaded area more to the left, and higher). The dots on the curve are operating points. An inclusive operating point (high on the curve, high sensitivity) is used for screening tests, whereas an exclusive operating point (low on the curve, low anti-sensitivity) is used for definitive diagnosis.
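
An ROC curve of this kind can be traced by sweeping the operating point across all candidate thresholds. The sketch below illustrates the idea on made-up scores and labels (not patient data); each threshold yields one (anti-sensitivity, sensitivity) point:

```python
# Hypothetical sketch: sweep diagnostic thresholds over troponin-like
# test scores to trace an ROC curve. Scores and labels are invented.

scores = [0.1, 0.2, 0.3, 0.35, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]  # test values
labels = [0,   0,   0,   1,    0,   1,   0,   1,   1,   1]    # 1 = disease

def roc_points(scores, labels):
    """One (FPR, TPR) operating point per candidate threshold."""
    points = []
    for threshold in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
        tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
        points.append((fp / (fp + tn), tp / (tp + fn)))  # (anti-sensitivity, sensitivity)
    return points

for fpr, tpr in roc_points(scores, labels):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```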

Cost-benefit analysis generally is based on a semi-lattice, or upside-down branching tree, which represents all choices and outcomes. It is important to include all branches down to final outcomes. For example, if the test is a mammogram to screen for breast cancer, the cost is not just the cost of the test, nor is the benefit just “early diagnosis.” The cost-benefit calculation forces us to put a numerical value on the impact, such as a financial cost on an avoidable death, or we can get a numerical result in terms of quality life years expected. The cost, however, is not just the cost of the mammogram, but also of downstream events, such as the cost of the needle biopsies for the suspicious “positives” and so on.

Figure 2 Semi-lattice Decision Tree: Starting from all patients, create a branch point for your test result, and add further branch points for any subsequent step-wise outcomes until you reach the “bottom line.” Assign a value to each, resulting in a numerical net cost and net benefit. If tests invoke risks (for example, needle biopsy of lung can collapse a lung and require hospitalization for a chest tube), then insert branch points for whether the complication occurs or not, as the treatment of a complication counts as part of the cost. The intermediary nodes can have probability of occurrence as their numeric factor, and the bottom line can apply the net probability of the path leading to a value as a multiplier to the dollar value (a 10% chance of costing $10,000 counts as an expectation cost of 0.1 x 10,000 = $1,000).
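
The bottom-line arithmetic described in the caption can be written out explicitly. This sketch, with invented probabilities and costs, sums the expected cost over the leaves of a small semi-lattice:

```python
# Sketch of the expected-cost calculation from Figure 2: each leaf's
# expected cost is the product of the probabilities along its path and
# its dollar value. Probabilities and costs are invented for illustration.

branches = [
    # (path probability, cost in dollars, description)
    (0.90, 150,    "screening test, negative result"),
    (0.08, 2_000,  "positive result -> needle biopsy, no complication"),
    (0.02, 12_000, "positive result -> biopsy complication -> chest tube"),
]

expected_cost = sum(p * cost for p, cost, _ in branches)
print(f"expected cost per patient: ${expected_cost:,.2f}")

# The worked example from the caption: a 10% chance of costing $10,000
# counts as an expectation cost of $1,000.
assert 0.1 * 10_000 == 1_000
```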

A third area of discussion is the statistical power of a DSS – how reliable is it in the application that you care about? DSS design commonly runs contrary to standard statistical applications, which address the significance of a deviation in a small number of variables measured many times in a large population. Instead, a DSS often uses many variables to fully describe or characterize the status of a small population. For example, thousands of different measurements may be performed on a few dozen airplanes, aiming to predict when a plane should be grounded for repairs. A similar inversion of numbers – numerous variables, small number of cases – is common in genomics studies.

The success of a DSS is measured by its predictive value compared to outcomes or other measures of success. Thus measures of success include positive predictive value, negative predictive value, and confidence. A major problem with DSS is the inversion of the usually desired ratio of repetitions to measurement variables. When you get a single medical lab test, you have a single measurement value, such as potassium level, and a large number of normal subjects for comparison. If we knew the mean μ and standard deviation σ that describe the distribution of normal values in the population at large, then we could compute the confidence in the decision to call our observed value abnormal based on the normal distribution:

f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}

A value may be deemed distinctive based on a 95% confidence interval if it falls outside of the norm, say by more than twice the standard deviation σ, thereby establishing that it is unlikely to be random as the distance from the mean excludes 95% of the normal distribution.

The determination of confidence in an observed set of results stems from maximum likelihood estimates. Earlier in this article we described how to derive the mean, or center, of a set of measurements. A similar analysis can derive the standard deviation (square root of variance) as a measure of spread around the mean, as well as other descriptive statistics based on sample values. These formulas describe the distribution of sample values about the mean. The calculation is based on a simple inversion. If we knew the mean and variance of a population of values for a measurement, we could calculate the likelihood of each new measurement falling a particular distance from the mean, and we could calculate the combined likelihood for a set of observed values. Maximum Likelihood Estimation (MLE) simply inverts the method of calculation. Instead of treating the mean and variance as known, we treat the N sample observations x_i as the known data; applying calculus to maximize the joint likelihood of the observations under the frequency distribution above yields the following formulas for the unknown mean and unknown variance:

\sigma = \sqrt{\frac{1}{N}\left[(x_1-\mu)^2 + (x_2-\mu)^2 + \cdots + (x_N-\mu)^2\right]}, \quad \text{where } \mu = \frac{1}{N}(x_1 + \cdots + x_N)

The frequency distribution (a function of mean and spread) reports the frequency of observing x if it is drawn from a population with the specified mean μ and standard deviation σ . We can invert that by treating the observations, x, as known and the mean μ and standard deviation σ unknown, then calculate the values μ and  σ that maximize the likelihood of our sample set as coming from the dynamically described population.
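
A short sketch of this inversion, using invented sample values: compute the maximum-likelihood estimates of μ and σ from the samples, then flag a new observation that falls more than two standard deviations from the mean:

```python
# Sketch of the maximum-likelihood estimates derived above: treat the
# sample as known and solve for the mean and standard deviation that
# maximize the joint likelihood under a normal model. Values are invented.

import math

samples = [3.9, 4.1, 4.0, 4.3, 3.8, 4.2, 4.1, 3.9]   # e.g. potassium levels

N = len(samples)
mu = sum(samples) / N                                        # MLE of the mean
sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / N)   # MLE of sigma

def normal_pdf(x):
    """Frequency distribution f(x) with the estimated mu and sigma."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Flag a new observation as distinctive if it falls more than 2 sigma
# from the mean (roughly the 95% confidence criterion in the text).
new_value = 5.4
if abs(new_value - mu) > 2 * sigma:
    print(f"{new_value} is outside mu +/- 2 sigma ({mu:.2f} +/- {2 * sigma:.2f})")
```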

In DSS there is typically an inversion of the usually requisite number of samples (small instead of large) and number of variables (large instead of small). This inversion has major consequences for data confidence. If you measure 14 independent variables instead of one variable, each at 95% confidence, the net confidence drops exponentially to less than 50%: 0.95^14 ≈ 49%. In the airplane grounding screen tests, 1000 independent variables, at 95% confidence each, yield a net confidence of only 5 × 10^-23, which is 10 sextillion times less than 50% confidence. This same problem arises in genomics research, in which we have a large array of gene product measurements on a small number of patients. Standard statistical tools are problematic at high variable counts. One can turn to qualitative grouping tools such as exploratory factor analysis, or recover statistical robustness with HykGene, a combined clustering and ranking method devised by the author to improve dramatically the ability to identify distinctions with confidence when the number of variables is high.
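
The confidence-decay arithmetic is easy to reproduce:

```python
# Sketch of the exponential confidence decay described above: the joint
# confidence of many independent variables, each at 95%, shrinks fast.

per_variable = 0.95
print(f"14 variables:   {per_variable ** 14:.2f}")     # ~0.49
print(f"1000 variables: {per_variable ** 1000:.1e}")   # ~5e-23
```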

Evolution of DSS

Aviva Lev-Ari, PhD, RN

The examples provided above refer to sets of binary models, one family of DSS. Another type of DSS is multivariate in nature, in which multivariate scenarios constitute the alternative choice options. Development in the DSS field over the last decade has involved the design of Recommendation Engines, given manifested preference functions that involve simultaneous trade-offs against a cost function. A game-theoretic context is embedded in Recommendation Engines. The output mentioned above is, in fact, an array of options, each with a probability of reward assigned by the Recommendation Engine.

Underlying Computation Engines

Methodological Basis of Clinical DSS

There are many different methodologies that can be used by a CDSS in order to provide support to the health care professional.[7]

The basic components of a CDSS include a dynamic (medical) knowledge base and an inference mechanism (usually a set of rules derived from experts and evidence-based medicine), often implemented through medical logic modules based on a language such as Arden syntax. A CDSS can be based on expert systems, artificial neural networks, or both (connectionist expert systems).

Bayesian Network

The Bayesian network is a knowledge-based graphical representation that shows a set of variables and their probabilistic relationships, for example between diseases and symptoms. Such networks are based on conditional probabilities, the probability of an event given the occurrence of another event, such as the interpretation of diagnostic tests. Bayes’ rule helps us compute the probability of an event with the help of more readily available information, and it consistently reprocesses the options as new evidence is presented. In the context of CDSS, the Bayesian network can be used to compute the probabilities of the presence of the possible diseases given their symptoms.
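
A minimal sketch of the Bayes’ rule computation at the heart of such systems, with illustrative (not clinical) numbers for prevalence, sensitivity and specificity:

```python
# Sketch of the Bayesian computation a CDSS performs: posterior probability
# of disease given a positive test, from prevalence, sensitivity and
# specificity. All three numbers are illustrative assumptions.

prevalence  = 0.02   # P(disease)
sensitivity = 0.90   # P(test+ | disease)
specificity = 0.95   # P(test- | no disease)

p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
posterior = sensitivity * prevalence / p_pos   # Bayes' rule

print(f"P(disease | test+) = {posterior:.2f}")  # ~0.27
```

Note how a positive result on a fairly accurate test still yields a modest posterior when the disease is rare; this is exactly the prevalence effect discussed earlier.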

Some of the advantages of Bayesian networks are that they capture the knowledge and conclusions of experts in the form of probabilities, assist decision making as new information becomes available, and are based on unbiased probabilities that are applicable to many models.

Some of the disadvantages of Bayesian networks are the difficulty of obtaining the probability knowledge for each possible diagnosis and their impracticality for large, complex systems with multiple symptoms. The Bayesian calculations on multiple simultaneous symptoms could be overwhelming for users.

An example of a Bayesian network in the CDSS context is the Iliad system, which makes use of Bayesian reasoning to calculate posterior probabilities of possible diagnoses depending on the symptoms provided. The system now covers about 1500 diagnoses based on thousands of findings.

Another example is the DXplain system that uses a modified form of the Bayesian logic. This CDSS produces a list of ranked diagnoses associated with the symptoms.

A third example is SimulConsult, which began in the area of neurogenetics. By the end of 2010 it covered ~2,600 diseases in neurology and genetics, or roughly 25% of known diagnoses. It addresses the core issue of Bayesian systems, that of a scalable way to input data and calculate probabilities, by focusing specialty by specialty and achieving completeness. Such completeness allows the system to calculate the relative probabilities, rather than the person inputting the data. Using the peer-reviewed medical literature as its source, and applying two levels of peer-review to the data entries, SimulConsult can add a disease with less than a total of four hours of clinician time. It is widely used by pediatric neurologists today in the US and in 85 countries around the world.

Neural Network

An Artificial Neural Network (ANN) is a nonknowledge-based, adaptive CDSS that uses a form of artificial intelligence, also known as machine learning, which allows the system to learn from past experiences and examples and to recognize patterns in clinical information. It consists of nodes called neurons and weighted connections that transmit signals between the neurons in a forward or looped fashion. An ANN consists of 3 main layers: Input (data receiver or findings), Output (communicates results or possible diseases) and Hidden (processes data). The system becomes more efficient with known results for large amounts of data.
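
A minimal sketch of this three-layer structure, with invented fixed weights (a real ANN learns its weights from past cases):

```python
# Sketch of the input / hidden / output structure described above. The
# weights are fixed and invented for illustration; a trained ANN derives
# them from past examples.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights):
    """Each neuron: weighted sum of its inputs passed through an activation."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]

findings = [1.0, 0.0, 1.0]        # e.g. encoded symptoms / lab flags (input layer)

w_hidden = [[0.8, -0.4, 0.3],     # 2 hidden neurons x 3 inputs
            [-0.2, 0.9, 0.5]]
w_output = [[1.2, -0.7],          # 2 disease scores x 2 hidden neurons
            [-0.6, 1.1]]

hidden = layer(findings, w_hidden)   # hidden layer processes the data
scores = layer(hidden, w_output)     # output layer: possible-disease scores
print(f"disease likelihood scores: {[round(s, 2) for s in scores]}")
```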

The advantages of ANNs include eliminating the need to program the system with expert rules or to obtain direct input from experts. An ANN-based CDSS can process incomplete data by making educated guesses about missing values, and it improves with every use owing to its adaptive learning. Additionally, ANN systems do not require large databases to store outcome data with associated probabilities. Some of the disadvantages are that the training process may be time consuming, leading users not to make use of the systems effectively, and that ANN systems derive their own formulas for weighting and combining data based on statistical patterns recognized over time, which may be difficult to interpret and may cast doubt on the system’s reliability.

Examples include the diagnosis of appendicitis, back pain, myocardial infarction, psychiatric emergencies and skin disorders. The ANN’s diagnostic predictions of pulmonary embolism were in some cases even better than physicians’ predictions. Additionally, ANN-based applications have been useful in the analysis of ECG (a.k.a. EKG) waveforms.

Genetic Algorithms

Genetic Algorithms (GA) are a nonknowledge-based method, rooted in John Holland’s work on adaptive systems at the University of Michigan in the 1960s and 1970s and based on Darwin’s evolutionary theory of survival of the fittest. These algorithms rearrange and recombine candidate solutions to form new combinations that are better than the previous solutions. Similar to neural networks, genetic algorithms derive their information from patient data.

An advantage of genetic algorithms is that these systems go through an iterative process to produce an optimal solution: the fitness function determines which solutions are good and which can be eliminated. A disadvantage is the lack of transparency in the reasoning of the decision support system, which makes it undesirable to physicians. The main challenge in using genetic algorithms is defining the fitness criteria. In order to use a genetic algorithm, there must be many components, such as multiple drugs, symptoms, treatment therapies and so on, available in order to solve a problem. Genetic algorithms have proved to be useful in the diagnosis of female urinary incontinence.
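
A toy sketch of the iterative GA loop described above; the fitness target is an arbitrary bit pattern, not clinical data:

```python
# Sketch of the GA cycle: a fitness function ranks candidates, the fittest
# survive, and recombination plus mutation produce the next generation.

import random

random.seed(1)
TARGET = [1, 0, 1, 1, 0, 1, 0, 1]      # toy "optimal" bit pattern

def fitness(candidate):
    """Count positions matching the target: higher is fitter."""
    return sum(1 for c, t in zip(candidate, TARGET) if c == t)

def crossover(a, b):
    """Recombine two parents at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(candidate, rate=0.05):
    """Flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                                    # optimal solution found
    survivors = population[:10]                  # survival of the fittest
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(10)]
    population = survivors + children

population.sort(key=fitness, reverse=True)
print(f"best candidate after {generation + 1} generations: {population[0]}")
```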

Rule-Based System

A rule-based expert system attempts to capture knowledge of domain experts into expressions that can be evaluated known as rules; an example rule might read, “If the patient has high blood pressure, he or she is at risk for a stroke.” Once enough of these rules have been compiled into a rule base, the current working knowledge will be evaluated against the rule base by chaining rules together until a conclusion is reached. Some of the advantages of a rule-based expert system are the fact that it makes it easy to store a large amount of information, and coming up with the rules will help to clarify the logic used in the decision-making process. However, it can be difficult for an expert to transfer their knowledge into distinct rules, and many rules can be required for a system to be effective.
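
A minimal forward-chaining sketch over two illustrative rules (not clinical guidance):

```python
# Sketch of rule chaining: fire rules whose conditions are satisfied until
# no new conclusions appear. The rules below are invented illustrations.

rules = [
    ({"high blood pressure"}, "at risk for stroke"),
    ({"at risk for stroke", "atrial fibrillation"}, "flag for anticoagulation review"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules, adding conclusions, until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"high blood pressure", "atrial fibrillation"}, rules))
```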

Rule-based systems can aid physicians in many different areas, including diagnosis and treatment. An example of a rule-based expert system in the clinical setting is MYCIN. Developed at Stanford University by Edward Shortliffe in the 1970s, MYCIN was based on around 600 rules and was used to help identify the type of bacteria causing an infection. While useful, MYCIN also demonstrates the scale these systems require: a rule base of roughly 600 rules covered only a narrow problem space.

The Stanford AI group subsequently developed ONCOCIN, another rules-based expert system coded in Lisp in the early 1980s.[8] The system was intended to reduce the number of clinical trial protocol violations, and reduce the time required to make decisions about the timing and dosing of chemotherapy in late phase clinical trials. As with MYCIN, the domain of medical knowledge addressed by ONCOCIN was limited in scope and consisted of a series of eligibility criteria, laboratory values, and diagnostic testing and chemotherapy treatment protocols that could be translated into unambiguous rules. Oncocin was put into production in the Stanford Oncology Clinic.

Logical Condition

The methodology behind logical condition is fairly simplistic; given a variable and a bound, check to see if the variable is within or outside of the bounds and take action based on the result. An example statement might be “Is the patient’s heart rate less than 50 BPM?” It is possible to link multiple statements together to form more complex conditions. Technology such as a decision table can be used to provide an easy to analyze representation of these statements.
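
A sketch of such threshold checks arranged as a small decision table; the thresholds and vitals are illustrative, not clinical limits:

```python
# Sketch of logical conditions as a decision table: each row pairs a
# description with a bound check on a variable. All numbers are invented.

alerts = [
    # (description, predicate over a vitals dict)
    ("heart rate below 50 BPM",  lambda v: v["heart_rate"] < 50),
    ("SpO2 below 90%",           lambda v: v["spo2"] < 90),
    ("systolic BP above 180",    lambda v: v["systolic_bp"] > 180),
]

vitals = {"heart_rate": 46, "spo2": 97, "systolic_bp": 130}

for description, condition in alerts:
    if condition(vitals):
        print(f"ALERT: {description}")
```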

In the clinical setting, logical conditions are primarily used to provide alerts and reminders to individuals across the care domain. For example, an alert may warn an anesthesiologist that their patient’s heart rate is too low; a reminder could tell a nurse to isolate a patient based on their health condition; finally, another reminder could tell a doctor to make sure he discusses smoking cessation with his patient. Alerts and reminders have been shown to help increase physician compliance with many different guidelines; however, the risk exists that creating too many alerts and reminders could overwhelm doctors, nurses, and other staff and cause them to ignore the alerts altogether.

Causal Probabilistic Network

The primary basis behind the causal network methodology is cause and effect. In a clinical causal probabilistic network, nodes are used to represent items such as symptoms, patient states or disease categories. Connections between nodes indicate a cause and effect relationship. A system based on this logic will attempt to trace a path from symptom nodes all the way to disease classification nodes, using probability to determine which path is the best fit. Some of the advantages of this approach are the fact that it helps to model the progression of a disease over time and the interaction between diseases; however, it is not always the case that medical knowledge knows exactly what causes certain symptoms, and it can be difficult to choose what level of detail to build the model to.

The first clinical decision support system to use a causal probabilistic network was CASNET, used to assist in the diagnosis of glaucoma. CASNET featured a hierarchical representation of knowledge, splitting all of its nodes into one of three separate tiers: symptoms, states and diseases.

  1. “Decision support systems.” OpenClinical, 26 July 2005. <http://www.openclinical.org/dss.html>. Accessed 17 Feb. 2009.
  2. Berner, Eta S., ed. Clinical Decision Support Systems. New York, NY: Springer, 2007.
  3. Khosla, Vinod. “Technology will replace 80% of what doctors do.” Fortune, December 4, 2012. Retrieved April 25, 2013.
  4. Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, Beyene J, et al. (2005). “Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review.” JAMA 293(10): 1223–38. doi:10.1001/jama.293.10.1223. PMID 15755945.
  5. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. (2005). “Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success.” BMJ 330(7494): 765. doi:10.1136/bmj.38398.500764.8F. PMC 555881. PMID 15767266.
  6. Gluud C, Nikolova D (2007). “Likely country of origin in publications on randomised controlled trials and controlled clinical trials during the last 60 years.” Trials 8: 7. doi:10.1186/1745-6215-8-7. PMC 1808475. PMID 17326823.
  7. Wagholikar K, et al. “Modeling Paradigms for Medical Diagnostic Decision Support: A Survey and Future Directions.” Journal of Medical Systems. Retrieved 2012.
  8. Shortliffe EH, Scott AC, Bischoff MB, Campbell AB, van Melle W, Jacobs CD. “ONCOCIN: An expert system for oncology protocol management.” Seventh International Joint Conference on Artificial Intelligence, Vancouver, B.C., 1981.

SOURCE for Computation Engines Section and REFERENCES:

http://en.wikipedia.org/wiki/Clinical_decision_support_system

Cardiovascular Diseases: Decision Support Systems (DSS) for Disease Management Decision Making – DSS analyzes information from hospital cardiovascular patients in real time and compares it with a database of thousands of previous cases to predict the most likely outcome.

Can aviation technology reduce heart surgery complications?

Algorithm for real-time analysis of data holds promise for forecasting
August 13, 2012

British researchers are working to adapt technology from the aviation industry to help prevent complications among heart patients after surgery. Up to 1,000 sensors aboard aircraft help airlines determine when a plane requires maintenance, reports The Engineer, serving as a model for the British risk-prediction system.

The system analyzes information from hospital cardiovascular patients in real time and compares it with a database of thousands of previous cases to predict the most likely outcome.

“There are vast amounts of clinical data currently collected which is not analyzed in any meaningful way. This tool has the potential to identify subtle early signs of complications from real-time data,” Stuart Grant, a research fellow in surgery at University Hospital of South Manchester, says in a hospital statement. Grant is part of the Academic Surgery Unit working with Lancaster University on the project, which is still in its early stages.

The software predicts the patient’s condition over a 24-hour period using four metrics: systolic blood pressure, heart rate, respiration rate and peripheral oxygen saturation, explains EE Times.

As a comparison tool, the researchers obtained a database of 30,000 patient records from the Massachusetts Institute of Technology and combined it with a smaller, more specialized database from Manchester.

In six months of testing, its accuracy is about 75 percent, The Engineer reports. More data and an improved algorithm could boost that rate to 85 percent, the researchers believe. Making the software web-based would allow physicians to access the data anywhere, even on tablets or phones, and could enable remote consultation with specialists.
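
The articles do not specify which algorithm the Manchester/Lancaster system uses; the following generic nearest-neighbour sketch only illustrates the general idea of comparing a patient’s four real-time metrics against a database of previous cases. All records are invented:

```python
# Hypothetical sketch: predict the most likely outcome by majority vote
# among the nearest previous cases in a database. Records are invented.

import math

# (systolic BP, heart rate, respiration rate, SpO2, outcome)
previous_cases = [
    (118, 72, 14, 98, "uncomplicated"),
    (95, 110, 24, 89, "complication"),
    (130, 80, 16, 96, "uncomplicated"),
    (100, 105, 22, 90, "complication"),
]

def predict(patient, cases, k=3):
    """Rank previous cases by distance to the patient's metrics, vote on outcome."""
    ranked = sorted(cases, key=lambda c: math.dist(patient, c[:4]))
    outcomes = [c[4] for c in ranked[:k]]
    return max(set(outcomes), key=outcomes.count)   # majority vote

print(predict((102, 100, 21, 91), previous_cases))  # -> "complication"
```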

In their next step, the researchers are applying for more funding and for ethical clearance for a large-scale trial.

U.S. researchers are working on a similar crystal ball, but one covering an array of conditions. Researchers from the University of Washington, MIT and Columbia University are using a statistical model that can predict future ailments based on a patient’s history–and that of thousands of others.

And the U.S. Department of Health & Human Services is using mathematical modeling to analyze effects of specific healthcare interventions.

Predictive modeling also holds promise to make clinical research easier by using algorithms to examine multiple scenarios based on different kinds of patient populations, specified health conditions and various treatment regimens.

To learn more:
– here’s the Engineer article
– check out the hospital report
– read the EE Times article

Related Articles:
Algorithm looks to past to predict future health conditions
HHS moves to mathematical modeling for research, intervention evaluation
Decision support, predictive modeling may speed clinical research

SOURCE:

Can aviation technology reduce heart surgery complications? – FierceHealthIT http://www.fiercehealthit.com/story/can-aviation-technology-reduce-heart-surgery-complications/2012-08-13#ixzz2SITHc61J

http://www.fiercehealthit.com/story/study-decision-support-systems-must-be-flexible-adaptable-transparent/2012-08-20

Medical Decision Making Tools: Overview of DSS available to date  

http://www.openclinical.org/dss.html

Clinical Decision Support Systems – used for Cardiovascular Medical Decisions

Stud Health Technol Inform. 2010;160(Pt 2):846-50.

AALIM: a cardiac clinical decision support system powered by advanced multi-modal analytics.

Amir A, Beymer D, Grace J, Greenspan H, Gruhl D, Hobbs A, Pohl K, Syeda-Mahmood T, Terdiman J, Wang F.

Source

IBM Almaden Research Center, San Jose, CA, USA.

Abstract

Modern Electronic Medical Record (EMR) systems often integrate large amounts of data from multiple disparate sources. To do so, EMR systems must align the data to create consistency between these sources. The data should also be presented in a manner that allows a clinician to quickly understand the complete condition and history of a patient’s health. We develop the AALIM system to address these issues using advanced multimodal analytics. First, it extracts and computes multiple features and cues from the patient records and medical tests. This additional metadata facilitates more accurate alignment of the various modalities, enables consistency check and empowers a clear, concise presentation of the patient’s complete health information. The system further provides a multimodal search for similar cases within the EMR system, and derives related conditions and drugs information from them. We applied our approach to cardiac data from a major medical care organization and found that it produced results with sufficient quality to assist the clinician making appropriate clinical decisions.

PMID: 20841805 [PubMed – indexed for MEDLINE]

DSS development for Enhancement of Heart Drug Compliance by Cardiac Patients 

A good example of a thorough and effective CDSS development process is an electronic checklist developed by Riggio et al. at Thomas Jefferson University Hospital (TJUH) [12]. TJUH had a computerized physician order-entry system in place. To meet congestive heart failure and acute myocardial infarction quality measures (e.g., use of aspirin, beta blockers, and angiotensin-converting enzyme (ACE) inhibitors), a multidisciplinary team including a focus group of residents developed a checklist, embedded in the computerized discharge instructions, that required resident physicians to prescribe the recommended medications or choose from a drop-down list of contraindications. The checklist was vetted by several committees, including the medical executive committee, and presented at resident conferences for feedback and suggestions. Implementation resulted in a dramatic improvement in compliance.

http://virtualmentor.ama-assn.org/2011/03/medu1-1103.html

Early DSS Development at Stanford Medical Center in the 70s

MYCIN (1976): MYCIN was a rule-based expert system designed to diagnose and recommend treatment for certain blood infections (antimicrobial selection for patients with bacteremia or meningitis). It was later extended to handle other infectious diseases. Clinical knowledge in MYCIN is represented as a set of IF-THEN rules with certainty factors attached to diagnoses. It was a goal-directed system, using a basic backward-chaining reasoning strategy (resulting in exhaustive depth-first search of the rule base for relevant rules, though with additional heuristic support to control the search for a proposed solution). MYCIN was developed in the mid-1970s by Ted Shortliffe and colleagues at Stanford University. It is probably the most famous early expert system, described by Mark Musen as being “the first convincing demonstration of the power of the rule-based approach in the development of robust clinical decision-support systems” [Musen, 1999].

The EMYCIN (Essential MYCIN) expert system shell, employing MYCIN’s control structures was developed at Stanford in 1980. This domain-independent framework was used to build diagnostic rule-based expert systems such as PUFF, a system designed to interpret pulmonary function tests for patients with lung disease.

http://www.bmj.com/content/346/bmj.f657

ECG for Detection of MI: DSS use in Cardiovascular Disease Management

http://faculty.ksu.edu.sa/AlBarrak/Documents/Clinical%20Decision%20Support%20Systems_Ch01.pdf

The chapter linked above also notes that neural networks did a better job than two experienced cardiologists in detecting acute myocardial infarction in electrocardiograms with concomitant left bundle branch block:

Olsson SE, Ohlsson M, Ohlin H, Edenbrandt L. Neural networks – a diagnostic tool in acute myocardial infarction with concomitant left bundle branch block. Clinical Physiology and Functional Imaging 2002;22:295–299.

Abstract
The prognosis of acute myocardial infarction (AMI) improves by early revascularization. However the presence of left bundle branch block (LBBB) in the electrocardiogram (ECG) increases the difficulty in recognizing an AMI and different ECG criteria for the diagnosis of AMI have proved to be of limited value. The purpose of this study was to detect AMI in ECGs with LBBB using artificial neural networks and to compare the performance of the networks to that of six sets of conventional ECG criteria and two experienced cardiologists. A total of 518 ECGs, recorded at an emergency department, with a QRS duration > 120 ms and an LBBB configuration, were selected from the clinical ECG database. Of this sample 120 ECGs were recorded on patients with AMI, the remaining 398 ECGs being used as a control group. Artificial neural networks of feed-forward type were trained to classify the ECGs as AMI or not AMI. The neural network showed higher sensitivities than both the cardiologists and the criteria when compared at the same levels of specificity. The sensitivity of the neural network was 12% (P = 0.02) and 19% (P = 0.001) higher than that of the cardiologists. Artificial neural networks can be trained to detect AMI in ECGs with concomitant LBBB more effectively than conventional ECG criteria or experienced cardiologists.

http://home.thep.lu.se/~mattias/publications/papers/lu_tp_00_38_abs.html

Additional SOURCES:

http://www.implementationscience.com/content/6/1/92

http://www.fiercehealthit.com/story/study-decision-support-systems-must-be-flexible-adaptable-transparent/2012-08-20

 Comment of Note

During 1979–1983, Dr. Aviva Lev-Ari was part of Prof. Ronald A. Howard’s study team at Stanford University, the consulting group to Stanford Medical Center during MYCIN feature-enhancement development.

Professor Howard is one of the founders of the decision analysis discipline. His books on probabilistic modeling, decision analysis, dynamic programming, and Markov processes serve as major references for courses and research in these fields.

https://engineering.stanford.edu/profile/rhoward

It was Prof. Howard of EES, Prof. Amos Tversky of Behavioral Science (advisor of Dr. Lev-Ari’s Master’s thesis at HUJ), and Prof. Kenneth Arrow of Economics who, with 15 doctoral students, formed the Interdisciplinary Decision Analysis Core Group at Stanford in the early 80s. Students of Prof. Howard, chiefly James E. Matheson, started the Decision Analysis Practice at Stanford Research Institute (SRI International) in Menlo Park, CA.

http://www.sri.com/

Dr. Lev-Ari was hired in March 1985 to head SRI’s effort in algorithm-based DSS development. The models she developed were applied in problem solving for SRI clients, among them pharmaceutical manufacturers: Ciba-Geigy (now Novartis), DuPont, FMC, and Rhone-Poulenc (now Sanofi-Aventis).

Read Full Post »

Reporter: Aviva Lev-Ari, PhD, RN

AstraZeneca PLC (AZN) Signs $200 Million Deal With BIND Therapeutics for Cancer Drug

4/22/2013 7:19:42 AM

CAMBRIDGE, Mass. & WILMINGTON, Del.–(BUSINESS WIRE)– BIND Therapeutics and AstraZeneca announced today that they have entered into a strategic collaboration to develop and commercialize an Accurin™, a targeted and programmable cancer nanomedicine from BIND’s Medicinal Nanoengineering platform, based on a molecularly targeted kinase inhibitor developed and owned by AstraZeneca. The collaboration is based on emerging data suggesting that nanomedicines like Accurins selectively accumulate in diseased tissues and cells, leading to higher drug concentrations at the site of the tumor and reduced exposure to healthy tissues.

Under the terms of the agreement, the companies will work together to complete Investigational New Drug (IND)-enabling studies of the lead Accurin identified from a previously-completed feasibility program. AstraZeneca will then have the exclusive right to lead development and commercialization and BIND will lead manufacturing during the development phase. BIND could receive upfront and pre-approval milestone payments totaling $69 million, and more than $130 million in regulatory and sales milestones and other payments as well as tiered single to double-digit royalties on future sales.

“We are excited to grow this collaboration with AstraZeneca, a leading global biopharmaceutical company committed to developing innovative medicines for patients,” said Scott Minick, President and CEO of BIND. “One year ago, BIND started several feasibility projects with major pharmaceutical companies. Our collaboration with AstraZeneca is the first one completed and had very successful results. Due to the advanced nature of this program, we now plan to move an Accurin with optimized therapeutic properties quickly into product development.”

“AstraZeneca believes that targeted therapies which specifically address the underlying mechanisms of disease are the future of personalized cancer treatment,” said Susan Galbraith, Head of AstraZeneca’s Oncology Innovative Medicines Unit. “Our oncology teams are actively exploring a range of platforms to deliver targeted therapies, with a strategic focus on unlocking the significant potential of nanoparticles as an approach to cancer treatment. We view BIND’s targeted nanomedicines as a leading technology in this field.”

About Accurins™

BIND Therapeutics is discovering and developing Accurins, proprietary new best-in-class therapeutics with superior target selectivity and the potential to improve patient outcomes in the areas of oncology, inflammatory diseases and cardiovascular disorders. Leveraging its proprietary Medicinal Nanoengineering® platform, BIND develops Accurins that outperform conventional drugs by selectively accumulating in diseased tissues and cells. The result is higher drug concentrations at the site of action with minimal off-target exposure, leading to markedly better efficacy and safety.

About BIND Therapeutics

BIND Therapeutics is a clinical-stage biopharmaceutical company developing a new class of highly selective targeted and programmable therapeutics called Accurins. BIND’s Medicinal Nanoengineering® platform enables the design, engineering and manufacturing of Accurins with unprecedented control over drug properties to maximize trafficking to disease sites, dramatically enhancing efficacy while minimizing toxicities.

BIND is developing a pipeline of novel Accurins that hold extraordinary potential to become best-in-class drugs and improve patient outcomes in the areas of oncology, inflammatory diseases and cardiovascular disorders. BIND’s lead product candidate, BIND-014, is currently entering Phase 2 clinical testing in cancer patients and is designed to selectively target PSMA, a surface protein upregulated in a broad range of solid tumors. BIND also develops Accurins in collaboration with pharmaceutical and biotechnology partners to enable promising pipeline candidates to achieve their full potential and to utilize selective targeting to transform the performance of important existing drug products.

BIND is backed by leading investors Polaris Venture Partners, Flagship Ventures, ARCH Venture Partners, NanoDimension, DHK Investments, Endeavour Vision and Rusnano. BIND was founded on proprietary technology from the laboratories of two leaders in the field of nanomedicine: Robert Langer, David H. Koch Institute Professor at the Massachusetts Institute of Technology (MIT), and Omid Farokhzad, Associate Professor at Harvard Medical School.

For more information, please visit the company’s web site at http://www.bindtherapeutics.com.

Contact:

Media:

The Yates Network

Kathryn Morris, 845-635-9828

Kathryn@theyatesnetwork.com


Curators: Aviva Lev-Ari, PhD, RN and Larry Bernstein, MD, FACP

The essence of the message is summarized by Larry Bernstein, MD, FACP, as follows:

[1] We employ a massively parallel reporter assay (MPRA) to measure the transcriptional levels induced by 145bp DNA segments centered on evolutionarily-conserved regulatory motif instances and found in enhancer chromatin states
[2] We find statistically robust evidence that (1) scrambling, removing, or disrupting the predicted activator motifs abolishes enhancer function, while silent or motif-improving changes maintain enhancer activity; (2) evolutionary conservation, nucleosome exclusion, binding of other factors, and strength of the motif match are all associated with wild-type enhancer activity; (3) scrambling repressor motifs leads to aberrant reporter expression in cell lines where the enhancers are usually not active.
[3] Our results suggest a general strategy for deciphering cis-regulatory elements by systematic large-scale experimental manipulation, and provide quantitative enhancer activity measurements across thousands of constructs that can be mined to generate and test predictive models of gene expression.

Manolis Kellis and co-authors from the Massachusetts Institute of Technology and the Broad Institute describe a massively parallel reporter assay that they used to systematically study regulatory motifs falling within thousands of predicted enhancer sequences in the human genome. Using this assay, they examined 2,104 potential enhancers in two human cell lines, along with another 3,314 engineered enhancer variants. “Our results suggest a general strategy for deciphering cis-regulatory elements by systematic large-scale experimental manipulation,” they write, “and provide quantitative enhancer activity measurements across thousands of constructs that can be mined to generate and test predictive models of gene expression.”

SOURCE:

http://www.genomeweb.com//node/1206571?hq_e=el&hq_m=1536519&hq_l=4&hq_v=e1df6f3681
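Mechanically, the MPRA readout reduces to a count comparison: each construct carries about ten barcode tags, and enhancer activity is estimated from each barcode’s abundance in the mRNA pool relative to the plasmid DNA pool. A minimal sketch, assuming simplified per-barcode counts; the function and pseudocount handling are illustrative, not the authors’ actual pipeline.

```python
import math
from statistics import median

def enhancer_activity(rna_counts, dna_counts, pseudocount=1.0):
    """Median log2(RNA/DNA) across the barcode tags of one construct."""
    ratios = [math.log2((r + pseudocount) / (d + pseudocount))
              for r, d in zip(rna_counts, dna_counts)]
    return median(ratios)

# Toy barcode counts for a wild-type enhancer and its motif-scrambled variant:
wt        = enhancer_activity([120, 95, 140, 110, 130], [50, 40, 60, 45, 55])
scrambled = enhancer_activity([30, 25, 40, 28, 35], [50, 42, 58, 47, 52])
print(f"wild-type: {wt:+.2f}  scrambled: {scrambled:+.2f}")
# A sharp drop in the scrambled construct's score is the signature that the
# disrupted activator motif was necessary for enhancer function.
```

Scaling this comparison across 2,104 wild-type sequences and 3,314 engineered variants, in two cell lines with replicates, is what gives the study the statistical power described above.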

Systematic dissection of regulatory motifs in 2,000 predicted human enhancers using a massively parallel reporter assay

Pouya Kheradpour¹, Jason Ernst¹, Alexandre Melnikov², Peter Rogov², Li Wang², Xiaolan Zhang², Jessica Alston², Tarjei S. Mikkelsen² and Manolis Kellis¹,³

Author Affiliations: ¹ MIT; ² Broad Institute

* Corresponding author; email: manoli@mit.edu

Abstract

Genome-wide chromatin maps have permitted the systematic mapping of putative regulatory elements across multiple human cell types, revealing tens of thousands of candidate distal enhancer regions. However, until recently, their experimental dissection by directed regulatory motif disruption has remained unfeasible at the genome scale, due to the technological lag in large-scale DNA synthesis. Here, we employ a massively parallel reporter assay (MPRA) to measure the transcriptional levels induced by 145bp DNA segments centered on evolutionarily-conserved regulatory motif instances and found in enhancer chromatin states. We select five predicted activators (HNF1, HNF4, FOXA, GATA, NFE2L2) and two predicted repressors (GFI1, ZFP161) and measure reporter expression in erythroleukemia (K562) and liver carcinoma (HepG2) cell lines. We test 2,104 wild-type sequences and an additional 3,314 engineered enhancer variants containing targeted motif disruptions, each using 10 barcode tags in two cell lines and 2 replicates. The resulting data strongly confirm the enhancer activity and cell type specificity of enhancer chromatin states, the ability of 145bp segments to recapitulate both, the necessary role of regulatory motifs in enhancer function, and the complementary roles of activator and repressor motifs. We find statistically robust evidence that (1) scrambling, removing, or disrupting the predicted activator motifs abolishes enhancer function, while silent or motif-improving changes maintain enhancer activity; (2) evolutionary conservation, nucleosome exclusion, binding of other factors, and strength of the motif match are all associated with wild-type enhancer activity; (3) scrambling repressor motifs leads to aberrant reporter expression in cell lines where the enhancers are usually not active. Our results suggest a general strategy for deciphering cis-regulatory elements by systematic large-scale experimental manipulation, and provide quantitative enhancer activity measurements across thousands of constructs that can be mined to generate and test predictive models of gene expression.

  • Received June 26, 2012.
  • Accepted March 14, 2013.

This manuscript is Open Access.

This article is distributed exclusively by Cold Spring Harbor Laboratory Press for the first six months after the full-issue publication date (see http://genome.cshlp.org/site/misc/terms.xhtml). After six months, it is available under a Creative Commons License (Attribution-NonCommercial 3.0 Unported License), as described at http://creativecommons.org/licenses/by-nc/3.0/.

SOURCE:

http://genome.cshlp.org/content/early/2013/03/19/gr.144899.112.abstract


Curator: Aviva Lev-Ari, PhD, RN

Chaperone Protein Mechanism inspired an MIT Team to Model the Role of Genetic Mutations in Cancer Progression, proposing that the next generation of Oncology drugs aim at Suppression of Passenger Mutations. Current drugs in clinical trials use the Chaperone Protein Mechanism to suppress Driver Mutations.

Deleterious Mutations in Cancer Progression

Kirill S. Korolev¹, Christopher McFarland², and Leonid A. Mirny³

¹ Department of Physics, MIT, Cambridge, MA. E-mail: papers.korolev@gmail.com

² Graduate Program in Biophysics, Harvard University, Cambridge, MA.

³ Health Sciences and Technology, MIT, Cambridge, MA

The research was funded by the National Institutes of Health/National Cancer Institute Physical Sciences Oncology Center at MIT.

SOURCE:

http://cnls.lanl.gov/q-bio/wiki/images/4/40/Abstract.pdf

Deleterious passenger mutations significantly affect evolutionary dynamics of cancer. Including passenger mutations in evolutionary models is necessary to understand the role of genetic diversity in cancer progression and to create new treatments based on the accumulation of deleterious passenger mutations.

Evolutionary models of cancer almost exclusively focus on the acquisition of driver mutations, which are beneficial to cancer cells. The driver mutations, however, are only a small fraction of the mutations found in tumors. The other mutations, called passenger mutations, are typically neglected because their effect on fitness is assumed to be very small. Recently, it has been suggested that some passenger mutations are slightly deleterious. We find that deleterious passengers significantly affect cancer progression. In particular, they lead to a critical tumor size, below which tumors shrink on average, and to an optimal mutation rate for cancer evolution.

Cancer is an outcome of somatic evolution [1-3]. To outcompete their benign sisters, cancer cells need to acquire many heritable changes (driver mutations) that enable proliferation. In addition to the rare beneficial drivers, cancer cells must also acquire neutral or slightly deleterious passenger mutations [4]. Indeed, the number of possible passengers exceeds the number of possible drivers by orders of magnitude. Surprisingly, the effect of passenger mutations on cancer progression has not been explored. To address this problem, we developed an evolutionary model of cancer progression, which includes both drivers and passengers. This model was analyzed both numerically and analytically to understand how mutation rate, population size, and fitness effects of mutations affect cancer progression.

RESULTS

Upon including passengers in our model, we found that cancer is no longer a straightforward progression to malignancy. In particular, there is a critical population size such that smaller populations accumulate passengers and decline, while larger populations accumulate drivers and grow. The transition to cancer for small initial populations is, therefore, stochastic in nature and is similar to diffusion over an energy barrier in chemical kinetics. We also found that there is an optimal mutation rate for cancer development, and passengers with intermediate fitness costs are most detrimental to cancer. The existence of an optimal mutation rate could explain recent clinical data [5] and is in stark contrast to the predictions of the models neglecting passengers. We also show that our theory is consistent with recent sequencing data.

SOURCE:

http://cnls.lanl.gov/q-bio/wiki/images/4/40/Abstract.pdf
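A minimal stochastic sketch in the spirit of the birth-death model described in the abstract above: cells divide at a rate set by their fitness and die at a density-dependent rate, picking up rare beneficial drivers and common, slightly deleterious passengers. All parameter values are invented for illustration, and because the process is stochastic, repeated runs at different initial sizes are needed to see the threshold behavior.

```python
import random

S_D, S_P = 0.1, 0.001    # fitness benefit per driver / cost per passenger (assumed)
U_D, U_P = 1e-5, 1e-2    # per-division mutation probabilities (assumed)
DT = 0.1                 # time step, small enough that event probabilities stay < 1

def fitness(nd, npass):
    return (1 + S_D) ** nd / (1 + S_P) ** npass

def step(pop, K):
    """Each cell divides at rate ~ fitness and dies at rate ~ N/K."""
    n, nxt = len(pop), []
    for nd, npass in pop:
        b, d = fitness(nd, npass) * DT, (n / K) * DT
        r = random.random()
        if r < b:  # division, with possible new driver/passenger in the daughter
            nxt += [(nd, npass),
                    (nd + (random.random() < U_D), npass + (random.random() < U_P))]
        elif r >= b + d:  # no event; otherwise the cell dies and is dropped
            nxt.append((nd, npass))
    return nxt

def run(n0, steps=3000):
    pop = [(0, 0)] * n0  # start at the population's own equilibrium size K = n0
    for _ in range(steps):
        pop = step(pop, K=n0)
        if not pop or len(pop) > 3 * n0:
            break
    return len(pop)

# Small populations accumulate passengers and drift to extinction more often;
# larger ones undergo more divisions, catch a driver, and expand.
for n0 in (30, 300):
    print(n0, "->", [run(n0) for _ in range(3)])
```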

Just as some mutations in the genome of cancer cells actively spur tumor growth, it would appear there are also some that do the reverse, and act to slow it down or even stop it, according to a new US study led by MIT.

Senior author, Leonid Mirny, an associate professor of physics and health sciences and technology at MIT, and colleagues, write about this surprise finding in a paper to be published online this week in the Proceedings of the National Academy of Sciences.

In a statement released on Monday, Mirny tells the press:

“Cancer may not be a sequence of inevitable accumulation of driver events, but may be actually a delicate balance between drivers and passengers.”

“Spontaneous remissions or remissions triggered by drugs may actually be mediated by the load of deleterious passenger mutations,” he suggests.

Cancer Cell’s Genome Has “Drivers” and “Passengers”

Your average cancer cell has a genome littered with thousands of mutations and hundreds of mutated genes. But only a handful of these mutated genes are drivers that are responsible for the uncontrolled growth that leads to tumors.

Until this study, cancer researchers had paid little attention to the “passenger” mutations, believing that because they were not “drivers,” they had little effect on cancer progression.

Now Mirny and colleagues have discovered, to their surprise, that the “passengers” aren’t just along for the ride. In sufficient numbers, they can slow down, and even stop, cancer cells from growing and replicating into tumors.

New Drugs Could Target the Passenger Mutations in Protein Chaperoning

Although several drugs that target the effect of chaperone proteins in cancer are already in development, they aim to suppress driver mutations.

Recently, biochemists at the University of Massachusetts Amherst “trapped” a chaperone in action, providing a dynamic snapshot of its mechanism as a way to help the development of new drugs that target drivers.

But Mirny and colleagues say there is now another option: drugs that target the same chaperoning process but aim instead to enhance the suppressive effect of the passenger mutations.

They are now comparing cells with identical driver mutations but different passenger mutations, to see which have the strongest effect on growth.

They are also inserting the cells into mice to see which are the most likely to lead to secondary tumors (metastasize).

Written by Catharine Paddock PhD
Copyright: Medical News Today

SOURCE:

http://www.medicalnewstoday.com/articles/255920.php

After proteins are synthesized, they need to be folded into the correct shape, and chaperones help with that process. In cancerous cells, chaperones help proteins fold into the correct shape even when they are mutated, helping to suppress the effects of deleterious mutations.
Several potential drugs that inhibit chaperone proteins are now in clinical trials to treat cancer, although researchers had believed that they acted by suppressing the effects of driver mutations, not by enhancing the effects of passengers.

In current studies, the researchers are comparing cancer cell lines that have identical driver mutations but a different load of passenger mutations, to see which grow faster. They are also injecting the cancer cell lines into mice to see which are likeliest to metastasize.

Drugs that tip the balance in favor of the passenger mutations could offer a new way to treat cancer, the researchers say, beating it with its own weapon — mutations. Although the influence of a single passenger mutation is minuscule, “collectively they can have a profound effect,” Mirny says. “If a drug can make them a little bit more deleterious, it’s still a tiny effect for each passenger, but collectively this can build up.”
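The arithmetic behind “collectively this can build up” is simple compounding. A quick illustration with assumed numbers (a 0.1% fitness cost per passenger is a round figure for illustration, not a value from the paper):

```python
s_p = 0.001                      # assumed fitness cost per passenger mutation
for n in (10, 100, 1000):        # passenger mutation loads
    print(n, round((1 - s_p) ** n, 3))
# 10 -> 0.99, 100 -> 0.905, 1000 -> 0.368. Doubling the per-passenger cost
# to 0.002 drops the 1000-mutation load to (1 - 0.002)**1000, about 0.135:
# a tiny per-mutation change, a large collective effect.
```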

In natural populations, selection weeds out deleterious mutations. However, Mirny and his colleagues suspected that the evolutionary process in cancer can proceed differently, allowing mutations with only a slightly harmful effect to accumulate.

If enough deleterious passengers are present, their cumulative effects can slow tumor growth, the simulations found. Tumors may become dormant, or even regress, but growth can start up again if new driver mutations are acquired. This matches the cancer growth patterns often seen in human patients.

“Spontaneous remissions or remissions triggered by drugs may actually be mediated by the load of deleterious passenger mutations.”

When they analyzed passenger mutations found in genomic data taken from cancer patients, the researchers found the same pattern predicted by their model — accumulation of large quantities of slightly deleterious mutations.

REFERENCE

Massachusetts Institute of Technology (2013, February 4). Some cancer mutations slow tumor growth. ScienceDaily. Retrieved February 4, 2013, from http://www.sciencedaily.com/releases/2013/02/130204154011.htm

Biochemists Trap A Chaperone Machine In Action

Article Date: 11 Dec 2012 – 0:00 PST

Molecular chaperones have emerged as exciting new potential drug targets, because scientists want to learn how to stop cancer cells, for example, from using chaperones to enable their uncontrolled growth. Now a team of biochemists at the University of Massachusetts Amherst led by Lila Gierasch have deciphered key steps in the mechanism of the Hsp70 molecular machine by “trapping” this chaperone in action, providing a dynamic snapshot of its mechanism.

She and colleagues describe this work in the current issue of Cell. Gierasch’s research on Hsp70 chaperones is supported by a long-running grant to her lab from NIH’s National Institute of General Medical Sciences.

Molecular chaperones like the Hsp70s facilitate the origami-like folding of proteins, made in the cell’s nanofactories or ribosomes, from where they emerge unstructured like noodles. Proteins only function when folded into their proper structures, but the process is so difficult under cellular conditions that molecular chaperone helpers are needed. 

The newly discovered information about chaperone action is important because all rapidly dividing cells use a lot of Hsp70, Gierasch points out. “The saying is that cancer cells are addicted to Hsp70 because they rely on this chaperone for explosive new cell growth. Cancer shifts our body’s production of Hsp70 into high gear. If we can figure out a way to take that away from cancer cells, maybe we can stop the out-of-control tumor growth. To find a molecular way to inhibit Hsp70, you’ve got to know how it works and what it needs to function, so you can identify its vulnerabilities.”

Chaperone proteins in cells, from bacteria to humans, act like midwives or bodyguards, protecting newborn proteins from misfolding and existing proteins against loss of structure caused by stress such as heat or a fever. In fact, the heat shock protein (Hsp) group includes a variety of chaperones active in both these situations.

As Gierasch explains, “New proteins emerge into a challenging environment. It’s very crowded in the cell and it would be easy for them to get their sticky amino acid chains tangled and clumped together. Chaperones bind to them and help to avoid this aggregation, which is implicated in many pathologies such as neurodegenerative diseases. This role of chaperones has also heightened interest in using them therapeutically.”

However, chaperones must not bind too tightly or a protein can’t move on to do its job. To avoid this, chaperones rapidly cycle between tight and loose binding states, determined by whether ATP or ADP is bound. In the loose state, a protein client is free to fold or to be picked up by another chaperone that will help it fold to do its cellular work. In effect, Gierasch says, Hsp70s create a “holding pattern” to keep the protein substrate viable and ready for use, but also protected.
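The cycle described here can be caricatured as a two-state system. A toy steady-state calculation, with invented rate constants (not measurements from this work), shows how the balance of ATP hydrolysis and nucleotide exchange sets the fraction of time a client spends protected:

```python
k_hyd = 2.0   # ATP state -> ADP state: substrate-stimulated hydrolysis (s^-1, assumed)
k_exc = 5.0   # ADP state -> ATP state: nucleotide exchange (s^-1, assumed)

# Steady state of the two-state cycle: occupancy is set by the ratio of rates.
p_tight = k_hyd / (k_hyd + k_exc)   # ADP-bound, tight grip on the client
p_loose = 1.0 - p_tight             # ATP-bound, loose and fast-exchanging
print(f"tight (ADP): {p_tight:.2f}  loose (ATP): {p_loose:.2f}")
# With these numbers the client is protected about 29% of the time, yet is
# released often enough to attempt folding or hand-off: the 'holding pattern'
# described above.
```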

She and colleagues knew the Hsp70’s structure in both tight and loose binding affinity states, but not what happened between, which is essential to understanding the mechanism of chaperone action. Using the analogy of a high jump, they had a snapshot of the takeoff and landing, but not the top of the jump. “Knowing the end points doesn’t tell us how it works. There is a shape change in there that we wanted to see,” Gierasch says.

To address this, she and her colleagues, postdoctoral fellows Anastasia Zhuravleva and Eugenia Clerico, obtained “fingerprints” of the structure of Hsp70 in different states by using state-of-the-art nuclear magnetic resonance (NMR) methods that allowed them to map how the chemical environments of individual amino acids change under different sample conditions. Working with an Hsp70 known as DnaK from E. coli bacteria, Zhuravleva and Clerico assigned its NMR spectra; in other words, they determined which peaks came from which amino acids in this large molecule.
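The per-residue “fingerprint” comparison is conventionally quantified as a chemical shift perturbation (CSP) between two states. The weighting below is a common convention in the NMR literature, not necessarily the exact metric used in this study, and the residue names and shift values are invented for illustration:

```python
import math

def csp(delta_h, delta_n, n_weight=0.2):
    """Weighted combined 1H/15N chemical shift change (ppm) for one residue."""
    return math.sqrt(delta_h ** 2 + (n_weight * delta_n) ** 2)

# Hypothetical shift differences between two nucleotide states:
for residue, dh, dn in [("G10", 0.02, 0.10), ("L150", 0.15, 1.20), ("D480", 0.08, 0.60)]:
    print(residue, round(csp(dh, dn), 3))
# Residues with large CSPs map out the regions whose chemical environment
# changes in the trapped allosteric intermediate.
```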

The UMass Amherst team then mutated the Hsp70 so that cycling between tight and loose binding states stopped. As Gierasch explains, “Anastasia and Eugenia were able to stop the cycle part-way through the high jump, so to speak, and obtain the molecular fingerprint of a transient intermediate.” She calls this accomplishment “brilliant.”

Now that the researchers have a picture of this critical allosteric state, that is, one in which events at one site control events in another, Gierasch says many insights emerge. For example, it appears nature uses this energetically tense state to “tune” alternate versions of Hsp70 to perform different cellular functions. “Tuning means there may be evolutionary changes that let the chaperone work with its partners optimally,” she notes.

“And if you want to make a drug that controls the amount of Hsp70 available to a cell, our work points the way toward figuring out how to tickle the molecule so you can control its shape and its ability to bind to its client. We’re not done, but we made a big leap,” Gierasch adds. “We now have an idea of what the Hsp70 structure is when it is doing its job, which is extraordinarily important.”

Article adapted by Medical News Today from the original press release.

REFERENCES

[1] Michor F, Iwasa Y, and Nowak MA (2004) Dynamics of cancer progression. Nature Reviews Cancer 4, 197-205.

[2] Crespi B and Summers K (2005) Evolutionary biology of cancer. Trends in Ecology and Evolution 20, 545-552.

[3] Merlo LMF, et al. (2006) Cancer as an evolutionary and ecological process. Nature Reviews Cancer 6, 924-935.

[4] McFarland C, et al. “Accumulation of deleterious passenger mutations in cancer,” in preparation.

[5] Birkbak NJ, et al. (2011) Paradoxical relationship between chromosomal instability and survival outcome in cancer. Cancer Research 71, 3447-3452.

Other related articles on this Open Access Online Scientific Journal include the following:

Hold on. Mutations in Cancer do good.

http://pharmaceuticalintelligence.com/2013/02/04/hold-on-mutations-in-cancer-do-good/

Rational Design of Allosteric Inhibitors and Activators Using the Population-Shift Model: In Vitro Validation and Application to an Artificial Biosensor

http://pharmaceuticalintelligence.com/2012/10/26/rational-design-of-allosteric-inhibitors-and-activators-using-the-population-shift-model-in-vitro-validation-and-application-to-an-artificial-biosensor/

LEADERS in Genome Sequencing of Genetic Mutations for Therapeutic Drug Selection in Cancer Personalized Treatment: Part 2

http://pharmaceuticalintelligence.com/2013/01/13/leaders-in-genome-sequencing-of-genetic-mutations-for-therapeutic-drug-selection-in-cancer-personalized-treatment-part-2/

Exome sequencing of serous endometrial tumors shows recurrent somatic mutations in chromatin-remodeling and ubiquitin ligase complex genes

http://pharmaceuticalintelligence.com/2012/12/18/exome-sequencing-of-serous-endometrial-tumors-shows-recurrent-somatic-mutations-in-chromatin-remodeling-and-ubiquitin-ligase-complex-genes/

Genome-Wide Detection of Single-Nucleotide and Copy-Number Variation of a Single Human Cell(1)

http://pharmaceuticalintelligence.com/2013/02/03/genome-wide-detection-of-single-nucleotide-and-copy-number-variation-of-a-single-human-cell/

Gastric Cancer: Whole-genome reconstruction and mutational signatures

http://pharmaceuticalintelligence.com/2012/12/24/gastric-cancer-whole-genome-reconstruction-and-mutational-signatures-2/

Pregnancy with a Leptin-Receptor Mutation

http://pharmaceuticalintelligence.com/2012/10/31/pregnancy-with-a-leptin-receptor-mutation/

Mitochondrial mutation analysis might be “1-step” away

http://pharmaceuticalintelligence.com/2012/08/14/mitochondrial-mutation-analysis-might-be-1-step-away/

Genome-wide Single-Cell Analysis of Recombination Activity and De Novo Mutation Rates in Human Sperm

http://pharmaceuticalintelligence.com/2012/08/07/genome-wide-single-cell-analysis-of-recombination-activity-and-de-novo-mutation-rates-in-human-sperm/

A Prion Like-Protein, Protein Kinase Mzeta and Memory Maintenance

http://pharmaceuticalintelligence.com/2012/10/19/a-prion-like-protein-protein-kinase-mzeta-and-memory-maintenance/

Hope for Male Contraception: A small molecule that inhibits a protein important for chromatin organization can cause reversible sterility in male mice

http://pharmaceuticalintelligence.com/2012/09/03/hope-for-male-contraception-a-small-molecule-that-inhibits-a-protein-important-for-chromatin-organization-can-cause-reversible-sterility-in-male-mice/

Protein Folding may lead to better FLU Vaccine

http://pharmaceuticalintelligence.com/2012/07/25/protein-folding-may-lead-to-better-flu-vaccine/

SNAP: Predict Effect of Non-synonymous Polymorphisms: How well Genome Interpretation Tools could Translate to the Clinic

http://pharmaceuticalintelligence.com/2013/02/03/snap-predict-effect-of-non-synonymous-polymorphisms-how-well-genome-interpretation-tools-could-translate-to-the-clinic/

Drugging the Epigenome

http://pharmaceuticalintelligence.com/2013/02/01/drugging-the-epigenome/

