Posts Tagged ‘IBM’

From Molecular Biology to Translational Medicine: How Far Have We Come, and Where Does It Lead Us?

The Initiation and Growth of Molecular Biology and Genomics, Part I

Curator: Larry H Bernstein, MD, FCAP

Introduction and purpose

This material covers the initiation phase of molecular biology in Part I, to be followed by the Human Genome Project in Part II, and concludes with Ubiquitin, Its Role in Signaling and Regulatory Control in Part III.
This article is first a continuation of a previous discussion on the role of genomics in the discovery of therapeutic targets, titled Directions for genomics in personalized medicine.

The previous article focused on key drivers of cellular proliferation, stepwise mutational changes coinciding with cancer progression, and potential therapeutic targets for reversal of the process. It also covered the race to delineate the Human Genome, the discovery methods used, and fundamental genomic patterns that are ancient in both animal and plant speciation.

This article reviews the web-like connections between early and later discoveries, as each significant finding has led to novel hypotheses and many more findings over the last 75 years. This largely post-WWII revolution has driven our understanding of biological and medical processes at an exponential pace, owing to successive discoveries of chemical structure, the basic building blocks of DNA and proteins, nucleotide and protein-protein interactions, protein folding, allostery, genomic structure, DNA replication, nuclear polyribosome interaction, and metabolic control. In addition, the emergence of methods for copying, removal, and insertion of genetic material, improvements in structural analysis, and developments in applied mathematics have transformed the research framework.

In the Beginning

During the Second World War came the discoveries of physics and, out of the Manhattan Project, radioactive nuclear probes from E.O. Lawrence's laboratory at the University of California, Berkeley. The use of radioactive isotopes drove the development of biochemistry: the isolation of nucleotides, nucleosides, and enzymes, and the filling in of the details of pathways for photosynthesis, biosynthesis, and catabolism.
Perhaps a good start of the journey is a student of Niels Bohr named Max Delbrück (September 4, 1906 – March 9, 1981), who won the Nobel Prize for discovering that bacteria become resistant to viruses (phages) as a result of genetic mutations. He founded a new discipline, Molecular Biology, lifting experimental work in physiology to systematic experimentation in biology with the rigor of physics, using radiation and virus probes on selected cells. In 1937 he turned to research on the genetics of Drosophila melanogaster at Caltech, and two years later he coauthored a paper, “The growth of bacteriophage”, reporting that the viruses replicate in one step, not exponentially. In 1943, he and Salvador Luria of Indiana University demonstrated that bacterial resistance to virus infection is mediated by random mutation. This research, known as the Luria-Delbrück experiment, notably applied mathematics to make quantitative predictions, and earned them the 1969 Nobel Prize in Physiology or Medicine, shared with Alfred Hershey. His inferences on genes’ susceptibility to mutation were relied on by physicist Erwin Schrödinger in his 1944 book What Is Life?, which conjectured that genes were an “aperiodic crystal” storing a code-script, and which influenced Francis Crick and James D. Watson in their 1953 identification of the molecular structure of cellular DNA as a double helix.
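The quantitative core of the Luria-Delbrück experiment can be sketched in a few lines of Python (an illustrative toy simulation with made-up parameters, not the original analysis): if resistance arises by random mutation during growth, an early "jackpot" mutation floods a culture with resistant descendants, so the variance of resistant counts across parallel cultures greatly exceeds the mean, whereas resistance induced by the virus itself would give Poisson statistics, with variance close to the mean.

```python
import random
import statistics

def fluctuation_test(n_cultures=300, generations=10, mu=0.005, seed=42):
    """Simulate parallel cultures a la Luria-Delbruck: each non-mutant
    cell division may yield a resistant mutant with probability mu,
    and mutants breed true thereafter."""
    random.seed(seed)
    counts = []
    for _ in range(n_cultures):
        cells, mutants = 1, 0
        for _ in range(generations):
            # each non-mutant cell divides; a division may produce one mutant
            new = sum(1 for _ in range(cells - mutants) if random.random() < mu)
            cells *= 2
            mutants = 2 * mutants + new
        counts.append(mutants)
    return counts

counts = fluctuation_test()
mean, var = statistics.mean(counts), statistics.pvariance(counts)
# random mutation predicts var >> mean; induced resistance predicts var close to mean
print(f"mean={mean:.1f}, variance={var:.1f}, Fano factor={var / mean:.1f}")
```

Running the sketch gives a Fano factor (variance over mean) well above 1, the signature Luria and Delbrück used to rule out induced resistance.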

Watson-Crick Double Helix Model

A new understanding of heredity and hereditary disease was possible once it was determined that DNA consists of two chains of alternating phosphate and sugar groups twisted around each other in a double helix, and that the two chains are held together by hydrogen bonds between pairs of organic bases—adenine (A) with thymine (T), and guanine (G) with cytosine (C). Modern biotechnology also has its basis in the structural knowledge of DNA—in this case the scientist’s ability to modify the DNA of host cells that will then produce a desired product, for example, insulin.
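The pairing rules mean that either strand fully determines the other; a minimal sketch (hypothetical sequence, illustration only):

```python
# Watson-Crick pairing: A with T, G with C
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def partner_strand(seq):
    """Return the base-paired partner strand, read 5'->3'
    (i.e., the reverse complement of the input)."""
    return "".join(PAIRS[base] for base in reversed(seq))

duplex_top = "ATTGCACGT"
duplex_bottom = partner_strand(duplex_top)
# taking the duplex as a whole, A pairs with T and G with C,
# so the counts come out equal (Chargaff's ratios)
whole = duplex_top + duplex_bottom
assert whole.count("A") == whole.count("T")
assert whole.count("G") == whole.count("C")
```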
The background for the work of the four scientists was formed by several scientific breakthroughs:

  1. the progress made by X-ray crystallographers in studying organic macromolecules;
  2. the growing evidence supplied by geneticists that it was DNA, not protein, in chromosomes that was responsible for heredity;
  3. Erwin Chargaff’s experimental finding that there are equal numbers of A and T bases and of G and C bases in DNA;
  4. and Linus Pauling’s discovery that the molecules of some proteins have helical shapes.

In 1962 James Watson (b. 1928), Francis Crick (1916–2004), and Maurice Wilkins (1916–2004) jointly received the Nobel Prize in Physiology or Medicine for their 1953 determination of the structure of deoxyribonucleic acid (DNA), performed with knowledge of Chargaff’s ratios of the bases in DNA and with access to the X-ray crystallography of Maurice Wilkins and Rosalind Franklin at King’s College London. Because the Nobel Prize can be awarded only to the living, Wilkins’s colleague Rosalind Franklin (1920–1958), who had died of cancer at the age of 37, could not be honored.
Of the four DNA researchers, only Rosalind Franklin had a degree in chemistry. Franklin completed her degree in 1941, in the middle of World War II, and undertook graduate work at Cambridge with Ronald Norrish, a future Nobel Prize winner. Returning to Cambridge after a year of war service, she presented her work and received the PhD in physical chemistry. Franklin then learned X-ray crystallography in Paris and rapidly became a respected authority in the field. On her return to England in 1951, her charge was to upgrade the X-ray crystallographic laboratory at King’s College London for work with DNA.

Rosalind Franklin, crystallographer

Cold Spring Harbor Laboratory

I digress to the beginnings of the Cold Spring Harbor Laboratory. A significant part of the Laboratory’s life revolved around education, with its three-week-long Phage Course, taught first in 1945 by Max Delbrück, the German-born theoretical-physicist-turned-biologist. James D. Watson first came to Cold Spring Harbor Laboratory with his thesis advisor, Salvador Luria, in the summer of 1948. Over its more than 25-year history, the Phage Course was the training ground for many notable scientists. The Laboratory’s annual scientific Symposium has provided a unique, highly interactive education in the exciting field of “molecular” biology; the 1953 Symposium featured Watson, coming from England to give the first public presentation of the DNA double helix. When Watson became the Laboratory’s Director in 1968, he was determined to make it an important center for advancing molecular biology, and he focused his energy on bringing large donations to the enterprise; he later became its Chancellor. The Laboratory became a magnet for future discovery, a contribution of as great an importance as his Nobel Prize discovery.

Biochemistry and Molecular Probes Come into View

Moreover, at the same time, the experience of Nathan O. Kaplan and Martin Kamen at Berkeley working with radioactive probes was the beginning of the Lawrence Berkeley Laboratory’s role in metabolic studies, as reported in the previous paper. A collaboration between Sidney Colowick, N.O. Kaplan, and Elizabeth Neufeld at the McCollum-Pratt Institute led to the discovery of the transferase reaction between the two main pyridine nucleotides. Neufeld received a PhD a few years later from the University of California, Berkeley, under William Zev Hassid, for research on nucleotides and complex carbohydrates, and did postdoctoral studies on non-protein sulfhydryl compounds in mitosis. Her later work at the NIAMDD addressed the mucopolysaccharidoses. The lysosomal storage diseases opened a new chapter on human genetic diseases when she found that the defects in Hurler and Hunter syndromes were due to decreased degradation of the mucopolysaccharides. When an assay became available for α-L-iduronidase in 1972, Neufeld was able to show that the corrective factor for Hurler syndrome that accelerates degradation of stored sulfated mucopolysaccharides was α-L-iduronidase.


The Hurler Corrective Factor. Purification and Some Properties (Barton, R. W., and Neufeld, E. F. (1971) J. Biol. Chem. 246, 7773–7779)
The Sanfilippo A Corrective Factor. Purification and Mode of Action (Kresse, H., and Neufeld, E. F. (1972) J. Biol. Chem. 247, 2164–2170)

I mention this for two reasons:
[1] We see a huge impetus for nucleic acid and nucleotide research growing in the 1950s, with the post-WWII emergence of work on biological structure.
[2] At the same time, the importance of enzymes in cellular metabolic processes runs parallel to that of the genetic code.

In 1959 Arthur Kornberg received the Nobel Prize in Physiology or Medicine, shared with Dr. Severo Ochoa of New York University, for his discovery of “the mechanisms in the biological synthesis of deoxyribonucleic acid” (DNA polymerase). Over the next 20 years the Stanford University Department of Biochemistry became a top-rated graduate program in biochemistry. Today, the Pfeffer Lab is distinguished for research into how human cells put receptors in the right place through Rab GTPases, which regulate all aspects of receptor trafficking. Steve Elledge (1984-1989) at Harvard University is one of its graduates from the 1980s.

Transcription – RNA and the ribosome

In 2006, Roger Kornberg was awarded the Nobel Prize in Chemistry for identifying the role of RNA polymerase II and other proteins in transcribing DNA. He says that the process is something akin to a machine. “It has moving parts which function in synchrony, in appropriate sequence and in synchrony with one another”. The Kornbergs were the tenth family with closely related Nobel laureates. The 2009 Nobel Prize in Chemistry was awarded to Venki Ramakrishnan, Tom Steitz, and Ada Yonath for crystallographic studies of the ribosome. The atomic resolution structures of the ribosomal subunits provide an extraordinary context for understanding one of the most fundamental aspects of cellular function: protein synthesis. Research on protein synthesis began with studies of microsomes, and three papers on the atomic resolution structures of the 50S and 30S ribosomal subunits were published in 2000. Perhaps the most remarkable and inexplicable feature of ribosome structure is that two-thirds of the mass is composed of large RNA molecules, the 5S, 16S, and 23S ribosomal RNAs, and the remaining third is distributed among ~50 relatively small and innocuous proteins. The first step on the road to solving the ribosome structure was determining the primary structure of the 16S and 23S RNAs in Harry Noller’s laboratory. The sequences were rapidly followed by secondary structure models for the folding of the two ribosomal RNAs, in collaboration with Carl Woese, bringing the ribosome structure into two dimensions. The RNA secondary structures are characterized by an elaborate series of helices and loops of unknown structure, but other than the insights offered by the structure of transfer RNA (tRNA), there was no way to think about folding these structures into three dimensions. The first three-dimensional images of the ribosome emerged from Jim Lake’s reconstructions from electron microscopy (EM) (Lake, 1976).

Ada Yonath reported the first crystals of the 50S ribosomal subunit in 1980, a crucial step that would require almost 20 years to bring to fruition (Yonath et al., 1980). Yonath’s group introduced the innovative use of ribosomes from extremophilic organisms. At the same time, Peter Moore and Don Engelman applied neutron scattering techniques to determine the relative positions of ribosomal proteins in the 30S ribosomal subunit. Elegant chemical footprinting studies from the Noller laboratory provided a basis for intertwining the RNA among the ribosomal proteins, but there was still insufficient information to produce a high-resolution structure; Venki Ramakrishnan, in Peter Moore’s laboratory, supplied it with deuterated ribosome reconstitutions. Then the Yale group was ramping up its work on the H. marismortui crystals of the 50S subunit. Peter Moore had recruited long-time colleague Tom Steitz to work on this problem, and Steitz was about to complete the final event in the pentathlon of Crick’s dogma, having solved critical structures of DNA polymerases, the glutaminyl-tRNA synthetase complex with its tRNA, HIV reverse transcriptase, and T7 RNA polymerase. In 1999 Steitz, Ramakrishnan, and Yonath all presented electron density maps of subunits at approximately 5 Å resolution, and the Noller group presented 10 Å electron density maps of the Thermus 70S ribosome. Peter Moore aptly paraphrased Churchill, telling attendees that this was not the end, but the end of the beginning. Almost every nucleotide in the RNA is involved in multiple stabilizing interactions that form the monolithic tertiary structure at the heart of the ribosome.
Williamson J. The ribosome at atomic resolution. Cell 2009; 139:1041-1043.

This opened the door to new therapies. For example, a 2010 report found that numerous human genes display dual coding within alternatively spliced regions, which gives rise to distinct protein products that include segments translated in more than one reading frame. To resolve the ensuing protein structural puzzle, the authors identified human genes with alternative splice variants comprising a dual coding region at least 75 nucleotides in length and analyzed the structural status of the protein segments they encode. Inspection of their amino acid composition, together with predictions by the IUPred and PONDR® VSL2 algorithms, suggested a high propensity for structural disorder in dual-coding regions.
Kovacs E, Tompa P, Liliom K, and Kalmar L. Dual coding in alternative reading frames correlates with intrinsic protein disorder. PNAS 2010.
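Dual coding simply means the same nucleotides are read in shifted frames. A minimal sketch (hypothetical sequence, chosen only for illustration) of how a one-base shift regroups the bases into entirely different codons:

```python
def codons(seq, frame):
    """Split a nucleotide sequence into triplet codons in reading
    frame 0, 1, or 2, discarding any incomplete codon at the end."""
    s = seq[frame:]
    usable = len(s) - len(s) % 3
    return [s[i:i + 3] for i in range(0, usable, 3)]

seq = "ATGGCCAAAGAG"
frame0 = codons(seq, 0)  # ['ATG', 'GCC', 'AAA', 'GAG']
frame1 = codons(seq, 1)  # ['TGG', 'CCA', 'AAG'] -- same bases, new codons
# here no codon is shared between the two overlapping frames
assert set(frame0).isdisjoint(frame1)
```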

In 2012, it was shown that drug-bound ribosomes can synthesize a distinct subset of cellular polypeptides. The structure of a protein defines its ability to thread through the antibiotic-obstructed tunnel. Synthesis of certain polypeptides that initially bypass translational arrest can be stopped at later stages of elongation while translation of some proteins goes to completion. (Kannan K, Vasquez-Laslop N, and Mankin AS. Selective Protein Synthesis by Ribosomes with a Drug-Obstructed Exit Tunnel. Cell 2012; 151; 508-520.)

Mobility of genetic elements

Barbara McClintock received the Nobel Prize for Medicine for the discovery of the mobility of genetic elements, work that had been done decades earlier. When transposons were demonstrated in bacteria, yeast, and other organisms, Barbara rose to a stratospheric level in the general esteem of the scientific world, but she was uncomfortable about the honors. It was sufficient to have her work understood and acknowledged. Prof. Howard Green said of her, “There are scientists whose discoveries greatly transcend their personalities and their humanity. But those in the future who will know of Barbara only her discoveries will know only her shadow”.
“In Memoriam – Barbara McClintock”. 5 Feb 2013

She introduced her Nobel Lecture in 1983 with the following observation: “An experiment conducted in the mid-nineteen forties prepared me to expect unusual responses of a genome to challenges for which the genome is unprepared to meet in an orderly, programmed manner. In most known instances of this kind, the types of response were not predictable in advance of initial observations of them. It was necessary to subject the genome repeatedly to the same challenge in order to observe and appreciate the nature of the changes it induces…a highly programmed sequence of events within the cell that serves to cushion the effects of the shock. Some sensing mechanism must be present in these instances to alert the cell to imminent danger, and to set in motion the orderly sequence of events that will mitigate this danger”. She goes on to consider “early studies that revealed programmed responses to threats that are initiated within the genome itself, as well as others similarly initiated, that lead to new and irreversible genomic modifications. These latter responses, now known to occur in many organisms, are significant for appreciating how a genome may reorganize itself when faced with a difficulty for which it is unprepared”.

An experiment with Zea conducted in the summer of 1944 alerted her to the mobility of specific components of genomes; it involved the entrance of a newly ruptured end of a chromosome into a telophase nucleus. The experiment commenced with the growing of approximately 450 plants in the summer of 1944, each of which had started its development from a zygote that had received from each parent a chromosome with a newly ruptured end of one of its arms. The design of the experiment required that each plant be self-pollinated, in order to isolate from the self-pollinated progeny the new mutants that were expected to appear, and to confine them to locations within the ruptured arm of a chromosome. Each mutant was expected to reveal the phenotype produced by a minute homozygous deficiency. Their modes of origin could be projected from the known behavior of broken ends of chromosomes in successive mitoses. Forty kernels from each self-pollinated ear were sown in a seedling bench in the greenhouse during the winter of 1944-45.

Some seedling mutants of the type expected were overshadowed by segregants exhibiting bizarre phenotypes. These were variegated in the type and degree of expression of a gene. Those variegated expressions given by genes associated with chlorophyll development were startlingly conspicuous. Within any one progeny, chlorophyll intensities and their pattern of distribution in the seedling leaves were alike. Between progenies, however, both the type and the pattern differed widely.

The effect of X-rays on chromosomes

Initial studies of broken ends of chromosomes began in the summer of 1931. By then, a means of studying the beads-on-a-string hypothesis was provided by newly developed methods of examining the ten chromosomes of the maize complement in microsporocytes at meiosis. The ten bivalent chromosomes are elongated in comparison to their metaphase lengths. Each chromosome

  • is identifiable by its relative length,
  • by the location of its centromere, which is readily observed at the pachytene stage, and
  • by the individuality of the chromomeres strung along the length of each chromosome.

At that time maize provided the best material for locating known genes along a chromosome arm, and also for precisely determining the break points in chromosomes that had undergone various types of rearrangement, such as translocations, inversions, etc.
The recessive phenotypes in the examined plants arose from loss of a segment of a chromosome that carried the wild-type allele, and X-rays were responsible for inducing these deficiencies. A conclusion of basic significance could be drawn from these observations:

  1. broken ends of chromosomes will fuse, 2-by-2, and
  2. any broken end will fuse with any other broken end.

This principle has been amply proved in a series of experiments conducted over the years. In all such instances the break must sever both strands of the DNA double helix. This is a “double-strand break” in modern terminology. That two such broken ends entering a telophase nucleus will find each other and fuse, regardless of the initial distance that separates them, soon became apparent.

During the summer of 1931 she had seen plants in the maize field that showed variegation patterns resembling the one described for Nicotiana.  Dr. McClintock was interested in selecting the variegated plants to determine the presence of a ring chromosome in each, and in the summer of 1932, with Dr. Stadler’s generous cooperation from Missouri, she had the opportunity to examine such plants. Each plant had a ring chromosome, but it was the behavior of this ring that proved to be significant. It revealed several basic phenomena. The following was noted:

In the majority of mitoses

  • replication of the ring chromosome produced two chromatids completely free from each other, which could separate without difficulty in the following anaphase.
  • sister strand exchanges do occur between replicated or replicating chromatids
  • the frequency of such events increases with increase in the size of the ring.
  • these exchanges produce a double-size ring with two centromeres.
  • Mechanical rupture occurs in each of the two chromatid bridges formed at anaphase by passage of the two centromeres on the double-size ring to opposite poles of the mitotic spindle.
  • The location of a break can be at any one position along any one bridge.
  • The broken ends entering a telophase nucleus then fuse.
  • The size and content of each newly constructed ring depend on the position of the rupture that had occurred in each bridge.
  1. The conclusion was that cells sense the presence in their nuclei of ruptured ends of chromosomes
  2. then activate a mechanism that will bring together and then unite these ends
  3. this will occur regardless of the initial distance in a telophase nucleus that separated the ruptured ends.
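The chromatid-type breakage-fusion-bridge cycle these observations define can be caricatured in a short simulation (a deliberately crude model with arbitrary units, not McClintock's data): the broken arm replicates, the broken sister ends fuse into a bridge of twice the length, anaphase ruptures the bridge at a random point, and each daughter begins the next mitosis with a freshly broken arm of unpredictable length.

```python
import random

def bfb_cycle(arm_length, n_mitoses, seed=7):
    """Follow one daughter lineage through chromatid-type
    breakage-fusion-bridge cycles: replicate, fuse the broken sister
    ends into a bridge of twice the length, break the bridge at a
    random point, and hand one piece to the daughter we track."""
    random.seed(seed)
    history = [arm_length]
    for _ in range(n_mitoses):
        bridge = 2 * arm_length              # fused sister chromatids
        break_at = random.randint(1, bridge - 1)  # rupture anywhere on bridge
        arm_length = break_at                # this daughter's new broken arm
        history.append(arm_length)
    return history

lengths = bfb_cycle(arm_length=100, n_mitoses=8)
# every mitosis ends with a broken arm, so the cycle perpetuates itself
assert len(lengths) == 9 and all(l > 0 for l in lengths)
```

The point of the sketch is only that each rupture position is unconstrained, so duplications and deficiencies of varying size accumulate from cycle to cycle.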

The ability of a cell to

  • sense these broken ends,
  • to direct them toward each other, and
  • then to unite them so that the union of the two DNA strands is correctly oriented,
  • is a particularly revealing example of the sensitivity of cells to all that is going on within them.

Evidence from these experiments gave unequivocal support for the conclusion that broken ends will find each other and fuse. The challenge is met by a programmed response. This may be necessary, as

  1. both accidental breaks and
  2. programmed breaks may be frequent.
  3. If not repaired, such breaks could lead to genomic deficiencies having serious consequences.

A cell capable of repairing a ruptured end of a chromosome must sense the presence of this end in its nucleus. This sensing

  • activates a mechanism that is required for replacing the ruptured end with a functional telomere.
That such a mechanism must exist was revealed by a mutant that arose in the stocks; this mutant would not allow the repair mechanism to operate in the cells of the plant.

Entrance of a newly ruptured end of a chromosome into the zygote is followed by the chromatid type of breakage-fusion-bridge cycle throughout mitoses in the developing plant.
This suggested that the repair mechanism in the maize strains is repressed in cells producing

  • the male and female gametophytes and
  • also in the endosperm,
  • but is activated in the embryo.

The extent of trauma perceived by cells

  • whose nuclei receive a single newly ruptured end of a chromosome that the cell cannot repair,
  • and the speed with which this trauma is registered, was not appreciated until the winter of 1944-45.

By 1947 it was learned that the bizarre variegated phenotypes that segregated in many of the self-pollinated progenies grown on the seedling bench in the fall and winter of 1944-45 were due to the action of transposable elements. It seemed clear that

  • these elements must have been present in the genome,
  • and in a silent state previous to an event that activated one or another of them.

She concluded that some traumatic event was responsible for these activations. The unique event in the history of these plants relates to their origin. Both parents of the plants grown in 1944 had contributed a chromosome with a newly ruptured end to the zygote that gave rise to each of these plants.
Detection of silent elements is now made possible with the aid of DNA cloning methods. Silent Ac (Activator) elements, as well as modified derivatives of them, have already been detected in several strains of maize. When other transposable elements are cloned it will be possible to compare their structural and numerical differences among various strains of maize. In any one strain of maize the number of silent but potentially transposable elements, as well as other repetitious DNAs, may be observed to change, most probably in response to challenges not yet recognized.
Telomeres are especially adapted to replicate free ends of chromosomes. When no telomere is present, attempts to replicate this uncapped end may be responsible for the apparent “fusions” of the replicated chromatids at the position of the previous break as well as for perpetuating the chromatid type of breakage-fusion-bridge cycle in successive mitoses.
In conclusion, a genome may react to conditions for which it is unprepared, but to which it responds in a totally unexpected manner. Among these is

  • the extraordinary response of the maize genome to entrance of a single ruptured end of a chromosome into a telophase nucleus.
  • It was this event that was responsible for activations of potentially transposable elements that are carried in a silent state in the maize genome.
  • The mobility of these activated elements allows them to enter different gene loci and to take over control of action of the gene wherever one may enter.

Because the broken end of a chromosome entering a telophase nucleus can initiate activations of a number of different potentially transposable elements,

  • the modifications these elements induce in the genome may be explored readily.

In addition to

modifying gene action, these elements can

  • restructure the genome at various levels,
  • from small changes involving a few nucleotides,
  • to gross modifications involving large segments of chromosomes, such as
  1. duplications,
  2. deficiencies,
  3. inversions,
  4. and other reorganizations.

In the future attention undoubtedly will be centered on the genome, and with greater appreciation of its significance as a highly sensitive organ of the cell,

  • monitoring genomic activities and correcting common errors,
  • sensing the unusual and unexpected events,
  • and responding to them,
  • often by restructuring the genome.

We know about the elements available for such restructuring. We know nothing, however, about

  • how the cell senses danger and instigates responses to it that often are truly remarkable.


In 2009 the Nobel Prize in Physiology or Medicine was awarded to Elizabeth Blackburn, Carol Greider, and Jack Szostak for the discovery of telomerase. This recognition came less than a decade after the completion of the Human Genome Project, previously discussed. Prof. Blackburn acknowledges a strong influence coming from the work of Barbara McClintock. The discovery is tied to the pond organism Tetrahymena thermophila and to studies of yeast cells. Blackburn was drawn to science as a child after reading the biography of Marie Curie written by Curie’s daughter Ève. She recalls that her Master’s mentor, under whom she studied the metabolism of glutamine in the rat liver, thought that every experiment should have the beauty and simplicity of a Mozart sonata. She did her PhD at the distinguished Laboratory of Molecular Biology at Cambridge, the epicenter of molecular biology, then sequencing regions of bacteriophage phiX174, a single-stranded DNA bacteriophage. Using Fred Sanger’s methods to piece together RNA sequences, she showed the first sequence of a 48-nucleotide fragment to her mathematically gifted Cambridge cousin, who pointed out repeats of DNA sequence patterns! At Yale in 1975 she worked on sequencing the DNA at the terminal regions of the short “minichromosomes” of the ciliated protozoan Tetrahymena thermophila. She continued the research begun at Yale at UCSF, funded by the NIH, on the basis of an intriguing autoradiogram showing telomeric DNA in Tetrahymena.
I describe the work as follows:

  • Prof. Blackburn incorporated 32P-labeled deoxynucleoside residues into the rDNA molecules in DNA repair enzymatic reactions and found that
  • the end regions were selectively labeled by combinations of 32P-radiolabeled nucleoside triphosphates, and by mid-year she had an autoradiogram of the depurination products.
  • The autoradiogram showed sequences of 4 cytosine residues flanked by either an adenosine or a guanosine residue.
  • In 1976 she deduced a sequence consisting of a tandem array of CCCCAA repeats, and subsequently separated the products by denaturing gel electrophoresis; they appeared as “tiger stripes” extending up the gel.
  • The size of each band was 6 bases more than the band below it.
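The 6-base ladder is simple arithmetic on the repeat unit (CCCCAA, the six-nucleotide C4A2 repeat of Tetrahymena; the band count here is arbitrary):

```python
REPEAT = "CCCCAA"  # the C4A2 telomeric repeat of Tetrahymena

def ladder(n_bands):
    """Lengths of fragments carrying 1, 2, ..., n_bands tandem repeats,
    as resolved on a denaturing gel."""
    return [len(REPEAT) * k for k in range(1, n_bands + 1)]

bands = ladder(6)  # [6, 12, 18, 24, 30, 36]
# each band is one repeat unit (6 nucleotides) longer than the one below it
assert all(b - a == len(REPEAT) for a, b in zip(bands, bands[1:]))
```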

The telomere must have a telomerase!

The discovery of telomerase enzyme activity was made by the Prize co-awardee, Carol Greider. They were trying to decipher the structure right at the termini of telomeres of both ciliated protozoans and yeast plasmids. The view that in mammalian telomeres there is a long protruding G-rich strand does not take into account the clear evidence for the short C-strand repeat oligonucleotides that she discovered. These were found for both the Tetrahymena rDNA minichromosome molecules and linear plasmids purified from yeast.
In contrast to nucleosomal regions of chromosomes, special regions of DNA, for example

  • promoters that must bind transcription initiation factors that control transcription, have proteins other than the histones on them.
  • The telomeric repeat tract turned out to be such a non-nucleosomal region.

They  found that by clipping up chromatin using an enzyme that cuts the linker between neighboring nucleosomes,

  • it cut up the bulk of the DNA into nucleosome-sized pieces
  • but left the telomeric DNA tract as a single protected chunk.

The resulting complex of the telomeric DNA tract plus its bound cargo of protective proteins behaved very differently from nucleosomal chromatin, and they concluded that it had no histones or nucleosomes.

Any evidence for a protein on the bulk of the rDNA molecule ends, such as their behavior in gel electrophoresis or the appearance of the rDNA molecules under the electron microscope, was conspicuously lacking. This was reassuring that there was no covalently attached protein at the very ends of this minichromosome. Despite considerable work, she was unable to determine what protein(s) would co-purify with the telomeric repeat tract DNA of Tetrahymena. It was yeast genetics and approaches pursued by others that provided the next great leaps forward in understanding telomeric proteins. Her colleague Carol Greider recognized the need to scale up the telomerase activity preparations, and they used a very large glass column for preparative gel filtration chromatography.

Jack W. Szostak, at the Howard Hughes Medical Institute at Harvard, shared in the 2009 Nobel Prize. He became interested in molecular biology after taking a course on the frontiers of molecular biology, reading about the Meselson-Stahl experiments performed barely a decade earlier, and learning how the genetic code had been unraveled. The fact that one could deduce, from measurements of the radioactivity in fractions from a centrifuge tube, the molecular details of DNA replication, transcription, and translation was astonishing. A highlight of his time at McGill was the open-book, open-discussion final exam in this class, in which the questions required the intense collaboration of groups of students.

At Cornell, in Ithaca, he collaborated with John Stiles, and they came up with a specific idea: to chemically synthesize a DNA oligonucleotide of sufficient length that it would hybridize to a single sequence within the yeast genome, and then to use it as an mRNA- and gene-specific probe. At the time, there was only one short segment of the yeast genome for which the DNA sequence was known,

  • the region coding for the N-terminus of the iso-1 cytochrome c protein,

intensively studied by Fred Sherman.
The Sherman lab, in a tour de force of genetics and protein chemistry, had isolated

  • double-frameshift mutants in which the N-terminal region of the protein was translated from out-of-frame codons.
  • Protein sequencing of the wild type and frame-shifted mutants allowed them to deduce 44 nucleotides of DNA sequence.

If they could prepare a synthetic oligonucleotide complementary to the coding sequence, they could use it to detect the cytochrome c mRNA and gene. At the time, essentially all experiments on mRNA were done on total cellular mRNA. Ray Wu was already well known for determining the sequence of the sticky ends of phage lambda, the first DNA ever to be sequenced, and his lab was deeply involved in the study of enzymes that could be used to manipulate and sequence DNA more effectively, but it would not take on a project from another laboratory. So John went to nearby Rochester to do postdoctoral work with Sherman, and Szostak was able to transfer to Ray Wu’s laboratory. To carry out his work, Ray Wu sent him to Saran Narang’s lab in Ottawa, where he received training under Keiichi Itakura, who synthesized the insulin gene. A few months later, he received several milligrams of their long-sought 15-mer. In collaboration with John Stiles and Fred Sherman, who sent them RNA and DNA samples from appropriate yeast strains, they were able to use the labeled 15-mer as a probe to detect the cyc1 mRNA, and later the gene itself. He notes that one of the delights of the world of science is that it is filled with people of good will who are more than happy to assist a student or colleague by teaching a technique or discussing a problem. He remained in Ray’s lab after completing his PhD; the arrival of Rodney Rothstein from Sherman’s lab in Rochester introduced him to yeast genetics and prepared him for the next decade of work on yeast,
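The logic of such a probe – an oligonucleotide that is the reverse complement of the coding-strand sequence, so that it base-pairs with the mRNA – can be sketched in a few lines of Python. The 15-nt sequence below is hypothetical, for illustration only, and is not the actual cyc1 sequence:

```python
# Sketch of hybridization-probe design: a probe that detects an mRNA
# must be the reverse complement of the coding (sense) sequence.
# The 15-nt coding sequence here is hypothetical, for illustration only.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(dna: str) -> str:
    """Return the reverse complement of a DNA sequence (5'->3')."""
    return "".join(COMPLEMENT[base] for base in reversed(dna))

coding_15mer = "ATGACTGAATTCAAG"          # hypothetical coding-strand 15-mer
probe = reverse_complement(coding_15mer)  # this is what would be synthesized

print(probe)  # CTTGAATTCAGTCAT
```

Labeled with a radioactive tag, such a probe would anneal only to the matching mRNA or gene within a total cellular nucleic acid preparation.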

  • first in recombination studies, and
  • later in telomere studies and other aspects of yeast biology.

His studies of recombination in yeast were enabled by the discovery, in Gerry Fink’s lab at Cornell, of a way to introduce foreign DNA into yeast. These pioneering studies of yeast transformation showed that circular plasmid DNA molecules could on occasion become integrated into yeast chromosomal DNA by homologous recombination.

  • His studies of unequal sister chromatid exchange in the rDNA locus resulted in his first publication in the field of recombination.

The idea that one could increase transformation frequency by cutting the input DNA was pleasingly counterintuitive and led them to continue exploring the phenomenon. He gained an appointment to the Sidney Farber Cancer Institute through the interest of Prof. Ruth Sager, who had gathered together a great group of young investigators. In work spearheaded by his first graduate student, Terry Orr-Weaver, on

  • double-strand breaks in DNA
  • and their repair by recombination (and continuing interaction with Rod Rothstein),
  • they were drawn to the question of what kinds of reactions occur at DNA ends.

It was at a Gordon Conference that he heard an exciting talk by Elizabeth Blackburn on her work on telomeres in Tetrahymena.

  • This led to a collaboration testing the ability of Tetrahymena telomeres to function in yeast.
  • He performed the experiments himself, and experienced the thrill of being the first to know that their wild idea had worked.
  • It was clear from that point on that a door had been opened and that they were going to be able to learn a lot about telomere function from studies in yeast.
  • Within a short time he was able to clone bona fide yeast telomeres, and (in a continuation of the collaboration with Liz Blackburn’s lab)
  • they obtained the critical sequence information that led them to propose the existence of the key enzyme, telomerase.

A fanciful depiction evoking both telomere dynamics and telomere researchers, done by the artist Julie Newdoll in 2008, elicits the idea of a telomere as an ancient Sumerian temple-like hive, tended by a swarm of Sumerian bee-goddesses against a background of clay tablets inscribed with DNA sequencing gel-like bands.
Dr. Blackburn recalls owing much to Barbara McClintock, not only for her scientific findings but also for advice McClintock gave her in a conversation in 1977, during which

  • she had unexpected findings with the rDNA end sequences.
  • Dr. McClintock urged her to trust her intuition about the scientific research results.

This advice was surprising at the time, because she had not yet accepted intuitive thinking as a valid part of being a biology researcher.
MLA style: “Elizabeth H. Blackburn – Biographical”. Nobelprize.org. 5 Feb 2013.


In this Part I of a series of 3, I have described the

  • emergence of Molecular Biology and
  • closely allied work on the mechanism of Cell Replication and
  • the dependence of metabolic processes on proteins and enzymatic conversions through a surge of
  • post-WWII research that gave birth to centers for basic science research in biology and medicine in both the US and England, which was preceded by work in prewar Germany. This is to be followed by further developments related to the Human Genome Project.


Transcription initiation (Photo credit: Wikipedia)

Schematic relationship between biochemistry, genetics, and molecular biology (Photo credit: Wikipedia)

Central dogma of molecular biology (Photo credit: Wikipedia)








Related References on the Open Access Online Scientific Journal

Big Data in Genomic Medicine lhb

BRCA1 a tumour suppressor in breast and ovarian cancer – functions in transcription, ubiquitination and DNA repair S Saha

Computational Genomics Center: New Unification of Computational Technologies at Stanford A Lev-Ari

Personalized medicine gearing up to tackle cancer ritu saxena

Differentiation Therapy – Epigenetics Tackles Solid Tumors SJ Williams

Mechanism involved in Breast Cancer Cell Growth: Function in Early Detection & Treatment A Lev-Ari

The Molecular pathology of Breast Cancer Progression Tilde Barliya

Gastric Cancer: Whole-genome reconstruction and mutational signatures A Lev-Ari

Paradigm Shift in Human Genomics – Predictive Biomarkers and Personalized Medicine – Part 1 A Lev-Ari

LEADERS in Genome Sequencing of Genetic Mutations for Therapeutic Drug Selection in Cancer Personalized Treatment: Part 2 A Lev-Ari

Personalized Medicine: An Institute Profile – Coriell Institute for Medical Research: Part 3 A Lev-Ari

Harnessing Personalized Medicine for Cancer Management, Prospects of Prevention and Cure: Opinions of Cancer Scientific Leaders A Lev-Ari
GSK for Personalized Medicine using Cancer Drugs needs Alacris systems biology model to determine the in silico effect of the inhibitor in its “virtual clinical trial” A Lev-Ari

Recurrent somatic mutations in chromatin-remodeling and ubiquitin ligase complex genes in serous endometrial tumors S Saha

Human Variome Project: encyclopedic catalog of sequence variants indexed to the human genome sequence A Lev-Ari

Prostate Cancer Cells: Histone Deacetylase Inhibitors Induce Epithelial-to-Mesenchymal Transition sjwilliams

Inspiration From Dr. Maureen Cronin’s Achievements in Applying Genomic Sequencing to Cancer Diagnostics A Lev-Ari

The “Cancer establishments” examined by James Watson, co-discoverer of DNA w/Crick, 4/1953 A Lev-Ari

Squeezing Ovarian Cancer Cells to Predict Metastatic Potential: Cell Stiffness as Possible Biomarker pkandala

Hypothesis – following on James Watson lhb

Otto Warburg, A Giant of Modern Cellular Biology lhb

Is the Warburg Effect the cause or the effect of cancer: A 21st Century View? lhb

Predicting Tumor Response, Progression, and Time to Recurrence lhb

Directions for genomics in personalized medicine lhb

How mobile elements in “Junk” DNA promote cancer. Part 1: Transposon-mediated tumorigenesis. SJ Williams

Advances in Separations Technology for the “OMICs” and Clarification of Therapeutic Targets lhb

Mitochondrial Damage and Repair under Oxidative Stress lhb

Mitochondria: More than just the “powerhouse of the cell” Ritu Saxena

Mitochondrial mutation analysis might be “1-step” away Ritu Saxena

RNA interference with cancer expression lhb

What can we expect of tumor therapeutic response? lhb

Expanding the Genetic Alphabet and linking the genome to the metabolome

Breast Cancer, drug resistance, and biopharmaceutical targets lhb

Breast Cancer: Genomic profiling to predict Survival: Combination of Histopathology and Gene Expression Analysis A Lev-Ari

Ubiquinin-Proteosome pathway, autophagy, the mitochondrion, proteolysis and cell apoptosis lhb

Identification of Biomarkers that are Related to the Actin Cytoskeleton lhb

Genomic Analysis: FLUIDIGM Technology in the Life Science and Agricultural Biotechnology A Lev-Ari

Interview with the co-discoverer of the structure of DNA: Watson on The Double Helix and his changing view of Rosalind Franklin A Lev-Ari

Winning Over Cancer Progression: New Oncology Drugs to Suppress Passengers Mutations vs. Driver Mutations A Lev-Ari


Read Full Post »

Reporter: Aviva Lev-Ari, PhD, RN

With IBM Help, Coriell Spins off For-Profit Entity to Store Whole-Genome Sequencing Data

Review of Coriell Institute Profile

Personalized Medicine: An Institute Profile – Coriell Institute for Medical Research: Part 3

UPDATED on 5/16/2013

The Bank Where Doctors Can Stash Your Genome

A new company offers a “gene vault” for doctors who want to add genomics to patient care.

Genomic sequencing might be more common in medicine if doctors had a simple way to send for the test and keep track of the data. That’s the hope of Coriell Life Sciences in Camden, New Jersey, a startup that grew out of a partnership between the Coriell Institute for Medical Research and IBM. The company wants to facilitate the process of ordering, storing, and interpreting whole-genome-sequence data for doctors. The company launched in January and is now working with different health-care providers to set up its service. “The intent is that the doctor would order a test like any other diagnostic test they order today,” says Scott Megill, president of Coriell Life Sciences. The company would facilitate sequencing the patient’s DNA (through existing sequencing companies such as Illumina or Ion Torrent), store it in its so-called gene vault, and act as the middleman between doctors and companies that offer interpretation services. Finally, “we will return the genetic result in the human readable form back to the electronic medical record so the doctor can read it and interpret it for the patient,” says Megill.
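The workflow Megill describes – order a test, sequence the DNA, store it in a vault, broker a consented third-party interpretation, and return a readable report – can be sketched abstractly. All names below (GeneVault, order_interpretation, the “pgx” app) are hypothetical illustrations, not Coriell’s actual software:

```python
# Illustrative sketch of the order -> sequence -> vault -> interpret -> report
# flow described above. All names are hypothetical, not Coriell's actual API.
from dataclasses import dataclass, field

@dataclass
class GeneVaultRecord:
    patient_id: str
    sequence: str                      # whole-genome sequence (placeholder)
    consented_uses: set = field(default_factory=set)

class GeneVault:
    def __init__(self):
        self._records = {}

    def store(self, record: GeneVaultRecord):
        """Store once; the sequence stays available for future physicians."""
        self._records[record.patient_id] = record

    def order_interpretation(self, patient_id: str, app: str, interpret):
        """Broker a third-party interpretation the patient consented to."""
        record = self._records[patient_id]
        if app not in record.consented_uses:
            raise PermissionError(f"no consent for {app}")
        return interpret(record.sequence)   # human-readable result -> EMR

vault = GeneVault()
vault.store(GeneVaultRecord("patient-1", "ACGT...", {"pgx"}))
report = vault.order_interpretation("patient-1", "pgx",
                                    lambda seq: "normal metabolizer")
print(report)  # normal metabolizer
```

The key design point mirrored here is that the vault, not the interpretation company, holds the sequence, and each interpretation is gated by patient consent.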

“You need a robust software infrastructure for storing, analyzing, and presenting information,” says Jon Hirsch, who founded Syapse, a California-based company developing software to analyze biological data sets for diagnosing patients. “Until that gets built, you can generate all the data you want, but it’s not going to have any impact outside the few major centers of genomics medicine,” he says.

The company will use a board of scientific advisors to guide them to the best interpretation programs available. “No one company is in position to interpret the entire genome for its meaning,” says Michael Christman, CEO of the Coriell Institute for Medical Research. “But by having one’s sequence in the gene vault, then the physician will be able to order interpretative engines, analogous to apps for the iPhone,” he says. Doctors could order an app to analyze a patient’s genome for DNA variants linked to poor drug response at one point, and later on, order another for variants linked to heart disease.

The cloud-based workflow could help doctors in different locations take advantage of expert interpretations anywhere, says Christman. “This would allow a doctor who’s at a community clinic in Tulsa, Oklahoma, order an interpretation of breast cancer sequences derived at Sloan Kettering,” he says.

But while the cloud offers many conveniences, it carries some potential risks. “I am a bit concerned if we really start to outsource data to the cloud without any regulation,” says Emiliano De Cristofaro, a cryptography scientist with Xerox’s PARC who is developing a genomic data storage and sharing platform. “We must not forget that the sensitivity of genomic information is quite unprecedented,” he says. “The human genome is not only a unique identifier but also contains things about ethnic heritage, predisposition to certain diseases including mental disorders, and many other traits.” Data leaks happen all the time, says De Cristofaro, and while you can change your password after a security break, “there’s no way to revoke your genome.”

Keeping the genomic data secure is a key component and is the reason the group began a relationship with IBM, says Megill. The data would be stored at the company’s headquarters and would be available only to limited users—doctors and companies that offer diagnostic or other medical interpretation of the genome, he says.

If a patient changes her health-care provider, the data will remain available for her next physician. Storing the data will be free, says Christman.

January 30, 2013

Originally published Jan. 29.

MOUNTAIN VIEW, Calif. – The Coriell Institute for Medical Research in partnership with IBM has launched a for-profit company that will store consumers’ whole-genome sequencing data.

The goal of the spinoff company, called Coriell Life Sciences, “is to address how will a doctor actually use whole-genome sequences in a clinical setting,” CIMR CEO Michael Christman said at a personalized medicine meeting here this week. After doctors order a whole-genome sequence, which would be provided by a sequencing service provider, Coriell Life Sciences will harmonize and store that data in a gene vault for the patient. Physicians then will be able to order certain interpretive analyses from third parties on the sequence based on the patient’s medical needs.

After planning for 18 months, Coriell and IBM launched Coriell Life Sciences a few weeks ago. Describing the company as the “expert custodians” of whole-genome data, Christman explained that patients’ information may remain in the gene vault, regardless of whether they change jobs or healthcare providers. The patient will own their data stored in the vault and will have the ability to consent to which third parties gain access to the information and for what purpose.

Coriell Life Sciences will not charge patients for storing their data. Patients can consent to allow their de-identified data to be used for research, at which point Coriell will add their information to an aggregate research database that will be used for discovery work. Alternatively, patients can remove their sequence from the vault if they choose.

Physicians who have ordered certain interpretive analyses of patients’ genome sequences will receive the results through electronic medical records. If the healthcare provider doesn’t have an EMR in place, they can use a web portal interface through Coriell Life Sciences.

If a physician orders genomic interpretation of a patient’s data related to episodic care, however, the third-party interpretation company will have the right to the genome sequence information for performing that specific analysis. “For most collaborators, their access to patient data will be limited to only the subset of the total sequence required by their specific interpretation,” Scott Megill, CEO of the new firm, told PGx Reporter via email. “If an interpretation company has a legitimate, non-commercial research aim that could benefit from the use of large anonymized data sets, they will have an opportunity to utilize aggregate, well-consented data like any other research organization.”

Likening Coriell Life Sciences to Apple’s App Store, Megill noted that the company’s core GeneExchange product will offer a marketplace in which genome interpretation companies can charge “fair market rates” for their services. In turn for providing the sales channel, marketing, storage, data harmonization, and electronic medical records integration, Coriell Life Sciences will charge a brokerage fee for each transaction, he explained. A spokesperson for the company said that these transaction fees will be “baked into the overall cost to the payor” for each individual test.

“The data is harmonized and brokered such that it can be interpreted by a variety of clinical applications,” Christman said.

“No one company is well positioned to interpret the entire genome,” he added. “In principle what this would do is allow a doctor in Tulsa, Oklahoma, to order the cancer analysis application … that was developed at MD Anderson or Sloan Kettering.”

Coriell Life Sciences is also developing an application that will allow doctors to gauge pharmacogenomic associations in a patient’s sequence. The PGx app will be developed based on data collected by Coriell over the last five years through its Personalized Medicine Collaborative research project.

Launched in December 2007 for the lay public, the Personalized Medicine Collaborative aims to study the impact of genome-informed treatment on medical care by genotyping patients and reporting only clinically actionable genomic information. The study has so far enrolled thousands of participants and has research partnerships with Cooper University Hospital, Virtua Health, Fox Chase Cancer Center, and Helix Health (PGx Reporter 6/17/2009).

Similar to the Personalized Medicine Collaborative, the PGx app will initially enable doctors to gauge whether their patients are at risk for dozens of complex conditions and learn how they metabolize commonly prescribed drugs. “We will be expanding this offering to ultimately include several dozen drug efficacy and dosage recommendations,” Megill said.

The need for securing people’s genome data will become more acute as advanced sequencing technologies become part of mainstream medical care. Coriell Life Sciences was conceived “from a clear market need, identified in our work in the Coriell Personalized Medicine Collaborative research study, to provide the critical missing infrastructure required to enable clinical use of genome-informed medicine,” Megill said. “Doctors today have no easy way to order a genetic test, have the resulting sequence data stored in a trusted place for future use, and receive a ‘human readable’ report that can be used by doctors who haven’t been trained as geneticists.”

Coriell Life Sciences’ business model is based on an assumption that community healthcare providers will likely outsource genome sequencing and the storage of the data. “I don’t think you’re going to see the hospitals buying sequencing machines. It is rocket science and there are rocket scientists who are quite good at it,” Christman said. “So, the doctors will need the ability to collect blood and saliva and the access to FedEx. The sequence then needs to go somewhere.”

Furthermore, Coriell Life Sciences will provide doctors with options on the specific types of available data interpretation services.

“Ultimately, the sequence becomes a commodity supply to the interpretation. Doctors do not need to be educated in the value of an Illumina sequence versus a Complete Genomics sequence to order a specific interpretation,” a company spokesperson said. “Coriell Life Sciences will negotiate the best available supplier for sequence data on their behalf using stringent standards for quality and turnaround time.

“The key principle is making it easy for doctors to order tests and receive results that are ‘human readable’ – without needing to be a geneticist.”

IBM has provided technologies for Coriell Life Sciences and has invested an undisclosed amount in the company. Separate from this effort, Coriell is using IBM’s capabilities and systems to store the 1.5 gigabytes of information per person taking part in the Personalized Medicine Collaborative, which aims to genotype 100,000 people.
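Those two figures imply a substantial but tractable storage requirement – a back-of-the-envelope sketch, ignoring replication, backups, and indexing overhead:

```python
# Back-of-the-envelope storage estimate for the Personalized Medicine
# Collaborative figures quoted above (replication/backup overhead ignored).
gb_per_person = 1.5
participants = 100_000

total_gb = gb_per_person * participants
total_tb = total_gb / 1000            # decimal terabytes

print(f"{total_gb:,.0f} GB = {total_tb:,.0f} TB")  # 150,000 GB = 150 TB
```

Note this covers only the genotype data cited above; whole-genome sequence files with raw reads run considerably larger per person.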

Megill, in line with other industry observers, believes that doctors are much more likely to use personalized treatment strategies if data from genomic testing were integrated into patients’ electronic medical records. In this regard, the partnership with IBM is critical, since IBM technologies are utilized in 75 percent of the world’s electronic medical record systems, he estimated.

Leveraging IBM’s integration and data interchange technologies, Coriell Life Sciences “will build bi-directional data integrations with healthcare systems so that tests can be ordered, phenotypic data can be utilized, and results can be delivered within the context of the patient record,” Megill said.

Coriell Life Sciences’ model looks ahead to a time when having a whole-genome sequence in medical care will be as commonplace as getting an annual physical exam, except that one’s genome needs to be sequenced only once. Several speakers at the conference in Mountain View discussed how the advent of whole-genome sequencing will change patient care and the diagnostics market.

Describing a model very similar to the one being pursued by Coriell Life Sciences, Cliff Reid, CEO of sequencing firm Complete Genomics, discussed how in the future having whole genome sequence testing performed for a patient and then storing the data for future use will reduce the cost of genetic testing dramatically.

“There will be one cost up front [for whole genome sequencing]… and virtually free thereafter,” Reid said. “By storing it in a secure database, the cost of every genetic test after that is pennies and the time to get it is seconds.

“This is a radically different economic and usage profile than what we’re seeing today in the genetics industry,” he added. “This doesn’t fit very well in the current practice.”

Turna Ray is the editor of GenomeWeb’s Pharmacogenomics Reporter. She covers pharmacogenomics, personalized medicine, and companion diagnostics. E-mail Turna Ray or follow her GenomeWeb Twitter account at @PGxReporter.

Read Full Post »

Reporter: Aviva Lev-Ari, PhD, RN

Engineers work to help biologists cope with big data

Tue, 01/08/2013 – 10:15am

Liang Dong is developing an instrument that will allow plant scientists to simultaneously study thousands of plants grown in precisely controlled conditions. Photo: Bob Elbert

Liang Dong held up a clear plastic cube, an inch or so across, just big enough to hold 10 to 20 tiny seeds.

Using sophisticated sensors and software, researchers can precisely control the light, temperature, humidity, and carbon dioxide inside that cube.

Dong—an Iowa State University assistant professor of electrical and computer engineering and of chemical and biological engineering—calls it a “microsystem instrument.” Put hundreds of those cubes together and researchers can simultaneously grow thousands of seeds and seedlings in different conditions and see what happens. How, for example, do the plants react when it is hot and dry? Or carbon dioxide levels change? Or light intensity is adjusted very slightly?

The instrument designed and built by Dong’s research group will keep track of all that by using a robotic arm to run a camera over the cubes and take thousands of images of the growing seeds and seedlings.

Plant scientists will use the images to analyze the plants’ observable characteristics—the leaf color, the root development, the shoot size. All those observations are considered a plant’s phenotype. And while plant scientists understand plant genetics very well, Dong says they don’t have a lot of data about how genetics and environment combine to influence phenotype.

Dong’s instrument will provide researchers with lots of data—too much for scientists to easily sort and analyze. That’s a problem known as big data. And it’s increasingly common in the biological sciences.

“We’re seeing a proliferation of new instruments in the biological sciences,” says Srinivas Aluru, the Ross Martin Mehl and Marylyne Munas Mehl Professor of Computer Engineering at Iowa State. “And the rate of data collection is increasing. So we have to have a solution to analyze all this data.”

Aluru is leading a College of Engineering initiative to build research teams capable of solving big data problems in next-generation DNA sequencing, systems biology, and phenomics. The researchers are developing computing solutions that take advantage of emerging technologies such as cloud computing and high-performance computers. They’re also building partnerships with technology companies such as IBM, Micron, NVIDIA, Illumina Inc., Life Technologies Corp., Monsanto Co., and Roche.

The project is one of the three Dean’s Research Initiatives launched by Jonathan Wickert, former dean of the College of Engineering and currently Iowa State’s senior vice president and provost. The initiatives in high-throughput computational biology, wind energy, and a carbon-negative economy were launched in March 2011 with $500,000 each over three years. That money is to build interdisciplinary, public-private research teams ready to compete for multi-million dollar grants and projects.

Patrick Schnable, Iowa State’s Baker Professor of Agronomy and director of the centers for Plant Genomics and Carbon Capturing Crops, remembers when biologists had no interest in working with computer specialists. That was before they tried to work with billions of data points to, say, accurately predict harvests based on plant genotype, soil type and weather conditions.

“Now we’re getting huge, absolutely huge, data sets,” Schnable says. “There is no way to analyze these data sets without extraordinary computer resources. There’s no way we could do this without the collaboration of engineers.”

To date, the computational biology initiative has attracted $5.5 million for four major research projects. One of the latest grants is a three-year, $2 million award from the BIGDATA program of the National Science Foundation and the National Institutes of Health. The grant will allow Aluru and researchers from Iowa State, Stanford University, Virginia Tech, and the University of Michigan to work together to develop a computing toolbox that helps scientists manage all the data from today’s DNA sequencing instruments.

Aluru says the research initiative helped prepare Iowa State researchers to go after that grant.

“When the BIGDATA call came in, we had the credibility to compete,” he says. “We were already working on leading edge problems and had established relationships with companies.”

The initiative, the grants and the industry partnerships are helping Iowa State faculty and students move to the front of the developing field.

“One computing company wanted to set up a life science research group and it came here for advice,” Aluru says. “Iowa State is known as a big data leader in the biosciences.”

Source: Iowa State University



Read Full Post »

Reporter: Aviva Lev-Ari, PhD, RN

October 24, 2012

Sequoia Supercomputer Pumps Up Heart Research

Tiffany Trader

The Cardioid code developed by a team of Livermore and IBM scientists divides the heart into a large number of manageable pieces, or subdomains. The development team used two approaches, called Voronoi (left) and grid (right), to break the enormous computing challenge into much smaller individual tasks. Source: LLNL

The world’s fastest computer has created the fastest computer simulation of the human heart.

The Lawrence Livermore National Laboratory‘s Sequoia supercomputer, a TOP500 chart topper, was built to handle top secret nuclear weapons simulations, but before it goes behind the classified curtain, it is pumping out sophisticated cardiac simulations.

Earlier this month, Sequoia, which currently ranks number one on the TOP500 list of the world’s fastest computer systems, received a 2012 Breakthrough Award from Popular Mechanics magazine. Now the magazine is reporting on Sequoia’s ground-breaking heart simulations.

Clocking in at 16.32 sustained petaflops (20 PF peak), Sequoia is taking modeling and simulation to new heights, enabling researchers to capture greater complexity in a shorter time frame. With this advanced capability, LLNL scientists have been able to simulate the human heart down to the cellular level and use the resulting model to predict how the organ will respond to different drug compounds.

Principal investigator Dave Richards couldn’t resist a little showboating: “Other labs are working on similar models for many body systems, including the heart,” he told Popular Mechanics. “But Lawrence Livermore’s model has one major advantage: It runs on Sequoia, the most powerful supercomputer in the world and a recent PM Breakthrough Award winner.”

The simulations were made possible by an advanced modeling program, called Cardioid, that was developed by a team of scientists from LLNL and the IBM T. J. Watson Research Center. The highly scalable code simulates the electrophysiology of the heart. It works by breaking down the heart into units; the smaller the unit, the more accurate the model.
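The decomposition strategy described here – carve the tissue volume into many small subdomains so that each processor simulates one piece – can be sketched in miniature for the uniform-grid approach. This is a toy illustration with made-up dimensions; the actual Cardioid code is far more sophisticated:

```python
# Toy sketch of domain decomposition, the strategy described above:
# assign each cell of a 3-D grid to a subdomain so subdomains can be
# simulated in parallel. Uniform-block ("grid") partitioning shown;
# the real Cardioid code also supports a Voronoi-based approach.

def grid_partition(nx, ny, nz, px, py, pz):
    """Map each grid cell (i, j, k) to a subdomain id, px*py*pz blocks."""
    owner = {}
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                # integer block coordinates of this cell
                bi, bj, bk = i * px // nx, j * py // ny, k * pz // nz
                owner[(i, j, k)] = (bi * py + bj) * pz + bk
    return owner

owner = grid_partition(4, 4, 4, 2, 2, 2)   # 64 cells -> 8 subdomains
print(len(set(owner.values())))            # 8
```

Each subdomain then needs to exchange only the electrical state of its boundary cells with its neighbors, which is what lets the workload scale across Sequoia’s 1.6 million cores.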

Until now, the best modeling programs could achieve a resolution of 0.2 mm in each direction; Cardioid can get down to 0.1 mm. Where previously researchers could run simulations for tens of heartbeats, Cardioid executing on Sequoia captures thousands of heartbeats.

Scientists are seeing 300-fold speedups. It used to take 45 minutes to simulate just one beat, but now researchers can simulate an hour of heart activity – several thousand heartbeats – in seven hours.
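These figures are roughly self-consistent, as a quick check shows. The exact beat count simulated in those seven hours is not given, so the 3,000-beat figure below is an assumption standing in for “several thousand heartbeats”:

```python
# Consistency check for the speedup figures quoted above.
# Assumption: "several thousand heartbeats" ~ 3,000 beats in 7 hours.
old_minutes_per_beat = 45
beats = 3_000                  # assumed; the article says "several thousand"
new_hours_total = 7

old_hours_total = old_minutes_per_beat * beats / 60   # 2,250 hours
speedup = old_hours_total / new_hours_total

print(round(speedup))  # 321, in line with the reported ~300-fold speedup
```

At 45 minutes per beat, the same run would previously have taken months of wall-clock time, which is why drug-response studies over hours of simulated heart activity were impractical.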

With the less sophisticated codes, it was impossible to model the heart’s response to a drug or perform an electrocardiogram trace for a particular heart disorder. That kind of testing requires longer run times, which just wasn’t possible before Cardioid.

The model could potentially test a range of drugs and devices like pacemakers to examine their effect on the heart, paving the way for safer and more effective human testing. But it is especially suited to studying arrhythmia, a disorder in which the heart does not pump blood efficiently. Arrhythmias can lead to congestive heart failure, an inability of the heart to supply sufficient blood flow to meet the needs of the body.

There are various types of medications that disrupt cardiac rhythms. Even those designed to prevent arrhythmias can be harmful to some patients, and researchers do not yet fully understand exactly what causes these negative side effects. Cardioid will enable LLNL scientists to examine heart function as an anti-arrhythmia drug enters the bloodstream. They’ll be able to identify when drug levels are highest and when they drop off.

“Observing the full range of effects produced by a particular drug takes many hours,” noted computational scientist Art Mirin of LLNL. “With Cardioid, heart simulations over this timeframe are now possible for the first time.”

The Livermore–IBM team is also working on a mechanical model that simulates the contraction of the heart and pumping of blood. The electrical and mechanical simulations will be allowed to interact with each other, adding more realism to the heart model.

It’s not entirely clear why a national defense lab took on this heart simulation work. Fred Streitz, director of the Institute for Scientific Computing Research at LLNL, would say only that “there are legitimate national security implications for understanding how drugs affect human organs,” adding that the project stretched the limits of supercomputing in a manner that is relatable to the American people.

The cardiac modeling work was performed during the system’s “shakedown period” – the set-up and testing phase – and the team had to hurry to finish in the allotted time span. Once Sequoia becomes classified, it’s unclear whether it will still be available to run Cardioid and other unclassified programs, although access will certainly be more difficult since the machine’s principal mission is running nuclear weapons codes.

Sequoia is an integral part of the NNSA’s Advanced Simulation and Computing (ASC) program, which is run by partner organizations LLNL, Los Alamos National Laboratory and Sandia National Laboratories. With 96 racks, 98,304 compute nodes, 1.6 million cores, and 1.6 petabytes of memory, Sequoia will help the NNSA fulfill its mission to “maintain and enhance the safety, security, reliability and performance of the U.S. nuclear weapons stockpile without nuclear testing.”

The Cardioid simulation has been named a finalist for the 2012 Gordon Bell Prize, awarded each year to recognize supercomputing’s crowning achievements. Research partners Streitz, Richards, and Mirin will present their results at the Supercomputing Conference in Salt Lake City, Utah, on November 13.


Human heart simulated on world’s fastest supercomputer

October 29, 2012

Before the U.S. government cloaks the operations of the Sequoia supercomputer for classified nuclear arms analyses, scientists have tapped the world’s fastest computer for an unprecedented simulation of the human heart. With the aid of the supercomputer, according to an HPC Wire report, researchers have been able to model the heart down to the cellular level and simulate how the organ would react to certain drugs.

The supercomputer has been performing simulations of the heart with a modeling program, Cardioid, from researchers at Lawrence Livermore National Laboratory (LLNL) and IBM’s T.J. Watson Research Center, HPC Wire reported. The computing power and capabilities of the modeling program have advanced heart modeling from simulations of a handful of heartbeats to thousands. It enables researchers to get closer to the real thing as they boost their capacity to capture activities in the heart at finer levels of detail and complexity.
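The “cellular level” modeling described here rests on solving the equations of cardiac electrical excitation in every simulated cell. As a purely illustrative sketch – Cardioid itself uses far more detailed ion-channel models across millions of cells, and none of the values below come from the article – the classic FitzHugh–Nagumo model captures a single excitable cell’s voltage dynamics in two coupled equations:

```python
# FitzHugh-Nagumo model: a two-variable caricature of one excitable cardiac
# cell, integrated with forward Euler. Illustrative only; parameter values
# (a, b, tau, stim) are textbook defaults, not Cardioid's.

def fitzhugh_nagumo(steps=50000, dt=0.01, a=0.7, b=0.8, tau=12.5, stim=0.5):
    """Integrate one excitable cell; return the membrane-potential trace."""
    v, w = -1.0, -0.5  # membrane potential and slow recovery variable
    trace = []
    for _ in range(steps):
        dv = v - v**3 / 3.0 - w + stim  # fast excitation dynamics
        dw = (v + a - b * w) / tau      # slow recovery dynamics
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return trace

trace = fitzhugh_nagumo()
# Under sustained stimulus the cell fires repeatedly: the voltage trace
# swings between depolarized (positive) and resting (negative) values.
print(max(trace) > 1.0 and min(trace) < -1.0)
```

Scaling a far richer version of this computation to millions of interacting cells, for thousands of simulated heartbeats, is what required a machine of Sequoia’s size.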

Drugmakers have spent billions of dollars on studies to improve their understanding of the heart, and computer simulations offer a way for researchers to gauge the potential impacts of a compound before testing it in living subjects. Researchers believe that Cardioid could help them understand the activity and potential side effects of drugs for arrhythmia, a condition in which the heart pumps blood inefficiently and which can trigger congestive heart failure and other medical problems.

“Observing the full range of effects produced by a particular drug takes many hours,” Art Mirin, an LLNL computational scientist, noted, as quoted by HPC Wire. “With Cardioid, heart simulations over this timeframe are now possible for the first time.”


