Reporter: Aviva Lev-Ari, PhD, RN
Genomics and the State of Science Clarity
By Tracy Vence
Projects supported by the US National Institutes of Health will have produced 68,000 total human genomes — around 18,000 of those whole human genomes — through the end of this year, National Human Genome Research Institute estimates indicate. And in his book, The Creative Destruction of Medicine, the Scripps Research Institute’s Eric Topol projects that 1 million human genomes will have been sequenced by 2013 and 5 million by 2014.
“There’s a lot of inventory out there, and these things are being generated at a fiendish rate,” says Daniel MacArthur, a group leader in Massachusetts General Hospital’s Analytic and Translational Genetics Unit. “From a capacity perspective … millions of genomes are not that far off. If you look at the rate that we’re scaling, we can certainly achieve that.”
The prospect of so many genomes has brought clinical interpretation into focus — and for good reason. Save for regulatory hurdles, it seems to be the single greatest barrier to the broad implementation of genomic medicine.
But there is an important distinction to be made between the interpretation of an apparently healthy person’s genome and that of an individual who is already affected by a disease, whether known or unknown.
In an April Science Translational Medicine paper, Johns Hopkins University School of Medicine’s Nicholas Roberts and his colleagues reported that personal genome sequences for healthy monozygotic twin pairs are not predictive of significant risk for 24 different diseases in those individuals. The researchers then concluded that whole-genome sequencing was not likely to be clinically useful for that purpose. (See sidebar, story end.)
“The Roberts paper was really about the value of omniscient interpretation of whole-genome sequences in asymptomatic individuals and what were the likely theoretical limits,” says Isaac Kohane, chair of the informatics program at Children’s Hospital Boston. “That was certainly an important study, and it was important to establish what those limits of knowledge are in asymptomatic populations. But, in fact, the major and most important use cases [for whole-genome sequencing] may be in cases of disease.”
Still, targeted clinical interpretations are not cut and dried. “Even in cases of disease, it’s not clear that we know now how to look across multiple genes and figure out which are relevant, which are not,” Kohane adds.
While substantial progress has been made — in particular, for genetic diseases, including certain cancers — ambiguities have clouded even the most targeted interpretation efforts to date. Technological challenges, meager sample sizes, and a need for increased, fail-safe automation all have hampered researchers’ attempts to reliably interpret the clinical significance of genomic variation. But perhaps the greatest problem, experts say, is a lack of community-wide standards for the task.
Genes to genomes
When scientists analyzed James Watson’s genome — his was the first personal sequence, completed in 2007 and published in Nature in 2008 — they were surprised to find that he harbored two putative homozygous SNPs matching Human Gene Mutation Database entries that, were they truly homozygous, would have produced severe clinical phenotypes.
But Watson was not sick.
As researchers search more and more genomes, such inconsistencies are increasingly common.
“My take on what has happened is that the people who were doing the interpretation of the raw sequence largely were coming from a SNPs world, where they were thinking about sequence variants that have been observed before, or that have an appreciable frequency, and weren’t thinking very much about the singleton sequence variants,” says Sean Tavtigian, associate professor of oncology at the University of Utah.
“There is a qualitative difference between looking at whole-genome sequences and looking at single genes or, even more typically, small numbers of variants that have been previously implicated in a disease,” Boston’s Kohane adds.
“Previously, because of the cost and time limitations around sequencing and genotyping, we only looked at variants in genes for which we had a clinical indication. Now, since we can essentially see that in the near future we will be able to do a full genome sequence for essentially the same cost as just a focused set-of-variants test, all of the sudden we have to ask ourselves: What is the meaning of variants that fall outside where we would have ordinarily looked for a given disease or, in fact, if there is no disease at all?”
Mass General’s MacArthur says it has been difficult to pinpoint causal variants because they are enriched for both sequencing and annotation errors. “In the genome era, we can generate those false positives at an amazing rate, and we need to work hard to filter them back out,” he says.
“Clinical geneticists have been working on rare diseases for a long time, and have identified many genes, and are used to working in a world where there is sequence data available only from, say, one gene with a strong biological hypothesis. Suddenly, they’re in this world where they have data from patients on all 20,000 genes,” MacArthur adds. “There’s a fundamental mind-shift there, in shifting from one gene through to every gene. My impression is that the community as a whole hasn’t really internalized that shift; people still have a sense in their head that if you see a strongly damaging variant that segregates with the disease, and maybe there’s some sort of biological plausibility around it as well, that that’s probably the causal variant.”
Studies have shown that that’s not necessarily so. Because of this, “I do worry that in the next year or so we’ll see increasing numbers of mutations published that later prove to just be benign polymorphisms,” MacArthur adds.
“The meaning of whole-genome sequence I think is very much front-and-center of where genomics is going to go. What is the true, clinical meaning? What is the interpretation? And, there’s really a double-edged sword,” Kohane says. On one hand, “if you only focus on the genes that you believe are relevant to the condition you’re studying, then you might miss some important findings,” he says. Conversely, “if you look at everything, the likelihood of a false positive becomes very, very high. Because, if you look at enough things, invariably you will find something abnormal,” he adds.
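Kohane’s point about false positives scales directly with how much of the genome is examined. A minimal back-of-the-envelope sketch in Python, assuming a purely illustrative per-variant error rate rather than any measured figure, shows why a genome-wide look almost guarantees spurious “abnormal” findings:

```python
# Illustrative only: expected spurious "damaging" calls when scanning a
# focused gene panel versus an entire genome. The error rate is an assumed
# figure for demonstration, not a measured one.

def expected_false_positives(n_variants_examined, per_variant_error_rate):
    """Expected number of variants that look pathogenic purely by error."""
    return n_variants_examined * per_variant_error_rate

ASSUMED_ERROR_RATE = 1e-4  # hypothetical combined sequencing/annotation error rate

for label, n in [("focused panel (~200 variants)", 200),
                 ("exome (~20,000 coding variants)", 20_000),
                 ("whole genome (~3,000,000 variants)", 3_000_000)]:
    print(f"{label}: ~{expected_false_positives(n, ASSUMED_ERROR_RATE):.1f} spurious findings")
```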
False positives are but one of several challenges facing scientists working to analyze genomes in a clinical context.
Technical difficulties
That advances in sequencing technologies are far outstripping researchers’ abilities to analyze the data they produce has become a truism of the field. But current sequencing platforms are still far from perfect, making most analyses complicated and nuanced. Among other things, improvements in both read length and quality are needed to enable accurate and reproducible interpretations.
“The most promising thing is the rate at which the cost-per-base-pair of massively parallel sequencing has dropped,” Utah’s Tavtigian says. Still, the cost of clinical sequencing is not inconsequential. “The $1,000, $2,000, $3,000 whole-genome sequences that you can do right now do not come anywhere close to 99 percent probability to identify a singleton sequence variant, especially a biologically severe singleton sequence variant,” he says. “Right now, the real price of just the laboratory sequencing to reach that quality is at least $5,000, if not $10,000.”
However, Tavtigian adds, “techniques for multiplexing many samples into a channel for sequencing have come along. They’re not perfect yet, but they’re going to improve over the next year or so.”
Using next-generation sequencing platforms, researchers have uncovered a variety of SNPs, copy-number variants, and small indels. But to MacArthur’s mind, current read lengths are not up to par when it comes to clinical-grade sequencing, and they have made supernumerary quality-control measures necessary.
“There’s no question that we’re already seeing huge improvements. … And as we add in to that changes in technology — for instance much, much longer sequencing reads, more accurate reads, possibly combining different platforms — I think these sorts of [quality-control] issues will begin to go away over the next couple of years,” MacArthur says. “But at this stage, there is still a substantial quality-control component in any sort of interpretation process. We don’t have perfect genomes.”
In a 2011 Nature Biotechnology paper, Stanford University’s Michael Snyder and his colleagues sought to examine the accuracy and completeness of single-nucleotide variant and indel calls from both the Illumina and Complete Genomics platforms by sequencing the genome of one individual using both technologies. Though the researchers found that more than 88 percent of the unique single-nucleotide variants they detected were concordant between the two platforms, only around one-quarter of the indel calls they generated matched up. Overall, the authors reported having found tens of thousands of platform-specific variant calls, around 60 percent of which they later validated by genotyping array.
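At its simplest, a cross-platform comparison like Snyder’s reduces to intersecting two sets of variant calls keyed by position and alleles. The sketch below uses toy call sets to illustrate that bookkeeping only; real comparisons must also normalize how indels are represented, which is one reason indel concordance is so much lower than SNV concordance:

```python
# Simplified sketch of cross-platform concordance: each call is keyed by
# (chromosome, position, ref allele, alt allele). Toy data for illustration.

platform_a = {("chr1", 10177, "A", "AC"), ("chr1", 13116, "T", "G"),
              ("chr2", 45895, "C", "T"), ("chr3", 60827, "G", "A")}
platform_b = {("chr1", 13116, "T", "G"), ("chr2", 45895, "C", "T"),
              ("chr3", 60827, "G", "A"), ("chr4", 71023, "T", "C")}

concordant = platform_a & platform_b
only_a = platform_a - platform_b
only_b = platform_b - platform_a

union = platform_a | platform_b
print(f"concordant: {len(concordant)} / {len(union)} unique calls "
      f"({100 * len(concordant) / len(union):.0f}%)")
print(f"platform-specific: {len(only_a)} A-only, {len(only_b)} B-only")
```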
For clinical sequencing to ever become widespread, “we’re going to have to be able to show the same reproducibility and test characteristic modification as we have for, let’s say, an LDL cholesterol level,” Boston’s Kohane says. “And if you measure it in one place, it should not be too different from another place. … Even before we can get to the clinical meaning of the genomes, we’re going to have to get some industry-wide standards around quality of sequencing.”
Scripps’ Topol adds that when it comes to detecting rare variants, “there still needs to be a big upgrade in accuracy.”
Analytical issues
Beyond sequencing, technological advances must also be made on the analysis end. “The next thing, of course, is once you have better accuracy … being able to do all of the analytical work,” Topol says. “We’re getting better at the exome, but everything outside of protein-coding elements, there’s still a tremendous challenge.”
Indeed, that challenge has inspired another — a friendly competition among bioinformaticians working to analyze pediatric genomes in a pedigree study.
With enrollment closed and all sequencing completed, participants in the Children’s Hospital Boston-sponsored CLARITY Challenge have rolled up their shirtsleeves and begun to dig into the data — de-identified clinical summaries and exome or whole-genome sequences generated by Complete Genomics and Life Technologies for three children affected by rare diseases of unknown genetic basis, and their parents. According to its organizers, the competition aims to help set standards for genomic analysis and interpretation in a clinical setting, and for returning actionable results to clinicians and patients.
“A bunch of teams have signed up to provide clinical-grade reports that will be checked by a blue-ribbon panel of judges later this year to compare and contrast the different forms of clinical reporting at the genome-wide level,” Kohane says. The winning team will be announced this fall and will receive a $25,000 prize, he adds.
While the competition covers all aspects of clinical sequencing — from readout to reporting — it is important to recognize that, more generally, there may not be one right answer and that the challenges are far-reaching, affecting even the most basic aspects of analysis.
“There is a lot of algorithm investment still to be made in order to get very good at identifying the very rare or singleton sequence variants from the massively parallel sequencing reads efficiently, accurately, [and with] sensitivity,” Utah’s Tavtigian says.
Picking up a variant that has been seen before is one thing, but detecting a potentially causal, though as-yet-unclassified variant is a beast of another nature.
“Novel mutations usually need extensive knowledge but also validation. That’s one of the challenges,” says Zhongming Zhao, associate professor of biomedical informatics at Vanderbilt University. “Validation in terms of a disease study is most challenging right now, because it is very time-consuming, and usually you need to find a good number of samples with similar disease to show this is not by chance.”
Search for significance
Much as sequencing a human genome is now far less laborious than it was in the early to mid-2000s, genome interpretation has also become increasingly automated.
Beyond standard quality-control checks, the process of moving from raw data to calling variants is now semiautomatic. “There’s essentially no manual intervention required there, apart from running our eyes over [the calls], making sure nothing has gone horribly wrong,” says Mass General’s MacArthur. “The step that requires manual intervention now is all about taking that list of variants that comes out of that and looking at all the available biological data that exists on the Web, [coming] up with a short-list of genes, and then all of us basically have a look at all sorts of online resources to see if any of them have some kind of intuitive biological profile that fits with the disease we’re thinking about.”
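In software, the narrowing step MacArthur describes is usually approximated by successive filters on population frequency, predicted impact, and co-segregation, followed by a check against candidate genes. The sketch below outlines that logic; the thresholds, annotation fields, and gene names are illustrative assumptions, not any published pipeline:

```python
# Outline of the filtering step from a full variant list to a candidate
# short-list. Thresholds, annotation fields, and gene names are illustrative.

from dataclasses import dataclass

@dataclass
class Variant:
    gene: str
    population_frequency: float    # frequency in reference populations
    predicted_impact: str          # e.g. "missense", "stop_gained", "synonymous"
    segregates_with_disease: bool  # co-segregation in the family, if known

DAMAGING_IMPACTS = {"stop_gained", "frameshift", "splice_site", "missense"}

def shortlist(variants, candidate_genes, max_frequency=0.001):
    """Keep rare, predicted-damaging, segregating variants; rank candidate-gene hits first."""
    kept = [v for v in variants
            if v.population_frequency <= max_frequency
            and v.predicted_impact in DAMAGING_IMPACTS
            and v.segregates_with_disease]
    return sorted(kept, key=lambda v: v.gene not in candidate_genes)

variants = [
    Variant("GENE_A", 0.0001, "stop_gained", True),
    Variant("GENE_B", 0.15,   "missense",    True),   # too common: filtered out
    Variant("GENE_C", 0.0002, "synonymous",  True),   # low predicted impact: filtered out
]
print(shortlist(variants, candidate_genes={"GENE_A"}))
```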
Of course, intuitive leads are not foolproof, nor are current mutation databases. (See sidebar, story end.) And so, MacArthur says, “we need to start replacing the sort of intuitive biological approach with a much more data-informed approach.”
Developing such an approach hinges in part on having more genomes. “If we get thousands — tens of thousands — of people sequenced with various different phenotypes that have been crisply identified, that’s going to be so important because it’s the coupling of the processing of the data with having rare variants, structural variants, all the other genomic variations to understand the relationship of whole-genome sequence of any particular phenotype and a sequence variant,” Scripps’ Topol says.
Vanderbilt’s Zhao says that sample size is still an issue. “Right now, the number of samples in each whole-genome sequencing-based publication is still very limited,” he says. At the same time, he adds, “when I read peers’ grant applications, they are proposing more and more whole-genome sequencing.”
When it comes to disease studies, sequencing a whole swath of apparently healthy people is not likely to ever be worthwhile. According to Utah’s Tavtigian, “the place where it is cost-effective is when you test cases and then, if something is found in the case, go on and test all of the first-degree relatives of the case — reflex testing for the first-degree relatives,” he says. “If there is something that’s pathogenic for heart disease or colon cancer or whatever is found in an index case, then there is a roughly 50 percent chance that the first-degree relatives are going to carry the same thing, whereas if you go and apply that same test to someone in the general population, the probability that they carry something of interest is a lot lower.”
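Tavtigian’s cost-effectiveness argument is, at bottom, about prior probability: a first-degree relative of a positive index case has a roughly 50 percent prior of carrying the same dominant variant, while an unselected person’s prior is only the population carrier frequency. A minimal sketch, with the population frequency an assumed figure for illustration:

```python
# Expected positives per 1,000 tests under reflex testing of relatives versus
# population screening. The population carrier frequency is an assumed figure.

P_RELATIVE = 0.5              # first-degree relative of a positive index case (dominant variant)
P_POPULATION_ASSUMED = 0.002  # illustrative carrier frequency in the general population

for label, p in [("first-degree relatives", P_RELATIVE),
                 ("general population", P_POPULATION_ASSUMED)]:
    print(f"{label}: ~{1000 * p:.0f} expected carriers per 1,000 tests")
```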
But more genomes, even familial ones, are not the only missing elements. To fill in the functional blanks, researchers require multiple data types.
“We’ve been pretty much sequence-centric in our thinking for many years now because that was where the attention [was],” Topol says. “But that leaves the other ‘omes out there.”
From the transcriptome to the proteome, the metabolome, the microbiome, and beyond — Topol says that because all the ‘omes contribute to human health, they all merit review.
“The ability to integrate information about the other ‘omics will probably be a critical direction to understand the underpinnings of disease,” he says. “I call it the ‘panoromic’ view — that is really going to become a critical future direction once we can do those other ‘omics readily. We’re quite a ways off from that right now.”
Mass General’s MacArthur envisages “rolling in data from protein-protein interaction networks and tissue expression data — pulling all of these together into a model that predicts, given the phenotype, given the systems that appear to be disrupted by this variant, what are the most likely set of genes to be involved,” he says. From there, whittling that set down to putative causal variants would be simpler.
“And at the end of that, I think we’ll end up with a relatively small number of variants, each of which has a probability score associated with it, along with a whole host of additional information that a clinician can just drill down into in an intuitive way in making a diagnosis in that individual,” he adds.
According to MacArthur, “we’re already moving in this direction — in five years I think we will have made substantial progress toward that.” He adds, “I certainly think within five years we will be diagnosing the majority of severe genetic disease patients; the vast majority of those we’ll be able to assign a likely causal variant using this type of approach.”
Tavtigian, however, highlights a potential pitfall. While he says that “integration of those [multivariate] data helps a lot with assessing unclassified variants,” it is not enough to help clinicians ascertain causality. Functional assays, which can be both inconclusive and costly, will be needed for some unclassified variant hits, particularly those that are thought to be clinically meaningful.
“I don’t see how you’re going to do a functional assay for less than like $1,000,” he says. “That means that unless the cost of the sequencing test also includes a whole bunch of money for assessing the unclassified variants, a sequencing test is going to create more of a mess than it cleans up.”
Rare, common
Despite the challenges, there have been plenty of clinical sequencing success stories. Already, Scripps’ Topol says, there have been “two big fronts in 2012: One is the unknown diseases [and] the other one, of course, is cancer.” And scientists say that whole-genome sequencing might also become clinically useful for asymptomatic individuals in the future.
Down the line, scientists have their sights set on sequencing asymptomatic individuals to predict disease risk. “The long-term goal is to have any person walk off the street, be able to take a look at their genome and, without even looking at them clinically, say: ‘This is a person who will almost certainly have phenotype X,'” MacArthur says. “That is a long way away. And, of course, there are many phenotypes that can’t be predicted from genetic data alone.”
Nearer term, Boston’s Kohane imagines that newborns might have their genomes screened for a number of neonatal or pediatric conditions.
Overall, he says, it’s tough to say exactly where all of the chips might fall. “It’s going to be an interesting few years where the sequencing companies will be aligning themselves with laboratory testing companies and with genome interpretation companies,” Kohane says.
Even if clinical sequencing does not show utility for cases other than genetic diseases, it could still become common practice.
“Worldwide, there are certainly millions of people with severe diseases that would benefit from whole-genome sequencing, so the demand is certainly there,” MacArthur says. “It’s just a question of whether we can develop the infrastructure that is required to turn the research-grade genomes that we’re generating at the moment into clinical-grade genomes. Given the demand and the practical benefit of having this information … I don’t think there is any question that we will continue to drive, pretty aggressively, towards large-scale genome sequencing.”
Kohane adds that “although rare diseases are rare, in aggregate they’re actually not — 5 percent of the population, or 1 in 20, is beginning to look common.”
Despite conflicting reports as to its clinical value, given the rapid declines in cost, Kohane says it’s possible that a whole-genome sequence could be less expensive than a CT scan in the next five years. Confident that many of the interpretation issues will be worked out by then, he adds, “this soon-to-be-very-inexpensive test will actually have a lot of clinical value in a variety of situations. I think it will become part of the decision procedure of most doctors.”
[Sidebar] ‘Predictive Capacity’ Challenged
In Science Translational Medicine in April, Johns Hopkins University School of Medicine’s Nicholas Roberts and his colleagues showed that personal genome sequences for healthy monozygotic twin pairs are not predictive of significant risk for 24 different diseases in those individuals and concluded that whole-genome sequencing was unlikely to be useful for that purpose.
As the Scripps Research Institute’s Eric Topol says, that Roberts and his colleagues examined the predictive capacity of personal genome sequencing “without any genome sequences” was but one flaw of their interpretation.
In a comment appearing in the same journal in May, Topol elaborated on this criticism, and noted that the Roberts et al. study essentially showed nothing new. “We cannot know the predictive capacity of whole-genome sequencing until we have sequenced a large number of individuals with like conditions,” Topol wrote.
Elsewhere in the journal, Tel Aviv University’s David Golan and Saharon Rosset noted that slightly tweaking the gene-environment parameters of the mathematical model used by Roberts et al. showed that the “predictive capacity of genomes may be higher than their maximal estimates.”
Colin Begg and Malcolm Pike from Memorial Sloan-Kettering Cancer Center also commented on the study in Science Translational Medicine, reporting their alternative calculation of the predictive capacity of personal sequencing and their analysis of cancer occurrence in the second breast of breast cancer patients, both of which, they wrote, “offer a more optimistic view of the predictive value of genetic data.”
In response to those comments, Bert Vogelstein — who co-authored the Roberts et al. study — and his colleagues wrote in Science Translational Medicine that their “group was the first to show that unbiased genome-wide sequencing could illuminate the basis for a hereditary disease,” adding that they are “acutely aware of its immense power to elucidate disease pathogenesis.” However, Vogelstein and his colleagues also said that recognizing the potential limitations of personal genome sequencing is important to “minimize false expectations and foster the most fruitful investigations.”
[Sidebar] ‘The Single Biggest Problem’
That there is currently no comprehensive, accurate, and openly accessible database of human disease-causing mutations “is the single greatest failure of modern human genetics,” Massachusetts General Hospital’s Daniel MacArthur says.
“We’ve invested so much effort and so much money in researching these Mendelian diseases, and yet we have never managed as a community to centralize all of those mutations in a single resource that’s actually useful,” MacArthur says. While he notes that several groups have produced enormously helpful resources and that others are developing more, currently “none covers anywhere close to the whole of the literature with the degree of detail that is required to make an accurate interpretation.”
Because of this, he adds, researchers are pouring time and resources into rehashing one another’s efforts and chasing down false leads.
“As anyone at the moment who is sequencing genomes can tell you, when you look at a person’s genome and you compare it to any of these databases, you find things that just shouldn’t be there — homozygous mutations that are predicted to be severe, recessive, disease-causing variants and dominant mutations all over the place, maybe a dozen or more, that they’ve seen in every genome,” MacArthur says. “Those things are clearly not what they claim to be, in the sense that a person isn’t sick.” Most often, he adds, the researchers who reported that variant as disease-causing were mistaken. Less commonly, the database moderators are at fault.
“The single biggest problem is that the literature contains a lot of noise. There are things that have been reported to be mutations that just aren’t. And, of course, a lot of the databases are missing a lot of mutations as well,” MacArthur adds. “Until we have a complete database of severe disease mutations that we can trust, genome interpretation will always be far more complicated than it should be.”
Tracy Vence is a senior editor of Genome Technology.
Source:
http://www.genomeweb.com/node/1098636/
NIST Consortium Embarks on Developing ‘Meter Stick of the Genome’ for Clinical Sequencing
The National Institute of Standards and Technology has founded a consortium, called “Genome in a Bottle,” to develop reference materials and performance metrics for clinical human genome sequencing.
Following an initial workshop in April, consortium members – which include stakeholders from industry, academia, and the government – met at NIST last month to discuss details and timelines for the project.
The current aim is to have the first reference genome — consisting of genomic DNA for a specific human sample and whole-genome sequencing data with variant calls for that sample — available by the end of next year, and another, more complete version by mid-2014.
“At present, there are no widely accepted genomics standards or quantitative performance metrics for confidence in variant calling,” the consortium wrote in its work plan, which was discussed at the meeting. Its main motivation is “to develop widely accepted reference materials and accompanying performance metrics to provide a strong scientific foundation for the development of regulations and professional standards for clinical sequencing.”
“This is like the meter stick of the genome,” said Marc Salit, leader of the Multiplexed Biomolecular Science group in NIST’s Materials Measurement Laboratory and one of the consortium’s organizers. He and his colleagues were approached by several vendors of next-generation sequencing instrumentation about the possibility of generating standards for assessing the performance of next-gen sequencing in clinical laboratories. The project, he said, will focus on whole-genome sequencing but will also include targeted sequencing applications.
The consortium, which receives funding from NIST and the Food and Drug Administration, is open for anyone to participate. About 100 people, representing 40 to 50 organizations, attended last month’s meeting, among them representatives from Illumina, Life Technologies, Pacific Biosciences, Complete Genomics, the FDA, the Centers for Disease Control and Prevention, commercial and academic clinical laboratories, and a number of large-scale sequencing centers.
Four working groups will be responsible for different aspects of the project: a group led by Andrew Grupe at Celera will select and design the reference materials; a group headed by Elliott Margulies at Illumina will characterize the reference materials experimentally, using multiple sequencing platforms; Steve Sherry at the National Center for Biotechnology Information is heading a bioinformatics, data integration, and data representation group to analyze and represent the experimental data; and Justin Johnson from EdgeBio is in charge of a performance metrics and “figures of merit” group to help laboratories use the reference materials to characterize their own performance.
The reference materials will include both human genomic DNA and synthetic DNA that can be used as spike-in controls. Eventually, NIST plans to release the references as Standard Reference Materials that will be “internationally recognized as certified reference materials of higher order.”
According to Salit, there was some discussion at the meeting about what sample to select for a national reference genome. The initial plan was to use a HapMap sample – NA12878, a female from the CEPH pedigree from Utah – but it turned out that HapMap samples are consented for research use only and not for commercial use, for example in an in vitro diagnostic or for potential re-identification from sequence data.
The genome of NA12878 has already been extensively characterized, and the CDC is developing it as a reference for clinical laboratories doing targeted sequencing. “We were going to build on that momentum and make our first reference material the same genome,” Salit said. But because of the consent issues, NIST’s institutional review board and legal experts are currently evaluating whether the sample can be used.
In the meantime, consortium members have been “quite enthusiastic” about using samples from Harvard University’s Personal Genome Project, which are broadly consented, Salit said.
The reference material working group issued a recommendation to develop a set of genomes from eight ethnically diverse parent-child trios as references, he said. For cancer applications, the references may also potentially include a tumor-normal pair.
The consortium will characterize all reference materials by several sequencing platforms. Several instrument vendors, as well as a couple of academic labs, have offered to contribute to data production. According to Justin Zook, a biomedical engineer at NIST and another organizer of the consortium, the current plan is to use sequencing technology from Illumina, Life Technologies, Complete Genomics, and – at least for the first genome – PacBio. Some of the sequencing will be done internally at NIST, which has Life Tech’s 5500 and Ion Torrent PGM available. In addition, the consortium might consider fosmid sequencing, which would provide phasing information and lower the error rate, as well as optical mapping to gain structural information, Zook said.
He and his colleagues have developed new methods for calling consensus variants from different data sets already available for the NA12878 sample, which they are planning to submit for publication in the near future. A fraction of the genotype calls will be validated using other methods, such as microarrays and Sanger sequencing. Consensus genotypes with associated confidence levels will eventually be released publicly as NIST Reference Data.
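In outline, consensus calling of this kind amounts to a vote across independent call sets for each site, with a confidence attached to the result. The toy sketch below uses a simple agreement fraction as a stand-in for the probabilistic confidence models NIST is actually developing:

```python
# Toy sketch of consensus genotype calling across platforms: majority vote,
# with a crude agreement fraction standing in for a real confidence model.

from collections import Counter

def consensus_genotype(calls):
    """calls: genotype strings for one site, one per platform or data set."""
    counts = Counter(calls)
    genotype, votes = counts.most_common(1)[0]
    confidence = votes / len(calls)  # agreement-based stand-in confidence
    return genotype, confidence

site_calls = {
    ("chr1", 10177): ["0/1", "0/1", "0/1"],          # all platforms agree
    ("chr2", 45895): ["1/1", "0/1", "1/1", "1/1"],   # one dissenting call
}
for site, calls in site_calls.items():
    gt, conf = consensus_genotype(calls)
    print(site, gt, f"confidence={conf:.2f}")
```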
An important part of NIST’s work on the data analysis will be to develop probabilistic confidence estimates for the variant calls. It will also be important to distinguish between homozygous reference genotypes and areas in the genome “where you’re not sure what the genotype is,” Zook said, adding that this will require new data formats.
Coming up with confidence estimates for the different types of variants will be challenging, Zook said, particularly for indels and structural variants. Also, representing complex variants has not been standardized yet.
Several meeting participants called for “reproducible research and transparency in the analysis,” Salit said, and there were discussions about how to implement that at the technical level, including data archives so anyone can re-analyze the reference data.
One of the challenges will be to establish the infrastructure for hosting the reference data, which will require help from the NCBI, Salit said. Also, analyzing the data collaboratively is “not a solved problem,” and the consortium is looking into cloud computing services for that.
The consortium will also develop methods that describe how to use the reference materials to assess the performance of a particular sequencing method, including both experimental protocols and open source software for comparing genotypes. “We could throw this over the fence and tell someone, ‘Here is the genome and here is the variant table,'” Salit said, but, he noted, the consortium would like to help clinical labs use those tools to understand their own performance.
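Measuring a laboratory’s performance against such a reference comes down to counting true positives, false positives, and false negatives relative to the truth set. The sketch below shows that basic comparison; real tooling must also normalize variant representation and exclude the uncertain regions Zook describes, both of which are omitted here:

```python
# Basic performance check of a lab's variant calls against a reference truth
# set, using toy calls. Representation normalization and uncertain regions
# are deliberately omitted from this sketch.

truth_set = {("chr1", 10177, "A", "AC"), ("chr1", 13116, "T", "G"),
             ("chr2", 45895, "C", "T")}
lab_calls = {("chr1", 13116, "T", "G"), ("chr2", 45895, "C", "T"),
             ("chr5", 90210, "G", "A")}

tp = len(lab_calls & truth_set)   # correctly recovered reference variants
fp = len(lab_calls - truth_set)   # calls not in the reference
fn = len(truth_set - lab_calls)   # reference variants that were missed

sensitivity = tp / (tp + fn)
precision = tp / (tp + fp)
print(f"sensitivity={sensitivity:.2f}, precision={precision:.2f}")
```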
Edge Bio’s Johnson, who is chairing the working group in charge of this effort, is also involved in developing bioinformatic tools to judge the quality of genomes for the Archon Genomics X Prize (CSN 11/2/2011). Salit said that NIST is “leveraging some excellent work coming out of the X Prize” and is collaborating with a member of the X Prize team on the consensus genotype calling project.
By the end of 2013, the consortium wants to have its first “genome in a bottle” and reference data with SNV and maybe indel calls available, which will not yet include all confidence estimates. Another version, to be released in mid-2014, will include further analysis of error rates and uncertainties, as well as additional types of variants, such as structural variation.
Julia Karow tracks trends in next-generation sequencing for research and clinical applications for GenomeWeb’s In Sequence and Clinical Sequencing News.
At AACC, NHGRI’s Green Lays out Vision for Genomic Medicine
LOS ANGELES – The age of genomic medicine is within “striking distance,” Eric Green, director of the National Human Genome Research Institute, told attendees of the American Association of Clinical Chemistry’s annual meeting here on Sunday.
Speaking at the conference’s opening plenary session, Green discussed NHGRI’s roadmap for moving genomic findings into clinical practice. While this so-called “helix to healthcare” vision may take many years to fully materialize, “I predict absolutely that it’s coming,” he said.
Green noted that rapid advances in DNA sequencing have put genomics on a similar development path as clinical chemistry, which is also a technology-driven field. “If you look over the history of clinical chemistry, whenever there were technology advances, it became incredibly powerful and new opportunities sprouted up left and right,” he said.
Green likened next-gen sequencing to the autoanalyzers that “changed the face of clinical chemistry” by providing a generic platform that enabled a range of applications. In a similar fashion, low-cost sequencing is becoming a “general purpose technology” that can not only read out DNA sequence but can also provide information about RNA, epigenetic modifications, and other associated biology, he said.
The “low-hanging fruit” for genomic medicine is cancer, where molecular profiling is already being used alongside traditional histopathology to provide information on prognosis and to help guide treatment, he said.
Another area where Green said that genomic medicine is already bearing fruit is pharmacogenomics, where genomic data is proving useful in determining which patients will respond to specific drugs.
Nevertheless, while it’s clear that “sequencing is already altering the clinical landscape,” Green urged caution. “We have to manage expectations and realize it’s going to be many years from going from the most basic information about our genome sequence to actually changing medical care in any serious way,” he said.
In particular, he noted that the clinical interpretation of genomic data is still a challenge. Not only are the data volumes formidable, but the functional role of most variants is still unknown, he noted.
This knowledge gap should be addressed over the next several years as NHGRI and other organizations worldwide sequence “hundreds of thousands” of human genomes as part of large-scale research studies.
“We’re increasingly thinking about how to use that data to actually do clinical care, but I want to emphasize that the great majority of this data being generated will and should be part of research studies and not part of primary clinical care quite yet,” Green said.
Source:
http://www.genomeweb.com/sequencing/aacc-nhgris-green-lays-out-vision-genomic-medicine
Startup Aims to Translate Hopkins Team’s Cancer Genomics Expertise into Patient Care
Researchers at Johns Hopkins University who helped pioneer cancer genome sequencing have launched a commercial effort intended to translate their experience into clinical care.
Personal Genome Diagnostics, founded in 2010 by Victor Velculescu and Luis Diaz, aims to commercialize a number of cancer genome analysis methods that have been developed at Hopkins over the past several decades. Velculescu, chief scientific officer of PGDx, is director of cancer genetics at the Ludwig Center for Cancer Genetics and Therapeutics at Hopkins; while Diaz, chief medical officer of the company, is director of translational medicine at the Ludwig Center.
Other founders include Ludwig Center Director Bert Vogelstein as well as Hopkins researchers Ken Kinzler, Nick Papadopoulos, and Shibin Zhou. The team has led a number of seminal cancer sequencing projects, including the first effort to apply large-scale sequencing to cancer genomes, one of the first cancer exome sequencing studies, and the discovery of a number of cancer-related genes, including TP53, PIK3CA, APC, IDH1 and IDH2.
Velculescu told Clinical Sequencing News that the 10-person company, headquartered in the Science and Technology Park at Johns Hopkins in Baltimore, is a natural extension of the Hopkins group’s research activities.
Several years ago, “we began receiving requests from other researchers, other physicians, collaborators, and then actually patients, family members, and friends, wanting us to do these whole-exome analyses on cancer samples,” he said. “We realized that doing this in the laboratory wasn’t really the best place to do it, so for that reason we founded Personal Genome Diagnostics.”
The goal of the company, he said, “is to translate this history of our group’s experience of cancer genetics and our understanding of cancer biology, together with the technology that has now become available, and to ultimately perform these analyses for individual patients.”
The fledgling company has reached two commercial milestones in the last several weeks. First, it gained CLIA certification for cancer exome sequencing using the HiSeq 2000. In addition, it secured exclusive licensing rights from Hopkins for a technology called digital karyotyping, developed by Velculescu and colleagues to analyze copy number changes in cancer genomes.
PGDx offers a comprehensive cancer genome analysis service that combines exome sequencing with digital karyotyping, which isolates short sequence tags from specific genomic loci in order to identify chromosomal changes as well as amplifications and deletions.
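Digital karyotyping infers chromosomal gains and losses from counts of sequence tags falling in genomic bins, compared between tumor and matched normal. The sketch below illustrates only that core ratio calculation, with toy counts; it is a schematic stand-in, not the licensed Hopkins method:

```python
# Schematic of copy-number estimation from tag counts in genomic bins,
# comparing tumor against matched normal. Toy counts for illustration only;
# this is not the licensed digital karyotyping implementation.

import math

tumor_tags =  {"bin_001": 480, "bin_002": 1010, "bin_003": 95,  "bin_004": 510}
normal_tags = {"bin_001": 500, "bin_002": 495,  "bin_003": 505, "bin_004": 490}

tumor_total = sum(tumor_tags.values())
normal_total = sum(normal_tags.values())

for b in tumor_tags:
    # Normalize for total tag counts, then take a log2 ratio:
    # ~0 suggests two copies, positive suggests amplification, negative suggests loss.
    ratio = (tumor_tags[b] / tumor_total) / (normal_tags[b] / normal_total)
    print(f"{b}: log2 ratio = {math.log2(ratio):+.2f}")
```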
The company sequences tumor-normal pairs and promises a turnaround time of six to 10 weeks, though Velculescu said that ongoing improvements in sequencing technology and the team’s analysis methods promise to reduce that time “significantly.” It is currently seeing turnaround times of under a month.
To date, the company has focused solely on the research market. Customers have included pharmaceutical and biotech companies, individual clinicians and researchers, and contract research organizations, while the scale of these projects has ranged from individual patients to thousands of exomes for clinical trials.
While the company performs its own sequencing for smaller projects, it relies on third-party service providers for larger studies.
PGDx specializes in all aspects of cancer genome analyses, but has a particular focus on the front and back end of the workflow, Velculescu said, including “library construction, pathologic review of the samples, dissection of tumor samples to enrich tumor purity, next generation sequencing, identification of tumor-specific alterations, and linking of these data to clinical and biologic information about human cancer.”
The sequencing step in the middle, however, “is really almost becoming a commodity,” he noted. “Although we’ve done it in house, we typically do outsource it and that allows us to scale with the size of these projects.”
He said that PGDx typically works with “a number of very high-quality sequence partners to do that part of it,” but he declined to disclose these partners.
On the front end, PGDx has developed “a variety of techniques that we’ve licensed and optimized from Hopkins that have allowed us to improve extraction of DNA from both frozen tissue and [formalin-fixed, paraffin-embedded] tissue, even at very small quantities,” Diaz said. The team has also developed methods “to maximize our ability to construct libraries, capture, and then perform exomic sequencing with digital karyotyping.”
Once the sequence data is in hand, “we have a pipeline that takes that information and deciphers the changes that are most likely to be related to the cancer and its genetic make-up,” he said. “That’s not trivial. It requires inspection by an experienced cancer geneticist.”
While the firm is working on automating the analysis, “it’s not something that is entirely automatable at this time and therefore cannot be commoditized,” Diaz said.
The firm issues a report for its customers that “provides information not only on the actual sequence changes which are of high quality, but what these changes are likely to do,” Velculescu said, including “information about diagnosis, prognosis, therapeutic targeting [information] or predictive information about the therapy, and clinical trials.”
So far, the company has relied primarily on word of mouth to raise awareness of its offerings. “We’ve literally been swamped with requests from people who just know us,” Velculescu said. “I think one of the major reasons people have been coming to us for either these small or very large contracts is that people are getting this type of NGS data and they don’t know what to do with it — whether it’s a researcher who doesn’t have a lot of experience in cancer or a clinician who hasn’t seen this type of data before.”
While there’s currently “a wealth in the ability to get data, there’s an inadequacy in being able to understand and interpret the data,” he said.
Pricing for the company’s services is on a case-by-case basis, but Diaz estimated that retail costs are currently between $5,000 and $10,000 per tumor-normal pair for research purposes. Clinical cases are more costly because the depth of coverage is deeper and additional analyses are required, as well as a physician interpretation.
A Cautious Approach
While the company’s ultimate goal is to help oncologists use genomic information to inform treatment for their patients, PGDx is “proceeding cautiously” in that direction, Diaz said.
The firm has so far sequenced around 50 tumor-normal pairs for individual patients, but these have been for “informational purposes,” he said, stressing that the company believes the field of cancer genomics is still in the “discovery” phase.
“I think we’re really at the beginning of the genomic revolution in cancer,” Diaz said. “We are partnering with pharma, with researchers, and with certain clinicians to start bringing this forward — not only as a discovery tool but eventually as a clinical application.”
“We do think that rushing into this right now is too soon, but we are building the infrastructure — for example our recent CLIA approval for cancer genome analyses — to do that,” he added.
This cautious approach sets the firm apart from some competitors, including Foundation Medicine, which is about to launch a targeted sequencing test that it is marketing as a diagnostic aid to help physicians tailor therapy for their patients. Diagnostic firm Asuragen is also offering cancer sequencing services based on a targeted approach (CSN 1/12/12), as are a number of academic labs.
Diaz said that PGDx’s comprehensive approach also sets it apart from these groups. “We think there’s a lot of clinically actionable information in the genome … and we don’t want to limit ourselves by just looking at a set of genes and saying that these may or may not have importance.”
While the genes in targeted panels “may have some data surrounding them with regard to prognosis, or in relation to a therapy, that’s really only a small part of the story when it comes to the patient’s cancer,” Diaz said.
“That’s why we would like to remain the company that looks at the entire cancer genome in a comprehensive fashion, because we don’t know enough yet to break it down to a few genes,” he said.
The company’s proprietary use of digital karyotyping to find copy number alterations is another differentiator, Velculescu said, because many cancer-associated genes — such as p16, EGFR, MYC, and HER2/neu — are only affected by copy number changes, not point mutations.
Ultimately, “we want to develop something that has value for the clinician,” Diaz said. “A clinician currently sees 20 to 30 patients a day and may have only a few minutes to look at a report. If [information from sequencing] doesn’t have immediate high-impact value, it’s going to be very hard to justify its use down the road.”
He added that the company is “thinking very hard about what we can squeeze out of the cancer genome to provide that high-impact clinical value — something that isn’t just going to improve the outcome of patients by a few months or weeks, but actually change the outlook of that patient substantially.”
Source:
Bernadette Toner is editorial director for GenomeWeb’s premium content.
In Educational Symposium, Illumina to Sequence, Interpret Genomes of 50 Participants for $5K Each
This story was originally published June 25.
As part of a company-sponsored symposium this fall to “explore best practices for deploying next-generation sequencing in a clinical setting,” Illumina plans to sequence and analyze the genomes of around 50 participants for $5,000 each, Clinical Sequencing News has learned.
According to Matt Posard, senior vice president and general manager of Illumina’s translational and consumer genomics business, the event is part of a “multi-step process to engage experts in the field around whole-genome sequencing, and to support the conversation.”
The “Understand your Genome” symposium will take place Oct. 22-23 at Illumina’s headquarters in San Diego.
The company sent out invitations to the event over the last few months, targeting individuals with a professional interest in whole-genome sequencing, including medical geneticists, pathologists, academics, and industry or business leaders, Posard told CSN this week. To provide potential participants with more information about the symposium, Illumina also hosted a webinar this month that included a Q&A session.
Registration closed June 14 and has exceeded capacity — initially 50 spots, a number that may increase slightly, Posard said. Everyone else is currently waitlisted, and Illumina plans to host additional symposia next year.
“There has been quite a bit of unanticipated enthusiasm around this from people who are speaking at the event or planning to attend the event,” including postings on blogs and listservs, Posard said.
As part of their $5,000 registration fee, which does not include travel and lodging, participants will have their whole genome sequenced in Illumina’s CLIA-certified and CAP-accredited lab prior to the event. It is also possible to participate without having one’s genome sequenced, but only as a companion to a full registrant, according to Illumina’s website. The company prefers that participants submit their own sample, but as an alternative, they may submit a patient sample instead.
The general procedure is very similar to Illumina’s Individual Genome Sequencing, or IGS, service in that it requires a prescription from a physician, who also receives the results to review them with the participant. However, participants pay less than they would through IGS, where a single human genome currently costs $9,500.
Participants will also have a one-on-one session with an Illumina geneticist prior to being sequenced, and they can choose to not receive certain medical information as part of the genome interpretation.
Doctors will receive the results and review them with the participants sometime before the event. “There will be no surprises for these participants when they come to the symposium,” Posard said.
Results will include not only a list of variants but also a clinical interpretation of the data by Illumina geneticists. This is currently not part of IGS, which requires an interpretation of the data by a third party, but Illumina plans to start offering interpretation services for IGS before the symposium, Posard said.
“Our stated intent has always been that we want to fill in all of the pieces that the physicians require, so we are building a human resource, as well as an informatics team, to provide that clinical interpretation, and we are using that apparatus for the ‘Understand your Genome’ event,” Posard said.
The interpretation will include “a specified subset of genes relating to Mendelian conditions, drug response, and complex disease risks,” according to the website, which notes that “as with any clinical test, the patient and physician must discuss any medically significant results.”
The first day of the symposium will feature presentations on clinical, laboratory, ethical, legal, and social issues around whole-genome sequencing by experts in the field. Speakers include Eric Topol from the Scripps Translational Science Institute, Matthew Ferber from the Mayo Clinic, Robert Green from Brigham and Women’s Hospital and Harvard Medical School, Heidi Rehm from the Harvard Partners Center for Genetics and Genomics, Gregory Tsongalis from the Dartmouth Hitchcock Medical Center, Robert Best from the University of South Carolina School of Medicine, Kenneth Chahine from Ancestry.com, as well as Illumina’s CEO Jay Flatley and chief scientist David Bentley.
On the second day, participants will receive their genome data on an iPad and learn how to analyze their results using the iPad MyGenome application that Illumina launched in April.
The planned symposium stirred some controversy at the European Society of Human Genetics annual meeting in Nuremberg, Germany, this week. During a presentation in a session on the diagnostic use of next-generation sequencing, Gert Matthijs, head of the Laboratory for Molecular Diagnostics at the Center for Human Genetics in Leuven, Belgium, said he was upset because the invitation to Illumina’s event apparently not only reached selected individuals but also patient organizations.
“To me, personally, [the event] tells that some people are really exploring the limits of business, and business models, to get us to genome sequencing,” he said.
“We have to be very careful when we put next-generation sequencing direct to the consumer, or to patient testing, but it’s a free world,” he added later.
Posard said that Illumina welcomes questions about and criticism of the symposium. “This is another example of us being extremely responsible and transparent in how we’re handling this novel application that everybody acknowledges is the wave of the future,” he said. “We want to responsibly introduce that wave, and I believe we’re doing so, through such things as the ‘Understand your Genome’ event, but not limited to this event.”
Julia Karow tracks trends in next-generation sequencing for research and clinical applications for GenomeWeb’s In Sequence and Clinical Sequencing News.
Federal Court Rules Helicos Patent Invalid; Company Reaches Payment Agreement with Lenders
NEW YORK (GenomeWeb News) – A federal court has ruled in Illumina’s favor in a lawsuit filed by Helicos BioSciences that had alleged patent infringement.
In a decision dated Aug. 28, District Judge Sue Robinson of the US District Court for the District of Delaware granted Illumina’s motion for summary judgment declaring US Patent No 7,593,109 held by Helicos invalid for “lack of written description.”
Titled “Apparatus and methods for analyzing samples,” the patent relates to an apparatus, systems, and methods for biological sample analysis.
The ‘109 patent was the last of three patents that Helicos accused Illumina of infringing, following voluntary dismissal by Helicos earlier this year with prejudice of the other two patents. In October 2010 Helicos included Illumina and Life Technologies in a lawsuit that originally accused Pacific Biosciences of patent infringement.
Helicos dropped its lawsuit against Life Tech and settled with PacBio earlier this year, leaving Illumina as the sole defendant.
In seeking a motion for summary judgment, Illumina argued that the ‘109 patent does not disclose “a focusing light source operating with any one of the analytical light sources to focus said optical instrument on the sample.” Illumina’s expert witness further said that the patent “does not describe how focusing light source works” nor does it provide an illustration of such a system, according to court documents.
In handing down her decision, Robinson said, “In sum, and based on the record created by the parties, the court concludes that Illumina has demonstrated, by clear and convincing evidence, that the written description requirement has not been met.”
In a statement, Illumina President and CEO Jay Flatley said he was pleased with the court’s decision.
“The court’s ruling on the ‘109 patent, and Helicos’ voluntary dismissal of the other patents in the suit, vindicates our position that we do not infringe any valid Helicos patent,” he said. “While we respect valid and enforceable intellectual property rights of others, Illumina will continue to vigorously defend against unfounded claims of infringement.”
After the close of the market Wednesday, Helicos also disclosed that it had reached an agreement with lenders to waive defaults arising from Helicos’ failure to pay certain risk premium payments in connection with prior liquidity transactions. The transactions are part of a risk premium payment agreement Helicos entered into with funds affiliated with Atlas Venture and Flagship Ventures in November 2010.
The lenders have agreed to defer the risk premium payments “until [10] business days after receipt of a written notice from the lenders demanding the payment of such risk premium payments,” Helicos said in a document filed with the US Securities and Exchange Commission.
The Cambridge, Mass.-based firm also disclosed that Noubar Afeyan and Peter Barrett have resigned from its board.
Helicos said two weeks ago that its second-quarter revenues dipped 29 percent year over year to $577,000. In an SEC document, it also warned that existing funds were not sufficient to support its operations and related litigation expenses through the planned September trial date for its dispute with Illumina.
In Thursday trade on the OTC market, shares of Helicos closed down 20 percent at $.04.
Source:
State of the Science: Genomics and Cancer Research
Genome Technology: Doctors Wang and Chen, can you tell me a bit about the work you did that led to you receiving the Szent-Györgyi prize?
Zhen-Yi Wang: I am a physician. I am working in the clinic, so I have to serve the patients. … I know the genes very superficially, not very deeply, but the question raised to me is: There are so many genes, but how are [we] to judge what is the most important?
Zhu Chen: The work that is recognized by this year’s Szent-Györgyi Prize concerns … acute promyelocytic leukemia. Over the past few decades, we have been involved in developing new treatment strategies against this disease.
You have two [therapies — all-trans retinoic acid and arsenic trioxide] — that target the same protein but with slightly different mechanisms, so we call this synergistic targeting. When the two drugs combine together for the induction therapy, then we see very nice response in terms of the complete remission rate. But more importantly, we see that this synergistic targeting, together with the effect of the chemotherapy, can achieve a very high five-year disease-free survival — as high as 90 percent.
But we were more interested in the functional aspects of the genome, to understand what each gene does and also to particularly understand the network behavior of the genes.
GT: There are a number of consortiums looking at the genome sequences of many cancer types. What do you hope to see from such studies?
Webster Cavenee: This is a way that tumors are being sequenced in a rational kind of way. It would have been done anyway by labs individually, which would have taken a lot more money and taken a lot longer, too. The human genome sequence, everybody said, ‘Why are you going to do that?’ … But that now turns out to be a tremendous resource. … From the point of view of The Cancer Genome Atlas, having the catalog of all of the kinds of mutations which are present in tumors can be very useful because you can see patterns. For example, in the glioblastoma cancer genome project, they found an unexpected association of some mutations and combinations of mutations with drug sensitivity. Nobody would have thought that.
Carlo Croce: After that, you have to be able to validate all of the genetic operations in model systems where you can reproduce the same changes and see whether there are the same consequences. Otherwise, without validation, to develop therapy doesn’t make much sense because maybe those so-called driver mutations will turn out to be something else.
GT: Will sequencing of patients’ tumors come to the clinic?
CC: It is inevitable. Naturally, there are a lot of bottlenecks. To do the sequencing is the, quote, trivial part and it is going to cost less and less. But then interpreting the data might be a little bit more cumbersome.
Sujuan Ba: Dr. Chen, there is an e-health card in China right now. Do you think some day gene sequencing will be stored in that card?
ZC: We are developing a digital healthcare in China. We started with electronic health records and now by providing the e-health card to the people, that will facilitate the individualized health management and also the supervision of our healthcare system. In terms of the use of genetic information for clinical purposes, as Professor Croce said, it’s going to happen.
GT: What do you think are the major questions in cancer research that still need to be addressed?
PV: There are increasingly two schools of thought on cancer. One is that it is all an engineering problem: We have all the information we need, we just need to engineer the right drugs. The other school says it’s still a basic knowledge problem. I think more and more people think it’s just an engineering problem — give us the money and we’ll do it all. A lot of things can be done, but we still don’t have complete knowledge.
Roundtable Participants
Sujuan Ba, National Foundation for Cancer Research
Webster Cavenee, University of California, San Diego
Zhu Chen, Ministry of Health, China
Carlo Croce, Ohio State University
Peter Vogt, Scripps Research Institute
Zhen-Yi Wang, Shanghai Jiao Tong University
This is terrific. I was aware of the complexity issue. Now we are back to the three blind men – one holding the ear, the other holding the tail, and the third holding the trunk. Hmmm. This is an elephant.
“The ability to integrate information about the other ‘omics will probably be a critical direction to understand the underpinnings of disease,” he says. “I call it the ‘panoromic’ view — that is really going to become a critical future direction once we can do those other ‘omics readily.”
I have to visit Dr. Coifman, and also share this with Gil David.
Dr. Larry, thank you for your comment.
The merit of this post is to put in ONE place dispersed pieces of information that together make up the tapestry of Next-Generation Sequencing (NGS) – the era following completion of the Human Genome sequence.
Thank you very much for a great review!
A few things should be considered:
1. The 68,000 human genomes represent a staggering amount of information. The challenge will be to convert this information into knowledge and to make that knowledge effectively accessible, at least to researchers and, in the longer term, to medical professionals and patients.
2. The 68,000+ genomes collected by a large and well-funded government collaboration are just a start. With the continued increase in the efficiency of NGS technology, we can expect a day when 68,000 (or more) genomes are collected by individual companies.
In addition to the obvious need for analytical algorithms and data management systems, we probably need to look forward to regulations governing the use of genomic data. I also wonder how long it might be before liability issues are raised.
I am ashamed to say that my view of regulations has been modified after 15 years on several very collegial regulatory and review committees and by professional experience. The industry has some very good committees to address needed changes. In practice, I am thinking that self-regulation is an illusion, the FDA is overloaded, nutraceuticals are not regulated like drugs, and the regulation we have can stifle our innovation. It’s messy because of the huge complexity. Dr. Stanley Dudrick would not have been able to develop total parenteral nutrition through a subclavian line if he had to do it today. When he did it with the future Chairman of Surgery at Harvard, the Philadelphia Medical Society was outraged (does it sound like Semmelweis and childbed fever under Rokitansky in Vienna?).
To get to your most important point – we actually have equations and capabilities today, not possible 10 years ago, to cross that line. My colleague, the best pathologist at Yale, Marguerite M. Pinto (whom Jon Morrow loves to badger), has told me that Jon says people are using some three equations, none of which has been proved to be best. I don’t think that’s the issue. The issue is that there is no strong validity unless you can tie the genomics to critical pathways that are phenotypical ANOMALIES. To do that you have to have a lot more data, and it can’t be predetermined, because the number of predictors depends on the size of the classes. We don’t know the number of classes or their sizes a priori. The probabilities become more and more accurate as you add to the data bank, the weights of the predictors become clearer, and the probabilities are calculated from the total population. The man I think has gone beyond everything that is reported is in Tel Aviv, but there is another branch on the tree that hasn’t been traversed yet.
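For readers who want a concrete picture of what “not knowing the number of classes a priori” looks like in practice, here is a minimal sketch (not the commenter’s actual method; the data and parameters below are invented): it fits Gaussian mixture models with different numbers of components and lets a penalized fit criterion (BIC) decide how many latent classes the data support.

```python
# Illustrative sketch only: let the data choose the number of classes.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical biomarker panel: 500 patients x 4 predictors,
# drawn from two latent classes the analyst does not know about.
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(300, 4)),
    rng.normal(loc=2.5, scale=0.8, size=(200, 4)),
])

best_k, best_bic, best_model = None, np.inf, None
for k in range(1, 8):                        # candidate numbers of classes
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bic = gmm.bic(X)                         # penalized fit: lower is better
    if bic < best_bic:
        best_k, best_bic, best_model = k, bic, gmm

print(f"classes supported by the data: {best_k}")
# Class-membership probabilities; these sharpen as more patients are
# added to the data bank, which is the point made in the comment above.
posteriors = best_model.predict_proba(X)
```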
Dr. Larry,
The comment made by Dr. Boris refers to paradigms of Artificial Intelligence and data mining – algorithms from the families of correlation decay, Bucket Brigade, machine learning and neural nets, Combinatorial Optimization, Simulated Annealing, and genetic programming and genetic algorithms – rather than the Bayesian models that you are talking about.
I worked in this field at MITRE in 1995-1996. At the time, it was frontier work.
I didn’t refer to Bayesian models at all. They are clearly insufficient. Neural nets are also inadequate to handle at least 100 subsets and perhaps a dozen predictors; you only need 3 or 4 predictors for a single choice. I think I have more publications in clinical biomarker analysis than anyone in my field.
So far I have had only a brief post on anomaly detection and classification, a purely empirical method used by Gil David, and deferred any reference to waveform analysis, which opened up the whole field of spectroscopy and earned Prof. Ronald R. Coifman the National Medal of Science and membership in the NAS. Now a Chinese radiologist and mathematician has used waveform analysis to analyze the periodicity of respiration and of the EKG, which is marvelous.
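As a toy illustration of the kind of periodicity analysis mentioned above (not the cited work; the signal, sampling rate, and breathing frequency are all invented for the example), a simple power spectrum is enough to pull the dominant respiratory period out of a noisy trace. Wavelet-style methods add time localization on top of this.

```python
# Minimal sketch: estimate the dominant period of a quasi-periodic
# physiological signal (e.g. respiration) from its power spectrum.
import numpy as np

fs = 50.0                                  # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)               # one minute of signal
# Synthetic respiration-like trace: ~0.25 Hz oscillation plus noise
signal = np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)

dominant = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"dominant frequency ~ {dominant:.2f} Hz "
      f"(period ~ {1 / dominant:.1f} s)")
```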
I know that I am not as sophisticated as you are, based on what you have said was your work at MITRE. One handicap you had was unaccounted-for noise. The next was probably not having enough statistical POWER. The last had to be limitations in your computational support, which dropped away in 2002; their removal is now driving the “BIG DATA” promise. You may note that “BIG DATA” has not hit medicine, but is still aimed largely at business competitiveness. I keep looking at Watson, and I’m underwhelmed.
I learned what was needed from Wally Foster at AFIP, Christos Tsokos, IJ Good, Gene Rypka (whose papers on syndromic classification of microorganisms are on my desk; that work was taken up by industry and is used worldwide), and from Rosser Rudolph, a true genius who had more misfortune than anyone can expect, with the death of his daughter and a biphasic disorder that at times required hospitalization. I have had really fabulous responses to the work done at Yale. You can’t imagine what it can be like when you are trying to stay at the leading edge and are viewed as a crackpot. But I have had recognition, and even backslaps. My grounding has to stay with medical evidence (not committee-designed). We lose some clarity when we get into specifically designed algorithms.
The work with Rosser Rudolph sold me on using “Effective Information” (not any different from Kullback entropy) to find optimum decision points for a biomarker. Then I had the good fortune to work with Izaak Mayzlin, the refusenik from Moscow University, who was the most gifted in math in my daughter’s class.
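To make the Kullback-entropy idea concrete, here is a minimal sketch under invented data, and one generic reading of “effective information” rather than Rudolph’s published formulation: dichotomize a hypothetical biomarker at each candidate cutoff and keep the cutoff that maximizes the information gain (a Kullback-Leibler quantity) between the test result and the disease label.

```python
# Illustrative sketch: choose a biomarker decision point by maximizing
# the information gain between the dichotomized test and the diagnosis.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def information_gain(values, labels, cutoff):
    """Mutual information between (value >= cutoff) and the label."""
    positive = values >= cutoff
    h_label = entropy(np.bincount(labels) / labels.size)
    h_cond = 0.0
    for side in (positive, ~positive):
        if side.any():
            p_side = side.mean()
            h_cond += p_side * entropy(np.bincount(labels[side]) / side.sum())
    return h_label - h_cond

rng = np.random.default_rng(1)
# Hypothetical biomarker, higher on average in diseased patients
healthy = rng.normal(10, 3, 400)
diseased = rng.normal(18, 4, 100)
values = np.concatenate([healthy, diseased])
labels = np.concatenate([np.zeros(400, int), np.ones(100, int)])

cutoffs = np.linspace(values.min(), values.max(), 200)
best = max(cutoffs, key=lambda c: information_gain(values, labels, c))
print(f"optimum decision point ~ {best:.1f}")
```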
I saw the biomarker field get really screwed up over the last 10 years, and this is because of the way confounders are not worked into the analysis, a basic problem with clinical trials. That’s why I can’t be finished with troponins and natriuretic peptides. These are seen at a more macro level than the signaling pathways, which are not yet fully integrated with genomics.
I’m an old fart still trying to hold on with a weak grip.
Dr. Boris,
Please conduct a search on our Scientific Web Site for genome or FDA or IRB and review the posts.
I posted three on biosimilars, one about legal issues. Search “Biosimilars”
Thank you for your comment.
I e-mailed you an invitation to be A GUEST AUTHOR ON OUR CATEGORY called Bioinformatics, or any other in which you can claim expertise.
Dr. Larry,
Thank you for your comment. This discussion will continue live only, not in the comments.
Dr. Boris is with
http://www.japanbioinformatics.com/
Please check what they do, how it is relevant to his comment and how my comments are relevant to his original one.
I actually consider this amazing blog, “SAME SCIENTIFIC IMPACT: Scientific Publishing – Open Journals vs. Subscription-based « Pharmaceutical Intelligence”, very compelling, plus the blog post ended up being a good read.
Many thanks, Annette