War on Cancer Needs to Refocus to Stay Ahead of Disease Says Cancer Expert

Writer, Curator: Stephen J. Williams, Ph.D.

Is one of the world’s most prominent cancer researchers throwing in the towel on the War On Cancer? Not throwing in the towel, just reminding us that cancer is more complex than just a genetic disease, and in the process, giving kudos to those researchers who focus on non-genetic aspects of the disease (see Dr. Larry Bernstein’s article Is the Warburg Effect the Cause or the Effect of Cancer: A 21st Century View?).

 

National Public Radio (NPR) has been conducting an interview series with MIT cancer biology pioneer, founding member of the Whitehead Institute for Biomedical Research, National Academy of Sciences member, and National Medal of Science awardee Robert A. Weinberg, Ph.D., who co-discovered one of the first human oncogenes (Ras)[1], participated in the isolation of the first tumor suppressor (Rb)[2], and, with Dr. Bill Hahn, first demonstrated that cells could become tumorigenic after a defined set of discrete genetic lesions[3]. In the latest NPR piece, Why The War On Cancer Hasn’t Been Won (on NPR’s blog by Richard Harris), Dr. Weinberg discusses a point from an essay he wrote in the journal Cell[4]: that, in recent years, cancer research may have focused too much on the genetic basis of cancer at the expense of its multifaceted etiology, including the roles of metabolism, immunity, and physiology. Cancer is the second leading cause of medically related deaths in the developed world. However, concerted efforts among most developed nations to eradicate the disease, such as increased government funding for cancer research and the ‘war on cancer’ declared in the early 1970s, have translated into remarkable improvements in diagnosis, early detection, and survival rates for many individual cancers. For example, survival rates for breast and colon cancer have improved dramatically over the last 40 years. In the UK, overall median survival times improved from one year for patients diagnosed in 1972 to 5.8 years for patients diagnosed in 2007. In the US, overall five-year survival improved from 50% for all adult cancers and 62% for childhood cancers in 1972 to 68% and 82%, respectively, by 2007. However, for some cancers, including lung, brain, pancreatic, and ovarian cancer, there has been little improvement in survival rates since the ‘war on cancer’ began.

(Other NPR interviews with Dr. Weinberg include How Does Cancer Spread Through The Body?)

As Weinberg said, in the 1950s medical researchers saw cancer as “an extremely complicated process that needed to be described in hundreds, if not thousands of different ways.” Scientists then tried to find a unifying principle, first focusing on viruses as the cause of cancer (for example, the Rous sarcoma virus; see also Dr. Gallo’s book on his early research on cancer, virology, and HIV, Virus Hunting: AIDS, Cancer & the Human Retrovirus: A Story of Scientific Discovery).

However (as the blog article goes on) “that idea was replaced by the notion that cancer is all about wayward genes.”

“The thought, at least in the early 1980s, was that there were a small number of these mutant, cancer-causing oncogenes, and therefore that one could understand a whole disparate group of cancers simply by studying these mutant genes that seemed to be present in many of them,” Weinberg says. “And this gave the notion, the illusion over the ensuing years, that we would be able to understand the laws of cancer formation the way we understand, with some simplicity, the laws of physics, for example.”

According to Weinberg, this gene-directed unifying theory has given way as recent evidence points back once again to a multi-faceted view of cancer etiology.

But this is not a revolutionary or conflicting idea for Dr. Weinberg, who is a recipient of the 2007 Otto Warburg Medal and has focused his latest research on complex systems such as angiogenesis, cell migration, and epithelial-stromal interactions.

In fact, it was Dr. Weinberg, together with Dr. Douglas Hanahan, who formulated eight governing principles, or hallmarks, of cancer:

  1. Maintaining Proliferative Signals
  2. Avoiding Immune Destruction
  3. Evading Growth Suppressors
  4. Resisting Cell Death
  5. Becoming Immortal
  6. Angiogenesis
  7. Deregulating Cellular Energy
  8. Activating Invasion and Metastasis

Taken together, these hallmarks represent the common features of tumors and may involve genetic or non-genetic (epigenetic) lesions: a multi-modal view of cancer that spans time and disciplines. As reviewed by both Dr. Larry Bernstein and me in the e-book Volume One: Cancer Biology and Genomics for Disease Diagnosis, each scientific discipline, whether pharmacology, toxicology, virology, molecular biology, physiology, or cell biology, has contributed greatly to our total understanding of this disease, each from its own unique perspective. This leads to a “multi-modal” view of cancer etiology, diagnosis, and treatment. Many of the improvements in survival rates are a direct result of the massive increase in knowledge of tumor biology obtained through ardent basic research. Breakthrough discoveries regarding oncogenes, cancer cell signaling, survival and regulated death mechanisms, tumor immunology, genetics and molecular biology, biomarker research, and now nanotechnology and imaging have directly led to the advances we now see in early detection, chemotherapy, personalized medicine, and new therapeutic modalities such as cancer vaccines, immunotherapies, and combination chemotherapies. Molecular and personalized therapies such as trastuzumab and aromatase inhibitors for breast cancer, imatinib for CML and GIST-related tumors, and bevacizumab for advanced colorectal cancer have been a direct result of molecular discoveries into the nature of cancer. This then leads to an interesting question (one to be tackled in another post):

Would shifting focus away from the cancer genome and back to cancer biology limit the progress we’ve made in personalized medicine?

 

In a 2012 post, Genomics And Targets For The Treatment Of Cancer: Is Our New World Turning Into “Pharmageddon” Or Are We On The Threshold Of Great Discoveries?, Dr. Leonard Lichtenfeld, Deputy Chief Medical Officer for the American Cancer Society (ACS), comments on how genomics and personalized-medicine strategies have changed oncology drug development. As he notes, in the past chemotherapy development was somewhat ‘hit or miss’, and the dream and promise of genomics suggested an era of targeted therapy in which drug development would be more ‘rational’ and targets would be easily identifiable.

To quote his post

That was the dream, and there have been some successes–even apparent cures or long term control–with the use of targeted medicines with biologic drugs such as Gleevec®, Herceptin® and Avastin®. But I think it is fair to say that the progress and the impact hasn’t been quite what we thought it would be. Cancer has proven a wily foe, and every time we get answers to questions what we usually get are more questions that need more answers. The complexity of the cancer cell is enormous, and its adaptability and the genetic heterogeneity of even primary cancers (as recently reported in a research paper in the New England Journal of Medicine) has been surprising, if not (realistically) unexpected.


Indeed, the complexity of a given patient’s cancer (especially solid tumors) with regard to its genetic and mutational landscape (heterogeneity) [please see the post with an interview with Dr. Swanton on tumor heterogeneity] has been at the forefront of many clinicians’ minds [see comments within the related post as well as notes from recent personalized medicine conferences covered live on this site, including PMWC15 and the Harvard Personalized Medicine conference this past fall].

In addition, Dr. Lichtenfeld makes some interesting observations, including:

  • A “pharmageddon” in which drug development risks and costs exceed the reward, so drug developers keep their ‘wallets shut’; for example, even for targeted therapies it now takes roughly $12 billion (US) to develop a drug, versus $2 billion years ago
  • Drugs are still drugs, and failure in clinical trials remains a huge risk
  • “Eroom’s Law” (“Moore’s Law” spelled backwards, with the opposite effect) – increasing costs with decreasing success
  • A limited market for drugs targeted to a select mutation; what he calls “slice and dice”

The pros and cons of focusing solely on targeted therapeutic drug development versus using a systems biology approach were discussed at the 2013 Institute of Medicine National Cancer Policy Summit:

  • Andrea Califano, PhD – precision medicine makes predictions based on statistical associations, whereas systems biology makes predictions based on a physical regulatory model
  • Spyro Mousses, PhD – open biomedical knowledge and private patient data should be combined into a systems oncology clearinghouse: an evolving network linking drugs, genomic data, and multiscalar models
  • Razelle Kurzrock, MD – what if every patient with metastatic disease is genomically unique? Problems with the model of smaller trials (so-called N-of-1 studies) of genetically similar disease: drugs may not be easily acquired or repurposed, and regulatory burdens are greater

So, discoveries of oncogenes, tumor suppressors, mutant variants, high-end sequencing, and the genomics and bioinformatics era may have led to the advent of targeted chemotherapies for genetically well-defined patient populations, a different focus in chemotherapy development

… but as long as we have the conversation open I have no fear of myopia within the field, and multiple viewpoints on origins and therapeutic strategies will continue to develop for years to come.

References

  1. Parada LF, Tabin CJ, Shih C, Weinberg RA: Human EJ bladder carcinoma oncogene is homologue of Harvey sarcoma virus ras gene. Nature 1982, 297(5866):474-478.
  2. Friend SH, Bernards R, Rogelj S, Weinberg RA, Rapaport JM, Albert DM, Dryja TP: A human DNA segment with properties of the gene that predisposes to retinoblastoma and osteosarcoma. Nature 1986, 323(6089):643-646.
  3. Hahn WC, Counter CM, Lundberg AS, Beijersbergen RL, Brooks MW, Weinberg RA: Creation of human tumour cells with defined genetic elements. Nature 1999, 400(6743):464-468.
  4. Weinberg RA: Coming full circle-from endless complexity to simplicity and back again. Cell 2014, 157(1):267-271.

 

Other posts on this site on The War on Cancer and Origins of Cancer include:

 

2013 Perspective on “War on Cancer” on December 23, 1971

Is the Warburg Effect the Cause or the Effect of Cancer: A 21st Century View?

World facing cancer ‘tidal wave’, warns WHO

2013 American Cancer Research Association Award for Outstanding Achievement in Chemistry in Cancer Research: Professor Alexander Levitzki

Genomics and Metabolomics Advances in Cancer

The Changing Economics of Cancer Medicine: Causes for the Vanishing of Independent Oncology Groups in the US

Cancer Research Pioneer, after 71 years of Immunology Lab Research, Herman Eisen, MD, MIT Professor Emeritus of Biology, dies at 96

My Cancer Genome from Vanderbilt University: Matching Tumor Mutations to Therapies & Clinical Trials

Articles on Cancer-Related Topic in http://pharmaceuticalintelligence.com Scientific Journal

Issues in Personalized Medicine in Cancer: Intratumor Heterogeneity and Branched Evolution Revealed by Multiregion Sequencing

Issues in Personalized Medicine: Discussions of Intratumor Heterogeneity from the Oncology Pharma forum on LinkedIn

Introduction – The Evolution of Cancer Therapy and Cancer Research: How We Got Here?




Artificial Intelligence Versus the Scientist: Who Will Win?

Will DARPA Replace the Human Scientist: Not So Fast, My Friend!

Writer, Curator: Stephen J. Williams, Ph.D.


An article in last month’s issue of Science by Jia You, “DARPA Sets Out to Automate Research”[1], gave a glimpse of how science could be conducted in the future: without scientists. The article focused on the U.S. Defense Advanced Research Projects Agency (DARPA) program called ‘Big Mechanism’, a $45 million effort to develop computer algorithms that read scientific journal papers with the ultimate goal of extracting enough information to design hypotheses and the next set of experiments,

all without human input.

The head of the project, artificial intelligence expert Paul Cohen, says the overall goal is to help scientists cope with the complexity of massive amounts of information. As Paul Cohen stated for the article:

“Just when we need to understand highly connected systems as systems, our research methods force us to focus on little parts.”

The Big Mechanism project aims to design computer algorithms that critically read journal articles, much as scientists do, to determine what information contributes to the knowledge base and how.

As a proof of concept, DARPA is attempting to model Ras-mutation-driven cancers using previously published literature, in three main steps:

  1. Natural Language Processing: Machines read literature on cancer pathways and convert information to computational semantics and meaning

One team is focused on extracting details on experimental procedures, mining certain phraseology to gauge a paper’s evidentiary worth (for example, hedged phrases like ‘we suggest’ or ‘suggests a role in’ might be scored as weak, whereas assertive phrases like ‘we prove’ or ‘provide evidence’ might flag an article as worthwhile to curate; a minimal sketch of this cue-phrase idea follows the list below). Another team, led by a computational linguistics expert, will design systems to map the meanings of sentences.

  2. Integrate each piece of knowledge into a computational model representing the role of the Ras pathway in oncogenesis.
  3. Produce hypotheses and propose experiments based on the knowledge base, which can be experimentally verified in the laboratory.
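
The toy scorer below is only an illustration of the cue-phrase idea mentioned above; the phrase lists and the scoring rule are my own assumptions, not the DARPA teams’ actual lexicons or methods.

```python
# Toy cue-phrase scorer: counts hedging vs. assertive phrases in a passage to
# give a crude "evidentiary strength" score. Phrase lists and the scoring rule
# are illustrative assumptions only.
import re

HEDGING = ["we suggest", "suggests a role in", "may contribute", "could be involved"]
ASSERTIVE = ["we prove", "we demonstrate", "provide evidence", "we show that"]

def evidence_score(text: str) -> float:
    """Return (assertive - hedging) cue-phrase counts per 100 words."""
    lower = text.lower()
    n_hedge = sum(len(re.findall(re.escape(p), lower)) for p in HEDGING)
    n_assert = sum(len(re.findall(re.escape(p), lower)) for p in ASSERTIVE)
    n_words = max(len(lower.split()), 1)
    return 100.0 * (n_assert - n_hedge) / n_words

passage = ("We show that mutant KRAS sustains MEK signaling and provide evidence "
           "that this suggests a role in tumor maintenance.")
print(f"evidence score: {evidence_score(passage):+.2f}")
```

A real system would of course need far richer linguistic features than raw phrase counts, which is exactly why a separate team is working on mapping the meanings of whole sentences.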

The Human No Longer Needed? Not So Fast, My Friend!

The problems the DARPA research teams are encountering include:

  • Need for data verification
  • Text mining and curation strategies
  • Incomplete knowledge base (past, current and future)
  • Molecular biology does not necessarily “require causal inference” to the degree that other fields do

Verification

Notice that this verification step (step 3) requires physical lab work, as do all other ‘omics strategies and computational biology projects. As with high-throughput microarray screens, verification is usually needed, either by conducting qPCR or by validating interesting genes in a phenotypic (expression) system. In addition, there has been an ongoing issue surrounding the validity and reproducibility of some research studies and data.

See Importance of Funding Replication Studies: NIH on Credibility of Basic Biomedical Studies

Therefore, as DARPA attempts to recreate the Ras pathway from published literature and suggest new pathways/interactions, it will be necessary to experimentally confirm certain points (protein interactions or modification events, signaling events) in order to validate the computer model.

Text-Mining and Curation Strategies

The Big Mechanism project is starting very small, which reflects some of the challenges of scale in this project. Researchers were given only six paragraph-long passages and a rudimentary model of the Ras pathway in cancer, and then asked to automate a text-mining strategy to extract as much useful information as possible. Unfortunately, this strategy could be fraught with issues frequently encountered in the biocuration community, namely:

Manual or automated curation of scientific literature?

Biocurators, the scientists who painstakingly sort through the voluminous scientific literature to extract and then organize relevant data into accessible databases, have debated whether manual, automated, or a combination of both curation methods [2] achieves the highest accuracy for extracting the information needed to enter into a database. Abigail Cabunoc, a lead developer for the Ontario Institute for Cancer Research’s WormBase (a database of nematode genetics and biology) and Lead Developer at the Mozilla Science Lab, noted on her blog, in covering the lively debate on biocuration methodology at the Seventh International Biocuration Conference (#ISB2014), that the massive amount of information will require a Herculean effort regardless of the methodology.

Although I will have a future post on the advantages/disadvantages and tools/methodologies of manual vs. automated curation, there is a great article on researchinformation.info, “Extracting More Information from Scientific Literature”; see also “The Methodology of Curation for Scientific Research Findings” and “Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison” for manual curation methodologies, and “A MOD(ern) perspective on literature curation” for a nice workflow paper on the International Society for Biocuration site.

The Big Mechanism team decided on a fully automated approach to text-mine their limited literature set for relevant information; however, they were able to extract only 40% of the information relevant to the given model from these six paragraphs. Although the investigators were happy with this percentage, most biocurators, whether using a manual or automated method to extract information, would consider 40% a low success rate. Biocurators, regardless of method, have reported the ability to extract 70-90% of relevant information from the whole literature (for example, for the Comparative Toxicogenomics Database)[3-5].

Incomplete Knowledge Base

In an earlier posting (actually a press release for our first e-book) I discussed the problem of the “data deluge” we are experiencing in the scientific literature, as well as the plethora of ‘omics experimental data that needs to be curated.

Tackling the problem of scientific and medical information overload


Figure. The number of papers listed in PubMed (excluding reviews) per ten-year period has increased steadily since 1970.

Analyzing and sharing the vast amounts of scientific knowledge has never been so crucial to innovation in the medical field. The publication rate has increased steadily since the 1970s, with a 50% increase in the number of original research articles published from the 1990s to the 2000s. This massive amount of biomedical and scientific information has presented the unique problem of information overload, and a critical need for methodology and expertise to organize, curate, and disseminate this diverse information for scientists and clinicians. Dr. Larry Bernstein, President of Triplex Consulting and previously chief of pathology at New York’s Methodist Hospital, concurs that “the academic pressures to publish, and the breakdown of knowledge into ‘silos’, have contributed to this knowledge explosion, and although the literature is now online and edited, much of this information is out of reach to the very brightest clinicians.”
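
As a rough sanity check on that trend, per-decade counts can be pulled from NCBI’s public E-utilities; the search term below is an illustrative choice and will not exactly match the query behind the figure.

```python
# Rough reproduction of the publication-growth trend using NCBI E-utilities
# (esearch). The search term is an illustrative assumption; PubMed indexing
# and the figure's exact query may differ. Requires the `requests` package.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str, start_year: int, end_year: int) -> int:
    """Count PubMed records matching `term` published in [start_year, end_year]."""
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",        # filter by publication date
        "mindate": str(start_year),
        "maxdate": str(end_year),
        "retmode": "json",
    }
    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

term = "cancer NOT review[Publication Type]"   # illustrative query
for start in range(1970, 2020, 10):
    print(f"{start}-{start + 9}: {pubmed_count(term, start, start + 9):,} papers")
```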

Traditionally, organization of biomedical information has been the realm of the literature review, but most reviews are performed years after discoveries are made and, given the rapid pace of new discoveries, this is looking like an outdated model. In addition, most medical searches depend on keywords, adding complexity for investigators trying to find the material they require. Finally, medical researchers and professionals are recognizing the need to converse with each other, in real time, about the impact new discoveries may have on their research and clinical practice.

These issues require a people-based strategy: experts across a diverse, cross-integrative range of medical topics who can provide in-depth understanding of the current research and challenges in each field, as well as a more concept-based search platform. To address this need, human intermediaries known as scientific curators narrow down the information and provide critical context and analysis of medical and scientific information in an interactive manner powered by Web 2.0, with curators acting as the “researcher 2.0”. By providing these hybrid networks, this curation offers better organization of, and visibility into, the critical information useful for the next innovations in academic, clinical, and industrial research.

Yaneer Bar-Yam of the New England Complex Systems Institute was not confident that using details from past knowledge could produce adequate roadmaps for future experimentation, noting for the article: “The expectation that the accumulation of details will tell us what we want to know is not well justified.”

In a recent post I curated findings from four lung cancer ‘omics studies and presented some graphics from a bioinformatic analysis of the novel genetic mutations resulting from these studies (see link below)

Multiple Lung Cancer Genomic Projects Suggest New Targets, Research Directions for Non-Small Cell Lung Cancer

which showed that, while multiple genetic mutations and related pathway ontologies were well documented in the lung cancer literature, many significant genetic mutations and pathways identified in the genomic studies had little literature attributed to them.

[Figure: KEGG pathway ‘literomics’ analysis for lung cancer.]

This ‘literomics’ analysis reveals a large gap between our knowledge base and the data resulting from large translational ‘omics’ studies.

Different Literature Analysis Approaches Yield Different Perspectives

A ‘literomics’ approach focuses on what we do NOT know about genes, proteins, and their associated pathways, while a text-mining machine-learning algorithm focuses on building a knowledge base to determine the next line of research or what needs to be measured. Using each approach can give us different perspectives on ‘omics data.
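
A back-of-the-envelope sketch of what such a ‘literomics’ gap analysis might look like is shown below; the genes, mutation frequencies, and publication counts are placeholder values for illustration only, not figures from the cited studies.

```python
# Sketch of a 'literomics' gap analysis: compare how often a gene is mutated
# in an omics cohort against how much literature co-mentions it with the
# cancer type. All numbers below are placeholder values, not study data.

omics_hits = {        # gene -> fraction of tumors carrying a mutation (illustrative)
    "TP53": 0.46, "KRAS": 0.33, "KEAP1": 0.17, "STK11": 0.17, "RBM10": 0.08,
}
pubmed_hits = {       # gene -> number of papers co-mentioning lung cancer (illustrative)
    "TP53": 9500, "KRAS": 7200, "KEAP1": 420, "STK11": 380, "RBM10": 25,
}

def attention_gap(mut_freq: float, n_papers: int) -> float:
    """Higher score = frequently mutated but thinly covered in the literature."""
    return mut_freq / (1 + n_papers)

ranked = sorted(omics_hits,
                key=lambda g: attention_gap(omics_hits[g], pubmed_hits[g]),
                reverse=True)
for gene in ranked:
    print(f"{gene:6s} mutated in {omics_hits[gene]:.0%} of tumors, "
          f"{pubmed_hits[gene]:>5d} papers")
```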

Deriving Causal Inference

Ras is one of the best-studied and best-characterized oncogenes, and the mechanisms behind Ras-driven oncogenesis are well understood. This, according to computational biologist Larry Hunt of Smart Information Flow Technologies, makes Ras a great starting point for the Big Mechanism project. As he states, “Molecular biology is a good place to try (developing a machine learning algorithm) because it’s an area in which common sense plays a minor role.”

Even though some may think the project would not be able to tackle other mechanisms, such as those involving epigenetic factors, UCLA causality expert Judea Pearl, Ph.D. (head of the UCLA Cognitive Systems Laboratory) feels it is possible for machine learning to bridge this gap. As summarized from his lecture at Microsoft:

“The development of graphical models and the logic of counterfactuals have had a marked effect on the way scientists treat problems involving cause-effect relationships. Practical problems requiring causal information, which long were regarded as either metaphysical or unmanageable can now be solved using elementary mathematics. Moreover, problems that were thought to be purely statistical, are beginning to benefit from analyzing their causal roots.”

According to Pearl, the investigator must first:

1) articulate the causal assumptions, and

2) define the research question in counterfactual terms.

It is then possible to design an inference system, using his causal calculus, that tells the investigator what needs to be measured.
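
As a textbook illustration of what that calculus delivers (this example is mine, not from the article), Pearl’s back-door adjustment shows how an interventional quantity can be computed from purely observational measurements once a suitable set of covariates Z has been identified:

```latex
% Back-door adjustment: if Z blocks every back-door path from X to Y
% (and contains no descendant of X), then the effect of intervening on X is
P(y \mid \mathrm{do}(x)) \;=\; \sum_{z} P(y \mid x, z)\, P(z)
% i.e., the causal effect is identifiable from observational data alone.
```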

To watch a video of Dr. Judea Pearl’s April 2013 lecture at Microsoft Research Machine Learning Summit 2013 (“The Mathematics of Causal Inference: with Reflections on Machine Learning”), click here.

The key for the Big Mechanism project may be in correcting for the variables among studies, in essence building a model system that does not rely on fully controlled conditions. Dr. Peter Spirtes of Carnegie Mellon University in Pittsburgh, PA is developing the TETRAD project with two goals: 1) to specify and prove under what conditions it is possible to reliably infer causal relationships from background knowledge and statistical data not obtained under fully controlled conditions, and 2) to develop, analyze, implement, test, and apply practical, provably correct computer programs for inferring causal structure under conditions where this is possible.
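
TETRAD itself is a dedicated software package, but the basic move its constraint-based searches rely on can be sketched in a few lines: test conditional independence in observational data and use the pattern of such constraints to narrow down the causal structure. The simulation below is my own minimal illustration of that idea on synthetic data, not TETRAD code.

```python
# Minimal constraint-based check via partial correlation on a synthetic
# chain X -> Z -> Y. Illustration of the conditional-independence tests
# used by PC/TETRAD-style searches; not the TETRAD implementation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
z = 0.8 * x + rng.normal(size=n)          # Z depends on X
y = 0.8 * z + rng.normal(size=n)          # Y depends on Z only

def partial_corr(a, b, control):
    """Correlation of a and b after regressing out the control variable."""
    res_a = a - np.polyval(np.polyfit(control, a, 1), control)
    res_b = b - np.polyval(np.polyfit(control, b, 1), control)
    return stats.pearsonr(res_a, res_b)

r_xy, p_xy = stats.pearsonr(x, y)         # marginal: X and Y are correlated
r_xy_z, p_xy_z = partial_corr(x, y, z)    # given Z: correlation vanishes

print(f"corr(X, Y)     = {r_xy:.3f} (p = {p_xy:.1e})")
print(f"corr(X, Y | Z) = {r_xy_z:.3f} (p = {p_xy_z:.2f})")
```

On this toy chain, X and Y are strongly correlated marginally but essentially independent given Z, exactly the kind of constraint a causal-structure search exploits to rule out a direct X-to-Y edge.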

In summary, such projects and algorithms will provide investigators with the what, and possibly the how, of what should be measured.

So for now it seems we are still needed.

References

  1. You J: Artificial intelligence. DARPA sets out to automate research. Science 2015, 347(6221):465.
  2. Biocuration 2014: Battle of the New Curation Methods [http://blog.abigailcabunoc.com/biocuration-2014-battle-of-the-new-curation-methods]
  3. Davis AP, Johnson RJ, Lennon-Hopkins K, Sciaky D, Rosenstein MC, Wiegers TC, Mattingly CJ: Targeted journal curation as a method to improve data currency at the Comparative Toxicogenomics Database. Database : the journal of biological databases and curation 2012, 2012:bas051.
  4. Wu CH, Arighi CN, Cohen KB, Hirschman L, Krallinger M, Lu Z, Mattingly C, Valencia A, Wiegers TC, John Wilbur W: BioCreative-2012 virtual issue. Database : the journal of biological databases and curation 2012, 2012:bas049.
  5. Wiegers TC, Davis AP, Mattingly CJ: Collaborative biocuration–text-mining development task for document prioritization for curation. Database : the journal of biological databases and curation 2012, 2012:bas037.

Other posts on this site on Artificial Intelligence, Curation Methodology, and the Philosophy of Science include:

Inevitability of Curation: Scientific Publishing moves to embrace Open Data, Libraries and Researchers are trying to keep up

A Brief Curation of Proteomics, Metabolomics, and Metabolism

The Methodology of Curation for Scientific Research Findings

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

The growing importance of content curation

Data Curation is for Big Data what Data Integration is for Small Data

Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation The Art of Scientific & Medical Curation

Exploring the Impact of Content Curation on Business Goals in 2013

Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison

conceived: NEW Definition for Co-Curation in Medical Research

Reconstructed Science Communication for Open Access Online Scientific Curation

Search Results for ‘artificial intelligence’

 The Simple Pictures Artificial Intelligence Still Can’t Recognize

Data Scientist on a Quest to Turn Computers Into Doctors

Vinod Khosla: “20% doctor included”: speculations & musings of a technology optimist or “Technology will replace 80% of what doctors do”

Where has reason gone?
