Curators: Larry H. Bernstein, M.D., FACP & Stephen J. Williams, Ph.D.
For most of the history of chemotherapy drug development, the likely mechanisms of drug resistance could be surmised from a drug’s pharmacologic mechanism of action. In other words, a tumor would develop resistance merely by altering the pathways or systems on which the drug relied for its mechanism of action. For example, as elucidated in later chapters in this book, most cytotoxic chemotherapies like cisplatin and cyclophosphamide were developed to bind DNA and disrupt the cycling cell, resulting in cell cycle arrest and eventually cell death, or producing a degree of genotoxicity that leads to extensive DNA fragmentation. The efficacy of these DNA-damaging agents was shown to rely on their ability to form DNA adducts and lesions; therefore, increased DNA repair could render a tumor cell resistant to these drugs. In addition, if drug concentration in these cells was merely decreased, for example by enhanced drug efflux as seen with the ABC transporters, then less drug would be available to generate these DNA adducts. A plethora of literature has been generated on this particular topic.
However, in the era of chemotherapies developed against targets expressed only in tumor cells (such as Gleevec against the Bcr-Abl fusion protein in chronic myeloid leukemia), this paradigm changed: clinical cases of resistance developed rapidly soon after the advent of these compounds, and new paradigms of resistance mechanisms were discovered.
Imatinib resistance can be seen quickly after initiation of therapy
The speed of imatinib resistance is a result of rapid gene amplification of the BCR/ABL target, thereby decreasing imatinib efficacy
Although there are many other new mechanisms of resistance to personalized medicine agents (discussed later in the chapter), this post is a curation of cellular changes that are not commonly discussed in reviews of chemoresistance, separated into three main categories:
The advent of Gleevec (imatinib) ushered in a new era of chemotherapy, a personalized medicine approach, and was a lifesaver to chronic myeloid leukemia (CML) patients whose tumors displayed expression of the Bcr-Abl fusion gene. However, it was not long before clinical resistance to this therapy was seen, and it was shown that amplification of the drug target can lead to tumor outgrowth despite adequate drug exposure. le Coutre, Weisberg, and Mahon [23, 24, 25] all independently generated imatinib-resistant clones through serial passage of the cells in imatinib-containing media and demonstrated elevated Abl kinase activity due to a genetic amplification of the Bcr–Abl sequence. However, all of these samples were derived in vitro and may not represent a true mode of clinical resistance. Nevertheless, Gorre et al. [26] obtained specimens directly from patients demonstrating imatinib resistance and, using fluorescence in situ hybridization analysis, identified genetic duplication of the Bcr–Abl gene as one possible source of the resistance. Additional sporadic examples of amplification of the Bcr–Abl sequence have been described clinically, but the majority of patients presenting with either primary or secondary imatinib resistance fail to demonstrate Abl amplification as a primary mode of treatment failure.
The 2-phenylaminopyrimidine derivative STI571 has been shown to selectively inhibit the tyrosine kinase domain of the oncogenic bcr/abl fusion protein. The activity of this inhibitor has been demonstrated so far both in vitro with bcr/abl expressing cells derived from leukemic patients, and in vivo on nude mice inoculated with bcr/abl positive cells. Yet, no information is available on whether leukemic cells can develop resistance to bcr/abl inhibition. The human bcr/abl expressing cell line LAMA84 was cultured with increasing concentrations of STI571. After approximately 6 months of culture, a new cell line was obtained and named LAMA84R. This newly selected cell line showed an IC50 for the STI571 (1.0 microM) 10-fold higher than the IC50 (0.1 microM) of the parental sensitive cell line. Treatment with STI571 was shown to increase both the early and late apoptotic fraction in LAMA84 but not in LAMA84R. The induction of apoptosis in LAMA84 was associated with the activation of caspase 3-like activity, which did not develop in the resistant LAMA84R cell line. LAMA84R cells showed increased levels of bcr/abl protein and mRNA when compared to LAMA84 cells. FISH analysis with BCR- and ABL-specific probes in LAMA84R cells revealed the presence of a marker chromosome containing approximately 13 to 14 copies of the BCR/ABL gene. Thus, overexpression of the Bcr/Abl protein mediated through gene amplification is associated with and probably determines resistance of human leukemic cells to STI571 in vitro. (Blood. 2000;95:1758-1766)
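To make the fold-resistance arithmetic in the abstract concrete, here is a minimal sketch (with entirely hypothetical viability numbers chosen to roughly match the reported IC50 values) of how IC50s are typically estimated from dose-response data with a four-parameter Hill model, and how the roughly 10-fold shift for the resistant line follows from them; it is an illustration, not the paper’s actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Four-parameter Hill model: fraction of viable cells vs. drug concentration (uM)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

# Hypothetical viability data (fraction surviving) over a range of STI571 concentrations (uM)
conc = np.array([0.001, 0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
viability_parental  = np.array([1.00, 0.95, 0.80, 0.50, 0.25, 0.10, 0.05, 0.03])  # LAMA84-like
viability_resistant = np.array([1.00, 1.00, 0.98, 0.95, 0.80, 0.50, 0.25, 0.08])  # LAMA84R-like

p0 = [0.0, 1.0, 0.5, 1.0]                                  # bottom, top, IC50, Hill slope
bounds = ([0.0, 0.5, 1e-3, 0.1], [0.2, 1.2, 100.0, 5.0])    # keep the fit in a sensible range
ic50_parental  = curve_fit(hill, conc, viability_parental,  p0=p0, bounds=bounds)[0][2]
ic50_resistant = curve_fit(hill, conc, viability_resistant, p0=p0, bounds=bounds)[0][2]

print(f"IC50 parental  ~ {ic50_parental:.2f} uM")   # expect ~0.1 uM
print(f"IC50 resistant ~ {ic50_resistant:.2f} uM")  # expect ~1.0 uM
print(f"Fold resistance ~ {ic50_resistant / ic50_parental:.1f}x")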
This is actually the opposite of the case with other personalized therapies, like the EGFR inhibitor gefitinib, where AMPLIFICATION of the therapeutic target EGFR is correlated with a better response to the drug in non-small cell lung cancer:
The epidermal growth factor receptor (EGFR) family of receptor tyrosine kinases, including EGFR, HER2/erbB2, and HER3/erbB3, is an attractive target for antitumor strategies. Aberrant EGFR signaling is correlated with progression of various malignancies, and somatic tyrosine kinase domain mutations in the EGFR gene have been discovered in patients with non-small cell lung cancer responding to EGFR-targeting small molecular agents, such as gefitinib and erlotinib. EGFR overexpression is thought to be the principal mechanism of activation in various malignant tumors. Moreover, an increased EGFR copy number is associated with improved survival in non-small cell lung cancer patients, suggesting that increased expression of mutant and/or wild-type EGFR molecules could be molecular determinants of responses to gefitinib. However, as EGFR mutations and/or gene gains are not observed in all patients who respond partially to treatment, alternative mechanisms might confer sensitivity to EGFR-targeting agents. Preclinical studies showed that sensitivity to EGFR tyrosine kinase inhibitors depends on how closely cell survival and growth signalings are coupled with EGFR, and also with HER2 and HER3, in each cancer. This review also describes a possible association between EGFR phosphorylation and drug sensitivity in cancer cells, as well as discussing the antiangiogenic effect of gefitinib in association with EGFR activation and phosphatidylinositol 3-kinase/Akt activation in vascular endothelial cells.
Twitter, Google, LinkedIn Enter in the Curation Foray: What’s Up With That?
Reporter: Stephen J. Williams, Ph.D.
Recently Twitter announced a new feature which it hopes to use to increase engagement on its platform. Originally dubbed Project Lightning and now called Moments, this feature involves many human curators who aggregate and curate tweets surrounding individual live events (which used to be under #Live).
As Madhu Muthukumar (@justmadhu), Twitter’s Product Manager, said in a blog post describing Moments:
“Every day, people share hundreds of millions of tweets. Among them are things you can’t experience anywhere but on Twitter: conversations between world leaders and celebrities, citizens reporting events as they happen, cultural memes, live commentary on the night’s big game, and many more,” the blog post noted. “We know finding these only-on-Twitter moments can be a challenge, especially if you haven’t followed certain accounts. But it doesn’t have to be.”
Moments is a new tab on Twitter’s mobile and desktop home screens where the company will curate trending topics as they’re unfolding in real-time — from citizen-reported news to cultural memes to sports events and more. Moments will fall into five total categories, including “Today,” “News,” “Sports,” “Entertainment” and “Fun.” (Source: Fox)
What’s a challenge for Google is a direct threat to Twitter’s existence.
For all the talk about what doesn’t work in journalism, curation works. Following the news, collecting it and commenting, and encouraging discussion, is the “secret sauce” for companies like Buzzfeed, Vox, Vice and The Huffington Post, which often wind up getting more traffic from a story at, say The New York Times (NYSE:NYT), than the Times does as a result.
Curation is, in some ways, a throwback to the pre-Internet era. It’s done by people. (At least I think I’m a people.) So as odd as it is for Twitter (NYSE:TWTR) to announce it will curate live events it’s even odder to see Google (NASDAQ:GOOG) (NASDAQ:GOOGL) doing it in a project called YouTube Newswire.
BuzzFeed: a content curation platform, made for desktop as well as a mobile app, that allows sharing of curated news and viral videos.
The feel for both Twitter and Google’s content curation will be like a newspaper, with an army of human content curators determining what is the trendiest news to read or videos to watch.
BuzzFeed articles, or at least, the headlines can easily be mined from any social network but reading the whole article still requires that you open the link within the app or outside using a mobile web browser. Loading takes some time–a few seconds longer. Try browsing the BuzzFeed feed on the app and you’ll notice the obvious difference.
Google News: Less focused on social signals than textual ones, Google News uses its analytic tools to group together related stories and highlight the biggest ones. Unlike Techmeme, it’s entirely driven by algorithms, and that means it often makes weird choices. I’ve heard that Google uses social sharing signals from Google+ to help determine which stories appear on Google News, but have never heard definitive confirmation of that — and now that Google+ is all but dead, it’s mostly moot. I find Google News an unsatisfying home page, but it is a good place to search for news once you’ve found it.
Now WordPress Too!
WordPress also has announced its curation plugin called Curation Traffic.
According to WordPress
You Own the Platform, You Benefit from the Traffic
“The Curation Traffic™ System is a complete WordPress based content curation solution. Giving you all the tools and strategies you need to put content curation into action.
It is push-button simple and seamlessly integrates with any WordPress site or blog.
With Curation Traffic™, curating your first post is as easy as clicking “Curate,” and the same post that may originally have only been sent to Facebook or Twitter is now sent to your own site that you control, that you benefit from, and it still goes across all of your social sites.”
The theory: the more you share on your own platform, the more engagement, and the better the marketing experience. And with all the WordPress users out there, they already have an army of human curators.
So That’s Great For News But What About Science and Medicine?
The news and trendy topics such as fashion and music are common to most people’s experience. However, more technical areas of science, medicine, and engineering are not in most people’s domain, so aggregation of content needs a process of peer review to sort, basically, fact from fiction. On social media this is extremely important, as sensational stories of breakthroughs can spread virally without proper vetting and even influence patients’ decisions about their own personal care.
Expertise Depends on Experience
In steps the human experience. On this site (www.pharmaceuticalintelligence.com) we attempt to do just this. A consortium of M.D.s, Ph.D.s, and other medical professionals spend their own time not only aggregating topics of interest but also curating specific topics, adding insight from credible sources across the web.
For instance, using a Twitter platform, we curate #live meeting notes and tweets from meeting attendees (please see links below and links within) to give live conference coverage, and curation and analysis give rise not only to meeting engagement but also to unique insights into presentations.
In addition, the use of a WordPress platform allows easy sharing among many different social platforms including Twitter, Google+, LinkedIn, Pinterest etc.
Hopefully this will catch on, and the big powers of Twitter, Google, and Facebook will realize there exist armies of niche curation communities which they can draw on for expert curation in the biosciences.
Leaders in Pharmaceutical Business Intelligence would like to announce the First Volume of their BioMedical E-Book Series A: eBooks on Cardiovascular Diseases
This book is a comprehensive review of Nitric Oxide, its discovery, function, and related opportunities for Targeted Therapy, written by Experts, Authors, Writers. This book is a series of articles delineating the basic functioning of the NOS isoforms, their production widely by endothelial cells, the effect of NITRIC OXIDE production by endothelial cells, neutrophils, and macrophages, the effect on intercellular adhesion, and the effect of circulatory shear and turbulence on NITRIC OXIDE production. The e-Book’s articles have been published on the Open Access Online Scientific Journal since April 2012. All new articles on this subject will continue to be incorporated, as published, in real time in the e-Book, which is a live book.
We invite e-Readers to write an Article Review on Amazon for this e-Book.
All forthcoming BioMed e-Book Titles can be viewed at:
Leaders in Pharmaceutical Business Intelligence launched its Open Access Online Scientific Journal in April 2012 as a scientific, medical, and business multi-expert authoring environment in several domains of the life sciences, pharmaceutical, healthcare & medicine industries. The venture operates as an online scientific intellectual exchange at its website http://pharmaceuticalintelligence.com for curation and reporting on frontiers in biomedical and biological sciences, healthcare economics, pharmacology, pharmaceuticals & medicine. In addition, the venture publishes a Medical E-book Series available on Amazon’s Kindle platform.
Analyzing and sharing the vast and rapidly expanding volume of scientific knowledge has never been so crucial to innovation in the medical field. We are addressing the need to overcome this scientific information overload by:
delivering curation and summary interpretations of the latest findings and innovations on an open-access, Web 2.0 platform, with the future goal of providing primarily concept-driven search
providing a social platform for scientists and clinicians to enter into discussion using social media
compiling recent discoveries and issues in yearly-updated Medical E-book Series on Amazon’s mobile Kindle platform
By providing these hybrid networks, this curation offers better organization of, and visibility to, the critical information useful for the next innovations in academic, clinical, and industrial research.
Table of Contents for Perspectives on Nitric Oxide in Disease Mechanisms
Chapter 1: Nitric Oxide Basic Research
Chapter 2: Nitric Oxide and Circulatory Diseases
Chapter 3: Therapeutic Cardiovascular Targets
Chapter 4: Nitric Oxide and Neurodegenerative Diseases
Chapter 5: Bone Metabolism
Chapter 6: Nitric Oxide and Systemic Inflammatory Disease
Chapter 7: Nitric Oxide: Lung and Alveolar Gas Exchange
This e-Book is a comprehensive review of recent Original Research on METABOLOMICS and related opportunities for Targeted Therapy, written by Experts, Authors, Writers. This is the first volume of the Series D: e-Books on BioMedicine – Metabolomics, Immunology, Infectious Diseases. It is written for comprehension at the third-year medical student level, or as a reference for licensing board exams, but it is also written for the education of a first-time baccalaureate-degree reader in the biological sciences. Hopefully, it can be read with great interest by the undergraduate student who is undecided in the choice of a career. The results of Original Research are gaining value added for the e-Reader by the Methodology of Curation. The e-Book’s articles have been published on the Open Access Online Scientific Journal since April 2012. All new articles on this subject will continue to be incorporated, as published, with periodic updates.
We invite e-Readers to write an Article Review on Amazon for this e-Book.
All forthcoming BioMed e-Book Titles can be viewed at:
Table of Contents for Metabolic Genomics & Pharmaceutics, Vol. I
Chapter 1: Metabolic Pathways
Chapter 2: Lipid Metabolism
Chapter 3: Cell Signaling
Chapter 4: Protein Synthesis and Degradation
Chapter 5: Sub-cellular Structure
Chapter 6: Proteomics
Chapter 7: Metabolomics
Chapter 8: Impairments in Pathological States: Endocrine Disorders; Stress Hypermetabolism and Cancer
Chapter 9: Genomic Expression in Health and Disease
Cancer Biology and Genomics for Disease Diagnosis (Vol. I) Now Available for Amazon Kindle
Reporter: Stephen J Williams, PhD
Leaders in Pharmaceutical Business Intelligence would like to announce the First volume of their BioMedical E-Book Series C: e-Books on Cancer & Oncology
This e-Book is a comprehensive review of recent Original Research on Cancer & Genomics, including related opportunities for Targeted Therapy, written by Experts, Authors, Writers. This ebook highlights some of the recent trends and discoveries in cancer research and cancer treatment, with particular attention to how new technological and informatics advancements have ushered in paradigm shifts in how we think about, diagnose, and treat cancer. The results of Original Research are gaining value added for the e-Reader by the Methodology of Curation. The e-Book’s articles have been published on the Open Access Online Scientific Journal since April 2012. All new articles on this subject will continue to be incorporated, as published, with periodic updates.
We invite e-Readers to write an Article Review on Amazon for this e-Book. All forthcoming BioMed e-Book Titles can be viewed at:
Table of Contents for Cancer Biology and Genomics for Disease Diagnosis
Preface
Introduction The evolution of cancer therapy and cancer research: How we got here?
Part I. Historical Perspective of Cancer Demographics, Etiology, and Progress in Research
Chapter 1: The Occurrence of Cancer in World Populations
Chapter 2: Rapid Scientific Advances Change Our View on How Cancer Forms
Chapter 3: A Genetic Basis and Genetic Complexity of Cancer Emerge
Chapter 4: How Epigenetic and Metabolic Factors Affect Tumor Growth
Chapter 5: Advances in Breast and Gastrointestinal Cancer Research Supports Hope for Cure
Part II. Advent of Translational Medicine, “omics”, and Personalized Medicine Ushers in New Paradigms in Cancer Treatment and Advances in Drug Development
Chapter 6: Treatment Strategies
Chapter 7: Personalized Medicine and Targeted Therapy
Part III. Translational Medicine, Genomics, and New Technologies Converge to Improve Early Detection
Chapter 8: Diagnosis
Chapter 9: Detection
Chapter 10: Biomarkers
Chapter 11: Imaging In Cancer
Chapter 12: Nanotechnology Imparts New Advances in Cancer Treatment, Detection, & Imaging
Epilogue by Larry H. Bernstein, MD, FACP: Envisioning New Insights in Cancer Translational Biology
Leaders in Pharmaceutical Intelligence Presentation at The Life Sciences Collaborative
Curator: Stephen J. Williams, Ph.D.; Website Analytics: Adam Sonnenberg, BSc
Leaders in Pharmaceutical Intelligence presented their ongoing efforts to develop an open-access scientific and medical publishing and curation platform to The Life Sciences Collaborative, an executive pharmaceutical and biopharma networking group in the Philadelphia/New Jersey area.
The Pennsylvania (PA) and New Jersey (NJ) biotech environment had been hit hard by the recession and the loss of anchor big pharma companies; however, as highlighted by our interviews in “The Vibrant Philly Biotech Scene” and by other news outlets, additional issues are preventing the PA/NJ area from achieving its full potential (discussed also with the LSC).
War on Cancer Needs to Refocus to Stay Ahead of Disease Says Cancer Expert
Writer, Curator: Stephen J. Williams, Ph.D.
UPDATED 1/08/2020
Is one of the world’s most prominent cancer researchers throwing in the towel on the War On Cancer? Not throwing in the towel, just reminding us that cancer is more complex than just a genetic disease, and in the process, giving kudos to those researchers who focus on non-genetic aspects of the disease (see Dr. Larry Bernstein’s article Is the Warburg Effect the Cause or the Effect of Cancer: A 21st Century View?).
National Public Radio (NPR) has been conducting an interview series with MIT cancer biology pioneer, founding member of the Whitehead Institute for Biomedical Research, National Academy of Sciences member, and National Medal of Science awardee Robert A. Weinberg, Ph.D., who co-discovered one of the first human oncogenes (Ras) [1], isolated the first tumor suppressor (Rb) [2], and (with Dr. Bill Hahn) first proved that cells could become tumorigenic after discrete genetic lesions [3]. In the latest NPR piece, Why The War On Cancer Hasn’t Been Won (seen on NPR’s blog by Richard Harris), Dr. Weinberg discusses a comment from an essay he wrote in the journal Cell [4]: that, in recent years, cancer research may have focused too much on the genetic basis of cancer at the expense of the multifaceted etiology of the disease, including the roles of metabolism, immunity, and physiology. Cancer is the second leading cause of medically related deaths in the developed world. However, concerted efforts among most developed nations to eradicate the disease, such as increased government funding for cancer research and a mandated ‘war on cancer’ in the 1970s, have translated into remarkable improvements in diagnosis, early detection, and cancer survival rates for many individual cancers. For example, survival rates for breast and colon cancer have improved dramatically over the last 40 years. In the UK, overall median survival times have improved from one year in 1972 to 5.8 years for patients diagnosed in 2007. In the US, overall 5-year survival improved from 50% for all adult cancers and 62% for childhood cancers in 1972 to 68% and 82%, respectively, in 2007. However, for some cancers, including lung, brain, pancreatic, and ovarian cancer, there has been little improvement in survival rates since the “war on cancer” started.
As Weinberg said, in the 1950s medical researchers saw cancer as “an extremely complicated process that needed to be described in hundreds, if not thousands, of different ways.” Then scientists tried to find a unifying principle, first focusing on viruses as the cause of cancer (for example, Rous sarcoma virus; see also Dr. Gallo’s book on his early research on cancer, virology, and HIV, Virus Hunting: AIDS, Cancer & the Human Retrovirus: A Story of Scientific Discovery).
However (as the blog article goes on) “that idea was replaced by the notion that cancer is all about wayward genes.”
“The thought, at least in the early 1980s, was that there were a small number of these mutant, cancer-causing oncogenes, and therefore that one could understand a whole disparate group of cancers simply by studying these mutant genes that seemed to be present in many of them,” Weinberg says. “And this gave the notion, the illusion over the ensuing years, that we would be able to understand the laws of cancer formation the way we understand, with some simplicity, the laws of physics, for example.”
According to Weinberg, this gene-directed unifying theory has given way as recent evidence points back once again to a multi-faceted view of cancer etiology.
But this is not a revolutionary or conflicting idea for Dr. Weinberg, being a recipient of the 2007 Otto Warburg Medal and focusing his latest research on complex systems such as angiogenesis, cell migration, and epithelial-stromal interactions.
In fact, it was Dr. Weinberg, together with Dr. Douglas Hanahan, who formulated eight governing principles, or Hallmarks of Cancer:
Maintaining Proliferative Signals
Avoiding Immune Destruction
Evading Growth Suppressors
Resisting Cell Death
Becoming Immortal
Angiogenesis
Deregulating Cellular Energy
Activating Invasion and Metastasis
Taken together, these hallmarks represent the common features that tumors have, and may involve genetic or non-genetic (epigenetic) lesions … a multi-modal view of cancer that spans time and disciplines. As reviewed by both Dr. Larry Bernstein and me in the e-book Volume One: Cancer Biology and Genomics for Disease Diagnosis, each scientific discipline, whether that of the pharmacologist, toxicologist, virologist, molecular biologist, physiologist, or cell biologist, has contributed greatly to our total understanding of this disease, each from its own unique perspective. This leads to a “multi-modal” view of cancer etiology, diagnosis, and treatment. Many of the improvements in survival rates are a direct result of the massive increase in the knowledge of tumor biology obtained through ardent basic research. Breakthrough discoveries regarding oncogenes, cancer cell signaling, survival and regulated death mechanisms, tumor immunology, genetics and molecular biology, biomarker research, and now nanotechnology and imaging, have directly led to the advances we now see in early detection, chemotherapy, and personalized medicine, as well as new therapeutic modalities such as cancer vaccines, immunotherapies, and combination chemotherapies. Molecular and personalized therapies such as trastuzumab and aromatase inhibitors for breast cancer, imatinib for CML and GIST-related tumors, and bevacizumab for advanced colorectal cancer have been a direct result of molecular discoveries into the nature of cancer. This then leads to an interesting question (one to be tackled in another post):
Would shifting focus less on cancer genome and back to cancer biology limit the progress we’ve made in personalized medicine?
In a 2012 post, Genomics And Targets For The Treatment Of Cancer: Is Our New World Turning Into “Pharmageddon” Or Are We On The Threshold Of Great Discoveries?, Dr. Leonard Lichtenfeld, MD, Deputy Chief Medical Officer for the ACS, comments on the impact that genomics and personalized strategies have had on oncology drug development. As he notes, in the past chemotherapy development was somewhat ‘hit or miss’, and the dream and promise of genomics suggested an era of targeted therapy, where drug development would be more ‘rational’ and targets easily identifiable.
To quote his post:

“That was the dream, and there have been some successes–even apparent cures or long term control–with the use of targeted medicines with biologic drugs such as Gleevec®, Herceptin® and Avastin®. But I think it is fair to say that the progress and the impact hasn’t been quite what we thought it would be. Cancer has proven a wily foe, and every time we get answers to questions what we usually get are more questions that need more answers. The complexity of the cancer cell is enormous, and its adaptability and the genetic heterogeneity of even primary cancers (as recently reported in a research paper in the New England Journal of Medicine) has been surprising, if not (realistically) unexpected.”
In addition, Dr. Lichtenfeld makes some interesting observations including:
A “pharmageddon”, where drug development risks and costs exceed the reward, so drug developers keep their ‘wallets shut’. For example, even for targeted therapies it now takes $12 billion (US) to develop a drug, versus $2 billion years ago
Drugs are still drugs and failure in clinical trials is still a huge risk
“Eroom’s Law” (like “Moore’s Law” but opposite effect) – increasing costs with decreasing success
Limited market for drugs targeted to a select mutant; what he called “slice and dice”
Andrea Califano, PhD – Precision medicine predictions are based on statistical associations, whereas systems biology predictions are based on a physical regulatory model
Spyro Mousses, PhD – open biomedical knowledge and private patient data should be combined into a systems oncology clearinghouse, an evolving network linking drugs, genomic data, and evolving multiscalar models
Razelle Kurzrock, MD – What if every patient with metastatic disease is genomically unique? A problem with the model of smaller trials (so-called N=1 studies) of genetically similar disease: drugs may not be easily acquired or re-purposed, and regulatory burdens are greater
So, discoveries of oncogenes, tumor suppressors, and mutant variants, high-end sequencing, and the genomics and bioinformatics era may have led to the advent of targeted chemotherapies with genetically well-defined patient populations, a different focus in chemotherapy development
… but as long as we have the conversation open I have no fear of myopia within the field, and multiple viewpoints on origins and therapeutic strategies will continue to develop for years to come.
References
Parada LF, Tabin CJ, Shih C, Weinberg RA: Human EJ bladder carcinoma oncogene is homologue of Harvey sarcoma virus ras gene. Nature 1982, 297(5866):474-478.
Friend SH, Bernards R, Rogelj S, Weinberg RA, Rapaport JM, Albert DM, Dryja TP: A human DNA segment with properties of the gene that predisposes to retinoblastoma and osteosarcoma. Nature 1986, 323(6089):643-646.
Hahn WC, Counter CM, Lundberg AS, Beijersbergen RL, Brooks MW, Weinberg RA: Creation of human tumour cells with defined genetic elements. Nature 1999, 400(6743):464-468.
Weinberg RA: Coming full circle – from endless complexity to simplicity and back again. Cell 2014, 157(1):267-271.
The cancer death rate in the United States fell 2.2 percent in 2017 — the biggest single-year drop ever reported — propelled by gains against lung cancer, the American Cancer Society said Wednesday.
Declines in the mortality rate for lung cancer have accelerated in recent years in response to new treatments and falling smoking rates, said Rebecca Siegel, lead author of Cancer Statistics 2020, the latest edition of the organization’s annual report on cancer trends.
The improvement in 2017, the most recent year for which data is available, is part of a long-term drop in cancer mortality that reflects, to a large extent, the smoking downturn. Since peaking in 1991, the cancer death rate has fallen 29 percent, which translates into 2.9 million fewer deaths.
Norman “Ned” Sharpless, director of the National Cancer Institute, which was not involved in the report, said the data reinforces that “we are making steady progress” on cancer. For lung cancer, he pointed to new immunotherapy treatments and so-called targeted therapies that stop the action of molecules key to cancer growth. He predicted that the mortality rate would continue to fall “as we get better at using these therapies.” Multiple clinical trials are exploring how to combine the new approaches with older ones, such as chemotherapy.
Sharpless expressed concern, however, that progress against cancer would be undermined by increased obesity, which is a risk factor for several malignancies.
The cancer society report projected 1.8 million new cases of cancer in the United States this year and more than 606,000 deaths. Nationally, cancer is the second-leading cause of death after heart disease in both men and women. It is the No. 1 cause in many states, and among Hispanic and Asian Americans and people younger than 80, the report said.
The cancer death rate is defined as deaths per 100,000 people. The cancer society has been reporting the rate since 1930.
Because lung cancer is the leading cause of cancer deaths, accounting for 1 in 4, any change in the mortality rate has a large effect on the overall cancer death rate, Siegel noted.
She described the gains against lung cancer, and against another often deadly cancer, melanoma, as “exciting.” But, she added, “the news this year is mixed” because of slower progress against colorectal, breast and prostate cancers. Those cancers often can be detected early by screening, she said.
The report said substantial racial and geographic disparities remain for highly preventable cancers, such as cervical cancer, and called for “the equitable application” of cancer control measures.
In recent years, melanoma has shown the biggest mortality-rate drop of any cancer. That’s largely a result of breakthrough treatments such as immunotherapy, which unleashes the patient’s own immune system to fight the cancer and was approved for advanced melanoma in 2011.
Other posts on this site on The War on Cancer and Origins of Cancer include:
Artificial Intelligence Versus the Scientist: Who Will Win?
Will DARPA Replace the Human Scientist: Not So Fast, My Friend!
Writer, Curator: Stephen J. Williams, Ph.D.
Last month’s issue of Science included an article by Jia You, “DARPA Sets Out to Automate Research” [1], that gave a glimpse of how science could be conducted in the future: without scientists. The article focused on the U.S. Defense Advanced Research Projects Agency (DARPA) program called “Big Mechanism”, a $45 million effort to develop computer algorithms which read scientific journal papers with the ultimate goal of extracting enough information to design hypotheses and the next set of experiments,
all without human input.
The head of the project, artificial intelligence expert Paul Cohen, says the overall goal is to help scientists cope with the complexity of massive amounts of information. As Paul Cohen stated for the article:

“Just when we need to understand highly connected systems as systems, our research methods force us to focus on little parts.”
The Big Mechanisms project aims to design computer algorithms to critically read journal articles, much as scientists do, to determine what and how the information contributes to the knowledge base.
As a proof of concept, DARPA is attempting to model Ras-mutation-driven cancers using previously published literature in three main steps:
One team is focused on extracting details on experimental procedures, using the mining of certain phraseology to determine a paper’s worth (for example, hedged phrases like ‘we suggest’ or ‘suggests a role in’ might be scored as weak, whereas ‘we prove’ or ‘provide evidence’ might flag an article as worthwhile to curate; a toy sketch of this kind of phrase-based scoring is shown after this list). Another team, led by a computational linguistics expert, will design systems to map the meanings of sentences.
Integrate each piece of knowledge into a computational model to represent the Ras pathway on oncogenesis.
Produce hypotheses and propose experiments based on knowledge base which can be experimentally verified in the laboratory.
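As referenced in the first step above, here is a toy sketch (my own illustration, not DARPA’s actual system) of phrase-based confidence scoring: sentences containing hedging phrases are down-weighted relative to sentences containing assertive phrases.

```python
# Toy phrase-based confidence scoring; the phrase lists are illustrative only.
HEDGING = ["we suggest", "suggests a role", "may play a role", "might", "could"]
ASSERTIVE = ["we demonstrate", "we show", "provide evidence", "we prove"]

def assertion_score(sentence: str) -> int:
    """Crude score: +1 per assertive cue, -1 per hedging cue."""
    s = sentence.lower()
    return (sum(phrase in s for phrase in ASSERTIVE)
            - sum(phrase in s for phrase in HEDGING))

sentences = [
    "We demonstrate that mutant KRAS activates the MAPK cascade.",
    "These data suggest a role for RAS in drug resistance.",
]
for sent in sentences:
    print(assertion_score(sent), sent)
```

A real system would, of course, weigh many more cues (negation, section context, citation counts), but the scoring idea is the same.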
The Human no Longer Needed?: Not So Fast, my Friend!
The problems the DARPA research teams are encountering include:
Need for data verification
Text mining and curation strategies
Incomplete knowledge base (past, current and future)
Molecular biology does not necessarily ‘require causal inference’ in the way other fields do
Verification
Notice that this verification step (step 3) requires physical lab work, as do all other ‘omics strategies and other computational biology projects. As with high-throughput microarray screens, verification is usually needed in the form of qPCR, or interesting genes are validated in a phenotypic (expression) system. In addition, there has been an ongoing issue surrounding the validity and reproducibility of some research studies and data.
Therefore as DARPA attempts to recreate the Ras pathway from published literature and suggest new pathways/interactions, it will be necessary to experimentally validate certain points (protein interactions or modification events, signaling events) in order to validate their computer model.
Text-Mining and Curation Strategies
The Big Mechanism Project is starting very small; this reflects some of the challenges in the scale of this project. Researchers were given only six paragraph-long passages and a rudimentary model of the Ras pathway in cancer and then asked to automate a text-mining strategy to extract as much useful information as possible. Unfortunately, this strategy could be fraught with issues frequently encountered in the biocuration community, namely:
Manual or automated curation of scientific literature?
Biocurators, the scientists who painstakingly sort through the voluminous scientific literature to extract and then organize relevant data into accessible databases, have debated whether manual, automated, or a combination of both curation methods [2] achieves the highest accuracy for extracting the information needed to enter into a database. Abigail Cabunoc, a lead developer for the Ontario Institute for Cancer Research’s WormBase (a database of nematode genetics and biology) and Lead Developer at Mozilla Science Lab, noted on her blog, regarding the lively debate on biocuration methodology at the Seventh International Biocuration Conference (#ISB2014), that the massive amounts of information will require a Herculean effort regardless of the methodology.
The Big Mechanism team decided on a fully automated approach to text-mine their limited literature set for relevant information; however, they were able to extract only 40% of the relevant information from these six paragraphs relative to the given model. Although the investigators were happy with this percentage, most biocurators, whether using a manual or automated method to extract information, would consider 40% a low success rate. Biocurators, regardless of method, have reported the ability to extract 70-90% of relevant information from the whole literature (for example, for the Comparative Toxicogenomics Database) [3-5].
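The “percentage of relevant information extracted” quoted above is essentially recall against a gold-standard set of curated facts. A minimal sketch, with invented facts, of how that number is computed:

```python
# Toy recall calculation against a gold standard; the facts below are invented.
def recall(extracted: set, gold_standard: set) -> float:
    """Fraction of gold-standard facts recovered by the extraction pipeline."""
    return len(extracted & gold_standard) / len(gold_standard)

gold = {"KRAS activates RAF1", "RAF1 phosphorylates MEK1", "MEK1 phosphorylates ERK2",
        "ERK2 translocates to nucleus", "GRB2 recruits SOS1"}
auto = {"KRAS activates RAF1", "MEK1 phosphorylates ERK2"}

print(f"automated recall: {recall(auto, gold):.0%}")  # 40% in this toy example
```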
Incomplete Knowledge Base
In an earlier posting (actually a press release for our first e-book) I had discussed the problem of the “data deluge” we are experiencing in the scientific literature, as well as the plethora of ‘omics experimental data which needs to be curated.
Tackling the problem of scientific and medical information overload
Figure. The number of papers listed in PubMed (disregarding reviews) per ten-year period has steadily increased since 1970.
Analyzing and sharing the vast amounts of scientific knowledge has never been so crucial to innovation in the medical field. The publication rate has steadily increased from the 70’s, with a 50% increase in the number of original research articles published from the 1990’s to the previous decade. This massive amount of biomedical and scientific information has presented the unique problem of an information overload, and the critical need for methodology and expertise to organize, curate, and disseminate this diverse information for scientists and clinicians. Dr. Larry Bernstein, President of Triplex Consulting and previously chief of pathology at New York’s Methodist Hospital, concurs that “the academic pressures to publish, and the breakdown of knowledge into “silos”, has contributed to this knowledge explosion and although the literature is now online and edited, much of this information is out of reach to the very brightest clinicians.”
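For readers who want to reproduce per-decade publication counts like those behind the figure above, the sketch below shows one way to do it with NCBI’s E-utilities esearch endpoint. The exact query term (excluding reviews by publication type) is an assumption and can be adjusted; please respect NCBI’s usage and rate-limit guidelines.

```python
# Minimal sketch: per-decade PubMed record counts via NCBI E-utilities (esearch).
import urllib.parse, urllib.request, json

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str) -> int:
    """Return the number of PubMed records matching a query term."""
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "rettype": "count", "retmode": "json"})
    with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
        return int(json.load(resp)["esearchresult"]["count"])

for start in range(1970, 2020, 10):
    end = start + 9
    # All records published in the decade, excluding review articles (assumed filter).
    term = f'("{start}"[PDAT] : "{end}"[PDAT]) NOT review[pt]'
    print(f"{start}-{end}: {pubmed_count(term):,}")
```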
Traditionally, organization of biomedical information has been the realm of the literature review, but most reviews are performed years after discoveries are made and, given the rapid pace of new discoveries, this is appearing to be an outdated model. In addition, most medical searches are dependent on keywords, hence adding more complexity to the investigator in finding the material they require. Third, medical researchers and professionals are recognizing the need to converse with each other, in real-time, on the impact new discoveries may have on their research and clinical practice.
These issues require a people-based strategy, having expertise in a diverse and cross-integrative number of medical topics to provide the in-depth understanding of the current research and challenges in each field as well as providing a more conceptual-based search platform. To address this need, human intermediaries, known as scientific curators, are needed to narrow down the information and provide critical context and analysis of medical and scientific information in an interactive manner powered by web 2.0 with curators referred to as the “researcher 2.0”. This curation offers better organization and visibility to the critical information useful for the next innovations in academic, clinical, and industrial research by providing these hybrid networks.
Yaneer Bar-Yam of the New England Complex Systems Institute was not confident that using details from past knowledge could produce adequate roadmaps for future experimentation and noted for the article, “The expectation that the accumulation of details will tell us what we want to know is not well justified.”
In a recent post I curated findings from four lung cancer omics studies and presented some graphics on a bioinformatic analysis of the novel genetic mutations resulting from these studies (see link below), which showed that, while multiple genetic mutations and related pathway ontologies were well documented in the lung cancer literature, many significant genetic mutations and pathways identified in the genomic studies had little literature attributed to them.
This ‘literomics’ analysis reveals a large gap between our knowledge base and the data resulting from large translational ‘omic’ studies.
A ‘literomics’ approach focuses on what we do NOT know about genes, proteins, and their associated pathways, while a text-mining machine-learning algorithm focuses on building a knowledge base to determine the next line of research or what needs to be measured. Using each approach can give us different perspectives on ‘omics data (a toy sketch of this kind of literature-gap analysis is shown below).
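As mentioned above, a toy sketch of such a literature-gap (‘literomics’) analysis might look like the following; the gene names and paper counts are hypothetical, and real counts could come from a PubMed query like the esearch sketch shown earlier.

```python
# Toy "literomics" gap analysis: flag genes mutated in omics studies that have
# little associated literature. All names, counts, and the cutoff are illustrative.
mutated_in_omics_studies = {"TP53", "KRAS", "STK11", "GENE_X", "GENE_Y"}

papers_per_gene = {"TP53": 4200, "KRAS": 3100, "STK11": 450, "GENE_X": 3, "GENE_Y": 0}

UNDERSTUDIED_THRESHOLD = 10   # arbitrary cutoff for "little literature"

understudied = sorted(g for g in mutated_in_omics_studies
                      if papers_per_gene.get(g, 0) < UNDERSTUDIED_THRESHOLD)
print("Mutations found in omics studies but with little literature:", understudied)
```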
Deriving Causal Inference
Ras is one of the best studied and characterized oncogenes, and the mechanisms behind Ras-driven oncogenesis are well understood. This, according to computational biologist Larry Hunt of Smart Information Flow Technologies, makes Ras a great starting point for the Big Mechanism project. As he states, “Molecular biology is a good place to try (developing a machine learning algorithm) because it’s an area in which common sense plays a minor role”.
Even though some may think the project would not be able to tackle other mechanisms, such as those involving epigenetic factors, UCLA’s expert in causality Judea Pearl, Ph.D. (head of the UCLA Cognitive Systems Lab) feels it is possible for machine learning to bridge this gap. As summarized from his lecture at Microsoft:
“The development of graphical models and the logic of counterfactuals have had a marked effect on the way scientists treat problems involving cause-effect relationships. Practical problems requiring causal information, which long were regarded as either metaphysical or unmanageable can now be solved using elementary mathematics. Moreover, problems that were thought to be purely statistical, are beginning to benefit from analyzing their causal roots.”
According to Pearl, one must first:
1) articulate the assumptions
2) define the research question in counterfactual terms
Then it is possible to design an inference system, using his calculus, that tells the investigator what they need to measure.
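For the curious reader, here is a minimal worked example (with made-up numbers, not taken from Pearl’s lecture) of the kind of calculation his calculus licenses: when a confounder Z is measured, the back-door adjustment recovers P(Y=1 | do(X=1)), which generally differs from the naive conditional P(Y=1 | X=1).

```python
# Back-door adjustment on a toy joint distribution P(Z, X, Y), stored as p[z][x][y].
# Z confounds X and Y; the numbers below are invented for illustration.
p = {
    0: {0: {0: 0.36, 1: 0.04}, 1: {0: 0.06, 1: 0.04}},   # Z=0
    1: {0: {0: 0.05, 1: 0.05}, 1: {0: 0.08, 1: 0.32}},   # Z=1
}

def p_z(z):                 # marginal P(Z=z)
    return sum(p[z][x][y] for x in (0, 1) for y in (0, 1))

def p_y1_given_xz(x, z):    # conditional P(Y=1 | X=x, Z=z)
    return p[z][x][1] / (p[z][x][0] + p[z][x][1])

def p_y1_do_x(x):           # back-door adjustment: sum_z P(Y=1 | X=x, Z=z) P(Z=z)
    return sum(p_y1_given_xz(x, z) * p_z(z) for z in (0, 1))

def p_y1_given_x(x):        # naive (confounded) conditional P(Y=1 | X=x)
    num = sum(p[z][x][1] for z in (0, 1))
    denom = sum(p[z][x][y] for z in (0, 1) for y in (0, 1))
    return num / denom

print("P(Y=1 | do(X=1)) =", round(p_y1_do_x(1), 3))   # causal effect, ~0.6 here
print("P(Y=1 | X=1)     =", round(p_y1_given_x(1), 3))  # naive estimate, ~0.72 here
```

The gap between the two numbers is exactly the bias a purely associational text-mined model would inherit if the confounder were ignored.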
The key for the Big Mechanism Project may be in correcting for the variables among studies, in essence building a model system which may not rely on fully controlled conditions. Dr. Peter Spirtes from Carnegie Mellon University in Pittsburgh, PA is developing a project called the TETRAD project with two goals: 1) to specify and prove under what conditions it is possible to reliably infer causal relationships from background knowledge and statistical data not obtained under fully controlled conditions; and 2) to develop, analyze, implement, test, and apply practical, provably correct computer programs for inferring causal structure under conditions where this is possible.
In summary, such projects and algorithms will provide investigators with what, and possibly how, things should be measured.
Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson
Curators and Writer: Stephen J. Williams, Ph.D. with input from Curators Larry H. Bernstein, MD, FCAP, Dr. Justin D. Pearlman, MD, PhD, FACC and Dr. Aviva Lev-Ari, PhD, RN
(This discussion is part of a three-part series including:
Using Scientific Content Curation as a Method for Validation and Biocuration
Using Scientific Content Curation as a Method for Open Innovation)
Every month I get my Wired Magazine (yes in hard print, I still like to turn pages manually plus I don’t mind if I get grease or wing sauce on my magazine rather than on my e-reader) but I always love reading articles written by Clive Thompson. He has a certain flair for understanding the techno world we live in and the human/technology interaction, writing about interesting ways in which we almost inadvertently integrate new technologies into our day-to-day living, generating new entrepreneurship, new value. He also writes extensively about tech and entrepreneurship.
Clive gives a wonderful example of Ory Okolloh, a young Kenyan-born law student who, after becoming frustrated with the lack of coverage of problems back home, started a blog about Kenyan politics. Her blog not only got interest from movie producers who were documenting female bloggers but also gained the interest of fellow Kenyans who, during the upheaval after the 2007 Kenyan elections, helped Ory to develop a Google map for reporting violence (http://www.ushahidi.com/), which eventually became a global organization using open-source technology for crisis management. There are a multitude of examples of how networks, and the conversations within these circles, are fostering new ideas. As Clive states in the article:
Our ideas are PRODUCTS OF OUR ENVIRONMENT.
They are influenced by the conversations around us.
However the article got me thinking of how Science 2.0 and the internet is changing how scientists contribute, share, and make connections to produce new and transformative ideas.
But HOW MUCH Knowledge is OUT THERE?
Clive’s article listed some amazing facts about the mountains of posts, tweets, words etc. out on the internet EVERY DAY, all of which exemplifies the problem:
154.6 billion EMAILS per DAY
400 million TWEETS per DAY
1 million BLOG POSTS (including this one) per DAY
2 million COMMENTS on WordPress per DAY
16 million WORDS on Facebook per DAY
TOTAL 52 TRILLION WORDS per DAY
As he estimates this would be 520 million books per DAY (book with average 100,000 words).
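A quick arithmetic check of that conversion, using only the figures quoted above:

```python
# Words-per-day to books-per-day, using the article's own numbers.
words_per_day = 52e12          # "52 trillion words per day"
words_per_book = 100_000       # average book length assumed in the article
print(f"{words_per_day / words_per_book:,.0f} books per day")  # ~520,000,000
```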
A LOT of INFO. But, as he suggests, it is not the volume but how we create and share this information that is critical; as the science fiction writer Theodore Sturgeon noted, “Ninety percent of everything is crap” (AKA Sturgeon’s Law).
Internet live stats show how congested the internet is each day (http://www.internetlivestats.com/). Needless to say Clive’s numbers are a bit off. As of the writing of this article:
2.9 billion internet users
981 million websites (only 25,000 hacked today)
128 billion emails
385 million Tweets
> 2.7 million BLOG posts today (including this one)
The Good, The Bad, and the Ugly of the Scientific Internet (The Wild West?)
So how many science blogs are out there? Well back in 2008 “grrlscientist” asked this question and turned up a total of 19,881 blogs however most were “pseudoscience” blogs, not written by Ph.D or MD level scientists. A deeper search on Technorati using the search term “scientist PhD” turned up about 2,000 written by trained scientists.
So granted, there is a lot of
….. when it comes to scientific information on the internet!
I had recently re-posted, on this site, a great example of how bad science and medicine can get propagated throughout the internet:
Drs. Elena Cattaneo and Gilberto Corbellini document their long, hard fight against false and invalidated medical claims made by some “clinicians” about the utility and medical benefits of certain stem-cell therapies, sacrificing their time to debunk medical pseudoscience.
Using Curation and Science 2.0 to build Trusted, Expert Networks of Scientists and Clinicians
Establishing networks of trusted colleagues has been a cornerstone of scientific discourse for centuries. For example, in the mid-1640s, the Royal Society began as “a meeting of natural philosophers to discuss promoting knowledge of the natural world through observation and experiment”, i.e. science. The Society met weekly to witness experiments and discuss what we would now call scientific topics; its first Curator of Experiments was Robert Hooke.
Indeed as discussed in “Science 2.0/Brainstorming” by the originators of OpenWetWare, an open-source science-notebook software designed to foster open-innovation, the new search and aggregation tools are making it easier to find, contribute, and share information to interested individuals. This paradigm is the basis for the shift from Science 1.0 to Science 2.0. Science 2.0 is attempting to remedy current drawbacks which are hindering rapid and open scientific collaboration and discourse including:
Slow time frame of current publishing methods: reviews can take years to fashion leading to outdated material
Level of information dissemination is currently one dimensional: peer-review, highly polished work, conferences
Current publishing does not encourage open feedback and review
Published articles edited for print do not take advantage of new web-based features, including tagging, search-engine features, interactive multimedia, and hyperlinks
Published data and methodology incomplete
Published data not available in formats which are readily accessible across platforms: gene lists are now mandated to be supplied as files, but other data does not have to be supplied in file format
Curation in the Sciences: View from Scientific Content Curators Larry H. Bernstein, MD, FCAP, Dr. Justin D. Pearlman, MD, PhD, FACC and Dr. Aviva Lev-Ari, PhD, RN
Curation is an active filtering of the immense amount of relevant and irrelevant content found on the web and in the peer-reviewed literature. As a result, content may be disruptive. However, in doing good curation, one does more than simply assign value by presentation of creative work in any category. Great curators comment and share experience across content, authors, and themes. Great curators may see patterns others don’t, or may challenge or debate complex and apparently conflicting points of view. Answers to specifically focused questions come from the hard work of many in laboratory settings creatively establishing answers to definitive questions, each a part of the larger knowledge base of reference. There are those rare “Einsteins” who imagine a whole universe, unlike the three blind men of the Sufi tale: one held the tail, the other the trunk, the other the ear, and they all said this is an elephant!
In my reading, I learn that the optimal ratio of curation to creation may be as high as 90% curation to 10% creation. Creating content is expensive. Curation, by comparison, is much less expensive.
– Larry H. Bernstein, MD, FCAP
Curation is Uniquely Distinguished by the Historical Exploratory Ties that Bind –Larry H. Bernstein, MD, FCAP
The explosion of information by numerous media, hardcopy and electronic, written and video, has created difficulties tracking topics and tying together relevant but separated discoveries, ideas, and potential applications. Some methods to help assimilate diverse sources of knowledge include a content expert preparing a textbook summary, a panel of experts leading a discussion or think tank, and conventions moderating presentations by researchers. Each of those methods has value and an audience, but they also have limitations, particularly with respect to timeliness and pushing the edge. In the electronic data age, there is a need for further innovation, to make synthesis, stimulating associations, synergy and contrasts available to audiences in a more timely and less formal manner. Hence the birth of curation. Key components of curation include expert identification of data, ideas and innovations of interest, expert interpretation of the original research results, integration with context, digesting, highlighting, correlating and presenting in novel light.
Work of Original Expression: what is the methodology of Curation in the context of Medical Research Findings, Exposition of Synthesis, and Interpretation of the significance of the results to Clinical Care
… leading to new, curated, and collaborative works by networks of experts to generate (in this case) ebooks on most significant trends and interpretations of scientific knowledge as relates to medical practice.
In Summary: How Scientific Content Curation Can Help
Given the aforementioned problems of:
I. the complex and rapid deluge of scientific information
II. the need for a collaborative, open environment to produce transformative innovation
III. need for alternative ways to disseminate scientific findings
CURATION MAY OFFER SOLUTIONS
I. Curation exists beyond the review: curation decreases the time needed to assess current trends, adding multiple insights and analyses WITH an underlying METHODOLOGY (discussed below) while NOT acting as mere reiteration or regurgitation
II. Curation provides insights from the WHOLE scientific community across multiple WEB 2.0 platforms
III. Curation makes use of new computational and Web-based tools to provide interoperability of data, reporting of findings (shown in Examples below)
Therefore a discussion is given on methodologies, definitions of best practices, and tools developed to assist the content curation community in this endeavor.
Methodology in Scientific Content Curation as Envisioned by Aviva lev-Ari, PhD, RN
At Leaders in Pharmaceutical Business Intelligence, site owner and chief editor Aviva Lev-Ari, PhD, RN has been developing a strategy “for the facilitation of Global access to Biomedical knowledge rather than the access to sheer search results on Scientific subject matters in the Life Sciences and Medicine”. According to Aviva, “for the methodology to attain this complex goal it is to be dealing with popularization of ORIGINAL Scientific Research via Content Curation of Scientific Research Results by Experts, Authors, Writers using the critical thinking process of expert interpretation of the original research results.” The following post:
Cardiovascular Original Research: Cases in Methodology Design for Content Curation and Co-Curation
demonstrates two examples of how content co-curation attempts to achieve this aim and to develop networks of scientist and clinician curators who aid in the active discussion of scientific and medical findings, using scientific content curation as a means of critique offering a “new architecture for knowledge”. Indeed, popular search engines such as Google and Yahoo, and even scientific search engines such as NCBI’s PubMed and the OVID search engine, rely on keywords and Boolean algorithms …
which has created a need for more context-driven scientific search and discourse.
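To make that limitation concrete, the minimal sketch below runs a keyword/Boolean query against PubMed through NCBI’s public E-utilities esearch endpoint; the search term is purely illustrative. The engine returns a flat list of PMIDs with no expert judgment about context or significance, which is exactly the gap curation addresses.

```python
# Minimal sketch: a keyword/Boolean PubMed query via NCBI E-utilities.
# The search term is illustrative; substitute your own topic of interest.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    # Boolean operators (AND, OR, NOT) and field tags drive the retrieval;
    # the engine matches keywords, it does not weigh context or importance.
    "term": '(imatinib AND "drug resistance") NOT review[pt]',
    "retmax": 20,
    "retmode": "json",
}

resp = requests.get(ESEARCH, params=params, timeout=30)
resp.raise_for_status()
ids = resp.json()["esearchresult"]["idlist"]
print(f"{len(ids)} PMIDs returned, with no indication of which matter most:")
print(ids)
```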
To address this need, human intermediaries, empowered by the participatory wave of web 2.0, naturally started narrowing down the information and providing an angle of analysis and some context. They are bloggers, regular Internet users or community managers – a new type of profession dedicated to the web 2.0. A new use of the web has emerged, through which the information, once produced, is collectively spread and filtered by Internet users who create hierarchies of information.
… where Célya considers curation an essential practice for managing open science and this new style of research.
As mentioned above, in her article Dr. Lev-Ari presents two examples of how content curation expanded thought and discussion and eventually led to new ideas.
The curator edifies content through an analytic process = a NEW form of writing and organization leading to new interconnections of ideas = NEW INSIGHTS
The Life Cycle of Science 2.0. Due to Web 2.0, new paradigms of scientific collaboration are rapidly emerging. Originally, scientific discovery was performed by individual laboratories or “scientific silos”, where the main methods of communication were peer-reviewed publication, meeting presentations, and ultimately news outlets and multimedia. In the digital era, data was organized for literature search and biocurated databases. Now, in an era of social media and Web 2.0, a group of scientifically and medically trained “curators” organizes the piles of digitally generated data and fits them into an organizational structure which can be shared, communicated, and analyzed in a holistic approach, launching new ideas due to changes in the organizational structure of data and in data analytics.
The result, in this case, is a collaborative written work beyond the scope of the review. Currently, review articles are written by experts in the field and summarize the state of a research area. However, using collaborative, trusted networks of experts, the result is a real-time synopsis and analysis of the field with the goal in mind to …
In her paper, Curating e-Science Data, Maureen Pennock, from The British Library, emphasized the importance of using a diligent, validated, reproducible, and cost-effective methodology for curation by e-science communities over the ‘Grid’:
“The digital data deluge will have profound repercussions for the infrastructure of research and beyond. Data from a wide variety of new and existing sources will need to be annotated with metadata, then archived and curated so that both the data and the programmes used to transform the data can be reproduced for use in the future. The data represent a new foundation for new research, science, knowledge and discovery”
— JISC Senior Management Briefing Paper, The Data Deluge (2004)
As she states, proper data and content curation is important for:
Post-analysis
Data and research result reuse for new research
Validation
Preservation of data in newer formats to prolong life-cycle of research results
However, she laments the lack of:
Funding for such efforts
Training
Organizational support
Monitoring
Established procedures
Tatiana Aders wrote a nice article based on an interview with Microsoft’s Robert Scoble, in which he emphasized the need for curation in a world where “Twitter is the replacement of the Associated Press Wire Machine” and new technological platforms are knocking out old ones at a rapid pace. He also notes that curation is a social art form whose primary concerns are understanding an audience and a niche.
Indeed, part of the reason the need for curation goes unmet, as Mark Carrigan writes, is academics’ lack of appreciation of the utility of tools such as Pinterest, Storify, and Pearl Trees for effectively communicating and building collaborative networks.
And teacher Nancy White, in her article Understanding Content Curation on her blog Innovations in Education, shows examples of how curation is an educational tool for students and teachers, demonstrating that students need to CONTEXTUALIZE what they collect to add enhanced value, using higher mental processes such as:
Although many tools are related to biocuration and database building, the common idea is curating data with indexing, analyses, and contextual value to enable an audience to generate NETWORKS OF NEW IDEAS.
“Nowadays, any organization should employ network scientists/analysts who are able to map and analyze complex systems that are of importance to the organization (e.g. the organization itself, its activities, a country’s economic activities, transportation networks, research networks).”
Creating Content Curation Communities: Breaking Down the Silos!
An article by Dr. Dana Rotman, “Facilitating Scientific Collaborations Through Content Curation Communities”, highlights how scientific information resources, traditionally created and maintained by paid professionals, are being crowdsourced in what she terms “content curation communities”: professional and nonprofessional volunteers who create, curate, and maintain the various scientific database tools we use, such as Encyclopedia of Life, ChemSpider (for the Slideshare see here), biowikipedia, etc. Although very useful and openly available, these projects create their own challenges, such as:
information integration (various types of data and formats)
social integration (marginalized by scientific communities, no funding, no recognition)
The authors set forth some ways to overcome these challenges facing the content curation community, including:
standardization in practices
visualization to document contributions
emphasizing role of information professionals in content curation communities
maintaining quality control to increase respectability
recognizing participation within professional communities
A few great presentations and papers from the 2012 DICOSE meeting are found below
Judith M. Brown, Robert Biddle, Stevenson Gossage, Jeff Wilson & Steven Greenspan. Collaboratively Analyzing Large Data Sets using Multitouch Surfaces. (PDF) NotesForBrown
Bill Howe, Cecilia Aragon, David Beck, Jeffrey P. Gardner, Ed Lazowska, Tanya McEwen. Supporting Data-Intensive Collaboration via Campus eScience Centers. (PDF) NotesForHowe
Kerk F. Kee & Larry D. Browning. Challenges of Scientist-Developers and Adopters of Existing Cyberinfrastructure Tools for Data-Intensive Collaboration, Computational Simulation, and Interdisciplinary Projects in Early e-Science in the U.S.. (PDF) NotesForKee
Betsy Rolland & Charlotte P. Lee. Post-Doctoral Researchers’ Use of Preexisting Data in Cancer Epidemiology Research. (PDF) NoteForRolland
Dana Rotman, Jennifer Preece, Derek Hansen & Kezia Procita. Facilitating scientific collaboration through content curation communities. (PDF) NotesForRotman
Nicholas M. Weber & Karen S. Baker. System Slack in Cyberinfrastructure Development: Mind the Gaps. (PDF) NotesForWeber
Indeed, the movement from Science 1.0 to Science 2.0 originated because these “silos” had frustrated many scientists, resulting in changes not only in publishing (Open Access) but also in the communication of protocols (online protocol sites and notebooks such as OpenWetWare and BioProtocols Online) and in data and material registries (CGAP and tumor banks). Some examples are given below.
1. This project looked at what motivates researchers to work in an open manner with regard to their data, results and protocols, and whether advantages are delivered by working in this way.
The case studies consider the benefits and barriers to using ‘open science’ methods, and were carried out between November 2009 and April 2010 and published in the report Open to All? Case studies of openness in research. The Appendices to the main report (pdf) include a literature review, a framework for characterizing openness, a list of examples, and the interview schedule and topics. Some of the case study participants kindly agreed to us publishing the transcripts. This zip archive contains transcripts of interviews with researchers in astronomy, bioinformatics, chemistry, and language technology.
2. cBIO – cBio’s biological data curation group developed and operates using a methodology called CIMS, the Curation Information Management System. CIMS is a comprehensive curation and quality-control process that efficiently extracts information from publications.
3. NIH Topic Maps – This website provides a database and web-based interface for searching and discovering the types of research awarded by the NIH. The database uses automated, computer generated categories from a statistical analysis known as topic modeling.
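As an illustration of the underlying idea (not the NIH Topic Maps pipeline itself), the short sketch below uses scikit-learn to derive computer-generated topic categories from a tiny, hypothetical set of award abstracts; the corpus and the number of topics are assumptions chosen only for demonstration.

```python
# Illustrative sketch of topic modeling with scikit-learn, grouping a toy set
# of (hypothetical) award abstracts into automatically generated categories.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "imatinib resistance in chronic myeloid leukemia via BCR-ABL amplification",
    "DNA repair pathways and cisplatin adduct formation in tumor cells",
    "ABC transporter mediated drug efflux and chemotherapy resistance",
    "machine learning for curation of biomedical literature databases",
]  # toy corpus for illustration only

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(abstracts)

# Fit a small LDA model; each "component" is a discovered topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top_terms)}")
```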
4. SciKnowMine (USC) – We propose to create a framework to support biocuration called SciKnowMine (after ‘Scientific Knowledge Mine’), cyberinfrastructure that supports biocuration through the automated mining of text, images, and other amenable media at the scale of the entire literature.
5. OpenWetWare – OpenWetWare is an effort to promote the sharing of information, know-how, and wisdom among researchers and groups who are working in biology & biological engineering. If you would like edit access, would be interested in helping out, or want your lab website hosted on OpenWetWare, please join us. OpenWetWare is managed by the BioBricks Foundation. They also have a wiki about Science 2.0.
6. LabTrove – a lightweight, web-based laboratory “blog” as a route towards a marked-up record of work in a bioscience research laboratory. The authors of a PLOS ONE article from the University of Southampton report the development of an open scientific lab notebook using a blogging strategy to share information.
7. OpenScience Project– The OpenScience project is dedicated to writing and releasing free and Open Source scientific software. We are a group of scientists, mathematicians and engineers who want to encourage a collaborative environment in which science can be pursued by anyone who is inspired to discover something new about the natural world.
8. Open Science Grid is a multi-disciplinary partnership to federate local, regional, community and national cyberinfrastructures to meet the needs of research and academic communities at all scales.
9. Some ongoing biomedical knowledge (curation) projects at ISI
IICurate – This project is concerned with developing a curation and documentation system for information integration in collaboration with the II Group at ISI as part of the BIRN.
BioScholar – Its primary purpose is to provide software for experimental biomedical scientists that would permit a single scientific worker (at the level of a graduate student or postdoctoral worker) to design, construct, and manage a shared knowledge repository for a research group, derived from a local store of PDF files. This project is funded by NIGMS from 2008-2012 (RO1-GM083871).
10. VIVO for scientific communities – In order to connect this information about research activities across institutions and make it available to others, taking into account smaller players in the research landscape and addressing their need for specific information (for example, by providing non-conventional research objects), the open source software VIVO, which provides research information as linked open data (LOD), is used in many countries. So-called VIVO harvesters collect research information that is freely available on the web and convert the collected data in conformity with LOD standards. The VIVO ontology builds on prevalent LOD namespaces and, depending on the needs of the specialist community concerned, can be expanded.
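The following minimal sketch, using the Python rdflib library, shows what “research information as linked open data” can look like in practice. The namespace URIs, resource names, and the property relating a researcher to a grant are illustrative assumptions, not a verified excerpt of the VIVO ontology.

```python
# Minimal sketch of expressing research information as linked open data (LOD)
# with rdflib. Namespaces, resources, and properties below are assumptions
# chosen for illustration, not an authoritative VIVO data model.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF, RDFS

VIVO = Namespace("http://vivoweb.org/ontology/core#")  # assumed namespace URI
EX = Namespace("http://example.org/research/")         # hypothetical institution

g = Graph()
g.bind("foaf", FOAF)
g.bind("vivo", VIVO)

researcher = EX["researcher/42"]          # hypothetical researcher URI
grant = EX["grant/R01-XYZ"]               # hypothetical grant URI

g.add((researcher, RDF.type, FOAF.Person))
g.add((researcher, FOAF.name, Literal("Jane Doe")))
g.add((grant, RDF.type, VIVO.Grant))
g.add((grant, RDFS.label, Literal("Mechanisms of imatinib resistance")))
g.add((researcher, VIVO.relates, grant))  # assumed property, for illustration

# Serialize as Turtle so other LOD tools (or a harvester) can consume it.
print(g.serialize(format="turtle"))
```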
11. Examples of scientific curation in different areas of Science/Pharma/Biotech/Education
From Science 2.0 to Pharma 3.0 Q&A with Hervé Basset
Hervé Basset, a specialist librarian in the pharmaceutical industry and owner of the blog “Science Intelligence“, talks about the inspiration behind his recent book entitled “From Science 2.0 to Pharma 3.0″, published by Chandos Publishing and available on Amazon, and how health care companies need a social media strategy to communicate with and convince the health-care consumer, not just the practitioner.
How the Internet of Things is Promoting the Curation Effort
Update by Stephen J. Williams, PhD 3/01/19
Up until now, curation efforts like wikis (Wikipedia, Wikimedicine, Wormbase, GenBank, etc.) have been supported by a largely voluntary army of citizens, scientists, and data enthusiasts. I am sure all have seen the requests for donations to help keep Wikipedia and its related projects up and running. One of the more obscure sister projects of Wikipedia, Wikidata, wants to curate and represent all information in such a way that machines, computers, and humans alike can converse in it. An army of roughly 4 million contributors creates Wiki entries and maintains these databases.
Enter the Age of the Personal Digital Assistants (Hellooo Alexa!)
In a March 2019 WIRED article, “Encyclopedia Automata: Where Alexa Gets Its Information”, senior WIRED writer Tom Simonite reports on the need for new types of data structures, on how curated databases are important for the new fields of AI, and on how they enable personal digital assistants like Alexa or Google Assistant to decipher the user’s meaning.
As Mr. Simonite noted, many of our libraries of knowledge are encoded in an “ancient technology largely opaque to machines”: prose. Search engines like Google do not have a problem with a question asked in prose, as they just have to find relevant links to pages. Yet this is a problem for Google Assistant, for instance, as machines can’t quickly extract meaning from the internet’s mess of “predicates, complements, sentences, and paragraphs. It requires a guide.”
Enter Wikidata. According to founder Denny Vrandecic,
Language depends on knowing a lot of common sense, which computers don’t have access to
A Wikidata entry (of which there are about 60 million) codes every concept and item with a numeric identifier, the QID. These codes are integrated with tags (like the hashtags you use on Twitter or the tags used in WordPress for Search Engine Optimization) so computers can identify patterns of recognition between these codes.
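As a concrete illustration, the sketch below resolves a QID to its machine-readable entry through Wikidata’s public Special:EntityData endpoint; Q42 is used only as a well-known example identifier, and the fields printed are just two of the many available.

```python
# Minimal sketch: fetch a Wikidata item by its QID and inspect its
# machine-readable structure (labels plus property/claim codes).
import requests

def fetch_entity(qid: str) -> dict:
    url = f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()["entities"][qid]

entity = fetch_entity("Q42")  # Q42 chosen purely as an example QID
print(entity["labels"]["en"]["value"])  # the human-readable English label
print(len(entity.get("claims", {})), "properties (P-codes) attached to this QID")
```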
Human entry into these databases is critical, as we add new facts and, in particular, meaning to each of these items. Otherwise, machines have problems deciphering our meaning, as with Apple’s Siri, where users have complained of dumb algorithms interpreting their requests.
The knowledge of future machines could be shaped by you and me, not just tech companies and PhDs.
But this effort needs money
Wikimedia’s executive director, Katherine Maher, has prodded and cajoled these megacorporations for tapping the free resources of the Wikis. In response, Amazon and Facebook have donated millions to the Wikimedia projects, and Google recently gave USD $3.1 million in donations.
Future postings on the relevance and application of scientific curation will include:
Using Scientific Content Curation as a Method for Validation and Biocuration
Using Scientific Content Curation as a Method for Open Innovation
Other posts on this site related to Content Curation and Methodology include: