
Posts Tagged ‘science’

Medical Headline Misinformation Strikes Again: Claims About Vitamin D

Reporter: Stephen J. Williams, Ph.D.

A recent posting by a group called the Vitamin D Council (reposted on this site) referred to, and misquoted, the Mayo Clinic site on the role of vitamin D in various diseases. At first I was curious whether the Mayo site actually made claims about prevention of various cancers (results from retrospective studies have been conflicting), and I originally made some strong comments. From comments made on this post, I do agree that there is strong evidence for vitamin D supplementation in the prevention of rickets, but as the Mayo review shows, claims about vitamin D supplementation preventing certain diseases such as cancers and heart disease may not be as strong as some suggest. My main concern was: is the clinical evidence strong enough to support a role for vitamin D supplementation in a wide array of diseases, and did Mayo make the claims suggested in some media reports? In fact, Mayo does a very thorough job of weighing the clinical evidence, and vitamins and cancer risk will be a point of further discussion.

After consulting the Mayo Clinic website, it appears that the Vitamin D Council site had indeed misquoted and misrepresented the medical information it contains.

Medical Misinformation May Be the Biggest Risk to a Healthy Lifestyle

The site made numerous claims about the role of vitamin D3 (cholecalciferol) in disease, making it appear there were definitive links between low vitamin D3 and the risk of hypertension, cancer, depression, and diabetes.

A little background on Vitamin D

From Wikipedia

Vitamin D refers to a group of fat-soluble secosteroids responsible for enhancing intestinal absorption of calcium, iron, magnesium, phosphate and zinc. In humans, the most important compounds in this group are vitamin D3 (also known as cholecalciferol) and vitamin D2 (ergocalciferol).[1] Cholecalciferol and ergocalciferol can be ingested from the diet and from supplements.[1][2][3] Very few foods contain vitamin D; synthesis of vitamin D (specifically cholecalciferol) in the skin is the major natural source of the vitamin. Dermal synthesis of vitamin D from cholesterol is dependent on sun exposure (specifically UVB radiation). Vitamin D has a significant role in calcium homeostasis and metabolism. Its discovery was due to efforts to find the dietary substance lacking in rickets (the childhood form of osteomalacia).[4]

also from Wikipedia, on vitamin D toxicity:

Vitamin D toxicity

Vitamin D toxicity is rare.[20] It is caused by supplementing with high doses of vitamin D rather than by sunlight. The threshold for vitamin D toxicity has not been established; however, the tolerable upper intake level (UL), according to some research, is 4,000 IU/day for ages 9–71,[7] whereas other research concludes that in healthy adults, sustained intake of more than 1250 μg/day (50,000 IU) can produce overt toxicity after several months and can increase serum 25-hydroxyvitamin D levels to 150 ng/ml and greater.[20][56] Those with certain medical conditions, such as primary hyperparathyroidism,[57] are far more sensitive to vitamin D and develop hypercalcemia in response to any increase in vitamin D nutrition, while maternal hypercalcemia during pregnancy may increase fetal sensitivity to the effects of vitamin D and lead to a syndrome of mental retardation and facial deformities.[57][58]

After being commissioned by the Canadian and American governments, the Institute of Medicine (IOM) as of 30 November 2010, has increased the tolerable upper limit (UL) to 2,500 IU per day for ages 1–3 years, 3,000 IU per day for ages 4–8 years and 4,000 IU per day for ages 9–71+ years (including pregnant or lactating women).[7]

Published cases of toxicity involving hypercalcemia in which the vitamin D dose and the 25-hydroxy-vitamin D levels are known all involve an intake of ≥40,000 IU (1,000 μg) per day.[57] Recommending supplementation, when those supposedly in need of it are labeled healthy, has proved contentious, and doubt exists concerning long-term effects of attaining and maintaining high serum 25(OH)D by supplementation.[61]
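For readers checking the unit arithmetic in the passages above (the 1 μg = 40 IU equivalence is the standard conversion for vitamin D; it is not stated in the excerpts themselves), the quoted doses are consistent:

$$1250\ \mu\mathrm{g/day} \times 40\ \mathrm{IU}/\mu\mathrm{g} = 50{,}000\ \mathrm{IU/day}, \qquad 40{,}000\ \mathrm{IU/day} \div 40\ \mathrm{IU}/\mu\mathrm{g} = 1{,}000\ \mu\mathrm{g/day}$$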

From the Mayo Clinic Website on Vitamin D

The Mayo Clinic has done a wonderful job curating the uses and proposed uses of vitamin D for various diseases and rates the evidence using a grading system A-F (as shown below):

Key to grades

A STRONG scientific evidence FOR THIS USE

B GOOD scientific evidence FOR THIS USE

C UNCLEAR scientific evidence for this use

D Fair scientific evidence AGAINST THIS USE (it may not work)

F Strong scientific evidence AGAINST THIS USE (it likely does not work)

Mayo has information for other natural products as well. As described below (and on the Mayo site here), most of the purported evidence fails their criteria for a strong clinical link between vitamin D levels (either the parent compound or D3) and diseases such as heart disease, hypertension, and cancer.

The important take-home from the Mayo site is that there is strong evidence for the use of vitamin D in diseases related to its known mechanism of action: low serum phosphate due to kidney disease (Fanconi syndrome) or familial hypophosphatemia, diseases of bone metabolism such as osteomalacia and rickets, dental cavities, and even as a treatment for psoriasis or underactive parathyroid.

However, most indications, like hypertension, stroke, and cancer prevention or treatment (other than supportive therapy for low vitamin D levels), get a poor grade (C or D) for clinical correlation from the Mayo Clinic.

A Post in the Near Future will be a Curation of Validated Clinical Studies on Effects of Vitamins on Cancer Risk.

Below is taken from the Mayo Site:

Evidence

These uses have been tested in humans or animals.  Safety and effectiveness have not always been proven.  Some of these conditions are potentially serious, and should be evaluated by a qualified healthcare provider.

Grading rationale

Each entry below lists the evidence grade, followed by the condition to which that grade applies.
Grade A: Deficiency (phosphate)

Familial hypophosphatemia is a rare, inherited condition in which there are low blood levels of phosphate and problems with vitamin D metabolism. It is a form of rickets. Taking calcitriol or dihydrotachysterol by mouth along with phosphate supplements is effective for treating bone disorders in people with this disease. Those with this disorder should be monitored by a medical professional.

Grade A: Kidney disease (causing low phosphate levels)

Fanconi syndrome is a kidney disease in which nutrients, including phosphate, are lost in the urine instead of being reabsorbed by the body. Taking ergocalciferol by mouth is effective for treating low phosphate levels caused by Fanconi syndrome.

Grade A: Osteomalacia (bone softening in adults)

Adults who have severe vitamin D deficiency may experience bone pain and softness, as well as muscle weakness. Osteomalacia may be found among the following people: those who are elderly and have diets low in vitamin D; those with problems absorbing vitamin D; those without enough sun exposure; those who undergo stomach or intestine surgery; those with bone disease caused by aluminum; those with chronic liver disease; or those with bone disease associated with kidney problems. Treatment for osteomalacia depends on the cause of the disease and often includes pain control and surgery, as well as vitamin D and phosphate-binding agents.

Grade A: Psoriasis (disorder causing skin redness and irritation)

Many different approaches are used to treat psoriasis, including light therapy, stress reduction, moisturizers, or salicylic acid. For more severe cases, calcipotriene (Dovonex®), a man-made substance similar to vitamin D3, may help control skin cell growth. This agent is a first-line treatment for mild-to-moderate psoriasis. Calcipotriene is also available with betamethasone and may be safe for up to one year. Vitamin D3 (tacalcitol) ointment or high doses of becocalcidiol applied to the skin are also thought to be safe and well-tolerated.

Grade A: Rickets (bone weakening in children)

Rickets may develop in children who have vitamin D deficiency caused by a diet low in vitamin D, a lack of sunlight, or both. Babies fed only breast milk (without supplemental vitamin D) may also develop rickets. Ergocalciferol or cholecalciferol is effective for treating rickets caused by vitamin D deficiency. Calcitriol should be used in those with kidney failure. Treatment should be under medical supervision.

Grade A: Thyroid conditions (causing low calcium levels)

Low levels of parathyroid hormone may occur after surgery to remove the parathyroid glands. Taking high doses of dihydrotachysterol, calcitriol, or ergocalciferol by mouth, with or without calcium, may help increase calcium levels in people with this type of thyroid problem. Increasing calcium intake, with or without vitamin D, may reduce the risk of underactive parathyroid glands.

Grade A: Thyroid conditions (due to low vitamin D levels)

Some people may have overactive parathyroid glands due to low levels of vitamin D, and vitamin D is the first treatment for this disorder. For people who have overactive parathyroid glands due to other causes, surgery to remove the glands is often recommended. Studies suggest that vitamin D may help reduce the risk of further thyroid problems after undergoing partial or total removal of the parathyroid glands.

Grade A: Vitamin D deficiency

Vitamin D deficiency is associated with many conditions, including bone loss, kidney disease, lung disorders, diabetes, stomach and intestine problems, and heart disease. Vitamin D supplementation has been found to help prevent or treat vitamin D deficiency.

Grade B: Dental cavities

Much evidence has shown that vitamin D helps prevent cavities; however, more high-quality research is needed to further support this finding.

Grade B: Renal osteodystrophy (bone problems due to chronic kidney failure)

Renal osteodystrophy refers to the bone problems that occur in people with chronic kidney failure. Calcifediol or ergocalciferol taken by mouth may help prevent this condition in people with chronic kidney failure who are undergoing treatment.

Grade C: Autoimmune diseases

Vitamin D may reduce inflammation and help prevent autoimmune diseases, including rheumatoid arthritis, multiple sclerosis, and Crohn’s disease. However, further high-quality research is needed to confirm these results.

Grade C: Bone density (children)

Vitamin D improves bone density in children who are vitamin D deficient. However, results are unclear and more research is needed.

Grade C: Bone diseases (kidney disease or kidney transplant)

Vitamin D has been studied for people with chronic kidney disease. The use of substances similar to vitamin D has been found to increase bone density in people with kidney disease. The effect of vitamin D itself is unclear. Further research is needed before conclusions can be made.

Grade C: Cancer prevention (breast, colorectal, prostate, other)

Many studies have looked at the effects of vitamin D on cancer. Positive results have been reported with the use of vitamin D alone or with calcium. Vitamin D intake with or without calcium has been studied for colorectal, cervical, breast, and prostate cancer. A reduced risk of colorectal cancer has been shown with vitamin D supplementation. However, there is a lack of consistent or strong evidence. Further study is needed.

Grade C: Fibromyalgia (long-term, body-wide pain)

Vitamin D has been studied for the treatment of fibromyalgia, but evidence is lacking in support of its effectiveness. Further study is needed.

Grade C: Fractures (prevention)

Conflicting results have been found on the use of vitamin D for fracture prevention. The combination of alfacalcidol and alendronate has been found to reduce the risk of falls and fractures. However, further high-quality research is needed before firm conclusions can be made.

Grade C: Hepatic osteodystrophy (bone disease in people with liver disease)

Metabolic bone disease is common among people with chronic liver disease, and osteoporosis accounts for the majority of cases. Varying degrees of poor calcium absorption may occur in people with chronic liver disease due to malnutrition and vitamin D deficiency. Vitamin D taken by mouth or injected may play a role in the management of this condition.

Grade C: High blood pressure

Low levels of vitamin D may be linked to high blood pressure. Blood pressure is often higher during the winter season, at a further distance from the equator, and in people with dark skin pigmentation. However, the evidence is unclear. More research is needed in this area. People who have high blood pressure should be managed by a medical professional.

Grade C: Immune function

Early research suggests that vitamin D and similar compounds, such as alfacalcidol, may impact immune function. Vitamin D added to standard therapy may benefit people with infectious disease. More studies are needed to confirm these results.

Grade C: Seasonal affective disorder (SAD)

SAD is a form of depression that occurs during the winter months, possibly due to reduced exposure to sunlight. In one study, vitamin D was found to be better than light therapy in the treatment of SAD. Further studies are necessary to confirm these findings.

Grade C: Stroke

Higher levels of vitamin D may decrease the risk of stroke. However, further study is needed to confirm the use of vitamin D for this condition.

Grade C: Type 1 diabetes

Some studies suggest that vitamin D may help prevent the development of type 1 diabetes. However, there is a lack of strong evidence to support this finding.

Grade C: Type 2 diabetes

Vitamin D has mixed effects on blood sugar and insulin sensitivity. It is often studied in combination with calcium. Further research is needed to confirm these results.

Grade D: Cancer treatment (prostate)

Evidence suggests a lack of effect of vitamin D as a part of cancer treatment for prostate cancer. Further study is needed using other formulations of vitamin D and other types of cancer.

Grade D: Heart disease

Vitamin D is recognized as being important for heart health. Overall, research is not consistent, and some studies have found negative effects of vitamin D on heart health. More high-quality research is needed to make a firm conclusion.

Grade D: High cholesterol

Many studies have looked at the effects of vitamin D alone or in combination with other agents for high cholesterol, but results are inconsistent. Some negative effects have been reported. More research is needed on the use of vitamin D alone or in combination with calcium.

Other related articles on Vitamins and Disease published in this Open Access Online Scientific Journal include the following:

Multivitamins – Don’t help Extend Life or ward off Heart Disease and Improve state of Memory Loss

Diet and Diabetes

What do you know about Plants and Neutraceuticals?

Malnutrition in India, high newborn death rate and stunting of children age under five years

Omega-3 fatty acids, depleting the source, and protein insufficiency in renal disease

American Diet is LOW in four important Nutrients that have a direct bearing on Aging and the Brain

Parathyroids and Bone Metabolism


Artificial Intelligence Versus the Scientist: Who Will Win?

Will DARPA Replace the Human Scientist: Not So Fast, My Friend!

Writer, Curator: Stephen J. Williams, Ph.D.

Article ID #168: Artificial Intelligence Versus the Scientist: Who Will Win?. Published on 3/2/2015

WordCloud Image Produced by Adam Tubman


Last month’s Science article by Jia You, “DARPA Sets Out to Automate Research”[1], gave a glimpse of how science could be conducted in the future: without scientists. The article focused on the U.S. Defense Advanced Research Projects Agency (DARPA) program called “Big Mechanism”, a $45 million effort to develop computer algorithms that read scientific journal papers, with the ultimate goal of extracting enough information to design hypotheses and the next set of experiments,

all without human input.

The head of the project, artificial intelligence expert Paul Cohen, says the overall goal is to help scientists cope with the complexity of massive amounts of information. As Paul Cohen stated for the article:

“Just when we need to understand highly connected systems as systems, our research methods force us to focus on little parts.”

The Big Mechanism project aims to design computer algorithms to critically read journal articles, much as scientists do, to determine what the information contributes to the knowledge base and how.

As a proof of concept DARPA is attempting to model Ras-mutation driven cancers using previously published literature in three main steps:

  1. Natural Language Processing: machines read the literature on cancer pathways and convert the information into computational semantics and meaning.

One team is focused on extracting details on experimental procedures, mining certain phraseology to judge a paper’s strength (for example, hedged phrases like ‘we suggest’ or ‘suggests a role in’ might be considered weak, whereas ‘we prove’ or ‘provide evidence’ might identify an article as worthwhile to curate); a toy sketch of this kind of phrase-based scoring follows the list below. Another team, led by a computational linguistics expert, will design systems to map the meanings of sentences.

  2. Integrate each piece of knowledge into a computational model representing the Ras pathway in oncogenesis.
  3. Produce hypotheses and propose experiments based on the knowledge base, which can be experimentally verified in the laboratory.
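The details of the DARPA teams’ scoring systems are not public; the following is only a minimal Python sketch of what phrase-based assertion scoring could look like, with cue-phrase lists taken from the examples above (illustrative only).

```python
# Illustrative cue phrases only; the actual DARPA lexicons are not public.
WEAK_CUES = ("we suggest", "suggests a role in", "may play a role")
STRONG_CUES = ("we prove", "provide evidence", "we demonstrate")

def assertion_strength(sentence: str) -> int:
    """Score a sentence: +1 per strong cue, -1 per weak (hedged) cue."""
    text = sentence.lower()
    score = sum(text.count(cue) for cue in STRONG_CUES)
    score -= sum(text.count(cue) for cue in WEAK_CUES)
    return score

print(assertion_strength("We provide evidence that mutant Ras activates MEK."))  # 1
print(assertion_strength("This result suggests a role in proliferation."))       # -1
```

A real system would go far beyond cue counting (handling negation, section context, and sentence parsing), but even this crude score illustrates how a curation pipeline might rank articles for inclusion.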

The Human No Longer Needed? Not So Fast, My Friend!

The problems the DARPA research teams are encountering include:

  • The need for data verification
  • Text-mining and curation strategies
  • An incomplete knowledge base (past, current, and future)
  • Deriving causal inference (molecular biology may not require causal inference to the degree other fields do)

Verification

Notice that this verification step (step 3) requires physical lab work, as do all other ‘omics strategies and other computational biology projects. As with high-throughput microarray screens, verification is needed, usually in the form of qPCR or validation of interesting genes in a phenotypic (expression) system. In addition, there has been an ongoing issue surrounding the validity and reproducibility of some research studies and data.

See Importance of Funding Replication Studies: NIH on Credibility of Basic Biomedical Studies

Therefore, as DARPA attempts to recreate the Ras pathway from published literature and suggest new pathways/interactions, it will be necessary to experimentally confirm certain points (protein interactions or modification events, signaling events) in order to validate their computer model.

Text-Mining and Curation Strategies

The Big Mechanism project is starting very small, which reflects some of the challenges of scale in this project. Researchers were given only six-paragraph-long passages and a rudimentary model of the Ras pathway in cancer, and then asked to automate a text-mining strategy to extract as much useful information as possible. Unfortunately, this strategy could be fraught with issues frequently encountered in the biocuration community, namely:

Manual or automated curation of scientific literature?

Biocurators, the scientists who painstakingly sort through the voluminous scientific literature to extract and then organize relevant data into accessible databases, have debated whether manual, automated, or a combination of both curation methods [2] achieves the highest accuracy for extracting the information needed to enter into a database. Abigail Cabunoc, a lead developer for the Ontario Institute for Cancer Research’s WormBase (a database of nematode genetics and biology) and Lead Developer at Mozilla Science Lab, noted on her blog, covering the lively debate on biocuration methodology at the Seventh International Biocuration Conference (#ISB2014), that the massive amounts of information will require a Herculean effort regardless of the methodology.

Although I will have a future post on the advantages/disadvantages and tools/methodologies of manual vs. automated curation, there is a great article on researchinformation.info, “Extracting More Information from Scientific Literature”; also see “The Methodology of Curation for Scientific Research Findings” and “Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison” for manual curation methodologies, and “A MOD(ern) perspective on literature curation” for a nice workflow paper on the International Society for Biocuration site.

The Big Mechanism team decided on a fully automated approach to text-mine their limited literature set for relevant information; however, they were able to extract only 40% of the information relevant to the given model from these six paragraphs. Although the investigators were happy with this percentage, most biocurators, whether using a manual or automated method to extract information, would consider 40% a low success rate. Biocurators, regardless of method, have reported the ability to extract 70-90% of relevant information from the whole literature (for example, for the Comparative Toxicogenomics Database)[3-5].
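In information-extraction terms, the percentage being compared here is essentially recall (my framing, not the teams’):

$$\mathrm{recall} = \frac{\text{relevant facts correctly extracted}}{\text{relevant facts present in the text}}$$

so the automated pipeline achieved a recall of roughly 0.4 against the given Ras model, versus the 0.7–0.9 biocurators report.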

Incomplete Knowledge Base

In an earlier posting (actually a press release for our first e-book) I discussed the problem of the “data deluge” we are experiencing in the scientific literature, as well as the plethora of ‘omics experimental data that needs to be curated.

Tackling the problem of scientific and medical information overload

Figure: The number of papers listed in PubMed (disregarding reviews) per ten-year period has steadily increased since 1970.

Analyzing and sharing the vast amounts of scientific knowledge has never been so crucial to innovation in the medical field. The publication rate has steadily increased since the 1970s, with a 50% increase in the number of original research articles published from the 1990s to the previous decade. This massive amount of biomedical and scientific information has presented the unique problem of information overload, and the critical need for methodology and expertise to organize, curate, and disseminate this diverse information for scientists and clinicians. Dr. Larry Bernstein, President of Triplex Consulting and previously chief of pathology at New York’s Methodist Hospital, concurs that “the academic pressures to publish, and the breakdown of knowledge into ‘silos’, have contributed to this knowledge explosion, and although the literature is now online and edited, much of this information is out of reach of the very brightest clinicians.”

Traditionally, organization of biomedical information has been the realm of the literature review, but most reviews are performed years after discoveries are made and, given the rapid pace of new discoveries, this model is becoming outdated. In addition, most medical searches are dependent on keywords, adding complexity for investigators trying to find the material they require. Third, medical researchers and professionals are recognizing the need to converse with each other, in real time, on the impact new discoveries may have on their research and clinical practice.

These issues require a people-based strategy: expertise across a diverse, cross-integrative range of medical topics to provide in-depth understanding of the current research and challenges in each field, as well as a more concept-based search platform. To address this need, human intermediaries, known as scientific curators, are needed to narrow down the information and provide critical context and analysis of medical and scientific information in an interactive manner powered by web 2.0, with curators acting as the “researcher 2.0”. By providing these hybrid networks, this curation offers better organization and visibility of the critical information useful for the next innovations in academic, clinical, and industrial research.

Yaneer Bar-Yam of the New England Complex Systems Institute was not confident that using details from past knowledge could produce adequate roadmaps for future experimentation, noting for the article: “The expectation that the accumulation of details will tell us what we want to know is not well justified.”

In a recent post I curated findings from four lung cancer ‘omics studies and presented some graphics from bioinformatic analyses of the novel genetic mutations resulting from these studies (see link below)

Multiple Lung Cancer Genomic Projects Suggest New Targets, Research Directions for Non-Small Cell Lung Cancer

which showed that, while multiple genetic mutations and related pathway ontologies were well documented in the lung cancer literature, many significant genetic mutations and pathways identified in the genomic studies had little literature attributed to them.

This ‘literomics’ analysis reveals a large gap between our knowledge base and the data resulting from large translational ‘omic’ studies.

Different Literature Analysis Approaches Yield Different Perspectives

A ‘literomics’ approach focuses on what we do NOT know about genes, proteins, and their associated pathways, while a text-mining machine-learning algorithm focuses on building a knowledge base to determine the next line of research or what needs to be measured. Each approach gives us a different perspective on ‘omics data.

Deriving Causal Inference

Ras is one of the best-studied and characterized oncogenes, and the mechanisms behind Ras-driven oncogenesis are well understood. This, according to computational biologist Larry Hunt of Smart Information Flow Technologies, makes Ras a great starting point for the Big Mechanism project. As he states, “Molecular biology is a good place to try (developing a machine learning algorithm) because it’s an area in which common sense plays a minor role”.

Even though some may think the project wouldn’t be able to tackle other mechanisms, such as those involving epigenetic factors, UCLA’s expert in causality Judea Pearl, Ph.D. (head of the UCLA Cognitive Systems Lab) feels it is possible for machine learning to bridge this gap. As summarized from his lecture at Microsoft:

“The development of graphical models and the logic of counterfactuals have had a marked effect on the way scientists treat problems involving cause-effect relationships. Practical problems requiring causal information, which long were regarded as either metaphysical or unmanageable, can now be solved using elementary mathematics. Moreover, problems that were thought to be purely statistical are beginning to benefit from analyzing their causal roots.”

According to him, one must first:

1) articulate the assumptions

2) define the research question in counterfactual terms

Then it is possible to design an inference system using calculus that tells the investigator what they need to measure.
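As an illustrative example of the “elementary mathematics” Pearl refers to (a standard result from his causal-inference framework, not necessarily one covered in the lecture summarized above), the back-door adjustment formula identifies a causal effect from purely observational data once a suitable covariate set Z has been measured:

$$P(y \mid do(x)) = \sum_{z} P(y \mid x, z)\,P(z)$$

Here do(x) denotes setting X by intervention rather than passively observing it, and Z must block all “back-door” paths between X and Y. The formula makes Pearl’s point concrete: it tells the investigator exactly what must be measured (X, Y, and Z) for the causal question to be answerable.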

To watch a video of Dr. Judea Pearl’s April 2013 lecture at Microsoft Research Machine Learning Summit 2013 (“The Mathematics of Causal Inference: with Reflections on Machine Learning”), click here.

The key for the Big Mechanism project may be in correcting for the variables among studies, in essence building a model system that does not rely on fully controlled conditions. Dr. Peter Spirtes of Carnegie Mellon University in Pittsburgh, PA is developing the TETRAD project with two goals: 1) to specify and prove under what conditions it is possible to reliably infer causal relationships from background knowledge and statistical data not obtained under fully controlled conditions, and 2) to develop, analyze, implement, test, and apply practical, provably correct computer programs for inferring causal structure under conditions where this is possible.
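To give a feel for what “inferring causal structure from statistical data” involves, below is a minimal Python sketch of the first stage of a constraint-based search in the spirit of the PC algorithm implemented in TETRAD. This is not TETRAD’s code: it performs only order-0 (marginal) independence tests, whereas real implementations also condition on growing subsets of neighbors and then orient the surviving edges.

```python
import itertools
import numpy as np

def marginally_independent(i, j, data, z_crit=1.96):
    """Crude order-0 independence test: Fisher z-transform of the Pearson
    correlation; z_crit = 1.96 corresponds to alpha = 0.05."""
    r = np.corrcoef(data[:, i], data[:, j])[0, 1]
    n = data.shape[0]
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - 3)
    return abs(z) < z_crit

def skeleton(data):
    """Start fully connected; drop edges between pairs that test independent."""
    p = data.shape[1]
    return {(i, j) for i, j in itertools.combinations(range(p), 2)
            if not marginally_independent(i, j, data)}

# Toy example: x causes y; w is unrelated noise.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 2 * x + rng.normal(size=500)
w = rng.normal(size=500)
print(skeleton(np.column_stack([x, y, w])))  # expect {(0, 1)}
```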

In summary, such projects and algorithms will tell investigators what, and possibly how, to measure.

So for now it seems we are still needed.

References

  1. You J: Artificial intelligence. DARPA sets out to automate research. Science 2015, 347(6221):465.
  2. Biocuration 2014: Battle of the New Curation Methods [http://blog.abigailcabunoc.com/biocuration-2014-battle-of-the-new-curation-methods]
  3. Davis AP, Johnson RJ, Lennon-Hopkins K, Sciaky D, Rosenstein MC, Wiegers TC, Mattingly CJ: Targeted journal curation as a method to improve data currency at the Comparative Toxicogenomics Database. Database: The Journal of Biological Databases and Curation 2012, 2012:bas051.
  4. Wu CH, Arighi CN, Cohen KB, Hirschman L, Krallinger M, Lu Z, Mattingly C, Valencia A, Wiegers TC, John Wilbur W: BioCreative-2012 virtual issue. Database: The Journal of Biological Databases and Curation 2012, 2012:bas049.
  5. Wiegers TC, Davis AP, Mattingly CJ: Collaborative biocuration–text-mining development task for document prioritization for curation. Database: The Journal of Biological Databases and Curation 2012, 2012:bas037.

Other posts on this site on Artificial Intelligence, Curation Methodology, and the Philosophy of Science include:

Inevitability of Curation: Scientific Publishing moves to embrace Open Data, Libraries and Researchers are trying to keep up

A Brief Curation of Proteomics, Metabolomics, and Metabolism

The Methodology of Curation for Scientific Research Findings

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

The growing importance of content curation

Data Curation is for Big Data what Data Integration is for Small Data

Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation The Art of Scientific & Medical Curation

Exploring the Impact of Content Curation on Business Goals in 2013

Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison

conceived: NEW Definition for Co-Curation in Medical Research

Reconstructed Science Communication for Open Access Online Scientific Curation

Search Results for ‘artificial intelligence’

 The Simple Pictures Artificial Intelligence Still Can’t Recognize

Data Scientist on a Quest to Turn Computers Into Doctors

Vinod Khosla: “20% doctor included”: speculations & musings of a technology optimist or “Technology will replace 80% of what doctors do”

Where has reason gone?


Oracle Industry Connect Presents Their 2015 Life Sciences and Healthcare Program

 

Reporter: Stephen J. Williams, Ph.D. and Aviva Lev-Ari, Ph.D., R.N.


Copyright photo Oracle Inc. (TM)

 

Transforming Clinical Research and Clinical Care with Data-Driven Intelligence

March 25-26 Washington, DC

For more information click on the following LINK:

https://www.oracle.com/oracleindustryconnect/life-sciences-healthcare.html


https://www.oracle.com/industries/health-sciences/index.html  

Oracle Health Sciences: Life Sciences & HealthCare — the Solutions for Big Data

Healthcare and life sciences organizations are facing unprecedented challenges to improve drug development and efficacy while driving toward more targeted and personalized drugs, devices, therapies, and care. Organizations are facing an urgent need to meet the unique demands of patients, regulators, and payers, necessitating a move toward a more patient-centric, value-driven, and personalized healthcare ecosystem.

Meeting these challenges requires redesigning clinical R&D processes, drug therapies, and care delivery through innovative software solutions, IT systems, data analysis, and bench-to-bedside knowledge. The core mission is to improve the health, well-being, and lives of people globally by:

  • Optimizing clinical research and development, speeding time to market, reducing costs, and mitigating risk
  • Accelerating efficiency by using business analytics, costing, and performance management technologies
  • Establishing a global infrastructure for collaborative clinical discovery and care delivery models
  • Scaling innovations with world-class, transformative technology solutions
  • Harnessing the power of big data to improve patient experience and outcomes

The Oracle Industry Connect health sciences program features 15 sessions showcasing innovation and transformation of clinical R&D, value-based healthcare, and personalized medicine.

The health sciences program is an invitation-only event for senior-level life sciences and healthcare business and IT executives.

Complete your registration and book your hotel reservation prior to February 27, 2015 in order to secure the Oracle discounted hotel rate.

Learn more about Oracle Healthcare.

General Welcome and Joint Program Agenda

Wednesday, March 25

10:30 a.m.–12:00 p.m.

Oracle Industry Connect Opening Keynote

Mark Hurd, Chief Executive Officer, Oracle

Bob Weiler, Executive Vice President, Global Business Units, Oracle

Warren Berger, Author of “A More Beautiful Question: The Power of Inquiry to Spark Breakthrough Ideas.”

12:00 p.m.–1:45 p.m.

Networking Lunch

1:45 p.m.–2:45 p.m.

Oracle Industry Connect Keynote

Bob Weiler, Executive Vice President, Global Business Units, Oracle

2:45 p.m.–3:45 p.m.

Networking Break

3:45 p.m.–5:45 p.m.

Life Sciences and Healthcare General Session

Robert Robbins, President, Chief Executive Officer, Texas Medical Center

Steve Rosenberg, Senior Vice President and General Manager Health Sciences Global Business Unit, Oracle

7:00 p.m.–10:00 p.m.

Life Sciences and Healthcare Networking Reception

National Museum of American History
14th Street and Constitution Avenue, NW
Washington DC 20001

Life Sciences Agenda

Thursday, March 26

7:00 a.m.–8:00 a.m.

Networking Breakfast

8:00 a.m.–9:15 a.m.

Digital Trials and Research Models of the Future 

Markus Christen, Senior Vice President and Head of Global Development, Proteus

Praveen Raja, Senior Director of Medical Affairs, Proteus Digital Health

Michael Stapleton, Vice President and Chief Information Officer, R&D IT, Merck

9:15 a.m.–10:30 a.m.

Driving Patient Engagement and the Internet of Things 

Howard Golub, Vice President of Clinical Research, Walgreens

Jean-Remy Behaeghel, Senior Director, Client Account Management, Product Development Solutions, Vertex Pharmaceuticals

10:30 a.m.–10:45 a.m.

Break

10:45 a.m.–12:00 p.m.

Leveraging Data and Advanced Analytics to Enable True Pharmacovigilance and Risk Management 

Leonard Reyno, Senior Vice President, Chief Medical Officer, Agensys

 

Accelerating Therapeutic Development Through New Technologies 

Andrew Rut, Chief Executive Officer, Co-Founder and Director, MyMeds&Me

12:45 p.m.–1:45 p.m.

Networking Lunch

1:45 p.m.–2:30 p.m.

Oracle Industry Connect Keynote

2:30 p.m.–2:45 p.m.

Break

2:45 p.m.–3:15 p.m.

Harnessing Big Data to Increase R&D Innovation, Efficiency, and Collaboration 

Sandy Tremps, Executive Director, Global Clinical Development IT, Merck

3:15 p.m.–3:30 p.m.

Break

3:30 p.m.–4:45 p.m.

Transforming Clinical Research from Planning to Postmarketing 

Kenneth Getz, Director of Sponsored Research Programs and Research Associate Professor, Tufts University

Jason Raines, Head, Global Data Operations, Alcon Laboratories

4:45 p.m.–6:00 p.m.

Increasing Efficiency and Pipeline Performance Through Sponsor/CRO Data Transparency and Cloud Collaboration 

Thomas Grundstrom, Vice President, ICONIK, Cross Functional IT Strategies and Innovation, ICON

Margaret Keegan, Senior Vice President, Global Head Data Sciences and Strategy, Quintiles

6:00 p.m.–9:00 p.m.

Oracle Customer Networking Event

Healthcare Agenda

Thursday, March 26

7:00 a.m.–8:15 a.m.

Networking Breakfast

8:30 a.m.–9:15 a.m.

Population Health: A Core Competency for Providers in a Post Fee-for-Service Model 

Margaret Anderson, Executive Director, FasterCures

Balaji Apparsamy, Director, Business Intelligence, BayCare

Leslie Kelly Hall, Senior Vice President, Policy, Healthwise

Peter Pronovost, Senior Vice President, Patient Safety & Quality, Johns Hopkins

Sanjay Udoshi, Healthcare Product Strategy, Oracle

9:15 a.m.–9:30 a.m.

Break

9:30 a.m.–10:15 a.m.

Population Health: A Core Competency for Providers in a Post Fee-for-Service Model (Continued)

10:15 a.m.–10:45 a.m.

Networking Break

10:45 a.m.–11:30 a.m.

Managing Cost of Care in the Era of Healthcare Reform 

Chris Bruerton, Director, Budgeting, Intermountain Healthcare

Tony Byram, Vice President Business Integration, Ascension

Kerri-Lynn Morris, Executive Director, Finance Operations and Strategic Projects, Kaiser Permanente

Kavita Patel, Managing Director, Clinical Transformation, Brookings Institution

Christine Santos, Chief of Strategic Business Analytics, Providence Health & Services

Prashanth Kini, Senior Director, Healthcare Product Strategy, Oracle

11:30 a.m.–11:45 a.m.

Break

11:45 a.m.–12:45 p.m.

Managing Cost of Care in the Era of Healthcare Reform (Continued)

12:45 p.m.–1:45 p.m.

Networking Lunch

1:45 p.m.–2:30 p.m.

Oracle Industry Connect Keynote

2:30 p.m.–2:45 p.m.

Break

2:45 p.m.–3:30 p.m.

Precision Medicine 

Annerose Berndt, Vice President, Analytics and Information, UPMC

James Buntrock, Vice Chair, Information Management and Analytics, Mayo Clinic

Dan Ford, Vice Dean for Clinical Investigation, Johns Hopkins Medicine

Jan Hazelzet, Chief Medical Information Officer, Erasmus MC

Stan Huff, Chief Medical Information Officer, Intermountain Healthcare

Vineesh Khanna, Director, Biomedical Informatics, SIDRA

Brian Wells, Vice President, Health Technology, Penn Medicine

Wanmei Ou, Senior Product Strategist, Healthcare, Oracle

3:30 p.m.–3:45 p.m.

Networking Break

3:45 p.m.–4:30 p.m.

Precision Medicine (Continued)

4:30 p.m.–4:45 p.m.

Break

6:00 p.m.–9:00 p.m.

Oracle Customer Networking Event

Additional Links to Oracle Pharma, Life Sciences and HealthCare

 
Life Sciences | Industry | Oracle
http://www.oracle.com/us/industries/life-sciences/overview/
Oracle Applications for Life Sciences deliver a powerful combination of technology and preintegrated applications.

  • Clinical: http://www.oracle.com/us/industries/life-sciences/clinical/overview/index.html
  • Medical Devices: http://www.oracle.com/us/industries/life-sciences/medical/overview/index.html
  • Pharmaceuticals: http://www.oracle.com/us/industries/life-sciences/pharmaceuticals/overview/index.html

Life Sciences Solutions | Pharmaceuticals and … – Oracle
http://www.oracle.com/us/industries/life-sciences/solutions/index.html
Life Sciences Pharmaceuticals and Biotechnology.

Oracle Life Sciences Data Hub – Overview | Oracle
http://www.oracle.com/us/products/applications/health-sciences/e-clinical/data-hub/index.html
Oracle Life Sciences Data Hub. Better Insights, More Informed Decision-Making. Provides an integrated environment for clinical data, improving regulatory …

Pharmaceuticals and Biotechnology | Oracle Life Sciences
http://www.oracle.com/us/industries/life-sciences/pharmaceuticals/overview/index.html
Oracle Applications for Pharmaceuticals and Biotechnology deliver a powerful combination of technology and preintegrated applications.

Oracle Health Sciences – Healthcare and Life Sciences …
https://www.oracle.com/industries/health-sciences/
Oracle Health Sciences leverages industry-shaping technologies that optimize clinical R&D, mitigate risk, advance healthcare, and improve patient outcomes.

Clinical | Oracle Life Sciences | Oracle
http://www.oracle.com/us/industries/life-sciences/clinical/overview/index.html
Oracle for Clinical Applications provides an integrated remote data collection facility for site-based entry.

Oracle Life Sciences | Knowledge Zone | Oracle …
http://www.oracle.com/partners/en/products/industries/life-sciences/get-started/index.html
This Knowledge Zone was specifically developed for partners interested in reselling or specializing in Oracle Life Sciences solutions. To become a specialized …

[PDF] Brochure: Oracle Health Sciences Suite of Life Sciences … 
http://www.oracle.com/us/industries/life-sciences/oracle-life-sciences-solutions-br-414127.pdf
Oracle Health Sciences Suite of Life Sciences Solutions. Integrated Solutions for Global Clinical Trials. Oracle Health Sciences provides the world’s broadest set …


The Vibrant Philly Biotech Scene: Focus on Computer-Aided Drug Design and Gfree Bio, LLC

Curator: Stephen J. Williams, Ph.D.

Article ID #166: The Vibrant Philly Biotech Scene: Focus on Computer-Aided Drug Design and Gfree Bio, LLC. Published on 2/10/2015

WordCloud Image Produced by Adam Tubman

 

 

This post is the second in a series highlighting interviews with Philadelphia-area biotech startup CEOs, showing how a vibrant biotech startup scene is evolving in the city as well as the greater Delaware Valley. Philadelphia has been home to some of the nation’s oldest biotechs, including Cephalon and Centocor, hundreds of spinouts from a multitude of universities, the first cloned animal (a frog), the first transgenic mouse, and Nobel laureates in the fields of molecular biology and genetics. Although disheartening news about Philadelphia’s fall in the biotech-hub rankings and remarks by CEOs of former area companies have dominated recent headlines, biotech incubators like the University City Science Center and the Bucks County Biotechnology Center, as well as a reinvigorated investment community (like PCCI and MABA), are bringing Philadelphia back. And although much work is needed to restore the Philadelphia area to its former glory (including political will at the state level), there are many bright spots, such as the innovative young companies outlined in these posts.

In today’s post, I had the opportunity to talk with molecular modeler Charles H. Reynolds, Ph.D., founder and CEO of Gfree Bio LLC, a computational structure-based design and modeling company based in the Pennsylvania Biotechnology Center of Bucks County. Gfree is actually one of a few molecular modeling companies at the center (I previously highlighted another, RAbD Biotech, which uses structural computational methods to design antibody therapeutics).

Below is the interview with Dr. Reynolds of Gfree Bio LLC and Leaders in Pharmaceutical Business Intelligence (LPBI):

LPBI: Could you briefly explain, for non-molecular modelers, your business and the advantages you offer over other molecular modeling programs (either academic programs or other biotech companies)? As big pharma outsources more are you finding that your company is filling a needed niche market?

GfreeBio: Gfree develops and deploys innovative computational solutions to accelerate drug discovery. We can offer academic labs a proven partner for developing SBIR/STTR proposals that include a computational or structure-based design component. This can be very helpful in developing a successful proposal. We also provide the same modeling and structure-based design input for small biotechs that do not have these capabilities internally. Working with Gfree is much more cost-effective than trying to develop these capabilities internally. We have helped several small biotechs in the Philadelphia region assess their modeling needs and apply computational tools to advance their discovery programs. (see publication and collaboration list here).

LPBI: Could you offer more information on the nature of your 2014 STTR award?

GfreeBio: Gfree has been involved in three successful SBIR/STTR awards in 2014.   I am the PI for an STTR with Professor Burgess of Texas A&M that is focused on new computational and synthetic approaches to designing inhibitors for protein-protein interactions. Gfree is also collaborating with the Wistar Institute and Phelix Therapeutics on two other Phase II proposals in the areas of oncology and infectious disease.

LPBI: Why did you choose the Bucks County Pennsylvania Biotechnology Center?

GfreeBio: I chose to locate my company at the Biotech Center because it is a regional hub for small biotech companies and it provides a range of shared resources that are very useful to the company. Many of my most valuable collaborations have resulted from contacts at the center.

LPBI: The Blumberg Institute and Natural Products Discovery Institute have acquired a massive phytochemical library. How does this resource benefit the present and future plans for GfreeBio?

GfreeBio: To date Gfree Bio has not been an active collaborator with the Natural Products Institute, but I have a good relationship with the Director and that could change at any time.

LPBI: Did the state of Pennsylvania and local industry groups support GfreeBio’s move into the Doylestown incubator? Has the partnership with Ben Franklin Partners and the Center provided you with investment and partnership opportunities?

GfreeBio: Gfree Bio has not been actively seeking outside investors, at least to date. We have been focused on growing the company through collaborations and consulting relationships. However, we have benefitted from being part of the Keystone Innovation Zone, a state program that provides incentives for small technology-based businesses in Pennsylvania.

LPBI: You will be speaking at a conference in the UK on reinventing the drug discovery process through tighter collaborations between biotech, academia, and non-profit organizations. How do you feel the Philadelphia area can increase this type of collaboration to not only enhance the goals and missions of nonprofits and invigorate the Pennsylvania biotech industry, but also add much-needed funding to local academic organizations?

GfreeBio: I think this type of collaboration across sectors appears to be one of the most important emerging models for drug discovery.   The Philadelphia region has been in many ways hard hit by the shift of drug discovery from large vertically integrated pharmaceutical companies to smaller biotechs, since this area was at the very center of “Big Pharma.” But I think the region is bouncing back as it shifts more to being a center for biotech. The three ingredients for success in the new pharma model are great universities, a sizeable talent pool, and access to capital. The last item may be the biggest challenge locally. The KIZ program (Keystone Innovation Zone) is a good start, but the region and state could do more to help promote innovation and company creation. Some other states are being much more aggressive.

LPBI: In addition, the Pennsylvania Biotechnology Center in Bucks County appears to have this ecosystem: nonprofit organizations, biotechs, and academic researchers. Does this diversity of researchers/companies under one roof foster the type of collaboration needed, as will be discussed at the UK conference? Do you feel collaborations which are in close physical proximity are more effective and productive than a “virtual-style” (online) collaboration model? Could you comment on some of the collaborations GfreeBio is doing with other area biotechs and academics?

GfreeBio: I do think the “ecosystem” at the Pennsylvania Biotechnology Center is important in fostering new innovative companies. It promotes collaborations that might not happen otherwise, and I think close proximity is always a big plus. As I mentioned before, many of the current efforts of Gfree have come from contacts at the center.   This includes SBIR/STTR collaborations and contract work for local small biotech companies.

LPBI: Thomson Reuters just reported that China’s IQ (Innovation Quotient) has risen dramatically, with the greatest number of patents for pharmaceuticals and compounds from natural products. Have you or your colleagues noticed more competition or business from Chinese pharmaceutical companies?

GfreeBio: The rise of Asia, particularly China, has been one of the most significant recent trends in the pharmaceutical industry. Initially, this was almost exclusively in the CRO space, but now China is aggressively building a fully integrated domestic pharmaceutical industry.

LPBI: How can the Philadelphia ecosystem work closer together to support greater innovation?

GfreeBio: A lot has happened in recent years to promote innovation and company creation in the region. There could always be more opportunities for networking and collaboration within the Philadelphia community. Of course the biggest obstacle in this business is often financing. Philadelphia needs more public and private sources for investment in startups.

LPBI: Thank you Dr. Reynolds.

Please look for future posts in this series on the Philly Biotech Scene on this site

Also, if you would like your Philadelphia biotech startup to be highlighted in this series please contact me: sjwilliamspa@comcast.net or @StephenJWillia2.
Our site is read by ~570,000 readers, among them thousands of international readers daily, and is followed by thousands of Twitter followers.

 

Other posts on this site in this VIBRANT PHILLY BIOTECH SCENE SERIES OR referring to PHILADELPHIA BIOTECH include:

RAbD Biotech Presents at 1st Pitch Life Sciences-Philadelphia

The Vibrant Philly Biotech Scene: Focus on Vaccines and Philimmune, LLC

What VCs Think about Your Pitch? Panel Summary of 1st Pitch Life Science Philly

1st Pitch Life Science- Philadelphia- What VCs Really Think of your Pitch

LytPhage Presents at 1st Pitch Life Sciences-Philadelphia

Hastke Inc. Presents at 1st Pitch Life Sciences-Philadelphia

PCCI’s 7th Annual Roundtable “Crowdfunding for Life Sciences: A Bridge Over Troubled Waters?” May 12 2014 Embassy Suites Hotel, Chesterbrook PA 6:00-9:30 PM

Pfizer Cambridge Collaborative Innovation Events: ‘The Role of Innovation Districts in Metropolitan Areas to Drive the Global an | Basecamp Business

Mapping the Universe of Pharmaceutical Business Intelligence: The Model developed by LPBI and the Model of Best Practices LLC


Twitter is Becoming a Powerful Tool in Science and Medicine

 Curator: Stephen J. Williams, Ph.D.

Article ID #159: Twitter is Becoming a Powerful Tool in Science and Medicine. Published on 11/6/2014

WordCloud Image Produced by Adam Tubman

Updated 4/2016


A recent Science article (“Who are the science stars of Twitter?”, Sept. 19, 2014) reported the top 50 scientists followed on Twitter. However, the article tended to focus on the use of Twitter as a means to develop popularity, a sort of “Science Kardashian” as they coined it. The writers at Science even developed a “Kardashian Index” (K-Index) to measure scientists’ following and popularity on Twitter.

Now, for all the buzz Kim Kardashian or a Perez Hilton gets on social media, their purpose is solely entertainment and publicity; the Science piece fell a bit flat in that it focused mainly on the use of Twitter as a metric for either promotional or public-outreach purposes. One notable scientist mentioned in the article used his Twitter feed to gauge the receptiveness of his presentation. In addition, relying on Twitter for effective public discourse about science is problematic because:

  • Twitter feeds are rapidly updated and older feeds quickly get buried within the “Twittersphere” = LIMITED EXPOSURE TIMEFRAME
  • Short feeds may not provide access to appropriate and understandable scientific information (the science communication trap), as explained in The Art of Communicating Science: traps, tips and tasks for the modern-day scientist: “The challenge of clearly communicating the intended scientific message to the public is not insurmountable but requires an understanding of what works and what does not work.” – from Heidi Roop, G. Martinez-Mendez and K. Mills

However, as highlighted below, Twitter and other social media platforms are being used in creative ways to enhance research, medical, and bio-investment collaboration, beyond a simple news feed. The power of Twitter can be attributed to a few simple features:

  1. Ability to organize – through use of the hashtag (#) and handle (@), Twitter assists in the very important tasks of organizing, indexing, and ANNOTATING content and conversations. A very good article, “Why the Hashtag is Probably the Most Powerful Tool on Twitter” by Vanessa Doctor, explains how hashtags and #-search may be as popular as standard web-browser search. Thorough annotation is crucial for any curation process, usually in the form of database tags or keywords. The use of # and @ allows curators to quickly find, index, and relate disparate databases to link annotated information together (a minimal sketch of hashtag extraction follows the curation reading list below). The discipline of scientific curation requires annotation to assist in the digital preservation, organization, indexing, and access of data and the scientific & medical literature. For a description of scientific curation methodologies please see the following links:

Please read the following articles on CURATION

The Methodology of Curation for Scientific Research Findings

Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison

Science and Curation: The New Practice of Web 2.0
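As a concrete (if toy) illustration of why # and @ make tweets easy to index, the short Python sketch below pulls hashtags and handles out of raw tweet text and counts them; the sample tweets and tags are invented for the example.

```python
import re
from collections import Counter

# Invented sample tweets; real data would come from the Twitter API.
tweets = [
    "Great #genomics talk by @some_scientist at #BIO2015",
    "New paper on Ras signaling #cancer #genomics",
]

def extract_tags(tweet):
    """Pull out hashtags (#) and handles (@), Twitter's indexing vocabulary."""
    return re.findall(r"[#@]\w+", tweet)

index = Counter(tag for t in tweets for tag in extract_tags(t))
print(index.most_common())  # [('#genomics', 2), ('@some_scientist', 1), ...]
```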

  2. Information Analytics

Multiple analytic software packages have been made available to analyze information surrounding Twitter feeds, including Twitter feeds from #chat channels one can set up to cover a meeting, product launch, etc. Some of these tools include:

Twitter Analytics – measures metrics surrounding Tweets including retweets, impressions, engagement, follow rate, …

Twitter Analytics – Hashtags.org – determine the most impactful # for your Tweets. For example, meeting coverage of bioinvestment conferences or startup presentations using #startup generates automatic retweeting by the Startup tweetbot @StartupTweetSF.

 

  3. Tweet Sentiment Analytics
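Commercial sentiment tools vary widely, but at their simplest they score text against word lists. The sketch below shows that minimal lexicon-based approach in Python; the word lists are illustrative, not any vendor’s actual lexicon.

```python
# Minimal lexicon-based sentiment scoring; word lists are illustrative only.
POSITIVE = {"great", "effective", "improved", "promising"}
NEGATIVE = {"adverse", "worse", "failed", "toxic"}

def sentiment(tweet):
    """Return (#positive words) - (#negative words) as a crude polarity score."""
    words = tweet.lower().replace(",", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("Promising results, much improved response"))     #  2
print(sentiment("Patient felt worse, adverse reaction reported"))  # -2
```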

Examples of Twitter Use

A. Scientific Meeting Coverage

In a paper entitled “Twitter Use at a Family Medicine Conference: Analyzing #STFM13”, authors Ranit Mishori, MD, Brendan Levy, MD, and Benjamin Donvan analyzed the public tweets from the 2013 Society of Teachers of Family Medicine (STFM) conference bearing the meeting-specific hashtag #STFM13. Thirteen percent of conference attendees (181 users) used #STFM13 to share their thoughts on the meeting (1,818 total tweets), showing a desire for social media interaction at conferences but suggesting growth potential in this area. As we have also seen, the heaviest volume of conference tweets originated from a small number of Twitter users; however, most tweets were related to session content.

However, as the authors note, although it is easy to measure common metrics such as number of tweets and retweets, determining quality of engagement from tweets would be important for gauging the value of Twitter-based social-media coverage of medical conferences.

The authors compared their results with similar analytics generated by the HealthCare Hashtag Project, a project and database of medically related hashtag use, coordinated and maintained by the company Symplur. Symplur’s database includes medical and scientific conference Twitter coverage but also Twitter usage related to patient care. In this case the database was used to compare meeting tweets and hashtag use with the 2012 STFM conference.

These are some of the published journal articles that have employed Symplur (www.symplur.com) data in their research of Twitter usage in medical conferences.

B. Twitter Usage for Patient Care and Engagement

Although patients’ desire to use social media to interact with their physicians is increasing, along with the number of health-related social media platforms and applications, there are certain obstacles to patient-health provider social media interaction, including the lack of a regulatory framework as well as database and security issues. Some of the successes and issues of social media and healthcare are discussed in the post Can Mobile Health Apps Improve Oral-Chemotherapy Adherence? The Benefit of Gamification.

However, there is also concern over whether social media truly engages the patient and improves patient education. In a study of Twitter communications by breast cancer patients, the authors noticed that tweeting about breast cancer was typically a singular event. The majority of tweets did not promote any specific preventive behavior. The authors concluded “Twitter is being used mostly as a one-way communication tool.” (Using Twitter for breast cancer prevention: an analysis of breast cancer awareness month. Thackeray R, Burton SH, Giraud-Carrier C, Rollins S, Draper CR. BMC Cancer. 2013;13:508).

In addition, a new poll by Harris Interactive and HealthDay shows that one third of patients want some mobile interaction with their physicians.

Some papers cited in Symplur’s HealthCare Hashtag Project database on patient use of Twitter include:

C. Twitter Use in Pharmacovigilance to Monitor Adverse Events

Pharmacovigilance is the systematic detection, reporting, collecting, and monitoring of adverse events, both pre- and post-market, for a therapeutic intervention (e.g., drug, device, or modality). In a Cutting Edge Information study, 56% of pharma companies use databases as an adverse event channel, and more companies are turning to social media to track adverse events (in Pharmacovigilance Teams Turn to Technology for Adverse Event Reporting Needs). In addition there have been many reports (see Digital Drug Safety Surveillance: Monitoring Pharmaceutical Products in Twitter) showing that patients frequently tweet about their adverse events.

There have been concerns with using Twitter and social media to monitor for adverse events. For example, the FDA funded a study in which a team of researchers from Harvard Medical School and other academic centers examined more than 60,000 tweets, of which 4,401 were manually categorized as resembling adverse events and compared with the FDA pharmacovigilance databases. Problems associated with such a social media strategy were the inability to obtain additional needed information from patients and the difficulty of separating relevant Tweets from irrelevant chatter. The UK has launched a similar program called WEB-RADR to determine if monitoring #drug_reaction could be useful for monitoring adverse events. Many researchers have found the adverse-event-related tweets “noisy” due to varied language, but noticed that many people do understand some principles of causation, including when an adverse event subsides after discontinuing the drug.
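To see where the noise comes from, consider the naive first-pass screen such studies typically begin with: a keyword filter that flags tweets pairing a drug mention with a symptom term for manual review. The lexicon and tweets below are purely illustrative, not any study’s actual method:

```python
import re

# Tiny illustrative symptom lexicon; real systems must also handle
# misspellings, slang, negation, and drug-name variants.
SYMPTOMS = {"nausea", "dizzy", "rash", "headache", "vomiting"}

def flag_candidates(tweets, drug_name):
    """Yield tweets that mention the drug together with a symptom term."""
    drug = drug_name.lower()
    for tweet in tweets:
        words = set(re.findall(r"\w+", tweet.lower()))
        if drug in words and words & SYMPTOMS:
            yield tweet

tweets = [
    "day 3 on drugX and the nausea is unreal",
    "drugX is working great for me so far",
    "so dizzy today, skipped my drugX dose",
]
print(list(flag_candidates(tweets, "drugX")))  # flags the first and third tweets
```

Even this toy filter illustrates the follow-up problem the FDA-funded study hit: a flagged tweet still lacks the dose, timeline, and outcome details a formal adverse-event report requires.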

However, Clark Freifeld, Ph.D., from Boston University and founder of the startup Epidemico, feels his company has the algorithms that can separate the true adverse events from the junk. According to their website, their algorithm has high accuracy when compared to the FDA database. Dr. Freifeld admits that Twitter use for pharmacovigilance purposes is probably a starting point for further follow-up, as each patient needs to fill out the four-page forms required for data entry into the FDA database.

D. Use of Twitter in Big Data Analytics

Published on Aug 28, 2012

http://blogs.ischool.berkeley.edu/i29…

Course: Information 290. Analyzing Big Data with Twitter
School of Information
UC Berkeley

Lecture 1: August 23, 2012

Course description:
How to store, process, analyze and make sense of Big Data is of increasing interest and importance to technology companies, a wide range of industries, and academic institutions. In this course, UC Berkeley professors and Twitter engineers will lecture on the most cutting-edge algorithms and software tools for data analytics as applied to Twitter microblog data. Topics will include applied natural language processing algorithms such as sentiment analysis, large scale anomaly detection, real-time search, information diffusion and outbreak detection, trend detection in social streams, recommendation algorithms, and advanced frameworks for distributed computing. Social science perspectives on analyzing social media will also be covered.

This is a hands-on project course in which students are expected to form teams to complete intensive programming and analytics projects using the real-world example of Twitter data and code bases. Engineers from Twitter will help advise student projects, and students will have the option of presenting their final project presentations to an audience of engineers at the headquarters of Twitter in San Francisco (in addition to on campus). Project topics include building on existing infrastructure tools, building Twitter apps, and analyzing Twitter data. Access to data will be provided.

Other posts on this site on USE OF SOCIAL MEDIA AND TWITTER IN HEALTHCARE and Conference Coverage include:

Methodology for Conference Coverage using Social Media: 2014 MassBio Annual Meeting 4/3 – 4/4 2014, Royal Sonesta Hotel, Cambridge, MA

Strategy for Event Joint Promotion: 14th ANNUAL BIOTECH IN EUROPE FORUM For Global Partnering & Investment 9/30 – 10/1/2014 • Congress Center Basel – SACHS Associates, London

REAL TIME Cancer Conference Coverage: A Novel Methodology for Authentic Reporting on Presentations and Discussions launched via Twitter.com @ The 2nd ANNUAL Sachs Cancer Bio Partnering & Investment Forum in Drug Development, 19th March 2014 • New York Academy of Sciences • USA

PCCI’s 7th Annual Roundtable “Crowdfunding for Life Sciences: A Bridge Over Troubled Waters?” May 12 2014 Embassy Suites Hotel, Chesterbrook PA 6:00-9:30 PM

CRISPR-Cas9 Discovery and Development of Programmable Genome Engineering – Gabbay Award Lectures in Biotechnology and Medicine – Hosted by Rosenstiel Basic Medical Sciences Research Center, 10/27/14 3:30PM Brandeis University, Gerstenzang 121

Tweeting on 14th ANNUAL BIOTECH IN EUROPE FORUM For Global Partnering & Investment 9/30 – 10/1/2014 • Congress Center Basel – SACHS Associates, London

http://pharmaceuticalintelligence.com/press-coverage/

Statistical Analysis of Tweet Feeds from the 14th ANNUAL BIOTECH IN EUROPE FORUM For Global Partnering & Investment 9/30 – 10/1/2014 • Congress Center Basel – SACHS Associates, London

1st Pitch Life Science- Philadelphia- What VCs Really Think of your Pitch

What VCs Think about Your Pitch? Panel Summary of 1st Pitch Life Science Philly

How Social Media, Mobile Are Playing a Bigger Part in Healthcare

Can Mobile Health Apps Improve Oral-Chemotherapy Adherence? The Benefit of Gamification.

Medical Applications and FDA regulation of Sensor-enabled Mobile Devices: Apple and the Digital Health Devices Market

E-Medical Records Get A Mobile, Open-Sourced Overhaul By White House Health Design Challenge Winners

Read Full Post »

Heroes in Medical Research: The Postdoctoral Fellow

Writer: Stephen J. Williams, Ph.D

Thank your Postdoc

The National Postdoctoral Association (NPA) held its Fifth Annual Celebration of National Postdoc Appreciation Week (NPAW) in September, and I wanted to focus a posting on curating stories from postdoctoral fellows as well as principal investigators (PIs) and mentors on the impact postdoctoral fellows have had in research, and to recognize the critical and tremendous contributions which postdocs make to science.

During our postdoctoral years, we develop deep friendships which last a lifetime, a close bond with our kindred scientists different in nature from our bond with our mentors. Nothing can replace a great mentor, but our fellow postdocs make a huge difference in our complete scientific training.

                                   It’s always the little things that stand out in our fondest memories

Unfortunately, I have a plethora of fond little memories, too many for this posting, but I just want to add a few things:

  • Thank you! – To all those postdocs who worked tirelessly to make a memorable Postdoc Day!
  • Thank you! – To all my postdoc colleagues who stayed late in the lab with me, giving each other moral and scientific support
  • Thank you! – To all my postdoc friends who would give up their time to show me how to make and use a text box correctly in Word
  • Thank you! – For your friendship and understanding in those rough times we experienced

To enliven the discussion, I ask that postdocs past, present, and future, as well as PIs and postdoc mentors, comment on their postdoc experience. I would also like PIs to share stories of how their postdocs made an impact on their labs.

A few interesting links and articles from the web on the importance and struggles of postdocs are included below:

Keith Micoli, from New York University Langone Medical Center, states in an Elsevier article in The Academic Executive Brief:

Consequently, it’s very difficult to come up with accurate numbers. Current estimates on number of postdocs come between 40,000 and 90,000 — a range that is unacceptable. A solid bet is that there are 60,000 postdocs and that more than half, if not two thirds or higher, are international.

– from US research enterprise powered by international postdocs by Keith Micoli at NYU

Survey Methodology

Since Science started conducting annual surveys seven years ago, alternating between polling postdocs and postdoc advisors, the attributes that survey respondents select as being most important to a successful postdoc have not varied much. This year’s survey was launched on March 15, 2011, with e-mail invitations sent out to about 40,000 current and former postdoc advisors worldwide. Of the 798 completed surveys that were collected, 71 percent came from Europe (39 percent) and North America (32 percent). The remaining respondents were located in Asia/Australia/Pacific Rim (20 percent) or other areas of the world (9 percent). Most were males (72 percent) 40 years of age and older (76 percent) who worked in academic institutions (70 percent) and government organizations (13 percent). The primary area represented was the life sciences (57 percent).

However, only a handful of institutions were featured.

An open letter to AAAS journal “Science”: Postdocs need to address the “The Future of Research”

https://thewinnower.com/papers/an-open-letter-to-aaas-journal-science-postdocs-need-to-address-the-the-future-of-research?jm.npa=

This letter, posted on TheWinnower.com, was a response to Callier’s article “Ailing academia needs culture change”1 and discussed how postdoctoral fellows have to lead in effecting change if the US research enterprise is to flourish in the future. In addition, the authors have been organizing Boston-area postdoctoral associations and are sponsoring a symposium at Boston University on October 2-3, 2014, focusing on the challenges facing graduate students and postdoctoral fellows: the “Future of Research” symposium (futureofresearch.org, @FORsymp).

  1. V. Callier, N. L. Vanderford. “Ailing academia needs culture change.” Science, 2014: 345; 6199: 885. DOI: 10.1126/science.345.6199.885-b

On the surface, many acknowledge the importance of postdoctoral fellows to the US research effort,

HOWEVER, the QUESTION remains DO POSTDOCS FEEL APPRECIATED FOR THEIR EFFORTS?

Please read Jacquelyn Gill, Ph.D.’s GREAT blog post

Have you hugged your postdoc today?

in The Contemplative Mammoth about surviving postdoctoral life.

For some postdoc humor go to

http://phdcomics.com/comics.php where Jorge Cham, Ph.D. has been satirizing the Ph.D. life since he was a graduate student in the late ’90s,

and see if you could be a star in their movie about Ph.D.s: The PhD Movie and its sequel.

Don’t Underestimate Your Postdoc

Dr. Thomas C. Südhof is an example of a postdoctoral fellow who made great contributions to a lab. A summary of his work is seen below, obtained from the site thebestschools.org and its list of the “50 Most Influential Scientists”.

http://www.thebestschools.org/features/50-influential-scientists-world-today/#S%C3%BCdhof

Thomas C. Südhof

Thomas C. Südhof is a biochemist and professor in the School of Medicine in the Department of Molecular and Cellular Physiology at Stanford University. He is best known for his work in the area of synaptic transmission, which is the process by which signaling chemicals known as neurotransmitters are released by one neuron and bind to and activate the receptors of another neuron.

Südhof won the 2013 Nobel Prize in Physiology or Medicine, along with Randy Schekman and James Rothman.

Südhof, a native of Germany, obtained his MD from the University of Göttingen and conducted his postdoctoral training in the department of molecular genetics at the University of Texas Health Science Center. There he worked on describing the role of the LDL receptor in cholesterol metabolism, for which Michael S. Brown and Joseph L. Goldstein were awarded the Nobel Prize in Physiology or Medicine in 1985.

 

Another example from the site is Dr. Craig Mello (Craig C. Mello’s Home Page) who, along with Dr. Andrew Fire, discovered RNAi (Fire was then at the Carnegie Institution). Both received a Nobel Prize for their work.

So again, I would love to hear and curate personal stories highlighting how postdocs make great contributions to US science.

More articles in this “Heroes in Medical Research” series and posts on Scientific Careers from this site include:

Heroes in Medical Research: Green Fluorescent Protein and the Rough Road in Science

Heroes in Medical Research: Developing Models for Cancer Research

Heroes in Medical Research: Dr. Carmine Paul Bianchi Pharmacologist, Leader, and Mentor

Heroes in Medical Research: Dr. Robert Ting, Ph.D. and Retrovirus in AIDS and Cancer

Heroes in Medical Research: Barnett Rosenberg and the Discovery of Cisplatin

Science Budget FY’14: Stakeholders’ Reactions on Selective Budget Drops and Priorities Shift

Careers for Researchers Beyond Academia

BEYOND THE “MALE MODEL”: AN ALTERNATIVE FEMALE MODEL OF SCIENCE, TECHNOLOGY AND INNOVATION

Read Full Post »

Track 9 Pharmaceutical R&D Informatics: Collaboration, Data Science and Biologics @ BioIT World, April 29 – May 1, 2014 Seaport World Trade Center, Boston, MA

Reporter: Aviva Lev-Ari, PhD, RN

 

April 30, 2014

 

Big Data and Data Science in R&D and Translational Research

10:50 Chairperson’s Remarks

Ralph Haffner, Local Area Head, Research Informatics, F. Hoffmann-La Roche AG

11:00 Can Data Science Save Pharmaceutical R&D?

Jason M. Johnson, Ph.D., Associate Vice President, Scientific Informatics & Early Development and Discovery Sciences IT, Merck

Although both premises – that the viability of pharmaceutical R&D is mortally threatened and that modern “data science” is a relevant superhero – are suspect, it is clear that R&D productivity is progressively declining and many areas of R&D suboptimally use data in decision-making. We will discuss some barriers to our overdue information revolution, and our strategy for overcoming them.

11:30 Enabling Data Science in Externalized Pharmaceutical R&D

Sándor Szalma, Ph.D., Head, External Innovation, R&D IT, Janssen Research & Development, LLC

Pharmaceutical companies have historically been involved in many external partnerships. With recent proliferation of hosted solutions and the availability of cost-effective, massive high-performance computing resources there is an opportunity and a requirement now to enable collaborative data science. We discuss our experience in implementing robust solutions and pre-competitive approaches to further these goals.

12:00 pm Co-Presentation (Sponsored):

Collaborative Waveform Analytics: How New Approaches in Machine Learning and Enterprise Analytics will Extend Expert Knowledge and Improve Safety Assessment

  • Tim Carruthers, CEO, Neural ID
  • Scott Weiss, Director, Product Strategy, IDBS

Neural ID’s Intelligent Waveform Service (IWS) delivers the only enterprise biosignal analysis solution combining machine learning with human expertise. A collaborative platform supporting all phases of research and development, IWS addresses a significant unmet need, delivering scalable analytics and a single interoperable data format to transform productivity in life sciences. By enabling analysis from BioBook (IDBS) to original biosignals, IWS enables users of BioBook to evaluate cardio safety assessment across the R&D lifecycle.

12:15 Building a Life Sciences Data Lake: A Useful Approach to Big Data (Sponsored)

Ben Szekely, Director & Founding Engineer, Cambridge Semantics

The promise of Big Data is in its ability to give us technology that can cope with overwhelming volume and variety of information that pervades R&D informatics. But the challenges are in practical use of disconnected and poorly described data. We will discuss: Linking Big Data from diverse sources for easy understanding and reuse; Building R&D informatics applications on top of a Life Sciences Data Lake; and Applications of a Data Lake in Pharma.

12:40 Luncheon Presentation I (Sponsored): Chemical Data Visualization in Spotfire

Matthew Stahl, Ph.D., Senior Vice President, OpenEye Scientific Software

Spotfire deftly facilitates the analysis and interrogation of data sets. Domain specific data, such as chemistry, presents a set of challenges that general data analysis tools have difficulty addressing directly. Fortunately, Spotfire is an extensible platform that can be augmented with domain specific abilities. Spotfire has been augmented to naturally handle cheminformatics and chemical data visualization through the integration of OpenEye toolkits. The OpenEye chemistry extensions for Spotfire will be presented.

1:10 Luncheon Presentation II 

1:50 Chairperson’s Remarks

Yuriy Gankin, Ph.D., Co-Founder and CSO, GGA Software Services

1:55 Enable Translational Science by Integrating Data across the R&D Organization

Christian Gossens, Ph.D., Global Head, pRED Development Informatics Team, pRED Informatics, F. Hoffmann-La Roche Ltd.

Multi-national pharmaceutical companies face an amazingly complex information management environment. The presentation will show that a systematic system landscaping approach is an effective tool to build a sustainable integrated data environment. Data integration is not mainly about technology, but the use and implementation of it.

2:25 The Role of Collaboration in Enabling Great Science in the Digital Age: The BARD Data Science Case Study

Andrea DeSouza, Director, Informatics & Data Analysis, Broad Institute

BARD (BioAssay Research Database) is a new, public web portal that uses a standard representation and common language for organizing chemical biology data. In this talk, I describe how data professionals and scientists collaborated to develop BARD, organize the NIH Molecular Libraries Program data, and create a new standard for bioassay data exchange.

May 1, 2014

BIG DATA AND DATA SCIENCE IN R&D AND TRANSLATIONAL RESEARCH

10:30 Chairperson’s Opening Remarks

John Koch, Director, Scientific Information Architecture & Search, Merck

10:35 The Role of a Data Scientist in Drug Discovery and Development

Anastasia (Khoury) Christianson, Ph.D., Head, Translational R&D IT, Bristol-Myers Squibb

A major challenge in drug discovery and development is finding all the relevant data, information, and knowledge to ensure informed, evidence-based decisions in drug projects, including meaningful correlations between preclinical observations and clinical outcomes. This presentation will describe where and how data scientists can support pharma R&D.

11:05 Designing and Building a Data Sciences Capability to Support R&D and Corporate Big Data Needs

Shoibal Datta, Ph.D., Director, Data Sciences, Biogen Idec

To achieve Biogen Idec’s strategic goals, we have built a cross-disciplinary team to focus on key areas of interest and the required capabilities. To provide a reusable set of IT services, we have broken down our platform to focus on the Ingestion, Digestion, Extraction and Analysis of data. In this presentation, we will outline how we brought focus and prioritization to our data sciences needs, our data sciences architecture, lessons learned and our future direction.

11:35 Data Experts: Improving Translational Drug-Development Efficiency (Sponsored)

Jamie MacPherson, Ph.D., Consultant, Tessella

We report on a novel approach to translational informatics support: embedding “Data Experts” within drug-project teams. Data Experts combine first-line informatics support and business analysis. They help teams exploit data sources that are diverse in type, scale and quality; analyse user requirements; and prototype potential software solutions. We then explore scaling this approach from a specific drug-development team to all.

 

Read Full Post »

Imaging-guided cancer treatment

Imaging-guided cancer treatment

Writer & reporter: Dror Nir, PhD

It is estimated that the medical imaging market will exceed $30 billion in 2014 (FierceMedicalImaging). To put this amount in perspective: the global pharmaceutical market size for the same year is expected to be ~$1 trillion (IMS), while global health care spending as a percentage of Gross Domestic Product (GDP) will average 10.5% in 2014 (Deloitte) and will reach ~$3 trillion in the USA.

Recent technology advances, mainly miniaturization and improvements in electronic-processing components, are driving increased introduction of innovative medical-imaging devices into critical nodes of major diseases’ management pathways. Consequently, in contrast to its very small contribution to global health costs, medical imaging bears outstanding potential to reduce future growth in spending on major segments of this market, mainly: drug development and regulation (e.g. companion diagnostics and imaging surrogate markers); disease management (e.g. non-invasive diagnosis, guided treatment and non-invasive follow-up); and monitoring the aging population (e.g. imaging-based domestic sensors).

In The Role of Medical Imaging in Personalized Medicine I discussed at length the role medical imaging assumes in drug development. Integrating imaging into drug development processes, specifically at the early stages of drug discovery, as well as for monitoring drug delivery and the response of targeted processes to the therapy, is a growing trend. A nice (and short) review highlighting the processes, opportunities, and challenges of medical imaging in new drug development is: Medical imaging in new drug clinical development.

The following is dedicated to the role of imaging in guiding treatment.

Precise treatment is a major pillar of modern medicine. An important aspect of enabling accurate administration of treatment is complementing the accurate identification of the organ location that needs to be treated with a system and methods that ensure application of treatment only, or mainly, to that location. Imaging is, of course, a major component in such composite systems. Amongst the available solutions, functional-imaging modalities are gaining traction. Specifically, molecular imaging (e.g. PET, MRS) allows the visual representation, characterization, and quantification of biological processes at the cellular and subcellular levels within intact living organisms. In oncology, it can be used to depict the abnormal molecules as well as the aberrant interactions of altered molecules on which cancers depend. Being able to detect such fundamental fingerprints of cancer is key to improved matching between drug-based treatment and disease. Moreover, imaging-based quantified monitoring of changes in tumor metabolism and its microenvironment could provide a real-time, non-invasive tool to predict the evolution and progression of primary tumors, as well as the development of tumor metastases.

A recent review paper, Image-guided interventional therapy for cancer with radiotherapeutic nanoparticles, nicely illustrates the role of imaging in treatment guidance through a comprehensive discussion of image-guided radiotherapy using intravenous nanoparticles for the delivery of localized radiation to solid cancer tumors.

 Graphical abstract

 Abstract

One of the major limitations of current cancer therapy is the inability to deliver tumoricidal agents throughout the entire tumor mass using traditional intravenous administration. Nanoparticles carrying beta-emitting therapeutic radionuclides [DN: radioactive isotopes that emit electrons as part of the decay process; a list of β-emitting radionuclides used in radiotherapeutic nanoparticle preparation is given in Table 1 of this paper] that are delivered using advanced image-guidance have significant potential to improve solid tumor therapy. The use of image-guidance in combination with nanoparticle carriers can improve the delivery of localized radiation to tumors. Nanoparticles labeled with certain beta-emitting radionuclides are intrinsically theranostic agents that can provide information regarding distribution and regional dosimetry within the tumor and the body. Image-guided thermal therapy results in increased uptake of intravenous nanoparticles within tumors, improving therapy. In addition, nanoparticles are ideal carriers for direct intratumoral infusion of beta-emitting radionuclides by convection enhanced delivery, permitting the delivery of localized therapeutic radiation without the requirement of the radionuclide exiting from the nanoparticle. With this approach, very high doses of radiation can be delivered to solid tumors while sparing normal organs. Recent technological developments in image-guidance, convection enhanced delivery and newly developed nanoparticles carrying beta-emitting radionuclides will be reviewed. Examples will be shown describing how this new approach has promise for the treatment of brain, head and neck, and other types of solid tumors.

The challenges this review discusses

  • intravenously administered drugs are inhibited in their intratumoral penetration by high interstitial pressures which prevent diffusion of drugs from the blood circulation into the tumor tissue [1–5].
  • relatively rapid clearance of intravenously administered drugs from the blood circulation by kidneys and liver.
  • drugs that do reach the solid tumor by diffusion are inhomogeneously distributed at the micro-scale; this cannot be overcome by simply administering larger systemic doses, as toxicity to normal organs is generally the dose-limiting factor.
  • even nanoparticulate drugs have poor penetration from the vascular compartment into the tumor and the nanoparticles that do penetrate are most often heterogeneously distributed

How imaging could mitigate the above mentioned challenges

  • The inclusion of an imaging probe during drug development can aid in determining the clearance kinetics and tissue distribution of the drug non-invasively. Such a probe can also be used to determine the likelihood of the drug reaching the tumor, and to what extent.

Note: Drugs that have increased accumulation within the targeted site are likely to be more effective as compared with others. In that respect, Nanoparticle-based drugs have an additional advantage over free drugs with their potential to be multifunctional carriers capable of carrying both therapeutic and diagnostic imaging probes (theranostic) in the same nanocarrier. These multifunctional nanoparticles can serve as theranostic agents and facilitate personalized treatment planning.

  • Imaging can also be used for localization of the tumor to improve the placement of a catheter or external device within tumors to cause cell death through thermal ablation or oxidative stress secondary to reactive oxygen species.

See the example of Vintafolide in The Role of Medical Imaging in Personalized Medicine

Note: Image guided thermal ablation methods include radiofrequency (RF) ablation, microwave ablation or high intensity focused ultrasound (HIFU). Photodynamic therapy methods using external light devices to activate photosensitizing agents can also be used to treat superficial tumors or deeper tumors when used with endoscopic catheters.

  • Quality control during and post treatment

For example: the use of high intensity focused ultrasound (HIFU) combined with nanoparticle therapeutics. HIFU is applied to improve drug delivery and to trigger drug release from nanoparticles. Gas bubbles play the role of the drug’s nano-carrier; these are used both to increase drug transport into the cell and as ultrasound-imaging contrast material. The ultrasound is also used for the processes of drug release and ablation.


An additional example: multifunctional nanoparticles for tracking CED (convection enhanced delivery) distribution within tumors. A nanoparticle could serve as a carrier not only for therapeutic radionuclides but simultaneously for a therapeutic drug and four different types of imaging contrast agents, including an MRI contrast agent, PET and SPECT nuclear diagnostic imaging agents, and optical contrast agents, as shown below. The ability to perform multiple types of imaging on the same nanoparticles will allow studies investigating the distribution and retention of nanoparticles, initially in vivo using non-invasive imaging and later at the histological level using optical imaging.


Conclusions

Image-guided radiotherapeutic nanoparticles have significant potential for solid tumor cancer therapy. The current success of this therapy in animals is most likely due to the improved accumulation, retention and dispersion of nanoparticles within solid tumor following image-guided therapies as well as the micro-field of the β-particle which reduces the requirement of perfectly homogeneous tumor coverage. It is also possible that the intratumoral distribution of nanoparticles may benefit from their uptake by intratumoral macrophages although more research is required to determine the importance of this aspect of intratumoral radionuclide nanoparticle therapy. This new approach to cancer therapy is a fertile ground for many new technological developments as well as for new understandings in the basic biology of cancer therapy. The clinical success of this approach will depend on progress in many areas of interdisciplinary research including imaging technology, nanoparticle technology, computer and robot assisted image-guided application of therapies, radiation physics and oncology. Close collaboration of a wide variety of scientists and physicians including chemists, nanotechnologists, drug delivery experts, radiation physicists, robotics and software experts, toxicologists, surgeons, imaging physicians, and oncologists will best facilitate the implementation of this novel approach to the treatment of cancer in the clinical environment. Image-guided nanoparticle therapies including those with β-emission radionuclide nanoparticles have excellent promise to significantly impact clinical cancer therapy and advance the field of drug delivery.

Read Full Post »

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson

Life-cycle of Science 2
Curators and Writer: Stephen J. Williams, Ph.D. with input from Curators Larry H. Bernstein, MD, FCAP, Dr. Justin D. Pearlman, MD, PhD, FACC and Dr. Aviva Lev-Ari, PhD, RN

(This discussion is part of a three-part series including:

Using Scientific Content Curation as a Method for Validation and Biocuration

Using Scientific Content Curation as a Method for Open Innovation)

 

Every month I get my Wired Magazine (yes, in hard print; I still like to turn pages manually, plus I don’t mind if I get grease or wing sauce on my magazine rather than on my e-reader) and I always love reading articles written by Clive Thompson. He has a certain flair for understanding the techno world we live in and the human/technology interaction, writing about interesting ways in which we almost inadvertently integrate new technologies into our day-to-day living, generating new entrepreneurship and new value.

An October 2013 Wired article by Clive Thompson, entitled “How Successful Networks Nurture Good Ideas: Thinking Out Loud”, describes how the voluminous writings, postings, tweets, and sharing on social media foster connections between people and ideas which previously had not existed. The article was generated from Clive Thompson’s book Smarter Than You Think: How Technology is Changing Our Minds for the Better. Tom Peters also commented about the article in his blog (see here).

Clive gives a wonderful example of Ory Okolloh, a young Kenyan-born law student who, after becoming frustrated with the lack of coverage of problems back home, started a blog about Kenyan politics. Her blog not only got interest from movie producers who were documenting female bloggers but also gained the interest of fellow Kenyans who, during the upheaval after the 2007 Kenyan elections, helped Ory develop a Google map for reporting violence (http://www.ushahidi.com/), which eventually became a global organization using open-source technology to assist crisis management. There are a multitude of examples of how networks, and the conversations within these circles, are fostering new ideas. As Clive states in the article:

 

Our ideas are PRODUCTS OF OUR ENVIRONMENT.

They are influenced by the conversations around us.

However, the article got me thinking about how Science 2.0 and the internet are changing how scientists contribute, share, and make connections to produce new and transformative ideas.

But HOW MUCH Knowledge is OUT THERE?

 

Clive’s article listed some amazing facts about the mountains of posts, tweets, words etc. out on the internet EVERY DAY, all of which exemplifies the problem:

  • 154.6 billion EMAILS per DAY
  • 400 million TWEETS per DAY
  • 1 million BLOG POSTS (including this one) per DAY
  • 2 million COMMENTS on WordPress per DAY
  • 16 million WORDS on Facebook per DAY
  • TOTAL 52 TRILLION WORDS per DAY

As he estimates, this would be 520 million books per DAY (at an average of 100,000 words per book: 52 trillion ÷ 100,000 = 520 million).

A LOT of INFO. But as he suggests, it is not the volume but how we create and share this information which is critical, as the science fiction writer Theodore Sturgeon noted: “Ninety percent of everything is crap” (AKA Sturgeon’s Law).

 

Internet Live Stats shows how congested the internet is each day (http://www.internetlivestats.com/). Needless to say, Clive’s numbers are a bit off. As of the writing of this article:

 

  • 2.9 billion internet users
  • 981 million websites (only 25,000 hacked today)
  • 128 billion emails
  • 385 million Tweets
  • > 2.7 million BLOG posts today (including this one)

 

The Good, The Bad, and the Ugly of the Scientific Internet (The Wild West?)

 

So how many science blogs are out there? Well, back in 2008 “grrlscientist” asked this question and turned up a total of 19,881 blogs; however, most were “pseudoscience” blogs, not written by Ph.D.- or M.D.-level scientists. A deeper search on Technorati using the search term “scientist PhD” turned up about 2,000 written by trained scientists.

So granted, there is a lot of good, bad, and ugly ….. when it comes to scientific information on the internet!
I had recently re-posted, on this site, a great example of how bad science and medicine can get propagated throughout the internet:

http://pharmaceuticalintelligence.com/2014/06/17/the-gonzalez-protocol-worse-than-useless-for-pancreatic-cancer/

 

and in a Nature report: Stem cells: Taking a stand against pseudoscience

http://www.nature.com/news/stem-cells-taking-a-stand-against-pseudoscience-1.15408

Drs. Elena Cattaneo and Gilberto Corbellini document their long, hard fight against false and unvalidated medical claims made by some “clinicians” about the utility and medical benefits of certain stem-cell therapies, sacrificing their time to debunk medical pseudoscience.

 

Using Curation and Science 2.0 to build Trusted, Expert Networks of Scientists and Clinicians

 

Establishing networks of trusted colleagues has been a cornerstone of scientific discourse for centuries. For example, in the mid-1640s, the Royal Society began as:

 

“a meeting of natural philosophers to discuss promoting knowledge of the natural world through observation and experiment”, i.e. science. The Society met weekly to witness experiments and discuss what we would now call scientific topics. The first Curator of Experiments was Robert Hooke.

– from The History of the Royal Society

 

Royal Society Coat of Arms
The Royal Society of London for Improving Natural Knowledge.

(photo credit: Royal Society)

(Although one wonders why they met incognito)

Indeed, as discussed in “Science 2.0/Brainstorming” by the originators of OpenWetWare (an open-source science-notebook software designed to foster open innovation), new search and aggregation tools are making it easier to find, contribute, and share information with interested individuals. This paradigm is the basis for the shift from Science 1.0 to Science 2.0. Science 2.0 attempts to remedy current drawbacks which hinder rapid and open scientific collaboration and discourse, including:

  • Slow time frame of current publishing methods: reviews can take years to fashion, leading to outdated material
  • Level of information dissemination is currently one-dimensional: peer review, highly polished work, conferences
  • Current publishing does not encourage open feedback and review
  • Published articles edited for print do not take advantage of new web-based features, including tagging, search-engine features, interactive multimedia, and hyperlinks
  • Published data and methodology are often incomplete
  • Published data are not available in formats readily accessible across platforms: gene lists are now mandated to be supplied as files, but other data do not have to be supplied in file format


 

Curation in the Sciences: View from Scientific Content Curators Larry H. Bernstein, MD, FCAP, Dr. Justin D. Pearlman, MD, PhD, FACC and Dr. Aviva Lev-Ari, PhD, RN

Curation is an active filtering of the web’s immense amount of relevant and irrelevant content, including the peer-reviewed literature found by such means. As a result, content may be disruptive. However, in doing good curation, one does more than simply assign value by presentation of creative work in any category. Great curators comment and share experience across content, authors and themes. Great curators may see patterns others don’t, or may challenge or debate complex and apparently conflicting points of view. Answers to specifically focused questions come from the hard work of many in laboratory settings creatively establishing answers to definitive questions, each a part of the larger knowledge-base of reference. There are those rare “Einsteins” who imagine a whole universe, unlike the three blind men of the Sufi tale: one held the tail, the other the trunk, the other the ear, and they all said “this is an elephant!”
In my reading, I learn that the optimal ratio of curation to creation may be as high as 90% curation to 10% creation. Creating content is expensive. Curation, by comparison, is much less expensive.

– Larry H. Bernstein, MD, FCAP

Curation is Uniquely Distinguished by the Historical Exploratory Ties that Bind –Larry H. Bernstein, MD, FCAP

The explosion of information by numerous media, hardcopy and electronic, written and video, has created difficulties tracking topics and tying together relevant but separated discoveries, ideas, and potential applications. Some methods to help assimilate diverse sources of knowledge include a content expert preparing a textbook summary, a panel of experts leading a discussion or think tank, and conventions moderating presentations by researchers. Each of those methods has value and an audience, but they also have limitations, particularly with respect to timeliness and pushing the edge. In the electronic data age, there is a need for further innovation, to make synthesis, stimulating associations, synergy and contrasts available to audiences in a more timely and less formal manner. Hence the birth of curation. Key components of curation include expert identification of data, ideas and innovations of interest, expert interpretation of the original research results, integration with context, digesting, highlighting, correlating and presenting in novel light.

– Justin D. Pearlman, MD, PhD, FACC, from The Voice of Content Consultant on The Methodology of Curation in Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation, The Art of Scientific & Medical Curation

 

In Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison, Drs. Larry Bernstein and Aviva Lev-Ari liken the medical and scientific curation process to the curation of musical works into a thematic program:

 

  • Work of Original Music Curation and Performance
  • Music Review and Critique as a Curation
  • Work of Original Expression: the methodology of Curation in the context of Medical Research Findings, Exposition of Synthesis, and Interpretation of the significance of the results to Clinical Care

… leading to new, curated, and collaborative works by networks of experts to generate (in this case) ebooks on the most significant trends and interpretations of scientific knowledge as it relates to medical practice.

 

In Summary: How Scientific Content Curation Can Help

 

Given the aforementioned problems of:

  I. the complex and rapid deluge of scientific information
  II. the need for a collaborative, open environment to produce transformative innovation
  III. the need for alternative ways to disseminate scientific findings

CURATION MAY OFFER SOLUTIONS

  I. Curation exists beyond the review: curation decreases the time for assessment of current trends, adding multiple insights and analyses WITH an underlying METHODOLOGY (discussed below) while NOT acting as mere reiteration or regurgitation
  II. Curation provides insights from the WHOLE scientific community on multiple WEB 2.0 platforms
  III. Curation makes use of new computational and Web-based tools to provide interoperability of data and reporting of findings (shown in Examples below)

 

Therefore a discussion is given on methodologies, definitions of best practices, and tools developed to assist the content curation community in this endeavor.

Methodology in Scientific Content Curation as Envisioned by Aviva Lev-Ari, PhD, RN

 

At Leaders in Pharmaceutical Business Intelligence, site owner and chief editor Aviva Lev-Ari, PhD, RN has been developing a strategy “for the facilitation of Global access to Biomedical knowledge rather than the access to sheer search results on Scientific subject matters in the Life Sciences and Medicine”. According to Aviva, “for the methodology to attain this complex goal it is to be dealing with popularization of ORIGINAL Scientific Research via Content Curation of Scientific Research Results by Experts, Authors, Writers using the critical thinking process of expert interpretation of the original research results.” The following post:

Cardiovascular Original Research: Cases in Methodology Design for Content Curation and Co-Curation

 

http://pharmaceuticalintelligence.com/2013/07/29/cardiovascular-original-research-cases-in-methodology-design-for-content-curation-and-co-curation/

demonstrates two examples of how content co-curation attempts to achieve this aim and develop networks of scientist and clinician curators to aid in the active discussion of scientific and medical findings, and how scientific content curation can serve as a means for critique, offering a “new architecture for knowledge”. Indeed, popular search engines such as Google and Yahoo, and even scientific search engines such as NCBI’s PubMed and the OVID search engine, rely on keywords and Boolean algorithms … which has created a need for more context-driven scientific search and discourse. (A minimal sketch of such a Boolean keyword query appears below.)
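For contrast, here is a minimal sketch of the kind of Boolean keyword query such engines execute, using Biopython’s Entrez interface to PubMed; the query string and email address are placeholders, while Entrez.esearch and Entrez.read are the library’s actual calls:

```python
# pip install biopython
from Bio import Entrez

Entrez.email = "curator@example.org"  # placeholder; NCBI asks for a contact address

# A classic Boolean keyword query: it matches terms, not meaning or context.
handle = Entrez.esearch(db="pubmed", term="vitamin D AND cancer prevention", retmax=5)
record = Entrez.read(handle)
handle.close()

print(record["IdList"])  # bare PubMed IDs; interpretation is left to the reader
```

The engine returns matching records; everything a curator adds (context, synthesis, critique) begins after this step.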

In Science and Curation: The New Practice of Web 2.0, Célya Gruson-Daniel (@HackYourPhd) states:

To address this need, human intermediaries, empowered by the participatory wave of web 2.0, naturally started narrowing down the information and providing an angle of analysis and some context. They are bloggers, regular Internet users or community managers – a new type of profession dedicated to the web 2.0. A new use of the web has emerged, through which the information, once produced, is collectively spread and filtered by Internet users who create hierarchies of information.

…where Célya considers curation an essential practice to manage open science and this new style of research.

As mentioned above in her article, Dr. Lev-Ari presents two examples of how content curation expanded thought and discussion, eventually leading to new ideas.

  1. Curator edifies content through an analytic process = NEW form of writing and organization leading to new interconnections of ideas = NEW INSIGHTS

Evidence: curation methodology leading to new insights for biomarkers

  2. Same as #1, but multiple players (experts) each bring unique insights, perspectives, and skills, yielding new research = NEW LINE of CRITICAL THINKING

Evidence: co-curation methodology among cardiovascular experts leading to the cardiovascular series ebooks

Life-cycle of Science 2

The Life Cycle of Science 2.0. Due to Web 2.0, new paradigms of scientific collaboration are rapidly emerging. Originally, scientific discovery was performed by individual laboratories or “scientific silos” where the main methods of communication were peer-reviewed publication, meeting presentations, and ultimately news outlets and multimedia. In the digital era, data was organized for literature search and biocurated databases. In the era of social media and Web 2.0, a group of scientifically and medically trained “curators” organize the piles of digitally generated data and fit the data into an organizational structure which can be shared, communicated, and analyzed in a holistic approach, launching new ideas due to changes in the organizational structure of data and data analytics.

 

The result, in this case, is a collaborative written work beyond the scope of a review. Currently, review articles are written by experts in the field and summarize the state of a research area. However, using collaborative, trusted networks of experts, the result is a real-time synopsis and analysis of the field with the goal in mind to

INCREASE THE SCIENTIFIC CURRENCY.

For a detailed description of the methodology please see Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation, The Art of Scientific & Medical Curation

 

In her paper, Curating e-Science Data, Maureen Pennock, from The British Library, emphasized the importance of using a diligent, validated, reproducible, and cost-effective methodology for curation by e-science communities over the Grid:

“The digital data deluge will have profound repercussions for the infrastructure of research and beyond. Data from a wide variety of new and existing sources will need to be annotated with metadata, then archived and curated so that both the data and the programmes used to transform the data can be reproduced for use in the future. The data represent a new foundation for new research, science, knowledge and discovery”

— JISC Senior Management Briefing Paper, The Data Deluge (2004)

 

As she states, proper data and content curation is important for:

  • Post-analysis
  • Data and research result reuse for new research
  • Validation
  • Preservation of data in newer formats to prolong life-cycle of research results

However she laments the lack of

  • Funding for such efforts
  • Training
  • Organizational support
  • Monitoring
  • Established procedures

 

Tatiana Aders wrote a nice article based on an interview with Microsoft’s Robert Scoble, in which he emphasized the need for curation in a world where “Twitter is the replacement of the Associated Press Wire Machine” and new technology platforms are knocking out old platforms at a rapid pace. He also notes that curation is a social art form whose primary concerns are understanding an audience and a niche.

Indeed, part of the reason the need for curation is unmet, writes Mark Carrigan, is academics’ lack of appreciation of the utility of tools such as Pinterest, Storify, and Pearltrees to effectively communicate and build collaborative networks.

And teacher Nancy White, in her article Understanding Content Curation on her blog Innovations in Education, shows how curation is an educational tool for students and teachers, demonstrating that students need to CONTEXTUALIZE what they collect to add enhanced value, using higher mental processes such as:

  • Knowledge
  • Comprehension
  • Application
  • Analysis
  • Synthesis
  • Evaluation

A GREAT table about the differences between Collecting and Curating by Nancy White at http://d20innovation.d20blogs.org/2012/07/07/understanding-content-curation/
University of Massachusetts Medical School has aggregated some useful curation tools at http://esciencelibrary.umassmed.edu/data_curation

Although many tools are related to biocuration and building databases, the common idea is curating data with indexing, analyses, and contextual value to provide an audience with the means to generate NETWORKS OF NEW IDEAS.

See here for a curation of how networks foster knowledge, by Erika Harrison on ScoopIt

(http://www.scoop.it/t/mobilizing-knowledge-through-complex-networks)

 

“Nowadays, any organization should employ network scientists/analysts who are able to map and analyze complex systems that are of importance to the organization (e.g. the organization itself, its activities, a country’s economic activities, transportation networks, research networks).”

– Andrea Carafa, insight from World Economic Forum New Champions 2012, “Power of Networks”

 

Creating Content Curation Communities: Breaking Down the Silos!

 

An article by Dr. Dana Rotman, “Facilitating Scientific Collaborations Through Content Curation Communities”, highlights how scientific information resources, traditionally created and maintained by paid professionals, are being crowdsourced to professionals and nonprofessionals in what she termed “content curation communities”: groups of professional and nonprofessional volunteers who create, curate, and maintain the various scientific database tools we use, such as Encyclopedia of Life, ChemSpider (for Slideshare see here), biowikipedia, etc. Although very useful and openly available, these projects create their own challenges, such as:

  • information integration (various types of data and formats)
  • social integration (marginalized by scientific communities, no funding, no recognition)

The authors set forth some ways to overcome these challenges of the content curation community, including:

  1. standardization in practices
  2. visualization to document contributions
  3. emphasizing role of information professionals in content curation communities
  4. maintaining quality control to increase respectability
  5. recognizing participation to professional communities
  6. proposing funding/national meeting – Data Intensive Collaboration in Science and Engineering Workshop

A few great presentations and papers from the 2012 DICOSE meeting are found below

Judith M. Brown, Robert Biddle, Stevenson Gossage, Jeff Wilson & Steven Greenspan. Collaboratively Analyzing Large Data Sets using Multitouch Surfaces. (PDF) NotesForBrown

 

Bill Howe, Cecilia Aragon, David Beck, Jeffrey P. Gardner, Ed Lazowska, Tanya McEwen. Supporting Data-Intensive Collaboration via Campus eScience Centers. (PDF) NotesForHowe

 

Kerk F. Kee & Larry D. Browning. Challenges of Scientist-Developers and Adopters of Existing Cyberinfrastructure Tools for Data-Intensive Collaboration, Computational Simulation, and Interdisciplinary Projects in Early e-Science in the U.S.. (PDF) NotesForKee

 

Ben Li. The mirages of big data. (PDF) NotesForLiReflectionsByBen

 

Betsy Rolland & Charlotte P. Lee. Post-Doctoral Researchers’ Use of Preexisting Data in Cancer Epidemiology Research. (PDF) NoteForRolland

 

Dana Rotman, Jennifer Preece, Derek Hansen & Kezia Procita. Facilitating scientific collaboration through content curation communities. (PDF) NotesForRotman

 

Nicholas M. Weber & Karen S. Baker. System Slack in Cyberinfrastructure Development: Mind the Gaps. (PDF) NotesForWeber

Indeed, the movement from Science 1.0 to Science 2.0 originated because these “silos” had frustrated many scientists, resulting in changes in the area of publishing (Open Access) but also in the communication of protocols (online protocol sites and notebooks like OpenWetWare and BioProtocols Online) and in data and material registries (CGAP and tumor banks). Some examples are given below.

Open Science Case Studies in Curation

1. Open Science Project from Digital Curation Center

This project looked at what motivates researchers to work in an open manner with regard to their data, results and protocols, and whether advantages are delivered by working in this way.

The case studies consider the benefits and barriers to using ‘open science’ methods, and were carried out between November 2009 and April 2010 and published in the report Open to All? Case studies of openness in research. The Appendices to the main report (pdf) include a literature review, a framework for characterizing openness, a list of examples, and the interview schedule and topics. Some of the case study participants kindly agreed to us publishing the transcripts. This zip archive contains transcripts of interviews with researchers in astronomy, bioinformatics, chemistry, and language technology.

 

see: Pennock, M. (2006). “Curating e-Science Data”. DCC Briefing Papers: Introduction to Curation. Edinburgh: Digital Curation Centre. Handle: 1842/3330. Available online: http://www.dcc.ac.uk/resources/briefing-papers/introduction-curation

 

2. cBio – cBio’s biological data curation group developed and operates using a methodology called CIMS, the Curation Information Management System. CIMS is a comprehensive curation and quality control process that efficiently extracts information from publications.

 

3. NIH Topic Maps – This website provides a database and web-based interface for searching and discovering the types of research awarded by the NIH. The database uses automated, computer generated categories from a statistical analysis known as topic modeling.

 

4. SciKnowMine (USC) – We propose to create a framework to support biocuration called SciKnowMine (after ‘Scientific Knowledge Mine’), cyberinfrastructure that supports biocuration through the automated mining of text, images, and other amenable media at the scale of the entire literature.

 

5. OpenWetWare – OpenWetWare is an effort to promote the sharing of information, know-how, and wisdom among researchers and groups who are working in biology & biological engineering. If you would like edit access, would be interested in helping out, or want your lab website hosted on OpenWetWare, please join us. OpenWetWare is managed by the BioBricks Foundation. They also have a wiki about Science 2.0.

6. LabTrove: a lightweight, web-based laboratory “blog” as a route towards a marked-up record of work in a bioscience research laboratory. The authors of a PLOS ONE article, from the University of Southampton, report the development of an open, scientific lab notebook using a blogging strategy to share information.

7. OpenScience Project – The OpenScience project is dedicated to writing and releasing free and Open Source scientific software. We are a group of scientists, mathematicians and engineers who want to encourage a collaborative environment in which science can be pursued by anyone who is inspired to discover something new about the natural world.

8. Open Science Grid is a multi-disciplinary partnership to federate local, regional, community and national cyberinfrastructures to meet the needs of research and academic communities at all scales.

 

9. Some ongoing biomedical knowledge (curation) projects at ISI

IICurate
This project is concerned with developing a curation and documentation system for information integration in collaboration with the II Group at ISI as part of the BIRN.

BioScholar
Its primary purpose is to provide software for experimental biomedical scientists that would permit a single scientific worker (at the level of a graduate student or postdoctoral worker) to design, construct and manage a shared knowledge repository for a research group derived from a local store of PDF files. This project is funded by NIGMS from 2008-2012 (RO1-GM083871).

10. Tools useful for scientific content curation

 

Research Analytic and Curation Tools from University of Queensland

 

Thomson Reuters information curation services for pharma industry

 

Microblogs as a way to communicate information about HPV infection among clinicians and patients; the use of the Chinese microblog Sina Weibo as a communication tool

 

VIVO for scientific communities – To connect information about research activities across institutions and make it available to others, while taking into account smaller players in the research landscape and addressing their need for specific information (for example, by providing non-conventional research objects), many countries use the open-source software VIVO, which provides research information as linked open data (LOD). So-called VIVO harvesters collect research information that is freely available on the web and convert the collected data to conform with LOD standards. The VIVO ontology builds on prevalent LOD namespaces and can be expanded to meet the needs of the specialist community concerned.
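
Because VIVO publishes profiles as RDF, research information can be consumed programmatically. Here is a minimal sketch using the Python rdflib library; the profile URL is a hypothetical placeholder, since real VIVO installations expose institution-specific addresses:

```python
# Minimal sketch of consuming VIVO linked open data with rdflib.
# The profile URL is a hypothetical placeholder: real VIVO sites
# publish per-person RDF at institution-specific addresses.
from rdflib import Graph, RDFS

g = Graph()
g.parse("https://vivo.example.edu/individual/n1234.rdf", format="xml")

# Print the human-readable label of every resource in the profile.
for resource, label in g.subject_objects(RDFS.label):
    print(resource, "->", label)
```

From here, the same graph could be queried with SPARQL for publications, grants, or co-author links.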

 

 

11. Examples of scientific curation in different areas of Science/Pharma/Biotech/Education

 

From Science 2.0 to Pharma 3.0 Q&A with Hervé Basset

http://digimind.com/blog/experts/pharma-3-0/

Hervé Basset, a specialist librarian in the pharmaceutical industry and owner of the blog “Science Intelligence“, talks about the inspiration behind his recent book entitled “From Science 2.0 to Pharma 3.0″, published by Chandos Publishing and available on Amazon, and about how health-care companies need a social media strategy to communicate with and convince the health-care consumer, not just the practitioner.

 

Thomson Reuters and NuMedii Launch Ground-Breaking Initiative to Identify Drugs for Repurposing. Companies leverage content, Big Data analytics and expertise to improve success of drug discovery

 

Content Curation as a Context for Teaching and Learning in Science

 

#OZeLIVE Feb2014

http://www.youtube.com/watch?v=Ty-ugUA4az0

Creative Commons license

 

DigCCur: a graduate-level program initiated by the University of North Carolina to train future digital curators in science and other subjects

 

Syracuse University offers a program in eScience and digital curation

 

Curation Tips from TED talks and tech experts

Steven Rosenbaum from Curation Nation

http://www.youtube.com/watch?v=HpncJd1v1k4

 

Pawan Deshpande from Curata on how content curation communities evolve and what makes for good content curation:

http://www.youtube.com/watch?v=QENhIU9YZyA

 

How the Internet of Things is Promoting the Curation Effort

Update by Stephen J. Williams, PhD 3/01/19

Up till now, curation efforts like wikis (Wikipedia, Wikimedicine, WormBase, GenBank, etc.) have been supported by a largely voluntary army of citizens, scientists, and data enthusiasts.  I am sure all have seen the requests for donations to help keep Wikipedia and its related projects up and running.  One of the more obscure sister projects of Wikipedia, Wikidata, wants to curate and represent all information in a form in which machines and humans alike can converse.  An army of roughly 4 million registered contributors creates these Wiki entries and maintains the databases.

Enter the Age of the Personal Digital Assistants (Hellooo Alexa!)

In a March 2019 WIRED article, “Encyclopedia Automata: Where Alexa Gets Its Information,” senior WIRED writer Tom Simonite reports on the need for new types of data structures, on why curated databases are so important to the new fields of AI, and on how they enable personal digital assistants like Alexa or Google Assistant to decipher the user’s meaning.

As Mr. Simonite noted, many of our libraries of knowledge are encoded in an “ancient technology largely opaque to machines: prose.”  Search engines like Google have no problem with a question asked in prose, as they only have to find relevant links to pages. Yet this is a problem for Google Assistant, for instance, because machines can’t quickly extract meaning from the internet’s mess of “predicates, complements, sentences, and paragraphs. It requires a guide.”

Enter Wikidata.  According to founder Denny Vrandecic:

“Language depends on knowing a lot of common sense, which computers don’t have access to.”

A Wikidata entry (of which there are about 60 million) codes every concept and item with a numeric identifier, its QID. These codes are integrated with tags (like the hashtags you use on Twitter or the tags used in WordPress for search engine optimization) so that computers can recognize patterns and relations among the codes.
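
Wikidata exposes each entry as machine-readable JSON keyed by its QID, so a short script can look one up directly. A minimal sketch, fetching the well-known entry Q42 (Douglas Adams) from Wikidata’s public Special:EntityData endpoint:

```python
# Fetch one Wikidata item by its QID through the public
# Special:EntityData endpoint and inspect how concepts are keyed by
# opaque identifiers rather than by language-specific prose.
import requests

qid = "Q42"  # Douglas Adams, a canonical example entry
url = f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"
entity = requests.get(url, timeout=10).json()["entities"][qid]

print(entity["labels"]["en"]["value"])  # the English label
# P31 ("instance of") claims point at other QIDs (here Q5, "human"),
# so a machine can follow the graph without parsing sentences.
for claim in entity["claims"].get("P31", []):
    print("P31 ->", claim["mainsnak"]["datavalue"]["value"]["id"])
```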

Human entry into these databases is now critical, as we add new facts and, in particular, meaning to each of these items.  Otherwise machines have trouble deciphering our meaning, as with Apple’s Siri, whose users have complained of dumb algorithms misinterpreting their requests.

The knowledge of future machines could be shaped by you and me, not just tech companies and PhDs.

But this effort needs money

Wikimedia’s executive director, Katherine Maher, has prodded and cajoled these megacorporations for tapping the free resources of the Wikis.  In response, Amazon and Facebook have donated millions to the Wikimedia projects, and Google recently gave $3.1 million in donations.

 

Future postings on the relevance and application of scientific curation will include:

Using Scientific Content Curation as a Method for Validation and Biocuration

 

Using Scientific Content Curation as a Method for Open Innovation

 

Other posts on this site related to Content Curation and Methodology include:

The growing importance of content curation

Data Curation is for Big Data what Data Integration is for Small Data

6 Steps to More Effective Content Curation

Stem Cells and Cardiac Repair: Content Curation & Scientific Reporting

Cancer Research: Curations and Reporting

Cardiovascular Diseases and Pharmacological Therapy: Curations

Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation

The Art of Scientific & Medical Curation

Exploring the Impact of Content Curation on Business Goals in 2013

Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison

conceived: NEW Definition for Co-Curation in Medical Research

The Young Surgeon and The Retired Pathologist: On Science, Medicine and HealthCare Policy – The Best Writers Among the WRITERS

Reconstructed Science Communication for Open Access Online Scientific Curation

 

 

Read Full Post »

USPTO Guidance On Patentable Subject Matter

Curator and Reporter: Larry H Bernstein, MD, FCAP

LH Bernstein

Revised 4 July, 2014

http://pharmaceuticalintelligence.com/2014/07/03/uspto-guidance-on-patentable-subject-matter

 

I came across a few recent articles on the subject of US Patent Office guidance on patentability, as well as on Supreme Court rulings on claims. I filed several patents on clinical laboratory methods early in my career upon the recommendation of my brother-in-law.  Years later, now that both my brother-in-law and my patent attorney are gone, I look back and ask what I have learned, more than $100,000 later, with many trips to the USPTO, opportunities not taken, and a one-year provisional patent behind me.

My conclusion is

(1) that patents are for the protection of the innovator, who might realize legal protection, but the cost and time investment can well exceed the cost of starting up and building a small enterprise, which would be the next step.

(2) The other thing to consider is the capability of the lawyer or firm that represents you.  A patent that is well done can be expected to take 5-7 years to go through with due diligence.   I would not expect it to be done well by a university with many other competing demands. I might be wrong in this respect, as the climate has changed, and research universities have sprouted engines for change.  Experienced and productive faculty are encouraged or allowed to form their own such entities.

(3) The emergence of Big Data, computational biology, and very large data warehouses for data use and integration has changed the landscape. The resources required to pursue research along these lines are quite beyond an individual’s sole capacity without outside funding.  In addition, the newly designated first-to-publish requirement has muddied the water.

Of course, one can propose without anything published in the public domain. That makes it possible for corporate entities to file thousands of patents, whether there is actual validation or not at the time of filing.  It would be quite a trying experience for anyone to pursue in the USPTO without some litigation over ownership of patent rights. At this stage of technology development, I have come to realize that the organization of research, peer review, and archiving of data is still at a stage where some of the best systems available for storing and accessing data still come considerably short of what is needed for the most complex tasks, even though improvements have come at an exponential pace.

I shall not comment on the contested views held by physicists, chemists, biologists, and economists over the completeness of guiding theories strongly held.  Only history will tell.  Beliefs can hold a strong sway, and have many times held us back.

I am not an expert on legal matters, but it is incomprehensible to me that issues concerning technology innovation can be adjudicated in the Supreme Court, as has occurred in recent years. I have postgraduate degrees in Medicine and Developmental Anatomy, post-medical training in pathology and laboratory medicine, and experience in analytical and research biochemistry.  Cases of this type exceed the competencies expected of the Supreme Court, or even of the Federal District Courts, where they appear with increasing frequency, as has occurred with respect to the development and application of the human genome.

I’m not sure that the developments can be resolved for the public good without a fuller development of an open-access system of publishing. Now I present some recent publications about, or published by, the USPTO.


USPTO Guidance On Patentable Subject Matter: Impediment to Biotech Innovation

Joanna T. Brougher, David A. Fazzolare. J Commercial Biotechnology 2014; 20(3).


Abstract: In June 2013, the U.S. Supreme Court issued a unanimous decision upending more than three decades’ worth of established patent practice when it ruled that isolated gene sequences are no longer patentable subject matter under 35 U.S.C. Section 101. While many practitioners in the field believed that the USPTO would interpret the decision narrowly, the USPTO actually expanded the scope of the decision when it issued its guidelines for determining whether an invention satisfies Section 101.

The guidelines were met with intense backlash with many arguing that they unnecessarily expanded the scope of the Supreme Court cases in a way that could unduly restrict the scope of patentable subject matter, weaken the U.S. patent system, and create a disincentive to innovation. By undermining patentable subject matter in this way, the guidelines may end up harming not only the companies that patent medical innovations, but also the patients who need medical care.  This article examines the guidelines and their impact on various technologies.

Keywords:   patent, patentable subject matter, Myriad, Mayo, USPTO guidelines

Full Text: PDF

References

35 U.S.C. Section 101 states “Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.”

Prometheus Laboratories, Inc. v. Mayo Collaborative Services, 566 U.S. ___ (2012)

Association for Molecular Pathology et al., v. Myriad Genetics, Inc., 569 U.S. ___ (2013).

Parke-Davis & Co. v. H.K. Mulford Co., 189 F. 95, 103 (C.C.S.D.N.Y. 1911)

USPTO. Guidance For Determining Subject Matter Eligibility Of Claims Reciting Or Involving Laws of Nature, Natural Phenomena, & Natural Products.

http://www.uspto.gov/patents/law/exam/myriad-mayo_guidance.pdf

Funk Brothers Seed Co. v. Kalo Inoculant Co., 333 U.S. 127, 131 (1948)


Courtney C. Brinckerhoff, “The New USPTO Patent Eligibility Rejections Under Section 101.” PharmaPatentsBlog, published May 6, 2014, accessed http://www.pharmapatentsblog.com/2014/05/06/the-new-patent-eligibility-rejections-section-101/


DOI: http://dx.doi.org/10.5912/jcb664

 

Science 4 July 2014; 345 (6192): pp. 14-15  DOI: http://dx.doi.org/10.1126/science.345.6192.14
  • IN DEPTH

INTELLECTUAL PROPERTY

Biotech feels a chill from changing U.S. patent rules

A 2013 Supreme Court decision that barred human gene patents is scrambling patenting policies.


A year after the U.S. Supreme Court issued a landmark ruling that human genes cannot be patented, the biotech industry is struggling to adapt to a landscape in which inventions derived from nature are increasingly hard to patent. It is also pushing back against follow-on policies proposed by the U.S. Patent and Trademark Office (USPTO) to guide examiners deciding whether an invention is too close to a natural product to deserve patent protection. Those policies reach far beyond what the high court intended, biotech representatives say.

“Everything we took for granted a few years ago is now changing, and it’s generating a bit of a scramble,” says patent attorney Damian Kotsis of Harness Dickey in Troy, Michigan, one of more than 15,000 people who gathered here last week for the Biotechnology Industry Organization’s (BIO’s) International Convention.

At the meeting, attorneys and executives fretted over the fate of patent applications for inventions involving naturally occurring products—including chemical compounds, antibodies, seeds, and vaccines—and traded stories of recent, unexpected rejections by USPTO. Industry leaders warned that the uncertainty could chill efforts to commercialize scientific discoveries made at universities and companies. Some plan to appeal the rejections in federal court.

USPTO officials, meanwhile, implored attendees to send them suggestions on how to clarify and improve its new policies on patenting natural products, and even announced that they were extending the deadline for public comment by a month. “Each and every one of you in this room has a moral duty … to provide written comments to the PTO,” patent lawyer and former USPTO Deputy Director Teresa Stanek Rea told one audience.

At the heart of the shake-up are two Supreme Court decisions: the ruling last year in Association for Molecular Pathology v. Myriad Genetics Inc. that human genes cannot be patented because they occur naturally (Science, 21 June 2013, p. 1387); and the 2012 Mayo v. Prometheus decision, which invalidated a patent on a method of measuring blood metabolites to determine drug doses because it relied on a “law of nature” (Science, 12 July 2013, p. 137).

Myriad and Mayo are already having a noticeable impact on patent decisions, according to a study released here. It examined about 1000 patent applications that included claims linked to natural products or laws of nature that USPTO reviewed between April 2011 and March 2014. Overall, examiners rejected about 40%; Myriad was the basis for rejecting about 23% of the applications, and Mayo about 35%, with some overlap, the authors concluded. That rejection rate would have been in the single digits just 5 years ago, asserted Hans Sauer, BIO’s intellectual property counsel, at a press conference. (There are no historical numbers for comparison.) The study was conducted by the news service Bloomberg BNA and the law firm Robins, Kaplan, Miller & Ciseri in Minneapolis, Minnesota.
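
A quick back-of-the-envelope check shows how large that overlap must be, assuming the reported percentages all refer to the same pool of reviewed applications:

```python
# Back-of-the-envelope consistency check, assuming every percentage
# refers to the same pool of roughly 1,000 reviewed applications:
# by inclusion-exclusion, the Myriad- and Mayo-based rejections must
# overlap by at least 23% + 35% - 40% = 18% of applications.
myriad, mayo, rejected_overall = 0.23, 0.35, 0.40
min_overlap = myriad + mayo - rejected_overall
print(f"minimum overlap: {min_overlap:.0%}")  # -> minimum overlap: 18%
```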

USPTO is extending the decisions far beyond diagnostics and DNA?

The numbers suggest USPTO is extending the decisions far beyond diagnostics and DNA, attorneys say. Harness Dickey’s Kotsis, for example, says a client recently tried to patent a plant extract with therapeutic properties; it was different from anything in nature, Kotsis argued, because the inventor had altered the relative concentrations of key compounds to enhance its effect. Nope, decided USPTO, too close to nature.

In March, USPTO released draft guidance designed to help its examiners decide such questions, setting out 12 factors for them to weigh. For example, if an examiner deems a product “markedly different in structure” from anything in nature, that counts in its favor. But if it has a “high level of generality,” it gets dinged.

The draft has drawn extensive criticism. “I don’t think I’ve ever seen anything as complicated as this,” says Kevin Bastian, a patent attorney at Kilpatrick Townsend & Stockton in San Francisco, California. “I just can’t believe that this will be the standard.”

USPTO officials appear eager to fine-tune the draft guidance, but patent experts fear the Supreme Court decisions have made it hard to draw clear lines. “The Myriad decision is hopelessly contradictory and completely incoherent,” says Dan Burk, a law professor at the University of California, Irvine. “We know you can’t patent genetic sequences,” he adds, but “we don’t really know why.”

Get creative in using Draft Guidelines!

For now, Kotsis says, applicants will have to get creative to reduce the chance of rejection. Rather than claim protection for a plant extract itself, for instance, an inventor could instead patent the steps for using it to treat patients. Other biotech attorneys may try to narrow their patent claims. But there’s a downside to that strategy, they note: Narrower patents can be harder to protect from infringement, making them less attractive to investors. Others plan to wait out the storm, predicting USPTO will ultimately rethink its guidance and ease the way for new patents.

 

Public comment period extended

USPTO has extended the deadline for public comment to 31 July, with no schedule for issuing final language. Regardless of the outcome, however, Stanek Rea warned a crowd of riled-up attorneys that, in the world of biopatents, “the easy days are gone.”

 

United States Patent and Trademark Office

Today we published and made electronically available a new edition of the Manual of Patent Examining Procedure (MPEP): http://www.uspto.gov/web/offices/pac/mpep/index.html (see the Summary of Changes).

PDF: Title Page
PDF: Foreword
PDF: Introduction
PDF: Table of Contents
PDF: Chapter 600 – Parts, Form, and Content of Application
PDF: Chapter 700 – Examination of Applications
PDF: Chapter 800 – Restriction in Applications Filed Under 35 U.S.C. 111; Double Patenting
PDF: Chapter 900 – Prior Art, Classification, and Search
PDF: Chapter 1000 – Matters Decided by Various U.S. Patent and Trademark Office Officials
PDF: Chapter 1100 – Statutory Invention Registration (SIR); Pre-Grant Publication (PGPub) and Preissuance Submissions
PDF: Chapter 1200 – Appeal
PDF: Chapter 1300 – Allowance and Issue
PDF: Appendix L – Patent Laws
PDF: Appendix R – Patent Rules
PDF: Appendix P – Paris Convention
PDF: Subject Matter Index
PDF: Zipped version of the MPEP current revision in PDF format.

Manual of Patent Examining Procedure (MPEP), Ninth Edition, March 2014

The USPTO continues to offer an online discussion tool for commenting on selected chapters of the Manual. To participate in the discussion and to contribute your ideas go to:
http://uspto-mpep.ideascale.com.


Note: For current fees, refer to the Current USPTO Fee Schedule.
Consolidated Laws – The patent laws in effect as of May 15, 2014.
Consolidated Rules – The patent rules in effect as of May 15, 2014.
MPEP Archives (1948 – 2012)
Current MPEP: Searchable MPEP

The documents updated in the Ninth Edition of the MPEP, dated March 2014, include changes that became effective in November 2013 or earlier.
All of the documents have been updated for the Ninth Edition except Chapters 800, 900, 1000, 1300, 1700, 1800, 1900, 2000, 2300, 2400, 2500, and Appendix P.
More information about the changes and updates is available from the “Blue Page – Introduction” of the Searchable MPEP or from the “Summary of Changes” link to the HTML and PDF versions provided below.

Discuss the Manual of Patent Examining Procedure (MPEP)

Welcome to the MPEP discussion tool!

We have received many thoughtful ideas on Chapters 100-600 and 1800 of the MPEP as well as on how to improve the discussion site. Each and every idea submitted by you, the participants in this conversation, has been carefully reviewed by the Office, and many of these ideas have been implemented in the August 2012 revision of the MPEP and many will be implemented in future revisions of the MPEP. The August 2012 revision is the first version provided to the public in a web based searchable format. The new search tool is available at http://mpep.uspto.gov. We would like to thank everyone for participating in the discussion of the MPEP.

We have some great news! Chapters 1300, 1500, 1600 and 2400 of the MPEP are now available for discussion. Please submit any ideas and comments you may have on these chapters. Also, don’t forget to vote on ideas and comments submitted by other users. As before, our editorial staff will periodically be posting proposed new material for you to respond to, and in some cases will post responses to some of the submitted ideas and comments.

Recently, we have received several comments concerning the Leahy-Smith America Invents Act (AIA). Please note that comments regarding the implementation of the AIA should be submitted to the USPTO via email to aia_implementation@uspto.gov or via postal mail, as indicated at the America Invents Act Web site. Additional information regarding the AIA is available at www.uspto.gov/americainventsact. We have also received several comments suggesting policy changes, which have been routed to the appropriate offices for consideration. We really appreciate your thinking and recommendations!

FDA Guidance for Industry: Electronic Source Data in Clinical Investigations


The FDA published its new Guidance for Industry (GfI), “Electronic Source Data in Clinical Investigations,” in September 2013. The Guidance defines the expectations of the FDA concerning electronic source data generated in the context of clinical trials. Find out more about this Guidance:
http://www.gmp-compliance.org/enews_4288_FDA%20Guidance%20for%20Industry%3A%20Electronic%20Source%20Data%20in%20Clinical%20Investigations_8534,8457,8366,8308,Z-COVM_n.html

After more than 5 years and two draft versions, the final version of the Guidance for Industry (GfI) “Electronic Source Data in Clinical Investigations” was published in September 2013. This new FDA Guidance defines the FDA’s expectations for sponsors, CROs, investigators and other persons involved in the capture, review and retention of electronic source data generated in the context of FDA-regulated clinical trials. In an effort to encourage the modernization and increased efficiency of processes in clinical trials, the FDA clearly supports the capture of electronic source data and emphasizes the agency’s intention to support activities aimed at ensuring the reliability, quality, integrity and traceability of this source data, from its electronic source to the electronic submission of the data in the context of an authorization procedure.

The Guidance addresses aspects such as data capture, data review and record retention. When the computerized systems used in clinical trials are described, the FDA recommends that the description focus not only on the intended use of the system, but also on data protection measures and the flow of data across system components and interfaces. In practice, the pharmaceutical industry needs to meet significant requirements regarding the organisation, planning, specification and verification of computerized systems in the field of clinical trials. The FDA also mentions in the Guidance that it does not intend to apply 21 CFR Part 11 to electronic health records (EHR).

Author: Oliver Herrmann, Q-Infiity
Source: http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM328691.pdf
Webinar: https://collaboration.fda.gov/p89r92dh8wc

 

Read Full Post »

« Newer Posts - Older Posts »