
Posts Tagged ‘open innovation’


 

Yay! Bloomberg View Seems to Be On the Side of the Lowly Scientist!

 

Reporter: Stephen J. Williams, Ph.D.

Justin Fox at Bloomberg View has just published an article near and dear to the hearts of #openaccess scientists and of those of us at @Pharma_BI and @MozillaScience who feel strongly about #openscience, #opendata, and the movement to make scientific discourse freely accessible.

His article, “Academic Publishing Can’t Remain Such a Great Business,” discusses the history of academic publishing and how the consolidation of smaller publishers into large scientific publishing houses has produced a monopoly-like environment in which journal subscription prices keep rising. He also discusses how the open access movement is challenging this model and may one day replace the big publishing houses.

A few tidbits from his article:

Publishers of academic journals have a great thing going. They generally don’t pay for the articles they publish, or for the primary editing and peer reviewing essential to preparing them for publication (they do fork over some money for copy editing). Most of this gratis labor is performed by employees of academic institutions. Those institutions, along with government agencies and foundations, also fund all the research that these journal articles are based upon.

Yet the journal publishers are able to get authors to sign over copyright to this content, and sell it in the form of subscriptions to university libraries. Most journals are now delivered in electronic form, which you think would cut the cost, but no, the price has been going up and up:

 

This isn’t just inflation at work: in 1994, journal subscriptions accounted for 51 percent of all library spending on information resources. In 2012 it was 69 percent.

Who exactly is getting that money? The largest academic publisher is Elsevier, which is also the biggest, most profitable division of RELX, the Anglo-Dutch company that was known until February as Reed Elsevier.

 

RELX reports results in British pounds; I converted to dollars in part because the biggest piece of the company’s revenue comes from the U.S. And yes, those are pretty great operating-profit margins: 33 percent in 2014, 39 percent in 2013. The next biggest academic publisher is Springer Nature, which is closely held (by German publisher Holtzbrinck and U.K. private-equity firm BC Partners) but reportedly has annual revenue of about $1.75 billion. Other biggies that are part of publicly traded companies include Wiley-Blackwell, a division of John Wiley & Sons; Wolters Kluwer Health, a division of Wolters Kluwer; and Taylor & Francis, a division of Informa.

And gives a brief history of academic publishing:

The history here is that most early scholarly journals were the work of nonprofit scientific societies. The goal was to disseminate research as widely as possible, not to make money — a key reason why nobody involved got paid. After World War II, the explosion in both the production of and demand for academic research outstripped the capabilities of the scientific societies, and commercial publishers stepped into the breach. At a time when journals had to be printed and shipped all over the world, this made perfect sense.

Once it became possible to effortlessly copy and disseminate digital files, though, the economics changed. For many content producers, digital copying is a threat to their livelihoods. As Peter Suber, the director of Harvard University’s Office for Scholarly Communication, puts it in his wonderful little book, “Open Access”:

And while NIH Tried To Force These Houses To Accept Open Access:

About a decade ago, the universities and funding agencies began fighting back. The National Institutes of Health in the U.S., the world’s biggest funder of medical research, began requiring in 2008 that all recipients of its grants submit electronic versions of their final peer-reviewed manuscripts when they are accepted for publication in journals, to be posted a year later on the NIH’s open-access PubMed depository. Publishers grumbled, but didn’t want to turn down the articles.

Big publishers are making $ either by charging as much as they can or by focusing on new customers and services

For the big publishers, meanwhile, the choice is between positioning themselves for the open-access future or maximizing current returns. In its most recent annual report, RELX leans toward the latter while nodding toward the former:

Over the past 15 years alternative payment models for the dissemination of research such as “author-pays” or “author’s funder-pays” have emerged. While it is expected that paid subscription will remain the primary distribution model, Elsevier has long invested in alternative business models to address the needs of customers and researchers.

Elsevier’s extra services can add new avenues of revenue

https://www.elsevier.com/social-sciences/business-and-management

https://www.elsevier.com/rd-solutions

but they may be seeing the light on Open Access (possibly due to online advocacy, an army of scientific curators, and online scientific communities):

Elsevier’s Mendeley and Academia.edu – How We Distribute Scientific Research: A Case in Advocacy for Open Access Journals

SAME SCIENTIFIC IMPACT: Scientific Publishing – Open Journals vs. Subscription-based

e-Recognition via Friction-free Collaboration over the Internet: “Open Access to Curation of Scientific Research”

Indeed, we recently posted an authored paper, “A Patient’s Perspective: On Open Heart Surgery from Diagnosis and Intervention to Recovery,” free of charge, letting the scientific community freely peruse and comment on it. The approach was well received by both the author and the community as a way to share academic discourse without the enormous fees, especially for opinion papers in which a rigorous peer review may not be necessary.

But it was very nice to see a major news outlet like Bloomberg View understand the lowly scientist’s aggravations.

Thanks Bloomberg!

 

 

 

 

 


Read Full Post »


Podcast Review: Quiet Innovation Podcast on Obtaining $ for Your Startup

Reporter: Stephen J. Williams, Ph.D.

 

I wanted to highlight an interesting interview (What it Really Takes to Get Money for Your Startup) with David S. Rose, serial entrepreneur and Founder and CEO of Gust.com, a global collaboration platform for early-stage angel investing that connects hundreds of thousands of entrepreneurs and investors in over 75 countries. The interview, with David and CFA John P. Gavin, was broadcast on the podcast Quiet Innovation (via Podcast Addict, @Podcast_Addict). I had tweeted it out on my Twitter account below (see the link)

 

… but I will include some notes from the podcast here. In addition, you can access the podcast directly using the links below:

QI-013 David Rose Interview_01.mp3

Or download the mp3

http://t.co/XPjLrJQG7O

This post is a followup from yesterday’s post Protecting Your Biotech IP and Market Strategy: Notes from Life Sciences Collaborative 2015 Meeting.

Some highlights from the podcast

  • IDEAS DON’T GET FUNDED

David Rose discusses how there are hundreds of thousands of new ideas, some of which are great and some of which are not… having an idea may be an initial step, but for an investor to even consider your idea, it is more important to have

  • EXECUTION

This is what David feels is critical for investors, such as himself, in deciding whether your idea is investable. A startup needs to show it can accomplish its goal and show at least a rudimentary example of this, whether that is putting up a website or writing a design blueprint for a new widget. He says starting a business today (either tech or manufacturing) requires a lot less capital than it did years ago (unless you are starting a biotech). He gives an example of internet startups he founded in the ’90s versus today… in the ’90s you needed $2 million; today you can do it for $2,000. But the ability to show that you can EXECUTE the plan is CRITICAL.

David cites three aspects that are important to investors:

  1. Integrity – Be humble about yourself. He says there are way too many people who claim ‘our idea is the best’ or ‘we do it better than anyone’ or ‘we are the first to have this idea’. As he notes, Jeff Bezos of Amazon was not the first to have the idea of selling books over the internet; he just EXECUTED the plan extremely well.
  2. Passion – Investors need to see that you are ‘all in’ and committed. A specific example is angels asking how much of your own money you have put into your idea (skin in the game).
  3. Experience – David says there are TWO important types of experience in developing startups, and both are valid: how many of your startups have succeeded and how many have failed. He says investors actually like it if you have failed, because failures are learning experiences, just as valuable, if not more so, than having startups always succeed. Investors need to know how you deal with adversity. All three points go back to execution.

David Rose gave some reading suggestions as well, including:

Lucky or Smart? Secrets to an Entrepreneurial Life by Bo Peabody – He highlights this book to help people understand that a startup entrepreneur should always hire someone smarter than themselves.

Derek Sivers’ post Ideas Are Just a Multiplier of Execution – where a great idea alone is worth $20, but a great idea plus execution is worth $20 million.

Eric Ries’s post The Lean Startup on his blog Startup Lessons Learned – be frugal (this gets back to what he said about not needing as much capital as you would think, i.e., Don’t Burn Through the Cash) and get metrics on your startup or idea (as long as you have the IP). He suggests taking out an ad to see what the interest is out there. You can measure the clicks from the ad and use that as a marketing tool for potential investors, i.e., Getting Feedback.

Some other posts on this site about Investing and Startups include:

Protecting Your Biotech IP and Market Strategy: Notes from Life Sciences Collaborative 2015 Meeting

THE BLOOMBERG INNOVATION INDEX: Country Rankings by Six Measures of the Capacity to Innovate as a Nation

Updated: Investing and Inventing: Is the Tango of Mars and Venus Still on

Sand Hill Angels

The Bioscience Crowdfunding Environment: The Bigger Better VC?

Technion-Cornell Innovation Institute in NYC: Postdocs keep exclusive license to their IP and take a fixed dollar amount of Equity if the researchers create a Spinoff company

Tycho Brahe, where art thou? Today’s Renaissance of the Self-Funded Scientist!

 

 

Read Full Post »


Artificial Intelligence Versus the Scientist: Who Will Win?

Will DARPA Replace the Human Scientist: Not So Fast, My Friend!

Writer, Curator: Stephen J. Williams, Ph.D.

[Image: a scientist boxing with a computer]

An article in last month’s issue of Science by Jia You, “DARPA Sets Out to Automate Research” [1], gave a glimpse of how science could be conducted in the future: without scientists. The article focused on the U.S. Defense Advanced Research Projects Agency (DARPA) program called ‘Big Mechanism’, a $45 million effort to develop computer algorithms that read scientific journal papers with the ultimate goal of extracting enough information to design hypotheses and the next set of experiments,

all without human input.

The head of the project, artificial intelligence expert Paul Cohen, says the overall goal is to help scientists cope with the complexity of massive amounts of information. As Paul Cohen stated for the article:

“Just when we need to understand highly connected systems as systems, our research methods force us to focus on little parts.”

The Big Mechanism project aims to design computer algorithms that critically read journal articles, much as scientists do, to determine what the information contributes to the knowledge base and how.

As a proof of concept, DARPA is attempting to model Ras-mutation-driven cancers using previously published literature in three main steps:

  1. Natural Language Processing: Machines read literature on cancer pathways and convert information to computational semantics and meaning

One team is focused on extracting details on experimental procedures, mining certain phraseology to weigh a paper’s claims (for example, phrases like ‘we suggest’ or ‘suggests a role in’ might be considered weak, whereas ‘we prove’ or ‘provide evidence’ might mark an article as worthwhile to curate). Another team, led by a computational linguistics expert, will design systems to map the meanings of sentences. A minimal sketch of this kind of phrase-based triage appears after this list.

  2. Integrate each piece of knowledge into a computational model representing the Ras pathway in oncogenesis.
  3. Produce hypotheses and propose experiments, based on the knowledge base, which can be experimentally verified in the laboratory.
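To make the phrase-based triage in step 1 concrete, here is a minimal sketch; it is not DARPA’s actual system, and the cue lists and example sentences are hypothetical illustrations.

    # Minimal sketch of phrase-based triage (not DARPA's Big Mechanism code).
    # Cue lists and example sentences are hypothetical illustrations.
    import re

    WEAK_CUES = ("we suggest", "suggest a role in", "may be involved in")
    STRONG_CUES = ("we prove", "provide evidence", "we demonstrate")

    def score_sentence(sentence: str) -> int:
        """+1 for each assertive cue, -1 for each hedging cue found."""
        s = sentence.lower()
        return (sum(s.count(c) for c in STRONG_CUES)
                - sum(s.count(c) for c in WEAK_CUES))

    def triage(passage: str) -> float:
        """Average cue score per sentence; higher means more assertive claims."""
        sentences = [t for t in re.split(r"(?<=[.!?])\s+", passage.strip()) if t]
        return sum(score_sentence(t) for t in sentences) / len(sentences)

    if __name__ == "__main__":
        passage = ("We provide evidence that mutant KRAS activates RAF1. "
                   "These data suggest a role in resistance to EGFR inhibitors.")
        print(f"Assertiveness score: {triage(passage):+.2f}")

In practice such cue lists would be learned from annotated corpora rather than hand-coded, and this score would be only one feature among many in deciding whether an article is worth curating.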

The Human No Longer Needed? Not So Fast, My Friend!

The problems the DARPA research teams are encountering include:

  • Need for data verification
  • Text mining and curation strategies
  • Incomplete knowledge base (past, current and future)
  • Molecular biology does not necessarily “require causal inference” the way other fields do

Verification

Notice that this verification step (step 3) requires physical lab work, as do all other ‘omics strategies and computational biology projects. As with high-throughput microarray screens, verification is usually needed, either by qPCR or by validating genes of interest in a phenotypic (expression) system. In addition, there has been an ongoing issue surrounding the validity and reproducibility of some research studies and data.

See Importance of Funding Replication Studies: NIH on Credibility of Basic Biomedical Studies

Therefore, as DARPA attempts to recreate the Ras pathway from published literature and suggest new pathways/interactions, it will be necessary to experimentally validate certain points (protein interactions or modification events, signaling events) in order to validate the computer model.

Text-Mining and Curation Strategies

The Big Mechanism project is starting very small, which reflects some of the challenges of scale in this project. Researchers were given only six paragraph-long passages and a rudimentary model of the Ras pathway in cancer and then asked to automate a text-mining strategy to extract as much useful information as possible. Unfortunately, this strategy could be fraught with issues frequently encountered in the biocuration community, namely:

Manual or automated curation of scientific literature?

Biocurators, the scientists who painstakingly sort through voluminous scientific journals to extract and then organize relevant data into accessible databases, have debated whether manual, automated, or a combination of both curation methods [2] achieves the highest accuracy for extracting the information needed to enter into a database. Abigail Cabunoc, a lead developer for the Ontario Institute for Cancer Research’s WormBase (a database of nematode genetics and biology) and Lead Developer at Mozilla Science Lab, noted on her blog, covering the lively debate on biocuration methodology at the Seventh International Biocuration Conference (#ISB2014), that the massive amounts of information will require a Herculean effort regardless of the methodology.

Although I will have a future post on the advantages/disadvantages and tools/methodologies of manual vs. automated curation, there is a great article on researchinformation.info, “Extracting More Information from Scientific Literature”; also see “The Methodology of Curation for Scientific Research Findings” and “Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison” for manual curation methodologies, and “A MOD(ern) perspective on literature curation” for a nice workflow paper on the International Society for Biocuration site.

The Big Mechanism team decided on a fully automated approach to text-mine their limited literature set for relevant information; however, they were able to extract only 40% of the information relevant to the given model from these six paragraphs. Although the investigators were happy with this percentage, most biocurators, whether using a manual or automated method, would consider 40% a low success rate. Biocurators, regardless of method, have reported the ability to extract 70-90% of relevant information from the whole literature (for example, for the Comparative Toxicogenomics Database) [3-5].
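For context on how such an extraction rate is typically scored, here is a minimal sketch of computing the recall (and precision) of automatically extracted statements against a manually curated gold standard; the pathway statements below are hypothetical.

    # Minimal sketch of scoring automated extraction against a manual gold
    # standard. The pathway statements below are hypothetical examples.
    gold_standard = {
        "KRAS activates RAF1",
        "RAF1 phosphorylates MEK1",
        "MEK1 phosphorylates ERK2",
        "ERK2 translocates to the nucleus",
        "SOS1 activates KRAS",
    }
    auto_extracted = {
        "KRAS activates RAF1",
        "MEK1 phosphorylates ERK2",
    }

    true_positives = gold_standard & auto_extracted
    recall = len(true_positives) / len(gold_standard)        # fraction of gold recovered
    precision = len(true_positives) / len(auto_extracted)    # fraction of output that is correct
    print(f"Recall: {recall:.0%}  Precision: {precision:.0%}")  # Recall: 40%  Precision: 100%

Real evaluations also have to decide when two differently worded statements count as the same fact, which is itself a curation judgment.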

Incomplete Knowledge Base

In an earlier posting (actually a press release for our first e-book) I discussed the problem of the “data deluge” we are experiencing in the scientific literature, as well as the plethora of ‘omics experimental data that needs to be curated.

Tackling the problem of scientific and medical information overload

Figure. The number of papers listed in PubMed (disregarding reviews) per ten-year period has steadily increased since 1970.

Analyzing and sharing the vast amounts of scientific knowledge has never been so crucial to innovation in the medical field. The publication rate has steadily increased since the 1970s, with a 50% increase in the number of original research articles published from the 1990s to the following decade. This massive amount of biomedical and scientific information has presented the unique problem of information overload, and a critical need for the methodology and expertise to organize, curate, and disseminate this diverse information for scientists and clinicians. Dr. Larry Bernstein, President of Triplex Consulting and previously chief of pathology at New York’s Methodist Hospital, concurs that “the academic pressures to publish, and the breakdown of knowledge into “silos”, has contributed to this knowledge explosion and although the literature is now online and edited, much of this information is out of reach to the very brightest clinicians.”
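For readers who want to reproduce counts like those in the figure, here is a rough sketch using NCBI’s E-utilities esearch endpoint; the exact query syntax and field tags are assumptions to be checked against the E-utilities documentation, and heavy use may require an API key.

    # Rough sketch: per-decade PubMed record counts (excluding reviews) via
    # NCBI E-utilities. Field tags (PDAT, pt) are assumptions to verify
    # against the E-utilities documentation.
    import json
    import urllib.parse
    import urllib.request

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_count(start_year: int, end_year: int) -> int:
        """Count non-review PubMed records published between the two years."""
        term = (f'("{start_year}/01/01"[PDAT] : "{end_year}/12/31"[PDAT]) '
                f"NOT review[pt]")
        params = urllib.parse.urlencode(
            {"db": "pubmed", "term": term, "rettype": "count", "retmode": "json"})
        with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
            return int(json.load(resp)["esearchresult"]["count"])

    if __name__ == "__main__":
        for start in range(1970, 2010, 10):
            print(f"{start}-{start + 9}: {pubmed_count(start, start + 9):,}")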

Traditionally, the organization of biomedical information has been the realm of the literature review, but most reviews are written years after discoveries are made and, given the rapid pace of new discoveries, this is looking like an outdated model. In addition, most medical searches depend on keywords, adding complexity for investigators trying to find the material they require. Third, medical researchers and professionals are recognizing the need to converse with each other, in real time, about the impact new discoveries may have on their research and clinical practice.

These issues require a people-based strategy: expertise across a diverse, cross-integrative range of medical topics to provide in-depth understanding of the current research and challenges in each field, as well as a more conceptual search platform. To address this need, human intermediaries, known as scientific curators, are needed to narrow down the information and provide critical context and analysis of medical and scientific information in an interactive manner powered by web 2.0, with curators acting as the “researcher 2.0”. By building such hybrid expert networks, this curation offers better organization of, and visibility into, the critical information useful for the next innovations in academic, clinical, and industrial research.

Yaneer Bar-Yam of the New England Complex Systems Institute was not confident that using details from past knowledge could produce adequate roadmaps for future experimentation, noting for the article: “The expectation that the accumulation of details will tell us what we want to know is not well justified.”

In a recent post I curated findings from four lung cancer ‘omics studies and presented some graphics from a bioinformatic analysis of the novel genetic mutations resulting from these studies (see the link below),

Multiple Lung Cancer Genomic Projects Suggest New Targets, Research Directions for Non-Small Cell Lung Cancer

which showed that, while multiple genetic mutations and related pathway ontologies were well documented in the lung cancer literature, many significant genetic mutations and pathways identified in the genomic studies had little literature attributed to them.

[Figure: KEGG pathway analysis of lung cancer mutations versus literature coverage.] This ‘literomics’ analysis reveals a large gap between our knowledge base and the data resulting from large translational ‘omic’ studies.

Different Literature Analysis Approaches Yield Different Perspectives

A ‘literomics’ approach focuses on what we do NOT know about genes, proteins, and their associated pathways, while a text-mining machine learning algorithm focuses on building a knowledge base to determine the next line of research or what needs to be measured. Using each approach can give us different perspectives on ‘omics data. A small sketch of such a gap analysis follows.
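Here is a minimal sketch of the set-difference idea behind a ‘literomics’ gap analysis; all gene symbols, publication counts, and the cutoff are hypothetical placeholders.

    # Minimal sketch of a 'literomics' gap analysis: genes called significant
    # in genomic studies but with little associated literature. All gene
    # symbols and counts here are hypothetical placeholders.
    literature_counts = {"KRAS": 12000, "EGFR": 18000, "TP53": 25000, "SETD2": 150}
    omics_significant = {"KRAS", "EGFR", "TP53", "SETD2", "RBM10", "U2AF1"}

    UNDER_STUDIED_CUTOFF = 200  # assumed threshold on number of publications

    gap = sorted(g for g in omics_significant
                 if literature_counts.get(g, 0) < UNDER_STUDIED_CUTOFF)
    print("Under-studied candidates:", gap)  # ['RBM10', 'SETD2', 'U2AF1']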

Deriving Causal Inference

Ras is one of the best-studied and best-characterized oncogenes, and the mechanisms behind Ras-driven oncogenesis are well understood. This, according to computational biologist Larry Hunt of Smart Information Flow Technologies, makes Ras a great starting point for the Big Mechanism project. As he states, “Molecular biology is a good place to try (developing a machine learning algorithm) because it’s an area in which common sense plays a minor role.”

Even though some may think the project would not be able to tackle other mechanisms, such as those involving epigenetic factors, UCLA’s expert in causality, Judea Pearl, Ph.D. (head of the UCLA Cognitive Systems Lab), feels it is possible for machine learning to bridge this gap. As summarized from his lecture at Microsoft:

“The development of graphical models and the logic of counterfactuals have had a marked effect on the way scientists treat problems involving cause-effect relationships. Practical problems requiring causal information, which long were regarded as either metaphysical or unmanageable can now be solved using elementary mathematics. Moreover, problems that were thought to be purely statistical, are beginning to benefit from analyzing their causal roots.”

According to Pearl, the investigator must first:

1) articulate the assumptions, and

2) define the research question in counterfactual terms.

Then it is possible to design an inference system, using his causal calculus, that tells the investigator what they need to measure. A minimal worked sketch of this recipe appears below.
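As an illustration only, here is a minimal sketch of that recipe on assumed toy data; it is not Pearl’s software or DARPA’s, and the causal graph, variable names, and counts are all hypothetical.

    # Minimal sketch of Pearl-style causal inference on assumed, toy data.
    # Assumed graph: Tissue -> Mutation, Tissue -> Response, Mutation -> Response,
    # so {Tissue} satisfies the back-door criterion for P(Response | do(Mutation)).
    # All variable names and counts are hypothetical.

    # Observed joint counts keyed by (tissue, mutation, response).
    counts = {
        ("lung", 1, 1): 30, ("lung", 1, 0): 10,
        ("lung", 0, 1): 10, ("lung", 0, 0): 50,
        ("colon", 1, 1): 5,  ("colon", 1, 0): 15,
        ("colon", 0, 1): 20, ("colon", 0, 0): 20,
    }
    TISSUES = {t for (t, _, _) in counts}
    TOTAL = sum(counts.values())

    def p_response_do(mutation: int) -> float:
        """Back-door adjustment: sum_t P(resp=1 | mutation, t) * P(t)."""
        result = 0.0
        for t in TISSUES:
            p_t = sum(v for (tt, _, _), v in counts.items() if tt == t) / TOTAL
            n_mt = sum(v for (tt, m, _), v in counts.items()
                       if tt == t and m == mutation)
            result += counts.get((t, mutation, 1), 0) / n_mt * p_t
        return result

    if __name__ == "__main__":
        effect = p_response_do(1) - p_response_do(0)
        print(f"Estimated interventional effect of the mutation: {effect:+.3f}")

Given the assumed graph, the back-door formula is what tells the investigator that tissue type must be measured alongside mutation status and response; changing the assumed graph changes what needs to be measured.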

To watch a video of Dr. Judea Pearl’s April 2013 lecture at Microsoft Research Machine Learning Summit 2013 (“The Mathematics of Causal Inference: with Reflections on Machine Learning”), click here.

The key for the Big Mechanism project may be in correcting for the variables among studies, in essence building a model system that does not rely on fully controlled conditions. Dr. Peter Spirtes of Carnegie Mellon University in Pittsburgh, PA is developing the TETRAD project with two goals: 1) to specify and prove under what conditions it is possible to reliably infer causal relationships from background knowledge and statistical data not obtained under fully controlled conditions, and 2) to develop, analyze, implement, test, and apply practical, provably correct computer programs for inferring causal structure under conditions where this is possible.

In summary, such projects and algorithms will tell investigators what should be measured, and possibly how.

So for now it seems we are still needed.

References

  1. You J: Artificial intelligence. DARPA sets out to automate research. Science 2015, 347(6221):465.
  2. Biocuration 2014: Battle of the New Curation Methods [http://blog.abigailcabunoc.com/biocuration-2014-battle-of-the-new-curation-methods]
  3. Davis AP, Johnson RJ, Lennon-Hopkins K, Sciaky D, Rosenstein MC, Wiegers TC, Mattingly CJ: Targeted journal curation as a method to improve data currency at the Comparative Toxicogenomics Database. Database : the journal of biological databases and curation 2012, 2012:bas051.
  4. Wu CH, Arighi CN, Cohen KB, Hirschman L, Krallinger M, Lu Z, Mattingly C, Valencia A, Wiegers TC, John Wilbur W: BioCreative-2012 virtual issue. Database : the journal of biological databases and curation 2012, 2012:bas049.
  5. Wiegers TC, Davis AP, Mattingly CJ: Collaborative biocuration–text-mining development task for document prioritization for curation. Database : the journal of biological databases and curation 2012, 2012:bas037.

Other posts on this site include: Artificial Intelligence, Curation Methodology, Philosophy of Science

Inevitability of Curation: Scientific Publishing moves to embrace Open Data, Libraries and Researchers are trying to keep up

A Brief Curation of Proteomics, Metabolomics, and Metabolism

The Methodology of Curation for Scientific Research Findings

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

The growing importance of content curation

Data Curation is for Big Data what Data Integration is for Small Data

Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation The Art of Scientific & Medical Curation

Exploring the Impact of Content Curation on Business Goals in 2013

Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison

conceived: NEW Definition for Co-Curation in Medical Research

Reconstructed Science Communication for Open Access Online Scientific Curation

Search Results for ‘artificial intelligence’

 The Simple Pictures Artificial Intelligence Still Can’t Recognize

Data Scientist on a Quest to Turn Computers Into Doctors

Vinod Khosla: “20% doctor included”: speculations & musings of a technology optimist or “Technology will replace 80% of what doctors do”

Where has reason gone?

Read Full Post »


Pfizer Cambridge Collaborative Innovation Events: ‘The Role of Innovation Districts in Metropolitan Areas to Drive the Global and Local Economy’ | Basecamp Business

Reporter: Stephen J. Williams, Ph.D.


Event Details:
Date/Time:
Monday, September 8, 2014, 5:30-7PM EDT
Venue: Pfizer Cambridge Seminar Room (ground floor)
Location: Pfizer Inc., 610 Main Street, Cambridge, MA 02139
(Corner of Portland and Albany street, Cambridge, MA 02139)
RSVP: To confirm your attendance please RSVP online through this website. This is an ONLINE REGISTRATION-ONLY event (there will not be registration at the door).

The Role of Innovation Districts in Metropolitan Areas to Drive the Global and Local Economy: Cambridge/Boston Case Study

Join Pfizer Cambridge at our new residence for a fascinating evening led by Bruce Katz, Vice-President and Founding Director at the Brookings Institution, followed by a networking reception with key partners: Boston-Cambridge big pharma and biotech, members of the venture capital community, renowned researchers, advocacy groups, and Pfizer Cambridge scientists and clinicians.

Boston/Cambridge is one of the most prominent biomedical hubs in the world and is known for its thriving economy. Recent advances in biomedical innovation and cutting-edge technologies have been a major factor in stimulating growth for the city. The close proximity of big pharma, biotech, academia, and venture capital in Boston/Cambridge has been particularly crucial in fostering a culture ripe for such innovation.

Bruce Katz will shed light on the state of the local and global economy and the role innovation districts can play in accelerating therapies to patients. Katz will focus on the success Boston/Cambridge has had thus far in advancing biomedical discoveries as well as offer insights on the city’s future outlook.

The Brookings Institution is a nonprofit public policy organization based in Washington, D.C. Mr. Katz is Founding Director of the Brookings Metropolitan Policy Program, which aims to provide decision makers in the public, corporate, and civic sectors with policy ideas for improving the health and prosperity of cities and metropolitan areas.

Agenda:

5:30-6PM      Registration/Gathering (please arrive no later than 5:45PM EDT with a government-issued ID to allow sufficient time for the security check)

6-7PM           Welcoming remarks by Cambridge/Boston Site Head and Group Senior Vice-President, Worldwide R&D, Dr. Jose-Carlos Gutierrez-Ramos
                      Keynote speaker: Bruce Katz, Founding Director, Metropolitan Policy Program, and Vice-President, The Brookings Institution

7-8PM           Open reception and networking

8PM              Event ends

This May, Pfizer Cambridge sites are integrating and relocating our research and development teams into our new local headquarters at 610 Main Street, Cambridge, MA 02139. The unified Cambridge presence represents an opportunity to interlace Pfizer’s R&D capability with the densest biomedical community in the world, to potentially expand our existing collaborations, and to forge possible new connections. These events will further drive our collective mission and passion to deliver new medicines to patients in need. Our distinguished invited guests will include leaders in the Boston-Cambridge venture capital and biotech community, renowned researchers, advocacy groups, and Pfizer Cambridge scientists and clinicians.

Online registration:
If you are experiencing issues with online registration, please contact: Cambridge_site_head@pfizer.com  



Hashtags: #bcnet-PCCIE


Read Full Post »