
Posts Tagged ‘Open access’


Old Industrial Revolution Paradigm of Education Needs to End: How Scientific Curation Can Transform Education

Curator: Stephen J. Williams, PhD.

Dr. Cathy N. Davidson from Duke University gives a talk entitled “Now You See It: Why the Future of Learning Demands a Paradigm Shift.”

In this talk, shown below, Dr. Davidson explains how our current education system was designed to prepare students for Industrial Age careers and skills, and how this educational paradigm is failing to prepare students for the challenges they will face in their future careers.

Or, as Dr. Davidson summarizes:

Designing education not for your past but for their future

As the video is almost an hour, I will summarize some of the main points below.

PLEASE WATCH VIDEO

Summary of talk

Dr. Davidson starts the talk with a thesis: institutions tend to preserve the problems they were created to solve.

All the current work and teaching paradigms we use today were created for the last information age (the 19th century).

Our job is to remake the institutions of education to work for the future, not the one we inherited.

Four information ages or technologies that radically changed communication

  1. advent of writing – in ancient Mesopotamia (B.C.); allowed us to record and transfer knowledge and ideas
  2. movable type – first seen in 10th century China
  3. steam-powered press – allowed books to be mass produced and made available to the middle class; the first time the middle class had unlimited access to literature
  4. internet – ability to publish and share ideas worldwide

Interestingly, in the early phases of each of these information ages, the same four complaints about the new technology/methodology of disseminating information were heard:

  • ruins memory
  • creates a distraction
  • ruins interpersonal dialogue and authority
  • reduces complexity of thought

She gives the example of Socrates, who hated writing and frequently stated that writing ruins memory, creates a distraction, and, worst of all, commits ideas to the written page, where they cannot be changed or altered, and so destroys ‘free thinking’.

She discusses how our educational institutions are designed for the industrial age.

The need for collaborative (group) learning AND teaching

Designing education not for your past but for the future

In other words, preparing students for THEIR future, not your past, and for careers that do not exist today.

In the West we were all taught to answer silently and alone.  In Japan, however, education is arranged around the han, or group, utilizing the best talents of each member of the group.  In Japan you are placed into such groups at an early age.  The concept is that each member of the group contributes their unique talent and skill for the betterment of the whole group, and the goal is to demonstrate that the group worked well together.

see https://educationinjapan.wordpress.com/education-system-in-japan-general/the-han-at-work-community-spirit-begins-in-elementary-school/ for a description of “in the han”

In the 19th century, institutions had to solve a problem: how to get people off the farm and into the factory, and/or out of the shop and into the firm.

It takes a lot of regulation and institutionalization to convince people that independent thought is not the best way in the corporation.

keywords for an industrial age

  • timeliness
  • attention to task
  • standards, standardization
  • hierarchy
  • specialization, expertise
  • metrics (measures, management)
  • two cultures: separating the curriculum into STEM versus artistic tracks, or dividing the world of science from the world of art

This effort led to a concept used in scientific labor management, derived from this old paradigm in education: an educational system controlled, and success measured, using

  • grades (A,B,C,D)
  • multiple choice tests

keywords for our age

  • workflow
  • multitasking attention
  • interactive process (Prototype, Feedback)
  • data mining
  • collaboration by difference

Can a methodology such as scientific curation move higher education toward this goal of teaching students to collaborate in an interactive process, using data mining to create a new workflow for any given problem?  Can scientific curation effect the changes needed in academic departments to achieve this goal?

This will be the subject of future curations tested using real-world in class examples.

However, it is important to first discern that scientific content curation takes material from peer-reviewed and other expert-vetted sources.  This distinguishes it from other types of content curation, which draw from varied sources, some of which are not expert-reviewed or vetted, and which may include ‘fake news’ or highly edited materials such as altered video and audio.  In this respect, the expert acts not only as curator but as referee.  In addition, collaboration is necessary, and even compulsory, for the methodology of scientific content curation, positioning the curator not as the sole expert but revealing the CONTENT from experts as the main focus for learning and edification.

Other articles of note on this subject in this Open Access Online Scientific Journal include:

The above articles give a good background on this newly conceived methodology of Scientific Curation and its applicability in areas such as medical publishing and, as discussed below, medical education.

To understand the new paradigm in medical communication and the impact curative networks have or will play in this arena please read the following:

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

This article discusses the history of medical communication and how science and medical communication moved from discussions among select individuals to the current open, accessible, and cooperative structure using Web 2.0 as a platform.

 

Read Full Post »


Live Conference Coverage @Medcitynews Converge 2018 @Philadelphia: Promising Drugs and Breaking Down Silos

Reporter: Stephen J. Williams, PhD

Promising Drugs, Pricing and Access

The drug pricing debate rages on. What are the solutions to continuing to foster research and innovation, while ensuring access and affordability for patients? Can biosimilars and generics expand market access in the U.S.?

Moderator: Bunny Ellerin, Director, Healthcare and Pharmaceutical Management Program, Columbia Business School
Speakers:
Patrick Davish, AVP, Global & US Pricing/Market Access, Merck
Robert Dubois M.D., Chief Science Officer and Executive Vice President, National Pharmaceutical Council
Gary Kurzman, M.D., Senior Vice President and Managing Director, Healthcare, Safeguard Scientifics
Steven Lucio, Associate Vice President, Pharmacy Services, Vizient

What is working and what needs to change in pricing models?

Robert:  He sees so many players in the oncology space discovering new drugs, and other drugs going generic (that is what is working).  However, are we spending too much on cancer care relative to other diseases? (See their initiative, Going Beyond the Surface.)

Steven:  the advent of biosimilars is good for the industry

Patrick:  large effort in oncology, maybe too much (750 trials on Keytruda), and he says pharma is spending on R&D (however, clinical trials take a large chunk of this money)

Robert: cancer has gotten a free ride, but cost per year relative to benefit looks different than for other diseases.  Are we overinvesting in cancer, or is that a societal decision?

Gary:  maybe as we become more specific with precision medicines, high prices may be a result of our success in specifically targeting a mutation.  We need to understand the targeted drugs and outcomes.

Patrick: “Cancer is the last big frontier,” but he says prices will come down in most cases.  He gives the example of Hep C treatment… previously the only therapeutic option was a very toxic, yearlong treatment, but the newer drugs may be more cost-effective and safer.

Steven: Our blockbuster drugs could spread the expense over many patients, but now with precision medicine we can’t spread the expense over a large number of patients.

President’s Cancer Panel Recommendation

Six recommendations

  1. promoting value-based pricing
  2. enabling communication of cost
  3. addressing financial toxicity
  4. stimulating competition from biosimilars
  5. value-based care
  6. investing in biomedical research

Patrick: the government pricing regime is hurting.  A lot of practical barriers, but Merck has over 200 studies on a cost basis.

Robert:  many concerns/impetus on pricing started in Europe, as they use a set-price model (the EU won’t pay more than X for a drug). The US is moving more toward outcomes-based pricing. For every one health-outcome study, three studies did not show a benefit.  With cancer it is tricky to establish specific health outcomes.  Also, Medicare gets best-price status, so there needs to be a safe harbor for payers, and the biggest constraint is regulatory issues.

Steven: They all want value-based pricing, but we don’t have that yet, and there is a challenge in understanding the nuances of new therapies.  It is hard to align all the stakeholders until some legislation starts to change the reimbursement-clinic-patient-pharma obstacles.  Possibly the big data efforts discussed here may help align each stakeholder’s goals.

Gary: What is the data necessary to understand what is happening to patients? Until we have that information, it will still be complicated to determine where investors in health care stand in this discussion.

Robert (on an ICER methods advisory board): 1) great concern over costs: how do we determine the fair value of a drug? 2) ICER is the only game in town; other organizations only give recommendations 3) ICER evaluates long-term value (cost per quality year of life) and budget impact (will people go bankrupt?) 4) ICER is getting traction in the public eye and with advocates 5) the problem is ICER is not ready for prime time, as evidence keeps changing; are they keeping societal factors in mind? They don’t have total transparency in their methodology.
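For context on the “cost per quality year of life” metric mentioned above, the conventional incremental cost-effectiveness ratio (ICER) is simply incremental cost divided by incremental QALYs gained. A minimal sketch with invented numbers (not figures from the panel):

```python
# Minimal sketch of an incremental cost-effectiveness ratio (ICER) calculation.
# All numbers are invented for illustration, not from the panel discussion.
cost_new, cost_standard = 150_000.0, 60_000.0   # total cost per patient (USD)
qaly_new, qaly_standard = 2.4, 1.6              # quality-adjusted life years gained

icer = (cost_new - cost_standard) / (qaly_new - qaly_standard)
print(f"ICER: ${icer:,.0f} per QALY gained")    # -> ICER: $112,500 per QALY gained
```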

Steven: We need more transparency into all the costs associated with the drug and therapy and value-based outcome.  Right now price is more of a black box.

Moderator: pointed to a recent study which showed that outpatient costs are going down while hospital-based care costs are rising rapidly (cost by site of care), so we need to figure out how to get people into lower-cost settings.

Breaking Down Silos in Research

“Silo” is healthcare’s four-letter word. How are researchers, life science companies and others sharing information that can benefit patients more quickly? Hear from experts at institutions that are striving to tear down the walls that prevent data from flowing.

Moderator: Vini Jolly, Executive Director, Woodside Capital Partners
Speakers:
Ardy Arianpour, CEO & Co-Founder, Seqster @seqster
Lauren Becnel, Ph.D., Real World Data Lead for Oncology, Pfizer
Rakesh Mathew, Innovation, Research, & Development Lead, HealthShareExchange
David Nace M.D., Chief Medical Officer, Innovaccer

Seqster: Seqster is a secure platform that helps you and your family manage medical records, DNA, fitness, and nutrition data—all in one place. The founder has a genomic sequencing background but realized sequence information needs to be linked with medical records.

HealthShareExchange.org :

HealthShare Exchange envisions a trusted community of healthcare stakeholders collaborating to deliver better care to consumers in the greater Philadelphia region. HealthShare Exchange will provide secure access to health information to enable preventive and cost-effective care; improve quality of patient care; and facilitate care transitions. They have partnered with multiple players in healthcare field and have data on over 7 million patients.

Innovaccer

Data can be overwhelming, but it doesn’t have to be this way. To drive healthcare efficiency, we designed a modular suite of products for a smooth transition into a data-driven world within 4 weeks. Why does it take so much money to move data around and so slowly?

What is interoperability?

Ardy: We knew in the genomics field how to build algorithms to analyze big data, but how do we expand this from a consumer standpoint so people can see and share their own data?

Lauren: how can we use the data between patients, doctors, and researchers?  On the research side, genomics represents only 2% of data.  Silos are one issue, but the standards for data (collection, curation, analysis) are not yet set. We still need to improve semantic interoperability. For example, Flatiron had well-annotated data on male metastatic breast cancer.

David: There is technical interoperability (platform), semantic interoperability (meaning or word usage), and format (syntactic) interoperability (data structure).  There is technical interoperability between health systems, and some semantic interoperability, but formats are all different (pharmacies use different systems and write different prescriptions using different suppliers).  In any value-based contract this problem is a big issue (if we are going to pay you based on the quality of your performance, there is a big need to coordinate across platforms).  We can solve it by bringing data together in real time in one place, using mapping to integrate the formats (with quality control), and then making the data democratized among players.
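As a rough illustration of the format (syntactic) problem David describes, the sketch below normalizes two hypothetical pharmacy feeds, with different field names and date formats, into one common schema. The field names and records are invented for illustration, not taken from any vendor’s actual system:

```python
from datetime import datetime

# Two hypothetical source formats mapped into one common schema.
def normalize_pharmacy_a(rec):
    # Pharmacy A: 'rx_name' field, ISO dates.
    return {"patient_id": rec["patient_id"],
            "drug": rec["rx_name"].lower(),
            "filled_on": datetime.strptime(rec["fill_date"], "%Y-%m-%d").date()}

def normalize_pharmacy_b(rec):
    # Pharmacy B: 'medication' field, US-style dates.
    return {"patient_id": rec["pt"],
            "drug": rec["medication"].lower(),
            "filled_on": datetime.strptime(rec["filled"], "%m/%d/%Y").date()}

records = [
    normalize_pharmacy_a({"patient_id": "123", "rx_name": "Metformin", "fill_date": "2018-06-01"}),
    normalize_pharmacy_b({"pt": "123", "medication": "METFORMIN", "filled": "06/15/2018"}),
]
print(records)  # one consistent structure, ready for quality control and aggregation
```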

Rakesh:  Patients’ data should follow the patient. Of Philadelphia’s 12 health systems, we had a challenge to make data interoperable among them, so they told providers not to use portals and made sure hospitals were sending standardized data. Health care data is complex.

David: 80% of clinical data is noise. For example, most electronic medical records are text. Another problem is defining a patient identifier, which the US does not believe in.

 

 

 

 

Please follow on Twitter using the following #hash tags and @pharma_BI

#MCConverge

#cancertreatment

#healthIT

#innovation

#precisionmedicine

#healthcaremodels

#personalizedmedicine

#healthcaredata

And at the following handles:

@pharma_BI

@medcitynews

Read Full Post »


How Will FDA’s new precisionFDA Science 2.0 Collaboration Platform Protect Data?

Reporter: Stephen J. Williams, Ph.D.

As reported in MassDevice.com

FDA launches precisionFDA to harness the power of scientific collaboration

FDA VoiceBy: Taha A. Kass-Hout, M.D., M.S. and Elaine Johanson

Imagine a world where doctors have at their fingertips the information that allows them to individualize a diagnosis, treatment or even a cure for a person based on their genes. That’s what President Obama envisioned when he announced his Precision Medicine Initiative earlier this year. Today, with the launch of FDA’s precisionFDA web platform, we’re a step closer to achieving that vision.

PrecisionFDA is an online, cloud-based, portal that will allow scientists from industry, academia, government and other partners to come together to foster innovation and develop the science behind a method of “reading” DNA known as next-generation sequencing (or NGS). Next Generation Sequencing allows scientists to compile a vast amount of data on a person’s exact order or sequence of DNA. Recognizing that each person’s DNA is slightly different, scientists can look for meaningful differences in DNA that can be used to suggest a person’s risk of disease, possible response to treatment and assess their current state of health. Ultimately, what we learn about these differences could be used to design a treatment tailored to a specific individual.

The precisionFDA platform is a part of this larger effort and through its use we want to help scientists work toward the most accurate and meaningful discoveries. precisionFDA users will have access to a number of important tools to help them do this. These tools include reference genomes, such as “Genome in a Bottle,” a reference sample of DNA for validating human genome sequences developed by the National Institute of Standards and Technology. Users will also be able to compare their results to previously validated reference results as well as share their results with other users, track changes and obtain feedback.
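To make the “compare their results to previously validated reference results” step concrete, here is a minimal, hypothetical sketch of that kind of comparison. Variants are reduced to (chromosome, position, ref, alt) tuples and the reference set stands in for something like Genome in a Bottle; real comparisons parse VCF files and handle many more subtleties (genotypes, confident regions, representation differences):

```python
# Toy concordance check of variant calls against a validated reference set.
# Variants and numbers are invented; real pipelines work on VCF files.
def concordance(test_calls, truth_calls):
    test, truth = set(test_calls), set(truth_calls)
    tp = len(test & truth)                        # calls confirmed by the reference
    precision = tp / len(test) if test else 0.0   # fraction of calls that are correct
    recall = tp / len(truth) if truth else 0.0    # fraction of reference variants found
    return precision, recall

truth = {("chr1", 101, "A", "G"), ("chr1", 250, "C", "T"), ("chr2", 77, "G", "A")}
calls = {("chr1", 101, "A", "G"), ("chr2", 77, "G", "A"), ("chr3", 12, "T", "C")}
print(concordance(calls, truth))  # (0.666..., 0.666...)
```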

Over the coming months we will engage users in improving the usability, openness and transparency of precisionFDA. One way we’ll achieve that is by placing the code for the precisionFDA portal on the world’s largest open source software repository, GitHub, so the community can further enhance precisionFDA’s features. Through such collaboration we hope to improve the quality and accuracy of genomic tests – work that will ultimately benefit patients.

precisionFDA leverages our experience establishing openFDA, an online community that provides easy access to our public datasets. Since its launch in 2014, openFDA has already resulted in many novel ways to use, integrate and analyze FDA safety information. We’re confident that employing such a collaborative approach to DNA data will yield important advances in our understanding of this fast-growing scientific field, information that will ultimately be used to develop new diagnostics, treatments and even cures for patients.

Taha A. Kass-Hout, M.D., M.S., is FDA’s Chief Health Informatics Officer and Director of FDA’s Office of Health Informatics. Elaine Johanson is the precisionFDA Project Manager.

 

The opinions expressed in this blog post are the author’s only and do not necessarily reflect those of MassDevice.com or its employees.

So What Are the Other Successes With Such Open Science 2.0 Collaborative Networks?

In the following post there are highlighted examples of these Open Scientific Networks; as long as

  • transparency
  • equal contributions (lack of hierarchy)

exist, these networks can flourish and add interesting discourse.  Scientists are already relying on these networks to collaborate and share; however, resistance by certain members of an “elite” can still exist.  Social media platforms are now democratizing this new Science 2.0 effort.  In addition, the efforts of multiple biocurators (who mainly work for love of the science) have organized the plethora of data (genomic, proteomic, and literature) in order to provide ease of access and analysis.

Science and Curation: The New Practice of Web 2.0

Curation: an Essential Practice to Manage “Open Science”

The web 2.0 gave birth to new practices motivated by the will to have broader and faster cooperation in a more free and transparent environment. We have entered the era of an “open” movement: “open data”, “open software”, etc. In science, expressions like “open access” (to scientific publications and research results) and “open science” are used more and more often.

Curation and Scientific and Technical Culture: Creating Hybrid Networks

Another area, where there are most likely fewer barriers, is scientific and technical culture. This broad term involves different actors such as associations, companies, universities’ communication departments, CCSTI (French centers for scientific, technical and industrial culture), journalists, etc. A number of these actors do not limit their work to popularizing the scientific data; they also consider they have an authentic mission of “culturing” science. The curation practice thus offers a better organization and visibility to the information. The sought-after benefits will be different from one actor to the next.

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

  • Using Curation and Science 2.0 to build Trusted, Expert Networks of Scientists and Clinicians

Given the aforementioned problems of:

  I. the complex and rapid deluge of scientific information,
  II. the need for a collaborative, open environment to produce transformative innovation, and
  III. the need for alternative ways to disseminate scientific findings,

CURATION MAY OFFER SOLUTIONS

  I. Curation exists beyond the review: curation decreases the time needed to assess current trends, adding multiple insights and analyses WITH an underlying METHODOLOGY (discussed below), while NOT acting as mere reiteration or regurgitation.

  II. Curation provides insights from the WHOLE scientific community on multiple WEB 2.0 platforms.

  III. Curation makes use of new computational and Web-based tools to provide interoperability of data and reporting of findings (shown in the Examples below).

Therefore a discussion is given on methodologies, definitions of best practices, and tools developed to assist the content curation community in this endeavor, which has created a need for more context-driven scientific search and discourse.

However, another issue would be individual bias: if these networks are closed, protocols need to be devised to reduce bias from individual investigators and clinicians.  This is where CONSENSUS built from OPEN ACCESS DISCOURSE would be beneficial, as discussed in the following post:

Risk of Bias in Translational Science

As per the article

Risk of bias in translational medicine may take one of three forms:

  1. a systematic error of methodology as it pertains to measurement or sampling (e.g., selection bias),
  2. a systematic defect of design that leads to estimates of experimental and control groups, and of effect sizes that substantially deviate from true values (e.g., information bias), and
  3. a systematic distortion of the analytical process, which results in a misrepresentation of the data with consequential errors of inference (e.g., inferential bias).
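A small simulation can make the first form above (selection bias) tangible: if sicker patients are less likely to enroll, the response rate observed in the enrolled sample overstates the true population rate. The probabilities below are invented purely for illustration:

```python
# Toy simulation of selection bias with made-up probabilities.
import random
random.seed(0)

population = [{"severe": random.random() < 0.5} for _ in range(100_000)]
for p in population:
    p["responds"] = random.random() < (0.3 if p["severe"] else 0.6)  # severe patients respond less

def rate(sample):
    return sum(p["responds"] for p in sample) / len(sample)

# Severe patients are much less likely to enroll in the study.
enrolled = [p for p in population if random.random() < (0.2 if p["severe"] else 0.8)]
print(f"true response rate:     {rate(population):.2f}")  # ~0.45
print(f"enrolled response rate: {rate(enrolled):.2f}")    # biased upward, ~0.54
```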

This post highlights many important points related to bias, but in summary there can be methodologies and protocols devised to eliminate such bias.  Risk of bias can seriously adulterate the internal and the external validity of a clinical study and, unless it is identified and systematically evaluated, can seriously hamper the process of comparative effectiveness and efficacy research and analysis for practice. The Cochrane Group and the Agency for Healthcare Research and Quality have independently developed instruments for assessing the meta-construct of risk of bias. The present article begins to discuss this dialectic.

  • Information dissemination to all stakeholders is key to increasing their health literacy in order to ensure their full participation
  • threats to internal and external validity represent specific aspects of systematic errors (i.e., bias) in design, methodology and analysis

So what about the safety and privacy of Data?

A while back I did a post and some interviews on how doctors in developing countries are using social networks to communicate with patients, either over established networks like Facebook or more private in-house networks.  In addition, these doctor-patient relationships in developing countries are remote, using the smartphone to communicate with rural patients who don’t have ready access to their physicians.

Located in the post Can Mobile Health Apps Improve Oral-Chemotherapy Adherence? The Benefit of Gamification.

I discuss some of these problems in the following paragraph and associated posts below:

Mobile Health Applications on Rise in Developing World: Worldwide Opportunity

According to International Telecommunication Union (ITU) statistics, world-wide mobile phone use has expanded tremendously in the past 5 years, reaching almost 6 billion subscriptions. By the end of this year it is estimated that over 95% of the world’s population will have access to mobile phones/devices, including smartphones.

This presents a tremendous and cost-effective opportunity in developing countries, and especially rural areas, for physicians to reach patients using mHealth platforms.

How Social Media, Mobile Are Playing a Bigger Part in Healthcare

E-Medical Records Get A Mobile, Open-Sourced Overhaul By White House Health Design Challenge Winners

In summary, although there are restrictions here in the US governing what information can be disseminated over social media networks, developing countries appear to have looser regulations, as they are more dependent on these types of social networks given the difficulties in patient-physician access.

Therefore the question will be Who Will Protect The Data?

For some interesting discourse please see the following post

Atul Butte Talks on Big Data, Open Data and Clinical Trials

 

Read Full Post »


 

Yay! Bloomberg View Seems to Be On the Side of the Lowly Scientist!

 

Reporter: Stephen J. Williams, Ph.D.

Justin Fox at BloombergView just published an article near and dear to the hearts of all those #openaccess scientists, and those of us @Pharma_BI and @MozillaScience who feel strongly about #openscience, #opendata and the movement to make scientific discourse freely accessible.

His article “Academic Publishing Can’t Remain Such a Great Business” discusses the history of academic publishing and how the consolidation of smaller publishers into large scientific publishing houses has produced a monopoly-like environment in which prices for journal subscriptions keep rising. He also discusses how the open access movement is challenging this model and may one day replace the big publishing houses.

A few tidbits from his article:

Publishers of academic journals have a great thing going. They generally don’t pay for the articles they publish, or for the primary editing and peer reviewing essential to preparing them for publication (they do fork over some money for copy editing). Most of this gratis labor is performed by employees of academic institutions. Those institutions, along with government agencies and foundations, also fund all the research that these journal articles are based upon.

Yet the journal publishers are able to get authors to sign over copyright to this content, and sell it in the form of subscriptions to university libraries. Most journals are now delivered in electronic form, which you think would cut the cost, but no, the price has been going up and up:

 

This isn’t just inflation at work: in 1994, journal subscriptions accounted for 51 percent of all library spending on information resources. In 2012 it was 69 percent.

Who exactly is getting that money? The largest academic publisher is Elsevier, which is also the biggest, most profitable division of RELX, the Anglo-Dutch company that was known until February as Reed Elsevier.

 

RELX reports results in British pounds; I converted to dollars in part because the biggest piece of the company’s revenue comes from the U.S. And yes, those are pretty great operating-profit margins: 33 percent in 2014, 39 percent in 2013. The next biggest academic publisher is Springer Nature, which is closely held (by German publisher Holtzbrinck and U.K. private-equity firm BC Partners) but reportedly has annual revenue of about $1.75 billion. Other biggies that are part of publicly traded companies include Wiley-Blackwell, a division of John Wiley & Sons; Wolters Kluwer Health, a division of Wolters Kluwer; and Taylor & Francis, a division of Informa.

And gives a brief history of academic publishing:

The history here is that most early scholarly journals were the work of nonprofit scientific societies. The goal was to disseminate research as widely as possible, not to make money — a key reason why nobody involved got paid. After World War II, the explosion in both the production of and demand for academic research outstripped the capabilities of the scientific societies, and commercial publishers stepped into the breach. At a time when journals had to be printed and shipped all over the world, this made perfect sense.

Once it became possible to effortlessly copy and disseminate digital files, though, the economics changed. For many content producers, digital copying is a threat to their livelihoods. As Peter Suber, the director of Harvard University’s Office for Scholarly Communication, puts it in his wonderful little book, “Open Access”:

And while NIH Tried To Force These Houses To Accept Open Access:

About a decade ago, the universities and funding agencies began fighting back. The National Institutes of Health in the U.S., the world’s biggest funder of medical research, began requiring in 2008 that all recipients of its grants submit electronic versions of their final peer-reviewed manuscripts when they are accepted for publication in journals, to be posted a year later on the NIH’s open-access PubMed depository. Publishers grumbled, but didn’t want to turn down the articles.

Big publishers are making money by either charging as much as they can or focusing on new customers and services

For the big publishers, meanwhile, the choice is between positioning themselves for the open-access future or maximizing current returns. In its most recent annual report, RELX leans toward the latter while nodding toward the former:

Over the past 15 years alternative payment models for the dissemination of research such as “author-pays” or “author’s funder-pays” have emerged. While it is expected that paid subscription will remain the primary distribution model, Elsevier has long invested in alternative business models to address the needs of customers and researchers.

Elsevier’s extra services can add new avenues of revenue:

https://www.elsevier.com/social-sciences/business-and-management

https://www.elsevier.com/rd-solutions

but they may be seeing the light on Open Access (possibly due to online advocacy, an army of scientific curators, and online scientific communities):

Elsevier’s Mendeley and Academia.edu – How We Distribute Scientific Research: A Case in Advocacy for Open Access Journals

SAME SCIENTIFIC IMPACT: Scientific Publishing – Open Journals vs. Subscription-based

e-Recognition via Friction-free Collaboration over the Internet: “Open Access to Curation of Scientific Research”

Indeed, we recently put up an interesting authored paper, “A Patient’s Perspective: On Open Heart Surgery from Diagnosis and Intervention to Recovery” (free of charge), letting the community of science freely peruse and comment; it was generally well received by both author and community as a nice way to share academic discourse without the enormous fees, especially for opinion papers in which a rigorous peer review may not be necessary.

But it was very nice to see a major news outlet like Bloomberg View understand the lowly scientist’s aggravations.

Thanks Bloomberg!

 

 

 

 

 

Read Full Post »


Artificial Intelligence Versus the Scientist: Who Will Win?

Will DARPA Replace the Human Scientist: Not So Fast, My Friend!

Writer, Curator: Stephen J. Williams, Ph.D.


Last month’s Science article by Jia You, “DARPA Sets Out to Automate Research” [1], gave a glimpse of how science could be conducted in the future: without scientists. The article focused on the U.S. Defense Advanced Research Projects Agency (DARPA) program called “Big Mechanism”, a $45 million effort to develop computer algorithms which read scientific journal papers with the ultimate goal of extracting enough information to design hypotheses and the next set of experiments,

all without human input.

The head of the project, artificial intelligence expert Paul Cohen, says the overall goal is to help scientists cope with the complexity of massive amounts of information. As Paul Cohen stated for the article:

“Just when we need to understand highly connected systems as systems, our research methods force us to focus on little parts.”

The Big Mechanism project aims to design computer algorithms that critically read journal articles, much as scientists do, to determine what and how the information contributes to the knowledge base.

As a proof of concept DARPA is attempting to model Ras-mutation driven cancers using previously published literature in three main steps:

  1. Natural Language Processing: Machines read literature on cancer pathways and convert information to computational semantics and meaning

One team is focused on extracting details on experimental procedures, mining certain phraseology to judge a paper’s worth (for example, phrases like ‘we suggest’ or ‘suggests a role in’ might be considered weak, whereas ‘we prove’ or ‘provide evidence’ might be identified by the program as marking articles worth curating; a toy sketch of this kind of phrase scoring appears after this list). Another team, led by a computational linguistics expert, will design systems to map the meanings of sentences.

  2. Integrate each piece of knowledge into a computational model to represent the Ras pathway in oncogenesis.
  3. Produce hypotheses and propose experiments based on the knowledge base, which can be experimentally verified in the laboratory.
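As a toy illustration of the phrase-based triage described in step 1 (and not DARPA’s actual system), the sketch below scores passages by counting assertive versus hedged phrasing so a curation pipeline could prioritize the stronger claims. The phrase lists and weighting are invented:

```python
# Toy "assertiveness" scorer: strong phrases raise the score, hedged phrases lower it.
ASSERTIVE = ["we prove", "provide evidence", "we demonstrate"]
HEDGED = ["we suggest", "suggests a role in", "may contribute to"]

def assertiveness_score(text):
    t = text.lower()
    strong = sum(t.count(p) for p in ASSERTIVE)
    weak = sum(t.count(p) for p in HEDGED)
    return strong - weak  # higher = stronger experimental claims

passages = [
    "Our data provide evidence that KRAS G12D activates MEK signaling.",
    "These results suggest a role in proliferation, and we suggest further study.",
]
ranked = sorted(passages, key=assertiveness_score, reverse=True)
print(ranked[0])  # the passage making the stronger claim
```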

The Human no Longer Needed?: Not So Fast, my Friend!

The problems the DARPA research teams are encountering include:

  • Need for data verification
  • Text mining and curation strategies
  • Incomplete knowledge base (past, current and future)
  • Molecular biology does not necessarily ‘require causal inference’ as other fields do

Verification

Notice that this verification step (step 3) requires physical lab work, as do all other ‘omics strategies and other computational biology projects. As with high-throughput microarray screens, verification is needed, usually by conducting qPCR or by validating interesting genes in a phenotypic (expression) system. In addition, there has been an ongoing issue surrounding the validity and reproducibility of some research studies and data.

See Importance of Funding Replication Studies: NIH on Credibility of Basic Biomedical Studies

Therefore as DARPA attempts to recreate the Ras pathway from published literature and suggest new pathways/interactions, it will be necessary to experimentally validate certain points (protein interactions or modification events, signaling events) in order to validate their computer model.

Text-Mining and Curation Strategies

The Big Mechanism Project is starting very small; this reflects some of the challenges in the scale of this project. Researchers were given only six-paragraph-long passages and a rudimentary model of the Ras pathway in cancer, and then asked to automate a text mining strategy to extract as much useful information as possible. Unfortunately, this strategy could be fraught with issues frequently encountered in the biocuration community, namely:

Manual or automated curation of scientific literature?

Biocurators, the scientists who painstakingly sort through the voluminous scientific literature to extract and then organize relevant data into accessible databases, have debated whether manual, automated, or a combination of both curation methods [2] achieves the highest accuracy for extracting the information needed to enter into a database. Abigail Cabunoc, a lead developer for the Ontario Institute for Cancer Research’s WormBase (a database of nematode genetics and biology) and Lead Developer at Mozilla Science Lab, noted on her blog the lively debate on biocuration methodology at the Seventh International Biocuration Conference (#ISB2014), and that the massive amounts of information will require a Herculean effort regardless of the methodology.

Although I will have a future post on the advantages/disadvantages and tools/methodologies of manual vs. automated curation, there is a great article on researchinformation.info, “Extracting More Information from Scientific Literature”; also see “The Methodology of Curation for Scientific Research Findings” and “Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison” for manual curation methodologies, and “A MOD(ern) perspective on literature curation” for a nice workflow paper on the International Society for Biocuration site.

The Big Mechanism team decided on a fully automated approach to text-mine their limited literature set for relevant information; however, they were able to extract only 40% of the relevant information from these six paragraphs relative to the given model. Although the investigators were happy with this percentage, most biocurators, whether using a manual or automated method to extract information, would consider 40% a low success rate. Biocurators, regardless of method, have reported the ability to extract 70-90% of relevant information from the whole literature (for example, for the Comparative Toxicogenomics Database) [3-5].

Incomplete Knowledge Base

In an earlier posting (actually a press release for our first e-book) I discussed the problem of the “data deluge” we are experiencing in the scientific literature, as well as the plethora of ‘omics experimental data which needs to be curated.

Tackling the problem of scientific and medical information overload

Figure. The number of papers listed in PubMed (disregarding reviews) per ten-year period has steadily increased since 1970.

Analyzing and sharing the vast amounts of scientific knowledge has never been so crucial to innovation in the medical field. The publication rate has steadily increased from the 70’s, with a 50% increase in the number of original research articles published from the 1990’s to the previous decade. This massive amount of biomedical and scientific information has presented the unique problem of an information overload, and the critical need for methodology and expertise to organize, curate, and disseminate this diverse information for scientists and clinicians. Dr. Larry Bernstein, President of Triplex Consulting and previously chief of pathology at New York’s Methodist Hospital, concurs that “the academic pressures to publish, and the breakdown of knowledge into “silos”, has contributed to this knowledge explosion and although the literature is now online and edited, much of this information is out of reach to the very brightest clinicians.”

Traditionally, organization of biomedical information has been the realm of the literature review, but most reviews are performed years after discoveries are made and, given the rapid pace of new discoveries, this is appearing to be an outdated model. In addition, most medical searches are dependent on keywords, hence adding more complexity to the investigator in finding the material they require. Third, medical researchers and professionals are recognizing the need to converse with each other, in real-time, on the impact new discoveries may have on their research and clinical practice.

These issues require a people-based strategy, having expertise in a diverse and cross-integrative number of medical topics to provide the in-depth understanding of the current research and challenges in each field as well as providing a more conceptual-based search platform. To address this need, human intermediaries, known as scientific curators, are needed to narrow down the information and provide critical context and analysis of medical and scientific information in an interactive manner powered by web 2.0 with curators referred to as the “researcher 2.0”. This curation offers better organization and visibility to the critical information useful for the next innovations in academic, clinical, and industrial research by providing these hybrid networks.

Yaneer Bar-Yam of the New England Complex Systems Institute was not confident that using details from past knowledge could produce adequate roadmaps for future experimentation, noting for the article: “The expectation that the accumulation of details will tell us what we want to know is not well justified.”

In a recent post I curated findings from four lung cancer ‘omics studies and presented some graphics on bioinformatic analysis of the novel genetic mutations resulting from these studies (see link below):

Multiple Lung Cancer Genomic Projects Suggest New Targets, Research Directions for Non-Small Cell Lung Cancer

which showed that, while multiple genetic mutations and related pathway ontologies were well documented in the lung cancer literature, many significant genetic mutations and pathways identified in the genomic studies had little literature attributed to these lung cancer-relevant mutations.

This ‘literomics’ analysis reveals a large gap between our knowledge base and the data resulting from large translational ‘omic’ studies.

Different Literature Analysis Approaches Yield Different Perspectives

A ‘literomics’ approach focuses on what we do NOT know about genes, proteins, and their associated pathways, while a text-mining machine learning algorithm focuses on building a knowledge base to determine the next line of research or what needs to be measured. Using each approach can give us different perspectives on ‘omics data.
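A minimal sketch of that ‘literomics’ gap analysis: contrast the genes flagged by an omics study with the genes already well covered in the literature, and the set difference is where the knowledge base is thin. The gene lists here are invented placeholders, not data from the lung cancer studies:

```python
# Toy gap analysis: omics hits not yet well covered in the literature.
omics_hits = {"KRAS", "TP53", "KEAP1", "SMARCA4", "SETD2", "STK11"}
well_documented_in_literature = {"KRAS", "TP53", "EGFR", "ALK"}

literature_gap = omics_hits - well_documented_in_literature
print("mutations with little attributed literature:", sorted(literature_gap))
# -> ['KEAP1', 'SETD2', 'SMARCA4', 'STK11']
```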

Deriving Causal Inference

Ras is one of the best studied and characterized oncogenes, and the mechanisms behind Ras-driven oncogenesis are well understood.  This, according to computational biologist Larry Hunt of Smart Information Flow Technologies, makes Ras a great starting point for the Big Mechanism project. As he states, “Molecular biology is a good place to try (developing a machine learning algorithm) because it’s an area in which common sense plays a minor role.”

Even though some may think the project wouldn’t be able to tackle other mechanisms, such as those involving epigenetic factors, UCLA’s expert in causality, Judea Pearl, Ph.D. (head of the UCLA Cognitive Systems Lab), feels it is possible for machine learning to bridge this gap. As summarized from his lecture at Microsoft:

“The development of graphical models and the logic of counterfactuals have had a marked effect on the way scientists treat problems involving cause-effect relationships. Practical problems requiring causal information, which long were regarded as either metaphysical or unmanageable can now be solved using elementary mathematics. Moreover, problems that were thought to be purely statistical, are beginning to benefit from analyzing their causal roots.”

According to him, one must first:

1) articulate assumptions

2) define the research question in counterfactual terms

Then it is possible to design an inference system, using calculus, that tells the investigator what they need to measure.
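As one example of the “elementary mathematics” Pearl refers to, once the assumptions are articulated as a causal graph and a set of covariates Z satisfying the back-door criterion is identified, the interventional quantity can be computed from purely observational ones via the standard back-door adjustment:

```latex
P\bigl(Y \mid \mathrm{do}(X=x)\bigr) \;=\; \sum_{z} P\bigl(Y \mid X=x,\, Z=z\bigr)\, P(Z=z)
```

In other words, deciding what needs to be measured (here, Z along with X and Y) falls directly out of the articulated assumptions.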

To watch a video of Dr. Judea Pearl’s April 2013 lecture at Microsoft Research Machine Learning Summit 2013 (“The Mathematics of Causal Inference: with Reflections on Machine Learning”), click here.

The key for the Big Mechanism Project may be in correcting for the variables among studies, in essence building a model system which may not rely on fully controlled conditions. Dr. Peter Spirtes of Carnegie Mellon University in Pittsburgh, PA is developing the TETRAD project with two goals: 1) to specify and prove under what conditions it is possible to reliably infer causal relationships from background knowledge and statistical data not obtained under fully controlled conditions, and 2) to develop, analyze, implement, test and apply practical, provably correct computer programs for inferring causal structure under conditions where this is possible.

In summary, such projects and algorithms will provide investigators with the what, and possibly the how, of what should be measured.

So for now it seems we are still needed.

References

  1. You J: Artificial intelligence. DARPA sets out to automate research. Science 2015, 347(6221):465.
  2. Biocuration 2014: Battle of the New Curation Methods [http://blog.abigailcabunoc.com/biocuration-2014-battle-of-the-new-curation-methods]
  3. Davis AP, Johnson RJ, Lennon-Hopkins K, Sciaky D, Rosenstein MC, Wiegers TC, Mattingly CJ: Targeted journal curation as a method to improve data currency at the Comparative Toxicogenomics Database. Database : the journal of biological databases and curation 2012, 2012:bas051.
  4. Wu CH, Arighi CN, Cohen KB, Hirschman L, Krallinger M, Lu Z, Mattingly C, Valencia A, Wiegers TC, John Wilbur W: BioCreative-2012 virtual issue. Database : the journal of biological databases and curation 2012, 2012:bas049.
  5. Wiegers TC, Davis AP, Mattingly CJ: Collaborative biocuration–text-mining development task for document prioritization for curation. Database : the journal of biological databases and curation 2012, 2012:bas037.

Other posts on this site include: Artificial Intelligence, Curation Methodology, Philosophy of Science

Inevitability of Curation: Scientific Publishing moves to embrace Open Data, Libraries and Researchers are trying to keep up

A Brief Curation of Proteomics, Metabolomics, and Metabolism

The Methodology of Curation for Scientific Research Findings

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

The growing importance of content curation

Data Curation is for Big Data what Data Integration is for Small Data

Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation The Art of Scientific & Medical Curation

Exploring the Impact of Content Curation on Business Goals in 2013

Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison

conceived: NEW Definition for Co-Curation in Medical Research

Reconstructed Science Communication for Open Access Online Scientific Curation

Search Results for ‘artificial intelligence’

 The Simple Pictures Artificial Intelligence Still Can’t Recognize

Data Scientist on a Quest to Turn Computers Into Doctors

Vinod Khosla: “20% doctor included”: speculations & musings of a technology optimist or “Technology will replace 80% of what doctors do”

Where has reason gone?

Read Full Post »


PROGRAM ANNOUNCEMENT

Conference Program is available at

http://www.sachsforum.com/newyork14/


Event’s agenda available at:
http://www.sachsforum.com/newyork14/newyork14-agenda.html

Wednesday, 19th March 2014
Registration and coffee begins – 08.00
Program begins – 08.15
Networking reception will take place at 18.00 – 20.00

Once you arrive at 7 World Trade Center (250 Greenwich St, New York, NY10007, USA).
Please use the D Elevator Bank to the 40th floor where Sachs Team will welcome you at the registration desk.

For urgent issues, please contact:
Tomas@sachsforum.com (cell number +44 77 043 158 71)
Or Mina@sachsforum.com (cell number +44 74 636 695 04) Cells available from 15th March.

Announcement

LEADERS IN PHARMACEUTICAL BUSINESS INTELLIGENCE will cover the event for the Scientific Media

Dr. Lev-Ari will be in attendance on 3/19/2014 at 

The New York Academy of Sciences.

Editorials of event coverage via our 

Open Access Scientific Journal

http://pharmaceuticalintelligence.com

Date: 3/05/2014 | Views to Date: 338,958 | # of articles: 1,717 | “NIH Clicks”: 1,830 | “Nature Clicks”: 965

  • 369 Articles on Cancer
  • 74 articles on Imaging-based Cancer Patient Management

https://pharmaceuticalintelligence.com/?s=Cancer+

  • Cancer e-Book

Series C: e-Books on Cancer & Oncology

Series C Content Consultant: Larry H. Bernstein, MD, FCAP 

VOLUME ONE 

Cancer Biology and Genomics for Disease Diagnosis

2014

Stephen J. Williams, PhD, Senior Editor

sjwilliamspa@comcast.net

Tilda Barliya, PhD, Editor

tildabarliya@gmail.com

Ritu Saxena, PhD, Editor

ritu.uab@gmail.com

https://pharmaceuticalintelligence.com/biomed-e-books/series-c-e-books-on-cancer-oncology/cancer-biology-and-genomics-for-disease-diagnosis/

SIX SOURCES of INVESTMENT for BioMed INVENTIONS

Curator: Aviva Lev-Ari, PhD, RN

Investing and inventing: Is the Tango of Mars and Venus Still on

MEDIA COVERAGE

The Event will be broadcast via our distribution channels on the Internet and all Search Engines featuring WordPress.com

  • Scientific Journal

http://pharmaceuticalintelligence.com

https://pharmaceuticalintelligence.com/2014/03/05/milestone-for-our-venture-we-celebrate-our-top-authors-by-number-of-articles-in-the-journal-to-date-1000-301-58-49-46-43-40-28-20/

  • Facebook HomePage of LEADERS IN PHARMACEUTICAL BUSINESS INTELLIGENCE

http://www.facebook.com/LeadersInPharmaceuticalBusinessIntelligence

  • On Twitter.com  @pharma_BI

http://twitter.com/pharma_BI

  • 53 BioMed Groups on LinkedIn.com

http://www.linkedin.com/in/avivalevari

  • Dr. Lev-Ari’s BioMed Group, launched and managed by Dr. Lev-Ari on LinkedIn.com – LEADERS IN PHARMACEUTICAL BUSINESS INTELLIGENCE

http://www.linkedin.com/groups?gid=4346921&trk=hb_side_g

2nd ANNUAL

Sachs Cancer Bio Partnering &
Investment Forum

Promoting Public & Private Sector Collaboration & Investment

in Drug Development

19th March 2014 • New York Academy of Sciences • USA  

http://www.sachsforum.com/newyork14/

 

The 2nd Annual Sachs Cancer Bio Partnering & Investment Forum is designed to bring together thought leaders from cancer research institutes, patient advocacy groups, pharma and biotech to facilitate partnering and funding/investment. We expect around 200 delegates and there is an online meeting system and meeting facilities to make the event transactional. There will also be a track of about 30 presentations by listed and private biotechnology companies seeking licensing/investment.


The 2nd Annual Sachs Cancer Bio Partnering & Investment Forum will cover the following topics in the program:

  • Advances in Translational Research
  • Strategies for Small Molecule and Biologicals Drug Development
  • Deal Making
  • Public & Private Partnerships
  • Diagnostics
  • Immunotherapies and Cancer Vaccines
  • Case Study

Confirmed Speakers & Chairs include:
Anne Altmeyer, Executive Director, Business Development & Licensing, Novartis Pharmaceuticals
Ariel Jasie, Executive Director of Business Development, Celgene
Beth Jacobs, Managing Partner, Excellentia Global Partners
Boris Peaker, Executive Director, Biotechnology Equity Research, Oppenheimer & Co. Inc.
Carole Nuechterlein, Head, Roche Venture Fund, F. Hoffmann-La Roche AG
Dan Snyder, President and COO, MolecularMD
Daryl Mitteldorf, Executive Director, Global Prostate Cancer Alliance
Dennis Purcell, Senior Managing Partner, Aisling Capital
Doug Plessinger, Vice President of Clinical and Medical Affairs, Argos Therapeutics, Inc.
Elizabeth Bachert, Senior Director, Worldwide Business Development, Pfizer
Esteban Pombo-Villar, COO, Oxford BioTherapeutics AG
Florian Schodel, CEO, Philimmune LLC
Frederick Cope, President and CSO, Navidea Biopharmaceuticals
Guillaume Vignon, Director of Global BD Oncology, Merck Serono SA
Harren Jhoti, President, Astex Pharmaceuticals Inc.
Harry Glorikan, Managing Director, Precision for Medicine
James Mulé, Executive Vice President and Associate Center Director for Translational Research, H. Lee Moffitt Cancer Center
Keith Knutson, Program Director and Principal Investigator of the Cancer Vaccines and Immune Therapies Program, Vaccine and Gene Therapy Institute of Florida
Kevin DeGeeter, Analyst, Ladenburg Thalmann & Co, Inc.
Klaus Urbahns, Head, Discovery Technologies, Merck Serono
Kristina Khodova, Project Manager, Oncology, Skolkovo Foundation
Lorenza Castellon, Business Development Consultant, Suda Ltd.
Louis DeGennaro, Executive VP, CMO, The Leukemia and Lymphoma Society
Louise Perkins, Chief Science Officer, Melanoma Research Alliance
Mara Goldstein, Managing Director, Senior Healthcare Analyst, Cantor Fitzgerald
Michael Goldberg, Managing Partner, Montaur Capital
Nathan Tinker, Executive Director, NewYorkBIO
Nicholas Dracopoli, Vice President and Head of Oncology, Janssen Research & Development
Peter Hoang, Managing Director, Office of Innovations, Technology Based Ventures, The University of Texas MD Anderson Cancer Center
Philip Gotwals, Executive Director, Oncology Research Collaborations, Novartis Institutes for BioMedical Research
Robert Petit, CSO, Advaxis Inc.
Stephen Brozak, Managing Partner and President, WBB Securities, LLC
Steven Tregay, CEO, Forma Therapeutics
Steven W. Young, President, Addario Lung Cancer Medical Institute
Stuart Barich, Managing Director, Healthcare Investment Banking, Oppenheimer & Company
Tariq Kassum, MD, Vice President, Business Development and Strategy, Millennium Pharmaceuticals
TBC, Cardinal Health
TBC, UCSD
Timothy Herpin, Vice President, Head of Transactions (UK), Business Development, AstraZeneca
Vikas Sharma, Director, Business Development, Rexahn Pharmaceuticals, Inc.
Walter Capone, President, The Multiple Myeloma Research Foundation

View the full list of 2013 Forum Speakers & Chairs >>


Presenting Opportunities for Biotech, Pharmaceutical Companies and Patient Advocacy Groups

Presenting at the forum offers excellent opportunities to showcase activities and highlight investment and partnership opportunities. Biotech companies will be able to communicate investment and licensing opportunities. These are for both public and private companies. The audience is comprised of financial and industry investors. These are streamed 15 minute presentations. The patient advocacy presentations are 30 minutes.

Sachs forums are recognised as the leading international stage for those interested in investing in the biotech and life science industry and are highly transactional. They draw together an exciting cross-section of early-stage/pre-IPO, late-stage and public companies with leading investors, analysts, money managers and pharmas. The Boston forum provides the additional interaction with the academic/scientific and patient advocacy communities.

Sponsorship and Exhibition

Sachs Associates has developed an extensive knowledge of the key individuals operating within the European and global biotech industry. This together with a growing reputation for excellence puts Sachs Associates at the forefront of the industry and provides a powerful tool by which to increase the position of your company in this market.

Raise your company’s profile directly with your potential clients. All of our sponsorship packages are tailor made to each client, allowing your organisation to gain the most out of attending our industry driven events.

To learn more about presenting, exhibition or sponsorship opportunities, please contact
Mina Orda + 44 (0)203 463 4890 or by email: Mina Orda.

 

Companies Who Presented at the 2013 Forum Included:
Aileron Therapeutics, Inc.
AnaptysBio, Inc
Argos Therpeutics, Inc
Atossa Genetics
BioCancell Ltd.
BioLineRx Ltd.
Cellectis
CENTROSE
Churchill Pharmaceuticals
Constellation Pharmaceuticals
CureVac GmbH
Dicerna Pharmaceuticals
Etubics Corporation
Genisphere
immatics biotechnologies GmbH
ImmunoGen, Inc
Life Science Nation
MacroGenics, Inc
Melanovus Oncology
MiNA Therapeutics
MolecularMD
Oncolix, Inc.
OncoSec Medical Incorporated
Oxford BioTherapeutics
RAMOT at Tel Aviv University
Rescue Therapeutics, Inc.
Sialix, Inc.
Sorrento Therapeutics
to-BBB technologies BV
TVAX Biomedical, Inc.
The 2nd Annual Sachs Cancer Bio Partnering & Investment Forum is designed to bring together thought leaders from cancer research institutes, patient advocacy groups, pharma and biotech to facilitate partnering and funding/investment. We expect around 200 delegates and there is an online meeting system and meeting facilities to make the event transactional. There will also be a track of about 30 presentations by listed and private biotechnology companies seeking licensing/investment.dividerThe 2nd Annual Sachs Cancer Bio Partnering & Investment Forum will cover the following topics in the program:

  • Advances in Translational Research
  • Strategies for Small Molecule and Biologicals Drug Development
  • Deal Making
  • Public & Private Partnerships

Confirmed Speakers & Chairs include:

The 2nd Annual Sachs Cancer Bio Partnering & Investment Forum will cover the following topics in the program:

  • Advances in Translational Research
  • Strategies for Small Molecule and Biologicals Drug Development
  • Deal Making
  • Public & Private Partnerships
  • Diagnostics
  • Immunotherapies and Cancer Vaccines

Confirmed Speakers & Chairs include:
Anne Altmeyer, Executive Director, Business Development & Licensing, Novartis Pharmaceuticals
Ariel Jasie, Executive Director of Business Development, Celgene
Beth Jacobs, Managing Partner, Excellentia Global Partners
Boris Peaker, Executive Director, Biotechnology Equity Research, Oppenheimer & Co. Inc.
Carole Nuechterlein, Head, Roche Venture Fund, F. Hoffmann-La Roche AG
Daryl Mitteldorf, Executive Director, Global Prostate Cancer Alliance
Dennis Purcell, Senior Managing Partner, Aisling Capital
Doug Plessinger, Vice President of Clinical and Medical Affairs, Argos Therapeutics, Inc.
Elizabeth Bachert, Senior Director, Worldwide Business Development, Pfizer
Esteban Pombo-Villar, COO, Oxford BioTherapeutics AG
Florian Schodel, CEO, Philimmune LLC
Guillaume Vignon, Director of Global BD Oncology, Merck Serono SA
Harren Jhoti, President, Astex Pharmaceuticals Inc.
Harry Glorikian, Managing Director, Precision for Medicine
James Mulé, Executive Vice President and Associate Center Director for Translational Research, H. Lee Moffitt Cancer Center
Keith Knutson, Program Director and Principal Investigator of the Cancer Vaccines and Immune Therapies Program, Vaccine and Gene Therapy Institute of Florida
Klaus Urbahns, Head, Discovery Technologies, Merck Serono
Kristina Khodova, Project Manager, Oncology, Skolkovo Foundation
Lorenza Castellon, Business Development Consultant, Suda Ltd.
Louis DeGennaro, Executive VP, CMO, The Leukemia and Lymphoma Society
Louise Perkins, Chief Science Officer, Melanoma Research Alliance
Mara Goldstein, Managing Director, Senior Healthcare Analyst, Cantor Fitzgerald
Nathan Tinker, Executive Director, NewYorkBIO
Nicholas Dracopoli, Vice President and Head of Oncology, Janssen Research & Development
Peter Hoang, Managing Director, Office of Innovations, Technology Based Ventures, The University of Texas MD Anderson Cancer Center
Philip Gotwals, Executive Director, Oncology Research Collaborations, Novartis Institutes for BioMedical Research
Robert Petit, CSO, Advaxis Inc.
Steven Tregay, CEO, Forma Therapeutics
Steven W. Young, President, Addario Lung Cancer Medical Institute
Stuart Barich, Managing Director, Healthcare Investment Banking, Oppenheimer & Company
Tariq Kassum, MD, Vice President, Business Development and Strategy, Millennium Pharmaceuticals
Timothy Herpin, Vice President, Head of Transactions (UK), Business Development, AstraZeneca
Walter Capone, President, The Multiple Myeloma Research Foundation

_______

View the full list of 2013 Forum Speakers & Chairs >>

Presenting Opportunities for Biotech and Pharmaceutical Companies and Patient Advocacy Groups

Presenting at the forum offers public and private biotech companies an excellent opportunity to showcase their activities and highlight investment, licensing and partnership opportunities to an audience of financial and industry investors. Company presentations are streamed 15-minute slots; patient advocacy presentations are 30 minutes.

Sachs forums are recognised as the leading international stage for those interested in investing in the biotech and life science industry and are highly transactional. They draw together an exciting cross-section of early-stage/pre-IPO, late-stage and public companies with leading investors, analysts, money managers and pharmas. The Boston forum provides the additional interaction with the academic/scientific and patient advocacy communities.

Sponsorship and Exhibition

Sachs Associates has developed extensive knowledge of the key individuals operating within the European and global biotech industry. This, together with a growing reputation for excellence, puts Sachs Associates at the forefront of the industry and provides a powerful tool with which to strengthen your company’s position in this market.

Raise your company’s profile directly with your potential clients. All of our sponsorship packages are tailor-made for each client, allowing your organisation to gain the most from attending our industry-driven events.

To learn more about presenting, exhibition or sponsorship opportunities, please contact Mina Orda on +44 (0)203 463 4890 or by email.

SOURCE

http://www.sachsforum.com/newyork14/index.html

From: Mina@sachsforum.com
To: AvivaLev-Ari@alum.berkeley.edu
Sent: Mon Dec 16 12:01:21 UTC 2013

From: Tomas Andrulionis <Tomas@sachsforum.com>
Date: Tue, 10 Dec 2013 16:13:53 +0000
To: “avivalev-ari@alum.berkeley.edu” <avivalev-ari@alum.berkeley.edu>
Conversation: Complimentary Invitation for the 2nd Annual Sachs Cancer Bio Partnering & Investment Forum, 19th March 2014, New York Academy of Sciences

Read Full Post »


Reporter: Aviva Lev-Ari, PhD, RN

 

 

READ THE HISTORY OF PeerJ

https://pharmaceuticalintelligence.com/open-access-scientific-journal/peerj-model-for-open-access-online-scientific-journal/

 

From: PeerJ <newsletter@peerj.com>
Date: Wed, 12 Jun 2013 18:10:54 +0000
To: AvivaLev-Ari <Avivalev-ari@alum.Berkeley.edu>
Subject: PeerJ turned One today – help us celebrate by entering our competition

PeerJ
Hi Aviva,
We are very pleased to announce <http://blog.peerj.com/post//celebrating-the-one-year-anniversary-of-peerj>  that this is the one year anniversary of PeerJ – it was on June 12th, 2012 that we first announced ourselves and started the process towards becoming a fully-fledged publishing company. Today, just 12 months later, PeerJ is completely up and running; we are publishing high quality peer-reviewed science; and we are doing our very best to change the world by pushing the boundaries of Open Access!

To briefly overview what has been achieved in the last year – we announced ourselves on June 12th 2012 and opened the PeerJ doors for submissions on December 3rd. We published our first PeerJ articles on Feb 12th 2013, and followed up by launching PeerJ PrePrints on April 3rd 2013. This last year has been spent recruiting an Editorial Board of 800 world renowned researchers; building cutting edge submission, peer-review, publication and pre-print software from scratch; establishing ourselves with all the major organizations who archive, index, list and certify new publications; and building an entirely new type <http://blog.peerj.com/post/46261563342/6-reasons-to-publish-with-peerj>  of publishing company from the ground up.

Some of the highlights have included:

* the fantastic reception <http://svpow.com/2012/08/30/peerj-sorted/> to our membership model <https://peerj.com/pricing/>, which means that authors now have a way to publish for their lifetime for a single low price payment (starting at just $99);

* the fact that we have been processing submissions with extreme speed and effectiveness <http://blog.peerj.com/post/45340534713/peerj-is-fast> ;

* the fact that our Open Peer Review process has been so well received <http://blog.peerj.com/post/43139131280/the-reception-to-peerjs-open-peer-review>  with over 40% of reviewers now providing their name and almost 80% of authors making their reviews public;

* being named by the Chronicle of Higher Education as one of the “Top 10 Tech Innovators in the Education Sector <http://chronicle.com/article/The-Idea-Makers-Tech/138823/> ” and by Nature as “a significant innovation <http://www.nature.com/news/journal-offers-flat-fee-for-all-you-can-publish-1.10811> ”;

* and of course the fact that the journal has turned out to be so innovative <http://blog.peerj.com/post/42920094844/peerj-functionality> , beautiful and aesthetically pleasing <http://theseamonster.net/2013/05/peerj-awesomeness/> .

The giveaway

We are celebrating this milestone with a new PeerJ Competition. On June 19th, we will give away 12 “complimentary publication” passes (the ability to publish one paper with us at no cost to you or any of your co-authors) + a PeerJ Charlie T-Shirt + a pin + a fridge magnet (!) to a random selection of 12 people (one for each month of our first year) who publicly post some variation of the following message:

“PeerJ just turned one! Open access publishing, for just $99 for life – check them out and submit now!”

Please include a link to us as well (you choose the best one!).

The last year has been an intense journey, and to be honest we have been so busy we almost missed the anniversary! We would like to take this opportunity to thank the many thousands of researchers who have signed up as PeerJ Members; all those who have authored or reviewed articles; all those who have joined our Editorial Board; and anyone who has simply expressed their support – without the involvement and enthusiasm of these people we would not be where we are today. Of course, we must also thank our dedicated staff (Alf Eaton, Patrick McAndrew and Jackie Thai) and Tim O’Reilly, who collectively took a chance on a brand new publishing concept, but who have been irreplaceable in making us what we are today!

Please encourage your colleagues to look into PeerJ, and make sure they consider submitting their next article to us. The future of academic publishing is here, right now <http://blog.peerj.com/post/46261563342/6-reasons-to-publish-with-peerj> .

The PeerJ Founders
and the PeerJ Team

Read Full Post »

Older Posts »