
Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson

Life-cycle of Science 2.0

Curators and Writer: Stephen J. Williams, Ph.D. with input from Curators Larry H. Bernstein, MD, FCAP, Dr. Justin D. Pearlman, MD, PhD, FACC and Dr. Aviva Lev-Ari, PhD, RN

(This discussion is part of a three-part series that includes:

Using Scientific Content Curation as a Method for Validation and Biocuration

Using Scientific Content Curation as a Method for Open Innovation)

 

Every month I get my Wired magazine (yes, in hard print; I still like to turn pages manually, and I don’t mind getting grease or wing sauce on my magazine rather than on my e-reader), and I always love reading articles written by Clive Thompson. He has a certain flair for understanding the techno world we live in and the human/technology interaction, writing about the interesting ways in which we almost inadvertently integrate new technologies into our day-to-day living, generating new entrepreneurship and new value.

The October 2013 Wired article by Clive Thompson, entitled “How Successful Networks Nurture Good Ideas: Thinking Out Loud”, describes how the voluminous writings, postings, tweets, and sharing on social media are fostering connections between people and ideas which previously had not existed. The article was drawn from Thompson’s book Smarter Than You Think: How Technology Is Changing Our Minds for the Better. Tom Peters also commented on the article in his blog (see here).

Clive gives a wonderful example of Ory Okolloh, a young Kenyan-born law student who, after becoming frustrated with the lack of coverage of problems back home, started a blog about Kenyan politics. Her blog not only drew interest from movie producers who were documenting female bloggers but also gained the interest of fellow Kenyans who, during the upheaval after the 2007 Kenyan elections, helped Ory develop a Google map for reporting violence (http://www.ushahidi.com/), which eventually became a global organization using open-source technology for crisis management. There are a multitude of examples of how networks, and the conversations within these circles, are fostering new ideas. As Clive states in the article:

 

Our ideas are PRODUCTS OF OUR ENVIRONMENT.

They are influenced by the conversations around us.

However, the article got me thinking about how Science 2.0 and the internet are changing how scientists contribute, share, and make connections to produce new and transformative ideas.

But HOW MUCH Knowledge is OUT THERE?

 

Clive’s article listed some amazing facts about the mountains of posts, tweets, words, etc. out on the internet EVERY DAY, all of which exemplify the problem:

  • 154.6 billion EMAILS per DAY
  • 400 million TWEETS per DAY
  • 1 million BLOG POSTS (including this one) per DAY
  • 2 million COMMENTS on WordPress per DAY
  • 16 million WORDS on Facebook per DAY
  • TOTAL 52 TRILLION WORDS per DAY

He estimates this is equivalent to 520 million books per DAY (assuming an average book of 100,000 words).
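Thompson’s back-of-the-envelope conversion is easy to verify (a quick sketch, taking his 52-trillion-word figure as given):

```python
# Back-of-the-envelope check of Thompson's estimate:
# 52 trillion words per day at ~100,000 words per book.
words_per_day = 52_000_000_000_000
words_per_book = 100_000

books_per_day = words_per_day // words_per_book
print(f"{books_per_day:,} books per day")  # 520,000,000 books per day
```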

A LOT of INFO. But as he suggests, it is not the volume but how we create and share this information that is critical. As the science fiction writer Theodore Sturgeon noted, “Ninety percent of everything is crap” (a.k.a. Sturgeon’s Law).

 

Internet Live Stats shows how congested the internet is each day (http://www.internetlivestats.com/). Needless to say, Clive’s numbers are a bit off. As of the writing of this article:

 

  • 2.9 billion internet users
  • 981 million websites (only 25,000 hacked today)
  • 128 billion emails
  • 385 million Tweets
  • > 2.7 million BLOG posts today (including this one)

 

The Good, The Bad, and the Ugly of the Scientific Internet (The Wild West?)

 

So how many science blogs are out there? Well, back in 2008 “grrlscientist” asked this question and turned up a total of 19,881 blogs; however, most were “pseudoscience” blogs, not written by Ph.D.- or M.D.-level scientists. A deeper search on Technorati using the search term “scientist PhD” turned up about 2,000 written by trained scientists.

So granted, there is a lot of good, bad, and ugly when it comes to scientific information on the internet!

I had recently re-posted, on this site, a great example of how bad science and medicine can get propagated throughout the internet:

http://pharmaceuticalintelligence.com/2014/06/17/the-gonzalez-protocol-worse-than-useless-for-pancreatic-cancer/

 

and in a Nature report: Stem cells: Taking a stand against pseudoscience

http://www.nature.com/news/stem-cells-taking-a-stand-against-pseudoscience-1.15408

Drs. Elena Cattaneo and Gilberto Corbellini document their long, hard fight against false and unvalidated medical claims made by some “clinicians” about the utility and medical benefits of certain stem-cell therapies, sacrificing their time to debunk medical pseudoscience.

 

Using Curation and Science 2.0 to build Trusted, Expert Networks of Scientists and Clinicians

 

Establishing networks of trusted colleagues has been a cornerstone of scientific discourse for centuries. For example, in the mid-1640s, the Royal Society began as:

 

“a meeting of natural philosophers to discuss promoting knowledge of the natural world through observation and experiment”, i.e. science. “The Society met weekly to witness experiments and discuss what we would now call scientific topics. The first Curator of Experiments was Robert Hooke.”

 

from The History of the Royal Society

 

Royal Society Coat of Arms

The Royal Society of London for Improving Natural Knowledge.

(photo credit: Royal Society)

(Although one wonders why they met incognito)

Indeed, as discussed in “Science 2.0/Brainstorming” by the originators of OpenWetWare, an open-source science-notebook platform designed to foster open innovation, new search and aggregation tools are making it easier to find, contribute, and share information with interested individuals. This paradigm is the basis for the shift from Science 1.0 to Science 2.0. Science 2.0 attempts to remedy current drawbacks that hinder rapid and open scientific collaboration and discourse, including:

  • Slow time frame of current publishing methods: reviews can take years to fashion, leading to outdated material
  • One-dimensional information dissemination: peer review, highly polished work, conferences
  • Current publishing does not encourage open feedback and review
  • Articles edited for print do not take advantage of new web-based features, including tagging, search-engine optimization, interactive multimedia, and hyperlinks
  • Published data and methodology are often incomplete
  • Published data are not available in formats readily accessible across platforms: gene lists are now mandated to be supplied as files, but other data do not have to be supplied in file format

In short, the sheer volume of rapidly generated scientific content, combined with publishing practices that discourage open feedback and reuse, creates both the problem and the opportunity: curation can filter, contextualize, and redistribute findings far faster than traditional channels.

 

Curation in the Sciences: View from Scientific Content Curators Larry H. Bernstein, MD, FCAP, Dr. Justin D. Pearlman, MD, PhD, FACC and Dr. Aviva Lev-Ari, PhD, RN

Curation is an active filtering of the web’s immense amount of relevant and irrelevant content, including the peer-reviewed literature found by such means. As a result, content may be disruptive. However, in doing good curation, one does more than simply assign value by presenting creative work in some category. Great curators comment and share experience across content, authors, and themes. Great curators may see patterns others don’t, or may challenge or debate complex and apparently conflicting points of view. Answers to specifically focused questions come from the hard work of many in laboratory settings creatively establishing answers to definitive questions, each a part of the larger knowledge base of reference. There are those rare “Einsteins” who imagine a whole universe, unlike the three blind men of the Sufi tale: one held the tail, another the trunk, another the ear, and they all said, this is an elephant!
In my reading, I learn that the optimal ratio of curation to creation may be as high as 90% curation to 10% creation. Creating content is expensive; curation, by comparison, is much less so.

– Larry H. Bernstein, MD, FCAP

Curation is Uniquely Distinguished by the Historical Exploratory Ties that Bind – Larry H. Bernstein, MD, FCAP

The explosion of information across numerous media, hardcopy and electronic, written and video, has created difficulties in tracking topics and tying together relevant but separated discoveries, ideas, and potential applications. Some methods to help assimilate diverse sources of knowledge include a content expert preparing a textbook summary, a panel of experts leading a discussion or think tank, and conventions moderating presentations by researchers. Each of those methods has value and an audience, but they also have limitations, particularly with respect to timeliness and pushing the edge. In the electronic data age, there is a need for further innovation, to make synthesis, stimulating associations, synergy, and contrasts available to audiences in a more timely and less formal manner. Hence the birth of curation. Key components of curation include expert identification of data, ideas, and innovations of interest; expert interpretation of the original research results; integration with context; and digesting, highlighting, correlating, and presenting in a novel light.

– Justin D. Pearlman, MD, PhD, FACC, from The Voice of Content Consultant on The Methodology of Curation in Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation The Art of Scientific & Medical Curation

 

In Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison, Drs. Larry Bernstein and Aviva Lev-Ari liken the medical and scientific curation process to the curation of musical works into a thematic program:

 

Work of Original Music Curation and Performance:

 

Music Review and Critique as a Curation

Work of Original Expression: what is the methodology of curation in the context of medical research findings, the exposition of synthesis, and the interpretation of the significance of the results to clinical care?

… leading to new, curated, and collaborative works by networks of experts to generate (in this case) ebooks on the most significant trends and interpretations of scientific knowledge as they relate to medical practice.

 

In Summary: How Scientific Content Curation Can Help

 

Given the aforementioned problems of:

I. the complex and rapid deluge of scientific information

II. the need for a collaborative, open environment to produce transformative innovation

III. the need for alternative ways to disseminate scientific findings

CURATION MAY OFFER SOLUTIONS

I. Curation exists beyond the review: curation decreases the time for assessment of current trends, adding multiple insights and analyses WITH an underlying METHODOLOGY (discussed below) while NOT acting as mere reiteration or regurgitation

II. Curation provides insights from the WHOLE scientific community on multiple WEB 2.0 platforms

III. Curation makes use of new computational and Web-based tools to provide interoperability of data and reporting of findings (shown in Examples below)

 

Therefore, a discussion follows on methodologies, definitions of best practices, and tools developed to assist the content curation community in this endeavor.

Methodology in Scientific Content Curation as Envisioned by Aviva Lev-Ari, PhD, RN

 

At Leaders in Pharmaceutical Business Intelligence, site owner and chief editor Aviva Lev-Ari, PhD, RN has been developing a strategy “for the facilitation of Global access to Biomedical knowledge rather than the access to sheer search results on Scientific subject matters in the Life Sciences and Medicine”. According to Aviva, “for the methodology to attain this complex goal it is to be dealing with popularization of ORIGINAL Scientific Research via Content Curation of Scientific Research Results by Experts, Authors, Writers using the critical thinking process of expert interpretation of the original research results.” The following post:

Cardiovascular Original Research: Cases in Methodology Design for Content Curation and Co-Curation

 

http://pharmaceuticalintelligence.com/2013/07/29/cardiovascular-original-research-cases-in-methodology-design-for-content-curation-and-co-curation/

demonstrates two examples of how content co-curation attempts to achieve this aim: developing networks of scientist and clinician curators to aid in the active discussion of scientific and medical findings, and using scientific content curation as a means of critique offering a “new architecture for knowledge”. Indeed, popular search engines such as Google and Yahoo, and even scientific search engines such as NCBI’s PubMed and the OVID search engine, rely on keywords and Boolean algorithms, which has created a need for more context-driven scientific search and discourse.
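Keyword-and-Boolean retrieval of the kind these engines rely on can be sketched in a few lines. This is a toy illustration with made-up documents, not any engine’s actual implementation:

```python
# Toy Boolean keyword retrieval: index documents by the words they
# contain, then answer an AND query by intersecting posting sets.
docs = {
    1: "stem cell therapy for cardiac repair",
    2: "content curation of cardiac research",
    3: "stem cell pseudoscience debunked",
}

# Inverted index: word -> set of document ids containing it.
index = {}
for doc_id, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

def boolean_and(*terms):
    """Return ids of documents containing ALL query terms."""
    sets = [index.get(t, set()) for t in terms]
    return set.intersection(*sets) if sets else set()

print(sorted(boolean_and("stem", "cell")))        # [1, 3]
print(sorted(boolean_and("cardiac", "curation"))) # [2]
```

Note that a Boolean match knows nothing about context or meaning: the query terms either co-occur in a document or they don’t, which is precisely the limitation that context-driven curation addresses.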

In Science and Curation: the New Practice of Web 2.0, Célya Gruson-Daniel (@HackYourPhd) states:

To address this need, human intermediaries, empowered by the participatory wave of web 2.0, naturally started narrowing down the information and providing an angle of analysis and some context. They are bloggers, regular Internet users or community managers – a new type of profession dedicated to the web 2.0. A new use of the web has emerged, through which the information, once produced, is collectively spread and filtered by Internet users who create hierarchies of information.

… where Célya considers curation an essential practice for managing open science and this new style of research.

As mentioned above, in her article Dr. Lev-Ari presents two examples of how content curation expanded thought and discussion, and eventually yielded new ideas.

  1. Curator edifies content through an analytic process = NEW form of writing and organization leading to new interconnections of ideas = NEW INSIGHTS

i) Evidence: curation methodology leading to new insights for biomarkers

  2. Same as #1 but with multiple players (experts), each bringing unique insights, perspectives, and skills, yielding new research = NEW LINE of CRITICAL THINKING

ii) Evidence: co-curation methodology among cardiovascular experts leading to the cardiovascular series ebooks

Life-cycle of Science 2.0

The Life Cycle of Science 2.0. Due to Web 2.0, new paradigms of scientific collaboration are rapidly emerging. Originally, scientific discoveries were made by individual laboratories or “scientific silos”, where the main methods of communication were peer-reviewed publication, meeting presentations, and ultimately news outlets and multimedia. In the digital era, data were organized for literature search and biocurated databases. In the era of social media and Web 2.0, a group of scientifically and medically trained “curators” organizes the piles of digitally generated data and fits them into an organizational structure which can be shared, communicated, and analyzed in a holistic approach, launching new ideas due to changes in the organizational structure of data and in data analytics.

 

The result, in this case, is a collaborative written work above the scope of the review. Currently, review articles are written by experts in the field and summarize the state of a research area. However, using collaborative, trusted networks of experts, the result is a real-time synopsis and analysis of the field with the goal in mind to

INCREASE THE SCIENTIFIC CURRENCY.

For a detailed description of the methodology please see Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation The Art of Scientific & Medical Curation

 

In her paper, Curating e-Science Data, Maureen Pennock of The British Library emphasized the importance of using a diligent, validated, reproducible, and cost-effective methodology for curation by e-science communities over the “Grid”:

“The digital data deluge will have profound repercussions for the infrastructure of research and beyond. Data from a wide variety of new and existing sources will need to be annotated with metadata, then archived and curated so that both the data and the programmes used to transform the data can be reproduced for use in the future. The data represent a new foundation for new research, science, knowledge and discovery”

— JISC Senior Management Briefing Paper, The Data Deluge (2004)

 

As she states, proper data and content curation is important for:

  • Post-analysis
  • Data and research result reuse for new research
  • Validation
  • Preservation of data in newer formats to prolong life-cycle of research results

However, she laments the lack of:

  • Funding for such efforts
  • Training
  • Organizational support
  • Monitoring
  • Established procedures

 

Tatiana Aders wrote a nice article based on an interview with Microsoft’s Robert Scoble, in which he emphasized the need for curation in a world where “Twitter is the replacement of the Associated Press Wire Machine” and new technology platforms are knocking out old platforms at a rapid pace. He also notes that curation is a social art form whose primary concerns are understanding an audience and a niche.

Indeed, part of the reason the need for curation is unmet, writes Mark Carrigan, is academics’ lack of appreciation of the utility of tools such as Pinterest, Storify, and Pearltrees for effectively communicating and building collaborative networks.

And teacher Nancy White, in her article “Understanding Content Curation” on her blog Innovations in Education, shows how curation serves as an educational tool for students and teachers, demonstrating that students need to CONTEXTUALIZE what they collect to add enhanced value, using higher mental processes such as:

  • Knowledge
  • Comprehension
  • Application
  • Analysis
  • Synthesis
  • Evaluation

A GREAT table about the differences between Collecting and Curating, by Nancy White, at http://d20innovation.d20blogs.org/2012/07/07/understanding-content-curation/

University of Massachusetts Medical School has aggregated some useful curation tools at http://esciencelibrary.umassmed.edu/data_curation

Although many of these tools relate to biocuration and database building, the common idea is curating data with indexing, analyses, and contextual value to provide an audience the means to generate NETWORKS OF NEW IDEAS.

See here for a curation on how networks foster knowledge, by Erika Harrison on ScoopIt

(http://www.scoop.it/t/mobilizing-knowledge-through-complex-networks)

 

“Nowadays, any organization should employ network scientists/analysts who are able to map and analyze complex systems that are of importance to the organization (e.g. the organization itself, its activities, a country’s economic activities, transportation networks, research networks).”

– Andrea Carafa, insight from World Economic Forum New Champions 2012, “Power of Networks”

 

Creating Content Curation Communities: Breaking Down the Silos!

 

An article by Dr. Dana Rotman, “Facilitating Scientific Collaborations Through Content Curation Communities”, highlights how scientific information resources, traditionally created and maintained by paid professionals, are being crowdsourced to what she terms “content curation communities”: groups of professional and nonprofessional volunteers who create, curate, and maintain the various scientific database tools we use, such as Encyclopedia of Life, ChemSpider (for the Slideshare see here), biowikipedia, etc. Although very useful and openly available, these projects create their own challenges, such as:

  • information integration (various types of data and formats)
  • social integration (marginalized by scientific communities, no funding, no recognition)

The authors set forth some ways to overcome these challenges, including:

  1. standardization of practices
  2. visualization to document contributions
  3. emphasizing the role of information professionals in content curation communities
  4. maintaining quality control to increase respectability
  5. recognizing participation in professional communities
  6. proposing funding and national meetings, e.g. the Data Intensive Collaboration in Science and Engineering Workshop

A few great presentations and papers from the 2012 DICOSE meeting are found below

Judith M. Brown, Robert Biddle, Stevenson Gossage, Jeff Wilson & Steven Greenspan. Collaboratively Analyzing Large Data Sets using Multitouch Surfaces. (PDF) NotesForBrown

 

Bill Howe, Cecilia Aragon, David Beck, Jeffrey P. Gardner, Ed Lazowska, Tanya McEwen. Supporting Data-Intensive Collaboration via Campus eScience Centers. (PDF) NotesForHowe

 

Kerk F. Kee & Larry D. Browning. Challenges of Scientist-Developers and Adopters of Existing Cyberinfrastructure Tools for Data-Intensive Collaboration, Computational Simulation, and Interdisciplinary Projects in Early e-Science in the U.S.. (PDF) NotesForKee

 

Ben Li. The mirages of big data. (PDF) NotesForLiReflectionsByBen

 

Betsy Rolland & Charlotte P. Lee. Post-Doctoral Researchers’ Use of Preexisting Data in Cancer Epidemiology Research. (PDF) NoteForRolland

 

Dana Rotman, Jennifer Preece, Derek Hansen & Kezia Procita. Facilitating scientific collaboration through content curation communities. (PDF) NotesForRotman

 

Nicholas M. Weber & Karen S. Baker. System Slack in Cyberinfrastructure Development: Mind the Gaps. (PDF) NotesForWeber

Indeed, the movement from Science 1.0 to Science 2.0 originated because these “silos” had frustrated many scientists, resulting in changes not only in publishing (Open Access) but also in the communication of protocols (online protocol sites and notebooks like OpenWetWare and BioProtocols Online) and in data and material registries (CGAP and tumor banks). Some examples are given below.

Open Science Case Studies in Curation

1. Open Science Project from Digital Curation Center

This project looked at what motivates researchers to work in an open manner with regard to their data, results and protocols, and whether advantages are delivered by working in this way.

The case studies consider the benefits and barriers to using ‘open science’ methods, and were carried out between November 2009 and April 2010 and published in the report Open to All? Case studies of openness in research. The Appendices to the main report (pdf) include a literature review, a framework for characterizing openness, a list of examples, and the interview schedule and topics. Some of the case study participants kindly agreed to us publishing the transcripts. This zip archive contains transcripts of interviews with researchers in astronomy, bioinformatics, chemistry, and language technology.

 

see: Pennock, M. (2006). “Curating e-Science Data”. DCC Briefing Papers: Introduction to Curation. Edinburgh: Digital Curation Centre. Handle: 1842/3330. Available online: http://www.dcc.ac.uk/resources/briefing-papers/introduction-curation

 

2. cBIO – cBio’s biological data curation group developed and operates using a methodology called CIMS, the Curation Information Management System. CIMS is a comprehensive curation and quality-control process that efficiently extracts information from publications.

 

3. NIH Topic Maps – This website provides a database and web-based interface for searching and discovering the types of research awarded by the NIH. The database uses automated, computer generated categories from a statistical analysis known as topic modeling.

 

4. SciKnowMine (USC)- We propose to create a framework to support biocuration called SciKnowMine (after ‘Scientific Knowledge Mine’), cyberinfrastructure that supports biocuration through the automated mining of text, images, and other amenable media at the scale of the entire literature.

 

5. OpenWetWare – OpenWetWare is an effort to promote the sharing of information, know-how, and wisdom among researchers and groups who are working in biology and biological engineering. If you would like edit access, would be interested in helping out, or want your lab website hosted on OpenWetWare, please join us. OpenWetWare is managed by the BioBricks Foundation. They also have a wiki about Science 2.0.

6. LabTrove – a lightweight, web-based laboratory “blog” as a route towards a marked-up record of work in a bioscience research laboratory. The authors of a PLOS One article from the University of Southampton report the development of an open scientific lab notebook using a blogging strategy to share information.

7. OpenScience Project – The OpenScience Project is dedicated to writing and releasing free and open-source scientific software. We are a group of scientists, mathematicians and engineers who want to encourage a collaborative environment in which science can be pursued by anyone who is inspired to discover something new about the natural world.

8. Open Science Grid is a multi-disciplinary partnership to federate local, regional, community and national cyberinfrastructures to meet the needs of research and academic communities at all scales.

 

9. Some ongoing biomedical knowledge (curation) projects at ISI

IICurate
This project is concerned with developing a curation and documentation system for information integration in collaboration with the II Group at ISI as part of the BIRN.

BioScholar
Its primary purpose is to provide software for experimental biomedical scientists that permits a single scientific worker (at the level of a graduate student or postdoctoral worker) to design, construct, and manage a shared knowledge repository for a research group, derived from a local store of PDF files. This project was funded by NIGMS from 2008-2012 (RO1-GM083871).

10. Tools useful for scientific content curation

 

Research Analytic and Curation Tools from University of Queensland

 

Thomson Reuters information curation services for pharma industry

 

Microblogs as a way to communicate information about HPV infection among clinicians and patients; use of Chinese microblog SinaWeibo as a communication tool

 

VIVO for scientific communities – In order to connect information about research activities across institutions and make it available to others, taking into account smaller players in the research landscape and addressing their need for specific information (for example, by providing non-conventional research objects), the open-source software VIVO, which provides research information as linked open data (LOD), is used in many countries. So-called VIVO harvesters collect research information that is freely available on the web and convert the collected data in conformity with LOD standards. The VIVO ontology builds on prevalent LOD namespaces and, depending on the needs of the specialist community concerned, can be expanded.

 

 

11. Examples of scientific curation in different areas of Science/Pharma/Biotech/Education

 

From Science 2.0 to Pharma 3.0 Q&A with Hervé Basset

http://digimind.com/blog/experts/pharma-3-0/

Hervé Basset, specialist librarian in the pharmaceutical industry and owner of the blog “Science Intelligence”, talks about the inspiration behind his recent book, entitled “From Science 2.0 to Pharma 3.0”, published by Chandos Publishing and available on Amazon, and about how health care companies need a social media strategy to communicate with and convince the health-care consumer, not just the practitioner.

 

Thomson Reuters and NuMedii Launch Ground-Breaking Initiative to Identify Drugs for Repurposing. Companies leverage content, Big Data analytics and expertise to improve success of drug discovery

 

Content Curation as a Context for Teaching and Learning in Science

 

#OZeLIVE Feb2014

http://www.youtube.com/watch?v=Ty-ugUA4az0

Creative Commons license

 

DigCCurr: A graduate-level program initiated by the University of North Carolina to instruct future digital curators in science and other subjects

 

Syracuse University offering a program in eScience and digital curation

 

Curation Tips from TED talks and tech experts

Steven Rosenbaum from Curation Nation

http://www.youtube.com/watch?v=HpncJd1v1k4

 

Pawan Deshpande from Curata on how content curation communities evolve and what makes good content curation:

http://www.youtube.com/watch?v=QENhIU9YZyA

 

How the Internet of Things is Promoting the Curation Effort

Update by Stephen J. Williams, PhD 3/01/19

Until now, curation efforts like wikis (Wikipedia, Wikimedicine, WormBase, GenBank, etc.) have been supported by a largely voluntary army of citizens, scientists, and data enthusiasts. I am sure all have seen the requests for donations to help keep Wikipedia and its related projects up and running. One of the obscure sister projects of Wikipedia, Wikidata, wants to curate and represent all information in such a way that machines, computers, and humans alike can converse in it. An army of about 4 million volunteers creates Wiki entries and maintains these databases.

Enter the Age of the Personal Digital Assistants (Hellooo Alexa!)

In a March 2019 WIRED article, “Encyclopedia Automata: Where Alexa Gets Its Information”, senior WIRED writer Tom Simonite reports on the need for new types of data structures, and on how curated databases are so important for the new fields of AI and for enabling personal digital assistants like Alexa or Google Assistant to decipher the meaning of a user’s request.

As Mr. Simonite noted, many of our libraries of knowledge are encoded in an “ancient technology largely opaque to machines: prose.” Search engines like Google do not have a problem with a question asked in prose, as they just have to find relevant links to pages. Yet this is a problem for Google Assistant, for instance, as machines can’t quickly extract meaning from the internet’s mess of “predicates, complements, sentences, and paragraphs. It requires a guide.”

Enter Wikidata.  According to founder Denny Vrandecic,

Language depends on knowing a lot of common sense, which computers don’t have access to

A Wikidata entry (of which there are about 60 million) codes every concept and item with a numeric identifier, the QID. These codes are integrated with tags (like the handles and hashtags you use on Twitter, or the tags in WordPress used for search-engine optimization) so computers can recognize patterns across these codes.
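The structure Simonite describes can be pictured as a tiny in-memory miniature. The sketch below is illustrative only: the labels and claims mimic real Wikidata records (actual entries live at wikidata.org under their QIDs and PIDs), and the `describe` helper is hypothetical:

```python
# Hypothetical miniature of Wikidata's structure: every item has a QID,
# every property a PID, and each fact is an (item, property, value) triple.
items = {
    "Q42": "Douglas Adams",
    "Q5": "human",
    "Q36180": "writer",
}
properties = {
    "P31": "instance of",
    "P106": "occupation",
}
statements = [
    ("Q42", "P31", "Q5"),       # Douglas Adams -> instance of -> human
    ("Q42", "P106", "Q36180"),  # Douglas Adams -> occupation -> writer
]

def describe(qid):
    """Render an item's statements as human-readable sentences."""
    return [
        f"{items[s]} {properties[p]} {items[o]}"
        for s, p, o in statements
        if s == qid
    ]

for line in describe("Q42"):
    print(line)
# Douglas Adams instance of human
# Douglas Adams occupation writer
```

Because every fact is a triple of opaque identifiers rather than prose, a machine can traverse and match these statements without parsing a single sentence.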

Human entry into these databases is critical, as we add new facts and, in particular, meaning to each of these items. Otherwise, machines have problems deciphering our meaning, as with Apple’s Siri, whose users complained of dumb algorithms misinterpreting requests.

The knowledge of future machines could be shaped by you and me, not just tech companies and PhDs.

But this effort needs money

Wikimedia’s executive director, Katherine Maher, has prodded and cajoled these megacorporations for tapping the free resources of the Wikis. In response, Amazon and Facebook have donated millions to the Wikimedia projects. Google recently gave $3.1 million in donations.

 

Future postings on the relevance and application of scientific curation will include:

Using Scientific Content Curation as a Method for Validation and Biocuration

 

Using Scientific Content Curation as a Method for Open Innovation

 

Other posts on this site related to Content Curation and Methodology include:

The growing importance of content curation

Data Curation is for Big Data what Data Integration is for Small Data

6 Steps to More Effective Content Curation

Stem Cells and Cardiac Repair: Content Curation & Scientific Reporting

Cancer Research: Curations and Reporting

Cardiovascular Diseases and Pharmacological Therapy: Curations

Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation The Art of Scientific & Medical Curation

Exploring the Impact of Content Curation on Business Goals in 2013

Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison

conceived: NEW Definition for Co-Curation in Medical Research

The Young Surgeon and The Retired Pathologist: On Science, Medicine and HealthCare Policy – The Best Writers Among the WRITERS

Reconstructed Science Communication for Open Access Online Scientific Curation

 

 

Read Full Post »

PLENARY KEYNOTE PRESENTATIONS: THURSDAY, MAY 1 | 8:00 – 10:00 AM @ BioIT World, April 29 – May 1, 2014 Seaport World Trade Center, Boston, MA

 

Reporter: Aviva Lev-Ari, PhD, RN

 

Keynote Introduction: Sponsored by Fred Lee, M.D., MPH, Director, Healthcare Strategy and Business Development, Oracle Health Sciences

Heather Dewey-Hagborg

Artist, Ph.D. Student, Rensselaer Polytechnic Institute

Heather Dewey-Hagborg is an interdisciplinary artist, programmer and educator who explores art as research and public inquiry. She recreates identity from strands of human hair in an entirely different way. Collecting hairs she finds in random public places – bathrooms, libraries, and subway seats – she uses a battery of newly developing technologies to create physical, life-sized portraits of the owners of these hairs. Her fixation with a single hair leads her to controversial art projects and the study of genetics. Traversing media ranging from algorithms to DNA, her work seeks to question fundamental assumptions underpinning perceptions of human nature, technology and the environment. Examining culture through the lens of information, Heather creates situations and objects embodying concepts, probes for reflection and discussion. Her work has been featured in print, television, radio, and online. Heather has a BA in Information Arts from Bennington College and a Masters degree from the Interactive Telecommunications Program at Tisch School of the Arts, New York University. She is currently a Ph.D. student in Electronic Arts at Rensselaer Polytechnic Institute.

 

Yaniv Erlich, Ph.D.

Principal Investigator and Whitehead Fellow, Whitehead Institute for Biomedical Research

 

Dr. Yaniv Erlich is the Andria and Paul Heafy Family Fellow and a Principal Investigator at the Whitehead Institute for Biomedical Research. He received a bachelor’s degree from Tel-Aviv University, Israel and a PhD from the Watson School of Biological Sciences at Cold Spring Harbor Laboratory in 2010. Dr. Erlich’s research focuses on computational human genetics. Dr. Erlich is the recipient of the Burroughs Wellcome Career Award (2013), the Harold M. Weintraub Award (2010), and the IEEE/ACM-CS HPC Award (2008), and he was selected for Genome Technology’s 2010 “Tomorrow’s PIs” team.

 

Isaac Samuel Kohane, M.D., Ph.D.

Henderson Professor of Health Sciences and Technology, Children’s Hospital and Harvard Medical School;

Director, Countway Library of Medicine; Director, i2b2 National Center for Biomedical Computing;

Co-Director, HMS Center for Biomedical Informatics

 

Isaac Kohane, MD, PhD, co-directs the Center for Biomedical Informatics at Harvard Medical School. He applies computational techniques, whole-genome analysis, and functional genomics to study human diseases through the developmental lens, and particularly through the use of animal model systems. Kohane has led the use of whole healthcare systems, notably in the i2b2 project, as “living laboratories” to drive discovery research in disease genomics (with a focus on autism) and pharmacovigilance (including providing evidence for the cardiovascular risk of hypoglycemic agents, which ultimately contributed to a “black box” warning by the FDA) and comparative effectiveness, with software and methods adopted in over 84 academic health centers internationally. Dr. Kohane has published over 200 papers in the medical literature and authored a widely used book on microarrays for integrative genomics. He has been elected to multiple honor societies including the American Society for Clinical Investigation, the American College of Medical Informatics, and the Institute of Medicine. He leads a doctoral program in genomics and bioinformatics within the Division of Medical Science at Harvard University. He is also an occasionally practicing pediatric endocrinologist.

 

#SachsBioinvestchat, #bioinvestchat

#Sachs14thBEF

Read Full Post »

Track 5 Next-Gen Sequencing Informatics: Advances in Analysis and Interpretation of NGS Data @ BioIT World, April 29 – May 1, 2014 Seaport World Trade Center, Boston, MA

Reporter: Aviva Lev-Ari, PhD, RN

 

NGS Bioinformatics Marketplace: Emerging Trends and Predictions

10:50 Chairperson’s Remarks

Narges Baniasadi, Ph.D., Founder & CEO, Bina Technologies, Inc.

11:00 Global Next-Generation Sequencing Informatics Markets: Inflated Expectations in an Emerging Market

Greg Caressi, Senior Vice President, Healthcare and Life Sciences, Frost & Sullivan

This presentation evaluates the global next-generation sequencing (NGS) informatics markets from 2012 to 2018. Learn key market drivers and restraints, key highlights for many of the leading NGS informatics services providers and vendors, revenue forecasts, and the important trends and predictions that affect market growth.

Organizational Approaches to NGS Informatics

11:30 High-Performance Databases to Manage and Analyze NGS Data

Joseph Szustakowski, Ph.D., Head, Bioinformatics, Biomarker Development, Novartis Institutes for Biomedical Research

The size, scale, and complexity of NGS data sets call for new data management and analysis strategies. High-performance database systems combine the advantages of both established and cutting-edge technologies. We are using high-performance database systems to manage and analyze NGS, clinical, pathway, and phenotypic data with great success. We will describe our approach and concrete success stories that demonstrate its efficiency and effectiveness.

12:00 pm Taming Big Science Data Growth with Converged Infrastructure

Aaron D. Gardner, Senior Scientific Consultant, BioTeam, Inc.

Many of the largest NGS sites have identified I/O bottlenecks as their number one concern in growing their infrastructure to support current and projected data growth rates. In this talk, Aaron D. Gardner, Senior Scientific Consultant, BioTeam, Inc., will share real-world strategies and implementation details for building converged storage infrastructure to support the performance, scalability and collaboration requirements of today’s NGS workflows.

12:15 Next Generation Sequencing:  Workflow Overview from a High-Performance Computing Point of View

Carlos P. Sosa, Ph.D., Applications Engineer, HPC Lead, Cray, Inc.

Next-generation sequencing (NGS) allows for the analysis of genetic material with unprecedented speed and efficiency. NGS increasingly shifts the burden from chemistry done in a laboratory to a string-manipulation problem, well suited to high-performance computing. We explore the impact of the NGS workflow on the design of IT infrastructures. We also present Cray’s most recent solutions for the NGS workflow.

Sosa in REAL TIME

Bioinformatics and BIG DATA – NGS @ Cray in 2014

I/O – moving and storing data – a UNIFIED solution by Cray

  • Data access
  • Fast access
  • Storage
  • Managing high-performance computing: the NGS workflow; multiple human genomes – 61, then 240 sequentially – completed with high performance in 51 hours; 140 genomes run simultaneously

Architecture @ Cray for Genomics

  • Sequencers
  • Galaxy
  • Servers for analysis
  • Workstation: Illumina, Galaxy – Cray does the integration of third-party software using a workflow, LEVERAGING the network (the fastest in the world), using MPI for scaling and I/O
  • Compute blades, with nodes reserved for I/O – the fastest interconnect in the industry
  • Scaling of capacity and capability; interconnect linked to the file system: Lustre
  • Optimization of the bottleneck: capability, capacity, file structure for super-fast I/O

12:40 Luncheon Presentation I

Erasing the Data Analysis Bottleneck with BaseSpace

Jordan Stockton, Ph.D., Marketing Director, Enterprise Informatics, Illumina, Inc.

Since the inception of next-generation sequencing, great attention has been paid to challenges such as storage, alignment, and variant calling. We believe that this narrow focus has distracted many biologists from higher-level scientific goals, and that simplifying this process will expedite discovery in the field of applied genomics. In this talk we will show that applications in BaseSpace can empower a new class of researcher to go from sample to answer quickly, and can allow software developers to make their tools accessible to a vast and receptive audience.

1:10 Luncheon Presentation II: Sponsored by

The Empowered Genome Community: First Insights from Shareable Joint Interpretation of Personal Genomes for Research

Nathan Pearson, Ph.D., Principal Genome Scientist, QIAGEN

Genome sequencing is becoming prevalent; however, understanding each genome requires comparing many genomes. We launched the Empowered Genome Community, consisting of people from programs such as the Personal Genome Project (PGP) and Illumina’s Understand Your Genome. Using Ingenuity Variant Analysis, members have identified proof-of-principle insights on a common complex disease (here, myopia), derived by open collaborative analysis of PGP genomes.

Pearson in REAL TIME

One Genome vs. population of Genomes

IF one genome:

  1. ancestry
  2. family health
  3. less about drug and mirrors
  4. health is complex

Challenges

1. mining the genome

2. what all genomes will do for humanity, not what my genome can do for me

3. cohort analysis, rich in variants

4. Ingenuity Variant Analysis – a secure environment

5. comparison of genomes: a sequence, reference matching

6. phylogenomic, statistical analysis as population geneticists do

Open, collaborative myopia analysis: rare GENES leading to myopia – 111 genomes

– first-pass findings highlight 12 plausibly myopia-relevant genes: variants in cases vs. controls

– refined findings and analysis: statistical association, common variants

Read Full Post »

Track 4 Bioinformatics: Utilizing Massive Quantities of –omic Information across Research Initiatives @ BioIT World, April 29 – May 1, 2014 Seaport World Trade Center, Boston, MA

Reporter: Aviva Lev-Ari, PhD, RN

 

Bioinformatics for Big Data

10:50 Chairperson’s Remarks

Les Mara, Founder, Databiology, Ltd.

 

11:00 Data Management Best Practices for Genomics Service Providers

Vas Vasiliadis, Director, Products, Computation Institute, University of Chicago and Argonne National Laboratory

Genomics research teams in academia and industry are increasingly limited at all stages of their work by large and unwieldy datasets, poor integration between the computing facilities they use for analysis, and difficulty in sharing analysis results with their customers and collaborators. We will discuss issues with current approaches and describe emerging best practices for managing genomics data through its lifecycle.

Vas in REAL TIME

The Computation Institute @ the University of Chicago provides solutions to nonprofit entities, at scale and affordably. “I have nothing to say on Big Data.” An NAS survey puts the average share of time researchers actually spend on research at 57.7%, and it will get worse. Research data management has morphed into better, industrially robust approaches; commercial start-ups are the role model. All functions of an enterprise are now available as applications for small business.

  • Highly scalable, invisible
  • High performance
  • In genomics, tools – shipping hard drives; new ways to develop research infrastructure:
  • Dropbox does not scale; Amazon Web Services is the cloud
  • Security in sharing across campuses: InCommon – cross-domain software access constraints are mitigated.
  • Identity provision for multiple identities – an identity hub; the association is done one time; Group Hubs, i.e., CI Connect – UChicago, access to systems at other campuses – connecting science to cycles of data; the network is not utilized efficiently – tools were not designed for that; FTP and firewalls are designed for data, not Big Data.
  • Science DMZ – carve out real estate for science data transfer, monitoring the transfer
  • Reproducibility, provenance, public mandates
  • Data publication services: VIVO, figshare, Fedora, DuraCloud, DOIs – identification, storage, preservation, curation workflow
  • Search for discovery: faceted search; browse distributed, access locally – automation required; outsourcing; delivery through SaaS
  • We are all on the cloud

11:30 NGS Analysis to Drug Discovery: Impact of High-Performance Computing in Life Sciences

Bhanu Rekepalli, Ph.D., Assistant Professor and Research Scientist, Joint Institute for Computational Sciences, The University of Tennessee, Oak Ridge National Laboratory

We are working with small-cluster-based applications most widely used by the scientific community on the world’s premier supercomputers. We incorporated these parallel applications into science gateways with user-friendly, web-based portals. Learn how the research at UTK-ORNL will help to bridge the gap between the rate of big data generation in life sciences and the speed and ease at which biologists and pharmacists can study this data.

Bhanu in REAL TIME

Cost per genome goes down: from $100,000 in 2011 to $1,000

  • Solutions:
  • architecture
  • parallel informatics
  • software modules
  • web-based gateway
  • XSEDE.org, sponsored by NSF for all NSF-sponsored research
  • LCF applications: astrophysics, bioinformatics, CFD; highly scalable wrappers for analysis; BLAST scaling results in biology
  • Next-generation supercomputers: Xeon Phi

NICS informatics science gateway – PoPLAR: Portal for Parallel Scaling Life Sciences Applications & Research

  • automated workflows
  • Smithsonian Institution – generating genomes for all life entities in the universe: BGI
  • Titan genomic data analysis – Everglades ecosystem, sequenced
  • Univ. of S. Carolina – great computing infrastructure
  • Supercomputer: Kraken
  • modeling 5-10 proteins on supercomputers for novel drug discovery
  • vascular tree system for heart transplant – visualization and modeling

12:00 pm The Future of Biobank Informatics

Bruce Pharr, Vice President, Product Marketing, Laboratory Systems, Remedy Informatics

As biobanks become increasingly essential to basic, translational, and clinical research for genetic studies and personalized medicine, biobank informatics must address areas from biospecimen tracking, privacy protection, and quality management to pre-analytical and clinical collection/identification of study data elements. This presentation will examine specific requirements for third-generation biobanks and how biobank informatics will meet those requirements.

Bruce Pharr in REAL TIME

Flexible standardization

Biobank use of informatics in the 1980s – biospecimens. 1999 RAND research: 307M biospecimens in US biobanks, growing at 20M per year.

2nd-gen biobanks

2005 – 3rd-gen biobanks – 15,000 studies on cancer biospecimens; consent of donors is a must.

Biobank – patient, procedure, specimen acquisition, storage, processing, distribution, analysis

Building registries – the Mosaic Platform

  • Specimen Track BMS
  • Mosaic Ontology: application and engine

1. Standardize specimen requirements

Registries set up the storage: administrator dashboard vs. user dashboard

2. Interoperability

3. Quality analysis

4. Informed consent

 

12:15 Learn How YarcData’s Graph Analytics Appliance Makes It Easy to Use Big Data in Life Sciences

Ted Slater, Senior Solutions Architect, Life Sciences, YarcData, a division of Cray

YarcData, a division of Cray, offers high performance solutions for big data graph analytics at scale, finally giving researchers the power to leverage all the data they need to stratify patients, discover new drug targets, accelerate NGS analysis, predict biomarkers, and better understand diseases and their treatments.

12:40 Luncheon Presentation I

The Role of Portals for Managing Biostatistics Projects at a CRO

Les Jordan, Director, Life Sciences IT Consulting, Quintiles

This session will focus on how portals and other tools are used within Quintiles and at other pharmas to manage projects within the biostatistics department.

1:10 Luncheon Presentation II (Sponsorship Opportunity Available) or Lunch on Your Own

1:50 Chairperson’s Remarks

Michael Liebman, Ph.D., Managing Director, IPQ Analytics, LLC

Sabrina Molinaro, Ph.D., Head of Epidemiology, Institute of Clinical Physiology, National Research Council (CNR), Italy

1:55 Integration of Multi-Omic Data Using Linked Data Technologies

Aleksandar Milosavljevic, Ph.D., Professor, Human Genetics; Co-Director, Program in Structural & Computational Biology and Molecular Biophysics; Co-Director, Computational and Integrative Biomedical Research Center, Baylor College of Medicine

By virtue of programmatic interoperability (uniform REST APIs), Genboree servers enable virtual integration of multi-omic data that is distributed across multiple physical locations. Linked Data technologies of the Semantic Web provide an additional “logical” layer of integration by enabling distributed queries across the distributed data and by bringing multi-omic data into the context of pathways and other background knowledge required for data interpretation.

2:25 Building Open Source Semantic Web-Based Biomedical Content Repositories to Facilitate and Speed Up Discovery and Research

Bhanu Bahl, Ph.D., Director, Clinical and Translational Science Centre, Harvard Medical School

Douglas MacFadden, CIO, Harvard Catalyst at Harvard Medical School

Eagle-i open source network at Harvard provides a state-of-the-art informatics

Read Full Post »

AWARDS: Best of Show Awards, Best Practices Awards and 2014 Benjamin Franklin Award  @ BioIT World, April 29 – May 1, 2014 Seaport World Trade Center, Boston, MA

Reporter: Aviva Lev-Ari, PhD, RN

 

Best of Show Awards

The Best of Show Awards offer exhibitors an opportunity to distinguish their products from the competition. Judged by a team of leading industry experts and Bio-IT World editors, this award identifies exceptional innovation in technologies used by life science professionals today. Judging and the announcement of winners is conducted live in the Exhibit Hall. Winners will be announced on Wednesday, April 30 at 5:30pm. The deadline for product submissions is February 21, 2014. To learn more about this program, contact Ryan Kirrane at 781-972-1354 or email rkirrane@healthtech.com.

2014 WINNER(s) are announced in Real Time

2014 – Five categories

1. Clinical and Health IT: AstraZeneca with Tessella – Real Time Analytics for Clinical Trials (RTACT) – an engine for innovation

2. Research and Drug Discovery: U-BIOPRED with the tranSMART Foundation – open source – Imperial College – biomarkers for asthma; hospitals, 340 universities, 34 pharmas

3. Informatics: Pistoia Alliance – HELM – Pfizer released data for the HELM project

4. Knowledge Management Finalists: Genentech – Genentech Cell Line Resource

5. IT Infrastructure/HPC Winner: Baylor College of Medicine with DNAnexus

 

2014 Judges’ Prize – UK, for Patient Data Integration

2014 Editors’ Choice Award: Mount Sinai – Rethinking Type 2 Diabetes through Data Informatics

2014 Benjamin Franklin Award

The Benjamin Franklin Award for Open Access in the Life Sciences is a humanitarian/bioethics award presented annually by the Bioinformatics Organization to an individual who has, in his or her practice, promoted free and open access to the materials and methods used in the life sciences. Nominations are now being accepted!

The winner will be announced in the Amphitheater at 9:00am on Wednesday, April 30 during the Plenary Keynote and Awards Program, WEDNESDAY, APRIL 30 | 8:00 – 9:45 AM.

Full details including previous laureates and entry forms are available at www.bioinformatics.org/franklin.

2014 WINNER is:

Helen Berman, Ph.D.

Board of Governors Professor of Chemistry and Chemical Biology, Rutgers University;

Founding Member, Worldwide Protein Data Bank (wwPDB); Director, Research Collaboratory for Structural Bioinformatics PDB (RCSB PDB)

Helen: ACCEPTANCE AWARD SPEECH

Proteins: Synthesis, enzymes, Health & Disease

PDB depositors: 850 new entries/month; 468 million downloads & views; PDB access

History of sharing the protein data bank

J.D. Bernal – 1934: crystallized pepsin with Dorothy Hodgkin at Oxford; many distinguished women

1960s – early protein structures: myoglobin, hemoglobin

1970s

1980s

1990s

2000s – ribosomes

2010s: macromolecular machines

  • Science of protein structure
  • Technology: electron microscopy; structural genomics – data-driven science; hybrid methods at present for 3D structure identification

COMMUNITY ATTITUDE – 1971: PDB archive established; at Cold Spring Harbor, Walter Hamilton and a petition to have an open database of proteins, hosted at Brookhaven Labs and shared with the UK; Nature New Biology: seven structures to the DB

1982 – AIDS epidemic – NIH requested data to be open; the community set its own rules on data organization; Fred Richards of Yale requested, on moral grounds, that the DB be open.

1993 – mandatory to share data linked to publication; no journal will accept an article if the data are not in the PDB.

1996 – dictionary put together

2008 – experimental data mandatory to be deposited in the PDB; validation

2011 – PDBx: definitions for X-ray, NMR, and 3DEM; small-angle scattering

Collaboration to enable: self-storage, structure-based drug design

The SCIENCE in the archive is what is IMPORTANT; the IT evolved, with changes to the data

Global organization and collaboration

Communities to work together

J.D. Bernal – The Social Function of Science, 1939

Elinor Ostrom, 2009 Nobel Prize in Economics – community collaboration by rules

Best Practices Awards

Add value to your Conference & Expo attendance, sponsorship or exhibit package, and further heighten your visibility with the creative positioning offered as a Best Practices participant. Winners will be selected by a peer review expert panel in early 2014.

Bio-IT World will present the Awards in the Amphitheater at 9:30am on Wednesday, April 30 during the Plenary Keynote and Awards Program, WEDNESDAY, APRIL 30 | 8:00 – 9:45 AM

Early bird deadline (no fee) for entry is December 16, 2013 and final deadline (fee) for entry is February 10, 2014. Full details including previous winners and entry forms are available at Bio-ITWorldExpo.com.

2014 WINNER(s) are:

 

Read Full Post »

Track 6 Systems Pharmacology: Pathways to Patient Response @ BioIT World, April 29 – May 1, 2014 Seaport World Trade Center, Boston, MA

Reporter: Aviva Lev-Ari, PhD, RN

April 30, 2014

Modeling: Novel Tools

10:50 Chairperson’s Remarks

Avi Ma’ayan, Ph.D., Associate Professor, Pharmacology and Systems Therapeutics, Icahn School of Medicine at Mount Sinai

11:00 The Human Avatar: Quantitative Systems Pharmacology to Support Physician Decision Making in Neurology and Psychiatry

Hugo Geerts, Ph.D., MBA, BA, CSO, In Silico Biosciences; Adjunct Associate Professor, Perelman School of Medicine, University of Pennsylvania

CNS Quantitative Systems Pharmacology uses computer-based mechanistic modeling integrating brain-network neurophysiology, functional imaging genetics, pharmacology of drug-receptor interactions and parameterization with clinical data. A patient model (“human avatar”) can be developed accounting for polypharmacy and a life history of traumatic events to help identify optimal treatments.

 

11:30 VisANT: An Integrative Network Platform to Connect Genes, Drugs, Diseases and Therapies

Zhenjun Hu, Ph.D., Research Associate Professor, Center for Advanced Genomic Technology, Bioinformatics Program, Boston University

With the rapid accumulation of our knowledge of diseases, disease-related genes and drug targets, network-based analysis plays an increasingly important role in systems biology, systems pharmacology and translational science. The new release of VisANT aims to provide new functions to facilitate convenient network analysis of diseases, therapies, genes and drugs.

12:00 pm Selected Oral Poster Presentation: Individualized PK/PD Biosimulations for Precision Drug Dosing: Diabetes Mellitus

Clyde Phelix, Ph.D., Associate Professor, Biology, University of Texas San Antonio

Individualized biosimulations offer many advantages to precision medicine. Using one’s transcriptome to determine parameters of kinetic models of metabolism reanimates that individual for in silico testing. The Transcriptome-To-Metabolome™ Model is multiorgan and multicompartmental, including over 30 primary and secondary metabolic pathways and transport processes. Thus pharmacokinetics/pharmacodynamics studies can be performed in silico before treating each patient.

12:40 Luncheon Presentations (Sponsorship Opportunities Available) or Lunch on Your Own

Modeling: Cancer

1:50 Chairperson’s Remarks

Hugo Geerts, Ph.D., MBA, BA, CSO, In Silico Biosciences; Adjunct Associate Professor, Perelman School of Medicine, University of Pennsylvania

In REAL TIME

»»1:55 FEATURED PRESENTATION

Identifying Drug Targets from Drug-Induced Changes in Genome-Wide mRNA Expression

Avi Ma’ayan, Ph.D., Associate Professor, Pharmacology and Systems Therapeutics, Icahn School of Medicine at Mount Sinai

We collected and organized publicly available genome-wide gene expression data where hundreds of drugs were used to treat mammalian cells and changes in expression were compared to a control. We then developed computational methods that try to find the drug targets from the expression changes. We show that different steps in the analysis can contribute to approaching the right answer.

In REAL TIME

Systems biology and drugs are related by phenotypes; drugs cause disease phenotypes in patients, and side effects

Networks

Gene-set libraries stored in Gene Matrix Transpose (GMT) files; KEGG example
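For readers unfamiliar with it, the GMT (Gene Matrix Transpose) format mentioned in these notes is a simple tab-separated layout: each line holds a gene-set name, a description, and then the member genes. A minimal parser might look like this (the sample sets below are made up for illustration):

```python
def parse_gmt(text: str) -> dict:
    """Parse GMT text: each tab-separated line is set-name, description,
    then member genes. Returns {set_name: set_of_genes}."""
    libraries = {}
    for line in text.strip().splitlines():
        fields = line.rstrip("\n").split("\t")
        name, _description, genes = fields[0], fields[1], fields[2:]
        libraries[name] = {g for g in genes if g}  # drop trailing blanks
    return libraries

# Toy library in GMT layout (names/genes invented for the example)
sample = ("KEGG_APOPTOSIS\tkegg pathway\tTP53\tCASP3\tBAX\n"
          "KEGG_CELL_CYCLE\tkegg pathway\tCDK1\tTP53\n")
sets = parse_gmt(sample)
print(sorted(sets["KEGG_APOPTOSIS"]))                    # -> ['BAX', 'CASP3', 'TP53']
print(sets["KEGG_APOPTOSIS"] & sets["KEGG_CELL_CYCLE"])  # shared genes -> {'TP53'}
```

Because every set is just a named collection of gene symbols, overlap between sets (as in enrichment analysis) reduces to set intersection.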

Drug-set libraries

Drug-drug similarity data; SIDER 2 Side Effect Resource; FDA adverse-event report data

Connectivity Map: Broad Institute, L1000; cell-line microarrays, different drug doses – DRUG effects on GENES

  • develop new compounds
  • measure toxicity

LINCS L1000 data overview; drug-drug similarity by structure; conversion for side-effect vectors

LINCS Canvas Browser

Cell Line/Drug Browser

New method for clustering patients by outcomes; survival analysis

http://www.maayanlab.net/LINCS/LCB/

Drugs interact with targets – drugs vs. transcription factors, overexpression

Overexpression of transcription factors vs. knockout for validation
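The target-finding idea running through these notes (compare a drug-induced expression signature against gene-perturbation signatures and look for matches) can be reduced to a toy similarity computation. The vectors and gene names below are invented for illustration; the real LINCS/Connectivity Map pipelines are far more elaborate:

```python
import math

def cosine(a, b):
    """Cosine similarity between two expression-change vectors over aligned genes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy signatures: per-gene log fold-changes for [TP53, CASP3, CDK1, BAX]
drug_signature = [1.2, 0.8, -0.5, 0.9]
knockdown_signatures = {
    "GENE_A": [1.1, 0.7, -0.4, 1.0],    # similar signature -> plausible target
    "GENE_B": [-1.0, -0.6, 0.5, -0.8],  # opposite signature -> anti-correlated
}

for gene, sig in knockdown_signatures.items():
    print(gene, round(cosine(drug_signature, sig), 3))
```

A knockdown whose signature closely mimics the drug's is a candidate target; an anti-correlated one suggests the perturbation acts in the opposite direction.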

2:25 Infrastructure for Comparison of Systematically Generated Cancer Networks vs. Literature Models

Dexter Pratt, Project Director, NDEx, Cytoscape Consortium

Cancer-subtype genetic networks can be generated by systematic analysis of patient somatic mutation data. Comparison to existing models of cancer mechanisms is an important step in investigating these data-derived models. Recent work on Network Based Stratification (NBS) at the Ideker Lab will be described, along with tools for network comparison under development in the NDEx project.

In REAL TIME

Network-based classification, unsupervised methods

Ovarian cancer – sparse mutations; no two patients share the same mutation; clustering by expression profile – mutations can be the cause; gene-gene interaction networks smooth the knowledge

Reference networks; a common entity-identification system is used, started at UCSD; overlap of curated PATHWAYS; query neighborhoods in the reference network

Using mapping tables to map identifiers for entity correspondence

Complex reference networks: N:1 and 1:N

Transcriptional control motifs – extract motifs, map data to motifs; concordance and other metrics can be computed from referenced data

Boundaries of pathways – reaction chains: differentially expressed genes -> enzymes -> reactions (differentially regulated) -> small molecules

CONCLUSIONS

Clinical relevance; hypotheses about motifs and interactions.

MAY 1, 2014

Modeling: Drug/Dose Response

1:55 Chairperson’s Remarks

Birgit Schoeberl, Ph.D., Vice President, Research, Merrimack Pharmaceuticals

»»2:00 FEATURED PRESENTATION

Systems Approaches to Risk Assessment

Lawrence J. Lesko, Ph.D., FCP, Clinical Professor and Director, Center for Pharmacometrics and Systems Pharmacology, University of Florida

“Idiosyncratic” adverse drug events (ADEs) are a substantial societal burden in terms of morbidity, mortality and healthcare costs. Predicting who will suffer ADEs from which medications is extremely difficult with current observational or surveillance approaches. A new mechanistic approach to drug safety science is sorely needed. Systems approaches may address this unmet medical need.

2:30 Pharmacodynamic Characterization of Compounds in Drug Discovery

Rui-Ru Ji, Ph.D., Principal Scientist, Genomics, Bristol-Myers Squibb

The transcriptome reacts in a dose-dependent manner to compound treatment. We will present methodology and discuss multiple applications of dose-response profiling of the whole transcriptome.

Read Full Post »

PLENARY KEYNOTE PRESENTATIONS: TUESDAY, APRIL 29 | 4:00 – 5:00 PM @ BioIT World, April 29 – May 1, 2014 Seaport World Trade Center, Boston, MA

 

Reporter: Aviva Lev-Ari, PhD, RN

 

PLENARY KEYNOTE PRESENTATIONS:

TUESDAY, APRIL 29 | 4:00 – 5:00 PM

Keynote Introduction: Sponsored by Dave Wilson, Senior Director, Business Development Manager, Global Channels, Hitachi Data Systems

John Quackenbush, Ph.D.

CEO, GenoSpace; Professor, Dana-Farber

Cancer Institute and Harvard School of Public Health

John Quackenbush received his Ph.D. in 1990 in theoretical physics from UCLA working on string theory models. Following two years as a postdoctoral fellow in physics, Dr. Quackenbush applied for and received a Special Emphasis Research Career Award from the National Center for Human Genome Research to work on the Human Genome Project. He spent two years at the Salk Institute and two years at Stanford University working at the interface of genomics and computational biology. In 1997 he joined the faculty of The Institute for Genomic Research (TIGR) where his focus began to shift to understanding what was encoded within the human genome. Since joining the faculties of the Dana-Farber Cancer Institute and the Harvard School of Public Health in 2005, his work has focused on decoding and modeling the networks of interacting genes that drive disease. In 2011 he and partner Mick Correll launched GenoSpace to facilitate genomic data analysis and interpretation, focused on accelerating research and delivering relevant and actionable solutions for personalized medicine.

IN REAL TIME FROM THE AMPHITHEATER of World BioIT2014

Twitter

#BioIT14

2,900 attendees, 140 exhibitors, 250 speakers; Best of Show Award, Best Practices Awards, Franklin Award; memorial to Pat McGovern, ex-CEO and Chairman of IDG and launcher of Bio-IT – his gift of $350 million founded the McGovern Institute for Brain Research @ MIT [Broad’s gift to MIT was $650 million]

Hitachi Data Perspective

Cloud and Analytics

John Quackenbush on Precision Medicine

The desire to use an information ecosystem for medicine

The DRIVER is DATA – access to data; data that drives innovations in biomedicine

IT

  • Cloud computing: data, information and STORAGE of data; data access, integration
  • iPhone – applications for needs

Bio – anniversary of DNA discovery structure in 1953

Genome Sequence – Transforming Medicine: Big Data: Volume, Velocity, Variety

Genomic Medicine – data for interpretation of Symptoms: diet, exercise

Cost of data generation drops while the clinical relevance of the data grows – sequencing is now $1,000, payable with a credit card

Cost of the analysis – $100,000 – research to interpret the genes translationally and identify biomarkers to better achieve efficacy in segments of the population.

Diagnosis – Clinical Medicine

Reimbursement – a few dollars to identify VARIANTS relevant to treating disease

Cloud – secure the infrastructure – the same data examined by different parties to answer different questions.

GenoSpace for Research – N= many patients

GenoSpace for Clinical Care – N=1

GenoSpace for Patient Community – N=many individual patients

Patient CONSENT

  • Secure storage data
  • analytics and visualization
  • diverse data
  • share data securely

data in transit must be secure; consumption of data

R&D Context

1000 Patients

50 clinical sites

large complex data

MMRF’s CoMMpass Study @Dana-Farber – Multiple Myeloma Research Foundation

PORTAL design – to enable data analysis of patient cohorts: attribute analyzer, tools to find properties of a cohort, compare across cohorts

Data analysis made easy – Precision Medicine based on Prediction

Population level data

end stage treatment

clinical trial

Translational Research – Pharma targets patients 

MMRF – gateway to the community, an interface for patients to provide information during the course of treatment; PATIENTS share – 1,000 patients signed up to share data

  • Patient Reported outcomes
  • data integration
  • clinical trial recruitment
  • biomarker discovery

HOW to deliver data to the POINT of CARE: in cancer, more of the data is clinical (Pathology/Lab)

BioPoetry: the story of what the data analysis MEANS

CURATION OF DATA – GenoSpace – for Clinical Labs

  • Pathology Group: Sequencing
  • Application development for REPORTS: FullView – metadata, GenoSpace
  • Look at the assay for standard of Care
  • PDF format to scan and place in the EMR; suggestive language
  • MD’s Portal, giving access to Patients to add data

 

Thomson Reuters – Annotate

An OS for Precision Medicine

Genomics and integration with Clinical data

how to create system for all parties involved. Use of data for multiple needs that overlap

Information management – patient at the center

Precision Medicine is the FUTURE – Digital Architects for Precision Medicine

 

 

Read Full Post »

LPBI Repository of HashTags for Scientific Conferences

Reporter: Aviva Lev-Ari, PhD, RN

BioIT World, April 29 – May 1, 2014, Seaport World Trade Center, Boston, MA

http://www.bio-itworldexpo.com/uploadedFiles/Bio-IT_World_Expo/Agenda/14/2014-BIT-Brochure.pdf

http://pharmaceuticalintelligence.com/2014/04/09/bioit-world-april-29-may-1-2014-seaport-world-trade-center-boston-ma/

Searches by Dr. Stephen J. Williams

Hashtags that get more than 3000 views and @sites that have at least 3000 followers

#pharmaIT
#BIOIT
#technews
#curation
#pharmanews
#mobilehealth
#mhealth
#science
#science2_0

2014 Bio-IT World Twitter feed @bioitworld

#healthcare
#BIOIT14
#Boston   — NOTE that #city hashtags have a lot of appeal now

Twitter Feeds

@Biotech News
@MhealthForAll
@mHealthAlliance
@HCtrends
@mobilehealth360
@science2_0
Brian Dolan @mobilehealth — has 4,000 followers


14th ANNUAL BIOTECH IN EUROPE FORUM For Global Partnering & Investment

30th September – 1st October 2014 • Congress Center Basel

SACHS Associates, London

http://www.sachsforum.com/zurich14/index.html

http://pharmaceuticalintelligence.com/2014/03/25/14th-annual-biotech-in-europe-forum-for-global-partnering-investment-930-1012014-•-congress-center-basel-sachs-associates-london/

 

NOT LISTED YET


We need to establish hashtags for our business, i.e.,

#CancerImmunoTherapy@pharma_BI
#TranslationalMedicine@pharma_BI
#CardiovascularPharmacoGenomics@pharma_BI
#CancerChemo-RT@pharma_BI
#BioMed-MedicalDevices@Pharma_BI

Open Access OnLine Scientific Journal
BioMed-MedTech Venture
Scientific Conference Press Coverage

25 characters for each #______

How about the following

#pharma_BiandBioMed
#pharma_e-Seriesande-Books
#pharma_ScientificConferencePress
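As a quick sketch of the 25-character guideline stated above (the candidate tags are the ones proposed in this post; the cutoff is this post’s own working limit, not a Twitter rule), each proposal can be checked like this:

```python
# Check the proposed hashtags against the 25-character guideline above.
candidates = [
    "#pharma_BiandBioMed",
    "#pharma_e-Seriesande-Books",
    "#pharma_ScientificConferencePress",
]

for tag in candidates:
    status = "OK" if len(tag) <= 25 else "too long"
    print(f"{tag} ({len(tag)} chars): {status}")
```

Run as-is, only the first candidate fits the guideline; the other two would need shortening before adoption.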

 

Read Full Post »

April 2014: Tsunami in the Global Pharmaceutical Industry & Consumer Health Care Sector – New Organizational Structure Emerging

Commentator: Aviva Lev- Ari, PhD, RN

 

 

UPDATED on 2/19/2015

2014 – The Year of Pharma Very Expensive M&A

 

http://pharmaceuticalintelligence.com/2015/02/19/2014-the-year-of-pharma-very-expensive-ma/

 

 

UPDATED on 7/21/2014

Allergan Reports Second Quarter 2014 Operating Results

http://agn.client.shareholder.com/releasedetail.cfm?ReleaseID=860851

FierceBiotech reported:

 Allergan aims ax at R&D, eliminating 1,500 jobs in bitter takeover fight

By John Carroll

Struggling to escape Valeant’s ($VRX) unwanted $53 billion takeover attempt, Allergan came up with plans to chop back its budget–axing 1,500 workers and eliminating another 250 vacant positions. Allergan’s release Monday morning is light on details, but the company clearly plans to cut back on early discovery work in what had been a rapidly growing R&D division.

Altogether, Allergan ($AGN) says its cost-cutting regimen–which will eliminate 13% of its workforce–will reduce its 2015 budget by $475 million. Reductions in spending will hit across the board, affecting its commercial organization, general and administrative functions, manufacturing and research and development. The emphasis at the company now is preserving “customer-facing” staff as well as all the key development programs now in the pipeline.

But there was a clear hint of where the ax will fall. The company noted that while it will continue all programs in the clinic, “any reductions in discovery programs will not impact approvals within the strategic plan period.”

Allergan CEO David Pyott

Allergan execs had earlier promised some deep cuts as they continue to resist the increasingly bitter charges being leveled against the company and its executive staff by Valeant and its allies. But the company just lost a key ally. The Wall Street Journal reported this morning that one of its biggest investors, Capital Research & Management, sold its stake in the company after meeting with CEO David Pyott.

In a call with analysts Monday morning Pyott emphasized that the company is in the hunt for new acquisitions, both large and small. He steered clear of mentioning any possible buyout targets, but offered that the perfect profile would be a “specialist in nature” with a good growth profile, good margins and a new therapeutic “pillar” that they could use to develop new products and grow sales more.

Bill Ackman and Valeant have been working to scrape together a 25% stake in the company, which they say will trigger a shareholders’ meeting to vote on its slate of proposed directors.

Just a few weeks ago Allergan was forced to acknowledge that the FDA had rejected–for the third time–migraine drug Semprana. Two of those rejections came after Pyott bought the therapy. Allergan said today that the next FDA action on Semprana is expected by the end of the second quarter in 2015.

R&D cutbacks were definitely not on Pyott’s agenda when he began the year. In an interview with FierceBiotech at the J.P. Morgan conference in January, Pyott bullishly outlined plans to beef up its growing R&D wing, which at that time had a staff of about 2,500. Pyott outlined plans to add hundreds more investigators as it looked to boost its total research allocation from $1 billion to $1.5 billion over the next 5 years. And a confident Pyott added that he was ready and willing to spend billions more to cover the cost of new acquisitions and pacts aimed at expanding the company’s core research focuses–while pondering the addition of a new drug category to the list of 5 core focuses if the opportunity looks right.

Allergan beat out Street estimates for Q2 and raised its earnings estimates for the next two years, a move that analysts say could make Valeant pay more than $53 billion if it plans to complete the acquisition.

“The company raised its guidance to a range of $8.20-$8.40 in 2015 and ~$10 in 2016, versus our $6.70 and $8.23 and consensus of $6.90 and $8.18, respectively,” noted Sterne Agee analyst Shibani Malhotra this morning. “Applying an 18x – 20x multiple to 2016 guidance gives a standalone value of $180-$200 per share. Today’s announcement by Allergan makes it more difficult for Valeant (VRX, $121.97, NR) to demonstrate how a merger can add incremental value and AGN shareholders may now require Valeant to pay a greater premium for Allergan, we believe.”

Related Articles:

‘Unpromising’ Allergan drug projects headed for the chopping block–report

FDA hands out its third rejection for Allergan’s migraine drug Semprana

Hostile Allergan bid is part of Valeant’s war on ‘value-destroying’ R&D

SOURCE

From: FierceBiotech <editors@fiercebiotech.com>

Reply-To: <editors@fiercebiotech.com> Date: Mon, 21 Jul 2014 16:22:21 +0000 (GMT)

To: <avivalev-ari@alum.berkeley.edu>

Subject: | 07.21.14 | Allergan slashes R&D, cuts 1,500 jobs; J&J partner gets a ‘breakthrough’

 

  • A much higher concentration ratio in the Global Pharmaceutical Industry & Consumer Health Care Sector following the FIVE Tsunami Waves presented below.
  • Consumers are expected to experience an increase in the prices of the products involved

 April 28, 2014 – Wave: Pfizer is willing to pay 58.7 billion pounds, or $98.7 billion for AstraZeneca

UPDATED on 4/28/2014

http://dealbook.nytimes.com/2014/04/27/pfizer-said-to-pursue-astrazeneca/?_php=true&_type=blogs&emc=edit_dlbkam_20140428&nl=business&nlid=40094405&_r=0

Updated, 9:06 a.m. | Pfizer publicly announced its interest in acquiring AstraZeneca of Britain on Monday, in what would be one of the biggest in an already swelling series of deal efforts among drug makers.

In a statement, Pfizer said it was willing to pay 58.7 billion pounds, or $98.7 billion. That would make it one of the largest-ever acquisition efforts in the pharmaceutical industry, surpassing Pfizer’s $90 billion takeover of Warner-Lambert 14 years ago.

Pfizer’s prospective bid was valued at £46.61 a share, roughly 30 percent above where AstraZeneca was trading at the beginning of the year.

The move is aimed at putting pressure on AstraZeneca, which has turned down a number of informal takeover approaches from Pfizer.

AstraZeneca shares surged 16.1 percent, to £47.37 in afternoon trading in London on Monday. Shares in Pfizer were up 2.6 percent in premarket trading, at $31.53.

On Monday, AstraZeneca said in a statement that it had agreed to meet in January with Pfizer, which made a preliminary offer of cash and stock representing a value of £46.61 a share – the same amount Pfizer revealed on Monday.

AstraZeneca said its board determined in January that the offer “very significantly undervalued AstraZeneca and its prospects.”

 

New York Times cites the following link on

Monday, April 28, 2014 – 2:08am EDT

STATEMENT FROM PFIZER INC. (“PFIZER”)

POSSIBLE OFFER FOR ASTRAZENECA PLC (“ASTRAZENECA”)

 

SOURCE 

http://www.pfizer.com/news/press-release/press-release-detail/pfizer_confirms_prior_discussions_with_astrazeneca_regarding_a_possible_combination_and_its_continuing_interest_in_a_possible_merger_transaction

END OF UPDATE

Wave #1: Novartis & GlaxoSmithKline – Swiss and British

The Swiss pharmaceutical giant Novartis announced an overhaul of its operations on Tuesday that included an agreement to

  • buy the cancer drug business of its British rival GlaxoSmithKline for up to $16 billion. The deals announced on Tuesday come on the heels of eye-popping transactions in the drug sector in recent months and speculation about even more to come.

As part of its restructuring, Novartis said it would

  • sell its vaccine business to GlaxoSmithKline for $7.1 billion and combine its over-the-counter pharmaceutical business with Glaxo’s consumer drug business.

That new joint venture would be one of the world’s biggest companies in the consumer health care sector. Its products would

  • include Novartis’s Excedrin pain reliever and Maalox antacid, and
  • Glaxo’s Aquafresh toothpaste and Nicorette chewing gum.

Novartis, based in Basel, Switzerland, also said it had agreed to

  • sell its animal health division to Eli Lilly & Company for $5.4 billion, and that
  • it would put its flu vaccine business up for sale.
Novartis plans several deals with GlaxoSmithKline as part of its restructuring.

The deals grew out of a strategic review begun last year as Novartis faced pressure from investors to exit some of its less profitable businesses.

“This is about getting us into fighting shape for the next 10 years,” Joseph Jimenez, Novartis’s chief executive, said by telephone.

Over the next decade, Mr. Jimenez said, health care systems will be under strain, trying to hold down costs as the number of older people grows rapidly – even as fewer people are actually able to pay for their medications. “It’s a demographic fact,” he said.

Pharmaceutical companies across the globe “are looking at their portfolios,” he said, “and they’re asking, ‘How can I be a winner in this industry?’ The winners will be the ones who can innovate, who have global scale.”

According to data from Thomson Reuters, deals this year in the health care sector – driven primarily by acquisitions by pharmaceutical companies – have resulted in global transactions worth about $64.1 billion through April 10. That is the sector’s strongest start to a year since 2009.

The deals would allow Novartis to focus on higher-margin businesses in which the company already has scale, while staying active in the over-the-counter market.

By acquiring Glaxo’s oncology business, Novartis would expand its cancer drug offerings, including adding Tafinlar and Mekinist, two recently approved drugs used to treat skin cancer. The GlaxoSmithKline cancer drug business had revenue of about $1.6 billion in 2013. For Glaxo, the proposed deals are expected to provide greater scale in two of the company’s core businesses –

  • vaccines and
  • over-the-counter products.

The transactions are expected to increase its annual revenue by £1.3 billion, to about £26.9 billion.

The transactions with GlaxoSmithKline, expected to be completed by the first half of 2015, are subject to regulatory and shareholder approval.

Mr. Jimenez, an American who took over as Novartis C.E.O. in 2010, said he anticipated few regulatory hurdles, as the businesses being combined would be complementary ones, for the most part.

Glaxo’s combined consumer health care business, based on 2013 performance, would have revenue of £6.5 billion, making it the largest provider of over-the-counter drugs.

GlaxoSmithKline would hold a controlling interest of 63.5 percent of the combined company, with the rest held by Novartis.

“Opportunities to build greater scale and combine high quality assets in vaccines and consumer health care are scarce,” Andrew Witty, the GlaxoSmithKline chief executive, said in a statement. “With this transaction we will substantially strengthen two of our core businesses and create significant new options to increase value for shareholders.”

Emma Walmsley, the president of Glaxo’s consumer health care segment, will serve as chief executive of the combined consumer business.

The deal is also expected to expand Glaxo’s vaccine portfolio, including adding Bexsero, a treatment for meningitis.

Novartis, which had revenue of $57.9 billion in 2013, employs about 136,000 people in 150 countries.

GlaxoSmithKline was advised by Lazard, Citigroup, Zaoui & Company and Arkle Associates.

Wave # 2: Mallinckrodt Pharmaceuticals Irish and American partners

Mallinckrodt Pharmaceuticals, the Irish drug maker:

spun off from the medical device company Covidien last year, agreed this year to buy Questcor Pharmaceuticals for $5.6 billion in cash and shares, and acquired Cadence Pharmaceuticals of San Diego for about $1.3 billion in cash.

After a failed bid to gain control of the German pharmaceutical wholesaler Celesio, the health care company McKesson Corporation received enough shareholder support in January to complete the $8.3 billion deal.

Wave #3: Pharmaceutical Industries in India

And Sun Pharmaceutical Industries of India said this month that it would pay about $4 billion in stock for Ranbaxy Laboratories, a smaller Indian rival.

In addition to the Novartis deal, there are potentially tens of billions of dollars in transactions being discussed in the sector.

Wave #4: Allergan under hostile takeover

Pershing Square Capital Management, which is led by the activist investor William A. Ackman, and the health care company Valeant are teaming up on a bid to buy Allergan, the maker of Botox, for about $46 billion. The bid by Valeant was announced on Tuesday.

Wave #5: AstraZeneca declines Pfizer

The British drug company AstraZeneca recently spurned several informal takeover approaches by Pfizer, according to a person briefed on the matter. One of those approaches valued AstraZeneca at about 60 billion pounds, or nearly $100 billion, according to The Sunday Times, a British newspaper.

The announcement on Tuesday was positive for Novartis shares, which rose 2.5 percent in midday trading in Zurich.

Pharmaceutical shares also rose elsewhere. In London trading, GlaxoSmithKline added 5.4 percent and AstraZeneca gained 6.6 percent. In Frankfurt, Eli Lilly rose 2.1 percent and Pfizer rose 2.4 percent. Sanofi rose 1.8 percent in Paris, and Roche gained 0.7 percent in Zurich.

Neil Gough, David Gelles, Michael J. de la Merced, Alexandra Stevenson and Andrew Pollack contributed reporting.

SOURCE

Novartis Builds a Major Overhaul on a Flurry of Deals

 

Other related articles

Billionaires With Big Ideas Are Privatizing American Science

 

Other related articles published on this Open Access Online Scientific Journal include the following:

Predictions on Biotech Sector’s Two-year Boom

http://pharmaceuticalintelligence.com/2014/03/27/predictions-on-biotech-sectors-two-year-boom/

 

Updated: Investing and Inventing: Is the Tango of Mars and Venus Still on

http://pharmaceuticalintelligence.com/2013/12/08/investing-and-inventing-is-the-tango-of-mars-and-venus-still-on/

 

 

 

 

Read Full Post »

« Newer Posts