
Archive for the ‘Computational Biology/Systems and Bioinformatics’ Category

 

 

News in Exploration of the Biological Causes of Mental Illness: Potential for New Treatments

Reporter: Aviva Lev-Ari, PhD, RN

Broad’s Stanley Center for Psychiatric Genome Research: Ted Stanley Pledges $650M

Initially opened with a gift from Stanley and his late wife in 2007, the Broad’s Stanley Center has already made progress in identifying genetic risk factors for schizophrenia and bipolar disorder and investigating therapeutic efforts based on those discoveries. This week researchers from Broad and other institutes published a GWAS analysis in Nature that identified more than 100 regions of DNA associated with schizophrenia.

“Ten years ago, finding the biological causes of psychiatric disorders was like trying to climb a wall with no footholds,” Stanley Center Director Steven Hyman said in a statement. “But in the last few years, we’ve turned this featureless landscape into something we can exploit. If this is a wall, we’ve put toeholds into it. Now, we have to start climbing.”

SOURCE

http://www.genomeweb.com/clinical-genomics/ted-stanley-pledges-650m-broads-stanley-center-psychiatric-genome-research?utm_source=SilverpopMailing&utm_medium=email&utm_campaign=Broad%20Gets%20$650M%20for%20Psychiatric%20Genomics%20Research;%20Personalized%20Medicine%20Survey;%20Waters%20Q2%20-%2007/22/2014%2011:05:00%20AM

 

The Nature paper was produced by the Psychiatric Genomics Consortium (PGC) — a collaboration of more than 80 institutions, including the Broad Institute. Hundreds of researchers from the PGC pooled samples from more than 150,000 people, of whom 36,989 had been diagnosed with schizophrenia. This enormous sample size enabled them to spot 108 genetic locations, or loci, where the DNA sequence in people with schizophrenia tends to differ from the sequence in people without the disease. “This paper is in some ways proof that genomics can succeed,” Hyman says.

 

“This is a pretty exciting moment in the history of this field,” agrees Thomas Insel, director of the National Institute of Mental Health (NIMH) in Bethesda, Maryland, who was not involved in the study.

SOURCE

http://www.nature.com/news/gene-hunt-gain-for-mental-health-1.15602#/b1

 

 

Biological insights from 108 schizophrenia-associated genetic loci

Ripke, S. et al. Nature http://dx.doi.org/10.1038/nature13595 (2014).

SOURCE

Nature (2014) doi:10.1038/nature13595
Published online 22 July 2014

 

Abstract

Schizophrenia is a highly heritable disorder. Genetic risk is conferred by a large number of alleles, including common alleles of small effect that might be detected by genome-wide association studies. Here we report a multi-stage schizophrenia genome-wide association study of up to 36,989 cases and 113,075 controls. We identify 128 independent associations spanning 108 conservatively defined loci that meet genome-wide significance, 83 of which have not been previously reported. Associations were enriched among genes expressed in brain, providing biological plausibility for the findings. Many findings have the potential to provide entirely new insights into aetiology, but associations at DRD2 and several genes involved in glutamatergic neurotransmission highlight molecules of known and potential therapeutic relevance to schizophrenia, and are consistent with leading pathophysiological hypotheses. Independent of genes expressed in brain, associations were enriched among genes expressed in tissues that have important roles in immunity, providing support for the speculated link between the immune system and schizophrenia.

 

Discussion

In the largest (to our knowledge) molecular genetic study of schizophrenia, or indeed of any neuropsychiatric disorder, ever conducted, we demonstrate the power of GWAS to identify large numbers of risk loci. We show that the use of alternative ascertainment and diagnostic schemes designed to rapidly increase sample size does not inevitably introduce a crippling degree of heterogeneity. That this is true for a phenotype like schizophrenia, in which there are no biomarkers or supportive diagnostic tests, provides grounds to be optimistic that this approach can be successfully applied to GWAS of other clinically defined disorders.

We further show that the associations are not randomly distributed across genes of all classes and function; rather they converge upon genes that are expressed in certain tissues and cellular types. The findings include molecules that are the current, or the most promising, targets for therapeutics, and point to systems that align with the predominant aetiological hypotheses of the disorder. This suggests that the many novel findings we report also provide an aetiologically relevant foundation for mechanistic and treatment development studies. We also find overlap between genes affected by rare variants in schizophrenia and those within GWAS loci, and broad convergence in the functions of some of the clusters of genes implicated by both sets of genetic variants, particularly genes related to abnormal glutamatergic synaptic and calcium channel function. How variation in these genes impact function to increase risk for schizophrenia cannot be answered by genetics, but the overlap strongly suggests that common and rare variant studies are complementary rather than antagonistic, and that mechanistic studies driven by rare genetic variation will be informative for schizophrenia.

 

 

 

Manhattan plot showing schizophrenia associations.

Manhattan plot of the discovery genome-wide association meta-analysis of 49 case control samples (34,241 cases and 45,604 controls) and 3 family based association studies (1,235 parent affected-offspring trios). The x axis is chromosomal position and the y axis is the significance (–log10 P; 2-tailed) of association derived by logistic regression. The red line shows the genome-wide significance level (5×10−8). SNPs in green are in linkage disequilibrium with the index SNPs (diamonds) which represent independent genome-wide significant associations.
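The caption above describes the standard GWAS presentation: each SNP’s association p-value from logistic regression is plotted as –log10 P against chromosomal position, with genome-wide significance at P = 5×10−8. As a purely illustrative sketch (simulated p-values, not the PGC data or pipeline), a minimal Manhattan-style plot in Python might look like this:

```python
# Minimal illustrative Manhattan plot: simulated per-SNP p-values plotted as
# -log10(P) by chromosome, with the conventional genome-wide significance
# threshold of 5e-8 drawn as a horizontal line. Toy data only.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
fig, ax = plt.subplots(figsize=(10, 3))
offset = 0
for chrom in range(1, 23):
    n_snps = 5000                           # toy number of SNPs per chromosome
    pvals = rng.uniform(size=n_snps)        # null p-values; real values come from logistic regression
    positions = offset + np.arange(n_snps)
    ax.scatter(positions, -np.log10(pvals), s=2,
               color="steelblue" if chrom % 2 else "lightsteelblue")
    offset += n_snps

ax.axhline(-np.log10(5e-8), color="red", linewidth=1)  # genome-wide significance line
ax.set_xlabel("Chromosomal position (chromosomes concatenated)")
ax.set_ylabel("-log10(P)")
plt.tight_layout()
plt.show()
```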

 

SOURCE

 

Biological insights from 108 schizophrenia-associated genetic loci

 

Schizophrenia Working Group of the Psychiatric Genomics Consortium

 

Nature (2014) doi:10.1038/nature13595

 



Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson

Curators and Writer: Stephen J. Williams, Ph.D. with input from Curators Larry H. Bernstein, MD, FCAP, Dr. Justin D. Pearlman, MD, PhD, FACC and Dr. Aviva Lev-Ari, PhD, RN

(This discussion is part of a three-part series including:

Using Scientific Content Curation as a Method for Validation and Biocuration

Using Scientific Content Curation as a Method for Open Innovation)

 

Every month I get my Wired magazine (yes, in hard print; I still like to turn pages manually, plus I don’t mind if I get grease or wing sauce on my magazine rather than on my e-reader), and I always love reading articles written by Clive Thompson. He has a certain flair for understanding the techno world we live in and the human/technology interaction, writing about interesting ways in which we almost inadvertently integrate new technologies into our day-to-day living, generating new entrepreneurship and new value. He also writes extensively about tech and entrepreneurship.

An October 2013 Wired article by Clive Thompson, entitled “How Successful Networks Nurture Good Ideas: Thinking Out Loud”, describes how the voluminous writings, postings, tweets, and sharing on social media are fostering connections between people and ideas which, previously, had not existed. The article was drawn from Clive Thompson’s book Smarter Than You Think: How Technology Is Changing Our Minds for the Better. Tom Peters also commented about the article in his blog (see here).

Clive gives a wonderful example of Ory Okolloh, a young Kenyan-born law student who, after becoming frustrated with the lack of coverage of problems back home, started a blog about Kenyan politics. Her blog not only got interest from movie producers who were documenting female bloggers but also gained the interest of fellow Kenyans who, during the upheaval after the 2007 Kenyan elections, helped Ory to develop a Google map for reporting of violence (http://www.ushahidi.com/), which eventually became a global organization using open-source technology for crisis management. There are a multitude of examples of how networks and the conversations within these circles are fostering new ideas. As Clive states in the article:

 

Our ideas are PRODUCTS OF OUR ENVIRONMENT.

They are influenced by the conversations around us.

However, the article got me thinking about how Science 2.0 and the internet are changing how scientists contribute, share, and make connections to produce new and transformative ideas.

But HOW MUCH Knowledge is OUT THERE?

 

Clive’s article listed some amazing facts about the mountains of posts, tweets, words, etc. out on the internet EVERY DAY, all of which exemplify the problem:

  • 154.6 billion EMAILS per DAY
  • 400 million TWEETS per DAY
  • 1 million BLOG POSTS (including this one) per DAY
  • 2 million COMMENTS on WordPress per DAY
  • 16 million WORDS on Facebook per DAY
  • TOTAL 52 TRILLION WORDS per DAY

As he estimates, this would be 520 million books per DAY (assuming an average book of 100,000 words).
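A quick check of that book-count arithmetic, using the totals listed above (52 trillion words per day and an assumed 100,000 words per book):

$$\frac{52 \times 10^{12}\ \text{words/day}}{1 \times 10^{5}\ \text{words/book}} = 5.2 \times 10^{8}\ \text{books/day} \approx 520\ \text{million books per day}$$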

A LOT of INFO. But as he suggests, it is not the volume but how we create and share this information that is critical; as the science fiction writer Theodore Sturgeon noted, “Ninety percent of everything is crap” (AKA Sturgeon’s Law).

 

Internet Live Stats shows how congested the internet is each day (http://www.internetlivestats.com/). Needless to say, Clive’s numbers are a bit off. As of the writing of this article:

 

  • 2.9 billion internet users
  • 981 million websites (only 25,000 hacked today)
  • 128 billion emails
  • 385 million Tweets
  • > 2.7 million BLOG posts today (including this one)

 

The Good, The Bad, and the Ugly of the Scientific Internet (The Wild West?)

 

So how many science blogs are out there? Well, back in 2008 “grrlscientist” asked this question and turned up a total of 19,881 blogs; however, most were “pseudoscience” blogs, not written by Ph.D.- or M.D.-level scientists. A deeper search on Technorati using the search term “scientist PhD” turned up about 2,000 written by trained scientists.

So granted, there is a lot of good, bad, and ugly when it comes to scientific information on the internet!

I had recently re-posted, on this site, a great example of how bad science and medicine can get propagated throughout the internet:

http://pharmaceuticalintelligence.com/2014/06/17/the-gonzalez-protocol-worse-than-useless-for-pancreatic-cancer/

 

and in a Nature report: Stem cells: Taking a stand against pseudoscience

http://www.nature.com/news/stem-cells-taking-a-stand-against-pseudoscience-1.15408

Drs. Elena Cattaneo and Gilberto Corbellini document their long, hard fight against false and invalidated medical claims made by some “clinicians” about the utility and medical benefits of certain stem-cell therapies, sacrificing their time to debunk medical pseudoscience.

 

Using Curation and Science 2.0 to build Trusted, Expert Networks of Scientists and Clinicians

 

Establishing networks of trusted colleagues has been a cornerstone of the scientific discourse for centuries. For example, in the mid-1640s, the Royal Society began as:

 

“a meeting of natural philosophers to discuss promoting knowledge of the natural world through observation and experiment”, i.e. science. The Society met weekly to witness experiments and discuss what we would now call scientific topics. The first Curator of Experiments was Robert Hooke.

 

from The History of the Royal Society

 


The Royal Society of London for Improving Natural Knowledge.

(photo credit: Royal Society)

(Although one wonders why they met “incognito”.)

Indeed, as discussed in “Science 2.0/Brainstorming” by the originators of OpenWetWare, an open-source science-notebook software designed to foster open innovation, the new search and aggregation tools are making it easier to find, contribute, and share information with interested individuals. This paradigm is the basis for the shift from Science 1.0 to Science 2.0. Science 2.0 is attempting to remedy current drawbacks which are hindering rapid and open scientific collaboration and discourse, including:

  • Slow time frame of current publishing methods: reviews can take years to fashion, leading to outdated material
  • Information dissemination is currently one-dimensional: peer review, highly polished work, conferences
  • Current publishing does not encourage open feedback and review
  • Published articles edited for print do not take advantage of new web-based features such as tagging, search-engine features, interactive multimedia, and hyperlinks
  • Published data and methodology are often incomplete
  • Published data are not available in formats that are readily accessible across platforms: gene lists are now mandated to be supplied as files, but other data do not have to be supplied in file format

In short, the sheer volume of information, the slow and one-dimensional nature of traditional publishing, and the lack of open feedback and readily accessible data all argue for a better way to filter, validate, and share scientific findings; the sections below describe how content curation can help.

 

Curation in the Sciences: View from Scientific Content Curators Larry H. Bernstein, MD, FCAP, Dr. Justin D. Pearlman, MD, PhD, FACC and Dr. Aviva Lev-Ari, PhD, RN

Curation is an active filtering of the web’s immense amount of relevant and irrelevant content, and of the peer-reviewed literature found by such means. As a result, content may be disruptive. However, in doing good curation, one does more than simply assign value by presentation of creative work in any category. Great curators comment and share experience across content, authors and themes. Great curators may see patterns others don’t, or may challenge or debate complex and apparently conflicting points of view. Answers to specifically focused questions come from the hard work of many in laboratory settings creatively establishing answers to definitive questions, each a part of the larger knowledge-base of reference. There are those rare “Einsteins” who imagine a whole universe, unlike the three blind men of the Sufi tale: one held the tail, another the trunk, another the ear, and they all said this is an elephant!
In my reading, I learn that the optimal ratio of curation to creation may be as high as 90% curation to 10% creation. Creating content is expensive. Curation, by comparison, is much less expensive.

– Larry H. Bernstein, MD, FCAP

Curation is Uniquely Distinguished by the Historical Exploratory Ties that Bind – Larry H. Bernstein, MD, FCAP

The explosion of information by numerous media, hardcopy and electronic, written and video, has created difficulties tracking topics and tying together relevant but separated discoveries, ideas, and potential applications. Some methods to help assimilate diverse sources of knowledge include a content expert preparing a textbook summary, a panel of experts leading a discussion or think tank, and conventions moderating presentations by researchers. Each of those methods has value and an audience, but they also have limitations, particularly with respect to timeliness and pushing the edge. In the electronic data age, there is a need for further innovation, to make synthesis, stimulating associations, synergy and contrasts available to audiences in a more timely and less formal manner. Hence the birth of curation. Key components of curation include expert identification of data, ideas and innovations of interest, expert interpretation of the original research results, integration with context, digesting, highlighting, correlating and presenting in novel light.

Justin D. Pearlman, MD, PhD, FACC, from The Voice of Content Consultant on The Methodology of Curation in Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation The Art of Scientific & Medical Curation

 

In Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison, Drs. Larry Bernstein and Aviva Lev-Ari liken the medical and scientific curation process to the curation of musical works into a thematic program:

 

Work of Original Music Curation and Performance:

 

Music Review and Critique as a Curation

Work of Original Expression: what is the methodology of Curation in the context of Medical Research Findings; Exposition of Synthesis and Interpretation of the significance of the results to Clinical Care

… leading to new, curated, and collaborative works by networks of experts to generate (in this case) ebooks on the most significant trends and interpretations of scientific knowledge as they relate to medical practice.

 

In Summary: How Scientific Content Curation Can Help

 

Given the aforementioned problems of:

I. the complex and rapid deluge of scientific information

II. the need for a collaborative, open environment to produce transformative innovation

III. the need for alternative ways to disseminate scientific findings

CURATION MAY OFFER SOLUTIONS

I. Curation exists beyond the review: curation decreases time for assessment of current trends, adding multiple insights and analyses WITH an underlying METHODOLOGY (discussed below) while NOT acting as mere reiteration or regurgitation

II. Curation provides insights from the WHOLE scientific community on multiple WEB 2.0 platforms

III. Curation makes use of new computational and Web-based tools to provide interoperability of data and reporting of findings (shown in the Examples below)

 

Therefore, a discussion is given below of methodologies, definitions of best practices, and tools developed to assist the content curation community in this endeavor.

Methodology in Scientific Content Curation as Envisioned by Aviva Lev-Ari, PhD, RN

 

At Leaders in Pharmaceutical Business Intelligence, site owner and chief editor Aviva Lev-Ari, PhD, RN has been developing a strategy “for the facilitation of Global access to Biomedical knowledge rather than the access to sheer search results on Scientific subject matters in the Life Sciences and Medicine”. According to Aviva, “for the methodology to attain this complex goal it is to be dealing with popularization of ORIGINAL Scientific Research via Content Curation of Scientific Research Results by Experts, Authors, Writers using the critical thinking process of expert interpretation of the original research results.” The following post:

Cardiovascular Original Research: Cases in Methodology Design for Content Curation and Co-Curation

 

http://pharmaceuticalintelligence.com/2013/07/29/cardiovascular-original-research-cases-in-methodology-design-for-content-curation-and-co-curation/

demonstrates two examples of how content co-curation attempts to achieve this aim and develop networks of scientist and clinician curators to aid in the active discussion of scientific and medical findings, and to use scientific content curation as a means for critique, offering a “new architecture for knowledge”. Indeed, popular search engines such as Google and Yahoo, and even scientific search engines such as NCBI’s PubMed and the OVID search engine, rely on keywords and Boolean algorithms, which has created a need for more context-driven scientific search and discourse.
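To make the “keywords and Boolean algorithms” point concrete, here is a minimal, hypothetical sketch (not from this post or from Dr. Lev-Ari’s methodology) of the kind of Boolean keyword query PubMed accepts, sent through NCBI’s public E-utilities esearch endpoint; the search term and field tags are illustrative assumptions only:

```python
# Hypothetical Boolean keyword search against PubMed via NCBI E-utilities.
# The query term below is an example, not one used by the authors.
import json
import urllib.request
from urllib.parse import urlencode

term = '"content curation"[Title/Abstract] AND (science OR biocuration)'
params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": 5}
url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
       + urlencode(params))

with urllib.request.urlopen(url) as resp:
    result = json.load(resp)["esearchresult"]

print(result["count"], "records matched; first PMIDs:", result["idlist"])
```

A keyword engine like this returns whatever matches the terms; it is the curator who supplies the context, interpretation, and connections between the results.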

In Science and Curation: the New Practice of Web 2.0, Célya Gruson-Daniel (@HackYourPhd) states:

To address this need, human intermediaries, empowered by the participatory wave of web 2.0, naturally started narrowing down the information and providing an angle of analysis and some context. They are bloggers, regular Internet users or community managers – a new type of profession dedicated to the web 2.0. A new use of the web has emerged, through which the information, once produced, is collectively spread and filtered by Internet users who create hierarchies of information.

.. where Célya considers curation an essential practice to manage open science and this new style of research.

As mentioned above in her article, Dr. Lev-Ari presents two examples of how content curation expanded thought, discussion, and eventually new ideas.

  1. Curator edifies content through an analytic process = NEW form of writing and organization leading to new interconnections of ideas = NEW INSIGHTS

     Evidence: curation methodology leading to new insights for biomarkers

  2. Same as #1 but with multiple players (experts), each bringing unique insights, perspectives, and skills, yielding new research = NEW LINE of CRITICAL THINKING

     Evidence: co-curation methodology among cardiovascular experts leading to the cardiovascular series of ebooks


The Life Cycle of Science 2.0. Due to Web 2.0, new paradigms of scientific collaboration are rapidly emerging. Originally, scientific discoveries were made by individual laboratories or “scientific silos”, where the main methods of communication were peer-reviewed publication, meeting presentations, and ultimately news outlets and multimedia. In the digital era, data were organized for literature search and biocurated databases. In the era of social media and Web 2.0, a group of scientifically and medically trained “curators” organizes the piles of digitally generated data and fits them into an organizational structure which can be shared, communicated, and analyzed in a holistic approach, launching new ideas due to changes in the organizational structure of data and in data analytics.

 

The result, in this case, is a collaborative written work beyond the scope of a review. Currently, review articles are written by experts in the field and summarize the state of a research area. However, using collaborative, trusted networks of experts, the result is a real-time synopsis and analysis of the field, with the goal in mind to

INCREASE THE SCIENTIFIC CURRENCY.

For a detailed description of the methodology, please see Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation The Art of Scientific & Medical Curation

 

In her paper, Curating e-Science Data, Maureen Pennock, from The British Library, emphasized the importance of using a diligent, validated, reproducible, and cost-effective methodology for curation by e-science communities over the ‘Grid’:

“The digital data deluge will have profound repercussions for the infrastructure of research and beyond. Data from a wide variety of new and existing sources will need to be annotated with metadata, then archived and curated so that both the data and the programmes used to transform the data can be reproduced for use in the future. The data represent a new foundation for new research, science, knowledge and discovery”

— JISC Senior Management Briefing Paper, The Data Deluge (2004)

 

As she states, proper data and content curation is important for:

  • Post-analysis
  • Data and research result reuse for new research
  • Validation
  • Preservation of data in newer formats to prolong life-cycle of research results

However, she laments the lack of:

  • Funding for such efforts
  • Training
  • Organizational support
  • Monitoring
  • Established procedures

 

Tatiana Aders wrote a nice article based on an interview with Microsoft’s Robert Scoble, in which he emphasized the need for curation in a world where “Twitter is the replacement of the Associated Press Wire Machine” and new technologic platforms are knocking out old platforms at a rapid pace. In addition, he notes that curation is also a social art form where the primary concerns are to understand an audience and a niche.

Indeed, part of the reason the need for curation is unmet, as Mark Carrigan writes, is the lack of appreciation by academics of the utility of tools such as Pinterest, Storify, and Pearltrees to effectively communicate and build collaborative networks.

And teacher Nancy White, in her article Understanding Content Curation on her blog Innovations in Education, shows examples of how curation is an educational tool for students and teachers, demonstrating that students need to CONTEXTUALIZE what they collect to add enhanced value, using higher mental processes such as:

  • Knowledge
  • Comprehension
  • Application
  • Analysis
  • Synthesis
  • Evaluation

A GREAT table about the differences between Collecting and Curating by Nancy White, at http://d20innovation.d20blogs.org/2012/07/07/understanding-content-curation/

University of Massachusetts Medical School has aggregated some useful curation tools at http://esciencelibrary.umassmed.edu/data_curation

Although many tools are related to biocuration and building databases, the common idea is curating data with indexing, analyses, and contextual value for an audience, in order to generate NETWORKS OF NEW IDEAS.

See here for a curation of how networks foster knowledge, by Erika Harrison on Scoop.it

(http://www.scoop.it/t/mobilizing-knowledge-through-complex-networks)

 

“Nowadays, any organization should employ network scientists/analysts who are able to map and analyze complex systems that are of importance to the organization (e.g. the organization itself, its activities, a country’s economic activities, transportation networks, research networks).”

– Andrea Carafa, insight from World Economic Forum New Champions 2012, “Power of Networks”

 

Creating Content Curation Communities: Breaking Down the Silos!

 

An article by Dr. Dana Rotman, “Facilitating Scientific Collaborations Through Content Curation Communities”, highlights how scientific information resources, traditionally created and maintained by paid professionals, are being crowdsourced to professionals and nonprofessionals in what she terms “content curation communities”: groups of professional and nonprofessional volunteers who create, curate, and maintain the various scientific database tools we use, such as Encyclopedia of Life, ChemSpider (for Slideshare see here), biowikipedia, etc. Although very useful and openly available, these projects create their own challenges, such as:

  • information integration (various types of data and formats)
  • social integration (marginalized by scientific communities, no funding, no recognition)

The authors set forth some ways to overcome these challenges of the content curation community including:

  1. standardization in practices
  2. visualization to document contributions
  3. emphasizing role of information professionals in content curation communities
  4. maintaining quality control to increase respectability
  5. recognizing participation to professional communities
  6. proposing funding/national meeting – Data Intensive Collaboration in Science and Engineering Workshop

A few great presentations and papers from the 2012 DICOSE meeting are found below

Judith M. Brown, Robert Biddle, Stevenson Gossage, Jeff Wilson & Steven Greenspan. Collaboratively Analyzing Large Data Sets using Multitouch Surfaces. (PDF) NotesForBrown

 

Bill Howe, Cecilia Aragon, David Beck, Jeffrey P. Gardner, Ed Lazowska, Tanya McEwen. Supporting Data-Intensive Collaboration via Campus eScience Centers. (PDF) NotesForHowe

 

Kerk F. Kee & Larry D. Browning. Challenges of Scientist-Developers and Adopters of Existing Cyberinfrastructure Tools for Data-Intensive Collaboration, Computational Simulation, and Interdisciplinary Projects in Early e-Science in the U.S.. (PDF) NotesForKee

 

Ben Li. The mirages of big data. (PDF) NotesForLiReflectionsByBen

 

Betsy Rolland & Charlotte P. Lee. Post-Doctoral Researchers’ Use of Preexisting Data in Cancer Epidemiology Research. (PDF) NoteForRolland

 

Dana Rotman, Jennifer Preece, Derek Hansen & Kezia Procita. Facilitating scientific collaboration through content curation communities. (PDF) NotesForRotman

 

Nicholas M. Weber & Karen S. Baker. System Slack in Cyberinfrastructure Development: Mind the Gaps. (PDF) NotesForWeber

Indeed, the movement from Science 1.0 to Science 2.0 originated because these “silos” had frustrated many scientists, resulting in changes not only in publishing (Open Access) but also in the communication of protocols (online protocol sites and notebooks like OpenWetWare and BioProtocols Online) and in data and material registries (CGAP and tumor banks). Some examples are given below.

Open Science Case Studies in Curation

1. Open Science Project from Digital Curation Center

This project looked at what motivates researchers to work in an open manner with regard to their data, results and protocols, and whether advantages are delivered by working in this way.

The case studies consider the benefits and barriers to using ‘open science’ methods, and were carried out between November 2009 and April 2010 and published in the report Open to All? Case studies of openness in research. The Appendices to the main report (pdf) include a literature review, a framework for characterizing openness, a list of examples, and the interview schedule and topics. Some of the case study participants kindly agreed to us publishing the transcripts. This zip archive contains transcripts of interviews with researchers in astronomy, bioinformatics, chemistry, and language technology.

 

see: Pennock, M. (2006). “Curating e-Science Data”. DCC Briefing Papers: Introduction to Curation. Edinburgh: Digital Curation Centre. Handle: 1842/3330. Available online: http://www.dcc.ac.uk/resources/briefing-papers/introduction-curation

 

2. cBio – cBio’s biological data curation group developed and operates using a methodology called CIMS, the Curation Information Management System. CIMS is a comprehensive curation and quality control process that efficiently extracts information from publications.

 

3. NIH Topic Maps – This website provides a database and web-based interface for searching and discovering the types of research awarded by the NIH. The database uses automated, computer generated categories from a statistical analysis known as topic modeling.

 

4. SciKnowMine (USC) – We propose to create a framework to support biocuration called SciKnowMine (after ‘Scientific Knowledge Mine’), cyberinfrastructure that supports biocuration through the automated mining of text, images, and other amenable media at the scale of the entire literature.

 

5. OpenWetWare – OpenWetWare is an effort to promote the sharing of information, know-how, and wisdom among researchers and groups who are working in biology & biological engineering. If you would like edit access, would be interested in helping out, or want your lab website hosted on OpenWetWare, please join us. OpenWetWare is managed by the BioBricks Foundation. They also have a wiki about Science 2.0.

6. LabTrove: a lightweight, web-based laboratory “blog” as a route towards a marked-up record of work in a bioscience research laboratory. The authors of a PLOS ONE article from the University of Southampton report the development of an open scientific lab notebook using a blogging strategy to share information.

7. OpenScience Project – The OpenScience project is dedicated to writing and releasing free and Open Source scientific software. We are a group of scientists, mathematicians and engineers who want to encourage a collaborative environment in which science can be pursued by anyone who is inspired to discover something new about the natural world.

8. Open Science Grid is a multi-disciplinary partnership to federate local, regional, community and national cyberinfrastructures to meet the needs of research and academic communities at all scales.

 

9. Some ongoing biomedical knowledge (curation) projects at ISI

IICurate
This project is concerned with developing a curation and documentation system for information integration in collaboration with the II Group at ISI as part of the BIRN.

BioScholar
Its primary purpose is to provide software for experimental biomedical scientists that would permit a single scientific worker (at the level of a graduate student or postdoctoral worker) to design, construct and manage a shared knowledge repository for a research group derived from a local store of PDF files. This project is funded by NIGMS from 2008 to 2012 (R01-GM083871).

10. Tools useful for scientific content curation

 

Research Analytic and Curation Tools from University of Queensland

 

Thomson Reuters information curation services for pharma industry

 

Microblogs as a way to communicate information about HPV infection among clinicians and patients; use of Chinese microblog SinaWeibo as a communication tool

 

VIVO for scientific communities – In order to connect information about research activities across institutions and make it available to others, taking into account smaller players in the research landscape and addressing their need for specific information (for example, by providing non-conventional research objects), the open-source software VIVO, which provides research information as linked open data (LOD), is used in many countries. So-called VIVO harvesters collect research information that is freely available on the web, and convert the data collected in conformity with LOD standards. The VIVO ontology builds on prevalent LOD namespaces and, depending on the needs of the specialist community concerned, can be expanded.

 

 

11. Examples of scientific curation in different areas of Science/Pharma/Biotech/Education

 

From Science 2.0 to Pharma 3.0 Q&A with Hervé Basset

http://digimind.com/blog/experts/pharma-3-0/

A Q&A with Hervé Basset, a specialist librarian in the pharmaceutical industry and owner of the blog “Science Intelligence”, about the inspiration behind his recent book entitled “From Science 2.0 to Pharma 3.0”, published by Chandos Publishing and available on Amazon, and about how health care companies need a social media strategy to communicate with and convince the health-care consumer, not just the practitioner.

 

Thomson Reuters and NuMedii Launch Ground-Breaking Initiative to Identify Drugs for Repurposing. Companies leverage content, Big Data analytics and expertise to improve success of drug discovery

 

Content Curation as a Context for Teaching and Learning in Science

 

#OZeLIVE Feb2014

http://www.youtube.com/watch?v=Ty-ugUA4az0

Creative Commons license

 

DigCCur: A graduate-level program initiated by the University of North Carolina to train future digital curators in science and other subjects

 

Syracuse University offering a program in eScience and digital curation

 

Curation Tips from TED talks and tech experts

Steven Rosenbaum from Curation Nation

http://www.youtube.com/watch?v=HpncJd1v1k4

 

Pawan Deshpande from Curata on how content curation communities evolve and what makes good content curation:

http://www.youtube.com/watch?v=QENhIU9YZyA

 

How the Internet of Things is Promoting the Curation Effort

Update by Stephen J. Williams, PhD 3/01/19

Up till now, curation efforts like wikis (Wikipedia, Wikimedicine, WormBase, GenBank, etc.) have been supported by a largely voluntary army of citizens, scientists, and data enthusiasts. I am sure all have seen the requests for donations to help keep Wikipedia and its other related projects up and running. One of the obscure sister projects of Wikipedia, Wikidata, aims to curate and represent all information in such a way that both machines and humans can converse in it. An army of about 4 million contributors creates Wiki entries and maintains these databases.

Enter the Age of the Personal Digital Assistants (Hellooo Alexa!)

In a March 2019 WIRED article, “Encyclopedia Automata: Where Alexa Gets Its Information”, senior WIRED writer Tom Simonite reports on the need for new types of data structures, on how important curated databases are for new fields of AI, and on how they enable personal digital assistants like Alexa or Google Assistant to decipher the meaning of a user’s request.

As Mr. Simonite noted, many of our libraries of knowledge are encoded in an “ancient technology largely opaque to machines”: prose. Search engines like Google do not have a problem with a question asked in prose, as they just have to find relevant links to pages. Yet this is a problem for Google Assistant, for instance, as machines can’t quickly extract meaning from the internet’s mess of “predicates, complements, sentences, and paragraphs. It requires a guide.”

Enter Wikidata.  According to founder Denny Vrandecic,

Language depends on knowing a lot of common sense, which computers don’t have access to

A Wikidata entry (of which there are about 60 million) codes every concept and item with a numeric identifier, the QID. These codes are integrated with tags (like the hashtags you use on Twitter or the tags in WordPress used for Search Engine Optimization) so computers can recognize patterns and relations between these codes.
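As a minimal sketch of what that structure looks like in practice (an illustration under my own assumptions, not drawn from the WIRED article), a Wikidata item can be fetched by its QID through the public MediaWiki API; Q42 here is simply a well-known example identifier:

```python
# Fetch one Wikidata entity by QID via the public wbgetentities API and print
# its English label and the number of property claims attached to it.
import json
import urllib.request
from urllib.parse import urlencode

qid = "Q42"  # example QID; any item identifier works the same way
params = {"action": "wbgetentities", "ids": qid,
          "props": "labels|claims", "format": "json"}
url = "https://www.wikidata.org/w/api.php?" + urlencode(params)

with urllib.request.urlopen(url) as resp:
    entity = json.load(resp)["entities"][qid]

print(entity["labels"]["en"]["value"])                      # human-readable label
print(len(entity.get("claims", {})), "properties on", qid)  # machine-readable statements
```

The numeric QID plus machine-readable claims is what lets an assistant resolve a spoken phrase to a concept without having to parse prose.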

Now, human entry into these databases is critical, as we add new facts and, in particular, meaning to each of these items. Otherwise, machines have problems deciphering our meaning, as with Apple’s Siri, where there have been complaints about dumb algorithms interpreting requests.

The knowledge of future machines could be shaped by you and me, not just tech companies and PhDs.

But this effort needs money

Wikimedia’s executive director, Katherine Maher, has prodded and cajoled these megacorporations for tapping the free resources of the Wikis. In response, Amazon and Facebook have donated millions to the Wikimedia projects. Google recently gave $3.1 million in donations.

 

Future postings on the relevance and application of scientific curation will include:

Using Scientific Content Curation as a Method for Validation and Biocuration

 

Using Scientific Content Curation as a Method for Open Innovation

 

Other posts on this site related to Content Curation and Methodology include:

The growing importance of content curation

Data Curation is for Big Data what Data Integration is for Small Data

6 Steps to More Effective Content Curation

Stem Cells and Cardiac Repair: Content Curation & Scientific Reporting

Cancer Research: Curations and Reporting

Cardiovascular Diseases and Pharmacological Therapy: Curations

Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation The Art of Scientific & Medical Curation

Exploring the Impact of Content Curation on Business Goals in 2013

Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison

conceived: NEW Definition for Co-Curation in Medical Research

The Young Surgeon and The Retired Pathologist: On Science, Medicine and HealthCare Policy – The Best Writers Among the WRITERS

Reconstructed Science Communication for Open Access Online Scientific Curation

 

 


Life-work in Engineering of Improved Heart Valve

Curator and Reporter: Larry H Bernstein, MD, FCAP

 

An authority and author of the book on cardiovascular valve devices is challenged by a patient’s mother to go beyond what is available. The results are splendid after re-engineering the design to fit the problem.

 

Reverse Engineering A Human Heart Valve

By Jim Pomager

aortic valve – a remarkable piece of biomechanical engineering

The aortic valve is a remarkable piece of biomechanical engineering. On any given day, the leaflets (or cusps) of a healthy aortic valve will open and close 100,000+ times, allowing the proper amount of blood to flow from the heart to the rest of the body. Over a lifetime, a healthy valve endures more than 3.4 billion heartbeats.

Unfortunately, the aortic valve doesn’t always remain healthy. (What organ does?) According to the American Heart Association, up to 1.5 million people in the United States suffer from aortic stenosis (AS), a calcification of the aortic valve that narrows its opening and restricts blood flow. In the early stages, the disease is often asymptomatic, but as it progresses, it can cause chest pain, weakness, and difficulty breathing. And in approximately 300,000 people worldwide, the condition develops into severe AS, which has a one-year survival rate of approximately 50 percent, if left untreated.

Fortunately, there are treatment options. The most common and successful is aortic valve replacement (AVR), wherein a mechanical or tissue-based valve is substituted for the diseased valve. For decades, replacement valves were implanted via open heart surgery, which involves an extended hospital stay and months of recovery. But in recent years, a promising new approach has emerged: transcatheter aortic valve implantation (TAVI), also known as transcatheter aortic valve replacement (TAVR). In TAVI, a tissue-based artificial valve is delivered into the diseased heart valve via a blood vessel, rather than through a large incision in the chest.

TAVI has many benefits, the most obvious (and compelling) of which is its noninvasiveness, which means shorter recovery times and faster attainment of quality-of-life outcomes for the patient. Replacement of a transcatheter aortic valve (TAV) can also be a minimally invasive exercise — a second TAV can simply be implanted within the first.

On the other hand, the use of TAVI procedures in U.S. hospitals is not yet widespread (though it is growing rapidly). The longevity of current-generation TAVs also remains unknown because it is an emerging technology, compared to evidence of 15+ years for surgically implanted heart valves. Plus, TAVI is only approved in the U.S. for use in AS patients who are either ineligible for surgical valve replacement or at high risk. (TAVI has been available in Europe since 2007, and clinical trials are underway in the U.S. for its use in intermediate-risk patients.)

What’s really needed is an improved TAV — one that outperforms current transcatheter valves, is as durable as a surgical valve, and operates more like … well, a healthy human aortic valve. Such a valve would open the door to TAVI’s use in the hundreds of thousands of lower-risk (and generally younger) AS patients whose only current option is a surgically implanted valve, and who would rather not have their chest opened.

Now, a man who has dedicated his professional career to studying the aortic valve has invented a new artificial valve design that he says will revolutionize TAVI. And if everything goes according to plan, his TAV will reach European patients in 2015 and U.S. patients soon after. How did he and his startup company design such technology? By reverse engineering the aortic valve.

The Man Behind The Valve

Mano Thubrikar

Mano Thubrikar quite literally wrote the book on heart valves and heart disease — two of them, in fact. His The Aortic Valve (1989) and Vascular Mechanics and Pathology (2007) are leading textbooks in cardiovascular studies, and the former is widely used as a guide in the design of bioprosthetic heart valves.

After earning an undergraduate degree in metallurgy, a master’s in materials science, and a Ph.D. in biomedical engineering, Dr. Thubrikar spent the first 30 years of his career exclusively in academic research. He studied the aortic valve and bioprostheses from almost every conceivable angle while working at the University of Virginia (UVA) and at the Carolinas Medical Center and the University of North Carolina (UNC) at Charlotte.

But in 2003, Dr. Thubrikar received a phone call that would change the trajectory of his career and set him on the path to develop a novel TAV technology. A woman contacted him to discuss her son, a 35-year-old athlete with a calcified aortic valve. The condition was the result of a bicuspid valve, a congenital condition where the aortic valve has two cusps, rather than the customary three. The man needed a valve replacement, and his only choice was to have a mechanical heart valve surgically implanted. However, the surgical valve meant he would have to stay on anticoagulants for the rest of his life, effectively ending his athletic pursuits. Dr. Thubrikar informed the mother that there just weren’t any treatments available that would allow her son to continue his active lifestyle.

“Didn’t you write the book on the aortic valve?” she asked. “Why didn’t you make a valve that my son could use?”

The conversation and question deeply affected the researcher. “I went home and was so disturbed,” he told me during a recent visit to his office. “I talked to my wife and said, “You know what? Years of research, writing papers, and giving presentations — that’s done. I now need to make a heart valve.”

Soon after, Dr. Thubrikar left Carolinas Medical Center to embark on his new mission. He joined artificial heart valve pioneer Edwards Lifesciences as a Distinguished Scientist, but left after it became clear that the company’s plans for him didn’t align with his own.

So in 2007 — coincidentally, the same year Edwards launched the first commercially available TAV device — Dr. Thubrikar returned to academia, joining the staff at the South Dakota School of Mines & Technology. There he spent the next three years working on a new artificial valve design — one based on decades of research on the physics behind the human aortic valve.

Looking To The Human Body For Design Output
According to Dr. Thubrikar’s research, the natural aortic valve follows four strong design principles for maximum longevity and optimal hemodynamic performance. Those criteria are:

1. A specific coaptation height — When the valve’s three leaflets come together to close the valve, there is some surface-to-surface contact between the leaflets, rather than an edge-to-edge seal. This safety margin helps protect against blood leakage back into the left ventricle.

2. No folds in the leaflets — Natural aortic valve cusps flex without folding. Folds would crease the tissue and cause unwanted stress on the leaflets, negatively impacting durability.

3. Minimum overall height — Extra height would produce dead space, which can lead to a variety of issues.

4. Minimum leaflet flexion — The human aortic valve manages to open completely with the leaflets moving only 70 degrees, not the 90 degrees you might expect. Again, this improves the valve’s longevity.

“You almost need to be a solid geometry design engineer to understand the math and the equations behind these principles,” he explained. “With these criteria, however, you have design parameters for the aortic valve. The mathematical equations give you the output of how an artificial valve should be designed.”

Dimensions of the natural aortic valve

 

 

Based on these four principles, Dr. Thubrikar reverse engineered the aortic heart valve, developing a new artificial valve design that mimics the aortic valve’s precise geometry. In October 2010, he launched a startup company called Thubrikar Aortic Valve, Inc. to commercialize his new creation, which he calls Optimum TAV and touts as “nature’s valve by design.”

“When someone asks me, ‘How does your valve compare with Edwards’?’ or ‘How does your valve compare with Medtronic’s?’, I say ‘We don’t compare our valve to them,'” Dr. Thubrikar told me. “We compare our valve with the natural aortic valve.”

On the surface, Optimum TAV looks similar to other artificial heart valves on the market, with three leaflets of bovine pericardium tissue mounted on a metal stent-frame. (In fact, the design is often mistaken for another widely used surgical valve.) But according to Dr. Thubrikar, it has a unique combination of features that will help it overcome the major design limitations of current-generation TAVs (if we’re going to compare). Those design limitations include:

  • Suture holes in the leaflet body — While all TAVs (including Optimum TAV) are constructed by sewing animal tissue to a metal frame, piercing the flexion zone of the leaflets leads to potential wear. Optimum TAV does not have a single suture hole in the working portion of the leaflet body.
  • Blood flow through frame — Some TAV frames are as tall as 5 cm in height, extending up into the aorta once implanted. As a result, blood must pass through the frame to enter the coronary arteries. Proteins in the blood will accumulate on the frame, and can eventually break loose and cause thromboembolisms (blood clots).  Optimum TAV is only 2 cm in height. (Related, the low height of the Thubrikar valve also makes it less likely to require a pacemaker.)
  • Thick outer frame — The thicker the frame, the smaller the valve opening will be, allowing less blood to pass through. This opening is referred to as the valve’s EOA, or effective orifice area. The average EOA of a surgical valve is around 1.9 cm2, and some TAVs have EOAs as small as 1.5 cm2 (technically, a mild form of stenosis). In bench tests, Optimum TAV’s EOA was 2.3 to 2.4 cm2. (A healthy aortic valve has an EOA of approximately 2.7 cm2.)
  • Clipped calcified leaflets — Some current TAVs are anchored to the patient’s original valve using a paper-clip like mechanism. In this design, there is the potential that the TAVs leaflets will come into contact with the old, calcified leaflets during the operation, causing wear. Optimum TAV’s design eliminates the possibility of contact between the leaflets and native valve.
  • Paravalvular leakage — In some cases, a space forms between the outside of a TAV and the surrounding heart tissue, and blood can leak through. Optimum TAV has a high skirt to prevent this type of gap from developing. In addition, Optimum TAV’s novel frame architecture allows it to conform to and seal off either a round or elliptical annulus (the ring-shaped base of the original valve). This is particularly helpful in minimizing or eliminating leakage in bicuspid patients, who often have an irregularly shaped annulus.
  • Balloon expansion — TAV frames made of stainless steel must be forced open by a balloon. The TAV’s tissue can get caught between the balloon and the frame and potentially tear. Optimum TAV’s frame is made of nitinol, which automatically expands once deployed from the catheter.

 


Optimum TAV

“Other technologies have built-in issues,” Dr. Thubrikar said. “To be able to avoid those problems in a comprehensive fashion is no small feat.”

Trial By Fire
During the two and a half years following the establishment of Thubrikar Aortic Valve, Optimum TAV seemed to be moving steadily toward market. The company raised enough funding to get started, primarily from friends, family, physicians, entrepreneurs, and technology industry executives. Patent applications were filed, suppliers were selected, valves were painstakingly produced (by hand, over one-and-a-half to two days each), and preclinical testing began.


Members of the Thubrikar Aortic Valve team (left to right): Deodatt Wadke, member of the board of directors and cofounder; Samir Wadke, executive director of business development and cofounder; Dr. Mano Thubrikar, president and founder; Samuel Evans, research engineer II; and Nikhil Heble, counsel, secretary, and cofounder

But the fledgling company was dealt a major setback in April 2013, when a fire destroyed the Horsham, Pa. office building to which the Thubrikar Aortic Valve laboratory had recently relocated (from South Dakota). All of its equipment was destroyed and needed to be replaced. The company had to relocate to nearby Norristown, Pa. Not an ideal scenario for a startup trying to make the most of extremely limited resources.

The company was undeterred by the fire, and the last year has been a successful one for Thubrikar. The company completed most of its preclinical testing (including implants in 12 animals and two diseased human cadaver hearts), reached design freeze on Optimum TAV, filed a provisional patent application for its proprietary delivery catheter, and achieved almost $2 million in total funding. Perhaps the biggest milestone came in August 2013, when Optimum TAV met the International Organization for Standardization’s (ISO’s) durability requirements by surpassing 200 million cycles in a third-party ISO certified laboratory.

The durability testing has continued, and Optimum TAV continues to function beyond 390 million cycles, which approximates 11 years in vivo. Surgical valves typically last anywhere from 12 to 18 years, and Thubrikar expects his valve to last at least that long.

“I would not be surprised if it surpasses the longevity of even the surgical valve,” he said.
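For a rough sense of that cycles-to-years conversion (a back-of-the-envelope check using the figure of roughly 100,000 valve openings per day quoted earlier in the article, not the ISO accelerated-testing protocol):

$$\frac{390 \times 10^{6}\ \text{cycles}}{1 \times 10^{5}\ \text{cycles/day} \times 365\ \text{days/year}} \approx 10.7\ \text{years}$$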

The company also received its first institutional investment, from Delaware Crossing Investor Group (DCIG), in 2014. The primary DCIG investor, Marv Woodall, led the commercialization of the world’s first stents as president of Johnson & Johnson Interventional Systems (now Cordis) and was on the board of directors of the first TAV company, Percutaneous Valve Technologies (PVT, now part of Edwards Lifesciences). Thubrikar has recruited him as its business advisor.

What Lies Ahead
Like many other developers of novel medical devices, Thubrikar Aortic Valve has decided to take its product to market through Europe initially, given European regulators’ comfort level with TAV and the FDA’s steep requirement for clinical trials. “We have spoken to the FDA and will continue to do so on a regular basis,” according to Dr. Thubrikar. “But they asked for a lot more preclinical testing than the European Notified Bodies to start a clinical trial.”

The company is now working to raise an additional $2 million to $10 million, and expects the granting of its patent for Optimum TAV in 2014. The finances will enable Thubrikar to not only conduct a first-in-human (FIH) feasibility study in up to 15 patients this year, but also to expand to a full European clinical trial of about 65 additional patients in 2015. If all goes well, a 2015 CE Mark for Optimum TAV isn’t out of the question.

However, trial success is vital, since today’s investors — and large companies in search of technology acquisitions — wait for significant clinical data to accumulate before backing a medical device. “We realize that until we actually implant the valve in a patient, other companies will think, ‘You don’t know what can go wrong,'” Dr. Thubrikar explained. “We had one big company say, ‘We will pay you four times as much once the product is in a patient.’ They want you to de-risk everything, to work out all the bugs yourself on your own dime.”

Yet Dr. Thubrikar thinks it’s only a matter of time until his life’s work finally arrives in the hands of interventional cardiologists, who he said have been “knocking at his door” since he first presented a paper on the technology in 2012. Since then, he has spoken at several of the largest interventional cardiology conferences, and word continues to spread about Optimum TAV. Like many other researchers-turned-entrepreneurs, he steadfastly believes that his invention will eventually reach the market, where it can begin helping patients — like the one whose mother contacted him a decade ago.

“If hell freezes over, if we don’t get any money, I don’t care,” he said. “I don’t care how it happens. We are going to make a heart valve. That’s the only mission in my life.”

For more information on Thubrikar Aortic Valve and Optimum TAV, visit http://tavi.us/.

 

 

 

 


Reference Genes in the Human Gut Microbiome: The BGI Catalogue

Reporter: Aviva Lev-Ari, PhD, RN

An integrated catalog of reference genes in the human gut microbiome

Nature Biotechnology (2014) doi:10.1038/nbt.2942

Received 01 April 2014

Accepted 03 June 2014

Published online 06 July 2014


Abstract

Many analyses of the human gut microbiome depend on a catalog of reference genes. Existing catalogs for the human gut microbiome are based on samples from single cohorts or on reference genomes or protein sequences, which limits coverage of global microbiome diversity. Here we combined 249 newly sequenced samples of the Metagenomics of the Human Intestinal Tract (MetaHit) project with 1,018 previously sequenced samples to create a cohort from three continents that is at least threefold larger than cohorts used for previous gene catalogs. From this we established the integrated gene catalog (IGC) comprising 9,879,896 genes. The catalog includes close-to-complete sets of genes for most gut microbes, which are also of considerably higher quality than in previous catalogs. Analyses of a group of samples from Chinese and Danish individuals using the catalog revealed country-specific gut microbial signatures. This expanded catalog should facilitate quantitative characterization of metagenomic, metatranscriptomic and metaproteomic data from the gut microbiome to understand its variation across populations in human health and disease.

SOURCE
Nature Biotechnology (2014) doi:10.1038/nbt.2942

 

BGI Scientists Expand Reference Genes for Human Microbiome

By Aaron Krol

July 14, 2014 | The Beijing Genomics Institute (BGI), China’s gene sequencing powerhouse, has released a set of reference genes for the human gut microbiome, in a catalogue that is substantially larger and covers a greater diversity of human populations than any previous resource. The work is described in a recent Nature Biotechnology paper, “An integrated catalog of reference genes in the human gut microbiome,” by senior author Jun Wang of BGI-Shenzhen, while the reference itself is freely available at meta.genomics.cn.

A reference set of genes that have been found in organisms living in the human gut is an essential resource for profiling the species present in a person’s microbiota, and can also help to estimate their abundance and phylogenetic relationships, or to identify species that are correlated with aspects of human health. However, as the authors note, “there has been no comprehensive and uniformly processed database that can represent the human gut microbiota around the world.” The two largest previous reference catalogues, from the MetaHIT project and the Human Microbiome Project (HMP), have contained imperfectly sequenced and redundant genes, and have only sequenced samples taken from individuals from Europe and the U.S., respectively. The BGI team combined sequencing data from both of those projects with hundreds of Chinese samples from a study of diabetes, plus 249 newly-sequenced samples from Europe. In order to adequately cover the genomes of organisms that occur commonly in the human gut, but at such low abundance that few reads can be recovered from them, the team also integrated reference genomes of bacteria and archaea from the NCBI and EMBL databases for any species that were 90% covered by the combined samples used in this project.

The resulting catalogue, the Integrated Gene Catalogue (IGC), contains nearly 10 million unique genes — a greater than 70% increase over either the MetaHIT or HMP resources. Because of a stricter quality control pipeline, the IGC also eliminates large proportions of the short or fragmented genes found in the prior databases. When the IGC was used to analyze metagenomes from both the sample sets used in its creation and three independent sample sets, between 74 and 81% of sequencing data could be mapped to the IGC in all cases. The authors suggest that this is “close to the maximum achievable mapping rates,” given the estimate that prokaryotic genomes have on average 87% gene content.
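As a rough illustration of how such a mapping rate can be computed once metagenomic reads have been aligned against the catalogue, a minimal Python sketch follows; the use of pysam, the BAM file name and the alignment step itself are assumptions made for illustration, not details of the paper’s actual pipeline.

    # Hypothetical sketch: fraction of reads with an alignment to the IGC,
    # given a BAM of reads mapped against the catalogue (e.g., with bwa or bowtie2).
    import pysam

    def mapping_rate(bam_path: str) -> float:
        """Return the fraction of primary reads that aligned."""
        mapped, total = 0, 0
        with pysam.AlignmentFile(bam_path, "rb") as bam:
            for read in bam:
                # Count each read once: skip secondary/supplementary records.
                if read.is_secondary or read.is_supplementary:
                    continue
                total += 1
                if not read.is_unmapped:
                    mapped += 1
        return mapped / total if total else 0.0

    # A value near 0.74-0.81, as reported for the IGC, would mean most gut
    # metagenomic reads hit a catalogued gene.
    # print(mapping_rate("sample_vs_IGC.bam"))

A rate computed this way depends on the aligner and its settings, which is one reason the authors frame the 74–81% figure against the roughly 87% average gene content of prokaryotic genomes rather than against 100%.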

The impressive breadth of the IGC allows for some interesting observations. Individual samples used in the project contained roughly 760,000 genes on average, and any two samples shared roughly one third of those genes. Each sample contributed an average of 469 genes found in no other sample. As in other microbiome references, the species identity of most genes remains a mystery; only around 16% could be confidently assigned to a genus. While nearly all species found in a large proportion of samples were already known from previous studies to be part of the human microbiota, the wine-fermenting genus Oenococcus, found in 13.5% of samples in the IGC, had never previously been shown to live in the human gut.
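These per-sample summaries are straightforward to reproduce once each sample’s detected genes are represented as a set of catalogue gene IDs. The Python sketch below is a hypothetical illustration; in particular, the pairwise “sharing” measure used here (overlap relative to the smaller gene set) is an assumption, since this summary does not state the exact definition the authors used.

    # Hypothetical sketch: average gene count, pairwise sharing and sample-specific
    # genes, given a dict of {sample_id: set_of_gene_ids} built from mapping results.
    from itertools import combinations

    def summarize_gene_sets(sample_genes):
        n = len(sample_genes)
        mean_genes = sum(len(g) for g in sample_genes.values()) / n

        # Pairwise sharing: shared genes as a fraction of the smaller gene set.
        pair_shares = [
            len(a & b) / min(len(a), len(b))
            for a, b in combinations(sample_genes.values(), 2)
        ]
        mean_share = sum(pair_shares) / len(pair_shares)

        # Genes seen in exactly one sample, averaged over samples.
        counts = {}
        for genes in sample_genes.values():
            for g in genes:
                counts[g] = counts.get(g, 0) + 1
        unique_per_sample = sum(1 for c in counts.values() if c == 1) / n

        return mean_genes, mean_share, unique_per_sample

On IGC-scale data, the figures quoted above would correspond to a mean of about 760,000 genes per sample, a mean pairwise sharing of roughly one third, and about 469 sample-specific genes per sample.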

Based on their experience creating the IGC, the BGI team offer a number of suggestions for future investigation of the human gut microbiome. They speculate that “we may have reached saturated coverage of core gene content and functions, but rare genes will continue to be discovered,” adding that most of the new genes included in the IGC were found in only a small minority of individuals. They also propose that, while deeper sequencing of individuals is a tempting way to get better read depth of low-abundance species, it may in fact be more cost-effective to simply sequence more samples at current read depths. In the case of the low-abundance genus Enterococcus, the IGC was able to improve coverage by over 70% thanks to a handful of samples where the genus was found in unexpectedly high abundance, a finding that may be repeated with other organisms.

Discovering more of these rare genes, the authors suggest, may shed a great deal of light on important functional differences between humans’ commensal organisms. While the genes of known function that are found at high frequency in the IGC tend to cover basic processes like metabolism and signal transduction, those found in fewer than 1% of individuals tend to be involved in adaptive processes, like DNA repair, antibiotic resistance, and responses to phages and the human immune system. Covering more human populations is also likely to yield new functional insights: in a comparison of Danish to Chinese samples, using the IGC as a map, genes highly divergent between the two groups tended to be involved in the metabolism of specific carbohydrates, amino acids, and vitamins, strongly suggesting a relationship with human diet.

“Similar to the field of human genetics, where the search for new alleles has progressed from common to rare,” the authors conclude, “our data indicate that cataloging of our ‘other genome,’ the human gut microbiome, is also entering the stage for identification of rare or individual-specific genes.”

With the IGC made available to all researchers around the world online, it is likely that in the coming months new studies will appear using the IGC as a reference map, helping to show whether outside groups find the new catalogue a useful and reliable tool for studying the human microbiome.

 SOURCE

 

 

Read Full Post »

Larry H Bernstein, MD, FCAP, Reporter and Curator

http://pharmaceyticalinnovation.com/7/10/2014/A new relationship identified in preterm stress and development of autism or schizophrenia/

 

This is a fascinating study.  It is of considerable interest because it deals with several items that need to be addressed with respect to neurodevelopmental disruptive disorders.  It leaves open some aspects that are known but were not subject to investigation in these experiments.  There is also no reporting of some associations that are known at the time of development of these disorders – autism spectrum disorder and schizophrenia.  Of course, I don’t know how it would be possible to also look at prediction of a possible relationship to later development of mood disorders.

  1. The placenta functions as an endocrine organ in the conversion of androstenedione to testosterone during pregnancy, which is delivered to the fetus.
  2. The conversion occurs by a known enzymatic pathway, and there is a sex difference: testosterone is depressed in males, while females are not affected.
  3. There is a greater susceptibility of males to autism and schizophrenia than of females, which I, as a reader, had not known; if this is true, it would lend some credence to a biological advantage that protects the females of animal species, and might raise some interest in what relationship it has to protecting multitasking in females.
  4. It is well known from the twin studies that have been carried out that in identical twins there is discordance as a rule.  Those studies are old, and they did not examine whether the other identical twin might be anywhere on the autism spectrum (not then termed a “spectrum”).
  5. However, there is a clear effect of stress on “gene expression”, and in this case we are looking at enzymatic suppression at the placental level affecting transcriptional activity in the male fetus.  The same genetic signature exists in the male genetic profile, so we are not looking at a clear somatic mutation in this study.
  6. There is also a much less specific association with the MTHFR gene mutation at either one or two loci. This would have to be looked at as a possible separate post-translational somatic mutation.
  7. Whether there is another component expressed later in the function of the zinc metalloproteinase under stress in the affected subject is worth considering, but it cannot be commented on with respect to this study.

Penn Team Links Placental Marker of Prenatal Stress to Neurodevelopmental Problems 

By Ilene Schneider          July 8, 2014

When a woman experiences a stressful event early in pregnancy, the risk that her child will develop autism spectrum disorders or schizophrenia increases. The way in which maternal stress is transmitted to the brain of the developing fetus, leading to these problems in neurodevelopment, is poorly understood.

New findings by University of Pennsylvania School of Veterinary Medicine scientists suggest that an enzyme found in the placenta is likely playing an important role. This enzyme, O-linked-N-acetylglucosamine transferase, or OGT, translates maternal stress into a reprogramming signal for the brain before birth. The study was supported by the National Institute of Mental Health.

“By manipulating this one gene, we were able to recapitulate many aspects of early prenatal stress,” said Tracy L. Bale, senior author on the paper and a professor in the Department of Animal Biology at Penn Vet. “OGT seems to be serving a role as the ‘canary in the coal mine,’ offering a readout of mom’s stress to change the baby’s developing brain.” Bale, who also holds an appointment in the Department of Psychiatry, co-authored the paper, published in PNAS, with postdoctoral researcher Christopher L. Howerton.

OGT is known to play a role in gene expression through chromatin remodeling, a process that makes some genes more or less available to be converted into proteins. In a study published last year in PNAS, Bale’s lab found that placentas from male mice pups had lower levels of OGT than those from female pups, and placentas from mothers that had been exposed to stress early in gestation had lower overall levels of OGT than placentas from the mothers’ unstressed counterparts.

“People think that the placenta only serves to promote blood flow between a mom and her baby, but that’s really not all it’s doing,” Bale said. “It’s a very dynamic endocrine tissue and it’s sex-specific, and we’ve shown that tampering with it can dramatically affect a baby’s developing brain.”

To elucidate how reduced levels of OGT might be transmitting signals through the placenta to a fetus, Bale and Howerton bred mice that partially or fully lacked OGT in the placenta. They then compared these transgenic mice to animals that had been subjected to mild stressors during early gestation, such as predator odor, unfamiliar objects or unusual noises, during the first week of their pregnancies.

The researchers performed a genome-wide search for genes that were affected by the altered levels of OGT and were also affected by exposure to early prenatal stress using a specific activational histone mark and found a broad swath of common gene expression patterns.

They chose to focus on one particular differentially regulated gene called Hsd17b3, which encodes an enzyme that converts androstenedione, a steroid hormone, to testosterone. The researchers found this gene to be particularly interesting in part because neurodevelopmental disorders such as autism and schizophrenia have strong gender biases, where they either predominantly affect males or present earlier in males.

Placentas associated with male mice pups born to stressed mothers had reduced levels of the enzyme Hsd17b3, and, as a result, had higher levels of androstenedione and lower levels of testosterone than normal mice.

“This could mean that, with early prenatal stress, males have less masculinization,” Bale said. “This is important because autism tends to be thought of as the brain in a hypermasculinized state, and schizophrenia is thought of as a hypomasculinized state. It makes sense that there is something about this process of testosterone synthesis that is being disrupted.”

Furthermore, the mice born to mothers with disrupted OGT looked like the offspring of stressed mothers in other ways. Although they were born at a normal weight, their growth slowed at weaning. Their body weight as adults was 10 to 20 percent lower than control mice.

Because of the key role that the hypothalamus plays in controlling growth and many other critical survival functions, the Penn Vet researchers then screened the mouse genome for genes with differential expression in the hypothalamus, comparing normal mice, mice with reduced OGT and mice born to stressed mothers.

They identified several gene sets related to the structure and function of mitochondria, the powerhouses of cells that are responsible for producing energy. And indeed, when compared by an enzymatic assay that examines mitochondrial biogenesis, both the mice born to stressed mothers and the mice born to mothers with reduced OGT had dramatically reduced mitochondrial function in the hypothalamus compared to normal mice. These studies were done in collaboration with Narayan Avadhani’s lab at Penn Vet. Such reduced function could explain why the growth patterns of the mice appeared similar until weaning, at which point energy demands go up.

“If you have a really bad furnace you might be okay if temperatures are mild,” Bale said. “But, if it’s very cold, it can’t meet demand. It could be the same for these mice. If you’re in a litter close to your siblings and mom, you don’t need to produce a lot of heat, but once you wean you have an extra demand for producing heat. They’re just not keeping up.”

Bale points out that mitochondrial dysfunction in the brain has been reported in both schizophrenia and autism patients. In future work, Bale hopes to identify a suite of maternal plasma stress biomarkers that could signal an increased risk of neurodevelopmental disease for the baby.

“With that kind of a signature, we’d have a way to detect at-risk pregnancies and think about ways to intervene much earlier than waiting to look at the term placenta,” she said.

 

Read Full Post »

Genomics, Proteomics and standards

Larry H. Bernstein, MD, FCAP, Curator

http://pharmaceuticalintelligence/7/6/2014/Genomics, Proteomics and standards

This article is a look at where the biomedical research sciences are in developing standards for development in the near term.

 

Let’s Not Wait for the FDA: Raising the Standards of Biomarker Development – A New Series

published by Theral Timpson on Tue, 07/01/2014 – 15:03

We talk a lot on this show about the potential of personalized medicine. Never before have we learned at such breakneck speed just how our bodies function. The pace of biological research staggers the mind and hints at a time when we will “crack the code” of the system that is homo sapiens, going from picking the low-hanging fruit to a more rational approach. The high-tech world has put at the fingertips of biologists just the tools to do it. There is plenty of compute and plenty of storage available to untangle, or decipher, the human body. Yet still, we talk of potential.

Chat with anyone heavily involved in the life science industry–be it diagnostics or pharma– and you’ll quickly hear that we must have better biomarkers.

Next week we launch a series, Let’s Not Wait for the FDA: Raising the Standards of Biomarker Development, where we will pursue the “hotspots” that are haunting those in the field.

The National Biomarker Development Alliance (NBDA) is a nonprofit organization based at Arizona State University and led by the formidable Anna Barker, former deputy director of the NCI. The aim of the NBDA is to identify problem areas in biomarker development–from biospecimen and sampling issues to experiment design to bioinformatics challenges–and raise the standards in each area. This series of interviews is based on their approach. We will pursue each of these topics with a special guest.

The place to start is with samples. The majority of researchers who are working on biomarker assays don’t give much thought to the “story” of their samples. Yet the quality of their research will never exceed the quality of the samples with which they start–a very scary thought according to Carolyn Compton, a former pathologist, now professor of pathology at ASU and Johns Hopkins. Carolyn worked originally as a clinical pathologist and knows first hand the issues around sample degradation. She left the clinic when she was recruited to the NCI with the mission of bringing more awareness to the issue of biospecimens. She joins us as our first guest in the series.

That Carolyn has straddled the world of the clinic and the world of research is key to her message. And it’s key to this series. As we see an increased push to “translate” research into clinical applications, we find that these two worlds do not work enough together.

Researchers spend a lot of time analyzing data and developing causal relationships from certain biological molecules to a disease. But how often do these researchers consider how the history of a sample might be altering their data?

“Garbage in, garbage out,” says Carolyn, who links low-quality samples with the abysmally low reproducibility rate of much published research.

Two of our guests in the series have worked on the adaptive iSpy breast cancer trials. These are innovative clinical trials that have been designed to “adapt” to the specific biology of those in the trial. Using the latest advances in genetics, the iSPY trials aim to match experimental drugs with the molecular makeup of tumors most likely to respond to them. And the trials are testing multiple drugs at once.

Don Berry is known for bringing statistics to clinical trials. He designed the iSpy trials and joins us to explain how these new trials work and of the promise of the adaptive design.

Laura Esserman is the director of the breast cancer center at UCSF and has been heavily involved in the implementation of the iSpy trials. Esserman is concerned that “if we keep doing conventional clinical trials, people are going to give up on doing them.” An MBA as well as an MD, Esserman brings what she learned about innovation in the high-tech industry to treatment for breast cancer.

From there we turn to the topic of “systems biology,” where we will chat with George Poste, a tour de force when it comes to considering all of the various aspects of biology. Anyone who has ever been present for one of George’s presentations has no doubt come away scratching their head, wondering if we’ll ever really glimpse the whole system that is a human being. If there is one brain that has seen all the rooms and hallways of our complex system, it’s George Poste.

We’ll finish the series by interviewing David Haussler from UCSC, of Genome Browser fame. Recently Haussler has worked extensively on an NCI project, The Cancer Genome Atlas, to bring together data sets and connect cancer researchers around the world. What are the promise and pitfalls David sees with the latest bioinformatics tools?

George Poste says that in the literature we have identified 150,000 biomarkers that have causal linkage to disease. Yet only 100 of these have been commercialized and are used in the clinic. Why is the number so low? We hope to come up with some answers in this series.

 

 

Why Hasn’t Clinical Genetics Taken Off? (part 2)

published by Sultan Meghji on Fri, 06/20/2014 – 14:49

 

In my previous post, I made the broad comment that education of the patient and front line doctors was the single largest barrier to entry for clinical genetics. Here I look at the steps in the scientific process and where the biggest opportunities lie:

The Sequencing (still)

PCR is a perfectly reasonable technology for sequencing in the research lab today, but the current configuration of technologies needs to change. We need to move away from an expert-level skill set and a complicated chemistry process in the lab to a disposable, consumer-friendly set of technologies. I’m not convinced PCR is the right technology for that and would love to see nanopore be a serious contender, but a lack of funding for a broad spectrum of both physics-only and physical-electrical startups has slowed the progress of these technologies. And waiting in the wings, other technologies are spinning up in research labs around the world. Price is no longer a serious problem in the space – reliable, repeatable, easy-to-use sequencing technologies are. The complexity of the current technology (both in terms of sample preparation and machine operation) is a big hurdle.

The Analysis (compute)

Over the last few years, quite a bit of commentary and effort has been put into making the case that the compute is a significant challenge (including more than a few comments by yours truly in that vein!). Today, it can be said with total confidence that compute is NOT a problem. Compute has been commoditized. Thanks to excellent new software, advanced platforms and new hardware, the analysis is a trivial exercise and costs a tiny amount of money (under $25 per sample on a cloud provider appears to be the going rate for a clinical exome in terms of platform and infrastructure cost). Integration with the sequencer and downstream medical middleware is the biggest opportunity.

The Analysis (value)

The bigger challenge on the analysis side is choosing the specific things to analyze, mapped to the needs of the patient. We are still in a world where the vast majority of sequencing work is being done in support of a specific patient with a specific disease. There isn’t even broad consensus yet in the scientific community about the basics of the pipeline (see my blog post here for an attempt at capturing what I’m seeing in the market). A movement away from the recent trend of studying specific indications (especially cancer) is called for. Broadening the sample population will allow us to pick simpler, clearer and easier pipelines, which will then make them more adoptable. It would be a massive benefit to the world if the scientific, medical and regulatory communities would get together and start creating, in a crowdsourced manner, a small number of databases that are specifically useful to healthy people, targeting things like nutrition, athletics, metabolism, and other normal aspects of daily life: a dataset that, when any one person’s DNA is referenced against it, would turn up something useful. Including the regulators is key so that we can begin to move away from the old-fashioned model of clearances that still permeates the industry.

The Regulators

Beyond the broader issues around education I referenced in my previous post, a massive upgrade of the regulatory infrastructure is needed. We still live in a world where fax machines, overnight shipping of paper documents and personal relationships are all more important than the quality of the science you, as an innovator, are bringing to bear.

Consider the recent massive growth in wearables, fitness trackers and other instrumentation local to the human body. Why must we treat clinical genetics simply as a diagnostic and not, as it should be, as a fundamental set of quantitative data about your body that you can leverage in a myriad of ways? Direct-to-consumer (DTC) genetics companies, most notably 23andme, have approached this problem poorly – instead of making it valuable to the average consumer, they have attempted to straddle the line between medical and not. The Fitbit model has shown very clearly that lifestyle activities can be directly harnessed to build commercial value in scaling health-related activities without becoming a regulatory issue. It’s time for genetics to do the same thing.

 

 

Development and Role of the Human Reference Sequence in Personal Genomics

Posted by @finchtalk on July 3, 2014

discovery in a digital world


A few weeks back, we published a review about the development and role of the human reference genome. A key point is that the reference genome is not a single sequence. Instead, it is an assembly of consensus sequences that are designed to deal with variation in the human population and uncertainty in the data. The reference is a map and, like a geographical map, evolves through increased understanding over time.

From the Wiley On Line site:

Abstract

Genome maps, like geographical maps, need to be interpreted carefully. Although maps are essential to exploration and navigation they cannot be completely accurate. Humans have been mapping the world for several millennia, but genomes have been mapped and explored for just a single century with the greatest advancements in making a sequence reference map of the human genome possible in the past 30 years. After the deoxyribonucleic acid (DNA) sequence of the human genome was completed in 2003, the reference sequence underwent several improvements and today provides the underlying comparative resource for a multitude of genetic assays and biochemical measurements. However, the ability to simplify genetic analysis through a single comprehensive map remains an elusive goal.

Key Concepts:

  • Maps are incomplete and contain errors.
  • DNA sequence data are interpreted through biochemical experiments or comparisons to other DNA sequences.
  • A reference genome sequence is a map that provides the essential coordinate system for annotating the functional regions of the genome and comparing differences between individuals’ genomes.
  • The reference genome sequence is always a product of understanding at a set point in time and continues to evolve.
  • DNA sequences evolve through duplication and mutation and, as a result, contain many repeated sequences of different sizes, which complicates data analysis.
  • DNA sequence variation happens on large and small scales with respect to the lengths of the DNA differences to include single base changes, insertions, deletions, duplications and rearrangements.
  • DNA sequences within the human population undergo continual change and vary highly between individuals.
  • The current reference genome sequence is a collection of sequences, an assembly, that include sequences assembled into chromosomes, sequences that are part of structurally complex regions that cannot be assembled, patches (fixes) that cannot be included in the primary sequence, and high variability sequences that are organised into alternate loci.
  • Genetic analysis is error prone and the data require validation because the methods for collecting DNA sequences create artifacts and the reference sequence used for comparative analyses is incomplete.

Keywords: DNA sequencing
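To make the “coordinate system” idea in the key concepts above concrete, here is a small hypothetical Python sketch that parses a single VCF-style record and classifies the variant relative to the reference sequence; the record itself is invented for illustration, and real VCF files carry many more fields.

    # Hypothetical sketch: a variant is described by a reference coordinate plus
    # reference and alternate alleles, so SNVs, insertions and deletions all share
    # one coordinate system.
    def classify_variant(ref: str, alt: str) -> str:
        if len(ref) == 1 and len(alt) == 1:
            return "single-base change (SNV)"
        if len(ref) < len(alt):
            return "insertion"
        if len(ref) > len(alt):
            return "deletion"
        return "multi-base substitution"

    record = "chr17\t43071077\t.\tA\tAG"   # invented record: CHROM POS ID REF ALT
    chrom, pos, _, ref, alt = record.split("\t")
    print(chrom, pos, classify_variant(ref, alt))   # -> chr17 43071077 insertion

Because the coordinates only mean something with respect to a particular assembly, the same variant can map to different positions on different reference versions, which is one practical consequence of the reference continuing to evolve.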

 

Read Full Post »

USPTO Guidance On Patentable Subject Matter


Curator and Reporter: Larry H Bernstein, MD, FCAP


Revised 4 July, 2014

http://pharmaceuticalintelligence.com/2014/07/03/uspto-guidance-on-patentable-subject-matter

 

I came across a few recent articles on the subject of US Patent Office guidance on patentability, as well as on Supreme Court rulings on claims. I filed several patents on clinical laboratory methods early in my career upon the recommendation of my brother-in-law, now deceased.  Years later, with both my brother-in-law and my patent attorney no longer alive, I look back and ask what I have learned, more than $100,000 later, with many trips to the USPTO, opportunities not taken, and a one-year provisional patent behind me.

My conclusion is

(1) that patents are for the protection of the innovator, who might realize legal protection, but the cost and the time investment can well exceed the cost of starting up and building a small enterprise, which would be the next step.

(2) The other thing to consider is the capability of the lawyer or firm that represents you.  A patent that is well done can be expected to take 5-7 years to go through with due diligence.   I would not expect it to be done well by a university with many other competing demands. I might be wrong in this respect, as the climate has changed, and research universities have sprouted engines for change.  Experienced and productive faculty are encouraged or allowed to form their own such entities.

(3) The emergence of Big Data, computational biology, and very large data warehouses for data use and integration has changed the landscape. The resources required to pursue research along these lines are quite beyond an individual’s sole capacity to marshal without outside funding.  In addition, the changed designated requirement of first to publish has muddied the water.

Of course, one can propose without anything published in the public domain. That makes it possible for corporate entities to file thousands of patents, whether or not there is actual validation at the time of filing.  It would be a quite trying experience for anyone to pursue in the USPTO without some litigation over ownership of patent rights. At this stage of technology development, I have come to realize that the organization of research, peer review, and archiving of data is still at a stage where some of the best systems available for storing and accessing data still come considerably short of what is needed for the most complex tasks, even though improvements have come at an exponential pace.

I shall not comment on the contested views held by physicists, chemists, biologists, and economists over the completeness of guiding theories strongly held.  Only history will tell.  Beliefs can hold a strong sway, and have many times held us back.

I am not an expert on legal matters, but it is incomprehensible to me that issues concerning technology innovation can be adjudicated in the Supreme Court, as has occurred in recent years. I have postgraduate degrees in Medicine and Developmental Anatomy, and post-medical training in pathology and laboratory medicine, as well as experience in analytical and research biochemistry.  It is beyond the expected competencies of the Supreme Court, or even of the Federal District Courts, to hear these types of cases, which we see with increasing frequency, as has occurred with respect to the development and application of the human genome.

I’m not sure that the developments can be resolved for the public good without a more full development of an open-access system of publishing. Now I present some recent publication about, or published by the USPTO.

DR ANTHONY MELVIN CRASTO

Dr. Anthony Melvin Crasto – Organic Chemistry and New Drug Development


USPTO Guidance On Patentable Subject Matter: Impediment to Biotech Innovation

Joanna T. Brougher, David A. Fazzolare J Commercial Biotechnology 2014 20(3):Brougher

Abstract

In June 2013, the U.S. Supreme Court issued a unanimous decision upending more than three decades’ worth of established patent practice when it ruled that isolated gene sequences are no longer patentable subject matter under 35 U.S.C. Section 101. While many practitioners in the field believed that the USPTO would interpret the decision narrowly, the USPTO actually expanded the scope of the decision when it issued its guidelines for determining whether an invention satisfies Section 101.

The guidelines were met with intense backlash with many arguing that they unnecessarily expanded the scope of the Supreme Court cases in a way that could unduly restrict the scope of patentable subject matter, weaken the U.S. patent system, and create a disincentive to innovation. By undermining patentable subject matter in this way, the guidelines may end up harming not only the companies that patent medical innovations, but also the patients who need medical care.  This article examines the guidelines and their impact on various technologies.

Keywords:   patent, patentable subject matter, Myriad, Mayo, USPTO guidelines

Full Text: PDF

References

35 U.S.C. Section 101 states: “Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.”

Prometheus Laboratories, Inc. v. Mayo Collaborative Services, 566 U.S. ___ (2012)

Association for Molecular Pathology et al., v. Myriad Genetics, Inc., 569 U.S. ___ (2013).

Parke-Davis & Co. v. H.K. Mulford Co., 189 F. 95, 103 (C.C.S.D.N.Y. 1911)

USPTO. Guidance For Determining Subject Matter Eligibility Of Claims Reciting Or Involving Laws of Nature, Natural Phenomena, & Natural Products.

http://www.uspto.gov/patents/law/exam/myriad-mayo_guidance.pdf

Funk Brothers Seed Co. v. Kalo Inoculant Co., 333 U.S. 127, 131 (1948)

USPTO. Guidance For Determining Subject Matter Eligibility Of Claims Reciting Or Involving Laws of Nature, Natural Phenomena, & Natural Products.

http://www.uspto.gov/patents/law/exam/myriad-mayo_guidance.pdf

Courtney C. Brinckerhoff, “The New USPTO Patent Eligibility Rejections Under Section 101.” PharmaPatentsBlog, published May 6, 2014, accessed http://www.pharmapatentsblog.com/2014/05/06/the-new-patent-eligibility-rejections-section-101/

Courtney C. Brinckerhoff, “The New USPTO Patent Eligibility Rejections Under Section 101.” PharmaPatentsBlog, published May 6, 2014, accessed http://www.pharmapatentsblog.com/2014/05/06/the-new-patent-eligibility-rejections-section-101/

DOI: http://dx.doi.org/10.5912/jcb664

 

Science 4 July 2014; 345 (6192): pp. 14-15  DOI: http://dx.doi.org/10.1126/science.345.6192.14
  • IN DEPTH

INTELLECTUAL PROPERTY

Biotech feels a chill from changing U.S. patent rules

A 2013 Supreme Court decision that barred human gene patents is scrambling patenting policies.

PHOTO: MLADEN ANTONOV/AFP/GETTY IMAGES

A year after the U.S. Supreme Court issued a landmark ruling that human genes cannot be patented, the biotech industry is struggling to adapt to a landscape in which inventions derived from nature are increasingly hard to patent. It is also pushing back against follow-on policies proposed by the U.S. Patent and Trademark Office (USPTO) to guide examiners deciding whether an invention is too close to a natural product to deserve patent protection. Those policies reach far beyond what the high court intended, biotech representatives say.

“Everything we took for granted a few years ago is now changing, and it’s generating a bit of a scramble,” says patent attorney Damian Kotsis of Harness Dickey in Troy, Michigan, one of more than 15,000 people who gathered here last week for the Biotechnology Industry Organization’s (BIO’s) International Convention.

At the meeting, attorneys and executives fretted over the fate of patent applications for inventions involving naturally occurring products—including chemical compounds, antibodies, seeds, and vaccines—and traded stories of recent, unexpected rejections by USPTO. Industry leaders warned that the uncertainty could chill efforts to commercialize scientific discoveries made at universities and companies. Some plan to appeal the rejections in federal court.

USPTO officials, meanwhile, implored attendees to send them suggestions on how to clarify and improve its new policies on patenting natural products, and even announced that they were extending the deadline for public comment by a month. “Each and every one of you in this room has a moral duty … to provide written comments to the PTO,” patent lawyer and former USPTO Deputy Director Teresa Stanek Rea told one audience.

At the heart of the shake-up are two Supreme Court decisions: the ruling last year in Association for Molecular Pathology v. Myriad Genetics Inc. that human genes cannot be patented because they occur naturally (Science, 21 June 2013, p. 1387); and the 2012 Mayo v. Prometheus decision, which invalidated a patent on a method of measuring blood metabolites to determine drug doses because it relied on a “law of nature” (Science, 12 July 2013, p. 137).

Myriad and Mayo are already having a noticeable impact on patent decisions, according to a study released here. It examined about 1000 patent applications that included claims linked to natural products or laws of nature that USPTO reviewed between April 2011 and March 2014. Overall, examiners rejected about 40%; Myriad was the basis for rejecting about 23% of the applications, and Mayo about 35%, with some overlap, the authors concluded. That rejection rate would have been in the single digits just 5 years ago, asserted Hans Sauer, BIO’s intellectual property counsel, at a press conference. (There are no historical numbers for comparison.) The study was conducted by the news service Bloomberg BNA and the law firm Robins, Kaplan, Miller & Ciresi in Minneapolis, Minnesota.
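Taking the reported percentages at face value, the size of that overlap can be back-calculated with simple inclusion-exclusion, under the assumption (not stated in this summary of the study) that the roughly 40% overall rejection rate counts only Myriad- and/or Mayo-based rejections:

    \[
    P(\mathrm{Myriad} \cup \mathrm{Mayo}) = P(\mathrm{Myriad}) + P(\mathrm{Mayo}) - P(\mathrm{Myriad} \cap \mathrm{Mayo}),
    \]
    \[
    0.40 \approx 0.23 + 0.35 - P(\mathrm{Myriad} \cap \mathrm{Mayo})
    \;\Rightarrow\;
    P(\mathrm{Myriad} \cap \mathrm{Mayo}) \approx 0.18 .
    \]

That is, on the order of 18% of the examined applications would have drawn both objections; this is an illustrative estimate only, since the study itself did not publish the overlap.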

USPTO is extending the decisions far beyond diagnostics and DNA?

The numbers suggest USPTO is extending the decisions far beyond diagnostics and DNA, attorneys say. Harness Dickey’s Kotsis, for example, says a client recently tried to patent a plant extract with therapeutic properties; it was different from anything in nature, Kotsis argued, because the inventor had altered the relative concentrations of key compounds to enhance its effect. Nope, decided USPTO, too close to nature.

In March, USPTO released draft guidance designed to help its examiners decide such questions, setting out 12 factors for them to weigh. For example, if an examiner deems a product “markedly different in structure” from anything in nature, that counts in its favor. But if it has a “high level of generality,” it gets dinged.

The draft has drawn extensive criticism. “I don’t think I’ve ever seen anything as complicated as this,” says Kevin Bastian, a patent attorney at Kilpatrick Townsend & Stockton in San Francisco, California. “I just can’t believe that this will be the standard.”

USPTO officials appear eager to fine-tune the draft guidance, but patent experts fear the Supreme Court decisions have made it hard to draw clear lines. “The Myriad decision is hopelessly contradictory and completely incoherent,” says Dan Burk, a law professor at the University of California, Irvine. “We know you can’t patent genetic sequences,” he adds, but “we don’t really know why.”

Get creative in using Draft Guidelines!

For now, Kotsis says, applicants will have to get creative to reduce the chance of rejection. Rather than claim protection for a plant extract itself, for instance, an inventor could instead patent the steps for using it to treat patients. Other biotech attorneys may try to narrow their patent claims. But there’s a downside to that strategy, they note: Narrower patents can be harder to protect from infringement, making them less attractive to investors. Others plan to wait out the storm, predicting USPTO will ultimately rethink its guidance and ease the way for new patents.

 

Public comment period extended

USPTO has extended the deadline for public comment to 31 July, with no schedule for issuing final language. Regardless of the outcome, however, Stanek Rea warned a crowd of riled-up attorneys that, in the world of biopatents, “the easy days are gone.”

 

United States Patent and Trademark Office

Today we published and made electronically available a new edition of the Manual of Patent Examining Procedure (MPEP).

Manual of Patent Examining Procedure (uspto.gov): http://www.uspto.gov/web/offices/pac/mpep/index.html

Summary of Changes

PDF Title Page
PDF Foreword
PDF Introduction
PDF Table of Contents
PDF Chapter 600 – Parts, Form, and Content of Application
PDF Chapter 700 – Examination of Applications
PDF Chapter 800 – Restriction in Applications Filed Under 35 U.S.C. 111; Double Patenting
PDF Chapter 900 – Prior Art, Classification, and Search
PDF Chapter 1000 – Matters Decided by Various U.S. Patent and Trademark Office Officials
PDF Chapter 1100 – Statutory Invention Registration (SIR); Pre-Grant Publication (PGPub) and Preissuance Submissions
PDF Chapter 1200 – Appeal
PDF Chapter 1300 – Allowance and Issue
PDF Appendix L – Patent Laws
PDF Appendix R – Patent Rules
PDF Appendix P – Paris Convention
PDF Subject Matter Index
PDF Zipped version of the MPEP current revision in the PDF format.

Manual of Patent Examining Procedure (MPEP), Ninth Edition, March 2014

The USPTO continues to offer an online discussion tool for commenting on selected chapters of the Manual. To participate in the discussion and to contribute your ideas go to:
http://uspto-mpep.ideascale.com.


Note: For current fees, refer to the Current USPTO Fee Schedule.
Consolidated Laws – The patent laws in effect as of May 15, 2014. Consolidated Rules – The patent rules in effect as of May 15, 2014.  MPEP Archives (1948 – 2012)
Current MPEP: Searchable MPEP

The documents updated in the Ninth Edition of the MPEP, dated March 2014, include changes that became effective in November 2013 or earlier.
All of the documents have been updated for the Ninth Edition except Chapters 800, 900, 1000, 1300, 1700, 1800, 1900, 2000, 2300, 2400, 2500, and Appendix P.
More information about the changes and updates is available from the “Blue Page – Introduction” of the Searchable MPEP or from the “Summary of Changes” link to the HTML and PDF versions provided below.

Discuss the Manual of Patent Examining Procedure (MPEP)

Welcome to the MPEP discussion tool!

We have received many thoughtful ideas on Chapters 100-600 and 1800 of the MPEP as well as on how to improve the discussion site. Each and every idea submitted by you, the participants in this conversation, has been carefully reviewed by the Office, and many of these ideas have been implemented in the August 2012 revision of the MPEP and many will be implemented in future revisions of the MPEP. The August 2012 revision is the first version provided to the public in a web based searchable format. The new search tool is available at http://mpep.uspto.gov. We would like to thank everyone for participating in the discussion of the MPEP.

We have some great news! Chapters 1300, 1500, 1600 and 2400 of the MPEP are now available for discussion. Please submit any ideas and comments you may have on these chapters. Also, don’t forget to vote on ideas and comments submitted by other users. As before, our editorial staff will periodically be posting proposed new material for you to respond to, and in some cases will post responses to some of the submitted ideas and comments. Recently, we have received several comments concerning the Leahy-Smith America Invents Act (AIA). Please note that comments regarding the implementation of the AIA should be submitted to the USPTO via email to aia_implementation@uspto.gov or via postal mail, as indicated at the America Invents Act Web site. Additional information regarding the AIA is available at www.uspto.gov/americainventsact. We have also received several comments suggesting policy changes, which have been routed to the appropriate offices for consideration. We really appreciate your thinking and recommendations!

FDA Guidance for Industry: Electronic Source Data in Clinical Investigations


The FDA published its new Guidance for Industry (GfI) – “Electronic Source Data in Clinical Investigations” in September 2013.
The Guidance defines the expectations of the FDA concerning electronic source data generated in the context of clinical trials. Find out more about this Guidance.
http://www.gmp-compliance.org/enews_4288_FDA%20Guidance%20for%20Industry%3A%20Electronic%20Source%20Data%20in%20Clinical%20Investigations
_8534,8457,8366,8308,Z-COVM_n.html

After more than 5 years and two draft versions, the final version of the Guidance for Industry (GfI) – “Electronic Source Data in Clinical Investigations” was published in September 2013. This new FDA Guidance defines the FDA’s expectations for sponsors, CROs, investigators and other persons involved in the capture, review and retention of electronic source data generated in the context of FDA-regulated clinical trials. In an effort to encourage the modernization and increased efficiency of processes in clinical trials, the FDA clearly supports the capture of electronic source data and emphasizes the agency’s intention to support activities aimed at ensuring the reliability, quality, integrity and traceability of this source data, from its electronic source to the electronic submission of the data in the context of an authorization procedure. The Guidance addresses aspects such as data capture, data review and record retention. When the computerized systems used in clinical trials are described, the FDA recommends that the description not only focus on the intended use of the system, but also on data protection measures and the flow of data across system components and interfaces. In practice, the pharmaceutical industry needs to meet significant requirements regarding organisation, planning, specification and verification of computerized systems in the field of clinical trials. The FDA also mentions in the Guidance that it does not intend to apply 21 CFR Part 11 to electronic health records (EHR).

Author: Oliver Herrmann, Q-Infiity
Source: http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM328691.pdf
Webinar: https://collaboration.fda.gov/p89r92dh8wc

 

Read Full Post »

Good and Bad News Reported for Ovarian Cancer Therapy

Reporter, Curator: Stephen J. Williams, Ph.D.

 

In a recent Fierce Biotech report

“FDA review red-flags AstraZeneca’s case for ovarian cancer drug olaparib”,

John Carroll reports on a disappointing ruling by the FDA on AstraZeneca’s PARP1 inhibitor olaparib for maintenance therapy in women with cisplatin refractory ovarian cancer with BRCA mutation.   Early clinical investigations had pointed to efficacy of PARP inhibitors in ovarian tumors carrying the BRCA mutation. The scientific rationale for using PARP1 inhibitors in BRCA1/2 deficiency was quite clear:

A. DNA damage can result in:

  1. Double-strand breaks (DSB)

     a. DSB can be repaired by efficient homologous recombination (HR) or less efficient non-homologous end joining (NHEJ)

     b. BRCA1 is involved in RAD51-dependent HR at DSB sites

     c. In BRCA1 deficiency, DSB are repaired by the less efficient NHEJ

  2. Single-strand breaks and damage (SSB)

     a. PARP1 is activated by DNA damage and poly-ADP-ribosylates histones and other proteins, marking DNA for SSB repair

     b. SSB repair is usually base excision repair (BER) or sometimes nucleotide excision repair (NER)

B. If PARP is inhibited, then SSB get converted to DSB

C. In a BRCA1/2-deficient background, repair is forced to the less efficient NHEJ, thereby perpetuating some DNA damage upon exposure to a DNA-damaging agent

 

A good review explaining the pharmacology behind the rationale of PARP inhibitors in BRCA deficient breast and ovarian cancer is given by Drs. Christina Annunziata and Susan E. Bates in PARP inhibitors in BRCA1/BRCA2 germline mutation carriers with ovarian and breast cancer

(http://f1000.com/prime/reports/b/2/10/), and below is a nice figure from their paper:

 

[Figure: PARP inhibition and DNA damage repair in BRCA1/BRCA2-deficient cells]

(from Christina M Annunziata and Susan E Bates. PARP inhibitors in BRCA1/BRCA2 germline mutation carriers with ovarian and breast cancer.  F1000 Biol Reports, 2010; 2:10.)  Creative Commons

Dr. Sudipta Saha’s post BRCA1 a tumour suppressor in breast and ovarian cancer – functions in transcription, ubiquitination and DNA repair discusses how BRCA1 affects the double strand DNA repair process, augments histone modification, as well as affecting expression of DNA repair genes.

Dana Farber’s Dr. Ralph Scully, Ph.D., in Exploiting DNA Repair Targets in Breast Cancer (http://www.dfhcc.harvard.edu/news/news/article/5402/), explains his research investigating why multiple DNA repair pathways may have to be targeted with PARP therapy concurrent with BRCA1 deficiency.

 

However, FDA investigators voiced their skepticism of AstraZeneca’s clinical results, namely:

  • Small number of patients enrolled
  • BRCA1/2 cohort were identified retrospectively
  • results skewed by false benefit from “underperforming” control arm
  • possible inadvertent selection bias
  • a hazard ratio suggesting improvement in progression-free survival, but with a questionable risk/benefit profile

The FDA investigators released their report two days before an expert panel would be releasing their own report (reported in the link below from FierceBiotech)

UPDATED: FDA experts spurn AstraZeneca’s pitch for ovarian cancer drug olaparib

in which the expert panel reiterated the findings of the FDA investigators.   The expert panel’s job was to determine whether there was any clinical benefit to justify continued consideration of olaparib, basically stating

“This trial has problems,” noted FDA cancer chief Richard Pazdur during the panel discussion. If investigators had “pristine evidence of a 7-month advantage in PFS, we wouldn’t be here.”

The expert panel was concerned for the above reasons, as well as about the reported handful of lethal cases of myelodysplastic syndrome and acute myeloid leukemia in the study, although the panel noted these patients had advanced disease before entering the trial, raising the possibility that prior drugs may have triggered their deaths.

 

This was certainly a disappointment as ….

it was at last year’s ASCO (2013) that investigators at the Perelman School of Medicine at the University of Pennsylvania and Sheba Medical Center in Tel Hashomer, Israel, presented data showing that among 193 cisplatin-refractory ovarian cancer patients carrying a BRCA1/2 mutation, 31% had a partial or complete tumor regression. The study also showed good responses in pancreatic and prostate cancer, with tolerable side effects.

 

See here for study details: http://www.uphs.upenn.edu/news/News_Releases/2013/05/domchek/

 

As John Carroll from FierceBiotech notes, the decision may spark renewed interest by Pfizer in a bid for AstraZeneca, as the potential FDA rejection would certainly dampen AstraZeneca’s future growth and profit plans. Last month AstraZeneca’s CEO made the case to shareholders to reject the Pfizer offer by pointing to AstraZeneca’s potential beefed-up pipeline. AstraZeneca had projected olaparib as a potential $2 billion-a-year seller, although some industry analysts see sales at less than half that amount.

A company spokeswoman said the monotherapy use of olaparib for ovarian cancer assessed by the U.S. expert panel this week was only one element of a broad development program.

 

 

Please see a table of current oncology clinical trials with PARP1 inhibitors

at end of this post

 

However, on the same day, FierceBiotech reported some great news (at least in Europe) on the ovarian cancer front:

 

EU backs Roche’s Avastin for hard-to-treat ovarian cancer

As Arlene Weintraub of FierceBiotech reports:

The EU Committee for Medicinal Products for Human Use (CHMP) handed down a positive ruling on Avastin, recommending that the European Commission approve the drug for use in women with ovarian cancer that’s resistant to platinum-based chemotherapy. It’s the first biologic to receive a positive opinion from the CHMP for this hard-to-treat form of the disease.

Please see here for official press release: CHMP recommends EU approval of Roche’s Avastin for platinum-resistant recurrent ovarian cancer

 

The EU had been getting pressure from British doctors to approve Avastin based on clinical trial results, although it may be important to note that the EU zone seems able to recruit larger numbers for clinical trials than the US. For instance, an EU women’s breast cancer prevention trial achieved heavy recruitment in what would be considered a short time frame compared to recruitment times for the US.

 

Below is a table of PARP1 inhibitors in current clinical trials (obtained from NewMedicine’s Oncology KnowledgeBase™). nm|OK is a relational knowledgeBASE covering all major aspects of product development in oncology. The database comprises 6 modules, each dedicated to a specific sector within the oncology field.

 

PARP1 Inhibitors Currently in Clinical Trials for Ovarian Cancer

 

Developer and

Drug Name

Development Status & Location
– Indications
AbbVie

Current as of: March 27, 2014

PARP inhibitor: ABT-767

Phase I (begin 5/11, ongoing 2/14) Europe (Netherlands) – solid tumors with BRCA1 or BRCA2 mutations, locally advanced or metastatic • ovarian cancer, advanced or metastatic • fallopian tube cancer, advanced or metastatic • peritoneal cancer, advanced or metastatic
AstraZeneca
Affiliate(s):
· Myriad Genetics
Current as of: June 26, 2014
Generic Name: Olaparib
Brand Name: Lynparza
Other Designation: AZD2281, KU59436, KU-0059436, NSC 747856
Phase I (begin 7/05, closed 9/08) Europe (Netherlands, UK, Poland); phase II (begin 6/07, closed 2/08, completed 5/09) USA, Australia, Europe (Germany, Spain, Sweden, UK), phase II (begin 7/08, closed 2/09) USA, Australia, Europe (Belgium, Germany, Poland, Spain, UK), Israel, phase II (begin 8/08, closed 12/09, completed 3/13) USA, Australia, Canada, Europe (Belgium, France, Germany, Poland, Romania, Spain, Ukraine, UK), Israel, Russia; phase II (begin 2/10, closed 7/10) USA, Australia, Canada, Europe (Belgium, Czech Republic, Germany, Italy, Netherlands, Spain, UK), Japan, Panama, Peru (combination); MAA (accepted 9/13) EU, NDA (filed 2/14) USA – ovarian cancer, advanced or metastatic, BRCA positive • ovarian cancer, recurrent, platinum sensitive • ovarian cancer, advanced, refractory, BRCA1 or BRCA2-associated

Phase I (begin 5/08, ongoing 5/12) USA; phase II (begin 7/08, closed 10/09) Canada – breast cancer, locally advanced, BRCA1/BRCA2-associated or hereditary metastatic or inoperable • ovarian cancer, locally advanced, BRCA1/BRCA2-associated or hereditary metastatic or inoperable • breast cancer, triple-negative, BRCA-positive • ovarian cancer, high-grade serous and/or undifferentiated, BRCA-positive

Phase I (begin 10/10, ongoing 1/13) USA (combination) – ovarian cancer, inoperable or metastatic, refractory • breast cancer, inoperable or metastatic, refractory

Phase III (begin 8/13) USA, Australia, Brazil, Canada, Europe (France, Italy, Netherlands, Poland, Russia, Spain, UK), Israel, South Korea, phase III (begin 9/13) USA, Australia, Brazil, Canada, Europe (France, Germany, Italy, Netherlands, Poland, Russia, Spain, UK), Israel – ovarian cancer, serous, high grade, BRCA mutated, platinum-sensitive, relapsed, third line, maintenance • ovarian cancer, serous or endometrioid, high grade, BRCA mutated, platinum responsive (PR or CR), maintenance, first line • primary peritoneal cancer, high grade, BRCA mutated, platinum responsive (PR or CR), maintenance • fallopian tube cancer, high grade, BRCA mutated, platinum responsive (PR or CR)

BioMarin Pharmaceutical

Current as of: June 14, 2014

PARP inhibitor:

BMN-673, BMN673, LT-673

Phase I/II (begin 1/11, ongoing 3/14) USA – solid tumors, advanced, recurrent

Phase I (begin 2/13, closed 4/13, completed 5/14) USA – healthy volunteers

Phase I/II (begin 11/13) USA – solid tumors, relapsed or refractory, BRCA mutated, second line

BiPar Sciences

Current as of: April 16, 2009

Parp inhibitor:

BSI-401

Preclin (ongoing 4/09) – solid tumors
Clovis Oncology
Affiliate(s):
· University of Newcastle Upon Tyne
· Cancer Research Campaign Technology
· Pfizer
Current as of: June 21, 2014
Generic Name: Rucaparib
Brand Name: Rucapanc
Other Designation: AG140699, AG014699, AG-14,699, AG-14669, AG14699, AG140361, AG-14361, AG-014699, CO-338, PF-01367338
Phase I (begin 03, completed 05) Europe (UK) (combination), phase I (begin 2/10, closed 11/13) Europe (France, UK) (combination) – solid tumors, advanced

Phase II (begin 12/07, closed 10/13) Europe (UK) – breast cancer, advanced or metastatic, in patients carrying BRCA1 or BRCA2 mutations • ovarian cancer, advanced or metastatic, in patients carrying BRCA1 or BRCA2 mutations

Phase I/II (begin 11/11, ongoing 6/14) USA, Europe (UK) – solid tumors, metastatic, with mutated BRCA • breast cancer, metastatic, HER2 negative, with mutated BRCA

Sanofi

Current as of: June 03, 2013

Generic Name: Iniparib
Brand Name: Tivolza
Other Designation: BSI-201, NSC 746045, SAR240550

Phase I/Ib (begin 3/06, closed 3/10) USA (combination), phase I (begin 7/10, closed 11/10) USA, phase I (begin 9/10, ongoing 2/11) Japan (combination); phase Ib (begin 1/07, ongoing 1/11) USA (combination) – solid tumors, advanced, refractory
Phase II (begin 5/08, closed 1/09) USA – ovarian cancer, advanced, refractory, BRCA-1 or BRCA-2 associated • fallopian tube cancer, advanced, refractory, BRCA-1 or BRCA-2 associated • peritoneal cancer, advanced, refractory, BRCA-1 or BRCA-2 associated
Tesaro
Affiliate(s):
· Merck

Current as of: May 18, 2014

Generic Name: Niraparib
Other Designation: MK-4827, MK4827
Phase I (begin 9/08, closed 2/11) USA, Europe (UK) – solid tumors, locally advanced or metastatic • ovarian cancer, locally advanced or metastatic, BRCA mutant • chronic lymphocytic leukemia (CLL), relapsed or refractory • prolymphocytic leukemia, T cell, relapsed or refractory
Phase Ib (begin 11/10, closed 3/11, terminated 10/12) USA (combination) – solid tumors, locally advanced or metastatic • ovarian cancer, serous, high grade, platinum resistant or refractory

Phase III (begin 5/13, ongoing 5/14) USA – ovarian cancer, platinum-sensitive, high grade serous or BRCA mutant, chemotherapy responsive • fallopian tube cancer • primary peritoneal cancer
Teva Pharmaceutical Industries

Current as of: May 04, 2013

Designation:

CEP-9722

Phase I (begin 5/11, closed 11/12, terminated 10/13) USA, phase I (begin 6/09, closed 7/12, completed 1/12) Europe (France and UK) (combination) – solid tumors, advanced, third line
Phase I (begin 5/11, completed 1/13) Europe (France) (combination) – solid tumors, advanced • mantle cell lymphoma (MCL), advanced

 

 

Summary of Combination Ovarian Cancer Trials with Avastin (current and closed)

 

Indication in Development: ovarian cancer, advanced, recurrent, persistent • ovarian cancer, progressive, platinum resistant, second line • fallopian tube cancer, progressive, platinum resistant, second line • primary peritoneal cancer, progressive, platinum resistant, second line
Latest Status: Phase II (begin 4/02, closed 8/04) USA; phase II (begin 11/04, closed 10/05) USA; phase III (begin 10/09) Europe (Belgium, Bosnia and Herzegovina, Denmark, Finland, France, Germany, Greece, Italy, Netherlands, Norway, Portugal, Spain, Sweden), Turkey
Clinical History: Refer to the Combination Trial Module for trials of Avastin in combination with various chemotherapeutic regimens.

According to results from the AURELIA clinical trial (protocol ID: MO22224; 2009-011400-33; NCT00976911), the median PFS in women with progressive, platinum-resistant ovarian, fallopian tube or primary peritoneal cancer treated with Avastin in combination with chemotherapy was 6.7 months, compared to 3.4 months in those treated with chemotherapy alone, for an HR of 0.48 (range 0.38–0.60). In addition, the objective response rate was 30.9% in women treated with Avastin compared to 12.6% in those on chemotherapy alone (p=0.001). Certain AEs (Grade 2 to 5) that occurred more often in the Avastin arm than in the chemotherapy-alone arm were high blood pressure (20% versus 7%) and an excess of protein in the urine (11% versus 1%). Gastrointestinal perforations and fistulas occurred in 2% of women in the Avastin arm compared to no events in the chemotherapy arm (Pujade-Lauraine E, et al, ASCO12, Abs. LBA5002).

A multicenter (n=124), randomized, open label, 2-arm, phase III clinical trial (protocol ID: MO22224; 2009-011400-33; NCT00976911; http://clinicaltrials.gov/ct2/results?term=NCT00976911), dubbed AURELIA, was initiated in October 2009 in Europe (Belgium, Bosnia and Herzegovina, Denmark, Finland, France, Germany, Greece, Italy, Netherlands, Norway, Portugal, Spain, and Sweden) and Turkey to evaluate the efficacy and safety of Avastin added to chemotherapy versus chemotherapy alone in patients with epithelial ovarian, fallopian tube or primary peritoneal cancer with disease progression within 6 months of first-line platinum therapy. The trial's primary outcome measure is PFS. Secondary outcome measures include objective response rate, biological PFS interval, OS, QoL, and safety and tolerability. According to the protocol, all patients are treated with standard chemotherapy: IV paclitaxel (80 mg/m²) on days 1, 8, 15 and 22 of each 4-week cycle; or IV topotecan at a dose of 4 mg/m² on days 1, 8 and 15 of each 4-week cycle, or 1.25 mg/m² on days 1-5 of each 3-week cycle; or IV liposomal doxorubicin (40 mg/m²) every 4 weeks. Patients (n=179) randomized to arm 2 of the trial are treated with IV Avastin at a dose of 10 mg/kg every 2 weeks or 15 mg/kg every 3 weeks concomitantly with the chemotherapy choice. Treatment continues until disease progression; subsequently, patients are treated with the standard of care. Patients in arm 1 (n=182), on chemotherapy only, may opt to be treated with IV Avastin (15 mg/kg) every 3 weeks upon progression. The trial was set up in cooperation with the Groupe d'Investigateurs Nationaux pour l'Etude des Cancers Ovariens (GINECO) and was conducted by the international network of the Gynecologic Cancer Intergroup (GCIG) and the pan-European Network of Gynaecological Oncological Trial Groups (ENGOT), under PI Eric Pujade-Lauraine, MD, Hopitaux Universitaires Paris Centre, Hôpital Hôtel-Dieu (Paris, France). The trial enrolled 361 patients and was closed as of May 2012.

Results were presented from a phase II clinical trial (protocol ID: CDR0000068839; GOG-0170D; NCT00022659) of bevacizumab in patients with persistent or recurrent epithelial ovarian cancer or primary peritoneal cancer, performed by the Gynecologic Oncology Group to determine the ORR, PFS, and toxicity of this treatment. Patients must have received 1-2 prior cytotoxic regimens. Treatment consisted of bevacizumab (15 mg/kg) IV every 3 weeks until disease progression or prohibitive toxicity. Between April 2002 and August 2004, 64 patients were enrolled, of whom 2 were excluded for wrong primary and borderline histology, leaving 62 evaluable (1 previous regimen = 23, 2 previous regimens = 39). The median disease-free interval from completion of primary cytotoxic chemotherapy to first recurrence was 6.5 months. Early results demonstrated that some patients had confirmed objective responses and that PFS in some was at least 6 months. Observed Grade 3 or 4 toxicities included allergy (Grade 3 = 1), cardiovascular (Grade 3 = 4; Grade 4 = 1), gastrointestinal (Grade 3 = 3), hepatic (Grade 3 = 1), pain (Grade 3 = 2), and pulmonary (Grade 4 = 1). As of 11/04, 36 patients had been removed from the trial, including 29 for disease progression and 1 for toxicity, among the 33 cases reported. Preliminary evidence exists for objective responses to bevacizumab (Burger R, et al, ASCO05, Abs. 5009).

An open label, single arm, 2-stage, phase II clinical trial (protocol ID: AVF2949g, NCT00097019) of bevacizumab in patients with platinum-resistant, advanced (Stage III or IV) ovarian cancer or primary peritoneal cancer, for whom subsequent doxorubicin or topotecan therapy had also failed, was initiated in November 2004 at multiple locations in the USA to determine the safety and efficacy of this treatment.

A multicenter phase II clinical trial was initiated in April 2002 to determine the 6-month PFS of patients with persistent or recurrent ovarian epithelial or primary peritoneal cancer treated with bevacizumab (protocol ID: GOG-0170D, CDR0000068839, NCT00022659). IV bevacizumab is administered over 30-90 minutes on day 1. Treatment is repeated every 21 days in the absence of disease progression or unacceptable toxicity. Patients are followed every 3 months for 2 years, every 6 months for 3 years, and then annually thereafter. A total of 22-60 patients will be accrued within 12-30 months. Robert A. Burger, MD, of Chao Family Comprehensive Cancer Center is Trial Chair. This trial was closed in August 2004.
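As a rough, illustrative reconstruction of the response-rate comparison reported above for AURELIA (not the trial's actual prespecified analysis), the sketch below back-calculates approximate responder counts from the stated arm sizes (n=179 Avastin plus chemotherapy, n=182 chemotherapy alone) and the reported ORRs (30.9% versus 12.6%), and applies Fisher's exact test to the resulting 2x2 table; all counts are approximations.

```python
# Illustrative only: responder counts back-calculated from the reported ORRs and
# arm sizes in AURELIA. This is not the trial's prespecified statistical analysis.
from scipy.stats import fisher_exact

n_bev, orr_bev = 179, 0.309      # Avastin + chemotherapy arm
n_chemo, orr_chemo = 182, 0.126  # chemotherapy-only arm

resp_bev = round(n_bev * orr_bev)        # ~55 responders
resp_chemo = round(n_chemo * orr_chemo)  # ~23 responders

table = [[resp_bev, n_bev - resp_bev],
         [resp_chemo, n_chemo - resp_chemo]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print("Approximate 2x2 table:", table)
print(f"Odds ratio ~{odds_ratio:.2f}, two-sided Fisher p ~{p_value:.2g}")
```

This back-of-envelope table yields a p-value well below the reported p=0.001; the published figure comes from the trial's own analysis, so the sketch only shows how the 2x2 comparison is framed.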

 

 

Sources

http://www.fiercebiotech.com/story/fda-review-red-flags-astrazenecas-case-ovarian-cancer-drug-olaparib/2014-06-23

 

http://www.fiercebiotech.com/story/fda-experts-spurn-astrazenecas-pitch-ovarian-cancer-drug-olaparib/2014-06-25

 

http://www.fiercepharma.com/story/eu-backs-roches-avastin-hard-treat-ovarian-cancer/2014-06-27

 

In a follow-up to this original posting, a Viewpoint piece in JAMA entitled

Evolving Approaches in Research and Care for Ovarian Cancers

discussed the congressionally mandated report from the Institute of Medicine of the National Academies of Sciences, Engineering, and Medicine on the state of the science in ovarian cancer research, titled

Ovarian Cancers: Evolving Paradigms in Research and Care

The report highlights some of the research gaps identified by the committee in the current state of ovarian cancer research, including:

  • consideration in research protocols of the multitude of histologic and morphologic subtypes of ovarian cancer, including the committee's view that high-grade serous ovarian cancer originates from the distal end of the fallopian tube (a hypothesis espoused by Dr. Dubeau and Dr. Christopher Crum) rather than from the ovarian surface epithelium
  • a call for expanded screening and prevention research, including multimodal screening with CA-125 and a secondary transvaginal ultrasound screen
  • better patient education on the risks and benefits of genetic testing, including BRCA1/2 testing, both in its own right and in consideration for PARP inhibitor therapy
  • standardization and dissemination of treatments, including more research into health outcomes and decision support for personalized therapy

This Viewpoint article can be found here: jvp160038

Some other posts relating to OVARIAN CANCER on this site include

Efficacy of Ovariectomy in Presence of BRCA1 vs BRCA2 and the Risk for Ovarian Cancer

Testing for Multiple Genetic Mutations via NGS for Patients: Very Strong Family History of Breast & Ovarian Cancer, Diagnosed at Young Ages, & Negative on BRCA Test

Ultrasound-based Screening for Ovarian Cancer

Dasatinib in Combination With Other Drugs for Advanced, Recurrent Ovarian Cancer

BRCA1 a tumour suppressor in breast and ovarian cancer – functions in transcription, ubiquitination and DNA repair

 

Read Full Post »

Larry H Bernstein, MD, FCAP, Reporter

Long noncoding RNA (lncRNA) lightens up the dark secrets

CASE WESTERN RESERVE INVESTIGATORS DISCOVER NOVEL CELLULAR GENES BY UNCOVERING UNCHARACTERIZED RNAS THAT ENCODE PROTEINS

News Release: June 23, 2014

Jeannette Spalding
216-368-3004
jeannette.spalding@case.edu 

Case Western Reserve School of Medicine scientists have made an extraordinary double discovery. First, they have identified thousands of novel long non-coding ribonucleic acid (lncRNA) transcripts. Second, they have learned that some of them defy conventional wisdom regarding lncRNA transcripts, because they actually do direct the synthesis of proteins in cells.

Both of the breakthroughs are detailed in the June 12 issue of Cell Reports.

Kristian E. Baker, PhD, assistant professor in the Center for RNA Molecular Biology, led the team that applied high throughput gene expression analysis to yield these impressive findings, which ultimately could lead to treatments for cancer and some genetic disorders.

“Our work establishes that lncRNAs in yeast can encode proteins, and we provide evidence that this is probably true also in mammals, including humans,” Baker said. “Our investigation has expanded our knowledge of the genetic coding potential of already well-characterized genomes.”

Collaborating with researchers including Case Western Reserve University graduate and undergraduate students, Baker analyzed yeast and mouse cells, which serve as model organisms because of their functional resemblance to human cells.

Previously, lncRNAs were thought to lack the information and capacity to encode for proteins, distinguishing them from the messenger RNAs that are expressed from known genes and act primarily as templates for the synthesis of proteins. Yet this team demonstrated that a subset of these lncRNAs is engaged by the translation machinery and can function to produce protein products.
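The press release does not describe the computational steps involved, but the claim that some lncRNAs are engaged by the translation machinery implies that they carry translatable open reading frames. As a generic, minimal sketch (not the method used in the Cell Reports study), one might scan a transcript sequence for ATG-initiated ORFs above a length cutoff; the example sequence, cutoff, and function name below are illustrative assumptions.

```python
# Minimal, generic ORF scan (illustrative; not the pipeline used in the study).
# Scans the forward strand of a transcript for ATG...stop ORFs above a length cutoff.

def find_orfs(seq, min_codons=30):
    """Return (start, end, n_codons) for candidate ORFs in all three forward frames."""
    seq = seq.upper()
    stops = {"TAA", "TAG", "TGA"}
    orfs = []
    for frame in range(3):
        pos = frame
        while pos + 3 <= len(seq):
            if seq[pos:pos + 3] == "ATG":
                # Walk codon by codon until the first in-frame stop.
                for end in range(pos + 3, len(seq) - 2, 3):
                    if seq[end:end + 3] in stops:
                        n_codons = (end - pos) // 3
                        if n_codons >= min_codons:
                            orfs.append((pos, end + 3, n_codons))
                        break
            pos += 3
    return orfs

# Hypothetical transcript sequence, used only to demonstrate the scan.
transcript = "GCCATGGCT" + "GCT" * 40 + "TAAGGC"
print(find_orfs(transcript, min_codons=20))
```

In practice, ORF prediction alone cannot establish translation; the study's evidence rests on showing that these lncRNAs are actually engaged by the translation machinery.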

In the future, Baker and fellow investigators will continue to look for novel RNA transcripts and also search for a function for these lncRNAs and their protein products in cells.

“Discovery of more transcripts equates to the discovery of new and novel genes,” Baker said. “The significance of this work is that we have discovered evidence for the expression of previously undiscovered genes. Knowing that genes are expressed is the very first step in figuring out what they do in normal cellular function or in dysfunction and disease.”

This investigation was funded by the National Institutes of Health’s National Institute of General Medical Sciences (GM080465 and GM095621) and the National Science Foundation (NSF1253788).

 

Reference:

Lecture Contents delivered at Koch Institute for Integrative Cancer Research, Summer Symposium 2014: RNA Biology, Cancer and Therapeutic Implications, June 13, 2014 @MIT

Curator of Lecture Contents: Aviva Lev-Ari, PhD, RN
3:15 – 3:45, 6/13/2014, Laurie Boyer “Long non-coding RNAs: molecular regulators of cell fate”    

 http://pharmaceuticalintelligence.com/2014/06/13/315-345-2014-laurie-boyer-long-non-coding-rnas-molecular-regulators-of-cell-fate/

Read Full Post »

Michael Snyder @Stanford University sequenced the lymphoblastoid transcriptomes and developed an allele-specific full-length transcriptome

 

Reporter: Aviva Lev-Ari, PhD, RN

 

Allelic Expression of Deleterious Protein-Coding Variants across Human Tissues

http://www.plosgenetics.org/article/info%3Adoi%2F10.1371%2Fjournal.pgen.1004304

  • Kimberly R. Kukurba,
  • Rui Zhang,
  • Xin Li,
  • Kevin S. Smith,
  • David A. Knowles,
  • Meng How Tan,
  • Robert Piskol,
  • Monkol Lek,
  • Michael Snyder,
  • Daniel G. MacArthur,
  • Jin Billy Li,
  • Stephen B. Montgomery

Conceived and designed the experiments: SBM KRK JBL. Performed the experiments: RZ KSS MHT RP ML. Analyzed the data: KRK RZ XL DAK DGM SBM. Contributed reagents/materials/analysis tools: MS DGM JBL SBM. Wrote the paper: KRK DGM MS JBL SBM.

Abstract

Personal exome and genome sequencing provides access to loss-of-function and rare deleterious alleles whose interpretation is expected to provide insight into individual disease burden. However, for each allele, accurate interpretation of its effect will depend on both its penetrance and the trait’s expressivity. In this regard, an important factor that can modify the effect of a pathogenic coding allele is its level of expression; a factor which itself characteristically changes across tissues. To better inform the degree to which pathogenic alleles can be modified by expression level across multiple tissues, we have conducted exome, RNA and deep, targeted allele-specific expression (ASE) sequencing in ten tissues obtained from a single individual. By combining such data, we report the impact of rare and common loss-of-function variants on allelic expression exposing stronger allelic bias for rare stop-gain variants and informing the extent to which rare deleterious coding alleles are consistently expressed across tissues. This study demonstrates the potential importance of transcriptome data to the interpretation of pathogenic protein-coding variants.

Author Summary

Gene expression is a fundamental cellular process that contributes to phenotypic diversity. Gene expression can vary between alleles of an individual through differences in genomic imprinting or cis-acting regulatory variation. Distinguishing allelic activity is important for informing the abundance of altered mRNA and protein products. Advances in sequencing technologies allow us to quantify patterns of allele-specific expression (ASE) in different individuals and cell types. Previous studies have identified patterns of ASE across human populations for single cell types; however, the degree of tissue-specificity of ASE has not been deeply characterized. In this study, we compare patterns of ASE across multiple tissues from a single individual using whole transcriptome sequencing (RNA-Seq) and a targeted, high-resolution assay (mmPCR-Seq). We detect patterns of ASE for rare deleterious and loss-of-function protein-coding variants, informing the frequency at which allelic expression could modify the functional impact of personal deleterious protein-coding variants across tissues. We demonstrate that these interactions occur for one third of such variants; however, large direction flips in allelic expression are infrequent.

SOURCE
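The core idea in the author summary, testing whether the two alleles at a heterozygous coding site contribute equally to the observed RNA-seq reads, can be sketched with a simple binomial test. The study's actual pipeline (RNA-Seq plus targeted mmPCR-Seq across ten tissues) is considerably more involved; the site names and read counts below are made up for illustration, and the snippet assumes SciPy 1.7 or later for binomtest.

```python
# Toy allele-specific expression (ASE) test at heterozygous sites.
# Illustrative only; the study's RNA-Seq + mmPCR-Seq analysis is more elaborate.
from scipy.stats import binomtest

# Hypothetical (site, reference-allele read count, alternate-allele read count) tuples.
het_sites = [
    ("chr1:1234567", 48, 52),   # roughly balanced expression
    ("chr7:8901234", 90, 12),   # strong allelic imbalance
    ("chr17:456789", 30, 5),    # imbalance at lower coverage
]

for site, ref, alt in het_sites:
    total = ref + alt
    result = binomtest(ref, n=total, p=0.5)  # null: both alleles equally expressed
    print(f"{site}: ref fraction {ref / total:.2f}, n={total}, p={result.pvalue:.3g}")
```

Sites showing significant departure from the expected 50/50 ratio would be candidates for allele-specific expression, subject to the usual caveats about mapping bias and coverage.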

 

 

Researchers Report Generating Personal Transcriptome Using Long Reads

June 23, 2014

 

By a GenomeWeb staff reporter

NEW YORK (GenomeWeb) – Using a long-read-based approach, Stanford University researchers reported generating a personal transcriptome in the Proceedings of the National Academy of Sciences.

Senior author Michael Snyder and his colleagues sequenced the lymphoblastoid transcriptomes of three family members using the Pacific Biosciences system, reads from which they compared to shorter reads from the Illumina platform. From those transcriptomes, they developed an allele-specific full-length transcriptome for one of the family members. They were able to distinguish the two alleles even for complicated genes such as HLA genes.

“Here, we generate the deepest and longest single-molecule long-read dataset to date, to our [knowledge], for a trio of human cell lines,” Snyder and his colleagues wrote in their paper. They further “show[ed] that we can determine SNVs de novo and that using a [principal components] approach, molecules from genes with multiple heterozygous SNVs can be attributed to the two alleles.”

Such personal transcriptomes, Snyder and his colleagues added, are expected to become important in the understanding of individual biology and disease.

He and his colleagues used the PacBio platform to sequence some 711,000 circular consensus read molecules from the GM12878 cell line. They generated longer sub-reads for this study — an average 1,188 basepairs — than they did for the human organ panel dataset — an average 999.9 basepairs — that they presented last year in Nature Biotechnology.

They additionally noted that though both datasets equally represented shorter molecules between 0.8 kilobases and 1.3 kilobases in length, the present dataset better represented molecules longer than 1.7 kilobases.

The Stanford team also sequenced 100 million 101-basepair paired-end reads on the Illumina platform, which they then analyzed using Cufflinks.

Both technologies, they reported, uncovered some 99,000 annotated exon-exon junctions, and Illumina reads covered an additional 92,000 or so annotated junctions while the PacBio reads covered a further 992 junctions. Additionally, of the 22,600 spliced genes classified by Gencode as either protein-coding genes or lincRNAs, long-read single molecule sequencing and 101-basepair paired-end sequencing identified 9,200 of them. Forty genes were found solely through long reads, 6,400 genes by 101-basepair paired-end sequencing, and 7,000 genes weren’t found using either approach.
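The junction comparison described above is, at bottom, a set operation over splice-junction coordinates called by each platform. A minimal sketch of that bookkeeping, using made-up coordinates rather than the study's actual calls, might look like this:

```python
# Comparing exon-exon junctions called by two platforms as coordinate sets.
# Each junction is (chromosome, donor position, acceptor position); values are made up.
pacbio_junctions = {
    ("chr1", 15000, 15890),
    ("chr1", 16200, 17110),
    ("chr2", 52000, 53340),
}
illumina_junctions = {
    ("chr1", 15000, 15890),
    ("chr2", 52000, 53340),
    ("chr3", 91000, 92550),
}

shared = pacbio_junctions & illumina_junctions
pacbio_only = pacbio_junctions - illumina_junctions
illumina_only = illumina_junctions - pacbio_junctions

print(f"shared: {len(shared)}, PacBio-only: {len(pacbio_only)}, "
      f"Illumina-only: {len(illumina_only)}")
```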

The researchers had hypothesized that, since circular consensus read generation needs read lengths at least twice as long as the cDNA length, consensus split-mapped molecules (CSMMs) wouldn't include a large number of longer genes.

However, they found that genes with and without a CSMM had similar lengths, though genes with a CSMM were less likely to be smaller than one kilobase, which the researchers said was likely due to the magnetic beads in the loading procedure preferring longer fragments.

Both expression and mature gene length, Snyder and his colleagues added, are important factors in whether or not a gene received a full-length consensus split-mapped molecule.
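The constraint behind that hypothesis, that a polymerase read must span the insert at least twice before a full-pass circular consensus is possible, reduces to simple arithmetic. The molecule lengths in this sketch are hypothetical:

```python
# Which cDNA molecules can, in principle, yield a full-pass circular consensus read?
# Rule of thumb from the text: the polymerase read must be >= 2x the cDNA length.
molecules = [
    {"cdna_len": 900,  "read_len": 4200},
    {"cdna_len": 1800, "read_len": 3400},
    {"cdna_len": 2600, "read_len": 5000},
]

for m in molecules:
    passes = m["read_len"] // m["cdna_len"]            # complete passes over the insert
    full_pass_ccs = m["read_len"] >= 2 * m["cdna_len"]
    print(f"cDNA {m['cdna_len']} bp, read {m['read_len']} bp: "
          f"{passes} passes, full-pass consensus possible: {full_pass_ccs}")
```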

Such long reads, the researchers said, could include a number of novel exon-intron structures. To eliminate potential artifacts, the researchers focused on 12,000 full-length novel isoforms that could be attributed to a known gene and for which the exon-intron junction was annotated or otherwise supported by short-read sequencing.

Of these, 55 percent were novel combinations of known splice sites; 34 percent had a single novel donor or acceptor; and 11 percent had two or more novel splice sites.

Again comparing this work to their previous human organ panel dataset, Snyder and his colleagues found that some 2,100 genes had a novel isoform in the HOP sample, 4,300 in the current sample, and 600 were in both.

A goal of transcriptomic research, the researchers said, is to be able to assign RNA molecules to the allele from which they are expressed. And long-read sequencing is supposed to be able, they added, to determine each SNV affecting single RNA molecules.

To trace the origin of these alleles found in the GM12878 daughter cell line, they folded in data from the parental GM12891 and GM12892 lines, and examined that parental data for the presence or absence of SNVs present in the daughter.

Through a principal components analysis, they could separate out the two alleles based on the eigenvectors. For 166 genes with at least two annotated heterozygous SNVs, the researchers found that 158 of them had two or more SNVs, two genes had one SNV, and six genes did not appear to be heterozygous.
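The allele-separation step described here amounts to encoding each sequenced molecule by the base it carries at each of the gene's heterozygous SNVs and letting the first principal component split the molecules into two haplotype groups. The following is a minimal sketch under those assumptions, using a synthetic molecule-by-SNV matrix rather than the authors' data or code:

```python
# Toy PCA-based separation of long-read molecules into two alleles.
# Rows = molecules, columns = heterozygous SNVs (0 = reference base, 1 = alternate base).
import numpy as np

rng = np.random.default_rng(0)
n_per_allele, n_snvs = 20, 5
hap_a = np.zeros(n_snvs)   # haplotype A carries the reference allele at every SNV
hap_b = np.ones(n_snvs)    # haplotype B carries the alternate allele at every SNV

# Simulate molecules from each haplotype with a small per-site error rate.
molecules = np.vstack([
    np.abs(hap_a + (rng.random((n_per_allele, n_snvs)) < 0.05)),
    np.abs(hap_b - (rng.random((n_per_allele, n_snvs)) < 0.05)),
])

# PCA via SVD of the mean-centered matrix; PC1 separates the two allele groups.
centered = molecules - molecules.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1_scores = centered @ vt[0]

allele_assignment = (pc1_scores > 0).astype(int)
print("PC1 scores:", np.round(pc1_scores, 2))
print("Assigned allele per molecule:", allele_assignment)
```

With real data, molecules carrying too few informative SNVs or excessive sequencing error would be harder to assign to an allele.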

A few genes — particularly HLA genes — contained a number of SNVs, and for these, too, the researchers were mostly able to determine phasing.

“Even for complicated genes (e.g., HLA genes, whose sequences may differ considerably from the reference sequence) the two alleles are usually clearly distinguishable,” Snyder and his colleagues wrote.

They noted, though, that deeper sequencing would be necessary to determine whether one allele behaves differently from the other for different genes.

 


SOURCE

http://www.genomeweb.com//node/1407251?utm_source=SilverpopMailing&utm_medium=email&utm_campaign=PerkinElmer,%20Enzo%20Settle%20IP%20Fight;%20Long-Read-based%20Transcriptome;%20Cancer%20Research%20Grants;%20More%20%20-%2006/23/2014%2004:05:00%20PM

 

 

Read Full Post »
