Archive for July, 2014
News in Exploration of the Biological Causes of Mental Illness: Potential for New Treatments

Reporter: Aviva Lev-Ari, PhD, RN

Broad’s Stanley Center for Psychiatric Genome Research: Ted Stanley Pledges $650M

Initially opened with a gift from Stanley and his late wife in 2007, the Broad’s Stanley Center has already made progress in identifying genetic risk factors for schizophrenia and bipolar disorder and investigating therapeutic efforts based on those discoveries. This week researchers from Broad and other institutes published a genome-wide association study (GWAS) in Nature that identified more than 100 regions of DNA associated with schizophrenia.

“Ten years ago, finding the biological causes of psychiatric disorders was like trying to climb a wall with no footholds,” Stanley Center Director Steven Hyman said in a statement. “But in the last few years, we’ve turned this featureless landscape into something we can exploit. If this is a wall, we’ve put toeholds into it. Now, we have to start climbing.”

SOURCE

http://www.genomeweb.com/clinical-genomics/ted-stanley-pledges-650m-broads-stanley-center-psychiatric-genome-research?utm_source=SilverpopMailing&utm_medium=email&utm_campaign=Broad%20Gets%20$650M%20for%20Psychiatric%20Genomics%20Research;%20Personalized%20Medicine%20Survey;%20Waters%20Q2%20-%2007/22/2014%2011:05:00%20AM

 

The Nature paper was produced by the Psychiatric Genomics Consortium (PGC) — a collaboration of more than 80 institutions, including the Broad Institute. Hundreds of researchers from the PGC pooled samples from more than 150,000 people, of whom 36,989 had been diagnosed with schizophrenia. This enormous sample size enabled them to spot 108 genetic locations, or loci, where the DNA sequence in people with schizophrenia tends to differ from the sequence in people without the disease. “This paper is in some ways proof that genomics can succeed,” Hyman says.

 

“This is a pretty exciting moment in the history of this field,” agrees Thomas Insel, director of the National Institute of Mental Health (NIMH) in Bethesda, Maryland, who was not involved in the study.

SOURCE

http://www.nature.com/news/gene-hunt-gain-for-mental-health-1.15602#/b1

 

 

Biological insights from 108 schizophrenia-associated genetic loci

Ripke, S. et al. Nature http://dx.doi.org/10.1038/nature13595 (2014).

SOURCE

Nature (2014) doi:10.1038/nature13595
Published online 22 July 2014

 

Abstract

Schizophrenia is a highly heritable disorder. Genetic risk is conferred by a large number of alleles, including common alleles of small effect that might be detected by genome-wide association studies. Here we report a multi-stage schizophrenia genome-wide association study of up to 36,989 cases and 113,075 controls. We identify 128 independent associations spanning 108 conservatively defined loci that meet genome-wide significance, 83 of which have not been previously reported. Associations were enriched among genes expressed in brain, providing biological plausibility for the findings. Many findings have the potential to provide entirely new insights into aetiology, but associations at DRD2 and several genes involved in glutamatergic neurotransmission highlight molecules of known and potential therapeutic relevance to schizophrenia, and are consistent with leading pathophysiological hypotheses. Independent of genes expressed in brain, associations were enriched among genes expressed in tissues that have important roles in immunity, providing support for the speculated link between the immune system and schizophrenia.

 

Discussion

In the largest (to our knowledge) molecular genetic study of schizophrenia, or indeed of any neuropsychiatric disorder, ever conducted, we demonstrate the power of GWAS to identify large numbers of risk loci. We show that the use of alternative ascertainment and diagnostic schemes designed to rapidly increase sample size does not inevitably introduce a crippling degree of heterogeneity. That this is true for a phenotype like schizophrenia, in which there are no biomarkers or supportive diagnostic tests, provides grounds to be optimistic that this approach can be successfully applied to GWAS of other clinically defined disorders.

We further show that the associations are not randomly distributed across genes of all classes and function; rather they converge upon genes that are expressed in certain tissues and cellular types. The findings include molecules that are the current, or the most promising, targets for therapeutics, and point to systems that align with the predominant aetiological hypotheses of the disorder. This suggests that the many novel findings we report also provide an aetiologically relevant foundation for mechanistic and treatment development studies. We also find overlap between genes affected by rare variants in schizophrenia and those within GWAS loci, and broad convergence in the functions of some of the clusters of genes implicated by both sets of genetic variants, particularly genes related to abnormal glutamatergic synaptic and calcium channel function. How variation in these genes impact function to increase risk for schizophrenia cannot be answered by genetics, but the overlap strongly suggests that common and rare variant studies are complementary rather than antagonistic, and that mechanistic studies driven by rare genetic variation will be informative for schizophrenia.

 

 

 

Manhattan plot showing schizophrenia associations.

Manhattan plot of the discovery genome-wide association meta-analysis of 49 case-control samples (34,241 cases and 45,604 controls) and 3 family-based association studies (1,235 parent affected-offspring trios). The x axis is chromosomal position and the y axis is the significance (–log10 P; 2-tailed) of association derived by logistic regression. The red line shows the genome-wide significance level (5 × 10⁻⁸). SNPs in green are in linkage disequilibrium with the index SNPs (diamonds), which represent independent genome-wide significant associations.
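To make the plot's construction concrete, here is a minimal Python sketch; it uses simulated p-values, not the consortium's actual analysis, which fit logistic regression models to real genotype data. Each SNP's p-value is transformed to −log10(p) for the y axis and compared against the conventional genome-wide significance threshold of 5 × 10⁻⁸, the red line in the figure:

```python
import numpy as np

# Conventional genome-wide significance threshold (the figure's red line):
# -log10(5e-8) is roughly 7.3 on the Manhattan plot's y axis.
GWS_THRESHOLD = 5e-8

rng = np.random.default_rng(0)

# Simulated p-values for 10,000 SNPs: most null (uniform on [0, 1]),
# a handful strongly associated (far below the threshold).
p_null = rng.uniform(size=9_990)
p_assoc = rng.uniform(low=1e-12, high=1e-9, size=10)
p_values = np.concatenate([p_null, p_assoc])

neg_log10_p = -np.log10(p_values)        # the Manhattan plot's y axis
significant = p_values < GWS_THRESHOLD   # SNPs plotted above the red line

print(f"{significant.sum()} of {p_values.size} simulated SNPs reach genome-wide significance")
```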

 

SOURCE

 

Biological insights from 108 schizophrenia-associated genetic loci

 

Schizophrenia Working Group of the Psychiatric Genomics Consortium

 

Nature (2014) doi:10.1038/nature13595

 



In vitro Models of Tumor Microenvironment for New Cancer Target and Drug Discovery, 11/17 – 11/19/2014, Hyatt Boston Harbor

Reporter: Aviva Lev-Ari, PhD, RN

 

On 7/21/2014 Cambridge Healthtech Institute Announced:

Cambridge Healthtech Institute, 250 First Avenue, Suite 300, Needham, MA 02494, http://www.healthtech.com

 

FINAL AGENDA ANNOUNCEMENT: REGISTER BY AUGUST 22 & SAVE UP TO $400!

Traditional drug screening relies on monolayer cell culture, which is not always predictive of the natural physiological state. This is especially problematic in cancer drug discovery, where simple cell cultures are not predictive of the complex tumor microenvironment, which consists of various cell types that interact in 3-dimensional structures. As the cost of drug development rises, there is increasing pressure for more predictive in vitro models for functional analysis and compound characterization. Cambridge Healthtech Institute’s Second Annual Physiologically-Relevant Cellular Tumor Models for Drug Discovery meeting will focus on the latest advances in 3D cellular tumor models and complex co-culture systems for functional analysis studies and compound screening/characterization.

 

KEYNOTE PRESENTATION:

An All-Human Microphysiologic Liver System for Carcinoma Metastasis

Alan H. Wells, M.D., D.M.Sc., Vice Chair and Thomas J. Gill III Professor, Pathology, University of Pittsburgh

 

ENGINEERING AND SCREENING TUMOR SPHEROID MODELS

New Tricks for Spheroids: Mimicking Stromal Interactions, Investigating Nanoparticle Drug Delivery, and Modeling Resection

Mark Grinstaff, Ph.D., Professor, Chemistry, Boston University

Functional Analysis of Therapeutic Antibodies and Antigens Using ex vivo Tumor Spheroids

Mitchell Ho, Ph.D., Chief, Antibody Therapy Section, Laboratory of Molecular Biology, National Cancer Institute, National Institutes of Health

 

TECHNOLOGY SHOWCASE: 3D CELLULAR MODELS FOR DRUG AND TARGET SCREENING

High-Throughput Compatible Co-Spheroid Model Analyzing Compound Effects on Both Tumor and Stroma Cells

Jan E. Ehlert, Ph.D., Head, Cellular Drug Discovery, ProQinase GmbH

Sponsored by: ProQinase GmbH

Additional sponsorship opportunities available. Contact Ilana Quigley at iquigley@healthtech.com.

HIGH-CONTENT ANALYSIS OF TUMOR SPHEROID MODELS

Drug Discovery and Development of Novel Anticancer Agents: Applications of Novel 3D Multicellular Tumor Spheroid Models

Daniel V. LaBarbera, Ph.D., Assistant Professor, Drug Discovery and Medicinal Chemistry, The Skaggs School of Pharmacy and Pharmaceutical Sciences, The University of Colorado

Novel Stromal Targets that Support Tumor Spheroid Formation

Shane R. Horman, Ph.D., Research Investigator, Advanced Assay Group, Genomics Institute of the Novartis Research Foundation

Developing Biodynamic Screening Assays for 3D Live-Tissue Models

David Nolte, Ph.D., Professor, Physics, Purdue University; President, Animated Dynamics, Inc.

 

ENGINEERING COMPLEX 3D MODELS OF TUMOR MICROENVIRONMENT FOR DRUG SCREENING AND FUNCTIONAL ANALYSIS

Targeted Electric Field Therapy Development in 3D Models of the Heterogeneous Glioma Microenvironment

Scott S. Verbridge, Ph.D., Assistant Professor, School of Biomedical Engineering and Sciences, Virginia Tech – Wake Forest University

Targeting Physical and Stromal Determinants of Tumor Heterogeneity in Bioengineered 3D Models

Imran Rizvi, Ph.D., Instructor, Medicine and Dermatology, Harvard Medical School; Associate Bioengineer, Brigham and Women’s Hospital; Assistant, Biomedical Engineering, Wellman Center for Photomedicine, Massachusetts General Hospital

Hydrogel Co-Culture Systems for Growing Patient-Derived Xenografts: Use in Selective Drug Screening

Mary C. Farach-Carson, Ph.D., Ralph and Dorothy Looney Professor, Biochemistry and Cell Biology; Scientific Director, BioScience Research Collaborative, Rice University

Human Stroma-Derived Extracellular Matrices: 3D ECM Physiological Systems

Edna Cukierman, Ph.D., Associate Professor, Cancer Biology, Fox Chase Cancer Center

ENGINEERING IN VITRO MODELS OF CANCER METASTASIS

Microfluidic Models with Microvascular Networks to Study Metastatic Disease

Roger D. Kamm, Ph.D., Cecil and Ida Green Distinguished Professor, Biological and Mechanical Engineering, MIT

Monitoring Extravascular Migratory Metastasis of Angiotropic Cancer Cells Using a 3D in vitro Co-Culture System

Claire Lugassy, M.D., Research Associate Professor, Pathology and Lab Medicine, UCLA School of Medicine; Member, Jonsson Comprehensive Cancer Center

Using Block Cell Printing to Develop Single Cell Arrays for Drug Screening

Lidong Qin, Ph.D., Associate Member, Nanomedicine, Methodist Hospital Research Institute; Assistant Professor, Cell and Developmental Biology, Weill Cornell Medical College

 

————————————————————————

RECOMMENDED DINNER SHORT COURSES*

Stem Cell Models for Drug Discovery

Monday Evening, November 17 | 6:30-9:30 pm

Instructors:

Anne G. Bang, Ph.D., Director, Cell Biology, Prebys Center, Sanford-Burnham Medical Research Institute

Pamela J. Hornby, Ph.D., Senior Scientific Director and Research Fellow, Cardiovascular and Metabolic Disease, Translational Models, Janssen Pharmaceutical Companies of Johnson & Johnson

Wei Zheng, Ph.D., Group Leader, National Center for Advancing Translational Sciences, National Institutes of Health

Expert ThinkTank: How to Meet the Need for Physiologically-Relevant Assays?

Tuesday Evening, November 18 | 6:00-9:00 pm

Moderator:

Lisa Minor, Ph.D., President, In Vitro Strategies, LLC

Panelists:

Beverley Isherwood, Ph.D., Team Leader, AstraZeneca R&D

Michael Jackson, Ph.D., Senior Vice President, Drug Discovery and Development, Conrad Prebys Center for Chemical Genomics, Sanford-Burnham Medical Research Institute (tentative)

Jean-Louis Klein, Ph.D., Principal Scientist, Target and Pathway Validation, Platform Technology and Science, GlaxoSmithKline

Caroline Shamu, Ph.D., Director, ICCB-Longwood Screening Facility and Assistant Professor, Harvard Medical School

D. Lansing Taylor, Ph.D., Director, University of Pittsburgh Drug Discovery Institute and Allegheny Foundation; Professor, Computational and Systems Biology, University of Pittsburgh

Scott S. Verbridge, Ph.D., Assistant Professor, School of Biomedical Engineering and Sciences, Virginia Tech – Wake Forest University


Soft Tissue Transponder for Radiotherapy and Radiosurgery Treatments for Cancer got FDA Approval

Reporter: Aviva Lev-Ari, PhD, RN


FDA clears Varian soft tissue transponder to treat cancer

Varian Medical Systems’ Calypso soft tissue Beacon transponder–Courtesy of Varian

Varian Medical Systems ($VAR) scored FDA 510(k) clearance for its soft tissue transponder for radiotherapy and radiosurgery treatments.

The Palo Alto, CA-based company’s Calypso soft tissue Beacon transponders are implanted in soft tissue throughout the body, allowing physicians to target high-energy treatment beam radiation at tumors without damaging surrounding tissue. The grain-sized transponders are tracked by a GPS-like real-time monitoring system that continuously monitors their position during radiosurgery.

An earlier version of the product was cleared for use in the prostate and prostatic bed, but the new indication expands the device’s applications for other types of cancer, the company said in a statement. Varian plans to release the transponders toward the end of this year, and expects a full commercial roll-out in 2015.

“We’re pleased to be able to make the system available to clinicians who want to use it more broadly, not just for conventional radiotherapy but for some of the newer approaches, like stereotactic body radiotherapy (SBRT), which involves delivering higher radiation doses very quickly,” Andrea Morgan, Calypso product manager said in a statement. “For treatments like that, accurate targeting is essential, and the new Calypso transponders have an important role to play.”

The FDA nod bodes well for Varian, as the company struggles to recover from a disappointing second quarter. The devicemaker saw its net earnings fall nearly 18% in Q2, with profits of $92.7 million down from $112.8 million the same period last year. Revenue increased 1% to $779 million, primarily due to a 4% jump in oncology sales and a slight uptick in imaging components.

Regulatory blessings also help Varian forge ahead in its emerging markets, where the company sees strong demand for its oncology and medical imaging products. Last year, Varian built its first Asian subsidiary in South Korea, giving it an expanded market for its cancer-treating radiotherapy devices and imaging equipment. In January, the company renewed a three-year, $515 million deal with Toshiba Medical Systems to supply medical imaging components. The companies originally charted the deal in January 2011 for an estimated $450 million.

Varian Medical Systems

Varian Medical Systems, Inc., of Palo Alto, California, is the world’s leading manufacturer of medical devices and software for treating cancer and other medical conditions with radiotherapy, radiosurgery, and brachytherapy. The company supplies informatics software for managing comprehensive cancer clinics, radiotherapy centers and medical oncology practices. Varian is a premier supplier of tubes, digital detectors, and image processing workstations for X-ray imaging in medical, scientific, and industrial applications and also supplies high-energy X-ray devices for cargo screening and non-destructive testing applications. Varian Medical Systems employs approximately 6,500 people who are located at manufacturing sites in North America, Europe, and China and approximately 70 sales and support offices around the world. For more information, visit http://www.varian.com or follow us on Twitter.

 

– read the release

Related Articles:
Varian’s profits slip as revenue ticks up slightly
Varian Medical pays $35M to settle Pitt patent spat
Varian touts emerging markets as looming Medicare changes take a toll
Varian, Toshiba announce a $515M medical imaging deal
Korean government ramps up plans to boost medical equipment exports


Why did this occur? The matter of Individual Actions Undermining Trust, The Patent Dilemma and The Value of a Clinical Trial

Reporter and Curator: Larry H. Bernstein, MD, FCAP

 

The large amount of funding tied to continued research and support of postdoctoral fellows leads one to ask how following the money can lead to discredited work in the elite scientific community.

Moreover, the pressure to publish in prestigious journals with high impact factors is a road to academic promotion. In the last twenty years, it has been unusual to find submissions for review with fewer than 6-8 authors, with the statement that all contributed to the work. These factors can’t be discounted outright, but it is easy for work to fall through the cracks when a key investigator has over 200 publications and holds tenure in a great research environment. But that is where we find ourselves today.

There is another issue that comes up, which is also related to the issue of carrying out research and then protecting the work for commercialization. It is more complicated in the sense that it is necessary to determine whether there is prior art, and then there is the possibility that, after the cost of filing a patent and a six-year delay in obtaining protection, there is as great a cost in bringing the patent to final production.

I.  Individual actions undermining trust.

II. The patent dilemma.

III. The value of a clinical trial.

IV. The value contributions of RAP physicians
(radiologists, anesthesiologists, and pathologists – the last discussed here),
those who maintain and inform the integrity of medical and surgical decisions

 

I. Top heart lab comes under fire

Kelly Servick

Science 18 July 2014: Vol. 345 no. 6194 p. 254 DOI: 10.1126/science.345.6194.25

 

In the study of cardiac regeneration, Piero Anversa is among the heavy hitters. His research into the heart’s repair mechanisms helped kick-start the field of cardiac cell therapy (see main story). After more than 4 decades of research and 350 papers, he heads a lab at Harvard Medical School’s Brigham and Women’s Hospital (BWH) in Boston that has more than $6 million in active grant funding from the National Institutes of Health (NIH). He is also an outspoken voice in a field full of disagreement.

So when an ongoing BWH investigation of the lab came to light earlier this year, Anversa’s colleagues were transfixed. “Reactions in the field run the gamut from disbelief to vindication,” says Mark Sussman, a cardiovascular researcher at San Diego State University in California who has collaborated with Anversa. By Sussman’s account, Anversa’s reputation for “pushing the envelope” and “challenging existing dogma” has generated some criticism. Others, however, say that the disputes run deeper—to doubts about a cell therapy his lab has developed and about the group’s scientific integrity. Anversa told Science he was unable to comment during the investigation.

“People are talking about this all the time—at every scientific meeting I go to,” says Charles Murry, a cardiovascular pathologist at the University of Washington, Seattle. “It’s of grave concern to people in the field, but it’s been frustrating,” because no information is available about BWH’s investigation. BWH would not comment for this article, other than to say that it addresses concerns about its researchers confidentially.

In April, however, the journal Circulation agreed to Harvard’s request to retract a 2012 paper on which Anversa is a corresponding author, citing “compromised” data. The Lancet also issued an “Expression of Concern” about a 2011 paper reporting results from a clinical trial, known as SCIPIO, on which Anversa collaborated. According to a notice from the journal, two supplemental figures are at issue.

For some, Anversa’s status has earned him the benefit of the doubt. “Obviously, this is very disconcerting,” says Timothy Kamp, a cardiologist at the University of Wisconsin, Madison, but “I would be surprised if it was an implication of a whole career of research.”

Throughout that career, Anversa has argued that the heart is a prolific, lifelong factory for new muscle cells. Most now accept the view that the adult heart can regenerate muscle, but many have sparred with Anversa over his high estimates for the rate of this turnover, which he maintained in the retracted Circulation paper.

Anversa’s group also pioneered a method of separating cells with potential regenerative abilities from other cardiac tissue based on the presence of a protein called c-kit. After publishing evidence that these cardiac c-kit+ cells spur new muscle growth in rodent hearts, the group collaborated in the SCIPIO trial to inject them into patients with heart failure. In The Lancet, the scientists reported that the therapy was safe and showed modest ability to strengthen the heart—evidence that many found intriguing and provocative. Roberto Bolli, the cardiologist whose group at the University of Louisville in Kentucky ran the SCIPIO trial, plans to test c-kit+ cells in further clinical trials as part of the NIH-funded Cardiovascular Cell Therapy Research Network.

But others have been unable to reproduce the dramatic effects Anversa saw in animals, and some have questioned whether these cells really have stem cell–like properties. In May, a group led by Jeffery Molkentin, a molecular biologist at Cincinnati Children’s Hospital Medical Center in Ohio, published a paper in Nature tracing the genetic lineage of c-kit+ cells that reside in the heart. He concluded that although they did make new muscle cells, the number is “astonishingly low” and likely not enough to contribute to the repair of damaged hearts. Still, Molkentin says that he “believe[s] in their therapeutic potential” and that he and Anversa have discussed collaborating.

Now, an anonymous blogger claims that problems in the Anversa lab go beyond controversial findings. In a letter published on the blog Retraction Watch on 30 May, a former research fellow in the Anversa lab described a lab culture focused on protecting the c-kit+ cell hypothesis: “[A]ll data that did not point to the ‘truth’ of the hypothesis were considered wrong,” the person wrote. But another former lab member offers a different perspective. “I had a great experience,” says Federica Limana, a cardiovascular disease researcher at IRCCS San Raffaele Pisana in Rome who spent 2 years of her Ph.D. work with the group in 1999 and 2000, as it was beginning to investigate c-kit+ cells. “In that period, there was no such pressure” to produce any particular result, she says.

Accusations about the lab’s integrity, combined with continued silence from BWH, are deeply troubling for scientists who have staked their research on theories that Anversa helped pioneer. Some have criticized BWH for requesting retractions in the midst of an investigation. “Scientific reputations and careers hang in the balance,” Sussman says, “so everyone should wait until all facts are clearly and fully disclosed.”

 

II.  Trolling Along: Recent Commotion About Patent Trolls

July 17, 2014

PricewaterhouseCoopers recently released a study about 2014 patent litigation. PwC’s ultimate conclusion was that case volume increased vastly while damages continue a general decline, but what’s making headlines everywhere is that “patent trolls” now account for 67% of all new patent lawsuits (see, e.g., Washington Post and Fast Company).

Surprisingly, looking at PwC’s study, the word “troll” is nowhere to be found. So, with regard to patent trolls, what does this study really mean for companies, patent owners and casual onlookers?

First of all, who are these trolls?

“Patent Troll” is a label applied to patent owners who do not make or manufacture a product, or offer a service. Patent trolls live (and die) by suing others for allegedly practicing an invention that is claimed by their patents.

The politically correct term is Non-practicing Entity (NPE). PwC solely uses the term NPE, which it defines as an entity that does not have the capability to design, manufacture, or distribute products with features protected by the patent.

So, what’s so bad about them?

The common impression of an NPE is a business venture looking to collect and monetize assets (i.e., patents). In the most basic strategy, an NPE typically buys patents with broad claims that cover a wide variety of technologies and markets, and then sues a large group of alleged patent infringers in the hope of collecting a licensing royalty or a settlement. NPEs typically don’t want to spend money on a trial unless they have to, and one tactic uses settlements with smaller businesses to build a “war chest” for potential suits with larger companies.

NPEs initiating a lawsuit can be viewed positively, such as a just defense of the lowly inventor who sold his patent to someone (with deeper pockets) who could fund the litigation to protect the inventor’s hard work against a mega-conglomerate who ripped off his idea.

Or NPE litigation can be seen negatively, such as an attorney’s demand letter on behalf of an anonymous shell corporation to shake down dozens of five-figure settlements from all the local small businesses that have ever used a fax machine.

NPEs can waste a company’s valuable time and resources with lawsuits, yet also bring value to their patent portfolios by energizing a patent sales and licensing market. There are unscrupulous NPEs, but it’s hardly the black and white situation that some media outlets are depicting.

What did PwC say about trolls?

Well, the PwC study looked at the success rates and awards of patent litigation decisions. One conclusion is that damages awards for NPEs averaged more than triple those for practicing entities over the last four years. We’ll come back to this statistic.

Another key observation is that NPEs have been successful 25% of the time overall, versus 35% for practicing entities. This makes sense because of the burden of proof the NPEs carry as a plaintiff at trial and the relative lack of success for NPEs at summary judgment. However, PwC’s report states that both types of entities win about two-thirds of their trials.

But what about this “67% of all patent trials are initiated by trolls” discussion?

The 67% number comes from the RPX Corporation’s litigation report (produced January 2014) that quantified the percentage of NPE cases filed in 2013 as 67%, compared to 64% in 2012, 47% in 2011, 30% in 2010 and 28% in 2009.

PwC refers to the RPX statistics to accentuate that this new study indicates that only 20% of decisions in 2013 involved NPE-filed cases, so the general conclusion would be that NPE cases tend to settle or be dismissed prior to a court’s decision. Admittedly, this is indicative of the prevalent “spray and pray” strategy, where NPEs prefer to collect many settlement checks from several “targets” and avoid the courtroom.

In this study, who else is an NPE?

If someone were looking to dramatize the role of “trolls,” the name can be thrown around liberally (and hurtfully) to describe anyone who owns and asserts a patent without offering a product or a service. For instance, colleges and universities fall under the NPE umbrella as their research and development often ends with a series of published papers rather than a marketable product on an assembly line.

In fact, PwC distinguishes universities and non-profits from companies and individuals within their NPE analysis, with only about 5% of the NPE cases from 1995 to 2013 being attributed to universities and non-profits. Almost 50% of the NPE cases are attributed to an “individual,” who could be the listed inventor for the patent or a third-party assignee.

The word “troll” is obviously a derogatory term used to connote greed and hiding (under a bridge), but the term has adopted a newer, meme-like status as trolls are currently depicted as lacking any contribution to society and merely living off of others’ misfortunes and fears. [Three Billy Goats Gruff]. This is not always the truth with NPEs (e.g., universities).

No one wants to be called a troll—especially in front of a jury—so we’ve even recently seen courts bar defendants from referring to NPEs by such colorful terms as a “corporate shell,” “bounty hunter,” “privateer,” or someone “playing the lawsuit lottery.” [Judge Koh Bans Use of Term “Patent Troll” in Apple Jury Trial]

Regardless of the portrayal of an NPE, most people in the patent world distinguish the “trolls” by the strength of the patent, merits of the alleged infringement and their behavior upon notification. Often these are expressed as “frivolity” of the case and “gamesmanship” of the attorneys. Courts are able to punish plaintiffs who bring frivolous claims against a party and state bar associations are tasked with monitoring the ethics of attorneys. The USPTO is tasked with working to strengthen the quality of patents.

What’s the take-away from this study regarding NPEs?

The study focuses on patent litigation that produced a decision; therefore, the most important and relevant conclusion is that, over the last four years, average damages awards for NPEs are more than triple the damages for practicing entities. Everything else in these articles, such as the initiation of litigation by NPEs, settlement percentages, and the general behavior of patent trolls, is pure inference beyond the scope of the study.

This may sound sympathetic to trolls, but keep in mind that the study highlights that NPEs’ damages awards average more than triple those of practicing entities, and it is meant to shock the reader a bit. One explanation for this is that NPEs are in the best position to choose the patents they want to assert and choose the targets they wish to sue—especially when the NPE is willing to ride that patent all the way to the end of a long, expensive trial. Sometimes settling is not an option. Chart 2b indicates that the disparity in the damages awarded to NPEs relative to practicing entities has been large since 2000, but the move from two-fold in 2000–2009 to three-fold in the past four years suggests that NPEs are getting better at finding patents and/or picking battles to take all the way to a court decision. More than anything, this seems to reflect the growth in the concept of patents as a business asset.

The PwC report is chock full of interesting patterns and trends in litigation results, so it’s a shame that the 67% number makes the headlines—far more interesting are the charts comparing success rates by 4-year periods (Chart 6b) or success rates for NPEs and practicing entities in front of a jury versus in front of a bench (Chart 6c), as well as other tables that reveal statistics for specific districts of the federal courts. Even the stats that look at the success rates of each type of NPE are telling, because the reader sees that universities and non-profits have a higher success rate than non-practicing companies or individuals.

What do we do about the trolls?

The White House has recently called for Congress to do something about the trolls as horror stories of scams and shake-downs are shared. A bill was gaining momentum in the Senate when Senator Leahy took it off the agenda in early July. That bill had miraculously passed 325-91 in the House, and President Obama was willing to sign it if the Senate were to pass it. The bill was opposed by trial attorneys, universities, and bio-pharmaceutical businesses who felt as though the law would severely inhibit everyone’s access to the courts in order to hinder just the trolls. Regardless, most people think that the sitting Congressmen merely wanted a “win” prior to the mid-term elections and that patent reform is unlikely to reappear until next term.

In the meantime, the Supreme Court has recently reiterated rules concerning attorney fee-shifting on frivolous patent cases, as well as clarifying the validity of software patents. Time will tell if these changes have any effects on the damages awards that PwC’s study examined or even if they cause a chilling of the number of patent lawsuit filings.

Furthermore, new ways to challenge the validity of asserted patents have been initiated via the America Invents Act. For example, the Inter Partes Review (IPR) has yielded frightening preliminary statistics as to slowing, if not killing, patents that have been asserted in a suit. While these administrative trials are not cheap, many view these new tools at the Patent Trial and Appeals Board as anti-troll measures. It will be interesting to watch how the USPTO implements these procedures in the near future, especially while former Google counsel, Acting Director Michelle K. Lee, oversees the office.

In the private sector, Silicon Valley has recently seen a handful of tech companies come together as the License on Transfer Network, a group hoping to disarm the “Patent Assertion Entities.” Joining the LOT Network comes via an agreement that creates a license for use of a patent by anyone in the LOT network once that patent is sold. The thought is that the NPEs who consider purchasing patents from companies in the LOT Network will have fewer companies to sue since the license to the other active LOT participants will have triggered upon the transfer and, thus, the NPE will not be as inclined to “troll.” For instance, if a member-company such as Google were to sell a patent to a non-member company and an NPE bought that patent, the NPE would not be able to sue any members of the LOT Network with that patent.
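The trigger mechanism is easier to see as a small rule system. The sketch below is a toy Python model of the arrangement described above, with hypothetical member names and a deliberately simplified rule; it is not the actual LOT agreement:

```python
# Toy model of the LOT Network trigger: when a member sells a patent outside
# the network, every current member automatically receives a license, so an
# NPE acquiring the patent cannot assert it against those members.
# Membership and rules here are simplified assumptions for illustration.

lot_members = {"Google", "Dropbox", "SAP"}  # hypothetical membership snapshot

def transfer_patent(patent: str, seller: str, buyer: str, licenses: dict) -> None:
    """Record a sale; if the seller is a LOT member, license all current members."""
    if seller in lot_members and buyer not in lot_members:
        licenses[patent] = set(lot_members)  # license triggers on the transfer

def can_sue(patent: str, target: str, licenses: dict) -> bool:
    """An assertion fails against anyone already licensed under the patent."""
    return target not in licenses.get(patent, set())

licenses: dict[str, set[str]] = {}
transfer_patent("US1234567", seller="Google", buyer="NPE Holdings", licenses=licenses)
print(can_sue("US1234567", target="Dropbox", licenses=licenses))    # False: licensed member
print(can_sue("US1234567", target="Acme Corp", licenses=licenses))  # True: non-member
```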

Other notes

NPEs are only as evil as the people who run them—that being said, there are plenty of horror stories of small businesses receiving phantom demand letters that threaten a patent infringement suit without identifying themselves or the patent. This is an out-and-out scam and a plague on society that results in wasted time and resources, and inevitably higher prices on the consumer end.

It is a sin and a shame that patent rights can be misused in scams and shake-downs of businesses around us, but there is a reason that U.S. courts are so often used to defend patent rights. The PwC study, at minimum, reflects the high stakes of the patent market and perhaps the fragility. Nevertheless, merely monitoring the courts may not keep the trolls at bay.

I’d love to hear your thoughts.

*This is provided for informational purposes only, and does not constitute legal or financial advice. The information expressed is subject to change at any time and should be checked for completeness, accuracy and current applicability. For advice, consult a suitably licensed attorney or patent agent.

 

III. Large-scale analysis finds majority of clinical trials don’t provide meaningful evidence

Categories: Ineffective Treatments, Medical Ethics • Tags: Center for Drug Evaluation and Research, Clinical trial, CTTI, Duke University Hospital, FDA, Food and Drug Administration, National Institutes of Health, United States National Library of Medicine

04 May 2012

DURHAM, N.C.— The largest comprehensive analysis of ClinicalTrials.gov finds that clinical trials are falling short of producing the high-quality evidence needed to guide medical decision-making. The analysis, published today in JAMA, found that the majority of clinical trials are small, and that there are significant differences among methodological approaches, including randomizing, blinding and the use of data monitoring committees.

“Our analysis raises questions about the best methods for generating evidence, as well as the capacity of the clinical trials enterprise to supply sufficient amounts of high quality evidence to ensure confidence in guideline recommendations,” said Robert Califf, M.D., first author of the paper, vice chancellor for clinical research at Duke University Medical Center, and director of the Duke Translational Medicine Institute.

The analysis was conducted by the Clinical Trials Transformation Initiative (CTTI), a public-private partnership founded by the Food and Drug Administration (FDA) and Duke. It extends the usability of the data in ClinicalTrials.gov for research by placing the data through September 27, 2010 into a database structured to facilitate aggregate analysis. This publicly accessible database facilitates the assessment of the clinical trials enterprise in a more comprehensive manner than ever before and enables the identification of trends by study type.

 

The National Library of Medicine (NLM), a part of the National Institutes of Health, developed and manages ClinicalTrials.gov. This site maintains a registry of past, current, and planned clinical research studies.

“Since 2007, the Food and Drug Administration Amendment Act has required registration of clinical trials, and the expanded scope and rigor of trial registration policies internationally is producing more complete data from around the world,” stated Deborah Zarin, MD, director, ClinicalTrials.gov, and assistant director for clinical research projects, NLM. “We have amassed over 120,000 registered clinical trials. This rich repository of data has a lot to say about the national and international research portfolio.”

This CTTI project was a collaborative effort by informaticians, statisticians and project managers from NLM, FDA and Duke. CTTI comprises more than 60 member organizations with the goal of identifying practices that will improve the quality and efficiency of clinical trials.

“Since the ClinicalTrials.gov registry contains studies sponsored by multiple entities, including government, industry, foundations and universities, CTTI leaders recognized that it might be a valuable source for benchmarking the state of the clinical trials enterprise,” stated Judith Kramer, MD, executive director of CTTI.

The project goal was to produce an easily accessible database incorporating advances in informatics to permit a detailed characterization of the body of clinical research and facilitate analysis of groups of studies by therapeutic areas, by type of sponsor, by number of participants and by many other parameters.

“Analysis of the entire portfolio will enable the many entities in the clinical trials enterprise to examine their practices in comparison with others,” says Califf. “For example, 96% of clinical trials have ≤1,000 participants, and 62% have ≤100. While there are many excellent small clinical trials, these studies will not be able to inform patients, doctors and consumers about the choices they must make to prevent and treat disease.”
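The kind of portfolio-wide query behind figures like these can be sketched briefly. The following Python/pandas fragment is purely illustrative: the column names and rows are invented and do not reflect the actual structure of the CTTI database or ClinicalTrials.gov:

```python
import pandas as pd

# Hypothetical miniature extract of a trial registry; these fields are
# assumptions for illustration, not the real ClinicalTrials.gov schema.
trials = pd.DataFrame({
    "nct_id":     ["NCT001", "NCT002", "NCT003", "NCT004", "NCT005"],
    "enrollment": [48, 250, 3200, 90, 72],
    "randomized": [True, True, False, True, False],
})

# Share of trials at or below each enrollment cutoff, as in Califf's example.
share_le_100 = (trials["enrollment"] <= 100).mean()
share_le_1000 = (trials["enrollment"] <= 1000).mean()
print(f"≤100 participants: {share_le_100:.0%}; ≤1,000 participants: {share_le_1000:.0%}")
```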

The analysis showed heterogeneity in median trial size, with cardiovascular trials tending to be twice as large as those in oncology and trials in mental health falling in the middle. It also showed major differences in the use of randomization, blinding, and data monitoring committees, critical issues often used to judge the quality of evidence for medical decisions in clinical practice guidelines and systematic overviews.

“These results reinforce the importance of exploration, analysis and inspection of our clinical trials enterprise,” said Rachel Behrman Sherman, MD, associate director for the Office of Medical Policy at the FDA’s Center for Drug Evaluation and Research. “Generation of this evidence will contribute to our understanding of the number of studies in different phases of research, the therapeutic areas, and ways we can improve data collection about clinical trials, eventually improving the quality of clinical trials.”

Related articles

 

IV.  Lawmakers urge CMS to extend MU hardship exemption for pathologists

 

Eighty-nine members of Congress have asked the Centers for Medicare & Medicaid Services to give pathologists a break and extend the hardship exemption they currently enjoy for all of Stage 3 of the Meaningful Use program. In the letter–dated July 10 and addressed to CMS Administrator Marilyn Tavenner–the lawmakers point out that CMS had recognized in its 2012 final rule implementing Stage 2 of the program that it was difficult for pathologists to meet the Meaningful Use requirements and granted a one-year exception for 2015, the first year that penalties will be imposed. They now are asking that the exception be expanded to include the full five-year maximum allowed under the American Recovery and Reinvestment Act.

“Pathologists have limited direct contact with patients and do not operate in EHRs,” the letter states. “Instead, pathologists use sophisticated computerized laboratory information systems (LISs) to support the work of analyzing patient specimens and generating test results. These LISs exchange laboratory and pathology data with EHRs.”

Interestingly, the lawmakers’ exemption request is only on behalf of pathologists, even though CMS had granted the one-year hardship exception to pathologists, radiologists and anesthesiologists.

Rep. Tom Price (R-Ga.), one of the members spearheading the letter, had also introduced a bill (H.R. 1309) in March 2013 that would exclude pathologists from the incentives and penalties of the Meaningful Use program. The bill, which has 31 cosponsors, is currently sitting in committee. That bill also does not include relief for radiologists or anesthesiologists.

CMS has provided some flexibility about the hardship exceptions in the past, most recently by allowing providers to apply for one due to EHR vendor delays in upgrading to Stage 2 of the program.

However, CMS also noted in the 2012 rule granting the one-year exception that it was granting the exception in large part because of the then-current lack of health information exchange and that “physicians in these three specialties should not expect that this exception will continue indefinitely, nor should they expect that we will grant the exception for the full 5-year period permitted by statute.”

To learn more:
– read the letter (.pdf)

This is not quite new on artemisinin, but encouraging. It needs to be followed up.

Interesting observations to share on Lactoferrin

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson


Curators and Writer: Stephen J. Williams, Ph.D. with input from Curators Larry H. Bernstein, MD, FCAP, Dr. Justin D. Pearlman, MD, PhD, FACC and Dr. Aviva Lev-Ari, PhD, RN

(This discussion is part of a three-part series including:

Using Scientific Content Curation as a Method for Validation and Biocuration

Using Scientific Content Curation as a Method for Open Innovation)

 

Every month I get my Wired Magazine (yes, in hard print; I still like to turn pages manually, plus I don’t mind if I get grease or wing sauce on my magazine rather than on my e-reader) and I always love reading articles written by Clive Thompson. He has a certain flair for understanding the techno world we live in and the human/technology interaction, writing about interesting ways in which we almost inadvertently integrate new technologies into our day-to-day living, generating new entrepreneurship and new value.

An October 2013 Wired article by Clive Thompson, entitled “How Successful Networks Nurture Good Ideas: Thinking Out Loud”, describes how the voluminous writings, postings, tweets, and sharing on social media are fostering connections between people and ideas which, previously, had not existed. The article was generated from Clive Thompson’s book Smarter Than You Think: How Technology Is Changing Our Minds for the Better. Tom Peters also commented about the article in his blog (see here).

Clive gives a wonderful example of Ory Okolloh, a young Kenyan-born law student who, after becoming frustrated with the lack of coverage of problems back home, started a blog about Kenyan politics. Her blog not only got interest from movie producers who were documenting female bloggers but also gained the interest of fellow Kenyans who, during the upheaval after the 2007 Kenyan elections, helped Ory to develop a Google map for reporting violence (http://www.ushahidi.com/), which eventually became a global organization using open-source technology to support crisis management. There are a multitude of examples of how networks and the conversations within these circles are fostering new ideas. As Clive states in the article:

 

Our ideas are PRODUCTS OF OUR ENVIRONMENT.

They are influenced by the conversations around us.

However the article got me thinking of how Science 2.0 and the internet is changing how scientists contribute, share, and make connections to produce new and transformative ideas.

But HOW MUCH Knowledge is OUT THERE?

 

Clive’s article listed some amazing facts about the mountains of posts, tweets, words etc. out on the internet EVERY DAY, all of which exemplifies the problem:

  • 154.6 billion EMAILS per DAY
  • 400 million TWEETS per DAY
  • 1 million BLOG POSTS (including this one) per DAY
  • 2 million COMMENTS on WordPress per DAY
  • 16 million WORDS on Facebook per DAY
  • TOTAL 52 TRILLION WORDS per DAY

As he estimates this would be 520 million books per DAY (book with average 100,000 words).
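A quick check of that back-of-the-envelope arithmetic, using the article's own figures:

```python
words_per_day = 52e12      # "TOTAL 52 TRILLION WORDS per DAY" from the list above
words_per_book = 100_000   # average book length assumed in the article
books_per_day = words_per_day / words_per_book
print(f"{books_per_day:,.0f} books per day")  # 520,000,000, i.e. 520 million
```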

A LOT of INFO. But as he suggests, it is not the volume but how we create and share this information that is critical, as the science fiction writer Theodore Sturgeon noted: “Ninety percent of everything is crap” (AKA Sturgeon’s Law).

 

Internet Live Stats shows how congested the internet is each day (http://www.internetlivestats.com/). Needless to say, Clive’s numbers are a bit off. As of the writing of this article:

 

  • 2.9 billion internet users
  • 981 million websites (only 25,000 hacked today)
  • 128 billion emails
  • 385 million Tweets
  • > 2.7 million BLOG posts today (including this one)

 

The Good, The Bad, and the Ugly of the Scientific Internet (The Wild West?)

 

So how many science blogs are out there? Well, back in 2008 “grrlscientist” asked this question and turned up a total of 19,881 blogs; however, most were “pseudoscience” blogs, not written by Ph.D.- or M.D.-level scientists. A deeper search on Technorati using the search term “scientist PhD” turned up about 2,000 written by trained scientists.

So granted, there is a lot of good, bad, and ugly ….. when it comes to scientific information on the internet!

I had recently re-posted, on this site, a great example of how bad science and medicine can get propagated throughout the internet:

http://pharmaceuticalintelligence.com/2014/06/17/the-gonzalez-protocol-worse-than-useless-for-pancreatic-cancer/

 

and in a Nature report: Stem cells: Taking a stand against pseudoscience

http://www.nature.com/news/stem-cells-taking-a-stand-against-pseudoscience-1.15408

Drs. Elena Cattaneo and Gilberto Corbellini document their long, hard fight against false and invalidated medical claims made by some “clinicians” about the utility and medical benefits of certain stem-cell therapies, sacrificing their time to debunk medical pseudoscience.

 

Using Curation and Science 2.0 to build Trusted, Expert Networks of Scientists and Clinicians

 

Establishing networks of trusted colleagues has been a cornerstone of the scientific discourse for centuries. For example, in the mid-1640s, the Royal Society began as:

 

“a meeting of natural philosophers to discuss promoting knowledge of the natural world through observation and experiment”, i.e. science. The Society met weekly to witness experiments and discuss what we would now call scientific topics. The first Curator of Experiments was Robert Hooke.”

 

from The History of the Royal Society

 

Royal Society Coat of Arms

The Royal Society of London for Improving Natural Knowledge.

(photo credit: Royal Society)

(Although one wonders why they met “incognito”)

Indeed, as discussed in “Science 2.0/Brainstorming” by the originators of OpenWetWare, an open-source science-notebook software designed to foster open innovation, new search and aggregation tools are making it easier to find, contribute, and share information with interested individuals. This paradigm is the basis for the shift from Science 1.0 to Science 2.0. Science 2.0 is attempting to remedy current drawbacks which are hindering rapid and open scientific collaboration and discourse, including:

  • Slow time frame of current publishing methods: reviews can take years to produce, leading to outdated material
  • Level of information dissemination is currently one-dimensional: peer review, highly polished work, conferences
  • Current publishing does not encourage open feedback and review
  • Published articles edited for print do not take advantage of new web-based features, including tagging, search-engine features, interactive multimedia, and hyperlinks
  • Published data and methodology are often incomplete
  • Published data are not available in formats readily accessible across platforms: gene lists are now mandated to be supplied as files, but other data do not have to be supplied in file format


 

Curation in the Sciences: View from Scientific Content Curators Larry H. Bernstein, MD, FCAP, Dr. Justin D. Pearlman, MD, PhD, FACC and Dr. Aviva Lev-Ari, PhD, RN

Curation is an active filtering of the immense amount of relevant and irrelevant content found on the web and in the peer-reviewed literature. As a result, content may be disruptive. However, in doing good curation, one does more than simply assign value by presentation of creative work in any category. Great curators comment and share experience across content, authors and themes. Great curators may see patterns others don’t, or may challenge or debate complex and apparently conflicting points of view. Answers to specifically focused questions come from the hard work of many in laboratory settings creatively establishing answers to definitive questions, each a part of the larger knowledge-base of reference. There are those rare “Einsteins” who imagine a whole universe, unlike the three blind men of the Sufi tale. One held the tail, the other the trunk, the other the ear, and they all said: this is an elephant!
In my reading, I learn that the optimal ratio of curation to creation may be as high as 90% curation to 10% creation. Creating content is expensive. Curation, by comparison, is much less expensive.

– Larry H. Bernstein, MD, FCAP

Curation is Uniquely Distinguished by the Historical Exploratory Ties that Bind – Larry H. Bernstein, MD, FCAP

The explosion of information by numerous media, hardcopy and electronic, written and video, has created difficulties tracking topics and tying together relevant but separated discoveries, ideas, and potential applications. Some methods to help assimilate diverse sources of knowledge include a content expert preparing a textbook summary, a panel of experts leading a discussion or think tank, and conventions moderating presentations by researchers. Each of those methods has value and an audience, but they also have limitations, particularly with respect to timeliness and pushing the edge. In the electronic data age, there is a need for further innovation, to make synthesis, stimulating associations, synergy and contrasts available to audiences in a more timely and less formal manner. Hence the birth of curation. Key components of curation include expert identification of data, ideas and innovations of interest, expert interpretation of the original research results, integration with context, digesting, highlighting, correlating and presenting in novel light.

Justin D. Pearlman, MD, PhD, FACC, from The Voice of Content Consultant on The Methodology of Curation, in Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation, The Art of Scientific & Medical Curation

 

In Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison, Drs. Larry Bernstein and Aviva Lev-Ari liken the medical and scientific curation process to curation of musical works into a thematic program:

 

Work of Original Music Curation and Performance:

 

Music Review and Critique as a Curation

Work of Original Expression: what is the methodology of Curation in the context of Medical Research Findings; Exposition of Synthesis and Interpretation of the significance of the results to Clinical Care

… leading to new, curated, and collaborative works by networks of experts to generate (in this case) ebooks on most significant trends and interpretations of scientific knowledge as relates to medical practice.

 

In Summary: How Scientific Content Curation Can Help

 

Given the aforementioned problems of:

I. the complex and rapid deluge of scientific information

II. the need for a collaborative, open environment to produce transformative innovation

III. the need for alternative ways to disseminate scientific findings

CURATION MAY OFFER SOLUTIONS

I. Curation exists beyond the review: curation decreases time for assessment of current trends, adding multiple insights and analyses WITH an underlying METHODOLOGY (discussed below) while NOT acting as mere reiteration or regurgitation

II. Curation provides insights from the WHOLE scientific community on multiple WEB 2.0 platforms

III. Curation makes use of new computational and Web-based tools to provide interoperability of data and reporting of findings (shown in Examples below)

 

Therefore a discussion is given on methodologies, definitions of best practices, and tools developed to assist the content curation community in this endeavor.

Methodology in Scientific Content Curation as Envisioned by Aviva Lev-Ari, PhD, RN

 

At Leaders in Pharmaceutical Business Intelligence, site owner and chief editor Aviva Lev-Ari, PhD, RN, has been developing a strategy “for the facilitation of Global access to Biomedical knowledge rather than the access to sheer search results on Scientific subject matters in the Life Sciences and Medicine”. According to Aviva, “for the methodology to attain this complex goal it is to be dealing with popularization of ORIGINAL Scientific Research via Content Curation of Scientific Research Results by Experts, Authors, Writers using the critical thinking process of expert interpretation of the original research results.” The following post:

Cardiovascular Original Research: Cases in Methodology Design for Content Curation and Co-Curation

 

http://pharmaceuticalintelligence.com/2013/07/29/cardiovascular-original-research-cases-in-methodology-design-for-content-curation-and-co-curation/

demonstrates two examples of how content co-curation attempts to achieve this aim and develop networks of scientist and clinician curators to aid in the active discussion of scientific and medical findings, and uses scientific content curation as a means of critique, offering a “new architecture for knowledge”. Indeed, popular search engines such as Google and Yahoo, and even scientific search engines such as NCBI’s PubMed and the OVID search engine, rely on keywords and Boolean algorithms …

which has created a need for more context-driven scientific search and discourse.
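To see what keyword-and-Boolean retrieval does (and does not do), here is a toy Python sketch; it illustrates the general idea only, not how PubMed or OVID are actually implemented. Documents are matched purely on term logic, with no notion of context, quality, or expert judgment, which is the gap context-driven curation aims to fill:

```python
# Toy corpus of abstracts keyed by made-up document ids.
abstracts = {
    "paper1": "schizophrenia gwas loci dopamine receptor",
    "paper2": "curation of cardiovascular research findings",
    "paper3": "schizophrenia treatment glutamatergic neurotransmission",
}

def boolean_search(docs, must_have, must_not=()):
    """Return ids whose text contains all `must_have` terms and none of `must_not`."""
    return [
        doc_id
        for doc_id, text in docs.items()
        if all(term in text for term in must_have)
        and not any(term in text for term in must_not)
    ]

# "schizophrenia NOT gwas" in Boolean terms:
print(boolean_search(abstracts, must_have=["schizophrenia"], must_not=["gwas"]))
# ['paper3']
```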

In Science and Curation: the New Practice of Web 2.0, Célya Gruson-Daniel (@HackYourPhd) states:

To address this need, human intermediaries, empowered by the participatory wave of web 2.0, naturally started narrowing down the information and providing an angle of analysis and some context. They are bloggers, regular Internet users or community managers – a new type of profession dedicated to the web 2.0. A new use of the web has emerged, through which the information, once produced, is collectively spread and filtered by Internet users who create hierarchies of information.

… where Célya considers curation an essential practice for managing open science and this new style of research.

As mentioned above in her article, Dr. Lev-Ari represents two examples of how content curation expanded thought, discussion, and eventually new ideas.

  1. The curator edifies content through an analytic process = a NEW form of writing and organization leading to new interconnections of ideas = NEW INSIGHTS

i) Evidence: curation methodology leading to new insights for biomarkers

  2. As in #1, but with multiple players (experts), each bringing unique insights, perspectives, and skills that yield new research = a NEW LINE of CRITICAL THINKING

ii) Evidence: co-curation methodology among cardiovascular experts leading to the cardiovascular series of ebooks


The Life Cycle of Science 2.0. Due to Web 2.0, new paradigms of scientific collaboration are rapidly emerging. Originally, scientific discovery was performed by individual laboratories or “scientific silos”, where the main methods of communication were peer-reviewed publication, meeting presentations, and ultimately news outlets and multimedia. In the digital era, data were organized for literature search and biocurated databases. In the era of social media and Web 2.0, a group of scientifically and medically trained “curators” organizes the piles of digitally generated data and fits them into an organizational structure that can be shared, communicated, and analyzed in a holistic approach, launching new ideas due to changes in the organizational structure of data and in data analytics.

 

The result, in this case, is a collaborative written work beyond the scope of a review. Currently, review articles are written by experts in the field and summarize the state of a research area. Using collaborative, trusted networks of experts, however, the result is a real-time synopsis and analysis of the field, with the goal in mind to

INCREASE THE SCIENTIFIC CURRENCY.

For a detailed description of the methodology, please see Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation The Art of Scientific & Medical Curation

 

In her paper, Curating e-Science Data, Maureen Pennock of The British Library emphasized the importance of a diligent, validated, reproducible, and cost-effective methodology for curation by e-science communities over the ‘Grid’:

“The digital data deluge will have profound repercussions for the infrastructure of research and beyond. Data from a wide variety of new and existing sources will need to be annotated with metadata, then archived and curated so that both the data and the programmes used to transform the data can be reproduced for use in the future. The data represent a new foundation for new research, science, knowledge and discovery”

— JISC Senior Management Briefing Paper, The Data Deluge (2004)

 

As she states, proper data and content curation is important for:

  • Post-analysis
  • Data and research result reuse for new research
  • Validation
  • Preservation of data in newer formats to prolong the life-cycle of research results

However, she laments the lack of:

  • Funding for such efforts
  • Training
  • Organizational support
  • Monitoring
  • Established procedures

 

Tatiana Aders wrote a nice article based on an interview with Microsoft’s Robert Scoble, in which he emphasized the need for curation in a world where “Twitter is the replacement of the Associated Press Wire Machine” and new technological platforms are knocking out old platforms at a rapid pace. He also notes that curation is a social art form whose primary concerns are understanding an audience and a niche.

Indeed, as Mark Carrigan writes, part of the reason the need for curation is unmet is the lack of appreciation by academics of the utility of tools such as Pinterest, Storify, and Pearltrees for effectively communicating and building collaborative networks.

And teacher Nancy White, in her article Understanding Content Curation on her blog Innovations in Education, shows examples of how curation can serve as an educational tool for students and teachers by demonstrating that students need to CONTEXTUALIZE what they collect to add enhanced value, using higher mental processes such as:

  • Knowledge
  • Comprehension
  • Application
  • Analysis
  • Synthesis
  • Evaluation

A GREAT table about the differences between Collecting and Curating, by Nancy White: http://d20innovation.d20blogs.org/2012/07/07/understanding-content-curation/

University of Massachusetts Medical School has aggregated some useful curation tools at http://esciencelibrary.umassmed.edu/data_curation

Although many of these tools relate to biocuration and database building, the common idea is to curate data with indexing, analyses, and contextual value so that an audience can generate NETWORKS OF NEW IDEAS.

See here for a curation on how networks foster knowledge, by Erika Harrison on ScoopIt:

(http://www.scoop.it/t/mobilizing-knowledge-through-complex-networks)

 

“Nowadays, any organization should employ network scientists/analysts who are able to map and analyze complex systems that are of importance to the organization (e.g. the organization itself, its activities, a country’s economic activities, transportation networks, research networks).”

— Andrea Carafa, insight from World Economic Forum New Champions 2012, “Power of Networks”

 

Creating Content Curation Communities: Breaking Down the Silos!

 

An article by Dr. Dana Rotman, “Facilitating Scientific Collaborations Through Content Curation Communities”, highlights how scientific information resources, traditionally created and maintained by paid professionals, are being crowdsourced to what she terms “content curation communities”: groups of professional and nonprofessional volunteers who create, curate, and maintain the various scientific database tools we use, such as Encyclopedia of Life, ChemSpider (for Slideshare see here), and biowikipedia. Although very useful and openly available, these projects create their own challenges, such as:

  • information integration (various types of data and formats)
  • social integration (marginalized by scientific communities, no funding, no recognition)

The authors set forth some ways to overcome these challenges of the content curation community including:

  1. standardization in practices
  2. visualization to document contributions
  3. emphasizing role of information professionals in content curation communities
  4. maintaining quality control to increase respectability
  5. recognizing participation within professional communities
  6. proposing funding/national meeting – Data Intensive Collaboration in Science and Engineering Workshop

A few great presentations and papers from the 2012 DICOSE meeting are found below:

Judith M. Brown, Robert Biddle, Stevenson Gossage, Jeff Wilson & Steven Greenspan. Collaboratively Analyzing Large Data Sets using Multitouch Surfaces. (PDF) NotesForBrown

 

Bill Howe, Cecilia Aragon, David Beck, Jeffrey P. Gardner, Ed Lazowska, Tanya McEwen. Supporting Data-Intensive Collaboration via Campus eScience Centers. (PDF) NotesForHowe

 

Kerk F. Kee & Larry D. Browning. Challenges of Scientist-Developers and Adopters of Existing Cyberinfrastructure Tools for Data-Intensive Collaboration, Computational Simulation, and Interdisciplinary Projects in Early e-Science in the U.S. (PDF) NotesForKee

 

Ben Li. The mirages of big data. (PDF) NotesForLiReflectionsByBen

 

Betsy Rolland & Charlotte P. Lee. Post-Doctoral Researchers’ Use of Preexisting Data in Cancer Epidemiology Research. (PDF) NoteForRolland

 

Dana Rotman, Jennifer Preece, Derek Hansen & Kezia Procita. Facilitating scientific collaboration through content curation communities. (PDF) NotesForRotman

 

Nicholas M. Weber & Karen S. Baker. System Slack in Cyberinfrastructure Development: Mind the Gaps. (PDF) NotesForWeber

Indeed, the movement from Science 1.0 to Science 2.0 originated because these “silos” frustrated many scientists, resulting in changes in publishing (Open Access) as well as in the communication of protocols (online protocol sites and notebooks like OpenWetWare and BioProtocols Online) and in data and material registries (CGAP and tumor banks). Some examples are given below.

Open Science Case Studies in Curation

1. Open Science Project from Digital Curation Center

This project looked at what motivates researchers to work in an open manner with regard to their data, results and protocols, and whether advantages are delivered by working in this way.

The case studies consider the benefits of and barriers to using ‘open science’ methods; they were carried out between November 2009 and April 2010 and published in the report Open to All? Case Studies of Openness in Research. The Appendices to the main report (pdf) include a literature review, a framework for characterizing openness, a list of examples, and the interview schedule and topics. Some of the case study participants kindly agreed to our publishing the transcripts. This zip archive contains transcripts of interviews with researchers in astronomy, bioinformatics, chemistry, and language technology.

 

see: Pennock, M. (2006). “Curating e-Science Data”. DCC Briefing Papers: Introduction to Curation. Edinburgh: Digital Curation Centre. Handle: 1842/3330. Available online: http://www.dcc.ac.uk/resources/briefing-papers/introduction-curation

 

2. cBio – cBio’s biological data curation group developed and operates using a methodology called CIMS, the Curation Information Management System. CIMS is a comprehensive curation and quality control process that efficiently extracts information from publications.

 

3. NIH Topic Maps – This website provides a database and web-based interface for searching and discovering the types of research awarded by the NIH. The database uses automated, computer-generated categories from a statistical analysis known as topic modeling.

 

4. SciKnowMine (USC) – We propose to create a framework to support biocuration called SciKnowMine (after ‘Scientific Knowledge Mine’): cyberinfrastructure that supports biocuration through the automated mining of text, images, and other amenable media at the scale of the entire literature.

 

5. OpenWetWare – OpenWetWare is an effort to promote the sharing of information, know-how, and wisdom among researchers and groups who are working in biology & biological engineering. If you would like edit access, would be interested in helping out, or want your lab website hosted on OpenWetWare, please join us. OpenWetWare is managed by the BioBricks Foundation. They also have a wiki about Science 2.0.

6. LabTrove – a lightweight, web-based laboratory “blog” as a route towards a marked-up record of work in a bioscience research laboratory. The authors of a PLOS ONE article from the University of Southampton report the development of an open scientific lab notebook using a blogging strategy to share information.

7. OpenScience Project – The OpenScience project is dedicated to writing and releasing free and Open Source scientific software. We are a group of scientists, mathematicians and engineers who want to encourage a collaborative environment in which science can be pursued by anyone who is inspired to discover something new about the natural world.

8. Open Science Grid is a multi-disciplinary partnership to federate local, regional, community and national cyberinfrastructures to meet the needs of research and academic communities at all scales.

 

9. Some ongoing biomedical knowledge (curation) projects at ISI

IICurate
This project is concerned with developing a curation and documentation system for information integration in collaboration with the II Group at ISI as part of the BIRN.

BioScholar
Its primary purpose is to provide software for experimental biomedical scientists that would permit a single scientific worker (at the level of a graduate student or postdoctoral worker) to design, construct, and manage a shared knowledge repository for a research group derived from a local store of PDF files. This project is funded by NIGMS from 2008-2012 (RO1-GM083871).

10. Tools useful for scientific content curation

 

Research Analytic and Curation Tools from University of Queensland

 

Thomson Reuters information curation services for pharma industry

 

Microblogs as a way to communicate information about HPV infection among clinicians and patients; use of the Chinese microblog Sina Weibo as a communication tool

 

VIVO for scientific communities – In order to connect information about research activities across institutions and make it available to others, taking into account smaller players in the research landscape and addressing their need for specific information (for example, by providing non-conventional research objects), many countries use the open source software VIVO, which provides research information as linked open data (LOD). So-called VIVO harvesters collect research information that is freely available on the web and convert the collected data in conformity with LOD standards. The VIVO ontology builds on prevalent LOD namespaces and can be expanded to suit the needs of the specialist community concerned.
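To make the linked-open-data idea concrete, here is a minimal sketch, assuming Python with the SPARQLWrapper library, of querying a VIVO installation’s SPARQL endpoint; the endpoint URL vivo.example.edu is hypothetical, and vivo:FacultyMember is a class taken from the public VIVO core ontology, so both should be adjusted to the installation at hand.

```python
from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

# Hypothetical endpoint URL; each VIVO installation publishes its own.
sparql = SPARQLWrapper("https://vivo.example.edu/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX vivo: <http://vivoweb.org/ontology/core#>
    SELECT ?person ?name
    WHERE {
        ?person a vivo:FacultyMember ;
                rdfs:label ?name .
    }
    LIMIT 10
""")

# Print the names of up to ten faculty members exposed as linked open data.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["name"]["value"])
```

Because VIVO sites share the same ontology terms, the same query can, in principle, harvest comparable researcher profiles across institutions.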

 

 

11. Examples of scientific curation in different areas of Science/Pharma/Biotech/Education

 

From Science 2.0 to Pharma 3.0 Q&A with Hervé Basset

http://digimind.com/blog/experts/pharma-3-0/

Hervé Basset, a specialist librarian in the pharmaceutical industry and owner of the blog “Science Intelligence”, talks about the inspiration behind his recent book “From Science 2.0 to Pharma 3.0” (published by Chandos Publishing and available on Amazon) and how health-care companies need a social media strategy to communicate with and convince the health-care consumer, not just the practitioner.

 

Thomson Reuters and NuMedii Launch Ground-Breaking Initiative to Identify Drugs for Repurposing. Companies leverage content, Big Data analytics and expertise to improve success of drug discovery

 

Content Curation as a Context for Teaching and Learning in Science

 

#OZeLIVE Feb2014

http://www.youtube.com/watch?v=Ty-ugUA4az0

Creative Commons license

 

DigCCurr: a graduate-level program initiated by the University of North Carolina to train future digital curators in science and other subjects

 

Syracuse University offers a program in eScience and digital curation

 

Curation Tips from TED talks and tech experts

Steven Rosenbaum from Curation Nation

http://www.youtube.com/watch?v=HpncJd1v1k4

 

Pawan Deshpande from Curata on how content curation communities evolve and what makes good content curation:

http://www.youtube.com/watch?v=QENhIU9YZyA

 

How the Internet of Things is Promoting the Curation Effort

Update by Stephen J. Williams, PhD 3/01/19

Until now, curation efforts like wikis (Wikipedia, Wikimedicine, Wormbase, GenBank, etc.) have been supported by a largely voluntary army of citizens, scientists, and data enthusiasts. I am sure all have seen the requests for donations to help keep Wikipedia and its related projects up and running. One of the more obscure sister projects of Wikipedia, Wikidata, wants to curate and represent all information in a form in which machines, computers, and humans alike can converse. An army of about 4 million contributors makes Wiki entries and maintains these databases.

Enter the Age of the Personal Digital Assistants (Hellooo Alexa!)

In a March 2019 WIRED article, “Encyclopedia Automata: Where Alexa Gets Its Information”, senior WIRED writer Tom Simonite reports on the need for new types of data structures and on how curated databases are essential to the new fields of AI, enabling personal digital assistants like Alexa or Google Assistant to decipher the user’s meaning.

As Mr. Simonite notes, many of our libraries of knowledge are encoded in an “ancient technology largely opaque to machines”: prose. Search engines like Google have no problem with a question asked in prose, as they just have to find relevant links to pages. Yet this is a problem for Google Assistant, for instance, as machines can’t quickly extract meaning from the internet’s mess of “predicates, complements, sentences, and paragraphs. It requires a guide.”

Enter Wikidata.  According to founder Denny Vrandecic,

Language depends on knowing a lot of common sense, which computers don’t have access to

A Wikidata entry (of which there are about 60 million) codes every concept and item with a numeric identifier, the QID. These codes are integrated with tags (like the hashtags used on Twitter or the tags used in WordPress for search engine optimization) so computers can identify patterns of recognition between the codes.
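To see what this coding looks like in practice, the minimal sketch below, assuming Python with the requests library, pulls the machine-readable record for one QID from Wikidata’s public Special:EntityData endpoint; Q42 (Douglas Adams) is simply a well-known example item, not one discussed in the article.

```python
import requests

# Fetch the machine-readable record for item Q42, a standard Wikidata example.
url = "https://www.wikidata.org/wiki/Special:EntityData/Q42.json"
entity = requests.get(url, timeout=30).json()["entities"]["Q42"]

print(entity["labels"]["en"]["value"])  # the human-readable English label
print(len(entity["claims"]), "properties with coded statements attached to this QID")
```

Every fact a program reads here was entered by a contributor, which is exactly why the human curation effort matters.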

Human entry into these databases is critical, as we add new facts and, in particular, meaning to each of these items. Otherwise, machines have trouble deciphering our meaning, as with Apple’s Siri, whose users have complained of dumb algorithms interpreting their requests.

The knowledge of future machines could be shaped by you and me, not just tech companies and PhDs.

But this effort needs money

Wikimedia’s executive director, Katherine Maher, has prodded and cajoled these megacorporations for tapping the free resources of the Wikis. In response, Amazon and Facebook have donated millions to the Wikimedia projects, and Google recently gave US$3.1 million in donations.

 

Future postings on the relevance and application of scientific curation will include:

Using Scientific Content Curation as a Method for Validation and Biocuration

 

Using Scientific Content Curation as a Method for Open Innovation

 

Other posts on this site related to Content Curation and Methodology include:

The growing importance of content curation

Data Curation is for Big Data what Data Integration is for Small Data

6 Steps to More Effective Content Curation

Stem Cells and Cardiac Repair: Content Curation & Scientific Reporting

Cancer Research: Curations and Reporting

Cardiovascular Diseases and Pharmacological Therapy: Curations

Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation The Art of Scientific & Medical Curation

Exploring the Impact of Content Curation on Business Goals in 2013

Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison

conceived: NEW Definition for Co-Curation in Medical Research

The Young Surgeon and The Retired Pathologist: On Science, Medicine and HealthCare Policy – The Best Writers Among the WRITERS

Reconstructed Science Communication for Open Access Online Scientific Curation

 

 

Read Full Post »

Diagnostic Approach to Neurodegenerative Disorders: Biomarkers Overview

 

Reporter: Aviva Lev-Ari, PhD, RN

ANNOUNCEMENT by Cambridge Healthtech Institute

 

Lisa Scimemi, MBE, MSM

Publisher

Insight Pharma Reports

lscimemi@healthtech.com

Web: http://www.InsightPharmaReports.com

 

 

Click here for more information on this research report, or to access your copy today.

Visit us online for more information on this report, or on over 1,000 other reports available through Insight Pharma Reports.

 

Cambridge Healthtech Institute | 250 First Avenue, Suite 300 | Needham, MA 02494

Phone: 781.972.5400 or toll free 800.856.2556 | Fax: 781.972.5425 | http://www.healthtech.com

 

Biomarkers: Discovery and Development for a Diagnostic Approach to Neurodegenerative Disorders – Overview

Available Mid-July! 

Biomarkers: Discovery and Development for a Diagnostic Approach to Neurodegenerative Disorders focuses on biomarkers for neurodegenerative diseases and the diagnostic applications in development. Biomarkers have been a heavily studied topic of interest, and interest in neurodegenerative disorders has recently been on the rise. Although there are many techniques used to track neurodegenerative disease progression, this report will primarily focus on blood-based and cerebrospinal fluid-based biomarkers. In addition to covering background information, this report will highlight several technologies that have been developed for employing biomarkers in neurodegenerative disease detection, analysis, and therapeutic development. Including substantial background information illustrated with graphics and figures, this report captures the market growth of biomarkers, their advantages and disadvantages, and validation techniques.

Three neurodegenerative disorders receive particular focus in this report: Alzheimer’s Disease/Mild Cognitive Impairment, Parkinson’s Disease, and Amyotrophic Lateral Sclerosis. Part II of the report will cover all three of these disorders, highlighting specifics including the background, history, and development of each disease. Deeper into the chapters, the report will unfold the biomarkers under investigation, genetic targets, and an analysis of multiple studies investigating these elements.

Experts interviewed in these chapters include:

  • Dr. Jens Wendland, Head of Neuroscience Genetics, Precision Medicine, PharmaTherapeutics, Pfizer Worldwide R&D
  • Dr. Howard J. Federoff, Executive Vice President for Health Sciences, Georgetown University
  • Dr. Andrew West, Associate Professor of Neurology and Neurobiology and Co-Director, Center for Neurodegeneration and Experimental Therapeutics
  • Dr. Merit Ester Cudkowicz, Chief of Neurology at Massachusetts General Hospital

Part III of the report makes a shift from neurobiomarkers to neurodiagnostics. This section highlights several diagnostics in play and in the making from a number of companies, identifying company strategies, research underway, hypotheses, and institution goals. Elite researchers and companies highlighted in this part include:

  • Dr. Xuemei Huang, Professor and Vice Chair, Department of Neurology; Professor of Neurosurgery, Radiology, Pharmacology, and Kinesiology; Director, Hershey Brain Analysis Research Laboratory for Neurodegenerative Disorders, Penn State University Milton S. Hershey Medical Center, Department of Neurology
  • Dr. Andreas Jeromin, CSO and President of Atlantic Biomarkers
  • Julien Bradley, Senior Director, Sales & Marketing, Quanterix
  • Dr. Scott Marshall, Head of Bioanalytics, and Dr. Jared Kohler, Head of Biomarker Statistics, BioStat Solutions, Inc.

Further analysis appears in Part IV. This section includes a survey conducted exclusively for this report. With over 30 figures and graphics and an in-depth analysis, this part features insight into targets under investigation, challenges, advantages, and desired features of future diagnostic applications. Furthermore, the survey covers more than just the neurodegenerative disorders featured in this report, expanding to Multiple Sclerosis and Huntington’s Disease.

Insight Pharma Reports has also compiled a generous amount of clinical trial information and pipeline data related to Alzheimer’s Disease, Parkinson’s Disease, and Amyotrophic Lateral Sclerosis. This fifth and final part of the report contains the most current information in the aforementioned disease areas; the information was provided by CenterWatch and the Biotechgate Global Database.

SOURCE

http://www.insightpharmareports.com/neurobiomarkers-and-diagnostics-report

From: Lisa Scimemi <lisas@healthtech.com>
Reply-To: Lisa Scimemi <lisas@healthtech.com>
Date: Wed, 16 Jul 2014 15:20:07 -0400
To: Aviva <avivalev-ari@alum.berkeley.edu>
Subject: Biomarkers and Neurodegenerative Disorders

 

Read Full Post »
