
Archive for the ‘Scoop.it’ Category

Geneticist George Church: A Future Without Limits

Reporter: Aviva Lev-Ari, PhD, RN

Article ID #155: Geneticist George Church: A Future Without Limits. Published on 10/24/2014

WordCloud Image Produced by Adam Tubman


In the future, George Church believes, almost everything will be better because of genetics. If you have a medical problem, your doctor will be able to customize a treatment based on your specific DNA pattern. When you fill up your car, you won’t be draining the world’s dwindling supply of crude oil, because the fuel will come from microbes that have been genetically altered to produce biofuel. When you visit the zoo, you’ll be able to take your children to the woolly mammoth or passenger pigeon exhibits, because these animals will no longer be extinct. You’ll be able to do these things, that is, if the future turns out the way Church envisions it—and he’s doing everything he can to see that it does.

UPDATED 12/05/2020

George Church backs a startup solution to the massive gene therapy manufacturing bottleneck

Source: https://endpts.com/george-church-backs-a-startup-solution-to-the-massive-gene-therapy-manufacturing-bottleneck/
Jason Mast: Associate Editor
George Church and his graduate students have spent the last decade seeding startups on the razor’s edge between biology and science fiction: gene therapy to prevent aging, CRISPRed pigs that can be used to harvest organs for transplant, and home kits to test your poop for healthy or unhealthy bacteria. (OK, maybe they’re not all on that razor’s edge.)

But now a new spinout from the Department of Genetics’ second floor is tackling a far humbler problem — one that major company after major company has stumbled over as they tried to get cures for rare diseases and other gene therapies into the clinic and past regulators: How the hell do you build these?

CEO Lex Rovner of 64x Bio

“There’s a lot happening for new therapies but not enough attention around this problem,” Lex Rovner, who was a post-doc at Church’s lab from 2015 to 2018, told Endpoints News. “And if we don’t figure out how to fix this, many of these therapies won’t even reach patients.”

This week, with Church and a couple other prominent scientists as co-founders, Rovner launched 64x Bio to tackle one key part of the manufacturing bottleneck. They won’t be looking to retrofit plants or build gene therapy factories, as Big Pharma and big biotech are now spending billions to do. Instead, with $4.5 million in seed cash, they will try to engineer the individual cells that churn out a critical component of the therapies.

George Church
The goal is to build cells that are fine-tuned to do nothing but spit out the viral vectors that researchers and drug developers use to shuttle gene therapies into the body. Different vectors have different demands; 64x Bio will look to make efficient cellular factories for each.

“While a few general ways to increase vector production may exist, each unique vector serotype and payload poses a specific challenge,” Church said in an emailed statement. “Our platform enables us to fine tune custom solutions for these distinct combinations that are particularly hard to overcome.”

Before joining Church’s lab, Rovner did her graduate work at Yale, where she studied how to engineer bacteria to produce new kinds of protein for drugs or other purposes. And after leaving Church’s lab in 2018, she initially set out to build a manufacturing startup with a broad focus.

Yet as she spoke with hundreds of biotech executives on LinkedIn and in coffee shops around Cambridge, the same issue kept popping up: They liked their gene therapy technology in the lab but they didn’t know how to scale it up.

“Everyone kept saying the same thing,” Rovner said. “We basically realized there’s this huge problem.”

The issue would soon make headlines in industry publications: bluebird delaying the launch of Zynteglo, Novartis delaying the launch of Zolgensma in the EU, Axovant delaying the start of their Parkinson’s trial.

Part of the problem, Rovner said, is that gene therapies are delivered on viral vectors. You can build these vectors in mammalian cell lines by feeding them a small circular strand of DNA called a plasmid. The problem is that mammalian cells have, over billions of years, evolved tools and defenses precisely to avoid making viruses, lest the mammal they live in die of infection.

There are genetic mutations that can turn off some of the internal defenses and unleash a cell’s ability to produce virus, but they’re rare and hard to find. Other platforms, Rovner said, try to find these mutations by using CRISPR to knock out genes in different cells and then screening each of them individually, a process that can require hundreds of thousands of different 100-well plates, with each well containing a different group of mutant cells.

“It’s just not practical, and so these platforms never find the cells,” Rovner said.

64x Bio will try to find them by building a library of millions of mutant mammalian cells and then using a molecular “barcoding” technique to screen those cells in a single pool. The technique, Rovner said, lets them trace how much vector any given cell produces, allowing researchers to quickly identify super-producing cells and their mutations.
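The pooled, barcoded screen is easier to see with a toy model. The sketch below is purely illustrative, not 64x Bio's actual protocol, and all numbers are invented: each mutant clone carries a unique barcode, sequencing reads are sampled in proportion to vector yield, and the rare super-producers surface as outsized read counts in a single pooled experiment.

```python
import random
from collections import Counter

# Toy model of a pooled, barcoded producer screen (illustrative only; all
# numbers invented, and this is not 64x Bio's actual protocol).
random.seed(42)
NUM_CLONES = 10_000

# Each mutant clone gets a unique barcode and a baseline vector yield;
# assume a rare fraction (~0.1%) carries a mutation that boosts yield 50x.
yield_per_clone = {
    f"BC{i:05d}": random.lognormvariate(0, 0.5) * (50 if random.random() < 0.001 else 1)
    for i in range(NUM_CLONES)
}

# Sequencing the pooled vector output samples barcodes in proportion to
# how much vector each clone produced.
clones, weights = zip(*yield_per_clone.items())
reads = Counter(random.choices(clones, weights=weights, k=1_000_000))

# Super-producers surface as outsized read counts in one pooled experiment,
# replacing clone-by-clone screening across hundreds of thousands of wells.
for barcode, count in reads.most_common(5):
    print(barcode, count)
```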

The technology was developed partially in-house but draws from IP at Harvard and the Wyss Institute. Harvard’s Pam Silver and Wyss’s Jeffrey Way are co-founders.

The company is now based in SoMa in San Francisco. With the seed cash from Fifty Years, Refactor and First Round Capital, Rovner is recruiting and looking to raise a Series A soon. They’re in talks with pharma and biotech partners, while they try to validate the first preclinical and clinical applications.

Gene therapy is one focus, but Rovner said the platform works for anything that involves viral vector, including vaccines and oncolytic viruses. You just have to find the right mutation.

“It’s the rare cell you’re looking for,” she said.

AUTHOR
Jason Mast
Associate Editor
jason@endpointsnews.com
@JasonMMast

In 2005 Church launched the Personal Genome Project, with the goal of sequencing and sharing the DNA of 100,000 volunteers. With an open-source database of that size, he believes, researchers everywhere will be able to meaningfully pursue the critical task of correlating genetic patterns with physical traits, illnesses, and exposure to environmental factors to find new cures for diseases and to gain basic insights into what makes each of us the way we are. Church, tagged as subject hu43860C, was first in line for testing. Since then, more than 13,000 people in the U.S., Canada, and the U.K. have volunteered to join him, helping to establish what he playfully calls the Facebook of DNA.

Church has made a career of defying the impossible. Propelled by the dizzying speed of technological advancement, the Personal Genome Project is just one of his many attempts to overcome obstacles standing between him and the future.

“It’s not for everyone,” he says. “But I see a trend here. Openness has changed since many of us were young. People didn’t use to talk about sexuality or cancer in polite society. This is the Facebook generation.” If individuals were told which diseases or medical conditions they were genetically predisposed to, they could adjust their behavior accordingly, he reasoned. Although universal testing still isn’t practical today, the cost of sequencing an individual genome has dropped dramatically in recent years, from about $7 million in 2007 to as little as $1,000 today.

"It's all too easy to dismiss the future," he says. "People confuse what's impossible today with what's impossible tomorrow." Church intends to bring that future about, especially through the emerging discipline of "synthetic" biology. The basic idea behind synthetic biology, he explained, was that natural organisms could be reprogrammed to do things they wouldn't normally do, things that might be useful to people. In pursuit of this, researchers had learned not only how to read the genetic code of organisms but also how to write new code and insert it into organisms. Besides making plastic, microbes altered in this way had produced carpet fibers, treated wastewater, generated electricity, manufactured jet fuel, created hemoglobin, and fabricated new drugs. But this was only the tip of the iceberg, Church wrote. The same technique could also be used on people.

“Every cell in our body, whether it’s a bacterial cell or a human cell, has a genome,” he says. “You can extract that genome—it’s kind of like a linear tape—and you can read it by a variety of methods. Similarly, like a string of letters that you can read, you can also change it. You can write, you can edit it, and then you can put it back in the cell.”

This April, the Broad Institute, where Church holds a faculty appointment, was awarded a patent for a new method of genome editing called CRISPR (clustered regularly interspaced short palindromic repeats), which Church says is one of the most effective tools ever developed for synthetic biology. By studying the way that certain bacteria defend themselves against viruses, researchers figured out how to precisely cut DNA at any location on the genome and insert new material there to alter its function. Last month, researchers at MIT announced they had used CRISPR to cure mice of a rare liver disease that also afflicts humans. At the same time, researchers at Virginia Tech said they were experimenting on plants with CRISPR to control salt tolerance, improve crop yield, and create resistance to pathogens.

The possibilities for CRISPR technology seem almost limitless, Church says. If researchers have stored a genetic sequence in a computer, they can order a robot to produce a piece of DNA from the data. That piece can then be put into a cell to change the genome. Church believes that CRISPR is so promising that last year he co-founded a genome-editing company, Editas, to develop drugs for currently incurable diseases.
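To make the "sequence stored in a computer" idea concrete, here is a minimal sketch of one small computational step in Cas9 guide design: scanning a stored DNA sequence for 20-nucleotide protospacers adjacent to an NGG PAM. The demo sequence is made up, and real guide-design pipelines also score off-targets, GC content, and secondary structure; this only illustrates the principle.

```python
def find_guides(seq: str, guide_len: int = 20):
    """Scan a stored DNA sequence for SpCas9 guide candidates: a 20-nt
    protospacer immediately 5' of an NGG PAM. Simplified sketch; real
    pipelines also score off-targets, GC content, and structure."""
    seq = seq.upper()
    hits = []
    for i in range(guide_len, len(seq) - 2):
        # PAM = N (any base at seq[i]) followed by GG
        if seq[i + 1:i + 3] == "GG":
            hits.append((i - guide_len, seq[i - guide_len:i], seq[i:i + 3]))
    return hits

# Demo on a made-up sequence "stored in a computer".
demo = "ATGGCGTACGATCGATTACCGGTAGCTAGCTAGGCTAGCATCGGAGCTAGG"
for pos, protospacer, pam in find_guides(demo):
    print(pos, protospacer, pam)
```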

Source: news.nationalgeographic.com

See on Scoop.it – Cardiovascular and vascular imaging

Read Full Post »

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson

[Figure: Life-cycle of Science 2.0]

Curators and Writer: Stephen J. Williams, Ph.D. with input from Curators Larry H. Bernstein, MD, FCAP, Dr. Justin D. Pearlman, MD, PhD, FACC and Dr. Aviva Lev-Ari, PhD, RN

(This discussion is part of a three-part series that includes:

Using Scientific Content Curation as a Method for Validation and Biocuration

Using Scientific Content Curation as a Method for Open Innovation)

 

Every month I get my Wired magazine (yes, in hard print; I still like to turn pages manually, plus I don't mind if I get grease or wing sauce on my magazine rather than on my e-reader), and I always love reading articles written by Clive Thompson. He has a certain flair for understanding the techno world we live in and the human/technology interaction, writing about interesting ways in which we almost inadvertently integrate new technologies into our day-to-day living, generating new entrepreneurship and new value.

An October 2013 Wired article by Clive Thompson, entitled "How Successful Networks Nurture Good Ideas: Thinking Out Loud," describes how the voluminous writings, postings, tweets, and sharing on social media are fostering connections between people and ideas which previously had not existed. The article was drawn from Thompson's book Smarter Than You Think: How Technology Is Changing Our Minds for the Better. Tom Peters also commented on the article in his blog (see here).

Clive gives a wonderful example of Ory Okolloh, a young Kenyan-born law student who, after becoming frustrated with the lack of coverage of problems back home, started a blog about Kenyan politics. Her blog not only got interest from movie producers who were documenting female bloggers but also gained the interest of fellow Kenyans who, during the upheaval after the 2007 Kenyan elections, helped Ory to develop a Google map for reporting violence (http://www.ushahidi.com/), which eventually became a global organization using open-source technology to effect crisis management. There are a multitude of examples of how networks and the conversations within these circles are fostering new ideas. As Clive states in the article:

 

Our ideas are PRODUCTS OF OUR ENVIRONMENT.

They are influenced by the conversations around us.

The article also got me thinking about how Science 2.0 and the internet are changing how scientists contribute, share, and make connections to produce new and transformative ideas.

But HOW MUCH Knowledge is OUT THERE?

 

Clive’s article listed some amazing facts about the mountains of posts, tweets, words etc. out on the internet EVERY DAY, all of which exemplifies the problem:

  • 154.6 billion EMAILS per DAY
  • 400 million TWEETS per DAY
  • 1 million BLOG POSTS (including this one) per DAY
  • 2 million COMMENTS on WordPress per DAY
  • 16 million WORDS on Facebook per DAY
  • TOTAL 52 TRILLION WORDS per DAY

As he estimates, this would be 520 million books per DAY (assuming an average book of 100,000 words).
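The book arithmetic checks out; a quick back-of-envelope calculation confirms it (the 100,000-words-per-book figure is Thompson's assumption):

```python
# Back-of-envelope check of the books-per-day figure.
words_per_day = 52e12        # 52 trillion words per day (Thompson's total)
words_per_book = 100_000     # assumed average book length
print(f"{words_per_day / words_per_book:,.0f} books per day")  # 520,000,000
```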

A LOT of INFO. But as he suggests, it is not the volume but how we create and share this information that is critical; as the science fiction writer Theodore Sturgeon noted, "Ninety percent of everything is crap" (AKA Sturgeon's Law).

 

Internet Live Stats shows how congested the internet is each day (http://www.internetlivestats.com/). Needless to say, Clive's numbers are a bit off. As of the writing of this article:

 

  • 2.9 billion internet users
  • 981 million websites (only 25,000 hacked today)
  • 128 billion emails
  • 385 million Tweets
  • > 2.7 million BLOG posts today (including this one)

 

The Good, The Bad, and the Ugly of the Scientific Internet (The Wild West?)

 

So how many science blogs are out there? Back in 2008, "grrlscientist" asked this question and turned up a total of 19,881 blogs; however, most were "pseudoscience" blogs not written by PhD- or MD-level scientists. A deeper search on Technorati using the search term "scientist PhD" turned up about 2,000 written by trained scientists.

So granted, there is a lot of good, bad, and ugly when it comes to scientific information on the internet!

I recently re-posted, on this site, a great example of how bad science and medicine can get propagated throughout the internet:

http://pharmaceuticalintelligence.com/2014/06/17/the-gonzalez-protocol-worse-than-useless-for-pancreatic-cancer/

 

and in a Nature report: Stem cells: Taking a stand against pseudoscience

http://www.nature.com/news/stem-cells-taking-a-stand-against-pseudoscience-1.15408

Drs. Elena Cattaneo and Gilberto Corbellini document their long, hard fight against false and unvalidated medical claims made by some "clinicians" about the utility and medical benefits of certain stem-cell therapies, sacrificing their time to debunk medical pseudoscience.

 

Using Curation and Science 2.0 to build Trusted, Expert Networks of Scientists and Clinicians

 

Establishing networks of trusted colleagues has been a cornerstone of scientific discourse for centuries. For example, in the mid-1640s, the Royal Society began as:

"a meeting of natural philosophers to discuss promoting knowledge of the natural world through observation and experiment", i.e. science. The Society met weekly to witness experiments and discuss what we would now call scientific topics. The first Curator of Experiments was Robert Hooke.

– from The History of the Royal Society

 

[Image: Royal Society coat of arms]

The Royal Society of London for Improving Natural Knowledge.

(photo credit: Royal Society)

(Although one wonders why they met "incognito")

Indeed, as discussed in "Science 2.0/Brainstorming" by the originators of OpenWetWare (an open-source science-notebook platform designed to foster open innovation), new search and aggregation tools are making it easier to find, contribute, and share information with interested individuals. This paradigm is the basis for the shift from Science 1.0 to Science 2.0. Science 2.0 attempts to remedy current drawbacks that hinder rapid and open scientific collaboration and discourse, including:

  • Slow time frame of current publishing methods: reviews can take years to fashion, leading to outdated material
  • One-dimensional information dissemination: peer review, highly polished work, conferences
  • Current publishing does not encourage open feedback and review
  • Articles edited for print do not take advantage of new web-based features, including tagging, search-engine optimization, interactive multimedia, and hyperlinks
  • Published data and methodology are often incomplete
  • Published data are not always available in formats readily accessible across platforms: gene lists are now mandated to be supplied as files, but other data need not be

In short, current publishing is slow, closed, and one-dimensional; curation, as discussed below, can address each of these drawbacks.

 

Curation in the Sciences: View from Scientific Content Curators Larry H. Bernstein, MD, FCAP, Dr. Justin D. Pearlman, MD, PhD, FACC and Dr. Aviva Lev-Ari, PhD, RN

Curation is an active filtering of the web's immense amount of relevant and irrelevant content, including the peer-reviewed literature found by such means. As a result, content may be disruptive. However, in doing good curation, one does more than simply assign value by presentation of creative work in any category. Great curators comment and share experience across content, authors, and themes. Great curators may see patterns others don't, or may challenge or debate complex and apparently conflicting points of view. Answers to specifically focused questions come from the hard work of many in laboratory settings creatively establishing answers to definitive questions, each a part of the larger knowledge-base of reference. There are those rare "Einsteins" who imagine a whole universe, unlike the three blind men of the Sufi tale: one held the tail, another the trunk, another the ear, and they all said "this is an elephant!"
In my reading, I learn that the optimal ratio of curation to creation may be as high as 90% curation to 10% creation. Creating content is expensive. Curation, by comparison, is much less expensive.

– Larry H. Bernstein, MD, FCAP

Curation is Uniquely Distinguished by the Historical Exploratory Ties that Bind – Larry H. Bernstein, MD, FCAP

The explosion of information by numerous media, hardcopy and electronic, written and video, has created difficulties tracking topics and tying together relevant but separated discoveries, ideas, and potential applications. Some methods to help assimilate diverse sources of knowledge include a content expert preparing a textbook summary, a panel of experts leading a discussion or think tank, and conventions moderating presentations by researchers. Each of those methods has value and an audience, but they also have limitations, particularly with respect to timeliness and pushing the edge. In the electronic data age, there is a need for further innovation, to make synthesis, stimulating associations, synergy and contrasts available to audiences in a more timely and less formal manner. Hence the birth of curation. Key components of curation include expert identification of data, ideas and innovations of interest, expert interpretation of the original research results, integration with context, digesting, highlighting, correlating and presenting in novel light.

– Justin D. Pearlman, MD, PhD, FACC, from The Voice of Content Consultant on the Methodology of Curation, in Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation: The Art of Scientific & Medical Curation

 

In Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison, Drs. Larry Bernstein and Aviva Lev-Ari liken the medical and scientific curation process to the curation of musical works into a thematic program:

 

  • Work of original music curation and performance
  • Music review and critique as a curation
  • Work of original expression: the methodology of curation in the context of medical research findings, exposition of synthesis, and interpretation of the significance of the results to clinical care

… leading to new, curated, and collaborative works by networks of experts to generate (in this case) ebooks on the most significant trends and interpretations of scientific knowledge as they relate to medical practice.

 

In Summary: How Scientific Content Curation Can Help

 

Given the aforementioned problems of:

I. the complex and rapid deluge of scientific information,
II. the need for a collaborative, open environment to produce transformative innovation, and
III. the need for alternative ways to disseminate scientific findings,

CURATION MAY OFFER SOLUTIONS:

I. Curation exists beyond the review: curation decreases the time needed to assess current trends and adds multiple insights and analyses WITH an underlying METHODOLOGY (discussed below), while NOT acting as mere reiteration or regurgitation.

II. Curation provides insights from the WHOLE scientific community on multiple WEB 2.0 platforms.

III. Curation makes use of new computational and web-based tools to provide interoperability of data and reporting of findings (shown in the examples below).

Therefore, a discussion follows on methodologies, definitions of best practices, and tools developed to assist the content curation community in this endeavor.

Methodology in Scientific Content Curation as Envisioned by Aviva Lev-Ari, PhD, RN

 

At Leaders in Pharmaceutical Business Intelligence, site owner and chief editor Aviva Lev-Ari, PhD, RN has been developing a strategy "for the facilitation of Global access to Biomedical knowledge rather than the access to sheer search results on Scientific subject matters in the Life Sciences and Medicine". According to Aviva, "for the methodology to attain this complex goal it is to be dealing with popularization of ORIGINAL Scientific Research via Content Curation of Scientific Research Results by Experts, Authors, Writers using the critical thinking process of expert interpretation of the original research results." The following post:

Cardiovascular Original Research: Cases in Methodology Design for Content Curation and Co-Curation

 

http://pharmaceuticalintelligence.com/2013/07/29/cardiovascular-original-research-cases-in-methodology-design-for-content-curation-and-co-curation/

demonstrates two examples of how content co-curation attempts to achieve this aim and develop networks of scientist and clinician curators to aid in the active discussion of scientific and medical findings, and how scientific content curation serves as a means of critique offering a "new architecture for knowledge". Indeed, popular search engines such as Google and Yahoo, and even scientific search engines such as NCBI's PubMed and the OVID search engine, rely on keywords and Boolean algorithms, which has created a need for more context-driven scientific search and discourse.
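As an illustration of that keyword-and-Boolean style of search, the snippet below runs a Boolean query against PubMed through NCBI's public E-utilities API; the query string itself is an arbitrary example:

```python
import requests

# Boolean keyword search against PubMed via NCBI's public E-utilities API.
# The query string is an arbitrary example.
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": "(gene therapy[Title]) AND (manufacturing OR vector production)",
    "retmode": "json",
    "retmax": 5,
}
result = requests.get(url, params=params, timeout=30).json()["esearchresult"]
print("total hits:", result["count"])
print("top PMIDs:", result["idlist"])
```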

In Science and Curation: the New Practice of Web 2.0, Célya Gruson-Daniel (@HackYourPhd) states:

To address this need, human intermediaries, empowered by the participatory wave of web 2.0, naturally started narrowing down the information and providing an angle of analysis and some context. They are bloggers, regular Internet users or community managers – a new type of profession dedicated to the web 2.0. A new use of the web has emerged, through which the information, once produced, is collectively spread and filtered by Internet users who create hierarchies of information.

…where Célya considers curation an essential practice for managing open science and this new style of research.

As mentioned above, Dr. Lev-Ari's article presents two examples of how content curation expanded thought and discussion, eventually yielding new ideas:

  1. Curator edifies content through an analytic process = NEW form of writing and organization leading to new interconnections of ideas = NEW INSIGHTS

Evidence: curation methodology leading to new insights for biomarkers

  2. Same as #1, but with multiple players (experts) each bringing unique insights, perspectives, and skills, yielding new research = NEW LINE of CRITICAL THINKING

Evidence: co-curation methodology among cardiovascular experts leading to the cardiovascular series of ebooks

[Figure: The Life Cycle of Science 2.0]

The Life Cycle of Science 2.0. Due to Web 2.0, new paradigms of scientific collaboration are rapidly emerging. Originally, scientific discoveries were produced by individual laboratories or "scientific silos," where the main methods of communication were peer-reviewed publication, meeting presentations, and ultimately news outlets and multimedia. In the digital era, data were organized for literature search and biocurated databases. In the era of social media and Web 2.0, a group of scientifically and medically trained "curators" organizes the piles of digitally generated data and fits them into an organizational structure that can be shared, communicated, and analyzed in a holistic approach, launching new ideas due to changes in the organizational structure of data and in data analytics.

 

The result, in this case, is a collaborative written work beyond the scope of a review. Currently, review articles are written by experts in the field and summarize the state of a research area. However, using collaborative, trusted networks of experts, the result is a real-time synopsis and analysis of the field, with the goal in mind to

INCREASE THE SCIENTIFIC CURRENCY.

For a detailed description of the methodology, please see Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation: The Art of Scientific & Medical Curation

 

In her paper Curating e-Science Data, Maureen Pennock, from the British Library, emphasized the importance of a diligent, validated, reproducible, and cost-effective methodology for curation by e-science communities over the 'Grid':

“The digital data deluge will have profound repercussions for the infrastructure of research and beyond. Data from a wide variety of new and existing sources will need to be annotated with metadata, then archived and curated so that both the data and the programmes used to transform the data can be reproduced for use in the future. The data represent a new foundation for new research, science, knowledge and discovery”

— JISC Senior Management Briefing Paper, The Data Deluge (2004)

 

As she states, proper data and content curation is important for:

  • Post-analysis
  • Data and research result reuse for new research
  • Validation
  • Preservation of data in newer formats to prolong life-cycle of research results

However, she laments the lack of:

  • Funding for such efforts
  • Training
  • Organizational support
  • Monitoring
  • Established procedures

 

Tatiana Aders wrote a nice article based on an interview with Microsoft's Robert Scoble, in which he emphasized the need for curation in a world where "Twitter is the replacement of the Associated Press Wire Machine" and new technology platforms are knocking out old platforms at a rapid pace. He also notes that curation is a social art form whose primary concerns are understanding an audience and a niche.

Indeed, part of the reason the need for curation is unmet, as Mark Carrigan writes, is the lack of appreciation by academics of the utility of tools such as Pinterest, Storify, and Pearltrees to effectively communicate and build collaborative networks.

And teacher Nancy White, in her article "Understanding Content Curation" on her blog Innovations in Education, shows examples of how curation is an educational tool for students and teachers, demonstrating that students need to CONTEXTUALIZE what they collect to add enhanced value, using higher mental processes such as:

  • Knowledge
  • Comprehension
  • Application
  • Analysis
  • Synthesis
  • Evaluation

A GREAT table about the differences between Collecting and Curating, by Nancy White: http://d20innovation.d20blogs.org/2012/07/07/understanding-content-curation/

University of Massachusetts Medical School has aggregated some useful curation tools at http://esciencelibrary.umassmed.edu/data_curation

Many of these tools relate to biocuration and database building, but the common idea is curating data with indexing, analyses, and contextual value to provide an audience with NETWORKS OF NEW IDEAS.

See here for a curation on how networks foster knowledge, by Erika Harrison on Scoop.it

(http://www.scoop.it/t/mobilizing-knowledge-through-complex-networks)

 

“Nowadays, any organization should employ network scientists/analysts who are able to map and analyze complex systems that are of importance to the organization (e.g. the organization itself, its activities, a country’s economic activities, transportation networks, research networks).”

– Andrea Carafa, insight from the World Economic Forum New Champions 2012, "Power of Networks"

 

Creating Content Curation Communities: Breaking Down the Silos!

 

An article by Dr. Dana Rotman, "Facilitating Scientific Collaborations Through Content Curation Communities," highlights how scientific information resources, traditionally created and maintained by paid professionals, are being crowdsourced to professionals and nonprofessionals in what she terms "content curation communities": groups of professional and nonprofessional volunteers who create, curate, and maintain the various scientific database tools we use, such as Encyclopedia of Life, ChemSpider (for the Slideshare see here), biowikipedia, etc. Although very useful and openly available, these projects create their own challenges, such as:

  • information integration (various types of data and formats)
  • social integration (marginalized by scientific communities, no funding, no recognition)

The authors set forth some ways to overcome these challenges of the content curation community, including:

  1. standardization in practices
  2. visualization to document contributions
  3. emphasizing role of information professionals in content curation communities
  4. maintaining quality control to increase respectability
  5. recognizing participation to professional communities
  6. proposing funding/national meeting – Data Intensive Collaboration in Science and Engineering Workshop

A few great presentations and papers from the 2012 DICOSE meeting are found below.

Judith M. Brown, Robert Biddle, Stevenson Gossage, Jeff Wilson & Steven Greenspan. Collaboratively Analyzing Large Data Sets using Multitouch Surfaces. (PDF) NotesForBrown

 

Bill Howe, Cecilia Aragon, David Beck, Jeffrey P. Gardner, Ed Lazowska, Tanya McEwen. Supporting Data-Intensive Collaboration via Campus eScience Centers. (PDF) NotesForHowe

 

Kerk F. Kee & Larry D. Browning. Challenges of Scientist-Developers and Adopters of Existing Cyberinfrastructure Tools for Data-Intensive Collaboration, Computational Simulation, and Interdisciplinary Projects in Early e-Science in the U.S.. (PDF) NotesForKee

 

Ben Li. The mirages of big data. (PDF) NotesForLiReflectionsByBen

 

Betsy Rolland & Charlotte P. Lee. Post-Doctoral Researchers’ Use of Preexisting Data in Cancer Epidemiology Research. (PDF) NoteForRolland

 

Dana Rotman, Jennifer Preece, Derek Hansen & Kezia Procita. Facilitating scientific collaboration through content curation communities. (PDF) NotesForRotman

 

Nicholas M. Weber & Karen S. Baker. System Slack in Cyberinfrastructure Development: Mind the Gaps. (PDF) NotesForWeber

Indeed, the movement from Science 1.0 to Science 2.0 originated because these "silos" had frustrated many scientists, resulting in changes in publishing (Open Access) but also in the communication of protocols (online protocol sites and notebooks like OpenWetWare and BioProtocols Online) and in data and material registries (CGAP and tumor banks). Some examples are given below.

Open Science Case Studies in Curation

1. Open Science Project from Digital Curation Center

This project looked at what motivates researchers to work in an open manner with regard to their data, results and protocols, and whether advantages are delivered by working in this way.

The case studies consider the benefits of and barriers to using 'open science' methods; they were carried out between November 2009 and April 2010 and published in the report Open to All? Case Studies of Openness in Research. The appendices to the main report (PDF) include a literature review, a framework for characterizing openness, a list of examples, and the interview schedule and topics. Some of the case study participants kindly agreed to us publishing the transcripts. This zip archive contains transcripts of interviews with researchers in astronomy, bioinformatics, chemistry, and language technology.

 

see: Pennock, M. (2006). "Curating e-Science Data". DCC Briefing Papers: Introduction to Curation. Edinburgh: Digital Curation Centre. Handle: 1842/3330. Available online: http://www.dcc.ac.uk/resources/briefing-papers/introduction-curation

 

2. cBIO – cBio's biological data curation group developed and operates using a methodology called CIMS, the Curation Information Management System. CIMS is a comprehensive curation and quality control process that efficiently extracts information from publications.

 

3. NIH Topic Maps – This website provides a database and web-based interface for searching and discovering the types of research awarded by the NIH. The database uses automated, computer generated categories from a statistical analysis known as topic modeling.

 

4. SciKnowMine (USC) – We propose to create a framework to support biocuration called SciKnowMine (after 'Scientific Knowledge Mine'): cyberinfrastructure that supports biocuration through the automated mining of text, images, and other amenable media at the scale of the entire literature.

 

5. OpenWetWare – OpenWetWare is an effort to promote the sharing of information, know-how, and wisdom among researchers and groups who are working in biology and biological engineering. If you would like edit access, would be interested in helping out, or want your lab website hosted on OpenWetWare, please join us. OpenWetWare is managed by the BioBricks Foundation. They also have a wiki about Science 2.0.

6. LabTrove – a lightweight, web-based laboratory "blog" as a route toward a marked-up record of work in a bioscience research laboratory. The authors of a PLOS ONE article from the University of Southampton report the development of an open scientific lab notebook using a blogging strategy to share information.

7. OpenScience Project – The OpenScience project is dedicated to writing and releasing free and Open Source scientific software. We are a group of scientists, mathematicians and engineers who want to encourage a collaborative environment in which science can be pursued by anyone who is inspired to discover something new about the natural world.

8. Open Science Grid is a multi-disciplinary partnership to federate local, regional, community and national cyberinfrastructures to meet the needs of research and academic communities at all scales.

 

9. Some ongoing biomedical knowledge (curation) projects at ISI

IICurate
This project is concerned with developing a curation and documentation system for information integration in collaboration with the II Group at ISI as part of the BIRN.

BioScholar
Its primary purpose is to provide software for experimental biomedical scientists that would permit a single scientific worker (at the level of a graduate student or postdoctoral worker) to design, construct, and manage a shared knowledge repository for a research group derived from a local store of PDF files. This project is funded by NIGMS from 2008-2012 (RO1-GM083871).

10. Tools useful for scientific content curation

 

Research Analytic and Curation Tools from University of Queensland

 

Thomson Reuters information curation services for pharma industry

 

Microblogs as a way to communicate information about HPV infection among clinicians and patients; use of the Chinese microblog Sina Weibo as a communication tool

 

VIVO for scientific communities – In order to connect information about research activities across institutions and make it available to others, taking into account smaller players in the research landscape and addressing their need for specific information (for example, by providing non-conventional research objects), the open-source software VIVO, which provides research information as linked open data (LOD), is used in many countries. So-called VIVO harvesters collect research information that is freely available on the web and convert the collected data in conformity with LOD standards. The VIVO ontology builds on prevalent LOD namespaces and, depending on the needs of the specialist community concerned, can be expanded.

 

 

11. Examples of scientific curation in different areas of Science/Pharma/Biotech/Education

 

From Science 2.0 to Pharma 3.0 Q&A with Hervé Basset

http://digimind.com/blog/experts/pharma-3-0/

An interview with Hervé Basset, specialist librarian in the pharmaceutical industry and owner of the blog "Science Intelligence", about the inspiration behind his recent book entitled From Science 2.0 to Pharma 3.0, published by Chandos Publishing and available on Amazon, and about how health-care companies need a social media strategy to communicate with and convince the health-care consumer, not just the practitioner.

 

Thomson Reuters and NuMedii Launch Ground-Breaking Initiative to Identify Drugs for Repurposing. Companies leverage content, Big Data analytics and expertise to improve success of drug discovery

 

Content Curation as a Context for Teaching and Learning in Science

 

#OZeLIVE Feb2014

http://www.youtube.com/watch?v=Ty-ugUA4az0

Creative Commons license

 

DigCCur: A graduate-level program initiated by the University of North Carolina to instruct future digital curators in science and other subjects

 

Syracuse University offering a program in eScience and digital curation

 

Curation Tips from TED talks and tech experts

Steven Rosenbaum from Curation Nation

http://www.youtube.com/watch?v=HpncJd1v1k4

 

Pawan Deshpande from Curata on how content curation communities evolve and what makes good content curation:

http://www.youtube.com/watch?v=QENhIU9YZyA

 

How the Internet of Things is Promoting the Curation Effort

Update by Stephen J. Williams, PhD 3/01/19

Up till now, curation efforts like wikis (Wikipedia, Wikimedicine, WormBase, GenBank, etc.) have been supported by a largely voluntary army of citizens, scientists, and data enthusiasts. I am sure all have seen the requests for donations to help keep Wikipedia and its related projects up and running. One of the more obscure sister projects of Wikipedia, Wikidata, wants to curate and represent all information in such a way that machines, computers, and humans alike can converse in it. An army of about 4 million contributors creates Wiki entries and maintains these databases.

Enter the Age of the Personal Digital Assistants (Hellooo Alexa!)

In a March 2019 WIRED article, "Encyclopedia Automata: Where Alexa Gets Its Information," senior WIRED writer Tom Simonite reports on the need for new types of data structures and on how curated databases are essential to the new fields of AI, enabling personal digital assistants like Alexa or Google Assistant to decipher the meaning of user requests.

As Mr. Simonite noted, many of our libraries of knowledge are encoded in "an ancient technology largely opaque to machines: prose." Search engines like Google do not have a problem with a question asked in prose, as they just have to find relevant links to pages. Yet this is a problem for Google Assistant, for instance, as machines can't quickly extract meaning from the internet's mess of "predicates, complements, sentences, and paragraphs. It requires a guide."

Enter Wikidata. According to founder Denny Vrandecic:

"Language depends on knowing a lot of common sense, which computers don't have access to."

A Wikidata entry (of which there are about 60 million) codes every concept and item with a numeric identifier, the QID. These codes are integrated with tags (like the hashtags used on Twitter, or the tags in WordPress used for search engine optimization) so computers can identify patterns of recognition between the codes.
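For the curious, Wikidata's public EntityData endpoint makes the QID structure easy to inspect; the sketch below fetches the well-known entry Q42 (Douglas Adams) and prints its machine-readable pieces:

```python
import requests

# Fetch one Wikidata entry by QID from the public EntityData endpoint.
# Q42 is the entry for Douglas Adams.
qid = "Q42"
url = f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"
entity = requests.get(url, timeout=30).json()["entities"][qid]

print(entity["labels"]["en"]["value"])        # human-readable label
print(entity["descriptions"]["en"]["value"])  # short description
# "claims" link this QID to other QIDs, which is what machines can traverse.
print(len(entity.get("claims", {})), "properties with statements")
```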

Human entry into these databases is critical, as we add new facts and, in particular, meaning to each of these items. Otherwise machines have trouble deciphering our meaning, as with Apple's Siri, whose users complained of dumb algorithms misinterpreting their requests.

The knowledge of future machines could be shaped by you and me, not just tech companies and PhDs.

But this effort needs money

Wikimedia's executive director, Katherine Maher, has prodded and cajoled these megacorporations for tapping the free resources of the Wikis. In response, Amazon and Facebook have donated millions to the Wikimedia projects, and Google recently gave $3.1 million in donations.

 

Future postings on the relevance and application of scientific curation will include:

Using Scientific Content Curation as a Method for Validation and Biocuration

 

Using Scientific Content Curation as a Method for Open Innovation

 

Other posts on this site related to Content Curation and Methodology include:

The growing importance of content curation

Data Curation is for Big Data what Data Integration is for Small Data

6 Steps to More Effective Content Curation

Stem Cells and Cardiac Repair: Content Curation & Scientific Reporting

Cancer Research: Curations and Reporting

Cardiovascular Diseases and Pharmacological Therapy: Curations

Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation The Art of Scientific & Medical Curation

Exploring the Impact of Content Curation on Business Goals in 2013

Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison

conceived: NEW Definition for Co-Curation in Medical Research

The Young Surgeon and The Retired Pathologist: On Science, Medicine and HealthCare Policy – The Best Writers Among the WRITERS

Reconstructed Science Communication for Open Access Online Scientific Curation

 

 

Read Full Post »

Cisco Projects the Internet of Things to be a 14 Trillion Dollar Industry

Reporter: Aviva Lev-Ari, PhD, RN


Twenty years ago, there were about 3 million devices connected to the Internet. By the end of this decade, Gartner estimates that there will be 26 billion devices on the global network.

 

This can only mean one thing: We’re living in the Internet of Things.

With anything and everything — including trees, insects, pill bottles, and sinks — going online, Cisco projects the Internet of Things to be a $14 trillion revenue opportunity. Helping people remember their daily medicine with light-up bottle caps, preventing illegal logging, and monitoring traffic in real time are worthwhile goals. But they are point solutions. They don't resonate in our lives in ways that make it impossible to imagine how we lived without them.

 

In order for the Internet of Things to truly work, context about ourselves (think interests, location, intent) is required. Here are some reasons why the Internet of Things will only come to fruition if identity is incorporated into the user experience.

 

A wristband that measures your steps and heart rate is a helpful fitness tool. However, if that’s all it does, then it is nothing more than a tool, no matter how many other devices it can connect to. But what about a wristband that knows the wearer’s identity, understands his fitness routines, and tells the treadmill to speed up or slow down based on the wearer’s heart rate and exercise goals?
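A toy sketch makes the idea concrete. Everything here (UserProfile, adjust_speed, the thresholds) is hypothetical and invented for illustration; a real system would sit on a shared identity and authorization layer between devices.

```python
from dataclasses import dataclass

# Hypothetical sketch of the identity-aware wristband/treadmill loop.
# UserProfile, adjust_speed, and the thresholds are all invented for
# illustration; a real system would use a shared identity/auth layer.

@dataclass
class UserProfile:
    user_id: str
    target_hr: int        # heart rate (bpm) the wearer wants to train at
    max_speed_kmh: float  # safety cap from the wearer's fitness level

def adjust_speed(profile: UserProfile, current_hr: int, speed_kmh: float) -> float:
    """Nudge treadmill speed toward the wearer's target heart rate."""
    if current_hr > profile.target_hr + 10:
        return max(1.0, speed_kmh - 0.5)                    # slow down
    if current_hr < profile.target_hr - 10:
        return min(profile.max_speed_kmh, speed_kmh + 0.5)  # speed up
    return speed_kmh                                        # in the zone

alice = UserProfile("alice", target_hr=140, max_speed_kmh=12.0)
print(adjust_speed(alice, current_hr=155, speed_kmh=9.0))  # 8.5
print(adjust_speed(alice, current_hr=120, speed_kmh=9.0))  # 9.5
```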

 

This sort of personalized connectedness delivers true value and breeds customer loyalty by tapping into each user's unique situation and background. And it all starts with a deep understanding of users' identities.

 

In the Internet of Things, devices need to participate in a constant conversation with one another, with their owners' social feeds, and with outside fields of interest. Any device that relies on a one-time data dump will quickly become irrelevant. Connected devices need to be able not only to verify identity but also to be flexible enough to grow and adapt as new channels and data points emerge.

 

The Internet of Things should model the kind of tailored, identity-driven recommendations today's consumer is accustomed to receiving from leading brands like Amazon, Netflix, and Spotify. To compare and contrast, let's say you have a refrigerator that reorders eggs when you run out. Such a feature would be helpful, but likely would not deliver enough value to gain widespread adoption. However, if your refrigerator automatically shops for a recipe you just pinned, and recommends three new options for dinner based on what you have in the house and on your dinner party plans, that's extraordinary.

Source: blog.gigya.com

Read Full Post »

DALLAS, June 26, 2014 – The American Heart Association has announced its national officers for the 2014-15 fiscal year, which begins July 1.

Source: newsroom.heart.org

Read Full Post »

By switching off a single gene, scientists have converted human gastrointestinal cells into insulin-producing cells, demonstrating in principle that a drug could retrain cells inside a person’s GI tract to produce insulin. The finding raises the possibility that cells lost in type 1 diabetes may be more easily replaced through the reeducation of existing cells than through the transplantation of new cells created from embryonic or adult stem cells. The new research was reported in the online issue of the journal Nature Communications.

“People have been talking about turning one cell into another for a long time, but until now we hadn’t gotten to the point of creating a fully functional insulin-producing cell by the manipulation of a single target,” said the study’s senior author, Domenico Accili, MD, the Russell Berrie Foundation Professor of Diabetes (in Medicine) at Columbia University Medical Center (CUMC).

 

The finding raises the possibility that cells lost in type 1 diabetes may be more easily replaced through the reeducation of existing cells than through the transplantation of new cells created from embryonic or adult stem cells.

 

For nearly two decades, researchers have been trying to make surrogate insulin-producing cells for type 1 diabetes patients. In type 1 diabetes, the body’s natural insulin-producing cells are destroyed by the immune system.

 

Although insulin-producing cells can now be made in the lab from stem cells, these cells do not yet have all the functions of naturally occurring pancreatic beta cells.

 

This has led some researchers to try instead to transform existing cells in a patient into insulin-producers. Previous work by Dr. Accili’s lab had shown that mouse intestinal cells can be transformed into insulin-producing cells; the current Columbia study shows that this technique also works in human cells.

 

The Columbia researchers were able to teach human gut cells to make insulin in response to physiological circumstances by deactivating the cells’ FOXO1 gene. Accili and postdoctoral fellow Ryotaro Bouchi first created a tissue model of the human intestine with human pluripotent stem cells. Through genetic engineering, they then deactivated any functioning FOXO1 inside the intestinal cells. After seven days, some of the cells started releasing insulin and, equally important, only in response to glucose.

 

The team had used a comparable approach in its earlier, mouse study. In the mice, insulin made by gut cells was released into the bloodstream, worked like normal insulin, and was able to nearly normalize blood glucose levels in otherwise diabetic mice: New Approach to Treating Type I Diabetes? Columbia Scientists Transform Gut Cells into Insulin Factories. That work, which was reported in 2012 in the journal Nature Genetics, has since received independent confirmation from another group.

Source: www.sciencedaily.com

Read Full Post »

PERFUSE Registry: Diagnostic Accuracy Achieved with CT-Derived FFR Plus …

Source: www.tctmd.com

Read Full Post »

Intravenous immunoglobulin (IVIg) provides protection against endothelial cell …

Source: 7thspace.com

Read Full Post »

The study seeks to determine the accuracy of using anatomic and physiologic information measurable by computed tomography features of stenosis, plaque,…

Source: clinicaltrialsfeeds.org

Read Full Post »

Natural internal mammary-to-coronary artery bypass: ipsilateral but not contralateral bypasses reduce ischemia. http://t.co/QrqBIcJfJt

Source: circ.ahajournals.org

Read Full Post »

The present study was designed to investigate the expression changes of PPAR-alpha, -beta, -gamma and NF-kappa B in the hippocampus of rats with global cerebral ischemia/reperfusion injury (GCIRI) after treatment with …

Source: www.behavioralandbrainfunctions.com

Read Full Post »
