Posts Tagged ‘Computer Model’

Non-toxic antiviral nanoparticles with a broad spectrum of virus inhibition

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

Infectious diseases account for 20% of global deaths, with viruses responsible for over a third of these deaths (1). Lower respiratory infections and human immunodeficiency virus (HIV) infection are among the top ten causes of death worldwide, and both contribute significantly to health-care costs (2). Every year, newly emerging viruses (such as Ebola) add to the death toll. Vaccination is the most effective way to prevent viral infection, but vaccines exist for only a handful of viruses and are not available in all parts of the world (3). Once infection has occurred, antiviral medications are the only option; unfortunately, only a limited number of antiviral drugs have been approved. Broad-spectrum antiviral drugs that can act against a wide range of existing and emerging viruses are therefore critically needed.

The three types of treatments currently available are small molecules (such as nucleoside analogues and peptidomimetics), proteins that stimulate the immune system (such as interferon), and oligonucleotides (for example, fomivirsen). The primary targets include HIV, hepatitis B and C viruses, herpes simplex virus (HSV), human cytomegalovirus (HCMV), and influenza virus. These drugs act mainly on viral enzymes, which are essential for viral replication yet sufficiently different from host enzymes to allow selective action. Even so, the specificity of antivirals is far from perfect, because viruses rely on the biosynthetic machinery of the infected cell to reproduce, and such therapy therefore carries a widespread, inherent toxicity. Moreover, most viruses mutate rapidly because of their error-prone replication machinery and so frequently develop resistance (4). Finally, since antiviral substances are targeted at viral proteins, it is challenging to build broad-spectrum antivirals that can act on a wide range of phylogenetically and structurally different viruses.

Over the last decade, breakthroughs in nanotechnology have enabled scientists to develop highly specialized nanoparticles capable of traveling to specific cells within the human body. With the help of modern computer modeling, nanoparticles are now being designed that not only bind to a broad spectrum of destructive viruses but also destroy them.

An international team of researchers led by University of Illinois at Chicago chemistry professor Petr Kral developed novel antiviral nanoparticles that bind to a variety of viruses, including herpes simplex virus, human papillomavirus, respiratory syncytial virus, dengue virus, and lentiviruses. In contrast to conventional broad-spectrum antivirals, which merely prevent viruses from invading cells, the new nanoparticles destroy the viruses. The team’s findings have been published in the journal Nature Materials.

A molecular dynamics model showing a nanoparticle binding to the outer envelope of the human papillomavirus. (Credit: Petr Kral) https://today.uic.edu/files/2017/09/viralbindingcropped.png

The goal of this new study was to create an antiviral nanoparticle that could exploit the HSPG binding process (described below) not only to attach tightly to virus particles but also to destroy them. The work was carried out by a group of researchers ranging from biochemists to computer modeling experts, who iterated until the team arrived at a nanoparticle design that could, in principle, accurately target and kill individual virus particles.

For many viruses, the first step of infection is attachment to heparan sulfate proteoglycans (HSPG) on the cell surface. Some existing antiviral medications prevent infection by mimicking HSPG and binding to the virus. An important limitation of these antivirals is that this interaction is weak and, moreover, does not kill the virus.

Kral said

We knew how the nanoparticles should bind based on the overall composition of the HSPG-binding viral domains and the structures of the nanoparticles, but we did not understand why the various nanoparticles act so differently in terms of both bond strength and viral entry into cells

Kral and colleagues assisted in resolving these challenges and guiding the experimentalists in fine-tuning the nanoparticle design so that it performed better.

The researchers employed advanced computer modeling techniques to build structures of several target viruses and nanoparticles that are exact down to the position of each atom. A detailed grasp of the interactions between individual groups of atoms in the viruses and nanoparticles allowed the scientists to evaluate the strength and duration of prospective bonds between the two entities and to forecast how a bond could change over time and eventually destroy the virus.


Atomistic MD simulations of an L1 pentamer of HPV capsid protein with the small NP (2.4 nm core, 100 MUP ligands). The NP and the protein are shown by van der Waals (vdW) and ribbon representations respectively. In the protein, the HSPG binding amino acids are displayed by vdW representation.
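To illustrate the kind of atom-group bookkeeping such simulations rest on, here is a deliberately minimal sketch, assuming made-up coordinates and Lennard-Jones parameters, that sums pairwise nonbonded energies between a hypothetical nanoparticle ligand and a hypothetical HSPG-binding site. It is not the authors' code or methodology; production molecular dynamics engines also handle bonded terms, electrostatics, solvent, and time integration.

```python
# Toy illustration (not the authors' code) of scoring nanoparticle-virus
# contacts: sum pairwise Lennard-Jones energies between two groups of atoms.
# Coordinates and parameters below are invented for illustration only.
import math

def lj_energy(r: float, epsilon: float = 0.2, sigma: float = 3.5) -> float:
    """12-6 Lennard-Jones pair energy (kcal/mol) at separation r (angstroms)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def interaction_energy(group_a, group_b) -> float:
    """Total nonbonded energy between all atom pairs in two groups."""
    total = 0.0
    for xa, ya, za in group_a:
        for xb, yb, zb in group_b:
            r = math.dist((xa, ya, za), (xb, yb, zb))
            if r > 0.1:                      # skip overlapping positions
                total += lj_energy(r)
    return total

# Made-up coordinates standing in for a ligand tip and a viral binding pocket.
nanoparticle_ligand = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.5, 0.0)]
viral_hspg_site     = [(0.0, 4.0, 0.0), (1.5, 4.2, 0.3), (3.0, 4.1, -0.2)]

score = interaction_energy(nanoparticle_ligand, viral_hspg_site)
print(f"approximate binding score: {score:.2f} kcal/mol")
```

A more negative total in such a scheme indicates a more favorable contact, which is the intuition behind comparing candidate nanoparticle designs in silico before synthesizing them.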

Kral added

We were able to provide the design team with the data needed to construct a prototype antiviral of high efficacy and safety, which may be utilized to save lives

Following the development of a prototype nanoparticle design, the team conducted several in vitro experiments that demonstrated success in binding to, and eventually destroying, a wide spectrum of viruses, including herpes simplex virus, human papillomavirus, respiratory syncytial virus, dengue virus, and lentiviruses.

The research is still in its early phases, and further in vivo animal testing is needed to confirm the nanoparticles’ safety, but this is a promising new route toward efficient antiviral therapies that could save millions of people from devastating viral infections each year.

The National Centers of Competence in Research on Bio-Inspired Materials, the University of Turin, the Ministry of Education, Youth and Sports of the Czech Republic, the Leenards Foundation, National Science Foundation award DMR-1506886, and funding from the University of Texas at El Paso all contributed to this study.

Main Source

Cagno, V., Andreozzi, P., D’Alicarnasso, M., Silva, P. J., Mueller, M., Galloux, M., … & Stellacci, F. (2018). Broad-spectrum non-toxic antiviral nanoparticles with a virucidal inhibition mechanism. Nature Materials, 17(2), 195–203. https://www.nature.com/articles/nmat5053

Other Related Articles published in this Open Access Online Scientific Journal include the following:

Rare earth-doped nanoparticles applications in biological imaging and tumor treatment

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2020/10/04/rare-earth-doped-nanoparticles-applications-in-biological-imaging-and-tumor-treatment/

Nanoparticles Could Boost Effectiveness of Allergy Shots

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2019/05/25/nanoparticles-could-boost-effectiveness-of-allergy-shots/

Immunoreactivity of Nanoparticles

Author: Tilda Barliya PhD

https://pharmaceuticalintelligence.com/2012/10/27/immunoreactivity-of-nanoparticles/

Nanotechnology and HIV/AIDS Treatment

Author: Tilda Barliya, PhD

https://pharmaceuticalintelligence.com/2012/12/25/nanotechnology-and-hivaids-treatment/

Nanosensors for Protein Recognition, and gene-proteome interaction

Curator: Larry H Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/01/30/nanosensors-for-protein-recognition-and-gene-proteome-interaction/


Artificial Intelligence Versus the Scientist: Who Will Win?

Will DARPA Replace the Human Scientist: Not So Fast, My Friend!

Writer, Curator: Stephen J. Williams, Ph.D.


Last month’s Science article by Jia You, “DARPA Sets Out to Automate Research” [1], gave a glimpse of how science could be conducted in the future: without scientists. The article focused on the U.S. Defense Advanced Research Projects Agency (DARPA) program called “Big Mechanism”, a $45 million effort to develop computer algorithms that read scientific journal papers, with the ultimate goal of extracting enough information to design hypotheses and the next set of experiments,

all without human input.

The head of the project, artificial intelligence expert Paul Cohen, says the overall goal is to help scientists cope with the complexity of massive amounts of information. As Paul Cohen stated for the article:

“Just when we need to understand highly connected systems as systems, our research methods force us to focus on little parts.”

The Big Mechanism project aims to design computer algorithms that critically read journal articles, much as a scientist would, to determine what information they contain and how it contributes to the knowledge base.

As a proof of concept, DARPA is attempting to model Ras-mutation-driven cancers using previously published literature, in three main steps:

  1. Natural Language Processing: Machines read literature on cancer pathways and convert information to computational semantics and meaning

One team is focused on extracting details of experimental procedures, mining specific phraseology to judge a paper’s worth (for example, hedged phrases like ‘we suggest’ or ‘suggests a role in’ might be scored as weak evidence, whereas ‘we prove’ or ‘provide evidence’ might flag an article as worthwhile to curate); a rough sketch of this kind of phrase scoring follows the list below. Another team, led by a computational linguistics expert, will design systems to map the meanings of sentences.

  2. Integrate each piece of knowledge into a computational model representing the role of the Ras pathway in oncogenesis.
  3. Produce hypotheses and propose experiments, based on the knowledge base, which can be experimentally verified in the laboratory.
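The following is a rough, hypothetical sketch of the phrase-based evidence scoring mentioned in step 1. The phrase lists and weights are invented for illustration; the actual Big Mechanism systems are far more sophisticated than simple keyword matching.

```python
# Hypothetical sketch of phrase-based evidence scoring for curation:
# sentences containing assertive phrasing score higher than hedged ones.
# The phrase lists and weights are invented for illustration only.
WEAK_PHRASES = ["we suggest", "suggests a role in", "may indicate", "could be involved"]
STRONG_PHRASES = ["we prove", "provide evidence", "we demonstrate", "directly binds"]

def evidence_score(sentence: str) -> int:
    """Return +1 for each assertive phrase and -1 for each hedged phrase found."""
    text = sentence.lower()
    return (sum(1 for p in STRONG_PHRASES if p in text)
            - sum(1 for p in WEAK_PHRASES if p in text))

sentences = [
    "Our data provide evidence that KRAS directly binds effector X.",
    "This finding suggests a role in downstream signaling.",
]
for s in sentences:
    label = "curate" if evidence_score(s) > 0 else "flag as weak evidence"
    print(f"{label}: {s}")
```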

The Human No Longer Needed? Not So Fast, My Friend!

The problems the DARPA research teams are encountering include:

  • Need for data verification
  • Text mining and curation strategies
  • Incomplete knowledge base (past, current and future)
  • Molecular biology does not necessarily “require causal inference” as other fields do

Verification

Notice that this verification step (step 3) requires physical lab work, as do all other ‘omics strategies and computational biology projects. As with high-throughput microarray screens, verification is needed, usually by conducting qPCR or by validating interesting genes in a phenotypic (expression) system. In addition, there has been an ongoing issue surrounding the validity and reproducibility of some research studies and data.

See Importance of Funding Replication Studies: NIH on Credibility of Basic Biomedical Studies

Therefore as DARPA attempts to recreate the Ras pathway from published literature and suggest new pathways/interactions, it will be necessary to experimentally validate certain points (protein interactions or modification events, signaling events) in order to validate their computer model.

Text-Mining and Curation Strategies

The Big Mechanism project is starting very small, which reflects some of the challenges of scale in this project. Researchers were given only six-paragraph-long passages and a rudimentary model of the Ras pathway in cancer, and were then asked to automate a text-mining strategy to extract as much useful information as possible. Unfortunately, this strategy could be fraught with issues that frequently occur in the biocuration community, namely:

Manual or automated curation of scientific literature?

Biocurators, the scientists who painstakingly sort through the voluminous scientific literature to extract and then organize relevant data into accessible databases, have debated whether manual, automated, or a combination of both curation methods [2] achieves the highest accuracy for extracting the information needed to enter into a database. Abigail Cabunoc, a lead developer for the Ontario Institute for Cancer Research’s WormBase (a database of nematode genetics and biology) and Lead Developer at Mozilla Science Lab, noted on her blog, covering the lively debate on biocuration methodology at the Seventh International Biocuration Conference (#ISB2014), that the massive amount of information will require a Herculean effort regardless of the methodology.

Although I will have a future post on the advantages/disadvantages and tools/methodologies of manual vs. automated curation, there is a great article on researchinformation.info, “Extracting More Information from Scientific Literature”; also see “The Methodology of Curation for Scientific Research Findings” and “Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison” for manual curation methodologies, and “A MOD(ern) perspective on literature curation” for a nice workflow paper on the International Society for Biocuration site.

The Big Mechanism team decided on a fully automated approach to text-mine their limited literature set for relevant information; however, they were able to extract only 40% of the relevant information from these six paragraphs into the given model. Although the investigators were happy with this percentage, most biocurators, whether using a manual or automated method to extract information, would consider 40% a low success rate. Biocurators, regardless of method, have reported the ability to extract 70-90% of relevant information from the whole literature (for example, for the Comparative Toxicogenomics Database) [3-5].
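For context, the percentages quoted above are essentially recall figures. The minimal sketch below, using invented pathway relations rather than actual Big Mechanism output, shows how such a number is computed against a manually curated gold standard.

```python
# Minimal sketch of computing extraction recall against a gold standard.
# The relations below are invented placeholders, not Big Mechanism output.
gold_standard = {           # relations a human curator found in the passages
    ("KRAS", "activates", "RAF1"),
    ("RAF1", "phosphorylates", "MAP2K1"),
    ("MAP2K1", "phosphorylates", "MAPK1"),
    ("MAPK1", "regulates", "MYC"),
    ("NF1", "inhibits", "KRAS"),
}
extracted = {               # relations the automated pipeline recovered
    ("KRAS", "activates", "RAF1"),
    ("MAP2K1", "phosphorylates", "MAPK1"),
}

true_positives = extracted & gold_standard
recall = len(true_positives) / len(gold_standard)     # fraction of gold relations recovered
precision = len(true_positives) / len(extracted)      # fraction of extracted relations that are correct
print(f"recall = {recall:.0%}, precision = {precision:.0%}")   # recall = 40%, precision = 100%
```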

Incomplete Knowledge Base

In an earlier posting (actually a press release for our first e-book), I discussed the problem of the “data deluge” we are experiencing in the scientific literature, as well as the plethora of ‘omics experimental data that needs to be curated.

Tackling the problem of scientific and medical information overload


Figure. The number of papers listed in PubMed (disregarding reviews) during ten-year periods has steadily increased since 1970.

Analyzing and sharing the vast amount of scientific knowledge has never been so crucial to innovation in the medical field. The publication rate has steadily increased since the 1970s, with a 50% increase in the number of original research articles published from the 1990s to the past decade. This massive amount of biomedical and scientific information has presented the unique problem of information overload, and the critical need for methodology and expertise to organize, curate, and disseminate this diverse information for scientists and clinicians. Dr. Larry Bernstein, President of Triplex Consulting and previously chief of pathology at New York’s Methodist Hospital, concurs that “the academic pressures to publish, and the breakdown of knowledge into ‘silos’, has contributed to this knowledge explosion and although the literature is now online and edited, much of this information is out of reach to the very brightest clinicians.”
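As a side note, per-decade counts of the kind shown in the figure above can be approximated programmatically. The sketch below is a rough, hypothetical example using NCBI's E-utilities esearch endpoint; the query syntax and JSON field names should be double-checked against the current E-utilities documentation, and NCBI recommends supplying an email or API key for anything beyond casual use.

```python
# Rough sketch (not from the original post) of counting non-review PubMed
# records per decade via NCBI E-utilities. Verify query syntax and response
# fields against the current E-utilities documentation before relying on it.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(mindate: int, maxdate: int) -> int:
    """Count PubMed records published in [mindate, maxdate], excluding reviews."""
    params = {
        "db": "pubmed",
        "term": "journal article[pt] NOT review[pt]",  # original articles only
        "datetype": "pdat",                            # filter on publication date
        "mindate": str(mindate),
        "maxdate": str(maxdate),
        "rettype": "count",
        "retmode": "json",
    }
    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

if __name__ == "__main__":
    for start in range(1970, 2020, 10):                # ten-year bins, as in the figure
        print(f"{start}-{start + 9}: {pubmed_count(start, start + 9):,} papers")
```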

Traditionally, the organization of biomedical information has been the realm of the literature review, but most reviews are published years after discoveries are made and, given the rapid pace of new discoveries, this is increasingly an outdated model. In addition, most medical searches depend on keywords, adding complexity for the investigator trying to find the material they require. Third, medical researchers and professionals are recognizing the need to converse with each other, in real time, about the impact new discoveries may have on their research and clinical practice.

These issues require a people-based strategy: individuals with expertise across a diverse, cross-integrative range of medical topics who can provide an in-depth understanding of the current research and challenges in each field, as well as a more concept-based search platform. To address this need, human intermediaries, known as scientific curators, are needed to narrow down the information and provide critical context and analysis of medical and scientific information in an interactive manner, powered by Web 2.0, with curators acting as the “researcher 2.0”. By providing these hybrid networks, this curation offers better organization and visibility of the critical information useful for the next innovations in academic, clinical, and industrial research.

Yaneer Bar-Yam of the New England Complex Systems Institute was not confident that using details from past knowledge could produce adequate roadmaps for future experimentation, and noted for the article, “The expectation that the accumulation of details will tell us what we want to know is not well justified.”

In a recent post I curated findings from four lung cancer ‘omics studies and presented some graphics on bioinformatic analyses of the novel genetic mutations resulting from these studies (see link below):

Multiple Lung Cancer Genomic Projects Suggest New Targets, Research Directions for Non-Small Cell Lung Cancer

which showed that, while multiple genetic mutations and related pathway ontologies were well documented in the lung cancer literature, many significant genetic mutations and pathways identified in the genomic studies had little literature attributed to these lung cancer-relevant mutations.


This ‘literomics’ analysis reveals a large gap between our knowledge base and the data resulting from large translational ‘omic’ studies.

Different Literature Analysis Approaches Yield Different Perspectives

A ‘literomics’ approach focuses on what we do NOT know about genes, proteins, and their associated pathways, while a text-mining machine-learning algorithm focuses on building a knowledge base to determine the next line of research or what needs to be measured. Using each approach can give us different perspectives on ‘omics data; a toy illustration of the ‘literomics’ gap analysis is sketched below.
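The sketch that follows is a purely hypothetical illustration of that gap analysis: genes that are mutated frequently in an ‘omics dataset but rarely mentioned in the disease literature are flagged as under-studied. All gene names and numbers are invented placeholders, not results from the lung cancer studies above.

```python
# Toy, hypothetical 'literomics' gap analysis: flag genes that are frequently
# mutated in an omics study yet rarely discussed in the disease literature.
# All names and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class GeneEvidence:
    mutation_freq: float   # fraction of tumors with a mutation (from the omics study)
    pubmed_hits: int       # papers linking the gene to the disease (from a literature search)

evidence = {
    "KRAS":   GeneEvidence(mutation_freq=0.30, pubmed_hits=5000),
    "GENE_X": GeneEvidence(mutation_freq=0.12, pubmed_hits=3),
    "GENE_Y": GeneEvidence(mutation_freq=0.08, pubmed_hits=0),
}

def understudied(evidence, min_freq=0.05, max_hits=10):
    """Genes mutated often in the data but rarely covered in the literature."""
    return sorted(
        (gene for gene, ev in evidence.items()
         if ev.mutation_freq >= min_freq and ev.pubmed_hits <= max_hits),
        key=lambda gene: evidence[gene].mutation_freq,
        reverse=True,
    )

print(understudied(evidence))   # ['GENE_X', 'GENE_Y'] -> candidates for follow-up study
```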

Deriving Causal Inference

Ras is one of the best-studied and best-characterized oncogenes, and the mechanisms behind Ras-driven oncogenesis are well understood. This, according to computational biologist Larry Hunt of Smart Information Flow Technologies, makes Ras a great starting point for the Big Mechanism project. As he states, “Molecular biology is a good place to try (developing a machine learning algorithm) because it’s an area in which common sense plays a minor role.”

Even though some may think the project would be unable to tackle other mechanisms, such as those involving epigenetic factors, UCLA causality expert Judea Pearl, Ph.D. (head of the UCLA Cognitive Systems Laboratory) feels it is possible for machine learning to bridge this gap. As summarized from his lecture at Microsoft:

“The development of graphical models and the logic of counterfactuals have had a marked effect on the way scientists treat problems involving cause-effect relationships. Practical problems requiring causal information, which long were regarded as either metaphysical or unmanageable can now be solved using elementary mathematics. Moreover, problems that were thought to be purely statistical, are beginning to benefit from analyzing their causal roots.”

According to him, one must first:

1) articulate the assumptions

2) define the research question in counterfactual terms

Then it is possible to design an inference system, using causal calculus, that tells the investigator what they need to measure.
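As a concrete, hypothetical illustration of that workflow, the sketch below assumes a causal graph in which a confounder Z influences both a treatment X and an outcome Y, and then estimates the interventional quantity P(Y | do(X)) via the back-door adjustment formula. The variables and probabilities are invented for illustration and are not from the DARPA project.

```python
# Minimal sketch of Pearl-style causal inference under an assumed causal graph:
#   Z -> X, Z -> Y, X -> Y   (Z confounds the X -> Y relationship)
# Given observational probabilities consistent with that graph, the back-door
# adjustment recovers the interventional distribution:
#   P(Y=1 | do(X=x)) = sum_z P(Y=1 | X=x, Z=z) * P(Z=z)
# All numbers are invented for illustration.

p_z = {0: 0.6, 1: 0.4}                  # P(Z=z)
p_y1_given_xz = {                       # P(Y=1 | X=x, Z=z)
    (0, 0): 0.10, (0, 1): 0.40,
    (1, 0): 0.30, (1, 1): 0.70,
}

def p_y1_do_x(x: int) -> float:
    """Back-door adjustment: average P(Y=1 | X=x, Z=z) over the marginal of Z."""
    return sum(p_y1_given_xz[(x, z)] * pz for z, pz in p_z.items())

ace = p_y1_do_x(1) - p_y1_do_x(0)       # average causal effect of X on Y
print(f"P(Y=1|do(X=1)) = {p_y1_do_x(1):.2f}, "
      f"P(Y=1|do(X=0)) = {p_y1_do_x(0):.2f}, ACE = {ace:.2f}")
```

The exercise mirrors Pearl's steps: the graph encodes the articulated assumptions, the question is phrased as an intervention, and the calculus then dictates that Z is the quantity that must be measured.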

To watch a video of Dr. Judea Pearl’s April 2013 lecture at Microsoft Research Machine Learning Summit 2013 (“The Mathematics of Causal Inference: with Reflections on Machine Learning”), click here.

The key for the Big Mechanism project may be in correcting for the variables among studies, in essence building a model system that does not rely on fully controlled conditions. Dr. Peter Spirtes of Carnegie Mellon University in Pittsburgh, PA is developing the TETRAD project with two goals: 1) to specify and prove under what conditions it is possible to reliably infer causal relationships from background knowledge and statistical data not obtained under fully controlled conditions, and 2) to develop, analyze, implement, test, and apply practical, provably correct computer programs for inferring causal structure under conditions where this is possible.

In summary, such projects and algorithms will provide investigators with the what, and possibly the how, of what should be measured.

So for now it seems we are still needed.

References

  1. You J: Artificial intelligence. DARPA sets out to automate research. Science 2015, 347(6221):465.
  2. Biocuration 2014: Battle of the New Curation Methods [http://blog.abigailcabunoc.com/biocuration-2014-battle-of-the-new-curation-methods]
  3. Davis AP, Johnson RJ, Lennon-Hopkins K, Sciaky D, Rosenstein MC, Wiegers TC, Mattingly CJ: Targeted journal curation as a method to improve data currency at the Comparative Toxicogenomics Database. Database : the journal of biological databases and curation 2012, 2012:bas051.
  4. Wu CH, Arighi CN, Cohen KB, Hirschman L, Krallinger M, Lu Z, Mattingly C, Valencia A, Wiegers TC, John Wilbur W: BioCreative-2012 virtual issue. Database : the journal of biological databases and curation 2012, 2012:bas049.
  5. Wiegers TC, Davis AP, Mattingly CJ: Collaborative biocuration–text-mining development task for document prioritization for curation. Database : the journal of biological databases and curation 2012, 2012:bas037.

Other posts on this site on Artificial Intelligence, Curation Methodology, and Philosophy of Science include:

Inevitability of Curation: Scientific Publishing moves to embrace Open Data, Libraries and Researchers are trying to keep up

A Brief Curation of Proteomics, Metabolomics, and Metabolism

The Methodology of Curation for Scientific Research Findings

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

The growing importance of content curation

Data Curation is for Big Data what Data Integration is for Small Data

Cardiovascular Original Research: Cases in Methodology Design for Content Co-Curation The Art of Scientific & Medical Curation

Exploring the Impact of Content Curation on Business Goals in 2013

Power of Analogy: Curation in Music, Music Critique as a Curation and Curation of Medical Research Findings – A Comparison

conceived: NEW Definition for Co-Curation in Medical Research

Reconstructed Science Communication for Open Access Online Scientific Curation

Search Results for ‘artificial intelligence’

 The Simple Pictures Artificial Intelligence Still Can’t Recognize

Data Scientist on a Quest to Turn Computers Into Doctors

Vinod Khosla: “20% doctor included”: speculations & musings of a technology optimist or “Technology will replace 80% of what doctors do”

Where has reason gone?


Reported by: Dr. Venkat S. Karra, Ph.D.

“Comprehensive computer models of entire cells have the potential to advance our understanding of cellular function and, ultimately, to inform new approaches for the diagnosis and treatment of disease.” Not only does the model allow researchers to address questions that aren’t practical to examine otherwise, it represents a stepping-stone toward the use of such models in bioengineering and medicine.

A team led by Stanford bioengineering Professor Markus Covert used data from more than 900 scientific papers to account for every molecular interaction that takes place in the life cycle of Mycoplasma genitalium. Mycoplasma genitalium is a humble parasitic bacterium, known mainly for showing up uninvited in human urogenital and respiratory tracts. But the pathogen also has the distinction of containing the smallest genome of any free-living organism – only 525 genes, as opposed to the 4,288 of E. coli, a more traditional laboratory bacterium.

“This is potentially the new Human Genome Project,” said Karr, a co-first author and Stanford biophysics graduate student. “It’s to understand biology generally.”

“It’s going to take a really large community effort to get close to a human model.”

This is a breakthrough effort for computational biology, the world’s first complete computer model of an organism. “This achievement demonstrates a transforming approach to answering questions about fundamental biological processes,” said James M. Anderson, director of the National Institutes of Health Division of Program Coordination, Planning, and Strategic Initiatives.
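To give a flavor of how such a whole-cell model is organized, here is a heavily simplified, hypothetical sketch: independent sub-models for distinct cellular processes repeatedly update a shared cell state in short time steps. The process names, state variables, and update rules are invented placeholders, orders of magnitude simpler than the published M. genitalium model.

```python
# Toy, hypothetical sketch of a modular whole-cell simulation loop: each
# cellular process is a sub-model that reads and updates a shared cell state
# once per time step. The processes and rules below are invented placeholders.
from dataclasses import dataclass

@dataclass
class CellState:
    time_s: int = 0
    metabolites: float = 1000.0   # pooled "nutrient" units (arbitrary)
    proteins: float = 100.0       # protein mass units (arbitrary)
    dna_replicated: float = 0.0   # fraction of the chromosome copied

def metabolism(s: CellState) -> None:
    s.metabolites += 5.0          # toy nutrient uptake per step

def protein_synthesis(s: CellState) -> None:
    made = min(2.0, s.metabolites * 0.01)
    s.metabolites -= made         # consume nutrients to make protein
    s.proteins += made

def dna_replication(s: CellState) -> None:
    s.dna_replicated = min(1.0, s.dna_replicated + 0.001)

PROCESSES = [metabolism, protein_synthesis, dna_replication]

def simulate(steps: int) -> CellState:
    state = CellState()
    for _ in range(steps):        # one-second time steps; each process runs once
        for process in PROCESSES:
            process(state)
        state.time_s += 1
    return state

print(simulate(3600))             # cell state after one simulated hour
```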

Study results were published by Stanford researchers in the journal Cell.

The research was partially funded by an NIH Director’s Pioneer Award from the National Institutes of Health Common Fund.

Source:

http://www.dddmag.com/news/2012/07/first-complete-computer-model-bacteria?et_cid=2783229&et_rid=45527476&linkid=http%3a%2f%2fwww.dddmag.com%2fnews%2f2012%2f07%2ffirst-complete-computer-model-bacteria

