Posts Tagged ‘Lawrence Berkeley National Laboratory’

Searchable Genome for Drug Development

Reporter: Aviva Lev-Ari, PhD, RN

The Druggable Genome Is Now Googleable

By Aaron Krol

November 22, 2013 | Relationships between human genetic variation and drug responses are being documented at an accelerating rate, and have become some of the most promising avenues of research for understanding the molecular pathways of diseases and pharmaceuticals alike. Drug-gene interactions are a cornerstone of personalized medicine, and learning about the drugs that mediate gene expression can point the way toward new therapeutics with more targeted effects, or novel disease targets for existing drugs. So it may seem surprising that, until October of this year, a researcher interested in pharmacogenetics generally needed the help of a dedicated bioinformatician just to access the known background on a gene’s drug associations.

Obi and Malachi Griffith are particularly dedicated bioinformaticians, who specialize in applying data analytics to cancer research, a rich field for drug-gene information. Like many professionals in their budding field, the Griffiths pursued doctoral research in bioinformatics applications at a time when this was not quite recognized as a distinct discipline, and quickly found their data-mining talents in hot demand. “We found ourselves answering the same questions over and over again,” says Malachi. “A clinician or researcher, who perhaps wasn’t a bioinformatician, would have a list of genes, and would ask, ‘Well, which of these genes are kinases? Which of these genes has a known drug or is potentially druggable?’ And we would spend time writing custom scripts and doing ad hoc analyses, and eventually decided that you really shouldn’t need a bioinformatics expert to answer this question for you.”

The Griffiths – identical twin brothers, though Malachi helpfully sports a beard – had by this time joined each other at one of the world’s premier genomic research centers, the Genome Institute at Washington University in St. Louis, and figured they had the resources to improve this state of affairs. The Genome Institute is generously funded by the NIH and was a major contributor to the Human Genome Project; the Griffiths had congregated there deliberately after completing post-doctoral fellowships at the Lawrence Berkeley National Laboratory in California (Obi) and the Michael Smith Genome Sciences Centre in Vancouver (Malachi). “When we finished our PhDs, we knew we would like to set up a lab together,” says Obi. At the Genome Institute, they pitched the idea of building a free, searchable online database of drug-gene associations, and soon the Drug Gene Interaction Database (DGIdb) was under development.

In Search of the Druggable Genome

Existing public databases, like DrugBank, the Therapeutic Target Database, and PharmGKB, were the first ports of call, where a wealth of information was waiting to be re-aggregated in a searchable format. “For their use cases [these databases] are quite powerful,” says Obi. “They were just missing that final component, which is user accessibility for the non-informatics expert.” Getting all this data into DGIdb was and remains the most labor-intensive part of the project. At least two steps removed from the original sources establishing each interaction, the Griffiths felt they had to reexamine each data point, tracing it back to publication and scrutinizing its reliability. “It’s sort of become a rite of passage in our group,” says Malachi. “When new people join the lab, they have to really dig into this resource, learn what it’s all about, and then contribute some of their time toward manual curation.”

The website’s main innovation, however, is its user interface, which presents itself like Google but returns results a little more like a good medical records system. The homepage lets you enter a gene or panel of genes into a search box, and if desired, add a few basic filters. Entering search terms brings up a chart that quickly summarizes any known drug interactions, which can then be further filtered or tracked back to the original sources. The emphasis is not on a detailed breakdown of publications or molecular behavior, but on immediately viewing which drugs affect a given gene’s expression and how. “We did try to place quite a bit of emphasis on creating something that was intuitive and easy to use,” says Malachi. Beta testing involved watching unfamiliar users navigate the website and taking notes on how they interacted with the platform.

DGIdb went live in February of this year, followed by a publication in Nature Methods this October, and the database is now readily accessible online. The code is open source and can be modified for any specific use case, using the Perl, Ruby, Shell, or Python programming languages, and the Genome Institute has also made available their internal API for users who want to run documents through the database automatically, or perform more sophisticated search functions. User response will be key to sustaining and expanding the project, and the Griffiths are looking forward to an update that draws on outside researchers’ knowledge. “A lot of this information [on drug-gene interactions] really resides in the minds of experts,” says Malachi, “and isn’t in a form that we can easily aggregate it from… We’re really motivated to have a crowdsourcing element, so that we can start to harness all of that information.” In the meantime, the bright orange “Feedback” button on every page of the site is being bombarded with requests to add specific interactions to the database.
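To give a flavor of the kind of programmatic access such an API enables, here is a minimal Python sketch. The JSON layout and field names below are illustrative assumptions for the sketch, not DGIdb's documented schema; a real client would fetch the response over HTTP rather than embed it.

```python
import json

# Hypothetical example of the JSON a drug-gene interaction API might
# return for a gene query; the field names are illustrative assumptions,
# not DGIdb's documented schema.
SAMPLE_RESPONSE = """
{
  "matchedTerms": [
    {
      "geneName": "BRAF",
      "interactions": [
        {"drugName": "VEMURAFENIB", "interactionTypes": ["inhibitor"]},
        {"drugName": "DABRAFENIB", "interactionTypes": ["inhibitor"]}
      ]
    }
  ]
}
"""

def summarize_interactions(raw_json):
    """Flatten a response into (gene, drug, interaction type) tuples."""
    data = json.loads(raw_json)
    rows = []
    for term in data["matchedTerms"]:
        gene = term["geneName"]
        for interaction in term["interactions"]:
            for itype in interaction.get("interactionTypes", ["unknown"]):
                rows.append((gene, interaction["drugName"], itype))
    return rows

for gene, drug, itype in summarize_interactions(SAMPLE_RESPONSE):
    print(f"{gene}\t{drug}\t{itype}")
```

This is the "chart that quickly summarizes any known drug interactions" reduced to its simplest programmatic form: a flat list of gene-drug pairs that can be filtered or traced back to sources.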

Not all these interactions are easy to validate. “Another area that we’re really actively trying to pursue,” adds Malachi, “is getting information out of sources where text mining is required, where information is really not in a form where the interaction between genes and drugs is laid out quickly.” He cites the example of the US clinical trials registry, where the results of all registered clinical trials in the United States are made available online. This surely includes untapped material on drug-gene interactions, but nowhere are those results neatly summarized. “You either have a huge manual curation problem on your hands – there’s literally hundreds of thousands of clinical trial records – or you have to come up with some kind of machine learning, text-mining approach.” So far, the Genome Institute has been limited to manual curation for this kind of scenario, but with a resource as large as the clinical trials registry, the Griffiths hope to bring their programming savvy to bear on a more efficient attack.
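A deliberately minimal sketch of the text-mining idea: scan free text for sentences where a known gene name and a known drug name co-occur, and emit the pair as a candidate interaction. Real systems need named-entity recognition, synonym normalization, and relation extraction; the name lists and the sample sentence here are fabricated for illustration.

```python
import re

# Fabricated vocabularies for the sketch; a real pipeline would draw
# these from curated gene and drug nomenclature resources.
GENES = {"EGFR", "BRAF", "KRAS"}
DRUGS = {"erlotinib", "vemurafenib"}

def candidate_pairs(text):
    """Return (gene, drug) pairs co-occurring within a sentence."""
    pairs = set()
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        tokens = set(re.findall(r"[A-Za-z]+", sentence))
        genes = GENES & {t.upper() for t in tokens}
        drugs = DRUGS & {t.lower() for t in tokens}
        pairs.update((g, d) for g in genes for d in drugs)
    return pairs

text = ("Patients with EGFR mutations responded to erlotinib. "
        "KRAS status did not change the outcome.")
print(sorted(candidate_pairs(text)))
```

Note that the second sentence mentions a gene but no drug, so it yields no candidate pair; co-occurrence filtering of this kind is the crude first pass over which machine-learning extraction would be layered.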

In the meantime, new resources are continuously being brought into the database, rising from eleven data sources on launch to sixteen now, with more in the curation pipeline. DGIdb is already regularly incorporated in the Genome Institute’s research. Every cancer patient sequenced at Washington University has her genetic data run first through an analytics pipeline to find genes with unusual variants or levels of expression, and then through DGIdb to see whether any of these genes are known to be druggable. This is an ideal use case for the database, which is presently biased toward cancer-related interactions, the Griffiths’ own area of research.

The twins have a personal investment in advancing cancer therapeutics. Their mother died in her forties from an aggressive case of breast cancer, while Obi and Malachi were still in high school, and their family has continued to suffer disproportionately from cancer ever since. Says Obi, “We’ve had the opportunity to see [everything from] terrible, tragic outcomes… to the other end of the spectrum, where advances in the way cancer is treated were able to really make a huge difference to both our cousin and our brother,” both in remission after life-threatening cases of childhood leukemia and Ewing’s sarcoma, respectively. “Everyone can tell these stories,” Malachi adds, “but we’ve had a little more than our fair share.”

DGIdb can’t influence cancer care directly – most of the data available on drug-gene interactions is too tentative for clinical use – but it can spur research into more personalized treatments for genetically distinct cancers, and increasingly for other diseases as more information is brought inside. Meanwhile, companies like Foundation Medicine and MolecularHealth are drawing on similar drug-gene datasets, narrowed down to the most actionable information, to tailor clinical action to individual cancer patients. The Griffiths are cautiously optimistic that research like the Genome Institute’s is approaching the crucial tipping point where finely tuned clinical decisions could be made based on a patient’s genetic profile. “We’re still firmly on the academic research side,” says Malachi, but “we’re definitely at the stage where this idea needs to be pursued aggressively.”


Read Full Post »

World of Metabolites: Imaging Technique Developed at Lawrence Berkeley National Laboratory for Capturing Them

Reporter: Aviva Lev-Ari, PhD, RN


UPDATED on 9/27/2017

From: “Dr. Larry Bernstein”

Date: Tuesday, September 26, 2017 at 10:45 AM

To: Aviva Lev-Ari

Precision or personalized medicine seeks to provide the right drug to the right patient at the right time. Hence the significance of the principal omics disciplines (genomics, proteomics, and, not least, metabolomics) as diagnostic enablers.

Primacy among the ‘omics is debatable, but the notion that metabolomics reflects the most accurate picture of disease states has gained significant momentum. “Almost every factor affecting health exerts its influence by altering metabolite levels,” says Mike Milburn, Ph.D., Chief Scientific Officer at Metabolon (Morrisville, North Carolina, USA).

Where clinical chemistry blood tests typically quantify individual species, for example glucose or cholesterol, metabolomics measures hundreds or even thousands of metabolites to provide a nuanced view of disease states.

Metabolon employs standard liquid chromatography-mass spectrometry (LC-MS) for metabolomic studies. Its proprietary informatics and processing platform, Precision Metabolomics™, overcomes the “big data” challenge, a natural consequence of measuring hundreds or thousands of small-molecule entities with widely differing concentrations in a single sample. Precision Metabolomics enables “n of 1” studies, meaningful clinical trials on a single patient, Milburn adds.

Diagnostic metabolomics resembles other medical testing, where results are compared against readings from healthy individuals or a reference population. Many metabolites serve that purpose, but none on its own is sufficiently specific for a diagnosis; otherwise it would constitute a standalone test. Hence the reliance on metabolite panels or networks, which together may provide a clearer view of disease states than any single diagnostic molecule.
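The panel idea can be sketched in a few lines of Python: no single metabolite separates patient from reference, but a combined deviation score across the panel can. All metabolite names, reference statistics, and patient values below are made-up numbers for illustration, not real clinical data.

```python
import math

# Made-up reference statistics: metabolite -> (mean, standard deviation)
# measured in a hypothetical healthy cohort.
REFERENCE = {
    "glucose":   (90.0, 10.0),
    "lactate":   (1.0, 0.3),
    "glutamine": (550.0, 60.0),
}

def panel_score(patient):
    """Root-mean-square z-score of a patient's panel vs. the reference."""
    zs = [(patient[m] - mu) / sd for m, (mu, sd) in REFERENCE.items()]
    return math.sqrt(sum(z * z for z in zs) / len(zs))

healthy = {"glucose": 92.0, "lactate": 1.1, "glutamine": 560.0}
flagged = {"glucose": 130.0, "lactate": 2.5, "glutamine": 400.0}
print(round(panel_score(healthy), 2))  # small combined deviation
print(round(panel_score(flagged), 2))  # large combined deviation
```

The point of the toy: each individual deviation in the flagged profile might fall within some disease's range, but the joint pattern across the panel is what carries diagnostic signal.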


Imaging technique captures ever-changing world of metabolites

Thu, 06/13/2013 – 7:38am

The kinetic world of metabolites comes to life in this merged overlay of mass spectrometry images. It shows new versus pre-existing metabolites in a tumor section (yellow and red indicate newer metabolites). Image: Lawrence Berkeley National Laboratory

What would you do with a camera that can take a picture of something and tell you how new it is? If you’re Lawrence Berkeley National Laboratory scientists Katherine Louie, Ben Bowen, Jian-Hua Mao and Trent Northen, you use it to gain a better understanding of the ever-changing world of metabolites, the molecules that drive life-sustaining chemical transformations within cells.

They’re part of a team of researchers that developed a mass spectrometry imaging technique that not only maps the whereabouts of individual metabolites in a biological sample, but how new the metabolites are too.

That’s a big milestone, because metabolites are constantly in flux. They’re synthesized on-demand in order to sustain an organism’s energy requirements. When you eat lunch, metabolites momentarily fire up in various cell populations throughout your body to fuel your day. But they also have a dark side. Cancer cells tap metabolites to drive tumor development.

Unfortunately, the current ways to clinically analyze metabolites don’t capture their kinetics. Microscopy maps the cells and biomarkers in a tumor section. And traditional mass spectrometry reveals the abundance and spatial distribution of molecules such as metabolites.

But these images are static snapshots of a highly dynamic process. They’re blind to how recently the metabolites were synthesized, which is a key piece of information. The metabolic status of a cell population is a good indicator of what the cells were up to when the sample was taken.

To image the ebb and flow of metabolites, the scientists paired mass spectrometry with a clinically accepted way to label tissue that uses a hydrogen isotope called deuterium.

As reported in Nature Scientific Reports, they administered deuterium to mice with tumors. Newly synthesized lipids (a hallmark of metabolic activity) became labeled with deuterium, while pre-existing lipids remained unlabeled. The scientists then removed tumor sections and analyzed them with a type of mass spectrometry.
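The kinetic logic of the experiment can be reduced to a toy calculation: if newly synthesized lipids carry the deuterium label and pre-existing lipids do not, then the labeled fraction of the signal at each image pixel estimates local metabolic turnover. The intensity grid below is fabricated for illustration; real mass spectrometry imaging data involve per-ion isotopologue patterns, not a single labeled/unlabeled pair.

```python
# Toy sketch of kinetic mass spectrometry imaging: per-pixel labeled
# fraction as a proxy for how recently lipids were synthesized.

def turnover_fraction(labeled, unlabeled):
    """Fraction of signal from newly synthesized (deuterium-labeled) lipid."""
    total = labeled + unlabeled
    return labeled / total if total else 0.0

# 2x2 "image": (labeled, unlabeled) ion intensities per pixel (fabricated)
pixels = [
    [(10.0, 90.0), (50.0, 50.0)],
    [(80.0, 20.0), (0.0, 0.0)],   # last pixel: no signal at all
]

fractions = [[turnover_fraction(l, u) for l, u in row] for row in pixels]
for row in fractions:
    print(["%.2f" % f for f in row])
```

Rendering these fractions as a color map is what produces the overlay described above: bright colors where the labeled fraction, and hence recent synthesis, is high.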

The resulting images look like freeze-frames of a slow-motion fireworks show. They reveal when and where metabolic turnover occurs in a tumor section, with the brighter colors depicting newly synthesized lipids.

The scientists also found that regions with new lipids had a higher tumor grade, which is a good predictor of how quickly a tumor is likely to grow.

“Our approach, called kinetic mass spectrometry imaging, could provide clinicians with quantifiable information they can use,” says Bowen.

The scientists are now applying their imaging technique to study metabolic flux in other biological systems, such as microbial communities.

Source: Lawrence Berkeley National Laboratory


Read Full Post »

Walking Versus Running for Hypertension, Cholesterol, and Diabetes Mellitus Risk Reduction

Reporter: Aviva Lev-Ari, PhD, RN

Paul T. Williams and Paul D. Thompson


From the Life Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA (P.T.W.); and Division of Cardiology, Hartford Hospital, Hartford, CT (P.D.T.).

Correspondence to Paul T. Williams, PhD, Life Sciences Division, Lawrence Berkeley National Laboratory, Donner 464, 1 Cyclotron Rd, Berkeley, CA 94720.

Objective—To test whether equivalent energy expenditure by moderate-intensity (eg, walking) and vigorous-intensity exercise (eg, running) provides equivalent health benefits.

Approach and Results—We used the National Runners’ (n=33 060) and Walkers’ (n=15 945) Health Study cohorts to examine the effect of differences in exercise mode and thereby exercise intensity on coronary heart disease (CHD) risk factors. Baseline expenditure (metabolic equivalent hours per day [METh/d]) was compared with self-reported, physician-diagnosed incident hypertension, hypercholesterolemia, diabetes mellitus, and CHD during 6.2 years of follow-up. Running significantly decreased the risks for incident hypertension by 4.2% (P<10−7), hypercholesterolemia by 4.3% (P<10−14), diabetes mellitus by 12.1% (P<10−5), and CHD by 4.5% per METh/d (P=0.05). The corresponding reductions for walking were 7.2% (P<10−7), 7.0% (P<10−8), 12.3% (P<10−4), and 9.3% (P=0.01). Relative to <1.8 METh/d, the risk reductions for 1.8 to 3.6, 3.6 to 5.4, 5.4 to 7.2, and ≥7.2 METh/d were as follows: (1) 10.0%, 17.7%, 25.1%, and 34.9% from running and 14.0%, 23.8%, 21.8%, and 38.3% from walking for hypercholesterolemia; (2) 19.7%, 19.4%, 26.8%, and 39.8% from running and 14.7%, 19.1%, 23.6%, and 13.3% from walking for hypertension; and (3) 43.5%, 44.1%, 47.7%, and 68.2% from running, and 34.1%, 44.2% and 23.6% from walking for diabetes mellitus (walking >5.4 METh/d excluded for too few cases). The risk reductions were not significantly different for running than walking for diabetes mellitus (P=0.94), hypertension (P=0.06), or CHD (P=0.26), and only marginally greater for walking than running for hypercholesterolemia (P=0.04).
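The per-METh/d figures above can be read as proportional risk reductions. Under an assumed log-linear dose-response, which is an assumption made for this sketch and not necessarily the model fitted in the study, a per-unit reduction r compounds over a dose of d METh/d as 1 − (1 − r)^d:

```python
# Illustrative arithmetic only: assume the per-METh/d risk reduction r
# compounds log-linearly with dose d (a sketch assumption, not the
# cohort model used in the study).
def risk_reduction(per_meth_d, dose_meth_d):
    """Cumulative risk reduction at a given daily dose of exercise."""
    return 1.0 - (1.0 - per_meth_d) ** dose_meth_d

# Running's reported 4.2% per METh/d for incident hypertension,
# evaluated at the dose-category boundaries used in the abstract:
for d in (1.8, 3.6, 5.4, 7.2):
    print(f"{d} METh/d -> {100 * risk_reduction(0.042, d):.1f}% reduction")
```

Comparing the compounded values against the categorical estimates reported for hypertension gives a rough sense of how far the dose-response deviates from this simple multiplicative assumption.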

Conclusions—Equivalent energy expenditures by moderate (walking) and vigorous (running) exercise produced similar risk reductions for hypertension, hypercholesterolemia, diabetes mellitus, and possibly CHD.

Read Full Post »