Archive for the ‘Personalized and Precision Medicine & Genomic Research’ Category

Prostate Cancer Molecular Diagnostic Market – the Players are: SRI Int’l, Genomic Health w/Cleveland Clinic, Myriad Genetics w/UCSF, GenomeDx and BioTheranostics

Curator: Aviva Lev-Ari, PhD, RN

On February 6, 2013 we reported that Dr. Mark Rubin, a leading prostate cancer and genomics expert, is to lead a cutting-edge center for targeted, individualized patient care based on each patient's genetics:

Genomically Guided Treatment after CLIA Approval: to be offered by Weill Cornell Precision Medicine Institute

On May 16, 2013 we reported a major breakthrough in the Prostate Cancer Screening

A Blood Test to Identify Aggressive Prostate Cancer: a Discovery @ SRI International, Menlo Park, CA

After nearly a decade, my collaborators and I have found the first marker that specifically identifies the approximately six to eight percent of prostate cancers that are considered “aggressive,” meaning they will migrate to other parts of the body, at which point they are very difficult to treat. Although we have confirmed this marker, there is much to be done before a clinical application can be developed.

http://pharmaceuticalintelligence.com/2013/05/16/a-blood-test-to-identify-aggressive-prostate-cancer-a-discovery-sri-international-menlo-park-ca/

Prostate Cancer MDx Competition Heating Up; New Data from Genomic Health, Myriad

May 15, 2013

Life sciences companies are gearing up for battle to capture the profitable prostate cancer molecular diagnostic market.

Genomic Health and Myriad Genetics both made presentations to the investment community last week about their genomic tests that gauge a man’s risk of prostate cancer aggressiveness. As part of its annual investor day, Myriad discussed new data on its Prolaris test, which analyzes the expression level of 46 cell cycle progression genes and stratifies men’s risk of biochemical recurrence of prostate cancer. If the test reports low gene expression, then the patient is at low risk of disease progression, while high gene expression is associated with disease progression.
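To make the mechanics of an expression-signature score like this concrete, here is a minimal sketch in Python. The gene symbols, panel sizes, and normalization scheme below are invented for illustration (the actual Prolaris algorithm and panel are proprietary); the general pattern, averaging proliferation-gene expression and normalizing against housekeeping genes, is the standard one for tests of this kind.

```python
import numpy as np

# Hypothetical gene lists -- illustrative only, NOT the real Prolaris panel.
CCP_GENES = ["FOXM1", "CDC20", "BUB1B"]           # proliferation (cell cycle) genes
HOUSEKEEPING_GENES = ["CLTC", "PSMC2", "RPL13A"]  # normalization genes

def expression_risk_score(expression):
    """Mean log2 expression of proliferation genes, normalized against
    housekeeping genes. Higher values suggest a faster-growing tumor."""
    ccp = np.mean([np.log2(expression[g]) for g in CCP_GENES])
    ref = np.mean([np.log2(expression[g]) for g in HOUSEKEEPING_GENES])
    return ccp - ref

# Example: a tumor with elevated proliferation-gene expression scores above 0.
sample = {"FOXM1": 220.0, "CDC20": 180.0, "BUB1B": 205.0,
          "CLTC": 100.0, "PSMC2": 95.0, "RPL13A": 110.0}
print(f"Expression risk score: {expression_risk_score(sample):.2f}")
```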

Meanwhile, around the same time last week, Genomic Health launched its Oncotype DX prostate cancer test and presented data from the first validation study involving the diagnostic. The Oncotype DX prostate cancer test analyzes the expression of 17 genes within four biological pathways to gauge prostate cancer aggressiveness. The test reports a genomic prostate score from 0 to 100; the lower the score, the more certain a patient can be that he can avoid treatment and continue with active surveillance. Prostate cancer patients who are deemed to be at very low risk, low risk, or intermediate risk of progressing are eligible to be tested with the Oncotype DX test. If, based on standard clinical measures, a person's prostate cancer is considered high risk, then he is not a candidate for Genomic Health's test.

These molecular tests are entering the market at a time when currently available tools aren't specific enough to distinguish between men who have an aggressive form of prostate cancer and therefore need invasive treatments, and those who are at low risk and can do well with active surveillance. According to an NIH estimate, in 2010 the annual medical costs associated with prostate cancer in the US were $12 billion.

It is estimated that each year 23 million men undergo testing for prostate-specific antigen, a protein produced by the prostate gland that increases when a man has prostate cancer. Additionally, one million men get a prostate biopsy annually, while 240,000 men end up with a diagnosis of prostate cancer, and around 30,000 die from the disease. Although most of the men diagnosed with prostate cancer end up receiving surgery or radiation treatment, as many as half of these men will probably not progress, and their disease isn't life-threatening.

While PSA testing has been shown to reduce prostate cancer deaths, a man’s PSA level may be increased for reasons other than cancer. As such, broadly screening men for PSA has been controversial in the healthcare community since the test isn’t specific enough to gauge which men are at low risk of developing aggressive prostate cancer and can forgo unnecessary treatments that can have significant side effects.

Both Myriad and Genomic Health are hoping their tests will further refine prostate cancer diagnosis and help doctors gain more confidence in determining which of their patients have aggressive disease and which are at low risk.

Myriad’s advantage

In this highly competitive space, Myriad has the first mover advantage, having launched Prolaris three years ago. The company has published four studies involving the test and conducted a number of trials analyzing around 3,000 patient samples.

Researchers from UCSF and Myriad recently published the fourth validation study in the Journal of Clinical Oncology, which analyzed samples from 400 men who had undergone a radical prostatectomy. In the published study, researchers reported that 100 percent of the men whom Prolaris deemed to be at "low risk" of progression did not experience a recurrence within the five years the study was ongoing. Meanwhile, 50 percent of those the test deemed to be at "high risk" did experience recurrence during that time (PGx Reporter 3/6/2013).

At a major medical conference recently, Myriad presented data from a study which tested biopsy samples from 141 patients treated with external beam radiation therapy and found that the test score was significantly associated with patients' outcomes and provided information about disease progression beyond standard clinical measures. Although this finding needs to be further validated in a larger patient cohort, the researchers concluded that Prolaris "could be used to select high-risk men undergoing external beam radiation therapy who may need combination therapy for their clinically localized prostate cancer." In this study, around half of the cohort was African American.

Myriad has also shown in studies that its test can make accurate predictions from initial prostate biopsy tissue as well as from post-prostatectomy tissue. Studies have also shown the test to be superior to the Gleason score, baseline PSA levels, and other prognostic factors in predicting prostate cancer-specific mortality.

Myriad has nearly completed hiring a 24-person sales force to drive sales of the test. Over the last year, Myriad has received more than 3,000 orders for its Prolaris test and 350 urologists have ordered it. The test carries a $3,400 price tag.

Although the company doesn’t have Medicare coverage yet for Prolaris, Myriad is conducting a study, called PROCEED, that it hopes will sway Medicare contractor Noridian to cover the diagnostic. The company has said it is on track to submit data from this registry to Medicare by late summer and expects to hear a decision about test coverage in calendar year 2014 (PGx Reporter 5/8/2013).

During the annual investor day last week, Myriad officials highlighted the gene panel for Prolaris, which features genes involved in cell cycle progression, and noted this as one of the advantages of its test over standard methods. “The Prolaris score measures how fast the tumor is growing. We look at the cell proliferation to look at a component of cancer that is not looked at by current clinical pathologic features,” Bill Rusconi, head of Myriad’s urology division, said.

“So, pathology like PSA score … only look at how far the tumor is progressed … [and] how advanced that tumor is. So, that’s only half of the picture because an advanced tumor could have been smoldering for 20 years, and may not go much further in the short term,” he noted. On the other hand, Rusconi added that a less advanced tumor could be progressing very quickly.

Another distinguishing point for the Prolaris test, according to Myriad, is that it is indicated for patients who are deemed to be at low and high risk by standard measures. Prostate cancer patients deemed to be at high risk of progression by standard clinical measures wouldn’t qualify for testing by Genomic Health’s test. Rusconi estimated that if Prolaris tested around 200,000 patients with localized prostate cancer to gauge the aggressiveness of their disease, the market opportunity for the test would be $700 million.

Myriad executives declined to comment on competing prostate cancer molecular tests, particularly Genomic Health’s product, noting that there isn’t a lot of published data to make any judgments. “We haven’t really seen any published data from any other competitor product. And so, I think in the absence of that, until data have made it through the peer review process and been in publication, it’s always difficult to understand exactly what type of information is available,” Mark Capone, president of Myriad Genetics Laboratories, told investors.

New competition

Like Myriad's BRACAnalysis test, which comprises more than 80 percent of that company's product revenues, Genomic Health's Oncotype DX breast cancer recurrence test brings in the majority of its product revenues. However, the company believes that its newly launched Oncotype DX prostate cancer test stands to be its largest market opportunity to date.

Last week, researchers from University of California, San Francisco, presented data from the first validation study involving the Oncotype DX prostate cancer test. The study involved nearly 400 prostate cancer patients considered low or intermediate risk by standard methods such as Gleason score and showed that when the Oncotype DX score was used in conjunction with other measures, investigators identified more patients as having very low risk disease who were appropriate for active surveillance than when they diagnosed patients without the test score.

More than one third of patients classified as low risk by standard measures in the study were deemed to be “very low risk” by Oncotype DX and therefore could choose active surveillance. Meanwhile, 10 percent of patients in the study were found by clinical measures to be at very low risk or low risk, but the Oncotype DX test deemed them as having aggressive disease that needed treatment.

Matthew Cooperberg of UCSF, who presented data from this validation study at the American Urological Association’s annual meeting last week, highlighted this feature of the Oncotype DX prostate cancer test to investors during a conference call last week. He noted that the test not only gauges which low-risk patients can confidently remain with active surveillance, but it also finds those patients who didn’t receive an accurate risk assessment based on standard clinical measures. “It’s also equally important that we identify the man who frankly should not be on active surveillance, because they’re out there,” he said.

Genomic Health has aligned its test with guidelines from the National Comprehensive Cancer Network, which has expressed concern about over-diagnosis and over-treatment in prostate cancer patients. In 2010, NCCN guidelines established a new “very low risk” category for men with clinically insignificant prostate cancer and recommended that men who fall into this category and have a life expectancy of more than 20 years should only be followed with active surveillance. In 2011, NCCN made the active surveillance criteria more stringent for men in the “very low risk” category.

In order to develop the prostate cancer test, Genomic Health collaborated with the Cleveland Clinic on six feasibility studies and selected the gene expression panel after analyzing 700 genes on tissue samples from 700 patients. The commercial test analyzes the expression of 17 genes across four biological pathways.

Genomic Health executives suggested to investors that, in determining the aggressiveness of prostate cancer, a test that gauges critical genes in multiple pathways involved in the disease, as opposed to just one pathway, may be the better bet.

“After we selected those 700 [candidate] genes, we were completely agnostic as to what the best predictors would be. So, we let the genes do their thing and picked out the best performance,” said Eric Klein, chairman of Glickman Urological and Kidney Institute at the Cleveland Clinic and principal investigator for the original development studies for the Oncotype DX prostate cancer test. Referring to Myriad’s test, which assessed 46 cell cycle progression genes, Klein noted that while cell proliferation is important, it’s not the only pathway.

"So, I think one of the strengths of this assay is that it surveys the biology of the cancer better because it surveys other pathways," he said. If a test looks at genes in only one particular pathway, and the "score is low, you don't know if you have missed the other underlying biology."
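Klein's description of letting "the genes do their thing" is essentially agnostic feature selection: score every candidate gene by how well its expression tracks outcome, then keep the top performers. The toy sketch below uses simulated data; the 700-gene candidate pool and 17-gene panel sizes mirror the article, but everything else (the data and the scoring function) is an assumption for illustration, not Genomic Health's actual method.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

# Simulated expression matrix: 700 tumor samples x 700 candidate genes,
# with a binary recurrence outcome. All data are synthetic.
rng = np.random.default_rng(42)
X = rng.normal(size=(700, 700))
y = rng.integers(0, 2, size=700)
# Plant signal in a handful of genes so selection has something to find.
X[:, :10] += y[:, None] * 0.8

# Rank every gene by univariate association with outcome; keep the best 17.
selector = SelectKBest(score_func=f_classif, k=17).fit(X, y)
panel = np.flatnonzero(selector.get_support())
print("Selected gene indices:", panel)
```

A panel chosen this way still has to prove itself in independent cohorts, which is exactly what the validation studies described in this article are for.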

This strategy of picking critical cancer-linked genes from multiple pathways has proven successful when launching Oncotype DX tests for breast cancer and colon cancer recurrence, company officials noted. Genomic Health’s prior experience launching molecular tests for cancer recurrence and the strength of the Oncotype DX brand will likely be advantages for the company.

Kim Popovits, CEO of Genomic Health, noted that the company has hired a “small sales force” to drive uptake of the prostate cancer test and reps will be targeting high-volume practices. “We have medical science liaisons that will be out there working to educate key opinion leaders with a similar approach to what we did in breast [cancer],” Popovits told investors. “We will begin to add to the sales organization as time goes on, as we see traction taking place, and as we move more towards payor reimbursement.”

The company plans to conduct a decision impact study as part of its effort to gain reimbursement coverage for the test. Genomic Health is also planning to do additional studies that will explore what level of active surveillance doctors should perform on patients who are deemed by the Oncotype DX test to be at very low or low risk.

The list price for the test is $3,820.

Other players

Although Myriad and Genomic Health are currently the main players in the prostate cancer molecular diagnostics space, the market will become an increasingly crowded one in the coming months.

Canadian firm GenomeDx is planning to launch a prostate cancer molecular diagnostic later this year, called Decipher. The company recently presented data at a medical conference on the test’s clinical validity and utility in predicting which patients are at risk of recurrence and metastasis after prostate cancer surgery. The company has said it has 22 studies underway with the Decipher test involving 4,000 patients (PGx Reporter 2/20/2013).

BioTheranostics recently published a study in the Proceedings of the National Academy of Sciences about its new 32-gene signature test, dubbed Prostate Cancer Index, which gauges PSA recurrence. In the study, which involved 270 tumor samples from patients treated with radical prostatectomy, the RT-PCR test (developed in collaboration with Massachusetts General Hospital) predicted PSA recurrence and had added value over standard measures such as Gleason score, tumor stage, surgical margin status, and pre-surgery PSA levels. The only other measure with significant prognostic value was surgical margin status.

The test could separate patients into groups based on PSA recurrence and whether they would develop metastatic disease within a 10-year period. PCI found that patients with a high risk score had a 14 percent risk of metastasis, while those in the low-risk group had a zero percent risk of metastasis. “In particular, this information may be useful at the biopsy stage, so that clinicians can better assess which patients can consider active surveillance versus those who should consider immediate treatment,” BioTheranostics CEO Richard Ding told PGx Reporter.

BioTheranostics has not yet determined when it will launch PCI. However, the company is planning additional follow-on studies to demonstrate the clinical utility of the test, including one study involving patients on active surveillance after having an initial prostate biopsy.

Turna Ray is the editor of GenomeWeb's Pharmacogenomics Reporter. She covers pharmacogenomics, personalized medicine, and companion diagnostics. Follow her GenomeWeb Twitter account at @PGxReporter.


Early Detection of Prostate Cancer: American Urological Association (AUA) Guideline

Author-Writer: Dror Nir, PhD

When reviewing the DETECTION OF PROSTATE CANCER section on the AUA website, the first thing that catches one's attention is the image below, clearly showing two "guys" exploring with interest what could be a CT or MRI image…

[Figure 1]

But if you bother to read the review underneath this image, EARLY DETECTION OF PROSTATE CANCER: AUA GUIDELINE, produced by an independent group commissioned by the AUA to conduct a systematic review and meta-analysis of the published literature on prostate cancer detection and screening (panel members: H. Ballentine Carter, Peter C. Albertsen, Michael J. Barry, Ruth Etzioni, Stephen J. Freedland, Kirsten Lynn Greene, Lars Holmberg, Philip Kantoff, Badrinath R. Konety, Mohammad Hassan Murad, David F. Penson and Anthony L. Zietman), you are bound to be left with a strong feeling that something is wrong!

The above-mentioned literature review was done using a rigorous approach:

"The AUA commissioned an independent group to conduct a systematic review and meta-analysis of the published literature on prostate cancer detection and screening. The protocol of the systematic review was developed a priori by the expert panel. The search strategy was developed and executed by reference librarians and methodologists and spanned across multiple databases including Ovid Medline In-Process & Other Non-Indexed Citations, Ovid MEDLINE, Ovid EMBASE, Ovid Cochrane Database of Systematic Reviews, Ovid Cochrane Central Register of Controlled Trials and Scopus. Controlled vocabulary supplemented with keywords was used to search for the relevant concepts of prostate cancer, screening and detection. The search focused on DRE, serum biomarkers (PSA, PSA Isoforms, PSA kinetics, free PSA, complexed PSA, proPSA, prostate health index, PSA velocity, PSA doubling time), urine biomarkers (PCA3, TMPRSS2:ERG fusion), imaging (TRUS, MRI, MRS, MR-TRUS fusion), genetics (SNPs), shared-decision making and prostate biopsy. The expert panel manually identified additional references that met the same search criteria."

While reading through the document, I was looking for the findings related to the role of imaging in prostate cancer screening; see the imaging-related search terms highlighted above. The only thing I found: "With the exception of prostate-specific antigen (PSA)-based prostate cancer screening, there was minimal evidence to assess the outcomes of interest for other tests."

This must mean that, notwithstanding the hundreds of man-years and tens of millions of dollars invested in studies aiming to assess the contribution of imaging to prostate cancer management, no convincing evidence to include imaging in the screening process was found by a group of top experts in a thorough and rigorously managed literature survey! And it actually led the AUA to declare that "Nothing new in the last 20 years"…

My interpretation of this: it says it all about the quality of the clinical studies conducted during these years aiming to develop an improved, imaging-based prostate cancer workflow. I hope that whoever reads this post will agree that this is a point worth considering!

For those who do not want to bother reading the whole AUA guidelines document, here is a peer-reviewed summary:

Early Detection of Prostate Cancer: AUA Guideline; Carter HB, Albertsen PC, Barry MJ, Etzioni R, Freedland SJ, Greene KL, Holmberg L, Kantoff P, Konety BR, Murad MH, Penson DF, Zietman AL; Journal of Urology (May 2013)

It says:

“A systematic review was conducted and summarized evidence derived from over 300 studies that addressed the predefined outcomes of interest (prostate cancer incidence/mortality, quality of life, diagnostic accuracy and harms of testing). In addition to the quality of evidence, the panel considered values and preferences expressed in a clinical setting (patient-physician dyad) rather than having a public health perspective. Guideline statements were organized by age group in years (age<40; 40 to 54; 55 to 69; ≥70).

RESULTS: With the exception of prostate-specific antigen (PSA)-based prostate cancer screening, there was minimal evidence to assess the outcomes of interest for other tests. The quality of evidence for the benefits of screening was moderate, and evidence for harm was high for men age 55 to 69 years. For men outside this age range, evidence was lacking for benefit, but the harms of screening, including over diagnosis and over treatment, remained. Modeled data suggested that a screening interval of two years or more may be preferred to reduce the harms of screening.

CONCLUSIONS: The Panel recommended shared decision-making for men age 55 to 69 years considering PSA-based screening, a target age group for whom benefits may outweigh harms. Outside this age range, PSA-based screening as a routine could not be recommended based on the available evidence. The entire guideline is available at www.AUAnet.org/education/guidelines/prostate-cancer-detection.cfm.”

 

Other research papers related to the management of Prostate cancer were published on this Scientific Web site:

From AUA2013: “Histoscanning”- aided template biopsies for patients with previous negative TRUS biopsies

Imaging-biomarkers is Imaging-based tissue characterization

On the road to improve prostate biopsy

State of the art in oncologic imaging of Prostate

Imaging agent to detect Prostate cancer-now a reality

Scientists use natural agents for prostate cancer bone metastasis treatment

Today’s fundamental challenge in Prostate cancer screening

ROLE OF VIRAL INFECTION IN PROSTATE CANCER

Men With Prostate Cancer More Likely to Die from Other Causes

New Prostate Cancer Screening Guidelines Face a Tough Sell, Study Suggests

New clinical results supports Imaging-guidance for targeted prostate biopsy

Prostate Cancer: Androgen-driven “Pathomechanism” in Early-onset Forms of the Disease

Prostate Cancer and Nanotecnology

Prostate Cancer Cells: Histone Deacetylase Inhibitors Induce Epithelial-to-Mesenchymal Transition

Prostate Cancers Plunged After USPSTF Guidance, Will It Happen Again?


Researchers Report on Mutational Patterns in Adenoid Cystic Carcinoma

Reporter: Aviva Lev-Ari, PhD, RN

May 20, 2013

NEW YORK (GenomeWeb News) – A Memorial Sloan-Kettering Cancer Center-led team has taken an exome- and genome-sequencing centered look at the mutations that may be found in the salivary gland cancer adenoid cystic carcinoma, or ACC.

As they reported in Nature Genetics online yesterday, the researchers did exome or genome sequencing on five dozen matched ACC tumor-normal pairs.

Their analysis unearthed relatively few glitches in each tumor’s protein-coding sequences. But the group found suspicious mutations to several main pathways, including some — such as the PI3-kinase, fibroblast growth factor, and insulin-like growth factor-containing pathway — that may make promising treatment targets.

“Our discovery of genomic alterations in targetable pathways suggests potential avenues for novel treatments to address a typically chemoresistant malignancy,” corresponding author Timothy Chan, an oncology researcher at Memorial Sloan-Kettering, and his colleagues wrote, noting that “[v]erified ACC cell lines are needed to further substantiate the clinical usefulness of the mutations identified here.”

A few genetic glitches have been linked to ACC in the past, the team noted, including a fusion between the transcription factor genes MYB and NFIB. The tumors are also notorious for having higher-than-usual expression of certain genes, such as the epidermal growth factors. Even so, there is still a ways to go in characterizing and treating the aggressive cancer.

To get a better sense of the nature and frequency of mutations involved in ACC, the researchers used Illumina’s HiSeq2000 to do exome sequencing on 55 matched ACC and normal samples, as well as whole-genome sequencing on five more tumor-normal pairs.

For the exome sequencing experiments, they used Agilent SureSelect kits to capture protein-coding portions of the genome prior to sequencing. In the subsequent analyses, meanwhile, the group relied on Life Technologies' SOLiD and Illumina's MiSeq platforms to verify apparent single nucleotide glitches and small insertions and deletions.

With 106-fold coverage of the exomes, on average, and 37-fold average coverage of the genomes, the group was able to track down a mean of almost two dozen somatic coding alterations per tumor.

When they used an algorithm called CHASM to distinguish between driver and passenger mutations in a set of 710 validated non-synonymous mutations, the researchers saw an over-representation of apparent driver mutations affecting genes known for processes ranging from chromatin regulation and DNA damage response to signaling and metabolism.

For instance, more than one-third of the tumors harbored mutations to chromatin regulators or chromatin state modifying genes such as SMARCA2, CREBBP, and KDM6A. Similarly, the researchers tracked down multiple mutations to genes coding for enzymes involved in adding or removing methylation and acetylation marks to histones.

Glitches to DNA damage response pathways also turned up in multiple tumors, they reported, as did mutations involving genes from the FGF-IGF-PI3K and other signaling pathways.
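For readers curious how driver-versus-passenger calling works mechanically: CHASM treats it as supervised classification (to my understanding, a Random Forest trained on curated driver mutations versus simulated passengers). The sketch below is a toy stand-in, not the CHASM code; the per-mutation features and labels are invented for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic training set: one row per mutation with made-up features
# (e.g., conservation score, functional-domain hit, population allele frequency).
rng = np.random.default_rng(0)
X_train = rng.random((500, 3))
# Synthetic labels: 1 = driver, 0 = passenger (invented labeling rule, demo only).
y_train = ((X_train[:, 0] > 0.7) & (X_train[:, 2] < 0.3)).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Score five new mutations: estimated probability that each is a driver.
new_mutations = rng.random((5, 3))
print(clf.predict_proba(new_mutations)[:, 1])
```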

Some 57 percent of the tumors tested contained the MYB-NFIB fusion that had been implicated in ACC previously. But the new analysis also turned up mutations affecting genes that interact with MYB and in the NFIB gene itself, pointing to widespread — and perhaps complex — involvement for the two transcription factors in ACC.

“Our data highlight MYB as an active oncogenic partner in fusion transcripts in ACC,” the study’s authors said, “but also suggest a separate role for NFIB, given the presence of mutations specific to this gene.”

Going forward, the group hopes to see further analyses on alterations uncovered in the current study, particularly those falling in pathways that might be prone to clinical interventions.

“[O]ur data provide insights into the genetic framework underlying ACC oncogenesis,” the researchers concluded, “and establish a foundation for identifying new therapeutic strategies.”

 

 


Cigarette smoke induces pro-inflammatory cytokine release by activation of NF-kappaB and posttranslational modifications of histone deacetylase as seen in macrophages

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

Abbreviations:

Chronic obstructive pulmonary disease (COPD)

Reactive oxygen species (ROS)

Hydroxyl radicals (·OH)

Glutathione (GSH)

Histone deacetylase (HDAC)

Tumour necrosis factor (TNF)

IκB kinase complex (IKK)

Interleukin (IL)

Cigarette smoking is the major etiologic factor in the pathogenesis of chronic obstructive pulmonary disease (COPD), which is characterized by an abnormal inflammatory response in the lungs to cigarette smoke with a progressive and irreversible airflow limitation. Chronic airway inflammation is an archetypal feature of COPD, and increased oxidative stress has been suggested to be responsible for triggering inflammatory events observed within the lungs of smokers and COPD patients. Although the precise mechanisms behind the pathogenesis of COPD are yet to be fully dissected, the current hypothesis suggests that cigarette smoke causes airway inflammation by activating macrophages, neutrophils, and T lymphocytes, which release proteases and reactive oxygen species (ROS) leading to cellular injury. As a consequence, chronic inflammatory processes are triggered that lead to small airway obstruction. An increased oxidant burden in smokers may be derived from the fact that cigarette smoke contains an estimated 10^17 oxidants/free radicals and 4,700 chemical compounds, including reactive aldehydes (carbonyls) and quinones, per puff. Many of these are relatively long-lived, such as tar-semiquinone, which can generate hydroxyl radicals (·OH) and H2O2 by the Fenton reaction. One consequence of this increased oxidative stress is activation of redox-sensitive transcription factors, such as NF-κB and activator protein-1 (AP-1), which are critical to transcription of proinflammatory genes (IL-8, IL-6, and TNF-α). However, the precise transcriptional mechanisms leading to enhanced gene expression in response to cigarette smoke are still not clearly understood.

Cigarette smoke-mediated oxidative stress induces an inflammatory response in the lungs by stimulating the release of proinflammatory cytokines. Chromatin remodeling due to histone acetylation and deacetylation is known to play an important role in transcriptional regulation of proinflammatory genes. The aim of this study was to investigate the molecular mechanism(s) of inflammatory responses caused by cigarette smoke extract (CSE) in the human macrophage-like cell line MonoMac6 and whether the treatment of these cells with the antioxidant glutathione (GSH) monoethyl ester, or modulation of the thioredoxin redox system, can attenuate cigarette smoke-mediated IL-8 release. Exposure of MonoMac6 cells to CSE (1% and 2.5%) increased IL-8 and TNF-alpha production vs. control at 24 h and was associated with significant depletion of GSH levels, with increased reactive oxygen species release, and with activation of NF-kappaB. Inhibition of IKK ablated the CSE-mediated IL-8 release, suggesting that this process is dependent on the NF-kappaB pathway. CSE also reduced histone deacetylase (HDAC) activity and HDAC1, HDAC2, and HDAC3 protein levels. This was associated with posttranslational modification of HDAC1, HDAC2, and HDAC3 protein by nitrotyrosine and aldehyde-adduct formation. Pretreatment of cells with GSH monoethyl ester, but not thioredoxin/thioredoxin reductase, reversed cigarette smoke-induced reduction in HDAC levels and significantly inhibited IL-8 release. Thus cigarette smoke-induced release of IL-8 is associated with activation of NF-kappaB via IKK and reduction in HDAC levels/activity in macrophages. Moreover, cigarette smoke-mediated proinflammatory events are regulated by the redox status of the cells.

Source References:

http://ajplung.physiology.org/content/291/1/L46.long

http://carcin.oxfordjournals.org/content/23/9/1511.abstract?ijkey=3ea9eff65782ab8153fac166b1d85336efb795b8&keytype2=tf_ipsecsha

http://www.ncbi.nlm.nih.gov/pubmed/101105?dopt=Abstract

http://www.sciencemag.org/content/293/5535/1653.abstract?ijkey=cde39cb6af6142beff66405c8aed965e998d48c1&keytype2=tf_ipsecsha

http://www.ncbi.nlm.nih.gov/pubmed/8319604?dopt=Abstract


Observations on Finding the Genetic Links in Common Disease: Whole Genomic Sequencing Studies

Author: Larry H Bernstein, MD, FCAP

In this article I will address the following article by Dr. SJ Williams.

Finding the Genetic Links in Common Disease:  Caveats of Whole Genome Sequencing Studies

 

In the November 23, 2012 issue of Science, Jocelyn Kaiser reports ("Genetic Influences on Disease Remain Hidden," News and Analysis) on the difficulties that many genomic studies are encountering in correlating genetic variants with high risk of type 2 diabetes and heart disease. At the American Society of Human Genetics 2012 annual meeting, DNA sequencing studies reported results on genetic variants and their links to high-risk type 2 diabetes and heart disease, part of an international effort to determine the genetic events contributing to complex, common diseases like diabetes.
The key point is that these disease links are challenged by the identification of genetic determinants that do not follow Mendelian genetics. There are many disease-associated gene variants, and they have not been deleted as a result of natural selection. In the case of type 2 diabetes, the genetic risk is as low as 26%.

Genome-wide association studies (GWAS) have identified single nucleotide polymorphisms (SNPs) associated with common diseases, but most of these variants individually confer only a 20-40% increase in risk. This is not sufficient for prediction and use in personalized treatment.
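In practice, many such weak signals are aggregated into a polygenic risk score: each risk allele count is weighted by its GWAS effect size and the products are summed. A minimal sketch, with invented effect sizes and genotypes:

```python
import numpy as np

# Hypothetical per-SNP effect sizes (log odds ratios) from GWAS summary statistics.
# Odds ratios of ~1.2-1.4 (log-OR ~0.18-0.34) are typical for common variants,
# which is why no single SNP predicts disease on its own.
effect_sizes = np.array([0.18, 0.26, 0.10, 0.33])  # log(OR) per risk allele
genotypes = np.array([1, 2, 0, 1])                 # risk-allele counts (0/1/2) for one person

prs = float(np.dot(effect_sizes, genotypes))       # aggregate many weak signals
print(f"Polygenic risk score (log-odds scale): {prs:.2f}")
```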

What is the implication of this? Researchers have gone to exome sequencing and to whole-genome sequencing for answers. SNPs can easily be assayed by microarray, even in a clinic setting. GWAS is difficult, has inherent complexity, and has had a high cost of use, but the cost of the technology has been dropping precipitously, and it is being redesigned for more rapid diagnosis and use in clinical research and personalized medicine. It appears that this is not yet a game changer.

My own thinking is that the answer doesn't fully lie in genome sequencing; it must turn on the very large weight of the genome's regulatory function, once "considered" dark matter. In the regulatory function you have a variety of interactions and adaptive changes to the proximate environment, and this is a key to the nascent study of metabolomics.

Three projects highlighted are:
1.  National Heart, Lung and Blood Institute Exome Sequencing Project (ESP)[2]: heart, lung, blood

  • A majority of variants linked to any disease are rare
  • Groups of variants in the same gene confirmed a link between APOC3 and risk for early-onset heart attack

2.  T2D-GENES Consortium
3.  GoT2D

  • SNP and PAX4 gene association for type 2 diabetes in East Asians
  • No new rare variants above 1.5% frequency for diabetes

http://www.phgfoundation.org/news/5164/

The unsupported conclusion from this has been

  1. the common disease-common variant hypothesis, which predicts that common disease-causing genetic variants exist in all human populations, but (common unexplained complexity?) each individual variant will necessarily only have a small effect on disease susceptibility (i.e. a low associated relative risk).
  2. the common disease, many rare variants hypothesis, which postulates that disease is caused by multiple strong-effect variants (an alternative complexity situation?). Dickson et al. (2010) PLoS Biol 8(1):e1000294

The reality is that it has been difficult to associate any variant with prediction of risk; alternative approaches appear to be intron sequencing and recovering the missing information in gene-gene interactions.

Jocelyn Kaiser’s Science article notes this in a brief interview with Harry Dietz of Johns Hopkins University where he suspects that “much of the missing heritability lies in gene-gene interactions”.

Oliver Harismendy, Kelly Frazer, and colleagues' recent publication in Genome Biology (http://genomebiology.com/content/11/11/R118) supports this notion. The authors used targeted resequencing of two endocannabinoid metabolic enzyme genes, fatty-acid-amide hydrolase (FAAH) and monoglyceride lipase (MGLL), in 147 normal-weight and 142 extremely obese patients.

[Image: The human genome, categorized by function of each gene product, given both as number of genes and as percentage of all genes. (Photo credit: Wikipedia)]


Synthetic Biology: On Advanced Genome Interpretation for Gene Variants and Pathways: What is the Genetic Base of Atherosclerosis and Loss of Arterial Elasticity with Aging

Curator: Aviva Lev-Ari, PhD, RN

Article ID #52: Synthetic Biology: On Advanced Genome Interpretation for Gene Variants and Pathways: What is the Genetic Base of Atherosclerosis and Loss of Arterial Elasticity with Aging. Published on 5/17/2013

WordCloud Image Produced by Adam Tubman

UPDATED on 7/12/2021

  • Abstract. Synthetic biology is a field of scientific research that applies engineering principles to living organisms and living systems.
  • Introduction. This article is intended as a perspective on the field of synthetic biology. …
  • Genetic Manipulation—Plasmids. …
  • Genetic Manipulations—Genome. …
  • An Early Example of Synthetic Biology. …

UPDATED on 11/6/2018

Which biological systems should be engineered?

To solve real-world problems using emerging abilities in synthetic biology, research must focus on a few ambitious goals, argues Dan Fletcher, Professor of bioengineering and biophysics, and chair of the Department of Bioengineering at the University of California, Berkeley, USA. He is also a Chan Zuckerberg Biohub Investigator.
Start Quote

Artificial blood cells. Blood transfusions are crucial in treatments for everything from transplant surgery and cardiovascular procedures to car accidents, pregnancy-related complications and childhood malaria (see go.nature.com/2ozbfwt). In the United States alone, 36,000 units of red blood cells and 7,000 units of platelets are needed every day (see go.nature.com/2ycr2wo).

But maintaining an adequate supply of blood from voluntary donors can be challenging, especially in low- and middle-income countries. To complicate matters, blood from donors must be checked extensively to prevent the spread of infectious diseases, and can be kept for only a limited time — 42 days or 5 days for platelets alone. What if blood cells could be assembled from purified or synthesized components on demand?

In principle, cell-like compartments could be made that have the oxygen-carrying capacity of red blood cells or the clotting ability of platelets. The compartments would need to be built with molecules on their surfaces to protect the compartments from the immune system, resembling those on a normal blood cell. Other surface molecules would be needed to detect signals and trigger a response.

In the case of artificial platelets, that signal might be the protein collagen, to which circulating platelets are exposed when a blood vessel ruptures [5]. Such compartments would also need to be able to release certain molecules, such as factor V or the von Willebrand clotting factor. This could happen by building in a rudimentary form of exocytosis, for example, whereby a membrane-bound sac containing the molecule would be released by fusing with the compartment's outer membrane.

It is already possible to encapsulate cytoplasmic components from living cells in membrane compartments [6,7]. Now a major challenge is developing ways to insert desired protein receptors into the lipid membrane [8], along with reconstituting receptor signalling.

Red blood cells and platelets are good candidates for the first functionally useful synthetic cellular system because they lack nuclei. Complex functions such as nuclear transport, protein synthesis and protein trafficking wouldn’t have to be replicated. If successful, we might look back with horror on the current practice of bleeding one person to treat another.

[Image: Human blood as viewed under a scanning electron microscope; micrograph of red blood cells, 3 T-lymphocytes and activated platelets. Credit: Dennis Kunkel Microscopy/SPL]

Designer immune cells. Immunotherapy is currently offering new hope for people with cancer by shaping how the immune system responds to tumours. Cancer cells often turn off the immune response that would otherwise destroy them. The use of therapeutic antibodies to stop this process has drastically increased survival rates for people with multiple cancers, including those of the skin, blood and lung [9]. Similarly successful is the technique of adoptive T-cell transfer. In this, a patient's T cells or those of a donor are engineered to express a receptor that targets a protein (antigen) on the surface of tumour cells, resulting in the T cells killing the cancerous cells (called CAR-T therapies) [10]. All of this has opened the door to cleverly rewiring the downstream signalling that results in the destruction of tumour cells by white blood cells [11].

What if researchers went a step further and tried to create synthetic cells capable of moving towards, binding to and eliminating tumour cells?

In principle, untethered from evolutionary pressures, such cells could be designed to accomplish all sorts of tasks — from killing specific tumour cells and pathogens to removing brain amyloid plaques or cholesterol deposits. If mass production of artificial immune cells were possible, it might even lessen the need to tailor treatments to individuals — cutting costs and increasing accessibility.

To ensure that healthy cells are not targeted for destruction, engineers would also need to design complex signal-processing systems and safeguards. The designer immune cells would need to be capable of detecting and moving towards a chemical signal or tumour. (Reconstituting the complex process of cell motility is itself a major challenge, from the delivery of energy-generating ATP molecules to the assembly of actin and myosin motors that enable movement.)

Researchers have already made cell-like compartments that can change shape [12], and have installed signalling circuits within them [13]. These could eventually be used to control movement and mediate responses to external signals.

Smart delivery vehicles. The relative ease of exposing cells in the lab to drugs, as well as introducing new proteins and engineering genomes, belies how hard it is to deliver molecules to specific locations inside living organisms. One of the biggest challenges in most therapies is getting molecules to the right place in the right cell at the right time.

Harnessing the natural proclivity of viruses to deliver DNA and RNA molecules into cells has been successful [14]. But virus size limits cargo size, and viruses don't necessarily infect the cell types researchers and clinicians are aiming at. Antibody-targeted synthetic vesicles have improved the delivery of drugs to some tumours. But getting the drug close to the tumour generally depends on the vesicles leaking from the patient's circulatory system, so results have been mixed.

Could ‘smart’ delivery vehicles containing therapeutic cargo be designed to sense where they are in the body and move the cargo to where it needs to go, such as across the blood–brain barrier?

This has long been a dream of those in drug delivery. The challenges are similar to those of constructing artificial blood and immune cells: encapsulating defined components in a membrane, incorporating receptors into that membrane, and designing signal-processing systems to control movement and trigger release of the vehicle’s contents.

The development of immune-cell 'backpacks' is an exciting step in the right direction. In this, particles containing therapeutic molecules are tethered to immune cells, exploiting the motility and targeting ability of the cells to carry the molecules to particular locations [15].

A minimal chassis for expression. In each of the previous examples, the engineered cell-like system could conceivably be built to function over hours or days, without the need for additional protein production and regulation through gene expression. For many other tasks, however, such as the continuous production of insulin in the body, it will be crucial to have the ability to express proteins, upregulate or downregulate certain genes, and carry out functions for longer periods.

Engineering a ‘minimal chassis’ that is capable of sustained gene expression and functional homeostasis would be an invaluable starting point for building synthetic cells that produce proteins, form tissues and remain viable for months to years. This would require detailed understanding and incorporation of metabolic pathways, trafficking systems and nuclear import and export — an admittedly tall order.

It is already possible to synthesize DNA in the lab, whether through chemically reacting bases or using biological enzymes or large-scale assembly in a cell [16]. But we do not yet know how to 'boot up' DNA and turn a synthetic genome into a functional system in the absence of a live cell.

Since the early 2000s, biologists have achieved gene expression in synthetic compartments loaded with cytoplasmic extract [17]. And genetic circuits of increasing complexity (in which the expression of one protein results in the production or degradation of another) are now the subject of extensive research. Still to be accomplished are: long-lived gene expression, basic protein trafficking and energy production reminiscent of live cells.

End Quote

SOURCE

https://www.nature.com/articles/d41586-018-07291-3?utm_source=briefing-dy&utm_medium=email&utm_campaign=briefing&utm_content=20181106

UPDATED on 10/14/2013

Genetics of Atherosclerotic Plaque in Patients with Chronic Coronary Artery Disease

372/3:15 Genetic influence on LpPLA2 activity at baseline as evaluated in the exome chip-enriched GWAS study among ~13600 patients with chronic coronary artery disease in the STABILITY (STabilisation of Atherosclerotic plaque By Initiation of darapLadIb TherapY) trial.

L. Warren (1), L. Li (1), D. Fraser (1), J. Aponte (1), A. Yeo (2), R. Davies (3), C. Macphee (3), L. Hegg (3), L. Tarka (3), C. Held (4), R. Stewart (5), L. Wallentin (4), H. White (5), M. Nelson (1), D. Waterworth (3).

1) GlaxoSmithKline, Research Triangle Park, NC;
2) GlaxoSmithKline, Stevenage, UK;
3) GlaxoSmithKline, Upper Merion, Pennsylvania, USA;
4) Uppsala Clinical Research Center, Department of Medical Sciences, Uppsala University, Uppsala, Sweden;
5) Green Lane Cardiovascular Service, Auckland City Hospital, Auckland, New Zealand.

STABILITY is an ongoing phase III cardiovascular outcomes study that compares the effects of darapladib enteric coated (EC) tablets, 160 mg versus placebo, when added to the standard of care, on the incidence of major adverse cardiovascular events (MACE) in subjects with chronic coronary heart disease (CHD). Blood samples for determination of the LpPLA2 activity level in plasma and for extraction of DNA were obtained at randomization. To identify genetic variants that may predict response to darapladib, we genotyped ~900K common and low frequency coding variations using Illumina OmniExpress GWAS plus exome chip in advance of study completion. Among the 15828 Intent-to-Treat recruited subjects, 13674 (86%) provided informed consent for genetic analysis. Our pharmacogenetic (PGx) analysis group is comprised of subjects from 39 countries on five continents, including 10139 Whites of European heritage, 1682 Asians of East Asian or Japanese heritage, 414 Asians of Central/South Asian heritage, 268 Blacks, 1027 Hispanics and 144 others. Here we report association analysis of baseline levels of LpPLA2 to support future PGx analysis of drug response post trial completion. Among the 911375 variants genotyped, 213540 (23%) were rare (MAF < 0.5%).

Our analyses were focused on the drug target, LpPLA2 enzyme activity measured at baseline. GWAS analysis of LpPLA2 activity adjusting for age, gender and top 20 principal component scores identified 58 variants surpassing the genome-wide significance threshold (5e-08).
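For context, the genome-wide analysis described here boils down to one regression per variant: the trait (baseline LpPLA2 activity) is regressed on allele count plus the covariates, and each per-SNP p-value is compared against the 5e-08 threshold. A sketch with simulated data (assuming numpy and statsmodels are available; nothing below is the STABILITY pipeline itself):

```python
import numpy as np
import statsmodels.api as sm

# Simulated data standing in for one variant's association test.
rng = np.random.default_rng(1)
n = 1000
geno = rng.binomial(2, 0.25, n)      # risk-allele counts (0/1/2) at one SNP
age = rng.normal(63, 9, n)
sex = rng.integers(0, 2, n)
pcs = rng.normal(0, 1, (n, 20))      # top 20 principal components (ancestry)
trait = 0.3 * geno + 0.01 * age + rng.normal(0, 1, n)  # LpPLA2-like trait

# Regress trait on genotype, adjusting for age, sex, and the 20 PCs.
X = sm.add_constant(np.column_stack([geno, age, sex, pcs]))
fit = sm.OLS(trait, X).fit()
print(f"SNP beta = {fit.params[1]:.3f}, p = {fit.pvalues[1]:.2e}")
# Run once per variant (~900K times here); p < 5e-8 is genome-wide significant.
```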

Genome-wide stepwise regression analyses identified multiple independent associations from PLA2G7, CELSR2, APOB, KIF6, and APOE, reflecting the dependency of LpPLA2 on LDL-cholesterol levels. Most notably, several low frequency and rare coding variants in PLA2G7 were identified to be strongly associated with LpPLA2 activity. They are V279F (MAF=1.0%, P= 1.7e-108), a previously known association, and four novel associations due to I1317N (MAF=0.05%, P=4.9e-8), Q287X (MAF=0.05%, P=1.6e-7), T278M (MAF=0.02%, P=7.6e-5) and L389S (MAF=0.04%, P=4.3e-4).

All these variants had enzyme activity-lowering effects, and each appeared to be specific to a certain ethnicity. Our comprehensive PGx analyses of baseline data have already provided great insight into common and rare coding genetic variants associated with the drug target and related traits, and this knowledge will be invaluable in facilitating future PGx investigation of darapladib response.

SOURCE

http://www.ashg.org/2013meeting/pdf/46025_Platform_bookmark%20for%20Web%20Final%20from%20AGS.pdf

Synthetic Biology: On Advanced Genome Interpretation for

  • Gene Variants and
  • Pathways,
  • Inversion Polymorphism,
  • Passenger Deletions,
  • De Novo Mutations,
  • Whole Genome Sequencing w/Linkage Analysis

What is the Genetic Base of Atherosclerosis and Loss of Arterial Elasticity with Aging?

In a recent publication by my colleague, Stephen J. Williams, Ph.D., on 5/15/2013, titled

Finding the Genetic Links in Common Disease:  Caveats of Whole Genome Sequencing Studies

http://pharmaceuticalintelligence.com/2013/05/15/finding-the-genetic-links-in-common-disease-caveats-of-whole-genome-sequencing-studies/

we learned that:

  • Groups of variants in the same gene confirmed a link between APOC3 and higher risk for early-onset heart attack
  • No other significant gene variants linked with heart disease

APOC3 – apolipoprotein C-III – Potential Relevance to the Human Aging Process

Main reason for selection: Entry selected based on indirect or inconclusive evidence linking the gene product to ageing in humans or in one or more model systems.

Description: APOC3 is involved in fat metabolism and may delay the catabolism of triglyceride-rich particles. Changes in APOC3 expression levels have been reported in aged mice [1754]. Results from mice suggest that FOXO1 may regulate the expression of APOC3 [1743]. Polymorphisms in the human APOC3 gene and promoter have been associated with lipoprotein profile, cardiovascular health, insulin (INS) sensitivity, and longevity [1756]. Therefore, APOC3 may impact on some age-related diseases, though its exact role in human ageing remains to be determined.

Cytogenetic information

Cytogenetic band: 11q23.1-q2
Location: 116,205,833 bp to 116,208,997 bp
Orientation: Plus strand

Protein information

Gene Ontology:
Process: GO:0006869 (lipid transport); GO:0016042 (lipid catabolic process); GO:0042157 (lipoprotein metabolic process)
Function: GO:0005319 (lipid transporter activity)
Cellular component: GO:0005576 (extracellular region); GO:0042627 (chylomicron)

Protein interactions and network

No interactions in records.


Homologues in model organisms

Bos taurus: APOC3_BOVI
Mus musculus: Apoc3
Pan troglodytes: APOC3

In other databases

AnAge: This species has an entry in AnAge.

Selected references

  • [2125] Pollin et al. (2008) A null mutation in human APOC3 confers a favorable plasma lipid profile and apparent cardioprotection. PubMed
  • [1756] Atzmon et al. (2006) Lipoprotein genotype and conserved pathway for exceptional longevity in humans. PubMed
  • [1755] Araki and Goto (2004) Dietary restriction in aged mice can partially restore impaired metabolism of apolipoprotein A-IV and C-III. PubMed
  • [1743] Altomonte et al. (2004) Foxo1 mediates insulin action on apoC-III and triglyceride metabolism. PubMed
  • [1754] Araki et al. (2004) Impaired lipid metabolism in aged mice as revealed by fasting-induced expression of apolipoprotein mRNAs in the liver and changes in serum lipids. PubMed
  • [1753] Panza et al. (2004) Vascular genetic factors and human longevity. PubMed
  • [1752] Anisimov et al. (2001) Age-associated accumulation of the apolipoprotein C-III gene T-455C polymorphism C

http://genomics.senescence.info/genes/entry.php?hgnc=APOC3

Apolipoprotein C-III is a protein component of very low density lipoprotein (VLDL). APOC3 inhibits lipoprotein lipase and hepatic lipase; it is thought to inhibit hepatic uptake [1] of triglyceride-rich particles. The APOA1, APOC3 and APOA4 genes are closely linked in both rat and human genomes. The A-I and A-IV genes are transcribed from the same strand, while the A-I and C-III genes are convergently transcribed. An increase in apoC-III levels induces the development of hypertriglyceridemia.

Clinical significance

Two novel susceptibility haplotypes (specifically, P2-S2-X1 and P1-S2-X1) have been discovered in the ApoAI-CIII-AIV gene cluster on chromosome 11q23; these confer approximately threefold higher risk of coronary heart disease in normal subjects [2] as well as in non-insulin-dependent diabetes mellitus [3]. Apo-CIII delays the catabolism of triglyceride-rich particles. Elevations of Apo-CIII found in genetic variation studies may predispose patients to non-alcoholic fatty liver disease.

  1. Mendivil CO, Zheng C, Furtado J, Lel J, Sacks FM (2009). "Metabolism of VLDL and LDL containing apolipoprotein C-III and not other small apolipoproteins". Arteriosclerosis, Thrombosis and Vascular Biology 30 (2): 239–45. doi:10.1161/ATVBAHA.109.197830. PMC 2818784. PMID 19910636.
  2. Singh PP, Singh M, Kaur TP, Grewal SS (2007). "A novel haplotype in ApoAI-CIII-AIV gene region is detrimental to Northwest Indians with coronary heart disease". Int J Cardiol 130 (3): e93–5. doi:10.1016/j.ijcard.2007.07.029. PMID 17825930.
  3. Singh PP, Singh M, Gaur S, Grewal SS (2007). "The ApoAI-CIII-AIV gene cluster and its relation to lipid levels in type 2 diabetes mellitus and coronary heart disease: determination of a novel susceptible haplotype". Diab Vasc Dis Res 4 (2): 124–29. doi:10.3132/dvdr.2007.030. PMID 17654446.

In 2013 we reported on the discovery of:

Genetic Associations with Valvular Calcification and Aortic Stenosis

N Engl J Med 2013; 368:503-512

February 7, 2013. DOI: 10.1056/NEJMoa1109034

METHODS

We determined genomewide associations with the presence of aortic-valve calcification (among 6942 participants) and mitral annular calcification (among 3795 participants), as detected by computed tomographic (CT) scanning; the study population for this analysis included persons of white European ancestry from three cohorts participating in the Cohorts for Heart and Aging Research in Genomic Epidemiology consortium (discovery population). Findings were replicated in independent cohorts of persons with either CT-detected valvular calcification or clinical aortic stenosis.

CONCLUSIONS

Genetic variation in the LPA locus, mediated by Lp(a) levels, is associated with aortic-valve calcification across multiple ethnic groups and with incident clinical aortic stenosis. (Funded by the National Heart, Lung, and Blood Institute and others.)

SOURCE:

N Engl J Med 2013; 368:503-512

Related Research by Author & Curator of this article:

Artherogenesis: Predictor of CVD – the Smaller and Denser LDL Particles

Cardiovascular Biomarkers

Genetics of Conduction Disease: Atrioventricular (AV) Conduction Disease (block): Gene Mutations – Transcription, Excitability, and Energy Homeostasis

Genomics & Genetics of Cardiovascular Disease Diagnoses: A Literature Survey of AHA’s Circulation Cardiovascular Genetics, 3/2010 – 3/2013

Hypertriglyceridemia concurrent Hyperlipidemia: Vertical Density Gradient Ultracentrifugation a Better Test to Prevent Undertreatment of High-Risk Cardiac Patients

Hypertension and Vascular Compliance: 2013 Thought Frontier – An Arterial Elasticity Focus

Personalized Cardiovascular Genetic Medicine at Partners HealthCare and Harvard Medical School

Genomics Orientations for Individualized Medicine Volume One

Market Readiness Pulse for Advanced Genome Interpretation and Individualized Medicine

We present below the MARKET LEADER in interpretation of genomics computation results in the emerging new era of medicine, Genomic Medicine: Knome.com and its home-grown software powerhouse.

A second case study in advanced genome interpretation and individualized medicine, presented following the market leader, is the Genome-Phenome Analyzer by SimulConsult ("A Simultaneous Consult On Your Patient's Diagnosis"), Chestnut Hill, MA.

 

2012: The Year When Genomic Medicine Started Paying Off

Luke Timmerman

An excerpt of an interesting article mentioning Knome [emphasis ours]…

Remember a couple of years ago when people commemorated the 10-year anniversary of the first draft human genome sequencing? The storyline then, in 2010, was that we all went off to genome camp and only came home with a lousy T-shirt. Society, we were told, invested huge scientific resources in deciphering the code of life, and there wasn't much of a payoff in the form of customized, personalized medicine.

That was an easy conclusion to reach then, when personalized medicine advocates could only point to a couple of effective targeted cancer drugs—Genentech’s Herceptin and Novartis’ Gleevec—and a couple of diagnostics. But that’s changing. My inbox the past week has been full of analyst reports from medical meetings, which mostly alerted readers to mere “incremental” advances with a number of genomic-based medicines and diagnostics. But that’s a matter of focusing on the trees, not the forest. This past year, we witnessed some really impressive progress from the early days of “clinical genomics” or “medical genomics.” The investment in deep understanding of genomics and biology is starting to look visionary.

The movement toward clinical genomics gathered steam back in June at the American Society of Clinical Oncology annual meeting. One of the hidden gem stories from ASCO was about little companies like Cambridge, MA-based Foundation Medicine and Cambridge, MA-based Knome that started seeing a surprising surge in demand from physicians for their services to help turn genomic data into medical information. The New York Times wrote a great story a month later about a young genomics researcher at Washington University in St. Louis who got cancer, had access to incredibly rich information about his tumors, and—after some wrestling with his insurance company—ended up getting a targeted drug nobody would have thought to prescribe without that information. And last month, I checked back on Stanford University researcher Mike Snyder, who made headlines this year using a smorgasbord of “omics” tools to correctly diagnose himself early with Type 2 diabetes, and then monitor his progress back into a healthy state … read the entire article

http://www.knome.com/knome-blog/2012-the-year-when-genomic-medicine-started-paying-off/

Knome and Real Time Genomics Ink Deal to Integrate and Sell the RTG Variant Platform on knoSYS™100 System

Partnership to bring accurate and fast genome analysis to translational researchers

CAMBRIDGE, MA –  May 6, 2013 – Knome Inc., the genome interpretation company, and Real Time Genomics, Inc., the genome analytics company, today announced that the Real Time Genomics (RTG) Variant platform will be integrated into every shipment of the knoSYS™100 interpretation system. The agreement enables customers to easily purchase the RTG analytics engine as an upgrade to the system. The product will combine two world-class commercial platforms to deliver end-to-end genome analytics and interpretation with superior accuracy and speed. Financial terms of the agreement were not disclosed.

“In the past year demand for genome interpretation has surged as translational researchers and clinicians adopt sequencing for human disease discovery and diagnosis,” said Wolfgang Daum, CEO of Knome. “Concomitant with that demand is the need for accurate and easy-to-use industrial grade analysis that meets expectations of clinical accuracy. The RTG platform is both incredibly fast and truly differentiating to customers doing family studies, and we are excited to add such a powerful platform to the knoSYS ecosystem.”

The partnership simplifies the purchasing process by allowing knoSYS customers to purchase the RTG platform directly from Knome sales representatives.

“The Knome system is a perfect complementary channel to further expand our commercial effort to bring the RTG platform to market,” said Steve Lombardi, CEO of Real Time Genomics. “Knome has built a recognizable brand around human clinical genome interpretation, and by delivering the RTG platform within their system, both companies are simplifying genomics to help customers understand human disease and guide clinical actions.”

About Knome

Knome Inc. (www.knome.com) is a leading provider of human genome interpretation systems and services. We help clients in two dozen countries identify the genetic basis of disease, tumor growth, and drug response. Designed to accelerate and industrialize the process of interpreting whole genomes, Knome’s big data technologies are helping to pave the healthcare industry’s transition to molecular-based, precision medicine.

About Real Time Genomics

Real Time Genomics (www.realtimegenomics.com) has a passion for genomics.  The company offers software tools and applications for the extraction of unique value from genomes.  Its competency lies in applying the combination of its patented core technology and deep computational expertise in algorithms to solve problems in next generation genomic analysis.  Real Time Genomics is a private San Francisco based company backed by investment from Catamount Ventures, Lightspeed Venture Partners, and GeneValue Ltd.

http://www.knome.com/knome-blog/knome-and-real-time-genomics-ink-deal-to-integrate-and-sell-the-rtg-variant-platform-on-knosys100-system/

Direct-to-Consumer Genomics Reinvents Itself

Malorye Allison

An excerpt of an interesting article mentioning Knome [emphasis ours]:

Cambridge, Massachusetts–based Knome made one of the splashiest entries into the field, but has now turned entirely to contract research. The company began providing DTC whole-genome sequencing to independently wealthy individuals at a time when the price was still sky high. The company’s first client, Dan Stoicescu, was a former biotech entrepreneur who paid $350,000 to have his genome sequenced in 2008 so he could review it “like a stock portfolio” as new genetic discoveries unfolded [4]. About a year later, the company was auctioning off a genome, with such frills as a dinner with renowned Harvard genomics researcher George Church, at a starting price of $68,000; at the time, a full-genome sequence retailed at $99,000, a sign that the cost of genome sequencing was already plummeting steadily.

Now, the company’s model is very different. “We stopped working with the ‘wealthy healthy’ in 2010,” says Jonas Lee, Knome’s chief marketing officer. “The model changed as sequencing changed.” The new emphasis, he says, is now on using Knome’s technology and technical expertise for genome interpretation. Knome’s customers are researchers, pharmaceutical companies and medical institutions, such as Johns Hopkins University School of Medicine in Baltimore, which in January signed the company up to interpret 1,000 genomes for a study of genetic variants underlying asthma in African American and African Caribbean populations.

Knome is trying to advance the clinical use of genomics, working with groups that “want to be prepared for what’s ahead,” Lee says. “We work with at least 50 academic institutions and 20 pharmaceutical companies looking at variants and drug response.” Cancer and idiopathic genetic diseases are the first sweet spots for genomic sequencing, he says. Although cancer genomics has been hot for a while, a recent string of discoveries of Mendelian diseases [5] made by whole-genome sequencing has lit up that field, too. Lee is also confident, however, that “chronic diseases like heart disease are right behind those.” The company also provides software tools. The price for its KnomeDiscovery sequencing and analysis service starts at about $12,000 per sample … read the entire article here.

http://www.knome.com/knome-blog/direct-to-consumer-genomics-reinvents-itself/

Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves

VIEW VIDEO

http://www.colbertnation.com/the-colbert-report-videos/419824/october-04-2012/george-church

 

Knome Software Makes Sense of the Genome

The startup’s software takes raw genome data and creates a usable report for doctors.

DNA decoder: Knome’s software can tease out medically relevant changes in DNA that could disrupt individual gene function or even a whole molecular pathway, as is highlighted here—certain mutations in the BRCA2 gene, which affects the function of many other genes, can be associated with an increased risk of breast cancer.

A genome analysis company called Knome is introducing software that could help doctors and other medical professionals identify genetic variations within a patient’s genome that are linked to diseases or drug response. This new product, available for now only to select medical institutions, is a patient-focused spin on Knome’s existing products aimed at researchers and pharmaceutical companies. The Knome software turns a patient’s raw genome sequence into a medically relevant report on disease risks and drug metabolism. The software can be run within a clinic’s own network—rather than in the cloud, as is the case with some genome-interpretation services—which keeps the information private.

Advances in DNA sequencing technology have sharply reduced the amount of time and money required to identify all three billion base pairs of DNA in a person’s genome. But the use of genomic information for medical decisions is still limited because the process creates such large volumes of data. Less than five years ago, Knome, based in Cambridge, Massachusetts, made headlines by offering what seemed then like a low price—$350,000—for a genome sequencing and profiling package. The same service now costs just a few thousand dollars.

Today, genome profiling has two main uses in the clinic. It’s part of the search for the cause of rare genetic diseases, and it generates tumor-specific profiles to help doctors discover the weaknesses of a patient’s particular cancer. But within a few years, the technique could move beyond rare diseases and cancer. The information gleaned from a patient’s genome could explain the origin of specific disease, could help save costs by allowing doctors to pretreat future diseases, or could improve the effectiveness and safety of medications by allowing doctors to prescribe drugs that are tuned to a person’s ability to metabolize drugs.

But teasing out the relevant genetic information from a patient’s genome is not trivial. To find the particular genetic variant that causes a specific disease or drug response can require expertise from many disciplines—from genetics to statistics to software engineering—and a lot of time. In any given patient’s genome, millions of places in that genome will differ from the standard of reference. The vast majority of these differences, or variants, will be unrelated to a patient’s medical condition, but determining that can take between 20 minutes and two hours for each variant, says Heidi Rehm, a clinical geneticist who directs the Laboratory for Molecular Medicine at Partners Healthcare Center for Personalized Genetic Medicine in Boston, and who will soon serve on the clinical advisory board of Knome. “If you scale that to … millions of variants, it becomes impossible.”

A software package like Knome’s can help whittle down the list based on factors such as disease type, the pattern of inheritance in a family, and the effects of given mutations on genes. Other companies have introduced Web- or cloud-based services to perform such an analysis, but Knome’s software suite can operate within a hospital’s network, which is critically important for privacy-concerned hospitals.
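To make the whittling concrete, here is a minimal sketch of this style of variant filtering in Python. The field names, the frequency threshold, and the simple dominant-inheritance rule are illustrative assumptions for this article, not Knome’s actual schema or logic.

    # Toy filter: keep rare, protein-damaging variants that segregate with disease
    # under a dominant model. All field names here are invented for illustration.
    DAMAGING = {"nonsense", "frameshift", "splice-site", "missense"}

    def candidate_variants(variants, affected, unaffected, max_freq=0.001):
        keep = []
        for v in variants:
            if v["functional_class"] not in DAMAGING:
                continue                      # likely benign change
            if v["population_freq"] > max_freq:
                continue                      # too common to explain a rare disease
            carriers = set(v["carriers"])
            if not set(affected) <= carriers:
                continue                      # dominant model: every affected person carries it
            if carriers & set(unaffected):
                continue                      # ...and no unaffected relative does
            keep.append(v)
        return keep

Each pass of such a filter typically removes orders of magnitude more variants than a human reviewer could ever triage by hand, which is the point Rehm makes above.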

The greatest benefit of the widespread adoption of genomics in the clinic will come from the “clinical intelligence” doctors gain from networks of patient data, says Martin Tolar, CEO of Knome. Information about the association between certain genetic variants and disease or drug response could be anonymized—that is, no specific patient could be tied to the data—and shared among large hospital networks. Knome’s software will make it easy to share that kind of information, says Tolar.

“In the future, you could be in the situation where your physician will be able to pull the most appropriate information for your specific case that actually leads to recommendations about drugs and so forth,” he says.

http://www.technologyreview.com/news/428179/knome-software-makes-sense-of-the-genome/

An End-to-end Human Genome Interpretation System

The knoSYS™100 seamlessly integrates an interpretation application (knoSOFT) and informatics engine (kGAP) with a high-performance grid computer. Designed for whole genome, exome, and targeted NGS data, the knoSYS™100 helps labs quickly go “from reads to reports.”

Advanced Interpretation and Reporting Software

The knoSYS™100 ships with knoSOFT, an advanced application for managing sequence data through the informatics pipeline, filtering variants, running gene panels, classifying/interpreting variants, and reporting results.

knoSOFT has powerful and scalable multi-sample comparison features–capable of performing family studies, tumor/normal studies, and large case-control comparisons of hundreds of whole genomes.

Multiple simultaneous users (10) are supported, including technicians running sequence data through informatics pipeline, developers creating next-generation gene panels, geneticists researching causal variants, and production staff processing gene panels.

http://www.knome.com/knosys-100-overview/

Publications

View our collection of journal articles and genome research papers written by Knome employees, Knome board members, and other industry experts.

Publications by Knome employees and board members

Figure: The Top Two Axes of Variation of the Combined Dataset (MS, BD, PD, and IBD)

21 Aug 2012

Discerning the Ancestry of European Americans in Genetic Association Studies

Co-authored by Dr. David Goldstein, Clinical and Scientific board member for Knome

Author summary: Genetic association studies analyze both phenotypes (such as disease status) and genotypes (at sites of DNA variation) of a given set of individuals. … more

Figure: Pedigree and genetic risk prediction workflow

20 Aug 2012

Phased Whole-Genome Genetic Risk in a Family Quartet Using a Major Allele Reference Sequence

Co-authored by Dr. George Church and Dr. Heidi Rehm, Clinical and Scientific Board Members for Knome

Author summary: An individual’s genetic profile plays an important role in determining risk for disease and response to medical therapy. The development of technologies that facilitate rapid whole-genome sequencing will provide unprecedented power in the estimation of disease risk. Here we develop methods to characterize genetic determinants of disease risk and … more

20 Aug 2012

A Genome-Wide Investigation of SNPs and CNVs in Schizophrenia

Co-authored by Dr. David Goldstein, Clinical and Scientific board member for Knome

Author summary: Schizophrenia is a highly heritable disease. While the drugs commonly used to treat schizophrenia offer important relief from some symptoms, other symptoms are not well treated, and the drugs cause serious adverse effects in many individuals. This has fueled intense interest over the years in identifying genetic contributors to … more


20 Aug 2012

Whole-Genome Sequencing of a Single Proband Together with Linkage Analysis Identifies a Mendelian Disease Gene

Co-authored by Dr. David Goldstein, Clinical and Scientific board member for Knome

Author summary: Metachondromatosis (MC) is an autosomal dominant condition characterized by exostoses (osteochondromas), commonly of the hands and feet, and enchondromas of long bone metaphyses and iliac crests. MC exostoses may regress or even resolve over time, and short stature … more

19 Aug 2012

Exploring Concordance and Discordance for Return of Incidental Findings from Clinical Sequencing

Co-authored by Dr. Heidi Rehm, Clinical and Scientific board member for Knome

Introduction: There is an increasing consensus that whole-exome sequencing (WES) and whole-genome sequencing (WGS) will continue to improve in accuracy and decline in price and that the use of these technologies will eventually become an integral part of clinical medicine.1–7 … more

Publications by industry experts and thought-leaders

22 Aug 2012

Rate of De Novo Mutations and the Importance of Father’s Age to Disease Risk

Augustine Kong, Michael L. Frigge, Gisli Masson, Soren Besenbacher, Patrick Sulem, Gisli Magnusson, Sigurjon A. Gudjonsson, Asgeir Sigurdsson, Aslaug Jonasdottir, Adalbjorg Jonasdottir, Wendy S. W. Wong, Gunnar Sigurdsson, G. Bragi Walters, Stacy Steinberg, Hannes Helgason, Gudmar Thorleifsson, Daniel F. Gudbjartsson, Agnar Helgason, Olafur Th. Magnusson, Unnur Thorsteinsdottir, & Kari Stefansson

Abstract: Mutations generate sequence diversity and provide a substrate for selection. The rate of de novo mutations is therefore of major importance to evolution. Here we conduct a study of genome-wide mutation rates by sequencing the entire genomes of 78 … more

15 Aug 2012

Passenger Deletions Generate Therapeutic Vulnerabilities in Cancer

Florian L. Muller, Simona Colla, Elisa Aquilanti, Veronica E. Manzo, Giannicola Genovese, Jaclyn Lee, Daniel Eisenson, Rujuta Narurkar, Pingna Deng, Luigi Nezi, Michelle A. Lee, Baoli Hu, Jian Hu, Ergun Sahin, Derrick Ong, Eliot Fletcher-Sananikone, Dennis Ho, Lawrence Kwong, Cameron Brennan, Y. Alan Wang, Lynda Chin, & Ronald A. DePinho

Abstract: Inactivation of tumour-suppressor genes by homozygous deletion is a prototypic event in the cancer genome, yet such deletions often encompass neighbouring genes. We propose that homozygous deletions in such passenger genes can expose cancer-specific therapeutic vulnerabilities when the collaterally … more

1 Jul 2012

Structural Diversity and African Origin of the 17q21.31 Inversion Polymorphism

Karyn Meltz Steinberg, Francesca Antonacci, Peter H Sudmant, Jeffrey M Kidd, Catarina D Campbell, Laura Vives, Maika Malig, Laura Scheinfeldt, William Beggs, Muntaser Ibrahim, Godfrey Lema, Thomas B Nyambo, Sabah A Omar, Jean-Marie Bodo, Alain Froment, Michael P Donnelly, Kenneth K Kidd, Sarah A Tishkoff, & Evan E Eichler

Abstract: The 17q21.31 inversion polymorphism exists either as direct (H1) or inverted (H2) haplotypes with differential predispositions to disease and selection. We investigated its genetic diversity in 2,700 individuals, with an emphasis on African populations. We characterize eight structural haplotypes … more

http://www.knome.com/publications/

Knome’s Systems & Software

Technical specifications

Connections and communications

Two networks: 40-Gigabit Infiniband QDR via a Mellanox Switch for storage traffic and HP ProCurve switch for network traffic

High performance computing cluster

Four nodes, each with two 8-core/16-thread, 2.4 GHz, 64-bit Intel® Xeon® E5-2660 processors with 20MB cache, 128GB of DDR3 ECC 1600 memory, and 2x2TB SATA drives (7,200 RPM)

Metadata server

2x2TB 3.5″ drives with 6 Gb/s SATA (RAID 1) and 2x300GB SSDs (RAID 1)

Object storage server

Lustre array: two arrays of 12 4TB 3.5″ drives each, on 6 Gb/s SATA channels; each OSS is powered by a 6-core, 64-bit Intel Xeon processor running at 2.0 GHz with 32GB RAM.

knoSYS_server

96TB total, 64TB usable storage (redundancy for failure tolerance); expandable to 384TB total.

Data sources

Reference genome GRCh37 (HG19)

dbSNP, v137

Condel (SIFT and PolyPhen-2)

HPO

OMIM

Exome Variant Server, with allelisms and allele frequencies

1000 Genomes, with allelisms and allele frequencies

Human Gene Mutation db (HGMD)

Phastcons 46, mammalian conservation

PhyloP

Input/output formats

Input formats: kGAP accepts Illumina FASTQ and VCF 4.1 files as inputs

Output formats: annotated VCF files
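To illustrate the input side, the sketch below parses the fixed fields of one VCF 4.1 data line in Python. It follows the public VCF specification; it is not Knome’s parser, and the example record is invented.

    # Minimal VCF 4.1 data-line parser (fixed fields only, per the public spec).
    def parse_vcf_line(line):
        chrom, pos, vid, ref, alt, qual, flt, info = line.rstrip("\n").split("\t")[:8]
        return {
            "chrom": chrom, "pos": int(pos), "id": vid,
            "ref": ref, "alt": alt.split(","),        # ALT may list several alleles
            "qual": None if qual == "." else float(qual),
            "filter": flt,
            "info": dict(kv.split("=", 1) if "=" in kv else (kv, True)
                         for kv in info.split(";")),
        }

    record = parse_vcf_line("1\t69511\trs75062661\tA\tG\t99\tPASS\tAF=0.89;DP=132")
    print(record["pos"], record["info"]["AF"])        # 69511 0.89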

Electrical and operating requirements

Line voltage: 110–120V AC or 200–240V AC (single phase)

Frequency: 50Hz to 60Hz

Current: 30A; RoHS compliant

Connection: NEMA L5-30

Operating temperature: 50° to 95° F

UPS included

Maximum operating altitude: 10,000 feet

Power consumption: 2,800 VA (peak)

Size and weight

Height 49.2 Inches (1250 mm)
Width 30.7 Inches (780 mm)
Depth 47.6 Inches (1210 mm)
Weight 394 lbs (179 kg)

Noise generation and heat dissipation

Enclosure provides 28dB of acoustic noise reduction; system suitable for placing in working lab environment

7,200 W of active heat dissipation

Included in the package

knoSYS™100 hardware

Knome software: knoSOFT, kGAP

Operating system: Linux (CentOS 6.3)

http://www.knome.com/knosys-100-specifications/

Our research services group uses a set of advanced software tools designed for whole genome and exome interpretation. These tools are also available to our clients through our knomeBASE informatics service. In addition to various scripts, libraries, and conversion utilities, these tools include knomeVARIANTS and knomePATHWAYS.

knomeVARIANTS


knomeVARIANTS is a query kit that lets users search for candidate causal variants in studied genomes. It includes a query interface, scripting libraries, and data conversion utilities.

Users select cases and controls, input a putative inheritance mode, and add sensible filter criteria (variant functional class, rarity/novelty, location in prior candidate regions, etc.) to automatically generate a sorted short-list of leading candidates. The application includes a SQL query interface to let users query the database as they wish, including by complex or novel sets of criteria.
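As an example of the kind of ad hoc query the SQL interface makes possible, the sketch below runs a rarity-plus-segregation search against a local SQLite database. The table and column names are hypothetical stand-ins, not the actual knomeVARIANTS schema.

    # Hypothetical case/control variant query against a local SQLite database.
    import sqlite3

    conn = sqlite3.connect("variants.db")
    rows = conn.execute("""
        SELECT gene, chrom, pos, ref, alt, functional_class
        FROM variants
        WHERE functional_class IN ('nonsense', 'frameshift', 'splice-site')
          AND population_freq < 0.001        -- rarity/novelty filter
          AND case_carrier_count >= 2        -- seen in at least two cases
          AND control_carrier_count = 0      -- absent from all controls
        ORDER BY gene, pos
    """).fetchall()
    for gene, chrom, pos, ref, alt, fclass in rows:
        print(gene, f"{chrom}:{pos}", f"{ref}>{alt}", fclass)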

In addition to querying, the application lets users export subsets of the database for viewing in MS Excel. Subsets can be output that target common research foci, including the following:

  • Sites implicated in phenotypes, regardless of subject genotypes
  • Sites where at least one studied genome mismatches the reference
  • Sites where a particular set of one or more genomes, but no other genomes, show a novel variant
  • Sites in phenotype-implicated genes
  • Sites with nonsense, frameshift, splice-site, or read-through variants, relative to reference
  • Sites where some but not all subject genomes were called

knomePATHWAYS


knomePATHWAYS is a visualization tool that overlays variants found in each sample genome onto known gene interaction networks in order to help spot functional interactions between variants in distinct genes, and pathways enriched for variants in cases versus controls, differential drug responder groups, etc.

knomePATHWAYS integrates reference data from many sources, including GO, HPRD, and MsigDB (which includes KEGG and Reactome data). The application is particularly helpful in addressing higher-order questions, such as finding candidate genes and protein pathways, that are not readily addressed from tabular annotation data alone.

http://www.knome.com/interpretation-toolkit/

Genome-Phenome Analyzer by SimulConsult

A Simultaneous Consult On Your Patient’s Diagnosis

Clinicians can get a “simultaneous consult” about their patient’s diagnosis using SimulConsult’s diagnostic decision support software.

Using the free “phenome” version, medical professionals can enter patient findings into the software and get an initial differential diagnosis and suggestions about other useful findings, including tests. The database used by the software has >4,000 diagnoses and is most complete for genetics and neurology. It includes all genes in GeneTests and all diseases in GeneReviews. The information about diseases is entered by clinicians, referenced to the literature and peer-reviewed by experts. The software takes into account pertinent negatives, temporal information, and cost of tests, information ignored in other diagnostic approaches. It transforms medical diagnosis by lowering costs, reducing errors and eliminating the medical diagnostic odysseys experienced by far too many patients and their families.

http://www.simulconsult.com/index.html

Using the “genome-phenome analyzer” version, a lab can combine a genome variant table with the phenotypic data entered by the referring clinician, thereby using the full power of genome + phenome to arrive at a diagnosis in seconds.  An innovative measure of pertinence of genes focuses attention on the genes accounting for the clinical picture, even if more than one gene is involved.  The referring clinician can use the results in the free phenome version of the software, for example adding information from confirmatory tests or adding new findings that develop over time.  For details, click here.

http://www.simulconsult.com/genome/index.html
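The general genome + phenome idea can be sketched in a few lines: rank genes by requiring support from both the clinical findings and the variant table. This toy ranking, with invented gene names and scores, is our own illustration; SimulConsult’s actual pertinence measure is more sophisticated.

    # Toy gene ranking combining phenotype match with variant evidence.
    def rank_genes(phenotype_scores, variant_scores):
        # phenotype_scores: {gene: 0..1} from the clinician's findings
        # variant_scores:   {gene: 0..1} from the lab's genome variant table
        ranked = [(pscore * variant_scores.get(gene, 0.0), gene)
                  for gene, pscore in phenotype_scores.items()]
        return sorted(ranked, reverse=True)   # genes need support from both sides

    print(rank_genes({"PTPN11": 0.8, "MECP2": 0.4}, {"PTPN11": 0.9}))
    # [(0.72, 'PTPN11'), (0.0, 'MECP2')]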

Michael M. Segal MD, PhD, Founder, Chairman and Chief Scientist. Dr. Segal did his undergraduate work at Harvard and his MD and PhD at Columbia, where his thesis project outlined rules for the types of chemical synapses that will form in a nervous system. After his residency in pediatric neurology at Columbia, he moved to Harvard Medical School, where he joined the faculty and developed the microisland system for studying small numbers of brain neurons in culture. Using this system, he developed a simplified model of epilepsy, work that won him national and international young investigator awards, and set the stage for later work on the molecular mechanism of attention deficit disorder. Dr. Segal has a long history of interest in computers, and patterned the SimulConsult software after the way that experienced clinicians actually think about diagnosis. He is on the Electronic Communication Committee of the Child Neurology Society and the Scientific Program Committee of the American Medical Informatics Association.

http://www.simulconsult.com/company/management.html

Read Full Post »

CT Angiography (CCTA) Reduced Medical Resource Utilization compared to Standard Care reported in JACC

Reporter: Aviva Lev-Ari, PhD, RN

Updated on 10/24/2022

CCTA Effective in Pre-procedural Planning of Myocardial Revascularization Interventions

Image showing a co-registration of invasive coronary angiography (A), coronary CTA and straight MPR (panel B and C) with CTA cross sections (panel D), corresponding OCT cross sections and longitudinal OCT view (E).

September 13, 2022 — The latest expert consensus document from the Society of Cardiovascular Computed Tomography (SCCT), co-published with EuroIntervention, describes Coronary CT Angiography (CCTA) as an effective tool for interventional cardiologists to prepare and optimize the coronary procedure.

“Pre-procedural Planning of Coronary Revascularization by Cardiac Computed Tomography,” published in the Journal of Cardiovascular Computed Tomography (JCCT), states that CCTA combined with fractional flow reserve (CT-FFR) or stress CT myocardial perfusion imaging (CT-MPI) can provide a comprehensive anatomical and physiological roadmap for coronary revascularization.

The expert consensus document explains that cardiac CT may emerge in the field of interventional cardiology as no longer “a mere diagnostic tool,” as it was when first introduced into clinical practice more than 15 years ago.

According to the writing group, led by Daniele Andreini, MD, PhD, FSCCT of Centro Cardiologico Monzino in Milan, Italy, the potential value of CCTA to plan and guide interventional procedures lies in the wide information it can provide, including its accuracy for plaque and calcium characterization.

Andreini and his co-authors explain that, with its 3-dimensional nature and physiological assessment, CCTA is the only non-invasive imaging modality to assess Syntax Score and Syntax Score II, which enable the Heart Team to select the mode of revascularization (PCI or CABG) for patients with complex disease based on long-term mortality.

Additionally, CCTA may help in identifying anatomical characteristics of chronic total occlusions (CTO) that are associated with increased complexity of CTO percutaneous coronary intervention (PCI).

Before PCI, CCTA has the potential to be used to overcome some limitations of conventional invasive coronary angiography (ICA), including vessel foreshortening and difficulties in selecting optimal projections, with particular importance in bifurcation and ostial lesions.

For more information: www.scct.org

CT Scanner Delivers Less Radiation

Faster, more sensitive scans and better image processing may reduce the risk of x-ray-related cancers.


A new CT scanner exposes patients to less radiation while providing doctors with clearer images to help with diagnoses, according to researchers at the National Institutes of Health.

“CT” stands for Computerized Tomography, which involves combining lots of x-ray images taken from different angles into a three-dimensional view of what’s inside the body. The technology can be especially useful for diagnoses in emergency situations, and the number of CT scans in recent years has increased dramatically, says Marcus Chen, a cardiovascular imager at the National Heart, Lung and Blood Institute, in Bethesda, Maryland.  But the increase in the use of CT scans raises concerns about the amount of radiation to which patients are exposed, says Chen.

The risk of developing cancer from the radiation delivered by one CT scan is low, but the large number of scans performed each year—more than 70 million—translates to a significant risk. Researchers at the National Cancer Institute estimated that the 72 million CT scans performed in the U.S. in 2007 could lead to 29,000 new cancers. On average, the organ studied in a CT scan of an adult receives around 15 millisieverts of radiation, compared with roughly 3.1 millisieverts of radiation exposure from natural sources each year.
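A back-of-the-envelope check on the figures just quoted:

    # Using only the numbers cited above (NCI estimate and typical doses).
    scans_2007 = 72_000_000        # CT scans performed in the U.S. in 2007
    projected_cancers = 29_000     # cancers the NCI projected from those scans
    print(f"~{projected_cancers / scans_2007:.3%} excess cancer risk per scan")   # ~0.040%

    organ_dose_mSv = 15            # typical organ dose from one adult CT scan
    background_mSv_per_year = 3.1  # average natural background exposure
    print(f"one scan ~ {organ_dose_mSv / background_mSv_per_year:.1f} years of background")  # ~4.8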

This concern has led researchers to seek ways to reduce the amount of radiation exposure a patient receives in a scan. They are working to improve both hardware, to make the scans go faster and need less repetition, and software, to process the x-ray data better (see “Clear CT Scans with Less Radiation”).

The new CT scanning system, from Toshiba Medical, combines several improvements to reduce radiation exposure. The overall body of a CT scanner is shaped like a large ring. An x-ray tube and a detector spin separately in the ring, opposite one another, and a patient lies in the center.  X-rays travel through the patient as they are delivered by the tube and captured by the detectors. The new Toshiba machine has five times as many detectors as most machines, which means that more of an organ can be captured at a time, decreasing the number of passes of the scanner required.

The x-ray components in the new system also spin faster—it takes only 275 milliseconds for them to complete a rotation, instead of 350 milliseconds—which means a patient gets irradiated for less time. In cases where doctors are looking at a moving organ such as the heart, the faster spinning also reduces the number of times a doctor may need to try to get a good image. “It’s like having faster film in your camera,” says Chen. Changes to the way the system generates x-rays and computes the images also mean patients spend less time getting hit with radiation.

Chen and colleagues at the National Heart Lung and Blood Institute used the Toshiba system to examine 107 adult patients of different ages and sizes for plaque buildup and cardiovascular problems. Patient size matters because more x-rays are required to image a larger person. “A lot of imaging centers will use one setting for all patients,” says Chen. “You get beautiful image quality on everybody, but the downside is that some patients get more radiation than they probably should.” In his study, the system takes a quick preliminary scan that uses low-dose x-rays to figure out how big a patient is and how much radiation will be needed for the diagnostic image.

Most patients who got a scan in the new Toshiba machine received 0.93 millisieverts of radiation, and almost every patient received less than 4 millisieverts. Radiation exposure was decreased by as much as 95 percent relative to other CT scanners currently in use.

http://www.linkedin.com/profile/view?id=87597&trk=tab_pro

Before reading the content below, the reader is advised to review Alternative #3 in the following article, published on 3/10/2013 (including the NEJM editorial by Dr. Redberg, UCSF, cited therein), as background on this important topic, which has the potential to change best practice and standard of care in the ER/ED.

Acute Chest Pain/ER Admission: Three Emerging Alternatives to Angiography and PCI – Corus CAD, hs cTn, CCTA

CCTA for Chest Pain Cuts Costs, Admissions

By Eric Barnes, AuntMinnie.com staff writer

May 14, 2013 — One of the largest studies yet comparing medical resource use and outcomes among chest pain patients found that coronary CT angiography (CCTA) reduced medical resource utilization compared to standard care, generating fewer hospital admissions and shorter emergency room stays, researchers reported in the Journal of the American College of Cardiology.

The retrospective study compared matched cohorts of nearly 1,000 patients presenting with chest pain before and after implementation of routine CCTA evaluation. The study team from Stony Brook, NY, and two other institutions found that patients receiving the standard workup for chest pain — which is to say, mostly observation — were admitted to the hospital almost five times as frequently as patients receiving CT. The standard workup patients also had significantly longer stays when admitted.

The rates of invasive angiography without revascularization and recidivism were also much higher for patients receiving standard care (JACC, May 14, 2013).

“I think the take-home message is that CT done correctly by experts with the resources to do it correctly on a routine basis is not only safe and feasible, but reduces healthcare resource utilization,” said lead author Dr. Michael Poon, from Stony Brook Medical Center, in an interview with AuntMinnie.com.

More than $10 billion in costs

Caring for chest pain is an expensive proposition in the U.S., costing upward of $10 billion a year for some 6 million emergency department (ED) visits. To reduce the problem of overcrowded emergency rooms, some hospitals have implemented chest pain evaluation units, but the care isn’t comprehensive or necessarily all that helpful, Poon said.

“It has been a problem and a major dilemma for emergency rooms because for most patients, it’s a false alarm,” he said. “I would say nine out of 10 are false alarms, but how to pick out that one is very tricky and costly. So what most hospitals tend to do is a one-size-fits-all policy where everybody gets blood tests and an electrocardiogram, and they keep patients in the ED for an extended period of time. So if you come in Friday, you may stay until Monday.”

Coronary CTA has been shown to be safe and cost-effective for acute chest pain evaluation in several smaller studies and in three smaller multicenter trials, but those studies have been limited by a lack of CT availability outside of weekdays and office hours, while EDs must operate 24/7, Poon said.

“All of those studies were done in a randomized, controlled fashion and in an artificial environment,” where each patient was randomized to either a stress test or CT during weekday office hours, Poon said. “But in real life, there is no such thing; it cannot be done.”

More often, chest pain patients get a couple of tests and several hours of observation before they are sent home.

Poon and colleagues from Stony Brook, William Beaumont Hospital, and the University of Toronto wanted to do a “real-world” observational study to show that CT remained cost-effective and efficient for triaging chest pain patients.

The study sought to compare the overall impact of CT on clinical outcomes and efficacy, when comparing CCTA and the hospital’s standard evaluation for the triage of chest pain patients, with CCTA available 12 hours a day, seven days a week.

From a total of 9,308 patients with a chest pain diagnosis upon admission, the study used a matched sample of 894 patients without a history of coronary artery disease and without positive troponin or ischemic changes on an electrocardiogram.

Patients undergoing CT were scanned on a 64-detector-row scanner (LightSpeed VCT, GE Healthcare) following administration of iodinated contrast and metoprolol as a beta-blocker for those with heart rates faster than 65 beats per minute (bpm).

Those with a body mass index (BMI) less than 30 were scanned at 100 kV, while those with a BMI between 30 and 50 were scanned at 120 kV. Retrospective gating was reserved for patients whose heart rates remained above 65 bpm. Obstructive stenosis was defined as 50% or greater lumen narrowing.
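For clarity, the acquisition rules just described can be restated as a small decision helper. This is a plain restatement of the study protocol as summarized above, not clinical software; the prospective-gating default for slower heart rates is our inference from the text.

    # Encoding of the study's CCTA acquisition rules as described above.
    def ccta_settings(bmi, heart_rate_bpm):
        settings = {}
        if heart_rate_bpm > 65:
            settings["beta_blocker"] = "metoprolol"    # given before scanning
        if bmi < 30:
            settings["tube_voltage_kV"] = 100
        elif bmi <= 50:
            settings["tube_voltage_kV"] = 120
        else:
            settings["tube_voltage_kV"] = None         # outside the studied range
        # Retrospective gating was reserved for rates that stayed above 65 bpm;
        # we assume prospective gating otherwise.
        settings["gating"] = "retrospective" if heart_rate_bpm > 65 else "prospective"
        return settings

    print(ccta_settings(bmi=28, heart_rate_bpm=72))
    # {'beta_blocker': 'metoprolol', 'tube_voltage_kV': 100, 'gating': 'retrospective'}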

CT choice faster, more efficient

The results showed a lower overall admission rate of 14% for CCTA, compared with 40% for the standard of care (p < 0.001). In fact, patients undergoing standard evaluation were 5.5 times more likely to be admitted (p < 0.001) than CCTA patients.

The length of stay in the ED was 1.6 times longer for standard care (p < 0.001) than for CCTA. For patients undergoing CCTA, the median radiation dose was 5.88 mSv.

“We also showed that the recidivism rate is higher for standard of care, meaning that they come back within one month with recurrent chest pain,” Poon said. The odds of returning to the ED within 30 days were five times greater for patients in the standard evaluation group (odds ratio, 5.06; p = 0.022).

“In the era of Obamacare, this is a penalty to the hospital; you don’t want the patient returning within one month with the same diagnosis,” he said. When that happens, “you’re not only not getting paid, you have to pay a penalty. It’s a double whammy. We also show that downstream invasive coronary angiography is significantly less in the CCTA arm.”

More invasive angiography

Patients receiving standard care were seven times more likely to undergo invasive coronary angiography without revascularization (odds ratio, 7.17; p ≤ 0.001), while neither patient group was significantly more likely to undergo revascularization.

“Many physicians use [catheterization] as a way of getting patients in and out of the hospital,” Poon said. However, the cost is more than $10,000 per procedure.

The high rate of angiography without revascularization in the standard care group was not seen in the Rule Out Myocardial Infarction/Ischemia Using Computer Assisted Tomography (ROMICAT) I and II trials, where all patients in the standard care group underwent stress testing before angiography was considered, he said.

Poon credited the ROMICAT trials’ routine use of stress tests with diminishing CT’s relative advantage in resource use. “In the real world, that is not available,” he said. The present study, in which only about 20% of the standard care patients underwent stress tests, is more realistic.

Finally, Poon and colleagues showed no difference in rates of myocardial infarction between CT and the standard of care within the first 30 days of follow up. However, that is changing as patients are followed for longer time periods, he noted.

“We see a trend starting to diverge in our next report, which follows [patients] for six months,” he said. “You see a lot more acute myocardial infarction in the standard care arm, and we’re going to extend it for a year.”

The authors concluded that using CCTA to rule out acute coronary syndromes in low-risk chest pain patients is likely to improve doctors’ ability to triage patients with the common presentation of chest pain. The result of this approach appears to be fewer hospital admissions, shorter stays, less recidivism, less invasive angiography, and better patient outcomes.

In any case, Poon said, the study method is permanent at Stony Brook University, where the standard of care now incorporates CCTA.

“We didn’t stop doing it after the study,” he said. “If you look at some of the randomized, controlled studies, they actually went back to the standard of care.” They had to because those kinds of protocols are only practical with a grant.

Related Reading

CORE 320 study evaluates CCTA and SPECT for CAD diagnosis, March 25, 2013

Study affirms CCTA’s value to rule out myocardial infarction, March 19, 2013

CCTA predicts heart attack in people without risk factors, February 19, 2013

Study: Use CCTA 1st for lower-risk chest pain patients, February 4, 2013

2010 CCTA appropriateness criteria yield mixed results, January 31, 2013
Copyright © 2013 AuntMinnie.com

http://www.auntminnie.com/index.aspx?sec=sup&sub=cto&pag=dis&ItemID=103419&wf=5447

Other related articles on this Open Access Online Scientific Journal include the following:

Economic Toll of Heart Failure in the US: Forecasting the Impact of Heart Failure in the United States – A Policy Statement From the American Heart Association

Aviva Lev-Ari, PhD, RN, 4/25/2013

http://pharmaceuticalintelligence.com/2013/04/25/economic-toll-of-heart-failure-in-the-us-forecasting-the-impact-of-heart-failure-in-the-united-states-a-policy-statement-from-the-american-heart-association/

Diagnosis of Cardiovascular Disease, Treatment and Prevention: Current & Predicted Cost of Care and the Promise of Individualized Medicine Using Clinical Decision Support Systems

Larry H Bernstein, MD, FACP and Aviva Lev-Ari, PhD, RN, Curator, 5/15/2013

http://pharmaceuticalintelligence.com/2013/05/15/diagnosis-of-cardiovascular-disease-treatment-and-prevention-current-predicted-cost-of-care-and-the-promise-of-individualized-medicine-using-clinical-decision-support-systems-2/

Read Full Post »

Treatment, Prevention and Cost of Cardiovascular Disease: Current & Predicted Cost of Care and the Potential for Improved Individualized Care Using Clinical Decision Support Systems

Author and Content Consultant to e-SERIES A: Cardiovascular Diseases: Justin Pearlman, MD, PhD, FACC

Author and Curator: Larry H Bernstein, MD, FACP

and

Curator: Aviva Lev-Ari, PhD, RN

This article has the following FIVE parts:

1. Forecasting the Impact of Heart Failure in the United States: A Policy Statement From the American Heart Association

2. A Case Study from the GENETIC CONNECTIONS — In The Family: Heart Disease. Seeking Clues to Heart Disease in DNA of an Unlucky Family

3. Arterial Stiffness and Cardiovascular Events: The Framingham Heart Study

4. Arterial Elasticity in Quest for a Drug Stabilizer: Isolated Systolic Hypertension caused by Arterial Stiffening Ineffectively Treated by Vasodilatation Antihypertensives

5. Clinical Decision Support Systems: Realtime Clinical Expert Support — Biomarkers of Cardiovascular Disease: Molecular Basis and Practical Considerations

 

1. Forecasting the Impact of Heart Failure in the United States: A Policy Statement From the American Heart Association

PA Heidenreich, NM Albert, LA Allen, DA Bluemke, J Butler, et al. Circulation: Heart Failure 2013;6.
Print ISSN: 1941-3289, Online ISSN: 1941-3297.

Heart failure (HF) poses a major burden on productivity and on national healthcare expenditures:

  • among older Americans, more are hospitalized for HF than for any other medical condition.

As the population ages, the prevalence of HF is expected to increase.

The purpose of this report is to

  • provide an in-depth look at how the changing demographics in the United States will impact the prevalence and cost of care for HF for different US populations.

 Projections of HF Prevalence

Prevalence estimates for HF were determined for each age group and projected forward to 2030, as shown below.

 Projections of the US Population With HF From 2010 to 2030 for Different Age Groups

Year | All ages  | 18–44 y | 45–64 y   | 65–79 y   | ≥80 y
2012 | 5,813,262 | 396,578 | 1,907,141 | 2,192,233 | 1,317,310
2015 | 6,190,606 | 402,926 | 1,949,669 | 2,483,853 | 1,354,158
2020 | 6,859,623 | 417,600 | 1,974,585 | 3,004,002 | 1,463,436
2025 | 7,644,674 | 434,635 | 1,969,852 | 3,526,347 | 1,713,840
2030 | 8,489,428 | 450,275 | 2,000,896 | 3,857,729 | 2,180,528
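A quick computation on the table above shows where the projected growth is concentrated:

    # Growth from 2012 to 2030, computed from the table above.
    hf_2012 = {"all ages": 5_813_262, "65-79 y": 2_192_233, ">=80 y": 1_317_310}
    hf_2030 = {"all ages": 8_489_428, "65-79 y": 3_857_729, ">=80 y": 2_180_528}
    for group in hf_2012:
        print(f"{group}: +{hf_2030[group] / hf_2012[group] - 1:.0%}")
    # all ages: +46%, 65-79 y: +76%, >=80 y: +66%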

Future Costs of HF

The future costs of HF were estimated by methods developed by the American Heart Association to:

  • project the prevalence and costs of HF from 2012 to 2030
  • factor out the costs attributable to comorbid conditions.

The model does this by assuming that

(1) HF prevalence percentages will remain constant by age, sex, and race/ethnicity;

(2) the costs of technological innovation will rise at the current rate.

HF prevalence and costs (direct and indirect) were projected using the following steps:

1. HF prevalence and average cost per person were estimated by age group (18–44, 45–64, 65–79, ≥80 years), gender (male, female), and race/ethnicity (white non-Hispanic, white Hispanic, black, other) [32]. The initial HF cost per person and rate of increase in cost were determined for each demographic group, as a percentage of total healthcare expenditures.

2. Inflation is separately addressed by correcting dollar values from Medical Expenditure Panel Survey (MEPS) to 2010 dollars.

3. Nursing home spending triggered an adjustment. The estimates project the incremental cost of care attributable to heart failure (HF).

4. Total HF population prevalence and costs were projected by multiplying the US Census–projected population of each demographic group by the percentage prevalence and average cost

5. The total work loss and home productivity loss costs were generated by multiplying per capita work days lost attributable to HF by (1) prevalence of HF, (2) the probability of employment given HF (for work loss costs only), (3) mean per capita daily earnings, and (4) US Census population projection counts.
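Steps 1 through 5 amount to a multiply-and-sum over demographic cells. The sketch below renders that core logic in Python, with one invented demographic cell for illustration; the real model’s inputs come from MEPS and US Census projections, and its inflation and nursing-home adjustments are omitted here.

    # Core projection arithmetic for one demographic cell (invented numbers).
    def direct_cost(cell, population):
        # Steps 1 & 4: constant prevalence % and per-person cost within a cell
        return population * cell["prevalence"] * cell["cost_per_person_2010usd"]

    def work_loss_cost(cell, population):
        # Step 5: days lost x prevalence x P(employed | HF) x daily earnings x population
        return (cell["work_days_lost"] * cell["prevalence"]
                * cell["p_employed_given_hf"] * cell["daily_earnings_2010usd"]
                * population)

    cell = {"prevalence": 0.08, "cost_per_person_2010usd": 9_000,
            "work_days_lost": 4.0, "p_employed_given_hf": 0.25,
            "daily_earnings_2010usd": 180}          # hypothetical 65-79 male cell
    pop_2030 = 12_000_000                           # hypothetical census projection
    total = direct_cost(cell, pop_2030) + work_loss_cost(cell, pop_2030)
    print(f"${total / 1e9:.1f}B projected for this cell in 2030")   # ~$8.8B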

Projections of Indirect Costs

Indirect costs of lost productivity from morbidity and premature mortality were estimated as detailed below.
Morbidity costs represent the value of lost earnings attributable to HF and include loss of work among

  • currently employed individuals and those too sick to work, as well as
  • home productivity loss, which is the value of household services performed by household members who do not receive pay for the services.

Total Costs Attributable to Heart Failure (HF)

Projections of Total Cost of Care ($ Billions) for HF for Different Age Groups of the US Population

Year / Component         | All  | 18–44 | 45–64 | 65–79 | ≥80
2012 Medical             | 20.9 | 0.33  | 3.67  | 8.46  | 8.42
2012 Indirect: Morbidity | 5.42 | 0.52  | 1.92  | 2.05  | 0.93
2012 Indirect: Mortality | 4.35 | 0.66  | 2.53  | 0.98  | 0.18
2012 Total               | 30.7 | 1.51  | 8.12  | 11.5  | 9.53
2020 Medical             | 31.1 | 0.43  | 4.58  | 14.2  | 11.8
2020 Indirect: Morbidity | 7.09 | 0.66  | 2.20  | 3.11  | 1.12
2020 Indirect: Mortality | 5.39 | 0.79  | 2.89  | 1.49  | 0.22
2020 Total               | 43.6 | 1.88  | 9.67  | 18.8  | 13.2
2030 Medical             | 53.1 | 0.59  | 5.86  | 23.3  | 23.4
2030 Indirect: Morbidity | 9.80 | 0.91  | 2.54  | 4.48  | 1.87
2030 Indirect: Mortality | 6.84 | 0.98  | 3.32  | 2.16  | 0.37
2030 Total               | 69.7 | 2.48  | 11.7  | 29.9  | 25.6

Excludes HF care costs that have been attributed to comorbid conditions.

Cost of Care

Total medical costs are projected to increase from $20.9 billion in 2012 to $53.1 billion in 2030, a 2.5-fold increase. Assuming continuation of current hospitalization practices, the majority (80%) of these costs stem from

  • hospitalization.

Most of the increase is in direct costs. Indirect costs are expected to rise as well, but at a lower rate, from $9.8 billion to $16.6 billion, an increase of 69%.

Direct costs (the cost of medical care) are thus expected to increase at a faster rate than the indirect costs of premature death and lost productivity.

The total cost of HF (direct and indirect costs) is expected to increase in 2030 from the current $30.7 billion to at least $69.8 billion. This will amount to $244 for every US adult in 2030.

Thus the burden of HF for the US healthcare system will grow substantially during the next 18 years if current trends continue.

It is estimated that

  • by 2030, the prevalence of HF in the United States will increase by 25%, to 3.0%.
  • >8 million people in the US (1 in every 33) will have HF by 2030.
  • the projected total direct medical costs of HF between 2012 and 2030 (in 2010 dollars) will increase from $21 billion to $53 billion.
  • Total costs, including indirect costs for HF, are estimated to increase from $31 billion in 2012 to $70 billion in 2030.
  • If one assumes all costs of cardiac care for HF patients are attributable to HF
    (no cost attribution to comorbid conditions), the 2030 projected cost estimates of treating patients with HF will be 3-fold higher ($160 billion in direct costs).

Projections can be lowered if action is taken to reduce the health and economic burden of HF. Strategies, plans, and implementation to prevent HF and improve the efficiency of care are needed.

Causes and Stages of HF

If the projections for accelerating HF costs are to be avoided, attention to the different causes of HF and their risk factors is warranted.
HF is a clinical syndrome that results from a variety of cardiac disorders

  1. idiopathic dilated cardiomyopathy
  2. cardiac valvular disease
  3. pericarditis or pericardial effusion
  4. ischemic heart disease
  5. primary or secondary hypertension
  6. renovascular disease
  7. advanced liver disease with decreased venous return
  8. pulmonary hypertension
  9. prolonged hypoalbuminemia with generalized interstitial edema
  10. diabetic nephropathy
  11. heart muscle infiltration disease such as primary or secondary amyloidosis
  12. myocarditis
  13. rhythm disorders
  14. congenital diseases
  15. accidental trauma (war, chest trauma)
  16. toxicities (methamphetamine, cocaine, heavy metals, chemotherapy)

HF generally causes symptoms:

  • shortness of breath
  • fatigue
  • swelling (edema)
  • inability to lay flat (orthopnea, paroxysmal nocturnal dyspnea)
  • possibly cough, wheezing

In the Western world the predominant causes of HF are:

  • coronary artery disease
  • valvular disease
  • hypertension
  • viral, alcohol, methamphetamine or other drug  toxicity cardiomyopathy
  • stress (catechol toxicity, takotsubo “broken heart” cardiomyopathy)
  • atrial fibrillation/rapid heart rates
  • thyroid disease

In 2001, the American College of Cardiology and AHA practice guidelines for chronic HF promoted a classification system that encompasses 4 stages of HF.

  • Stage A: Patients at high risk for developing HF in the future but no functional or structural heart disorder.
  • Stage B: a structural heart disorder but no symptoms.
  • Stage C: previous or current symptoms of heart failure, manageable with medical treatment.
  • Stage D: advanced disease requiring hospital-based support, a heart transplant or palliative care.

Stages A and B are considered precursors to the clinical HF and are meant

  1. to alert healthcare providers to known risk factors for HF and
  2. the available therapies aimed at mitigating disease progression.

Stage A patients have risk factors for HF such as hypertension, atherosclerotic heart disease, and/or diabetes mellitus.

Patients with stage B are asymptomatic patients who have developed structural heart disease from a variety of potential insults to the heart muscle, such as myocardial infarction or valvular heart disease.

Stages C and D represent the symptomatic phases of HF, with stage C manageable and stage D failing medical management, resulting in marked symptoms at rest or with minimal activity despite optimal medical therapy.

Therapeutic interventions include:

  • dietary salt restriction and diuretics
  • medications known to prolong survival (beta blockers, ACE inhibitors, aldosterone inhibitors)
  • implantable devices such as pacemakers and defibrillators
  • stoppage of tobacco, toxic drugs, excess alcohol

Classic demographic risk factors for the development of HF include

  • older age, male gender, ethnicity, and low socioeconomic status.
  • comorbid disease states contribute to the development of HF
    • Ischemic heart disease
    • Hypertension

Diabetes mellitus, insulin resistance, and obesity are also linked to HF development,

  • with diabetes mellitus increasing the risk of HF by ≈2-fold in men and up to 5-fold in women.

Smoking remains the single largest preventable cause of disease and premature death in the United States.

Translation of Scientific Evidence into Clinical Practice

In multiple studies, failures to apply evidence-based management strategies are blamed for avoidable hospitalizations and/or deaths from HF.

Improved implementation of guidelines can delay, mitigate or prevent the onset of HF, and improve survival. Performance improvement programs have facilitated the implementation of evidence-based therapies in both hospital and ambulatory care settings.

Care transition programs by hospitals have become more widespread

  • in an effort to reduce avoidable readmissions.

The interventions used by these programs include

  • initiating discharge planning early in the course of hospital care,
  • actively involving patients and families or caregivers in the plan of care,
  • providing new processes and systems that ensure patient understanding of the plan of care before discharge from the hospital, and
  • improving quality of care by continually monitoring adherence to national evidence-based guidelines with appropriate adaptations for individual differences in needs and responses.

In multiple studies, adherence to the HF plan of care was associated with reduced all-cause mortality as well as reduced HF hospitalization.

It is anticipated that care transition programs may increase appropriate admissions while decreasing inappropriate admissions.

This would have a potentially beneficial impact on the 30-day all-cause readmission rate, which has become

  • a focus of public reporting in pay for performance.

More than a quarter of Medicare spending occurs in the last year of life, and

  • the costs of care during the last 6 months for a patient with HF have been increasing (11% from 2000 to 2007).

Improving end-of-life care cost effectiveness for patients with stage D HF will require ongoing

  • improved prediction of outcomes
  • integration of multiple aspects of care
  • educated examination of alternatives and priorities
  • improved decision-making
  • unbiased allocation of resources and coverage for this process rather than unbalanced coverage favoring catastrophic care

Palliative care, including formal hospice care, is increasingly advocated for patients with advanced HF.
Offering palliative care to patients with HF may lead to

  • more conservative (and less expensive) treatment
  • consistent with many patients’ goals for care

The use of hospice services is growing among the HF population,

  • HF now the second most common reason for entering hospice
  • but hospice declaration may trigger automatic restrictions on care that can be an impediment to electing hospice

A recent study of patients in hospice care found that

  • patients with HF were more likely than patients with cancer to use hospice services longer than 6 months or to be discharged from hospice care alive.

Highlights:

1. Increasing incidence and costs of care for heart failure projected from 2012 to 2030

2. Direct costs rising at greater rate than indirect costs

3. American Heart Association has defined 4 stages of HF, the last 2 of which are advanced

4. Stages C & D are clinically overt and contribute to rehospitalization

5. Stage D accounts for a significant use of end-of-life hospice care

6. There are evidence-based guidelines for the provision of coordinated care that are not widely applied at present

Basic questions raised:

1. If stages A & B are under the radar, then what measures can best trigger the use of evidence-based guidelines for care?
2. Why are evidence-based guidelines commonly not deployed?

  • Flaws in the “evidence” due to bias, design errors, limited ability to extrapolate to the patients it should address
  • Delays in education, convincing of caretakers, and deployment
  • Inadequate resources
  • Financial or other disincentives

The arguments for introducing coordinated care and for evidence-based guidelines are strong.

Arguments AGAINST slavish imposition of evidence-based medicine include genetic individuality (what is best on average is not necessarily best for each genetically and behaviorally distinct individual). Strict adherence to evidence-based guidelines also stifles innovative explorations. Nonetheless, deviations from evidence-based plans should be cautious, well-documented, and well-informed, not due to mal-aligned incentives, ignorance, carelessness or error.

The question of when and how to intervene most cost effectively is unanswered. If some patients are salt-sensitive as a contribution to the prevalence of hypertension and heart failure, should EVERYONE be salt restricted or should there be a more concerted effort to define who is salt sensitive? What if it proved more cost-effective to restrict salt intake for everyone, even though many might be fine with high sodium intake, and some might even benefit from or require high sodium intake? Is it reasonable to impose costs, hurdles, even possible harm on some as a cheaper way to achieve “greater good”?
These issues are highly relevant to the proposed emphasis on holistic solutions.

2. A Case Study from the GENETIC CONNECTIONS — In The Family: Heart Disease. Seeking Clues to Heart Disease in DNA of an Unlucky Family

By GINA KOLATA   2013.05.13  New York Times

Scientists are studying the genetic makeup of the Del Sontro family for

  • telltale mutations or aberrations in the DNA.

Robin Ashwood, one of Mr. Del Sontro’s sisters, found out she had extensive heart disease even though her electrocardiogram was normal. Six of her seven siblings also have heart disease, despite not having any of the traditional risk factors. Then, after a sister, just 47 years old, found out she had advanced heart disease, Mr. Del Sontro, then 43, went to a cardiologist. An X-ray of his arteries revealed the truth. Like his grandfather, his mother, his four brothers, and his two sisters, he had heart disease.

Now he and his extended family have joined an extraordinary federal research project that is using genetic sequencing to find factors that increase the risk of heart disease beyond the usual suspects — high cholesterol, high blood pressure, smoking, and diabetes. “We don’t know yet how many pathways there are to heart disease,” said Dr. Leslie Biesecker, who directs the study Mr. Del Sontro joined. “That’s the power of genetics. To try and dissect that.”

“I had bought the dream: if you just do the right things and eat the right things, you will be O.K.,” said Mr. Del Sontro, whose cholesterol and blood pressure are reassuringly low.

3. Arterial Stiffness and Cardiovascular Events: The Framingham Heart Study

GF Mitchell, Shih-Jen Hwang, RS Vasan, MG Larson.

Circulation. 2010;121:505-511.  http://circ.ahajournals.org/content/121/4/505
http://dx.doi.org/10.1161/CIRCULATIONAHA.109.886655

Various measures of arterial stiffness and wave reflection have been proposed as cardiovascular risk markers.
Prior studies have not assessed relations of a comprehensive panel of stiffness measures to prognosis.
First-onset major cardiovascular disease events in relation to arterial stiffness

  • pulse wave velocity [PWV]
  • wave reflection
    • augmentation index
    • carotid–brachial pressure amplification
  • central pulse pressure

were analyzed in 2232 participants (mean age, 63 years; 58% women) in the Framingham Heart Study by a proportional hazards model. During a median follow-up of 7.8 (range, 0.2 to 8.9) years,

  • 151 of 2232 participants (6.8%) experienced an event.

In multivariable models adjusted for

  • age
  • sex
  • systolic blood pressure
  • use of antihypertensive therapy
  • total and high-density lipoprotein cholesterol concentrations
  • smoking
  • presence of diabetes mellitus

higher aortic PWV was associated with a 48% increase in cardiovascular disease risk (hazard ratio, 1.48 per SD; 95% confidence interval, 1.16 to 1.91; P=0.002).

After PWV was added to a standard risk factor model, the integrated discrimination improvement was 0.7% (95% confidence interval, 0.05% to 1.3%; P=0.05).

In contrast,

  • augmentation index,
  • central pulse pressure, and
  • pulse pressure amplification

were not related to cardiovascular disease outcomes in multivariable models.

Higher aortic stiffness assessed by PWV

  • is associated with increased risk for a first cardiovascular event.

Aortic PWV improves risk prediction when added to standard risk factors and may represent

  • a valuable biomarker of cardiovascular disease risk
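
For readers who wish to reproduce this type of analysis on their own cohort data, a minimal sketch using the open-source lifelines package follows. The file name and column names are assumptions for illustration; the Framingham data are not reproduced here.

```python
# A minimal sketch of a multivariable proportional hazards model of the kind
# described above, using the `lifelines` package. All file/column names are
# hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("stiffness_cohort.csv")  # hypothetical cohort file

# Express PWV per standard deviation so the hazard ratio is per-SD,
# matching the reported 1.48 (95% CI 1.16-1.91).
df["pwv_sd"] = (df["pwv"] - df["pwv"].mean()) / df["pwv"].std()

covariates = ["pwv_sd", "age", "sex", "sbp", "antihtn_rx",
              "total_chol", "hdl_chol", "smoking", "diabetes"]

cph = CoxPHFitter()
cph.fit(df[covariates + ["time_to_event", "event"]],
        duration_col="time_to_event", event_col="event")
cph.print_summary()  # exp(coef) for pwv_sd is the per-SD hazard ratio
```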

We shall here visit a recent article by Justin D. Pearlman and Aviva Lev-Ari, PhD, RN, on the pros and cons of drug stabilizers for arterial elasticity as an alternative or adjunct to diuretics and vasodilators in the management of hypertension, titled

4. Hypertension and Vascular Compliance: 2013 Thought Frontier – An Arterial Elasticity Focus

http://pharmaceuticalintelligence.com/2013/05/11/arterial-elasticity-in-quest-for-a-drug-stabilizer-isolated-systolic-hypertension-caused-by-arterial-stiffening-ineffectively-treated-by-vasodilatation-antihypertensives/

Speaking at the 2013 International Conference on Prehypertension and Cardiometabolic Syndrome, meeting cochair Dr Reuven Zimlichman (Tel Aviv University, Israel) argued that for a growing number of patients the conventional approaches are inappropriate, including

  • the definitions of hypertension
  • the risk-factor tables used to guide treatment

Most antihypertensives today work by producing vasodilation or by decreasing blood volume, which may be

  • ineffective for patients in whom average arterial diameter and circulating volume are not the causes of hypertension; as targets of therapy, they may even promote decompensation

In the future, he predicts, “we will have to start looking for a totally different medication that will aim to

  • improve or at least to stabilize arterial elasticity: medication that might affect factors that determine the stiffness of the arteries, like collagen, like fibroblasts.

Those are not the aim of any group of antihypertensive medications today.”

Zimlichman believes existing databases could be used to develop algorithms that focus on

  • inelasticity as a mechanism of hypertensive disease

He also points out that

  • ambulatory blood-pressure-monitoring devices can measure elasticity

http://www.theheart.org/article/1502067.do

A related article was published on the relationship between arterial stiffening and primary hypertension.

Arterial stiffening provides sufficient explanation for primary hypertension.

KH Pettersen, SM Bugenhagen, J Nauman, DA Beard, SW Omholt.

By use of empirically well-constrained computer models describing the coupled function of the baroreceptor reflex and mechanics of the circulatory system, we demonstrate quantitatively that

  • arterial stiffening seems sufficient to explain age-related emergence of hypertension.

Specifically, the model accounts for

  • the empirically observed chronic changes in pulse pressure with age, and
  • the impaired capacity of hypertensive individuals to regulate short-term changes in blood pressure.

The results suggest that a major target for treating chronic hypertension in the elderly  may include

  • the reestablishment of a proper baroreflex response.

http://arxiv.org/abs/1305.0727v2

5. Clinical Decision Support Systems: Realtime Clinical Expert Support: Biomarkers of Cardiovascular Disease — Molecular Basis and Practical Considerations

RS Vasan.  Circulation. 2006;113:2335-2362

http://dx.doi.org/10.1161/CIRCULATIONAHA.104.482570

http://circ.ahajournals.org/content/113/19/2335

Substantial data indicate that CVD is a life course disease that begins with the evolution of risk factors that contribute to

  • subclinical atherosclerosis.

Subclinical disease culminates in overt CVD. The onset of CVD itself portends an adverse prognosis with greater

  • risks of recurrent adverse cardiovascular events, morbidity, and mortality.

Clinical assessment alone has limitations. Clinicians have used additional tools to aid clinical assessment and to enhance their ability to identify the “vulnerable” patient at risk for CVD, as suggested by a recent National Institutes of Health (NIH) panel.

Biomarkers are one such tool to better identify high-risk individuals, to diagnose disease conditions promptly, and to guide prognosis and treatment.

Biological marker (biomarker): A laboratory test value that is objectively measured and evaluated as an indicator of

  1. normal biological processes,
  2. pathogenic processes, or
  3. pharmacological responses to a therapeutic intervention.

Type 0 biomarker: A marker of the natural history of a disease

  • Type 0 correlates longitudinally with known clinical indices/predicts outcomes.

Type I biomarker: A marker that captures the effects of a therapeutic intervention

  • Type I assesses an aspect of treatment mechanism of action.

Type 2 biomarker (surrogate end point):  A marker intended to predict outcomes on the basis of

  • epidemiologic
  • therapeutic
  • pathophysiologic or
  • other scientific evidence.

With biomarkers monitoring disease progression or response to therapy, the patient can serve as his or her own control (follow-up values may be compared to baseline values).

Costs may be less important for prognostic markers when testing is largely restricted to people with disease (total cost = cost per person × number tested + downstream costs). Some biomarkers (e.g., an exercise stress test) may be used for both diagnostic and prognostic purposes.

Generally there are cost differences in establishing a prognostic value versus diagnostic value of a biomarker:

  • prognostic utility typically requires a large sample and a prospective design, whereas
  • diagnostic value often can be determined with a smaller sample in a cross-sectional design
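
The cost identity stated above is trivial to encode; a minimal sketch with hypothetical inputs:

```python
# Minimal sketch of the cost identity stated above; all inputs are
# hypothetical placeholders.
def total_testing_cost(cost_per_person: float,
                       n_tested: int,
                       downstream_costs: float) -> float:
    """total cost = cost per person x number tested + downstream costs"""
    return cost_per_person * n_tested + downstream_costs

# Prognostic use restricted to people with disease (smaller n)...
print(total_testing_cost(80.0, 5_000, 250_000))
# ...versus diagnostic screening of a whole at-risk population (larger n).
print(total_testing_cost(80.0, 250_000, 250_000))
```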

Regardless of the intended use, it is important to remember that biomarkers that do not change disease management

  • cannot affect patient outcome and therefore
  • are unlikely to be cost-effective (judged in terms of quality-adjusted life-years gained).

Typically, for a biomarker to change management, it is important to have evidence that risk reduction strategies should vary with biomarker levels, and/or biomarker-guided management achieves advantages over a management scheme that ignores the biomarker levels.

Typically this means that biomarker levels should be modifiable by therapy.

Gil David and Larry Bernstein have developed, in consultation with Prof. Ronald Coifman, in the Yale University Applied Mathematics Program, a software system that is the equivalent of an intelligent Electronic Health Records Dashboard that

  • provides empirical medical reference and
  • suggests quantitative diagnostics options.

The current design of the Electronic Medical Record (EMR) is a
linear presentation of portions of the record

  • by services
  • by diagnostic method, and
  • by date

to cite examples.

This allows perusal through a graphical user interface (GUI) that

  • partitions the information and necessary reports on a workstation, accessed by keying on icons, and
  • presents decision support

Examples of data partitions include:

  • history
  • medications
  • laboratory reports
  • imaging
  • EKGs

The introduction of a DASHBOARD adds presentation of

  • drug reactions
  • allergies
  • primary and secondary diagnoses, and
  • critical information

about any patient for the caregiver needing access to the record.

A basic issue for such a tool is what information is presented and how it is displayed.

A determinant of the success of this endeavor is whether it

  • facilitates workflow
  • facilitates decision-making process
  • reduces medical error.

Work continues on extending the capabilities with model datasets and sufficient data, on the assumption that computer extraction of data from disparate sources will, in the long run, further improve this process.

For instance, there is synergistic value in finding the coincidence of the following (a rule sketch follows the list):

  • ST shift on EKG
  • elevated cardiac biomarker (troponin)
  • in the absence of substantially reduced renal function.
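
A minimal sketch of such a coincidence rule, with illustrative (not validated) thresholds:

```python
# A minimal sketch of the "synergistic coincidence" rule described above.
# Thresholds are illustrative placeholders, not validated cutoffs.
def flag_acute_coronary_pattern(st_shift: bool,
                                troponin_ng_l: float,
                                egfr_ml_min: float) -> bool:
    """Flag when ST shift and elevated troponin coincide in a patient whose
    renal function is not substantially reduced (reduced clearance alone
    can elevate troponin)."""
    TROPONIN_CUTOFF = 50.0   # assumed assay-specific cutoff
    EGFR_CUTOFF = 45.0       # assumed threshold for "substantially reduced"
    return (st_shift
            and troponin_ng_l > TROPONIN_CUTOFF
            and egfr_ml_min >= EGFR_CUTOFF)

print(flag_acute_coronary_pattern(True, 120.0, 78.0))   # True: synergy present
print(flag_acute_coronary_pattern(True, 120.0, 20.0))   # False: renal confounder
```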

Similarly, the conversion of hematology based data into useful clinical information requires the establishment of problem-solving constructs based on the measured data.

The most commonly ordered test used for managing patients worldwide is the hemogram that often incorporates

  • morphologic review of a peripheral smear
  • descriptive statistics

While the hemogram has undergone progressive modification of the measured features over time, the subsequent expansion of the panel of tests has provided a window into the cellular changes in the

  • production
  • release
  • or suppression

of the formed elements from the blood-forming organ into the circulation. In the hemogram one can view data reflecting the characteristics of a broad spectrum of medical conditions.

Progressive modification of the measured features of the hemogram has delineated characteristics expressed as measurements of

  • size
  • density, and
  • concentration

resulting in many characteristic features of classification. In the diagnosis of hematological disorders, one looks for

  • proliferation of marrow precursors
  • domination of a cell line
  • suppression of hematopoiesis

Other dimensions are created by considering

  • the maturity and size of the circulating cells.

The application of rules-based, automated problem solving should provide a valid approach to

  • the classification and interpretation of the data used to determine a knowledge-based clinical opinion.

The exponential growth of knowledge since the mapping of the human genome has been enabled by parallel advances in applied mathematics that have not been part of traditional clinical problem solving.

As the complexity of statistical models has increased

  • the dependencies have become less clear to the individual.

Contemporary statistical modeling has a primary goal of finding an underlying structure in studied data sets.
The development of an evidence-based inference engine that can substantially interpret the data at hand and

  • convert it in real time to a “knowledge-based opinion”

could improve clinical decision-making by incorporating into the model

  • multiple complex clinical features as well as onset and duration.

An example of a difficult area for clinical problem solving is found in the diagnosis of Systemic Inflammatory Response Syndrome (SIRS) and associated sepsis. SIRS is a costly diagnosis in hospitalized patients, and failure to diagnose it in a timely manner increases the financial and safety hazard. The early diagnosis of SIRS/sepsis is made by the clinician's application of defined criteria (a rule sketch follows below):

  • temperature
  • heart rate
  • respiratory rate and
  • WBC count

The application of those clinical criteria, however, defines the condition after it has developed, leaving unanswered the hope for

  • a reliable method for earlier diagnosis of SIRS.
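
The conventional adult SIRS rule (at least 2 of the 4 criteria) is straightforward to encode; a minimal sketch:

```python
# A minimal sketch of the conventional adult SIRS rule: at least 2 of the
# 4 criteria. Simplified: it omits the PaCO2 < 32 mmHg alternative for the
# respiratory criterion and the >10% band-form alternative for WBC.
def sirs_criteria_met(temp_c: float, heart_rate: float,
                      resp_rate: float, wbc_k_per_ul: float) -> bool:
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,             # temperature
        heart_rate > 90,                            # heart rate
        resp_rate > 20,                             # respiratory rate
        wbc_k_per_ul > 12.0 or wbc_k_per_ul < 4.0,  # WBC count (x10^3/uL)
    ]
    return sum(criteria) >= 2

print(sirs_criteria_met(38.6, 104, 24, 13.5))  # True
print(sirs_criteria_met(37.0, 80, 14, 7.0))    # False
```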

The early diagnosis of SIRS may possibly be enhanced by the measurement of proteomic biomarkers, including

  • transthyretin
  • C-reactive protein
  • procalcitonin

together with mean arterial pressure.

Immature granulocyte (IG) measurement has been proposed as a

  • readily available indicator of the presence of granulocyte precursors (left shift).

The use of such markers, obtained by automated systems in conjunction with innovative statistical modeling, provides

  • a promising support to early accurate decision making.

Such a system aims to reduce medical error by utilizing

  • the conjoined syndromic features of disparate data elements.

How we frame our expectations is important. It determines

  • the data we collect to examine the process.

In the absence of data to support an assumed benefit, there is no proof of validity at whatever cost.

Potential arenas of benefit include:

  • hospital operations
  • nonhospital laboratory studies
  • companies in the diagnostic business
  • planners of health systems

The problem was stated by LL Weed in “Idols of the Mind” (Dec 13, 2006):
“a root cause of a major defect in the health care system is that, while we falsely admire and extol the intellectual powers of highly educated physicians, we do not search for the external aids their minds require.” Hospital information technology (HIT) use has been focused on information retrieval, leaving

  • the unaided mind burdened with information processing.

We deal with problems in the interpretation of data presented to the physician, and how the situation could be improved through better

  • design of the software that presents data.

The computer architecture that the physician uses to view the results is more often than not presented

  • as the designer would prefer, and not as the end-user would like.

In order to optimize the interface for the physician, the system could have a “front-to-back” design, in which the call-up for any patient presents

  • A dashboard design that presents the crucial information that the physician would likely act on in an easily accessible manner
  • Each item used has to be closely related to a corresponding criterion needed for a decision.

Feature Extraction.

Eugene Rypka contributed greatly to clarifying the extraction of features in a series of articles, which

  • set the groundwork for the methods used today in clinical microbiology.

The method he describes is termed S-clustering, and

  • will have a significant bearing on how we can view laboratory data.

He describes S-clustering as extracting features from endogenous data that

  • amplify or maximize structural information to create distinctive classes.

The method classifies by selecting the number of features with sufficient variety to generate maps.

The mapping is done by

  • an N×N truth table of messages and choices;
  • each variable is scaled to assign values for each message choice.

For example, the message for an antibody titer would be converted from 0 + ++ +++ to 0 1 2 3.

Even though there may be a large number of measured values, the variety is reduced by this compression, at the cost of some information.
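
A minimal sketch of this message scaling:

```python
# A minimal sketch of the message scaling described above: semiquantitative
# titer symbols are compressed to ordinal values before building the
# truth table of messages and choices.
TITER_SCALE = {"0": 0, "+": 1, "++": 2, "+++": 3}

def encode_titers(readings):
    return [TITER_SCALE[r] for r in readings]

print(encode_titers(["0", "++", "+++", "+"]))  # [0, 2, 3, 1]
```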

The main issue is

  • how a combination of variables falls into a table to convey meaningful information.

We are concerned with

  • accurate assignment into uniquely variable groups by information in test relationships.

One determines the effectiveness of each variable by its contribution to information gain in the system. The reference or null set is the class having no information.  Uncertainty in assigning to a classification can be countered by providing sufficient information.

The possibility of realizing a good model for approximating the effects of factors supported by the data used for inference owes much to the discovery of Kullback–Leibler distance, or “information”; Akaike found a simple relationship between K-L information and Fisher’s maximized log-likelihood function.

In the last 60 years, a concept of entropy comparable to the entropy of physics has been

  • applied to information, noise, and signal processing,
  • developed by Shannon, Kullback, and others, and
  • integrated with modern statistics,

as a result of the seminal work of Akaike, Leo Goodman, Magidson and Vermunt, and work by Coifman.

Akaike pioneered the recognition that the choice of model influences results in a measurable manner. In particular, a larger number of variables promotes further explanation of variance, so a model selection criterion that penalizes for the number of variables is important when success is measured by explanation of variance.
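
Minimal sketches of the two quantities named above, the Kullback–Leibler divergence and the Akaike information criterion (AIC), whose penalty term grows with the number of parameters:

```python
# Minimal sketches of the Kullback-Leibler divergence between two discrete
# distributions and of the Akaike information criterion.
import math

def kl_divergence(p, q):
    """KL(p || q) = sum_i p_i * log(p_i / q_i), for strictly positive q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def aic(log_likelihood: float, n_params: int) -> float:
    """AIC = 2k - 2 ln(L-hat); lower is better; 2k penalizes extra variables."""
    return 2 * n_params - 2 * log_likelihood

print(kl_divergence([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))
print(aic(log_likelihood=-120.3, n_params=5))   # illustrative numbers
```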

Gil David et al. introduced AUTOMATED processing of the data available to the ordering physician; one can anticipate an enormous impact on the diagnosis and treatment of perhaps half of the top 20 most common causes of hospital admission that carry a high cost and morbidity.

For example:

  1. anemias (iron deficiency, vitamin B12 and folate deficiency, and hemolytic anemia or myelodysplastic syndrome);
  2. pneumonia; systemic inflammatory response syndrome (SIRS) with or without bacteremia;
  3. multiple organ failure and hemodynamic shock;
  4. electrolyte/acid base balance disorders;
  5. acute and chronic liver disease;
  6. acute and chronic renal disease;
  7. diabetes mellitus;
  8. protein-energy malnutrition;
  9. acute respiratory distress of the newborn;
  10. acute coronary syndrome;
  11. congestive heart failure;
  12. hypertension;
  13. disordered bone mineral metabolism;
  14. hemostatic disorders;
  15. leukemia and lymphoma;
  16. malabsorption syndromes;
  17. cancer(s) [breast, prostate, colorectal, pancreas, stomach, liver, esophagus, thyroid, and parathyroid];
  18. endocrine disorders; and
  19. prenatal and perinatal diseases.

Rudolph RA, Bernstein LH, Babb J. Information-Induction for the diagnosis of myocardial infarction. Clin Chem 1988;34:2031-2038.

Bernstein LH (Chairman), Prealbumin in Nutritional Care Consensus Group. Measurement of visceral protein status in assessing protein and energy malnutrition: standard of care. Nutrition 1995;11:169-171.

Bernstein LH, Qamar A, McPherson C, Zarich S, Rudolph R. Diagnosis of myocardial infarction: integration of serum markers and clinical descriptors using information theory. Yale J Biol Med 1999;72:5-13.

Kaplan LA, Chapman JF, Bock JL, Santa Maria E, Clejan S, Huddleston DJ, Reed RG, Bernstein LH, Gillen-Goldstein J. Prediction of Respiratory Distress Syndrome using the Abbott FLM-II amniotic fluid assay. The National Academy of Clinical Biochemistry (NACB) Fetal Lung Maturity Assessment Project. Clin Chim Acta 2002;326(8):61-68.

Bernstein LH, Qamar A, McPherson C, Zarich S. Evaluating a new graphical ordinal logit method (GOLDminer) in the diagnosis of myocardial infarction utilizing clinical features and laboratory data. Yale J Biol Med 1999;72:259-268.

Bernstein L, Bradley K, Zarich SA. GOLDmineR: Improving models for classifying patients with chest pain. Yale J Biol Med 2002;75:183-198.

Coifman RR, Wickerhauser MV. Adapted waveform analysis as a tool for modeling, feature extraction, and denoising. Optical Engineering 1994;33(7):2170-2174.

Coifman R, Saito N. Constructions of local orthonormal bases for classification and regression. C R Acad Sci Paris 1994;319(Série I):191-196.

Realtime Clinical Expert Support and Validation System

We have developed a software system that is the equivalent of an intelligent Electronic Health Records Dashboard that provides empirical medical reference and suggests quantitative diagnostic options. The primary purpose is to gather medical information, generate metrics, analyze them in realtime, and provide a differential diagnosis, meeting the highest standard of accuracy. The system builds a unique characterization of each patient and provides a list of other patients that share this unique profile (see the sketch after this list), thereby

  • utilizing the vast aggregated knowledge (diagnosis, analysis, treatment, etc.) of the medical community.
  • The main mathematical breakthroughs are provided by accurate patient profiling and inference methodologies
  • in which anomalous subprofiles are extracted and compared to potentially relevant cases.
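
One plausible reading of such profile matching is a nearest-neighbor search over standardized laboratory features; the sketch below uses scikit-learn with synthetic data and does not reproduce the system's actual profiling mathematics.

```python
# A minimal sketch of profile matching by nearest neighbors. Features and
# data are synthetic placeholders for illustration only.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))        # 500 past patients x 12 lab features
scaler = StandardScaler().fit(X)

nn = NearestNeighbors(n_neighbors=5).fit(scaler.transform(X))
new_patient = rng.normal(size=(1, 12))
dist, idx = nn.kneighbors(scaler.transform(new_patient))
print("Most similar prior cases:", idx[0])   # indices into the case archive
```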

As the model grows and its knowledge database is extended, the diagnostic and the prognostic become more accurate and precise.
We anticipate that the effect of implementing this diagnostic amplifier would result in

  • higher physician productivity at a time of great human resource limitations,
  • safer prescribing practices,
  • rapid identification of unusual patients,
  • better assignment of patients to observation, inpatient beds,
    intensive care, or referral to clinic,
  • shortened length of patients’ ICU stays and bed days.

The main benefit is a

  1. real time assessment as well as
  2. diagnostic options based on comparable cases,
  3. flags for risk and potential problems

as illustrated in the following case acquired on 04/21/10. The patient was diagnosed by our system with severe SIRS at a grade of 0.61.

Graphical presentation of patient status

The patient was treated for SIRS and the blood tests were repeated during the following week. The full combined record of our system’s assessment of the patient, as derived from the further hematology tests, is illustrated below. The yellow line shows the diagnosis that corresponds to the first blood test (as also shown in the image above). The red line shows the next diagnosis that was performed a week later.

Progression changes in patient ICU stay with SIRS

The MISSIVE(c) system, by Justin Pearlman, is an alternative approach that includes not only automated data retrieval and reformatting of data for decision support, but also an integrated set of tools to speed up analysis, structured for quality and error reduction, coupled to facilitated report generation, incorporation of just-in-time knowledge and group expertise, standards of care, evidence-based planning, and both physician and patient instruction.

See also in Pharmaceutical Intelligence:

The Cost Burden of Disease: U.S. and Michigan. CHRT Brief. January 2010. @www.chrt.org

The National Hospital Bill: The Most Expensive Conditions by Payer, 2006. HCUP Brief #59.


Ruts W, De Deyne S, Ameel E, Vanpaemel W, Verbeemen T, Storms G. Dutch norm data for 13 semantic categories and 338 exemplars. Behavior Research Methods, Instruments, & Computers 2004;36(3):506-515.

De Deyne S, Verheyen S, Ameel E, Vanpaemel W, Dry MJ, Voorspoels W, Storms G. Exemplar by feature applicability matrices and other Dutch normative data for semantic concepts. Behavior Research Methods 2008;40(4):1030-1048.

Landauer TK, Ross BH, Didner RS. Processing visually presented single words: a reaction time analysis [technical memorandum]. Murray Hill, NJ: Bell Laboratories; 1979.

Lewandowsky S. (1991).

Weed L. Automation of the problem oriented medical record. NCHSR Research Digest Series DHEW. 1977;(HRA)77-3177.

Naegele TA. Letter to the Editor. Amer J Crit Care 1993;2(5):433.

Sheila Nirenberg/Cornell and Chethan Pandarinath/Stanford, “Retinal prosthetic strategy with the capacity to restore normal vision,” Proceedings of the National Academy of Sciences.

Other related articles published in this Open Access Online Scientific Journal include the following:

http://pharmaceuticalintelligence.com/2012/08/13/the-automated-second-opinion-generator/

http://pharmaceuticalintelligence.com/2012/09/21/the-electronic-health-record-how-far-we-have-travelled-and-where-is-journeys-end/

http://pharmaceuticalintelligence.com/2013/02/18/the-potential-contribution-of-informatics-to-healthcare-is-more-than-currently-estimated/

http://pharmaceuticalintelligence.com/2013/05/04/cardiovascular-diseases-decision-support-systems-for-disease-management-decision-making/

http://pharmaceuticalintelligence.com/2012/08/13/demonstration-of-a-diagnostic-clinical-laboratory-neural-network-agent-applied-to-three-laboratory-data-conditioning-problems/

http://pharmaceuticalintelligence.com/2012/12/17/big-data-in-genomic-medicine/

http://pharmaceuticalintelligence.com/2013/02/13/cracking-the-code-of-human-life-the-birth-of-bioinformatics-and-computational-genomics/

http://pharmaceuticalintelligence.com/2013/04/28/genetics-of-conduction-disease-atrioventricular-av-conduction-disease-block-gene-mutations-transcription-excitability-and-energy-homeostasis/

http://pharmaceuticalintelligence.com/2012/12/10/identification-of-biomarkers-that-are-relatedto-the-actin-cytoskeleton/

http://pharmaceuticalintelligence.com/2012/08/14/regression-a-richly-textured-method-for-comparison-and-classification-of-predictor-variables/

http://pharmaceuticalintelligence.com/2012/08/02/diagnostic-evaluation-of-sirs-by-immature-granulocytes/

http://pharmaceuticalintelligence.com/2012/08/01/automated-inferential-diagnosis-of-sirs-sepsis-septic-shock/

http://pharmaceuticalintelligence.com/2012/08/12/1815/

http://pharmaceuticalintelligence.com/2012/08/15/1946/

http://pharmaceuticalintelligence.com/2013/05/13/vinod-khosla-20-doctor-included-speculations-musings-of-a-technology-optimist-or-technology-will-replace-80-of-what-doctors-do/

http://pharmaceuticalintelligence.com/2013/05/05/bioengineering-of-vascular-and-tissue-models/

The Heart: Vasculature Protection – A Concept-based Pharmacological Therapy including THYMOSIN
Aviva Lev-Ari, PhD, RN 2/28/2013
http://pharmaceuticalintelligence.com/2013/02/28/the-heart-vasculature-protection-a-concept-based-pharmacological-therapy-including-thymosin/

FDA Pending 510(k) for The Latest Cardiovascular Imaging Technology
Aviva Lev-Ari, PhD, RN 1/28/2013
http://pharmaceuticalintelligence.com/2013/01/28/fda-pending-510k-for-the-latest-cardiovascular-imaging-technology/

PCI Outcomes, Increased Ischemic Risk associated with Elevated Plasma Fibrinogen not Platelet Reactivity
Aviva Lev-Ari, PhD, RN 1/10/2013
http://pharmaceuticalintelligence.com/2013/01/10/pci-outcomes-increased-ischemic-risk-associated-with-elevated-plasma-fibrinogen-not-platelet-reactivity/

The ACUITY-PCI score: Will it Replace Four Established Risk Scores — TIMI, GRACE, SYNTAX, and Clinical SYNTAX
Aviva Lev-Ari, PhD, RN 1/3/2013
http://pharmaceuticalintelligence.com/2013/01/03/the-acuity-pci-score-will-it-replace-four-established-risk-scores-timi-grace-syntax-and-clinical-syntax/

Coronary artery disease in symptomatic patients referred for coronary angiography: Predicted by Serum Protein Profiles
Aviva Lev-Ari, PhD, RN 12/29/2012
http://pharmaceuticalintelligence.com/2012/12/29/coronary-artery-disease-in-symptomatic-patients-referred-for-coronary-angiography-predicted-by-serum-protein-profiles/

New Definition of MI Unveiled, Fractional Flow Reserve (FFR)CT for Tagging Ischemia
Aviva Lev-Ari, PhD, RN 8/27/2012
http://pharmaceuticalintelligence.com/2012/08/27/new-definition-of-mi-unveiled-fractional-flow-reserve-ffrct-for-tagging-ischemia/

Herceptin Fab (antibody) – light and heavy chains (Photo credit: Wikipedia)

Personalized Medicine (Photo credit: Wikipedia)

Diagnostic of pathogenic mutations. A diagnostic complex is a dsDNA molecule resembling a short part of the gene of interest, in which one of the strands is intact (diagnostic signal) and the other bears the mutation to be detected (mutation signal). In case of a pathogenic mutation, the transcribed mRNA pairs to the mutation signal and triggers the release of the diagnostic signal. (Photo credit: Wikipedia)


The Binding of Oligonucleotides in DNA and 3-D Lattice Structures

Curator: Larry H Bernstein, MD, FCAP

 

This article is a renewal of a previous discussion on the role of genomics in discovery of therapeutic targets which focused on:

  •  key drivers of cellular proliferation,
  •  stepwise mutational changes coinciding with cancer progression, and
  •  potential therapeutic targets for reversal of the process.

“The Birth of BioInformatics & Computational Genomics” lays out the manifold multivariate systems-analytical tools that have moved the science forward to a ground that ensures clinical application. There is a web-like connectivity between interconnected scientific discoveries, as significant findings have led to novel hypotheses and have driven our understanding of biological and medical processes at an exponential pace, owing to insights into the chemical structure of DNA,

  • the basic building blocks of DNA  and proteins,
  • of nucleotide and protein-protein interactions,
  • protein folding, allostericity, genomic structure,
  • DNA replication,
  • nuclear polyribosome interaction, and
  • metabolic control.

In addition, the emergence of methods for

  • copying,
  • removal and insertion, and
  • improvements in structural analysis as well as
  • developments in applied mathematics have transformed the research framework.

Three-Dimensional Folding and Functional Organization Principles of the Drosophila Genome
Sexton T, Yaffe E, Kenigsberg E, Bantignies F, …, Cavalli G. Institut de Génétique Humaine, Montpellier GenomiX, and Weizmann Institute; France and Israel. Cell 2012; 148(3):458-472. http://dx.doi.org/10.1016/j.cell.2012.01.010 http://www.ncbi.nlm.nih.gov/pubmed/22265598

Chromosomes are the physical realization of genetic information and thus form the basis for its

  •   readout and propagation.

Here we present a high-resolution chromosomal contact map derived from a modified genome-wide chromosome conformation capture approach applied to Drosophila embryonic nuclei. The entire genome is linearly partitioned into well-demarcated physical domains that overlap extensively with

  •   active and repressive epigenetic marks.

Chromosomal contacts are hierarchically organized between domains. Global modeling of contact density and clustering of domains show that

  •   inactive domains are condensed and confined to their chromosomal territories, whereas
  •  active domains reach out of the territory to form remote intra- and interchromosomal contacts.
  •  we systematically identify specific long-range intrachromosomal contacts between Polycomb-repressed domains

Together, these observations allow for quantitative prediction of the Drosophila chromosomal contact map, laying the foundation for detailed studies of

  • chromosome structure and function in a genetically tractable system.

“Mr. President, the Genome is Fractal!” Eric Lander (Science Adviser to the President and Director of the Broad Institute) et al. delivered the message on the cover of Science Magazine (Oct. 9, 2009), which generated interest at a September meeting of the International HoloGenomics Society. First, it may seem trivial to rectify the statement in “About the cover” of Science Magazine by AAAS. The statement

  • “the Hilbert curve is a one-dimensional fractal trajectory” needs mathematical clarification.

While the paper itself does not make this statement, the new Editorship of the AAAS Magazine might be even more advanced had the previous Editorship not rejected (without review)

  • a Manuscript by 20+ Founders of (formerly) International PostGenetics Society in December, 2006.

Second, it may not be sufficiently clear for the reader that the reasonable requirement for the

  • DNA polymerase to crawl along a “knot-free” (or “low knot”) structure does not need fractals.

A “knot-free” structure could be spooled by an ordinary “knitting globule” (such that the DNA polymerase does not bump into a “knot” when duplicating the strand; just like someone knitting can go through the entire thread without encountering an annoying knot):

  • Just to be “knot-free” you don’t need fractals.

Note, however, that the “strand” can be accessed only at its beginning – it is impossible, for example,

  • to pluck a segment from deep inside the “globulus”.

This is where certain fractals provide a major advantage – that could be the “Eureka” moment. For instance, the mentioned Hilbert-curve is not only “knot free” – but provides an easy access to

  • “linearly remote” segments of the strand.

If the Hilbert curve starts from the lower right corner and ends at the lower left corner, for instance,

  • the path shows very easy access to what would be the midpoint if the Hilbert curve is measured by the Euclidean distance along the zig-zagged path.

Likewise, even the path from the beginning of the Hilbert-curve is about equally easy to access – easier than to reach from the origin a point that is about 2/3 down the path. The Hilbert-curve provides an easy access between two points within the “spooled thread”; from a point that is about 1/5 of the overall length to about 3/5 is also in a “close neighborhood”. This marvellous fractal structure is illustrated by the 3D rendering of the Hilbert-curve. Once you observe such fractal structure,

  • you’ll never again think of a chromosome as a “brillo mess”, would you?
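
The locality property described above can be demonstrated with the standard index-to-coordinate routine for the 2D Hilbert curve (often called d2xy); consecutive indices along the curve are always unit steps apart in the plane.

```python
# A minimal sketch of the classic d2xy routine: map distance d along a
# Hilbert curve filling an n x n grid (n a power of 2) to (x, y) coordinates.
def d2xy(n, d):
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Consecutive indices are always unit steps apart in the plane:
print([d2xy(8, d) for d in range(6)])
# [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (3, 0)]
```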

It will dawn on you that the genome is orders of magnitude more finessed than we ever thought. Those embarking on a somewhat complex review of some historical aspects of the power of fractals may wish to consult the oeuvre of Mandelbrot (also, to celebrate his 85th birthday).

For the more sophisticated readers, even the fairly simple Hilbert curve (a representative of the Peano class) becomes even more stunningly brilliant than just some “see-through density”. Those who are familiar with the classic “Traveling Salesman Problem” know that “the shortest path along which every one of n given locations can be visited once, and only once” requires fairly sophisticated algorithms (and a tremendous amount of computation if n>10, or much more). Some readers will be amazed, therefore, that for n=9 the underlying Hilbert curve helps to provide an empirical solution; refer to pellionisz@junkdna.com.

Briefly, the significance of the above realization, that the (recursive) fractal Hilbert curve is intimately connected to the (recursive) solution of the Traveling Salesman Problem, a core concept of artificial neural networks, can be summarized as below. Accomplished physicist John Hopfield (already a member of the National Academy of Sciences) aroused great excitement in 1982 with his (recursive) design of artificial neural networks and learning algorithms which were able to find solutions to combinatorial problems such as the Traveling Salesman Problem. (Book review, Clark Jeffries, 1991; see J Anderson, E Rosenfeld, and A Pellionisz (eds.), Neurocomputing 2: Directions for research, MIT Press, Cambridge, MA, 1990): “Perceptrons were modeled chiefly with neural connections in a ‘forward’ direction: A → B → C → D. The analysis of networks with strong backward coupling proved intractable. All our interesting results arise as consequences of the strong back-coupling” (Hopfield, 1982).

The Principle of Recursive Genome Function surpassed obsolete axioms that blocked, for half a century, entry of recursive algorithms into interpretation of the structure and function of the (holo)genome. This breakthrough,

  • by uniting the two largely separate fields of Neural Networks and Genome Informatics,

is particularly important for those who focused on biological (actually occurring) neural networks (rather than abstract algorithms that may not, or because of their core axioms simply could not, represent neural networks under the governance of DNA information).

If biophysicist Andras Pellionisz is correct, genetic science may be on the verge of yielding its third — and by far biggest — surprise. With a doctorate in physics, Pellionisz is the holder of Ph.D.’s in computer sciences and experimental biology from the prestigious Budapest Technical University and the Hungarian National Academy of Sciences. A biophysicist by training, the 59-year-old is a former research associate professor of physiology and biophysics at New York University, author of numerous papers in respected scientific journals and textbooks, a past winner of the prestigious Humboldt Prize for scientific research, a former consultant to NASA, and holder of a patent on the world’s first artificial cerebellum, a technology that has already been integrated into research on advanced avionics systems. Because of his background, the Hungarian-born brain researcher might also become one of the first people to successfully launch a new company by

  • using the Internet to gather momentum for a novel scientific idea.

The genes we know about today, Pellionisz says, can be thought of as something similar to machines that make bricks (proteins, in the case of genes), with certain junk-DNA sections providing a blueprint for the different ways those proteins are assembled. The notion that at least certain parts of junk DNA might have a purpose is gaining acceptance; for example, many researchers

  • now refer to it with a far less derogatory term: introns.

In a provisional patent application filed July 31, Pellionisz claims to have

  • unlocked a key to the hidden role junk DNA

plays in growth — and in life itself. His patent application covers all attempts to

  • count,
  • measure and
  • compare

the fractal properties of introns for diagnostic and therapeutic purposes.

The FractoGene Decade: from Inception in 2002 to Proofs of Concept and Impending Clinical Applications by 2012

  • Junk DNA Revisited (SF Gate, 2002)
  • The Future of Life, 50th Anniversary of DNA (Monterey, 2003)
  • Mandelbrot and Pellionisz (Stanford, 2004)
  • Morphogenesis, Physiology and Biophysics (Simons, Pellionisz 2005)
  • PostGenetics; Genetics beyond Genes (Budapest, 2006)
  • ENCODE conclusion (Collins, 2007)
  • The Principle of Recursive Genome Function (paper, YouTube, 2008)
  • YouTube Cold Spring Harbor presentation of FractoGene (Cold Spring Harbor, 2009)
  • Mr. President, the Genome is Fractal! (2009)
  • HolGenTech, Inc. Founded (2010)
  • Pellionisz on the Board of Advisers in the USA and India (2011)
  • ENCODE – final admission (2012)
  • Recursive Genome Function is Clogged by Fractal Defects in Hilbert-Curve (2012)
  • Geometric Unification of Neuroscience and Genomics (2012)
  • US Patent Office issues FractoGene 8,280,641 to Pellionisz (2012)

http://www.junkdna.com/the_fractogene_decade.pdf

The Hidden Fractal Language of Intron DNA

To fully understand Pellionisz’ idea, one must first know what a fractal is. Fractals are a way that nature organizes matter. Fractal patterns can be found in anything that has a nonsmooth surface (unlike a billiard ball), such as

  • coastal seashores,
  • the branches of a tree or
  • the contours of a neuron (a nerve cell in the brain).

Some, but not all, fractals are self-similar and stop repeating their patterns at some stage;

  • the branches of a tree, for example, can get only so small.

Because they are geometric, meaning they have a shape, fractals can be described in mathematical terms. It’s similar to the way a circle can be described by using a number to represent its radius (the distance from its center to its outer edge). When that number is known, it’s possible to draw the circle it represents without ever having seen it before. Although the math is much more complicated, the same is true of fractals. If one has the formula for a given fractal, it’s possible to use that formula to construct, or reconstruct, an image of whatever structure it represents, no matter how complicated.

The mysteriously repetitive but not identical strands of genetic material are in reality

  • building instructions organized in a special type of pattern known as a fractal.

It’s this pattern of fractal instructions, he says, that tells genes what they must do in order to form living tissue, everything from the wings of a fly to the entire body of a full-grown human.

In a move sure to alienate some scientists, Pellionisz chose the unorthodox route of making his initial disclosures online on his own Web site. He picked that strategy, he says, because it is the fastest way he can document his claims and find scientific collaborators and investors. Most mainstream scientists usually blanch at such approaches, preferring more traditionally credible methods, such as publishing articles in peer-reviewed journals.

Pellionisz’ idea is that a fractal set of building instructions in the DNA plays a role in organizing life itself. Decode the language, and in theory it could be reverse-engineered, just as knowing the radius of a circle lets one create that circle. The fractal-based formula

  • would allow us to understand how a heart or disease-fighting antibodies is created.

The idea is to encourage new collaborations across the boundaries that separate the intertwined

  • disciplines of biology, mathematics and computer sciences.

Hal Plotkin, Special to SF Gate. Thursday, November 21, 2002.
http://www.junkdna.com/
http://www.junkdna.com/the_fractogene_decade.pdf
http://www.sciencentral.com/articles/view.php3?article_id=218392305
http://www.news-medical.net/health/Junk-DNA-What-is-Junk-DNA.aspx
http://www.kurzweilai.net/junk-dna-plays-active-role-in-cancer-progression-researchers-find
http://marginalrevolution.com/marginalrevolution/2013/05/the-battle-over-junk-dna
http://profiles.nlm.nih.gov/SC/B/B/F/T/_/scbbft.pdf

Human Genome is Multifractal

The human genome: a multifractal analysis. Moreno PA, Vélez PE, Martínez E, et al. BMC Genomics 2011;12:506. http://www.biomedcentral.com/1471-2164/12/506

Several studies have shown that genomes can be studied via a multifractal formalism. These researchers used a multifractal approach to study the genetic information content of the Caenorhabditis elegans genome. They investigated the possibility that the human genome shows a similar behavior to that observed in the nematode. They report

  • multifractality in the human genome sequence.

This behavior correlates strongly with the presence of Alu elements and, to a lesser extent, with CpG islands and (G+C) content.

  1. Gene function,
  2. clusters of orthologous genes,
  3. metabolic pathways, and
  4. exons
tended to increase their frequencies with ranges of multifractality;
  • large gene families were located in genomic regions with varied multifractality, and
  • a multifractal map and classification for human chromosomes are proposed.

They propose a descriptive non-linear model for the structure of the human genome. This model reveals a multifractal regionalization where many regions coexist that are

  • far from equilibrium and this non-linear organization has significant
  • molecular and medical genetic implications for understanding the role of Alu elements in genome stability and structure of the human genome.

Given the role of Alu sequences in

  • gene regulation
  • genetic diseases
  • human genetic diversity
  • adaptation and phylogenetic analyses

these quantifications are especially useful.
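
For readers who want to experiment, a minimal sketch of one common box-counting estimator of the generalized dimensions D_q follows, applied here to the G+C measure of a toy random sequence (for which D_q should be near 1 for every q); genome-scale multifractal analyses are far more careful about scaling ranges and statistics.

```python
# A minimal box-counting sketch of generalized dimensions D_q for the G+C
# measure of a toy random sequence. Box sizes and q values are illustrative.
import numpy as np

def generalized_dimensions(seq, qs=(-2, 0, 2), box_sizes=(64, 128, 256, 512)):
    is_gc = np.array([c in "GC" for c in seq], dtype=float)
    dims = {}
    for q in qs:
        if q == 1:
            continue  # q = 1 needs the entropy limit; skipped in this sketch
        log_z, log_eps = [], []
        for size in box_sizes:
            n_boxes = len(seq) // size
            counts = is_gc[: n_boxes * size].reshape(n_boxes, size).sum(axis=1)
            mu = counts / counts.sum()                  # box measures
            log_z.append(np.log((mu[mu > 0] ** q).sum()))
            log_eps.append(np.log(size / len(seq)))
        tau = np.polyfit(log_eps, log_z, 1)[0]          # Z(q, eps) ~ eps**tau(q)
        dims[q] = tau / (q - 1)                         # D_q = tau(q) / (q - 1)
    return dims

rng = np.random.default_rng(1)
toy_seq = "".join(rng.choice(list("ACGT"), size=2 ** 14))
print(generalized_dimensions(toy_seq))  # near-constant D_q ~ 1 for a random sequence
```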

MiIP: The Monomer Identification and Isolation Program

Bun C, Ziccardi W, Doering J, Putonti C. Evolutionary Bioinformatics 2012;8:293-300. http://dx.doi.org/10.4137/EBO.S9248

Repetitive elements within genomic DNA are both functionally and evolutionarily informative. Discovering these sequences ab initio is computationally challenging, compounded by the fact that sequence identity between repetitive elements can vary significantly. These investigators present a new application, the Monomer Identification and Isolation Program (MiIP),

  • which provides functionality to both search for a particular repeat as well as
  • discover repetitive elements within a larger genomic sequence.

To compare MiIP’s performance with other repeat detection tools, analysis was conducted for synthetic sequences as well as several a21-II clones and HC21 BAC sequences. The main benefit of MiIP is

  • it is a single tool capable of both searching for known monomeric sequences and
  • discovering the occurrence of repeats ab initio

Triplex DNA: A third strand for DNA

The DNA double helix can under certain conditions accommodate

  • a third strand in its major groove.

Researchers in the UK presented a complete set of four variant nucleotides that makes it

  • possible to use this phenomenon in gene regulation and mutagenesis.

Natural DNA only forms a triplex if the targeted strand is rich in purines – guanine (G) and adenine (A) – which in addition to the bonds of the Watson-Crick base pairing

  • can form two further hydrogen bonds;
  • the ‘third strand’ oligonucleotide therefore has the matching sequence of pyrimidines – cytosine (C) and thymine (T).

Any Cs or Ts in the target strand of the duplex will only bind very weakly, as

  • they contribute just one hydrogen bond.

Moreover, the recognition of G requires the C in the probe strand to be protonated, so

  • triplex formation will only work at low pH (a sketch of these pairing rules follows below).
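
A minimal sketch of the canonical pyrimidine-motif code just described (T recognizes A; protonated C recognizes G), ignoring orientation and pH chemistry:

```python
# A minimal sketch of the pyrimidine-motif Hoogsteen code for triplex design.
# Binding orientation (parallel to the purine strand) and the low-pH
# requirement for C+ protonation are noted but not modeled here.
HOOGSTEEN_PYRIMIDINE = {"A": "T", "G": "C"}  # C must be protonated (C+) at low pH

def design_third_strand(purine_target: str):
    """Return the pyrimidine third strand for a purine-only target strand,
    or None if the target contains pyrimidines (weak, one-H-bond sites)."""
    if any(base not in HOOGSTEEN_PYRIMIDINE for base in purine_target):
        return None
    return "".join(HOOGSTEEN_PYRIMIDINE[b] for b in purine_target)

print(design_third_strand("GAAGAGGGA"))  # 'CTTCTCCCT'
print(design_third_strand("GATGA"))      # None: T in target breaks the motif
```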

To overcome all these problems, the groups of Tom Brown and Keith Fox at the University of Southampton have developed modified building blocks, and have now completed

  • a set of four new nucleotides, each of which will bind to one DNA nucleotide from the major groove of the double helix.

They tested the binding of a 19-mer of these designer nucleotides to a double helix target sequence in comparison with the corresponding triplex-forming oligonucleotide made from natural DNA bases. Using fluorescence-monitored thermal melting and DNase I footprinting, the researchers showed that

  • their construct forms stable triplex even at neutral pH. 

Tests with mutated versions of the target sequence showed that

  • three of the novel nucleotides are highly selective for their target base pair,
  • while the ‘S’ nucleotide, designed to bind to T, also tolerates C.

References

Rusling DA et al. Nucleic Acids Res 2005;33:3025.
Vasquez KM et al. Science 2000;290:530.
Frank-Kamenetskii MD, Mirkin SM. Annu Rev Biochem 1995;64:69-95.

Since the pioneering work of Felsenfeld, Davies, & Rich, double-stranded polynucleotides containing purines in one strand and pyrimidines in the other strand [such as poly(A)/poly(U), poly(dA)/poly(dT), or poly(dAG)/poly(dCT)] have been known to be able to undergo a stoichiometric transition forming a triple-stranded structure containing one polypurine and two polypyrimidine strands. Early on, it was assumed that the third strand was located in the major groove and associated with the duplex via non-Watson-Crick interactions now

  • known as Hoogsteen pairing.

Triple helices consisting of one pyrimidine and two purine strands were also proposed. However, notwithstanding the fact that single-base triads in tRNA structures were well documented, triple-helical DNA escaped wide attention before the mid-1980s. The interest in DNA triplexes arose due to two partially independent developments.

  1.  homopurine-homopyrimidine stretches in super-coiled plasmids were found to adopt an unusual DNA structure, called H-DNA which includes a triplex.
  2. several groups demonstrated that homopyrimidine and some purine-rich oligonucleotides
  • can form stable and sequence-specific complexes with
  • corresponding homopurine-homopyrimidine sites on duplex DNA.

These complexes were shown to be triplex structures rather than D-loops, where

  • the oligonucleotide invades the double helix and displaces one strand.

A characteristic feature of all these triplexes is that the two

  • chemically homologous strands (both pyrimidine or both purine) are antiparallel.

These findings led to explosive growth in triplex studies. One can easily imagine numerous “geometrical” ways to form a triplex, and several of these have been studied experimentally. The canonical intermolecular triplex consists of either

  • three independent
  • oligonucleotide chains or of a
  • long DNA duplex carrying homopurine-homopyrimidine insert
    • and the corresponding oligonucleotide.

Triplex formation strongly depends on the oligonucleotide(s) concentration. A single DNA

  • chain may also fold into a triplex connected by two loops.

To comply with the sequence and polarity requirements for triplex formation, such a DNA strand must have a peculiar sequence: It contains a mirror repeat

  1. (homopyrimidine for YR*Y triplexes and homopurine for YR*R triplexes)
  2. flanked by a sequence complementary to one half of this repeat.

Such DNA sequences fold into triplex configuration much more readily than do the corresponding intermolecular triplexes, because all triplex-forming segments are brought together within the same molecule. It has become clear that both

  • sequence requirements and chain polarity rules for triplex formation

can be met by DNA target sequences built of clusters of purines and pyrimidines. The third strand consists of adjacent homopurine and homopyrimidine blocks forming Hoogsteen hydrogen bonds with purines on alternate strands of the target duplex, and

  • this strand switch preserves the proper chain polarity.

These structures, called alternate-strand triplexes, have been experimentally observed as both intra- and inter-molecular triplexes. These results increase the number of potential targets for triplex formation in natural DNAs somewhat by adding sequences composed of purine and pyrimidine clusters, although arbitrary sequences are still not targetable because

  • strand switching is energetically unfavorable.
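
Returning to the mirror-repeat requirement for intramolecular (H-DNA-forming) triplexes described above, it is easy to test computationally; a minimal sketch:

```python
# A minimal sketch testing the mirror-repeat requirement: one homopurine or
# homopyrimidine half read 5'->3' is the reverse of the other half (not its
# complement). Example sequences are illustrative only.
def is_mirror_repeat(seq: str) -> bool:
    return seq == seq[::-1]

def is_homopurine(seq: str) -> bool:
    return set(seq) <= {"A", "G"}

candidate = "GAAGAGAAG"           # reads the same in both directions
print(is_mirror_repeat(candidate) and is_homopurine(candidate))  # True
print(is_mirror_repeat("GAAGTTAG"))                              # False
```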

References:
Lyamichev VI, Mirkin SM, Frank-Kamenetskii MD. J Biomol Struct Dyn 1986;3:667-69.
Filippov SA, Frank-Kamenetskii MD. Nature 1987;330:495-97.
Demidov V, Frank-Kamenetskii MD, Egholm M, Buchardt O, Nielsen PE. Nucleic Acids Res 1993;21:2103-7.
Mirkin SM, Frank-Kamenetskii MD. Annu Rev Biophys Biomol Struct 1994;23:541-76.
Hoogsteen K. Acta Crystallogr 1963;16:907-16.
Malkov VA, Voloshin ON, Veselkov AG, Rostapshov VM, Jansen I, et al. Nucleic Acids Res 1993;21:105-11.
Malkov VA, Voloshin ON, Soyfer VN, Frank-Kamenetskii MD. Nucleic Acids Res 1993;21:585-91.
Cherny DY, Belotserkovskii BP, Frank-Kamenetskii MD, Egholm M, Buchardt O, et al. Proc Natl Acad Sci USA 1993;90:1667-70.

Triplex forming oligonucleotides

Triplex forming oligonucleotides: sequence-specific tools for genetic targeting. Knauert MP, Glazer PM. Human Molec Genetics 2001;10(20):2243-2251.

Triplex forming oligonucleotides (TFOs) bind in the major groove of duplex DNA with a

  • high specificity and affinity.

Because of these characteristics, TFOs have been proposed as

  • homing devices for genetic manipulation in vivo.

These investigators review work demonstrating the ability of TFOs and related molecules

  • to alter gene expression and mediate gene modification in mammalian cells.

TFOs can mediate targeted gene knock out in mice, providing a foundation for potential

  • application of these molecules in human gene therapy.

The Triplex Genetic Code

Novagon DNA
John Allen Berger, founder of Novagon DNA and The Triplex Genetic Code:

Over the past 12+ years, Novagon DNA has amassed a vast array of empirical findings which

  • challenge the “validity” of the “central dogma theory”, especially the current five nucleotide
  • Watson-Crick DNA and RNA genetic codes. DNA = A1T1G1C1, RNA = A2U1G2C2.

We propose that our new Novagon DNA 6 nucleotide Triplex Genetic Code has more validity than

  • the existing 5 nucleotide (A1T1U1G1C1) Watson-Crick genetic codes.

Our goal is to conduct a “world class” validation study to replicate and extend our findings.

Methods for Examining Genomic and Proteomic Interactions.

An Integrated Statistical Approach to Compare Transcriptomics Data Across Experiments: A Case Study on the Identification of Candidate Target Genes of the Transcription Factor PPARα
Ullah MO, Müller M, Hooiveld GJEJ. Bioinformatics and Biology Insights 2012;6:145-154. http://dx.doi.org/10.4137/BBI.S9529 http://www.ncbi.nlm.nih.gov/pubmed/22783064 Corresponding author email: guido.hooiveld@wur.nl http://edepot.wur.nl/213859

An effective strategy to elucidate the signal transduction cascades activated by a transcription factor

  • is to compare the transcriptional profiles of wild type and transcription factor knockout models.

Many statistical tests have been proposed for analyzing gene expression data, but

  • most tests are based on pair-wise comparisons.

Since the analysis of microarrays involves the testing of multiple hypotheses within one study,

  • it is generally accepted to control for false positives by the false discovery rate (FDR).

However, this may be an inappropriate metric for

    • comparing data across different experiments.

Here we propose the simultaneous testing and integration of

  • the three hypotheses (contrasts) using the cell means ANOVA model.

These three contrasts (sketched in code after the list) test for the effect of a treatment in

  1. wild type,
  2. gene knockout, and
  3. globally over all experimental groups
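
A minimal sketch of these three contrasts for a single gene, using statsmodels with hypothetical group labels and synthetic data (the authors' genome-wide pipeline is not reproduced):

```python
# A minimal sketch of the three contrasts under a cell-means (no-intercept)
# ANOVA model for one gene; group labels and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
groups = ["WT_ctrl", "WT_trt", "KO_ctrl", "KO_trt"]
means = np.repeat([0.0, 1.2, 0.0, 0.1], 6)      # assumed cell means
df = pd.DataFrame({
    "group": np.repeat(groups, 6),
    "expr": rng.normal(loc=means, scale=0.5),
})

# Cell-means model: one mean per group, no intercept.
fit = smf.ols("expr ~ C(group) - 1", data=df).fit()

wt_c, wt_t = "C(group)[WT_ctrl]", "C(group)[WT_trt]"
ko_c, ko_t = "C(group)[KO_ctrl]", "C(group)[KO_trt]"
contrasts = {
    "treatment effect in wild type": f"{wt_t} - {wt_c} = 0",
    "treatment effect in knockout":  f"{ko_t} - {ko_c} = 0",
    "global treatment effect":
        f"0.5*{wt_t} + 0.5*{ko_t} - 0.5*{wt_c} - 0.5*{ko_c} = 0",
}
for name, constraint in contrasts.items():
    print(name, float(fit.t_test(constraint).pvalue))
```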

We compare differential expression of genes across experiments while

  • controlling for multiple hypothesis testing.

Managing biological complexity across orthologs with a visual knowledgebase of documented biomolecular interactions
Vincent Van Buren & Hailin Chen. Scientific Reports 2012; 2, Article number: 1011. http://dx.doi.org/10.1038/srep01011

The complexity of biomolecular interactions and influences is a major obstacle

  • to their comprehension and elucidation.

Visualizing knowledge of biomolecular interactions increases

  • comprehension and facilitates the development of new hypotheses.

The rapidly changing landscape of high-content experimental results also presents a challenge

  • for the maintenance of comprehensive knowledgebases.

Distributing the responsibility for maintenance of a knowledgebase to a community of

  • experts is an effective strategy for large, complex and rapidly changing knowledgebases.

Cognoscente serves these needs

  • by building visualizations for queries of biomolecular interactions on demand,
  • by managing the complexity of those visualizations, and
  • by crowdsourcing to promote the incorporation of current knowledge from the literature.

Imputing functional associations

  • between biomolecules and imputing directionality of regulation
  • for those predictions each require a corpus of existing knowledge as a framework.

Comprehension of the complexity of this corpus of knowledge will be facilitated by effective

  • visualizations of the corresponding biomolecular interaction networks.

Cognoscente (http://vanburenlab.medicine.tamhsc.edu/cognoscente.html) was designed and implemented to serve these roles as a knowledgebase and as

  • an effective visualization tool for systems biology research and education.

Cognoscente currently contains over 413,000 documented interactions, with coverage across multiple species. Perl, HTML, GraphViz, and a MySQL database were used in the development of Cognoscente (a toy sketch of this style of graph rendering follows the list below). Cognoscente was motivated by the need to update the knowledgebase of

  • biomolecular interactions at the user level, and
  • flexibly visualize multi-molecule query results for
    • heterogeneous interaction types across different orthologs.
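As a rough illustration of what an on-demand interaction visualization looks like, the sketch below uses the third-party Python graphviz package to draw a small, hypothetical multi-molecule query result. Cognoscente itself is implemented in Perl with GraphViz; the molecules and edge labels here are invented for illustration:

```python
# Toy GraphViz-style rendering of a biomolecular interaction query
# (illustrative only; not Cognoscente code, and the interactions shown
# are hypothetical examples).
from graphviz import Digraph

g = Digraph('interaction_query', format='png')
g.attr(rankdir='LR')

# each edge represents one documented interaction, labeled by type
g.edge('RXRalpha', 'PPARalpha', label='heterodimerization')
g.edge('PPARalpha', 'FABP1', label='transcriptional activation')
g.edge('PPARalpha', 'CPT1A', label='transcriptional activation')

g.render('query_result')  # writes query_result.png
```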

Satisfying these needs provides a strong foundation for developing new hypotheses about

  • regulatory and metabolic pathway topologies.

Several existing tools provide functions that are similar to Cognoscente.


Read Full Post »

Finding the Genetic Links in Common Disease: Caveats of Whole Genome Sequencing Studies

Writer and Reporter: Stephen J. Williams, Ph.D.

In the November 23, 2012 issue of Science, Jocelyn Kaiser reports ("Genetic Influences on Disease Remain Hidden," News and Analysis)[1] on the difficulties that many genomic studies are encountering in correlating genetic variants with high risk of type 2 diabetes and heart disease.  At the recent American Society of Human Genetics 2012 annual meeting, several DNA sequencing studies reported difficulties in finding genetic variants linked to high-risk type 2 diabetes and heart disease.  These studies were part of an international effort to determine the multiple genetic events contributing to complex, common diseases like diabetes.  Unlike Mendelian inherited diseases (like ataxia telangiectasia), which are characterized by defects mainly in one gene, finding genetic links to more complex diseases poses problems, as outlined in the article:

  • Variants may be so rare that massive numbers of patients' genomes would need to be analyzed
  • For most diseases, individual SNPs (single nucleotide polymorphisms) raise risk only modestly
  • It is hard to find isolated families (as in hemophilia) or isolated populations (such as Ashkenazi Jews)
  • Disease-influencing genes have not been weeded out by natural selection since the human population explosion (~5,000 years ago), which resulted in numerous gene variants
  • It is unclear what percentage of variants account for disease heritability (studies have put this as low as 26% for diabetes, with the remaining risk determined by environment)

Although many genome-wide association studies have found SNPs causally linked to increased risk of diseases such as cancer, diabetes, and heart disease, most individual SNPs for common diseases raise risk by only about 20-40%, which is useless for predicting an individual's chance of developing disease and whether they would be a candidate for a personalized therapy approach.  Therefore, for common diseases, investigators are relying on direct exome sequencing and whole-genome sequencing to detect these medium-rare risk variants, rather than relying on genome-wide association studies (which are usually fine for detecting the higher-frequency variants associated with common diseases).

Three of the many projects (one for heart risk and two for diabetes risk) are highlighted in the article:

1.  National Heart, Lung and Blood Institute Exome Sequencing Project (ESP)[2]: heart, lung, blood

  • Sequenced 6,700 exomes from individuals of European or African descent
  • The majority of variants linked to disease were too rare to establish significance (some observed only once)
  • Groups of variants in the same gene confirmed link between APOC3 and higher risk for early-onset heart attack
  • No other significant gene variants linked with heart disease

2.  T2D-GENES Consortium: diabetes

  • Sequenced 5,300 exomes of type 2 diabetes patients and controls from five ancestry groups
  • SNP in PAX4 gene associated with disease in East Asians
  • No low-frequency variant with large effect, though

3.  GoT2D: diabetes

  • After sequencing 2,700 patients' exomes and whole genomes, no new rare variants above 1.5% frequency with a strong effect on diabetes risk were found

A nice article by Dr. Sowmiya Moorthie entitled "Involvement of rare variants in common disease," found at the PHG Foundation site (http://www.phgfoundation.org/news/5164/), further discusses this conundrum and is summarized below:

“Although GWAs have identified many SNPs associated with common disease, they have as yet had little success in identifying the causative genetic variants. Those that have been identified have only a weak effect on disease risk, and therefore only explain a small proportion of the heritable, genetic component of susceptibility to that disease. This has led to the common disease-common variant hypothesis, which predicts that common disease-causing genetic variants exist in all human populations, but each individual variant will necessarily only have a small effect on disease susceptibility (i.e. a low associated relative risk).

An alternative hypothesis is the common disease, many rare variants hypothesis, which postulates that disease is caused by multiple strong-effect variants, each of which is only found in a few individuals. Dickson et al. in a paper in PLoS Biology postulate that these rare variants can be indirectly associated with common variants; they call these synthetic associations and demonstrate how further investigation could help explain findings from GWA studies [Dickson et al. (2010) PLoS Biol. 8(1):e1000294][3]. In simulation experiments, 30% of synthetic associations were caused by the presence of rare causative variants and furthermore, the strength of the association with common variants also increased if the number of rare causative variants increased."

Figure from Dr. Moorthie's article showing the problem of "finding one in many" rare variants.
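The synthetic-association idea lends itself to a quick demonstration. Below is a toy simulation (my own illustration with invented parameters, not Dickson et al.'s code): a rare, strong-effect causal variant that arose on haplotypes carrying one allele of a common SNP makes the common SNP itself appear associated with disease, even though it has no direct effect:

```python
# Toy simulation of a "synthetic association" (all parameters invented
# for illustration). The rare variant drives disease; the common SNP is
# causally neutral but tags the haplotypes the rare variant arose on.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20000
common = rng.binomial(1, 0.3, n)          # carrier of the common allele
rare = np.zeros(n, dtype=int)
tagged = np.flatnonzero(common == 1)      # rare variant arose on 'common=1' haplotypes
rare[rng.choice(tagged, size=int(0.01 * n), replace=False)] = 1
disease = rng.binomial(1, np.where(rare == 1, 0.6, 0.05))  # risk from rare variant only

# association test at the (causally neutral) common SNP
table = [[np.sum((common == 1) & (disease == 1)), np.sum((common == 1) & (disease == 0))],
         [np.sum((common == 0) & (disease == 1)), np.sum((common == 0) & (disease == 0))]]
odds_ratio, p_value = stats.fisher_exact(table)
print(f"common SNP odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
```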

Indeed, other examples of such issues concerning gene variant association studies occur with other common diseases such as neurologic diseases and obesity, where it has been difficult to clearly and definitively associate any variant with prediction of risk.

For example, Nuytemans et al.[4] used exome sequencing to find variants in the vacuolar protein sorting 35 (VPS35) and eukaryotic translation initiation factor 4 gamma 1 (EIF4G1) genes, two genes causally linked to Parkinson's Disease (PD).  Although they identified novel VPS35 variants, none of these variants could be correlated with higher risk of PD.  One EIF4G1 variant seemed to be a strong Parkinson's Disease risk factor; however, there was "no evidence for an overall contribution of genetic variability in VPS35 or EIF4G1 to PD development".

These negative results may have relevance as companies such as 23andMe (www.23andme.com) claim to be able to test for Parkinson's predisposition.  To see a description of the LRRK2 mutational analysis which they use to determine risk for the disease, please see the following link: https://www.23andme.com/health/Parkinsons-Disease/. This company and others like it have been the subject of posts on this site (Personalized Medicine: Clinical Aspiration of Microarrays).

However, there seems to be more luck with strategies focused on analyzing intronic sequence rather than exome sequence.  Jocelyn Kaiser's Science article notes this in a brief interview with Harry Dietz of Johns Hopkins University, who suspects that "much of the missing heritability lies in gene-gene interactions".  Oliver Harismendy, Kelly Frazer, and colleagues' recent publication in Genome Biology (http://genomebiology.com/content/11/11/R118) supports this notion[5].  The authors used targeted resequencing of two endocannabinoid metabolic enzyme genes (fatty-acid amide hydrolase (FAAH) and monoglyceride lipase (MGLL)) in 147 normal-weight and 142 extremely obese patients.

These patients were enrolled in the CRESCENDO trial, and the patients analyzed were of European descent. However, instead of just exome sequencing, the group resequenced exonic AND intronic sequence, especially focusing on promoter regions.  They identified 1,448 single nucleotide variants, but using a statistical filter (called RareCover, a so-called collapsing method; see the sketch below) they found 4 variants in the promoter and intronic regions of the FAAH and MGLL genes which correlated with body mass index.  It should be noted that anandamide, a substrate of FAAH, is elevated in obese patients. The authors did note some issues, mentioning that "some other loci, more weakly or inconsistently associated in the original GWASs, were not replicated in our samples, which is not too surprising given the sample size of our cohort is inadequate to replicate modest associations".
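For readers unfamiliar with collapsing methods, the following is a bare-bones sketch of the general idea (a simple burden-style test; RareCover's actual variant-selection algorithm is more sophisticated, and this code is my illustration, not the authors'):

```python
# Minimal "collapsing" (burden-style) rare-variant test (illustrative;
# not the RareCover algorithm itself). Rare variants at a locus are
# collapsed into a single per-subject carrier indicator, which is then
# tested against case/control status.
import numpy as np
from scipy import stats

def collapsing_test(genotypes, is_case, maf_threshold=0.01):
    """genotypes: (n_subjects, n_variants) minor-allele counts (0/1/2).
    is_case: boolean array, True for cases (e.g., extreme obesity)."""
    maf = genotypes.mean(axis=0) / 2.0        # sample minor-allele frequency
    rare = genotypes[:, maf < maf_threshold]  # keep only the rare variants
    carrier = rare.sum(axis=1) > 0            # carries any rare allele?
    table = [[np.sum(carrier & is_case), np.sum(carrier & ~is_case)],
             [np.sum(~carrier & is_case), np.sum(~carrier & ~is_case)]]
    odds_ratio, p_value = stats.fisher_exact(table)
    return odds_ratio, p_value
```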

PLEASE WATCH VIDEO on the National Heart, Lung and Blood Institute Exome Sequencing Project

https://www.youtube.com/watch?v=-Qr5ahk1HEI

REFERENCES

http://www.phgfoundation.org/news/5164/  PHG Foundation

1. Kaiser J: Human genetics. Genetic influences on disease remain hidden. Science 2012, 338(6110):1016-1017.

2. Tennessen JA, Bigham AW, O'Connor TD, Fu W, Kenny EE, Gravel S, McGee S, Do R, Liu X, Jun G et al: Evolution and functional impact of rare coding variation from deep sequencing of human exomes. Science 2012, 337(6090):64-69.

3. Dickson SP, Wang K, Krantz I, Hakonarson H, Goldstein DB: Rare variants create synthetic genome-wide associations. PLoS Biology 2010, 8(1):e1000294.

4. Nuytemans K, Bademci G, Inchausti V, Dressen A, Kinnamon DD, Mehta A, Wang L, Zuchner S, Beecham GW, Martin ER et al: Whole exome sequencing of rare variants in EIF4G1 and VPS35 in Parkinson disease. Neurology 2013, 80(11):982-989.

5. Harismendy O, Bansal V, Bhatia G, Nakano M, Scott M, Wang X, Dib C, Turlotte E, Sipe JC, Murray SS et al: Population sequencing of two endocannabinoid metabolic genes identifies rare and common regulatory variants associated with extreme obesity and metabolite level. Genome Biology 2010, 11(11):R118.

Other posts on this site related to Genomics include:

Cancer Biology and Genomics for Disease Diagnosis

Diagnosis of Cardiovascular Disease, Treatment and Prevention: Current & Predicted Cost of Care and the Promise of Individualized Medicine Using Clinical Decision Support Systems

Ethical Concerns in Personalized Medicine: BRCA1/2 Testing in Minors and Communication of Breast Cancer Risk

Genomics & Genetics of Cardiovascular Disease Diagnoses: A Literature Survey of AHA’s Circulation Cardiovascular Genetics, 3/2010 – 3/2013

Genomics-based cure for diabetes on-the-way

Personalized Medicine: Clinical Aspiration of Microarrays

Late Onset of Alzheimer’s Disease and One-carbon Metabolism

Genetics of Disease: More Complex is How to Creating New Drugs

Genetics of Conduction Disease: Atrioventricular (AV) Conduction Disease (block): Gene Mutations – Transcription, Excitability, and Energy Homeostasis

Centers of Excellence in Genomic Sciences (CEGS): NHGRI to Fund New CEGS on the Brain: Mental Disorders and the Nervous System

Cancer Genomic Precision Therapy: Digitized Tumor’s Genome (WGSA) Compared with Genome-native Germ Line: Flash-frozen specimen and Formalin-fixed paraffin-embedded Specimen Needed

Mitochondrial Metabolism and Cardiac Function

Pancreatic Cancer: Genetics, Genomics and Immunotherapy

Issues in Personalized Medicine in Cancer: Intratumor Heterogeneity and Branched Evolution Revealed by Multiregion Sequencing

Quantum Biology And Computational Medicine

Personalized Cardiovascular Genetic Medicine at Partners HealthCare and Harvard Medical School

LEADERS in Genome Sequencing of Genetic Mutations for Therapeutic Drug Selection in Cancer Personalized Treatment: Part 2

Consumer Market for Personal DNA Sequencing: Part 4

Personalized Medicine: An Institute Profile – Coriell Institute for Medical Research: Part 3

Whole-Genome Sequencing Data will be Stored in Coriell’s Spin off For-Profit Entity

 

Read Full Post »
