3.3.8 The 3rd STATONC Annual Symposium, April 25–27, 2019, Hilton Hartford, 315 Trumbull St., Hartford, CT 06103, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair
SYMPOSIUM OBJECTIVES
The three-day symposium aims to bring oncologists and statisticians together to share new research, discuss novel ideas, ask questions and provide solutions for cancer clinical trials. In the era of big data, precision medicine, and genomics and immune-based oncology, it is crucial to provide a platform for interdisciplinary dialogue among clinical and quantitative scientists. The Stat4Onc Annual Symposium serves as a venue for oncologists and statisticians to communicate their views on trial design and conduct, drug development, and translation to patient care. Topics to be discussed include big data and genomics for oncology clinical trials, novel dose-finding designs, drug combinations, immune oncology clinical trials, and umbrella/basket oncology trials. An important aspect of Stat4Onc is the participation of researchers across academia, industry, and regulatory agencies.
The meeting agenda and program speakers will be announced soon.
Hypertriglyceridemia: Evaluation and Treatment Guideline
Reporter and Curator: Dr. Sudipta Saha, Ph.D.
Severe and very severe hypertriglyceridemia increase the risk for pancreatitis, whereas mild or moderate hypertriglyceridemia may be a risk factor for cardiovascular disease. Individuals found to have any elevation of fasting triglycerides should be evaluated for secondary causes of hyperlipidemia including endocrine conditions and medications. Patients with primary hypertriglyceridemia must be assessed for other cardiovascular risk factors, such as central obesity, hypertension, abnormalities of glucose metabolism, and liver dysfunction. The aim of this study was to develop clinical practice guidelines on hypertriglyceridemia.
The diagnosis of hypertriglyceridemia should be based on fasting triglyceride levels: mild and moderate hypertriglyceridemia (triglycerides of 150–999 mg/dl) should be diagnosed to aid in the evaluation of cardiovascular risk, and severe and very severe hypertriglyceridemia (triglycerides of >1000 mg/dl) should be considered a risk for pancreatitis. Patients with hypertriglyceridemia must be evaluated for secondary causes of hyperlipidemia, and subjects with primary hypertriglyceridemia should be evaluated for a family history of dyslipidemia and cardiovascular disease.
The treatment goal in patients with moderate hypertriglyceridemia should be a non-high-density lipoprotein cholesterol level in agreement with National Cholesterol Education Program Adult Treatment Panel guidelines. The initial treatment should be lifestyle therapy; a combination of diet modification, physical activity and drug therapy may also be considered. In patients with severe or very severe hypertriglyceridemia, a fibrate can be used as a first-line agent for reduction of triglycerides in patients at risk for triglyceride-induced pancreatitis.
Three drug classes (fibrates, niacin, n-3 fatty acids), alone or in combination with statins, may be considered as treatment options in patients with moderate to severe triglyceride levels. Statins should not be used as monotherapy for severe or very severe hypertriglyceridemia. However, statins may be useful for the treatment of moderate hypertriglyceridemia when indicated to modify cardiovascular risk.
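The fasting-triglyceride cutoffs quoted above can be captured in a small helper function. This is only an illustrative sketch of the guideline's 150 and 1000 mg/dl thresholds; the function name and category labels are invented here, and the guideline further subdivides the categories in ways not modeled below.

```python
def classify_triglycerides(fasting_tg_mg_dl):
    """Classify fasting triglycerides (mg/dl) per the cutoffs quoted above.

    150-999 mg/dl -> mild/moderate (evaluate cardiovascular risk)
    >=1000 mg/dl  -> severe/very severe (risk for pancreatitis)
    """
    if fasting_tg_mg_dl < 150:
        return "normal"
    if fasting_tg_mg_dl < 1000:
        return "mild/moderate"
    return "severe/very severe"
```

A patient with fasting triglycerides of 500 mg/dl would fall in the mild/moderate band and be evaluated for cardiovascular risk rather than pancreatitis risk.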
Once herpes simplex infects a person, the virus goes into hiding inside nerve cells, hibernating there for life and periodically waking up to reignite infection, causing cold sores or genital lesions to recur. Research from Harvard Medical School showed that the virus uses a host protein called CTCF, or cellular CCCTC-binding factor, to display this type of behavior. Experiments in mice revealed that CTCF helps herpes simplex regulate its own sleep-wake cycle, enabling the virus to establish latent infections in the body’s sensory neurons, where it remains dormant until reactivated. Preventing the latency-regulating protein from binding to the virus’s DNA weakened the virus’s ability to come out of hiding.
Herpes simplex virus’s ability to go in and out of hiding is a key survival strategy that ensures its propagation from one host to the next. Such symptom-free latency allows the virus to remain out of the reach of the immune system most of the time, while its periodic reactivation ensures that it can continue to spread from one person to the next. On one hand, so-called latency-associated transcript genes, or LAT genes, turn off the transcription of viral RNA, inducing the virus to go into hibernation, or latency. On the other hand, a protein made by a gene called ICP0 promotes the activity of genes that stimulate viral replication and causes active infection.
Based on these earlier findings, the new study revealed that this balancing act is enabled by the CTCF protein when it binds to the viral DNA. Present during latent or dormant infections, CTCF is lost during active, symptomatic infections. The researchers created an altered version of the virus that lacked two of the CTCF binding sites. The absence of the binding sites made no difference in early-stage or acute infections, and similar results were found in infected cultured human trigeminal-ganglion nerve cells and in infected mouse models. However, the researchers concluded that the mutant virus had significantly weakened reactivation capacity.
Taken together, the experiments showed that deleting the CTCF binding sites weakened the virus’s ability to wake up from its dormant state thereby establishing the evidence that the CTCF protein is a key regulator of sleep-wake cycle in herpes simplex infections.
Bioinformatics Tool Review: Genome Variant Analysis Tools, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)
European Molecular Biology Laboratory, European Bioinformatics Institute, Wellcome Genome Campus, Hinxton, Cambridge, CB10 1SD, UK. wm2@ebi.ac.uk; fiona@ebi.ac.uk.
Abstract
The Ensembl Variant Effect Predictor is a powerful toolset for the analysis, annotation, and prioritization of genomic variants in coding and non-coding regions. It provides access to an extensive collection of genomic annotation, with a variety of interfaces to suit different requirements, and simple options for configuring and extending analysis. It is open source, free to use, and supports full reproducibility of results. The Ensembl Variant Effect Predictor can simplify and accelerate variant interpretation in a wide range of study designs.
Rare diseases can be difficult to diagnose due to low incidence and incomplete penetrance of implicated alleles; however, variant analysis of whole genome sequencing (WGS) can identify the underlying genetic events responsible for the disease (Nature, 2015). A large cohort is nevertheless required for many WGS association studies in order to produce enough statistical power for interpretation. To this end, major sequencing projects have been initiated worldwide.
Although sequencing costs have dropped dramatically over the years, the cost of determining the functional consequences of such variants remains high, as thorough basic research studies must be conducted to validate the interpretation of variant data with respect to the underlying disease, and only a small fraction of variants from a genome sequencing project will affect protein function. Correct annotation of sequences and variants, and identification of the corresponding reference genes or transcripts in GENCODE or RefSeq, respectively, pose further challenges to the proper identification of sequenced variants as potentially functional variants.
To address this, the authors developed the Ensembl Variant Effect Predictor (VEP), a software suite that performs annotation and analysis of most types of genomic variation in coding and non-coding regions of the genome.
Summary of Features
Annotation: VEP can annotate two broad categories of genomic variants
Sequence variants with specific and defined changes: indels, base substitutions, SNVs, tandem repeats
Larger structural variants (>50 nucleotides)
Species and assembly/genomic database support: VEP can analyze data from any species with an assembled genome sequence and an annotated gene set. VEP supports chromosome assemblies (e.g., the latest human assembly, GRCh38), FASTA sequence input, transcripts from RefSeq, and user-derived sequences
Transcript Annotation: VEP includes a wide variety of gene and transcript related information including NCBI Gene ID, Gene Symbol, Transcript ID, NCBI RefSeq ID, exon/intron information, and cross reference to other databases such as UniProt
Protein Annotation: Protein-related fields include Protein ID, RefSeq ID, SwissProt, UniParc ID, reference codons and amino acids, SIFT pathogenicity score, protein domains
Noncoding Annotation: VEP reports variants in noncoding regions, including genomic regulatory regions, intronic regions, and transcription factor binding motifs. Data from ENCODE, BLUEPRINT, and the NIH Roadmap Epigenomics project are used for primary annotation. Perl plugins are also available to link other databases that annotate noncoding sequence features.
Frequency, phenotype, and citation annotation: VEP searches Ensembl databases containing a large amount of germline variant information and checks variants against the dbSNP single nucleotide polymorphism database. VEP integrates with mutational databases such as COSMIC and the Human Gene Mutation Database, and with structural and copy number variants from the Database of Genomic Variants. Allele frequencies are reported from the 1000 Genomes Project and the NHLBI Exome Sequencing Project, and VEP integrates with PubMed for literature annotation. Phenotype information comes from OMIM, Orphanet, and GWAS catalogs, with clinical information on variants from ClinVar.
Flexible Input and Output Formats: VEP supports the standard next-generation sequencing input format known as the variant call format (VCF). VEP can also process variant identifiers from other database formats. Output formats are tab-delimited and give the user choices in presentation of results (HTML or text based)
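VCF is a plain, tab-delimited text format, which is why it is easy to feed into tools like VEP. As a minimal sketch (pure Python, independent of VEP itself), the fixed columns of a VCF record can be read like this; the example record and the returned field selection are illustrative only.

```python
def parse_vcf_line(line):
    """Split one VCF data line on tabs and return its commonly used fixed fields."""
    fields = line.rstrip("\n").split("\t")
    chrom, pos, vid, ref, alt, qual, flt, info = fields[:8]
    return {
        "chrom": chrom,
        "pos": int(pos),          # VCF positions are 1-based
        "id": vid,
        "ref": ref,
        "alt": alt.split(","),    # multiple alternate alleles are comma-separated
        "filter": flt,
    }

# A made-up single-record example in VCF's tab-delimited layout
record = parse_vcf_line("1\t65568\trs373\tA\tC\t50\tPASS\tDP=100")
```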
Choice of user interface
Online tool (VEP Web): simple point-and-click interface; incorporates Instant VEP functionality and copy-and-paste input. Results can be stored online in cloud storage on Ensembl.
VEP script: VEP is available as a downloadable Perl script (see below for link) and can process large amounts of data rapidly. This interface is highly flexible, with the ability to integrate multiple plugins available from Ensembl and GitHub. The ability to alter the Perl code and add plugins allows the user to modify any feature of VEP.
VEP REST API: provides robust computational access from any programming language and returns basic variant annotation. It can make use of external plugins.
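As a sketch of what REST access looks like, the public Ensembl REST service exposes VEP endpoints such as `/vep/:species/region/:region/:allele`. The snippet below only constructs the request URL (the region string is a hypothetical example); the actual network call is shown commented out, since it requires internet access and the live service.

```python
import json
from urllib.request import Request, urlopen

SERVER = "https://rest.ensembl.org"

def vep_region_url(species, region, allele):
    """Build a VEP region-endpoint URL requesting a JSON response."""
    return f"{SERVER}/vep/{species}/region/{region}/{allele}?content-type=application/json"

# Hypothetical variant: region string in Ensembl's chrom:start-end:strand form
url = vep_region_url("human", "9:22125503-22125502:1", "C")

# To actually query the live service (requires network access):
# with urlopen(Request(url)) as resp:
#     annotations = json.loads(resp.read())
```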
Watch the video on the VEP Web version for training on how to analyze your sequence in VEP
Availability of data and materials
The dataset supporting the conclusions of this article is available from Illumina’s Platinum Genomes [93] and uses the Ensembl release 75 gene set. Pre-built data sets are available for all Ensembl and Ensembl Genomes species [94]. They can also be downloaded automatically during VEP installation.
Nature. 2015 Mar 12;519(7542):223-8. doi: 10.1038/nature14135. PMID:25533962
Updated 11/15/2018
Research Points to Caution in Use of Variant Effect Prediction Bioinformatic Tools
Although we have the ability to use high throughput sequencing to identify allelic variants occurring in rare disease, correlation of these variants with the underlying disease is often difficult due to a few concerns:
For rare sporadic diseases, classical gene/variant association studies have proven difficult to perform (Meyts et al. 2016)
Whole exome sequencing (WES) returns a considerable number of variants, making it difficult to differentiate the normal allelic variation found in the human population from disease-causing pathogenic alleles
For rare diseases, pathogenic allele frequencies are generally low
Therefore, for these rare pathogenic alleles, bioinformatics tools that predict the resulting changes in gene function may provide insight into disease etiology when experimental validation of these allelic changes is difficult.
In a 2017 Genes & Immunity paper, Line Lykke Andersen and Rune Hartmann tested the reliability of various bioinformatic software packages to predict the functional consequences of variants of six different genes involved in interferon induction and of sixteen allelic variants of the IFNLR1 gene. These variants were found in cohorts of patients presenting with herpes simplex encephalitis (HSE). Most of the adult population is seropositive for herpes simplex virus (HSV); however, a small fraction (1 in 250,000 individuals per year) of HSV-infected individuals will develop HSE (Hjalmarsson et al., 2007). It has been suggested that HSE occurs in individuals with rare primary immunodeficiencies caused by gene defects affecting innate immunity through reduced production of interferons (IFN) (Zhang et al., Lim et al.).
References
Meyts I, Bosch B, Bolze A, Boisson B, Itan Y, Belkadi A, et al. Exome and genome sequencing for inborn errors of immunity. J Allergy Clin Immunol. 2016;138:957–69.
Hjalmarsson A, Blomqvist P, Skoldenberg B. Herpes simplex encephalitis in Sweden, 1990-2001: incidence, morbidity, and mortality. Clin Infect Dis. 2007;45:875–80.
Zhang SY, Jouanguy E, Ugolini S, Smahi A, Elain G, Romero P, et al. TLR3 deficiency in patients with herpes simplex encephalitis. Science. 2007;317:1522–7.
Lim HK, Seppanen M, Hautala T, Ciancanelli MJ, Itan Y, Lafaille FG, et al. TLR3 deficiency in herpes simplex encephalitis: high allelic heterogeneity and recurrence risk. Neurology. 2014;83:1888–97.
Genes Immun. 2017 Dec 4. doi: 10.1038/s41435-017-0002-z.
We selected two sets of naturally occurring human missense allelic variants within innate immune genes. The first set represented eleven non-synonymous variants in six different genes involved in interferon (IFN) induction, present in a cohort of patients suffering from herpes simplex encephalitis (HSE), and the second set represented sixteen allelic variants of the IFNLR1 gene. We recreated the variants in vitro and tested their effect on protein function in a HEK293T-cell-based assay. We then used an array of 14 available bioinformatics tools to predict the effect of these variants upon protein function. To our surprise, two of the most commonly used tools, CADD and SIFT, produced a high rate of false positives, whereas SNPs&GO exhibited the lowest rate of false positives in our test. Because the predominant problem in our test was false positives, inclusion of the mutation significance cutoff (MSC) did not improve accuracy.
Methodology
Identification of rare variants
Genomes of nineteen Dutch patients with a history of HSE were sequenced by WES, and novel HSE-causing variants were identified by filtering for single nucleotide polymorphisms (SNPs) that had a frequency below 1% in the NHLBI Exome Sequencing Project Exome Variant Server and the 1000 Genomes Project and were present within 204 genes involved in the immune response to HSV.
The identified variants (204) were manually evaluated for involvement in IFN induction, based on IDbase and KEGG pathway database analysis.
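The frequency filter described above can be sketched in a few lines of code. The 1% cutoff and the two reference panels come from the study; the variant records, field names, and gene set below are hypothetical stand-ins for real annotated WES output.

```python
RARE_FREQ_CUTOFF = 0.01  # <1% population frequency, as in the study

def rare_candidates(variants, immune_genes):
    """Keep variants rare in both reference panels and located in genes of interest."""
    return [
        v for v in variants
        if v["esp_freq"] < RARE_FREQ_CUTOFF   # NHLBI ESP Exome Variant Server frequency
        and v["kg_freq"] < RARE_FREQ_CUTOFF   # 1000 Genomes Project frequency
        and v["gene"] in immune_genes
    ]

# Toy input: one rare candidate and one common polymorphism
variants = [
    {"gene": "TLR3", "esp_freq": 0.001, "kg_freq": 0.002},
    {"gene": "TLR3", "esp_freq": 0.150, "kg_freq": 0.120},  # common -> filtered out
]
hits = rare_candidates(variants, {"TLR3", "IFNLR1"})
```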
In order to validate the predictive value of the software, HEK293T cells deficient in IRF3, MAVS, or IKKe/TBK1 were cotransfected with the nine variants of the aforementioned genes and a luciferase reporter under control of the IFN-b promoter, and luciferase activity was measured as an indicator of IFN signaling function. Western blotting was performed to confirm expression of the constructs.
In predicting the pathogenicity of a nonsynonymous single-nucleotide variant (nsSNV), a radical change in amino acid properties is prone to be classified as being pathogenic. However, not all such nsSNVs are associated with human diseases. We generated random forest (RF) models individually for each amino acid substitution to differentiate pathogenic nsSNVs in the Human Gene Mutation Database and common nsSNVs in dbSNP. We named a set of our models ‘Individual Meta RF’ (InMeRF). Ten-fold cross-validation of InMeRF showed that the areas under the curves (AUCs) of receiver operating characteristic (ROC) and precision-recall curves were on average 0.941 and 0.957, respectively. To compare InMeRF with seven other tools, the eight tools were generated using the same training dataset, and were compared using the same three testing datasets. ROC-AUCs of InMeRF were ranked first in the eight tools. We applied InMeRF to 155 pathogenic and 125 common nsSNVs in seven major genes causing congenital myasthenic syndromes, as well as in VANGL1 causing spina bifida, and found that the sensitivity and specificity of InMeRF were 0.942 and 0.848, respectively. We made the InMeRF web service, and also made genome-wide InMeRF scores available online (https://www.med.nagoya-u.ac.jp/neurogenetics/InMeRF/).
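The ROC-AUC figures quoted in the InMeRF abstract can be computed without any special library, via the rank-based (Mann-Whitney) identity: AUC equals the probability that a randomly chosen pathogenic variant scores higher than a randomly chosen benign one. The sketch below uses made-up predictor scores, not InMeRF output.

```python
def roc_auc(scores_pos, scores_neg):
    """ROC AUC via the Mann-Whitney U statistic: probability that a positive
    (pathogenic) example outscores a negative (benign) one, ties counting 1/2."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predictor scores for pathogenic vs. common variants
auc = roc_auc([0.9, 0.8, 0.7], [0.6, 0.4, 0.75])  # 8 of 9 pairs ranked correctly
```

This O(n*m) form is fine for small test sets; large benchmarks typically use the equivalent rank-sum formulation instead.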
Numerous human diseases are caused by mutations in genomic sequences. Since amino acid changes affect protein function through mechanisms often predictable from protein structure, the integration of structural and sequence data enables us to estimate with greater accuracy whether and how a given mutation will lead to disease. Publicly available annotated databases enable hypothesis assessment and benchmarking of prediction tools. However, the results are often presented as summary statistics or black box predictors, without providing full descriptive information. We developed a new semi-manually curated human variant database presenting information on the protein contact-map, sequence-to-structure mapping, amino acid identity change, and stability prediction for the popular UniProt database. We found that the profiles of pathogenic and benign missense polymorphisms can be effectively deduced using decision trees and comparative analyses based on the presented dataset. The database is made publicly available through https://zhanglab.ccmb.med.umich.edu/ADDRESS.
Thousands of genomic structural variants (SVs) segregate in the human population and can impact phenotypic traits and diseases. Their identification in whole-genome sequence data of large cohorts is a major computational challenge. Most current approaches identify SVs in single genomes and afterwards merge the identified variants into a joint call set across many genomes. We describe the approach PopDel, which directly identifies deletions of about 500 to at least 10,000 bp in length in data of many genomes jointly, eliminating the need for subsequent variant merging. PopDel scales to tens of thousands of genomes as we demonstrate in evaluations on up to 49,962 genomes. We show that PopDel reliably reports common, rare and de novo deletions. On genomes with available high-confidence reference call sets PopDel shows excellent recall and precision. Genotype inheritance patterns in up to 6794 trios indicate that genotypes predicted by PopDel are more reliable than those of previous SV callers. Furthermore, PopDel’s running time is competitive with the fastest tested previous tools. The demonstrated scalability and accuracy of PopDel enables routine scans for deletions in large-scale sequencing studies.
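PopDel's real algorithm works jointly on read-pair profiles from many genomes; purely as a toy illustration of the difference between joint calling and per-sample calling followed by merging, the sketch below emits a single joint deletion call when candidate intervals from enough samples overlap reciprocally. All function names, thresholds, and coordinates are invented for illustration and are not PopDel's.

```python
def reciprocal_overlap(a, b):
    """Fraction of the smaller of intervals a, b covered by their intersection."""
    lo = max(a[0], b[0])
    hi = min(a[1], b[1])
    if hi <= lo:
        return 0.0
    return (hi - lo) / min(a[1] - a[0], b[1] - b[0])

def joint_deletion_call(per_sample_candidates, min_support=2, min_ro=0.5):
    """Emit one joint call if enough samples show reciprocally overlapping candidates."""
    anchor = per_sample_candidates[0]
    support = sum(
        1 for cand in per_sample_candidates
        if reciprocal_overlap(anchor, cand) >= min_ro
    )
    return anchor if support >= min_support else None

# Three genomes each carrying roughly the same ~2 kb deletion
call = joint_deletion_call([(1000, 3000), (1050, 2950), (900, 3100)])
```

The point of the toy: one pass over all samples yields one consistent call, so there is no later merge step in which slightly different per-sample breakpoints have to be reconciled.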
Researchers have embraced CRISPR gene-editing as a method for altering genomes, but some have reported that unwanted DNA changes may slip by undetected. The tool can cause large DNA deletions and rearrangements near its target site on the genome. Such alterations can confuse the interpretation of experimental results and could complicate efforts to design therapies based on CRISPR. The finding is in line with previous results from not only CRISPR but also other gene-editing systems.
CRISPR -Cas9 gene editing relies on the Cas9 enzyme to cut DNA at a particular target site. The cell then attempts to reseal this break using its DNA repair mechanisms. These mechanisms do not always work perfectly, and sometimes segments of DNA will be deleted or rearranged, or unrelated bits of DNA will become incorporated into the chromosome.
Researchers often use CRISPR to generate small deletions in the hope of knocking out a gene’s function. But when examining CRISPR edits, researchers found large deletions (often several thousand nucleotides) and complicated rearrangements in which previously distant DNA sequences were stitched together. Many researchers use PCR amplification of short snippets of DNA to test whether their edits have been made properly, but this approach can miss larger deletions and rearrangements.
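The limitation of short-amplicon PCR screening mentioned above can be made concrete: if a large deletion removes one of the primer-binding sites, the assay simply fails to amplify that allele and the deletion goes unreported. The coordinates and primer positions below are invented purely for illustration.

```python
def primer_site_intact(primer, deletion):
    """A primer can bind only if the deletion does not overlap its binding site."""
    return primer[1] <= deletion[0] or primer[0] >= deletion[1]

def amplicon_detects(primer_fwd, primer_rev, deletion):
    """A short amplicon reveals the deletion only if both primer sites survive
    and the deleted segment lies between them (shrinking the PCR product)."""
    return (
        primer_site_intact(primer_fwd, deletion)
        and primer_site_intact(primer_rev, deletion)
        and deletion[0] >= primer_fwd[1]
        and deletion[1] <= primer_rev[0]
    )

# Small on-target deletion between the primers: visible as a shorter product
small = amplicon_detects((100, 120), (400, 420), (200, 250))
# Multi-kilobase deletion removing the reverse primer site: silently missed
large = amplicon_detects((100, 120), (400, 420), (200, 5000))
```

This is why long-read sequencing or long-range PCR is needed to catch the kilobase-scale lesions the study describes.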
These deletions and rearrangements occur only with gene-editing techniques that rely on DNA cutting, not with CRISPR modifications that avoid cutting DNA, such as base-editing systems that switch one nucleotide for another, or systems that use inactivated Cas9 fused to other enzymes to turn genes on or off or to target RNA. Overall, these unwanted edits are a problem that deserves more attention, but this should not stop anyone from using CRISPR; users simply need to analyze the outcomes more thoroughly.
Long interspersed nuclear element 1 (LINE1) is repeated half a million times in the human genome, making up nearly a fifth of the DNA in every cell, yet it was long dismissed as junk DNA and received little study. LINE1, like other transposons (or “jumping genes”), has the unusual ability to copy and insert itself in random places in the genome. Several research groups have uncovered possible roles in early mouse embryos and in brain cells, but the functions of LINE1 were never properly established.
LINE1 first drew geneticists’ attention when its insertions were found to cause cancers and genetic disorders such as hemophilia. But researchers at the University of California, San Francisco suspected there was more to LINE1 than its harmful effects.
Many reports showed that LINE1 is especially active inside developing embryos, which suggests that the element actually plays a key role in coordinating the development of cells in an embryo. Researchers at the University of California, San Francisco figured out how to turn LINE1 off in mouse embryos by blocking LINE1 RNA. As a result, the embryos got stuck at the two-cell stage, right after a fertilized egg has first split. Without LINE1, embryos essentially stopped developing.
The researchers thought that LINE1 RNA particles act as molecular “glue,” bringing together a suite of molecules that switch off the two-cell stage and kick it into the next phase of development. In particular, it turns off a gene called Dux, which is active in the two-cell stage.
LINE1’s ability to copy itself, however, seems to have nothing to do with its role in embryonic development. When LINE1 was blocked from inserting itself into the genome, the embryonic stem cells remained unaffected. It’s possible that cells in embryos have a way of making LINE1 RNA while also preventing its potentially harmful “jumping” around in the genome. But it’s unlikely that every one of the thousands of copies of LINE1 is actually being used to regulate embryonic development.
LINE1 is abundant in the genomes of almost all mammals. Other transposons, also once considered junk DNA, have turned out to have critical roles in development in human cells as well. There are differences between mice and humans, so the next obvious step is to study LINE1 in human cells, where it makes up 17 percent of the genome.
Live Conference Coverage @Medcitynews Converge 2018 Philadelphia: Liquid Biopsy and Gene Testing vs. Reimbursement Hurdles
Reporter: Stephen J. Williams, PhD
9:25- 10:15 Liquid Biopsy and Gene Testing vs. Reimbursement Hurdles
Genetic testing, whether broad-scale or single gene-testing, is being ordered by an increasing number of oncologists, but in many cases, patients are left to pay for these expensive tests themselves. How can this dynamic be shifted? What can be learned from the success stories?
Moderator: Shoshannah Roth, Assistant Director of Health Technology Assessment and Information Services, ECRI Institute @Ecri_Institute
Speakers:
Rob Dumanois, Manager, Reimbursement Strategy, Thermo Fisher Scientific
Eugean Jiwanmall, Senior Research Analyst for Medical Policy & Technology Evaluation, Independence Blue Cross @IBX
Michael Nall, President and Chief Executive Officer, Biocept
Michael: There is a wide range of liquid biopsy services out there. There are screening companies, but they are young and need lots of data to develop a pan-cancer diagnostic test. Most liquid biopsy use is for predictive analysis, especially therapeutic monitoring. Sometimes solid biopsies are impossible, limited, or unreliable due to metastasis or hard-to-biopsy tissues like lung.
Eugean: Circulating tumor cells and ctDNA are the only FDA-approved liquid biopsies. However you choose to evaluate the liquid biopsy (PCR, NGS, FISH, etc.) helps determine what reimbursement options are available.
Rob: Adoption of reimbursement for liquid biopsy is moving faster in Europe than the US. It is possible in US that there may be changes to the payment in one to two years though.
Michael: China is adopting liquid biopsy rapidly. Patients are demanding this in China.
Reimbursement
Eugean: For IBX to make better decisions, we need more clinical trials that correlate liquid biopsy results with treatment outcome. Most of the major cancer networks, like NCCN, ASCO, and CAP, have only recommendations, not approved guidelines, at this point. From his perspective, for lung cancer NCCN makes only a suggestion regarding EGFR mutations; only the companion diagnostic is FDA-approved.
Michael: Fine needle biopsies are usually needed by the pathologist anyway before going to liquid biopsy, since the underlying mutations in the original tumor must be known; that is simply how it is done in most cancer centers.
Eugean: Whatever the established way of doing things, you have to outperform the clinical results of the old method for adoption of a newer method.
Reimbursement issues have driven a need for more research into the clinical validity and utility of predictive and therapeutic markers with regard to liquid biopsies. However, although many academic centers try to partner with Biocept, Biocept has limited funds and must concentrate on only a few trials. The different payers use different evidence-based methods to evaluate liquid biopsy markers. ECRI also has a database of liquid biopsy markers built on evidence-based criteria. IBX does see consistency among payers as far as decisions and policy.
NGS in liquid biopsy
Rob: There is a path to coverage, especially through the FDA: if you have an FDA-cleared NGS test, it will be covered. These are long and difficult paths to reimbursement for NGS, but it is feasible. The Medicare line of IBX covers this testing; however, on the commercial side they cannot cover it. @IBX: for colon cancer, only KRAS and NRAS have clinical utility, along with only a handful of other cancer-related genes for other cancers. For a companion diagnostic built into such a test, do the other markers in the panel cost too much?
Please follow on Twitter using the following #hash tags and @pharma_BI
Live Conference Coverage Medcity Converge 2018 Philadelphia: Clinical Trials and Mega Health Mergers
Reporter: Stephen J. Williams, PhD
1:30 – 2:15 PM Clinical Trials 2.0
The randomized, controlled clinical trial is the gold standard, but it may be time for a new model. How can patient networks and new technology be leveraged to boost clinical trial recruitment and manage clinical trials more efficiently?
Michele: Medable is creating a digital surrogate biomarker for short term end result for cardiology clinical trials as well as creating a virtual site clinical trial design (independent of geography)
Sameek: OSU is developing RNASeq tests for oncogenic fusions that are actionable
John: ability to use various technologies to conduct telehealth and tele-trials. So why are we talking about Clinical Trials 2.0?
Andrew: We are not meeting many patients’ needs. Providers also have workloads that prevent the efficient running of a clinical trial.
Michele: Personalized medicine: what is the framework for how we conduct clinical trials in this new paradigm?
Sameek: How do we find those rare patients outside of a health network? A fragmented health system is hurting patient recruitment efforts.
Wout: The Christmas Tree paradigm: collecting data points based on previous studies may lead to unnecessary criteria for patient recruitment
Sameek: OSU has a cancer network (Orion) with a 95% recruitment success rate. Over the Orion network, sequencing is performed at $10,000 per patient, with costs reimbursed through the network. The network helps pharma companies find patients and patients find drugs.
Wout: reaching out to different stakeholders
John: What he sees in 2.0 is the use of technology. They took on the business of 12 clinics, integrated these sites, and were able to improve the patient experience, which helped recruitment into trials. Now, after a patient is recruited, how does the 2.0 model work?
Sameek: Since we work with pharma companies, what if we bring in patients from all over the US? How do we continue to take care of them?
Andrew: Utilizing technology is critically important for telehealth and tele-clinical trials to work.
Michele: the utilization of tele-health by patients is rather low.
Wout: We are looking for insights into the data. So we are concentrated on collecting the data and not decision trees.
John: What is a barrier to driving Clinical Trial 2.0?
Andrew: The complexity is a barrier to the patient. Need to show the simplicity of this. Need to match trials within a system.
Sameek: Data-sharing incentives might not be there, or the value is not recognized by all players. And it is hard to figure out how to share the data in the most efficient way.
Wout: The key is usually to think globally and act locally, but healthcare is the inverse of this; there are so many stakeholders that adoption by all of them takes time.
Michele: Accessibility of healthcare data by patients is revolutionary. Medical training in the US does not teach doctors to communicate the value of a trial.
John: We are in a value-driven economy; you have to give a lot to get something in this economy. Final comments?
Sameek: We need fundamental research on the validity of Clinical Trials 2.0.
Wout: Use tools to mine the data rather than doing everything manually.
Andrew: Show value to the patient.
2:20-3:00 PM CONVERGEnce on Steroids: Why Comcast and Independence Blue Cross?
This year has seen a great deal of convergence in health care. One of the most innovative collaborations announced was that of Cable and Media giant Comcast Corporation and health plan Independence Blue Cross. This fireside chat will explore what the joint venture is all about, the backstory of how this unlikely partnership came to be, and what it might mean for our industry.
Moderator: Tom Olenzak, Managing Director Strategic Innovation Portfolio, Independence Blue Cross @IBX
Speakers: Marc Siry, VP, Strategic Development, Comcast; Michael Vennera, SVP, Chief Information Officer, Independence Blue Cross
Comcast and Independence Blue Cross are teaming up to form an independent health firm that brings various players in healthcare onto one platform, giving people a clear path to manage their healthcare. It's not just a payer and an information system but an ecosystem, within Philadelphia and across the nation.
Michael: Around 2015, at a health innovation conference, the two companies came together to produce a demo of how they envision the future of healthcare.
Marc: When we think of a customer we think of the household. So we thought about aggregating services to people in health. How do people interact with their healthcare system?
What are the risks for bringing this vision to reality?
Michael: Key to experience is how to connect consumer to caregiver.
How do we aggregate the data, and present it in a way to consumer where it is actionable?
How do we help the patient to know where to go next?
Marc: The concept is ubiquity: not just an app, nor asking the provider to ask the patient to download an app, but using our platform to expand across all forms of media. They did a study with an insurer on metabolic syndrome and people's viewing habits. When you combine the expertise of IBX with the scale of the Comcast platform, you can provide a great amount of usable data.
Michael: Analytics will be of prime importance to the venture.
Tom: We look at lots of companies that try to pitch technologies, but they don't understand that healthcare is a human problem, not a tech problem. What have you learned?
Marc: The adoption rate of new tech by doctors is very low because they are very busy. Understanding the clinician's workflow, and how not to disrupt it, was humbling for us.
Michael: The speed at which big tech companies can integrate and innovate new technologies is very rapid, something we did not appreciate. We want to get this off the ground locally but then take the solution national and global.
Marc: We are not in competition with local startups; we are looking to work with them to build scale and operability, so startups need to show how they can scale up. This joint venture is designed to look at these ideas. However, it will take a while before we open up the ecosystem, until we can see how they would add value. There are also challenges for small companies working with large organizations.
Please follow on Twitter using the following #hashtags and @pharma_BI
#MCConverge
#cancertreatment
#healthIT
#innovation
#precisionmedicine
#healthcaremodels
#personalizedmedicine
#healthcaredata
And at the following handles:
@pharma_BI
@medcitynews
Please see related articles on Live Coverage of Previous Meetings on this Open Access Journal
The CRISPR-Cas9 system has proven to be a powerful tool for genome editing allowing for the precise modification of specific DNA sequences within a cell. Many efforts are currently underway to use the CRISPR-Cas9 system for the therapeutic correction of human genetic diseases. CRISPR/Cas9 has revolutionized our ability to engineer genomes and conduct genome-wide screens in human cells.
CRISPR–Cas9 induces a p53-mediated DNA damage response and cell cycle arrest in immortalized human retinal pigment epithelial cells, leading to a selection against cells with a functional p53 pathway. Inhibition of p53 prevents the damage response and increases the rate of homologous recombination from a donor template. These results suggest that p53 inhibition may improve the efficiency of genome editing of untransformed cells and that p53 function should be monitored when developing cell-based therapies utilizing CRISPR–Cas9.
Whereas some cell types are amenable to genome engineering, genomes of human pluripotent stem cells (hPSCs) have been difficult to engineer, with reduced efficiencies relative to tumour cell lines or mouse embryonic stem cells. Using hPSC lines with stable integration of Cas9 or transient delivery of Cas9-ribonucleoproteins (RNPs), an average insertion or deletion (indel) efficiency greater than 80% was achieved. This high efficiency of insertion or deletion generation revealed that double-strand breaks (DSBs) induced by Cas9 are toxic and kill most hPSCs.
The toxic response to DSBs was P53/TP53-dependent, such that the efficiency of precise genome engineering in hPSCs with a wild-type P53 gene was severely reduced. These results indicate that Cas9 toxicity creates an obstacle to the high-throughput use of CRISPR/Cas9 for genome engineering and screening in hPSCs. As hPSCs can acquire P53 mutations, cell replacement therapies using CRISPR/Cas9-engineered hPSCs should proceed with caution, and such engineered hPSCs should be monitored for P53 function.
CRISPR-based editing of T cells to treat cancer, as scientists at the University of Pennsylvania are studying in a clinical trial, should also not have a p53 problem. Nor should any therapy developed with CRISPR base editing, which does not make the double-stranded breaks that trigger p53. But, there are pre-existing humoral and cell-mediated adaptive immune responses to Cas9 in humans, a factor which must be taken into account as the CRISPR-Cas9 system moves forward into clinical trials.
Knowing the genetic vulnerability of bladder cancer for therapeutic intervention, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)
Knowing the genetic vulnerability of bladder cancer for therapeutic intervention
Reporter and Curator: Dr. Sudipta Saha, Ph.D.
A mutated gene called RAS gives rise to the signalling protein Ral, which is involved in tumour growth in the bladder. Many researchers tried and failed to target and stop this wayward gene. Signalling proteins such as Ral usually shift between active and inactive states.
So researchers next tried to stop Ral from entering its active state. In the inactive state, Ral exposes a pocket that closes when the protein becomes active. After five years, the researchers found a small molecule, dubbed BQU57, that can wedge itself into the pocket to prevent Ral from closing and becoming active. BQU57 has now been licensed for further development.
Researchers have a growing body of genetic data on bladder cancer, some of which threatens to overturn the supposed causes of the disease. Genetics has also allowed bladder cancer to be reclassified from two categories into five distinct subtypes, each with different characteristics and weak spots. All these advances bode well for drug development and for improved diagnosis and prognosis.
Among the groups studying the genetics of bladder cancer are two large international teams: Uromol (named for urology and molecular biology), based at Aarhus University Hospital in Denmark, and The Cancer Genome Atlas (TCGA), based at institutions in Texas and Boston. Each team tackled a different type of cancer, based on the traditional classification of whether or not a tumour has grown into the muscle wall of the bladder. Uromol worked on the more common, earlier form, non-muscle-invasive bladder cancer, whereas TCGA looked at muscle-invasive bladder cancer, which has a lower survival rate.
The Uromol team sought to identify people whose non-invasive tumours might return after treatment, becoming invasive or even metastatic. Bladder cancer has a high risk of recurrence, so people whose non-invasive cancer has been treated need to be monitored for many years, undergoing cystoscopy every few months. They looked for predictive genetic footprints in the transcriptome of the cancer, which contains all of a cell’s RNA and can tell researchers which genes are turned on or off.
They found three subgroups with distinct basal and luminal features, as proposed by other groups, each with different clinical outcomes in early-stage bladder cancer. These features sort bladder cancer into genetic categories that can help predict whether the cancer will return. The researchers also identified mutations linked to tumour progression, among them mutations in the so-called APOBEC genes, which code for enzymes that modify RNA or DNA molecules; this effect can lead to cancer and make it aggressive.
The second major research group, TCGA, is led by the National Cancer Institute and the National Human Genome Research Institute and involves thousands of researchers across the USA. The project has already mapped genomic changes in 33 cancer types, including breast, skin and lung cancers. The TCGA researchers, who study muscle-invasive bladder cancer, have looked at tumours that were already identified as fast-growing and invasive.
The work by Uromol, TCGA and other labs has provided a clearer view of the genetic landscape of early- and late-stage bladder cancer. There are five subtypes for the muscle-invasive form: luminal, luminal–papillary, luminal–infiltrated, basal–squamous, and neuronal, each of which is genetically distinct and might require different therapeutic approaches.
Bladder cancer has the third-highest mutation rate of any cancer, behind only lung cancer and melanoma. The TCGA team has confirmed Uromol research showing that most bladder-cancer mutations occur in the APOBEC genes. It is not yet clear why APOBEC mutations are so common in bladder cancer, but studies of the mutations have yielded one startling implication: the APOBEC enzyme causes mutations early in the development of bladder cancer, independently of cigarette smoke or other known exposures.
The TCGA researchers found that a subset of bladder-cancer patients, those with the greatest number of APOBEC mutations, had an extremely high five-year survival rate of about 75%. Surprisingly, patients with fewer APOBEC mutations fared less well.
This detailed knowledge of bladder-cancer genetics may help to pinpoint the specific vulnerabilities of cancer cells in different people. Over the past decade, Broad Institute researchers have identified more than 760 genes that cancer needs to grow and survive. Their genetic map might take another ten years to finish, but it will list every genetic vulnerability that can be exploited. The goal of cancer precision medicine is to take the patient’s tumour and decode the genetics, so the clinician can make a decision based on that information.