Healthcare analytics and AI solutions for biological big data: an AI platform for the biotech, life sciences, medical and pharmaceutical industries, as well as for related technological approaches, e.g., curation and text analysis with machine learning and other AI applications in these industries.
UPDATED on 3/18/2022: In Europe, BigData@Heart aims to improve patient outcomes and reduce the societal burden of atrial fibrillation (AF), heart failure (HF) and acute coronary syndrome (ACS).
Reporter: Aviva Lev-Ari, PhD, RN
UPDATED on 3/18/2022
Restoring Sinus Rhythm Reverses Cardiac Remodeling and Reduces Valvular Regurgitation in Patients With Atrial Fibrillation
Cardiac chamber remodeling in atrial fibrillation (AF) reflects the progression of cardiac rhythm and may affect functional regurgitation.
Objectives
The purpose of this study was to explore the 3-dimensional echocardiographic variables of cardiac cavity remodeling and the impact on functional regurgitation in patients with AF with/without sinus rhythm restoration at 12 months.
Methods
A total of 117 consecutive patients hospitalized for AF were examined using serial 3-dimensional transthoracic echocardiography at admission, at 6 months, and at 12 months (337 examinations).
Results
During follow-up, 47 patients with active restoration of sinus rhythm (SR) (through cardioversion and/or ablation) had a decrease in all atrial indexed volumes (Vi), end-systolic (ES) right ventricular (RV) Vi, an increase in end-diastolic (ED) left ventricular Vi, and an improvement in 4-chambers function (P < 0.05). Patients with absence/failure of restoration of SR (n = 39) had an increase in ED left atrial Vi and ED/ES RV Vi without modification of 4-chambers function, except for a decrease in left atrial emptying fraction (P < 0.05). Patients with spontaneous restoration of SR (n = 31) had no changes in Vi or function. The authors found an improvement vs baseline in severity of functional regurgitation in patients with active restoration of SR (tricuspid and mitral regurgitation) and in spontaneous restoration of SR (tricuspid regurgitation) (P < 0.05). In multivariable analysis, right atrial and/or left atrial reverse remodeling exclusively correlated with intervention (cardioversion and/or ablation) during 12-month follow-up.
Conclusions
Management of AF should focus on restoration of SR to induce anatomical (all atrial Vi, ES RV Vi) and/or functional (4 chambers) cardiac cavity reverse remodeling and reduce severity of functional regurgitation. (Thromboembolic and Bleeding Risk Stratification in Patients With Non-valvular Atrial Fibrillation [FASTRHAC]; NCT02741349)
The objective of BigData@Heart is to develop a data-driven translational research platform of unparalleled scale and phenotypical resolution, with the aim to improve patient outcomes and reduce societal burden of atrial fibrillation (AF), heart failure (HF) and acute coronary syndrome (ACS).
AF, HF and ACS are major drivers of cardiovascular disease (CVD), which causes more than 3.9 million deaths each year across Europe – accounting for 45% of all deaths (49% of deaths among women and 40% of deaths among men) – with 1.3 million of these deaths occurring before the age of 75 years. Of the total cost of CVD in the EU (€210 billion a year), around 53% (€111 billion) is due to health care costs, 26% (€54 billion) to productivity losses and 21% (€45 billion) to the informal care of people with CVD.(i)
Currently, the management of AF, HF and ACS is complicated by their complex aetiology and heterogeneous prognoses. This renders the response to therapy unpredictable, with large variations amongst individuals and, importantly, small or undetectable treatment effects in large patient trials. Tolerability of medications and adherence to current treatments also show wide variations. Aside from the medical need, drug development pipelines from early target validation through to late post-marketing work have proven to be slow and high-risk. The lack of high-resolution biomarkers and computable definitions frustrates progress in the development of successful CVD therapies. There is a clear need for a better definition of CVD, its outcomes and its prognoses, through improved biomarkers and endpoints.
BigData@Heart uniquely brings together key players and stakeholders in the CVD field to address these challenges. The clinical researchers involved have been instrumental in shaping current AF, HF and ACS treatment and management in Europe. They will join forces with leading epidemiologists, big data scientists, leading cardiovascular practitioners, pharmaceutical industry scientists, experts in ethics and legal aspects, and patient organisations from across Europe. The BigData@Heart consortium will develop a data-driven translational research platform that aims to deliver clinically relevant disease phenotypes, scalable insights from real-world evidence, best practices in drug development, and personalised medicines through advanced analytics.
For the first time, BigData@Heart will assemble European-wide consented cohorts (conventional research data), electronic health records (EHRs) in population settings (e.g. CALIBER, ABUCASIS, MONDRIAAN), hospital based EHRs, disease quality improvement registries (e.g. SWEDEHEART, NICOR, SwedeHF), clinically recorded imaging data, and trial data (covering over 75,000 patients).
BigData@Heart will deliver population-relevant disease-based datasets (with > 5 million cases of HF, AF and ACS, and healthy population cohorts of > 16 million people accruing a further > 500,000 cases during follow-up) and phenotypic depth, with biomarker, behavioural, clinical, imaging and genomic data from genome-wide association study (GWAS) consortia in each disease (AFGen, HERMES, GENIUS-CHD).
This project will develop and test a framework that will enable big data driven cardiovascular research, including the development of:
New definitions of diseases and outcomes that are universal, computable, and relevant for patients, clinicians, industry and regulators.
Informatics platforms that link, visualise and harmonise data sources of varying types, completeness and structure.
Data science techniques to develop new definitions of disease, identify new phenotypes, and construct personalised predictive models.
Guidelines that allow for cross-border usage of big data sources acknowledging ethical and legal constraints as well as data security.
The ultimate expected impact of BigData@Heart on science, industry, policies, and patients includes a better understanding of heart disease, the development of new therapy targets, improved drug and device development/utilisation, and laying a scientific foundation for progress in the personalised treatment and management of CVD.
BigData@Heart is a 5-year, €19 million project supported by the Innovative Medicines Initiative (IMI), a public-private partnership between the European Union and the European pharmaceutical industry. The pharmaceutical industry contributes half of BigData@Heart’s budget, while the other half is funded by the European Commission.
BigData@Heart Structure and Participants
The consortium is jointly led by Prof. Diederick E. (Rick) Grobbee from the University Medical Center Utrecht (UMCU) and Dr. Gunnar Brobert from Bayer, and consists of 19 partners from academia, medical associations, the pharmaceutical industry, SMEs and patient organisations:
University Medical Center Utrecht (UMCU)
Charité – Universitätsmedizin Berlin (Charité)
European Society of Cardiology (ESC)
European Heart Network (EHN)
University College London (UCL)
University of Cambridge (CAM)
International Consortium for Health Outcomes Measurement (ICHOM)
Fundación para la investigación del Hospital Clinico de la Comunidad Valenciana (INCLIVA)
‘BigData@Heart will show how big data can drive progress in the treatment and management of CVD’, Prof. Diederick E. (Rick) Grobbee, University Medical Center Utrecht (UMCU)
SNP-based Study on high BMI exposure confirms CVD and DM Risks – no associations with Stroke, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)
SNP-based Study on high BMI exposure confirms CVD and DM Risks – no associations with Stroke
Reporter: Aviva Lev-Ari, PhD, RN
Genes Affirm: High BMI Carries Weighty Heart, Diabetes Risk – Mendelian randomization study adds to ‘burgeoning evidence’
by Crystal Phend, Senior Associate Editor, MedPage Today, July 05, 2017
The “genetically instrumented” measure of high BMI exposure — calculated based on 93 single-nucleotide polymorphisms associated with BMI in prior genome-wide association studies — was associated with the following risks (odds ratios given per standard deviation higher BMI):
Hypertension (OR 1.64, 95% CI 1.48-1.83)
Coronary heart disease (CHD; OR 1.35, 95% CI 1.09-1.69)
Type 2 diabetes (OR 2.53, 95% CI 2.04-3.13)
Systolic blood pressure (β 1.65 mm Hg, 95% CI 0.78-2.52 mm Hg)
Diastolic blood pressure (β 1.37 mm Hg, 95% CI 0.88-1.85 mm Hg)
However, there were no associations with stroke, Donald Lyall, PhD, of the University of Glasgow, and colleagues reported online in JAMA Cardiology.
“The main advantage of an MR approach is that certain types of study bias can be minimized,” the team noted. “Because DNA is stable and randomly inherited, genetic variation can be used as a proxy for lifetime BMI to overcome limitations such as reverse causality and confounding, a process that hampers observational analyses of obesity and its consequences.”
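As an illustration of what a “genetically instrumented” exposure means in practice, the sketch below builds a weighted allele score: each of the 93 variants’ BMI-increasing allele count is multiplied by a published per-allele effect and the sum is standardized, so that odds ratios can be reported per standard deviation of the score. The dosages and weights here are simulated placeholders, not the study’s data or analysis pipeline.

```python
# Minimal sketch of a "genetically instrumented" BMI exposure: a weighted allele
# score across the instrument SNPs, standardized to per-SD units.
import numpy as np

rng = np.random.default_rng(0)

n_individuals, n_snps = 1_000, 93                               # 93 BMI-associated SNPs, as in the study
dosages = rng.binomial(2, 0.3, size=(n_individuals, n_snps)).astype(float)  # 0/1/2 allele counts (simulated)
weights = rng.normal(0.02, 0.01, size=n_snps)                   # per-allele effects from prior GWAS (illustrative)

raw_score = dosages @ weights                                   # weighted allele score per person
z_score = (raw_score - raw_score.mean()) / raw_score.std()      # per-SD scale used for the odds ratios

print(z_score[:5])
```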
Guest Author: David Davenport, Office Administrator, Personalized Medicine Coalition
“Health care today is reactive and costly … anything but personalized … but we are now entering a new era where health care is becoming proactive, preventive, highly personalized and most importantly predictive,” said J. Craig Venter, Ph.D., Founder, President, CEO, J. Craig Venter Institute, during his opening keynote at the Personalized Medicine and Diagnostics Track at the 2017 BIO International Convention in San Diego from June 21 – 22. The track, co-organized by PMC, brought together thought leaders to discuss breakthroughs in advancing personalized medicine. From those conversations several themes emerged:
Complex genetic data require a “knowledge network” to translate into personalized care.
During the session titled The Next Frontier: Navigating Clinical Adoption of Personalized Medicine, moderated by PMC Vice President for Science Policy Daryl Pritchard, Ph.D., panelists discussed how to accelerate the clinical adoption of innovative personalized therapies. Jennifer Levin Carter, M.D., Founder and Chief Medical Officer of N-of-One, a clinical diagnostic testing interpretation service company, explained that as data grows in complexity, there is a growing need for partnerships to efficiently analyze the data and develop effective targeted treatment plans. India Hook-Barnard, Ph.D., Director of Research Strategy, Associate Director of Precision Medicine, University of California, San Francisco (UCSF), agreed and discussed the need to build a “knowledge network” that can harness data and expertise to inform provider-patient decision-making.
Discussing how personalized medicine can be integrated into community health centers lacking large research budgets, Lynn Dressler, Dr.P.H., Director of Personalized Medicine and Pharmacogenomics at Mission Health Systems, a rural community health care delivery system in Asheville, North Carolina, discussed the need to better educate physicians and patients as well as the role that a knowledge network could play in providing easy and cost-effective access to diagnostic testing services.
Delivering personalized medicine requires innovative partnerships involving industry, IT companies, providers, payers and the government.
During It’s a Converging World: Innovative Partnerships and Precision Medicine, a panel moderated by Kristin Pothier, Global Head of Life Sciences Strategy, Ernst & Young, discussed the need for “open data” where improved patient care is the shared goal, and how public-private partnerships that address education, evidence development and access to care can help foster personalized medicine.
During a session titled Nevada as a New Model for Population Health Study, Nevada-based health system Renown Health outlined a study in which it partnered with genetic testing company 23andMe to examine whether free access to genetic testing changes participants’ practices in managing their own health and facilitates the utilization of personalized medicine.
In the era of personalized medicine, measuring and delivering value requires a paradigm shift from population-based to individual-based evidence.
During a discussion on regulatory and reimbursement challenges moderated by Bruce Quinn, M.D., Ph.D., Principal, Bruce Quinn Associates, panelists called for payment structures to be simplified so that they are more consistent, more efficient and more connected to the patient market. A subsequent panel moderated by Jennifer Snow, Director of Health Policy at Xcenda, discussed how value assessment frameworks must adapt to consider the value of personalized medicine. During The Whole Picture: Consideration of Personalized Medicine in Value Assessment Frameworks, panelist Mitch Higashi, Ph.D., Vice President, Health Economics and Outcomes Research, U.S., Bristol-Myers Squibb, called for patient-centered definitions of value and advocated for the inclusion of predictive biomarkers in all value frameworks. Donna Cryer, J.D., President, CEO, Global Liver Institute, added that the “patient must be the ultimate ‘arbiter of value’” and urged “transparency” in how value assessment frameworks are used.
Noting that different assessment frameworks have different goals, Roger Longman, CEO, Real Endpoints, called for more dynamic frameworks that allow different stakeholders to “use the same criteria but weigh them differently.” The panel concluded that to advance personalized medicine, value frameworks must be meaningful, practical and predictive for patients; reflect evolving evidence needs like real-world evidence; and consider breakthrough payment structures like bundled payments.
From Promise to Practice: The Way Forward for Personalized Medicine
During the concluding session, Creating a Universal Biomarker Program, moderated by Ian Wright, Owner, Strategic Innovations LLC, on behalf of Cedars-Sinai Precision Health, panelists discussed how to make patients the point of reference for their own care, as opposed to being compared to the “normal” range of population averages in treatment decisions using biomarkers. The speakers concluded that moving in that direction requires providers to establish baselines for each patient, along with tools and metrics to facilitate the approach.
In the words of Donna Cryer, “personalized medicine is the definition of value for a patient.” With the ability to detect diseases before they even express themselves, the promise of personalized medicine has never been greater.
However, changing the health care system to improve patient access to valuable personalized medicines requires innovation and collaboration. As PMC President Edward Abrahams, Ph.D., said during his opening remarks for the track, that change “doesn’t come easily,” but “breakthrough” discussions like these continue to move us forward.
Arrhythmia Detection: Speeding Diagnosis and Treatment – A new deep learning algorithm can diagnose 14 types of heart rhythm defects by sifting through hours of ECG data generated remotely by some of iRhythm’s wearable monitors
The ECG sensor patch is a diagnostic tool used by clinicians for early detection of atrial fibrillation, ensuring timely treatment for such patients. With advances in device miniaturization and wireless technologies, and with changing consumer expectations, wearable “on-body” ECG patch devices have evolved to meet contemporary needs. The wearable patch continuously records the user’s ECG, which aids arrhythmia detection and management at the point of care; it also alerts the cardiac patient to elevated stress levels, thereby increasing patient compliance.
Long term, the group hopes this algorithm could be a step toward expert-level arrhythmia diagnosis for people who don’t have access to a cardiologist, as in many parts of the developing world and in other rural areas. More immediately, the algorithm could become part of a wearable device that at-risk people wear at all times and that would alert emergency services to potentially deadly heartbeat irregularities as they happen.
As Pranav Rajpurkar, a graduate student and co-lead author of the paper, explained: “For example, two forms of the arrhythmia known as second-degree atrioventricular block look very similar, but one requires no treatment while the other requires immediate attention.”
To test the accuracy of the algorithm, the researchers gave a group of three expert cardiologists 300 undiagnosed clips and asked them to reach a consensus about any arrhythmias present in the recordings. Working from these annotated clips, the algorithm could then predict how those cardiologists would label every second of other ECGs with which it was presented, in essence giving a diagnosis.
We develop an algorithm which exceeds the performance of board certified cardiologists in detecting a wide range of heart arrhythmias from electrocardiograms recorded with a single-lead wearable monitor. We build a dataset with more than 500 times the number of unique patients than previously studied corpora. On this dataset, we train a 34-layer convolutional neural network which maps a sequence of ECG samples to a sequence of rhythm classes. Committees of board-certified cardiologists annotate a gold standard test set on which we compare the performance of our model to that of 6 other individual cardiologists. We exceed the average cardiologist performance in both recall (sensitivity) and precision (positive predictive value).
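To make the mapping concrete, here is a greatly reduced sketch (PyTorch; illustrative only, not the authors’ 34-layer network) of the general idea described in the abstract: a 1-D convolutional network that turns a sequence of single-lead ECG samples into a sequence of per-segment rhythm-class predictions. The layer sizes are placeholders; the 14-class output follows the article’s description.

```python
# Toy 1-D CNN mapping an ECG sample sequence to a sequence of rhythm classes.
import torch
import torch.nn as nn

class TinyECGNet(nn.Module):
    def __init__(self, n_classes: int = 14):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=16, stride=2, padding=7), nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=16, stride=2, padding=7), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=16, stride=2, padding=7), nn.BatchNorm1d(64), nn.ReLU(),
        )
        self.classifier = nn.Conv1d(64, n_classes, kernel_size=1)  # one prediction per output step

    def forward(self, ecg: torch.Tensor) -> torch.Tensor:
        # ecg: (batch, 1, n_samples); returns (batch, n_classes, n_segments)
        return self.classifier(self.features(ecg))

model = TinyECGNet()
dummy = torch.randn(2, 1, 2048)   # two fake 2048-sample single-lead recordings
print(model(dummy).shape)         # torch.Size([2, 14, 256])
```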
While clinicians can accurately identify different types of heartbeats in electrocardiograms (ECGs) from different patients, researchers have had limited success in applying supervised machine learning to the same task. The problem is made challenging by the variety of tasks, inter- and intra-patient differences, an often severe class imbalance, and the high cost of getting cardiologists to label data for individual patients. We address these difficulties using active learning to perform patient-adaptive and task-adaptive heartbeat classification. When tested on a benchmark database of cardiologist annotated ECG recordings, our method had considerably better performance than other recently proposed methods on the two primary classification tasks recommended by the Association for the Advancement of Medical Instrumentation. Additionally, our method required over 90% less patient-specific training data than the methods to which we compared it.
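The following is a minimal uncertainty-sampling sketch of the active-learning idea in that second abstract: a classifier is trained on a tiny labeled set, and in each round the single most ambiguous beat is sent to the “cardiologist” (here a synthetic oracle with known labels) for annotation. The features, labels and scikit-learn model are stand-ins for illustration, not the published method.

```python
# Uncertainty sampling: query labels only for the beats the model is least sure about.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(2_000, 12))                  # fake per-beat feature vectors
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)     # fake "normal vs abnormal beat" labels

labeled = list(rng.choice(len(X), size=20, replace=False))     # tiny initial labeled set
unlabeled = [i for i in range(len(X)) if i not in labeled]

clf = LogisticRegression(max_iter=1_000)
for _ in range(10):                               # ten rounds of querying the oracle
    clf.fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[unlabeled])[:, 1]
    most_uncertain = unlabeled[int(np.argmin(np.abs(proba - 0.5)))]  # closest to 0.5
    labeled.append(most_uncertain)                # the "cardiologist" labels just this beat
    unlabeled.remove(most_uncertain)

print(f"labeled {len(labeled)} beats, accuracy = {clf.score(X, y):.3f}")
```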
Stanford computer scientists develop an algorithm that diagnoses heart arrhythmias with cardiologist-level accuracy
A new deep learning algorithm can diagnose 14 types of heart rhythm defects, called arrhythmias, better than cardiologists. This could speed diagnosis and improve treatment for people in rural locations.
Genomic Diagnostics: Three Techniques to Perform Single Cell Gene Expression and Genome Sequencing Single Molecule DNA Sequencing
Curator: Aviva Lev-Ari, PhD, RN
4.2.3 Genomic Diagnostics: Three Techniques to Perform Single Cell Gene Expression and Genome Sequencing Single Molecule DNA Sequencing, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 4: Single Cell Genomics
Ido Amit, AmitLab @Department of Immunology at the Weizmann Institute of Science, Rehovot, Israel
The Amit lab studies the genomic code enabling immune cells to differentiate to specific subtypes and devise a specific response to invading pathogens. Our main focus is to understand how gene regulatory networks activate this code. We develop and apply state-of-the-art, high-throughput genomic tools and interdisciplinary approaches to address these critical biological and therapeutic questions. We believe that elucidating the underlying principles of the regulatory code will allow us to impact the future of personalized medicine. We are currently seeking highly motivated individuals who want to join us on this quest.
Basic lessons – “Single-cell genomics will soon be commonplace in basic and applied immunology research.”
It is early days for single-cell genomics. But already, a number of important lessons can be learnt from the experiences of our lab and those of others.
First, it is clear that many of the current categories of immune cells, such as T cells or monocytes, encompass heterogeneous populations. To probe cellular complexity, researchers must therefore cast their nets wide, and try to collect all immune cells within a tissue or region of interest. This is a very different approach from that used with methods based on cell-surface markers, which aim to obtain as pure a sample as possible.
Second, success will depend, in part, on the extent to which researchers preserve the states of cells and the original composition of a tissue. Cell stress or death should be minimized to ensure that tissue preparation does not favour specific cell types. (Some are more sensitive to heat stress, for example, than others.)
Third, bioinformaticians will need to develop scalable and robust algorithms to cope with greater numbers of cells, conflicting or overlapping programs of gene expression and fleeting developmental stages.
Fourth, after researchers have characterized all of the immune cells in a sample, they will need to find molecular markers that can be used to either enrich or deplete certain cell types in further samples. Tissues comprise trillions of cells with myriad molecular characteristics and functions, and the types or states of these cells may vary in abundance by many orders of magnitude. For instance, in the brains of healthy mice, our newly identified population of DAM (disease-associated microglia) makes up less than 0.01% of cells (ref. 15). Thus, repeated unbiased sampling to characterize rare populations will keep on accumulating cells that are not those of interest.
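The scale of that sampling problem is easy to quantify. The back-of-the-envelope sketch below (illustrative numbers only) asks how many cells must be profiled, without enrichment, to capture at least 50 cells of a population present at roughly 0.01%, using a Poisson approximation.

```python
# How many cells to profile to see >= 50 cells of a 0.01% population with 95% probability?
from scipy.stats import poisson

frequency = 1e-4          # ~0.01% of cells
target_cells = 50         # rare-type cells we want to capture
confidence = 0.95

n = 0
while True:
    n += 10_000
    if poisson.sf(target_cells - 1, mu=n * frequency) >= confidence:
        break
print(f"roughly {n:,} profiled cells needed")   # ~630,000 with these numbers
```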
Lavin Y, Kobayashi S, Leader A, Amir ED, Elefant N, Bigenwald C, Remark R, Sweeney R, Becker CD, Levine JH, Meinhof K, Chow A, Kim-Shulze S, Wolf A, Medaglia C, Li H, Rytlewski JA, Emerson RO, Solovyov A, Greenbaum BD, Sanders C, Vignali M, Beasley MB, Flores R, Gnjatic S, Pe’er D, Rahman A, Amit I, Merad M.
Cell. 2017 May 4;169(4):750-765.e17. doi: 10.1016/j.cell.2017.04.014.
Mildner A, Chapnik E, Varol D, Aychek T, Lampl N, Rivkin N, Bringmann A, Paul F, Boura-Halfon S, Hayoun YS, Barnett-Itzhaki Z, Amit I, Hornstein E, Jung S.
Eur J Immunol. 2017 May 4. doi: 10.1002/eji.201746987. [Epub ahead of print]
Guillaumet-Adkins A, Rodríguez-Esteban G, Mereu E, Mendez-Lago M, Jaitin DA, Villanueva A, Vidal A, Martinez-Marti A, Felip E, Vivancos A, Keren-Shaul H, Heath S, Gut M, Amit I, Gut I, Heyn H.
Genome Biol. 2017 Mar 1;18(1):45. doi: 10.1186/s13059-017-1171-9.
Bahar Halpern K, Shenhav R, Matcovitch-Natan O, Tóth B, Lemze D, Golan M, Massasa EE, Baydatch S, Landen S, Moor AE, Brandis A, Giladi A, Stokar-Avihail A, David E, Amit I, Itzkovitz S.
Science, 04 Nov 2016, Vol. 354, Issue 6312, pp. 557-558. DOI: 10.1126/science.aak9761
Summary
We each begin life as a single cell harboring a single genome, which—over the course of development—gives rise to the trillions of cells that make up the body. From skin cells to heart cells to neurons of the brain, each bears a copy of the original cell’s genome. But as anyone who has used a copy machine or played the childhood game of “telephone” knows, copies are never perfect. Every cell in an individual actually has a unique genome, an imperfect copy of its cellular ancestor differentiated by inevitable somatic mutations arising from errors in DNA replication and other mutagenic forces (1). Somatic mutation is the fundamental process leading to all genetic diseases, including cancer; every inherited genetic disease also has its origins in such mutation events that occurred in an ancestor’s germline cells. Yet how many and what kinds of somatic mutations accumulate in our cells as we develop and age has long been unknown and a blind spot in our understanding of the origins of genetic disease.
The author of the prize-winning essay, Gilad Evrony, received his undergraduate degree from the Massachusetts Institute of Technology. He served in the Intelligence Division of the Israel Defense Forces and completed an M.D. and Ph.D. at Harvard Medical School, with graduate research in the laboratory of Dr. Christopher Walsh at Boston Children’s Hospital. Dr. Evrony is currently pursuing clinical training in pediatrics at Mount Sinai Hospital and continuing his research developing novel technologies for studying the brain and neuropsychiatric diseases.
Innovations on the CRISPR System for Gene Editing: (1) Cryo-electron microscopy-based visualization of Cas3 Enzyme Cleavage (2) New tool testing an entire genome against a CRISPR molecule to predict potential errors and interactions, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair
Innovations on the CRISPR System for Gene Editing: (1) Cryo-electron microscopy-based visualization of Cas3 Enzyme Cleavage (2) New tool testing an entire genome against a CRISPR molecule to predict potential errors and interactions
Curator and Reporter: Aviva Lev-Ari, PhD, RN
Boom in human gene editing as 20 CRISPR trials gear up
A pioneering CRISPR trial in China will be the first to try editing the genomes of cells inside the body, in an effort to eliminate cancer-causing HPV virus
(1) Cryo-electron microscopy-based visualization of Cas3 Enzyme Cleavage
Harvard Medical School and Cornell University scientists have now generated near-atomic resolution snapshots of CRISPR that reveal key steps in its mechanism of action. The findings, published in Cell on June 29, provide the structural data necessary for efforts to improve the efficiency and accuracy of CRISPR for biomedical applications.
Through cryo-electron microscopy, the researchers describe for the first time the exact chain of events as the CRISPR complex loads target DNA and prepares it for cutting by the Cas3 enzyme. These structures reveal a process with multiple layers of error detection—a molecular redundancy that prevents unintended genomic damage, the researchers say.
Image Source: CRISPR forms a “seed bubble” state, which acts as an initial fail-safe mechanism to ensure that CRISPR RNA matches its target DNA. Image: Liao Lab/HMS
In contrast to the scalpel-like Cas9, CRISPR-Cas3 acts like a shredder that chews DNA up beyond repair. While CRISPR-Cas3 has, thus far, limited utility for precision gene editing, it is being developed as a tool to combat antibiotic-resistant strains of bacteria. A better understanding of its mechanisms may broaden the range of potential applications for CRISPR-Cas3.
In addition, all CRISPR-Cas subtypes utilize some version of an R-loop formation to detect and prepare target DNA for cleavage. The improved structural understanding of this process can now enable researchers to work toward modifying multiple types of CRISPR-Cas systems to improve their accuracy and reduce the chance of off-target effects in biomedical applications.
SOURCE
Structure Basis for Directional R-loop Formation and Substrate Handover Mechanisms in Type I CRISPR-Cas System
(2) New tool testing an entire genome against a CRISPR molecule to predict potential errors and interactions
Scientists from The University of Texas at Austin may have come up with a possible solution. They’ve developed something that works like a predictive editor for CRISPR: a method for anticipating and catching the tool’s mistakes as it works, thereby allowing for the editing of disease-causing errors out of genomes.
Many forms of cancer, Huntington’s disease, and even HIV can be targeted using CRISPR. But CRISPR molecules—proteins that find and edit genes—sometimes target the wrong genes, acting more like an auto-correct feature that turns correctly spelled words into typos: they can “correct” something that was actually right, a dangerous mistake whose consequences can themselves cause disease. Editing the wrong gene could create new problems, such as causing healthy cells to become cancerous.
“You and I differ in about 1 million spots in our genetic code,” says Ilya Finkelstein, an assistant professor in the Department of Molecular Biosciences at UT Austin and the project’s principal investigator. “Because of this genetic diversity, human gene editing will always be a custom-tailored therapy.”
Image Source: The heart of the new technique developed by Finkelstein, et al. for detecting interactions between CRISPR and off-target DNA segments is a standard next generation gene sequencing slide (a.k.a. flowcell), produced by Illumina. Image by Wikimedia user Bainscou, via Creative Commons Attribution 3.0 license
The technique is called CHAMP, or Chip Hybridized Affinity Mapping Platform. The heart of the test is a standard next generation genome sequencing chip already widely used in research and medicine. Two other key elements—designs for a 3-D printed mount that holds the chip under a microscope and software the team developed for analyzing the results—are open source. As a result, other researchers can easily replicate the technique in experiments involving CRISPR.
Andy Ellington, a professor in the Department of Molecular Biosciences and vice president for research of the Applied Research Laboratories at UT Austin, is a co-author of the paper. He says this method also illustrates the unpredictable side benefits of new technologies.
“Next generation genome sequencing was invented to read genomes, but here we’ve turned the technology on its head to allow us to characterize how CRISPR interacts with genomes,” says Ellington. “Inventive folks like Ilya take new technologies and extend them into new realms.”
The researchers found that the CRISPR molecule they tested, called Cascade, pays less attention to every third letter in a DNA sequence than to the others.
Discussion
CHAMP repurposes sequenced and discarded chips from modern next-generation Illumina sequencers for high-throughput association profiling of proteins to nucleic acids. A key difference between CHAMP and prior NGS-based approaches is that it does not require any hardware or software modifications to discontinued Illumina sequencers (Nutiu et al., 2011, Tome et al., 2014, Buenrostro et al., 2014). In CHAMP, all association-profiling experiments are carried out on sequenced MiSeq chips and imaged in a conventional TIRF microscope. CHAMP’s computational strategy uses phiX clusters as alignment markers to align the spatial information obtained via Illumina sequencing with the fluorescent association profiling experiments. This strategy offers three key advantages over previous approaches. First, using a conventional fluorescence microscope opens new experimental configurations, including multi-color co-localization and time-dependent kinetic experiments. The excitation and emission optics can also be readily adapted for FRET (Figure S6) and other advanced imaging modalities. Second, complete fluidic access to the chip allows addition of other protein components during a biochemical reaction. Third, the computational strategy for aligning sequencer outputs to fluorescent datasets is applicable to all modern Illumina sequencers, including the MiSeq, NextSeq, and HiSeq platforms. Indeed, we also used the CHAMP imaging and bioinformatics pipeline to regenerate, image, and spatially align the DNA clusters in a HiSeq flowcell (Figure S6), providing an avenue for massively parallel profiling of protein-nucleic acid interactions on both synthetic libraries and entire genomes. Future extensions will leverage on-chip transcription and translation (e.g., ribosome display) to facilitate high-throughput studies of RNA or peptide association landscapes. These studies will permit quantitative biophysical studies of diverse protein-nucleic acid interactions.
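The phiX-based registration step can be pictured with a small sketch: clusters whose sequencer (x, y) coordinates are known act as fiducial markers, and a least-squares affine fit maps those coordinates onto the positions detected in the fluorescence image, so every sequenced cluster can then be assigned an intensity. This is an illustrative toy with simulated coordinates, not the CHAMP software itself.

```python
# Fit an affine transform from sequencer cluster coordinates to microscope image coordinates.
import numpy as np

rng = np.random.default_rng(2)

seq_xy = rng.uniform(0, 1_000, size=(200, 2))          # fiducial cluster positions from the sequencer
true_A = np.array([[0.98, 0.02], [-0.02, 0.98]])        # unknown rotation/scale between coordinate systems
true_t = np.array([15.0, -8.0])                         # unknown translation
img_xy = seq_xy @ true_A.T + true_t + rng.normal(0, 0.3, size=seq_xy.shape)  # positions seen in the image

# Solve img_xy ≈ seq_xy @ A.T + t by linear least squares on homogeneous coordinates.
X = np.hstack([seq_xy, np.ones((len(seq_xy), 1))])      # (x, y, 1)
params, *_ = np.linalg.lstsq(X, img_xy, rcond=None)
A_fit, t_fit = params[:2].T, params[2]

print("recovered translation:", np.round(t_fit, 2))     # approximately [15.0, -8.0]
```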
The Biologic Roles of Leptin in Metabolism, Leptin Physiology and Obesity: On the Mechanism of Action of the Hormone in Energy Balance
Reporter: Aviva Lev-Ari, PhD, RN
More than $140 billion is spent each year in the United States to treat obesity-related diseases, according to the CDC.
Worldwide obesity rates have doubled since 1980, and most people now live in countries where more deaths are caused by overweight and obesity than by malnourishment, according to the World Health Organization.
Treatment with leptin was approved in the United States in 2014 for use in congenital leptin deficiency as well as in an unusual syndrome of lipodystrophy, but the protein has not been readily available for clinical experiments.
Flier, the HMS George Higginson Professor of Physiology and Medicine, and Maratos-Flier, HMS professor of medicine at Beth Israel Deaconess Medical Center, have made significant contributions to the understanding of the metabolism of obesity and starvation in general, and of leptin in particular.
The role for leptin as a starvation signal is now well established. [T]he physiologic role of leptin in most individuals may be limited to signaling the response to hunger or starvation, and then reversing that signal as energy stores are restored.
Conclusion
“We continue to believe that healthy and lean individuals exist who resist obesity at least in part through their leptin levels, and that some individuals develop obesity because they have insufficiently elevated leptin levels or cellular resistance to leptin,” Flier said.
“But in science, belief and knowledge are two different things, and as much as we may lean toward this belief, we ought to develop evidence for this hypothesis or abandon it in favor of new potential mechanisms for the regulation of body weight,” he said.
SOURCES
Leptin’s Physiologic Role: Does the Emperor of Energy Balance Have No Clothes?
Importance of leptin signaling and signal transducer and activator of transcription-3 activation in mediating the cardiac hypertrophy associated with obesity
Most of the cells in our body are diploid, which means they carry two sets of chromosomes, one from each parent. So far, scientists have only succeeded in generating haploid embryonic stem cells, which comprise a single set of chromosomes, in non-human mammals such as mice, rats and monkeys. Nevertheless, scientists have tried to isolate and duplicate these haploid ESCs in humans, which would allow them to work with one set of human chromosomes as opposed to a mixture from both parents.
Scientists from The Hebrew University of Jerusalem, Columbia University Medical Center (CUMC) and The New York Stem Cell Foundation Research Institute (NYSCF) succeeded in generating a new type of embryonic stem cell that carries a single copy of the human genome, instead of the two copies typically found in normal stem cells.
This landmark was achieved by Ido Sagi, working as a PhD student at the Hebrew University of Jerusalem, who succeeded in isolating and maintaining haploid embryonic stem cells in humans. Unlike in mice, these haploid stem cells were able to differentiate into various cell types, such as brain, heart and pancreas, despite holding a single set of chromosomes. Sagi and his advisor, Prof. Nissim Benvenisty, showed that this new human stem cell type will play an important role in human genetic and medical research. It will aid in understanding human development, and it will make genetic screening simpler and more precise by allowing examination of a single set of chromosomes.
Based on this research, the Technology Transfer arm of the Hebrew University started a new company, NewStem, which is developing a diagnostic kit for predicting resistance to chemotherapy treatments by gathering a broad library of human pluripotent stem cells with various genetic makeups and mutations. The company plans to use this kit for personalized medication and future therapeutic and reproductive products.
Other related articles published in this Open Access Online Scientific Journal include the following:
Ido Sagi – PhD Student @HUJI, 2017 Kaye Innovation Award winner for leading research that yielded the first successful isolation and maintenance of haploid embryonic stem cells in humans.
Detecting Multiple Types of Cancer With a Single Blood Test
Reporter and Curator: Irina Robu, PhD
Monitoring cancer patients and evaluating their response to treatment can sometimes involve invasive procedures, including surgery.
Liquid biopsies have become something of a Holy Grail in cancer treatment among physicians, researchers and companies gambling big on the technology. Unlike traditional biopsies, which involve invasive surgery, liquid biopsies rely on an ordinary blood draw. Developments in sequencing the human genome, permitting researchers to detect genetic mutations of cancers, have made the tests conceivable. Some 38 companies in the US alone are working on liquid biopsies, trying to analyze blood for fragments of DNA shed by dying tumor cells.
Early research on the liquid biopsy has concentrated largely on patients with later-stage cancers who have undergone treatments, including chemotherapy, radiation, surgery, immunotherapy or drugs that target molecules involved in the growth, progression and spread of cancer. For cancer patients undergoing treatment, liquid biopsies could spare them some of the painful, expensive and risky tissue tumor biopsies and reduce reliance on CT scans. The tests can rapidly evaluate the efficacy of surgery or other treatment, while old-style biopsies and CT scans can remain inconclusive as a result of scar tissue near the tumor site.
As recently as a few years ago, liquid biopsies were hardly used except in research. At the moment, thousands of the tests are being used in clinical practices in the United States and abroad, including at the M.D. Anderson Cancer Center in Houston; the University of California, San Diego; the University of California, San Francisco; the Duke Cancer Institute and several other cancer centers.
For patients for whom physicians cannot obtain a tissue biopsy, the liquid biopsy could prove a safe and effective alternative that could help determine whether treatment is helping eradicate the cancer. A startup, Miroculus, has developed a cheap, open-source device that can test blood for several types of cancer at once. The platform, called Miriam, finds cancer by extracting RNA from blood and spreading it across plates that look for specific types of microRNA. The device is then connected to a smartphone, which sends the information to an online database and compares the microRNA found in the patient’s blood to known patterns indicating different types of cancer at an early stage, which can reduce unnecessary cancer screenings.
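Conceptually, that last comparison step can be sketched as matching a measured microRNA profile against stored reference signatures and reporting the closest one. The microRNA panel, signatures and similarity metric below are invented for illustration and are not Miroculus’ actual pipeline.

```python
# Compare a patient's circulating microRNA profile to stored cancer-type signatures.
import numpy as np

rng = np.random.default_rng(3)
mirnas = [f"miR-{i}" for i in (21, 155, 210, 126, 16, 92)]   # illustrative microRNA panel

signatures = {                                               # invented reference profiles
    "lung":       rng.uniform(0, 1, len(mirnas)),
    "colorectal": rng.uniform(0, 1, len(mirnas)),
    "healthy":    rng.uniform(0, 1, len(mirnas)),
}
patient = rng.uniform(0, 1, len(mirnas))                     # measured expression levels

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = {name: cosine(patient, profile) for name, profile in signatures.items()}
best = max(scores, key=scores.get)
print(scores, "-> closest signature:", best)
```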
Nevertheless, experts warn that more studies are essential to establish the accuracy of the test, exactly which cancers it can detect, at what stages, and whether it improves care or survival rates.
Cellular Guillotine Created for Studying Single-Cell Wound Repair
Reporter: Irina Robu, PhD
Using the century-old cutting method, it would take a researcher five hours to cut 100 cells, and by the time they were done, the cells they cut first would be well on their way to healing.
In an effort to understand how a single cell heals, mechanical engineer Sindy Tang developed a microscopic guillotine that efficiently cuts cells in two.
Tang, who is an assistant professor of mechanical engineering at Stanford University, knew that finding a way to efficiently slice a cell in two could lead to engineering self-healing materials and machines. To that end, she developed a tool that could cut 150 cells in just over 2 minutes, with cuts that were much more standardized and synchronized in the stage of their repair process. Her team attained this rate by creating a scaled-up version of the tool with eight identical parallel channels that run simultaneously. Being able to efficiently study cell healing could eventually help scientists study and treat a variety of human diseases such as cancer and neurodegenerative diseases. Prior to Tang’s cellular guillotine, scientists sliced cells by hand under a microscope using a glass needle, a method that can lead to errors.
Tang’s method could be the Holy Grail of engineering self-healing materials and machines.