Developing Deep Learning Models (DL) for Classifying Emotions through Brainwaves

Reporter: Abhisar Anand, Research Assistant I
Research Team: Abhisar Anand, Srinivas Sriram

2021 LPBI Summer Internship in Data Science and Website Construction.
This article reports on a research study conducted through December 2020.
The research was completed before the 2021 LPBI Summer Internship began on 6/15/2021.

As the field of Artificial Intelligence progresses, various algorithms have been implemented by researchers to classify emotions from EEG signals. Researchers from China and Singapore released a paper (“An Investigation of Deep Learning Models for EEG-Based Emotion Recognition”) analyzing different types of DL model architectures, such as deep neural networks (DNN), convolutional neural networks (CNN), long short-term memory (LSTM), and a hybrid of CNN and LSTM (CNN-LSTM). The dataset used in this investigation was the DEAP dataset, which consisted of EEG signals from participants who watched 40 one-minute-long music videos and then rated them in terms of arousal, valence, like/dislike, dominance, and familiarity. The investigation found that the CNN (90.12%) and CNN-LSTM (94.7%) models had the highest performance of the batch of DL models. On the other hand, the DNN model had a very fast training speed but was not able to perform as accurately as the other models. The LSTM model was also not able to perform accurately, and its training speed was much slower, as it was difficult to achieve convergence.
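
For readers curious how such a hybrid is wired together, a rough Keras sketch of a CNN-LSTM classifier follows. This only illustrates the general architecture type compared in the paper; the channel count, window length, layer sizes, and training settings are our assumptions, not the authors' exact design.

```python
import tensorflow as tf

def build_cnn_lstm(n_samples=128, n_channels=32, n_classes=2):
    """Illustrative CNN-LSTM hybrid: 1D convolutions extract local features
    from the EEG window; an LSTM then models their temporal sequence."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_samples, n_channels)),        # (time, EEG channels)
        tf.keras.layers.Conv1D(64, 5, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(128, 5, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.LSTM(64),                             # temporal modeling
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In the paper's comparison the convolutional front end is what lifts accuracy over a plain LSTM; the LSTM on raw samples alone struggled to converge.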

This research into various model architectures provides a sense of what the future of emotion classification with AI holds. These Deep Learning models can be implemented in a variety of scenarios across the world to help detect emotions where it would otherwise be difficult. However, more research is needed on model training to ensure that classification accuracy is high. Along with that, newer and more reliable hardware can provide easy-to-access, portable EEG collection devices usable in many settings. Overall, although future improvements are needed, the prospect of accurately detecting emotions in all people is starting to look a lot brighter thanks to the innovation of AI in the neuroscience field.

Emotions are a key factor in any person’s day-to-day life. Most of the time, we as humans can detect these emotions through physical cues such as movements, facial expressions, and tone of voice. However, in certain individuals, it can be hard to identify emotions through visible physical cues. Recent studies in the Machine Learning and AI field demonstrate the ability to detect emotions through brainwaves, more specifically EEG brainwaves. Researchers across the world are applying AI to EEG signals to help predict the emotional state an individual is in at any given moment.

Emotion classification based on brain wave: a survey (Figure 4)

Image Source: https://hcis-journal.springeropen.com/articles/10.1186/s13673-019-0201-x

EEGs can detect and compile normal and abnormal brain wave activity and indicate brain activity or inactivity that correlates with physical, emotional, and intellectual activities. EEG signals are classified mainly by brain wave frequencies. The most commonly studied are delta, theta, alpha, sigma, and beta waves. Alpha waves, 8 to 12 hertz, are the key wave that occurs in normal awake people. They are the defining factor for the everyday function of the adult brain. Beta waves, 13 to 30 hertz, are the most common type of wave in both children and adults. They are found in the frontal and central areas of the brain and occur at a certain frequency which, if slowed, is likely to cause dysfunction. Theta waves, 4 to 7 hertz, are also found in the front of the brain, but they slowly move backward as drowsiness increases and the brain enters the early stages of sleep. Theta waves are known to be active during focal seizures. Delta waves, 0.5 to 4 hertz, are found in the frontal areas of the brain during deep sleep. Sigma waves, 12 to 16 hertz, occur during sleep. These EEG signals can help with the detection of emotions based on the frequency bands in which the signals occur and the activity of the signals (whether they are active or relatively calm).
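
The frequency bands listed above translate directly into code. The sketch below estimates the power of an EEG signal in each band with a plain FFT; the band edges follow the text, while the sampling rate and the synthetic test signal are illustrative.

```python
import numpy as np

# Frequency bands (Hz) as described in the text
BANDS = {"delta": (0.5, 4), "theta": (4, 7), "alpha": (8, 12),
         "sigma": (12, 16), "beta": (13, 30)}

def band_powers(signal, fs):
    """Estimate mean spectral power of a 1-D EEG signal in each band.

    signal: array of EEG samples; fs: sampling rate in Hz.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Synthetic 10 Hz (alpha-range) sine, sampled at 128 Hz for 4 seconds
fs = 128
t = np.arange(0, 4, 1 / fs)
powers = band_powers(np.sin(2 * np.pi * 10 * t), fs)
# For a 10 Hz signal, the alpha band should dominate
```

A real pipeline would apply windowing and artifact rejection first; this only shows how band labels map onto spectral ranges.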

Sources:

Zhang, Yaqing, et al. “An Investigation of Deep Learning Models for EEG-Based Emotion Recognition.” Frontiers in Neuroscience, vol. 14, 2020. Crossref, doi:10.3389/fnins.2020.622759.

Nayak, Chetan S., and Arayamparambil C. Anilkumar. “EEG Normal Waveforms.” National Center for Biotechnology Information, StatPearls Publishing LLC., 4 May 2021, http://www.ncbi.nlm.nih.gov/books/NBK539805.

Other related articles published in this Open Access Online Scientific Journal include the Following:

Supporting the elderly: A caring robot with ‘emotions’ and memory
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2015/02/10/supporting-the-elderly-a-caring-robot-with-emotions-and-memory/

Developing Deep Learning Models (DL) for the Instant Prediction of Patients with Epilepsy
Reporter: Srinivas Sriram, Research Assistant I
https://pharmaceuticalintelligence.com/2021/06/22/developing-deep-learning-models-dl-for-the-instant-prediction-of-patients-with-epilepsy/

Prediction of Cardiovascular Risk by Machine Learning (ML) Algorithm: Best performing algorithm by predictive capacity had area under the ROC curve (AUC) scores: 1st, quadratic discriminant analysis; 2nd, NaiveBayes and 3rd, neural networks, far exceeding the conventional risk-scaling methods in Clinical Use
Curator: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2019/07/04/prediction-of-cardiovascular-risk-by-machine-learning-ml-algorithm-best-performing-algorithm-by-predictive-capacity-had-area-under-the-roc-curve-auc-scores-1st-quadratic-discriminant-analysis/

Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes
Reporter: Amandeep Kaur, B.Sc., M.Sc.
https://pharmaceuticalintelligence.com/2021/05/29/developing-machine-learning-models-for-prediction-of-onset-of-type-2-diabetes/

Deep Learning-Assisted Diagnosis of Cerebral Aneurysms
Reporter: Dror Nir, PhD
https://pharmaceuticalintelligence.com/2019/06/09/deep-learning-assisted-diagnosis-of-cerebral-aneurysms/

Mutations in a Sodium-gated Potassium Channel Subunit Gene related to a subset of severe Nocturnal Frontal Lobe Epilepsy
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2012/10/22/mutations-in-a-sodium-gated-potassium-channel-subunit-gene-to-a-subset-of-severe-nocturnal-frontal-lobe-epilepsy/

A new treatment for depression and epilepsy – Approval of external Trigeminal Nerve Stimulation (eTNS) in Europe
Reporter: Howard Donohue, PhD (EAW)
https://pharmaceuticalintelligence.com/2012/10/07/a-new-treatment-for-depression-and-epilepsy-approval-of-external-trigeminal-nerve-stimulation-etns-in-europe/

Read Full Post »


Developing Deep Learning Models (DL) for the Instant Prediction of Patients with Epilepsy

Reporter: Srinivas Sriram, Research Assistant I
Research Team: Srinivas Sriram, Abhisar Anand

2021 LPBI Summer Internship in Data Science and Website Construction.
This article reports on a research study conducted from January 2021 to May 2021.
The research was completed before the 2021 LPBI Summer Internship, which began on 6/15/2021.

The main goal of this study was to utilize the dataset (shown above) to develop a DL network that could accurately predict new seizures from incoming data. To begin the study, our research group performed exploratory data analysis on the dataset and recognized the key defining pattern that allowed for the development of the DL model: as the graph above shows, the lines representing seizure data had major spikes to extreme hertz values, while the lines representing normal patient data remained stable without any spikes. We utilized this pattern as a baseline for our model.
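
The spike pattern described above can even be captured by a simple amplitude-threshold baseline before any deep learning is applied. The sketch below is a toy illustration of that idea; the threshold and the synthetic data are our assumptions, not part of the study.

```python
import numpy as np

def looks_like_seizure(window, k=5.0):
    """Flag a window whose extreme values spike far beyond its own
    typical variation (more than k standard deviations from the mean)."""
    window = np.asarray(window, dtype=float)
    return bool(np.any(np.abs(window - window.mean()) > k * window.std()))

rng = np.random.default_rng(0)
normal = rng.normal(0, 10, 178)   # stable background activity
seizure = normal.copy()
seizure[60] = 400.0               # one extreme spike, as in the seizure traces
```

A learned model like the LSTM below generalizes far better than this rule, but the rule captures the visual pattern that motivated the model.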

Conclusions and Future Improvements:

Through our system, we were able to create a prototype solution that predicts when seizures happen in a potential patient using an accurate LSTM network and a reliable hardware system. This research can be implemented in hospitals with patients suffering from epilepsy in order to help them as soon as they experience a seizure and prevent damage. However, future improvements, listed below, need to be made for this solution to be viable in the healthcare industry.

  • Needs to be implemented on a more reliable EEG headset (covers all neurons of the brain, less prone to electric disruptions shown in the prototype). 
  • Needs to be tested on live patients to deem whether the solution is viable and provides a potential solution to the problem. 
  • The network can always be fine-tuned to maximize performance. 
  • A better alert system can be implemented to provide as much help as possible. 

These improvements, when implemented, can help provide a real solution to one of the most common diseases faced in the world. 

Background Information:

Epilepsy is a brain disorder diagnosed when multiple seizures occur recurrently and/or within a brief timespan. According to the World Health Organization, seizure disorders, including epilepsy, are among the most common neurological diseases. Those who suffer seizures have a three times higher risk of premature death. Epilepsy is often treatable, especially when physicians can provide the necessary treatment quickly. When untreated, however, seizures can cause physical, psychological, and emotional harm, including isolation from others. Quick diagnosis and treatment prevent suffering and save lives. The importance of a quick diagnosis of epilepsy led our research team to develop Deep Learning (DL) algorithms for the sole purpose of detecting epileptic seizures as soon as they occur.

Throughout the years, one common means of detecting epilepsy has emerged: the electroencephalogram (EEG). EEGs can detect and compile “normal” and “abnormal” brain wave activity and indicate brain activity or inactivity that correlates with physical, emotional, and intellectual activities. EEG waves are classified mainly by brain wave frequencies (Nayak, 2020). The most commonly studied are delta, theta, alpha, sigma, and beta waves. Alpha waves, 8 to 12 hertz, are the key wave that occurs in normal awake people. They are the defining factor for the everyday function of the adult brain. Beta waves, 13 to 30 hertz, are the most common type of wave in both children and adults. They are found in the frontal and central areas of the brain and occur at a certain frequency which, if slowed, is likely to cause dysfunction. Theta waves, 4 to 7 hertz, are also found in the front of the brain, but they slowly move backward as drowsiness increases and the brain enters the early stages of sleep. Theta waves are known to be active during focal seizures. Delta waves, 0.5 to 4 hertz, are found in the frontal areas of the brain during deep sleep. Sigma waves, 12 to 16 hertz, occur during sleep. EEG detection of electrical brain wave frequencies can be used to detect and diagnose seizures based on their deviation from usual brain wave patterns.

In this particular research project, our research group hoped to develop a DL algorithm that, when implemented on a live, portable EEG brain wave capturing device, could accurately predict when a particular patient was experiencing an epileptic seizure as soon as it occurred. This would be accomplished by creating a network that could detect when the brain frequencies deviated from the normal frequency ranges.

The Study:

Line Graph representing EEG Brain Waves from a Seizure versus EEG Brain Waves from a normal individual. 

Source Dataset: https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition

To expand on the dataset: it is an EEG data set compiled by Qiuyi Wu and Ernest Fokoue (2021) from the work of medical researchers R. G. Andrzejak, M.D., et al. (2001), and it was made public domain through the UCI Machine Learning Repository. We also confirmed fair-use permission with UCI. The dataset had been gathered by Andrzejak during examinations of 500 patients with a chronic seizure disorder. R. G. Andrzejak et al. (2001) recorded each entry in the EEG dataset used for this project within 23.6 seconds in a time-series data structure. Each row in the dataset represented a recorded patient. The continuous variables in the dataset were single EEG data points at specific points in time during the measuring period. At the end of each row was a y-variable that indicated whether or not the patient had a seizure during the period the data was recorded. The continuous variables, or the EEG data, for each patient varied widely based on whether the patient was experiencing a seizure at that time. The Wu & Fokoue dataset (2021) consists of one file of 11,500 rows, each with 178 sequential data points, concatenated from the original dataset of 5 data folders, each including 100 files of EEG recordings of 23.6 seconds containing 4,097 data points. Each folder contained a single, original subset. Subset A contained EEG data gathered during epileptic seizures. Subset B contained EEG data from brain tumor sites. Subset C was from a healthy site where tumors had been located. Subsets D and E were from non-seizure patients at rest with eyes open and closed, respectively.
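
Based on the layout described above (178 EEG samples per row followed by a class label, where one class marks a seizure), preparing such rows for a binary sequence model might look like the following sketch. The column order and the convention that class 1 means seizure are assumptions drawn from the description, not a guaranteed property of the file.

```python
import numpy as np

def prepare(rows):
    """rows: array of shape (n, 179) -- 178 EEG samples plus a class label.

    Returns (X, y): X shaped (n, 178, 1) for a recurrent model, and y
    binarized so that 1 = seizure (assumed class 1) and 0 = non-seizure.
    """
    rows = np.asarray(rows, dtype=float)
    X = rows[:, :178].reshape(-1, 178, 1)   # one channel per time step
    y = (rows[:, 178] == 1).astype(int)     # collapse classes 2-5 to 0
    return X, y

# Two illustrative rows: one labeled seizure (1), one non-seizure (4)
demo = np.concatenate([np.random.randn(2, 178), [[1], [4]]], axis=1)
X, y = prepare(demo)
```

Collapsing the five subsets into a seizure / non-seizure binary target matches the study's goal of detecting seizures rather than distinguishing the recording sites.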

Based on the described data, our team recognized that a Recurrent Neural Network (RNN) was needed to take in the sequential data and output whether it represented a seizure or not. However, we realized that RNN models are known to grow substantially large over time, reducing computation speed. To address this issue, our group decided to implement a long short-term memory (LSTM) model. After deciding on our model’s architecture, we proceeded to train it in two different Python DL frameworks, TensorFlow and PyTorch. Through various rounds of retesting and redesigning, we were able to train an accurate model in each framework that not only performed well on the training data but could also accurately predict new data in the testing set (98 percent accuracy on unseen data). These LSTM networks could classify normal EEG data when the brain waves were normal, and then immediately flag seizure data when a dramatic spike occurred.
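
A minimal TensorFlow sketch of this kind of LSTM classifier is shown below. The layer sizes and training settings are illustrative assumptions; the team's exact architecture is not given in the text.

```python
import tensorflow as tf

def build_seizure_lstm(timesteps=178):
    """Binary LSTM classifier over single-channel EEG sequences of the
    length used in the dataset (178 samples per row)."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(timesteps, 1)),
        tf.keras.layers.LSTM(64, return_sequences=True),  # sequence features
        tf.keras.layers.LSTM(32),                         # summary state
        tf.keras.layers.Dense(1, activation="sigmoid"),   # P(seizure)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

The same architecture can be expressed nearly line-for-line in PyTorch, which is presumably how the two frameworks were compared.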

After training our model, we implemented it in a real-life prototype using a Single Board Computer (SBC), the Raspberry Pi 4, and a live-capture EEG headset, the Muse 2 Headband. The two hardware components sync over Bluetooth, and the headband returns EEG data to the Raspberry Pi, which processes it. Through the muselsl library in Python, we were able to retrieve this EEG data in a format similar to the one used during training. This input data is fed into our LSTM network (the TensorFlow model was chosen for the prototype due to its better performance than the PyTorch network), which then outputs the result for the live-captured EEG data in small intervals. This constant cycle can accurately predict a seizure as soon as it occurs from batches of EEG data fed into the LSTM network. Part of the reason our research group chose the Muse Headband was not only its compatibility with Python but also the fact that it could represent seizure-like data. Because none of our members had epilepsy, we had to find a reliable way of testing our model on new data. Through electrical disruptions in the wearable Muse Headband, we were able to simulate seizures that exercised our network’s predictions. In our program, we implemented an alert system that emails the patient’s doctor as soon as a seizure is detected.
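
The capture-classify-alert cycle described above can be sketched as a simple buffering loop. The sketch below is deliberately framework-agnostic: the stream reader, the model's predict function, and the alert sender are passed in as placeholder callables (in the prototype these would be the Muse stream, the LSTM, and the doctor-email routine); the window size and threshold are assumptions.

```python
import numpy as np

WINDOW = 178            # samples per model input, matching the training rows
ALERT_THRESHOLD = 0.5   # assumed decision threshold on P(seizure)

def monitor(read_chunk, predict, send_alert):
    """Continuously buffer incoming EEG samples and classify each full window.

    read_chunk: callable returning the next 1-D array of new samples
                (e.g. pulled from the headset stream), or None when done.
    predict:    callable mapping a (1, WINDOW, 1) array to P(seizure).
    send_alert: callable invoked whenever a seizure is predicted.
    """
    buffer = np.empty(0)
    while True:
        chunk = read_chunk()
        if chunk is None:
            break
        buffer = np.concatenate([buffer, np.asarray(chunk, dtype=float)])
        while len(buffer) >= WINDOW:
            window, buffer = buffer[:WINDOW], buffer[WINDOW:]
            if predict(window.reshape(1, WINDOW, 1)) >= ALERT_THRESHOLD:
                send_alert()

# Illustrative dry run with fake data standing in for the headset stream
alerts = []
stream = iter([np.zeros(100), np.full(100, 100.0), None])
monitor(lambda: next(stream),
        lambda w: 0.9 if w.max() > 50 else 0.1,  # stand-in for model.predict
        lambda: alerts.append("alert sent"))
```

Running inference window-by-window like this is what lets a modest SBC such as the Raspberry Pi keep up with the live stream.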

Individual wearing the Muse 2 Headband

Image Source: https://www.techguide.com.au/reviews/gadgets-reviews/muse-2-review-device-help-achieve-calm-meditation/

Sources Cited:

Wu, Q. & Fokoue, E. (2021).  Epileptic seizure recognition data set: Data folder & Data set description. UCI Machine Learning Repository: Epileptic Seizure Recognition. Jan. 30. Center for Machine Learning and Intelligent Systems, University of California Irvine.

Nayak, C. S. (2020). EEG normal waveforms. StatPearls [Internet]. U.S. National Library of Medicine, 31 Jul. 2020, www.ncbi.nlm.nih.gov/books/NBK539805/#.

Epilepsy. (2019). World Health Organization Fact Sheet. Jun. https://www.who.int/news-room/fact-sheets/detail/epilepsy

Other Related Articles published in this Open Access Online Scientific Journal include the following:

Developing Deep Learning Models (DL) for Classifying Emotions through Brainwaves

Reporter: Abhisar Anand, Research Assistant I

https://pharmaceuticalintelligence.com/2021/06/22/developing-deep-learning-models-dl-for-classifying-emotions-through-brainwaves/

Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer diver genes

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

https://pharmaceuticalintelligence.com/2021/05/04/machine-learning-ml-in-cancer-prognosis-prediction-helps-the-researcher-to-identify-multiple-known-as-well-as-candidate-cancer-diver-genes/

Deep Learning-Assisted Diagnosis of Cerebral Aneurysms

Reporter: Dror Nir, PhD

https://pharmaceuticalintelligence.com/2019/06/09/deep-learning-assisted-diagnosis-of-cerebral-aneurysms/

Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes

Reporter: Amandeep Kaur, B.Sc., M.Sc.

https://pharmaceuticalintelligence.com/2021/05/29/developing-machine-learning-models-for-prediction-of-onset-of-type-2-diabetes/

Deep Learning extracts Histopathological Patterns and accurately discriminates 28 Cancer and 14 Normal Tissue Types: Pan-cancer Computational Histopathology Analysis

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/10/28/deep-learning-extracts-histopathological-patterns-and-accurately-discriminates-28-cancer-and-14-normal-tissue-types-pan-cancer-computational-histopathology-analysis/

A new treatment for depression and epilepsy – Approval of external Trigeminal Nerve Stimulation (eTNS) in Europe

Reporter: Howard Donohue, PhD (EAW)

https://pharmaceuticalintelligence.com/2012/10/07/a-new-treatment-for-depression-and-epilepsy-approval-of-external-trigeminal-nerve-stimulation-etns-in-europe/

Mutations in a Sodium-gated Potassium Channel Subunit Gene related to a subset of severe Nocturnal Frontal Lobe Epilepsy

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2012/10/22/mutations-in-a-sodium-gated-potassium-channel-subunit-gene-to-a-subset-of-severe-nocturnal-frontal-lobe-epilepsy/

Read Full Post »

Live Notes, Real Time Conference Coverage AACR 2020 #AACR20: Tuesday June 23, 2020 Noon-2:45 Educational Sessions



Reporter: Stephen J. Williams, PhD

Follow Live in Real Time using

#AACR20

@pharma_BI

@AACR

Register for FREE at https://www.aacr.org/

 

Presidential Address

Elaine R Mardis, William N Hait


Welcome and introduction

William N Hait

 

Improving diagnostic yield in pediatric cancer precision medicine

Elaine R Mardis
  • The advent of genomics has revolutionized how we diagnose and treat lung cancer
  • We currently need to understand the driver mutations and variants for which we can personalize therapy
  • PD-L1 and other checkpoint therapy have not really been used in pediatric cancers even though CAR-T have been successful
  • The incidence rates and mortality rates of pediatric cancers are rising
  • A large-scale study of over 700 pediatric cancers showed cancers driven by epigenetic drivers or fusion proteins, underscoring the need for transcriptomics. The study also demonstrated that we have underestimated germline mutations and hereditary factors.
  • They put together a database to nominate patients on their IGM Cancer protocol. Involves genetic counseling and obtaining germ line samples to determine hereditary factors.  RNA and protein are evaluated as well as exome sequencing. RNASeq and Archer Dx test to identify driver fusions
  • PECAN curated database from St. Jude used to determine driver mutations. They use multiple databases and overlap within these databases and knowledge base to determine or weed out false positives
  • They have used these studies to understand the immune infiltrate into recurrent cancers (CytoCure)
  • They found 40 germline cancer predisposition genes, 47 driver somatic fusion proteins, 81 potential actionable targets, 106 CNV, 196 meaningful somatic driver mutations

 

 

Tuesday, June 23

12:00 PM – 12:30 PM EDT

Awards and Lectures

NCI Director’s Address

Norman E Sharpless, Elaine R Mardis


Introduction: Elaine Mardis

 

NCI Director Address: Norman E Sharpless
  • NCI is functioning well with respect to grant reviews, research, and general operations in spite of the COVID pandemic and the mass demonstrations, while also focusing on the disparities that occur in the cancer research field and in cancer care
  • There are ongoing efforts at NCI to make a positive difference in racial injustice, diversity in the cancer workforce, and for patients as well
  • Need a diverse workforce across the cancer research and care spectrum
  • Data show that areas where the clinicians are successful in putting African Americans on clinical trials are areas (geographic and site specific) where health disparities are narrowing
  • Grants through NCI’s new SeroNet for COVID-19 serologic testing are funded by two RFAs through NIAID (RFA-CA-20-038 and RFA-CA-20-039), which will close on July 22, 2020

 

Tuesday, June 23

12:45 PM – 1:46 PM EDT

Virtual Educational Session

Immunology, Tumor Biology, Experimental and Molecular Therapeutics, Molecular and Cellular Biology/Genetics

Tumor Immunology and Immunotherapy for Nonimmunologists: Innovation and Discovery in Immune-Oncology

This educational session will update cancer researchers and clinicians about the latest developments in the detailed understanding of the types and roles of immune cells in tumors. It will summarize current knowledge about the types of T cells, natural killer cells, B cells, and myeloid cells in tumors and discuss current knowledge about the roles these cells play in the antitumor immune response. The session will feature some of the most promising up-and-coming cancer immunologists who will inform about their latest strategies to harness the immune system to promote more effective therapies.

Judith A Varner, Yuliya Pylayeva-Gupta

 

Introduction

Judith A Varner
New techniques reveal critical roles of myeloid cells in tumor development and progression
  • Different type of cells are becoming targets for immune checkpoint like myeloid cells
  • In T cell-excluded or “desert” tumors, T cells are held at the periphery, while myeloid cells can still infiltrate; macrophages, the most abundant immune cell type in tumors, might therefore be effective targets in these T cell-naïve tumors
  • CXCLs are potential targets
  • PI3K delta inhibitors reduce the infiltrate of suppressive myeloid cells such as macrophages
  • A key open question is when to give myeloid-targeted versus T cell-targeted therapy
Judith A Varner
Novel strategies to harness T-cell biology for cancer therapy
Positive and negative roles of B cells in cancer
Yuliya Pylayeva-Gupta
New approaches in cancer immunotherapy: Programming bacteria to induce systemic antitumor immunity

 

 

Tuesday, June 23

12:45 PM – 1:46 PM EDT

Virtual Educational Session

Cancer Chemistry

Chemistry to the Clinic: Part 2: Irreversible Inhibitors as Potential Anticancer Agents

There are numerous examples of highly successful covalent drugs such as aspirin and penicillin that have been in use for a long period of time. Despite historical success, there was a period of reluctance among many to pursue covalent drugs based on concerns about toxicity. With advances in understanding features of a well-designed covalent drug, new techniques to discover and characterize covalent inhibitors, and clinical success of new covalent cancer drugs in recent years, there is renewed interest in covalent compounds. This session will provide a broad look at covalent probe compounds and drug development, including a historical perspective, examination of warheads and electrophilic amino acids, the role of chemoproteomics, and case studies.

Benjamin F Cravatt, Richard A. Ward, Sara J Buhrlage

 

Discovering and optimizing covalent small-molecule ligands by chemical proteomics

Benjamin F Cravatt
  • Multiple approaches are being investigated to find new covalent inhibitors, such as: 1) cysteine reactivity mapping, 2) mapping cysteine ligandability, and 3) functional screening in phenotypic assays for electrophilic compounds
  • Using fluorescent activity probes in proteomic screens; have broad useability in the proteome but can be specific
  • They screened quiescent versus stimulated T cells to determine reactive cysteines in a phenotypic screen and analyzed by MS proteomics (cysteine reactivity profiling); can quantitate 15,000 to 20,000 reactive cysteines
  • Isocitrate dehydrogenase 1 and adapter protein LCP-1 are two examples of changes in reactive cysteines they have seen using this method
  • They use scout molecules to target ligands or proteins with reactive cysteines
  • For phenotypic screens they first use a cytotoxic assay to screen out toxic compounds which just kill cells without causing T cell activation (like IL10 secretion)
  • INTERESTINGLY, coupling these MS reactive cysteine screens with phenotypic screens, you can find NONCANONICAL mechanisms of many of these target proteins (many of the compounds found targets which were not predicted or known)

Electrophilic warheads and nucleophilic amino acids: A chemical and computational perspective on covalent modifiers

The covalent targeting of cysteine residues in drug discovery and its application to the discovery of Osimertinib

Richard A. Ward
  • Cysteine activation: thiolate form of cysteine is a strong nucleophile
  • Thiolate form preferred in polar environment
  • Activation can be assisted by neighboring residues; pKA will have an effect on deprotonation
  • pKas of cysteine vary in EGFR
  • cysteines that are too reactive cause toxicity, while those not reactive enough are ineffective

 

Accelerating drug discovery with lysine-targeted covalent probes

 

Tuesday, June 23

12:45 PM – 2:15 PM EDT

Virtual Educational Session

Molecular and Cellular Biology/Genetics

Virtual Educational Session

Tumor Biology, Immunology

Metabolism and Tumor Microenvironment

This Educational Session aims to guide discussion on the heterogeneous cells and metabolism in the tumor microenvironment. It is now clear that the diversity of cells in tumors each require distinct metabolic programs to survive and proliferate. Tumors, however, are genetically programmed for high rates of metabolism and can present a metabolically hostile environment in which nutrient competition and hypoxia can limit antitumor immunity.

Jeffrey C Rathmell, Lydia Lynch, Mara H Sherman, Greg M Delgoffe

 

T-cell metabolism and metabolic reprogramming antitumor immunity

Jeffrey C Rathmell

Introduction

Jeffrey C Rathmell

Metabolic functions of cancer-associated fibroblasts

Mara H Sherman

Tumor microenvironment metabolism and its effects on antitumor immunity and immunotherapeutic response

Greg M Delgoffe
  • Multiple metabolites, reactive oxygen species within the tumor microenvironment; is there heterogeneity within the TME metabolome which can predict their ability to be immunosensitive
  • Took melanoma cells and looked at metabolism using Seahorse (glycolysis): and there was vast heterogeneity in melanoma tumor cells; some just do oxphos and no glycolytic metabolism (inverse Warburg)
  • As they profiled whole tumors they could separate out the metabolism of each cell type within the tumor and could look at T cells versus stromal CAFs or tumor cells and characterized cells as indolent or metabolic
  • T cells from tumors with low glycolysis were fine, but T cells from highly glycolytic tumors were more indolent
  • When the glucose transporter is knocked down, the cells become more glycolytic
  • If a patient had high oxidative metabolism, they had low PD-L1 sensitivity
  • Showed this result in head and neck cancer as well
  • With metformin, a complex I inhibitor that is not as toxic as most mitochondrial oxphos inhibitors, the T cells have less hypoxia and can remodel the TME and stimulate the immune response
  • Metformin now in clinical trials
  • T cells, though, seem metabolically restricted; T cells that infiltrate tumors are cells with low mitochondrial oxidative phosphorylation
  • T cells from tumors have defective mitochondria or little respiratory capacity
  • They have some preliminary findings that metabolic inhibitors may help with CAR-T therapy

Obesity, lipids and suppression of anti-tumor immunity

Lydia Lynch
  • Hypothesis: obesity causes issues with antitumor immunity
  • Fewer NK cells in obese people; they also produce less IFN gamma
  • RNASeq on NOD mice; granzymes and perforins at top of list of obese downregulated
  • Upregulated genes that were upregulated involved in lipid metabolism
  • All were PPAR target genes
  • NK cells from obese patients take up palmitate, and this reduces their glycolysis, but OXPHOS is also reduced; they think increased FFA basically overloads the mitochondria
  • PPAR alpha gamma activation mimics obesity

 

 

Tuesday, June 23

12:45 PM – 2:45 PM EDT

Virtual Educational Session

Clinical Research Excluding Trials

The Evolving Role of the Pathologist in Cancer Research

Long recognized for their role in cancer diagnosis and prognostication, pathologists are beginning to leverage a variety of digital imaging technologies and computational tools to improve both clinical practice and cancer research. Remarkably, the emergence of artificial intelligence (AI) and machine learning algorithms for analyzing pathology specimens is poised to not only augment the resolution and accuracy of clinical diagnosis, but also fundamentally transform the role of the pathologist in cancer science and precision oncology. This session will discuss what pathologists are currently able to achieve with these new technologies, present their challenges and barriers, and overview their future possibilities in cancer diagnosis and research. The session will also include discussions of what is practical and doable in the clinic for diagnostic and clinical oncology in comparison to technologies and approaches primarily utilized to accelerate cancer research.

 

Jorge S Reis-Filho, Thomas J Fuchs, David L Rimm, Jayanta Debnath


 

High-dimensional imaging technologies in cancer research

David L Rimm

  • Using old methods and new methods: with cell counting, you first find the cells and then phenotype them; with quantification (as with AQUA), you use densitometry of the positive signal against a threshold to determine the presence of a cell for counting
  • Hiplex versus multiplex imaging, where you have ten channels to measure by cycling of fluor on antibody (can get up to 20-plex)
  • Hiplex can be coupled with Mass spectrometry (Imaging Mass spectrometry, based on heavy metal tags on mAbs)
  • However it will still take a trained pathologist to define regions of interest or field of desired view

 

Introduction

Jayanta Debnath

Challenges and barriers of implementing AI tools for cancer diagnostics

Jorge S Reis-Filho

Implementing robust digital pathology workflows into clinical practice and cancer research

Jayanta Debnath

Invited Speaker

Thomas J Fuchs
  • Founder of spinout of Memorial Sloan Kettering
  • Separates AI from computational algorithmics
  • Dealing with not just machines but integrating human intelligence
  • Making decision for the patients must involve human decision making as well
  • How do we get experts to make these decisions faster?
  • AI in pathology: what is difficult? → Sandbox scenarios where machines are great; curated datasets; human decision support systems or maps; or trying to predict nature
  • 1) Learn rules made by humans (human-to-human scenario); 2) constrained nature; 3) unconstrained nature, like images and/or behavior; 4) predict nature's response to nature's response to itself
  • In sandbox scenario the rules are set in stone and machines are great like chess playing
  • In second scenario can train computer to predict what a human would predict
  • So third scenario is like driving cars
  • A system trained on constrained nature or a constrained dataset will take a long time for a computer to get to a decision
  • Fourth category is long term data collection project
  • He finds it is still difficult to predict nature, so going from clinical finding to prognosis still does not have good predictability with AI alone; there is a need for human involvement
  • End to end partnering (EPL) is a new way where humans can get more involved with the algorithm and assist with the problem of constrained data
  • An example of a pathology workflow, from Campanella et al. 2019 Nature Medicine, would be as follows: obtain digital images (they digitized a million slides), train on a massive data set with high-throughput computing (which needed a lot of time and a big software development effort), and then train using input from the best expert pathologists (nature-to-human, and unconstrained because no data curation was done)
  • This led to the first clinical-grade machine learning system (Camelyon16 was the challenge for detecting metastatic cells in lymph tissue; tested on 12,000 patients from 45 countries)
  • The first big hurdle was moving from manually annotated slides (which were a big bottleneck) to automatically extracted data from pathology reports
  • Now the problem is in prediction: how can we bridge the gap from predicting humans to predicting nature?
  • With an AI system, pathologists drastically improved their ability to detect very small lesions
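The weakly supervised setup sketched in the notes above (training from pathology-report labels without tile-level annotation, as in Campanella et al. 2019) rests on multiple-instance learning: only the slide's diagnosis is known, not which tiles contain tumor. The toy sketch below, with the `slide_prediction` helper, the score values, and the 0.5 threshold all invented for illustration, shows only the aggregation idea; the published system trains a deep CNN over millions of tiles:

```python
import numpy as np

def slide_prediction(tile_scores, threshold=0.5):
    """Aggregate per-tile tumor probabilities into one slide-level call.

    Under the standard multiple-instance assumption, a slide is positive
    if at least one of its tiles is positive, so the slide score is the
    maximum tile score.
    """
    tile_scores = np.asarray(tile_scores, dtype=float)
    slide_score = float(tile_scores.max())
    return slide_score, slide_score >= threshold

# One suspicious tile is enough to flag the whole slide...
score_pos, call_pos = slide_prediction([0.02, 0.10, 0.91, 0.05])
# ...while uniformly low tile scores leave the slide negative.
score_neg, call_neg = slide_prediction([0.02, 0.10, 0.15])
```

This max-pooling rule is also why such systems can help pathologists find very small lesions: a single high-scoring tile drives the slide-level call.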

 

Virtual Educational Session

Epidemiology

Cancer Increases in Younger Populations: Where Are They Coming from?

Incidence rates of several cancers (e.g., colorectal, pancreatic, and breast cancers) are rising in younger populations, which contrasts with either declining or more slowly rising incidence in older populations. Early-onset cancers are also more aggressive and have different tumor characteristics than those in older populations. Evidence on risk factors and contributors to early-onset cancers is emerging. In this Educational Session, the trends and burden, potential causes, risk factors, and tumor characteristics of early-onset cancers will be covered. Presenters will focus on colorectal and breast cancer, which are among the most common causes of cancer deaths in younger people. Potential mechanisms of early-onset cancers and racial/ethnic differences will also be discussed.

Stacey A. Fedewa, Xavier Llor, Pepper Jo Schedin, Yin Cao

Cancers that are and are not increasing in younger populations

Stacey A. Fedewa

 

  • Early onset cancers, pediatric cancers and colon cancers are increasing in younger adults
  • Younger people are more likely to be uninsured and these are there most productive years so it is a horrible life event for a young adult to be diagnosed with cancer. They will have more financial hardship and most (70%) of the young adults with cancer have had financial difficulties.  It is very hard for women as they are on their childbearing years so additional stress
  • Types of early onset cancer varies by age as well as geographic locations. For example in 20s thyroid cancer is more common but in 30s it is breast cancer.  Colorectal and testicular most common in US.
  • SCC is decreasing by adenocarcinoma of the cervix is increasing in women’s 40s, potentially due to changing sexual behaviors
  • Breast cancer is increasing in younger women: maybe etiologic distinct like triple negative and larger racial disparities in younger African American women
  • Increased obesity among younger people is becoming a factor in this increasing incidence of early onset cancers

 

 

Other Articles on this Open Access  Online Journal on Cancer Conferences and Conference Coverage in Real Time Include

Press Coverage

Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Symposium: New Drugs on the Horizon Part 3 12:30-1:25 PM

Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on NCI Activities: COVID-19 and Cancer Research 5:20 PM

Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on Evaluating Cancer Genomics from Normal Tissues Through Metastatic Disease 3:50 PM

Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on Novel Targets and Therapies 2:35 PM

 

Read Full Post »


Powerful AI Tools Being Developed for the COVID-19 Fight

Curator: Stephen J. Williams, Ph.D.

 

Source: https://www.ibm.com/blogs/research/2020/04/ai-powered-technologies-accelerate-discovery-covid-19/

IBM Releases Novel AI-Powered Technologies to Help Health and Research Community Accelerate the Discovery of Medical Insights and Treatments for COVID-19

April 3, 2020 | Written by: 

IBM Research has been actively developing new cloud and AI-powered technologies that can help researchers across a variety of scientific disciplines accelerate the process of discovery. As the COVID-19 pandemic unfolds, we continue to ask how these technologies and our scientific knowledge can help in the global battle against coronavirus.

Today, we are making available multiple novel, free resources from across IBM to help healthcare researchers, doctors and scientists around the world accelerate COVID-19 drug discovery: from gathering insights, to applying the latest virus genomic information and identifying potential targets for treatments, to creating new drug molecule candidates.

Though some of the resources are still in exploratory stages, IBM is making them available to qualifying researchers at no charge to aid the international scientific investigation of COVID-19.

Today’s announcement follows our recent leadership in launching the U.S. COVID-19 High Performance Computing Consortium, which is harnessing massive computing power in the effort to help confront the coronavirus.

Streamlining the Search for Information

Healthcare agencies and governments around the world have quickly amassed medical and other relevant data about the pandemic. And, there are already vast troves of medical research that could prove relevant to COVID-19. Yet, as with any large volume of disparate data sources, it is difficult to efficiently aggregate and analyze that data in ways that can yield scientific insights.

To help researchers access structured and unstructured data quickly, we are offering a cloud-based AI research resource that has been trained on a corpus of thousands of scientific papers contained in the COVID-19 Open Research Dataset (CORD-19), prepared by the White House and a coalition of research groups, and on licensed databases from DrugBank, ClinicalTrials.gov and GenBank. This tool uses our advanced AI and allows researchers to pose specific queries to the collections of papers and to extract critical COVID-19 knowledge quickly. Please note, access to this resource will be granted only to qualified researchers. To learn more and request access, please click here.

Aiding the Hunt for Treatments

The traditional drug discovery pipeline relies on a library of compounds that are screened, improved, and tested to determine safety and efficacy. In dealing with new pathogens such as SARS-CoV-2, there is the potential to enhance the compound libraries with additional novel compounds. To help address this need, IBM Research has recently created a new, AI-generative framework which can rapidly identify novel peptides, proteins, drug candidates and materials.

We have applied this AI technology against three COVID-19 targets to identify 3,000 new small molecules as potential COVID-19 therapeutic candidates. IBM is releasing these molecules under an open license, and researchers can study them via a new interactive molecular explorer tool to understand their characteristics and relationship to COVID-19 and identify candidates that might have desirable properties to be further pursued in drug development.

To streamline efforts to identify new treatments for COVID-19, we are also making the IBM Functional Genomics Platform available for free for the duration of the pandemic. Built to discover the molecular features in viral and bacterial genomes, this cloud-based repository and research tool includes genes, proteins and other molecular targets from sequenced viral and bacterial organisms in one place with connections pre-computed to help accelerate discovery of molecular targets required for drug design, test development and treatment.

Select IBM collaborators from government agencies, academic institutions and other organizations already use this platform for bacterial genomic study. And now, those working on COVID-19 can request the IBM Functional Genomics Platform interface to explore the genomic features of the virus. Access to the IBM Functional Genomics Platform will be prioritized for those conducting COVID-19 research. To learn more and request access, please click here.

Drug and Disease Information

Clinicians and healthcare professionals on the frontlines of care will also have free access to hundreds of pieces of evidence-based, curated COVID-19 and infectious disease content from IBM Micromedex and EBSCO DynaMed. Using these two rich decision support solutions, users will have access to drug and disease information in a single and comprehensive search. Clinicians can also provide patients with consumer-friendly patient education handouts with relevant, actionable medical information. IBM Micromedex is one of the largest online reference databases for medication information and is used by more than 4,500 hospitals and health systems worldwide. EBSCO DynaMed provides peer-reviewed clinical content, including systematic literature reviews in 28 specialties for comprehensive disease topics, health conditions and abnormal findings, to highly focused topics on evaluation, differential diagnosis and management.

The scientific community is working hard to make important new discoveries relevant to the treatment of COVID-19, and we’re hopeful that releasing these novel tools will help accelerate this global effort. This work also outlines our long-term vision for the future of accelerated discovery, where multi-disciplinary scientists and clinicians work together to rapidly and effectively create next generation therapeutics, aided by novel AI-powered technologies.

Learn more about IBM’s response to COVID-19: IBM.com/COVID19.

Source: https://www.ibm.com/blogs/research/2020/04/ai-powered-technologies-accelerate-discovery-covid-19/

DiA Imaging Analysis Receives Grant to Accelerate Global Access to its AI Ultrasound Solutions in the Fight Against COVID-19

Source: https://www.grantnews.com/news-articles/?rkey=20200512UN05506&filter=12337

Grant will allow company to accelerate access to its AI solutions and use of ultrasound in COVID-19 emergency settings

TEL AVIV, Israel, May 12, 2020 /PRNewswire-PRWeb/ — DiA Imaging Analysis, a leading provider of AI-based ultrasound analysis solutions, today announced that it has received a government grant from the Israel Innovation Authority (IIA) to develop solutions for ultrasound imaging analysis of COVID-19 patients using Artificial Intelligence (AI).

Using ultrasound in point-of-care emergency settings has gained momentum since the outbreak of the COVID-19 pandemic. In these settings, which include makeshift hospital COVID-19 departments and triage “tents,” portable ultrasound offers clinicians diagnostic decision support, with the added advantages of being easier to disinfect and eliminating the need to transport patients from one room to another. However, analyzing ultrasound images is a process that is still mostly done visually, leading to a growing market need for automated solutions and decision support.

As the leading provider of AI solutions for ultrasound analysis, backed by Connecticut Innovations, DiA makes ultrasound analysis smarter and accessible to both new and expert ultrasound users with various levels of experience. The company’s flagship LVivo Cardio Toolbox for AI-based cardiac ultrasound analysis enables clinicians to automatically generate objective clinical analysis, with increased accuracy and efficiency, to support decisions about patient treatment and care.

The IIA grant provides a budget of millions of NIS to increase access to DiA’s solutions for users in Israel and globally, and to accelerate R&D with a focus on new AI solutions for COVID-19 patient management. DiA's solutions are vendor-neutral and platform-agnostic, and are built to run in low-processing, mobile environments like handheld ultrasound. Recent data highlight the importance of looking at the heart during the progression of COVID-19, with one study citing 20% of patients hospitalized with COVID-19 showing signs of heart damage and increased mortality rates in those patients. DiA’s LVivo cardiac analysis solutions automatically generate objective, quantified cardiac ultrasound results to enable point-of-care clinicians to assess cardiac function on the spot, near the patient’s bedside.

According to Dr. Ami Applebaum, Chairman of the Board of the IIA, “The purpose of the IIA’s call was to bring solutions to global markets for fighting COVID-19, with an emphasis on relevancy, fast time to market and collaborations promising continuity of the Israeli economy. DiA meets these requirements with AI innovation for ultrasound.”

DiA has received several FDA/CE clearances and established distribution partnerships with industry-leading companies including GE Healthcare, IBM Watson and Konica Minolta, currently serving thousands of end users worldwide.

“We see growing use of ultrasound in point-of-care settings, and an urgent need for automated, objective solutions that provide decision support in real time,” said Hila Goldman-Aslan, CEO and Co-founder of DiA Imaging Analysis. “Our AI solutions meet this need by immediately helping clinicians on the frontlines to quickly and easily assess COVID-19 patients’ hearts to help guide care delivery.”

About DiA Imaging Analysis:
DiA Imaging Analysis provides advanced AI-based ultrasound analysis technology that makes ultrasound accessible to all. DiA’s automated tools deliver fast and accurate clinical indications to support the decision-making process and offer better patient care. DiA’s AI-based technology uses advanced pattern recognition and machine-learning algorithms to automatically imitate the way the human eye detects image borders and identifies motion. DiA’s tools provide automated and objective analysis, help reduce variability among users, and increase efficiency, allowing clinicians with various levels of experience to quickly and easily analyze ultrasound images.

For additional information, please visit http://www.dia-analysis.com.

Read Full Post »



Google AI improves accuracy of reading mammograms, study finds


Google CFO Ruth Porat has blogged about twice battling breast cancer.

Artificial intelligence was often more accurate than radiologists in detecting breast cancer from mammograms in a study conducted by researchers using Google AI technology.

The study, published in the journal Nature, used mammograms from approximately 90,000 women in which the outcomes were known to train technology from Alphabet Inc’s DeepMind AI unit, now part of Google Health, Yahoo news reported.

The AI system was then used to analyze images from 28,000 other women and often diagnosed early cancers more accurately than the radiologists who originally interpreted the mammograms.

In another test, AI outperformed six radiologists in reading 500 mammograms. However, while the AI system found cancers the humans missed, it also failed to find cancers flagged by all six radiologists, reports The New York Times.

The researchers said the study “paves the way” for further clinical trials.

Writing in Nature, Etta D. Pisano, chief research officer at the American College of Radiology and professor in residence at Harvard Medical School, noted, “The real world is more complicated and potentially more diverse than the type of controlled research environment reported in this study.”

Ruth Porat, senior vice president and chief financial officer of Alphabet, Inc., wrote in October in a company blog post titled “Breast cancer and tech…a reason for optimism” about twice battling the disease herself and about the importance of her company’s application of AI to healthcare innovations.

She said that focus had already led to the development of a deep learning algorithm to help pathologists assess tissue associated with metastatic breast cancer.

“By pinpointing the location of the cancer more accurately, quickly and at a lower cost, care providers might be able to deliver better treatment for more patients,” she wrote.

Google also has created algorithms that help medical professionals diagnose lung cancer, and eye disease in people with diabetes, per the Times.

Porat acknowledged that Google’s research showed the best results occur when medical professionals and technology work together.

Any insights provided by AI must be “paired with human intelligence and placed in the hands of skilled researchers, surgeons, oncologists, radiologists and others,” she said.

Anne Stych is a staff writer for Bizwomen.

Read Full Post »


AI Acquisitions by Big Tech Firms Are Happening at a Blistering Pace: 2019 Recent Data by CBI Insights

Reporter: Stephen J. Williams, Ph.D.


3.4.16   AI Acquisitions by Big Tech Firms Are Happening at a Blistering Pace: 2019 Recent Data by CBI Insights, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 3: AI in Medicine

A recent report from CBI Insights shows the rapid pace at which the biggest tech firms (Google, Apple, Microsoft, Facebook, and Amazon) are acquiring artificial intelligence (AI) startups, potentially compounding the existing AI talent shortage.

The link to the report and free download is given here at https://www.cbinsights.com/research/top-acquirers-ai-startups-ma-timeline/

Part of the report:

TECH GIANTS LEAD IN AI ACQUISITIONS

The usual suspects are leading the race for AI: tech giants like Facebook, Amazon, Microsoft, Google, & Apple (FAMGA) have all been aggressively acquiring AI startups in the last decade.

Among the FAMGA companies, Apple leads the way, making 20 total AI acquisitions since 2010. It is followed by Google (the frontrunner from 2012 to 2016) with 14 acquisitions and Microsoft with 10.

Apple’s AI acquisition spree, which has helped it overtake Google in recent years, was essential to the development of new iPhone features. For example, FaceID, the technology that allows users to unlock their iPhone X just by looking at it, stems from Apple’s M&A moves in chips and computer vision, including the acquisition of AI company RealFace.

In fact, many of FAMGA’s prominent products and services came out of acquisitions of AI companies — such as Apple’s Siri, or Google’s contributions to healthcare through DeepMind.

That said, tech giants are far from the only companies snatching up AI startups.

Since 2010, there have been 635 AI acquisitions, as companies aim to build out their AI capabilities and capture sought-after talent (as of 8/31/2019).

The pace of these acquisitions has also been increasing. AI acquisitions saw a more than 6x uptick from 2013 to 2018, including last year’s record of 166 AI acquisitions — up 38% year-over-year.

In 2019, there have already been 140+ acquisitions (as of August), putting the year on track to beat the 2018 record at the current run rate.
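As a quick check of the run-rate claim, using only the figures cited above (140 acquisitions through August 2019, against the 2018 record of 166) and assuming the pace stays constant:

```python
# Project the 2019 full-year total from the January-August count.
acq_through_august = 140
months_elapsed = 8
monthly_rate = acq_through_august / months_elapsed   # acquisitions per month
projected_2019_total = monthly_rate * 12             # constant-pace projection
record_2018 = 166
on_track_to_beat_record = projected_2019_total > record_2018
```

At that pace the projected total (210) comfortably exceeds the 2018 record, consistent with the report's claim.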

Part of this increase in the pace of AI acquisitions can be attributed to a growing diversity in acquirers. Where once AI was the exclusive territory of major tech companies, today, smaller AI startups are becoming acquisition targets for traditional insurance, retail, and healthcare incumbents.

For example, in February 2018, Roche Holding acquired New York-based cancer startup Flatiron Health for $1.9B — one of the largest M&A deals in artificial intelligence. This year, Nike acquired AI-powered inventory management startup Celect, Uber acquired computer vision company Mighty AI, and McDonald’s acquired personalization platform Dynamic Yield.

Despite the increased number of acquirers, however, tech giants are still leading the charge. Acquisitive tech giants have emerged as powerful global corporations with a competitive advantage in artificial intelligence, and startups have played a pivotal role in helping these companies scale their AI initiatives.

Apple, Google, Microsoft, Facebook, Intel, and Amazon are the most active acquirers of AI startups, each acquiring 7+ companies.

To read more on recent Acquisitions in the AI space please see the following articles on this Open Access Online Journal

Diversification and Acquisitions, 2001 – 2015: Trail known as “Google Acquisitions” – Understanding Alphabet’s Acquisitions: A Sector-By-Sector Analysis

Clarivate Analytics expanded IP data leadership by new acquisition of the leading provider of intellectual property case law and analytics Darts-ip

2019 Biotechnology Sector and Artificial Intelligence in Healthcare

Forbes Opinion: 13 Industries Soon To Be Revolutionized By Artificial Intelligence

Artificial Intelligence and Cardiovascular Disease

Multiple Barriers Identified Which May Hamper Use of Artificial Intelligence in the Clinical Setting

Top 12 Artificial Intelligence Innovations Disrupting Healthcare by 2020

The launch of SCAI – Interview with Gérard Biau, director of the Sorbonne Center for Artificial Intelligence (SCAI).

Read Full Post »


Retrospect on HistoScanning; an AI routinely used in diagnostic imaging for over a decade

Author and Curator: Dror Nir, PhD


3.2.7   Retrospect on HistoScanning: an AI routinely used in diagnostic imaging for over a decade, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair

This blog-post is a retrospect on over a decade of doing with HistoScanning; an AI medical-device for imaging-based tissue characterization.

Imaging-based tissue characterization by AI offers a change in the imaging paradigm: it enhances the visual information received from diagnostic imaging beyond what the eye alone can see, while at the same time simplifying the patient’s clinical pathway and increasing its cost-effectiveness.

In the case of HistoScanning, imaging is a combination of 3D scanning by ultrasound with a real-time application of AI. The HistoScanning AI application comprises fast pattern-recognition algorithms trained on ultrasound scans and matched histopathology of cancer patients. It classifies millimetric tissue volumes by identifying differences in the scattered ultrasound that characterize the different mechanical and morphological properties of different pathologies. A user-friendly interface displays the analysis results on the live ultrasound video image.
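HistoScanning’s actual algorithms are proprietary, so the following is only a schematic illustration of the idea described above: scoring each millimetric tissue volume from backscatter-derived features against parameters learned from histopathology-matched scans. The linear scorer, feature values, weights, and threshold here are all invented for illustration:

```python
import numpy as np

def characterize_volumes(features, weights, bias, threshold=0.0):
    """Score millimetric tissue volumes; a score above threshold flags suspicion.

    features: (n_volumes, n_features) backscatter statistics per volume
    weights, bias: hypothetical parameters fitted to histopathology-matched scans
    Returns the raw scores and a boolean mask of suspicious volumes.
    """
    features = np.asarray(features, dtype=float)
    scores = features @ weights + bias
    return scores, scores > threshold

# Two toy volumes described by two backscatter features each.
scores, suspicious = characterize_volumes(
    [[1.0, 0.0], [0.0, 1.0]],       # hypothetical feature rows
    weights=np.array([1.0, -1.0]),  # hypothetical learned weights
    bias=0.0,
)
```

In a real system the per-volume flags would be overlaid on the live ultrasound image, which is the display role the user interface plays here.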

Users of AI in diagnostic-imaging of cancer patients expect it to improve their ability to:

  • Detect clinically significant cancer lesions with high sensitivity and specificity
  • Accurately position lesions within an organ
  • Accurately estimate the lesion volume
  • AND help determine the pre-clinical level of lesion aggressiveness

The last is achieved through real-time guidance of the needle biopsy towards the most suspicious locations.

Unlike most technologies, which become obsolete as time passes, AI gets better. The availability of more processing power, better storage technologies, and faster memories translates into an ever-growing capacity of machines to learn. Moreover, the human perception of AI is transforming fast, from disbelief at the time HistoScanning was first launched into total embracement.

During the last decade, 192 systems were put to use at the hands of urologists, radiologists, and gynecologists. Over 200 peer-reviewed papers, scientific posters and white papers were written by HistoScanning users sharing experiences and thoughts. Most of these papers are about HistoScanning for Prostate (PHS), which was launched as a medical device in 2007. The real-time guided prostate-biopsy application was added to it in late 2013. I have mentioned several of these papers in blog posts published on this open-access website, e.g.:

Today’s fundamental challenge in Prostate cancer screening (September 2, 2012)

The unfortunate ending of the Tower of Babel construction project and its effect on modern imaging-based cancer patients’ management (October 22, 2012)

On the road to improve prostate biopsy (February 15, 2013)

Ultrasound-based Screening for Ovarian Cancer (April 28, 2013)

Imaging-Biomarkers; from discovery to validation (September 28, 2014)

For people who are developing AI applications for health-care, retrospect on HistoScanning represents an excellent opportunity to better plan the life cycle of such products and what it would take to bring it to a level of wide adoption by global health systems.

It would require many pages to cover the lessons HistoScanning could teach each and all of us in detail. I will therefore briefly discuss the highlights:

  • Regulations: FDA clearance for HistoScanning required a PMA and has not been achieved to date. The regulatory process in Europe was similar to that for ultrasound but has been getting harder in recent years.
  • Safety: During more than a decade and many thousands of procedures, no safety issue was brought up.
  • Learning curve: Many of the reports on HistoScanning conclude that in order to maximize its potential, the sonographer must be experienced and well trained in using the system. Among other things, it became clear that there is a strong correlation between the clinical added value of using HistoScanning and the quality of the ultrasound scan, which is dependent on the sonographer but also, in many cases, on the patient (e.g., his BMI)
  • Patient’s attitude: Post-market surveillance (PMS) reviews of HistoScanning show that patients are generally excited about the opportunity of an AI application being involved in their diagnostic process. It seems to increase their confidence in the validity of the results, and there was never a case of refusal to be exposed to the analysis. Also, some of the early adopters of PHS (HistoScanning for prostate) charged their patients privately for the service, and patients were happy to accept that even though such costs were not reimbursed by their health insurance.
  • Adoption by practitioners: To date, PHS has not achieved wide market adoption, and user feedback on it is mixed, ranging from strong positive recommendations to very negative and dismissive ones. Close examination of the reasons for such a variety of experiences reveals that most of the reports rely on small and widely varying samples, owing to the relatively high complexity and cost of clinical trials aiming to measure its performance. Moreover, without any available standards for assessing AI performance, what is good enough for one user can be totally insufficient for another. Realizing this led to recent efforts by some leading urologists to organize large patient registries related to routine use of PHS.

The most recent peer-reviewed paper on PHS, Evaluation of Prostate HistoScanning as a Method for Targeted Biopsy in Routine Practice (Petr V. Glybochko, Yuriy G. Alyaev, Alexandr V. Amosov, German E. Krupinov, Dror Nir, Mathias Winkler, Timur M. Ganzha; European Urology Focus), studies PHS in a statistically reasonable number (611) of patients and concludes that “Our study results support supplementing the standard schematic transrectal ultrasound-guided biopsy with a few guided cores harvested using the ultrasound-based prostate HistoScanning true targeting approach in cases for which multiparametric magnetic resonance imaging is not available.”

Read Full Post »


Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence: Realizing Precision Medicine One Patient at a Time

Reporter: Stephen J Williams, PhD @StephenJWillia2

The impact of Machine Learning (ML) and Artificial Intelligence (AI) during the last decade has been tremendous. With the rise of infobesity, ML/AI is evolving into an essential capability to help mine the sheer volume of patient genomics, omics, sensor/wearable and real-world data, and to unravel the knot of healthcare’s most complex questions.

Despite the advancements in technology, organizations struggle to prioritize and implement ML/AI to achieve the anticipated value, whilst managing the disruption that comes with it. In this session, panelists will discuss ML/AI implementation and adoption strategies that work. Panelists will draw upon their experiences as they share their success stories, discuss how to implement digital diagnostics, track disease progression and treatment, and increase commercial value and ROI compared against traditional approaches.

  • Most trials are still in the stage of training AI/ML algorithms on training data sets. The best results so far have been about 80% accuracy on training sets; this needs to improve
  • All data sets can be biased. For example, a professor was measuring heart rate using an IR detector on a wearable, but it turned out that different skin types generate different signals at the detector, so training sets may carry population biases (you are getting data from one group)
  • Clinical-grade equipment actually has not been trained on as large a set as commercial versions of wearables; commercial-grade devices are tested on a larger study population. This can affect the AI/ML algorithms
  • Regulations: which regulatory body is responsible is up for debate. Whether the FDA or the FTC is responsible for AI/ML in healthcare and healthcare tech/IT has not been fully decided yet. We don't have the guidances for these new technologies
  • Some rules: never use your own encryption; always use industry standards, especially when getting personal data from wearables. One hospital corrupted its system because its computer system was not up to date and could not protect against a virus transmitted by a wearable
  • Pharma companies understand they need to increase the value of their products, so they are very interested in how AI/ML can be used

Please follow LIVE on TWITTER using the following @ handles and # hashtags:

@Handles

@pharma_BI

@AVIVA1950

@BIOConvention

# Hashtags

#BIO2019 (official meeting hashtag)

Read Full Post »


Real Time Coverage @BIOConvention #BIO2019: Precision Medicine Beyond Oncology June 5 Philadelphia PA

Reporter: Stephen J Williams PhD @StephenJWillia2

Precision Medicine has helped transform cancer care from one-size-fits-all chemotherapy to a new era, where patients’ tumors can be analyzed and therapy selected based on their genetic makeup. Until now, however, precision medicine’s impact has been far less in other therapeutic areas, many of which are ripe for transformation. Efforts are underway to bring the successes of precision medicine to neurology, immunology, ophthalmology, and other areas. This move raises key questions of how the lessons learned in oncology can be used to advance precision medicine in other fields, what types of data and tools will be important to personalizing treatment in these areas, and what sorts of partnerships and payer initiatives will be needed to support these approaches and their ultimate commercialization and use. The panel will also provide an in-depth look at precision medicine approaches aimed at better understanding and improving patient care in highly complex disease areas like neurology.
Speaker panel:  The big issue now with precision medicine is that there is so much data, and it is hard to put experimental design and controls around randomly collected data.
  • The frontier is how to CURATE randomly collected data to make some sense of it.
  • One speaker was at a cancer meeting where the oncologists had no idea what to make of the genomic reports they were given. The result is either a lack of action or, worse, a misdiagnosis.
  • So, for example, Artificial Intelligence algorithms analyzing image data can see things you can’t see with the naked eye, but if the data quality is not good the algorithms are useless; if data is not curated properly, the data is wasted.
Data needs to be organized and curated.
If relying on AI for big data analysis, the big question still is: what are the rates of false negatives and false positives? One has to make sure there are no misdiagnoses.
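The false-negative/false-positive question can be made concrete with a confusion matrix. The counts below are purely illustrative, not from any study discussed at the session; the sketch just shows which rates a diagnostic AI would need to report before being trusted clinically.

```python
# Hypothetical counts from a diagnostic AI scored against clinician
# ground truth (illustrative numbers only).
tp, fp, fn, tn = 85, 10, 15, 890  # true/false positives, false/true negatives

sensitivity = tp / (tp + fn)           # fraction of true cases caught
specificity = tn / (tn + fp)           # fraction of healthy cases cleared
false_negative_rate = fn / (fn + tp)   # missed diagnoses -- the panel's worry
false_positive_rate = fp / (fp + tn)   # false alarms leading to overtreatment

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
print(f"FNR={false_negative_rate:.2f}, FPR={false_positive_rate:.3f}")
```

A model can post a high overall accuracy while its false-negative rate stays unacceptably high, which is why the panel singles out these rates rather than accuracy alone.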

Please follow LIVE on TWITTER using the following @ handles and # hashtags:

@Handles

@pharma_BI

@AVIVA1950

@BIOConvention

# Hashtags

#BIO2019 (official meeting hashtag)

Read Full Post »