
Archive for the ‘Artificial Intelligence in Health Care – Tools & Innovations’ Category


Tweets, Pictures and Retweets at 18th Annual Cancer Research Symposium – Machine Learning and Cancer, June 14, 2019, MIT by @pharma_BI and @AVIVA1950 for #KIsymposium PharmaceuticalIntelligence.com and Social Media

 

Pictures taken in Real Time

 

Notifications from Twitter.com on June 14, 2019, and in the 24 hours following the symposium

 


  3. Replying to 

    It was an incredibly touching and “metzamrer” (Hebrew: thrilling) surprise to meet you at MIT


    Amazing event @avivregev @reginabarzilay @pharma_BI Breakthrough in


  5. ’s machine learning tool characterizes proteins, which are biomarkers of disease development and progression. Scientists can learn more about their relationship to specific diseases and can intervene earlier and more precisely.

  6. learning and are undergoing dramatic changes and hold great promise for cancer research, diagnostics, and therapeutics. @KIinstitute by


  8. identification in the will depend on highly


  10. this needed to be done a long time ago


 

Tweets by @pharma_BI and by @AVIVA1950

&

Retweets and replies by @pharma_BI and @AVIVA1950

eProceedings 18th Symposium 2019 covered in Amazing event, Keynote best talks @avivregev @reginabarzilay

  1. Top lectures by @reginabarzilay @avivregev

  2. eProceeding 2019 Koch Institute Symposium – 18th Annual Cancer Research Symposium – Machine Learning and Cancer, June 14, 2019, 8:00 AM-5:00 PM ET, MIT Kresge Auditorium, 48 Massachusetts Ave, Cambridge, MA via

  1.   Retweeted

    eProceedings 18th Symposium 2019 covered in Amazing event, Keynote best talks @avivregev @reginabarzilay

  2. Top lectures by @reginabarzilay @avivregev

  3. eProceeding 2019 Koch Institute Symposium – 18th Annual Cancer Research Symposium – Machine Learning and Cancer, June 14, 2019, 8:00 AM-5:00 PM ET, MIT Kresge Auditorium, 48 Massachusetts Ave, Cambridge, MA via

  4. eProceedings & eProceeding 2019 Koch Institute Symposium – 18th Annual Cancer Research Symposium – Machine Learning and Cancer, June 14, 2019, 8:00 AM-5:00 PM ET, MIT Kresge Auditorium, 48 Massachusetts Ave, Cambridge, MA via

  5.   Retweeted
  6.   Retweeted

    Einstein, Curie, Bohr, Planck, Heisenberg, Schrödinger… was this the greatest meeting of minds, ever? Some of the world’s most notable physicists participated in the 1927 Solvay Conference. In fact, 17 of the 29 scientists attending were or became Laureates.

  7.   Retweeted

    identification in the will depend on highly

  8. eProceeding 2019 Koch Institute Symposium – 18th Annual Cancer Research Symposium – Machine Learning and Cancer, June 14, 2019, 8:00 AM-5:00 PM ET, MIT Kresge Auditorium, Cambridge, MA via

 




Reported by Dror Nir, PhD

Deep Learning–Assisted Diagnosis of Cerebral Aneurysms Using the HeadXNet Model

Allison Park, BA; Chris Chute, BS; Pranav Rajpurkar, MS; et al. Original Investigation, Health Informatics, June 7, 2019. JAMA Netw Open. 2019;2(6):e195600. doi:10.1001/jamanetworkopen.2019.5600

Key Points

Question  How does augmentation with a deep learning segmentation model influence the performance of clinicians in identifying intracranial aneurysms from computed tomographic angiography examinations?

Findings  In this diagnostic study of intracranial aneurysms, a test set of 115 examinations was reviewed once with model augmentation and once without in a randomized order by 8 clinicians. The clinicians showed significant increases in sensitivity, accuracy, and interrater agreement when augmented with neural network model–generated segmentations.

Meaning  This study suggests that the performance of clinicians in the detection of intracranial aneurysms can be improved by augmentation using deep learning segmentation models.

 

Abstract

Importance  Deep learning has the potential to augment clinician performance in medical imaging interpretation and reduce time to diagnosis through automated segmentation. Few studies to date have explored this topic.

Objective  To develop and apply a neural network segmentation model (the HeadXNet model) capable of generating precise voxel-by-voxel predictions of intracranial aneurysms on head computed tomographic angiography (CTA) imaging to augment clinicians’ intracranial aneurysm diagnostic performance.

Design, Setting, and Participants  In this diagnostic study, a 3-dimensional convolutional neural network architecture was developed using a training set of 611 head CTA examinations to generate aneurysm segmentations. Segmentation outputs from this support model on a test set of 115 examinations were provided to clinicians. Between August 13, 2018, and October 4, 2018, 8 clinicians diagnosed the presence of aneurysm on the test set, both with and without model augmentation, in a crossover design using randomized order and a 14-day washout period. Head and neck examinations performed between January 3, 2003, and May 31, 2017, at a single academic medical center were used to train, validate, and test the model. Examinations positive for aneurysm had at least 1 clinically significant, nonruptured intracranial aneurysm. Examinations with hemorrhage, ruptured aneurysm, posttraumatic or infectious pseudoaneurysm, arteriovenous malformation, surgical clips, coils, catheters, or other surgical hardware were excluded. All other CTA examinations were considered controls.

Main Outcomes and Measures  Sensitivity, specificity, accuracy, time, and interrater agreement were measured. Metrics for clinician performance with and without model augmentation were compared.

Results  The data set contained 818 examinations from 662 unique patients with 328 CTA examinations (40.1%) containing at least 1 intracranial aneurysm and 490 examinations (59.9%) without intracranial aneurysms. The 8 clinicians reading the test set ranged in experience from 2 to 12 years. Augmenting clinicians with artificial intelligence–produced segmentation predictions resulted in clinicians achieving statistically significant improvements in sensitivity, accuracy, and interrater agreement when compared with no augmentation. The clinicians’ mean sensitivity increased by 0.059 (95% CI, 0.028-0.091; adjusted P = .01), mean accuracy increased by 0.038 (95% CI, 0.014-0.062; adjusted P = .02), and mean interrater agreement (Fleiss κ) increased by 0.060, from 0.799 to 0.859 (adjusted P = .05). There was no statistically significant change in mean specificity (0.016; 95% CI, −0.010 to 0.041; adjusted P = .16) or time to diagnosis (5.71 seconds; 95% CI, −7.22 to 18.63 seconds; adjusted P = .19).

Conclusions and Relevance  The deep learning model developed successfully detected clinically significant intracranial aneurysms on CTA. This suggests that integration of an artificial intelligence–assisted diagnostic model may augment clinician performance with dependable and accurate predictions and thereby optimize patient care.

Introduction

Diagnosis of unruptured aneurysms is a critically important clinical task: intracranial aneurysms occur in 1% to 3% of the population and account for more than 80% of nontraumatic life-threatening subarachnoid hemorrhages.1 Computed tomographic angiography (CTA) is the primary, minimally invasive imaging modality currently used for diagnosis, surveillance, and presurgical planning of intracranial aneurysms,2,3 but interpretation is time consuming even for subspecialty-trained neuroradiologists. Low interrater agreement poses an additional challenge for reliable diagnosis.4-7

Deep learning has recently shown significant potential in accurately performing diagnostic tasks on medical imaging.8 Specifically, convolutional neural networks (CNNs) have demonstrated excellent performance on a range of visual tasks, including medical image analysis.9 Moreover, the ability of deep learning systems to augment clinician workflow remains relatively unexplored.10 The development of an accurate deep learning model to help clinicians reliably identify clinically significant aneurysms in CTA has the potential to provide radiologists, neurosurgeons, and other clinicians an easily accessible and immediately applicable diagnostic support tool.

In this study, a deep learning model to automatically detect intracranial aneurysms on CTA and produce segmentations specifying regions of interest was developed to assist clinicians in the interpretation of CTA examinations for the diagnosis of intracranial aneurysms. Sensitivity, specificity, accuracy, time to diagnosis, and interrater agreement for clinicians with and without model augmentation were compared.

Methods

The Stanford University institutional review board approved this study. Owing to the retrospective nature of the study, patient consent or assent was waived. The Standards for Reporting of Diagnostic Accuracy (STARD) reporting guideline was used for the reporting of this study.

Data

A total of 9455 consecutive CTA examination reports of the head or head and neck performed between January 3, 2003, and May 31, 2017, at Stanford University Medical Center were retrospectively reviewed. Examinations with parenchymal hemorrhage, subarachnoid hemorrhage, posttraumatic or infectious pseudoaneurysm, arteriovenous malformation, ischemic stroke, nonspecific or chronic vascular findings such as intracranial atherosclerosis or other vasculopathies, surgical clips, coils, catheters, or other surgical hardware were excluded. Examinations of injuries that resulted from trauma or contained images degraded by motion were also excluded on visual review by a board-certified neuroradiologist with 12 years of experience. Examinations with nonruptured clinically significant aneurysms (>3 mm) were included.11

Radiologist Annotations

The reference standard for all examinations in the test set was determined by a board-certified neuroradiologist at a large academic practice with 12 years of experience who determined the presence of aneurysm by review of the original radiology report, double review of the CTA examination, and further confirmation of the aneurysm by diagnostic cerebral angiograms, if available. The neuroradiologist had access to all of the Digital Imaging and Communications in Medicine (DICOM) series, original reports, and clinical histories, as well as previous and follow-up examinations during interpretation to establish the best possible reference standard for the labels. For each of the aneurysm examinations, the radiologist also identified the location of each of the aneurysms. Using the open-source annotation software ITK-SNAP,12 the identified aneurysms were manually segmented on each slice.

Model Development

In this study, we developed a 3-dimensional (3-D) CNN called HeadXNet for segmentation of intracranial aneurysms from CT scans. Neural networks are functions with parameters structured as a sequence of layers to learn different levels of abstraction. Convolutional neural networks are a type of neural network designed to process image data, and 3-D CNNs are particularly well suited to handle sequences of images, or volumes.

HeadXNet is a CNN with an encoder-decoder structure (eFigure 1 in the Supplement), where the encoder maps a volume to an abstract low-resolution encoding, and the decoder expands this encoding to a full-resolution segmentation volume. The segmentation volume is of the same size as the corresponding study and specifies the probability of aneurysm for each voxel, which is the atomic unit of a 3-D volume, analogous to a pixel in a 2-D image. The encoder is adapted from a 50-layer SE-ResNeXt network,13-15 and the decoder is a sequence of 3 × 3 transposed convolutions. Similar to UNet,16 skip connections are used in 3 layers of the encoder to transmit outputs directly to the decoder. The encoder was pretrained on the Kinetics-600 data set,17 a large collection of YouTube videos labeled with human actions; after pretraining the encoder, the final 3 convolutional blocks and the 600-way softmax output layer were removed. In their place, an atrous spatial pyramid pooling18 layer and the decoder were added.

Training Procedure

Subvolumes of 16 slices were randomly sampled from volumes during training. The data set was preprocessed to find contours of the skull, and each volume was cropped around the skull in the axial plane before resizing each slice to 208 × 208 pixels. The slices were then cropped to 192 × 192 pixels (using random crops during training and centered crops during testing), resulting in a final input of size 16 × 192 × 192 per example; the same transformations were applied to the segmentation label. The segmentation output was trained to optimize a weighted combination of the voxelwise binary cross-entropy and Dice losses.19
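As a rough sketch, the weighted combination of voxelwise binary cross-entropy and Dice losses might look like the following NumPy implementation; the 50/50 weighting in `dice_weight` is an assumption, since the paper does not report its mixing coefficient:

```python
import numpy as np

def dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss over a voxel grid: 1 - 2|P*G| / (|P| + |G|)."""
    intersection = np.sum(probs * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(probs) + np.sum(target) + eps)

def bce_loss(probs, target, eps=1e-7):
    """Mean voxelwise binary cross-entropy on predicted probabilities."""
    p = np.clip(probs, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(p) + (1 - target) * np.log(1 - p))))

def combined_loss(probs, target, dice_weight=0.5):
    """Weighted combination of the voxelwise BCE and Dice losses."""
    return dice_weight * dice_loss(probs, target) + (1 - dice_weight) * bce_loss(probs, target)
```

In training, `probs` would be the network's sigmoid output for a 16 × 192 × 192 subvolume and `target` the corresponding manual segmentation label.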

Before reaching the model, inputs were clipped to [−300, 700] Hounsfield units, normalized to [−1, 1], and zero-centered. The model was trained on 3 Titan Xp graphical processing units (GPUs) (NVIDIA) using a minibatch of 2 examples per GPU. The parameters of the model were optimized using a stochastic gradient descent optimizer with momentum of 0.9 and a peak learning rate of 0.1 for randomly initialized weights and 0.01 for pretrained weights. The learning rate was scheduled with a linear warm-up from 0 to the peak learning rate for 10 000 iterations, followed by cosine annealing20 over 300 000 iterations. Additionally, the learning rate was fixed at 0 for the first 10 000 iterations for the pretrained encoder. For regularization, L2 weight decay of 0.001 was added to the loss for all trainable parameters and stochastic depth dropout21 was used in the encoder blocks. Standard dropout was not used.
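The input normalization and learning-rate schedule can be sketched as follows; the exact linear mapping of the clipped Hounsfield range onto [−1, 1] and the zero-centering offset (a training-set statistic) are assumptions, while the clip window, peak rate, warm-up length, and annealing length are taken from the text above:

```python
import math
import numpy as np

def preprocess(volume_hu, mean_offset=0.0):
    """Clip to [-300, 700] HU, rescale linearly to [-1, 1], then
    zero-center by subtracting a dataset mean (assumed offset)."""
    v = np.clip(volume_hu.astype(np.float64), -300.0, 700.0)
    v = (v + 300.0) / 500.0 - 1.0   # maps [-300, 700] onto [-1, 1]
    return v - mean_offset

def learning_rate(step, peak=0.1, warmup=10_000, anneal=300_000):
    """Linear warm-up from 0 to the peak rate, then cosine annealing to 0."""
    if step < warmup:
        return peak * step / warmup
    t = min(step - warmup, anneal) / anneal
    return peak * 0.5 * (1.0 + math.cos(math.pi * t))
```

For pretrained encoder weights, the same schedule would be used with `peak=0.01` and the rate held at 0 for the first 10 000 iterations, as described above.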

To control for class imbalance, 3 methods were used. First, an auxiliary loss was added after the encoder and focal loss was used to encourage larger parameter updates on misclassified positive examples. Second, abnormal training examples were sampled more frequently than normal examples such that abnormal examples made up 30% of training iterations. Third, parameters of the decoder were not updated on training iterations where the segmentation label consisted of purely background (normal) voxels.
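The second imbalance-control method, oversampling abnormal examinations to 30% of training iterations, amounts to weighted sampling. `sampling_weights` below is a hypothetical helper illustrating one way to set per-example weights so the expected abnormal fraction comes out right; it is not code from the study:

```python
import random

def sampling_weights(labels, abnormal_fraction=0.30):
    """Per-example weights so abnormal examples (label 1) make up
    `abnormal_fraction` of sampled training iterations in expectation."""
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    w_pos = abnormal_fraction / n_pos
    w_neg = (1.0 - abnormal_fraction) / n_neg
    return [w_pos if y == 1 else w_neg for y in labels]

def sample_index(labels, weights, rng=random):
    """Draw one training-example index under the weighted scheme."""
    return rng.choices(range(len(labels)), weights=weights, k=1)[0]
```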

To produce a segmentation prediction for the entire volume, the segmentation outputs for sequential 16-slice subvolumes were simply concatenated. If the number of slices was not divisible by 16, the last input volume was padded with 0s and the corresponding output volume was truncated back to the original size.
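The stitching described above can be sketched as follows, with `predict_fn` standing in for the trained subvolume model (here the function names are illustrative, not from the study code):

```python
import numpy as np

def segment_volume(volume, predict_fn, chunk=16):
    """Run a subvolume model over a full scan: pad the final short chunk
    with zeros, concatenate the per-chunk outputs, and truncate back to
    the original number of slices."""
    depth = volume.shape[0]
    outputs = []
    for start in range(0, depth, chunk):
        sub = volume[start:start + chunk]
        if sub.shape[0] < chunk:               # pad the last subvolume with 0s
            pad = chunk - sub.shape[0]
            sub = np.concatenate([sub, np.zeros((pad,) + sub.shape[1:])], axis=0)
        outputs.append(predict_fn(sub))
    return np.concatenate(outputs, axis=0)[:depth]
```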

Study Design

We performed a diagnostic accuracy study comparing performance metrics of clinicians with and without model augmentation. Each of the 8 clinicians participating in the study diagnosed a test set of 115 examinations, once with and once without assistance of the model. The clinicians were blinded to the original reports, clinical histories, and follow-up imaging examinations. Using a crossover design, the clinicians were randomly and equally divided into 2 groups. Within each group, examinations were sorted in a fixed random order for half of the group and sorted in reverse order for the other half. Group 1 first read the examinations without model augmentation, and group 2 first read the examinations with model augmentation. After a washout period of 14 days, the augmentation arrangement was reversed such that group 1 performed reads with model augmentation and group 2 read the examinations without model augmentation (Figure 1A).

Clinicians were instructed to assign a binary label for the presence or absence of at least 1 clinically significant aneurysm, defined as having a diameter greater than 3 mm. Clinicians read alone in a diagnostic reading room, all using the same high-definition monitor (3840 × 2160 pixels) displaying CTA examinations on a standard open-source DICOM viewer (Horos).22 Clinicians entered their labels into a data entry software application that automatically logged the time difference between labeling of the previous examination and the current examination.

When reading with model augmentation, clinicians were provided the model’s predictions in the form of region of interest (ROI) segmentations directly overlaid on top of CTA examinations. To ensure an image display interface that was familiar to all clinicians, the model’s predictions were presented as ROIs in a standard DICOM viewing software. At every voxel where the model predicted a probability greater than 0.5, readers saw a semiopaque red overlay on the axial, sagittal, and coronal series (Figure 1C). Readers had access to the ROIs immediately on loading the examinations, and the ROIs could be toggled off to reveal the unaltered CTA images (Figure 1B). The red overlays were the only indication given of whether a particular CTA examination had been predicted by the model to contain an aneurysm. Given these model predictions, readers had the option to take them into consideration or disregard them based on clinical judgment. When readers performed diagnoses without augmentation, no ROIs were present on any of the examinations. Otherwise, the diagnostic tools were identical for augmented and nonaugmented reads.

 

Statistical Analysis

On the binary task of determining whether an examination contained an aneurysm, sensitivity, specificity, and accuracy were used to assess the performance of clinicians with and without model augmentation. Sensitivity denotes the number of true-positive results over total aneurysm-positive cases, specificity denotes the number of true-negative results over total aneurysm-negative cases, and accuracy denotes the number of true-positive and true-negative results over all test cases. The microaverage of these statistics across all clinicians was also computed by pooling the total numbers of true-positive, true-negative, false-positive, and false-negative results across clinicians. In addition, to convert the model’s segmentation output into a binary prediction, a prediction was considered positive if the model predicted at least 1 voxel as belonging to an aneurysm and negative otherwise. The 95% Wilson score confidence intervals were used to assess the variability in the estimates for sensitivity, specificity, and accuracy.23
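The decision rule and reader metrics above can be sketched in a few lines; `volume_prediction` and `binary_metrics` are illustrative names, not functions from the study code:

```python
def volume_prediction(seg_probs, threshold=0.5):
    """Positive if the model marks at least one voxel as aneurysm."""
    return int(any(p > threshold for p in seg_probs))

def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy for a binary reading task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(y_true),
    }
```

Microaveraging across clinicians then amounts to summing each clinician's tp/tn/fp/fn counts before computing the same three ratios.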

To assess whether the clinicians achieved significant increases in performance with model augmentation, a 1-tailed t test was performed on the differences in sensitivity, specificity, and accuracy across all 8 clinicians. To determine the robustness of the findings and whether results were due to inclusion of the resident radiologist and neurosurgeon, we performed a sensitivity analysis: we computed the t test on the differences in sensitivity, specificity, and accuracy across board-certified radiologists only.

The average time to diagnosis for the clinicians with and without augmentation was computed as the difference between the mean entry times into the spreadsheet of consecutive diagnoses; 95% t score confidence intervals were used to assess the variability in the estimates. To account for interruptions in the clinical read or time-logging errors, the 5 longest and 5 shortest times to diagnosis for each clinician in each reading were excluded. To assess whether model augmentation significantly decreased the time to diagnosis, a 1-tailed t test was performed on the difference in average time with and without augmentation across all 8 clinicians.

The interrater agreement of clinicians and for the radiologist subset was computed using the exact Fleiss κ.24 To assess whether model augmentation increased interrater agreement, a 1-tailed permutation test was performed on the difference between the interrater agreement of clinicians on the test set with and without augmentation. The permutation procedure consisted of randomly swapping clinician annotations with and without augmentation so that a random subset of the test set that had previously been labeled as read with augmentation was now labeled as being read without augmentation, and vice versa; the exact Fleiss κ values (and the difference) were computed on the test set with permuted labels. This permutation procedure was repeated 10 000 times to generate the null distribution of the Fleiss κ difference (under the null hypothesis that interrater agreement with augmentation is not higher than without augmentation), and the unadjusted P value was calculated as the proportion of permuted Fleiss κ differences that were higher than the observed Fleiss κ difference.
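A minimal sketch of the agreement statistic and permutation procedure follows. This implements the standard Fleiss κ (the paper uses the exact variant, which differs in its chance-agreement estimate), and the permutation here swaps whole examinations between conditions as a simplification; rating rows are per-examination category counts across the 8 clinicians:

```python
import random

def fleiss_kappa(ratings):
    """ratings: list of per-subject category-count rows; each row sums
    to the number of raters n."""
    N = len(ratings)
    n = sum(ratings[0])
    # mean observed per-subject agreement
    p_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings) / N
    # chance agreement from the marginal category proportions
    totals = [sum(row[j] for row in ratings) for j in range(len(ratings[0]))]
    p_e = sum((t / (N * n)) ** 2 for t in totals)
    return (p_bar - p_e) / (1.0 - p_e)

def permutation_p(aug, non_aug, iters=10_000, rng=random):
    """One-tailed permutation test on the kappa difference: randomly swap
    each examination's augmented and non-augmented rating rows."""
    observed = fleiss_kappa(aug) - fleiss_kappa(non_aug)
    hits = 0
    for _ in range(iters):
        a, b = [], []
        for r1, r2 in zip(aug, non_aug):
            if rng.random() < 0.5:
                r1, r2 = r2, r1
            a.append(r1)
            b.append(r2)
        if fleiss_kappa(a) - fleiss_kappa(b) > observed:
            hits += 1
    return hits / iters
```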

To account for multiple hypothesis testing, the Benjamini-Hochberg correction, which controls the false discovery rate, was applied; a Benjamini-Hochberg–adjusted P ≤ .05 indicated statistical significance. All tests were 1-tailed.25
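The Benjamini-Hochberg adjustment can be computed directly, as a minimal sketch: scale each P value by m divided by its rank, then enforce monotonicity from the largest P downward:

```python
def benjamini_hochberg(p_values):
    """Benjamini-Hochberg adjusted p-values: p * m / rank in ascending
    order, with a running minimum taken from the largest p downward."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, p_values[i] * m / rank)
        adjusted[i] = running_min
    return adjusted
```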

Results

The data set contained 818 examinations from 662 unique patients with 328 CTA examinations (40.1%) containing at least 1 intracranial aneurysm and 490 examinations (59.9%) without intracranial aneurysms (Figure 2). Of the 328 aneurysm cases, 20 cases from 15 unique patients contained 2 or more aneurysms. One hundred forty-eight aneurysm cases contained aneurysms between 3 mm and 7 mm, 108 cases had aneurysms between 7 mm and 12 mm, 61 cases had aneurysms between 12 mm and 24 mm, and 11 cases had aneurysms 24 mm or greater. The location of the aneurysms varied according to the following distribution: 99 were located in the internal carotid artery, 78 were in the middle cerebral artery, 50 were cavernous internal carotid artery aneurysms, 44 were basilar tip aneurysms, 41 were in the anterior communicating artery, 18 were in the posterior communicating artery, 16 were in the vertebrobasilar system, and 12 were in the anterior cerebral artery. All examinations were performed either on a GE Discovery, GE LightSpeed, GE Revolution, Siemens Definition, Siemens Sensation, or a Siemens Force scanner, with slice thicknesses of 1.0 mm or 1.25 mm, using standard clinical protocols for head angiogram or head/neck angiogram. There was no difference between the protocols or slice thicknesses between the aneurysm and nonaneurysm examinations. For this study, axial series were extracted from each examination and a segmentation label was produced on every axial slice containing an aneurysm. The number of images per examination ranged from 113 to 802 (mean [SD], 373 [157]).

The examinations were split into a training set of 611 examinations (494 patients; mean [SD] age, 55.8 [18.1] years; 372 [60.9%] female) used to train the model, a development set of 92 examinations (86 patients; mean [SD] age, 61.6 [16.7] years; 59 [64.1%] female) used for model selection, and a test set of 115 examinations (82 patients; mean [SD] age, 57.8 [18.3] years; 74 [64.4%] female) to evaluate the performance of the clinicians when augmented with the model (Figure 2).

Using stratified random sampling, the development and test sets were formed to include 50% aneurysm examinations and 50% normal examinations; the remaining examinations composed the training set, of which 36.5% were aneurysm examinations. Forty-three patients had multiple examinations in the data set due to examinations performed for follow-up of the aneurysm. To account for these repeat patients, examinations were split so that there was no patient overlap between the different sets. Figure 2 contains pathology and patient demographic characteristics for each set.

A total of 8 clinicians, including 6 board-certified practicing radiologists, 1 practicing neurosurgeon, and 1 radiology resident, participated as readers in the study. The radiologists’ years of experience ranged from 3 to 12 years, the neurosurgeon had 2 years of experience as attending, and the resident was in the second year of training at Stanford University Medical Center. Groups 1 and 2 consisted of 3 radiologists each; the resident and neurosurgeon were both in group 1. None of the clinicians were involved in establishing the reference standard for the examinations.

Without augmentation, clinicians achieved a microaveraged sensitivity of 0.831 (95% CI, 0.794-0.862), specificity of 0.960 (95% CI, 0.937-0.974), and an accuracy of 0.893 (95% CI, 0.872-0.912). With augmentation, the clinicians achieved a microaveraged sensitivity of 0.890 (95% CI, 0.858-0.915), specificity of 0.975 (95% CI, 0.957-0.986), and an accuracy of 0.932 (95% CI, 0.913-0.946). The underlying model had a sensitivity of 0.949 (95% CI, 0.861-0.983), specificity of 0.661 (95% CI, 0.530-0.771), and accuracy of 0.809 (95% CI, 0.727-0.870). The performances of the model, individual clinicians, and their microaverages are reported in eTable 1 in the Supplement.

 

With augmentation, there was a statistically significant increase in the mean sensitivity (0.059; 95% CI, 0.028-0.091; adjusted P = .01) and mean accuracy (0.038; 95% CI, 0.014-0.062; adjusted P = .02) of the clinicians as a group. There was no statistically significant change in mean specificity (0.016; 95% CI, −0.010 to 0.041; adjusted P = .16). Performance improvements across clinicians are detailed in the Table, and individual clinician improvement in Figure 3.

Individual performances with and without model augmentation are shown in eTable 1 in the Supplement. The sensitivity analysis confirmed that even among board-certified radiologists, there was a statistically significant increase in mean sensitivity (0.059; 95% CI, 0.013-0.105; adjusted P = .04) and accuracy (0.036; 95% CI, 0.001-0.072; adjusted P = .05). Performance improvements of board-certified radiologists as a group are shown in eTable 2 in the Supplement.

 

The mean diagnosis time per examination without augmentation microaveraged across clinicians was 57.04 seconds (95% CI, 54.58-59.50 seconds). The times for individual clinicians are detailed in eTable 3 in the Supplement, and individual time changes are shown in eFigure 2 in the Supplement.

 

With augmentation, there was no statistically significant decrease in mean diagnosis time (5.71 seconds; 95% CI, −7.22 to 18.63 seconds; adjusted P = .19). The model took a mean of 7.58 seconds (95% CI, 6.92-8.25 seconds) to process an examination and output its segmentation map. Confusion matrices, which are tables reporting true- and false-positive results and true- and false-negative results of each clinician with and without model augmentation, are shown in eTable 4 in the Supplement.

There was a statistically significant increase of 0.060 (adjusted P = .05) in the interrater agreement among the clinicians, with an exact Fleiss κ of 0.799 without augmentation and 0.859 with augmentation. For the board-certified radiologists, there was an increase of 0.063 in their interrater agreement, with an exact Fleiss κ of 0.783 without augmentation and 0.847 with augmentation.

Discussion

In this study, the ability of a deep learning model to augment clinician performance in detecting cerebral aneurysms using CTA was investigated with a crossover study design. With model augmentation, clinicians’ sensitivity, accuracy, and interrater agreement significantly increased. There was no statistical change in specificity and time to diagnosis.Given the potential catastrophic outcome of a missed aneurysm at risk of rupture, an automated detection tool that reliably detects and enhances clinicians’ performance is highly desirable. Aneurysm rupture is fatal in 40% of patients and leads to irreversible neurological disability in two-thirds of those who survive; therefore, an accurate and timely detection is of paramount importance. In addition to significantly improving accuracy across clinicians while interpreting CTA examinations, an automated aneurysm detection tool, such as the one presented in this study, could also be used to prioritize workflow so that those examinations more likely to be positive could receive timely expert review, potentially leading to a shorter time to treatment and more favorable outcomes.The significant variability among clinicians in the diagnosis of aneurysms has been well documented and is typically attributed to lack of experience or subspecialty neuroradiology training, complex neurovascular anatomy, or the labor-intensive nature of identifying aneurysms. Studies have shown that interrater agreement of CTA-based aneurysm detection is highly variable, with interrater reliability metrics ranging from 0.37 to 0.85,6,7,2628 and performance levels that vary depending on aneurysm size and individual radiologist experience.4,6 In addition to significantly increasing sensitivity and accuracy, augmenting clinicians with the model also significantly improved interrater reliability from 0.799 to 0.859. 
This implies that augmenting clinicians with varying levels of experience and specialties with models could lead to more accurate and more consistent radiological interpretations. Currently, tools to improve clinician aneurysm detection on CTA include bone subtraction,29 as well as 3-D rendering of intracranial vasculature,3032 which rely on application of contrast threshold settings to better delineate cerebral vasculature and create a 3-D–rendered reconstruction to assist aneurysm detection. However, using these tools is labor- and time-intensive for clinicians; in some institutions, this process is outsourced to a 3-D lab at additional costs. The tool developed in this study, integrated directly in a standard DICOM viewer, produces a segmentation map on a new examination in only a few seconds. If integrated into the standard workflow, this diagnostic tool could substantially decrease both cost and time to diagnosis, potentially leading to more efficient treatment and more favorable patient outcomes.Deep learning has recently shown success in various clinical image-based recognition tasks. In particular, studies have shown strong performance of 2-D CNNs in detecting intracranial hemorrhage and other acute brain findings, such as mass effect or skull fractures, on CT head examinations.3336 Recently, one study10 examined the potential role for deep learning in magnetic resonance angiogram–based detection of cerebral aneurysms, and another study37 showed that providing deep learning model predictions to clinicians when interpreting knee magnetic resonance studies increased specificity in detecting anterior cruciate ligament tears. To our knowledge, prior to this study, deep learning had not been applied to CTA, which is the first-line imaging modality for detecting cerebral aneurysms. Our results demonstrate that deep learning segmentation models may produce dependable and interpretable predictions that augment clinicians and improve their diagnostic performance. 
The model implemented and tested in this study significantly increased sensitivity, accuracy, and interrater reliability of clinicians with varied experience and specialties in detecting cerebral aneurysms using CTA.
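Interrater reliability figures of the kind reported above (0.799 to 0.859) are chance-corrected agreement statistics (see reference 24). As a sketch of the underlying idea only, not the study’s actual code, here is Cohen’s kappa for two raters making binary aneurysm calls on hypothetical examinations:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of cases where the raters concur.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters were independent but kept the
    # same label frequencies (marginals).
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_expected = sum(counts_a[l] * counts_b[l] for l in labels) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical reads on 10 examinations (1 = aneurysm present).
a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(a, b), 3))  # → 0.8
```

Here the raters agree on 9 of 10 cases (observed agreement 0.9), but half of that agreement is expected by chance given their label frequencies, so kappa is 0.8; kappa of 1.0 means perfect agreement and 0 means agreement no better than chance.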

Limitations

This study has limitations. First, because the study focused only on nonruptured aneurysms, model performance on aneurysm detection after aneurysm rupture, lesion recurrence after coil or surgical clipping, or aneurysms associated with arteriovenous malformations has not been investigated. Second, since examinations containing surgical hardware or devices were excluded, model performance in their presence is unknown. In a clinical environment, CTA is typically used to evaluate for many types of vascular diseases, not just for aneurysm detection. Therefore, the high prevalence of aneurysm in the test set and the clinician’s binary task could have introduced bias in interpretation. Also, this study was performed on data from a single tertiary care academic institution and may not reflect performance when applied to data from other institutions with different scanners and imaging protocols, such as different slice thicknesses.

Conclusions

A deep learning model was developed to automatically detect clinically significant intracranial aneurysms on CTA. We found that model augmentation significantly improved clinicians’ sensitivity, accuracy, and interrater reliability. Future work should investigate the performance of this model prospectively and on data from other institutions and hospitals.

Article Information:

Accepted for Publication: April 23, 2019.

Published: June 7, 2019. doi:10.1001/jamanetworkopen.2019.5600

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2019 Park A et al. JAMA Network Open.

Corresponding Author: Kristen W. Yeom, MD, School of Medicine, Department of Radiology, Stanford University, 725 Welch Rd, Ste G516, Palo Alto, CA 94304 (kyeom@stanford.edu).

Author Contributions: Ms Park and Dr Yeom had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Ms Park and Messrs Chute and Rajpurkar are co–first authors. Drs Ng and Yeom are co–senior authors.

Concept and design: Park, Chute, Rajpurkar, Lou, Shpanskaya, Ni, Basu, Lungren, Ng, Yeom.

Acquisition, analysis, or interpretation of data: Park, Chute, Rajpurkar, Lou, Ball, Shpanskaya, Jabarkheel, Kim, McKenna, Tseng, Ni, Wishah, Wittber, Hong, Wilson, Halabi, Patel, Lungren, Yeom.

Drafting of the manuscript: Park, Chute, Rajpurkar, Lou, Ball, Jabarkheel, Kim, McKenna, Hong, Halabi, Lungren, Yeom.

Critical revision of the manuscript for important intellectual content: Park, Chute, Rajpurkar, Ball, Shpanskaya, Jabarkheel, Kim, Tseng, Ni, Wishah, Wittber, Wilson, Basu, Patel, Lungren, Ng, Yeom.

Statistical analysis: Park, Chute, Rajpurkar, Lou, Ball, Lungren.

Administrative, technical, or material support: Park, Chute, Shpanskaya, Jabarkheel, Kim, McKenna, Tseng, Wittber, Hong, Wilson, Lungren, Ng, Yeom.

Supervision: Park, Ball, Tseng, Halabi, Basu, Lungren, Ng, Yeom.

Conflict of Interest Disclosures: Drs Wishah and Patel reported grants from GE and Siemens outside the submitted work. Dr Patel reported participation in the speakers bureau for GE. Dr Lungren reported personal fees from Nines Inc outside the submitted work. Dr Yeom reported grants from Philips outside the submitted work. No other disclosures were reported.

Funding/Support: This work was supported by National Institutes of Health National Center for Advancing Translational Science Clinical and Translational Science Award UL1TR001085.

Role of the Funder/Sponsor: The National Institutes of Health had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Disclaimer: The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References

1. Jaja BN, Cusimano MD, Etminan N, et al. Clinical prediction models for aneurysmal subarachnoid hemorrhage: a systematic review. Neurocrit Care. 2013;18(1):143-153. doi:10.1007/s12028-012-9792-z
2. Turan N, Heider RA, Roy AK, et al. Current perspectives in imaging modalities for the assessment of unruptured intracranial aneurysms: a comparative analysis and review. World Neurosurg. 2018;113:280-292. doi:10.1016/j.wneu.2018.01.054
3. Yoon NK, McNally S, Taussky P, Park MS. Imaging of cerebral aneurysms: a clinical perspective. Neurovasc Imaging. 2016;2(1):6. doi:10.1186/s40809-016-0016-3
4. Jayaraman MV, Mayo-Smith WW, Tung GA, et al. Detection of intracranial aneurysms: multi-detector row CT angiography compared with DSA. Radiology. 2004;230(2):510-518. doi:10.1148/radiol.2302021465
5. Bharatha A, Yeung R, Durant D, et al. Comparison of computed tomography angiography with digital subtraction angiography in the assessment of clipped intracranial aneurysms. J Comput Assist Tomogr. 2010;34(3):440-445. doi:10.1097/RCT.0b013e3181d27393
6. Lubicz B, Levivier M, François O, et al. Sixty-four-row multisection CT angiography for detection and evaluation of ruptured intracranial aneurysms: interobserver and intertechnique reproducibility. AJNR Am J Neuroradiol. 2007;28(10):1949-1955. doi:10.3174/ajnr.A0699
7. White PM, Teasdale EM, Wardlaw JM, Easton V. Intracranial aneurysms: CT angiography and MR angiography for detection: prospective blinded comparison in a large patient cohort. Radiology. 2001;219(3):739-749. doi:10.1148/radiology.219.3.r01ma16739
8. Suzuki K. Overview of deep learning in medical imaging. Radiol Phys Technol. 2017;10(3):257-273. doi:10.1007/s12194-017-0406-5
9. Rajpurkar P, Irvin J, Ball RL, et al. Deep learning for chest radiograph diagnosis: a retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med. 2018;15(11):e1002686. doi:10.1371/journal.pmed.1002686
10. Bien N, Rajpurkar P, Ball RL, et al. Deep-learning-assisted diagnosis for knee magnetic resonance imaging: development and retrospective validation of MRNet. PLoS Med. 2018;15(11):e1002699. doi:10.1371/journal.pmed.1002699
11. Morita A, Kirino T, Hashi K, et al; UCAS Japan Investigators. The natural course of unruptured cerebral aneurysms in a Japanese cohort. N Engl J Med. 2012;366(26):2474-2482. doi:10.1056/NEJMoa1113260
12. Yushkevich PA, Piven J, Hazlett HC, et al. User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. Neuroimage. 2006;31(3):1116-1128. doi:10.1016/j.neuroimage.2006.01.015
13. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. Paper presented at: IEEE Conference on Computer Vision and Pattern Recognition (CVPR); June 27, 2016; Las Vegas, NV.
14. Xie S, Girshick R, Dollár P, Tu Z, He K. Aggregated residual transformations for deep neural networks. Paper presented at: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); July 25, 2017; Honolulu, HI.
15. Hu J, Shen L, Sun G. Squeeze-and-excitation networks. Paper presented at: 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); June 21, 2018; Salt Lake City, UT.
16. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention (MICCAI). Basel, Switzerland: Springer International; 2015:234-241.
17. Carreira J, Zisserman A. Quo vadis, action recognition? A new model and the Kinetics dataset. Paper presented at: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); July 25, 2017; Honolulu, HI.
18. Chen L-C, Papandreou G, Schroff F, Adam H. Rethinking atrous convolution for semantic image segmentation. https://arxiv.org/abs/1706.05587. Published June 17, 2017. Accessed May 7, 2019.
19. Milletari F, Navab N, Ahmadi S-A. V-Net: fully convolutional neural networks for volumetric medical image segmentation. Paper presented at: 2016 Fourth International Conference on 3D Vision (3DV); October 26-28, 2016; Stanford, CA.
20. Loshchilov I, Hutter F. SGDR: stochastic gradient descent with warm restarts. Paper presented at: Fifth International Conference on Learning Representations (ICLR); April 24-26, 2017; Toulon, France.
21. Huang G, Sun Y, Liu Z, Sedra D, Weinberger KQ. Deep networks with stochastic depth. In: European Conference on Computer Vision (ECCV). Basel, Switzerland: Springer International; 2016:646-661.
22. Horos. https://horosproject.org. Accessed May 1, 2019.
23. Wilson EB. Probable inference, the law of succession, and statistical inference. J Am Stat Assoc. 1927;22(158):209-212. doi:10.1080/01621459.1927.10502953
24. Fleiss JL, Cohen J. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educ Psychol Meas. 1973;33(3):613-619. doi:10.1177/001316447303300309
25. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Series B Stat Methodol. 1995;57(1):289-300.
26. Maldaner N, Stienen MN, Bijlenga P, et al. Interrater agreement in the radiologic characterization of ruptured intracranial aneurysms based on computed tomography angiography. World Neurosurg. 2017;103:876-882.e1. doi:10.1016/j.wneu.2017.04.131
27. Wang Y, Gao X, Lu A, et al. Residual aneurysm after metal coils treatment detected by spectral CT. Quant Imaging Med Surg. 2012;2(2):137-138.
28. Yoon YW, Park S, Lee SH, et al. Post-traumatic myocardial infarction complicated with left ventricular aneurysm and pericardial effusion. J Trauma. 2007;63(3):E73-E75. doi:10.1097/01.ta.0000246896.89156.70
29. Tomandl BF, Hammen T, Klotz E, Ditt H, Stemper B, Lell M. Bone-subtraction CT angiography for the evaluation of intracranial aneurysms. AJNR Am J Neuroradiol. 2006;27(1):55-59.
30. Shi W-Y, Li Y-D, Li M-H, et al. 3D rotational angiography with volume rendering: the utility in the detection of intracranial aneurysms. Neurol India. 2010;58(6):908-913. doi:10.4103/0028-3886.73743
31. Lin N, Ho A, Gross BA, et al. Differences in simple morphological variables in ruptured and unruptured middle cerebral artery aneurysms. J Neurosurg. 2012;117(5):913-919. doi:10.3171/2012.7.JNS111766
32. Villablanca JP, Jahan R, Hooshi P, et al. Detection and characterization of very small cerebral aneurysms by using 2D and 3D helical CT angiography. AJNR Am J Neuroradiol. 2002;23(7):1187-1198.
33. Chang PD, Kuoy E, Grinband J, et al. Hybrid 3D/2D convolutional neural network for hemorrhage evaluation on head CT. AJNR Am J Neuroradiol. 2018;39(9):1609-1616. doi:10.3174/ajnr.A5742
34. Chilamkurthy S, Ghosh R, Tanamala S, et al. Deep learning algorithms for detection of critical findings in head CT scans: a retrospective study. Lancet. 2018;392(10162):2388-2396. doi:10.1016/S0140-6736(18)31645-3
35. Jnawali K, Arbabshirani MR, Rao N, Patel AA. Deep 3D convolution neural network for CT brain hemorrhage classification. Paper presented at: Medical Imaging 2018: Computer-Aided Diagnosis; February 27, 2018; Houston, TX. doi:10.1117/12.2293725
36. Titano JJ, Badgeley M, Schefflein J, et al. Automated deep-neural-network surveillance of cranial images for acute neurologic events. Nat Med. 2018;24(9):1337-1341. doi:10.1038/s41591-018-0147-y
37. Ueda D, Yamamoto A, Nishimori M, et al. Deep learning for MR angiography: automated detection of cerebral aneurysms. Radiology. 2019;290(1):187-194.

Read Full Post »


AI in Psychiatric Treatment – Using Machine Learning to Increase Treatment Efficacy in Mental Health

Reporter: Aviva Lev-Ari, PhD, RN

Featuring Start Up: aifred

www.aifredhealth.com

About Us

The inability to predict any given individual’s unique response to psychiatric treatment is a huge bottleneck to recovery from mental health conditions.
To address this challenge, we are creating a deep-learning based clinical decision tool for physicians to bring personalized medicine to psychiatry.
Initially, we will be focusing on treatments for depression, but we plan to scale Aifred to encompass all mental health conditions in order to amplify clinical utility. At its core, aifred is leveraging the collective intelligence of the scientific and medical community to bring better healthcare to all.
We are a proud official IBM Watson AI XPrize team, headquartered in Montreal, Canada.

Read more about us:

Deep Learning


Something unique to every machine learning company is the precise nature of its hyperparameter optimization and the goals of its model. We will optimize aifred with the help of a distributed network of domain experts in psychiatry, a collaboration unique to aifred health. We are implementing attention networks to remove the “black-box” nature of neural networks, and we are analyzing the quality of model predictions, which allows both greater interpretability of model decisions and the generation of new basic research questions unique to the datasets and optimization techniques we develop in-house. By training aifred on reliable datasets, we ensure quality input to our model; de-identified patient outcomes will then feed back into our neural networks to continuously improve aifred’s predictive power. Feature engineering, which determines which inputs go into a network, is done differently by every team; ours will be undertaken with the support of the diverse group of experts we are recruiting.
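As a rough sketch of how attention can expose which inputs drove a prediction (illustrative only: the feature names, scores, and weights below are invented, and aifred’s actual architecture is not described in this post), a softmax attention layer turns per-feature relevance scores into weights that sum to 1 and can be read back as a saliency report:

```python
import math

def attention_weights(scores):
    """Softmax: turn raw relevance scores into weights that sum to 1."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attended_prediction(features, scores, value_weights):
    """Attention-weighted sum of per-feature contributions."""
    attn = attention_weights(scores)
    pred = sum(a * f * w for a, f, w in zip(attn, features, value_weights))
    return pred, attn

# Hypothetical normalized patient features and made-up learned parameters.
feature_names = ["symptom_severity", "sleep_quality", "prior_response", "age"]
features      = [0.9, 0.4, 0.7, 0.3]
scores        = [2.0, 0.1, 1.5, -1.0]  # relevance a trained layer would emit
value_weights = [1.0, 1.0, 1.0, 1.0]

pred, attn = attended_prediction(features, scores, value_weights)
report = sorted(zip(feature_names, attn), key=lambda t: -t[1])
print(report[0][0])  # → symptom_severity (the most-attended feature)
```

The sorted `report` is the kind of artifact a clinician-facing tool could surface: the same weights that shaped the prediction double as an explanation of it.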

Our Product


Treatment Prediction

The aifred solution makes use of innovative and powerful machine learning techniques to predict treatment efficacy based on an array of patient characteristics.

Interpretability

Forget the black box! Our system will provide a report highlighting the most significant features that led to a treatment prediction.

Patient Data Tracking

Track patient symptoms and test results to monitor outcomes or make new predictions. Banks of standardized questionnaires, data visualization, scheduling software — all of it modular and capable of being tailored to clinicians’ needs.
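To make the “banks of standardized questionnaires” concrete, here is a generic scoring sketch for one widely used instrument, the PHQ-9 depression questionnaire (nine items each scored 0-3, total 0-27). This is an illustration of modular questionnaire scoring, not aifred’s implementation:

```python
# PHQ-9 severity bands: (minimum total, maximum total, label).
SEVERITY_BANDS = [
    (0, 4, "minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def score_phq9(answers):
    """Sum nine 0-3 item scores and map the total to a severity band."""
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 expects nine answers, each scored 0-3")
    total = sum(answers)
    band = next(name for lo, hi, name in SEVERITY_BANDS if lo <= total <= hi)
    return total, band

print(score_phq9([2, 1, 2, 1, 1, 0, 1, 1, 1]))  # → (10, 'moderate')
```

Each questionnaire in such a bank reduces to a table of items and a scoring rule like this one, which is what makes the tracking module easy to tailor to a clinician’s needs.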

Electronic Patient Record

Keep all important patient information in one place, and get insights using our analytics.

 

In the News:

Montreal Gazette article written about our startup:

https://montrealgazette.com/news/local-news/a-software-tool-to-improve-treatment-of-depression-was-developed-in-montreal

Press about us winning first place globally in the IBM Watson AI XPrize milestone competition

http://www.concordia.ca/news/stories/2018/12/07/Aifred-Health-and-Nectar-take-home-top-honours-at-the-ibm-Watson-ai-xprize-milestone-competition.html

Forbes article that features our CTO, Robert Fratila:

https://www.forbes.com/sites/insights-intelai/2018/11/29/5-entrepreneurs-on-the-rise-in-ai/

Post about our graduation from the prestigious creative destruction lab program:

https://medium.com/@aifred/aifred-health-graduates-from-the-creative-destruction-lab-500e4b2a83c?fbclid=IwAR2qz9iQf8-4B07ljB1ZP3GAUCGZcK-CyxgG9cu1jtN8moEDSexvZeEcN7c

McGill University article featuring us:

https://www.mcgill.ca/giving/why-giving-matters/2019/02/05/taking-depression-using-ai?fbclid=IwAR1NN_ID04IJMto97cT-28fDfVxg1rbp7c7arbGf48MrzL4_Q4EaEzmegj8

 

REFERENCE

The Incredible Ways Artificial Intelligence Is Now Used In Mental Health

Bernard Marr

https://www.forbes.com/sites/bernardmarr/2019/05/03/the-incredible-ways-artificial-intelligence-is-now-used-in-mental-health/amp/?__twitter_impression=true

4 Benefits of using AI to help solve the mental health crisis

There are several reasons why AI could be a powerful tool to help us solve the mental health crisis. Here are four benefits:

1. Support mental health professionals

As it does for many industries, AI can help support mental health professionals in doing their jobs. Algorithms can analyze data much faster than humans, suggest possible treatments, monitor a patient’s progress, and alert the human professional to any concerns. In many cases, AI and a human clinician would work together.

2. 24/7 access

Due to the shortage of human mental health professionals, it can take months to get an appointment, and for patients in areas without enough professionals the wait is even longer. AI provides a tool that an individual can access 24/7, without waiting for an appointment.

3. Not expensive

The cost of care prohibits some individuals from seeking help. Artificially intelligent tools could offer a more accessible solution.

4. Comfort talking to a bot

While it might take some people time to feel comfortable talking to a bot, the anonymity of an AI algorithm can be positive. What might be difficult to share with a therapist in person is easier for some to disclose to a bot.

Other related articles published in this Open Access Online Scientific Journal include the following:

Resources on Artificial Intelligence in Health Care and in Medicine:

Articles of Note at PharmaceuticalIntelligence.com @AVIVA1950 @pharma_BI

Curator: Aviva Lev-Ari, PhD, RN

https://www.linkedin.com/pulse/resources-artificial-intelligence-health-care-note-lev-ari-phd-rn/

R&D for Artificial Intelligence Tools & Applications: Google’s Research Efforts in 2018

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/01/16/rd-for-artificial-intelligence-tools-applications-googles-research-efforts-in-2018/

 

McKinsey Top Ten Articles on Artificial Intelligence: 2018’s most popular articles – An executive’s guide to AI

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/01/21/mckinsey-top-ten-articles-on-artificial-intelligence-2018s-most-popular-articles-an-executives-guide-to-ai/

 

LIVE Day Three – World Medical Innovation Forum ARTIFICIAL INTELLIGENCE, Boston, MA USA, Monday, April 10, 2019

https://pharmaceuticalintelligence.com/2019/04/10/live-day-three-world-medical-innovation-forum-artificial-intelligence-boston-ma-usa-monday-april-10-2019/

 

LIVE Day Two – World Medical Innovation Forum ARTIFICIAL INTELLIGENCE, Boston, MA USA, Monday, April 9, 2019

https://pharmaceuticalintelligence.com/2019/04/09/live-day-two-world-medical-innovation-forum-artificial-intelligence-boston-ma-usa-monday-april-9-2019/

 

LIVE Day One – World Medical Innovation Forum ARTIFICIAL INTELLIGENCE, Boston, MA USA, Monday, April 8, 2019

https://pharmaceuticalintelligence.com/2019/04/08/live-day-one-world-medical-innovation-forum-artificial-intelligence-westin-copley-place-boston-ma-usa-monday-april-8-2019/

The Regulatory challenge in adopting AI

Author and Curator: Dror Nir, PhD

https://pharmaceuticalintelligence.com/2019/04/07/the-regulatory-challenge-in-adopting-ai/

 

VIDEOS: Artificial Intelligence Applications for Cardiology

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/03/11/videos-artificial-intelligence-applications-for-cardiology/

 

Artificial Intelligence in Health Care and in Medicine: Diagnosis & Therapeutics

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/01/21/artificial-intelligence-in-health-care-and-in-medicine-diagnosis-therapeutics/

 

World Medical Innovation Forum, Partners Innovations, ARTIFICIAL INTELLIGENCE | APRIL 8–10, 2019 | Westin, BOSTON

https://worldmedicalinnovation.org/agenda/

https://pharmaceuticalintelligence.com/2019/02/14/world-medical-innovation-forum-partners-innovations-artificial-intelligence-april-8-10-2019-westin-boston/

 

Digital Therapeutics: A Threat or Opportunity to Pharmaceuticals

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

https://pharmaceuticalintelligence.com/2019/03/18/digital-therapeutics-a-threat-or-opportunity-to-pharmaceuticals/

 

The 3rd STATONC Annual Symposium, April 25-27, 2019, Hilton Hartford, CT, 315 Trumbull St., Hartford, CT 06103

Reporter: Stephen J. Williams, Ph.D.

https://pharmaceuticalintelligence.com/2019/02/26/the-3rd-stat4onc-annual-symposium-april-25-27-2019-hilton-hartford-connecticut/

 

2019 Biotechnology Sector and Artificial Intelligence in Healthcare

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/05/10/2019-biotechnology-sector-and-artificial-intelligence-in-healthcare/

 

The Journey of Antibiotic Discovery

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

https://pharmaceuticalintelligence.com/2019/05/19/the-journey-of-antibiotic-discovery/

 

Artificial intelligence can be a useful tool to predict Alzheimer

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2019/01/26/artificial-intelligence-can-be-a-useful-tool-to-predict-alzheimer/

 

HealthCare focused AI Startups from the 100 Companies Leading the Way in A.I. Globally

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/01/18/healthcare-focused-ai-startups-from-the-100-companies-leading-the-way-in-a-i-globally/

 

2018 Annual World Medical Innovation Forum Artificial Intelligence April 23–25, 2018 Boston, Massachusetts | Westin Copley Place

https://worldmedicalinnovation.org/

https://pharmaceuticalintelligence.com/2018/01/18/2018-annual-world-medical-innovation-forum-artificial-intelligence-april-23-25-2018-boston-massachusetts-westin-copley-place/

 

MedCity Converge 2018 Philadelphia: Live Coverage @pharma_BI

Reporter: Stephen J. Williams

https://pharmaceuticalintelligence.com/2018/07/11/medcity-converge-2018-philadelphia-live-coverage-pharma_bi/

 

IBM’s Watson Health division – How will the Future look like?

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/04/24/ibms-watson-health-division-how-will-the-future-look-like/

 

Live Coverage: MedCity Converge 2018 Philadelphia: AI in Cancer and Keynote Address

Reporter: Stephen J. Williams, PhD

https://pharmaceuticalintelligence.com/2018/07/11/live-coverage-medcity-converge-2018-philadelphia-ai-in-cancer-and-keynote-address/

 

HUBweek 2018, October 8-14, 2018, Greater Boston – “We The Future” – coming together, of breaking down barriers, of convening across disciplinary lines to shape our future

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/10/08/hubweek-2018-october-8-14-2018-greater-boston-we-the-future-coming-together-of-breaking-down-barriers-of-convening-across-disciplinary-lines-to-shape-our-future/

 

Role of Informatics in Precision Medicine: Notes from Boston Healthcare Webinar: Can It Drive the Next Cost Efficiencies in Oncology Care?

Reporter: Stephen J. Williams, Ph.D.

https://pharmaceuticalintelligence.com/2019/01/03/role-of-informatics-in-precision-medicine-can-it-drive-the-next-cost-efficiencies-in-oncology-care/

 

Gene Editing with CRISPR gets Crisper

Curators: Larry H. Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2016/05/03/gene-editing-with-crispr-gets-crisper/

 

Disease related changes in proteomics, protein folding, protein-protein interaction

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/05/13/disease-related-changes-in-proteomics-protein-folding-protein-protein-interaction/

 

Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare

Curator: Stephen J. Williams, Ph.D.

https://pharmaceuticalintelligence.com/2018/12/10/can-blockchain-technology-and-artificial-intelligence-cure-what-ails-biomedical-research-and-healthcare/

 

N3xt generation carbon nanotubes

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2015/12/14/n3xt-generation-carbon-nanotubes/

 

Healthcare conglomeration to access Big Data and lower costs

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/01/13/healthcare-conglomeration-to-access-big-data-and-lower-costs/

 

Mindful Discoveries

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/01/28/mindful-discoveries/

 

Synopsis Days 1,2,3: 2018 Annual World Medical Innovation Forum Artificial Intelligence April 23–25, 2018 Boston, Massachusetts | Westin Copley Place

Curator: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/04/26/synopsis-days-123-2018-annual-world-medical-innovation-forum-artificial-intelligence-april-23-25-2018-boston-massachusetts-westin-copley-place/

 

Unlocking the Microbiome

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/02/07/unlocking-the-microbiome/

 

Linguamatics announces the official launch of its AI self-service text-mining solution for researchers.

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/05/10/linguamatics-announces-the-official-launch-of-its-ai-self-service-text-mining-solution-for-researchers/

 

Novel Discoveries in Molecular Biology and Biomedical Science

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/05/30/novel-discoveries-in-molecular-biology-and-biomedical-science/

 

Biomarker Development

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2015/11/16/biomarker-development/

 

Imaging of Cancer Cells

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/04/20/imaging-of-cancer-cells/

 

Future of Big Data for Societal Transformation

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2015/12/14/future-of-big-data-for-societal-transformation/

 

mRNA Data Survival Analysis

Curators: Larry H. Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2016/06/18/mrna-data-survival-analysis/

 

Applying AI to Improve Interpretation of Medical Imaging

Author and Curator: Dror Nir, PhD

https://pharmaceuticalintelligence.com/2019/05/28/applying-ai-to-improve-interpretation-of-medical-imaging/

Read Full Post »


Applying AI to Improve Interpretation of Medical Imaging

Author and Curator: Dror Nir, PhD

 

 


The idea that we can use machines’ intelligence to help us perform daily tasks is no longer alien. As a consequence, applying AI to improve the assessment of patients’ clinical condition is booming. What used to be the field of daring start-ups has now become a playground for the tech giants: Google, Amazon, Microsoft and IBM.

Interpretation of medical imaging involves standardised workflows and requires analysis of many data items. It is also well established that human subjectivity is a barrier to reproducibility and transferability of medical imaging results, as evident from reports of high interobserver variability in imaging interpretation. Accepting that computers are better suited than humans to routine, repeated tasks involving “big-data” analysis makes AI a very good candidate to improve on this situation. Google’s vision in that respect: “Machine learning has dozens of possible application areas, but healthcare stands out as a remarkable opportunity to benefit people — and working closely with clinicians and medical providers, we’re developing tools that we hope will dramatically improve the availability and accuracy of medical services.”

Google’s commitment to this vision is evident in its TensorFlow initiative. “TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML powered applications.” Two recent papers describe at length the use of TensorFlow in retrospective studies (supported by Google AI) in which medical images from publicly accessible databases were used:

Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning, Nature Biomedical Engineering, Authors: Ryan Poplin, Avinash V. Varadarajan, Katy Blumer, Yun Liu, Michael V. McConnell, Greg S. Corrado, Lily Peng, and Dale R. Webster

As a demonstration of the expected benefits of applying AI to the interpretation of medical imaging, this is a very interesting paper. The authors show how they could extract information relevant to assessing the risk of an adverse cardiac event from retinal fundus images collected while managing a totally different medical condition: “Using deep-learning models trained on data from 284,335 patients and validated on two independent datasets of 12,026 and 999 patients, we predicted cardiovascular risk factors not previously thought to be present or quantifiable in retinal images, such as age (mean absolute error within 3.26 years), gender (area under the receiver operating characteristic curve (AUC) = 0.97), smoking status (AUC = 0.71), systolic blood pressure (mean absolute error within 11.23 mmHg) and major adverse cardiac events (AUC = 0.70).”
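The AUC figures quoted above can be computed without plotting any curve: the area under the ROC curve equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case (ties counted as one half). A minimal sketch, with made-up scores rather than data from the paper:

```python
def roc_auc(labels, scores):
    """AUC via its rank interpretation: P(score_pos > score_neg),
    with ties between a positive and a negative counted as 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores for 4 positive and 4 negative cases.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.3, 0.2, 0.1]
print(roc_auc(labels, scores))  # → 0.875
```

An AUC of 0.5 corresponds to random scoring and 1.0 to perfect ranking, which is why figures like the paper’s 0.97 for gender and 0.70 for major adverse cardiac events are directly comparable across very different prediction targets.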

 


Clearly, if such an algorithm were implemented as a generalised and transferable medical device that can be used in routine practice, it would contribute to the cost-effectiveness of screening programs.

 

End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography, Nature Medicine, Authors: Diego Ardila, Atilla P. Kiraly, Sujeeth Bharadwaj, Bokyung Choi, Joshua J. Reicher, Lily Peng, Daniel Tse , Mozziyar Etemadi, Wenxing Ye, Greg Corrado, David P. Naidich and Shravya Shetty.

This paper is in line with many previously published works demonstrating how AI can increase the accuracy of cancer diagnosis compared with the current state of the art: “Existing challenges include inter-grader variability and high false-positive and false-negative rates. We propose a deep learning algorithm that uses a patient’s current and prior computed tomography volumes to predict the risk of lung cancer. Our model achieves a state-of-the-art performance (94.4% area under the curve) on 6,716 National Lung Cancer Screening Trial cases, and performs similarly on an independent clinical validation set of 1,139 cases.”


The benefit of using an AI-based application for lung cancer screening (if and when such an algorithm is implemented as a generalised and transferable medical device) is well summarised by the authors: “The strong performance of the model at the case level has important potential clinical relevance. The observed increase in specificity could translate to fewer unnecessary follow up procedures. Increased sensitivity in cases without priors could translate to fewer missed cancers in clinical practice, especially as more patients begin screening. For patients with prior imaging exams, the performance of the deep learning model could enable gains in workflow efficiency and consistency as assessment of prior imaging is already a key component of a specialist’s workflow. Given that LDCT screening is in the relatively early phases of adoption, the potential for considerable improvement in patient care in the coming years is substantial. The model’s localization directs follow-up for specific lesion(s) of greatest concern. These predictions are critical for patients proceeding for further work-up and treatment, including diagnostic CT, positron emission tomography (PET)/CT or biopsy. Malignancy risk prediction allows for the possibility of augmenting existing, manually created interpretation guidelines such as Lung-RADS, which are limited to subjective clustering and assessment to approximate cancer risk.”

BTW: the methods sections in these two papers are detailed enough to allow any interested party to reproduce the studies.
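As an aside on the reported metrics: the headline numbers above (area under the curve, sensitivity, specificity) can be illustrated with a short sketch on synthetic data. The risk scores, class balance, and the 0.5 decision threshold below are illustrative stand-ins, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic risk scores: cancers (label 1) tend to score higher than controls (label 0).
y_true = np.concatenate([np.ones(100), np.zeros(900)])
y_score = np.concatenate([rng.normal(0.7, 0.15, 100),
                          rng.normal(0.3, 0.15, 900)])

def auc(y_true, y_score):
    """AUC = probability that a random positive outscores a random negative."""
    pos, neg = y_score[y_true == 1], y_score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Binarize at one operating point to read off sensitivity and specificity.
y_pred = (y_score >= 0.5).astype(int)
tp = np.sum((y_true == 1) & (y_pred == 1))
fn = np.sum((y_true == 1) & (y_pred == 0))
tn = np.sum((y_true == 0) & (y_pred == 0))
fp = np.sum((y_true == 0) & (y_pred == 1))
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc(y_true, y_score):.3f}  sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
```

Note how AUC is threshold-free while sensitivity and specificity depend on the chosen operating point — the trade-off the authors describe between fewer unnecessary follow-ups (specificity) and fewer missed cancers (sensitivity).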

For the sake of balance-of-information, I would like to note that:

  • Amazon is encouraging access to its AI platform, Amazon SageMaker: “Amazon SageMaker provides every developer and data scientist with the ability to build, train, and deploy machine learning models quickly. Amazon SageMaker is a fully-managed service that covers the entire machine learning workflow to label and prepare your data, choose an algorithm, train the model, tune and optimize it for deployment, make predictions, and take action. Your models get to production faster with much less effort and lower cost.” Amazon also offers training courses to help programmers gain proficiency in machine learning on its AWS platform: “We offer 30+ digital ML courses totaling 45+ hours, plus hands-on labs and documentation, originally developed for Amazon’s internal use. Developers, data scientists, data platform engineers, and business decision makers can use this training to learn how to apply ML, artificial intelligence (AI), and deep learning (DL) to their businesses unlocking new insights and value. Validate your learning and your years of experience in machine learning on AWS with a new certification.”
  • IBM is offering a general-purpose AI platform named Watson. Watson is also promoted as a platform to develop AI applications in the “health” sector with the following positioning: “IBM Watson Health applies data-driven analytics, advisory services and advanced technologies such as AI, to deliver actionable insights that can help you free up time to care, identify efficiencies, and improve population health.”
  • Microsoft is offering its AI platform as a tool to accelerate development of AI solutions. It also offers an AI school: “Dive in and learn how to start building intelligence into your solutions with the Microsoft AI platform, including pre-trained AI services like Cognitive Services and Bot Framework, as well as deep learning tools like Azure Machine Learning, Visual Studio Code Tools for AI and Cognitive Toolkit. Our platform enables any developer to code in any language and infuse AI into your apps. Whether your solutions are existing or new, this is the intelligence platform to build on.”

Read Full Post »


The Journey of Antibiotic Discovery

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

 

The term ‘antibiotic’ was introduced by Selman Waksman as any small molecule, produced by a microbe, with antagonistic properties on the growth of other microbes. An antibiotic interferes with bacterial survival via a specific mode of action but, more importantly, at therapeutic concentrations it is sufficiently potent to be effective against infection while presenting minimal toxicity. Infectious diseases have been a challenge throughout the ages. From 1347 to 1350, approximately one-third of Europe’s population perished from bubonic plague. Advances in sanitary and hygienic conditions sufficed to control further plague outbreaks. However, these persisted as a recurrent public health issue. Likewise, infectious diseases in general remained the leading cause of death up to the early 1900s. The mortality rate shrank after the commercialization of antibiotics, which, given their impact on the fate of mankind, were regarded as a ‘medical miracle’. Moreover, the non-therapeutic application of antibiotics has also greatly affected humanity, for instance their use as livestock growth promoters to increase food production after World War II.

 

Currently, more than 2 million North Americans acquire infections associated with antibiotic resistance every year, resulting in 23,000 deaths. In Europe, nearly 700,000 antibiotic-resistant infections directly cause over 33,000 deaths yearly, with an estimated cost of over €1.5 billion. Despite a 36% increase in human use of antibiotics from 2000 to 2010, approximately 20% of deaths worldwide are still related to infectious diseases. Future perspectives are no brighter: for instance, a government-commissioned study in the United Kingdom estimated 10 million deaths per year from antibiotic-resistant infections by 2050.

 

Given the increase in antibiotic-resistant bacteria, alongside the alarmingly low rate of newly approved antibiotics for clinical usage, we are on the verge of not having effective treatments for many common infectious diseases. Historically, antibiotic discovery has been crucial in outpacing resistance, and success is closely related to systematic procedures – platforms – that catalyzed the antibiotic golden age, namely the Waksman platform, followed by the platforms of semi-synthesis and fully synthetic antibiotics. These platforms yielded the major antibiotic classes: aminoglycosides, amphenicols, ansamycins, beta-lactams, lipopeptides, diaminopyrimidines, fosfomycins, imidazoles, macrolides, oxazolidinones, streptogramins, polymyxins, sulphonamides, glycopeptides, quinolones and tetracyclines.

 

The increase in drug-resistant pathogens is a consequence of multiple factors, including but not limited to high rates of antimicrobial prescriptions, antibiotic mismanagement in the form of self-medication or interruption of therapy, and large-scale antibiotic use as growth promoters in livestock farming. For example, 60% of the antibiotics sold to the US food industry are also used as therapeutics in humans. To further complicate matters, it is estimated that $200 million is required for a molecule to reach commercialization, with the risk that antimicrobial resistance develops rapidly and cripples its clinical application, or, at the opposite extreme, that a new antibiotic is so effective it is reserved as a last-resort therapeutic and thus never widely commercialized.

 

Besides more efficient management of antibiotic use, there is a pressing need for new platforms capable of consistently and efficiently delivering new lead substances, which must improve on the impressively low success rates of their precursors in today’s scenario of increasing drug resistance. Antibiotic discovery platforms aim to screen large libraries, for instance the reservoir of untapped natural products, which is likely the next antibiotic ‘gold mine’. There is a void between phenotypic screening (high-throughput) and omics-centered assays (high-information), in which some mechanistic and molecular information could complement antimicrobial activity data without the laborious and extensive application of multiple omics assays. The increasing need for antibiotics drives relentless, continuous research at the forefront of antibiotic discovery. This is likely to expand our knowledge of the biological events underlying infectious diseases and, hopefully, result in better therapeutics that can swing the war on infectious diseases back in our favor.

 

During the genomics era came the target-based platform, mostly considered a failure due to limitations in translating drugs to the clinic. Therefore, cell-based platforms were re-instituted and are still of the utmost importance in the fight against infectious diseases. Although the antibiotic pipeline is still lackluster, especially in new classes and novel mechanisms of action, in the post-genomic era there is an increasingly large body of information on microbial metabolism. The translation of such knowledge into novel platforms will hopefully result in the discovery of new and better therapeutics that can sway the war on infectious diseases back in our favor.

 

References:

 

https://www.mdpi.com/2079-6382/8/2/45/htm

 

https://www.ncbi.nlm.nih.gov/pubmed/19515346

 

https://www.ajicjournal.org/article/S0196-6553(11)00184-2/fulltext

 

https://www.ncbi.nlm.nih.gov/pubmed/21700626

 

http://www.med.or.jp/english/journal/pdf/2009_02/103_108.pdf

 

Read Full Post »


2019 Biotechnology Sector and Artificial Intelligence in Healthcare

Reporter: Aviva Lev-Ari, PhD, RN

 

AI Ushers in a New Era

The implications of AI, cloud-based technologies and increased R&D focus have lent a competitive edge to companies within the biotech space. The use of AI has gradually begun to revolutionize research activities in the industry as it can drastically reduce time and costs involved in developing life-saving drugs.

Let’s take a look at some instances of how AI is being used to advance biotech. AiCure has developed an application that uses AI to determine whether and when a patient takes a pill; it is now used regularly in many clinical trials. SOPHiA Genetics’ AI system is used for genomic analysis of next-generation sequencing data from hospitals and research institutions globally.

Moreover, at the beginning of 2019 Illumina ILMN released open-source artificial intelligence software for discovering previously overlooked noncoding mutations in patients with rare genetic diseases.

In fact, J&J JNJ , Pfizer PFE and Novartis NVS have tie-ups with IBM’s Watson Health. Per the deals, the companies can use Watson Health’s AI solutions and applications for drug discovery and to accelerate cancer research efforts.

SOURCE

https://m.nasdaq.com/article/biotechnology-market-on-a-tear-5-etfs-in-spotlight-cm1128200

 

The biotech industry has kept its promise for solid returns so far. The rally in some major biotechnology indexes reflects the same.

SOURCE

https://m.nasdaq.com/article/biotechnology-market-on-a-tear-5-etfs-in-spotlight-cm1128200

BioPharma

Novartis AG (NVS): Free Stock Analysis Report

Eli Lilly and Company (LLY): Free Stock Analysis Report

Roche Holding AG (RHHBY): Free Stock Analysis Report

Pfizer Inc. (PFE): Free Stock Analysis Report

Johnson & Johnson (JNJ): Free Stock Analysis Report

ALPS Medical Breakthroughs ETF (SBIO): ETF Research Reports

Principal Healthcare Innovators Index ETF (BTEC): ETF Research Reports

Virtus LifeSci Biotech Products ETF (BBP): ETF Research Reports

SPDR S&P Biotech ETF (XBI): ETF Research Reports

Spark Therapeutics, Inc. (ONCE): Free Stock Analysis Report

Illumina, Inc. (ILMN): Free Stock Analysis Report

ARK Genomic Revolution Multi-Sector ETF (ARKG): ETF Research Reports

SOURCE

To read this article on Zacks.com click here.

Zacks Investment Research

 

2019 M&A in Biotech

Mergers and acquisitions (M&As) are dominating the sector as sluggishness in mature products has forced companies to explore acquisitions to bolster their pipelines. The biggest deal of the year was Bristol-Myers’ $74 billion offer to buy Celgene. Also, Eli Lilly and Company LLY has announced that it will take over Loxo Oncology for $8 billion to broaden its oncology suite to precision medicines or targeted therapies. (read: What’s Behind the Biotech ETF Rally to Start 2019?)

Several other large-cap pharma as well as bigger biotech companies are entering collaboration deals with smaller ones to boost their pipeline. Notably, Swiss pharma giant Roche Holdings RHHBY has bet big on U.S.-based gene therapy company Spark Therapeutics ONCE in an effort to strengthen its presence in gene therapy. Similarly, in order to develop gene therapies targeting rare indications, Biogen has offered to buy Nightstar Therapeutics.

Furthermore, in-licensing deals are consistently rising with bigwigs partnering with smaller and mid-sized players that own promising mid-to-late stage pipeline candidates or interesting technology.

SOURCE

https://m.nasdaq.com/article/biotechnology-market-on-a-tear-5-etfs-in-spotlight-cm1128200

 

Takeda-Novartis, Daiichi-AZ and more—FiercePharmaAsia
Takeda sells meds to Novartis and J&J; Daiichi’s AZ-shared HER2 antibody-drug conjugate hits key trial goal; Sun scouts for Chinese partner.
Novartis buys Takeda’s Xiidra, gets 400 staffers in $3.4B deal
Novartis hopes the deal, potentially worth $5.3 billion, could better position it in front-of-the-eye therapies.
AZ, BeiGene, Kangmei and more—FiercePharmaAsia
AZ warns of slower China growth; BeiGene chief ranks among highest-paid biopharma CEOs; Kangmei faces delisting over huge accounting “error.”
After safety scare, Sanofi’s Dengvaxia nabs limited FDA nod
The FDA limited Dengvaxia to older children and teenagers living in endemic regions—and only if a diagnostic test confirms a prior dengue infection.
Takeda’s new Trintellix ad celebrates everyday wins
Takeda highlights everyday joys in new TV ads for major depressive disorder treatment Trintellix.
HIV drugmakers ViiV, Gilead top pharma reputation survey
Pharma’s reputation is holding steady with patient groups with an annual study finding 41% giving pharma good marks, similar to 43% the year before.
PD-1 royalty dispute, Takeda and more—FiercePharmaAsia
Nobel laureate wants bigger PD-1 revenue cut; Takeda scouts buyers for Latin America business; Chinese genomics investor is forced out of U.S. firm.
Takeda scouts buyers for Latin American business: report
Takeda sold its Brazil-based unit Multilab right after it confirmed its plan to buy Shire, and now it’s reportedly mulling another sale in the region.
Repackager recalls 40 lots of tainted losartan—News of Note
CDMOs Cambrex and Ajinomoto Bio-Pharma Services upgraded manufacturing plants, Takeda scored an albumin approval via its Shire deal, and more.
NICE limits coverage of J&J, Takeda myeloma combo
J&J’s Darzalex is on track to nab a second first-line myeloma nod in the U.S., but its reimbursement journey in England hasn’t been so smooth.

Other related 260 articles published in this Open Access Online Scientific Journal include the following:

https://pharmaceuticalintelligence.com/?s=Artificial+Intelligence

To access 260 articles:

GO TO Categories

Select CATEGORY

Artificial Intelligence per Ontology on this topic [multiple nested categories]

Read Full Post »


LIVE Day Three – World Medical Innovation Forum ARTIFICIAL INTELLIGENCE, Boston, MA USA, Wednesday, April 10, 2019

 

www.worldmedicalinnovation.org

 

The Forum will focus on patient interactions across care settings, and the role technology and data can play in advancing knowledge discovery and care delivery. The agenda can be found here.

https://worldmedicalinnovation.org/agenda/

Leaders in Pharmaceutical Business Intelligence (LPBI) Group

represented by Founder & Director, Aviva Lev-Ari, PhD, RN will cover this event in REAL TIME using Social Media

@pharma_BI

@AVIVA1950

@PHSInnovation

#WMIF19 

Wednesday, April 10, 2019

7:00 am – 12:00 pm
7:30 am – 9:30 am
Bayer Ballroom

Innovation Discovery Grant Awardee Presentations

Eleven clinical teams selected to receive highly competitive Innovation Discovery Grants present their work illustrating how AI can be used to improve patient health and health care delivery. This session is designed for investors, entrepreneurs, investigators, and others who are interested in commercializing AI opportunities that are currently in development with support from the Innovation Office.

To view speakers and topics, click here.

Where AI Meets Clinical Care

Twelve clinical AI teams culled through the Innovation Discovery Grant program present their work illustrating how AI can be used to improve patient health and health care delivery.


Peter Dunn, MD

Vice President, Perioperative Services and Healthcare System Engineering, MGH; Assistant Professor, Anesthesia, HMS

Using Deep Learning to Optimize Hospital Capacity Management

  • collaboration with @MIT @MGH
  • deploy mobile app across all Partners institutions

 

Kevin Elias, MD

Director, Gynecologic Oncology Research Laboratory, BH; Assistant Professor, HMS

Screening for Cancer Using Serum miRNA Neural Networks

  • Cancer screening is a fragmented process – current tests are not efficient, and no screening exists for many common cancer types
  • Cervical, breast, colon, ovarian, uterine cancers
  • Serum miRNA panels could screen for multiple cancer types

 

Alexandra Golby, MD

Director, Image-Guided Neurosurgery, BH; Professor, Neurosurgery and Radiology, HMS

Using Machine Learning to Optimize Optical Image Guidance for Brain Tumor Surgery

  • Optical visualization in neurosurgery – to improve brain cancer surgery; attempting complete tumor resection can cause neurological deficits
  • BWH original research on neuronavigation and intraoperative MRI
  • New real-time tool: color-codes tumors using light-based diagnostics with machine learning
  • Guides brain surgery; also applicable to breast cancer
  • IP filing, prototype creation, testing, pre-clinical testing, clinical protocol; established academic-industrial partnerships
  • AI-based – world’s first guided neurosurgery

 

Jayashree Kalpathy-Cramer, PhD

Director, QTIM Lab, MGH; Associate Professor, Radiology, HMS

DeepROP: Point-of-Care System for Diagnosis of Plus Disease in Retinopathy of Prematurity

  • Prematurity: <1,250 g birth weight, <31 weeks of gestation
  • ROP – retinopathy of prematurity
  • Images annotated plus/not plus – an algorithm rates images as “normal” or “plus”
  • DeepROP application integrated into a camera for data acquisition, including iPhone
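The annotate-then-rate loop above can be sketched in miniature. The toy nearest-centroid classifier on synthetic 8×8 "images" below stands in for the deep network; all data, shapes, and the bright-band "dilated vessel" model are illustrative assumptions, not details of DeepROP.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_images(n, vessel_intensity):
    """Synthetic 8x8 retinal 'images': a horizontal band stands in for vessels."""
    imgs = rng.normal(0.0, 0.1, (n, 8, 8))
    imgs[:, 3:5, :] += vessel_intensity  # brighter band = more dilated/tortuous vessels
    return imgs.reshape(n, -1)

# Expert-annotated training images: "plus" disease vs. "normal".
X_plus, X_normal = make_images(50, 1.0), make_images(50, 0.2)
centroid_plus, centroid_normal = X_plus.mean(axis=0), X_normal.mean(axis=0)

def rate(image_vec):
    """Rate an image by its distance to each class centroid."""
    d_plus = np.linalg.norm(image_vec - centroid_plus)
    d_normal = np.linalg.norm(image_vec - centroid_normal)
    return "plus" if d_plus < d_normal else "normal"

print(rate(make_images(1, 1.0)[0]))  # → plus
```

The point is the workflow, not the model: expert labels define the classes, and the trained rater can then run at the point of care on images straight from the acquisition camera.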

 

Jochen Lennerz, MD, PhD

Associate Director, Center for Integrated Diagnostics, MGH; Assistant Professor, HMS

Predicting Unnecessary Surgeries in High-Risk Breast Lesions

  • A 10% reduction in high-risk lesion surgeries is equivalent to $1.4 billion in cost savings
  • Funding for a production line

Bruno Madore, PhD

Associate Professor, Radiology, BH, HMS

Sensor Technology for Enhanced Medical Imaging

  • ML ultrasound – Organ Configuration Motion (OCM) sensor
  • Hybrid MRI-ultrasound acquisitions
  • Long-term vision – collaboration with Duke on a wireless device

 

Jinsong Ouyang, PhD

Physicist, MGH; Associate Professor, HMS

Training a Neural Network to Detect Lesions

  • Approach – train a NN using artificially inserted lesions

APPLICATIONS:

  • Build an unlimited number of training sets from the small human data sets (15–50 cases) available
  • Bone lesion detection using SPECT
  • Cardiac: myocardial perfusion SPECT
  • Tumor detection with PET
  • Volume detection/localization of artificial spinal lesions (L1-L5)
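The augmentation idea above – expanding a handful of real volumes into an unbounded labeled training set by inserting synthetic lesions at known positions – can be sketched as follows. The Gaussian lesion model, volume sizes, and intensities are illustrative assumptions, not the group's actual method.

```python
import numpy as np

rng = np.random.default_rng(42)

def insert_lesion(volume, center, radius=3.0, intensity=2.0):
    """Return a copy of `volume` with a Gaussian blob added at `center`."""
    z, y, x = np.indices(volume.shape)
    dist2 = (z - center[0])**2 + (y - center[1])**2 + (x - center[2])**2
    return volume + intensity * np.exp(-dist2 / (2 * radius**2))

def make_training_set(real_volumes, n_samples):
    """Each sample: (volume with one inserted lesion, known lesion center)."""
    samples = []
    for _ in range(n_samples):
        base = real_volumes[rng.integers(len(real_volumes))]
        center = tuple(int(rng.integers(0, s)) for s in base.shape)
        samples.append((insert_lesion(base, center), center))
    return samples

# 15 small "human" volumes expand into as many labeled examples as needed.
real = [rng.normal(0.0, 0.05, (16, 16, 16)) for _ in range(15)]
train = make_training_set(real, 200)
print(len(train), train[0][0].shape)  # → 200 (16, 16, 16)
```

Because the lesion is inserted programmatically, every training example comes with an exact ground-truth location, which is what makes small human datasets sufficient to supervise a detection network.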

 

David Papke, MD, PhD

Resident, Surgical Pathology, BH; Clinical Fellow, HMS

Augmented Digital Microscopy for Diagnosis of Endometrial Neoplasia

See tweet

 

Martin Teicher, MD, PhD

Director, Developmental Biopsychiatry Research Program, McLean; Associate Professor, Psychiatry, HMS

Poly-Exposure Risk Scores for Psychiatric Disorders

  • MACE scale – psychopathology development – collinearity
  • Identifying sensitive-period predictors of major depression
  • Predicting risk in adolescence – a dataset with high collinearity
  • Onset of depression at ages 10–15
  • 50% assessment of exposure to adversity – based on neuroimaging
  • Analytics and AI for longitudinal studies

 

 

Christian Webb, PhD

Director, Treatment and Etiology of Depression, Youth Lab, McLean; Assistant Professor, Psychiatry, HMS

Leveraging Machine Learning to Match Depressed Patients to the Optimal Treatment

  • 4–8 weeks of treatment before psychotropic drugs take effect
  • Data-driven approaches: can ML better match patients to antidepressant treatments (Zoloft vs. placebo, responder/non-responder)?
  • Prediction from a large number of variables; a prognosis calculator for good vs. poor outcomes
  • Identifies patients who do better on Zoloft vs. placebo

 

Brandon Westover, MD, PhD

Executive Director, Clinical Data Animation Center, MGH; Associate Professor, Neurology, HMS
  • Seizure – prediction of the next attack
  • EEG readings – accurate diagnosis of epilepsy
  • 50 million patients worldwide
  • Automated epilepsy detection
  • @MGH – 1,063 EEGs, 88,000 spikes scored by 7 experts – not all agreed
  • How well can experts identify spikes?
  • The automated spike detector beats experts: roughly 60% false positives at 87% sensitivity for experts vs. about 10% false positives at 87% sensitivity for the AI
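The expert-vs-detector comparison above reduces to two confusion-matrix statistics. A minimal sketch of computing sensitivity and false-positive rate from binary spike calls, using made-up label arrays rather than the MGH data:

```python
import numpy as np

def sensitivity_fpr(y_true, y_pred):
    """Sensitivity (true-positive rate) and false-positive rate from binary spike calls."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    return float(tp / (tp + fn)), float(fp / (fp + tn))

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])    # ground-truth spikes
expert = np.array([1, 1, 1, 0, 1, 1, 1, 0, 0, 0])    # misses one spike, three false alarms
detector = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])  # same misses, one false alarm

print(sensitivity_fpr(y_true, expert))    # → (0.75, 0.5)
print(sensitivity_fpr(y_true, detector))  # → (0.75, 0.16666666666666666)
```

At matched sensitivity, the lower false-positive rate is what distinguishes the automated detector in the comparison above.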
Moderator: David Louis, MD
  • Pathologist-in-Chief, MGH; Benjamin Castleman Professor of Pathology, HMS
Moderator: Clare Tempany, MD
  • Vice-Chair, Radiology Research, BH; Ferenc Jolesz MD Professor of Radiology, HMS
9:30 am – 10:00 am
10:00 am – 10:30 am
Bayer Ballroom

1:1 Fireside Chat: Stefan Oelrich, Member of the Board of Management; President, Pharmaceutical, Bayer AG

Introduction by: John Fish
  • CEO, Suffolk; Chairman of Board Trustees, Brigham Health
Moderator: Betsy Nabel, MD
  • President, Brigham Health; Professor of Medicine, HMS
  • Member of the Board of Management, Bayer AG; President, Pharmaceutical, Bayer AG

Chief Digital Officers

  • Leaders at the top need to understand AI
  • Millennials need to fill roles as Baby Boomers retire
  • Boston – research funded by the NIH and by private investment; technology transfer to commercialization
  • Career advice: academia is the first step for credibility; then move to Big Pharma or create your own company
  • America’s economic strength is built on innovation; health care is the place to invest
  • Leadership at Bayer: “Culture eats strategy for breakfast”
  • AI overcoming barriers – AI improves what we know; in medical imaging, human vs. machine, AI is the new norm; platforms: an imaging AI device to detect hypertension more accurately, a development of Bayer and Merck – Bayer is a leader in radiology
  • Clinical research – endpoints to reach and compare
  • Future: a billion endpoints – which therapeutic pathway is best for which patient
  • Incentives for risky strategies
  • Motivation to collaborate in Boston: cardiology with the Broad Institute
  • BWH data and algorithms to increase knowledge
  • Pricing medicine around the world
  • The US system is non-transparent – patients do not understand the price of meds or rebates to payers
  • Medicare Part B – no pass-through of rebates; price tied to value
  • As an industry – innovations in pharma reduce health care costs; in Germany 15% of health care spending goes to drugs including generics, with patented medicines 4% of the total – the best in Europe
  • Break silos
  • In the US, training physicians to lead innovations
10:30 am – 11:00 am
Bayer Ballroom

1:1 Fireside Chat: Deepak Chopra, MD, Founder, The Chopra Foundation

Moderator: Rudolph Tanzi, PhD
  • Vice-Chair, Neurology, Director, Genetics and Aging Research Unit, MGH; Joseph P. and Rose F. Kennedy Professor of Neurology, HMS
  • Imaging of the brains of women in meditation – elongated telomeres
  • Decreased inflammation – interactions of sleep health, exercise, learning new things, diet
  • Flushing of waste from the brain – amyloidosis in AD – 35 gene variants leading to disease
  • Founder, The Chopra Foundation – body-mind connection
  • AI – reinvent our bodies: telomeres, omics
  • Nutrition, sleep, exercise, BP, HR, sympathetic vs. parasympathetic nervous system, breathing pattern – microbiome; subjective experience with vitals; emotional well-being
  • Immersive, augmented experiences
  • Longer telomeres – anti-aging correlation
  • Biomarkers vs. states of energy
  • Wisdom is the best knowledge for self-awareness – the highest intelligence – NOT artificial
  • Thoughts on being aware
11:00 am – 11:50 am
Bayer Ballroom

Using AI to Predict and Monitor Human Performance and Neurological Disease

In the quest for effective treatments aimed at devastating neurological diseases like Alzheimer’s and ALS, there is a critical need for robust methods to predict and monitor disease progression. AI-based approaches offer promise in this important area. Panelists will discuss efforts to map movement-related disorders and use machine learning to predict the path of disease with imaging and biomarkers.

  • Chief of Neurology, Co-Director, Neurological Clinical Research Institute, MGH; Julieanne Dorn Professor of Neurology, HMS
  • Chief Scientist, Dolby Laboratories Stanford & Adobe – measuring experience
  • convergence of skills
  • internal wellness measured in the ear, motions
  • Stimulate Vagal nerve through the ear for depression treatment
  • Legislation in CA contribution to spaces
  • Global Therapeutic Head, Neuroscience Janssen Research & Development
  • Disease starts earlier; Biogen’s contributions in the field
  • Measurement – surrogate indicators for outcomes given interventions
  • Autism spectrum is not one disease
  • AI will enhance the human competence for measurement
  • UK-based efforts to share data and launch programs for dementia
  • Conditions of Brain & Mind – declining cognitive
  • Democratization of discovery
  • AI benefit iterative process in changing and improving Algorithms — FDA approved algorithm needs several versions in the future
  • Complexity of CNS Polygenic gene scores
  • Dynamics of AI
  • EVP and CMO, Biogen
  • MS – follow patients, patient reporting in 10 centers, vision, cognition
  • Obtain measurements even on normal people for early detection – the FDA introduced biomarker-based Stages 1, 2, 3
  • Newborn screening kit – testing early helps
  • Home monitoring for the onset of AD

Dr. Isaac Galatzer-Levy – NYU & AiCure

  • All CNS diseases are heterogeneous
  • ML requires collaboration
  • AiCure – Medication adherence monitoring from Voice of patients
  • Sampling populations – cell phone
  • Re-investigate studies that have failed with new AI tools
11:50 am – 12:50 pm
Bayer Ballroom

Disruptive Dozen: 12 Technologies that will reinvent AI in the Next 12 Months

The Disruptive Dozen identifies and ranks the AI technologies that Partners faculty feel will break through over the next year to significantly improve health care.

  • Innovations and technologies close to making it to market

#12 David Ahern – Mental Health in US closing the Gap

#11 David Ting – Voice first

#10 Bharti Khurana – Intimate Partner Violence

#9 Gilberto Gonzalez – Acute Stroke Care

#8 James Hefferman – Burden of Health Care Administration

#7 Samuel Aronson – FHIR Health information exchange

#6 Joan Miller – AI for eye health

#5 Brandon Westover – A Window to the Brain

#4 Rochelle Walensky – Automated detection of Malaria

#3 Annette Kim – Streamlining Diagnosis 

#2 Thomas McCoy – Better Prediction of Suicide Risk

#1 Alexandra Golby – Reimagining Medical Imaging

 

Moderator: Jeffrey Golden, MD
  • Chair, Department of Pathology, BH; Ramzi S. Cotran Professor of Pathology, HMS
  • Associate Chief, Infection Control Unit, MGH; Assistant Professor, Medicine, HMS
1:00 pm – 1:10 pm
Bayer Ballroom

Read Full Post »

Older Posts »