Reported by Dror Nir, PhD

Deep Learning–Assisted Diagnosis of Cerebral Aneurysms Using the HeadXNet Model

Allison Park, BA; Chris Chute, BS; Pranav Rajpurkar, MS; et al. Original Investigation, Health Informatics. Published June 7, 2019. JAMA Netw Open. 2019;2(6):e195600. doi:10.1001/jamanetworkopen.2019.5600

Key Points

Question  How does augmentation with a deep learning segmentation model influence the performance of clinicians in identifying intracranial aneurysms from computed tomographic angiography examinations?

Findings  In this diagnostic study of intracranial aneurysms, a test set of 115 examinations was reviewed once with model augmentation and once without in a randomized order by 8 clinicians. The clinicians showed significant increases in sensitivity, accuracy, and interrater agreement when augmented with neural network model–generated segmentations.

Meaning  This study suggests that the performance of clinicians in the detection of intracranial aneurysms can be improved by augmentation using deep learning segmentation models.

 

Abstract

Importance  Deep learning has the potential to augment clinician performance in medical imaging interpretation and reduce time to diagnosis through automated segmentation. Few studies to date have explored this topic.

Objective  To develop and apply a neural network segmentation model (the HeadXNet model) capable of generating precise voxel-by-voxel predictions of intracranial aneurysms on head computed tomographic angiography (CTA) imaging to augment clinicians’ intracranial aneurysm diagnostic performance.

Design, Setting, and Participants  In this diagnostic study, a 3-dimensional convolutional neural network architecture was developed using a training set of 611 head CTA examinations to generate aneurysm segmentations. Segmentation outputs from this support model on a test set of 115 examinations were provided to clinicians. Between August 13, 2018, and October 4, 2018, 8 clinicians diagnosed the presence of aneurysm on the test set, both with and without model augmentation, in a crossover design using randomized order and a 14-day washout period. Head and neck examinations performed between January 3, 2003, and May 31, 2017, at a single academic medical center were used to train, validate, and test the model. Examinations positive for aneurysm had at least 1 clinically significant, nonruptured intracranial aneurysm. Examinations with hemorrhage, ruptured aneurysm, posttraumatic or infectious pseudoaneurysm, arteriovenous malformation, surgical clips, coils, catheters, or other surgical hardware were excluded. All other CTA examinations were considered controls.

Main Outcomes and Measures  Sensitivity, specificity, accuracy, time, and interrater agreement were measured. Metrics for clinician performance with and without model augmentation were compared.

Results  The data set contained 818 examinations from 662 unique patients with 328 CTA examinations (40.1%) containing at least 1 intracranial aneurysm and 490 examinations (59.9%) without intracranial aneurysms. The 8 clinicians reading the test set ranged in experience from 2 to 12 years. Augmenting clinicians with artificial intelligence–produced segmentation predictions resulted in clinicians achieving statistically significant improvements in sensitivity, accuracy, and interrater agreement when compared with no augmentation. The clinicians’ mean sensitivity increased by 0.059 (95% CI, 0.028-0.091; adjusted P = .01), mean accuracy increased by 0.038 (95% CI, 0.014-0.062; adjusted P = .02), and mean interrater agreement (Fleiss κ) increased by 0.060, from 0.799 to 0.859 (adjusted P = .05). There was no statistically significant change in mean specificity (0.016; 95% CI, −0.010 to 0.041; adjusted P = .16) or mean time to diagnosis (5.71 seconds; 95% CI, −7.22 to 18.63 seconds; adjusted P = .19).

Conclusions and Relevance  The deep learning model developed successfully detected clinically significant intracranial aneurysms on CTA. This suggests that integration of an artificial intelligence–assisted diagnostic model may augment clinician performance with dependable and accurate predictions and thereby optimize patient care.

Introduction

Diagnosis of unruptured aneurysms is a critically important clinical task: intracranial aneurysms occur in 1% to 3% of the population and account for more than 80% of nontraumatic life-threatening subarachnoid hemorrhages.1 Computed tomographic angiography (CTA) is the primary, minimally invasive imaging modality currently used for diagnosis, surveillance, and presurgical planning of intracranial aneurysms,2,3 but interpretation is time consuming even for subspecialty-trained neuroradiologists. Low interrater agreement poses an additional challenge for reliable diagnosis.4-7

Deep learning has recently shown significant potential in accurately performing diagnostic tasks on medical imaging.8 Specifically, convolutional neural networks (CNNs) have demonstrated excellent performance on a range of visual tasks, including medical image analysis.9 However, the ability of deep learning systems to augment clinician workflow remains relatively unexplored.10 The development of an accurate deep learning model to help clinicians reliably identify clinically significant aneurysms on CTA has the potential to provide radiologists, neurosurgeons, and other clinicians an easily accessible and immediately applicable diagnostic support tool.

In this study, a deep learning model to automatically detect intracranial aneurysms on CTA and produce segmentations specifying regions of interest was developed to assist clinicians in the interpretation of CTA examinations for the diagnosis of intracranial aneurysms. Sensitivity, specificity, accuracy, time to diagnosis, and interrater agreement for clinicians with and without model augmentation were compared.

Methods

The Stanford University institutional review board approved this study. Owing to the retrospective nature of the study, patient consent or assent was waived. The Standards for Reporting of Diagnostic Accuracy (STARD) reporting guideline was used for the reporting of this study.

Data

A total of 9455 consecutive CTA examination reports of the head or head and neck performed between January 3, 2003, and May 31, 2017, at Stanford University Medical Center were retrospectively reviewed. Examinations with parenchymal hemorrhage, subarachnoid hemorrhage, posttraumatic or infectious pseudoaneurysm, arteriovenous malformation, ischemic stroke, nonspecific or chronic vascular findings such as intracranial atherosclerosis or other vasculopathies, surgical clips, coils, catheters, or other surgical hardware were excluded. Examinations of injuries that resulted from trauma or contained images degraded by motion were also excluded on visual review by a board-certified neuroradiologist with 12 years of experience. Examinations with nonruptured clinically significant aneurysms (>3 mm) were included.11

Radiologist Annotations

The reference standard for all examinations in the test set was determined by a board-certified neuroradiologist at a large academic practice with 12 years of experience who determined the presence of aneurysm by review of the original radiology report, double review of the CTA examination, and further confirmation of the aneurysm by diagnostic cerebral angiograms, if available. The neuroradiologist had access to all of the Digital Imaging and Communications in Medicine (DICOM) series, original reports, and clinical histories, as well as previous and follow-up examinations during interpretation to establish the best possible reference standard for the labels. For each of the aneurysm examinations, the radiologist also identified the location of each of the aneurysms. Using the open-source annotation software ITK-SNAP,12 the identified aneurysms were manually segmented on each slice.

Model Development

In this study, we developed a 3-dimensional (3-D) CNN called HeadXNet for segmentation of intracranial aneurysms from CT scans. Neural networks are parameterized functions structured as sequences of layers that learn successive levels of abstraction. Convolutional neural networks are a type of neural network designed to process image data, and 3-D CNNs are particularly well suited to sequences of images, or volumes.

HeadXNet is a CNN with an encoder-decoder structure (eFigure 1 in the Supplement), where the encoder maps a volume to an abstract low-resolution encoding and the decoder expands this encoding to a full-resolution segmentation volume. The segmentation volume is the same size as the corresponding study and specifies the probability of aneurysm for each voxel, the atomic unit of a 3-D volume, analogous to a pixel in a 2-D image. The encoder is adapted from a 50-layer SE-ResNeXt network,13-15 and the decoder is a sequence of 3 × 3 transposed convolutions. Similar to UNet,16 skip connections are used in 3 layers of the encoder to transmit outputs directly to the decoder. The encoder was pretrained on the Kinetics-600 data set,17 a large collection of YouTube videos labeled with human actions; after pretraining, the final 3 convolutional blocks and the 600-way softmax output layer were removed, and an atrous spatial pyramid pooling18 layer and the decoder were added in their place.

Training Procedure

Subvolumes of 16 slices were randomly sampled from volumes during training. The data set was preprocessed to find contours of the skull, and each volume was cropped around the skull in the axial plane before resizing each slice to 208 × 208 pixels. The slices were then cropped to 192 × 192 pixels (using random crops during training and centered crops during testing), resulting in a final input of size 16 × 192 × 192 per example; the same transformations were applied to the segmentation label. The segmentation output was trained to optimize a weighted combination of the voxelwise binary cross-entropy and Dice losses.19
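The sampling and cropping steps above can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the function name and random-number handling are assumptions, and the skull-contour crop and resize to 208 × 208 are assumed to have already been applied:

```python
import numpy as np

def sample_subvolume(volume, label, num_slices=16, crop=192, train=True, rng=None):
    """Sample a 16-slice subvolume and crop each slice to 192 x 192 pixels,
    applying identical spatial transforms to the segmentation label."""
    rng = rng or np.random.default_rng(0)
    depth, height, width = volume.shape
    # Random subvolume and crop offsets during training; centered at test time.
    if train:
        z0 = int(rng.integers(0, depth - num_slices + 1))
        y0 = int(rng.integers(0, height - crop + 1))
        x0 = int(rng.integers(0, width - crop + 1))
    else:
        z0 = (depth - num_slices) // 2
        y0, x0 = (height - crop) // 2, (width - crop) // 2
    window = (slice(z0, z0 + num_slices),
              slice(y0, y0 + crop),
              slice(x0, x0 + crop))
    return volume[window], label[window]
```

Applying the same window to volume and label keeps each voxel aligned with its segmentation target, which is essential for voxelwise losses.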

Before reaching the model, inputs were clipped to [−300, 700] Hounsfield units, normalized to [−1, 1], and zero-centered. The model was trained on 3 Titan Xp graphical processing units (GPUs) (NVIDIA) using a minibatch of 2 examples per GPU. The parameters of the model were optimized using a stochastic gradient descent optimizer with momentum of 0.9 and a peak learning rate of 0.1 for randomly initialized weights and 0.01 for pretrained weights. The learning rate was scheduled with a linear warm-up from 0 to the peak learning rate for 10 000 iterations, followed by cosine annealing20 over 300 000 iterations. Additionally, the learning rate was fixed at 0 for the first 10 000 iterations for the pretrained encoder. For regularization, L2 weight decay of 0.001 was added to the loss for all trainable parameters and stochastic depth dropout21 was used in the encoder blocks. Standard dropout was not used.
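The input normalization and learning-rate schedule described above can be expressed compactly. This is a sketch under two assumptions the paper does not state: the [−1, 1] mapping is linear about the window midpoint (200 HU), which also zero-centers the inputs, and the 300 000 annealing iterations follow the 10 000-iteration warm-up:

```python
import math
import numpy as np

def normalize_hu(volume):
    """Clip to [-300, 700] Hounsfield units and map linearly to [-1, 1]
    (midpoint 200 HU, half-range 500), which zero-centers the inputs."""
    clipped = np.clip(volume, -300.0, 700.0)
    return (clipped - 200.0) / 500.0

def learning_rate(step, peak=0.1, warmup=10_000, anneal=300_000):
    """Linear warm-up from 0 to the peak rate, then cosine annealing to 0."""
    if step < warmup:
        return peak * step / warmup
    progress = min(1.0, (step - warmup) / anneal)
    return 0.5 * peak * (1.0 + math.cos(math.pi * progress))
```

Per the paper, the peak would be 0.1 for randomly initialized weights and 0.01 for pretrained weights, with the pretrained encoder's rate held at 0 for the first 10 000 iterations.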

To control for class imbalance, 3 methods were used. First, an auxiliary loss was added after the encoder and focal loss was used to encourage larger parameter updates on misclassified positive examples. Second, abnormal training examples were sampled more frequently than normal examples such that abnormal examples made up 30% of training iterations. Third, parameters of the decoder were not updated on training iterations where the segmentation label consisted of purely background (normal) voxels.
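For the first method, a focal loss down-weights voxels the model already classifies well so that misclassified positives dominate the gradient. A minimal sketch; the focusing parameter γ = 2 is an assumed value, not one reported in the paper:

```python
import numpy as np

def focal_loss(prob, target, gamma=2.0, eps=1e-7):
    """Voxelwise focal loss: cross-entropy scaled by (1 - p_t)^gamma, where
    p_t is the predicted probability of the true class. gamma is an assumed
    default; the paper does not report its value."""
    prob = np.clip(prob, eps, 1.0 - eps)
    p_t = np.where(target == 1, prob, 1.0 - prob)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))
```

A confidently correct voxel (p_t near 1) contributes almost nothing, while a misclassified positive (p_t near 0) keeps nearly its full cross-entropy weight.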

To produce a segmentation prediction for the entire volume, the segmentation outputs for sequential 16-slice subvolumes were simply concatenated. If the number of slices was not divisible by 16, the last input volume was padded with 0s and the corresponding output volume was truncated back to the original size.
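The stitching step above can be sketched as follows (a NumPy illustration; `predict_fn` stands in for the trained 16-slice network):

```python
import numpy as np

def stitch_predictions(volume, predict_fn, num_slices=16):
    """Segment a full volume by running a fixed-depth model over sequential
    16-slice subvolumes and concatenating the outputs. If the slice count is
    not divisible by 16, the last input is zero-padded and the corresponding
    output is truncated back to the original depth."""
    depth = volume.shape[0]
    pad = (-depth) % num_slices
    if pad:
        padding = np.zeros((pad,) + volume.shape[1:], dtype=volume.dtype)
        volume = np.concatenate([volume, padding], axis=0)
    outputs = [predict_fn(volume[i:i + num_slices])
               for i in range(0, volume.shape[0], num_slices)]
    return np.concatenate(outputs, axis=0)[:depth]
```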

Study Design

We performed a diagnostic accuracy study comparing performance metrics of clinicians with and without model augmentation. Each of the 8 clinicians participating in the study diagnosed a test set of 115 examinations, once with and once without assistance of the model. The clinicians were blinded to the original reports, clinical histories, and follow-up imaging examinations. Using a crossover design, the clinicians were randomly and equally divided into 2 groups. Within each group, examinations were sorted in a fixed random order for half of the group and sorted in reverse order for the other half. Group 1 first read the examinations without model augmentation, and group 2 first read the examinations with model augmentation. After a washout period of 14 days, the augmentation arrangement was reversed such that group 1 performed reads with model augmentation and group 2 read the examinations without model augmentation (Figure 1A).

Clinicians were instructed to assign a binary label for the presence or absence of at least 1 clinically significant aneurysm, defined as having a diameter greater than 3 mm. Clinicians read alone in a diagnostic reading room, all using the same high-definition monitor (3840 × 2160 pixels) displaying CTA examinations on a standard open-source DICOM viewer (Horos).22 Clinicians entered their labels into a data entry software application that automatically logged the time difference between labeling of the previous examination and the current examination.

When reading with model augmentation, clinicians were provided the model’s predictions in the form of region-of-interest (ROI) segmentations overlaid directly on the CTA examinations. To ensure an image display interface familiar to all clinicians, the model’s predictions were presented as ROIs in standard DICOM viewing software. At every voxel where the model predicted a probability greater than 0.5, readers saw a semiopaque red overlay on the axial, sagittal, and coronal series (Figure 1C). Readers had access to the ROIs immediately on loading the examinations, and the ROIs could be toggled off to reveal the unaltered CTA images (Figure 1B). The red overlays were the only indication of whether the model had predicted that a particular CTA examination contained an aneurysm; readers had the option to take the model’s predictions into consideration or to disregard them based on clinical judgment. When readers performed diagnoses without augmentation, no ROIs were present on any of the examinations. Otherwise, the diagnostic tools were identical for augmented and nonaugmented reads.

 

Statistical Analysis

On the binary task of determining whether an examination contained an aneurysm, sensitivity, specificity, and accuracy were used to assess the performance of clinicians with and without model augmentation. Sensitivity is the number of true-positive results over the total number of aneurysm-positive cases, specificity is the number of true-negative results over the total number of aneurysm-negative cases, and accuracy is the number of true-positive and true-negative results over all test cases. The microaverage of these statistics across all clinicians was also computed, by calculating each statistic from the pooled counts of true-positive, true-negative, false-positive, and false-negative results. To convert the model’s segmentation output into a binary prediction, an examination was considered positive if the model predicted at least 1 voxel as belonging to an aneurysm and negative otherwise. The 95% Wilson score confidence intervals were used to assess the variability in the estimates for sensitivity, specificity, and accuracy.23
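These definitions reduce to a few lines; a sketch with illustrative names:

```python
import numpy as np

def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

def examination_is_positive(prob_volume, threshold=0.5):
    """Binarize the model's segmentation output: an examination counts as
    positive if any voxel's predicted aneurysm probability exceeds the
    threshold, negative otherwise."""
    return bool((prob_volume > threshold).any())
```

Microaveraging corresponds to summing tp, fp, tn, and fn across clinicians before applying `binary_metrics`, rather than averaging the per-clinician metrics.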

To assess whether the clinicians achieved significant increases in performance with model augmentation, a 1-tailed t test was performed on the differences in sensitivity, specificity, and accuracy across all 8 clinicians. To determine the robustness of the findings and whether results were due to inclusion of the resident radiologist and neurosurgeon, we performed a sensitivity analysis: we computed the t test on the differences in sensitivity, specificity, and accuracy across board-certified radiologists only.

The average time to diagnosis for the clinicians with and without augmentation was computed as the difference between the mean entry times into the spreadsheet of consecutive diagnoses; 95% t score confidence intervals were used to assess the variability in the estimates. To account for interruptions in the clinical read or time logging errors, the 5 longest and 5 shortest time to diagnosis for each clinician in each reading were excluded. To assess whether model augmentation significantly decreased the time to diagnosis, a 1-tailed t test was performed on the difference in average time with and without augmentation across all 8 clinicians.
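The exclusion rule above amounts to a symmetric trimmed mean (a sketch; the function name is illustrative):

```python
def trimmed_mean_time(times, k=5):
    """Mean time to diagnosis after excluding the k longest and k shortest
    reads, guarding against interruptions and time-logging errors."""
    if len(times) <= 2 * k:
        raise ValueError("need more than 2 * k observations")
    ordered = sorted(times)
    kept = ordered[k:len(ordered) - k]
    return sum(kept) / len(kept)
```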

The interrater agreement of clinicians, and of the radiologist subset, was computed using the exact Fleiss κ.24 To assess whether model augmentation increased interrater agreement, a 1-tailed permutation test was performed on the difference between the interrater agreement of clinicians on the test set with and without augmentation. The permutation procedure consisted of randomly swapping clinician annotations with and without augmentation, so that a random subset of the test set previously labeled as read with augmentation was relabeled as read without augmentation, and vice versa; the exact Fleiss κ values (and their difference) were then computed on the test set with permuted labels. This procedure was repeated 10 000 times to generate the distribution of the Fleiss κ difference under the null hypothesis (that interrater agreement with augmentation is no higher than without augmentation), and the unadjusted P value was calculated as the proportion of permuted Fleiss κ differences that were higher than the observed difference.
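The permutation scheme can be sketched as follows; `stat_fn` stands in for the exact Fleiss κ computation, which is assumed to be available elsewhere, and the ≥ comparison is a common convention for this one-tailed test:

```python
import random

def permutation_pvalue(stat_fn, with_aug, without_aug, n_perm=10_000, seed=0):
    """One-tailed permutation test for stat_fn(with_aug) > stat_fn(without_aug).
    Each element of with_aug/without_aug holds all clinician annotations for
    one examination; randomly swapping a pair simulates the null hypothesis
    that augmentation does not change agreement."""
    rng = random.Random(seed)
    observed = stat_fn(with_aug) - stat_fn(without_aug)
    exceed = 0
    for _ in range(n_perm):
        perm_a, perm_b = [], []
        for a, b in zip(with_aug, without_aug):
            if rng.random() < 0.5:
                a, b = b, a  # swap this examination's annotations
            perm_a.append(a)
            perm_b.append(b)
        if stat_fn(perm_a) - stat_fn(perm_b) >= observed:
            exceed += 1
    return exceed / n_perm
```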

To account for multiple hypothesis testing, the Benjamini-Hochberg correction, which controls the false discovery rate, was applied; a Benjamini-Hochberg–adjusted P ≤ .05 indicated statistical significance. All tests were 1-tailed.25
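The Benjamini-Hochberg step-up adjustment can be sketched in a few lines:

```python
def benjamini_hochberg(p_values):
    """Benjamini-Hochberg adjusted p-values: sort ascending, scale the
    p-value at rank i by m/i, then enforce monotonicity from the largest
    rank down. A hypothesis is significant if its adjusted value is <= .05."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):
        idx = order[rank - 1]
        running_min = min(running_min, p_values[idx] * m / rank)
        adjusted[idx] = running_min
    return adjusted
```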

Results

The data set contained 818 examinations from 662 unique patients, with 328 CTA examinations (40.1%) containing at least 1 intracranial aneurysm and 490 examinations (59.9%) without intracranial aneurysms (Figure 2). Of the 328 aneurysm cases, 20 cases from 15 unique patients contained 2 or more aneurysms. A total of 148 aneurysm cases contained aneurysms between 3 mm and 7 mm, 108 cases had aneurysms between 7 mm and 12 mm, 61 cases had aneurysms between 12 mm and 24 mm, and 11 cases had aneurysms 24 mm or greater. Aneurysm locations were distributed as follows: 99 in the internal carotid artery, 78 in the middle cerebral artery, 50 in the cavernous internal carotid artery, 44 at the basilar tip, 41 in the anterior communicating artery, 18 in the posterior communicating artery, 16 in the vertebrobasilar system, and 12 in the anterior cerebral artery. All examinations were performed on a GE Discovery, GE LightSpeed, GE Revolution, Siemens Definition, Siemens Sensation, or Siemens Force scanner, with slice thicknesses of 1.0 mm or 1.25 mm, using standard clinical protocols for head or head/neck angiograms. There was no difference in protocols or slice thicknesses between the aneurysm and nonaneurysm examinations. For this study, axial series were extracted from each examination and a segmentation label was produced on every axial slice containing an aneurysm. The number of images per examination ranged from 113 to 802 (mean [SD], 373 [157]).

The examinations were split into a training set of 611 examinations (494 patients; mean [SD] age, 55.8 [18.1] years; 372 [60.9%] female) used to train the model, a development set of 92 examinations (86 patients; mean [SD] age, 61.6 [16.7] years; 59 [64.1%] female) used for model selection, and a test set of 115 examinations (82 patients; mean [SD] age, 57.8 [18.3] years; 74 [64.4%] female) to evaluate the performance of the clinicians when augmented with the model (Figure 2).

Using stratified random sampling, the development and test sets were formed to include 50% aneurysm examinations and 50% normal examinations; the remaining examinations composed the training set, of which 36.5% were aneurysm examinations. Forty-three patients had multiple examinations in the data set due to examinations performed for follow-up of the aneurysm. To account for these repeat patients, examinations were split so that there was no patient overlap between the different sets. Figure 2 contains pathology and patient demographic characteristics for each set.

A total of 8 clinicians, including 6 board-certified practicing radiologists, 1 practicing neurosurgeon, and 1 radiology resident, participated as readers in the study. The radiologists’ years of experience ranged from 3 to 12 years, the neurosurgeon had 2 years of experience as attending, and the resident was in the second year of training at Stanford University Medical Center. Groups 1 and 2 consisted of 3 radiologists each; the resident and neurosurgeon were both in group 1. None of the clinicians were involved in establishing the reference standard for the examinations.

Without augmentation, clinicians achieved a microaveraged sensitivity of 0.831 (95% CI, 0.794-0.862), specificity of 0.960 (95% CI, 0.937-0.974), and an accuracy of 0.893 (95% CI, 0.872-0.912). With augmentation, the clinicians achieved a microaveraged sensitivity of 0.890 (95% CI, 0.858-0.915), specificity of 0.975 (95% CI, 0.957-0.986), and an accuracy of 0.932 (95% CI, 0.913-0.946). The underlying model had a sensitivity of 0.949 (95% CI, 0.861-0.983), specificity of 0.661 (95% CI, 0.530-0.771), and accuracy of 0.809 (95% CI, 0.727-0.870). The performances of the model, individual clinicians, and their microaverages are reported in eTable 1 in the Supplement.

 

With augmentation, there was a statistically significant increase in the mean sensitivity (0.059; 95% CI, 0.028-0.091; adjusted P = .01) and mean accuracy (0.038; 95% CI, 0.014-0.062; adjusted P = .02) of the clinicians as a group. There was no statistically significant change in mean specificity (0.016; 95% CI, −0.010 to 0.041; adjusted P = .16). Performance improvements across clinicians are detailed in the Table, and individual clinician improvement in Figure 3.

Individual performances with and without model augmentation are shown in eTable 1 in the Supplement. The sensitivity analysis confirmed that even among board-certified radiologists, there was a statistically significant increase in mean sensitivity (0.059; 95% CI, 0.013-0.105; adjusted P = .04) and accuracy (0.036; 95% CI, 0.001-0.072; adjusted P = .05). Performance improvements of board-certified radiologists as a group are shown in eTable 2 in the Supplement.

 

The mean diagnosis time per examination without augmentation microaveraged across clinicians was 57.04 seconds (95% CI, 54.58-59.50 seconds). The times for individual clinicians are detailed in eTable 3 in the Supplement, and individual time changes are shown in eFigure 2 in the Supplement.

 

With augmentation, there was no statistically significant decrease in mean diagnosis time (5.71 seconds; 95% CI, −7.22 to 18.63 seconds; adjusted P = .19). The model took a mean of 7.58 seconds (95% CI, 6.92-8.25 seconds) to process an examination and output its segmentation map.

Confusion matrices, which are tables reporting the true- and false-positive and true- and false-negative results of each clinician with and without model augmentation, are shown in eTable 4 in the Supplement.

There was a statistically significant increase of 0.060 (adjusted P = .05) in the interrater agreement among the clinicians, with an exact Fleiss κ of 0.799 without augmentation and 0.859 with augmentation. For the board-certified radiologists, there was an increase of 0.063 in their interrater agreement, with an exact Fleiss κ of 0.783 without augmentation and 0.847 with augmentation.

Discussion

In this study, the ability of a deep learning model to augment clinician performance in detecting cerebral aneurysms on CTA was investigated with a crossover study design. With model augmentation, clinicians’ sensitivity, accuracy, and interrater agreement significantly increased; there was no statistically significant change in specificity or time to diagnosis.

Given the potentially catastrophic outcome of a missed aneurysm at risk of rupture, an automated detection tool that reliably enhances clinicians’ performance is highly desirable. Aneurysm rupture is fatal in 40% of patients and leads to irreversible neurological disability in two-thirds of those who survive; accurate and timely detection is therefore of paramount importance. In addition to significantly improving accuracy across clinicians interpreting CTA examinations, an automated aneurysm detection tool such as the one presented in this study could also be used to prioritize workflow so that examinations more likely to be positive receive timely expert review, potentially leading to shorter time to treatment and more favorable outcomes.

The significant variability among clinicians in the diagnosis of aneurysms has been well documented and is typically attributed to lack of experience or subspecialty neuroradiology training, complex neurovascular anatomy, or the labor-intensive nature of identifying aneurysms. Studies have shown that interrater agreement of CTA-based aneurysm detection is highly variable, with interrater reliability metrics ranging from 0.37 to 0.85,6,7,26-28 and performance levels that vary with aneurysm size and individual radiologist experience.4,6 In addition to significantly increasing sensitivity and accuracy, augmenting clinicians with the model also significantly improved interrater reliability from 0.799 to 0.859. This implies that augmenting clinicians of varying experience levels and specialties with models could lead to more accurate and more consistent radiological interpretations.

Currently, tools to improve clinician aneurysm detection on CTA include bone subtraction,29 as well as 3-D rendering of intracranial vasculature,30-32 which rely on contrast threshold settings to better delineate cerebral vasculature and create a 3-D rendered reconstruction that assists aneurysm detection. However, these tools are labor- and time-intensive for clinicians; in some institutions, this process is outsourced to a 3-D lab at additional cost. The tool developed in this study, integrated directly into a standard DICOM viewer, produces a segmentation map for a new examination in only a few seconds. If integrated into the standard workflow, this diagnostic tool could substantially decrease both cost and time to diagnosis, potentially leading to more efficient treatment and more favorable patient outcomes.

Deep learning has recently shown success in various clinical image-based recognition tasks. In particular, studies have shown strong performance of 2-D CNNs in detecting intracranial hemorrhage and other acute brain findings, such as mass effect or skull fractures, on head CT examinations.33-36 Recently, one study10 examined the potential role of deep learning in magnetic resonance angiogram–based detection of cerebral aneurysms, and another study37 showed that providing deep learning model predictions to clinicians interpreting knee magnetic resonance studies increased specificity in detecting anterior cruciate ligament tears. To our knowledge, prior to this study, deep learning had not been applied to CTA, the first-line imaging modality for detecting cerebral aneurysms. Our results demonstrate that deep learning segmentation models may produce dependable and interpretable predictions that augment clinicians and improve their diagnostic performance. The model implemented and tested in this study significantly increased the sensitivity, accuracy, and interrater reliability of clinicians with varied experience and specialties in detecting cerebral aneurysms on CTA.

Limitations

This study has limitations. First, because the study focused only on nonruptured aneurysms, model performance on aneurysm detection after rupture, on lesion recurrence after coil or surgical clipping, or on aneurysms associated with arteriovenous malformations has not been investigated. Second, because examinations containing surgical hardware or devices were excluded, model performance in their presence is unknown. Third, in a clinical environment CTA is typically used to evaluate many types of vascular disease, not just aneurysms; the high prevalence of aneurysms in the test set and the clinicians’ binary task could therefore have introduced interpretation bias. Finally, this study was performed on data from a single tertiary care academic institution and may not reflect performance on data from other institutions with different scanners and imaging protocols, such as different slice thicknesses.

Conclusions

A deep learning model was developed to automatically detect clinically significant intracranial aneurysms on CTA. We found that model augmentation significantly improved clinicians’ sensitivity, accuracy, and interrater reliability. Future work should investigate the performance of this model prospectively and in application to data from other institutions and hospitals.

Article Information

Accepted for Publication: April 23, 2019.

Published: June 7, 2019. doi:10.1001/jamanetworkopen.2019.5600

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2019 Park A et al. JAMA Network Open.

Corresponding Author: Kristen W. Yeom, MD, School of Medicine, Department of Radiology, Stanford University, 725 Welch Rd, Ste G516, Palo Alto, CA 94304 (kyeom@stanford.edu).

Author Contributions: Ms Park and Dr Yeom had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Ms Park and Messrs Chute and Rajpurkar are co–first authors. Drs Ng and Yeom are co–senior authors.

Concept and design: Park, Chute, Rajpurkar, Lou, Shpanskaya, Ni, Basu, Lungren, Ng, Yeom.

Acquisition, analysis, or interpretation of data: Park, Chute, Rajpurkar, Lou, Ball, Shpanskaya, Jabarkheel, Kim, McKenna, Tseng, Ni, Wishah, Wittber, Hong, Wilson, Halabi, Patel, Lungren, Yeom.

Drafting of the manuscript: Park, Chute, Rajpurkar, Lou, Ball, Jabarkheel, Kim, McKenna, Hong, Halabi, Lungren, Yeom.

Critical revision of the manuscript for important intellectual content: Park, Chute, Rajpurkar, Ball, Shpanskaya, Jabarkheel, Kim, Tseng, Ni, Wishah, Wittber, Wilson, Basu, Patel, Lungren, Ng, Yeom.

Statistical analysis: Park, Chute, Rajpurkar, Lou, Ball, Lungren.

Administrative, technical, or material support: Park, Chute, Shpanskaya, Jabarkheel, Kim, McKenna, Tseng, Wittber, Hong, Wilson, Lungren, Ng, Yeom.

Supervision: Park, Ball, Tseng, Halabi, Basu, Lungren, Ng, Yeom.

Conflict of Interest Disclosures: Drs Wishah and Patel reported grants from GE and Siemens outside the submitted work. Dr Patel reported participation in the speakers bureau for GE. Dr Lungren reported personal fees from Nines Inc outside the submitted work. Dr Yeom reported grants from Philips outside the submitted work. No other disclosures were reported.

Funding/Support: This work was supported by National Institutes of Health National Center for Advancing Translational Science Clinical and Translational Science Award UL1TR001085.

Role of the Funder/Sponsor: The National Institutes of Health had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Disclaimer: The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References

1. Jaja BN, Cusimano MD, Etminan N, et al. Clinical prediction models for aneurysmal subarachnoid hemorrhage: a systematic review. Neurocrit Care. 2013;18(1):143-153. doi:10.1007/s12028-012-9792-z
2. Turan N, Heider RA, Roy AK, et al. Current perspectives in imaging modalities for the assessment of unruptured intracranial aneurysms: a comparative analysis and review. World Neurosurg. 2018;113:280-292. doi:10.1016/j.wneu.2018.01.054
3. Yoon NK, McNally S, Taussky P, Park MS. Imaging of cerebral aneurysms: a clinical perspective. Neurovasc Imaging. 2016;2(1):6. doi:10.1186/s40809-016-0016-3
4. Jayaraman MV, Mayo-Smith WW, Tung GA, et al. Detection of intracranial aneurysms: multi-detector row CT angiography compared with DSA. Radiology. 2004;230(2):510-518. doi:10.1148/radiol.2302021465
5. Bharatha A, Yeung R, Durant D, et al. Comparison of computed tomography angiography with digital subtraction angiography in the assessment of clipped intracranial aneurysms. J Comput Assist Tomogr. 2010;34(3):440-445. doi:10.1097/RCT.0b013e3181d27393
6. Lubicz B, Levivier M, François O, et al. Sixty-four-row multisection CT angiography for detection and evaluation of ruptured intracranial aneurysms: interobserver and intertechnique reproducibility. AJNR Am J Neuroradiol. 2007;28(10):1949-1955. doi:10.3174/ajnr.A0699
7. White PM, Teasdale EM, Wardlaw JM, Easton V. Intracranial aneurysms: CT angiography and MR angiography for detection: prospective blinded comparison in a large patient cohort. Radiology. 2001;219(3):739-749. doi:10.1148/radiology.219.3.r01ma16739
8. Suzuki K. Overview of deep learning in medical imaging. Radiol Phys Technol. 2017;10(3):257-273. doi:10.1007/s12194-017-0406-5
9. Rajpurkar P, Irvin J, Ball RL, et al. Deep learning for chest radiograph diagnosis: a retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med. 2018;15(11):e1002686. doi:10.1371/journal.pmed.1002686
10. Bien N, Rajpurkar P, Ball RL, et al. Deep-learning-assisted diagnosis for knee magnetic resonance imaging: development and retrospective validation of MRNet. PLoS Med. 2018;15(11):e1002699. doi:10.1371/journal.pmed.1002699
11. Morita A, Kirino T, Hashi K, et al; UCAS Japan Investigators. The natural course of unruptured cerebral aneurysms in a Japanese cohort. N Engl J Med. 2012;366(26):2474-2482. doi:10.1056/NEJMoa1113260
12. Yushkevich PA, Piven J, Hazlett HC, et al. User-guided 3D active contour segmentation of anatomical structures: significantly improved efficiency and reliability. Neuroimage. 2006;31(3):1116-1128. doi:10.1016/j.neuroimage.2006.01.015
13. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. Paper presented at: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; June 27, 2016; Las Vegas, NV.
14. Xie S, Girshick R, Dollár P, Tu Z, He K. Aggregated residual transformations for deep neural networks. Paper presented at: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); July 25, 2017; Honolulu, HI.
15. Hu J, Shen L, Sun G. Squeeze-and-excitation networks. Paper presented at: 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); June 21, 2018; Salt Lake City, UT.
16. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Basel, Switzerland: Springer International; 2015:234-241.
17. Carreira J, Zisserman A. Quo vadis, action recognition? A new model and the Kinetics dataset. Paper presented at: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); July 25, 2017; Honolulu, HI.
18. Chen L-C, Papandreou G, Schroff F, Adam H. Rethinking atrous convolution for semantic image segmentation. https://arxiv.org/abs/1706.05587. Published June 17, 2017. Accessed May 7, 2019.
19. Milletari F, Navab N, Ahmadi S-A. V-Net: fully convolutional neural networks for volumetric medical image segmentation. Paper presented at: 2016 IEEE Fourth International Conference on 3D Vision (3DV); October 26-28, 2016; Stanford, CA.
20. Loshchilov I, Hutter F. SGDR: stochastic gradient descent with warm restarts. Paper presented at: 2017 Fifth International Conference on Learning Representations; April 24-26, 2017; Toulon, France.
21. Huang G, Sun Y, Liu Z, Sedra D, Weinberger KQ. Deep networks with stochastic depth. In: European Conference on Computer Vision. Basel, Switzerland: Springer International; 2016:646-661.
22. Horos. https://horosproject.org. Accessed May 1, 2019.
23. Wilson EB. Probable inference, the law of succession, and statistical inference. J Am Stat Assoc. 1927;22(158):209-212. doi:10.1080/01621459.1927.10502953
24. Fleiss JL, Cohen J. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educ Psychol Meas. 1973;33(3):613-619. doi:10.1177/001316447303300309
25. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Series B Stat Methodol. 1995;57(1):289-300.
26. Maldaner N, Stienen MN, Bijlenga P, et al. Interrater agreement in the radiologic characterization of ruptured intracranial aneurysms based on computed tomography angiography. World Neurosurg. 2017;103:876-882.e1. doi:10.1016/j.wneu.2017.04.131
27. Wang Y, Gao X, Lu A, et al. Residual aneurysm after metal coils treatment detected by spectral CT. Quant Imaging Med Surg. 2012;2(2):137-138.
28. Yoon YW, Park S, Lee SH, et al. Post-traumatic myocardial infarction complicated with left ventricular aneurysm and pericardial effusion. J Trauma. 2007;63(3):E73-E75. doi:10.1097/01.ta.0000246896.89156.70
29. Tomandl BF, Hammen T, Klotz E, Ditt H, Stemper B, Lell M. Bone-subtraction CT angiography for the evaluation of intracranial aneurysms. AJNR Am J Neuroradiol. 2006;27(1):55-59.
30. Shi W-Y, Li Y-D, Li M-H, et al. 3D rotational angiography with volume rendering: the utility in the detection of intracranial aneurysms. Neurol India. 2010;58(6):908-913. doi:10.4103/0028-3886.73743
31. Lin N, Ho A, Gross BA, et al. Differences in simple morphological variables in ruptured and unruptured middle cerebral artery aneurysms. J Neurosurg. 2012;117(5):913-919. doi:10.3171/2012.7.JNS111766
32. Villablanca JP, Jahan R, Hooshi P, et al. Detection and characterization of very small cerebral aneurysms by using 2D and 3D helical CT angiography. AJNR Am J Neuroradiol. 2002;23(7):1187-1198.
33. Chang PD, Kuoy E, Grinband J, et al. Hybrid 3D/2D convolutional neural network for hemorrhage evaluation on head CT. AJNR Am J Neuroradiol. 2018;39(9):1609-1616. doi:10.3174/ajnr.A5742
34. Chilamkurthy S, Ghosh R, Tanamala S, et al. Deep learning algorithms for detection of critical findings in head CT scans: a retrospective study. Lancet. 2018;392(10162):2388-2396. doi:10.1016/S0140-6736(18)31645-3
35. Jnawali K, Arbabshirani MR, Rao N, Patel AA. Deep 3D convolution neural network for CT brain hemorrhage classification. Paper presented at: Medical Imaging 2018: Computer-Aided Diagnosis; February 27, 2018; Houston, TX. doi:10.1117/12.2293725
36. Titano JJ, Badgeley M, Schefflein J, et al. Automated deep-neural-network surveillance of cranial images for acute neurologic events. Nat Med. 2018;24(9):1337-1341. doi:10.1038/s41591-018-0147-y
37. Ueda D, Yamamoto A, Nishimori M, et al. Deep learning for MR angiography: automated detection of cerebral aneurysms. Radiology. 2019;290(1):187-194.

Read Full Post »


Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence: Realizing Precision Medicine One Patient at a Time

Reporter: Stephen J Williams, PhD @StephenJWillia2

The impact of Machine Learning (ML) and Artificial Intelligence (AI) during the last decade has been tremendous. With the rise of infobesity, ML/AI is evolving into an essential capability for mining the sheer volume of patient genomic, omic, sensor/wearable, and real-world data, and for unraveling healthcare’s most complex questions.

Despite the advancements in technology, organizations struggle to prioritize and implement ML/AI to achieve the anticipated value, whilst managing the disruption that comes with it. In this session, panelists will discuss ML/AI implementation and adoption strategies that work. Panelists will draw upon their experiences as they share their success stories, discuss how to implement digital diagnostics, track disease progression and treatment, and increase commercial value and ROI compared against traditional approaches.

  • Most trials to date are still training AI/ML algorithms on training data sets. The best results so far have been about 80% accuracy on training sets, which needs to improve.
  • All data sets can be biased. For example, a professor measuring heart rate with an IR detector on a wearable found that different skin types generate different signals, so training sets may carry population bias (the data come from only one group).
  • Clinical-grade equipment often has not been trained on data sets as large as those used for commercial wearables, which are tested on larger study populations. This can affect the AI/ML algorithms.
  • Regulations: which regulatory body is responsible is still up for debate. Whether the FDA or the FTC oversees AI/ML in healthcare and health IT has not been fully decided, and guidances for these new technologies do not yet exist.
  • Some rules: never use your own encryption; always use industry standards, especially when collecting personal data from wearables. One hospital corrupted its system because its computers were not up to date and could not protect against a virus transmitted by a wearable.
  • Pharmaceutical companies understand they need to increase the value of their products, so they are very interested in how AI/ML can be used.

Please follow LIVE on TWITTER using the following @ handles and # hashtags:

@Handles

@pharma_BI

@AVIVA1950

@BIOConvention

# Hashtags

#BIO2019 (official meeting hashtag)

Read Full Post »


Google and Verily Use AI to Screen for Diabetic Retinopathy

Reporter: Irina Robu, PhD

Google and Verily, the life science research organization under Alphabet, have designed a machine learning algorithm to better screen for diabetes and associated eye diseases. Google and Verily believe the algorithm can be beneficial in areas lacking optometrists.

The algorithm is being integrated for the first time in a clinical setting at Aravind Eye Hospital in Madurai, India where it is designed to screen for diabetic retinopathy and diabetic macular edema. After a patient is imaged by trained staff using a fundus camera, the image is uploaded to the screening algorithm through management software. The algorithm then analyzes the images for the diabetic eye diseases before returning the results.

Numerous AI-driven approaches have lately been effective in detecting diabetic retinopathy with high accuracy. An AI-based grading system was able to effectively diagnose two patients with the disease. Furthermore, an AI-driven approach for detecting an early sign of diabetic retinopathy attained an accuracy rate of more than 98 percent.

According to R. Usha Kim, chief of retina services at the Aravind Eye Hospital, the algorithm permits physicians to work closely with patients on the treatment and management of their disease while increasing the volume of screenings that can be performed. Automated grading of diabetic retinopathy has potential benefits such as increased efficiency, reproducibility, and coverage of screening programs, as well as improved patient outcomes through early detection and treatment.

Although the technology sounds promising, current research shows there is a long way to go before it can transfer directly from the lab into the clinic.

SOURCE
https://www.healthcareitnews.com/news/google-verily-using-ai-screen-diabetic-retinopathy-india

Read Full Post »


Artificial intelligence can be a useful tool to predict Alzheimer’s disease

Reporter: Irina Robu, PhD

The Alzheimer’s Association estimates that around 5.7 million people live with Alzheimer’s disease in the United States, a number projected to rise to almost 14 million by 2050. Earlier diagnosis would not only benefit those affected; it could also collectively save about $7.9 trillion in medical care and related costs over time. As Alzheimer’s disease progresses, it changes how brain cells use glucose. This alteration in glucose metabolism shows up in a type of PET imaging that tracks the uptake of a radioactive form of glucose called 18F-fluorodeoxyglucose (FDG). By giving instructions about what to look for, the scientists were able to train the deep learning algorithm to assess the PET images for early signs of Alzheimer’s.
The researchers from the University of California, San Francisco used positron emission tomography (PET) images of the brains of 1,002 people to train the deep learning algorithm they developed. They used 90 percent of the images to teach the algorithm to spot features of Alzheimer’s disease and the remaining 10 percent to verify its performance. They then tested the algorithm on PET images of the brains of 40 other people and were able to predict which individuals would receive a final diagnosis of Alzheimer’s. On average, those individuals were diagnosed with the disease more than 6 years after the scans.
In Radiology, the journal in which the research was published, the team describes how the algorithm “achieved 82 percent specificity at 100 percent sensitivity, an average of 75.8 months prior to the final diagnosis.” The researchers taught the algorithm with the help of 2,109 PET images of 1,002 individuals’ brains. Because the algorithm uses deep learning, it can “teach itself” what to look for by spotting subtle differences among the thousands of images. The algorithm was as good as, if not better than, human experts at analyzing the FDG PET images.
Future advances will involve using larger data sets and additional images taken over time from people at various clinics and institutions. In the future, the algorithm could be a beneficial addition to the radiologist’s toolbox and advance opportunities for the early treatment of Alzheimer’s disease.
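The specificity and sensitivity figures quoted above come from a standard confusion-matrix calculation. A minimal sketch in Python, with invented labels rather than the study’s actual data:

```python
# Hedged sketch: how sensitivity and specificity (the metrics quoted for the
# Alzheimer's PET algorithm) are computed from predictions and ground truth.
# The labels below are illustrative, not the study's data.

def sensitivity_specificity(y_true, y_pred):
    """y_true/y_pred: lists of 0 (no disease) or 1 (disease)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0]   # catches every true case, one false alarm
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 1.0 (100% sensitivity), 0.8 (80% specificity)
```

A classifier tuned for screening, as in the study, trades some specificity (false alarms) for very high sensitivity (missing no cases).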

Source

https://www.medicalnewstoday.com/articles/323608.php


Read Full Post »


2,000 human brains yield clues to how genes raise risk for mental illnesses

Reporter: Irina Robu, PhD

It’s one thing to detect sites in the genome associated with mental disorders; it’s quite another to discover the biological mechanisms by which these changes in DNA act in the human brain to raise risk. In their first concerted effort to tackle the problem, 15 collaborating research teams of the National Institutes of Health-funded PsychENCODE Consortium analyzed data from some 2,000 human brains for clues to how genes raise risk for mental illnesses.
Applying newly uncovered secrets of the brain’s molecular architecture, they established an artificial intelligence model that is six times better than preceding ones at predicting risk for mental disorders. They also identified several hundred previously unknown risk genes for mental illnesses and linked many known risk variants to specific genes. In the brain tissue and single cells, the researchers identified patterns of gene expression, marks in gene regulation as well as genetic variants that can be linked to mental illnesses.
Dr. Nenad Sestan of Yale University explained that “the consortium’s integrative genomic analyses elucidate the mechanisms by which cellular diversity and patterns of gene expression change throughout development and reveal how neuropsychiatric risk genes are concentrated into distinct co-expression modules and cell types.” The implicated variants are typically small-effect genetic variations that fall within regions of the genome that do not code for proteins but instead are thought to regulate gene expression and other aspects of gene function.
In addition to the 2,000 postmortem human brains, the researchers examined brain tissue from prenatal development and from people with schizophrenia, bipolar disorder, and typical development, and compared their findings with parallel data from non-human primates.

Their findings indicate that gene variants linked to mental illnesses exert greater effects when they jointly form “modules” of co-expressed genes with related functions, and at specific developmental time points that seem to coincide with the course of illness. Variability in risk gene expression and cell types increases during formative stages of early prenatal development and again during the teen years. In postmortem brains of people with a mental illness, thousands of RNAs were found to have anomalies.

According to Geetha Senthil of NIMH, the multi-omic data resource created by the PsychENCODE collaboration will pave the way for building molecular models of disease and developmental processes, and may offer a platform for target identification in pharmaceutical research.

Source
https://www.nih.gov/news-events/news-releases/2000-human-brains-yield-clues-how-genes-raise-risk-mental-illnesses

Read Full Post »


Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare?

Curator: Stephen J. Williams, Ph.D.

Updated 12/18/2018

In the effort to reduce healthcare costs, increase accessibility of services for patients, and drive biomedical innovation, many healthcare and biotechnology professionals have looked to advances in digital technology to determine how IT can drive and extract greater value from the healthcare industry.  Two areas of recent interest have focused on how best to use blockchain and artificial intelligence technologies to drive greater efficiencies in the healthcare and biotechnology industries.

More importantly, with the substantial increase in ‘omic data generated both in research and in the clinical setting, it has become imperative to develop ways to securely store and disseminate massive amounts of ‘omic data to the relevant parties (researchers or clinicians) in an efficient manner, while protecting personal privacy and adhering to international regulations.  This is where blockchain technologies may play an important role.

A recent Oncotarget paper by Mamoshina et al. (1) discussed the possibility that next-generation artificial intelligence and blockchain technologies could synergize to accelerate biomedical research and enable patients new tools to control and profit from their personal healthcare data, and assist patients with their healthcare monitoring needs. According to the abstract:

The authors introduce new concepts to appraise and evaluate personal records, including the combination-, time- and relationship value of the data.  They also present a roadmap for a blockchain-enabled decentralized personal health data ecosystem to enable novel approaches for drug discovery, biomarker development, and preventative healthcare.  In this system, blockchain and deep learning technologies would provide the secure and transparent distribution of personal data in a healthcare marketplace, and would also be useful to resolve challenges faced by the regulators and return control over personal data including medical records to the individual.

The review discusses:

  1. Recent achievements in next-generation artificial intelligence
  2. Basic concepts of highly distributed storage systems (HDSS) as a preferred method for medical data storage
  3. The open-source blockchain framework Exonum and its application to a healthcare marketplace
  4. A blockchain-based platform allowing patients to have control of their data and manage access
  5. How advances in deep learning can improve data quality, especially in an era of big data

Advances in Artificial Intelligence

  • Integrative analysis of the vast amount of health-associated data from a multitude of large scale global projects has proven to be highly problematic (REF 27), as high quality biomedical data is highly complex and of a heterogeneous nature, which necessitates special preprocessing and analysis.
  • Increased computing processing power and algorithm advances have led to significant advances in machine learning, especially machine learning involving Deep Neural Networks (DNNs), which are able to capture high-level dependencies in healthcare data. Some examples of the uses of DNNs are:
  1. Prediction of drug properties(2, 3) and toxicities(4)
  2. Biomarker development (5)
  3. Cancer diagnosis (6)
  4. First FDA approved system based on deep learning Arterys Cardio DL
  • Other promising systems of deep learning include:
    • Generative Adversarial Networks (GANs) (https://arxiv.org/abs/1406.2661): require good datasets for extensive training, but have been used to determine the tumor growth inhibition capabilities of various molecules (7)
    • Recurrent Neural Networks (RNNs): originally designed for sequence analysis, RNNs have proved useful for analyzing text and time-series data, and thus would be well suited to electronic record analysis. They have also been used to predict the blood glucose levels of type 1 diabetic patients using data obtained from continuous glucose monitoring devices (8)
    • Transfer learning: focuses on transferring information learned in one domain or larger dataset to another, smaller domain, reducing the dependence on the large training datasets that RNNs, GANs, and DNNs require.  Biomedical imaging datasets are one application of transfer learning.
    • One- and zero-shot learning: like transfer learning, these retain the ability to work with restricted datasets. One-shot learning aims to recognize new data points from only a few examples in the training set, while zero-shot learning aims to recognize new objects without seeing any examples of those instances in the training set.
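As a rough illustration of the recurrent-network idea above (carrying a hidden state across a time series such as continuous glucose readings), here is a minimal single-unit RNN step in plain Python; the weights and readings are toy values for illustration, not a trained model:

```python
import math

# Minimal sketch of a single-unit recurrent cell processing a time series,
# in the spirit of the RNN glucose-prediction work cited above.
# Weights are toy values chosen for illustration, not a trained model.

def rnn_step(x_t, h_prev, w_x=0.5, w_h=0.8, b=0.0):
    """One RNN step: new hidden state from current input and previous state."""
    return math.tanh(w_x * x_t + w_h * h_prev + b)

def run_sequence(xs):
    h = 0.0                      # initial hidden state
    for x in xs:                 # the hidden state summarizes the history so far
        h = rnn_step(x, h)
    return h

glucose = [0.2, 0.4, 0.9, 1.1]   # normalized toy readings
print(run_sequence(glucose))     # final state encodes the whole series
```

The key property is that the same small set of weights is reused at every time step, so the model can process sequences of any length.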

Highly Distributed Storage Systems (HDSS)

The explosion in data generation has necessitated the development of better systems for data storage and handling. HDSS systems need to be reliable, accessible, scalable, and affordable.  This involves storing data across different nodes, with the data in each node replicated to make access rapid; however, data consistency and affordability remain major challenges.

Blockchain is a distributed database used to maintain a growing list of records, in which records are grouped into blocks that are linked together by cryptographic algorithms to maintain the consistency of the data.  Each block contains a timestamp and a link to the previous block in the chain.  Because the ledger of blocks is distributed, it is owned, shared, and accessible by everyone, which allows a verifiable, secure, and consistent history of a record of events.
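The hash-linked structure described above can be sketched in a few lines; this is a toy model for illustration, not a production ledger or any specific blockchain’s format:

```python
import hashlib, json, time

# Minimal sketch of the structure described above: each block carries a
# timestamp and the hash of the previous block, so altering any record
# breaks the chain. A toy model, not a production ledger.

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, record):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"timestamp": time.time(), "record": record, "prev": prev})

def chain_valid(chain):
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, {"patient": "anon-1", "study": "CTA head"})
add_block(chain, {"patient": "anon-2", "study": "PET FDG"})
print(chain_valid(chain))          # True
chain[0]["record"]["patient"] = "tampered"
print(chain_valid(chain))          # False: tampering breaks the links
```

Changing even one field of an old record changes that block’s hash, so every later block’s stored `prev` link no longer matches; this is the tamper-evidence property the article relies on.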

Data Privacy and Regulatory Issues

The establishment of the Health Insurance Portability and Accountability Act (HIPAA) in 1996 provided much-needed regulatory guidance and a framework for clinicians and all concerned parties within the healthcare and health-data chain.  HIPAA has already provided guidance for the latest technologies impacting healthcare, most notably the use of social media and mobile communications (discussed in this article: Can Mobile Health Apps Improve Oral-Chemotherapy Adherence? The Benefit of Gamification.).  The advent of blockchain technology in healthcare brings its own unique challenges; however, HIPAA offers a basis for developing a regulatory framework in this regard.  The special standards regarding electronic data transfer are explained in HIPAA’s Privacy Rule, which regulates how certain entities (covered entities) use and disclose individually identifiable health information (protected health information, or PHI), and which protects the transfer of such information over any medium or electronic data format. However, some of the blockchain features that may revolutionize the healthcare system are in direct tension with HIPAA rules, as outlined below:

Issues of Privacy Specific In Use of Blockchain to Distribute Health Data

  • Blockchain was designed as a distributed database, maintained by multiple independent parties, and decentralized
  • Linked timestamping: although useful for time-dependent data, proof that third parties have not tampered with the process would have to be established, including accountability measures
  • Blockchain uses a consensus algorithm, even though end users may hold their own private keys
  • Applied cryptography measures and routines are used to decentralize authentication (publicly available)
  • Blockchain users fall into three main categories: 1) maintainers of the blockchain infrastructure; 2) external auditors, who store a replica of the blockchain; and 3) end users or clients, who may have access to only a relatively small portion of a blockchain but whose software may use cryptographic proofs to verify the authenticity of data
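The cryptographic proofs that let end users verify the authenticity of data (category 3 above) are commonly Merkle proofs: a client holding only a root hash can confirm that one record belongs to the full dataset. A generic sketch with Python’s `hashlib`, using the common pairwise tree layout rather than any particular blockchain’s format:

```python
import hashlib

# Sketch of a Merkle proof: a light client holding only the root hash
# can check that one record is part of a larger dataset. Generic scheme,
# not tied to any specific blockchain implementation.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves, index):
    """Collect the sibling hashes needed to recompute the root for one leaf."""
    level, proof = [h(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1                    # sibling is the paired node
        proof.append((level[sib], sib < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    acc = h(leaf)
    for sibling, is_left in proof:
        acc = h(sibling + acc) if is_left else h(acc + sibling)
    return acc == root

records = [b"rec-a", b"rec-b", b"rec-c", b"rec-d"]
root = merkle_root(records)
print(verify(b"rec-b", prove(records, 1), root))   # True
print(verify(b"rec-x", prove(records, 1), root))   # False
```

The proof is only O(log n) hashes, which is why end users can verify data without storing the whole ledger.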

YouTube video on How #Blockchain Will Transform Healthcare in 25 Years

In Big Data for Better Outcomes, BigData@Heart, DO->IT, EHDN, the EU data Consortia, and yes, even concepts like pay for performance, Richard Bergström has had a hand in their creation. The former Director General of EFPIA, and now the head of health both at SICPA and their joint venture blockchain company Guardtime, Richard is always ahead of the curve. In fact, he’s usually the one who makes the curve in the first place.


Please click on the following link for a podcast on Big Data, Blockchain and Pharma/Healthcare by Richard Bergström:

References

  1. Mamoshina, P., Ojomoko, L., Yanovich, Y., Ostrovski, A., Botezatu, A., Prikhodko, P., Izumchenko, E., Aliper, A., Romantsov, K., Zhebrak, A., Ogu, I. O., and Zhavoronkov, A. (2018) Converging blockchain and next-generation artificial intelligence technologies to decentralize and accelerate biomedical research and healthcare, Oncotarget 9, 5665-5690.
  2. Aliper, A., Plis, S., Artemov, A., Ulloa, A., Mamoshina, P., and Zhavoronkov, A. (2016) Deep Learning Applications for Predicting Pharmacological Properties of Drugs and Drug Repurposing Using Transcriptomic Data, Molecular pharmaceutics 13, 2524-2530.
  3. Wen, M., Zhang, Z., Niu, S., Sha, H., Yang, R., Yun, Y., and Lu, H. (2017) Deep-Learning-Based Drug-Target Interaction Prediction, Journal of proteome research 16, 1401-1409.
  4. Gao, M., Igata, H., Takeuchi, A., Sato, K., and Ikegaya, Y. (2017) Machine learning-based prediction of adverse drug effects: An example of seizure-inducing compounds, Journal of pharmacological sciences 133, 70-78.
  5. Putin, E., Mamoshina, P., Aliper, A., Korzinkin, M., Moskalev, A., Kolosov, A., Ostrovskiy, A., Cantor, C., Vijg, J., and Zhavoronkov, A. (2016) Deep biomarkers of human aging: Application of deep neural networks to biomarker development, Aging 8, 1021-1033.
  6. Vandenberghe, M. E., Scott, M. L., Scorer, P. W., Soderberg, M., Balcerzak, D., and Barker, C. (2017) Relevance of deep learning to facilitate the diagnosis of HER2 status in breast cancer, Scientific reports 7, 45938.
  7. Kadurin, A., Nikolenko, S., Khrabrov, K., Aliper, A., and Zhavoronkov, A. (2017) druGAN: An Advanced Generative Adversarial Autoencoder Model for de Novo Generation of New Molecules with Desired Molecular Properties in Silico, Molecular pharmaceutics 14, 3098-3104.
  8. Ordonez, F. J., and Roggen, D. (2016) Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition, Sensors (Basel) 16.

Articles from clinicalinformaticsnews.com

Healthcare Organizations Form Synaptic Health Alliance, Explore Blockchain’s Impact On Data Quality

From http://www.clinicalinformaticsnews.com/2018/12/05/healthcare-organizations-form-synaptic-health-alliance-explore-blockchains-impact-on-data-quality.aspx

By Benjamin Ross

December 5, 2018 | The boom of blockchain and distributed ledger technologies has inspired healthcare organizations to test the capabilities of their data. Quest Diagnostics, in partnership with Humana, MultiPlan, and UnitedHealth Group’s Optum and UnitedHealthcare, has launched a pilot program that applies blockchain technology to improve data quality and reduce the administrative costs associated with changes to healthcare provider demographic data.

The collective body, called Synaptic Health Alliance, explores how blockchain can keep only the most current healthcare provider information available in health plan provider directories. The alliance plans to share their progress in the first half of 2019.

Providing consumers looking for care with accurate information when they need it is essential to a high-functioning overall healthcare system, Jason O’Meara, Senior Director of Architecture at Quest Diagnostics, told Clinical Informatics News in an email interview.

“We were intentional about calling ourselves an alliance as it speaks to the shared interest in improving health care through better, collaborative use of an innovative technology,” O’Meara wrote. “Our large collective dataset and national footprints enable us to prove the value of data sharing across company lines, which has been limited in healthcare to date.”

O’Meara said Quest Diagnostics has been investing time and resources the past year or two in understanding blockchain, its ability to drive purpose within the healthcare industry, and how to leverage it for business value.

“Many health care and life science organizations have cast an eye toward blockchain’s potential to inform their digital strategies,” O’Meara said. “We recognize it takes time to learn how to leverage a new technology. We started exploring the technology in early 2017, but we quickly recognized the technology’s value is in its application to business to business use cases: to help transparently share information, automate mutually-beneficial processes and audit interactions.”

Quest began discussing the potential for an alliance with the four other companies a year ago, O’Meara said. Each company shared traits that would allow them to prove the value of data sharing across company lines.

“While we have different perspectives, each member has deep expertise in healthcare technology, a collaborative culture, and desire to continuously improve the patient/customer experience,” said O’Meara. “We also recognize the value of technology in driving efficiencies and quality.”

Following its initial launch in April, Synaptic Health Alliance is deploying a multi-company, multi-site, permissioned blockchain. According to a whitepaper published by Synaptic Health, the choice to use a permissioned blockchain rather than an anonymous one is crucial to the alliance’s success.

“This is a more effective approach, consistent with enterprise blockchains,” an alliance representative wrote. “Each Alliance member has the flexibility to deploy its nodes based on its enterprise requirements. Some members have elected to deploy their nodes within their own data centers, while others are using secured public cloud services such as AWS and Azure. This level of flexibility is key to growing the Alliance blockchain network.”

As the pilot moves forward, O’Meara says the Alliance plans to open membership to other organizations. Earlier this week, Aetna and Ascension announced they had joined the project.

“I am personally excited by the amount of cross-company collaboration facilitated by this project,” O’Meara says. “We have already learned so much from each other and are using that knowledge to really move the needle on improving healthcare.”

 

US Health And Human Services Looks To Blockchain To Manage Unstructured Data

http://www.clinicalinformaticsnews.com/2018/11/29/us-health-and-human-services-looks-to-blockchain-to-manage-unstructured-data.aspx

By Benjamin Ross

November 29, 2018 | The US Department of Health and Human Services (HHS) is making waves in the blockchain space. The agency’s Division of Acquisition (DA) has developed a new system, called Accelerate, which gives acquisition teams detailed information on pricing, terms, and conditions across HHS in real-time. The department’s Associate Deputy Assistant Secretary for Acquisition, Jose Arrieta, gave a presentation and live demo of the blockchain-enabled system at the Distributed: Health event earlier this month in Nashville, Tennessee.

Accelerate is still in the prototype phase, Arrieta said, with hopes that the new system will be deployed at the end of the fiscal year.

HHS spends around $25 billion a year in contracts, Arrieta said. That’s 100,000 contracts a year with over one million pages of unstructured data managed through 45 different systems. Arrieta and his team wanted to modernize the system.

“But if you’re going to change the way a workforce of 20,000 people do business, you have to think your way through how you’re going to do that,” said Arrieta. “We didn’t disrupt the existing systems: we cannibalized them.”

The cannibalization process resulted in Accelerate. According to Arrieta, the system functions by creating a record of data rather than storing it, leveraging machine learning, artificial intelligence (AI), and robotic process automation (RPA), all through blockchain data.
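Arrieta's description of "creating a record of data rather than storing it" matches a common ledger pattern: the raw contract pages stay in the existing source systems, and only a cryptographic fingerprint plus minimal metadata goes on chain. A minimal sketch, with hypothetical function and field names rather than HHS's actual design:

```python
import hashlib
import time

def make_ledger_record(document_bytes, metadata):
    """Fingerprint a document for the ledger without storing
    the document itself (it stays in the source system)."""
    return {
        "sha256": hashlib.sha256(document_bytes).hexdigest(),
        "metadata": metadata,  # e.g. contract ID, vendor
        "timestamp": time.time(),
    }

def verify_document(document_bytes, record):
    """Any node can check a retrieved document against the shared record."""
    return hashlib.sha256(document_bytes).hexdigest() == record["sha256"]

contract = b"Terms and conditions ..."
record = make_ledger_record(contract, {"contract_id": "HHS-001"})
assert verify_document(contract, record)             # untampered copy passes
assert not verify_document(contract + b"x", record)  # altered copy fails
```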

“We’re using that data record as a mechanism to redesign the way we deliver services through micro-services strategies,” Arrieta said. “Why is that important? Because if you have a single application or data use that interfaces with 55 other applications in your business network, it becomes very expensive to make changes to one of the 55 applications.”

Accelerate distributes the data to the workforce, making it available to them one business process at a time.

“We’re building those business processes without disrupting the existing systems,” said Arrieta, and that’s key. “We’re not shutting off those systems. We’re using human-centered design sessions to rebuild value exchange off of that data.”

The first application for the system, Arrieta said, can be compared to department stores price-matching their online competitors.

It takes HHS close to a month to collect and amalgamate data from existing systems, whether that be the terms and conditions that drive certain price points or software licenses.

“The micro-service we built actually analyzes that data, and provides that information to you within one second,” said Arrieta. “This is distributed to the workforce, to the 5,000 people that do the contracting, to the 15,000 people that actually run the programs at [HHS].”

This simple micro-service is replicated on every node related to HHS’s internal workforce. If somebody wants to change the algorithm to fit their needs, they can do that in a distributed manner.

Arrieta hopes to use Accelerate to save researchers money at the point of purchase. The program uses blockchain to simplify the process of acquisition.

“How many of you work with the federal government?” Arrieta asked the audience. “Do you get sick of reentering the same information over and over again? Every single business opportunity you apply for, you have to resubmit your financial information. You constantly have to check for validation and verification, constantly have to resubmit capabilities.”

Wouldn’t it be better to have historical notes available for each transaction, Arrieta asked. This would allow clinical researchers to focus on “the things they’re really good at” instead of red tape.

“If we had the top cancer researcher in the world, would you really want her spending her time learning about federal regulations as to how to spend money, or do you want her trying to solve cancer?” Arrieta said. “What we’re doing is providing that data to the individual in a distributed manner so they can read the information of historical purchases that support activity, and they can focus on the objectives and risks they see as it relates to their programming and their objectives.”

Blockchain also creates transparency among researchers, Arrieta said, which he says creates an “uncomfortable reality”: researchers have to make decisions regarding data, fundamentally changing value exchange.

“The beauty of our business model is internal investment,” Arrieta said. For instance, the HHS could take all the sepsis data that exists in their system, put it into a distributed ledger, and share it with an external source.

“Maybe that could fuel partnership,” Arrieta said. “I can make data available to researchers in the field in real-time so they can actually test their hypothesis, test their intuition, and test their imagination as it relates to solving real-world problems.”

 

Shivom is creating a genomic data hub to elongate human life with AI

From VentureBeat.com
Blockchain-based genomic data hub platform Shivom recently reached its $35 million hard cap within 15 seconds of opening its main token sale. Shivom received funding from a number of crypto VC funds, including Collinstar, Lateral, and Ironside.

The goal is to create the world’s largest store of genomic data while offering an open web marketplace for patients, data donors, and providers — such as pharmaceutical companies, research organizations, governments, patient-support groups, and insurance companies.

“Disrupting the whole of the health care system as we know it has to be the most exciting use of such large DNA datasets,” Shivom CEO Henry Ines told me. “We’ll be able to stratify patients for better clinical trials, which will help to advance research in precision medicine. This means we will have the ability to make a specific drug for a specific patient based on their DNA markers. And what with the cost of DNA sequencing getting cheaper by the minute, we’ll also be able to sequence individuals sooner, so young children or even newborn babies could be sequenced from birth and treated right away.”

While there are many solutions examining DNA data to explain heritage, intellectual capabilities, health, and fitness, the potential of genomic data has largely yet to be unlocked. A few companies hold the monopoly on genomic data and make sizeable profits from selling it to third parties, usually without sharing the earnings with the data donor. Donors are also not informed if and when their information is shared, nor do they have any guarantee that their data is secure from hackers.

Shivom wants to change that by creating a decentralized platform that will break these monopolies, democratizing the processes of sharing and utilizing the data.

“Overall, large DNA datasets will have the potential to aid in the understanding, prevention, diagnosis, and treatment of every disease known to mankind, and could create a future where no diseases exist, or those that do can be cured very easily and quickly,” Ines said. “Imagine that, a world where people do not get sick or are already aware of what future diseases they could fall prey to and so can easily prevent them.”

Shivom’s use of blockchain technology and smart contracts ensures that all genomic data shared on the platform will remain anonymous and secure, while its OmiX token incentivizes users to share their data for monetary gain.
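The article gives no implementation detail, but the combination it describes of anonymity plus token incentives can be sketched as a toy consent registry: donors are pseudonymized with a salted hash, and each data access credits them with tokens. The class, method names, and reward amount are hypothetical, not Shivom's actual API:

```python
import hashlib
import secrets

class ConsentRegistry:
    """Toy model of contract-mediated access to pseudonymized
    genomic records with a per-access token credit."""

    def __init__(self, reward=1):
        self.reward = reward  # OmiX-style tokens credited per access
        self.records = {}     # pseudonym -> genomic data
        self.balances = {}    # pseudonym -> token balance

    def donate(self, donor_id, genome):
        # A salted hash stands in for the donor's identity on the platform.
        salt = secrets.token_hex(8)
        pseudonym = hashlib.sha256((donor_id + salt).encode()).hexdigest()[:16]
        self.records[pseudonym] = genome
        self.balances[pseudonym] = 0
        return pseudonym

    def access(self, pseudonym):
        # Each access by a researcher credits the anonymous donor.
        self.balances[pseudonym] += self.reward
        return self.records[pseudonym]

registry = ConsentRegistry()
pseudonym = registry.donate("alice@example.com", "ACGT...")
registry.access(pseudonym)
assert registry.balances[pseudonym] == 1
```

On a real platform the registry and balances would live in a smart contract rather than in one process's memory; the point here is only the flow of pseudonymization and per-access credit.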

Rise in Population Genomics: Local Government in India Will Use Blockchain to Secure Genetic Data

Blockchain will secure the DNA database for 50 million citizens of the eighth-largest state in India. The government of Andhra Pradesh signed a Memorandum of Understanding with Shivom, a German genomics and precision medicine start-up, which announced it will start the pilot project soon. The move falls in line with a trend of governments turning to population genomics while securing the sensitive data through blockchain.

Andhra Pradesh, DNA, and blockchain

Storing sensitive genetic information safely and securely is a big challenge. Shivom is building a genomic data hub powered by blockchain technology, aiming to connect researchers with DNA data donors and thus facilitate medical research and the healthcare industry.

With regards to Andhra Pradesh, the start-up will first launch a trial to determine the viability of their technology for moving from a proactive to a preventive approach in medicine, and towards precision health. “Our partnership with Shivom explores the possibilities of providing an efficient way of diagnostic services to patients of Andhra Pradesh by maintaining the privacy of the individual data through blockchain technologies,” said J A Chowdary, IT Advisor to Chief Minister, Government of Andhra Pradesh.

Other Articles in this Open Access Journal on Digital Health include:

Can Mobile Health Apps Improve Oral-Chemotherapy Adherence? The Benefit of Gamification.

Medical Applications and FDA regulation of Sensor-enabled Mobile Devices: Apple and the Digital Health Devices Market

 

How Social Media, Mobile Are Playing a Bigger Part in Healthcare

 

E-Medical Records Get A Mobile, Open-Sourced Overhaul By White House Health Design Challenge Winners

 

Medcity Converge 2018 Philadelphia: Live Coverage @pharma_BI

 

Digital Health Breakthrough Business Models, June 5, 2018 @BIOConvention, Boston, BCEC

Live Coverage: MedCity Converge 2018 Philadelphia: AI in Cancer and Keynote Address

Reporter: Stephen J. Williams, PhD

8:30 AM – 9:15 AM

Practical Applications of AI in Cancer

We are far from machine learning dictating clinical decision making, but AI has important niche applications in oncology. Hear from a panel of innovative startups and established life science players about how machine learning and AI can transform different aspects in healthcare, be it in patient recruitment, data analysis, drug discovery or care delivery.

Moderator: Ayan Bhattacharya, Advanced Analytics Specialist Leader, Deloitte Consulting LLP
Speakers:
Wout Brusselaers, CEO and Co-Founder, Deep 6 AI @woutbrusselaers
Tufia Haddad, M.D., Chair of Breast Medical Oncology and Department of Oncology Chair of IT, Mayo Clinic
Carla Leibowitz, Head of Corporate Development, Arterys @carlaleibowitz
John Quackenbush, Ph.D., Professor and Director of the Center for Cancer Computational Biology, Dana-Farber Cancer Institute

Ayan: having worked at IBM and Thomson Reuters with structured datasets, and having gone through his own cancer battle, he now works in healthcare AI, which deals with unstructured datasets

Carla: collecting medical images from around the world, mainly of tumors, and calculating tumor volumetrics

Tufia: a clinician focused on drug-resistant breast cancer, but interested in AI and healthcare IT at Mayo

John: works with large-scale datasets, but is a machine learning skeptic

moderator: how has imaging evolved?

Carla: there are ten times more images but not ten times more radiologists, so the stressed field needs help with image analysis; they have seen that measuring lung tumor volumetrics as a therapeutic diagnostic has worked
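The volumetrics Carla describes reduce, in the simplest case, to counting segmented voxels and scaling by the scanner's voxel spacing. A minimal sketch, assuming a binary mask as input (real pipelines read the spacing from DICOM or NIfTI headers; this is illustrative only):

```python
def tumor_volume_ml(mask, voxel_spacing_mm):
    """Estimate lesion volume from a binary 3-D segmentation mask.

    mask: nested lists of 0/1 voxels (e.g. a model's output);
    voxel_spacing_mm: (dz, dy, dx) spacing of the scan.
    """
    dz, dy, dx = voxel_spacing_mm
    voxel_mm3 = dz * dy * dx
    n_voxels = sum(v for plane in mask for row in plane for v in row)
    return n_voxels * voxel_mm3 / 1000.0  # mm^3 -> millilitres

# A 2x2x2 block of tumor voxels at 1x1x1 mm spacing is 8 mm^3 = 0.008 ml.
mask = [[[1, 1], [1, 1]], [[1, 1], [1, 1]]]
assert abs(tumor_volume_ml(mask, (1.0, 1.0, 1.0)) - 0.008) < 1e-9
```

Tracking this number across scans is what turns a segmentation model's output into a therapeutic diagnostic: the clinician compares volumes over time rather than eyeballing diameters.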

moderator: how has AI affected patient recruitment?

Tufia: the majority of patients are receiving great care, but AI can offer profiles and determine which patients can benefit from tertiary care

John: referenced the “no free lunch” theorem; great enthusiasm about optimization algorithms fell short in application; still, one can extract great information from, e.g., images

moderator: how is AI for healthcare delivery working at mayo?

Tufia: for every hour with a patient there are two hours of data mining; for care delivery, the hope is to leverage cognitive systems to do the data mining

John: there is a problem with irreproducible research, which makes for poor datasets; also, these care packages are based on population data, not personalized datasets; a challenge for AI is moving from correlation to causation

Carla: algorithms trained on one healthcare network are not good enough; Google tried and it failed

John: curation is very important and good annotation is needed; they needed to go in and develop, with curators, a systematic way to curate medical records; standardization and reproducibility are needed; applications in radiomics can differ based on different data-collection machines; they developed a machine learning model site where investigators can compare models on a hub; there is also a need to communicate with patients about healthcare information and quality information

Ayan: Australia and Canada have done the most concerning AI in the life science and healthcare space; AI in most cases is cognitive learning; there are really two types of companies: 1) the Microsofts and Googles, and 2) the startups that may be more pure AI

 

Final Notes: We are at a point where collecting massive amounts of healthcare-related data is simple, rapid, and shareable. However, challenges exist in the quality of datasets, proper curation and annotation, the need for collaboration across all healthcare stakeholders including patients, and the dissemination of useful and accurate information.

 

9:15 AM–9:45 AM

Opening Keynote: Dr. Joshua Brody, Medical Oncologist, Mount Sinai Health System

The Promise and Hype of Immunotherapy

Immunotherapy is revolutionizing oncology care across various types of cancers, but it is also necessary to sort the hype from the reality. In his keynote, Dr. Brody will delve into the history of this new therapy mode and how it has transformed the treatment of lymphoma and other diseases. He will address the hype surrounding it, why so many still don’t respond to the treatment regimen and chart the way forward—one that can lead to more elegant immunotherapy combination paths and better outcomes for patients.

Speaker:
Joshua Brody, M.D., Assistant Professor, Mount Sinai School of Medicine @joshuabrodyMD

Director of Lymphoma Therapy at Mt. Sinai

  • lymphoma is a cancer with high PD-L1 expression
  • Hodgkin’s lymphoma is the best responder to PD-1 therapy (nivolumab), but with hepatic adverse effects
  • CAR-T (chimeric antigen receptor T cells): a long process that includes apheresis, selection of CD3/CD28 cells, viral transfection of the chimeric receptor, and purification
  • complete remissions of B-cell lymphomas (NCI trial) and long-term remissions past 18 months
  • side effects like cytokine release (which has been controlled) and encephalopathy (he uses a handwriting test to track progression of this adverse effect)

Vaccines

  • teaching the immune cells: since PD-1 inhibition addresses exhausted T cells, a vaccine boost could be an adjuvant to PD-1 or other checkpoint therapy
  • using an Flt3L-primed in-situ vaccine (a Toll-like receptor agonist recruits dendritic cells to the tumor, followed by activation of a T-cell response); the vaccine therefore does not need to be produced ex vivo; months after the vaccine the tumor remained in remission
  • versus rituximab, which can target many healthy B cells, this in-situ vaccine strategy is very specific for the tumorigenic B cells
  • however, they did see resistant tumor cells that did not overexpress PD-L1, but they discovered a novel checkpoint (which cannot be disclosed at this point)

Please follow on Twitter using the following #hashtags and @pharma_BI

#MCConverge

#AI

#cancertreatment

#immunotherapy

#healthIT

#innovation

#precisionmedicine

#healthcaremodels

#personalizedmedicine

#healthcaredata

And at the following handles:

@pharma_BI

@medcitynews

 

Please see related articles on Live Coverage of Previous Meetings on this Open Access Journal

LIVE – Real Time – 16th Annual Cancer Research Symposium, Koch Institute, Friday, June 16, 9AM – 5PM, Kresge Auditorium, MIT

Real Time Coverage and eProceedings of Presentations on 11/16 – 11/17, 2016, The 12th Annual Personalized Medicine Conference, HARVARD MEDICAL SCHOOL, Joseph B. Martin Conference Center, 77 Avenue Louis Pasteur, Boston

Tweets Impression Analytics, Re-Tweets, Tweets and Likes by @AVIVA1950 and @pharma_BI for 2018 BioIT, Boston, 5/15 – 5/17, 2018

BIO 2018! June 4-7, 2018 at Boston Convention & Exhibition Center

https://pharmaceuticalintelligence.com/press-coverage/
