
Archive for the ‘Artificial Intelligence – General’ Category


Showcase: How Deep Learning could help radiologists spend their time more efficiently

Reporter and Curator: Dror Nir, PhD

 

The debate over the role AI could or should play in modern radiology is lively, spanning a wide spectrum of positive expectations as well as fears.

The article “A Deep Learning Model to Triage Screening Mammograms: A Simulation Study,” published this month, illustrates what is arguably the most useful, and very much feasible, application of AI in radiology at present. Radiologists and patients would benefit greatly if such applications were incorporated (with all safety precautions taken) into routine practice as soon as possible.

In a simulation study, a deep learning model that triages mammograms as cancer free improved workflow efficiency and significantly improved specificity while maintaining noninferior sensitivity.

Background

Recent deep learning (DL) approaches have shown promise in improving sensitivity but have not addressed limitations in radiologist specificity or efficiency.

Purpose

To develop a DL model to triage a portion of mammograms as cancer free, improving performance and workflow efficiency.

Materials and Methods

In this retrospective study, 223 109 consecutive screening mammograms performed in 66 661 women from January 2009 to December 2016 were collected with cancer outcomes obtained through linkage to a regional tumor registry. This cohort was split by patient into 212 272, 25 999, and 26 540 mammograms from 56 831, 7021, and 7176 patients for training, validation, and testing, respectively. A DL model was developed to triage mammograms as cancer free and evaluated on the test set. A DL-triage workflow was simulated in which radiologists skipped mammograms triaged as cancer free (interpreting them as negative for cancer) and read mammograms not triaged as cancer free by using the original interpreting radiologists’ assessments. Sensitivities, specificities, and percentage of mammograms read were calculated, with and without the DL-triage–simulated workflow. Statistics were computed across 5000 bootstrap samples to assess confidence intervals (CIs). Specificities were compared by using a two-tailed t test (P < .05) and sensitivities were compared by using a one-sided t test with a noninferiority margin of 5% (P < .05).
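The abstract does not include the authors' code; purely as an illustration of the simulated triage logic and bootstrap confidence intervals described above, here is a minimal sketch with hypothetical array names (a boolean cancer label, the original radiologist call, and the model's "cancer free" triage flag per mammogram).

```python
# Minimal sketch (not the authors' code) of scoring a DL-triage workflow:
# exams the model triages as "cancer free" are auto-read as negative,
# the rest keep the original radiologist assessment.
import numpy as np

def triage_metrics(y_cancer, radiologist_positive, dl_triaged_negative):
    """All inputs are boolean arrays with one entry per screening mammogram."""
    workflow_positive = radiologist_positive & ~dl_triaged_negative  # triaged exams forced negative
    sensitivity = np.sum(workflow_positive & y_cancer) / np.sum(y_cancer)
    specificity = np.sum(~workflow_positive & ~y_cancer) / np.sum(~y_cancer)
    fraction_read = 1.0 - dl_triaged_negative.mean()                 # share radiologists still read
    return sensitivity, specificity, fraction_read

def bootstrap_ci(y_cancer, radiologist_positive, dl_triaged_negative, n_boot=5000, seed=0):
    """Percentile 95% CIs over bootstrap resamples of exams (the study used 5000 resamples)."""
    rng = np.random.default_rng(seed)
    n = len(y_cancer)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        stats.append(triage_metrics(y_cancer[idx], radiologist_positive[idx], dl_triaged_negative[idx]))
    sens, spec, frac = map(np.array, zip(*stats))
    return {name: (np.percentile(vals, 2.5), np.percentile(vals, 97.5))
            for name, vals in [("sensitivity", sens), ("specificity", spec), ("fraction_read", frac)]}
```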

Results

The test set included 7176 women (mean age, 57.8 years ± 10.9 [standard deviation]). When reading all mammograms, radiologists obtained a sensitivity and specificity of 90.6% (173 of 191; 95% CI: 86.6%, 94.7%) and 93.5% (24 625 of 26 349; 95% CI: 93.3%, 93.9%). In the DL-simulated workflow, the radiologists obtained a sensitivity and specificity of 90.1% (172 of 191; 95% CI: 86.0%, 94.3%) and 94.2% (24 814 of 26 349; 95% CI: 94.0%, 94.6%) while reading 80.7% (21 420 of 26 540) of the mammograms. The simulated workflow improved specificity (P = .002) and obtained a noninferior sensitivity with a margin of 5% (P < .001).

Conclusion

This deep learning model has the potential to reduce radiologist workload and significantly improve specificity without harming sensitivity.


Read Full Post »


Using A.I. to Detect Lung Cancer gets an A!

Reporter: Irina Robu, PhD

Google researchers hypothesized that computers could be as good as or better than doctors at detecting tiny lung cancers on CT scans. A CT scan combines data from many X-ray projections to produce a detailed image of structures inside the body; the scans yield 2-dimensional images of slices of the body, and the data can also be used to construct 3-D images.

The technology, published in Nature Medicine, offers a glimpse of the future of artificial intelligence in medicine. By feeding vast amounts of data from medical imaging into systems called artificial neural networks, scientists can teach computers to identify patterns linked to a specific condition, like pneumonia, cancer or a wrist fracture, that would be hard for a person to see. The system follows an algorithm, or set of instructions, and learns as it goes. The more data it receives, the better it becomes at interpretation.

The process, known as deep learning, enables computers to identify objects and understand speech; it has also produced systems that help pathologists read microscope slides to diagnose cancer and help ophthalmologists detect eye disease in people with diabetes. In their recent study, the scientists applied artificial intelligence to CT scans used to screen people for lung cancer, which caused 160,000 deaths in the United States last year, and 1.7 million worldwide. The scans are recommended for people at high risk because of a long history of smoking.
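Google's actual system is far more sophisticated and trained end to end on full screening CT volumes, but as a toy illustration of the 3-D idea described above (stacking 2-D slices into a volume and letting a convolutional network learn malignancy-related patterns), a hedged PyTorch sketch might look like this; the shapes and layer sizes are arbitrary assumptions.

```python
# Toy sketch, not Google's model: a tiny 3-D CNN that maps a stack of CT slices
# (a volume) to a per-scan probability of cancer.
import torch
import torch.nn as nn

class TinyLungCT3DNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),               # global pooling over the whole volume
        )
        self.classifier = nn.Linear(32, 1)         # single logit: "cancer present"

    def forward(self, volume):                     # volume shape: (batch, 1, depth, height, width)
        return self.classifier(self.features(volume).flatten(1))

model = TinyLungCT3DNet()
scans = torch.randn(2, 1, 64, 128, 128)            # two random stand-ins for 64-slice CT volumes
cancer_probability = torch.sigmoid(model(scans))   # one probability per scan
```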

Screening studies have shown that CT screening can reduce the risk of dying from lung cancer. Screening can also identify spots that might later become cancer, so that radiologists can categorize patients into risk groups and decide whether they need biopsies or more frequent follow-up scans to keep track of the suspect regions.

However, the test has errors. It can miss tumors or mistake benign spots for malignancies and shove patients into invasive, risky procedures like lung biopsies or surgery.

SOURCE

https://www.nytimes.com/2019/05/20/health/cancer-artificial-intelligence-ct-scans.html

Other related articles were published in this Online Scientific Open Access Journal including the following:

https://pharmaceuticalintelligence.com/2019/07/21/multiple-barriers-identified-which-may-hamper-use-of-artificial-intelligence-in-the-clinical-setting/

https://pharmaceuticalintelligence.com/2019/06/28/ai-system-used-to-detect-lung-cancer/

 

Read Full Post »


Artificial Intelligence and Cardiovascular Disease

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

 

Cardiology is a vast field that covers a large number of diseases of the heart and circulatory system and their functions. Similar symptomatologies and diagnostic features may be present in an individual, making it difficult for a doctor to easily isolate the actual heart-related problem. The use of artificial intelligence aims to relieve doctors of this hurdle and deliver better-quality care to patients. It has long been proposed that the results of screening tests such as echocardiograms, MRIs, or CT scans be analyzed with more advanced computational techniques. While artificial intelligence is not yet widely used in clinical practice, it is seen as the future of healthcare.

 

The continuous development of the technological sector has enabled the industry to merge with medicine in order to create new integrated, reliable, and efficient methods of providing quality health care. One of the ongoing trends in cardiology at present is the proposed utilization of artificial intelligence (AI) in augmenting and extending the effectiveness of the cardiologist. This is because AI or machine learning would allow for an accurate measure of patient functioning and diagnosis from the beginning to the end of the therapeutic process. In particular, the use of artificial intelligence in cardiology focuses on research and development, clinical practice, and population health. Conceived as an all-in-one mechanism for cardiac healthcare, AI technologies incorporate complex algorithms to determine the relevant steps needed for a successful diagnosis and treatment. The role of artificial intelligence specifically extends to the identification of novel drug therapies, disease stratification and statistics, continuous remote monitoring and diagnostics, integration of multi-omic data, and extension of physician effectiveness and efficiency.

 

Artificial intelligence – specifically a branch of it called machine learning – is being used in medicine to help with diagnosis. Computers might, for example, be better at interpreting heart scans. Computers can be ‘trained’ to make these predictions. This is done by feeding the computer information from hundreds or thousands of patients, plus instructions (an algorithm) on how to use that information. This information includes heart scans, genetic and other test results, and how long each patient survived. These scans are in exquisite detail and the computer may be able to spot differences that are beyond human perception. It can also combine information from many different tests to give as accurate a picture as possible. The computer starts to work out which factors affected the patients’ outlook, so it can make predictions about other patients.

 

In current medical practice, doctors use risk scores to make treatment decisions for their cardiac patients. These are based on a series of variables such as weight, age and lifestyle, but they do not always have the desired level of accuracy. A particular example of the use of artificial intelligence in cardiology is an experimental study on heart disease patients published in 2017. The researchers used cardiac MRI-based algorithms coupled with 3D systolic cardiac motion patterns to predict the health outcomes of patients with pulmonary hypertension. The experiment proved successful, with the technology able to track 30,000 points of cardiac motion across 250 patients. With the success of this study, as well as the promise of other research on artificial intelligence, cardiology appears to be moving towards a more technology-driven practice.

 

One study was conducted in Finland where researchers enrolled 950 patients complaining of chest pain, who underwent the centre’s usual scanning protocol to check for coronary artery disease. Their outcomes were tracked for six years following their initial scans, over the course of which 24 of the patients had heart attacks and 49 died from all causes. The patients first underwent a coronary computed tomography angiography (CCTA) scan, which yielded 58 pieces of data on the presence of coronary plaque, vessel narrowing and calcification. Patients whose scans were suggestive of disease underwent a positron emission tomography (PET) scan which produced 17 variables on blood flow. Ten clinical variables were also obtained from medical records including sex, age, smoking status and diabetes. These 85 variables were then entered into an artificial intelligence (AI) programme called LogitBoost. The AI repeatedly analysed the imaging variables, and was able to learn how the imaging data interacted and identify the patterns which preceded death and heart attack with over 90% accuracy. The predictive performance using the ten clinical variables alone was modest, with an accuracy of 90%. When PET scan data was added, accuracy increased to 92.5%. The predictive performance increased significantly when CCTA scan data was added to clinical and PET data, with accuracy of 95.4%.
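The study's own LogitBoost software is not reproduced here; as a rough sketch of the design under stated assumptions, the example below uses scikit-learn's gradient boosting as a stand-in for LogitBoost and random synthetic arrays in place of the 10 clinical, 17 PET, and 58 CCTA variables, simply to show how predictive performance can be compared as each block of variables is added.

```python
# Sketch of the incremental-feature-set comparison (synthetic data; LogitBoost itself is
# not in scikit-learn, so GradientBoostingClassifier stands in for it).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 950                                   # patients in the Finnish cohort
clinical = rng.normal(size=(n, 10))       # e.g., age, sex, smoking status, diabetes
pet = rng.normal(size=(n, 17))            # blood-flow variables
ccta = rng.normal(size=(n, 58))           # plaque, narrowing and calcification variables
events = rng.integers(0, 2, size=n)       # death or heart attack during follow-up (synthetic)

for name, X in [("clinical only", clinical),
                ("clinical + PET", np.hstack([clinical, pet])),
                ("clinical + PET + CCTA", np.hstack([clinical, pet, ccta]))]:
    auc = cross_val_score(GradientBoostingClassifier(), X, events, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.2f}")  # with the real data, accuracy rose from 90% to 95.4%
```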

 

The findings of another study showed that applying artificial intelligence (AI) to the electrocardiogram (ECG) enables early detection of left ventricular dysfunction and can identify individuals at increased risk of developing it in the future. Asymptomatic left ventricular dysfunction (ALVD) is characterised by the presence of a weak heart pump with a risk of overt heart failure. It is present in three to six percent of the general population and is associated with reduced quality of life and longevity. However, it is treatable when found. Currently, there is no inexpensive, noninvasive, painless screening tool for ALVD available for diagnostic use. When tested on an independent set of 52,870 patients, the network model yielded values for the area under the curve, sensitivity, specificity, and accuracy of 0.93, 86.3 percent, 85.7 percent, and 85.7 percent, respectively. Furthermore, in patients without ventricular dysfunction, those with a positive AI screen were at four times the risk of developing future ventricular dysfunction compared with those with a negative screen.
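The reported figures (AUC, sensitivity, specificity, accuracy, and the fourfold relative risk in screen-positive patients) can all be derived from per-patient network scores, outcome labels, and follow-up status. The sketch below shows one way to compute them, using synthetic arrays rather than the study's ECG data; the threshold and the follow-up label are illustrative assumptions.

```python
# Sketch: computing screening metrics and relative risk from model scores (synthetic data).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
has_alvd = rng.integers(0, 2, size=1000)                                  # 1 = low ejection fraction now
score = np.clip(has_alvd * 0.6 + rng.normal(0.3, 0.25, size=1000), 0, 1)  # network output per ECG
screen_positive = score >= 0.5                                            # assumed operating threshold

auc = roc_auc_score(has_alvd, score)
sensitivity = (screen_positive & (has_alvd == 1)).sum() / (has_alvd == 1).sum()
specificity = (~screen_positive & (has_alvd == 0)).sum() / (has_alvd == 0).sum()
accuracy = (screen_positive == (has_alvd == 1)).mean()

# Relative risk of *future* dysfunction among baseline-normal patients, by screen result.
future_dysfunction = rng.integers(0, 2, size=1000)                        # hypothetical follow-up label
risk_pos = future_dysfunction[screen_positive & (has_alvd == 0)].mean()
risk_neg = future_dysfunction[~screen_positive & (has_alvd == 0)].mean()
print(auc, sensitivity, specificity, accuracy, risk_pos / risk_neg)
```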

 

In recent years, the analysis of large databases combined with deep learning has gradually come to play an important role in biomedical technology, with extensive research on the development and application of artificial intelligence for medical record analysis, image analysis, single-nucleotide-polymorphism analysis, and more. In the clinic, patients may undergo a variety of routine cardiovascular examinations and treatments, such as cardiac ultrasound, multi-lead ECG, cardiovascular and peripheral angiography, intravascular ultrasound and optical coherence tomography, and electrophysiology studies. By applying deep learning systems to these data, investigators hope not only to improve diagnostic rates but also to predict patient recovery more accurately and improve the quality of care in the near future.

 

The primary issue with using artificial intelligence in cardiology, or in any field of medicine for that matter, is the ethical questions it raises. Before entering practice, physicians and healthcare professionals swear the Hippocratic Oath, a promise to do their best for the welfare and betterment of their patients. Many physicians have argued that the use of artificial intelligence in medicine breaks the Hippocratic Oath, since patients are effectively left under the care of machines rather than of doctors. Furthermore, because machines can malfunction, patient safety is also on the line at all times. So while medical practitioners see the promise of artificial intelligence, they remain cautious about its use, safety, and appropriateness in medical practice.

 

The issues and challenges facing technological innovation in cardiology are being countered by current research aiming to make artificial intelligence easily accessible and available to all. With that in mind, various projects are under study. For example, work on wearable AI technology aims to develop a mechanism by which patients and doctors could easily access and monitor cardiac activity remotely; an ideal instrument for monitoring, wearable AI technology offers real-time updates, monitoring, and evaluation. Another direction for AI in cardiology is the use of technology to record and validate empirical data to further analyze symptomatology, biomarkers, and treatment effectiveness. With AI technology, researchers in cardiology aim to simplify and expand the field's scope of knowledge for better patient care and treatment outcomes.

 

References:

https://www.news-medical.net/health/Artificial-Intelligence-in-Cardiology.aspx

https://www.bhf.org.uk/informationsupport/heart-matters-magazine/research/artificial-intelligence

https://www.medicaldevice-network.com/news/heart-attack-artificial-intelligence/

https://www.nature.com/articles/s41569-019-0158-5

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5711980/

www.j-pcs.org/article.asp

http://www.onlinejacc.org/content/71/23/2668

http://www.scielo.br/pdf/ijcs/v30n3/2359-4802-ijcs-30-03-0187.pdf

https://www.escardio.org/The-ESC/Press-Office/Press-releases/How-artificial-intelligence-is-tackling-heart-disease-Find-out-at-ICNC-2019

https://clinicaltrials.gov/ct2/show/NCT03877614

https://www.europeanpharmaceuticalreview.com/news/82870/artificial-intelligence-ai-heart-disease/

https://www.frontiersin.org/research-topics/10067/current-and-future-role-of-artificial-intelligence-in-cardiac-imaging

https://www.sciencedaily.com/releases/2019/05/190513104505.htm

 

Read Full Post »


Artificial throat may give voice to the voiceless

Reporter: Irina Robu, PhD

Flexible sensors have attracted more and more attention as a fundamental component of anthropomorphic robot research, medical diagnosis and physical health monitoring. The fundamental mechanism of such a sensor is the triboelectric effect, which induces electrostatic charges on the surfaces between two different materials. Much like a plate capacitor, small mechanical disturbances change the geometry of the parallel-plate structure, and this fluctuation in capacitance produces the output current/voltage.
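As a back-of-the-envelope illustration of that plate-capacitor picture (simplified by assuming a constant bias voltage rather than the device's actual triboelectric operating mode, so the numbers are only indicative), the short sketch below shows how a small oscillation of the plate gap changes the capacitance and therefore produces an output current.

```python
# Illustrative parallel-plate model: i(t) = V * dC/dt with C(t) = eps0 * A / d(t).
import numpy as np

eps0, area, V = 8.854e-12, 1e-4, 1.0                 # F/m, a 1 cm^2 plate, 1 V bias (assumed values)
t = np.linspace(0, 1e-2, 1000)                       # a 10 ms window
gap = 100e-6 + 5e-6 * np.sin(2 * np.pi * 200 * t)    # 100 um gap vibrating at 200 Hz

capacitance = eps0 * area / gap                      # capacitance fluctuates with the gap
current = V * np.gradient(capacitance, t)            # the fluctuation appears as an output current
print(f"peak output current ~ {np.abs(current).max():.2e} A")
```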

Chinese scientists combining ultrasensitive motion detectors with thermal sound-emitting technology have invented an “artificial throat” that could enable speech in people with damaged or non-functioning vocal cords. Team members from a university in Beijing fabricated a homemade circuit board on which to build their dual-mode system, combining detection and sound-emitting technologies.

Graphene is a wonder material: it is the thinnest material known and among the strongest ever measured. A single one-atom-thick layer of graphite, graphene possesses a high Young’s modulus as well as superior thermal and electrical conductivities. Graphene-based sensors have attracted much attention in recent years owing to their variety of structures, unique sensing performance, room-temperature operation, and tremendous application prospects.

The skin-like device, a wearable artificial graphene throat (WAGT), feels much like a temporary tattoo to the wearer. To make the device functional and flexible, the scientists laser-scribed graphene onto a thin sheet of polyvinyl alcohol film. The device is about the size of two thumbnails side by side; water is used to attach the film to the skin over the volunteer’s throat, and electrodes connect it to a small armband containing a circuit board, microcomputer, power amplifier and decoder. During development, the system transformed subtle throat movements into simple sounds such as “OK” and “No.” In trials, volunteers imitated the throat motions of speech and the device converted these movements into single-syllable words.
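The paper's actual decoding scheme is not described here, so the sketch below is only a generic illustration of the final step, mapping short windows of throat-motion signal to a tiny vocabulary with a simple classifier; the signals, features, and vocabulary are all stand-in assumptions.

```python
# Generic signal-to-word sketch (not the WAGT authors' method): classify signal windows
# into a small set of single-syllable outputs using crude hand-crafted features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

VOCAB = ["OK", "No"]                                  # outputs mentioned in the article

def features(window):
    """Amplitude range, energy, and dominant frequency bin of one signal window."""
    spectrum = np.abs(np.fft.rfft(window))
    return [window.max() - window.min(), float(np.sum(window ** 2)), int(spectrum.argmax())]

rng = np.random.default_rng(0)
# Synthetic training windows standing in for recorded throat-motion signals.
X = np.array([features(rng.normal(scale=1 + label, size=256)) for label in (0, 1) for _ in range(50)])
y = np.array([label for label in (0, 1) for _ in range(50)])

classifier = KNeighborsClassifier(n_neighbors=3).fit(X, y)
new_window = rng.normal(scale=2, size=256)
print("decoded word:", VOCAB[int(classifier.predict([features(new_window)])[0])])
```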

It is believed that, with training, people without a functioning voice could learn to generate signals with their throats that the device would translate into speech.

SOURCE
https://www.aiin.healthcare/topics/robotics/artificial-throat-may-give-voice-voiceless?utm_source=newsletter

Read Full Post »


Multiple Barriers Identified Which May Hamper Use of Artificial Intelligence in the Clinical Setting

Reporter: Stephen J. Williams, PhD.

From the journal Science, 21 Jun 2019: Vol. 364, Issue 6446, pp. 1119-1120

By Jennifer Couzin-Frankel

 

In a commentary article entitled “Medicine contends with how to use artificial intelligence,” Jennifer Couzin-Frankel discusses the barriers to the efficient and reliable adoption of artificial intelligence and machine learning in the hospital setting. In summary, these barriers result from a lack of reproducibility across hospitals. For instance, a major concern among radiologists is that AI software developed to read images and magnify small changes, such as in cardiac imaging, is built within one hospital and may not reflect the equipment or standard practices used in other hospital systems. To address this issue, US scientists and government regulators recently issued guidance describing how to convert research-based AI into improved medical images, publishing the guidance in the Journal of the American College of Radiology. The group suggested greater collaboration among the relevant parties in developing AI practices, including software engineers, scientists, clinicians, and radiologists.

As thousands of images are fed into AI algorithms, according to neurosurgeon Eric Oermann at Mount Sinai Hospital, the signals they recognize can have less to do with disease than with other patient characteristics, the brand of MRI machine, or even how a scanner is angled. For example, Oermann and colleagues at Mount Sinai developed an AI algorithm to detect spots on lung scans indicative of pneumonia; when tested on a group of new patients, the algorithm detected pneumonia with 93% accuracy.

However, when the group tested the algorithm on tens of thousands of scans from other hospitals, including the NIH, the success rate fell to 73-80%, indicating bias within the training set: in other words, there was something unique about the way Mount Sinai acquires its scans relative to other hospitals. Indeed, many of the patients Mount Sinai sees are too sick to get out of bed, so radiologists use portable scanners, which generate different images than standalone scanners.

The results were published in Plos Medicine as seen below:

PLoS Med. 2018 Nov 6;15(11):e1002683. doi: 10.1371/journal.pmed.1002683. eCollection 2018 Nov.

Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study.

Zech JR, Badgeley MA, Liu M, Costa AB, Titano JJ, Oermann EK.

Abstract

BACKGROUND:

There is interest in using convolutional neural networks (CNNs) to analyze medical imaging to provide computer-aided diagnosis (CAD). Recent work has suggested that image classification CNNs may not generalize to new data as well as previously believed. We assessed how well CNNs generalized across three hospital systems for a simulated pneumonia screening task.

METHODS AND FINDINGS:

A cross-sectional design with multiple model training cohorts was used to evaluate model generalizability to external sites using split-sample validation. A total of 158,323 chest radiographs were drawn from three institutions: National Institutes of Health Clinical Center (NIH; 112,120 from 30,805 patients), Mount Sinai Hospital (MSH; 42,396 from 12,904 patients), and Indiana University Network for Patient Care (IU; 3,807 from 3,683 patients). These patient populations had an age mean (SD) of 46.9 years (16.6), 63.2 years (16.5), and 49.6 years (17) with a female percentage of 43.5%, 44.8%, and 57.3%, respectively. We assessed individual models using the area under the receiver operating characteristic curve (AUC) for radiographic findings consistent with pneumonia and compared performance on different test sets with DeLong’s test. The prevalence of pneumonia was high enough at MSH (34.2%) relative to NIH and IU (1.2% and 1.0%) that merely sorting by hospital system achieved an AUC of 0.861 (95% CI 0.855-0.866) on the joint MSH-NIH dataset. Models trained on data from either NIH or MSH had equivalent performance on IU (P values 0.580 and 0.273, respectively) and inferior performance on data from each other relative to an internal test set (i.e., new data from within the hospital system used for training data; P values both <0.001). The highest internal performance was achieved by combining training and test data from MSH and NIH (AUC 0.931, 95% CI 0.927-0.936), but this model demonstrated significantly lower external performance at IU (AUC 0.815, 95% CI 0.745-0.885, P = 0.001). To test the effect of pooling data from sites with disparate pneumonia prevalence, we used stratified subsampling to generate MSH-NIH cohorts that only differed in disease prevalence between training data sites. When both training data sites had the same pneumonia prevalence, the model performed consistently on external IU data (P = 0.88). When a 10-fold difference in pneumonia rate was introduced between sites, internal test performance improved compared to the balanced model (10× MSH risk P < 0.001; 10× NIH P = 0.002), but this outperformance failed to generalize to IU (MSH 10× P < 0.001; NIH 10× P = 0.027). CNNs were able to directly detect hospital system of a radiograph for 99.95% NIH (22,050/22,062) and 99.98% MSH (8,386/8,388) radiographs. The primary limitation of our approach and the available public data is that we cannot fully assess what other factors might be contributing to hospital system-specific biases.

CONCLUSION:

Pneumonia-screening CNNs achieved better internal than external performance in 3 out of 5 natural comparisons. When models were trained on pooled data from sites with different pneumonia prevalence, they performed better on new pooled data from these sites but not on external data. CNNs robustly identified hospital system and department within a hospital, which can have large differences in disease burden and may confound predictions.

PMID: 30399157 PMCID: PMC6219764 DOI: 10.1371/journal.pmed.1002683
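As a schematic of the split-sample evaluation pattern described in the abstract (train within one hospital system, then compare performance on an internal test split versus an external site), the sketch below uses synthetic tabular features in place of radiographs and logistic regression in place of a CNN; the synthetic numbers will not reproduce the paper's internal-external gap, the point is only the evaluation workflow.

```python
# Sketch of internal vs external validation across "sites" with different disease
# prevalence and acquisition shifts (synthetic data, simple linear model).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def make_site(n, pneumonia_rate, site_shift, seed):
    """Each 'site' gets its own feature offset, mimicking scanner/protocol differences."""
    rng = np.random.default_rng(seed)
    y = (rng.random(n) < pneumonia_rate).astype(int)
    X = rng.normal(size=(n, 20)) + 0.8 * y[:, None] + site_shift
    return X, y

X_train_site, y_train_site = make_site(5000, 0.34, site_shift=0.5, seed=0)  # high-prevalence training site
X_external, y_external = make_site(3000, 0.01, site_shift=0.0, seed=1)      # external, low-prevalence site

X_tr, X_internal, y_tr, y_internal = train_test_split(X_train_site, y_train_site,
                                                      test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("internal AUC:", roc_auc_score(y_internal, model.predict_proba(X_internal)[:, 1]))
print("external AUC:", roc_auc_score(y_external, model.predict_proba(X_external)[:, 1]))
```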


 

 

Surprisingly, few researchers have begun to use data obtained from different hospitals. The FDA has issued some guidance on the matter, but it treats “locked,” unchanging AI software as a medical device. However, the agency has just announced that it is developing a framework for regulating more cutting-edge software that continues to learn over time.

Still, the key point is that collaboration across multiple health systems in various countries may be necessary to develop AI software that can be used in multiple clinical settings. Otherwise, each hospital will need to develop its own software, usable only on its own system, which would also create a regulatory headache for the FDA.

 

Other articles on Artificial Intelligence in Clinical Medicine on this Open Access Journal include:

Top 12 Artificial Intelligence Innovations Disrupting Healthcare by 2020

The launch of SCAI – Interview with Gérard Biau, director of the Sorbonne Center for Artificial Intelligence (SCAI).

Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence #AI: Realizing Precision Medicine One Patient at a Time

50 Contemporary Artificial Intelligence Leading Experts and Researchers

 

Read Full Post »


Single-cell RNA-seq helps in finding intra-tumoral heterogeneity in pancreatic cancer

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

 

Pancreatic cancer is a significant cause of cancer mortality; therefore, the development of early diagnostic strategies and effective treatment is essential. Improvements in imaging technology, as well as the use of biomarkers, are changing the way that pancreatic cancer is diagnosed and staged. Although progress in treatment for pancreatic cancer has been incremental, development of combination therapies involving both chemotherapeutic and biologic agents is ongoing.

 

Cancer is an evolutionary disease, containing the hallmarks of an asexually reproducing unicellular organism subject to evolutionary paradigms. Pancreatic ductal adenocarcinoma (PDAC) is a particularly robust example of this phenomenon. Genomic features indicate that pancreatic cancer cells are selected for fitness advantages when encountering the geographic and resource-depleted constraints of the microenvironment. Phenotypic adaptations to these pressures help disseminated cells to survive in secondary sites, a major clinical problem for patients with this disease.

 

The immune system varies in cell types, states, and locations. The complex networks, interactions, and responses of immune cells produce diverse cellular ecosystems composed of multiple cell types, accompanied by genetic diversity in antigen receptors. Within this ecosystem, innate and adaptive immune cells maintain and protect tissue function, integrity, and homeostasis upon changes in functional demands and diverse insults. Characterizing this inherent complexity requires studies at single-cell resolution. Recent advances such as massively parallel single-cell RNA sequencing and sophisticated computational methods are catalyzing a revolution in our understanding of immunology.

 

PDAC is the most common type of pancreatic cancer, featuring high intra-tumoral heterogeneity and poor prognosis. In the present study, to comprehensively delineate PDAC intra-tumoral heterogeneity and the underlying mechanisms of PDAC progression, single-cell RNA-seq (scRNA-seq) was employed to acquire a transcriptomic atlas of 57,530 individual pancreatic cells from primary PDAC tumors and control pancreases. Diverse malignant and stromal cell types, including two ductal subtypes with abnormal and malignant gene expression profiles respectively, were identified in PDAC.

 

The researchers found that the heterogeneous malignant subtype was composed of several subpopulations with differing proliferative and migratory potentials. Cell trajectory analysis revealed that components of multiple tumor-related pathways and transcription factors (TFs) were differentially expressed along PDAC progression. Furthermore, a subset of ductal cells with unique proliferative features was found to be associated with an inactivation state in tumor-infiltrating T cells, providing novel markers for the prediction of antitumor immune response. Together, the findings provide a valuable resource for deciphering the intra-tumoral heterogeneity of PDAC and uncover a connection between tumor-intrinsic transcriptional state and T cell activation, suggesting potential biomarkers for anticancer treatments such as targeted therapy and immunotherapy.
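The authors' full analysis pipeline is not reproduced in this summary; as a generic sketch of how such subpopulations, their marker genes, and a coarse trajectory are typically resolved from a combined count matrix, the Scanpy-based example below (with a hypothetical input file name) covers the standard steps.

```python
# Generic scRNA-seq clustering pass with Scanpy (not the authors' pipeline).
import scanpy as sc

adata = sc.read_h5ad("pdac_and_control_cells.h5ad")   # hypothetical combined count matrix

sc.pp.filter_cells(adata, min_genes=200)              # basic quality filtering
sc.pp.normalize_total(adata, target_sum=1e4)          # depth normalization
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable].copy()

sc.pp.scale(adata)
sc.tl.pca(adata)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata, resolution=1.0)                   # candidate subpopulations (e.g., ductal subtypes)
sc.tl.rank_genes_groups(adata, "leiden")              # marker genes per cluster
sc.tl.paga(adata)                                     # coarse connectivity/trajectory between clusters
```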

 

References:

https://www.ncbi.nlm.nih.gov/pubmed/31273297

https://www.ncbi.nlm.nih.gov/pubmed/21491194

https://www.ncbi.nlm.nih.gov/pubmed/27444064

https://www.ncbi.nlm.nih.gov/pubmed/28983043

https://www.ncbi.nlm.nih.gov/pubmed/24976721

https://www.ncbi.nlm.nih.gov/pubmed/27693023

 

Read Full Post »


scPopCorn: A New Computational Method for Subpopulation Detection and their Comparative Analysis Across Single-Cell Experiments

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

 

Present day technological advances have facilitated unprecedented opportunities for studying biological systems at single-cell level resolution. For example, single-cell RNA sequencing (scRNA-seq) enables the measurement of transcriptomic information of thousands of individual cells in one experiment. Analyses of such data provide information that was not accessible using bulk sequencing, which can only assess average properties of cell populations. Single-cell measurements, however, can capture the heterogeneity of a population of cells. In particular, single-cell studies allow for the identification of novel cell types, states, and dynamics.

 

One of the most prominent uses of the scRNA-seq technology is the identification of subpopulations of cells present in a sample and comparing such subpopulations across samples. Such information is crucial for understanding the heterogeneity of cells in a sample and for comparative analysis of samples from different conditions, tissues, and species. A frequently used approach is to cluster every dataset separately, inspect marker genes for each cluster, and compare these clusters in an attempt to determine which cell types were shared between samples. This approach, however, relies on the existence of predefined or clearly identifiable marker genes and their consistent measurement across subpopulations.
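To make that frequently used cluster-then-compare strategy concrete (this is the baseline scPopCorn is designed to improve on, not scPopCorn itself), the sketch below clusters two hypothetical samples separately with Scanpy and then matches clusters by marker-gene overlap; file names and parameter choices are assumptions.

```python
# Naive baseline: cluster each sample separately, then match clusters by shared markers.
import scanpy as sc

def cluster_and_markers(path, n_markers=20):
    adata = sc.read_h5ad(path)                        # hypothetical per-sample count matrix
    sc.pp.normalize_total(adata, target_sum=1e4)
    sc.pp.log1p(adata)
    sc.pp.pca(adata)
    sc.pp.neighbors(adata)
    sc.tl.leiden(adata)
    sc.tl.rank_genes_groups(adata, "leiden", n_genes=n_markers)
    names = adata.uns["rank_genes_groups"]["names"]
    return {cl: set(names[cl]) for cl in adata.obs["leiden"].cat.categories}

markers_a = cluster_and_markers("sample_A.h5ad")
markers_b = cluster_and_markers("sample_B.h5ad")

# Pair clusters across samples by marker-gene overlap (Jaccard index).
for cluster_a, genes_a in markers_a.items():
    best = max(markers_b, key=lambda cb: len(genes_a & markers_b[cb]) / len(genes_a | markers_b[cb]))
    print(f"sample A cluster {cluster_a} <-> sample B cluster {best}")
```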

 

An alternative is to computationally align the datasets first; although the aligned data can then be clustered to reveal subpopulations and their correspondence, solving the subpopulation-mapping problem by performing global alignment first and clustering second overlooks the original information about the subpopulations existing in each experiment. In contrast, an approach addressing this problem directly might represent a more suitable solution. With this in mind, the researchers developed a computational method, single-cell subpopulations comparison (scPopCorn), that allows for comparative analysis of two or more single-cell populations.

 

The performance of scPopCorn was tested in three distinct settings. First, its potential was demonstrated in identifying and aligning subpopulations in human and mouse pancreatic single-cell data. Next, scPopCorn was applied to the task of aligning biological replicates of mouse kidney single-cell data, where it achieved better performance than previously published tools. Finally, it was applied to compare populations of cells from cancerous and healthy brain tissues, revealing the relation of neoplastic cells to neural cells and astrocytes. As a result of this integrative approach, scPopCorn provides a powerful tool for comparative analysis of single-cell populations.

 

scPopCorn is thus a computational method for identifying subpopulations of cells present within individual single-cell experiments and mapping these subpopulations across experiments. Unlike other approaches, scPopCorn performs the tasks of population identification and mapping simultaneously by optimizing a function that combines both objectives. When applied to complex biological data, scPopCorn outperforms previous methods. It should be kept in mind, however, that scPopCorn assumes the input single-cell data consist of separable subpopulations; it is not designed to perform comparative analysis of single-cell trajectory datasets, which do not fulfill this constraint.

 

Several innovations developed in this work contributed to the performance of scPopCorn. First, unifying the above-mentioned tasks into a single problem statement allowed the signal from different experiments to be integrated while subpopulations were identified within each experiment. Such integration helps reduce biological and experimental noise. The researchers believe that the ideas introduced in scPopCorn not only enabled the design of a highly accurate subpopulation identification and mapping approach, but can also provide a stepping stone for other tools to interrogate the relationships between single-cell experiments.

 

References:

https://www.sciencedirect.com/science/article/pii/S2405471219301887

https://www.tandfonline.com/doi/abs/10.1080/23307706.2017.1397554

https://ieeexplore.ieee.org/abstract/document/4031383

https://genomebiology.biomedcentral.com/articles/10.1186/s13059-016-0927-y

https://www.sciencedirect.com/science/article/pii/S2405471216302666

 

 

Read Full Post »
