
Archive for the ‘Artificial Intelligence – General’ Category


Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes

Reporter: Amandeep Kaur, B.Sc., M.Sc.

A recent study reports the development of an AI algorithm that predicts the onset of type 2 diabetes up to five years in advance using routinely collected medical data. The researchers described their model as distinctive because it was designed to perform assessments at the population level.

First author Mathieu Ravaut, M.Sc., of the University of Toronto, and his team members stated that “The main purpose of our model was to inform population health planning and management for the prevention of diabetes that incorporates health equity. It was not our goal for this model to be applied in the context of individual patient care.”

The research group collected data from 2006 to 2016 on approximately 2.1 million patients treated within the same healthcare system in Ontario, Canada. Although the patients all came from the same region, the authors highlighted that Ontario encompasses a large and diverse population.

The newly developed algorithm was trained on data from approximately 1.6 million patients, validated on data from about 243,000 patients, and tested on data from more than 236,000 patients. The input data covered each patient’s medical history from the previous two years: prescriptions, medications, laboratory tests, and demographic information.

When predicting the onset of type 2 diabetes within five years, the model reached a test area under the receiver operating characteristic curve (AUC) of 80.26%.
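The train/validate/test workflow described above can be sketched in a few lines. This is a toy illustration on synthetic data with an off-the-shelf classifier; the feature matrix, label, and model choice are stand-ins, not the study's actual administrative features or architecture.

```python
# Sketch of a three-way split and AUROC evaluation (synthetic stand-in data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))                         # stand-in for administrative features
y = (X[:, 0] + rng.normal(size=5000) > 1).astype(int)   # stand-in diabetes-onset label

# Split into train / validation / test cohorts, mirroring the study design
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.3, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
val_auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])    # used for tuning
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])  # reported metric
print(f"validation AUROC={val_auc:.3f}  test AUROC={test_auc:.3f}")
```

Only the held-out test AUROC is comparable to the study's reported figure; the validation set is reserved for model selection.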

The authors reported that “Our model showed consistent calibration across sex, immigration status, racial/ethnic and material deprivation, and a low to moderate number of events in the health care history of the patient. The cohort was representative of the whole population of Ontario, which is itself among the most diverse in the world. The model was well calibrated, and its discrimination, although with a slightly different end goal, was competitive with results reported in the literature for other machine learning–based studies that used more granular clinical data from electronic medical records without any modifications to the original test set distribution.”

This model could potentially improve the healthcare systems of countries equipped with thorough administrative databases by directing attention toward specific cohorts at risk of adverse outcomes.

The research group stated that “Because our machine learning model included social determinants of health that are known to contribute to diabetes risk, our population-wide approach to risk assessment may represent a tool for addressing health disparities.”

Sources:

https://www.cardiovascularbusiness.com/topics/prevention-risk-reduction/new-ai-model-healthcare-data-predict-type-2-diabetes?utm_source=newsletter

Reference:

Ravaut M, Harish V, Sadeghi H, et al. Development and Validation of a Machine Learning Model Using Administrative Health Data to Predict Onset of Type 2 Diabetes. JAMA Netw Open. 2021;4(5):e2111315. doi:10.1001/jamanetworkopen.2021.11315 https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2780137

Other related articles were published in this Open Access Online Scientific Journal, including the following:

AI in Drug Discovery: Data Science and Core Biology @Merck &Co, Inc., @GNS Healthcare, @QuartzBio, @Benevolent AI and Nuritas

Reporters: Aviva Lev-Ari, PhD, RN and Irina Robu, PhD

https://pharmaceuticalintelligence.com/2020/08/27/ai-in-drug-discovery-data-science-and-core-biology-merck-co-inc-gns-healthcare-quartzbio-benevolent-ai-and-nuritas/

Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare

Curator: Stephen J. Williams, Ph.D.

https://pharmaceuticalintelligence.com/2018/12/10/can-blockchain-technology-and-artificial-intelligence-cure-what-ails-biomedical-research-and-healthcare/

HealthCare focused AI Startups from the 100 Companies Leading the Way in A.I. Globally

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/01/18/healthcare-focused-ai-startups-from-the-100-companies-leading-the-way-in-a-i-globally/

AI in Psychiatric Treatment – Using Machine Learning to Increase Treatment Efficacy in Mental Health

Reporter: Aviva Lev- Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/06/04/ai-in-psychiatric-treatment-using-machine-learning-to-increase-treatment-efficacy-in-mental-health/

Vyasa Analytics Demos Deep Learning Software for Life Sciences at Bio-IT World 2018 – Vyasa’s booth (#632)

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/05/10/vyasa-analytics-demos-deep-learning-software-for-life-sciences-at-bio-it-world-2018-vyasas-booth-632/

New Diabetes Treatment Using Smart Artificial Beta Cells

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2017/11/08/new-diabetes-treatment-using-smart-artificial-beta-cells/

Read Full Post »


Renal tumor macrophages linked to recurrence are identified using single-cell protein activity analysis

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

When a malignancy returns after a period of remission, it is called a cancer recurrence. This can happen weeks, months, or even years after the initial or primary cancer has been treated. The likelihood of recurrence depends on the type of primary cancer. Cancer can recur because small clusters of cancer cells may remain in the body after treatment; over time, these cells may multiply and grow large enough to cause symptoms or be detected as cancer. When and where a cancer recurs depends on its type, and some malignancies have a predictable recurrence pattern.

Even if the primary cancer recurs in a different part of the body, the recurrent cancer is named for the site where it first appeared. If breast cancer recurs distantly in the liver, for example, it is still referred to as breast cancer rather than liver cancer; doctors call it metastatic breast cancer. Despite treatment, many people with kidney cancer eventually develop recurrence and incurable metastatic disease.

The most frequent type of kidney cancer is renal cell carcinoma (RCC), which accounts for over 90% of all kidney malignancies. The appearance of the cancer cells under a microscope distinguishes the various forms of RCC. Knowing the RCC subtype can help the doctor assess whether the cancer is caused by an inherited genetic condition and choose the best treatment option. The three most prevalent RCC subtypes are as follows:

  • Clear cell RCC
  • Papillary RCC
  • Chromophobe RCC

Clear cell RCC (ccRCC) is the most prevalent subtype of RCC. The cells appear clear or pale under the microscope, hence the name clear cell (or conventional) RCC. Around 70% of people with renal cell cancer have ccRCC. These tumors can grow slowly or rapidly. According to the American Society of Clinical Oncology (ASCO), clear cell RCC responds favorably to treatments like immunotherapy and therapies that target specific proteins or genes.

Researchers at Columbia University’s Vagelos College of Physicians and Surgeons have developed a novel method for identifying which patients are most likely to have cancer relapse following surgery.

The study

Their findings are detailed in a study published in the journal Cell entitled, “Single-Cell Protein Activity Analysis Identifies Recurrence-Associated Renal Tumor Macrophages.” The researchers show that the presence of a previously unknown type of immune cell in kidney tumors can predict who will have cancer recurrence.

According to co-senior author Charles Drake, MD, PhD, adjunct professor of medicine at Columbia University Vagelos College of Physicians and Surgeons and the Herbert Irving Comprehensive Cancer Center,

the findings imply that the existence of these cells could be used to identify individuals at high risk of disease recurrence following surgery who may be candidates for more aggressive therapy.

As Aleksandar Obradovic, an MD/PhD student at Columbia University Vagelos College of Physicians and Surgeons and the study’s co-first author, put it,

it’s like looking down over Manhattan and seeing that enormous numbers of people from all over travel into the city every morning. We need deeper details to understand how these different commuters engage with Manhattan residents: who are they, what do they enjoy, where do they go, and what are they doing?

To learn more about the immune cells that invade kidney cancers, the researchers employed single-cell RNA sequencing. Obradovic remarked,

In many investigations, single-cell RNA sequencing misses up to 90% of gene activity, a phenomenon known as gene dropout.

The researchers next tackled gene dropout by designing a prediction algorithm that can identify which genes are active based on the expression of other genes in the same family. “Even when a lot of data is absent owing to dropout, we have enough evidence to estimate the activity of the upstream regulator gene,” Obradovic explained. “It’s like when playing ‘Wheel of Fortune,’ because I can generally figure out what’s on the board even if most of the letters are missing.”

The meta-VIPER algorithm is based on the VIPER algorithm, which was developed in Andrea Califano’s group. Califano is the head of the Herbert Irving Comprehensive Cancer Center’s JP Sulzberger Columbia Genome Center and the Clyde and Helen Wu Professor of Chemistry and Systems Biology. The researchers believe that by including meta-VIPER, they can reliably detect the activity of 70% to 80% of all regulatory genes in each cell, overcoming cell-to-cell dropout.
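The core intuition behind VIPER-style inference can be sketched simply: a regulator's activity in each cell is estimated from the pooled expression of its known target genes, so the estimate survives even when individual transcripts drop out. This is a conceptual toy, not the actual VIPER/meta-VIPER algorithm, and the regulon indices and dropout rate are invented.

```python
# Toy illustration: estimate regulator activity from target-gene expression
# despite heavy dropout in single-cell data.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_genes = 100, 50
expr = rng.poisson(2.0, size=(n_cells, n_genes)).astype(float)
expr[rng.random(expr.shape) < 0.6] = 0.0    # simulate ~60% gene dropout

# z-score each gene across cells (guard against all-zero genes)
mu, sd = expr.mean(axis=0), expr.std(axis=0)
sd[sd == 0] = 1.0
z = (expr - mu) / sd

regulon = [3, 7, 12, 25, 40]                # hypothetical targets of one regulator
activity = z[:, regulon].mean(axis=1)       # per-cell activity estimate
print(activity.shape)                       # one activity score per cell
```

Because the estimate averages over many targets, it remains informative even when any single target (or the regulator itself) is zeroed out by dropout.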

Using these two methods, the researchers were able to examine 200,000 tumor cells and normal cells in surrounding tissues from eleven patients with ccRCC who underwent surgery at Columbia’s urology department.

The researchers discovered a unique subpopulation of immune cells that is found only in tumors and is linked to disease relapse after initial treatment. The VIPER analysis identified the top genes that control the activity of these immune cells. This “signature” was validated in a second set of patient data obtained through a collaboration with Vanderbilt University researchers; in this second cohort of over 150 patients, the signature strongly predicted recurrence.

“These findings raise the intriguing possibility that these macrophages are not only markers of more risky disease, but may also be responsible for the disease’s recurrence and progression,” Obradovic said, adding that targeting these cells could improve clinical outcomes.

Drake said,

Our research shows that when the two techniques are combined, they are extremely effective at characterizing cells within a tumor and in surrounding tissues, and they should have a wide range of applications, even beyond cancer research.

Main Source

Single-cell protein activity analysis identifies recurrence-associated renal tumor macrophages

https://www.cell.com/cell/fulltext/S0092-8674(21)00573-0

Other Related Articles published in this Open Access Online Scientific Journal include the following:

Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer driver genes

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

https://pharmaceuticalintelligence.com/2021/05/04/machine-learning-ml-in-cancer-prognosis-prediction-helps-the-researcher-to-identify-multiple-known-as-well-as-candidate-cancer-diver-genes/

Renal (Kidney) Cancer: Connections in Metabolism at Krebs cycle  and Histone Modulation

Curator: Demet Sag, PhD, CRA, GCP

https://pharmaceuticalintelligence.com/2015/10/14/renal-kidney-cancer-connections-in-metabolism-at-krebs-cycle-through-histone-modulation/

Artificial Intelligence: Genomics & Cancer

https://pharmaceuticalintelligence.com/ai-in-genomics-cancer/

Bioinformatic Tools for Cancer Mutational Analysis: COSMIC and Beyond

Curator: Stephen J. Williams, Ph.D.

https://pharmaceuticalintelligence.com/2015/12/02/bioinformatic-tools-for-cancer-mutational-analysis-cosmic-and-beyond-2/

Deep-learning AI algorithm shines new light on mutations in once obscure areas of the genome

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2014/12/24/deep-learning-ai-algorithm-shines-new-light-on-mutations-in-once-obscure-areas-of-the-genome/

Premalata Pati, PhD, PostDoc in Biological Sciences, Medical Text Analysis with Machine Learning

https://pharmaceuticalintelligence.com/2021-medical-text-analysis-nlp/premalata-pati-phd-postdoc-in-pharmaceutical-sciences-medical-text-analysis-with-machine-learning/

Read Full Post »


Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer driver genes

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

Seeing “through” the cancer with the power of data analysis — possible with the help of artificial intelligence. Credit: MPI f. Molecular Genetics/ Ella Maru Studio
Image Source: https://medicalxpress.com/news/2021-04-sum-mutations-cancer-genes-machine.html

Cancer is a heterogeneous disease comprising many different subtypes. Early diagnosis and prognosis of a cancer type have become a necessity in cancer research, as they can facilitate the subsequent clinical management of patients. The importance of classifying cancer patients into high- or low-risk groups has led many research teams, from the biomedical and bioinformatics fields, to study the application of machine learning (ML) and artificial intelligence (AI) methods. These techniques are now being used to model the progression and treatment of cancer through new predictive algorithms.

In the majority of human cancers, heritable loss of gene function through cell division may be mediated as often by epigenetic as by genetic abnormalities. Epigenetic modification occurs through a process of interrelated changes in CpG island methylation and histone modifications. Candidate-gene studies of cell cycle, growth-regulatory, and apoptotic genes have shown epigenetic modification associated with loss of the cognate proteins in sporadic pituitary tumors.

On 11 November 2020, researchers from the University of California, Irvine, advanced the understanding of epigenetic mechanisms in tumorigenesis and revealed a previously undetected repertoire of cancer driver genes. The study was published in Science Advances.

Using a new prediction algorithm called DORGE (Discovery of Oncogenes and tumor suppressor genes using Genetic and Epigenetic features), which integrates the most comprehensive collection of genetic and epigenetic data to date, the researchers were able to identify novel tumor suppressor genes (TSGs) and oncogenes (OGs), particularly those with rare mutations.

The senior author Wei Li, Ph.D., the Grace B. Bell chair and professor of bioinformatics in the Department of Biological Chemistry at the UCI School of Medicine said

Existing bioinformatics algorithms do not sufficiently leverage epigenetic features to predict cancer driver genes, even though epigenetic alterations are known to be associated with cancer driver genes.

The Study

This study demonstrated that the cancer driver genes predicted by DORGE included both known driver genes and novel driver genes not reported in the current literature. In addition, the researchers found that novel dual-functional genes, which DORGE predicted as both TSGs and OGs, are highly enriched at hubs in protein-protein interaction (PPI) and drug/compound-gene networks.

Prof. Li explained that the DORGE algorithm successfully leveraged public data to discover the genetic and epigenetic alterations that play significant roles in cancer driver gene dysregulation, and that it could be instrumental in improving cancer prevention, diagnosis, and treatment efforts in the future.
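The general shape of a DORGE-style approach, classifying genes as putative drivers from combined genetic and epigenetic features, can be sketched as below. This is not the published DORGE model; the feature names, labels, and classifier are hypothetical stand-ins.

```python
# Illustrative sketch: rank candidate driver genes from gene-level
# genetic + epigenetic features (all data synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_genes = 1000
features = np.column_stack([
    rng.random(n_genes),   # e.g. nonsense-mutation fraction   (genetic, hypothetical)
    rng.random(n_genes),   # e.g. missense-mutation entropy    (genetic, hypothetical)
    rng.random(n_genes),   # e.g. promoter CpG methylation     (epigenetic, hypothetical)
    rng.random(n_genes),   # e.g. H3K4me3 peak breadth         (epigenetic, hypothetical)
])
# Synthetic "known driver" labels correlated with two of the features
labels = (features[:, 0] + features[:, 3] + rng.normal(0, 0.3, n_genes) > 1.2).astype(int)

clf = LogisticRegression().fit(features, labels)
scores = clf.predict_proba(features)[:, 1]    # per-gene driver score
candidates = np.argsort(scores)[::-1][:20]    # top-ranked candidate drivers
print(len(candidates))
```

The key design point echoed from the study is simply that epigenetic columns sit alongside genetic ones in the same feature matrix, so the classifier can exploit both.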

Another algorithmic approach to identifying cancer genes with machine learning comes from a team of researchers at the Max Planck Institute for Molecular Genetics (MPIMG) in Berlin and the Institute of Computational Biology of Helmholtz Zentrum München. By combining a wide variety of data and analyzing it with artificial intelligence, the researchers identified numerous cancer genes. They named the algorithm EMOGI (Explainable Multi-Omics Graph Integration). EMOGI can predict which genes cause cancer even if their DNA sequence is not changed, opening new perspectives for targeted cancer therapy in personalized medicine and for the development of biomarkers. The research was published in Nature Machine Intelligence on 12 April 2021.

In cancer, cells get out of control. They proliferate and push their way into tissues, destroying organs and thereby impairing essential vital functions. This unrestricted growth is usually induced by an accumulation of DNA changes in cancer genes—i.e. mutations in these genes that govern the development of the cell. But some cancers have only very few mutated genes, which means that other causes lead to the disease in these cases.

The Study

Overlap of EMOGI’s positive predictions with known cancer genes (KCGs) and candidate cancer genes
Image Source: https://static-content.springer.com/esm/art%3A10.1038%2Fs42256-021-00325-y/MediaObjects/42256_2021_325_MOESM1_ESM.pdf

The study is presented under four main headings:

  • Additional targets for personalized medicine
  • Better results by combination
  • In search of hints for further studies
  • Suitable for other types of diseases as well

The team, headed by Annalisa Marsico, used the algorithm to identify 165 previously unknown cancer genes. The sequences of these genes are not necessarily altered; apparently, dysregulation of these genes alone can lead to cancer. All of the newly identified genes interact closely with well-known cancer genes and proved essential for the survival of tumor cells in cell culture experiments. EMOGI can also explain the relationships in the cell’s machinery that make a gene a cancer gene. The software integrates tens of thousands of data sets generated from patient samples, containing information about DNA methylation, the activity of individual genes, and the interactions of proteins within cellular pathways, in addition to sequence data with mutations. From these data, a deep-learning algorithm detects the patterns and molecular principles that lead to the development of cancer.

Marsico says

Ideally, we obtain a complete picture of all cancer genes at some point, which can have a different impact on cancer progression for different patients

Unlike traditional cancer treatments such as chemotherapy, personalized treatments are tailored to the exact type of tumor. “The goal is to choose the best treatment for each patient, the most effective treatment with the fewest side effects. In addition, molecular properties can be used to identify cancers that are already in the early stages.”

Roman Schulte-Sasse, a doctoral student on Marsico’s team and the first author of the publication says

To date, most studies have focused on pathogenic changes in the sequence, or cell blueprint. At the same time, it has recently become clear that epigenetic perturbation or dysregulation of gene activity can also lead to cancer.

For this reason, the researchers merged sequence data, which reflects failures in the blueprint, with information representing events in cells. The scientists first confirmed that mutations or duplications of genomic segments were the leading cause of cancer. In a second step, they identified gene candidates that are less directly related to the genes known to cause cancer.

Clues for future directions

The researchers’ new program adds a considerable number of new entries to the list of suspected cancer genes, which has grown to between 700 and 1,000 in recent years. Only through a combination of bioinformatics analysis and the newest artificial intelligence (AI) methods were the researchers able to track down the hidden genes.

Schulte-Sasse says, “The interactions of proteins and genes can be mapped as a mathematical network, known as a graph.” He explained this with the example of a railroad network: each station corresponds to a protein or gene, and each interaction between them is a train connection. With the help of deep learning, the very algorithms that have helped artificial intelligence make a breakthrough in recent years, the researchers were able to discover even those train connections that had previously gone unnoticed. Schulte-Sasse had the computer analyze tens of thousands of network maps from 16 different cancer types, each containing between 12,000 and 19,000 data points.
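The graph-convolution idea behind this analogy can be sketched in a few lines: each gene node mixes its neighbours' multi-omics feature vectors into its own representation. This is a minimal numerical illustration of one GCN layer, not the published EMOGI code; the tiny graph and random weights are invented.

```python
# One graph-convolution layer over a toy gene-interaction graph:
# normalized adjacency @ features @ weights, then ReLU.
import numpy as np

rng = np.random.default_rng(3)
n_nodes, n_feats = 6, 4                      # tiny toy gene-interaction graph
A = np.zeros((n_nodes, n_nodes))
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]
for i, j in edges:                           # undirected "train connections"
    A[i, j] = A[j, i] = 1.0

A_hat = A + np.eye(n_nodes)                  # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
norm_A = D_inv_sqrt @ A_hat @ D_inv_sqrt     # symmetric normalization

X = rng.normal(size=(n_nodes, n_feats))      # per-gene multi-omics features
W = rng.normal(size=(n_feats, 2))            # learnable weights (random here)
H = np.maximum(norm_A @ X @ W, 0.0)          # aggregate neighbours + ReLU
print(H.shape)                               # → (6, 2)
```

Stacking such layers lets information propagate along previously unnoticed "connections", which is the mechanism the railroad analogy describes.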

Many more interesting details are hidden in the data, including patterns that depend on the particular cancer and tissue. The researchers see this as evidence that tumors are triggered by different molecular mechanisms in different organs.

Marsico explains

The EMOGI program is not limited to cancer. In theory, it can be used to integrate diverse sets of biological data and find patterns there. It could be useful to apply our algorithm to similarly complex diseases for which multifaceted data are collected and in which genes play an important role. An example might be complex metabolic diseases such as diabetes.

Main Source

New prediction algorithm identifies previously undetected cancer driver genes

https://advances.sciencemag.org/content/6/46/eaba6784  

Integration of multiomics data with graph convolutional networks to identify new cancer genes and their associated molecular mechanisms

https://www.nature.com/articles/s42256-021-00325-y#citeas

Other Related Articles published in this Open Access Online Scientific Journal include the following:

AI System Used to Detect Lung Cancer

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2019/06/28/ai-system-used-to-detect-lung-cancer/

Deep Learning extracts Histopathological Patterns and accurately discriminates 28 Cancer and 14 Normal Tissue Types: Pan-cancer Computational Histopathology Analysis

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/10/28/deep-learning-extracts-histopathological-patterns-and-accurately-discriminates-28-cancer-and-14-normal-tissue-types-pan-cancer-computational-histopathology-analysis/

Evolution of the Human Cell Genome Biology Field of Gene Expression, Gene Regulation, Gene Regulatory Networks and Application of Machine Learning Algorithms in Large-Scale Biological Data Analysis

Curator & Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/12/08/evolution-of-the-human-cell-genome-biology-field-of-gene-expression-gene-regulation-gene-regulatory-networks-and-application-of-machine-learning-algorithms-in-large-scale-biological-data-analysis/

Cancer detection and therapeutics

Curator: Larry H. Bernstein, MD, FCAP

https://pharmaceuticalintelligence.com/2016/05/02/cancer-detection-and-therapeutics/

Free Bio-IT World Webinar: Machine Learning to Detect Cancer Variants

Reporter: Stephen J. Williams, PhD

https://pharmaceuticalintelligence.com/2016/05/04/free-bio-it-world-webinar-machine-learning-to-detect-cancer-variants/

Artificial Intelligence: Genomics & Cancer

https://pharmaceuticalintelligence.com/ai-in-genomics-cancer/

Premalata Pati, PhD, PostDoc in Biological Sciences, Medical Text Analysis with Machine Learning

https://pharmaceuticalintelligence.com/2021-medical-text-analysis-nlp/premalata-pati-phd-postdoc-in-pharmaceutical-sciences-medical-text-analysis-with-machine-learning/

Read Full Post »


 

Application of Natural Language Processing (NLP) on ~1MM cases of semi-structured echocardiogram reports: Identification of aortic stenosis (AS) cases – Accuracy comparison to administrative diagnosis codes (ICD-9/10 codes)

Reporter: Aviva Lev-Ari, PhD, RN


Large-scale identification of aortic stenosis and its severity using natural language processing on electronic health records

Matthew D. Solomon, MD, PhD; Grace Tabada, MPH; Amanda Allen; Sue Hee Sung, MPH; Alan S. Go, MD

Division of Research, Kaiser Permanente Northern California, Oakland, California; Department of Cardiology, Kaiser Oakland Medical Center, Oakland, California; Department of Health Systems Science, Kaiser Permanente Bernard J. Tyson School of Medicine, Pasadena, California; Departments of Epidemiology, Biostatistics and Medicine, University of California, San Francisco, San Francisco, California; Department of Medicine, Stanford University, Stanford, California

Available online 18 March 2021.

https://www.sciencedirect.com/science/article/pii/S2666693621000256

Background

Systematic case identification is critical to improving population health, but widely used diagnosis code–based approaches for conditions like valvular heart disease are inaccurate and lack specificity.

Objective

To develop and validate natural language processing (NLP) algorithms to identify aortic stenosis (AS) cases and associated parameters from semi-structured echocardiogram reports and compare their accuracy to administrative diagnosis codes.

Methods

Using 1003 physician-adjudicated echocardiogram reports from Kaiser Permanente Northern California, a large, integrated healthcare system (>4.5 million members), NLP algorithms were developed and validated to achieve positive and negative predictive values > 95% for identifying AS and associated echocardiographic parameters. Final NLP algorithms were applied to all adult echocardiography reports performed between 2008 and 2018 and compared to ICD-9/10 diagnosis code–based definitions for AS found from 14 days before to 6 months after the procedure date.

Results

A total of 927,884 eligible echocardiograms were identified during the study period among 519,967 patients. Application of the final NLP algorithm classified 104,090 (11.2%) echocardiograms with any AS (mean age 75.2 years, 52% women), with only 67,297 (64.6%) having a diagnosis code for AS between 14 days before and up to 6 months after the associated echocardiogram. Among those without associated diagnosis codes, 19% of patients had hemodynamically significant AS (ie, greater than mild disease).

Conclusion

A validated NLP algorithm applied to a systemwide echocardiography database was substantially more accurate than diagnosis codes for identifying AS. Leveraging machine learning–based approaches on unstructured electronic health record data can facilitate more effective individual and population management than using administrative data alone.
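On semi-structured echocardiogram reports, case identification of this kind is often built from carefully validated text patterns. The sketch below is a hypothetical, greatly simplified rule-based extractor in that spirit; the Kaiser Permanente algorithms themselves are not reproduced here, and the patterns and phrasings are invented.

```python
# Hypothetical rule-based extraction of aortic stenosis (AS) severity
# from free-text echocardiogram report snippets.
import re

SEVERITY = re.compile(
    r"\b(mild|mild-to-moderate|moderate|moderate-to-severe|severe)\s+aortic\s+stenosis\b",
    re.IGNORECASE,
)
NEGATION = re.compile(r"\bno (evidence of )?aortic stenosis\b", re.IGNORECASE)

def classify_report(text: str) -> str:
    """Return an AS label for one echocardiogram report."""
    if NEGATION.search(text):
        return "no AS"
    m = SEVERITY.search(text)
    return m.group(1).lower() if m else "AS not mentioned"

print(classify_report("Findings: moderate aortic stenosis, AVA 1.1 cm2."))  # → moderate
print(classify_report("No evidence of aortic stenosis."))                   # → no AS
```

A production system would need validated handling of negation scope, historical mentions, and hemodynamic parameters (gradients, valve area), which is what the reported >95% predictive values reflect.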

Keywords

Aortic stenosis; Echocardiography; Machine learning; Population health; Quality and outcomes; Valvular heart disease

SOURCE

https://www.sciencedirect.com/science/article/pii/S2666693621000256

Read Full Post »


Fighting Chaos with Care: Community trust, engagement must be cornerstones of pandemic response

Reporter: Amandeep Kaur, BSc, MSc (Exp. 6/2021)

According to the Global Health Security Index released in October 2019 by Johns Hopkins University in collaboration with the Nuclear Threat Initiative (NTI) and The Economist Intelligence Unit (EIU), the United States was ranked as the country best prepared to tackle a future pandemic or health emergency.

The tables turned within one year of the outbreak of the novel coronavirus disease COVID-19. By the end of March 2021, the United States had the highest COVID-19 case and death counts in the world. According to the latest numbers from the World Health Organization (WHO), there were more than 540,000 deaths and more than 30 million confirmed cases in the United States.

Joia Mukherjee, associate professor of global health and social medicine in the Blavatnik Institute at Harvard Medical School, said,

“When we think about how to balance control of an epidemic over chaos, we have to double down on care and concern for the people and communities who are hardest hit”.

She added that the U.S. possesses all the building blocks required for a health system to work, but lacks the trust, leadership, engagement, and care needed to assemble them into a working system.

Mukherjee noted that the Index undervalued the organized, integrated systems necessary to help the public meet their needs for clinical care. It also underestimated another element essential for real health security: clear messaging and social support, which make preventive public health measures effective and sustainable.

Mukherjee is chief medical officer at Partners In Health (PIH), an organization focused on strengthening community-based health care delivery. She is also one of the HMS community members playing an important role in building a more comprehensive response to the pandemic across the U.S. With years of experience, they are training global health care workers, analyzing results, and constructing integrated health systems to fight the widespread health emergency caused by the coronavirus around the world.

Mukherjee encouraged strengthening community consensus to contain this infectious disease epidemic. She identified several crucial steps: testing people with symptoms of coronavirus infection, isolating infected individuals while providing them with necessary resources, and delivering clinical treatment and care to those in need. Community engagement and material support, she said, are not just idealistic goals but essential components of a functioning health care system during a coronavirus outbreak.

Continued vigilance, such as social distancing and tracing the personal contacts of infected individuals, remains important because old-school public health approaches cannot simply be replaced by new technologies such as smartphone applications or biomedical advances.

Public health specialists emphasized that limiting infection is the single most vital strategy for controlling the outbreak in the near future, even as the population is being vaccinated. Slowing the spread of disease is crucial for restricting the emergence of more dangerous variants that could escape the immune protection conferred by the newly developed vaccines as well as natural immune defenses.

Making Crucial connections

Treatment is more expensive and complicated in areas with fewer health facilities, said Paul Farmer, the Kolokotrones University Professor at Harvard and chair of the HMS Department of Global Health and Social Medicine, a situation he called treatment nihilism. Where resources are scarce, most energy is focused on public health and prevention efforts. The U.S. has the resources to cope with the rising demand for hospital space and is developing vaccines, yet many experts describe a form of containment nihilism: the belief that prevention and infection containment are unattainable.

Integrating the necessary elements (clinical care, therapies, vaccines, preventive measures and social support) into a single comprehensive plan is the best approach for responding to COVID-19, Farmer said. As a co-founder of Partners In Health, with years of experience alongside colleagues from HMS and PIH fighting epidemics of HIV, Ebola, cholera, tuberculosis, and other infectious and non-infectious diseases, he understands the importance of community trust and an integrated health care system in fighting this pandemic.

PIH helped launch the Massachusetts Community Tracing Collaborative (CTC), a statewide contact tracing initiative run in partnership with several state bodies and local boards of health. The CTC was set up in April 2020 by Governor Charlie Baker, with leadership from HMS faculty, to build a unified response to COVID-19 and lay the foundation for a long-term move toward a more integrated, community-based health care system.

Contact tracing involves reaching out to individuals who test positive for COVID-19, identifying the people who came into close contact with them, screening those contacts for coronavirus symptoms, and encouraging them to seek testing and take the precautions needed to break the chain of infection in the community.

In the initial phase of the outbreak, the CTC comprised contact tracers and health care coordinators who spoke 23 different languages, including social workers, public health practitioners, nurses and staff members from local board of health agencies with deep links to the communities they serve. The CTC worked with 339 of the state's 351 municipalities; some local public health agencies relied completely on the CTC, while some cities and towns used it only occasionally as backup. According to one report, CTC members reached up to 80 percent of contacts in hard-hit, resource-deprived communities such as New Bedford.

Putting COVID-19 in context

Based on generations of experience helping people survive some of the deadliest epidemic and endemic outbreaks in places like Haiti, Mexico, Rwanda and Peru, the staff understood that people in poor social and economic conditions have less room to quarantine or follow other public health safety measures, making them the most vulnerable and at highest risk in a pandemic.

Contact tracers reported that individuals infected with SARS-CoV-2, or at risk of infection, had many questions about when to seek a doctor's help and where to get tested. People worried about missing work for two weeks, and some immigrants, far from family and friends, worried about basic supplies.

The CTC team received more than 7,000 requests for social support assistance in its first three months. Staff members and contact tracers actively connected people in need with available resources, filling the gap when their own resources fell short.

Farmer said, “COVID is a misery-seeking missile that has targeted the most vulnerable.”

The reality that infected individuals worried about lacking basic household items, food and access to childcare underscores the urgency of basic social care and community support in fighting the pandemic. To break the chain of infection and reopen society, Farmer said, people's elementary needs must be met.

“What kinds of help are people asking for?” Farmer said, adding, “it’s important to listen to what your patients are telling you.”

An outbreak of care

After the launch of the Massachusetts CTC with support from PIH, the organization began receiving requests from around the country to help start contact tracing programs. In May 2020, it announced the launch of a U.S. public health accompaniment unit to meet that demand.

The unit has embedded team members in nearly two dozen state and municipal health departments across the country, working in collaboration with local organizations. PIH provided technical support on matters such as choosing and implementing contact tracing tools and software. To spread awareness and new understanding more rapidly, a learning collaborative was established with more than 200 team members from more than 100 organizations. The team worked to meet the needs of populations at higher risk of infection by advocating for a stronger, more reliable public health response.

The PIH public health team helped train contact tracers in the Navajo Nation and worked to strengthen coordination among SARS-CoV-2 testing, prevention efforts, clinical care delivery and social support in vulnerable communities around the U.S.

“For us to reopen our schools, our churches, our workplaces,” Mukherjee said, “we have to know where the virus is spreading so that we don’t just continue on this path.”

SOURCE:

https://hms.harvard.edu/news/fighting-chaos-care?utm_source=Silverpop&utm_medium=email&utm_term=field_news_item_1&utm_content=HMNews04052021

Other related articles were published in this Open Access Online Scientific Journal, including the following:

T cells recognize recent SARS-CoV-2 variants

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2021/03/30/t-cells-recognize-recent-sars-cov-2-variants/

The WHO team is expected to soon publish a 300-page final report on its investigation, after scrapping plans for an interim report on the origins of SARS-CoV-2 — the new coronavirus responsible for killing 2.7 million people globally

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2021/03/27/the-who-team-is-expected-to-soon-publish-a-300-page-final-report-on-its-investigation-after-scrapping-plans-for-an-interim-report-on-the-origins-of-sars-cov-2-the-new-coronavirus-responsibl/

Need for Global Response to SARS-CoV-2 Viral Variants

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2021/02/12/need-for-global-response-to-sars-cov-2-viral-variants/

Mechanistic link between SARS-CoV-2 infection and increased risk of stroke using 3D printed models and human endothelial cells

Reporter: Adina Hazan, PhD

https://pharmaceuticalintelligence.com/2020/12/28/mechanistic-link-between-sars-cov-2-infection-and-increased-risk-of-stroke-using-3d-printed-models-and-human-endothelial-cells/

Artificial intelligence predicts the immunogenic landscape of SARS-CoV-2

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2021/02/04/artificial-intelligence-predicts-the-immunogenic-landscape-of-sars-cov-2/

Read Full Post »


NLP Techniques based on NLP Year in Review — 2019 

Reporter: Aviva Lev-Ari, PhD, RN

 


In this blog post, I want to highlight some of the most important stories related to machine learning and NLP that I came across in 2019.



By Elvis Saravia, Affective Computing & NLP Researcher

Other Related Topics

Activation Atlases is a technique developed by researchers at Google and OpenAI to better understand and visualize the interactions happening between neurons of a neural network.


“An activation atlas of the InceptionV1 vision classification network reveals many fully realized features, such as electronics, buildings, food, animal ears, plants, and watery backgrounds.” — source

 

 

This Colab notebook provides a great introduction on how to use Nucleus and TensorFlow for “DNA Sequencing Error Correction”. And here is a great detailed post on the use of deep learning architectures for exploring DNA.


Alexander Rush is a Harvard NLP researcher who wrote an important article about the issues with tensors and how some current libraries expose them. He also went on to talk about a proposal for tensors with named indices.

 

ML/NLP Tools and Datasets ⚙️

 

The Stanford NLP Group released StanfordNLP 0.2.0, a Python library for natural language analysis. It supports linguistic analyses such as lemmatization and part-of-speech tagging in over 70 different languages.

GQA is a visual question answering dataset for enabling research related to visual reasoning.

exBERT is a visual interactive tool to explore the embeddings and attention of Transformer language models. You can find the paper here and the demo here.

SOURCE

https://www.kdnuggets.com/2020/01/nlp-year-review-2019.html

 

 

6 NLP Techniques Every Data Scientist Should Know

Towards more efficient natural language processing

Sara A. Metwalli


Ph.D. student working on Quantum Computing. Traveler, writing lover, science enthusiast, and CS instructor. Get in touch with me bit.ly/2CvFAw6

Jan 20·7 min read


Photo by Sai Kiran Anagani on Unsplash

Natural language processing is perhaps the most talked-about subfield of data science. It’s interesting, it’s promising, and it can transform the way we see technology today. Not just technology, but it can also transform the way we perceive human languages.

Natural language processing has been gaining significant attention and traction in both research and industry because it combines human language with technology. Ever since computers were first created, people have dreamt about creating computer programs that can comprehend human languages.

Advances in machine learning and artificial intelligence have driven the emergence of, and continuous interest in, natural language processing. This interest will only grow, especially now that we can see how natural language processing could make our lives easier. This is evident in technologies such as Alexa, Siri, and automatic translators.

The truth is, natural language processing is the reason I got into data science. I was always fascinated by languages and how they evolve with human experience over time. I wanted to know how we can teach computers to comprehend our languages, and beyond that, how we can make them capable of using language to communicate with and understand us.

In this article, I will go through the 6 fundamental techniques of natural language processing that you should know if you are serious about getting into the field.


Lemmatization and stemming

Stemming and lemmatization are probably the first two steps in building an NLP project; you often use one of the two. They represent the field's core concepts and are often the first techniques you will implement on your journey to becoming an NLP master.

Often, beginners tend to confuse the two techniques. Although they have their similarities, they are quite different.

  • Stemming: Stemming is a collection of algorithms that work by clipping off the end or the beginning of a word to reach its root form. These algorithms do that by considering the common prefixes and suffixes of the language being analyzed. Clipping can produce the correct root form, but that is not always the case. There are many algorithms for stemming; the most common one used in English is the Porter stemmer, which contains five phases that work sequentially to obtain the word's stem.
  • Lemmatization: Lemmatization algorithms were designed to overcome the flaws of stemming. In these algorithms, some linguistic and grammar knowledge must be fed to the algorithm so it can make better decisions when extracting a word's base form. To perform accurately, lemmatization algorithms need to extract the correct lemma of each word, so they often require a dictionary of the language to categorize each word correctly.
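The contrast between the two can be sketched in a few lines of plain Python; the suffix list and lemma dictionary below are tiny, invented stand-ins for real linguistic resources such as the full Porter rules or WordNet:

```python
# Minimal illustrative stemmer vs. lemmatizer (toy resources, not a real library).
SUFFIXES = ("ing", "ed", "es", "s", "ly")

def stem(word):
    """Naive suffix-stripping stemmer: clip a known suffix off the end."""
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: -len(suf)]
    return word

# Lemmatization needs a lookup table for irregular forms.
LEMMA_DICT = {"ran": "run", "better": "good", "mice": "mouse", "was": "be"}

def lemmatize(word):
    """Dictionary-backed lemmatizer; falls back to the stemmer for regular forms."""
    return LEMMA_DICT.get(word, stem(word))

print(stem("jumping"))    # -> jump
print(stem("running"))    # -> runn  (stemming can over- or under-clip)
print(lemmatize("mice"))  # -> mouse (a stemmer alone could never recover this)
```

The last line shows why a lemmatizer is more accurate but costlier to build: it needs linguistic knowledge, not just string rules.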


Image by the author, made using Canva

Based on these definitions, you can imagine that building a lemmatizer is more complex and more time-consuming than building a stemmer. However, it is more accurate and causes less noise in the final analysis results.

Keywords extraction

Keyword extraction — sometimes called keyword detection or keyword analysis — is an NLP technique used for text analysis. This technique’s main purpose is to automatically extract the most frequent words and expressions from the body of a text. It is often used as a first step to summarize the main ideas of a text and to deliver the key ideas presented in the text.

In the backend of keyword extraction algorithms lies the power of machine learning and artificial intelligence. These are used to extract and simplify a given text so that it is understandable by the computer. The algorithm can be adapted to any type of context, from academic text to colloquial text used in social media posts.

Keyword extraction has many applications in today's world, including social media monitoring, customer service and feedback analysis, product analysis, and search engine optimization.

Named Entity Recognition (NER)

Like stemming and lemmatization, named entity recognition (NER) is one of NLP's basic, core techniques. NER extracts entities from a body of text to identify basic concepts within it, such as people's names, places, and dates.

The NER algorithm has two main steps. First, it detects an entity in the text; then it categorizes that entity into a set category. The performance of NER depends heavily on the training data used to develop the model: the more relevant the training data is to the actual data, the more accurate the results will be.

Another factor contributing to the accuracy of an NER model is the linguistic knowledge used when building the model. That said, there are open NER platforms that are pre-trained and ready to use.

NER can be used in a variety of fields, such as building recommendation systems, in health care to provide better service for patients, and in academia to help students find materials relevant to their study scope.
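The two-step detect-then-categorize flow can be illustrated with a toy rule-based tagger; the capitalization heuristic and the tiny gazetteers below are made-up stand-ins for a trained model:

```python
import re

# Tiny illustrative gazetteers (hypothetical entries, not a trained model).
PEOPLE = {"Joia Mukherjee", "Paul Farmer"}
PLACES = {"Ontario", "Boston", "Haiti"}

def tag_entities(text):
    """Step 1: detect candidate spans (runs of capitalized words);
    Step 2: categorize each span against a known-entity list."""
    spans = re.findall(r"[A-Z][a-z]+(?:\s[A-Z][a-z]+)*", text)
    tagged = []
    for span in spans:
        if span in PEOPLE:
            tagged.append((span, "PERSON"))
        elif span in PLACES:
            tagged.append((span, "PLACE"))
    return tagged

print(tag_entities("Paul Farmer worked in Haiti before joining colleagues in Boston."))
# -> [('Paul Farmer', 'PERSON'), ('Haiti', 'PLACE'), ('Boston', 'PLACE')]
```

A statistical or neural NER model replaces both the regex and the gazetteer with patterns learned from labeled data, which is why training-data relevance matters so much.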

Topic Modelling

You can use keyword extraction techniques to narrow down a large body of text to a handful of main keywords and ideas, from which you can probably extract the main topic of the text.

Another, more advanced technique for identifying a text's topic is topic modeling, which is built on unsupervised machine learning and therefore doesn't require labeled data for training.

Multiple algorithms can be used to model the topics of a text, such as the Correlated Topic Model, Latent Dirichlet Allocation (LDA), and Latent Semantic Analysis. The most commonly used approach is LDA. It analyzes the text, breaks it down into words and statements, and then extracts different topics from these words and statements. All you need to do is feed the algorithm a body of text, and it will take it from there.
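As a concrete toy illustration of LDA's machinery, here is a minimal collapsed Gibbs sampler in pure Python; the corpus and hyperparameters are invented for the demo, and real workloads would use a library such as gensim or scikit-learn:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics=2, iters=150, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA over pre-tokenized documents."""
    rng = random.Random(seed)
    vocab = {w for d in docs for w in d}
    V = len(vocab)
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]  # token-topic labels
    ndk = [[0] * n_topics for _ in docs]                 # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]    # topic-word counts
    nk = [0] * n_topics                                  # tokens per topic
    for di, d in enumerate(docs):
        for wi, w in enumerate(d):
            k = z[di][wi]
            ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(iters):
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                k = z[di][wi]
                # Remove the token's current assignment, then resample it
                # in proportion to how well each topic explains doc and word.
                ndk[di][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                weights = [(ndk[di][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + V * beta)
                           for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[di][wi] = k
                ndk[di][k] += 1; nkw[k][w] += 1; nk[k] += 1
    # Return the top words per topic.
    return [sorted(nkw[t], key=nkw[t].get, reverse=True)[:3] for t in range(n_topics)]

docs = [["cat", "dog", "pet"], ["dog", "pet", "vet"],
        ["stock", "bond", "fund"], ["bond", "fund", "bank"]]
print(lda_gibbs(docs))
```

On this toy corpus the sampler tends to separate the pet vocabulary from the finance vocabulary into the two topics, which is exactly the behavior the paragraph above describes.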


Image by the author, made using Canva

Summarization

One of the most useful and promising applications of NLP is text summarization: reducing a large body of text into a smaller chunk containing the text's main message. This technique is often applied to long news articles and to research papers.

Text summarization is an advanced technique that uses other techniques we just mentioned, such as topic modeling and keyword extraction, to achieve its goal. It works in two steps: extract, then abstract.

In the extract phase, the algorithms create a summary by extracting the text’s important parts based on their frequency. After that, the algorithm generates another summary, this time by creating a whole new text that conveys the same message as the original text. There are many text summarization algorithms, e.g., LexRank and TextRank.

In LexRank, the algorithm categorizes the sentences in the text using a ranking model. The ranks are based on the similarity between the sentences; the more similar a sentence is to the rest of the text, the higher it will be ranked.
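A stripped-down version of this similarity-based ranking (a rough stand-in for LexRank's graph-based scoring, using simple word overlap instead of TF-IDF cosine similarity; the example text is invented) might look like:

```python
import re
from collections import Counter

def summarize(text, k=1):
    """Score each sentence by its total word-overlap similarity to the
    others, then keep the top-k sentences in their original order."""
    sents = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    bags = [Counter(re.findall(r"[a-z]+", s.lower())) for s in sents]

    def sim(a, b):  # word-overlap similarity between two bags of words
        shared = sum((a & b).values())
        return shared / (sum(a.values()) + sum(b.values()))

    scores = [sum(sim(bags[i], bags[j]) for j in range(len(sents)) if j != i)
              for i in range(len(sents))]
    ranked = sorted(range(len(sents)), key=lambda i: -scores[i])
    return [sents[i] for i in sorted(ranked[:k])]

print(summarize("Cats like milk. Cats and dogs like food. Dogs like bones."))
# -> ['Cats and dogs like food.']
```

The middle sentence wins because it shares the most vocabulary with the rest of the text, which is the core intuition behind LexRank's ranking.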

Sentiment Analysis

The most famous, well-known, and widely used NLP technique is, without a doubt, sentiment analysis. This technique's core function is to extract the sentiment behind a body of text by analyzing the words it contains.

The technique's simplest results lie on a three-point scale: negative, positive, and neutral. More complex and advanced algorithms return a numeric score instead: a negative number means the text has a negative tone, and a positive number indicates positivity in the text.

Sentiment analysis is one of the broad applications of machine learning techniques. It can be implemented using either supervised or unsupervised techniques. Perhaps the most common supervised technique to perform sentiment analysis is using the Naive Bayes algorithm. Other supervised ML algorithms that can be used are gradient boosting and random forest.
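A minimal Naive Bayes sentiment classifier, trained on a made-up four-example corpus, can be sketched as follows (real systems use far larger labeled datasets and richer features):

```python
import math
from collections import Counter, defaultdict

def train_nb(examples):
    """Collect class priors and per-class word counts from (text, label) pairs."""
    class_counts = Counter(label for _, label in examples)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in examples:
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def predict(text, model):
    """Pick the class maximizing log P(class) + sum log P(word | class),
    with add-one (Laplace) smoothing for unseen words."""
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label, c in class_counts.items():
        lp = math.log(c / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train_nb([("great fun loved it", "positive"),
                  ("awful boring hated it", "negative"),
                  ("loved the great acting", "positive"),
                  ("hated the awful plot", "negative")])
print(predict("a great loved plot", model))  # -> positive
```

Even with one negative word ("plot") in the input, the two strongly positive words dominate the log-probability sum, which is why Naive Bayes remains a surprisingly strong baseline.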


Takeaways

Humans' desire for computers to understand and communicate with them using spoken language is an idea as old as computers themselves. Thanks to rapid advances in technology and machine learning algorithms, this idea is no longer just an idea: it is a reality we can see and experience in our daily lives, and it is the core driving force of natural language processing.

Natural language processing is one of today's hottest and most talent-attracting fields. Companies and research institutes are racing to create computer programs that fully understand and use human languages. Virtual agents and translators have improved rapidly since they first appeared in the 1960s.

Whatever the task natural language processing is asked to execute, to enter the field and start building your own projects you need to be completely comfortable with the six fundamental natural language processing techniques above.

These techniques are the basic building blocks of most — if not all — natural language processing algorithms. So, if you understand these techniques and when to use them, nothing can stop you.


Read Full Post »


Artificial intelligence predicts the immunogenic landscape of SARS-CoV-2

Reporter: Irina Robu, PhD

Artificial intelligence makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks. Using these technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns. Scientists from NEC OncoImmunity used artificial intelligence to design universal vaccine candidates for COVID-19 containing a broad spectrum of T-cell epitopes capable of providing coverage and protection across the global population. To test their hypothesis, they profiled the entire SARS-CoV-2 proteome across the 100 most frequent HLA-A, HLA-B and HLA-DR alleles in the human population, using the host-infected-cell surface antigen and immunogenicity predictors from the NEC Immune Profiler suite of tools, and generated comprehensive epitope maps. They used these epitope maps as the starting point for a Monte Carlo simulation intended to identify the most significant epitope hotspots in the virus. They then analyzed the antigen presentation and immunogenic landscape to identify a trend whereby SARS-CoV-2 mutations are expected to have reduced potential to be presented by host-infected cells, and subsequently noticed by the host immune system. A sequence conservation analysis then removed epitope hotspots that occurred in less-conserved regions of the viral proteome.

By merging the NEC Immune Profiler's estimates of antigen presentation on the infected-host cell surface and of immunogenicity with Monte Carlo and digital twin simulations, the researchers mapped the entire SARS-CoV-2 proteome and identified a subset of epitope hotspots that could be used in a vaccine formulation to provide wide-ranging coverage across the global population.

Using a database of HLA haplotypes from approximately 22,000 individuals, they designed a "digital twin" simulation to model how efficiently various combinations of hotspots would work in a varied human population.
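The paper's actual pipeline relies on NEC's proprietary predictors, but the "digital twin" coverage idea can be sketched in a purely illustrative way: model each simulated individual as a set of HLA alleles and score hotspot combinations by the fraction of the population they reach. All allele and hotspot data below are made up for the demo:

```python
from itertools import combinations

# Hypothetical mapping: which HLA alleles each epitope hotspot is predicted to bind.
HOTSPOT_ALLELES = {
    "hotspot_A": {"HLA-A*02:01", "HLA-B*07:02"},
    "hotspot_B": {"HLA-A*01:01", "HLA-DRB1*15:01"},
    "hotspot_C": {"HLA-A*02:01", "HLA-A*01:01"},
}

# Digital-twin cohort: one (hypothetical) allele set per simulated person.
POPULATION = [
    {"HLA-A*02:01", "HLA-B*07:02"},
    {"HLA-A*01:01", "HLA-DRB1*15:01"},
    {"HLA-A*02:01", "HLA-A*01:01"},
    {"HLA-B*07:02"},
]

def coverage(hotspots):
    """Fraction of the simulated population with at least one allele hit."""
    alleles = set().union(*(HOTSPOT_ALLELES[h] for h in hotspots))
    return sum(bool(person & alleles) for person in POPULATION) / len(POPULATION)

def best_combo(size=2):
    """Exhaustively score every hotspot combination of the given size."""
    return max(combinations(HOTSPOT_ALLELES, size), key=coverage)

combo = best_combo(2)
print(combo, coverage(combo))
```

The real study scores candidate combinations against ~22,000 simulated haplotypes rather than four, and with immunogenicity-weighted predictions rather than binary hits, but the selection principle (maximize predicted population coverage) is the same.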

SOURCE

https://www.nature.com/articles/s41598-020-78758-5?utm_content=buffer4ebb7

Read Full Post »


Google Cloud launches Vaccine Management Tools using ML & AI for Vaccine Distribution Efforts

Reporter: Aviva Lev-Ari, PhD, RN

 

Google Cloud announced Monday new artificial intelligence and machine learning tools to help with vaccine rollout efforts from vaccine information and scheduling, to distribution and analytics, to forecasting and modeling COVID-19 cases.

https://www.fiercehealthcare.com/tech/google-cloud-rolls-out-tools-for-vaccine-logistics-as-tech-giants-jump-into-distribution?utm_medium=nl&utm_source=internal&mrkid=993697&mkt_tok=eyJpIjoiWldZMVlXVmlNelprWXpNMyIsInQiOiJEQ3BsYnRMQTBPQU1HNDBqVFVhQnpKV3BlRUdIbXRBMWgwWFFEYktjWnc3XC9xWm9tNUNJcnNNR3M5cjNuZEhoYlFRQzZFTXAxU1NFUnFQc2o4Q09HYjBFMFRhejBMaWhuN1FLalU1U2xQQWV3bm1iZEtJQkk1aWRGVkVSOFVcL2tIIn0%3D

Read Full Post »


Countries that are early adopters of ML methods

Reporter: Aviva Lev-Ari, PhD, RN

 

Machine Learning Adoption by Country

Results of the survey appear in Figure 1 for the overall sample as well as countries that have 50 or more respondents. Overall, results show that the adoption rate of machine learning methods is 45%. Twenty-one percent of respondents indicate their company is exploring ML methods. Twenty percent of respondents indicate their company does not use ML methods.

Countries that are early adopters of ML methods include:

  1. Israel (63% adopt ML)
  2. Netherlands (57%)
  3. United States (56%)
  4. UK and Northern Ireland (54%)
  5. Germany (54%)
  6. Australia (53%)
  7. France (52%)
  8. China (52%)
  9. Taiwan (51%)
  10. Greece (49%)

Countries with the lowest adoption rate of ML methods include:

  1. Nigeria (23% adopt ML)
  2. Morocco (24%)
  3. Egypt (31%)
  4. Philippines (31%)
  5. Argentina (32%)

Countries with the highest percent of companies exploring ML methods include:

  1. Chile (36% are exploring ML methods)
  2. Sweden (35%)
  3. Malaysia (32%)
  4. South Korea (31%)
  5. Peru (29%)

Machine Learning Adoption Rates Around the World

Bob Hayes

A worldwide survey of data professionals showed that adoption of machine learning methods in their company is 45%. Twenty-one percent of survey respondents said their employer is exploring ML methods. ML adoption rates varied by country with Israel (63%), Netherlands (57%) and the United States (56%) showing the highest and Egypt (31%), Morocco (24%) and Nigeria (23%) showing the lowest adoption rate. ML adoption also varied by company size, with larger companies having higher adoption rates (61%) than medium (45%) and small (33%) companies.

Businesses are leveraging the power of machine learning methods to help them extract better quality information, increase productivity, reduce costs and extract more value from their data. As the amount of data continues to grow along with the processing power of technology, businesses will continue to incorporate ML into their business. Researchers have found different AI / ML adoption rates. In one study, adoption rate of ML Methods was 10%; in a 2020 study by McKinsey, adoption rate of AI was 50%. Still, another study found that 42% of companies were currently using AI and 40% of companies were planning on using AI in the next two years. Another 2020 study found that 59% of enterprises have machine learning initiatives either in production or at a proof-of-concept stage.

Current Analysis on Machine Learning Adoption

Kaggle conducted a worldwide survey in October 2020 of 20,036 data professionals (2020 Kaggle Machine Learning and Data Science Survey). The survey sample consisted of data professionals, including men (~79%) and women (~19%), from a variety of job titles (e.g., data scientist, business analyst, machine learning engineer, software developer) and company sizes. The survey asked a variety of questions, including “Does your current employer incorporate machine learning methods into their business?”

Figure 1. Machine Learning Adoption Rates across Countries. Click image to enlarge.

Figure 2. Adoption of ML Methods Across Company Size

Figures in:

http://businessoverbroadway.com/2021/02/01/machine-learning-adoption-rates-around-the-world/

Machine Learning Adoption by Company Size

We also looked at adoption rates by company size. Those results appear in Figure 2. Supporting prior studies, we found that larger companies have higher adoption rates of ML methods. The largest enterprise companies (10,000+ employees) reported ML adoption rates of 61%. The smallest companies (0-49 employees) reported adoption rates of 33%. Of the smallest companies, a little over a quarter (27%) indicated that they are exploring the use of ML methods.

Summary

A survey of data professionals showed that the adoption rate of machine learning methods among businesses is 45%. About 21% of respondents indicated that their company is exploring machine learning methods with the hope of putting a model into production one day.

ML adoption rate varies by country and company size. Survey results reveal that early adopters come from large enterprise companies (adoption rate of 61%) and some countries including the United States, Israel, Netherlands and the UK and Northern Ireland.

Machine learning vendors, looking for inroads into businesses, could focus their marketing and sales efforts on small businesses as they have the highest percentage of companies who are exploring the use of ML methods.

SOURCE

http://businessoverbroadway.com/2021/02/01/machine-learning-adoption-rates-around-the-world/

Read Full Post »


Reporter: Adina Hazan, PhD

Elizabeth Unger from the Tian group at UC Davis, Jacob Keller from the Looger lab at HHMI, Michael Altermatt from the Gradinaru group at the California Institute of Technology, and colleagues did just this by redesigning the binding pocket of a periplasmic binding protein (PBP) using artificial intelligence, so that it became a fluorescent sensor specific for serotonin. Beyond that, the group showed that this molecule could be expressed and used to detect serotonin at the cell, tissue, and whole-animal level.

Starting with a microbial PBP and an early version of an acetylcholine sensor (iAChSnFR), the scientists used machine learning and modeling to redesign the binding site for higher affinity and specificity toward serotonin. After three rounds of mutagenesis, modeling, and library readouts, they produced iSeroSnFR. This version harbors 19 mutations compared with iAChSnFR0.6 and has a Kd of 310 µM, yielding an 800% increase in fluorescence in HEK293T cells expressing the sensor. Of over 40 neurotransmitters, amino acids, and small molecules screened, only two endogenous molecules evoked some fluorescence, and only at significantly higher concentrations.

To acutely test the ability of the sensor to detect rapid changes of serotonin in the environment, the researchers used caged serotonin, a technique in which the serotonin is rapidly released into the environment with light pulses, and showed that iSeroSnFR accurately and robustly produced a signal with each flash of light. With this tool, it was then possible to move to ex-vivo mouse brain slices and detect endogenous serotonin release patterns across the brain. Three weeks after targeted injection of iSeroSnFR to specifically deliver the receptor into the prefrontal cortex and dorsal striatum, strong fluorescent signal could be detected during perfusion of serotonin or electrical stimulation.

Most significantly, the sensor was also shown to work in freely moving mice, a tool that could offer critical insight into the acute role of serotonin regulation in functions such as mood and alertness. Through optical fibers placed in the basolateral amygdala (BLA) and prefrontal cortex, the team measured dynamic, real-time changes in serotonin release in fear-trained mice, during social interactions, and across sleep-wake cycles. For example, while both areas of the brain have been established as relevant to the fear response, the team reliably tracked an immediate response in the PFC, while the BLA displayed a delayed response. This additional temporal resolution of neuromodulation may have important implications for neurotransmitter pharmacology of the central nervous system.

This study provided the scientific community with several insights and tools. The serotonin sensor itself will be a critical tool in the study of the central nervous system and possibly beyond. Additionally, an AI approach to mutagenesis in order to redesign a binding pocket of a receptor opens new avenues to the development of pharmacological tools and may lead to many new designs in therapeutics and research.

SOURCE:

Read Full Post »

Older Posts »