
Archive for the ‘Artificial Intelligence in Health Care – Tools & Innovations’ Category

The Map of Human Proteins Drawn by Artificial Intelligence, and PROTAC (Proteolysis-Targeting Chimeras) Technology for Drug Discovery

Curators: Dr. Stephen J. Williams and Aviva Lev-Ari, PhD, RN

UPDATED on 11/5/2021

Introducing Isomorphic Labs

I believe we are on the cusp of an incredible new era of biological and medical research. Last year DeepMind’s breakthrough AI system AlphaFold2 was recognised as a solution to the 50-year-old grand challenge of protein folding, capable of predicting the 3D structure of a protein directly from its amino acid sequence to atomic-level accuracy. This has been a watershed moment for computational and AI methods for biology.
Building on this advance, today, I’m thrilled to announce the creation of a new Alphabet company –  Isomorphic Labs – a commercial venture with the mission to reimagine the entire drug discovery process from the ground up with an AI-first approach and, ultimately, to model and understand some of the fundamental mechanisms of life.

For over a decade DeepMind has been in the vanguard of advancing the state-of-the-art in AI, often using games as a proving ground for developing general purpose learning systems, like AlphaGo, our program that beat the world champion at the complex game of Go. We are at an exciting moment in history now where these techniques and methods are becoming powerful and sophisticated enough to be applied to real-world problems including scientific discovery itself. One of the most important applications of AI that I can think of is in the field of biological and medical research, and it is an area I have been passionate about addressing for many years. Now the time is right to push this forward at pace, and with the dedicated focus and resources that Isomorphic Labs will bring.

An AI-first approach to drug discovery and biology
The pandemic has brought to the fore the vital work that brilliant scientists and clinicians do every day to understand and combat disease. We believe that the foundational use of cutting edge computational and AI methods can help scientists take their work to the next level, and massively accelerate the drug discovery process. AI methods will increasingly be used not just for analysing data, but to also build powerful predictive and generative models of complex biological phenomena. AlphaFold2 is an important first proof point of this, but there is so much more to come. 
At its most fundamental level, I think biology can be thought of as an information processing system, albeit an extraordinarily complex and dynamic one. Taking this perspective implies there may be a common underlying structure between biology and information science – an isomorphic mapping between the two – hence the name of the company. Biology is likely far too complex and messy to ever be encapsulated as a simple set of neat mathematical equations. But just as mathematics turned out to be the right description language for physics, biology may turn out to be the perfect type of regime for the application of AI.

What’s next for Isomorphic Labs
This is just the beginning of what we hope will become a radical new approach to drug discovery, and I’m incredibly excited to get this ambitious new commercial venture off the ground and to partner with pharmaceutical and biomedical companies. I will serve as CEO for Isomorphic’s initial phase, while remaining as DeepMind CEO, partially to help facilitate collaboration between the two companies where relevant, and to set out the strategy, vision and culture of the new company. This will of course include the building of a world-class multidisciplinary team, with deep expertise in areas such as AI, biology, medicinal chemistry, biophysics, and engineering, brought together in a highly collaborative and innovative environment. (We are hiring!)
As pioneers in the emerging field of ‘digital biology’, we look forward to helping usher in an amazingly productive new age of biomedical breakthroughs. Isomorphic’s mission could not be a more important one: to use AI to accelerate drug discovery, and ultimately, find cures for some of humanity’s most devastating diseases.

SOURCE

https://www.isomorphiclabs.com/blog

DeepMind creates ‘transformative’ map of human proteins drawn by artificial intelligence

DeepMind plans to release hundreds of millions of protein structures for free

James Vincent July 22, 2021 11:00 am

AI research lab DeepMind has created the most comprehensive map of human proteins to date using artificial intelligence. The company, a subsidiary of Google-parent Alphabet, is releasing the data for free, with some scientists comparing the potential impact of the work to that of the Human Genome Project, an international effort to map every human gene.

Proteins are long, complex molecules that perform numerous tasks in the body, from building tissue to fighting disease. Their purpose is dictated by their structure, which folds like origami into complex and irregular shapes. Understanding how a protein folds helps explain its function, which in turn helps scientists with a range of tasks — from pursuing fundamental research on how the body works, to designing new medicines and treatments.
Previously, determining the structure of a protein relied on expensive and time-consuming experiments. But last year DeepMind showed it can produce accurate predictions of a protein’s structure using AI software called AlphaFold. Now, the company is releasing hundreds of thousands of predictions made by the program to the public.
“I see this as the culmination of the entire 10-year-plus lifetime of DeepMind,” company CEO and co-founder Demis Hassabis told The Verge. “From the beginning, this is what we set out to do: to make breakthroughs in AI, test that on games like Go and Atari, [and] apply that to real-world problems, to see if we can accelerate scientific breakthroughs and use those to benefit humanity.”



Two examples of protein structures predicted by AlphaFold (in blue) compared with experimental results (in green). 
Image: DeepMind


There are currently around 180,000 protein structures available in the public domain, each produced by experimental methods and accessible through the Protein Data Bank. DeepMind is releasing predictions for the structure of some 350,000 proteins across 20 different organisms, including animals like mice and fruit flies, and bacteria like E. coli. (There is some overlap between DeepMind’s data and pre-existing protein structures, but exactly how much is difficult to quantify because of the nature of the models.) Most significantly, the release includes predictions for 98 percent of all human proteins, around 20,000 different structures, which are collectively known as the human proteome. It isn’t the first public dataset of human proteins, but it is the most comprehensive and accurate.

If they want, scientists can download the entire human proteome for themselves, says AlphaFold’s technical lead John Jumper. “There is a HumanProteome.zip effectively, I think it’s about 50 gigabytes in size,” Jumper tells The Verge. “You can put it on a flash drive if you want, though it wouldn’t do you much good without a computer for analysis!”
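For readers who want a single structure rather than the whole archive, individual predictions can be fetched from the AlphaFold database by UniProt accession. The sketch below only constructs such a download URL; the AF-{accession}-F1-model_v{N}.pdb naming follows the public AlphaFold DB file convention, but the model version suffix changes as the database is updated, so treat it (and the example accession) as an assumption to check against the live site.

```python
# Hypothetical helper: build the AlphaFold DB download URL for one
# predicted structure instead of grabbing the ~50 GB proteome archive.
# The file-naming convention (especially the version suffix) is an
# assumption that should be verified against the live database.

def alphafold_pdb_url(uniprot_accession: str, version: int = 1) -> str:
    """Return the AlphaFold DB URL for a single predicted structure."""
    return (
        "https://alphafold.ebi.ac.uk/files/"
        f"AF-{uniprot_accession}-F1-model_v{version}.pdb"
    )

# Example: human hemoglobin subunit alpha (UniProt accession P69905)
print(alphafold_pdb_url("P69905"))
```

Paired with any HTTP client, this yields one small PDB file per protein, which is far more manageable than the full proteome download Jumper describes.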
After launching this first tranche of data, DeepMind plans to keep adding to the store of proteins, which will be maintained by Europe’s flagship life sciences lab, the European Molecular Biology Laboratory (EMBL). By the end of the year, DeepMind hopes to release predictions for 100 million protein structures, a dataset that will be “transformative for our understanding of how life works,” according to Edith Heard, director general of the EMBL.
The data will be free in perpetuity for both scientific and commercial researchers, says Hassabis. “Anyone can use it for anything,” the DeepMind CEO noted at a press briefing. “They just need to credit the people involved in the citation.”

The benefits of protein folding


Understanding a protein’s structure is useful for scientists across a range of fields. The information can help design new medicines, synthesize novel enzymes that break down waste materials, and create crops that are resistant to viruses or extreme weather. Already, DeepMind’s protein predictions are being used for medical research, including studying the workings of SARS-CoV-2, the virus that causes COVID-19.
New data will speed these efforts, but scientists note it will still take a lot of time to turn this information into real-world results. “I don’t think it’s going to be something that changes the way patients are treated within the year, but it will definitely have a huge impact for the scientific community,” Marcelo C. Sousa, a professor at the University of Colorado’s biochemistry department, told The Verge.
Scientists will have to get used to having such information at their fingertips, says DeepMind senior research scientist Kathryn Tunyasuvunakool. “As a biologist, I can confirm we have no playbook for looking at even 20,000 structures, so this [amount of data] is hugely unexpected,” Tunyasuvunakool told The Verge. “To be analyzing hundreds of thousands of structures — it’s crazy.”

Notably, though, DeepMind’s software produces predictions of protein structures rather than experimentally determined models, which means that in some cases further work will be needed to verify the structure. DeepMind says it spent a lot of time building accuracy metrics into its AlphaFold software, which ranks how confident it is for each prediction.
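Those confidence metrics travel with the released files themselves: AlphaFold DB stores its per-residue confidence score (pLDDT, on a 0–100 scale) in the B-factor column of each PDB file. The sketch below is a minimal parser, assuming standard fixed-column PDB formatting, that extracts those scores so low-confidence regions can be flagged before further work.

```python
# Minimal sketch: read AlphaFold's per-residue confidence (pLDDT) from
# the B-factor column of a predicted-structure PDB file. Assumes the
# standard fixed-column PDB layout (atom name in columns 13-16,
# B-factor in columns 61-66).

def plddt_per_residue(pdb_text: str) -> list[float]:
    """Return one pLDDT score per residue (taken from C-alpha atoms)."""
    scores = []
    for line in pdb_text.splitlines():
        if line.startswith("ATOM") and line[12:16].strip() == "CA":
            scores.append(float(line[60:66]))
    return scores

def mean_plddt(pdb_text: str) -> float:
    """Average confidence over the whole chain."""
    scores = plddt_per_residue(pdb_text)
    return sum(scores) / len(scores)
```

A structure, or a region of one, with low pLDDT is exactly the "further work will be needed to verify" case the article mentions.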

Example protein structures predicted by AlphaFold.
Image: DeepMind
Predictions of protein structures are still hugely useful, though. Determining a protein’s structure through experimental methods is expensive, time-consuming, and relies on a lot of trial and error. That means even a low-confidence prediction can save scientists years of work by pointing them in the right direction for research.
Helen Walden, a professor of structural biology at the University of Glasgow, tells The Verge that DeepMind’s data will “significantly ease” research bottlenecks, but that “the laborious, resource-draining work of doing the biochemistry and biological evaluation of, for example, drug functions” will remain.
Sousa, who has previously used data from AlphaFold in his work, says for scientists the impact will be felt immediately. “In our collaboration we had with DeepMind, we had a dataset with a protein sample we’d had for 10 years, and we’d never got to the point of developing a model that fit,” he says. “DeepMind agreed to provide us with a structure, and they were able to solve the problem in 15 minutes after we’d been sitting on it for 10 years.”

Why protein folding is so difficult

Proteins are constructed from chains of amino acids, which come in 20 varieties in the human body. Because an individual protein can comprise hundreds of amino acids, each of which can fold and twist in different directions, a molecule’s final structure has an astronomically large number of possible configurations. One estimate is that a typical protein can fold in 10^300 ways: a 1 followed by 300 zeroes.
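The scale of that number is easy to reproduce with a back-of-envelope, Levinthal-style count: assume each residue can independently adopt a fixed number of discrete states and multiply. The 10-states-per-residue figure below is chosen purely to match the article's order of magnitude, not a measured value.

```python
import math

def log10_conformations(n_residues: int, states_per_residue: int) -> float:
    """log10 of the number of distinct chain conformations, assuming
    each residue independently adopts one of a fixed set of states."""
    return n_residues * math.log10(states_per_residue)

# A 300-residue chain with 10 assumed states per residue gives
# 10^300 possible conformations, the figure quoted above.
print(log10_conformations(300, 10))  # 300.0
```

Even with a far more conservative three states per residue, a 100-residue chain already has on the order of 10^48 conformations, which is why brute-force search was never an option.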


Because proteins are too small to examine with microscopes, scientists have had to indirectly determine their structure using expensive and complicated methods like nuclear magnetic resonance and X-ray crystallography. The idea of determining the structure of a protein simply by reading a list of its constituent amino acids has been long theorized but difficult to achieve, leading many to describe it as a “grand challenge” of biology.
In recent years, though, computational methods — particularly those using artificial intelligence — have suggested such analysis is possible. With these techniques, AI systems are trained on datasets of known protein structures and use this information to create their own predictions.

DeepMind’s AlphaFold software has significantly increased the accuracy of computational protein-folding, as shown by its performance in the CASP competition. 
Image: DeepMind
Many groups have been working on this problem for years, but DeepMind’s deep bench of AI talent and access to computing resources allowed it to accelerate progress dramatically. Last year, the company competed in an international protein-folding competition known as CASP and blew away the competition. Its results were so accurate that computational biologist John Moult, one of CASP’s co-founders, said that “in some sense the problem [of protein folding] is solved.”

DeepMind’s AlphaFold program has been upgraded since last year’s CASP competition and is now 16 times faster. “We can fold an average protein in a matter of minutes, most cases seconds,” says Hassabis.


The company also released the underlying code for AlphaFold last week as open-source, allowing others to build on its work in the future.


Liam McGuffin, a professor at Reading University who developed some of the UK’s leading protein-folding software, praised the technical brilliance of AlphaFold, but also noted that the program’s success relied on decades of prior research and public data. “DeepMind has vast resources to keep this database up to date and they are better placed to do this than any single academic group,” McGuffin told The Verge. “I think academics would have got there in the end, but it would have been slower because we’re not as well resourced.”

Why does DeepMind care?

Many scientists The Verge spoke to noted the generosity of DeepMind in releasing this data for free. After all, the lab is owned by Google-parent Alphabet, which has been pouring huge amounts of resources into commercial healthcare projects. DeepMind itself loses a lot of money each year, and there have been numerous reports of tensions between the company and its parent firm over issues like research autonomy and commercial viability.

Hassabis, though, tells The Verge that the company always planned to make this information freely available, and that doing so is a fulfillment of DeepMind’s founding ethos. He stresses that DeepMind’s work is used in lots of places at Google — “almost anything you use, there’s some of our technology that’s part of that under the hood” — but that the company’s primary goal has always been fundamental research.

“The agreement when we got acquired is that we are here primarily to advance the state of AGI and AI technologies and then use that to accelerate scientific breakthroughs,” says Hassabis. “[Alphabet] has plenty of divisions focused on making money,” he adds, noting that DeepMind’s focus on research “brings all sorts of benefits, in terms of prestige and goodwill for the scientific community. There’s many ways value can be attained.”
Hassabis predicts that AlphaFold is a sign of things to come — a project that shows the huge potential of artificial intelligence to handle messy problems like human biology.

“I think we’re at a really exciting moment,” he says. “In the next decade, we, and others in the AI field, are hoping to produce amazing breakthroughs that will genuinely accelerate solutions to the really big problems we have here on Earth.”


SOURCE

https://www.theverge.com/platform/amp/2021/7/22/22586578/deepmind-alphafold-ai-protein-folding-human-proteome-released-for-free?__twitter_impression=true

Potential Use of Protein Folding Predictions for Drug Discovery

PROTAC Technology: Opportunities and Challenges

  • Hongying Gao
  • Xiuyun Sun
  • Yu Rao*

Cite this: ACS Med. Chem. Lett. 2020, 11 (3), 237–240. Publication date: March 12, 2020. https://doi.org/10.1021/acsmedchemlett.9b00597. Copyright © 2020 American Chemical Society.

Abstract

PROTAC-induced targeted protein degradation has emerged as a novel therapeutic strategy in drug development and has attracted interest from academic institutions, large pharmaceutical enterprises (e.g., AstraZeneca, Bayer, Novartis, Amgen, Pfizer, GlaxoSmithKline, Merck, and Boehringer Ingelheim), and biotechnology companies. PROTACs have opened a new chapter in novel drug development. However, any new technology faces new problems and challenges. Perspectives on the potential opportunities and challenges of PROTACs will contribute to the research and development of new protein-degradation drugs and degrader tools.

Although PROTAC technology has a bright future in drug development, it also has many challenges as follows:
(1)
To date, only one PROTAC against an “undruggable” target has been reported; (18) more cases are needed to prove the advantages of PROTACs against “undruggable” targets.
(2)
“Molecular glue”, which exists in nature, represents the mechanism of stabilizing protein–protein interactions through small-molecule modulators of E3 ligases. For instance, the plant hormone auxin binds to the ligase SCF-TIR1 to drive recruitment of Aux/IAA proteins and subsequently trigger their degradation. In addition, some small molecules that induce targeted protein degradation through a “molecular glue” mode of action have been reported. (21,22) Furthermore, it has recently been reported that some PROTACs may actually achieve target protein degradation via a mechanism that includes “molecular glue”, or via “molecular glue” alone. (23) How to distinguish between these two mechanisms, and how to combine them, is one of the challenges for future research.
(3)
Since PROTACs act in a catalytic mode, traditional methods cannot accurately evaluate their pharmacokinetic (PK) and pharmacodynamic (PD) properties. Thus, more studies are urgently needed to establish PK and PD evaluation systems for PROTACs.
(4)
How to quickly and effectively screen for target protein ligands that can be used in PROTACs, especially those targeting protein–protein interactions, is another challenge.
(5)
How to characterize degradation activity, selectivity, and possible off-target effects (across different targets, cell lines, and animal models), and how to rationally design PROTACs, remain open questions.
(6)
The human genome encodes more than 600 E3 ubiquitin ligases, yet only a few (VHL, CRBN, cIAPs, and MDM2) have been used in the design of PROTACs. Expanding the scope of usable E3 ubiquitin ligases is another challenge in this area.

PROTAC technology is rapidly developing, and with the joint efforts of the vast number of scientists in both academia and industry, these problems shall be solved in the near future.
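Challenge (3) above can be made concrete with a toy model. Because a degrader acts catalytically, the pharmacodynamic readout of interest is the target-protein level rather than drug occupancy, so protein synthesis and degradation rates enter directly. The first-order model and all rate constants below are generic illustrations, not a published PROTAC PK/PD model.

```python
import math

def protein_level(t_hours: float, k_syn: float = 10.0,
                  k_basal: float = 0.1, k_protac: float = 0.4,
                  p0: float = 100.0) -> float:
    """Target protein over time with zero-order synthesis (k_syn) and
    first-order loss (basal turnover plus PROTAC-driven degradation).
    All rate constants are arbitrary illustrative values."""
    k = k_basal + k_protac        # total first-order loss rate (1/h)
    p_ss = k_syn / k              # steady state reached under the degrader
    return p_ss + (p0 - p_ss) * math.exp(-k * t_hours)
```

With these illustrative constants the protein falls from 100 toward a new steady state of 20, and the half-time of that approach (ln 2 / k) is the kind of quantity a PROTAC-specific PD assay would need to resolve, something occupancy-based metrics do not capture.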

PROTACs have opened a new chapter for the development of new drugs and novel chemical knockdown tools and brought unprecedented opportunities to the industry and academia, which are mainly reflected in the following aspects:
(1)
Overcoming cancer drug resistance. In addition to traditional chemotherapy, kinase inhibitors have developed rapidly over the past 20 years. (12) Although kinase inhibitors are very effective in cancer therapy, patients often develop drug resistance and, consequently, disease recurrence. PROTACs show advantages in drug-resistant cancers because they degrade the whole target protein. For example, ARCC-4, targeting the androgen receptor, can overcome enzalutamide-resistant prostate cancer, (13) and L18I, targeting BTK, can overcome the C481S mutation. (14)
(2)
Eliminating both the enzymatic and nonenzymatic functions of a kinase. Traditional small-molecule inhibitors usually inhibit only the enzymatic activity of the target, while PROTACs eliminate not only the enzymatic activity but also nonenzymatic functions by degrading the entire protein. For example, FAK possesses kinase-dependent enzymatic functions and kinase-independent scaffold functions, so regulating its kinase activity alone does not inhibit all FAK functions. In 2018, a highly effective and selective FAK PROTAC reported by Craig M. Crews’ group showed far superior activity to a clinical candidate drug in cell migration and invasion assays. (15) PROTACs can therefore expand the druggable space of existing targets and regulate proteins that are difficult to control with traditional small-molecule inhibitors.
(3)
Degrading “undruggable” protein targets. At present, only 20–25% of known protein targets (including kinases, G protein-coupled receptors (GPCRs), nuclear hormone receptors, and ion channels) can be addressed using conventional drug discovery technologies. (16,17) Proteins that lack catalytic activity and/or have catalysis-independent functions are still regarded as “undruggable” targets. The involvement of Signal Transducer and Activator of Transcription 3 (STAT3) in multiple signaling pathways makes it an attractive therapeutic target; however, the lack of an obviously druggable site on the surface of STAT3 has limited the development of STAT3 inhibitors, and there are still no effective drugs directly targeting STAT3 approved by the Food and Drug Administration (FDA). In November 2019, Shaomeng Wang’s group reported the first potent PROTAC targeting STAT3, with potent biological activity in vitro and in vivo. (18) This successful case confirms the potential of PROTAC technology, especially for “undruggable” targets such as K-Ras, a difficult tumor target activated in the clinic by multiple mutations, including G12A, G12C, G12D, G12S, G12V, G13C, and G13D. (19)
(4)
Fast and reversible chemical knockdown in vivo. Traditional genetic protein knockout technologies, such as zinc-finger nucleases (ZFN), transcription activator-like effector nucleases (TALEN), and CRISPR-Cas9, usually involve long timelines, an irreversible mode of action, and high cost, which makes research inconvenient, especially in nonhuman primates. In addition, these genetic animal models sometimes produce misleading phenotypes due to gene compensation or gene mutation. More importantly, traditional genetic methods cannot be used to study the function of embryonic-lethal genes in vivo. Unlike DNA-based knockout technologies, PROTACs knock down target proteins directly, rather than acting at the genome level, and are therefore suitable for studying embryonic-lethal proteins in adult organisms. PROTACs also provide exquisite temporal control, allowing knockdown of a target protein at specific time points and recovery of the target protein after withdrawal of drug treatment. As a fast and reversible chemical knockdown method, PROTACs are an effective complement to existing genetic tools. (20)

SOURCE

PROTAC Technology: Opportunities and Challenges
  • Hongying Gao
  • Xiuyun Sun
  • Yu Rao*

Cite this: ACS Med. Chem. Lett. 2020, 11, 3, 237–240

Goal in Drug Design: Eliminating both the enzymatic and nonenzymatic functions of kinase.

Work-in-Progress

Induction and Inhibition of Protein in Galectins Drug Design

Work-in-Progress

Screening Proteins in DeepMind’s AlphaFold DataBase


Work-in-Progress

Other related research published in this Open Access Online Journal include the following:

Synthetic Biology in Drug Discovery

Peroxisome proliferator-activated receptor (PPAR-gamma) Receptors Activation: PPARγ transrepression  for Angiogenesis in Cardiovascular Disease and PPARγ transactivation for Treatment of Diabetes


New resource for finding FDA-approved medical devices that incorporate AI

Reporter: Satwik Sunnam, Research Assistant 3, One year Internship in Medical Text Analysis with Deep Learning NLP

This article reports the FDA’s list of approved medical devices that employ artificial intelligence and machine learning (AI/ML).

The FDA is providing this initial list of AI/ML-enabled medical devices marketed in the United States as a resource to the public about these devices and the FDA’s work in this area.

Contents of this list: This initial list contains publicly available information on AI/ML-enabled devices. The FDA assembled this list by searching FDA’s publicly-facing information, as well as by reviewing information in the publicly available resources cited below (*) and in other publicly available materials published by the specific manufacturers.

This list is not meant to be an exhaustive or comprehensive resource of AI/ML-enabled medical devices. Rather, it is a list of AI/ML-enabled devices across medical disciplines, based on publicly available information.

Updates to this list: The FDA plans to update this list on a periodic basis based on publicly available information. Send questions or feedback on this list to digitalhealth@fda.hhs.gov.

AI/ML-Enabled Medical Devices

Devices are listed in reverse chronological order by Date of Final Decision. To change the sort order, click the arrows in the column headings.

Use the Submission Number link to display the approval, authorization, or clearance information for the device in the appropriate FDA database. The database page will include a link to the FDA’s publicly available information.


FDA Final Decision in 2021:

List of AI/ML-enabled medical devices marketed in the United States

Date of Final Decision | Submission Number | Device | Company | Panel (Lead)
06/17/2021 | K203514 | Precise Position | Philips Healthcare (Suzhou) Co., Ltd. | Radiology
06/16/2021 | K202718 | Qmenta Care Platform Family | Mint Labs, Inc., D/B/A. QMENTA | Radiology
06/11/2021 | K210484 | LINQ II Insertable Cardiac Monitor, Zelda AI ECG Classification System | Medtronic, Inc. | Cardiovascular
06/10/2021 | K203629 | IDx-DR | Digital Diagnostics Inc. | Ophthalmic
06/02/2021 | DEN200069 | Cognoa Asd Diagnosis Aid | Cognoa, Inc. | Neurology
05/19/2021 | K210237 | CINA CHEST | Avicenna.AI | Radiology
04/30/2021 | K210001 | HYPER AiR | Shanghai United Imaging Healthcare Co., Ltd. | Radiology
04/23/2021 | K203314 | Cartesion Prime (PCD-1000A/3) V10.8 | Canon Medical Systems Corporation | Radiology
04/23/2021 | K203502 | MEDO-Thyroid | MEDO DX Pte. Ltd. | Radiology
04/21/2021 | K210556 | Preview Shoulder | Genesis Software Innovations | Radiology
04/20/2021 | K203610 | Automatic Anatomy Recognition (AAR) | Quantitative Radiology Solutions, LLC | Radiology
04/19/2021 | K203469 | AI Segmentation | Varian Medical Systems | Radiology
04/16/2021 | K203517 | Saige-Q | DeepHealth, Inc. | Radiology
04/14/2021 | K202992 | BriefCase, RIB Fractures Triage (RibFx) | Aidoc Medical, Ltd. | Radiology
04/09/2021 | DEN200055 | GI Genius | Cosmo Artificial Intelligence – AI, Ltd. | Gastroenterology-Urology
04/02/2021 | K202441 | Eclipse II with Smart Noise Cancellation | Carestream Health, Inc. | Radiology
04/01/2021 | DEN200038 | Gili Pro Biosensor (Also Known as Gili Biosensor System) | Continuse Biometrics Ltd. | Cardiovascular
03/31/2021 | K203258 | syngo.CT Lung CAD (Version VD20) | Siemens Healthcare GmbH | Radiology
03/31/2021 | K203443 | MAGNETOM Vida, MAGNETOM Sola, MAGNETOM Lumina, MAGNETOM Altea with syngo MR XA31A | Siemens Medical Solutions USA, Inc. | Radiology
03/31/2021 | K210071 | SIS System (Version 5.1.0) | Surgical Information Sciences, Inc. | Radiology
03/26/2021 | DEN200019 | Oxehealth Vital Signs | Oxehealth Limited | Cardiovascular
03/24/2021 | K203225 | Aquilion ONE (TSX‐306A/3) V10.4 with Spectral Imaging System | Canon Medical Systems Corporation | Radiology
03/23/2021 | K210209 | Viz ICH | Viz.Ai, Inc. | Radiology
03/19/2021 | K203235 | VBrain | Vysioneer Inc. | Radiology
03/09/2021 | K203256 | Imbio RV/LV Software | Imbio, LLC | Radiology
03/05/2021 | K202300 | Optellum Virtual Nodule Clinic, Optellum Software, Optellum Platform | Optellum Ltd | Radiology
03/01/2021 | DEN200022 | Analytic for Hemodynamic Instability (AHI) | Fifth Eye Inc. | Cardiovascular
02/25/2021 | K202990 | NinesMeasure | Nines, Inc. | Radiology
02/25/2021 | K203578 | OTIS 2.1 Optical Coherence Tomography System, THiA Optical Coherence Tomography System | Perimeter Medical Imaging AI, Inc. | General And Plastic Surgery
02/19/2021 | K202212 | Truplan | Circle Cardiovascular Imaging Inc. | Radiology
02/09/2021 | K203103 | Synapse 3D, Synapse 3D Base Tools v6.1 | Fujifilm Corporation | Radiology
02/05/2021 | K210053 | LVivo Software Application | DiA Imaging Analysis Ltd. | Radiology
01/29/2021 | K201411 | Visage Breast Density | Visage Imaging GmbH | Radiology
01/15/2021 | K193271 | UAI Easytriage-Rib | Shanghai United Imaging Intelligence Co., Ltd. | Radiology
01/14/2021 | K202700 | ART-Plan | TheraPanacea | Radiology
01/12/2021 | K201836 | Aquilion Lightning (TSX-036A/7) V10.2 With AiCE-I | Canon Medical Systems Corporation | Radiology
01/09/2021 | K200717 | CLEWICU System (ClewICUserver and ClewICUnitor) | Clew Medical Ltd. | Cardiovascular
01/07/2021 | K202414 | BrainInsight | Hyperfine Research, Inc. | Radiology
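Treated as structured data, the list above lends itself to simple summaries, for example tallying cleared devices by lead panel to show how heavily radiology dominates. The rows below are a small sample transcribed from the table, not the full FDA dataset.

```python
from collections import Counter

# (date of final decision, submission number, device, lead panel);
# a few rows transcribed by hand from the FDA list above.
rows = [
    ("06/17/2021", "K203514", "Precise Position", "Radiology"),
    ("06/16/2021", "K202718", "Qmenta Care Platform Family", "Radiology"),
    ("06/11/2021", "K210484", "LINQ II Insertable Cardiac Monitor", "Cardiovascular"),
    ("06/10/2021", "K203629", "IDx-DR", "Ophthalmic"),
    ("06/02/2021", "DEN200069", "Cognoa Asd Diagnosis Aid", "Neurology"),
]

# Count devices per lead panel, most common first.
by_panel = Counter(panel for *_, panel in rows)
print(by_panel.most_common())
```

Run over the full table, the same tally makes the FDA's point that these devices span many medical disciplines while clustering strongly in radiology.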

SOURCE

https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices?utm_medium=email

Other related articles Published in this Open Access Online Scientific Journal include the following:

Cardiac MRI Imaging Breakthrough: The First AI-assisted Cardiac MRI Scan Solution, HeartVista Receives FDA 510(k) Clearance for One Click™ Cardiac MRI Package

Reporter: Aviva Lev-Ari, PhD, RN

AI is on the way to lead critical ED decisions on CT

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

Applying AI to Improve Interpretation of Medical Imaging

Author and Curator: Dror Nir, PhD

Developing Deep Learning Models (DL) for the Instant Prediction of Patients with Epilepsy

Reporter: Srinivas Sriram, Research Assistant I

Science Policy Forum: Should we trust healthcare explanations from AI predictive systems – Some in industry voice their concerns

Curator: Stephen J. Williams, PhD

AI System Used to Detect Lung Cancer

Reporter: Irina Robu, Ph.D.

The Future of Speech-Based Human-Computer Interaction
Reporter: Ethan Coomber

Deep Medicine: How Artificial Intelligence Can Make Health Care Human Again
Reporter: Aviva Lev-Ari, PhD, RN

Supporting the elderly: A caring robot with ‘emotions’ and memory
Reporter: Aviva Lev-Ari, PhD, RN

Developing Deep Learning Models (DL) for Classifying Emotions through Brainwaves
Reporter: Abhisar Anand, Research Assistant I


Patients with type 2 diabetes may soon receive artificial pancreas and a smartphone app assistance

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

In a brief randomized crossover study, adults with type 2 diabetes and end-stage renal disease requiring dialysis benefited from an artificial pancreas. Trials conducted by the University of Cambridge and Inselspital, Bern University Hospital, Switzerland, show that the device can help patients safely and effectively manage their blood sugar levels and reduce the risk of dangerously low blood sugar.

Diabetes is the most prevalent cause of kidney failure, accounting for just under one-third (30%) of all cases. As the number of people living with type 2 diabetes rises, so does the number of people who require dialysis or a kidney transplant. Kidney failure raises the risk of hypoglycemia and hyperglycemia, or unusually low or high blood sugar levels, which can lead to problems ranging from dizziness to falls and even coma.

Diabetes management in adults with renal failure is difficult for both patients and healthcare practitioners. Many components of their therapy, including blood sugar targets and medications, are poorly understood. Because most oral diabetes drugs are not indicated for these patients, insulin injections are the most commonly used therapy, yet establishing optimal insulin dosing regimens is difficult.

A team from the University of Cambridge and Cambridge University Hospitals NHS Foundation Trust earlier developed an artificial pancreas with the goal of replacing insulin injections for type 1 diabetic patients. The team, collaborating with experts at Bern University Hospital and the University of Bern in Switzerland, demonstrated that the device may be used to help patients with type 2 diabetes and renal failure in a study published on 4 August 2021 in Nature Medicine.

The study’s lead author, Dr Charlotte Boughton of the Wellcome Trust-MRC Institute of Metabolic Science at the University of Cambridge, stated:

Patients living with type 2 diabetes and kidney failure are a particularly vulnerable group and managing their condition – trying to prevent potentially dangerous highs or lows of blood sugar levels – can be a challenge. There’s a real unmet need for new approaches to help them manage their condition safely and effectively.

The Device

The artificial pancreas is a compact, portable medical device that uses digital technology to automate insulin delivery to perform the role of a healthy pancreas in managing blood glucose levels. The system is worn on the outside of the body and consists of three functional components:

  • a glucose sensor
  • a computer algorithm for calculating the insulin dose
  • an insulin pump

The artificial pancreas directed insulin delivery on a Dana Diabecare RS pump using a Dexcom G6 transmitter linked to the Cambridge adaptive model predictive control algorithm, automatically administering faster-acting insulin aspart (Fiasp). The CamDiab CamAPS HX closed-loop app on an unlocked Android phone was used to manage the closed loop system, with a goal glucose of 126 mg/dL. The program calculated an insulin infusion rate based on the data from the G6 sensor every 8 to 12 minutes, which was then wirelessly routed to the insulin pump, with data automatically uploaded to the Diasend/Glooko data management platform.
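The loop described above can be caricatured in a few lines of code. This is a deliberately simplified proportional sketch, not the Cambridge adaptive model predictive control algorithm; only the 126 mg/dL target comes from the study, while the basal rate and gain below are invented for illustration.

```python
# Illustrative sketch of one cycle of a closed-loop ("artificial pancreas")
# controller. NOT the Cambridge adaptive MPC algorithm: the 126 mg/dL target
# comes from the study, but the proportional rule, basal rate, and gain are
# invented for illustration only.

TARGET_MG_DL = 126

def insulin_infusion_rate(sensor_glucose_mg_dl,
                          basal_rate_u_per_hr=1.0, gain=0.02):
    """Return an insulin infusion rate (U/hr) from one CGM reading.

    A real closed-loop system predicts future glucose with a physiological
    model and optimizes dosing; here we simply scale the deviation from the
    target and clamp at zero, since a pump cannot deliver negative insulin.
    """
    deviation = sensor_glucose_mg_dl - TARGET_MG_DL
    return max(basal_rate_u_per_hr + gain * deviation, 0.0)

# One 8-12 minute cycle: read the sensor, compute a rate, send it to the pump.
print(insulin_infusion_rate(180))  # above target: rate rises above basal
print(insulin_infusion_rate(90))   # below target: rate drops below basal
```

In the actual system this computation runs every 8 to 12 minutes and the result is routed wirelessly to the pump.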

The Case Study

Between October 2019 and November 2020, the team recruited 26 dialysis patients. Thirteen were randomly assigned to receive the artificial pancreas first; the other 13 received standard insulin therapy first. The researchers compared how long patients spent in the target blood sugar range (5.6 to 10.0 mmol/L) as outpatients over a 20-day period.

Patients using the artificial pancreas spent 53% of the time in the target range on average, compared with 38% on the control treatment. This translated to roughly 3.5 more hours per day spent in the target range.
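The primary outcome here, percent time in the 5.6-10.0 mmol/L target range, is straightforward to compute from equally spaced CGM readings. The readings below are made up for illustration:

```python
# Sketch: computing "time in target range" from equally spaced CGM readings,
# the primary outcome reported in the study (target range 5.6-10.0 mmol/L).
# The sample readings are invented for illustration.

def percent_time_in_range(readings_mmol_l, low=5.6, high=10.0):
    """Fraction of equally spaced CGM readings inside [low, high], as a %."""
    if not readings_mmol_l:
        return 0.0
    in_range = sum(1 for g in readings_mmol_l if low <= g <= high)
    return 100.0 * in_range / len(readings_mmol_l)

readings = [5.0, 6.2, 7.8, 9.9, 10.4, 8.1, 11.2, 6.0]
print(percent_time_in_range(readings))  # 5 of 8 readings in range -> 62.5
```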

The artificial pancreas resulted in lower mean blood sugar levels (10.1 vs. 11.6 mmol/L). It also cut the amount of time patients spent with potentially dangerously low blood sugar levels, known as ‘hypos.’

The artificial pancreas’ efficacy improved significantly over the study period as the algorithm adapted: the time spent in the target blood sugar range climbed from 36% on day one to over 60% by day 20. This finding underscores the need for an adaptive algorithm that can adjust to an individual’s changing insulin requirements over time.

When asked whether they would recommend the artificial pancreas to others, everyone who responded said they would. Most (92%) reported spending less time managing their diabetes with the artificial pancreas than during the control period, and a comparable proportion (87%) said they worried less about their blood sugar levels when using it.

Other advantages of the artificial pancreas mentioned by study participants included fewer finger-prick blood sugar tests, less time spent managing their diabetes, resulting in more personal time and independence, and increased peace of mind and reassurance. One disadvantage was the pain of wearing the insulin pump and carrying the smartphone.

Professor Roman Hovorka, a senior author from the Wellcome Trust-MRC Institute of Metabolic Science, mentioned:

Not only did the artificial pancreas increase the amount of time patients spent within the target range for the blood sugar levels, but it also gave the users peace of mind. They were able to spend less time having to focus on managing their condition and worrying about the blood sugar levels, and more time getting on with their lives.

The team is currently testing the artificial pancreas in outpatient settings in persons with type 2 diabetes who do not require dialysis, as well as in difficult medical scenarios such as perioperative care.

“The artificial pancreas has the potential to become a fundamental part of integrated personalized care for people with complicated medical needs,” said Dr Lia Bally, who co-led the study in Bern.

The authors stated that the study’s shortcomings included a small sample size due to “Brexit-related study funding concerns and the COVID-19 epidemic.”

Boughton concluded:

We would like other clinicians to be aware that automated insulin delivery systems may be a safe and effective treatment option for people with type 2 diabetes and kidney failure in the future.

Main Source:

Boughton, C. K., Tripyla, A., Hartnell, S., Daly, A., Herzig, D., Wilinska, M. E., & Hovorka, R. (2021). Fully automated closed-loop glucose control compared with standard insulin therapy in adults with type 2 diabetes requiring dialysis: an open-label, randomized crossover trial. Nature Medicine, 1-6.

Other Related Articles published in this Open Access Online Scientific Journal include the following:

Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes

Reporter: Amandeep Kaur, B.Sc., M.Sc.

https://pharmaceuticalintelligence.com/2021/05/29/developing-machine-learning-models-for-prediction-of-onset-of-type-2-diabetes/

Artificial pancreas effectively controls type 1 diabetes in children age 6 and up

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2020/10/08/artificial-pancreas-effectively-controls-type-1-diabetes-in-children-age-6-and-up/

Google, Verily’s Uses AI to Screen for Diabetic Retinopathy

Reporter : Irina Robu, PhD

https://pharmaceuticalintelligence.com/2019/04/08/49900/

World’s first artificial pancreas

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2019/05/16/worlds-first-artificial-pancreas/

Artificial Pancreas – Medtronic Receives FDA Approval for World’s First Hybrid Closed Loop System for People with Type 1 Diabetes

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2016/09/30/artificial-pancreas-medtronic-receives-fda-approval-for-worlds-first-hybrid-closed-loop-system-for-people-with-type-1-diabetes/

Read Full Post »

Science Policy Forum: Should we trust healthcare explanations from AI predictive systems?

Some in industry voice their concerns

Curator: Stephen J. Williams, PhD

Post on AI healthcare and explainable AI

   In a Policy Forum article in Science, “Beware explanations from AI in health care”, Boris Babic, Sara Gerke, Theodoros Evgeniou, and Glenn Cohen discuss the caveats of relying on explainable versus interpretable artificial intelligence (AI) and machine learning (ML) algorithms to make complex health decisions.  The FDA has already approved some AI/ML algorithms that analyze medical images for diagnostic purposes.  These have been discussed in prior posts on this site, as have issues arising from multi-center trials.  The authors of this Policy Forum article argue that the choice between explainable and interpretable algorithms may have far-reaching consequences in health care.

Summary

Artificial intelligence and machine learning (AI/ML) algorithms are increasingly developed in health care for diagnosis and treatment of a variety of medical conditions (1). However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of its users, who, for example, are often vulnerable to well-documented biases or algorithmic aversion (2). Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users’ skepticism, lack of trust, and slow uptake (3, 4). As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions (5). Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.

Source: https://science.sciencemag.org/content/373/6552/284?_ga=2.166262518.995809660.1627762475-1953442883.1627762475

Types of AI/ML Algorithms: Explainable and Interpretable algorithms

  1. Interpretable AI: A typical AI/ML task requires constructing an algorithm that maps vector inputs to an output related to an outcome (such as diagnosing a cardiac event from an image). Generally, the algorithm must be trained on past data with known parameters. An algorithm is called interpretable when it uses a transparent or “white box” function that is easily understandable, such as a linear function with simple, readily inspected parameters for determining relationships. Although interpretable algorithms may not be as accurate as the more complex explainable AI/ML algorithms, they are open, transparent, and easily understood by their operators.
  2. Explainable AI/ML: This type of algorithm depends on multiple complex parameters. A first round of predictions comes from a “black box” model, and a second algorithm built from an interpretable function is then used to approximate the outputs of the first. The second algorithm is trained not on the original data but on the black-box model’s predictions, accumulated over multiple iterations of computing. This method is more accurate, or deemed more reliable, in prediction, but it is very complex and not easily understandable. Many medical devices that use an AI/ML algorithm are of this type; deep learning and neural networks are examples.
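The two-step surrogate scheme in item 2 can be sketched concretely. Everything below is illustrative and assumes a stand-in nonlinear function as the “black box”: an interpretable linear model is fitted to the black box’s predictions rather than to any original labels.

```python
# Toy sketch of post-hoc "explainable AI": fit a simple interpretable
# (linear) surrogate to the *predictions* of a black-box model, not to the
# original training labels. The black box here is a stand-in nonlinear
# function, purely for illustration.

def black_box(x):
    """Pretend this is an opaque deep model's risk score."""
    return 0.5 * x + 0.1 * x ** 2

# Step 1: probe the black box to collect (input, prediction) pairs.
xs = [float(i) for i in range(11)]
ys = [black_box(x) for x in xs]

# Step 2: fit an interpretable surrogate (least-squares line) to those pairs.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# The surrogate's coefficients are the human-readable "explanation".
print(f"surrogate: prediction ~ {slope:.2f} * x + {intercept:.2f}")
```

The surrogate explains the black box only approximately, which is exactly the gap the Policy Forum authors worry about.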

The purpose of both methodologies is to address the problem of opacity: the concern that AI predictions emerging from a black box undermine trust in the AI.

For a deeper understanding of these two types of algorithms see here:

https://www.kdnuggets.com/2018/12/machine-learning-explainability-interpretability-ai.html

or https://www.bmc.com/blogs/machine-learning-interpretability-vs-explainability/

(a longer read but great explanation)

From the above blog post of Jonathan Johnson

  • How interpretability is different from explainability
  • Why a model might need to be interpretable and/or explainable
  • Who is working to solve the black box problem—and how

What is interpretability?

Does Chipotle make your stomach hurt? Does loud noise accelerate hearing loss? Are women less aggressive than men? If a machine learning model can create a definition around these relationships, it is interpretable.

All models must start with a hypothesis. Human curiosity propels a being to intuit that one thing relates to another. “Hmm…multiple black people shot by policemen…seemingly out of proportion to other races…something might be systemic?” Explore.

People create internal models to interpret their surroundings. In the field of machine learning, these models can be tested and verified as either accurate or inaccurate representations of the world.

Interpretability means that the cause and effect can be determined.

What is explainability?

ML models are often called black-box models because they allow a pre-set number of empty parameters, or nodes, to be assigned values by the machine learning algorithm. Specifically, the back-propagation step is responsible for updating the weights based on its error function.

To predict when a person might die—the fun gamble one might play when calculating a life insurance premium, and the strange bet a person makes against their own life when purchasing a life insurance package—a model will take in its inputs, and output a percent chance the given person has at living to age 80.

Below is an image of a neural network. The inputs are the yellow; the outputs are the orange. Like a rubric to an overall grade, explainability shows how significant each of the parameters, all the blue nodes, contribute to the final decision.

In this neural network, the hidden layers (the two columns of blue dots) would be the black box.

For example, we have these data inputs:

  • Age
  • BMI score
  • Number of years spent smoking
  • Career category

If this model had high explainability, we’d be able to say, for instance:

  • The career category is about 40% important
  • The number of years spent smoking weighs in at 35% important
  • The age is 15% important
  • The BMI score is 10% important
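One simple way to produce percentage figures like these is to normalize a linear model’s absolute weights. The feature names and weights below are hypothetical, chosen only to reproduce the example percentages:

```python
# Sketch of turning model weights into percentage "importance" figures like
# those above. Feature names and weights are hypothetical.

def importance_percentages(weights):
    """Normalize absolute weights so importances sum to 100%."""
    total = sum(abs(w) for w in weights.values())
    return {name: round(100 * abs(w) / total) for name, w in weights.items()}

weights = {
    "career_category": 0.8,
    "years_smoking": 0.7,
    "age": 0.3,
    "bmi_score": 0.2,
}
print(importance_percentages(weights))
```

Real explainability methods (e.g., permutation importance) estimate these contributions empirically rather than reading them off raw weights, but the output has the same shape.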

Explainability: important, not always necessary

Explainability becomes significant in the field of machine learning because, often, it is not apparent. Explainability is often unnecessary. A machine learning engineer can build a model without ever having considered the model’s explainability. It is an extra step in the building process—like wearing a seat belt while driving a car. It is unnecessary for the car to perform, but offers insurance when things crash.

The benefit a deep neural net offers to engineers is it creates a black box of parameters, like fake additional data points, that allow a model to base its decisions against. These fake data points go unknown to the engineer. The black box, or hidden layers, allow a model to make associations among the given data points to predict better results. For example, if we are deciding how long someone might have to live, and we use career data as an input, it is possible the model sorts the careers into high- and low-risk career options all on its own.

Perhaps we inspect a node and see it relates oil rig workers, underwater welders, and boat cooks to each other. It is possible the neural net makes connections between the lifespan of these individuals and puts a placeholder in the deep net to associate these. If we were to examine the individual nodes in the black box, we could note this clustering interprets water careers to be a high-risk job.

In the previous chart, each one of the lines connecting from the yellow dot to the blue dot can represent a signal, weighing the importance of that node in determining the overall score of the output.

  • If that signal is high, that node is significant to the model’s overall performance.
  • If that signal is low, the node is insignificant.

With this understanding, we can define explainability as:

Knowledge of what one node represents and how important it is to the model’s performance.

So how does choice of these two different algorithms make a difference with respect to health care and medical decision making?

The authors argue: 

“Regulators like the FDA should focus on those aspects of the AI/ML system that directly bear on its safety and effectiveness – in particular, how does it perform in the hands of its intended users?”

The authors suggest:

  • Enhanced, more involved clinical trials
  • Providing individuals added flexibility when interacting with a model, for example inputting their own test data
  • More interaction between users and model developers
  • Determining which situations call for interpretable AI versus explainable AI (for instance, predicting which patients will require dialysis after kidney damage)

Other articles on AI/ML in medicine and healthcare on this Open Access Journal include

Applying AI to Improve Interpretation of Medical Imaging

Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence #AI: Realizing Precision Medicine One Patient at a Time

LIVE Day Three – World Medical Innovation Forum ARTIFICIAL INTELLIGENCE, Boston, MA USA, Monday, April 10, 2019

Cardiac MRI Imaging Breakthrough: The First AI-assisted Cardiac MRI Scan Solution, HeartVista Receives FDA 510(k) Clearance for One Click™ Cardiac MRI Package

 

Read Full Post »

AI is on the way to lead critical ED decisions on CT

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

Artificial intelligence (AI) has infiltrated many organizational processes, raising concerns that robotic systems will eventually replace many humans in decision-making. The advent of AI as a tool for improving health care provides new prospects to improve patient and clinical-team performance, reduce costs, and improve public health. Examples include, but are not limited to, automation; information synthesis for patients, “fRamily” (friends and family unpaid caregivers), and health care professionals; and suggestions and visualization of information for collaborative decision making.

In the emergency department (ED), patients with Crohn’s disease (CD) are routinely subjected to abdominopelvic computed tomography (APCT). It is necessary to diagnose clinically actionable findings (CAF), since they may require immediate intervention, which is typically surgical. Repeated APCTs, on the other hand, result in higher ionizing radiation exposure. Most guidance on when to perform APCT is clinical and empiric, and emergency surgeons struggle to identify which Crohn’s disease patients actually require a CT scan to determine the source of acute abdominal distress.

Image Courtesy: Jim Coote via Pixabay https://www.aiin.healthcare/media/49446

Aid seems to be on the way. Researchers employed machine learning to accurately distinguish these patients from Crohn’s patients who present with the same complaint but can safely avoid the repeated exposure to contrast materials and ionizing radiation that CT would otherwise entail.

The study entitled “Machine learning for selecting patients with Crohn’s disease for abdominopelvic computed tomography in the emergency department” was published on July 9 in Digestive and Liver Disease by gastroenterologists and radiologists at Tel Aviv University in Israel.

Retrospectively, Jacob Ollech and his fellow researchers analyzed 101 emergency visits by patients with Crohn’s disease who underwent abdominopelvic CT.

They were looking for examples where a scan revealed clinically actionable results. These were classified as intestinal blockage, perforation, intra-abdominal abscess, or complex fistula by the researchers.

On CT, 44 (43.5%) of the 101 cases reviewed had such findings.

Ollech and colleagues used machine learning to design a decision-support tool that requires only four basic clinical factors, testing an AI approach for making the call.

The approach was successful in categorizing patients into low- and high-risk groupings. The researchers were able to risk-stratify patients based on the likelihood of clinically actionable findings on abdominopelvic CT as a result of their success.
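As a rough sketch of how such a four-factor decision-support tool might score a patient: the clinical factors, weights, bias, and threshold below are hypothetical, not the model published in Digestive and Liver Disease.

```python
# Hypothetical sketch of a four-factor risk score for clinically actionable
# findings (CAF) on CT. The factors, weights, bias, and 0.5 threshold are
# invented for illustration; the study's actual model differs.
import math

def caf_risk(features, weights, bias=-2.0):
    """Logistic score: probability of clinically actionable findings on CT."""
    z = bias + sum(weights[k] * features[k] for k in weights)
    return 1 / (1 + math.exp(-z))

weights = {"crp_elevated": 1.2, "prior_surgery": 0.9,
           "obstructive_symptoms": 1.5, "fever": 0.8}

patient = {"crp_elevated": 1, "prior_surgery": 0,
           "obstructive_symptoms": 1, "fever": 1}

p = caf_risk(patient, weights)
print("high risk -> CT" if p >= 0.5 else "low risk -> observe", round(p, 2))
```

Stratifying patients into low- and high-risk groups then amounts to comparing the score against a threshold chosen to balance missed findings against unnecessary scans.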

Ollech and co-authors admit that their limited sample size, retrospective strategy, and lack of external validation are shortcomings.

Moreover, several patients fell into an intermediate risk category, implying that a standard workup would have been required to guide CT decision-making in a real-world situation anyhow.

Consequently, they generate the following conclusion:

We believe this study shows that a machine learning-based tool is a sound approach for better-selecting patients with Crohn’s disease admitted to the ED with acute gastrointestinal complaints about abdominopelvic CT: reducing the number of CTs performed while ensuring that patients with high risk for clinically actionable findings undergo abdominopelvic CT appropriately.

Main Source:

Konikoff, Tom, Idan Goren, Marianna Yalon, Shlomit Tamir, Irit Avni-Biron, Henit Yanai, Iris Dotan, and Jacob E. Ollech. “Machine learning for selecting patients with Crohn’s disease for abdominopelvic computed tomography in the emergency department.” Digestive and Liver Disease (2021). https://www.sciencedirect.com/science/article/abs/pii/S1590865821003340

Other Related Articles published in this Open Access Online Scientific Journal include the following:

AI App for People with Digestive Disorders

Reporter: Irina Robu, Ph.D.

https://pharmaceuticalintelligence.com/2019/06/24/ai-app-for-people-with-digestive-disorders/

Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer driver genes

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

https://pharmaceuticalintelligence.com/2021/05/04/machine-learning-ml-in-cancer-prognosis-prediction-helps-the-researcher-to-identify-multiple-known-as-well-as-candidate-cancer-diver-genes/

AI System Used to Detect Lung Cancer

Reporter: Irina Robu, Ph.D.

https://pharmaceuticalintelligence.com/2019/06/28/ai-system-used-to-detect-lung-cancer/

Artificial Intelligence: Genomics & Cancer

https://pharmaceuticalintelligence.com/ai-in-genomics-cancer/

Yet another Success Story: Machine Learning to predict immunotherapy response

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

https://pharmaceuticalintelligence.com/2021/07/06/yet-another-success-story-machine-learning-to-predict-immunotherapy-response/

Systemic Inflammatory Diseases as Crohn’s disease, Rheumatoid Arthritis and Longer Psoriasis Duration May Mean Higher CVD Risk

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2017/10/09/systemic-inflammatory-diseases-as-crohns-disease-rheumatoid-arthritis-and-longer-psoriasis-duration-may-mean-higher-cvd-risk/

Autoimmune Inflammatory Bowel Diseases: Crohn’s Disease & Ulcerative Colitis: Potential Roles for Modulation of Interleukins 17 and 23 Signaling for Therapeutics

Curators: Larry H Bernstein, MD FCAP and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2016/01/23/autoimmune-inflammtory-bowl-diseases-crohns-disease-ulcerative-colitis-potential-roles-for-modulation-of-interleukins-17-and-23-signaling-for-therapeutics/

Inflammatory Disorders: Inflammatory Bowel Diseases (IBD) – Crohn’s and Ulcerative Colitis (UC) and Others

Curators: Larry H. Bernstein, MD, FCAP and Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/gama-delta-epsilon-gde-is-a-global-holding-company-absorbing-lpbi/subsidiary-5-joint-ventures-for-ip-development-jvip/drug-discovery-with-3d-bioprinting/ibd-inflammatory-bowl-diseases-crohns-and-ulcerative-colitis/

Read Full Post »

This AI Just Evolved From Companion Robot To Home-Based Physician Helper

Reporter: Ethan Coomber, Research Assistant III, Data Science and Podcast Library Development 

Article Author: Gil Press Senior Contributor Enterprise & Cloud @Forbes 

Twitter: @GilPress I write about technology, entrepreneurs and innovation.

Intuition Robotics announced today that it is expanding its mission of improving the lives of older adults to include enhancing their interactions with their physicians. The Israeli startup has developed the AI-based, award-winning proactive social robot ElliQ which has spent over 30,000 days in older adults’ homes over the past two years. Now ElliQ will help increase patient engagement while offering primary care providers continuous actionable data and insights for early detection and intervention.

The very big challenge Intuition Robotics set out to solve was to “understand how to create a relationship between a human and a machine,” says co-founder and CEO Dor Skuler. Unlike a number of unsuccessful high-profile social robots (e.g., Pepper) that tried to perform multiple functions in multiple settings, ElliQ has focused exclusively on older adults living alone. Understanding empathy and how to grow a trusting relationship were the key objectives of Intuition Robotics’ research project, as well as how to continuously learn the specific (and changing) behavioral characteristics, habits, and preferences of the older adults participating in the experiment.

The results are impressive: 90% of users engage with ElliQ every day, without deterioration in engagement over time. When ElliQ proactively initiates deep conversational interactions with its users, there’s a 70% response rate. Most important, the participants share something personal with ElliQ almost every day. “She has picked up my attitude… she’s figured me out,” says Deanna Dezern, an ElliQ user who describes her robot companion as “my sister from another mother.”

Higher patient engagement leads to lower costs of delivering care and the quality of the physician-patient relationship is positively associated with improved functional health, studies have found. Typically, however, primary care physicians see their patients anywhere from once a month to once a year, even though about 85% of seniors in the U.S. have at least one chronic health condition. ElliQ, with the consent of its users, can provide data on the status of patients in between office visits and facilitate timely and consistent communications between physicians and their patients.

Supporting the notion of a home-based physician assistant robot is the transformation of healthcare delivery in the U.S. More and more primary care physicians are moving from a fee-for-service business model, where doctors are paid according to the procedures used to treat a patient, to “capitation,” where doctors are paid a set amount for each patient they see. This shift in how doctors are compensated is gaining momentum as a key solution for reducing the skyrocketing costs of healthcare: “…inadequate, unnecessary, uncoordinated, and inefficient care and suboptimal business processes eat up at least 35%—and maybe over 50%—of the more than $3 trillion that the country spends annually on health care. That suggests more than $1 trillion is being squandered,” states “The Case for Capitation,” a Harvard Business Review article.

Under this new business model, physicians have a strong incentive to reduce or eliminate visits to the ER and hospitalization, so ElliQ’s assistance in early intervention and support of proactive and preventative healthcare is highly valuable. ElliQ’s “new capabilities provide physicians with visibility into the patient’s condition at home while allowing seamless communication… can assist me and my team in early detection and mitigation of health issues, and it increases patients’ involvement in their care through more frequent engagement and communication,” says in a statement Dr. Peter Barker of Family Doctors, a Mass General Brigham-affiliated practice in Swampscott, MA, that is working with Intuition Robotics.

With the new stage in its evolution, ElliQ becomes “a conversational agent for self-reported data on how people are doing based on what the doctor is telling us to look for and, at the same time, a super-simple communication channel between the physician and the patient,” says Skuler. As only 20% of the individual’s health has to do with the administration of healthcare, Skuler says the balance is already taken care of by ElliQ—encouraging exercise, watching nutrition, keeping mentally active, connecting to the outside world, and promoting a sense of purpose.

A recent article in Communications of the ACM pointed out that “usability concerns have for too long overshadowed questions about the usefulness and acceptability of digital technologies for older adults.” Specifically, the authors challenge the long-held assumption that accessibility and aging research “fall under the same umbrella despite the fact that aging is neither an illness nor a disability.”

For Skuler, a “pyramid of value” is represented in Intuition Robotics’ offering. At the foundation is the physical product, easy to use and operate and doing what it is expected to do. Then there is the layer of “building relationships based on trust and empathy,” with a lot of humor and social interaction and activities for the users. On top are specific areas of value to older adults, and the first one is healthcare. There will be more in the future, anything that could help older adults live better lives, such as direct connections to the local community. “Healthcare is an interesting experiment and I’m very much looking forward to seeing what else the future holds for ElliQ,” says Skuler.

Original. Reposted with permission, 7/7/2021.

Other related articles published in this Open Access Online Scientific Journal include the Following:

The Future of Speech-Based Human-Computer Interaction
Reporter: Ethan Coomber
https://pharmaceuticalintelligence.com/2021/06/23/the-future-of-speech-based-human-computer-interaction/

Deep Medicine: How Artificial Intelligence Can Make Health Care Human Again
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2020/11/11/deep-medicine-how-artificial-intelligence-can-make-health-care-human-again/

Supporting the elderly: A caring robot with ‘emotions’ and memory
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2015/02/10/supporting-the-elderly-a-caring-robot-with-emotions-and-memory/

Developing Deep Learning Models (DL) for Classifying Emotions through Brainwaves
Reporter: Abhisar Anand, Research Assistant I
https://pharmaceuticalintelligence.com/2021/06/22/developing-deep-learning-models-dl-for-classifying-emotions-through-brainwaves/

Read Full Post »

The Future of Speech-Based Human-Computer Interaction

Reporter: Ethan Coomber, Research Assistant III

2021 LPBI Summer Internship in Data Science and Podcast Library Development
This article reports on research conducted at the Tokyo Institute of Technology, published on 9 June 2021.

As technology continues to advance, the human-computer relationship develops alongside it. As researchers and developers find new ways to improve a computer’s ability to recognize the distinct pitches that compose a human voice, the potential of technology begins to push past what people previously thought possible. This constant improvement has also revealed new challenges in voice-based technological interaction.

When humans interact with one another, we do not convey our message with only our voices. There are a multitude of complexities to our emotional states and personality that cannot be obtained simply through the sound coming out of our mouths. Aspects of our communication such as rhythm, tone, and pitch are essential in our understanding of one another. This presents a challenge to artificial intelligence as technology is not able to pick up on these cues.

https://www.eurekalert.org/pub_releases/2021-06/tiot-tro060121.php

In the modern day, our interactions with voice-based devices and services continue to increase. In this light, researchers at Tokyo Institute of Technology and RIKEN, Japan, have performed a meta-synthesis to understand how we perceive and interact with the voice (and the body) of various machines. Their findings have generated insights into human preferences, and can be used by engineers and designers to develop future vocal technologies.

– Kate Seaborn

While it will always be difficult for technology to perfectly replicate human interaction, the inclusion of filler terms such as “I mean…”, “um”, and “like…” has been shown to improve people’s comfort when communicating with technology. Humans prefer communicating with agents that match their personality and overall communication style. The illusion that the artificial intelligence is human has a dramatic effect on the overall comfort of the person interacting with it. Artificial intelligence that comes across as happy or empathetic, and that speaks with a higher-pitched voice, has also been shown to improve communication.

Using machine learning, computers can recognize patterns within human speech rather than requiring explicit programming for each pattern. This allows the technology to adapt to human tendencies as it continues to observe them. Over time, humans develop nuances in the way they speak and communicate, frequently shortening certain words. A common example is the expression “I don’t know”, which is often reduced to “dunno”. Using machine learning, computers can recognize this pattern and infer the speaker’s intention.
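As an illustration of the idea (not code from the article), a toy normalizer can map informal variants like “dunno” back to a canonical phrase. A real system would learn such mappings statistically from data; here simple string similarity stands in for a learned model, and the phrase table and cutoff are invented for the example.

```python
# Toy speech-normalization sketch: map informal variants back to a
# canonical phrase using string similarity as a stand-in for a learned model.
import difflib

CANONICAL = {
    "i don't know": ["dunno", "i dunno", "don't know"],
    "going to": ["gonna"],
    "want to": ["wanna"],
}

# Invert the table: variant -> canonical phrase.
VARIANTS = {v: k for k, vs in CANONICAL.items() for v in vs}

def normalize(utterance: str) -> str:
    """Replace known informal variants with their canonical phrase."""
    out = []
    for word in utterance.lower().split():
        match = difflib.get_close_matches(word, VARIANTS, n=1, cutoff=0.85)
        out.append(VARIANTS[match[0]] if match else word)
    return " ".join(out)
```

A learned model would generalize beyond the hard-coded table, but the interface is the same: raw utterance in, normalized intent out.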

With advances in technology and the arrival of voice assistants in our lives, we are expanding our interactions to include computer interfaces and environments. While many advances are still needed to reach the desired level of communication, developers have identified the steps necessary to achieve natural human-computer interaction.

Sources:

Tokyo Institute of Technology. “The role of computer voice in the future of speech-based human-computer interaction.” ScienceDaily. ScienceDaily, 9 June 2021.

Rev. “Speech Recognition Trends to Watch in 2021 and Beyond: Responsible AI.” Rev, 2 June 2021, http://www.rev.com/blog/artificial-intelligence-machine-learning-speech-recognition.

“The Role of Computer Voice in the Future of Speech-Based Human-Computer Interaction.” EurekAlert!, 1 June 2021, http://www.eurekalert.org/pub_releases/2021-06/tiot-tro060121.php.

Other related articles published in this Open Access Online Scientific Journal include the Following:

Deep Medicine: How Artificial Intelligence Can Make Health Care Human Again
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2020/11/11/deep-medicine-how-artificial-intelligence-can-make-health-care-human-again/

Supporting the elderly: A caring robot with ‘emotions’ and memory
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2015/02/10/supporting-the-elderly-a-caring-robot-with-emotions-and-memory/

Developing Deep Learning Models (DL) for Classifying Emotions through Brainwaves
Reporter: Abhisar Anand, Research Assistant I
https://pharmaceuticalintelligence.com/2021/06/22/developing-deep-learning-models-dl-for-classifying-emotions-through-brainwaves/

Evolution of the Human Cell Genome Biology Field of Gene Expression, Gene Regulation, Gene Regulatory Networks and Application of Machine Learning Algorithms in Large-Scale Biological Data Analysis
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2019/12/08/evolution-of-the-human-cell-genome-biology-field-of-gene-expression-gene-regulation-gene-regulatory-networks-and-application-of-machine-learning-algorithms-in-large-scale-biological-data-analysis/

The Human Genome Project
Reporter: Larry H Bernstein, MD, FCAP, Curator
https://pharmaceuticalintelligence.com/2015/09/09/the-human-genome-project/

Read Full Post »

Developing Deep Learning Models (DL) for the Instant Prediction of Patients with Epilepsy

Reporter: Srinivas Sriram, Research Assistant I
Research Team: Srinivas Sriram, Abhisar Anand

2021 LPBI Summer Internship in Data Science and Website Construction
This article reports on a research study conducted from January 2021 to May 2021.
This research was completed before the 2021 LPBI Summer Internship, which began on 6/15/2021.

The main aim of this study was to utilize the dataset (shown above) to develop a DL network that could accurately predict new seizures from incoming data. To begin the study, our research group performed exploratory data analysis on the dataset and recognized the key defining pattern that allowed for the development of the DL model: as the graph above shows, lines representing seizure data have major spikes to extreme hertz values, while lines representing normal patient data remain stable without any spikes. We utilized this pattern as a baseline for our model.
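The spike pattern described above can be sketched as a simple amplitude rule. This is illustrative only: the team's actual classifier is the LSTM described later, and the threshold value here is an arbitrary stand-in.

```python
# Minimal sketch of the baseline pattern: seizure traces show extreme-amplitude
# spikes, normal traces stay within a stable band. A plain amplitude threshold
# separates the two; the DL model learns a far more robust version of this rule.
import numpy as np

def looks_like_seizure(eeg_window: np.ndarray, threshold: float = 400.0) -> bool:
    """Flag a window whose absolute amplitude exceeds the threshold."""
    return bool(np.max(np.abs(eeg_window)) > threshold)

rng = np.random.default_rng(0)
normal = rng.normal(0, 40, 178)    # stable background activity
seizure = normal.copy()
seizure[50:60] += 1200             # inject an extreme spike
```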

Conclusions and Future Improvements:

Through our system, we were able to create a prototype solution that predicts when seizures happen in a patient, using an accurate LSTM network and a reliable hardware system. This research can be implemented in hospitals with patients suffering from epilepsy, in order to help them as soon as they experience a seizure and prevent damage. However, the future improvements listed below would make the solution even more viable in the healthcare industry.

  • Needs to be implemented on a more reliable EEG headset (covers all neurons of the brain, less prone to electric disruptions shown in the prototype). 
  • Needs to be tested on live patients to deem whether the solution is viable and provides a potential solution to the problem. 
  • The network can always be fine-tuned to maximize performance. 
  • A better alert system can be implemented to provide as much help as possible. 

These improvements, when implemented, can help provide a real solution to one of the most common diseases faced in the world. 

Background Information:

Epilepsy is a brain disorder diagnosed after multiple occurrences of seizures that are recurrent and/or happen within a brief timespan. According to the World Health Organization, seizure disorders, including epilepsy, are among the most common neurological diseases. Those who suffer seizures have a three times higher risk of premature death. Epilepsy is often treatable, especially when physicians can provide the necessary treatment quickly. When untreated, however, seizures can cause physical, psychological, and emotional harm, including isolation from others. Quick diagnosis and treatment prevent suffering and save lives. The importance of a quick diagnosis of epilepsy led our research team to develop Deep Learning (DL) algorithms for the sole purpose of detecting epileptic seizures as soon as they occur.

Throughout the years, one common means of detecting epilepsy has emerged in the form of the electroencephalogram (EEG). EEGs can detect and compile “normal” and “abnormal” brain wave activity and “indicate brain activity or inactivity that correlates with physical, emotional, and intellectual activities”. EEG waves are classified mainly by brain wave frequency (EEG, 2020). The most commonly studied are delta, theta, alpha, sigma, and beta waves. Alpha waves, 8 to 12 hertz, are the key waves that occur in normal awake people and the defining factor for the everyday function of the adult brain. Beta waves, 13 to 30 hertz, are the most common type of wave in both children and adults; they are found in the frontal and central areas of the brain and occur at a certain frequency which, if slowed, is likely to cause dysfunction. Theta waves, 4 to 7 hertz, are also found in the front of the brain, but they slowly move backward as drowsiness increases and the brain enters the early stages of sleep; theta waves are known to be active during focal seizures. Delta waves, 0.5 to 4 hertz, are found in the frontal areas of the brain during deep sleep. Sigma waves, 12 to 16 hertz, occur during sleep. EEG detection of electrical brain wave frequencies can be used to detect and diagnose seizures based on their deviation from usual brain wave patterns.
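The band boundaries quoted above can be captured in a small helper. Note that the quoted sigma (12-16 Hz) and beta (13-30 Hz) ranges overlap, so this sketch resolves the overlap by checking sigma first, an editorial choice rather than something specified in the article.

```python
# Map an EEG frequency (Hz) to the band names used in the text above.
def eeg_band(freq_hz: float) -> str:
    if 0.5 <= freq_hz < 4:
        return "delta"
    if 4 <= freq_hz < 8:
        return "theta"
    if 8 <= freq_hz < 12:
        return "alpha"
    if 12 <= freq_hz < 16:   # sigma checked before beta to resolve the overlap
        return "sigma"
    if 16 <= freq_hz <= 30:
        return "beta"
    return "out of range"
```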

In this particular research project, our research group hoped to develop a DL algorithm that when implemented on a live, portable EEG brain wave capturing device, could accurately predict when a particular patient was suffering from Epilepsy as soon as it occurred. This would be accomplished by creating a network that could detect when the brain frequencies deviated from the normal frequency ranges. 

The Study:

Line Graph representing EEG Brain Waves from a Seizure versus EEG Brain Waves from a normal individual. 

Source Dataset: https://archive.ics.uci.edu/ml/datasets/Epileptic+Seizure+Recognition

To expand on the dataset: it is an EEG data set compiled by Qiuyi Wu and Ernest Fokoue (2021) from the work of medical researchers R. Andrzejak, M.D., et al. (2001), which had been made public domain through the UCI Machine Learning Repository. We also confirmed fair-use permission with UCI. The dataset had been gathered by Andrzejak during examinations of 500 patients with a chronic seizure disorder. R. G. Andrzejak et al. (2001) recorded each entry in the EEG dataset within 23.6 seconds in a time-series data structure. Each row in the dataset represents one recorded patient. The continuous variables are single EEG data points at specific points in time during the measuring period; a final y-variable indicates whether or not the patient had a seizure during the period the data was recorded. The continuous EEG variables for each patient varied widely depending on whether the patient was experiencing a seizure at the time. The Wu & Fokoue dataset (2021) consists of one file of 11,500 rows, each with 178 sequential data points, concatenated from the original dataset of 5 data folders, each including 100 files of EEG recordings of 23.6 seconds containing 4,097 data points. Each folder contained a single original subset: subset A contained EEG data gathered during epileptic seizures; subset B, EEG data from brain tumor sites; subset C, data from a healthy site where tumors had been located; and subsets D and E, data from non-seizure patients at rest with eyes open and closed, respectively.
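A hedged loading sketch, assuming the usual layout of the Wu & Fokoue CSV on the UCI repository (feature columns named X1…X178 plus a y column where y == 1 marks seizure segments and 2-5 the non-seizure subsets); the column names are assumptions and should be checked against your copy of the file.

```python
# Load the UCI epileptic-seizure CSV and binarise the 5-class label,
# assuming columns X1..X178 and y (y == 1 means seizure).
import numpy as np
import pandas as pd

def load_epilepsy_csv(path: str):
    df = pd.read_csv(path)
    # Keep only the EEG feature columns (ignores any index/id column).
    X = df[[c for c in df.columns if c.startswith("X")]].to_numpy(dtype=np.float32)
    y = (df["y"].to_numpy() == 1).astype(np.int64)   # 1 = seizure, 0 = otherwise
    # LSTMs expect (samples, timesteps, features).
    return X.reshape(len(X), -1, 1), y
```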

Based on the described data, our team recognized that a Recurrent Neural Network (RNN) was needed to take the sequential data as input and output whether it represented a seizure. However, we realized that plain RNN computations can grow substantially over long sequences, reducing computation speed, so our group decided to implement a long short-term memory (LSTM) model. After deciding on the model's architecture, we trained it in two DL frameworks in Python: TensorFlow and PyTorch. Through various rounds of retesting and redesigning, we trained two accurate models, one in each framework, that not only performed well on the training data but also accurately predicted new data in the testing set (98 percent accuracy on unseen data). These LSTM networks classify EEG data as normal while the brain waves are stable, and immediately flag seizure data when a dramatic spike occurs.
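A minimal TensorFlow/Keras sketch of an LSTM binary classifier in this spirit; the article does not give the team's layer sizes or hyperparameters, so every choice below is illustrative.

```python
# Hedged sketch: an LSTM that reads 178 EEG samples and emits P(seizure).
import tensorflow as tf

def build_seizure_lstm(timesteps: int = 178) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(timesteps, 1)),          # one EEG value per step
        tf.keras.layers.LSTM(64),                      # sequence summary
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(seizure)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Training would then be the usual `model.fit(X_train, y_train, validation_data=...)` loop on the reshaped (samples, 178, 1) arrays.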

After training our model, we implemented it in a real-life prototype using a single-board computer (SBC), the Raspberry Pi 4, and a live-capturing EEG headset, the Muse 2 headband. The two hardware components sync over Bluetooth, and the headband returns EEG data to the Raspberry Pi, which processes it. Through the muselsl API in Python, we were able to retrieve this EEG data in a format similar to the one used during training. The new input data is fed into our LSTM network (the TensorFlow model was chosen for the prototype due to its better performance than the PyTorch network), which outputs results for the live-captured EEG data in small intervals. This constant cycle can accurately predict a seizure as soon as it occurs, as batches of EEG data are fed into the LSTM network. Part of the reason our research group chose the Muse headband was not only its compatibility with Python but also the fact that it could represent seizure data. Because none of our members had epilepsy, we had to find a reliable way of testing the model on new data; by introducing electrical disruptions into the wearable Muse headband, we were able to simulate seizures that our network's predictions flagged. In our program, we implemented an alert system that emails the patient's doctor as soon as a seizure is detected.
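The inference loop described above can be sketched as follows. The windowing logic is self-contained and testable offline; the muselsl/pylsl wiring appears only in comments because it requires a Muse headset, and those calls are assumptions based on the libraries' typical usage, not code from the project.

```python
# Sketch of the prototype's cycle: buffer streamed EEG values into fixed-size
# windows, run the model on each window, and fire an alert on a positive.
from collections import deque
from typing import Callable, Iterable, List

def run_inference(samples: Iterable[float],
                  predict: Callable[[List[float]], bool],
                  window: int = 178,
                  on_seizure: Callable[[], None] = lambda: None) -> int:
    """Feed fixed-size windows of streamed EEG values to the model.

    Returns the number of windows flagged as seizures.
    """
    buf: deque = deque(maxlen=window)
    alerts = 0
    for s in samples:
        buf.append(s)
        if len(buf) == window:
            if predict(list(buf)):
                alerts += 1
                on_seizure()        # e.g. email the patient's doctor
            buf.clear()
    return alerts

# On the Raspberry Pi, `samples` would come from the Muse 2 over an LSL
# stream, roughly (assumed wiring, untested here):
#   from muselsl import stream, list_muses
#   from pylsl import StreamInlet, resolve_byprop
#   stream(list_muses()[0]["address"])                 # start Bluetooth streaming
#   inlet = StreamInlet(resolve_byprop("type", "EEG")[0])
#   sample, timestamp = inlet.pull_sample()
```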

Individual wearing the Muse 2 Headband

Image Source: https://www.techguide.com.au/reviews/gadgets-reviews/muse-2-review-device-help-achieve-calm-meditation/

Sources Cited:

Wu, Q. & Fokoue, E. (2021).  Epileptic seizure recognition data set: Data folder & Data set description. UCI Machine Learning Repository: Epileptic Seizure Recognition. Jan. 30. Center for Machine Learning and Intelligent Systems, University of California Irvine.

Nayak, C. S. (2020). EEG normal waveforms. StatPearls [Internet]. U.S. National Library of Medicine, 31 Jul. 2020, www.ncbi.nlm.nih.gov/books/NBK539805/#.

Epilepsy. (2019). World Health Organization Fact Sheet. Jun. https://www.who.int/news-room/fact-sheets/detail/epilepsy

Other Related Articles published in this Open Access Online Scientific Journal include the following:

Developing Deep Learning Models (DL) for Classifying Emotions through Brainwaves

Reporter: Abhisar Anand, Research Assistant I

https://pharmaceuticalintelligence.com/2021/06/22/developing-deep-learning-models-dl-for-classifying-emotions-through-brainwaves/

Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer diver genes

Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc

https://pharmaceuticalintelligence.com/2021/05/04/machine-learning-ml-in-cancer-prognosis-prediction-helps-the-researcher-to-identify-multiple-known-as-well-as-candidate-cancer-diver-genes/

Deep Learning-Assisted Diagnosis of Cerebral Aneurysms

Reporter: Dror Nir, PhD

https://pharmaceuticalintelligence.com/2019/06/09/deep-learning-assisted-diagnosis-of-cerebral-aneurysms/

Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes

Reporter: Amandeep Kaur, B.Sc., M.Sc.

https://pharmaceuticalintelligence.com/2021/05/29/developing-machine-learning-models-for-prediction-of-onset-of-type-2-diabetes/

Deep Learning extracts Histopathological Patterns and accurately discriminates 28 Cancer and 14 Normal Tissue Types: Pan-cancer Computational Histopathology Analysis

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/10/28/deep-learning-extracts-histopathological-patterns-and-accurately-discriminates-28-cancer-and-14-normal-tissue-types-pan-cancer-computational-histopathology-analysis/

A new treatment for depression and epilepsy – Approval of external Trigeminal Nerve Stimulation (eTNS) in Europe

Reporter: Howard Donohue, PhD (EAW)

https://pharmaceuticalintelligence.com/2012/10/07/a-new-treatment-for-depression-and-epilepsy-approval-of-external-trigeminal-nerve-stimulation-etns-in-europe/

Mutations in a Sodium-gated Potassium Channel Subunit Gene related to a subset of severe Nocturnal Frontal Lobe Epilepsy

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2012/10/22/mutations-in-a-sodium-gated-potassium-channel-subunit-gene-to-a-subset-of-severe-nocturnal-frontal-lobe-epilepsy/

Read Full Post »

Developing Deep Learning Models (DL) for Classifying Emotions through Brainwaves

Reporter: Abhisar Anand, Research Assistant I
Research Team: Abhisar Anand, Srinivas Sriram

2021 LPBI Summer Internship in Data Science and Website construction.
This article reports on a research study conducted through December 2020.
This research was completed before the 2021 LPBI Summer Internship, which began on 6/15/2021.

As the field of Artificial Intelligence progresses, various algorithms have been implemented by researchers to classify emotions from EEG signals. Researchers from China and Singapore released a paper (“An Investigation of Deep Learning Models for EEG-Based Emotion Recognition”) analyzing different DL model architectures: deep neural networks (DNN), convolutional neural networks (CNN), long short-term memory (LSTM), and a hybrid of CNN and LSTM (CNN-LSTM). The dataset used in this investigation was the DEAP dataset, which consists of EEG signals from participants who watched 40 one-minute music videos and then rated them in terms of arousal, valence, like/dislike, dominance, and familiarity. The investigation found that the CNN (90.12%) and CNN-LSTM (94.7%) models had the highest performance of the batch. The DNN model trained very quickly but could not match the accuracy of the other models. The LSTM model also failed to perform accurately, and its training was much slower, as convergence was difficult to achieve.

This research into various model architectures gives a sense of what the future of emotion classification with AI holds. These deep learning models can be implemented in a variety of scenarios across the world to help detect emotions where doing so would otherwise be difficult. However, more research into model training is needed to ensure the accuracy of the classification remains top-notch. Along with that, newer and more reliable hardware can provide an easy-to-access, portable EEG collection device usable in any scenario. Overall, although future improvements are needed, the prospect of accurately detecting emotions in all people is starting to look a lot brighter thanks to the innovation of AI in the neuroscience field.

Emotions are a key factor in any person’s day-to-day life. Most of the time, we as humans can detect these emotions through physical cues such as movements, facial expressions, and tone of voice. In certain individuals, however, it can be hard to identify emotions through visible physical cues. Recent studies in the machine learning and AI field demonstrate progress in detecting emotions through brainwaves, more specifically EEG brainwaves. Researchers across the world use EEG data with AI to help predict the emotional state an individual is in at any given moment.

Emotion classification based on brain wave: a survey (Figure 4)

Image Source: https://hcis-journal.springeropen.com/articles/10.1186/s13673-019-0201-x

EEGs can detect and compile normal and abnormal brain wave activity and indicate brain activity or inactivity that correlates with physical, emotional, and intellectual activities. EEG signals are classified mainly by brain wave frequency. The most commonly studied are delta, theta, alpha, sigma, and beta waves. Alpha waves, 8 to 12 hertz, are the key waves that occur in normal awake people and the defining factor for the everyday function of the adult brain. Beta waves, 13 to 30 hertz, are the most common type of wave in both children and adults; they are found in the frontal and central areas of the brain and occur at a certain frequency which, if slowed, is likely to cause dysfunction. Theta waves, 4 to 7 hertz, are also found in the front of the brain, but they slowly move backward as drowsiness increases and the brain enters the early stages of sleep; theta waves are known to be active during focal seizures. Delta waves, 0.5 to 4 hertz, are found in the frontal areas of the brain during deep sleep. Sigma waves, 12 to 16 hertz, occur during sleep. These EEG signals can help detect emotions based on the frequencies at which they occur and their activity (whether they are active or relatively calm).

Sources:

Zhang, Yaqing, et al. “An Investigation of Deep Learning Models for EEG-Based Emotion Recognition.” Frontiers in Neuroscience, vol. 14, 2020. Crossref, doi:10.3389/fnins.2020.622759.

Nayak, Chetan S., and Arayamparambil C. Anilkumar. “EEG Normal Waveforms.” National Center for Biotechnology Information, StatPearls Publishing LLC., 4 May 2021, http://www.ncbi.nlm.nih.gov/books/NBK539805.

Other related articles published in this Open Access Online Scientific Journal include the Following:

Supporting the elderly: A caring robot with ‘emotions’ and memory
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2015/02/10/supporting-the-elderly-a-caring-robot-with-emotions-and-memory/

Developing Deep Learning Models (DL) for the Instant Prediction of Patients with Epilepsy
Reporter: Srinivas Sriram, Research Assistant I
https://pharmaceuticalintelligence.com/2021/06/22/developing-deep-learning-models-dl-for-the-instant-prediction-of-patients-with-epilepsy/

Prediction of Cardiovascular Risk by Machine Learning (ML) Algorithm: Best performing algorithm by predictive capacity had area under the ROC curve (AUC) scores: 1st, quadratic discriminant analysis; 2nd, NaiveBayes and 3rd, neural networks, far exceeding the conventional risk-scaling methods in Clinical Use
Curator: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2019/07/04/prediction-of-cardiovascular-risk-by-machine-learning-ml-algorithm-best-performing-algorithm-by-predictive-capacity-had-area-under-the-roc-curve-auc-scores-1st-quadratic-discriminant-analysis/

Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes
Reporter: Amandeep Kaur, B.Sc., M.Sc.
https://pharmaceuticalintelligence.com/2021/05/29/developing-machine-learning-models-for-prediction-of-onset-of-type-2-diabetes/

Deep Learning-Assisted Diagnosis of Cerebral Aneurysms
Reporter: Dror Nir, PhD
https://pharmaceuticalintelligence.com/2019/06/09/deep-learning-assisted-diagnosis-of-cerebral-aneurysms/

Mutations in a Sodium-gated Potassium Channel Subunit Gene related to a subset of severe Nocturnal Frontal Lobe Epilepsy
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2012/10/22/mutations-in-a-sodium-gated-potassium-channel-subunit-gene-to-a-subset-of-severe-nocturnal-frontal-lobe-epilepsy/

A new treatment for depression and epilepsy – Approval of external Trigeminal Nerve Stimulation (eTNS) in Europe
Reporter: Howard Donohue, PhD (EAW)
https://pharmaceuticalintelligence.com/2012/10/07/a-new-treatment-for-depression-and-epilepsy-approval-of-external-trigeminal-nerve-stimulation-etns-in-europe/

Read Full Post »

Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes

Reporter: Amandeep Kaur, B.Sc., M.Sc.

A recent study reports the development of an advanced AI algorithm that predicts the onset of type 2 diabetes up to five years in advance using routinely collected medical data. The researchers describe their AI model as notable and distinctive because it is specifically designed to perform assessments at the population level.

The first author, Mathieu Ravaut, M.Sc., of the University of Toronto, and other team members stated that “The main purpose of our model was to inform population health planning and management for the prevention of diabetes that incorporates health equity. It was not our goal for this model to be applied in the context of individual patient care.”

The research group collected data from 2006 to 2016 on approximately 2.1 million patients treated within the same healthcare system in Ontario, Canada. Although the patients belonged to the same region, the authors highlighted that Ontario encompasses a large and diverse population.

The newly developed algorithm was trained on data from approximately 1.6 million patients, validated on data from about 243,000 patients, and tested on data from more than 236,000 patients. The data used to train the algorithm included each patient’s medical history from the previous two years: prescriptions, medications, lab tests, and demographic information.
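For illustration, a patient-level split in roughly the reported proportions (about 1.6M train, 243k validation, and 236k test out of 2.1M) might look like the sketch below; the authors' actual selection procedure is not described in this summary, so the fractions and shuffling here are assumptions.

```python
# Disjoint train/validation/test split over patient indices, with fractions
# approximating the study's reported cohort sizes (~11.5% val, ~11.2% test).
import numpy as np

def split_patients(n_patients: int, frac_val: float = 0.115,
                   frac_test: float = 0.112, seed: int = 0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_patients)          # shuffle patient indices
    n_val = int(n_patients * frac_val)
    n_test = int(n_patients * frac_test)
    test = idx[:n_test]
    val = idx[n_test:n_test + n_val]
    train = idx[n_test + n_val:]
    return train, val, test
```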

When predicting the onset of type 2 diabetes within five years, the algorithm reached a test area under the ROC curve (AUC) of 80.26%.
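An AUC of 80.26% means that, given one randomly chosen patient who went on to develop diabetes and one who did not, the model ranks the former higher about 80% of the time. The rank-based (Mann-Whitney) computation behind this reading can be written directly on toy scores:

```python
# AUC as the fraction of (positive, negative) pairs the scores rank correctly.
import numpy as np

def roc_auc(scores: np.ndarray, labels: np.ndarray) -> float:
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # correctly ordered pairs
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```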

The authors reported that “Our model showed consistent calibration across sex, immigration status, racial/ethnic and material deprivation, and a low to moderate number of events in the health care history of the patient. The cohort was representative of the whole population of Ontario, which is itself among the most diverse in the world. The model was well calibrated, and its discrimination, although with a slightly different end goal, was competitive with results reported in the literature for other machine learning–based studies that used more granular clinical data from electronic medical records without any modifications to the original test set distribution.”

This model could potentially improve the healthcare systems of countries equipped with thorough administrative databases by targeting specific cohorts at risk of poor outcomes.

The research group stated that “Because our machine learning model included social determinants of health that are known to contribute to diabetes risk, our population-wide approach to risk assessment may represent a tool for addressing health disparities.”

Sources:

https://www.cardiovascularbusiness.com/topics/prevention-risk-reduction/new-ai-model-healthcare-data-predict-type-2-diabetes?utm_source=newsletter

Reference:

Ravaut M, Harish V, Sadeghi H, et al. Development and Validation of a Machine Learning Model Using Administrative Health Data to Predict Onset of Type 2 Diabetes. JAMA Netw Open. 2021;4(5):e2111315. doi:10.1001/jamanetworkopen.2021.11315 https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2780137

Other related articles were published in this Open Access Online Scientific Journal, including the following:

AI in Drug Discovery: Data Science and Core Biology @Merck &Co, Inc., @GNS Healthcare, @QuartzBio, @Benevolent AI and Nuritas

Reporters: Aviva Lev-Ari, PhD, RN and Irina Robu, PhD

https://pharmaceuticalintelligence.com/2020/08/27/ai-in-drug-discovery-data-science-and-core-biology-merck-co-inc-gns-healthcare-quartzbio-benevolent-ai-and-nuritas/

Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare

Curator: Stephen J. Williams, Ph.D.

https://pharmaceuticalintelligence.com/2018/12/10/can-blockchain-technology-and-artificial-intelligence-cure-what-ails-biomedical-research-and-healthcare/

HealthCare focused AI Startups from the 100 Companies Leading the Way in A.I. Globally

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/01/18/healthcare-focused-ai-startups-from-the-100-companies-leading-the-way-in-a-i-globally/

AI in Psychiatric Treatment – Using Machine Learning to Increase Treatment Efficacy in Mental Health

Reporter: Aviva Lev- Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/06/04/ai-in-psychiatric-treatment-using-machine-learning-to-increase-treatment-efficacy-in-mental-health/

Vyasa Analytics Demos Deep Learning Software for Life Sciences at Bio-IT World 2018 – Vyasa’s booth (#632)

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/05/10/vyasa-analytics-demos-deep-learning-software-for-life-sciences-at-bio-it-world-2018-vyasas-booth-632/

New Diabetes Treatment Using Smart Artificial Beta Cells

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2017/11/08/new-diabetes-treatment-using-smart-artificial-beta-cells/

Read Full Post »

