Funding, Deals & Partnerships: BIOLOGICS & MEDICAL DEVICES; BioMed e-Series; Medicine and Life Sciences Scientific Journal – http://PharmaceuticalIntelligence.com
The Continued Impact and Possibilities of AI in Medical and Pharmaceutical Industry Practices
Reporter: Adam P. Tubman, MSc Biotechnology, Research Associate 3, Computer Graphics and AI in Drug Discovery
Researchers have found many ways to incorporate AI into healthcare practice, both in clinical medicine and in pharmaceutical drug development. For example, when a doctor gives a patient an inaccurate diagnosis because the medical record or history is incomplete or inaccurate, AI offers a solution with the potential to rapidly account for that human error and predict the correct diagnosis from patterns linking other patients' medical histories to their eventual diagnoses. In the pharmaceutical industry, companies are changing and expanding their approaches to drug discovery and development in light of what AI can offer. One such company, Reverie Labs in Cambridge, MA, applies machine learning and computational chemistry to discover new compounds for use in developing cancer treatments.
Today, AI has many other applications in medicine, including managing healthcare data and performing robotic surgery, both of which transform the in-person patient and doctor experience. It has even been used to change the experience of cancer patients. For example, Freenome, a company in San Francisco, CA, uses AI in initial screenings, blood tests and diagnostic tests when a patient is first evaluated for cancer. The hope is that this technology will help speed cancer diagnoses and lead to new treatment developments.
The future will continue to bring new possibilities for AI, provided its technologies maintain an acceptable level of accuracy and remain beneficial. If research continues to focus on diagnosing diseases faster despite incomplete or inaccurate medical records, AI could give patients an improved experience: quicker diagnosis and treatment, with less time spent treating the wrong underlying condition or not knowing which condition to treat. If the technology proves successful not just in theory but in practice, it could then be applied beneficially to diagnoses and treatment plans across the world.
However, the reality is that AI's evolution depends on how much human effort goes into its development, so the world will not know or see its full benefits until it is developed and actively applied. The same holds for its impact on medical and pharmaceutical practice, which will not be known until scientists fully develop and apply the technologies. Among the possibilities is a drastic reduction in the cost of pharmaceutical drugs once new compounds can be discovered and produced far more readily, a profound benefit for patients who currently struggle to afford their treatment plans. Unforeseen advances in medicine and pharmaceuticals driven by AI will also have unforeseen effects on the global economy and many other life-changing variables for the entire world.
For more information on this topic, please check out the article below.
Reporter: Frason Francis Kalapurakal, Research Assistant II
Researchers from MIT and Technion have made a significant contribution to the field of machine learning by developing an adaptive algorithm that addresses the challenge of determining when a machine should follow a teacher’s instructions or explore on its own. The algorithm autonomously decides whether to use imitation learning, which involves mimicking the behavior of a skilled teacher, or reinforcement learning, which relies on trial and error to learn from the environment.
The researchers’ key innovation lies in the algorithm’s adaptability and ability to determine the most effective learning method throughout the training process. To achieve this, they trained two “students” with different learning approaches: one using a combination of reinforcement and imitation learning, and the other relying solely on reinforcement learning. The algorithm continuously compared the performance of these two students, adjusting the emphasis on imitation or reinforcement learning based on which student achieved better results.
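The paper's exact implementation is not reproduced here, but the core mechanism can be sketched in a few lines of Python. Everything below is a hypothetical illustration: the function names, the loss blending, and the update step size are placeholders, not the authors' TGRL code.

```python
# Minimal sketch of the adaptive teacher-guided idea (hypothetical API,
# not the authors' TGRL implementation): two students are trained in
# parallel, and the imitation weight is nudged toward whichever one is
# currently performing better.

def adapt_imitation_weight(weight, guided_return, pure_rl_return,
                           step=0.05, lo=0.0, hi=1.0):
    """Increase reliance on the teacher if the teacher-guided student is
    ahead; decrease it if the pure reinforcement-learning student is ahead."""
    if guided_return > pure_rl_return:
        weight = min(hi, weight + step)
    elif guided_return < pure_rl_return:
        weight = max(lo, weight - step)
    return weight

def combined_loss(rl_loss, imitation_loss, weight):
    """Blend trial-and-error (RL) and teacher-imitation objectives."""
    return (1.0 - weight) * rl_loss + weight * imitation_loss

# Usage sketch, inside a training loop:
#   weight = adapt_imitation_weight(weight,
#                                   evaluate(guided_student),
#                                   evaluate(rl_only_student))
#   loss = combined_loss(rl_loss, imitation_loss, weight)
```

The design point this sketch tries to capture is that the balance itself is set by an online performance comparison, so no one has to hand-tune how much to trust the teacher during training.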
The algorithm’s efficacy was tested through simulated training scenarios, such as navigating mazes or reorienting objects with touch sensors. In all cases, the algorithm demonstrated superior performance compared to non-adaptive methods, achieving nearly perfect success rates and significantly outperforming other methods in terms of both accuracy and speed. This adaptability could enhance the training of machines in real-world situations where uncertainty is prevalent, such as robots navigating unfamiliar buildings or performing complex tasks involving object manipulation and locomotion.
Furthermore, the algorithm’s potential applications extend beyond robotics to other domains where imitation or reinforcement learning is employed. For example, large language models such as GPT-4 could serve as teachers to train smaller models to excel at specific tasks. The researchers also suggest that analyzing the similarities and differences between machines and humans learning from their respective teachers could provide valuable insights for improving the learning experience.
The MIT and Technion researchers’ algorithm stands out for its principled approach, efficiency, and versatility across domains. Unlike existing methods that require brute-force trial and error or manual tuning of parameters, it dynamically adjusts the balance between imitation and trial-and-error learning based on performance comparisons. This robustness, adaptability, and the promising results make it a noteworthy advancement in the field of machine learning.
References:
“TGRL: Teacher Guided Reinforcement Learning Algorithm for POMDPs.” Reincarnating Reinforcement Learning Workshop at ICLR 2023. https://openreview.net/pdf?id=kTqjkIvjj7
Amodei D, Olah C, Steinhardt J, Christiano P, Schulman J, Mané D. “Concrete Problems in AI Safety.” https://arxiv.org/abs/1606.06565
Other related articles published in this Open Access Online Scientific Journal include the following:
92 articles in the Category:
‘Artificial Intelligence – Breakthroughs in Theories and Technologies’
Big pharma companies are snapping up collaborations with firms using AI to speed up drug discovery, with one of the latest being Sanofi’s pact with Exscientia.
Tech giants are placing big bets on digital health analysis firms, such as Oracle’s €25.42B ($28.3B) takeover of Cerner in the US.
There’s also a steady flow of financing going to startups taking new directions with AI and bioinformatics, with the latest example being a €20M Series A round by SeqOne Genomics in France.
“IBM Watson uses a philosophy that is diametrically opposed to SeqOne’s,” said Jean-Marc Holder, CSO of SeqOne. “[IBM Watson seems] to rely on analysis of large amounts of relatively unstructured data and bet on the volume of data delivering the right result. By opposition, SeqOne strongly believes that data must be curated and structured in order to deliver good results in genomics.”
Francisco Partners is picking up a range of databases and analytics tools – including Health Insights, MarketScan, Clinical Development, Social Programme Management, Micromedex and other imaging and radiology tools – for an undisclosed sum estimated to be in the region of $1 billion.
IBM said the sell-off is tagged as “a clear next step” as it focuses on its platform-based hybrid cloud and artificial intelligence strategy, but it’s no secret that Watson Health has failed to live up to its early promise.
The sale also marks a retreat from healthcare for the tech giant, which is remarkable given that it once said it viewed health as second only to financial services as a market opportunity.
IBM said it “remains committed to Watson, our broader AI business, and to the clients and partners we support in healthcare IT.”
The company reportedly invested billions of dollars in Watson, but according to a Wall Street Journal report last year, the health business – which provided cloud-based access to the supercomputer and a range of analytics services – has struggled to build market share and reach profitability.
An investigation by Stat meanwhile suggested that Watson Health’s early push into cancer for example was affected by a premature launch, interoperability challenges and over-reliance on human input to generate results.
For its part, IBM has said that the Watson for Oncology product has been improving year-on-year as the AI crunches more and more data.
That is backed up by a meta-analysis of its performance published last year in Nature, which found that the treatment recommendations delivered by the tool were largely in line with those of human doctors for several cancer types.
However, the study also found that there was less consistency in more advanced cancers, and the authors noted the system “still needs further improvement.”
Watson Health offers a range of other services of course, including tools for genomic analysis and for running clinical trials, which have found favour with a number of pharma companies.
Francisco said in a statement that it offers “a market leading team [that] provides its customers with mission critical products and outstanding service.”
The deal is expected to close in the second quarter, with the current management of Watson Health retaining “similar roles” in the new standalone company, according to the investment company.
IBM’s step back from health comes as tech rivals are still piling into the sector.
@pharma_BI is asking: What will be the future of WATSON Health?
@AVIVA1950 says on 1/26/2022:
Aviva believes plausible scenarios will be that Francisco Partners will:
A. Invest in Watson Health – Like New Mountains Capital (NMC) did with Cytel
B. Acquire several other complementary businesses – Like New Mountains Capital (NMC) did with Cytel
C. Hold and grow – Like New Mountains Capital (NMC) is doing with Cytel since 2018.
D. Sell it in 7 years to @Illumina or @Nvidia or Google’s Parent @AlphaBet
1/21/2022
IBM said Friday it will sell the core data assets of its Watson Health division to a San Francisco-based private equity firm, marking the staggering collapse of its ambitious artificial intelligence effort that failed to live up to its promises to transform everything from drug discovery to cancer care.
IBM has reached an agreement to sell its Watson Health data and analytics business to the private-equity firm Francisco Partners. … He said the deal will give Francisco Partners data and analytics assets that will benefit from “the enhanced investment and expertise of a healthcare industry focused portfolio.”
IBM had been trying to find buyers for the Watson Health business for more than a year, and was reportedly seeking a sale price of about $1 billion.
In a prepared statement about the deal, Tom Rosamilia, senior VP, IBM Software, offered an executive perspective on the sale of certain Watson Health assets.
In February 2021, International Business Machines Corp. was reported to be exploring a potential sale of its IBM Watson Health business, according to people familiar with the matter.
Nuance played a part in building Watson by supplying its speech recognition component.
Science Policy Forum: Should we trust healthcare explanations from AI predictive systems?
Some in industry voice their concerns
Curator: Stephen J. Williams, PhD
Post on AI healthcare and explainable AI
In a Policy Forum article in Science, “Beware explanations from AI in health care”, Boris Babic, Sara Gerke, Theodoros Evgeniou, and Glenn Cohen discuss the caveats of relying on explainable versus interpretable artificial intelligence (AI) and machine learning (ML) algorithms to make complex health decisions. The FDA has already approved some AI/ML algorithms for analysis of medical images for diagnostic purposes. These have been discussed in prior posts on this site, as have issues arising from multi-center trials. The authors of this Policy Forum article argue that the choice between explainable and interpretable algorithms may have far-reaching consequences in health care.
Summary
Artificial intelligence and machine learning (AI/ML) algorithms are increasingly developed in health care for diagnosis and treatment of a variety of medical conditions (1). However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of its users, who, for example, are often vulnerable to well-documented biases or algorithmic aversion (2). Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users’ skepticism, lack of trust, and slow uptake (3, 4). As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions (5). Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.
Types of AI/ML Algorithms: Explainable and Interpretable algorithms
Interpretable AI: A typical AI/ML task requires constructing an algorithm from vector inputs and generating an output related to an outcome (such as diagnosing a cardiac event from an image). Generally the algorithm must be trained on past data with known parameters. When an algorithm is called interpretable, it means the algorithm uses a transparent or “white box” function that is easily understandable, for example a linear function whose parameters are simple rather than complex. Although interpretable algorithms may not be as accurate as the more complex black-box AI/ML algorithms, they are open, transparent, and easily understood by their operators.
Explainable AI/ML: This type of approach depends on multiple complex parameters: a first round of predictions comes from a “black box” model, and a second algorithm built from an interpretable function is then used to approximate the outputs of the first model. The second algorithm is trained not on the original data but on the first model’s predictions, over multiple iterations of computing. The black-box approach is therefore more accurate, or deemed more reliable, in prediction, but it is very complex and not easily understandable. Many medical devices that use an AI/ML algorithm are of this type; examples include deep learning and neural networks.
The purpose of both methodologies is to address the problem of opacity, namely the concern that AI predictions emerging from a black box undermine trust in the AI.
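To make the distinction concrete, here is a minimal sketch using synthetic data and scikit-learn (an illustration, not a medical device algorithm): the interpretable route fits a transparent linear model directly to the data, while the explainable route fits a black-box model first and then trains a second, simpler surrogate model on the black box's predictions to approximate its behaviour.

```python
# Sketch of interpretable vs explainable modelling on synthetic data
# (illustrative only). Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                                   # four synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Interpretable ("white box"): a linear model whose coefficients can be read directly.
interpretable = LogisticRegression().fit(X, y)
print("linear coefficients:", interpretable.coef_)

# Explainable: a black-box model, plus a second, simpler model trained on the
# black box's *predictions* to approximate and explain its behaviour.
black_box = GradientBoostingClassifier().fit(X, y)
surrogate = LogisticRegression().fit(X, black_box.predict(X))
print("surrogate coefficients:", surrogate.coef_)
```

Note that the surrogate's coefficients describe what the black box tends to do; they are an approximation of the model, not a direct description of the underlying data.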
For a deeper understanding of these two types of algorithms see here:
How interpretability is different from explainability
Why a model might need to be interpretable and/or explainable
Who is working to solve the black box problem—and how
What is interpretability?
Does Chipotle make your stomach hurt? Does loud noise accelerate hearing loss? Are women less aggressive than men? If a machine learning model can create a definition around these relationships, it is interpretable.
All models must start with a hypothesis. Human curiosity propels a being to intuit that one thing relates to another. “Hmm…multiple black people shot by policemen…seemingly out of proportion to other races…something might be systemic?” Explore.
People create internal models to interpret their surroundings. In the field of machine learning, these models can be tested and verified as either accurate or inaccurate representations of the world.
Interpretability means that the cause and effect can be determined.
What is explainability?
ML models are often called black-box models because they allow a pre-set number of empty parameters, or nodes, to be assigned values by the machine learning algorithm. Specifically, the back-propagation step is responsible for updating the weights based on its error function.
To predict when a person might die—the fun gamble one might play when calculating a life insurance premium, and the strange bet a person makes against their own life when purchasing a life insurance package—a model will take in its inputs, and output a percent chance the given person has at living to age 80.
Picture a diagram of a neural network in which the inputs are yellow nodes and the outputs are orange. Like a rubric behind an overall grade, explainability shows how much each of the parameters, all the blue nodes, contributes to the final decision.
In such a network, the hidden layers (the columns of blue nodes) would be the black box.
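As a rough illustration of what those hidden layers are, here is a toy one-hidden-layer network in NumPy (synthetic data, illustrative only): the matrices W1 and W2 are the "empty parameters" that back-propagation fills in from the error.

```python
# Tiny one-hidden-layer network in NumPy, to make the "black box" concrete:
# the hidden weights W1/W2 are the parameters back-propagation adjusts.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                    # 4 inputs (the yellow nodes)
y = (X[:, 0] - X[:, 2] > 0).astype(float).reshape(-1, 1)

W1 = rng.normal(scale=0.5, size=(4, 8))          # hidden layer (the black box)
W2 = rng.normal(scale=0.5, size=(8, 1))          # output layer (the orange node)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    h = sigmoid(X @ W1)                          # forward pass through the hidden layer
    p = sigmoid(h @ W2)                          # predicted probability
    err = p - y                                  # error used by back-propagation
    delta_out = err * p * (1 - p)
    W2 -= 0.1 * h.T @ delta_out / len(X)         # update output weights
    W1 -= 0.1 * X.T @ ((delta_out @ W2.T) * h * (1 - h)) / len(X)  # update hidden weights

# Summed magnitudes of first-layer weights: a crude view of each input's "signal".
print(np.abs(W1).sum(axis=1))
```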
For example, we have these data inputs:
Age
BMI score
Number of years spent smoking
Career category
If this model had high explainability, we’d be able to say, for instance:
The career category is about 40% important
The number of years spent smoking weighs in at 35% important
The age is 15% important
The BMI score is 10% important
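Percentages like these are typically estimated after a model is trained, for example with permutation importance. The sketch below uses synthetic data and hypothetical feature names matching the list above; it is not a real actuarial model.

```python
# Illustrative sketch (synthetic data): permutation importance assigns each
# input a share of the model's predictive performance, analogous to the
# percentages listed above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.integers(20, 80, n),       # age
    rng.normal(27, 5, n),          # BMI score
    rng.integers(0, 40, n),        # number of years spent smoking
    rng.integers(0, 10, n),        # career category (encoded as an integer)
])
y = ((X[:, 0] > 60) | (X[:, 2] > 25)).astype(int)   # synthetic outcome label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

imp = np.maximum(result.importances_mean, 0)        # clip tiny negative noise
shares = imp / imp.sum()
for name, share in zip(["age", "BMI", "years smoking", "career"], shares):
    print(f"{name}: {share:.0%}")
```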
Explainability: important, not always necessary
Explainability becomes significant in the field of machine learning because, often, it is not apparent. Explainability is often unnecessary. A machine learning engineer can build a model without ever having considered the model’s explainability. It is an extra step in the building process—like wearing a seat belt while driving a car. It is unnecessary for the car to perform, but offers insurance when things crash.
The benefit a deep neural net offers to engineers is that it creates a black box of parameters, like fake additional data points, on which a model can base its decisions. These fake data points are unknown to the engineer. The black box, or hidden layers, allows a model to make associations among the given data points to predict better results. For example, if we are deciding how long someone might have to live and we use career data as an input, it is possible the model sorts the careers into high- and low-risk career options all on its own.
Perhaps we inspect a node and see it relates oil rig workers, underwater welders, and boat cooks to each other. It is possible the neural net makes connections between the lifespan of these individuals and puts a placeholder in the deep net to associate these. If we were to examine the individual nodes in the black box, we could note this clustering interprets water careers to be a high-risk job.
In such a diagram, each line connecting a yellow node to a blue node can represent a signal, weighing the importance of that node in determining the overall score of the output.
If that signal is high, that node is significant to the model’s overall performance.
If that signal is low, the node is insignificant.
With this understanding, we can define explainability as:
Knowledge of what one node represents and how important it is to the model’s performance.
So how does choice of these two different algorithms make a difference with respect to health care and medical decision making?
The authors argue:
“Regulators like the FDA should focus on those aspects of the AI/ML system that directly bear on its safety and effectiveness – in particular, how does it perform in the hands of its intended users?”
Their suggestions include:
Enhanced, more involved clinical trials
Providing individuals added flexibility when interacting with a model, for example inputting their own test data
More interaction between users and model developers
Determining which situations call for interpretable AI versus explainable AI (for instance, predicting which patients will require dialysis after kidney damage)
Other articles on AI/ML in medicine and healthcare on this Open Access Journal include
AI is on the way to lead critical ED decisions on CT
Curator and Reporter: Dr. Premalata Pati, Ph.D., Postdoc
Artificial intelligence (AI) has infiltrated many organizational processes, raising concerns that robotic systems will eventually replace many humans in decision-making. The advent of AI as a tool for improving health care provides new prospects to improve patients’ and clinical teams’ performance, reduce costs, and impact public health. Examples include, but are not limited to, automation; information synthesis for patients, “fRamily” (friends and family unpaid caregivers), and health care professionals; and suggestions and visualization of information for collaborative decision making.
In the emergency department (ED), patients with Crohn’s disease (CD) are routinely subjected to abdominopelvic computed tomography (APCT). It is necessary to diagnose clinically actionable findings (CAF), since these may require immediate, typically surgical, intervention. Repeated APCTs, on the other hand, result in higher ionizing radiation exposure. Most guidance on when to perform APCT is clinical and empiric, and emergency surgeons struggle to identify which Crohn’s disease patients actually require a CT scan to determine the source of acute abdominal distress.
Aid seems to be on the way. Researchers employed machine learning to accurately distinguish these patients from Crohn’s patients who present with the same complaint but can safely be spared the repeated exposure to contrast material and ionizing radiation that CT would otherwise impose on them.
Retrospectively, Jacob Ollech and his fellow researchers analyzed 101 emergency presentations of patients with Crohn’s disease who underwent abdominopelvic CT.
They were looking for examples where a scan revealed clinically actionable results. These were classified as intestinal blockage, perforation, intra-abdominal abscess, or complex fistula by the researchers.
On CT, 44 (43.5 %) of the 101 cases reviewed had such findings.
Ollech and colleagues utilized a machine-learning technique to design a decision-support tool that required only four basic clinical factors to test an AI approach for making the call.
The approach successfully categorized patients into low- and high-risk groups, allowing the researchers to risk-stratify patients according to their likelihood of clinically actionable findings on abdominopelvic CT.
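The published tool's four clinical factors and coefficients are not reproduced in this summary, so the sketch below is only a shape-of-the-solution illustration with hypothetical placeholder features and synthetic data: a logistic model produces a probability of clinically actionable findings, which is then thresholded into low- and high-risk groups.

```python
# Illustrative sketch only: the features below are hypothetical placeholders,
# not the four clinical factors used in the published decision-support tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.normal(8, 3, n),        # e.g. a lab value (hypothetical)
    rng.normal(37.5, 1, n),     # e.g. temperature (hypothetical)
    rng.integers(0, 2, n),      # e.g. prior surgery yes/no (hypothetical)
    rng.normal(90, 20, n),      # e.g. heart rate (hypothetical)
])
y = (X[:, 0] + rng.normal(size=n) > 9).astype(int)   # 1 = actionable finding on CT (synthetic)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]

# Stratify: refer high-risk patients for CT, spare low-risk patients the scan.
high_risk = risk >= 0.5
print(f"{high_risk.mean():.0%} of patients flagged for abdominopelvic CT")
```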
Ollech and co-authors admit that their limited sample size, retrospective strategy, and lack of external validation are shortcomings.
Moreover, several patients fell into an intermediate risk category, implying that a standard workup would have been required to guide CT decision-making in a real-world situation anyhow.
Consequently, they draw the following conclusion:
We believe this study shows that a machine learning-based tool is a sound approach for better-selecting patients with Crohn’s disease admitted to the ED with acute gastrointestinal complaints about abdominopelvic CT: reducing the number of CTs performed while ensuring that patients with high risk for clinically actionable findings undergo abdominopelvic CT appropriately.
Main Source:
Konikoff, Tom, Idan Goren, Marianna Yalon, Shlomit Tamir, Irit Avni-Biron, Henit Yanai, Iris Dotan, and Jacob E. Ollech. “Machine learning for selecting patients with Crohn’s disease for abdominopelvic computed tomography in the emergency department.” Digestive and Liver Disease (2021). https://www.sciencedirect.com/science/article/abs/pii/S1590865821003340
Other Related Articles published in this Open Access Online Scientific Journal include the following:
Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes
Reporter: Amandeep Kaur, B.Sc., M.Sc.
A recent study reports the development of an advanced AI algorithm that predicts the onset of type 2 diabetes up to five years in advance using routinely collected medical data. The researchers describe their AI model as notable and distinctive because of its specific design, which performs assessments at the population level.
The first author, Mathieu Ravaut, M.Sc., of the University of Toronto, and other team members stated that “The main purpose of our model was to inform population health planning and management for the prevention of diabetes that incorporates health equity. It was not our goal for this model to be applied in the context of individual patient care.”
The research group collected data from 2006 to 2016 on approximately 2.1 million patients treated within the same healthcare system in Ontario, Canada. Even though the patients belonged to the same region, the authors highlighted that Ontario encompasses a large and diverse population.
The newly developed algorithm was trained on data from approximately 1.6 million patients, validated on data from about 243,000 patients, and evaluated on data from more than 236,000 patients. The data used to build the algorithm included each patient’s medical history from the previous two years: prescriptions, medications, lab tests and demographic information.
When predicting the onset of type 2 diabetes within five years, the model reached a test area under the receiver operating characteristic (ROC) curve of 80.26%.
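For readers unfamiliar with the metric, the area under the ROC curve (AUC) is computed on a held-out test set from the model's risk scores and the observed outcomes. The snippet below shows the computation on synthetic scores; it is not the authors' data or pipeline.

```python
# Sketch of how a test-set AUC is computed (synthetic labels and scores,
# not the study's data or model).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)                                # 1 = developed type 2 diabetes
y_score = np.clip(0.3 * y_true + rng.random(1000) * 0.8, 0, 1)   # model risk scores

auc = roc_auc_score(y_true, y_score)
print(f"test AUC: {auc:.2%}")
```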
The authors reported that “Our model showed consistent calibration across sex, immigration status, racial/ethnic and material deprivation, and a low to moderate number of events in the health care history of the patient. The cohort was representative of the whole population of Ontario, which is itself among the most diverse in the world. The model was well calibrated, and its discrimination, although with a slightly different end goal, was competitive with results reported in the literature for other machine learning–based studies that used more granular clinical data from electronic medical records without any modifications to the original test set distribution.”
This model could potentially improve the healthcare systems of countries equipped with thorough administrative databases and could be aimed at specific cohorts at risk of poor outcomes.
The research group stated that “Because our machine learning model included social determinants of health that are known to contribute to diabetes risk, our population-wide approach to risk assessment may represent a tool for addressing health disparities.”
Ravaut M, Harish V, Sadeghi H, et al. Development and Validation of a Machine Learning Model Using Administrative Health Data to Predict Onset of Type 2 Diabetes. JAMA Netw Open. 2021;4(5):e2111315. doi:10.1001/jamanetworkopen.2021.11315 https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2780137
Other related articles were published in this Open Access Online Scientific Journal, including the following:
AI in Drug Discovery: Data Science and Core Biology @Merck &Co, Inc., @GNS Healthcare, @QuartzBio, @Benevolent AI and Nuritas
Reporters: Aviva Lev-Ari, PhD, RN and Irina Robu, PhD
Quantum Physics Can Fight Fraud By Making Credit Card Verification Unspoofable
Reporter: Aviva Lev-Ari, PhD, RN
Decades of data security research have brought us highly reliable, standardized tools for common tasks such as digital signatures and encryption. But hackers are constantly working to crack data security innovations. Current credit/debit card technologies put personal money at risk because they’re vulnerable to fraud.
Physical security – which deals with anti-counterfeiting and the authentication of actual objects – is part of the problem too. The good guys and bad guys are locked in a never-ending arms race: one side develops objects and structures that are difficult to copy; the other side tries to copy them, and often succeeds.
But we think our new invention has the potential to leave the hackers behind. This innovative security measure uses the quantum properties of light to achieve fraud-proof authentication of objects.
The arms race is fought in secret; revealing your technology helps the enemy. Consequently, nobody knows how secure a technology really is. Remarkably, a recent development called Physical Unclonable Functions (PUFs) has made it possible to be completely open. A PUF is a piece of material that can be probed in many ways and that produces a complex response that depends very precisely on the challenge and the PUF’s internal structure.
The best known examples are Optical PUFs. The PUF is a piece of material – such as white paint with millions of nanoparticles – that will strongly scatter any light beamed at it. The light bounces around inside the paint, creating a unique pattern that can be used for authentication. Optical PUFs could be used on any object, but would be especially useful on credit/debit cards.
In 2012, researchers at Twente University realized they had discovered something very important. The magic ingredient is a Spatial Light Modulator (SLM), a programmable device that re-shapes a speckle pattern. In their experiments, they programmed an SLM so that the correct response from an Optical PUF is concentrated and passes through a pinhole, where a photon detector registers the photon’s presence. An incorrect response, by contrast, is transformed into a random speckle pattern that does not pass through the pinhole. The method was dubbed Quantum-Secure Authentication (QSA).
QSA does not require any secrets, so no money has to be spent on protecting them. QSA can be implemented with relatively simple technology that is already available. The PUF can be as simple as a layer of paint. It turns out that the challenge does not have to be a single photon; a weak laser pulse suffices, as long as the number of photons in the pulse is small enough. Laser diodes, as found in CD players, are widely available and cheap. SLMs are already present in modern projectors. A sensitive photodiode or image sensor can serve as the photon detector. With all these advantages, QSA has the potential to massively improve the security of cards and other physical credentials.
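As a purely classical analogy of the enrollment-and-verification workflow (it deliberately ignores the quantum ingredient that makes QSA unclonable, and the function and variable names are hypothetical), the challenge-response idea can be sketched like this:

```python
# Toy *classical* challenge-response analogy of PUF enrollment and verification.
# This illustrates only the workflow; it does not capture the quantum security
# of QSA, which relies on weak light pulses an attacker cannot fully measure.
import hashlib
import secrets

def puf_response(device_secret: bytes, challenge: bytes) -> bytes:
    """Stand-in for the physical scattering response of one specific PUF."""
    return hashlib.sha256(device_secret + challenge).digest()

# Enrollment: the issuer records challenge/response pairs for the card's PUF.
device_secret = secrets.token_bytes(32)        # stands in for the unique paint layer
enrolled = {}
for _ in range(10):
    c = secrets.token_bytes(16)
    enrolled[c] = puf_response(device_secret, c)

# Verification: pick a recorded challenge and check the card's live response.
challenge, expected = next(iter(enrolled.items()))
authentic = puf_response(device_secret, challenge) == expected
print("card accepted" if authentic else "card rejected")
```

In the classical version above, the stored responses are secrets that must be protected; the point of QSA is that the optical challenge itself cannot be fully characterized and cloned, so the scheme remains secure even when everything about it is public.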