Posts Tagged ‘IBM Watson’

Powerful AI Tools Being Developed for the COVID-19 Fight

Curator: Stephen J. Williams, Ph.D.


Source: https://www.ibm.com/blogs/research/2020/04/ai-powered-technologies-accelerate-discovery-covid-19/

IBM Releases Novel AI-Powered Technologies to Help Health and Research Community Accelerate the Discovery of Medical Insights and Treatments for COVID-19

April 3, 2020

IBM Research has been actively developing new cloud and AI-powered technologies that can help researchers across a variety of scientific disciplines accelerate the process of discovery. As the COVID-19 pandemic unfolds, we continue to ask how these technologies and our scientific knowledge can help in the global battle against coronavirus.

Today, we are making available multiple novel, free resources from across IBM to help healthcare researchers, doctors and scientists around the world accelerate COVID-19 drug discovery: from gathering insights, to applying the latest virus genomic information and identifying potential targets for treatments, to creating new drug molecule candidates.

Though some of the resources are still in exploratory stages, IBM is making them available to qualifying researchers at no charge to aid the international scientific investigation of COVID-19.

Today’s announcement follows our recent leadership in launching the U.S. COVID-19 High Performance Computing Consortium, which is harnessing massive computing power in the effort to help confront the coronavirus.

Streamlining the Search for Information

Healthcare agencies and governments around the world have quickly amassed medical and other relevant data about the pandemic. And, there are already vast troves of medical research that could prove relevant to COVID-19. Yet, as with any large volume of disparate data sources, it is difficult to efficiently aggregate and analyze that data in ways that can yield scientific insights.

To help researchers access structured and unstructured data quickly, we are offering a cloud-based AI research resource that has been trained on a corpus of thousands of scientific papers contained in the COVID-19 Open Research Dataset (CORD-19), prepared by the White House and a coalition of research groups, and on licensed databases from DrugBank, ClinicalTrials.gov and GenBank. This tool uses our advanced AI and allows researchers to pose specific queries to the collections of papers and to extract critical COVID-19 knowledge quickly. Please note, access to this resource will be granted only to qualified researchers. To learn more and request access, please click here.

Aiding the Hunt for Treatments

The traditional drug discovery pipeline relies on a library of compounds that are screened, improved, and tested to determine safety and efficacy. In dealing with new pathogens such as SARS-CoV-2, there is the potential to enhance the compound libraries with additional novel compounds. To help address this need, IBM Research has recently created a new, AI-generative framework which can rapidly identify novel peptides, proteins, drug candidates and materials.

We have applied this AI technology against three COVID-19 targets to identify 3,000 new small molecules as potential COVID-19 therapeutic candidates. IBM is releasing these molecules under an open license, and researchers can study them via a new interactive molecular explorer tool to understand their characteristics and relationship to COVID-19 and identify candidates that might have desirable properties to be further pursued in drug development.

To streamline efforts to identify new treatments for COVID-19, we are also making the IBM Functional Genomics Platform available for free for the duration of the pandemic. Built to discover the molecular features in viral and bacterial genomes, this cloud-based repository and research tool includes genes, proteins and other molecular targets from sequenced viral and bacterial organisms in one place with connections pre-computed to help accelerate discovery of molecular targets required for drug design, test development and treatment.

Select IBM collaborators from government agencies, academic institutions and other organizations already use this platform for bacterial genomic study. And now, those working on COVID-19 can request the IBM Functional Genomics Platform interface to explore the genomic features of the virus. Access to the IBM Functional Genomics Platform will be prioritized for those conducting COVID-19 research. To learn more and request access, please click here.

Drug and Disease Information

Clinicians and healthcare professionals on the frontlines of care will also have free access to hundreds of pieces of evidence-based, curated COVID-19 and infectious disease content from IBM Micromedex and EBSCO DynaMed. Using these two rich decision support solutions, users will have access to drug and disease information in a single, comprehensive search. Clinicians can also provide patients with consumer-friendly patient education handouts containing relevant, actionable medical information. IBM Micromedex is one of the largest online reference databases for medication information and is used by more than 4,500 hospitals and health systems worldwide. EBSCO DynaMed provides peer-reviewed clinical content in 28 specialties, ranging from comprehensive topics on diseases, health conditions and abnormal findings to highly focused topics on evaluation, differential diagnosis and management.

The scientific community is working hard to make important new discoveries relevant to the treatment of COVID-19, and we’re hopeful that releasing these novel tools will help accelerate this global effort. This work also outlines our long-term vision for the future of accelerated discovery, where multi-disciplinary scientists and clinicians work together to rapidly and effectively create next generation therapeutics, aided by novel AI-powered technologies.

Learn more about IBM’s response to COVID-19: IBM.com/COVID19.

Source: https://www.ibm.com/blogs/research/2020/04/ai-powered-technologies-accelerate-discovery-covid-19/

DiA Imaging Analysis Receives Grant to Accelerate Global Access to its AI Ultrasound Solutions in the Fight Against COVID-19

Source: https://www.grantnews.com/news-articles/?rkey=20200512UN05506&filter=12337

Grant will allow company to accelerate access to its AI solutions and use of ultrasound in COVID-19 emergency settings

TEL AVIV, Israel, May 12, 2020 /PRNewswire-PRWeb/ — DiA Imaging Analysis, a leading provider of AI-based ultrasound analysis solutions, today announced that it has received a government grant from the Israel Innovation Authority (IIA) to develop solutions for ultrasound imaging analysis of COVID-19 patients using Artificial Intelligence (AI).

Using ultrasound in point-of-care emergency settings has gained momentum since the outbreak of the COVID-19 pandemic. In these settings, which include makeshift hospital COVID-19 departments and triage “tents,” portable ultrasound offers clinicians diagnostic decision support, with the added advantages of being easier to disinfect and eliminating the need to transport patients from one room to another. However, analyzing ultrasound images is still a process done mostly visually, leading to a growing market need for automated solutions and decision support.

As the leading provider of AI solutions for ultrasound analysis, backed by Connecticut Innovations, DiA makes ultrasound analysis smarter and accessible to both new and expert ultrasound users with various levels of experience. The company’s flagship LVivo Cardio Toolbox for AI-based cardiac ultrasound analysis enables clinicians to automatically generate objective clinical analysis, with increased accuracy and efficiency, to support decisions about patient treatment and care.

The IIA grant provides a budget of millions of NIS to increase access to DiA’s solutions for users in Israel and globally, and to accelerate R&D with a focus on new AI solutions for COVID-19 patient management. DiA’s solutions are vendor-neutral and platform-agnostic, and are built to run in low-processing, mobile environments such as handheld ultrasound devices.

Recent data highlight the importance of monitoring the heart during the progression of COVID-19, with one study citing that 20% of patients hospitalized with COVID-19 showed signs of heart damage, along with increased mortality rates in those patients. DiA’s LVivo cardiac analysis solutions automatically generate objective, quantified cardiac ultrasound results, enabling point-of-care clinicians to assess cardiac function on the spot, at the patient’s bedside.

According to Dr. Ami Applebaum, Chairman of the Board of the IIA, “The purpose of the IIA’s call was to bring solutions to global markets for fighting COVID-19, with an emphasis on relevancy, fast time to market and collaborations promising continuity of the Israeli economy. DiA meets these requirements with AI innovation for ultrasound.”

DiA has received several FDA/CE clearances and has established distribution partnerships with industry-leading companies including GE Healthcare, IBM Watson and Konica Minolta, currently serving thousands of end users worldwide.

“We see growing use of ultrasound in point-of-care settings, and an urgent need for automated, objective solutions that provide decision support in real time,” said Hila Goldman-Aslan, CEO and Co-founder of DiA Imaging Analysis. “Our AI solutions meet this need by immediately helping clinicians on the frontlines quickly and easily assess COVID-19 patients’ hearts to help guide care delivery.”

About DiA Imaging Analysis:
DiA Imaging Analysis provides advanced AI-based ultrasound analysis technology that makes ultrasound accessible to all. DiA’s automated tools deliver fast and accurate clinical indications to support the decision-making process and offer better patient care. DiA’s AI-based technology uses advanced pattern recognition and machine-learning algorithms to automatically imitate the way the human eye detects image borders and identifies motion. Using DiA’s tools provides automated and objective AI tools, helps reduce variability among users, and increases efficiency. It allows clinicians with various levels of experience to quickly and easily analyze ultrasound images.

For additional information, please visit http://www.dia-analysis.com.

Read Full Post »

Computer Aided Design

Larry H. Bernstein, MD, FCAP, Curator



IBM’s Watson shown to enhance human-computer co-creativity, support biologically inspired design

The Watson Engagement Advisor AI system was trained to “learn” about biologically inspired design from biology articles and then answer questions about it
November 13, 2015   http://www.kurzweilai.net/ibms-watson-shown-to-enhance-human-computer-co-creativity-support-biologically-inspired-design


Georgia Institute of Technology researchers, working with student teams, trained a cloud-based version of IBM’s Watson called the Watson Engagement Advisor to provide answers to questions about biologically inspired design (biomimetics), a design paradigm that uses biological systems as analogues for inventing technological systems.


Ashok Goel, a professor at Georgia Tech’s School of Interactive Computing, conducts research on computational creativity. In an experiment, he used this version of Watson as an “intelligent research assistant” to support teaching about biologically inspired design and computational creativity in the Georgia Tech CS4803/8803 class on Computational Creativity in Spring 2015. Goel found that Watson’s ability to retrieve natural language information could allow a novice to quickly “train up” on complex topics and better determine whether an idea or hypothesis is worth pursuing.

An intelligent research assistant

In the form of a class project, the students fed Watson several hundred biology articles from Biologue, an interactive biology repository, and 1,200 question-answer pairs. The teams then posed questions to Watson about the research it had “learned” regarding big design challenges in areas such as engineering, architecture, systems, and computing.
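The core retrieval step described above (matching a new question against stored question-answer pairs) can be sketched in a few lines of Python. This is a minimal illustration using bag-of-words cosine similarity over hypothetical toy data, not Watson's actual natural-language pipeline, which is far more sophisticated:

```python
# Minimal sketch of Q&A retrieval: answer a new query by finding the
# stored question most similar to it (bag-of-words cosine similarity).
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts for a lowercased, whitespace-split text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_answer(query, qa_pairs):
    """Return the answer whose stored question best matches the query."""
    q_vec = vectorize(query)
    return max(qa_pairs, key=lambda qa: cosine(q_vec, vectorize(qa[0])))[1]

# Toy question-answer pairs, loosely echoing the class examples below.
qa_pairs = [
    ("how do seagulls filter salt from seawater", "through specialized salt glands"),
    ("how do desert plants regulate temperature", "with fibrous insulation material"),
]
print(best_answer("filtering salt out of sea water", qa_pairs))
# → through specialized salt glands
```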

Examples of questions:

“How do you make a better desalination process for consuming sea water?” (Animals have a variety of answers for this, such as how seagulls filter out seawater salt through special glands.)

“How can manufacturers develop better solar cells for long-term space travel?” One answer: Replicate how plants in harsh climates use high-temperature fibrous insulation material to regulate temperature.

Watson effectively acted as an intelligent sounding board to steer students through what would otherwise be a daunting task of parsing a wide volume of research that may fall outside their expertise.

This version of Watson also prompts users with alternate ways to ask questions for better results. Those results are packaged as a “treetop” where each answer is a “leaf” that varies in size based on its weighted importance. This was intended to allow the average user to navigate results more easily on a given topic.



Results from training the Watson AI system were packaged as a “treetop” where each answer is a “leaf” that varies in size based on its weighted importance. Each leaf is the starting point for a Q&A with Watson. (credit: Georgia Tech)
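The sizing scheme behind such a "treetop" comes down to scaling each answer leaf in proportion to its relevance weight. The weights and the linear scaling rule below are illustrative assumptions, not Georgia Tech's actual implementation:

```python
# Illustrative sketch: map each answer's relevance weight to a leaf size
# (in pixels), normalized so the top-weighted answer gets the largest leaf.
def leaf_sizes(answers, max_px=100):
    """Map {answer: weight} to pixel sizes relative to the highest weight."""
    top = max(answers.values())
    return {a: round(max_px * w / top) for a, w in answers.items()}

# Hypothetical relevance weights for three candidate answers.
weights = {"salt glands": 0.9, "fibrous insulation": 0.6, "lotus effect": 0.3}
print(leaf_sizes(weights))
# → {'salt glands': 100, 'fibrous insulation': 67, 'lotus effect': 33}
```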


“Imagine if you could ask Google a complicated question and it immediately responded with your answer — not just a list of links to manually open,” says Goel. “That’s what we did with Watson. Researchers are provided a quickly digestible visual map of the concepts relevant to the query and the degree to which they are relevant. We were able to add more semantic and contextual meaning to Watson to give some notion of a conversation with the AI.”



Georgia Tech’s Watson Engagement Advisor (credit: Georgia Tech)


Goel believes this approach to using Watson could assist professionals in a variety of fields by allowing them to ask questions and receive answers as quickly as in a natural conversation. He plans to investigate other areas with Watson such as online learning and healthcare.

The work was presented at the Association for the Advancement of Artificial Intelligence (AAAI) 2015 Fall Symposium on Cognitive Assistance in Government, Nov. 12–14, in Arlington, Va., and was published in the Proceedings of the AAAI 2015 Fall Symposium on Cognitive Assistance (open access).


Abstract of Using Watson for Enhancing Human-Computer Co-Creativity

We describe an experiment in using IBM’s Watson cognitive system to teach about human-computer co-creativity in a Georgia Tech Spring 2015 class on computational creativity. The project-based class used Watson to support biologically inspired design, a design paradigm that uses biological systems as analogues for inventing technological systems. The twenty-four students in the class self-organized into six teams of four students each, and developed semester-long projects that built on Watson to support biologically inspired design. In this paper, we describe this experiment in using Watson to teach about human-computer co-creativity, present one project in detail, and summarize the remaining five projects. We also draw lessons on building on Watson for (i) supporting biologically inspired design, and (ii) enhancing human-computer co-creativity.


Interesting; however, Google had just announced a big AI venture of its own. It is curious why Watson needed such a well-defined training set. It seems, as was said in the EmTech MIT lectures, that AI is still in its infancy and is nowhere near a true AI system. It is also interesting to note how rapidly China is expanding its supercomputing power (the growth of supercomputers in China is dwarfing that in the US; in fact, the US has 20 fewer supercomputers).

Read Full Post »

From Turing to Watson

Larry H Bernstein, MD, FCAP, Curator


From Turing to Watson: The Long-Burning Hype of Machine Learning

Thomas Slowe, Founder & CEO, Nervve



Each year, technology industry watchers anxiously await the release of Gartner’s Hype Cycle to see what’s rising, what’s falling and what’s completely fizzled when it comes to emerging technologies. Many observers specifically look for the “peak of inflated expectations” to see which technologies have hit their high point when it comes to media saturation, but still need several years before reaching their true potential. While there are many well-known categories on this year’s list, including 3-D printing, virtual reality and wearables, they’re joined by one that comes with a bit more mystery—machine learning.

For people in technology, the definition of machine learning is relatively simple. Machine learning is a form of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed. Machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data.
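A toy example makes that definition concrete: rather than hard-coding the rule y = 2x, the program below estimates it from example data by gradient descent. The data, learning rate, and step count are illustrative choices for this sketch:

```python
# Toy illustration of "learning without being explicitly programmed":
# estimate the weight w in y = w*x from examples, instead of coding w = 2.
def fit_line(points, lr=0.01, steps=2000):
    """Fit y = w*x by gradient descent on mean squared error."""
    w = 0.0
    for _ in range(steps):
        # Gradient of mean((w*x - y)^2) with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in points) / len(points)
        w -= lr * grad
    return w

data = [(1, 2), (2, 4), (3, 6)]   # samples of the unseen rule y = 2x
print(round(fit_line(data), 2))   # the learned weight approaches 2.0
```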

However, if you ask someone on the street, you’re likely to get a blank stare. Some may be able to connect the dots to AI, or perhaps they can reference the technology’s most famous example to date, IBM Watson, thanks to its ability to defeat Jeopardy champions. In reality, machine learning plays a larger role in our everyday lives than people realize. From Siri and Cortana, to our Bluetooth car entertainment systems, to visual technology that can be trained to spot a specific piece of clothing on fuzzy surveillance video, these automated technologies many take for granted have been developed thanks to a machine’s ability to “learn” from prior interactions.

While it may seem machine learning has developed overnight thanks to huge investments and breakthroughs from major players in Silicon Valley, it has, in fact, gone through a long and meandering history over the past 80-plus years. It has morphed from the research focus of a few dozen engineers into the power behind some of today’s most widespread consumer technologies.

In the beginning
While AT&T Bell Labs’ development of the electronic speech synthesizer in 1936 may have been the first major breakthrough for machine learning, the more well-known achievement from this early era was the Turing Test in 1950. Alan Turing introduced the test in a paper he opened with the simple words “I propose to consider the question, ‘Can machines think?’” By proving that humans couldn’t always differentiate between a real person and a machine in basic textual conversations, Turing laid the groundwork for societal acceptance of the concept that machines can learn.

Major advancements in early machine learning weren’t limited solely to voice and text recognition. From the Rosenblatt Perceptron in 1957 to Larry Roberts’ computer vision dissertation in 1963, imagery also played a major role in molding future research within the artificial intelligence community.

Relatable examples
So when did machine learning truly evolve into a real-world technology? A few major breakthroughs occurred in the 1970s that made the concept much more relatable. In 1976, a license plate recognition system was invented in the U.K. at the Police Scientific Development Branch. While license plate software didn’t become more widely used until a couple of decades later, this type of computer vision technology was the basis for the more high-tech versions used by law enforcement and intelligence agencies today.

Machine learning and robotics collided when the Stanford Cart successfully crossed a chair-filled room without human intervention in 1979. The process took more than five hours, thanks to many pauses so the cart could process what it was seeing and plan a new route. While Google’s self-driving cars certainly don’t take that long to navigate the streets of San Francisco or Austin, machine learning technology brings us closer than ever before to realizing technology that was previously only imagined in movies.

The modern era
The past two decades of AI and machine learning have seen the technology grow. The power of neural networks was initially appreciated in the mid-1980s, but computers were too weak to tap their value at a practical level. Geoffrey Hinton revived interest in neural networks in 2006 under the name “deep learning” when he demonstrated a system that learned to classify handwritten digits with high accuracy. Even more impressive was the system’s ability to generate novel handwritten examples it hadn’t been shown before.

IBM’s Watson computer system, powered by IBM POWER7, competes against Jeopardy!’s two most successful and celebrated contestants: Ken Jennings and Brad Rutter. Image: IBM Newsroom

This work started a wave of major Silicon Valley investment in the technology that now powers many of the consumer-focused technologies we see today. From virtual assistants like Siri and Cortana, to virtual online chat agents, and even to our GPS systems, machine learning-based technology influences almost every aspect of our daily interactions.

However, one legacy company has made an outstanding and lasting impact on how the public perceives machine learning: IBM. From Deep Blue’s defeat of world chess champion Garry Kasparov in 1997, to Watson’s takedown of legendary Ken Jennings on Jeopardy! in 2011, one could argue IBM has done more to raise public awareness of a machine’s ability to learn than any other company. Watson’s achievements were particularly impressive because they relied not just on its deep store of questions and answers, but on its ability to use natural language processing to understand Alex Trebek’s clues and signal its answers more quickly than its human competitors.

The future
As we look into the future of machine learning, some may have visions of machine overlords eventually taking over the world. However, we believe we’re still several decades away from contemplating this scenario. What we can choose to focus on instead is where the technology will generate value in the next five to 10 years.

While entirely autonomous machines are unfeasible in the near future, “person-in-the-loop” approaches have shown enormous value. From digital services like learning how to beat video games, to physical-world applications (like visually identifying possible infrastructure failures and helping doctors perform complicated surgeries), the societal possibilities for machine learning are endless. And when in doubt, just ask Siri!



Read Full Post »