Feeds:
Posts
Comments

Archive for the ‘Artificial Intelligence – General’ Category

Explanation of “Results of Medical Text Analysis with Natural Language Processing (NLP)” presented in LPBI Group’s NEW GENRE Edition: NLP on Genomics content as a standalone volume in Series B, and NLP on Cancer content as Part B of New Genre Volume 1 in Series C

NEW GENRE Edition, Editor-in-Chief: Aviva Lev-Ari, PhD, RN

Series B: Frontiers in Genomics Research NEW GENRE Audio English-Spanish

https://pharmaceuticalintelligence.com/audio-english-spanish-biomed-e-series/new-genre-audio-english-spanish-series-b-frontiers-in-genomics-research/new-genre-volume-two-latest-in-genomics-methodologies-for-therapeutics-gene-editing-ngs-and-bioinformatics-simulations-and-the-genome-ontology-series-b-volume-2/

PART A: The eTOCs in Spanish in Audio format AND the eTOCs in Bi-lingual format: Spanish and English in Text format

PART C: The Editorials of the original e-Books in English in Audio format

However,

PART B: The graphical results of Machine Learning (ML), Deep Learning (DL) and Natural Language Processing (NLP) algorithms AND the Domain Knowledge Expert (DKE) interpretation of the results in Text format – PART B IS ISSUED AS A STANDALONE VOLUME, available at

https://pharmaceuticalintelligence.com/audio-english-spanish-biomed-e-series/new-genre-audio-english-spanish-series-b-frontiers-in-genomics-research/genomics-volume-2-results-of-medical-text-analysis-with-natural-language-processing-nlp/

See only the graphical results in

Genomics, Volume 3: NLP results – 38 or 39 Hypergraph Plots and 38 or 39 Tree diagram Plots by Madison Davis

https://pharmaceuticalintelligence.com/biomed-e-books/genomics-orientations-for-personalized-medicine/genomics-volume-2-nlp-results-38-or-39-hypergraph-plots-and-38-or-39-tree-diagram-plots-by-madison-davis/

Series C: e-Books on Cancer & Oncology NEW GENRE Audio English-Spanish

https://pharmaceuticalintelligence.com/audio-english-spanish-biomed-e-series/new-genre-audio-english-spanish-series-c-e-books-on-cancer-oncology/new-genre-volume-one-cancer-biology-and-genomics-for-disease-diagnosis-series-c-volume-1%ef%bf%bc/

PART A:

PART A.1: The eTOCs in Spanish in Audio format AND

PART A.2: The eTOCs in Bi-lingual format: Spanish and English in Text format

PART B:

The graphical results of Medical Text Analysis with Machine Learning (ML), Deep Learning (DL) and Natural Language Processing (NLP) algorithms AND the Domain Knowledge Expert (DKE) interpretation of the results in Text format

See only the graphics in

https://pharmaceuticalintelligence.com/biomed-e-books/series-c-e-books-on-cancer-oncology/cancer-volume-1-nlp-results-12-hypergraph-plots-and-12-tree-diagram-plots-by-madison-davis/

PART C:

The Editorials of the original e-Book in English in Audio format

Read Full Post »

Genomic data can predict miscarriage and IVF failure

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

Infertility is a major reproductive health issue that affects about 12% of women of reproductive age in the United States. Aneuploidy in eggs accounts for a significant proportion of early miscarriage and in vitro fertilization failure. Recent studies have shown that genetic variants in several genes affect chromosome segregation fidelity and predispose women to a higher incidence of egg aneuploidy. However, the exact genetic causes of aneuploid egg production remain unclear, making it difficult to diagnose infertility based on individual genetic variants in the mother’s genome. Although age is a predictive factor for aneuploidy, it is not a highly accurate gauge, because aneuploidy rates within individuals of the same age can vary dramatically.

Researchers described a technique combining genomic sequencing with machine-learning methods to predict the likelihood that a woman will undergo a miscarriage because of egg aneuploidy—a term describing a human egg with an abnormal number of chromosomes. The scientists were able to examine genetic samples of patients using a technique called “whole exome sequencing,” which allowed researchers to home in on the protein-coding sections of the vast human genome. Then they created software using machine learning, an aspect of artificial intelligence in which programs can learn and make predictions without following specific instructions. To do so, the researchers developed algorithms and statistical models that analyzed and drew inferences from patterns in the genetic data.

As a result, the scientists were able to create a specific risk score based on a woman’s genome. The scientists also identified three genes—MCM5, FGGY and DDX60L—that, when mutated, are highly associated with a risk of producing eggs with aneuploidy. So the report demonstrated that sequencing data can be mined to predict patients’ aneuploidy risk, thus improving clinical diagnosis. The candidate genes and pathways that were identified in the present study are promising targets for future aneuploidy studies. Identifying genetic variations with more predictive power will serve women and their treating clinicians with better information.
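As a rough illustration of the kind of pipeline described above, here is a toy sketch in Python. Everything in it is hypothetical: the per-gene variant indicators (using the three genes named in the study only as column labels), the tiny training set, and the plain logistic model are stand-ins for the study’s actual sequencing data and machine-learning methods, which are not reproduced here.

```python
# Illustrative sketch only: turning exome-variant features into a risk score
# with a toy logistic model. Data and features are invented for illustration.
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a logistic model with plain stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def risk_score(w, b, x):
    """Probability-like aneuploidy risk score for one genome's features."""
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical training data: 1 = damaging variant present in that gene.
# Columns: [MCM5, FGGY, DDX60L]; labels: 1 = high egg-aneuploidy rate.
X = [[1, 1, 0], [1, 0, 1], [0, 1, 1], [0, 0, 0], [0, 1, 0], [0, 0, 1]]
y = [1, 1, 1, 0, 0, 0]

w, b = train_logistic(X, y)
print(risk_score(w, b, [1, 1, 1]))  # high score: carrier of all three variants
print(risk_score(w, b, [0, 0, 0]))  # low score: no flagged variants
```

The real study worked from whole-exome variant calls over the full genome; the value of the sketch is only to show how a fitted model maps a vector of genetic features to a single per-patient risk score.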

References:

https://medicalxpress-com.cdn.ampproject.org/c/s/medicalxpress.com/news/2022-06-miscarriage-failure-vitro-fertilization-genomic.amp

https://pubmed.ncbi.nlm.nih.gov/35347416/

https://pubmed.ncbi.nlm.nih.gov/31552087/

https://pubmed.ncbi.nlm.nih.gov/33193747/

https://pubmed.ncbi.nlm.nih.gov/33197264/

Read Full Post »

Data Science: Step by Step – A Resource for LPBI Group One-Year Internship in IT, IS, DS

Reporter: Aviva Lev-Ari, PhD, RN

9 Free Harvard Courses for Learning Data Science

In this article, I will list 9 free Harvard courses that you can take to learn data science from scratch. Feel free to skip any of these courses if you already possess knowledge of that subject.

Step 1: Programming

The first step you should take when learning data science is to learn to code. You can choose to do this with your choice of programming language, ideally Python or R.

If you’d like to learn R, Harvard offers an introductory R course created specifically for data science learners, called Data Science: R Basics.

This program will take you through R concepts like variables, data types, vector arithmetic, and indexing. You will also learn to wrangle data with libraries like dplyr and create plots to visualize data.

If you prefer Python, you can choose to take CS50’s Introduction to Programming with Python offered for free by Harvard. In this course, you will learn concepts like functions, arguments, variables, data types, conditional statements, loops, objects, methods, and more.

Both programs above are self-paced. However, the Python course is more detailed than the R program, and requires a longer time commitment to complete. Also, the rest of the courses in this roadmap are taught in R, so it might be worth learning R to be able to follow along easily.

Step 2: Data Visualization

Visualization is one of the most powerful techniques with which you can communicate the findings in your data to another person.

With Harvard’s Data Visualization program, you will learn to build visualizations using the ggplot2 library in R, along with the principles of communicating data-driven insights.
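ggplot2 is an R library, so it is not shown here; as a language-neutral sketch of the principle the course teaches (mapping values of a variable to a visual encoding), here is a minimal text histogram built with only the Python standard library and made-up data.

```python
# Map binned values to a visual mark (here, a bar of '#' characters).
from collections import Counter

# Hypothetical measurements, binned to one decimal place.
data = [1.2, 1.3, 1.3, 1.5, 1.5, 1.5, 1.7, 1.7, 1.9]
counts = Counter(round(x, 1) for x in data)

for value in sorted(counts):
    print(f"{value:>4} | {'#' * counts[value]}")
```

The same mapping idea underlies every ggplot2 aesthetic: a column of data drives a visual property of the plot.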

Step 3: Probability

In this course, you will learn essential probability concepts that are fundamental to conducting statistical tests on data. The topics taught include random variables, independence, Monte Carlo simulations, expected values, standard errors, and the Central Limit Theorem.

The concepts above will be introduced with the help of a case study, which means that you will be able to apply everything you learned to an actual real-world dataset.
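The Monte Carlo ideas in this step can be previewed in a few lines of Python. This sketch (mine, not course material) draws many sample means from a uniform distribution and checks that they concentrate around the true expected value, with a spread close to the standard error the Central Limit Theorem predicts.

```python
# Monte Carlo demonstration of the Central Limit Theorem.
import random
import statistics

random.seed(42)

def sample_mean(n):
    """Mean of n draws from Uniform(0, 1)."""
    return sum(random.random() for _ in range(n)) / n

# Simulate 2,000 sample means, each from n = 100 draws.
means = [sample_mean(100) for _ in range(2000)]

print(statistics.mean(means))   # close to the expected value 0.5
print(statistics.stdev(means))  # close to sqrt(1/12)/sqrt(100) ≈ 0.029
```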

Step 4: Statistics

After learning probability, you can take this course to learn the fundamentals of statistical inference and modelling.

This program will teach you to define population estimates and margins of error, introduce you to Bayesian statistics, and provide you with the fundamentals of predictive modeling.
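As a taste of the first topic, here is a short sketch (with made-up poll numbers, not course data) of a population estimate and its 95% margin of error using the normal approximation:

```python
# 95% confidence interval for a proportion from a hypothetical poll.
import math

successes, n = 540, 1000
p_hat = successes / n                    # point estimate of the proportion
se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error
margin = 1.96 * se                       # 95% margin of error (normal approx.)

print(f"estimate: {p_hat:.3f} ± {margin:.3f}")
```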

Step 5: Productivity Tools (Optional)

I’ve included this project management course as optional since it isn’t directly related to learning data science. Rather, you will be taught to use Unix/Linux for file management, Git and GitHub for version control, and R for creating reports.

The ability to do the above will save you a lot of time and help you better manage end-to-end data science projects.

Step 6: Data Pre-Processing

The next course in this list is called Data Wrangling, and will teach you to prepare data and convert it into a format that is easily digestible by machine learning models.

You will learn to import data into R, tidy data, process string data, parse HTML, work with date-time objects, and mine text.

As a data scientist, you often need to extract data that is publicly available on the Internet in the form of a PDF document, HTML webpage, or a Tweet. You will not always be presented with clean, formatted data in a CSV file or Excel sheet.

By the end of this course, you will learn to wrangle and clean data to come up with critical insights from it.
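A flavor of that workflow, sketched with the Python standard library rather than the course’s R tools, and with invented records: tidy messy strings, parse date-times, and strip units from numeric fields.

```python
# Stdlib-only data wrangling sketch: tidy strings, parse dates, strip units.
import csv
import io
import re
from datetime import datetime

raw = io.StringIO(
    "name,visit_date,weight\n"
    "  Alice ,2022-03-01,64.2kg\n"
    "BOB,2022-03-15,70kg\n"
)

records = []
for row in csv.DictReader(raw):
    records.append({
        "name": row["name"].strip().title(),                       # tidy strings
        "visit_date": datetime.strptime(row["visit_date"], "%Y-%m-%d"),
        "weight_kg": float(re.sub(r"[^\d.]", "", row["weight"])),  # strip units
    })

print(records[0]["name"], records[0]["weight_kg"])  # Alice 64.2
```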

Step 7: Linear Regression

Linear regression is a machine learning technique that is used to model a linear relationship between two or more variables. It can also be used to identify and adjust the effect of confounding variables.

This course will teach you the theory behind linear regression models, how to examine the relationship between two variables, and how confounding variables can be detected and removed before building a machine learning algorithm.
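The confounding idea can be made concrete with a toy example. In the sketch below (illustrative only, not course code), a variable z drives both x and y; regressing y on x alone gives a badly biased slope, while including z in the model recovers the true effect of x:

```python
# Adjusting for a confounder with multiple linear regression (toy example).

def solve(A, b):
    """Solve A w = b by Gauss-Jordan elimination (tiny systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def least_squares(X, y):
    """Ordinary least squares via the normal equations X'X w = X'y."""
    p = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    return solve(XtX, Xty)

# Toy data: z drives both x and y; the true effect of x on y is 1.0.
data = [(0.5 * z + 0.3 * (-1) ** z, float(z)) for z in range(20)]
y = [1.0 * x + 2.0 * z for x, z in data]

naive = least_squares([[1.0, x] for x, _ in data], y)         # y ~ x
adjusted = least_squares([[1.0, x, z] for x, z in data], y)   # y ~ x + z

print(naive[1])     # confounded slope, well above the true value 1.0
print(adjusted[1])  # ≈ 1.0 after adjusting for z
```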

Step 8: Machine Learning

Finally, the course you’ve probably been waiting for! Harvard’s machine learning program will teach you the basics of machine learning, techniques to mitigate overfitting, supervised and unsupervised modelling approaches, and recommendation systems.
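The overfitting lesson can be demonstrated in a few lines. This sketch (mine, not the course’s) uses a 1-nearest-neighbour classifier on noisy, made-up data: it scores perfectly on its own training set, which is exactly why models must be evaluated on held-out data.

```python
# Why we evaluate on held-out data: k=1 nearest neighbour memorizes training points.
import random

random.seed(0)

# Hypothetical dataset: label depends on x > 0.5, with 20% label noise.
points = [(random.random(),) for _ in range(200)]
labels = [int(x[0] > 0.5) ^ (random.random() < 0.2) for x in points]

train_X, test_X = points[:150], points[150:]
train_y, test_y = labels[:150], labels[150:]

def knn_predict(x, X, y, k):
    """Majority vote among the k nearest training points."""
    nearest = sorted(range(len(X)), key=lambda i: abs(X[i][0] - x[0]))[:k]
    return int(sum(y[i] for i in nearest) * 2 >= k)

def accuracy(X, y, train_X, train_y, k):
    return sum(knn_predict(x, train_X, train_y, k) == yi
               for x, yi in zip(X, y)) / len(X)

train_acc_k1 = accuracy(train_X, train_y, train_X, train_y, k=1)
test_acc_k1 = accuracy(test_X, test_y, train_X, train_y, k=1)
test_acc_k15 = accuracy(test_X, test_y, train_X, train_y, k=15)

print(train_acc_k1)  # 1.0: k=1 "memorizes" its own training data
print(test_acc_k1, test_acc_k15)
```

Increasing k smooths the decision rule, one of the standard overfitting mitigations the course covers alongside regularization and cross-validation.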

Step 9: Capstone Project

After completing all the above courses, you can take Harvard’s data science capstone project, where your skills in data visualization, probability, statistics, data wrangling, data organization, regression, and machine learning will be assessed.

With this final project, you will get the opportunity to put together all the knowledge learnt from the above courses and gain the ability to complete a hands-on data science project from scratch.

Note: All the courses above are available on an online learning platform from edX and can be audited for free. If you want a course certificate, however, you will have to pay for one.

Building a data science learning roadmap with free courses offered by MIT.

8 Free MIT Courses to Learn Data Science Online

I enrolled in an undergraduate computer science program and decided to major in data science. I spent over $25K in tuition fees over the span of three years, only to graduate and realize that I wasn’t equipped with the skills necessary to land a job in the field.

I barely knew how to code, and was unclear about the most basic machine learning concepts.

I took some time out to try and learn data science myself — with the help of YouTube videos, online courses, and tutorials. I realized that all of this knowledge was publicly available on the Internet and could be accessed for free.

It came as a surprise that even Ivy League universities started making many of their courses accessible to students worldwide, for little to no charge. This meant that people like me could learn these skills from some of the best institutions in the world, instead of spending thousands of dollars on a subpar degree program.

In this article, I will provide you with a data science roadmap I created using only freely available MIT online courses.

Step 1: Learn to code

I highly recommend learning a programming language before going deep into the math and theory behind data science models. Once you learn to code, you will be able to work with real-world datasets and get a feel of how predictive algorithms function.

MIT Open Courseware offers a beginner-friendly Python course called Introduction to Computer Science and Programming.

This course is designed to help people with no prior coding experience to write programs to tackle useful problems.

Step 2: Statistics

Statistics is at the core of every data science workflow — it is required when building a predictive model, analyzing trends in large amounts of data, or selecting useful features to feed into your model.

MIT Open Courseware offers a beginner-friendly course called Introduction to Probability and Statistics. After taking this course, you will learn the basic principles of statistical inference and probability. Some concepts covered include conditional probability, Bayes theorem, covariance, central limit theorem, resampling, and linear regression.

This course will also walk you through statistical analysis using the R programming language, which is useful as it adds to your tool stack as a data scientist.

Another useful program offered by MIT for free is called Statistical Thinking and Data Analysis. This is another elementary course in the subject that will take you through different data analysis techniques in Excel, R, and Matlab.

You will learn about data collection, analysis, different types of sampling distributions, statistical inference, linear regression, multiple linear regression, and nonparametric statistical methods.
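One formula at the heart of both courses, Bayes’ theorem, is easy to work through in code. The numbers below are the classic made-up diagnostic-test example, not data from the courses:

```python
# Bayes' theorem on a diagnostic-test example (hypothetical numbers):
# disease prevalence 1%, sensitivity 99%, false-positive rate 5%.

prevalence = 0.01
sensitivity = 0.99     # P(positive | disease)
false_positive = 0.05  # P(positive | no disease)

# P(positive), by the law of total probability
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# Bayes' theorem: P(disease | positive)
posterior = sensitivity * prevalence / p_positive

print(posterior)  # ≈ 0.167: most positive tests are false positives
```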

Step 3: Foundational Math Skills

Calculus and linear algebra are two other branches of math that are used in the field of machine learning. Taking a course or two in these subjects will give you a deeper perspective on how predictive models function and on the workings of the underlying algorithms.

To learn calculus, you can take Single Variable Calculus offered by MIT for free, followed by Multivariable Calculus.

Then, you can take this Linear Algebra class by Prof. Gilbert Strang to get a strong grasp of the subject.

All of the above courses are offered by MIT Open Courseware, and are paired with lecture notes, problem sets, exam questions, and solutions.
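As a tiny preview of the calculus material, here is a numerical check of a derivative using a central finite difference (my illustration, not course code): the derivative of x² at x = 3 should come out close to 6.

```python
# Central finite-difference approximation of a derivative.

def derivative(f, x, h=1e-6):
    """Approximate f'(x) with a central difference."""
    return (f(x + h) - f(x - h)) / (2 * h)

print(derivative(lambda x: x * x, 3.0))  # ≈ 6.0
```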

Step 4: Machine Learning

Finally, you can use the knowledge gained in the courses above to take MIT’s Introduction to Machine Learning course. This program will walk you through the implementation of predictive models in Python.

The core focus of this course is on supervised and reinforcement learning problems, and you will be taught concepts such as generalization and how overfitting can be mitigated. Apart from just working with structured datasets, you will also learn to process image and sequential data.

MIT’s machine learning program lists three prerequisites (Python, linear algebra, and calculus), which is why it is advisable to take the courses above before starting this one.

Are These Courses Beginner-Friendly?

Even if you have no prior knowledge of programming, statistics, or mathematics, you can take all the courses listed above.

MIT has designed these programs to take you through each subject from scratch. However, unlike many MOOCs out there, the pace builds up quickly and the courses cover the material in considerable depth.

Due to this, it is advisable to do all the exercises that come with the lectures and work through all the reading material provided.

SOURCE

Natassha Selvaraj is a self-taught data scientist with a passion for writing. You can connect with her on LinkedIn.

https://www.kdnuggets.com/2022/03/8-free-mit-courses-learn-data-science-online.html

Read Full Post »

Tweet Collection of 2022 #EmTechDigital @MIT, March 29-30, 2022

Tweet Author: Aviva Lev-Ari, PhD, RN

Selective Tweet Retweets for The Technology Review: Aviva Lev-Ari, PhD, RN

 

UPDATED on 4/11/2022

Analytics for @AVIVA1950 Tweeting at #EmTechDigital

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2022/04/11/analytics-for-aviva1950-tweeting-at-emtechdigital/

 


Aviva Lev-Ari

@AVIVA1950

Mar 30

#EmTechDigital

@AVIVA1950

@pharma_BI

@techreview

FRONTIER OF #AI follow my tweets of this event more than few tweets per speaker

Aviva Lev-Ari

@AVIVA1950

Mar 29

#EmTechDigital

@AVIVA1950

@pharma_BI

@techReview

#error in programmatic labeling use auto #ml aggregate #transactions


Aviva Lev-Ari

@AVIVA1950

Mar 29

#EmTechDigital

@AVIVA1950

@pharma_BI

@techReview

RajivShah Snorkel AI #programmatic #labelling solution #heuristics converted #code #tagging integration of #labelled data #classification algorithms #scores #BERT improving quality of data labeling #functions #knowledge #graphs

Aviva Lev-Ari

@AVIVA1950

Mar 29

#EmTechDigital

@AVIVA1950

@pharma_BI

@techReview

@AndrewYNg

#NLP #customization of #tools data #standardization in #healthcare and trucking #datasystem #heterogeneity is highest #data life cycle of #ML

Aviva Lev-Ari

@AVIVA1950

Mar 29

#EmTechDigital

@AVIVA1950

@pharma_BI

@techReview

@AndrewYNg

in last decade #ML advanced #opencode frees effort to #dataset avoid #label inconsistency #images #small vs #big #data-centric #ai #system #dataset #slice #data #curation #teams develop #tools #storage #migration #Legacy

Aviva Lev-Ari

@AVIVA1950

Mar 28

2022 EmTechDigital

@MIT

, March 29-30, 2022 https://pharmaceuticalintelligence.com/2022/03/28/2022-emtech-digital-mit/… via

@pharma_BI

Real Time Coverage: Aviva Lev-Ari, PhD, RN #EmTechDigital

@AVIVA1950

@techReview

pharmaceuticalintelligence.com

2022 EmTech Digital @MIT

2022 EmTech Digital @MIT Real Time Coverage: Aviva Lev-Ari, PhD, RN  SPEAKERS Ali Alvi Turing Group Program Manager Microsoft Refik Anadol CEO, RAS Lab; Lecturer UCLA Lauren Bennett Group Software …

Aviva Lev-Ari

@AVIVA1950

Mar 28

2022 EmTech Digital

@MIT

https://pharmaceuticalintelligence.com/2022/03/28/2022-emtech-digital-mit/… via

@pharma_BI

@AVIVA1950

#EmTechDigital


Aviva Lev-Ari

@AVIVA1950

Mar 28

2022 EmTech Digital

@MIT


Aviva Lev-Ari

@AVIVA1950

Mar 26

#EmTech2022

@MIT

Quote Tweet

Stephen J Williams

@StephenJWillia2

  • Mar 25

@AVIVA1950 #EMT twitter.com/Pharma_BI/stat…


You Retweeted

MIT Technology Review

@techreview

Mar 30

That’s a wrap on #EmTechDigital 2022! Thanks for joining us in-person and online.


You Retweeted

LANDING AI

@landingAI

Mar 29

If you missed

@AndrewYNg

’s #EmTechDigital session, you can still learn more about #DataCentricAI here: https://bit.ly/3iM8bPq

@techreview

@strwbilly


You Retweeted

Mark Weber

@markRweber

Mar 29

On #bias embedded in historical data. #syntheticdata can help us build models for the world we aspire to rather than the prejudiced one of the past. Paraphrasing

@danny_lange

of

@unity

at #EmTechDigital #generativeai

Selective Tweets and Retweets from @StephenJWillia2

 

 

Read Full Post »

Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?

Curator: Stephen J. Williams, Ph.D.

UPDATED 4/06/2022

A while back (actually many moons ago) I had put on two posts on this site:

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Twitter is Becoming a Powerful Tool in Science and Medicine

Each of these posts was on the importance of scientific curation of findings within the realm of social media and Web 2.0, a sub-environment known throughout the scientific communities as Science 2.0, in which expert networks collaborated to produce a massive new corpus of knowledge by sharing their views and insights on peer-reviewed scientific findings. Through this new medium, the process of curation would itself generate new ideas and new directions for research and discovery.

The platform sort of looked like the image below:

 

This system sat above the platform of the original Science 1.0, made up of all the scientific journals, books, and traditional literature:

In the old Science 1.0 format, scientific dissemination was in the format of hard print journals, and library subscriptions were mandatory (and eventually expensive). Open Access has tried to ameliorate the expense problem.

Previous image source: PeerJ.com

To index the massive and voluminous body of research and papers beyond the old Dewey Decimal system, a process of curation was mandatory. Dissemination through the new social media was a natural fit; however, the cost had to be spread out among numerous players. Journals, faced with the high costs of subscriptions, found that their only way into this new media outlet was to become Open Access, a movement first sparked by journals like PLOS and PeerJ but then begrudgingly adopted throughout the landscape. But with any movement or new adoption one gets the Good, the Bad, and the Ugly (as described in the Clive Thompson article cited above). The bad sides of Open Access journals were:

  1. Costs are still assumed by the individual researcher, not by the journals.
  2. The rise of numerous predatory journals.

 

Even PeerJ, in a column celebrating an anniversary of a year’s worth of Open Access success stories, lamented the key issues still facing Open Access in practice, which included the cost and the rise of predatory journals.

In essence, Open Access and Science 2.0 sprang into full force BEFORE anyone thought of a way to defray the costs.

 

Can Web 3.0 Finally Offer a Way to Right the Issues Facing High Costs of Scientific Publishing?

What is Web 3.0?

From Wikipedia: https://en.wikipedia.org/wiki/Web3

Web 1.0 and Web 2.0 refer to eras in the history of the Internet as it evolved through various technologies and formats. Web 1.0 refers roughly to the period from 1991 to 2004, where most websites were static webpages, and the vast majority of users were consumers, not producers, of content.[6][7] Web 2.0 is based around the idea of “the web as platform”,[8] and centers on user-created content uploaded to social-networking services, blogs, and wikis, among other services.[9] Web 2.0 is generally considered to have begun around 2004, and continues to the current day.[8][10][4]

Terminology

The term “Web3”, specifically “Web 3.0”, was coined by Ethereum co-founder Gavin Wood in 2014.[1] In 2020 and 2021, the idea of Web3 gained popularity[citation needed]. Particular interest spiked towards the end of 2021, largely due to interest from cryptocurrency enthusiasts and investments from high-profile technologists and companies.[4][5] Executives from venture capital firm Andreessen Horowitz travelled to Washington, D.C. in October 2021 to lobby for the idea as a potential solution to questions about Internet regulation with which policymakers have been grappling.[11]

Web3 is distinct from Tim Berners-Lee’s 1999 concept for a semantic web, which has also been called “Web 3.0”.[12] Some writers referring to the decentralized concept usually known as “Web3” have used the terminology “Web 3.0”, leading to some confusion between the two concepts.[2][3] Furthermore, some visions of Web3 also incorporate ideas relating to the semantic web.[13][14]

Concept

Web3 revolves around the idea of decentralization, which proponents often contrast with Web 2.0, wherein large amounts of the web’s data and content are centralized in the fairly small group of companies often referred to as Big Tech.[4]

Specific visions for Web3 differ, but all are heavily based in blockchain technologies, such as various cryptocurrencies and non-fungible tokens (NFTs).[4] Bloomberg described Web3 as an idea that “would build financial assets, in the form of tokens, into the inner workings of almost anything you do online”.[15] Some visions are based around the concepts of decentralized autonomous organizations (DAOs).[16] Decentralized finance (DeFi) is another key concept; in it, users exchange currency without bank or government involvement.[4] Self-sovereign identity allows users to identify themselves without relying on an authentication system such as OAuth, in which a trusted party has to be reached in order to assess identity.[17]

Reception

Technologists and journalists have described Web3 as a possible solution to concerns about the over-centralization of the web in a few “Big Tech” companies.[4][11] Some have expressed the notion that Web3 could improve data security, scalability, and privacy beyond what is currently possible with Web 2.0 platforms.[14] Bloomberg states that sceptics say the idea “is a long way from proving its use beyond niche applications, many of them tools aimed at crypto traders”.[15] The New York Times reported that several investors are betting $27 billion that Web3 “is the future of the internet”.[18][19]

Some companies, including Reddit and Discord, have explored incorporating Web3 technologies into their platforms in late 2021.[4][20] After heavy user backlash, Discord later announced they had no plans to integrate such technologies.[21] The company’s CEO, Jason Citron, tweeted a screenshot suggesting it might be exploring integrating Web3 into their platform. This led some to cancel their paid subscriptions over their distaste for NFTs, and others expressed concerns that such a change might increase the amount of scams and spam they had already experienced on crypto-related Discord servers.[20] Two days later, Citron tweeted that the company had no plans to integrate Web3 technologies into their platform, and said that it was an internal-only concept that had been developed in a company-wide hackathon.[21]

Some legal scholars quoted by The Conversation have expressed concerns over the difficulty of regulating a decentralized web, which they reported might make it more difficult to prevent cybercrime, online harassment, hate speech, and the dissemination of child abuse images.[13] But, the news website also states that, “[decentralized web] represents the cyber-libertarian views and hopes of the past that the internet can empower ordinary people by breaking down existing power structures.” Some other critics of Web3 see the concept as a part of a cryptocurrency bubble, or as an extension of blockchain-based trends that they see as overhyped or harmful, particularly NFTs.[20] Some critics have raised concerns about the environmental impact of cryptocurrencies and NFTs. Others have expressed beliefs that Web3 and the associated technologies are a pyramid scheme.[5]

Kevin Werbach, author of The Blockchain and the New Architecture of Trust,[22] said that “many so-called ‘web3’ solutions are not as decentralized as they seem, while others have yet to show they are scalable, secure and accessible enough for the mass market”, adding that this “may change, but it’s not a given that all these limitations will be overcome”.[23]

David Gerard, author of Attack of the 50 Foot Blockchain,[24] told The Register that “web3 is a marketing buzzword with no technical meaning. It’s a melange of cryptocurrencies, smart contracts with nigh-magical abilities, and NFTs just because they think they can sell some monkeys to morons”.[25]

Below is an article from MarketWatch.com Distributed Ledger series about the different forms and cryptocurrencies involved

From Marketwatch: https://www.marketwatch.com/story/bitcoin-is-so-2021-heres-why-some-institutions-are-set-to-bypass-the-no-1-crypto-and-invest-in-ethereum-other-blockchains-next-year-11639690654?mod=home-page

by Frances Yue, Editor of Distributed Ledger, Marketwatch.com

Clayton Gardner, co-CEO of crypto investment management firm Titan, told Distributed Ledger that as crypto embraces broader adoption, he expects more institutions to bypass bitcoin and invest in other blockchains, such as Ethereum, Avalanche, and Terra in 2022. which all boast smart-contract features.

Bitcoin traditionally did not support complex smart contracts, which are computer programs stored on blockchains, though a major upgrade in November might have unlocked more potential.

“Bitcoin was originally seen as a macro speculative asset by many funds and for many it still is,” Gardner said. “If anything solidifies its use case, it’s a store of value. It’s not really used as originally intended, perhaps from a medium of exchange perspective.”

For institutions that are looking for blockchains that can “produce utility and some intrinsic value over time,” they might consider some other smart contract blockchains that have been driving the growth of decentralized finance and web 3.0, the third generation of the Internet, according to Gardner. 

“Bitcoin is still one of the most secure blockchains, but I think layer-one, layer-two blockchains beyond Bitcoin will handle the majority of transactions and activities from NFT (nonfungible tokens) to DeFi,” Gardner said. “So I think institutions see that and insofar as they want to put capital to work in the coming months, I think that could be where they just pump the capital.”

Decentralized social media? 

The price of Decentralized Social, or DeSo, a cryptocurrency powering a blockchain that supports decentralized social media applications, surged roughly 74% to about $164 from $94, after Deso was listed at Coinbase Pro on Monday, before it fell to about $95, according to CoinGecko.

In the eyes of Nader Al-Naji, head of the DeSo foundation, decentralized social media has the potential to be “a lot bigger” than decentralized finance.

“Today there are only a few companies that control most of what we see online,” Al-Naji told Distributed Ledger in an interview. But DeSo is “creating a lot of new ways for creators to make money,” Al-Naji said.

“If you find a creator when they’re small, or an influencer, you can invest in that, and then if they become bigger and more popular, you make money and they get capital early on to produce their creative work,” according to Al-Naji.

BitClout, the first application that was created by Al-Naji and his team on the DeSo blockchain, had initially drawn controversy, as some found that they had profiles on the platform without their consent, while the application’s users were buying and selling tokens representing their identities. Such tokens are called “creator coins.”

Al-Naji responded to the controversy, saying that DeSo now supports more than 200 social-media applications including BitClout: “I think that if you don’t like those features, you now have the freedom to use any app you want. Some apps don’t have that functionality at all.”

 

But Before I get to the “selling monkeys to morons” quote,

I want to talk about

THE GOOD, THE BAD, AND THE UGLY


THE GOOD

My foray into Science 2.0, and my pondering of what the movement into a Science 3.0 would look like, led me to an article by Dr. Vladimir Teif, who studies gene regulation and the nucleosome, and who created a worldwide group of scientists who discuss matters of chromatin and gene regulation in a journal-club format.

For more information on this Fragile Nucleosome journal club see https://generegulation.org/fragile-nucleosome/.

Fragile Nucleosome is an international community of scientists interested in chromatin and gene regulation. Fragile Nucleosome is active in several spaces: one is the Discord server where several hundred scientists chat informally on scientific matters. You can join the Fragile Nucleosome Discord server. Another activity of the group is the organization of weekly virtual seminars on Zoom. Our webinars are usually conducted on Wednesdays 9am Pacific time (5pm UK, 6pm Central Europe). Most previous seminars have been recorded and can be viewed at our YouTube channel. The schedule of upcoming webinars is shown below. Our third activity is the organization of weekly journal clubs detailed at a separate page (Fragile Nucleosome Journal Club).

 

His lab site is at https://generegulation.org/, where he published a paper describing what he felt the #science2_0 to #science3_0 transition would look like (see his blog page on this at https://generegulation.org/open-science/).

He coined the concept of Science 3.0 back in 2009. As Dr. Teif mentioned:

So essentially I first introduced this word Science 3.0 in 2009, and since then we did a lot to implement this in practice. The Twitter account @generegulation is also one of examples

 

This is curious, as we still have an ill-defined concept of what #science3_0 will look like, but it is a good read nonetheless.

His paper, entitled “Science 3.0: Corrections to the Science 2.0 paradigm,” is on the Cornell preprint server at https://arxiv.org/abs/1301.2522

 

Abstract

Science 3.0: Corrections to the Science 2.0 paradigm

The concept of Science 2.0 was introduced almost a decade ago to describe the new generation of online-based tools for researchers allowing easier data sharing, collaboration and publishing. Although technically sound, the concept still does not work as expected. Here we provide a systematic line of arguments to modify the concept of Science 2.0, making it more consistent with the spirit and traditions of science and Internet. Our first correction to the Science 2.0 paradigm concerns the open-access publication models charging fees to the authors. As discussed elsewhere, we show that the monopoly of such publishing models increases biases and inequalities in the representation of scientific ideas based on the author’s income. Our second correction concerns post-publication comments online, which are all essentially non-anonymous in the current Science 2.0 paradigm. We conclude that scientific post-publication discussions require special anonymization systems. We further analyze the reasons of the failure of the current post-publication peer-review models and suggest what needs to be changed in Science 3.0 to convert Internet into a large journal club. [bold face added]
In this paper it is important to note the transition from Science 1.0, which involved hard-copy journal publications usually accessible only in libraries, to a more digital 2.0 format in which data, papers, and ideas could be easily shared among networks of scientists.

As Dr. Teif states, the term “Science 2.0” was coined years ago, and several influential journals, including Science, Nature, and Scientific American, endorsed the term and encouraged scientists to move their discussions online. However, even though thousands of scientists are now on Science 2.0 platforms, Dr. Teif notes that membership in many Science 2.0 networking groups, such as those on LinkedIn and ResearchGate, has seemingly saturated over the years, with few new members in recent times.
The consensus is that Science 2.0 networking is beneficial because:
  1. it multiplies the efforts of many scientists, including experts, and adds scientific discourse unavailable in a 1.0 format
  2. online data sharing assists the process of discovery (evident with preprint servers, bio-curated databases, and GitHub projects)
  3. open-access publishing provides free access to professional articles, and open access may become the only publishing format in the future (although this is highly debatable, as many journals are holding on to a type of “hybrid open access” format which is not truly open access)
  4. sharing unfinished works, critiques, and opinions creates visibility for scientists, who can receive credit for their expert commentary

Dr. Teif articulates a few concerns about Science 3.0:

A.  Science 3.0 Still Needs Peer Review

Peer review of scientific findings will always be imperative in the dissemination of well-done, properly controlled scientific discovery.  Just as Science 2.0 relies on an army of scientific volunteers, the peer-review process involves an army of scientific experts who give their time to safeguard the credibility of science by ensuring that findings are reliable and data are presented fairly and properly.  It has been very evident, in this time of pandemic and the rapid growth in the volume of preprint-server papers on SARS-CoV-2, that peer review is critical.  Many of the papers on such preprint servers were later either retracted or failed a stringent peer-review process.

Many journals of the 1.0 format do not generally reward their peer reviewers beyond the credit that researchers claim on their curricula vitae.  Some journals, like the MDPI journal family, do issue peer-reviewer credits, which can be used to defray the high publication costs of open access (one area that many scientists lament about the open-access movement, where the burden of publication cost lies on the individual researcher).

An issue which is highlighted is the potential for INFORMATION NOISE arising from the ability to self-publish on Science 2.0 platforms.

 

The NEW BREED was born in 4/2012

An ongoing effort on this platform, https://pharmaceuticalintelligence.com/, is to establish a scientific methodology for curating scientific findings, where one of the goals is to help quell the information noise that can result from the massive amounts of new informatics and data appearing in the biomedical literature.

B.  Open Access Publishing Model leads to biases and inequalities in the idea selection

The open-access publishing model has been compared to the model applied by the advertising industry years ago, when publishers considered journal articles as “advertisements.”  However, NOTHING could be further from the truth.  In advertising, the companies, not the consumers, pay for the ads.  In scientific open-access publishing, by contrast, although the consumers (libraries) do not pay for access, the burden of BOTH the cost of doing the research and the cost of publishing the findings now falls on the individual researcher.  Some of these publishing costs can be as high as $4,000 USD per article, which is very high for most researchers.  Many universities will reimburse open-access publishing fees, but this still costs the institution and the individual researcher, limiting the savings to either.

This sets up a situation in which young researchers, who in general are not well funded, struggle with the publication costs, creating a biased, inequitable system that rewards well-funded senior researchers and bigger academic labs.

C. Post publication comments and discussion require online hubs and anonymization systems

Many recent publications stress the importance of a post-publication review process or system, yet although many big journals like Nature and Science have their own blogs and commentary systems, these are rarely used: there is roughly one comment per 100 views of a journal article on these systems.  In traditional journals, editors are the referees of comments and have the ability to censor comments or discourse.  The article laments that commenting on journal articles should be as easy as commenting on other social sites, yet scientists are not offering their comments or opinions.

In a personal experience,

a well-written commentary goes through editors, who often reject a comment as if they were rejecting an original research article.  Thus many scientists, I believe, after fashioning a well-researched and well-referenced reply, find it never sees the light of day if it is not in the editor’s interest.

Therefore anonymity is greatly needed, and its absence may be the hindrance explaining why scientific discourse is so limited on these Science 2.0 platforms.  Platforms that have had success in this arena include anonymous ones like Wikipedia and certain closed LinkedIn professional groups, but more open platforms like Google Knowledge have been failures.

A great example on this platform was a very spirited conversation on LinkedIn on genomics, tumor heterogeneity and personalized medicine which we curated from the LinkedIn discussion (unfortunately LinkedIn has closed many groups) seen here:

Issues in Personalized Medicine: Discussions of Intratumor Heterogeneity from the Oncology Pharma forum on LinkedIn


 

In this discussion, it was surprising that over a weekend so many scientists from all over the world contributed to a great discussion on the topic of tumor heterogeneity.

But many feel such discussions would be safer if they were anonymized; however, researchers then do not get any credit for their opinions or commentaries.

A major problem is how to turn these intangible contributions into tangible assets that would both promote the discourse and reward those who take the time to improve scientific discussion.

This is where something like NFTs or a decentralized network may become important!

See

https://pharmaceuticalintelligence.com/portfolio-of-ip-assets/
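One minimal way to make a commentary "tangible" is to fingerprint it: hash the comment text together with a pseudonymous author handle and a timestamp, so the contribution can later be anchored on a decentralized ledger or minted as an NFT while the discussion itself stays anonymous. The record format below is purely hypothetical, a sketch of the idea rather than any particular platform's scheme.

```python
import hashlib
import json
from datetime import datetime, timezone

def attribution_record(author: str, comment: str, when: str) -> dict:
    """Build a verifiable attribution record for a scientific comment.

    The SHA-256 digest uniquely fingerprints the exact text, so the
    author can later prove precedence without revealing their identity
    up front. Field names here are illustrative, not a real standard."""
    payload = json.dumps(
        {"author": author, "comment": comment, "timestamp": when},
        sort_keys=True,  # canonical serialization: same input -> same digest
    )
    return {
        # The digest (not the name) is what would be anchored on-chain,
        # enabling anonymous discourse with later, provable attribution.
        "digest": hashlib.sha256(payload.encode("utf-8")).hexdigest(),
        "timestamp": when,
    }

rec = attribution_record(
    "reviewer_42",  # pseudonymous handle
    "The heterogeneity data in Fig. 2 suggest clonal selection.",
    datetime(2022, 5, 9, tzinfo=timezone.utc).isoformat(),
)
print(rec["digest"])  # 64-hex-character fingerprint
```

Anyone holding the original text can recompute the digest and match it against the anchored record, which is the property that would let an anonymous commenter later claim credit.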

 

UPDATED 5/09/2022

Below is an online @TwitterSpace discussion we had with some young scientists who are just starting out, who gave their thoughts on what Science 3.0 and the future of the dissemination of science might look like in light of this new metaverse.  However, we have to define each of these terms in light of science, and not treat the Internet as merely a decentralized marketplace for commonly held goods.

This online discussion was tweeted out and received a fair number of impressions (60) as well as interactors (50).

For the recording, on Twitter as well as in audio format, please see below.

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Set a reminder for my upcoming Space! <a href="https://t.co/7mOpScZfGN">https://t.co/7mOpScZfGN</a> <a href="https://twitter.com/Pharma_BI?ref_src=twsrc%5Etfw">@Pharma_BI</a> <a href="https://twitter.com/PSMTempleU?ref_src=twsrc%5Etfw">@PSMTempleU</a> <a href="https://twitter.com/hashtag/science3_0?src=hash&amp;ref_src=twsrc%5Etfw">#science3_0</a> <a href="https://twitter.com/science2_0?ref_src=twsrc%5Etfw">@science2_0</a></p>&mdash; Stephen J Williams (@StephenJWillia2) <a href="https://twitter.com/StephenJWillia2/status/1519776668176502792?ref_src=twsrc%5Etfw">April 28, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

 

 

To introduce this discussion, first some starting material which will frame this discourse.

The Internet and the Web are rapidly adopting a new “Web 3.0” format, with decentralized networks, enhanced virtual experiences, and greater interconnection between people. Here we start the discussion of what the move will look like from Science 2.0, where the dissemination of scientific findings was revolutionized by piggybacking on Web 2.0 social media, to a Science 3.0 format. What will it involve, and which paradigms will be turned upside down?

Old Science 1.0 is still the backbone of all scientific discourse, built on the massive amount of experimental and review literature. That literature, however, was in analog format, and we have since moved to more accessible, digital, open-access formats for both publications and raw data. Science 1.0 had an organizing structure, the scientific method for organizing data and literature, with libraries as the indexers using systems like the Dewey decimal system; Science 2.0 made science more accessible and easier to search thanks to the newer digital formats. Yet 2.0 relied on an army of mostly volunteers who did not have much in the way of incentives to co-curate and organize the findings and the massive literature.

Each version of science has its caveats: benefits as well as deficiencies. This curation and the ongoing discussion are meant to solidify the basis for the new format, along with definitions and a determination of structure.

We had high hopes for Science 2.0, in particular the smashing of data and knowledge silos. However, the digital age along with 2.0 platforms seemed to exacerbate this somehow. We are still critically short on analysis!

 

We really need people and organizations to get on top of this new Web 3.0 or metaverse so that similar issues do not get in the way: namely, we need to create an organizing structure (perhaps as knowledgebases), we need INCENTIVIZED co-curators, and we need ANALYSIS… lots of it!!

Are these new technologies the cure or is it just another headache?

 

There were a few overarching themes, whether one was talking about AI, NLP, virtual reality, or other new technologies with respect to this new metaverse, and a consensus of Decentralized, Incentivized, and Integrated was commonly expressed among the attendees.

The following are some slides from representative presentations.


Other articles of note on this topic in this Open Access Scientific Journal include:

Electronic Scientific AGORA: Comment Exchanges by Global Scientists on Articles published in the Open Access Journal @pharmaceuticalintelligence.com – Four Case Studies

eScientific Publishing a Case in Point: Evolution of Platform Architecture Methodologies and of Intellectual Property Development (Content Creation by Curation) Business Model 

e-Scientific Publishing: The Competitive Advantage of a Powerhouse for Curation of Scientific Findings and Methodology Development for e-Scientific Publishing – LPBI Group, A Case in Point

@PharmaceuticalIntelligence.com –  A Case Study on the LEADER in Curation of Scientific Findings

Real Time Coverage @BIOConvention #BIO2019: Falling in Love with Science: Championing Science for Everyone, Everywhere

Old Industrial Revolution Paradigm of Education Needs to End: How Scientific Curation Can Transform Education

 


2022 EmTechDigital @MIT, March 29-30, 2022

Real Time Coverage: Aviva Lev-Ari, PhD, RN 

#EmTechDigital

@AVIVA1950

@pharma_BI

@techreview

SPEAKERS

https://event.technologyreview.com/emtech-digital-2022/speakers

Ali Alvi, Turing Group Program Manager, Microsoft

Refik Anadol, CEO, RAS Lab; Lecturer, UCLA

Lauren Bennett, Group Software Engineering Lead, Spatial Analysis and Data Science, Esri

Elizabeth Bramson-Boudreau, CEO, MIT Technology Review

Tara Chklovski, Founder & CEO, Technovation

Sheldon Fernandez, CEO, DarwinAI

David Ferrucci, Founder, CEO, & Chief Scientist, Elemental Cognition

Anthony Green, Podcast Producer, MIT Technology Review

Agrim Gupta, PhD Student, Stanford Vision and Learning Lab, Stanford University

Mike Haley, VP of Research, Autodesk

Will Douglas Heaven, Senior Editor for AI, MIT Technology Review

Natasha Jaques, Senior Research Scientist, Google Brain

Tony Jebara, VP of Engineering and Head of Machine Learning, Spotify

Clinton Johnson, Racial Equity Unified Team Lead, Esri

Danny Lange, SVP of Artificial Intelligence, Unity Technologies

Julia (Xing) Li, Deputy General Manager, Baidu USA

Darcy MacClaren, Senior Vice President, Digital Supply Chain, SAP North America

Haniyeh Mahmoudian, Global AI Ethicist, DataRobot

Andrew Moore, GM and VP, Google Cloud AI, Google

Mira Murati, SVP, Research, Product, & Partnerships, OpenAI

Prem Natarajan, Vice President Alexa AI, Head of NLU, Amazon

Andrew Ng, Founder and CEO, Landing AI

Amy Nordrum, Editorial Director, Special Projects & Operations, MIT Technology Review

Kavitha Prasad, VP & GM, Datacenter, AI and Cloud Execution and Strategy, Intel Corporation

Bali Raghavan, Head of Engineering, Forward

Rajiv Shah, Principal Data Scientist, Snorkel AI

Sameena Shah, Managing Director, J.P. Morgan AI Research, JP Morgan Chase

David Simchi-Levi, Director, Data Science Lab, MIT

Jennifer Strong, Senior Editor for Podcasts and Live Journalism, MIT Technology Review

Fiona Tan, CTO, Wayfair

Zenna Tavares, Research Scientist, Columbia University; Co-Founder, Basis

Nicol Turner Lee, Director, Center for Technology Innovation, Brookings Institution

Raquel Urtasun, Founder & CEO, Waabi

Oriol Vinyals, Principal Scientist, DeepMind

MIT Inside Track

David Cox, IBM Director, MIT-IBM Watson AI Lab

Luba Elliott, Curator, Producer, and Researcher, Creative AI

Charlotte Jee, Reporter, News, MIT Technology Review

Naveen Kamat, Executive Director, Data and AI Services, Kyndryl

Joseph Lehar, Senior Vice President, R&D Strategy, Owkin

Stefanie Mueller, Associate Professor, MIT CSAIL

Jianxiong Xiao, Founder and CEO, AutoX

TUESDAY, MARCH 29

 

Data-Centric AI

Better Data, Better AI

Data powers AI. Good data can mean the difference between an impactful solution or one that never gets off the ground. Re-assess the foundational AI questions to ensure your data is working for, not against, you.

Innovation to Reality

The challenges of implementing AI are many. Avoid the common pitfalls with real-world case studies from leaders who have successfully turned their AI solutions into reality.

Harness What’s Possible at the Edge

With its potential for near instantaneous decision making, pioneers are moving AI to the edge. We examine the pros and cons of moving AI decisions to the edge, with the experts getting it right.

Generative AI Solutions

The use of generative AI to boost human creativity is breaking boundaries in creative areas previously untouched by AI. We explore the intersection of data and algorithms enabling collaborative AI processes to design and create.

Day 1: Data-Centric AI (9:00 a.m. – 5:20 p.m.)

9:00 AM

Welcome Remarks

Will Douglas Heaven

Senior Editor for AI, MIT Technology Review

Better Data, Better AI (9:10 a.m. – 10:35 a.m.)

Data powers AI. Good data can mean the difference between an impactful solution or one that never gets off the ground. Re-assess the foundational AI questions to ensure your data is working for, not against, you.

9:10 AM

Empowering Data-Centric AI

Data is the most under-valued and de-glamorized aspect of AI. Learn why shifting the focus from model/algorithm development to the quality of the data is the next, and most efficient, way to improve the decision-making abilities of AI.

Andrew Ng

Founder and CEO, Landing AI

9:40 AM

The Mechanics of Data-First AI

Data labeling is key to determining the success or failure of AI applications. Learn how to implement a data-first approach that can transform AI inference, resulting in better models that make better decisions.

Rajiv Shah

Principal Data Scientist, Snorkel AI

10:10 AM

Thought Leadership in Responsible AI

Question the status quo. Build stakeholder trust. These are foundational elements of thought leadership in AI. Explore how organizations can use their data and algorithms in ethical and responsible ways while building bigger and more effective systems.

Haniyeh Mahmoudian

Global AI Ethicist, DataRobot

Mainstage Break (10:35 a.m. – 11:05 a.m.)

Networking and refreshments for our live audience and a selection of curated content for those tuning in virtually.

10:35 AM

MIT Inside Track: From AI Startup to Tech “Unicorn” (available online only)

With its next-generation machine learning models fueling precision medicine, French biotech company, Owkin, captured the attention of the pharma industry. Learn how they did it and get tips to navigate the complex task of scaling your innovation.

Joseph Lehar

Senior Vice President, R&D Strategy, Owkin

Networking Break

Networking and refreshments for our live audience.

Innovation to Reality (11:05 a.m. – 12:30 p.m.)

The challenges of implementing AI are many. Avoid the common pitfalls with real-world case studies from leaders who have successfully turned their AI solutions into reality.

11:05 AM

Secrets of Successful AI Deployments

Deploying AI in real-world environments benefits from human input before and during implementation. Get an inside look at how organizations can ensure reliable results with the key questions and competing needs that should be considered when implementing AI solutions.

Andrew Moore

GM and VP, Google Cloud AI, Google

11:35 AM

From Research Lab to Real World

AI is evolving from the research lab into practical real world applications. Learn what issues should be top of mind for businesses, consumers, and researchers as we take a deep dive into AI solutions that increase modern productivity and accelerate intelligence transformation.

Julia (Xing) Li

Deputy General Manager, Baidu USA

12:00 PM

Closing the 20% Performance Gap

Getting AI to work 80% of the time is relatively straightforward, but trustworthy AI requires deployments that work 100% of the time. Unpack some of the biggest challenges that come up when eliminating the 20% gap.

Bali Raghavan

Head of Engineering, Forward

Lunch and Networking Break (12:30 p.m. – 1:30 p.m.)

12:30 PM

Lunch and Networking Break

Lunch served at the MIT Media Lab and a selection of curated content for those tuning in virtually.

Harness What’s Possible at the Edge (1:30 p.m. – 3:15 p.m.)

With its potential for near instantaneous decision making, pioneers are moving AI to the edge. We examine the pros and cons of moving AI decisions to the edge, with the experts getting it right.

1:30 PM

AI Integration Across Industries – Presented by Intel

To create sustainable business impact, AI capabilities need to be tailored and optimized to an industry or organization’s specific requirements and infrastructure model. Hear how customers’ challenges across industries can be addressed in any compute environment from the cloud to the edge with end-to-end hardware and software optimization.

Kavitha Prasad

VP & GM, Datacenter, AI and Cloud Execution and Strategy, Intel Corporation

Elizabeth Bramson-Boudreau

CEO, MIT Technology Review

1:55 PM

Explainability at the Edge

Decision making has moved from the edge to the cloud before settling into a hybrid setup for many AI systems. Through the examination of key use-cases, take a deep dive into understanding the benefits and detractors of operating a machine-learning system at the point of inference.

Sheldon Fernandez

CEO, DarwinAI

2:25 PM

AI Experiences at the Edge

Enable your organization to transform customer experiences through AI at the edge. Learn about the required technologies, including teachable and self-learning AI, that are needed for a successful shift to the edge, and hear how deploying these technologies at scale can unlock richer, more responsive experiences.

Prem Natarajan

Vice President Alexa AI, Head of NLU, Amazon

2:50 PM

The Road Ahead

Reimagine AI solutions as a unified system, instead of individual components. Through the lens of autonomous vehicles, discover the pros and cons of using an all-inclusive AI-first approach that includes AI decision-making at the edge and see how this thinking can be applied across industry.

Raquel Urtasun

Founder & CEO, Waabi

Mainstage Break (3:15 p.m. – 3:45 p.m.)

Networking and refreshments for our live audience and a selection of curated content for those tuning in virtually.

3:15 PM

Networking Break

Networking and refreshments for our live audience.

MIT Inside Track: The Impact of Creative AI (available online only)

Advances in machine learning are enabling artists and creative technologists to think about and use AI in new ways. Discuss the concept of creative AI and look at project examples from London’s art scene that illustrate the various ways creative AI is bridging the gap between the traditional art world and the latest technological innovations.

Luba Elliott

Curator, Producer, and Researcher, Creative AI

Generative AI Solutions (3:45 p.m. – 5:10 p.m.)

The use of generative AI to boost human creativity is breaking boundaries in creative areas previously untouched by AI. We explore the intersection of data and algorithms enabling collaborative AI processes to design and create.

3:45 PM

Enhancing Design through Generative AI

Change the design problem with AI. The creative nature of generative AI enhances design capabilities, finding efficiencies and opportunities that humans alone might not conceive. Explore business applications including project planning, construction, and physical design.

Mike Haley

VP of Research, Autodesk

4:15 PM

Using Synthetic Data and Simulations

Deep learning is a data-hungry technology. Manually labelled training data has become cost-prohibitive and time-consuming. Get a glimpse of how interactive large-scale synthetic data generation can accelerate the AI revolution, unlocking the potential of data-driven artificial intelligence.

Danny Lange

SVP of Artificial Intelligence, Unity Technologies

4:40 PM

The Art of AI

Push beyond the typical uses of AI. Explore the nexus of art, technology, and human creativity through the unique innovation of kinetic data sculptures that use machines to give physical context and shape to data to rethink how we engage with the physical world.

Refik Anadol

CEO, RAS Lab; Lecturer, UCLA

Last Call with the Editors (5:10 p.m. – 5:20 p.m.)

5:10 PM

Last Call with the Editors

Before we wrap day 1, join our last call with all of our editors to get their analysis on the day’s topics, themes, and guests.

Networking Reception (5:20 p.m. – 6:20 p.m.)

WEDNESDAY, MARCH 30

Evolving the Algorithms

What’s Next for Deep Learning

Deep learning algorithms have powered most major AI advances of the last decade. We bring you into the top innovation labs to see how they are advancing their deep learning models to find out just how much more we can get out of these algorithms.

AI in Day-To-Day Business

Many organizations are already using AI internally in their day-to-day operations, in areas like cybersecurity, customer service, finance, and manufacturing. We examine the tools that organizations are using when putting AI to work.

Making AI Work for All

As AI increasingly underpins our lives, businesses, and society, we must ensure that AI works for everyone – not just those represented in datasets, and not just 80% of the time. Examine the challenges and solutions needed to ensure AI works fairly, for all.

Envisioning the Next AI

Some business problems can’t be solved with current deep learning methods. We look around the corner at the new approaches and most revolutionary ideas propelling us toward the next stage in AI evolution.

Day 2: Evolving the Algorithms (9:00 a.m. – 5:25 p.m.)

9:00 AM

Welcome Remarks

Will Douglas Heaven

Senior Editor for AI, MIT Technology Review

What’s Next for Deep Learning (9:10 a.m. – 10:25 a.m.)

Deep learning algorithms have powered most major AI advances of the last decade. We bring you into the top innovation labs to see how they are advancing their deep learning models to find out just how much more we can get out of these algorithms.

9:10 AM

Transforming Traditional Algorithms

Transformer-based language models are revolutionizing the way neural networks process natural language. This deep dive looks at how organizations can put their data to work using transformer models. We consider the problems that business may face as these massive models mature, including training needs, managing parallel processing at scale, and countering offensive data.

Ali Alvi

Turing Group Program Manager, Microsoft

9:35 AM

Human-like Problem Solving

Critical thinking may be one step closer for AI by combining large-scale transformers with smart sampling and filtering. Get an early look at how AlphaCode’s entry into competitive programming may lead to a human-like capacity for AI to write original code that solves unforeseen problems.

Oriol Vinyals

Principal Scientist, DeepMind

10:00 AM

Aligning AI Technologies at Scale

As advanced AI systems gain greater capabilities in our search for artificial general intelligence, it’s critical to teach them how to understand human intentions. Look at the latest advancements in AI systems and how to ensure they can be truthful, helpful, and safe.

Mira Murati

SVP, Research, Product, & Partnerships, OpenAI

Mainstage Break (10:25 a.m. – 10:55 a.m.)

Networking and refreshments for our live audience and a selection of curated content for those tuning in virtually.

10:25 AM

Networking Break

Networking and refreshments for our live audience.

Business-Ready Data Holds the Key to AI Democratization – Presented by Kyndryl

Good data is the bedrock of a self-service data consumption model, which in turn unlocks insights, analytics, personalization at scale through AI. Yet many organizations face immense challenges setting up a robust data foundation. Dive into a pragmatic perspective on abstracting the complexity and untangling the conflicts in data management for better AI.

Naveen Kamat

Executive Director, Data and AI Services, Kyndryl

AI in Day-To-Day Business (10:55 a.m. – 12:20 p.m.)

Many organizations are already using AI internally in their day-to-day operations, in areas like cybersecurity, customer service, finance, and manufacturing. We examine the tools that organizations are using when putting AI to work.

10:55 AM

Improving Business Processes with AI

Effectively operationalized AI/ML can unlock untapped potential in your organization. From enhancing internal processes to managing the customer experience, get the pragmatic advice and takeaways leaders need to better understand their internal data to achieve impactful results.

Fiona Tan

CTO, Wayfair

11:25 AM

Accelerating the Supply Chain

Use AI to maximize reliability of supply chains. Learn the dos and don’ts to managing key processes within your supply chain, including workforce management, streamlining and simplification, and reaping the full value of your supply chain solutions.

Darcy MacClaren

Senior Vice President, Digital Supply Chain, SAP North America

David Simchi-Levi

Director, Data Science Lab, MIT

11:55 AM

Putting Recommendation Algorithms to Work

Machine and reinforcement learning enable Spotify to deliver the right content to the right listener at the right time, allowing for personalized listening experiences that facilitate discovery at a global scale. Through user interactions, algorithms suggest new content and creators that keep customers both happy and engaged with the platform. Dive into the details of making better user recommendations.

Tony Jebara

VP of Engineering and Head of Machine Learning, Spotify

Lunch and Networking Break (12:20 p.m. – 1:15 p.m.)

12:20 PM

Lunch and Networking Break

Lunch served at the MIT Media Lab and a selection of curated content for those tuning in virtually.

Making AI Work for All (1:15 p.m. – 2:35 p.m.)

As AI increasingly underpins our lives, businesses, and society, we must ensure that AI works for everyone – not just those represented in datasets, and not just 80% of the time. Examine the challenges and solutions needed to ensure AI works fairly, for all.

1:15 PM

Mapping Equity

Walk through the practical steps to map and understand the nuances, outliers, and special cases in datasets. Get tips to ensure ethical and trustworthy approaches to training AI systems that grow in scope and scale within a business.

Lauren Bennett

Group Software Engineering Lead, Spatial Analysis and Data Science, Esri

Clinton Johnson

Racial Equity Unified Team Lead, Esri

1:45 PM

Bridging the AI Accessibility Gap

Get an inside look at the long- and short-term benefits of addressing inequities in AI opportunities, ranging from educating the tech youth of the future to a 10,000-foot view on what it will take to ensure that equity is top of mind within society and business alike.

Tara Chklovski

Founder & CEO, Technovation

2:10 PM

The AI Policies We Need

Public policies can help to make AI more equitable and ethical for all. Examine how policies could impact corporations and what it means for building internal policies, regardless of what government adopts. Identify actionable ideas to best move policies forward for the widest benefit to all.

Nicol Turner Lee

Director, Center for Technology Innovation, Brookings Institution

Mainstage Break (2:35 p.m. – 3:05 p.m.)

Networking and refreshments for our live audience and a selection of curated content for those tuning in virtually.

2:35 PM

Networking Break

Networking and refreshments for our live audience.

MIT Inside Track: Accelerating the Advent of Autonomous Driving (available online only)

From the U.S. to China, the global robo-taxi race is gaining traction with consumers and regulators alike. Go behind the scenes with AutoX – a Level 4 driving technology company – and hear how it overcame obstacles while launching the world’s second and China’s first public, fully driverless robo-taxi service.

Jianxiong Xiao

Founder and CEO, AutoX

Envisioning the Next AI (3:05 p.m. – 4:50 p.m.)

Some business problems can’t be solved with current deep learning methods. We look at what’s around the corner: the new approaches and most revolutionary ideas propelling us toward the next stage in AI evolution.

3:05 PM

How AI Is Powering the Future of Financial Services – Presented by JP Morgan Chase

The use of AI in finance is gaining traction as organizations realize the advantages of using algorithms to streamline and improve the accuracy of financial tasks. Step through use cases that examine how AI can be used to minimize financial risk, maximize financial returns, optimize venture capital funding by connecting entrepreneurs to the right investors, and more.

Sameena Shah

Managing Director, J.P. Morgan AI Research, JP Morgan Chase

3:30 PM

Evolution of Mind and Body

In a study of simulated robotic evolution, it was observed that more complex environments and evolutionary changes to the robot’s physical form accelerated the growth of robot intelligence. Examine this cutting-edge research and decipher what this early discovery means for the next generation of AI and robotics.

Agrim Gupta

PhD Student, Stanford Vision and Learning Lab, Stanford University

4:00 PM

A Path to Human-like Common Sense

Understanding human thinking and reasoning processes could lead to more general, flexible and human-like artificial intelligence. Take a close look at the research building AI inspired by human common-sense that could create a new generation of tools for complex decision-making.

Zenna Tavares

Research Scientist, Columbia University; Co-Founder, Basis

4:25 PM

Social Learning Bots

Look under the hood at this innovative approach to AI learning with multi-agent and human-AI interactions. Discover how bots work together and learn together through personal interactions. Recognize the future implications for AI, plus the benefits and obstacles that may come from this new process.

Natasha Jaques

Senior Research Scientist, Google Brain

Closing Segment (4:50 p.m. – 5:25 p.m.)

4:50 PM

Pulling Back the Curtain on AI

David Ferrucci was the principal investigator for the team that led IBM Watson to its landmark Jeopardy success, awakening the world to the possibilities of AI. We pull back the curtain on AI for a wide-ranging discussion on explicable models, and the next generation of human and machine collaboration creating AI thought partners with limitless applications.

David Ferrucci

Founder, CEO, & Chief Scientist, Elemental Cognition

5:15 PM

Closing Remarks

Closing Toast (5:25 p.m. – 5:45 p.m.)

Read Full Post »

AI enabled Drug Discovery and Development: The Challenges and the Promise

Reporter: Aviva Lev-Ari, PhD, RN

 

Early Development

Caroline Kovac (the first IBM General Manager of Life Sciences) started in silico drug development in 2000, using a large database of substances and computing power. She transformed an idea into a $2 billion business, with most of the money coming from big pharma. She would ask pharmaceutical companies what new drugs they were planning to develop, and provided the four most probable combinations of substances based on in silico work.

Carol Kovac

General Manager, Healthcare and Life Sciences, IBM

From a speaker profile at a 2005 conference:

Carol Kovac is General Manager of IBM Healthcare and Life Sciences responsible for the strategic direction of IBM′s global healthcare and life sciences business. Kovac leads her team in developing the latest information technology solutions and services, establishing partnerships and overseeing IBM investment within the healthcare, pharmaceutical and life sciences markets. Starting with only two employees as an emerging business unit in the year 2000, Kovac has successfully grown the life sciences business unit into a multi-billion dollar business and one of IBM′s most successful ventures to date with more than 1500 employees worldwide. Kovac′s prior positions include general manager of IBM Life Sciences, vice president of Technical Strategy and Division Operations, and vice president of Services and Solutions. In the latter role, she was instrumental in launching the Computational Biology Center at IBM Research. Kovac sits on the Board of Directors of Research!America and Africa Harvest. She was inducted into the Women in Technology International Hall of Fame in 2002, and in 2004, Fortune magazine named her one of the 50 most powerful women in business. Kovac earned her Ph.D. in chemistry at the University of Southern California.

SOURCE

https://www.milkeninstitute.org/events/conferences/global-conference/2005/speaker-detail/1536

 

In 2022

The use of artificial intelligence in drug discovery, when coupled with new genetic insights and the increase of patient medical data of the last decade, has the potential to bring novel medicines to patients more efficiently and more predictably.

WATCH VIDEO

https://www.youtube.com/watch?v=b7N3ijnv6lk

SOURCE

https://engineering.stanford.edu/magazine/promise-and-challenges-relying-ai-drug-development?utm_source=Stanford+ALL

Conversation among three experts:

Jack Fuchs, MBA ’91, an adjunct lecturer who teaches “Principled Entrepreneurial Decisions” at Stanford School of Engineering, moderated and explored how clearly articulated principles can guide the direction of technological advancements like AI-enabled drug discovery.

Kim Branson, Global head of AI and machine learning at GSK.

Russ Altman, the Kenneth Fong Professor of Bioengineering, of genetics, of medicine (general medical discipline), of biomedical data science and, by courtesy, of computer science.

 

Synthetic Biology Software applied to development of Galectins Inhibitors at LPBI Group

 

The Map of human proteins drawn by artificial intelligence and PROTAC (proteolysis targeting chimeras) Technology for Drug Discovery

Curators: Dr. Stephen J. Williams and Aviva Lev-Ari, PhD, RN

Using Structural Computation Models to Predict Productive PROTAC Ternary Complexes

Ternary complex formation is necessary but not sufficient for target protein degradation. In this research, Bai et al. have addressed questions to better understand the rate-limiting steps between ternary complex formation and target protein degradation. They have developed a structure-based computational modeling approach to predict the efficiency and sites of target protein ubiquitination by CRBN-binding PROTACs. Such models will allow a more complete understanding of PROTAC-directed degradation and allow crafting of increasingly effective and specific PROTACs for therapeutic applications.

Another major feature of this research is that it is the result of a collaboration between research groups at Amgen, Inc. and Promega Corporation. In the past, commercial research laboratories have shied away from collaboration, but in the last several years researchers have become more open to collaborative work. This increased collaboration allows scientists to bring their different expertise to a problem or question and speeds up discovery. According to Dr. Kristin Riching, Senior Research Scientist at Promega Corporation, “Targeted protein degraders have broken many of the rules that have guided traditional drug development, but it is exciting to see how the collective learnings we gain from their study can aid the advancement of this new class of molecules to the clinic as effective therapeutics.”
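To build intuition for why ternary complex formation is non-monotonic in PROTAC concentration (the well-known “hook effect”), here is a deliberately simplified, non-cooperative equilibrium sketch. The Kd values and concentration range are hypothetical, for illustration only; this is not the structure-based model of Bai et al.:

```python
import numpy as np

def ternary_fraction(p, kd_target=0.1, kd_e3=1.0):
    """Toy approximation: probability the target is PROTAC-bound, times the
    probability the E3 ligase is not already saturated by unproductive binary
    binding. Kd values (in uM) are hypothetical, for illustration only."""
    return (p / (kd_target + p)) * (kd_e3 / (kd_e3 + p))

conc = np.logspace(-3, 3, 61)   # hypothetical PROTAC concentrations, uM
frac = ternary_fraction(conc)
peak = conc[np.argmax(frac)]    # maximum falls near sqrt(kd_target * kd_e3)
```

The resulting bell shape (rising, peaking near the geometric mean of the two Kd values, then falling as excess PROTAC forms unproductive binary complexes) is one reason ternary complex formation alone does not guarantee degradation.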

Literature Reviewed

Bai, N., Riching, K.M., et al. (2022) Modeling the CRL4A ligase complex to predict target protein ubiquitination induced by cereblon-recruiting PROTACs. J. Biol. Chem.

The researchers used NanoBRET assays as part of their model validation. Learn more about NanoBRET technology at the Promega.com website.

SOURCE

https://www.promegaconnections.com/protac-ternary-complex/?utm_campaign=ms-2022-pharma_tpd&utm_source=linkedin&utm_medium=Khoros&utm_term=sf254230485&utm_content=030822ct-blogsf254230485&sf254230485=1

Read Full Post »

 

Medical Startups – Artificial Intelligence (AI) Startups in Healthcare

Reporters: Stephen J. Williams, PhD and Aviva Lev-Ari, PhD, RN and Shraga Rottem, MD, DSc,

The motivation for this post is three-fold:

First, we are presenting an application of AI, NLP, DL to our own medical text in the Genomics space. Here we present the first section of Part 1 in the following book. Part 1 has six subsections that yielded 12 plots. The entire Book is represented by 38 x 2 = 76 plots.

Second, we bring to the attention of the e-Reader the list of 276 Medical Startups – Artificial Intelligence (AI) Startups in Healthcare as a hot universe of R&D activity in Human Health.

Third, we highlight one academic center with an AI focus.

Dear friends of the ETH AI Center,

We would like to provide you with some exciting updates from the ETH AI Center and its growing community.

The ETH AI Center now comprises 110 research groups in the faculty, 20 corporate partners and has led to nine AI startups.

As the Covid-19 restrictions in Switzerland have recently been lifted, we would like to hear from you what kind of events you would like to see in 2022! Participate in the survey to suggest event formats and topics that you would enjoy being a part of. We are already excited to learn what we can achieve together this year.

We already have many interesting events coming up, we look forward to seeing you at our main and community events!

SOURCE

https://news.ethz.ch/html_mail.jsp?params=%2FUnFXUQJ%2FmiOP6akBq8eHxaXG%2BRdNmeoVa9gX5ArpTr6mX74xp5d78HhuIHTd9V6AHtAfRahyx%2BfRGrzVL1G8Jy5e3zykvr1WDtMoUC%2B7vILoHCGQ5p1rxaPzOsF94ID

 

 

LPBI Group is applying AI for Medical Text Analysis with Machine Learning and Natural Language Processing: Statistical and Deep Learning

Our Book 

Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS & BioInformatics, Simulations and the Genome Ontology

Medical Text Analysis of this Book shows the following results, obtained by Madison Davis by applying Wolfram NLP for Biological Languages to our own text. See an example below:

Part 1: Next Generation Sequencing (NGS)

 

1.1 The NGS Science

1.1.1 BioIT Aspect

 

Hypergraph Plot #1 and Tree Diagram Plot #1

for 1.1.1 based on 16 articles & on 12 keywords

protein, cancer, dna, genes, rna, survival, immune, tumor, patients, human, genome, expression
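The hypergraph and tree plots group keywords that co-occur in the same articles. Here is a minimal Python sketch of that co-occurrence idea, on a hypothetical three-document corpus (the actual analysis used Wolfram NLP over 16 articles):

```python
from collections import Counter
from itertools import combinations

# Hypothetical mini-corpus standing in for the 16 genomics articles.
docs = [
    "rna expression in tumor cells alters immune response",
    "dna repair genes and cancer survival in patients",
    "protein expression predicts survival in human cancer",
]
keywords = {"protein", "cancer", "dna", "genes", "rna", "survival",
            "immune", "tumor", "patients", "human", "genome", "expression"}

# Each document contributes one hyperedge: the set of keywords it mentions.
hyperedges = [sorted(keywords & set(d.split())) for d in docs]

# Pairwise co-occurrence counts: a simple graph projection of the hypergraph.
cooc = Counter()
for edge in hyperedges:
    cooc.update(combinations(edge, 2))
```

Each hyperedge is the keyword set of one document; projecting hyperedges to weighted pairs gives the kind of graph a hypergraph plot visualizes.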

(more…)

Read Full Post »

@MIT Artificial intelligence system rapidly predicts how two proteins will attach: The model called Equidock, focuses on rigid body docking — which occurs when two proteins attach by rotating or translating in 3D space, but their shapes don’t squeeze or bend

Reporter: Aviva Lev-Ari, PhD, RN

This paper introduces a novel SE(3) equivariant graph matching network, along with a keypoint discovery and alignment approach, for the problem of protein-protein docking, with a novel loss based on optimal transport. The overall consensus is that this is an impactful solution to an important problem, whereby competitive results are achieved without the need for templates, refinement, and are achieved with substantially faster run times.
28 Sept 2021 (modified: 18 Nov 2021), ICLR 2022 Spotlight

Keywords: protein complexes, protein structure, rigid body docking, SE(3) equivariance, graph neural networks

Abstract: Protein complex formation is a central problem in biology, being involved in most of the cell’s processes, and essential for applications such as drug design or protein engineering. We tackle rigid body protein-protein docking, i.e., computationally predicting the 3D structure of a protein-protein complex from the individual unbound structures, assuming no three-dimensional flexibility during binding. We design a novel pairwise-independent SE(3)-equivariant graph matching network to predict the rotation and translation to place one of the proteins at the right location and the right orientation relative to the second protein. We mathematically guarantee that the predicted complex is always identical regardless of the initial placements of the two structures, avoiding expensive data augmentation. Our model approximates the binding pocket and predicts the docking pose using keypoint matching and alignment through optimal transport and a differentiable Kabsch algorithm. Empirically, we achieve significant running time improvements over existing protein docking software and predict qualitatively plausible protein complex structures despite not using heavy sampling, structure refinement, or templates.

One-sentence Summary: We perform rigid protein docking using a novel independent SE(3)-equivariant message passing mechanism that guarantees the same resulting protein complex independent of the initial placement of the two 3D structures.
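The “differentiable Kabsch algorithm” mentioned in the abstract computes the optimal rigid transform aligning matched keypoints. A minimal NumPy sketch of the classical (non-differentiable) Kabsch step, for illustration only; EquiDock’s actual implementation is differentiable and embedded in the network:

```python
import numpy as np

def kabsch(P, Q):
    """Optimal rotation R (det +1) and translation t minimizing
    ||R @ P + t - Q||_F for matched 3xN point sets, via SVD (Kabsch)."""
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T          # 3x3 covariance of centered sets
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # reflection correction
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

Given matched keypoints, R and t define how to place one protein relative to the other; in EquiDock the keypoints come from optimal-transport-based matching.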
 
SOURCE

https://openreview.net/forum?id=GQjaI9mLet

MIT researchers created a machine-learning model that can directly predict the complex that will form when two proteins bind together. Their technique is between 80 and 500 times faster than state-of-the-art software methods, and often predicts protein structures that are closer to actual structures that have been observed experimentally.

This technique could help scientists better understand some biological processes that involve protein interactions, like DNA replication and repair; it could also speed up the process of developing new medicines.

“Deep learning is very good at capturing interactions between different proteins that are otherwise difficult for chemists or biologists to write experimentally. Some of these interactions are very complicated, and people haven’t found good ways to express them. This deep-learning model can learn these types of interactions from data,” says Octavian-Eugen Ganea, a postdoc in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-lead author of the paper.

Ganea’s co-lead author is Xinyuan Huang, a graduate student at ETH Zurich. MIT co-authors include Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health in CSAIL, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering in CSAIL and a member of the Institute for Data, Systems, and Society. The research will be presented at the International Conference on Learning Representations.

Significance of the Scientific Development by the @MIT Team

EquiDock wide applicability:

  • Our method can be integrated end-to-end to boost the quality of other models (see above discussion on runtime importance). Examples are predicting functions of protein complexes [3] or their binding affinity [5], de novo generation of proteins binding to specific targets (e.g., antibodies [6]), modeling back-bone and side-chain flexibility [4], or devising methods for non-binary multimers. See the updated discussion in the “Conclusion” section of our paper.

 

Advantages over previous methods:

  • Our method does not rely on templates or heavy candidate sampling [7], aiming at the ambitious goal of predicting the complex pose directly. This should be interpreted in terms of generalization (to unseen structures) and scalability capabilities of docking models, as well as their applicability to various other tasks (discussed above).

 

  • Our method obtains a competitive quality without explicitly using previous geometric (e.g., 3D Zernike descriptors [8]) or chemical (e.g., hydrophilic information) features [3]. Future EquiDock extensions would find creative ways to leverage these different signals and, thus, obtain more improvements.

   

Novelty of theory:

  • Our work is the first to formalize the notion of pairwise independent SE(3)-equivariance. Previous work (e.g., [9,10]) has incorporated only single object Euclidean-equivariances into deep learning models. For tasks such as docking and binding of biological objects, it is crucial that models understand the concept of multi-independent Euclidean equivariances.

  • All propositions in Section 3 are our novel theoretical contributions.

  • We have rewritten the Contribution and Related Work sections to clarify this aspect.
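Single-object E(n)-equivariance, as in references [9, 10], can be illustrated with a toy message-passing layer whose coordinate updates follow relative difference vectors weighted by invariant features. This sketch (hypothetical weights, NumPy only) is not EquiDock’s pairwise-independent architecture, just the single-object property it builds on:

```python
import numpy as np

def egnn_layer(x, h, w_e=0.1, w_x=0.05):
    """Toy E(n)-equivariant update in the spirit of Satorras et al. (2021):
    coordinates x (N, 3) move along relative difference vectors, scaled by a
    message computed only from invariants (squared distances, features h)."""
    n = x.shape[0]
    x_new = x.copy()
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d2 = np.sum((x[i] - x[j]) ** 2)          # rotation/translation invariant
            m = np.tanh(w_e * (h[i] + h[j] + d2))    # invariant message
            x_new[i] += w_x * m * (x[i] - x[j])      # equivariant coordinate update
    return x_new
```

Rotating and translating the input and then applying the layer gives the same result as applying the layer first and then transforming; EquiDock’s contribution is requiring this property independently for each of the two proteins.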

   


Footnote [a]: We have fixed an important bug in the cross-attention code. We have done a more extensive hyperparameter search and understood that layer normalization is crucial in layers used in Eqs. 5 and 9, but not on the h embeddings as it was originally shown in Eq. 10. We have seen benefits from training our models with a longer patience in the early stopping criteria (30 epochs for DIPS and 150 epochs for DB5). Increasing the learning rate to 2e-4 is important to speed-up training. Using an intersection loss weight of 10 leads to improved results compared to the default of 1.

 

Bibliography:

[1] Protein-ligand blind docking using QuickVina-W with inter-process spatio-temporal integration, Hassan et al., 2017

[2] GNINA 1.0: molecular docking with deep learning, McNutt et al., 2021

[3] Protein-protein and domain-domain interactions, Kangueane and Nilofer, 2018

[4] Side-chain Packing Using SE(3)-Transformer, Jindal et al., 2022

[5] Contacts-based prediction of binding affinity in protein–protein complexes, Vangone et al., 2015

[6] Iterative refinement graph neural network for antibody sequence-structure co-design, Jin et al., 2021

[7] Hierarchical, rotation-equivariant neural networks to select structural models of protein complexes, Eismann et al, 2020

[8] Protein-protein docking using region-based 3D Zernike descriptors, Venkatraman et al., 2009

[9] SE(3)-transformers: 3D roto-translation equivariant attention networks, Fuchs et al, 2020

[10] E(n) equivariant graph neural networks, Satorras et al., 2021

[11] Fast end-to-end learning on protein surfaces, Sverrisson et al., 2020

SOURCE

https://openreview.net/forum?id=GQjaI9mLet

Read Full Post »

IBM has reached an agreement to sell its Watson Health data and analytics business to the private-equity firm Francisco Partners

Reporter: Aviva Lev-Ari, PhD, RN

UPDATED on 2/5/2022


UPDATED on 1/31/2022

AI Hot in Healthcare Despite IBM’s Watson Health Pullout

BY JONATHAN SMITH

28/01/2022

  • Big pharma companies are snapping up collaborations with firms using AI to speed up drug discovery, with one of the latest being Sanofi’s pact with Exscientia.
  • Tech giants are placing big bets on digital health analysis firms, such as Oracle’s €25.42B ($28.3B) takeover of Cerner in the US.
  • There’s also a steady flow of financing going to startups taking new directions with AI and bioinformatics, with the latest example being a €20M Series A round by SeqOne Genomics in France. 

“IBM Watson uses a philosophy that is diametrically opposed to SeqOne’s,” said Jean-Marc Holder, CSO of SeqOne. “[IBM Watson seems] to rely on analysis of large amounts of relatively unstructured data and bet on the volume of data delivering the right result. By opposition, SeqOne strongly believes that data must be curated and structured in order to deliver good results in genomics.”

https://www.labiotech.eu/trends-news/ibm-watson-health-ai/

UPDATED on 1/31/2022

Key M&A in Health IT include:

Last month,

  • IBM arch-rival Oracle announced a $28 billion takeover of electronic health record company Cerner, while 2021 also saw
  • Microsoft’s $19.7 billion play for AI specialist Nuance and a
  • $17 billion takeover of Athenahealth by investment groups Bain Capital and Hellman & Friedman.

IBM sells off large parts of Watson Health business

Phil Taylor

January 24, 2022

Francisco Partners is picking up a range of databases and analytics tools – including

  • Health Insights,
  • MarketScan,
  • Clinical Development,
  • Social Programme Management,
  • Micromedex and
  • other imaging and radiology tools, for an undisclosed sum estimated to be in the region of $1 billion.

IBM said the sell-off is tagged as “a clear next step” as it focuses on its platform-based hybrid cloud and artificial intelligence strategy, but it’s no secret that Watson Health has failed to live up to its early promise.

The sale also marks a retreat from healthcare for the tech giant, which is remarkable given that it once said it viewed health as second only to financial services as a market opportunity.

IBM said it “remains committed to Watson, our broader AI business, and to the clients and partners we support in healthcare IT.”

The company reportedly invested billions of dollars in Watson, but according to a Wall Street Journal report last year, the health business – which provided cloud-based access to the supercomputer and  a range of analytics services – has struggled to build market share and reach profitability.

An investigation by Stat meanwhile suggested that Watson Health’s early push into cancer for example was affected by a premature launch, interoperability challenges and over-reliance on human input to generate results.

For its part, IBM has said that the Watson for Oncology product has been improving year-on-year as the AI crunches more and more data.

That is backed up by a meta-analysis of its performance published last year in Nature, which found that the treatment recommendations delivered by the tool were largely in line with those of human doctors for several cancer types.

However, the study also found that there was less consistency in more advanced cancers, and the authors noted the system “still needs further improvement.”

Watson Health offers a range of other services of course, including

  • tools for genomic analysis and
  • running clinical trials that have found favour with a number of pharma companies.
  • Francisco said in a statement that it offers “a market leading team [that] provides its customers with mission critical products and outstanding service.”

The deal is expected to close in the second quarter, with the current management of Watson Health retaining “similar roles” in the new standalone company, according to the investment company.

IBM’s step back from health comes as tech rivals are still piling into the sector.

SOURCE

https://pharmaphorum.com/news/ibm-sells-off-large-parts-of-watson-health-business/

@pharma_BI is asking: What will be the future of WATSON Health?

@AVIVA1950 says on 1/26/2022:

Aviva believes plausible scenarios will be that Francisco Partners will:

A. Invest in Watson Health – Like New Mountains Capital (NMC) did with Cytel

B. Acquire several other complementary businesses – Like New Mountains Capital (NMC) did with Cytel

C. Hold and grow – Like New Mountains Capital (NMC) is doing with Cytel since 2018.

D. Sell it in 7 years to @Illumina or @Nvidia or Google’s Parent @AlphaBet

1/21/2022

IBM said Friday it will sell the core data assets of its Watson Health division to a San Francisco-based private equity firm, marking the staggering collapse of its ambitious artificial intelligence effort that failed to live up to its promises to transform everything from drug discovery to cancer care.

https://www.statnews.com/2022/01/21/ibm-watson-health-sale-equity/

IBM has reached an agreement to sell its Watson Health data and analytics business to the private-equity firm Francisco Partners. … He said the deal will give Francisco Partners data and analytics assets that will benefit from “the enhanced investment and expertise of a healthcare industry focused portfolio.”


IBM Is Selling Watson Health Unit to Private-Equity Firmhttps://www.barrons.com › articles › ibm-selling-watson-h…
About featured snippetsFeedback
IBM is selling off Watson Health to a private equity firm.https://www.nytimes.com › 2022/01/21 › business › ibm-…

5 days ago — IBM has been trying to find buyers for the Watson Health business for more than a year. And it was seeking a sale price of about $1 billion, The …Missing: Statement ‎| Must include: Statement


IBM Sells Some Watson Health Assets for More Than $1 Billionhttps://www.bloomberg.com › news › articles › ibm-is-s…

5 days ago — International Business Machines Corp. agreed to sell part of its IBM Watson Health business to private equity firm Francisco Partners, …
IBM has sold Watson Health. It was a long time coming.https://www.protocol.com › bulletins › ibm-watson-heal…

5 days ago — IBM announced today that it has sold its Watson Health data and analytics assets to private equity firm Francisco Partners.
IBM to sell Watson Health assets to Francisco Partnershttps://www.healthcareitnews.com › news › ibm-sell-wa…

5 days ago — IBM on Friday announced a deal with Bay Area-based Francisco Partners to sell off healthcare data and analytics assets from its Watson …
IBM is selling off its Watson Health assets – CNNhttps://www.cnn.com › 2022/01/21 › tech › ibm-selling-w…

5 days ago — IBM said Friday that it will sell off the healthcare data and analytics assets housed under its Watson Health unit to private equity firm …
IBM to sell Watson Health division to private equity firmhttps://www.healthcaredive.com › news › ibm-sell-wats…

5 days ago — Tom Rosamilia, senior vice president of IBM Software, said in a statement the deal is a next step allowing IBM to focus more intensely on its …
IBM sells Watson Health assets to investment firm Francisco …https://www.fiercehealthcare.com › tech › ibm-sells-wat…

5 days ago — IBM has reached a deal to sell the healthcare data and analytics assets from its Watson Health business to investment firm Francisco …
Francisco Partners to Acquire IBM’s Healthcare Data and …https://newsroom.ibm.com › 2022-01-21-Francisco-Par…

5 days ago — “IBM remains committed to Watson, our broader AI business, and to the clients and partners we support in healthcare IT.Missing: Statement ‎| Must include: Statement
IBM Sells Portion of Watson Health Business to Francisco …https://www.channele2e.com › ChannelE2E Blog

5 days ago — IBM Watson Health – Certain Assets Sold: Executive Perspectives. In a prepared statement about the deal, Tom Rosamilia, senior VP, IBM Software, …


IBM Watson Health Finally Sold by IBM After 11 Months of …https://www.enterpriseai.news › 2022/01/21 › ibm-wats…

5 days ago — Another IBM executive, Tom Rosamilia, a senior vice president with IBM Software, said in a statement that the sale of the Watson Health assets …
Francisco Partners scoops up bulk of IBM’s Watson Health unithttps://techcrunch.com › 2022/01/21 › francisco-partne…

5 days ago — In what has to be considered an anticlimactic ending, IBM sold off the data assets of its Watson Health unit to private equity firm …
IBM Sells Some Watson Health Assets for More Than $1 Billionhttps://www.bloombergquint.com › Business

IBM confirmed an earlier Bloomberg report on the sale in a statement on Friday, … “IBM remains committed to Watson, our broader AI business, and to the …
IBM offloads Watson Health business data, analyticshttps://searchbusinessanalytics.techtarget.com › news › IB…

5 days ago — IBM has sold the bulk of its Watson Health data and analytics business to a … In a press statementIBM said offloading its Watson Health …
IBM selling Watson Health data and analytics business to …https://digitalhealth.modernhealthcare.com › Finance

5 days ago — IBM announced Friday that Francisco Partners will acquire its … The news comes after IBM sold three components of its Watson Health …
IBM to sell Watson Health assets to private equity firmhttps://www.auntminnie.com › …

5 days ago — IBM has agreed to sell healthcare data analytics assets from its current Watson Health business to private equity firm Francisco Partners.
IBM sells off large parts of Watson Health business -https://pharmaphorum.com › news › ibm-sells-off-large…

2 days ago — Tech giant IBM draws back from its digital health aspirations, agreeing a deal to sell a large chunk of IBM Watson Health to private equity …
IBM sells off Watson AI healthcare unit – Verdicthttps://www.verdict.co.uk › ibm-sells-off-watson-ai-hea…

2 days ago — IBM is to sell off its Watson Health data assets, bringing all but the final blow to … Senior Vice President, IBM Software in a statement.
IBM Sells Portions Of Watson Health Unit To Investment Firmhttps://www.investors.com › news › technology › ibm-s…

5 days ago — The sale to Francisco Partners is the latest step by IBM to refocus its … said IBM senior vice president Tom Rosamilia in a statement.


IBM Sells Off Watson Health Assets | Healthcare Innovationhttps://www.hcinnovationgroup.com › news › ibm-sells…

5 days ago — 5, IBM (NYSE: IBM) initially explored putting IBM Watson Health up for sale in … senior vice president of IBM Software, in a statement.
IBM is selling off its Watson Health assets – KESQhttps://kesq.com › money › 2022/01/21 › ibm-is-selling…

5 days ago — “The Watson Health sale has been anticipated for quite some time,” Paddy … senior vice president of IBM Software, said in a statement.
Report: IBM seeking to sell Watson Health unit for $1B+https://siliconangle.com › 2022/01/06 › report-ibm-see…

Jan 6, 2022 — IBM Corp. has launched a new effort to sell its Watson Health division, Axios reported on Wednesday, and the company is said to be hoping …
History of IBM – Wikipediahttps://en.wikipedia.org › wiki › History_of_IBM
As the sales force grew into a highly professional and knowledgeable arm of the company, Watson focused their attention on providing large-scale tabulating …
Remember IBM’s Amazing Watson AI? Now it’s desperately …https://almooon.com › remember-ibms-amazing-watson…

Jan 7, 2022 — IBM’s infamous Watson artificial intelligence once defeated two $1 … offering the health portion of its much-hyped algorithm for sale.
IBM shifts focus with sale of Watson marketing, commerce …https://www.marketingdive.com › news › ibm-shifts-foc…

Apr 9, 2019 — IBM plans to sell its Watson marketing and commerce solutions to the private equity firm Centerbridge Partners, the company announced in a …
IBM and Salesforce Join Forces to Bring Watson and Einstein …http://www.smartcustomerservice.com › News-Features

Jan 26, 2018 — IBM has, meanwhile, named Salesforce its preferred customer engagement platform for sales and service. “The combination of IBM Cloud and …
IBM Sells Watson Health Assets to Investment Firm – WSJhttps://www.wsj.com › articles › ibm-sells-watson-health-a…

5 days ago — International Business Machines Corp. IBM 5.65% agreed to sell the data and analytics assets from its Watson Health business to investment …Missing: Statement ‎| Must include: Statement
Latest News & Videos, Photos about ibm watson health – The Economic Times
https://economictimes.indiatimes.com › topic › ibm-wat…
IBM is said to consider sale of Watson Health amid cloud focus … Research India and CTO IBM India/South Asia, was quoted as saying in an IBM statement.

IBM explores sale of Watson Health | Fox Business
https://www.foxbusiness.com › healthcare › ibm-explores-…
Feb 18, 2021 — International Business Machines Corp. is exploring a potential sale of its IBM Watson Health business, according to people familiar with the …


IBM Explores Sale of IBM Watson Health – Slashdot
https://slashdot.org › story › ibm-explores-sale-of-ibm-…
Feb 19, 2021 — IBM is exploring a potential sale of its IBM Watson Health business, WSJ is reporting, citing people familiar with the matter, …

IBM’s Watson Health is sold off in parts | Hacker News
https://news.ycombinator.com › item
3 days ago — Please don’t make such definite statements. You even say it in your own comment … People outside tech were buzzing about IBM and Watson.

Data Before Technology: IBM Watson’s Vision – Forrester
https://www.forrester.com › Featured Blogs
Nov 2, 2014 — I sat down with Steve Cowley, General Manager for IBM Watson, on Tuesday at … Steve surprised me with this statement, “[With] traditional …

How IBM Watson Overpromised and Underdelivered on AI … – IEEE Spectrum
https://spectrum.ieee.org › how-ibm-watson-overpromised…
After its triumph on Jeopardy!, IBM’s AI seemed poised to revolutionize medicine. Doctors are still waiting.

You’re probably using IBM’s Watson computer and don’t know it – Vox
https://www.vox.com › ibm-ginni-rometty-watson
Jun 1, 2016 — But please don’t call it “artificial intelligence,” IBM’s CEO says. … sales figures aren’t yet disclosed in its financial statements.

IBM, investment firm reach deal for Watson Health assets – MM+M
https://www.mmm-online.com › home › channel › brea…
5 days ago — IBM will sell the healthcare data and analytics assets of its Watson Health business to investment firm Francisco Partners as part of a deal …

IBM has sold off Watson at a steep discount, and is exiting the … – Reddit (r/Futurology)
https://www.reddit.com › Futurology › comments › ib…
3 days ago — Nuance played a part in building Watson by supplying the speech recognition component of Watson. Through the years, Nuance has done some serious …
