Artificial Intelligence: Genomics & Cancer
Artificial Intelligence Definitions
As a useful reference, HAI Associate Director Chris Manning has created an easy-to-use guide on key AI definitions.
Updated on 8/27/2021
Workshop on Foundation Models
Updated on 8/25/2021
Anthropic raises $124 million to build more reliable, general AI systems
Anthropic, an AI safety and research company, has raised $124 million in a Series A. The financing round will support Anthropic in executing against its research roadmap and building prototypes of reliable and steerable AI systems.
The company is led by siblings Dario Amodei (CEO) and Daniela Amodei (President). The Anthropic team has previously conducted research into GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences. Anthropic will use the funding for computationally-intensive research to develop large-scale AI systems that are steerable, interpretable, and robust.
“Anthropic’s goal is to make the fundamental research advances that will let us build more capable, general, and reliable AI systems, then deploy these systems in a way that benefits people. We’re thrilled to be working with investors that support us in this mission and expect to concentrate on research in the immediate term,” said Anthropic CEO Dario Amodei.
Anthropic will focus on research into increasing the safety of AI systems; specifically, the company is focusing on increasing the reliability of large-scale AI models, developing the techniques and tools to make them more interpretable, and building ways to more tightly integrate human feedback into the development and deployment of these systems.
The Series A round was led by Jaan Tallinn, technology investor and co-founder of Skype. The round included participation from James McClave, Dustin Moskovitz, the Center for Emerging Risk Research, Eric Schmidt, and others.
To find out more about Anthropic’s research agenda and approach, you can read the company’s website and its job postings. The company is hiring researchers, engineers, and operational experts to support it in executing against its research roadmap. Find out more here: Anthropic.com.
SOURCE
Updated on 6/11/2021
GPT-3 Scared You? Meet Wu Dao 2.0: A Monster of 1.75 Trillion Parameters
Wu Dao 2.0 is 10x larger than GPT-3. Imagine what it can do.
Alberto Romero

Final thoughts: Wu Dao 2.0 towards AGI
Some of BAAI’s most important members expressed their thoughts on Wu Dao 2.0’s role on the road towards AGI (artificial general intelligence):
“The way to artificial general intelligence is big models and big computer. […] What we are building is a power plant for the future of AI. With mega data, mega computing power, and mega models, we can transform data to fuel the AI applications of the future.”
— Dr. Zhang Hongjiang, chairman of BAAI
“These sophisticated models, trained on gigantic data sets, only require a small amount of new data when used for a specific feature because they can transfer knowledge already learned into new tasks, just like human beings. […] Large-scale pre-trained models are one of today’s best shortcuts to artificial general intelligence.”
— Blake Yan, AI researcher
“Wu Dao 2.0 aims to enable machines to think like humans and achieve cognitive abilities beyond the Turing test.”
— Tang Jie, lead researcher behind Wu Dao 2.0
They bet on GPT-like multimodal and multitasking models to reach AGI. Without a doubt, Wu Dao 2.0, like GPT-3 before it, is an important step towards AGI. Yet, how much closer it will take us is debatable. Some experts argue we’ll need hybrid models to reach AGI. Others defend embodied AI, rejecting traditional bodiless paradigms, such as neural networks, entirely.
No one knows which is the right path. Even if larger pre-trained models are the logical trend today, we may be missing the forest for the trees and end up hitting a less ambitious ceiling. The only clear thing is that if getting there means environmental damage, harmful biases, and high economic costs, not even reaching AGI would be worth it.
SOURCE
What Computers Can’t Do: A Critique of Artificial Reason
Updated on 6/7/2021
The State Of Data, May 2021
By Gil Press
https://www.forbes.com/sites/gilpress/2021/05/31/the-state-of-data-may-2021/?sh=198964881cb4
Updated on 3/28/2021
Watch video
Kira Radinsky: Using Data To Fix the World #OCSummit19
Updated on 2/2/2021
Gartner: The future of AI is not as rosy as some might think
A Gartner report predicts that the second-order consequences of widespread AI will have massive societal impacts, to the point of making us unsure if and when we can trust our own eyes.
By Brandon Vigliarolo | January 25, 2021, 11:53 AM PST
AI ethics: The future of AI could become scary
Gartner has released a series of Predicts 2021 research reports, including one that outlines the serious, wide-reaching ethical and social problems it predicts artificial intelligence (AI) will cause in the next several years. In Predicts 2021: Artificial Intelligence and Its Impact on People and Society, five Gartner analysts report on different predictions they believe will come to fruition by 2025. The report calls particular attention to what it calls second-order consequences of artificial intelligence that arise as unintended results of new technologies.
Generative AI, for example, is now able to create amazingly realistic photographs of people and objects that don’t actually exist; Gartner predicts that by 2023, 20% of account takeovers will use deepfakes generated by this type of AI. “AI capabilities that can create and generate hyper-realistic content will have a transformational effect on the extent to which people can trust their own eyes,” the report said.
The report tackles five different predictions for the AI market, and gives recommendations for how businesses can address those challenges and adapt to the future:
- By 2025, pre-trained AI models will be largely concentrated among 1% of vendors, making responsible use of AI a societal concern
- In 2023, 20% of successful account takeover attacks will use deepfakes as part of social engineering attacks
- By 2024, 60% of AI providers will include harm/misuse mitigation as a part of their software
- By 2025, 10% of governments will avoid privacy and security concerns by using synthetic populations to train AI
- By 2025, 75% of workplace conversations will be recorded and analyzed for use in adding organizational value and assessing risk
Each of those analyses is enough to make AI-watchers sit up and take notice, but combined they paint a picture of a grim future rife with ethical concerns, potential misuse of AI, and loss of privacy in the workplace.
How businesses can respond
Concerns over AI’s effect on privacy and truth are sure to be major topics in the coming years if Gartner’s analysts are accurate in their predictions, and successful businesses will need to be ready to adapt quickly to those concerns.
A recurring theme in the report is the establishment of ethics boards at companies that rely on AI, whether as a service or a product. This is mentioned particularly for businesses that plan to record and analyze workplace conversations: boards with employee representation should be established to ensure fair use of conversation data, Gartner said.
Gartner also recommends that businesses establish criteria for responsible AI consumption and prioritize vendors that “can demonstrate responsible development of AI and clarity in addressing related societal concerns.”
As for security concerns surrounding deepfakes and generative AI, Gartner recommends that organizations schedule training about deepfakes. “We are now entering a zero-trust world. Nothing can be trusted unless it is certified as authenticated using cryptographic digital signatures,” the report said.
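To make the signature idea concrete, here is a minimal sketch (an editorial illustration, not Gartner's prescribed tooling) of how a publisher could sign content and a consumer verify it with the Python cryptography library; the content bytes and keys are hypothetical stand-ins:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical illustration: a publisher signs a piece of content so that anyone
# holding the publisher's public key can check it has not been altered.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"raw bytes of a photo or video frame"  # stand-in for real media content
signature = private_key.sign(media_bytes)

# Consumer side: verification raises InvalidSignature if the content was modified.
try:
    public_key.verify(signature, media_bytes)
    print("Authenticated: content matches the publisher's signature.")
except InvalidSignature:
    print("Untrusted: content does not match the signature.")

# A tampered ("deepfaked") version of the content fails verification.
try:
    public_key.verify(signature, media_bytes + b" tampered")
    print("Authenticated.")
except InvalidSignature:
    print("Tampering detected: signature does not match.")
```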
There’s a lot to digest in this report, from figures saying that the best deepfake detection software will top out at a 50% identification rate in the long term, to the prediction that in 2023 a major US corporation will adopt conversation analysis to determine employee compensation. There’s much to be worried about in these analyses, but potential antidotes are included as well. The full report is available at Gartner, but interested parties will need to pay for access.
SOURCE
Updated on 1/17/2021
This conference zeroed in on the latest research on cognitive science, neuroscience, vision, language, and thought, informing the pursuit of artificial intelligence.
- AI, NLP, and ML for text analysis of LPBI’s four e-Books: a collaboration between scientists and CS/AI specialists on what to look for, and on setting guiding rules for algorithms searching the English text in:
Cancer: Volume One
Cancer: Volume Two
Genomics: Volume One
Genomics: Volume Two
https://pharmaceuticalintelligence.com/2020-summer-internship/
PORTAL for 2020 SUMMER INTERNSHIP @LPBI on Data Curation and Data Annotation
LPBI’s 2020 Summer Interns in Data Science:
Noam Steiner Tomer, Summer Internship, Research Assistant 1
UPDATED on 7/14/2020
NLP Resources
- University of New South Wales NLP Dictionary
https://www.cse.unsw.edu.au/~billw/nlpdict.html
- Princeton WordNet 3.1: Thesaurus/Dictionary
http://wordnetweb.princeton.edu/perl/webwn | https://wordnet.princeton.edu/download/current-version
- Natural Language Toolkit (NLTK, Python NLP library): https://www.nltk.org/
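As a quick orientation to the NLTK library listed above, here is a minimal sketch of basic text analysis (the sample sentence is illustrative, and the model downloads only need to run once):

```python
import nltk

# One-time downloads of the tokenizer and part-of-speech models.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

text = "Genomic profiling can guide personalized cancer therapies."  # illustrative sentence

tokens = nltk.word_tokenize(text)   # split the sentence into word tokens
tagged = nltk.pos_tag(tokens)       # attach a part-of-speech tag to each token

# Keep nouns and adjectives as candidate keywords for curation and indexing.
keywords = [word for word, tag in tagged if tag.startswith(("NN", "JJ"))]
print(keywords)
```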
UPDATED ON 6/19/2020
From @MIT
This week we released our annual list of 35 Innovators under 35. At a time when the news has felt relentless, getting to read about so many talented, mission-driven technologists has been a sight for sore eyes. It’s especially exciting to see how many of them are working directly with AI—either to advance the field through fundamental research or by applying it responsibly across their respective industries. Here are just a few of the people that inspired me:
Inioluwa Deborah Raji, 24, AI Now Institute. Inioluwa Deborah Raji was interning at machine-learning startup Clarifai when she had an experience she remembers as “horrible.” While building a computer vision model to help clients flag inappropriate images as “not safe for work,” she discovered it flagged photos of people of color much more often than those of white people. The realization led her down a path of algorithmic accountability research, where she is a leading voice today. Most recently, Amazon’s decision to implement a one-year moratorium on police use of Rekognition stemmed directly from a paper she co-authored with Joy Buolamwini (one of our innovators in 2018) to demonstrate the product’s discrimination. Read more here.
Manuel Le Gallo, 34, IBM Research. Much of the energy use in modern computing comes from the fact that data needs to be constantly transferred back and forth between the memory and the processor. So Manuel Le Gallo worked with his team at IBM Research to develop a system that uses memory itself to process data. Their early work has shown they can achieve both precision and huge energy savings. They recently completed a process using just 1% as much energy as would have been needed with conventional methods. Read more here.
Bo Li, 32, University of Illinois at Urbana-Champaign. A few years ago, Bo Li and her colleagues fooled an autonomous vehicle into reading a stop sign as one posting a 45 mph speed limit. All they’d done was place small black-and-white stickers on the sign in a pattern that looks random to the human eye. But to a neural network, the graffiti-like arrangement turned into a powerful “adversarial attack”, making it see something that was not there. It was one of the first times researchers had demonstrated how such vulnerabilities could translate into consequences in the physical world. Li now works on devising these subtle changes to better understand and prevent such attacks in the future. Read more here.
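For readers unfamiliar with the term, the sketch below illustrates the simplest gradient-based adversarial perturbation (FGSM) on a toy PyTorch classifier; it is only a conceptual illustration of the general idea, not the physical sticker attack described above, and the model, image, and label are placeholders:

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a real traffic-sign model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "stop sign" image
true_label = torch.tensor([0])                         # stand-in class index

# Gradient of the loss with respect to the input pixels.
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# FGSM: nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.03  # small enough that the change is barely visible to a human
adversarial_image = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

print(model(adversarial_image).argmax(dim=1))  # may now differ from true_label
```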
Andrej Karpathy, 33, Tesla. As a graduate student at Stanford, Andrej Karpathy extended techniques for building what are known as convolutional neural networks (CNNs)—systems that broadly mimic the neuron structure in the visual cortex. By combining CNNs with other deep-learning approaches, he created a system that was not just better at recognizing individual items in images, but capable of seeing an entire scene full of objects and effectively building a story of what was happening in it and what might happen next. Karpathy is now applying his knowledge to Tesla, where he oversees neural networks for the cars’ Autopilot feature. Read more here.
Leila Pirhaji, 34, ReviveMed. Our bodies contain 100,000 metabolites, tiny molecules involved in our metabolism that show the effects of our genes and lifestyle. Such metabolites include everything from blood sugars and cholesterol to obscure molecules that appear in significant numbers only when someone is sick. The problem is that measuring and identifying these molecules is expensive and time-consuming, and fewer than 5% of them in a patient can be identified using common technologies. So Leila Pirhaji developed a platform that uses machine learning to do it much more quickly. Her work could help us better detect and treat diseases. Read more here.
See the full list of 35 Innovators Under 35 here.
SOURCE
Source: “Meet these brilliant young AI innovators under 35,” The Algorithm newsletter, MIT Technology Review <newsletters@technologyreview.com>, Friday, June 19, 2020.
Important resources
- Natural Language Processing (NLP) with Python: The Free eBook
- Data Curation and Data Annotation: https://www.oreilly.com/library/view/natural-language-annotation/9781449332693/
Breaking News May 2020
Artificial Intelligence Cannot Be Inventors, US Patent Office Rules
An AI system called DABUS “invented” two new devices, but the USPTO says only humans can do that.
On Monday, the United States Patent and Trademark Office published a decision stating that artificial intelligences cannot be inventors. Only “natural persons” currently have the right to get a patent.
Last year, two relatively mundane patents, one for a shape-shifting food container and another for an emergency flashlight, posed an existential question for patent regulations around the world: does an inventor have to be a human?
These two inventions were the work of DABUS, an artificial intelligence system created by physicist and AI researcher Stephen Thaler. Now, the USPTO has decided that DABUS and any other AI cannot be listed as an inventor on a patent filing.
Until now, US patent law was vague about whether machines could invent, referring to “individuals” as eligible inventors. Thaler, along with a group of patent law experts, argued that because Thaler didn’t have any expertise in containers or flashlights, and didn’t help DABUS make the inventions, it wouldn’t be right for him to be listed as the inventor.
“If I teach my Ph.D. student that and they go on to make a final complex idea, that doesn’t make me an inventor on their patent, so it shouldn’t with a machine,” Ryan Abbott, a law and health-sciences professor at the University of Surrey in the UK who led the group of legal experts in the AI patent project, told the Wall Street Journal last year.
In the UK, the DABUS patents were rejected under patent laws there that forbid non-natural persons from inventing. With this week’s announcement, the US has followed suit, stating that “only natural persons may be named as an inventor in a patent application.”
The DABUS case brings up similar questions of ownership and non-human rights (and even of what makes us human and what makes other entities not) as the infamous monkey-selfie copyright case, in which PETA argued that a monkey could own the rights to a selfie. Ultimately, PETA’s case was dismissed under U.S. Copyright Office regulations stating that only photographs taken by humans can be copyrighted.
AI PORTAL on Genomics & Cancer @ LPBI
BACKGROUND ON AI and LPBI
- Aviva was exposed to BioInformatics in Pharmaceutics in May 2014 at the BioIT Conference, through the work of Dr. Philip L. Lorenzi, Systems Biology Department, MD Anderson Cancer Center
- Aviva and Larry Published in September 2014
- Aviva was exposed to NLP in Medicine in 9/2014 and at the 2019 World Medical Innovation Forum Conference.
- The Presentation in 4/2019 by
On Sat, 8 Feb 2020 at 15:22 Aviva Lev-Ari <aviva.lev-ari@comcast.net> wrote:
UPDATED on 7/19/2020
A HyperGraphDB representation and a set of rules need to be created for Dr. Larry’s 1,400 curations, more than half of which are about signaling pathways used in receptor-targeted therapeutics.
THEN we will have the software for automated, independent semantic text analysis of Dr. Larry’s UNIVERSE of articles – this was Dr. Williams’s great idea.
(As of 7/19/2020: 1,400 of 5,875 total articles – LHB’s Universe – amount to 23.8% of our IP in journal articles.)
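A minimal sketch of the hypergraph idea, using a plain in-memory Python structure rather than HyperGraphDB itself (the curation IDs, pathways, receptors, and therapies below are hypothetical placeholders): each curation becomes a hyperedge linking all the entities it discusses.

```python
from collections import defaultdict

# Minimal in-memory hypergraph: each hyperedge (a curation) links an arbitrary
# set of nodes (articles, pathways, receptors, therapies). Entities are hypothetical.
hyperedges = {
    "curation_0001": {"article:EGFR_review", "pathway:MAPK", "receptor:EGFR", "therapy:erlotinib"},
    "curation_0002": {"article:HER2_case", "pathway:PI3K-AKT", "receptor:HER2", "therapy:trastuzumab"},
}

# Build an index from each node to the curations (hyperedges) that mention it.
node_index = defaultdict(set)
for edge_id, nodes in hyperedges.items():
    for node in nodes:
        node_index[node].add(edge_id)

# Example query: which curations discuss the MAPK signaling pathway?
print(node_index["pathway:MAPK"])   # -> {'curation_0001'}
```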
Please read the article below and send me your comments on its relevance to our current search for a Partner with expertise in Machine Learning, Natural Language Processing, and AI for text analysis.
https://www.wired.com/story/code-obsessed-novelist-builds-writing-bot-the-plot-thickens/
THE TASK AT HAND for the 2020 SUMMER is FOUR BOOKS: 2 in Genomics and 2 in Cancer
I quote from 2/10/2020:
You need to invest in a very good Python coder who could:
1) Harvest the site
2) Clean it from HTML
3) Look at the markup of categories and titles and subtitles and store them separately
4) Transform the data to XML
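A minimal sketch of what those four steps could look like in Python, using requests, BeautifulSoup, and the standard-library XML writer; the URL and the markup selectors are hypothetical placeholders rather than the site's actual structure:

```python
import requests
from bs4 import BeautifulSoup
import xml.etree.ElementTree as ET

# 1) Harvest a page of the site (hypothetical URL).
html = requests.get("https://pharmaceuticalintelligence.com/sample-post/").text

# 2) Clean it from HTML by parsing it into a document tree.
soup = BeautifulSoup(html, "html.parser")

# 3) Look at the markup for title, categories, and body text
#    (selectors are illustrative; the real theme's markup may differ).
title = soup.title.get_text(strip=True) if soup.title else ""
categories = [a.get_text(strip=True) for a in soup.select("a[rel='category tag']")]
body_text = " ".join(p.get_text(strip=True) for p in soup.find_all("p"))

# 4) Transform the data to XML.
article = ET.Element("article")
ET.SubElement(article, "title").text = title
cats = ET.SubElement(article, "categories")
for c in categories:
    ET.SubElement(cats, "category").text = c
ET.SubElement(article, "body").text = body_text

ET.ElementTree(article).write("article.xml", encoding="utf-8", xml_declaration=True)
```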
Assignments for June 16, 2020, 11 AM EST, and for the 2020 Summer
- Research Text Analysis with NLP algorithms
Please LOOK at these four books, to which we will apply NLP algorithms this Summer
Volume One: Genomics Orientations for Personalized Medicine
Volume One: Cancer Biology and Genomics for Disease Diagnosis
Four of LPBI Group’s 16 volumes in Medicine deal with Genomics, and two deal with Cancer and Oncology.
In June 2020 we are launching a new initiative on Artificial Intelligence in GENOMICS and in CANCER for pharmacological & therapeutics aims.
- Applying NLP, ML, and AI to our six books is the goal, in order to automate the exploration of novel pathways and of potential off-label indications for existing therapies
- For Artificial Intelligence in Medicine, search the SELECT CATEGORY field in the right-hand column of the Home Page
- New content is forthcoming