Breaking News
5/2/2023
Geoffrey Hinton tells us why he’s now scared of the tech he helped build
“I have suddenly switched my views on whether these things are going to be more intelligent than us.”
4/28/2023
Ilya Sutskever is the co-founder and chief scientist of OpenAI, which aims to build artificial general intelligence that benefits all of humanity. He leads research at OpenAI and is one of the architects behind the GPT models. In this conversation with Stanford adjunct lecturer Ravi Belani, Sutskever explains his approach to making complex decisions at OpenAI and for AI companies in general, and makes predictions about the future of deep learning. (Stanford eCorner content is produced by the Stanford Technology Ventures Program.)
https://www.youtube.com/watch?v=Wmo2vR7U9ck
4/24/2023
https://www.youtube.com/watch?v=XjSUJUL9ADw
4/23/2023
What Is ChatGPT Doing and Why Does It Work?
In this article, we will explain how ChatGPT works and why it is able to produce coherent and diverse conversations.
By Stephen Wolfram, Founder & CEO at Wolfram Research on April 18, 2023 in Natural Language Processing
https://www.kdnuggets.com/2023/04/chatgpt-work.html
AutoGPT: Everything You Need To Know
Just when we got our heads around ChatGPT, another one came along. AutoGPT is an experimental open-source application pushing the capabilities of the GPT-4 language model.
By Nisha Arya, KDnuggets on April 14, 2023 in Artificial Intelligence
https://www.kdnuggets.com/2023/04/autogpt-everything-need-know.html
6 ChatGPT mind-blowing extensions to use anywhere
And how to make ChatGPT our daily assistant using them.
By Josep Ferrer, Data Analyst and Project Manager at Catalan Tourist Board on April 13, 2023 in Artificial Intelligence
https://www.kdnuggets.com/2023/04/6-chatgpt-mindblowing-extensions-anywhere.html
4/19/2023
Legal Considerations in Building an AI Business #1
From: Arnon <Publications@arnontl.com>, Wednesday, April 19, 2023
- Generally Applicable Laws – Aside from AI-specific regulations, existing laws that govern particular uses or industries will continue to apply. These include, but are not limited to, consumer protection regulations, laws governing medical device safety, intellectual property law, and privacy law.
- Responsible AI Principles – Key principles for the ethical and responsible use of AI include:
- Fairness and Equality – There is a tendency to assume that algorithms will be more neutral and less biased than humans, but this is not always the case. Biases can be introduced into an algorithm through the data used to train it or the way it is set up. If the training dataset reflects existing social biases, the algorithm will reinforce those same biases. Studies have also shown that a homogeneous group of developers may unintentionally develop biased algorithms. Mechanisms should be instituted to identify biases that may have been introduced, to ensure that AI systems are not discriminatory, and to confirm that each system is fit for and achieves its designated purpose.
- Robustness, reliability, and safety – Adequate testing and oversight should be in place in order to ensure safety and reliability. AI systems should be traceable and should allow for a human to be kept in the decision-making loop where possible. More practical action items include: (i) executing impact assessments before embarking on AI development; (ii) applying data governance and management policies for AI programs; (iii) appointing human oversight to review AI systems regularly, including designing AI systems so that they can be monitored and do not become “black boxes” (see sub-section e below); (iv) ensuring that expected errors can be corrected competently and quickly; (v) ensuring that the AI system is secure, which includes appropriate data security measures.
- Privacy – Privacy considerations should be taken into account, both in terms of the source of the data on which AI systems are trained and the manner in which AI systems are used. Compliance with privacy laws should be ensured at all times.
- Accountability – The OECD refers to AI accountability as follows: “organisations or individuals will ensure the proper functioning, throughout their lifecycle, of the AI systems that they design, develop, operate or deploy, in accordance with their roles and applicable regulatory frameworks, and for demonstrating this through their actions and decision-making process.”
- Transparency, explainability and interpretability – Where people are affected by a conclusion reached by an AI system, it is important that they understand that the decision was made by an AI system and how the AI was developed and used in the specific context. Companies using AI should be transparent with end users, disclose the use of AI (such as in a chatbot), and work to ensure that AI systems do not become “black boxes”: it should be possible to explain why the output is the way it is. Being able to explain how the AI is built is essential for enabling users to trust the output and also allows for proper oversight. For example, greater transparency into how AI systems are built and trained can provide more oversight over biases that may be introduced, allow better review of the safety and robustness of AI systems, and ensure that the right parties remain responsible for the functioning of the AI.
- Using AI in Practice
As AI systems take center stage and the use of AI becomes more regulated, it is essential that companies are aware of legal and regulatory requirements early in the development of their products. As mentioned above, in the next article we intend to propose concrete steps that companies can take to design and develop safer, compliant AI systems.
4/15/2023
Beyond ChatGPT: Stuart Russell on the Risks and Rewards of A.I.
The Commonwealth Club of California
Stuart Russell, Professor of Computer Science,
- Director of the Kavli Center for Ethics, Science, and the Public,
- Director of the Center for Human-Compatible AI, University of California, Berkeley;
- Author, Human Compatible: Artificial Intelligence and the Problem of Control
The A.I. Dilemma – March 9, 2023
CENTER FOR HUMANE TECHNOLOGY
Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions to a post-A.I. world. This presentation is from a private gathering in San Francisco on March 9th with leading technologists and decision-makers with the ability to influence the future of large-language model A.I.s. This presentation was given before the launch of GPT-4. We encourage viewers to consider calling their political representatives to advocate for holding hearings on AI risk and creating adequate guardrails. For the podcast version, please visit: https://www.humanetech.com/podcast/th…
4/14/2023
In the words of Stephen Wolfram, the combination of ChatGPT and Wolfram|Alpha could be very powerful
ChatGPT and Wolfram|Alpha
It’s always amazing when things suddenly “just work”. It happened to us with Wolfram|Alpha back in 2009. It happened with our Physics Project in 2020. And it’s happening now with OpenAI’s ChatGPT.
I’ve been tracking neural net technology for a long time (about 43 years, actually). And even having watched developments in the past few years I find the performance of ChatGPT thoroughly remarkable. Finally, and suddenly, here’s a system that can successfully generate text about almost anything—that’s very comparable to what humans might write. It’s impressive, and useful. And, as I discuss elsewhere, I think its success is probably telling us some very fundamental things about the nature of human thinking.
But while ChatGPT is a remarkable achievement in automating the doing of major human-like things, not everything that’s useful to do is quite so “human like”. Some of it is instead more formal and structured. And indeed one of the great achievements of our civilization over the past several centuries has been to build up the paradigms of mathematics, the exact sciences—and, most importantly, now computation—and to create a tower of capabilities quite different from what pure human-like thinking can achieve.
I myself have been deeply involved with the computational paradigm for many decades, in the singular pursuit of building a computational language to represent as many things in the world as possible in formal symbolic ways. And in doing this my goal has been to build a system that can “computationally assist”—and augment—what I and others want to do. I think about things as a human. But I can also immediately call on Wolfram Language and Wolfram|Alpha to tap into a kind of unique “computational superpower” that lets me do all sorts of beyond-human things.
It’s a tremendously powerful way of working. And the point is that it’s not just important for us humans. It’s equally, if not more, important for human-like AIs as well—immediately giving them what we can think of as computational knowledge superpowers, that leverage the non-human-like power of structured computation and structured knowledge.

We’ve just started exploring what this means for ChatGPT. But it’s pretty clear that wonderful things are possible. Wolfram|Alpha does something very different from ChatGPT, in a very different way. But they have a common interface: natural language. And this means that ChatGPT can “talk to” Wolfram|Alpha just like humans do—with Wolfram|Alpha turning the natural language it gets from ChatGPT into precise, symbolic computational language on which it can apply its computational knowledge power.
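To make the “common interface is natural language” point concrete, here is a minimal Python sketch (not Wolfram’s or OpenAI’s actual plugin code) of handing a plain natural-language question to Wolfram|Alpha’s Short Answers API; the WOLFRAM_APPID environment variable and the example question are placeholders, and the decision of when ChatGPT would delegate is omitted.

import os
import requests

# A plain natural-language question goes to Wolfram|Alpha, which performs the exact
# computation and returns a short textual answer that an LLM such as ChatGPT
# could fold back into its conversation.
WOLFRAM_APPID = os.environ["WOLFRAM_APPID"]  # placeholder: your own Wolfram|Alpha app ID

def ask_wolfram(query: str) -> str:
    """Send a natural-language query to the Wolfram|Alpha Short Answers API."""
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": WOLFRAM_APPID, "i": query},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    # Only the hand-off is shown here; the LLM side would decide when to call it.
    print(ask_wolfram("What is the distance from Earth to Saturn today?"))

The point of the sketch is simply that the hand-off is natural language in and plain text out, which is exactly why the two systems can talk to each other.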
4/13/2023
ChemCrow: Augmenting large-language models with chemistry tools
Large-language models (LLMs) have recently shown strong performance in tasks across domains, but struggle with chemistry-related problems. Moreover, these models lack access to external knowledge sources, limiting their usefulness in scientific applications. In this study, we introduce ChemCrow, an LLM chemistry agent designed to accomplish tasks across organic synthesis, drug discovery, and materials design. By integrating 13 expert-designed tools, ChemCrow augments the LLM performance in chemistry, and new capabilities emerge. Our evaluation, including both LLM and expert human assessments, demonstrates ChemCrow’s effectiveness in automating a diverse set of chemical tasks. Surprisingly, we find that GPT-4 as an evaluator cannot distinguish between clearly wrong GPT-4 completions and GPT-4 + ChemCrow performance. There is a significant risk of misuse of tools like ChemCrow and we discuss their potential harms. Employed responsibly, ChemCrow not only aids expert chemists and lowers barriers for non-experts, but also fosters scientific advancement by bridging the gap between experimental and computational chemistry.
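As a rough illustration of the tool-augmentation pattern the abstract describes, the Python sketch below shows an LLM-style agent dispatching a toy “Action: tool[input]” directive to a registry of domain tools; the tool, its data, and the directive format are invented for illustration and are not ChemCrow’s actual implementation.

from typing import Callable, Dict

def name_to_smiles(name: str) -> str:
    # Stand-in for a real chemistry lookup tool (e.g., a database query).
    return {"aspirin": "CC(=O)OC1=CC=CC=C1C(=O)O"}.get(name.lower(), "unknown")

TOOLS: Dict[str, Callable[[str], str]] = {"name_to_smiles": name_to_smiles}

def run_agent_step(llm_output: str) -> str:
    """If the model asked for a tool, run it and return an observation; otherwise pass the answer through."""
    if llm_output.startswith("Action:"):
        body = llm_output[len("Action:"):].strip()
        tool_name, _, arg = body.partition("[")
        result = TOOLS[tool_name.strip()](arg.rstrip("]"))
        return f"Observation: {result}"  # would be fed back to the LLM as context
    return llm_output  # final answer, no tool call requested

print(run_agent_step("Action: name_to_smiles[aspirin]"))  # Observation: CC(=O)OC1=CC=CC=C1C(=O)O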
Submission history
From: Andrew White [view email]
[v1] Tue, 11 Apr 2023 17:41:13 UTC (13,130 KB)
[v2] Wed, 12 Apr 2023 15:14:31 UTC (13,681 KB)
For ChatGPT applied to Medicine
Go To
https://pharmaceuticalintelligence.com/medicine-w-gpt-4-chatgpt/
- ChatGPT applied to Cardiovascular diseases: Diagnosis and Management
- ChatGPT applied to Cancer & Oncology
- ChatGPT applied to Medical Imaging & Radiology
ChatGPT + Wolfram PlugIn applied to LPBI Group IT needs
- 2.0 LPBI Group’s Mission #5 – An AI Concept/Plan for Launch:
A journal-article UPDATING system powered by OpenAI’s ChatGPT with Wolfram’s plugin, which sends updates to Twitter and then adds each update to the corresponding article at its own URL.
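A highly speculative sketch of that pipeline in Python, assuming the 2023-era openai SDK and tweepy for the Twitter step; every key, model name, and prompt below is a placeholder, and both the Wolfram plugin and the CMS step that appends the update to the article are left out because they were not reachable through public APIs.

import openai
import tweepy

openai.api_key = "YOUR_OPENAI_KEY"  # placeholder
twitter = tweepy.Client(
    consumer_key="...", consumer_secret="...",
    access_token="...", access_token_secret="...",
)  # placeholder credentials

def draft_update(article_title: str) -> str:
    """Ask the chat model for a one-sentence research update on an article topic."""
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": f"Write a one-sentence research update for the article: {article_title}"}],
    )
    return resp.choices[0].message.content.strip()

def publish_update(article_title: str, article_url: str) -> None:
    update = draft_update(article_title)
    twitter.create_tweet(text=f"{update} {article_url}")  # tweet links back to the article
    # Appending the update to the article at its own URL would go through the
    # site's own CMS API, which is not shown here.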
Resources for 2.0 LPBI Group Mission #5:
#1:
ChatGPT will CHANGE MEDICINE FOREVER! Here is how
#2:
ChatGPT Gets Its “Wolfram Superpowers”!
https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/
#3:
CHATGPT + WOLFRAM – THE FUTURE OF AI!
#4:
Stephen Wolfram on AI’s rapid progress & the “Post-Knowledge Work Era” | E1711
https://www.youtube.com/watch?v=F5tXWmCJ_wo
#5:
GPT-4 & Large Language Models (LLM)
GPT-4 Creator Ilya Sutskever
#6:
Sparks of AGI: early experiments with GPT-4
Bubeck notes that GPT-4 lacks a working memory and is hopeless at planning ahead. “GPT-4 is not good at this, and maybe large language models in general will never be good at it,” he says, referring to the large-scale machine learning algorithms at the heart of systems like GPT-4. “If you want to say that intelligence is planning, then GPT-4 is not intelligent.” [Sébastien Bubeck, Microsoft Research]
GPT-4 is remarkable but quite different from human intelligence in a number of ways. For instance, it lacks the kind of motivation that is crucial to the human mind. “It doesn’t care if it’s turned off,” Tenenbaum says. And he says humans do not simply follow their programming but invent new goals for themselves based on their wants and needs [Josh Tenenbaum, MIT]
Some Glimpse AGI in ChatGPT. Others Call It a Mirage | WIRED
https://www.wired.com/story/chatgpt-agi-intelligence/?bxid=5be9cb353f92a40469de971e
#7:
Google invests $300 million in Anthropic as race to compete with ChatGPT heats up
https://venturebeat.com/ai/google-invests-300-million-in-anthropic-as-race-to-compete-with-chatgpt-heats-up/
#8:
Review: We put ChatGPT, Bing Chat, and Bard to the test