
Leaders in Pharmaceutical Business Intelligence (LPBI) Group

Funding, Deals & Partnerships: BIOLOGICS & MEDICAL DEVICES; BioMed e-Series; Medicine and Life Sciences Scientific Journal – http://PharmaceuticalIntelligence.com


ChatGPT + Wolfram PlugIn

Breaking News

5/2/2023

ARTIFICIAL INTELLIGENCE

Geoffrey Hinton tells us why he’s now scared of the tech he helped build

“I have suddenly switched my views on whether these things are going to be more intelligent than us.”

By Will Douglas Heaven, May 2, 2023
https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/?truid=edf020ada5f25f6d6c4b0b32ac4a1ee9&utm_source=the_download&utm_medium=email&utm_campaign=the_download.unpaid.engagement&utm_term=&utm_

4/28/2023

Inside OpenAI [Entire Talk]

Apr 26, 2023

Ilya Sutskever is the co-founder and chief scientist of OpenAI, which aims to build artificial general intelligence that benefits all of humanity. He leads research at OpenAI and is one of the architects behind the GPT models. In this conversation with Stanford adjunct lecturer Ravi Belani, Sutskever explains his approach to making complex decisions at OpenAI and for AI companies in general, and makes predictions about the future of deep learning.

Stanford eCorner content is produced by the Stanford Technology Ventures Program. At STVP, we empower aspiring entrepreneurs to become global citizens who create and scale responsible innovations.

https://www.youtube.com/watch?v=Wmo2vR7U9ck

4/24/2023

Fireside Chat: With Ilya Sutskever and Jensen Huang, Nvidia CEO: AI Today and Vision of the Future (March 2023)

https://www.youtube.com/watch?v=XjSUJUL9ADw

4/23/2023

What Is ChatGPT Doing and Why Does It Work?

In this article, we will explain how ChatGPT works and why it is able to produce coherent and diverse conversations.

By Stephen Wolfram, Founder & CEO at Wolfram Research on April 18, 2023 in Natural Language Processing

https://www.kdnuggets.com/2023/04/chatgpt-work.html

 

AutoGPT: Everything You Need To Know

Just when we got our heads around ChatGPT, another one came along. AutoGPT is an experimental open-source application that pushes the capabilities of the GPT-4 language model.

By Nisha Arya, KDnuggets on April 14, 2023 in Artificial Intelligence

https://www.kdnuggets.com/2023/04/autogpt-everything-need-know.html

 

6 ChatGPT mind-blowing extensions to use anywhere

And how to make ChatGPT our daily assistant using them.

By Josep Ferrer, Data Analyst and Project Manager at Catalan Tourist Board on April 13, 2023 in Artificial Intelligence

https://www.kdnuggets.com/2023/04/6-chatgpt-mindblowing-extensions-anywhere.html

4/19/2023

From: Arnon <Publications@arnontl.com>
Reply-To: <Publications@arnontl.com>
Date: Wednesday, April 19, 2023 at 8:49 AM
To: Aviva Lev-Ari <avivalev-ari@alum.Berkeley.edu>
Subject: Legal Considerations in Building an AI Business #1

  1. Generally Applicable Laws – Aside from AI-specific regulations, existing laws that govern particular uses or industries will continue to apply. These include, but are not limited to, consumer protection regulations, laws governing medical device safety, intellectual property law, and privacy law.
  2. Responsible AI Principles – Key principles for ethical and responsible use of AI include:
  • Fairness and Equality – There is a tendency to think that algorithms will be more neutral and less biased than humans, but this is not always the case. Biases can be introduced into algorithms by the data used to train them or by the way they are set up. If the dataset used to train the algorithm reflects existing social biases, the algorithm will reinforce those same biases. Studies have also shown that a homogeneous group of developers may unintentionally develop biased algorithms. Mechanisms should be instituted to identify biases that may have been introduced, to ensure that AI systems are not discriminatory, and to ensure that each system is fit for and achieves its designated purpose.
  • Robustness, reliability, and safety – Adequate testing and oversight should be in place in order to ensure safety and reliability. AI systems should be traceable and should allow for a human to be kept in the decision-making loop where possible. More practical action items include: (i) executing impact assessments before embarking on AI development; (ii) applying data governance and management policies for AI programs; (iii) appointing human oversight to review AI systems regularly, including designing AI systems so that they can be monitored and do not become “black boxes” (see the transparency principle below); (iv) ensuring that expected errors can be corrected competently and quickly; (v) ensuring the AI system is secure, which relates to data security measures.
  • Privacy – Privacy considerations should be taken into account, both in terms of the source of the data on which AI systems are trained and the manner in which AI systems are used. Compliance with privacy laws should be ensured at all times.
  • Accountability – The OECD refers to AI accountability as follows: “organisations or individuals will ensure the proper functioning, throughout their lifecycle, of the AI systems that they design, develop, operate or deploy, in accordance with their roles and applicable regulatory frameworks, and for demonstrating this through their actions and decision-making process.”
  • Transparency, explainability, and interpretability – Where people are affected by the conclusion reached by an AI system, it is important that they understand that the decision was made by an AI system and how the AI was developed and used in the specific context. Companies using AI should be transparent with end users, disclose the use of AI (such as in a chatbot), and work to ensure that AI systems do not become “black boxes”: it should be possible to explain why the output is the way it is. Being able to explain how the AI is built is essential for enabling users to trust the output and can also allow for proper oversight. For example, greater transparency into how AI systems are built and trained can provide more oversight of biases that may be introduced, can allow for better review of the safety and robustness of AI systems, and can ensure that the right parties remain responsible for the functioning of the AI.
  3. Using AI in Practice – As AI systems take center stage and the use of AI becomes more regulated, it is essential that companies are aware of legal and regulatory requirements early in the development of their products. As mentioned above, in the next article we intend to propose concrete steps that enable companies to design and develop safer, compliant AI systems.

4/15/2023

Beyond ChatGPT: Stuart Russell on the Risks and Rewards of A.I.

The Commonwealth Club of California

Stuart Russell, Professor of Computer Science,

  • Director of the Kavli Center for Ethics, Science, and the Public,
  • Director of the Center for Human-Compatible AI, University of California, Berkeley;
  • Author, Human Compatible: Artificial Intelligence and the Problem of Control

https://youtu.be/ow3XrwTmFA8

 

The A.I. Dilemma – March 9, 2023

CENTER FOR HUMANE TECHNOLOGY

https://www.humanetech.com/

Apr 5, 2023

Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions to a post-A.I. world. This presentation is from a private gathering in San Francisco on March 9th with leading technologists and decision-makers with the ability to influence the future of large-language model A.I.s. This presentation was given before the launch of GPT-4. We encourage viewers to consider calling their political representatives to advocate for holding hearings on AI risk and creating adequate guardrails. For the podcast version, please visit: https://www.humanetech.com/podcast/th…

https://youtu.be/xoVJKj8lcNQ

 

4/14/2023

The combination of ChatGPT and Wolfram|Alpha could be very powerful, in the words of Stephen Wolfram:

Source: https://writings.stephenwolfram.com/2023/01/wolframalpha-as-the-way-to-bring-computational-knowledge-superpowers-to-chatgpt/

ChatGPT and Wolfram|Alpha

It’s always amazing when things suddenly “just work”. It happened to us with Wolfram|Alpha back in 2009. It happened with our Physics Project in 2020. And it’s happening now with OpenAI’s ChatGPT.

I’ve been tracking neural net technology for a long time (about 43 years, actually). And even having watched developments in the past few years I find the performance of ChatGPT thoroughly remarkable. Finally, and suddenly, here’s a system that can successfully generate text about almost anything—that’s very comparable to what humans might write. It’s impressive, and useful. And, as I discuss elsewhere, I think its success is probably telling us some very fundamental things about the nature of human thinking.

But while ChatGPT is a remarkable achievement in automating the doing of major human-like things, not everything that’s useful to do is quite so “human like”. Some of it is instead more formal and structured. And indeed one of the great achievements of our civilization over the past several centuries has been to build up the paradigms of mathematics, the exact sciences—and, most importantly, now computation—and to create a tower of capabilities quite different from what pure human-like thinking can achieve.

I myself have been deeply involved with the computational paradigm for many decades, in the singular pursuit of building a computational language to represent as many things in the world as possible in formal symbolic ways. And in doing this my goal has been to build a system that can “computationally assist”—and augment—what I and others want to do. I think about things as a human. But I can also immediately call on Wolfram Language and Wolfram|Alpha to tap into a kind of unique “computational superpower” that lets me do all sorts of beyond-human things.

It’s a tremendously powerful way of working. And the point is that it’s not just important for us humans. It’s equally, if not more, important for human-like AIs as well—immediately giving them what we can think of as computational knowledge superpowers, that leverage the non-human-like power of structured computation and structured knowledge.

We’ve just started exploring what this means for ChatGPT. But it’s pretty clear that wonderful things are possible. Wolfram|Alpha does something very different from ChatGPT, in a very different way. But they have a common interface: natural language. And this means that ChatGPT can “talk to” Wolfram|Alpha just like humans do—with Wolfram|Alpha turning the natural language it gets from ChatGPT into precise, symbolic computational language on which it can apply its computational knowledge power.
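To make that hand-off concrete: the sketch below routes a natural-language question, as ChatGPT would phrase it, to Wolfram|Alpha's public Short Answers API. This is an illustration of the pattern Wolfram describes, not the actual plugin code; WOLFRAM_APPID is a placeholder credential you would obtain from developer.wolframalpha.com.

```python
# Sketch of the ChatGPT -> Wolfram|Alpha hand-off: a natural-language query
# is forwarded verbatim to Wolfram|Alpha, which computes an exact answer.
# Illustrative only; WOLFRAM_APPID is a placeholder credential.
import urllib.parse
import urllib.request

WOLFRAM_APPID = "YOUR_APPID"  # placeholder; not a real app ID

def ask_wolfram_alpha(natural_language_query: str) -> str:
    """Send a natural-language question to the Wolfram|Alpha Short Answers API."""
    url = ("https://api.wolframalpha.com/v1/result?"
           + urllib.parse.urlencode({"appid": WOLFRAM_APPID,
                                     "i": natural_language_query}))
    with urllib.request.urlopen(url) as response:
        return response.read().decode()

# In the plugin pattern, the LLM decides when a question needs exact
# computation and forwards it, e.g.:
print(ask_wolfram_alpha("distance from the Earth to the Moon in km"))
```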

4/13/2023

ChemCrow: Augmenting large-language models with chemistry tools

Andres M Bran, Sam Cox, Andrew D White, Philippe Schwaller

Large-language models (LLMs) have recently shown strong performance in tasks across domains, but struggle with chemistry-related problems. Moreover, these models lack access to external knowledge sources, limiting their usefulness in scientific applications. In this study, we introduce ChemCrow, an LLM chemistry agent designed to accomplish tasks across organic synthesis, drug discovery, and materials design. By integrating 13 expert-designed tools, ChemCrow augments the LLM performance in chemistry, and new capabilities emerge. Our evaluation, including both LLM and expert human assessments, demonstrates ChemCrow’s effectiveness in automating a diverse set of chemical tasks. Surprisingly, we find that GPT-4 as an evaluator cannot distinguish between clearly wrong GPT-4 completions and GPT-4 + ChemCrow performance. There is a significant risk of misuse of tools like ChemCrow and we discuss their potential harms. Employed responsibly, ChemCrow not only aids expert chemists and lowers barriers for non-experts, but also fosters scientific advancement by bridging the gap between experimental and computational chemistry.

Subjects: Chemical Physics (physics.chem-ph); Machine Learning (stat.ML)
Cite as: arXiv:2304.05376 [physics.chem-ph]
(or arXiv:2304.05376v2 [physics.chem-ph] for this version)
https://doi.org/10.48550/arXiv.2304.05376

Submission history

From: Andrew White [view email]
[v1] Tue, 11 Apr 2023 17:41:13 UTC (13,130 KB)
[v2] Wed, 12 Apr 2023 15:14:31 UTC (13,681 KB)
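The architecture the abstract describes, an LLM that decides which of several expert tools to call and folds the results back into its reasoning, can be sketched as a simple agent loop. The tool stubs and the CALL-parsing convention below are invented stand-ins for illustration, not ChemCrow's actual 13 tools or prompt format:

```python
# Illustrative sketch of a tool-augmented LLM agent in the ChemCrow style.
# The tools are stubs and the dispatch protocol is invented for illustration.
from typing import Callable

def molecular_weight(smiles: str) -> str:
    return f"(stub) molecular weight of {smiles}"   # a real tool might call RDKit

def literature_search(query: str) -> str:
    return f"(stub) top papers for '{query}'"

TOOLS: dict[str, Callable[[str], str]] = {
    "molecular_weight": molecular_weight,
    "literature_search": literature_search,
}

def run_agent(llm: Callable[[str], str], task: str, max_steps: int = 5) -> str:
    """ReAct-style loop: at each step the LLM either names a tool or answers."""
    transcript = f"Task: {task}\nAvailable tools: {', '.join(TOOLS)}\n"
    for _ in range(max_steps):
        reply = llm(transcript)                 # e.g. "CALL molecular_weight: CCO"
        if reply.startswith("CALL "):
            name, _, arg = reply[len("CALL "):].partition(": ")
            transcript += f"{reply}\nObservation: {TOOLS[name](arg)}\n"
        else:
            return reply                        # final answer from the LLM
    return "stopped: step limit reached"
```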

For ChatGPT applied to Medicine, go to:

https://pharmaceuticalintelligence.com/medicine-w-gpt-4-chatgpt/

  • ChatGPT applied to Cardiovascular diseases: Diagnosis and Management
  • ChatGPT applied to Cancer & Oncology
  • ChatGPT applied to Medical Imaging & Radiology

ChatGPT + Wolfram PlugIn applied to LPBI Group IT needs

  • 2.0 LPBI Group’s Mission #5 – An AI Concept/Plan for Launch:

A journal-article UPDATING System powered by OpenAI's ChatGPT with the Wolfram plugin: the system sends each update to Twitter, then adds it to the corresponding article at its own URL (a minimal sketch of this pipeline follows).
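Purely as a concept sketch, one way such a pipeline could be wired together is shown below, assuming the 2023-era openai Python SDK, tweepy for the Twitter API, and the WordPress REST API; all credentials, IDs, and prompts are placeholders, not LPBI's actual implementation:

```python
# Hypothetical Mission #5 pipeline: draft an update with ChatGPT, tweet it,
# then append it to the article at its own URL via the WordPress REST API.
# All credentials and identifiers below are placeholders.
import openai    # assumes the 0.x-era SDK (openai.ChatCompletion)
import tweepy
import requests

openai.api_key = "OPENAI_API_KEY"                  # placeholder
WP_SITE = "https://pharmaceuticalintelligence.com"
WP_AUTH = ("wp_user", "wp_app_password")           # placeholder WordPress login

def draft_update(title: str, excerpt: str) -> str:
    """Ask ChatGPT for a short update note on the article's topic."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You write short scientific update notes."},
            {"role": "user", "content": f"Write a 2-3 sentence update for the "
                                        f"article '{title}': {excerpt}"},
        ],
    )
    return response.choices[0].message.content.strip()

def tweet_update(update: str, article_url: str) -> None:
    """Post the update to Twitter with a link back to the article."""
    client = tweepy.Client(
        consumer_key="...", consumer_secret="...",       # placeholders
        access_token="...", access_token_secret="...",
    )
    client.create_tweet(text=f"{update[:200]} {article_url}")

def append_update(post_id: int, old_html: str, update: str) -> None:
    """Append the update to the article body via the WordPress REST API."""
    requests.post(
        f"{WP_SITE}/wp-json/wp/v2/posts/{post_id}",
        auth=WP_AUTH,
        json={"content": f"{old_html}\n<p><strong>UPDATE:</strong> {update}</p>"},
    ).raise_for_status()
```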

 

Resources for 2.0 LPBI Group Mission #5:

#1:

ChatGPT will CHANGE MEDICINE FOREVER!  Here is how

https://youtu.be/72EmsOAIjoU

#2:

ChatGPT Gets Its “Wolfram Superpowers”!

March 23, 2023

https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/

#3:

CHATGPT + WOLFRAM – THE FUTURE OF AI!

https://youtu.be/z5WZhCBRDpU

#4:

Stephen Wolfram on AI’s rapid progress & the “Post-Knowledge Work Era” | E1711

https://www.youtube.com/watch?v=F5tXWmCJ_wo

#5:

GPT-4 & Large Language Models (LLM)

GPT-4 Creator Ilya Sutskever

https://youtu.be/SjhIlw3Iffs

#6:

Sparks of AGI: early experiments with GPT-4

https://youtu.be/qbIk7-JPB2c

Sébastien Bubeck of Microsoft Research, lead author of the “Sparks of AGI” paper, notes that GPT-4 lacks a working memory and is hopeless at planning ahead. “GPT-4 is not good at this, and maybe large language models in general will never be good at it,” he says, referring to the large-scale machine learning algorithms at the heart of systems like GPT-4. “If you want to say that intelligence is planning, then GPT-4 is not intelligent.”

GPT-4 is remarkable but quite different from human intelligence in a number of ways. For instance, it lacks the kind of motivation that is crucial to the human mind. “It doesn’t care if it’s turned off,” says Josh Tenenbaum of MIT. And he says humans do not simply follow their programming but invent new goals for themselves based on their wants and needs.

Some Glimpse AGI in ChatGPT. Others Call It a Mirage | WIRED

https://www.wired.com/story/chatgpt-agi-intelligence/?bxid=5be9cb353f92a40469de971e

#7:

Google invests $300 million in Anthropic as race to compete with ChatGPT heats up
https://venturebeat.com/ai/google-invests-300-million-in-anthropic-as-race-to-compete-with-chatgpt-heats-up/

#8:

Review: We put ChatGPT, Bing Chat, and Bard to the test

