
Future of Big Data for Societal Transformation
Larry H. Bernstein, MD, FCAP, Curator
LPBI
Musk, others commit $1 billion to non-profit AI research company to ‘benefit humanity’

Elon Musk and associates announced OpenAI, a non-profit AI research company, on Friday (Dec. 11), committing $1 billion toward their goal to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”
The funding comes from a group of tech leaders including Musk, Reid Hoffman, Peter Thiel, and Amazon Web Services, but the venture expects to only spend “a tiny fraction of this in the next few years.”
The founders note that it’s hard to predict how much AI could “damage society if built or used incorrectly” or how soon. But the hope is to have a leading research institution that can “prioritize a good outcome for all over its own self-interest … as broadly and evenly distributed as possible.”
Brains trust
OpenAI’s co-chairs are Musk, who is also the principal funder of Future of Life Institute, and Sam Altman, president of venture-capital seed-accelerator firm Y Combinator, who is also providing funding.
“I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.” — Elon Musk on Medium
The founders say the organization’s patents (if any) “will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.”
OpenAI’s research director is machine learning expert Ilya Sutskever, formerly at Google, and its CTO is Greg Brockman, formerly the CTO of Stripe. The group’s other founding members are “world-class research engineers and scientists” Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba. Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka are advisors to the group. The company will be based in San Francisco.
If I’m Dr. Evil and I use it, won’t you be empowering me?
“There are a few different thoughts about this. Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails or if Dr. Evil gets that one thing and there is nothing to counteract it, then we’re really in a bad place.” — Sam Altman in an interview with Steven Levy on Medium.
The announcement follows recent announcements by Facebook to open-source the hardware design of its GPU-based “Big Sur” AI server (used for large-scale machine learning software to identify objects in photos and understand natural language, for example); by Google to open-source its TensorFlow machine-learning software; and by Toyota Corporation to invest $1 billion in a five-year private research effort in artificial intelligence and robotics technologies, jointly with Stanford University and MIT.
To follow OpenAI: @open_ai or info@openai.com
Spot on, Elon! The threat is real, and current developments are unfortunately pointing in exactly that direction: AI will be controlled by a handful of big and powerful corporations. Not surprisingly, none of those players are part of the OpenAI movement.
I like the sentiment of AI for all and for the common good; at one level it seems doable, but at another it seems problematic at the scale of nation states and multinational entities.
If we all have AI systems then it will be those with control of the most energy to run their AI who will have the most influence, and that could be a “Dr. Evil”. It is the sum total of computing power on any given side of a conflict that will determine the outcome, if AI is a significant factor at all.
We could see bigger players weighing strategic questions such as: do they act now, or wait and put more resources into advancing the power of their AI so that they have better odds later, at the risk of falling to a preemptive attack? Given this sort of thing, I don’t see that AI will be a game changer or a leveller; rather, it could just fit into the existing arms-race scenarios, at least until one group crosses a singularity threshold and then accelerates away from the pack while holding everyone else back so that they cannot catch up.
No matter how I look at it, I always see the scenarios running in the opposite direction to diversity, toward a singular dominant entity that “roots” all the other AI, sensor, and actuator systems and then assimilates them.
How do they plan to stop this? How can one group of AIs have an ethical framework that allows them to “keep down” another group or single AI so that it does not get into a position to dominate them? How will this be any less messy than how the human super-powers have interacted in the last century?
I recommend the book “Superintelligence” by Nick Bostrom. Most thorough and penetrating. It covers many permutations of the intelligence explosion. The allegory at the beginning is worth the price alone.
Elon, for goodness sake, focus! Get the big battery factory working, get the space industry off the ground and America back in the ISS resupply and re-crew business, but enough with the non-profit expenditures already! Keep sinking your capital into non-profits like the Hyperloop, a beautiful, high-tech version of the old “I just know I can make trains profitable again outside of the northeast” dream, and this non-profit AI, and you’ll eventually go one financial step too far.
Both for your sake and for all of us who benefit from your efforts, consider this: at least change your attitude about profit, and keep the option open that this AI will bring some profit, even with the open-source aspect. This is a great effort, and I see you possibly becoming the “good AI” element that Ray writes about in his first essay in the essay section on this site. There, Ray is confident that the good people with AI will out-think the bad people with AI, and so good AI will prevail.