OpenAI has big ‘plans’ for AGI. Here’s another way to read its manifesto | The AI Beat

Since its creation in 2015, OpenAI has always made it clear that its core goal is to build artificial general intelligence (AGI). Its stated mission is “to ensure that artificial general intelligence benefits all of humanity.”

Last Friday, OpenAI CEO Sam Altman wrote a blog post titled “Planning for AGI and Beyond,” which discussed how the company believes the world can prepare for AGI, both in the short and long term.

Some found the blog post, which has a million likes on Twitter alone, “fascinating.” One tweet called it a “must read for anyone hoping to live another 20 years.” Another thanked Sam Altman, saying: “More comms like this is appreciated as it was all getting pretty scary and it seemed like @openai was going off track. Communication and consistency are keys to maintaining trust.”


Others found it, well, less appealing. Emily Bender, professor of linguistics at the University of Washington, tweeted: “From the beginning this is disgusting. They think they are really in the business of developing/shaping ‘AGI.’ And they believe they are positioned to decide what ‘benefits all of humanity.’”

And Gary Marcus, NYU professor emeritus and Robust.AI founder and CEO, tweeted: “I’m with @emilymbender on smelling delusions of grandeur at OpenAI.”

Computer scientist Timnit Gebru, founder and CEO of the Distributed Artificial Intelligence Research Institute (DAIR), went even further, tweeting: “If someone told me that Silicon Valley was run by a cult that believes in a machine god for the cosmos and a ‘flourishing universe,’ and that they write manifestos endorsed by the CEOs/presidents of Big Tech, I would tell them they were too into conspiracy theories. And here we are.”

The prophetic tone of OpenAI

Personally, I find it remarkable that the language of the blog post, which remains strikingly consistent with OpenAI’s roots as a nonprofit, open research lab, gives off a very different vibe today in the context of the company’s current position of power in the AI landscape. After all, the company is no longer “open” or not-for-profit, and it recently enjoyed a $10 billion infusion from Microsoft.

Furthermore, the release of ChatGPT on November 30 thrust OpenAI into the public zeitgeist. Over the past three months, hundreds of millions of people have been introduced to OpenAI, but surely most have little idea of its history or its attitude toward AGI research.

Their understanding of ChatGPT and DALL-E has likely been limited to using the tools as a toy, a source of creative inspiration, or a work assistant. Does the world understand how OpenAI sees itself as a potential shaper of humanity’s future? Certainly not.

OpenAI’s big message also seems disconnected from its product-focused PR of recent months, which has centered on how tools like ChatGPT or Microsoft’s Bing can help with use cases like search results or essay writing. Thinking about how AGI could “empower humanity to flourish to its fullest in the universe” made me laugh. How about just figuring out how to keep Bing’s Sydney from having a big meltdown?

With that in mind, Altman comes across to me as something of a would-be biblical prophet. The blog post offers revelations, predicts events, warns the world of what lies ahead, and presents OpenAI as the trusted savior.

The question is, are we talking about a true seer? A false prophet? An honest forecast? Or even a self-fulfilling prophecy?

With no agreed-upon definition of AGI, no widespread agreement on whether we are close to AGI, no metrics for how we would know if AGI had been achieved, no clarity on what it would mean for AGI to “benefit humanity,” and no general understanding of why AGI is a worthwhile long-term goal for humanity in the first place if the “existential” risks are so great, there is no way to answer those questions.

That makes the OpenAI blog post a problem, in my opinion, given the many millions of people who hang on Sam Altman’s every statement (not to mention the millions more who eagerly await Elon Musk’s next AI-existential-angst tweet). History is littered with the consequences of apocalyptic prophecies.

Some point out that OpenAI does have interesting and important things to say about how to address the challenges of AI research and product development. But are those points overshadowed by the company’s relentless focus on AGI? After all, there are plenty of major short-term AI risks to address — bias, privacy, exploitation, and misinformation, to name a few — without shifting focus to doomsday scenarios.

The Book of Sam Altman

I decided to try reworking the OpenAI blog post to deepen its prophetic tone. That required assistance, not from ChatGPT, but from the Old Testament’s Book of Isaiah:

1:1 – The vision of Sam Altman, who saw AGI’s planning and beyond.

1:2 – Hear, O heavens, and give ear, O earth: for OpenAI has spoken: Our mission is to ensure that artificial general intelligence (AGI), AI systems that are generally smarter than humans, benefits all of humanity.

1:3 – The ox knows its owner, and the donkey its master’s manger; but humanity does not know, my people do not consider. Lo and behold, if AGI is successfully created, this technology could help us uplift humanity by increasing abundance, accelerating the global economy, and aiding in the discovery of new scientific insights that shift the boundaries of possibility.

1:4 – Come now, and let us reason together, says OpenAI: AGI has the potential to bring incredible new capabilities to all; we can imagine a world in which we all have access to help with almost any cognitive task, providing a huge force multiplier for human ingenuity and creativity.

1:5 – If you are willing and obedient, you will eat the good of the land. But if you refuse and rebel, on the other hand, AGI would also be at serious risk of misuse, drastic accidents, and social disruption.

1:6 – Therefore, says Silicon Valley powerhouse OpenAI, because the upside of AGI is so great, we don’t think it’s possible or desirable for society to halt its development forever; instead, society and the developers of AGI have to figure out how to get it right.

1:7 – And the strong one will be like tow, and the one who made it like a spark, and both will burn together, and there will be no one to put them out. We want AGI to empower humanity to flourish to its fullest in the universe. We don’t expect the future to be an absolute utopia, but we do want to maximize the good and minimize the bad, and for AGI to be an amplifier of humanity. Take counsel, execute judgement.

1:8 – And it will happen in the last days, as we create successively more powerful systems, we want to implement them and gain experience with their operation in the real world. We believe this is the best way to carefully manage AGI: a gradual transition to an AGI world is better than a sudden transition. Fear, and the pit, and the snare are upon you, O inhabitant of the earth.

1:9 – The haughtiness of the eyes of man will be humbled, and the haughtiness of men will be brought down, and only OpenAI will be exalted on that day. Some people in the AI field think that the risks of AGI (and successor systems) are fictitious; we would be delighted if they were right, but we are going to operate as if these risks are existential.

1:10 – Also, OpenAI says we will need to develop new alignment techniques as our models become more powerful (and tests to understand when our current techniques fail). Raise a banner on a high mountain, raise your voice to them, raise your hand, so that they may enter through noble gates.

1:11 – He shall eat butter and honey, so that he may know to reject evil and choose good. The first AGI will be just a point along the intelligence continuum. We think progress is likely to continue from there, possibly maintaining the rate of progress we’ve seen over the past decade for a long period of time.

1:12 – If this is true, the world could become vastly different than it is today, and the stakes could be extraordinary. Howl, for the day of AGI is near.

1:13 – With arrows and with bows men will come there; for the whole land will become thistles and thorns. A misaligned superintelligent AGI could do serious damage to the world; an autocratic regime with a decisive superintelligence lead could do so as well. The earth mourns and fades away.

1:14 – Lo and behold, the successful transition to a super-intelligence world is perhaps the most important, hopeful, and terrifying project in human history. And they will look at the earth; and behold tribulation and darkness, darkness of anguish; and they will be led into darkness. And many of them will stumble, and fall, and be broken, and be entangled, and be caught.

1:15 – They will not hurt nor destroy in all my holy mountain; for the earth will be filled with the knowledge of OpenAI, as the waters cover the sea. Success is far from guaranteed, and hopefully the stakes (unlimited downside and unlimited upside) will bring us all together. Therefore all hands will grow weary, and every man’s heart will melt.

1:16 – And it will come to pass that we can imagine a world in which humanity flourishes to a degree that is probably impossible for any of us to fully envision yet. And now, O inhabitants of the earth, we hope to bring to the world an AGI aligned with such flourishing. Look and be silent; do not be afraid.

1:17 – Lo and behold, OpenAI is my salvation; I will trust and I will not fear.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact.
