The AI arms race to build ‘digital god’

In today's episode of Decoder, we're trying to find out about the "digital god." I figured we've been covering this long enough, so let's just get right to it. Can we build an artificial intelligence so powerful that it changes the world and answers all of our questions? The AI industry has decided the answer is yes.

In September, OpenAI's Sam Altman published a blog post claiming that we will have superintelligent AI in "a few thousand days." And earlier this month, Dario Amodei, the CEO of OpenAI competitor Anthropic, published a 14,000-word post outlining exactly what he thinks such a system will be capable of when it arrives, which he says could be as early as 2026.

What's fascinating is that the visions laid out in both posts are strikingly similar: both promise dramatically capable superintelligent AI that will bring massive improvements to work, science, and healthcare, and even to democracy and prosperity. Digital god, baby.

But while the visions are similar, the companies are openly at odds in many ways. Anthropic is the original OpenAI defection story: Dario and a cohort of fellow researchers left OpenAI in 2021 after becoming concerned about the company's increasingly commercial focus and its approach to safety, founding Anthropic to be a safer, slower AI company. And until recently, the focus really was on safety; just last year, a major New York Times profile called the company the "white-hot center of AI doomerism."

But the launch of ChatGPT and the generative AI boom that followed sparked a colossal tech arms race, and now Anthropic is as much in the game as anyone else. It has raised billions in funding, mostly from Amazon, and developed Claude, a chatbot and language model that rivals OpenAI's GPT-4. Now Dario writes long blog posts about spreading democracy with AI.

So what's going on here? Why is the Anthropic boss suddenly talking so optimistically about AI, when his company was previously seen as the safer, slower alternative to a full-speed-ahead OpenAI? Is this just more AI hype to woo investors? And if AGI really is around the corner, how can we even measure what it means for it to be safe?

To get into all of this, I brought on Verge senior AI reporter Kylie Robison to discuss what these manifestos mean, what's going on in the industry, and whether we can trust these AI leaders to tell us what they really think.

If you would like to learn more about some of the news and topics we discussed in this episode, check out the links below:

  • Machines of Loving Grace | Dario Amodei
  • The Intelligence Age | Sam Altman
  • Anthropic's CEO believes AI will lead to utopia | The Verge
  • AI manifestos flood the tech zone | Axios
  • OpenAI just raised $6.6 billion to build ever-larger AI models | The Verge
  • OpenAI was a research lab — now it's just another tech company | The Verge
  • Anthropic's latest AI update can use a computer on its own | The Verge
  • Agents are the future AI companies promise — and desperately need | The Verge
  • California governor vetoes key AI safety bill | The Verge
  • Inside the white-hot center of AI doomerism | NYT
  • The close partnership between Microsoft and OpenAI shows signs of unraveling | NYT
  • The $14 billion question separating OpenAI and Microsoft | WSJ
  • Anthropic has been valued at $40 billion in financing talks | The Information

Decoder with Nilay Patel

A podcast from The Verge about big ideas and other problems.
