#92 – Brian Christian on the alignment problem

2:55:45
 
By The 80,000 Hours Podcast with Rob Wiblin and the 80,000 Hours team.
Brian Christian is a bestselling author with a particular knack for accurately communicating difficult or technical ideas from both mathematics and computer science.
Listeners loved our episode about his book Algorithms to Live By — so when the team read his new book, The Alignment Problem, and found it to be an insightful and comprehensive review of the state of the research into making advanced AI useful and reliably safe, getting him back on the show was a no-brainer.
Brian has so much of substance to say that this episode will likely interest people who know a lot about AI as well as those who know a little, and people who are nervous about where AI is going as well as those who aren't nervous at all.
Links to learn more, summary and full transcript.
Here’s a tease of 10 Hollywood-worthy stories from the episode:
The Riddle of Dopamine: The development of reinforcement learning solves a long-standing mystery of how humans are able to learn from their experience.
ALVINN: A student teaches a military vehicle to drive between Pittsburgh and Lake Erie, without intervention, in the early 1990s, using a computer with a tenth the processing capacity of an Apple Watch.
Couch Potato: An agent trained to be curious is stopped in its quest to navigate a maze by a paralysing TV screen.
Pitts & McCulloch: A homeless teenager and his foster father figure invent the idea of the neural net.
Tree Senility: Agents become so good at living in trees to escape predators that they forget how to leave, starve, and die.
The Danish Bicycle: A reinforcement learning agent figures out that it can better achieve its goal by riding in circles as quickly as possible than by reaching its purported destination.
Montezuma's Revenge: By 2015 a reinforcement learner can play 60 different Atari games — the majority impossibly well — but can’t score a single point on one game humans find tediously simple.
Curious Pong: Two novelty-seeking agents, forced to play Pong against one another, create increasingly extreme rallies.
AlphaGo Zero: A computer program becomes superhuman at chess and Go in under a day by attempting to imitate itself.
Robot Gymnasts: Over the course of an hour, humans teach robots to do perfect backflips just by telling them which of two random actions looks more like a backflip.
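The Robot Gymnasts story refers to learning a reward function from pairwise human preferences, as in the 2017 backflip experiment by Christiano and colleagues that the book discusses. Here is a minimal sketch of the core idea using a Bradley-Terry preference model; the feature vectors, learning rate, and the simulated "human" labeller are all illustrative assumptions, not details from the episode:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each "clip" of robot behaviour is summarised by a
# 3-dimensional feature vector, and the hidden true reward is linear in it.
true_w = np.array([2.0, -1.0, 0.5])

def true_reward(clip):
    return clip @ true_w

def pref_prob(w, a, b):
    # Bradley-Terry model: probability a human prefers clip a over clip b,
    # as a logistic function of the difference in predicted rewards.
    return 1.0 / (1.0 + np.exp(-(a @ w - b @ w)))

# Learned reward weights, starting from zero.
w = np.zeros(3)
lr = 0.5

# Simulate preference labelling: show pairs of random clips, let the
# simulated "human" pick whichever has higher true reward, and take a
# gradient step on the cross-entropy loss of the Bradley-Terry model.
for _ in range(2000):
    a, b = rng.normal(size=3), rng.normal(size=3)
    label = 1.0 if true_reward(a) > true_reward(b) else 0.0
    p = pref_prob(w, a, b)
    grad = (p - label) * (a - b)  # gradient of cross-entropy w.r.t. w
    w -= lr * grad

# Check: the learned reward should rank fresh clips the way the true one does.
agree = 0
for _ in range(500):
    a, b = rng.normal(size=3), rng.normal(size=3)
    if (a @ w > b @ w) == (true_reward(a) > true_reward(b)):
        agree += 1
print(agree / 500)  # should be close to 1.0
```

The point of the technique is that the human never specifies a reward function or demonstrates a backflip; a few hundred binary comparisons are enough to recover a reward signal that an RL agent can then optimise.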
We also cover:
• How reinforcement learning actually works, and some of its key achievements and failures
• How a lack of curiosity can leave AIs unable to do basic things
• The pitfalls of getting AI to imitate how we ourselves behave
• The benefits of getting AI to infer what we must be trying to achieve
• Why it’s good for agents to be uncertain about what they're doing
• Why Brian isn’t that worried about explicit deception
• The interviewees Brian most agrees with, and most disagrees with
• Developments since Brian finished the manuscript
• The effective altruism and AI safety communities
• And much more
Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Sofia Davis-Fogel.
