80,000 Hours Podcast

By: The 80 000 Hours team

About this title

The most important conversations about artificial intelligence you won’t hear anywhere else. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin, Luisa Rodriguez, and Zershaaneh Qureshi. All rights reserved.
Episodes
  • A Ukraine ceasefire could accidentally set Europe up for a bigger war | RAND's top Russia expert Samuel Charap
    Mar 24 2026

    Many people believe a ceasefire in Ukraine will leave Europe safer. But today's guest lays out how a deal could generate insidious new risks — leaving us in a situation that's equally dangerous, just in different ways.

    That’s the counterintuitive argument from Samuel Charap, Distinguished Chair in Russia and Eurasia Policy at RAND. He’s not worried about a Russian blitzkrieg on Estonia. He forecasts instead a fragile peace that breaks down and drags in European neighbours; instability in Belarus prompting Russian intervention; hybrid sabotage operations that escalate through tit-for-tat responses.

    Samuel’s case isn’t that peace is bad, but that the Ukraine conflict has remilitarised Europe, made Russia more resentful, and collapsed diplomatic relations between the two. That’s a postwar environment primed for the kind of miscalculation that starts unintended wars.

    What he prescribes isn’t a full peace treaty; it’s a negotiated settlement that stops the killing and begins a longer negotiation that gives neither side exactly what it wants, but just enough to deter renewed aggression. Both sides stop dying and the flames of war fizzle — hopefully.

    None of this is clean or satisfying: Russia invaded, committed war crimes, and is being offered a path back to partial normalcy. But Samuel argues that the alternatives — indefinite war or unstructured ceasefire — are much worse for Ukraine, Europe, and global stability.


    Links to learn more, video, and full transcript: https://80k.info/sc26

    This episode was recorded on February 27, 2026.

    Chapters:

    • Cold open (00:00:00)
    • Could peace in Ukraine lead to Europe’s next war? (00:00:47)
    • Do Russia’s motives for war still matter? (00:11:41)
    • What does a good ceasefire deal look like? (00:17:38)
    • What’s still holding back a ceasefire (00:38:44)
    • Why Russia might accept Ukraine’s EU membership (00:46:00)
    • How to prevent a spiraling conflict with NATO (00:48:00)
    • What’s next for nuclear arms control (00:49:57)
    • Finland and Sweden strengthened NATO — but also raised the stakes for conflict (00:53:25)
    • Putin isn’t Hitler: How to negotiate with autocrats (00:56:35)
    • Why Russia still takes NATO seriously (01:02:01)
    • Neither side wants to fight this war again (01:10:49)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Transcripts and web: Nick Stockton, Elizabeth Cox, and Katy Moore

    1 hour and 12 minutes
  • Why automating human labour will break our political system | Rose Hadshar, Forethought
    Mar 17 2026

    The most important political question in the age of advanced AI might not be who wins elections. It might be whether elections continue to matter at all.

    That’s the view of Rose Hadshar, researcher at Forethought, who believes we could see extreme, AI-enabled power concentration without a coup or dramatic ‘end of democracy’ moment.

    She foresees something more insidious: an elite group with access to such powerful AI capabilities that the normal mechanisms for checking elite power — law, elections, public pressure, the threat of strikes — cease to have much effect. Those mechanisms could continue to exist on paper, but become ineffectual in a world where humans are no longer needed to execute even the largest-scale projects.

    Almost nobody wants this to happen — but we may find ourselves unable to prevent it.

    If AI disrupts our ability to make sense of things, will we even notice power getting severely concentrated, or be able to resist it? Once AI can substitute for human labour across the economy, what leverage will citizens have over those in power? And what does all of this imply for the institutions we’re relying on to prevent the worst outcomes?

    Rose has answers, and they’re not all reassuring.

    But she’s also hopeful we can make society more robust against these dynamics. We’ve got literally centuries of thinking about checks and balances to draw on. And there are some interventions she’s excited about — like building sophisticated AI tools for making sense of the world, or ensuring multiple branches of government have access to the best AI systems.

    Rose discusses all of this, and more, with host Zershaaneh Qureshi in today’s episode.

    Links to learn more, video, and full transcript: https://80k.info/rh

    This episode was recorded on December 18, 2025.

    Chapters:

    • Cold open (00:00:00)
    • Who's Rose Hadshar? (00:01:05)
    • Three dynamics that could reshape political power in the AI era (00:02:37)
    • AI gives small groups the productive power of millions (00:12:49)
    • Dynamic 1: When a software update becomes a power grab (00:20:41)
    • Dynamic 2: When AI labour means governments no longer need their citizens (00:31:20)
    • How democracy could persist in name but not substance (00:45:15)
    • Dynamic 3: When AI filters our reality (00:54:54)
    • Good intentions won't stop power concentration (01:08:27)
    • Slower-moving worlds could still get scary (01:23:57)
    • Why AI-powered tyranny will be tough to topple (01:31:53)
    • How power concentration compares to "gradual disempowerment" (01:38:18)
    • Some interventions are cross-cutting — and others could backfire (01:43:54)
    • What fighting back actually looks like (01:55:15)
    • Why power concentration researchers should avoid getting too "spicy" (02:04:10)
    • Why the "Manhattan Project" approach should worry you — but truly international projects might not be safe either (02:09:18)
    • Rose wants to keep humans around! (02:12:06)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcripts, and web: Nick Stockton and Katy Moore

    2 hours and 14 minutes
  • #238 – Sam Winter-Levy and Nikita Lalwani on how AGI won't end mutually assured destruction (probably)
    Mar 10 2026

    How AI interacts with nuclear deterrence may be the single most important question in geopolitics — one that may define the stakes of today’s AI race. Nuclear deterrence rests on a state’s capacity to respond to a nuclear attack with a devastating nuclear strike of its own. But some theorists think that sophisticated AI could eliminate this capability — for example, by locating and destroying all of an adversary’s nuclear weapons simultaneously, by disabling command-and-control networks, or by enhancing missile defence systems. If they are right, whichever country got those capabilities first could wield unprecedented coercive power.

    Today’s guests — Nikita Lalwani and Sam Winter-Levy of the Carnegie Endowment for International Peace — assess how advances in AI might threaten nuclear deterrence:

    • Would AI be able to locate nuclear submarines hiding in a vast, opaque ocean?
    • Would road-mobile launchers still be able to hide in tunnels and under netting?
    • Would missile defence become so accurate that the United States could be protected under something like Israel’s Iron Dome?
    • Can we imagine an AI cybersecurity breakthrough that would allow countries to infiltrate their rivals’ nuclear command-and-control networks?

    Yet even without undermining deterrence, Sam and Nikita claim that AI could make the nuclear world far more dangerous. It could spur arms races, encourage riskier postures, and force dangerously short response times. Their message is urgent: AI experts and nuclear experts need to start talking to each other now, before the technology makes any conversation moot.


    Links to learn more, video, and full transcript: https://80k.info/swlnl

    This episode was recorded on November 24, 2025.

    Chapters:

    • Cold open (00:00:00)
    • Who are Nikita Lalwani and Sam Winter-Levy? (00:01:03)
    • How nuclear deterrence actually works (00:01:46)
    • AI vs nuclear submarines (00:10:31)
    • AI vs road-mobile missiles (00:22:21)
    • AI vs missile defence systems (00:28:38)
    • AI vs nuclear command, control, and communications (NC3) (00:35:20)
    • AI won't break deterrence, but may trigger an arms race (00:43:27)
    • Technological supremacy isn't political supremacy (00:52:31)
    • Fast AI takeoff creates dangerous "windows of vulnerability" (00:56:43)
    • Book and movie recommendations (01:08:53)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcripts, and web: Nick Stockton and Katy Moore

    1 hour and 11 minutes
No reviews yet