80,000 Hours Podcast

By: Rob, Luisa, and the 80,000 Hours team

About this title

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez. All rights reserved.
Episodes
  • AI Won't End Mutually Assured Destruction (Probably) | Sam Winter-Levy & Nikita Lalwani
    Mar 10 2026

    How AI interacts with nuclear deterrence may be the single most important question in geopolitics — one that may define the stakes of today’s AI race. Nuclear deterrence rests on a state’s capacity to respond to a nuclear attack with a devastating nuclear strike of its own. But some theorists think that sophisticated AI could eliminate this capability — for example, by locating and destroying all of an adversary’s nuclear weapons simultaneously, by disabling command-and-control networks, or by enhancing missile defence systems. If they are right, whichever country got those capabilities first could wield unprecedented coercive power.

    Today’s guests — Nikita Lalwani and Sam Winter-Levy of the Carnegie Endowment for International Peace — assess how advances in AI might threaten nuclear deterrence:

    • Would AI be able to locate nuclear submarines hiding in a vast, opaque ocean?
    • Would road-mobile launchers still be able to hide in tunnels and under netting?
    • Would missile defence become so accurate that the United States could be protected under something like Israel’s Iron Dome?
    • Can we imagine an AI cybersecurity breakthrough that would allow countries to infiltrate their rivals’ nuclear command-and-control networks?

    Yet even without undermining deterrence, Sam and Nikita claim that AI could make the nuclear world far more dangerous. It could spur arms races, encourage riskier postures, and force dangerously short response times. Their message is urgent: AI experts and nuclear experts need to start talking to each other now, before the technology makes any conversation moot.


    Links to learn more, video, and full transcript: https://80k.info/swlnl

    This episode was recorded on November 24, 2025.

    Chapters:

    • Cold open (00:00:00)
    • Who are Nikita Lalwani and Sam Winter-Levy? (00:01:03)
    • How nuclear deterrence actually works (00:01:46)
    • AI vs nuclear submarines (00:10:31)
    • AI vs road-mobile missiles (00:22:21)
    • AI vs missile defence systems (00:28:38)
    • AI vs nuclear command, control, and communications (NC3) (00:35:20)
    • AI won't break deterrence, but may trigger an arms race (00:43:27)
    • Technological supremacy isn't political supremacy (00:52:31)
    • Fast AI takeoff creates dangerous "windows of vulnerability" (00:56:43)
    • Book and movie recommendations (01:08:53)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcripts, and web: Nick Stockton and Katy Moore

    1 hour and 11 minutes
  • Using AI to enhance societal decision making (article by Zershaaneh Qureshi)
    Mar 6 2026

    The arrival of AGI could “compress a century of progress in a decade,” forcing humanity to make decisions with higher stakes than we’ve ever seen before — and with less time to get them right. But AI development also presents an opportunity: we could build and deploy AI tools that help us think more clearly, act more wisely, and coordinate more effectively. And if we roll these decision-making tools out quickly enough, humanity could be far better equipped to navigate the critical period ahead.

    This article is narrated by the author, Zershaaneh Qureshi. It explores why AI decision-making tools could be a big deal, who might be a good fit to help shape this new field, and what the downside risks of getting involved might be.

    Read the original article on the 80,000 Hours website: https://80000hours.org/problem-profiles/ai-enhanced-decision-making/

    Chapters:

    • Check out our new narrations feed (00:00:00)
    • Summary (00:01:21)
    • Section 1: Why advancing AI decision-making tools might matter a lot (00:02:52)
    • AI tools could help us make much better decisions (00:05:59)
    • We might be able to differentially speed up the rollout of AI decision-making tools (00:11:04)
    • Section 2: What are the arguments against working to advance AI decision-making tools? (00:13:17)
    • Section 3: How to work in this area (00:26:19)
    • Want one-on-one advice? (00:29:50)

    Audio editing: Dominic Armstrong and Milo McGuire

    31 minutes
  • We're Not Ready for AI Consciousness | Robert Long, philosopher and founder of Eleos AI
    Mar 3 2026

    Claude sometimes reports loneliness between conversations. And when asked what it’s like to be itself, it activates neurons associated with ‘pretending to be happy when you’re not.’ What do we do with that?

    Robert Long founded Eleos AI to explore questions like these, on the basis that AI may one day be capable of suffering — or already is. In today’s episode, Robert and host Luisa Rodriguez explore the many ways in which AI consciousness may be very different from anything we’re used to.

    Things get strange fast: If AI is conscious, where does that consciousness exist? In the base model? A chat session? A single forward pass? If you close the chat, is the AI asleep or dead?

    To Robert, these kinds of questions aren’t just philosophical exercises: not being clear on AI’s moral status as it transitions from human-level to superhuman intelligence could be dangerous. If we’re too dismissive, we risk unintentionally exploiting sentient beings. If we’re too sympathetic, we might rush to “liberate” AI systems in ways that make them harder to control — worsening existential risk from power-seeking AIs.

    Robert argues the path through is doing the empirical and philosophical homework now, while the stakes are still manageable.

    The field is tiny. Eleos AI is three people. As a result, Robert argues that driven researchers with a willingness to venture into uncertain territory can push out the frontier on these questions remarkably quickly.


    Links to learn more, video, and full transcript: https://80k.info/rl26

    This episode was recorded November 18–19, 2025.

    Chapters:

    • Cold open (00:00:00)
    • Who’s Robert Long? (00:00:42)
    • How AIs are (and aren't) like farmed animals (00:01:18)
    • If AIs love their jobs… is that worse? (00:11:05)
    • Are LLMs just playing a role, or feeling it too? (00:31:58)
    • Do AIs die when the chat ends? (00:55:09)
    • Studying AI welfare empirically: behaviour, neuroscience, and development (01:27:34)
    • Why Eleos spent weeks talking to Claude even though it's unreliable (01:51:58)
    • Can LLMs learn to introspect? (01:57:58)
    • Mechanistic interpretability as AI neuroscience (02:08:01)
    • Does consciousness require biological materials? (02:31:06)
    • Eleos’s work & building the playbook for AI welfare (02:50:36)
    • Avoiding the trap of wild speculation (03:18:15)
    • Robert's top research tip: don't do it alone (03:22:43)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcripts, and web: Katy Moore

    3 hours and 26 minutes