80,000 Hours Podcast

By: Rob, Luisa, and the 80000 Hours team

About this title

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez. All rights reserved.
Episodes
  • #142 Classic episode – John McWhorter on why the optimal number of languages might be one, and other provocative claims about language
    Jan 6 2026

    John McWhorter is a linguistics professor at Columbia University specialising in research on creole languages. He's also a content-producing machine, never afraid to give his frank opinion on anything and everything. On top of his academic work, he's written 22 books, produced five online university courses, hosts one and a half podcasts, and now writes a regular New York Times op-ed column.

    Rebroadcast: this episode was originally released in December 2022.

    YouTube video version: https://youtu.be/MEd7TT_nMJE

    Links to learn more, video, and full transcript: https://80k.link/JM

    We ask him about what we think are the most important things everyone ought to know about linguistics, including:

    • Can you communicate faster in some languages than others, or is there some constraint that prevents that?
    • Does learning a second or third language make you smarter or not?
    • Can a language decay and get worse at communicating what people want to say?
    • If children aren't taught a language, how many generations does it take them to invent a fully fledged one of their own?
    • Did Shakespeare write in a foreign language, and if so, should we translate his plays?
    • How much does language really shape the way we think?
    • Are creoles the best languages in the world — languages that ideally we would all speak?
    • What would be the optimal number of languages globally?
    • Does trying to save dying languages do their speakers a favour, or is it more of an imposition?
    • Should we bother to teach foreign languages in UK and US schools?
    • Is it possible to save the important cultural aspects embedded in a dying language without saving the language itself?
    • Will AI models speak a language of their own in the future, one that humans can't understand but which better serves the tradeoffs AI models need to make?

    We’ve also added John’s talk “Why the World Looks the Same in Any Language” to the end of this episode. So stick around after the credits!

    Chapters:

    • Rob's intro (00:00:00)
    • Who's John McWhorter? (00:05:02)
    • Does learning another language make you smarter? (00:05:54)
    • Updating Shakespeare (00:07:52)
    • Should we bother teaching foreign languages in school? (00:12:09)
    • Language loss (00:16:05)
    • The optimal number of languages for humanity (00:27:57)
    • Do we reason about the world using language and words? (00:31:22)
    • Can we communicate meaningful information more quickly in some languages? (00:35:04)
    • Creole languages (00:38:48)
    • AI and the future of language (00:50:45)
    • Should we keep ums and ahs in The 80,000 Hours Podcast? (00:59:10)
    • Why the World Looks the Same in Any Language (01:02:07)

    Producer: Keiran Harris
    Audio mastering: Ben Cordell and Simon Monsour
    Video editing: Ryan Kessler and Simon Monsour
    Transcriptions: Katy Moore

    1 hour and 35 minutes
  • 2025 Highlight-o-thon: Oops! All Bests
    Dec 29 2025

    It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode we recorded this year, including:

    • Kyle Fish explaining how Anthropic’s AI Claude descends into spiritual woo when left to talk to itself
    • Ian Dunt on why the unelected House of Lords is by far the best part of the British government
    • Sam Bowman’s strategy to get NIMBYs to love it when things get built next to their houses
    • Buck Shlegeris on how to get an AI model that wants to seize control to accidentally help you foil its plans

    …as well as 18 other top observations and arguments from the past year of the show.

    Links to learn more, video, and full transcript: https://80k.info/best25

    It's been another year of living through history, whether we asked for it or not. Luisa and Rob will be back in 2026 to help you make sense of whatever comes next — as Earth continues its indifferent journey through the cosmos, now accompanied by AI systems that can summarise our meetings and generate adequate birthday messages for colleagues we barely know.

    Chapters:

    • Cold open (00:00:00)
    • Rob's intro (00:02:35)
    • Helen Toner on whether we're racing China to build AGI (00:03:43)
    • Hugh White on what he'd say to Americans (00:06:09)
    • Buck Shlegeris on convincing AI models they've already escaped (00:12:09)
    • Paul Scharre on a personal experience in Afghanistan that influenced his views on autonomous weapons (00:15:10)
    • Ian Dunt on how unelected septuagenarians are the heroes of UK governance (00:19:06)
    • Beth Barnes on AI companies being locally reasonable, but globally reckless (00:24:27)
    • Tyler Whitmer on one thing the California and Delaware attorneys general forced on the OpenAI for-profit as part of their restructure (00:28:02)
    • Toby Ord on whether rich people will get access to AGI first (00:30:13)
    • Andrew Snyder-Beattie on how the worst biorisks are defence dominant (00:34:24)
    • Eileen Yam on the most eye-watering gaps in opinions about AI between experts and the US public (00:39:41)
    • Will MacAskill on what a century of history crammed into a decade might feel like (00:44:07)
    • Kyle Fish on what happens when two instances of Claude are left to interact with each other (00:49:08)
    • Sam Bowman on where the Not In My Back Yard movement actually has a point (00:56:29)
    • Neel Nanda on how mechanistic interpretability is trying to be the biology of AI (01:03:12)
    • Tom Davidson on the potential to install secret AI loyalties at a very early stage (01:07:19)
    • Luisa and Rob discussing how medicine doesn't take the health burden of pregnancy seriously enough (01:10:53)
    • Marius Hobbhahn on why scheming is a very natural path for AI models — and people (01:16:23)
    • Holden Karnofsky on lessons for AI regulation drawn from successful farm animal welfare advocacy (01:21:29)
    • Allan Dafoe on how AGI is an inescapable idea but one we have to define well (01:26:19)
    • Ryan Greenblatt on the most likely ways for AI to take over (01:29:35)
    • Updates Daniel Kokotajlo has made to his forecasts since writing and publishing the AI 2027 scenario (01:32:47)
    • Dean Ball on why regulation invites path dependency, and that's a major problem (01:37:21)


    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcripts, and web: Katy Moore

    1 hour and 40 minutes
  • #232 – Andreas Mogensen on what we owe 'philosophical Vulcans' and unconscious beings
    Dec 19 2025

    Most debates about the moral status of AI systems circle the same question: is there something that it feels like to be them? But what if that’s the wrong question to ask? Andreas Mogensen — a senior researcher in moral philosophy at the University of Oxford — argues that so-called 'phenomenal consciousness' might be neither necessary nor sufficient for a being to deserve moral consideration.

    Links to learn more and full transcript: https://80k.info/am25

    For instance, a creature on the sea floor that experiences nothing but faint brightness from the sun might have no moral claim on us, despite being conscious.

    Meanwhile, any being with real desires that can be fulfilled or not fulfilled can arguably be benefited or harmed. Such beings arguably have a capacity for welfare, which means they might matter morally. And, Andreas argues, desire may not require subjective experience.

    Desire may need to be backed by positive or negative emotions — but as Andreas explains, there are some reasons to think a being could also have emotions without being conscious.

    There’s another underexplored route to moral patienthood: autonomy. If a being can rationally reflect on its goals and direct its own existence, we might have a moral duty to avoid interfering with its choices — even if it has no capacity for welfare.

    However, Andreas suspects genuine autonomy might require consciousness after all. To be a rational agent, your beliefs probably need to be justified by something, and conscious experience might be what does the justifying. But even this isn’t clear.

    The upshot? There’s a chance we could just be really mistaken about what it would take for an AI to matter morally. And with AI systems potentially proliferating at massive scale, getting this wrong could be among the largest moral errors in history.

    In today’s interview, Andreas and host Zershaaneh Qureshi confront all these confusing ideas, challenging their intuitions about consciousness, welfare, and morality along the way. They also grapple with a few seemingly attractive arguments which share a very unsettling conclusion: that human extinction (or even the extinction of all sentient life) could actually be a morally desirable thing.

    This episode was recorded on December 3, 2025.

    Chapters:

    • Cold open (00:00:00)
    • Introducing Zershaaneh (00:00:55)
    • The puzzle of moral patienthood (00:03:20)
    • Is subjective experience necessary? (00:05:52)
    • What is it to desire? (00:10:42)
    • Desiring without experiencing (00:17:56)
    • What would make AIs moral patients? (00:28:17)
    • Another route entirely: deserving autonomy (00:45:12)
    • Maybe there's no objective truth about any of this (01:12:06)
    • Practical implications (01:29:21)
    • Why not just let superintelligence figure this out for us? (01:38:07)
    • How could human extinction be a good thing? (01:47:30)
    • Lexical threshold negative utilitarianism (02:12:30)
    • So... should we still try to prevent extinction? (02:25:22)
    • What are the most important questions for people to address here? (02:32:16)
    • Is God GDPR compliant? (02:35:32)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Coordination, transcripts, and web: Katy Moore

    2 hours and 37 minutes