80,000 Hours Podcast

By: Rob, Luisa and the 80000 Hours team

About this title

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez. All rights reserved.
Episodes
  • What the hell happened with AGI timelines in 2025?
    Feb 10 2026

    In early 2025, after OpenAI put out the first-ever reasoning models — o1 and o3 — short timelines to transformative artificial general intelligence swept the AI world. But then, in the second half of 2025, sentiment swung all the way back in the other direction, with people's forecasts for when AI might really shake up the world blowing out even further than they had been before reasoning models came along.

    What the hell happened? Was it just swings in vibes and mood? Confusion? A series of fundamentally unexpected and unpredictable research results?

    Host Rob Wiblin has been trying to make sense of it for himself, and here's the best explanation he's come up with so far.

    Links to learn more, video, and full transcript: https://80k.info/tl

    Chapters:

    • Making sense of the timelines madness in 2025 (00:00:00)
    • The great timelines contraction (00:00:46)
    • Why timelines went back out again (00:02:10)
    • Other longstanding reasons AGI could take a good while (00:11:13)
    • So what's the upshot of all of these updates? (00:14:47)
    • 5 reasons the radical pessimists are still wrong (00:16:54)
    • Even long timelines are short now (00:23:54)

    This episode was recorded on January 29, 2026.

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Camera operator: Dominic Armstrong
    Coordination, transcripts, and web: Katy Moore

26 minutes
  • #179 Classic episode – Randy Nesse on why evolution left us so vulnerable to depression and anxiety
    Feb 3 2026

    Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young people. At any point in time, something like 20% of young people are working through anxiety or depression that’s seriously interfering with their lives — but nowhere near 20% of people in their 20s have severe heart disease or cancer or a similar failure in a key organ of the body other than the brain.

    From an evolutionary perspective, that’s to be expected, right? If your heart or lungs or legs or skin stop working properly while you’re a teenager, you’re less likely to reproduce, and the genes that cause that malfunction get weeded out of the gene pool.

    So why is it that these evolutionary selective pressures seemingly fixed our bodies so that they work pretty smoothly for young people most of the time, but it feels like evolution fell asleep on the job when it comes to the brain? Why did evolution never get around to patching the most basic problems, like social anxiety, panic attacks, debilitating pessimism, or inappropriate mood swings? For that matter, why did evolution go out of its way to give us the capacity for low mood or chronic anxiety or extreme mood swings at all?

    Today’s guest, Randy Nesse — a leader in the field of evolutionary psychiatry — wrote the book Good Reasons for Bad Feelings, in which he sets out to try to resolve this paradox.

    Rebroadcast: This episode originally aired in February 2024.

    Links to learn more, video, and full transcript: https://80k.info/rn

    In the interview, host Rob Wiblin and Randy discuss the key points of the book, as well as:

    • How the evolutionary psychiatry perspective can help people appreciate that their mental health problems are often the result of a useful and important system.
    • How evolutionary pressures and dynamics lead to a wide range of different personalities, behaviours, strategies, and tradeoffs.
    • The missing intellectual foundations of psychiatry, and how an evolutionary lens could revolutionise the field.
    • How working as both an academic and a practicing psychiatrist shaped Randy’s understanding of treating mental health problems.
    • The “smoke detector principle” of why we experience so many false alarms along with true threats.
    • The origins of morality and capacity for genuine love, and why Randy thinks it’s a mistake to try to explain these from a selfish gene perspective.
    • Evolutionary theories on why we age and die.
    • And much more.

    Chapters:

    • Cold Open (00:00:00)
    • Rob's Intro (00:00:55)
    • The interview begins (00:03:01)
    • The history of evolutionary medicine (00:03:56)
    • The evolutionary origin of anxiety (00:12:37)
    • Design tradeoffs, diseases, and adaptations (00:43:19)
• The trickier case of depression (00:48:57)
    • The purpose of low mood (00:54:08)
    • Big mood swings vs barely any mood swings (01:22:41)
    • Is mental health actually getting worse? (01:33:43)
    • A general explanation for bodies breaking (01:37:27)
    • Freudianism and the origins of morality and love (01:48:53)
    • Evolutionary medicine in general (02:02:42)
    • Objections to evolutionary psychology (02:16:29)
    • How do you test evolutionary hypotheses to rule out the bad explanations? (02:23:19)
    • Striving and meaning in careers (02:25:12)
    • Why do people age and die? (02:45:16)

    Producer and editor: Keiran Harris
    Audio Engineering Lead: Ben Cordell
    Technical editing: Dominic Armstrong
    Transcriptions: Katy Moore

2 hours and 51 minutes
  • Why 'Aligned AI' Would Still Kill Democracy | David Duvenaud, ex-Anthropic team lead
    Jan 27 2026

    Democracy might be a brief historical blip. That’s the unsettling thesis of a recent paper, which argues AI that can do all the work a human can do inevitably leads to the “gradual disempowerment” of humanity.

    For most of history, ordinary people had almost no control over their governments. Liberal democracy emerged only recently, and probably not coincidentally around the Industrial Revolution.

Today's guest, David Duvenaud, is a professor of computer science at the University of Toronto, former lead of the 'alignment evals' team at Anthropic, and a co-author of 'Gradual disempowerment.'

    Links to learn more, video, and full transcript: https://80k.info/dd

    He argues democracy wasn’t the result of moral enlightenment — it was competitive pressure. Nations that educated their citizens and gave them political power built better armies and more productive economies. But what happens when AI can do all the producing — and all the fighting?

    “The reason that states have been treating us so well in the West, at least for the last 200 or 300 years, is because they’ve needed us,” David explains. “Life can only get so bad when you’re needed. That’s the key thing that’s going to change.”

    In David’s telling, once AI can do everything humans can do but cheaper, citizens become a national liability rather than an asset. With no way to make an economic contribution, their only lever becomes activism — demanding a larger share of redistribution from AI production. Faced with millions of unemployed citizens turned full-time activists, democratic governments trying to retain some “legacy” human rights may find they’re at a disadvantage compared to governments that strategically restrict civil liberties.

But democracy is just one front. The paper argues humans will lose control through economic obsolescence, political marginalisation, and the effects of a culture increasingly shaped by machine-to-machine communication — even if every AI does exactly what it's told.

    This episode was recorded on August 21, 2025.

    Chapters:

    • Cold open (00:00:00)
    • Who’s David Duvenaud? (00:00:50)
    • Alignment isn’t enough: we still lose control (00:01:30)
    • Smart AI advice can still lead to terrible outcomes (00:14:14)
    • How gradual disempowerment would occur (00:19:02)
    • Economic disempowerment: Humans become "meddlesome parasites" (00:22:05)
    • Humans become a "criminally decadent" waste of energy (00:29:29)
    • Is humans losing control actually bad, ethically? (00:40:36)
    • Political disempowerment: Governments stop needing people (00:57:26)
    • Can human culture survive in an AI-dominated world? (01:10:23)
    • Will the future be determined by competitive forces? (01:26:51)
• Can we find a single good post-AGI equilibrium for humans? (01:34:29)
    • Do we know anything useful to do about this? (01:44:43)
    • How important is this problem compared to other AGI issues? (01:56:03)
    • Improving global coordination may be our best bet (02:04:56)
    • The 'Gradual Disempowerment Index' (02:07:26)
    • The government will fight to write AI constitutions (02:10:33)
    • “The intelligence curse” and Workshop Labs (02:16:58)
    • Mapping out disempowerment in a world of aligned AGIs (02:22:48)
    • What do David’s CompSci colleagues think of all this? (02:29:19)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Camera operator: Jake Morris
    Coordination, transcriptions, and web: Katy Moore

2 hours and 32 minutes