80,000 Hours Podcast

By: Rob, Luisa, and the 80000 Hours team

About this title

Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80000 Hours' wherever you get podcasts. Hosted by Rob Wiblin and Luisa Rodriguez. All rights reserved.
Episodes
  • Andreas Mogensen on what we owe 'philosophical Vulcans' and unconscious AIs
    Dec 19 2025

    Most debates about the moral status of AI systems circle the same question: is there something that it feels like to be them? But what if that’s the wrong question to ask? Andreas Mogensen — a senior researcher in moral philosophy at the University of Oxford — argues that so-called 'phenomenal consciousness' might be neither necessary nor sufficient for a being to deserve moral consideration.

    Links to learn more and full transcript: https://80k.info/am25

    For instance, a creature on the sea floor that experiences nothing but faint brightness from the sun might have no moral claim on us, despite being conscious.

    Meanwhile, any being with real desires that can be fulfilled or not fulfilled can arguably be benefited or harmed. Such beings arguably have a capacity for welfare, which means they might matter morally. And, Andreas argues, desire may not require subjective experience.

    Desire may need to be backed by positive or negative emotions — but as Andreas explains, there are some reasons to think a being could also have emotions without being conscious.

    There’s another underexplored route to moral patienthood: autonomy. If a being can rationally reflect on its goals and direct its own existence, we might have a moral duty to avoid interfering with its choices — even if it has no capacity for welfare.

    However, Andreas suspects genuine autonomy might require consciousness after all. To be a rational agent, your beliefs probably need to be justified by something, and conscious experience might be what does the justifying. But even this isn’t clear.

    The upshot? There’s a chance we could just be really mistaken about what it would take for an AI to matter morally. And with AI systems potentially proliferating at massive scale, getting this wrong could be among the largest moral errors in history.

    In today’s interview, Andreas and host Zershaaneh Qureshi confront all these confusing ideas, challenging their intuitions about consciousness, welfare, and morality along the way. They also grapple with a few seemingly attractive arguments which share a very unsettling conclusion: that human extinction (or even the extinction of all sentient life) could actually be a morally desirable thing.

    This episode was recorded on December 3, 2025.

    Chapters:

    • Cold open (00:00:00)
    • Introducing Zershaaneh (00:00:55)
    • The puzzle of moral patienthood (00:03:20)
    • Is subjective experience necessary? (00:05:52)
    • What is it to desire? (00:10:42)
    • Desiring without experiencing (00:17:56)
    • What would make AIs moral patients? (00:28:17)
    • Another route entirely: deserving autonomy (00:45:12)
    • Maybe there's no objective truth about any of this (01:12:06)
    • Practical implications (01:29:21)
    • Why not just let superintelligence figure this out for us? (01:38:07)
    • How could human extinction be a good thing? (01:47:30)
    • Lexical threshold negative utilitarianism (02:12:30)
    • So... should we still try to prevent extinction? (02:25:22)
    • What are the most important questions for people to address here? (02:32:16)
    • Is God GDPR compliant? (02:35:32)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Coordination, transcripts, and web: Katy Moore

    2 hours and 37 minutes
  • How AI could transform the nature of war | Paul Scharre, author of 'Army of None'
    Dec 17 2025
    In 1983, Stanislav Petrov, a Soviet lieutenant colonel, sat in a bunker watching a red screen flash “MISSILE LAUNCH.” The system told him the United States had fired five nuclear weapons at the Soviet Union. Protocol demanded he report it to superiors, which would almost certainly trigger a retaliatory strike.

    Petrov didn’t do it. He had a “funny feeling” in his gut. He reasoned that if the US were actually attacking, they wouldn’t just fire five missiles — they’d empty the silos. He bet the fate of the world on a hunch that the machine was broken. He was right.

    Paul Scharre, the former Army Ranger who led the Pentagon team that wrote the US military’s first policy on autonomous weapons, asks a terrifying question: what would an AI have done in Petrov’s shoes? Would an AI system have been flexible and wise enough to make the same judgement? Or would it have launched a counterattack?

    Paul joins host Luisa Rodriguez to explain why we are hurtling toward a “battlefield singularity” — a tipping point where AI increasingly replaces humans in much of the military, changing the way war is fought with speed and complexity that outpaces humans’ ability to keep up.

    Links to learn more, video, and full transcript: https://80k.info/ps

    Militaries don’t necessarily want to take humans out of the loop. But Paul argues that the competitive pressure of warfare creates a “use it or lose it” dynamic. As former Deputy Secretary of Defense Bob Work put it: “If our competitors go to Terminators, and their decisions are bad, but they’re faster, how would we respond?”

    Once that line is crossed, Paul warns we might enter an era of “flash wars” — conflicts that spiral out of control as quickly and inexplicably as a flash crash in the stock market, with no way for humans to call a timeout.

    In this episode, Paul and Luisa dissect what this future looks like:

    • Swarming warfare: Why the future isn’t just better drones, but thousands of cheap, autonomous agents coordinating like a hive mind to overwhelm defences.
    • The Gatling gun cautionary tale: The inventor of the Gatling gun thought automating fire would reduce the number of soldiers needed, saving lives. Instead, it made war significantly deadlier. Paul argues AI automation could do the same, increasing lethality rather than creating “bloodless” robot wars.
    • The cyber frontier: While robots have physical limits, Paul argues cyberwarfare is already at the point where AI can act faster than human defenders, leading to intelligent malware that evolves and adapts like a biological virus.
    • The US-China “adoption race”: Paul rejects the idea that the US and China are in a spending arms race (AI is barely 1% of the DoD budget). Instead, it’s a race of organisational adoption — one where the US has massive advantages in talent and chips, but struggles with bureaucratic inertia that might not be a problem for an autocratic country.

    Paul also shares a personal story from his time as a sniper in Afghanistan — watching a potential target through his scope — that fundamentally shaped his view on why human judgement, with all its flaws, is the only thing keeping war from losing its humanity entirely.

    This episode was recorded on October 23-24, 2025.

    Chapters:

    • Cold open (00:00:00)
    • Who’s Paul Scharre? (00:00:46)
    • How will AI and automation transform the nature of war? (00:01:17)
    • Why would militaries take humans out of the loop? (00:12:22)
    • AI in nuclear command, control, and communications (00:18:50)
    • Nuclear stability and deterrence (00:36:10)
    • What to expect over the next few decades (00:46:21)
    • Financial and human costs of future “hyperwar” scenarios (00:50:42)
    • AI warfare and the balance of power (01:06:37)
    • Barriers to getting to automated war (01:11:08)
    • Failure modes of autonomous weapons systems (01:16:28)
    • Could autonomous weapons systems actually make us safer? (01:29:36)
    • Is Paul overall optimistic or pessimistic about increasing automation in the military? (01:35:23)
    • Paul’s takes on AGI’s transformative potential and whether natsec people buy it (01:37:42)
    • Cyberwarfare (01:46:55)
    • US-China balance of power and surveillance with AI (02:02:49)
    • Policy and governance that could make us safer (02:29:11)
    • How Paul’s experience in the Army informed his feelings on military automation (02:41:09)

    Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
    Music: CORBIT
    Coordination, transcripts, and web: Katy Moore
    2 hours and 45 minutes
  • AI could let a few people control everything — permanently (article by Rose Hadshar)
    Dec 12 2025

    Power is already concentrated today: over 800 million people live on less than $3 a day, the three richest men in the world are collectively worth over $1 trillion, and almost six billion people live in countries without free and fair elections.

    This is a problem in its own right. But power is still substantially dispersed: global income inequality is falling, over two billion people live in electoral democracies, no single country accounts for more than a quarter of global GDP, and no company accounts for even 1%.

    But in the future, advanced AI could enable much more extreme power concentration than we’ve seen so far.

    Many believe that within the next decade the leading AI projects will be able to run millions of superintelligent AI systems thinking many times faster than humans. These systems could displace human workers, leading to much less economic and political power for the vast majority of people; and unless we take action to prevent it, they may end up being controlled by a tiny number of people, with no effective oversight. Once these systems are deployed across the economy, government, and the military, whatever goals they’re built to have will become the primary force shaping the future. If those goals are chosen by the few, then a small number of people could end up with the power to make all of the important decisions about the future.

    This article by Rose Hadshar explores this emerging challenge in detail. You can see all the images and footnotes in the original article on the 80,000 Hours website.

    Chapters:

    • Introduction (00:00)
    • Summary (02:15)
    • Section 1: Why might AI-enabled power concentration be a pressing problem? (07:02)
    • Section 2: What are the top arguments against working on this problem? (45:02)
    • Section 3: What can you do to help? (56:36)

    Narrated by: Dominic Armstrong
    Audio engineering: Dominic Armstrong and Milo McGuire
    Music: CORBIT

    1 hour
No ratings yet