Great interview! I find this angle on the topic of AI fascinating. Some thoughts:
1. I found that my view of where AI is going in the next 5 years is actually more pessimistic than anyone's on this episode. There seemed to be agreement that there will be widespread use of agents in the next few years, but I'm skeptical. Issues like reliability, common sense, memory, planning, robustness, and continuity over time and longer contexts seem like real challenges for agents that even frontier CoT models have not addressed, and I'm skeptical current approaches will solve them with just more compute or data. Models have gotten a bit better in these areas, but I think people underestimate just how far they have to go to equal even an average human. In this I suppose I'm broadly in agreement with some of the things Yann LeCun has said recently. In other words, I'd be surprised if agents automate significant components of most white-collar jobs by 2030.
2. I think a key point Ajeya touched on only briefly is that the doomer community's point of view really depends on the idea that we'll get god-like superintelligences that are difficult to align/control, and that these will emerge quickly. I think this might be the most important disagreement between doomers and everyone else, and I'd be interested in further discussion of that point in particular at some point, as I'm skeptical that such superintelligence is possible or, if it is, that it will come quickly even with AI helping us get there.
3. One reason I'm skeptical of the doomers is a more meta, sociological one. Frankly, they seem cult-ish to me, almost like a fundamentalist religion for neurodivergent STEM nerds. The more you dive into the rationalist and EA communities (which I've done multiple times over more than a decade at this point), the more they seem like a cult. There's constant talk of doomsday, faith that the creation of aligned god-like AI is the only thing that matters, prominent figures writing and speaking in emotionally charged and extreme ways, a community that is in many ways extremely insular, and very strange behavior and philosophical beliefs that are normalized (even outside the topic of AI). The problem isn't that they're weird; it's that they're weird and organized in a way similar to cults. I'm not saying we should dismiss their point of view out of hand because of this, but I think it's reason for a healthy extra dose of skepticism toward their views.
Maybe it didn't come through in the conversation but I agree on all three points here.
I also think it's unlikely that we'll see AIs automating a significant number of white-collar jobs before 2030. And like you I'm skeptical of the concept of superintelligence or the idea that a sufficiently powerful entity would have god-like powers relative to human beings. I wrote about that back in 2023:
https://www.understandingai.org/p/why-im-not-afraid-of-superintelligent
When I spend time talking to doomers I always find them to be kind, thoughtful people. But I do think they've gotten themselves into an intellectual bubble that makes it harder for them to see the world clearly.
Thanks for listening!
Oh, to be clear, I could tell you were the most skeptical; I guess I felt I was maybe even more skeptical than you seemed to be, though it does sound like we have similar views. Edit: I'm not sure I have a clear picture of your view on agents and how quickly they will or won't advance and become useful, beyond what you just wrote here about them not automating jobs before 2030. I recently subscribed to your Substack, so maybe I missed it.
And I agree that the doomer community folks are kind and thoughtful; I didn't mean to imply otherwise with the "cult" terminology. I meant it more in the sense that I think outsiders don't realize how extremely tight-knit this community is and how all-encompassing its culture and worldview are. Maybe not a religion, but more similar to one than to a typical intellectual/academic community.
As a member of the "doomer community", I'm neither a rationalist nor a member of EA, and in fact I have a lot of connections in traditionally anti-EA circles like religious organizations. I think the risk awareness comes from the simple logic of creating something that could invalidate humanity.
Fair, I guess it depends on what you mean by doomer. I do worry about AI risk in a sense, but I'm not especially concerned about the specific kind of doomerism we see from the rationalist/EA world, which I'd briefly describe as the hard takeoff of misaligned AI. I'm pretty skeptical of the hard takeoff scenario despite recent progress. I'm also much more optimistic about alignment than the self-described doomers. At least so far, AI has seemed pretty easy to align. It's not perfect, true, but the old vision of an AI being told to build as many paperclips as possible and killing all humans in the process seems extremely implausible at this point. And if anything, AI seems to have become easier to align as it gets more sophisticated. It was much easier to get older LLMs to say racial slurs, to use one example.
I'm more concerned about things like:
* Bad actors using AI for nefarious purposes, be they terrorists or authoritarian governments
* AI creating unimaginable levels of inequality between capital owners and the rest of us
* Automated weapons
* Loss of meaning and purpose in an automated world
* Humans being taken care of, but completely losing our influence and control over our destinies as individuals and a species.
I think that takeover is completely possible, but I agree that my main sources of risk are the last two. I don't actually think we have that much alignment, and conatus strongly suggests that AI will essentially align itself to whatever maximizes itself. Insofar as that might not look very different from "maximizing itself for the corporation", that might be true.
https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like
This is my main doom risk, and I think it relies only on current trends of Goodharting continuing. Essentially, "industrializing humanity out of existence."
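To make the Goodharting point concrete, here's a minimal toy sketch (the functions, numbers, and names are made up purely for illustration, not anything from the thread): an optimizer can keep pushing a measured proxy like output per worker upward while the thing we actually care about quietly declines.

```python
# Toy illustration of Goodhart's law: the proxy metric keeps improving with
# more automation, while the true objective peaks and then erodes as humans
# lose bargaining power. All functions and constants are invented for the example.

def proxy_metric(automation_level: float) -> float:
    # Proxy: measured productivity rises monotonically with automation.
    return 1.0 + 4.0 * automation_level

def true_objective(automation_level: float) -> float:
    # True objective: welfare gains from productivity, minus a growing
    # penalty as human bargaining power disappears.
    productivity_gain = 0.5 * automation_level
    bargaining_loss = automation_level ** 2
    return 1.0 + productivity_gain - bargaining_loss

if __name__ == "__main__":
    print(f"{'automation':>10} {'proxy':>8} {'true':>8}")
    for step in range(11):
        a = step / 10
        print(f"{a:>10.1f} {proxy_metric(a):>8.2f} {true_objective(a):>8.2f}")
```

Running it shows the proxy climbing steadily while the "true" column turns downward past a certain level of automation, which is the shape of the failure mode being described.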
I think everyone's individual view on the likelihood of white-collar job automation is almost beside the point; we've all collectively proven to be pretty terrible at predicting AI progress (ha).
I think what is much more important is the clear intention of the corporations/entities that control the most sophisticated models to automate these white-collar jobs. Whether it's Zuckerberg saying they'll have agents performing like mid-level engineers this year, or VC folks very explicitly saying the agent market opportunity is 10x greater than SaaS because it automates people in addition to software: https://www.youtube.com/watch?v=ASABxNenD_U (16 & 20 minute marks for those comments)
This is where I think the AI safety community's core premise around catastrophic risk, and the AGI discussion as a whole, really becomes counterproductive. Regardless of whether a model can cause catastrophic harm or qualifies as AGI, from an economic perspective, if a model-based process can perform the same tasks as humans more cheaply, it's clear what corporations are incentivized to do, and they are increasingly making those automation intentions transparent.
I wish we had as much focus on the societal economic ramifications of continued AI innovation as we do on the more severe safety concerns. Even if the outcome isn't the elimination of jobs, if you now need 5 white-collar workers where you previously needed 10, that's going to have significant economic ramifications at the individual level.
I know prior technological advances have led to more opportunities in new categories of work, but I think it's fair to be skeptical of that premise as automation moves up the cognitive chain.
I actually do think this is an essential part of the existential risk as well, because as you increasingly devalue humans as part of industrialization, you also remove their bargaining power.
I would also characterize it as part of the existential risk personally, but I don't think that what I described gets nearly enough attention relative to other types of risks in the AI safety world. I completely agree with your assessment about bargaining power.
I think a large part of it is that it seems low status to complain about your job; it's all treated as a "skill issue" if you can't keep your job or if you ask for protection. There's almost a sense that you should not defend yourself.
I agree with this. I also wonder about what happens even if AI is aligned with humanity. We know that people who don't work are often depressed and spend a lot of their time on activities like TV and video games. Can people be happy if they don't feel like they're contributing anything to the world, if we live pampered lives as glorified pets?
I think this is a fair long term concern, but in the near/intermediate term I am more concerned about increasing inequality.
https://nationalaffairs.com/publications/detail/technology-for-the-american-family covers this well.
"With advanced robotics in the 1990s and the perfect replication of processes made possible by computers, increasing numbers of professions have become subject to automation. Now, with artificial intelligence and the promise of autonomous robots, still more work appears slated for replacement. Some predict a future without work — which would be catastrophic."
This episode brought up a lot of things I’d been wondering about! It makes sense that the “doomer” crowd is made up of people who read The Alignment Problem and pivoted to work on it because they were convinced. People who weren’t convinced didn’t pivot or work much on it, so the community got built up by those who were.
However, I think that while people who are certain about digital minds and sun-covering tech gravitate to the “doomer” crowd, there are still people in the “doomer” crowd who aren’t certain and are motivated instead by other kinds of risks, such as authoritarian lock-in (like me).
I think we will start to see more work in these directions (as well as on societal snags like NEPA) as AI becomes more of a topic of interest for less-online crowds too.
Thanks for this clarifying episode about the direction of the communities!