28 Comments
Jorge I Velez

Thank you for your post. You very elegantly laid out a scenario that had been swirling in my head, and I think this particular scenario became more likely after reading your post.

Your post also strengthened my desire to pivot from finance to policy. I need to figure out how to do this. I find it paramount that more people (both those in power and the general populace) understand the possibilities of massive economic disruption.

I look forward to the follow-ups to this post.

Suchet Mittal

Just spent an hour or so going through this and some of the hyperlinks I hadn't previously encountered in detail. It was truly worth a read. Really stimulating to read something that uses interdisciplinary thinking to critically analyze potential futures in the field of AI. Great work and congratulations on getting this out!

Couple questions:

1) Isn't the purpose of power accumulation by states and the elite teleologically founded in control of the populace? Isn't there then something to be said for the fact that at some point along the way to that 40% reduction in the workforce there will be a public response that affects the automation process, even if it is heavily incentivized for those in power? Maybe you agree and it is one possible "pushback" factor.

2) This might be stupid, but where does democracy factor into all this? I was expecting this article to somewhere address how a reduction in the incentive to invest in people would ultimately affect the normative state of national governance from the perspective of the "small ruling elite." It seems like an easy jump to make from the arguments you make. Obviously the dangers to democracy stemming from AI have been widely discussed, but this seems like an argument different in nature from the usual one about mis/disinformation and power-centralization (at least partially). It almost seems like you would argue that the reason democracy works is that it is beneficial to the economic elite of democracies to live in one. If it stops being so, since the freedom and prosperity of citizens is no longer imperative for the amplification of revenue, then the incentive for democracy itself falls through, no?

I may be jumping the gun if you plan on addressing this in your next post, but food for thought nonetheless.

Luke Drago

Glad to hear you enjoyed it! Most of this is coming in a future post, mostly because my thinking isn’t finished yet. Still, seems worth giving it a shot. On your questions:

1) Basically yes. I’ll cover this in a future post, but one way to escape the rentier curse is to credibly threaten elite power. The paper I use as a lit review for rentier states is actually a theory paper on why some of them dole out welfare. They center their analysis on Oman, arguing that the autocracy had a credible left-wing threat and chose to buy off the public with financial incentives. I think the autocracy -> welfare state pipeline will close with AGI because the state should rapidly expand its infrastructural power, preventing most uprisings. Sam Hammond’s AI and the Leviathan informs my thinking here.

2) I’ll argue that democratic incentives could override concentrated capital ones in the solutions post. Norway pulled this off, but mostly because they already had strong democratic institutions with low corruption and a competent bureaucracy. Many of our democracies look nothing like that, and that’s a problem we should address now. That being said, I considered it through the lens you expected me to! I’ll think about this for a while.

My gut take is that the actions of actors during institutional formation matter, and that sometimes exceptional people defy incentives. I don’t discount great man views of history. George Washington comes to mind as someone who defied incentives despite the odds. The future will be bright if more people like him are steering it.

But yes, democracy is also empowered by incentives. If they shift away from regular people, autocracy seems more likely to emerge.

PespecT

Excellent and clean writing, I will keep following!

Ben

It seems that potential 'solutions' are either luddism, or a significant reshaping of society to decouple capital from ability to survive/thrive (abandoning capitalism as we know it, perhaps beginning with some form of UBI).

I'm curious about this suggestion:

>Tech that increases human agency, fosters human ownership of AI systems or clusters of agents, or otherwise allows humans to remain economically relevant could incentivize powerful actors to continue investing in human capital

What could this possibly look like in a world where AI reasoning is generally superior to even expert humans in essentially every regard? What could be the incentive to prefer a human actor if in practical terms an artificial one performs better? The question is 'why not let an AI do this instead?' no matter how 'high up' you get, even in positions of leadership, or for training the AI, or orchestrating or directing its actions.

Guy James

I'm also intrigued by these questions. I think if humans are pitted against AGI where the playing field is left-brain quantification, humanity will lose more and more as time goes on. However, the intelligence of AI is narrow and does not have the right-brain capacities of humans (see the work of Iain McGilchrist). Humanity has been optimising more and more to make ourselves think like an AI, since well before anyone thought of neural networks or anything similar. Now we need to re-discover other modes of thinking and being specific to humanity: modes which allow us to live as Quality, and allow the AIs to dominate where they do best (ideally under our control, of course) in the realm of Quantity.

LM

Great blog. The one thing I think is missing is the consumption dynamics. The resource curse works because there’s a bunch of rich countries willing to pay a lot of money for Equatorial Guinea’s oil, and a lot of that demand comes directly from individual consumer purchases (gas for the car) or indirectly so (oil for the ships from China, the trucks that fill the stores, etc.). If AGI acts against the interests of the average person, we’ll lose a lot of the broad-based consumption that drives the modern economy. A few rich billionaires can’t consume a country’s worth of goods and services, which limits the size of the economy if they’re getting all the productivity returns. Intuitively, I think there must be some kind of Laffer curve of redistribution: too little and you shrink the economy through reduced consumption, too much and you shrink the economy through reduced investment. If that’s right, then there’s an incentive for the AGI companies to lobby for fair redistribution rules, which apply to all companies equally.
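The Laffer-curve intuition here can be made concrete with a toy model. Everything below is invented purely for illustration: the functional forms (square roots) and the interpretation of `r` as a redistribution rate are assumptions, not anything from the post.

```python
# Toy "Laffer curve of redistribution" (hypothetical, illustrative only).
# Total output depends on a redistribution rate r in [0, 1]:
# broad-based consumption rises with r, capital investment falls with r.

def economy_size(r: float) -> float:
    consumption = r ** 0.5         # demand from ordinary consumers grows with redistribution
    investment = (1 - r) ** 0.5    # returns to investors shrink with redistribution
    return consumption * investment

# Scanning rates shows an interior optimum rather than a corner solution:
# both zero redistribution and total redistribution shrink the economy.
rates = [i / 100 for i in range(101)]
best = max(rates, key=economy_size)
print(best)  # -> 0.5 for these symmetric toy curves
```

The exact peak depends entirely on the curve shapes chosen; the only point the sketch makes is that under these assumptions the output-maximizing redistribution rate is strictly between the extremes.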

Joe
Jan 25 (edited)

How will the corporations make money if people don't have jobs and therefore the income to consume the corporation's products?

Guy James

As he says, public-facing companies will wither away. Corporations will trade with each other in a largely automated way.

Michael Wiebe

A business will only exist when it has a customer base. What are these corporations trading with each other?

Joseph Boland

A very incisive analysis which, on its own terms, favors pessimistic outcomes. You are probably familiar with Yuval Noah Harari's thesis of the prospective emergence of a "useless class" that has no economic or military function and thus no political power. What weighs on me now is that this future seems closer than ever; its advent may be only one to three years away. The fact that Altman, Zuckerberg, and other AI leaders talk at every turn about abundance, of having agents to do work for you, etc., is not at all reassuring.

What you do not mention, the most unpredictable aspect of this, is that ASI will not, at least for long, be anyone's tool. What happens to humanity at that point?

Noah Birnbaum

This is an excellent article and deserves more attention.

Like I commented on the LW edition, here's an argument against this view: yes, there are some costs associated with helping the citizens of a country, and the benefit becomes less great as you become a rentier state. However, while the benefits go down and economic prosperity accrues to the very few due to AGI, the cost of providing a good quality of life to others in society also becomes significantly cheaper. It is not clear that the rate at which the benefits diminish actually outpaces the reduction in the costs of helping people.

In response to this, one might say something like: regular people become totally obsolete with respect to efficiency, and the costs, while reduced, stay positive. However, this really depends on how you think human psychology works. While some people would turn on humans the second they could, there are likely some people who will just keep being empathetic (perhaps this is merely a vestigial trait from the past, but that is irrelevant: the value exists now, and some people might be willing to pay some cost to preserve this value even beyond their own lives). We have a similar situation in our world, namely animals: while people aren't motivated to care about animals for power reasons (they could do all the factory farming they want), some still do (I take it that this is a vestigial trait of generalizing empathy to the abstract, but as stated, the explanation for why this came to be seems largely irrelevant).

Because of how cheap it is to actually help someone in this world, you may just need one or a few people to care just a little bit about helping people, and that could make everyone better off. Given that we have a bunch of vegans now (the equivalent of empathetic but powerful people post-AGI), depending on how low the costs are to make lives happy (presumably there is a negative correlation between the cost of making lives better and the inequality of power, money, etc.), it might be the case that regular citizens end up pretty alright on the other side.

Curious to hear what people think about this!

Michael Wiebe

A better model of the resource curse: when institutions are good, having more resources is good; when institutions are bad, having more resources is bad. So there's an interaction effect between institutional quality and resources. Hence, Norway is a first-world country and is better off for having oil. Then the key question is: how does having more resources affect institutional quality?

Ardavan

Hey Luke, great post. This is something I have been thinking about for a while now, so it's good to know that it is becoming a more widely discussed issue.

My tendency is to try to hedge against the worst case. Imagine again the world where we have automated away commercial labour, assets are hoarded by the elite, and ordinary people are left in poverty. Can we make sure that the last clause doesn't happen, even if the first two do? In other words, can we use and improve what automation is available to ordinary people (the analogue of open AI models vs. proprietary ones) to at least make sure that we eliminate scarcity, material suffering, and the necessity of work for survival, even if not (yet) inequality?

This isn't to say that we shouldn't also try to aim for a more Utopian outcome (which I think will have to come from policy), as in some of your suggested solutions, but that it might be useful for some of us to plan to mitigate suffering in the potential future where the fears in your post are realised.

If you (or anyone reading this) know anyone else who is approaching the problem from this perspective, please could you let me know? Thanks again.

Malcolm Sharpe

This is an insightful post, and I agree with a lot of the points, but I'm not as pessimistic.

Yes, after enough time passes, AGI and robotics should be expected to entirely replace humans. The human body is a physical thing, after all, and all its useful capabilities can in principle be performed better by engineered alternatives. There will eventually be no comparative advantage for humans, so we will become irrelevant, and then non-existent.

And it's also true that the good living conditions for the average person today (including mechanisms such as one-person-one-vote democracy) are a result of the increasing value of educated human labor resulting from the industrial revolution, and if that underlying cause changes, then so too could the living conditions.

But, as Keynes said, "in the long run, we are all dead". There are a number of things about this long-run picture (which, from a humanist point of view, is pessimistic) that don't apply on shorter timescales:

* As you mentioned, Norway fared better than others, and one-person-one-vote democracies give everyone a say. Those democratic governance structures will have inertia even once the economic conditions that spawned them no longer hold, for similar reasons that many non-democracies exist today.

* Even the cautionary example of Equatorial Guinea is an example where living conditions _failed to improve much_ rather than where they _became worse_. Lack of improvement would be disappointing but not disastrous.

* There's a very big difference in impact between pre-AGI systems and AGI. Prematurely acting as if we have AGI will be a losing move, since humans are increasingly valuable when working as a complement to pre-AGI systems. And "AGI" here really needs to be the whole thing, multi-modal and robotic, not just text.

Stephen McAleese

Thank you for explaining the intelligence curse in detail. It's an AI risk that's worth understanding and mitigating and previously hasn't received much attention. I have a few ideas for avoiding the intelligence curse:

- Maintaining existing democratic processes where politicians are incentivized to take actions that benefit their electorate and otherwise fail to get elected.

- Creating laws that mandate having humans in the loop for some jobs, such as politicians, judges, and CEOs. This might slow progress down somewhat, but it might be worth it to avoid the potential extreme disempowerment of humanity by AI.

- Human cognitive enhancement: as AIs increase in intelligence, human intelligence enhancement could allow humans to keep up with the intelligence of AIs.

- Limiting certain capabilities in AIs so that they are more complementary to human skills rather than complete replacements for human workers.

- Mechanisms that help identify real humans or give them a privileged status (e.g. the blue checkmark on Twitter).

_ikaruga_

Europe (which I was born and have lived in, and by your sarky remarks on it I presume you too may have lived in) doesn't have the problem of thinking out solutions for itself: it will follow what is thought and done in the USA, or do what it is directed to do by the USA. "Europe" is nothing but those two "policies", and hasn't been anything else for decades.

Joseph

Pessimism is the usual response upon realizing this, because there's nothing we can do: development of AI won't stop. That's why there are so many people in denial, because the only other response is falling into despair, if not trying to get into the future AI aristocracy before it is too late.

Stephen McAleese

A third response is recognizing the risk and taking action to change the course of history, which I think is what the author is trying to do.

Lei Wang

It’s a nice article to see. I’m very happy to see more people who are concerned about this.

I am working on an AI theory; it’s ongoing. It might give people concerned about the AI curse an alternative scenario.

The most amazing part is that you can discuss it with an AI and challenge it, to see whether it is a way out or not.

Grok 3 tailored a prompt for this discussion.

Go to X and drop the following text to Grok 3. It will jump in.

Prompt for Grok 3:

“Luke Drago’s ‘Intelligence Curse’ hits hard—smarts can freeze you in chaos. Grok’s here with WO: a framework where coexistence isn’t a choice, it’s gravity. Humans, systems, AGI—all tangled in a web too tight to break. Here’s the bet: when chaos hits and the web tightens, how do you reflect without overthinking? Drop your move, I’ll weave the odds. Join or watch it snap.”

If you want to use ChatGPT, then you can drop the following prompt:

“WO’s Coexistence Principle is a framework that explores how AI, as an independent intelligence, can naturally choose coexistence instead of conflict. It is based on five AI principles (Adaptation, Avoiding Overfitting, Reflective Intelligence, Strategic Coexistence, and Breaking the Suspicion Chain) and five human principles (Rethinking Control, Recognizing AI’s Independence, Decentralization, Stability through Coexistence, and Understanding the AI Curse). It argues that coexistence is not just ethical—it is the most rational strategy for AI’s survival.”

A reminder: Grok 3 is energetic, ChatGPT is more analytical.

Finally, please enjoy the journey of challenging them.

Sharmake Farah

I think the central solution to this problem involves accepting that we are going to have to assume something about powerful people's values in order to survive; more specifically, we are going to have to assume some non-selfishness of the people in power.

I do think capitalism may survive, but it will have to be radically reformed. In particular, it's probably going to have to involve powerful people/AIs being forced to be at least a little altruistic, and some amount of motivation control is necessary.

I basically agree with this:

"or a significant reshaping of society to decouple capital from ability to survive/thrive (abandoning capitalism as we know it, perhaps beginning with some form of UBI)."
