The Intelligence Curse
With AGI, powerful actors will lose their incentives to invest in people
“Show me the incentive, and I’ll show you the outcome.” – Charlie Munger
Economists are used to modeling AI as an important tool, so they struggle to see how it could make people economically irrelevant. Past technological revolutions have expanded human potential. The agrarian revolution birthed civilizations; the industrial revolution let us scale them.
But AGI looks a lot more like coal or oil than the plow, steam engine, or computer. Like those resources:
It will require immensely wealthy actors to discover and harness.
Control will be concentrated in the hands of a few players, mainly the labs that produce it and the states where they reside.
The states and companies that earn rents mostly or entirely from it won’t need to rely on people for revenue.
It will displace the previous fuel of civilization. For coal, it was wood. For AGI, it’s us.
On December 28, Rudolf published Capital, AGI, and human ambition. He summarized his argument as:
Labour-replacing AI will shift the relative importance of human v non-human factors of production, which reduces the incentives for society to care about humans while making existing powers more effective and entrenched.
My goal is to give this phenomenon a name and build the evidentiary case for it. Potential solutions will be in a future post.
This problem looks a lot like the plague that afflicts rentier states – states that rely predominantly on rents from a resource for their wealth instead of taxes from their citizens. These states suffer from the resource curse: despite having a natural source of income, they do worse than their economically diverse peers at improving their ordinary citizens' living standards.
Powerful actors that adopt labor force-replacing AI systems will face rentier state-like incentives with far higher stakes. Because their revenues will come from intelligence on tap instead of people, they won't receive returns on the investments we consider prerequisites to sustenance – education to prepare people for employment, employment and salaries themselves, or a welfare state for the unemployed. So they won't invest, and their people will be unable to sustain themselves. Humans need not apply, and so humans will not get paid.
This is the intelligence curse – when powerful actors create and implement general intelligence, they will lose their incentives to invest in people.
Before we begin, my assumptions are:
I believe that artificial general intelligence (AGI) – specifically, "a highly autonomous system that outperforms humans at most economically valuable work" – is technologically achievable and >90% likely to exist in the next 1-20 years (and honestly, 10 years feels way too long). You should too.[1]
Once AI systems that are better, cheaper, faster, and more reliable than humans at most economic activity are widely available, the intelligence curse should begin to take effect. We should expect to be locked into the outcome 1-5 years after this moment.
Why powerful actors care about you
By powerful actors, I mean large organizations such as states, corporations, and bureaucracies that shape the world we live in and how we interact with it.
Rudolf offers an explanation for why states care about their people:
Since the industrial revolution, the interests of states and people have been unusually aligned. To be economically competitive, a strong state needs efficient markets, a good education system that creates skilled workers, and a prosperous middle class that creates demand. It benefits from using talent regardless of its class origin. It also benefits from allowing high levels of freedom to foster science, technology, and the arts & media that result in global soft-power and cultural influence. Competition between states largely pushes further in all these directions—consider the success of the US, or how even the CCP is pushing for efficient markets and educated rich citizens, and faces incentives to allow some freedoms for the sake of Chinese science and startups. Contrast this to the feudal system, where the winning strategy was building an extractive upper class to rule over a population of illiterate peasants and spend a big share of extracted rents on winning wars against nearby states
Powerful actors don’t care about you out of the goodness of their heart. They care about you for two reasons:
You offer a return on investment, usually through taxes or profits.
You impact their ability to retain power, either through democratic means like voting or through credible threats to a regime.
Most states in the modern world are diversified economies: value comes from many different sectors and human activities rather than a single source or handful of sources. They rely on taxing people and corporations to generate revenue. The best way for them to increase that revenue is to increase their citizens' productivity. They could try raising tax rates instead, but you can only tax what is being generated, so there's a hard ceiling on that approach. Instead, the state is incentivized to produce engineers, entrepreneurs, innovators, and other economically productive workers, and to create an environment where they return on the investment. To do so, states tend to:
Establish good schools, research institutions, and universities
Build infrastructure like roads and public transportation
Set up reliable governing systems and courts to protect property rights
Protect speech and the flow of information
Support small business formation
Foster competitive markets
Create social safety nets to support risk-taking
These increase the productivity of citizens and increase the surface area of luck for innovation to occur. Equally importantly, these are the kinds of things that lift people out of abject poverty, increase living standards, and foster political and economic freedoms. With good schools, infrastructure, and competitive markets, a citizen can train for and find a high-paying job that exceeds their basic needs. And with reliable governing systems, fair courts, and free speech, a citizen can petition their government for their needs without the fear of becoming a political prisoner. They gain bargaining power through their votes and their economic output, so they can force changes that raise their standards of living. As a result, sometimes states capitulate to citizens' demands even if it will cost them.
A similar phenomenon affects corporations. Take, for example, the exorbitant salaries of Silicon Valley. Tech workers (until recently) have a skill set companies desperately need to make more money. Those workers are a hot commodity and competition to attract them is fierce. To win them over, companies pay large salaries, offer stock options, buy pool tables, serve 24/7 free meals from a Michelin-starred chef, and do their laundry. No one is seriously arguing that the company laundry service is 10x'ing revenue, but it might win over a potential employee or keep an otherwise unsatisfied one from leaving for a competitor. The employees have bargaining power, so they can demand lavish perks that improve their quality of life.
This creates a feedback loop – as regular people make powerful actors more money, they are more likely to cater to them. Will education 10x your population’s (and thus the state’s) lifetime earnings? Build the damn schools. Will offering paid family leave get better employees for your company? Change the damn policy.
The resource curse
We already have societies that divorce their nation's economic output from their human capital. They're called rentier states. These states – including Venezuela, Saudi Arabia, Norway, and Oman – derive most of their earnings from resources (usually oil) rather than from the productive output of their citizens.
You would expect the people in states with free money in the ground to be wealthy. Just dig it out and sell it to willing buyers. Why worry about building a diverse economy? You’re literally walking on money.
The Democratic Republic of the Congo has over $24 trillion worth of untapped minerals in the ground. How have its citizens fared? According to the World Bank:
Most people in DRC have not benefited from this wealth. A long history of conflict, political upheaval and instability, and authoritarian rule have led to a grave, ongoing humanitarian crisis. In addition, there has been forced displacement of populations. These features have not changed significantly since the end of the Congo Wars in 2003.
DRC is among the five poorest nations in the world. An estimated 73.5% of Congolese people lived on less than $2.15 a day in 2024. About one out of six people living in extreme poverty in SSA lives in DRC.
What’s going on here? How can it be that trillions in total available resources have resulted in abject poverty?
Economists and political scientists call this the resource curse. Countries with abundant natural resources tend to experience poorer economic growth and higher rates of poverty than their economically diverse peers.
There are many factors behind the resource curse, but I'm going to focus on a core one: the incentive it creates for rulers to stop caring about their people's economic well-being.
Because they earn money from resources, rentier states have no incentive to pay regular people today or invest in them tomorrow. Building better schools doesn't earn them more money. They invest only as much as it takes to move the oil out of the ground, onto trucks, and out to the ports.[2] It's not that their citizens couldn't do anything worth taxing; it's that there's no reason to develop them into a taxable population. Why ask your people for money when you can get it from the ground?
Without money, regular people struggle to make demands. In autocracies, there’s no incentive to care about them unless they credibly threaten your power. Those who control the rents can extract wealth without worrying about everyone else.
So what do the lives of their citizens look like? Dr. Ferdinand Eibl and Dr. Steffen Hertog offer two competing visions:
There are few issues on which comparative politics theories offer more sharply contrasting predictions than on the link between resource rents and government welfare provision. Some authors, especially those in the tradition of “rentier state theory,” expect oil-rich rulers to engage in mass co-optation, politically pacifying their population with expansive welfare policies (Beblawi and Luciani 1987; Karl 1997). Others, especially those proposing formal models of politics in oil-rich states, expect rentier rulers to neglect their population. As rents are siphoned off by a small ruling elite that does not need a domestic economic basis for their self-enrichment, welfare provision is minimal and misery spreads (Acemoglu, Robinson and Verdier 2004; Mesquita and Smith 2009).
There are empirical examples for both trajectories. Oman and Equatorial Guinea have broadly comparable levels of natural resource rents per capita—slightly above 8,000 USD per capita in the 1995 to 2014 period (Ross 2013). Both have been ruled by the same autocrats since the 1970s, when both countries were desperately poor. Under Sultan Qaboos, Omani public services have expanded at a rapid pace, leading to one of the world’s fastest declines in child mortality, from 159 per one thousand live births in 1971 to 9 by 2010, far below the Middle East average of 32. In Teodoro Obiang’s Equatorial Guinea, the state outside of the security services remains embryonic, the vast majority of the population continues to live in abject poverty, and infant mortality has declined painfully slowly: from 263 in 1971 to 109 in 2010, remaining above the (high) sub-Saharan average of 89. Access to rentier wealth is monopolized by the president’s small entourage (Wood 2004).
Occasionally, rentier states produce large social safety nets.[3] But in most cases, they produce abject poverty for all but the few who control the streams of rent.[4] Why? Eibl and Hertog provide an answer:
We concur with formal models of politics in resource-rich countries that ruling elites seek to ensure survival in power. Public policies are subject to this overarching goal and reflect elites’ assessment of threats to their rule. Within these constraints, elites will seek to maximize their personal rents from resource revenues.
We also agree with existing literature that the relative economic pay-off of welfare provision is lower in resource-based regimes, while its potential modernization effects are politically undesired (Acemoglu and Robinson 2006; Mesquita and Smith 2009). All else being equal, we therefore expect oil-rich regimes to establish narrow kleptocratic coalitions with limited welfare provision and rampant elite self-enrichment.
This effect doesn't map onto widely adopted technologies, because those technologies rely on regular people using them in their workflows to increase productivity. What about AGI?
AGI looks more like a resource than a technology
Imagine for a moment that you are the CEO of a large company. Employing people is an investment you make. You pay them salaries which make up a large chunk of your total budget. In return, they do work that helps you generate revenue. Every year, you hire thousands of entry-level analysts to do the grunt work of your company like collecting data, writing reports, or making pretty powerpoint slides. You’ll also train them and promote them as other employees move up the corporate ladder. Their work output makes you money today. In 20 years, many of these analysts will be senior employees, and one might even replace you!
Hiring analysts serves two purposes:
Create a labor force to do the grunt work today
Build the bench that will replace existing hires as they age out
In the 2010s, laptops became widely available. Instead of clunky desktop computers, your analysts could now work from anywhere. They could take detailed notes in meetings and collaborate in the breakout room. But the laptops couldn't replace the analysts, because you couldn't give a laptop a task in plain English and expect it to do the work. Instead, you needed the analysts to use the laptops to unlock their benefits.
So you bought all your analysts laptops. Doing so made nearly all of them more productive, which increased your company's profits. The laptops were tools to be used by the analysts, but they didn't 1) enable one analyst to do the job of 10 or 2) automate the analysts entirely.
Fast forward to 2030. BigLab just released an AI agent powered by GPT-8. It completes any task 20% faster and 10% better than any of your analysts. Oh, and running it to do the work of one analyst costs $10,000 per year – that’s at least an 80% cost reduction. It might let your best analyst do the job of 10, or you could use it to clone the best one and automate the analyst class entirely.
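The arithmetic behind this thought experiment is easy to sanity-check. A minimal sketch, using purely hypothetical figures: the $10,000/year agent cost stated above, and an assumed $50,000 analyst salary – the minimum implied by "at least an 80% cost reduction":

```python
# All figures are illustrative assumptions from the thought experiment,
# not real salary or pricing data.
analyst_salary = 50_000  # assumed annual cost of one entry-level analyst (USD)
agent_cost = 10_000      # stated annual cost of the hypothetical AI agent (USD)

# Replacing one analyst outright:
cost_reduction = 1 - agent_cost / analyst_salary
print(f"Cost reduction per analyst replaced: {cost_reduction:.0%}")  # 80%

# The pressure compounds if one agent-augmented analyst does the work of ten:
team_of_ten = 10 * analyst_salary
one_plus_agent = analyst_salary + agent_cost
print(f"Ten analysts:      ${team_of_ten:,}")    # $500,000
print(f"One analyst+agent: ${one_plus_agent:,}")  # $60,000
```

The exact numbers don't matter; what matters is that the gap is large enough that any cost-sensitive competitor will act on it.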
And it's not just better – it's more predictable. AI will remove the talent bottleneck by erasing the difficulty of finding, accurately judging, and hiring talent in any field. Turning to Rudolf:
If you want to convert money into results, the deepest problem you are likely to face is hiring the right talent. And that comes with several problems:
It's often hard to judge talent, unless you yourself have considerable talent in the same domain. Therefore, if you try to find talent, you will often miss.
Talent is rare (and credentialed talent even more so—and many actors can't afford to rely on any other kind, because of point 1), so there's just not very much of it going around.
Even if you can locate the top talent, the top talent tends to be less amenable to being bought out by money than others.
AGI will not just be better than your analyst. It will be reliably better. You will know exactly how it will perform, either before integrating it or shortly thereafter. You could predict how much better it will get with each successive iteration. In a few months or years after it gets better than your analysts, it’ll get better than you at making strategic decisions for the company.
Maybe you really like the existing analysts and are skeptical of this new system. You integrate it as a trial, and in a year it's outperforming all of them. In fact, keeping humans in the loop slows down the system and produces merely human results. Are you going to hire more analysts? No. Your future analyst classes are going to shrink dramatically. And if you hit hard times as a company, you'll remember that you can fire most of your staff and get better results.
With all this in mind, why the hell wouldn't you fire your analysts? They are more expensive, worse at the job, and less reliable. Sure, Mike interviews well and is very nice to be around, but companies fire people their leadership personally likes all the time. And if your company doesn't fire them, you will be crushed by the competition that does.
Do you know what else performs like this? Natural resources. I know what oil does, how much of it I will need to do a thing that requires energy, and which kind of oil is best suited for my purpose. When I need gas for my car, I don’t have to interview or reference check 10 gas stations and make a gamble on which one is most likely to get my car from point A to B. All I need to do is pull in, confirm the type I need for my car, and fill up my tank.
What oil did for energy, AGI will do for anything that will require intelligence. It will easily slot in, reliably do a job, and do it better than any of its predecessors (including you) could ever do. Every actor – every company, every bureaucracy, every government – will be under competitive pressure to get humans out and their AI successors in. AGI will be domain agnostic – the goal is not to get superhuman abilities in one field, but in all of them. It will come for the programmer and the writer and the analyst and the CEO.
This is not hypothetical. We are starting to see pre-AGI systems shrink analyst classes, change personnel strategies, and trigger layoffs. Remember that today is the worst these systems will ever be. You should expect that they will become more capable as time goes on. As they get better, their impact on the labor market will grow rapidly. As Aschenbrenner says, “that doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph.”
We are heading towards the default outcome, charted by the default incentives. What are those incentives, and what world will they create?
Defining the Intelligence Curse
The intelligence curse describes the incentives in a post-AGI economy that will drive powerful actors to invest in artificial intelligence instead of humans. If AI can do your job cheaper and faster, there isn’t a reason to hire you. But more importantly, there isn’t an economic reason to invest in your lifelong productivity, take care of you, or keep you around. We could produce unparalleled value with fully automated everything, but if the spoils are distributed like the worst rentier states it will not result in prosperity for the masses.
A common rebuttal I’ve heard is that some jobs can never be automated because we will demand humans do them. I hear this a lot about teachers. I think most parents would strongly prefer a real, human teacher to watch their kids throughout the day. But this argument totally misses the bigger picture: it’s not that there won’t be a demand for teachers, it’s that there won’t be an incentive to fund schools. I can repeat this ad nauseam for anything that invests in regular people’s productive capacity, any luxury that relies on their surplus income, or any good that keeps them afloat.[5] By default, powerful actors won’t build things that employ humans or provide them resources, because they won’t have to.
Taxes will still be a relevant form of income for governments, but only those from corporations. Likewise, corporations will make money from their AI systems, not from the work people produce. The investments that the developed world associates with a high quality of life — salaries, education, infrastructure, stable governance, etc — will no longer provide a return. People won’t make powerful actors any money.
Where might the powerful actors get their money from instead?
States will earn money from corporate taxes. Companies that produce advanced AI systems and companies that use them will generate large revenues. As they get bigger, states will tax them more. In 2022, corporate taxes made up 11.5% of the average OECD state’s revenue – a sample of high-performing, diverse economies. In the US, it’s only 6.5%. Like Norway, Saudi Arabia, and the Democratic Republic of the Congo, states will rely less on income taxes and more on taxes from AI companies or other companies that enable powerful actors to accomplish goals. When state revenue breakdowns look more like these countries than the OECD average, you’ll know the intelligence curse has taken hold.
AI labs will make money by becoming the new rentiers. The stated goal of the AI labs is to build AGI. One of the labs is changing its corporate structure to ensure it can capitalize on it. Once they have a system that can do it all, do you think they'll just give it away? They'll become a horizontal layer of the economy, extracting rents from all economic activity by selling intelligence to powerful actors who use it to replace their workers. Initially, some wrappers might be able to make money from this by scaffolding agents to work better in specific verticals (this is already happening). Don't expect this to last – remember, the goal is to do everything. This will make the labs a significant percentage of total global GDP, enabling them to wield economic power that was previously exclusive to states.
Companies will trade amongst themselves and other powerful actors. Land, energy, compute, manufacturing hubs, data centers, and many more things that exist in the physical world and enable actors to accomplish goals will have value. The cafe chain and the marketing firm will be irrelevant, but the landlord and energy company will be able to make more money than ever before. Powerful actors, likely human-controlled (at least for a while), will extract the vast majority of value from these sources.
One place where the intelligence curse differs from the resource curse is the long-term incentive to diversify. The climate effects of oil and the rise of renewables that let any state produce energy have forced petrostates to search for new income streams, empowering their citizens in the process. This effect won't map to AGI – each subsequent model will be more capable than the last and will likely be controlled by the same few actors. You also can't "run out" of AGI like you can with oil. You could exhaust compute capacity or existing energy, but compute gets cheaper over time and energy is getting greener by the day. We won't need to transition away from it – once we have it, it's here to stay.
So what will happen to most regular people, assuming powerful actors follow the default trajectory? Show me the incentives, and I’ll show you the outcome:
Companies will be incentivized to fire them, and never hire new ones. Regular people won't produce anything companies value. For a short time companies might rely on them as consumers, but most consumer-facing companies will fizzle out as their demand base loses economic power.
States will be incentivized to decimate public funding. Remember, their revenue base will shift towards other powerful actors. They will derive no value from their citizens' labor and are thus incentivized against building things that turn those citizens into productive workers. ROI – capital, power, and resilience – comes from ensuring the AI labs can build better models and the companies using them can do things in the world. And the taxes to fund human investment would come in large part from the AGI labs. Competition between states means that if any state tries to fund a UBI with this tax, its AGI effort could fall behind other states'.
Regular people won't have the resources to support themselves or each other. The vast majority of people will not have the economic power necessary to make any demands. They won't be able to incentivize resource-controlling actors to invest in them. That means (at best) they'll struggle to fulfill their basic needs or will depend on the benevolent charity of powerful actors.
For a while, regular people might be able to generate some value. Rentier states require some humans to move things in the physical world – someone has to get the oil out of the ground. It could be that humans are paid for manual labor while agents are limited to virtual forms. As robotics improves,[6] the need for human labor will decrease. People won't be able to participate in the economy because they won't be able to do anything better, faster, cheaper, or more reliably than their artificial replacements.
In rentier states and colonial states,[7] value is derived primarily from raw materials or physical goods, which are then sold to foreign buyers – usually other states or businesses. A few humans are involved in the raw production or management of this, but most don’t benefit. You should expect a similar scenario here. This leads to an obvious question: who are powerful actors producing anything for?
Powerful actors have goals, so production will strive to achieve them. States want control over territory and companies want to enrich their owners. Individuals who have accrued significant capital might also have goals. Maybe they’ll want to use their newfound power to colonize Mars or excavate the oceans. It could be less historic – plenty of ultra-wealthy people are content to live their lives maximizing their own pleasure. All of them will want to ensure their newfound place in society is secure, and this could require vast amounts of power and resources. Without regular people in the value loop, there is no incentive for spoils to go to them.
Even if humans at the very top of the pyramid remain relevant, the ability for new actors to enter the equation will be frozen. An actor will have power because they had it before the intelligence curse took hold or were well-positioned to capitalize on it as it began.
This sounds a lot like feudal economies. Rudolf makes the comparison aptly:
In a worse case, AI trillionaires have near-unlimited and unchecked power, and there's a permanent aristocracy that was locked in based on how much capital they had at the time of labour-replacing AI. The power disparities between classes might make modern people shiver, much like modern people consider feudal status hierarchies grotesque. But don't worry—much like the feudal underclass mostly accepted their world order due to their culture even without superhumanly persuasive AIs around, the future underclass will too.
To recap, the intelligence curse will create rentier state-style incentives at scale and without their typical restraints. When people are not relevant, powerful actors will by default not invest in people. Without intervention, the default outcome looks like the worst rentier states: a few extraordinarily wealthy players and mass poverty for the rest, held in a stable equilibrium. A small number of post-AGI elites will control all powerful actors, while everyone else struggles to meet their basic needs.
So people are working on this…right? Right?
The world is waiting on you
Most people are not taking this seriously. When a few friends and I got some of the world’s top experts to agree on the best ways to govern AI by 2030, our economic section asked governments to “consider bold, innovative policy ideas if we arrive at economic conditions that necessitate a more dramatic response.” That’s policy-speak for “we have no idea what to do and need some smart people to think about it.”
We are going to have to break the culture of mass denial fueled by indefinite optimism.[8] Wishful thinking is dominating the conversation. Some of it is motivated by a sense of self-importance: many people believe that their job is actually super special and automation-proof forever, so why should they care?
Two conversations stick out to me:
First, I had a conversation over a year ago with a senior person in AI policy. When I brought up the idea that automation might make people worse off, they considered the possibility of technological replacement totally impossible. Why?
“We’ll have new jobs – maybe everyone will work in AI policy!”
I thought they were kidding. Further discussion proved they weren’t. Everyone thinks their job is safe – even the AI policy people.
Second, in a more recent conversation, I raised the concept of the intelligence curse. I hadn't fleshed it all out yet, but their response convinced me I needed to. This person, well-connected in the AI space, agreed that technological displacement was the most likely outcome of AGI, but believed it would default to utopia.
“We won’t need jobs – we’ll be free to self-actualize. We’ll pursue meaningful goals and write poetry.”
You do not get to utopian poetry writing by having faith that someone else will figure it out. You are not praying to God; you are praying to men more ignorant than you.
The AI safety community thinks it is immune from this because it has identified a deeply relevant problem – intent alignment – and is spending all of its energy trying to solve it. And I agree: intent alignment must be solved. There's no way around it. But the safety community often sounds like the person predicting poetry parties. Aligned AGI and superintelligence do not equal utopia.[9] You are merely ensuring the most powerful technology in human history is reliably controllable by the actors that will be most afflicted by the intelligence curse. You can't just plan for AGI – you have to plan for the day after.
For the few who see the intelligence curse for what it is, mass denial has been supplanted by indefinite pessimism.
A day after o3 dropped, I got a text from a software engineer who refused to use Cursor because they didn’t believe it could possibly be better than them:
“Thoughts on o3? This is the first time I am starting to feel a little cooked”
Indefinite pessimism has made us think we’re “cooked” with no way out. “What is your p-doom?” is more common than “what is your solution?”
If your reaction to the last year of progress has been paralyzed hopelessness, dust yourself off. The world is waiting on you – one of the few who sees what is coming – to do something about it. Hope is a prerequisite.
In my next post, I'll identify some ways I think we could break the intelligence curse, partly by looking at states that avoided the resource curse. I'm working on the specifics, but I think solutions will fall into two categories:
Governance solutions. In healthy democracies, the ballot box could beat the intelligence curse. People could vote their way out. But our governments aren’t ready.
Innovative solutions. Tech that increases human agency, fosters human ownership of AI systems or clusters of agents, or otherwise allows humans to remain economically relevant could incentivize powerful actors to continue investing in human capital.
This isn’t just a problem for a blog post. Governments should be forecasting AI capabilities and thinking through solutions to the intelligence curse right now. Think tanks need to start turning out policies designed to get us ready for a post-employment world. AI labs need to be critically examining their own incentives and building better internal governance structures to overcome them. Ambitious young people should start companies trying to design tech that will keep humans economically relevant and spread abundance, and VCs should start funding them. If you are well-positioned to contribute to solving this problem, what are you waiting for?
There are some problems that are impossible to solve – but there are no big problems that aren’t worth giving it everything we’ve got. I am more optimistic than I have ever been because naming the problem gives us something to solve.
Change the incentives, and you can change the outcome. The work starts today.
Thank you to Rudolf Laine, Josh Priest, Lysander Mawby, Jacob Pfau, Luca Gandrud, Bilal Chughtai, Nicholas Osaka, Stefan Arama, Joe Pollard, and Caleb Peppiatt for reviewing drafts of this post.
[1] If you disagree, I’d strongly encourage you to read this, this, this, this, and this (and watch this). You should also consider that it is the stated goal of OpenAI, Meta, and Google DeepMind, and it looks like that’s what Anthropic is aiming at. You should also know that the top recommendation from the Congressional US-China Commission in 2024 was for Congress to “establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability.”
[2] For more on this, see Chapter 7 of this book.
[3] Why a few rentier states like Oman and Norway become expansive welfare states (and what this means for the intelligence curse) will be the subject of a future post. Spoiler alert: Oman’s model won’t be a solution to the intelligence curse, but Norway’s might be.
[4] For other evidence, see here, here, and here.
[5] If the next thing that pops into your head is “but what about comparative advantage?”, know that this section originally had a 1500 word takedown of that argument which was cut for length. That post is coming soon.
[6] This is nine months old and running on a much worse model than today’s state-of-the-art ones. Again, believe in straight lines.
[7] One day I’ll write a post about how colonial states function a lot like rentier states. In both, extractive institutions generate wealth for a power that isn’t incentivized to care much about the people within its borders. Post-colonial states still suffer because, instead of extracting value for a foreign power, the same institutions are turned into value-extraction tools for the domestic political elite.
[8] Indefinite/Definite Optimism/Pessimism was first defined by Peter Thiel in Zero to One. For a summary of this concept, click here.
[9] An assumption underpinning this is that we either a) solve intent alignment before making sure that systems are aligned with human values, or b) abandon aligning systems with human values entirely, because powerful actors would rather not have machines that tell them no based on a moral compass the actor doesn’t agree with.
Thank you for your post. You very elegantly laid out a scenario that had been swirling in my head, and I think this particular scenario became more likely after reading your post.
Your post also strengthened my desire to pivot from finance to policy. I need to figure out how to do this. I find it paramount that more people (both in power and in the general populace) understand the possibilities of massive economic disruption.
I look forward to the follow-ups to this post.
Just spent an hour or so going through this and some of the hyperlinks I hadn't previously encountered in detail. It was truly worth a read. Really stimulating to read something that uses interdisciplinary thinking to critically analyze potential futures in the field of AI. Great work and congratulations on getting this out!
Couple questions:
1) Isn't the purpose of power accumulation by states and the elite teleologically founded in control of the populace? Isn't there then something to be said for the fact that at some point along the way to that 40% reduction in the workforce there will be a public response that will affect the automation process, even if it is heavily incentivized for those in power? Maybe you agree and it is one possible "pushback" factor.
2) This might be stupid, but where does democracy factor into all this? I was expecting this article to somewhere address how a reduction in the incentive to invest in people would ultimately affect the normative state of national governance from the perspective of the "small ruling elite." Seems like an easy jump from the arguments you make. Obviously the dangers to democracy stemming from AI have been widely discussed, but this seems like an argument different in nature from the usual one about mis/disinformation and power-centralization (at least partially). It almost seems like you would argue that the reason democracy works is that it is beneficial to the economic elite of democracies to live in one. If it stops being so, since the freedom and prosperity of citizens is no longer imperative for the amplification of revenue, then the incentive for democracy itself falls through, no?
I may be jumping the gun if you plan on addressing this in your next post, but food for thought nonetheless.