Luke Drago
The Use of Knowledge in (AGI) Society

How to build to break the intelligence curse

Luke Drago
and
Rudolf Laine
Apr 09, 2025
Cross-posted by Luke Drago
"An avenue for building tech to break the intelligence curse & the concerns discussed in Capital, AGI & Human Ambition."
-
Rudolf Laine
John Henry First Encounters the Steam Drill by Nick Danzi

The AI labs are aiming to create AGI that could obsolete all human labor, ending the dependency that powerful actors have on regular people. If they achieve this, they risk triggering the intelligence curse, leaving humans irrelevant and severing the social contract between regular people and powerful actors. The most ambitious labs are further aiming for a singleton—a single superintelligent system or steering committee with total control over the world—which would cause unprecedented power concentration and create a single point of failure.

We’ve previously argued that you have to build your way out of this problem. We think we now know how.

Today’s superintelligent systems

We already have narrowly superintelligent human systems, defined as collections of people that together perform far better than the people within them could perform without the system. They work in large part because they are not single, solitary minds.

The best example is markets. Markets are made of different actors that pursue both competing and complementary goals. This push and pull creates a system that outperforms anything a central planner or individual could achieve, precisely because the knowledge needed for the best allocation of resources exists at the local level, not with a few puppet-masters controlling the system.

Why decentralization works

Why does this work? Central planning efforts have long tried to outperform market-based systems. They don’t seem to work very well.

Hayek explains this well. In “The Use of Knowledge in Society”, he argues:

The peculiar character of the problem of a rational economic order is determined precisely by the fact that the knowledge of the circumstances of which we must make use never exists in concentrated or integrated form but solely as the dispersed bits of incomplete and frequently contradictory knowledge which all the separate individuals possess. The economic problem of society is thus not merely a problem of how to allocate “given” resources—if “given” is taken to mean given to a single mind which deliberately solves the problem set by these “data.” It is rather a problem of how to secure the best use of resources known to any of the members of society, for ends whose relative importance only these individuals know. Or, to put it briefly, it is a problem of the utilization of knowledge which is not given to anyone in its totality.

Hidden knowledge throughout society could be visible to an individual without being visible to others. A corporation could be superintelligent and still miss the on-the-ground demand for a coffee shop you keep hearing about from neighbors on your street. Moreover, the different goals of the agents that make up the larger system lead them to pursue different activities, make different friends, and follow different paths. This means every agent has access to different information, and explores a different part of their shared environment. By combining these agents and allowing them to allocate resources locally, you outperform a central planner. The combination of individual insights and the diversity of goals across society leads to a better system of resource allocation and a richer world than a top-down plan.

If you’ve ever sat in a brainstorming session, you know this to be true. If everyone in the session is identical, the insights and ideas are monotonous. Because people in the room have different experiences, they collectively produce better ideas. If every agent in the system pursued the same things—or had their presumed wants handed to them by a central planner—the world would be far poorer and more monotonous than it is now.
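The knowledge argument above can be sketched as a toy simulation. Everything here, from the demand numbers to the mismatch metric, is an illustrative assumption rather than anything from the post: each location's demand is known only to the local agent, while a central planner sees only the aggregate and must spread supply evenly.

```python
import random

random.seed(0)

# Toy model of Hayek's point: demand at each location is private,
# local knowledge. The planner only observes the aggregate.
n = 10
demand = [random.randint(0, 100) for _ in range(n)]  # hidden local knowledge
supply = sum(demand)

# Central planner: knows only total demand, so it splits supply evenly.
central = [supply / n] * n

# Local agents: each allocates exactly what it observes on the ground.
local = demand[:]

def mismatch(alloc):
    """Total over- and under-supply across all locations."""
    return sum(abs(a - d) for a, d in zip(alloc, demand))

print("central planner mismatch:", mismatch(central))
print("local allocation mismatch:", mismatch(local))  # 0 by construction
```

The local agents win not because they are smarter but because the information the planner needs never reaches it in usable form—which is exactly Hayek's point.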

Rage against the singleton

AI will be an inherently centralizing force, making central planning easier and more effective than it is now. However, an outright victory of centralization1 is much harder than it seems, much worse than it seems, and easier to fight than many think.

It’s much harder because it requires an incredibly large amount of data about the world, and the problem is computationally difficult.2 A huge amount of information about the world is missing from the internet and other AI training corpora. To get this information and feed it to the superintelligent AI system, you’ll need to capture an enormous amount of data about all of society, and allocate every resource accordingly. You’ll need a camera on every corner, a microphone on every person, and a spy in every cafe. You might also need a probe in every brain.

A victory of centralization is also worse than it seems. The surveillance and control necessary to build such a system is a one-way ticket to a dystopian dictatorship, triggered the moment control of the system—whether wielded by an AI or a human—is corrupted. Your “Stalin risk”3 is enormously high; you should presume that any system deeply vulnerable to dictatorial takeover is likely to succumb to such a takeover. Centralization through AI is also distinct from previous centralizing forces, since control is exerted not by a bureaucracy made of humans but by non-human AIs and robots. Our current economy, split between labor and capital, could collapse into an economy dominated by capital alone, which would incentivize powerful actors to care less about humans.

Thankfully, however, centralization is likely easier to fight than many believe, notwithstanding claims that a singleton machine god is inevitable and imminent. Throughout history, attempts by central planners to impose legibility and surveillance on populations have been met with resistance from those who would be surveilled and planned, who recognize the loss of power and control this entails.4 This resistance is helped by the difficulty of capturing all hidden knowledge.

The use of your knowledge in (AGI) society

Tacit knowledge is something everyone has, and also something that’s hard to suck up into the training data—it’s not well-represented in internet tokens, since much of it is context-dependent, local, and embodied. And even though LLMs are good at getting the gist of tacit knowledge, they tend towards a bland average of what’s online—a generic view from nowhere, rather than someone’s specific perspective and insight.

We believe that tacit knowledge—the bits of knowledge that are visible to some individual but invisible to the whole of society—is the bedrock for maintaining human relevance. Humans need better tools to harness and build tacit knowledge. The most powerful way to create these tools is using AI. Over time, the AI will get more and more powerful, do more and more of the work, and likely move from a tool you use towards an agent you delegate to. But the AI should not be imposed from above, and automate the humans—it should be incorporated by the humans bottom-up, and augment them.

If the time comes for humans to finally hand over the reins to the AIs, it should not be as automated workers giving up the security that comes from mattering to the world while surrendering to a generic, centralized AI planner. It should be as free agents, uplifted by a vast series of AI tools and helpers and agents that they themselves crafted and grew with, continuing to steer their personalized AIs and, through them, the world, and continuing to own the benefits that they earn.5

Surrounded by technical miracles, each human will find themselves increasingly uplifted by the machines they crafted for themselves, participating in a free society as they claim the heavens together.

Work on this

We plan to tune LLMs to be aligned to an individual and trained on their knowledge, in line with these principles.

If you want to work on this, click here.


1. There are many things you could do with superintelligence that do not require solving the central planning problem. For example, you can take over the world without being able to optimize it well—consider how many governments can rule a territory while egregiously mismanaging it, or how many ways there are to fall short of the best future even if most things are fine.

2. At least to solve it optimally.

3. Also known as P(Stalin).

4. The work of James C. Scott is especially relevant. See Seeing Like a State, as summarized here, or The Art of Not Being Governed, as summarized here.

5. For another example of this, see this tweet by Séb Krier.

A guest post by Rudolf Laine.

© 2025 Luke Drago