Discussion about this post

Amit · Apr 9 (edited)

This seems too optimistic to me. A lot of the importance of tacit knowledge comes from the need to understand the current state of a system when making changes to it. For instance, it's hard to come in and add a feature to a software system without understanding exactly how it currently works. Trying to interfere without this knowledge risks causing unintended effects through unknown unknowns, and this seems true in central planning more generally.

But a superintelligent AI system wouldn't be restricted to modifying existing systems — it could conceivably replace them from scratch once it reaches a certain level of performance. Current AI systems might not be able to replicate Google's search algorithm without the tacit knowledge of its employees, but once AI becomes significantly better than humans at software engineering, it would be able to build its own search algorithm from scratch, making that human tacit knowledge disposable.

This reminds me of a strategy I once heard of for becoming "unfireable": refuse to transfer your knowledge to others. But this only works when there isn't a sufficiently better alternative that makes the loss of your knowledge acceptable.

I also think there's a risk of large amounts of tacit knowledge being lost through willing transactions, since anyone holding valuable tacit knowledge that others also possess faces a prisoner's dilemma: each holder is better off selling or transferring it before the others do, even if all of them would be better off withholding it.

David Wegmann

This won't work. Consider a much less dangerous technology: nuclear weapons. The correct way of dealing with this would have been for the US to bomb the Soviets in 1947, as von Neumann wanted, thereby preventing any non-US actor from ever building nukes themselves. We missed that chance. So now ever more actors like North Korea gain access to world-ending technology. If the United States had become the nuclear hegemon in 1947, it would have had much more opportunity to coerce others into becoming more like Americans, but P(doom by nukes) would be lower. If AI induces any type of collective action problem, or if Bostrom's vulnerable world hypothesis is true, then the only stable future must incorporate some kind of almost-perfect surveillance state, and if Scott's Meditations on Moloch are true, we must also rule out most kinds of competition between multiple actors. All other paths are doomed. L Rudolf L described this very well in the last paragraph of his post "A History of the Future, 2030-2040":

In this timeline, at least, the technology to control the AIs' goals arrived in time. But this alone does not let you control the future. A thousand people go to a thousand AIs and say: do like so. The AIs obey, and it is done, but then the world responds: doing this leads to this much power, and doing that leads to that much power. In the vast sea of interactions, there are some patterns that strengthen themselves over time, and others that wind themselves down. Repeat enough times, each time giving to each actor what they sowed last time, and what emerges is not the sum of human wills—even if it is bent by it—but the solution to the equation: what propagates fastest?

