This seems too optimistic to me. A lot of the importance of tacit knowledge comes from the need to understand the current state of a system when making changes to it. For instance, it's hard to come in and add a feature to a software system without understanding exactly how it currently works. Trying to interfere without this knowledge risks causing unintended effects through unknown unknowns, and this seems true in central planning more generally.
But a superintelligent AI system wouldn't be restricted to modifying existing systems — it could conceivably replace them from scratch once it reaches a certain level of performance. Current AI systems might not be able to replicate Google's search algorithm without the tacit knowledge of its employees, but once AI becomes significantly better than humans at software engineering, it would be able to build its own search algorithm from scratch, making that human tacit knowledge disposable.
This reminds me of a strategy I once heard of for becoming "unfireable": refuse to transfer your knowledge to others. But this only works when there isn't a sufficiently better alternative that makes the loss of your knowledge acceptable.
I also think there's a risk of large amounts of tacit knowledge being lost through willing transactions, since there's a prisoner's dilemma dynamic for anyone holding valuable tacit knowledge that's known by more than just themselves.
IMO a major advantage comes from the awareness that AI itself can be used to improve collective action between human actors. Yes, there is a chance that AI could simply replace entire systems, but humans capable of collective action from improved positions also get better negotiation opportunities.
This won't work. Consider a much less dangerous technology: nuclear weapons. The correct way of dealing with them would have been for the US to bomb the Soviets in 1947, as von Neumann wanted, thereby preventing any non-US actor from ever building nukes themselves. We missed that chance. So now ever more actors like North Korea gain access to world-ending technology. If the United States had become the nuclear hegemon in 1947, it would have had much more opportunity to coerce others into becoming more like Americans, but P(Doom by Nukes) would be lower. If AI induces any type of collective action problem, or if Bostrom's vulnerable world hypothesis is true, then the only stable future must incorporate some kind of almost perfect surveillance state; and if Scott's Meditations on Moloch is right, we must also rule out most kinds of competition between multiple actors. All other paths are doomed. L Rudolf L described this very well in the last paragraph of his post "A History of the Future, 2030-2040":
> In this timeline, at least, the technology to control the AIs' goals arrived in time. But this alone does not let you control the future. A thousand people go to a thousand AIs and say: do like so. The AIs obey, and it is done, but then the world responds: doing this leads to this much power, and doing that leads to that much power. In the vast sea of interactions, there are some patterns that strengthen themselves over time, and others that wind themselves down. Repeat enough times, each time giving to each actor what they sowed last time, and what emerges is not the sum of human wills—even if it is bent by it—but the solution to the equation: what propagates fastest?
We really should drop our "mitigating risks via defensive acceleration" post soon…
That only works if the tech tree of the universe is defence dominant (even for the effects of externalities). If Elo 3000 in chess lets you nuke the board and there is no defence against that, Elo 10000 won't save you.
Also, we got incredibly lucky with nukes. If there had been a technology to make launches invisible, mutually assured destruction would not have worked anymore. The only strategy would have been to wait until you had enough stealth nukes to fully destroy the other parties and then launch them all at once.
Overall, I would say that a world where people build defensive acceleration tools is still better than one where people do not. In themselves, multi-agent cooperative systems give us a chance to solve collective action problems.
Sure, but I would assume that this just depends on the game-theoretic nature of the technology and how it interacts with everything else.
If there were a technology to remove 1 ppm of CO2 from the atmosphere for, say, a few million dollars, some country would just do it and fix climate change unilaterally. But there is no such technology, and the tech tree of the universe might not yield one. Maybe there is one for climate change. But the problem is that you need an effective and cheap defense technology for every other possible attack vector. If a single attack vector remains without such an effective defense tech, the universe is not defense dominant, and then any other progress doesn't really matter. A castle 90% encompassed by walls is almost as vulnerable as one with no walls at all. Only very close to 100% saves you.
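As a rough illustration of the "only very close to 100% saves you" point, here is a toy model; the coverage fractions and attempt count are made up, not taken from anything above:

```python
# Toy model: probability of surviving repeated attacks, given the fraction of
# attack vectors covered by an effective defense. All numbers are invented.

def p_survive(coverage: float, attempts: int) -> float:
    # Each attempt hits a defended vector (and fails) with probability `coverage`;
    # surviving means every attempt fails.
    return coverage ** attempts

for coverage in (0.0, 0.9, 0.99, 0.999, 1.0):
    print(f"coverage={coverage:5.3f}  P(survive 100 attempts)={p_survive(coverage, 100):.6f}")

# coverage=0.900 -> ~0.000027: 90% coverage is barely better than no walls at all.
# coverage=0.999 -> ~0.905:    only near-total coverage changes the outcome.
```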
It doesn't need to be cheap, imo. Castles were indeed only 90% effective; they had vulnerabilities via gates, etc. It was possible to cover those vulnerabilities with higher-cost measures. Just raising the cost of offense can make the world overall safer/better.
That depends on the exact cost scaling involved. Maybe castles are a bad example; cryptography might be a better one:
You can have algorithms that provably can only be cracked by trying every key, and the number of keys grows exponentially with key length (multiplying by the number of symbol possibilities for each position you add). If your attacker buys 256 times more compute, you can mitigate that by simply adding one extra byte to your key. If you add maybe ten more bytes, you are well beyond the number of particles in the universe; even the smartest superintelligence probably won't be able to crack that in billions of years. (Other attack vectors do exist, though: https://xkcd.com/538/)
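To put rough numbers on that scaling, here is a back-of-the-envelope sketch; the attacker speed of 10^18 keys per second is an invented, deliberately generous assumption:

```python
# Back-of-the-envelope: brute-forcing an ideal cipher whose only attack is
# trying every key. The attacker speed is an invented, generous assumption.

KEYS_PER_SECOND = 1e18
SECONDS_PER_YEAR = 3.15e7

def brute_force_years(key_bits: int) -> float:
    """Expected years to search half the key space at the assumed rate."""
    return (2 ** key_bits / 2) / KEYS_PER_SECOND / SECONDS_PER_YEAR

# 256x more attacker compute is exactly cancelled by one extra key byte (2**8 == 256).
print(2 ** 136 // 2 ** 128)        # 256

for bits in (128, 192, 256):
    print(f"{bits}-bit key: ~{brute_force_years(bits):.1e} years")
# 128-bit: ~5e12 years; 256-bit: ~2e51 years. Extra key bytes are nearly free
# for the defender and ruinously expensive for the attacker.
```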
Now consider an attack vector that works in reverse: your attacker only has to increase his effort by some fixed amount, but the defense costs on your side double. Now he can simply do that 10 times and your defense costs have grown a thousandfold; a few dozen steps more and they exceed the GDP of a galaxy-wide Dyson-sphere civilisation.
You won't be able to afford that. So even if you spend 50% of your GDP on defense against this single attack vector, your adversary only has to increase his effort by a small amount; the world is safer by a minuscule amount, but your region is much poorer.
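A minimal numerical sketch of that asymmetry; the starting costs and step sizes are arbitrary placeholders, not estimates for any real attack vector:

```python
# Sketch of an offense-favoured attack vector: each attacker escalation adds a
# fixed cost for the attacker but doubles what the defender must spend.
# Starting costs and step sizes are arbitrary placeholders.

attacker_cost = 1.0   # say, 1 unit = $1M of attacker effort
defender_cost = 1.0   # defense spending needed at the current threat level

for step in range(1, 11):
    attacker_cost += 1.0   # attacker effort grows linearly
    defender_cost *= 2.0   # required defense spending doubles
    print(f"step {step:2d}: attacker ~{attacker_cost:4.0f}, defender ~{defender_cost:6.0f}")

# After 10 steps the attacker has spent ~11 units while the defender needs
# ~1024; keep going a few dozen more steps and no economy, however large,
# can keep up.
```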
I am not convinced that everyone in the knowledge-work economy has tacit knowledge, or uses it to work productively.
Is this a reasonable summary of your position?
- Once sufficiently advanced AI is developed, an organization will be able to use it to control many people in a centralized top-down way (while possibly neglecting their welfare)
- AI tools that successfully exploit local/tacit knowledge would make individuals/small groups more economically valuable and powerful, making the centralization effort harder
Why do you think we might miss the boat on the second equilibrium (i.e., where people are empowered by personalized AI tools that are particularly good at unlocking local value)? Is it something like most people being too slow, not realizing there is latent demand/opportunity here, and meanwhile the big AI lab develops a "singleton"? Or is there some kind of societal/cultural factor? I'm asking because I agree that local/tacit knowledge is often essential and that pure centralized AI systems could lack this in some cases, but I don't think the default trajectory is headed in the direction of ignoring this (and I don't think we'll develop a superintelligence capable of the centralization you describe in a short enough timeframe to make underestimating the value of personalized tools a big risk).
I think a similar but more realistic worldview is one where you anticipate latent demand for AI integrations that exploit local knowledge and encourage people to start building products to meet that demand, for both societal benefit and profit. I wouldn't frame that as "fighting centralization", because the counterfactual world where some people don't build those products today is a slightly worse world where people build them later, not a world where we're all ruled by an AI singleton.
...I guess the crux here is superintelligence timelines?
Mostly, it's because automating a job away entirely is far more useful than enhancing humans, which means there is always competitive pressure to remove humans. Combine that with the fact that taste at least as good as humans' is pretty likely to be achieved by default, and the assumption that there will be a taste gap for humans to exploit is unlikely to hold:
https://gwern.net/tool-ai
That sounds nice and all, but when you say
> We plan to tune LLMs to be aligned to an individual and trained on their knowledge, in line with these principles.
I don't get even the faintest idea what that's supposed to mean.
I have been talking about the idea that diversification should be based on individuals' beliefs about reality, not on reality itself.
Tacit knowledge is essentially the beliefs an individual holds and the emotional charge they attribute to them.
Hard data is limited; agreed-upon knowledge is limited. But data observed through a perspective is novel and provides an infinite wealth of data.
The value of subjective reality is very underappreciated. To maximize humanity's chances of survival, we should maximize the range of possible outcomes. Thus the more subjective realities we can capture in the training data, the more thoroughly the AI will understand humanity.
Nice article! This is very in line with what we are advocating at the One Unity Project - www.oneunityproject.org. We are in our infancy - but we are getting set up as a nonprofit. We advocate the creation of a true Of the People, By the People, For the People system - combining the Best of Both Worlds - the best human inputs plus the best AI inputs - to address collective action problems, and our individual challenges, in a rapidly changing world. We cannot solve our collective problems using divided approaches in an interconnected world. Our most skillful path forward into our collective future is together...and AI can assist us in the process, if we learn to ask the right questions. We need to use AI as an improved means to an improved end. We must decide, collectively, where we want to go with all of this "progress." If we are foolish enough to harness the evolving power of AI to use against one another, this will end badly. AI is the greatest power we've ever created and, if we are wise, we will try to work together to create the best world possible - such that all ships rise. The stakes are high, and we won't get a second chance at this. And why wouldn't we at least try? It will be the grandest experiment in human history - it will be fun!
People /will/ use AI adversarially. This is unavoidable and the stated intent of many militaries (not to mention marketing, etc.). The idea is that this will allow a form of defensive acceleration that helps reduce risks and preserve human futures.
I have a few things to say on centralized economies and AGI automation:
1. It doesn't matter if a centralized AI economy fails to satisfy the needs of most humans, so long as you can change what they want essentially arbitrarily. This is a pretty big reason why I believe that, conditional on centralized AI systems coming into power and centralizing the economy, they might essentially stick around forever.
Another way to say it: you can make the problem of central planning easy by changing what consumers want so that it is easy to satisfy, which is a huge way to subvert normal capitalist logic.
2. A crux I hold here is that I basically don't buy that you need as much data as this section proposes, because of data-efficiency improvements combined with my view that large amounts of compute can substitute for much of the need to go out into the world.
To be clear, I do agree directionally with the claim that AIs will need to be out in the world, and that this increases complexity, but I don't particularly think you need as much data as stated here:
> It’s much harder because it requires an incredibly large amount of data about the world, and the problem is computationally difficult.2 There’s a huge amount of information about the world that’s missing from the internet or other AI training corpuses. To get this information, and feed it to the superintelligent AI system, you’ll need it to capture an enormous amount of data about all of society, and allocate every resource accordingly. You’ll need a camera on every corner, a microphone on every person, and a spy in every cafe. You might also need a probe in every brain.
3. Another crux: I believe that tacit knowledge, while real and important, is quite likely to be matched at superhuman efficiency by AIs anyway (potentially as singletons), because much of the taste/knowledge that humans have and AIs don't comes down to AIs' current lack of long-horizon meta-learning/memory/state. I think that capability is likely to be achieved by default by AI companies, which means the taste niche will likely be captured by AIs within a decade, IMO.
4. Due to Amdahl's law, I strongly expect systems that replace humans entirely to outcompete systems that enhance humans, and unfortunately, I expect resistance to automating away humans to be pretty irrelevant until they are already automated away:
https://gwern.net/tool-ai
5. Finally, this comment points out that you can replace entire societies' or companies' systems at a certain level of competence, and once you are able to replace them entirely, there is no real way to make yourself irreplaceable anymore:
https://open.substack.com/pub/lukedrago/p/the-use-of-knowledge-in-agi-society?r=6xkar&utm_campaign=comment-list-share-cta&utm_medium=web&comments=true&commentId=107326732