Toner-Rodgers' paper provides empirical evidence relevant to your argument, demonstrating that AI enhances research productivity by automating idea-generation tasks but remains reliant on human expertise, specifically the top scientists' "taste" and judgment, to identify truly promising innovations. The findings reinforce your point that creative discernment may be humanity's enduring advantage over AI in innovation. Link to paper: https://arxiv.org/abs/2412.17866 Also, hi!
Will read this and reach out to the author. Fascinating paper.
Incredible essay, as always
Great piece!
The tens-of-trillions-of-dollars (at minimum) question about whether taste remains a competitive advantage for humans is, IMO, whether AI meta-learning/in-context learning becomes a substitute for learning in the weights at runtime, or whether the ability to learn in the weights at runtime, after training, becomes unlocked for AIs. (A toy sketch of those two adaptation modes follows the Gwern link below.)
If either of those does happen, especially within 5-20 years, then humans have basically no comparative advantage left. If it doesn't, I'd be much more sympathetic to claims of scaling hitting a wall.
More from Gwern here:
https://www.lesswrong.com/posts/deesrjitvXM4xYGZd/?commentId=hSkQG2N8rkKXosLEF
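To make the distinction in the comment above concrete, here is a minimal, purely illustrative sketch (not from the comment, the paper, or Gwern's post): a toy 1-D regression "model" that adapts to a new task either in-context, with its pretrained weight left frozen, or by updating its weight at runtime with test-time gradient steps. All names and numbers are made up for illustration.

```python
# Toy contrast between "in-context" adaptation (frozen weights, task solved
# inside the forward pass) and runtime weight updates (parameters change
# after training). 1-D linear regression, numpy only.

import numpy as np

rng = np.random.default_rng(0)

# Pretrained (frozen) parameter: the model "believes" y ≈ 1.0 * x.
w_pretrained = 1.0

# A new task seen only at inference: the true relationship is y = 3.0 * x.
x_context = rng.normal(size=8)
y_context = 3.0 * x_context
x_query = 2.0

# --- Mode 1: in-context adaptation (weights stay frozen) ---
# The task is solved on the fly from the context examples via a closed-form
# least-squares fit; w_pretrained itself is never modified.
w_in_context = float(np.dot(x_context, y_context) / np.dot(x_context, x_context))
pred_in_context = w_in_context * x_query

# --- Mode 2: runtime weight update (test-time gradient steps) ---
# The pretrained weight is updated by gradient descent on the context
# examples, so the adaptation persists in the parameters themselves.
w_runtime = w_pretrained
lr = 0.05
for _ in range(200):
    grad = 2.0 * np.mean((w_runtime * x_context - y_context) * x_context)
    w_runtime -= lr * grad
pred_runtime = w_runtime * x_query

print(f"in-context prediction:     {pred_in_context:.3f} (weights unchanged: {w_pretrained})")
print(f"runtime-update prediction: {pred_runtime:.3f} (weights now: {w_runtime:.3f})")
```

Both modes reach the same answer on this toy task; the commenter's point is about which of them, at scale, lets an AI accumulate the kind of experience-driven "taste" that currently only lives in human experts.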