My human advocacy for AI
Unless you’ve been living under a rock, you’ve noticed that we’re moving toward a world where people can get the most accurate results with the least effort and the lowest technical barrier. Intense competition among companies has given the average person access to state-of-the-art models in the easiest, most natural way possible, not only for coding and technology, but also in fields traditionally dominated by human expertise, such as mathematics and the sciences. AI is doing its bit for us.
But true AI is none of these things in isolation. It emerges from the complex interplay of data, models, human feedback, and the creative ways in which we choose to apply it. People are quick to joke about AGI and mock the singularity when a model can’t count the number of “r”s in strawberry (AI fails in genuinely embarrassing ways, and anyone saying otherwise is selling something; also, for this one, blame tokenization!). But when a model works through a proof nobody’s seen before (like here!), or surfaces a pattern in clinical data that took humans years to find (like here!), “just algorithms” doesn’t really cover what’s going on. Something comes out of the combination that wasn’t there in any of the pieces alone. I don’t think we have clean language for it yet. The more important shift we need to actively be a part of is recognizing that AI is far more than the sum of its parts. Too often we reduce AI to just data or raw compute (scaling laws go brrr), but I believe it’s more than that.
We need to move beyond thinking “AI can’t do X” and instead approach problems with the assumption that AI will be able to do X, then work to make that a reality. I keep seeing people around me adopt the myopic fear that AI is going to replace them, and in doing so they fail to see its real capabilities and what they could do with it. The question “will AI take my job?” is less useful than “what does my job look like if I actually use this?” One closes things down; the other opens them up. When more people embrace this perspective and stop seeing AI as a threat, meaningful progress will accelerate. We should stop approaching AI as if using it were a decision that needs defending. Too many people are getting afraid of AI rather than getting excited about it. We should proactively believe that more Move 37 moments are possible. (reference)
I think the overlords in the big frontier labs who are pushing the limits of what AI can do have an innate curiosity inside them, something I believe all of us should foster. The next step in human-AI interaction will be reducing the friction of accessing and adapting AI tools, making them a seamless part of our daily lives. Anyone who doesn’t keep up with this mindset will find it difficult to adapt to what’s coming. We should let go of the old constraints that make us doubt AI’s potential, and start from the assumption that AI will be able to do X rather than from skepticism about whether it can.
Lastly, this is not a post about blind optimism. I’ve been told by many that I’m way too bullish on AI, but this is about acknowledging that AI’s capabilities are evolving so rapidly that the old boundaries are often outdated. I’m not saying AI will do everything. I genuinely don’t know where the hard limits are, and I’m suspicious of anyone who claims to. What I am saying is that the people who assume capability and then stress-test that assumption will find out faster than the people who assume limitation and never try. One of those is a better epistemic position. When we approach AI as a partner in discovery and innovation, we open the door to much more meaningful progress.
A critical shift in mindset is needed.
Until next time sama releases a model.