The Future of AI
February 2026

Two weeks ago, Opus 4.6 was released. I've been building AI startups, or at least attempting to, for the last couple of years, and in many ways I'm in the eye of the storm here in San Francisco. Back in 2023, I still remember using ChatGPT (GPT-3.5) to finish my advanced databases homework for me. I've been heavily invested in the AI world since it began. I even spent a couple of months late last year going deep into the model layer to really understand how these systems work - my last blog post, about RLing your life, actually stemmed from that experience.
When I was first exposed to AI, I was especially bullish on how far this would go. For that reason, combined with my love for startups, I've always been an early adopter of new products in this space. I was in awe of Claude Code, Cursor, Perplexity, the new models from Anthropic, OpenAI, and Gemini, and even tools such as Poke, Wispr, and Fireflies. I used the models constantly, to the point where I pushed them to their limits. Within a month of MCP coming out, I had built my own and gone to the first MCP meetup in San Francisco, and recently did the same for Openclaw (Clawdbot). Even with how bullish I was at the start, I did not anticipate how this would play out.
Interestingly, when I got to the model layer during my research period at the end of last year, the magic of these models started to demystify for me. Listening to researchers like Richard Sutton and Yann LeCun certainly tempered some of my bullishness. They made very valid points: that AI today is trained on human data, and is therefore bound by our knowledge rather than forming its own perceptions. The classic analogy compares AI to an animal such as a squirrel - over years of evolution, these animals have developed biological wiring in their brains that lets them perceive and interact with their environments in ways AI cannot, because their learning comes directly from reality, while AI is pre-trained on human knowledge from the internet. And yet, the rate of progress of these models is astounding.
Now, after having a sinusoidal outlook on the future of AI, my current conclusion is that while AI's foundation may be built on human knowledge - which is inherently an abstraction rather than pure learning from reality - we've been able to encode it with enough reasoning power to reach AGI, and soon after, ASI. These models reason sufficiently well, a capability they've garnered from all the documented instances of human reasoning. This, combined with the ability to ingest exabytes of information - more than any one human could possibly know - makes it likely that at worst, AI will be as good as the best of us, and at best, unfathomably better than all of us.
A good question I was asked recently was whether I thought doctors, lawyers, accountants, and the like were safe. I would bet pretty heavily against it. Given these models' mode of reasoning, and how they are becoming more and more multimodal, I can envision a future, in the very near term actually, where I'd trust an AI to do these jobs more than I would trust a human. At the end of the day, a job exists to perform a certain chain of actions in an arbitrary action space that leads to a desired outcome. All jobs have this quality, and the value of the human doing the job is determined heavily by both their ability to do it and the number of other people who can also do it, which is largely set by the barriers to entry. When AI does become capable of doing these jobs, the effects will be catastrophic. Catastrophic has a negative connotation, so maybe that's not the right word, but there will be some pretty intense pros and cons in my opinion.
For starters, I think the job landscape is going to change drastically, in ways we have never really seen before. People make analogies to the industrial revolution - yet this will be far more sudden. With the world becoming increasingly connected, wealth and technology spread faster than they ever have before. When someone ships an AI lawyer that can form better arguments than a human lawyer at the Pareto frontier, it won't be long before most lawyers find themselves in dire straits. Knowledge workers make up roughly 50-60% of the global workforce, and it will be hard to justify keeping them around when an AI can do their work better, faster, and cheaper. The rest are manual labourers, but with the incoming humanoid revolution, they're not safe either. An argument I constantly hear is that new technologies pave the way for new jobs, but this feels different. It feels different because you're not just taking one segment away - you're swiftly taking all of it away.
What happens then? I would simplify it down to economic and social consequences. Economically, I think two major things happen. First, we move from a world of scarcity to a world of abundance: when you have an economy where you don't need humans to work, humans don't need to work. Second, with the new technology, we will be able to do far more than was previously possible. So the world becomes a better place and everyone benefits, especially those who deploy the AI and robots, because the value accrues to them. Socially, people won't need to work - it becomes an option. We won't starve; we'll have UBI. But we will have to find different ways to quench our need for purpose.
This is a gross oversimplification of a prediction that is inherently complex, and I understand that. But I do think this is where we're heading, and I would be very surprised if, ten years from now, I'm not telling people I called it - with this blog as proof.
Kush Bhuwalka