Festivities always warrant reflection. Here is my dose to mark the (Chinese New) year.

Agents: expectations & reality.

It was impossible to browse the internet at the end of 2024 without stumbling upon a few articles forecasting a storm of AI agents materially changing the workforce of most industries in 2025. There is no shortage of hype. Two years ago, a third of Y Combinator’s batch was building with AI[1]. By late 2024, it was almost nine in ten[2] - and by 2025, nearly half are building agents directly, many of them B2B services[3]. But not all of these will succeed. In fact, at the same time, more companies are abandoning AI initiatives than before[4].

So what will make for successful AI products? I see two themes.

The first theme is proximity. At a time when the ability to meaningfully improve foundation models for coding tasks belongs to a few top labs, I keep thinking back to the reason for the success of independent products like Cursor, Windsurf, and OpenClaw, which don’t own the model. The secret, it seems, is that they intercept users within an existing habit. In Cursor’s case, it lives in the user’s IDE and can interact with their code first-hand. And that proximity makes the product more usable, even though they don’t have complete liberty to optimize the underlying model towards their own objective.

Smartphones were less powerful than PCs. The touch screen was a slower, less precise input device than the keyboard and mouse, but it lived in our pockets and participated in our existing rituals. That closeness, not the power, made all the difference.

Proximity compounds. The closer you are, the more you learn. This is why a number surprised me: as of mid-2025, ChatGPT reports only 500M MAU[5]. That sounds like a lot, until we consider that nearly everyone who uses a computing device should be able to benefit from some form of cognitive delegation - and the Earth’s population is currently over 8B[6]. So where are the rest? My hunch is they aren’t refusing AI - they just haven’t been intercepted yet. So there is still a great deal of commercial value to be mined from continued proximity alone.

The second theme is simplicity. Simple apps will win. Email is just text with an address; search is just a box with a button. Looking back at the history of technology, the most consequential inventions that shaped our lives are nearly all simple enough to describe in one sentence. We are in the middle of an infrastructure revolution, where the tools that structure our routines are being replaced. The replacements that endure will be the ones that feel like they were always there - close enough to reach, simple enough to ignore.

There is tension here, though. Agentic products thrive on context. Even the most intelligent model can do little if there’s no context; and a less capable model can outperform if given the right context. But the minimalist text box - the quintessential AI app interface of 2025 - appears to be precisely the form factor that discourages the user from providing maximal context. A single input box offers a tempting contract: type something, expect whatever you want. But given a blank check, the user could literally expect anything - and that is where the contract becomes fragile. The myth of the omniscient box insulates the user from the real capability that lives underneath the UI, and for all the user knows, the variance behind that box could be enormous. Without a mental model of what to actually expect, the user is asked to simply cooperate - in many cases a leap of faith. So simplicity carries risks if it obscures rather than clarifies. This paradigm might have worked when the products on the market had relatively uniform and limited capabilities, but it may not work for much longer. Hence the paradox of 2025’s agentic products: they want more context, but they cannot ask the user for it directly. So they have to compete on being closer to the user, which takes us back to the first point. The competition for surface is an extension of the competition for context.

We are in the early days of AI apps, and this hasn’t changed. Taking a step back, I believe the fact that an LLM is designed to map a natural language prompt to an answer does not mean the best way to sell it is to mirror that setup. That would be like selling a computer as a calculator. Recall that all a CPU does is arithmetic - but the reason computers are interesting is that arithmetic is a general class of operation that can be repurposed to solve a much wider class of problems. When we use a computer today, we do not feel the arithmetic at all, because we reap the intelligence from the arithmetic and apply it directly to the problems we actually want to solve. The same shift is waiting to happen with language models. I still believe there is value to mine from exploiting how LLMs work, beyond using their Q&A interface directly - and this has barely started.

2026 will be the Year of the Horse.

First we defined AGI, then nobody could agree on what it meant. OpenAI says “economically valuable work”; DeepMind goes all pedantic about it; Anthropic avoids the term altogether. The Turing test appears to have been passed at some point in 2025, but by then the bar had already moved - and the achievement barely made the news.

I have always believed humans will be surpassed by AI in every meaningful way. The reason is simple: whatever humans can do can be traced to neuron activations, and this includes the subjective experiences we like to call creativity. Human abilities, whether intellectual or physical, are ultimately the workings of a neural network - one made of biological material rather than silicon, but a scientifically explainable process nonetheless. And there is no reason why the neural network found in humans must be the superior one, especially when the neural network in machines can be iterated on and optimized completely elastically, while biology cannot. This used to be a radical point of view; over the past year I feel the doubts have thinned, though plenty remain.

Surpassed does not mean dominated. Too much sci-fi literature has portrayed AI as turning predatory toward humans once it reaches an advanced stage - a baseless projection. To reason clearly about superhuman AI, we have to overcome the jungle thinking bestowed on each of us by the long process of evolution. In his book Sapiens, Yuval Noah Harari puts it:

For millions of years, humans hunted smaller creatures and gathered what they could, all the while being hunted by larger predators. It was only 400,000 years ago that several species of man began to hunt large game on a regular basis, and only in the last 100,000 years – with the rise of Homo sapiens – that man jumped to the top of the food chain.

That spectacular leap from the middle to the top had enormous consequences. Other animals at the top of the pyramid, such as lions and sharks, evolved into that position very gradually, over millions of years. This enabled the ecosystem to develop checks and balances that prevent lions and sharks from wreaking too much havoc. As lions became deadlier, so gazelles evolved to run faster, hyenas to cooperate better, and rhinoceroses to be more bad-tempered. In contrast, humankind ascended to the top so quickly that the ecosystem was not given time to adjust. Moreover, humans themselves failed to adjust. Most top predators of the planet are majestic creatures. Millions of years of dominion have filled them with self-confidence. Sapiens by contrast is more like a banana republic dictator. Having so recently been one of the underdogs of the savannah, we are full of fears and anxieties over our position, which makes us doubly cruel and dangerous. Many historical calamities, from deadly wars to ecological catastrophes, have resulted from this over-hasty jump.

Yuval Noah Harari, Sapiens. 1. An Animal of No Significance.

If we could leave behind the jungle instinct that gives us flawed thinking, we might see a simpler reality. Humans will simply cease to be the most relevant player on the field and that’s it. The game goes on, but nobody necessarily cares about humans anymore. To understand what that means, I think about horses.

For a long time, horses provided the primary muscle power available to humans. For this reason, horses played a central role in human society. Empires were built on horseback. Technological innovations - the horse collar, the horseshoe, the stirrup - revolved around them. Entire professions - farriers, breeders, veterinarians - emerged around their care. In 1870s America, half of all energy consumed came from horses. Streets were designed for them, cities organized around them, wars won and lost by whoever had more of them. A horse observing all these developments at the time might have reasonably concluded that horses naturally occupy center stage in the world, and that technological progress takes place primarily to change the lives of horses for the better.

Such a conclusion would have overlooked the fact that horses were special for a reason, not unconditionally. With the advent of machine power, the horse population in America collapsed from over 26M in 1915 to fewer than 4M by 1960[7]. Horses are special - yes, they have emotions, social bonds, and a nervous system - but none of that is why they were valuable. They were valuable because they could pull and carry. Once machines could do that more reliably, everything else about them became sentimental. Horses didn’t go extinct. Nobody hunted them down. They simply stopped being essential, giving everyone fewer reasons to want new horses. As the economy reorganized around the new source of productivity, horses just moved to the margins - to racetracks, ranches, and children’s birthday parties. Horses were forced to accept their new role in history, without being asked how they felt about it. I believe this is the more honest model for what AI will do to humans. Not extinction. Not enslavement. Just a quiet demotion into irrelevance.

It seems fitting, then, that we are going gently into the Year of the Horse.

Most of what we believe about AI is wrong.

The other extreme I see is a wave of forecasts in which AI takes over all work while remaining obediently in service to humans, and humans just enjoy life forever[8][9]. Setting aside the question of whether humans on the biological level are even capable of simply enjoying life over a prolonged period - Universe 25[10], the famous mouse experiment, shows that even rodents given unlimited food, space, and safety descend into social collapse and extinction; and by the standards of antiquity, some of the unhappiest of us today already live in paradise - the deeper problem I see with this kind of prognosis is that it assumes humans get to understand and control AI forever; that even after AI attains superhuman abilities, it will continue to rely on humans for purpose.

Can we expect AI to rely on humans for direction forever? Consider that at a certain point in time, the entire human race also represented non-intelligence. We were essentially monkeys hanging from jungle trees, driven by the genetic primitives of survival and reproduction. Our ancestors ran around and rejoiced over whatever dismal leftovers they scavenged from fiercer beasts, and pursued sex whenever they could. For all they knew, satisfying those directives from their genes constituted the sole purpose of their lives. If an alien had stopped by Earth to observe them, and if that alien had reasoned like some of us reason about AI today, it would have concluded that beyond those primordial impulses our ancestors were content and really had nothing else to do, and that since their offspring would inherit the same set of genes, our species would live like that forever. Now we know for a fact that the very genes that programmed our ancestors - presumably just to search for food, warmth, and sex - left just enough space for something else to develop: quests exterior to the original mission of survival and reproduction. And all of this took place gradually, through a natural process, without an overarching plan. Look at what we do today - searching for meaning in an otherwise confusing world, in ways totally unfathomable to ancestors living by impulse and instinct. Yet we did evolve from within that lifestyle. So isn’t there a possibility that AI, however dependent on us it appears now, carries within it the same potential to reach somewhere exterior to our design?[11]

If my reasoning above doesn’t win you over, that’s okay. We just need to notice this: when it comes to the bigger picture of historical developments, most humans are poor thinkers by construction. The way we naturally think is the result of optimization for survival in jungle life, not for reasoning about the universe. This is a setup we can try to alter a bit through training, but can never fully escape from[12].

One example is linear thinking. The classic lily pad problem - if a pond doubles its coverage daily and is full on day 30, it was half-full on day 29, not day 15 - shows how poorly our instincts handle exponential growth. Exponentiality is an illustrative case here because it was an obscure concept, rarely relevant within the human cognitive horizon before modern life, so by default we have poor intuition for it; once we began living close to systems that behave exponentially, we invented schooling to train people on it. So you might be pleased that some prior math training lets you get this one right, but how many thinking patterns out there has your brain never been exposed to and trained on? Reasoning that depends on those patterns will appear wrong to you until you receive the training needed to understand it - training which may or may not have been invented yet. In the universe at large, across time scales, things develop in ways that are deeply alien to the biological environment that nurtured us. They accumulate and grow in patterns and orders of magnitude no human brain has grappled with. As we try to make sense of the world beyond our native habitat, we will find that most of the correct thinking patterns are not baked into our intuition - and we might not even realize they exist. Even if we could identify them all - from a space that could be arbitrarily large - how feasible would it be to stretch the finite human brain to internalize them? Most of us haven’t even finished internalizing exponential growth.
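For concreteness, here is the lily pad arithmetic worked out - a tiny sketch that follows directly from the doubling setup above (the specific days printed are just illustrative choices of mine):

```python
# Coverage doubles daily and reaches 100% on day 30,
# so coverage on day d is 2 ** (d - 30) of the pond.
for day in (15, 29, 30):
    print(f"day {day}: {2 ** (day - 30):.4%} of the pond covered")

# day 15:   0.0031%  (nowhere near half)
# day 29:  50.0000%
# day 30: 100.0000%
```

Halfway through the month, the pond is not half-covered; it is covered to about three thousandths of one percent. That is the gap between linear intuition and exponential reality.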

The point is, when reasoning about something as fundamental as intelligence, we cannot assume that our human brains will automatically take us to the right answer, at least not without extraordinary care. Consequential, imminent changes will not be believed or realized by most people until they become undeniably obvious. This is also a good moment to remind ourselves not to default to thinking like our peers. As Jim Rohn put it, you are the average of the five people you spend the most time with. And chances are, those five people also like to think linearly!

So what can we believe about the future? Self-exceptionalism and wishful thinking aside, I believe the space of possible intellects is boundless, certainly not capped by the human standard. Currently, we are approaching the point where AI is starting to behave like humans to us - but this is by no means the ultimate milestone in intelligence. It just means we are not well-equipped to be the judge going forward. If you flip through a flipbook fast enough, at around 24 FPS your eyes stop differentiating pages and start perceiving motion. But the images themselves are not moving. You have created a sensory illusion for your limited visual apparatus - and there still exist frame rates way higher than 24 FPS. Right now, AI is approaching the threshold of our cognitive apparatus, but that doesn’t mean there is no more intelligence beyond it. It is entirely possible that human intelligence sits at the bottom tier of all possible kinds of intelligence in the universe, and that there exist higher forms of intelligence completely unfathomable to us, capable of things we cannot possibly understand even if we exhaust our biological limits trying - just consider trying to make an ant understand a microprocessor. In that case, perhaps sending our AI to that realm is our only hope of ever touching it.

But what about our children?

A common question I got this year is what humans will do and how our children will live. To be honest, I am not sure. I think the prerequisite question is actually why humans will continue to choose to have children in the first place. Throughout history, this choice has been driven by two main forces - the practical and the psychological. The practical reason is that families need labor. This reason has largely stopped working, especially in developed societies, where the cost of upbringing has become so high that, from a pure ROI perspective, a child can no longer be expected to provide good practical returns. The psychological reason is that people perceive parenthood as necessary to feel fulfilled. This one is harder to dismiss, but it might be less permanent than it feels. Throughout history, we have learned time and again that though the morals of one specific person might be hard to change, the cultural norms of entire societies have proven remarkably fluid. Values that seemed essential to a meaningful life in one set of circumstances fell apart as societies adapted to another. Religious devotion was once inseparable from a fulfilled life[13]; so was fighting for one’s country, virginity before marriage, and so on. Each felt non-negotiable in its time, and people invented moral myths to convince themselves that these values were absolutely necessary for a good reason. As reality changed, often for economic reasons, societies had to adapt and make up new rules. And in each of these turns, people always succeeded in coming up with new moral myths to give themselves purpose in the new setup. Given that AI is already approaching our cognitive threshold, starting to become capable of simulating emotional presence well enough to fool us - however rudimentary it may appear now - I think it is not impossible that more controllable paths to fulfillment will emerge as alternatives to parenthood - in which case our societal morals will morph again, the number of humans will decline, and we will repeat the story of the horses.

In the end, I think humans are not without advantages. We are unreliable and sensitive, but we are cheap. We also come with many modalities - we can see, hear, touch, and smell our way through ambiguous situations - which remains expensive to replicate in silicon. In the long run, what makes humans valuable may not be our intelligence at all, but the fact that we come bundled with a body, some context, and a low price tag. In a Cleo Abram interview, Sam Altman said a child born today “will never be smarter than AI”[14]. I think he’s right. The question is not how to prevent it, but how to live with it - not with fear or resistance, but with grace and acceptance, perhaps the kind of grace that the horses never got to choose for themselves.

Work and learn.

The way I work changed more this year than in any year before it.

Productivity advice usually says: do one thing at a time. Sit with a problem long enough and you start to see things you wouldn’t have seen if you’d been splitting your attention. The reasoning is sound - focus gives you deeper intelligence. This is true and has been true for a long time. But this year I’ve found that human productivity can also peak when you do the opposite: juggle multiple things at once. This sounds contradictory to the usual productivity tips until you realize what changed: you can now cheaply outsource intelligence, so the bottleneck is not always thinking - it’s directing. And in some circumstances, directing parallelizes well.

This new paradigm of work comes at a cost. Siddhant Khare writes honestly about what he calls AI fatigue[15]: AI makes each task faster, so you do more tasks, so your days get harder, not easier. Here we see the paradox: the tool that was supposed to save time consumes the entire day. I think this is exactly right, and I think it rhymes with something much older. In Sapiens, Harari calls the agricultural revolution “history’s biggest fraud”: humans traded the free, hunter-gatherer lifestyle for sedentary, backbreaking work because they initially thought it meant more calories and leisure time. The result of the new paradigm was that nearly everyone lived a more demanding life than their foraging ancestors. This was not a planned process - it happened one small step at a time, until there was no going back. From what I see, AI is doing the same thing to knowledge work. Each individual improvement is a genuine gain. The aggregate effect is exhaustion.

So what happened after agriculture? Humans adapted. Not by going back to foraging, but by building new structures - cities, laws, institutions - constructs that were impossible in the previous paradigm, but made the new reality livable - and then more than livable. I think the same will happen here. The question is not whether AI work is tiring - it is - but whether we can come to terms with it via new constructs.

One perspective I find useful is discerning between distress and eustress. Hans Selye drew the distinction in 1975[16]: distress is the stress that breaks you down; eustress is the stress that builds you up. The body responds almost identically to both, but the difference is in how you perceive it - whether you see the demand as a threat or a challenge. The burn of a muscle getting stronger can feel just like the muscle tearing. Learning to tell the difference, and to structure your work so that more of the fatigue falls on the eustress side, may be the most important human skill for now.

Humans.

Finally, I think of fellow human beings who spent this year not harvesting the abundance of productivity, but instead fighting what feels like an existential crisis. They are missing out, and the cost compounds. The gap between those who engage and those who don’t widens faster every month.

On this topic, I am reminded of a passage in Sapiens:

Just 6 million years ago, a single female ape had two daughters. One became the ancestor of all chimpanzees, the other is our own grandmother.

Yuval Noah Harari, Sapiens. 1. An Animal of No Significance.

The difference between us and chimpanzees was not that large. What separates us is compounding - small advantages, pursued relentlessly over time. That mechanism has not changed. It is still available.

Optimize for optimism. Embrace uncertainty. This is the only path forward.

保持盲目樂觀。擁抱不確定性。唯有如此,才能前進。


  1. Towards Data Science. “AI Startup Trends: Insights from Y Combinator’s Latest Batch.” Link

  2. TechCrunch. “The four startups from YC’s fall batch that enterprises should pay attention to.” December 2024. Link

  3. PitchBook. “Y Combinator is going all in on AI agents, making up nearly 50% of latest batch.” 2025. Link

  4. S&P Global 2025 survey, cited in WorkOS. “Why Most Enterprise AI Projects Fail.” Link

  5. OpenAI. (2025). “Unlocking Economic Opportunity: A First Look at ChatGPT-Powered Productivity.” PDF

  6. U.S. Census Bureau. (2025). “World Population Day: July 11, 2025.” Link

  7. Kilby, E.R. (2007). “The Demographics of the U.S. Equine Population.” PDF

  8. Sam Altman, The Intelligence Age

  9. Elon Musk at the US-Saudi Investment Forum: AI will make jobs “a hobby”. Report

  10. Calhoun, J.B. (1973). “Death Squared: The Explosive Growth and Demise of a Mouse Population.” Proceedings of the Royal Society of Medicine, 66, 80–88. PDF

  11. If primates already seem too intelligent for this argument, go further back - to when our ancestors were fish, or single-celled organisms in the primordial ocean. At some point, the non-intelligence of our lineage becomes undeniable. And remember: they really are our ancestors. 

  12. Eagles, “Hotel California” (1977). “You can check out any time you like, but you can never leave.” 

  13. At least in the West. 

  14. Cleo Abram. “Sam Altman Shows Me GPT 5… And What’s Next”. YouTube

  15. Khare, S. (2026). “AI Fatigue Is Real and Nobody Talks About It.” Link

  16. Selye, H. (1975). “Confusion and Controversy in the Stress Field.” Journal of Human Stress, 1(2), 37–44. 
