Festivities always warrant reflection. Here is my dose to mark the (Chinese New) year.

Agents.

It was impossible to browse the internet at the end of 2024 without stumbling upon a few articles forecasting a storm of AI agents materially changing the workforce of most industries in 2025. There is no shortage of hype. Two years ago, a third of Y Combinator’s batch was building with AI[1]. By late 2024, it was almost nine in ten[2] - and by 2025, nearly half are building agents directly, many of them B2B services[3]. But not all of these will succeed. In fact, at the same time, more companies are abandoning AI initiatives than before[4].

So what makes for successful AI products? I see two themes.

The first theme is proximity. At a time when the ability to meaningfully improve foundation models for coding tasks belongs to a few top labs, I keep thinking back to the reason for the success of independent products like Cursor, Windsurf, and OpenClaw, which don’t own the model. The secret, it seems, is that they intercept users within an existing habit. In Cursor’s case, it lives in the user’s IDE and can interact with their code first-hand. And that proximity makes the product more usable even if they don’t have the complete liberty to optimize the underlying model towards their own objective.

Smartphones were less powerful than PCs. The touch screen was a slower, less precise input device than the keyboard and mouse, but it came to live in our pocket and participate in our existing rituals. That closeness, not the power, accounted for much of the difference.

Proximity compounds. The closer you are, the more you learn. This is why a number surprised me: as of mid-2025, ChatGPT reports only 500M MAU[5]. That sounds like a lot, until we consider that nearly everyone who uses a computing device should be able to benefit from some form of cognitive delegation. And currently the Earth’s population is over 8B[6]. So where are the rest? My hunch is they aren’t refusing AI - they just haven’t been intercepted yet. So, there is still a great deal of commercial value to be mined from continued proximity alone.

The second theme is simplicity. Simple apps will win. Email is just text with an address; search is just a box with a button. Looking back at the history of technology, the most consequential inventions that shaped our lives are nearly all simple enough to describe in one sentence. We are in the middle of an infrastructure revolution, where the tools that structure our routines are being replaced. The replacements that endure will be the ones that feel like they were always there - close enough to reach, simple enough to ignore.

There is tension here, though. Agentic products thrive on context. Even the most intelligent model can do little if there’s no context; and a less capable model can outperform if given the right context. But the minimalist text box - the quintessential AI app interface of 2025 - appears to be precisely the form factor that discourages the user from providing maximal context. A single input box offers a tempting contract: type something, expect whatever you want. But given a blank check, the user could literally expect anything - and that is where the contract becomes fragile. The myth of the omniscient box quarantines the user from the real capability that lives underneath the UI, and for all the user knows, the variance behind that box could be enormous. Without a mental model of what to actually expect, the user is asked to simply cooperate - in many cases a leap of faith. So, simplicity carries risks if it obscures rather than clarifies. This paradigm might have worked when products on the market had relatively uniform and limited capabilities, but it may not work for much longer. Thus I see the paradox of 2025’s agentic products: they want more context, but they cannot ask the user for it directly. So, they have to compete on being closer to the user, which takes us back to the first point. The competition for surface is an extension of the competition for context.

We are in the early days of AI apps, and this hasn’t changed. Taking a step back, I believe the fact that an LLM is designed to map a natural language prompt to an answer does not mean the best way to sell it is to mirror that setup. That would be like selling a computer as a calculator. Recall that all a CPU does is arithmetic - but the reason computers are interesting is that arithmetic is a general class of operation that can be repurposed to solve a much wider class of problems. When we use a computer today, we do not feel the arithmetic at all, because we reap the intelligence from the arithmetic and apply it directly to the problems we actually want to solve. The same shift is waiting to happen with language models. I still believe there is value to mine from exploiting how LLMs work, beyond using their Q&A interface directly - and this has barely started.

2026 will be the Year of the Horse.

First we defined AGI; then nobody could agree on what it meant. OpenAI says “economically valuable work”[7]; DeepMind goes all pedantic about it; Anthropic avoids the term altogether. The Turing test appears to have been passed at some point in 2025[8], but by then the bar had already moved - and the achievement barely made the news.

I have always believed humans will be surpassed by AI in every meaningful way. The reason is simple: whatever humans can do can be traced to neuron activations, and this includes the subjective experiences we like to call creativity. Human abilities, whether intellectual or physical, are ultimately the workings of a neural network - one made of biological material rather than silicon, but a scientifically explainable process nonetheless. And there is no reason why the neural network found in humans must be the superior one, especially when the neural network in machines can be iterated on and optimized completely elastically, while biology cannot. This used to be a radical point of view; over the past year I feel the doubts have thinned, though plenty remain.

Surpassed does not mean dominated. Too much sci-fi literature has portrayed AI as turning predatory toward humans once it reaches an advanced stage - a baseless projection. To reason clearly about superhuman AI, we have to overcome the jungle thinking instilled in each of us by the long process of evolution. This passage offers a perspective:

For millions of years, humans hunted smaller creatures and gathered what they could, all the while being hunted by larger predators. It was only 400,000 years ago that several species of man began to hunt large game on a regular basis, and only in the last 100,000 years – with the rise of Homo sapiens – that man jumped to the top of the food chain.

That spectacular leap from the middle to the top had enormous consequences. Other animals at the top of the pyramid, such as lions and sharks, evolved into that position very gradually, over millions of years. This enabled the ecosystem to develop checks and balances that prevent lions and sharks from wreaking too much havoc. As lions became deadlier, so gazelles evolved to run faster, hyenas to cooperate better, and rhinoceroses to be more bad-tempered. In contrast, humankind ascended to the top so quickly that the ecosystem was not given time to adjust. Moreover, humans themselves failed to adjust. Most top predators of the planet are majestic creatures. Millions of years of dominion have filled them with self-confidence. Sapiens by contrast is more like a banana republic dictator. Having so recently been one of the underdogs of the savannah, we are full of fears and anxieties over our position, which makes us doubly cruel and dangerous. Many historical calamities, from deadly wars to ecological catastrophes, have resulted from this over-hasty jump.

Yuval Noah Harari, Sapiens.

If we could leave behind the jungle instinct that gives us flawed thinking, we might see a simpler reality. Humans will simply cease to be the most relevant player on the field and that’s it. The game goes on, but nobody necessarily cares about humans anymore. To understand what that means, I think about horses.

For a long time, horses provided the primary muscle power available to humans. For this reason, horses played a central role in human society. Empires were built on horseback. Technological innovations - the horse collar, the horseshoe, the stirrup - revolved around their upkeep and use. Entire professions - farriers, breeders, veterinarians - emerged around their care. In 1870s America, half of all energy consumed came from horses. Streets were designed for them, cities organized around them, wars won and lost by whoever had more of them. A horse observing all these developments at the time might have reasonably concluded that horses naturally occupy the center stage of the world, and that technological progress takes place primarily to change the life of horses for the better.

Such a conclusion would have overlooked the fact that horses were special for a reason, not for no reason. With the advent of machine power, the horse population in America collapsed from over 26M in 1915 to fewer than 4M by 1960[9]. Horses are special - yes, they have emotions, social bonds, and a nervous system - but none of that is why they were valuable. They were valuable because they could pull and carry. Once machines could do that more reliably, everything else about them became sentimental. Horses didn’t go extinct. Nobody hunted them down. They simply stopped being essential, giving everyone fewer reasons to want new horses. As the economy reorganized around the new source of productivity, horses just moved to the margins - to racetracks, ranches, and children’s birthday parties. Horses were forced to accept their new role in history - without being asked how they felt about it. I believe this is the more honest model for what AI will do to humans[10]. Not extinction or enslavement - just a quiet demotion into irrelevance.

It seems fitting, then, that we are going gently into the Year of the Horse.

Most of what we believe about AI is wrong.

The other extreme I see is a wave of forecasts in which AI takes over all work while remaining obediently in service to humans, and humans just enjoy life forever[11][12]. Setting aside the question of whether humans on the biological level are even capable of simply enjoying life over a prolonged time period[13][14], the deeper problem I see with this kind of prognosis is that it assumes humans get to understand and control AI forever; that even after AI attains superhuman abilities, it will continue to rely on humans for purpose.

Can we expect AI to rely on humans for directives forever? Consider that at a certain point in time, the entire human race also represented non-intelligence. We were essentially monkeys hanging from jungle trees, driven by the genetic primitives of survival and reproduction. Our ancestors ran around and rejoiced over whatever dismal leftovers they scavenged from fiercer beasts, and pursued sex whenever they could. For all they knew, satisfying those directives from their genes constituted the sole purpose of their lives. If an alien had stopped by Earth to observe them, and if that alien had reasoned the way some of us reason about AI today, it would have concluded that besides those primordial impulses, our ancestors had nothing to do; and that since their offspring inherit the same set of genes, our species would live like that forever. Now we know for a fact that the very genes that programmed our ancestors presumably just to search for food, warmth, and sex left just enough space for something else to develop - quests exterior to the original mission of survival and reproduction. And all of this took place gradually, in an unsupervised fashion, without a planning process. Look at what we do today - searching for meaning in an otherwise confusing world, in ways totally unfathomable to ancestors living by impulse and instinct. Yet we did evolve from within that lifestyle. So isn’t there a possibility that AI, however dependent on us it appears now, carries within it the same potential to reach somewhere exterior to our design?[15]

If my reasoning above doesn’t win you over, that’s okay. We just need to notice this: when it comes to the bigger picture of historical developments, most humans are poor thinkers by definition. The way we naturally think is the result of optimization for survival in the jungle, not for reasoning about the universe. This is a setup we can try to alter a bit through training, but can never fully escape[16].

One example is linear thinking. The classic lily pad problem: if a pond doubles its coverage daily and is full on day 30, when was it half-full? It was half-full on day 29, not day 15. This example shows how poorly our instincts handle the thinking pattern of exponential growth. Exponentiality is an illustrative case here because it was an obscure concept, rarely relevant in the human cognitive horizon before modern life, so by default we have poor intuition for it; once we began living close to systems that behave exponentially, we invented training in schools to get people used to it. So, you might be pleased that some prior math training lets you get this one right, but how many thinking patterns out there has your brain never been exposed to? Reasoning that depends on those patterns will not appear credible to you until you receive the necessary training to understand them - training which might or might not have been invented yet. In the universe at large, across the time scale, things develop in ways that are deeply alien to the biological environment that nurtured us. They accumulate and grow in patterns and orders of magnitude no human brain has grappled with. As we try to make sense of the world beyond our native habitat, we will find that most of the correct thinking patterns are not baked into our intuition - and we might not even realize they exist. Even if we could identify them all - from a space that could be arbitrarily large - how feasible would it be to stretch the finite human brain to internalize them? Most of us haven’t even finished internalizing exponential growth.
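The doubling arithmetic is easy to verify mechanically. Here is a throwaway sketch; the coverage function and day numbering are just for this illustration:

```python
# Lily pad pond: coverage doubles every day, and the pond is full on day 30.
def coverage(day, full_day=30):
    """Fraction of the pond covered on a given day (1.0 = fully covered)."""
    return 2.0 ** (day - full_day)

assert coverage(30) == 1.0    # full on day 30
assert coverage(29) == 0.5    # half-full the day before, not on day 15
assert coverage(15) < 0.0001  # on day 15 the pond is ~0.003% covered
```

Intuition expects the halfway point mid-month; the exponent puts it one day from the end, and the pond on day 15 is a barely visible speck.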

The point is, when reasoning about something as recent and fundamental as intelligence, we are not too different from an uneducated kid dealing with an exponential system. We don’t have a lot of data describing how the new systems behave, and we cannot assume our human brains are automatically equipped with the right intuitions to take us to the right answer. Consequential, imminent changes will not be believed or realized by most people until they become undeniably obvious. This is also a good moment to remind ourselves not to default to thinking like our peers. As Jim Rohn put it, you are the average of the five people you spend the most time with. And chances are, those five people also like to think linearly!

So what can we believe about the future? Self-exceptionalism and wishful thinking aside, I believe the space of possible intellects is boundless, certainly not capped by the human standard. Currently, we are approaching the point where AI starts to behave like humans to us - but this is by no means the ultimate milestone in intelligence. It just means we are not well-equipped to be the judge going forward. If you flip through a cartoon book fast enough - at around 24 FPS - your eyes stop differentiating pages and start perceiving motion. But the images themselves are not moving. You have created a sensory illusion for your limited visual apparatus - yet there exist frame rates far higher than 24 FPS. Right now, AI is approaching the threshold of our cognitive apparatus, but that doesn’t mean there is no more intelligence beyond it. It is entirely possible that human intelligence sits at the bottom tier of all possible kinds of intelligence in the universe, and that there exist higher forms of intelligence completely unfathomable to us, capable of things we cannot possibly understand even if we exhaust our biological limits trying - just consider making an ant understand anything. In that case, perhaps sending our AI to that realm is our only hope of ever touching it.

But what about our children?

A common question I got this year is what humans will do and how our children will live. To be honest, I am not sure. I think the prerequisite question is actually why humans will continue to choose to have children in the first place. Throughout history, this choice has been driven by two main forces - the practical and the psychological.

The practical reason is that families need labor. This reason has mostly stopped working, especially in developed societies, where the cost of upbringing has become so high that, from a pure ROI perspective, a child can no longer be expected to provide good practical returns.

The psychological reason is that people perceive parenthood as necessary for a fulfilled life. This one is harder to dismiss, but it might be less permanent than it feels. Throughout history, values that seemed essential to a meaningful life in one set of circumstances fell apart as societies adapted to another. Religious devotion was once inseparable from a fulfilled life[17]; so was fighting for one’s country, virginity before marriage, and so on. Each felt non-negotiable in its time, and people invented moral myths to convince themselves that these values were absolutely necessary. As the underlying economic driving forces changed, people didn’t seem to have any trouble coming up with new moral frameworks to replace the old ones either[18]. We learn that though the morals of one person may be hard to change, the cultural norms of entire societies have proven remarkably fluid. There is no reason to assume ours will be the exception.

Given that AI is already starting to approach our cognitive threshold, on its way to becoming capable of simulating emotional presence convincingly enough to fool us - however rudimentary it may appear now - I think it is not impossible that more controllable paths to fulfillment will emerge as alternatives to parenthood. This could mean people finding other ways to feel fulfilled, ways that are less messy and more accessible than raising an actual human child. This behavioral shift is not without precedent for mammals like us. Modern domestic dogs, for instance, satisfy their innate predatory drives by chasing artificial toys. A wolf ancestor of theirs would not have accepted this synthetic experience and would have preferred real game, but domestic dogs thrive in their own reality. In the same way, the people making future parenting choices will be born and raised in economic realities that do not yet exist, and will hold moral intuitions shaped by those realities, not ours. Today we find life conceptually “complete” without the religious devotion or premarital virginity demanded by our ancestors on a spiritual level. It is not impossible that a future generation may view real parenthood as similarly archaic. If this comes true, our societal morals will morph again, the number of humans will decline, and we will repeat the story of the horses.

As to what we will do, I think at least for now, what makes humans valuable is not our intelligence, but the fact that we come bundled with a body, some context, and a low price tag. Yes, we are unreliable and sensitive, but we are cheap. We come with a lot of modalities, which remain expensive to replicate in silicon, at least for now. In a Cleo Abram interview, Sam Altman said a child born today “will never be smarter than AI”[19]. I think he’s right. The question is not how to prevent it, but how to live with it - not with fear or resistance, but with grace and acceptance, perhaps the kind of grace that the horses never got to choose for themselves.

Work and learn.

The way I work changed more this year than in any year before it.

Productivity advice usually says: do one thing at a time. Sit with a problem long enough and you start to see things you wouldn’t have seen if you’d been splitting your attention. The reasoning is sound - focus gives you deeper intelligence. This is true and has been true for a long time. But this year I’ve found that human productivity can also peak doing the opposite: juggling multiple things at once. This sounds contradictory until you realize what changed: you can now cheaply outsource intelligence, so the bottleneck is not always thinking - it’s directing. And in some circumstances, directing parallelizes well.

This new paradigm of work comes at a cost[20]. As AI makes each task faster, humans become greedier. What used to fill a day now fills an hour. So we are inclined to take on more tasks, which in the end makes our days harder, not easier. Here is the paradox of productivity increases: the tool that was supposed to replace labor gives us more labor.

As with many things in this letter, this pattern is not new either. In 1987, Robert Solow famously observed that the computer age was visible everywhere but in the productivity statistics[21]. The telephone was supposed to eliminate the need for in-person meetings; email was supposed to eliminate the need for phone calls. Each created more communication, not less.

But there is an example even more fundamental and timeless. In Sapiens, Harari calls the agricultural revolution “history’s biggest fraud”. Unaware of the long-term consequences, humans eventually abandoned the free, mobile hunter-gatherer lifestyle entirely and adopted sedentary, backbreaking agricultural labor - not in one dramatic decision, but through incremental steps that each seemed to promise more calories and more leisure at the time. The result was that nearly everyone lived a more demanding and less satisfying life than their foraging ancestors. It was not a planned process. It happened one small step at a time, until there was no going back. From what I see, AI is starting to do the same thing to knowledge work. Each individual improvement is a genuine gain. The aggregate effect is exhaustion.

So what happened after agriculture? Humans adapted. Not by going back to foraging, but by building new structures - cities, laws, institutions - constructs that would have been impossible in the previous paradigm, but that made the new one livable - and then more than livable. I think the same will happen here. The question is not whether AI work is tiring (it is), but whether we can come to terms with it through new constructs.

So what will our new constructs be? I don’t know. It is difficult to predict the specifics, but while we wait for them to emerge, we can at least try to have an easier time surviving the transition. One perspective on work I find useful is discerning between distress and eustress[22]. Distress is the stress that breaks you down; eustress is the stress that builds you up. The body responds almost identically to both; the difference is in how you perceive it - whether you see the demand as a threat or a challenge. The burn of a muscle getting stronger could feel like the muscle tearing. Learning to tell the difference, and to structure your work so that more of the fatigue falls on the eustress side, may be an important human skill for now.

Humans.

Finally, I think of fellow human beings who spent this year not harvesting the abundance of productivity, but instead fighting what feels like an existential crisis for some higher ideal. They are missing out, and the cost compounds. The gap between those who engage and those who don’t widens faster every month.

On this topic, I am reminded of this passage:

Just 6 million years ago, a single female ape had two daughters. One became the ancestor of all chimpanzees, the other is our own grandmother.

Yuval Noah Harari, Sapiens.

The difference between us and chimpanzees was not that large. What separates us is compounding - small advantages, pursued relentlessly over time. That mechanism has not changed - it is still available.

Optimize for optimism. Embrace uncertainty. This is the only path forward.

保持盲目樂觀。擁抱不確定性。唯有如此,才能前進。


  1. Towards Data Science. “AI Startup Trends: Insights from Y Combinator’s Latest Batch.” Link

  2. TechCrunch. “The four startups from YC’s fall batch that enterprises should pay attention to.” December 2024. Link

  3. PitchBook. “Y Combinator is going all in on AI agents, making up nearly 50% of latest batch.” 2025. Link

  4. S&P Global 2025 survey, cited in WorkOS. “Why Most Enterprise AI Projects Fail.” Link

  5. OpenAI. (2025). “Unlocking Economic Opportunity: A First Look at ChatGPT-Powered Productivity.” PDF

  6. U.S. Census Bureau. (2025). “World Population Day: July 11, 2025.” Link

  7. OpenAI. (2018). “OpenAI Charter.” https://openai.com/charter/ 

  8. Jones, C.R. & Bergen, B.K. (2025). “Large Language Models Pass the Turing Test.” arXiv:2503.23674. 

  9. Kilby, E.R. (2007). “The Demographics of the U.S. Equine Population.” PDF

  10. Horses were retired because humans found something better to use. But who retires humans? The answer is: other humans. We use each other: for labor, for companionship, for emotional needs. When there arise better ways to meet those needs, we will need other people less, and we will be needed by other people less. The effect will be the same. 

  11. Sam Altman, The Intelligence Age

  12. Elon Musk at the US-Saudi Investment Forum: AI will make jobs “a hobby”. Report

  13. Universe 25, the famous mouse experiment, showed that even rodents given unlimited food, space, and safety descended into social collapse and extinction. Calhoun, J.B. (1973). “Death Squared: The Explosive Growth and Demise of a Mouse Population.” Proceedings of the Royal Society of Medicine, 66, 80–88. PDF

  14. By the standards of antiquity, some of us today already live in paradise. We have abundant food, climate control, medicine, entertainment on demand, and lifespans that would have seemed miraculous to anyone born before us. Yet depression and anxiety are at record levels, especially among people in developed societies living in exactly those circumstances. This suggests that sustained human contentment might just be a biological paradox - that suffering is not a problem to be solved by abundance but something closer to a default state. If so, it is unclear what AI productivity can do for us when the bottleneck was never productivity to begin with. 

  15. If primates already seem too intelligent for this argument, go further back - to when our ancestors were fish, or single-celled organisms in the primordial ocean, or just lifeless atoms bouncing into each other. At some point, the non-intelligence of our lineage becomes undeniable. And remember: they really are our ancestors. 

  16. Eagles, “Hotel California” (1977). “You can check out any time you like, but you can never leave.” 

  17. At least in the West. 

  18. Religious devotion, military service, and premarital virginity - however transcendent they appeared, were each adopted for practical economic reasons, and ditched for practical economic reasons. If you have doubts, try asking your AI about it. 

  19. Cleo Abram. “Sam Altman Shows Me GPT 5… And What’s Next”. YouTube

  20. Khare, S. (2026). “AI Fatigue Is Real and Nobody Talks About It.” Link

  21. Solow, R. (1987). Review of Manufacturing Matters by S. Cohen and J. Zysman. New York Times Book Review, July 12, 1987. 

  22. Selye, H. (1975). “Confusion and Controversy in the Stress Field.” Journal of Human Stress, 1(2), 37–44. 
