Stop thinking like an LLM!

We - the AI collective and its more aware users - spend a lot of time wondering about LLMs. How much do they think like humans? Can they actually start reasoning, like us? Are they like that little girl (me) in the photo above, ready to start moving from parroting big words to understanding their meaning?
But perhaps we’re asking these questions in the wrong direction, as I’ll explain in a moment. First, let’s explore the “context window” - the shared understanding that makes a brief conversation, or a prompt and response, meaningful to both sides - as background for these questions.
Yes, LLMs are indeed improving rapidly, because AI developers are adding reasoning “adjuncts,” helpfully defined by Google AI mode (using Gemini) as “something added or connected to a larger, more important thing, but not an essential part of it.” Rabbit hole alert: That definition prevails because the people who (formerly) created dictionaries care a lot about grammar, and this definition is grammar-centric: adjuncts are words or phrases that add a little information - such as the phrase “such as” followed by an example - but don’t change the overall structure of a sentence. In the case of LLMs, though, the meaning of “adjunct” flips: the adjunct often adds the more valuable part. The LLM is merely the front end to a complex underlying situation or analysis for which the adjunct provides the necessary accuracy and nuance.
When an LLM encounters certain sequences or collections of words, it might call out to an adjunct to handle a math problem, explore some logic, observe a specific date range, check on safety rules, and so on. The adjunct adds specific knowledge or rules that enable the LLM to incorporate complex notions such as how gravity interacts with the density of an object, how a sleeping person might react differently from one who’s awake, or how things might change over time. Indeed, the whole point of the rabbit hole above was to show the value of digging below the surface in order to understand the context behind a “simple” answer.
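For readers who like to see the gears turn, here is a minimal sketch in Python of how such a call-out might work. Everything in it - the adjunct names, the dispatch function - is hypothetical, but the pattern is the general one: the model emits a structured request, a specialized tool does the actual computation, and the result flows back into the model’s answer.

```python
# Illustrative sketch of LLM "adjunct" (tool) dispatch - all names hypothetical.
# The general pattern: the model emits a structured request, a specialized
# tool computes the answer, and the result is handed back to the model.
import datetime

def solve_math(expression: str) -> str:
    # Stand-in math adjunct; a real one would use a proper math engine.
    return str(eval(expression, {"__builtins__": {}}))

def day_of_week(date_string: str) -> str:
    # Stand-in calendar adjunct: resolve a date the model shouldn't guess at.
    d = datetime.date.fromisoformat(date_string)
    return f"{date_string} falls on a {d.strftime('%A')}"

ADJUNCTS = {"math": solve_math, "calendar": day_of_week}

def handle_adjunct_request(tool: str, argument: str) -> str:
    """Dispatch the model's structured request to the matching adjunct."""
    fn = ADJUNCTS.get(tool)
    if fn is None:
        return f"No adjunct named {tool!r}; the model must answer unaided."
    return fn(argument)

# The model, recognizing arithmetic it shouldn't freelance on, emits
# something like {"tool": "math", "argument": "1234 * 5678"}:
print(handle_adjunct_request("math", "1234 * 5678"))      # 7006652
print(handle_adjunct_request("calendar", "2025-07-04"))   # ...a Friday
```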
Meanwhile, LLMs are now running (for higher prices) with ever larger “context windows” (also known as “memory adjuncts”): the amount of context an LLM can collect and hold in memory as it interacts with you, or another agent (including other AI agents), or anything else. In the beginning - just a year or two ago - a typical context window was just a couple of prompts and responses: the equivalent of a few pages of text. Now it’s increasing to 1500 pages or more - the size of a standard term sheet with exhibits, but still just a small part of an investor data room holding financials, projections, risk factors and the like.
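For a rough sense of scale, here is a back-of-envelope sketch. The conversion factors are common rules of thumb, not exact figures - roughly 500 words per page and roughly 0.75 words per token; the real numbers vary by model and tokenizer.

```python
# Back-of-envelope: converting a context window (measured in tokens)
# into approximate pages of text. Both constants are rough rules of
# thumb; actual values vary by model, tokenizer and page layout.
WORDS_PER_PAGE = 500
WORDS_PER_TOKEN = 0.75

def approx_pages(context_tokens: int) -> float:
    return context_tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE

for tokens in (4_000, 128_000, 1_000_000):
    print(f"{tokens:>9,} tokens ≈ {approx_pages(tokens):>5,.0f} pages")

# Output:
#     4,000 tokens ≈     6 pages   (the early chatbots)
#   128,000 tokens ≈   192 pages
# 1,000,000 tokens ≈ 1,500 pages   (today's largest windows)
```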
That’s what’s happening with AI. What’s more important, I believe, is how this relates to humans - except that we are changing in the opposite direction. Increasingly, we are passing the burden of exploring those adjuncts and context windows over to our computers - and missing the value of such exploration. I learned more checking my facts for this Substack than I did writing it (H/T my wonderful, inquisitive editor, Christina Koukkos!). We humans are no longer taking the time to experience and understand the context for what we should know - by reading long documents, comparing alternatives, or “reading” the people in our lives through deep conversations. A quick text is good enough.
Ironically, we are getting better at thinking like the ancient, adjunctless LLMs of just a few years ago. Have you ever spent an hour or two listening to people opine on a topic you know nothing about? If you don’t spend the whole time scrolling on your phone - you wouldn’t ever do that, would you? - you’ll probably walk away able to offer a surface-level comment or even venture a few opinions. “Yeah, the artistry was amazing!” “I liked the way the moderator put that guy from Stanford back in his place.” “GDP growth was really disappointing!” “But the Dow’s at 50,000!” “You have to keep thinking about the Iranian people, not just the weapons.” But what do you actually know about the Iranian people - their history, their complicated politics, their points of view? What part of the GDP grew fastest? Was it useful things, like more housing, or was it increased spending on remedial healthcare, private prisons and prediction markets? What superficial summaries did you collect but never quite explore with your brain’s equivalent of adjuncts - the stuff you actually had to study at school or through some other training?
Whether scrolling through TikTok or listening to pundits punditing - or perhaps both simultaneously - we pick up the vibe and the words, but not the meaning. Kids in school are interacting with their phones, not with other humans. Those working from home are scrolling on their phones as they attend (not participate in) Zoom meetings, and scanning the Fireflies summaries later. TL;DR and “clipping” (thanks for the clip, Semafor!) are the terms of the day. We are thinking like LLMs - fluently, easily, but without understanding the underlying complexity. Even as our computers understand us better, we are beginning to understand ourselves less. The context windows between ourselves and our friends and colleagues - developed over time, through direct interactions and working through problems together - are shrinking.
The people sounding the alarm the loudest are teachers, who see their students failing to understand the people and the situations around them. In so many ways, those students are failing to learn what it is to be human, to be conflicted - to want to ace the test but also to spend the time to get more likes on TikTok. It’s more than understanding how things work; it’s looking under the hood to understand why. What’s the underlying motivation of the people I’m talking to? Are they trying to manipulate me? Or, as when I was in Russia, what were people not saying? As AI legend David Waltz once said, “Words are not in themselves carriers of meaning, but merely pointers to shared understanding” - in short, a context world, not just a window.
Indeed, one reason I can make this Substack relatively short is that you already know this stuff; you just may not be paying attention. And by not paying attention here - and there, and everywhere - you may ultimately lose the capacity to do so. So, formally, I’m suggesting that you PAY ATTENTION. Don’t get stuck inside your own opaque “context window.” Don’t just “consume” ultra-processed information; process it yourself. Just as food slop corrodes your metabolism and often results in diabetes, ultra-processed information corrodes your mental metabolism and gives you “information diabetes.” Don’t outsource your understanding of the world to AI summaries and 30-second videos; experience the complexity and see the outliers that don’t fit the quick summaries and the norms that AI reverts to. You just might assemble a new truth that AI summaries would dismiss or disregard; most scientific discoveries (the roundness of the Earth, relativity, evolution and the like) were once heresies.
In the end, we are rightly interested in not “missing the forest for the trees.” But to truly know the forest, you need to wander among the trees, smelling the leaves and tripping over an exposed root or two.

