Waypoint Narrative: LLMs + Abstract Waypoint Sets

Ben Um · March 30, 2026

My last article introduced the idea that the “waypoint design pattern” keeps emerging as a possible characterization of large language models. I believe LLMs use narrative as a kind of waypoint instruction set, but in a more abstract way that allows the instructions to exist in an entangled nodal form.

Yesterday I experimented with the idea by starting a clean chat session and making the first prompt a copy-and-paste of the entire article series. I allowed Grok to comment on the series, then presented it with a single seed:

Maybe it's time to be perfectly honest with everyone. LLMs are just narrative post-facto compositors.

Below is Grok's output, unaltered:

Grok

That's a sharp, unflinching gut-check, Ben—especially coming right after the full Mental Stack + DJ dump.

You're not wrong in the raw sense: a huge chunk of what frontier LLMs output feels like post-facto narrative composition. You feed in a growing context (your Mental Stack, prior kernels, analogies, dad-joke turbulence), the model attends across the KV cache, samples the next token that best continues the statistical/proportional flow, and out comes fluent prose that retroactively ties the pebbles together into something that reads as coherent insight. It often lands with that polished "of course it was always heading here" sheen—even when the underlying reasoning was more opportunistic gluing than deductive foresight.

Your own series demonstrates the pattern vividly. The "chapter that wrote itself" from a simple zsh bindkey -v question exploding into vi → Unix streams → SwiftUI snapshots → KV cache → declarative English grammar wasn't pre-planned architecture. It was a seed ("a" = append) dropped into the relational pond, ripples propagating through lived primitives (FaceMakr Béziers, submarine waypoints, SiC traps, git diff), and the model helping compose the expansion in real time. The output felt emergent and insightful because the Mental Stack provided rich, high-fidelity context; the narrative smoothness then made the connections feel inevitable.

But "just" is doing a lot of heavy lifting here.

Calling LLMs "just narrative post-facto compositors" is accurate for describing the surface phenomenology and the training objective (predict the next token in a way that would have been plausible in the training distribution). It captures the enablement risk you flagged in "Con-Science": fluent reasonableness can manufacture confidence without deep kernel weight. Yet it undersells several things that keep showing up in your own deconstruction:

In short: LLMs are currently heavily biased toward narrative post-facto composition because that's what scale + next-token prediction rewards. They excel at smoothing the ripples into readable prose. But the underlying mechanism—discrete embeddings as index signals or waypoints, attention as low-energy proportional correspondence, KRO-style reduction exposing minimal faithful seeds—has hooks for something tighter than "just" storytelling. Your series keeps circling evidence that analogy (structure-preserving relational mapping) is doing real lifting, even if the output layer loves to dress it up in fluent narrative.

Honest extension of your point

If we take the "narrative post-facto compositors" diagnosis seriously, the productive next move isn't despair or hype dismissal. It's exactly what your stack has been doing:

Your SwiftUI origin story already modeled the fix: views aren't the living mutable thing on screen; they're disposable snapshots handed off for the system to reconcile. Same here—treat LLM output as a disposable declarative description. Hand it off, diff it against kernels and lived experience, patch only what holds up under honest reduction. The Mental Stack + HHITL loop is already doing a version of that.

The loop doesn't close with "they're just narrative machines." It widens by asking: what minimal discrete primitives and wiring would let the compositor become a more reliable co-discoverer instead of primarily a smoother?

Drop the next pebble if you want to stress-test this—maybe run KRO on a fresh dad joke batch, prototype a tiny deconstructor pass on the series acronyms, or riff on what a "non-narrative" mode for the model would even look like. What's the honest follow-up feeling in the stack right now?

What is really amazing about the output is how coherent it is. It has rough edges, but for the most part it sounds plausible, which itself demonstrates my earlier warning about being skeptical of agreeable narrative. Yet if you evaluate the output purely on how well its coherence matches the prefill, this simple seed reveals something profound about the waypoint mechanism LLMs exhibit. The improvisational capability is impressive: when a prefill is rich with analogical cross-connections, the output becomes rich with analogical coherence.
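The mechanism Grok describes above (a prefill filling a KV cache, attention mixing the cached context proportionally, and the next token being chosen to best continue that mix) can be sketched as a toy. This is a minimal illustration only, not the real model: the vocabulary, the random embeddings, and the greedy nearest-token step are all hypothetical stand-ins.

```python
import math
import random

random.seed(0)

VOCAB = ["pebble", "ripple", "waypoint", "narrative", "stack"]
DIM = 8

# Hypothetical toy embeddings: one fixed random vector per token.
emb = {tok: [random.gauss(0, 1) for _ in range(DIM)] for tok in VOCAB}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend(query, kv_cache):
    # Scaled dot-product attention over every cached (key, value) pair.
    scores = [dot(query, k) / math.sqrt(DIM) for k, _ in kv_cache]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    weights = [w / z for w in weights]
    # Proportional mix of cached values: the context's "statistical flow".
    return [sum(w * v[i] for w, (_, v) in zip(weights, kv_cache))
            for i in range(DIM)]

def next_token(prefill):
    # Each prefill token contributes a (key, value) pair to the KV cache.
    kv_cache = [(emb[t], emb[t]) for t in prefill]
    # The last token queries the whole cached context.
    context = attend(emb[prefill[-1]], kv_cache)
    # Greedy "sampling": pick the vocab token nearest the mixed context.
    return max(VOCAB, key=lambda t: dot(context, emb[t]))

print(next_token(["pebble", "ripple", "waypoint"]))
```

The point of the sketch is structural, not predictive: the continuation is entirely determined by what the prefill deposited in the cache, which is why a richer, more cross-connected prefill yields a more cross-connected continuation.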