Backpressure in Know

filling in gaps in graphs

Know finally hit 1.2.0. I’ve been applying it more to a ‘real-world’ project of moderate complexity, and polishing it up along the way.

I recently added an AST-powered code map that delivers a heat map of a codebase. I had come across research suggesting that hot/warm/cold memory systems perform better than RAG or indexing. The fundamental idea is that things that have changed recently, or frequently, are more likely to be visited in the future. Using the AST and git, we get a map of not only which files have changed, but which methods have changed within those files. This gives the AI a place to start looking before falling back to Grep or RAG approaches.
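To make the recency/frequency idea concrete, here's a minimal sketch (not Know's actual algorithm): score each path by summing an exponentially decayed weight per change, where the `(path, timestamp)` pairs could come from something like `git log --format=%ct --name-only`, and an AST pass could refine paths down to individual methods.

```python
import math
from collections import defaultdict

def decayed_weight(age_days, half_life_days=30.0):
    # Exponential decay: a change from today weighs 1.0,
    # one from `half_life_days` ago weighs 0.5, and so on.
    return math.exp(-age_days * math.log(2) / half_life_days)

def heat_scores(touches, now):
    # touches: (path, unix_time) pairs for each time a file was changed.
    # Frequent touches stack up; recent touches weigh more.
    scores = defaultdict(float)
    for path, changed_at in touches:
        scores[path] += decayed_weight((now - changed_at) / 86400)
    # Hottest first.
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

The half-life is the only knob: shorten it and the map favors what you touched this week; lengthen it and stable-but-busy files stay warm longer.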

While doing this, I realized it could fill out the code graph as well. That gives the feature layer of the spec graph a backing of known code links. So as the user describes the system from a high level, those descriptions link to real code, and the AI fills in the necessary gaps: features, requirements, components, and so on.

The OTHER neat thing about having a deterministic graph next to a working one is that we can use the code graph as an indicator of completion. Not the same thing as done, but having code validate as connected to a feature without a single Grep or Read tool call is pretty damn token efficient. It's a computed step on the path to human review, especially when the AI's context is lost over time.
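A sketch of that completion check, using a hypothetical data shape (Know's internals surely differ): treat the spec graph's feature-to-symbol links as sets and compare them against symbols extracted deterministically from the AST.

```python
def feature_coverage(spec_links, code_symbols):
    # spec_links: {feature: set of code symbols the spec claims back it}
    # code_symbols: symbols actually present in the deterministic code graph
    # Returns, per feature, what's confirmed and what's missing --
    # no Grep or Read calls, just set arithmetic.
    return {
        feature: {
            "confirmed": symbols & code_symbols,
            "missing": symbols - code_symbols,
        }
        for feature, symbols in spec_links.items()
    }
```

A non-empty `missing` set isn't proof a feature is unfinished, but it's a zero-token signal of where to look before escalating to actual code reads.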

This led to a magical moment: the AI described the feature-to-requirement flow ridiculously well. Well enough that I could read its output and find the gaps between its mental model of the system and my own.

I like to think of AI as able to do boolean masks on information sets: union, subtraction, intersection, and so on. The closer you can get it to representing your mental model, the more clearly gaps are revealed. That means less chit-chat and getting right to the meat of knowledge representation: efficient, clear, concise prose.
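Those masks map directly onto set operations. A toy example with made-up feature names: one set for my mental model, one for what the AI described back.

```python
my_model = {"auth", "billing", "export"}   # what I know the system does
ai_model = {"auth", "billing", "import"}   # what the AI described back

shared = my_model & ai_model        # intersection: agreed-upon ground
ai_missed = my_model - ai_model     # subtraction: gaps in the AI's model
ai_invented = ai_model - my_model   # subtraction: things it made up, or I forgot
everything = my_model | ai_model    # union: the full surface to reconcile
```

The two subtractions are where the value is: `ai_missed` tells me what to spell out better, and `ai_invented` tells me what to challenge.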

I doubt I would ever hit this level of clarity with a pure markdown spec-file approach; the knowledge representation gets bogged down by language. Prose is great for human readers who aren't familiar with the system, but it has low information density for those of us who have internalized a system and are trying to work with an AI to formalize that understanding.