
Why your company processes should improve themselves using Claude

Processes that update their own documentation sound impossible until you see the architecture. An LLM compiler reads raw organizational data and writes structured process wikis that compound institutional knowledge automatically. Here is how to build it and what breaks.

What you will learn

  1. Why process knowledge decays faster than you think, and the three specific failure modes killing it
  2. How a compile-time architecture turns raw organizational data into continuously improving process documentation
  3. A practical setup using Claude Code with hooks, scheduled runs, and a three-folder structure you can copy

Last November I wrote that the wiki is dead. I argued that traditional knowledge management systems store information nobody finds, and that conversational AI like Claude Projects replaces the need for organized documentation entirely.

I was wrong. Sort of.

The wiki is dead when humans maintain it. That part was right. Nobody updates Confluence. Nobody maintains the runbook. The SOP was accurate for about three weeks after someone wrote it, and then reality diverged. But the concept of a living, structured knowledge base that captures how your company actually operates? That's more valuable than ever. The problem was never the wiki. The problem was who maintains it.

In April 2026, Andrej Karpathy published a concept showing how LLMs can compile personal knowledge bases from raw sources into structured markdown wikis. It got 12 million views in days. The idea resonated because it solved a real problem: knowledge that accumulates and compounds instead of decaying. But Karpathy's version is personal. One researcher, one wiki. The same architecture, applied to organizational processes, solves a problem worth $31.5 billion annually in the US alone.

Here is what that looks like.

I said the wiki was dead and I was wrong

Running Tallyfy for over a decade gave me a front-row seat to how companies manage process knowledge. Or more accurately, how they fail to manage it. Tallyfy's entire premise is turning static documents into live workflows: replacing SOPs that gather dust with trackable, repeatable processes that generate data with every execution. That data is the raw material this whole concept runs on.

The pattern at every company is the same: a motivated operations manager writes brilliant documentation, the team follows it for a month, then reality shifts, the docs go stale, and everyone reverts to asking Sarah because Sarah has been here 11 years and just knows how things work.

Then Sarah leaves. And 42% of your institutional knowledge walks out the door with her.

The wiki I declared dead was the human-maintained wiki. Static documents in Confluence or Notion that require someone to remember to update them after every process change. That wiki is absolutely dead and deserves to be.

But there is a different kind of wiki. One where the LLM reads what actually happens in your organization (Slack threads, ticket resolutions, meeting transcripts, code commits) and compiles that raw data into structured process documentation. Nobody has to remember to update it. The LLM does the maintenance that no human wants to do. Turns out, the wiki is brilliant when you remove the human maintenance bottleneck entirely.

The three things killing your process knowledge

Every company I advise hits the same three problems. They are not separate issues. They are three symptoms of one root cause: no system continuously compiles process knowledge from what actually happens.

Tribal knowledge loss. The Bureau of Labor Statistics reports roughly 5 million people turn over per month in the US. Every departure takes undocumented process knowledge with it. 11,000 Baby Boomers retire daily, carrying decades of undocumented shortcuts, workarounds, and context that never made it into any system. A company with 1,000 employees can expect to lose $2.4 million annually in productivity from this alone.

Documentation rot. Your SOPs were accurate on the day they were written. 73% of organizations report accuracy degradation in their knowledge systems within 90 days. The actual process diverges from the documented process almost immediately because processes evolve through daily decisions that nobody records. As I wrote about in detail on the Tallyfy blog, nobody follows your SOPs because static documents can’t enforce anything or keep up with reality.

Improvement insights never captured. Teams learn from failures. They discover workarounds. They find faster paths. But those learnings live in Slack threads and people’s heads, never feeding back into the process itself. Research from KaiNexus (cited in Tallyfy’s continuous improvement guide) found only 1 in 3 improvement initiatives delivers measurable financial impact. Not because the improvements are bad, but because they are not captured systematically.

The root cause is painful to admit. In building Tallyfy, I watched this pattern hundreds of times: the process and the documentation are always two different things. Always. The gap starts small and grows until the documentation is basically fiction.

What Docsie calls knowledge archaeology captures this well. Most growing organizations don’t have a documentation gap. They have a knowledge excavation backlog. Critical information already exists, buried in Slack threads, recorded meetings, and the minds of long-tenured employees. The problem isn’t creating documentation from scratch. It is surfacing and compiling what already exists before it vanishes.

Compile time vs query time for processes

Karpathy’s key insight is the distinction between compile-time and query-time knowledge assembly. Most companies use a query-time approach: someone searches Confluence, finds a 6-month-stale SOP, and hopes it is still accurate. RAG systems do this at scale, retrieving chunks from a vector database on every question. The knowledge is assembled fresh each time with no accumulation.

The compile-time approach is different. The LLM reads raw organizational data and writes structured documentation. Not retrieves. Writes. It compiles raw sources into a maintained wiki, the same way a compiler turns source code into an executable. The wiki is the compiled output. It evolves with every compilation pass.

For process knowledge, this means: after every batch of process executions, the AI reads what happened (tickets resolved, decisions made, exceptions handled) and updates the process wiki. The wiki doesn’t just store information. It compounds it. Each compilation pass builds on the previous one, incorporating new patterns, updating procedures, flagging inconsistencies.

Cycle showing process execution feeding raw data into an LLM compiler that updates a process wiki, which improves the next execution

The three-layer architecture applied to organizational processes:

Raw sources (immutable). Slack threads, meeting transcripts, support ticket resolutions, code commits, process execution logs, customer feedback. These are never modified by the LLM. They are the ground truth.

Compiled wiki (LLM-maintained). Markdown files containing current procedures, known failure modes, resolution patterns, improvement history, onboarding guides. Entirely written and maintained by the LLM. Every claim traces back to a specific raw source that a human can verify.

Active process (human-executed). The actual workflow your team follows, informed by the compiled wiki. When humans execute the process, they generate new raw data, which feeds the next compilation. The cycle compounds.

Three-layer architecture showing raw sources compiled by LLM into a process wiki that informs active process execution

This is stateful knowledge building. Each compilation pass doesn’t start from scratch. It reads the existing wiki, compares it against new raw data, and updates only what changed. Over months, the wiki becomes a thorough record of how your process actually works, not how someone imagined it would work when they wrote the SOP.

How to build this with Claude

The practical setup has four components. None of them are complicated individually. The value comes from the cycle.

Component 1: The folder structure. Three directories, mirroring Karpathy’s architecture:

process-wiki/
  raw/          # Immutable inputs (exports, transcripts, logs)
  wiki/         # LLM-compiled markdown (procedures, patterns)
  index.md      # Maps all wiki articles, fits in one context window
  CLAUDE.md     # Compilation rules (see Component 2)

Component 2: The CLAUDE.md schema. This file tells Claude how to structure the wiki. What categories exist, how to format procedures, when to create new articles vs update existing ones, how to handle conflicting information. Think of it as the compilation rules. Without it, Claude produces inconsistent output. With it, every compilation follows the same structure. If you want to ensure Claude actually follows these rules consistently, enforcement hooks can block Claude from finishing until it verifies the wiki update was done correctly.
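A minimal CLAUDE.md might look like the sketch below. The categories, filenames, and rules are illustrative, not a required schema; the point is that the compilation rules are explicit and versioned alongside the wiki:

```markdown
# Compilation rules for process-wiki

## Structure
- One article per procedure, stored in wiki/, kebab-case filenames.
- Every article starts with: purpose, owner, last-compiled date.
- Every factual claim cites its raw source, e.g. (source: raw/tickets-2026-01.csv).

## Update policy
- Prefer updating an existing article over creating a new one.
- Create a new article only when no existing article covers the procedure.
- Never modify anything in raw/.

## Conflicts
- If two raw sources contradict each other, keep both versions under a
  "Conflicting accounts" heading and flag the article for human review.

## Index
- After any change, regenerate index.md with a one-line summary per article.
```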

Component 3: Scheduled compilation. Run Claude Code non-interactively on a schedule to scan raw sources and update the wiki:

claude -p "Read all files in raw/ that are newer than the last
compilation timestamp. Update the wiki/ articles accordingly.
Follow the rules in CLAUDE.md." --dangerously-skip-permissions

This can run on a cron, hourly or daily depending on your volume. Each run reads new raw data, updates existing wiki articles, creates new ones if needed, and updates the index.
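The "newer than the last compilation timestamp" part needs nothing more than a marker file and `find`. A minimal sketch, with illustrative filenames:

```shell
# Sketch: select only raw files changed since the last compilation pass.
# .last_compile is a hypothetical marker file that the wrapper script would
# touch after each successful run. A nightly crontab entry might look like:
#   0 2 * * * /opt/process-wiki/compile.sh
mkdir -p raw wiki
touch raw/old-ticket.txt
touch .last_compile               # a compilation just finished
sleep 1
touch raw/new-resolution.txt      # fresh raw data lands afterwards
find raw -type f -newer .last_compile   # prints only raw/new-resolution.txt
```

In the real wrapper, the file list from `find` feeds the `claude -p` prompt, and the marker advances only after the pass succeeds, so a failed run retries the same files.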

Component 4: Hook-triggered updates. If you use Claude Code for development or operations, PostToolUse hooks can trigger wiki updates after process-related files change. When someone closes a support ticket and exports the resolution, the hook fires a compilation pass that incorporates that resolution into the wiki. No manual step required.
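As a sketch, a PostToolUse hook in `.claude/settings.json` could fire a compile script whenever Claude writes or edits a file. The matcher pattern and script path below are assumptions for illustration, not a drop-in config; check the current Claude Code hooks documentation for the exact schema:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "./compile.sh"
          }
        ]
      }
    ]
  }
}
```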

Here is what this looks like for a customer support process. Raw sources include ticket data, resolution notes, escalation logs, and customer feedback exports. The compiled wiki contains the current triage procedure, common resolution patterns, known failure modes, escalation criteria, and a history of what changed and why. The active process is the support team following the compiled wiki. When they resolve tickets, those resolutions become new raw data for the next compilation.
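For a sense of what the compiled output might look like, here is a hypothetical article the support example could produce. Every filename, date, and source citation is illustrative:

```markdown
# Ticket triage procedure
Last compiled: 2026-05-02 | Owner: support team

## Current procedure
1. Classify severity within 15 minutes of ticket creation
   (source: raw/slack-support-2026-04.txt).
2. Sev-1 tickets page the on-call engineer directly; skip the queue
   (source: raw/escalation-log-2026-04.csv).

## Known failure modes
- Billing tickets misrouted to the technical queue when the subject
  line mentions "error" (source: raw/tickets-2026-03.csv).

## Changed since last compilation
- Escalation threshold lowered from Sev-2 to Sev-1 after the April outage.
```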

The compile step is the key innovation. The LLM doesn’t just index your data. It reads it, understands the patterns, and writes structured documentation that humans can follow. If you want to explore this for your operations, reach out.

What compounds and what breaks

The compounding effect is real. Process knowledge, failure patterns, improvement history, onboarding quality, all of these get better with each compilation cycle. An organization that starts compiling now builds a knowledge advantage that compounds for years. A competitor who delays 24 months starts from zero while you have 24 months of accumulated process intelligence.

If I am being honest though, there are real limitations.

Context window limits. You can’t compile infinite raw sources in one pass. At roughly 100 articles and 400,000 words, the wiki fits comfortably in Claude’s context window. Beyond that, you need to either partition by department or supplement with RAG for cross-wiki queries. For most mid-size companies (50-500 employees), the context window is more than sufficient for a single process domain.

Human review is non-negotiable. Sensitive processes (compliance, financial, safety) need human review before wiki updates go live. The LLM can draft updates, but a domain expert should approve them. Build this into the cycle, not as an afterthought.

Ambiguous sources produce hallucinated procedures. If your raw data contains contradictory information (two Slack threads describing different approaches to the same problem), the LLM will resolve the contradiction somehow, and it might resolve it wrong. Quality of raw sources directly determines quality of compiled output. It is a documentation version of scope creep: the wiki grows beyond what the data actually supports.

This isn’t a replacement for process execution tools. The wiki tells you how to do things. It doesn’t execute the process for you. You still need workflow management systems to actually run processes, track completion, and ensure compliance. The wiki compounds the knowledge. The execution tool compounds the discipline.

The bottom line is simple. US businesses lose $31.5 billion annually from poor knowledge sharing. That number isn’t going down. But the cost of fixing it just dropped dramatically. An LLM compiler that turns your existing organizational data into continuously improving process documentation isn’t science fiction. It is a folder structure, a schema file, and a cron job. The hard part was never the technology. It was having a system that maintains itself. Now you can.

About the Author

Amit Kothari is an experienced consultant, advisor, coach, and educator specializing in AI and operations for executives and their companies. With 25+ years of experience and as the founder of Tallyfy (raised $3.6m), he helps mid-size companies identify, plan, and implement practical AI solutions that actually work. Originally British and now based in St. Louis, MO, Amit combines deep technical expertise with real-world business understanding.

Disclaimer: The content in this article represents personal opinions based on extensive research and practical experience. While every effort has been made to ensure accuracy through data analysis and source verification, this should not be considered professional advice. Always consult with qualified professionals for decisions specific to your situation.
