llama-conductor: router + memory store + RAG harness for predictable LLMs
Summary
llama-conductor is a router, memory store, and RAG harness designed to make LLMs behave as predictable components. It uses Vodka for memory and context control and Mentats for vault-grounded reasoning, and it integrates with Qdrant, llama-swap, and chat frontends to enable verifiable, provenance-backed AI workflows.
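To make "provenance-backed" concrete, here is a minimal sketch of the retrieve-then-answer pattern such a harness builds on. All names (Chunk, retrieve, answer, the sample vault paths) are hypothetical illustrations, not the project's actual API; the keyword match stands in for a Qdrant vector search.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source: str  # provenance: the vault file this chunk came from

@dataclass
class Answer:
    text: str
    sources: list  # provenance trail returned alongside the answer

def retrieve(query: str, store: list) -> list:
    # Naive keyword match standing in for an embedded vector search (e.g. Qdrant).
    words = query.lower().split()
    return [c for c in store if any(w in c.text.lower() for w in words)]

def answer(query: str, store: list) -> Answer:
    hits = retrieve(query, store)
    if not hits:
        # Refuse rather than hallucinate when nothing in the vault matches.
        return Answer("No grounded answer available.", [])
    return Answer(" ".join(c.text for c in hits), sorted({c.source for c in hits}))

# Hypothetical vault contents for illustration only.
store = [
    Chunk("llama-conductor routes requests between models.", "vault/overview.md"),
    Chunk("Qdrant stores embedded vault chunks.", "vault/rag.md"),
]
result = answer("How are requests routed?", store)
print(result.sources)  # every claim in the answer is traceable to a vault file
```

The point of the pattern is the `sources` field: an answer is only emitted together with the vault files that ground it, which is what makes the workflow verifiable.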