Knowledge bases that write themselves.
Drop in raw notes, transcripts, PDFs, half-finished specs. Knowladex compiles them into an interconnected wiki — cross-linked articles, backlinks, search, the works — without you writing a single page. Your AI tool does the compiling through a built-in MCP server. Share with your team, or flip it public and send a read-only link to anyone.
Watch a raw pile of notes get compiled into a cross-linked, searchable wiki in real time — articles, backlinks, tags, and an A-Z index, all generated by an AI tool talking to Knowladex over MCP.
Frontier LLMs are good enough at structured writing to do the curation work for you. Read fifty notes. Find the themes. Write coherent articles. Link them together. Cite the sources. Flag what’s stale. Knowladex gives the model a structured place to put its output and an MCP interface to work through. You bring the chaos. The wiki appears.
Knowladex ships as a real MCP server — not as a wrapper, not as an afterthought. Point Claude Code, Cursor, or any MCP-speaking agent at your workspace and it can ingest documents, compile the wiki, search articles, and update content in plain language. The protocol is the SDK. There’s nothing to install.
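For example, wiring Claude Code to a workspace takes a few lines of MCP config. The server name, URL, and auth header below are placeholders; the exact transport depends on your deployment:

```json
{
  "mcpServers": {
    "knowladex": {
      "type": "http",
      "url": "https://app.knowladex.example/mcp",
      "headers": { "Authorization": "Bearer <your-scoped-api-token>" }
    }
  }
}
```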
Every project can be shared two ways. Invite your team with per-org user accounts, roles (viewer / editor / admin), and scoped API tokens — each person gets exactly the access they need. Or flip a project to public and hand out a clean read-only URL with built-in search, a tag cloud, and an Obsidian vault export. Raw docs stay private either way.
Raw game design notes, playtesting transcripts, and mechanic sketches become a fully cross-linked wiki — ready for the team to browse, search, and keep building on.
Five steps, fully automated once you’ve set it up. The schema is per-project free-form markdown — instructions written for an LLM, not a parser.
Raw documents land in the project’s store. Markdown, plain text, or PDF — up to 22 MB per file with native two-pass figure extraction that catches embedded bitmaps and vector-drawn diagrams other tools miss. Each doc gets an ID, title, tags, and optional source URL. Raw docs are never modified after ingest.
Your AI tool calls the compile MCP tool. Knowladex returns the raw docs (new or changed since the last run), the current wiki state, and the project’s schema — conventions the LLM should follow. The AI tool reads everything in its own context window.
The AI tool calls write_article for each article it creates or updates. Knowladex stores them, rebuilds the backlink graph, updates the search index, and regenerates the dashboard. Wikilinks resolve automatically.
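Under the hood, resolving backlinks is simple graph bookkeeping. A minimal illustrative sketch of rebuilding a backlink map from wikilinks (not Knowladex's actual code):

```typescript
// Rebuild a backlink map: for each article, find every [[wikilink]]
// (including aliased links like [[Target|label]]) and record the
// linking article under the target's entry.
function buildBacklinks(articles: Record<string, string>): Record<string, string[]> {
  const backlinks: Record<string, string[]> = {};
  for (const [title, body] of Object.entries(articles)) {
    for (const match of body.matchAll(/\[\[([^\]|]+)(?:\|[^\]]*)?\]\]/g)) {
      const target = match[1].trim();
      (backlinks[target] ??= []).push(title);
    }
  }
  return backlinks;
}
```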
Browse the wiki in the dashboard. Search across articles. Or wire your AI tools into the MCP server so they can query the wiki directly — structured knowledge instead of a folder of raw text.
The lint_wiki tool finds broken links, orphaned articles, and stale content. Your AI tool can fix them in the same session. The wiki stays healthy without manual gardening.
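An illustrative sketch of the kind of checks such a linter performs, finding broken link targets and orphaned articles (assumed logic, not the real implementation):

```typescript
// Lint a wiki: a link is broken if its target doesn't exist;
// an article is orphaned if nothing links to it.
function lintWiki(articles: Record<string, string>): { broken: string[]; orphans: string[] } {
  const titles = new Set(Object.keys(articles));
  const linkedTo = new Set<string>();
  const broken: string[] = [];
  for (const [title, body] of Object.entries(articles)) {
    for (const m of body.matchAll(/\[\[([^\]|]+)(?:\|[^\]]*)?\]\]/g)) {
      const target = m[1].trim();
      linkedTo.add(target);
      if (!titles.has(target)) broken.push(`${title} -> ${target}`);
    }
  }
  const orphans = [...titles].filter(t => !linkedTo.has(t));
  return { broken, orphans };
}
```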
Cross-linked articles with backlinks, hover previews, tags, and a full A-Z index. Serif headings, clean body, monospace for code. Every article is a markdown file with frontmatter — readable as-is, portable, no proprietary format.
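A hypothetical article file; the frontmatter field names here are illustrative, not Knowladex's exact schema:

```markdown
---
title: Enemy AI
tags: [mechanics, combat]
sources: [doc-0042, doc-0107]
---
Playtesters read enemy intent from animation wind-ups; see [[Combat Pacing]]
for how this interacts with turn length.
```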
Ships as a real MCP server. Point Claude Code, Cursor, or any MCP-speaking agent at your workspace and it can search, read, write, ingest, and compile in plain language. No copy-paste, no context-stuffing, no glue code.
Two ways in. Drag a folder of files into the dashboard, or tell your AI tool to ingest them over MCP. Same backend either way. PDF figure extraction handles files up to 22 MB. Text and figures land in the right articles automatically.
Flip a project to public and send a clean read-only link. Search, tag cloud, and article index included. No signup needed for readers. Raw docs stay private. Toggle off any time.
Download the entire compiled wiki as a zip — every article as a markdown file with frontmatter, an .obsidian/app.json so wikilinks resolve, and an index. Open it in Obsidian, edit it locally, take it anywhere. Your data is never trapped.
Run isolated workspaces for every client. Per-org users, per-org API tokens, per-org roles. Onboarding a new client takes minutes, not days. Built for consultants and agencies managing knowledge bases for many companies at once.
A pre-deploy audit caught stored XSS, API token leakage, password hash leakage, Host-header injection, and a TOCTOU race. All were fixed before the first production deploy.
Cross-org access is structurally impossible — an org-scoped token cannot reach another org’s data even if a tool has a bug elsewhere.
timingSafeEqual defeats timing-based guessing. Same primitive for password verification.
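In Node, the pattern looks like this. Hashing both inputs first gives timingSafeEqual the equal-length buffers it requires (a sketch, not Knowladex's exact code):

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

// Compare a presented token to a stored one in constant time.
// Hashing both sides first guarantees equal-length buffers
// (timingSafeEqual throws on mismatched lengths) and avoids
// leaking the secret's length.
function tokensMatch(presented: string, stored: string): boolean {
  const a = createHash("sha256").update(presented).digest();
  const b = createHash("sha256").update(stored).digest();
  return timingSafeEqual(a, b);
}
```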
Failed logins burn a dummy scrypt verify so response times don’t leak whether the email exists.
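The pattern, sketched with Node's scryptSync (key length, salt handling, and the user shape are assumptions):

```typescript
import { scryptSync, timingSafeEqual, randomBytes } from "node:crypto";

const DUMMY_SALT = randomBytes(16);

// When the email is unknown, burn the same scrypt work anyway
// so the response takes roughly the same time either way and
// doesn't reveal whether the account exists.
function verifyLogin(
  user: { salt: Buffer; hash: Buffer } | null,
  password: string
): boolean {
  if (!user) {
    scryptSync(password, DUMMY_SALT, 64); // dummy verify, result discarded
    return false;
  }
  const candidate = scryptSync(password, user.salt, 64);
  return timingSafeEqual(candidate, user.hash);
}
```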
Temp-file + rename so a crash mid-write never leaves a half-written file.
5 failures / 15 minutes, then 15-minute lockout — layered on per-IP rate limits to defend against distributed attacks the IP limiter can’t see.
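A minimal sketch of the per-account window (assumed bookkeeping, not the production code):

```typescript
// 5 failed logins within 15 minutes locks the account for 15 minutes.
const WINDOW_MS = 15 * 60 * 1000;
const MAX_FAILURES = 5;
const failures = new Map<string, number[]>();
const lockedUntil = new Map<string, number>();

function recordFailure(account: string, now = Date.now()): void {
  // Keep only failures inside the sliding window, then add this one.
  const recent = (failures.get(account) ?? []).filter(t => now - t < WINDOW_MS);
  recent.push(now);
  failures.set(account, recent);
  if (recent.length >= MAX_FAILURES) lockedUntil.set(account, now + WINDOW_MS);
}

function isLocked(account: string, now = Date.now()): boolean {
  return (lockedUntil.get(account) ?? 0) > now;
}
```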
Same-origin scripts and styles only. No inline scripts, no framing, HSTS preload.
Minted once, shown once, then forgotten. Never in browser history, server access logs, or referrer headers.
Admin exports escape leading =, +, -, @ so a malicious waitlist signup can’t phish the operator through Excel.
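The rule is essentially a one-liner: prefix any cell Excel would parse as a formula with a quote (simplified; production code may also handle tabs and CSV quoting):

```typescript
// Neutralize spreadsheet formula injection: a leading =, +, - or @
// makes Excel evaluate the cell, so prefix it with a single quote.
function escapeCsvCell(value: string): string {
  return /^[=+\-@]/.test(value) ? `'${value}` : value;
}
```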
The whole thing is npm run build && node dist/index.js and a Dockerfile.
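A containerized deploy following that command can be sketched in a few lines (the Node version and port are assumptions, not the shipped Dockerfile):

```dockerfile
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
EXPOSE 3000
CMD ["node", "dist/index.js"]
```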
Run knowledge bases for many clients. Each client gets their own private workspace, team logins, and API tokens. The operator sits above all of them. Onboarding a new client takes minutes, not days.
Internal docs queryable by your own agents. Same wiki, same scope, same permissions as your humans. Documentation as infrastructure, not an afterthought.
Drowning in scratch files and transcripts. One folder of chaos in, a structured wiki out. It compounds every time you add to it.
Production-ready. Closed private beta. Multi-tenant from day one. The product we wished existed — built first for ourselves.