MCP-Native · Closed Private Beta

Knowladex

Knowledge bases that write themselves.

Drop in raw notes, transcripts, PDFs, half-finished specs. Knowladex compiles them into an interconnected wiki — cross-linked articles, backlinks, search, the works — without you writing a single page. Your AI tool does the compiling through a built-in MCP server. Share with your team, or flip it public and send a read-only link to anyone.

See it in action

One folder of chaos in, a structured wiki out.

Watch a raw pile of notes get compiled into a cross-linked, searchable wiki in real time — articles, backlinks, tags, and an A-Z index, all generated by an AI tool talking to Knowladex over MCP.

knowladex — intro
Three things at the core

Knowledge organization as a build step, not a chore.

01

It writes itself.

Frontier LLMs are good enough at structured writing to do the curation work for you. Read fifty notes. Find the themes. Write coherent articles. Link them together. Cite the sources. Flag what’s stale. Knowladex gives the model a structured place to put its output and an MCP interface to work through. You bring the chaos. The wiki appears.

02

It speaks MCP.

Knowladex ships as a real MCP server — not as a wrapper, not as an afterthought. Point Claude Code, Cursor, or any MCP-speaking agent at your workspace and it can ingest documents, compile the wiki, search articles, and update content in plain language. The protocol is the SDK. There’s nothing to install.

03

It shares.

Every project shares two ways. Invite your team with per-org user accounts, roles (viewer / editor / admin), and scoped API tokens — each person gets exactly the access they need. Or flip a project to public and hand out a clean read-only URL with built-in search, a tag cloud, and an Obsidian vault export. Raw docs stay private either way.

Real project example

Watch a game design wiki compile itself.

Raw game design notes, playtesting transcripts, and mechanic sketches become a fully cross-linked wiki — ready for the team to browse, search, and keep building on.

knowladex — game-wiki
How it works

The compile loop.

Five steps, fully automated once set up. Each project's schema is free-form markdown: instructions written for an LLM, not a parser.

01

Ingest

Raw documents land in the project’s store. Markdown, plain text, or PDF — up to 22 MB per file with native two-pass figure extraction that catches embedded bitmaps and vector-drawn diagrams other tools miss. Each doc gets an ID, title, tags, and optional source URL. Raw docs are never modified after ingest.

02

Compile

Your AI tool calls the compile MCP tool. Knowladex returns the raw docs (new or changed since the last run), the current wiki state, and the project’s schema — conventions the LLM should follow. The AI tool reads everything in its own context window.
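
The incremental selection might look like this (illustrative TypeScript only; `RawDoc` and `docsToCompile` are hypothetical names, not Knowladex internals):

```typescript
// Hypothetical sketch: pick the raw docs that are new or changed since
// the last compile run. This is the subset the compile tool would hand
// back alongside the current wiki state and the project's schema.
interface RawDoc {
  id: string;
  title: string;
  updatedAt: number; // ms since epoch
}

function docsToCompile(docs: RawDoc[], lastCompileAt: number): RawDoc[] {
  return docs.filter((d) => d.updatedAt > lastCompileAt);
}
```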

03

Write

The AI tool calls write_article for each article it creates or updates. Knowladex stores them, rebuilds the backlink graph, updates the search index, and regenerates the dashboard. Wikilinks resolve automatically.
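
The backlink rebuild is easy to picture: scan every article body for `[[wikilinks]]` and invert the graph. A minimal sketch (hypothetical names, not the actual implementation):

```typescript
// Pull [[Target]] and [[Target|alias]] style links out of markdown.
function extractLinks(markdown: string): string[] {
  const matches = markdown.matchAll(/\[\[([^\]|#]+)(?:[|#][^\]]*)?\]\]/g);
  return [...matches].map((m) => m[1].trim());
}

// Invert the link graph: for each article, which articles link TO it.
function buildBacklinks(articles: Record<string, string>): Record<string, string[]> {
  const back: Record<string, Set<string>> = {};
  for (const [id, body] of Object.entries(articles)) {
    for (const target of extractLinks(body)) {
      (back[target] ??= new Set()).add(id);
    }
  }
  return Object.fromEntries(
    Object.entries(back).map(([target, ids]) => [target, [...ids].sort()])
  );
}
```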

04

Query

Browse the wiki in the dashboard. Search across articles. Or wire your AI tools into the MCP server so they can query the wiki directly — structured knowledge instead of a folder of raw text.

05

Lint

The lint_wiki tool finds broken links, orphaned articles, and stale content. Your AI tool can fix them in the same session. The wiki stays healthy without manual gardening.
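
Two of those checks fall straight out of the link graph. A sketch of the shape (assumed, not the real `lint_wiki` code; staleness detection is omitted):

```typescript
interface LintReport {
  broken: Array<[from: string, to: string]>; // links to missing articles
  orphans: string[]; // articles nothing links to
}

// Given each article's outgoing wikilink targets, report broken links
// and orphaned articles.
function lintWiki(outlinks: Record<string, string[]>): LintReport {
  const titles = new Set(Object.keys(outlinks));
  const linked = new Set<string>();
  const broken: Array<[string, string]> = [];
  for (const [id, targets] of Object.entries(outlinks)) {
    for (const t of targets) {
      if (titles.has(t)) linked.add(t);
      else broken.push([id, t]);
    }
  }
  return { broken, orphans: [...titles].filter((t) => !linked.has(t)).sort() };
}
```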

What you get

A real wiki — not a chat log.

Editorial typography

Cross-linked articles with backlinks, hover previews, tags, and a full A-Z index. Serif headings, clean body, monospace for code. Every article is a markdown file with frontmatter — readable as-is, portable, no proprietary format.

MCP-native, day one

Ships as a real MCP server. Point Claude Code, Cursor, or any MCP-speaking agent at your workspace and it can search, read, write, ingest, and compile in plain language. No copy-paste, no context-stuffing, no glue code.

Drag and drop, or talk to it

Two ways in. Drag a folder of files into the dashboard, or tell your AI tool to ingest them over MCP. Same backend either way. PDF figure extraction up to 22 MB per file. Text and figures land in the right articles automatically.

Public share URLs

Flip a project to public and send a clean read-only link. Search, tag cloud, and article index included. No signup needed for readers. Raw docs stay private. Toggle off any time.

Obsidian export

Download the entire compiled wiki as a zip — every article as a markdown file with frontmatter, an .obsidian/app.json so wikilinks resolve, and an index. Open it in Obsidian, edit it locally, take it anywhere. Your data is never trapped.

Multi-tenant from day one

Run isolated workspaces for every client. Per-org users, per-org API tokens, per-org roles. Onboarding a new client takes minutes, not days. Built for consultants and agencies managing knowledge bases for many companies at once.

Security posture

First-class, not bolt-on.

A pre-deploy audit surfaced stored XSS, API token leakage, password hash leakage, Host-header injection, and a TOCTOU race, all fixed before the first production deploy.

Tool-level authorization on every MCP call

Cross-org access is structurally impossible — an org-scoped token cannot reach another org’s data even if a tool has a bug elsewhere.
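
"Structurally impossible" here means the org id comes from the token, never from the request. An illustrative sketch of that pattern (hypothetical names, not Knowladex's storage layer):

```typescript
interface ApiToken {
  orgId: string;
  scopes: string[];
}

// Every storage path is derived from the org baked into the token, so
// a token for one org cannot even name another org's files. The
// articleId is flattened to a slug so it can't traverse directories.
function articlePath(token: ApiToken, articleId: string): string {
  const slug = articleId.replace(/[^a-z0-9-]/gi, "-");
  return `orgs/${token.orgId}/wiki/${slug}.md`;
}
```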

Constant-time token comparison

timingSafeEqual defeats timing-based guessing. Same primitive for password verification.
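
The standard shape of that check, using Node's built-in crypto (a sketch of the pattern, not the exact code): hash both sides first so the buffers are fixed-length, which `timingSafeEqual` requires, and so the comparison leaks nothing about the stored token's length.

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

// Constant-time token comparison: hashing both inputs to fixed-length
// digests satisfies timingSafeEqual's equal-length requirement.
function tokensMatch(presented: string, stored: string): boolean {
  const a = createHash("sha256").update(presented).digest();
  const b = createHash("sha256").update(stored).digest();
  return timingSafeEqual(a, b);
}
```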

Email enumeration defense

Failed logins burn a dummy scrypt verify so response times don’t leak whether the email exists.
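
In other words: an unknown email still pays for one scrypt, so valid and invalid emails respond in roughly the same time. A minimal sketch of the idea (hypothetical names and parameters):

```typescript
import { scryptSync, randomBytes, timingSafeEqual } from "node:crypto";

const DUMMY_SALT = randomBytes(16);

interface StoredUser {
  salt: Buffer;
  hash: Buffer;
}

function verifyLogin(user: StoredUser | undefined, password: string): boolean {
  if (!user) {
    // Unknown email: burn the same scrypt work anyway, then fail.
    scryptSync(password, DUMMY_SALT, 64);
    return false;
  }
  const candidate = scryptSync(password, user.salt, 64);
  return timingSafeEqual(candidate, user.hash);
}
```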

Atomic writes everywhere

Temp-file + rename so a crash mid-write never leaves a half-written file.
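
The classic pattern, sketched (a production version would also fsync before the rename for full crash durability):

```typescript
import { writeFileSync, renameSync, readFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join, dirname } from "node:path";

// Temp-file-plus-rename: readers only ever see the old file or the
// complete new file, never a partial write.
function writeAtomic(path: string, data: string): void {
  // Temp file lives in the SAME directory: rename is only atomic
  // within one filesystem.
  const tmp = join(dirname(path), `.${Math.random().toString(36).slice(2)}.tmp`);
  writeFileSync(tmp, data);
  renameSync(tmp, path);
}

// Demo: the destination only ever holds a complete write.
const target = join(tmpdir(), "knowladex-demo-article.md");
writeAtomic(target, "# Stamina\n\nDrains during [[Combat]].\n");
```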

Per-username brute-force throttle

5 failures / 15 minutes, then 15-minute lockout — layered on per-IP rate limits to defend against distributed attacks the IP limiter can’t see.
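
A sliding-window throttle with those numbers can be sketched in a few lines (illustrative only; the `now` parameter exists so the window is testable without waiting):

```typescript
const WINDOW_MS = 15 * 60 * 1000;
const MAX_FAILURES = 5;

// username -> timestamps of recent failed attempts
const failures = new Map<string, number[]>();

function recordFailure(username: string, now: number = Date.now()): void {
  const list = failures.get(username) ?? [];
  list.push(now);
  failures.set(username, list);
}

// Locked out while 5+ failures sit inside the 15-minute window; the
// lock releases as old failures age out.
function isLockedOut(username: string, now: number = Date.now()): boolean {
  const recent = (failures.get(username) ?? []).filter((t) => now - t < WINDOW_MS);
  failures.set(username, recent);
  return recent.length >= MAX_FAILURES;
}
```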

CSP locked down

Same-origin scripts and styles only. No inline scripts, no framing, HSTS preload.

API tokens never appear in URLs

Minted once, shown once, then forgotten. Never in browser history, server access logs, or referrer headers.

CSV formula injection defense

Admin exports escape leading =, +, -, @ so a malicious waitlist signup can’t phish the operator through Excel.
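
The escape itself is small. A sketch following the common OWASP guidance (which also neutralizes leading tab and carriage return, beyond the four characters listed above): a leading apostrophe makes spreadsheet apps treat the cell as text, and quoting handles embedded quotes and commas.

```typescript
// Neutralize spreadsheet formula injection in one CSV cell.
function escapeCsvCell(value: string): string {
  const neutralized = /^[=+\-@\t\r]/.test(value) ? `'${value}` : value;
  return `"${neutralized.replace(/"/g, '""')}"`;
}
```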

Built with

No database server. No Redis. No message queue.

The whole thing is npm run build && node dist/index.js and a Dockerfile.

Runtime
Node.js 20+, TypeScript 5+
HTTP
Express 5, Helmet, express-rate-limit
MCP
@modelcontextprotocol/sdk (StreamableHTTP transport)
Storage
Filesystem on Fly.io volume, atomic writes, JSON
Search
MiniSearch (in-memory full-text, per-org)
Markdown
Marked + sanitize-html (strict allowlist)
Auth
scrypt password hashing, in-process sessions on disk
Email
Resend (transactional, 10 template types)
Billing
Stripe (checkout, webhooks, customer portal)
Hosting
Fly.io, single region, always-on, mounted volume
Marketing
Static HTML/CSS on Netlify, no build step
Who it’s for

Operators, not spectators.

Consultants & agencies

Run knowledge bases for many clients. Each client gets their own private workspace, team logins, and API tokens. The operator sits above all of them. Onboarding a new client takes minutes, not days.

AI-native product teams

Internal docs queryable by your own agents. Same wiki, same scope, same permissions as your humans. Documentation as infrastructure, not an afterthought.

Solo operators

Drowning in scratch files and transcripts. One folder of chaos in, a structured wiki out. It compounds every time you add to it.

You bring the chaos.
The wiki appears.

Production-ready. Closed private beta. Multi-tenant from day one. The product we wished existed — built first for ourselves.