# Markdown in LLM chat — quikchat + quikdown
When an LLM streams back code blocks, tables, or bullet lists, that markdown needs to render as formatted HTML — not show up as raw text. quikchat uses quikdown to do exactly this, in chat bubbles, with no framework and minimal bundle cost.
## Live demo — simulated LLM chat with markdown rendering
Type a message (or use the canned ones below) and watch markdown render inside the chat bubbles. The "bot" replies with example markdown payloads — code, tables, lists, math notation — so you can see how each fence type displays.
## The integration code
Here's the entire integration: a render function that uses quikdown on each incoming message, plus a small wrapper that streams chunks as they arrive from the LLM API. The same pattern works for OpenAI, Anthropic, Ollama, or any streaming text endpoint.
```javascript
import quikdown from 'quikdown';

// Render a single chat message (markdown → HTML).
// quikdown is XSS-safe by default, so it's safe to drop directly into innerHTML.
function renderMessage(markdown) {
  const div = document.createElement('div');
  div.className = 'chat-bubble bot';
  div.innerHTML = quikdown(markdown);
  return div;
}

// Stream from an LLM. As chunks arrive, accumulate them into the buffer
// and re-render the bubble. quikdown re-parses on every chunk — this is
// fast enough that you can't see the re-renders.
async function streamFromLLM(prompt, target) {
  const bubble = document.createElement('div');
  bubble.className = 'chat-bubble bot';
  target.appendChild(bubble);

  let buffer = '';
  const response = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    // { stream: true } keeps multi-byte characters that span chunk
    // boundaries from being corrupted.
    buffer += decoder.decode(value, { stream: true });
    bubble.innerHTML = quikdown(buffer); // ← re-render on each chunk
  }
}
```
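The stream loop above reads the raw response body as plain text. Hosted LLM APIs typically stream Server-Sent Events (`data:`-prefixed lines) rather than bare text, so a real integration needs one extra parsing step before appending to the buffer. A minimal sketch — the `delta` payload field is an assumption for illustration, not any provider's actual schema:

```javascript
// Extract text deltas from an SSE chunk. Each event line looks like
//   data: {"delta":"some text"}
// and the stream ends with a literal `data: [DONE]` sentinel (a common
// convention, but check your provider's docs for the real wire format).
function parseSSEChunk(chunk) {
  const deltas = [];
  for (const line of chunk.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === '[DONE]') continue;
    try {
      const parsed = JSON.parse(payload);
      if (typeof parsed.delta === 'string') deltas.push(parsed.delta);
    } catch {
      // Ignore JSON split across chunk boundaries; a production parser
      // would buffer partial lines until the next chunk arrives.
    }
  }
  return deltas.join('');
}
```

Inside the `while` loop you would then write `buffer += parseSSEChunk(decoder.decode(value, { stream: true }))` instead of appending the raw bytes.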
## Why quikdown for chat
- Fast re-renders. The parser is small enough (9 KB) that you can re-parse the entire buffer on every streamed chunk without performance loss. No diffing, no virtual DOM.
- XSS-safe by default. LLM output is untrusted input. quikdown escapes HTML and sanitizes URL schemes (no `javascript:` links).
- Themable. Use `inline_styles: true` for emails or dark-themed chat bubbles where you can't ship a stylesheet. Or use the default class-based output and theme it with CSS variables.
- Fence callbacks. Want code blocks to use highlight.js or shiki? Pass a `fence_plugin`. Want diagrams? Pipe `mermaid` fences through Mermaid.js. Same one-line hook.
- Zero deps. No React, no Vue, no framework wrapper. The same import works in a browser script tag, an ES module, a Node SSR pipeline, or a Web Worker.
## See quikchat in action
quikchat is a complete vanilla JavaScript chat widget — themable, under 5 KB gzipped — that ships markdown rendering as an optional add-on built on quikdown.