Two chat widgets connected to a local LLM via Ollama: one uses the completion (non-streaming) API, the other streams the response token by token. Neither widget keeps conversational memory, so each message is sent without prior context.
Requires Ollama running locally on port 11434 (its default). This demo will not work on a static host such as GitHub Pages, because the page must reach the local Ollama server.
```javascript
// Completion (non-streaming)
const chat = new quikchat('#chat', getOllamaCompletionCallback, {
  theme: 'quikchat-theme-light',
  titleArea: { title: 'Completions Chat', show: true, align: 'left' }
});

// Streaming
const streamChat = new quikchat('#stream', getOllamaStreamingCallback, {
  theme: 'quikchat-theme-light',
  titleArea: { title: 'Streaming Chat', show: true, align: 'left' }
});
```
The callback functions getOllamaCompletionCallback and getOllamaStreamingCallback are defined in ollama_adapters.js.
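A minimal sketch of what such adapters might look like. This is not the code from ollama_adapters.js; it assumes Ollama's `/api/chat` endpoint, a `llama3` model, and quikchat's `messageAddNew`/`messageAppendContent` methods. The callback signature (chat instance plus the user's text) and the `extractContent` helper are assumptions for illustration:

```javascript
const OLLAMA_URL = 'http://localhost:11434/api/chat'; // assumed endpoint
const MODEL = 'llama3';                               // assumed model name

// Pull the assistant text out of one Ollama /api/chat JSON object.
function extractContent(obj) {
  return (obj.message && obj.message.content) || '';
}

// Non-streaming: one request, one full reply, one message added.
async function getOllamaCompletionCallback(chat, userText) {
  const res = await fetch(OLLAMA_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: MODEL,
      stream: false,
      messages: [{ role: 'user', content: userText }] // no history: stateless
    })
  });
  const data = await res.json();
  chat.messageAddNew(extractContent(data), 'bot', 'left');
}

// Streaming: Ollama returns newline-delimited JSON chunks;
// append each chunk's content to one growing message.
async function getOllamaStreamingCallback(chat, userText) {
  const res = await fetch(OLLAMA_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: MODEL,
      stream: true,
      messages: [{ role: 'user', content: userText }]
    })
  });
  const msgId = chat.messageAddNew('', 'bot', 'left');
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffered = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });
    let nl;
    while ((nl = buffered.indexOf('\n')) >= 0) {
      const line = buffered.slice(0, nl).trim();
      buffered = buffered.slice(nl + 1);
      if (line) chat.messageAppendContent(msgId, extractContent(JSON.parse(line)));
    }
  }
}
```

Because neither callback includes earlier messages in the `messages` array, the model sees each prompt in isolation, which is why neither widget has conversational memory.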