# QuikChat Examples

These examples demonstrate how to use QuikChat in your projects. Basic light, dark, and debug themes are included.

## Basic Usage as a Module

This example demonstrates how to create a basic chat widget with QuikChat using an ESM module import.

View Example ESM
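
For reference, here is a condensed sketch of the pattern this example follows. It assumes the constructor accepts a CSS selector for the container element, and the method name `messageAddNew` follows QuikChat's README; verify both against the version you install.

```js
import quikchat from 'quikchat';

// Attach the widget to an existing <div id="chat"></div>. The second
// argument is the onSend callback, invoked with the chat instance and
// the text the user entered.
const chat = new quikchat('#chat', (chatInstance, msg) => {
  chatInstance.messageAddNew(msg, 'user', 'right');              // show the user's message
  chatInstance.messageAddNew('You said: ' + msg, 'bot', 'left'); // canned reply
});
```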

## Basic Usage as UMD

This example demonstrates how to create a basic chat widget with QuikChat using a UMD script tag import.

View Example UMD
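
The UMD build exposes a global `quikchat` constructor, so the same widget can be created without a bundler. The CDN file paths below are illustrative assumptions; point them at your local copy or the actual files in the package's `dist/` folder.

```html
<!-- Stylesheet (includes the light/dark/debug themes) and UMD bundle.
     These unpkg paths are illustrative; adjust to your setup. -->
<link rel="stylesheet" href="https://unpkg.com/quikchat/dist/quikchat.css" />
<script src="https://unpkg.com/quikchat/dist/quikchat.umd.min.js"></script>

<div id="chat" style="height: 400px;"></div>

<script>
  // The UMD bundle exposes `quikchat` as a browser global.
  const chat = new quikchat('#chat', (chatInstance, msg) => {
    chatInstance.messageAddNew(msg, 'user', 'right');
  });
</script>
```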

## Dual Chatrooms

This example demonstrates how to create two chatrooms that can send messages to each other.

View Example Dual Chatrooms
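
A sketch of the wiring, assuming two container divs whose ids (`#left`, `#right`) are chosen here for illustration: each room's onSend callback echoes the message locally and delivers it to the other instance.

```js
import quikchat from 'quikchat';

// Each room shows its own message on the right and delivers it to the
// other room, where it appears on the left.
const left = new quikchat('#left', (me, msg) => {
  me.messageAddNew(msg, 'Alice', 'right');
  right.messageAddNew(msg, 'Alice', 'left'); // safe: callbacks run after both rooms exist
});

const right = new quikchat('#right', (me, msg) => {
  me.messageAddNew(msg, 'Bob', 'right');
  left.messageAddNew(msg, 'Bob', 'left');
});
```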

## History Demo

This demo shows saving and restoring the full chat message history, which is useful for apps where the history needs to be restored from a previous session.

View Example History Demo
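
A sketch of the save/restore round trip using `localStorage`. The accessor names `historyGet()` and `historyRestoreAll()` are taken from QuikChat's README; confirm them against your installed version.

```js
import quikchat from 'quikchat';

const chat = new quikchat('#chat', (c, msg) => c.messageAddNew(msg, 'user', 'right'));

// Restore a previous session, if one was saved.
const saved = localStorage.getItem('chat-history');
if (saved) {
  chat.historyRestoreAll(JSON.parse(saved));
}

// Persist the full transcript when the user leaves the page.
window.addEventListener('beforeunload', () => {
  localStorage.setItem('chat-history', JSON.stringify(chat.historyGet()));
});
```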

## Simple Ollama

This example shows how to use QuikChat with a local LLM served by Ollama.

View Example Ollama
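
The core of the pattern is a fetch to Ollama's REST API from the onSend callback. Ollama listens on port 11434 by default; the model name below is an assumption, so substitute whatever you have pulled.

```js
import quikchat from 'quikchat';

const chat = new quikchat('#chat', async (c, msg) => {
  c.messageAddNew(msg, 'user', 'right');

  // Single-turn completion via Ollama's /api/generate endpoint.
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'llama3.1', prompt: msg, stream: false }),
  });
  const data = await res.json();
  c.messageAddNew(data.response, 'ollama', 'left');
});
```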

## LLM with Conversational Memory

This example demonstrates how to use QuikChat with a local LLM served by Ollama: QuikChat supplies the chat history and Ollama provides the model, which allows the chat to "remember" what has been discussed.

View Example Ollama with Memory
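
A sketch of one way to implement the memory: keep a running `messages` array in chat-completion format and send the whole transcript to Ollama's `/api/chat` endpoint on every turn.

```js
import quikchat from 'quikchat';

const messages = []; // the accumulated conversation, in { role, content } form

const chat = new quikchat('#chat', async (c, msg) => {
  c.messageAddNew(msg, 'user', 'right');
  messages.push({ role: 'user', content: msg });

  // /api/chat accepts the full message history, so earlier turns are
  // visible to the model on every request.
  const res = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'llama3.1', messages, stream: false }),
  });
  const data = await res.json();

  messages.push(data.message); // { role: 'assistant', content: '...' }
  c.messageAddNew(data.message.content, 'ollama', 'left');
});
```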

## LLM with Conversational Memory using LMStudio

This example demonstrates how to use QuikChat with a local LLM served by LMStudio: QuikChat supplies the chat history and LMStudio provides the model, which allows the chat to "remember" what has been discussed. It assumes LMStudio is installed and running locally on port 1234 with the llama3.1 model loaded. It will not work on the GitHub Pages demo site; you must run it locally.

View Example LMStudio with Memory
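
The same memory pattern works against LMStudio, which serves an OpenAI-compatible API on port 1234; only the endpoint and response shape change.

```js
import quikchat from 'quikchat';

const messages = [];

const chat = new quikchat('#chat', async (c, msg) => {
  c.messageAddNew(msg, 'user', 'right');
  messages.push({ role: 'user', content: msg });

  // LMStudio's local server mimics OpenAI's chat completions API.
  const res = await fetch('http://localhost:1234/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'llama3.1', messages }),
  });
  const data = await res.json();
  const reply = data.choices[0].message;

  messages.push(reply);
  c.messageAddNew(reply.content, 'LMStudio', 'left');
});
```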

## OpenAI

This example demonstrates how to use QuikChat with OpenAI's GPT-4o model, streaming tokens into the chat as they arrive. It can be adapted to any API that supports token streaming.

View Example OpenAI
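
A condensed sketch of the streaming loop. It assumes `messageAddNew()` returns a message id that `messageAppendContent()` accepts, as described in QuikChat's README, and it puts the API key in client code purely for demonstration.

```js
import quikchat from 'quikchat';

const OPENAI_API_KEY = 'sk-...'; // demo only; never ship a real key in client code

const chat = new quikchat('#chat', async (c, msg) => {
  c.messageAddNew(msg, 'user', 'right');
  const msgId = c.messageAddNew('', 'gpt-4o', 'left'); // bubble to stream into

  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: msg }],
      stream: true,
    }),
  });

  // The response is server-sent events: lines of "data: {json}".
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop(); // hold any partial line for the next chunk
    for (const line of lines) {
      if (!line.startsWith('data: ') || line.includes('[DONE]')) continue;
      const delta = JSON.parse(line.slice(6)).choices[0].delta.content;
      if (delta) c.messageAppendContent(msgId, delta); // append token to the bubble
    }
  }
});
```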

## QuikChat with a Python FastAPI Backend

This example demonstrates how to use QuikChat with a FastAPI server that uses a local LLM to generate responses to user prompts.

**Note: This example requires the FastAPI server to be running locally and does not run on GitHub Pages.** View Example FastAPI Instructions
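
The browser side of this pattern is plain QuikChat plus a fetch. The route and payload shape below (`/chat`, `{ message, reply }`) and uvicorn's default port 8000 are hypothetical stand-ins; match them to the routes in the example's server code.

```js
import quikchat from 'quikchat';

// Forward each message to the local FastAPI server and display its reply.
const chat = new quikchat('#chat', async (c, msg) => {
  c.messageAddNew(msg, 'user', 'right');

  const res = await fetch('http://localhost:8000/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: msg }),
  });
  const data = await res.json();
  c.messageAddNew(data.reply, 'assistant', 'left');
});
```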

## QuikChat with a Node.js Express Backend

This example demonstrates how to use QuikChat with an Express server that uses a local LLM to generate responses to user prompts.

**Note: This example requires the Express server to be running locally and does not run on GitHub Pages.** View Example Express Instructions
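
On the server side, the handler looks roughly like this sketch (Node 18+ for global `fetch`). The `/chat` route and the Ollama proxying are illustrative stand-ins for the example's actual wiring.

```js
// server.js -- minimal Express sketch: serve the QuikChat front end and
// proxy chat messages to a local LLM (here, Ollama on its default port).
const express = require('express');
const app = express();

app.use(express.json());
app.use(express.static('public')); // the QuikChat page lives in ./public

async function callLocalLLM(prompt) {
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'llama3.1', prompt, stream: false }),
  });
  return (await res.json()).response;
}

app.post('/chat', async (req, res) => {
  res.json({ reply: await callLocalLLM(req.body.message) });
});

app.listen(3000, () => console.log('Chat server on http://localhost:3000'));
```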