QuikChat Examples

These examples demonstrate how to use QuikChat in your projects. Basic light, dark, and debug themes are included.

📚 Documentation

Complete guides and references for QuikChat:

📖 API Reference

Complete technical reference for all methods and options

🛠 Developer Guide

Practical recipes for theming, LLM integration, and frameworks

🚀 Quick Start

Installation, features, and getting started guide

Basic Usage as Module

This example demonstrates how to create a basic chat widget with QuikChat using an ESM import.

View Example ESM
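
A minimal sketch of the ESM pattern, assuming QuikChat is installed from npm and the page has a `#chat` element. The constructor shape and `messageAddNew` are used throughout these examples, but check the API Reference for exact signatures and options:

```js
// ESM import; use a bundler or serve this file via <script type="module">.
import quikchat from 'quikchat';
import 'quikchat/dist/quikchat.css'; // base styles and themes (path may vary by version)

// Attach the widget to an existing element and handle sends in the callback.
const chat = new quikchat('#chat', (instance, message) => {
  // Echo the user's message; a real app would call an LLM here.
  instance.messageAddNew(message, 'me', 'right');
  instance.messageAddNew(`You said: ${message}`, 'bot', 'left');
});
```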

Basic Usage as UMD

This example demonstrates how to create a basic chat widget with QuikChat using a UMD script tag.

View Example UMD
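
A minimal UMD sketch loading QuikChat from a CDN. The UMD build exposes a global `quikchat`; the unpkg URLs and `dist/` file names below are assumptions and may vary by version:

```html
<!-- UMD build exposes a global `quikchat`; unpkg is one common CDN option -->
<link rel="stylesheet" href="https://unpkg.com/quikchat/dist/quikchat.css" />
<div id="chat" style="height: 400px;"></div>
<script src="https://unpkg.com/quikchat/dist/quikchat.umd.min.js"></script>
<script>
  const chat = new quikchat('#chat', (instance, message) => {
    instance.messageAddNew(message, 'me', 'right');
  });
</script>
```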

Dual Chatrooms

This example demonstrates how to create two chatrooms that can send messages to each other.

View Example Dual Chatrooms
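
One way to wire two rooms together is to have each widget's send callback post into the other. A sketch, assuming the same constructor shape as the examples above:

```js
import quikchat from 'quikchat';

// Two widgets on one page; each one's send handler posts into the other.
// Referencing `right` inside left's callback is safe because the callback
// only runs on user input, after both widgets have been constructed.
const left = new quikchat('#room-left', (me, message) => {
  me.messageAddNew(message, 'left-user', 'right');   // show locally
  right.messageAddNew(message, 'left-user', 'left'); // deliver to the other room
});

const right = new quikchat('#room-right', (me, message) => {
  me.messageAddNew(message, 'right-user', 'right');
  left.messageAddNew(message, 'right-user', 'left');
});
```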

History Demo

This demo shows saving and restoring the full chat message history. It is useful for apps where the history needs to be restored from a previous session.

View Example History Demo
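
A sketch of the save/restore cycle using `historyGetAllCopy()` and `historyRestoreAll()` (method names per the API Reference; verify against your version), with `localStorage` standing in for a real persistence layer:

```js
import quikchat from 'quikchat';

const chat = new quikchat('#chat', (instance, message) => {
  instance.messageAddNew(message, 'me', 'right');
});

// Save: serialize the full history, e.g. into localStorage.
function saveHistory() {
  const history = chat.historyGetAllCopy(); // array of message records
  localStorage.setItem('chatHistory', JSON.stringify(history));
}

// Restore: replay a previously saved session into the widget.
function restoreHistory() {
  const saved = localStorage.getItem('chatHistory');
  if (saved) chat.historyRestoreAll(JSON.parse(saved));
}
```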

Message Visibility Demo (v1.1.13+)

This demo shows how to use the message visibility feature to add messages to the history without displaying them, and how to toggle their visibility. New in v1.1.14: a tagged visibility system for group-based message control.

View Message Visibility Demo
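
A rough sketch of the idea. The trailing `messageAddNew` arguments and the `messageSetVisibility` name below are illustrative assumptions; the demo source and API Reference are the authority here, including the v1.1.14 tag-based group toggles, which the demo covers:

```js
import quikchat from 'quikchat';

const chat = new quikchat('#chat', (instance, message) => {
  instance.messageAddNew(message, 'me', 'right');
});

// Add a hidden message (e.g. a system prompt) to the history. The trailing
// arguments (role, scroll-into-view, visible) are illustrative -- confirm
// the exact parameter order in the API Reference for your version.
const msgId = chat.messageAddNew(
  'You are a helpful assistant.', 'system', 'left', 'system', false, false
);

// Toggle an individual message by id.
chat.messageSetVisibility(msgId, true);  // show
chat.messageSetVisibility(msgId, false); // hide again
```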

Simple Ollama

This example shows how to use QuikChat with a local LLM served by Ollama.

View Example Ollama
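
A minimal sketch, assuming Ollama is running on its default port (11434) with a model pulled, e.g. `ollama pull llama3.1`. The `/api/generate` request shape is Ollama's standard non-streaming API; the QuikChat calls follow the pattern above:

```js
import quikchat from 'quikchat';

const chat = new quikchat('#chat', async (instance, message) => {
  instance.messageAddNew(message, 'me', 'right');

  // Ollama's non-streaming generate endpoint returns { response: '...' }.
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'llama3.1', prompt: message, stream: false }),
  });
  const data = await res.json();
  instance.messageAddNew(data.response, 'ollama', 'left');
});
```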

LLM with Conversational Memory

This example demonstrates how to use QuikChat with a local LLM served by Ollama, where QuikChat supplies the chat history and Ollama provides the LLM. This allows the chat to "remember" what is being discussed.

View Example Ollama with Memory
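
The memory trick is simply to resend the accumulated conversation with each request. A sketch using Ollama's `/api/chat` endpoint; the mapping from QuikChat history records to `{role, content}` assumes field names like `userString` and `content`, so verify them against the API Reference:

```js
import quikchat from 'quikchat';

const chat = new quikchat('#chat', async (instance, message) => {
  instance.messageAddNew(message, 'me', 'right');

  // Map QuikChat's history records onto Ollama's {role, content} format.
  // Field names here (`userString`, `content`) are assumptions -- check
  // the API Reference for your version.
  const messages = instance.historyGetAllCopy().map((m) => ({
    role: m.userString === 'me' ? 'user' : 'assistant',
    content: m.content,
  }));

  // /api/chat accepts the full message list, so the model sees prior turns.
  const res = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'llama3.1', messages, stream: false }),
  });
  const data = await res.json();
  instance.messageAddNew(data.message.content, 'ollama', 'left');
});
```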

LLM with Conversational Memory using LMStudio

This example demonstrates how to use QuikChat with a local LLM served by LMStudio, where QuikChat supplies the chat history and LMStudio provides the LLM. This allows the chat to "remember" what is being discussed.
It assumes LMStudio is installed and running locally on port 1234 with the llama3.1 model loaded. This example will not work on the GitHub Pages demo website; you must run it locally.

View Example LMStudio with Memory
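
LMStudio serves an OpenAI-compatible API, so the request looks like a chat-completions call. A sketch that keeps its own `{role, content}` history alongside the widget, assuming the port and model from the description above:

```js
import quikchat from 'quikchat';

// Plain {role, content} array kept alongside the widget for memory.
const history = [];

const chat = new quikchat('#chat', async (instance, message) => {
  instance.messageAddNew(message, 'me', 'right');
  history.push({ role: 'user', content: message });

  // LMStudio's local server speaks the OpenAI chat-completions protocol.
  const res = await fetch('http://localhost:1234/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'llama3.1', messages: history }),
  });
  const data = await res.json();
  const reply = data.choices[0].message.content;
  history.push({ role: 'assistant', content: reply });
  instance.messageAddNew(reply, 'lmstudio', 'left');
});
```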

OpenAI

This example demonstrates how to use QuikChat with OpenAI's GPT-4o model, streaming tokens into the chat as they arrive. The same pattern works with any API that supports token streaming.

View Example OpenAI
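
A streaming sketch using the official `openai` JS client. `messageAppendContent` is assumed here for appending streamed tokens to an existing message (check the API Reference); running the client in the browser is for demo purposes only, hence `dangerouslyAllowBrowser`:

```js
import quikchat from 'quikchat';
import OpenAI from 'openai';

// Demo only: in production, keep the API key on a server.
const openai = new OpenAI({ apiKey: 'sk-...', dangerouslyAllowBrowser: true });

const chat = new quikchat('#chat', async (instance, message) => {
  instance.messageAddNew(message, 'me', 'right');

  const stream = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: message }],
    stream: true,
  });

  // Create an empty assistant message, then append tokens as they arrive.
  const msgId = instance.messageAddNew('', 'gpt-4o', 'left');
  for await (const chunk of stream) {
    const token = chunk.choices[0]?.delta?.content ?? '';
    if (token) instance.messageAppendContent(msgId, token);
  }
});
```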

QuikChat with Python FastAPI Backend

This example demonstrates how to use QuikChat with a Python FastAPI backend. The server uses a local LLM to generate responses to user prompts.

**Note: This example requires the FastAPI server to be running locally and does not run on GitHub Pages.** View Example FastAPI Instructions
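
On the browser side, the wiring is a plain `fetch` to the local server. The `/chat` route, port, and payload shape below are illustrative assumptions; match them to the routes the example's FastAPI app actually defines:

```js
import quikchat from 'quikchat';

const chat = new quikchat('#chat', async (instance, message) => {
  instance.messageAddNew(message, 'me', 'right');

  // Hypothetical endpoint and payload -- adapt to the FastAPI app's routes.
  const res = await fetch('http://localhost:8000/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt: message }),
  });
  const data = await res.json();
  instance.messageAddNew(data.reply, 'server', 'left');
});
```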

QuikChat with Node.js Express Backend

This example demonstrates how to use QuikChat with a Node.js Express backend. The server uses a local LLM to generate responses to user prompts.

**Note: This example requires the Express server to be running locally and does not run on GitHub Pages.** View Example Express Instructions
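
A minimal sketch of what the backend side can look like. The `/chat` route and payload shape are illustrative, with Ollama standing in as the local LLM; the browser code is the same fetch pattern as the FastAPI example above:

```js
// server.js -- minimal Express backend sketch (Node 18+, "type": "module").
import express from 'express';
import cors from 'cors';

const app = express();
app.use(cors());
app.use(express.json());

app.post('/chat', async (req, res) => {
  const { prompt } = req.body;
  // Forward the prompt to a local LLM (here: Ollama's generate endpoint).
  const llm = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'llama3.1', prompt, stream: false }),
  });
  const data = await llm.json();
  res.json({ reply: data.response });
});

app.listen(3000, () => console.log('Chat backend on http://localhost:3000'));
```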

🔗 Additional Resources

GitHub Repository

Source code, issues, and contributions

NPM Package

Install via npm or view package info

Medium Article

In-depth tutorial and use cases

💡 Need Help? Check out the Developer Guide for detailed tutorials and best practices, or the API Reference for complete method documentation.