These examples demonstrate how to use QuikChat in your projects. Included are basic themes for light, dark, and debugging.
This example demonstrates how to create a basic chat widget with QuikChat using an ESM module import.
View Example ESM
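A minimal sketch of the ESM path, assuming quikchat is installed from npm and that the constructor and `messageAddNew` method match the version you install:

```javascript
// Minimal ESM sketch: mounts a widget inside <div id="chat"></div> and
// echoes whatever the user types. Method names are assumptions based on
// the quikchat docs -- verify against the version you install.
import quikchat from 'quikchat';
import 'quikchat/dist/quikchat.css'; // widget styles (path is an assumption)

const chat = new quikchat('#chat', (instance, msg) => {
  // Called whenever the user presses send.
  instance.messageAddNew(msg, 'user', 'right');
  instance.messageAddNew(`You said: "${msg}"`, 'bot', 'left');
});
```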
This example demonstrates how to create a basic chat widget with QuikChat using a UMD script tag import.
View Example UMD
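The UMD build is loaded with a plain script tag and used through a global constructor; the dist path shown in the comment is an assumption, so check the published package contents:

```javascript
// UMD sketch: after loading the UMD bundle in the page, e.g.
//   <script src="https://unpkg.com/quikchat/dist/quikchat.umd.min.js"></script>
// a global `quikchat` constructor is available (no import/bundler needed).
const chat = new quikchat('#chat', (instance, msg) => {
  instance.messageAddNew(msg, 'user', 'right');
});
```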
This example demonstrates how to create two chat rooms that can send messages to each other.
View Example Dual Chatrooms
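A sketch of the wiring, assuming two container divs (`#roomA`, `#roomB`) and the `messageAddNew(content, user, align)` signature from the quikchat docs:

```javascript
// Two QuikChat instances wired back to back: anything typed in one room is
// echoed locally and delivered to the other room.
import quikchat from 'quikchat';

const roomA = new quikchat('#roomA', (chat, msg) => {
  chat.messageAddNew(msg, 'Alice', 'right');  // show in the sender's room
  roomB.messageAddNew(msg, 'Alice', 'left');  // deliver to the other room
});

const roomB = new quikchat('#roomB', (chat, msg) => {
  chat.messageAddNew(msg, 'Bob', 'right');
  roomA.messageAddNew(msg, 'Bob', 'left');
});
```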
This demo shows saving and restoring the full chat message history, which is useful for apps where the history needs to be restored from a previous session.
View Example History Demo
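A sketch of persisting the history to localStorage; the `historyGetAllCopy()` and `historyRestoreAll()` method names follow the quikchat docs but should be checked against the version you use:

```javascript
// Save the full chat history before the page unloads and restore it on load.
import quikchat from 'quikchat';

const chat = new quikchat('#chat', (instance, msg) => {
  instance.messageAddNew(msg, 'user', 'right');
});

function saveHistory() {
  // historyGetAllCopy() returns a serializable copy of every message.
  localStorage.setItem('chatHistory', JSON.stringify(chat.historyGetAllCopy()));
}

function restoreHistory() {
  const saved = localStorage.getItem('chatHistory');
  if (saved) chat.historyRestoreAll(JSON.parse(saved));
}

window.addEventListener('beforeunload', saveHistory);
restoreHistory();
```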
This example shows how to use QuikChat with a local LLM served by Ollama.
View Example Ollama
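A sketch of the round trip to Ollama's REST API, assuming Ollama is running on its default port 11434 with the llama3.1 model pulled; only the current message is sent, so the model has no memory of earlier turns:

```javascript
// Forward each user message to a local Ollama server and display the reply.
import quikchat from 'quikchat';

const chat = new quikchat('#chat', async (instance, msg) => {
  instance.messageAddNew(msg, 'user', 'right');

  const resp = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'llama3.1',
      messages: [{ role: 'user', content: msg }], // single turn, no history
      stream: false
    })
  });
  const data = await resp.json();
  instance.messageAddNew(data.message.content, 'assistant', 'left');
});
```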
This example demonstrates how to use QuikChat with a local LLM served by Ollama, where QuikChat supplies the chat history and Ollama provides the LLM model. This allows the chat to "remember" what is being discussed.
View Example Ollama with Memory
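A sketch of the memory version: the widget's own history is mapped into Ollama's `messages` format on every turn. The history field names (`userString`, `content`) are assumptions about quikchat's history record shape, so adjust them to match the actual objects:

```javascript
// Send the whole conversation (not just the latest message) so the model
// can refer back to earlier turns.
import quikchat from 'quikchat';

const chat = new quikchat('#chat', async (instance, msg) => {
  instance.messageAddNew(msg, 'user', 'right');

  // Map quikchat's history records to Ollama chat messages.
  // Field names here are assumptions -- inspect historyGetAllCopy() output.
  const messages = instance.historyGetAllCopy().map((m) => ({
    role: m.userString === 'user' ? 'user' : 'assistant',
    content: m.content
  }));

  const resp = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'llama3.1', messages, stream: false })
  });
  const data = await resp.json();
  instance.messageAddNew(data.message.content, 'assistant', 'left');
});
```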
This example demonstrates how to use QuikChat with a local LLM served by LM Studio, where QuikChat supplies the chat history and LM Studio provides the LLM model. This allows the chat to "remember" what is being discussed. This example assumes LM Studio is installed and running locally on port 1234, with the llama3.1 model loaded. It will not work on the GitHub Pages demo website; you must run it locally.
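LM Studio's local server speaks the OpenAI chat-completions protocol, so the sketch below posts to `http://localhost:1234/v1/chat/completions`; the history mapping carries the same field-name assumptions as the Ollama memory sketch above:

```javascript
// Same pattern as the Ollama example, but against LM Studio's
// OpenAI-compatible endpoint on port 1234.
import quikchat from 'quikchat';

const chat = new quikchat('#chat', async (instance, msg) => {
  instance.messageAddNew(msg, 'user', 'right');

  // Field names are assumptions -- inspect historyGetAllCopy() output.
  const messages = instance.historyGetAllCopy().map((m) => ({
    role: m.userString === 'user' ? 'user' : 'assistant',
    content: m.content
  }));

  const resp = await fetch('http://localhost:1234/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'llama3.1', messages })
  });
  const data = await resp.json();
  instance.messageAddNew(data.choices[0].message.content, 'assistant', 'left');
});
```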
This example demonstrates how to use QuikChat with OpenAI's GPT-4o model, using the OpenAI API to generate responses to user prompts. You can adapt it to any API that supports token streaming.
View Example OpenAI
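A sketch of token streaming with GPT-4o: the assistant bubble is created empty and grown with `messageAppendContent()` as deltas arrive (the assumption that `messageAddNew()` returns a message id comes from the quikchat docs). Never embed a real API key in client-side code; in practice the request should go through a server-side proxy.

```javascript
// Stream GPT-4o tokens into a single message bubble as they arrive.
import quikchat from 'quikchat';

const OPENAI_API_KEY = 'sk-...'; // placeholder: use a proxy in production

const chat = new quikchat('#chat', async (instance, msg) => {
  instance.messageAddNew(msg, 'user', 'right');
  const botMsgId = instance.messageAddNew('', 'assistant', 'left');

  const resp = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${OPENAI_API_KEY}`
    },
    body: JSON.stringify({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: msg }],
      stream: true
    })
  });

  // The body is a server-sent-event stream of "data: {...}" lines.
  const reader = resp.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep any partial line for the next chunk
    for (const line of lines) {
      if (!line.startsWith('data: ') || line.includes('[DONE]')) continue;
      const delta = JSON.parse(line.slice(6)).choices[0].delta.content;
      if (delta) instance.messageAppendContent(botMsgId, delta);
    }
  }
});
```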
This example demonstrates how to use QuikChat with a FastAPI server that uses a local LLM to generate responses to user prompts.
**Note: This example requires the FastAPI server to be running locally and does not run on GitHub Pages.**
View Example FastAPI Instructions
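A sketch of the browser side only: the endpoint path (`/chat`), port 8000, and the `{ message, reply }` payload shape are placeholders and should be matched to whatever the FastAPI server actually exposes:

```javascript
// Browser side of the FastAPI example: POST the user's message to the local
// server and display the reply. Endpoint and payload shape are placeholders.
import quikchat from 'quikchat';

const chat = new quikchat('#chat', async (instance, msg) => {
  instance.messageAddNew(msg, 'user', 'right');

  const resp = await fetch('http://localhost:8000/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: msg })
  });
  const data = await resp.json();
  instance.messageAddNew(data.reply, 'assistant', 'left');
});
```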
This example demonstrates how to use QuikChat with an Express server that uses a local LLM to generate responses to user prompts.
**Note: This example requires the Express server to be running locally and does not run on GitHub Pages.**
View Example Express Instructions
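A sketch of a minimal Express server that serves the page hosting the widget and proxies chat requests to a local Ollama instance; the route name and payload shape are placeholders, and the global `fetch` assumes Node 18 or newer:

```javascript
// Minimal Express sketch: serve the static page that hosts the QuikChat
// widget and forward chat requests to a local LLM (Ollama assumed here).
const express = require('express');
const app = express();

app.use(express.json());
app.use(express.static('public')); // the page with the QuikChat widget

app.post('/chat', async (req, res) => {
  const { messages } = req.body; // chat history sent by the browser
  const llmResp = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'llama3.1', messages, stream: false })
  });
  const data = await llmResp.json();
  res.json({ reply: data.message.content });
});

app.listen(3000, () => console.log('Chat server on http://localhost:3000'));
```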