This example demonstrates how to use quikchat with a local LLM served by LMStudio: quikchat maintains the chat history, and LMStudio provides the LLM. Sending the full history with each request allows the chat to "remember" what is being discussed.
This example assumes LMStudio is installed and running locally on port 1234, and that the llama3.1 model is loaded. It will not work on the GitHub Pages demo website; you must run it locally.
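The core idea can be sketched as follows: LMStudio exposes an OpenAI-compatible endpoint at `http://localhost:1234/v1/chat/completions`, and each request includes the accumulated history so the model has context. This is a minimal sketch, not quikchat's actual code; the history-entry shape (`{ role, content }` objects) and the `buildRequestBody`/`askLLM` helper names are illustrative assumptions.

```javascript
// Build an OpenAI-style request body from stored history entries.
// Each entry is assumed to look like { role: "user" | "assistant", content: "..." }.
function buildRequestBody(history, userMessage) {
  return {
    model: "llama3.1", // the model this example assumes is loaded in LMStudio
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      ...history, // prior turns are what let the model "remember"
      { role: "user", content: userMessage },
    ],
    temperature: 0.7,
  };
}

// Post the body to the local LMStudio server and return the reply text.
async function askLLM(history, userMessage) {
  const res = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildRequestBody(history, userMessage)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// Example: two prior turns plus a new question.
const history = [
  { role: "user", content: "My name is Sam." },
  { role: "assistant", content: "Nice to meet you, Sam!" },
];
const body = buildRequestBody(history, "What is my name?");
console.log(body.messages.length); // system + 2 history turns + new question = 4
```

Because the entire history is resent on every turn, long conversations eventually approach the model's context limit; a real application would truncate or summarize older turns.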