Ollama + Quikchat Demo with Conversational Memory

This example demonstrates how to use quikchat with a local LLM served by Ollama: quikchat maintains the chat history and passes it to Ollama, which runs the model. Sending the full history on each turn is what allows the chat to "remember" what has been discussed.
This example assumes Ollama is installed and running locally on port 11434, and that the llama3.1 model has been pulled. It will not work on the GitHub Pages demo site; you must run it locally.
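The core of the memory pattern can be sketched as follows. This is a minimal illustration, not the demo's actual source: the helper names (`buildChatRequest`, `recordReply`, `askOllama`) are hypothetical, and the quikchat widget wiring is omitted, but the Ollama `/api/chat` request shape (`model`, `messages`, `stream`) is the real API.

```javascript
// Minimal sketch: keep the full message history and resend it to
// Ollama's /api/chat endpoint on every turn. Helper names here are
// illustrative, not part of quikchat's API.

const history = []; // [{ role: "user" | "assistant", content: string }, ...]

// Build the request body for Ollama's /api/chat endpoint.
// Including the whole history is what gives the chat its "memory".
function buildChatRequest(userText) {
  history.push({ role: "user", content: userText });
  return {
    model: "llama3.1", // assumes this model has been pulled locally
    messages: history, // the full conversation so far
    stream: false,
  };
}

// Record the model's reply so the next turn can see it.
function recordReply(replyText) {
  history.push({ role: "assistant", content: replyText });
}

// Send one turn to the local Ollama server (assumes it runs on port 11434).
// /api/chat responds with { message: { role, content }, ... } when stream is false.
async function askOllama(userText) {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest(userText)),
  });
  const data = await res.json();
  recordReply(data.message.content);
  return data.message.content;
}
```

In the demo, quikchat's send callback would call something like `askOllama` with the user's text and display the returned reply; the key point is that `messages` always contains every prior turn, so follow-up questions can refer back to earlier ones.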