How to Use Ollama: Hands-On With Local LLMs and Building a Chatbot
by Arjun · March 14th, 2024
In the space of local LLMs, I first ran into LMStudio. While that app is easy to use, I prefer the simplicity and flexibility that Ollama provides. To learn more about Ollama, you can go here.
tl;dr: Ollama hosts its own curated list of models that you have access to.
You can download these models to your local machine and then interact with them from the command line. Alternatively, when you run a model, Ollama also starts an inference server on port 11434 (by default) that you can interact with through its REST API or with libraries such as LangChain.
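For instance, once Ollama is running locally, a minimal request to that server might look like the sketch below. It assumes the default port and that a model (here, llama2) has already been pulled with `ollama pull llama2`; any other pulled model name works the same way.

```python
# Minimal sketch: call Ollama's local inference server using only the
# Python standard library. Assumes Ollama is running on the default port
# and that the "llama2" model has already been pulled.
import json
import urllib.request

payload = {
    "model": "llama2",            # assumed model name; swap in any pulled model
    "prompt": "Why is the sky blue?",
    "stream": False,              # return a single JSON object instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",   # Ollama's default endpoint and port
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())
    print(body["response"])       # the model's generated text
```

The same endpoint is what higher-level libraries like LangChain talk to under the hood, so anything you can do here you can also do through those integrations.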