
How to run your own private AI assistant locally

In this article we will set up a private AI assistant that runs entirely on your own machine, using Ollama to serve a local LLM (Large Language Model) and Chatbox as a desktop GUI (Graphical User Interface) for chatting with it.

Prerequisites

Before starting, install Ollama and Chatbox. Both are free and available for macOS, Windows, and Linux.

Selecting which model to use

Once you have installed Ollama, you need to select which model to use. There are many models tuned for different types of tasks, so feel free to explore the model library and pick the one that suits your needs.

Find the model you want in the Ollama library. Instead of copying the suggested command that runs the model (ollama run codellama), we are going to just download it: type ollama pull codellama:7b in your terminal (or PowerShell if you are on Windows). This downloads the model without starting an interactive session.
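As a concrete sketch of the step above (codellama:7b is just the example model used in this article; substitute any name:tag from the Ollama library):

```shell
# Download the model without running it
ollama pull codellama:7b

# Verify the model now appears in your local model list
ollama list
```

Pulling ahead of time means the first chat in Chatbox starts immediately instead of waiting on a multi-gigabyte download.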

Keep in mind that larger LLMs require more memory and processing power, so a smaller model may be a better fit if you are on a low-end device.

Configuring Chatbox

In order to use Ollama with Chatbox we need to change a few settings. Open Chatbox, click Settings, select the Model tab, and change the following settings:

Set AI Model Provider to: Ollama

Set API Host to: http://localhost:11434

Set Model to the model you downloaded (codellama:7b in our case) and click Save.
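If Chatbox fails to connect, it can help to check that the Ollama server is actually answering on that API host. A minimal sketch using Ollama's standard generation endpoint (this assumes Ollama is running locally and you pulled codellama:7b):

```shell
# Ask the local Ollama server for a one-off completion, bypassing Chatbox
curl http://localhost:11434/api/generate -d '{
  "model": "codellama:7b",
  "prompt": "Write a hello world program in Python",
  "stream": false
}'
```

If this returns a JSON response, the server side is fine and any remaining issue is in the Chatbox settings.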

That's it! You are now ready to start using your personal AI assistant to help you with your tasks.

If you need more information on Ollama and its more advanced features, refer to its docs on GitHub.
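One such feature worth knowing about is customizing a model with a Modelfile, for example to bake in a system prompt. A sketch (the name my-assistant and the parameter values here are purely illustrative; see the Ollama docs for the full Modelfile syntax):

```shell
# Write a Modelfile that bases a new model on codellama:7b
cat > Modelfile <<'EOF'
FROM codellama:7b
PARAMETER temperature 0.7
SYSTEM You are a concise coding assistant running fully offline.
EOF

# Build the customized model, then select it in Chatbox like any other
ollama create my-assistant -f Modelfile
```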

Thanks for reading.

