Ollama

1 – Installing

1.1 – Linux

On Debian Linux, installing Ollama is a one-liner; just enter the following in your terminal:

curl -fsSL https://ollama.com/install.sh | sh

Yup, that is it. Move on to using Ollama.

1.2 – Windows and macOS

Just go to https://ollama.com/, download the installer, and run it. You are done!

Using it!

Using Ollama is simple. Open your terminal window or command prompt, activate your conda environment (or venv), and run the model. For the sake of this example, I will run:

conda activate projectName
ollama run llama3.2

llama3.3, with its 70 billion parameters, will require a minimum of 64GB of RAM, so don’t try that unless you have the memory for it! For comparison, llama3.2 has around 3 billion parameters, roughly 4% of llama3.3.

It should now download about 2GB of data (the weights for llama3.2’s roughly 3 billion parameters), and you are done. Now you can ask it anything.

For example: “Create an article for me explaining this and that.”

Once done, just enter /bye to exit the Ollama prompt and quit the session.

If you want to, for example, clear the context or do anything else, just use the /? command for a list of available commands.
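Put together, a short session inside the prompt might look like this (an illustrative transcript; the exact output and command list vary by version):

```
>>> Create an article for me explaining how local LLMs work.
(the model streams its answer here)
>>> /?
(prints the available commands, including /clear to reset the context and /bye to exit)
>>> /bye
```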

Now, you have used llama3.2, but on the Ollama models page, you will find that there are many others you can use!

Others include models that help you with coding, or models targeted more towards chat-bot Q&A. Either way, you should take a close look at them, even if just for the fun of it.
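A few of Ollama’s standard CLI commands are handy for managing those models (the model name here is just an example; check the models page for what is actually available):

```shell
ollama pull codellama   # download a model without starting a chat
ollama list             # show the models installed locally
ollama rm codellama     # delete a model you no longer need
```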

Ollama API

So far, your terminal has allowed you to chat with the model, much like what you do when you open Claude or ChatGPT. If you want to access things via the API, here is how.
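As a minimal sketch, assuming Ollama’s default local server at http://localhost:11434 and its /api/generate endpoint: you can talk to the model over plain HTTP from Python. The helper names build_payload and generate below are illustrative, not part of Ollama.

```python
import json
import urllib.request

def build_payload(prompt, model="llama3.2"):
    # Request body for Ollama's /api/generate endpoint;
    # stream=False asks for one complete JSON response instead of a stream.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="llama3.2", host="http://localhost:11434"):
    # POST the JSON payload to the locally running Ollama server.
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming reply carries the answer in the "response" field.
        return json.loads(resp.read())["response"]
```

With the server running (the desktop app, or ollama serve on Linux), calling generate("Why is the sky blue?") returns the model’s answer as a string.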
