stable-diffusion-webui from Automatic1111

This is a great front end for Stable Diffusion

The official Git repo for this is https://github.com/AUTOMATIC1111/stable-diffusion-webui. To install it alongside Stable Diffusion and everything else you need, make sure you are running in your local environment (conda or a pip venv), then run the following command

wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh

Then make it executable and run it with the following commands

chmod +x webui.sh
./webui.sh --listen --api

You can add Automatic1111 to your OpenWebUI (go to Settings/Images)
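
Since the web UI was started with --api, you can also drive it programmatically. Here is a minimal sketch in Python, assuming the default port 7860, the standard txt2img endpoint, and the requests package installed:

# Sketch: generate one image through the Automatic1111 API
# Assumes the web UI is running locally with --api on the default port 7860
import base64
import requests

payload = {"prompt": "a lighthouse at sunset", "steps": 20}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The API returns base64-encoded images in the "images" list
image_b64 = resp.json()["images"][0]
with open("output.png", "wb") as f:
    f.write(base64.b64decode(image_b64))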

OpenWebUI

1- Installing as a Docker container

  • Install Docker as you normally would, by adding its repositories or however you are used to installing it
  • sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
  • docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
  • To check if it is running, run sudo docker ps
  • Now, localhost:8080 should have your OpenWebUI running
  • Sign up / create an account (it is bound to the local instance); the first account you create will automatically become an admin account!

At this stage, you should be good to go.
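
If you want to check from a script instead of a browser, here is a quick Python sketch; the ports are taken from the docker run command above, and the requests package is assumed to be installed:

# Sketch: check that OpenWebUI and Ollama answer locally (ports assumed from the docker run above)
import requests

for name, url in [("OpenWebUI", "http://localhost:8080"), ("Ollama", "http://127.0.0.1:11434")]:
    try:
        r = requests.get(url, timeout=5)
        print(f"{name}: HTTP {r.status_code}")
    except requests.RequestException:
        print(f"{name}: not reachable")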

Hardware requirements for AI

Every model is a different story, but before you start looking into models, there are a few useful things to know, and a few useful tools that you can use

Nvidia

Nvidia is a very popular AI hardware provider. The cool thing about modern AI models is that they can be split into layers, so you can have more than one card doing the work! I have 2 x 4090 cards doing the work, and you can combine their VRAM to see if your model fits across both when split in half. Some models even provide an option to offload some of the model's data onto system RAM, but that is a story for another day.

To inspect the utilization and VRAM usage of your GPUs, you can use the following command

watch -n 0.5 nvidia-smi

The command should show you what processes live in your VRAM (VRAM is your card's RAM)
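
If you would rather query the cards from Python, for example to decide whether a model will fit when split across them, here is a small sketch with PyTorch (assuming a CUDA build of torch is installed):

# Sketch: list each visible GPU with its free / total VRAM (requires a CUDA build of PyTorch)
import torch

for i in range(torch.cuda.device_count()):
    free, total = torch.cuda.mem_get_info(i)  # values are in bytes
    name = torch.cuda.get_device_name(i)
    print(f"GPU {i} ({name}): {free / 1e9:.1f} GB free of {total / 1e9:.1f} GB")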

Anthropic / Claude

Anthropic's Claude is probably the main competitor to OpenAI's ChatGPT

Anthropic has an edge over OpenAI in many areas; I personally think it is better than OpenAI for code generation

To simply chat with Claude, you can create an account at https://claude.ai/

To obtain API keys from Anthropic and access the system programmatically, you will need to get them from https://console.anthropic.com/
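
Once you have a key (see the .env section below), a minimal programmatic call with the official anthropic Python package looks roughly like this; the model name is only an example and changes over time:

# Sketch: one chat turn through the Anthropic API
# Assumes ANTHROPIC_API_KEY is set in the environment (e.g. via your .env file)
import anthropic

client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # example model name, check the docs for current ones
    max_tokens=500,
    messages=[{"role": "user", "content": "Write a haiku about GPUs"}],
)
print(message.content[0].text)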

Your .env file

In the context of my AI notes, here is what you need to know about the .env file

Any file that starts with a . is a hidden file according to Linux. Windows is a bit funny about such files: it allows their existence, as long as you don't try to use Windows Explorer to create or rename a file into such a name.

On Linux, if you want the ls command to show such files, you would execute the command "ls -a"

Now, with that out of the way, let me start with a few variables for various providers

# Open AI (ChatGPT)
OPENAI_API_KEY=sk-proj-xxxx
# Google AI (Gemini)
GOOGLE_API_KEY=xxxx
# Anthropic (Claude)
ANTHROPIC_API_KEY=xxxx
# Hugging Face (HF_TOKEN is short for Hugging Face token)
HF_TOKEN=xxxx
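
To actually read those variables from Python, the usual approach is the python-dotenv package; a minimal sketch:

# Sketch: load the .env file and read a key from it (pip install python-dotenv)
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory into the process environment
openai_key = os.getenv("OPENAI_API_KEY")
print("OpenAI key loaded:", bool(openai_key))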

OpenAI – ChatGPT

I am not sure this requires any introduction!

While the ChatGPT app can be accessed through its own website and apps, the API portal is at platform.openai.com. Even though the website provides a free tier, the API does not: you must have prepaid credits in order to use it. The minimum is $5, and those $5 are pay as you go; you use them up by sending more API calls.

GPT-4o is rumored to be 10 trillion parameters!! For comparison, DeepSeek, an amazing LLM that came out recently, is 671 billion parameters, meaning it is about 7% of the size of ChatGPT!
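
Once you have credits and a key in place, a minimal call with the official openai Python package looks something like this (the model name is just an example, pick one from the pricing page):

# Sketch: one chat completion through the OpenAI API
# Assumes OPENAI_API_KEY is set in the environment (e.g. via your .env file)
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model, check the current model list
    messages=[{"role": "user", "content": "Explain what a token is in one sentence"}],
)
print(response.choices[0].message.content)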

Models and API pricing

Python virtual environment with pip

If you are familiar with Python, you are probably also familiar with pip, and very likely familiar with venv (virtual environments)

There is nothing special in particular about this for AI; it is exactly what you would do for an Odoo installation, for example

For AI, I would totally recommend Anaconda, but if for some reason that is not an option, this option will do 100% of the time

So, you need to start by installing Python 3! On Debian, I would just use the repository with apt; at the time of writing the repo may give you 3.11 rather than the latest 3.13, but that is absolutely fine

sudo apt update
# The following command should do
sudo apt install python3
# But I would much rather install everything Python in one go
sudo apt install build-essential wget git python3-pip python3-dev python3-venv \
python3-wheel libfreetype6-dev libxml2-dev libzip-dev libsasl2-dev \
python3-setuptools

Now, with that out of the way, navigate to the project's folder (assuming you have downloaded a project, for example), and create a virtual environment

python3 -m venv venv

Now, you can activate that environment with

source venv/bin/activate
# On Windows, the above should look something like
venv\Scripts\activate

That is basically it. From within the activated venv, you will now need to install dependencies, either one by one using the pip command or by pointing pip at a file containing the dependencies, for example

pip install -r requirements.txt

You should now be good to go !

Jupyter Notebooks

Launch

Within the context of our AI tutorial, start by activating your conda environment (or venv)

Once active, run the following command to launch Jupyter

jupyter lab

Jupyter Notebook: the original three languages that were supported were Julia, Python and R. Sure, there is a missing E in the words, but I guess the name is 90% cool. The logo pays homage to Galileo's discovery of the moons of Jupiter, so I am not sure whether the name can be called a pun, since one of them is a shortening of three words and not a name, and the other is a name with a misspelled letter.

Enough about the name and logo, let us get into what a Jupyter Notebook is

Technically, it is a JSON file with an .ipynb extension; in reality, it is a way to create a web document with live code, equations, and other things

The types of cells in a notebook are

  • Code Cells: Execute code and display the output.
  • Markdown Cells: Write formatted text using Markdown.
  • Raw NBConvert Cells: Include content that is not evaluated by the notebook kernel
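
To convince yourself that a notebook really is just JSON, you can open one with plain Python and list its cells; a small sketch (replace the file name with any notebook you have on disk):

# Sketch: peek inside a notebook file; the file name here is just a placeholder
import json

with open("example.ipynb") as f:
    nb = json.load(f)

print("notebook format version:", nb["nbformat"])
for cell in nb["cells"]:
    preview = "".join(cell["source"])[:40]
    print(cell["cell_type"], "-", preview)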

So, the markup inside that JSON file is not specific to AI per se; it is used in many science fields. But as you work your way through this blog, you will see how important it is for what we are trying to achieve! Important in the sense that it makes development simpler; it is not a thing you will use in your final product

In any case, to start Jupyter Notebook, activate your Conda environment, then run the following command from a terminal where the current directory is your project directory!

jupyter lab

Then, assuming you are working on a project that you downloaded from GitHub, for example, you can open the .ipynb files found in there by clicking on them from the menu on the left!

Code cells get executed by a background Python process (the kernel), one cell at a time, when you go to a cell and press Shift+Enter