I’m getting into the fray that is Generative AI since, according to some, my job as a programmer will soon be taken over by some literal code-cranking machine.
There are a few things to set up:
- conda — (optional) “an environment and package manager” according to Welcome! — Anaconda documentation. If you do want it, install it before installing PyTorch (below). Supposedly plain pip will work fine too (see the example command after this list).
- PyTorch — the tool (one of them, anyway) used for learning Generative AI. See Start Locally | PyTorch.
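For the record, if you skip conda entirely, the pip route is roughly what the PyTorch “Start Locally” page generates. A sketch of what it looked like at the time of writing, assuming you want the CUDA 12.1 wheels (the exact index URL and versions change over time, so treat this as a guess to verify against the page):

# Hypothetical pip equivalent of the conda install used later in the Dockerfile;
# the --index-url picks the CUDA 12.1 builds and may differ for your setup.
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121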
Like most tools and libraries, the instructions assume I have nothing else happening on my machine: just install Anaconda globally, simple as that. Everything else already on my machine, including its existing Python version, can apparently go to hell.
Dockerizing the environment
Well. Until I have enough $$$ to buy a new machine for each new tool I want to try out, I’ll be using Docker (thank goodness I don’t need an actual VM) to isolate an environment to play with.
After some trial and error, these are the template Dockerfile and docker-compose.yml I’m using:
Dockerfile
FROM python:3.11-bookworm
# Don't buffer Python output, so logs show up immediately
ENV PYTHONUNBUFFERED=1
WORKDIR /code
# Basic editing inside the container
RUN apt update && apt install -y \
    vim
# Miniconda installer; -b runs it non-interactively (installs to /root/miniconda3)
RUN curl -O https://repo.anaconda.com/miniconda/Miniconda3-py311_24.1.2-0-Linux-x86_64.sh
RUN sh Miniconda3-py311_24.1.2-0-Linux-x86_64.sh -b
ENV PATH="$PATH:/root/miniconda3/bin"
# PyTorch with CUDA 12.1, per the official conda install command
RUN conda install -y pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
# Wire conda into the shell so `conda activate` works in interactive sessions
RUN conda init
The above environment has:
- Vim
- Python 3.11
- Miniconda3
- PyTorch with CUDA 12.1
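To sanity-check the install once the image is built, a minimal Python snippet (my own addition, not something from the PyTorch docs) can confirm which PyTorch build ended up in the environment and whether it can see a GPU:

# check_torch.py - hypothetical quick sanity check to run inside the container
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))

Note that torch.cuda.is_available() will report False inside a plain container, even with the CUDA build installed, unless the GPU is actually passed through (see the compose note further down).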
docker-compose.yml
services:
  app:
    build: .
    volumes:
      # Mount the project directory into the container at /code
      - ./:/code
    # Keep the container alive with an interactive terminal attached
    tty: true
    stdin_open: true
The docker-compose.yml is a template I reuse for other projects. Nothing special here.
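With both files in place, the day-to-day workflow is just standard Docker Compose, nothing specific to this project (use docker-compose instead if you're on the older standalone binary):

docker compose build          # build the image from the Dockerfile above
docker compose up -d          # start the container in the background
docker compose exec app bash  # get a shell inside it

One caveat: since the Dockerfile installs the CUDA 12.1 build of PyTorch, the container will only see a GPU if the host has an NVIDIA driver plus the NVIDIA Container Toolkit installed, and the service asks for the device. A sketch of the extra keys I believe would go under the app service, based on the documented Compose GPU reservation syntax (not yet part of my template above):

    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]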