Celery/RabbitMQ Notes on Task Timings

celery, django, rabbitmq

The following notes apply when using Celery with the RabbitMQ broker.

Celery Settings

task_acks_late

The Celery setting task_acks_late (disabled by default), if enabled, defers the message ACK to RabbitMQ until the task has finished running.

  • If it is enabled and the Celery task takes too long (see consumer_timeout below), the message will be requeued and redelivered to another worker, so the same task may end up being processed multiple times.
  • If it is disabled, Celery ACKs the message as soon as the task starts. This tells RabbitMQ that the message was “delivered and processed,” and RabbitMQ deletes it from the queue, so the task is lost if the worker is interrupted before it finishes.

task_acks_on_failure_or_timeout

Then there is task_acks_on_failure_or_timeout (enabled by default). According to the docs, this ACKs the message even if the task failed or timed out (it only takes effect when task_acks_late is enabled). Whether that is the correct choice depends on the task.
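As a sketch of where these settings live (the module path and broker URL below are placeholders, not from this setup):

# proj/celery.py -- hypothetical module and broker URL
from celery import Celery

app = Celery("proj", broker="amqp://guest:guest@rabbitmq:5672//")

# ACK only after the task has run (default: False).
app.conf.task_acks_late = True

# With acks_late, messages are still ACKed even when the task fails or
# times out (default: True); set to False to leave such messages unACKed.
app.conf.task_acks_on_failure_or_timeout = True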

RabbitMQ Settings

consumer_timeout

The RabbitMQ consumer_timeout configuration (30 minutes by default) caps how long a consumer can hold a delivered message without acknowledging it. If the consumer (here, a Celery worker running a task) does not ACK the message before the timeout expires, RabbitMQ closes the channel and the message is requeued and redelivered.
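This is a broker-side setting, not a Celery one; as a sketch, it can be raised in rabbitmq.conf (the two-hour value is just an example, in milliseconds):

# rabbitmq.conf
# give consumers up to 2 hours to ACK a delivery
consumer_timeout = 7200000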

TTL / message TTL

Furthermore, the TTL / message TTL configuration (RabbitMQ applies no message TTL by default, so messages never expire on their own) determines how long a message may stay in a queue before being discarded. If a message sits in a queue for longer than this, it is dropped from the queue.
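Message TTL is set per queue, via a RabbitMQ policy or the x-message-ttl queue argument. A sketch of the queue-argument approach from the Celery side, reusing the app object from the earlier sketch (the queue name and the 24-hour value are arbitrary examples):

from kombu import Exchange, Queue

# declare the queue with a per-message TTL of 24 hours (in milliseconds)
app.conf.task_queues = [
    Queue("celery", Exchange("celery"), routing_key="celery",
          queue_arguments={"x-message-ttl": 24 * 60 * 60 * 1000}),
]

Note that RabbitMQ refuses to redeclare an existing queue with different arguments, so changing this on a live queue means deleting and recreating it (or using a policy instead).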

Implications

To increase the chance that tasks are not aborted/lost due to restarts (e.g. deployments):

  • Design each task to finish as quickly as possible, spawning additional smaller tasks if necessary.
  • Design each task to survive partial completion and to be safe to run again from the start without ending up in a bad state (e.g. duplicate records created, abandoned/orphaned/partial updates); see the sketch after this list.
  • Enable task_acks_late so that RabbitMQ ACKs are not sent until tasks are finished.
  • Extend the TTL / message TTL for RabbitMQ so that we have enough time to consume messages (in case the Celery task needs to be paused for some time to fix bugs).
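As a sketch of what “safe to run again” can look like for a Django-backed task (the app, model, and task names here are hypothetical, not from this setup):

from celery import shared_task
from django.db import transaction

from myapp.models import Invoice  # hypothetical model keyed by an external id

@shared_task(acks_late=True)
def sync_invoice(external_id, total):
    # Idempotent: re-running after a partial completion updates the same row
    # instead of creating a duplicate, and the write is atomic.
    with transaction.atomic():
        invoice, _ = Invoice.objects.get_or_create(external_id=external_id)
        invoice.total = total
        invoice.save()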

Celery will not retry failed tasks on its own; retries have to be opted into per task (e.g. with retry() or autoretry_for). And unless a result backend is set up, there won’t even be a good way to audit which tasks failed (other than application logs).
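As a sketch of that per-task opt-in (the exception type, backoff, and retry count are arbitrary examples):

import requests
from celery import shared_task

@shared_task(
    autoretry_for=(requests.RequestException,),  # retry only on these errors
    retry_backoff=True,                          # exponential backoff between attempts
    max_retries=5,
)
def fetch_report(url):
    # hypothetical task: transient network errors trigger automatic retries
    return requests.get(url, timeout=30).text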

PyTorch in Docker

docker, programming, Python

I’m getting into the fray that is Generative AI since, according to some, my job as a programmer will soon be taken over by some literal code cranking machine.

There are a few things to set up.

Like most tools and libraries, the install instructions assume I have nothing else happening on my machine: just install Anaconda globally, per their simplistic assumptions, and all the other dependencies (even the Python version already on my machine) can go to hell.

Dockerizing the environment

Well. Until I have enough $$$ to buy a new machine for each new tool I want to try out, I’ll be using Docker (thank goodness I don’t need an actual VM) to isolate an environment to play with.

After some trial and error, this is a template Dockerfile and docker-compose.yml I’m using:

Dockerfile

FROM python:3.11-bookworm
ENV PYTHONUNBUFFERED=1

WORKDIR /code

RUN apt update && apt install -y \
    vim

RUN curl -O https://repo.anaconda.com/miniconda/Miniconda3-py311_24.1.2-0-Linux-x86_64.sh
RUN sh Miniconda3-py311_24.1.2-0-Linux-x86_64.sh -b
ENV PATH="$PATH:/root/miniconda3/bin"

RUN conda install -y pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
RUN conda init

The above environment has:

  • Vim
  • Python 3.11
  • Miniconda3
  • PyTorch with CUDA 12.1

docker-compose.yml

services:
  app:
    build: .
    volumes:
      - ./:/code
    tty: true
    stdin_open: true

The docker-compose.yml is a template one I use for other things. Nothing special here.
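To make sure the image actually works, I build it, shell in, and run a quick sanity check:

docker-compose build
docker-compose run --rm app bash

Then, from a python session inside that shell (conda init should have made the conda base environment, and therefore the conda-installed PyTorch, the default), a sketch of a check — the CUDA line reports False unless the container is actually given GPU access (e.g. via the NVIDIA container toolkit):

import torch

print(torch.__version__)          # the PyTorch version installed by conda
print(torch.cuda.is_available())  # True only if a GPU is visible to the container
x = torch.rand(2, 3)
print(x @ x.T)                    # tiny matmul to confirm tensors work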

Redux Toolkit w/ Vite in Docker

docker, Node.js, programming, react

Notes on setting up a Vite/React/Redux Toolkit service in Docker. That’s a mouthful.

Docker Files

Start with a name for the project (e.g. “myproject”). Then create these Docker files:

Dockerfile

FROM node:20.10

RUN apt update && apt install -y \
  xdg-utils

EXPOSE 5173

docker-compose.yml

version: '3'
services:
  app:
    build: .
    command: >
      sh -c "npm run dev"
    ports:
      - "5173:5173"
    expose:
      - "5173"
    volumes:
      - .:/app
    tty: true
    stdin_open: true

Now build the working image:

docker-compose build

Redux Toolkit Setup

Shell into a container of the working image and initialize the project:

docker-compose run --rm app /bin/bash
...
npx degit reduxjs/redux-templates/packages/vite-template-redux myproject

Of course, use the name of the project instead of myproject.

This will create the starter files for the project under myproject/.
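While still in the container, it is probably worth installing the project’s dependencies too (the degit template does not ship with node_modules):

cd myproject
npm install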

Exit the shell.

File Updates

Dockerfile

Modify Dockerfile to build the image with the project:

FROM node:20.10

RUN apt update && apt install -y \
  xdg-utils

EXPOSE 5173

WORKDIR /app/myproject

RUN npm install -g npm 

vite.config.ts

Modify the Vite config to allow it to host from a Docker container:

import { defineConfig } from "vitest/config"
import react from "@vitejs/plugin-react"

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [react()],
  server: {
    host: true,  // <-- ADD THIS
    open: true,
  },
  ....

Start

Finally, to start the server, first build the image again:

docker-compose build

Then bring up the app:

docker-compose up

This should now bring up the app at http://localhost:5173/

To work on the project’s settings (e.g. installing packages, running tests, etc.), shell in with:

docker-compose run --rm --service-ports app /bin/bash

The --service-ports option allows commands like npm run dev to start the app correctly (i.e. so that http://localhost:5173/ works). Without it, port 5173 is not mapped to the host, and docker-compose up would be the only way to run the app.
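For example, a session in that shell might look like this (the package name is just a placeholder):

docker-compose run --rm --service-ports app /bin/bash
...
npm install some-package
npm run dev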