Self-hosting Snikket and GoToSocial behind Caddy

Idea

After a couple of months running my GoToSocial instance, I wanted to add more federated, open-source services to my stack, so I decided to add an XMPP server. After a little research (a simple Google search) I decided to use Snikket.

As my instance is a 5€/month machine, my first thought was to create another similar instance and install Snikket on it. But I wondered if the whole stack could run on the same machine and maybe save some money.

As I had deployed my GoToSocial instance using Docker, and Snikket's installation guide is Docker-based, I thought it would be easy to add it to my docker-compose.yml.

Initial

Deploying a GoToSocial instance using Docker is as "simple" as running a docker-compose.yml similar to this:

docker-compose.yml
services:
  gotosocial:
    image: docker.io/superseriousbusiness/gotosocial:latest
    container_name: gotosocial
    user: 1000:1000
    networks:
      - gotosocial
    environment:
      GTS_HOST: social.jagedn.dev
      GTS_DB_TYPE: sqlite
      GTS_DB_ADDRESS: /gotosocial/storage/sqlite.db
      GTS_LETSENCRYPT_ENABLED: "false"
      GTS_PORT: "8080"
      GTS_TRUSTED_PROXIES: "172.18.0.0/16"
      GTS_LETSENCRYPT_EMAIL_ADDRESS: "jorge@xxxx.xxxx"
      GTS_WAZERO_COMPILATION_CACHE: /gotosocial/.cache
      GTS_STATUSES_MAX_CHARS: 512
      TZ: Europe/Madrid
      GTS_MEDIA_LOCAL_MAX_SIZE: 60MiB
      GTS_INSTANCE_EXPOSE_PUBLIC_TIMELINE: "true"
    expose:
      - "8080"
    volumes:
      - /home/ubuntu/gotosocial/data:/gotosocial/storage
    restart: "always"

Caddy

My first (easy) step was to include Caddy in my stack and let it work as a reverse proxy in front of my GoToSocial instance.

Add Caddy to the docker-compose.yml:

docker-compose.yml
  caddy:
    image: caddy:latest
    container_name: caddy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
      - acme_challenges:/var/www/challenges
    networks:
      - gotosocial
Caddyfile
social.jagedn.dev {
    reverse_proxy gotosocial:8080
}

Now Caddy negotiates the Let's Encrypt TLS certificate and proxies all requests to the gotosocial container.

Simple.

INFO

As a plus, this configuration allows me to add more applications behind the same proxy.
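For example, exposing another service later is just one more site block in the Caddyfile (the domain and port below are purely illustrative):

Caddyfile
blog.jagedn.dev {
    # any container attached to the same Docker network can be reached by its service name
    reverse_proxy myblog:3000
}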

The Challenge: The Snikket "Monolith"

While Snikket is often deployed as a single Docker image, running it behind an existing reverse proxy like Caddy requires breaking it down into its core components:

  • The Server (Prosody): The brain of the XMPP operations.

  • The Portal: The web interface for users and admins.

  • The Cert-Manager: To handle XMPP-specific encryption.

Following Snikket’s tutorial, it’s straightforward to deploy all these services with docker-compose, but only if they act as a whole and not behind a reverse proxy.

docker-compose.yml
  snikket_certs:
    container_name: snikket-certs
    image: snikket/snikket-cert-manager:stable
    networks:
      - gotosocial
    env_file: snikket.conf
    volumes:
      - snikket_data:/snikket
      - acme_challenges:/var/www/.well-known/acme-challenge
    restart: "unless-stopped"

  snikket_portal:
    container_name: snikket-portal
    image: snikket/snikket-web-portal:stable
    networks:
      - gotosocial
    env_file: snikket.conf
    restart: "unless-stopped"

  snikket_server:
    container_name: snikket
    image: snikket/snikket-server:stable
    networks:
      - gotosocial
    expose:
      - "5280"
      - "5281"
    ports:
      - "5222:5222" # XMPP Client
      - "5269:5269" # XMPP Federation
      - "3478:3478" # TURN
      - "3478:3478/udp"
      - "5000:5000/tcp"
      - "5000:5000/udp"
    volumes:
      - snikket_data:/snikket
    env_file: snikket.conf
    restart: "unless-stopped"
snikket.conf
SNIKKET_DOMAIN=chat.jagedn.dev
SNIKKET_ADMIN_EMAIL=jorge@edn.es

(I will omit the DNS configuration you need to do at your provider, as the aim of this post is not a HOWTO on installing Snikket.)
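For orientation only, the records boil down to pointing the main chat domain at the VPS and the groups/share subdomains at the chat domain, something along these lines (the IP is a placeholder):

DNS records
chat.jagedn.dev.          A      203.0.113.10   ; placeholder: the public IP of the VPS
groups.chat.jagedn.dev.   CNAME  chat.jagedn.dev.
share.chat.jagedn.dev.    CNAME  chat.jagedn.dev.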

Caddyfile
social.jagedn.dev {
    reverse_proxy gotosocial:8080
}

chat.jagedn.dev,
groups.chat.jagedn.dev,
share.chat.jagedn.dev {

    handle_path /.well-known/acme-challenge/* {
        root * /var/www/challenges
        file_server
    }

    handle {
        reverse_proxy snikket_portal:5765
    }
}

As the Cert-Manager is responsible for negotiating the certificate used for mobile/XMPP connections, we have to align it with Caddy. This part was solved easily by sharing the acme_challenges volume between both containers.
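To make that explicit, this is how the shared volume lines up on both sides (excerpted from the docker-compose.yml above):

docker-compose.yml
  caddy:
    volumes:
      # Caddy serves the challenge files from this path (see handle_path in the Caddyfile)
      - acme_challenges:/var/www/challenges

  snikket_certs:
    volumes:
      # the Cert-Manager writes the ACME HTTP-01 challenge files here
      - acme_challenges:/var/www/.well-known/acme-challenge

Because handle_path strips the /.well-known/acme-challenge/ prefix before file_server looks the token up under /var/www/challenges, both containers end up reading and writing the very same files through the shared volume.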

Also, Snikket’s default docker-compose publishes ports 80 and 443, but in my case Caddy already manages those ports.

Another issue is that, with the default configuration, the portal tries to talk to the server via 127.0.0.1, which fails in a multi-container Docker setup, so we need to find a way to configure the services properly.

After a lot of trial and error, and digging into the Snikket GitHub repo, I was able to figure out how to configure all the pieces.

The "Aha!" Moment

In the end it was as easy as configuring snikket.conf:

snikket.conf
SNIKKET_DOMAIN=chat.jagedn.dev
SNIKKET_ADMIN_EMAIL=jorge@edn.es

SNIKKET_TWEAK_PORTAL_INTERNAL_HTTP_INTERFACE=0.0.0.0
SNIKKET_TWEAK_INTERNAL_HTTP_INTERFACE=0.0.0.0

SNIKKET_WEB_PROSODY_ENDPOINT=http://snikket:5280
SNIKKET_TWEAK_INTERNAL_HTTP=true

To sum up:

  • configure the portal to listen on all interfaces, not only 127.0.0.1, so Caddy can proxy calls to it

  • configure the server to listen on all interfaces so the portal can validate requests against it

  • configure the portal to call the snikket container instead of 127.0.0.1

(To be honest, I am not sure if SNIKKET_TWEAK_INTERNAL_HTTP is required.)

Conclusion

Docker networking can be a real pain if you don’t keep an eye on where your traffic is going.

I spent too much time fighting 502 errors, only to realize I was pointing Caddy at the wrong port or the wrong container name.

In the end, keeping it simple and forcing the internal endpoints was all it took.

But the most important part is that, since this is all open source, I didn’t have to just "guess" why it was failing. When the logs kept showing the portal was stuck on 127.0.0.1, I could actually look into the code and the internal scripts to see which environment variables it was looking for.

Now I have a full social stack running on a tiny VPS, and it actually works because I took the time to understand what was happening under the hood.

This post has been written by a human