Install PixlStash

Pick the method that suits your setup and follow the steps below.

Windows Installer

The easiest way to get started on Windows — no Python or Node.js required.

1 — Download the installer

Go to the latest release on GitHub and download the .exe installer.

2 — Run the installer

Double-click the downloaded .exe and follow the prompts.

3 — Start PixlStash

Use the PixlStash Server shortcut in the Start Menu, then open your browser to:

http://localhost:9537

⚠️ Windows SmartScreen warning

Because the installer is not yet signed with a paid code-signing certificate, Windows SmartScreen may show a red "Windows protected your PC" dialog. This is expected. Click More info and then Run anyway to proceed.

Windows SmartScreen – click More info then Run anyway

Docker

A pre-built image is published to the GitHub Container Registry on every release — no clone required. Works on Linux, macOS, and Windows. AI inference runs on CPU by default; GPU support is optional.

1 — Install Docker

Download and install Docker Desktop (Windows / macOS) or Docker Engine (Linux).

2a — Trial Image

Just want to try it? Run this one command — no data is saved after you stop it (CPU inference only):

docker run --rm -e PIXLSTASH_HOST=0.0.0.0 -p 9537:9537 ghcr.io/pikselkroken/pixlstash:latest

Press Ctrl+C to stop; everything is discarded when the container exits.
2b — Permanent Storage

First, create the storage folder so it is owned by your user (if Docker creates it, it will be owned by root and the container won't be able to write to it):

mkdir -p ~/Pictures/pixlstash

Then start the container:

docker run -d \
  --user $(id -u):$(id -g) \
  -e HOME=/home/pixlstash \
  -e PIXLSTASH_HOST=0.0.0.0 \
  -p 9537:9537 \
  -v ~/Pictures/pixlstash:/home/pixlstash \
  --name pixlstash \
  ghcr.io/pikselkroken/pixlstash:latest

Replace ~/Pictures/pixlstash with whatever folder you want to store your data in.


The chosen folder will be mapped to the container's internal folder /home/pixlstash, which PixlStash uses for all its data storage. This means that all your pictures, tags, characters, stacks, and settings will be saved to the provided folder and persist even if you stop or remove the container.
The --user $(id -u):$(id -g) flag runs the container process as your host user, so it can read and write the folder you created above without needing any extra permission changes.
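As a quick sanity check before starting the container, you can preview what the `--user` flag expands to and confirm the data folder is owned by your user (a sketch for POSIX shells; `DATA_DIR` is just a local variable for the mount path used above):

```shell
# Show the UID:GID pair that --user $(id -u):$(id -g) expands to
echo "container user: $(id -u):$(id -g)"

# Confirm the data folder exists and is owned by the current user;
# [ -O path ] is true only when the current user owns the path.
DATA_DIR="$HOME/Pictures/pixlstash"
mkdir -p "$DATA_DIR"
if [ -O "$DATA_DIR" ]; then
    echo "ok: $DATA_DIR is writable by $(id -un)"
else
    echo "warning: $DATA_DIR is not owned by $(id -un)" >&2
fi
```

If the warning branch fires, the folder was likely created by Docker as root; fix the ownership (or recreate the folder) before starting the container.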

3 — Open in browser
http://localhost:9537
💡 To pin to a specific release, replace latest with a version tag, e.g. ghcr.io/pikselkroken/pixlstash:0.9.5.

To enable GPU inference with NVIDIA (Linux / WSL2)

Skip this section if you are happy with CPU inference. To use your NVIDIA GPU, first install the NVIDIA Container Toolkit:

Install NVIDIA Container Toolkit (Linux)
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
  | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -sL "https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list" \
  | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
  | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
Install for WSL2

On Windows, install WSL2 with an NVIDIA Windows driver ≥ 525, then run the same commands inside your WSL2 distro.

Verify GPU access

Run the following command to verify that Docker can access your NVIDIA GPU:

docker run --rm --gpus all nvidia/cuda:12.8.1-base-ubuntu24.04 nvidia-smi
Run the container with GPU support

Create the storage folder first if you haven't already (so it is owned by your user, not root):

mkdir -p ~/Pictures/pixlstash

Then start the container:

docker run -d \
  --runtime nvidia \
  --user $(id -u):$(id -g) \
  -e HOME=/home/pixlstash \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \
  -e PIXLSTASH_HOST=0.0.0.0 \
  -p 9537:9537 \
  -v ~/Pictures/pixlstash:/home/pixlstash \
  --name pixlstash \
  ghcr.io/pikselkroken/pixlstash:latest-gpu

Replace ~/Pictures/pixlstash with whatever folder you want to store your data in.

pip + Virtual Environment

Install from PyPI. Requires Python 3.10 or newer. Using a virtual environment is strongly recommended.

1 — Create and activate a virtual environment
python -m venv venv

Linux / macOS

source venv/bin/activate

Windows

venv\Scripts\activate
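To confirm the environment is active, check that the interpreter now resolves to the venv — its `sys.prefix` should point inside the `venv` folder you just created:

```shell
# With the venv active, sys.prefix points at the venv directory
python -c "import sys; print(sys.prefix)"
```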
2 — Install PixlStash
pip install pixlstash
3 — Start the server
pixlstash-server
4 — Open in browser
http://localhost:9537
💡 On first startup PixlStash downloads AI model weights (several hundred MB). An internet connection is required this first time — subsequent starts use the cached models.

Clone & Run from Source

Run directly from the Git repository. Requires Python 3.10+, Node.js 20+, and npm.

1 — Clone the repository
git clone https://github.com/pikselkroken/pixlstash.git
cd pixlstash
2 — Create and activate a virtual environment
python -m venv venv

Linux / macOS

source venv/bin/activate

Windows

venv\Scripts\activate
3 — Install Python dependencies
pip install --upgrade pip
pip install -e .
4 — Build the frontend
cd frontend && npm ci && npm run build && cd ..
5 — Start the server
pixlstash-server
6 — Open in browser
http://localhost:9537
💡 On first startup PixlStash downloads AI model weights (several hundred MB). An internet connection is required this first time — subsequent starts use the cached models.

GPU Acceleration (optional)

Skip this section if you are happy with CPU inference. To use your NVIDIA GPU, install CUDA 12.8 and the matching PyTorch and ONNX Runtime packages.

A — Install NVIDIA driver and CUDA Toolkit

Install or update your NVIDIA driver (must support CUDA 12.x), then install the CUDA 12.8 Toolkit for your OS. Verify the installation:

nvcc --version
nvidia-smi
B — Install PyTorch with CUDA 12.8
pip install torch torchvision --force-reinstall --index-url https://download.pytorch.org/whl/cu128
C — Install ONNX Runtime GPU
pip uninstall -y onnxruntime
pip install onnxruntime-gpu
D — Verify GPU availability

Linux / macOS

python - <<EOF
import torch
print("CUDA available:", torch.cuda.is_available())
EOF

Windows

py -c "import torch; print('CUDA available:', torch.cuda.is_available())"

Then set "default_device": "cuda" in your server-config.json to enable GPU inference.
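For reference, a minimal sketch of the relevant server-config.json entry — your file may contain other settings; only the default_device key is taken from this guide:

```json
{
  "default_device": "cuda"
}
```

You will likely need to restart the server for the change to take effect.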