# Jupiter
A self-hosted, wire-compatible replacement for hercules-ci.com.
Jupiter lets you run your own Hercules CI server. Unmodified hercules-ci-agent binaries connect to Jupiter instead of the official cloud service, giving you full control over your CI infrastructure while keeping the same agent tooling and Nix integration.
## Features

- Wire-compatible with the real Hercules CI agent protocol (WebSocket + JSON, matching the Haskell `aeson` serialization)
- Full CI pipeline: evaluation, building, and effects (deployments, notifications, etc.)
- Forge integrations: GitHub (App-based), Gitea / Forgejo, and Radicle
- Built-in Nix binary cache compatible with the `cache.nixos.org` protocol
- Admin CLI (`jupiter-ctl`) for managing accounts, projects, agents, jobs, and tokens
- SQLite by default, with a trait-based storage abstraction designed for adding PostgreSQL
## Architecture

```
Forge webhook ──> /webhooks/{github,gitea} ──> SchedulerEvent::ForgeEvent
                                                         │
                                                         v
CLI / UI ──> REST API (/api/v1/...)               SchedulerEngine
                                                         │
                                          dispatches tasks via AgentHub
                                                         │
                                                         v
              hercules-ci-agent <── WebSocket (/api/v1/agent/socket)
```
The server is built with Axum and Tokio. The scheduler runs as a single background task receiving events over a bounded mpsc channel. Agents connect via WebSocket and are dispatched evaluation, build, and effect tasks.
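The single-background-task pattern can be sketched in plain Rust. This is an illustration only: it uses a std `sync_channel` in place of the server's bounded tokio mpsc channel, and the `SchedulerEvent` variants and messages here are invented for the example, not Jupiter's actual types.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical event type standing in for the real SchedulerEvent.
#[derive(Debug)]
enum SchedulerEvent {
    ForgeEvent { repo: String, git_ref: String },
    AgentDisconnected { agent_id: u64 },
}

// Pure handler so the loop body stays small and testable.
fn handle(event: &SchedulerEvent) -> String {
    match event {
        SchedulerEvent::ForgeEvent { repo, git_ref } => {
            format!("queue job for {repo}@{git_ref}")
        }
        SchedulerEvent::AgentDisconnected { agent_id } => {
            format!("re-queue tasks of agent {agent_id}")
        }
    }
}

fn main() {
    // Bounded channel: senders block when the queue is full, giving the
    // same backpressure behaviour as a bounded tokio mpsc channel.
    let (tx, rx) = mpsc::sync_channel::<SchedulerEvent>(64);

    // One background task owns all scheduler state, so no locks are needed.
    let scheduler = thread::spawn(move || {
        for event in rx {
            println!("{}", handle(&event));
        }
    });

    tx.send(SchedulerEvent::ForgeEvent {
        repo: "myorg/app".into(),
        git_ref: "main".into(),
    })
    .unwrap();
    drop(tx); // closing all senders ends the event loop
    scheduler.join().unwrap();
}
```

Keeping all mutable state inside one task is what lets the scheduler stay lock-free: everything else communicates with it only by sending events.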
## Job pipeline

```
Pending → Evaluating → Building → RunningEffects → Succeeded / Failed
```
Builds are deduplicated by derivation store path across jobs. Effects on the same (project, ref) pair are serialized via sequence numbers to prevent concurrent deploys. If an agent disconnects, its in-flight tasks are re-queued to Pending.
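The deduplication described above can be sketched as a map from derivation store path to subscribed jobs. This is a hedged illustration of the idea; the actual types and method names in jupiter-scheduler will differ.

```rust
use std::collections::HashMap;

type JobId = u64;

/// Illustrative dedup table: at most one in-flight build per derivation
/// store path, with every interested job recorded as a subscriber.
#[derive(Default)]
struct BuildDedup {
    waiting: HashMap<String, Vec<JobId>>,
}

impl BuildDedup {
    /// Returns true if the caller should start a new build,
    /// false if an identical build is already in flight.
    fn request(&mut self, drv_path: &str, job: JobId) -> bool {
        let subs = self.waiting.entry(drv_path.to_string()).or_default();
        subs.push(job);
        subs.len() == 1
    }

    /// On completion, drain and return every subscribed job so each
    /// can be notified exactly once.
    fn complete(&mut self, drv_path: &str) -> Vec<JobId> {
        self.waiting.remove(drv_path).unwrap_or_default()
    }
}

fn main() {
    let mut dedup = BuildDedup::default();
    let drv = "/nix/store/abc-hello.drv";
    assert!(dedup.request(drv, 1));  // first job starts the build
    assert!(!dedup.request(drv, 2)); // second job only subscribes
    assert_eq!(dedup.complete(drv), vec![1, 2]); // both jobs are notified
}
```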
## Crate structure

| Crate | Purpose |
|---|---|
| `jupiter-server` | Axum HTTP/WebSocket server, route handlers, auth, config |
| `jupiter-api-types` | Wire-compatible type definitions shared across all crates |
| `jupiter-db` | `StorageBackend` trait and SQLite implementation (sqlx) |
| `jupiter-scheduler` | Event-driven scheduler engine driving the job pipeline |
| `jupiter-forge` | `ForgeProvider` trait with GitHub, Gitea, and Radicle backends |
| `jupiter-cache` | Optional built-in Nix binary cache (narinfo + NAR storage) |
| `jupiter-cli` | `jupiter-ctl` admin CLI |
## Quick start

### Prerequisites

- Rust 1.75+ (2021 edition)
- A `hercules-ci-agent` binary (the standard one from Nixpkgs)
### Build

```sh
cargo build --release
```

This produces two binaries:

- `target/release/jupiter-server` -- the CI server
- `target/release/jupiter-ctl` -- the admin CLI
### Run

```sh
# Start with built-in defaults (localhost:3000, SQLite at ./jupiter.db)
./target/release/jupiter-server

# Or with a custom config file
./target/release/jupiter-server /path/to/jupiter.toml
```
On first run without a config file, Jupiter uses sensible development defaults and creates the SQLite database automatically.
### Set up an account and agent token

```sh
# Create an account
jupiter-ctl account create myorg

# Create a cluster join token (shown only once)
jupiter-ctl token create --account-id <ACCOUNT_ID> --name "my-agent"
```
Put the token in your hercules-ci-agent configuration and point the agent at your Jupiter server.
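For reference, a minimal agent configuration pointing at Jupiter might look like the sketch below. It assumes the standard agent's `apiBaseUrl`, `baseDirectory`, `clusterJoinTokenPath`, and `concurrentTasks` settings; the paths are examples, so adjust them to your deployment.

```toml
# agent.toml -- minimal sketch, paths are examples
apiBaseUrl = "https://ci.example.com"
baseDirectory = "/var/lib/hercules-ci-agent"
clusterJoinTokenPath = "/var/lib/hercules-ci-agent/cluster-join-token.key"
concurrentTasks = 4
```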
## Configuration

Jupiter is configured via a TOML file. All fields use camelCase to match the Hercules CI convention.

```toml
listen = "0.0.0.0:3000"
baseUrl = "https://ci.example.com"
jwtPrivateKey = "change-me-in-production"

[database]
type = "sqlite"
path = "/var/lib/jupiter/jupiter.db"

# GitHub App integration
[[forges]]
type = "GitHub"
appId = "12345"
privateKeyPath = "/etc/jupiter/github-app.pem"
webhookSecret = "your-webhook-secret"

# Gitea / Forgejo integration
[[forges]]
type = "Gitea"
baseUrl = "https://gitea.example.com"
apiToken = "your-gitea-token"
webhookSecret = "your-webhook-secret"

# Optional built-in Nix binary cache
[binaryCache]
storage = "local"
path = "/var/cache/jupiter"
maxSizeGb = 50
```
### Environment variables

| Variable | Purpose |
|---|---|
| `RUST_LOG` | Tracing filter (default: `jupiter=info`) |
| `JUPITER_URL` | Server URL for `jupiter-ctl` (default: `http://localhost:3000`) |
| `JUPITER_TOKEN` | Bearer token for `jupiter-ctl` |
## CLI reference

`jupiter-ctl` provides administrative access to every server resource:

```sh
jupiter-ctl health                                # Server liveness check
jupiter-ctl account list                          # List accounts
jupiter-ctl account create <name>                 # Create account
jupiter-ctl agent list                            # List connected agents
jupiter-ctl project list                          # List projects
jupiter-ctl project create --account-id ... --repo-id ... --name ...
jupiter-ctl job list --project-id <id>            # List jobs
jupiter-ctl job rerun <id>                        # Re-run a job
jupiter-ctl job cancel <id>                       # Cancel a running job
jupiter-ctl token create --account-id ... --name ...  # New join token
jupiter-ctl token revoke <id>                     # Revoke a join token
jupiter-ctl state list --project-id <id>          # List state files
jupiter-ctl state get --project-id ... --name ... [--output file]
jupiter-ctl state put --project-id ... --name ... --input file
```
## API

The REST API is served under `/api/v1/`. Key endpoint groups:

| Path | Description |
|---|---|
| `GET /api/v1/health` | Health check |
| `/api/v1/accounts` | Account CRUD |
| `/api/v1/agents` | Agent listing |
| `/api/v1/projects` | Project CRUD, enable/disable |
| `/api/v1/projects/{id}/jobs` | Jobs for a project |
| `/api/v1/jobs/{id}` | Job details, rerun, cancel |
| `/api/v1/jobs/{id}/builds` | Builds for a job |
| `/api/v1/jobs/{id}/effects` | Effects for a job |
| `/api/v1/projects/{id}/state/{name}/data` | State file upload/download |
| `/api/v1/agent/socket` | Agent WebSocket endpoint |
| `POST /webhooks/github` | GitHub webhook receiver |
| `POST /webhooks/gitea` | Gitea webhook receiver |
| `POST /auth/token` | JWT token issuance |
When the binary cache is enabled, the Nix cache protocol is also served:

| Path | Description |
|---|---|
| `GET /nix-cache-info` | Cache metadata |
| `GET /{hash}.narinfo` | NARInfo lookup |
| `PUT /{hash}.narinfo` | NARInfo upload |
| `GET /nar/{file}` | NAR archive download |
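To consume the built-in cache from a Nix client, the server can be listed as a substituter in `nix.conf`. This is a sketch only: whether a `trusted-public-keys` entry is also required depends on whether your deployment signs NARs.

```
# /etc/nix/nix.conf (example)
substituters = https://ci.example.com https://cache.nixos.org
```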
## License

Apache-2.0