Compare commits

...

10 Commits

11 changed files with 439 additions and 0 deletions

.gitignore vendored Normal file

@@ -0,0 +1,2 @@
.DS_Store
*.secret

nomad_jobs/REAMDE.md Normal file

@@ -0,0 +1,42 @@
# Nomad Job Specs
This directory contains two sub-directories: `apps` and `services`. If you are going to ignore the rest of this README, you should at least read how this repository [Manages Secrets](#Managing%20Secrets).
## Apps Directory
The `apps` directory includes the Nomad job specifications for deployable applications along with their service dependencies. In this directory, you can find the Nomad job spec for the Penpot application, which includes all the necessary services required for its deployment.
These job specs are quite large and tough to reason about, so it is recommended that you use the `services` directory instead, deploying each application only after its dependencies are in place. The standalone specs here are simply a good way to get an app up and running quickly. You will still need to investigate them to make sure they meet your requirements, such as having the proper host volumes available. (By default, host volumes are not used, meaning data will not persist across restarts.)
**WARNING**: The orchestrator could restart your service at any time. If you do not have a host volume, you will lose all your data.
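As a rough sketch, getting the standalone Gitea app running could be as simple as the following, assuming the Nomad CLI is configured to reach your cluster and you have already addressed the warnings at the top of the spec:
```bash
# Run from the nomad_jobs directory after editing the password and domain warnings in the spec
nomad job run apps/gitea-standalone.nomad.hcl
```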
## Services Directory
The `services` directory contains standalone services that can be deployed without embedding dependencies in the job specifications. These are much smaller specs and easier to update, but the administrator needs to ensure that the necessary services are deployed in advance, such as Postgres being available before deploying Gitea. These dependencies are documented in each service's README.
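For example, a rough deployment order for Gitea from the `services` directory might look like this (the exact file paths under `services/` are illustrative):
```bash
# Deploy the dependency first, then the application that relies on it
nomad job run services/postgres/postgres.nomad.hcl.secret  # rendered with secrets first, see Managing Secrets below
nomad job run services/gitea/gitea.nomad.hcl
```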
## Managing Secrets
Many of the Nomad jobs require secrets to be placed in the job spec. While you could integrate with a secrets provider like [Hashicorp Vault](https://www.vaultproject.io/), that is an additional service to manage and maintain. I definitely encourage you to take a look, as it provides a lot of value such as secret rotation and auditing.
This repo uses [1password secret references](https://developer.1password.com/docs/cli/secret-references) for anything you would need to set upon deployment, such as credentials or cryptographic strings. This lets you easily see which fields you may need to set and provides a secure way to manage all the secrets you need to deploy your applications, without risking them being added to version control by mistake.
If you choose to use [1password](https://1password.com/), you will need to install the [1password cli](https://support.1password.com/command-line-getting-started/) and log in to your account. You can then use the `op` command to retrieve secrets from their respective vaults and create an output file with the secrets injected.
The [1password cli](https://developer.1password.com/docs/cli/) is used to retrieve secrets and inject them into the job spec. This is done through the `op inject` command documented [here](https://developer.1password.com/docs/cli/secrets-config-files#step-2-inject-the-secrets).
```bash
op inject -i postgres.nomad.hcl -o postgres.nomad.hcl.secret
```
> Anything ending in `.secret` is ignored by git, so you can safely render the secrets into the job spec without worrying about them being committed to version control.
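Putting it together, a typical workflow might look like the following sketch, assuming the Nomad CLI can reach your cluster:
```bash
op inject -i postgres.nomad.hcl -o postgres.nomad.hcl.secret  # render secrets into a git-ignored copy
nomad job run postgres.nomad.hcl.secret                       # submit the rendered job spec
rm postgres.nomad.hcl.secret                                  # optionally remove the rendered copy afterwards
```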
## Available Services
| Service | Description by LLM | Service Readme | App Spec |
| --- | --- | --- | --- |
| Caddy | Caddy is a web server and reverse proxy with automatic HTTPS written in Go. | [Service Readme](./services/caddy/README.md) | |
| Gitea | Gitea is a self-hosted Git service written in Go. | [Service Readme](./services/gitea/README.md) | [App Spec](./apps/gitea-standalone.nomad.hcl) |
| Minio | MinIO is a high-performance object storage server compatible with Amazon S3 APIs. | [Service Readme](./services/minio/README.md) | |
| Penpot | Penpot is the first Open Source design and prototyping platform meant for cross-domain teams. Not dependent on operating systems, Penpot is web based and works with open web standards (SVG). For all and empowered by the community. | [Service Readme](./services/penpot/README.md) | [App Spec](./apps/penpot-standalone.nomad.hcl) |
| Postgres | PostgreSQL is a powerful, open source object-relational database system with over 30 years of active development that has earned it a strong reputation for reliability, feature robustness, and performance. | [Service Readme](./services/postgres/README.md) | |
| Redis | Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. | [Service Readme](./services/redis/README.md) | |


@@ -0,0 +1,110 @@
# Deploy Gitea with dependencies encapsulated in the Nomad job spec. This spec
# will not persist data between restarts. Good for getting started.
# WARNING: Set a secure password for the postgres user. Line 38
# WARNING: Update the domain gitea should be deployed to on traefik. Line 90
job "gitea-standalone" {
datacenters = ["dc1"]
group "database" {
count = 1
network {
mode = "bridge"
}
service {
name = "gitea-postgres-standalone"
port = "5432"
tags = ["traefik.enable=false"] # Hide postgres from traefik
connect {
sidecar_service {
tags = ["traefik.enable=false"] # Hide postgres envoy from traefik
}
}
}
task "postgres" {
driver = "docker"
config {
image = "postgres:16.1-alpine3.19"
}
env = {
"POSTGRES_USER"="gitea",
"POSTGRES_PASSWORD"="not-a-secure-password",
"POSTGRES_DB"="gitea"
}
}
}
group "frontend" {
count = 1
network {
mode = "bridge"
port "ingress" {
to = 3000
}
}
# Attach to Postgres Instance
service {
name = "postgres-gitea-standalone-envoy"
port = "ingress"
tags = ["traefik.enable=false"] # Hide envoy from traefik
connect {
sidecar_service {
proxy {
upstreams {
destination_name = "gitea-postgres-standalone"
local_bind_address = "127.0.0.1"
local_bind_port = 5432
}
}
tags = ["traefik.enable=false"] # Hide envoy from traefik
}
}
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
}
# Expose to Traefik as a service
service {
name = "gitea-standalone"
port = "ingress"
tags = [
"traefik.enable=true",
"traefik.http.routers.gitea-standalone.tls=true",
"traefik.http.routers.gitea-standalone.entrypoints=websecure",
"traefik.http.routers.gitea-standalone.rule=Host(`git.example.local`)"
]
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
}
task "gitea-standalone" {
driver = "docker"
config {
image = "gitea/gitea:1.21.1"
ports = ["ingress"]
}
}
}
}


@@ -0,0 +1,44 @@
job "caddy" {
datacenters = ["dc1"]
type = "service"
group "caddy" {
count = 1
network {
port "http" {
to = 80
}
}
service {
name = "caddy"
provider = "consul"
port = "http"
tags = [
"traefik.enable=true",
"traefik.http.routers.caddy.tls=true",
"traefik.http.routers.caddy.entrypoints=websecure",
"traefik.http.routers.caddy.rule=Host(`example.local`)"
]
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
}
task "caddy" {
driver = "docker"
config {
image = "caddy:alpine"
ports = ["http"]
}
}
}
}


@@ -0,0 +1,16 @@
# Caddy Web Server
Caddy is a simple and performant web server / reverse proxy written in Go. It is designed to be easy to use and configure. It is great for testing network connectivity and routing, similar to how someone might use nginx just to verify that the host can be reached and serve content.
While you can absolutely use Caddy as a reverse proxy, Traefik integrates easily with Consul for service discovery, so Traefik is used as the reverse proxy here and this Nomad job spec is placed behind it.
## Nomad Job for Caddy
There is no Caddy configuration applied in this job spec. If you run it, it will register with Consul and be available to Traefik for routing. If the domain name is configured correctly, you should be able to reach the Caddy welcome page.
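A quick way to sanity-check routing once the job is running, assuming `example.local` resolves to your Traefik instance and you tolerate an untrusted TLS certificate:
```bash
# Expect a 200 response served by Caddy behind Traefik
curl -kI https://example.local
```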
## TODO
If you want to deploy this, you will need to update the domain name in the job spec.
| Line | Default | Adjustment |
| --- | --- | --- |
| 23 | `"traefik.http.routers.caddy.rule=Host('example.local')"` | Change `example.local` to your domain name |


@@ -0,0 +1,66 @@
job "gitea" {
datacenters = ["dc1"]
type = "service"
group "application" {
count = 1
network {
mode = "bridge"
port "ingress" {
to = 3000
}
}
volume "gitea-data" {
type = "host"
source = "gitea-data"
}
service {
name = "gitea"
port = "ingress"
connect {
sidecar_service {
proxy {
upstreams {
destination_name = "postgres"
local_bind_address = "127.0.0.1"
local_bind_port = 5432
}
}
tags = ["traefik.enable=false"] # Hide envoy from traefik
}
}
tags = [
"traefik.enable=true",
"traefik.http.routers.gitea.tls=true",
"traefik.http.routers.gitea.entrypoints=websecure",
"traefik.http.routers.gitea.rule=Host(`git.example.local`)"
]
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
}
task "gitea" {
driver = "docker"
config {
image = "gitea/gitea:1.21.1"
ports = ["ingress"]
}
volume_mount {
volume = "gitea-data"
destination = "/data"
}
}
}
}


@@ -0,0 +1,26 @@
# Gitea
Gitea is a self-hosted Git service and a great alternative to GitHub or GitLab. It is lightweight, easy to use, and easy to deploy and manage while still providing functionality like SSO and LDAP integration.
Gitea should be configured not to use SSH, as the job spec does not expose an SSH port; this keeps SSH from being exposed outside of the home network. If you want to use SSH, you will need to modify the job spec to expose the port and configure the service to use it. You can still run git operations over HTTPS.
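For example, cloning a repository over HTTPS instead of SSH (substitute whatever domain you configure in the job spec):
```bash
git clone https://git.example.local/<user>/<repo>.git
```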
## Nomad Job for Gitea
You will need to modify the job spec items listed under [TODO](./readme.md#TODO), but there are no Gitea-specific adjustments needed. If you run it, it will register with Consul and be available to Traefik for routing. If the domain name is configured correctly, you should be able to reach the Gitea setup page to make the needed configuration changes.
## Service Dependencies
- A Valid [Host Volume](../../../host_init/README.md#Storage%20and%20ZFS)
- [Postgres](../postgres/readme.md)
## TODO
If you want to deploy this, you will need to verify you have a valid host volume and update the domain name in the job spec.
| Line | Default | Adjustment |
| --- | --- | --- |
| 17 | `source = "gitea-data"` | Change `gitea-data` to a valid host volume name |
| 46 | `"traefik.http.routers.gitea.rule=Host('git.example.local')"` | Change `git.example.local` to your domain name |
| 66 | `volume = "gitea-data"` | Change `gitea-data` to the host volume defined on line 15 if applicable |
## Configuring Gitea
There is no need to embed secrets in the Nomad job spec. When you first visit the domain name you configured, you will be prompted to configure Gitea. Postgres is made available to the container on the standard `5432` port through the service mesh, so you can select PostgreSQL as the database type, use `127.0.0.1:5432` as the address, and enter the username, password, and database name you created for Gitea to use.
If you need help making those credentials, take a look at the [postgres readme](../postgres/readme.md#Make%20a%20New%20Database).


@@ -0,0 +1,53 @@
job "postgres" {
datacenters = ["dc1"]
type = "service"
group "database" {
count = 1
network {
mode = "bridge"
port "ingress" {
to = 5432
}
}
volume "postgres-data" {
type = "host"
source = "postgres"
}
service {
# Make available to other services by the 'postgres' name
name = "postgres"
port = "5432"
tags = ["traefik.enable=false"] # Hide postgres from traefik
# Make available through the consul service mesh
connect {
sidecar_service {
tags = ["traefik.enable=false"] # Hide postgres envoy from traefik
}
}
}
task "postgres" {
driver = "docker"
volume_mount {
volume = "postgres-data"
destination = "/var/lib/postgresql/data"
}
config {
image = "postgres:16.1-alpine3.19"
ports = ["ingress"]
}
env = {
POSTGRES_USER="op://InfraSecrets/Postgres Root/username",
POSTGRES_PASSWORD="op://InfraSecrets/Postgres Root/password"
}
}
}
}


@@ -0,0 +1,36 @@
# Postgres
Postgres is a relational database that is open source and widely used. This is a single instance of Postgres relying on the host volume for storage. This is not a highly available or fault-tolerant setup; the only data redundancy is at the storage layer through ZFS, and that is on a single host. If high availability or scalability is a requirement for you, consider a cloud provider like [Neon](https://neon.tech/) or a more robust setup.
## Nomad Job for Postgres
Nomad requires a Host Volume to persist data across restarts. This limits the portability of the running instance but is simple to configure. If you want dynamic storage, you will need to modify the job spec to use a different storage driver such as [Ceph](https://docs.ceph.com/en/latest/start/intro/) or [Seaweedfs](https://github.com/seaweedfs/seaweedfs/wiki).
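Before deploying, it can help to confirm the client actually exposes the host volume you plan to use; a rough sketch using the standard Nomad CLI:
```bash
# Verbose node status lists the host volumes registered on the local client
nomad node status -self -verbose | grep -A 5 "Host Volumes"
```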
Postgres will have a default root user, which is a good one to use for creating application-specific users and databases. These are defined through environment variables in the Nomad job spec, so you only need to edit the job spec to meet your requirements. If you run it, it will register with Consul but be hidden from Traefik, meaning you can only access it through the service mesh.
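Once deployed, a quick way to confirm the registration, assuming the `consul` CLI can reach your Consul cluster:
```bash
# The service and its Connect sidecar should both appear
consul catalog services | grep postgres
```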
## Service Dependencies
- A Valid [Host Volume](../../../host_init/README.md#Storage%20and%20ZFS)
## TODO
If you want to deploy this, you will need to verify you have a valid host volume and set the initial postgres root credentials.
| Line | Default | Adjustment |
| --- | --- | --- |
| 17 | `source = "postgres"` | Change `postgres` to a valid host volume name |
| 38 | `volume = "postgres-data"` | Change `postgres-data` to the host volume defined on line 15 if applicable |
| 48 | `"POSTGRES_USER"="op://InfraSecrets/Postgres Root/username"` | Change the value to the root username you want. By default, this is a 1password path. See [Managing Secrets](../../REAMDE.md#Managing_Secrets) for more information |
| 49 | `"POSTGRES_PASSWORD"="op://InfraSecrets/Postgres Root/password"` | Change the value to the root password you want. By default, this is a 1password path. See [Managing Secrets](../../REAMDE.md#Managing_Secrets) for more information |
## Make a New Database
You can easily deploy a new Postgres instance by changing the job name on `line 1` and the service name it is exposed as on `line 22`. You should of course make the other changes mentioned above in the TODO section as well, so you do not cause resource conflicts or reuse credentials.
Alternatively, since Postgres is a relational database management system (RDBMS), a single instance can host multiple databases for different applications. This is not recommended for production environments as it is a single point of failure: if the Postgres instance goes down, all the databases go down. It is, however, a good way to reduce the overhead of running multiple database instances.
You can make a new user and database by entering the exec shell of the postgres container through the Nomad UI and running `psql` with the root credentials. From there, run the following example commands to make a user and database for your application:
```sql
CREATE USER appname WITH PASSWORD 'not-a-secure-password';
CREATE DATABASE appname WITH OWNER appname;
```
The user and database can share the same name because they are records in different system tables, but feel free to name them whatever you think is best.
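If you prefer the CLI to the Nomad UI, a rough equivalent of the steps above (the allocation ID comes from `nomad job status postgres`, and the placeholders are whatever root credentials you configured):
```bash
# Open psql inside the running postgres allocation
nomad alloc exec -task postgres <alloc-id> psql -U <root-username> -d postgres
```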


@@ -0,0 +1,7 @@
# Redis
Redis is a Remote Dictionary Server (that's where Redis gets its name) that is open source and widely used. This is a single instance of in-memory storage intended primarily as a caching layer for other services where data does not need to be persisted. This is not a highly available or fault-tolerant setup. If high availability, data persistence, or scalability is a requirement for you, consider a cloud provider like [upstash](https://upstash.com/) or a more robust setup.
## Nomad Job for Redis
Redis requires no configuration but is only available through the service mesh. This means you will need to deploy a service that can connect to the service mesh to access Redis. This is a good thing, because it means you can easily deploy a Redis instance for your application without having to worry about the security of the instance.
If you need to use the CLI, you can access it through Nomad's exec shell. The shell defaults to `/bin/bash`, which does not exist on Alpine Linux, so you will need to change it to `/bin/ash`. Once you are in the shell, you can run the `redis-cli` command to connect to the Redis instance.
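A rough CLI equivalent of the UI exec shell described above (the allocation ID comes from `nomad job status redis-cache`):
```bash
# Open an ash shell inside the redis allocation, then use redis-cli from there
nomad alloc exec -task redis <alloc-id> /bin/ash
redis-cli ping   # run inside the container; a healthy instance answers PONG
```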


@@ -0,0 +1,37 @@
job "redis-cache" {
datacenters = ["dc1"]
group "cache" {
count = 1
network {
mode = "bridge"
port "redis" {
to = 6379
}
}
service {
# Make available to other services by the 'redis-cache' name
name = "redis-cache"
port = "6379"
tags = ["traefik.enable=false"] # Hide redis from traefik
# Make available through the consul service mesh
connect {
sidecar_service {
tags = ["traefik.enable=false"] # Hide redis envoy from traefik
}
}
}
task "redis" {
driver = "docker"
config {
image = "redis:7.2.3-alpine"
ports = ["redis"]
}
}
}
}