Compare commits

..

68 Commits

Author SHA1 Message Date
eea4c61537 Quick A record for testing static website migration 2026-01-31 12:29:49 -08:00
ee860c6e1f Common names now line up with hostnames in the certificate through the single ingress 🔥 2026-01-13 23:18:41 -08:00
1c11410c2d More resource refactors, upgrades, and fixes for future work
Housekeeping, but the wiki got hosed :(( 2026-01-07 00:53:11 -08:00
4d71994b85 Upgrading provider versions 2026-01-07 00:21:12 -08:00
79cb4eb1a6 Cleaning up unused code 2026-01-07 00:02:11 -08:00
e8817fe093 Adding wiki to DNS and opening it up on the ingress for public read access 2026-01-06 19:12:31 -08:00
97bffd2042 Adding note regarding git.shockrah.xyz & code.shockrah.xyz 2026-01-06 19:06:23 -08:00
37305fd74e Exposing 2222 in the gitea service; however, the ingress still needs configuration 2026-01-06 00:06:47 -08:00
555124bf2f Shortening ingress definition 2026-01-03 23:07:33 -08:00
e209da949b Adding wiki service with a basic page for now 2026-01-03 21:43:16 -08:00
caa2eba639 Removing unused helm charts 2025-12-28 19:30:13 -08:00
982669ed4a Cleaning up the logging namespace and resources as they are not providing value 2025-12-12 14:41:29 -08:00
4446ef813f Fixing auto_scaler issue with root node pool in athens cluster 2025-12-12 14:40:54 -08:00
9dc2f1d769 Adding sample files and Fluent Bit configs which still need some work 2025-11-10 14:18:05 -08:00
01b7b4ced8 Moving logging related things to the new logging namespace 2025-11-05 21:55:40 -08:00
29cdfcb695 OpenObserve minimal setup running now with its own namespace and volumes 2025-11-04 23:24:16 -08:00
bbbc9ed477 Upsizing the singular node to accommodate the new observability stack 2025-11-04 23:20:03 -08:00
d64c5526e6 Creating namespace for OpenObserve 2025-11-04 23:18:39 -08:00
469b3d08ce Adding hashicorp/random provider 2025-11-04 23:16:58 -08:00
7f5b3205d0 Ingress functional, however this is all in a cooked-af namespace 2025-11-03 02:14:06 -08:00
67ff5ce729 Gitea appearing functional with the service in place, now waiting on LB setup 2025-11-03 01:48:29 -08:00
6aadb47c61 Adding code.shockrah.xyz to DNS member list 2025-11-03 01:48:09 -08:00
0624161f53 Fixing the PV for gitea which now lives in the dev namespace 2025-11-03 01:30:16 -08:00
c6b2a062e9 Creating dev namespace 2025-11-03 01:17:54 -08:00
718647f617 Adding a new uptime service to configure later on
For now I'm staging this in the playground namespace since IDK if I'm going to keep it 5ever + it's an excuse to learn how to use basic volumes 2025-11-02 21:31:22 -08:00
cfe631eba7 Creating PVC for gitea setup 2025-10-22 16:15:04 -07:00
29e049cf7f Moving legacy yaml 2025-10-21 15:15:24 -07:00
990d29ae6c Adding annotations & host field to ingress
Also updating the staging target to the production target for the Let's Encrypt cluster issuer 2025-10-21 12:42:02 -07:00
859201109e Adding required annotations for cert-manager on the ingress resource 2025-10-03 18:04:20 -07:00
de3bff8f14 Creating cluster issuer with yaml piped into terraform 2025-10-03 18:01:16 -07:00
54a6ddbe5d Changing out the kubectl provider for a new one 2025-10-03 17:59:01 -07:00
82333fe6ce Setting up cert-manager helm_release 2025-10-03 17:25:23 -07:00
cddf67de2f Updating health ingress resource with better naming/referencing 2025-09-28 15:56:24 -07:00
affa03bed5 Updating DNS for load balancer sanity.shockrah.xyz A record 2025-09-28 13:56:47 -07:00
34e1f6afdf Converting backend provider helm config to use config.yaml 2025-09-27 14:38:26 -07:00
fd9bd290af Adding support for helm releases
Intended for setting up the nginx-ingress controller 2025-09-20 11:05:00 -07:00
d992556032 Basic sanity service now working under public DNS 2025-09-17 23:32:50 -07:00
fce73d06e0 Adding DNS vars for sanity.shockrah.xyz 2025-09-17 23:08:33 -07:00
7f5d81f0ee Deployment and Service for a simple health-check tier service 2025-09-17 22:37:03 -07:00
410790765f Creating new namespace in cluster for random k8s experiments 2025-09-17 22:33:34 -07:00
9454e03f53 Example service now uses TLS for some reason 2025-09-09 18:05:12 -07:00
e6ed85920d Creating semi-functional TLS cert with k8s
Certificate resource is created but not deployed at this time 2025-09-08 21:00:24 -07:00
2775d354f8 Creating functional ingress 2025-09-08 20:58:44 -07:00
1f6f013634 Cleaning up unused resources 2025-09-08 20:58:29 -07:00
778b995980 Adding DNS entry for VKE LB 2025-09-03 14:27:52 -07:00
fc897bdd0e New yaml for a working MVP
Still need to add things like TLS, but this will basically be the template for routing and service setup going forward 2025-08-29 16:34:18 -07:00
8f06ef269a Basic health setup 2025-08-27 18:13:39 -07:00
f15da0c88d Removing old kubernetes tf infrastructure 2025-08-27 00:30:38 -07:00
c602773657 Removing tarpit project to save on costs 2025-08-08 07:32:33 -07:00
cd908d9c14 Sample cronjob for k3s 2025-08-05 21:50:30 -07:00
56e9c0ae4a Merge branch 'master' of ssh://git.shockrah.xyz:2222/shockrah/infra 2025-07-25 12:43:56 -07:00
30bc6ee2fa Ignore kubectl config 2025-07-25 12:43:53 -07:00
cd9822bb85 Basic health service on port 30808 on the new k3s cluster 2025-07-25 12:43:30 -07:00
0efe6ca642 Removing nomad configs 2025-07-25 12:02:31 -07:00
2ef4b00097 Removing old nomad configs 2025-07-14 20:43:40 -07:00
e183055282 Nomad removal 2025-07-14 20:15:54 -07:00
514909fc8d Removing nomad and consul in favor of k3s for a better-supported architecture 2025-07-14 20:14:37 -07:00
5b4a440cb4 Removing the last remnants of nomad and getting k3s set up 2025-07-10 20:47:27 -07:00
826d334c3c Completing the state required to set up k3s 2025-07-10 14:06:53 -07:00
77590b067a Create the static host volume for the new NFS 2025-06-18 17:08:00 -07:00
850570faf5 Creating simple bastion host for testing deployment setup scripts 2025-06-16 15:15:09 -07:00
12831fbaf3 Adding vars for the new bastion host 2025-06-12 09:20:21 -07:00
a6123dd7e2 Adding tls into required providers 2025-06-12 09:16:10 -07:00
9c2e0a84d7 Creating VKE cluster in a private VPC 2025-06-04 23:29:26 -07:00
1281ea8857 Simplifying vars 2025-05-28 21:22:10 -07:00
ee2d502ca6 Slimming down the cluster definition 2025-05-28 18:24:31 -07:00
88059a5e0f Upgrading providers 2025-05-28 17:43:56 -07:00
4024809cc4 Removing scripts that won't ever be used again 2025-05-29 00:35:32 -07:00
61 changed files with 741 additions and 906 deletions

ansible/nomad.yaml Normal file (9 lines added)
View File

@@ -0,0 +1,9 @@
---
- name: Setup all the responsibilities of the nomad server
  hosts: nigel.local
  remote_user: nigel
  tasks:
    - name: Apply the nomad role
      ansible.builtin.include_role:
        name: nomad

View File

@@ -1,12 +1,14 @@
 ---
-- name: Setup bare metal requirements for nomad
+- name: Setup bare metal requirements
   hosts: nigel.local
   remote_user: nigel
   tasks:
-    - name: Setup basic role on nigel
-      tags:
-        - setup
-        - nomad
-        - volumes
+    - name: Apply the base role to the nuc
       ansible.builtin.include_role:
-        name: local-server-head
+        name: base
+    - name: Apply the k3s base role
+      ansible.builtin.include_role:
+        name: k3s
+    - name: Apply the proxy role
+      ansible.builtin.include_role:
+        name: proxy

View File

@@ -0,0 +1,8 @@
- name: Download the setup script
  ansible.builtin.get_url:
    url: https://get.k3s.io
    dest: /tmp/k3s.sh
    mode: "0644"
- name: Run installation script
  ansible.builtin.command:
    cmd: bash /tmp/k3s.sh

View File

@@ -15,7 +15,7 @@
     become: true
     tags:
       - setup
-  - name: Run through nomad installation steps
+  - name: Run through nomad removal steps
     tags: nomad
     ansible.builtin.include_tasks:
       file: nomad.yaml

View File

@@ -1,55 +0,0 @@
- name: Ensure prerequisite packages are installed
ansible.builtin.apt:
pkg:
- wget
- gpg
- coreutils
update_cache: true
- name: Hashicorp repo setup
vars:
keypath: /usr/share/keyrings/hashicorp-archive-keyring.gpg
gpgpath: /tmp/hashicorp.gpg
block:
- name: Download the hashicorp GPG Key
ansible.builtin.get_url:
url: https://apt.releases.hashicorp.com/gpg
dest: "{{ gpgpath }}"
mode: "0755"
- name: Dearmor the hashicorp gpg key
ansible.builtin.command:
cmd: "gpg --dearmor --yes -o {{ keypath }} {{ gpgpath }}"
register: gpg
changed_when: gpg.rc == 0
- name: Add the hashicorp linux repo
vars:
keyfile: "{{ keypath }}"
ansible.builtin.template:
src: hashicorp.list
dest: /etc/apt/sources.list.d/hashicorp.list
mode: "0644"
- name: Update apt repo cache
ansible.builtin.apt:
update_cache: true
- name: Install consul
ansible.builtin.apt:
name: consul
- name: Install nomad package
ansible.builtin.apt:
pkg: nomad
- name: Copy in the consul configuration
vars:
ip: "{{ ansible_default_ipv4['address'] }}"
ansible.builtin.template:
src: consul.hcl
dest: /etc/consul.d/consul.hcl
mode: "0644"
- name: Start nomad
ansible.builtin.systemd_service:
name: nomad
state: started
enabled: true
- name: Make sure the consul service is NOT available
ansible.builtin.systemd_service:
name: consul
state: stopped
enabled: true

View File

@@ -0,0 +1,11 @@
- name: Download the installation script
  ansible.builtin.get_url:
    url: https://get.k3s.io
    dest: /tmp
  register: install_script
- name: Run installation script
  become: true
  environment:
    INSTALL_K3S_EXEC: server
  ansible.builtin.command:
    cmd: sh {{ install_script.dest }}
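The two tasks above just fetch and run the upstream installer; for reference, the equivalent hedged one-liner from the k3s docs (handy when testing by hand on the node) looks like this:
```sh
# Same effect as the tasks above: install k3s in server mode
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server" sh -s -
```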

View File

@@ -16,3 +16,9 @@ host_volume "registry" {
path = "/opt/volumes/registry" path = "/opt/volumes/registry"
read_only = false read_only = false
} }
host_volume "nfs" {
path = "/opt/volumes/nfs"
read_only = false
}

View File

@@ -1,10 +1,18 @@
-- name: Ensure the root data directory is present
-  ansible.builtin.file:
-    path: "{{ nomad.volumes.root }}"
-    state: directory
-    mode: "0755"
-- name: Ensure registry volume is present
-  ansible.builtin.file:
-    path: "{{ nomad.volumes.registry }}"
-    state: directory
-    mode: "0755"
+- name: Nomad server configuration
+  become: true
+  block:
+    - name: Ensure the root data directory is present
+      ansible.builtin.file:
+        path: "{{ nomad.volumes.root }}"
+        state: absent
+        mode: "0755"
+    - name: Ensure registry volume is present
+      ansible.builtin.file:
+        path: "{{ nomad.volumes.registry }}"
+        state: absent
+        mode: "0755"
+    - name: Ensure the MinIO directory is present
+      ansible.builtin.file:
+        path: "{{ nomad.volumes.nfs }}"
+        state: absent
+        mode: "0755"

View File

@@ -2,3 +2,4 @@ nomad:
   volumes:
     root: /opt/volumes
     registry: /opt/volumes/ncr
+    nfs: /opt/volumes/nfs

View File

@@ -37,7 +37,10 @@ locals {
{ name = "www.shockrah.xyz", records = [ var.vultr_host ] }, { name = "www.shockrah.xyz", records = [ var.vultr_host ] },
{ name = "resume.shockrah.xyz", records = [ var.vultr_host ] }, { name = "resume.shockrah.xyz", records = [ var.vultr_host ] },
{ name = "git.shockrah.xyz", records = [ var.vultr_host ] }, { name = "git.shockrah.xyz", records = [ var.vultr_host ] },
{ name = "lmao.shockrah.xyz", records = [ "207.246.107.99" ] }, { name = "sanity.shockrah.xyz", records = [ var.vke_lb ] },
{ name = "uptime.shockrah.xyz", records = [ var.vke_lb ] },
{ name = "code.shockrah.xyz", records = [ var.vke_lb ] },
{ name = "wiki.shockrah.xyz", records = [ var.vke_lb ] },
] ]
} }
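Once the records above are applied, a quick hedged sanity check from any machine is to confirm the new names resolve to the VKE load balancer IP held in `var.vke_lb`:
```sh
# Each new record should return the vke_lb address
for host in sanity uptime code wiki; do
  dig +short "$host.shockrah.xyz"
done
```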

View File

@@ -33,3 +33,11 @@ resource "aws_route53_record" "temper-tv-mx" {
"50 fb.mail.gandi.net.", "50 fb.mail.gandi.net.",
] ]
} }
resource "aws_route53_record" "temper-tv-test" {
zone_id = aws_route53_zone.temper-tv.id
name = "test.temper.tv"
type = "A"
ttl = 300
records = [ var.vke_lb ]
}

View File

@@ -26,3 +26,7 @@ variable "vultr_host" {
description = "IP of the temp Vultr host" description = "IP of the temp Vultr host"
} }
variable "vke_lb" {
type = string
description = "IP of our VKE load balancer"
}

View File

@@ -1 +1,2 @@
vultr_host = "45.32.83.83" vultr_host = "45.32.83.83"
vke_lb = "45.32.89.101"

infra/nigel-k3s/.gitignore vendored Normal file (1 line added)
View File

@@ -0,0 +1 @@
config.yaml

View File

@@ -0,0 +1,35 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
              name: nginx-port
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      nodePort: 30808
      targetPort: nginx-port
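Since the Service above is a NodePort pinned to 30808, a minimal hedged check (assuming kubectl is pointed at the cluster via the repo's config.yaml, and substituting a real node address for the placeholder) might look like:
```sh
# List node IPs, then hit the NodePort directly
kubectl --kubeconfig config.yaml get nodes -o wide
curl -i http://<node-external-ip>:30808/
```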

View File

@@ -0,0 +1,19 @@
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox:1.28
              imagePullPolicy: IfNotPresent
              command:
                - /bin/sh
                - -c
                - date; echo Hello from the sample cron-container
          restartPolicy: OnFailure
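To see the sample CronJob above actually firing, one hedged check (assuming it was applied with the same config.yaml kubeconfig) is:
```sh
# The schedule is every minute, so Jobs should appear shortly after applying
kubectl --kubeconfig config.yaml get cronjob hello
kubectl --kubeconfig config.yaml get jobs --watch
# Substitute one of the listed hello-* jobs to read its output
kubectl --kubeconfig config.yaml logs job/<job-name>
```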

View File

@@ -1,46 +0,0 @@
# Nigel's Container Registry
job "ncr" {
type = "service"
group "ncr" {
count = 1
network {
port "docker" {
static = 5000
}
}
service {
name = "ncr"
port = "docker"
provider = "nomad"
}
volume "container_images" {
type = "host"
read_only = false
source = "registry"
}
restart {
attempts = 10
interval = "5m"
delay = "30s"
mode = "delay"
}
task "ncr" {
driver = "docker"
volume_mount {
volume = "container_images"
destination = "/registry/data"
read_only = false
}
config {
image = "registry:latest"
ports = [ "docker" ]
}
}
}
}

View File

@@ -1,30 +0,0 @@
# This 'service' job is just a simple nginx container that lives here as a kind of sanity check
# PORT: 8080
# DNS : sanity.nigel.local
job "health" {
type = "service"
group "health" {
count = 1
network {
port "http" {
static = 8080
}
}
service {
name = "health-svc"
port = "http"
provider = "nomad"
}
task "health-setup" {
driver = "docker"
config {
image = "shockrah/sanity:latest"
ports = [ "http" ]
}
}
}
}

View File

@@ -1,28 +0,0 @@
resource tls_private_key tarpit {
algorithm = "RSA"
rsa_bits = 4096
}
resource vultr_ssh_key tarpit {
name = "tarpit_ssh_key"
ssh_key = chomp(tls_private_key.tarpit.public_key_openssh)
}
resource vultr_instance tarpit {
# Core configuration
plan = var.host.plan
region = var.host.region
os_id = var.host.os
enable_ipv6 = true
ssh_key_ids = [ vultr_ssh_key.host.id ]
firewall_group_id = vultr_firewall_group.host.id
label = "Tarpit"
}
output tarpit_ssh_key {
sensitive = true
value = tls_private_key.host.private_key_pem
}

View File

@@ -1,62 +0,0 @@
resource kubernetes_namespace admin-servers {
count = length(var.admin_services.configs) > 0 ? 1 : 0
metadata {
name = var.admin_services.namespace
}
}
resource kubernetes_pod admin {
for_each = var.admin_services.configs
metadata {
name = each.key
namespace = var.admin_services.namespace
labels = {
app = each.key
}
}
spec {
node_selector = {
"vke.vultr.com/node-pool" = var.admin_services.namespace
}
container {
image = each.value.image
name = coalesce(each.value.name, each.key)
resources {
limits = {
cpu = each.value.cpu
memory = each.value.mem
}
}
port {
container_port = each.value.port.internal
protocol = coalesce(each.value.proto, "TCP")
}
}
}
}
resource kubernetes_service admin {
for_each = var.admin_services.configs
metadata {
name = each.key
namespace = var.admin_services.namespace
labels = {
app = each.key
}
}
# TODO: don't make these NodePorts since we're gonna want them
# to be purely internal to the Cluster.
# WHY? Because we want to keep dashboards as unexposed as possible
spec {
selector = {
app = each.key
}
port {
target_port = each.value.port.internal
port = each.value.port.expose
}
type = "NodePort"
}
}

View File

@@ -9,15 +9,31 @@ terraform {
   required_providers {
     aws = {
       source  = "hashicorp/aws"
-      version = "~> 5.0"
+      version = "6.27.0"
     }
     vultr = {
       source  = "vultr/vultr"
-      version = "2.22.1"
+      version = "2.26.0"
     }
     kubernetes = {
       source  = "hashicorp/kubernetes"
-      version = "2.34.0"
+      version = "3.0.1"
     }
+    kubectl = {
+      source  = "gavinbunney/kubectl"
+      version = "1.19.0"
+    }
+    helm = {
+      source  = "hashicorp/helm"
+      version = "3.0.2"
+    }
+    tls = {
+      source  = "hashicorp/tls"
+      version = "4.1.0"
+    }
+    random = {
+      source  = "hashicorp/random"
+      version = "3.7.2"
+    }
   }
 }
@@ -39,4 +55,12 @@ provider kubernetes {
   config_path = "config.yaml"
 }
+
+provider kubectl {
+  config_path = "config.yaml"
+}
+
+provider helm {
+  kubernetes = {
+    config_path = "config.yaml"
+  }
+}
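Provider version bumps like the ones above generally require re-initializing the working directory; a hedged reminder of the usual flow, run from the directory holding this terraform block:
```sh
# Pull the newly pinned provider versions and refresh the lock file
terraform init -upgrade
# Confirm which versions were actually selected
terraform providers
```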

View File

@@ -0,0 +1,18 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    preferredChain: "ISRG Root X1"
    # Email address used for ACME registration
    email: dev@shockrah.xyz
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - http01:
          ingress:
            class: nginx
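After cert-manager picks up the issuer above, a quick hedged status check (same config.yaml kubeconfig assumption as elsewhere in this repo):
```sh
# READY should flip to True once the ACME account is registered
kubectl --kubeconfig config.yaml get clusterissuer letsencrypt
kubectl --kubeconfig config.yaml describe clusterissuer letsencrypt
```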

View File

@@ -1,28 +1,17 @@
 resource vultr_kubernetes athens {
   region  = var.cluster.region
   version = var.cluster.version
   label   = var.cluster.label
-  # BUG: only have this set when creating the resource for the first time
-  # once the cluster is up, we should comment this out again
-  # enable_firewall = true
-  node_pools {
-    node_quantity = 1
-    plan      = var.cluster.pools["meta"].plan
-    label     = var.admin_services.namespace
-    min_nodes = var.cluster.pools["meta"].min
-    max_nodes = var.cluster.pools["meta"].max
-    # tag = var.admin_services.namespace
-  }
-}
-
-resource vultr_kubernetes_node_pools games {
-  cluster_id    = vultr_kubernetes.athens.id
-  node_quantity = var.cluster.pools["games"].min
-  plan          = var.cluster.pools["games"].plan
-  label         = var.game_servers.namespace
-  min_nodes     = var.cluster.pools["games"].min
-  max_nodes     = var.cluster.pools["games"].max
-  tag           = var.game_servers.namespace
+  # vpc_id = vultr_vpc.athens.id
+
+  node_pools {
+    node_quantity = var.cluster.pools["main"].min_nodes
+    plan          = var.cluster.pools["main"].plan
+    label         = var.cluster.pools["main"].label
+    min_nodes     = var.cluster.pools["main"].min_nodes
+    max_nodes     = var.cluster.pools["main"].max_nodes
+    auto_scaler   = true
+  }
 }
 
 output k8s_config {

View File

@@ -0,0 +1,6 @@
data vultr_kubernetes athens {
filter {
name = "label"
values = [ var.cluster.label ]
}
}

View File

@@ -1,4 +0,0 @@
# created by virtualenv automatically
bin/
lib/

View File

@@ -1,59 +0,0 @@
from argparse import ArgumentParser
from argparse import Namespace
from kubernetes import client, config
import re
def get_args() -> Namespace:
parser = ArgumentParser(
prog="Cluster Search Thing",
description="General utility for finding resources for game server bot"
)
games = {"health", "reflex", "minecraft"}
parser.add_argument('-g', '--game', required=False, choices=games)
admin = {"health"}
parser.add_argument('-a', '--admin', required=False, choices=admin)
return parser.parse_args()
def k8s_api(config_path: str) -> client.api.core_v1_api.CoreV1Api:
config.load_kube_config("../config.yaml")
return client.CoreV1Api()
def get_admin_service_details(args: ArgumentParser, api: client.api.core_v1_api.CoreV1Api):
print('admin thing requested', args.admin)
services = api.list_service_for_all_namespaces(label_selector=f'app={args.admin}')
if len(services.items) == 0:
print(f'Unable to find {args.admin} amongst the admin-services')
return
port = services.items[0].spec.ports[0].port
node_ips = list(filter(lambda a: a.type == 'ExternalIP', api.list_node().items[0].status.addresses))
ipv4 = list(filter(lambda item: not re.match('[\d\.]{3}\d', item.address), node_ips))[0].address
ipv6 = list(filter(lambda item: re.match('[\d\.]{3}\d', item.address), node_ips))[0].address
print(f'{args.admin} --> {ipv4}:{port} ~~> {ipv6}:{port}')
def get_game_server_ip(args: ArgumentParser, api: client.api.core_v1_api.CoreV1Api):
services = api.list_service_for_all_namespaces(label_selector=f'app={args.game}')
port = services.items[0].spec.ports[0].port
# Collecting the IPV4 of the node that contains the pod(container)
# we actually care about. Since these pods only have 1 container
# Now we collect specific data about the game server we requested
node_ips = list(filter(lambda a: a.type == 'ExternalIP', api.list_node().items[0].status.addresses))
ipv4 = list(filter(lambda item: not re.match('[\d\.]{3}\d', item.address), node_ips))[0].address
ipv6 = list(filter(lambda item: re.match('[\d\.]{3}\d', item.address), node_ips))[0].address
print(f'{args.game} --> {ipv4}:{port} ~~> {ipv6}:{port}')
if __name__ == '__main__':
args = get_args()
api = k8s_api('../config.yaml')
if args.game:
get_game_server_ip(args, api)
if args.admin:
get_admin_service_details(args, api)

View File

@@ -1,8 +0,0 @@
home = /usr
implementation = CPython
version_info = 3.10.12.final.0
virtualenv = 20.13.0+ds
include-system-site-packages = false
base-prefix = /usr
base-exec-prefix = /usr
base-executable = /usr/bin/python3

View File

@@ -1,18 +0,0 @@
cachetools==5.5.0
certifi==2024.8.30
charset-normalizer==3.4.0
durationpy==0.9
google-auth==2.36.0
idna==3.10
kubernetes==31.0.0
oauthlib==3.2.2
pyasn1==0.6.1
pyasn1_modules==0.4.1
python-dateutil==2.9.0.post0
PyYAML==6.0.2
requests==2.32.3
requests-oauthlib==2.0.0
rsa==4.9
six==1.17.0
urllib3==2.2.3
websocket-client==1.8.0

View File

@@ -1,31 +1,10 @@
-resource vultr_firewall_rule web_inbound {
-  for_each = toset([for port in [80, 443, 6443] : tostring(port) ])
-  firewall_group_id = vultr_kubernetes.athens.firewall_group_id
-  protocol    = "tcp"
-  ip_type     = "v4"
-  subnet      = "0.0.0.0"
-  subnet_size = 0
-  port        = each.value
-}
-
-resource vultr_firewall_rule game-server-inbound {
-  for_each = var.game_servers.configs
-  firewall_group_id = vultr_kubernetes.athens.firewall_group_id
-  protocol    = "tcp"
-  ip_type     = "v4"
-  subnet      = "0.0.0.0"
-  subnet_size = 0
-  port        = each.value.port.expose
-}
-
-resource vultr_firewall_rule admin-service-inbound {
-  for_each = var.admin_services.configs
-  firewall_group_id = vultr_kubernetes.athens.firewall_group_id
-  protocol    = "tcp"
-  ip_type     = "v4"
-  subnet      = "0.0.0.0"
-  subnet_size = 0
-  notes       = each.value.port.notes
-  port        = each.value.port.expose
-}
+# resource vultr_firewall_rule web_inbound {
+#   for_each = toset([for port in [80, 443, 6443] : tostring(port) ])
+#   firewall_group_id = vultr_kubernetes.athens.firewall_group_id
+#   protocol    = "tcp"
+#   ip_type     = "v4"
+#   subnet      = "0.0.0.0"
+#   subnet_size = 0
+#   port        = each.value
+# }

View File

@@ -1,55 +0,0 @@
resource kubernetes_namespace game-servers {
count = length(var.game_servers.configs) > 0 ? 1 : 0
metadata {
name = var.game_servers.namespace
}
}
resource kubernetes_pod game {
for_each = var.game_servers.configs
metadata {
name = each.key
namespace = var.game_servers.namespace
labels = {
app = each.key
}
}
spec {
container {
image = each.value.image
name = coalesce(each.value.name, each.key)
resources {
limits = {
cpu = each.value.cpu
memory = each.value.mem
}
}
port {
container_port = each.value.port.internal
protocol = coalesce(each.value.proto, "TCP")
}
}
}
}
resource kubernetes_service game {
for_each = var.game_servers.configs
metadata {
name = each.key
namespace = var.game_servers.namespace
labels = {
app = each.key
}
}
spec {
selector = {
app = each.key
}
port {
target_port = each.value.port.internal
port = each.value.port.expose
}
type = "NodePort"
}
}

View File

@@ -0,0 +1,74 @@
# NOTE: this is a simple deployment for demo purposes only.
# Currently it does support SSH access and lacks Gitea runners.
# However a fully working setup can be found at: https://git.shockrah.xyz
resource kubernetes_deployment gitea {
metadata {
name = "gitea"
namespace = var.playground.namespace
labels = {
"app" = "gitea"
}
}
spec {
replicas = 1
selector {
match_labels = {
"app" = "gitea"
}
}
template {
metadata {
labels = {
"app" = "gitea"
}
}
spec {
container {
name = "gitea"
image = "gitea/gitea:latest"
port {
container_port = 3000
name = "gitea-main"
}
port {
container_port = 2222
name = "gitea-ssh"
}
volume_mount {
name = "gitea"
mount_path = "/data"
}
}
volume {
name = "gitea"
persistent_volume_claim {
claim_name = kubernetes_persistent_volume_claim_v1.gitea.metadata[0].name
}
}
}
}
}
}
resource kubernetes_service gitea {
metadata {
name = "gitea"
namespace = var.playground.namespace
}
spec {
selector = {
"app" = "gitea"
}
port {
target_port = "gitea-main"
port = 3000
name = "http"
}
port {
target_port = "gitea-ssh"
port = 2222
name = "ssh"
}
}
}
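The Service above only exposes Gitea inside the cluster, and per the commit notes SSH on 2222 still needs ingress work; a hedged way to poke at the HTTP side without any ingress (assuming the playground namespace and the usual config.yaml kubeconfig):
```sh
# Forward Gitea's HTTP port locally, then browse http://localhost:3000
kubectl --kubeconfig config.yaml -n playground port-forward svc/gitea 3000:3000
```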

View File

@@ -0,0 +1,47 @@
resource kubernetes_deployment_v1 health {
metadata {
name = "health"
namespace = var.playground.namespace
}
spec {
replicas = 1
selector {
match_labels = {
name = "health"
}
}
template {
metadata {
labels = {
name = "health"
}
}
spec {
container {
name = "health"
image = "quanhua92/whoami:latest"
port {
container_port = "8080"
}
}
}
}
}
}
resource kubernetes_service_v1 health {
metadata {
name = "health"
namespace = var.playground.namespace
}
spec {
selector = {
name = "health"
}
port {
port = 80
target_port = 8080
name = "http"
}
}
}

View File

@@ -0,0 +1,7 @@
resource helm_release nginx {
name = "ingress-nginx"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
namespace = "ingress-nginx"
create_namespace = true
}

View File

@@ -0,0 +1,48 @@
locals {
services = {
"code.shockrah.xyz" = kubernetes_service.gitea
"sanity.shockrah.xyz" = kubernetes_service_v1.health
"uptime.shockrah.xyz" = kubernetes_service.kuma
"wiki.shockrah.xyz" = kubernetes_service.otterwiki
}
}
resource kubernetes_ingress_v1 health {
metadata {
name = "health-ingress"
namespace = var.playground.namespace
annotations = {
"cert-manager.io/cluster-issuer" = "letsencrypt"
"cert-manager.io/ingress.class" = "nginx"
}
}
spec {
ingress_class_name = "nginx"
dynamic tls {
for_each = local.services
content {
hosts = [tls.key]
secret_name = "${tls.value.metadata[0].name}-secret"
}
}
dynamic "rule" {
for_each = local.services
content {
host = "${rule.key}"
http {
path {
path = "/"
backend {
service {
name = rule.value.metadata[0].name
port {
number = rule.value.spec[0].port[0].port
}
}
}
}
}
}
}
}
}
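With one ingress fanning out to the four hosts above, the per-host Certificates are the usual failure point; a hedged way to inspect them (playground namespace, config.yaml kubeconfig):
```sh
# Each host in local.services should get its own <service>-secret certificate
kubectl --kubeconfig config.yaml -n playground get certificate
kubectl --kubeconfig config.yaml -n playground describe ingress health-ingress
```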

View File

@@ -1 +0,0 @@
terraform.yaml

View File

@@ -1,33 +0,0 @@
terraform {
required_version = ">= 0.13"
backend s3 {
bucket = "project-athens"
key = "infra/vke/k8s/state/build.tfstate"
region = "us-west-1"
encrypt = true
}
required_providers {
# For interacting with S3
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = "2.30.0"
}
}
}
provider aws {
access_key = var.aws_key
secret_key = var.aws_secret
region = var.aws_region
max_retries = 1
}
provider kubernetes {
config_path = "terraform.yaml"
}

View File

@@ -1,50 +0,0 @@
resource kubernetes_ingress_v1 athens {
metadata {
name = var.shockrahxyz.name
namespace = kubernetes_namespace.websites.metadata.0.name
labels = {
app = "websites"
}
}
spec {
rule {
host = "test.shockrah.xyz"
http {
path {
backend {
service {
name = var.shockrahxyz.name
port {
number = 80
}
}
}
path = "/"
}
}
}
}
}
resource kubernetes_service athens_lb {
metadata {
name = "athens-websites"
namespace = kubernetes_namespace.websites.metadata.0.name
labels = {
app = "websites"
}
}
spec {
selector = {
app = kubernetes_ingress_v1.athens.metadata.0.labels.app
}
port {
port = 80
target_port = 80
}
type = "LoadBalancer"
external_ips = [ var.cluster.ip ]
}
}

View File

@@ -1,5 +0,0 @@
resource kubernetes_namespace websites {
metadata {
name = "websites"
}
}

View File

@@ -1,62 +0,0 @@
# First we setup the ingress controller with helm
```sh
helm repo add traefik https://helm.traefik.io/traefik
helm repo update
# Now we can install this to our cluster
helm install --kubeconfig config.yaml traefik traefik/traefik
```
# Prove the service is present with
```sh
kubectl --kubeconfig config.yaml get svc
```
# Create the pods
```sh
kubectl --kubeconfig config.yaml apply -f k8s/nginx-dep.yaml
```
# Expose on port 80
```sh
kubectl --kubeconfig config.yaml apply -f k8s/nginx-service.yaml
```
# Create ingress on k8s
```sh
kubectl --kubeconfig config.yaml apply -f k8s/traefik-ingress.yaml
```
# Take the external IP from the ingress
Put that into terraform's A record for the domain since this is a load balancer
in Vultr (an actual resource, apparently)
# Configure cert-manager for traefik ingress
Using the latest version from here:
https://github.com/cert-manager/cert-manager/releases/download/v1.14.2/cert-manager.crds.yaml
```sh
kubectl --kubeconfig config.yaml \
apply --validate=false \
-f https://github.com/cert-manager/cert-manager/releases/download/v1.14.2/cert-manager.yaml
```
# Create the cert issuer and certificate
```sh
kubectl --kubeconfig config.yaml apply -f k8s/letsencrypt-issuer.yaml
kubectl --kubeconfig config.yaml apply -f k8s/letsencrypt-issuer.yaml
```
Because we just have 1 cert for now, we are looking for its status to be `READY`

View File

@@ -1,21 +0,0 @@
# Plain nginx for now so that we can test out reverse dns
resource kubernetes_pod shockrah {
metadata {
name = var.shockrahxyz.name
namespace = kubernetes_namespace.websites.metadata.0.name
labels = {
app = var.shockrahxyz.name
}
}
spec {
container {
image = "nginx"
name = "${var.shockrahxyz.name}"
port {
container_port = 80
}
}
}
}

View File

@@ -1,35 +0,0 @@
# API Keys required to reach AWS/Vultr
variable vultr_api_key {
type = string
sensitive = true
}
variable aws_key {
type = string
sensitive = true
}
variable aws_secret {
type = string
sensitive = true
}
variable aws_region {
type = string
sensitive = true
}
variable shockrahxyz {
type = object({
name = string
port = number
dns = string
})
}
variable cluster {
type = object({
ip = string
})
}

View File

@@ -1,37 +0,0 @@
# Here we are going to define the deployment and service
# Basically all things directly related to the actual service we want to provide
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: alternate-nginx-web
namespace: default
labels:
app: alternate-nginx-web
spec:
replicas: 1
selector:
matchLabels:
app: alternate-nginx-web
template:
metadata:
labels:
app: alternate-nginx-web
spec:
# Container comes from an example thing i randomly found on docker hub
containers:
- name: alternate-nginx-web
image: dockerbogo/docker-nginx-hello-world
---
apiVersion: v1
kind: Service
metadata:
name: alternate-nginx-web
namespace: default
spec:
selector:
app: alternate-nginx-web
ports:
- name: http
targetPort: 80
port: 80

View File

@@ -1,30 +0,0 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: hello.temprah-lab.xyz
namespace: default
spec:
secretName: hello.temprah-lab.xyz-tls
issuerRef:
name: letsencrypt-prod
kind: ClusterIssuer
commonName: hello.temprah-lab.xyz
dnsNames:
- hello.temprah-lab.xyz
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod-hello
namespace: default
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: dev@shockrah.xyz
privateKeySecretRef:
name: letsencrypt-prod-hello
solvers:
- http01:
ingress:
class: traefik

View File

@@ -1,13 +0,0 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: sample.temprah-lab.xyz
namespace: default
spec:
secretName: sample.temprah-lab.xyz-tls
issuerRef:
name: letsencrypt-prod
kind: ClusterIssuer
commonName: sample.temprah-lab.xyz
dnsNames:
- sample.temprah-lab.xyz

View File

@@ -1,20 +0,0 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: nginx-web
namespace: default
labels:
app: nginx-web
spec:
replicas: 1
selector:
matchLabels:
app: nginx-web
template:
metadata:
labels:
app: nginx-web
spec:
containers:
- name: nginx
image: nginx

View File

@@ -1,12 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: nginx-web
namespace: default
spec:
selector:
app: nginx-web
ports:
- name: http
targetPort: 80
port: 80

View File

@@ -1,44 +0,0 @@
# This is the first thing we need to create, an issue to put certs into
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
namespace: default
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: dev@shockrah.xyz
privateKeySecretRef:
name: letsencrypt-temprah-lab
solvers:
- http01:
ingress:
class: traefik
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: hello.temprah-lab.xyz
namespace: default
spec:
secretName: hello.temprah-lab.xyz-tls
issuerRef:
name: letsencrypt-temprah-lab
kind: ClusterIssuer
commonName: hello.temprah-lab.xyz
dnsNames:
- hello.temprah-lab.xyz
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: sample.temprah-lab.xyz
namespace: default
spec:
secretName: sample.temprah-lab.xyz-tls
issuerRef:
name: letsencrypt-temprah-lab
kind: ClusterIssuer
commonName: sample.temprah-lab.xyz
dnsNames:
- sample.temprah-lab.xyz

View File

@@ -1,31 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: traefik-ingress
namespace: default
labels:
name: project-athens-lb
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: sample.temprah-lab.xyz
http:
paths:
- backend:
service:
name: nginx-web
port:
number: 80
path: /
pathType: Prefix
- host: hello.temprah-lab.xyz
http:
paths:
- backend:
service:
name: alternate-nginx-web
port:
number: 80
path: /
pathType: Prefix

View File

@@ -1,15 +1,14 @@
 apiVersion: cert-manager.io/v1
-kind: ClusterIssuer
+kind: Issuer
 metadata:
-  name: letsencrypt-prod
-  namespace: default
+  name: letsencrypt-nginx
 spec:
   acme:
-    server: https://acme-v02.api.letsencrypt.org/directory
     email: dev@shockrah.xyz
+    server: https://acme-v02.api.letsencrypt.org/directory
     privateKeySecretRef:
-      name: letsencrypt-prod
+      name: example
     solvers:
       - http01:
           ingress:
-            class: traefik
+            class: nginx

View File

@@ -0,0 +1,36 @@
apiVersion: v1
kind: Service
metadata:
  name: whoami-service
spec:
  selector:
    name: whoami
  ports:
    - name: http
      port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  annotations:
    cert-manager.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - secretName: whoami-tls
      hosts:
        - example.shockrah.xyz
  rules:
    - host: example.shockrah.xyz
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami-service
                port:
                  number: 80

View File

@@ -0,0 +1,21 @@
apiVersion: v1
kind: Service
metadata:
  name: whoami-lb
  annotations:
    service.beta.kubernetes.io/vultr-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/vultr-loadbalancer-algorithm: "least_connections"
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-protocol: "http"
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-path: "/health"
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-interval: "30"
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-response-timeout: "5"
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-unhealthy-threshold: "5"
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-healthy-threshold: "5"
spec:
  type: LoadBalancer
  selector:
    name: whoami
  ports:
    - name: http
      port: 80
      targetPort: 8080
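Vultr's cloud controller provisions an external load balancer for the Service above, which is where the vke_lb A records ultimately point; a hedged way to watch for the assigned address:
```sh
# EXTERNAL-IP stays <pending> until the Vultr load balancer is provisioned
kubectl --kubeconfig config.yaml get svc whoami-lb -o wide --watch
```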

View File

@@ -0,0 +1,20 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      name: whoami
  template:
    metadata:
      labels:
        name: whoami
    spec:
      containers:
        - name: whoami
          image: quanhua92/whoami:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080

View File

@@ -0,0 +1,37 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    preferredChain: "ISRG Root X1"
    # Email address used for ACME registration
    email: dev@shockrah.xyz
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: dev@shockrah.xyz
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx

View File

@@ -0,0 +1,10 @@
resource kubernetes_namespace playground {
metadata {
annotations = {
names = var.playground.namespace
}
name = var.playground.namespace
}
}

View File

@@ -0,0 +1,30 @@
resource helm_release shockrah_cert_manager {
name = "cert-manager"
repository = "https://charts.jetstack.io"
chart = "cert-manager"
version = "v1.18.2"
namespace = "cert-manager"
create_namespace = true
cleanup_on_fail = true
set = [
{
name = "crds.enabled"
value = "true"
}
]
}
data kubectl_file_documents cluster_issuer {
content = file("cluster-issuer.yaml")
}
resource kubectl_manifest cluster_issuer {
for_each = data.kubectl_file_documents.cluster_issuer.manifests
yaml_body = each.value
depends_on = [
data.kubectl_file_documents.cluster_issuer
]
}
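Once the helm_release and the piped-in ClusterIssuer manifests above are applied, a hedged verification pass (config.yaml kubeconfig as usual):
```sh
# cert-manager, cainjector and webhook pods should all reach Running
kubectl --kubeconfig config.yaml get pods -n cert-manager
helm --kubeconfig config.yaml status cert-manager -n cert-manager
kubectl --kubeconfig config.yaml get clusterissuer
```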

View File

@@ -0,0 +1,61 @@
resource kubernetes_deployment kuma {
metadata {
name = "kuma"
namespace = var.playground.namespace
labels = {
"app" = "kuma"
}
}
spec {
replicas = 1
selector {
match_labels = {
"app" = "kuma"
}
}
template {
metadata {
labels = {
"app" = "kuma"
}
}
spec {
container {
name = "kuma"
image = "louislam/uptime-kuma:2"
port {
container_port = 3001
name = "uptime-kuma"
}
volume_mount {
name = "kuma-data"
mount_path = "/app/data"
}
}
volume {
name = "kuma-data"
persistent_volume_claim {
claim_name = kubernetes_persistent_volume_claim_v1.kuma.metadata[0].name
}
}
}
}
}
}
resource kubernetes_service kuma {
metadata {
name = "kuma"
namespace = var.playground.namespace
}
spec {
selector = {
"app" = "kuma"
}
port {
target_port = "uptime-kuma"
port = 3001
name = "http"
}
}
}

View File

@@ -26,46 +26,24 @@ variable cluster {
     label   = string
     version = string
     pools = map(object({
-      plan = string
-      autoscale = bool
-      min = number
-      max = number
+      node_quantity = number
+      plan          = string
+      label         = string
+      min_nodes     = number
+      max_nodes     = number
+      tag           = string
     }))
   })
 }
 
-variable game_servers {
+variable playground {
   type = object({
     namespace = string
-    configs = map(object({
-      name  = optional(string)
-      image = string
-      cpu   = string
-      mem   = string
-      port  = object({
-        internal = number
-        expose   = number
-      })
-      proto = optional(string)
-    }))
-  })
-}
-
-variable admin_services {
-  type = object({
-    namespace = string
-    configs = map(object({
-      name  = string
-      image = string
-      cpu   = string
-      mem   = string
-      port  = object({
-        notes    = optional(string)
-        internal = number
-        expose   = number
-      })
-      proto = optional(string)
-    }))
+    # TODO: Re-incorporate this var for templating later
+    tls = object({
+      email = string
+    })
   })
 }

View File

@@ -1,42 +1,24 @@
 cluster = {
   region  = "lax"
   label   = "athens-cluster"
-  version = "v1.31.2+1"
+  version = "v1.34.1+2"
   pools = {
-    meta = {
-      plan = "vc2-1c-2gb"
-      autoscale = true
-      min = 1
-      max = 2
-    }
-    games = {
-      plan = "vc2-1c-2gb"
-      autoscale = true
-      min = 1
-      max = 3
+    main = {
+      node_quantity = 1
+      plan          = "vc2-1c-2gb"
+      label         = "main"
+      min_nodes     = 1
+      max_nodes     = 2
+      tag           = "athens-main"
     }
   }
 }
 
-game_servers = {
-  namespace = "games"
-  configs = {
-  }
-}
-
-admin_services = {
-  namespace = "admin-services"
-  configs = {
-    health = {
-      image = "nginx:latest"
-      name = "health"
-      cpu = "200m"
-      mem = "64Mi"
-      port = {
-        notes = "Basic nginx sanity check service"
-        expose = 30800
-        internal = 80
-      }
-    }
-  }
+playground = {
+  namespace = "playground"
+  # Sanity check service that is used purely for the sake of ensuring
+  # things are ( at a basic level ) functional
+  tls = {
+    email = "dev@shockrah.xyz"
+  }
 }

View File

@@ -0,0 +1,49 @@
resource kubernetes_persistent_volume_claim_v1 kuma {
metadata {
name = "kuma-data"
namespace = var.playground.namespace
}
spec {
volume_mode = "Filesystem"
access_modes = [ "ReadWriteOnce"]
resources {
requests = {
storage = "10Gi"
}
}
}
}
resource kubernetes_persistent_volume_claim_v1 gitea {
metadata {
name = "gitea-data"
namespace = var.playground.namespace
}
spec {
volume_mode = "Filesystem"
access_modes = [ "ReadWriteOnce"]
resources {
requests = {
storage = "10Gi"
}
}
}
}
resource kubernetes_persistent_volume_claim_v1 otterwiki {
metadata {
name = "otterwiki-data"
namespace = var.playground.namespace
}
spec {
volume_mode = "Filesystem"
access_modes = [ "ReadWriteOnce"]
resources {
requests = {
storage = "10Gi"
}
}
}
}
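The three PersistentVolumeClaims above back Kuma, Gitea and Otter Wiki; a hedged check that they actually bind on VKE (config.yaml kubeconfig, playground namespace):
```sh
# Each claim should show STATUS Bound with a 10Gi volume
kubectl --kubeconfig config.yaml -n playground get pvc
```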

View File

@@ -0,0 +1,63 @@
resource kubernetes_deployment otterwiki {
metadata {
name = "otterwiki"
namespace = var.playground.namespace
labels = {
"app" = "otterwiki"
}
}
spec {
replicas = 1
selector {
match_labels = {
"app" = "otterwiki"
}
}
template {
metadata {
labels = {
"app" = "otterwiki"
}
}
spec {
container {
name = "otterwiki"
image = "redimp/otterwiki:2"
port {
container_port = 8080
name = "otterwiki-main"
}
volume_mount {
name = "otterwiki-data"
mount_path = "/var/lib/otterwiki"
}
}
volume {
name = "otterwiki-data"
persistent_volume_claim {
claim_name = kubernetes_persistent_volume_claim_v1.otterwiki.metadata[0].name
}
}
}
}
}
}
resource kubernetes_service otterwiki {
metadata {
name = "otterwiki"
namespace = var.playground.namespace
}
spec {
selector = {
"app" = "otterwiki"
}
port {
port = 80
target_port = "otterwiki-main"
protocol = "TCP"
name = "http"
}
}
}