Compare commits

...

36 Commits

Author SHA1 Message Date
3f0c8a865d DNS Endpoint for a tarpit meme project
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 3s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 14s
2025-05-23 00:31:24 -07:00
3f2e6d86f6 Proxying a new container registry service
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 19s
2025-05-23 00:31:00 -07:00
08560c945b Removing succ build script
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 3s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 13s
2025-05-21 22:24:50 -07:00
506a9b32d9 renaming tarpit
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 3s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 15s
2025-05-21 21:36:48 -07:00
d4ece741e0 tarpit server that i'll use for the lulz
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 5s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 18s
2025-05-21 21:35:20 -07:00
311a592d6e Adding a task subset for host volume setup
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 17s
2025-05-20 15:06:04 -07:00
153ea8e982 Improving nomad nginx config to be more responsive or something
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 5s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 17s
https://developer.hashicorp.com/nomad/tutorials/manage-clusters/reverse-proxy-ui#extend-connection-timeout
2025-05-15 22:59:07 -07:00
943e9651da Swapping the health container to our own thing
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 3s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 14s
it's just nginx on port 8080 :)
2025-05-13 18:40:12 -07:00
669c414288 Simple sanity container on port 8080 for testing purposes
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Has been cancelled
2025-05-13 18:39:47 -07:00
e3afed5e4f sanity service on 8080 now 2025-05-12 02:01:31 -07:00
e337989a59 just roll with it at this point 2025-05-12 02:01:20 -07:00
7f36ff272e bootstrap since we have literally 1 node
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 15s
2025-05-12 01:44:25 -07:00
79e6698db1 Templatizing consul config
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 15s
2025-05-12 01:28:39 -07:00
603559b255 omfg this config i swear
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 3s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 15s
2025-05-12 01:08:11 -07:00
4851b6521c consul config
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 15s
2025-05-12 01:05:54 -07:00
9785e8a40a Even more file shuffling
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 27s
2025-05-12 00:21:08 -07:00
79bd7424c3 Moving around more stuff
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 17s
2025-05-12 00:18:24 -07:00
5227bea568 renaming stuff to note that it's not used anymore 2025-05-12 00:17:30 -07:00
47b69d7f49 Nomad now responds to the basic nomad.nigel.local DNS name
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 14s
2025-05-10 17:26:45 -07:00
a3fdc5fcc7 Sanity check job with nomad :D
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 14s
2025-05-10 15:38:16 -07:00
5a1afb4a07 Make sure the nomad agent is running on boot
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 3s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 15s
2025-05-10 14:58:19 -07:00
e03daa62e5 removing unused role
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 18s
2025-05-10 14:52:32 -07:00
15dfaea8db Nomad completely setup with --tags nomad now
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 20s
2025-05-04 23:35:58 -07:00
ef4967cd88 wari wari da it's so over ( im using ansible again )
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 5s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 20s
:(((
2025-04-23 23:25:34 -07:00
55217ce50b Ensure nigel sudo ability is setup
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 17s
2025-04-23 22:25:23 -07:00
2bbc9095f7 Removing services role as it's being replaced by terraform
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 3s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 16s
2025-04-23 22:14:50 -07:00
fcf7ded218 Removing docker resources for now
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 7s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 22s
Migrating to terraform for better state control
2025-04-23 22:14:05 -07:00
b68d53b143 Opting for an example minio setup over filebrowser
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 14s
2025-04-16 21:00:39 -07:00
3c6bc90feb health container and filebrowser container now active
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 5s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 15s
Configuration needed at this point however
2025-04-16 20:28:48 -07:00
3521b840ae Separating the roles of basic infra requirements and docker service requirements into separate roles
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 15s
With this we have a working proof of concept for a proper simple docker host
2025-04-16 18:25:24 -07:00
5f10976264 Docker now setup with ansible
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 15s
2025-04-16 17:34:03 -07:00
10e936a8da Basic docker setup verified by ansible-lint locally
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 18s
2025-04-16 14:55:02 -07:00
8bbaea8fd9 Simple admin user setup on a clean Ubuntu machine
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 17s
2025-04-11 02:43:22 -07:00
d39e0c04e5 Adding health to games selector set
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 3s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 14s
2025-02-10 22:11:09 -08:00
b99525955e Swapping health pod to admin-services 2025-02-10 22:10:46 -08:00
9b6f9b6656 Fixing tag issues with pod selector
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 18s
2025-02-10 22:10:02 -08:00
49 changed files with 392 additions and 84 deletions

ansible/inventory.yaml (new file, +3)

@@ -0,0 +1,3 @@
nigel:
  hosts:
    nigel.local:

ansible/linter.yaml (new file, +4)

@@ -0,0 +1,4 @@
---
skip_list:
- role-name
- var-naming[no-role-prefix]
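This skip list only takes effect if ansible-lint is pointed at it; a plausible local invocation (the exact CI command is not shown in this compare) would be:

# run from the ansible/ directory; -c selects this config instead of the default .ansible-lint
ansible-lint -c linter.yaml nuc.yaml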


@@ -0,0 +1,27 @@
# This playbook is meant to be a oneshot to be ran manually on the dev box
# The rest of the role stuff is meant to be ran as the admin user that
# this playbook creates for us
---
- hosts: nigel.local
  remote_user: nigel
  vars:
    admin:
      username: nigel
  tasks:
    - name: Copy the nigel admin key
      ansible.builtin.authorized_key:
        user: "{{ admin.username }}"
        state: present
        key: "{{ lookup('file', '~/.ssh/nigel/admin.pub') }}"
    - name: Prevent password based logins
      become: true
      ansible.builtin.lineinfile:
        dest: /etc/ssh/sshd_config
        line: PasswordAuthentication no
        state: present
        backup: true
    - name: Restart SSH Daemon
      become: true
      ansible.builtin.service:
        name: ssh
        state: restarted
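Assuming this playbook is saved as something like oneshot.yaml (the file path is not visible in this diff), a manual run from the dev box might look like:

# hypothetical file name; requires ~/.ssh/nigel/admin.pub to exist locally
ansible-playbook -i inventory.yaml oneshot.yaml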

ansible/nuc.yaml (new file, +12)

@@ -0,0 +1,12 @@
---
- hosts: nigel.local
  remote_user: nigel
  tasks:
    - name: Setup basic role on nigel
      tags:
        - setup
        - nomad
        - proxy
        - volumes
      ansible.builtin.include_role:
        name: local-server-head
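Because everything routes through this single tagged include_role call, subsets of the role can be applied independently, for example:

# full run
ansible-playbook -i inventory.yaml nuc.yaml
# only the nomad installation steps
ansible-playbook -i inventory.yaml nuc.yaml --tags nomad
# only the host volume setup
ansible-playbook -i inventory.yaml nuc.yaml --tags volumes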


@@ -0,0 +1 @@
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu noble stable


@@ -0,0 +1,15 @@
127.0.0.1 localhost
127.0.1.1 nigel
# Our own dns stuff
127.0.1.1 nigel.local
127.0.1.1 nomad.nigel.local
127.0.1.1 sanity.nigel.local
127.0.1.1 ncr.nigel.local
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters


@@ -0,0 +1,6 @@
server {
    server_name ncr.nigel.local;
    location / {
        proxy_pass http://localhost:5000;
    }
}
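Since the registry behind ncr.nigel.local is proxied over plain HTTP, Docker clients would normally have to trust it as an insecure registry before a push succeeds; a sketch (daemon.json change and image name are illustrative, not part of this diff):

# /etc/docker/daemon.json would need: { "insecure-registries": ["ncr.nigel.local"] }
docker tag shockrah/sanity ncr.nigel.local/sanity:latest
docker push ncr.nigel.local/sanity:latest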


@@ -0,0 +1,25 @@
server {
    server_name nomad.nigel.local;
    location / {
        proxy_pass http://nomad-ws;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 319s;
        # This is for log streaming requests
        proxy_buffering off;
        # Upgrade and Connection headers for upgrading to websockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Origin "${scheme}://${proxy_host}";
    }
}

upstream nomad-ws {
    ip_hash;
    server nomad.nigel.local:4646;
}
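The 319s read timeout, disabled buffering, and Upgrade/Connection headers follow the HashiCorp reverse-proxy guide linked from the commit message above; a quick smoke test against the proxy (assuming nomad.nigel.local resolves to this host) could be:

curl -s http://nomad.nigel.local/v1/agent/health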


@@ -0,0 +1,7 @@
server {
    server_name sanity.nigel.local;
    location / {
        proxy_pass http://localhost:8080;
    }
}


@@ -0,0 +1,41 @@
- name: Ensure we have basic updated packages setting up docker
  ansible.builtin.apt:
    name: "{{ item }}"
    update_cache: true
  loop:
    - ca-certificates
    - curl
- name: Running install on the keyrings directory
  ansible.builtin.command:
    cmd: install -m 0755 -d /etc/apt/keyrings
  register: install
  changed_when: install.rc == 0
- name: Fetch Docker GPG Key
  vars:
    keylink: https://download.docker.com/linux/ubuntu/gpg
  ansible.builtin.get_url:
    url: "{{ keylink }}"
    dest: /etc/apt/keyrings/docker.asc
    mode: "0644"
- name: Add repo to apt sources
  ansible.builtin.copy:
    src: docker.list
    dest: /etc/apt/sources.list.d/docker.list
    mode: "0644"
- name: Update Apt cache with latest docker.list packages
  ansible.builtin.apt:
    update_cache: true
- name: Ensure all docker packages are updated to the latest versions
  ansible.builtin.apt:
    name: "{{ item }}"
  loop:
    - docker-ce
    - docker-ce-cli
    - containerd.io
    - docker-buildx-plugin
    - docker-compose-plugin
- name: Verify that the docker components are installed properly
  ansible.builtin.command:
    cmd: docker run hello-world
  register: docker
  changed_when: docker.rc == 0


@@ -0,0 +1,41 @@
- name: Ensure docker components are installed
  tags:
    - setup
  ansible.builtin.include_tasks:
    file: ensure-docker-basic.yaml
    apply:
      become: true
      tags:
        - setup
- name: Ensure nigel can use sudo without password
  become: true
  tags:
    - setup
  ansible.builtin.lineinfile:
    path: /etc/sudoers
    state: present
    line: "nigel ALL=(ALL) NOPASSWD:ALL"
- name: Run through nomad installation steps
  tags: nomad
  ansible.builtin.include_tasks:
    file: nomad.yaml
    apply:
      become: true
      tags:
        - nomad
- name: Setup the reverse proxy outside of nomad
  tags: proxy
  ansible.builtin.include_tasks:
    file: reverse_proxy.yaml
    apply:
      become: true
      tags:
        - proxy
- name: Setup data directory for the nomad host volumes
  tags: volumes
  ansible.builtin.include_tasks:
    file: nomad-host-volumes.yaml
    apply:
      become: true
      tags:
        - volumes


@@ -0,0 +1,8 @@
- name: Ensure the root data directory is present
  ansible.builtin.file:
    path: "{{ host_vol_root }}"
    state: directory
- name: Ensure registry volume is present
  ansible.builtin.file:
    path: "{{ host_vol_root }}/ncr"
    state: directory
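Creating these directories is only half of the setup: Nomad also needs a matching host_volume stanza in its client configuration before jobs can mount them. That stanza is not part of this diff, but with host_vol_root set to /opt/volumes it would look roughly like:

client {
  host_volume "ncr" {
    path      = "/opt/volumes/ncr"
    read_only = false
  }
}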


@@ -0,0 +1,54 @@
- name: Ensure prerequisite packages are installed
  ansible.builtin.apt:
    pkg:
      - wget
      - gpg
      - coreutils
    update_cache: true
- name: Hashicorp repo setup
  vars:
    keypath: /usr/share/keyrings/hashicorp-archive-keyring.gpg
    gpgpath: /tmp/hashicorp.gpg
  block:
    - name: Download the hashicorp GPG Key
      ansible.builtin.get_url:
        url: https://apt.releases.hashicorp.com/gpg
        dest: "{{ gpgpath }}"
    - name: Dearmor the hashicorp gpg key
      ansible.builtin.command:
        cmd: "gpg --dearmor --yes -o {{ keypath }} {{ gpgpath }}"
      register: gpg
      changed_when: gpg.rc == 0
    - name: Add the hashicorp linux repo
      vars:
        keyfile: "{{ keypath }}"
      ansible.builtin.template:
        src: hashicorp.list
        dest: /etc/apt/sources.list.d/hashicorp.list
        mode: "0644"
    - name: Update apt repo cache
      ansible.builtin.apt:
        update_cache: true
- name: Install consul
  ansible.builtin.apt:
    name: consul
- name: Install nomad package
  ansible.builtin.apt:
    pkg: nomad
- name: Copy in the consul configuration
  vars:
    ip: "{{ ansible_default_ipv4['address'] }}"
  ansible.builtin.template:
    src: consul.hcl
    dest: /etc/consul.d/consul.hcl
    mode: "0644"
- name: Start nomad
  ansible.builtin.systemd_service:
    name: nomad
    state: started
    enabled: true
- name: Make sure the consul service is NOT available
  ansible.builtin.systemd_service:
    name: consul
    state: stopped
    enabled: true


@@ -0,0 +1,31 @@
- name: Keep /etc/hosts up to date
  ansible.builtin.copy:
    dest: /etc/hosts
    src: host-file
    mode: "0644"
- name: Ensure nginx is setup as latest
  ansible.builtin.apt:
    name: nginx
- name: Copy the nomad.conf to available configurations
  ansible.builtin.copy:
    src: "{{ item }}"
    dest: "/etc/nginx/sites-available/{{ item }}"
    mode: "0644"
  loop:
    - nomad.conf
    - sanity.conf
    - ncr.conf
- name: Link the nomad.conf to sites-enabled
  ansible.builtin.file:
    path: "/etc/nginx/sites-enabled/{{ item }}"
    state: link
    src: "/etc/nginx/sites-available/{{ item }}"
    mode: "0644"
  loop:
    - nomad.conf
    - sanity.conf
    - ncr.conf
- name: Restart nginx
  ansible.builtin.systemd_service:
    name: nginx
    state: restarted
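A manual sanity pass on the host after these tasks run might look like the following (the hostnames resolve locally via the copied /etc/hosts file):

nginx -t
curl -s -o /dev/null -w '%{http_code}\n' http://sanity.nigel.local/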


@@ -0,0 +1,12 @@
bind_addr = "{{ ip }}"
advertise_addr = "{{ ip }}"
bootstrap = true
bootstrap_expect = 1
client_addr = "{{ ip }}"
server = true
data_dir = "/opt/consul"
ui_config {
  enabled = true
}
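A quick way to confirm the rendered /etc/consul.d/consul.hcl parses cleanly (hedged, since the playbook currently leaves the consul service stopped) is:

consul validate /etc/consul.d/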


@@ -0,0 +1 @@
deb [signed-by={{ keyfile }}] https://apt.releases.hashicorp.com jammy main


@@ -0,0 +1 @@
host_vol_root: /opt/volumes


@@ -1,23 +0,0 @@
#!/bin/bash
set -e

bucket="$1"
s3env=/opt/nginx/s3.env

[[ -z "$bucket" ]] && echo "No bucket selected" && exit 1
[[ ! -f $s3env ]] && echo "No credentials to source!" && exit 1

source $s3env

pull() {
    aws s3 sync s3://$bucket /opt/nginx/$bucket
}

case $bucket in
    resume.shockrah.xyz|shockrah.xyz|temper.tv) pull;;
    *) echo "Invalid bucket name" && exit 1 ;;
esac


@@ -0,0 +1,4 @@
# Because I just really needed ok?
FROM nginx:latest
COPY default /etc/nginx/conf.d/default.conf


@@ -0,0 +1,15 @@
server {
    listen 8080;
    listen [::]:8080;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
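A plausible local build-and-check for this image, using the shockrah/sanity tag that the Nomad job later in this diff expects (the build context is assumed to be the directory holding this Dockerfile and the default file):

docker build -t shockrah/sanity .
docker run --rm -d -p 8080:8080 shockrah/sanity
curl -s http://localhost:8080/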


@@ -1,34 +0,0 @@
#!/bin/bash
set -e

opt=$1
plan=tfplan

build_plan() {
    echo Generating plan
    set -x
    terraform plan -var-file variables.tfvars -input=false -out $plan
}

deploy_plan() {
    terraform apply $plan
}

init() {
    terraform init
}

help_prompt() {
    cat <<- EOF
Options: plan deploy help
EOF
}

# Default to building a plan
source ./secrets.sh
case $opt in
    plan) build_plan;;
    deploy) deploy_plan;;
    *) help_prompt;;
esac


@@ -37,6 +37,7 @@ locals {
{ name = "www.shockrah.xyz", records = [ var.vultr_host ] },
{ name = "resume.shockrah.xyz", records = [ var.vultr_host ] },
{ name = "git.shockrah.xyz", records = [ var.vultr_host ] },
{ name = "lmao.shockrah.xyz", records = [ "207.246.107.99" ] },
]
}


@@ -0,0 +1,30 @@
# This 'service' job is just a simple nginx container that lives here as a kind of sanity check
# PORT: 8080
# DNS : sanity.nigel.local
job "health" {
  type = "service"

  group "health" {
    count = 1

    network {
      port "http" {
        static = 8080
      }
    }

    service {
      name     = "health-svc"
      port     = "http"
      provider = "nomad"
    }

    task "health-setup" {
      driver = "docker"
      config {
        image = "shockrah/sanity:latest"
        ports = [ "http" ]
      }
    }
  }
}
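Assuming the job spec is saved as health.nomad (the file name is not shown here), deploying and checking it through the sanity.nigel.local proxy would look roughly like:

nomad job run health.nomad
nomad job status health
curl -s http://sanity.nigel.local/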


@@ -0,0 +1,28 @@
resource tls_private_key tarpit {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource vultr_ssh_key tarpit {
  name    = "tarpit_ssh_key"
  ssh_key = chomp(tls_private_key.tarpit.public_key_openssh)
}

resource vultr_instance tarpit {
  # Core configuration
  plan              = var.host.plan
  region            = var.host.region
  os_id             = var.host.os
  enable_ipv6       = true
  ssh_key_ids       = [ vultr_ssh_key.host.id ]
  firewall_group_id = vultr_firewall_group.host.id
  label             = "Tarpit"
}

output tarpit_ssh_key {
  sensitive = true
  value     = tls_private_key.host.private_key_pem
}


@@ -17,7 +17,7 @@ resource kubernetes_pod admin {
   }
   spec {
     node_selector = {
-      NodeType = var.admin_services.namespace
+      "vke.vultr.com/node-pool" = var.admin_services.namespace
     }
     container {
       image = each.value.image


@@ -22,7 +22,7 @@ resource vultr_kubernetes_node_pools games {
   label     = var.game_servers.namespace
   min_nodes = var.cluster.pools["games"].min
   max_nodes = var.cluster.pools["games"].max
-  tag       = var.admin_services.namespace
+  tag       = var.game_servers.namespace
 }

 output k8s_config {


@@ -8,7 +8,7 @@ def get_args() -> Namespace:
prog="Cluster Search Thing",
description="General utility for finding resources for game server bot"
)
games = {"reflex", "minecraft"}
games = {"health", "reflex", "minecraft"}
parser.add_argument('-g', '--game', required=False, choices=games)
admin = {"health"}
@@ -21,11 +21,19 @@ def k8s_api(config_path: str) -> client.api.core_v1_api.CoreV1Api:
def get_admin_service_details(args: ArgumentParser, api: client.api.core_v1_api.CoreV1Api):
    print('admin thing requested', args.admin)
    services = api.list_service_for_all_namespaces(label_selector=f'app={args.admin}')
    if len(services.items) == 0:
        print(f'Unable to find {args.admin} amongst the admin-services')
        return
    port = services.items[0].spec.ports[0].port
    node_ips = list(filter(lambda a: a.type == 'ExternalIP', api.list_node().items[0].status.addresses))
    ipv4 = list(filter(lambda item: not re.match('[\d\.]{3}\d', item.address), node_ips))[0].address
    ipv6 = list(filter(lambda item: re.match('[\d\.]{3}\d', item.address), node_ips))[0].address
    print(f'{args.admin} --> {ipv4}:{port} ~~> {ipv6}:{port}')

def get_game_server_ip(args: ArgumentParser, api: client.api.core_v1_api.CoreV1Api):
    pods = api.list_pod_for_all_namespaces(label_selector=f'app={args.game}')
    node_name = pods.items[0].spec.node_name
    services = api.list_service_for_all_namespaces(label_selector=f'app={args.game}')
    port = services.items[0].spec.ports[0].port


@@ -29,4 +29,3 @@ resource vultr_firewall_rule admin-service-inbound {
  notes = each.value.port.notes
  port  = each.value.port.expose
}


@@ -21,31 +21,22 @@ cluster = {
 game_servers = {
   namespace = "games"
   configs = {
-    # minecraft = {
-    #   image = "itzg/minecraft-server"
-    #   cpu = "1000m"
-    #   mem = "2048Mi"
-    #   port = {
-    #     expose = 30808
-    #     internal = 80
-    #   }
-    # }
   }
 }

 admin_services = {
   namespace = "admin-services"
   configs = {
-    # health = {
-    #   image = "nginx:latest"
-    #   name = "health"
-    #   cpu = "200m"
-    #   mem = "64Mi"
-    #   port = {
-    #     notes = "Basic nginx sanity check service"
-    #     expose = 30800
-    #     internal = 80
-    #   }
-    # }
+    health = {
+      image = "nginx:latest"
+      name = "health"
+      cpu = "200m"
+      mem = "64Mi"
+      port = {
+        notes = "Basic nginx sanity check service"
+        expose = 30800
+        internal = 80
+      }
+    }
   }
 }