Compare commits

...

20 Commits

Author SHA1 Message Date
79bd7424c3 Moving around more stuff
2025-05-12 00:18:24 -07:00
5227bea568 renaming stuff to note that it's not used anymore 2025-05-12 00:17:30 -07:00
47b69d7f49 Nomad now responds to the basic nomad.nigel.local DNS name
2025-05-10 17:26:45 -07:00
a3fdc5fcc7 Sanity check job with nomad :D
2025-05-10 15:38:16 -07:00
5a1afb4a07 Make sure the nomad agent is running on boot
2025-05-10 14:58:19 -07:00
e03daa62e5 removing unused role
2025-05-10 14:52:32 -07:00
15dfaea8db Nomad completely setup with --tags nomad now
2025-05-04 23:35:58 -07:00
ef4967cd88 wari wari da it's so over ( im using ansible again )
:(((
2025-04-23 23:25:34 -07:00
55217ce50b Ensure nigel sudo ability is setup
2025-04-23 22:25:23 -07:00
2bbc9095f7 Removing services role as it's being replaced by terraform
2025-04-23 22:14:50 -07:00
fcf7ded218 Removing docker resources for now
Migrating to terraform for better state control
2025-04-23 22:14:05 -07:00
b68d53b143 Opting for an example minio setup over filebrowser
2025-04-16 21:00:39 -07:00
3c6bc90feb health container and filebrowser container now active
Configuration needed at this point however
2025-04-16 20:28:48 -07:00
3521b840ae Separating the roles of basic infra requirements and docker service requirements into separate roles
With this we have a working proof of concept for a proper simple docker host
2025-04-16 18:25:24 -07:00
5f10976264 Docker now setup with ansible
2025-04-16 17:34:03 -07:00
10e936a8da Basic docker setup verified by ansible-lint locally
2025-04-16 14:55:02 -07:00
8bbaea8fd9 Simple admin user setup on a clean Ubuntu machine
2025-04-11 02:43:22 -07:00
d39e0c04e5 Adding health to games selector set
2025-02-10 22:11:09 -08:00
b99525955e Swapping health pod to admin-services 2025-02-10 22:10:46 -08:00
9b6f9b6656 Fixing tag issues with pod selector
2025-02-10 22:10:02 -08:00
36 changed files with 274 additions and 50 deletions

3
ansible/inventory.yaml Normal file

@@ -0,0 +1,3 @@
nigel:
  hosts:
    nigel.local:
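
A note on the inventory: the plays below reach this host with remote_user: nigel. An alternative (a sketch, not part of this change) is to pin the connection details on the host entry itself, so ad-hoc runs such as ansible -i ansible/inventory.yaml nigel -m ping pick them up too:

nigel:
  hosts:
    nigel.local:
      ansible_user: nigel
      # assumed key path; pairs with the ~/.ssh/nigel/admin.pub pushed by the one-shot play below
      ansible_ssh_private_key_file: ~/.ssh/nigel/admin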

4
ansible/linter.yaml Normal file

@@ -0,0 +1,4 @@
---
skip_list:
- role-name
- var-naming[no-role-prefix]
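
linter.yaml is an ansible-lint configuration that skips the role-name and var-naming[no-role-prefix] rules. A minimal sketch of running the linter against it on every push (the workflow path, runner label, and steps are assumptions, not part of this diff):

# hypothetical .gitea/workflows/ansible-lint.yaml
name: Ansible Linting
on: [push]
jobs:
  ansible-lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install the linter
        run: pip install ansible-lint
      - name: Lint the ansible tree with the repo config
        run: ansible-lint -c ansible/linter.yaml ansible/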


@@ -0,0 +1,27 @@
# This playbook is meant to be a one-shot, run manually from the dev box.
# Everything else in the roles is meant to run as the admin user that
# this playbook creates for us.
---
- hosts: nigel.local
  remote_user: nigel
  vars:
    admin:
      username: nigel
  tasks:
    - name: Copy the nigel admin key
      ansible.builtin.authorized_key:
        user: "{{ admin.username }}"
        state: present
        key: "{{ lookup('file', '~/.ssh/nigel/admin.pub') }}"
    - name: Prevent password based logins
      become: true
      ansible.builtin.lineinfile:
        dest: /etc/ssh/sshd_config
        line: PasswordAuthentication no
        state: present
        backup: true
    - name: Restart SSH Daemon
      become: true
      ansible.builtin.service:
        name: ssh
        state: restarted
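
One possible refinement (a sketch, not part of this change): the play above restarts sshd on every run, so every run reports a change. Notifying a handler from the lineinfile task bounces the daemon only when sshd_config was actually modified:

    - name: Prevent password based logins
      become: true
      ansible.builtin.lineinfile:
        dest: /etc/ssh/sshd_config
        line: PasswordAuthentication no
        state: present
        backup: true
      notify: Restart SSH Daemon    # fires only when the line changed
  handlers:
    - name: Restart SSH Daemon
      become: true
      ansible.builtin.service:
        name: ssh
        state: restarted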

11
ansible/nuc.yaml Normal file

@@ -0,0 +1,11 @@
---
- hosts: nigel.local
  remote_user: nigel
  tasks:
    - name: Setup basic role on nigel
      tags:
        - setup
        - nomad
        - proxy
      ansible.builtin.include_role:
        name: local-server-head


@@ -0,0 +1 @@
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu noble stable


@@ -0,0 +1,14 @@
127.0.0.1 localhost
127.0.1.1 nigel
# Our own dns stuff
127.0.1.1 nigel.local
127.0.1.1 nomad.nigel.local
127.0.1.1 sanity.nigel.local
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
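
This hosts file is shipped wholesale by the copy task in reverse_proxy.yaml further down, which overwrites anything cloud-init or other tooling appends. A sketch of a narrower alternative that manages only the nigel.local names and leaves the rest of /etc/hosts alone:

- name: Ensure the nigel.local names resolve locally
  become: true
  ansible.builtin.blockinfile:
    path: /etc/hosts
    # marker text is arbitrary; it just brackets the managed lines
    marker: "# {mark} ANSIBLE MANAGED nigel.local names"
    block: |
      127.0.1.1 nigel.local
      127.0.1.1 nomad.nigel.local
      127.0.1.1 sanity.nigel.local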


@@ -0,0 +1,8 @@
server {
    server_name nomad.nigel.local;

    location / {
        proxy_pass http://localhost:4646;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}


@@ -0,0 +1,7 @@
server {
    server_name sanity.nigel.local;

    location / {
        proxy_pass http://localhost:8000;
    }
}


@@ -0,0 +1,41 @@
- name: Ensure we have basic updated packages setting up docker
  ansible.builtin.apt:
    name: "{{ item }}"
    update_cache: true
  loop:
    - ca-certificates
    - curl
- name: Running install on the keyrings directory
  ansible.builtin.command:
    cmd: install -m 0755 -d /etc/apt/keyrings
  register: install
  changed_when: install.rc == 0
- name: Fetch Docker GPG Key
  vars:
    keylink: https://download.docker.com/linux/ubuntu/gpg
  ansible.builtin.get_url:
    url: "{{ keylink }}"
    dest: /etc/apt/keyrings/docker.asc
    mode: "0644"
- name: Add repo to apt sources
  ansible.builtin.copy:
    src: docker.list
    dest: /etc/apt/sources.list.d/docker.list
    mode: "0644"
- name: Update Apt cache with latest docker.list packages
  ansible.builtin.apt:
    update_cache: true
- name: Ensure all docker packages are updated to the latest versions
  ansible.builtin.apt:
    name: "{{ item }}"
  loop:
    - docker-ce
    - docker-ce-cli
    - containerd.io
    - docker-buildx-plugin
    - docker-compose-plugin
- name: Verify that the docker components are installed properly
  ansible.builtin.command:
    cmd: docker run hello-world
  register: docker
  changed_when: docker.rc == 0
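
Two tasks here report changed on every run: the bare install command and the docker run hello-world check. A sketch of more idempotent equivalents, with the package list collapsed into a single apt call instead of a per-item loop (behavior otherwise the same):

- name: Ensure the keyrings directory exists
  ansible.builtin.file:
    path: /etc/apt/keyrings
    state: directory
    mode: "0755"
- name: Ensure all docker packages are present
  ansible.builtin.apt:
    name:
      - docker-ce
      - docker-ce-cli
      - containerd.io
      - docker-buildx-plugin
      - docker-compose-plugin
    update_cache: true
- name: Verify that docker can run a container
  ansible.builtin.command:
    cmd: docker run --rm hello-world
  changed_when: false   # read-only smoke test, so never report a change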


@@ -0,0 +1,33 @@
- name: Ensure docker components are installed
  tags:
    - setup
  ansible.builtin.include_tasks:
    file: ensure-docker-basic.yaml
    apply:
      become: true
      tags:
        - setup
- name: Ensure nigel can use sudo without password
  become: true
  tags:
    - setup
  ansible.builtin.lineinfile:
    path: /etc/sudoers
    state: present
    line: "nigel ALL=(ALL) NOPASSWD:ALL"
- name: Run through nomad installation steps
  tags: nomad
  ansible.builtin.include_tasks:
    file: nomad.yaml
    apply:
      become: true
      tags:
        - nomad
- name: Setup the reverse proxy outside of nomad
  tags: proxy
  ansible.builtin.include_tasks:
    file: reverse_proxy.yaml
    apply:
      become: true
      tags:
        - proxy
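
A safety note on the sudoers task: a typo written straight into /etc/sudoers can lock sudo out entirely. A sketch of the same rule placed in a validated drop-in instead (the file name under /etc/sudoers.d is an assumption):

- name: Ensure nigel can use sudo without password
  become: true
  tags:
    - setup
  ansible.builtin.lineinfile:
    path: /etc/sudoers.d/nigel
    create: true
    mode: "0440"
    line: "nigel ALL=(ALL) NOPASSWD:ALL"
    # visudo checks the candidate file before it is moved into place
    validate: /usr/sbin/visudo -cf %s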


@@ -0,0 +1,39 @@
- name: Ensure prerequisite packages are installed
  ansible.builtin.apt:
    pkg:
      - wget
      - gpg
      - coreutils
    update_cache: true
- name: Hashicorp repo setup
  vars:
    keypath: /usr/share/keyrings/hashicorp-archive-keyring.gpg
    gpgpath: /tmp/hashicorp.gpg
  block:
    - name: Download the hashicorp GPG Key
      ansible.builtin.get_url:
        url: https://apt.releases.hashicorp.com/gpg
        dest: "{{ gpgpath }}"
    - name: Dearmor the hashicorp gpg key
      ansible.builtin.command:
        cmd: "gpg --dearmor --yes -o {{ keypath }} {{ gpgpath }}"
      register: gpg
      changed_when: gpg.rc == 0
    - name: Add the hashicorp linux repo
      vars:
        keyfile: "{{ keypath }}"
      ansible.builtin.template:
        src: hashicorp.list
        dest: /etc/apt/sources.list.d/hashicorp.list
        mode: "0644"
    - name: Update apt repo cache
      ansible.builtin.apt:
        update_cache: true
- name: Install nomad package
  ansible.builtin.apt:
    pkg: nomad
- name: Make sure the nomad service is available
  ansible.builtin.systemd_service:
    name: nomad
    state: started
    enabled: true
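
The download / dearmor / template sequence above is the manual way to add an apt repo; note that the hashicorp.list template further down pins the jammy suite while docker.list targets noble. On ansible-core 2.15+ the same repo can be declared in one task, keyed to whatever release the host reports (a sketch, not part of this change):

- name: Add the hashicorp apt repo (deb822 form)
  ansible.builtin.deb822_repository:
    name: hashicorp
    types: deb
    uris: https://apt.releases.hashicorp.com
    # follows the host's own release instead of hardcoding jammy/noble,
    # assuming HashiCorp publishes packages for that release
    suites: "{{ ansible_distribution_release }}"
    components: main
    signed_by: https://apt.releases.hashicorp.com/gpg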


@@ -0,0 +1,29 @@
- name: Keep /etc/hosts up to date
  ansible.builtin.copy:
    dest: /etc/hosts
    src: host-file
    mode: "0644"
- name: Ensure nginx is setup as latest
  ansible.builtin.apt:
    name: nginx
- name: Copy the nomad.conf to available configurations
  ansible.builtin.copy:
    src: "{{ item }}"
    dest: "/etc/nginx/sites-available/{{ item }}"
    mode: "0644"
  loop:
    - nomad.conf
    - sanity.conf
- name: Link the nomad.conf to sites-enabled
  ansible.builtin.file:
    path: "/etc/nginx/sites-enabled/{{ item }}"
    state: link
    src: "/etc/nginx/sites-available/{{ item }}"
    mode: "0644"
  loop:
    - nomad.conf
    - sanity.conf
- name: Restart nginx
  ansible.builtin.systemd_service:
    name: nginx
    state: restarted
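
Restarting nginx unconditionally means a broken vhost takes the proxy down and every run reports a change. A sketch of a gentler finish: check the rendered configuration first, then reload rather than restart (the copy and link tasks would notify the reload so it only fires on change):

- name: Check that the nginx configuration parses
  ansible.builtin.command:
    cmd: nginx -t
  changed_when: false   # purely a validation step
- name: Reload nginx
  ansible.builtin.systemd_service:
    name: nginx
    state: reloaded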


@@ -0,0 +1 @@
deb [signed-by={{ keyfile }}] https://apt.releases.hashicorp.com jammy main


@@ -1,23 +0,0 @@
#!/bin/bash
set -e
bucket="$1"
s3env=/opt/nginx/s3.env
[[ -z "$bucket" ]] && echo "No bucket selected" && exit 1
[[ ! -f $s3env ]] && echo "No credentials to source!" && exit 1
source $s3env
pull() {
    aws s3 sync s3://$bucket /opt/nginx/$bucket
}
case $bucket in
    resume.shockrah.xyz|shockrah.xyz|temper.tv) pull;;
    *) echo "Invalid bucket name" && exit 1 ;;
esac


@@ -0,0 +1,31 @@
# This 'service' job is just a simple nginx container that lives here as a kind of sanity check
# PORT: 8000
# DNS : sanity.nigel.local
job "health" {
type = "service"
group "health" {
count = 1
network {
port "http" {
static = 8000
to = 80
}
}
service {
name = "health-svc"
port = "http"
provider = "nomad"
}
task "health-setup" {
driver = "docker"
config {
image = "nginx:latest"
ports = [ "http" ]
}
}
}
}
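
With the job running, the nginx vhost linked, and the hosts entries in place, the whole sanity chain can be checked from the same playbook; a sketch run on the host itself, with an assumed retry budget:

- name: Wait for the sanity service to answer through nginx
  ansible.builtin.uri:
    url: http://sanity.nigel.local/
    status_code: 200
  register: sanity
  until: sanity.status == 200
  retries: 5    # retry budget and delay are assumptions
  delay: 3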


@@ -17,7 +17,7 @@ resource kubernetes_pod admin {
}
spec {
node_selector = {
NodeType = var.admin_services.namespace
"vke.vultr.com/node-pool" = var.admin_services.namespace
}
container {
image = each.value.image


@@ -22,7 +22,7 @@ resource vultr_kubernetes_node_pools games {
label = var.game_servers.namespace
min_nodes = var.cluster.pools["games"].min
max_nodes = var.cluster.pools["games"].max
tag = var.admin_services.namespace
tag = var.game_servers.namespace
}
output k8s_config {


@@ -8,7 +8,7 @@ def get_args() -> Namespace:
        prog="Cluster Search Thing",
        description="General utility for finding resources for game server bot"
    )
    games = {"reflex", "minecraft"}
    games = {"health", "reflex", "minecraft"}
    parser.add_argument('-g', '--game', required=False, choices=games)
    admin = {"health"}
@@ -21,11 +21,19 @@ def k8s_api(config_path: str) -> client.api.core_v1_api.CoreV1Api:
def get_admin_service_details(args: ArgumentParser, api: client.api.core_v1_api.CoreV1Api):
    print('admin thing requested', args.admin)
    services = api.list_service_for_all_namespaces(label_selector=f'app={args.admin}')
    if len(services.items) == 0:
        print(f'Unable to find {args.admin} amongst the admin-services')
        return
    port = services.items[0].spec.ports[0].port
    node_ips = list(filter(lambda a: a.type == 'ExternalIP', api.list_node().items[0].status.addresses))
    ipv4 = list(filter(lambda item: not re.match('[\d\.]{3}\d', item.address), node_ips))[0].address
    ipv6 = list(filter(lambda item: re.match('[\d\.]{3}\d', item.address), node_ips))[0].address
    print(f'{args.admin} --> {ipv4}:{port} ~~> {ipv6}:{port}')

def get_game_server_ip(args: ArgumentParser, api: client.api.core_v1_api.CoreV1Api):
    pods = api.list_pod_for_all_namespaces(label_selector=f'app={args.game}')
    node_name = pods.items[0].spec.node_name
    services = api.list_service_for_all_namespaces(label_selector=f'app={args.game}')
    port = services.items[0].spec.ports[0].port


@@ -29,4 +29,3 @@ resource vultr_firewall_rule admin-service-inbound {
notes = each.value.port.notes
port = each.value.port.expose
}


@@ -21,31 +21,22 @@ cluster = {
game_servers = {
  namespace = "games"
  configs = {
    # minecraft = {
    #   image = "itzg/minecraft-server"
    #   cpu = "1000m"
    #   mem = "2048Mi"
    #   port = {
    #     expose = 30808
    #     internal = 80
    #   }
    # }
  }
}

admin_services = {
  namespace = "admin-services"
  configs = {
    # health = {
    #   image = "nginx:latest"
    #   name = "health"
    #   cpu = "200m"
    #   mem = "64Mi"
    #   port = {
    #     notes = "Basic nginx sanity check service"
    #     expose = 30800
    #     internal = 80
    #   }
    # }
    health = {
      image = "nginx:latest"
      name = "health"
      cpu = "200m"
      mem = "64Mi"
      port = {
        notes = "Basic nginx sanity check service"
        expose = 30800
        internal = 80
      }
    }
  }
}