Compare commits

50 Commits

Author SHA1 Message Date
eea4c61537 Quick A record for testing static website migration
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 17s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 1m25s
2026-01-31 12:29:49 -08:00
ee860c6e1f Common names now line up with hostnames in the certificate through the 1 ingress 🔥
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 8s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 21s
2026-01-13 23:18:41 -08:00
1c11410c2d More resource refactors, upgrades and fixes for future work
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 16s
Housekeeping but the wiki got hosed :((
2026-01-07 00:53:11 -08:00
4d71994b85 Upgrading provider versions
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 18s
2026-01-07 00:21:12 -08:00
79cb4eb1a6 Cleaning up unused code
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 18s
2026-01-07 00:02:11 -08:00
e8817fe093 Adding wiki to DNS and opening it up on the ingress for public read access 2026-01-06 19:12:31 -08:00
97bffd2042 Adding note regarding git.shockrah.xyz & code.shockrah.xyz 2026-01-06 19:06:23 -08:00
37305fd74e Exposing 2222 in gitea service however ingress still needs configuration
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 20s
2026-01-06 00:06:47 -08:00
555124bf2f Shortening ingress definition
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 5s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 19s
2026-01-03 23:07:33 -08:00
e209da949b Adding wiki service with a basic page for now
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 20s
2026-01-03 21:43:16 -08:00
caa2eba639 Removing unused helm charts
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 20s
2025-12-28 19:30:13 -08:00
982669ed4a Cleaning up the logging namespace and resources as they are not providing value
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 7s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 24s
2025-12-12 14:41:29 -08:00
4446ef813f Fixing auto_scaler issue with root node pool in athens cluster 2025-12-12 14:40:54 -08:00
9dc2f1d769 Adding sample files and fluent bit configs which still need some work
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 21s
2025-11-10 14:18:05 -08:00
01b7b4ced8 Moving logging related things to the new logging namespace
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 5s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 18s
2025-11-05 21:55:40 -08:00
29cdfcb695 openobserve minimal setup running now with its own namespace and volumes
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 7s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 19s
2025-11-04 23:24:16 -08:00
bbbc9ed477 Upsizing the singular node to accommodate the new observability stack 2025-11-04 23:20:03 -08:00
d64c5526e6 Creating namespace for openobserve 2025-11-04 23:18:39 -08:00
469b3d08ce Adding hashicorp/random provider 2025-11-04 23:16:58 -08:00
7f5b3205d0 Ingress functional however this is all in a cooked af namespace
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 14s
2025-11-03 02:14:06 -08:00
67ff5ce729 Gitea appearing functional with the service in place, now waiting on LB setup
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 15s
2025-11-03 01:48:29 -08:00
6aadb47c61 Adding code.shockrah.xyz to DNS member list 2025-11-03 01:48:09 -08:00
0624161f53 Fixing the PV for gitea which now lives in the dev namespace
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 15s
2025-11-03 01:30:16 -08:00
c6b2a062e9 Creating dev namespace
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 19s
2025-11-03 01:17:54 -08:00
718647f617 Adding a new uptime service to configure later on
For now I'm staging this in the playground namespace since IDK if I'm going to keep it 5ever + it's an excuse to learn how to use basic volumes
2025-11-02 21:31:22 -08:00
cfe631eba7 Creating pvc for gitea setup
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 7s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 21s
2025-10-22 16:15:04 -07:00
29e049cf7f Moving legacy yaml 2025-10-21 15:15:24 -07:00
990d29ae6c Adding annotations & host field to ingress
Also updating the staging target to the production target for the Let's Encrypt cluster issuer
2025-10-21 12:42:02 -07:00
859201109e Adding required annotations for cert-manager on the ingress resource
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 15s
2025-10-03 18:04:20 -07:00
de3bff8f14 Creating cluster issuer with yaml piped into terraform
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 15s
2025-10-03 18:01:16 -07:00
54a6ddbe5d Changing out the kubectl provider for a new one 2025-10-03 17:59:01 -07:00
82333fe6ce Setting up cert-manager helm_release
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 5s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 19s
2025-10-03 17:25:23 -07:00
cddf67de2f Updating health ingress resource with better naming/referencing
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 15s
2025-09-28 15:56:24 -07:00
affa03bed5 Updating DNS for load balancer sanity.shockrah.xyz A record
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 5s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 21s
2025-09-28 13:56:47 -07:00
34e1f6afdf Converting backend provider helm config to use config.yaml
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 18s
2025-09-27 14:38:26 -07:00
fd9bd290af Adding support for helm releases
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 5s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 18s
Intended for setting up the nginx-ingress controller
2025-09-20 11:05:00 -07:00
d992556032 Basic sanity service now working under public DNS
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 3s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 15s
2025-09-17 23:32:50 -07:00
fce73d06e0 Adding dns vars for sanity.shockrah.xyz
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 15s
2025-09-17 23:08:33 -07:00
7f5d81f0ee Deployment and Service for a simple health check tier service
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 19s
2025-09-17 22:37:03 -07:00
410790765f Creating new namespace in cluster for random k8s experiments 2025-09-17 22:33:34 -07:00
9454e03f53 Example service now uses tls for some reason
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 18s
2025-09-09 18:05:12 -07:00
e6ed85920d Creating semi-functional tls cert with k8s
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 9s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 5s
Certificate resource is created but not deployed at this time
2025-09-08 21:00:24 -07:00
2775d354f8 Creating functional ingress 2025-09-08 20:58:44 -07:00
1f6f013634 Cleaning up unused resources
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 10s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 9s
2025-09-08 20:58:29 -07:00
778b995980 Adding DNS entry for VKE LB
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 18s
2025-09-03 14:27:52 -07:00
fc897bdd0e New yaml for a working MVP
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 20s
Still need to add things like TLS but this will basically
be the template for routing and service setup going forward
2025-08-29 16:34:18 -07:00
8f06ef269a Basic health setup
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 17s
2025-08-27 18:13:39 -07:00
f15da0c88d Removing old kubernetes tf infrastructure
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 6s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 19s
2025-08-27 00:30:38 -07:00
c602773657 Removing tarpit project to save on costs
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 4s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 21s
2025-08-08 07:32:33 -07:00
cd908d9c14 Sample cronjob for k3s
Some checks failed
Ansible Linting / ansible-lint (push) Failing after 5s
Secops Linting and Safety Checks / checkov-scan-s3 (push) Failing after 18s
2025-08-05 21:50:30 -07:00
43 changed files with 605 additions and 499 deletions

View File

@@ -37,7 +37,10 @@ locals {
{ name = "www.shockrah.xyz", records = [ var.vultr_host ] },
{ name = "resume.shockrah.xyz", records = [ var.vultr_host ] },
{ name = "git.shockrah.xyz", records = [ var.vultr_host ] },
{ name = "lmao.shockrah.xyz", records = [ "207.246.107.99" ] },
{ name = "sanity.shockrah.xyz", records = [ var.vke_lb ] },
{ name = "uptime.shockrah.xyz", records = [ var.vke_lb ] },
{ name = "code.shockrah.xyz", records = [ var.vke_lb ] },
{ name = "wiki.shockrah.xyz", records = [ var.vke_lb ] },
]
}
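
For orientation, the names in this list only become live DNS once they feed an actual record resource, which sits outside this hunk. A minimal sketch of that wiring, where `local.zone_records` and `aws_route53_zone.shockrah` are hypothetical names standing in for whatever the repo actually uses:

```hcl
# Hypothetical sketch: local.zone_records and aws_route53_zone.shockrah are
# assumed names, not taken from this diff.
resource aws_route53_record athens {
  for_each = { for r in local.zone_records : r.name => r }

  zone_id = aws_route53_zone.shockrah.zone_id
  name    = each.key
  type    = "A"
  ttl     = 300
  records = each.value.records
}
```

With something like this in place, swapping an entry from var.vultr_host to var.vke_lb is the whole migration step for a hostname.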

View File

@@ -33,3 +33,11 @@ resource "aws_route53_record" "temper-tv-mx" {
"50 fb.mail.gandi.net.",
]
}
resource "aws_route53_record" "temper-tv-test" {
zone_id = aws_route53_zone.temper-tv.id
name = "test.temper.tv"
type = "A"
ttl = 300
records = [ var.vke_lb ]
}

View File

@@ -26,3 +26,7 @@ variable "vultr_host" {
description = "IP of the temp Vultr host"
}
variable "vke_lb" {
type = string
description = "IP of our VKE load balancer"
}

View File

@@ -1 +1,2 @@
vultr_host = "45.32.83.83"
vke_lb = "45.32.89.101"

View File

@@ -0,0 +1,19 @@
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox:1.28
              imagePullPolicy: IfNotPresent
              command:
                - /bin/sh
                - -c
                - date; echo Hello from the sample cron-container
          restartPolicy: OnFailure

View File

@@ -1,28 +0,0 @@
resource tls_private_key tarpit {
algorithm = "RSA"
rsa_bits = 4096
}
resource vultr_ssh_key tarpit {
name = "tarpit_ssh_key"
ssh_key = chomp(tls_private_key.tarpit.public_key_openssh)
}
resource vultr_instance tarpit {
# Core configuration
plan = var.host.plan
region = var.host.region
os_id = var.host.os
enable_ipv6 = true
ssh_key_ids = [ vultr_ssh_key.host.id ]
firewall_group_id = vultr_firewall_group.host.id
label = "Tarpit"
}
output tarpit_ssh_key {
sensitive = true
value = tls_private_key.host.private_key_pem
}

View File

@@ -9,7 +9,7 @@ terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "5.98.0"
version = "6.27.0"
}
vultr = {
source = "vultr/vultr"
@@ -17,13 +17,24 @@ terraform {
}
kubernetes = {
source = "hashicorp/kubernetes"
version = "2.37.1"
version = "3.0.1"
}
kubectl = {
source = "gavinbunney/kubectl"
version = "1.19.0"
}
helm = {
source = "hashicorp/helm"
version = "3.0.2"
}
tls = {
source = "hashicorp/tls"
version = "4.1.0"
}
random = {
source = "hashicorp/random"
version = "3.7.2"
}
}
}
@@ -44,4 +55,12 @@ provider kubernetes {
config_path = "config.yaml"
}
provider kubectl {
config_path = "config.yaml"
}
provider helm {
kubernetes = {
config_path = "config.yaml"
}
}

View File

@@ -1,27 +0,0 @@
resource tls_private_key bastion {
algorithm = "ED25519"
}
resource vultr_ssh_key bastion {
name = "bastion"
ssh_key = tls_private_key.bastion.public_key_openssh
}
resource vultr_instance bastion {
region = var.cluster.region
vpc_ids = [ vultr_vpc.athens.id ]
plan = var.bastion.plan
os_id = var.bastion.os
label = var.bastion.label
ssh_key_ids = [ vultr_ssh_key.bastion.id ]
enable_ipv6 = true
disable_public_ipv4 = false
activation_email = false
}
output bastion_ssh {
value = tls_private_key.bastion.private_key_pem
sensitive = true
}

View File

@@ -0,0 +1,18 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    preferredChain: "ISRG Root X1"
    # Email address used for ACME registration
    email: dev@shockrah.xyz
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - http01:
          ingress:
            class: nginx

View File

@@ -2,7 +2,7 @@ resource vultr_kubernetes athens {
region = var.cluster.region
version = var.cluster.version
label = var.cluster.label
vpc_id = vultr_vpc.athens.id
# vpc_id = vultr_vpc.athens.id
node_pools {
node_quantity = var.cluster.pools["main"].min_nodes
@@ -10,6 +10,7 @@ resource vultr_kubernetes athens {
label = var.cluster.pools["main"].label
min_nodes = var.cluster.pools["main"].min_nodes
max_nodes = var.cluster.pools["main"].max_nodes
auto_scaler = true
}
}

View File

@@ -0,0 +1,6 @@
data vultr_kubernetes athens {
filter {
name = "label"
values = [ var.cluster.label ]
}
}
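
Worth noting how this data source usually gets consumed: the providers later in this change all read config.yaml, and that file can be generated rather than copied by hand. A minimal sketch, assuming the data source exposes a base64-encoded kube_config attribute and that the hashicorp/local provider is available (both worth double-checking against the provider docs):

```hcl
# Sketch only: kube_config being base64-encoded on the data source is an
# assumption about the Vultr provider schema, and local_file needs the
# hashicorp/local provider added to required_providers.
resource local_file athens_kubeconfig {
  filename = "${path.module}/config.yaml"
  content  = base64decode(data.vultr_kubernetes.athens.kube_config)
}
```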

View File

@@ -8,16 +8,3 @@
# port = each.value
# }
resource vultr_firewall_group bastion {
description = "For connections into and out of the bastion host"
}
resource vultr_firewall_rule bastion_inbound {
firewall_group_id = vultr_firewall_group.bastion.id
protocol = "tcp"
ip_type = "v4"
subnet = "0.0.0.0"
subnet_size = 0
port = 22
}

View File

@@ -0,0 +1,74 @@
# NOTE: this is a simple deployment for demo purposes only.
# Currently it does not support SSH access and lacks Gitea runners.
# However, a fully working setup can be found at: https://git.shockrah.xyz
resource kubernetes_deployment gitea {
metadata {
name = "gitea"
namespace = var.playground.namespace
labels = {
"app" = "gitea"
}
}
spec {
replicas = 1
selector {
match_labels = {
"app" = "gitea"
}
}
template {
metadata {
labels = {
"app" = "gitea"
}
}
spec {
container {
name = "gitea"
image = "gitea/gitea:latest"
port {
container_port = 3000
name = "gitea-main"
}
port {
container_port = 2222
name = "gitea-ssh"
}
volume_mount {
name = "gitea"
mount_path = "/data"
}
}
volume {
name = "gitea"
persistent_volume_claim {
claim_name = kubernetes_persistent_volume_claim_v1.gitea.metadata[0].name
}
}
}
}
}
}
resource kubernetes_service gitea {
metadata {
name = "gitea"
namespace = var.playground.namespace
}
spec {
selector = {
"app" = "gitea"
}
port {
target_port = "gitea-main"
port = 3000
name = "http"
}
port {
target_port = "gitea-ssh"
port = 2222
name = "ssh"
}
}
}
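
Since the note above flags that SSH isn't wired up yet, one common route for port 2222 is the ingress-nginx chart's TCP services map rather than the HTTP ingress rules. A hedged sketch of what that could look like on the helm_release defined later in this change (the external port 22 and the playground/gitea target are assumptions, not part of this diff):

```hcl
# Sketch only: the ingress-nginx chart exposes arbitrary TCP ports through its
# `tcp` values map, which also opens the port on the controller's LoadBalancer
# Service. "playground/gitea:2222" assumes the namespace and service above.
resource helm_release nginx {
  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true

  set = [
    {
      name  = "tcp.22"
      value = "playground/gitea:2222"
    }
  ]
}
```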

View File

@@ -0,0 +1,47 @@
resource kubernetes_deployment_v1 health {
metadata {
name = "health"
namespace = var.playground.namespace
}
spec {
replicas = 1
selector {
match_labels = {
name = "health"
}
}
template {
metadata {
labels = {
name = "health"
}
}
spec {
container {
name = "health"
image = "quanhua92/whoami:latest"
port {
container_port = "8080"
}
}
}
}
}
}
resource kubernetes_service_v1 health {
metadata {
name = "health"
namespace = var.playground.namespace
}
spec {
selector = {
name = "health"
}
port {
port = 80
target_port = 8080
name = "http"
}
}
}

View File

@@ -0,0 +1,7 @@
resource helm_release nginx {
name = "ingress-nginx"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
namespace = "ingress-nginx"
create_namespace = true
}
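
This controller's LoadBalancer Service is what the hard-coded vke_lb IP in the DNS vars ultimately points at. If that IP ever needs to be read back instead of copied by hand, a hedged sketch (the Service name is the chart's usual default, assumed here rather than pinned by this diff):

```hcl
# Sketch only: "ingress-nginx-controller" is the chart's conventional Service
# name, not something this change sets explicitly.
data kubernetes_service_v1 ingress_controller {
  metadata {
    name      = "ingress-nginx-controller"
    namespace = "ingress-nginx"
  }
}

output ingress_lb_ip {
  value = data.kubernetes_service_v1.ingress_controller.status[0].load_balancer[0].ingress[0].ip
}
```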

View File

@@ -0,0 +1,48 @@
locals {
services = {
"code.shockrah.xyz" = kubernetes_service.gitea
"sanity.shockrah.xyz" = kubernetes_service_v1.health
"uptime.shockrah.xyz" = kubernetes_service.kuma
"wiki.shockrah.xyz" = kubernetes_service.otterwiki
}
}
resource kubernetes_ingress_v1 health {
metadata {
name = "health-ingress"
namespace = var.playground.namespace
annotations = {
"cert-manager.io/cluster-issuer" = "letsencrypt"
"cert-manager.io/ingress.class" = "nginx"
}
}
spec {
ingress_class_name = "nginx"
dynamic tls {
for_each = local.services
content {
hosts = [tls.key]
secret_name = "${tls.value.metadata[0].name}-secret"
}
}
dynamic "rule" {
for_each = local.services
content {
host = "${rule.key}"
http {
path {
path = "/"
backend {
service {
name = rule.value.metadata[0].name
port {
number = rule.value.spec[0].port[0].port
}
}
}
}
}
}
}
}
}
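
To make the dynamic blocks concrete: each entry in local.services expands into one tls block and one rule. Excerpted from the generated spec, the wiki.shockrah.xyz entry is equivalent to roughly the following (illustration only, not additional code to apply):

```hcl
# What the dynamic blocks generate for the
# "wiki.shockrah.xyz" => kubernetes_service.otterwiki entry.
tls {
  hosts       = ["wiki.shockrah.xyz"]
  secret_name = "otterwiki-secret"
}

rule {
  host = "wiki.shockrah.xyz"
  http {
    path {
      path = "/"
      backend {
        service {
          name = "otterwiki"
          port {
            number = 80 # kubernetes_service.otterwiki exposes port 80
          }
        }
      }
    }
  }
}
```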

View File

@@ -1 +0,0 @@
terraform.yaml

View File

@@ -1,33 +0,0 @@
terraform {
required_version = ">= 0.13"
backend s3 {
bucket = "project-athens"
key = "infra/vke/k8s/state/build.tfstate"
region = "us-west-1"
encrypt = true
}
required_providers {
# For interacting with S3
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = "2.30.0"
}
}
}
provider aws {
access_key = var.aws_key
secret_key = var.aws_secret
region = var.aws_region
max_retries = 1
}
provider kubernetes {
config_path = "terraform.yaml"
}

View File

@@ -1,50 +0,0 @@
resource kubernetes_ingress_v1 athens {
metadata {
name = var.shockrahxyz.name
namespace = kubernetes_namespace.websites.metadata.0.name
labels = {
app = "websites"
}
}
spec {
rule {
host = "test.shockrah.xyz"
http {
path {
backend {
service {
name = var.shockrahxyz.name
port {
number = 80
}
}
}
path = "/"
}
}
}
}
}
resource kubernetes_service athens_lb {
metadata {
name = "athens-websites"
namespace = kubernetes_namespace.websites.metadata.0.name
labels = {
app = "websites"
}
}
spec {
selector = {
app = kubernetes_ingress_v1.athens.metadata.0.labels.app
}
port {
port = 80
target_port = 80
}
type = "LoadBalancer"
external_ips = [ var.cluster.ip ]
}
}

View File

@@ -1,5 +0,0 @@
resource kubernetes_namespace websites {
metadata {
name = "websites"
}
}

View File

@@ -1,62 +0,0 @@
# First we setup the ingress controller with helm
```sh
helm repo add traefik https://helm.traefik.io/traefik
helm repo update
# Now we can install this to our cluster
helm install --kubeconfig config.yaml traefik traefik/traefik
```
# Prove the service is present with
```sh
kubectl --kubeconfig config.yaml get svc
```
# Create the pods
```sh
kubectl --kubeconfig config.yaml apply -f k8s/nginx-dep.yaml
```
# Expose on port 80
```sh
kubectl --kubeconfig config.yaml apply -f k8s/nginx-service.yaml
```
# Create ingress on k8s
```sh
kubectl --kubeconfig config.yaml apply -f k8s/traefik-ingress.yaml
```
# Take the external IP from the ingress
Put that into Terraform's A record for the domain since this is a load balancer
in Vultr (an actual resource, apparently)
# Configure cert-manager for traefik ingress
Using the latest version from here:
https://github.com/cert-manager/cert-manager/releases/download/v1.14.2/cert-manager.crds.yaml
```sh
kubectl --kubeconfig config.yaml \
apply --validate=false \
-f https://github.com/cert-manager/cert-manager/releases/download/v1.14.2/cert-manager.yaml
```
# Create the cert issuer and certificate
```sh
kubectl --kubeconfig config.yaml apply -f k8s/letsencrypt-issuer.yaml
kubectl --kubeconfig config.yaml apply -f k8s/letsencrypt-issuer.yaml
```
Because we just have 1 cert for now, we are looking for its status to be `READY`

View File

@@ -1,21 +0,0 @@
Plain nginx for now so that we can test out reverse dns
resource kubernetes_pod shockrah {
metadata {
name = var.shockrahxyz.name
namespace = kubernetes_namespace.websites.metadata.0.name
labels = {
app = var.shockrahxyz.name
}
}
spec {
container {
image = "nginx"
name = "${var.shockrahxyz.name}"
port {
container_port = 80
}
}
}
}

View File

@@ -1,35 +0,0 @@
# API Keys required to reach AWS/Vultr
variable vultr_api_key {
type = string
sensitive = true
}
variable aws_key {
type = string
sensitive = true
}
variable aws_secret {
type = string
sensitive = true
}
variable aws_region {
type = string
sensitive = true
}
variable shockrahxyz {
type = object({
name = string
port = number
dns = string
})
}
variable cluster {
type = object({
ip = string
})
}

View File

@@ -1,37 +0,0 @@
# Here we are going to define the deployment and service
# Basically all things directly related to the actual service we want to provide
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: alternate-nginx-web
namespace: default
labels:
app: alternate-nginx-web
spec:
replicas: 1
selector:
matchLabels:
app: alternate-nginx-web
template:
metadata:
labels:
app: alternate-nginx-web
spec:
# Container comes from an example thing i randomly found on docker hub
containers:
- name: alternate-nginx-web
image: dockerbogo/docker-nginx-hello-world
---
apiVersion: v1
kind: Service
metadata:
name: alternate-nginx-web
namespace: default
spec:
selector:
app: alternate-nginx-web
ports:
- name: http
targetPort: 80
port: 80

View File

@@ -1,30 +0,0 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: hello.temprah-lab.xyz
namespace: default
spec:
secretName: hello.temprah-lab.xyz-tls
issuerRef:
name: letsencrypt-prod
kind: ClusterIssuer
commonName: hello.temprah-lab.xyz
dnsNames:
- hello.temprah-lab.xyz
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod-hello
namespace: default
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: dev@shockrah.xyz
privateKeySecretRef:
name: letsencrypt-prod-hello
solvers:
- http01:
ingress:
class: traefik

View File

@@ -1,13 +0,0 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: sample.temprah-lab.xyz
namespace: default
spec:
secretName: sample.temprah-lab.xyz-tls
issuerRef:
name: letsencrypt-prod
kind: ClusterIssuer
commonName: sample.temprah-lab.xyz
dnsNames:
- sample.temprah-lab.xyz

View File

@@ -1,20 +0,0 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: nginx-web
namespace: default
labels:
app: nginx-web
spec:
replicas: 1
selector:
matchLabels:
app: nginx-web
template:
metadata:
labels:
app: nginx-web
spec:
containers:
- name: nginx
image: nginx

View File

@@ -1,12 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: nginx-web
namespace: default
spec:
selector:
app: nginx-web
ports:
- name: http
targetPort: 80
port: 80

View File

@@ -1,44 +0,0 @@
# This is the first thing we need to create, an issue to put certs into
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
namespace: default
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: dev@shockrah.xyz
privateKeySecretRef:
name: letsencrypt-temprah-lab
solvers:
- http01:
ingress:
class: traefik
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: hello.temprah-lab.xyz
namespace: default
spec:
secretName: hello.temprah-lab.xyz-tls
issuerRef:
name: letsencrypt-temprah-lab
kind: ClusterIssuer
commonName: hello.temprah-lab.xyz
dnsNames:
- hello.temprah-lab.xyz
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: sample.temprah-lab.xyz
namespace: default
spec:
secretName: sample.temprah-lab.xyz-tls
issuerRef:
name: letsencrypt-temprah-lab
kind: ClusterIssuer
commonName: sample.temprah-lab.xyz
dnsNames:
- sample.temprah-lab.xyz

View File

@@ -1,31 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: traefik-ingress
namespace: default
labels:
name: project-athens-lb
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: sample.temprah-lab.xyz
http:
paths:
- backend:
service:
name: nginx-web
port:
number: 80
path: /
pathType: Prefix
- host: hello.temprah-lab.xyz
http:
paths:
- backend:
service:
name: alternate-nginx-web
port:
number: 80
path: /
pathType: Prefix

View File

@@ -1,15 +1,14 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
kind: Issuer
metadata:
name: letsencrypt-prod
namespace: default
name: letsencrypt-nginx
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: dev@shockrah.xyz
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: letsencrypt-prod
name: example
solvers:
- http01:
ingress:
class: traefik
class: nginx

View File

@@ -0,0 +1,36 @@
apiVersion: v1
kind: Service
metadata:
  name: whoami-service
spec:
  selector:
    name: whoami
  ports:
    - name: http
      port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  annotations:
    cert-manager.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - secretName: whoami-tls
      hosts:
        - example.shockrah.xyz
  rules:
    - host: example.shockrah.xyz
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami-service
                port:
                  number: 80

View File

@@ -0,0 +1,21 @@
apiVersion: v1
kind: Service
metadata:
  name: whoami-lb
  annotations:
    service.beta.kubernetes.io/vultr-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/vultr-loadbalancer-algorithm: "least_connections"
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-protocol: "http"
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-path: "/health"
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-interval: "30"
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-response-timeout: "5"
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-unhealthy-threshold: "5"
    service.beta.kubernetes.io/vultr-loadbalancer-healthcheck-healthy-threshold: "5"
spec:
  type: LoadBalancer
  selector:
    name: whoami
  ports:
    - name: http
      port: 80
      targetPort: 8080

View File

@@ -0,0 +1,20 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      name: whoami
  template:
    metadata:
      labels:
        name: whoami
    spec:
      containers:
        - name: whoami
          image: quanhua92/whoami:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080

View File

@@ -0,0 +1,37 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    preferredChain: "ISRG Root X1"
    # Email address used for ACME registration
    email: dev@shockrah.xyz
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: dev@shockrah.xyz
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx

View File

@@ -0,0 +1,10 @@
resource kubernetes_namespace playground {
metadata {
annotations = {
names = var.playground.namespace
}
name = var.playground.namespace
}
}

View File

@@ -0,0 +1,30 @@
resource helm_release shockrah_cert_manager {
name = "cert-manager"
repository = "https://charts.jetstack.io"
chart = "cert-manager"
version = "v1.18.2"
namespace = "cert-manager"
create_namespace = true
cleanup_on_fail = true
set = [
{
name = "crds.enabled"
value = "true"
}
]
}
data kubectl_file_documents cluster_issuer {
content = file("cluster-issuer.yaml")
}
resource kubectl_manifest cluster_issuer {
for_each = data.kubectl_file_documents.cluster_issuer.manifests
yaml_body = each.value
depends_on = [
data.kubectl_file_documents.cluster_issuer
]
}
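
One ordering detail: the ClusterIssuer can only be applied once cert-manager's CRDs exist, and the depends_on above only references the file data source, which is available immediately. A hedged sketch of tightening that, assuming no other ordering mechanism is already in play:

```hcl
# Sketch only: makes the manifest wait for the cert-manager release (and the
# CRDs it installs via crds.enabled) before the ClusterIssuer is applied.
resource kubectl_manifest cluster_issuer {
  for_each  = data.kubectl_file_documents.cluster_issuer.manifests
  yaml_body = each.value

  depends_on = [
    helm_release.shockrah_cert_manager
  ]
}
```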

View File

@@ -0,0 +1,61 @@
resource kubernetes_deployment kuma {
metadata {
name = "kuma"
namespace = var.playground.namespace
labels = {
"app" = "kuma"
}
}
spec {
replicas = 1
selector {
match_labels = {
"app" = "kuma"
}
}
template {
metadata {
labels = {
"app" = "kuma"
}
}
spec {
container {
name = "kuma"
image = "louislam/uptime-kuma:2"
port {
container_port = 3001
name = "uptime-kuma"
}
volume_mount {
name = "kuma-data"
mount_path = "/app/data"
}
}
volume {
name = "kuma-data"
persistent_volume_claim {
claim_name = kubernetes_persistent_volume_claim_v1.kuma.metadata[0].name
}
}
}
}
}
}
resource kubernetes_service kuma {
metadata {
name = "kuma"
namespace = var.playground.namespace
}
spec {
selector = {
"app" = "kuma"
}
port {
target_port = "uptime-kuma"
port = 3001
name = "http"
}
}
}

View File

@@ -37,19 +37,13 @@ variable cluster {
}
variable personal {
variable playground {
type = object({
namespace = string
# TODO: Re-incorporate this var for templating later
tls = object({
email = string
})
})
}
variable bastion {
type = object({
plan = string
os = string
label = string
})
}

View File

@@ -1,7 +1,7 @@
cluster = {
region = "lax"
label = "athens-cluster"
version = "v1.33.0+1"
version = "v1.34.1+2"
pools = {
main = {
node_quantity = 1
@@ -14,14 +14,11 @@ cluster = {
}
}
personal = {
namespace = "athens-main"
playground = {
namespace = "playground"
# Sanity check service that is used purely for the sake of ensuring
# things are ( at a basic level ) functional
tls = {
email = "dev@shockrah.xyz"
}
bastion = {
plan = "vc2-1c-2gb"
label = "bastion"
os = "1743"
}

View File

@@ -0,0 +1,49 @@
resource kubernetes_persistent_volume_claim_v1 kuma {
metadata {
name = "kuma-data"
namespace = var.playground.namespace
}
spec {
volume_mode = "Filesystem"
access_modes = [ "ReadWriteOnce"]
resources {
requests = {
storage = "10Gi"
}
}
}
}
resource kubernetes_persistent_volume_claim_v1 gitea {
metadata {
name = "gitea-data"
namespace = var.playground.namespace
}
spec {
volume_mode = "Filesystem"
access_modes = [ "ReadWriteOnce"]
resources {
requests = {
storage = "10Gi"
}
}
}
}
resource kubernetes_persistent_volume_claim_v1 otterwiki {
metadata {
name = "otterwiki-data"
namespace = var.playground.namespace
}
spec {
volume_mode = "Filesystem"
access_modes = [ "ReadWriteOnce"]
resources {
requests = {
storage = "10Gi"
}
}
}
}
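
These claims lean on the cluster's default StorageClass for dynamic provisioning; on VKE that is normally Vultr block storage. To pin it explicitly rather than rely on the default, a hedged sketch (the class name is an assumption to verify with kubectl get storageclass, and the resource/claim names are placeholders):

```hcl
# Sketch only: "vultr-block-storage" is assumed to be the VKE default
# StorageClass name; "example" / "example-data" are placeholder names.
resource kubernetes_persistent_volume_claim_v1 example {
  metadata {
    name      = "example-data"
    namespace = var.playground.namespace
  }
  spec {
    storage_class_name = "vultr-block-storage"
    access_modes       = ["ReadWriteOnce"]
    resources {
      requests = {
        storage = "10Gi"
      }
    }
  }
}
```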

View File

@@ -1,4 +0,0 @@
resource vultr_vpc athens {
description = "Private VPC for private and personal service projects"
region = var.cluster.region
}

View File

@@ -0,0 +1,63 @@
resource kubernetes_deployment otterwiki {
metadata {
name = "otterwiki"
namespace = var.playground.namespace
labels = {
"app" = "otterwiki"
}
}
spec {
replicas = 1
selector {
match_labels = {
"app" = "otterwiki"
}
}
template {
metadata {
labels = {
"app" = "otterwiki"
}
}
spec {
container {
name = "otterwiki"
image = "redimp/otterwiki:2"
port {
container_port = 8080
name = "otterwiki-main"
}
volume_mount {
name = "otterwiki-data"
mount_path = "/var/lib/otterwiki"
}
}
volume {
name = "otterwiki-data"
persistent_volume_claim {
claim_name = kubernetes_persistent_volume_claim_v1.otterwiki.metadata[0].name
}
}
}
}
}
}
resource kubernetes_service otterwiki {
metadata {
name = "otterwiki"
namespace = var.playground.namespace
}
spec {
selector = {
"app" = "otterwiki"
}
port {
port = 80
target_port = "otterwiki-main"
protocol = "TCP"
name = "http"
}
}
}