---
title: Hosting websites with Fargate ECS
date: 2023-03-27
draft: false
description: What I learned from using Fargate and ECS to host websites
category: article
image: /media/img/fargate/first.jpg
---

# Context - How my Websites have been Hosted in the Past

A very straightforward setup that lent itself well to keeping costs low, and
which may inspire you to use a similar setup should you want to host
your own website with a Fargate cluster:

## Infrastructure Maintained with Terraform

* Single EC2 machine
* A security group for SSH and web ingress, as well as HTTPS egress
* Elastic IP attached
* `A` records in Route 53 pointing to the Elastic IP
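
For a rough idea of what that looked like, here's a minimal Terraform sketch.
Every name, ID, and domain below is a placeholder rather than my actual config:

```tf
variable "zone_id" {} # hypothetical hosted-zone ID

resource "aws_security_group" "web" {
  name = "web"

  # SSH + web ingress
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTPS egress for package updates and the like
  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "web" {
  ami                    = "ami-xxxxxxxx" # placeholder AMI ID
  instance_type          = "t3.micro"
  vpc_security_group_ids = [aws_security_group.web.id]
}

resource "aws_eip" "web" {
  instance = aws_instance.web.id
}

resource "aws_route53_record" "site" {
  zone_id = var.zone_id
  name    = "example.com" # placeholder domain
  type    = "A"
  ttl     = 300
  records = [aws_eip.web.public_ip]
}
```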

## Software Required for this Setup

* Nginx as a reverse proxy which pointed to all the HTML/CSS/JS required for each website

The config for this was usually hand written, then deployed with Ansible.
Configuration for TLS was deployed afterwards with Ansible as well.
This worked well enough and it meant I had a decent solution to TLS cert renewals.

* Gitlab pipelines for deploying static files

Since this required SSH access, a service user was created and SSH would
usually be set up in Gitlab with the pipeline script below:

```yml
before_script:
  # Load the service user's deploy key into an ssh-agent
  - eval "$(ssh-agent -s)"
  - echo "${CI_USER_KEY}" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh/
  - chmod 700 ~/.ssh/
  # Pin the host key so the pipeline never gets an interactive prompt
  - ssh-keyscan shockrah.xyz >> ~/.ssh/known_hosts 2>&1
  - chmod 644 ~/.ssh/known_hosts
```

# Technical Side of the Current Setup

* 1 Fargate cluster containing 1 service for each website

Technically the same could be achieved with 1 service for all websites, but
that will come in the next post.
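
Roughly what the cluster and one of the services look like in Terraform.
Names here are hypothetical, and the task definition and target group this
references are sketched further down:

```tf
variable "public_subnets" { type = list(string) } # hypothetical subnet IDs

resource "aws_ecs_cluster" "sites" {
  name = "sites"
}

# One of these service blocks exists per website
resource "aws_ecs_service" "site" {
  name            = "site"
  cluster         = aws_ecs_cluster.sites.id
  task_definition = aws_ecs_task_definition.site.arn # sketched further down
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = var.public_subnets
    assign_public_ip = true
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.site.arn # sketched further down
    container_name   = "web"
    container_port   = 80
  }
}
```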

* 1 Load balancer for all websites

Basically I'm just abusing the load balancer's listener rules to direct
traffic to the correct service target group in the desired cluster.
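
A sketch of one such rule, assuming a hypothetical HTTPS listener defined
elsewhere and the same placeholder naming as before. Each website gets its
own target group + host-header rule pair:

```tf
variable "vpc_id" {} # hypothetical VPC ID

resource "aws_lb_target_group" "site" {
  name        = "site"
  port        = 80
  protocol    = "HTTP"
  target_type = "ip" # Fargate tasks register by IP
  vpc_id      = var.vpc_id
}

resource "aws_lb_listener_rule" "site" {
  listener_arn = aws_lb_listener.https.arn # HTTPS listener, assumed defined elsewhere
  priority     = 10

  condition {
    host_header {
      values = ["shockrah.xyz"]
    }
  }

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.site.arn
  }
}
```

Adding a new site then just means adding one more target group and rule pair.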

* ACM managed certificates

Super easy renewal of certificates, be it through Terraform or manually.
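
The usual Terraform pattern for this, again with placeholder names; DNS
validation is what makes the renewals hands-off:

```tf
resource "aws_acm_certificate" "site" {
  domain_name       = "shockrah.xyz"
  validation_method = "DNS"
}

resource "aws_route53_record" "cert_validation" {
  for_each = {
    for dvo in aws_acm_certificate.site.domain_validation_options :
    dvo.domain_name => dvo
  }
  zone_id = var.zone_id # same hypothetical hosted-zone variable as before
  name    = each.value.resource_record_name
  type    = each.value.resource_record_type
  ttl     = 60
  records = [each.value.resource_record_value]
}

resource "aws_acm_certificate_validation" "site" {
  certificate_arn         = aws_acm_certificate.site.arn
  validation_record_fqdns = [for r in aws_route53_record.cert_validation : r.fqdn]
}
```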

## Software/Docker Images used for Services

* [nginx-s3-gateway](https://github.com/nginxinc/nginx-s3-gateway)

This image lets nginx proxy **private** S3 buckets.
This is basically how I am able to keep my S3 buckets totally locked down
while still allowing the content to be cleanly viewable to the public.
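
A hedged sketch of how a task definition for this image might look. The image
path and environment variable names are from my memory of the project's
README, so double-check them against the repo before relying on them:

```tf
resource "aws_ecs_task_definition" "site" {
  family                   = "site"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = "256"
  memory                   = "512"
  execution_role_arn       = var.execution_role_arn # IAM wiring omitted here

  container_definitions = jsonencode([{
    name         = "web"
    image        = "ghcr.io/nginxinc/nginx-s3-gateway/nginx-oss-s3-gateway:latest" # verify image path
    portMappings = [{ containerPort = 80 }]
    environment = [
      { name = "S3_BUCKET_NAME", value = "my-site-bucket" }, # hypothetical bucket
      { name = "S3_REGION", value = "us-west-1" },
      { name = "S3_SERVER", value = "s3.us-west-1.amazonaws.com" },
      { name = "AWS_SIGS_VERSION", value = "4" }
      # ...plus the other required vars the README lists
    ]
  }])
}
```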

# Why the switch to the new setup

1. Easier to manage tooling

[Project Athens](https://gitlab.com/shockrah/project-athens/) is not the only
project of mine that requires attention in regards to infrastructure and
upkeep, so reducing the amount of tooling and documentation _required_ to
manage things is worth it for me personally.

2. Updating content is much easier

The biggest benefit is that I have to maintain less code to deploy content to
the websites themselves, meaning less code per project's pipeline.
In many cases this also means I can remove Ansible entirely from a project,
which means much less to manage in terms of keys and pipeline code.

# Plans for Future Technical Changes

1. Making the buckets public to leverage a regular Nginx container

The advantage of this is that I can use 1 container to balance all my websites
just like before, and still let the ACM + application load balancer do the
heavy lifting for TLS.
With some clever auto-scaling I can then reduce the average number of
containers I pay for while still retaining good availability for the sites
themselves.
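
If I do go that route, the bucket side of the change is small. A sketch with
a hypothetical bucket name:

```tf
resource "aws_s3_bucket" "site" {
  bucket = "my-site-bucket" # hypothetical name
}

# Anonymous read on objects only; the bucket's public access block
# settings would also need to permit a public policy
resource "aws_s3_bucket_policy" "public_read" {
  bucket = aws_s3_bucket.site.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "PublicReadGetObject"
      Effect    = "Allow"
      Principal = "*"
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.site.arn}/*"
    }]
  })
}
```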

# Cost

The old setup was definitely way cheaper.
The load balancer alone runs me about $25-40 USD, plus the two containers
that I use for each site. Overall cost is slightly higher than just the single
EC2 machine, however you can minimize this cost by running fewer containers.

The reason for the rough estimate is that I haven't looked too hard into
exactly how much this costs, as I have other clusters to manage at the
moment and things get lost in the noise.

Just know that if you're looking to minimize price at all costs, then EC2
machines + autoscaling is the way to go.
Fargate _does_ however make things much easier to deploy and manage.