# Hardware Deployment Strategies

## Virtual Desktop Infrastructure (VDI), a.k.a. zero clients

Each client runs an operating system hosted on the network. In some cases that network is a pool of servers which the clients tap into. Client specs can vary, as in the examples below (context: a university):

> Pool for a library

Clients keep low hardware specs since most users are only running office applications and not much else.

> Pool for an engineering department

Clients connect to a different pool, where both the clients and the pool have better hardware specs/resources.

The downside is that the pool is a _single point of failure_: if the pool goes down, so does everyone on it, so that downtime costs far more than a single machine going down.

# Server Hardware Strategies

> All eggs in one basket

Imagine a single server doing everything.

* Important to maintain redundancy in this case
* Upgrading can be a pain

> Buy in bulk, allocate fractions

Have one large server that serves up various virtual machines.

# Live Migration

Lets us move live, running virtual machines onto another server when the current server is running out of resources (a hedged sketch appears at the end of these notes).

# Containers

_Docker_: virtualize the service, not the whole operating system (see the container sketch at the end of these notes).

# Server Hardware Features

> Things that servers benefit from

* Fast I/O
* Low-latency CPUs (Xeons > i-series)
* Expansion slots
* Lots of available network ports
* ECC memory
* Remote management

Patch/version control on servers: update scheduling is usually slower and more conservative so that servers don't just randomly break all the time.

# Misc

Uptime: more uptime is _going_ to be more expensive. Depending on what you're doing, figure out how much downtime you can afford (a quick downtime calculation appears at the end of these notes).

# Specs

As before, _ECC memory_ is basically required for servers, along with a good number of network interfaces and solid disk management. Remember that the main parameters for choosing hardware are budget and necessity; basically, what can you get away with on the budget at hand?
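
# Appendix: Sketches

For the Live Migration section above, here is a minimal sketch of triggering a live migration with the libvirt Python bindings. It assumes a KVM/QEMU setup with the `libvirt-python` package and SSH trust between hosts; the host name `bigger-server.example.edu` and VM name `web-vm-01` are hypothetical placeholders, not from the notes.

```python
# Minimal live-migration sketch (libvirt Python bindings, KVM/QEMU assumed).
# Host and VM names below are illustrative placeholders.
import libvirt

# Connect to the source host (where the VM currently runs) and the destination.
src = libvirt.open("qemu:///system")
dst = libvirt.open("qemu+ssh://bigger-server.example.edu/system")

# Look up the running guest by name.
dom = src.lookupByName("web-vm-01")

# Migrate the guest while it keeps running: memory pages are copied over the
# network and the VM only pauses briefly at the very end of the copy.
flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER
dom.migrate(dst, flags, None, None, 0)

src.close()
dst.close()
```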
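
For the Containers section, a small sketch of "virtualize the service, not the whole operating system" using the Docker SDK for Python (`pip install docker`). The image, port mapping, and container name are just illustrative choices.

```python
# Run a single service in a container; the container shares the host kernel,
# so no full guest OS is booted. Image/port/name choices are illustrative.
import docker

client = docker.from_env()  # talks to the local Docker daemon

web = client.containers.run(
    "nginx:latest",          # the service image
    detach=True,             # run in the background
    ports={"80/tcp": 8080},  # map container port 80 to host port 8080
    name="notes-demo-nginx",
)

print(web.status)  # e.g. "created" or "running"
```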
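
For the Misc/Uptime note, a back-of-the-envelope calculation of how much downtime per year a given uptime percentage actually allows. The availability tiers below are common rule-of-thumb targets, not figures from the notes.

```python
# How much downtime per year does an uptime target permit?
HOURS_PER_YEAR = 365 * 24  # 8760 hours, ignoring leap years

def allowed_downtime_hours(uptime_percent: float) -> float:
    """Hours of downtime per year permitted by an uptime target."""
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

for target in (99.0, 99.9, 99.99, 99.999):
    print(f"{target}% uptime -> {allowed_downtime_hours(target):.2f} h/year")

# 99%     -> ~87.6 hours/year
# 99.9%   -> ~8.76 hours/year
# 99.99%  -> ~53 minutes/year
# 99.999% -> ~5 minutes/year
```

Each extra "nine" cuts the allowed downtime by a factor of ten, which is why the cost of the redundancy needed to hit it climbs so quickly.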