How to build a Colo-in-a-Box
We're going to start with a Mac Mini, because they're cheap and they support ESXi 6
(For reference: VirtuallyGhetto).
There are a lot of steps. Later, hopefully, some of these steps will be downloadable images from Docker Hub, GitHub, or similar.
Until then there are a lot of manual steps. So let's get started...
- Buy a Mac Mini (duh). Make sure it's at least "macmini4,1". You can find out by booting it, clicking on the Apple in the upper left corner, going to "About this Mac", clicking on "System Report...", then looking at the Model Identifier.
- Make sure the Mac Mini has more than 4GB of RAM. If it only has exactly 4GB, that most likely isn't enough: ESXi needs 4GB of usable memory, and you'll get an "out of safe memory" error when you attempt to install it.
- Create a bootable USB stick with the latest revision of ESXi 6.0
- Plug the USB stick into the Mac Mini, boot it up and hold the option key to select the boot volume. Select your ESXi USB stick.
- The ESX installation is straightforward. My suggestion is to install ESX onto the USB stick itself. This leaves the entire Mac Mini disk as usable space, and means that if the hypervisor dies you don't lose any VMs. It also means that ESX is more likely to die (USB sticks fail more often than the internal disk) and will require more care and maintenance. It's up to you.
- The first thing to do is make your ESX server reachable over the network. Assign it a static IP address and make a note of it. You can always get back to this console with a monitor, but it's much easier to reach it over the network than to drag a monitor and keyboard to wherever the Mac Mini ends up. If you're not planning on using IPv6, now's the time to turn it off, before you're deep into the process and have VMs you could lose.
- It's time to attach to the Mac Mini! Grab a network cable and plug it either directly into your computer, or into a switch on the same subnet as the ESX management network.
- First, browse directly to the IP address to make sure the site is active. If you're not seeing a "welcome" page, something's wrong. Make sure you're on the correct subnet, that you have access to the proper subnet, etc.
- I suggest downloading the vSphere client for Windows while you're here; there are times when it's convenient to have it available. But for now, click on "Open the VMware Host Client", which just appends "/ui" onto the URL.
- First you need to create a volume...
- Click on "Storage"
- Select "Devices"
- Click on "New Datastore"
- Enter a name for the volume, choose "Use full disk" for the partitioning scheme, and click "Finish"
- Now click on "Storage" again, and then click on the "Datastore Browser" link.
- Create a new directory "ISOs".
- Click on "Upload" and Upload your favorite linux distro ISOs into the ISOs folder. If you think you'll need a Windows VM as well, now's the time. If you want to have virtualized Macs, since it is Mac hardware under the hood, you can upload an ISO for that as well.
- Now click on "Virtual Machines" on the left, and click "Create / Register VM", it's time to create your first VM
This VM will be the "core"; it will be hosting a LOT of things: PXE, DNS, and Puppet, to begin with. It'll help to have at least 8GB of RAM in this machine.
- It's generally best to name the VM after its eventual domain name, and my preference is to add a number so that if it needs to be replaced later, it's easy to do.
- Make sure to choose your correct O/S.
- Select your newly created storage volume.
- Select a reasonable amount of disk, CPU, and memory. Depending on your Mac Mini you may not have much available.
- Finish the VM
- Refresh your list of VMs if your new VM doesn't show.
- This being the first VM, we need to install it from scratch.
- Right click on the VM on the left and select "Edit Settings".
- On the CD/ROM section, select "Datastore ISO file".
- Select one of the ISOs that you uploaded earlier. For my purposes I'll be using CentOS 7 because it's not uncommon to find CentOS in Enterprise environments.
- Save your changes
- Start the VM. It should boot right into the ISO. Go through the Linux distro's normal server installation process.
- Once everything's installed, reboot and log in. The first thing you'll want to do is update all packages. On CentOS this will be roughly "yum update"
- Set the host's hostname. On CentOS, run "hostname <name>" and update the /etc/hostname file with the same <name>.
- After updating, you'll want to install some basic packages. On CentOS this will be roughly: yum install docker git vim-enhanced (a consolidated sketch of these first-boot steps follows this list).
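As one copy-pasteable sketch of those first-boot steps -- CentOS 7 assumed, and the hostname here is just a placeholder:

    yum -y update
    hostname core01.example.com                  # placeholder name
    echo core01.example.com > /etc/hostname
    yum -y install docker git vim-enhanced
    systemctl enable docker && systemctl start docker   # the Docker daemon will be needed shortly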
Whew. Ok. We now have a single VM on an ESX server. For this to be fully fledged, we want to be able to stand up anything quickly, easily, and however makes the most sense. To make sure we don't lose anything, we'll start with a gitolite master, create a repo, and use it to create a dockerized name server. From there we'll set up a PXE server to help us springboard any other VMs. All of this will happen on this one opening VM so that we squeeze as much as possible into as small a footprint as possible.
Setting up a gitolite server
Firstly, the rough instructions are here: http://gitolite.com/gitolite/install.html
Start by creating your own user (if you haven't yet), since you'll be the admin, and create an ssh key for your user: ssh-keygen -t rsa -b 4096
Follow the quick install instructions referenced above. You may have to yum install perl-Data-Dumper (as appropriate for your platform).
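Roughly, per the linked instructions, the install looks like this (run as the user that will host the repos; "alice.pub" is a placeholder for your own public key, copied over from the key you generated above):

    git clone https://github.com/sitaramc/gitolite
    mkdir -p $HOME/bin
    gitolite/install -to $HOME/bin
    $HOME/bin/gitolite setup -pk alice.pub   # the file name becomes your gitolite username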
Once done, there are a couple of changes you'll probably want to make, so the first order of business is to check out the gitolite-admin repo. If you can't, something has already gone wrong!
- Set a group as the admins, not a single user: "@admins = <user1> <user2> ..." (see the conf sketch after this list).
- You can relabel your key. The label for a key is whatever comes before the ".pub" in the keydir. So "git mv <oldlabel>.pub <newlabel>.pub" in the keydir, then update the conf. Having appropriate labels for keys is incredibly useful; you don't want everyone named "id_rsa_<something>", you'll never remember who's who.
- It's time to create a couple of extra repos. The current plan is to install Docker, Puppet, and DNS -- among other things -- all of which can use git to make sure that things don't go missing. Since we'll be managing these all the same way, we can create a group for the repos and declare them all at once with the same RW+ line (again, see the sketch after this list).
- You can get most of your gitolite questions answered here: http://gitolite.com/gitolite/basic-admin.html.
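For illustration, a minimal gitolite.conf along those lines -- the usernames, group names, and repo names are placeholders for whatever you choose:

    @admins     = alice bob
    @core-repos = docker dns puppet

    repo gitolite-admin
        RW+     =   @admins

    repo @core-repos
        RW+     =   @admins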
We'll want Docker and DNS first, since we'll be standing up the DNS server inside a Docker container. Clone them immediately (they'll be empty) and do a first check-in. If you have issues checking out these newly created repos then you might need to blow away your gitolite setup and start from scratch. It's important to get this over with as quickly as possible! Find errors early...
Start up a Docker Repo
I hope you installed docker earlier. Start up the Docker Daemon now.
You don't yet have a name server, so -- temporarily -- munge /etc/hosts and add your docker host's name as an alias for 127.0.0.1. This will make life easier later.
Edit the appropriate docker file (/etc/sysconfig/docker on CentOS) and add your named docker repo as an insecure registry.
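A sketch of both tweaks, assuming CentOS 7's /etc/sysconfig/docker and a hypothetical registry name of registry.example.com (the exact variable name can vary by distro and docker package):

    # /etc/hosts -- temporary, until the DNS container is serving
    127.0.0.1   localhost registry.example.com

    # /etc/sysconfig/docker
    INSECURE_REGISTRY='--insecure-registry registry.example.com:5000'

Restart the docker daemon afterwards so it picks up the change.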
You can learn more about the docker registry here: https://docs.docker.com/registry/deploying/
But really all you need once Docker is started is: "docker run -d -p 5000:5000 --restart=always --name registry registry:2"
Check out your docker git repo. Docker images are layered, so it's best to build your containers in layers. Since we don't yet have an orchestrator, I also suggest creating Makefiles to help run the individual docker commands, so that you don't have to type them all by hand. The Dockerfiles and the Makefiles are what go into the git repo.
Stand up a base image for your desired base O/S. Your first app will be DNS.
Start up a DNS container
Time to make your first docker container!
First you have to make your base image. Check out your docker repo. Inside of it create some directories to split apart base images from applications and services. This tree will get deeper as you stand up more services.
In your base images directory, create a directory for your preferred O/S. Walk into it and use your favorite editor to create a Dockerfile.
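As a sketch, the repo tree might end up looking something like this (the directory names are just placeholders):

    docker/
        base/
            centos7/
                Dockerfile
                Makefile
        services/
            dns/
                Dockerfile
                Makefile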
Your first line will be "FROM <image>:<tag>", which will pull from Docker Hub. Your second line should be a combination of things that makes sure the image pulls the latest patches, because security.
After that install the smallest number of utilities you think will be necessary. You want as little inside the container as possible!
The CMD line for this Dockerfile will likely just be "bash" or something similar. This image doesn't serve anything, it's just a base.
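A minimal sketch of such a base Dockerfile, assuming CentOS 7 (the extra utilities are just examples; install as little as you can get away with):

    FROM centos:7

    # Pull the latest patches, add a couple of small utilities, keep the image lean
    RUN yum -y update && \
        yum -y install less which && \
        yum clean all

    # This image doesn't run a service; it's only a base for other images
    CMD ["bash"]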
My suggestion is to also create a Makefile next to the Dockerfile to help you remember the specific commands you'll need. This one will be easy; it'll mostly be:
    build:
    	docker build --rm -t $(REGISTRY)/$(REPO):$(VERSION) .
Go over to your apps and services directory in the Docker repo and create a directory for your dns server.
Walk inside that directory and create a Dockerfile. This one will be more complicated (a rough sketch follows the list below).
- Your "FROM" line here should include the docker registry you created above.
- You won't need them yet (you're just starting) but over time it becomes more and more useful to have tags in your Dockerfiles to classify them. They should generally go just below the FROM line at the top of the file so that updating a tag will update the entire Docker image. It's a nice "get-out-of-jail-free" card to combat Docker caching.
- Do an install of your favorite DNS server. If you don't have one, I'd suggest starting with BIND first. Not because it's the best, but because there's a lot of documentation out there and it's been used for so long in so many places that it's important to know how it works.
- Next, git clone your DNS repo into the container. If you're using BIND, you can clone it into /var/named directly.
- Add a wget line to pull root hints from IANA. One of the nice benefits here is that whenever you update your DNS and restart the container, you'll re-pull the root hints and pick up any potential changes.
- Your CMD line here, if using BIND, will be something like "/usr/sbin/named -f -u named". I suggest having your CMD line be "sleep 3600" at first so that you can shell into the container and figure out anything you might have gotten wrong.
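Putting those pieces together, a rough sketch of the DNS Dockerfile -- the base image name, clone URL, and auth are all placeholders to adapt to your own setup:

    FROM registry.example.com:5000/centos7-base:latest

    # Bump this to bust the Docker cache and force a full rebuild
    LABEL rebuild="1"

    RUN yum -y install bind bind-utils git wget && yum clean all

    # Zone files live in git; pull them into BIND's working directory
    # (/var/named ships non-empty, so clear it before cloning)
    RUN rm -rf /var/named && \
        git clone git@gitolite.example.com:dns /var/named

    # Refresh the root hints on every build
    RUN wget -O /var/named/named.ca https://www.internic.net/domain/named.root

    EXPOSE 53/udp 53/tcp

    # Swap this for sleep 3600 while you're still debugging
    CMD ["/usr/sbin/named", "-f", "-u", "named"]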
I also suggest making a Makefile here. This one will be more complicated; some command suggestions (a sketch follows the list):
- The "build" command should do a "docker pull" from the local repo first, but add a beginning "-" to the pull command so that if you can't find it, you'll build anyway. This will let you cache as much as possible.
- A restart command that does "stop run"
- A stop command that not only stops the container but -- suggested -- also removes the container ("rm", not "rmi" which removes the image)
- A run command that has all the options it takes to run this image. For your DNS service you'll want port 53 open (udp and tcp); make sure to "--detach=true"; it's useful to have "--restart=always" so that if the service does something stupid it comes back alive on its own (if it keeps flapping you'll need to investigate); and I suggest adding --dns entries so that it points at localhost first, since it is in fact running its own DNS service.
- A shell command is the most useful ever. "make shell" is much easier to type than "docker exec -t -i <container> /bin/bash"
- And since you have a registry, I recommend adding pull and push commands.
- Lastly, an "all", that builds, stops, and runs. If it fails to build, it should not stop or run!
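Tying those suggestions together, a sketch of what that Makefile might look like -- registry, image, and container names are placeholders, and recipe lines must start with a tab:

    REGISTRY := registry.example.com:5000
    REPO     := dns
    VERSION  := latest
    IMAGE    := $(REGISTRY)/$(REPO):$(VERSION)
    NAME     := dns

    all: build stop run

    build:
    	-docker pull $(IMAGE)
    	docker build --rm -t $(IMAGE) .

    run:
    	docker run --detach=true --restart=always --name $(NAME) \
    		-p 53:53/udp -p 53:53/tcp --dns=127.0.0.1 $(IMAGE)

    stop:
    	-docker stop $(NAME)
    	-docker rm $(NAME)

    restart: stop run

    shell:
    	docker exec -t -i $(NAME) /bin/bash

    pull:
    	docker pull $(IMAGE)

    push:
    	docker push $(IMAGE)

    .PHONY: all build run stop restart shell pull push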
Now check out your DNS repo from git inside the apps and services directory you created above.
Create a zone file with basic SOA, MX, and A records.
Since so much of your infrastructure will be running on one machine, your DNS server will be part of your core machine, and many other services will also be running here. So you can just create CNAMEs to your DNS "A" record over and over for all subsequent services that will be run on the same core host.
Now add any other services you expect to be running. If you plan on standing up a web server later, you can add a record for it now, in expectation.
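For illustration, a bare-bones zone file along those lines -- example.com, the host names, and the addresses are all placeholders:

    $TTL 86400
    @        IN  SOA  core01.example.com. hostmaster.example.com. (
                          2016010101 ; serial
                          3600       ; refresh
                          900        ; retry
                          604800     ; expire
                          86400 )    ; minimum
             IN  NS   core01.example.com.
             IN  MX   10 core01.example.com.

    core01   IN  A    192.168.1.10

    ; everything else running on the core host is just a CNAME to that A record
    ns       IN  CNAME core01
    registry IN  CNAME core01
    puppet   IN  CNAME core01
    www      IN  CNAME core01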
Once your DNS server is active, you can point your local desktop at it to start referencing things by name rather than IP, which is a little easier.