Customizing a BOSH Deployment
Overview
I’ve been spending a lot of time with BOSH recently, and much of the work revolves around creating customized deployments.
Most projects you work with will provide a deployment manifest that you customize through ops files. You will want to store your customizations in a repo that you control because your configuration will almost always differ from the configuration provided by default. The perfect example of this is the number of AZs.
Principles
Let’s go over a few principles that will guide the workflow we adopt.
- Modification Principle - Avoid modifying deployment repo dependencies. Modifying dependencies can result in a brittle deployment. If the dependency updates and you have made modifications, you will need to track those modifications with merges, which is a lot of overhead compared to just getting the latest version and re-deploying.
- Dependency Principle - Take a hard dependency on a version/release of a deployment repo. In order to make your deployment more resilient, fix your dependencies to a specific version. This can take the form of a specific URI or tags/commit identifiers in source control.
- Secrets Principle - Don’t store secrets in source control; always use a configuration server like CredHub. The easiest way to get breached or exploited by farming bots is to put your secrets in source control (so don’t do it). Using a config server eliminates the risk of accidentally checking in a secret vars file or forgetting to mask a directory with .gitignore. Not to mention it prevents secrets from being stored in plain text on a file system.
- Portability Principle - To remain portable, push configuration that varies between deployments into environment specific configuration. Because this repo is a customization of a deployment, that includes structural things like the number of AZs and instance counts. A portable configuration makes it easier to move to a different environment. In general, if something changes between deployment environments, it is a good candidate for pulling out into environment specific configuration.
Architecture
Let’s address each principle with a decision, and then show how these decisions can be supported with a workflow, directory structure, and source control features.
Git Submodules
In order to follow the modification principle and dependency principle we can use git submodules. Submodules allow us to take a snapshot of a deployment repo and track the version information of that snapshot in source control alongside our deployment customizations.
The commands for adding a submodule are pretty simple:
git submodule add https://github.com/cloudfoundry-incubator/kubo-deployment submodules/github.com/cloudfoundry-incubator/kubo-deployment
The key is to pin the submodule to a particular commit or tag. Often deployment authors will create releases of their deployments and tag the releases. This creates a reference commit and associates a tag with the reference commit. If a release is not specified, you can just pin to the commit on master at the time you are creating your customized deployment.
by commit hash
cd submodules/github.com/cloudfoundry-incubator/kubo-deployment
git checkout a291edc
or by tag
cd submodules/github.com/cloudfoundry-incubator/kubo-deployment
git checkout v0.13.0
For all intents and purposes, this is a version identifier that pins your dependency, which is great. The last thing we want is to take a dependency on the ever changing latest commit on master and get out of sync with the deployment development team.
Configuration Server
The BOSH configuration server is key to keeping the customized repo clean of secrets and of specific configuration items that limit its portability. This allows us to implement the secrets principle. The configuration server should be kept clear of structural things like the number of AZs or instance counts; those elements are good candidates for environment specific configuration, which adheres to the portability principle. Storing secrets in the config server is not only safe and protects you from secret leaks, it also has the added benefit of keeping our customized deployment clean of gitignore masked directories.
CredHub is the default configuration server, supplied through the https://github.com/cloudfoundry/bosh-deployment/credhub.yml ops file. You simply need to add uaa.yml and credhub.yml as ops files to your deployment of BOSH in order to set it up. That repo does an excellent job of stepping through the setup.
You will still need to deal with the zero secrets problem; luckily, password managers like LastPass and 1Password have CLIs that can be used to pull the initial secrets needed to access CredHub.
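As a rough sketch, enabling UAA and CredHub on the director looks something like the following. The cpi ops file, director name, and IP are placeholders for illustration, and a real create-env command would carry the additional vars your IaaS requires:
# Sketch only: layering the uaa.yml and credhub.yml ops files from the
# bosh-deployment repo onto the director. director_name and internal_ip are
# placeholder values for this example.
bosh create-env bosh-deployment/bosh.yml \
--state state.json \
--vars-store creds.yml \
-o bosh-deployment/vsphere/cpi.yml \
-o bosh-deployment/uaa.yml \
-o bosh-deployment/credhub.yml \
-v director_name=bosh-lab \
-v internal_ip=192.168.2.10
# ...plus the remaining vars required by your IaaS cpi ops file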
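For example, here is a hedged sketch of bootstrapping a CredHub session with the LastPass CLI; the item name “credhub-admin”, the API address, and the CA cert path are all hypothetical:
# Target the CredHub API and log in with a client secret pulled from LastPass.
# The credential name, URL, and CA cert path below are placeholders.
credhub api --server https://192.168.2.10:8844 --ca-cert ./credhub-ca.pem
credhub login \
--client-name credhub-admin \
--client-secret "$(lpass show --password credhub-admin)"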
Shell Scripts
If we follow the principles and architecture decisions above, we can reduce our deployment to a single command wrapped in a shell script. With a shell script, we no longer need to remember a lengthy command, and, as long as we utilize a config server, we can reference secrets by path instead of hard coding values.
Secrets can be specified with the CredHub CLI out of band of the deployment script. With secrets securely in CredHub, we only need to reference the secret by path in our deployment manifest.
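For instance, a secret can be generated or set ahead of time and then referenced by name; the paths and values below are hypothetical:
# Generate a password style secret, or set one explicitly, out of band of the deploy
credhub generate -n /bosh-lab/vault/admin-password -t password
credhub set -n /bosh-lab/vault/api-token -t password -w 's0me-t0ken'
At deploy time the director resolves references like ((api-token)) from CredHub, typically under a path built from the director and deployment names, so the manifest itself never contains the secret.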
Here is a sample deployment script for deploying kubo (aka cfcr) using the techniques above:
bosh deploy -d kubo submodules/github.com/cloudfoundry-incubator/kubo-deployment/manifests/cfcr.yml \
-o submodules/github.com/cloudfoundry-incubator/kubo-deployment/manifests/ops-files/vm-types.yml \
-o submodules/github.com/cloudfoundry-incubator/kubo-deployment/manifests/ops-files/iaas/vsphere/cloud-provider.yml \
-o submodules/github.com/cloudfoundry-incubator/kubo-deployment/manifests/ops-files/iaas/vsphere/master-static-ip.yml \
-o submodules/github.com/cloudfoundry-incubator/kubo-deployment/manifests/ops-files/iaas/vsphere/set-working-dir-no-rp.yml \
-o ops-files/one-az.yml \
-o ops-files/deployment-name.yml \
-o ops-files/remote-kubo-release.yml \
-v master_vm_type=default \
-v worker_vm_type=default \
-v deployment_name=kubo \
-v kubernetes_master_host=192.168.2.20
As a general practice, I put external dependencies above my deployment's specific ops files so that my deployment settings override the settings of the dependency (which is often the intended behavior).
You’ll see I have the master IP address hard coded in this shell script. I would argue this violates the portability principle as it ties me to a specific network configuration. I did it for simplicity when setting up this deployment and, as you will see later, it can easily be moved to environment specific configuration.
Ops Files
Ops files are the key to customizing a deployment. Most deployments place ops files in a distinct directory like “ops-files”, “ops” or “operations”; I recommend doing the same, as it keeps the root directory clear of your modifications and makes it clear at first glance how to execute the deployment. Using ops files allows us to adhere to the modification principle by not mutating our dependency, and to the portability principle by exposing variables that we can later configure with environment specific configuration.
You can see in the shell scripts section above that I used custom ops files to:
- create a single availability zone
- change the deployment name
- update the location of the kubo release artifacts (I prefer to have BOSH check my dependencies).
Updating the release artifact location helps implement the dependency principle. We want to know the exact version of the release we depend on. Leaving this value floating can lead to a broken deployment if another deployment happens to use the same release but a different version.
In general, I try to make ops files declarative in name and have each fulfill a specific purpose. For example, ‘one-az.yml’ makes it very clear that the ops file sets the AZ count to 1. If you need more flexibility, you can add an ops file that creates a variable; do this if the value needs to be updated regularly or is specified through another variable that you want to reference. In the AZ case, I would call this ops file ‘azs.yml’, denoting that it changes the AZs.
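As a sketch, an ops file like remote-kubo-release.yml would pin the release to an explicit URL, version, and checksum. The values below are placeholders rather than real artifact coordinates, and I'm assuming the release is named kubo in the base manifest:
---
# remote-kubo-release.yml (sketch): pin the kubo release to an exact artifact
- type: replace
  path: /releases/name=kubo
  value:
    name: kubo
    version: 0.13.0
    url: https://example.com/kubo-release-0.13.0.tgz
    sha1: 0123456789abcdef0123456789abcdef01234567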
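As an illustration, one-az.yml might look something like the following sketch, which assumes the base manifest has instance groups named master and worker:
---
# one-az.yml (sketch): pin each instance group to a single availability zone
- type: replace
  path: /instance_groups/name=master/azs
  value: [z1]
- type: replace
  path: /instance_groups/name=worker/azs
  value: [z1]
The more flexible azs.yml variant would set value: ((azs)) instead, exposing the AZ list as a variable you can supply per environment.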
Defaults
Cloud Config
I think it is a good idea to specify defaults in your cloud config. Example deployments often specify default VM types with the name ‘default’, and this can greatly speed up the time to your first successful deployment. In general, you should specify a default for each of the following (see the sketch after this list):
- vm_type
- network
- disk_type
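Here is a hedged sketch of what those defaults might look like in a vSphere cloud config; the sizes, network range, and port group name are placeholders:
# Excerpt of a cloud config with 'default' entries (placeholder values)
vm_types:
- name: default
  cloud_properties:
    cpu: 2
    ram: 4096
    disk: 32768
disk_types:
- name: default
  disk_size: 10240
networks:
- name: default
  type: manual
  subnets:
  - range: 192.168.2.0/24
    gateway: 192.168.2.1
    azs: [z1]
    cloud_properties:
      name: VM Network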
Environment
I like to specify a default environment using the $BOSH_ENVIRONMENT variable. If you are using more than one BOSH environment from the same client, I’d suggest instead pushing the environment selection to the -e flag of the BOSH CLI. This avoids accidentally deploying over the wrong environment.
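As a small sketch, “lab” and “prod” below are hypothetical environment aliases created earlier with bosh alias-env:
# Default the target director for local work
export BOSH_ENVIRONMENT=lab
# With multiple directors, pass the environment explicitly on each command instead
bosh -e prod deployments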
Workflow
Now that we have an architecture base, let’s dive into the workflow.
Create Initial Repo
The start of the process is to create a repo to store your customizations. Typically you create a directory, change into it, and run ‘git init’. Let’s say we are creating a deployment for Vault:
mkdir vault-deployment-vsphere
cd vault-deployment-vsphere
git init
You can name the directory anything you like. Typically the dependencies will be named [something]-deployment, so I try to make my name distinct to avoid confusion.
Create Dependency Submodules
As we showed above, we then need to establish our dependencies. For Vault, we need to take a dependency on the vault BOSH release. Sometimes a distinct repo exists for a deployment, and sometimes the deployment and release are coupled in the same repo; this is an example of the release and deployment being coupled.
For an example of distinct release and deployment repos, see the concourse deployment and concourse release repos.
To establish the dependency, we use the git submodule add command from above to create the dependency and then use the checkout command within the submodule directory to pin it to a specific version.
git submodule add https://github.com/cloudfoundry-community/vault-boshrelease/ submodules/github.com/cloudfoundry-community/vault-boshrelease
cd submodules/github.com/cloudfoundry-community/vault-boshrelease
git checkout v0.7.0
I like to organize submodules by source control provider, then org or user, then repo (similar to how Go organizes dependencies), though you can use whatever scheme works for you.
Work Loop
Shell Scripts
As we work on the shell script to customize our deployment, we will most likely iterate between updating the shell script, writing secrets to the config server, and creating ops files.
To get started, let’s create a shell script for deploying Vault. I’ll call it ‘deploy-vault.sh’.
bosh deploy -d vault \
submodules/cloudfoundry-community/vault-boshrelease/manifests/vault.yml
Next, make it executable:
chmod u+x deploy-vault.sh
Fire it off :). Something I learned from a dojo is to fail as soon as possible so you can start your iteration loop. No fear, BOSH is here.
./deploy-vault.sh
The vault release is very simple, and as long as we specify defaults, we won’t need to bring in other ops files or variables. The next section will cover a more complicated deployment configuration.
Don’t forget to save your changes and push everything to the remote repo!
git add -A
git commit -m "created a customized bosh deployment for vault"
# add your remote here if you haven't done so
git push origin master
Ops Files
Now I want to modify the deployment to change the number of AZs from 1 to 3. The vault BOSH release comes with an ops file for modifying the AZ list here. This allows me to specify a different number of AZs for each of my environments, which is great if I have, say, a lab environment with only one AZ and a QA or production environment with the recommended 3.
Update the deploy-vault.sh file with the ops file
bosh deploy -d vault \
submodules/cloudfoundry-community/vault-boshrelease/manifests/vault.yml \
--ops-file submodules/cloudfoundry-community/vault-boshrelease/manifests/operators/azs.yml \
--var azs='[z1,z2,z3]'
Deploy changes:
./deploy-vault.sh
Save changes:
git add -A
git commit -m "adds ops file"
# add your remote here if you haven't done so
git push origin master
Updating the deployment
Now that I have a sample deployment of minor complexity, I am going to update the deployment from v0.7.0 to v0.8.0. This is relatively easy because I’m using a submodule that is tracked by source control.
First things first, I’ll change the working directory to the submodule directory
cd submodules/cloudfoundry-community/vault-boshrelease
Next I need to get the tagged version of the repo
git checkout v0.8.0
Next I need to change back to the repo root and deploy
cd ../../..
./deploy-vault.sh
Now that my vault deployment is completed, I’m going to save my changes:
git add -A
git commit -m "updates vault bosh release version"
# add your remote here if you haven't done so
git push origin master
Environment Specific Vars Files
I want to keep my deployment portable, so I’m going to write the AZ configuration to distinct files that mirror my environment structure. I’m then going to update the bash script with a configurable parameter that has a default. This keeps the script simple while letting me override the default in my production CI pipeline.
Like I mentioned before, I have a single AZ in my lab environment and three AZs in my production environment. To represent those two environments, I’m going to create environment specific vars files.
Why do this with vars files and not the config server?
Well, we really only want to use the config server to store secrets. If we start storing more and more configuration information in the config server, we could end up in a situation where our state is no longer tracked by source control and it is very difficult to see what changed between deployments. For this reason, keep things that are static between deployments configured with ops files, and use variables with environment specific vars files for things that change between environments. Remember, you are not creating ‘the’ reusable deployment; you are creating a customized iteration of the more generic deployment manifest, so it’s OK to hard code things.
mkdir vars-files
cat <<EOF > vars-files/lab.yml
---
azs: [z1]
EOF
cat <<EOF > vars-files/prod.yml
---
azs: [z1,z2,z3]
EOF
By default I want to deploy the lab configuration. When I script this out for CI, I’m going to keep it simple and specify a single parameter for the environment name. If the environment name cannot be found, I’ll throw an error and exit the script.
#!/bin/bash

# set defaults
export ENVIRONMENT=lab

# check parameters
# see: https://stackoverflow.com/a/7069755
while test $# -gt 0; do
  case "$1" in
    -h|--help)
      echo "options:"
      echo "-h, --help           show help"
      echo "-e, --environment    specify environment (lab|prod) default (lab)"
      exit 0
      ;;
    -e|--environment)
      shift
      if test $# -gt 0; then
        export ENVIRONMENT=$1
      else
        echo "no environment specified"
        exit 1
      fi
      shift
      ;;
    *)
      echo "unknown option: $1"
      exit 1
      ;;
  esac
done

# verify the vars file exists. If not, throw a descriptive message
export ENVIRONMENT_VARS_FILE=vars-files/$ENVIRONMENT.yml
if [ ! -f "$ENVIRONMENT_VARS_FILE" ]; then
  echo "$ENVIRONMENT_VARS_FILE not found. Did you specify the correct environment?"
  exit 1
fi

# do the deployment with the environment vars file
bosh deploy -d vault \
submodules/cloudfoundry-community/vault-boshrelease/manifests/vault.yml \
--ops-file submodules/cloudfoundry-community/vault-boshrelease/manifests/operators/azs.yml \
--vars-file "$ENVIRONMENT_VARS_FILE"
Locally for development I can still type
./deploy-vault.sh
When I add this to my CI pipeline, I’ll change it to:
./deploy-vault.sh --environment prod
After saving changes and pushing, we now have a repo that supports local development and can be re-used for production with a simple flag.
git add -A
git commit -m "adds environment specific support"
# add your remote here if you haven't done so
git push origin master
Conclusion
Hopefully you find this post useful in your adventures with BOSH. We have covered how to create a deployment that is free of secrets, flexible enough to implement your architecture, yet loose enough to accept environment specific configuration.
If you have any additions or comments on this post, I maintain my blog through GitHub. Feel free to submit an issue or pull request here: https://github.com/patrickhuber/patrickhuber.github.io