Installing Spinnaker Using Jenkins

Jeff Wenzbauer
8 min read · Jul 30, 2019
Spinnaker logo (source: https://github.com/spinnaker/spinnaker.github.io/blob/master/assets/images/spinnaker-logo-transparent-color.png)

When I first started looking into how to use Spinnaker, I immersed myself in the documentation and in various videos available on YouTube. There are quite a few good resources available to help you get started with Spinnaker, and more documentation is being added all the time.

From my understanding, the best way to install Spinnaker is to use Halyard to create a distributed installation on a Kubernetes cluster. Halyard provides a CLI both to configure how Spinnaker will be deployed and to roll out updates to an existing cluster. In essence, Halyard is an easy-to-use interface for generating halconfig. The nice thing about halconfig is that it is just a set of files, so it is easy to back up. When the time comes to make changes again, you simply import the config and continue where you previously left off.
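For example, backing up and later restoring halconfig is a pair of hal commands (a minimal sketch; the backup filename is a placeholder for whatever hal generates):

#!/bin/bash
set -euo pipefail

# Back up the current halconfig (including referenced secret files) into a tarball
hal backup create

# Later, restore it and pick up where you left off
hal backup restore --backup-path ./halbackup-<date>.tar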

Problem

The problem with this approach is that it relies on people to run commands, and any time people are involved, failure is inevitable sooner or later. Maybe a command is fat-fingered, maybe someone new is working on the project; who knows? Anything is possible when humans are involved.

The other problem with this approach is that it is difficult to know exactly which commands were executed to reach the current configuration state, since there are a huge number of available commands.

Solution

I decided to use Jenkins to run all of my Halyard commands. This allows me to place all of the halyard commands into scripts that can be checked into git and managed as code (with history, etc.). I ran into a few difficulties along the way, so I figured I would share what I learned.

Implementation

Halyard consists of 3 main components: the halyard daemon, the hal CLI, and halconfig. The halyard daemon has to be running for the hal CLI to function properly and generate the halconfig files. Normally, when you run the hal CLI it automatically detects that the daemon is not running and starts it for you. For some reason this didn't work when executing in the context of a Jenkins pipeline, so I start the halyard daemon myself before executing any hal CLI commands:

docker.image('gcr.io/spinnaker-marketplace/halyard:1.20.2').inside() {
    stage('Start Halyard') {
        sh label: 'Start Halyard Daemon', script: './scripts/start-halyard-daemon'
    }
    ...

And here are the contents of ./scripts/start-halyard-daemon:

#!/bin/bash
set -Eeuxo pipefail

# Print the current version of hal
echo "hal version: $(hal -v)"

# Start up halyard in the background with logs sent to /tmp/halyard.log
"/opt/halyard/bin/halyard" > /tmp/halyard.log 2>&1 &

# Wait until halyard has fully started
( tail -f -n +1 /tmp/halyard.log & ) | grep -q -E "Tomcat started|Started Main"
echo "halyard daemon started"

This script starts the daemon in the background, then tails the log looking for the message that is printed once the daemon has started successfully. This was the best way I found to wait for the daemon to start up.
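An alternative I did not end up using would be to poll the daemon over HTTP instead of tailing the log. This is an untested sketch that assumes the daemon's default port 8064 and a standard Spring Boot health endpoint; verify both against your halyard version:

#!/bin/bash
set -euo pipefail

# Poll until the halyard daemon answers on its (assumed) health endpoint
until curl --silent --fail http://localhost:8064/health > /dev/null; do
  sleep 2
done
echo "halyard daemon started"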

I use Linux servers for my Jenkins slaves, and the default user for executing job steps is 'jenkins'. Spinnaker, on the other hand, uses the 'spinnaker' user by default when running as a 'Distributed Installation'. Because of this, the first problem I ran into was that the Spinnaker applications (running in my k8s cluster) were unable to access any credential/secret files that were created by the hal (halyard) CLI during the Jenkins job execution.

As a simple fix, I ran this at the end of my Jenkins pipeline to change the owner of all the halconfig files to the spinnaker user: chown spinnaker:spinnaker -R ~/.hal/*. Unfortunately, this didn't fix the problem. The only explanation I could come up with for the continued file permission errors was that Jenkins applies some sort of funky file ownership to files within the workspace.

My next thought was to execute all of the hal CLI commands as user 'spinnaker' rather than 'jenkins'. Since I was already using docker in my Jenkins pipeline to run commands, I just changed the user that runs the docker container: docker.image('gcr.io/spinnaker-marketplace/halyard:1.20.2').inside('-u spinnaker') { <jenkins steps here> }. Unfortunately, this prevents Jenkins from retrieving the logs of the steps being executed, because the log is owned by user 'jenkins'.

Third time's the charm! For my third attempt, I ran the halyard docker container as user 'root', which let me use su to execute particular commands as user 'spinnaker'. That looks something like this (note the '-u root' flag and the su wrapper):

docker.image('gcr.io/spinnaker-marketplace/halyard:1.20.2').inside('-u root') {
    stage('Start Halyard') {
        sh label: 'Start Halyard Daemon', script: 'su spinnaker -c "./scripts/start-halyard-daemon"'
    }
    ...

To make this possible, I chown all of my scripts to user ‘spinnaker’ and make them all executable at the beginning of my pipeline before executing any of them:

docker.image('bash:5.0.7').inside('-u root') {
    stage('Prepare scripts') {
        dir('scripts') {
            sh 'chmod 755 *'
            sh 'adduser -D spinnaker'
            sh 'chown spinnaker:spinnaker *'
        }
    }
}

Following this pattern, I was able to execute all other configuration steps as the proper user.
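Since every step is wrapped in the same su invocation, a small helper method in the Jenkinsfile can cut down on the repetition. This is my own sketch, not part of the original pipeline; note that commands containing single quotes would break the quoting:

// Hypothetical helper: run a shell command as the 'spinnaker' user.
def runAsSpinnaker(String label, String cmd) {
    sh label: label, script: "su spinnaker -c '${cmd}'"
}

// Usage:
// runAsSpinnaker('Start Halyard Daemon', './scripts/start-halyard-daemon')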

There are tons of options available for configuring Spinnaker; you can find them all, with documentation, in the Halyard command reference. The pattern I followed to configure Spinnaker is to write a bash script containing all of the related hal CLI commands, then execute that script as user 'spinnaker'. I created one bash script for each config type. For example, here is how I configured artifacts.

I started by creating a script to configure GitHub artifacts (notice this script takes two required arguments, GITHUB_ACCOUNT_NAME and GITHUB_PAT_FILE):

#!/bin/bash
set -Eeuxo pipefail

# Default to empty so the friendly check below runs even under `set -u`
GITHUB_ACCOUNT_NAME="${1:-}"
GITHUB_PAT_FILE="${2:-}"

if [[ -z "$GITHUB_ACCOUNT_NAME" || -z "$GITHUB_PAT_FILE" ]]; then
  echo 'one or more variables are undefined:'
  echo '  GITHUB_ACCOUNT_NAME($1) - name of the github account as it will appear in spinnaker'
  echo '  GITHUB_PAT_FILE($2) - path to a file containing a personal access token that spinnaker will use to authenticate to GitHub'
  exit 1
fi

# make sure artifacts are enabled
hal config features edit --artifacts true

# https://www.spinnaker.io/setup/artifacts/github/
hal config artifact github enable
hal config artifact github account add "$GITHUB_ACCOUNT_NAME" \
  --token-file "$GITHUB_PAT_FILE"

Next I created a script to configure s3 artifacts (notice this script takes one required argument, S3_ACCOUNT_NAME, and three required environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION):

#!/bin/bash
set -Eeuxo pipefail

# Default to empty so the friendly check below runs even under `set -u`
S3_ACCOUNT_NAME="${1:-}"

if [[ -z "$S3_ACCOUNT_NAME" || -z "${AWS_ACCESS_KEY_ID:-}" || -z "${AWS_SECRET_ACCESS_KEY:-}" || -z "${AWS_REGION:-}" ]]; then
  echo 'one or more variables are undefined:'
  echo '  S3_ACCOUNT_NAME($1) - name of the s3 account as it will appear in spinnaker'
  echo '  AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION - AWS environment variables'
  exit 1
fi

# make sure artifacts are enabled
hal config features edit --artifacts true

# https://www.spinnaker.io/setup/artifacts/s3/
# Note: with `set -x` enabled, the echo below will appear in the trace output;
# consider disabling tracing around this line if your logs are sensitive.
hal config artifact s3 enable
echo "$AWS_SECRET_ACCESS_KEY" | hal config artifact s3 account add "$S3_ACCOUNT_NAME" \
  --aws-access-key-id "$AWS_ACCESS_KEY_ID" \
  --region "$AWS_REGION" \
  --aws-secret-access-key
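For reference, invoking this script by hand (outside of Jenkins) would look something like this, with placeholder values:

export AWS_ACCESS_KEY_ID='AKIA...'   # placeholder
export AWS_SECRET_ACCESS_KEY='...'   # placeholder
export AWS_REGION='us-west-2'        # placeholder
./scripts/configure-artifacts-s3 my-spinnaker-s3-demo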

After defining the configuration for each of my desired artifact types in its own bash script, I defined a Jenkins stage to execute all of these scripts as user 'spinnaker'. Here is what that looks like:

stage('Configure Artifacts') {
    // Get the personal access token for accessing github
    withCredentials([file(credentialsId: 'SPINNAKER_GITHUB_ARTIFACTS', variable: 'GITHUB_PAT_TOKEN_FILE')]) {
        // Execute the script to configure github artifacts
        sh label: 'Github', script: "su spinnaker -c './scripts/configure-artifacts-github my-github-user \"${GITHUB_PAT_TOKEN_FILE}\"'"
    }
    // Get the aws access keys for accessing s3 artifacts
    withCredentials([usernamePassword(credentialsId: 'SPINNAKER_AWS_ACCESS', passwordVariable: 'AWS_SECRET_ACCESS_KEY', usernameVariable: 'AWS_ACCESS_KEY_ID')]) {
        // Execute the script to configure s3 artifacts (AWS_REGION must
        // already be set in the environment)
        sh label: 'S3', script: "su spinnaker -c './scripts/configure-artifacts-s3 my-spinnaker-s3-demo'"
    }
}

In this Jenkins stage I execute each artifact script, pulling the necessary credentials out of the Jenkins credential store.

This process can be repeated for all hal configuration steps.
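For instance, a script for the Kubernetes provider would follow the same shape. This is my own sketch, not from the original setup; the account name and kubeconfig context are placeholders:

#!/bin/bash
set -Eeuxo pipefail

# Enable the Kubernetes provider and register a cluster account
hal config provider kubernetes enable
hal config provider kubernetes account add my-k8s-account \
  --context "$(kubectl config current-context)"

# Point the distributed installation at that account
hal config deploy edit --type distributed --account-name my-k8s-account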

Once all configuration has been completed using the hal CLI, it is time to deploy. This follows the same pattern as above. First, I created a bash script that executes the hal commands necessary to deploy Spinnaker:

#!/bin/bash
set -Eeuxo pipefail

# Default to empty so the friendly check below runs even under `set -u`
VERSION="${1:-}"

if [[ -z "$VERSION" ]]; then
  echo 'one or more variables are undefined:'
  echo '  VERSION($1) - version of spinnaker to be installed'
  exit 1
fi

# List the available versions of spinnaker that can be deployed
hal version list

# Set the desired version to be deployed
hal config version edit --version "$VERSION"

# Execute the spinnaker deployment
hal deploy apply

Then I add a stage in my Jenkins pipeline to execute the script as the spinnaker user:

stage('Hal Deploy') {
    sh label: 'Deploy Spinnaker', script: "su spinnaker -c './scripts/deploy \"${SPINNAKER_DEPLOY_VERSION}\"'"
}
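As a quick sanity check after a successful deploy, Halyard can port-forward to the Spinnaker UI and API. This is something to run manually from a workstation rather than inside the pipeline:

# Port-forwards deck (the UI) and gate (the API) to localhost
hal deploy connect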

As with any deployment, it is always a good idea to back up what was done so it can be restored if needed. All of the configuration steps executed by the hal CLI are written as code and committed to git, so that probably serves as a good enough backup: executing all of the hal commands again should generate the same halconfig files as before. It never hurts to be safe, though, so I also backed up the halconfig to an s3 bucket using these Jenkins stages:

docker.image('gcr.io/spinnaker-marketplace/halyard:1.20.2').inside('-u root') {
    // all configuration and deployment steps done here
    ...
    stage('Hal Config Backup') {
        sh label: 'generate backup file', script: "su spinnaker -c 'hal backup create'"
        sh 'cp /home/spinnaker/halbackup-*.tar .'
        sh 'chmod 777 halbackup-*.tar'
        // allow the default user in the awscli container (uid 1001) to own the file
        sh 'chown 1001:1001 halbackup-*.tar'
        stash includes: 'halbackup*.tar', name: 'halbackupstash'
    }
}
docker.build("awscli", "-f Dockerfile.awscli .").inside {
    stage('Push Spinnaker Backup to S3') {
        unstash 'halbackupstash'
        sh 'ls -la halbackup-*.tar'
        // Get the aws access keys for accessing s3 artifacts
        withCredentials([usernamePassword(credentialsId: 'SPINNAKER_AWS_ACCESS', passwordVariable: 'AWS_SECRET_ACCESS_KEY', usernameVariable: 'AWS_ACCESS_KEY_ID')]) {
            sh 'aws s3 cp halbackup*.tar s3://$SPINNAKER_S3_BUCKET_BACKUP --region $AWS_REGION'
        }
    }
}

In this code I use two different docker images to execute different stages, so I copy files between the containers using stash/unstash. I also had to chown the tar file to uid 1001 to allow the default user within the awscli container to access the files.
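The Dockerfile.awscli itself isn't shown above; a hypothetical version that is consistent with the uid-1001 default user mentioned here might look like:

# Hypothetical Dockerfile.awscli (my own sketch, not from the original setup)
FROM python:3.7-alpine

# Install the AWS CLI
RUN pip install --no-cache-dir awscli

# Run as a non-root user with uid 1001 to match the chown in the pipeline
RUN adduser -D -u 1001 awsuser
USER awsuser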

Conclusion

This lays the groundwork for executing hal CLI commands inside of a Jenkins pipeline. It is up to you to determine which configuration steps you wish to run for your specific installation. Here are all of the previously discussed steps assembled into a full Jenkinsfile that does what was described throughout this article:

node {
    try {
        checkout scm
        // SPINNAKER_DEPLOY_VERSION sets the version of spinnaker that will be deployed
        env.SPINNAKER_DEPLOY_VERSION = "1.14.4"
        docker.image('bash:5.0.7').inside('-u root') {
            stage('Prepare scripts') {
                dir('scripts') {
                    sh 'chmod 755 *'
                    sh 'adduser -D spinnaker'
                    sh 'chown spinnaker:spinnaker *'
                }
            }
        }
        docker.image('gcr.io/spinnaker-marketplace/halyard:1.20.2').inside('-u root') {
            stage('Start Halyard') {
                sh label: 'Start Halyard Daemon', script: 'su spinnaker -c "./scripts/start-halyard-daemon"'
            }
            stage('Configure Artifacts') {
                // Get the personal access token for accessing github
                withCredentials([file(credentialsId: 'SPINNAKER_GITHUB_ARTIFACTS', variable: 'GITHUB_PAT_TOKEN_FILE')]) {
                    // Execute the script to configure github artifacts
                    sh label: 'Github', script: "su spinnaker -c './scripts/configure-artifacts-github my-github-user \"${GITHUB_PAT_TOKEN_FILE}\"'"
                }
                // Get the aws access keys for accessing s3 artifacts
                withCredentials([usernamePassword(credentialsId: 'SPINNAKER_AWS_ACCESS', passwordVariable: 'AWS_SECRET_ACCESS_KEY', usernameVariable: 'AWS_ACCESS_KEY_ID')]) {
                    // Execute the script to configure s3 artifacts (AWS_REGION must
                    // already be set in the environment)
                    sh label: 'S3', script: "su spinnaker -c './scripts/configure-artifacts-s3 my-spinnaker-s3-demo'"
                }
            }
            stage('Hal Deploy') {
                sh label: 'Deploy Spinnaker', script: "su spinnaker -c './scripts/deploy \"${SPINNAKER_DEPLOY_VERSION}\"'"
            }
            stage('Hal Config Backup') {
                sh label: 'generate backup file', script: "su spinnaker -c 'hal backup create'"
                sh 'cp /home/spinnaker/halbackup-*.tar .'
                sh 'chmod 777 halbackup-*.tar'
                // allow the default user in the awscli container (uid 1001) to own the file
                sh 'chown 1001:1001 halbackup-*.tar'
                stash includes: 'halbackup*.tar', name: 'halbackupstash'
            }
        }
        docker.build("awscli", "-f Dockerfile.awscli .").inside {
            stage('Push Spinnaker Backup to S3') {
                unstash 'halbackupstash'
                sh 'ls -la halbackup-*.tar'
                // Get the aws access keys for accessing s3 artifacts
                withCredentials([usernamePassword(credentialsId: 'SPINNAKER_AWS_ACCESS', passwordVariable: 'AWS_SECRET_ACCESS_KEY', usernameVariable: 'AWS_ACCESS_KEY_ID')]) {
                    sh 'aws s3 cp halbackup*.tar s3://$SPINNAKER_S3_BUCKET_BACKUP --region $AWS_REGION'
                }
            }
        }
    }
    catch (e) {
        echo "========Pipeline Failed========"
        throw e
    }
    finally {
        stage('Cleanup') {
            echo "========Cleaning up the mess we made========"
            cleanWs()
        }
    }
}
