Docker Swarm deployments with Semaphore
I hope to show you how to easily set up a Docker build of your project branch on every push to that branch, run integration tests, deploy your Docker image or Docker Compose file to a server over SSH, and get notified on Slack or by email.
Why use a Continuous Integration and Deployment service?
Continuous integration and deployment is a great methodology for speeding up development and community organization in your app. Imagine if, every time a community member submitted a pull request, your CI service automatically ran integration tests to make sure the code works with your existing base. And what if deploying your code to production were as simple as merging into the master branch and pushing?
Having these tasks automated saves you a lot of time and helps prevent the mistakes that come from deploying manually.
However, some services are complex enough that developers never take the time to set them up. And some run slowly enough that you end up waiting 15–45 minutes before you see your code in a production environment.
When there is a difference between your QA and local environments that you didn’t notice, tracking down the resulting bugs can take several pushes and deployments before you discover it. We’d like to say this never happens to us, but it occurs often in development teams.
If your CI/CD deployment solution is taking 15–45 minutes, you end up having one of those days where you spend 4–5 hours tracking down a deployment bug. Not the best use of your time!
Semaphore boasts that it is the fastest CI/CD service and can get the job done in a fraction of the time. On top of that, its interface is non-intimidating, so you’re not worried about losing time just setting up your CI/CD service. In my case:
- Gitlab-CI took 6–7 minutes to complete my tasks
- Semaphore did the same work in 2.5 minutes
Semaphore is free for unlimited open source projects, and charges a $25 monthly fee for unlimited private projects and jobs.
How to Set Up CI/CD with Semaphore
I’m assuming you already have a Docker Swarm cluster set up with at least one manager node. You can check out my Docker Swarm cheatsheet for that.
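If you want a quick sanity check that the cluster is up before wiring in CI, something like this on the manager node is enough (just a refresher, nothing Semaphore-specific):
# On the machine acting as the manager (skip init if the swarm already exists)
docker swarm init
# List the nodes in the cluster; the manager should show a MANAGER STATUS of Leader
docker node ls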
Connect Semaphore to your code for automatic jobs
After clicking the “Create new” link, a wizard will guide you through adding a project. By default, branches will be automatically built when a git push is detected, but this can also be configured in project settings. Semaphore has a dedicated platform for Docker, so make sure this is selected in the platform tab, in project settings.

If your project has a Dockerfile, this option will be presented in the wizard as well.
Next, to connect your Docker Hub account with Semaphore, click on Add-Ons on the top right and set up your credentials after selecting “Docker Registry”.


How to set up a secure connection between your manager server and Semaphore
We will generate a public/private key pair (make sure to leave the password blank):
ssh-keygen -t rsa
This can be run on your local development machine or anywhere, but I would recommend your development machine. Add the contents of the public key (~/.ssh/id_rsa.pub) to our manager server’s ~/.ssh/authorized_keys list, add the private key as a file in Semaphore’s interface, and then store the manager node’s IP address as an environment variable in the Semaphore interface so we can refer to it in our scripts.
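For example, assuming the default key location and a root login on the manager (swap in your own user and IP), authorizing the key and grabbing the private half to paste into Semaphore looks like this:
# Authorize the new public key on the manager node
ssh-copy-id -i ~/.ssh/id_rsa.pub root@your_manager_ip
# Or append it manually if ssh-copy-id isn't available
cat ~/.ssh/id_rsa.pub | ssh root@your_manager_ip 'cat >> ~/.ssh/authorized_keys'
# Print the private key so it can be pasted into Semaphore
cat ~/.ssh/id_rsa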
How to handle environment variables
The environment variables tab allows you to set up environment variables for your build jobs. They can optionally be encrypted so they can’t be viewed or edited through the interface.

Additionally, you may create configuration files on the build server using the “Configuration files” tab. This lets you specify a path and name for a file on the build server along with its contents. We will use this to paste in our private key; make sure to check the “encrypt file” button. Use the path /home/runner/.ssh/your_private_key_name
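If you want to confirm the key and server address are wired up correctly, an optional sanity check in a build command could look like this (I store the manager IP in $DEPLOY_SERVER_URL, which is used again in the deployment scripts below):
# Optional: verify SSH access to the manager from a build job
chmod 600 /home/runner/.ssh/your_private_key_name
ssh -i /home/runner/.ssh/your_private_key_name -o StrictHostKeyChecking=no root@$DEPLOY_SERVER_URL "docker node ls"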

How to set up and run integration tests automatically with MongoDB
Semaphore allows you to set up “jobs”, which are groups of terminal commands executed in parallel, in isolated environments. We will use one such job to run the tests. There’s also a shared job named “Setup” which is prepended to every regular job, so it’s a good place to install dependencies and perform other bootstrapping steps.
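As a rough sketch for a Node project like the one in this article, the jobs can be as small as this (the actual npm scripts live in package.json, shown below):
# Shared "Setup" job, prepended to every other job
npm install
# Parallel job: run the test suite
npm test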

Then you run your unit tests in a parallel job. It’s best practice to do most of the heavy lifting in your package.json npm scripts and simply run these commands.

There is a MongoDB service automatically running on mongodb://localhost:27017/db_name. All you have to do is refer to this address in your code. In your npm test script you may set an environment variable, or set it in Semaphore, so your tests use the localhost Mongo instance.
# npm run test
cross-env NODE_ENV=test mocha test/ --recursive
To see an example of this, check out my Feathers-Vue project. Notice in my config/default.js file I have:
"mongodb": process.env.DATABASE_URL || "mongodb://localhost:27017/feathers-vue"
In this case, I do not set the DATABASE_URL environment variable when running my tests, so it falls back to the address Semaphore supports.
When my build is successful and my tests pass, Semaphore will then run my deployment scripts.
How to write our Docker Swarm deployment scripts
We build our service’s Docker image from a Dockerfile. Something like:
FROM node:8-alpine
RUN apk add --update git && rm -rf /tmp/* /var/cache/apk/*
WORKDIR /var/www/
ENV NODE_ENV=production
COPY package.json /var/www/package.json
RUN npm install
COPY public/ /var/www/public
COPY config/ /var/www/config
COPY server/ /var/www/server
EXPOSE 80
CMD ["npm", "run", "production"]
With npm run production being:
# npm run production
cross-env NODE_ENV=production node server/
NOTE: I would normally recommend pm2, a popular choice, instead of plain node for serving your app; however, at the time of this writing pm2 has some security problems that are worth avoiding.
We push the built image to our Docker repo. Next, we add the manager server’s IP address to our known_hosts so SSH won’t prompt us to confirm it. We then copy over any files we need using scp. In this case, I’ve added an environment.env file in Semaphore’s configuration files that I refer to in my Docker Swarm yaml file, so I need to copy that over. I’ve also added the private key at the path /home/runner/your_folder_name/.ssh/semaphore.
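For reference, environment.env is just a list of KEY=value pairs; a placeholder version might look like the following (these names and values are made up, store your real ones encrypted in Semaphore):
# environment.env (placeholder values only)
NODE_ENV=production
DATABASE_URL=mongodb://mongo:27017/feathers-vue
SESSION_SECRET=replace_me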

We will configure Semaphore to deploy our app whenever a build on the master branch passes. This can be done by clicking the “Set Up Deployment” button under the branch on the main project page. Here, select “Generic Deployment”.
Now we can select which branch (if green) will trigger the deployment. Add the following commands and ignore the “Add SSH key” step, as we already added our key as a configuration file (/home/runner/your_folder_name/.ssh/semaphore), and it will be loaded during the deployment.
docker pull $CACHE_IMAGE:$BRANCH_NAME || true
docker build -f $DOCKER_FILE --cache-from $CACHE_IMAGE:$BRANCH_NAME --tag $CACHE_IMAGE:$BRANCH_NAME .
docker push $CACHE_IMAGE:$BRANCH_NAME
ssh-keyscan -H $DEPLOY_SERVER_URL >> ~/.ssh/known_hosts
scp -r environment.env .env $COMPOSE_FILE root@$DEPLOY_SERVER_URL:~/
ssh root@$DEPLOY_SERVER_URL "env $(cat .env | grep ^[A-Z] | xargs) docker stack deploy -c $COMPOSE_FILE --with-registry-auth f"

We pull from our previous Docker image so the build will go faster. Semaphore adds the $BRANCH_NAME environment variable for us to identify the branch the image was built on. See more here. The $COMPOSE_FILE environment variable is one I set up to refer to the name of my Docker Swarm production yml file.
In our final command we SSH into our manager server, load our environment variables from the .env file and run
docker stack deploy -c $COMPOSE_FILE --with-registry-auth f
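The env $(cat .env | grep ^[A-Z] | xargs) portion just turns the uppercase KEY=value lines of the .env file into arguments for the deploy command, so with placeholder values like the ones above it effectively expands to something like:
env NODE_ENV=production DATABASE_URL=mongodb://mongo:27017/feathers-vue SESSION_SECRET=replace_me docker stack deploy -c $COMPOSE_FILE --with-registry-auth f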
docker stack deploy will start services if they don’t exist or update existing services using the changes in the compose file. The --with-registry-auth flag forwards the registry credentials we entered on our manager server to the worker nodes so they can pull the images they need. This is only required if the Docker registry is private.
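Once the deploy command returns, you can hop onto the manager node to confirm the stack came up (f is the stack name used in the deploy command above; yours may differ):
# On the manager node
docker stack services f
# Tail an individual service's logs if something looks off (the service name here is a placeholder)
docker service logs some_service_name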
Docker login is not required since Semaphore does that for you behind the scenes. Also, caching your Docker builds with --cache-from will speed them up.
How to set up notifications when a build passes or fails with Slack or email
Semaphore makes this really easy. In your Slack workspace, click Apps and search for Semaphore. Log in, install the Semaphore app, and go through the wizard; it will provide a webhook URL. Simply paste this into notifications > slack > webhook url in the Semaphore interface. You can also click the email tab and choose when you get notified.
Hopefully this takes some of the intimidation away from setting up your own CI workflow. I have an example of this in practice if you want to see more details in the code: https://github.com/codingfriend1/Feathers-Vue
Also check out Semaphore — https://semaphoreci.com/