Continuous Integration with containers

This post is part of The Containerization Chronicles, a series of posts about Containerization. In them, I write about my experiments and what I’ve learned about containerizing applications. The contents of this post might make more sense if you read the previous posts in this series.

Now that we have some containers in place for development, we can integrate with a CI server. I chose Scrutinizer CI because of the code analysis it provides; the fact that it is also a CI server is a plus. However, as a CI, it has some limitations, which I will talk about later when integrating with Heroku, so we will most likely move to another CI engine in the future, keeping Scrutinizer only for the code analysis. For now, though, this is all we need.

To set up the CI, we will:

  1. Containerize an environment to run the tests in the CI
  2. Configure Scrutinizer
  3. Add the Scrutinizer badges to the README.md
  4. Integrate Scrutinizer with GitHub

If you want to jump right into the code, this is the tag on GitHub.

1. Containerize an environment to run the tests in the CI

In the production container, we don’t want to have code that is not used. Why not? Because the container image will be bigger, the attack surface will also be bigger, and it’s just plain dirty.

Ideally, the container used to run the tests is the same as the one used in production, which is called something like “testing/production parity”. It is the only way to guarantee that a container will perform as expected in production. However, in order to run the unit/integration/functional tests, we need the tests inside the container, as well as all the dependencies needed to run them… So the idea is to create a container to run the tests, based on the production container, to which we add the tests and their dependencies.

Unfortunately, this means that the tests run in a project context that is not the same as the context that runs in production. Testing/production parity is broken.

The danger here is that, for example, our production application uses a dependency which is mistakenly installed as a test dependency. In such a case, the tests will be green but in production the dependency will be missing.
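To make that concrete, here is a hypothetical composer.json (this assumes a PHP project managed with Composer, and acme/markdown-parser is a made-up package that the application code itself uses):

```json
{
    "require": {
        "php": "^7.1"
    },
    "require-dev": {
        "phpunit/phpunit": "^7.0",
        "acme/markdown-parser": "^1.0"
    }
}
```

The CI image installs the dev dependencies, so the tests stay green, but a production install done with composer install --no-dev will not include acme/markdown-parser and the application will break at runtime.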

The solution for that is to have acceptance tests running outside the container, against the production container, simulating a user on a browser. The project, however, did not have acceptance tests, so I had to create some.

I created three acceptance tests simulating an anonymous user, a logged-in user and an admin user. The tests themselves are not very elegant: they should probably be broken up into several smaller tests and not depend on hardcoded variables. They are not the main point of this post, though, so please don’t take that code as a good example; there are better ways of doing acceptance tests. For the purpose of this blog post, however, they illustrate the point perfectly.
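Just to give an idea of the shape of such a test, here is a minimal sketch, assuming PHPUnit and the Guzzle HTTP client are available and that the production container answers on a hypothetical http://localhost:8080; it is not the project’s actual code:

```php
<?php

use GuzzleHttp\Client;
use PHPUnit\Framework\TestCase;

final class AnonymousUserAcceptanceTest extends TestCase
{
    public function testHomepageIsReachable(): void
    {
        // Hit the running production container from the outside, like a browser would.
        $client = new Client([
            'base_uri'    => 'http://localhost:8080', // hypothetical address of the production container
            'http_errors' => false,                   // assert on the status code ourselves
        ]);

        $response = $client->request('GET', '/');

        $this->assertSame(200, $response->getStatusCode());
    }
}
```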

[Code snippet: build/container/ci/app.dockerfile]

The dockerfile for the CI environment is based on the production image, and the only thing it does is add the tests and their dependencies to the image. This could also be done with multi-stage builds, but I will leave that for a later post, where I will experiment with it and analyze the pros and cons.
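As an illustration, this is roughly what such a dockerfile could look like. The production image tag, the paths and the use of Composer are assumptions for the example, not the project’s actual file:

```dockerfile
# Start from the production image so the runtime stays identical to production.
FROM app:production

# Add the test suites, which are not shipped in the production image.
COPY tests/ /app/tests/
COPY phpunit.xml /app/

# Install the dev dependencies (PHPUnit, etc.); the production image was built without them.
RUN composer install --no-interaction --prefer-dist
```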

[Code snippet: build/container/ci/docker-compose.yml]

The docker-compose file is also pretty straightforward. When compared to the docker-compose files explained before, we just change the image and container names, the path to the dockerfile and a few environment variables.
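For context, a minimal sketch of what the CI compose file could look like; the service name, image tag, paths and environment variables are hypothetical:

```yaml
version: '3'

services:
  app:
    # Build the CI image described above instead of using the production image directly.
    build:
      context: ../../..
      dockerfile: build/container/ci/app.dockerfile
    image: app:ci
    container_name: app-ci
    environment:
      # Hypothetical variable switching the application to its test configuration.
      - ENV=test
```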

2. Configure Scrutinizer

Scrutinizer is configured in two places: the configuration file in our project root and their website. In the configuration file, we set up the actual configuration, while on the website we mostly set environment variables.

This is what the configuration on the website looks like:

[Screenshot: Scrutinizer configuration on the website]

The configuration file is more interesting:

[Code snippet: .scrutinizer.yml]

Many of those configurations are self-explanatory. It is, however, relevant to mention that Scrutinizer already has some default commands it runs to set up the dependencies and to run the tests.

As you surely noticed above, we use a new make command in the Scrutinizer config. These are the relevant changes:

[Code snippet: Makefile, test-ci target]

The unit/integration/functional tests will run inside the test container, but the acceptance tests will run on the Scrutinizer host, hitting the production container (which contains no extra tests or dependencies).
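A rough sketch of what the test-ci target could look like (the target name comes from the snippet above, but the compose file path, service name and test commands are assumptions; note that make recipes must be indented with a tab):

```makefile
CI_COMPOSE := docker-compose -f build/container/ci/docker-compose.yml

test-ci:
	# Build and start the CI containers.
	$(CI_COMPOSE) up -d --build
	# Run the unit/integration/functional tests inside the test container.
	$(CI_COMPOSE) exec -T app vendor/bin/phpunit
	# Run the acceptance tests from the host, hitting the production container.
	vendor/bin/phpunit --testsuite acceptance
```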

We don’t want to use the default command to run the tests because we run them inside the container, so we override it with our make command and specify where Scrutinizer can find the coverage report.

A bit lower in the Scrutinizer config file, we have another important configuration: the build_failure_conditions directive. It tells Scrutinizer when to fail the build even if the tests are all green. In our case, we fail the build if any new issue is found or if the coverage drops by more than 5%.
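To make this concrete, here is a rough sketch of the relevant parts of a .scrutinizer.yml, based on Scrutinizer’s documented format rather than the project’s actual file; the command name, coverage path and exact condition expressions are assumptions:

```yaml
build:
    tests:
        override:
            -
                command: 'make test-ci'
                coverage:
                    file: 'var/coverage/clover.xml'
                    format: 'php-clover'

build_failure_conditions:
    # Fail the build when any new issue is introduced.
    - 'issues.new.exists'
    # Fail the build when the test coverage drops by more than 5%.
    - 'project.metric_change("scrutinizer.test_coverage", < -0.05)'
```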

3. Add the Scrutinizer badges to the README.md

We want to show the world how cool, well coded and well tested our application is, so we add some badges. It is fairly simple; we just need to add the following snippets to the README.md:

[Code snippets: Scrutinizer badges in README.md]
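For reference, Scrutinizer badges usually look like the snippet below, where <user> and <repo> stand for the GitHub account and repository, and the branch may differ:

```markdown
[![Build Status](https://scrutinizer-ci.com/g/<user>/<repo>/badges/build.png?b=master)](https://scrutinizer-ci.com/g/<user>/<repo>/build-status/master)
[![Code Coverage](https://scrutinizer-ci.com/g/<user>/<repo>/badges/coverage.png?b=master)](https://scrutinizer-ci.com/g/<user>/<repo>/?branch=master)
[![Scrutinizer Code Quality](https://scrutinizer-ci.com/g/<user>/<repo>/badges/quality-score.png?b=master)](https://scrutinizer-ci.com/g/<user>/<repo>/?branch=master)
```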

4. Integrate Scrutinizer with GitHub

The only thing left to do is integrate Scrutinizer with GitHub, so that when the tests fail, we will not be allowed to merge to master.

This is done in the GitHub repository configuration:

[Screenshot: GitHub repository branch protection configuration]

And it will yield something like this when the tests fail:

[Screenshot: GitHub pull request blocked by a failed Scrutinizer check]

This is it! The integration with Scrutinizer CI is done!

Please, feel free to share your thoughts and/or ways to improve this.