Containerized Build Agents – Simplify Your DevOps Infrastructure Using Docker

DevOps Continuous Delivery Tools Built on Docker Are Rewriting the Provisioning Playbook

The Prolifics DevOps Labs focus on creating accelerators for our customers while applying DevOps continuous delivery tools and principles to optimize their software release processes. As a tier-one IBM business partner, many of our accelerators support DevOps automation for apps designed for one of the many IBM runtime platforms. These include IBM BPM Advanced, IBM Portal, IBM Integration Bus, IBM DataPower and IBM WebSphere Application Server, to name a few.

We have been so successful that Prolifics offers this automation to customers under the name Prolifics Build Conductor (PBC). PBC provides reusable build, deploy and test automation steps you can use in your favorite automation engine. It has been used extensively with Rational Team Concert, Jenkins and UrbanCode Deploy, but it can easily add value to any number of other popular automation engines.

Prolifics Build Conductor is an out-of-the-box solution for customers—install the automation in your automation engine and you are ready to go. Set up a new automated build, deployment or test immediately by filling in the relevant properties and you will have an automated release pipeline for your apps in no time!

At Prolifics DevOps Labs, we are not too fond of manually intensive tasks. They are boring and a waste of valuable time. That said, one task we found ourselves doing a lot, thanks to Build Conductor’s popularity, was manually provisioning new build agents.

I use the term build agent to refer to the machine that processes build, deployment or test scripts on behalf of an automation engine/server. We could talk about an automation agent—because it automates more than just builds—but we will stick with “build agent” for simplicity’s sake. Examples of build agent software would be a Jenkins slave, an RTC Jazz Build Engine or an UrbanCode Deploy agent.

We standardized using Linux for our build infrastructure quite some time ago, which meant spinning up a new base machine was pretty simple. However, each new build agent host additionally needs to have the following installed:

  1. The build agent software (the RTC Jazz Build Engine, a Jenkins slave or an UrbanCode Deploy agent, depending on which automation engine is being used).
  2. The automation steps – the reusable scripts that build, deploy and test apps.
  3. A variety of supporting tools required for the automation steps. The exact supporting tools depend on the application runtime platform we are targeting. In many cases, this step is especially labor intensive, as certain platforms require a significant amount of supporting software with lengthy install processes.

To get a better idea of the type of supporting tools required on a build agent, let's look at an example for IBM Integration Bus (IIB).

To support the automation that assembles and deploys Broker Archive (BAR) files, we need to install the following supporting tools on each build agent host:

  1. IBM Integration Toolkit (IIT). This is used by our automation steps to create new BAR files and to apply BAR overrides.
  2. IBM MQ. This is used by our automation steps to deploy BAR files.
  3. IIB Server. This is used by our automation steps to deploy BAR files.
  4. X Virtual Frame Buffer (Xvfb). A prerequisite for the IBM Integration Toolkit to run headlessly.
  5. Testing Tool(s). One of a number of popular test tools for testing our BAR files.

Creating a new IIB build agent host manually involves a lot of time installing and configuring these tools. You are right to think that this time would be better spent elsewhere. Also, whenever we upgrade our automation steps to support a newer version of IIB, all of our build agent hosts could require upgrades of their supporting tools (i.e., upgrades of IIB, MQ, IIT, PBC and the test tools). More wasted manual effort.

Time for a change. Enter Docker.

DevOps Continuous Delivery Tools with Docker – Reducing Infrastructure Overheads and Increasing Flexibility

If you have not experienced the joys of using Docker yet, you are missing out.

My first hands-on experience with Docker was working on a companion offering to Prolifics Build Conductor called Prolifics OneClick Ready-2-Run (OCR2R). OCR2R’s goal is to allow our customers to quickly provision new tools and platforms with little effort. Docker was chosen as one of the key technologies to meet this objective. (We also offer this capability using Chef and IBM Pure Applications).

It quickly became apparent that Docker could be used to codify the steps required to provision a new build agent host. Doing so allows us to provision quickly and painlessly and decommission build agents with ease. I use the term build agent container to describe a Docker container that contains all of the software required to act as a build agent, including the supporting tools and automation steps.

There are many great resources out there that will give you an excellent introduction to Docker, but I’ll briefly talk you through the basics here.

Docker automates the job of provisioning a running set of software of your choosing, taking the pain out of installing and starting up new instances. The key mechanism is the Dockerfile, which is used to codify (script) the steps that install and configure your chosen set of software. A Dockerfile is a plain text file that uses a standard set of instructions to execute those install and configure steps.
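As a minimal sketch of what such a file looks like (the base image, package names and paths here are illustrative choices of mine, not from any particular Prolifics Dockerfile):

```dockerfile
# Start from a known base image.
FROM ubuntu:14.04

# Install supporting tools available from the package manager
# (illustrative packages only).
RUN apt-get update && apt-get install -y openjdk-7-jdk xvfb

# Copy automation scripts into the image.
COPY scripts/ /opt/build-agent/scripts/

# Define the command the container runs on start-up.
CMD ["/opt/build-agent/scripts/start-agent.sh"]
```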

Docker allows you to create a Docker Image from a Dockerfile by building that file. The created Docker Image is a static image of the full set of software you have chosen to run.

An image is then run to become a Docker Container, which is a unique running instance of the software with its own processes, state and configuration. You can start as many Docker Containers from an image as you wish, giving you the ability to quickly spin up multiple running instances of your software, each with its own processes, state and configuration.
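In command form, the build-then-run cycle looks like this (the image and container names are my own illustrative choices):

```shell
# Build an image named "build-agent" from the Dockerfile
# in the current directory.
docker build -t build-agent:1.0 .

# Start two independent containers from the same image; each gets
# its own processes, state and configuration.
docker run -d --name agent-01 build-agent:1.0
docker run -d --name agent-02 build-agent:1.0
```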

All of this together means that Docker is the ideal technology to solve the issue of spending too much time setting up new build agent hosts.

Let’s use IIB again as an example.

We start by creating a Dockerfile that contains all of the steps to install and configure a new IIB build agent host: the steps to install IIB, IIT, MQ, Xvfb, the test tool(s) and our PBC automation, and then to configure the lot. This configuration involves automating steps to set up users, user groups and permissions; copy in start-up scripts; set environment properties; and prepare folder structures.
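A sketch of what such a Dockerfile might look like follows. The install paths, silent-install script names and user names below are hypothetical placeholders for illustration, not the actual Prolifics Dockerfile:

```dockerfile
FROM ubuntu:14.04

# Supporting tools available via the package manager.
RUN apt-get update && apt-get install -y xvfb

# Create a dedicated group and user for the build agent.
RUN groupadd mqbrkrs && useradd -m -G mqbrkrs iibuser

# Copy in and run the MQ, IIB and IIT installers from a shared
# location of install binaries (paths are illustrative).
COPY installers/ /tmp/installers/
RUN /tmp/installers/install-mq.sh && \
    /tmp/installers/install-iib.sh && \
    /tmp/installers/install-iit.sh && \
    rm -rf /tmp/installers

# Add the PBC automation steps and a start-up script.
COPY pbc/ /opt/pbc/
COPY start-agent.sh /opt/start-agent.sh

# Environment properties needed by the automation steps.
ENV MQSI_WORKPATH=/var/mqsi

USER iibuser
CMD ["/opt/start-agent.sh"]
```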

We then run a Docker build to create an image from that file. The image locks in the exact configuration, which means we can now reliably recreate the same running configuration of software every time.

We publish the image to our internal Docker repository, and we are ready to go! From there we can ship it to any host running Docker (using a pull command) and instantly start up as many containerized build agents as we wish. It is as simple as that!
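The publish-and-ship workflow boils down to a few commands (the registry host name below is a placeholder, not our actual internal registry):

```shell
# Tag the image and push it to an internal Docker registry.
docker tag iib-build-agent:1.0 registry.example.com/iib-build-agent:1.0
docker push registry.example.com/iib-build-agent:1.0

# On any host running Docker: pull the image and instantly start
# a containerized build agent.
docker pull registry.example.com/iib-build-agent:1.0
docker run -d --name iib-agent-01 registry.example.com/iib-build-agent:1.0
```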

Direct Benefits of Docker for Build Agent Containers

How is this of benefit to Prolifics and our customers?

  1. Once you have a “gold” build agent host image, you can be sure that same setup can be shipped across to all your projects and customers to reliably and quickly set up new build agent hosts. There are no errors in recreating the perfect build infrastructure with the correct versions; there is no time wasted by manually having to install all the requisite software.
  2. You get much more flexible usage from your infrastructure because new build agents can be provisioned within minutes. Also, you can quickly refactor your DevOps topology by starting/stopping containers on different hosts within your infrastructure. The sudden need for lots of new IIB build agents? Easy. Scale back a set of build agents and free up machine resources? No problem.
  3. Upgrading build agents is lightning quick. Creating a new “gold” image is just a matter of making the new install binaries for IIB, MQ, IIT and PBC available and then updating the Dockerfile to use these new installs. Once the new “gold” image has been regression tested, place it in the shared Prolifics Docker repository, and it is then available for your project teams and customers to download. It is as easy as that.
  4. Running different build containers (i.e., supporting builds for different application platforms) on the same host becomes trivial. Because each of the containers acts as an entirely separate machine with its own state and configuration, the build agents are isolated from each other, which helps avoid incompatibility issues between them.
  5. Most importantly, you are freeing up a lot of time Prolifics could better spend innovating for you, our customers!

For all these reasons, we find Docker to be a fantastic way of simplifying your DevOps infrastructure.

Getting Started with Containerized Builds using Docker

How do you get started?

  1. Identify the number of different unique types of build agent hosts you need. There might only be a single type, or you could have a more complex Enterprise Architecture with a number of different application platforms, each requiring their own build agent configuration. You should plan to automate the provisioning of each.
  2. Determine the requisite software for each type of build agent host.
  3. Create a Dockerfile to codify the setup and installation of each of these unique types of build agent host.
  4. From there it is easy: use Docker to create a Docker Image based on these Dockerfiles.
  5. Once you’ve determined where you would like instances (containers) of each image to run, ship the image files over to those hosts and start up your Docker Containers.

You are now on your way to enjoying the benefits of a containerized DevOps infrastructure!

More Hints and Tips for a Containerized DevOps Infrastructure

  1. Use the Docker ENTRYPOINT instruction along with a start-up script copied into your Docker Image so your containerized build agent begins servicing builds immediately on container start-up.
  2. Improve efficiency by creating a shared location for all tool install binaries and pull from that location in your Dockerfiles.
  3. Save space on your build agent hosts by creating a proper hierarchy of Docker Images that reuse common base layers.
  4. Put in place suitable DevOps practices for managing and releasing Dockerfiles and images! For example, source control Dockerfiles and use your automation engine to build and deploy images. We’ve implemented this as a new Build Conductor module, which means we can run Continuous Integration of our Dockerfiles to ensure they are always in a “gold” state. This is great for uncovering defects inadvertently delivered into source control.
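Hints 1 and 3 can be illustrated together in a few lines of Dockerfile: an ENTRYPOINT that launches the agent the moment the container starts, built on top of a shared base layer. The base image name and script path are illustrative placeholders:

```dockerfile
# Reuse a common base layer shared by all of our build agent images,
# so each agent image only adds its own platform-specific pieces.
FROM registry.example.com/pbc-agent-base:1.0

# Copy in the start-up script that registers the agent with the
# automation engine and begins servicing builds.
COPY start-agent.sh /opt/start-agent.sh
RUN chmod +x /opt/start-agent.sh

# ENTRYPOINT runs the agent as soon as the container starts.
ENTRYPOINT ["/opt/start-agent.sh"]
```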

… And More on Prolifics DevOps Continuous Delivery

We at the Prolifics DevOps Lab live and breathe DevOps on a daily basis, and are always looking to optimize. We’d love your feedback, so please send us your comments and questions.

Also, we love sharing our progress and accelerators, so please get in touch if you’d like to hear more. We often provide our accelerators to customers at zero license cost!

  • Find out more about Prolifics Build Conductor for providing reusable build, deploy and test automation steps for all of your favorite application platforms and tools.
  • Find out more about Prolifics One-Click Ready-2-Run for automatically provisioning and configuring your favorite application platforms and tools.
  • Alternatively, learn more about DevOps and how it can help increase the speed and reduce the cost of delivering new apps.

Greg Hodgkinson
Director of Lifecycle Tools and Methodology

Gregory Hodgkinson is Director of Lifecycle Tools and Methodology at Prolifics, working in the CTO’s Office. He is also a recognized IBM Champion for Rational. Before his role at Prolifics, he was Founder, Director, and Visionary at 7irene, a leading SOA Technology Solutions company in the United Kingdom. He has approaching 20 years of experience in delivering Technology Solutions, initially specializing in the field of Component-Based Development (CBD) before moving seamlessly into Service-Oriented Architecture (SOA). His current areas of expertise include DevOps and the Software Development Lifecycle (SDLC), and he provides leadership for the Prolifics DevOps Labs. He is still very much a practitioner and has been responsible for technology solutions for a number of Fortune 500 and FTSE 100 companies. He regularly presents on DevOps and other methods at both IBM and other events, has co-authored a Redbook on SOA solutions and contributes to forums such as DeveloperWorks.
