Simplify Software Delivery Using Containers and the Cloud, Part 1: Docker Containers

Wouldn’t it be great if your teams could just focus on the coding? After all, we want to be innovative and write great software!

Unfortunately, most really interesting software requires you to think about other things.

  • How do you package up and deliver all the components of the solution?
  • How do you take care of all of the dependencies?
  • How do you make sure the end-to-end solution is scalable?
  • How do you make sure that the solution is easy to manage?

Answering these questions requires you to consider a lot more than just “coding the logic.”

In times past, the only software with these sorts of demands was the large-scale business IT system. To meet those demands, such systems made use of a great platform: the mainframe.

Unfortunately, the mainframe did a lot of this at the expense of choice. The software had limitations: all UIs tended to be of the "green-screen" variety, all your code was written in COBOL, and integration was handled by CICS, as were data storage and retrieval. But you could mostly focus on your business logic, and the platform took care of things like build and deploy, dependency management, integration, data, scalability, and operational needs.

Containers and cloud have become foundational components of a new, modern type of software platform.

Now, this post isn't about making a case for the mainframe. I mention it to point out that, with the explosion of choice brought about in the distributed world, business IT systems got a lot more complex and at the same time had to make do without the benefit of a standardized set of platform services to help deal with that complexity. As PC-based servers were installed in data centers alongside the mainframe, choice exploded in terms of operating systems, development languages, and application and data server platforms.

At the same time, "consumer-focused" software has become a lot more interesting. This is no longer just desktop applications: think of all the mobile applications on your phone, the social media platforms you interact with, and the cloud-hosted solutions you consume. Both consumer-focused and business software now have a lot of complexity to deal with.

How is all this backstory relevant to containers and the cloud?

The short answer is that each has become a foundational component of a new, modern type of software platform which, like the mainframe in times past, allows our teams to focus on the task of writing software that is simpler to develop, deliver, and operate.

Let’s start with containers, which (at least in their Docker form) burst onto the scene in a big way in 2014. How have containers helped?

Most importantly, containers have given us a standardized way of handling the packaging and delivery of application components with the bold claim of “build, ship, and run any app, anywhere.”

We start by installing Docker on any host machine (Windows, Linux, or macOS). We all know the basic Docker commands:

docker build

takes a simple text Dockerfile recipe and uses it to build a Docker image containing our app and all its configurations and dependencies.
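
For example, building and tagging an image from the Dockerfile in the current directory might look something like this (the myorg/myapp name and 1.0 tag are hypothetical, purely for illustration):

    # build an image from the Dockerfile in the current directory and tag it
    docker build -t myorg/myapp:1.0 .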

docker push

lets us share this image by uploading it to a registry (such as Docker Hub), from where it can be downloaded using a simple docker pull.
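
To share the image, we log in to the registry and push it; any other Docker host can then pull it down. Sticking with the hypothetical image name from above:

    # authenticate against the registry (Docker Hub by default)
    docker login
    # upload the image to the registry
    docker push myorg/myapp:1.0
    # ...and on any other Docker host, download it
    docker pull myorg/myapp:1.0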

docker run

lets us run the application by starting up a container based on the image, giving us a running instance of the software.
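
A typical invocation, again using the hypothetical image from above, starts the container in the background and maps a port so the application is reachable from the host:

    # start a container from the image, detached, with port 8080 published
    docker run -d --name myapp -p 8080:8080 myorg/myapp:1.0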

There are more commands, but these basics give enough context to discuss the wealth of benefits.

The immediate benefit that draws most people to Docker is the speed with which you can distribute and start up a running "dockerized" application. Just executing a docker run command that specifies a Docker image in your registry will automatically pull down the image and start up a running instance. The software is quicker to download because Docker images are substantially smaller than equivalent virtual machine images. The software starts up quicker because a container shares the host's operating system kernel, so there is no guest operating system to boot the way there is with a virtual machine.

All of this makes it incredibly quick to reliably distribute new versions of software.

Each image/container is totally isolated from the others. This means that on a Docker host machine you can download a variety of different bits of software without ever having to worry about them having incompatible dependencies.

This is great when you use Docker to experiment with software on your desktop machine: you never have to worry about dependencies leaking out onto your host machine, or about getting stuck because two bits of software use different versions of the same package, library, or other piece of software. This used to be a major headache in managing applications. Docker solves it neatly.

Shipping an application component as a container also means that the implementation choices made by the developers of that component are isolated from other components. Developers can focus on making the appropriate implementation choices for their specific component without having to worry about the knock-on effect on all other components in the software system. This is a really big deal. It allows you to be more innovative in the technologies you use and to experiment safely, and that safety allows your teams to evolve their software to make use of new technologies and frameworks.

Dockerfiles allow you to codify all the steps that you would normally perform manually to install your application on a host server, such as updates to the operating system, installation of tools, libraries, and application server software, or installation and configuration of the application.
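
As a rough sketch of what such a recipe looks like, here is a minimal Dockerfile for a hypothetical Node.js application (the base image, port, and file names are illustrative assumptions, not a prescription):

    # start from an official Node.js base image
    FROM node:18
    # set the working directory inside the image
    WORKDIR /app
    # install the application's dependencies first (better layer caching)
    COPY package.json package-lock.json ./
    RUN npm install
    # copy in the application code itself
    COPY . .
    # document the port the application listens on
    EXPOSE 8080
    # define how the application is started
    CMD ["node", "server.js"]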

Just having this recipe written down somewhere is hugely useful, as it helps in understanding your application’s various dependencies.

But more than that, it means that modifying the install is so much easier. Just update the Dockerfile, re-run the docker build command, and Docker will create a new image with your changes included. Also, tags allow you to keep multiple versions of your image should you wish to support multiple versions of your application and its configuration.
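
For instance, continuing with our hypothetical image, an updated Dockerfile can be rebuilt under a new version tag while the old one stays available:

    # rebuild with the updated Dockerfile under a new tag
    docker build -t myorg/myapp:1.1 .
    # both tagged versions now sit side by side in the local image list
    docker images myorg/myapp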

The fact that this is all driven by a simple text file means that the “recipe” can be placed in source control, which means that now you can track all the changes that you make to the application, its dependencies, its configuration, and its underlying platform. This is immensely valuable for troubleshooting issues.

This also means that we can set up CI/CD pipelines that deploy more than just application changes. Now your entire application image, with all its dependencies and configuration, goes through your CI/CD pipelines, meaning that you can push out changes to test and production environments very quickly.
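
The exact steps depend on your CI/CD tooling, but in essence each pipeline stage chains together the same Docker commands. A rough sketch, where the registry address and the use of a commit identifier as the tag are assumptions for illustration:

    # build stage: bake the application, dependencies, and configuration into an image
    docker build -t registry.example.com/myapp:${GIT_COMMIT} .
    # publish stage: push the image so later stages and environments can use it
    docker push registry.example.com/myapp:${GIT_COMMIT}
    # deploy stage (on the test or production host): run exactly the same image
    docker pull registry.example.com/myapp:${GIT_COMMIT}
    docker run -d --name myapp registry.example.com/myapp:${GIT_COMMIT}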

Let's talk about Docker's "run any app, anywhere" claim. Before Docker, the approaches, tools, and workflows for running your application components would often differ depending on whether you were running them on a local desktop or on a production server. Now you can run the very same image on your Windows or Mac desktop, on your bare-metal Linux server, in a virtual machine in your data center, or even in your choice of cloud. It just runs anywhere, with the exact same functional behaviour.

It is so much easier to experiment with software by pulling it down from Docker Hub and running it on your desktop. I was able to get the entire Elastic Stack up and running on my desktop within minutes of reading about it. When I was finished, I could just run a docker rm command to remove the container along with the application, its data, and all its dependencies. Simple, neat, and tidy.
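
As a taste of that workflow, here is roughly what it looks like for a single Elasticsearch node (the image tag and the single-node setting are assumptions; check the Elastic documentation for the version you want to try):

    # pull and start a throwaway single-node Elasticsearch container
    docker run -d --name elasticsearch -p 9200:9200 \
        -e "discovery.type=single-node" \
        docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    # when finished, stop it and remove it, data and dependencies included
    docker stop elasticsearch
    docker rm elasticsearch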

The code executes locally on your desktop in the same way it will run in your test environments and in production, with all the dependencies and configuration packaged in. This removes the "well, it works on my machine" class of problems.

Now that we've covered the benefits of packaging your applications in containers, let's turn our attention to the cloud. As we'll see, containers pop up there as well. In our next blog post, we will discuss how you can use containers in the cloud with Kubernetes.

 

Greg Hodgkinson
Practice Director

Gregory Hodgkinson is the Lifecycle Tools and Methodology Practice Director at Prolifics and an IBM Champion for Rational. Prior to that he was a Founder, Director, and the SOA Lead at 7irene, a visionary software solutions company in the United Kingdom. He has 16 years of experience in software architecture, initially specializing in the field of component-based development (CBD), then moving seamlessly into service-oriented architecture (SOA).

His extended area of expertise is the Software Development Lifecycle (SDLC), and he assists Prolifics and IBM customers in adopting agile development processes and SOA methods. He is still very much a practitioner, and has been responsible for service architectures for a number of FTSE 100 companies. He presents on agile SOA process and methods at both IBM (Rational and WebSphere) and other events, has also co-authored a Redbook on SOA solutions, and contributes to DeveloperWorks.