RightScale Blog


Migrating to Docker: Halfway There


The RightScale engineering team is moving the entire RightScale cloud management platform, comprising 52 services and 1,028 cloud instances, to Docker. This article is the fourth in a series chronicling Project Sherpa and was contributed by Ryan Williamson, senior software engineer, and Tim Miller, vice president of engineering.

Week two of Project Sherpa is complete and we're getting into a good rhythm after establishing our ground rules for migrating to Docker. We are now halfway to our goal with 26 of 52 services migrated. Inputs proved to be the dominant theme of the first two weeks, so we wanted to dive into some details on how we approached them.

One of the draws of Docker is the allure of image promotion — the promise that what you build in your dev environment is what you run in your test environment and ultimately what you deploy to production. This is only half of the story, however, as you need to configure images differently depending on how and where they are running. For example, a containerized app that connects to MongoDB with multiple nodes in production needs to be configured differently than when it runs in staging where nodes for disaster recovery and high availability may not be required. When run on a development box, the same application may be backed by an in-memory store for performance or cost savings.
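That pattern can be sketched in a few lines of Python. The variable and function names below are illustrative, not RightScale's actual inputs: the same containerized app selects its datastore from an environment variable, so a single image serves production, staging, and development.

```python
import os

def storage_backend():
    """Select a datastore based on the environment the container runs in.

    MONGO_HOSTS (an illustrative name): comma-separated replica-set members,
    e.g. several nodes in production, a single node in staging. When unset,
    as on a dev box, fall back to an in-memory store.
    """
    hosts = os.environ.get("MONGO_HOSTS", "")
    if hosts:
        members = [h.strip() for h in hosts.split(",") if h.strip()]
        return {"kind": "mongodb", "hosts": members}
    return {"kind": "in-memory"}
```

The image never changes between environments; only the environment variables injected at run time do.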

So how hard can this be? We took a look across our services and found between 60 and 100 configurable values. And we need the ability to run each service in a variety of environments where the container may be grouped in varying combinations with other containers, including:

  • On a developer laptop or desktop, either inside or outside of a container
  • In our CI system for automated build and test
  • In an integration environment for testing
  • In staging
  • In production

Do the math and that's a lot of complexity around configuration.

It was clear that we needed a well-defined approach. We settled on the 12-factor methodology. According to 12factor.net: "In a 12-factor app, env vars are granular controls, each fully orthogonal to other env vars. They are never grouped together as 'environments,' but instead are independently managed for each deploy. This is a model that scales up smoothly as the app naturally expands into more deploys over its lifetime."

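As an example, here is a minimal 12-factor-style configuration loader (the variable names and defaults are ours, for illustration only): every setting is its own environment variable, independently overridable per deploy, with no bundled "environment" profiles.

```python
import os

def load_config(env=None):
    """Read each setting as an independent env var, 12-factor style.

    Variable names and defaults are illustrative, not RightScale's.
    """
    env = os.environ if env is None else env
    return {
        "mongo_hosts": env.get("MONGO_HOSTS", "localhost:27017"),
        "log_level": env.get("LOG_LEVEL", "info"),
        "workers": int(env.get("WORKERS", "4")),
    }
```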

The 12-factor methodology gives us a simplified approach for managing inputs in our apps, but we still need to provide reasonable values for each environment. So where do these values come from? We categorized configurable values based on the answers to a set of questions, such as:

  • Is the value dynamic per environment?
  • Which stakeholders are responsible for providing the value?
  • How often is the value changed?
  • Is the value a secret?  
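The answers map fairly mechanically to a source for each value. A sketch of that mapping, simplified for illustration (the real decision involved more questions and more stakeholders):

```python
def config_source(dynamic_per_env, is_secret, dev_tunable):
    """Map a configurable value to where it should live, mirroring the
    questions above. A simplified sketch, not RightScale's exact rules."""
    if is_secret:
        return "runtime credential"   # injected when the container starts
    if dynamic_per_env:
        return "key/value store"      # differs per environment or shard
    if dev_tunable:
        return "dockerfile"           # fixed for ops, tweakable by devs
    return "image default"            # baked in at build time
```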

Using the answers to these questions, we mapped each value to where it would come from. Then came the "horse trading," as we referred to it, convincing each other that our mapping made sense and agreeing to standards and naming conventions. We focused on standardizing inputs as much as we could so that we could split configuration between those deemed "common" and those that were application specific. Common inputs could be re-used across the platform to minimize duplicates and complexity. Our naming convention was driven by the tool we chose, the hierarchy for populating that tool, and the mechanism each container used for querying that tool for its common and application-specific inputs.
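The common/application-specific split can be sketched as a two-level key hierarchy in which service-specific keys override common ones. The `common/` and `<service>/` prefixes here are an illustrative convention, not our exact naming scheme:

```python
def resolve_inputs(kv, service):
    """Merge common inputs with service-specific overrides.

    kv is a flat key/value mapping, as a key/value store might expose it.
    """
    common = {k.split("/", 1)[1]: v for k, v in kv.items()
              if k.startswith("common/")}
    specific = {k.split("/", 1)[1]: v for k, v in kv.items()
                if k.startswith(service + "/")}
    return {**common, **specific}  # service-specific values win
```

Keeping the common keys in one place is what lets them be reused across the platform instead of duplicated per service.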

We settled on using Consul (from HashiCorp) as our centralized key/value store for configuration values that were needed in our integration/staging/production environments and that were specific to defined production shards and clusters. Values that our operations team would never change, but that developers found valuable to tweak, went into the Dockerfile. Secrets were provided at runtime via credential inputs on RightScale ServerTemplates™.

We also generated .env files with sane defaults for dev/test workflows. We opted to back Consul with a Git repository to give us historical data and an audit trail of changes to the values. On top of that Git repository, we built CI/CD tooling to vet changes to inputs by validating the structure of the configuration document and sanity-checking the values, to avoid errors at deploy time. Phew.
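That vetting step can be as simple as a script the CI pipeline runs against each proposed change. A sketch under assumed conventions (the key names and rules are illustrative, not our actual tooling):

```python
import json

REQUIRED_KEYS = {"MONGO_HOSTS", "LOG_LEVEL", "WORKERS"}  # illustrative

def validate_config(doc_text):
    """Sanity-check a configuration document before it reaches the KV store.

    Returns a list of error strings; an empty list means the change passes.
    """
    doc = json.loads(doc_text)  # structural check: the document must parse
    errors = ["missing key: %s" % k for k in sorted(REQUIRED_KEYS - doc.keys())]
    if "WORKERS" in doc and not str(doc["WORKERS"]).isdigit():
        errors.append("WORKERS must be a non-negative integer")
    return errors
```

Catching a malformed document or a nonsensical value at review time is far cheaper than discovering it when a container fails to start.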

In the end, designing a structured process to organize, store, and pass inputs will help us meet our goal of reusing Docker images across our set of use cases.

To learn more about how we are using Docker, watch our on-demand webinar, Overcoming 5 Common Docker Challenges: How We Did It at RightScale, which covers:

  • Tips for securely producing high-quality, easy-to-manage Docker images
  • Options for multi-host networking between laptops and the cloud
  • Orchestrating Docker in production: service discovery, configuration, and auditability
  • Increasing host density with Docker
  • Dynamic monitoring and alerting for Docker containers

Watch the on-demand webinar.

This article was originally published on FierceDevOps.


Read More Articles on Migrating to Docker:
Migrating to Docker: Getting the Flywheel Turning
Migrating to Docker: Barely Controlled Chaos
Migrating to Docker: Why We’re Going All in at RightScale
Migrating to Docker: The Second (Harder) Half
Migrating to Docker: The Final Results