
Why do you do what you do? What unexamined assumptions guide your choices and actions? 

There was a little boy watching his father make dinner. His dad cut a chunk off the turkey before putting it into the oven, so the curious child asked why.

“I don’t know,” said the father. “Your grandma always made it like that when I was a boy.”

So the next time they saw her, the man asked, “Mom, why did you always cut that piece of the turkey off before putting it in the oven?”

“I don’t know,” she replied. “That’s the way my mom always made it.”

So the next time the boy saw his great-grandmother, he asked her, “Why did you cut the piece of turkey off before putting it in the oven?”

“Because otherwise it wouldn’t fit in the oven!”

Do you have a favorite recipe, maybe one handed down for a few generations? Have you ever seen the original? If it’s anything like the recipes I’ve seen, it’s written on an index card, and usually has at least half a dozen modifications scratched in here and there. 

Don’t get me started on measurements like “a pinch” or “one can.” Whose pinch? How big of a can? 

Even if you follow this updated recipe to the letter, are you still sure you’ve made it just like the original? What changes never made it onto the recipe card? What updates were kept only in the cook’s memory, a part of them rather than a list written on a card?

Just because you have the recipe card for Grandma’s banana bread doesn’t mean you can actually make Grandma’s banana bread.

And yet we try anyway. 

Oh wait, did I say Grandma’s banana bread? I meant your production infrastructure.

How long does it take to get your code tested through all environments and into production? What about infrastructure changes? How confident are you that your dev/test/qa environments are even anything like production? 

Have all your production hotfixes been migrated back down to the lower environments, so that your next production deploy doesn’t just reintroduce the bug that kept your support team working overnight three nights in a row?

My guess is that if you’re reading this, if you’ve ever been through one of those production bug overnighters, you don’t need to be convinced that you need a solution. But how do you pick the right one, and more importantly how do you convince your business that DevOps is a worthwhile investment?

The first thing to keep in mind is the importance of communication. Separation of Concerns is important for keeping your sanity and staying effective, but when that SoC hardens into a silo, it endangers your ability to transform and keep pace. A recent IDC report indicated that 70% of siloed digital transformation initiatives will fail.

So how do we communicate effectively? 

One of the chief ways developers have always communicated is through their code. All the requirements and design docs in the world won’t help you understand poorly written, poorly documented code. So the industry developed standards and practices to make code more meaningful and more reusable, both through “self-documenting” code and good inline commenting practices.

The same is both necessary and possible when it comes to your infrastructure. We are used to documenting “flow”: describing expected outputs, required inputs, et cetera. And since infrastructure is often provisioned by scripts that have a certain “flow” of their own, we document our infrastructure like we document our code.

But what is the “output” of our script? How does a static resource “flow” once it has been provisioned?

Here we see the value and power of a declarative architecture for implementing infrastructure. The methods for provisioning a component are well known, and rebuilding these each time we need to provision infrastructure is akin to reinventing the wheel each time we get a new car. Instead of defining the process each time, define the end state, and use a toolset that can configure that end state reliably. 

Docker can be a great entry point for seeing what a declarative architecture looks like. For more examples, check out https://www.docker.com/get-started/. Docker containers are not true VMs; they are isolated processes that share the host’s kernel, abstracting away the overhead of an entire operating system running in virtualization. Unlike a traditional VM, which takes manual configuration to create an “image,” a Docker image is built from a simple file specifying a handful of directives, such as the base image, basic configuration, and the application code to be included in the image.

Whereas a VM must be passed around as an image that may take up 6-10 GB on your hard drive, a Docker image is much smaller: usually under 1 GB, sometimes under 100 MB. Better yet, you often don’t need to pass the image around at all. Images are by their nature opaque; instead, you can send over the “Dockerfile,” usually 10-20 lines of declarations defining what the image should be, and the Docker engine uses it to generate the necessary image.
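To make that concrete, here is a minimal sketch of what such a Dockerfile might look like for a hypothetical Python web service. The base image tag, file names, and port are illustrative assumptions, not taken from any particular project:

```dockerfile
# Declare the starting point: a pinned, versioned base image (assumed here).
FROM python:3.12-slim

# Declare the working directory inside the image.
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the application code (hypothetical file name).
COPY app.py .

# Document the port the service listens on.
EXPOSE 8000

# Declare what "running this image" means.
CMD ["python", "app.py"]
```

Every line declares a piece of the end state rather than a step in a manual process, so the file doubles as documentation of exactly what this piece of infrastructure is.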

Not only is this a more lightweight method of transmitting a virtualized runtime, it also contains within it documentation of precisely what this piece of infrastructure is, which is the piece so often lacking from modern systems.

As the use of automation increases, declarative tools such as Docker and Kubernetes will be an essential ingredient in your company’s ongoing recipe for success. By seeing more clearly why certain decisions are being made, when they were introduced to your infrastructure, and by whom, you will be better able to deliver a consistent infrastructure. And you won’t be cutting off that piece of turkey before putting it in the oven just because that’s how you’ve always done it. 
