DevOps: Infrastructure as Code – What about code quality and management?

For many years now, it has been important to script the configuration and deployment of systems, particularly virtual machines. Infrastructure as Code is now a common requirement, but as the required configuration has become more complex, scripting it in languages like PowerShell has become more difficult.

It’s all very well to write code to add, say, a network adapter, but how do you check the current state of the machine first? (There’s a sketch of these checks after the list below.)

  • Did that adapter already exist?
  • Is something else using the IP address?
  • How do you write a script that caters for all of these situations?
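
To make that concrete, here’s a minimal imperative sketch in PowerShell of the kind of state checking involved. The adapter name, IP address, and prefix length are placeholder assumptions for illustration, not values from any real environment.

```powershell
# Illustrative only: imperative state checks before configuring a network adapter.
$adapterName = 'Ethernet 2'   # placeholder adapter name
$desiredIp   = '10.0.0.50'    # placeholder IP address

# Did that adapter already exist?
$adapter = Get-NetAdapter -Name $adapterName -ErrorAction SilentlyContinue
if (-not $adapter) {
    Write-Warning "Adapter '$adapterName' not found - nothing to configure."
    return
}

# Is something else already using the IP address?
if (Test-Connection -ComputerName $desiredIp -Count 1 -Quiet) {
    Write-Warning "$desiredIp already responds on the network - possible conflict."
    return
}

# Is the address already assigned to this adapter?
$existing = Get-NetIPAddress -InterfaceAlias $adapterName -ErrorAction SilentlyContinue |
            Where-Object IPAddress -eq $desiredIp
if (-not $existing) {
    New-NetIPAddress -InterfaceAlias $adapterName -IPAddress $desiredIp -PrefixLength 24
}
```

Even this small example only covers a few of the possible situations, which is exactly how imperative infrastructure scripts grow so complex.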

This leads to ever-more complex code, and this is where my concerns start. Writing code to create infrastructure needs the same discipline that writing any other code does: code quality, coding conventions, error handling, source code control, and versioning. Yet who is writing this code?


What I commonly see is that someone who’s come from a network admin background or similar, and who was able to hack together a few batch jobs successfully, is suddenly creating complex scripts for infrastructure creation. It often reminds me of the cabling infrastructure in third-world countries that sort of gets the job done, most of the time.

And that’s more than a bit scary for organizations that depend upon that code.


Regularly I see PowerShell scripts that wouldn’t pass newbie developer 101 tests. They are potentially highly unreliable. I think that organizations need to become much more involved in the management of this type of code, and start to exert appropriate controls on how it’s developed.

Organizations that depend on code written the way it often is today are likely to find things going very wrong, just when they can least afford it.


One shining light in this area is the appearance of better and better configuration management tools. Rather than telling the operating system how to configure something step by step (i.e. imperatively), you create a template of how you’d like the machine to be configured (i.e. declaratively) and tell the system to “make itself look like that”.

For PowerShell, that’s Desired State Configuration (DSC).

DSC takes a declarative configuration (authored in PowerShell and compiled to a MOF document) that describes the outcome, not the process used to achieve it. And a bonus is that because it’s outcome-driven, it’s also idempotent, i.e. you can run it again and again and still get the same outcome.
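
To show what that looks like, here’s a minimal sketch of a DSC configuration. The configuration name, node, feature, and file path are illustrative assumptions; the point is that it declares the desired end state rather than the steps to reach it.

```powershell
# Illustrative only: a declarative DSC configuration describing an outcome.
Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        # Ensure IIS is present - how to install it is DSC's problem, not ours.
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }

        # Ensure a simple status page exists with the expected content.
        File StatusPage {
            DestinationPath = 'C:\inetpub\wwwroot\status.htm'
            Contents        = 'OK'
            Ensure          = 'Present'
        }
    }
}

# Compiling the configuration produces a MOF document; applying it is idempotent,
# so running it repeatedly converges to (or simply confirms) the same state.
WebServerBaseline -OutputPath 'C:\DSC\WebServerBaseline'
Start-DscConfiguration -Path 'C:\DSC\WebServerBaseline' -Wait -Verbose
```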


If you haven’t explored DSC, I’d encourage you to do so. Either way, it’s time to start to take control over the management of your infrastructure-related code.

If you'd like to learn more about Infrastructure as Code in general, Microsoft has published a course on it as part of the Microsoft Professional Program for DevOps. You can take the course for free (more details here: https://www.edx.org/course/infrastructure-code-microsoft-devops200-2x-0), or pay to take it if you want to pursue the certification.

DevOps: Load Tests Need to be Part of Your Regular Deployments

One of the principles of DevOps is that you should be able to deploy regularly, with smaller low-risk changes. This addresses a key problem that I see in many sites:

  • Each code release is large
  • The code is difficult to test
  • Code merges are painful
  • Deployments take a long time
  • Deployments often fail
  • Deployments are considered to be high-risk procedures

The end result of this situation is that deployments are avoided and then a snowball effect occurs where:

  • The code is even larger
  • Code merges are even more painful
  • The deployments take even longer
  • The deployments are even more likely to fail
  • Deployments are considered to be even higher-risk procedures

It’s critical to work out how to break this cycle of pain. Smaller changes, deployed more regularly, are usually the desired outcome.

But while regular deployments are helpful, so many teams leave performance testing (and specifically load testing) until far too late.

One of the challenges of my work is that my first involvement with many of these sites is when things are already in a tragic state and the regrets and blaming have started.


The worst I’ve ever seen was a start-up team that was building a new software as a service app. They needed 1000 concurrent users to have a viable business. 5000 concurrent users would be a wonderful business. But after building it for 4 years, and with less than 2 weeks before they needed to show it to the shareholders, they couldn’t get it past 9 concurrent users.

Another team was building an appointment application for a government department. They had built the entire logic on layers and layers of untestable triggers, and some triggers performed a very large number of actions. They could never get the application out of User Acceptance Testing (UAT). What was disappointing was that at that stage (long after it should already have been delivered to the customer), all they were hoping for was to find the minimal enhancement that would just get them over the line for delivery.

That’s really sad for the customer involved. If they don’t accept the app, they have a business issue. If they do accept it, they’ve bought a lemon.


Ongoing, they’re going to hate that app.

Load testing needs to be part of the routine development cycle of any app, and not just left to the end. Even if you haven’t made large changes to the app that you’re building, no doubt you have dependencies on other code or services, and any of those dependencies could have regressed.
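
As a sketch of how even a lightweight load check could run on every deployment, here’s an illustrative PowerShell example. The URL, concurrency level, and response-time budget are assumptions; a real pipeline would more likely use a dedicated load-testing tool, but the principle is the same: fail the deployment when performance regresses.

```powershell
# Illustrative only: a smoke-level load check that could gate a deployment.
$url             = 'https://example.com/api/health'   # placeholder endpoint
$concurrentUsers = 20                                  # placeholder concurrency
$maxAverageMs    = 500                                 # placeholder response-time budget

# Fire the requests in parallel background jobs and capture each response time.
$jobs = 1..$concurrentUsers | ForEach-Object {
    Start-Job -ArgumentList $url -ScriptBlock {
        param($target)
        (Measure-Command {
            Invoke-WebRequest -Uri $target -UseBasicParsing | Out-Null
        }).TotalMilliseconds
    }
}

$timings = $jobs | Wait-Job | Receive-Job
$jobs | Remove-Job

$average = ($timings | Measure-Object -Average).Average
if ($average -gt $maxAverageMs) {
    throw "Average response $([math]::Round($average)) ms exceeds the $maxAverageMs ms budget."
}
"Average response: $([math]::Round($average)) ms across $concurrentUsers concurrent requests."
```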

 

DevOps: Microsoft Professional Program for DevOps

In the second half of 2016, I enrolled in the Microsoft Professional Program for Data Science, and completed it in early 2017. I have to say that I really enjoyed it overall. It was a bit challenging at times but I don't regret doing it.

If you want to get the certification, you need to enroll in the verified option for each course. Nowadays, that's pretty much $99 USD per course. You can take the courses themselves for free, and if you're tight on funds, perhaps that's what you should do. I like to support the concept, and I like to support both Microsoft and edX for creating these options. They are doing amazing work, so while I hear people say to just do the courses and not contribute to them, I can't say that I agree.

edX and their partners offer an incredible range of world-class courses that you can take for free, but if you want them to continue, you should consider contributing. And that applies to the non-Microsoft ones too.

I think that programs like these are more likely to be the real future for Microsoft certification in general.

Earlier this year, Microsoft created a Professional Program for DevOps. I've had an interest in DevOps for a long time, and I got the opportunity to help create one of the courses, DevOps for Databases, with the inimitable Steve Jones from Redgate Software. Databases are a particularly challenging area for DevOps.

A few months back I decided to start pursuing this professional program as well. I've got one course to go (the container one) before the final capstone project. I can finish that container course in the next three months, but unfortunately the capstone project won't be available until April.


Over the last few weeks, I've been involved in enhancing the existing Monitoring and Testing courses, and am looking forward to seeing how people find the updated versions.

To support my continuing interest in DevOps, you'll see more DevOps-related posts from me in the upcoming weeks.