Containers on Windows? You’ll have to upgrade to 10

In my last post, I shared my opinion on how I could see containerized development environments benefiting resource-deprived developers for whom VMs may not be a viable option.

Investigating my muse further with regard to Windows environments, I unfortunately hit a fairly large roadblock. Currently the only way you can run a containerized environment on Windows is to either upgrade to Windows 10 or set it up on Windows Server 2016.
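If you are on Windows 10 (Anniversary Update or later) or Windows Server 2016 with the Containers feature and Docker installed, a quick sanity check looks something like the minimal sketch below. I'm assuming Docker has been switched to Windows-container mode, and the base image name shown is the one Microsoft published around 2016 (newer images live in a different registry):

    # Confirm the daemon is in Windows-container mode:
    # the "Server" section should report OS/Arch as windows/amd64.
    docker version

    # Minimal smoke test against Microsoft's Nano Server base image
    # (2016-era image name, shown here purely for illustration).
    docker run --rm microsoft/nanoserver cmd /c echo Hello from a Windows container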

As Windows 7 is now off the active development list and receiving bug and security fixes only, there is very little hope that Windows containers will ever be seen there. Containerization is still a relatively new idea in terms of managing infrastructure and application deployments, so it's too early to say what effect this will have on its long-term strategy and uptake. According to Stack Overflow's 2016 developer survey, Windows 7 has lost a lot of ground over the last couple of years, while Mac OS has leapt ahead, now accounting for 26.2% of the market share and stretching ahead of Linux for the second consecutive year with a not-too-shabby 21.7%.

Compared to the 2015 figures, the survey suggests that many of the new Windows 10 adopters came from Windows 8 (over half of its users) and Windows 7 (around a third), so as a development OS Windows 7 is certainly in decline; however, some of those systems have also been lost to the growing share of Mac OS and Linux. Could this be the year that Windows as a development operating system dips below that magic 50% mark?

For now it looks like containers are just out of reach for many Windows development environments unless they make the push to upgrade to 10. I think it's safe to say they remain more of a Linux-based infrastructure tool for the time being. Will Windows 10 kick off or kill Windows containers? Only time will tell.

Containable Development Environments

While (in my opinion) the jury is still out on whether 2016 was indeed the year of DevOps as promised by Gartner, it certainly was a great year for innovation, with many tools gaining much-needed exposure.

It is one of those tools that I will focus on in this post: containers. While a lot of the hype has been around how awesome containers can be at an enterprise level, I'm going to examine them from the angle of how they could potentially become the number one tool for development environments. Here's why I think that.

Traditionally, developers used to create their environments directly on their own local workstations. Along with all those great “Well, it works on my machine” excuses, to borrow from a great writer, “this has made a lot of people very angry and been widely regarded as a bad move” (Douglas Adams, The Hitchhiker's Guide to the Galaxy).

When everyone was manually installing tools in any old ad-hoc way, it was to be expected that things wouldn't always work as intended in production environments (which would also have had their own manual configuration at some point). Great days for coffee and headache tablet manufacturers.

Over recent years organisations have been moving steadily towards virtualizing their development environments, or at least automating the installation onto local machines, so that they have some kind of level playing field.

For the time being, I'm going to put aside the localised environment installed directly onto the development workstation and focus on VM usage for a while.

One of the neat features touted for containers is that they are more efficient than VMs due to the way they function. This is true. When a virtual machine is running, the host has to emulate all the hardware such as devices, BIOS and network adapters, and even the CPU and memory in cases where passthrough is not an option (such as non-x86 architectures).

Containers function by running directly on the hardware of the host, using the host's OS but segregating the application layer inside those handy shippable units. It does mean that you are limited to a certain extent. A hypervisor can run any operating system regardless of what it is; Windows VMs and Linux VMs can cohabit on the same host as happily as Martini with ice. But you can't run, say, an MS Exchange server in a container on a CentOS Docker host, or a full Nginx Linux stack on the Windows variant. For large, all-Wintel enterprise environments this won't be an issue, as they'd only need Windows container hosts, but smaller mixed infrastructures would need to run two container platforms, doubling the required support for two very different stacks. This is where containers fall short of the mark as an enterprise-level tool for production environments.
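A quick way to see this sharing of the host OS in action (a minimal sketch, assuming a Linux host with Docker installed) is to ask a container which kernel it is running; unlike a VM, it reports the host's kernel rather than a guest's:

    # The kernel version printed from inside the container...
    docker run --rm alpine uname -r

    # ...matches the one reported on the host itself, because they share it.
    uname -r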

That being said, my focus isn't to knock containers, but to praise them for the benefit they could potentially bring to the actual creation of software!

Let's go back to the developer who has stopped installing quasi-production environments directly onto his workstation and has now adopted VM-based development. Depending on the spec of his machine, he could be fine or in for a very hard time. As already mentioned, VMs are emulated, which means they consume more processing power, memory and disk space on the host than is actually made available to the guest. They hog resources. For enterprise solutions such as vCenter or QEMU, the overhead is not really an issue: many tests have shown that these solutions lose only fractions of a percent in overhead compared with running the same operating systems on bare metal, and enterprise storage is cheap as chips.

Workstation virtualisation is a different story. Whereas the enterprise hypervisors will only be running that virtualization process, workstations will also be running email clients, web browsers, IDEs and editors such as Visual Studio, MonoDevelop, PhpStorm or Sublime to name a few, plus many other processes and applications in the background. The VMs share the available resources with all of those, so you will never get anywhere close to bare-metal performance. You will find VMs locking up from time to time or being slow to respond (especially if a virus scan is running). While these are small niggles and don't occur regularly, they can be frustrating when you're up against a deadline and Sophos decides now is a great time to bring that VM to a grinding halt.

By moving to containers, you can eliminate a lot of that aggravation, simply by not having all those resources sucked up running another operating system within the operating system. Instead, the container lets you run the application stack directly on the host. I'm not promising it will cure all your problems when the computer is brought to its knees during those infernal virus scans, but if the workstation in question is limited in resources, containers can give the developer the environment they need without hogging disk, memory or CPU.
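As a rough sketch of what that looks like in practice (assuming Docker on the workstation and, purely for illustration, a Node.js project), the developer can mount the source tree into a throwaway container and run the toolchain there, with no guest OS eating into disk, memory or CPU:

    # Run the project's tests inside a disposable container.
    # The source directory is bind-mounted in, so nothing extra is installed
    # on the workstation and the container is removed when it exits.
    docker run --rm -it -v "$(pwd)":/app -w /app node:6 sh -c "npm install && npm test"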

And finally, what I feel is probably the best bit. Provided that the VM the developer was using was provisioned correctly with a CM tool such as Ansible, Puppet or Chef, there is no reason why the same CM script couldn't be used to provision the container as well, so moving from VM-based development to container-based is not as hard as you might think. CM is the magic glue that holds it all together and allows us to create environments in Vagrant, vCenter, Azure or on physical boxes, regardless of what or where they are. Including containers.
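As a sketch of that reuse (the playbook and inventory names here are hypothetical, and I'm assuming Ansible as the CM tool), the very same playbook can be pointed at a Vagrant VM today and baked into a container image tomorrow:

    # Provision the existing development VM (for example one brought up by Vagrant).
    ansible-playbook -i vagrant_inventory dev-env.yml

    # Reuse the identical playbook when building a container image by running it
    # locally inside the build, e.g. with a Dockerfile step such as:
    #   RUN ansible-playbook -c local -i "localhost," dev-env.yml
    docker build -t dev-env .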

In summary, I don't see containers being the enterprise production cure-all some are purporting them to be; the gains are too few and the costs too high. But for development environments, I see a very bright future.

Year in review – DevOps in Danger

Let me tell you a story.  There was a man.  He had a great idea.  He told his friends who listened and agreed it was a great idea.  He began to tell other people his idea who also thought it was a great idea.  One day he held a conference to tell the world, and he gave it a jazzy title.  The people came and the man tried to tell the people his idea, but the people had stopped listening.  All they were talking about was the jazzy title and what a great title it was.  The idea became lost and all the people remembered was the title.

Welcome to the state of DevOps at the end of 2016.

I was asked an interesting question recently: what was my view on where DevOps is heading? Was it progressing, or would it stagnate? My answer was neither. It's a bubble that's ready to burst. The issue is that the original intention, and the problem DevOps was seeking to solve, is being forgotten, and the focus has shifted to new tools that want to jump on the bandwagon, hoping to claim they are the DevOps tool of the future.

What was the problem originally?  Software development!

There was a time when Agile was not as commonplace as it is today. Most people used Waterfall to manage their projects. Development cycles were defined as long periods of development time to add all the features, which were then passed to the test cycle, then the UAT cycle, then the NFT cycle, then pre-prod and finally, after months of cascading down the rock face of QA, to production. This cycle could take months and meant companies could produce software updates at most bi-annually, and only if they worked hard and didn't hit any issues along the way.

As the world changed, customers began to expect new features yesterday, market demand began to be driven by trends, and new startups popped up threatening larger companies' market share. It became clear that something had to be done to help developers produce features in smaller chunks, so Agile began to be accepted. But that wasn't the only issue.

Many organisations faced issues releasing those updates, so new methods of managing software releases had to be analysed and defined. Sometimes the issues weren't so much technological as down to the culture of the organisation, so finding ways to get Dev teams and Ops teams working together as equals became the key to making it all work. And thus DevOps was brought to the world.

So how has the focus of DevOps shifted from developing software to managing infrastructure?

There are many factors, but probably the main one is the explosion of the internet and connected devices. At its inception, the internet was not what it is today, and many of the tools created to support DevOps were geared towards desktop application development and server environments. But then new devices came about. Smartphones and other mobile devices became commonplace, the internet became something you could carry around in your pocket, and web development began to take a larger share of application development.

As web traffic grew, it became clear that single dedicated physical servers could not cope on their own, so load-balanced clusters of web, application and database servers began appearing. But these took up physical space and cost a lot of money to run, maintain, upgrade or replace, so those physical machines began to be migrated to hypervisors: larger servers capable of hosting many virtual machines at the same time. Organisations began to cluster their hypervisors and ended up back at square one, where they were with the dedicated servers, so cloud companies formed, offering warehouses of clustered hypervisors for customers to host their environments on without having to worry about upgrades. Want more RAM? No problem. CPU cores? Here you go.

Then some bright people looked at this and thought: hey, why are we wasting so much overhead on the host emulating hardware and devices for virtual machines? Can't we just have a single mega computer and run processes and applications directly on the host in simple, shippable containers? Thus tools like Docker came about.

While these are all great tools for Ops and managing infrastructure, they don't help a great deal with developing and delivering software, which is what DevOps should be about. How do containers aid continuous integration and continuous testing processes? How does hosting in the cloud help to ensure the integrity of the code base and that customer requirements are being met? Configuration management may help to streamline deployment, but how does it help to ensure that the code being produced is always of high quality and as bug-free as possible?

These are the questions that DevOps should be seeking to answer.

As a DevOps specialist, whether the client hosts on physical boxes, cloud VMs or containers should be irrelevant. These are questions for the client to answer depending on their business needs. They are entirely in the realm of Ops, and I certainly don't get overly excited when a new container system or method of hosting VMs comes about. What we need to focus on in our role as DevOps specialists is ensuring that the organisation is creating software that is driven by market demand, meets customer requirements and can be shipped quickly to the customer once QA deems it fit for purpose.

We're not super Ops people or sysadmins. Infrastructure is a concern only when it becomes an obstacle to delivering software; it should never be our main focus.

When we stop focusing on the Dev, we become Ops, and this is why I see the bubble bursting very soon, when the world wakes up and asks the question, “What happened to the Dev?”