Bad development drives bad infrastructure decisions

The relationship between Development and IT Operations is a mutually dependent one: a decision made on one side can affect the other, and not always for the better.

With so much of the focus in the DevOps world seemingly on the Operations and infrastructure side, I would like to take a moment to discuss some of the fundamental issues that can affect how your DevOps process is rolled out if attention isn't also paid to Development.

Moving an organisation away from Waterfall to Agile is no mean feat.  It entails a dramatic shift in mindsets and organisational culture.  The adoption of new ways of working can overwhelm many people if they are thrown straight into the deep end.  Fortunately, the saving grace of DevOps is not only that it helps them move towards incremental changes to their products, but that the transition can itself be made in small, managed, incremental changes.  Lead by example.

Yet even managing this transition of your infrastructure in bite-sized, digestible chunks can be thrown off track if fundamental issues in Development aren't addressed first.

Development first

You have your new greenfield project and the go-ahead to “do the DevOps thing”.  It's time to plan your infrastructure, right?

Not so fast, and here’s why.

While your Development team may already be functioning in an Agile fashion, holding scrums, code reviews and automated testing within their own controlled environments, that does not always mean that what Development produces is in a state that can be readily absorbed by your engineering teams and deployed to live.  This can be the root cause of many poor infrastructure design decisions.

The usual Waterfall environment map may have anything from five to seven environments to validate the build, stability and acceptability of release candidates.  In that world you have a lot of quality gates to catch poor releases, and as release cycles can stretch into months there is always time to capture defects and get them fixed before they hit production.

Under Agile our goal is to reduce that lead time dramatically and shorten the development cycle, producing features on demand as and when they are required by the customer.  All good in theory, however it also means removing a lot of the quality gates which until this point have been stopping poor release candidates getting through the door.  In this scenario the natural response for the architect can be to add extra quality-gate environments to the map to catch those instances, but that is not the ideal response when trying to create lean, Agile infrastructure.  Instead of saving money, you will end up spending more on those additional environments to try and catch bad release candidates.  In effect, Infrastructure is still having to over-engineer to protect itself from bad development practices.

If this sounds familiar then your work with Development is not quite over.  In my experience the following key areas are worth sanity checking to help drive better release candidates.

Automated testing

There is no question as to the importance of testing your code.  BDD and TDD will help developers create more efficient, lean code, but are they testing correctly?

There is a world of difference between Unit and Functional testing, and they should be used in tandem.  While Functional testing can give you a good idea of how the code is behaving as a whole, without Unit tests it becomes harder to validate the stability of each individual component, and you could be missing important information to help diagnose bugs and defects.  For example, a method could be changed in a controller which causes methods in a model to behave differently.  Without Unit tests protecting the individual functionality of each method, it is harder to diagnose where the fault is if the developer who implemented the change is unavailable.
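
To illustrate the distinction, here is a minimal sketch in Pester (PowerShell's testing framework); the Get-OrderTotal function and its discount rule are entirely hypothetical, purely for illustration.

# Hypothetical function under test
function Get-OrderTotal {
    param([decimal]$Subtotal)
    if ($Subtotal -gt 100) { return $Subtotal * 0.9 }
    return $Subtotal
}

# Unit tests exercise this one method in isolation, so a regression here
# points straight at the function rather than at the feature as a whole
Describe "Get-OrderTotal" {
    It "applies a 10 percent discount to orders over 100" {
        Get-OrderTotal -Subtotal 200 | Should -Be 180
    }
    It "leaves smaller orders untouched" {
        Get-OrderTotal -Subtotal 50 | Should -Be 50
    }
}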

Code review

I really can't stress enough how important the role of the code review is to the release process.  While automated testing can catch issues with the functionality of code at a high level, it is difficult for automated tools to definitively assess whether the written code is of a high enough standard to pass through to production.  A more senior developer should always review code to ensure that what has been written meets coding standards, does not contain hard-coded passwords and is of sufficient quality to merge into the next Release Candidate.

Post-release feedback

Let's say that a release candidate has gone through the testing phases without incident but is then deployed into Production and something goes wrong.  A fix is found and the release is finalised.  Is that information being passed back to Development?

This sounds like the most basic of concepts, yet in too many organisations I've worked with this fundamental and crucial stage in the release process is missing.  It's an area that underlines the importance of a proper implementation of DevOps methodologies.  In the case we looked at earlier, where Infrastructure is over-engineering environments, it could simply be that the Developers have never been told what went wrong and therefore continue to make the same mistakes.  Putting that feedback loop in place allows the Developers to raise these issues in code reviews and ensure the defects encountered can be avoided in future.

The onus is now back on the Development team to fix their process, but in a manner compatible with shared ownership, as the Ops/Infrastructure team are actively engaged with Development to resolve release issues at root cause.  That dialogue then allows Architects to simplify the environment map to the Minimum Viable Product, so that extra environments aren't required to try and catch development mistakes.

Summary

Effective communication is vital at all stages of the release process, and attempting to fix fundamental development issues by over-complicating infrastructure is most definitely not the way to go.  DevOps should always be used as an aligning tool to bring both sides of the product cycle into balance and ensure successful growth and stability.

Agile Waterfalls

Many businesses today still use Waterfall for managing software development projects.  While Agile has been around for a long time, Waterfall precedes it by a large margin and is still the mainstay for larger organisations.  There are many reasons for this, such as the fact that it fits in nicely with ITIL or PRINCE2 project strategies, but more likely it was the strategy employed with legacy products which, due to project constraints, customer requirements or technical debt, can't easily be migrated to a fully Agile approach.

Having said that, all is not lost: aspects of Agile software development can still be brought into the Waterfall structure to build the foundations for eventually moving towards a fully Agile solution.

The usual Route To Live for projects controlled via Waterfall consists of multiple environments, typically labelled Dev, Test, UAT, PreProd, Production and Training.  The general flow consists of sprint cycles on Dev, with the changes then promoted to Test where defects are identified and fixed, then to UAT after the test cycle, and so on until all the Pre-Production criteria are met and the business gives the go-ahead to deploy to Production.  In some cases Training may be deployed to before Production in order to bring staff up to speed on the new features about to hit production.

When viewing the whole software delivery cycle from this high vantage point it is difficult to see how Agile can make any beneficial change without breaking the delivery process wide open.  If your purpose is to completely replace Waterfall then yes, this will inevitably happen.  The good news, however, is that Agile can be what you make of it.  Like ITIL, it is not necessarily a full list of dos and don'ts, but more of a framework on which you can build your business logic and processes.  With this in mind it is possible to be more Agile within a Waterfall delivery system.

Redefining customer

From the business point of view, the customer is the end user or client who will be using your delivered product or service.  When working with Waterfall this external entity is generally the focus and the process is to ensure that the customer gets what they asked for or are expecting.

To begin implementing Agile it helps to fully understand and redefine what we’re talking about as the customer.

Taking that definition further, we can define a customer as anyone who is expecting something from the work being undertaken.  In the case of the development team, they are producing features and implementing changes according to the functional specifications provided; from their point of view, the customer is the Testing team.  Testing feeds into QA, and QA feeds into Operations.  By defining these relationships we can begin to see how Agile can aid the delivery process by narrowing down who the “customer” is at each stage.

Agile in Development

Now that the focus of the customer is on the needs of the testing team, it is possible to start breaking the requirements of Agile development down into deliverable, achievable goals.  This can take the form of redefining how sprint cycles are managed, Continuous Integration and build servers, automated unit testing and, finally, how changes are promoted to downstream environments.  Rather than trying to create a solution that encompasses the whole Waterfall structure, we are creating smaller solutions per environment where we can bring about tangible, measurable changes that won't affect the overarching business logic and project plan.  To some this may seem a bit of a cop-out; however, when considering what it is we're trying to achieve with Agile, it soon becomes apparent that this is a very Agile way of working within a predefined project structure.

Source control branching strategies

Probably the biggest issue that can be encountered with Waterfall and continuous integration strategies is how branches are used as part of environment promotion.  I lean strongly towards the Git-Flow branching methodology, where there are two main permanent branches, usually labelled master/production and development.  Development is where all the mainline evolving features are implemented, and master/production should reflect the current state of the live production system.

I have seen situations where companies have adopted Git as their source control and then proceeded to create a branch per environment.  While this may seem a simple and logical step to take, it is in my opinion one of the hardest branching strategies to maintain.

Consider the flow of changes through that type of implementation.  Development finalise a release cycle and merge it into Test.  During the test cycle defects may be uncovered and fixed, and those changes are merged into the UAT branch and hopefully back to Dev.  In UAT more defects are uncovered and fixed before being passed to PreProd, but at this stage those UAT changes have to go back not only to Test but also to Dev.  The further down the waterfall we go, the more merges are required to back-port fixes to previous branches, and it can quickly escalate out of control when multiple releases are going down the Waterfall behind each other.  In the worst case a defect may be fixed in Dev which was also fixed in an earlier feature cycle in a downstream environment, and then we end up with merge conflicts.  The whole process can get very messy and difficult to manage.

By far the easiest means of managing feature promotion through multiple environments is through Git-Flow's use of Release branches.  As mentioned, there are only two main permanent branches in existence, but there are also a number of transient branches: Feature, Release and Hotfix branches.  A good definition of the Git-Flow strategy is easy to find online.

It is this release branch that makes the journey down the waterfall, rather than attempting to move the change itself through multiple merges to environment branches.  These release branches can be tested, defects resolved and passed to downstream environments far more efficiently than constantly merging changes across branches, and the end result is that any defects found in the branch only have to be back-merged to a single location: the Dev branch.  This won't resolve every merge conflict, such as the one mentioned before, so effective communication between the Dev and Test teams is a must, but there is no reason why changes can't be merged back to Dev at the end of each environment cycle before moving forward.
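
As a rough sketch of that flow (branch and version names are just examples), the release branch is cut from development, travels through the environments, and is only merged back to development and master:

# Cut the release branch from development
git checkout development
git checkout -b release/1.4.0

# Defects found in Test/UAT are fixed on the release branch,
# then merged back to development so nothing is lost
git checkout development
git merge --no-ff release/1.4.0

# Once accepted, the same branch is merged to master and tagged as the live state
git checkout master
git merge --no-ff release/1.4.0
git tag -a 1.4.0 -m "Release 1.4.0"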

The same can be said of hotfixing issues in Production.  The hotfix is cut from the master branch and can then be passed through the environments to be checked, tested and accepted far more easily than trying to revert each environment branch back to production's current state (which I have seen in the past) and then reapplying weeks of commits after the fix to prepare the environments for the next cycle of feature releases.
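
A hotfix follows the same pattern, again sketched here with example names: it is cut from master so it matches what is live, then merged back to both permanent branches once verified.

# Cut the hotfix from master so it reflects production
git checkout master
git checkout -b hotfix/1.4.1

# After testing, merge to master, tag, and back-merge to development
git checkout master
git merge --no-ff hotfix/1.4.1
git tag -a 1.4.1 -m "Hotfix 1.4.1"
git checkout development
git merge --no-ff hotfix/1.4.1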

Testing in Dev vs Test environment

The importance of automated Unit and Functional testing is apparent to everyone working in software development and is now a normal aspect of most development environments.  Even User Acceptance Testing can now be automated to a certain extent, allowing more of the application's features to be tested at the point of commit.

There are many Continuous Integration servers out there to pick from and so many ways of testing that the hardest part of creating an automated test platform is in picking the right tools to use for your software development requirements.

But a very serious question has to be asked here.  If it is possible to now test almost all aspects of the product with the use of CI servers and automated testing, is there a need for the Testing environment and team?

The answer comes down to how comprehensive and thorough the automated testing solution you're implementing in Dev is going to be.

Effectively, the more of the functional and non-functional testing you adopt into the Dev environment as part of the development process through automation, the lower the requirement for an extra testing team to handle those tasks in a separate environment.  In this scenario it becomes a very feasible option to deprecate the Test, UAT and Pre-Production environments and employ the Test team to write the functional and non-functional tests run by the CI server where all this testing is carried out.  The by-product of this approach is that an organisation can then merge Test and QA into a single function, and the role of the tester becomes ensuring the quality of the tests being run and verifying that the UAT function meets project requirements.

Summary

While we can't blanket-implement Agile into Waterfall development cycles, we can bring aspects of Agile to each environment in a way that the cumulative effect of the changes can be felt throughout the whole Route To Live.

It may not be possible to completely deprecate the use of multiple environments for development and testing immediately, but we can bring elements of testing into the Development environment in a staged and controlled manner, and through this process have a very real possibility of removing a large section of the Waterfall structure, effectively adopting a more recognisable Agile development environment.

With continuous automated testing carried out within the development environment and the Test function migrated to a QA role, we can reduce the number of environments used to deliver the product down to three: Dev, Production and Training.

Containers on Windows? You’ll have to upgrade to 10

In my last post, I shared my opinion on how containerized development environments could benefit resource-deprived developers for whom VMs may not be a viable option.

Investigating my musings further with regard to Windows environments, I unfortunately hit a fairly large roadblock.  Currently the only way you can run a containerized environment is to either upgrade to Windows 10 or set it up on Windows Server 2016.

As Windows 7 is now off the active development list and receiving bug and security fixes only, there is very little hope that Windows containers will ever be seen there.  Containerization is still a relatively new idea in terms of managing infrastructure and application deployments, so it's too early to say what effect this will have on its long-term strategy and uptake, but according to Stack Overflow's developer survey for 2016, while Windows 7 has lost a lot of ground over the last couple of years, Mac OS has leapt ahead, now accounting for 26.2% of the market share and stretching ahead of Linux (a not-too-shabby 21.7%) for the second consecutive year.

Compared to the 2015 figures, the survey suggests that Windows 10's new adopters include over half of the former Windows 8 users and around a third of the Windows 7 users, so as a development OS Windows 7 is certainly in decline; some of those systems have also been lost to the growing Mac OS and Linux market.  Could this be the year that Windows as a development operating system dips below that magic 50% mark?

For now it looks like containers are just out of reach for many Windows development environments unless they make the push to upgrade to 10.  I think it's safe to say they remain more of a Linux-based infrastructure tool for the time being.  Will Windows 10 kick off or kill Windows containers?  Only time will tell.

Containable Development Environments

While (in my opinion) the jury is still out on whether 2016 was indeed the year of DevOps as promised by Gartner, it certainly was a great year for innovation, with many tools gaining much-needed exposure.

It is one of those tools that I will focus on in this post: containers.  While a lot of the hype has been about how awesome containers can be at the enterprise level, I'm going to examine them from the angle of how they could potentially be the number one tool for development environments.  Here's why I think that.

Traditionally, developers created their environments directly on their own local workstations.  Along with all those great “well, it works on my machine” excuses, to borrow from a great writer, “this has made a lot of people very angry and been widely regarded as a bad move” (Douglas Adams, The Hitchhiker's Guide to the Galaxy).

When everyone was manually installing tools in any old ad-hoc way, it was to be expected that things wouldn't always work as intended in production environments (which would also have had their own manual configuration at some point).  Great days for coffee and headache tablet manufacturers.

Over recent years organisations have been moving steadily towards virtualizing their development environments, or at least automating the installation onto local machines, so that they have some kind of level playing field.

For the time being, I'm going to set aside the localised environment installed directly onto the development workstation and focus on VM usage for a while.

One of the neat features touted by containers is that they are more efficient than VMs because of the way they function.  This is true.  When a virtual machine is running, the host has to emulate all the hardware such as devices, BIOS and network adapters, and even the CPU and memory in some cases where proxying is not an option (such as non-x86 architectures).

Containers run directly on the hardware of the host, using the host's OS but segregating the application layer inside those handy shippable areas.  It does mean that you are limited to a certain extent.  A hypervisor can run any operating system regardless of what it is: Windows VMs and Linux VMs can cohabit on the same host as happily as Martini with ice, but you can't run, say, an MS Exchange server in a container on a CentOS Docker host, or a full Nginx Linux stack on the Windows variant.  For large, all-Wintel enterprise environments this won't be an issue, as they'd only need Windows container hosts, but smaller mixed infrastructures would need to run two container platforms, doubling the support required for two very different stacks, and this is where containers fall short of the mark as an enterprise-level tool for production environments.

That being said, my focus isn't to knock containers, but to praise them for the benefit they could potentially bring to the actual creation of software!

Let's go back to the Developer who has stopped installing quasi-production environments directly onto his workstation and has now adopted VM-based development.  Depending on the spec of his machine, he could be fine or in for a very hard time.  As already mentioned, VMs are emulated, which means they consume processing power, memory and disk space from the host over and above what the guest actually sees.  They hog resources.  For enterprise solutions such as vCenter or QEMU the overhead is not really an issue; many tests have shown that these enterprise solutions lose only fractions of a percent in overhead against running the same operating systems on bare metal, and enterprise storage is cheap as chips.  Workstation virtualisation solutions, however, are a different story.  Whereas the enterprise hypervisors will only be running that virtualization process, workstations will also be running email clients, web browsers, IDEs such as Visual Studio, MonoDevelop, PhpStorm or Sublime to name a few, plus many other processes and applications in the background.  The VMs share the available resources with all of those, so you will never get anywhere close to bare-metal performance.  You will find VMs locking up from time to time or being slow to respond (especially if a virus scan is running).  While these are small niggles and don't occur constantly, they can be frustrating when you're up against a deadline and Sophos decides now is a great time to bring that VM to a grinding halt.

By moving to containers, you can eliminate a lot of that aggravation simply by not having all those resources sucked up running another operating system within the operating system.  Instead, the container allows you to run the application stack directly against the hardware.  I'm not promising it will cure all your problems when the computer grinds to its knees during those infernal virus scans, but if the workstation in question is limited in resources it can help to give the developer the environment they need without hogging disk, memory or CPU.

And finally, probably what I feel is the best bit.  Providing the VM the Developer was using was provisioned correctly with a CM tool such as Ansible, Puppet or Chef, there is no reason why the same CM script couldn't be used to provision the container as well, so moving from VM-based development to container-based is not as hard as you would think.  CM is the magic glue that holds it all together and allows us to create environments in Vagrant, vCenter, Azure or on physical boxes regardless of what or where they are, including containers.
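
As a rough sketch of what that can look like in Ansible (the host names, role name and inventory layout are assumptions for illustration), the same play can target a Vagrant VM over SSH and a running container via Docker's connection plugin:

# inventory.ini
[dev_vms]
devbox ansible_host=192.168.33.10 ansible_user=vagrant

[dev_containers]
devcontainer ansible_connection=docker

# site.yml -- one play, two very different targets
- hosts: dev_vms:dev_containers
  roles:
    - dev_environment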

In summary, I don't see containers being the enterprise production cure-all some are purporting them to be; the gains are too few and the costs too high.  But for development environments, I see a very bright future.


Year in review – DevOps in Danger

Let me tell you a story.  There was a man.  He had a great idea.  He told his friends who listened and agreed it was a great idea.  He began to tell other people his idea who also thought it was a great idea.  One day he held a conference to tell the world, and he gave it a jazzy title.  The people came and the man tried to tell the people his idea, but the people had stopped listening.  All they were talking about was the jazzy title and what a great title it was.  The idea became lost and all the people remembered was the title.

Welcome to the state of DevOps at the end of 2016.

I was asked an interesting question recently: what is my view on where DevOps is heading?  Is it progressing, or will it stagnate?  My answer was neither.  It's a bubble that's ready to burst.  The issue is that the original intention and the problem DevOps was seeking to solve are being forgotten, and the focus has shifted to new tools that want to jump on the bandwagon, hoping to claim they are the DevOps tool of the future.

What was the problem originally?  Software development!

There was a time when Agile was not as commonplace as it is today.  Most people used Waterfall to manage their projects.  Development cycles were defined as long periods of development time to add all the features, which were then passed to the test cycle, then the UAT cycle, then the NFT cycle, then pre-prod and finally, after months of cascading down the rock face of QA, to production.  This cycle could take months and meant companies could produce at most biannual updates if they worked hard and didn't hit any issues on the way.

As the world changed, customers began to expect new features yesterday, market demand began to be driven by trends, and new starters popped up threatening larger companies' market share, it became clear that something had to be done to help developers produce features in smaller chunks, and so Agile began to gain acceptance.  But that wasn't the only issue.

Many organisations faced issues releasing those updates, so new methods of managing software releases had to be analysed and defined.  Sometimes the issues weren't so much technological as down to the culture of the organisation, so getting Dev teams and Ops teams working together as equals became the key to making it all work.  And thus DevOps was brought to the world.

So how has the focus of DevOps shifted from being about developing software to managing Infrastructure?

There are many factors, but probably the main one is the explosion of the internet and connected devices.  At its inception, the internet was not what it is today, and many of the tools created to support DevOps were geared towards desktop application development and server environments.  But then new devices came along.  Smartphones and other mobile devices became commonplace, the internet became something you could carry around in your pocket, and web development began to gain a larger share of application development.

As web traffic grew, it became clear that single dedicated physical servers could not cope on their own, so load-balanced clusters of web, application and database servers began appearing.  But these took up physical space and cost a lot of money to run, maintain, upgrade or replace, so those physical machines began to be migrated to hypervisors: larger servers capable of hosting many virtual machines at the same time.  Organisations began to cluster their hypervisors and ended up back at square one, where they had been with the dedicated servers, so cloud companies formed, offering warehouses of clustered hypervisors for customers to host their workloads on without having to worry about upgrades.  Want more RAM?  No problem.  CPU cores?  Here you go.

Then some bright people looked at this and thought: why are we wasting so much overhead on the host emulating hardware and devices for virtual machines?  Can't we just have a single mega computer and run processes and applications directly on the host in simple, shippable containers?  Thus tools like Docker came about.

While these are all great tools for Ops and managing infrastructure, they don't help a great deal with developing and delivering software, which is what DevOps should be about.  How do containers aid Continuous Integration and continuous testing processes?  How does hosting in the cloud help to ensure the integrity of the code base and that customer requirements are being met?  Configuration Management may help to streamline deployment, but how does it help to ensure that the code being produced is always of high quality and as bug-free as possible?

These are the questions that DevOps should be seeking to answer.

As a DevOps specialist, whether the client hosts on physical boxes, cloud VMs or containers should be irrelevant.  These are questions for the client to answer depending on their business need.  They are entirely in the realm of Ops, and I certainly don't get overly excited because a new container system or method of hosting VMs comes about.  What we need to focus on in our role as DevOps specialists is ensuring that the organisation is creating software that is driven by market demand, meets customer requirements and can be quickly shipped out to said customer when QA deem the software fit for purpose.

We're not super Ops people or sysadmins.  Infrastructure is a concern only when it becomes a blocker to delivering software; it should never be our main focus.

When we stop focusing on the Dev, we become Ops and this is why I see the bubble bursting very soon, when the world wakes up and asks the question, “What happened to the Dev?”

Ansible Tower – How to use System Tracking

It took me a while to get my head around how to use the System Tracking feature in Tower, mainly because of how I was coming at it based on my current usage with Windows environments running on a vCenter hypervisor.  If you can't get your system tracking to work, it's probably because you made some of the same fundamental mistakes I did.

Hosts

My initial scripts dealt with creating the VMs on vCenter, so I had to target localhost (the Tower box) to get that done.  Unfortunately I carried this logic through to my scan jobs, running the scan against a specific VM by supplying the hostname in a survey and gathering facts like the IP address from vSphere.  This meant that facts weren't collected against the target hosts but against Tower, which is not what I wanted, and I was not getting the system tracking option to compare machines this way.

Best practice is to leave the Hosts section set to all and let Tower handle the connection via the inventories.  You will then get facts in the System Tracking page instead of the message telling you to set up a scan job against an inventory.

Custom Facts

You really do need to read the documentation carefully.  On Windows you gather facts with PowerShell scripts, but you need to make sure the output can easily be converted into a JSON object; Tower will handle the conversion for you.

You can write your scripts any way you want, but as I'm from the Linux world I like to pipe.  Here is an example of gathering DLL information.

# Build a hashtable mapping each DLL path to its product version, then emit it for Tower to serialise
$object = @{}
Get-ChildItem C:\Windows\System32\ -Filter *.dll -Recurse -ErrorAction 'SilentlyContinue' |
    Select-Object -ExpandProperty VersionInfo -ErrorAction 'SilentlyContinue' |
    Foreach-Object {
        $object.($_.FileName) = $_.ProductVersion
    }
echo $object

You still need to tell Ansible where to find these facts, so use the fact_path argument, but be aware that you may need to escape the path:

- setup: fact_path='C:\\ProgramData\\Ansible'
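
Put together, a minimal scan-style play might look like this (assuming the fact scripts have been dropped into C:\ProgramData\Ansible as above):

- hosts: all
  tasks:
    - name: Gather built-in and custom facts
      setup: fact_path='C:\\ProgramData\\Ansible'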

Usability

I'm going to round up with not so much an issue as an observation.  The more facts you gather, the longer the list gets.  On a very basic installation, collecting DLLs, services, features, installed programs and a selection of config files, the page was getting to be over 3,000 lines long.  This makes it hard to navigate and find potential conflicts between systems.  The colour scheme does hint at where sections are divided, but the divisions aren't obvious initially and can easily be missed.  To that end I had a quick play and put together a small Google Chrome extension that turns section headers into clickable elements which collapse the information tables.  I also added a summary of total conflicts per section.

(Screenshot: Ansible Tower system tracking extension)

The source code can be found on my GitHub: https://github.com/aidygus/AnsibleTowerChromeExtension

And the extension is available from the Chrome Web Store: https://chrome.google.com/webstore/detail/ansible-tower-sytem-track/klmeimccnodnkacekjnkgoiojakibhha

A very simple addition that makes all the difference.

Ansible Tower and full Enterprise Infrastructure

One of the greatest joys I experience is when an idea catches on and key members of a client organisation engage with the message I'm delivering.  I recently gave a presentation to an Ops team detailing how Ansible Tower can be introduced to help manage their server infrastructure.  “But why stop at servers?” they asked me.  “Could this be used to manage the desktop environments as well?”

I've always considered Ansible as a tool for production-based environments, but the question intrigued me.  Can Ansible be used to manage desktops as well as servers?  When you get right down to it, from Ansible's point of view there is not a lot of difference between connecting to a Windows 7 desktop and a Server 2012 instance.  Providing those environments are set up to accept WinRM connections and have a valid service account, Ansible should be able to provision and configure them as easily as it can servers.
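
For illustration, the inventory entries for a batch of desktops look much the same as they do for servers; the host names and service account below are made up:

[desktops]
desktop-0101.corp.example.com
desktop-0102.corp.example.com

[desktops:vars]
ansible_user=svc_ansible@CORP.EXAMPLE.COM
ansible_connection=winrm
ansible_port=5986
ansible_winrm_transport=kerberos
ansible_winrm_server_cert_validation=ignore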

Even managing an infrastructure consisting of several hundred Windows desktops in the Inventories section doesn't pose too much of a challenge.  Ansible Tower comes with a handy command-line tool called tower-manage inventory_import, so getting an export from Active Directory into Tower is a cinch.
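
As a hedged example (the file path and inventory name are invented for illustration), an Active Directory export saved as an inventory file can be loaded with something like:

tower-manage inventory_import --inventory-name="Corporate Desktops" --source=/tmp/ad_desktops.ini --overwrite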

So from a technical point of view, managing desktops with Ansible Tower is definitely a possibility that could be implemented in enterprise-sized infrastructures.

The only issue that has to be considered carefully is the financial cost versus the benefit gained from following this path.

With regard to servers, how the environment is configured, updated, provisioned and maintained is critical to the long-term operational stability of an organisation.  It makes sense for large organisations to use CM tools like Ansible to manage those environments and reduce risk.

Desktops, on the other hand, are a different story.  With the exception of large data centers, there will generally be far more desktops than servers.  Other than core applications like anti-virus and system updates (which usually have their own automated update mechanisms), it's not so critical to keep desktop environments up to date with the latest software releases.  Many desktop devices may be mobile, such as laptops, and are often offsite and not connected to the internal network.  Most problems can be fixed with a strategic disk defragmentation or by turning it off and back on again.  With all these points taken into consideration, the financial benefit of using Tower to manage a full-scale enterprise desktop estate just doesn't justify the capital expenditure.  For the cost of the licenses required to manage anything above 500 nodes, you could easily hire two or three extra desktop technicians and receive a greater return.

While the potential is there for including desktop devices in the scope of CM, companies such as Red Hat and Puppet Labs need to look further into their pricing models to make it worth an organisation's while to invest in these tools.  As it stands, the standard per-node costing model doesn't work for anything other than servers, which is a shame considering the potential advantage of extending the Continuous Delivery cycle right to the desktop on release day for developers producing desktop applications.

A big bar of Chocolatey

I recently posted my first impressions of Chocolatey, the package manager for Windows.

This post is going to focus on some scenarios that many enterprise customers may face when using this software deployment platform as part of their Configuration Management solution.

Most of the applications you'll be installing will be fairly lightweight.  Things like Notepad++ (because we all know not to use Notepad, right?), the Java JRE/JDK and anti-virus are standard additions for server environments.  They are usually small (a few hundred megabytes at most) and Chocolatey can install them with ease.  But there is one current limitation to Chocolatey I found that makes installing certain software not as easy as choco install this-package.
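
For those lightweight cases the happy path really is that simple, whether from the command line or from Ansible's win_chocolatey module (the package name here is just an example):

choco install notepadplusplus -y

# or, as an Ansible task
- name: Install Notepad++ via Chocolatey
  win_chocolatey:
    name: notepadplusplus
    state: present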

Currently the limit on the size of a nupkg is 2 GB.  For the majority of your enterprise dependencies this will not be an issue, but what about installing something like SQL Server Developer/Enterprise/Datacenter or Exchange, which can come in at over 4 GB when you package the whole media?  There may be options you can strip out of the installation folder if you have a specific build and don't need certain features, but this post will assume you have a dynamic use case that could change over time or by project and so will need the full installation media present.

You can certainly create large packages, but Chocolatey will throw an error when trying to install them.  So how do we install large packages within the bounds of this limitation?

Chocolatey, I've found, is a very dynamic and configurable tool.  The help guides on their website give us all the information we require to get up and running quickly, and there's plenty of choice for creating our local repos.  So while the current 2 GB limit on nupkg binaries does prevent quick package creation and installs for the bigger software, all is not lost, as there are ways to work around it.

Applications like SQL Server and Exchange aren’t like your standard MSBuild or MSI installers.  Notepad++ for example is an installer which contains all the required dependencies in a single package.  SQL on the other hand is a lot more complex.  There is a setup.exe, but that is used to call all the other dependencies on the media source.  If you try and package the whole thing up you’re going to be in for a hard time as I’ve already stated, but due to the way that Chocolatey works, these giant installations can potentially be the smallest packages you create.

Let's examine the innards of a package to see how this can be done.

In its most basic form, a package consists of a .nuspec file which details all the metadata, a chocolateyinstall.ps1 script which handles what is being installed and how, and finally the installer itself.  Creating packages is as easy as:

choco new packagename

and packaging with

choco pack path/to/packagename.nuspec

With a Business edition you can generate packages automatically from the installer itself, which is without a doubt a very neat feature.

My initial attempt at installing SQL Enterprise was to put all the media in the tools directory, which gave me a nupkg of around 4.5 GB.  Way too big.

As I mentioned, Chocolatey is very dynamic in how packages can be installed.  Initially it creates the installer script with the following headers, detailing the name of the actual installer and where it can be found:

$packageName = 'Microsoft-Mysql-Server-Datacenter'
$toolsDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"
$fileLocation = Join-Path $toolsDir 'setup.exe'

This assumes that I'm pulling a package from a repository specified when I set up Chocolatey initially, or from the --source argument.  Seeing as SQL is too large to effectively package whole, I found that I could host the installation media on a UNC network share and map a drive to it.  So now my headers look like this:

$packageName = 'Microsoft-Mysql-Server-Datacenter'
$toolsDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"
$fileLocation = 'Y:\SQL_Server_DC\setup.exe'

This also means that when creating the nupkg I didn't need to include the setup.exe, so the new size is just under 4 KB!  But that is just one of the hurdles I had to leap.

I'm installing all my packages via Ansible configuration management.  One of the included modules is win_chocolatey, which works well enough for simple installations from a NuGet-type repo.  Unfortunately I'm installing from a UNC path, which requires an authenticated drive to be mapped.  Mapped drives require a persistent user connection, which Ansible currently does not support: if you try to map a drive as part of the provisioning process, it will exist for the lifetime of that WinRM connection only and be lost when the next command is initiated.  I managed to work around this by creating a Chocolatey bootstrap script:

param (
    $netshare_password,
    $package,
    $arguments
)

# Build a credential for the network share user
$PWord = ConvertTo-SecureString $netshare_password -AsPlainText -Force
$netshare_cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "NUGET\netshareuser", $PWord

# Map the UNC share holding the installation media for this session
New-PSDrive -Name "Y" -PSProvider "FileSystem" -Root "\\NUGET\Installation Media" -Persist -Credential $netshare_cred

# Install the package from the mapped drive, passing any installer arguments through
choco install $package -y --force --source Y:\ --ia=$arguments

And called from Ansible like this:

- name: Installing SQL Server
  raw: 'C:\Windows\Temp\ChocoBootstrap.ps1 -netshare_password "M@d3Up9@55w0Rd" -package "microsoft-sql-sever-datacenter" -arguments "/ConfigurationFile=C:\Windows\Temp\ConfigurationFile.ini"'

Through this workaround, I am able to install packages larger than 2 GB with ease.

LEAN, mean DevOps machine

With all the noise and excitement over new tools it's easy to overlook that DevOps is not just a technical role.  There are many aspects that set being a DevOps specialist apart from being another form of systems administrator, and it is one of those areas that I'm going to talk about today.

Lean is a methodology usually found in marketing and manufacturing.  Toyota is noted for its Just In Time (JIT) manufacturing methods, and Ford applied similar ideas in his early production lines.  But what is it, and why is it so important for someone like me?

The shortest explanation is that Lean helps you look at the processes that make up how a function is performed and identify waste: wasted time, effort, resources, money and so on.  To me it is a brilliant framework for diagnosing what is wrong with a company's delivery cycle and starting to implement the right tools, methods and strategies to bring about a robust and stable Continuous Integration and Delivery solution.  Knowing how to automate a process is only half the battle; knowing what to automate is where the biggest gains are made, and Lean allows you to identify the areas that need attention most.

Lean also forms a foundation for me to measure.  At some point in the DevOps process you will be asked to identify improvements and justify your presence in the organisation.  When I identify waste through Lean, I take the opportunity to identify measurable metrics as well.  There may be a process in the deployment cycle that requires two or three people and takes five hours to complete.  This is an easy metric, as you can identify the actual cost of that process from the number of man-hours dedicated to it.  Time, as they say, is money, and here you can clearly calculate a cost.  There may be many such processes in the organisation, and Lean coupled with measurement allows you to identify the greatest wastes and the most valuable low-hanging fruit to change first.

Full of Chocolatey Goodness

The one thing that nobody can deny *nix-based OSes have down pat is the package manager.  The ability to install software on demand from trusted sources is without a doubt one of the coolest things I've experienced using Linux.  You need a media editing suite?  No problem!  A better text editor?  Take your pick!  Whether it's RPMs or PPAs, via the command line with yum and apt-get or in the GUI with Synaptic, that ability to install packages, updates and full software products is simply amazing!

In terms of configuration management this makes provisioning Linux from infrastructure-as-code tools like Ansible, Puppet and Chef insanely easy.  Unfortunately Windows does not have this feature.  Sure, there is an app store in Windows 10 similar to current smartphones (if there were any apps to download, that is), but pretty much all CM solutions are geared towards server-based environments, so fully automated configuration management isn't as simple as it would be with CentOS or Ubuntu.

So how do we deal with installing software through CM on Windows?  One way is to package the software as part of the CM script.  If you version control those scripts in Git, you could feasibly include each software package as a Git submodule, but that means creating a separate repository for every package you use.  In some of the environments I'm dealing with there may be 30 or 40 software dependencies across a whole environment, so that means a lot of repos.  Tracking binaries with Git is not really efficient either: every time you update the package it snapshots those binaries, so you can end up with massive repos for small software packages.  These take time to download and can slow the entire CM process down massively.

If only there was a decent package manager for Windows like PPA or RPM…

Well, hold on to your socks, because we are in luck.  There is a package manager for Windows that works just like its Linux cousins.  It's called Chocolatey, and even though it's early days for me and I've not had much exposure yet, it's phreaking amazing!

I had a demonstration from Rob Reynolds and Mike at RealDimensions Software and my jaw was hitting the floor through the whole presentation.  There is a public repository with so many applications available that a desktop user can get pretty much whatever they want.  For corporate environments there is the ability to host your own private repo on which you can publish your own secure, validated packages.  Creating packages is extremely easy and all the options you need to change are clearly laid out in the configuration files.  There is a Business edition that allows you to create packages from a host of Windows installers.

I am impressed with what I’ve seen so far.  I’ll certainly be blogging about my experience over the coming weeks.