The future of computing is Collaborative

Once, there used to be two camps in the world of computing.  Let's call them the Softies and the Nixies.

On the one side you had the Nixies in their carpet slippers, pointing their pipes and scowling at the Softies on the other side, who in turn, wearing their polo shirts and denim jeans, shook their heads back at the Nixies, and never the twain should meet.  That was the world of IT 20 years ago when I first cut my teeth.

The world has revolved, time has passed and today things have changed dramatically.  Once Microsoft was a monopolistic megalith seeking world domination on your desktops, servers and even the very web pages you hosted.  They succeeded in the first instance, certainly in the enterprise they rule on servers, but they never could make a decent dent into the internet.  Today the internet is everywhere, on everything and our lives are being dominated by trends, fads, the need for information and to stay connected to our ever growing list of peers.

How has this affected the two camps?  I can only speak from my own experience, but certainly the Nixies are no longer so scowly, the Softies are not so head-shaky, and there are even the first tentative blooms of respect between them.  But what caused this change of paradigm?

Microsoft caused it.  What?  Did I really say that?  I did.  You heard that right out of my own, well, fingers.  I do indeed believe that Microsoft has played a great part in helping to bring IT together.

This is all my own opinion, so please don't take what I say as gospel.  I think what we have seen, certainly over the last 10 years, is the power of culture and how changing culture can lead to dramatic and long-lasting effects.  Once the culture in Microsoft was very much that they were top dog, did the right things and would take over the world.  Those attitudes filtered down through the organisation to the customers, and to a certain extent that cultural attitude is still prevalent in a lot of people today.  Likewise, Linux culture viewed open source as far superior, with no hidden agendas, and as the thing that would lead the way to taking over the world.  Clearly neither side is ruling the world.  Each dominates its own sphere of computing influence, but there is no clear winner on either side.  So how exactly did Microsoft lead the change?

Leading on from my previous statement on culture: CEOs and CTOs have retired, been replaced or moved on to better places.  Those coming up behind them had alternative goals and started to change the culture from the top, which has already filtered down the organisation substantially.  This change is already permeating further into Microsoft's customers, and people are becoming more open-minded about what devices they use.  Some may say that the reason for the change is the failure of Windows 8 and Windows devices with the unified interface.  This may be partially true, I don't know, but I am certain that a change in direction from the top was instigated and will continue to ripple through Microsoft's sphere of influence for some time to come.

Remember the Microsoft Loves Linux presentation, followed closely by the release of .NET Core?  Earlier this year we heard about SQL Server coming to Linux, and the recent release of PowerShell for Linux had me all excited yesterday.  These are all signs of the increasing cultural change in Microsoft towards a more collaborative stance in working with customers.  Gone are the days when they would say "All this is mine, and all that you have over there will be mine".  Today the clear message from Microsoft is "We respect your choice, so let us help you have more of a choice", and they certainly are offering many good alternative solutions for those who don't want to lock themselves into a Wintel-only or Linux-only type of infrastructure.

Likewise, companies like Red Hat are striving to improve tools such as Ansible, which were always considered Linux-only tools, by adding Windows support.  Times have changed, and it is clearer than ever that Microsoft is altering direction by working together with customers, not against them.  What is next?  Who knows!  But I can't wait to find out!

These are indeed interesting and exciting times we’re living in!

Microsoft open sources PowerShell for Linux and macOS

An amazing announcement from Microsoft landed today in this video :

Yes, you read that right: PowerShell is going to be available for Linux- and macOS-based platforms.  Now you might wonder why I would get so excited about this, what with me preferring Linux as my core OS.  Surely Bash is better?  And with Bash being released on Windows 10, isn't this pointless?

There is a lot to be excited about with this news.  I hadn't had much exposure to PowerShell until I started on this project, but so far, from what I've seen, PS is not half bad.  Is it better than Bash?  I can't really answer that.  It's like comparing a screwdriver and a hammer.  Which is better?  It depends on what you're doing with it.

To me this means that it will soon be possible to manage mixed-environment infrastructure with a unified toolset, and there is choice in which tools to use.  Bash on Windows or PowerShell on Linux: each has its unique and wonderful features, and each will find its niche use, I'm sure.  I don't think we will suddenly see Linux-based infrastructure sprouting PS, or Wintel houses jumping on Bash, but I do see that companies who use a mix of both will be able to use the best tool for their needs across all these environments, considerably reducing the technical debt that these types of work environments inevitably gain.

This is nothing but good news for everyone.

Ansible Ask an Expert – Windows Webinar

Tonight I took part in the Ansible Ask an Expert – Windows webinar.  It was a very informative evening and I got many of my questions answered by Mark Phillips and Matt Davis.

There are some very exciting changes coming with 3.0.1; unfortunately the next release is not due until January 2017, so for me that pushes back my timeline to produce a viable proof-of-concept configuration management setup by a few months.

Some of the questions and answers I was given follow:

Is system tracking confirmed to be working with Windows environments in the next release, and when is that release expected?

When experimenting with this feature I found that, unfortunately, I was getting an error message: Ansible was looking for Python on the Windows platform in a place specific to Linux, so of course it failed.

Windows support for system tracking is confirmed for the next release, but won't be with us until January 2017.

Apart from Chocolatey, are there any other package management tools that Red Hat would recommend?

Currently the only tool available for Windows environments that gives anything close to Linux-based package management is Chocolatey.  Microsoft are apparently working on something to provide a package-management-type environment, but that is a long way off.
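For reference, Ansible already ships a win_chocolatey module, so a package install can be a one-task affair.  A minimal sketch (the 7zip package name is just an example, and it assumes the guest can reach the Chocolatey repository):

```yaml
# Sketch: install a package on a Windows host through Chocolatey
- name: Ensure 7-Zip is installed via Chocolatey
  win_chocolatey:
    name: 7zip
    state: present
```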

When installing certain software or running PowerShell scripts via WinRM, they fail, but when run locally they work fine.  I get round that by creating a session with New-PSSession and using it with Invoke-Command, which works.  Will Ansible eventually be able to create a full session to do these tasks?

When Ansible connects to a Windows box it creates a batch connection, which is not a full session.  So commands and software that require a full session (such as New-SelfSignedCertificate and SQL Server) will fail.
Red Hat are working on a Become feature for Windows that will create a full PS session rather than a batch session, which they hope to implement in 3.0.2.

Windows sessions and shell access

I'm not a Windows guru; it's been many, many years since I moved over into Linux space, so if I'm completely wrong here, please correct me.

I ran into an interesting and frustrating problem last week trying to get Ansible Tower to talk to a VM running on a vCenter host.  Creating it, giving it an IP address and changing its hostname with vSphere shell commands worked perfectly.  My troubles began when prepping the system by configuring remote management using Ansible's available script.  It works perfectly on Vagrant; no problems there.

As I mentioned in an earlier blog, the only way I found to put the script onto the VM was via the vSphere shell and the Set-Content command in PowerShell, saving it into a local file; but trying to run that file, I kept getting a frustratingly elusive error.

New-SelfSignedCertificate : CertEnroll::CX509Enrollment::_CreateRequest: Provider type not defined. 0x80090017 (-2146893801 NTE_PROV_TYPE_NOT_DEF)
At C:\Windows\Temp\ConfigureRemotingForAnsible.ps1:88 char:10
+ $cert = New-SelfSignedCertificate -DnsName $SubjectName -CertStoreLocation "Cert:\LocalMachine\My"
+          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : NotSpecified: (:) [New-SelfSignedCertificate], Exception
+ FullyQualifiedErrorId : System.Exception,Microsoft.CertificateServices.Commands.NewSelfSignedCertificateCommand

In PowerShell 4, New-SelfSignedCertificate does not have a settable property for the provider, and all the google-fu I could muster was not turning anything up.  But I did notice one particular pattern that put me on track to a fix.

I noticed that when I first started the VM fresh, the script would inevitably fail.  I ran it again and it would still fail, yet I could run it from within the VM fine.  Then I noticed that I could run it remotely if a user was logged in.  So a few hours were spent spinning up fresh VMs and testing conditions until I was satisfied that the only way the script would run via Ansible was when a user was logged in.  Speaking to the tech guys who look after the servers, it seems that when vSphere creates a shell connection it is only a partial connection that doesn't initiate a user session, and New-SelfSignedCertificate appears to require a valid user session to validate the certificate against.

So the fix after that was fairly easy.

I found that you can create a session in PowerShell and invoke a command against it.  So I ended up with this:

Param (
    [string]$username,
    [string]$password
)
$pass = ConvertTo-SecureString $password -AsPlainText -Force
$mycred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $pass
$appsession = New-PSSession -ComputerName localhost -Credential $mycred
Invoke-Command -Session $appsession -FilePath C:\Windows\Temp\ConfigureRemotingForAnsible.ps1
Remove-PSSession $appsession

So my with_items now looks like this:

- " -command Set-Content ConfigureRemotingForAnsible.ps1 @'

{{ lookup('file', 'files/setup.ps1') }} '@"
- " -command Set-Content run-session.ps1 @'

{{ lookup('file', 'files/run.ps1') }} '@"
- " -command \"& .\\run-session.ps1 -username {{ vguest_admin_user }} -password {{ vguest_admin_password }} | Out-File .\\powershell_output.txt\""

Now Ansible can talk to the vCenter VMs and start doing things properly with them.

Microsoft NO-tepad

This may be old hat to some people, but I've been out of the Micro$oft camp for some time now.  I did mention I was a Penguinista!

One of the roles I'm working on is to automate the installation of MSSQL Server through the use of an unattended configuration file.  While working on this I came upon a very interesting problem.  There are some options that will differ between production environments and localised development environments, so I've placed the configuration in the role's templates folder, with the various dynamic values I want to match to environment types replaced with {{ variables }}.  Sounds easy, right?  What could go wrong?

As I found out, plenty.  When I edited the file in Atom locally on my development environment (Windows 7), the configuration file looked OK.  It even looked fine on my personal development environment (Linux Mint 17), but when the role was run against the virtualised environment (a Vagrant VirtualBox machine) it failed.  Opening the file on the guest, there were some odd extra symbols at the start of the file that weren't on the host, and all the values I'd entered had changed to what looked like a mix of Russian Cyrillic and Chinese.

The configuration file I'm using had already been written by the Ops team to help them define standard settings for their SQL servers.  Except that they had edited those settings in Microsoft Notepad, which, I recently found out, places a byte order mark (BOM) at the start of the file, thus corrupting it when it is parsed through Ansible's template module, which uses the Jinja2 templating language.

When I recreated the file, keeping it as far away from Notepad as I digitally could, the role parsed the new template perfectly.
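The symptoms fit a byte order mark: Notepad writes one at the start of Unicode files (FF FE for UTF-16, EF BB BF for UTF-8), and the "Chinese" garbage is what UTF-16 content looks like when read as the wrong encoding.  A quick shell sketch of spotting and stripping a UTF-8 BOM (file paths here are made up for illustration):

```shell
# Simulate a Notepad-style save: a UTF-8 byte order mark (EF BB BF)
# prefixed to an otherwise plain config file.
printf '\xef\xbb\xbf[OPTIONS]\n' > /tmp/ConfigurationFile.ini

# The stray bytes show up at the front of the file: ef bb bf
head -c 3 /tmp/ConfigurationFile.ini | od -An -tx1

# Strip the three BOM bytes so a template parser such as Jinja2
# sees clean text again.
tail -c +4 /tmp/ConfigurationFile.ini > /tmp/ConfigurationFile.clean.ini
head -c 9 /tmp/ConfigurationFile.clean.ini
```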

So the moral of the story is: don't use Notepad!  Use a proper text editor like Notepad++ or Atom.

The importance of pause:

In my last post I discussed how to interface with vCenter via the vsphere_guest module to create a Windows 2012 server from a template, how to set up the IP address with vmware_vm_shell, then add features with win_feature.

One point I was remiss about was timing.  When Ansible runs through a playbook, it moves logically from one task to the next as soon as the command responds as successful, so if you attempt to go from VM creation straight into configuration management, it's going to fail.  When creating virtual environments locally with Vagrant, there are built-in pauses and checks to ensure the environment is up and running before the provisioner runs.  The vsphere module, by contrast, will return success on the creation before the machine has even had time to boot up.  So it's a good idea to use Ansible's pause module to ensure that the VM is in a stable state before beginning any configuration management work.
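In practice that can be as simple as dropping a pause task between the creation task and the provisioning tasks.  A sketch, where the three-minute figure is an assumption to tune against your template's actual boot time:

```yaml
# Sketch: give the freshly created VM time to boot before any
# configuration management tasks run against it.
- name: Wait for the guest to finish booting
  pause:
    minutes: 3
```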

It's also useful for other tasks that may respond with success before the guest has had time to properly initialise.  I found this with setting the IP address too.

If you find that after running a task, the next one fails, it may simply be down to the state of the previous task on the guest not being fully ready.

So make good use of the pause!

Ansible Tower and vSphere : Talking to a Windows Server 2012 with no IP address

So far this week has been very productive and exciting.  There are still many things up in the air right now, but my priority for this week is to integrate Ansible Tower with vCenter, then create, spin up and provision a Windows 2012 R2 server.

I started the week by upgrading Tower from 2.4.5 to 3.0.1.  Running the ./ setup script took it right through without a hitch.  Logging into the Tower front end, I was pleased with the cleaner, more professional dashboard and icons.  Not just that, but the layouts of some of the forms are far better than in previous versions.  Well done, Red Hat!

Ops gave me my own vCenter to play with last week, and with only 11 days left on my Tower license I felt it prudent to get cracking.  As I have come to expect from Ansible, the documentation was clear, with good examples that I could copy and paste into a playbook.  Edited in Atom and pushed to the Git repository, I was good to go.

The Tower project had already been set up to point to a locally hosted Bitbucket SCM, and when I created my first test playbook to create the vCenter guest, it pulled those changes and I was able to select the playbook in the job template.

To generate the right amount of dynamic information for the vSphere guest, I added fields to a custom survey, some already filled in but available to edit.  But on my first run I hit a snag: it told me I had to install pysphere.

pip install pysphere

Run again, and now it's cooking.  After about five minutes it passed, and going into my vSphere client I could see it had indeed created the guest VM from the predefined template Ops put there.
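For context, the create task looked roughly like this sketch; from_template and template_src come from the vsphere_guest module, while vguest_template is a made-up variable name for illustration:

```yaml
# Sketch: clone a guest VM from an existing vCenter template
- name: Create guest VM from the Ops template
  vsphere_guest:
    vcenter_hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    guest: "{{ vguest_name }}"
    from_template: yes
    template_src: "{{ vguest_template }}"
```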

This is a successful first stage, but there's still a way to go.  I still have to provision the guest!

Initially the guest is sitting there with no network connectivity.  The vCenter resides in a server VLAN which does not have access to a DHCP server, so the box automatically picks a 169.254 link-local address.  How do you get an IP address onto a guest VM which can't be connected to directly from Ansible Tower?

Some emails to Red Hat and some googling brought me to the wonderful module vmware_vm_shell.  OK!  Now we're talking.  I now have a way to interface with the guest through vCenter, direct to its shell.

Before I continue, I will mention another dependency: vmware_vm_shell uses pyVmomi, so you will have to install that too.

pip install pyvmomi

We can now access PowerShell and set the IP address through it with this handy role and one-liner:

- name: Configure IP address
  vmware_vm_shell:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    datacenter: "{{ vcenter_datacenter }}"
    vm_id: "{{ vguest_name }}"
    vm_username: "{{ vguest_admin }}"
    vm_password: "{{ vguest_password }}"
    vm_shell: 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe'
    vm_shell_args: " -command (Get-NetAdapter -Name Ethernet | New-NetIPAddress -InterfaceAlias Ethernet -AddressFamily IPv4 -IPAddress {{ vguest_ipv4 }} -PrefixLength {{ vguest_mask }} -DefaultGateway {{ vguest_gateway }}) -and (Set-DnsClientServerAddress -InterfaceAlias Ethernet -ServerAddresses {{ vguest_dns }})"
    vm_shell_cwd: 'C:\Windows\Temp'

Now we have an IP address on the Windows server, Ansible can talk to it.  Or can it?

In my earlier experiments with Vagrant and Ansible, one of the first things I did in the provisioning shell command was run a PowerShell script to enable remoting over WinRM.  And here we hit another hurdle: the vCenter I'm developing against does not have access to the domain, so I'm stuck for accessing any network resources.  But I have to run a PowerShell script on the guest, and that script lives in the playbook assets on the Tower server.

It’s a multiple line shell script so I can’t just pass it through the args on vm_shell.  Or can I?

Turns out I can.  Placing the ConfigureRemotingForAnsible.ps1 script into the $role/files directory makes it available to the role for funky things like I’m about to do.

So as not to duplicate the block above, I added a with_items and moved the shell_args I'd written earlier into the list to join its siblings:

  vm_shell_args: "{{ item }}"
  vm_shell_cwd: 'C:\Windows\Temp'
  with_items:
    - " -command (Get-NetAdapter -Name........"
    - " -command @'

{{ lookup('file', 'files/ConfigureRemotingForAnsible.ps1') }} '@ | Set-Content ConfigureRemotingForAnsible.ps1"
    - " -File ConfigureRemotingForAnsible.ps1"

Let's talk about what I've done here and why the second command looks so odd.  You'll notice that I'm using something called a here-string (which is what the @' '@ is all about).  This allows you to insert formatted multi-line text into a variable.  But why the double line feed?

Ansible Tower should be running on a CentOS 7 box.  If you managed to get it running on Ubuntu then well done you, but I didn't have the time to figure that out, so CentOS 7 is what I'm working on.  Windows and Linux handle line feeds and carriage returns differently, which is why you get all kinds of odd behaviour opening some files in Notepad that look fine in other editors.

The here-string requires you to start the text block on a new line (at least on 2012 R2 it does), but because of the CR/LF discrepancy, a single line feed would be classed by Windows as the same line.  So double the feed, and you now have a here-string that is piped into Set-Content and stored as a .ps1 in the C:\Windows\Temp folder.
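To see the rule in isolation, here's a minimal here-string sketch (the file path is just an example): the @' opener must be the last thing on its line, and the '@ closer must start a line of its own.

```powershell
# A literal (single-quoted) here-string: no variable expansion inside.
$text = @'
line one
line two
'@

# Pipe it into Set-Content to persist it as a script file.
$text | Set-Content C:\Windows\Temp\example.ps1
```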

The third line then runs that file, setting up the PowerShell remoting.  It sounds easy, but believe me, it took me the better half of the day to get this figured out.

The final step was to prove that Ansible could provision the guest environment.  Again, not a straightforward task, but one with a very easy solution.  The simplest method of provisioning is to add a feature, and there is already an IIS example for win_feature, so I copied it into a new role and added the role to the create playbook.  But this alone is not going to work, because currently the playbook uses hosts: localhost and we need to point it at the guest for the next stage of provisioning.

This is how my top-level playbook looks:

- hosts: "{{ target_host }}" # set to localhost
  gather_facts: no
  roles:
    - vsphere
    - networking
    - gather_facts

- hosts: virtual


Did I just change hosts in the middle of a playbook?  I done did!  Yes, you can change hosts in the middle of a playbook.  But where does it get the virtual hosts from?

See the third role in the first block?  gather_facts.  There's a neat trick I did here.

- vsphere_guest:
    vcenter_hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    guest: "{{ vguest_name }}"
    vmware_guest_facts: yes

- name: Add Guest to virtual host group
  add_host: name="{{hw_eth0.ipaddresses[0]}}" groups="virtual"

Using the same vsphere_guest module, I got facts about the guest and used those facts to add it dynamically to the virtual host group.  Theoretically I could have taken the address from the variable {{ vguest_ipv4 }}, but this way looks a lot more awesome.

We're not out of the woods yet, though.  Simply adding the guest to the virtual group won't get you a connection: it will try to connect, but with SSH.  We need to tell Ansible that this is a WinRM connection, and the best way to do that is with group_vars.  Create a new $projectroot/group_vars/virtual.yml and add this option:

ansible_connection: winrm

No further configuration was needed after that: Ansible Tower connects to the guest via WinRM over IP and, without so much as breaking a sweat, added an IIS server via the win_feature module.
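If you do need to spell things out, a slightly fuller group_vars/virtual.yml sketch might look like this.  The 5986 port assumes the HTTPS WinRM listener the ConfigureRemotingForAnsible script sets up, and cert validation is switched off here only because that certificate is self-signed:

```yaml
ansible_connection: winrm
ansible_port: 5986
ansible_user: "{{ vguest_admin_user }}"
ansible_password: "{{ vguest_admin_password }}"
ansible_winrm_server_cert_validation: ignore
```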

- name: Install IIS
  win_feature:
    name: "Web-Server"
    state: present
    restart: yes
    include_sub_features: yes
    include_management_tools: yes

So, in summary, I now have:

  • Ansible Tower running on a CentOS 7 Linux server
  • Communicating with a VMware vCenter hypervisor
  • Pulling playbooks from a locally hosted Bitbucket (Stash)
  • Spinning up a guest VM from an existing template
  • Setting up the IP address and credentials
  • Enabling PowerShell remoting
  • Adding features

All with a few clicks of a mouse button.  I would say that today has been a good day!

Vagrant, Ansible and Windows 2012

As part of helping the developers, I'm putting together a Vagrant VMware Workstation environment.  The main application is based on JBoss with MSSQL 2012 as the backend database.

Ops already have a very well-documented process for creating these environments, but it's a slow, methodical and laborious one.  The perfect low-hanging fruit to prove how CM can help save time and effort.

Initially the greatest challenge I encountered with this choice of CM was that Ansible does not currently run natively in Windows environments.  Some may question my decision to proceed down this path because of that, but my decision is based on many factors.  The thing I like about Ansible is that it is agentless: nothing needs to be installed on Linux or Windows for Ansible to work with it, whereas Puppet and Chef would require the addition of an agent.

There is cost as well.  My client will require a centralised control mechanism to initiate the creation and provisioning of VMs on their vCenter cluster, and Ansible's solution is considerably cheaper than its competitors.

And finally there is the consideration of staff training and organisational adoption.  Ansible, I feel, has the lower learning curve.  As it uses YAML, its structure is easier to learn than either Puppet's or Chef's.  Ansible's way of structuring work into roles allows for easy creation of playbooks that can combine multiple roles, and its intuitive, logical flow makes it very easy to get staff on board.

So how did I go about using a technology that won't run on Windows to provision a Windows server environment?

Let's first talk about Vagrant.  For those new to it, Vagrant is a VM abstraction layer that allows you to easily create and delete virtualised environments in VMware or Oracle VirtualBox installed on your development workstation, built from standard templates, without all the faff and hassle of manually creating them, installing the OS and setting them up.  With the proper provisioning commands you can get an entire environment up and running with the simple command:

vagrant up

Vagrant allows you to do many neat tricks, such as spinning up two VMs in a single call.  And this is how I have managed to get around the Windows/Ansible problem.

My Vagrant solution defines two machines: the Windows 2012 R2 box which will be the main environment, and a minimal-install CentOS 7 box which will be the Ansible control node.

First the Windows box is brought up, and shell commands are run to prepare the remote management settings needed to allow Ansible to connect.  Second, the CentOS box is brought up, Ansible is installed, and a playbook is run against the first VM.
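The two-machine trick can be sketched in a single Vagrantfile.  The box names, script path and playbook locations below are assumptions for illustration, not my client's actual values:

```ruby
Vagrant.configure("2") do |config|
  # Machine 1: the Windows 2012 R2 target, prepared for WinRM remoting
  config.vm.define "win2012" do |win|
    win.vm.box = "windows-2012r2"          # assumed local box name
    win.vm.communicator = "winrm"
    win.vm.provision "shell", path: "scripts/ConfigureRemotingForAnsible.ps1"
  end

  # Machine 2: a minimal CentOS 7 control node that installs Ansible
  # and runs the playbook against the first VM
  config.vm.define "control" do |ctrl|
    ctrl.vm.box = "centos/7"
    ctrl.vm.provision "shell", inline: <<-SHELL
      yum install -y epel-release
      yum install -y ansible
      ansible-playbook -i /vagrant/inventory /vagrant/site.yml
    SHELL
  end
end
```

Because Vagrant brings machines up in definition order, the Windows box is ready and listening on WinRM by the time the control node's provisioner runs.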

Yes, it is a workaround, but it works far better than I expected.  This can now form the template design for my client's other development environments: they would only need to change the playbooks with the correct configuration management for the application they're developing, and could use the same Vagrantfile for all of them.

I’m provisioning what with Ansible?

I love Ansible!  I've been using it for a good five or six years now, nearly since it first came out.  Yes, I have experience with other CMs: Puppet, and a passing acquaintance with Chef.  But neither of those tools has captured my attention with the simplicity and elegance that Ansible has.  This is strange in itself, as I'm not a huge fan of Ruby, yet I do like the well-laid-out structure of the YAML files.

So when the client asked me what I thought the best configuration management solution would be, with them being a Windows house my first instinct was of course Puppet.  And I nearly made that decision, but fortunately I took some time to look into the feasibility of using Ansible within this environment.  I was pleasantly surprised by how mature it has become in such a short time.  There are plenty of win_ modules to use, and what isn't available I've found I can work around with PowerShell scripts called using either the raw: or (hurray) script: commands.

So far my tests have proven that Ansible can indeed provision Windows environments competently, and this is the technology I will recommend as part of my proof-of-concept core framework.

Hello world!

Supposedly the first post on your blog is the most important.  Capture the audience, state your intentions, introduce yourself, yadda yadda yadda.  Sounds so easy, until you sit there for half an hour without a clue what to type.

Who am I, what am I, what do I do?  Existential dilemmas aside, all valid questions.

Those who know me well know that I am a self-certified Linux fanboy.  I love Linux!  I use it as much as I can!  It's been my main OS for nearly a decade and it has filled a major part of my career to date, so it may come as a surprise to many that my current contract has almost no Linux technology.  My client is an entirely Windows-based house with MSDN subscriptions to everything, yet they have employed me as a DevOps consultant to analyse and implement improvements to their current development environments and software delivery processes.

There are many out there who think that DevOps cannot be successfully implemented in a Windows-based environment, but even though I come from the dark side of the force, I am going to take the challenge and prove that DevOps is not limited by technology, tools or operating systems: it is truly about the attitudes and willingness of people who want to improve their software development and delivery environments.

This blog is to help me keep track of my journey implementing the processes, methodologies and some of the tools I've used in previous Linux-based organisations into this Wintel house; to talk about many of the challenges I face, the compromises I have had to and will have to make, and how I will bring about what is considered impossible.