Ansible Tower – How to use System Tracking

It took me a while to get my head around how to use the System Tracking feature in Tower, mainly because of how I was coming at it based on my current usage with Windows environments running on a vCenter hypervisor.  If you can't get your system tracking to work, it's probably because you made some of the same fundamental mistakes I did.

Hosts

My initial scripts dealt with creating the VMs on vCenter, and to do that I had to target localhost (the Tower box).  Unfortunately I carried this logic through to my scan jobs, running the scan job against a specific VM by supplying the hostname in a survey and gathering details like the IP address from vSphere.  This meant that facts were collected against Tower rather than against the target hosts, which is not what I wanted, and the system tracking option to compare machines never appeared doing it this way.

Best practice is to leave the Hosts section set to all and let Tower handle the connection via the inventories.  You will then get facts in the System Tracking page instead of the message telling you to set up a scan job against an inventory.
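As a minimal sketch of the idea (this isn't Tower's built-in scan playbook, just the shape of it):

---
- hosts: all   # let the Tower inventory decide which machines get scanned
  gather_facts: yes
  tasks:
    - name: Collect facts for System Tracking
      setup: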

Custom Facts

You really do need to read the documentation carefully.  On Windows, you gather facts with PowerShell scripts, but you need to make sure that the output can be easily converted into a JSON object; Tower will handle the conversion for you.

You can write your scripts any way you want, but as I'm from the Linux world I like to pipe.  Here is an example of gathering DLL information.

# Build a hashtable of DLL file name -> product version
$object = @{}
Get-ChildItem C:\Windows\System32\ -Filter *.dll -Recurse -ErrorAction 'SilentlyContinue' |
    Select-Object -ExpandProperty VersionInfo -ErrorAction 'SilentlyContinue' |
    ForEach-Object {
        $object.($_.FileName) = $_.ProductVersion
    }
# Emit the hashtable; Tower converts the output to JSON
echo $object

You still need to tell Ansible where to find these facts, so use the fact_path argument.  But be aware that you may need to escape the path:

- setup: fact_path='C:\\ProgramData\\Ansible'
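The same task in block form, if you prefer that over the inline syntax (mirroring the escaped path above):

- name: Gather facts, including the custom Windows facts above
  setup:
    fact_path: 'C:\\ProgramData\\Ansible'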

Usability

I'm going to round off with not so much an issue as an observation.  I found that the more facts you gather, the longer the list gets.  On a very basic installation, collecting DLLs, services, features, installed programs and a selection of config files, the page was getting to be over 3000 lines long.  This makes it hard to navigate and find potential conflicts between systems.  The colour scheme does hint at where sections are divided, but the divisions aren't obvious initially and can easily be missed.  To that end I had a quick play and put together a small Google Chrome extension to turn section headers into clickable elements that collapse the information tables.  I also added a summary of total conflicts per section.

[Screenshot: Ansible Tower system tracking extension]

The source code can be found on my GitHub: https://github.com/aidygus/AnsibleTowerChromeExtension

And the extension is available from the Chrome Web Store: https://chrome.google.com/webstore/detail/ansible-tower-sytem-track/klmeimccnodnkacekjnkgoiojakibhha

A very simple addition that makes all the difference.

Ansible Tower and full Enterprise Infrastructure

One of the greatest joys I experience is when an idea catches on and key members in a client organisation engage with the message I'm delivering.  I recently gave a presentation to an Ops team detailing how Ansible Tower could be introduced to help manage their server infrastructure.  "But why stop at servers?" they asked me.  "Could this be used to manage the desktop environments as well?"

I've always considered Ansible a tool for production-based environments, but the question intrigued me.  Can Ansible be used to manage desktops as well as servers?  When you get right down to it, from Ansible's point of view there is little difference between connecting to a Windows 7 desktop and a Server 2012 instance.  Provided those environments are set up to accept WinRM connections and have a valid service account, Ansible should be able to provision and configure them as easily as it can servers.

Even managing an infrastructure of several hundred Windows desktops in the Inventories section doesn't pose too much of a challenge.  Ansible Tower comes with a handy command line tool, tower-manage inventory_import, so getting an export from Active Directory into Tower is a cinch.
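For example (a sketch only; the inventory name and export path are my assumptions, and flags can vary between Tower versions):

tower-manage inventory_import --inventory-name="Desktops" --source=/tmp/ad_export_hosts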

So from a technical point of view, managing desktops with Ansible Tower is definitely a possibility that could be implemented into enterprise sized infrastructure.

The only issue that has to be considered carefully is the financial cost versus the benefit gained from following this path.

With regards to servers, how the environment is configured, updated, provisioned and maintained is critical to the long-term operational stability of an organisation.  It makes sense for large organisations to use CM tools like Ansible to manage their environments and reduce risk.

Desktops, on the other hand, are a different story.  With the exception of large data centres, there will generally be a greater number of desktops than servers.  Other than core applications like anti-virus and system updates (which usually have their own automated update mechanisms), it's not so critical to keep desktop environments up to date with the latest software releases.  Many desktop devices may be mobile, such as laptops, and are often offsite and not connected to the internal network.  Most problems can be fixed by a strategic disk defragmentation or by turning it off and on again.  With all these points taken into consideration, it is clear that the financial benefit of using Tower to manage full-scale enterprise infrastructures is just not worth the capital expenditure.  For the cost of the licences required to manage anything above 500 nodes, you could easily hire two or three extra desktop technicians and receive a greater return.

While the potential is there for including desktop devices in the scope of CM, companies such as Red Hat and Puppet Labs need to look further into their pricing models to make it worth an organisation's while to invest in these tools.  As it stands, the standard per-node costing model doesn't work on anything other than servers.  Which is a shame, considering the potential advantage of simplifying the Continuous Delivery cycle for developers producing desktop applications, right to the desktop on release day.

A big bar of Chocolatey

I recently posted my first impressions of Chocolatey, the package manager for Windows.

This post is going to focus on some scenarios that many enterprise customers may face when using this software deployment platform as part of their Configuration Management solution.

Most of the applications you'll be installing will be fairly lightweight.  Things like Notepad++ (because we all know not to use Notepad, right?), Java JRE/JDK and anti-virus are standard additions for server environments.  They are usually lightweight (less than a few hundred meg at most) and Chocolatey can install them with ease.  But there is one current limitation to Chocolatey I found that makes installing certain software not as easy as choco install this-package.
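Driven from Ansible, one of these simple installs is a single task via the win_chocolatey module; a minimal sketch (the package name is just an example):

- name: Install Notepad++ via Chocolatey
  win_chocolatey:
    name: notepadplusplus
    state: present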

Currently the limit on the size of a nupkg is 2 gig.  For the majority of your enterprise dependencies this will not be an issue.  But what about when it comes to installing things like SQL Server Developer/Enterprise/Datacenter editions or Exchange, which can come in at over 4 gig when you package the whole media?  There may be options you can strip out of the installation folder if you have a specific build and don't need certain features, but this blog will assume you have a dynamic use case that could change over time or by project, so the full installation media needs to be present.

You can certainly create large packages, but Chocolatey will throw an error when trying to install them.  So how do we install large packages within the bounds of this limitation?

Chocolatey, I've found, is a very dynamic and configurable tool.  The help guides on their website give us all the information we require to get up and running quickly, and there's plenty of choice for creating our local repos.  So while the current 2 gig limit on nupkg binaries does rule out quick package creation and installs for the bigger software, all is not lost, as there are ways to work around it.

Applications like SQL Server and Exchange aren't like your standard MSI installers.  Notepad++, for example, is a single installer which contains all the required dependencies in one package.  SQL, on the other hand, is a lot more complex.  There is a setup.exe, but that is used to call all the other dependencies on the media source.  If you try to package the whole thing up you're going to be in for a hard time, as I've already stated, but due to the way that Chocolatey works, these giant installations can potentially be the smallest packages you create.

Let's examine the innards of a package to see how this can be done.

In its most basic form, a package consists of a .nuspec file which details all the metadata, a chocolateyinstall.ps1 script which handles what is being installed and how, and finally the installer itself.  Creating packages is as easy as:

choco new packagename

and packaging with

choco pack path/to/packagename.nuspec

With a Business edition you can generate packages automatically from the installer itself, which is without a doubt a very neat feature.

My initial attempt at installing SQL Enterprise was to put all the media in the tools directory, which gave me a nupkg of around 4.5 gig.  Way too big.

As I mentioned, Chocolatey is very dynamic in how packages can be installed.  Initially it creates the installer script with the following headers, detailing the name of the actual installer and where to find it:

$packageName = 'Microsoft-Sql-Server-Datacenter'
$toolsDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"
$fileLocation = Join-Path $toolsDir 'setup.exe'

So this assumes that I'm pulling a package from a repository that was specified when I set up Chocolatey initially, or from the --source argument.  Seeing as SQL is too large to package whole, I found that I could host the installation media on a UNC network share and map a drive to it.  So now my headers look like this:

$packageName = 'Microsoft-Sql-Server-Datacenter'
$toolsDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"
$fileLocation = 'Y:\SQL_Server_DC\setup.exe'

This also means that when creating the nupkg I didn't need to include setup.exe, so the new size is just under 4k!  But that was just one of the hurdles I had to leap.

I'm installing all my packages via Ansible configuration management.  One of the included modules is win_chocolatey, which works well enough for simple installations from a NuGet-type repo.  Unfortunately I'm installing from UNC, which requires that an authenticated drive is mapped.  Mapped drives require a persistent user connection, which Ansible currently does not support: if you map a drive as part of the provisioning process, it will exist for the lifetime of that WinRM connection only and be lost when the next command is initiated.  I managed to work around this by creating a Chocolatey bootstrap script:

param (
    $netshare_password,
    $package,
    $arguments
)
# Build a credential for the share user
$PWord = ConvertTo-SecureString $netshare_password -AsPlainText -Force
$netshare_cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "NUGET\netshareuser",$PWord

# Map the installation media share for the lifetime of this script
New-PSDrive -Name "Y" -PSProvider "FileSystem" -Root "\\NUGET\Installation Media" -Persist -Credential $netshare_cred

# Install from the mapped drive, passing installer arguments through
choco install $package -y --force --source Y:\ --ia=$arguments

And called within Ansible like this:

- name: Installing SQL Server
  raw: 'C:\Windows\Temp\ChocoBootstrap.ps1 -netshare_password "M@d3Up9@55w0Rd" -package "microsoft-sql-server-datacenter" -arguments "/ConfigurationFile=C:\Windows\Temp\ConfigurationFile.ini"'
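For completeness, the bootstrap script has to reach the target before the raw call can run it; a sketch using win_copy (the source path is an assumption):

- name: Push the Chocolatey bootstrap script to the target
  win_copy:
    src: files/ChocoBootstrap.ps1
    dest: C:\Windows\Temp\ChocoBootstrap.ps1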

Through this workaround, I am able to install packages larger than 2 GB with ease.

LEAN, mean DevOps machine

With all the noise and excitement over new tools, it's easy to overlook that DevOps is not just a technical role.  There are many aspects that set being a DevOps specialist apart from being another form of systems administrator, and it is one of these areas that I'm going to talk about today.

Lean is a methodology usually found in marketing and manufacturing.  Toyota is noted for its Just In Time (JIT) manufacturing methods, and Ford applied similar thinking in his early production lines.  But what is it, and why is it so important for someone like myself?

The shortest explanation is that Lean helps you look at the processes that make up how a function is performed and identify waste: wasted time, effort, resources, money and so on.  To me it is a brilliant framework for diagnosing what is wrong with the delivery cycle in a company, and for starting to implement the right tools, methods and strategies to bring about a robust and stable Continuous Integration and Delivery solution.  Knowing how to automate a process, I feel, is only half the battle.  Knowing what to automate is where the biggest gains can be made, and Lean allows you to identify the areas that need attention most.

Lean also forms a foundation for me to measure.  At some point in the DevOps process you will be asked to identify improvements and justify your place in the organisation.  When I identify waste through Lean, I take that opportunity to also identify measurable metrics.  There may be a process in the deployment cycle that requires two or three people and takes five hours to complete.  This is an easy metric, as you can identify the actual cost of that process from the number of man-hours dedicated to it.  Time, as they say, is money, and here you can clearly calculate a cost.  There may be many such processes in the organisation, and Lean coupled with measurement allows you to identify the greatest wastes and the most valuable lowest-hanging fruit to change first.

Full of Chocolatey Goodness

The one thing that nobody can deny *nix-based OSes have down pat is the package manager.  The ability to install software on demand from trusted sources is without a doubt one of the coolest things I've experienced using Linux.  You need a media editing suite?  No problem!  A better text editor?  Take your pick!  Whether it's RPMs or PPAs, via the command line with yum and apt-get or in the GUI with Synaptic, that ability to install packages, updates and full software products is simply amazing!

In terms of configuration management this makes provisioning Linux from infrastructure-as-code tools like Ansible, Puppet and Chef insanely easy.  Unfortunately Windows does not have this feature.  Sure, there is an app store in Windows 10 similar to current smartphones (if there were any apps to download, that is), but pretty much all CM solutions are geared towards server-based environments, so fully automated configuration management isn't as simple as it would be with CentOS or Ubuntu.

So how do we deal with installing software through CM on Windows?  One way is to package the software as part of the CM script.  If you version control those scripts in Git, you could feasibly include each software package as a Git submodule, but that means you have to create a separate Git repository for every package you use.  In some of the environments I'm dealing with there may be as many as 30 or 40 software dependencies across a whole environment, so that means a lot of repos.  Tracking binaries with Git is not really efficient either: every time you update the package it snapshots those binaries, so you can end up with massive repos for small software packages.  These take time to download and can slow the entire CM process down massively.

If only there was a decent package manager for Windows like PPA or RPM…

Well hold on to your socks guys, because we are in luck.  There is a package manager for Windows that works just like its Linux cousins.  It's called Chocolatey, and even though it's early days for me and I've not had much exposure yet, it's phreaking amazing!

I had a demonstration from Rob Reynolds and Mike at RealDimensions Software, and my jaw was on the floor through the whole presentation.  There is a public repository with so many applications available that a desktop user can get pretty much whatever they want.  For corporate environments there is the ability to host your own private repo, on which you can publish your own secure, validated packages.  Creating packages is extremely easy, and all the options you need to change are clearly laid out in the configuration files.  There is also a Business edition that allows you to create packages from a host of Windows installers.

I am impressed with what I’ve seen so far.  I’ll certainly be blogging about my experience over the coming weeks.

Windows sessions and shell access

I'm not a Windows guru; it's been many, many years since I moved over into Linux space, so if I'm completely wrong here, please correct me.

I ran into an interesting and frustrating problem last week trying to get Ansible Tower to talk to a VM running on a vCenter host.  Creating it, giving it an IP address and changing its hostname with vSphere shell commands worked perfectly.  My troubles began when prepping the system by configuring remote management using Ansible's available script.  It works perfectly on Vagrant: no problems there.

As I mentioned in an earlier blog, I found the only way to put the script onto the VM was via the vSphere shell and PowerShell's Set-Content command to save it into a local file, but when trying to run the file I kept getting a frustratingly elusive error:

New-SelfSignedCertificate : CertEnroll::CX509Enrollment::_CreateRequest: Provider type not defined. 0x80090017 (-2146893801 NTE_PROV_TYPE_NOT_DEF)
At C:\Windows\Temp\ConfigureRemotingForAnsible.ps1:88 char:10
+ $cert = New-SelfSignedCertificate -DnsName $SubjectName -CertStoreLocation "Cert:\LocalMachine\My"
+          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [New-SelfSignedCertificate], Exception
    + FullyQualifiedErrorId : System.Exception,Microsoft.CertificateServices.Commands.NewSelfSignedCertificateCommand

In PowerShell 4, New-SelfSignedCertificate does not have a settable property for the provider, and all the google-fu I could muster wasn't turning anything up.  But I did notice one particular pattern that put me on track to a fix.

I noticed that when I first started the VM fresh, the script would inevitably fail.  I ran it again and it would fail, yet I could run it from within the VM fine.  Then I noticed that I could run it remotely if a user was logged in.  So a few hours were spent spinning up fresh VMs and testing conditions until I was satisfied that the script would only run via Ansible when a user was logged in.  Speaking to the tech guys who look after the servers, it seems that when vSphere creates a shell connection it is only a partial connection and doesn't initiate a user session.  It appears that New-SelfSignedCertificate requires a valid user session to validate the certificate against.

So the fix after that was fairly easy.

I found that you can create a session in PowerShell and invoke a command against it.  So I ended up with this:

param (
    [string]$username,
    [string]$password
)
# Build a credential object from the supplied username and password
$pass = ConvertTo-SecureString $password -AsPlainText -Force
$mycred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username,$pass

# Create a full local user session and run the script inside it
$appsession = New-PSSession -ComputerName localhost -Credential $mycred
Invoke-Command -Session $appsession -FilePath C:\Windows\Temp\ConfigureRemotingForAnsible.ps1
Remove-PSSession $appsession

So my with_items now looks like this:

with_items:
  - " -command Set-Content ConfigureRemotingForAnsible.ps1 @'
{{ lookup('file', 'files/setup.ps1') }}
'@"
  - " -command Set-Content run-session.ps1 @'
{{ lookup('file', 'files/run.ps1') }}
'@"
  - " -command \"& .\\run-session.ps1 -username {{ vguest_admin_user }} -password {{ vguest_admin_password }} | Out-File .\\powershell_output.txt\""

Now Ansible can talk to the vCenter VMs and start doing things properly with them.

Microsoft NO-tepad

This may be old hat to some people, but I've been out of the Micro$oft camp for some time now.  I did mention I was a Penguinista!

One of the roles I'm working on automates the installation of MSSQL Server through the use of an unattended configuration file.  While working on this I came upon a very interesting problem.  Some options will differ between production environments and localised development environments, so I've placed the configuration in the role's templates folder, with the various dynamic values I want to match environment types replaced with {{ variables }}.  Sounds easy, right?  What could go wrong?

As I found out, plenty.  When I edited the file in Atom locally on my development environment (Windows 7) the configuration file looked OK.  It even looked fine on my personal development environment (Linux Mint 17), but when the role was run against the virtualised environment (a Vagrant VirtualBox machine) it failed.  Opening the file on the guest, there were some odd extra symbols at the start of the file that weren't on the host, and all the values I'd entered had changed to what looked like a mix of Russian Cyrillic and Chinese.

The configuration file I'm using had already been written by the Ops team to help them define standard settings for their SQL servers.  Except that they had edited those settings in Microsoft Notepad, which, as I recently found out, places a byte order mark (BOM) at the start of the file, corrupting it when Ansible's template module parses it through the Jinja2 templating language.

When I recreated the file, keeping it as far away from Notepad as I digitally could, the role parsed the new template perfectly.
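For reference, deploying such a template is a single task; a minimal sketch using the win_template module (the file names here are assumptions):

- name: Render the SQL unattended configuration from a template
  win_template:
    src: ConfigurationFile.ini.j2
    dest: C:\Windows\Temp\ConfigurationFile.ini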

So the moral of the story is: don't use Notepad!  Use a proper text editor like Notepad++ or Atom.

The importance of pause:

In my last post I discussed how to interface with vCenter via the vsphere_guest module to create a Windows 2012 server from a template, how to set up the IP address with vmware_vm_shell, then add features with win_feature.

One point I was remiss about was timing.  When Ansible runs through a playbook, it moves from one task to the next as soon as the command responds as successful, so if you attempt to go from VM creation straight into configuration management, it's going to fail.  When creating virtual environments locally with Vagrant there are built-in pauses and checks to ensure the environment is up and running before the provisioner runs.  The vsphere module, by contrast, will return success on creation before the machine has even had time to boot up.  So it's a good idea to use Ansible's pause module to ensure that the VM is in a stable state before beginning any configuration management work.

It's also useful for other tasks that may respond with success before the guest has had time to properly initialise.  I found this too when setting the IP.

If you find that after running a task the next one fails, it may simply be down to the previous task's state on the guest not being fully ready.
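A pause dropped in straight after the creation task looks like this (three minutes is purely illustrative; tune it to your template's boot time):

- name: Give the VM time to boot before provisioning begins
  pause:
    minutes: 3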

So make good use of the pause.

Ansible Tower and vSphere : Talking to a Windows Server 2012 with no IP address

So far this week has been very productive and exciting.  There are still many things up in the air right now, but my priority for this week is to integrate Ansible Tower with vCenter, then spin up and provision a Windows 2012 R2 server.

I started the week by upgrading Tower from 2.4.5 to 3.0.1.  Running ./setup.sh took it right through without a hitch.  Logging into the Tower front end, I was pleased with the cleaner, more professional dashboard and icons.  Not just that, but the layouts of some of the forms are far better than in previous versions.  Well done, Red Hat!

Ops gave me my own vCenter to play with last week, and with only 11 days left on my Tower licence I felt it prudent to get cracking.  As I have come to expect from Ansible, the documentation was clear enough, with good examples that I could copy and paste into a playbook.  Edited in Atom and pushed to the Git repository, I was good to go.

The Tower project had already been set up to point to a locally hosted Bitbucket SCM, and when I created my first test playbook to create the vCenter guest, it pulled those changes and I was able to select the playbook in the job template.

To generate the right amount of dynamic information for the vSphere guest I added fields to a custom survey, some already filled in but available to edit.  But on my first run I hit a snag: it told me I had to install pysphere.

pip install pysphere

Run again, and now it's cooking.  After about five minutes it passed, and going into my vSphere client it had indeed created the guest VM from the predefined template Ops had put there.

This is a successful first stage but still a ways to go.  I still have to provision the guest!

Initially the guest is sitting there with no network connectivity.  The vCenter resides in a server VLAN which does not have access to a DHCP server, so the box automatically picks a 169.254.x.x address.  How do you get an IP address onto a guest VM which can't be connected to directly from Ansible Tower?

Some emails to Red Hat and some googling turned up the wonderful module vmware_vm_shell.  OK!  Now we're talking.  I now have a way to interface with the guest through vCenter, direct to its shell.

Before I continue, I will mention another dependency: vmware_vm_shell uses pyVmomi, so you will have to install that.

pip install pyvmomi

We can now access PowerShell and set the IP address with this handy role and one-liner:

- name: Configure IP address
  local_action:
    module: vmware_vm_shell
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    datacenter: "{{ vcenter_datacenter }}"
    vm_id: "{{ vguest_name }}"
    vm_username: "{{ vguest_admin }}"
    vm_password: "{{ vguest_password }}"
    vm_shell: 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe'
    vm_shell_args: " -command (Get-NetAdapter -Name Ethernet | New-NetIPAddress -InterfaceAlias Ethernet -AddressFamily IPv4 -IPAddress {{ vguest_ipv4 }} -PrefixLength {{ vguest_mask }} -DefaultGateway {{ vguest_gateway }}) -and (Set-DnsClientServerAddress -InterfaceAlias Ethernet -ServerAddresses {{ vguest_dns }})"
    vm_shell_cwd: 'C:\Windows\Temp'

Now that we have an IP address on the Windows server, Ansible can talk to it.  Or can it?

In my earlier experiments with Vagrant and Ansible, one of the first things I did in the provisioning shell command was run a PowerShell script to enable PowerShell remoting over WinRM.  And here we hit another hurdle.  The vCenter I'm developing against does not have access to the domain, so I'm stuck for accessing any network resources.  But I have to run a PowerShell script on the guest which lives in the playbook assets on the Tower server.

It's a multi-line shell script, so I can't just pass it through the args on vm_shell.  Or can I?

Turns out I can.  Placing the ConfigureRemotingForAnsible.ps1 script into the $role/files directory makes it available to the role for funky things like I'm about to do.

So as not to duplicate the block above, I added a with_items and moved the shell_args I'd written earlier into the list to join its siblings:

    vm_shell_args: "{{ item }}"
    vm_shell_cwd: 'C:\Windows\Temp'
  with_items:
    - " -command (Get-NetAdapter -Name........"
    - " -command @'

{{ lookup('file', 'files/ConfigureRemotingForAnsible.ps1') }}
'@ | Set-Content ConfigureRemotingForAnsible.ps1"
    - " -File ConfigureRemotingForAnsible.ps1"

Let's talk about what I've done here and why the 2nd command looks so odd.  You'll notice that I'm using something called a here-string (which is what the @' '@ is all about).  This allows you to insert formatted multi-line text into a variable.  But why the double line feed?

Ansible Tower should be running on a CentOS 7 box.  If you managed to get it running on Ubuntu then well done you, but I didn't have the time to figure that out, so CentOS 7 is what I'm working on.  Windows and Linux handle line feeds and carriage returns differently, which is why you get all kinds of odd behaviour opening some files in Notepad that look fine in other editors.

The here-string requires you to start the text block on a new line (at least on 2012 R2 it does), but because of the CR/LF discrepancy, a single feed would be classed by Windows as the same line.  So double the feed, and you now have a here-string that is piped into Set-Content and stored as a .ps1 in the C:\Windows\Temp folder.

The 3rd line then runs that file, setting up PowerShell remoting.  It sounds easy, but believe me, it took me the better part of a day to figure this out.

The final step was to prove that Ansible could provision the guest environment.  Again not a straightforward task, but one with a very easy solution.  The simplest method of provisioning is to add a feature.  There is already an IIS example for win_feature, so I copied it into a new role and added the role to the create playbook.  But that alone is not going to work, because currently the playbook is using hosts: localhost and we need to point it at the guest for the next stage of provisioning.

This is how my top-level playbook looks:

---
- hosts: "{{ target_host }}" # set to localhost
  gather_facts: no

  roles:
    - vsphere
    - networking
    - gather_facts

- hosts: virtual

  roles:
    - install_iis

Did I just change hosts in the middle of a playbook?  I done did!  Yes, you can change hosts in the middle of a playbook.  But where does it get the virtual hosts from?

See the 3rd role in the 1st block?  gather_facts.  There's a neat trick I did here.

- vsphere_guest:
    vcenter_hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    guest: "{{ vguest_name }}"
    vmware_guest_facts: yes

- name: Add Guest to virtual host group
  add_host: name="{{ hw_eth0.ipaddresses[0] }}" groups="virtual"

Using the same vsphere_guest module, I got facts about the guest and used those facts to add it dynamically to the virtual host group.  Theoretically I could have got it from the variable {{ vguest_ipv4 }}, but this way looks a lot more awesome.

We're not out of the woods yet, though.  Simply adding the guest to the virtual group won't get you a connection: Ansible will try to connect, but with SSH.  We need to remind Ansible that this is a WinRM connection.  The best way to do that is with group_vars.  Create a new $projectroot/group_vars/virtual.yml and add this option:

---
ansible_connection: winrm
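Depending on your WinRM listener you may also need to point Ansible at the right port; assuming the usual HTTPS listener, that would be:

ansible_port: 5986   # ansible_ssh_port on older releases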

No further configuration was needed after that: Ansible Tower connects to the guest via WinRM over IP, and without so much as breaking a sweat added an IIS server via the win_feature module.

- name: Install IIS
  win_feature:
    name: "Web-Server"
    state: present
    restart: yes
    include_sub_features: yes
    include_management_tools: yes

So in summary I now have:

  • Ansible Tower running on a CentOS 7 Linux server
  • Communicating with a VMware vCenter hypervisor
  • Pulling playbooks from a locally hosted Bitbucket (Stash)
  • Spinning up a guest VM from an existing template
  • Setting the IP address
  • Enabling PowerShell remoting
  • Adding features

All with a few clicks of a mouse button.  I would say that today has been a good day.

Vagrant, Ansible and Windows 2012

As part of helping the developers, I'm putting together a Vagrant VMware Workstation environment.  The main application is based on JBoss with MSSQL 2012 as the backend database.

Ops already have a very well documented process for creating these environments, but it's a slow, methodical and laborious one.  The perfect low-hanging fruit to prove how CM can help save time and effort.

Initially the greatest challenge with this choice of CM was that Ansible does not currently run natively in Windows environments.  Some may question my decision to proceed down this path because of that, but my decision is based on several factors.  The thing I like about Ansible is that it is agentless: nothing needs to be installed on Linux or Windows for Ansible to work with it, whereas Puppet and Chef would require an agent.

There is cost as well.  My client will require a centralised control mechanism to initiate the creation and provisioning of VMs on their vCenter cluster, and Ansible's solution is considerably cheaper than its competitors.

And finally there is the consideration of staff training and organisational adoption.  Ansible, I feel, has the lower learning curve.  As it uses YAML, its structure is easier to learn than either Puppet's or Chef's, its roles allow for easy creation of playbooks that can combine multiple roles in any combination, and its intuitive, logical flow makes it very easy to get staff on board.

So how did I go about using a technology that won't run on Windows to provision a Windows server environment?

Let's first talk about Vagrant.  For those new to it, Vagrant is a VM abstraction layer that allows you to easily create and delete virtualised environments in VMware or Oracle VirtualBox on your development workstation from standard templates, without all the faff and hassle of manually creating them, installing the OS and setting them up.  With the proper provisioning commands you can get an entire environment up and running with the simple command

vagrant up

Vagrant allows you to do many neat tricks, such as spinning up two VMs in a single call.  And this is how I have managed to get around the Windows/Ansible problem.

My Vagrant solution defines two machines: the Windows 2012 R2 box which will be the main environment, and a minimal-install CentOS 7 box which will be the Ansible control node.

First the Windows box is brought up, and shell commands are run to prepare the Remote Management settings needed to allow Ansible to connect.  Then the CentOS box is brought up, Ansible is installed, and a playbook is run against the first VM.

Yes, it is a workaround, but it works far better than I expected.  This can now form the template design for the other development environments my client has: they would only need to change the playbooks with the correct configuration management for the application they're developing, and could use the same Vagrantfile everywhere.