Containable Development Environments

While (in my opinion) the jury is still out on whether 2016 was indeed the year of DevOps as promised by Gartner, it certainly was a great year for innovation, with many tools gaining much-needed exposure.

It is one of those tools that I will focus on in this post: containers. While a lot of the hype has been about how awesome containers can be at an enterprise level, I'm going to examine them from another angle: how they could potentially become the number one tool for development environments. Here's why I think that.

Traditionally, developers created their environments directly on their own local workstations. Along with all those great "Well, it works on my machine" excuses, to borrow from a great writer: "This has made a lot of people very angry and been widely regarded as a bad move" (Douglas Adams, The Hitchhiker's Guide to the Galaxy).

When everyone was manually installing tools in their own ad-hoc way, it was to be expected that things wouldn't always work as intended in production environments (which would also have had their own manual configuration at some point). Great days for coffee and headache tablet manufacturers.

Over recent years, organisations have been moving steadily towards virtualising their development environments, or at least automating the installation onto local machines, so that they have at least some kind of level playing field.

For the time being, I'm going to put aside the localised environment installed directly onto the development workstation and focus on VM usage.

One of the neat features touted for containers is that they are more efficient than VMs, and it comes down to how they function. When a virtual machine is running, the host has to emulate all the hardware: devices, BIOS, network adapters, and even the CPU and memory in cases where passing work through to the host is not an option (such as non-x86 architectures).

Containers, on the other hand, run directly on the host's hardware, using the host's OS but segregating the application layer inside those handy shippable units. This does limit you to a certain extent. A hypervisor can run any operating system: Windows VMs and Linux VMs can cohabit on the same host as happily as Martini with ice. But you can't run, say, an MS Exchange server in a container on a CentOS Docker host, or a full Linux Nginx stack on the Windows variant. For a large, all-Wintel enterprise this won't be an issue, as it would only need Windows container hosts; a smaller, mixed infrastructure, however, would need to run two container platforms, doubling the support required for two very different systems. This is where containers fall short of the mark as an enterprise-level tool for production environments.

That being said, my aim isn't to knock containers, but to praise them for the benefit they could potentially bring to the actual creation of software!

Let's go back to the developer who has stopped installing quasi-production environments directly onto his workstation and has adopted VM-based development. Depending on the spec of his machine, he could be fine or in for a very hard time. As already mentioned, VMs are emulated, which means they take up processing power, memory and more disk space than is actually made available to the guest. They hog resources. For enterprise solutions such as vCenter or QEMU, the overhead is not really an issue: many benchmarks have shown that these enterprise hypervisors lose only fractions of a percent compared with running the same operating system on bare metal, and enterprise storage is cheap as chips. Workstation virtualisation, however, is a different story. Whereas the enterprise hypervisor runs only the virtualisation process, a workstation will also be running email clients, web browsers and IDEs or editors such as Visual Studio, MonoDevelop, PhpStorm or Sublime, to name a few, plus many other processes and applications in the background. The VMs share the available resources with all of those, so you will never get anywhere near bare-metal performance. You will find VMs locking up from time to time or being slow to respond (especially if a virus scan is running). These niggles are small and infrequent, but they are infuriating when you're up against a deadline and Sophos decides now is a great time to bring your VM to a grinding halt.

By moving to containers, you can eliminate a lot of that aggravation, simply because you're no longer burning resources to run another operating system within the operating system. Instead, the container runs the application stack directly against the host. I'm not promising it will cure all your problems when the computer grinds to its knees during those infernal virus scans, but if the workstation in question is limited in resources, containers can give the developer the environment they need without hogging disk, memory or CPU.
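
To give a feel for just how lightweight that is, standing up and tearing down a throwaway web stack on a Linux Docker host is a pair of one-liners (the nginx image and port mapping here are just examples):

docker run -d --name dev-web -p 8080:80 nginx
docker rm -f dev-web

No guest OS to boot and no virtual disk to allocate: the container starts in seconds and vanishes without a trace.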

And finally, probably what I feel is the best bit. Provided the VM the developer was using was provisioned with a CM tool such as Ansible, Puppet or Chef, there is no reason why the same CM scripts couldn't provision the container as well, so moving from VM-based development to container-based is not as hard as you might think. CM is the magic glue that holds it all together and lets us create environments on Vagrant, vCenter, Azure or physical boxes, regardless of what or where they are. Including containers.
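
As a minimal sketch of that idea with Ansible (the host names, address and role here are hypothetical, and it assumes a version of Ansible that ships the docker connection plugin), the same role can target a VM over SSH and a container over the local Docker socket; only the inventory connection settings differ:

# inventory: one role, two very different targets
devbox        ansible_host=192.168.33.10 ansible_connection=ssh
devcontainer  ansible_connection=docker

# site.yml: the play doesn't care which one it is talking to
- hosts: all
  roles:
    - dev_environment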

In summary, I don't see containers becoming the enterprise production cure-all some purport them to be: the gains are too few and the costs too high. But for development environments, I see a very bright future.

A big bar of Chocolatey

I recently posted my first impressions of Chocolatey, the package manager for Windows.

This post is going to focus on some scenarios that many enterprise customers may face when using this software deployment platform as part of their configuration management solution.

Most of the applications you'll be installing will be fairly lightweight. Things like Notepad++ (because we all know not to use Notepad, right?), the Java JRE/JDK and anti-virus are standard additions for server environments. They are usually small (less than a few hundred meg at most) and Chocolatey can install them with ease. But there is one current limitation to Chocolatey I found that makes installing certain software not as easy as choco install this-package.
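
For those lightweight staples, installation really is a one-liner. A quick example (package IDs as published on the public chocolatey.org community feed, so verify the current names before relying on them):

choco install notepadplusplus jre8 -y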

Currently the limit on the size of a nupkg is 2 gig. For the majority of your enterprise dependencies this will not be an issue. But what about installing something like SQL Server Developer/Enterprise/Datacenter edition, or Exchange, which can come in at over 4 gig when you package the whole media? There may be features you can strip out of the installation folder if you have a specific build, but this blog will assume you have a dynamic use case that could change over time or per project, so the full installation media needs to be present.

You can certainly create large packages, but Chocolatey will throw an error when trying to install them.  So how do we install large packages within the bounds of this limitation?

Chocolatey, I've found, is a very dynamic and configurable tool. The help guides on their website give us all the information we need to get up and running quickly, and there's plenty of choice for creating local repos. So while the current 2 gig limit on nupkg binaries does rule out quick package creation and installs for the bigger software, all is not lost: there are ways to work around it.

Applications like SQL Server and Exchange aren't like your standard MSI installers. Notepad++, for example, is a single installer which contains all the required dependencies in one package. SQL Server, on the other hand, is a lot more complex: there is a setup.exe, but it is used to call all the other dependencies on the media source. Try to package the whole thing up and, as I've already said, you're in for a hard time. But due to the way Chocolatey works, these giant installations can potentially be the smallest packages you create.

Let's examine the innards of a package to see how this can be done.

In its most basic form, a package consists of a .nuspec file which details all the metadata, a chocolateyinstall.ps1 script which handles what is being installed and how, and finally the installer itself. Creating packages is as easy as:

choco new packagename

and packaging them with:

choco pack path/to/packagename.nuspec

With a Business edition you can generate packages automatically from the installer itself, which is without a doubt a very neat feature.

My initial attempt at packaging SQL Enterprise was to put all the media in the tools directory, which gave me a nupkg of around 4.5 gig. Way too big.

As I mentioned, Chocolatey is very dynamic in how packages can be installed. Initially it creates the installer script with the following headers, detailing the name of the actual installer and where it can be found:

$packageName = 'microsoft-sql-server-datacenter'
$toolsDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"
$fileLocation = Join-Path $toolsDir 'setup.exe'
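
Further down, the generated template feeds those variables into the helper that performs the actual install. Trimmed right down, the tail of the script looks roughly like this (silent arguments are installer-specific, so treat the values here as placeholders):

$packageArgs = @{
  packageName    = $packageName
  fileType       = 'exe'
  file           = $fileLocation
  silentArgs     = '/quiet'   # placeholder; SQL Server takes its own switches
  validExitCodes = @(0)
}
Install-ChocolateyInstallPackage @packageArgs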

So the generated script assumes that I'm pulling a package from a repository specified when I set up Chocolatey initially, or from the --source argument. Seeing as SQL Server is too large to package whole, I found that I could host the installation media on a UNC network share and map a drive to it. So now my headers look like this:

$packageName = 'microsoft-sql-server-datacenter'
$toolsDir = "$(Split-Path -parent $MyInvocation.MyCommand.Definition)"
$fileLocation = 'Y:\SQL_Server_DC\setup.exe'

This also means that when creating the nupkg I didn't need to include setup.exe, so the new size is just under 4k! But that is just one of the hurdles I had to leap.

I'm installing all my packages via Ansible configuration management. One of the included modules is win_chocolatey, which works well enough for simple installations from a NuGet-type repo. Unfortunately I'm installing from a UNC path, which requires an authenticated mapped drive. Mapped drives require a persistent user connection, which Ansible currently does not support: if you map a drive as part of the provisioning process, it will exist for the lifetime of that WinRM connection only and be lost when the next command is initiated. I managed to work around this by creating a Chocolatey bootstrap script:

param (
  $netshare_password,
  $package,
  $arguments
)

# Build a credential for the share user
$PWord = ConvertTo-SecureString $netshare_password -AsPlainText -Force
$netshare_cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "NUGET\netshareuser", $PWord

# Map the installation media share as a persistent drive
New-PSDrive -Name "Y" -PSProvider "FileSystem" -Root "\\NUGET\Installation Media" -Persist -Credential $netshare_cred

choco install $package -y --force --source Y:\ --ia=$arguments

And called within Ansible like this:

- name: Installing SQL Server
  raw: 'C:\Windows\Temp\ChocoBootstrap.ps1 -netshare_password "M@d3Up9@55w0Rd" -package "microsoft-sql-server-datacenter" -arguments "/ConfigurationFile=C:\Windows\Temp\ConfigurationFile.ini"'

Through this workaround, I am able to install packages larger than 2 gig with ease.

Windows sessions and shell access

I'm not a Windows guru; it's been many, many years since I moved over into Linux space, so if I'm completely wrong here, please correct me.

I ran into an interesting and frustrating problem last week trying to get Ansible Tower to talk to a VM running on a vCenter host. Creating it, giving it an IP address and changing its hostname with vSphere shell commands worked perfectly. My troubles began when prepping the system for remote management using Ansible's available script. It works perfectly on Vagrant; no problems there.

As I mentioned in an earlier blog, the only way I found to put the script onto the VM was via the vSphere shell, using the Set-Content command in PowerShell to save it into a local file. But trying to run the file, I kept getting a frustratingly elusive error:

New-SelfSignedCertificate : CertEnroll::CX509Enrollment::_CreateRequest: Provider type not defined. 0x80090017 (-2146893801 NTE_PROV_TYPE_NOT_DEF)
At C:\Windows\Temp\ConfigureRemotingForAnsible.ps1:88 char:10
+ $cert = New-SelfSignedCertificate -DnsName $SubjectName -CertStoreLocation "Cert:\LocalMachine\My"
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [New-SelfSignedCertificate], Exception
    + FullyQualifiedErrorId : System.Exception,Microsoft.CertificateServices.Commands.NewSelfSignedCertificateCommand

In PowerShell 4, New-SelfSignedCertificate does not have a settable property for the provider, and all the google-fu I could muster was not turning anything up. But I did notice one particular pattern that put me on track to a fix.

I noticed that when I first started a VM fresh, the script would inevitably fail. Run it again and it would still fail, yet I could run it from within the VM fine. Then I noticed that I could run it remotely if a user was logged in. So a few hours were spent spinning up fresh VMs and testing conditions until I was satisfied that the only way the script would run via Ansible was when a user was logged in. Speaking to the tech guys who look after the servers, it seems that when vSphere creates a shell connection it is only a partial connection and doesn't initiate a user session, and it appears that New-SelfSignedCertificate requires a valid user session to validate the certificate against.

So the fix after that was fairly easy.

I found that you can create a session in PowerShell and invoke a command against it. So I ended up with this:

Param (
  [string]$username,
  [string]$password
)

# Build a credential and open a session against localhost, giving the
# script a real user session to run under
$pass = ConvertTo-SecureString $password -AsPlainText -Force
$mycred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $pass
$appsession = New-PSSession -ComputerName localhost -Credential $mycred

# Run the remoting setup script inside that session, then tidy up
Invoke-Command -Session $appsession -FilePath C:\Windows\Temp\ConfigureRemotingForAnsible.ps1
Remove-PSSession $appsession

So my with_items now looks like this:

with_items:
  - " -command Set-Content ConfigureRemotingForAnsible.ps1 @'
{{ lookup('file', 'files/setup.ps1') }}
'@"
  - " -command Set-Content run-session.ps1 @'
{{ lookup('file', 'files/run.ps1') }}
'@"
  - " -command \"& .\\run-session.ps1 -username {{ vguest_admin_user }} -password {{ vguest_admin_password }} | Out-File .\\powershell_output.txt\""

Now Ansible can talk to the vCenter VMs and start doing things properly with them.

Ansible Tower and vSphere: Talking to a Windows Server 2012 with no IP address

So far this week has been very productive and exciting. There are still many things up in the air right now, but my priority for this week is to integrate Ansible Tower with vCenter, then create, spin up and provision a Windows 2012 R2 server.

I started the week by upgrading Tower from 2.4.5 to 3.0.1. Running ./setup.sh took it right through without a hitch. Logging into the Tower front end, I was pleased with the cleaner, more professional dashboard and icons. Not just that, but the layouts of some of the forms are far better than in previous versions. Well done, Red Hat!

Ops gave me my own vCenter to play with last week, and with only 11 days left on my Tower licence I felt it prudent to get cracking. As I have come to expect from Ansible, the documentation was clear enough, with good examples that I could copy and paste into a playbook. Edited in Atom and pushed to the Git repository, I was good to go.

The Tower project had already been set up to point at a locally hosted Bitbucket SCM, and when I created my first test playbook to create the vCenter guest, it pulled those changes and I was able to select the playbook in the job template.

To generate the right amount of dynamic information for the vSphere guest, I added fields to the custom survey, some pre-filled but still editable. But on my first run, I hit a snag: it told me I had to install pysphere.

pip install pysphere

Run again, and now it's cooking. After about five minutes it passed, and going into my vSphere client, it had indeed created the guest VM from the predefined template Ops put there.

This is a successful first stage, but there's still a ways to go. I still have to provision the guest!

Initially the guest is sitting there with no network connectivity: the vCenter resides in a server VLAN which has no access to a DHCP server, so the box automatically picks a 169.254.x.x (APIPA) address. How do you get an IP address onto a guest VM which can't be connected to directly from Ansible Tower?

Some emails to Red Hat and a bit of googling turned up the wonderful module vmware_vm_shell. OK! Now we're talking. I now have a way to interface with the guest through vCenter, direct to its shell.

Before I continue, I will mention another dependency: vmware_vm_shell uses pyVmomi, so you will have to install that too.

pip install pyvmomi

We can now access PowerShell and set the IP address through it with this handy role and one-liner:

- name: Configure IP address
  local_action:
    module: vmware_vm_shell
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    datacenter: "{{ vcenter_datacenter }}"
    vm_id: "{{ vguest_name }}"
    vm_username: "{{ vguest_admin }}"
    vm_password: "{{ vguest_password }}"
    vm_shell: 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe'
    vm_shell_args: " -command (Get-NetAdapter -Name Ethernet | New-NetIPAddress -InterfaceAlias Ethernet -AddressFamily IPv4 -IPAddress {{ vguest_ipv4 }} -PrefixLength {{ vguest_mask }} -DefaultGateway {{ vguest_gateway }}) -and (Set-DnsClientServerAddress -InterfaceAlias Ethernet -ServerAddresses {{ vguest_dns }})"
    vm_shell_cwd: 'C:\Windows\Temp'
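
For readability, here is that one-liner unrolled into plain PowerShell, with example values in place of the Ansible variables (all the addresses are hypothetical):

# Set a static IPv4 address, gateway and DNS server on the 'Ethernet' adapter
(Get-NetAdapter -Name Ethernet |
  New-NetIPAddress -InterfaceAlias Ethernet -AddressFamily IPv4 -IPAddress 192.168.10.50 -PrefixLength 24 -DefaultGateway 192.168.10.1) -and
(Set-DnsClientServerAddress -InterfaceAlias Ethernet -ServerAddresses 192.168.10.10)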

Now that we have an IP address on the Windows server, Ansible can talk to it. Or can it?

In my earlier experiments with Vagrant and Ansible, one of the first things I did in the provisioning shell command was run a PowerShell script to enable PowerShell remoting over WinRM. And here we hit another hurdle: the vCenter I'm developing against has no access to the domain, so I'm stuck for accessing any network resources, yet I have to run a PowerShell script on the guest which lives in the playbook assets on the Tower server.

It's a multi-line shell script, so I can't just pass it through the args on vm_shell. Or can I?

Turns out I can. Placing the ConfigureRemotingForAnsible.ps1 script into the $role/files directory makes it available to the role for funky things like the one I'm about to do.

So as not to duplicate the above block, I added a with_items and moved the shell_args I'd written earlier into the list to join its new siblings:

    vm_shell_args: "{{ item }}"
    vm_shell_cwd: 'C:\Windows\Temp'
  with_items:
    - " -command (Get-NetAdapter -Name........"
    - " -command @'

{{ lookup('file', 'files/ConfigureRemotingForAnsible.ps1') }}
'@ | Set-Content ConfigureRemotingForAnsible.ps1"
    - " -File ConfigureRemotingForAnsible.ps1"

Let's talk about what I've done here and why the 2nd command looks so odd. You'll notice that I'm using something called a Here-String (which is what the @' '@ is all about). This allows you to insert formatted multi-line text into a variable. But why the double line feed?

Ansible Tower should be running on a CentOS 7 box (if you managed to get it running on Ubuntu then well done you, but I didn't have the time to figure that out, so CentOS 7 is what I'm working on). Windows and Linux handle line feeds and carriage returns differently, which is why you get all kinds of odd behaviour opening some files in Notepad that look fine in other editors.

The Here-String requires you to start the text block on a new line (at least on 2012 R2 it does), but because of the CR/LF discrepancy, a single Linux line feed would be classed by Windows as the same line. So double the feed, and you now have a Here-String that is piped into Set-Content and stored as a .ps1 in the C:\Windows\Temp folder.
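
If you haven't met the construct before, here is a minimal standalone illustration of a single-quoted Here-String in PowerShell (the file name is just for demonstration):

# The closing '@ must start its own line; everything between is stored verbatim
$block = @'
line one
line two
'@
$block | Set-Content C:\Windows\Temp\example.txt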

The 3rd command then runs that file, setting up the PowerShell remoting. It sounds easy, but believe me, it took me the better part of a day to get this figured out.

The final step was to prove that Ansible could provision the guest environment. Again, not a straightforward task, but with a very easy solution. The simplest method of provisioning is to add a feature, and there is already an IIS example for win_feature, so I copied it into a new role and added the role to the create playbook. But this alone is not going to work, because currently the playbook uses hosts: localhost, and we need to point it at the guest for the next stage of provisioning.

This is how my top-level playbook looks:

---
- hosts: "{{ target_host }}" # set to localhost
  gather_facts: no

  roles:
    - vsphere
    - networking
    - gather_facts

- hosts: virtual

  roles:
    - install_iis

Did I just change hosts in the middle of a playbook? I done did! Yes, you can change hosts in the middle of a playbook. But where does it get the virtual hosts from?

See the 3rd role in the 1st block? gather_facts. There's a neat trick I did here.

- vsphere_guest:
    vcenter_hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    guest: "{{ vguest_name }}"
    vmware_guest_facts: yes

- name: Add Guest to virtual host group
  add_host: name="{{ hw_eth0.ipaddresses[0] }}" groups="virtual"

Using the same vsphere_guest module, I gathered facts about the guest and used those facts to add it dynamically to the virtual host group. Theoretically I could have taken the address from the {{ vguest_ipv4 }} variable, but this way looks a lot more awesome.
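
For reference, the variable-based version would have been a one-liner too, just a less satisfying one:

- name: Add Guest to virtual host group
  add_host: name="{{ vguest_ipv4 }}" groups="virtual"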

We're not out of the woods yet, though. Simply adding the guest to the virtual group won't get you a connection: Ansible will try to connect, but with SSH. We need to remind it that this is a WinRM connection, and the best way to do that is with group_vars. Create a new $projectroot/group_vars/virtual.yml and add this option:

---
ansible_connection: winrm
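
Depending on how the WinRM listener and accounts are set up, you may also need to give Ansible the port and credentials in the same group_vars file. On the version I was running, that looked something like this (the user and password values here are hypothetical):

ansible_ssh_user: Administrator
ansible_ssh_pass: "{{ vguest_admin_password }}"
ansible_ssh_port: 5986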

No further configuration was needed after that: Ansible Tower connected to the guest via WinRM over the new IP address and, without so much as breaking a sweat, installed an IIS server via the win_feature module.

- name: Install IIS
  win_feature:
    name: "Web-Server"
    state: present
    restart: yes
    include_sub_features: yes
    include_management_tools: yes

So in summary, I now have:

  • Ansible Tower running on a CentOS 7 Linux server
  • Communicating with a VMware vCenter hypervisor
  • Pulling playbooks from a locally hosted Bitbucket (Stash)
  • Spinning up a guest VM from an existing template
  • Setting up the IP configuration
  • Enabling PowerShell remoting
  • Adding features

All with a few clicks of a mouse button. I would say that today has been a good day.