Tag Archives: automation

Quicker switching of active docker machines


As it stands today, with the docker machine command you have to manually specify environment variables for DOCKER_HOST and DOCKER_AUTH.

So the process would be:
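Roughly, the manual flow looks like this (a sketch; the URL comes from the `machine url` output, and the DOCKER_AUTH value here is illustrative):

```shell
# 1) Ask docker machine for the endpoint URL of the target machine
machine url dev01                          # prints e.g. tcp://192.168.1.50:2376

# 2) Export the variables by hand so the docker client targets that machine
export DOCKER_HOST=tcp://192.168.1.50:2376
export DOCKER_AUTH=identity                # illustrative value

# 3) Verify you are now talking to the remote machine
docker ps
```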

This is a bit of a pain when doing it manually, so I was looking for a quicker way to switch back and forth and I think this works pretty well, though not totally elegant.

I started with a shell script which contains the following.  It takes the machine name as input, outputs the syntax to a script, and sources it.  I had to do it this way; otherwise the variables would only be changed for the script itself, not for the current user session.

So you would run it with:
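The original script isn't shown here, so this is a minimal sketch of the approach described: write the export lines to a file, then source it so the calling shell (not just a child process) picks up the variables. The script name, temp file path, and DOCKER_AUTH value are my own placeholders:

```shell
#!/bin/bash
# docker-switch.sh (placeholder name) -- switch the active docker machine.
# Writing the exports to a file and sourcing it is what lets the current
# shell session pick up the change; a child process could not alter it.
MACHINE_NAME=$1
echo "export DOCKER_HOST=$(machine url "$MACHINE_NAME")" >  /tmp/docker_env
echo "export DOCKER_AUTH=identity"                       >> /tmp/docker_env
source /tmp/docker_env
```

Because the variables must land in your current session, you invoke it with the source (dot) operator, e.g. `. ./docker-switch.sh dev01`.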

Let me know if you have a better way!

Tagged , , ,

Tech Preview of Docker Machine Driver for Fusion

(Also, checkout the vSphere driver here.)

I kind of beat up on the vSphere driver quite a bit in the last post (sorry guys!), so I wanted to give a super easy example of what else you can do with machine.  I just (finally) watched the DockerCon keynote where they introduced the machine functionality, and their messaging about getting from "zero to docker" in just a few commands resonated.  This example shows what the vision is.

So here we go – using docker machine with VMware Fusion as the endpoint on OSX.
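For reference, the "zero to docker" flow with the Fusion driver is along these lines (a hedged sketch: the driver name and machine name are illustrative, so check `machine create -h` for the real flags):

```shell
machine create -d vmwarefusion dev01      # provisions a docker VM in Fusion
machine ls                                # confirm it shows up
export DOCKER_HOST=$(machine url dev01)   # point the docker client at it
docker run busybox echo "zero to docker"
```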

You may need to click on the video and watch it in theatre mode to see the text.


Tagged , , , , ,

Tech Preview of Docker Machine Driver for vSphere

UPDATE:  Machine is now out of beta, and I have a newer post on some of the changes here: http://www.jaas.co/2015/03/20/using-the-released-version-of-docker-machine-v0-1/

(Also, checkout the Fusion driver here.)


On Dec 4 2014 Docker announced the “machine” command/functionality as one of the announcements at DockerCon 2014.

In short, it provides a way to deploy to many different locations from the same docker CLI interface (yes, I know that's like saying "ATM machine", just deal with it).  Their initial alpha release includes drivers for VirtualBox and Digital Ocean (though as of Dec 18, their GitHub page already lists additional syntax for AWS & Azure, though I'm not sure if that is functional yet).

The next day on Dec 5 2014 VMware announced a preview of their extension to this Docker Machine command for deploying to VMware Workstation, vSphere and vCloud Air.

I have been using the vSphere part of it a bit this week and found the existing instructions a bit lacking so I wanted to provide some tips and examples to get up and running.

Things to know – but maybe come back to this list later….
First off, a few takeaways I learned the hard way, which included bugging a very smart dude for help.  A few of these may not make sense until you dive into the functionality, so you may want to revisit this section later if you have trouble.

To be clear, I am not posting this list as all the blemishes I found, but as a guide to help anyone else that is struggling to see this vision that this functionality has the potential to bring.  Remember – this is a preview release of the VMware driver, and an alpha release of the Docker code.  Totally unsupported in every meaning of the phrase.


1) Understand that for this release you need to use the bundled docker binary, as it has some functionality that the newest release you'll get from package managers doesn't have.  To get it to co-exist on a system that already has docker-io installed, either specify the full path to the binary you want, or make sure the $PATH env variable is set so it picks up the one you want first.  I also copied my released docker binary to docker_local, so I could easily run that command if I wanted to switch to a local docker container instance.

2) This release of the machine command requires the use of environment variables to specify the active host.  When you run machine ls it will list all the existing docker machines available.  It also specifies the "active" one.  I'm not in the loop on the dev details here, but I assume this will be cleaned up in the future.  Even though it says active, you still have to set the environment variable.  Either pull the URL from the machine url machinename command, or use this nested command:
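The nested command was along these lines (assuming the alpha's `machine active` subcommand, which prints the active machine's name):

```shell
# Resolve the active machine's name, then its URL, in one line
export DOCKER_HOST=$(machine url $(machine active))
```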

Do note that this is required.  One stumbling point: I couldn't find a way to make this work when I don't have a real tty session, like when vCO makes an SSH call in a workflow.  TBD there……

3) As a follow-up to the two points above, if you DO want or need to switch between using docker machine and a local docker binary, you need to clear out the environment variable with:
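Clearing it is just:

```shell
unset DOCKER_HOST    # docker falls back to the local unix socket
```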

Yes kind of annoying for now, but I’m sure this will be fixed soon enough.  They are probably discussing this issue here on github.

4) I'm not sure how many others use local docker registries out there, but I do quite a bit for lab environments.  As with this other post I made about the change to forcing SSL communication, it took a moment to figure out how to force the configuration setting on each docker machine.  The really smart guy I alluded to previously built me a boot2docker ISO with it embedded, so that's an option, or you could just manually apply it like this:
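A manual approach that should work on a boot2docker-based machine (the profile path is boot2docker's convention; docker-reg:5000 stands in for your registry hostname):

```shell
# Inside the docker machine VM (e.g. via machine ssh):
# tell the daemon to allow plain-HTTP access to the local registry
echo 'EXTRA_ARGS="--insecure-registry docker-reg:5000"' | \
  sudo tee -a /var/lib/boot2docker/profile
sudo /etc/init.d/docker restart
```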


5) I had quite a few problems with a space in the datacenter name and special characters in passwords.  There may be a workaround, but simply escaping them didn't work, so I just renamed the datacenter and used an account with all simple characters.  Remember….alpha code….

6) Yes, it downloads the ISO every time you run the machine command.  I don’t know why. Go ask docker.  Because, alpha.

7) I even hesitate putting this one here… but in my personal lab it kept failing when transferring the ISO to the datastore, even though it works fine in another lab environment I use.  Probably some ghetto routing issue of my own….  I worked around it by uploading the ISO to the datastore by hand.  Even though, as I said in #6, it downloads the ISO every time, if it already exists in the datastore it doesn't try to push it again.

Step by Step

This syntax is accurate as of Dec 18 with CentOS 6.6 64bit.

Grab the tarball:

GRUMBLE GRUMBLE… whoever compressed this didn’t include the directory…. so it extracts to the present working directory……
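Since the tarball extracts into the present working directory, make a directory first. The URL placeholder below stands in for the actual download link from VMware's announcement:

```shell
mkdir ~/docker-machine-vmware && cd ~/docker-machine-vmware
curl -L -o machine.tar.gz "$TARBALL_URL"   # TARBALL_URL = link from the announcement
tar xzvf machine.tar.gz                    # extracts right here, hence the mkdir
```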

Append the directory you extracted to your path environment variable.  Lately I've been using ~/.bash_profile to set individual settings per user on each host; your results may vary.
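For example, appended to ~/.bash_profile (adjust the directory name to wherever you extracted the tarball):

```shell
# ~/.bash_profile -- pick up the bundled machine/docker binaries first
export PATH=$HOME/docker-machine-vmware:$PATH
```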

And now to fire it off for the first time:
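The create command looked roughly like this. All the values are made-up lab examples, and the flag names are illustrative for the preview driver, so check `machine create -h` for the authoritative list:

```shell
machine create -d vmwarevsphere \
  --vmwarevsphere-vcenter vcenter.lab.local \
  --vmwarevsphere-username administrator \
  --vmwarevsphere-password 'Simple123' \
  --vmwarevsphere-datacenter Lab \
  --vmwarevsphere-datastore datastore1 \
  dev01
```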

And with any luck you should see a VM pop up named docker-host-ABCDEFG  (that last part is random).  If you get errors, read over the ‘PRO-Tips’ at the end, and ‘Things to know’ at the top.

Now to list the current machines, run:

Set the required environment variables with:
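Putting those two steps together (dev01 being whatever name machine ls reports):

```shell
machine ls                               # note the NAME and ACTIVE columns
export DOCKER_HOST=$(machine url dev01)  # target that machine's daemon
```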




Now! The magic is happening!  Run a normal docker command like:
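For example:

```shell
docker run -d -p 80:80 nginx   # spins up on the remote vSphere VM, not locally
```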

And see the magic happen for reals.   This image is being deployed on this new docker “host” which is actually a barebones vSphere VM.


1) If you are doing a bunch of trial and error, you may see a message that the docker host already exists, even though it may or may not have actually been deployed.  This is because even if the create command fails, the machine still gets added to the local machine list.  Clean it up with machine rm -f machinename (the -f forces the removal even if the actual VM doesn't exist).

2) If you get an error message similar to “FATA[0086] open /root/.docker/public-key.json: no such file or directory”  just run the docker binary included here and it will create this file for you.

3) I crafted a pretty sweet bash script to nuke all machines at once.  Add the -f flag to force if you have to.  It works as such:
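My script isn't posted here, but the core of a nuke-everything one-liner is along these lines (it skips machine ls's header row and force-removes each name):

```shell
# remove every known docker machine; -f also drops list entries
# whose VM never actually deployed
machine ls | awk 'NR>1 {print $1}' | xargs -r -n1 machine rm -f
```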



So what does this give us?  In my mind, it gives us a simple interface you may already be familiar with and using on your local machine, plus the ability to deploy to any number of other endpoints like public or private clouds.  That's powerful.  Especially with any automation you have already created: slipping this into the mix makes it even more robust.

This post was heavy on text and light on screenshots on purpose as it’s a complicated subject in this state of development.  I hope to put together a quick video to demonstrate this functionality soon.  Stay tuned.



Tagged , , , ,

Local Docker Registry Update

It appears since I last wrote about creating a local and persistent Docker registry on CentOS they changed the default behavior to force secure communication.   In basic environments like I use and build in a lab, SSL is just a headache best left alone.

Doing docker push now with docker version 1.3.2 I get the error:

The best solution I found was to add this option to /etc/sysconfig/docker like the following [1] [2]
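On CentOS 6 with docker 1.3.x the daemon options live in /etc/sysconfig/docker, and the relevant flag is --insecure-registry (docker-reg:5000 stands in for your registry host; the variable name may differ by package version):

```shell
# /etc/sysconfig/docker
other_args="--insecure-registry docker-reg:5000"
```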

Restart Docker, and then all is well in Docker push land once again.



Tagged , , ,

New vCAC & Application Services 6.1 template prep script (linux)

UPDATE: Dec 9 2014 – vCAC is renamed to vRealize Automation (vRA).   vRA 6.2 is dropping today and the pre-req script is posted here.

UPDATE: Dec 16 2014 – Doh!!  I was multitasking too much when I posted that last update.  The pre-req script wasn't the point of this original post, but it is still useful nonetheless.  To recap: the pre-req script is to ease setting up a vRA IAAS machine.  The template prep script is to ease setting up a Linux template to be used _WITH_ vRA.
A great tool that flew under my radar in the most recent 6.1 release for AppD….er…Application Services and vCAC proper… is a script that does all the steps to prepare a linux template for you for both agents.   If you are at all familiar with this process, you’ll find it to be a huuuuuuuge time saver.

Original Post:
If you look at the documentation, it's quite cumbersome and full of potential human-error points.  This script will check all dependencies, install them where it can, and either prompt for the appropriate server names or accept inline input.

Getting Started
First pull the script off the AppD server and make it executable.

How to use it – Interactive

If you want to just dive in, run it in interactive mode:

How to use it – Unattended

I update templates quite often in the lab environments I work in, so I like keeping a quick reference in a note that I can quickly cut & paste from.  Now that this script accepts inline inputs I could gain an extra sysadmin merit badge and just drop it into a shell script in a common place across all templates and just manually run that.  Easy Peasy.

So here’s the help page:


Here's what I would run, which tells it the three server names, not to install Java, not to check SSL certs, a timeout of 300 secs, and not to prompt for confirmation:

The last line is a handy step that prevents CentOS templates from incrementing the NIC number when cloning.  There may be a better way, but it works.
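That step is likely the familiar udev cleanup: on stock CentOS 6, the cached MAC-to-ethN mapping lives in a rules file that you delete before templating, so clones enumerate eth0 again:

```shell
# forget the template's MAC addresses so clones come up as eth0
rm -f /etc/udev/rules.d/70-persistent-net.rules
```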


… it does its thing… and finishes with:



Now you’re ready to shut it down, take a snapshot, start data collection, and update your blueprint!

Tagged , ,

How to use a local persistent Docker registry on CentOS 6.5

UPDATE: Dec 16 2014, I found a new option is needed now using Docker version 1.3.2   See more here

There a bunch of blogs out there showing a tutorial on how to use a local docker registry but none of them (that I have found) have it boiled down to the absolute simplest syntax and terms.   So here you go!

First off, terminology: Docker Hub is where images are typically pulled from when you type the normal "docker pull blah" commands.  A local instance is referred to as a registry, rather than a hub or repo.  To save time and bandwidth, here is how you can stand up a persistent local registry to store your images.  Persistent, meaning the image data is kept after the registry container is discarded.

Syntax here is working on CentOS 6.5

1) Install the needed bits.  This is no different than normal.

2) Start docker
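On CentOS 6.5, steps 1 and 2 were roughly the following (docker-io ships in EPEL; the EPEL release RPM version may differ):

```shell
# install EPEL, then docker, then start the daemon
rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum -y install docker-io
service docker start
```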

3) Fire up the example registry.  This downloads and runs the registry image, exposes port 5000, and links local dir /opt/registry to /tmp/registry within the container.   This is key.  Otherwise, after the container stops the images go poof.
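That description maps directly to:

```shell
# -v keeps the image data on the host, so it survives the container
docker run -d -p 5000:5000 -v /opt/registry:/tmp/registry registry
```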

4) We could do this locally on this first machine, but we’ll show the syntax from somewhere else to illustrate.   On some other machine, first do the same install steps above to install the EPEL rpm and install Docker.   Then pull the images you want:


5) List the images, and we see this image separates out a few versions of the OS.  CentOS 7 is the latest (see how the IMAGE ID matches 87e5b…), CentOS 6 is image 68edf.., and CentOS 5 is 504a65…


6) Add some tags to give the images a new identity.  Replace "docker-reg" with your docker registry hostname.
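Using the (abbreviated) image IDs from the listing above, with docker-reg as the registry hostname:

```shell
docker tag 87e5b  docker-reg:5000/centos:centos7
docker tag 68edf  docker-reg:5000/centos:centos6
docker tag 504a65 docker-reg:5000/centos:centos5
```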

7) List images again to verify


8) Now push the tagged images to our local registry
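Pushing the repository sends all of its tags in one go:

```shell
docker push docker-reg:5000/centos
```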

9) Lastly, on some third machine with docker already installed (this post makes that handy by deploying these nodes as a catalog item), pull the image.  Notice it's WAY faster now: ~14 seconds in this screenshot.  Notice we only have the latest centos tagged; just pull the others and you're good.

Compare this to pulling from Docker Hub at about ~2:20.  For a large image like the SpringTrader app I built, this would cut down an hour-long download time dramatically.

I wanted to compare this with the SpringTrader app, so I pulled it earlier and began pushing it to my local registry.  One thing I noticed is that it buffers the image to disk when you push, so be aware you will need the disk space (and time) available for this.  The time savings happen later on subsequent deployments.  And I crashed my VM when running out of space the first time….

Then on some other node

Boom.  In about 10% of the time it normally takes to deploy this image she’s up and running!  It took about 13 minutes to download from Docker Hub, and about 2 minutes from my local registry.  That’s a win if you have a need to do this over and over.



Where are the images stored?

On a docker node (the term I've been using for the base machine, not the containers themselves), I found the docker files here.


On the local registry, I found the files here.  Remember we told it to use /opt/registry on the base machine and map that to /tmp/registry within the container?


Tagged , , ,

Directory as A Service: Part 2 – vCAC Integration

Directory as A Service: Part 1 – Intro & Setup
Directory as A Service: Part 2 – vCAC Integration


In the first post I introduced JumpCloud, a hosted off-premise directory service.   In this post I will show one way to integrate it into vCloud Automation Center (vCAC).

Getting Started
I got started with re-using a simple CentOS single machine blueprint I had already configured and which does a few configurations on boot already:


For simplicity, I like using the scripted method in a build profile to do simple stuff.  I don't take any credit for this configuration, as I totally just copied what others have done before me (sorry, I don't have the link to the exact post handy).  This build profile mounts a network share and runs a DNS script from that share to automatically add the new machine to my DNS.


Now, the additions we'll make to integrate with JumpCloud are as follows.  I split it into three scripts because I built this iteratively using some of their example code, but it could just as easily have been done in one.  I'll refer to the script numbers as they appear below for clarity:

I started from JumpCloud's example code, and am posting my modifications to GitHub here.



Script 2 – Installs the agent (we saw that syntax in the first post)


Script 3 – Assigns tag(s) to the system being deployed


Script 4 – Sets the configuration of enabling password auth, enable root, and enable multifactor

Go ahead and do the normal vCAC configurations and end up with a catalog item for a JumpCloud-enabled CentOS machine.  I have not done anything else special to prompt the user for values here.  I could see it being useful to integrate this into ALL catalog requests and give the user the choice to enter (or choose from a list of) tags, or the authentication choices (password, public key, root…).


Let’s go ahead and request three nodes:



Shortly after the three machines show up in vSphere:


And when the deploying machine stage is complete, they show up in vCAC:


And it shows up in the JumpCloud console:


And the same method of authentication now works as we discussed previously.  Using google authenticator to log into one of my machines is pretty darn awesome I have to admit.



What now?
Now, what about day 2 operations?  Well, I could add some menu actions to add/remove the system from tags or modify the other settings; that could be useful.  But at a bare minimum I wanted tear down to be clean.  Since I was able to get machine creation totally automated, I wanted cleanup to be as well.  Since vCAC is a self-service portal, you wouldn't want the JumpCloud console to be full of machines that no longer exist.

You don’t want your console filling up with old machines like this


Because I took the easy way out and used the Build Profile scripting method of customizing machines from vCAC, I had to go a different route for tear down.   The easiest way to inject automation at this stage today with vCAC 6.1 is firing off a vCO workflow by using a machine state workflow.   So first I built a simple vCO workflow that runs the SSH command on the machine:


I had to look it up to get the syntax right, but the input parameter you want to use is "vCACVm" of type "vCAC:VirtualMachine".  From that variable you pull the VM name (vCACVm.displayName) and use it to know where to run the SSH command.  Simple and effective.




Why shell scripts on each node and not full vCO automation?
Normally I wouldn't bat an eye and would do all of this from vCO itself: create the JumpCloud endpoint as a REST host, create various actions as REST operations, etc.  The script just reaches out to a REST API, after all.  The first reason was, well, they already had demo scripts available that work completely.  But really it is because authentication is done from each individual node, and each node appears to have a unique system key, which is used in the authentication to the REST API.  This may need to be revisited for true enterprise functionality like central provisioning, IPAM, or even synchronizing directory services.  But I digress….

Implementing custom machine stages
Back to the vCO bit.  We need to run the library workflow below, "Assign a state change workflow to a blueprint and its virtual machines", where we specify the blueprint we are using, the vCO workflow we want to run, and the machine state at which we want it run.


And bam, a custom property gets automatically added to the Blueprint itself.  Since we want this to happen when the machine is destroyed, we chose the Machine Disposing stage.  The long ID starting with 8be39.. is the vCO workflow ID.  You may have encountered this if you ever needed to invoke workflows from the vCO REST API, like here.  This library workflow is most useful for more complicated integrations with lots of values being passed, but hey, it saved a little time for us here.



Try it out
Now unless I missed documenting a step here (as I’m writing this after I built the integration), all we have to do is destroy the machine like we would normally and it quickly disappears from the JumpCloud console.


And there we go.  Full integration for login access control to my lab environment machines provisioned from vCAC.  Honestly, it's handy enough that I may even keep using it for a few machines.


Tagged , , , ,

Directory as A Service: Part 1 – Intro & Setup

Directory as A Service: Part 1 – Intro & Setup
Directory as A Service: Part 2 – vCAC Integration

I have been playing with an interesting new service from JumpCloud, a startup based just down the road in Boulder, CO.  In their own words:

“JumpCloud’s Directory-as-a-Service (DaaS) securely connects employees and IT resources through a single, unified cloud-based user directory. It is the single point of authority and authentication for a business’s many employees and access rules.”   link

I take this to mean it is a hosted directory service.  Interesting concept, which I bet is met with a ton of resistance from those that fight off-prem services, but I'll leave that topic for later discussion and focus on the technology right now.  I wanted to see how I could integrate this with VMware's vCAC, so that is what I built.  I'll split this into two posts: this first one covers setup, the second the integration.

First Impression
I have to admit, I really enjoy companies that make it super easy to try out their offerings.  JumpCloud offers 10 managed nodes for free, then gives you the exact one-line syntax for deploying the service at the command line with your account credential already in place.  They also have full examples for Puppet and Chef, similarly configured with your credential.  Literally cut, paste, go.  But more on this later.

Walkthrough of first time use

When you first login to the console you are met with a simple interface and nothing configured.  Let’s walk through initial configuration.

The first step seems to be to add users:


Once we add our user, Mountain, we see his account is in a pending state.


When the Mountain checks his email, he’ll see the activation message.


And when he clicks on it he can set his own password


And lastly, The Mountain is automatically presented with a multifactor authentication code that you can scan directly into Google Authenticator.  This is a killer feature in my opinion!

[ don’t worry about trying to steal these credentials, it won’t get you anywhere! ]

OK, now that the account is setup we see we have one more notification for this user:


Tags seemed a little confusing to me when I got started.  They appear to be the only grouping mechanism, so they are how you associate users with systems.  My guess is you would assign your developers to a development-machines tag, and your system administrators to some sort of all-machines tag.  I went ahead and created the tags that JumpCloud used in a few of their demo scripts.

We set up a tag for all servers, and give the Mountain access


We continue on and create a few more


Now the cool part.  Remember when I mentioned I love when companies give simple ways to try a service?



So we cut/paste this syntax into a newly provisioned CentOS VM and it does everything for you


They have some sort of dynamic HTML on many of the console web pages, so when this command is run, the previously empty screen is replaced with a system listing.


Notice we do get an alert that no tags are automatically assigned to the system.  I'll explore this in my next post on integrations, but for now we do it by hand:


I'm not clear what's going on behind the scenes (whether it's a push to the agent, a recurring check-in, or something else), but shortly after, we see that the mountain is added to the passwd file on this centos machine:


Now if The Mountain tries to login at this point, he will be denied.  Why?  Because if we look at the system details, we see the default configuration is locked down pretty tight, allowing ONLY public key authentication.


So I go ahead and click each of those buttons to allow root, allow password auth, AND allow multifactor (because I like to be safe and dangerous all at the same time….it's a lab after all).  The Mountain is now happy he can login.  Notice the prompt for the multifactor token WITHOUT ANY OTHER MANUAL CONFIG ON THE SYSTEM.  That. is. awesome.



Other stuff?
That's it for the basics.  They have additional functionality to configure sudoers via the JumpCloud console, but I'll leave that alone for now.

Overall thoughts
Given how young this startup is, I will give a pass on the few negatives I encountered (like UI problems in Safari, and missing features I would want to see, like simple user grouping, on-premises AD integration, and Windows host support).  What they have now works pretty well, and they make it super easy to use.  It took me a little while to find the API information, but when I did I was able to do the automation I will show next.  In short, this technology has promise either for the small environment that has absolutely no on-prem infrastructure, or for any sized organization to help strictly with access control on systems.

Full disclosure: I didn't just happen to stumble upon JumpCloud, as I know their Chief Product Officer from the local cycling community here in Colorado, though I am not being influenced with free bike parts or beer to give their service a whirl.  Yet.

Tagged , ,

Docker as a Service via vCAC: Part 1

This project started with the question: using VMware management software today, what would it be like to manage and provide self-service access to Docker containers right alongside the traditional virtual machines and other services?  So that is what I explored and that is what I have built here so far.  This is just going to be part one of… many, probably… as I develop the ideas further.
What does this solution aim to do?
This solution elevates a docker based container to somewhat of a “first class” citizen, in that it sits alongside virtual machines in the self service vCAC catalog.

Really? You are able to do this today?
Well… mostly.  More work needs to be done to make it more functional.  But as of this part 1, provisioning is working great, and monitoring too (thanks to work from a co-worker that will be written about later).  Anything further, like day 2 operations or just tear down (of the containers), is currently manual.  But still possible.

So walk me through it?
There's a single machine blueprint that deploys a CentOS machine and installs docker.  I needed a way to identify these machines as a special entity, so I went with vSphere tags for now.  Using the vCAC extensibility functionality, I also have it fire off a vCO workflow that calls PowerShell (talked about here) to add the vSphere tag.  Not elegant, but it works; this will be improved later.  So now that a machine exists, there are additional catalog items for various services, like the demo application SpringTrader, or simply MySQL, Postgres, Ubuntu, etc., that run a vCO workflow to deploy the image onto one of the existing docker nodes.  Currently it picks a node randomly, but with some additional effort I plan to implement a (very) poor man's DRS and utilize either a Hyperic plugin that my team is working on, or maybe just query cpu/memory directly to choose the node.

OK tldr;  Boil it down!?
First, docker nodes are deployed from vCAC blueprints.  Then vCO workflows identify those nodes via vSphere tags and deploy the requested docker image.


Single machine blueprint in vCAC



“DockerVM” tag in vSphere denotes the docker capability machines



Service Blueprint for various docker images



You can specify items like the image name, the port to expose, the bash command, etc..



Slight smoke and mirrors in the screenshot – this is using a local docker registry so don’t be confused by that.


…and to prove she’s working

What’s next?
I (and others) are working on ways to tear down the containers, tear down the nodes en masse, automate discovery and monitoring of the containers, and so on.  Currently there's not even a way to know where an image was deployed; to come!

Can we have the workflows, scripts, etc…?
Not yet…!  But if it is fun for anyone, I do have the demo app SpringTrader available on Docker Hub if you want it.  It weighs in at a hefty 4gb.  Find it at jasper9/st_allinone.  There is no special sauce in here; it's simply the SpringTrader app documented here, built into an image all ready to go (very poorly built, I'm sure….).

Sweet! How do I run SpringTrader?
This should probably be a full post on its own.  But in short, this command will get it running.  Change the first 8080 to any open port if it's already in use.

docker run -t -i -p 8080:8080 jasper9/st_allinone /bin/bash -c '/startup.sh && /bin/bash'

Then see the web interface at:  http://ip.address:8080/spring-nanotrader-web/#login



Tagged , , , , ,

Using vSphere Tags via vCO

As vSphere 5.5 currently stands, the only way to interact with vSphere tags is via PowerCLI.  This leaves vCO out of the party without some effort to build it manually.  I am working on a solution where I wanted to include tags in some automation to enable some awesomeness, so I explored whether it was possible to expose this to vCO without huge effort.  Success!


It wasn't too difficult to build this.  The two most difficult parts were setting up the PowerShell host for vCO (google it… it's difficult… at least it was for me the first time around), and parsing the XML returned from PowerShell to vCO to get the data I wanted.  These workflows are a bit rough, but they work as a first draft.  For anything production caliber you'll want to evaluate the performance impact of hitting the PowerShell host this often, and definitely change the password field from a plain string (proof of concept!).

What I have built so far is a workflow “JaaS Tags- Add Tag” that accepts strings for the name of the tag, and virtual machine name.  This fires off powershell commands in a vCO action:


To show how it works running manually in the vCO client:



And to show that the tag is actually applied, you can find it in the Web Client:


Now, I also have a workflow to find VMs from the tag that is supplied.  I needed flexibility out of these in the solution I'm working on, so the output from this one is two arrays: one of the names as strings, and one of VC:VirtualMachine type.




Running this guy manually, just supply the tag name:


And to show the output parameters, you can see that in the vCO client:


Yay Tags!   Now you can include these workflows in other solutions to utilize tags in ways they weren’t intended.  Stay tuned for the solution I built this for.

I’ve posted an export of this package to FlowGrab, check it out here:  https://flowgrab.com/project/view.xhtml?id=d2623373-838f-4ee2-8d6e-c6f582cb452f




Tagged , , , , ,