Tag Archives: vco

Experiment: Pooling in vRA & Code Stream

Background

I recently attended DevOpsDays Rockies, which is a community-oriented DevOps conference (check them out in your area, it was great!).  I saw a talk by @aspen (from Twitter/Gnip) entitled “Bare Metal Deployments with Chef”.  He described something he/they built that, if I recall correctly, uses PXE/Chef/magic pixie dust to pull from a pool of standby bare-metal hardware and fully automate bringing it into a production cluster for Cassandra (or what have you).

This got me thinking about something I have been struggling with lately.  Whenever I develop blueprints in Application Director / Application Services, or just vRA/Code Stream, the bulk of the time I just hit the go button and wait, look at the error message, tweak, and repeat.  The bottleneck by far is waiting for the VM to provision.  Partly this is due to the architecture of the products, but it also has to do with the slow nested development environments I have to use.  We can do better…!

Products using pooling

I then started thinking about what VDM / Horizon View have always done with this concept.  If I recall correctly (it’s been years and years since I’ve worked with it), to speed up delivery of a desktop to a user, a pool concept exists so that there is always one available on demand.  I don’t have much visibility into it, but I am also told the VMware Hands-on Labs does the same – it keeps a certain number of labs ready to go so the user does not have to wait for one to spin up.  Interesting.

The idea

So I thought – how could I bring this idea of paying the deployment cost upfront to the products I’m working with today, and dramatically speed up development time?  And this is what I built: a pooling concept for vRA & Code Stream managed by vRO workflows.

Details – How Redis Works

When planning this out I realized I needed a way to store a small bit of persistent data.  I wanted to use something new (to me), so I looked at a few NoSQL solutions, since I’ve wanted to learn one.  I decided on Redis as a key-value store, and found Webdis, which provides a light REST API into Redis.

I couldn’t find any existing vCO plugins for Redis I/O, which is fine; the calls are super simple:

Example of assigning a value of a string variable:

The Redis command is: “set stringName stringValue”
So the Webdis URL to “put” at is “http://fqdn/SET/stringName/stringValue”

Then to read the variable back:

The Redis command is: “get stringName”
So the Webdis URL to “get” at is “http://fqdn/GET/stringName”

Easy peasy.  There is similar functionality for lists, with commands to pop a value off either end of the list.  This is all I needed: a few simple variables (for things like the pool size) and a list (for things like the list of VMs, storing IP addresses & names).
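To make that concrete, here is a minimal sketch of the same round trip from Python with the requests library, assuming Webdis is listening on its default port 7379 on a host called fqdn (the comma-separated VM record format is just my illustration, not necessarily what my workflows store):

import requests

WEBDIS = "http://fqdn:7379"

# Set a simple string variable, then read it back.
requests.get(WEBDIS + "/SET/poolTarget/10")
print(requests.get(WEBDIS + "/GET/poolTarget").json())  # {'GET': '10'}

# Push a VM record onto a list, then pop one off the other end.
requests.get(WEBDIS + "/RPUSH/vmlist/vm-042,10.0.0.42")
print(requests.get(WEBDIS + "/LPOP/vmlist").json())     # {'LPOP': 'vm-042,10.0.0.42'}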

So in vCO I just created a bunch of REST operations that use varying numbers of parameters in the URL line:

I found the most efficient way to run these operations was to parametrize the operation name and pass it to a single workflow that does the I/O.
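Roughly, the same pattern in Python would be one generic call that takes the command name and parameters and builds the URL (a hypothetical equivalent of mine, not the actual workflow):

import requests

WEBDIS = "http://fqdn:7379"

def redis_call(command, *params):
    # Build /COMMAND/param1/param2/... and return Webdis's result field.
    url = "/".join([WEBDIS, command.upper()] + [str(p) for p in params])
    resp = requests.get(url)
    resp.raise_for_status()
    return resp.json()[command.upper()]

print(redis_call("set", "poolTarget", 20))   # [True, 'OK']
print(redis_call("get", "poolTarget"))       # '20'
print(redis_call("llen", "vmlist"))          # 0, 1, 2, ...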

Details – Workflow(s)

The bulk of the work for this pooling concept is done in the following workflow that runs every 15 minutes.

In general it works like this:

  • Check if the workloads are locked – since it can take time to deploy the VMs, only one deployment will be going at a time.
    • If locked, end.
    • If not locked, continue.
  • Lock the deploys.
  • Get the pool max target (I generally set this to 10 or 20 for testing).
  • Get the current pool size (the length of the list in Redis, which is much faster than asking vSphere/vRA).
  • If the current size is not at the target, deploy until it is reached.
  • Unlock the deploys.
  • Profit.

I did not have to do it this way, but the nested workflow that does the actual VM deployments simply requests vRA catalog items.
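Sketched out in Python (my stand-in for the vRO logic, with my own key names, and with the check-then-lock steps collapsed into a single atomic SETNX):

import requests

WEBDIS = "http://fqdn:7379"

def redis(cmd, *params):
    url = "/".join([WEBDIS, cmd] + [str(p) for p in params])
    return requests.get(url).json()[cmd]

def deploy_vm():
    # Stand-in for the nested workflow: request the vRA catalog item,
    # wait for it to finish, then RPUSH the "name,ip" record onto vmlist.
    pass

def maintain_pool():
    # SETNX returns 1 only if the lock key did not already exist,
    # so only one deployment run goes at a time.
    if redis("SETNX", "poolLock", "1") != 1:
        return                                    # locked: end
    try:
        target = int(redis("GET", "poolTarget"))  # pool max target
        current = redis("LLEN", "vmlist")         # current pool size
        for _ in range(target - current):
            deploy_vm()                           # deploy until the target is reached
    finally:
        redis("DEL", "poolLock")                  # unlock the deploys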

In Action

With it fully working and the pool populated, you can check the list values with this type of Redis query:


Redis: lrange vmlist 0 -1 (0 to -1 spans the entire list)
Webdis: http://fqdn/LRANGE/vmlist/0/-1

The matching machines in vSphere:


In Action – Code Stream

Normally in a simple Code Stream pipeline you would deploy a VM by requesting the specific blueprint via vRA like this:


In this solution, I instead use a custom action to grab a VM from the pool and return its IP back to the pipeline as a variable.  Then I treat the VM like an existing machine, continue on, and delete it at the end.


This reduces the list in Redis by one, so the next time the scheduled workflow that checks the list size runs, it will deploy a new one.
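That grab-a-VM action boils down to a single list pop; roughly (the record format and names are, again, my illustration):

import requests

def claim_pooled_vm():
    # Pop one pre-provisioned VM off the pool and hand its IP to the pipeline.
    record = requests.get("http://fqdn:7379/LPOP/vmlist").json()["LPOP"]
    if record is None:
        raise RuntimeError("pool empty; the scheduled workflow will refill it")
    name, ip = record.split(",")
    return name, ip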

(Kind of) Continuous Deployment

I have a job in Jenkins that builds the sample application I am using from source in Git, pushes the compiled code to Artifactory, and runs a post-build action that calls Code Stream to deploy.


I wanted to see if there were any bugs in my code, so I set this whole thing up to run end to end, over and over and over…  I configured the Jenkins job to build every 30 minutes.  I went on vacation the week after I built this solution, so it was a good test of whether anything broke down over time.  Amazingly enough, it kept on trucking while I was gone, and even got up to the mid-700s in Jenkins builds.  Neat!

(Screenshots: Jenkins builds, Artifactory artifacts, Code Stream executions.)

Summary

To my surprise, this actually works pretty darn well.  I figured my implementation would be so-so but the idea would get across.  It turns out that what I’ve built here is darn handy, and I’ll probably be using it the next time I’m in a development cycle.

Post any questions here and I’ll try to answer them.  I’m not planning to post my workflows publicly just yet, FYI.


Directory as A Service: Part 2 – vCAC Integration

Directory as A Service: Part 1 – Intro & Setup
Directory as A Service: Part 2 – vCAC Integration


In the first post I introduced JumpCloud, a hosted off-premise directory service.   In this post I will show one way to integrate it into vCloud Automation Center (vCAC).

Getting Started
I started by reusing a simple CentOS single-machine blueprint I had already configured, which already does a few configurations on boot:


For simplicity, I like using the scripted method in a build profile to do simple stuff.  I don’t take any credit for this configuration, as I totally just copied what others have done before me (sorry, I don’t have the link to the exact post handy).  This build profile mounts a network share and runs a DNS script from that share to automatically add the new machine to my DNS.


Now, the additions we’ll make to integrate with JumpCloud are as follows.  I split it into three scripts because I built this iteratively using some of their example code, but I could have just as easily done it all in one.  I’ll refer to the script numbers as they appear below for clarity:

I started from JumpCloud’s example code, and am posting my modifications to GitHub here.


Script 2 – Installs the agent (we saw that syntax in the first post)


Script 3 – Assigns tag(s) to the system being deployed


Script 4 – Sets the configuration: enable password auth, enable root login, and enable multifactor

Go ahead and do the normal vCAC configurations, and you end up with a catalog item for a JumpCloud-enabled CentOS machine.  I have not done anything else special to prompt the user for values here.  I could see it being useful to integrate this into ALL catalog requests and give the user the choice to enter (or choose from a list of) tags, or the authentication choices (password, public key, root…).


Let’s go ahead and request three nodes:


Shortly after, the three machines show up in vSphere:


And when the deploying machine stage is complete, they show up in vCAC:


And it shows up in the JumpCloud console:


And the same method of authentication now works as we discussed previously.  Using Google Authenticator to log into one of my machines is pretty darn awesome, I have to admit.


What now?
Now, what about day-2 operations?  Well, I could add some menu actions to add/remove the system from tags or modify the other settings; that could be useful.  But at a bare minimum I wanted teardown to be clean.  Since I was able to make machine creation totally automated, I wanted cleanup to be too.  Since vCAC is a self-service portal, you wouldn’t want the JumpCloud console to be full of machines that no longer exist.

You don’t want your console filling up with old machines like this


Because I took the easy way out and used the build profile scripting method of customizing machines from vCAC, I had to go a different route for teardown.  The easiest way to inject automation at this stage today with vCAC 6.1 is firing off a vCO workflow via a machine state workflow.  So first I built a simple vCO workflow that runs the SSH command on the machine:


I had to look it up to get the syntax right, but the input parameter you want is “vCACVm” of type “vCAC:VirtualMachine”.  From that variable you pull the VM name (vCACVm.displayName) and use it to know where to run the SSH command.  Simple and effective.
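For reference, the equivalent of that disposal step outside vCO is only a few lines.  Here is a sketch in Python using paramiko, where the cleanup script path and credentials are placeholders of mine, not JumpCloud’s or vCAC’s actual names:

import paramiko

def run_cleanup(vm_name, domain="lab.local"):
    # SSH to the machine being disposed and run the JumpCloud removal script,
    # which would call the JumpCloud API using the node's own system key.
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(vm_name + "." + domain, username="root", password="changeme")
    stdin, stdout, stderr = ssh.exec_command("/root/jumpcloud-cleanup.sh")
    status = stdout.channel.recv_exit_status()   # wait for the script to finish
    ssh.close()
    return status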


Why shell scripts on each node and not full vCO automation?
Normally I wouldn’t bat an eye and would do all of this from vCO itself: create the JumpCloud endpoint as a REST host, create various actions as REST operations, and so on.  The script just reaches out to a REST API, after all.  The first reason was, well, they already had demo scripts available that work completely.  But really it is because authentication is done from each individual node: each node appears to have a unique system key, and that key is used to authenticate to the REST API.  This may need to be revisited for true enterprise functionality like central provisioning, IPAM, or even synchronizing directory services.  But I digress…

Implementing custom machine stages
Back to the vCO bit.  We need to run the library workflow below, “Assign a state change workflow to a blueprint and its virtual machines”, where we specify the blueprint we are using, the vCO workflow we want to run, and the machine state at which we want it run.


And bam, a custom property gets automatically added to the blueprint itself.  Since we want this to happen when the machine is destroyed, we chose the Machine Disposing stage.  The long ID starting with 8be39… is the vCO workflow ID; you may have encountered this if you have ever needed to invoke workflows from the vCO REST API, like here.  This library workflow is a lot more useful for a more complicated integration with lots of values being passed, but hey, it saved a little time for us here.


Try it out
Now, unless I missed documenting a step here (as I’m writing this after I built the integration), all we have to do is destroy the machine like we normally would, and it quickly disappears from the JumpCloud console.

And there we go.  Full integration for login access control to my lab environment machines provisioned from vCAC.  If I’m honest, I may even keep using this for a few machines, it’s that handy.



Docker as a Service via vCAC: Part 1

This project started with a question: using VMware management software today, what would it be like to manage and provide self-service access to Docker containers right alongside the traditional virtual machines and other services?  So that is what I explored, and that is what I have built here so far.  This is just going to be part one of… many, probably… as I develop the ideas further.
What does this solution aim to do?
This solution elevates a docker-based container to somewhat of a “first class” citizen, in that it sits alongside virtual machines in the self-service vCAC catalog.

Really? You are able to do this today?
Well…  mostly.  More work needs to be done to make it more functional.  But as of this part 1, provisioning is working great, and monitoring too (thanks to work from a co-worker that will be written about later).  Anything further, like day-2 operations or just teardown (of the containers), is currently manual.  But still possible.

So walk me through it?
There’s a single machine blueprint that deploys a CentOS machine and installs docker.  I needed a way to identify these machines as a special entity, so I went with vSphere tags for now.  Using the vCAC extensibility functionality, I also have it fire off a vCO workflow that calls PowerShell (talked about here) to add the vSphere tag.  Not elegant, but it works; this will be improved later.  So now that a machine exists, there are additional catalog items for various services, like the demo application SpringTrader, or simply MySQL, Postgres, Ubuntu, etc., that run a vCO workflow to deploy the image onto one of the existing docker nodes.  Currently it picks a node randomly, but with some additional effort I plan to implement a (very) poor man’s DRS and utilize either a Hyperic plugin that my team is working on, or maybe just query CPU/memory directly to choose the node.

OK tldr;  Boil it down!?
First, docker nodes are deployed from vCAC blueprints.  Then vCO workflows identify those nodes via vSphere tags and deploy the requested docker image.
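In sketch form, the placement logic is essentially this (Python standing in for the workflow; I’m substituting a plain list of node addresses for the vSphere tag lookup, and the credentials are placeholders):

import random
import paramiko

DOCKER_NODES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # stand-in for the tag query

def deploy_container(image, port):
    # Pick a docker-capable node at random; a poor man's DRS would go here.
    node = random.choice(DOCKER_NODES)
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(node, username="root", password="changeme")
    stdin, stdout, stderr = ssh.exec_command(
        "docker run -d -p %d:%d %s" % (port, port, image))
    container_id = stdout.read().decode().strip()
    ssh.close()
    return node, container_id

print(deploy_container("nginx", 80))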


Single machine blueprint in vCAC


“DockerVM” tag in vSphere denotes the docker-capable machines


Service Blueprint for various docker images


You can specify items like the image name, the port to expose, the bash command, etc.


Slight smoke and mirrors in the screenshot – this is using a local docker registry so don’t be confused by that.


…and to prove she’s working

What’s next?
I (and others) are working on ways to tear down the containers, tear down the nodes en masse, automate discovery and monitoring of the containers, and so on.  Currently there’s not even a way to know where the image was deployed to – to come!

Can we have the workflows, scripts, etc…?
Not yet…!  But if it is fun for anyone, I do have the demo app SpringTrader available on Docker Hub if you want it.  It weighs in at a hefty 4GB.  Find it at jasper9/st_allinone.  There is no special sauce included in here; it’s simply the SpringTrader app documented here, built into an image all ready to go (very poorly built, I’m sure…).

Sweet! How do I run SpringTrader?
This should probably be a full post on its own.  But in short, this command will get it running.  Change the first 8080 to any open port if it’s already in use.

docker run -t -i -p 8080:8080 jasper9/st_allinone /bin/bash -c '/startup.sh && /bin/bash'

Then see the web interface at:  http://ip.address:8080/spring-nanotrader-web/#login

BY RUNNING THIS IMAGE YOU IMPLICITLY ACCEPT ALL EULAS THAT ANY INCLUDED COMPONENTS IMPOSE.  THIS IS SIMPLY A DEMO APPLICATION AND NOTHING MORE.



Using vSphere Tags via vCO

As vSphere 5.5 currently stands, the only way to interact with vSphere tags programmatically is via PowerCLI.  This leaves vCO out of the party without some effort to build it manually.  I am working on a solution where I wanted to include tags in some automation to enable some awesomeness, so I explored whether it was possible to expose this to vCO without huge effort.  Success!


It wasn’t too difficult to build this.  The two most difficult parts were setting up the PowerShell host for vCO (google it… it’s difficult… at least it was for me the first time around), and parsing the XML returned from PowerShell to vCO to get the data I wanted.  These workflows are a bit rough, but they work as a first draft.  For anything production-caliber you’ll want to evaluate the performance impact of hitting the PowerShell host this often, and definitely change the password field from a string (proof of concepts!).

What I have built so far is a workflow “JaaS Tags- Add Tag” that accepts strings for the name of the tag and the virtual machine name.  This fires off PowerShell commands in a vCO action:


To show how it works running manually in the vCO client:


And to show that the tag is actually applied, you can find it in the Web Client:


Now, I also have a workflow to find VMs by the tag that is supplied.  I needed flexibility out of these in the solution I’m working on, so the output from this one is two arrays: one of the names as strings, and one of type VC:VirtualMachine.


Running this guy manually, just supply the tag name:


And to show the output parameters, you can see that in the vCO client:


Yay Tags!   Now you can include these workflows in other solutions to utilize tags in ways they weren’t intended.  Stay tuned for the solution I built this for.

I’ve posted an export of this package to FlowGrab, check it out here:  https://flowgrab.com/project/view.xhtml?id=d2623373-838f-4ee2-8d6e-c6f582cb452f



vCO Workflow Collaboration with FlowGrab

GitHub for vCO!

A while back I posted about the vFLOWer tool, which provides a way of unpackaging a vCO package into XML that can be easily pushed to a version control system like Git.  The same company, ByteLife, has taken it a step further and opened a site in beta called FlowGrab that allows sharing and collaboration of vCO packages.  My understanding is that the tools that made up vFLOWer are run on each package that is uploaded, such that it can be version controlled, diffed, and merged.  Although this developer functionality is not part of their public feature set yet, what is there does look to be useful in sharing workflows with others.  It’s a heck of a lot more useful than posting the full package file to a blog post or GitHub, or worse yet, requiring you to git clone the XML output from vFLOWer, repackage it, and then import it.  Painful.  This is a step towards automation of all of that.

I thought I would give this offering a quick try with a package I had laying around.  This is a workflow I use for notifications, which reaches out to Pushover over REST; I blogged about it a bit when I explored pulling data from Weather Underground here.  This package includes workflows that power on and power off VMs based on a keyword, and send an alert when complete.  You can use this as an example of how to use the Pushover workflow in your own to do something useful.  Reusable content!  Yay!

The process starts as normal, exporting the package from vCO:

The name will be automatically filled in; this is your package file.  Save it to disk.


Now, what’s handy about this site is you don’t have to go through all the trouble of using vFLOWer as I previously posted about.  Simply browse to their site, create an account, create your project, and upload your package.  Done!


After you post your project, it is available for others to view:

Clicking on the download link downloads the actual package file, not XML that you then have to build.


Now when you go to import this into another vCO instance, you are shown that a number of the contents already exist because I reused some library content for this example:


Now the package exists, and you should see the new workflows:


If you try to run the Power On or Power Off scripts as is you’ll get an error because you haven’t added the REST host and operation for Pushover yet.  This older blog post of mine shows how to set those up.



Blog As A Service (Application Director + WordPress + Twitter + Cyclocross)

It has been a while since I’ve posted a full project, so here goes.  Enjoy.  This time we have our first guest appearance by a super awesome software developer, cyclist, and beer-snob buddy, @jrrickard, who I am doing a number of projects with these days, and showcasing some of them in a booth at our employer’s internal Science Fair (it’s going to be rad.  And there will be Pliny.)  How did I get so far off track already…

The Story

I have been doing a lot with VMware vCloud Application Director lately, and while it is a really interesting product that strokes all of my ex-sysadmin nerdy tendencies in all the right ways, I think it has a really bright future as it’s a _really_ powerful tool that almost no one knows about (yet).  So I thought to myself: what’s something totally silly I can do with it while still showing off its real-world potential?  (If you haven’t picked up on it yet, that’s kind of my thing…)  That’s when I decided:


About ten minutes later, I have this:


So what’s going on here?   Step by Step:

1) I tweet a specific phrase like “I should start a blog on cyclocross”.

2) @jrrickard wrote some slick Python that talks to Twitter over the REST API, finds that phrase, and sends a specially formatted call over a REST API to vCO [1].  [Note: that is supposed to be a superscript… not sure how to do that and link to a lower section in this blog yet.]

3) The vCO workflow(s) talk to AppD over a REST API, process the parameters, find the application blueprint in AppD, and schedule the deployment.  (There is some fancy new secret sauce another co-worker is developing that I may post about at a later time.  In short, it is a collection of workflows that finds and executes exactly what you want in AppD based on the human-readable name instead of the ID… which is a pain in the rear to find… seriously…)

4) The AppD blueprint that gets deployed is based on a canned blueprint found on the VMware Solution Exchange here [2], with only a minor addition at the end.  It deploys a CentOS VM from a template, and installs Apache, MySQL, and WordPress in order with all dependencies resolved.  AppD configures each one and starts all services.

5) Based on the original tweet that started this whole mess, an AppD service I created from scratch and added to the blueprint (a) pulls a CLI for WordPress from Git, (b) uses that CLI to write a few posts to the blog on the topic I tweeted, and lastly (c) goes to Flickr (over another REST API), pulls a bunch of images on the topic, and adds them to the posts.


[FUTURE] 6) We didn’t add it in yet for a number of reasons, but plan to complete the loop and notify back with a tweet saying “Here’s your blog link!”.

More info:

[1] So why make the REST call to vCO instead of AppD directly? 

Good question!  Doing it this way can be considered a best practice because it gives you greater flexibility in processing all the data, and lets you monitor the process and better trap errors.  This way you are not relying entirely on the 3rd-party system, but can work around it and integrate better.
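For the curious, kicking off a vCO workflow over its REST API looks roughly like this from Python (the workflow ID, parameter name, and credentials are placeholders; double-check the execution payload against your vCO version’s API docs):

import requests

VCO = "https://vco.example.com:8281/vco/api"
WORKFLOW_ID = "00000000-0000-0000-0000-000000000000"  # find yours in the vCO client

payload = {"parameters": [
    {"name": "topic", "type": "string",
     "value": {"string": {"value": "cyclocross"}}},
]}

resp = requests.post(
    VCO + "/workflows/" + WORKFLOW_ID + "/executions",
    json=payload,
    auth=("vcoadmin", "vcoadmin"),
    verify=False,  # lab appliance with a self-signed cert
)
print(resp.status_code)  # 202 means the execution was accepted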

[2] VMware Solution Exchange?

Yes!  It’s great!  You can browse all of these canned & demo solutions, like WebSphere, Oracle, JasperReports, SharePoint, Jenkins, and Liferay, to see how one could (or should) use AppD to automate deployments.

[3] Want to see the workflows and blueprints?

Maybe when I get some time to clean it all up. It’s kind of…messy right now.

Credits:

@jrrickard / Jeremy Rickard for the help with the python script running the twitter bot.

@MarcoCaronna  for the AppD/vCO zen master skills simplifying the REST API

BitNami for the original WordPress blueprint here


VMware vCO Port and Account Quick Reference

Maybe it’s just me… but I can never for the life of me remember the correct ports and accounts to use for each part of VMware vCenter Orchestrator (vCO).  If it happens to me, it’s probably happening to a ton of people out there.

I hope this helps!

This is current as of vCO 5.5

Port 8281/https – vCO appliance page
Port 8283/https – vCO configuration.  Default: vmware/vmware (or user vmware and the password set in the OVF/OVA deployment)
Port 8480/https – Appliance configuration.  Default: root/vmware (or user root and the password set in the OVF/OVA deployment)
Client URL: https://fqdn:8281/vco/client/client.jnlp

Version control and vCO Revisited (vCO 5.5.1 released)

Recently I posted a walkthrough of using a 3rd-party framework called vFLOWer to convert packages exported from vCO into a text format that a version control system like GitHub can read.  Well, that was a waste of time…!  VMware just released a new version of vCO (5.5.1) with this functionality built into the product!  Excellent.

There is a menu option on the package object to “Expand package to folder”

Which results in the following on the file system:


Very cool!  Well done and timely for my bloggering.


Using vFLOWer and GitHub to bring (better) version control to vCO

Update March 15, 2014:  This type of functionality is now included in the product in vCO 5.5.1  See more here

vCO has very basic versioning built into the workflows.  It works for simple mistakes from version to version, but isn’t great for cross-environment and multiple-developer use cases.  Enter the vFLOWer Toolkit made by ByteLife.  They are a VMware partner that built, what I think is, a BCDR solution using a lot of Orchestrator workflows.  When building their solution they found they really needed tighter control over versioning their work for their developers, so they built a solution and made it available.  Now, I know it’s a little bit of a hack, but it works great.  Let’s look at how to use it here.  This post will be screenshot-heavy, but it is worth it to illustrate what is going on and how you do it.

I wanted to post all my workflows for the Dinner as a Service project, so let’s walk through that here as an exercise.  First you need to export your items from vCO.  For this project, I used the instance of vCO embedded in the vCAC appliance.  The URL for the client is https://fqdn_of_vcac_appliance:8281/vco/client/client.jnlp

vCO Login

Once we’ve logged in, switch to the administrator view. Click on the Package tab.  If you aren’t already keeping your workflows in a package, go ahead and create one.  Right click, Add Package.

Create package

Give it a name

Now we need to add items to the package, so right click “Edit”

Edit the package

Switch to the Workflows tab, click Insert Workflows and add the workflows you want to put into this package.

Add the good stuff


You’ll notice it captures all associated items with these workflows you specify.  In my case for this project, it added an Action and Plugin for REST.

REST Operation action

REST Plugin

Awesome, so we have the items we’re concerned about in a single package.   Right click and export package.

Export Package

Options for export. Note the “export the values of configuration settings” that I will call out later.

Now we have the package exported to the file system.

Now that we have our package exported, we need to convert it from the native binary format (I think?) to something that GitHub can use to do plain-text revision control on.  Follow the User Guide on the vFLOWer website here, or continue reading for a more detailed step-by-step process.

We need to satisfy the requirements for vFLOWer, which are: Apache ANT, Java, OpenSSL, and Git.

Starting with ANT, download the package here.  There is no installer for this one, so we’re just going to extract it to our source dir at C:\src\apache-ant-1.9.3 (use whatever path you want, but I’m going to use this src directory throughout).

ANT directory with the vCO package.

We need to edit our environment variables to include ANT.  If you are unfamiliar with this process, these steps will help, as it’s not exactly an everyday activity.  Right-click on Computer, select Properties (or Control Panel, ‘System and Security’, ‘System’).

Computer – Properties

Select ‘Advanced system settings’.

Advanced System Settings

Select ‘Environment Variables’.

Environment Variables

Click ‘New’.   Add ‘ANT_HOME’ with our path to ‘C:\src\apache-ant-1.9.3’.

ANT_HOME

This makes ANT_HOME available to the system but it’s not in the system path yet.    Scroll down to Path, click ‘Edit’, and add ‘;%ANT_HOME%\bin’ to the end.

Path for ANT

Test this by opening a new command prompt and typing ‘ant -version’

Test ANT

That’s one down.   Now we need a Java JDK.  The newest at the time of this writing is jdk-7u51-windows-x64.exe and can be found here.

You want the JDK on the left.

Choose your flavor.

There is a Windows installer for this one, so it’s straightforward.  When complete, similar to what we did for the ANT environment variables, add JAVA_HOME and set it to C:\Program Files\Java\jdk1.7.0_51

JAVA_HOME

Test with a new command prompt: ‘echo %JAVA_HOME%’.  This does not need to be added to the system path.  (EDIT: when I did this on Win7 it was added automatically; on 2008 it wasn’t.  That’s annoying.)

Testing JAVA_HOME

One more down, a few to go.  You may need to install the 2008 C++ Redistributables.  I have never known what’s in here, but they say you should use it.  Kind of like vitamins.  Find it here.

For OpenSSL, grab the pre-compiled Windows version here and install it.  You may get a warning about the 2008 C++ package; this probably would have gone away if we had rebooted – yay Windows.  Choose the defaults.  Add ‘;C:\OpenSSL-Win32\bin’ to the system Path variable.

Update Path

Test with the command ‘openssl version’ in a new command prompt.

Testing OpenSSL

Now we’re in the home stretch.  We just need the Windows Git package from here.

Download GIT because you can’t get GIT without getting GIT first. GIT it?

Choose most of the defaults, except two.  On the PATH screen, simplify things and have the installer modify the variable for you by choosing ‘Run Git from the Windows Command Prompt’.

Adjusting PATH automatically

On the line ending option page choose: Checkout as-is, commit unix-style line endings.

Line Ending Conversions

And test Git real quick with ‘git --version’

Testing GIT

Alrighty.  Now for the meat of it.   Everything up until now was just prep.  From here on out the vFLOWer doc is pretty descriptive.

You need to log into GitHub (create an account if you need to) and create a ‘fork’ of the vFLOWer repo.  If you are new to Git, what this does is create a new copy of this repo that is now owned by you but still chained to the parent.  If this were a real dev effort, we could push changes up to the parent if/when needed.  This is how developers work on multiple ‘branches’ of a code base without stomping on each other in one single repo location.  So log in, browse to vFLOWer here, and click fork.

Fork the original vFLOWer repo.

Now you have your own fork of vFLOWer.  This will be where you store all your vCO stuff.  Let’s make it your own by renaming it and editing the README.md.  Click ‘Settings’ to rename.


We’ll want to name it something meaningful.  For me, that always includes “stuff”:

my-vco-stuff

When done, click ‘README.md’, click ‘edit’, and add your own content.  The readme file is similar to an index.html – we now see our new content at the root of the repo.

Click Edit

Enter your content

Comments Rock.

Add a comment before saving (we’ll talk more about comments later):


And refresh the main page to see it:

Now comes the real magic of Git.   On the right hand of the main screen, copy the URL to the clipboard.   Mine is https://github.com/jasper9/my-vco-stuff.git

Grab the URL

Back on the original Windows machine we were on, change dir to our C:\src directory and clone your Git repo.  This downloads the full thing to a subdir of the same name.

git clone

Check it out in Windows Explorer and it’ll look identical to the web interface.

We have files!

Back to the vCO package we originally exported.  Copy this package into the /inout sub directory (C:\src\my-vco-stuff\inout):

Package file in inout

This is where ANT comes in.  From the root of the repo (not the inout dir), run ‘ant pre commit’.  You should see a lot of text scroll by, ending with a success message.

Text…text…DONE!

Check the inout directory and you should see the package is removed:

InOut Dir now

Check the content directory and you should see items from vCO that are familiar to you.  In my case I am looking at the Workflows dir.  My workflows were in a folder in vCO, and that structure is maintained here at C:\src\my-vco-stuff\content\Workflows\x My Library

Workflow Content

Open up one of the workflows and you will see what it looks like in clear text.  One item of note: you might see values that were entered into parameters in vCO.  In my case I have ones for the Pushover service that are sensitive (API keys); I want to replace these with placeholder values.  If you wish, you can uncheck a box in the vCO export process to not include any parameter data – if you have a lot, this could be handy, but it could also limit you in other ways by requiring some manual work to set things up again.

XML Content

Now, back in the root directory of this repo, type ‘git status’ to see the current status of the repo we’re in.  You’ll see a message complaining that there are new directories Git wasn’t expecting.

‘git status’ output

Add these automatically with the ‘git add .’ command.   Alternatively you could add items one by one if you ever need to in the future.

Now run status again and you’ll see it’s a bit happier, reporting cleanly that we have some new files in this snapshot of your code base.
Happier git status

Remember when you edited the README.md file and we added a message describing what we did?  That’s a comment.  Like you learned in “BASIC Programming 101”, comments are a wonderful thing.  Maybe that was just me.  (Oh yeah, I can’t talk about git commit comments without dropping this XKCD on you.)

MY HANDS ARE TYPING WORDS

We could have made that change to the README.md here if we wanted, by manually editing the file.  To comment the changes we just made adding the vCO content, type a commit command like git commit -m "added vCO workflows":

git comment attempt

Two problems arise here (only the first is currently shown there)

Problem 1 is one I sometimes encounter – cut/paste from PDFs or other sources doesn’t always work.  I blogged a little about it here.  In this case, it’s the PDF’s fault, and the OSX setting didn’t fix it.  Replace the hyphen-looking character with a real hyphen and run it again.
Problem 2: your commit was probably successful, but it complains that you haven’t set your account up right.  Let’s do that next.
Successful, kinda

Add your name and email with:

git config --global user.name "Your Name"
git config --global user.email "you@example.com"

Then reset the author on the commit with:

git commit --amend --reset-author

My intention in this post isn’t to give a full primer on Git (here’s one that looks good), so for the last part I’ll leave it at a high-level explanation.  We need to push the changes we just committed up to the server.  We do so with the command:

git push origin master

Git push

Now if we refresh our GitHub webpage, you should see the updates:

Ch-cha-cha-cha-changes

Browse through the content and verify your workflows are there:

New stuff!

Awesome – now that’s how you export your packages, use this tool to convert them, and use Git to post them to GitHub.  To show how you would pull it all down again, I’ll flip over to a fresh environment (but with all the prereqs met).  This would be like if you were someone else who stumbled upon these workflows – how do you use someone else’s work?  Because this is a public repo, this is unauthenticated.

git clone

If you look into what was downloaded, it is as you would expect.  There is a content directory with our vCO stuff in XML format and no inout directory.

Fresh bits

Now we need to use ANT to convert it back into a format we can import into vCO.

Yay!

Check the inout directory, and behold our package!

vCO Package

Load up the vCO client now.  Switch to the Administrator view, go to the Packages tab, and click the Import button.

Load it up!

You will see an SSL error you can ignore, because the vFLOWer tool created a self-signed cert for the process.  You may also see a conflict message like the one shown if anything already exists or is at a newer version:

Warning(s)

And there you go.  Now you can run your workflow in a different environment.  Or in this case, make dinner in someone else’s datacenter?

Ready to go!


JAAS Project Lake Placid – Dinner as a Service – Overview

This post is part of a project I’m calling ‘Dinner As A Service.’
Posts:
Overview
Electricity Costs
Electronics 

This project has been a while in the making, as it’s the most elaborate I’ve attempted to date.  Fully documenting it is going to span multiple posts, so be patient and check back over time if you are interested in these weird projects of mine.

To summarize, this project starts with VMware vCAC 6.0, where one orders ‘dinner’, and ends with the food being cooked in a sous vide water bath.  Over-engineered, overkill, and quite eccentric, but hey, it’s fun.  And I learned a lot along the way in every area.

Software:
VMware vCloud Automation Center
VMware vCenter Orchestrator
Raspibrew

Hardware:
Raspberry Pi
Solid State Relay
Pi Cobbler
2 x 7 Segment LCD
Temperature sensor
LED, Case, other bits..
Slow Cooker
(we’ll expand on the parts and builds later…)

First let’s start with the basics – what is sous vide?  Wikipedia describes it, roughly, as cooking food sealed in airtight bags in a temperature-controlled water bath, at lower temperatures and for longer times than conventional cooking.

In other words – food awesomeness using self-service, automation & electronics.  Here’s a site I got most of my information from.  They make a pricey, but off-the-shelf, cooker.

What is a good application for automation?  Electronics and vCO!  In truth, the software part really just kicks it off (though I did add in some functionality that makes picking your temperature easier).  The special sauce is in the electronics and the script on the Raspberry Pi, but we’ll get to that.

End to end (or menu to plate?), here is how it all works:

0. In the beginning, there was meat.  The wife and I joke that we know a guy and buy meat out of the back of a truck.  It’s actually not far from the truth, however.  Check these guys out if you are in Colorado.  They deliver to locations all over the Front Range.

1. The project starts with the menu.  We’re using vCloud Automation Center 6 (vCAC) here as the front end.  The main reason was so that I could learn more about its internals, but also because this sort of thing is right up its alley.  Well, not cooking food – but being a front end for the automation of services.

2. Let’s say I want to cook up a nice steak tonight.  I choose beef and I am given the choice of how I want the meat cooked.  This is a big advantage of using this front end – I don’t know off the top of my head what each temperature range is, so associating it with a label this way is great.  I’m going to pick Medium here (140°F) since my other/better half doesn’t like it any redder.

3. Yay!

4. Some magic happens in the background (via vCenter Orchestrator) and I get a notification to my phone that the temperature has been set.


5. Because I’m a super nerd, I also get it to my watch. In truth, it’s actually really handy.

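The notification piece is a single REST call to Pushover; roughly this, in Python (the token and user key are placeholders, obviously):

import requests

def notify(message):
    # One POST to Pushover fans out to the phone app (and the Pebble from there).
    requests.post("https://api.pushover.net/1/messages.json", data={
        "token": "APP_TOKEN",   # your Pushover application token
        "user": "USER_KEY",     # your Pushover user key
        "message": message,
    })

notify("Sous vide target temperature set to 140F")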

6. On the Raspberry Pi, there is a Python daemon running called RaspiBrew (some guy built this for homebrewing beer!  Awesome!).  I used this because it has PID intelligence built in (so I didn’t have to learn to code one…) and it has a REST API (which, before I realized it was included, I had already written one in node.js… ah well…).

7. The rPi is housed tentatively-permanently in a project case along with the rest of the electronics.  Most important are the temperature sensor and the solid state relay.  When we hit our target temperature, the relay shuts off power to the outlet the slow cooker is plugged into.

8. We wait for it to heat up.  Usually when I’ve been cooking I wrap the slow cooker in this heat shielding, since it appears I have a pretty crappy cooker that loses heat fast and has a weak heating element.  Getting it high enough for vegetables was almost impossible without this.

9. As an aside, during this whole heating and cooking process I have a separate Python script I made that writes the temperature readings out to Xively (previously Cosm) for historical tracking.  This has been useful for studying the temperature swings to see how much I needed to tune the PID values.
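That logger is only a handful of lines.  Here is a sketch of the idea, assuming a DS18B20 sensor on the Pi’s 1-wire bus and a Xively feed (the feed ID and API key are placeholders):

import glob
import time
import requests

FEED_URL = "https://api.xively.com/v2/feeds/FEED_ID.json"
API_KEY = "XIVELY_API_KEY"

def read_temp_f():
    # The DS18B20 shows up under the 1-wire sysfs tree; the reading is
    # the trailing "t=" value, in thousandths of a degree C.
    device = glob.glob("/sys/bus/w1/devices/28-*/w1_slave")[0]
    with open(device) as f:
        millidegrees_c = int(f.read().rsplit("t=", 1)[1])
    return millidegrees_c / 1000.0 * 9 / 5 + 32

while True:
    body = {"version": "1.0.0", "datastreams": [
        {"id": "temperature", "current_value": str(read_temp_f())}]}
    requests.put(FEED_URL, json=body, headers={"X-ApiKey": API_KEY})
    time.sleep(60)   # one sample a minute is plenty for a water bath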

10. When we get to our target temperature, another alert is sent out letting me know.  This has been really useful, so I can go mill around doing other things and not watch it.

11. Drop the meat in!


12. Now this is the part I have not done any further automation on.  The rPi’s PID algorithm is still in charge of tweaking power to the slow cooker when it thinks it needs it, but I have not set up any timers yet.  I wanted to add another 7-segment display with a countdown, and prompt for this in vCAC, but the box got crowded with other bits and I had trouble figuring out how to reliably do a countdown timer on an rPi without a real-time clock.  Feature request!

13. When our cooking time elapses, we finish off the cooking process.  Sous vide is an awesome way to get super tender meat, but isn’t great on the eye-candy front.  Browning in a hot pan, or using a torch for a bit, works wonders.

Pull it out of the bath.  It’s really ugly when cooked this way, but oh so tasty.


Brown the meat.  (During this cook I really should have gotten a much better crust going.  I was just excited to be using this steak sauce for the first time in a long, long time…!)


14. Finish up whatever sauces and sides are in order for the day and boom…. Dinner!


I’ll follow this post up with more info on the hardware, software, and other bits.  Stay tuned…
