Category Archives: Projects

Stickman costume w/ LED pixel strips

In some ways, it’s more that I’m dressing up as my house for Halloween and it just happens to look like a stick man… but more on that another time. I digress…

DIY stickman made with LED pixel strips!  Yay!

I had a roll of LED pixel strip left over from Christmas last year and have been looking for a use for it.  It’s a rather expensive bit of kit to cut up into pieces for something like this, but it was left over anyway.  And it turns out I misjudged lengths (and made a few mistakes) and had to order another roll.  But crap, I couldn’t justify buying the same expensive one, so I got a cheaper one… but double crap… it turned out to have RGB in a different order (at full green or full blue they light up as opposites; luckily red was the same), and it’s half as populated (30 LEDs per meter instead of 60), so… I just had to put my perfectionism on hold… it’s just a costume!

Anyway, you can use just about any type of pixel strip to make this work; I’ll just note here what I used and what I found handy.  Your mileage will definitely vary.

Parts:

Brain:
Arduino – I used an Uno, but any will work. (sparkfun)

LEDs:
(1) Pixel LED RGB Strip 60 LEDs/m 60 Pixels/m Waterproof Tube (16ft-6in/5 meter Roll) – 12v / INK1003 (WS2811 clone) (holidaycoro)
(2) Pixel RGB LED Strip 30 LEDs/m 10 Pixels/m Waterproof Tube (16ft-6in/5 meter Roll) – 12v / 2811 / BLACK PCB (holidaycoro)

Buttons:
(1) Some sort of small button (sparkfun)
(2) Fun “gameshow-like” button found on Ebay (ebay)

Power:
(1) USB for Arduino (Anker)
(2) 12v battery pack for lights (Amazon)

Odds n Ends:
Electrolytic Decoupling Capacitors – 1000uF/25V (sparkfun)
Solder Shrink Sleeves Wire Splices / 18 – 22 AWG Wire / Red (holidaycoro)
3pin Connector Male Female Cable Wire for WS2811 WS2812B LED Strip 10pcs (Vozop)
Safety pins… lots and lots…
Velcro… lots…
~ 9″ hoop thingy from a hobby store

LED Strips
These lights are “smart” LEDs, meaning every single light is individually addressable in a huge range of red, green, and blue combinations to produce… I don’t even know how many colors.  They are quite fun.  For this I’m not doing any fancy animations, just solid colors.  You could do this REALLY cheaply using dumb single-color LEDs, but what fun would that be…!


Power
The battery pack I used was nice because it already had the right size barrel plug on it for quick disconnecting and swapping.  I had wanted to use a rechargeable battery pack similar to the ones Anker makes, but with a barrel plug and a 12 V output; the ones I found couldn’t supply enough amperage.

LED strips come in 5 V and 12 V varieties, and some are 3-pin while others are 4-pin, so plan out all your parts ahead of time.  It’s also simplest to power the Arduino separately from the strips.  If you do it this way, be sure to connect the grounds together.

Also, to prevent the initial power surge from damaging the lights, it’s best to wire in a capacitor; see the diagram.


Wiring
Yeah I’m not a great circuit designer…..

Pretty simple.  You just need a ~470 Ω resistor in line with the data line from pin 6.  I wanted two different buttons to control the lights, so I wired up one I could hold in my hand; if that failed for some reason, I could still hit the button on the board.


Using some of the parts I linked to above, I created some DIY splitters to simplify the connections between strands.

I planned out the sections something like this:

Code
The full code is here on github.  It’s really nothing special at all: it just switches between a list of colors when the button is pressed.  I’m using the FastLED library with no animations, just solid colors.  Much could be improved here, but I went for simplicity in this build.

Snippet:
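A minimal sketch of the idea – cycle through an array of solid colors on each button press with FastLED.  The pin numbers, LED count, and color list here are placeholders, not necessarily what’s in the repo:

```cpp
#include <FastLED.h>

#define NUM_LEDS    150   // total pixels across all strips -- set to yours
#define DATA_PIN    6     // data line, through the inline resistor
#define BUTTON_PIN  2     // button to ground, using the internal pull-up

CRGB leds[NUM_LEDS];

// The list of solid colors to cycle through on each button press
CRGB colors[] = { CRGB::Red, CRGB::Green, CRGB::Blue, CRGB::Purple, CRGB::Orange };
const int numColors = sizeof(colors) / sizeof(colors[0]);
int currentColor = 0;

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  // The cheap second strip had green/blue swapped, so its color order differs
  FastLED.addLeds<WS2811, DATA_PIN, RGB>(leds, NUM_LEDS);
  fill_solid(leds, NUM_LEDS, colors[currentColor]);
  FastLED.show();
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {   // pressed (active low)
    currentColor = (currentColor + 1) % numColors;
    fill_solid(leds, NUM_LEDS, colors[currentColor]);
    FastLED.show();
    delay(250);                           // crude debounce
  }
}
```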

I first thought it would be easier to use an off-the-shelf controller, but the cheap $10 one I picked, while fun, was not ideal for this type of use.  Bonus, though: I learned the controller I got does indeed control pixels perfectly, so I could use it for something else someday.

 

What I would do differently
If I were to do this again, I would use the cheap strips on the black background for the whole thing, for sure.  I got that second strip on sale for only $15!

 

Sources / Inspiration

coeleveld.com

Instructables

RGB Stickman

 


Technology found our new best friend.

Last night I built a robot that brought us to our new best friend. Meet Cash.

Cash

Before I explain this strange statement, first let’s back up.

Two weeks ago we found our beloved Maddie was stricken with a tumor on her spleen that ruptured.  I won’t dive into the heartbreaking details, but you can read about it here, here and here.

maddie_camping

To summarize: heartbroken.  That damn wonderful dog lived a great life and will never be replaced.  But we’ve found we’re a two-dog family.  Enter the idea of visiting shelters… which is always fun!

IMG_6736

After a few misses, we found just how competitive adopting dogs is in Boulder.  Yes, competitive.

Forget cycling, running, and climbing – the most competitive sport in Boulder is trying to adopt a dog from the pound (link)

Dogs fly out of the Boulder Humane Society.  One Jen was interested in was adopted within hours of becoming available.  We heard from employees of one that was going HOME within 45 minutes of stepping into the adoptables area.  Seriously.

The employees say to just keep an eye on the website.  So that’s what we did for a bit.  We noticed it was updated frequently throughout the day, but there was no way to be notified of new dogs.  Enter my light-bulb moment.

I saw there was no RSS feed and (of course) no API.  So I took a glance at the HTML source and saw it would be super easy to screen-scrape.  Muahhaha… this will be easy peasy!  With just a little bit of hacking last night, I had a working system that scraped their webpage every 15 minutes, stored the results in a local database, and sent us an email when a dog became available!  Ha! Leg up, take that, Boulder animal people.  Dog-adoption performance-enhancing drugs.
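The code isn’t posted yet, but the first pass really was about this simple.  A rough shell sketch of the same idea – placeholder URL, addresses, and markup; the real version parsed the listings into a database rather than diffing text:

```bash
#!/bin/bash
# Cron this every 15 minutes:  */15 * * * * /usr/local/bin/dogwatch.sh
URL="http://www.example-shelter.org/adoptable-dogs"   # placeholder URL
NOW=/var/tmp/dogs.now
LAST=/var/tmp/dogs.last

# Pull the page and strip it down to one dog per line (hypothetical markup)
curl -s "$URL" | grep -o '<h3 class="dog-name">[^<]*' | sort > "$NOW"

# Anything in NOW that wasn't in LAST is a new dog -- send the alert
if [ -f "$LAST" ]; then
  NEW=$(comm -13 "$LAST" "$NOW")
  [ -n "$NEW" ] && echo "$NEW" | mail -s "NEW DOG ALERT" you@example.com
fi
mv "$NOW" "$LAST"
```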

In the morning I surmised that wasn’t nearly geeky enough.  I added functionality to email us when a dog appeared to be adopted (i.e., wasn’t listed any more).  And since email is SO year-2000s, I spun up a new Twitter account and had it tweet and direct-message us when a dog showed up and when one went home.  I dub thee: Dog(S)talker.  Get it?  Dog Stalker.  Dogs Talker.  I kill me…

Lo and behold… while I was out with the kid on his bike and Jen was working on an extension to the chicken coop: DING. DM from the new robot:

Snip20160403_15

Due to an unfortunate typo in the code it was missing the details of the dog, but still… the fucking thing worked… A quick click on the link showed she was a 1-year-old Australian Kelpie mix, about 45 pounds.  Check, check, and check all the boxes!  I yelled across the street: “JEN!”  To which I immediately heard the reply, “I’M GETTING READY TO GO [to the shelter]!”

15 minutes later I received this:

IMG_6740

So, an absolute maximum of 30 minutes from the time she was posted to the website to one of us showing up to check her out.

Long story short, he’s perfect for us.  I’ll post the code to github soon.   Perhaps if this is useful to anyone else I can add others to the notifications.

Snip20160403_16

 


Using a time series DB, starting with InfluxDB

At the last two conferences I attended (DevOpsDays Rockies and GlueCon) I heard a lot of mentions of NoSQL and time-series databases.  I hate not knowing about things and not having experience with them, so I’ve been playing with both.  First, I recently integrated Redis, a NoSQL key/value store, into a project of mine.  And just now I’ve been playing with InfluxDB as a monitoring system, so here I’m going to tell you about my experience.

I didn’t want to get caught up in any installation shenanigans, so I tracked down Docker images to assist in getting up and running fast.  And I’m glad I did, because it worked immediately!


1: InfluxDB

InfluxDB docker image: https://registry.hub.docker.com/u/tutum/influxdb/

Docker command:
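Something along these lines – per the tutum/influxdb README as I remember it; 8083 is the admin UI and 8086 the HTTP API:

```bash
docker run -d --name influxdb \
  -p 8083:8083 -p 8086:8086 \
  tutum/influxdb
```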

And then right away you can load http://your.ip.address:8083 in a web browser and you will get the screen below:

Snip20150527_9

Once you log in with root/root, you will see that you have no databases out of the box; go ahead and enter a name and hit create:

Snip20150527_10

You are now given a simple UI for pushing and pulling data by hand.  To test this, we will add some values in the same format as some of my scripts that deal with temperature.  Basically, you can think of the time series as the table and the values as key/value pairs.

Snip20150527_11

Then you can craft a simple query to verify the data:

Snip20150527_12

Neat!   Now, unlike some other solutions, InfluxDB doesn’t provide any visualization functionality (other than a basic line graph), so I spun up a Grafana container for that.


2: Grafana

Grafana docker image: https://registry.hub.docker.com/u/tutum/grafana/

Docker command:
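From memory of the tutum/grafana README – the environment variable names may have drifted, so double-check – the parameters that pre-wire Grafana to the InfluxDB container look roughly like:

```bash
docker run -d --name grafana -p 80:80 \
  -e HTTP_USER=admin -e HTTP_PASS=admin \
  -e INFLUXDB_HOST=your.ip.address -e INFLUXDB_PORT=8086 \
  -e INFLUXDB_NAME=temperature \
  -e INFLUXDB_USER=root -e INFLUXDB_PASS=root \
  tutum/grafana
```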

There are simpler ways to start up this container, but I found these parameters got me to where I wanted quickly.

Now you can log in on port 80 of this machine and you will be presented with an empty first page:

Snip20150527_13

Empty graphs aren’t very exciting, so let’s configure them real quick…

The syntax in Grafana is slightly different from what we saw directly in InfluxDB, but it’s mostly straightforward.  Put the database name (temperature) into the “series” field.  Fill in the blanks of the select statement – use last(value) and item=’80027_temp’ to specify the key/value.  Click somewhere else to change focus, and the graph should reload, showing the values we entered by hand.

Snip20150527_14

Now I wanted to play around with it further, so I modified some existing scripts I have for doing various types of monitoring, like pulling data from Weather Underground (temperature, humidity, wind) and free disk space from a NAS.  Mix it all up, and it came out looking like this:

Snip20150527_15

To feed the data in, I took the easy way out and used a perl client documented here.  I then just modified my existing scripts to feed data to this perl script every 30 seconds and bam, done.
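The perl client ultimately just POSTs JSON to InfluxDB’s HTTP API, so if you’d rather skip perl, the equivalent curl against the v0.8-era API looks roughly like this (assuming the database and series are both named temperature, as in the screenshots):

```bash
curl -s -X POST \
  "http://your.ip.address:8086/db/temperature/series?u=root&p=root" \
  -d '[{"name":"temperature","columns":["item","value"],"points":[["80027_temp",72.4]]}]'
```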

 

 


Experiment: Pooling in vRA & Code Stream

Background

I recently attended DevOpsDays Rockies, a community-oriented DevOps conference (check them out in your area; it was great!).  I saw a talk by @aspen (from Twitter/Gnip) entitled “Bare Metal Deployments with Chef”.   He described something he/they built that, if I recall correctly, uses PXE/Chef/magic-pixie-dust to pull from a pool of standby bare-metal hardware and fully automate bringing it into a production cluster for Cassandra (or what have you).

This got me thinking about something I’ve been struggling with lately.  Whenever I develop blueprints in Application Director / Application Services, or in vRA/Code Stream, the bulk of my time is spent hitting the go button and waiting: look at the error message, tweak, and repeat.  The bottleneck by far is waiting for the VM to provision.  Partly this is due to the architecture of the products, but it also has to do with the slow nested development environments I have to use.  We can do better…!

Products using pooling

I then started thinking about what VDM / Horizon View have always done with this concept.  If I recall correctly (it’s been years and years since I’ve worked with it), to speed up delivery of a desktop to a user, a pool concept exists so that one will always be available on demand.   I don’t have much visibility into it, but I’m also told the VMware Hands-on Labs does the same – it keeps a certain number of labs ready to be used so the user does not have to wait for one to spin up.  Interesting.

The idea

So I thought: how could I bring this upfront deployment model to the products I’m working with today to dramatically speed up development time?   And this is what I built – a pooling concept for vRA & Code Stream, managed by vRO workflows.

Details – How Redis Works

When planning this out, I realized I needed a way to store a small amount of persistent data.   I wanted to use something new (to me), so I looked at a few NoSQL solutions, since I’ve been wanting to learn one.  I decided on Redis as a key/value store, and found Webdis, which provides a light REST API into Redis.

I couldn’t find any existing vCO plugins for Redis I/O, which is fine – the calls are super simple:

Example of assigning a value of a string variable:

Snip20150517_5

The Redis command is: “set stringName stringValue”
So the Webdis URL to “put” at is “http://fqdn/SET/stringName/stringValue”

Then to read the variable back:

Snip20150517_6

The Redis command is: “get stringName”
So the Webdis URL to “get” at is “http://fqdn/GET/stringName”

Easy peasy. There is similar functionality for lists, with commands to pop a value off either end.  This is all I needed: a few simple variables (for things like the pool size) and a list (for things like the list of VMs, storing IP addresses & names).
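In curl terms – Webdis listens on port 7379 by default, and responses come back as JSON:

```bash
# Simple variable: the pool target size
curl http://fqdn:7379/SET/pool_target/20     # -> {"SET":[true,"OK"]}
curl http://fqdn:7379/GET/pool_target        # -> {"GET":"20"}

# List of pooled VMs: push on deploy, pop to consume, LLEN for the size
curl http://fqdn:7379/RPUSH/vmlist/vm042_10.0.0.42
curl http://fqdn:7379/LPOP/vmlist
curl http://fqdn:7379/LLEN/vmlist
```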

So in vCO I just created a bunch of REST operations that take varying numbers of parameters in the URL line:

Snip20150517_7
I found the most efficient way to run these operations was to parameterize the operation name and pass it to a single workflow that does the I/O.

Details – Workflow(s)

The bulk of the work for this pooling concept is done in the following workflow that runs every 15 minutes.

Snip20150517_8

In general it works like this (a rough shell translation of the logic follows the list):

  • Check if the workloads are locked – since it can take time to deploy the VMs, only one deployment will be going at a time.
    • If locked, end.
    • If not locked, continue.
  • Lock the deploys.
  • Get the pool max target (I generally set this to 10 or 20 for testing).
  • Get the current pool size (the length of the list in Redis – much faster than asking vSphere/vRA).
  • If the current size is not at the target, deploy until it is reached.
  • Unlock the deploys.
  • Profit.
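The real implementation is a vCO workflow, but the logic translates to a few lines of shell against Webdis.  A sketch – the lock is just a Redis key via SETNX, it assumes pool_target was SET beforehand, and request_vra_catalog_item is a hypothetical stand-in for the nested deployment workflow:

```bash
#!/bin/bash
# Sketch of the scheduled pool-replenish logic (the real thing is in vCO).
WEBDIS=http://fqdn:7379

# Lock the deploys: SETNX only succeeds if nobody else holds the lock
curl -s "$WEBDIS/SETNX/deploy_lock/1" | grep -q '"SETNX":1' || exit 0

target=$(curl -s "$WEBDIS/GET/pool_target" | sed 's/.*"GET":"\([0-9]*\)".*/\1/')
current=$(curl -s "$WEBDIS/LLEN/vmlist"    | sed 's/.*"LLEN":\([0-9]*\).*/\1/')

while [ "$current" -lt "$target" ]; do
  vm=$(request_vra_catalog_item)                 # hypothetical helper: deploys and returns name_ip
  curl -s "$WEBDIS/RPUSH/vmlist/$vm" > /dev/null
  current=$((current + 1))
done

curl -s "$WEBDIS/DEL/deploy_lock" > /dev/null    # unlock the deploys
```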

I did not have to do it this way, but the nested workflow that does the actual VM deployments requests vRA catalog items.

In Action

After I got it fully working and the pool populated, you can check the list values with this type of Redis query:

Snip20150517_9

Redis: lrange vmlist 0 -1 (-1 means all)
Webdis: http://fqdn/LRANGE/vmlist/0/-1

The matching machines in vSphere:

Snip20150517_11

In Action – Code Stream

Normally in a simple Code Stream pipeline you would deploy a VM by requesting the specific blueprint via vRA like this:

Snip20150517_19

In this solution, I instead use a custom action to grab a VM from the pool and return its IP to the pipeline as a variable.  Then I treat the VM like an existing machine, continue on, and at the end delete it.

Snip20150517_18

This reduces the list in Redis by one, so the next time the scheduled workflow runs and checks the list size, it will deploy a new one.

(Kind of) Continuous Deployment

I have a job in Jenkins that builds the sample application I’m using from source in Git, pushes the compiled code to Artifactory, and runs a post-build action that calls Code Stream to deploy.

Snip20150517_15

I wanted to see if there were any bugs in my code, so I wanted this whole thing to run end to end over and over and over…   I configured the Jenkins job to build every 30 minutes.  I went on vacation the week after I built this solution, so I got to see whether anything broke down over time.  Amazingly enough, it kept on trucking while I was gone, and even got up to the mid-700s in Jenkins builds.   Neat!

Snip20150517_12

Jenkins builds

Artifacts

Code Stream executions

Summary

To my surprise, this actually works pretty darn well.  I figured my implementation would be so-so but would get the idea across.  It turns out what I’ve built here is darn handy, and I’ll probably be using it the next time I’m in a development cycle.

Post any questions here and I’ll try to answer them.   I’m not planning to post my workflows publicly just yet, FYI.


Using the released version of Docker-Machine (v0.1) with VMware vSphere

I began uplifting some of my content today, which included upgrading to the newest Docker (v1.5) and docker-machine (v0.1), and came across a number of changes.

  • The command is now officially “docker-machine” instead of just “machine”, which is what it was when I first played with it.
  • All the VMware driver commands are now prefixed with “vmware”, so instead of “--vsphere-vcenter” it is now “--vmwarevsphere-vcenter”.  A full example is below, after this list.

    They also have an easier way to set the environment variables now (also shown below).
  • I couldn’t get “--vmwarevsphere-boot2docker-url” to work with a custom URL, which is probably a bug.  If you leave it out entirely, it will use a default location.
  • …Which is a good thing, because boot2docker now includes VMware Tools, which negates the need for a custom .ISO.
  • The only other change I need to make to the boot2docker image is the use of an insecure registry, so in my syntax I just include a shell script which runs: docker-machine ssh $1 sudo sed -i -e 's/--tlsverify/--tlsverify --insecure-registry docker-hub:5000/g' /var/lib/boot2docker/profile  You can find the full shell script on github here.   “docker-hub” is my registry hostname on port 5000.
  • I noticed the name of the VM now matches what docker-machine calls it instead of a random string.
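From memory, the full v0.1 create syntax looked roughly like the below – the values are placeholders, and you should check docker-machine create -d vmwarevsphere --help for the authoritative flag list – with the new env helper after it:

```bash
docker-machine create -d vmwarevsphere \
  --vmwarevsphere-vcenter vcenter.lab.local \
  --vmwarevsphere-username 'administrator@vsphere.local' \
  --vmwarevsphere-password 'SuperSecret1' \
  --vmwarevsphere-datacenter Datacenter \
  --vmwarevsphere-datastore datastore1 \
  --vmwarevsphere-network "VM Network" \
  dev01

# The easier way to set the environment variables:
$(docker-machine env dev01)
```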

That’s about it so far.  I have not used it too extensively yet, but so far so good.  I did not see a single hang of the docker commands like I saw with the older versions.  Thumbs up so far.


Alpha release of Docker Machine Driver for DigitalOcean

I make absolutely zero money on this blog.  Notice there are no ads anywhere on the page.  DigitalOcean has a referral program, so if you are interested in signing up please do a guy a favor and use this link.   Thanks!!!

I wrote about the VMware Fusion and VMware vSphere drivers for Docker Machine previously.  Since I’m playing with all this new stuff and trying out DigitalOcean as well, I thought I’d show how Machine & DigitalOcean work together.

Comparison
In short, it’s way easier to spin up than vSphere is at the current time, though I’m sure that’ll improve with some bug fixes (I’ll try out VMware vCloud Air soon, to be fair).  DigitalOcean uses their own image on the backend by default, so there is no downloading and uploading of a boot2docker image, and the net time to a running container is really fast by comparison.  Also, there are only a handful of config settings (region, size, and image), and all of them are optional if you want to use the defaults.
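For comparison, spinning up a Docker host on DigitalOcean is basically one command.  Roughly – only the access token is required; region and size fall back to defaults:

```bash
machine create -d digitalocean \
  --digitalocean-access-token $DO_TOKEN \
  --digitalocean-region nyc3 \
  --digitalocean-size 512mb \
  do-dev01
```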

Cost Savings?
As I play more and more with Docker Machine, I keep thinking about the possible money savings this could bring to developers, and how it could push a new cost model in public clouds.  With most providers I have used (AWS, DigitalOcean…), you pay a flat rate by time (minutes, hours…) for compute and resources – a flat charge, with no change in cost for utilization.  There are, of course, nominal charges for bandwidth, but generally they’re so small that, at least for dev/test, they’re irrelevant.    So think about that.   If you spin up FEWER machines overall by plugging Docker Machine into your CI or CD workflows – configuring Jenkins or whatever to deploy directly to a Docker image inside one machine instead of directly to a cloud provider – that could save real money long term, because you have far fewer individual workloads running and being billed for, though you may need a slightly larger shell machine.   In a way, this is like the cost savings that virtualization originally brought, all over again…

Video
Anyway, enough aimless ponderings.   Here is using machine with DigitalOcean:


 


Small(er) Clouds: A drop(let) in the cloud ocean

I make absolutely zero money on this blog.  Notice there are no ads anywhere on the page.  DigitalOcean has a referral program, so if you are interested in signing up, please do a guy a favor and use this link.   Thanks!!!

I do a lot of research in my day job – in any IT job I’ve ever had, really.  Constant learning, constant troubleshooting.  My Google fu is strong.   In all this work, lately I’ve been coming across references to DigitalOcean again and again.   Usually it’s a tutorial on how to deploy whatever I’m looking into on their systems – their tech writers are way prolific!  (EDIT: I take that back – they open their tutorial site up to the public.  That’s freaking brilliant.  And useful.  And a bonus for SEO.)  Other times it’s in conversations or examples on sites other than their own.    I can feel when a trend is growing, and this is one.  Pay attention to this company if you are into this sort of thing.

What is it all about

It looks like their niche is catering to developer workloads, and doing it well.  They make it extremely easy and fast to spin up, use, and tear down machines.  They appear to charge set prices per instance size, no matter where in the world, which probably helps make billing very predictable.

User Experience

I absolutely love it when user experience is elevated over all else.  To me, DigitalOcean just gushes this point in their UI and workflow when you sign up, deploy, and use systems.

“..the company refuses to deprioritize user experience — unlike the cloud giants that he sees as competitors…”  Why growing cloud DigitalOcean isn’t scared of Amazon, Google, and Microsoft

The only hiccup I had signing up was that, for some reason, my account was flagged and needed a human to look it over before I was allowed to deploy a machine.   I guess this is a good thing in the end, really.

 

I’m a suspicious individual, obviously

 

Deploying Workloads

I’m finding videos to be much easier for understanding quick topics, so here’s a quick view into deploying a machine – or, as they call them, a ‘droplet’.

 



Tech Preview of Docker Machine Driver for Fusion

(Also, checkout the vSphere driver here.)

I kind of beat up on the vSphere driver quite a bit in the last post (sorry guys!), so I wanted to give a super easy example of what else you can do with Machine.   I just (finally) watched the DockerCon keynote where they introduced the machine functionality, and their messaging of getting from “zero to Docker” in just a few commands resonated.  This example shows what the vision is.

So here we go – using docker machine with VMware Fusion as the endpoint on OSX.

You may need to click on the video and watch it in theatre mode to see the text.

 


Tech Preview of Docker Machine Driver for vSphere

UPDATE:  Machine is now out of beta, and I have a newer post on some of the changes here: http://www.jaas.co/2015/03/20/using-the-released-version-of-docker-machine-v0-1/

(Also, checkout the Fusion driver here.)


On Dec 4 2014 Docker announced the “machine” command/functionality as one of the announcements at DockerCon 2014.

In short, it provides a way to deploy to many different locations from the same docker CLI interface (yes, I know that’s like “ATM machine” – just deal with it).  The initial alpha release includes drivers for VirtualBox and DigitalOcean (though as of Dec 18, their GitHub page already lists additional syntax for AWS & Azure; I’m not sure if those are functional yet).

The next day, on Dec 5 2014, VMware announced a preview of their extension to the Docker Machine command for deploying to VMware Workstation, vSphere, and vCloud Air.

I have been using the vSphere part of it a bit this week and found the existing instructions a bit lacking, so I wanted to provide some tips and examples for getting up and running.

Things to know – but maybe come back to this list later…
First off, a few takeaways I learned the hard way – which included bugging a very smart dude for help.    A few of these may not make sense until you dive into the functionality, so you may want to revisit this section later if you have trouble.

To be clear, I am not posting this list as all the blemishes I found, but as a guide to help anyone else who is struggling to see the vision this functionality has the potential to bring.  Remember – this is a preview release of the VMware driver and an alpha release of the Docker code.  Totally unsupported in every meaning of the phrase.


1) Understand that for this release you need to use the bundled docker binary, as it has functionality that the newest release you’ll get from package managers doesn’t have.   To get it to co-exist on a system that already has docker-io installed, either specify the full path to the one you want, or make sure the $PATH env variable is set so it picks up the right one first.   I also copied my released docker binary to docker_local, so I could easily run that command if I wanted to switch to a local Docker container instance.

2) This release of the machine command requires the use of environment variables to specify the active host.  When you run machine ls it will list all the existing docker machines available, and it also marks the “active” one.  I’m not in the loop on the dev details, but I assume this will be cleared up in the future.   Even though it says active, you still have to set the environment variable – either pull the URL from the machine url machinename command, or use this nested command:
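That nested command – assuming the usual DOCKER_HOST form from the alpha docs:

```bash
export DOCKER_HOST=$(machine url machinename)
```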

Do note that this is required.  One stumbling point I found: I couldn’t find a way to make this work when I don’t have a real tty session, like when vCO makes an SSH call in a workflow.  TBD there…

3) As a follow-up to the two points above: if you DO want or need to switch between using docker machine and a local docker binary, you need to clear out the environment variable with:
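That is:

```bash
unset DOCKER_HOST
```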

Yes, kind of annoying for now, but I’m sure this will be fixed soon enough.  They are probably discussing this issue here on github.

4) I’m not sure how many others use local docker registries out there, but I do quite a bit for lab environments.  As with this other post I made about the change to forcing SSL communication, it took a moment to figure out how to force the configuration setting on each docker machine.  The really smart guy I alluded to previously built me a boot2docker ISO with it embedded, so that’s an option – or you could just manually apply it like this:
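The original snippet is essentially the same sed that shows up in the v0.1 write-up above, run over SSH – the machine name is a placeholder, and “docker-hub:5000” is my local registry:

```bash
machine ssh machinename \
  "sudo sed -i -e 's/--tlsverify/--tlsverify --insecure-registry docker-hub:5000/g' /var/lib/boot2docker/profile"
```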

 

5) I had quite a few problems with a space in the datacenter name, and with special characters in passwords.  There may be a workaround, but simply escaping them didn’t work, so I just renamed the datacenter and used an account with all simple characters.  Remember… alpha code…

6) Yes, it downloads the ISO every time you run the machine command.  I don’t know why.  Go ask Docker.  Because, alpha.

7) I even hesitate putting this one here… but in my personal lab it kept failing when transferring the ISO to the datastore, though it works fine in another lab environment I use.
Probably my own problem with some ghetto routing issue I have…  I worked around it by uploading the ISO to the datastore by hand.  Even though, as I said in #6, it downloads the ISO every time, if it already exists in the datastore it doesn’t try to push it again.


Step by Step

This syntax is accurate as of Dec 18 with CentOS 6.6 64bit.

Grab the tarball:
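Exact URL omitted here – it came from the releases page of VMware’s fork on GitHub; something like:

```bash
# Grab the release tarball (placeholder URL -- get the real one from the announcement)
curl -L -o docker-machine-vmware.tgz "https://github.com/<vmware-fork>/releases/..."
mkdir machine-bin && tar xzf docker-machine-vmware.tgz -C machine-bin   # see the grumble below
```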

GRUMBLE GRUMBLE… whoever compressed this didn’t include a directory… so it extracts to the present working directory…

Append the directory you extracted to your PATH environment variable.  Lately I’ve been using ~/.bash_profile to set individual settings per user on each host; your results may vary.
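For example, in ~/.bash_profile (directory name matches the extract step above):

```bash
export PATH=$PATH:$HOME/machine-bin
```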

And now to fire it off for the first time:
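From memory, the alpha vSphere flags looked like the below – treat the exact flag names as approximate and check machine create --help; the values are placeholders:

```bash
machine create -d vsphere \
  --vsphere-vcenter vcenter.lab.local \
  --vsphere-username 'administrator@vsphere.local' \
  --vsphere-password 'simplepassword' \
  --vsphere-datacenter Datacenter \
  --vsphere-datastore datastore1 \
  --vsphere-network "VM Network" \
  dev01
```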

And with any luck you should see a VM pop up named docker-host-ABCDEFG (that last part is random).  If you get errors, read over the ‘PRO-Tips’ at the end and ‘Things to know’ at the top.

Now to list the current machines, run:
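That is:

```bash
machine ls
```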

Set the required environment variables with:
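Same nested command from ‘Things to know’ #2, with the machine name from the create step:

```bash
export DOCKER_HOST=$(machine url dev01)
```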


 

 

Now! The magic is happening!  Run a normal docker command like:
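The classic hello-world test works fine here:

```bash
docker run busybox echo hello world
```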

And see the magic happen for reals.   The image is deployed on the new docker “host”, which is actually a barebones vSphere VM.

PRO-Tips….

1) If you are doing a bunch of trial and error, you may see a message that the docker host already exists, even though it may or may not have actually been deployed.  This is because even if the command fails, the host still gets added to the local machine list.  Clean it up with machine rm -f machinename – the -f forces the removal even if the actual VM doesn’t exist.

2) If you get an error message similar to “FATA[0086] open /root/.docker/public-key.json: no such file or directory”, just run the docker binary included here and it will create this file for you.

3) I crafted a pretty sweet bash script to nuke all machines at once.  Add the -f flag to force if you have to.  It works as such:
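Roughly – a loop over the machine list (the name-parsing here is my own approximation):

```bash
#!/bin/bash
# Remove every machine in the local list; append -f to the rm to force.
for m in $(machine ls | awk 'NR>1 {print $1}'); do
  machine rm "$m"
done
```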

 

Conclusion

So what does this give us?   In my mind, it gives us a simple interface that you may already be familiar with and using on your local machine, plus the ability to deploy to any number of other endpoints like public or private clouds.   That’s powerful.   Especially with any automation you have already created – slipping this into the mix makes it even more robust.

This post was heavy on text and light on screenshots on purpose, as it’s a complicated subject at this stage of development.  I hope to put together a quick video to demonstrate this functionality soon.  Stay tuned.

 

 


Directory as A Service: Part 2 – vCAC Integration

Directory as A Service: Part 1 – Intro & Setup
Directory as A Service: Part 2 – vCAC Integration


In the first post I introduced JumpCloud, a hosted off-premise directory service.   In this post I will show one way to integrate it into vCloud Automation Center (vCAC).

Getting Started
I got started with re-using a simple CentOS single machine blueprint I had already configured and which does a few configurations on boot already:

Snip20141016_46

For simplicity, I like using the scripted method in a build profile to do simple stuff.   I don’t take any credit for this configuration, as I totally just copied what others have done before me (sorry, I don’t have the link to the exact post handy).   This build profile mounts a network share and runs a DNS script from that share to automatically add the new machine to my DNS.

Snip20141016_47

Now, the additions we’ll make to integrate with JumpCloud are as follows.  I split the work into three scripts because I built this iteratively using some of their example code, but it could just as easily have been done in one.  I’ll refer to the script numbers as they appear below for clarity:

I used JumpCloud’s example code as a starting point, and am posting my modifications to github here.

Snip20141016_49

 

Script 2 – Installs the agent (we saw that syntax in the first post)
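The install is the standard JumpCloud kickstart one-liner, as it appeared in their docs at the time – the connect key placeholder is yours:

```bash
curl --silent --show-error \
  --header "x-connect-key: <YOUR_CONNECT_KEY>" \
  https://kickstart.jumpcloud.com/Kickstart | sudo bash
```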

 

Script 3 – Assigns tag(s) to the system being deployed

 

Script 4 – Sets the configuration: enable password auth, enable root login, and enable multi-factor
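I’m not reproducing scripts 3 and 4 verbatim here (they come straight from JumpCloud’s examples, with each request signed by the node’s per-system key), but the shape of both calls is a PUT against the system’s record.  A rough, unauthenticated sketch – field names per the v1 API as I recall them, so verify before use:

```bash
# SYSTEM_ID comes from the agent's local config after install; the signing
# headers the real scripts add are omitted from this sketch.
curl -X PUT "https://console.jumpcloud.com/api/systems/$SYSTEM_ID" \
  -H "Content-Type: application/json" \
  -d '{
        "tags": ["vcac-provisioned"],
        "allowSshPasswordAuthentication": true,
        "allowSshRootLogin": true,
        "allowMultiFactorAuthentication": true
      }'
```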

Go ahead and do the normal vCAC configurations, and you end up with a catalog item for a JumpCloud-enabled CentOS machine.   I have not done anything else special to prompt the user for values here, though I could see it being useful to integrate this into ALL catalog requests and give the user the choice to enter (or choose from a list of) tags, or the authentication options (password, public key, root…).

Snip20141016_51

Let’s go ahead and request three nodes:

Snip20141016_53

 

Shortly after the three machines show up in vSphere:

Snip20141016_54

And when the deploying machine stage is complete, they show up in vCAC:

Snip20141016_59

And it shows up in the JumpCloud console:

Snip20141016_58

And the same method of authentication now works as we discussed previously.  Using Google Authenticator to log into one of my machines is pretty darn awesome, I have to admit.

Snip20141016_60

 

What now?
Now, what about day-2 operations?  Well, I could add some menu actions to add/remove the system from tags or to modify the other settings; that could be useful.  But at a bare minimum, I wanted tear-down to be clean.  Since I was able to make machine creation totally automated, I wanted cleanup to be as well.  vCAC is a self-service portal, and you wouldn’t want the JumpCloud console to fill up with machines that no longer exist.

You don’t want your console filling up with old machines like this:

Snip20141016_72

Because I took the easy way out and used the build profile scripting method of customizing machines from vCAC, I had to go a different route for tear-down.   The easiest way to inject automation at this stage today, with vCAC 6.1, is firing off a vCO workflow using a machine state workflow.   So first I built a simple vCO workflow that runs the SSH command on the machine:

Snip20141016_61

I had to look it up to get the syntax right, but the input parameter you want to use is “vCACVm” of type “vCAC:VirtualMachine”.   From that variable you pull the VM name (vCACVm.displayName) and use it to know where to run the SSH command.  Simple and effective.

Snip20141016_63

Snip20141016_64

 

Why shell scripts on each node and not full vCO automation?
Normally I wouldn’t bat an eye and would do all of this from vCO itself: create the JumpCloud endpoint as a REST host, create various actions as REST operations, etc.  The script just reaches out to a REST API, after all.   The first reason I didn’t was, well, they already had demo scripts available that work completely.   But the real reason is that authentication is done from each individual node: each node (appears to) have a unique system key, and this system key is used to authenticate to the REST API.   This may need to be revisited for true enterprise functionality like central provisioning, IPAM, or even synchronizing directory services.   But I digress…

Implementing custom machine stages
Back to the vCO bit.  We need to run the library workflow below, “Assign a state change workflow to a blueprint and its virtual machines”, where we specify the blueprint we are using, the vCO workflow we want to run, and the machine state at which we want it run.

Snip20141016_65

And bam, a custom property gets automatically added to the blueprint itself.   Since we want this to happen when the machine is destroyed, we chose the Machine Disposing stage.   The long ID starting with 8be39… is the vCO workflow ID; you may have encountered this if you have ever needed to invoke workflows from the vCO REST API, like here.  This library workflow is a lot more useful for a more complicated integration with lots of values being passed, but hey, it saved a little time for us here.

Snip20141016_67

 

Try it out
Now, unless I missed documenting a step here (I’m writing this after I built the integration), all we have to do is destroy the machine like we normally would, and it quickly disappears from the JumpCloud console.

Snip20141016_68


Snip20141016_70
And there we go: full integration of login access control for my lab machines provisioned from vCAC.  If I’m honest, I may even keep using this for a few machines, as handy as it is.

 
