Things learned managing production WordPress: How to easily enable HTTPS

This is the start of a series chronicling everything I’ve learned keeping the wife’s photography website (https://www.jendzphotography.com) running smoothly.  I’ll cover topics including performance improvements, dealing with spam & robots, content distribution networks (CDNs), and using website tools to track progress.  I intend to keep the technical mumbo jumbo to a minimum and make the reading level less technical than my typical blog posts for easier consumption.

Background

In 2014 Google announced they would start boosting page rankings for HTTPS-enabled sites.  While SEO is of course important, it’s also just good practice to use SSL.

The current Wikipedia entry for HTTPS includes:

HTTP Secure (HTTPS) is an adaptation of the Hypertext Transfer Protocol (HTTP) for secure communication over a computer network, and is widely used on the Internet. In HTTPS, the communication protocol is encrypted by Transport Layer Security (TLS), or formerly, its predecessor, Secure Sockets Layer (SSL). The protocol is therefore also often referred to as HTTP over TLS, or HTTP over SSL.

Put simply, when you enable HTTPS you install a certificate and private key in the web server configuration, and the server uses them to encrypt traffic between itself and your web browser.  OK… put even more simply…  It makes your web traffic hard(er) for bad people to read.

Certificates are issued by a trusted Certificate Authority (CA).  The whole system is based on trust.  Your web browser contains a list of the CAs in the world.  When you load an HTTPS-enabled site, the certificate is checked against that list, and if everything checks out, the browser shows the padlock and you are safe.  If one of these CAs gets severely hacked, browsers will remove it from the list and anything it issued will no longer be trusted.  Usually you have to pay a CA like GlobalSign, Verisign, or GeoTrust to have a certificate issued for you.  In 2016, however, a service called Let’s Encrypt launched that now offers them for free.  Yay!!

Let’s Encrypt

Their free service is great, but it does have a limitation: the certificates they issue are only valid for 90 days (the commercial ones are typically measured in years).  They explain their reasoning for that here.  Basically, the short lifetime is a built-in failsafe in case a certificate is compromised (stolen), and it encourages automation.  And I have to admit, the automation tool I first used is great!

Getting Started

The easiest way to get started is to use a tool that requests the certificate and puts it in place for you.  I found the Certbot tool from the Electronic Frontier Foundation (EFF).  I’m using Apache on CentOS 6, so I’ll just focus on that.  The install steps for that are here.

 

Run the commands, and it starts off installing dependencies
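For CentOS 6 at the time, the EFF steps boiled down to fetching their certbot-auto wrapper script.  Roughly (from memory — double-check the current EFF instructions, as package names have changed since):

```shell
sudo yum install epel-release
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto
sudo ./certbot-auto --apache
```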

Right off, it dives into a few prompts: first the terms of service, then an email address for EFF notifications.

It goes through your web server config and lists the configured domains.  On our live site it surprisingly listed every domain we had configured.

Then it asks whether you want the tool to automatically configure Apache to force redirects to SSL/443.  Be sure you take a backup of your config before proceeding – by default it’s at /etc/httpd/conf/httpd.conf.

Yay!  First part is done!

So remember, the certificate is only valid for 90 days.  They include a method to automatically renew it, which is actually pretty awesome – enterprise system administrators forget to renew certificates all the time.

They have a dry run option (meaning it shows what it would do, but doesn’t actually do it):
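With the certbot-auto wrapper, that looks like:

```shell
./certbot-auto renew --dry-run
```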

(ignore my annoying python warnings/errors using the default config)

Add this to cron so you don’t forget.  They recommend running it twice a day, but I don’t see any harm in running it more often.
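A crontab entry along these lines does the trick (the certbot-auto path is wherever you put the script — adjust to taste):

```shell
# /etc/crontab -- attempt renewal twice a day; certbot only renews
# certificates that are close to expiring, so this is cheap to run
0 3,15 * * * root /root/certbot-auto renew --quiet
```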

Yay!  We’re all good!

Ah crap…. what now?

By default, it only picked up the base domain (jaas-demo.com) that was configured in Apache, and not the more human-friendly full domain (www.jaas-demo.com).

They have a command to add a domain to the certificate:
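From memory, the invocation I ran looked roughly like this – note the expand flag:

```shell
./certbot-auto --apache --expand -d jaas-demo.com -d www.jaas-demo.com
```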

There we go, that’s better.

To troubleshoot any problems, you can use this syntax to dump the contents of the certificate in human readable form:
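Something like this, pointed at wherever your certificate lives (the Let’s Encrypt default live path shown here is an assumption — adjust for your domain):

```shell
openssl x509 -in /etc/letsencrypt/live/jaas-demo.com/cert.pem -text -noout
```

Look for the “Subject Alternative Name” section in the output to see exactly which domains the certificate covers.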

 

That’s all we got!  For a basic site this will work out of the box.  You may have to tweak things a little if you use a content delivery network or pull in files from other sites, and so on.  We’ll cover some of this in a later post.

UPDATE:

Wuups…   I didn’t notice in the OpenSSL output that only the subdomain www.jaas-demo.com was in the certificate!  I actually ran the wrong command above with the expand flag.  What that syntax did was create a NEW certificate rather than add the subdomain to the existing one.  The syntax should include the cert-name field, like so:
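Roughly (check certbot’s documentation for your version):

```shell
./certbot-auto --apache --cert-name jaas-demo.com -d jaas-demo.com -d www.jaas-demo.com
```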

 

Ahh that is better.

 

**** Shameless plug ****

Have a website problem of any sort you need help with?  Contact me here to see if I can help.  Rates based on complexity and time required.

**** /Shameless plug ****


Stickman costume /w LED pixel strips

In some ways, it’s more that I’m dressing up as my house for Halloween and it just happens to look like a stick man… but more on that another time… I digress…

DIY stickman made with LED pixel strips!  Yay!

I had a roll of LED pixel strip left over from Christmas last year and have been looking for a use for it.  It’s a rather expensive bit of kit to cut up into pieces for something like this, but it was left over anyway.  It turns out I misjudged lengths (and made a few mistakes) and had to order another roll.  But crap, I couldn’t just buy the same expensive one, so I got a cheaper one… but double crap… it turned out to have RGB in a different order (at full green or full blue they lit up as opposites; luckily red was the same) and it’s half as populated (30 LEDs per meter instead of 60).  So… I just had to put my perfectionism on hold… it’s just a costume!

Anyway, you can use just about any type of pixel strip to make this work; I’ll just note here what I used and what I found handy.  Your mileage will definitely vary.

Parts:

Brain:
Arduino – I used an Uno, but any will work. (sparkfun)

LEDs:
(1) Pixel LED RGB Strip 60 LEDs/m 60 Pixels/m Waterproof Tube (16ft-6in/5 meter Roll) – 12v / INK1003 (WS2811 clone) (holidaycoro)
(2) Pixel RGB LED Strip 30 LEDs/m 10 Pixels/m Waterproof Tube (16ft-6in/5 meter Roll) – 12v / 2811 / BLACK PCB (holidaycoro)

Buttons:
(1) Some sort of small button (sparkfun)
(2) Fun “gameshow-like” button found on Ebay (ebay)

Power:
(1) USB for Arduino (Anker)
(2) 12v battery pack for lights (Amazon)

Odds n Ends:
Electrolytic Decoupling Capacitors – 1000uF/25V (sparkfun)
Solder Shrink Sleeves Wire Splices / 18 – 22 AWG Wire / Red (holidaycoro)
3pin Connector Male Female Cable Wire for WS2811 WS2812B LED Strip 10pcs (Vozop)
Safety pins… lots and lots…
Velcro… lots…
~ 9″ hoop thingy from a hobby store

LED Strips
These lights are “smart” LEDs, meaning every single light is individually addressable in huge variations of red, green, and blue to produce… I don’t even know how many colors.  They are quite fun.  For this I’m not doing any fancy animations, just solid colors.  You could do this really cheaply using dumb single-color LEDs, but what fun would that be…!


Power
The battery pack I used was nice because it already had the right size barrel plug for quick disconnecting and swapping.  I had wanted to use a rechargeable pack similar to the ones Anker makes, but with a 12v barrel-plug output – the ones I found didn’t supply enough amperage.

LED strips come in either 5v or 12v, and some are 3-pin while others are 4-pin, so be sure to plan all your parts ahead.  Also, it’s best to separate the power for the Arduino from the strips for simplicity.  If you do it this way, be sure to connect the grounds together.

Also, to prevent the initial power surge from damaging the lights, it’s best to wire in a capacitor – see it in the diagram.


Wiring
Yeah I’m not a great circuit designer…..

Pretty simple.  You just need a 470Ω resistor inline on the data line from pin 6.  I wanted two different buttons to control the lights, so I wired up one I could hold in my hand, and if that failed for some reason I could still hit the button on the board.


Using some of the parts I linked to above, I created some DIY splitters to simplify the connections between strands.

I planned out the sections something like this:

Code
The full code is here on GitHub.  It’s really nothing special at all.  All it does is switch between a list of colors when the button is pressed.  I’m using the FastLED library with no animations, just solid colors.  Much could be improved here, but I went for simplicity in this build.

Snippet:
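The core of it looks something like this (pin numbers, LED count, and the color list here are illustrative, not the exact values from my build — see the GitHub repo for the real thing):

```cpp
#include <FastLED.h>

#define DATA_PIN   6      // strip data line (through the inline resistor)
#define BUTTON_PIN 2      // handheld button, wired to ground
#define NUM_LEDS   150

CRGB leds[NUM_LEDS];
// The list of solid colors to cycle through on each button press
const CRGB colors[] = { CRGB::Red, CRGB::Green, CRGB::Blue,
                        CRGB::White, CRGB::Purple };
const int numColors = sizeof(colors) / sizeof(colors[0]);
int colorIndex = 0;

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
  // Color order (RGB/BRG/GRB) varies by strip -- adjust to match yours
  FastLED.addLeds<WS2811, DATA_PIN, RGB>(leds, NUM_LEDS);
}

void loop() {
  if (digitalRead(BUTTON_PIN) == LOW) {   // pressed (active low)
    colorIndex = (colorIndex + 1) % numColors;
    delay(250);                           // crude debounce
  }
  fill_solid(leds, NUM_LEDS, colors[colorIndex]);
  FastLED.show();
}
```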

I first thought it would be easier to use an off-the-shelf controller, but the cheap $10 one I picked, while fun, was not ideal for this type of use.  Bonus, though: I learned the controller I got does indeed control pixels perfectly, so I could use it for something else someday.

 

What I would do differently
If I were to do this again, I would use the cheap strips on the black background for the whole thing, for sure.  I got that second strip on sale for only $15!

 

Sources / Inspiration

coeleveld.com

Instructables

RGB Stickman

 


HOWTO: Change Storage Policies for VSAN across entire clusters /w PowerCLI

I recently needed to switch the applied storage policy across a ton of VMs, but I didn’t want to change the default policy that comes out of the box.  A tough proposition, as I found no easy way to do it.  It took quite a bit of googling and trial and error, but I came up with this two-liner to get it done.  So here you go, world – go forth and change policies if you need such a thing.

The first line applies it to the VM object, then the next applies it to all the disks.  Easy peasy.
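For reference, a sketch of that two-liner using PowerCLI’s SPBM cmdlets (the policy and cluster names here are placeholders — substitute your own):

```powershell
$policy = Get-SpbmStoragePolicy -Name "My-VSAN-Policy"

# Line 1: apply the policy to the VM objects themselves
Get-Cluster "MyCluster" | Get-VM | Get-SpbmEntityConfiguration | Set-SpbmEntityConfiguration -StoragePolicy $policy

# Line 2: apply it to all of their hard disks
Get-Cluster "MyCluster" | Get-VM | Get-HardDisk | Get-SpbmEntityConfiguration | Set-SpbmEntityConfiguration -StoragePolicy $policy
```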

 

Technology found our new best friend.

Last night I built a robot that brought us to our new best friend. Meet Cash.

Cash

Before I explain this strange statement, first let’s back up.

Two weeks ago we found our beloved Maddie was stricken with a tumor on her spleen that ruptured.  I won’t dive into the heartbreaking details, but you can read about it here, here and here.

maddie_camping

To summarize: heartbroken.  That damn wonderful dog lived a great life and will never be replaced.  But we have found we’re a two-dog family.  Enter the idea of visiting shelters… which is always fun…!

IMG_6736

After a few misses, we found just how competitive adopting dogs is in Boulder.  Yes, competitive.

Forget cycling, running, and climbing – the most competitive sport in Boulder is trying to adopt a dog from the pound link

Dogs fly out of the Boulder Humane Society.  There was one Jen was interested in that was adopted within hours of her becoming available.  Employees told us of one dog that was going HOME within 45 minutes of stepping into the adoptables area.  Seriously.

The employees say to just keep an eye on the website.  So that’s what we did for a bit.  We noticed it was updated frequently throughout the day, but there was no way to be notified of new dogs.  Enter my light-bulb moment.

I saw there was no RSS feed and (of course) no API.  So I took a glance at the HTML source and saw it would be super easy to screen scrape.  Muahhaha… this will be easy peasy!  With just a little bit of hacking last night, I had a working system that scraped their webpage every 15 minutes, stored the results in a local database, and sent us an email when a dog became available!  Ha!  Leg up; take that one, Boulder animal people.  Dog-adoption performance-enhancing drugs.

In the morning I surmised that wasn’t nearly geeky enough.  I added functionality to email us when a dog appeared to be adopted (no longer listed).  And since email is SO year-2000s, I spun up a new Twitter account and had it tweet and direct message us when a dog showed up and went home.  I dub thee: Dog(S)talker.  Get it?  Dog Stalker.  Dogs Talker.  I kill me…
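The code isn’t posted yet, but the core diffing idea – compare the current scrape against the last one and notify in both directions – is tiny.  A toy sketch (the function and names are mine, not the actual code):

```python
def diff_dogs(previous, current):
    """Compare two scrapes of the adoptables page.

    Returns (new, adopted): dogs that appeared since last time,
    and dogs that disappeared (presumably went home).
    """
    new = sorted(set(current) - set(previous))
    adopted = sorted(set(previous) - set(current))
    return new, adopted

# Example: Cash shows up, Rex goes home
new, adopted = diff_dogs(["Rex", "Bella"], ["Bella", "Cash"])
```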

Lo and behold… while I was out with the kid on his bike and Jen was working on an extension to the chicken coop, DING.  DM from the new robot:

Snip20160403_15

Due to an unfortunate typo in the code, it’s missing the dog’s details, but still… the fucking thing worked.  A quick click on the link showed it was a 1-year-old Australian Kelpie mix, about 45 pounds.  Check, check, and check all the boxes!  I yelled across the street: “JEN!” and immediately heard the reply, “I’M GETTING READY TO GO [to the shelter]!”

15 minutes later I received this:

IMG_6740

So an absolute max of 30 minutes from the time she was posted to the website to one of us showing up to check her out.

Long story short, he’s perfect for us.  I’ll post the code to github soon.   Perhaps if this is useful to anyone else I can add others to the notifications.

Snip20160403_16

 


How to send vCenter alarms to Slack

I’m spending some of my time in the new gig with my old sysadmin/ops hat on.  We needed a quick, easy way to keep an eye on alerts from a vSphere environment, so… what could be more fun than funneling them to Slack?!  Easy peasy, even on the vCenter Appliance.  Let’s see how…

First you need to configure the integration on Slack.   In the channel you wish to see the alerts in, click the “Add a service integration” link.

Snip20150806_12

There isn’t any special integration with vSphere; we are going to use a simple REST API to push the content.  Scroll down to “Incoming WebHooks”.

Snip20150806_13

Now you need to approve the integration by verifying the channel and clicking the button:

Snip20150806_14

The outcome of this will be a long URL you will need for the script.

Now we need to get the script ready.  Remember, this runs on vCenter (Windows OR appliance), not ESXi.  Much credit to this guy who created a simple script for Zabbix, as this is a hacked-up version of it.  The key here is the environment variable $VMWARE_ALARM_EVENTDESCRIPTION, which I use because it’s short and simple.  If you want other types of data, check out the documentation here.
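The script itself is just a curl POST of the alarm description to the webhook URL from the earlier step.  A minimal version (the webhook URL is obviously a placeholder):

```shell
#!/bin/sh
# Slack incoming-webhook URL from the integration setup (placeholder)
WEBHOOK_URL="https://hooks.slack.com/services/T000/B000/XXXX"

# vCenter exposes alarm details to alarm scripts as environment
# variables; this one is short and to the point
PAYLOAD="{\"text\": \"${VMWARE_ALARM_EVENTDESCRIPTION}\"}"

curl -s -X POST -H 'Content-Type: application/json' \
     --data "$PAYLOAD" "$WEBHOOK_URL"
```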

Now you simply need to hook this script up to the alarm in vSphere:

Snip20150806_15

Sweet.  Cool.  Let there be (kind of) chatops.

But, I hear you asking…  what if you want to apply this to all your alarms??  Also… easy peasy.  I just whipped together some PowerCLI and bam.
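From memory, something like this (the script path is wherever you put the script on your vCenter):

```powershell
Get-AlarmDefinition | New-AlarmAction -Script -ScriptPath "/root/slack-alert.sh"
```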

That line will apply this script action to ALL alarms in the vCenter you connect to, and by default only to the Yellow-to-Red action level.  I wanted this to trip on all four transitions, so I looked a little deeper and found this will do it:
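Since the Yellow-to-Red trigger already exists on a new action, it’s the other three transitions that need adding – roughly like so (sketched from memory, so verify against your PowerCLI version):

```powershell
Get-AlarmDefinition | Get-AlarmAction | ForEach-Object {
    New-AlarmActionTrigger -AlarmAction $_ -StartStatus "Green"  -EndStatus "Yellow"
    New-AlarmActionTrigger -AlarmAction $_ -StartStatus "Yellow" -EndStatus "Green"
    New-AlarmActionTrigger -AlarmAction $_ -StartStatus "Red"    -EndStatus "Yellow"
}
```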

Now, if you are like me and screw this up along the way, you may have to clear out the actions across the board.  This line will do that for you:
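Something along these lines (careful – this removes every alarm action, not just the Slack ones):

```powershell
Get-AlarmDefinition | Get-AlarmAction | Remove-AlarmAction -Confirm:$false
```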

 

 


VMware Announcements @ DockerCon 2015

Two announcements are being made by VMware at DockerCon today that I am pretty stoked about.  Here’s a snippet of the details and a link roundup.  I’ll revisit these soon with deeper posts.

VMware AppCatalyst

VMware AppCatalyst is an API and Command Line Interface (CLI)-driven Mac hypervisor that is purpose-built for developers, with the goal of bringing the datacenter environment to the desktop.

“Introducing AppCatalyst – the desktop hypervisor for developers” – VMware Cloud Native Blog link

VMware Communities & download link

Update June 23, 2015 – Using Pivotal Lattice with AppCatalyst by @jrrickard link

Update June 23, 2015 – Vagrant provider for AppCatalyst link

Project Bonneville

..an extension of VMware vSphere that enables seamless support for Docker containers

VMware-Project-Bonneville

“Introducing Project Bonneville” – VMware Cloud Native Blog link

Overview video from VMware’s Brit Mad Scientist Ben Corrie here.

Update June 23, 2015 – Demo video of Bonneville link

Update June 23, 2015 – Bonneville running MS-DOS to play Lemmings link

Other Links

“Extending the Data Center with VMware AppCatalyst and Project Bonneville” – VMware Tribal Knowledge blog post

“VMware previews Project Bonneville, a Docker runtime that works with vSphere” – Venture Beat post (with some weird upside down image that is giving me a headache)

“A Different VMware: An API-Driven Hypervisor and a Docker Oriented vSphere” – The New Stack post

Update June 23, 2015 – “VMware targets new DevOps tools at Docker” – Silicon Angle link

Update June 23, 2015 – “VMware Doubles Down on Docker Integration with Project Bonneville” – Server Watch link

Update June 23, 2015 – “VMware AppCatalyst and Project Bonneville: ‘Datacenter On the Desktop'” – Virtualization Review link

Update June 23, 2015 – “VMware brings AppCatalyst and Project Bonneville technology previews” – InfoTech Lead link

Update June 23, 2015 – “VMware Brings More Tools To Docker Development” – Information Week link

Update June 23, 2015 – “VMware builds a magic mirror for containers and a desktop cloud” – The Register link

Update June 24, 2015 –“VMware Blunts Container Attack With Bonneville VM” – The Platform link

Update June 24, 2015 – “VMware containers go soup-to-nuts for cloud apps” – TechTarget link

Update June 24, 2015 – “VMware Embeds Docker Container Capabilities in Hypervisor” – Datacenter Knowledge link

 

A change…or pivot if you will…..

Pivotal_Logo_200

I have been at VMware for 7 years (this week, on the dot actually!).  That’s half a lifetime in IT dog years.  In that time I have done many different things and been to many different places.  I saw (and at times helped, or tried to help) virtualization mature from a fringe lab thing that would supposedly never run production workloads efficiently into an established technology that most people are using in some way.  Quite a ride!

Just after the July 4th holiday I will (metaphorically, though not geographically) be walking a few blocks up the hill in Palo Alto from the VMware campus to a sister EMC Federation company, Pivotal.  I’ll be leaving the current pre-sales gig and getting my hands dirty directly in technology as a main focus.  I’m excited!

www.jaams.co

micro-services1-297x250

I plan to continue blogging weird and silly projects on here, though it will stray from a VMware focus to broader devopsy topics in general.  Hence the slight change in name (mostly a joke I was told at GlueCon recently) – Josh as a (Micro)Service!  Kind of catchy, don’t you think?

I’ll spare you the pontificating on the merits of the joke: a career focused on one thick technology stack made up of all kinds of mashed-together bits is a monolith, and breaking it down into singular focus areas, each done well, is microservices… I don’t know… the joke might not work entirely, but I get a good laugh out of it anyway.

Onward!

“Security is mostly a superstition. It does not exist in nature, nor do the children of men as a whole experience it. Avoiding danger is no safer in the long run than outright exposure. Life is either a daring adventure, or nothing.”
Helen Keller

“Live every week like it’s Shark Week.” – Tracy Jordan

“It’s more fun to be a pirate than to join the Navy.” – Steve Jobs


Using a time series DB, starting with InfluxDB

devops-everywhere

At the last two conferences I attended (DevOpsDays Rockies and GlueCon) I heard a lot of mentions of NoSQL and time-series databases.  I hate not knowing about things and not having hands-on experience, so I’ve been playing with both.  First I integrated Redis, a NoSQL key/value store, into a recent project of mine.  And just now I’ve been playing with InfluxDB as a monitoring system, so here I’m going to tell you about my experience.

I didn’t want to get caught up in any installation shenanigans, so I tracked down Docker images to get up and running fast.  And glad I did, because it worked immediately!

index

1: InfluxDB

InfluxDB docker image: https://registry.hub.docker.com/u/tutum/influxdb/

Docker command:
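From memory, something like this (the tutum/influxdb image is long since deprecated, so treat this as a period piece):

```shell
docker run -d --name influxdb -p 8083:8083 -p 8086:8086 tutum/influxdb
```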

Then right away you can load http://your.ip.address:8083 in a web browser and you will get the below screen:

Snip20150527_9

Once you log in with root/root, you’ll see that you have no databases out of the box; go ahead and enter a name and hit create:

Snip20150527_10

You are now given a simple UI to push and pull data by hand.  To test this we will add some values in the same format as some of my scripts that deal with temperature.  Basically you can think of the time series as the table, and the values as key/value pairs.

Snip20150527_11

Then you can craft a simple query to verify the data:

Snip20150527_12

Neat!  Now, unlike some other solutions, InfluxDB doesn’t provide any visualization functionality (other than a basic line), so I spun up a Grafana container to do this.

grafana

2: Grafana:

Grafana docker image: https://registry.hub.docker.com/u/tutum/grafana/

Docker command:
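Roughly this – the environment variables pre-wire the InfluxDB data source and the login, though the exact variable names here are from memory of the tutum/grafana image, so double-check its README:

```shell
docker run -d --name grafana -p 80:80 \
  -e INFLUXDB_HOST=your.ip.address -e INFLUXDB_PORT=8086 \
  -e INFLUXDB_NAME=temperature \
  -e INFLUXDB_USER=root -e INFLUXDB_PASS=root \
  -e HTTP_USER=admin -e HTTP_PASS=admin \
  tutum/grafana
```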

There are simpler ways to start this container, but I found these parameters got me quickly to where I wanted.

Now you can log in on port 80 on this machine and you will be presented with an empty first page:

Snip20150527_13

Empty graphs aren’t very exciting, so let’s configure them real quick…

The syntax in Grafana is slightly different from what we saw directly in InfluxDB, but it’s mostly straightforward.  We put the database name (temperature) into the “series” field.  We fill in the blanks for the select statement – use last(value) and item=’80027_temp’ to specify the key/value.  Click somewhere else to change focus, and the graph should reload showing the values we entered by hand.

Snip20150527_14

Now I wanted to play around with it further, so I modified some existing scripts I have for various types of monitoring – pulling data from Weather Underground (temperature, humidity, wind) and free disk space from a NAS.  Mix it up and it came out looking like this:

Snip20150527_15

To feed the data in, I took the easy way out and used a Perl client documented here.  I then modified my existing scripts to feed the data to this Perl script every 30 seconds, and bam, I’m done.

 

 


New backup option for Synology devices

synology1512

I have two Synology NAS devices in my home lab, and I’ve always struggled to be sure I have full backups of them as they’ve grown pretty large over time.  I have the 5-bay DS1512+ (with an additional 5-bay expansion), and I have a tiny 2-bay DS214SE.  I didn’t plan my use of them this way; it just kind of evolved over time, such is how my lab is.  Something breaks or is slow, I tweak to squeeze out better performance on a small budget, and life goes on.

Currently I use the large array for normal file storage (music, photos, videos, ISOs, etc.), and I used the extra expansion for transitional storage when moving stuff around (mainly VMs).  I originally used the tiny 2-bay NAS for my tiny portable lab based on NUCs.  I now have the management components living on an iSCSI LUN there (VC, PSC, vCO, DNS, AD…), and for all compute I am now using VSAN across three whitebox machines (which is working fantastically!).

I’ve always struggled with backups in my lab.  Any free option out there either won’t cover two vCenters, is CPU-core limited, or is VM-count limited.  I’ve been using William Lam’s ghettoVCB forever, which is solid but mostly manual.

Enter… what I found this past weekend: Synology management software images that will work in a VM or on bare metal!  This is literally the same OS that runs on their devices.  I first tried it in a VM to test it out.  All seemed well except for updating, which appears to break things, so you have to wait for unofficial patches.  To use this for real, I went ahead and swapped out the USB drive on my HP N40L, which was previously running FreeNAS for backups.

This allowed me to set up a recurring rsync from the DS1512:

Snip20150527_4

…And it also allowed me to set up a recurring backup of the iSCSI LUN (holding the management VMs):

Snip20150527_5

While the ~250 GB iSCSI backup was pretty quick, the rsync of 6 TB of small and large files is taking a while.  Performance seems pretty decent, at least for my home lab, which can be kind of… iffy, given the amount of crazy crap I run on a large flat network of consumer-level 1 Gb switches with no tuning whatsoever.

Snip20150527_6

Prior to this I was doing all my backups manually – both the rsyncs and the ghettoVCB backups.  A few times a year I would move a backup set out of the house to a family member.  I highly suggest you do the same!  I do my best to follow the 3-2-1 rule, though I’m not doing great on the multiple types of media, as my photo collection has grown too large for “cloud” storage to be useful or economical.

Check it out for yourselves!

Install information (credit to the source!): http://www.bussink.ch/?p=1672

More information http://www.xpenology.nl/

Downloads http://xpenology.me/downloads/


Experiment: Pooling in vRA & Code Stream

Background

I recently attended DevOpsDays Rockies, a community-oriented DevOps conference (check them out in your area – it was great!).  I saw a talk by @aspen (from Twitter/Gnip) entitled “Bare Metal Deployments with Chef”.  He described something he/they built that, if I recall correctly, uses PXE/Chef/magic-pixie-dust to pull from a pool of standby bare-metal hardware and fully automate bringing it into a production cluster for Cassandra (or what have you).

This got me thinking about something I’d been struggling with lately.  Whenever I develop blueprints in Application Director / Application Services, or just vRA/Code Stream, the bulk of the time I just hit the go button and wait, look at the error message, tweak, and repeat.  The bottleneck by far is waiting for the VM to provision.  Partly this is due to the architecture of the products, but it also has to do with the slow nested development environments I have to use.  We can do better…!

Products using pooling

I then started thinking about what VDM / Horizon View have always done with this concept.  If I recall correctly (it’s been years and years since I’ve worked with it), to speed up delivery of a desktop to a user, a pool concept exists so that one will always be available on demand.  I don’t have much visibility into it, but I am also told the VMware Hands-On Labs does the same – keeps a certain number of labs ready to be used so the user does not have to wait for one to spin up.  Interesting.

The idea

So I thought – how could I bring this up-front deployment time to the products I’m working with today to dramatically speed up development?  And this is what I built: a pooling concept for vRA & Code Stream managed by vRO workflows.

Details – How Redis Works

When planning this out I realized I needed a way to store a small bit of persistent data.  I wanted to use something new (to me), so I looked at a few NoSQL solutions, since I’ve wanted to learn one.  I decided on Redis as a key/value store, and found Webdis, which provides a light REST API into Redis.

I couldn’t find any existing vCO plugins for Redis I/O, which is fine – the calls are super simple:

Example of assigning a value of a string variable:

Snip20150517_5

The Redis command is: set stringName stringValue
So the Webdis URL to “put” at is: http://fqdn/SET/stringName/stringValue

Then to read the variable back:

Snip20150517_6

The Redis command is: get stringName
So the Webdis URL to “get” at is: http://fqdn/GET/stringName

Easy peasy.  There are similar functions for lists, with commands to pop a value off either end.  This is all I needed: a few simple variables (for things like the pool size) and a list (for things like the list of VMs, storing IP addresses & names).
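From the command line the same calls look like this with curl.  Webdis separates command arguments with slashes in the URL path and listens on port 7379 by default – the key and list names here are just examples:

```shell
curl http://fqdn:7379/SET/poolTarget/10
curl http://fqdn:7379/GET/poolTarget
curl http://fqdn:7379/RPUSH/vmlist/vm-pool-01
curl http://fqdn:7379/LPOP/vmlist
```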

So in vCO I just created a bunch of REST operations that used various number of parameters in the URL line:

Snip20150517_7
I found the most efficient way to run these operations was to parameterize the operation name and pass it to a single workflow that does the I/O.

Details – Workflow(s)

The bulk of the work for this pooling concept is done in the following workflow, which runs every 15 minutes.

Snip20150517_8

In general it works like this:

  • Check if the workloads are locked – since it can take time to deploy the VMs, only one deployment will be going at a time.
    • If locked, end.
    • If not locked, continue.
  • Lock the deploys.
  • Get the pool max target (I generally set this to 10 or 20 for testing).
  • Get the current pool size (the length of the list in Redis.  much faster than asking vSphere/vRA).
  • If the current size is not at the target, deploy until it is reached.
  • Unlock the deploys.
  • Profit.
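The steps above, reduced to a toy sketch – Redis and the vRA catalog request are stubbed out here, and the names are illustrative, not the actual workflow code:

```python
# Stand-in for the Redis keys/list the workflows read and write
state = {"locked": False, "pool_target": 3, "vmlist": []}

def deploy_vm(n):
    # Stand-in for the nested workflow that requests a vRA catalog item
    return "pool-vm-%02d" % n

def replenish():
    if state["locked"]:                    # a previous run is still deploying
        return
    state["locked"] = True                 # lock the deploys
    try:
        while len(state["vmlist"]) < state["pool_target"]:
            state["vmlist"].append(deploy_vm(len(state["vmlist"]) + 1))
    finally:
        state["locked"] = False            # unlock the deploys
```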

I did not have to do it this way, but the nested workflow that does the actual VM deployments requests vRA catalog items.

In Action

After I got it fully working and the pool populated, you can check the list values with this type of Redis query:

Snip20150517_9

Redis: lrange vmlist 0 -1 (-1 means all)
Webdis: http://fqdn/LRANGE/vmlist/0/-1

The matching machines in vSphere:

Snip20150517_11

In Action – Code Stream

Normally in a simple Code Stream pipeline you would deploy a VM by requesting the specific blueprint via vRA like this:

Snip20150517_19

In this solution, I instead use a custom action to grab a VM from the pool and return its IP to the pipeline as a variable.  Then I treat the VM like an existing machine, continue on, and at the end delete it.

Snip20150517_18

This reduces the list in redis by one, so the next time the scheduled workflow runs that checks the list size it will deploy a new one.

(Kind of) Continuous Deployment

I have a Jenkins job that builds the sample application I am using from source in Git, pushes the compiled code to Artifactory, and runs a post-build action that calls Code Stream to deploy.

Snip20150517_15

I wanted to see if there were any bugs in my code, so I wanted this whole thing to run end to end over and over and over…  I configured the Jenkins job to build every 30 minutes.  I went on vacation the week after I built this solution, so I got to see whether anything broke down over time.  Amazingly enough, it kept on trucking while I was gone, and even got up to the mid-700s in Jenkins builds.  Neat!

Snip20150517_12

Jenkins builds

Artifacts


Code Stream executions


Summary

To my surprise, this actually works pretty darn well.  I figured my implementation would be so-so but would get the idea across.  It turns out what I’ve built here is darn handy, and I’ll probably be using it the next time I am in a development cycle.

Post any questions here and I’ll try to answer them.  I’m not planning to post my workflows publicly just yet, FYI.
