Category Archives: Projects

Directory as A Service: Part 2 – vCAC Integration

Directory as A Service: Part 1 – Intro & Setup
Directory as A Service: Part 2 – vCAC Integration

jc_100_w

In the first post I introduced JumpCloud, a hosted off-premise directory service.   In this post I will show one way to integrate it into vCloud Automation Center (vCAC).

Getting Started
I started by reusing a simple CentOS single-machine blueprint I had already configured, which performs a few configuration steps on boot:

Snip20141016_46

For simplicity, I like using the scripted method in a build profile to do simple stuff.  I don't take any credit for this configuration, as I copied what others have done before me (sorry, I don't have the link to the exact post handy).  This build profile mounts a network share and runs a DNS script from that share to automatically add the new machine to my DNS.

Snip20141016_47

Now, the additions we'll make to integrate into JumpCloud are as follows.  I split it into three scripts because I built this iteratively using some of their example code, but I could just as easily have done it all in one. I'll refer to the script numbers as they appear below for clarity:

These are based on JumpCloud's example code; I've posted my modifications to GitHub here.

Snip20141016_49

 

Script 2 – Installs the agent (we saw that syntax in the first post)

 

Script 3 – Assigns tag(s) to the system being deployed

 

Script 4 – Sets the configuration options: enable password authentication, enable root login, and enable multi-factor authentication (a rough sketch of this follows below)
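To give a flavor of what Script 4 boils down to, here is a rough bash sketch. Fair warning: the endpoint, the field names, the x-api-key header (JUMPCLOUD_API_KEY would be an admin API key you supply), and the jcagent.conf parsing are my assumptions for illustration only; the actual scripts in my GitHub repo follow JumpCloud's example code, which authenticates with the node's own system key.

#!/bin/bash
# Rough sketch only: endpoint, field names, and auth header are assumptions.
# My real Script 4 reuses JumpCloud's example code and its system-key auth.
CONF=/opt/jc/jcagent.conf    # agent config written at install time (assumed location/format)
SYSTEM_ID=$(grep -Po '"systemKey":\s*"\K[^"]+' "$CONF")

curl -s -X PUT "https://console.jumpcloud.com/api/systems/${SYSTEM_ID}" \
  -H "Content-Type: application/json" \
  -H "x-api-key: ${JUMPCLOUD_API_KEY}" \
  -d '{
        "allowSshPasswordAuthentication": true,
        "allowSshRootLogin": true,
        "allowMultiFactorAuthentication": true
      }'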

Go ahead and do the normal vCAC configuration, and you end up with a catalog item for a JumpCloud-enabled CentOS machine.  I have not done anything else special to prompt the user for values here.  I could see it being useful to integrate this into ALL catalog requests and give the user the choice to enter (or choose from a list of) tags, or the authentication options (password, public key, root…).

Snip20141016_51

Let’s go ahead and request three nodes:

Snip20141016_53

 

Shortly after, the three machines show up in vSphere:

Snip20141016_54

And when the deploying machine stage is complete, they show up in vCAC:

Snip20141016_59

And they show up in the JumpCloud console:

Snip20141016_58

And the same method of authentication now works as we discussed previously.  Using Google Authenticator to log into one of my machines is pretty darn awesome, I have to admit.

Snip20141016_60

 

What now?
Now, what about day 2 operations?  Well, I could add some menu actions to add/remove the system from tags or modify the other settings; that could be useful.  But at a bare minimum I wanted teardown to be clean.  Since I was able to get machine creation totally automated, I wanted cleanup to be as well.  Since vCAC is a self-service portal, you wouldn't want the JumpCloud console to be full of machines that no longer exist.

You don’t want your console filling up with old machines like this

Snip20141016_72

Because I took the easy way out and used the build profile scripting method of customizing machines from vCAC, I had to go a different route for teardown.  The easiest way to inject automation at this stage today with vCAC 6.1 is firing off a vCO workflow by using a machine state workflow.  So first I built a simple vCO workflow that runs an SSH command on the machine:

Snip20141016_61

I had to look it up to get the syntax right, but the input parameter you want to use is "vCACVm" of type "vCAC:VirtualMachine".  From that variable you pull the VM name (vCACVm.displayName) and use it to know where to run the SSH command. Simple and effective.

Snip20141016_63

Snip20141016_64

 

Why shell scripts on each node and not full vCO automation?
Normally I wouldn't bat an eye and would do all of this from vCO itself: create the JumpCloud endpoint as a REST host, create various actions as REST operations, and so on.  The script just reaches out to a REST API, after all.  The first reason was, well, they already had demo scripts available that work completely.  But really it is because authentication is done from each individual node: each node appears to have a unique system key, and that key is used to authenticate to the REST API.  This may need to be revisited for true enterprise functionality such as central provisioning, IPAM, or even synchronizing directory services.  But I digress…

Implementing custom machine stages
Back to the vCO bit.  We need to run the library workflow below, "Assign a state change workflow to a blueprint and its virtual machines", where we specify the blueprint we are using, the vCO workflow we want to run, and the machine state at which we want it run.

Snip20141016_65

And bam, a custom property gets automatically added to the blueprint itself.  Since we want this to happen when the machine is destroyed, we chose the Machine Disposing stage.  The long ID starting with 8be39.. is the vCO workflow ID.  You may have encountered this if you have ever needed to invoke workflows from the vCO REST API, like here.  This library workflow is a lot more useful for a complicated integration with lots of values being passed, but it saved a little time for us here too.
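If you're curious, kicking off a workflow by that ID over the vCO REST API looks roughly like the sketch below; the hostname, credentials, and placeholder workflow ID are mine for illustration, and a real call would also pass input parameters in the JSON body.

# Minimal sketch of starting a vCO (5.5+) workflow by ID over REST.
# Host, credentials, and <workflow-id> are placeholders.
curl -k -u vcoadmin:'password' \
  -H "Content-Type: application/json" \
  -X POST "https://vco.example.com:8281/vco/api/workflows/<workflow-id>/executions" \
  -d '{"parameters": []}'
# A 202 Accepted response means the execution was queued; its URL comes back
# in the Location header.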

Snip20141016_67

 

Try it out
Now unless I missed documenting a step here (as I’m writing this after I built the integration), all we have to do is destroy the machine like we would normally and it quickly disappears from the JumpCloud console.

Snip20141016_68


Snip20141016_70
And there we go.  Full integration for login access control to my lab environment machines provisioned from vCAC.  If I'm honest, I may even keep using this for a few machines, as handy as it is.

 


Docker as a Service via vCAC: Part 1

This project started with a question: using VMware management software today, what would it be like to manage and provide self-service access to Docker containers right alongside the traditional virtual machines and other services?  So that is what I explored and that is what I have built here so far.  This is just going to be part one of… many, probably… as I develop the ideas further.
What does this solution aim to do?
This solution elevates a docker-based container to somewhat of a "first class" citizen, in that it sits alongside virtual machines in the self-service vCAC catalog.

Really? You are able to do this today?
Well…  Mostly.  More work needs to be done to make it more functional.  But as of this part 1, provisioning is working great, and monitoring too (thanks to work from a co-worker that will be written about later).  Anything further, like day 2 operations or just tearing down the containers, is currently manual.  But still possible.

Snip20141001_67
So walk me through it?
There's a single-machine blueprint that deploys a CentOS machine and installs docker.  I needed a way to identify these machines as a special entity, so I went with vSphere tags for now.  So, using the vCAC extensibility functionality, I also have it fire off a vCO workflow that calls PowerShell (talked about here) to add the vSphere tag.  Not elegant, but it works.  This will be improved later.  Now that a machine exists, there are additional catalog items for various services like the demo application SpringTrader, or simply MySQL, Postgres, Ubuntu, etc., that run a vCO workflow to deploy the image onto one of the existing docker nodes.  Currently it picks a node randomly, but with some additional effort I plan to implement a (very) poor man's DRS and utilize either a Hyperic plugin that my team is working on, or maybe just query CPU/memory directly to choose the node.

OK tldr;  Boil it down!?
First, docker nodes are deployed from vCAC blueprints.  Then vCO workflows identify those nodes via vSphere tags and deploy the requested docker image.
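Under the covers, once a node has been picked, the deployment is not much more than an SSH session and a docker run.  A hedged sketch (the host name, image, and port mapping below are example values, not the exact commands my workflows emit):

# Roughly what the vCO workflow does after choosing a docker node; example values only.
ssh root@docker-node-01 \
  "docker run -d --name mysql-demo -p 3306:3306 -e MYSQL_ROOT_PASSWORD=changeme mysql"
# Same idea for Postgres, Ubuntu, or the SpringTrader demo app; only the image
# name, ports, and command change.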

Snip20141001_68

Single machine blueprint in vCAC

 

Snip20141001_71

"DockerVM" tag in vSphere denotes the docker-capable machines

 

Snip20141001_72

Service Blueprint for various docker images

 

Snip20141001_74

You can specify items like the image name, the port to expose, the bash command, etc..

 

Snip20141001_77

Slight smoke and mirrors in the screenshot – this is using a local docker registry so don’t be confused by that.

Snip20141001_78

…and to prove she’s working

What’s next?
I (and others) am (are) working on ways to tear down the containers, tear down the nodes en masse, automate discovery and monitoring of the containers, and so on.  Currently there's not even a way to know where the image was deployed to; that's to come!

Can we have the workflows, scripts, etc…?
Not yet…!  But if it is fun for anyone, I do have the demo app SpringTrader available on Docker Hub if you want it.  It weighs in at a hefty 4 GB.  Find it at jasper9/st_allinone.  There is no special sauce included in here; it's simply the SpringTrader app documented here, built into an image all ready to go (very poorly built, I'm sure…).

Sweet! How do I run SpringTrader?
This should probably be a full post on its own.  But in short, this command will get it running.  Change the first 8080 to any open port if it's already in use.

docker run -t -i -p 8080:8080 jasper9/st_allinone /bin/bash -c '/startup.sh && /bin/bash'

Then see the web interface at:  http://ip.address:8080/spring-nanotrader-web/#login

BY RUNNING THIS IMAGE YOU IMPLICITLY ACCEPT ALL EULAS THAT ANY INCLUDED COMPONENTS IMPOSE.  THIS IS SIMPLY A DEMO APPLICATION AND NOTHING MORE.

 


Using vSphere Tags via vCO

As vSphere 5.5 currently stands, the only way to interact with vSphere tags is via PowerCLI.  This leaves vCO out of the party without some effort to build it manually.  I am working on a solution where I wanted to include tags in some automation to enable some awesomeness, so I explored whether it was possible to expose this to vCO without huge effort.  Success!

Snip20140930_54

It wasn't too difficult to build this.  The two most difficult parts were setting up the PowerShell host for vCO (google it… it's difficult… at least it was for me the first time around), and parsing the XML returned from PowerShell to vCO to get the data I wanted.  These workflows are a bit rough, but they work as a first draft.  For anything production-caliber you'll want to evaluate the performance impact of hitting the PowerShell host this often, and definitely change the password field from a plain string (proof of concepts!).

What I have built so far is a workflow, "JaaS Tags- Add Tag", that accepts strings for the tag name and the virtual machine name.  This fires off PowerShell commands in a vCO action:

 

To show how it works running manually in the vCO client:

Snip20140930_60

 

And to show that the tag is actually applied, you can find it in the Web Client:

Snip20140930_57

Now, I also have a workflow to find VMs from the tag that is supplied.  I needed flexibility out of these in the solution that I'm working on, so the output from this one is two arrays: one of the VM names as strings, and one of VC:VirtualMachine objects.

Snip20140930_55

 

 

Running this guy manually, just supply the tag name:

Snip20140930_59

And to show the output parameters, you can see that in the vCO client:

Snip20140930_61

Yay Tags!   Now you can include these workflows in other solutions to utilize tags in ways they weren’t intended.  Stay tuned for the solution I built this for.

I’ve posted an export of this package to FlowGrab, check it out here:  https://flowgrab.com/project/view.xhtml?id=d2623373-838f-4ee2-8d6e-c6f582cb452f

 

 

 


vCO Workflow Collaboration with FlowGrab

flowgrab_logo_beta_gray_bg

GitHub for vCO!

A while back I posted about the vFLOWer tool, which provided a way of unpackaging a vCO package into XML that can be easily pushed to a version control system like Git.  The same company, ByteLife, has taken it a step further and opened a beta site for sharing and collaborating on vCO packages called FlowGrab.  My understanding is that the tools that made up vFLOWer are run on each package that is uploaded, so that it can be version controlled, diffed, and merged.  Although this developer functionality is not part of their public feature set yet, what is there does look to be useful for sharing workflows with others.  It's a heck of a lot more useful than posting the full package file to a blog post or GitHub, or worse yet, requiring you to git clone the XML output from vFLOWer, repackage it and then import it.  Painful.  This is a step towards automating all of that.

Snip20140827_14

I thought I would give this offering a quick try with a package I had lying around.  This is a workflow that I use for notifications, which reaches out to Pushover over REST.  I blogged about it a bit when I explored pulling data from Weather Underground here.  This package includes workflows that power on and power off VMs based on a keyword and send an alert when complete.  You can use it as an example of how to use the Pushover workflow in your own workflows to do something useful.  Reusable content! Yay!

The process starts as normal, exporting the package from vCO:

Snip20140827_1
The name will be filled in automatically; this is your package file.  Save it to disk.

Snip20140827_2

 

Now, what's handy about this site is that you don't have to go through all the trouble of using vFLOWer as I previously posted about.  Simply browse to their site, create an account, create your project, and upload your package.  Done!

Snip20140831_8

 

After you post your project, it is available for others to view:

Snip20140918_16
Clicking on the download link downloads the actual package file, not XML that you then have to rebuild.

Snip20140918_17

Now when you go to import this into another vCO instance, you are shown that a number of the contents already exist because I reused some library content for this example:

Snip20140918_18

Snip20140827_3

 

Now the package exists, and you should see the new workflows:

Snip20140827_4

 

Snip20140827_5

 

If you try to run the Power On or Power Off scripts as is, you'll get an error because you haven't added the REST host and operation for Pushover yet.  This older blog post of mine shows how to set those up.

Snip20140918_19

 


Small Things: NUC Sack

I transitioned some of my home lab onto tiny Intel NUC machines this year and just love the form factor and the fact that they are so portable. In fact, I flew with them, the tiny NAS in the picture, and a router to an event to demo some projects.  If only they went to 32 GB of RAM or higher, they would be perfect.

My buddy @jrrickard and his co-worker 3D-printed me a rack for these little guys that we're lovingly calling the "NUC Sack".  Love love love it!!! Thanks guys!

If you want to print one yourself, they posted the files on GitHub here.  I'm told that if you want to use this with the taller NUCs that take a spinning disk inside, add 15 mm to each of the sections.

NUC-SAC


Automated testing with Selenium & PHPUnit

I am working on a much larger project right now that uses Selenium to do unit testing, and I found this installation procedure to be a PITA on CentOS, so I thought I would share my experience with the interwebs to relieve some pain for others.  I am very new to using these tools, so if anyone has suggestions for improvement I am all ears, though this post will be strictly about the install portion.  Expect a much longer and more interesting post in the future.

This is not intended to be a primer on what Selenium is at all.  If you are interested, check out this video; docs are located here for PHPUnit, here for Selenium, and here for Composer (to cover all my reference linkage).

 

This worked as of August 18, using CentOS 6.4, JRE 7u67, Selenium 2.42.2, PHP 5.3.3, and PHPUnit 4.1.6.


 

First, install Java if you need it.  I am using the super awesome alternatives command to manage the symlinks for multiple versions.

Get from Sun: jre-7u67-linux-x64.rpm
alternatives --install /usr/bin/java java /usr/java/jre1.7.0_67/bin/java 1
alternatives --config java
pick the new one
At the time of this post, this was a version that worked for me
# java -version
java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)

Now we want to install Selenium

ln -s /usr/local/bin/selenium-server-standalone-2.42.2.jar /usr/local/bin/selenium-server-standalone.jar 

add to .bash_profile, then relogin or source .bash_profile
alias selenium="java -jar /usr/local/bin/selenium-server-standalone-2.42.2.jar"

Install packages for X and Firefox, since I’m running this in a headless VM
yum install -y java firefox Xorg xorg-x11-server-Xvfb

There are multiple options to install PHPUnit.  PEAR is the easiest, as it is what I was familiar with, but it appears to be going end-of-life at the end of 2014.  Composer is brand new to me – you kids and your fancy package management systems!!!!1 – but looks pretty useful.

(1) via PEAR (end of life at end of 2014)
yum install php php-pear php-xml php-devel -y
pear channel-update pear.php.net
pear config-set auto_discover 1
pear install phpunit
pear install phpunit/PHPUnit_Selenium

(2) via Composer
yum install php php-xml php-devel php-pdo git -y
curl -sS https://getcomposer.org/installer | php
mv composer.phar /usr/local/bin/composer
vi composer.json
Add this content:
{
    "require-dev": {
        "phpunit/phpunit": "4.1.*",
        "phpunit/php-invoker": "*",
        "phpunit/dbunit": "*",
        "phpunit/phpunit-selenium": ">=1.2"
    }
}
Then run this:
composer install 

Docs say to add this to your path:
~/.composer/vendor/bin/
But I installed my stuff in a subdirectory as root, so I also added this to the path:
/root/tmp/vendor/bin

# phpunit --version
PHPUnit 4.1.6 by Sebastian Bergmann
YAY!

Now to set up an X session.  There are many ways to do this; here is a quick and dirty way:

Xvfb :99 -screen 0 800x600x16 &
export DISPLAY=:99
selenium 
Now open a new session to the host.

Now for the actual unit test code.  I'll share a simple example that checks this blog for something as simple as the HTML title tag.

<?php
class JAASTest extends PHPUnit_Extensions_Selenium2TestCase
{
    protected function setUp()
    {
        $this->setBrowser('firefox');
        $this->setBrowserUrl('http://www.jaas.co/');
    }
    public function testTitle()
    {
        $this->url('http://www.jaas.co/');
        $this->assertEquals('Josh As A Service | www.JaaS.co', $this->title());
    }
}
?>
Save it as  JAASTest.php
Invoke it by:
phpunit JAASTest.php

PHPUnit 4.1.6 by Sebastian Bergmann.
Time: 13.65 seconds, Memory: 5.00Mb
OK (1 test, 1 assertion) 
YAY!! Automated testing by little robot minions. Just what I have always wanted.
There is a ton more you can do with this type of testing which I will be exploring soon now that I have this up and working.  Check back later on in the year for the results of the project!

Hello Chuck. (My "Hello World" in iOS)

A little-known fact about me is that I've recently started taking grad school classes.  I had the opportunity to take an iOS class as an elective, which in my mind is killing two birds with one stone, as the phrase goes; I've wanted to see how this kind of development worked for some time.  All said and done, now that my project is over, I think Objective-C programming (I didn't look into the fancy new Swift yet) is a bit convoluted and some idiosyncrasies in the syntax are hard to grasp, but overall it was a good experience.  I wouldn't call myself a real programmer by any stretch of the imagination.  I'm a decent scripter.  I'm good at quickly figuring out how to make something work and showing it off; I guess in my work world we would call that a proof of concept.  But I am by no means elegant or efficient.  I leave that to much smarter people.

My idea was that I wanted to learn how to do REST API communication from an iOS application.  I planned to do something relatively simple for my class project that would teach me the basics, and then later I would build something bigger, perhaps to integrate into some of my other mad-scientist-like projects.  And that is exactly what I did.

Behold, Hello Chuck.

Screen Shot 2014-06-26 at 9.54.44 PM

I've posted about other fun REST APIs previously, like FOaaS, but I do not believe I have talked about The Internet Chuck Norris Database.  This web service consists of a few simple REST endpoints that return a Chuck Norris joke.  Simple, right?  As I am typing this their website is down (or they are filtering me due to all my API calls while developing the app…), so I'll reference an archive.org link here.

The simplest use case is a GET to http://api.icndb.com/jokes/random to return a random joke:

Screen Shot 2014-06-29 at 8.57.38 PM
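Outside the app you can poke the same endpoint with curl; the response shape in the comment is from memory of the (now archived) ICNDb docs, so treat it as approximate.

curl -s http://api.icndb.com/jokes/random
# => {"type": "success", "value": {"id": 181, "joke": "Chuck Norris ...", "categories": []}}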

 

I learned how to persist this data in multiple ways within the app: with archives, SQLite, and Core Data, showing the jokes from storage in a table view.  Neat stuff:

 

Screen Shot 2014-06-26 at 9.53.24 PM

 

I had to add a few more use cases to the app for the class, so luckily there were a few other endpoints available.  One accepted a first name and a last name to customize the joke.  In the app I made it look like:

 

blog_chuck_personalized

I also added functionality to pull multiple jokes at once from an endpoint, and another that produced a joke from its ID.  Not very sexy to show off, but they were good use cases to learn how to develop against; the endpoint variants are sketched below.
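For reference, here are those endpoint variants as I remember them from the ICNDb docs (the site is flaky, so double-check against the archive.org copy):

# Customize the joke with a first and last name
curl -s "http://api.icndb.com/jokes/random?firstName=John&lastName=Doe"
# Pull several jokes at once
curl -s http://api.icndb.com/jokes/random/3
# Fetch a specific joke by its ID
curl -s http://api.icndb.com/jokes/18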

All in all it was a fun project that exposed me to a new technology.  I have a ton of ideas for new apps; I'll probably only get a chance to develop 5% of them.  We'll see.  I did submit it to the App Store this past week, though I'm sure it will be rejected based on the usage of a celebrity's proper name.  Damn you, Apple Censor Board!

If it does get posted, check it out!

HelloChuck_Icon

 


VMware vCenter Operations data mining

*Warning – This is most likely totally unsupported and not how the product is intended to be used!

I've been working on a couple of projects with vCOps recently (NO SPOILERS YET!) and came across something that was totally new to me that I wanted to share.  No internal secret sauce here either; I found this via KB 2011714.

FYI – I’m not an expert on licensing, but I assume only the highest level license unlocks this along with the custom portal.

There is a way to query the vCOps system with SQL syntax from a web browser, so you do not have to muck with database connectors, hack anything, or potentially break something.  Plug in the URL https://[vcops UI]/vcops-custom/dbAccessQuery.action and you are greeted with a screen like the one below.

I was looking for the exact metric key for host memory, so I went through the tables one by one with select * statements until I found output I recognized.  And BINGO!  I found it via syntax like this (heh, I said "like" there… get it?):

select * from attributekey where attr_key like '%mem%' and attr_key like '%host%'

The metric I was looking for is  mem|host_usagePct

Screen Shot 2014-04-17 at 11.49.06 PM

Excellent but what can I do with that now?

vCOps 5.8, the most recent version at the time of this writing, has had an API of sorts for many versions.  I wouldn't call it a REST API by any stretch, but it does offer a way of getting data IN and OUT over HTTP.  (This is nothing groundbreaking; people have been talking about this at VMworld and on blogs for a while now…)

How do I get data in?

On your vCOps system you can plug this URL in to get a definition of the API:  https://[vcops ui]/HttpPostAdapter/OpenAPIServlet

addGeneralMetricObservations

This interface is used to import metric data into vCenter Operations Manager, one resource at a time. This interface handles the creation of the resource and resource kind, as well as any new metric names. This is the default interface used when no “action” parameter is specified in the first line of body of the HTTP Post.

This is what we are looking for to add custom data.  Note it says no "action" parameter is specified.  Also note the unfamiliar syntax for posting data: you only provide the URL below, a POST method, and this in the body.

first line
resourceName,adapterKindKey,resourceKindKey,identifiers,resourceDescription,monitoringInterval,storeOnly, sourceAdapter, disableResourceCreation

All subsequent lines
One metric per line with a comma-separated list of: metricName,alarmLevel,alarmMessage,date,value,thresholdHigh,thresholdLow

URL: https://[vcops ui]/HttpPostAdapter/OpenAPIServlet
METHOD: Post
BODY:
MyData,Http Post,MyResource,,MyDescription,,yes,
Metric|Name,0,NoValue,139780080800,42,

Screen Shot 2014-04-18 at 12.01.35 AM

And the value is added.  Note the timestamp is in milliseconds, not seconds.  Originally I was doing this work from vCO and found that the time there was in seconds, not the milliseconds vCOps expects, and… I was confused.  For a while.

OK, so you can add custom data. So what…

The “So what” will be coming!  Be patient!

How do I pull data out?

Back to the original point of the post: figuring out that host metric syntax.  From the API reference I learned about the lookupResource action:

lookupResource

Use this interface to find existing resources matching specified resource name, adapter kind and resource kind.
Resource name matching can use string compare or regular expression matching. To enable regular expression matching specify “regex:” prefix followed by the matching pattern.
Adapter kind and resource kind parameters are optional. If not specified only name will be used for matching.
Return value is one row per found resource:
resourceName=[name of resource]&adapterKindKey=[adapterKindKey]&resourceKindKey=[resourceKindKey]&identifiers=[identifiers]

To use this interface, the body of the HTTP POST request should contain a single line:
action=lookupResource&resourceName=[name of resource]&adapterKindKey=[adapterKindKey]&resourceKindKey=[resourceKindKey]

In short, this is useful because if I know one of the values, like the name of the resource (my host name), I can find the rest of the values for adapterKindKey, resourceKindKey, and identifiers.  This all probably makes sense to someone, but it's way confusing to me.  So here's the shortcut:

Similar to the previous query, send this in the body.  Ah ha!

BODY:
action=lookupResource&resourceName=my.esxi.host.fqdn

RESULT:
resourceName=esx-01.jaas.local&adapterKindKey=VMWARE&resourceKindKey=HostSystem&identifiers=VMEntityName::my.esxi.host.fqdn::false$$VMEntityObjectID::host-10$$VMEntityVCID::A1319CCE-D9AC-491A-A990-C8880CBEDE7C

Screen Shot 2014-04-18 at 12.16.34 AM

Excellent!  But…Why?

Now we know all the variables needed to pull data!  If you check the API reference again you'll find "getMetricDataAndDT":

getMetricDataAndDT

Use this interface to get collected data for a resource and a metric.
Results will be a CSV List (Including the header):

Time, Value, LowDT, HighDt, smooth
1365195172879, 37.5, 15.2, 73.3, 36.2 — with DT and Smooth values
1365195172879, 37.5, , , 36.2 — with no dt and smooth values
1365195172879, 37.5, , , — with no dt and no smooth

To use this interface, the body of the HTTP POST request should contain a single line:
action=getMetricDataAndDT&resourceName=[name of resource]&adapterKindKey=[adapterKindKey]&resourceKindKey=[resourceKind]&identifiers=[identifiers]&metricKey=[metrickey]&starttime=[startTime]&endtime=[endTime]&includeDt=[true|false]&includeSmooth=[true|false]

Now we know the resource name, adapter kind key, resource kind key, identifiers AND the metric key (which was the original point of this post!).  I love when a plan comes together…

Combine all the bits, and stick a start time in there (or just something really small to return everything… probably not recommended):

action=getMetricDataAndDT&resourceName=my.esxi.host.fqdn&adapterKindKey=VMWARE&resourceKindKey=HostSystem&metricKey=mem|host_usagePct&starttime=-1&identifiers=VMEntityName::esx-01.jaas.local::false$$VMEntityObjectID::host-10$$VMEntityVCID::A1319CCE-D9AC-491A-A990-C8880CBEDE7C

Screen Shot 2014-04-18 at 12.22.09 AM

And we have data!

Again..  The product is not really intended to be used this way, but all that data is in there.   Let’s do something special with it!


Blog As A Service (Application Director + WordPress + Twitter + Cyclocross)

It has been a while since I've posted a full project, so here it goes.  Enjoy.  This time we have our first guest appearance from a super awesome software developer, cyclist, and beer snob buddy, @jrrickard, with whom I am doing a number of projects these days & showcasing some of them in a booth at our employer's internal Science Fair (it's going to be rad, and there will be Pliny).  How did I get so far off track already…

The Story

I have been doing a lot with VMware vCloud Application Director lately, and while I think it is a really interesting product that strokes all of my ex-sysadmin nerdy tendencies in all the right ways, I also think it has a really bright future, as it's a _really_ powerful tool that almost no one knows about (yet).  So I thought to myself, what's something totally silly I can do with it while still showing off its real-world potential?  (If you haven't picked up on it yet, that's kind of my thing…)  That's when I decided:

 

About ten minutes later, I have this:

Screen Shot 2014-04-14 at 11.57.58 PM

 

So what’s going on here?   Step by Step:

1) I tweet a specific phrase like "I should start a blog on cyclocross".

2) @jrrickard wrote some slick Python that talks to Twitter over the REST API, finds that phrase, and sends a specially formatted call over a REST API to vCO [1] [note that is supposed to be a superscript… not sure how to do that and link to a lower section in this blog yet…]

3) The vCO workflow(s) talk to AppD over a REST API, process the parameters, find the application blueprint in AppD, and schedule the deployment.  (There is some fancy new secret sauce another co-worker is developing that I may post about at a later time.  In short, it is a collection of workflows that finds and executes exactly what you want in AppD based on the human-readable name instead of the ID… which is a pain in the rear to find… seriously…)

4) The AppD blueprint that gets deployed is based on a canned blueprint found at the VMware Solution Exchange here [2] with only a minor addition at the end.  It deploys a CentOS VM from a template, and installs Apache, MySQL and WordPress in order with all dependencies resolved.  AppD configures each one and starts all services.

5) Based on the original tweet that started this whole mess, an AppD service I created from scratch and added to the blueprint (a) pulls a CLI for WordPress from Git, (b) uses that CLI to write a few posts to the blog on the topic I tweeted (sketched below), and lastly (c) goes to Flickr (over another REST API), pulls a bunch of images on the topic, and adds them to the posts.
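Step (b) is less magic than it sounds; with a WordPress CLI like wp-cli on the deployed node it boils down to something like the sketch below (the blog path, post title, and content are placeholders, not the exact commands baked into my AppD service).

# Rough sketch of step (b) using wp-cli; path and post values are placeholders.
cd /var/www/html/wordpress
wp post create \
  --post_title="Why cyclocross is the best kind of suffering" \
  --post_content="Auto-generated post seeded from the tweet that kicked this off." \
  --post_status=publish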

Screen Shot 2014-04-15 at 12.06.32 AM


Screen Shot 2014-04-15 at 12.07.15 AM

[FUTURE] 6) We didn’t add it in yet for a number of reasons, but plan to complete the loop and notify back with a tweet saying “Here’s your blog link!”.

More info:

[1] So why make the REST call to vCO instead of AppD directly? 

Good question!  Doing it this way can be considered best practice because it gives you greater flexibility in processing all the data and lets you monitor the process and better trap errors. This way you are not relying entirely on the third-party system, but can work around it and integrate better.

[2] VMware Solution Exchange?

Yes!  It’s great!  You can browse all of these canned & demo solutions like Websphere, Oracle, JasperReports, SharePoint, Jenkins, and Liferay to see how one could (or should) use AppD to automate deployments.

[3] Want to see the workflows and blueprints?

Maybe when I get some time to clean it all up. It’s kind of…messy right now.

Credits:

@jrrickard / Jeremy Rickard for the help with the python script running the twitter bot.

@MarcoCaronna  for the AppD/vCO zen master skills simplifying the REST API

BitNami for the original WordPress blueprint here


Dinner as a Service – Electronics

This post is part of a project I’m calling ‘Dinner As A Service.’
Posts:
Overview
Electricity Costs
Electronics 

(This post is entirely on the hardware side of things, so if you only care about my software ramblings, these are not the droids you are looking for.)

Since physical computing is an area I really have no deep background in, as with most of my projects that involve it, I owe quite a bit of credit to others.  Here are most if not all of the sources I used:

The build started like any self-respecting build would… as a prototype in a fishing tackle box!


High voltage! Don’t Touch!

Surprisingly enough, once the kinks were worked out this worked pretty darn well.  Believe it or not, I cooked about 5 meals like this.  A fire extinguisher was nearby.

The prototype consisted of:

Now that I'm writing this ex post facto, the high-voltage bits are WAY FREAKING overkill for this implementation with my current slow cooker (a 25A SSR?!? I think the later bits I added are about 10A capable…).  See the last post on electricity usage for more on that.  What's good is, assuming I didn't make any huge errors (which may be a leap), I'll be able to use this with much more powerful heating elements like a heat stick to brew beer… we'll see…

*I’ll post a wiring diagram here* from EAGLE once I figure it out.  It’s not the most intuitive application.

Now, this prototype worked great, but I wanted some bling.  I've never built anything with much bling to date, so this was quite the learning experience.  Via the awesomeness that is Adafruit, I found the 7-segment display.  I'm in love:


So retro. So cool. I MUST HAVE THESE.

I scrapped my first idea of just using an LCD text display and went with two of these instead.  Partly this decision was because I was having stability issues with the LCD I first tested (gibberish eventually always showing up), but also because I don't need to display much text at all.  I wanted to keep this build purpose-built and not add bells and whistles (like the speakers and Pandora functions I originally spec'ed!).  So target temp, current temp, and some sort of status indicators are all I need.  I wanted to include some timer displays to help with the cooking process, but the lack of a real-time clock in the rPi gave me some headaches, so I scrapped that idea.  Maybe I'll revisit it with either the add-on board or a BeagleBone.

I ordered a number of bits from Adafruit and Digi-Key, found a case at the hardware store (which probably was a mistake… need to do more research on cases next time…), and dove in.  Slowly I started adding bits.  First, a 7-segment display:


Hey look bitcoin used to be a bit higher

I read somewhere that it really helps when cutting cases to use painter's tape on top to draw your lines and keep the edges clean.  Yeah, I doubt that helped my absolute hack job:


If I get cancer, this is why.

Progress was made pretty quickly, as I started getting time pressure to move on to other projects.


Hey look I call that progress. Look at that LEGIT heat shrink wire!

And started cutting the ports for the power in & out:


Again. This case decision sucked. Anyone want to fund a 3d printer for me please?

And there she is.  Fully working, and how she looks today (with a slightly corrected label that someone on Facebook wouldn't let go of).  Notice all the empty space on the bottom half, which may someday be used for some timer displays.


Yay. Blinky things.

And on her maiden voyage, she cooked us up quite possibly the tastiest meal yet: tenderloin steaks with morel cream.  As Ferris Bueller said, "If you have the means, I highly recommend picking one up."


omgomgomg nommers.

And here she is, mostly finished (TM).


May or may not burn your house down, make the dogs run away, make the child cry (TM)

Parts added for the final build:

If I were to continue development on this, the items on my list are:

  • Add timers
  • Add audible alerts
  • Add on/off switch
  • Add detachable temperature probe
  • Better case
  • Currently the rPI uses power from an external USB source due to lack of space for the transformer within the case.  This should be wired in.

There you have it.  Now, I don't really need feedback on everything I did wrong here.  No, I don't have a fuse wired into the box; I'm relying on the GFCI outlet that I plug into, etc.  This is more or less a proof of concept that is sticking around for a while.  If I keep using it, as it seems like I am, I really am considering going legit with one of these store-bought monsters.
