Category Archives: NewStuff

What I’ve been up to lately: CTO Ambassador, vExpert 2015, VCP6-Cloud & Conferences

Busy times in the day-job salt mine! I hope this doesn't read as a bunch of humble bragging; I'm just keeping tabs on all the happenings here on the blog.

I’ve recently joined a field-facing group at VMware called the CTO Ambassadors, on a rotation for the next two years. This is really exciting for me, as it is similar in intent to a program I helped build in my past life in support. In their words, the CTOA program:

 

“focuses on creating and developing technology focused communities that span R&D and the field. This enables the effective flow of information and knowledge, backwards and forwards, between R&D and our field, and ultimately with our customers and partners, which in turn drives more profitable relationships, greater revenue, and new and improved products. “

I am excited to take part in this experience. Over the next year or two I'll have a chance to help shape many things internal to the company, while also having a clear priority of taking part in communities outside of it.

Also, I got word that I was renewed as a VMware vExpert for 2015. This is a designation for being active in the social media and blogging communities, and it's a fun title to hold too.

I passed the new VCP6-Cloud test this week. I will admit it covers such a broad range of topics that it is a bit difficult in parts, as I do not touch many of these areas very often any more. There are also a number of questions that could be answered purely by glancing at a GUI. I used to be able to breeze into the core VCP exam, take the test in 20 minutes, and pass without thinking twice back when I was in a purely vSphere-focused role. Not sure if it is the test or if it is me now. Probably both column A and column B.

Finally, I signed up for a few upcoming conferences in Colorado that look to be exciting: GlueCon in May (Broomfield, Colorado) and DevOps Days Denver. Looking forward to hearing some great talks on topics that are hot right now.

 


Suggested reading: DZone’s Guide to Enterprise Integration

Previously I posted about DZone’s Guide to Continuous Delivery (which is excellent), and now I have been reading their guide on Enterprise Integration. I highly suggest checking it out. I really geek out on the trends around SOA and microservices. The idea of “dumb pipes and smart endpoints” is intriguing to me (though, to be fair, I believe they credit this post as their source). I also find it fascinating how the same companies keep coming up as the example cases: Netflix, Etsy, SoundCloud, etc. I would hate to be one of their competitors…


New vCAC & Application Services 6.1 template prep script (linux)

UPDATE: Dec 9, 2014 – vCAC has been renamed to vRealize Automation (vRA). vRA 6.2 is dropping today, and the pre-req script is posted here.

UPDATE: Dec 16, 2014 – D'oh!! I was multitasking too much when I posted that last update. The pre-req script wasn't the point of this original post, but it is still useful nonetheless. To recap: the pre-req script eases setting up a vRA IaaS machine, while the template prep script eases setting up a Linux template to be used _WITH_ vRA.
A great tool that flew under my radar in the recent 6.1 release of AppD… er… Application Services and vCAC proper is a script that does all the steps of preparing a Linux template for both agents for you. If you are at all familiar with this process, you'll find it to be a huge time saver.

Original Post:
If you look at the documentation, the process is quite cumbersome and full of potential human-error points. This script will check all dependencies, install them where it can, and either prompt for the appropriate server names or accept them as inline input.


Getting Started
First pull the script off the AppD server and make it executable.

[screenshot: downloading the script and making it executable]
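In plain text, that step is just a download and a chmod. A minimal sketch, with the server name and script path as placeholders (the real location is on your Application Services appliance):

    # Placeholders only -- substitute your Application Services server and the actual script path.
    wget --no-check-certificate https://appd.example.com/<path-to-script>/prepare_template.sh
    chmod +x prepare_template.sh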
How to use it – Interactive

If you want to just dive in, run it in interactive mode:

[screenshot: running the script in interactive mode]
How to use it – Unattended

I update templates quite often in the lab environments I work in, so I like keeping a quick reference in a note that I can cut & paste from. Now that this script accepts inline inputs, I could earn an extra sysadmin merit badge and just drop it into a shell script kept in a common place across all templates, then run that manually. Easy peasy.

So here’s the help page:

[screenshot: the script’s help page]

Here’s what I would run. It tells the script the three server names, not to install Java, not to check SSL certs, to use a timeout of 300 seconds, and not to prompt for confirmation:

The last line is a handy step that prevents CentOS templates from incrementing the NIC number when cloning. There may be a better way, but it works.

[screenshot: the unattended invocation]
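For reference, here is roughly what that looks like in text form. The script name and flag names here are my shorthand rather than the exact options (check the help page above for the real ones); the last two lines are the CentOS NIC-increment fix just mentioned:

    # Flag names are approximations -- see the script's help output for the real options.
    ./prepare_template.sh --appd-server appd.example.com \
        --vcac-server vcac.example.com \
        --manager-server manager.example.com \
        --nojava --nossl --timeout 300 --noprompt

    # Keep cloned CentOS guests from bumping eth0 to eth1:
    rm -f /etc/udev/rules.d/70-persistent-net.rules
    sed -i '/^HWADDR/d;/^UUID/d' /etc/sysconfig/network-scripts/ifcfg-eth0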

…it does its thing… and finishes with:

[screenshot: the script finishing]

 

Now you’re ready to shut it down, take a snapshot, start data collection, and update your blueprint!


Directory as A Service: Part 2 – vCAC Integration

Directory as A Service: Part 1 – Intro & Setup
Directory as A Service: Part 2 – vCAC Integration


In the first post I introduced JumpCloud, a hosted off-premise directory service.   In this post I will show one way to integrate it into vCloud Automation Center (vCAC).

Getting Started
I started by reusing a simple single-machine CentOS blueprint I had already configured, which already does a few things on boot:

[screenshot: the existing CentOS blueprint]

For simplicity, I like using the scripted method in a build profile to do simple stuff. I don’t take any credit for this configuration, as I totally just copied what others have done before me (sorry, I don’t have the link to the exact post handy). This build profile mounts a network share and runs a DNS script from that share to automatically add the new machine to my DNS.

[screenshot: the build profile with the share-mount and DNS scripts]
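For context, the scripted method boils down to a handful of custom properties in the build profile that tell the Linux guest agent which scripts to run after deployment. If memory serves, the property names below are the standard guest-agent ones; the values are placeholders for my lab, not my actual paths:

    VirtualMachine.Admin.UseGuestAgent = true
    VirtualMachine.Customize.WaitComplete = true
    VirtualMachine.Software0.Name = MountShare
    VirtualMachine.Software0.ScriptPath = /root/mount_share.sh
    VirtualMachine.Software1.Name = RegisterDNS
    VirtualMachine.Software1.ScriptPath = /mnt/share/add_dns.sh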

Now, the additions we’ll make to integrate with JumpCloud are as follows. I split the work into three scripts because I built this iteratively using some of their example code, but it could just as easily have been done in one. I’ll refer to the script numbers as they appear below for clarity:

I used JumpCloud’s example code as a starting point, and am posting my modifications to GitHub here.

[screenshot: the build profile with the JumpCloud scripts added]

 

Script 2 – Installs the agent (we saw that syntax in the first post)

 

Script 3 – Assigns tag(s) to the system being deployed

 

Script 4 – Sets the configuration: enable password auth, enable root login, and enable multi-factor authentication (a rough sketch of scripts 3 and 4 follows below)
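Scripts 3 and 4 are really just thin wrappers around the JumpCloud REST API. The sketch below is a hedged paraphrase, not the actual code from my repo: the endpoint and field names are my recollection of the v1 API, the agent config path is an assumption, and I use an admin API key here for simplicity even though the real demo scripts authenticate with the per-node system key.

    # Hedged sketch -- endpoint, field names, and config path are assumptions; see the GitHub repo for the real scripts.
    SYSTEM_ID=$(awk -F'"' '/systemKey/ {print $4}' /opt/jc/jcagent.conf)

    # Script 3: tag the newly deployed system
    curl -s -X PUT "https://console.jumpcloud.com/api/systems/${SYSTEM_ID}" \
         -H "x-api-key: ${JC_API_KEY}" -H "Content-Type: application/json" \
         -d '{"tags": ["all-servers", "vcac-deployed"]}'

    # Script 4: allow password auth, root login, and multi-factor auth
    curl -s -X PUT "https://console.jumpcloud.com/api/systems/${SYSTEM_ID}" \
         -H "x-api-key: ${JC_API_KEY}" -H "Content-Type: application/json" \
         -d '{"allowSshPasswordAuthentication": true, "allowSshRootLogin": true, "allowMultiFactorAuthentication": true}'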

Go ahead and do the normal vCAC configuration and you end up with a catalog item for a JumpCloud-enabled CentOS machine. I have not done anything else special to prompt the user for values here. I could see it being useful to integrate this into ALL catalog requests and give the user the choice to enter (or choose from a list of) tags, or the authentication options (password, public key, root…).

[screenshot: the catalog item for the JumpCloud-enabled CentOS machine]

Let’s go ahead and request three nodes:

[screenshot: requesting three nodes]

 

Shortly after, the three machines show up in vSphere:

[screenshot: the three machines in vSphere]

And when the deploying machine stage is complete, they show up in vCAC:

[screenshot: the machines in vCAC]

And they show up in the JumpCloud console:

[screenshot: the machines in the JumpCloud console]

And the same method of authentication now works as discussed previously. Using Google Authenticator to log into one of my machines is pretty darn awesome, I have to admit.

[screenshot: logging in with Google Authenticator]

 

What now?
Now, what about day-2 operations? I could add some menu actions to add or remove the system from tags or modify the other settings; that could be useful. But at a bare minimum I wanted teardown to be clean. Since I was able to make machine creation totally automated, I wanted cleanup to be as well. vCAC is a self-service portal, so you wouldn’t want the JumpCloud console to be full of machines that no longer exist.

You don’t want your console filling up with old machines like this:

[screenshot: a console cluttered with stale machines]

Because I took the easy way out and used the build profile scripting method of customizing machines from vCAC, I had to go a different route for teardown. The easiest way to inject automation at this stage today with vCAC 6.1 is to fire off a vCO workflow using a machine state workflow. So first I built a simple vCO workflow that runs the SSH command on the machine:

[screenshot: the vCO teardown workflow]

I had to look it up to get the syntax right, but the input parameter you want to use is “vCACVm” of type “vCAC:VirtualMachine”. From that variable you pull the VM name (vCACVm.displayName) and use it to know where to run the SSH command. Simple and effective.

[screenshot: the workflow input parameter]

[screenshot: the SSH command scripting]

 

Why shell scripts on each node and not full vCO automation?
Normally I wouldn’t bat an eye and would do all of this from vCO itself: create the JumpCloud endpoint as a REST host, create various actions as REST operations, and so on. The script just reaches out to a REST API, after all. The first reason was, well, they already had demo scripts available that work completely. But really it is because authentication is done from each individual node: each node (appears to) have a unique system key, and this system key is used to authenticate to the REST API. This may need to be revisited for true enterprise functionality such as central provisioning, IPAM, or even synchronizing directory services. But I digress…

Implementing custom machine stages
Back to the vCO bit. We need to run the library workflow below, “Assign a state change workflow to a blueprint and its virtual machines”, specifying the blueprint we are using, the vCO workflow we want to run, and the machine state at which we want it run.

[screenshot: running the library workflow]

And bam, a custom property gets automatically added to the blueprint itself. Since we want this to happen when the machine is destroyed, we chose the Machine Disposing stage. The long ID starting with 8be39.. is the vCO workflow ID; you may have encountered this if you ever needed to invoke workflows from the vCO REST API, like here. The library workflow above is a lot more useful for a more complicated integration with lots of values being passed, but hey, it saved a little time for us here.

[screenshot: the custom property added to the blueprint]
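For reference, the property it adds looks something like the line below. If memory serves, the name is the standard machine-state stub property; the value is the vCO workflow ID (still truncated here, as above):

    ExternalWFStubs.MachineDisposing = 8be39…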

 

Try it out
Now unless I missed documenting a step here (as I’m writing this after I built the integration), all we have to do is destroy the machine like we would normally and it quickly disappears from the JumpCloud console.

[screenshot: destroying the machine in vCAC]

[screenshot: the machine removed from the JumpCloud console]
And there we go: full integration for login access control on my lab environment machines provisioned from vCAC. If I’m honest, I may even keep using this for a few machines, as handy as it is.

 


Directory as A Service: Part 1 – Intro & Setup

Directory as A Service: Part 1 – Intro & Setup
Directory as A Service: Part 2 – vCAC Integration

I have been playing with an interesting new service from JumpCloud, a startup based just down the road in Boulder, CO. In their own words:

“JumpCloud’s Directory-as-a-Service (DaaS) securely connects employees and IT resources through a single, unified cloud-based user directory. It is the single point of authority and authentication for a business’s many employees and access rules.”   link

I take this to mean it is a hosted directory service. It’s an interesting concept, which I bet is met with a ton of resistance from those who fight off-prem services, but I’ll leave that topic for later discussion and focus on the technology for now. I wanted to see how I could integrate this with VMware’s vCAC, so that is what I built. I’ll split this into two posts: this first one will just cover setup, and the second will cover the integration.

First Impression
I have to admit, I really enjoy companies that make it super easy to try out their offerings. JumpCloud offers 10 managed nodes for free, then gives you the exact one-line syntax for deploying the agent at the command line with your account credential already in place. They also have full examples for Puppet and Chef, similarly configured with your credential. Literally cut, paste, go. But more on this later.

Walkthrough of first time use

When you first log in to the console you are met with a simple interface and nothing configured. Let’s walk through the initial configuration.

The first step seems to be to add users:

[screenshot: adding a user]

Once we add our user, Mountain, we see his account is in a pending state.

[screenshot: the account in a pending state]

When The Mountain checks his email, he’ll see the activation message.

[screenshot: the activation email]

And when he clicks on it, he can set his own password.

[screenshot: setting a password]

And lastly, The Mountain is automatically presented with a multi-factor authentication code that he can scan directly into Google Authenticator. This is a killer feature in my opinion!

[ don’t worry about trying to steal these credentials, it won’t get you anywhere! ]

OK, now that the account is set up, we see we have one more notification for this user:

[screenshot: the remaining notification for the user]

Tags seemed a little confusing to me when I got started. They appear to be the only grouping mechanism, so they are how you associate users with systems. My guess is that you would assign your developers to a development-machines tag and your system administrators to some sort of all-machines tag. I went ahead and created the tags that JumpCloud used in a few of their demo scripts.

We set up a tag for all servers and give The Mountain access.

[screenshot: creating the all-servers tag]

We continue on and create a few more:

[screenshot: additional tags created]

Now the cool part. Remember when I mentioned that I love when companies give you a simple way to try a service?

[screenshot: the one-line agent install command from the console]
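For the curious, the one-liner the console hands you was along these lines when I ran it; the connect key is obviously redacted, and the exact URL and flags may have changed since:

    # Approximate form of the kickstart one-liner -- connect key redacted.
    curl --silent --show-error \
         --header 'x-connect-key: <YOUR_CONNECT_KEY>' \
         https://kickstart.jumpcloud.com/Kickstart | sudo bash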

 

So we cut and paste this syntax into a newly provisioned CentOS VM, and it does everything for you.

[screenshot: the install command running on the CentOS VM]

Many of the console web pages update dynamically, so as soon as this command runs, the previously empty screen is replaced with a system listing.

[screenshot: the new system appearing in the console]

Notice we do get an alert because no tags are automatically assigned to the system. I’ll explore this in my next post on integration, but for now we assign them by hand:

[screenshot: assigning tags by hand]

I’m not clear what’s going on behind the scenes (whether it’s a push to the agent, a recurring check-in, or something else), but shortly afterwards we see that The Mountain is added to the passwd file on this CentOS machine:

[screenshot: the new entry in the passwd file]
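If you want to check for yourself, a quick look on the node shows the new entry (“mountain” is my guess at the exact username format the agent creates):

    # Check the local account the agent provisioned -- the username here is a guess.
    grep -i mountain /etc/passwd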

Now, if The Mountain tries to log in at this point, he will be denied. Why? Because if we look at the system details, we see the default configuration is locked down pretty tight, allowing ONLY public key authentication.

[screenshot: the default system authentication settings]

If I go ahead and click each of those buttons to allow root, allow password auth, AND allow multi-factor (because I like to be safe and dangerous all at the same time… it’s a lab after all), The Mountain is now happy he can log in. Notice the prompt for the multi-factor token WITHOUT ANY OTHER MANUAL CONFIG ON THE SYSTEM. That. is. awesome.

[screenshot: SSH login prompting for the multi-factor token]

 

Other stuff?
That’s it for the basics. They have additional functionality to configure sudoers via the JumpCloud console, but I’ll leave that alone for now.

Overall thoughts
Given how young this startup is, I will give them a pass on the few negatives I encountered (UI problems in Safari, plus features I would still like to see, like simple user grouping, on-premises AD integration, and Windows host support). What they have now works pretty well, and they make it super easy to use. It took me a little while to find the API information, but once I did I was able to build the automation I will show next. In short, this technology has promise either for the small shop with absolutely no on-prem environment, or for an organization of any size that wants tighter access control on its systems.


Full disclosure: I didn’t just happen to stumble upon JumpCloud; I know their Chief Product Officer from the local cycling community here in Colorado. That said, I am not being influenced with free bike parts or beer to give their service a whirl. Yet.
