Monthly Archives: October 2014

Mass update WordPress content in MySQL

Here’s a quickie that may help someone someday.  If so, yay, I helped!

When I migrated my blog most recently I must have screwed something up along the way, and some old posts had all of their image URLs pointing at a now-dead IP address instead of the FQDN.  I crafted up a SQL statement that was able to update a ton of posts all at once.  Gotta love efficiency.

This does a search and replace on every post (the post_content field within the posts table), replacing the first string with the second.  Neato.
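For reference, the statement I ran looked roughly like this. This is a sketch: the wp_ table prefix, the database name, and both URLs are placeholders for my actual values, so swap in your own, and take a backup first:

```shell
# Back up before any mass update, then swap the bad IP for the proper FQDN
# in every post body. The prefix, DB name, and URLs below are placeholders.
mysqldump -u root -p wordpress > wordpress-backup.sql

mysql -u root -p wordpress -e "
  UPDATE wp_posts
  SET post_content = REPLACE(post_content, 'http://10.0.0.5/', 'http://blog.example.com/');"
```

MySQL's REPLACE() only changes rows that actually contain the first string, so it's safe to run against the whole table.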

 

 


How to use a local persistent Docker registry on CentOS 6.5

UPDATE: Dec 16, 2014. I found a new option is now needed with Docker version 1.3.2.  See more here

There are a bunch of blogs out there with tutorials on how to use a local docker registry, but none of them (that I have found) boil it down to the absolute simplest syntax and terms.  So here you go!

First off, terminology: Docker Hub is where images are typically pulled from when you just type the normal “docker pull blah” commands.  The local equivalent is called a registry, not a hub or a repo.  To save time and bandwidth, here is how you can stand up a persistent local registry to store your images.  Persistent meaning the image data is kept after the container stops.

The syntax here works on CentOS 6.5.

1) Install the needed bits.  This is no different than normal.
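On CentOS 6.5 this boils down to enabling EPEL and installing the docker-io package (the EPEL rpm URL and version here are what were current for me; adjust as needed):

```shell
# Enable the EPEL repo on CentOS 6, then install Docker from it
rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
yum install -y docker-io
```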

2) Start docker
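On CentOS 6.5 that's just the usual service commands:

```shell
service docker start
chkconfig docker on   # optional: have docker come up on boot
```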

3) Fire up the example registry.  This downloads and runs the registry image, exposes port 5000, and maps the local dir /opt/registry to /tmp/registry within the container.  This is key; otherwise, after the container stops, the images go poof.
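The run command looks like this; the -v volume mapping is the part that makes it persistent:

```shell
# -p exposes the registry on port 5000; -v backs the container's /tmp/registry
# with /opt/registry on the host so image data survives the container
docker run -d -p 5000:5000 -v /opt/registry:/tmp/registry registry
```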

4) We could do this locally on this first machine, but we’ll show the syntax from somewhere else to illustrate.  On some other machine, first do the same install steps above to install the EPEL rpm and Docker.  Then pull the images you want:
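For example, to grab the stock CentOS images:

```shell
docker pull centos
```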

Snip20141023_4

5) List the images, and we see this image separates out a few versions of the OS.  CentOS 7 is the latest (see how the IMAGE ID matches 87e5b…), CentOS 6 is image 68edf…, and CentOS 5 is 504a65…

Snip20141023_7

6) Add some tags to give the images a new identity.  Replace “docker-reg” with your docker registry hostname.
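Using the image IDs from my listing (yours will differ), the tagging looks like this:

```shell
# Replace "docker-reg" with your registry hostname; the IDs are from my environment
docker tag 87e5b docker-reg:5000/centos:centos7
docker tag 68edf docker-reg:5000/centos:centos6
docker tag 504a65 docker-reg:5000/centos:centos5
```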

7) List images again to verify

Snip20141023_8

8) Now push the tagged images to our local registry
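The push references the registry by hostname and port:

```shell
# With this docker version, pushing the repo name pushes all of its tags
docker push docker-reg:5000/centos
```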

9) Lastly, on some third machine with Docker already installed (this post makes that handy by deploying these nodes as a catalog item), pull the image.  Notice it’s WAY fast now: ~14 seconds in this screenshot.  Notice we only have the latest centos tagged; just pull the others and you’re good.
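The pulls against the local registry use the same hostname:port and tag names:

```shell
docker pull docker-reg:5000/centos          # grabs latest
docker pull docker-reg:5000/centos:centos6  # pull the other tags explicitly
docker pull docker-reg:5000/centos:centos5
```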

Snip20141023_13
Compare this to pulling from Docker Hub at about ~2:20.  For a large image like the SpringTrader app I built, this would cut down an hour of download time dramatically.
Snip20141023_14

I wanted to compare this to the SpringTrader app, so I pulled it earlier and began pushing it to my local registry.  One thing I noticed was it does buffer the image to disk when you push, so be aware you will need the disk space (and time) available for this.  The time savings happen later, on subsequent deployments.  And I crashed my VM when it ran out of space the first time….

Then on some other node
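The pull (and the run, same syntax as from Docker Hub) just swaps in the local registry name; the st_allinone repo name here assumes that is what you tagged and pushed it as:

```shell
docker pull docker-reg:5000/st_allinone
docker run -t -i -p 8080:8080 docker-reg:5000/st_allinone /bin/bash -c '/startup.sh && /bin/bash'
```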

Boom.  In about 10% of the time it normally takes to deploy this image, she’s up and running!  It took about 13 minutes to download from Docker Hub, and about 2 minutes from my local registry.  That’s a win if you need to do this over and over.

 

 

Where are the images stored?

On a docker node (which is what I have been using to refer to the base machine, not the containers themselves), I found the docker files here.

Snip20141023_15
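In other words, everything lives under /var/lib/docker on the host:

```shell
# Image layers, container state, and repo metadata on a docker node
ls /var/lib/docker
```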

On the local registry, I found the files here.  Remember we told it to use /opt/registry on the base machine and map that to /tmp/registry within the container?

Snip20141023_17
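So on the registry host you can poke at the mapped directory directly:

```shell
# On the registry host: the v1 registry keeps its data in the mapped dir,
# with layer data under images/ and tag metadata under repositories/
ls /opt/registry
```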


Directory as A Service: Part 2 – vCAC Integration

Directory as A Service: Part 1 – Intro & Setup
Directory as A Service: Part 2 – vCAC Integration


In the first post I introduced JumpCloud, a hosted off-premise directory service.   In this post I will show one way to integrate it into vCloud Automation Center (vCAC).

Getting Started
I started by re-using a simple CentOS single machine blueprint I had already configured, which already does a few configurations on boot:

Snip20141016_46

For simplicity, I like using the scripted method in a build profile to do simple stuff.  I don’t take any credit for this configuration, as I totally just copied what others have done before me (sorry, I don’t have the link to the exact post handy).  This build profile mounts a network share and runs a DNS script from that share to automatically add the new machine to my DNS.

Snip20141016_47

Now, the additions we’ll make to integrate into JumpCloud are as follows.  I split it into three scripts because I built this iteratively using some of their example code, but I could have just as easily done it all in one.  I’ll refer to the script numbers as they appear below for clarity:

I used all of JumpCloud’s example code as examples, but am posting my modifications to github here.

Snip20141016_49

 

Script 2 – Installs the agent (we saw that syntax in the first post)

 

Script 3 – Assigns tag(s) to the system being deployed

 

Script 4 – Sets the configuration: enable password auth, enable root login, and enable multifactor

Go ahead and do the normal vCAC configurations, and you end up with a catalog item for a JumpCloud-enabled CentOS machine.  I have not done anything else special to prompt the user for values here.  I could see it being useful to integrate this into ALL catalog requests, giving the user the choice to enter (or choose from a list of) tags, or the authentication choices (password, public key, root…).

Snip20141016_51

Let’s go ahead and request three nodes:

Snip20141016_53

 

Shortly after the three machines show up in vSphere:

Snip20141016_54

And when the deploying machine stage is complete, they show up in vCAC:

Snip20141016_59

And it shows up in the JumpCloud console:

Snip20141016_58

And the same method of authentication now works as we discussed previously.  Using google authenticator to log into one of my machines is pretty darn awesome I have to admit.

Snip20141016_60

 

What now?
Now, what about day 2 operations?  Well, I could add some menu actions to add/remove the system from tags or modify the other settings; that could be useful.  But at a bare minimum, I wanted tear down to be clean.  Since I was able to make machine creation totally automated, I wanted cleanup to be as well.  Since vCAC is a self-service portal, you wouldn’t want the JumpCloud console to be full of machines that no longer exist.

You don’t want your console filling up with old machines like this

Snip20141016_72

Because I took the easy way out and used the Build Profile scripting method of customizing machines from vCAC, I had to go a different route for tear down.   The easiest way to inject automation at this stage today with vCAC 6.1 is firing off a vCO workflow by using a machine state workflow.   So first I built a simple vCO workflow that runs the SSH command on the machine:

Snip20141016_61

I had to look it up to get the syntax right, but the input parameter you want to use is “vCACVm” of type “vCAC:VirtualMachine”.  From that variable you pull the VM name (vCACVm.displayName) and use it to know where to run the SSH command.  Simple and effective.
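Conceptually, the script that the workflow kicks off over SSH does something like the following. This is only a sketch: the config path and the systemKey field are what I observed on my nodes, the delete endpoint is an assumption based on JumpCloud's examples, and the working version (which also signs the API request with the node's key) is in my github repo mentioned above:

```shell
# Illustrative sketch only -- the real script signs its API request.
# Pull this node's system key out of the agent config...
SYSTEM_KEY=$(sed -n 's/.*"systemKey" *: *"\([^"]*\)".*/\1/p' /opt/jc/jcagent.conf)

# ...then ask JumpCloud to remove the system record for this node
curl -X DELETE "https://console.jumpcloud.com/api/systems/${SYSTEM_KEY}"
```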

Snip20141016_63

Snip20141016_64

 

Why shell scripts on each node and not full vCO automation?
Normally I wouldn’t bat an eye and would do all of this from vCO itself: create the JumpCloud endpoint as a REST host, create various actions as REST operations, etc.  The script just reaches out to a REST API, after all.  The first reason was, well, they already had demo scripts available that work completely.  But really it is because authentication is done from each individual node: each node (appears to) have a unique system key, and this system key is used to authenticate to the REST API.  This may need to be revisited for true enterprise functionality like central provisioning, IPAM, or even synchronizing directory services.  But I digress….

Implementing custom machine stages
Back to the vCO bit.  We need to run the library workflow below, “Assign a state change workflow to a blueprint and its virtual machines”, where we specify the blueprint we are using, the vCO workflow we want to run, and the machine state at which we want it run.

Snip20141016_65

And bam, a custom property gets automatically added to the Blueprint itself.  Since we want this to happen when the machine is destroyed, we chose the Machine Disposing stage.  The long ID starting with 8be39… is the vCO workflow ID.  You may have encountered this if you ever needed to invoke workflows from the vCO REST API, like here.  This library workflow is a lot more useful for a more complicated integration with lots of values being passed, but hey, it saved a little time for us here.

Snip20141016_67

 

Try it out
Now, unless I missed documenting a step here (as I’m writing this after I built the integration), all we have to do is destroy the machine like we normally would, and it quickly disappears from the JumpCloud console.

Snip20141016_68


Snip20141016_70
And there we go.  Full integration for login access control to my lab environment machines provisioned from vCAC.  If I’m honest, I may even keep using this for a few machines, as handy as it is.

 


Directory as A Service: Part 1 – Intro & Setup

Directory as A Service: Part 1 – Intro & Setup
Directory as A Service: Part 2 – vCAC Integration

I have been playing with an interesting new service from a startup based just down the road in Boulder, CO called JumpCloud.  In their own words:

“JumpCloud’s Directory-as-a-Service (DaaS) securely connects employees and IT resources through a single, unified cloud-based user directory. It is the single point of authority and authentication for a business’s many employees and access rules.”   link

I take this to mean it is a hosted directory service.  Interesting concept, which I bet is met with a ton of resistance from those that fight off-prem services, but I’ll leave that topic for later discussion and focus on the technology right now.  I wanted to see how I could integrate this into VMware’s vCAC, so that is what I built.  I’ll split this into two posts.  This first one will just cover setup; the second will be the integration.

First Impression
I have to admit, I really enjoy companies that make it super easy to try out their offerings.  JumpCloud offers 10 managed nodes for free, then gives a one-line, exact syntax for deploying the service at the command line with your account credential already in place.  They also have full examples for Puppet and Chef, similarly configured with your credential.  Literally cut, paste, go.  But more on this later.

Walkthrough of first time use

When you first log in to the console you are met with a simple interface and nothing configured.  Let’s walk through the initial configuration.

The first step seems to be to add users:

Snip20141016_28

Once we add our user, Mountain, we see his account is in a pending state.

Snip20141016_30

When the Mountain checks his email, he’ll see the activation message.

Snip20141016_31

And when he clicks on it he can set his own password

Snip20141016_32

And lastly, The Mountain is automatically presented with a multifactor authentication code that you can scan directly into Google Authenticator.  This is a killer feature in my opinion!

[ don’t worry about trying to steal these credentials, it won’t get you anywhere! ]

OK, now that the account is set up, we see we have one more notification for this user:

Snip20141016_34

Tags seemed a little confusing to me when I got started.  They appear to be the only grouping mechanism, so they are how you associate users with systems.  My guess is you would assign your developers to a development-machines tag, and your system administrators to some sort of all-machines tag.  I went ahead and created the tags that JumpCloud used in a few of their demo scripts.

We set up a tag for all servers and give the Mountain access

Snip20141016_36

We continue on and create a few more

Snip20141016_37

Now the cool part.  Remember earlier when I mentioned I love it when companies give you a simple way to try a service?

Snip20141016_26

 

So we cut/paste this syntax into a newly provisioned CentOS VM and it does everything for you:
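The pasted one-liner has this general shape (the connect key is unique to your account, and the exact flags may differ from what the console shows you):

```shell
# Pulls JumpCloud's kickstart script and runs it, agent install and
# registration included; <YOUR_CONNECT_KEY> comes from your console
curl --silent --show-error \
  --header 'x-connect-key: <YOUR_CONNECT_KEY>' \
  https://kickstart.jumpcloud.com/Kickstart | sudo bash
```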

Snip20141016_39

They have some sort of dynamic HTML on many of the console web pages, so when this command is run the previously empty screen is replaced with a system listing.

Snip20141016_40

Notice we do get an alert that no tags were automatically assigned to the system.  I’ll explore this in my next post on integrations, but for now we do it by hand:

Snip20141016_41

I’m not clear what’s going on behind the scenes (if it’s a push to the agent, or a recurring check-in, or what), but shortly after, we see that the Mountain is added to the passwd file on this CentOS machine:

Snip20141016_42

Now if the Mountain tries to log in at this point, he will be denied.  Why?  Because if we look at the system details, we see the default configuration is locked down pretty tight, allowing ONLY public key authentication.

Snip20141016_43

So I go ahead and click each of those buttons to allow root, allow password auth, AND allow multifactor (because I like to be safe and dangerous all at the same time… it’s a lab, after all).  The Mountain is now happy he can log in.  Notice the prompt for the multifactor token WITHOUT ANY OTHER MANUAL CONFIG ON THE SYSTEM.  That. is. awesome.

Snip20141016_44

 

Other stuff?
That’s it for the basics.  They have additional functionality to configure sudoers via the JumpCloud console, but I’ll leave that alone for now.

Overall thoughts
Given how young this startup is, I will give a pass on the few negatives I encountered (like UI problems in Safari, and features I would still want to see, like simple user grouping, on-premises AD integration, and Windows host support).  What they have now works pretty well, and they make it super easy to use.  It took me a little while to find the API information, but when I did I was able to build the automation I will show next.  In short, this technology has promise either for the small environment that has absolutely no on-prem infrastructure, or for any sized organization to help with strict access control on systems.


Full disclosure: I didn’t just happen to stumble upon JumpCloud, as I know their Chief Product Officer from the local cycling community here in Colorado, though I am not being influenced with free bike parts or beer to give their service a whirl.  Yet.


Docker as a Service via vCAC: Part 1

This project started with the question: using VMware management software today, what would it be like to manage and provide self-service access to Docker containers right alongside the traditional virtual machines and other services?  So that is what I explored, and that is what I have built here so far.  This is just going to be part one of… many, probably… as I develop the ideas further.
What does this solution aim to do?
This solution elevates a Docker-based container to somewhat of a “first class” citizen, in that it sits alongside virtual machines in the self-service vCAC catalog.

Really? You are able to do this today?
Well… mostly.  More work needs to be done to make it more functional.  But as of this part 1, provisioning is working great, and monitoring too (thanks to work from a co-worker that will be written about later).  Anything further, like day 2 operations or just tear down (of the containers), is currently manual.  But still possible.

Snip20141001_67
So walk me through it?
There’s a single machine blueprint that deploys a CentOS machine and installs Docker.  I needed a way to identify these machines as a special entity, so I went with vSphere tags for now.  Using the vCAC extensibility functionality, I also have it fire off a vCO workflow that calls PowerShell (talked about here) to add the vSphere tag.  Not elegant, but it works.  This will be improved later.  So now that a machine exists, there are additional catalog items for various services, like the demo application SpringTrader, or simply MySQL, Postgres, Ubuntu, etc., that run a vCO workflow to deploy the image onto one of the existing docker nodes.  Currently it picks a node randomly, but with some additional effort I plan to implement a (very) poor man’s DRS and utilize either a Hyperic plugin that my team is working on, or maybe just query CPU/memory directly to choose the node.

OK tldr;  Boil it down!?
First, docker nodes are deployed from vCAC blueprints.  Then vCO workflows identify those nodes via vSphere tags and deploy the requested docker image.

Snip20141001_68

Single machine blueprint in vCAC

 

Snip20141001_71

“DockerVM” tag in vSphere denotes the docker capability machines

 

Snip20141001_72

Service Blueprint for various docker images

 

Snip20141001_74

You can specify items like the image name, the port to expose, the bash command, etc..

 

Snip20141001_77

Slight smoke and mirrors in the screenshot – this is using a local docker registry so don’t be confused by that.

Snip20141001_78

…and to prove she’s working

What’s next?
I (and others) are working on ways to tear down the containers, tear down the nodes en masse, automate discovery and monitoring of the containers, and so on.  Currently there’s not even a way to know where the image was deployed.  To come!

Can we have the workflows, scripts, etc…?
Not yet…!  But if it’s of any use to anyone, I do have the demo app SpringTrader available on Docker Hub if you want it.  It weighs in at a hefty 4GB.  Find it at jasper9/st_allinone.  There is no special sauce included in here; it’s simply the SpringTrader app documented here, built into an image all ready to go (very poorly built, I’m sure…).

Sweet! How do I run SpringTrader?
This should probably be a full post on its own.  But in short, this command will get it running.  Change the first 8080 to any open port if it’s already in use.

docker run -t -i -p 8080:8080 jasper9/st_allinone /bin/bash -c '/startup.sh && /bin/bash'

Then see the web interface at:  http://ip.address:8080/spring-nanotrader-web/#login

BY RUNNING THIS IMAGE YOU IMPLICITLY ACCEPT ALL EULAS THAT ANY INCLUDED COMPONENTS IMPOSE.  THIS IS SIMPLY A DEMO APPLICATION AND NOTHING MORE.

 
