Tag Archives: vmware

A change…or pivot if you will…..

I have been at VMware for 7 years (this week on the dot, actually!).  That is half a lifetime in IT dog years.  In that time I have done many different things and been to many different places.  I saw (and at times helped, or tried to help) virtualization mature from a fringe lab thing that would never run production workloads efficiently and easily, into established technology that most people are using in some way.  Quite a ride!

Just after the July 4th holiday I will (metaphorically, though not geographically) be walking a few blocks up the hill in Palo Alto from the VMware campus to a sister EMC Federation company, Pivotal.  I’ll be leaving the current pre-sales gig and getting my hands dirty directly in technology as a main focus.  I’m excited!

www.jaams.co

I plan to continue blogging weird and silly projects on here, though it will stray from a VMware focus to broader devopsy topics in general.   Hence the slight change in name (mostly as a joke that I was told at GlueCon recently) – Josh as a (Micro) Service!  Kind of catchy, don’t you think?

I’ll spare you all the pontificating on the merits of focusing on one thick technology stack of all kinds of mashed-together bits – a monolith – versus breaking it down into singular focus areas for the future and doing each of them well……I don’t know… The joke might not work entirely, but I get a good laugh out of it anyway.

Onward!

“Security is mostly a superstition. It does not exist in nature, nor do the children of men as a whole experience it. Avoiding danger is no safer in the long run than outright exposure. Life is either a daring adventure, or nothing.” – Helen Keller

“Live every week like it’s Shark Week.” – Tracy Jordan

“It’s more fun to be a pirate than to join the Navy.” – Steve Jobs


Experiment: Pooling in vRA & Code Stream

Background

I recently attended DevOpsDays Rockies, a community-oriented DevOps conference (check them out in your area, it was great!).  I saw a talk by @aspen (from Twitter/Gnip) entitled “Bare Metal Deployments with Chef”.   He described something he/they built that, if I recall correctly, uses PXE/Chef/magic pixie dust to pull from a pool of standby bare-metal hardware and fully automate bringing it into a production cluster for Cassandra (or what have you).

This got me thinking about something I have been struggling with lately.  Whenever I develop blueprints in Application Director / Application Services, or just vRA / Code Stream, the bulk of the time I just hit the go button and wait, look at the error message, tweak, and repeat.  The bottleneck by far is waiting for the VM to provision.  Partly this is due to the architecture of the products, but it also has to do with the slow nested development environments I have to use.  We can do better…..!

Products using pooling

I then started thinking about what VDM / Horizon View have always done with this concept.  If I recall correctly – it’s been years and years since I’ve worked with it – to speed up desktop deployments to users, a pool concept exists so that one is always available on demand.   I don’t have much visibility into it, but I am also told the VMware Hands On Labs does the same: it keeps a certain number of labs ready to be used so the user does not have to wait for one to spin up.  Interesting.

The idea

So I thought: how could I bring this idea of paying the deployment time upfront to the products I’m working with today, to dramatically speed up development?   And this is what I built – a pooling concept for vRA & Code Stream, managed by vRO workflows.

Details – How Redis Works

When planning this out I realized I needed a way to store a small bit of persistent data.   I wanted to use something new (to me), so I looked at a few NoSQL solutions, since I’ve been wanting to learn one.  I decided on Redis as a key-value store, and found Webdis, which provides a light REST API into Redis.

I couldn’t find any existing vCO plugins for Redis I/O, which is fine – the calls are super simple:

Example of assigning a value to a string variable:

The redis command is: “set stringName stringValue”
So the webdis URL to “put” at is “http://fqdn/SET/stringName/stringValue”

Then to read the variable back:

The redis command is: “get stringName”
So the webdis URL to “get” at is “http://fqdn/GET/stringName”

Easy peasy. There is similar functionality for lists, with commands to push and pop values at either end of the list.  This is all I needed: a few simple variables (for things like the pool size) and a list (for the list of VMs, storing IP addresses & names).
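For example, the list equivalents look like this (vmlist is just an illustrative key name):

Redis: “rpush vmlist value” and “lpop vmlist”
Webdis: “http://fqdn/RPUSH/vmlist/value” and “http://fqdn/LPOP/vmlist”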

So in vCO I just created a bunch of REST operations that use varying numbers of parameters in the URL line.

I found the most efficient way to run these operations was to parametrize the operation name and pass it to a single workflow that does the I/O.

Details – Workflow(s)

The bulk of the work for this pooling concept is done in the following workflow that runs every 15 minutes.

In general it works like this (a rough shell sketch of the logic follows the list):

  • Check if the workloads are locked – since it can take time to deploy the VMs, only one deployment will be going at a time.
    • If locked, end.
    • If not locked, continue.
  • Lock the deploys.
  • Get the pool max target (I generally set this to 10 or 20 for testing).
  • Get the current pool size (the length of the list in Redis – much faster than asking vSphere/vRA).
  • If the current size is not at the target, deploy until it is reached.
  • Unlock the deploys.
  • Profit.
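In shell terms, the logic is roughly this (the key names and the deploy step are made up for illustration; jq parses the JSON that Webdis returns):

locked=$(curl -s "http://fqdn/GET/poolLock" | jq -r '.GET')
if [ "$locked" = "1" ]; then exit 0; fi              # a deploy is already in flight

curl -s "http://fqdn/SET/poolLock/1" > /dev/null     # lock deploys

target=$(curl -s "http://fqdn/GET/poolMax" | jq -r '.GET')
current=$(curl -s "http://fqdn/LLEN/vmlist" | jq -r '.LLEN')

while [ "$current" -lt "$target" ]; do               # top the pool up
  request_vra_catalog_item                           # placeholder for the nested vRO workflow
  current=$((current + 1))
done

curl -s "http://fqdn/SET/poolLock/0" > /dev/null     # unlock deploys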

I did not have to do it this way, but the nested workflow that does the actual VM deployments simply requests vRA catalog items.

In Action

Once I got it fully working and the pool populated, I could check the list values with this type of Redis query:


Redis: lrange vmlist 0 -1 (-1 means all)
Webdis: http://fqdn/LRANGE/vmlist/0/-1

The matching machines show up in vSphere.


In Action – Code Stream

Normally in a simple Code Stream pipeline you would deploy a VM by requesting the specific blueprint via vRA.


In this solution, I instead use a custom action to grab a VM from the pool and return its IP back to the pipeline as a variable.  Then I treat the VM like an existing machine, continue on, and delete it at the end.
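In shell terms, the grab-from-pool step is essentially this (the name/IP format being whatever was pushed onto the list):

curl -s "http://fqdn/LPOP/vmlist"
# returns JSON like {"LPOP":"vmname,10.0.0.42"} – parse out the IP and hand it to the pipeline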


This reduces the list in Redis by one, so the next time the scheduled workflow runs and checks the list size, it will deploy a new one.

(Kind of) Continuous Deployment

I have a job in Jenkins that builds the sample application I am using from source in Git, pushes the compiled code to Artifactory, and runs a post-build action that calls Code Stream to deploy.


I wanted to see if there were any bugs in my code, so I wanted this whole thing to run end to end over and over and over…   I configured the Jenkins job to build every 30 minutes, and I went on vacation the week after I built this solution, so I could see whether anything broke down over time.  Amazingly enough it kept on trucking while I was gone, and even got up to the mid-700s in Jenkins builds.   Neat!

(Screenshots: Jenkins builds, Artifacts, Code Stream executions.)

Summary

To my surprise, this actually works pretty darn well.  I figured my implementation would be so-so but the idea would get across.  It turns out, what I’ve built here is darn handy and I’ll probably be using it the next time I am in a development cycle.

Post any questions here and I’ll try to answer them.   I’m not planning to post my workflows publicly just yet, fyi.


Introducing VMware Project Photon (#vmwcna)

Unless you have been hiding under an IT rock, you no doubt have heard about the new crop of tiny Linux OS releases as of late that are positioned as a “Container Host Runtime” or “Linux Container OS” (here, here, here).   They are stripped down to the bare essentials and geared towards running containers efficiently at scale: CoreOS, Atomic, Snappy and so on.  Today VMware’s Cloud Native team is introducing Project Photon as their flavor of this ecosystem.  (Link to GitHub page)

Entirely open source. (Free as in beer.)  Built-in VMware Tools.  Optimized for the VMware hypervisors. There are lots of benefits to VMware building their own from the kernel up rather than forking an existing OS; these will become clearer over time, but I will leave it to the official messaging for now.

What is Project Photon?

Project Photon is a tech preview of an open source, Linux container host runtime optimized for vSphere. Photon is extensible, lightweight, and supports the most common container formats including Docker, Rocket (rkt) and Garden.

Project Photon includes a small footprint, yum-compatible, package-based lifecycle management system, and will support an rpm-ostree image-based system versioning.

When used with development tools and environments such as VMware Fusion, VMware Workstation, HashiCorp (Vagrant and Atlas) and production runtime environment (vSphere, vCloud Air), Photon allows seamless migration of container based Apps from development to production.

From “Getting Started” documentation

It may not make sense to some why VMware is releasing a Linux OS.  This will become clearer over time.  But for today, just think about the power of VMware owning the hypervisor underneath AND the VM operating system as a platform for running containers.  You get all the benefits of the vSphere world (HA, DRS, FT, NSX, vSAN, vMotion….) and all the benefits of containers!  Plus… remember VMfork that Duncan has blogged about?  hmmmmm….

 

Installation


Seriously….. Using the minimal install: a 12-second install time in Fusion on my MacBook Pro, and a 303 MB footprint.  That. is. awesome.  The following are the sizes and average install times I’ve noticed.  Booting is literally just a few moments.

The install comes in three flavors from the same .ISO (or you can custom-pick packages):

Full: 1.7 GB.  40 to 60 seconds to install
Minimum: 303 MB.  10 to 20 seconds to install
Micro: 259 MB.  8 to 12 seconds to install

 

Photon OS (Micro): Photon Micro is a completely stripped down version of Photon that can serve as an application container, but doesn’t have sufficient packages for hosting containers. This version is only suited for running an application as a container. Due to the extremely limited set of packages installed, this might be considered the most secure version.

Photon Container OS (Minimum): Photon Minimum is a very lightweight version of the container host runtime that is best suited for container management and hosting. There is sufficient packaging and functionality to allow most common operations around modifying existing containers, as well as being a highly performant and full-featured runtime.

Photon Full OS (All): Photon Full includes several additional packages to enhance the authoring and packaging of containerized applications and/or system customization. For simply running containers, Photon Full will be overkill. Use Photon Full for developing and packaging the application that will be run as a container, as well as authoring the container, itself. For testing and validation purposes, Photon Full will include all components necessary to run containers.

Photon Custom OS: Photon Custom provides complete flexibility and control for how you want to create a specific container runtime environment. Use Photon Custom to create a specific environment that might add incremental & required functionality between the Micro and Minimum footprints or if there is specific framework that you would like installed.

From “Getting Started” documentation

Using Photon / SystemD

I’ll be the first to admit I have not adopted CentOS 7 yet – all my labs are still using CentOS 6 – so I was not yet familiar with the new systemd commands.  There is some good info on them here and here.

TLDR; for services,  Project Photon uses systemd:
You no longer run chkconfig or /etc/init.d/ scripts.  Instead you use systemctl, e.g. systemctl enable postfix and systemctl start postfix.
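Side by side, using postfix as the example service:

# the old CentOS 6 way:
chkconfig postfix on
/etc/init.d/postfix start

# the new systemd way:
systemctl enable postfix
systemctl start postfix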

Networking is different too: you edit files in /etc/systemd/network instead of sysconfig.  I’ll show more info on that below.

One more helpful thing to know: there are no logs in your familiar home of /var/log/; they are managed centrally by journald and queried with journalctl. DigitalOcean has a great overview of its usage here.  I won’t rehash all of the functionality that they wrote about, but I’ll give a quick example.

TLDR; for logging, Project Photon uses journalctl:
You no longer use /var/log/postfix.log.  Instead you use journalctl -f -u postfix (to continuously tail).


How to Get Started

VMware has posted a bunch of great getting started guides here that walk through deploying on Fusion, vSphere, GCE, AWS, vCloud Air, etc…  In addition to those guides, here are some tips on configuration to help those who are not yet familiar get up and running right away.

Here is what I’ve been doing when I deploy a new machine.  I’ve found each of these steps has exact syntax and capitalization requirements; otherwise the IP does not get configured.

  • Allow root SSH access in /etc/ssh/sshd_config (PermitRootLogin yes)
  • Set the correct hostname in /etc/sysconfig/network
  • Configure a static IP by:

mv /etc/systemd/network/10-dhcp-eth0.network /etc/systemd/network/static.network

Edit the contents to be:
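(Assuming your NIC is eth0; swap the addresses for your own.)

[Match]
Name=eth0

[Network]
Address=192.168.100.10/24
Gateway=192.168.100.1
DNS=192.168.100.1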

  • Update the hosts file to be sure you have the short and FQDN names set on 127.0.0.1

  • Then run the commands to configure the hostname (see the example after this list)

  • I like using keys for SSH to log in quicker in my lab (also shown below)
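The exact commands aren’t critical; something like this does it (the hostname and key target are examples):

hostnamectl set-hostname photon01
systemctl restart systemd-networkd     # pick up the static IP change
# then, from your workstation, push a key for quicker logins:
ssh-copy-id root@photon01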

Good to go!

 


New release: VMware Software Manager – Download Service

Release Notes

Documentation

Today VMware is releasing a trove of software.  One small tool that will surely make release days like this much easier is Software Manager.   I just tried it out, and it’s exactly what you’d expect.   No more browsing through the somewhat painful download web pages!  Have it all come to you.

The tool is Windows-only, and comes in at a whopping 17 MB.  It comes packaged as an MSI; quickly install it, specify a location (one that has enough space for the many gigs of downloads) and it fires up a web page connecting to localhost on port 8000.  Log in with your MyVMware credentials.   BAM!  You will see all the downloads you are entitled to, with a very easy one-click download for a whole suite of components.


UPDATE:  vSphere 6.0 is now showing up in the product.   If you installed it first thing today, you might need to stop and restart the service for it to show up.   Worst case, kill your browser with a hammer.


Tech Preview of Docker Machine Driver for Fusion

(Also, check out the vSphere driver here.)

I kind of beat up on the vSphere driver quite a bit in the last post (sorry guys!).   So I wanted to give a super easy example of what else you can do with Docker Machine.   I just (finally) watched the DockerCon keynote where they introduced the machine functionality, and their messaging about getting from “zero to docker” in just a few commands resonated.  This example shows what the vision is.

So here we go: using docker machine with VMware Fusion as the endpoint on OS X.

You may need to click on the video and watch it in theatre mode to see the text.

 


Tech Preview of Docker Machine Driver for vSphere

UPDATE:  Machine is now out of beta, and I have a newer post on some of the changes here: http://www.jaas.co/2015/03/20/using-the-released-version-of-docker-machine-v0-1/

(Also, check out the Fusion driver here.)


On Dec 4, 2014, Docker announced the “machine” command/functionality at DockerCon 2014.

In short, it provides a way to deploy to many different locations from the same docker CLI interface (yes, I know that is like “ATM machine”, just deal with it).  The initial alpha release includes drivers for VirtualBox and DigitalOcean (though as of Dec 18, their GitHub page already lists additional syntax for AWS & Azure; I’m not sure if that is functional yet).

The next day on Dec 5 2014 VMware announced a preview of their extension to this Docker Machine command for deploying to VMware Workstation, vSphere and vCloud Air.

I have been using the vSphere part of it a bit this week and found the existing instructions a bit lacking so I wanted to provide some tips and examples to get up and running.

Things to know – but maybe come back to this list later….
First off, a few takeaways I learned the hard way, which included bugging a very smart dude for help.    A few of these may not make sense until you dive into the functionality, so you may want to revisit this section later if you have trouble.

To be clear, I am not posting this list as a catalog of all the blemishes I found, but as a guide to help anyone else who is struggling to see the vision this functionality has the potential to bring.  Remember – this is a preview release of the VMware driver, and an alpha release of the Docker code.  Totally unsupported in every meaning of the phrase.


1) Understand that for this release you need to use the bundled docker binary, as it has some functionality that the newest release you’ll get from package managers doesn’t have.   To get it to co-exist on a system that already has docker-io installed, either specify the full path to the binary you want, or make sure the $PATH environment variable is set so it picks up the one you want first.   I also copied my released docker binary to docker_local, so I could easily run that command if I wanted to switch to a local docker container instance.

2) This release of the machine command requires the use of environment variables to specify the active host.  When you run machine ls it will list all the existing docker machines available, and it also marks the “active” one.  I’m not in the loop on the dev details, but I assume this will be cleaned up in the future.   Even though it says active, you still have to set the environment variable.  Either pull the URL from the machine url machinename command, or use this nested command:
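(machinename being whatever you named your machine:)

export DOCKER_HOST=$(machine url machinename)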

Do note that this is required.  One stumbling point I found was that I couldn’t find a way to make this work without a real tty session, like when vCO makes an SSH call in a workflow.  TBD there……

3) As a follow-up to the two above: if you DO want or need to switch between using docker machine and a local docker binary, you need to clear out the environment variable with:
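unset DOCKER_HOST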

Yes, kind of annoying for now, but I’m sure this will be fixed soon enough.  They are probably discussing this issue here on GitHub.

4) I’m not sure how many others use local docker registries out there, but I do quite a bit for lab environments.  As with this other post I made about the change to forcing SSL communication, it took a moment to figure out how to force the configuration setting on each docker machine.  The really smart guy I alluded to previously built me a boot2docker ISO with it embedded, so that’s an option, or you can just apply it manually like this:
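At the time, the setting in question is the daemon’s --insecure-registry flag; applying it by hand inside the boot2docker VM looks roughly like this (the registry address is an example, and the profile path is the usual boot2docker one – verify against your build):

machine ssh machinename
# then, inside the boot2docker VM:
echo 'EXTRA_ARGS="--insecure-registry registry.lab.local:5000"' | sudo tee -a /var/lib/boot2docker/profile
sudo /etc/init.d/docker restart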

 

5) I had quite a few problems with a space in the datacenter name, and with special characters in passwords.  There may be a workaround, but simply escaping them didn’t work, so I just renamed the datacenter and used an account with all simple characters.  Remember….alpha code….

6) Yes, it downloads the ISO every time you run the machine command.  I don’t know why. Go ask docker.  Because, alpha.

7) I even hesitate putting this one here… but in my personal lab it kept failing when transferring the ISO to the datastore, while it works fine in another lab environment I use.
Probably some ghetto routing issue of my own….  I worked around it by uploading the ISO to the datastore by hand.  Even though, as I said in #6, it downloads the ISO every time, if it already exists in the datastore it doesn’t try to push it again.


Step by Step

This syntax is accurate as of Dec 18 with CentOS 6.6 64-bit.

Grab the tarball:
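(The URL below is a placeholder – grab the real link from the VMware announcement post.)

mkdir docker-machine-preview && cd docker-machine-preview   # make a directory first, see the grumble below
wget https://example.com/docker-machine-vmware-preview.tgz
tar -xzf docker-machine-vmware-preview.tgz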

GRUMBLE GRUMBLE… whoever compressed this didn’t include the directory…. so it extracts to the present working directory……

Append the directory you extracted to your PATH environment variable.  Lately I’ve been using ~/.bash_profile to set individual settings per user on each host; your results may vary.
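For example (the path being wherever you extracted):

echo 'export PATH=$PATH:~/docker-machine-preview' >> ~/.bash_profile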

And now to fire it off for the first time:
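The original command was along these lines.  The flag names here follow the later upstream vmwarevsphere driver and may differ slightly in this preview, so run machine create -h for the authoritative list (all the values are stand-ins for my lab):

machine create -d vmwarevsphere \
  --vmwarevsphere-vcenter vcenter.lab.local \
  --vmwarevsphere-username administrator \
  --vmwarevsphere-password SimplePassword1 \
  --vmwarevsphere-datacenter Lab \
  --vmwarevsphere-datastore datastore1 \
  dev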

And with any luck you should see a VM pop up named docker-host-ABCDEFG  (that last part is random).  If you get errors, read over the ‘PRO-Tips’ at the end, and ‘Things to know’ at the top.

Now to list the current machines, run:
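machine ls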

Set the required environment variables with:
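(The same nested command as before, using the machine name from the create step:)

export DOCKER_HOST=$(machine url dev)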


Now! The magic is happening!  Run a normal docker command like:
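docker run busybox echo hello world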

And see the magic happen for reals.   This image is being deployed on this new docker “host” which is actually a barebones vSphere VM.

PRO-Tips….

1) If you are doing a bunch of trial and error, you may see a message that the docker host already exists, even though it may or may not have been deployed.  This is because even if the command fails, the machine still gets added to the local machine list.  Clean it up with machine rm -f machinename – the -f forces the removal even if the actual VM doesn’t exist.

2) If you get an error message similar to “FATA[0086] open /root/.docker/public-key.json: no such file or directory”, just run the docker binary included in the bundle and it will create this file for you.

3) I crafted a pretty sweet bash script to nuke all machines at once.  Add the -f flag to force if you have to.  It works as such:
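It boiled down to something like this (reconstructed from memory – the awk just skips the header row of machine ls output):

for m in $(machine ls | awk 'NR>1 {print $1}'); do
  machine rm "$m"   # add -f to force
done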

 

Conclusion

So what does this give us?   In my mind, it gives us a simple interface that you may already be familiar with and already using on your local machine, plus the ability to deploy to any number of other endpoints like public or private clouds.   That’s powerful.   Especially with any automation you have already created – slipping this into the mix makes it even more robust.

This post was heavy on text and light on screenshots on purpose as it’s a complicated subject in this state of development.  I hope to put together a quick video to demonstrate this functionality soon.  Stay tuned.

 

 


vCAC Remote Console remote privilege escalation

LINK TO VMware Advisory VMSA-2014-0013

LINK to CVE-2014-8373

If you have a vCAC (or, under the new name, vRealize Automation, vRA) system on an untrusted network you should read up on this. (Or in truth, one could argue, if you have it at all in a production environment….)

VMware vCloud Automation Center has a remote privilege escalation vulnerability. This issue may allow an authenticated vCAC user to obtain administrative access to vCenter Server.

To be clear, this is not a broad virtual machine remote console (VMRC) issue, but an issue with how it is implemented in vCAC/vRA.  vSphere is not affected, vCD is not affected.   vRA 6.2 is not affected, as “connect using VMRC” is disabled there.  The workaround for the older versions is to disable this method.


Docker as a Service via vCAC: Part 1

This project started with a question: using VMware management software today, what would it be like to manage and provide self-service access to Docker containers right alongside traditional virtual machines and other services?  So that is what I explored and that is what I built here so far.   This is just going to be part one of…many, probably…  as I develop the ideas further.
What does this solution aim to do?
This solution elevates a Docker-based container to somewhat of a “first-class” citizen, in that it sits alongside virtual machines in the self-service vCAC catalog.

Really? You are able to do this today?
Well…  Mostly.   More work needs to be done to make it more functional.  But as of this part 1, provisioning is working great, and monitoring too (thanks to work from a co-worker that will be written about later).  Anything further, like day-2 operations or tearing down the containers, is manual currently.  But still possible.

So walk me through it?
There’s a single-machine blueprint that deploys a CentOS machine and installs docker.  I needed a way to identify these machines as a special entity, so I went with vSphere tags for now.   Using the vCAC Extensibility functionality, I also have it fire off a vCO workflow that calls PowerShell (talked about here) to add the vSphere tag.  Not elegant, but it works; this will be improved later.      So now that a machine exists, there are additional catalog items for various services, like the demo application SpringTrader, or simply MySQL, Postgres, Ubuntu, etc., that run a vCO workflow to deploy the image onto one of the existing docker nodes.   Currently it picks a node randomly, but with some additional effort I plan to implement a (very) poor man’s DRS and utilize either a Hyperic plugin that my team is working on, or maybe just query CPU/memory directly to choose the node.

OK tldr;  Boil it down!?
First, docker nodes are deployed from vCAC blueprints.  Then vCO workflows identify those nodes via vSphere tags and deploy the requested docker image.


Single machine blueprint in vCAC

 


“DockerVM” tag in vSphere denotes the docker capability machines

 


Service Blueprint for various docker images

 


You can specify items like the image name, the port to expose, the bash command, etc..

 


Slight smoke and mirrors in the screenshot – this is using a local docker registry so don’t be confused by that.


…and to prove she’s working

What’s next?
I (and others) am (are) working on ways to tear down the containers, tear down the nodes en masse, automate discovery and monitoring of the containers, and so on.   Currently there’s not even a way to know where the image was deployed to – to come!

Can we have the workflows, scripts, etc…?
Not yet…!  But if it is fun for anyone, I do have the demo app SpringTrader available on Docker Hub if you want it.  It weighs in at a hefty 4 GB.  Find it at jasper9/st_allinone.    There is no special sauce included here; it’s simply the SpringTrader app documented here, built into an image all ready to go  (very poorly built, I’m sure….).

Sweet! How do I run SpringTrader?
This should probably be a full post on its own.  But in short, this command will get it running.  Change the first 8080 to any open port if it’s already in use.

docker run -t -i -p 8080:8080 jasper9/st_allinone /bin/bash -c '/startup.sh && /bin/bash'

Then see the web interface at:  http://ip.address:8080/spring-nanotrader-web/#login

BY RUNNING THIS IMAGE YOU IMPLICITLY ACCEPT ALL EULAS THAT ANY INCLUDED COMPONENTS IMPOSE.  THIS IS SIMPLY A DEMO APPLICATION AND NOTHING MORE.

 


Small Things: vCAC 6.1 – "Data Collection" catalog entry

Anyone who makes template configuration changes in vCAC can attest to what a pain it is to reconfigure the agents, shut down the machine, snapshot, browse the menu structure to where you force data collection, click collect data for all the items, browse to the blueprint config, and wait for it to complete.    Well, hopefully this tip can speed that up just a little, or at least make it less of a headache for you.

vCAC 6.1 comes with a ton of vCO workflows out of the box.  One that caught my eye is “Force data collection”.


Adding this workflow as a catalog item is a breeze under Advanced Services – Service Blueprints.  When complete, it will show up like any other service or template.


And it does its job quite well.


One warning: you will want to set a constant value for the one question this workflow prompts for, by editing the blueprint.


And choose your IaaS (Windows) server.


I installed a fresh new instance of vCAC & IaaS today, and I’m not sure if there was an error during install, but at first mine didn’t show any hosts to choose from.   I had to go into vCO with the client and run this workflow to add it.  Your results may vary.


EDIT Sept 22 2014:   I wasn’t clear about where to find this workflow.  It’s found within these folders:

Orchestrator > Library > vCloud Automation Center > Infrastructure Administration > Extensibility > Force Data Collection


Small Things: vSphere 5.5 U2 – C# Client, Editing HWv10 VMs

Maybe I’m just becoming an old get-off-my-lawn ex-operations curmudgeon in my “old” age, but I find the vSphere Client hard to part with (this message brought to you by Me, and only Me, and no one but Me).  I found it very annoying that if you upgraded any VM hardware versions to 10, you could no longer edit settings in the old client – even for something common like mounting an ISO.

vSphere 5.5 U2 has brought us the fix, yay!  It allows editing of any features present in the old client, which is good enough for basic stuff.


EDIT: Whoops.. fixed the screenshot
