HOWTO: Change Storage Policies for VSAN across entire clusters w/ PowerCLI

I had a need recently to switch the applied storage policy across a ton of VMs, but I didn’t want to change the default policy that comes out of the box.  A tough proposition, as I found no easy way to do it.  It took quite a bit of googling and trial and error, but I came up with this two-liner to get it done. So here you go, world: go forth and policy-change if you need such a thing.
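For the record, the shape of the two-liner is roughly this (a sketch using the PowerCLI SPBM cmdlets; the cluster and policy names are placeholders for your own):

```powershell
# Placeholders: swap in your own policy and cluster names
$policy = Get-SpbmStoragePolicy -Name "MyCustomVsanPolicy"
$vms    = Get-Cluster "MyCluster" | Get-VM

# Line 1: apply the policy to the VM home objects
$vms | Set-SpbmEntityConfiguration -StoragePolicy $policy

# Line 2: apply the policy to every virtual disk on those VMs
$vms | Get-HardDisk | Set-SpbmEntityConfiguration -StoragePolicy $policy
```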

The first line applies it to the VM object, then the next applies it to all the disks.  Easy peasy.

 

Technology found our new best friend.

Last night I built a robot that brought us to our new best friend. Meet Cash.

Cash

Before I explain this strange statement, first let’s back up.

Two weeks ago we found our beloved Maddie was stricken with a tumor on her spleen that ruptured.  I won’t dive into the heartbreaking details, but you can read about it here, here and here.

maddie_camping

To summarize: heartbroken.  That damn wonderful dog lived a great life and will never be replaced.  But we have found we’re a two-dog family.  Enter the idea of visiting shelters… which is always fun!

IMG_6736

After a few misses, we found just how competitive adopting dogs is in Boulder.  Yes, competitive.

Forget cycling, running, and climbing – the most competitive sport in Boulder is trying to adopt a dog from the pound link

Dogs fly out of the Boulder Humane Society.  There was one Jen was interested in that was adopted within hours of becoming available.  We heard from employees of one that was going HOME within 45 minutes of stepping into the adoptables area.  Seriously.

The employees say to just keep an eye on the website.  So that’s what we did for a bit.  We noticed it was updated frequently throughout the day.  But there was no way to be notified of new dogs.  Enter my light bulb moment.

I saw there was no RSS feed, and (of course) no API.  So I took a glance at the HTML source and saw it would be super easy to screen scrape.  Muahhaha…… this will be easy peasy!    With just a little bit of hacking last night I had a working system that scraped their webpage every 15 minutes, stored it in a local database, and sent us an email when a dog became available!  Ha! Leg up, take that one, Boulder animal people.  Dog adoption performance enhancing drugs.

In the morning I surmised that wasn’t nearly geeky enough.  I added functionality to email us when a dog appeared to be adopted (wasn’t listed any more).  And since email is SO year 2000s, I spun up a new Twitter account and had it tweet and direct message us when a dog showed up and went home.  I dub thee: Dog(S)talker.  Get it?  Dog Stalker.  Dogs Talker.  I kill me…

Lo and behold… while I was out with the kid on his bike and Jen was working on an extension to the chicken coop, DING. A DM from the new robot:

Snip20160403_15

Due to an unfortunate typo in the code it was missing the details of the dog, but still… the fucking thing worked. A quick click on the link showed a 1-year-old Australian Kelpie mix, about 45 pounds.  Check, check, and check all the boxes!  I yelled across the street: “JEN!”  to which I immediately heard the reply, “I’M GETTING READY TO GO [to the shelter]!”

15 minutes later I received this:

IMG_6740

So an absolute max of 30 minutes from the time she was posted to the website to one of us showing up to check her out.

Long story short, she’s perfect for us.  I’ll post the code to GitHub soon.  Perhaps if this is useful to anyone else I can add others to the notifications.

Snip20160403_16

 


How to send vCenter alarms to Slack

I’m spending some of my time in the new gig with my old sysadmin ops hat on.  We needed a quick, easy way to keep an eye on alerts from a vSphere environment, so… what else would be more fun than to funnel them to Slack?!  Easy peasy, even on the vCenter Appliance.  Let’s see how…

First you need to configure the integration on Slack.   In the channel you wish to see the alerts in, click the “Add a service integration” link.

Snip20150806_12

There isn’t any special integration with vSphere; we are going to use a simple REST API to push the content.  Scroll down to “Incoming WebHooks”

Snip20150806_13

Now you need to approve the integration: verify the chat room and click the button:

Snip20150806_14

The outcome of this will be a long URL that you will need for the script.

Now we need to get your script ready. Remember, this runs on vCenter (Windows OR appliance), not ESXi.  Much credit to this guy who created a simple script for Zabbix, as this is a hacked-up version of it.  The key here is using the environment variable $VMWARE_ALARM_EVENTDESCRIPTION, which I use because it’s short and simple.  If you want other types of data, check out the documentation here.
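The script itself boils down to a curl POST of that variable to the webhook URL; a minimal sketch (the webhook URL here is a placeholder for the one Slack generated above):

```shell
#!/bin/sh
# Placeholder URL -- use the one from your "Incoming WebHooks" setup.
WEBHOOK_URL="https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXX"

# vCenter populates VMWARE_ALARM_* environment variables for script actions.
TEXT="${VMWARE_ALARM_EVENTDESCRIPTION:-no description provided}"

# Build the JSON payload and post it to Slack
PAYLOAD=$(printf '{"text": "%s"}' "$TEXT")
curl -s -X POST -H 'Content-Type: application/json' --data "$PAYLOAD" "$WEBHOOK_URL"
```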

Now you just simply need to hook this script up to the alarm in vSphere:

Snip20150806_15

Sweet.  Cool.  Let there be (kind of) chatops.

But, I hear you asking… what if you want to apply this to all your alarms?  Also… easy peasy.  I just whipped together some PowerCLI and bam.

That line will apply this script action to ALL alarms in the vCenter you connect to.  By default it applies only to the yellow-to-red transition.  I wanted this to trip on all four transitions (green to yellow, yellow to red, red to yellow, and yellow to green), so I looked a little deeper and found this will do it:

Now if you are like me and you screw this up along the way, you may have to clear out the actions across the board.  This line will do that for you:
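Since the snippets above are short, here is a hedged sketch of all three (the script path is a placeholder, and the exact trigger statuses may need tweaking for your environment):

```powershell
# Placeholder path to the Slack script on the vCenter host
$script = "/root/slack-alert.sh"

# Attach the script action to every alarm (covers yellow-to-red by default)
Get-AlarmDefinition | New-AlarmAction -Script -ScriptPath $script

# Add triggers so the action fires on the other transitions too;
# repeat with the remaining StartStatus/EndStatus pairs as needed
Get-AlarmDefinition | Get-AlarmAction |
  New-AlarmActionTrigger -StartStatus "Green" -EndStatus "Yellow"

# And if you screw it up, clear every script action across the board
Get-AlarmDefinition | Get-AlarmAction -ActionType ExecuteScript |
  Remove-AlarmAction -Confirm:$false
```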



VMware Announcements @ DockerCon 2015

Two announcements are being made by VMware at DockerCon today that I am pretty stoked about. Here’s a snippet of the details and a link roundup.  I’ll revisit these soon with deeper posts.

VMware AppCatalyst

VMware AppCatalyst is an API and Command Line Interface (CLI)-driven Mac hypervisor that is purpose-built for developers, with the goal of bringing the datacenter environment to the desktop.

“Introducing AppCatalyst – the desktop hypervisor for developers” – VMware Cloud Native Blog link

VMware Communities & download link

Update June 23, 2015 – Using Pivotal Lattice with AppCatalyst by @jrrickard link

Update June 23, 2015 – Vagrant provider for AppCatalyst link

Project Bonneville

..an extension of VMware vSphere that enables seamless support for Docker containers

VMware-Project-Bonneville

“Introducing Project Bonneville” – VMware Cloud Native Blog link

Overview video from VMware’s Brit Mad Scientist Ben Corrie here.

Update June 23, 2015 – Demo video of Bonneville link

Update June 23, 2015 – Bonneville running MS-DOS to play Lemmings link

Other Links

“Extending the Data Center with VMware AppCatalyst and Project Bonneville” – VMware Tribal Knowledge blog post

“VMware previews Project Bonneville, a Docker runtime that works with vSphere” – Venture Beat post (with some weird upside down image that is giving me a headache)

“A Different VMware: An API-Driven Hypervisor and a Docker Oriented vSphere” – The New Stack post

Update June 23, 2015 – “VMware targets new DevOps tools at Docker” – Silicon Angle link

Update June 23, 2015 – “VMware Doubles Down on Docker Integration with Project Bonneville” – Server Watch link

Update June 23, 2015 – “VMware AppCatalyst and Project Bonneville: ‘Datacenter On the Desktop'” – Virtualization Review link

Update June 23, 2015 – “VMware brings AppCatalyst and Project Bonneville technology previews” – InfoTech Lead link

Update June 23, 2015 – “VMware Brings More Tools To Docker Development” – Information Week link

Update June 23, 2015 – “VMware builds a magic mirror for containers and a desktop cloud” – The Register link

Update June 24, 2015 – “VMware Blunts Container Attack With Bonneville VM” – The Platform link

Update June 24, 2015 – “VMware containers go soup-to-nuts for cloud apps” – TechTarget link

Update June 24, 2015 – “VMware Embeds Docker Container Capabilities in Hypervisor” – Datacenter Knowledge link

 

A change…or pivot if you will…..

Pivotal_Logo_200

I have been at VMware for 7 years (this week on the dot, actually!).  That is half a lifetime in IT dog years.  In that time I have done many different things and been to many different places.  I saw (and at times helped, or tried to help) virtualization mature from a fringe lab thing that would never run production workloads efficiently, into established technology that most people are using in some way.  Quite a ride!

Just after the July 4th holiday I will be (metaphorically, though not geographically) walking a few blocks up the hill in Palo Alto from the VMware campus to a sister EMC Federation company, Pivotal.  I’ll be leaving the current pre-sales gig and getting my hands dirty directly in technology as a main focus.  I’m excited!

www.jaams.co

micro-services1-297x250

I plan to continue blogging weird and silly projects on here, though it will stray from a VMware focus to more broadly devopsy topics in general.  Hence the slight change in name (mostly as a joke that I was told at GlueCon recently): Josh as a (Micro) Service!  Kind of catchy, don’t you think?

I’ll spare you all the pontificating: one thick technology stack made up of all kinds of mashed-together bits is a monolith, and breaking it down into singular focus areas and doing each of them well is the future… I don’t know. This joke might not work entirely, but I get a good laugh out of it anyway.

Onward!

“Security is mostly a superstition. It does not exist in nature, nor do the children of men as a whole experience it. Avoiding danger is no safer in the long run than outright exposure. Life is either a daring adventure, or nothing.”
Helen Keller

“Live every week like it’s Shark Week.” – Tracy Jordan

“It’s more fun to be a pirate than to join the Navy.” – Steve Jobs

Tagged , , ,

Using a time series DB, starting with InfluxDB

devops-everywhere

At the last two conferences I attended (DevOpsDays Rockies and GlueCon) I heard a lot of mentions of NoSQL and time series databases.  I hate not knowing about things and not having experience with them, so I’ve been playing with both.  First I integrated Redis, a NoSQL database, into a recent project of mine.  And just now I’ve been playing with InfluxDB as a monitoring system, and here I’m going to tell you about my experience.

I didn’t want to get caught up in any installation shenanigans so I tracked down docker images to assist in getting up and running fast.  And glad I did because it worked immediately!

index

1: InfluxDB

InfluxDB docker image: https://registry.hub.docker.com/u/tutum/influxdb/

Docker command:
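It was essentially this (a sketch; 8083 is the admin UI and 8086 the API port, per the image’s docs):

```shell
# Start InfluxDB from the tutum image, exposing the admin UI and API ports
docker run -d --name influxdb \
  -p 8083:8083 -p 8086:8086 \
  tutum/influxdb
```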

And then right away you can load http://your.ip.address:8083 in a web browser and you will get the below screen:

Snip20150527_9

Once you log in with root/root, you will see that you have no databases out of the box. Go ahead and enter a name and hit Create:

Snip20150527_10

You are now given a simple UI to push and pull data into the system by hand.  To test this we will add some values in the same format as some of my scripts that deal with temperature.  Basically you can think of the time series as the table, and the values as key/value pairs.

Snip20150527_11

Then you can craft a simple query to verify the data:

Snip20150527_12

Neat!   Now unlike some other solutions, InfluxDB doesn’t provide any visualization functionality (other than a basic line chart).  I spun up a Grafana container to do this.

grafana

2: Grafana

Grafana docker image: https://registry.hub.docker.com/u/tutum/grafana/

Docker command:
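Something along these lines (the environment variable names come from the tutum/grafana image docs; the credentials and addresses are placeholders for my lab values):

```shell
# Start Grafana wired to the InfluxDB container started earlier
docker run -d --name grafana \
  -p 80:80 \
  -e HTTP_USER=admin -e HTTP_PASS=admin \
  -e INFLUXDB_HOST=your.ip.address -e INFLUXDB_PORT=8086 \
  -e INFLUXDB_NAME=temperature \
  -e INFLUXDB_USER=root -e INFLUXDB_PASS=root \
  tutum/grafana
```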

There are simpler ways to start up this container but I found all of these parameters got me quickly to where I wanted.

Now you can log in to port 80 on this machine and you will be presented with an empty first page:

Snip20150527_13

Empty graphs aren’t very exciting, so let’s configure them real quick…

The syntax in Grafana is slightly different than what we saw directly in InfluxDB, but it is mostly straightforward.  We put the database name (temperature) into the “series” field.  We fill in the blanks for the select statement: use last(value) and item=’80027_temp’ to specify the key/value.  Click somewhere else to change focus and the graph should reload, showing the values we entered by hand.

Snip20150527_14

Now I wanted to play around with it further so I modified some existing scripts I have for doing various types of monitoring like pulling data from weather underground (temperature, humidity, wind), and some data for free disk space from a NAS.  Mix it up and it came out looking like this:

Snip20150527_15

To feed the data in, I took the easy way out and used a perl client documented here.  I then just modified my existing scripts to feed the data to this perl script every 30 seconds and bam, I’m done.
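If perl isn’t your thing, the write API on this era of InfluxDB (the 0.8 line the tutum image ships) can be hit directly; a hedged sketch matching the database and key/value format used above:

```shell
# Write one point to the "temperature" series via the 0.8-style HTTP API
curl -s -X POST \
  "http://your.ip.address:8086/db/temperature/series?u=root&p=root" \
  -d '[{"name": "temperature", "columns": ["item", "value"], "points": [["80027_temp", 72.5]]}]'
```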



New backup option for Synology devices

synology1512

I have two Synology NAS devices in my home lab that I’ve always struggled to keep fully backed up, as they have grown pretty large over time.  I have the 5-bay DS1512+ (with an additional 5-bay expansion), and I have a tiny 2-bay DS214SE.  I didn’t plan my use of them this way; it just kind of evolved over time, such is how my lab is.  Something breaks or is slow, I tweak to squeeze out better performance on a small budget, and life goes on.

Currently I use the large array for normal file storage (music, photos, videos, ISOs, etc.), and I used the extra expansion as transition storage when I moved stuff around (mainly VMs).  I originally used the tiny 2-bay NAS for my tiny portable lab based on NUCs.  I now have the management components living on an iSCSI LUN on there (VC, PSC, vCO, DNS, AD…), and for all compute I am now using VSAN across three whitebox machines (which is working fantastically!).

I’ve always struggled with backups in my lab.  The free options out there either won’t cover two VCs, or are CPU-core or VM-count limited.  I’ve been using William Lam’s ghettoVCB forever, which is solid but mostly manual.

Enter… what I found this past weekend: Synology management software images that will work in a VM or on bare metal!  This is literally the same OS that runs on their devices.  I first tried it in a VM to test it out.  All seemed well except for updating, which appears to break, so you have to wait for unofficial patches.  To use this for real, I went ahead and swapped out the USB drive on my HP N40L, which was previously running FreeNAS for backups.

This allowed me to set up a recurring rsync from the DS1512:

Snip20150527_4

…And also allowed me to set up a recurring backup of the iSCSI LUN (holding the management VMs):

Snip20150527_5

While the ~250 GB iSCSI backup was pretty quick, the rsync of 6 TB of small and large files is taking a while.  Performance seems pretty decent, at least for my home lab, which can be kind of… iffy given the amount of crazy crap I run on a large flat network of consumer-level 1GbE switches with no tuning whatsoever.

Snip20150527_6

Prior to this I was doing all my backups manually, both the rsyncs and ghettoVCB backups.  A few times a year I would then move a backup set outside of my house to a family member.  I highly suggest you do the same!  I do my best to follow the 3-2-1 rule, though I’m not doing great on the multiple types of media, as my photo collection has grown too large for “cloud” storage to be useful or economical.

Check it out for yourselves!

Install information (credit to the source!): http://www.bussink.ch/?p=1672

More information http://www.xpenology.nl/

Downloads http://xpenology.me/downloads/


Experiment: Pooling in vRA & Code Stream

Background

I recently attended DevOpsDays Rockies, which is a community-oriented DevOps conference (check them out in your area, it was great!).  I saw a talk by @aspen (from Twitter/Gnip) entitled “Bare Metal Deployments with Chef”.  He described something he/they built that, if I recall correctly, uses PXE/Chef/magic pixie dust to pull from a pool of standby bare metal hardware and fully automate bringing it into a production cluster for Cassandra (or what have you).

This got me thinking about something I was struggling with lately.  Whenever I develop blueprints in Application Director / Application Services, or just vRA/Code Stream, the bulk of the time I just hit the go button and wait.  Look at the error message, tweak, and repeat.  The bottleneck by far is waiting for the VM to provision.  Partly this is due to the architecture of the products, but it also has to do with the slow nested development environments I have to use.  We can do better…!

Products using pooling

I then started thinking about what VDM / Horizon View has always done with this concept.  If I recall correctly (it’s been years and years since I’ve worked with it), to speed up delivery of a desktop to a user, a pool concept exists so that one will always be available on demand.  I don’t have much visibility into it, but I am also told the VMware Hands-on Labs does the same: keeps a certain number of labs ready to be used so the user does not have to wait for one to spin up.  Interesting.

The idea

So I thought: how could I bring this up-front deployment time to the products I’m working with today to dramatically speed up development time?  And this is what I built: a pooling concept for vRA & Code Stream managed by vRO workflows.

Details – How Redis Works

When planning this out I realized I needed a way to store a small bit of persistent data.  I wanted to use something new (to me), so I looked at a few NoSQL solutions since I’ve wanted to learn one.  I decided on Redis as a key/value store, and found Webdis, which provides a light REST API into Redis.

I couldn’t find any existing vCO plugins for Redis I/O, which is fine; the calls are super simple:

Example of assigning a value of a string variable:

Snip20150517_5

The redis command is: set stringName stringValue
So the webdis URL to “put” at is: http://fqdn/SET/stringName/stringValue

Then to read the variable back:

Snip20150517_6

The redis command is: get stringName
So the webdis URL to “get” at is: http://fqdn/GET/stringName

Easy peasy. There is similar functionality for lists, with commands to push and pop values at either end of the list.  This is all I needed: a few simple variables (for things like the pool size) and a list (for things like the list of VMs, storing IP addresses and names).
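As a concrete illustration (fqdn is a placeholder for the Webdis host, and the VM record format is just the name-plus-IP string described above), the list operations look like:

```shell
# Push a VM record onto the end of the list (%20 encodes the space)
curl -s "http://fqdn/RPUSH/vmlist/vm-name-01%2010.0.0.51"

# Read the whole list back
curl -s "http://fqdn/LRANGE/vmlist/0/-1"

# Pop one off the front when a pipeline claims a VM from the pool
curl -s "http://fqdn/LPOP/vmlist"
```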

So in vCO I just created a bunch of REST operations that use a varying number of parameters in the URL line:

Snip20150517_7
I found the most efficient way to run these operations was to parameterize the operation name and pass it to a single workflow that does the I/O.

Details – Workflow(s)

The bulk of the work for this pooling concept is done in the following workflow that runs every 15 minutes.

Snip20150517_8

In general it works like this:

  • Check if the workloads are locked – since it can take time to deploy the VMs, only one deployment will be going at a time.
    • If locked, end.
    • If not locked, continue.
  • Lock the deploys.
  • Get the pool max target (I generally set this to 10 or 20 for testing).
  • Get the current pool size (the length of the list in Redis.  much faster than asking vSphere/vRA).
  • If the current size is not at the target, deploy until it is reached.
  • Unlock the deploys.
  • Profit.
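Stripped of the vRO plumbing, the scheduled workflow’s logic boils down to something like this (a plain JavaScript sketch; the store and deploy arguments are stand-ins for the Webdis REST operations and the vRA catalog request workflow, not real vRO APIs):

```javascript
// Sketch of the pool-maintenance logic run on a schedule.
// "store" wraps the Redis/Webdis calls; "deploy" requests a vRA catalog item.
function maintainPool(store, deploy) {
  // Only one replenish run at a time, since deployments are slow
  if (store.get("poolLocked") === "true") return 0;
  store.set("poolLocked", "true");

  var target = parseInt(store.get("poolTarget"), 10); // e.g. 10 or 20
  var deployed = 0;

  // Length of the Redis list is the current pool size --
  // much faster than asking vSphere/vRA
  while (store.llen("vmlist") < target) {
    var vm = deploy(); // returns the new machine's name and IP
    store.rpush("vmlist", vm.name + " " + vm.ip);
    deployed++;
  }

  store.set("poolLocked", "false");
  return deployed;
}
```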

I did not have to do it this way, but the nested workflow that does the actual VM deployments requests vRA catalog items.

In Action

After I got it fully working and the pool populated, I could check the list values with this type of Redis query:

Snip20150517_9

Redis: lrange vmlist 0 -1 (-1 means all)
Webdis: http://fqdn/LRANGE/vmlist/0/-1

The matching machines in vSphere:

Snip20150517_11

In Action – Code Stream

Normally in a simple Code Stream pipeline you would deploy a VM by requesting the specific blueprint via vRA like this:

Snip20150517_19

In this solution I instead use a custom action to grab a VM from the pool and return its IP back to the pipeline as a variable.  Then I treat the VM like an existing machine, continue on, and at the end delete the machine.

Snip20150517_18

This reduces the list in Redis by one, so the next time the scheduled workflow runs and checks the list size, it will deploy a new one.

(Kind of) Continuous Deployment

I have a job in Jenkins that builds the sample application I am using from source in Git, pushes the compiled code to Artifactory, and runs a post-build action that calls Code Stream to deploy.

Snip20150517_15

I wanted to see if there were any bugs in my code, so I wanted this whole thing to run end to end over and over and over…  I configured the Jenkins job to build every 30 minutes.  I went on vacation the week after I built this solution, so I wanted to see if anything broke down over time.  Amazingly enough it kept on trucking while I was gone, and even got up to the mid-700s in Jenkins builds.  Neat!

Snip20150517_12

Jenkins builds

Artifacts

Code Stream executions

Summary

To my surprise, this actually works pretty darn well.  I figured my implementation would be so-so but the idea would get across.  It turns out what I’ve built here is darn handy, and I’ll probably be using it the next time I am in a development cycle.

Post any questions here and I’ll try to answer them.  I’m not planning to post my workflows publicly just yet, FYI.


Introducing VMware Project Photon (#vmwcna)

VMW-LOGO-PHOTON

Unless you have been hiding under an IT rock, you no doubt have heard about the new crop of tiny Linux OS releases as of late that are positioned as a “Container Host Runtime” or “Linux Container OS” (here, here, here).  They are stripped down to the bare essentials and geared towards running containers efficiently at scale: CoreOS, Atomic, Snappy, and so on.  Today VMware’s Cloud Native team is introducing Project Photon as their flavor of this ecosystem.  (Link to GitHub page)

Entirely open source. (Free as in beer.)  Built-in VMware Tools.  Optimized for the VMware hypervisors. There are lots of benefits to VMware building their own from the kernel up rather than forking an existing OS, which will become more clear over time, but I will leave it to the official messaging for now.

What is Project Photon?

Project Photon is a tech preview of an open source, Linux container host runtime optimized for vSphere. Photon is extensible, lightweight, and supports the most common container formats including Docker, Rocket (rkt) and Garden.

Project Photon includes a small footprint, yum-compatible, package-based lifecycle management system, and will support an rpm-ostree image-based system versioning.

When used with development tools and environments such as VMware Fusion, VMware Workstation, HashiCorp (Vagrant and Atlas) and production runtime environment (vSphere, vCloud Air), Photon allows seamless migration of container based Apps from development to production.

From “Getting Started” documentation

It may not make sense to some why VMware is releasing a Linux OS.  This will become more clear over time.  But for today, just think about the power of VMware owning the hypervisor underneath AND the VM operating system as a platform for running containers.  You get all the benefits of the vSphere world (HA, DRS, FT, NSX, VSAN, vMotion…) and all the benefits of containers!  Plus… remember VMFork that Duncan has blogged about?  Hmmmm…

 

Installation

Snip20150418_75

 

Snip20150418_74

 

Seriously… using the minimal install, a 12-second install time in Fusion on my MacBook Pro.  A 303 MB footprint.  That. is. awesome.  The following are the sizes and average install times I’ve noticed.  Booting is literally just a few moments.

The install comes in three flavors from the same .ISO (or you can custom-pick packages):

Full: 1.7 GB.  40 to 60 seconds to install
Minimum: 303 MB.  10 to 20 seconds to install
Micro: 259 MB.  8 to 12 seconds to install

 

Photon OS (Micro): Photon Micro is a completely stripped down version of Photon that can serve as an application container, but doesn’t have sufficient packages for hosting containers. This version is only suited for running an application as a container. Due to the extremely limited set of packages installed, this might be considered the most secure version.

Photon Container OS (Minimum): Photon Minimum is a very lightweight version of the container host runtime that is best suited for container management and hosting. There is sufficient packaging and functionality to allow most common operations around modifying existing containers, as well as being a highly performant and full-featured runtime.

Photon Full OS (All): Photon Full includes several additional packages to enhance the authoring and packaging of containerized applications and/or system customization. For simply running containers, Photon Full will be overkill. Use Photon Full for developing and packaging the application that will be run as a container, as well as authoring the container, itself. For testing and validation purposes, Photon Full will include all components necessary to run containers.

Photon Custom OS: Photon Custom provides complete flexibility and control for how you want to create a specific container runtime environment. Use Photon Custom to create a specific environment that might add incremental & required functionality between the Micro and Minimum footprints or if there is specific framework that you would like installed.

From “Getting Started” documentation

Using Photon / SystemD

I’ll be the first to admit I have not adopted CentOS 7 yet, as all my labs are still using CentOS 6, so I was not familiar with the new systemd commands.  There is some good info on them here and here.

TLDR; for services, Project Photon uses systemd:
You are no longer running chkconfig or /etc/init.d/ scripts.  Instead you use systemctl enable postfix and systemctl start postfix.

Also networking is different, you edit files in /etc/systemd/network instead of sysconfig.  I’ll show more info on that below.

One more helpful thing to know: there are no logs in your familiar home of /var/log/; they are managed centrally by journald and read with journalctl. DigitalOcean has a great overview of its usage here.  I won’t rehash all of the functionality they wrote about, but I’ll give a quick example.

TLDR; for logging, Project Photon uses journalctl:
You no longer use /var/log/postfix.log.  Instead you use journalctl -u postfix (add -f to continuously tail).


How to Get Started

VMware has posted a bunch of great getting started guides here that walk through deploying on Fusion, vSphere, GCE, AWS, vCloud Air, etc.  In addition to those guides, here are some configuration tips to help those who are not familiar get up and running right away.

Here is what I’ve been doing when I deploy a new machine.  I’ve found each of these has exact syntax and capitalization requirements; otherwise the IP does not get configured.

  • Allow root SSH access in /etc/ssh/sshd_config
  • Set the correct hostname in /etc/sysconfig/network
  • Configure a static IP by:

mv /etc/systemd/network/10-dhcp-eth0.network /etc/systemd/network/static.network

Edit the contents to be:
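The contents end up looking something like this (standard systemd-networkd syntax; the addresses are examples from my lab, adjust to yours):

```
[Match]
Name=eth0

[Network]
Address=192.168.1.50/24
Gateway=192.168.1.1
DNS=192.168.1.1
```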

  • Update the hosts file to be sure you have short and fqdn set on 127.0.0.1

  • Then run the commands to configure the hostname

  • I like using keys for SSH to login quicker in my lab

Good to go!

 


Using the released version of Docker-Machine (v0.1) with VMware vSphere

I began uplifting some of my content today, which included upgrading to the newest Docker (v1.5) and docker-machine (v0.1), and came across a number of changes.

  • The command is now officially “docker-machine” instead of just “machine” which is what it was when I first played with it.
  • All the VMware driver flags are now prefixed with “vmware”, so instead of “--vsphere-vcenter” it is now “--vmwarevsphere-vcenter”.  A full example is:

    And they have an easier way to set the environment variables now:
  • I couldn’t get “--vmwarevsphere-boot2docker-url” to work with a custom URL, which is probably a bug.  If you leave it out entirely it will use a default location.
  • …Which is a good thing, because boot2docker now includes VMware Tools, which negates the need for a custom .ISO.
  • The only other change I need to make to the boot2docker image is the use of an insecure registry, so I just include in my syntax the running of a shell script which runs: docker-machine ssh $1 sudo sed -i -e 's/--tlsverify/--tlsverify --insecure-registry docker-hub:5000/g' /var/lib/boot2docker/profile.  You can find the full shell script on GitHub here.  “docker-hub” is my registry hostname on port 5000.
  • I noticed the name of the VM now matches what docker-machine calls it instead of a random string.
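Putting the points above together, a full create command plus the environment setup looks roughly like this (a sketch; the vCenter hostname, credentials, datastore, and machine name are all placeholders):

```shell
# Create a Docker host on vSphere with the renamed driver flags
docker-machine create -d vmwarevsphere \
  --vmwarevsphere-vcenter vcenter.lab.local \
  --vmwarevsphere-username "administrator@vsphere.local" \
  --vmwarevsphere-password 'secret' \
  --vmwarevsphere-datacenter dc01 \
  --vmwarevsphere-datastore datastore1 \
  dockerhost01

# The easier way to set the environment variables for your client:
eval "$(docker-machine env dockerhost01)"
docker ps
```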

That’s about it so far.  I have not used it too extensively yet, but so far so good.  I did not see a single hang of the docker commands like I saw previously with the older versions.  Thumbs up so far.
