Category Archives: ItsTheSmallThings

Technology found our new best friend.

Last night I built a robot that brought us to our new best friend. Meet Cash.

Cash

Before I explain this strange statement, first let’s back up.

Two weeks ago we found our beloved Maddie was stricken with a tumor on her spleen that ruptured.  I won’t dive into the heartbreaking details, but you can read about it here, here and here.

maddie_camping

To summarize: heartbroken.  That damn wonderful dog lived a great life and will never be replaced.  But we've found we're a two-dog family.  Enter the idea of visiting shelters… which is always fun!

IMG_6736

After a few misses, we found just how competitive adopting dogs is in Boulder.  Yes, competitive.

Forget cycling, running, and climbing – the most competitive sport in Boulder is trying to adopt a dog from the pound.

Dogs fly out of the Boulder Humane Society.  One Jen was interested in was adopted within hours of becoming available.  Employees told us of one that went HOME within 45 minutes of stepping into the adoptables area.  Seriously.

The employees say to just keep an eye on the website.  So that’s what we did for a bit.  We noticed it was updated frequently throughout the day.  But there was no way to be notified of new dogs.  Enter my light bulb moment.

I saw there was no RSS feed, and (of course) no API.  So I took a glance at the HTML source and saw it would be super easy to screen scrape.  Muahhaha…… this will be easy peasy!    With just a little bit of hacking last night I had a working system that scraped their webpage every 15 minutes, stored it in a local database, and sent us an email when a dog became available!  Ha! Leg up, take that one, Boulder animal people.  Dog adoption performance enhancing drugs.
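For the curious, the core of it is just a fetch-parse-diff loop. Here's a minimal sketch; the listing URL and the HTML pattern are made up (the real shelter page will differ), and the real version stored results in a local database:

```python
# Minimal sketch of the scraper loop. The URL and HTML structure here are
# invented for illustration -- inspect the real page's source first.
import re
import urllib.request

LISTING_URL = "http://www.boulderhumane.org/adopt/dogs"  # hypothetical URL

def extract_dogs(html):
    """Pull (detail-url, name) pairs out of the listing HTML."""
    # Assumes each dog is listed as: <a class="pet" href="/pet/123">Cash</a>
    return set(re.findall(r'<a class="pet" href="([^"]+)">([^<]+)</a>', html))

def poll(seen):
    """One pass: fetch the page, return the current set plus anything new."""
    html = urllib.request.urlopen(LISTING_URL).read().decode("utf-8", "replace")
    current = extract_dogs(html)
    new_dogs = current - seen  # these are the ones to email about
    return current, new_dogs
```

Run it from cron every 15 minutes, persist `current` between runs, and fire off an email for each entry in `new_dogs`.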

In the morning I surmised that wasn't nearly geeky enough.  I added functionality to email us when a dog appeared to be adopted (wasn't listed anymore).  And since email is SO 2000s, I spun up a new Twitter account and had it tweet and direct message us when a dog showed up and went home.  I dub thee: Dog(S)talker.  Get it?  Dog Stalker.  Dogs Talker.  I kill me…
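The two-way diff is the whole trick: compare the last snapshot of the listing to the current one, and each direction of the difference is a different kind of alert. A sketch of that logic (function and message wording invented):

```python
def diff_listings(previous, current):
    """Compare two snapshots of the listing and build notification messages."""
    arrived = current - previous   # newly posted  -> "new dog" alert
    adopted = previous - current   # gone from the page -> "went home" alert
    messages = ["New dog available: %s" % name for name in sorted(arrived)]
    messages += ["Went home: %s" % name for name in sorted(adopted)]
    return messages
```

Each message then goes out as a tweet and a DM (or plain old email, if you must).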

Lo and behold… while I was out with the kid on his bike and Jen was working on an extension to the chicken coop, DING. A DM from the new robot:

Snip20160403_15

Due to an unfortunate typo in the code it's missing the details of the dog, but still… the fucking thing worked.  A quick click on the link showed she was a 1-year-old Australian Kelpie mix, about 45 pounds.  Check, check, and check: all the boxes!  I yelled across the street, "JEN!", to which I immediately heard the reply, "I'M GETTING READY TO GO [to the shelter]!"

15 minutes later I received this:

IMG_6740

So an absolute max of 30 minutes from the time she was posted to the website to one of us showing up to check her out.

Long story short, she's perfect for us.  I'll post the code to GitHub soon.  Perhaps if this is useful to anyone else I can add others to the notifications.

Snip20160403_16

 


Using a time series DB, starting with InfluxDB

devops-everywhere

At the last two conferences I attended (DevOpsDays Rockies and GlueCon) I heard a lot of mentions of NoSQL and time series databases.  I hate not knowing about things and not having hands-on experience, so I've been playing with both.  First I integrated Redis, a NoSQL store, into a recent project of mine.  And just now I've been playing with InfluxDB as a monitoring system; here's my experience.

I didn't want to get caught up in any installation shenanigans, so I tracked down Docker images to get up and running fast.  And I'm glad I did, because it worked immediately!

index

1: InfluxDB

InfluxDB docker image: https://registry.hub.docker.com/u/tutum/influxdb/

Docker command:
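Something along these lines; the port numbers are the InfluxDB defaults, and `PRE_CREATE_DB` is an option I believe the tutum image supports (verify against the image's README):

```shell
# 8083 = admin UI, 8086 = HTTP API; pre-creating the db is optional
docker run -d --name influxdb \
  -p 8083:8083 -p 8086:8086 \
  -e PRE_CREATE_DB=temperature \
  tutum/influxdb
```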

And then right away you can load http://your.ip.address:8083 in a web browser and you'll get the screen below:

Snip20150527_9

Once you log in with root/root, you'll see that you have no databases out of the box. Go ahead and enter a name and hit Create:

Snip20150527_10

You are now given a simple UI to push data into and pull data out of the system by hand.  To test this we'll add some values in the same format my temperature scripts use.  Basically, you can think of the time series as the table and the values as key/value pairs.

Snip20150527_11

Then you can craft a simple query to verify the data:

Snip20150527_12
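The same write can be scripted instead of typed into the admin UI. Against the 0.8-era HTTP API this image ships, the request body is a JSON list of series; here's a sketch, where the host is yours and root/root are the container's default credentials:

```python
# Sketch of writing one point over the InfluxDB 0.8 HTTP API.
import json
import urllib.request

def make_payload(series, key, value):
    """Build the 0.8-style JSON body: one series, columns plus one row."""
    return json.dumps([{
        "name": series,
        "columns": ["item", "value"],
        "points": [[key, value]],
    }])

def write_point(host, series, key, value):
    url = "http://%s:8086/db/temperature/series?u=root&p=root" % host
    req = urllib.request.Request(url, make_payload(series, key, value).encode(),
                                 {"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```

A call like `write_point("your.ip.address", "temperature", "80027_temp", 68.5)` is equivalent to the hand-entered values above.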

Neat!   Now, unlike some other solutions, InfluxDB doesn't provide any visualization functionality (other than a basic line graph).  I spun up a Grafana container to do this.

grafana

2: Grafana

Grafana docker image: https://registry.hub.docker.com/u/tutum/grafana/

Docker command:
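Roughly this; the environment variables are how the tutum image wires Grafana up to InfluxDB (names from memory, so verify against the image's README):

```shell
docker run -d --name grafana -p 80:80 \
  -e HTTP_USER=admin -e HTTP_PASS=admin \
  -e INFLUXDB_HOST=your.ip.address -e INFLUXDB_PORT=8086 \
  -e INFLUXDB_NAME=temperature \
  -e INFLUXDB_USER=root -e INFLUXDB_PASS=root \
  tutum/grafana
```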

There are simpler ways to start up this container, but I found all of these parameters got me quickly to where I wanted.

Now you can log in on port 80 of this machine and you'll be presented with an empty first page:

Snip20150527_13

Empty graphs aren’t very exciting, so let’s configure them real quick…

The syntax in Grafana is slightly different from what we saw directly in InfluxDB, but it's mostly straightforward.  Put the database name (temperature) into the "series" field.  Fill in the blanks of the select statement: use last(value) and item='80027_temp' to specify the key/value.  Click somewhere else to change focus and the graph should reload, showing the values we entered by hand.

Snip20150527_14

Now I wanted to play with it further, so I modified some existing monitoring scripts of mine that pull data from Weather Underground (temperature, humidity, wind) and free-disk-space data from a NAS.  Mix it all up and it came out looking like this:

Snip20150527_15

To feed the data in, I took the easy way out and used a Perl client documented here.  I then just modified my existing scripts to feed data through it every 30 seconds and bam, I'm done.



New backup option for Synology devices

synology1512

I have two Synology NAS devices in my home lab, and as they've grown pretty large over time I've always struggled to be sure I have full backups of them.  I have the 5-bay DS1512+ (with an additional 5-bay expansion), and I have a tiny 2-bay DS214SE.  I didn't plan my use of them this way; it just kind of evolved over time, such is how my lab is.  Something breaks or is slow, I tweak to squeeze out better performance on a small budget, and life goes on.

615976_0_f

Currently I use the large array for normal file storage (music, photos, videos, ISOs, etc.), and I use the extra expansion for transitional storage when moving stuff around (mainly VMs).  I originally used the tiny 2-bay NAS for my tiny portable lab based on NUCs.  The management components (VC, PSC, vCO, DNS, AD…) now live on an iSCSI LUN there, and for all compute I'm now using VSAN across three white-box machines (which is working fantastically!).

I've always struggled with backups in my lab.  The free options out there either won't cover two VCs, are CPU-core limited, or are VM-count limited.  I've been using William Lam's ghettoVCB forever, which is solid but mostly manual.

Enter… what I found this past weekend: Synology management software images that will run in a VM or on bare metal!  This is literally the same OS that runs on their devices.  I first tried it in a VM to test it out.  All seemed well except for updating, which appears to break, so you have to wait for unofficial patches.  To use this for real, I went ahead and swapped out the USB drive in my HP N40L, which was previously running FreeNAS for backups.

This allowed me to set up a recurring rsync from the DS1512:

Snip20150527_4

…And it also allowed me to set up a recurring backup of the iSCSI LUN (holding the management VMs):

Snip20150527_5
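For reference, the DSM network-backup job boils down to roughly this rsync invocation under the hood (the hostname and share paths here are invented):

```shell
# Mirror the big array's share onto the backup box; --delete keeps it a true mirror
rsync -av --delete \
  rsync://ds1512.local/NetBackup/photo/ \
  /volume1/backup/photo/
```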

While the ~250 GB iSCSI backup was pretty quick, the rsync of 6 TB of small and large files is taking a while.  Performance seems pretty decent, at least for my home lab, which can be kind of… iffy given the amount of crazy crap I run on a large flat network of consumer-level gigabit switches with no tuning whatsoever.

Snip20150527_6

Prior to this I was doing all my backups manually, both the rsyncs and the ghettoVCB backups.  A few times a year I would move a backup set outside of my house to a family member's.  I highly suggest you do the same!  I do my best to follow the 3-2-1 rule, though I'm not doing great on the multiple-types-of-media part, as my photo collection has grown too large for "cloud" storage to be useful or economical.

Check it out for yourselves!

Install information (credit the source!): http://www.bussink.ch/?p=1672

More information: http://www.xpenology.nl/

Downloads: http://xpenology.me/downloads/


Quicker switching of active docker machines

fbbb494a7eef5f9278c6967b6072ca3e_400x400 machines

As it stands today, with the docker machine command you have to manually specify environment variables for DOCKER_HOST and DOCKER_AUTH.

So the process would be:

This is a bit of a pain to do manually, so I was looking for a quicker way to switch back and forth, and I think this works pretty well, though it's not totally elegant.

I started with a shell script containing the following.  It takes the machine name as input, writes the export syntax out to a script, and sources it.  I found I had to do it this way; otherwise the variables would only change for the script itself, not for the current user session.
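A sketch of what such a script looks like; the filename is invented, `docker-machine url` supplies the endpoint, and the DOCKER_AUTH value is an assumption:

```shell
#!/bin/bash
# dm-switch.sh (name invented) -- point the current shell at a docker machine.
# The exports are written to a file and sourced, because a child process
# cannot modify its parent shell's environment on its own.
MACHINE="$1"
ENVFILE="/tmp/docker-machine-env.sh"
{
  echo "export DOCKER_HOST=$(docker-machine url "$MACHINE")"
  echo "export DOCKER_AUTH=identity"   # assumption: identity-based auth
} > "$ENVFILE"
source "$ENVFILE"
```

Because of the sourcing requirement, you'd invoke it as `source dm-switch.sh <machine-name>` (or wrap it in a shell function) rather than executing it directly.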

So you would run it with:

Let me know if you have a better way!


New vCAC & Application Services 6.1 template prep script (linux)

UPDATE: Dec 9 2014 – vCAC has been renamed vRealize Automation (vRA).  vRA 6.2 drops today and the pre-req script is posted here.

UPDATE: Dec 16 2014 – Doh!!  I was multitasking too much when I posted that last update.  The pre-req script wasn't the point of this original post, but it's still useful nonetheless.  To recap: the pre-req script eases setting up a vRA IaaS machine; the template prep script eases setting up a Linux template to be used _WITH_ vRA.

A great tool that flew under my radar in the most recent 6.1 release for AppD… er… Application Services and vCAC proper is a script that does all the steps to prepare a Linux template for you, for both agents.  If you are at all familiar with this process, you'll find it to be a huuuuuuuge time saver.

Original Post:
If you look at the documentation, the process is quite cumbersome and full of potential human-error points.  This script checks all dependencies, installs them where it can, and either prompts for the appropriate server names or accepts inline input.


Getting Started
First, pull the script off the AppD server and make it executable.

Snip20141106_22
How to use it – Interactive

If you want to just dive in, run it in interactive mode:

Snip20141106_24
How to use it – Unattended

I update templates quite often in the lab environments I work in, so I like keeping a quick reference in a note I can cut and paste from.  Now that this script accepts inline input, I could earn an extra sysadmin merit badge by dropping it into a shell script in a common place across all templates and just running that.  Easy peasy.

So here’s the help page:

Snip20141106_23

Here's what I would run.  It tells the script the three server names, not to install Java, not to check SSL certs, to use a timeout of 300 seconds, and not to prompt for confirmation:

The last line is a handy step that prevents CentOS templates from incrementing the NIC number when cloning.  There may be a better way, but it works.
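The usual form of that cleanup on CentOS 6 is the standard udev trick below; the exact line in the screenshot may differ:

```shell
# Forget the cached MAC-to-ethN mapping so a clone's first NIC comes up as eth0
rm -f /etc/udev/rules.d/70-persistent-net.rules
```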

Snip20141106_25

… it does its thing… and finishes with:

Snip20141106_26

 

Now you’re ready to shut it down, take a snapshot, start data collection, and update your blueprint!


Mass update WordPress content in MySQL

Here’s a quickie that may help someone someday.  If so, yay, I helped!

When I migrated my blog most recently, I must have screwed something up along the way: some old posts had all their image URLs pointing at a now-bad IP address instead of the FQDN.  I crafted a SQL statement that updated a ton of posts all at once.  Gotta love efficiency.

This does a search and replace on every post (the post_content field within the posts table) and replaces the first snippet with the second.  Neato.
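The statement takes the classic REPLACE() form.  The IP and domain below are placeholders, and `wp_posts` assumes the default WordPress table prefix:

```sql
-- Rewrite the bad IP to the proper FQDN in every post body
UPDATE wp_posts
SET post_content = REPLACE(post_content, 'http://192.168.1.50', 'http://blog.example.com');
```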



Small Things: vCAC 6.1 – "Data Collection" catalog entry

Anyone who makes template configuration changes in vCAC can attest to what a pain it is: reconfigure the agents, shut down the machine, snapshot, browse the menu structure to where you force data collection, click Collect Data for all the items, browse to the blueprint config, and wait for it to complete.  Well, hopefully this tip can speed that up a little, or at least make it less of a headache for you.

vCAC 6.1 comes with a ton of vCO workflows out of the box.  One that caught my eye is “Force data collection”.

Snip20140919_22

 

Adding this workflow as a catalog item is a breeze under Advanced Services – Service Blueprints.  When complete it will show up like any other service or template:

Snip20140919_25

 

And does its job quite well:

Snip20140919_30

 

One warning: you will want to set a constant value for the one question this workflow prompts for.  Edit the blueprint as such:

Snip20140919_26

 

And choose your IaaS (windows) server:

Snip20140919_27

 

I quickly installed a fresh new instance of vCAC & IaaS today, and I'm not sure if it was an error during install or not, but at first mine didn't show any hosts in the above screenshot.  I had to go into vCO with the client and run this workflow to add it.  Your results may vary.

Snip20140919_31

 

EDIT Sept 22 2014:   I wasn’t clear about where to find this workflow.  It’s found within these folders:

Orchestrator > Library > vCloud Automation Center > Infrastructure Administration > Extensibility > Force Data Collection


Small Things: NUC Sack

I transitioned some of my home lab onto tiny Intel NUC machines this year and just love the form factor and how portable they are. In fact, I flew with them, that tiny NAS in the picture, and a router to an event to demo some projects.  If only they went to 32 GB of RAM or higher, they would be perfect.

My buddy @jrrickard and his co-worker 3D-printed me a rack for these little guys that we're lovingly calling the "NUC Sack".   Love love love it!!! Thanks guys!

If you want to print one yourself, they posted the files on GitHub here.   I'm told that if you want to use this with the taller NUCs that take a spinning disk inside, you should add 15 mm to each of the sections.

NUC-SAC


Small Things: vSphere 5.5 U2 – C# Client, Editing HWv10 VMs

Maybe I'm just becoming a get-off-my-lawn ex-operations curmudgeon in my "old" age, but I find the vSphere Client hard to part with (this message brought to you by Me, and only Me, and no one but Me).  I found it very annoying that if you upgraded any VMs to hardware version 10, you could no longer edit their settings in the old client, even for something common like mounting an ISO.

vSphere 5.5 U2 has brought us this, yay!  It allows editing of any features present in the old client, which is good enough for basic stuff.

Snip20140917_15


EDIT: Whoops.. fixed the screenshot
