Tag Archives: containers

Introducing VMware Project Photon (#vmwcna)

Unless you have been hiding under an IT rock, you have no doubt heard about the recent crop of tiny Linux OS releases positioned as a "Container Host Runtime" or "Linux Container OS" (here, here, here). They are stripped down to the bare essentials and geared towards running containers efficiently at scale: CoreOS, Atomic, Snappy, and so on. Today VMware's Cloud Native team is introducing Project Photon as their flavor of this ecosystem. (Link to GitHub page)

Entirely open source. (Free as in beer.) Built-in VMware tools. Optimized for the VMware hypervisors. There are lots of benefits to VMware building their own OS from the kernel up rather than forking an existing one; those will become clearer over time, but I will leave it to the official messaging for now.

What is Project Photon?

Project Photon is a tech preview of an open source, Linux container host runtime optimized for vSphere. Photon is extensible, lightweight, and supports the most common container formats including Docker, Rocket (rkt) and Garden.

Project Photon includes a small footprint, yum-compatible, package-based lifecycle management system, and will support an rpm-ostree image-based system versioning.

When used with development tools and environments such as VMware Fusion, VMware Workstation, HashiCorp (Vagrant and Atlas) and production runtime environment (vSphere, vCloud Air), Photon allows seamless migration of container based Apps from development to production.

From “Getting Started” documentation

It may not make sense to some why VMware is releasing a Linux OS. This will become clearer over time. But for today, just think about the power of VMware owning the hypervisor underneath AND the VM operating system as a platform for running containers. You get all the benefits of the vSphere world (HA, DRS, FT, NSX, vSAN, vMotion…) and all the benefits of containers! Plus… remember VMfork that Duncan has blogged about? hmmmmm…

 

Installation


 


 

Seriously… using the minimal install, I saw a 12-second install time in Fusion on my MacBook Pro and a 303 MB footprint. That. is. awesome. The following are the sizes and average install times I've noticed. Booting takes literally just a few moments.

The install comes in three flavors from the same .ISO (or you can custom-pick packages):

Full: 1.7 GB. 40 to 60 seconds to install
Minimum: 303 MB. 10 to 20 seconds to install
Micro: 259 MB. 8 to 12 seconds to install

 

Photon OS (Micro): Photon Micro is a completely stripped down version of Photon that can serve as an application container, but doesn’t have sufficient packages for hosting containers. This version is only suited for running an application as a container. Due to the extremely limited set of packages installed, this might be considered the most secure version.

Photon Container OS (Minimum): Photon Minimum is a very lightweight version of the container host runtime that is best suited for container management and hosting. There is sufficient packaging and functionality to allow most common operations around modifying existing containers, as well as being a highly performant and full-featured runtime.

Photon Full OS (All): Photon Full includes several additional packages to enhance the authoring and packaging of containerized applications and/or system customization. For simply running containers, Photon Full will be overkill. Use Photon Full for developing and packaging the application that will be run as a container, as well as authoring the container, itself. For testing and validation purposes, Photon Full will include all components necessary to run containers.

Photon Custom OS: Photon Custom provides complete flexibility and control for how you want to create a specific container runtime environment. Use Photon Custom to create a specific environment that might add incremental & required functionality between the Micro and Minimum footprints or if there is specific framework that you would like installed.

From “Getting Started” documentation

Using Photon / SystemD

I'll be the first to admit I have not adopted CentOS 7 yet, as all my labs are still using CentOS 6, so I was not yet familiar with the new systemd commands. There is some good info on them here and here.

TLDR; for services, Project Photon uses systemd:
You no longer run chkconfig or /etc/init.d/ scripts. Instead you use systemctl enable <service> and systemctl start <service>, e.g. systemctl start postfix.
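For example, managing the postfix service looks like this:

systemctl enable postfix
systemctl start postfix
systemctl status postfix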

Networking is also different: you edit files in /etc/systemd/network instead of /etc/sysconfig. I'll show more info on that below.

One more helpful thing to know: there are no logs in your familiar home of /var/log/; they are managed centrally and queried with journalctl. Digital Ocean has a great overview of its usage here. I won't rehash all of the functionality they wrote about, but I'll give a quick example.

TLDR; for logging, Project Photon uses journalctl:
You no longer tail /var/log/postfix.log. Instead you use journalctl -f -u postfix (to continuously follow it).
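A few more journalctl invocations cover most day-to-day needs:

journalctl -u postfix            # everything logged by the postfix unit
journalctl -f -u postfix         # follow it, like tail -f
journalctl -b                    # everything since the last boot
journalctl --since "1 hour ago"  # recent entries across all units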


How to Get Started

VMware has posted a bunch of great getting started guides here that walk through deploying on Fusion, vSphere, GCE, AWS, vCloud Air, etc. In addition to those guides, here are some configuration tips to help those who are not yet familiar get up and running right away.

Here is what I've been doing when I deploy a new machine. I've found each of these steps has exact syntax and capitalization requirements; otherwise the IP does not get configured.

  • Allow root SSH access in /etc/ssh/sshd_config
  • Set the correct hostname in /etc/sysconfig/network
  • Configure a static IP by:

mv /etc/systemd/network/10-dhcp-eth0.network /etc/systemd/network/static.network

Edit the contents to be:
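A minimal static configuration looks something like this; the interface name, address, gateway, and DNS values are placeholders for your lab:

[Match]
Name=eth0

[Network]
Address=192.168.1.50/24
Gateway=192.168.1.1
DNS=192.168.1.1

Then bounce networking to pick it up: systemctl restart systemd-networkd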

  • Update the hosts file to be sure you have the short and FQDN names set on 127.0.0.1, along these lines:
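With photon01.lab.local as a placeholder hostname, that's:

127.0.0.1   photon01.lab.local photon01 localhost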

  • Then run the commands to configure the hostname (see below)
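I'm assuming systemd's hostnamectl here (photon01 is a placeholder); plain hostname sets it for the running session if hostnamectl isn't available:

hostnamectl set-hostname photon01
hostname photon01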

  • I like using keys for SSH to log in quicker in my lab; copying a key over looks like this:
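Generate a key pair if you don't already have one, then push it to the VM (the hostname is a placeholder):

ssh-keygen -t rsa
ssh-copy-id root@photon01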

Good to go!

 


Using the released version of Docker-Machine (v0.1) with VMware vSphere

I began updating some of my content today, which included upgrading to the newest Docker (v1.5) and docker-machine (v0.1), and I came across a number of changes.

  • The command is now officially “docker-machine” instead of just “machine” which is what it was when I first played with it.
  • All the VMware driver flags are now prefixed with "vmware", so instead of "--vsphere-vcenter" it is now "--vmwarevsphere-vcenter". A full example is shown below this list.

    And they have an easier way to set the environment variables now (also shown in the example below the list).
  • I couldn't get "--vmwarevsphere-boot2docker-url" to work with a custom URL, which is probably a bug. If you leave it out entirely it will use a default location.
  • ..Which is a good thing, because boot2docker now includes VMware Tools, which negates the need for a custom .ISO.
  • The only other change I need to make to the boot2docker image is the use of an insecure registry, so my syntax includes running a shell script that does: docker-machine ssh $1 sudo sed -i -e 's/--tlsverify/--tlsverify --insecure-registry docker-hub:5000/g' /var/lib/boot2docker/profile. You can find the full shell script on GitHub here. "docker-hub" is my registry hostname on port 5000.
  • I noticed the name of the VM now matches what docker-machine calls it instead of a random string.
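For reference, here is roughly what the new syntax looks like. All of the vCenter values and the machine name are placeholders from my lab, and your environment may need a different set of flags, so check the driver's help output:

docker-machine create -d vmwarevsphere \
  --vmwarevsphere-vcenter vcenter.lab.local \
  --vmwarevsphere-username 'administrator@vsphere.local' \
  --vmwarevsphere-password 'secret' \
  --vmwarevsphere-datacenter Datacenter \
  --vmwarevsphere-datastore datastore1 \
  --vmwarevsphere-network 'VM Network' \
  docker-host-01

eval "$(docker-machine env docker-host-01)"   # the easier way to set the environment variables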

That’s about it so far.  I have not used it too extensively yet but so far so good.  I did not see a single hang of the docker commands like I saw previously with the older versions.  Thumbs up so far.


How to modify a boot2docker .ISO for Docker Machine

I have been doing quite a bit of work with VMware vSphere + Docker + Machine lately. I helped build a Hands On Lab for the recent VMware Partner Exchange conference with it. I can't promise, but it looks like it might be available publicly; if you have access, it's named HOL-SDC-1430. It has been a difficult process, as we're in such alpha territory. Things sometimes work, sometimes they don't, and then they change a rev later…

An example of this is the three specific items needed in the boot2docker image that is used as the Docker host VM: we need VMware Tools (or open-vm-tools), some networking updates, and in my case a change to the Docker settings to allow pulling from an insecure local registry.

VMware's Cloud Native Apps R&D team has forked the main boot2docker repo and done the tools work and networking work (probably among other things), but I had to dive in and figure out how to edit it further to allow for a new Docker option. I really can't claim to be an authoritative source on the docker and boot2docker side of things here, but the googles failed me on a single location for all this information, so here you go!

1) First you have to clone from a specific branch of VMware's Cloud Native Apps git repo; ovt stands for open-vm-tools. See the diffs here.
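Roughly like this, where the repo URL is a placeholder for the fork linked above and ovt is the branch name:

git clone -b ovt https://github.com/<vmware-cna-fork>/boot2docker.git
cd boot2docker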

2) A Dockerfile is what is used to customize the ISO. How it works: the container is built from that file (plus a number of other dependencies in the subdirectories), and it is written to print out the ISO data when run. Pretty clever, whoever first came up with this method. So, for my hard-coding-not-best-practice-but-solves-my-needs change, I edit the Dockerfile to remove the dependency on the b2d version and just pull the latest Docker; 1.5 came out this week and I wanted to pick up those updates.


3) I also needed to use a local registry without certificates since I am building lab environments, so I added a new config variable, $DOCKER_REG, to make it easier to update later.

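Conceptually the change is just an environment variable near the top of the Dockerfile, something like this, where the hostname and port are placeholders for my lab registry:

ENV DOCKER_REG docker-hub:5000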

4) Now the rest is just following the b2d documentation.  Build the container with:
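Per the b2d docs that's roughly (the image tag is arbitrary):

docker build -t my-boot2docker .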

5) And write out the ISO with:
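Again per the b2d docs, the container prints the ISO to stdout, so redirect it to a file (the tag matches whatever you used above):

docker run --rm my-boot2docker > boot2docker.iso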

One thing that hung me up for a bit: machine doesn't do any checksum on the ISO you tell it to use. If machine sees the image already exists, it WILL NOT overwrite it on the target datastore, so remember to delete it and let it upload the new one. Very important.


Quicker switching of active docker machines


As it stands today, with the docker machine command you have to manually specify the DOCKER_HOST and DOCKER_AUTH environment variables.

So the process would be:
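Something along these lines every time you switch; mymachine is a placeholder, and DOCKER_AUTH=identity is what my alpha build wanted, so treat that value as an assumption:

export DOCKER_HOST=$(machine url mymachine)
export DOCKER_AUTH=identity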

This is a bit of a pain to do manually, so I was looking for a quicker way to switch back and forth, and I think this works pretty well, though it's not totally elegant.

I started with a shell script which contains the following. It takes the machine name as input, writes the export syntax to a script, and sources it. I found I had to do it this way; otherwise the variables were only changed for the script itself, not the current user session.
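A minimal sketch of the idea; the script and temp file names are arbitrary, and the DOCKER_AUTH value is the same assumption as above:

#!/bin/bash
# dm-switch.sh: point the docker client at the machine named in $1
echo "export DOCKER_HOST=$(machine url $1)" > /tmp/docker_env
echo "export DOCKER_AUTH=identity" >> /tmp/docker_env
. /tmp/docker_env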

So you would run it with:
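It has to be sourced rather than executed so the exports stick in your current shell:

. ./dm-switch.sh mymachine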

Let me know if you have a better way!


Tech Preview of Docker Machine Driver for Fusion

(Also, checkout the vSphere driver here.)

I kind of beat up on the vSphere driver quite a bit in the last post (sorry guys!), so I wanted to give a super easy example of what else you can do with machine. I just (finally) watched the DockerCon keynote where they introduced the machine functionality, and their messaging about getting from "zero to Docker" in just a few commands resonated. This example shows what the vision is.

So here we go – using docker machine with VMware Fusion as the endpoint on OSX.

You may need to click on the video and watch it in theatre mode to see the text.

 


Tech Preview of Docker Machine Driver for vSphere

UPDATE:  Machine is now out of beta, and I have a newer post on some of the changes here: http://www.jaas.co/2015/03/20/using-the-released-version-of-docker-machine-v0-1/

(Also, checkout the Fusion driver here.)


On Dec 4, 2014, Docker announced the "machine" command/functionality as one of the announcements at DockerCon EU.

In short, it provides a way to deploy to many different locations from the same docker CLI (yes, I know that is like saying "ATM machine"; just deal with it). In their initial alpha release they include drivers for VirtualBox and Digital Ocean (though as of Dec 18, their GitHub page already lists additional syntax for AWS & Azure, though I'm not sure if that is functional yet).

The next day on Dec 5 2014 VMware announced a preview of their extension to this Docker Machine command for deploying to VMware Workstation, vSphere and vCloud Air.

I have been using the vSphere part of it a bit this week and found the existing instructions a bit lacking so I wanted to provide some tips and examples to get up and running.

Things to know – but maybe come back to this list later….
First off, here are a few takeaways I learned the hard way, which included bugging a very smart dude for help. A few of these may not make sense until you dive into the functionality, so you may want to revisit this section later if you have trouble.

To be clear, I am not posting this list as a catalog of all the blemishes I found, but as a guide to help anyone else that is struggling to see the vision this functionality has the potential to bring. Remember: this is a preview release of the VMware driver, and an alpha release of the Docker code. Totally unsupported in every meaning of the phrase.


1) Understand that for this release you need to use the bundled docker binary, as it has some functionality that the newest release you'll get from package managers doesn't have. To get it to co-exist on a system that already has docker-io installed, make sure to either specify the full path to the one you want, or set the $PATH environment variable so it picks up the one you want first. I also copied my released docker binary to docker_local, so I could easily run that command if I wanted to switch to a local Docker instance.

2) This release of the machine command requires the use of environment variables to specify the active host. When you run machine ls it will list all the existing docker machines available. It also specifies the "active" one. I'm not in the loop on the dev details of this, but I assume it will be cleared up in the future. Even though it says active, you still have to set the environment variable. Either pull the URL from the machine url machinename command, or you can use this nested command:
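The nested form amounts to this (machinename is whatever machine ls reports):

export DOCKER_HOST=$(machine url machinename)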

Do note that this is required. One stumbling point I found was that I couldn't find a way to make this work when I don't have a real tty session, like when vCO makes an SSH call in a workflow. TBD there…

3) As a follow-up to the two items above, if you DO want or need to switch between using docker machine and a local docker binary, you need to clear out the environment variable with:
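That's simply:

unset DOCKER_HOST
unset DOCKER_AUTH   # if you set this one as well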

Yes, it's kind of annoying for now, but I'm sure this will be fixed soon enough. They are probably discussing this issue here on GitHub.

4) I'm not sure how many others use local docker registries out there, but I do quite a bit for lab environments. As with this other post I made about the change to forcing SSL communication, it took a moment to figure out how to force the configuration setting on each docker machine. The really smart guy I alluded to previously built me a boot2docker ISO with it embedded, so that's an option, or you could just manually apply it like this:
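Manually, it's a sed tweak to the boot2docker profile applied over machine ssh; docker-hub:5000 is my lab registry hostname and port, and you'll want to restart the docker service on that host (or just reboot it) afterwards so it picks up the change:

machine ssh machinename sudo sed -i -e 's/--tlsverify/--tlsverify --insecure-registry docker-hub:5000/g' /var/lib/boot2docker/profile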

 

5) I had quite a few problems with a space in the datacenter name and with special characters in passwords. There may be a workaround, but simply escaping them didn't work, so I just renamed the datacenter and used an account with all simple characters. Remember… alpha code…

6) Yes, it downloads the ISO every time you run the machine command.  I don’t know why. Go ask docker.  Because, alpha.

7) I even hesitate putting this one here… but in my personal lab it kept failing when transferring the ISO to the datastore, while it works fine in another lab environment I use. Probably my own issue with some ghetto routing I have… I worked around it by uploading the ISO by hand to the datastore. Even though, as I said in #6, it downloads it every time, if it already exists on the datastore it doesn't try to push it again.


Step by Step

This syntax is accurate as of Dec 18 with CentOS 6.6 64bit.

Grab the tarball:
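The download link is in VMware's announcement post; the URL and file name here are placeholders:

wget https://<download-url>/docker-machine-vsphere.tar.gz
tar xzvf docker-machine-vsphere.tar.gz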

GRUMBLE GRUMBLE… whoever compressed this didn't include a top-level directory, so it extracts to the present working directory…

Append the directory you extracted to your PATH environment variable. Lately I've been using ~/.bash_profile to set individual settings per user on each host; your results may vary.
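For example, in ~/.bash_profile (the path is wherever you extracted the tarball):

export PATH=$PATH:~/docker-machine-vsphere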

And now to fire it off for the first time:
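From my lab it looked roughly like this. Every value is a placeholder, and the exact flag names may differ in your build of the preview, so treat this as a sketch and check the driver's help output:

machine create -d vsphere \
  --vsphere-vcenter vcenter.lab.local \
  --vsphere-username 'administrator@vsphere.local' \
  --vsphere-password 'secret' \
  --vsphere-datacenter Datacenter \
  --vsphere-datastore datastore1 \
  --vsphere-network 'VM Network' \
  dockerhost01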

And with any luck you should see a VM pop up named docker-host-ABCDEFG (that last part is random). If you get errors, read over the 'PRO-Tips' at the end and the 'Things to know' list at the top.

Now to list the current machines, run:
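That's just:

machine ls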

Set the required environment variables with:
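Same variables as in the 'Things to know' section; the identity value is what my build wanted, so treat it as an assumption:

export DOCKER_HOST=$(machine url machinename)
export DOCKER_AUTH=identity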


 

 

Now! The magic is happening!  Run a normal docker command like:
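Any image will do; for example:

docker run -d -p 8080:80 nginx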

And see the magic happen for reals.   This image is being deployed on this new docker “host” which is actually a barebones vSphere VM.

PRO-Tips….

1) If you are doing a bunch of trial and error, you may see a message that the docker host already exists even though it may or may not have actually been deployed. This is because even if the command fails, it still gets added to the local machine list. Clean it up with machine rm -f machinename; the -f forces the removal if the actual VM doesn't exist.

2) If you get an error message similar to "FATA[0086] open /root/.docker/public-key.json: no such file or directory", just run the docker binary included in the bundle once and it will create this file for you.

3) I crafted a pretty sweet bash script to nuke all machines at once; add the -f flag to force it if you have to. It works as such:
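The script is just a loop over the machine list; a minimal sketch (the file name is arbitrary):

#!/bin/bash
# nuke-machines.sh: remove every machine that 'machine ls' reports (skips the header row)
for m in $(machine ls | awk 'NR>1 {print $1}'); do
  machine rm -f "$m"
done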

 

Conclusion

So what does this give us? In my mind, it gives us a simple interface that you may already be familiar with and using on your local machine, plus the ability to deploy to any number of other endpoints like public or private clouds. That's powerful. Especially with any automation you have already created: slipping this into the mix makes it even more robust.

This post was heavy on text and light on screenshots on purpose as it’s a complicated subject in this state of development.  I hope to put together a quick video to demonstrate this functionality soon.  Stay tuned.

 

 


Docker as a Service via vCAC: Part 1

This project started with a question: using VMware management software today, what would it be like to manage and provide self-service access to Docker containers right alongside traditional virtual machines and other services? That is what I explored and that is what I have built here so far. This is just going to be part one of… many, probably… as I develop the ideas further.
What does this solution aim to do?
This solution elevates a Docker-based container to somewhat of a "first class" citizen, in that it sits alongside virtual machines in the self-service vCAC catalog.

Really? You are able to do this today?
Well… mostly. More work needs to be done to make it more functional. But as of this part 1, provisioning is working great, and monitoring too (thanks to work from a co-worker that will be written about later). Anything further, like day-2 operations or just tearing down the containers, is manual currently. But still possible.

So walk me through it?
There's a single machine blueprint that deploys a CentOS machine and installs Docker. I needed a way to identify these machines as a special entity, so I went with vSphere tags for now. Using the vCAC extensibility functionality, I also have it fire off a vCO workflow that calls PowerShell (talked about here) to add the vSphere tag. Not elegant, but it works; this will be improved later. So now that a machine exists, there are additional catalog items for various services, like the demo application SpringTrader, or simply MySQL, Postgres, Ubuntu, etc., that run a vCO workflow to deploy the image onto one of the existing Docker nodes. Currently it picks a node randomly, but with some additional effort I plan to implement a (very) poor man's DRS and utilize either a Hyperic plugin that my team is working on, or maybe just query CPU/memory directly to choose the node.

OK tldr;  Boil it down!?
First, Docker nodes are deployed from vCAC blueprints. Then vCO workflows can identify those nodes via vSphere tags and deploy the requested Docker image.


Single machine blueprint in vCAC

 


“DockerVM” tag in vSphere denotes the docker capability machines

 


Service Blueprint for various docker images

 


You can specify items like the image name, the port to expose, the bash command, etc..

 


Slight smoke and mirrors in the screenshot – this is using a local docker registry so don’t be confused by that.


…and to prove she’s working

What’s next?
I (and others) are working on ways to tear down the containers, tear down the nodes en masse, automate discovery and monitoring of the containers, and so on. Currently there's not even a way to know which node the image was deployed to. To come!

Can we have the workflows, scripts, etc…?
Not yet…! But if it is fun for anyone, I do have the demo app SpringTrader available on Docker Hub if you want it. It weighs in at a hefty 4 GB. Find it at jasper9/st_allinone. There is no special sauce included in here; it's simply the SpringTrader app documented here, built into an image all ready to go (very poorly built, I'm sure…).

Sweet! How do I run SpringTrader?
This should probably be a full post on its own. But in short, this command will get it running. Change the first 8080 to any open port if it's already in use.

docker run -t -i -p 8080:8080 jasper9/st_allinone /bin/bash -c '/startup.sh && /bin/bash'

Then see the web interface at:  http://ip.address:8080/spring-nanotrader-web/#login

BY RUNNING THIS IMAGE YOU IMPLICITLY ACCEPT ALL EULAS THAT ANY INCLUDED COMPONENTS IMPOSE. THIS IS SIMPLY A DEMO APPLICATION AND NOTHING MORE.

 
