
Things learned managing production WordPress: How to easily enable HTTPS

This is the start of a series chronicling everything I’ve learned along the way keeping my wife’s photography website (https://www.jendzphotography.com) running smoothly.  I’ll cover topics including performance improvements, dealing with spam & robots, content delivery networks (CDN), and using website tools to track progress.  I intend to keep the technical mumbo jumbo to a minimum and make the reading level less technical than my typical blog posts for easier consumption.


In 2014 Google announced that it would start boosting page rankings for HTTPS-enabled sites.  While SEO is of course important, it’s also just good practice to use SSL.

The current Wikipedia entry for HTTPS includes:

HTTP Secure (HTTPS) is an adaptation of the Hypertext Transfer Protocol (HTTP) for secure communication over a computer network, and is widely used on the Internet. In HTTPS, the communication protocol is encrypted by Transport Layer Security (TLS), or formerly, its predecessor, Secure Sockets Layer (SSL). The protocol is therefore also often referred to as HTTP over TLS, or HTTP over SSL.

Put simply, when you enable HTTPS you install a private key and a public certificate in the web server configuration, and they are used to encrypt traffic between the web server and your web browser.  OK… put even more simply…  It makes your web traffic hard(er) for bad people to read.

Certificates are issued by a trusted Certificate Authority (CA).  The whole system is based on trust.  Your web browser contains a list of the CAs in the world.  When you load an HTTPS-enabled site, the certificate is compared against the list, and if everything checks out, the browser shows green and you are safe.  If one of these CAs gets severely hacked, browsers will remove it from the list and anything it issued will no longer be trusted.  Usually you have to pay a CA such as GlobalSign, Verisign, or GeoTrust to issue a certificate for you.  In 2016, however, a service called Let’s Encrypt launched that now offers them for free.  Yay!!

Let’s Encrypt

Their free service is great, but the certificates it issues are only valid for 90 days (the commercial ones are typically measured in years).  They explain their reasoning for that here.  Basically, the short lifetime is a built-in failsafe if a certificate gets compromised (stolen), and it encourages automation.  And I have to admit, the automation tool I first used is great!

Getting Started

The easiest way to get started is by using a tool to request the certificate and put it in place for you.  I found the Certbot tool from the Electronic Frontier Foundation (EFF).  I’m using Apache on CentOS 6, so I’ll just focus on that.  The install steps for that are here.


Run the commands, and it starts off by installing dependencies.
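The original screenshot is missing here, but the EFF instructions for CentOS 6 with Apache at the time used the certbot-auto wrapper script; a sketch of those commands (script name and URL per the EFF docs of that era, so verify against the current install page):

```shell
# Fetch the certbot-auto wrapper, make it executable, and run it against Apache.
# certbot-auto installs its own OS dependencies the first time it runs.
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto
sudo ./certbot-auto --apache
```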

Right off, it dives into a few prompts.  First the terms of service, then an EFF email notification.

It goes through your web server config and lists the configured domains.  On our live site it surprisingly listed every site we had configured.

Then it asks whether you want the tool to automatically configure Apache to force redirects to SSL/443.  Be sure to take a backup of your config (by default at /etc/httpd/conf/httpd.conf) before proceeding.
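A quick backup along those lines might look like:

```shell
# Copy the main Apache config aside before letting Certbot modify it
sudo cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.bak
```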

Yay!  First part is done!

So remember, the certificate is only valid for 90 days.  They include a method to automatically renew it, which is actually pretty awesome.  Enterprise system administrators forget to renew certificates all the time.

They have a dry-run option, which shows what it would do without actually doing it:
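Assuming the certbot-auto wrapper from the install step, the dry run looks something like:

```shell
# Simulates the renewal without replacing any real certificates
sudo ./certbot-auto renew --dry-run
```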

(ignore my annoying python warnings/errors using the default config)

Add this to cron so you don’t forget.  They recommend running it twice a day, but I don’t see any harm in running it more often.
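A crontab entry along those lines might look like this (the certbot-auto path is an assumption carried over from the install step):

```shell
# Run the renewer twice a day; it only replaces certificates nearing expiry
0 0,12 * * * /root/certbot-auto renew --quiet
```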

Yay!  We’re all good!

Ah crap…. what now?

By default, it only picked up the base domain (jaas-demo.com) that was configured in Apache, and not the full domain that is more human friendly (www.jaas-demo.com).

They have a command to add a domain to the certificate:
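The command I ran was roughly the following sketch; note the expand flag here is what bites me later in this post:

```shell
# WARNING: run this way, Certbot created a brand-new certificate for the
# subdomain instead of extending the existing one (explained further down)
sudo ./certbot-auto --apache --expand -d www.jaas-demo.com
```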

There we go, that’s better.

To troubleshoot any problems, you can use this syntax to dump the contents of the certificate in human readable form:
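Assuming the Let’s Encrypt default layout under /etc/letsencrypt/live, the dump looks like:

```shell
# Print the certificate fields (issuer, validity dates, Subject Alternative Names)
sudo openssl x509 -in /etc/letsencrypt/live/jaas-demo.com/cert.pem -noout -text
```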


That’s all we got!  For a basic site this will work out of the box.  You may have to tweak things a little if you use a content delivery network, pull in files from other sites, and so on.  We’ll cover some of this in a later post.


Whoops…   I didn’t notice in the OpenSSL output that only the subdomain www.jaas-demo.com was in the certificate!  I actually ran the wrong command above with the expand flag.   What that syntax did was create a NEW certificate, not add the subdomain to the existing one.  The syntax should include the cert-name flag like so:
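A sketch of the corrected command, listing every domain the certificate should cover:

```shell
# --cert-name pins the existing certificate so the new domain list
# replaces it in place instead of creating a second certificate
sudo ./certbot-auto --apache --cert-name jaas-demo.com \
  -d jaas-demo.com -d www.jaas-demo.com
```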


Ahh that is better.


**** Shameless plug ****

Have a website problem of any sort you need help with?  Contact me here to see if I can help.  Rates based on complexity and time required.

**** /Shameless plug ****


Local Docker Registry Update

It appears that since I last wrote about creating a local and persistent Docker registry on CentOS, the default behavior has changed to force secure communication.   In basic environments like the ones I use and build in a lab, SSL is just a headache best left alone.

Doing a docker push now with Docker version 1.3.2, I get the error:

The best solution I found was to add this option to /etc/sysconfig/docker [1] [2]:
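The option in question is a sketch of an insecure-registry entry; the exact variable name depends on the docker-io package version (older CentOS builds used other_args, later ones OPTIONS), and docker-reg:5000 stands in for your registry host and port:

```shell
# /etc/sysconfig/docker
# Tells the daemon to allow plain-HTTP pushes/pulls to this registry
other_args="--insecure-registry docker-reg:5000"
```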

Restart Docker, and then all is well in Docker push land once again.




How to use a local persistent Docker registry on CentOS 6.5

UPDATE: Dec 16, 2014 – I found a new option is needed now with Docker version 1.3.2.   See more here.

There are a bunch of blogs out there with tutorials on how to use a local Docker registry, but none of them (that I have found) boil it down to the absolute simplest syntax and terms.   So here you go!

First off, some terminology: Docker Hub is where images are typically pulled from when you type the normal “docker pull blah” commands.  Your own store of images is called a registry, not a hub or a repo.   To save time and bandwidth, here is how you can stand up a persistent local registry to store your images.  Persistent meaning the image data is kept after the container is discarded.

Syntax here is working on CentOS 6.5

1) Install the needed bits.  This is no different than normal.
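On CentOS 6.5 that amounted to enabling EPEL and installing the docker-io package; a sketch (the EPEL release URL is the one that was current at the time):

```shell
# docker-io lives in the EPEL repository on CentOS 6
sudo rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
sudo yum install -y docker-io
```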

2) Start docker
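On CentOS 6 that’s the usual service command:

```shell
sudo service docker start
```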

3) Fire up the example registry.  This downloads and runs the registry image, exposes port 5000, and links the local dir /opt/registry to /tmp/registry within the container.   This is key – otherwise, after the container stops the images go poof.
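Putting that description into a command, assuming the stock registry image from Docker Hub:

```shell
# -d runs detached, -p publishes port 5000, and -v maps the host's
# /opt/registry onto /tmp/registry (where this registry image keeps its data)
sudo docker run -d -p 5000:5000 -v /opt/registry:/tmp/registry registry
```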

4) We could do this locally on this first machine, but we’ll show the syntax from another machine to illustrate.   On some other machine, first do the same install steps above to install the EPEL rpm and Docker.   Then pull the images you want:
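For example, pulling the stock CentOS image from Docker Hub:

```shell
sudo docker pull centos
```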


5) List the images, and we see this image separates out a few versions of the OS: CentOS 7 is the latest (see how the IMAGE ID matches 87e5b…), CentOS 6 is image 68edf…, and CentOS 5 is 504a65…


6) Add some tags to give the images a new identity.  Replace “docker-reg” with your Docker registry hostname.
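The tag commands look roughly like this, with <image-id> standing in for the IDs from the listing above and docker-reg:5000 for your registry host and port:

```shell
# One tag per OS version; the new name points at the local registry
sudo docker tag <image-id> docker-reg:5000/centos:centos7
sudo docker tag <image-id> docker-reg:5000/centos:centos6
sudo docker tag <image-id> docker-reg:5000/centos:centos5
```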

7) List images again to verify


8) Now push the tagged images to our local registry
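Pushing everything under that repository name in one go:

```shell
# Pushes all tags under docker-reg:5000/centos to the local registry
sudo docker push docker-reg:5000/centos
```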

9) Lastly, on some third machine with Docker already installed (this post makes this handy by deploying these nodes as a catalog item), pull the image.  Notice it’s WAY faster now – ~14 seconds in this screenshot.  Notice we only have the latest centos tagged; just pull the others and you’re good.
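The pull on the third machine, again assuming docker-reg as the registry hostname:

```shell
sudo docker pull docker-reg:5000/centos
```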

Compare this to pulling from Docker Hub at about ~2:20.   For a large image like the SpringTrader app I built, this would cut an hour-long download time dramatically.

I wanted to compare this to the SpringTrader app, so I pulled it earlier and began pushing it to my local registry.  One thing I noticed is that it buffers the image to disk when you push, so be aware you will need the disk space (and time) available for this.   The time savings happen later, on subsequent deployments.  And I crashed my VM by running out of space the first time…

Then on some other node

Boom.  In about 10% of the time it normally takes to deploy this image, she’s up and running!  It took about 13 minutes to download from Docker Hub, and about 2 minutes from my local registry.  That’s a win if you need to do this over and over.



Where are the images stored?

On a Docker node (the term I’ve been using for the base machine, not the containers themselves), I found the Docker files here.
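On a stock install that directory is typically /var/lib/docker (an assumption based on the default data root for the docker-io package):

```shell
# Image layers, container filesystems, and metadata live under here
ls /var/lib/docker
```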


On the local registry, I found the files here.  Remember we told it to use /opt/registry on the base machine and map that to /tmp/registry within the container?

