For the next time I’m trying to figure out how to update the Java SDK

The only reason I find myself having to update Java is to maintain the Elasticsearch server we have running at WSU. Every time I want to update the provisioning configuration, I end up with 25 tabs open trying to figure out what version is needed and how to get it.

This is hopefully a shortcut for next time.

The Elasticsearch installation instructions told me that when they were written, JDK version 1.8.0_73 was required. My last commit on the provisioning script shows 8u72, which I’m going to guess is 1.8.0_72, so I need to update.

I found the page titled Java SE Development Kit 8 Downloads, which has a list of the current downloads for JDK 8. I’m going to ignore that 8 is not 1.8 and continue under the assumption that JDK 8 is really JDK 1.8 because naming.

At the time of this post, the available downloads are for Java SE Development Kit 8u101. I’m not sure how far off from the Elasticsearch requirements that is, so I found a page that displays Java release numbers and dates.

Of course now I see that there are CPU and PSU (OTN) versions. The PSU version is 102, the CPU version is 101; what to do? Luckily, Oracle has a page explaining the Java release version naming. Even though 102 is higher than 101, Oracle recommends the CPU over the PSU. Ok.

I go back to the downloads page, click the radio button to accept the licensing agreement, copy the URL for jdk-8u101-linux-x64.tar.gz, and I’m done!
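
For the provisioning script, the download itself needs to happen non-interactively. A sketch of that fetch; the build number in the URL (b13 here) is my guess from the downloads page, and Oracle requires a cookie acknowledging the license agreement:

wget --no-cookies \
    --header "Cookie: oraclelicense=accept-securebackup-cookie" \
    "http://download.oracle.com/otn-pub/java/jdk/8u101-b13/jdk-8u101-linux-x64.tar.gz"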

Managing SSL certificates and HTTPS configuration at scale

Our multi-network multisite WordPress installation at WSU has 1022 sites spread across 342 unique domain names. We have 481 SSL certificates on the server to help secure the traffic to and from these domains. And we have 1039 unique server blocks in our nginx configuration to help route that traffic.

Configuring a site for HTTPS is often portrayed as a difficult process. That's mostly true, depending on your general familiarity with server configuration and encryption.

The good thing about a process is that you only have to figure it out a few times before you can automate it, or at least define it in a way that makes it less difficult.

Pieces used during SSL certification

A key—get it—to understanding and defining the process of HTTPS configuration is to first understand the pieces you’re working with.

  • Private Key: This should be secret and unique. The server uses it during the TLS handshake to prove its identity to clients, and it is required to decrypt traffic encrypted with the matching public key.
  • Public Key: This key can be distributed anywhere. Clients use it to verify that they are talking to the holder of your private key and to encrypt data that only that private key can decrypt.
  • CSR: A Certificate Signing Request. This contains your public key and other information about you and your server. Used to request digital certification from a certificate authority.
  • Certificate Authority: The issuer of SSL certificates. This authority is trusted by the server and clients to verify and sign public keys. Ideally, a certificate authority is trusted by the maximum number of clients. (i.e. all browsers)
  • SSL Certificate: Also known as a digital certificate or public key certificate. This contains your public key and is signed by a certificate authority. This signature applies a level of trust to your public key to help clients when deciding its validity.

Of the files and keys generated, the most important for the final configuration are the private key and the SSL certificate. The public key can be generated at any time from the private key and the CSR is only a vessel to send that public key to a certificate signing authority.

Losing or deleting the SSL certificate means downloading the SSL certificate again. Losing or deleting the private key means restarting the process entirely.
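
That regeneration is a one-liner with OpenSSL. A sketch, using the key file name from the example in the next section:

openssl rsa -in jeremyfelt.com.key -pubout -out jeremyfelt.com.pub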

Obtaining an SSL certificate

The first step in the process is to generate the private key for a domain and a CSR containing the corresponding public key.

openssl req -new -newkey rsa:2048 -nodes -sha256 -keyout jeremyfelt.com.key -out jeremyfelt.com.csr

This command will generate a 2048-bit RSA private key and a CSR signed with the SHA-256 hash algorithm. The -nodes flag leaves the private key unencrypted so that the web server can read it without a passphrase. No separate public key file is generated; the public key is embedded directly in the CSR.

Next, submit the CSR to a certificate signing authority. The certificate signing authority will sign the public key and return a digital certificate including the signature, your public key, and other information.

The certificate signing authority is often the part of the process that is annoying and difficult to automate.

If you’re purchasing the signature of a certificate through a certificate authority or reseller such as GoDaddy or Namecheap, the steps to purchase the initial request, submit the CSR, and download the correct certificate file can often be confusing and very time consuming.

Luckily, in WSU’s case, we have a university subscription to InCommon, a reseller of Comodo certificates. This allows us to request as many certificates as we need for one flat annual fee. It also provides a relatively straightforward web interface for requesting certificates. Similar to other resellers, we still need to wait as the request is approved by central IT and then generated by Comodo via InCommon.

Even better is the new certificate authority, Let’s Encrypt, which provides an API and a command line tool for submitting and finishing a certificate signing request immediately and for free.

Configuring the SSL certificate

This is where the process starts becoming more straightforward again. It's also where I'll focus only on nginx, as my familiarity with Apache disappeared years ago.

A cool thing about nginx when you’re serving HTTP requests is the flexibility of server names. It can use one server block in the configuration to serve thousands of sites.

server {
    listen 80;
    server_name *.wsu.edu wsu.io jeremyfelt.com foo.bar;
    root /var/www/wordpress;
}

However, when you serve HTTPS requests, you must specify which files to use for the private key and SSL certificate:

server {
    listen 443 ssl http2;
    server_name jeremyfelt.com;
    root /var/www/wordpress;

    ssl on;
    ssl_certificate /etc/nginx/ssl/jeremyfelt.com.cer;
    ssl_certificate_key /etc/nginx/ssl/jeremyfelt.com.key;
}

If you are creating private keys and requesting SSL certificates for individual sites as you configure them, this means having a server block for each server name.

There are three possibilities here:

  1. Use a wildcard certificate. This would allow for one server block for each set of subdomains. Anything at *.wsu.edu would be covered.
  2. Use a multi-domain certificate. This uses the SubjectAltName portion of a certificate to list multiple domains in a single certificate.
  3. Generate individual server blocks for each server name.

A wildcard certificate would be great if you control the domain and its subdomains. Unfortunately, at WSU, subdomains point to services all over the state. If everybody managing multiple subdomains also had a wildcard certificate to make it easier to manage HTTPS, the likelihood of that private key and certificate leaking out and becoming untrustworthy would increase.

Multi-domain certificates can be useful when you have some simple combinations like www.site.foo.bar and site.foo.bar. To redirect an HTTPS request from www to non-www, you need HTTPS configured for both. A minor issue is the size of the certificate. Every domain added to a SubjectAltName field increases the size of the certificate by the size of that domain text.

Not a big deal with a few small domains. A bigger deal with 100 large domains.

The convenience of multi-domain certificates also depends on how frequently domains are added. Any time a domain is added to a multi-domain certificate, it would need to be re-signed. If you know of several in advance, it may make sense.

If you hadn’t guessed yet, we use option 3 at WSU. Hence the 1039 unique server blocks! 🙂

From time to time we’ll request a small multi-domain certificate to handle the www to non-www redirects. But that too fits right into our process of putting the private key and certificate files in the proper place and generating a corresponding server block.

Using many server blocks in nginx for HTTPS

Private keys are generated, CSRs are submitted, SSL certificates are generated and downloaded.

Here’s what a generated server block at WSU looks like:

# BEGIN generated server block for fancy.wsu.edu
#
# Generated 2016-01-16 14:11:15 by jeremy.felt
server {
    listen 80;
    server_name fancy.wsu.edu;
    return 301 https://fancy.wsu.edu$request_uri;
}

server {
    server_name fancy.wsu.edu;

    include /etc/nginx/wsuwp-common-header.conf;

    ssl_certificate /etc/nginx/ssl/fancy.wsu.edu.cer;
    ssl_certificate_key /etc/nginx/ssl/fancy.wsu.edu.key;

    include /etc/nginx/wsuwp-ssl-common.conf;
    include /etc/nginx/wsuwp-common.conf;
}
# END generated server block for fancy.wsu.edu

We listen to requests on port 80 for fancy.wsu.edu and redirect those to HTTPS.

We listen to requests on port 443 for fancy.wsu.edu using a common header, provide directives for the SSL certificate and private key, and include the SSL configuration common to all server blocks.

wsuwp-common-header.conf

This is the smallest configuration file, so I’ll just include it here.

listen 443 ssl http2;
root /var/www/wordpress;

Listen on 443 for SSL and HTTP2 requests and use the directory where WordPress is installed as the web root.

These directives used to be part of the generated server blocks, until nginx added support for HTTP/2 and immediately deprecated support for SPDY. After replacing spdy with http2 in all of our server blocks, I decided to create a common config and include it instead.
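
For what it's worth, that one-time replacement boils down to something like this; the directory holding the generated server blocks is made up:

grep -rl 'ssl spdy;' /etc/nginx/sites-generated/ | xargs sed -i 's/ssl spdy;/ssl http2;/'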

WSU’s wsuwp-common-header.conf is open source if you’d like to use it.

wsuwp-ssl-common.conf

This is my favorite configuration file and one I often revisit. It contains all of the HTTPS specific nginx configuration.

# Enable HTTPS.
ssl on;

# Pick the allowed protocols
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

# And much, much more...

This is a case where so much of the hard stuff is figured out for you. I regularly visit things like Mozilla’s intermediate set of ciphers and this boilerplate nginx configuration and then make adjustments as they make sense.

WSU’s wsuwp-ssl-common.conf is open source if you’d like to use it.

wsuwp-common.conf

And the configuration file for WordPress and other things. It’s the least interesting to talk about in this context. But! It too is open source if you’d like to use it.

The process of maintaining all of this

At the beginning I mentioned defining and automating the process as a way of making it less difficult. We haven’t yet reached full automation at WSU, but our process is now well defined.

  1. Generate a private key and CSR using our WSUWP TLS plugin. This provides an interface in the main network admin to type in a domain name and generate the required files. The private key stays on the server and the CSR is available to copy so that it can be submitted to InCommon.
  2. Submit the CSR through the InCommon web interface. Wait.
  3. Upon receipt of the approval email, download the SSL certificate from the embedded link.
  4. Upload the SSL certificate through the WSUWP TLS interface. This verifies the certificate’s domain, places it on the server alongside the private key, and generates the server block for nginx.
  5. Deploy the private key, SSL certificate, and generated server block file. At the moment, this process involves the command line (see the sketch after this list).
  6. Run nginx -t to test the configuration and service nginx reload to pull it into production.
  7. In the WSUWP TLS plugin interface, verify the domain responds on HTTPS and remove it from the list.
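
Steps 5 and 6 currently look something like this; the hostname and paths here are placeholders rather than our actual layout:

scp fancy.wsu.edu.key fancy.wsu.edu.cer web.wsu.example:/etc/nginx/ssl/
scp fancy.wsu.edu.conf web.wsu.example:/etc/nginx/sites-generated/
ssh web.wsu.example "nginx -t && service nginx reload"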

Looking at the steps above, it’s not hard to imagine a completely automated process, especially if your certificate authority has a way of immediately approving requests and responding with a certificate. And even without automation, having this process well defined allows several members of our team to generate, request, and deploy certificates.

I’d love to know what other ways groups are approaching this. I’ve often hoped and spent plenty of time searching for easier ways. Share your thoughts, especially if you see any holes! 🙂

Previously:

My first Let’s Encrypt certificate

The timing of the Let’s Encrypt beta could not be more perfect as my previous certificate expires on November 18th. I purposely purchased only a 1 year certificate because I knew Let’s Encrypt was coming. Let’s see how this works!

6:00pm

In my email, I have an invite to Let’s Encrypt for 3 whitelisted domains—jeremyfelt.com, www.jeremyfelt.com, and content.jeremyfelt.com. Per the documentation, I cloned the git repository to a spot on my server—I chose /home/jeremyfelt/—so that I could use the client.

I admit that I haven’t read any documentation up until this point, so I’m flying blind and impatient like normal. 🙂

My first attempt at running the ./letsencrypt-auto command was interesting, but kind of a failure. A ton of dependencies were installed, which is good. I have an outdated version of Python apparently, which is annoying.

WARNING: Python 2.6 support is very experimental at present…
if you would like to work on improving it, please ensure you have backups and then run this script again with the --debug flag!

It took me several attempts before I finally read the message above and figured out that I was supposed to run the Let’s Encrypt command as ./letsencrypt-auto --debug to even pass to the next level. If you're not on Python 2.6, this probably won't be an issue.

Ok. Figured that out, then that crazy fake linux light blue GUI comes up… progress! Go through a couple steps and get this:

[Screenshot from 6:09pm: the Let's Encrypt client's error screen]

Right then. By default, the Let’s Encrypt client wants to act as the web server itself so that it can answer the domain validation challenge directly. This would be excellent if I didn’t already have a web server running.

At this point, I read down the page of documentation a bit and realized I could (a) use a config file and (b) use text only instead of ncurses. Sweet!

When I set up the default config file as /etc/letsencrypt/cli.ini, I noticed an option for a webroot authenticator. This looked more promising as a way to handle authentication through Nginx. I enabled this and tried again.

And failed! “Failed authorization procedure” to be exact. My client told the other side to verify at http://jeremyfelt.com/.well-known/acme-challenge/BIGLONGSTRING, but my default Nginx configuration blocks public access to all hidden files.

I added a location block to Nginx specifically to allow .well-known and tried again.
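
That block is tiny. A sketch of the shape it took, give or take the exact regex:

location ~ /\.well-known {
    allow all;
}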

Success! Authorization worked and a bunch of files were generated that look like what I need. I went into my Nginx configuration and updated the ssl_certificate directive to point at fullchain.pem and the ssl_certificate_key directive to point to privkey.pem. nginx -t has no complaints… let’s restart the server.
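
For reference, the two directives now point at the files the client generated, assuming the default /etc/letsencrypt/live/ layout:

ssl_certificate /etc/letsencrypt/live/jeremyfelt.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/jeremyfelt.com/privkey.pem;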

Failure! Big red X, invalid everything! The issuing CA is….. happy hacker fake CA.

Oh. Quick Google search and sure enough:

“happy hacker fake CA” is the issuer used in our staging/testing server. This is what the Let’s Encrypt client currently uses when you don’t specify a different server using the --server option like you did in the original post. Because of this, I believe the --server flag was not included when you ran the client. Try running the client again, but make sure you include the --server option from your original post.

Thank you, bmw!

I failed to update the cli.ini file that I had copied from the web to use the production API instead of the staging API.

Fix the server URL, try again. Success! And for real this time.

[Screenshot from 6:24pm: the successful run, with a valid certificate]

I repeated the process with www.jeremyfelt.com and content.jeremyfelt.com, making fewer mistakes along the way and that’s that.

  • Here’s the final cli.ini file that worked for me (a rough sketch of its shape follows this list).
  • And the final command line arguments: ./letsencrypt-auto --config /etc/letsencrypt/cli.ini --debug --agree-dev-preview certonly
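
The linked file is the real thing; its shape is roughly this, with the option names taken from the client's documentation of the time and the values here being guesses:

# A guess at the general shape, not the actual linked file.
server = https://acme-v01.api.letsencrypt.org/directory
text = True
authenticator = webroot
webroot-path = /var/www/jeremyfelt.com
rsa-key-size = 2048
email = user@example.com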

6:36pm

I have 3 new certificates. I still have an A+ in SSL Labs. Nothing is broken. The future is here.

Thank you to everyone at Let’s Encrypt!

Security. Stored Content. Backward Compatibility.

This was almost a storm of 140-character segments, but I help make software to democratize publishing. I should use it. 😄

The pains people working on WordPress go to in making security, stored content, and backward compatibility all first priorities are amazing.

This is what I’ve learned and been inspired by the most since I started contributing to WordPress.

If there’s a moment in the course of development in which any of these three things are at risk, the tension or pain—or whatever you want to call it—is palpable. Providing a secure place for users to publish their content without worry is why we’re here.

WordPress 4.2.3 was released this week.

A security issue was fixed. Stored content was not lost.

Backward compatibility in the display of that content was stretched, sometimes broken, when WordPress could not guarantee the resulting security of a user’s site or stored content.

It’s okay to feel offended, especially if you were burned. Know that good work was done to try and make the band-aid as easy to pull off as possible. Know that people have been working on or thinking through easing the burn every minute since.

Running PHPUnit on VVV from PHPStorm 9

I spent so much time trying to get this working last November and kept running into brick walls. Today, I finally revisited a comment on that same issue by Rouven Hurling that pointed out two excellent notes from Andrea Ercolino.

  1. How to run WordPress tests in VVV using PHPStorm 8
  2. How to run WordPress tests in VVV using WP-CLI and PHPStorm 8

These are both great examples of showing your work. Andrea walks through the issue and shows things that didn’t work in addition to things that did. I was able to use the first to quickly solve an issue that I’ve been so close to figuring out, but have missed every time.

And! I haven’t gone through the second article yet, but the hidden gem there is that he has remote debugging with Xdebug working while running tests with PHPUnit. I’ve wanted this so much before.

All that aside (go read those articles!), I stumbled a bit following the instructions and wanted to log some of the settings I had to configure in order for PHPUnit to work properly with PHPStorm 9 and my local configuration.

For this all to work, I had to configure 4 things. The first three are accessed via PHPStorm -> Preferences in OSX. The fourth is accessed via the Run -> Edit Configurations menu in OSX.

A remote PHP interpreter

Choosing “Vagrant” in the Interpreters configuration didn’t work for me. I got an error that VBoxManage wasn’t found in my path when I selected my Vagrant instance and it tried to auto-detect things. I’m not sure if this is a bug or a misconfiguration. I almost wonder if it’s related to my upgrade today to Vagrant 1.7.4 and VirtualBox 5.0.

Instead I tried going forward with “SSH Credentials” to see what would happen. I put in the IP of the VVV box, 192.168.50.4, and the vagrant/vagrant username and password combination for SSH. I left the interpreter path alone and when I clicked OK, everything was verified. I was hesitant, because in the first step I had already deviated from the plan.

PHPUnit configuration

This one was easier and didn’t require any altered steps. I added a remote PHPUnit configuration, chose the already configured remote interpreter and was good to close out.

SFTP deployment configuration

Because I was not able to get Vagrant configured properly in the first step, I am dealing with an entirely different path. PHPStorm wants to have a deployment configured over SFTP so that it can be aware of the path structure inside the virtual machine that leads to the tests.

Luckily nothing special needs to happen in the VM to support SFTP, so I was able to add 192.168.50.4 as the server and vagrant/vagrant as the username and password combination.

I originally tried putting the full path to WordPress here as the root directory, but that caused PHPStorm to prepend it to any automatic path building it did. I tried Autodetect as well, but that detected /home/vagrant, which did not work with the WordPress files. Through the power of deduction, I set the actual root / as the root. 😜

I also used the Mappings tab on this screen to match up my local wordpress-develop directory with the one I’m interacting with remotely.

Run/debug configurations

This one ends up being very straight forward now that everything else is configured. All I needed to do is select the phpunit.xml.dist file via its local path on my machine.

And then I was good! I can now run the single site unit tests for WordPress core just by using the Run command via green arrows everywhere in PHPStorm.

The tests took 3.14 minutes to run through the interface versus 2.45 minutes through the command line, but I was given a bunch of information on skipped tests throughout. I’ll likely stick to the command line interface for normal testing, though I’m looking forward to getting Xdebug configured alongside this as it will make troubleshooting individual tests for issues a lot easier.

Big thanks again to Andrea Ercolino for the great material that brought me to a working config!

Flushing rewrite rules in WordPress multisite for fun and profit

Developing plugins and introducing new rewrite rules for features on single site installations of WordPress is pretty straightforward. Via register_activation_hook(), you’re able to set up a task that fires flush_rewrite_rules() and you can safely assume job well done.

But for multisite? A series of bad choices awaits.

Everything seems normal. The activation hook fires and even gives you a parameter to know if this is a network activation.
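
As a refresher, the hook and its parameter look like this. The function name is made up; the $network_wide parameter is what WordPress passes:

register_activation_hook( __FILE__, 'myplugin_activate' );

function myplugin_activate( $network_wide ) {
    if ( $network_wide ) {
        // Network activated on multisite. Now what? See the options below.
        return;
    }

    // Single site activation. Safe and easy.
    flush_rewrite_rules();
}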

Your first option is to handle it exactly the same as a single site installation. flush_rewrite_rules() will fire, the rewrite rules for the current site will be built, and the network admin will be on their own to figure how these should be applied to the remaining sites.

It seems strange to say, but I’d really like you to choose this one.

Instead, you could detect if this is multisite and if the plugin is being activated network wide. At that point a few other options appear.

Grab all of the sites in wp_blogs and run flush_rewrite_rules() on each via switch_to_blog().

This sounds excellent. It’s horrible.

// Please don't. :)
$query = "SELECT * FROM $wpdb->blogs";
$sites = $wpdb->get_results( $query );
foreach( $sites as $site ) {
    switch_to_blog( $site->blog_id );
    flush_rewrite_rules();
    restore_current_blog();
}

Switching sites with switch_to_blog() changes the current context for a few things. The $wpdb global knows where it is, your cache keys are generated properly. But! There is absolutely no awareness of what plugins or themes would be loaded on the other site, only those in process now.

So. We get this (paraphrasing):

// Delete the rewrite_rules option from
// the switched site. Good!
delete_option( 'rewrite_rules' );

// Build the permalink structure in the
// context of the main site. Bad!
$this->rewrite_rules();

// Update the rewrite_rules option for the
// switched site. Horrible!
update_option( 'rewrite_rules', $this->rules );

All of a sudden every single site on the network has the same rewrite rules. 🔥🔥🔥

Grab all of the sites in wp_blogs and run delete_option( 'rewrite_rules' ) on each via switch_to_blog()

This is much, much closer. But still horrible!

// Warmer.... but please don't.
$query = "SELECT * FROM $wpdb->blogs";
$sites = $wpdb->get_results( $query );

foreach( $sites as $site ) {
    switch_to_blog( $site->blog_id );
    delete_option( 'rewrite_rules' );
    restore_current_blog();
}

It’s closer because it works. Deleting the rewrite_rules option occurs in the context of the switched site and you don’t need to worry about plugins registering their rewrite rules. On the next page load for that site, the rewrite rules will be built fresh and nothing will be broken.

It’s horrible because large networks. Ugh.

Really, not much of a deal at 10, 50, 100, or even 1000 sites. But 10000, 100000? Granted, it only takes a few seconds to delete the rewrite rules on all of these sites. But what’s going on with other various plugins on that next page load? If I have a network with large traffic on many sites, there’s going to be a small groan where the server has to generate and then store those rewrite rules in the database.

It’s up to the network administrator to draw that line, not the plugin.

You can mitigate this by checking wp_is_large_network() before taking these drastic actions. A network admin not wanting anything crazy to happen will be thankful.

// Much better. :)
if ( wp_is_large_network() ) {
    return;
}

// But watch out...
$query = "SELECT * FROM $wpdb->blogs";
$sites = $wpdb->get_results( $query );
foreach( $sites as $site ) {
    switch_to_blog( $site->blog_id );
    delete_option( 'rewrite_rules' );
    restore_current_blog();
}

But it’s also horrible because of multiple networks.

When a plugin is activated on a network, it’s stored in an array of network activated plugins in the meta for that network. A blanket query of wp_blogs often doesn’t account for the site_id. If you do go the drastic route and delete all rewrite rules options, make sure to do it on only sites of the current network.

// Much better...
if ( wp_is_large_network() ) {
    return;
}

// ...and we're probably still friends.
$query = "SELECT * FROM $wpdb->blogs WHERE site_id = 1";
$sites = $wpdb->get_results( $query );
foreach( $sites as $site ) {
    switch_to_blog( $site->blog_id );
    delete_option( 'rewrite_rules' );
    restore_current_blog();
}

And really, ignore all of my examples and use wp_get_sites(). Just be sure to familiarize yourself with the arguments so that you know how to get something different than 100 sites on the main network by default.

// Much better...
if ( wp_is_large_network() ) {
    return;
}

// ...and we're probably still friends.
$sites = wp_get_sites( array( 'network_id' => 1, 'limit' => 1000 ) );
foreach( $sites as $site ) {
    // wp_get_sites() returns arrays, not objects.
    switch_to_blog( $site['blog_id'] );
    delete_option( 'rewrite_rules' );
    restore_current_blog();
}
}

Alas.

At this point you can probably feel yourself trying to make the right decision and giving up in confusion. That’s okay. Things have been this way for a while and they’re likely not going to change any time soon.

I honestly think your best option is to show a notice after network activation to the network admin letting them know rewrite rules have been changed and that they should take steps to flush them across the network.
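
A sketch of that approach, with made-up option and function names:

// During network activation, set a flag instead of touching rewrite rules.
function myplugin_network_activate() {
    update_site_option( 'myplugin_rewrites_pending', 1 );
}

// Then nag politely in the network admin until it's handled.
function myplugin_pending_rewrites_notice() {
    if ( ! get_site_option( 'myplugin_rewrites_pending' ) ) {
        return;
    }

    echo '<div class="updated"><p>New rewrite rules were registered. Flush rewrite rules on each site when it makes sense for your network.</p></div>';
}
add_action( 'network_admin_notices', 'myplugin_pending_rewrites_notice' );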

Please: Look through your plugin code and make sure that activation is not firing flush_rewrite_rules() on every site. Every other answer is better than this.

If you’re comfortable, flush the rewrite rules on smaller networks by deleting the rewrite rules option on each site. This could be a little risky, but I’ll take it.

And if nothing else, just don’t handle activation on multisite. It’s a bummer, but will work itself out with a small amount of confusion.

OpenSSL commands that came in useful today

When nginx -t complained about a certificate/key mismatch this afternoon, I first assumed that the problem was on our end during our automated CSR/key generation or our certificate request process. I took a closer look at all three pieces to find the source of the error using “The Most Common OpenSSL Commands“:

openssl rsa -in example.test.key -check

The info from the key check was pretty unhelpful, but it was a valid key. See the section below for how to better compare that.

openssl req -text -noout -verify -in example.test.csr

The CSR check was somewhat helpful as I was able to verify that the correct domain name and other request information was in place.

openssl x509 -in example.test.cer -text -noout

The certificate check was most helpful as I was able to diff the results of this with the results of a working certificate. This showed me that nothing was off and all data was formatted as expected, just different.

I turned to searching for the verbose error instead.

Via “SSL Library Error: 185073780 key values mismatch“, I used these commands to compare a certificate and private key to see if they were indeed not matching:

  • openssl x509 -noout -modulus -in example.test.cer | openssl md5
  • openssl rsa -noout -modulus -in example.test.key | openssl md5

Each of these generated an md5 hash that I was able to compare. In my case, the error reported by nginx -t was correct and the certificate generated by Comodo did not match my private key. I double checked this by comparing a working certificate/key pair that resulted in matching md5 hashes.

Bah. This is nice because it’s likely not our fault. This is not nice because now we have less control over fixing it. 😞

I do have a set of commands that may come in useful again. 😃

Various Networking Configurations in VVV

I dug in to some different configurations in VVV today and decided to write them up as I went. This will be posted in some form to the VVV wiki as well. There are other networking configurations available in Vagrant, though I’m not sure that any would be useful in development with VVV.

I would recommend using default settings for initial provisioning as things can get quirky inside the VM when trying to access outside sources. Run vagrant reload to process any network configuration changes.

Private Network (default)

config.vm.network :private_network, ip: "192.168.50.4"

This is the default configuration provided in VVV. A private network is created by VirtualBox between your host machine and the guest machine. The guest is assigned an IP address of 192.168.50.4 and your host machine is able to access it on that IP. VVV is configured to provide access to several default domains on this IP address so that browser requests from your host machine just work.

Outside access from other devices to this IP address is not available as the network interface is private to your machine.

Port Forwarding

config.vm.network "forwarded_port", guest: 80, host: 8080

One option to provide other devices access to your guest machine is port forwarding. Uncommenting or adding this line in VVV’s Vagrantfile and then running vagrant reload will cause any traffic on port 8080 directed at your host machine to instead communicate with port 80 on the guest machine.

This configuration will work with private or public IP configurations as it deals with port forwarding rather than the IP of the virtual machine itself.

An immediate way to test this once configured would be to type your host machine’s IP address into a browser followed by :8080. With port forwarding enabled, something like http://192.168.1.119:8080 would bring up the default VVV dashboard.

Of course, this doesn’t do you much good with the default WordPress sites, as you’ll be stuck adding port 8080 to every request you make.

The easiest hack around this is to setup port forwarding on your router. Point incoming requests for port 80 to port 8080 on the IP address of your host machine. Requests through the router will then traverse ports 80 (public IP) -> 8080 (host) -> 80 (guest) and your development work can be shared with devices inside and outside of your network.

Say my router’s public IP is 14.15.16.17 and my computer’s local IP is 192.168.1.100.

  • Enable port forwarding in Vagrantfile.
  • Configure router to forward incoming port 80 to port 8080 on 192.168.1.100.
  • Visit src.wordpress-develop.14.15.16.17.xip.io on my phone, connected through LTE.

There are other things you can do on your local machine to reroute traffic from 80 to 8080 so that it forwards properly without the use of a router. Sal Ferrarello has posted steps to take advantage of port forwarding directly in OSX using pfctl.
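
The pf approach boils down to a single rdr rule. Something like this; my recollection of the technique, not necessarily Sal's exact steps:

echo "rdr pass inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080" | sudo pfctl -ef -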

Public Network

config.vm.network "public_network"

Replacing our default private network configuration with a public network configuration immediately provides access to other devices on your local network. Using this configuration without specifying an IP address causes the guest machine to request an address dynamically from an available DHCP server—likely your router. During vagrant up, an option may be presented to choose which interface should be bridged. I chose my AirPort interface as that is what my local machine is using.

==> default: Available bridged network interfaces:
1) en0: Wi-Fi (AirPort)
2) en1: Thunderbolt 1
3) p2p0
4) bridge0
5) vnic0
6) vnic1
    default: What interface should the network bridge to? 1

Once the guest machine receives an IP address, access is immediately available to other devices on the network.

  • vagrant ssh and type ifconfig to determine the IP address of the guest – mine was 192.168.1.141.
  • Visit src.wordpress-develop.192.168.1.141.xip.io on my phone, connected to the wireless network.

To me this is most desirable as it provides access to devices on the local network, not to the outside. If you are using public wifi or another insecure network, be aware: this does open your machine up to other devices on that network.

config.vm.network "public_network", ip: "192.168.1.141"

The same configuration would be available without DHCP by specifying the IP address to use. If you know what subnet your network is on, this may be a shortcut for providing access without having to use ifconfig inside the guest machine.

Deployment Workflows Wrap-up

This is a wrap-up post for the last few days I spent documenting a series of deployment workflows I use to get code into production.

While writing all of this up over the last several days, I was able to compile some of my thoughts about what a proper workflow should look like. I’m not convinced I’ve reached anything perfect, though I’m fairly happy with how much work we’re able to get done with these in use. It’s definitely a topic I’ll continue to think about.

Guidelines

All in all, I think my main guidelines for a successful workflow are:

  1. The one you use is better than the one you never implement.
  2. Communication and documentation are more important than anything else.

And I think there are a few questions that should be asked before you settle on anything.

Who is deploying?

This is the most important question. A good team deserves a workflow they’re comfortable with.

If a developer is comfortable with the command line and is the only one responsible for deploys, the possibilities for deployment are pretty much endless. The possibilities for pain are pretty high as well. You’ll likely change your mind a lot and that’s okay.

When you’re working with a team of varying talents and a mix of operating systems, you’ll need to craft something that is straightforward to use and straightforward to support. The front end interface is most important; the dots that connect it can usually be changed.

Push or Pull?

One of my workflows is completely push based through Fabric. This means that every time I want to deploy code, Fabric processes my code base with rsync to the remote server. With a large codebase, this can take a bit. With a crappy Internet connection, things can get completely unreliable.

Two workflows are entirely pull based. Well, mostly. GitHub is told to ping the server. The server then pulls down whatever data it needs to from GitHub. A push from a local machine to GitHub could initiate this. Clicking the “New Release” button on the web could do the same.

One workflow combines things a bit. I push all changes in the repository to a git remote. That machine then pulls from the master repository. This workflow is weird and should be replaced.

Overall I prefer the workflows that are pull oriented. I can’t imagine using a push workflow if more than one person was deploying to jeremyfelt.com as the possibilities for things happening out of order rise as more people are involved.

When do we deploy?

Whatever processes are in place to get code from a local environment to production, there needs to be some structure about the when.

I’m very much in favor of using real versioning. I’ve become more and more a proponent of semantic versioning because it’s fairly easy to communicate. For some of the repositories I deploy, I’ll also use a raw build version – 0001, 0002, etc… – and that works as well.

This goes hand in hand with communication. Either on GitHub or somewhere else, a conversation about milestones and release dates is happening so that everyone knows version 0.10.1 is shipping this afternoon. Or, everyone expects the 5-10 deployments happening each day.

The Guardian’s developer team posted an article yesterday on continuous delivery and the use of frequent deployments. I would recommend reading through that to get an idea for some of the benefits and challenges.

I think the following is my favorite from that piece:

We view an application with a long uptime as a risk. It can be a sign that there’s fear to deploy it, or that a backlog of changes is building up, leading to a more risky release. Even for systems that are not being actively developed, there’s value in deploying with some regularity to make sure we still have confidence in the process. One note of caution: deploying so frequently can mask resource leaks. We once had a service fail over a bank holiday weekend, as it had never previously run for three days without being restarted by a deploy!

Missing

You may have noticed—I’m missing a ton of possibilities.

I think the one that stands out the most is Capistrano, something I’ve never gotten too familiar with. The answer to “who deploys?” at WSU made me exclude this early to avoid either the Ruby dependency or having to create a complex workflow in a virtual machine. From what I’ve heard, this is powerful and I think it’s worth a look.

Beanstalk provides repositories and automatic deployments over FTP and SSH. If you’re already a Beanstalk customer, this is definitely worth a look as the perceivable pain is pretty low. I have not actually administered this myself, only used it, so I’m not sure what it looks like from an admin perspective.

And there are more, I’m very certain.

Wrapped.

That’s all I have for deployment. 🙂

I’m very much interested in continuing the conversation. If you document a workflow, let me know and I’ll add it to a master list on the first post. I’m also wide open for feedback and/or critique. Leave a comment on any of the posts or reach out!

And once again, here are mine:

  1. The WSUWP Platform
  2. jeremyfelt.com
  3. WSU Indie Sites
  4. The WSU Spine

Deployment Workflows, Part 4: WSU Spine

This post is the fourth in a series of deployment workflows I use to get code into production.

This is the one non-WordPress deployment writeup, though still interesting.

The WSU Spine plays the role of both branding and framework for websites created at WSU. It provides a consistent navigation experience, consistent default styles, and the proper University marks. At the same time, a fully responsive CSS framework makes it easy for front end developers at WSU to create mobile friendly pages. For sites that are in the WSUWP Platform, we provide a parent theme that harnesses this framework.

One of the great parts about maintaining a central framework like this is being able to serve it from a single location – repo.wsu.edu – so that the browsers of various visitors can cache the file once and not be continually downloading Spine versions while they traverse the landscape of WSU web.

It took us a bit to get going with our development workflow, but we finally settled on a good model later in 2014 centered around semantic versioning. We now follow a process similar to other libraries hosted on CDNs.

Versions of spine.min.js and spine.min.css for our current version 1.2.2 are provided at:

  • repo.wsu.edu/spine/1/* – Files here are cached for an hour. This major version URL will always be up to date with the latest version of the Spine. If we break backward compatibility, the URL will move up a major version to /spine/2/ so that we don’t break live sites. This is our most popular URL.
  • repo.wsu.edu/spine/1.2.2/ – Files here are cached for 120 days. A directory like this is built for every minor and patch release, which allows longer cache times and gives developers fine-grained control over the exact version used in their own projects. It does increase the chance that older versions of the Spine will be in the wild. We have not seen any traffic on this URL yet.
  • repo.wsu.edu/spine/develop/ – Files here are cached for 10 minutes. This is built every time the develop branch is updated in the repository and is considered bleeding edge and often unstable.

So, our objectives:

  1. Deploy to one directory for every change to develop.
  2. Deploy to the major version directory whenever a new release is tagged.
  3. Create and deploy to a minor/patch version directory whenever a new release is tagged.

As with the WSUWP Platform, we use GitHub’s webhooks. For this deployment process, we watch for both the create and push events using a very basic PHP script on the server rather than a plugin.

Here’s what a shorter version of that script does (a rough sketch follows the list):

  • If we receive a push event and it is on the develop branch, then we fire a deploy script with develop as the only argument.
  • If we receive a create event and it is for a new tag that matches our #.#.# convention, then we fire a deploy script with this tag as the only argument.
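
A sketch of that receiver; the event header and payload fields are GitHub's, while the deploy script path is made up:

<?php
// Not the actual WSU script, just the shape of the two checks above.
// Assumes GitHub is configured to deliver JSON payloads.
$event   = isset( $_SERVER['HTTP_X_GITHUB_EVENT'] ) ? $_SERVER['HTTP_X_GITHUB_EVENT'] : '';
$payload = json_decode( file_get_contents( 'php://input' ) );

if ( 'push' === $event && 'refs/heads/develop' === $payload->ref ) {
    exec( '/var/repos/deploy-spine.sh develop' );
} elseif ( 'create' === $event && 'tag' === $payload->ref_type && preg_match( '/^\d+\.\d+\.\d+$/', $payload->ref ) ) {
    exec( '/var/repos/deploy-spine.sh ' . escapeshellarg( $payload->ref ) );
}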

And here’s the brief play by play from the deploy script that it fires (a sketch follows the list):

  1. Get things clean in the current repo.
  2. Checkout whichever branch or tag was passed as the argument.
  3. Update NPM dependencies.
  4. grunt prod to run through our build process.
  5. Move files to the appropriate directories.
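
In shell form, those five steps amount to something like this; the repository location, build output, and web directories are guesses:

#!/bin/bash
# A sketch of the deploy steps above, not the production script.
REF="$1"

cd /var/repos/spine || exit 1

# 1. Get things clean in the current repo.
git fetch --all
git reset --hard
git clean -fd

# 2. Check out whichever branch or tag was passed as the argument.
git checkout -f "$REF"
git reset --hard "origin/$REF" 2>/dev/null || true # branches only; tags are fixed points

# 3. Update NPM dependencies.
npm install

# 4. Run the build process.
grunt prod

# 5. Move files to the appropriate directories.
if [ "$REF" = "develop" ]; then
    rsync -r dist/ /var/www/repo/spine/develop/
else
    rsync -r dist/ "/var/www/repo/spine/${REF%%.*}/"
    mkdir -p "/var/www/repo/spine/$REF"
    rsync -r dist/ "/var/www/repo/spine/$REF/"
fi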

Really, really smooth. Also really, really basic and ugly. 🙂

We’ve actually only done 2 or 3 releases with this model so far, so I’m not completely convinced there aren’t any bugs. It’s pretty easy to maintain as there really are few tasks that need to be completed. Build files, get files to production.

Another great bonus is the Spine version switcher we have built as a WordPress Customizer option in the theme. We can go to any site on the platform and test out the develop branch if needed.

In the near future, I’d like to create directories for any new branch that is created in the repository. This will allow us to test various bug fixes or new features before committing them to develop, making for a more stable environment overall.

As with other posts in the series, I would love feedback! Please leave a comment or reach out with suggestions or questions.