OpenSSL commands that came in useful today

When nginx -t complained about a certificate/key mismatch this afternoon, I first assumed the problem was on our end, somewhere in our automated CSR/key generation or certificate request process. I took a closer look at all three pieces to track down the source of the error, using “The Most Common OpenSSL Commands”:

openssl rsa -in example.test.key -check

The output of the key check was pretty unhelpful on its own, though it did confirm the key was valid. See the section below for a better way to compare the key against the certificate.

openssl req -text -noout -verify -in example.test.csr

The CSR check was somewhat helpful as I was able to verify that the correct domain name and other request information was in place.

openssl x509 -in example.test.cer -text -noout

The certificate check was the most helpful, as I was able to diff its output against the output for a known working certificate. This showed me that nothing was structurally off; all of the data was formatted as expected, just with different values.

I turned to searching for the verbose error instead.

Via “SSL Library Error: 185073780 key values mismatch”, I used these commands to compare the certificate and private key and confirm whether they did in fact mismatch:

  • openssl x509 -noout -modulus -in example.test.cer | openssl md5
  • openssl rsa -noout -modulus -in example.test.key | openssl md5

Each of these commands generates an MD5 hash of the modulus, which can then be compared. In my case, the error reported by nginx -t was correct: the certificate generated by Comodo did not match my private key. I double-checked by running the same commands against a working certificate/key pair, which produced matching hashes.
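Since only the two hashes need to match, the check is easy to script. A quick sketch using the same file names as above:

cert_hash=$(openssl x509 -noout -modulus -in example.test.cer | openssl md5)
key_hash=$(openssl rsa -noout -modulus -in example.test.key | openssl md5)

# The certificate and key belong together only if the modulus hashes match.
if [ "$cert_hash" = "$key_hash" ]; then
    echo "Match: certificate and key belong together."
else
    echo "Mismatch: certificate and key do not belong together."
fi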

Bah. This is nice because it’s likely not our fault. This is not nice because now we have less control over fixing it. 😞

I do have a set of commands that may come in useful again. 😃

Various Networking Configurations in VVV

I dug into some different networking configurations in VVV today and decided to write them up as I went. This will be posted in some form to the VVV wiki as well. There are other networking configurations available in Vagrant, though I’m not sure any of them would be useful for development with VVV.

I would recommend using the default settings for initial provisioning, as things can get quirky inside the VM when it tries to reach outside sources. Run vagrant reload to apply any network configuration changes.

Private Network (default)

config.vm.network :private_network, ip: "192.168.50.4"

This is the default configuration provided in VVV. A private network is created by VirtualBox between your host machine and the guest machine. The guest is assigned an IP address of 192.168.50.4 and your host machine is able to access it on that IP. VVV is configured to provide access to several default domains on this IP address so that browser requests from your host machine just work.

Outside access from other devices to this IP address is not available as the network interface is private to your machine.
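A quick way to confirm that the private network is up is to ping the guest from the host, using the IP from the configuration above:

# The guest should respond on the IP assigned to the private network.
ping -c 1 192.168.50.4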

Port Forwarding

config.vm.network "forwarded_port", guest: 80, host: 8080

One option to provide other devices access to your guest machine is port forwarding. Uncommenting or adding this line in VVV’s Vagrantfile and then running vagrant reload will cause any traffic on port 8080 directed at your host machine to instead communicate with port 80 on the guest machine.

This configuration will work with private or public IP configurations as it deals with port forwarding rather than the IP of the virtual machine itself.

An immediate way to test this once configured would be to type your host machine’s IP address into a browser followed by :8080. With port forwarding enabled, something like http://192.168.1.119:8080 would bring up the default VVV dashboard.
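The same check works from the command line, again using the example host IP above:

# Request the VVV dashboard through the forwarded port on the host machine.
curl -I http://192.168.1.119:8080/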

Of course, this doesn’t do you much good with the default WordPress sites, as you’ll be stuck adding port 8080 to every request you make.

The easiest hack around this is to set up port forwarding on your router. Point incoming requests for port 80 to port 8080 on the IP address of your host machine. Requests through the router will then traverse ports 80 (public IP) -> 8080 (host) -> 80 (guest), and your development work can be shared with devices inside and outside of your network.

Say my router’s public IP is 14.15.16.17 and my computer’s local IP is 192.168.1.100.

  • Enable port forwarding in Vagrantfile.
  • Configure router to forward incoming port 80 to port 8080 on 192.168.1.100.
  • Visit src.wordpress-develop.14.15.16.17.xip.io on my phone, connected through LTE.

There are other things you can do on your local machine to reroute traffic from 80 to 8080 so that it forwards properly without the use of a router. Sal Ferrarello has posted steps to take advantage of port forwarding directly in OS X using pfctl.
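The core of that approach is a pf redirect rule along these lines (a rough sketch; see Sal’s post for the exact steps):

# Redirect incoming traffic on port 80 to port 8080, where the Vagrant forwarded port is listening.
echo "rdr pass inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080" | sudo pfctl -ef -

# To undo it later, reload the default pf configuration.
sudo pfctl -f /etc/pf.conf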

Public Network

config.vm.network "public_network"

Replacing our default private network configuration with a public network configuration immediately provides access to other devices on your local network. Using this configuration without specifying an IP address causes the guest machine to request an address dynamically from an available DHCP server—likely your router. During vagrant up, an option may be presented to choose which interface should be bridged. I chose my AirPort interface as that is what my local machine is using.

==> default: Available bridged network interfaces:
1) en0: Wi-Fi (AirPort)
2) en1: Thunderbolt 1
3) p2p0
4) bridge0
5) vnic0
6) vnic1
    default: What interface should the network bridge to? 1

Once the guest machine receives an IP address, access is immediately available to other devices on the network.

  • vagrant ssh and type ifconfig to determine the IP address of the guest – mine was 192.168.1.141.
  • Visit src.wordpress-develop.192.168.1.141.xip.io on my phone, connected to the wireless network.

To me, this is the most desirable option, as it provides access to devices on the local network without exposing anything to the outside. If you are using public wifi or another insecure network, be aware that this does open your machine up to other devices on that network.

config.vm.network "public_network", ip: "192.168.1.141"

The same configuration is available without DHCP by specifying the IP address to use. If you know what subnet your network is on, this can be a shortcut for providing access without having to run ifconfig inside the guest machine.

Deployment Workflows Wrap-up

This is a wrap-up post for the last few days I spent documenting a series of deployment workflows I use to get code into production.

While writing all of this up over the last several days, I was able to compile some of my thoughts about what a proper workflow should look like. I’m not convinced I’ve reached anything perfect, though I’m fairly happy with how much work we’re able to get done with these in use. It’s definitely a topic I’ll continue to think about.

Guidelines

All in all, I think my main guidelines for a successful workflow are:

  1. The one you use is better than the one you never implement.
  2. Communication and documentation are more important than anything else.

And I think there are a few questions that should be asked before you settle on anything.

Who is deploying?

This is the most important question. A good team deserves a workflow they’re comfortable with.

If a developer is comfortable with the command line and is the only one responsible for deploys, the possibilities for deployment are pretty much endless. The possibilities for pain are pretty high as well. You’ll likely change your mind a lot and that’s okay.

When you’re working with a team of varying talents and a mix of operating systems, you’ll need to craft something that is straightforward to use and straightforward to support. The front-end interface is what matters most; the dots that connect it can usually be changed.

Push or Pull?

One of my workflows is completely push based through Fabric. This means that every time I want to deploy code, Fabric pushes my code base to the remote server with rsync. With a large codebase, this can take a bit. With a crappy Internet connection, things can get completely unreliable.

Two workflows are entirely pull based. Well, mostly. GitHub is told to ping the server. The server then pulls down whatever data it needs to from GitHub. A push from a local machine to GitHub could initiate this. Clicking the “New Release” button on the web could do the same.

One workflow combines things a bit. I push all changes in the repository to a git remote. That machine then pulls from the master repository. This workflow is weird and should be replaced.

Overall I prefer the workflows that are pull oriented. I can’t imagine using a push workflow if more than one person were deploying to jeremyfelt.com, as the chances of things happening out of order rise as more people get involved.

When do we deploy?

Whatever processes are in place to get code from a local environment to production, there needs to be some structure about the when.

I’m very much in favor of using real versioning. I’ve become more and more a proponent of semantic versioning because it’s fairly easy to communicate. For some of the repositories I deploy, I’ll also use a raw build version – 0001, 0002, etc… – and that works as well.

This goes hand in hand with communication. Either on GitHub or somewhere else, a conversation about milestones and release dates is happening so that everyone knows version 0.10.1 is shipping this afternoon. Or, everyone expects the 5-10 deployments happening each day.

The Guardian’s developer team posted an article yesterday on continuous delivery and the use of frequent deployments. I would recommend reading through that to get an idea for some of the benefits and challenges.

I think the following is my favorite from that piece:

We view an application with a long uptime as a risk. It can be a sign that there’s fear to deploy it, or that a backlog of changes is building up, leading to a more risky release. Even for systems that are not being actively developed, there’s value in deploying with some regularity to make sure we still have confidence in the process. One note of caution: deploying so frequently can mask resource leaks. We once had a service fail over a bank holiday weekend, as it had never previously run for three days without being restarted by a deploy!

Missing

You may have noticed—I’m missing a ton of possibilities.

I think the one that stands out the most is Capistrano, something I’ve never gotten too familiar with. The answer to “who deploys?” at WSU made me exclude this early to avoid either the Ruby dependency or having to create a complex workflow in a virtual machine. From what I’ve heard, this is powerful and I think it’s worth a look.

Beanstalk provides repositories and automatic deployments over FTP and SSH. If you’re already a Beanstalk customer, this is definitely worth a look, as the perceived pain is pretty low. I have not actually administered this myself, only used it, so I’m not sure what it looks like from an admin perspective.

And there are more, I’m very certain.

Wrapped.

That’s all I have for deployment. :)

I’m very much interested in continuing the conversation. If you document a workflow, let me know and I’ll add it to a master list on the first post. I’m also wide open for feedback and/or critique. Leave a comment on any of the posts or reach out!

And once again, here are mine:

  1. The WSUWP Platform
  2. jeremyfelt.com
  3. WSU Indie Sites
  4. The WSU Spine

Deployment Workflows, Part 4: WSU Spine

This post is the fourth in a series of deployment workflows I use to get code into production.

This is the one non-WordPress deployment write-up in the series, though it’s still interesting.

The WSU Spine plays the role of both branding and framework for websites created at WSU. It provides a consistent navigation experience, consistent default styles, and the proper University marks. At the same time, a fully responsive CSS framework makes it easy for front end developers at WSU to create mobile friendly pages. For sites that are in the WSUWP Platform, we provide a parent theme that harnesses this framework.

One of the great parts about maintaining a central framework like this is being able to serve it from a single location – repo.wsu.edu – so that visitors’ browsers can cache the file once rather than continually downloading Spine versions as they traverse the landscape of the WSU web.

It took us a bit to get going with our development workflow, but we finally settled on a good model later in 2014 centered around semantic versioning. We now follow a process similar to other libraries hosted on CDNs.

Versions of spine.min.js and spine.min.css for our current version 1.2.2 are provided at:

  • repo.wsu.edu/spine/1/* – Files here are cached for an hour. This major version URL will always be up to date with the latest version of the Spine. If we break backward compatibility, the URL will move up a major version to /spine/2/ so that we don’t break live sites. This is our most popular URL.
  • repo.wsu.edu/spine/1.2.2 – Files here are cached for 120 days. This directory is built for every minor and patch release, allowing for longer cache times and fine-grained control for anyone who wants to pin a specific version in their own project. It does increase the chance that older versions of the Spine will be in the wild. We have not seen any traffic on this URL yet.
  • repo.wsu.edu/spine/develop/ – Files here are cached for 10 minutes. This is built every time the develop branch is updated in the repository and is considered bleeding edge and often unstable.
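One way to verify which cache lifetime a given URL is serving is to check the response headers. The exact file path and the assumption that the lifetime shows up in a Cache-Control header are mine:

# The major version URL should advertise the shorter, one hour cache lifetime.
curl -sI https://repo.wsu.edu/spine/1/spine.min.js | grep -i cache-control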

So, our objectives:

  1. Deploy to one directory for every change to develop.
  2. Deploy to the major version directory whenever a new release is tagged.
  3. Create and deploy to a minor/patch version directory whenever a new release is tagged.

As with the WSUWP Platform, we use GitHub’s webhooks. For this deployment process, we watch for both the create and push events using a very basic PHP script on the server rather than a plugin.

In short, the script’s logic boils down to:

  • If we receive a push event and it is on the develop branch, then we fire a deploy script with develop as the only argument.
  • If we receive a create event and it is for a new tag that matches our #.#.# convention, then we fire a deploy script with this tag as the only argument.

The webhook handler then fires a small deploy script on the server.

The brief play-by-play of that script, roughly sketched after the list, is:

  1. Get things clean in the current repo.
  2. Checkout whichever branch or tag was passed as the argument.
  3. Update NPM dependencies.
  4. grunt prod to run through our build process.
  5. Move files to the appropriate directories.
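A minimal sketch of a script that follows those steps; the paths and the build output location are assumptions, and the real script is more involved:

#!/bin/bash
# Deploy the Spine version passed as the first argument (a branch or a tag).
ref="$1"

cd /var/repos/wsu-spine || exit 1      # repository location is an assumption

# 1. Get things clean in the current repo.
git checkout -- .
git clean -fd
git fetch --all --tags

# 2. Check out whichever branch or tag was passed as the argument.
git checkout "$ref"
[ "$ref" = "develop" ] && git pull origin develop

# 3. Update NPM dependencies.
npm install

# 4. Run the production build.
grunt prod

# 5. Move files to the appropriate directory. A tagged release such as 1.2.2
#    would also update the major version directory (e.g. /var/www/spine/1/).
mkdir -p "/var/www/spine/$ref"
cp -r dist/* "/var/www/spine/$ref/"    # build output directory is an assumption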

Really, really smooth. Also really, really basic and ugly. :)

We’ve actually only done two or three releases with this model so far, so I’m not completely convinced there aren’t any bugs. It’s pretty easy to maintain, as there really are only a few tasks to complete: build files, get files to production.

Another great bonus is the Spine version switcher we have built as a WordPress Customizer option in the theme. We can go to any site on the platform and test out the develop branch if needed.

In the near future, I’d like to create directories for any new branch that is created in the repository. This will allow us to test various bug fixes or new features before committing them to develop, making for a more stable environment overall.

As with other posts in the series, I would love feedback! Please leave a comment or reach out with suggestions or questions.

Deployment Workflows, Part 3: WSU Indie Sites

This post is the third in a series of deployment workflows I use to get code into production.

While the WSUWP Platform is the preferred place for new sites at WSU, there are times when a single site WordPress installation is required.

Until about February of 2014, all new sites were configured on our WSU Indie server because the platform was not ready and setting things up in IIS was getting old. Now, a site only goes here if it is going to do something crazy that may affect the performance of the platform as a whole. It’s a separate server and definitely an outlier in our day to day workflows. In fact, I’ve been able to remove all but three sites from that server, with one more removal scheduled.

Because WSU Indie is set up to handle multiple single site installs, I needed a way to deploy everything except WordPress itself, which is managed on this server as part of provisioning.

The best current example of how each repository is configured is probably our old P2 configuration:

  • config/ – Contains the Nginx configuration for both local (Vagrant) and production instances.
  • wp-content/ – Contains mu-plugins, plugins, and themes.
  • .rsync-exclude – A list of files to exclude when rsync is used on the server during deployment.

When placed alongside a wordpress/ directory and an existing wp-config.php file, everything just works.

On the server, each configured indie site has its own directory setup based on descriptions in a Salt pillar file. An example of what this pillar file looks like is part of our local indie development README.

When provisioning runs, the final web directory looks something like this:

  • /var/www/
  • /var/www/news.wsu.edu/
  • /var/www/news.wsu.edu/wp-config.php
  • /var/www/news.wsu.edu/wordpress/
  • /var/www/news.wsu.edu/wp-content/
  • /var/www/project.web.wsu.edu/
  • /var/www/project.web.wsu.edu/wp-config.php
  • /var/www/project.web.wsu.edu/wordpress/
  • /var/www/project.web.wsu.edu/wp-content/

Our job with deployment is to get the files from the repository into the production directory.

I decided git hooks would do the trick and be fun to try. In a nutshell, a git repository has a .git/hooks/ directory where you can define one or more shell scripts that fire at various points during various git actions. Pre-commit, post-commit, etc… These hooks can be configured locally or on a remote server. They are similar in a way to GitHub’s webhooks and can be very powerful.

We have a repository which contains all of these scripts, but it’s currently one of the few that we have as private. I’m not entirely sure why, so I’ll replicate a smaller version of one here.

The file structure we’re dealing with during deployment is this:

  • /var/repos/news.wsu.edu.git/ – A bare git repository setup for receiving deploy instructions.
  • /var/repos/news.wsu.edu.git/hooks/post-receive – The hook that will fire after receiving a push from a developer.
  • /var/repos/news.wsu.edu/ – The directory where the master branch is always checked out.
  • /var/www/news.wsu.edu/ – Production.

The key piece is the post-receive hook in the bare WSU News repository.
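In essence, it tells the always-checked-out copy of the repository to grab the latest master and then rsyncs those files into production. A minimal sketch using the paths above; the real hook includes more checks:

#!/bin/bash
# /var/repos/news.wsu.edu.git/hooks/post-receive
# Fired after the bare repository receives a push.

# Update the working copy that always tracks master. GIT_DIR is unset because
# hooks run with it pointed at the bare repository.
unset GIT_DIR
cd /var/repos/news.wsu.edu || exit 1
git fetch origin                 # origin is assumed to point at the bare repository
git reset --hard origin/master

# Sync everything except the excluded files into production.
rsync -av --exclude-from=/var/repos/news.wsu.edu/.rsync-exclude /var/repos/news.wsu.edu/ /var/www/news.wsu.edu/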

For this to work, I add a remote named “deploy” to my local environment for news.wsu.edu that points to the /var/repos/news.wsu.edu.git/ directory in production. Whenever I want to deploy the current master branch, I type git push deploy master. This sets in motion the following:

  • /var/repos/news.wsu.edu.git/ receives my push of whatever happens to come from my local repo.
  • /var/repos/news.wsu.edu.git/hooks/post-receive fires when that push is done.
  • /var/repos/news.wsu.edu/ is instructed by the post-receive hook to fetch the latest master.
  • rsync is then used to sync the non-excluded files from /var/repos/news.wsu.edu/ to /var/www/news.wsu.edu/

And boom, latest master is in production.
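On the local side, the one-time setup is just adding the remote; the SSH user and host here are placeholders:

# Point a "deploy" remote at the bare repository on the server.
git remote add deploy deployuser@example-server:/var/repos/news.wsu.edu.git

# Ship the current master branch and let the post-receive hook take over.
git push deploy master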

There are a few caveats:

  1. There is no way to roll back, unless you revert in master.
  2. There is no way to deploy specific versions; you must assume that master is stable.
  3. It is user friendly only to someone comfortable with git on the command line, though it’s probably possible to set up a GUI with multiple remotes. This is not necessarily a bad thing.
  4. It requires a public key from each developer to be configured on the server for proper authentication. This is also not necessarily a bad thing.

I think this is a pretty good example of how one can start to use git hooks for various things. The script above is definitely not a great long term solution, and if I hadn’t started focusing more on the health of the WSUWP Platform, things likely would have changed drastically. That said, once it was there it worked for multiple sites and multiple people on our team—and having something that works is pretty much the biggest objective of all.

As with other posts in the series, I would love feedback! Please leave a comment or reach out with suggestions or questions.

Deployment Workflows, Part 2: jeremyfelt.com

This post is the second in a series of deployment workflows I use to get code into production.

The site you’re reading this on is often a neglected playground of sorts. I’ll run into an interesting method of provisioning the server or maintaining code, apply it halfway and then get distracted with something else.

That said, somewhere around 16 months ago, I tried out Fabric to manage deployment actions for individual sites and decided to stick with it.

A fair warning is in order—I found an immediate, quick way to use Fabric to accomplish a handful of tasks and then let it be. It’s likely much more powerful than a wrapper for rsync, but that’s basically how I’m using it.

Ok. Backing up a bit, I think this workflow makes the most sense when you look at how the code is arranged.

  • www/ is where WordPress and its config files live.
  • www/wordpress is a submodule pointed at core.git.wordpress.org so that I can run trunk or a specific branch.
  • content/ is my wp-content directory. [1]
  • tweets/ is a submodule pointed at https://github.com/jeremyfelt/tweets.jeremyfelt.com, where I maintain a version controlled copy of my Twitter archive.
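For reference, wiring up those two submodules looks roughly like this (the exact clone URL for WordPress core may differ):

# WordPress core as a submodule so that trunk or a specific branch can be checked out.
git submodule add git://core.git.wordpress.org/ www/wordpress

# The version controlled copy of my Twitter archive.
git submodule add https://github.com/jeremyfelt/tweets.jeremyfelt.com tweets

# On a fresh clone, pull both submodules down.
git submodule update --init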

The arrangement goes along with the types of code involved:

  • (wp-config, mu-plugins) Code custom or very specific to jeremyfelt.com. This should exist in the repository.
  • (plugins, themes) Code that cannot be sanely managed with submodules AND that I want to manage through the WordPress admin. This should exist in the repository.
  • (WordPress, tweets) Code that lives in sanely maintained repositories and can be pulled in as submodules.

Anything in a submodule is really only necessary in the working directory we’re deploying from, so the pain points I normally have with submodules go away. I think (assume?) that’s because this is the purpose of submodules. 😉

Ok, all the code is in its right place at this point.

With the fabfile.py file in my directory, I have access to several commands at the command line. Here are the ones I actually use:

  • fab pull_plugins – Pull any plugins from production into the local plugins directory. This allows me to use WordPress to manage my plugins in production while keeping them in some sort of version control.
  • fab pull_themes – Pull any themes from production into the local themes directory. Again, I can use WordPress to manage these in production.
  • fab pull_uploads – Pull my uploads directory from production to maintain a backup of all uploaded content.
  • fab sync_themes – When I update WordPress via git, this command is a nice shortcut to sync over any changes to the default theme(s).
  • fab push_www – Push any local changes in the www directory to production. This is primarily used for any changes to WordPress.
  • fab push_tweets – Push any local changes in the tweets directory to production. This effectively updates tweets.jeremyfelt.com.
  • fab push_content – Push any plugin and theme updates that I may have done locally to production. I don’t have to separate these actions because I already have them in version control, so I’m not as worried about having a lot happen at once.

An obvious thing I’m missing is a method for backing up the database and syncing it locally. This is really only a few more lines in the fabfile.

I also need to reimplement a previous method I had for copying the site’s Nginx configuration, though that may also be part of another half completed provisioning attempt.

This workflow has been painless to me as an individual maintaining a production copy of code that is version controlled and locally accessible. It allows me to test new plugins and themes before adding them to version control. It also allows me to quickly test specific revisions of WordPress core in production.

I’m not sure how well this would expand for a team of developers deploying code. There are likely some easy ways to tighten up loose ends, mostly around internal communication and defining a release workflow. I am also certain that Fabric is more powerful than how I am using it. I look forward to digging in deeper at some point.

As with other posts in the series, I would love feedback! Please leave a comment or reach out with suggestions or questions.

Notes


[1] Renaming wp-content/ to content/… Don’t ever do this, ugh. Keeping it a directory up from WordPress is great, but keep it named wp-content. It is possible some of my problems also came from changing my content domain to content.jeremyfelt.com, but I blame a lot of it on content/. My current favorite is www/wordpress/, www/wp-content/, and www/wp-config.php.

Deployment Workflows, Part 1: The WSUWP Platform

This post is the first in a series of deployment workflows I use to get code into production.

Many of my weekdays are spent working on and around the WSUWP Platform, an open source project created to provide a large, multi-network, WordPress based publishing platform for universities.

Conceptually, a few things are being provided.

First, a common platform to manage many sites and many users on multiple networks. Achieving this objective takes more than a plugin or a theme. It requires hooking in as early as possible in the process, through sunrise and mu-plugins.

Second, a method to describe some plugins and themes that should be included when the project is built for production.

Third, a way to develop in a local environment without the requirement of a build process firing for any change to a plugin or theme.

The platform repository itself consists of WordPress, several must use plugins, a few drop-ins, a sunrise file, and a build process. These are the things that are absolutely mandatory for the platform to perform as intended. I went back and forth on the decision to include WordPress in its entirety rather than via submodule, but decided—beyond submodules being a pain in the ass—that it was important to provide the exact code being deployed. In certain situations, this could be useful to deploy patches for testing or start using a beta in production.

Outside of the platform, WSU has additional repositories unique to our production build. Basically, if a feature should exist for anyone using the platform in production, it should be an mu-plugin. If a feature is optional, it should be added through a repository of common plugins or common themes. Even more repositories exist for individual plugins and themes considered more optional than others.

A lot of repositories.

The build process for the platform, managed with Grunt, looks for anything in build-plugins/public, build-plugins/private, build-plugins/individual, build-themes/public, build-themes/private, and build-themes/individual (deep breath)—and smashes it all together into the appropriate build/www/wp-content/* directory alongside the files included with the WSUWP Platform project.

When all tasks are complete, the build directory contains:

  • build/www/wp-config.php
  • build/www/wordpress/
  • build/www/wp-content/advanced-cache.php
  • build/www/wp-content/index.php
  • build/www/wp-content/install.php
  • build/www/wp-content/object-cache.php
  • build/www/wp-content/sunrise.php
  • build/www/wp-content/mu-plugins/
  • build/www/wp-content/plugins/
  • build/www/wp-content/themes/

In local development, it’s likely that you would never run the build process. Instead, the following structure is always available to you:

  • www/wp-config.php
  • www/wordpress/
  • www/wp-content/plugins/
  • www/wp-content/themes/

This allows us to work on individual plugins and themes without worrying about how that impacts the rest of the platform. I can install any theme I want for testing (e.g. /www/wp-content/themes/make/) or have individual git repositories available anywhere (e.g. www/wp-content/plugins/wsuwp-sso-authentication/.git). All of these individual plugins and themes are ignored in the platform’s .gitignore so that only items added via the build process make it to production.

How everything makes it to production is another matter entirely.

A big objective at WSU is to reduce the amount of overhead required for something to make it into production. To achieve this, we use a plugin to handle deployments via hooks in GitHub. It seems like scary magic sometimes, but we even use the platform to deploy the platform.

Creating a new deployment for the WSUWP Platform

When a deployment is configured through the plugin, a URL is made available to configure for webhooks in GitHub. We’ve made the decision that any tag created on a repository in GitHub should start the deploy process. A log is available of each deployment as it occurs:

(Screenshot: a list of deployment instances.)

When a deployment request is received, things fire as such (a rough sketch of the cron-driven portion follows the list):

  1. A script on the server is fired to pull down the tagged version of that repository and transfer it to the proper “to be built” location.
  2. After the tagged version is ready, a file is created as a “ready to build” trigger.
  3. A cron job fires every minute looking for a “ready to build” trigger.
  4. If ready, another script on the server fires the build process.
  5. rsync is then used to sync this with what is already in production.
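A minimal sketch of that cron-driven portion; the trigger file name and the paths are assumptions:

#!/bin/bash
# Run from cron every minute, e.g.:
# * * * * * /var/deploy/wsuwp-platform-build.sh

build_root=/var/deploy/wsuwp-platform      # platform checkout location is an assumption
trigger="$build_root/ready-to-build"

# Do nothing unless a deployment left a "ready to build" trigger behind.
[ -f "$trigger" ] || exit 0
rm "$trigger"

# Run the Grunt build, then sync the result into production.
cd "$build_root" || exit 1
grunt
rsync -av build/www/ /var/www/wordpress/   # production path is an assumption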

There are a few obvious holes with this process that will need to be resolved as we expand.

  • If many tags are created at once, a lot of strange things could happen.
  • There is currently no great rollback method beyond manually refiring a create tag hook in GitHub.
  • It’s a deploy first, code review later model. This works for small teams, but will require more care as we expand our collaborations within the university.

All in all, our objectives are served pretty well.

Anyone on our team with the appropriate GitHub access can tag a release of a theme or plugin and have it appear in production within a minute or two. This has been especially helpful for those who aren’t as familiar with command line tools and need to deploy CSS and HTML changes in a theme.

Anyone contributing to individual themes or features via plugins doesn’t have to worry about the build process as a whole. They can focus on one piece and the series of scripts on the server handles the larger puzzle.

We’ve made a common platform available in the hopes of attracting collaborators and at the same time have a structure in which we can make our own decisions to implement various features and functionality.

I would love feedback on this process. Please leave a comment or reach out if you have any questions or suggestions!

A Series of Deployment Workflows

Brian Krogsgard had a series of tweets last night centered around the difficulty in finding the “right way” to manage code for WordPress. And he’s completely right. Things have come a long way since we used FTP to blindly transfer files or edit them directly on the server. We’re expected to have these solid processes down that treat our code and servers with respect. And most of us probably live with some internal shame or fear about the way that we’re handling our stuff in production.

Well, I do.

When I read “code management”, I really hear “deployment process”. I may be hijacking Brian’s original intent a bit, but I think this is an area where our work doesn’t get shared nearly enough. Plenty of code is published in the open and we sometimes talk about it. I don’t see much of the secret, terrifying things we do to get code to the server though.

So, over the next few days I’m going to write about the various workflows that I’m currently using to get stuff into production and I’m going to list them all here as I go.

And, as Rachel Baker mentioned, code reviews help a ton in advancing our skill sets. It would be great to have the same thing for these workflows. Please share your own and don’t hold back in critiquing mine!

My Deployment Workflows:

  1. The WSUWP Platform
  2. jeremyfelt.com
  3. WSU Indie Sites
  4. The WSU Spine
  5. Wrap-up

Links for PFS, DH, DHE, ECDHE, and SSL in general

So many acronyms.

I have many tabs open right now that I’m about to close and I’m not great at bookmarks. Here are some of the things I’ve been reading while trying to figure out PFS in SSL.

And I just bought this book: Bulletproof SSL and TLS