On reactions, from Twin Peaks, which I’ll write about more when we’re done. This from S02E10.

– Oh, as usual you’re overreacting.
– Am I? Maybe I am, but they’re my reactions. And the hurt I feel is my hurt, and how I react is none of your damn business.
– Dear, be sensible.
– I’m being very sensible. I want you out of this place. I want you out of my life. I don’t wanna be hurt by you anymore.

It’s a great exchange.

Deployment Workflows Wrap-up

This is a wrap-up post for the last few days I spent documenting a series of deployment workflows I use to get code into production.

While writing all of this up over the last several days, I was able to compile some of my thoughts about what a proper workflow should look like. I’m not convinced I’ve reached anything perfect, though I’m fairly happy with how much work we’re able to get done with these in use. It’s definitely a topic I’ll continue to think about.


All in all, I think my main guidelines for a successful workflow are:

  1. The one you use is better than the one you never implement.
  2. Communication and documentation are more important than anything else.

And I think there are a few questions that should be asked before you settle on anything.

Who is deploying?

This is the most important question. A good team deserves a workflow they’re comfortable with.

If a developer is comfortable with the command line and is the only one responsible for deploys, the possibilities for deployment are pretty much endless. The possibilities for pain are pretty high as well. You’ll likely change your mind a lot and that’s okay.

When you’re working with a team of varying talents and a mix of operating systems, you’ll need to craft something that is straightforward to use and straightforward to support. The front end interface is most important; the dots that connect it can usually be changed.

Push or Pull?

One of my workflows is completely push based through Fabric. This means that every time I want to deploy code, Fabric pushes my code base to the remote server with rsync. With a large codebase, this can take a bit. With a crappy Internet connection, things can get completely unreliable.

Two workflows are entirely pull based. Well, mostly. GitHub is told to ping the server. The server then pulls down whatever data it needs to from GitHub. A push from a local machine to GitHub could initiate this. Clicking the “New Release” button on the web could do the same.

One workflow combines things a bit. I push all changes in the repository to a git remote. That machine then pulls from the master repository. This workflow is weird and should be replaced.

Overall I prefer the workflows that are pull oriented. I can’t imagine using a push workflow if more than one person were deploying, as the possibility of things happening out of order rises as more people are involved.

When do we deploy?

Whatever processes are in place to get code from a local environment to production, there needs to be some structure about the when.

I’m very much in favor of using real versioning. I’ve become more and more a proponent of semantic versioning because it’s fairly easy to communicate. For some of the repositories I deploy, I’ll also use a raw build version – 0001, 0002, etc… – and that works as well.

This goes hand in hand with communication. Either on GitHub or somewhere else, a conversation about milestones and release dates is happening so that everyone knows version 0.10.1 is shipping this afternoon. Or, everyone expects the 5-10 deployments happening each day.

The Guardian’s developer team posted an article yesterday on continuous delivery and the use of frequent deployments. I would recommend reading through that to get an idea for some of the benefits and challenges.

I think the following is my favorite from that piece:

We view an application with a long uptime as a risk. It can be a sign that there’s fear to deploy it, or that a backlog of changes is building up, leading to a more risky release. Even for systems that are not being actively developed, there’s value in deploying with some regularity to make sure we still have confidence in the process. One note of caution: deploying so frequently can mask resource leaks. We once had a service fail over a bank holiday weekend, as it had never previously run for three days without being restarted by a deploy!


You may have noticed—I’m missing a ton of possibilities.

I think the one that stands out the most is Capistrano, something I’ve never gotten too familiar with. The answer to “who deploys?” at WSU made me exclude this early to avoid either the Ruby dependency or having to create a complex workflow in a virtual machine. From what I’ve heard, this is powerful and I think it’s worth a look.

Beanstalk provides repositories and automatic deployments over FTP and SSH. If you’re already a Beanstalk customer, this is definitely worth a look as the perceivable pain is pretty low. I have not actually administered this myself, only used it, so I’m not sure what it looks like from an admin perspective.

And there are more, I’m very certain.


That’s all I have for deployment. :)

I’m very much interested in continuing the conversation. If you document a workflow, let me know and I’ll add it to a master list on the first post. I’m also wide open for feedback and/or critique. Leave a comment on any of the posts or reach out!

And once again, here are mine:

  1. The WSUWP Platform
  3. WSU Indie Sites
  4. The WSU Spine

Deployment Workflows, Part 4: WSU Spine

This post is the fourth in a series of deployment workflows I use to get code into production.

This is the one non-WordPress deployment writeup, though still interesting.

The WSU Spine plays the role of both branding and framework for websites created at WSU. It provides a consistent navigation experience, consistent default styles, and the proper University marks. At the same time, a fully responsive CSS framework makes it easy for front end developers at WSU to create mobile friendly pages. For sites that are in the WSUWP Platform, we provide a parent theme that harnesses this framework.

One of the great parts about maintaining a central framework like this is being able to serve it from a single location – – so that the browsers of various visitors can cache the file once and not be continually downloading Spine versions while they traverse the landscape of WSU web.

It took us a bit to get going with our development workflow, but we finally settled on a good model later in 2014 centered around semantic versioning. We now follow a process similar to other libraries hosted on CDNs.

Versions of spine.min.js and spine.min.css for our current version 1.2.2 are provided at:

  • – Files here are cached for an hour. This major version URL will always be up to date with the latest version of the Spine. If we break backward compatibility, the URL will move up a major version to /spine/2/ so that we don’t break live sites. This is our most popular URL.
  • – Files here are cached for 120 days. This is built for every minor and patch release, which allows for longer cache times and fine-grained control for developers in their own projects. It does increase the chance that older versions of the Spine will be in the wild. We have not seen any traffic on this URL yet.
  • – Files here are cached for 10 minutes. This is built every time the develop branch is updated in the repository and is considered bleeding edge and often unstable.

So, our objectives:

  1. Deploy to one directory for every change to develop.
  2. Deploy to the major version directory whenever a new release is tagged.
  3. Create and deploy to a minor/patch version directory whenever a new release is tagged.

As with the WSUWP Platform, we use GitHub’s webhooks. For this deployment process, we watch for both the create and push events using a very basic PHP script on the server rather than a plugin.

In short, that script does the following:

  • If we receive a push event and it is on the develop branch, then we fire a deploy script with develop as the only argument.
  • If we receive a create event and it is for a new tag that matches our #.#.# convention, then we fire a deploy script with this tag as the only argument.
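The decision logic in those two bullets is simple enough to sketch in shell. This is an illustration only; the real thing is a PHP script parsing GitHub’s JSON payload, and the function name and inputs here are made up:

```shell
# Hypothetical sketch of the webhook receiver's decision logic.
# Echoes the argument to pass to the deploy script, or nothing
# if the event should be ignored.
should_deploy() {
  local event="$1" ref="$2"

  if [ "$event" = "push" ] && [ "$ref" = "refs/heads/develop" ]; then
    # A push to the develop branch: deploy develop.
    echo "develop"
  elif [ "$event" = "create" ] && echo "$ref" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$'; then
    # A new tag matching our #.#.# convention: deploy the tag.
    echo "$ref"
  fi
}
```

A push to develop yields develop, a 1.2.2 style tag yields the tag itself, and everything else is ignored.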

The deploy script it fires is short as well. The brief play-by-play:

  1. Get things clean in the current repo.
  2. Checkout whichever branch or tag was passed as the argument.
  3. Update NPM dependencies.
  4. grunt prod to run through our build process.
  5. Move files to the appropriate directories.
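A minimal sketch of what that script does, with hypothetical repo and destination paths (the real one lives on the server and is uglier):

```shell
#!/bin/bash
# Hypothetical sketch of the Spine deploy steps. Paths are
# illustrative, not the real server layout.
# Usage: deploy_spine <branch-or-tag>
deploy_spine() {
  local ref="$1"
  local repo="/var/repos/spine"       # assumed repo checkout location
  local dest="/var/www/spine/$ref"    # assumed destination directory

  cd "$repo" || return 1
  git reset --hard               # 1. get things clean in the current repo
  git fetch --all --tags
  git checkout "$ref"            # 2. check out the branch or tag passed in
  npm install                    # 3. update NPM dependencies
  grunt prod                     # 4. run through our build process
  mkdir -p "$dest"               # 5. move built files to the right place
  cp -r dist/. "$dest/"
}

# Example: deploy_spine develop
```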

Really, really smooth. Also really, really basic and ugly. :)

We’ve actually only done 2 or 3 releases with this model so far, so I’m not completely convinced there aren’t any bugs. It’s pretty easy to maintain as there really are few tasks that need to be completed. Build files, get files to production.

Another great bonus is the Spine version switcher we have built as a WordPress Customizer option in the theme. We can go to any site on the platform and test out the develop branch if needed.

In the near future, I’d like to create directories for any new branch that is created in the repository. This will allow us to test various bug fixes or new features before committing them to develop, making for a more stable environment overall.

As with other posts in the series, I would love feedback! Please leave a comment or reach out with suggestions or questions.

Deployment Workflows, Part 3: WSU Indie Sites

This post is the third in a series of deployment workflows I use to get code into production.

While the WSUWP Platform is the preferred place for new sites at WSU, there are times when a single site WordPress installation is required.

Until about February of 2014, all new sites were configured on our WSU Indie server because the platform was not ready and setting things up in IIS was getting old. Now, a site only goes here if it is going to do something crazy that may affect the performance of the platform as a whole. It’s a separate server and definitely an outlier in our day to day workflows. In fact, I’ve been able to remove all sites except for three from that server, with one more scheduled.

Because WSU Indie is set up to handle multiple single site installs, I needed a way to deploy everything except for WordPress, which is controlled on this server as part of provisioning.

The best current example of how each repository is configured is probably our old P2 configuration:

  • config/ – Contains the Nginx configuration for both local (Vagrant) and production instances.
  • wp-content/ – Contains mu-plugins, plugins, and themes.
  • .rsync-exclude – A list of files to exclude when rsync is used on the server during deployment.

When placed alongside a wordpress/ directory and an existing wp-config.php file, everything just works.
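As an illustration, an .rsync-exclude file for a repository like this might contain something like the following; the actual list varies per repository:

```shell
# Write a hypothetical .rsync-exclude. These entries are illustrative;
# the point is to keep repo plumbing and config out of production.
cat > .rsync-exclude <<'EOF'
.git
.rsync-exclude
config
README.md
EOF

# On the server, the deploy step would then use it something like:
# rsync -rv --delete --exclude-from=.rsync-exclude /var/repos/site/ /var/www/site/
```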

On the server, each configured indie site has its own directory setup based on descriptions in a Salt pillar file. An example of what this pillar file looks like is part of our local indie development README.

When provisioning runs, the final web directory looks something like this:

  • /var/www/
  • /var/www/
  • /var/www/
  • /var/www/
  • /var/www/
  • /var/www/
  • /var/www/
  • /var/www/
  • /var/www/

Our job with deployment is to get the files from the repository into the production directory.

I decided git hooks would do the trick and be fun to try. In a nutshell, a git repository has a .git/hooks/ directory where you can define one or more shell scripts that fire at various points during various git actions. Pre-commit, post-commit, etc… These hooks can be configured locally or on a remote server. They are similar in a way to GitHub’s webhooks and can be very powerful.

We have a repository which contains all of these scripts, but it’s currently one of the few that we have as private. I’m not entirely sure why, so I’ll replicate a smaller version of one here.

The file structure we’re dealing with during deployment is this:

  • /var/repos/ – A bare git repository set up for receiving deploy instructions.
  • /var/repos/ – The hook that will fire after receiving a push from a developer.
  • /var/repos/ – The directory where the master branch is always checked out.
  • /var/www/ – Production.

The post-receive hook for the WSU News repository is a short shell script.

For this to work, I add a remote named “deploy” to my local repository that points to the /var/repos/ directory in production. Whenever I want to deploy the current master branch, I type git push deploy master. This sets in motion the following:

  • /var/repos/ receives my push of whatever happens to come from my local repo.
  • /var/repos/ fires when that push is done.
  • /var/repos/ is instructed by the post-receive hook to fetch the latest master.
  • rsync is then used to sync the non-excluded files from /var/repos/ to /var/www/
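As a rough sketch, with site names and paths hypothetical, the post-receive hook boils down to:

```shell
#!/bin/bash
# Hypothetical sketch of a post-receive hook. In a bare repository,
# git sets GIT_DIR to the repo itself, so unset it before working
# in the separate checkout directory.
post_receive() {
  local checkout="/var/repos/site-checkout"   # where master stays checked out
  local production="/var/www/site/"           # production web root

  unset GIT_DIR
  cd "$checkout" || return 1
  git fetch origin
  git checkout -f origin/master               # get the latest master

  # Sync everything except the excluded files into production.
  rsync -r --delete --exclude-from=.rsync-exclude "$checkout/" "$production"
}
```

On the local side, the remote only needs to be added once, with something like git remote add deploy user@server:/var/repos/site.git (a made-up address); after that, git push deploy master does everything.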

And boom, latest master is in production.

There are a few caveats:

  1. There is no way to roll back, unless you revert in master.
  2. There is no way to deploy specific versions; you must assume that master is stable.
  3. It is user friendly only to someone comfortable with the git command line, though it’s probably possible to set up a GUI with multiple remotes. This is not necessarily a bad thing.
  4. It requires a public key from each developer to be configured on the server for proper authentication. This is also not necessarily a bad thing.

I think this is a pretty good example of how one can start to use git hooks for various things. The script above is definitely not a great long term solution, and if I hadn’t started focusing more on the health of the WSUWP Platform, things likely would have changed drastically. That said, once it was there it worked for multiple sites and multiple people on our team—and having something that works is pretty much the biggest objective of all.

As with other posts in the series, I would love feedback! Please leave a comment or reach out with suggestions or questions.

Deployment Workflows, Part 2:

This post is the second in a series of deployment workflows I use to get code into production.

The site you’re reading this on is often a neglected playground of sorts. I’ll run into an interesting method of provisioning the server or maintaining code, apply it halfway and then get distracted with something else.

That said, somewhere around 16 months ago, I tried out Fabric to manage deployment actions for individual sites and decided to stick with it.

A fair warning is in order—I found an immediate, quick way to use Fabric to accomplish a handful of tasks and then let it be. It’s likely much more powerful than a wrapper for rsync, but that’s basically how I’m using it.

Ok. Backing up a bit, I think this workflow makes the most sense when you look at how the code is arranged.

  • www/ is where WordPress and its config files live.
  • www/wordpress is a submodule pointed at so that I can run trunk or a specific branch.
  • content/ is my wp-content directory. 1
  • tweets/ is a submodule pointed at, where I maintain a version controlled copy of my Twitter archive.

The arrangement goes along with the types of code involved:

  • (wp-config, mu-plugins) Code custom or very specific to this site. This should exist in the repository.
  • (plugins, themes) Code that cannot be sanely managed with submodules AND that I want to manage through the WordPress admin. This should exist in the repository.
  • (WordPress, tweets) Code that lives in sanely maintained repositories and can be used as submodules.

Anything in a submodule is really only necessary in the working directory we’re deploying from, so the pain points I normally have with submodules go away. I think (assume?) that’s because this is the purpose of submodules. ;)

Ok, all the code is in its right place at this point.

With the fabfile in my directory, I have access to several commands at the command line. Here are the ones I actually use:

  • fab pull_plugins – Pull any plugins from production into the local plugins directory. This allows me to use WordPress to manage my plugins in production while keeping them in some sort of version control.
  • fab pull_themes – Pull any themes from production into the local themes directory. Again, I can use WordPress to manage these in production.
  • fab pull_uploads – Pull my uploads directory from production to maintain a backup of all uploaded content.
  • fab sync_themes – When I update WordPress via git, this command is a nice shortcut to sync over any changes to the default theme(s).
  • fab push_www – Push any local changes in the www directory to production. This is primarily used for any changes to WordPress.
  • fab push_tweets – Push any local changes in the tweets directory to production. This effectively updates the public copy of my tweet archive.
  • fab push_content – Push any plugin and theme updates that I may have done locally to production. I don’t have to separate these actions because I already have them in version control, so I’m not as worried about having a lot happen at once.

An obvious thing I’m missing is a method for backing up the database and syncing it locally. This is really only a few more lines in the fabfile.

I also need to reimplement a previous method I had for copying the site’s Nginx configuration, though that may also be part of another half completed provisioning attempt.

This workflow has been painless to me as an individual maintaining a production copy of code that is version controlled and locally accessible. It allows me to test new plugins and themes before adding them to version control. It also allows me to quickly test specific revisions of WordPress core in production.

I’m not sure how well this would expand for a team of developers deploying code. There are likely some easy ways to tighten up loose ends, mostly around internal communication and defining a release workflow. I am also certain that Fabric is more powerful than how I am using it. I look forward to digging in deeper at some point.

As with other posts in the series, I would love feedback! Please leave a comment or reach out with suggestions or questions.


1: Renaming wp-content/ to content/… Don’t ever do this, ugh. Keeping it a directory up from WordPress is great, but keep it named wp-content. It is possible some of my problems also came from changing my content domain to, but I blame a lot of it on content/. My current favorite is www/wordpress/, www/wp-content/, and www/wp-config.php.

Deployment Workflows, Part 1: The WSUWP Platform

This post is the first in a series of deployment workflows I use to get code into production.

Many of my weekdays are spent working on and around the WSUWP Platform, an open source project created to provide a large, multi-network, WordPress based publishing platform for universities.

Conceptually, a few things are being provided.

First, a common platform to manage many sites and many users on multiple networks. Achieving this objective takes more than a plugin or a theme. It requires hooking in as early as possible in the process, through sunrise and mu-plugins.

Second, a method to describe some plugins and themes that should be included when the project is built for production.

Third, a way to develop in a local environment without the requirement of a build process firing for any change to a plugin or theme.

The platform repository itself consists of WordPress, several must use plugins, a few drop-ins, a sunrise file, and a build process. These are the things that are absolutely mandatory for the platform to perform as intended. I went back and forth on the decision to include WordPress in its entirety rather than via submodule, but decided—beyond submodules being a pain in the ass—that it was important to provide the exact code being deployed. In certain situations, this could be useful to deploy patches for testing or start using a beta in production.

Outside of the platform, WSU has additional repositories unique to our production build. Basically, if a feature should exist for anyone using the platform in production, it should be an mu-plugin. If a feature is optional, it should be added through a repository of common plugins or common themes. Even more repositories exist for individual plugins and themes considered more optional than others.

A lot of repositories.

The build process for the platform, managed with Grunt, looks for anything in build-plugins/public, build-plugins/private, build-plugins/individual, build-themes/public, build-themes/private, and build-themes/individual (deep breath)—and smashes it all together into the appropriate build/www/wp-content/* directory alongside the files included with the WSUWP Platform project.

When all tasks are complete, the build directory contains:

  • build/www/wp-config.php
  • build/www/wordpress/
  • build/www/wp-content/advanced-cache.php
  • build/www/wp-content/index.php
  • build/www/wp-content/install.php
  • build/www/wp-content/object-cache.php
  • build/www/wp-content/sunrise.php
  • build/www/wp-content/mu-plugins/
  • build/www/wp-content/plugins/
  • build/www/wp-content/themes/
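The gist of that smashing-together step can be sketched in shell; the directory names match the list above, everything else is illustrative, and the real process is a set of Grunt tasks:

```shell
# Shell sketch of what the Grunt build's copy step accomplishes:
# gather plugins and themes from every build-* source directory
# into the final build/www/wp-content/ tree.
build_wp_content() {
  local type dir
  mkdir -p build/www/wp-content/plugins build/www/wp-content/themes

  for type in public private individual; do
    dir="build-plugins/$type"
    [ -d "$dir" ] && cp -r "$dir/." build/www/wp-content/plugins/
    dir="build-themes/$type"
    [ -d "$dir" ] && cp -r "$dir/." build/www/wp-content/themes/
  done
  return 0
}
```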

In local development, it’s likely that you would never run the build process. Instead, the following structure is always available to you:

  • www/wp-config.php
  • www/wordpress/
  • www/wp-content/plugins/
  • www/wp-content/themes/

This allows us to work on individual plugins and themes without worrying about how that impacts the rest of the platform. I can install any theme I want for testing (e.g. /www/wp-content/themes/make/) or have individual git repositories available anywhere (e.g. www/wp-content/plugins/wsuwp-sso-authentication/.git). All of these individual plugins and themes are ignored in the platform’s .gitignore so that only items added via the build process make it to production.

How everything makes it to production is another matter entirely.

A big objective at WSU is to reduce the amount of overhead required for something to make it into production. To achieve this, we use a plugin to handle deployments via hooks in GitHub. It seems like scary magic sometimes, but we even use the platform to deploy the platform.

Creating a new deployment for the WSUWP Platform

When a deployment is configured through the plugin, a URL is made available to configure for webhooks in GitHub. We’ve made the decision that any tag created on a repository in GitHub should start the deploy process. A log is available of each deployment as it occurs:

A list of deployment instances

When a deployment request is received, things fire as such:

  1. A script on the server is fired to pull down the tagged version of that repository and transfer it to the proper “to be built” location.
  2. After the tagged version is ready, a file is created as a “ready to build” trigger.
  3. A cron job fires every minute looking for a “ready to build” trigger.
  4. If ready, another script on the server fires the build process.
  5. rsync is then used to sync this with what is already in production.
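Steps 2 and 3 amount to a simple trigger-file handoff between the webhook script and cron, which can be sketched like so (the filename is hypothetical):

```shell
# Sketch of the "ready to build" trigger pattern. The webhook script
# drops a trigger file; a cron job polls every minute and fires the
# build when it finds one.
TRIGGER="/tmp/wsuwp-ready-to-build"

# Webhook side: after staging the tagged code, signal readiness.
signal_ready() {
  touch "$TRIGGER"
}

# Cron side: if the trigger exists, consume it and build.
check_and_build() {
  if [ -f "$TRIGGER" ]; then
    rm "$TRIGGER"
    echo "building"    # stand-in for the real build + rsync scripts
  fi
}
```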

There are a few obvious holes with this process that will need to be resolved as we expand.

  • If many tags are created at once, a lot of strange things could happen.
  • There is currently no great rollback method beyond manually refiring a create tag hook in GitHub.
  • It’s a deploy first, code review later model. This works for small teams, but will require more care as we expand our collaborations within the university.

All in all, our objectives are served pretty well.

Anyone on our team with the appropriate GitHub access can tag a release of a theme or plugin and have it appear in production within a minute or two. This has been especially helpful for those who aren’t as familiar with command line tools and need to deploy CSS and HTML changes in a theme.

Anyone contributing to individual themes or features via plugins doesn’t have to worry about the build process as a whole. They can focus on one piece and the series of scripts on the server handles the larger puzzle.

We’ve made a common platform available in the hopes of attracting collaborators and at the same time have a structure in which we can make our own decisions to implement various features and functionality.

I would love feedback on this process. Please leave a comment or reach out if you have any questions or suggestions!

A Series of Deployment Workflows

Brian Krogsgard had a series of tweets last night centered around the difficulty in finding the “right way” to manage code for WordPress. And he’s completely right. Things have come a long way since we used FTP to blindly transfer files or edit them directly on the server. We’re expected to have these solid processes down that treat our code and servers with respect. And most of us probably live with some internal shame or fear about the way that we’re handling our stuff in production.

Well, I do.

When I read “code management”, I really hear “deployment process”. I may be hijacking Brian’s original intent a bit, but I think this is an area where our work doesn’t get shared nearly enough. Plenty of code is published in the open and we sometimes talk about it. I don’t see much of the secret, terrifying things we do to get code to the server though.

So, over the next few days I’m going to write about the various workflows that I’m currently using to get stuff into production and I’m going to list them all here as I go.

And, as Rachel Baker mentioned, code reviews help a ton in advancing our skill sets. It would be great to have the same thing for these workflows. Please share your own and don’t hold back in critiquing mine!

My Deployment Workflows:

  1. The WSUWP Platform
  3. WSU Indie Sites
  4. The WSU Spine
  5. Wrap-up

Show Your Work – Austin Kleon

According to my Amazon order date, I preordered Show Your Work by Austin Kleon a couple of weeks before its release in early 2014 and still waited until now to finally read it.

This was a great, quick read with some inspiring moments. As I read the book, I continued adding people to a list in my head that should read it next. If I could convince everyone on our team to read this, bonus. Our WSU Labs initiative excites me most as a medium for researchers to talk about their work in process.

A couple of choice quotes. First from way #4, “Open up your Cabinet of Curiosities”:

“The problem with hoarding is you end up living off your reserves. Eventually, you’ll become stale. If you give away everything you have, you are left with nothing. This forces you to look, to be aware, to replenish.… Somehow the more you give away, the more comes back to you.” – Paul Arden

And then from way #7, “Don’t Turn Into Human Spam”:

“If you want to be accepted by a community, you have to first be a good citizen of that community. If you’re only pointing to your own stuff online, you’re doing it wrong. You have to be a connector. The writer Blake Butler calls this being an open node. If you want to get, you have to give. If you want to be noticed, you have to notice. Shut up and listen once in a while.”

Also in that section, the story of knuckleball pitchers comparing techniques rather than hiding them was great.

It will be interesting to see how the other book on my shelf titled Show Your Work compares to this one.

Vague focuses and specific goals for 2015

When 2015 is finishing up, I hope to look back and be happy that I’ve done some of these while skipping others to do things even cooler.

In no particular order.

  • Read more books. I grew up reading books obsessively and I’ve lost that somewhat over the years. I still read constantly, but the draw of article after article on the Internet doesn’t allow for long periods of focus. If I read 25 books this year, fantastic.
  • Get rid of at least 10 physical books that I read this year while switching to ebooks for everything else. We reduced a ton in 2010, but the piles start to add up.
  • Any physical books purchased or received in 2015 should be read immediately and passed on to a willing reader. Exceptions made for reference and study material.
  • Read a handful of academic papers. I spend so much time thinking about the future of open access publishing, I should be a user.

The theme to 2015 seems to be about reading… :)

  • Write more. An average of one thoughtful post a week wouldn’t be horrible. Sharing more off the cuff thoughts here rather than Twitter would be wonderful.
  • Talk about my work more.
  • Get smarter about personal encryption.
  • Continue reducing. Seriously, get rid of those 2 laptops and the “netbook”.
  • Make measurable progress on the one product idea. Starting at 0.
  • At least 100 hours of freelance work.
  • Visit one new country, Glacier National Park, and the coast between Los Angeles and San Diego.
  • Make big strides in WordPress core for multisite.
  • A configurable VVV.
  • Get good at Backbone.
  • Ask for more advice and spend more time thinking about it.

A productive year, 2014

2014 was a good, productive year. Many, many things happened and many, many things shipped. I’ll take it.

Washington State University

As 2014 started, things were in full swing at WSU. We launched our first sites on the WSUWP Platform in the middle of February and have continued marching ever since. We’re now at 39 networks with 429 sites and 704 users. In the process, we’re sharing 117 repositories of our work on GitHub. Crazy!

My primary focus remains the central publishing platform, WSUWP, and the server provisioning that maintains that and other server instances. I continue to look for ways to help guide anyone toward sharing their work.

I think my favorite thing to come out of it all has been the open lab sessions we started holding in May. Every Friday morning a group of around 10-15 arrives and talks about the web for a couple hours. I’m hoping to promote this more throughout the university in 2015 so that we need to find a larger space.

Notables: College of Business, Medicine, Hydrogen Lab, College of Engineering and Architecture, SWWRC, WSU Projects, WSU Labs, WSU Hub.

Varying Vagrant Vagrants

It was also a great year for VVV. Just about a year ago, we transitioned to an organization on GitHub. A few months later, we started the process of choosing an open source license. On October 7th, it was so.

Due to the productive year in other areas, and our restraint in changing a codebase that was in the middle of a licensing decision, it was a very slow release year. We did do quite a bit though, and both our 1.1 and 1.2.0 releases were great. I’m excited about the things to come in 2015.


WordPress

I love WordPress. And it’s been a wonderful WordPress year.

3.9, 4.0, and 4.1 were such great releases, and so many things are coming together for even greater releases next year. I was humbled and happy to be a guest committer for the 4.1 release cycle. While I didn’t accomplish everything I wanted to, I was happy that we kept marching. The working group that has started to form around multisite will lead to great things soon, I think.

I was really happy with my talk at WordCamp Seattle, and had a great time before, during, and after. Most fun was finding my coworkers off in their own groups during contributor day, contributing away.

WordCamp Vancouver was excellent as always. The community we have in Cascadia is so much fun. I will now take an extra day every time I go up so that I can (a) get a beer tour with Flynn and (b) go sightseeing.

WordCamp San Francisco was amazing. I was very happy with how my lightning talk turned out and had some great conversations as a result with others in higher education. There aren’t really words to describe the experience at the community summit and contributor days. What an intense week.

And the Pullman WordPress Meetup! We’re now 24 strong and have had 7 successful meetups. Every month I leave wondering why it took an entire year to finally start this up. We have such a great community of people.

Web Conference at Penn State

The Web Conference at Penn State was a good break from WordCamps and a much different crowd than I’m used to. I wish I had a video to share, but no go. PSU was a great host and I met several people on the web team(s) there and came away very inspired by what others at big schools were already doing with WordPress.


No moving! We stayed in Pullman and we stayed in the same rental, a 12 minute walk to work. After all the moving we’ve done over the last several years, it was nice to pause for a minute.

We did travel a bit. I’m happy to have lived in this area as the scenery is pretty amazing. We made it up to Nelson, BC a couple times. To Missoula, MT twice. A route almost entirely around Idaho on the way down and back from the Grand Tetons. A few weeks back home in IL. A crazy trip to State College, PA via midnight rental from Pittsburgh. A nice walk around Bowen Island after a ferry from Vancouver.


Strong Belgian Ale, Burtonian English Pale Ale, Blackberry Stout, and a Scottish ale a couple days into its primary ferment. While I’d like to ramp up on variety, that will likely only happen if I switch to smaller carboys. 5 gallons goes a long way!

Now it’s time to continue watching Twin Peaks and pop some bubbly at midnight. Reflecting can wait another year. You all are wonderful, thanks for being here and a happy 2015!