How we’re using the WP REST API at Washington State University

As I write this, we have the WP REST API enabled for 1083 sites across 54 networks on our single installation of WordPress at Washington State University.

It’s probably worth noting that this only counts as one active installation in the repository stats. 🙁 It’s definitely worth noting how we use it! 🙂

Our primary use for the WP REST API is to share content throughout the University via our WSUWP Content Syndicate plugin. With the simple wsuwp_json shortcode, anyone is able to embed a list of headlines from articles published on

And just by changing the host, I can switch over and embed a couple of recent headlines from
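For illustration, a shortcode call looks roughly like this — the attribute names and host here are assumptions based on the description above, not confirmed plugin syntax:

```
[wsuwp_json host="news.wsu.edu" count="5"]
```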

Having the ability to share information across the University is very useful to us. It helps various groups and sites throughout the ecosystem feel more connected, both as visitors and as site owners.

Of course, we could have used a pre-existing syndication format like RSS as a solution, but a REST API is so much more flexible. It didn’t take much work to extend the initial plugin using things like register_rest_field() to support and display results from the central people directory we have in progress.
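As a sketch of the register_rest_field() approach — the field name and callback here are illustrative, not the actual plugin code:

```php
<?php
// Illustrative only: expose an extra field on post responses.
// The real people-directory fields and names may differ.
add_action( 'rest_api_init', function () {
	register_rest_field( 'post', 'wsuwp_headline_extra', array(
		// $post is the array of prepared response data for the post.
		'get_callback' => function ( $post ) {
			return get_post_meta( $post['id'], 'wsuwp_headline_extra', true );
		},
		'schema' => array(
			'description' => 'Extra data attached to the post.',
			'type'        => 'string',
		),
	) );
} );
```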

That’s me, pulled in from our people API.

This kind of data flexibility is a big part of our vision for the future of the web at WSU. Soon we’ll be able to highlight information for research faculty that may help to connect them with other groups working on similar topics. We’ll have ways to create articles on the edge of the network and have them bubble up through the various layers of the university—department, college, central news. And we’ll be able to start tying data to people in a smarter way so that we can help to make sure voices throughout the university are heard.

And that’s just our first angle! One day I’ll expand on how we see the REST API changing our front end workflow in creative ways.

A Method for Managing Mixed HTTP/HTTPS Sites in Multisite

This is a brief rundown of the method we’re currently using at WSU to manage mixed HTTP/HTTPS configurations in a multi-network WordPress setup.

Our assumptions:

  • Sites that are HTTP (HTTPS optional) on the front end should be forced HTTPS in any admin area.
  • Some sites should be forced HTTPS everywhere. This may be because of form inputs or because it’s a nice thing to do.
  • New domains may not immediately have certificates. We can measure risk and provide brief HTTP admin support—usually with trusted users on a wired network.

To force HTTPS in admin areas, we use the WordPress constant FORCE_SSL_ADMIN. To determine whether this can be enabled, we start with the assumption that it should and then check for a stored option attached to the currently requested domain telling us otherwise.

A bit further down, we use this information to actually set the constant.
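In a sketch, that logic looks something like this — the option name pattern and structure are illustrative, not our exact configuration code:

```php
<?php
// Sketch: assume admin requests should be HTTPS unless an option
// stored for the requested domain says otherwise.
$force_ssl = true;

$host = strtolower( $_SERVER['HTTP_HOST'] );
if ( get_option( $host . '_ssl_disabled' ) ) {
	// No certificate yet; allow HTTP admin access for now.
	$force_ssl = false;
}

// A bit further down, the constant is set from that decision.
define( 'FORCE_SSL_ADMIN', $force_ssl );
```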

This option is managed through our WSUWP TLS plugin, which tracks new domains and allows users who aren’t server admins to start the process of CSR generation and certificate upload. Once the domain goes through the entire process and is verified as working, the foo.bar_ssl_disabled option is deleted and admin page loads will be forced to HTTPS.

While the domain is going through this process, it will be accessible via HTTP in the admin, though the cookies generated on other sites will not work as they are flagged as secure. There’s probably some stuff I’m not aware of here, which is another reason to keep this very limited. 😬

Forcing HTTPS everywhere is much easier, as we can redirect all HTTP requests for a domain to HTTPS in nginx (or Apache). At that point, we’ll set siteurl and home for the site to HTTPS as well so that WordPress generates HTTPS URLs for everything.
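The server-side redirect is a standard pattern; an nginx sketch with a placeholder server name:

```nginx
# Redirect all HTTP requests for a fully HTTPS domain.
server {
    listen 80;
    server_name example.wsu.edu;
    return 301 https://$host$request_uri;
}
```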

(Screenshot: Screen Shot 2015-04-24 at 9.21.18 AM)

I love that screenshot.

In a nutshell: assume all admin requests are HTTPS, but have a config flag that allows you to offer temporary HTTP access. If a domain can be forced HTTPS everywhere, then handle that in the nginx/apache config.

First thoughts on our new

Today we launched a gorgeous new home page at WSU. For the most part everything went as planned and definitely without catastrophe. We’ll have a full stack write-up with more details at some point soon (still a few more things to launch), but I’ve had a few thoughts throughout the day that I wanted to note.

(Screenshot: Screen Shot 2015-03-24 at 11.24.01 PM)

We’re HTTPS only. And that’s pretty freaking cool. It was a year ago today that we flipped the switch on WSU News to SPDY and ever since then I couldn’t wait to get the root domain. I had anticipated some push-back, though I don’t know why, and we haven’t heard a peep. I plan on running a script through a massive list of public university websites to see how many do this. Many don’t even support TLS on the home page, let alone force it.

(Screenshot: Screen Shot 2015-03-24 at 11.28.24 PM)

Our root domain is on WordPress. Typing that address in for the first time today after everything went live felt really, really cool. I don’t think that feeling is going to wear off. Even though this is site ~600 to launch on our platform, it’s a huge statement that the University is behind us on this. I don’t remember all of our conversations, but I don’t think that having the root on WordPress was really on our radar for the first 2 years. Dig it.

We’re pretty damn fast. That’s become a lot easier these days. But we have a lot of content on the home page—and a really big map—and we still serve the document to the browser super quickly. I actually screwed up pretty big here by microcaching with nginx at the last minute. It made things even faster, but cached a bad redirect for quite a while. Lessons learned, and we’ll keep tweaking—especially with image optimization—but I love that we went out of the gate with such a good-looking waterfall.

And as I stormed earlier in my series of “I heart GPL” tweets, every part of our central web is open source. We publish our server provisioning, our WordPress multi-network setup, our brand framework, our themes, and our plugins. 134 repositories and counting. Not everything is pretty enough or documented enough, and much of it will serve more as an example than as a product. But everything is out there and we’re sharing and doing our best to talk about it more.

Lots of this makes me happy. More to come! 🙂

Deployment Workflows, Part 1: The WSUWP Platform

This post is the first in a series of deployment workflows I use to get code into production.

Many of my weekdays are spent working on and around the WSUWP Platform, an open source project created to provide a large, multi-network, WordPress based publishing platform for universities.

Conceptually, a few things are being provided.

First, a common platform to manage many sites and many users on multiple networks. Achieving this objective takes more than a plugin or a theme. It requires hooking in as early as possible in the process, through sunrise and mu-plugins.

Second, a method to describe some plugins and themes that should be included when the project is built for production.

Third, a way to develop in a local environment without requiring a build process to fire for every change to a plugin or theme.

The platform repository itself consists of WordPress, several must-use plugins, a few drop-ins, a sunrise file, and a build process. These are the things that are absolutely mandatory for the platform to perform as intended. I went back and forth on the decision to include WordPress in its entirety rather than via submodule, but decided—beyond submodules being a pain in the ass—that it was important to provide the exact code being deployed. In certain situations, this makes it possible to deploy patches for testing or to start using a beta in production.

Outside of the platform, WSU has additional repositories unique to our production build. Basically, if a feature should exist for anyone using the platform in production, it should be an mu-plugin. If a feature is optional, it should be added through a repository of common plugins or common themes. Even more repositories exist for individual plugins and themes considered more optional than others.

A lot of repositories.

The build process for the platform, managed with Grunt, looks for anything in build-plugins/public, build-plugins/private, build-plugins/individual, build-themes/public, build-themes/private, and build-themes/individual (deep breath)—and smashes it all together into the appropriate build/www/wp-content/* directory alongside the files included with the WSUWP Platform project.
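The Grunt side of that can be sketched as a copy task. Only two of the six source directories are shown and the task names are invented; the real Gruntfile does more than this:

```javascript
// Gruntfile.js sketch: copy each build source into the build directory.
module.exports = function ( grunt ) {
	grunt.initConfig( {
		copy: {
			public_plugins: {
				expand: true,
				cwd: 'build-plugins/public',
				src: [ '**' ],
				dest: 'build/www/wp-content/plugins/',
			},
			public_themes: {
				expand: true,
				cwd: 'build-themes/public',
				src: [ '**' ],
				dest: 'build/www/wp-content/themes/',
			},
		},
	} );

	grunt.loadNpmTasks( 'grunt-contrib-copy' );
	grunt.registerTask( 'default', [ 'copy' ] );
};
```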

When all tasks are complete, the build directory contains:

  • build/www/wp-config.php
  • build/www/wordpress/
  • build/www/wp-content/advanced-cache.php
  • build/www/wp-content/index.php
  • build/www/wp-content/install.php
  • build/www/wp-content/object-cache.php
  • build/www/wp-content/sunrise.php
  • build/www/wp-content/mu-plugins/
  • build/www/wp-content/plugins/
  • build/www/wp-content/themes/

In local development, it’s likely that you would never run the build process. Instead, the following structure is always available to you:

  • www/wp-config.php
  • www/wordpress/
  • www/wp-content/plugins/
  • www/wp-content/themes/

This allows us to work on individual plugins and themes without worrying about how that impacts the rest of the platform. I can install any theme I want for testing (e.g. /www/wp-content/themes/make/) or have individual git repositories available anywhere (e.g. www/wp-content/plugins/wsuwp-sso-authentication/.git). All of these individual plugins and themes are ignored in the platform’s .gitignore so that only items added via the build process make it to production.
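The ignore rules boil down to something like this sketch (the platform’s actual .gitignore is longer):

```
# Individually cloned plugins and themes never reach the platform repo;
# only what the build process assembles under build/ gets deployed.
www/wp-content/plugins/*
www/wp-content/themes/*
```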

How everything makes it to production is another matter entirely.

A big objective at WSU is to reduce the amount of overhead required for something to make it into production. To achieve this, we use a plugin to handle deployments via hooks in GitHub. It seems like scary magic sometimes, but we even use the platform to deploy the platform.

Creating a new deployment for the WSUWP Platform

When a deployment is configured through the plugin, a URL is made available to configure for webhooks in GitHub. We’ve made the decision that any tag created on a repository in GitHub should start the deploy process. A log is available of each deployment as it occurs:

A list of deployment instances

When a deployment request is received, things fire as such:

  1. A script on the server is fired to pull down the tagged version of that repository and transfer it to the proper “to be built” location.
  2. After the tagged version is ready, a file is created as a “ready to build” trigger.
  3. A cron job fires every minute looking for a “ready to build” trigger.
  4. If ready, another script on the server fires the build process.
  5. rsync is then used to sync this with what is already in production.
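The trigger-file handoff in steps 2 through 4 can be sketched like this. The paths and file names are invented for the example, and `cp -a` stands in for the real Grunt build and rsync steps:

```shell
# Simulate the handoff between the webhook script and the cron job.
DEPLOY_ROOT=$(mktemp -d)
mkdir -p "$DEPLOY_ROOT/to-build" "$DEPLOY_ROOT/production"

# Webhook side: stage the tagged code, then drop a trigger file.
echo "tagged code" > "$DEPLOY_ROOT/to-build/plugin.php"
touch "$DEPLOY_ROOT/to-build/.ready-to-build"

# Cron side (runs every minute): build only when the trigger exists.
if [ -f "$DEPLOY_ROOT/to-build/.ready-to-build" ]; then
    rm "$DEPLOY_ROOT/to-build/.ready-to-build"
    # ...the real process fires the Grunt build here...
    cp -a "$DEPLOY_ROOT/to-build/." "$DEPLOY_ROOT/production/"  # rsync in production
fi

cat "$DEPLOY_ROOT/production/plugin.php"
```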

There are a few obvious holes with this process that will need to be resolved as we expand.

  • If many tags are created at once, a lot of strange things could happen.
  • There is currently no great rollback method beyond manually re-firing a create-tag hook in GitHub.
  • It’s a deploy first, code review later model. This works for small teams, but will require more care as we expand our collaborations within the university.

All in all, our objectives are served pretty well.

Anyone on our team with the appropriate GitHub access can tag a release of a theme or plugin and have it appear in production within a minute or two. This has been especially helpful for those who aren’t as familiar with command line tools and need to deploy CSS and HTML changes in a theme.

Anyone contributing to individual themes or features via plugins doesn’t have to worry about the build process as a whole. They can focus on one piece and the series of scripts on the server handles the larger puzzle.

We’ve made a common platform available in the hopes of attracting collaborators and at the same time have a structure in which we can make our own decisions to implement various features and functionality.

I would love feedback on this process. Please leave a comment or reach out if you have any questions or suggestions!

Figuring out how to serve many SSL certificates, part 2.

I’ve been pretty happy over the last couple days with our A+ score at SSL Labs. I almost got discouraged this morning when it was discovered that LinkedIn wasn’t able to pull in the data from our HTTPS links properly when sharing articles.

Their bot, `LinkedInBot/1.0 (compatible; Mozilla/5.0; Jakarta Commons-HttpClient/3.1 +`, uses an end-of-life HTTP client that happens to also be Java-based. One of our warnings in the handshake simulation area was that clients using Java Runtime Environment 6u45 did not support 2048-bit DH params, something that we were using. I’m not entirely sure if LinkedIn has their JRE updated to 6u45, but I’m guessing that anything below that has the same issue.

I generated a new 1024-bit dhparams file to solve the immediate issue and reloaded nginx without changing any other configs. LinkedIn can now ingest our HTTPS links and we still have an A+ score. 🙂
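The fix amounts to a one-liner plus an nginx reload; roughly, with an example output path:

```shell
# Generate 1024-bit DH parameters compatible with older Java clients.
DH_FILE=$(mktemp)
openssl dhparam -out "$DH_FILE" 1024 2>/dev/null

# Then point nginx at the new file and reload, e.g.:
#   ssl_dhparam /etc/nginx/dhparams-1024.pem;
#   nginx -s reload
grep -c "BEGIN DH PARAMETERS" "$DH_FILE"
```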