Dealing with big WordPress plugins

This post has been sitting as a draft for over 2 years now and I don’t remember why I didn’t just hit publish at the time. I’m no longer at WSU, but I do still agree with what I wrote, so here you go!

May 15, 2017:

I spend quite a bit of time on plugins at WSU.

One of the challenges is finding the right way to provide a variety of features to 4500 users without sacrificing security, performance, or stability.

Every time we need a feature, we’re faced with two options:

  1. Use an existing plugin.
  2. Create a new plugin.

The best, best, best answer is to use an existing plugin. When this happens, it means that something out there not only meets the requirements, but happens to align with our other guidelines:

  • Has an open source license that is compatible with the GPL v2 (or later) license.
  • Has an open development process and a well defined path to contribution.
  • Solves a very specific need and does not contain a large amount of code.
  • Passes a full code review by WSU’s web team or other trusted members of the web community.

When I started tackling all of this a few years ago, I made a few exceptions for some big WordPress plugins. A couple require licenses, a couple have a massive code base, and a couple have fairly closed development processes without a defined path to contribution.

At the time, I figured the trade-off was a great set of features for a relatively low level of effort. A few years later, I now know that it's these plugins that add the most overhead. Not financial overhead (the world of WordPress plugins is still priced so very low), but time and effort overhead.

There are a handful of smaller plugins we’ve had installed for years now without any kind of issue. They usually solve a limited number of problems and, because of the backward compatibility provided by WordPress, just work.

When a plugin gets big, it becomes more of a product than a solution. It often has a product team focused on improving the feature set, attracting more customers, and increasing revenue. Changes are made that adjust markup on the front end and UI/UX on the back end, all in an effort to improve the experience for the product’s customers.

As the maintainer of a diverse platform powering 1500 sites at a university, I’m more often looking for solutions rather than products.

To me, these are plugins that make a handful of clear decisions and modify WordPress in a small way. Ideally, there's a way to unhook each of those decisions or a way to build on the solution in a custom way, so that I can create a simple plugin that extends it with decisions of my own.

So, building on an earlier writeup, if you find yourself in a position where you need to support thousands of sites and users and you have the opportunity to start from scratch, here’s how I’d approach plugins:

  • Keep things simple.
  • If a big plugin is useful, find the right way for it to be activated on only one site.
  • Build extensions that provide access to the big plugin from other sites.
  • Make your own plugins small and focused.
  • Don’t get behind on updates.

I’ll stress a couple of these in more words.

If you give a site owner the ability to activate a plugin, there is a very good chance they will activate the plugin. 1500 sites later, you'll be digging for hours to figure out who is really using the plugin when you want to reduce its impact. So make sure big plugins are activated exactly where you want them and nowhere else.
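If you're running a multisite network, WP-CLI can handle both sides of this. Here's a rough sketch of what I mean (the big-plugin slug and site URL are placeholders, and it assumes WP-CLI is available on the server):

# Activate the big plugin on exactly one site, not network-wide:
wp plugin activate big-plugin --url=news.example.edu

# Later, audit which sites actually have it active:
for url in $(wp site list --field=url); do
  wp plugin is-active big-plugin --url="$url" 2>/dev/null && echo "$url"
done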

Don’t get behind on updates. The WordPress plugin model doesn’t really account for LTS releases, so security updates and bug fixes are shipped alongside UI/UX changes. If you’ve fallen for a big plugin and the product changes are causing an unexpected burden, you’re still better off updating and dealing with it now.
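WP-CLI helps here too. A quick sketch (again, the plugin slug is a placeholder):

# See which plugins have updates waiting:
wp plugin list --update=available

# Update a specific plugin once you've reviewed its changelog:
wp plugin update big-plugin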

And get good at saying no in a productive way.

This site/thread by Robin Sloan is a treasure

I really enjoyed this thread by Robin on Rosegarden. Enough that I’ve been remembering it and thinking about it for 3 days now.

Among several gems (and the general creativity of the presentation), there's this:

What we hear from companies like T—— and F——- and Y—— is that monitoring communication at this scale, preventing that harm, is an unprecedented technical challenge.

That’s correct. However… no one asked for communication at this scale!

Robin Sloan – platforms.fyi

I also appreciate the idea that “no reasonable human needs more than 10,000 other humans to read their words within twenty minutes of writing them.”

Of course, I did learn about this from a tweet sent out to 43,000 people. 🙂

Script toggles for DNS over HTTPS on public Wi-Fi

DNS over HTTPS has been fun to try out. I’ve been using Cloudflare’s command line client/service/proxy configuration and working through some of the quirks with day to day use.

In a nutshell: A DNS proxy is set up on your local machine that makes DNS over HTTPS requests to Cloudflare's 1.1.1.1 and 1.0.0.1 servers. Your configured DNS server is 127.0.0.1, and lookups are encrypted between your local machine and 1.1.1.1. With this data encrypted, your service provider (home ISP, coffee shop, hotel, airport, etc.) does not see the domain name requests that are made.
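Once it's running, you can sanity check the setup with dig. A quick sketch, assuming the proxy listens on the port configured in the scripts below:

# Ask the local resolver (port 53) for a lookup:
dig +short @127.0.0.1 example.com

# Or query cloudflared's DNS proxy directly on its port (54 here):
dig +short @127.0.0.1 -p 54 example.com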

This works great until you join a wireless network with some kind of captive portal requiring you to either log in or accept some kind of terms. At that point, the network's DNS is usually used to provide the address for the captive portal, and no other network activity is allowed until that process completes. In many cases, this will not be compatible with a DNS over HTTPS configuration.

There are a handful of steps required for switching back and forth, which could be a pain if you're frequently bouncing between locations:

  • Enable or disable the cloudflared service.
  • Enable or disable the dnsmasq service (if using that to capture local .test lookups, for example).
  • Change the DNS configuration to either 127.0.0.1 or the default, allowing the network to serve its own DNS.

To handle this, I wrote a couple quick bash scripts that I can use to reduce my annoyance and toggle things back and forth.

The doh-enable script turns on cloudflared, turns on dnsmasq, and sets the local IP as a DNS server:

# doh-enable: enables the DNS over HTTPS config
sudo launchctl setenv TUNNEL_DNS_PORT 54
sudo launchctl load /Library/LaunchDaemons/com.cloudflare.cloudflared.plist
sudo brew services start dnsmasq
networksetup -setdnsservers Wi-Fi 127.0.0.1

The doh-disable script turns off cloudflared, turns off dnsmasq, and empties the custom DNS server config to accept the network default:

# doh-disable: disables the DNS over HTTPS config
sudo launchctl unload /Library/LaunchDaemons/com.cloudflare.cloudflared.plist
sudo brew services stop dnsmasq
networksetup -setdnsservers Wi-Fi empty

Now when I encounter a captive portal situation, all I need to do is type one command, sign in, then type another.
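The whole routine ends up looking something like this (assuming both scripts are executable and in your PATH):

doh-disable    # let the network serve DNS so the captive portal can load
# ...sign in or accept the terms in the browser...
doh-enable     # switch the encrypted DNS configuration back on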

If you’re interested in trying out DNS over HTTPS, I found the Cloudflare documentation well written and this article helpful for getting dnsmasq running alongside it.