Add users from one site to another on multisite by role with WP-CLI

Today I wanted to make sure a bunch of editors from one site existed as editors of a new staging site that we’re building out. Both sites exist as part of the same multisite network.

Thanks to WP-CLI and xargs, this is pretty straightforward:

wp user list --role=editor --field=user_login | xargs -n1 -I % wp user set-role % editor

This tells WP-CLI to list only the user_login field for each editor on the source site. That list is then piped to xargs, which runs a second wp command telling WP-CLI to set each user’s role to editor on the destination site.

Because users are already “created” at the global level in multisite, they are added to other sites by setting their role with wp user set-role.
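On a multisite network, both wp commands can be pointed at specific sites with the global --url flag. A sketch of the full pattern, using placeholder site URLs:

```shell
# Hypothetical site URLs -- substitute your own source and destination
# sites. Lists each editor's login on the source site, then grants that
# user the editor role on the destination site.
wp user list --role=editor --field=user_login --url=source.example.com \
  | xargs -n1 -I % wp user set-role % editor --url=dest.example.com
```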

I’d estimate that with a list of 15 users, this saved close to 15 minutes and didn’t require a whole bunch of clicking and typing with two browser windows open side by side.

Props to Daniel’s runcommand post for providing an easy framework.

Send and receive email for your domain with Postmark and Amazon’s SES, S3, and Lambda services

A long, long, long time ago, sending email via your website was really horrible. Alongside the static HTML powering your Guestbook, you had some copy/pasted CGI script your ISP somehow allowed you to use that probably didn’t work, but oh crap it started working I hope it doesn’t break now.

A long, long time ago, sending email suddenly became easier. It kind of just worked accidentally. You installed WordPress or another app on a shared host and you got emails when people left comments.

A while ago, things started to get hard again. When it’s easy for everyone to send email, it’s also really easy for people to send massive amounts of spam. So larger email clients got smart and started requiring things like DKIM and SPF to really guarantee mail delivery. Without these configured on your domain, you’re at the mercy of the algorithm. Thankfully, places like DigitalOcean had excellent documentation for configuring stuff like this, and with a bit of elbow grease you could get there on a $10/month Linode server.

But then it got super easy! Mandrill offered a free tier for transactional emails that had a limit nobody would reach with a standard blog/comment configuration. You could sign up for an account and use a wp_mail drop-in that handled the API interactions for you.

Of course, as with all free services, there’s a limit. Mandrill reached that limit this year and changed directions into a transactional email add-on for MailChimp accounts.

It happens, especially when it’s free. ¯\_(ツ)_/¯

And so it goes. On to the next service, hopefully configured in a structure that’s prepared for long-term use.

Why Postmark

It looks nice, I’ve seen it mentioned in a handful of conversations, the API seems straightforward, and I didn’t run into anything that made it hard to set up an account. I’m easy.

You get 25,000 free emails with Postmark. After that you pay a really reasonable rate. If you send a ton of emails, the rate gets more reasonable. I think this is a good model and they should probably even charge earlier because it’s going to take me a while to send 25,000 emails.

Once you sign up, Postmark is just as easy as Mandrill. There’s an official plugin that provides a settings screen for you to add your API key and a wp_mail replacement that handles the API calls. If you’re like me, you’ll skip the full plugin and grab only the wp_mail drop-in, add some static configuration, and toss it into mu-plugins.

The catch…

As it should, Postmark requires that you add Sender Signatures for any email address from which you’ll be sending email. So before I can send email from an address on my domain, I need to show that I can already receive email at that same address.

At this point, a normal person decides to enable email forwarding through their domain registrar or host. That is the easy way, but it was Saturday and I was looking for a party.

Receiving email through Amazon

Amazon has 9 million AWS services. It only takes 3 of them to solve the email receipt problem and not one involves setting up a server. The hardest part is keeping track of all the open tabs.

  • Amazon Simple Email Service (SES) was originally built to handle the sending of transactional emails. In September 2015, they added support for receiving email through the same service.
  • Amazon Simple Storage Service (S3) is a place to store things. In this case, it will be where we drop incoming emails to be processed.
  • Amazon’s AWS Lambda is the cool new kid on the block and allows for “serverless” computing. You define a function and are charged only for the computing time that the function actually uses.

To get through this, you’re going to need a verified AWS account and access to your domain’s DNS settings via whichever name server you use. I use DNSimple, which has made every DNS configuration in the last 5 years a pleasant experience. That’s an affiliate link even though it’s already only $7/month for me. 🍻

Let’s do it.

Configuring SES to receive and forward email

  1. Go to SES via the Services menu in your AWS Console and select Domains under Identity Management.
  2. Click on Verify a New Domain at the top of the screen.
  3. Enter the root of the domain you’re verifying, check the Generate DKIM Settings option, and click Verify This Domain.
  4. You’ll be presented with an overlay containing the new records that need to be attached to your domain as part of your DNS configuration. Carefully enter all of these as any mistakes may add extra time to the verification process. I set all of mine with a 10 minute TTL so that any errors may resolve sooner.
    • TXT record that acts as domain verification.
    • Three CNAME records for the DKIM record set, which SES rotates through automatically.
    • And an MX record to route incoming email on your domain to AWS.
  5. Click Close on the DNS overlay. You’ll now need to be patient as the domain is verified. Amazon says this may take 72 hours, but it’s taken 5 minutes for 3 of my domains and 20 minutes for one where I had an error in the config at first. You’ll get a couple emails as soon as the verification goes through.
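For reference, the finished records end up looking roughly like this in zone-file form. All values below are placeholders (SES generates your real ones), and the MX target depends on your AWS region:

```
; hypothetical values -- use the records SES generates for you
; (SES gives you three DKIM CNAMEs; one is shown here)
_amazonses.example.com.        600 IN TXT   "generated-verification-token"
token1._domainkey.example.com. 600 IN CNAME token1.dkim.amazonses.com.
example.com.                   600 IN MX    10 inbound-smtp.us-east-1.amazonaws.com.
```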

In the meantime, you’ll want to verify any email addresses that you will be forwarding email to. As part of the initial SES configuration, you’re locked in the Amazon SES sandbox and can only send emails to addresses you have verified ahead of time.

  1. Select Email Addresses under Identity Management.
  2. Click on Verify a New Email Address at the top of the screen.
  3. Enter the address you’ll be forwarding mail to and click Verify This Email Address.
  4. Once you receive an email from AWS, click on the link to complete the verification.

Note: You’re also limited to sending 200 messages every 24 hours and a maximum of one per second. Because transactional emails will be sent using Postmark, and only replies to those emails will come through SES, that shouldn’t be a huge deal. If you do reach that limit, you’ll need to request a sending limit increase for SES. If you think you’ll be receiving large volumes of email, you may also want to consider using SES for all of your transactional email (HumanMade has a plugin) and not use Postmark at all.

Ok, go back to Domains under Identity Management and check that the status for your domain is listed as verified. Once it is, we can continue. If you’re concerned that something isn’t working properly, use a command like dig to double check the TXT record’s response.

› dig TXT _amazonsesbroken.chipconf.com +short

› dig TXT _amazonses.chipconf.com +short

The first example returns nothing because it’s an invalid record. The second returns the expected verification value.

Note that dig queries your default resolver unless told otherwise. You can point it at each of your domain’s name servers directly with @ (in my case, DNSimple’s) to verify that the record has saved properly across all of them.

  1. Once domain verification has processed, click on Rule Sets under Email Receiving on the left.
  2. Click on View Active Rule Set to view the default rule set. If a default rule set does not exist, create a new one.
  3. Click Create Rule to create a receipt rule for this domain.
  4. For recipient, enter the base of your domain rather than a full email address so that all addresses at that domain will match. Click Next Step.
  5. Select S3 as the first action.
  6. Choose Create S3 bucket in the S3 Bucket dropdown and enter a bucket name. Click Create Bucket.
  7. Leave Object key prefix blank and Encrypt Message unchecked.
  8. Choose Create SNS Topic in the SNS Topic dropdown and enter a Topic Name and Display Name. Click Create Topic.
  9. Click Next Step. We’ll need to do some things before adding the Lambda function.
  10. Give the rule a name, make sure Enabled is checked, Require TLS is unchecked, and Enable spam and virus scanning is checked. Click Next Step.
  11. Review the details and click Create Rule.

Now head over to Lambda via the Services menu in the top navigation. Before completing the rule, we need to add the function used to forward emails that are stored in the S3 bucket to one of the verified email addresses.

Luckily, the hard legwork for this has already been done. We’ll be using the appropriately named and MIT-licensed AWS Lambda SES Email Forwarder function. The README on that repository is worth reading as well; it provides more detail for the instructions in this section.

  1. Click Create a Lambda function.
  2. Click Skip on the next screen without selecting a blueprint.
  3. Enter a name and description for the function. Make sure Runtime is set to Node.js 4.3. Paste the contents of the AWS Lambda SES Email Forwarder index.js file into the Lambda function code area.
  4. Edit the defaultConfig object at the top of this file to reflect your configuration.
    • fromEmail should be an address on your verified domain (a noreply-style address works well).
    • emailBucket should be the name of the S3 bucket you created earlier.
    • emailKeyPrefix should be an empty string.
    • forwardMapping is used to configure one or more relationships between incoming email addresses and the addresses email is forwarded to. Use your bare domain, prefixed with @, as a catch-all for the last rule.
  5. Leave Handler set to index.handler.
  6. Select Basic Execution Role from the role list. A new window will appear to grant Lambda permissions to other AWS resources.
  7. Choose Create a new IAM Role from the IAM Role drop down and provide a Role Name.
  8. Click View Policy Document and then Edit to edit the policy document. Copy and paste the below policy document, also taken from the AWS Lambda SES Email Forwarder repository, into that text area. Make sure to change S3-BUCKET-NAME in that policy to match your bucket; I replaced it with chipconf-emails.
    • {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "logs:CreateLogGroup",
              "logs:CreateLogStream",
              "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
          },
          {
            "Effect": "Allow",
            "Action": "ses:SendRawEmail",
            "Resource": "*"
          },
          {
            "Effect": "Allow",
            "Action": [
              "s3:GetObject",
              "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::S3-BUCKET-NAME/*"
          }
        ]
      }
  9. Click Allow. You should be transferred back to the Lambda screen.
  10. Under Advanced Settings, set Memory to 128MB and Timeout to 10 seconds. You can leave VPC set to No VPC.
  11. Click Next.
  12. Review the new function details and click Create function.
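The defaultConfig edits from step 4 end up looking something like this. All values below are placeholders; the keys come from the AWS Lambda SES Email Forwarder README:

```javascript
// Hypothetical values -- substitute your own domain, bucket,
// and (verified) destination addresses.
var defaultConfig = {
  fromEmail: "noreply@example.com",   // an address on your verified domain
  emailBucket: "example-emails",      // the S3 bucket created earlier
  emailKeyPrefix: "",                 // we left the object key prefix blank
  forwardMapping: {
    "info@example.com": ["destination@example.net"],
    "@example.com": ["destination@example.net"]  // domain catch-all, last
  }
};
```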

Whew. Almost there.

Now head back to SES via the Services menu in the top navigation. We need to edit the rule set to use the new Lambda function.

  1. Click Rule Sets under Email Receiving and then View Active Rule Set to see the existing rules.
  2. Click on the name of the rule from the previous steps.
  3. Select Lambda as an action type next to Add Action.
  4. Select the new function you created next to Lambda function. Leave Event selected for the Invocation type. Leave None selected for SNS topic.
  5. Click Save Rule.
  6. A permissions overlay will appear to request access for SES to invoke the function on Lambda. Click Add permissions.


Now I can go back to Postmark and add the address as a valid Sender Signature so that the server can use the Postmark API to send emails on its behalf to any address.

If someone replies to one of those emails (or just sends one to any address at the domain), it is now received by Amazon SES. The email is then processed and stored as an object in Amazon S3. SES then notifies Amazon Lambda, which fires the stored function used to process that email and forward it via SES to the mapped email address.

Now that you have 1800 words to guide you through the process, I’m going to dump a bunch of screenshots that may help provide some context. Feel free to leave a comment if one of these steps isn’t clear enough.

Security. Stored Content. Backward Compatibility.

This was almost a storm of 140 character segments, but I help make software to democratize publishing. I should use it. 😄

The pains people working on WordPress go to in making security, stored content, and backward compatibility all first priorities are amazing.

This is what I’ve learned and been inspired by the most since I started contributing to WordPress.

If there’s a moment in the course of development in which any of these three things are at risk, the tension or pain—or whatever you want to call it—is palpable. Providing a secure place for users to publish their content without worry is why we’re here.

WordPress 4.2.3 was released this week.

A security issue was fixed. Stored content was not lost.

Backward compatibility in the display of that content was stretched, sometimes broken, when WordPress could not guarantee the resulting security of a user’s site or stored content.

It’s okay to feel offended, especially if you were burned. Know that good work was done to try and make the bandaid as easy to pull off as possible. Know that people have been working on or thinking through easing the burn every minute since.

Running PHPUnit on VVV from PHPStorm 9

I spent so much time trying to get this working last November and kept running into brick walls. Today, I finally revisited a comment on that same issue by Rouven Hurling that pointed out two excellent notes from Andrea Ercolino.

  1. How to run WordPress tests in VVV using PHPStorm 8
  2. How to run WordPress tests in VVV using WP-CLI and PHPStorm 8

These are both great examples of showing your work. Andrea walks through the issue and shows things that didn’t work in addition to things that did. I was able to use the first to quickly solve an issue that I’ve been so close to figuring out, but have missed every time.

And! I haven’t gone through the second article yet, but the hidden gem there is that he has remote debugging with Xdebug working while running tests with PHPUnit. I’ve wanted this so much before.

All that aside (go read those articles!), I stumbled a bit following the instructions and wanted to log some of the settings I had to configure in order for PHPUnit to work properly with PHPStorm 9 and my local configuration.

For this all to work, I had to configure 4 things. The first three are accessed via PHPStorm -> Preferences in OSX. The fourth is accessed via the Run -> Edit Configurations menu in OSX.

A remote PHP interpreter

Choosing “Vagrant” in the Interpreters configuration didn’t work for me. I got an error that VBoxManage wasn’t found in my path when I selected my Vagrant instance and it tried to auto-detect things. I’m not sure if this is a bug or a misconfiguration. I almost wonder if it’s related to my upgrade earlier today to Vagrant 1.7.4 and VirtualBox 5.0.

Instead I tried going forward with “SSH Credentials” to see what would happen. I put in the IP of the VVV box and the vagrant/vagrant username and password combination for SSH. I left the interpreter path alone and when I clicked OK, everything was verified. I was hesitant, because in the first step I had already deviated from the plan.

PHPUnit configuration

This one was easier and didn’t require any altered steps. I added a remote PHPUnit configuration, chose the already configured remote interpreter and was good to close out.

SFTP deployment configuration

Because I was not able to get Vagrant configured properly in the first step, I am dealing with an entirely different path. PHPStorm wants to have a deployment configured over SFTP so that it can be aware of the path structure inside the virtual machine that leads to the tests.

Luckily nothing special needs to happen in the VM to support SFTP, so I was able to add the VVV box’s IP as the server and vagrant/vagrant as the username and password combination.

I originally tried putting the full path to WordPress here as the root directory, but that caused PHPStorm to prepend that on any automatic path building it did. I tried Autodetect as well, but that detected /home/vagrant, which did not work with the WordPress files. Through the power of deduction, I set the actual root / as the root. 😜

I also used the Mappings tab on this screen to match up my local wordpress-develop directory with the one I’m interacting with remotely.

Run/debug configurations

This one ends up being very straightforward now that everything else is configured. All I needed to do was select the phpunit.xml.dist file via its local path on my machine.

And then I was good! I can now run the single site unit tests for WordPress core just by using the Run command via the green arrows everywhere in PHPStorm.

The tests took 3.14 minutes to run through the interface versus 2.45 minutes through the command line, but I was given a bunch of information on skipped tests throughout. I’ll likely stick to the command line interface for normal testing, though I’m looking forward to getting Xdebug configured alongside this as it will make troubleshooting individual tests for issues a lot easier.

Big thanks again to Andrea Ercolino for the great material that brought me to a working config!

Flushing rewrite rules in WordPress multisite for fun and profit

Developing plugins and introducing new rewrite rules for features on single site installations of WordPress is pretty straightforward. Via register_activation_hook(), you’re able to set up a task that fires flush_rewrite_rules() and you can safely assume a job well done.

But for multisite? A series of bad choices awaits.

Everything seems normal. The activation hook fires and even gives you a parameter to know if this is a network activation.

Your first option is to handle it exactly the same as a single site installation. flush_rewrite_rules() will fire, the rewrite rules for the current site will be built, and the network admin will be on their own to figure out how these should be applied to the remaining sites.

It seems strange to say, but I’d really like you to choose this one.

Instead, you could detect if this is multisite and if the plugin is being activated network wide. At that point a few other options appear.

Grab all of the sites in wp_blogs and run flush_rewrite_rules() on each via switch_to_blog().

This sounds excellent. It’s horrible.

// Please don't. :)
$query = "SELECT * FROM $wpdb->blogs";
$sites = $wpdb->get_results( $query );
foreach ( $sites as $site ) {
    switch_to_blog( $site->blog_id );
    flush_rewrite_rules();
    restore_current_blog();
}

Switching sites with switch_to_blog() changes the current context for a few things. The $wpdb global knows where it is, your cache keys are generated properly. But! There is absolutely no awareness of what plugins or themes would be loaded on the other site, only those in process now.

So. We get this (paraphrasing):

// Delete the rewrite_rules option from
// the switched site. Good!
delete_option( 'rewrite_rules' );

// Build the permalink structure in the
// context of the main site. Bad!
$this->wp_rewrite_rules();

// Update the rewrite_rules option for the
// switched site. Horrible!
update_option( 'rewrite_rules', $this->rules );

All of a sudden every single site on the network has the same rewrite rules. 🔥🔥🔥

Grab all of the sites in wp_blogs and run delete_option( ‘rewrite_rules’ ) on each via switch_to_blog()

This is much, much closer. But still horrible!

// Warmer.... but please don't.
$query = "SELECT * FROM $wpdb->blogs";
$sites = $wpdb->get_results( $query );

foreach ( $sites as $site ) {
    switch_to_blog( $site->blog_id );
    delete_option( 'rewrite_rules' );
    restore_current_blog();
}

It’s closer because it works. Deleting the rewrite_rules option occurs in the context of the switched site and you don’t need to worry about plugins registering their rewrite rules. On the next page load for that site, the rewrite rules will be built fresh and nothing will be broken.

It’s horrible because large networks. Ugh.

Really, not much of a deal at 10, 50, 100, or even 1000 sites. But 10000, 100000? Granted, it only takes a few seconds to delete the rewrite rules on all of these sites. But what’s going on with the various other plugins on that next page load? If I have a network with heavy traffic on many sites, there’s going to be a small groan as the server generates and then stores those rewrite rules in the database.

It’s up to the network administrator to draw that line, not the plugin.

You can mitigate this by checking wp_is_large_network() before taking these drastic actions. A network admin not wanting anything crazy to happen will be thankful.

// Much better. :)
if ( wp_is_large_network() ) {
    return;
}

// But watch out...
$query = "SELECT * FROM $wpdb->blogs";
$sites = $wpdb->get_results( $query );
foreach ( $sites as $site ) {
    switch_to_blog( $site->blog_id );
    delete_option( 'rewrite_rules' );
    restore_current_blog();
}

But it’s also horrible because of multiple networks.

When a plugin is activated on a network, it’s stored in an array of network activated plugins in the meta for that network. A blanket query of wp_blogs often doesn’t account for the site_id. If you do go the drastic route and delete all rewrite rules options, make sure to do it on only sites of the current network.

// Much better...
if ( wp_is_large_network() ) {
    return;
}

// ...and we're probably still friends.
$query = "SELECT * FROM $wpdb->blogs WHERE site_id = 1";
$sites = $wpdb->get_results( $query );
foreach ( $sites as $site ) {
    switch_to_blog( $site->blog_id );
    delete_option( 'rewrite_rules' );
    restore_current_blog();
}

And really, ignore all of my examples and use wp_get_sites(). Just be sure to familiarize yourself with the arguments so that you know how to get something other than the default of 100 sites on the main network.

// Much better...
if ( wp_is_large_network() ) {
    return;
}

// ...and we're probably still friends.
// Note: wp_get_sites() returns arrays, not objects.
$sites = wp_get_sites( array( 'network_id' => 1, 'limit' => 1000 ) );
foreach ( $sites as $site ) {
    switch_to_blog( $site['blog_id'] );
    delete_option( 'rewrite_rules' );
    restore_current_blog();
}


At this point you can probably feel yourself trying to make the right decision and giving up in confusion. That’s okay. Things have been this way for a while and they’re likely not going to change any time soon.

I honestly think your best option is to show a notice to the network admin after network activation letting them know rewrite rules have changed and that they should take steps to flush them across the network.
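If the network admin does want to flush everywhere, WP-CLI makes that an explicit, out-of-band decision rather than a side effect of activation. A sketch, assuming WP-CLI on the multisite install:

```shell
# Flush rewrite rules on every site in the network, one site at a time.
wp site list --field=url | xargs -n1 -I % wp rewrite flush --url=%
```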

Please: Look through your plugin code and make sure that activation is not firing flush_rewrite_rules() on every site. Every other answer is better than this.

If you’re comfortable, flush the rewrite rules on smaller networks by deleting the rewrite rules option on each site. This could be a little risky, but I’ll take it.

And if nothing else, just don’t handle activation on multisite. It’s a bummer, but will work itself out with a small amount of confusion.

Great WordPress Meetup showing off 4.2 tonight

Two more months and the Pullman WordPress Meetup will be a year old! This is our 3rd release oriented meetup, so we’ve been around since the old days of 3.9.

WordPress 4.2 and more!

Wednesday, Apr 15, 2015, 6:00 PM

Sweet Mutiny
1195 SE Bishop Blvd #1 Pullman, WA

7 WordPress Superfans Went

WordPress 4.2 will be released at the end of April. We can walk through what’s new during the April meetup. A second presentation may fit as well; let’s talk about that during the February meetup.


We timed tonight’s meetup perfectly with the release of WordPress 4.2 RC1 and I was able to show off each of the new features to the group. There was some fun conversation throughout.

I think the new Press This interface got the best immediate reaction. It was a treat for those who had never experienced it before, and a much better interface for those who had.

Extended character support for 🍕 was definitely next. It’s nice to talk about improved global support for WordPress in addition to the joy we get from expressive food characters. 😜

Plugin updates are a little hard to get excited about in a meetup environment, but the interface (or lack of redirect) did perk some ears. I know I love not being bounced to a new screen.

I also covered—possibly for the first time—a general overview of the WordPress release cycle and how things have matured over the last couple years. I think covering that along with the various methods to contribute should be good recurring topics, even if they don’t fill up the entire night.

This is what happens when I can’t fit “tonight was a great meetup” into 140 characters. 📜

WordCamp Yes, Vagrant Rocks #wcyvr

This is a companion blog post to my presentation at WordCamp Vancouver on August 17th, 2013. You can download the PDF of the slides or read through the following for the context that is often missing when reading a presentation at a later time. A video of the talk is also posted online.

Hi WordPress, Meet Vagrant



Story Time

It was December 10th, 2012, the night of our WordPress developer meetup in Portland, that I decided I wanted to break up with MAMP.

Strangely enough for me at the time, I got a couple replies.

I’m not sure if I saw this tweet from Micah right away, as my reply didn’t come for a couple hours.

Another, an hour later, was from Tom Willmot, the founder of Humanmade, telling me that Homebrew was where it was at.

He then pointed me to a guide on GitHub that Humanmade uses for all new recruits, this guide being a great compilation of procedures to follow to get Nginx and the like up and running in your local (OSX) environment.

I immediately fell in love with this idea and started soaking up info. Screw MAMP, I was going to have an Nginx setup on my Mac.

About an hour after this, I finally replied to Micah, telling him that I hadn’t looked at it, but that it looked cool.


Fast forward a couple hours, I had to put most of this aside as I had a day job to concentrate on and breaking up with MAMP needed to wait until after hours. My tweets got more excited and I made my way to the developer meetup in a really good mood.

Justin Sainton, who is actually speaking next in this room, gave a great talk that night with an excellent title, “WP E-Commerce, I Hate You with the Fire of a Thousand Suns“, about the progress that’s been made toward refining the code base and improving the feature set. After Justin’s talk I continued my ranting to a few others about breaking up with MAMP and installing Nginx with homebrew instead. Daniel Bachhuber made a comment along the lines of – “why would you want to install all that junk on your computer?”



This is a good point.

Why would I want to install all that junk on my computer? I turned back to Micah’s first tweet suggesting that I check out Vagrant, determined to give it another chance.

And that’s when it clicked.

And at 11:44pm on December 11th, 2012, 24 hours after my initial amazement, Varying Vagrant Vagrants was launched into the world.

And so an obsession began. Developer lives changed. A super long name was shortened to VVV (sometimes I even call it V-trip). And within the next few months I was able to uninstall MAMP completely and convince others at 10up and in the community that Vagrant was the way to go.

My Goals

Which is why I’m here at WordCamp Vancouver. To introduce you to Vagrant and get you obsessed. I want each of you to leave this talk amped up to use Vagrant for WordPress development. And I’m pretty sure it’s going to work. After all, you are at…..


And we all know that this acronym was chosen because…


I should admit that I have some hidden agendas.


While this talk is going to go a long way in showing you a superior development environment that will change your life, there’s much more at stake. This is why you should keep my goals in your head while we go through this. 🙂


  • L(inux) – don’t need it. Love it, yes. Don’t need it.
  • A(pache) – Nginx is better. We can debate, but it is. Or what about lighttpd?
  • M(ySQL) – is great! But for how long? What about MariaDB or Percona? Well, I guess MariaDB fails to make my point, but Percona would leave us with LAPP.
  • P(HP) – Ok, that’s sticking around.


You’ve hopefully seen the new develop repository from a couple weeks ago at the beginning of the 3.7 cycle that starts to make use of Grunt for core development. Having a Vagrantfile to provide an agreed-upon development environment for testing between versions of WordPress and PHP and MySQL and Apache and Nginx and… would be pretty slick.
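To make the idea concrete, a minimal, hypothetical Vagrantfile looks something like this (the box name, IP, paths, and provision script below are placeholders, not what any real project ships):

```ruby
# Minimal sketch of a Vagrantfile -- defines a reproducible VM
# that any contributor can boot with `vagrant up`.
Vagrant.configure("2") do |config|
  config.vm.box = "precise64"                             # base box to build from
  config.vm.network :private_network, ip: "192.168.50.4"  # reachable local IP
  config.vm.synced_folder "www", "/srv/www"               # share code with the VM
  config.vm.provision :shell, path: "provision.sh"        # install the server stack
end
```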


We contribute in so many ways as a community to the WordPress project, and there is a need for sysadmin contributions. It would be great to have a clear way for those who have sysadmin experience to contribute.

Anyhow, my goals aside.

As we’re covering the ins and outs of Vagrant, I’d like you to also tune in to ways Vagrant can fit into your development workflow, how it could have helped you solve problems faster in the past, and how it’s going to help you solve problems faster in the future.


Before we get into it, I want to exercise some hand raising powers.

  • Who here is a developer?
  • Who here is a sys admin, or manages servers in some way?
  • Who here is a developer and uses MAMP or XAMPP or WAMP?
  • Who here is a sys admin, or manages servers in some way, and thinks Apache is better than Nginx?
  • Who in the room has installed Vagrant on their machine before?
  • Who has used it more than once after installing it?
  • Who is using it day to day for their development environment?
  • What’s Vagrant?

How Did We Get Here?

I’ll get to what Vagrant is in a bit, but first I’m going to cover what we’ve been working with until now.


Has anyone ever used the term cowboy coding to describe the editing, obviously by others, of code on a live server?

Well, for a long while, cowboy coding didn’t seem so bad.

In fact, the beginning of WordPress development was very much all about it. Quite a few members in the WordPress community learned to code by sharing snippets with each other. If you visit the archives of Matt’s blog, there are code snippets to be found, ready to be hacked into your templates at will.

It was the wild west in the era of open source blogging and white screens were a great way of telling when something bad happened.


Luckily, this didn’t last forever. As familiarity with WordPress, PHP and MySQL progressed, local LAMP environments arrived. Apache, PHP and MySQL all had binaries that could be installed on Windows or Mac, and a minimum environment could be set up with relative ease. White screens now had the opportunity to happen locally first and therefore more rarely in production.


Even better, MAMP, XAMPP and WAMP came along and provided a method of creating a stable LAMP sandbox for us to play in with just a few clicks. With the minimum requirements for WordPress development met, things got stable and stale. Having this stable sandbox environment goes a long way when building basic WordPress themes or plugins for customers.

Over the last several years, things have changed quite a bit in the landscape of the web, as they always do.


Nginx became a web server to be reckoned with and is now considered by many to be more powerful and performant than Apache.

Linux, MySQL, and PHP remain for the most part, but other additions like Memcached or Varnish are becoming more useful to WordPress developers to maintain object and page caches as sites are required to scale larger and larger while handling an assortment of traffic patterns.


Now, because of the way technology changes on us, when you sit down at your local machine and develop in a friendly familiar LAMP stack, there’s no guarantee that you’ll end up publishing your code to the same environment.

At the most extreme, it can be like only training in a swimming pool before jumping in to swim across the Pacific Ocean.

Personally, I can go back and find so many hours that were spent troubleshooting things that I could not reproduce locally because my environment did not match some unexpected thing on production.

Now. If you have the right OS, you may have filtered through a barrage of various tutorials online to piece together a sloppy system of manually installed packages that come close to matching your production environment, but good luck when you need to change something or develop a product that needs to exist in a different (or even multiple) environment.

Vagrant is the Magic You’ve Been Looking For


The last of Arthur C. Clarke’s three laws, described in “Hazards of Prophecy: The Failure of Imagination,” states that “any sufficiently advanced technology is indistinguishable from magic.”

When I sat down and used Vagrant for the first time, it was magic. I had no idea what was going on, but it was going on and I liked it. And it makes sense that it seemed like magic at first, because really cool things happen without much effort.

Over the last 8 months I’ve gotten deeper and deeper into what Vagrant is and what it means for development and I can now appreciate it as a piece of advanced technology with many possibilities for expansion and use rather than the magic it started as.

Even so, it still feels like magic.

30,000 Feet


Before we get into any detail, let’s back up and cover this from 30k feet with one of my favorite things in the world. An analogy.

Your Computer is a Beautiful Lawn


It starts off well manicured, with nicely defined paths around well-kept gardens that have some wrought iron fencing around them, helping to keep everything clean. There may even be a couple police officers hanging out to make sure that nothing bad is going on.


A server is a large, beautiful beach connected to the ocean. So many possibilities for digging holes and building sand castles and creating complex moats for waves to come through to visualize how well your sand castle was constructed.


XAMPP/MAMP is a small sandbox in your yard. You have an old tire, maybe a pail. Some shovels and a few rocks that you can play with. You can test out some structures in the sand if you’d like in preparation for the big beach day. If you get real fancy, you might even drag over a hose to spray down the sand castle just to see what happens.


Installing server software directly on your computer is like having a load of sand delivered to your house and dumped on your front lawn. You can probably do a bunch with it, lay out as if it was a beach and build a sand castle or two, but that well manicured look you started with will go away and over time it’s going to become harder and harder to keep track of where all that sand went.


Vagrant is an extreme sandbox. You can do whatever the heck you want with a beach worth of sand, moving it around and building and getting in trouble. If you flip a front loader or a bulldozer goes crazy and starts running things over, no big deal. Hit the reset button and you get to start over.


And when you’re done for the day, only one command stands between you and the personal computer that remains a beautiful garden.

Ground Level

Photo by Steve Snodgrass

Now that we have a picture of what we’re shooting for, let’s back up again and start over on the ground.

What is Vagrant?


Well, soon.

First, let’s start with virtual machines.

Virtual machines are fictional computers.


Completely made up stories that have no true hardware, but exist as long as they are described.


Through something known as “platform virtualization”, these fictional computers, or guests are able to use the hardware resources provided to them on a host machine without actually controlling those resources themselves. This allows the fictional computers to have, among other things, a processor, memory, hard drives, and network access. In fact, multiple virtual machines (or guests) can be running on the same host at once, all sharing the host’s hardware in a safe way through virtualization.


VirtualBox is GPL-licensed platform virtualization software that provides an interface for managing and using these virtual machines. It takes care of figuring out exactly how all of the hardware on your host machine is made available to any guest machine in a safe way.


Vagrant is MIT licensed open source software for “creating and configuring lightweight, reproducible, and portable development environments.”

Vagrant gives you a method to write the story that describes each fictional computer to be virtualized on your host machine so that you can share the story with others, passing around development environments as if it was code.


Probably the greatest part about all of this is that both Vagrant and VirtualBox have installers available for Mac, Windows, and Linux. This means that as a Mac user, I can pass the description of a fictional computer to a Windows user and we can both be confident that we’re looking at the exact same thing when we boot it up. And while the most popular operating system used inside the virtual machine itself is certainly some flavor of Linux, it’s entirely possible that a Windows or OS X machine can be described and passed around as well.

That’s pretty amazing.

Anatomy of a Virtual Machine Built With Vagrant


Let’s talk about the anatomy of a virtual machine built with Vagrant.

  • Host
    • Your local machine is the host. It has Vagrant and VirtualBox installed and a copy of the fictional machine story that’s ready to boot up at any time. The hard drives and network devices belong to it at all times.
  • Guest
    • Any virtual machine created through virtualization on your host computer is the guest. It is temporarily using the resources of the host, until you tell it to shut down and disappear.
  • Box
    • Vagrant provides a way to package boxes. These boxes contain at least an installation of an operating system as well as some guest additions that help any platform virtualization software communicate between host and guest. This base box can be as heavy or as light as you want. It can be a bare bones Ubuntu or CentOS installation with no server software other than that required by Vagrant to do its job. It could also contain all of the various server packages that you need: Nginx, MySQL, etc.
  • Provisioning
    • Provisioning is what makes having a bare bones box ideal. This is what helps make Vagrant lightweight and reproducible. I can pass a base box around or share it among several different projects and then pass along unique provisioning scripts that explain and automate how the box should be configured as it boots.


Let’s walk through, step by step, getting a base box up and running.

  1. Download and install VirtualBox
  2. Download and install Vagrant
  3. Create and then navigate to an empty directory on your computer via the command line
  4. Type `vagrant init`
    • This creates a Vagrantfile in the directory that describes the virtual machine you are looking to start.
  5. Edit the Vagrantfile
  6. Type `vagrant up`
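For reference, step 5 is where the story gets written. Here is a minimal Vagrantfile sketch; the box name, IP address, and folder paths are illustrative choices, not something any particular project prescribes:

```ruby
# Vagrantfile — a minimal sketch of a virtual machine's "story".
Vagrant.configure("2") do |config|
  # The base box: a bare bones OS image (an Ubuntu 12.04 box here).
  config.vm.box = "precise32"
  config.vm.box_url = "http://files.vagrantup.com/precise32.box"

  # Give the guest a private IP so a browser on the host can reach it.
  config.vm.network :private_network, ip: "192.168.50.4"

  # Share a host directory with the guest so you edit files locally.
  config.vm.synced_folder "www", "/srv/www"

  # Run a shell script inside the guest on first boot to install software.
  config.vm.provision :shell, path: "provision.sh"
end
```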

This is the magic part.

With no additional interaction, an empty Ubuntu box is available for my use on my computer. All I need to do is go to a command prompt, type `vagrant ssh` and I’m in. From there I can do anything that I would normally do with a fresh server instance.


Here is where provisioning comes in. While a base server is pretty awesome on its own, it doesn’t do too much for us if we have to install all of the server software every time that we boot the machine.


There are a few provisioners enabled with Vagrant by default – Ansible, Chef, Puppet, and Shell, with another one, Salt, almost officially in.

These provisioners help describe the story of the virtual machine that you are building every time you start a new instance. Each offers a similar feature set and mostly differs on syntax and organization. I’ll leave a few links in the slides so that you can familiarize yourself with them later.

Great power lies in the use of these provisioners, especially when pushing server configurations to production. I do suggest sticking with shell provisioning at first unless you are already familiar with something else or using it to configure production servers already.

I should mention that again. Ansible, Puppet, Chef, and Salt are already popular tools for server provisioning. There is a chance that you, or an amazing server admin in your life, already have some sort of provisioning script set up that you can use to immediately duplicate production in your development environment. And if this happens, and the configuration is for WordPress, you should totally open source it so that we can all share in the love.


So I’m a bit biased as I’ve been working on this for the last 8 months, but I do think it’s a good and approachable example. Let’s walk through the shell provisioning taken from the open sourced Varying Vagrant Vagrants.

This Vagrant configuration is an opinionated attempt to mimic a fairly common server configuration used for performant WordPress projects. We’ve put quite a bit of work into this and have had an amazing number of contributions from the community already. I really would recommend grabbing this and using it to get your feet wet if not as your daily development environment for WordPress projects. Do note that the Internet today will probably not support the sudden `vagrant ups` of 100 developers, so you may need to wait until you get home.
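To make the idea concrete, a shell provisioner is just a script that Vagrant runs inside the guest during `vagrant up`. Here is a heavily simplified sketch in that spirit; the package list and helper function are illustrative and are not VVV’s actual provisioning script:

```shell
#!/bin/sh
# provision.sh — a simplified sketch of a shell provisioner.

# Print only the packages from $1 that don't appear in the
# space-separated "already installed" list in $2, so re-provisioning
# a running box skips work that's already done.
packages_to_install() {
  for pkg in $1; do
    case " $2 " in
      *" $pkg "*) ;;              # already installed, skip it
      *) printf '%s\n' "$pkg" ;;  # still missing
    esac
  done
}

WANTED="nginx mysql-server php5-fpm memcached"

# Ask dpkg what's installed already; empty on a non-Debian machine.
INSTALLED=$(dpkg --get-selections 2>/dev/null | awk '$2 == "install" { print $1 }' | tr '\n' ' ')

TO_INSTALL=$(packages_to_install "$WANTED" "$INSTALLED")
if [ -n "$TO_INSTALL" ]; then
  # A real provisioner would execute this rather than echo it:
  echo apt-get install -y $TO_INSTALL
fi
```

Because the check is idempotent, a second `vagrant up --provision` on a running box does almost nothing, which is a big part of what keeps the workflow fast.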



And now, only a couple minutes stand between my beautiful lawn and having an extreme sandbox up and running. In fact, if I power off the virtual machine without destroying it completely, we’re looking at an extremely short start up time whenever I want to dive in.

And with Varying Vagrant Vagrants, once `vagrant up` is complete, I have everything I need.

An environment with Nginx, PHP 5.4, MySQL, both APC and memcached, the latest stable WordPress, trunk WordPress, the WordPress unit tests, and WP-CLI, not to mention a whole range of smaller tools that make development easier.

I uninstalled MAMP shortly after creating this and haven’t looked back.

I know this might seem a little crazy from the outside for some of you. Who wants to spend all that time in the command line, right? You should know that once everything is configured in the provisioning script, there’s little chance that you’ll ever need to go into the command line via `vagrant ssh` if you don’t feel like it. Just develop as normal on your local ‘host’ machine and view the changes in the browser. The only commands you’ll need to issue are those to start and stop the Vagrant machine. And rumor has it that a GUI is possible for these in the future.
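Those start and stop commands are a handful of core Vagrant subcommands, all run from the project directory on the host:

```shell
vagrant up        # boot the guest, provisioning it on first run
vagrant suspend   # pause the guest and save its state to disk
vagrant resume    # pick up exactly where you suspended
vagrant halt      # shut the guest down cleanly
vagrant destroy   # remove the guest entirely; `vagrant up` rebuilds it
vagrant ssh       # optional: open a shell inside the guest
```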


Why should you develop in an environment that matches production?


Vagrant allows you to version control your environment. In fact, the project I’m working on now started with a Vagrantfile.

At Washington State University, we’re in the beginning stages of a project that intends to provide a central publishing platform based on WordPress for any college or department inside the University to use.

I started the repo with a Vagrantfile.

Once I had that, I was able to add WordPress in and then start making the customizations we’re looking for through various plugins and configuration changes.

Now, as we work toward a place where the server architecture is finalized, we can adjust the development environment in the project repository as needed to see if any new problems arise while we continue to build out and use existing code. With the development environment under version control, we can document reasons for software changes or revert to something earlier if an issue does come up.


Vagrant allows you to share your environment. The ramp up time for developers on the project is almost nothing. Just a git clone separates them from having a full, matching development environment on their local machine. No risk to the server. No worry about the developer being on Windows or Mac or Linux.

Imagine working for a customer that is going to host their site on WPEngine and knowing with absolute confidence that the theme you created locally will work without issue.

Or knowing that customers using default WordPress installations on Dreamhost or Bluehost will have no trouble using the plugin that you’re about to publish to the WordPress plugin repository.


I should stress that while Vagrant is very much going to replace MAMP for you, it is not a MAMP replacement.

Because that’s not what you should want. Instead of one environment restricted to technology that the community needed years ago, you should want a flexible environment that can adapt seamlessly to the technologies that the community must work with today.


Rather than meeting the WordPress minimum requirements we talked about earlier, Vagrant provides a flexible way for a developer to meet their project’s environment requirements.

Wrap Up

  • Magic?
  • Advanced technology?
  • Ready to start using Vagrant on a day to day basis?

I really hope you go home and go through your first `vagrant up` and then tell me about it later, because it’s a wonderful experience to see how quickly your development can change with a new, more carefree environment.


A WordPress Core Vagrantfile

Vagrant is a very refreshing tool.

Once the Vagrantfile and provisioning of a project is setup, new developers can join a project and start developing immediately without worrying much about a local setup. DevOps and sysadmin folks can continue to tweak servers and various configurations, passing those changes on to the development team without interruption. Over time, the local development environment comes closer and closer to matching the project’s production environment and those annoying bugs that creep up when the two are different start to disappear.

A bunch of people in the WordPress community have done a ton of great work with VVV over the last 7 months and it’s great to see the traction that it’s gotten. It is a great example of a very usable and modern WordPress production environment and I think it will continue to go a long way toward upping the game of many.


An open source project that ships with a default Vagrantfile is very refreshing.

The barrier to entry becomes so low. Within a matter of minutes, a virtual machine can be launched that provides the operating system, server software, and configuration needed to get started with development and to view the results of the changes that you are introducing. This gives everyone developing for the project a common baseline, helping to avoid worksforme bugs and allowing things to progress faster.

Take a look at projects like Discourse, CouchDB, Elasticsearch, Piwik, and GitLab for great examples of how Vagrant is currently used as part of core development.

Now. Imagine a WordPress core that ships with a Vagrantfile of its own. Anybody wanting to contribute to WordPress or to work in an optimal environment for building a plugin or theme that can run on any WordPress installation would have the ability to use a virtual machine that the community has decided is valid for development and comprehensive testing.

The possibilities of what could be included in that environment are pretty slick.

  • WordPress Unit Testing is one of the coolest things that VVV currently provides. Giving anyone who wishes a unit testing environment without having to worry about installing PHPUnit locally would go a long way in getting more WordPress developers to test.
  • Scripts that generate core documentation on the fly can be included with the Vagrant configuration, helping developers to explore the core code base more on their local machine.
  • Tools and guides for patch creation, even scripts that automate some of the processes that come with Trac interaction, would help lower the barrier to core contribution.

Granted there are hurdles.

WordPress, along with most other PHP projects, is forced to support multiple versions of PHP. And while web server versions don’t matter as much, it’s still nice to know that both Apache and Nginx are working well, especially when various issues are being dealt with in the world of rewrites. That said, there are ways to provide a flexible environment that can accomplish all of this.

And when we do!

Being able to switch from PHP 5.2 to 5.5, from WordPress 3.6 to trunk, or from Apache to Nginx on the fly to make sure your plugin is compatible with each will be an absolute dream.

All achievable with two steps: check out WordPress, type `vagrant up`.
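Sketched as commands, those two steps would look like this. The Subversion URL is WordPress core’s actual trunk; the Vagrantfile it would contain is the hypothetical part:

```shell
# Step one: check out WordPress core.
svn checkout https://core.svn.wordpress.org/trunk/ wordpress
cd wordpress

# Step two: boot the environment described by the (hypothetical) bundled Vagrantfile.
vagrant up
```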

Grand ideas aside, the first goal is to exist. I’ve started a new repository on GitHub, WV, and it would be wonderful to start discussing how this world looks now. I’d love to hear ideas on what provisioner should be used, what features should be started with, and how we can approach making this part of WordPress core.

Props to Zack Tollman for having this conversation with me several times and to Weston Ruter for sparking it again the other day.

v0.8 of Varying Vagrant Vagrants has been pushed


Definitely not as ridiculously long as the v0.7 changelog, but a good month of steady progress.

  • Enable SSH agent forwarding
  • Wrap update/installation procedures with a network status check
  • Enable WP_DEBUG by default
  • Update wp-cli during provisioning
  • Better handling of package status checks
  • Better handling of custom apt sources
  • Add PHPMemcachedAdmin 1.2.2 to repository for memcached stats viewing
  • Add phpMyAdmin 4.0.3 to repository for database management

If I had to pick favorites, it would be the network status check that is done before installations are attempted and the PHPMemcachedAdmin that we’re now including. If you want to learn more about how data is managed in memcached, this is definitely your tool.


On Citizenship in Open-source software development


I’ll steal the TL;DR from the author:

TL;DR: By giving an actual social status to the people contributing to a repository, GitHub would solve the problem of zombie-projects with a scattered community. By allowing these citizens to actually collaborate with each other, instead of just with the owner, repositories will live as long as the community exists, completely on auto-pilot.

Ignore “GitHub” for a second and replace it with the WordPress.org plugin repository. Upvoted contributions to “by the people” branches on plugins could be an interesting way of addressing the stale plugin or unresponsive plugin author issue.