Add users from one site to another on multisite by role with WP-CLI

Today I wanted to make sure a bunch of editors from one site existed as editors of a new staging site that we’re building out. Both sites exist as part of the same multisite network.

Thanks to WP-CLI and xargs, this is pretty straightforward:

wp user list --role=editor --field=user_login | xargs -n1 -I % wp user set-role % editor

This tells WP-CLI to list only the user_login field for all of the editors on the first site. It then pipes this list to xargs, which runs another wp command telling WP-CLI to set each user’s role to editor on the second site.

Because users are already “created” at the global level in multisite, they are added to other sites by setting their role with wp user set-role.
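On multisite you’ll typically point each command at a specific site with the --url flag. Here’s a hedged sketch of the pattern; the user names and both example.com URLs are placeholders, and echo is used as a dry run so you can inspect the generated commands before letting them loose:

```shell
# Placeholder user list standing in for:
#   wp user list --role=editor --field=user_login --url=one.example.com
printf 'alice\nbob\ncarol\n' |
  xargs -n1 -I % echo wp user set-role % editor --url=two.example.com
```

Drop the echo once the output looks right and the real wp commands will run instead.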

I’d estimate that with a list of 15 users, this probably saved close to 15 minutes and didn’t require a whole bunch of clicking and typing with two browser windows open side by side.

Props to Daniel’s runcommand post for providing an easy framework.

Send and receive email for your domain with Postmark and Amazon’s SES, S3, and Lambda services

A long, long, long time ago, sending email via your website was really horrible. Alongside the static HTML powering your Guestbook, you had some copy/pasted CGI script your ISP somehow allowed you to use that probably didn’t work, but oh crap it started working I hope it doesn’t break now.

A long, long time ago, sending email suddenly became easier. It kind of just worked accidentally. You installed WordPress or another app on a shared host and you got emails when people left comments.

A while ago, things started to get hard again. When it’s easy for everyone to send email, it’s also really easy for people to send massive amounts of spam. So the larger email providers got smart and started requiring things like DKIM and SPF before they would reliably deliver your mail. Without these configured on your domain, you’re at the mercy of the algorithm. Thankfully, places like DigitalOcean had excellent documentation for configuring stuff like this and, with a bit of elbow grease, you could get there on a $10/month Linode server.

But then it got super easy! Mandrill offered a free tier for transactional emails that had a limit nobody would reach with a standard blog/comment configuration. You could sign up for an account and use a wp_mail drop-in that handled the API interactions for you.

Of course, as with all free services, there’s a limit. Mandrill reached that limit this year and changed directions into a transactional email add-on for MailChimp accounts.

It happens, especially when it’s free. ¯\_(ツ)_/¯

And so it goes. On to the next service, hopefully configured in a structure that’s prepared for long-term use.

Why Postmark

It looks nice, I’ve seen it mentioned in a handful of conversations, the API seems straightforward, and I didn’t run into anything that made it hard to set up an account. I’m easy.

You get 25,000 free emails with Postmark. After that you pay a really reasonable rate. If you send a ton of emails, the rate gets more reasonable. I think this is a good model and they should probably even charge earlier because it’s going to take me a while to send 25,000 emails.

Once you sign up, Postmark is just as easy as Mandrill. There’s an official plugin that provides a settings screen for you to add your API key and a wp_mail replacement that handles the API calls. If you’re like me, you’ll skip the full plugin and grab only the wp_mail drop-in, add some static configuration, and toss it into mu-plugins.

The catch…

As it should, Postmark requires that you add Sender Signatures for any email address from which you’ll be sending email. So before I can send email from an address at my domain, I need to show that I can already receive email at that same address.

At this point, a normal person decides to enable email forwarding through their domain registrar or host. That is the easy way, but it was Saturday and I was looking for a party.

Receiving email through Amazon

Amazon has 9 million AWS services. It only takes 3 of them to solve the email receipt problem and not one involves setting up a server. The hardest part is keeping track of all the open tabs.

  • Amazon Simple Email Service (SES) was originally built to handle the sending of transactional emails. In September 2015, they added support for receiving email through the same service.
  • Amazon Simple Storage Service (S3) is a place to store things. In this case, it will be where we drop incoming emails to be processed.
  • Amazon’s AWS Lambda is the cool new kid on the block and allows for “serverless” computing. You define a function and are charged only for the computing time that the function actually uses.

To get through this, you’re going to need a verified AWS account and access to your domain’s DNS settings via whichever name server you use. I use DNSimple, which has made every DNS configuration in the last 5 years a pleasant experience. That’s an affiliate link even though it’s already only $7/month for me. 🍻

Let’s do it.

Configuring SES to receive and forward email

  1. Go to SES via the Services menu in your AWS Console and select Domains under Identity Management.
  2. Click on Verify a New Domain at the top of the screen.
  3. Enter the root of the domain you’re verifying, check the Generate DKIM Settings option, and click Verify This Domain.
  4. You’ll be presented with an overlay containing the new records that need to be attached to your domain as part of your DNS configuration. Carefully enter all of these as any mistakes may add extra time to the verification process. I set all of mine with a 10 minute TTL so that any errors may resolve sooner.
    • TXT record that acts as domain verification.
    • Three CNAME records for the DKIM record set, which SES rotates through automatically.
    • And an MX record to route incoming email on your domain to AWS.
  5. Click Close on the DNS overlay. You’ll now need to be patient as the domain is verified. Amazon says this may take 72 hours, but it’s taken 5 minutes for 3 of my domains and 20 minutes for one where I had an error in the config at first. You’ll get a couple emails as soon as the verification goes through.
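As a sketch, the resulting zone entries look something like the following. Everything here is a placeholder: example.com stands in for your domain, the record values come from the SES overlay, and the MX target varies with the AWS region you’re using:

```
; Placeholder values only. Use the exact records SES gives you.
_amazonses.example.com.        600 IN TXT   "base64-verification-token"
token1._domainkey.example.com. 600 IN CNAME token1.dkim.amazonses.com.
example.com.                   600 IN MX    10 inbound-smtp.us-east-1.amazonaws.com.
```

There will be three of the _domainkey CNAME records, one per DKIM token.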

In the meantime, you’ll want to verify any email addresses that you will be forwarding email to. As part of the initial SES configuration, you’re locked in the Amazon SES sandbox and can only send emails to addresses you have verified ahead of time.

  1. Select Email Addresses under Identity Management.
  2. Click on Verify a New Email Address at the top of the screen.
  3. Enter the address you’ll be forwarding mail to and click Verify This Email Address.
  4. Once you receive an email from AWS, click on the link to complete the verification.

Note: You’re also limited to sending 200 messages every 24 hours and a maximum of one per second. Because transactional emails will be sent using Postmark, and only replies to those emails will come through SES, that shouldn’t be a huge deal. If you do reach that limit, you’ll need to request a sending limit increase for SES. If you think you’ll be receiving large volumes of email, you may want to consider using SES for all of your transactional email (Human Made has a plugin) and not use Postmark at all.

Ok, go back to Domains under Identity Management and check that the status for your domain is listed as verified. Once it is, we can continue. If you’re concerned that something isn’t working properly, use a command like dig to double check the TXT record’s response.

› dig TXT _amazonsesbroken.chipconf +short

› dig TXT +short

The first example returns nothing because it’s an invalid record. The second returns the expected value.

Note that I’m querying a specific name server above. I can change that to each of my other name servers to verify that the record has saved properly across all of them. You should use your own domain’s name servers when checking with dig.

  1. Once domain verification has processed, click on Rule Sets under Email Receiving on the left.
  2. Click on View Active Rule Set to view the default rule set. If a default rule set does not exist, create a new one.
  3. Click Create Rule to create a receipt rule for this domain.
  4. For recipient, enter the base of your domain rather than a full email address so that all addresses at that domain will match. Click Next Step.
  5. Select S3 as the first action.
  6. Choose Create S3 bucket in the S3 Bucket dropdown and enter a bucket name. Click Create Bucket.
  7. Leave Object key prefix blank and Encrypt Message unchecked.
  8. Choose Create SNS Topic in the SNS Topic dropdown and enter a Topic Name and Display Name. Click Create Topic.
  9. Click Next Step. We’ll need to do some things before adding the Lambda function.
  10. Give the rule a name, make sure Enabled is checked, Require TLS is unchecked, and Enable spam and virus scanning is checked. Click Next Step.
  11. Review the details and click Create Rule.

Now head over to Lambda via the Services menu in the top navigation. Before completing the rule, we need to add the function used to forward emails that are stored in the S3 bucket to one of the verified email addresses.

Luckily, the hard legwork for this has already been done. We’ll be using the appropriately named and MIT licensed AWS Lambda SES Email Forwarder function. The README on that repository is worth reading as well; it provides more detail for the instructions involved with this section.

  1. Click Create a Lambda function.
  2. Click Skip on the next screen without selecting a blueprint.
  3. Enter a name and description for the function. Make sure Runtime is set to Node.js 4.3. Paste the contents of the AWS Lambda SES Email Forwarder index.js file into the Lambda function code area.
  4. Edit the defaultConfig object at the top of this file to reflect your configuration.
    • fromEmail should be an address at your verified domain; forwarded messages will be sent from this address.
    • emailBucket should be the name of the S3 bucket you created earlier.
    • emailKeyPrefix should be an empty string.
    • forwardMapping is used to configure one or more relationships between the incoming email address and the one the email is forwarded to. Use the bare domain as the key in the last rule to act as a catch-all.
  5. Leave Handler set to index.handler.
  6. Select Basic Execution Role from the role list. A new window will appear to grant Lambda permissions to other AWS resources.
  7. Choose Create a new IAM Role from the IAM Role drop down and provide a Role Name.
  8. Click View Policy Document and then Edit to edit the policy document. Copy and paste the below policy document, also taken from the AWS Lambda SES Email Forwarder repository, into that text area. Make sure to change the S3 bucket name in that policy to match yours. In the below policy document, I replaced S3-BUCKET-NAME with chipconf-emails.
    • {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "logs:CreateLogGroup",
              "logs:CreateLogStream",
              "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
          },
          {
            "Effect": "Allow",
            "Action": "ses:SendRawEmail",
            "Resource": "*"
          },
          {
            "Effect": "Allow",
            "Action": [
              "s3:GetObject",
              "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::chipconf-emails/*"
          }
        ]
      }
  9. Click Allow. You should be transferred back to the Lambda screen.
  10. Under Advanced Settings, set Memory to 128MB and Timeout to 10 seconds. You can leave VPC set to No VPC.
  11. Click Next.
  12. Review the new function details and click Create function.
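To make the defaultConfig edits in step 4 concrete, here’s a hedged sketch of what an edited object might look like. All domains, addresses, and the bucket name are placeholders, and the exact keys should be checked against the version of index.js you copied:

```javascript
// Hypothetical values only; match the keys to your copy of index.js.
var defaultConfig = {
  fromEmail: "noreply@example.com",  // an address at the verified domain
  emailBucket: "example-com-emails", // the S3 bucket created earlier
  emailKeyPrefix: "",                // empty string, per the steps above
  forwardMapping: {
    // a specific address, forwarded to an address verified in SES
    "info@example.com": ["verified-address@example.net"],
    // the bare domain as a catch-all for everything else
    "@example.com": ["verified-address@example.net"]
  }
};
```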

Whew. Almost there.

Now head back to SES via the Services menu in the top navigation. We need to edit the rule set to use the new Lambda function.

  1. Click Rule Sets under Email Receiving and then View Active Rule Set to see the existing rules.
  2. Click on the name of the rule from the previous steps.
  3. Select Lambda as an action type next to Add Action.
  4. Select the new function you created next to Lambda function. Leave Event selected for the Invocation type. Leave None selected for SNS topic.
  5. Click Save Rule.
  6. A permissions overlay will appear to request access for SES to invoke the function on Lambda. Click Add permissions.


Now I can go back to Postmark and add an address at my domain as a valid Sender Signature so that the server can use the Postmark API to send emails on behalf of that address to anyone.

If someone replies to one of those emails (or just sends an email to an address at the domain), it is now received by Amazon SES. The email is then processed and stored as an object in Amazon S3. SES then notifies Amazon Lambda, which fires the stored function used to process that email and forward it via SES to the mapped email address.

Now that you have 1800 words to guide you through the process, I’m going to dump a bunch of screenshots that may help provide some context. Feel free to leave a comment if one of these steps isn’t clear enough.

Security. Stored Content. Backward Compatibility.

This was almost a storm of 140 character segments, but I help make software to democratize publishing. I should use it. 😄

The pains that people working on WordPress take to make security, stored content, and backward compatibility all first priorities are amazing.

This is what I’ve learned and been inspired by the most since I started contributing to WordPress.

If there’s a moment in the course of development in which any of these three things are at risk, the tension or pain—or whatever you want to call it—is palpable. Providing a secure place for users to publish their content without worry is why we’re here.

WordPress 4.2.3 was released this week.

A security issue was fixed. Stored content was not lost.

Backward compatibility in the display of that content was stretched, sometimes broken, when WordPress could not guarantee the resulting security of a user’s site or stored content.

It’s okay to feel offended, especially if you were burned. Know that good work was done to try and make the band-aid as easy to pull off as possible. Know that people have been working on or thinking through easing the burn every minute since.

Running PHPUnit on VVV from PHPStorm 9

I spent so much time trying to get this working last November and kept running into brick walls. Today, I finally revisited a comment on that same issue by Rouven Hurling that pointed out two excellent notes from Andrea Ercolino.

  1. How to run WordPress tests in VVV using PHPStorm 8
  2. How to run WordPress tests in VVV using WP-CLI and PHPStorm 8

These are both great examples of showing your work. Andrea walks through the issue and shows things that didn’t work in addition to things that did. I was able to use the first to quickly solve an issue that I had been so close to figuring out but had missed every time.

And! I haven’t gone through the second article yet, but the hidden gem there is that he has remote debugging with Xdebug working while running tests with PHPUnit. I’ve wanted this so much before.

All that aside (go read those articles!), I stumbled a bit following the instructions and wanted to log some of the settings I had to configure in order for PHPUnit to work properly with PHPStorm 9 and my local configuration.

For this all to work, I had to configure 4 things. The first three are accessed via PHPStorm -> Preferences in OS X. The fourth is accessed via the Run -> Edit Configurations menu.

A remote PHP interpreter

Choosing “Vagrant” in the Interpreters configuration didn’t work for me. I got an error that VBoxManage wasn’t found in my path when I selected my Vagrant instance and it tried to auto-detect things. I’m not sure if this is a bug or a misconfiguration. I almost wonder if it’s related to my upgrade today to Vagrant 1.7.4 and VirtualBox 5.0.

Instead I tried going forward with “SSH Credentials” to see what would happen. I put in the IP address of the VVV box and the vagrant/vagrant username and password combination for SSH. I left the interpreter path alone and when I clicked OK, everything was verified. I was hesitant, because in the first step I had already deviated from the plan.

PHPUnit configuration

This one was easier and didn’t require any altered steps. I added a remote PHPUnit configuration, chose the already configured remote interpreter and was good to close out.

SFTP deployment configuration

Because I was not able to get Vagrant configured properly in the first step, I am dealing with an entirely different path. PHPStorm wants to have a deployment configured over SFTP so that it can be aware of the path structure inside the virtual machine that leads to the tests.

Luckily nothing special needs to happen in the VM to support SFTP, so I was able to add the VVV box’s IP address as the server and vagrant/vagrant as the username and password combination.

I originally tried putting the full path to WordPress here as the root directory, but that caused PHPStorm to prepend that on any automatic path building it did. I tried Autodetect as well, but that detected /home/vagrant, which did not work with the WordPress files. Through the power of deduction, I set the actual root / as the root. 😜

I also used the Mappings tab on this screen to match up my local wordpress-develop directory with the one I’m interacting with remotely.

Run/debug configurations

This one ends up being very straightforward now that everything else is configured. All I needed to do was select the phpunit.xml.dist file via its local path on my machine.

And then I was good! I can now run the single site unit tests for WordPress core just by using the Run command via the green arrows everywhere in PHPStorm.

The tests took 3.14 minutes to run through the interface versus 2.45 minutes through the command line, but I was given a bunch of information on skipped tests throughout. I’ll likely stick to the command line for normal testing, though I’m looking forward to getting Xdebug configured alongside this as it will make troubleshooting individual tests a lot easier.

Big thanks again to Andrea Ercolino for the great material that brought me to a working config!

Flushing rewrite rules in WordPress multisite for fun and profit

Developing plugins and introducing new rewrite rules for features on single site installations of WordPress is pretty straightforward. Via register_activation_hook(), you’re able to set up a task that fires flush_rewrite_rules() and you can safely assume a job well done.

But for multisite? A series of bad choices awaits.

Everything seems normal. The activation hook fires and even gives you a parameter to know if this is a network activation.

Your first option is to handle it exactly the same as a single site installation. flush_rewrite_rules() will fire, the rewrite rules for the current site will be built, and the network admin will be on their own to figure out how these should be applied to the remaining sites.

It seems strange to say, but I’d really like you to choose this one.

Instead, you could detect if this is multisite and if the plugin is being activated network wide. At that point a few other options appear.

Grab all of the sites in wp_blogs and run flush_rewrite_rules() on each via switch_to_blog().

This sounds excellent. It’s horrible.

// Please don't. :)
$query = "SELECT * FROM $wpdb->blogs";
$sites = $wpdb->get_results( $query );
foreach ( $sites as $site ) {
    switch_to_blog( $site->blog_id );
    flush_rewrite_rules();
    restore_current_blog();
}

Switching sites with switch_to_blog() changes the current context for a few things. The $wpdb global knows where it is, and your cache keys are generated properly. But! There is absolutely no awareness of what plugins or themes would be loaded on the other site; only those loaded in the current process are in play.

So. We get this (paraphrasing):

// Delete the rewrite_rules option from
// the switched site. Good!
delete_option( 'rewrite_rules' );

// Build the permalink structure in the
// context of the main site. Bad!
$this->rewrite_rules();

// Update the rewrite_rules option for the
// switched site. Horrible!
update_option( 'rewrite_rules', $this->rules );

All of a sudden every single site on the network has the same rewrite rules. 🔥🔥🔥

Grab all of the sites in wp_blogs and run delete_option( 'rewrite_rules' ) on each via switch_to_blog()

This is much, much closer. But still horrible!

// Warmer.... but please don't.
$query = "SELECT * FROM $wpdb->blogs";
$sites = $wpdb->get_results( $query );

foreach ( $sites as $site ) {
    switch_to_blog( $site->blog_id );
    delete_option( 'rewrite_rules' );
    restore_current_blog();
}

It’s closer because it works. Deleting the rewrite_rules option occurs in the context of the switched site and you don’t need to worry about plugins registering their rewrite rules. On the next page load for that site, the rewrite rules will be built fresh and nothing will be broken.

It’s horrible because large networks. Ugh.

Really, it’s not much of a deal at 10, 50, 100, or even 1000 sites. But 10,000 or 100,000? Granted, it only takes a few seconds to delete the rewrite rules on all of these sites. But what’s going on with all of the other plugins on that next page load? If I have a network with heavy traffic on many sites, there’s going to be a small groan while the server generates and then stores those rewrite rules in the database.

It’s up to the network administrator to draw that line, not the plugin.

You can mitigate this by checking wp_is_large_network() before taking these drastic actions. A network admin not wanting anything crazy to happen will be thankful.

// Much better. :)
if ( wp_is_large_network() ) {
    return;
}

// But watch out...
$query = "SELECT * FROM $wpdb->blogs";
$sites = $wpdb->get_results( $query );
foreach ( $sites as $site ) {
    switch_to_blog( $site->blog_id );
    delete_option( 'rewrite_rules' );
    restore_current_blog();
}

But it’s also horrible because of multiple networks.

When a plugin is activated on a network, it’s stored in an array of network activated plugins in the meta for that network. A blanket query of wp_blogs often doesn’t account for the site_id. If you do go the drastic route and delete all rewrite rules options, make sure to do it on only sites of the current network.

// Much better...
if ( wp_is_large_network() ) {
    return;
}

// ...and we're probably still friends.
$query = "SELECT * FROM $wpdb->blogs WHERE site_id = 1";
$sites = $wpdb->get_results( $query );
foreach ( $sites as $site ) {
    switch_to_blog( $site->blog_id );
    delete_option( 'rewrite_rules' );
    restore_current_blog();
}

And really, ignore all of my examples and use wp_get_sites(). Just be sure to familiarize yourself with the arguments so that you know how to get something other than the default of 100 sites on the main network.

// Much better...
if ( wp_is_large_network() ) {
    return;
}

// ...and we're probably still friends. Note that
// wp_get_sites() returns arrays, not objects.
$sites = wp_get_sites( array( 'network_id' => 1, 'limit' => 1000 ) );
foreach ( $sites as $site ) {
    switch_to_blog( $site['blog_id'] );
    delete_option( 'rewrite_rules' );
    restore_current_blog();
}


At this point you can probably feel yourself trying to make the right decision and giving up in confusion. That’s okay. Things have been this way for a while and they’re likely not going to change any time soon.

I honestly think your best option is to show a notice after network activation to the network admin letting them know rewrite rules have been changed and that they should take steps to flush them across the network.
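For what it’s worth, WP-CLI pairs nicely with that notice. Here’s a hedged sketch of what a network admin could run, with placeholder URLs standing in for the output of wp site list --field=url and echo used as a dry run:

```shell
# Placeholder site list standing in for: wp site list --field=url
printf 'https://one.example.com\nhttps://two.example.com\n' |
  xargs -n1 -I % echo wp rewrite flush --url=%
```

Drop the echo to actually flush each site’s rules, one site at a time, on the admin’s schedule rather than the plugin’s.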

Please: Look through your plugin code and make sure that activation is not firing flush_rewrite_rules() on every site. Every other answer is better than this.

If you’re comfortable, flush the rewrite rules on smaller networks by deleting the rewrite rules option on each site. This could be a little risky, but I’ll take it.

And if nothing else, just don’t handle activation on multisite. It’s a bummer, but will work itself out with a small amount of confusion.