“VR” photos with Google Cardboard Camera

I was really excited when Google offered a Daydream VR headset with my new Google Pixel back in October, but I wasn’t really sure why I should be excited. I figured I’d play a couple games, but that it would mostly be just another stepping stone toward a real VR system in a few years.

When I got the Daydream, it took me a couple uses before I discovered the Google Cardboard Camera app, which is freaking amazing. I was surprised at how well the panorama shots I had taken with my iPhone translated into the VR format. Even cooler (!) is that the app itself lets you capture 360° shots with audio. When you share them with others later, they can experience the surroundings along with the ambient noise you recorded.

It’s still a bit gimmicky, but I’m a pretty big fan of the concept. If you have the Google Cardboard Camera app, here’s the view I enjoyed earlier today when snowshoeing in the Saint Joe National Forest:

https://goo.gl/vrphoto/ARUtiI0VHsWSvc0l2

The crunching of snow is me turning around while taking the shot. Otherwise, it was a super peaceful day with hardly any wind. More about that in another post!

Things I’ve enjoyed reading about open source in 2016

I started this post back in February after reading Nadia Eghbal’s “What success really looks like in open source” so that I’d have something other than tweets to reference when I wanted to refer back to some good stuff.

So here’s a list, in no particular order and including the article mentioned above, of some things I’ve enjoyed reading about open source in 2016:

There were many more great articles that I likely forgot to add here, but that list is a good one to revisit.

The next two are videos. The first is Pieter Hintjens on building open source communities. I was so in love with this talk from the beginning that I opened a new post window to start writing while I was watching.

Pieter Hintjens on Building Open Source Communities

Hintjens, who passed away earlier this year, left behind an immensely useful body of work in his writings on open source, development, and everything else. I know I’ll keep going back and discovering great pieces.

The next video is Joshua Matthews on optimizing your open source project for contribution. Yet another one where I was just nodding along the entire time. I know I picked up a few useful tips from this.

I kind of like that I kept this post rolling this year, so I’m going to try to do the same thing next year. It’d probably be good if I wrote a sentence or two along the way on why I felt the piece was important enough to revisit!

Enjoy and feel free to send me links on open source that you think might be missing from the list above.

Remove an administrator from 58 networks at once with WP-CLI

Ok, so this is super specific and probably won’t come in handy for you. But it’s another example of how quickly you can perform a task using WP-CLI.

At WSU, we currently have 58 networks configured in WordPress multisite.

  • Global administrators are users that have been set under the site_admins option on network 1.
  • Network administrators are users that have been set under the site_admins option on any other network.

Depending on the page view, these combine to make up what WordPress validates as is_super_admin(). See WordPress core ticket #37616 for a look at how this will get a lot better soon.

One of our excellent interns has graduated and we needed to remove him from ~25 different networks. Because I rushed the original UI on adding network administrators, there’s no great way to remove them.

Enter WP-CLI:

wp site option list --field=site_id --search=site_admins | xargs -n1 -I % wp eval 'if ( $sa = get_network_option( %, "site_admins" ) ) { if ( false !== ( $tk = array_search( "old.username", $sa ) ) ) { unset( $sa[ $tk ] ); update_network_option( %, "site_admins", $sa ); } }'

WP-CLI isn’t really multi-network aware out of the box, so the wp site option list command will output all options for all networks. We can use this to our advantage by outputting only the site_id field for the site_admins option on each network.

We then pipe this to xargs and use wp eval to run arbitrary PHP code. This code retrieves the current site_admins data for the given network, removes the old user from the array, and then updates the network option with the new value.

This is a prime candidate for packing into some code and turning it into an actual command—wp network admin remove or something—but it does the trick for now!
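
Out of curiosity, here’s a rough sketch of what registering that command could look like. The command name and structure are hypothetical, not anything official, and it assumes WordPress 4.6+ for get_networks():

<?php
// Rough sketch of a hypothetical `wp network admin remove` command.
// Assumes WordPress 4.6+ for get_networks(); not an official command.
if ( defined( 'WP_CLI' ) && WP_CLI ) {
    WP_CLI::add_command( 'network admin remove', function( $args ) {
        list( $username ) = $args;

        // Loop through every network and strip the user from site_admins.
        foreach ( get_networks() as $network ) {
            $admins = get_network_option( $network->id, 'site_admins' );

            if ( ! is_array( $admins ) ) {
                continue;
            }

            $key = array_search( $username, $admins, true );

            if ( false !== $key ) {
                unset( $admins[ $key ] );
                update_network_option( $network->id, 'site_admins', array_values( $admins ) );
                WP_CLI::log( "Removed {$username} from network {$network->id}." );
            }
        }
    } );
}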

And if we haven’t written a better UI or WP-CLI command by the time somebody else needs to be removed, then referring to this post and using the command will take 10 seconds instead of the 5 minutes it took to make it in the first place. 🙂

Update: Daniel pointed out wp super-admin, which I hadn’t noticed before. This will take a --url argument, so if you know a URL on the network, then you can pass that and it will work. There’s no great way (that I know of yet) to generate a list of URLs that covers all networks, but I started an issue on the WP-CLI repo to start figuring that out.
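
Until then, a short script run through wp eval-file could print one URL per network for piping into wp super-admin. A sketch, assuming WordPress 4.6+ for get_networks():

<?php
// Sketch: print one root URL per network. The output could then be
// piped into something like:
//   wp eval-file network-urls.php | xargs -n1 -I % wp super-admin remove old.username --url=%
foreach ( get_networks() as $network ) {
    echo 'http://' . $network->domain . $network->path . "\n";
}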

Add users from one site to another on multisite by role with WP-CLI

Today I wanted to make sure a bunch of editors from one site existed as editors of a new staging site that we’re building out. Both sites exist as part of the same multisite network.

Thanks to WP-CLI and xargs, this is pretty straightforward:

wp user list --role=editor --url=prod.site.edu --field=user_login | xargs -n1 -I % wp --url=stage.site.edu user set-role % editor

This tells WP-CLI to list only the user_login field for all of the editors on prod.site.edu. It then passes this list via pipe to xargs, which runs another wp command that tells WP-CLI to set the role of each user as editor on stage.site.edu.

Because users are already “created” at the global level in multisite, they are added to other sites by setting their role with wp user set-role.
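
Under the hood, this boils down to a call to add_user_to_blog() for each user. A minimal PHP sketch of the same idea, with a hypothetical site ID and user logins:

<?php
// Minimal sketch: add existing network users to a site with a role.
// The blog ID and user logins are hypothetical examples.
$stage_blog_id = 42; // The site ID of stage.site.edu in the network.

foreach ( array( 'editor-one', 'editor-two' ) as $login ) {
    $user = get_user_by( 'login', $login );

    if ( $user ) {
        // In multisite, this is all "adding a user to a site" means.
        add_user_to_blog( $stage_blog_id, $user->ID, 'editor' );
    }
}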

I’d estimate that with a list of 15 users, this probably saved close to 15 minutes and didn’t require a whole bunch of clicking and typing with two browser windows open side by side.

Props to Daniel’s runcommand post for providing an easy framework.

I guess a counteraction to long tweetstorms could be including URLs to blog posts for even the 117-character tweets.

Comment on Greenpeace’s Planet 4 technical discovery findings

Greenpeace has been undergoing what seems to be a pretty open process in developing Planet 4, the plan for redesigning greenpeace.org. A series of posts have been published on Medium providing details on the discovery process and some of the decisions that have been made.

A few days ago, Davin Hutchins posted “Why Greenpeace and WordPress need each other”, which shares some thoughts on why WordPress is the right choice for a new CMS platform and how Greenpeace and the WordPress community share a similar set of goals in their respective domains.

A few weeks ago, Kevin Muller posted “Checking technical options, we discovered…”, which dives into the more technical details of the stack. This links to a pretty great slide deck that shows how much research went into the technical discovery process.

Both posts, and the process in general, are looking for input, so I’m going to toss out some initial thoughts I have in response after reading through all of that material. I’m going to start with the most reactionary and work my way through to the mundane. 🙂

And to Greenpeace – Hi! I’m happy you’re moving forward with WordPress!

Customizing WordPress

The word “fork” comes up at some point in the technical discovery post and “customized version of” appears in Davin’s post. Those raise a yellow flag for me. Not because I think forking is bad, as that’s the heart of free and open source software, but because I think it would be the wrong technical decision.

The WordPress core project has hundreds of contributors and ships 3 major releases a year. As soon as a fork happens, a maintenance burden begins. This maintenance burden grows as things are built into the forked version that do not make their way into the upstream project.

That said, having your own production version of WordPress managed in git makes complete sense. This is a common practice, and something we do with our multi-network multisite WordPress platform at Washington State University.

This provides the ability to test patches in production or release hotfixes while you’re waiting for contributions to be merged upstream. It also makes management of local, staging, and production environments easier across the team.

Multisite vs Single Site

The list of pros and cons in the slide deck is a good one. I have quite a bit to say about multisite, as I think it’s a good decision for something like this. I’ll break it down into a couple of points:

  • If you foresee using the same set of plugins on every site, or making the same set of plugins available for every site, multisite is a good answer.
  • If it makes sense for a global set of users to have access to multiple sites, multisite is a good answer.

There’s always more detail, but those are pretty good initial gut checks.

I’d be less concerned about the performance issues. A modern LEMP stack is pretty dang good at serving requests, and from a maintenance perspective I’d rather manage one installation than many.

For reference, WSU’s installation of WordPress has 57 networks, 1500 sites, and serves about 2.5 million page views a month on one server. WordPress.com is a single installation of WordPress with many millions of sites, spread out across many thousands of servers.

Performance

The slide deck mentions a couple things. Here are a couple more (very much my opinion):

  • Use an object cache, and get a good object cache plugin. See the plugins for PECL memcached and PECL memcache.
  • Use Batcache for page caching before moving on to something more complex like Varnish. It uses your object cache to cache pages and is super straightforward.
  • HyperDB is worth paying attention to when using multisite. It will help you scale; see the sketch after this list.
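
For reference, HyperDB is configured through a db-config.php file that sits next to wp-config.php. A minimal single-database sketch with placeholder credentials; a real configuration would add read replicas or partitions:

<?php
// Minimal HyperDB db-config.php sketch; the host and credentials are
// placeholders. Real configurations add replicas and partitions.
$wpdb->add_database( array(
    'host'     => 'db.example.com',
    'user'     => 'wp_user',
    'password' => 'secret',
    'name'     => 'wordpress',
    'write'    => 1, // Send writes here.
    'read'     => 1, // Send reads here.
) );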

Plugins

Don’t install too many plugins, and be deliberate about the ones you do install.

WordPress is extremely extensible and has a great plugin ecosystem, but when approaching a project this large you’ll want to be sure that any plugin you install goes through a code review focused on security and performance. This also helps with developer familiarity when things go wrong.

Here are the basic criteria we use at WSU when vetting plugin requests.

REST API

The slide deck briefly mentions a REST API, but it’s not entirely obvious that it means the forthcoming WordPress REST API, which will be shipping next month in WordPress 4.7.

This should be a very useful tool in building custom applications that use data throughout the Greenpeace network. Use the built-in routes and create your own custom routes rather than developing a new API on top of WordPress.
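
Registering a custom route only takes a few lines. A minimal sketch, with a hypothetical namespace, route, and response:

<?php
// Minimal sketch of a custom REST API route (WordPress 4.7+).
// The namespace, route, and response data are hypothetical.
add_action( 'rest_api_init', function() {
    register_rest_route( 'planet4/v1', '/campaigns', array(
        'methods'  => 'GET',
        'callback' => function( $request ) {
            // Return whatever data the application needs.
            return array( 'campaigns' => array() );
        },
    ) );
} );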

Community

And (of course) the last important note is community.

At the very least, pay attention to the notes that are frequently published on Make/Core, the official blog for WordPress core development. As things move forward, raise bugs and enhancements, submit patches, etc…

The easiest way to reduce maintenance burden when extending WordPress is to be familiar with WordPress.

Everyone contributing to WordPress is also interested in the ways that people use and extend WordPress, so please keep sharing your work as you build Planet 4 out. And as Petya mentioned on Twitter, it’d be great to hear your experience at next year’s WordCamp Europe or any other.

Please feel free to ask for more detail if you’d like! 💚☮

I’m going to watch the Matrix sequels because

I waited 17 years to watch The Matrix.

At first it was because of some long-forgotten arbitrary reason involving my opinions of Keanu Reeves and the original hype of the movie. When that wore off, it was kind of fun to have just not seen it.

I did see the copious gunfire in the lobby scene once in 2000 when we went to a friend of a friend’s apartment to check out fancy home surround sound for the first time. That was pretty cool.

Anyhow.

I’ve heard from many people over the years that I should watch the first, but not the sequels.

I get that, but I’m also the person that waited 17 years to see The Matrix just because. And I’m the person that has purposely sought out things like Eraserhead, Meet the Feebles, and countless other bad decisions. I’m pretty sure I’ve even seen Manos: The Hands of Fate, the non-MST3K version.

So I appreciate the advice, and everyone is probably right, but I’m definitely going to watch the sequels.

🤓

For the next time I’m trying to figure out how to update the Java SDK

The only reason I find myself having to update Java is to maintain the Elasticsearch server we have running at WSU. Every time I want to update the provisioning configuration, I end up with 25 tabs open trying to figure out what version is needed and how to get it.

This is hopefully a shortcut for next time.

The Elasticsearch installation instructions told me that when they were written, JDK version 1.8.0_73 was required. My last commit on the provisioning script shows 8u72, which I’m going to guess is 1.8.0_72, so I need to update.

I found the page titled Java SE Development Kit 8 Downloads, which has a list of the current downloads for JDK 8. I’m going to ignore that 8 is not 1.8 and continue under the assumption that JDK 8 is really JDK 1.8 because naming.

At the time of this post, the available downloads are for Java SE Development Kit 8u101. I’m not sure how far off from the Elasticsearch requirements that is, so I found a page that displays Java release numbers and dates.

Of course, now I see that there are CPU and PSU (OTN) versions. The PSU version is 102, the CPU is 101; what to do? Luckily, Oracle has a page explaining the Java release version naming. Even though 102 is higher than 101, Oracle recommends the CPU over the PSU. Ok.

I go back to the downloads page, click the radio button to accept the licensing agreement, copy the URL for jdk-8u101-linux-x64.tar.gz, and I’m done!

Send and receive email for your domain with Postmark and Amazon’s SES, S3, and Lambda services

A long, long, long time ago, sending email via your website was really horrible. Alongside the static HTML powering your Guestbook, you had some copy/pasted CGI script your ISP somehow allowed you to use that probably didn’t work, but oh crap it started working I hope it doesn’t break now.

A long, long time ago, sending email suddenly became easier. It kind of just worked accidentally. You installed WordPress or another app on a shared host and got emails when people left comments.

A while ago, things started to get hard again. When it’s easy for everyone to send email, it’s also really easy for people to send massive amounts of spam. So the larger email providers got smart and started requiring things like DKIM and SPF to really guarantee mail delivery. Without these configured on your domain, you’re at the mercy of the algorithm. Thankfully, places like Digital Ocean had excellent documentation for configuring stuff like this, and with a bit of elbow grease, you could get there on a $10/month Linode server.

But then it got super easy! Mandrill offered a free tier for transactional emails that had a limit nobody would reach with a standard blog/comment configuration. You could sign up for an account and use a wp_mail drop-in that handled the API interactions for you.

Of course, as with all free services, there’s a limit. Mandrill reached that limit this year and changed directions into a transactional email add-on for MailChimp accounts.

It happens, especially when it’s free. ¯\_(ツ)_/¯

And so it goes. On to the next service, hopefully configured in a structure that’s prepared for long-term use.

Why Postmark

It looks nice, I’ve seen it mentioned in a handful of conversations, the API seems straightforward, and I didn’t run into anything that made it hard to set up an account. I’m easy.

You get 25,000 free emails with Postmark. After that you pay a really reasonable rate. If you send a ton of emails, the rate gets more reasonable. I think this is a good model and they should probably even charge earlier because it’s going to take me a while to send 25,000 emails.

Once you sign up, Postmark is just as easy as Mandrill. There’s an official plugin that provides a settings screen for you to add your API key and a wp_mail replacement that handles the API calls. If you’re like me, you’ll skip the full plugin and grab only the wp_mail drop-in, add some static configuration, and toss it into mu-plugins.
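
To give an idea of the shape of that drop-in, here’s a minimal sketch that relays mail through Postmark’s /email endpoint. The official drop-in handles additional headers, attachments, and error reporting properly; the server token and from address below are placeholders:

<?php
// Minimal mu-plugins sketch of a wp_mail() replacement that relays
// through Postmark. Core's wp_mail() is wrapped in function_exists(),
// so defining it first here takes precedence. The server token and
// from address are placeholders; the official drop-in does much more.
function wp_mail( $to, $subject, $message, $headers = '', $attachments = array() ) {
    $response = wp_remote_post( 'https://api.postmarkapp.com/email', array(
        'headers' => array(
            'Accept'                  => 'application/json',
            'Content-Type'            => 'application/json',
            'X-Postmark-Server-Token' => 'YOUR-SERVER-TOKEN',
        ),
        'body' => wp_json_encode( array(
            'From'     => 'noreply@chipconf.com',
            'To'       => is_array( $to ) ? implode( ',', $to ) : $to,
            'Subject'  => $subject,
            'TextBody' => $message,
        ) ),
    ) );

    return ! is_wp_error( $response ) && 200 === wp_remote_retrieve_response_code( $response );
}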

The catch…

As it should, Postmark requires that you add Sender Signatures for any email address from which you’ll be sending email. So before I can send email from jeremy@chipconf.com, I need to show that I can already receive email on that same address.

At this point, a normal person decides to enable email forwarding through their domain registrar or host. That is the easy way, but it was Saturday and I was looking for a party.

Receiving email through Amazon

Amazon has 9 million AWS services. It only takes 3 of them to solve the email receipt problem and not one involves setting up a server. The hardest part is keeping track of all the open tabs.

  • Amazon Simple Email Service (SES) was originally built to handle the sending of transactional emails. In September 2015, they added support for receiving email through the same service.
  • Amazon Simple Storage Service (S3) is a place to store things. In this case, it will be where we drop incoming emails to be processed.
  • Amazon’s AWS Lambda is the cool new kid on the block and allows for “serverless” computing. You define a function and are charged only for the computing time that the function actually uses.

To get through this, you’re going to need a verified AWS account and access to your domain’s DNS settings via whichever name server you use. I use DNSimple, which has made every DNS configuration in the last 5 years a pleasant experience. That’s an affiliate link even though it’s already only $7/month for me. 🍻

Let’s do it.

Configuring SES to receive and forward email

  1. Go to SES via the Services menu in your AWS Console and select Domains under Identity Management.
  2. Click on Verify a New Domain at the top of the screen.
  3. Enter the root of the domain you’re verifying, in my case chipconf.com, check the Generate DKIM Settings option, and click Verify This Domain.
  4. You’ll be presented with an overlay containing the new records that need to be attached to your domain as part of your DNS configuration. Carefully enter all of these, as any mistakes may add extra time to the verification process. I set all of mine with a 10-minute TTL so that any errors would resolve sooner.
    • TXT record that acts as domain verification.
    • Three CNAME records for the DKIM record set, which SES rotates through automatically.
    • And an MX record to route incoming email on your domain to AWS.
  5. Click Close on the DNS overlay. You’ll now need to be patient as the domain is verified. Amazon says this may take 72 hours, but it’s taken 5 minutes for 3 of my domains and 20 minutes for one where I had an error in the config at first. You’ll get a couple emails as soon as the verification goes through.

In the meantime, you’ll want to verify any email addresses that you will be forwarding email to. As part of the initial SES configuration, you’re locked into the Amazon SES sandbox and can only send emails to addresses you have verified ahead of time.

  1. Select Email Addresses under Identity Management.
  2. Click on Verify a New Email Address at the top of the screen.
  3. Enter the address you’ll be forwarding mail to and click Verify This Email Address.
  4. Once you receive an email from AWS, click on the link to complete the verification.

Note: You’re also limited to sending 200 messages every 24 hours and a maximum of one per second. Because transactional emails will be sent using Postmark, and only replies to those emails will come through SES, that shouldn’t be a huge deal. If you do reach that limit, you’ll need to request a sending limit increase for SES. If you think you’ll be receiving large volumes of email, you may also want to consider using SES for all of your transactional email (Human Made has a plugin) and not use Postmark at all.

Ok, go back to Domains under Identity Management and check that the status for your domain is listed as verified. Once it is, we can continue. If you’re concerned that something isn’t working properly, use a command like dig to double-check the TXT record’s response.

› dig TXT _amazonsesbroken.chipconf @ns1.dnsimple.com +short


› dig TXT _amazonses.chipconf.com @ns1.dnsimple.com +short
"4QdRWJvCM6j1dS4IdK+csUB42YxdCWEniBKKAn9rgeM="

The first example returns nothing because it’s an invalid record. The second returns the expected value.

Note that I’m using ns1.dnsimple.com above. I can change that to ns2.dnsimple.com, ns3.dnsimple.com, etc… to verify that the record has saved properly throughout all my name servers. You should use your domain’s name server when checking dig.

  1. Once domain verification has processed, click on Rule Sets under Email Receiving on the left.
  2. Click on View Active Rule Set to view the default rule set. If a default rule set does not exist, create a new one.
  3. Click Create Rule to create a receipt rule for this domain.
  4. For recipient, enter the base of your domain (e.g. chipconf.com) rather than a full email address so that all addresses at that domain will match. Click Next Step.
  5. Select S3 as the first action.
  6. Choose Create S3 bucket in the S3 Bucket dropdown and enter a bucket name. Click Create Bucket.
  7. Leave Object key prefix blank and Encrypt Message unchecked.
  8. Choose Create SNS Topic in the SNS Topic dropdown and enter a Topic Name and Display Name. Click Create Topic.
  9. Click Next Step. We’ll need to do some things before adding the Lambda function.
  10. Give the rule a name, make sure Enabled is checked, Require TLS is unchecked, and Enable spam and virus scanning is checked. Click Next Step.
  11. Review the details and click Create Rule.

Now head over to Lambda via the Services menu in the top navigation. Before completing the rule, we need to add the function used to forward emails that are stored in the S3 bucket to one of the verified email addresses.

Luckily, the hard legwork for this has already been done. We’ll be using the appropriately named and MIT-licensed AWS Lambda SES Email Forwarder function. The README on that repository is worth reading as well; it provides more detail for the instructions in this section.

  1. Click Create a Lambda function.
  2. Click Skip on the next screen without selecting a blueprint.
  3. Enter a name and description for the function. Make sure Runtime is set to Node.js 4.3. Paste the contents of the AWS Lambda SES Email Forwarder index.js file into the Lambda function code area.
  4. Edit the defaultConfig object at the top of this file to reflect your configuration.
    • fromEmail should be something like noreply@chipconf.com
    • emailBucket should be the name of the S3 bucket you created earlier.
    • emailKeyPrefix should be an empty string.
    • forwardMapping is used to configure one or more relationships between the incoming email address and the one the email is forwarded to. Use something like @chipconf.com as a catch-all for the last rule.
  5. Leave Handler set to index.handler.
  6. Select Basic Execution Role from the role list. A new window will appear to grant Lambda permissions to other AWS resources.
  7. Choose Create a new IAM Role from the IAM Role dropdown and provide a Role Name.
  8. Click View Policy Document and then Edit to edit the policy document. Copy and paste the below policy document, also taken from the AWS Lambda SES Email Forwarder repository, into that text area. Make sure to change the S3 bucket name in that policy to match yours. In the below policy document, I replaced S3-BUCKET-NAME with chipconf-emails.
    • {
        "Version": "2012-10-17",
        "Statement": [
           {
              "Effect": "Allow",
              "Action": [
                 "logs:CreateLogGroup",
                 "logs:CreateLogStream",
                 "logs:PutLogEvents"
              ],
              "Resource": "arn:aws:logs:*:*:*"
           },
           {
              "Effect": "Allow",
              "Action": "ses:SendRawEmail",
              "Resource": "*"
           },
           {
              "Effect": "Allow",
              "Action": [
                 "s3:GetObject",
                 "s3:PutObject"
              ],
              "Resource": "arn:aws:s3:::S3-BUCKET-NAME/*"
           }
        ]
      }
  9. Click Allow. You should be transferred back to the Lambda screen.
  10. Under Advanced Settings, set Memory to 128MB and Timeout to 10 seconds. You can leave VPC set to No VPC.
  11. Click Next.
  12. Review the new function details and click Create function.

Whew. Almost there.

Now head back to SES via the Services menu in the top navigation. We need to edit the rule set to use the new Lambda function.

  1. Click Rule Sets under Email Receiving and then View Active Rule Set to see the existing rules.
  2. Click on the name of the rule from the previous steps.
  3. Select Lambda as an action type next to Add Action.
  4. Select the new function you created next to Lambda function. Leave Event selected for the Invocation type. Leave None selected for SNS topic.
  5. Click Save Rule.
  6. A permissions overlay will appear to request access for SES to invoke the function on Lambda. Click Add permissions.

Voilà!

Now I can go back to Postmark and add jeremy@chipconf.com as a valid Sender Signature so that the server can use the Postmark API to send emails on behalf of jeremy@chipconf.com to any address.

If someone replies to one of those emails (or just sends one to jeremy@chipconf.com), it is now received by Amazon SES. The email is then processed and stored as an object in Amazon S3. SES then notifies Amazon Lambda, which fires the stored function used to process that email and forward it via SES to the mapped email address.

Now that you have 1800 words to guide you through the process, I’m going to dump a bunch of screenshots that may help provide some context. Feel free to leave a comment if one of these steps isn’t clear enough.

Becoming a better WordPress Developer

I wrote this as a post for a friend a bit ago with the intention of cleaning it up to publish at some point. I kind of like the informality, so I’m mostly leaving it as is. Enjoy!

Reading core code was a big step in becoming a better WordPress developer.

I remember having a self-realization moment a few years ago where I knew that if I reached a point where I never referred to the Codex to figure out how to handle something, then I had become good at this stuff. My self-test wasn’t necessarily that the arguments for register_post_type() are locked in memory, but that I know how to get the answer from core code.

My biggest step toward becoming more familiar with core code was the introduction of an IDE into my toolset. I prefer PhpStorm, but NetBeans is great and open source.

The ability to click on a function I was using to jump to the code in core has gone a long, long way for me. Having functions autocomplete and tell me what to expect in return has been huge in understanding how all the pipes are connected. I know not everyone loves that world and would rather spend time in a text editor, but I’ve been much more efficient since I switched from Sublime Text.

I guess at some point you transcend to Nacin level and have it all memorized.

Previously: Thoughts on Contributing to WordPress Core

My first contribution to core (which I talk about in that post) was a direct result of reading core code. We were poring over different possibilities and a typo just appeared. Now that I’m familiar enough with the codebase, there are probably some areas I could stare at for an hour or two and find similar little nuggets. See the documented return value(s) in get_blog_details() as an example of a patch waiting to be discovered. 😉

And actually, now that I’m trying to think of more examples, read that post. The “Types of Trac activity in Jeremy’s perceived order of difficulty” is important. The tedious task of going through Trac tickets is more fun when you comment on things. And every time you comment on something you become more familiar. Testing and trying to reproduce outstanding bugs will go a long way as well. Once you have the patch applied locally, there’s a good chance you’ll see something to change.

And that’s where the advice cuts off. There’s plenty more, but reading core code alone will get you started. 🛩