Comment on Greenpeace’s Planet 4 technical discovery findings

Greenpeace has been following what seems to be a pretty open process in developing Planet 4, the plan for redesigning greenpeace.org. A series of posts has been published on Medium providing details on the discovery process and some of the decisions that have been made.

A few days ago, Davin Hutchins posted “Why Greenpeace and WordPress need each other”, which shares some thoughts on why WordPress is the right choice for a new CMS platform and how Greenpeace and the WordPress community share a similar set of goals in their respective domains.

A few weeks ago, Kevin Muller posted “Checking technical options, we discovered…”, which dives into the more technical details of the stack. This links to a pretty great slide deck that shows how much research went into the technical discovery process.

Both posts, and the process in general, are looking for input, so I’m going to toss out some initial thoughts in response after reading through all of that material. I’m going to start with the most reactionary and work my way through to the mundane. 🙂

And to Greenpeace – Hi! I’m happy you’re moving forward with WordPress!

Customizing WordPress

The word “fork” comes up at some point in the technical discovery post and “customized version of” appears in Davin’s post. Those start raising a yellow flag for me. Not because I think forking is bad, as that’s the heart of free and open source software, but because I think it would be the wrong technical decision.

The WordPress core project has hundreds of contributors and ships 3 major releases a year. As soon as a fork happens, a maintenance burden begins. This maintenance burden grows as things are built into the forked version that do not make their way into the upstream project.

That said, having your own production version of WordPress managed in git makes complete sense. This is a common practice, and something we do with our multi-network multisite WordPress platform at Washington State University.

This provides the ability to test patches in production or release hotfixes while you’re waiting for contributions to be merged upstream. It also makes management of local, staging, and production environments easier across the team.

Multisite vs Single Site

This list of pros and cons in the slide deck is a good one. I have quite a bit to say about multisite, as I think it’s a good decision for something like this. I’ll break it down into a couple of points:

  • If you foresee using the same set of plugins on every site, or making the same set of plugins available for every site, multisite is a good answer.
  • If it makes sense for a global set of users to have access to multiple sites, multisite is a good answer.

There’s always more detail, but those are pretty good initial gut checks.

I’d be less concerned about the performance issues. Today’s modern LEMP stack is pretty dang good at serving requests and from a maintenance perspective I’d rather manage one installation than many.

For reference, WSU’s installation of WordPress has 57 networks, 1500 sites, and serves about 2.5 million page views a month on one server. WordPress.com is a single installation of WordPress with many millions of sites, spread out across many thousands of servers.

Performance

The slide deck mentions a couple things. Here are a couple more (very much my opinion):

  • Use something for object cache, and get a good object cache plugin. See the plugins for PECL memcached and PECL memcache. (A minimal wp-config.php sketch follows this list.)
  • Use Batcache for page caching before moving on to something more complex like Varnish. This uses your object cache to cache pages and is super straightforward.
  • HyperDB is worth paying attention to when using multisite. This will help you scale.
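
Here’s what the wp-config.php side of that setup can look like: a minimal sketch assuming the standard drop-in locations (wp-content/object-cache.php from the memcached object cache plugin, wp-content/advanced-cache.php from Batcache) and a memcached instance on localhost. Check it against the drop-ins you actually install.

// In wp-config.php. WP_CACHE tells WordPress to load wp-content/advanced-cache.php,
// which is where the Batcache drop-in lives.
define( 'WP_CACHE', true );

// Most versions of the memcached object cache drop-in read this global to find
// their servers. A single local memcached instance is assumed here.
$memcached_servers = array(
    'default' => array( '127.0.0.1:11211' ),
);

Batcache stores rendered pages in that same object cache, which is why the object cache is worth getting in place first.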

Plugins

Don’t install too many plugins, and be deliberate about the ones you do install.

WordPress is extremely extensible and has a great plugin ecosystem, but when approaching a project this large you’ll want to be sure that any plugin you install goes through a code review focused on security and performance. This also helps with developer familiarity when things go wrong.

Here are the basic criteria we use at WSU when vetting plugin requests.

REST API

The slide deck briefly mentions a REST API, but it’s not entirely obvious that it means the forthcoming WordPress REST API, which will be shipping next month in WordPress 4.7.

This should be a very useful tool in building custom applications that use data throughout the Greenpeace network. Use the built-in routes and create your own custom routes rather than developing a new API on top of WordPress.
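
To make that concrete, here’s a rough sketch of a custom route that wraps core data rather than a separate API layer. The planet4/v1 namespace and /campaigns route are hypothetical names for illustration, not anything from the Planet 4 plans.

add_action( 'rest_api_init', function() {
    register_rest_route( 'planet4/v1', '/campaigns', array(
        'methods'  => 'GET',
        'callback' => function( WP_REST_Request $request ) {
            // Build a small response from core data using existing WordPress functions.
            $posts = get_posts( array( 'posts_per_page' => 5 ) );

            return array_map( function( $post ) {
                return array(
                    'id'    => $post->ID,
                    'title' => get_the_title( $post ),
                    'link'  => get_permalink( $post ),
                );
            }, $posts );
        },
    ) );
} );

Anything registered this way picks up the REST API’s existing routing and response handling rather than reinventing them.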

Community

And (of course) the last important note is community.

At the very least, pay attention to the notes that are frequently published on Make/Core, the official blog for WordPress core development. As things move forward, raise bugs and enhancements, submit patches, etc…

The easiest way to reduce maintenance burden when extending WordPress is to be familiar with WordPress.

Everyone contributing to WordPress is also interested in the ways that people use and extend WordPress, so please keep sharing your work as you build Planet 4 out. And as Petya mentioned on Twitter, it’d be great to hear about your experience at next year’s WordCamp Europe or any other WordCamp.

Please feel free to ask for more detail if you’d like! 💚☮

I’m going to watch the Matrix sequels because

I waited 17 years to watch the Matrix.

At first it was because of some long forgotten arbitrary reason involving my opinions of Keanu Reeves and the original hype of the movie. When that wore off it was kind of fun to have just not seen it.

I did see the copious gunfire in the lobby scene once in 2000 when we went to a friend of a friend’s apartment to check out fancy home surround sound for the first time. That was pretty cool.

Anyhow.

I’ve heard from many people over the years that I should watch the first, but not the sequels.

I get that, but I’m also the person that waited 17 years to see The Matrix just because. And I’m the person that has purposely sought out things like Eraserhead, Meet the Feebles, and countless other bad decisions. I’m pretty sure I’ve even seen Manos: The Hands of Fate, the non-MST3K version.

So I appreciate the advice, and everyone is probably right, but I’m definitely going to watch the sequels.

🤓

For the next time I’m trying to figure out how to update the Java SDK

The only reason I find myself having to update Java is to maintain the Elasticsearch server we have running at WSU. Every time I want to update the provisioning configuration, I end up with 25 tabs open trying to figure out what version is needed and how to get it.

This is hopefully a shortcut for next time.

The Elasticsearch installation instructions told me that when they were written, JDK version 1.8.0_73 was required. My last commit on the provisioning script shows 8u72, which I’m going to guess is 1.8.0_72, so I need to update.

I found the page titled Java SE Development Kit 8 Downloads, which has a list of the current downloads for JDK 8. I’m going to ignore that 8 is not 1.8 and continue under the assumption that JDK 8 is really JDK 1.8 because naming.

At the time of this post, the available downloads are for Java SE Development Kit 8u101. I’m not sure how far off from the Elasticsearch requirements that is, so I found a page that displays Java release numbers and dates.

Of course now I see that there are CPU and PSU (OTN) versions. The PSU version is 102, the CPU is 101, what to do. Luckily, Oracle has a page explaining the Java release version naming. Even though 102 is higher than 101, Oracle recommends the CPU over the PSU. Ok.

I go back to the downloads page, click the radio button to accept the licensing agreement, copy the URL for jdk-8u101-linux-x64.tar.gz, and I’m done!

Send and receive email for your domain with Postmark and Amazon’s SES, S3, and Lambda services

A long, long, long time ago, sending email via your website was really horrible. Alongside the static HTML powering your Guestbook, you had some copy/pasted CGI script your ISP somehow allowed you to use that probably didn’t work, but oh crap it started working I hope it doesn’t break now.

A long, long time ago, sending email suddenly became easier. It kind of just worked accidentally. You installed WordPress or another app on a shared host and you got emails when people left comments.

A while ago, things started to get hard again. When it’s easy for everyone to send email, it’s also really easy for people to send massive amounts of spam. So the larger email providers got smart and started requiring things like DKIM and SPF before they would reliably deliver your mail. Without these configured on your domain, you’re at the mercy of the algorithm. Thankfully, places like Digital Ocean had excellent documentation for configuring stuff like this and with a bit of elbow grease, you could get there on a $10/month Linode server.

But then it got super easy! Mandrill offered a free tier for transactional emails that had a limit nobody would reach with a standard blog/comment configuration. You could sign up for an account and use a wp_mail drop-in that handled the API interactions for you.

Of course, as with all free services, there’s a limit. Mandrill reached that limit this year and changed directions into a transactional email add-on for MailChimp accounts.

It happens, especially when it’s free. ¯\_(ツ)_/¯

And so it goes. On to the next service, hopefully configured in a structure that’s prepared for long-term use.

Why Postmark

It looks nice, I’ve seen it mentioned in a handful of conversations, the API seems straightforward, and I didn’t run into anything that made it hard to set up an account. I’m easy.

You get 25,000 free emails with Postmark. After that you pay a really reasonable rate. If you send a ton of emails, the rate gets more reasonable. I think this is a good model and they should probably even charge earlier because it’s going to take me a while to send 25,000 emails.

Once you sign up, Postmark is just as easy as Mandrill. There’s an official plugin that provides a settings screen for you to add your API key and a wp_mail replacement that handles the API calls. If you’re like me, you’ll skip the full plugin and grab only the wp_mail drop-in, add some static configuration, and toss it into mu-plugins.
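
If you’re curious what that looks like, here’s a rough sketch of an mu-plugin that replaces the pluggable wp_mail() with a call to Postmark’s email API. This is not the official plugin’s code: the POSTMARK_API_TOKEN constant is my own placeholder (define it in wp-config.php), and it only handles plain-text mail.

if ( ! function_exists( 'wp_mail' ) ) {
    // wp_mail() is pluggable, so defining it in an mu-plugin overrides the core version.
    function wp_mail( $to, $subject, $message, $headers = '', $attachments = array() ) {
        $response = wp_remote_post( 'https://api.postmarkapp.com/email', array(
            'headers' => array(
                'Accept'                  => 'application/json',
                'Content-Type'            => 'application/json',
                'X-Postmark-Server-Token' => POSTMARK_API_TOKEN,
            ),
            'body'    => wp_json_encode( array(
                'From'     => 'jeremy@chipconf.com',
                'To'       => is_array( $to ) ? implode( ',', $to ) : $to,
                'Subject'  => $subject,
                'TextBody' => $message,
            ) ),
        ) );

        return ! is_wp_error( $response ) && 200 === wp_remote_retrieve_response_code( $response );
    }
}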

The catch…

As it should, Postmark requires that you add Sender Signatures for any email address from which you’ll be sending email. So before I can send email from jeremy@chipconf.com, I need to show that I can already receive email on that same address.

At this point, a normal person decides to enable email forwarding through their domain registrar or host. That is the easy way, but it was Saturday and I was looking for a party.

Receiving email through Amazon

Amazon has 9 million AWS services. It only takes 3 of them to solve the email receipt problem and not one involves setting up a server. The hardest part is keeping track of all the open tabs.

  • Amazon Simple Email Service (SES) was originally built to handle the sending of transactional emails. In September 2015, they added support for receiving email through the same service.
  • Amazon Simple Storage Service (S3) is a place to store things. In this case, it will be where we drop incoming emails to be processed.
  • Amazon’s AWS Lambda is the cool new kid on the block and allows for “serverless” computing. You define a function and are charged only for the computing time that the function actually uses.

To get through this, you’re going to need a verified AWS account and access to your domain’s DNS settings via whichever name server you use. I use DNSimple, which has made every DNS configuration in the last 5 years a pleasant experience. That’s an affiliate link even though it’s already only $7/month for me. 🍻

Let’s do it.

Configuring SES to receive and forward email

  1. Go to SES via the Services menu in your AWS Console and select Domains under Identity Management.
  2. Click on Verify a New Domain at the top of the screen.
  3. Enter the root of the domain you’re verifying, in my case chipconf.com, check the Generate DKIM Settings option, and click Verify This Domain.
  4. You’ll be presented with an overlay containing the new records that need to be attached to your domain as part of your DNS configuration. Carefully enter all of these as any mistakes may add extra time to the verification process. I set all of mine with a 10 minute TTL so that any errors may resolve sooner.
    • A TXT record that acts as domain verification.
    • Three CNAME records for the DKIM record set, which SES rotates through automatically.
    • And an MX record to route incoming email on your domain to AWS.
  5. Click Close on the DNS overlay. You’ll now need to be patient as the domain is verified. Amazon says this may take 72 hours, but it’s taken 5 minutes for 3 of my domains and 20 minutes for one where I had an error in the config at first. You’ll get a couple emails as soon as the verification goes through.

In the meantime, you’ll want to verify any email addresses that you will be forwarding email to. As part of the initial SES configuration, you’re locked in the Amazon SES sandbox and can only send emails to addresses you have verified ahead of time.

  1. Select Email Addresses under Identity Management.
  2. Click on Verify a New Email Address at the top of the screen.
  3. Enter the address you’ll be forwarding mail to and click Verify This Email Address.
  4. Once you receive an email from AWS, click on the link to complete the verification.

Note: You’re also limited to sending 200 messages every 24 hours and a maximum of one per second. Because transactional emails will be sent using Postmark, and only replies to those emails will be forwarded through SES, that shouldn’t be a huge deal. If you do reach that limit, you’ll need to request a sending limit increase for SES. If you think you’ll be receiving large volumes of email, you may also want to consider using SES for all of your transactional email (HumanMade has a plugin) and not use Postmark at all.

Ok, go back to Domains under Identity Management and check that the status for your domain is listed as verified. Once it is, we can continue. If you’re concerned that something isn’t working properly, use a command like dig to double check the TXT record’s response.

› dig TXT _amazonsesbroken.chipconf @ns1.dnsimple.com +short


› dig TXT _amazonses.chipconf.com @ns1.dnsimple.com +short
"4QdRWJvCM6j1dS4IdK+csUB42YxdCWEniBKKAn9rgeM="

The first example returns nothing because it’s an invalid record. The second returns the expected value.

Note that I’m using ns1.dnsimple.com above. I can change that to ns2.dnsimple.com, ns3.dnsimple.com, etc… to verify that the record has saved properly throughout all my name servers. You should use your domain’s name server when checking dig.

  1. Once domain verification has processed, click on Rule Sets under Email Receiving on the left.
  2. Click on View Active Rule Set to view the default rule set. If a default rule set does not exist, create a new one.
  3. Click Create Rule to create a receipt rule for this domain.
  4. For recipient, enter the base of your domain (e.g. chipconf.com) rather than a full email address so that all addresses at that domain will match. Click Next Step.
  5. Select S3 as the first action.
  6. Choose Create S3 bucket in the S3 Bucket dropdown and enter a bucket name. Click Create Bucket.
  7. Leave Object key prefix blank and Encrypt Message unchecked.
  8. Choose Create SNS Topic in the SNS Topic dropdown and enter a Topic Name and Display Name. Click Create Topic.
  9. Click Next Step. We’ll need to do some things before adding the Lambda function.
  10. Give the rule a name, make sure Enabled is checked, Require TLS is unchecked, and Enable spam and virus scanning is checked. Click Next Step.
  11. Review the details and click Create Rule.

Now head over to Lambda via the Services menu in the top navigation. Before completing the rule, we need to add the function used to forward emails that are stored in the S3 bucket to one of the verified email addresses.

Luckily, the hard legwork for this has already been done. We’ll be using the appropriately named and MIT-licensed AWS Lambda SES Email Forwarder function. The README on that repository is worth reading as well; it provides more detail for the instructions involved with this section.

  1. Click Create a Lambda function.
  2. Click Skip on the next screen without selecting a blueprint.
  3. Enter a name and description for the function. Make sure Runtime is set to Node.js 4.3. Paste the contents of the AWS Lambda SES Email Forwarder index.js file into the Lambda function code area.
  4. Edit the defaultConfig object at the top of this file to reflect your configuration.
    • fromEmail should be something like noreply@chipconf.com
    • emailBucket should be the name of the S3 bucket you created earlier.
    • emailKeyPrefix should be an empty string.
    • forwardMapping is used to configure one or more relationships between the incoming email address and the one the email is forwarded to. Use something like @chipconf.com as a catch-all for the last rule.
  5. Leave Handler set to index.handler.
  6. Select Basic Execution Role from the role list. A new window will appear to grant Lambda permissions to other AWS resources.
  7. Choose Create a new IAM Role from the IAM Role drop down and provide a Role Name.
  8. Click View Policy Document and then Edit to edit the policy document. Copy and paste the below policy document, also taken from the AWS Lambda SES Email Forwarder repository, into that text area. Make sure to change the S3 bucket name in that policy to match yours. In the below policy document, I replaced S3-BUCKET-NAME with chipconf-emails.
    • {
        "Version": "2012-10-17",
        "Statement": [
           {
              "Effect": "Allow",
              "Action": [
                 "logs:CreateLogGroup",
                 "logs:CreateLogStream",
                 "logs:PutLogEvents"
              ],
              "Resource": "arn:aws:logs:*:*:*"
           },
           {
              "Effect": "Allow",
              "Action": "ses:SendRawEmail",
              "Resource": "*"
           },
           {
              "Effect": "Allow",
              "Action": [
                 "s3:GetObject",
                 "s3:PutObject"
              ],
              "Resource": "arn:aws:s3:::S3-BUCKET-NAME/*"
           }
        ]
      }
  9. Click Allow. You should be transferred back to the Lambda screen.
  10. Under Advanced Settings, set Memory to 128MB and Timeout to 10 seconds. You can leave VPC set to No VPC.
  11. Click Next.
  12. Review the new function details and click Create function.

Whew. Almost there.

Now head back to SES via the Services menu in the top navigation. We need to edit the rule set to use the new Lambda function.

  1. Click Rule Sets under Email Receiving and then View Active Rule Set to see the existing rules.
  2. Click on the name of the rule from the previous steps.
  3. Select Lambda as an action type next to Add Action.
  4. Select the new function you created next to Lambda function. Leave Event selected for the Invocation type. Leave None selected for SNS topic.
  5. Click Save Rule.
  6. A permissions overlay will appear to request access for SES to invoke the function on Lambda. Click Add permissions.

Voilà!

Now I can go back to Postmark and add jeremy@chipconf.com as a valid Sender Signature so that the server can use the Postmark API to send emails on behalf of jeremy@chipconf.com to any address.

If someone replies to one of those emails (or just sends one to jeremy@chipconf.com), it is now received by Amazon SES. The email is then processed and stored as an object in Amazon S3. SES then notifies Amazon Lambda, which fires the stored function used to process that email and forward it via SES to the mapped email address.

Now that you have 1800 words to guide you through the process, I’m going to dump a bunch of screenshots that may help provide some context. Feel free to leave a comment if one of these steps isn’t clear enough.

Becoming a better WordPress Developer

I wrote this as a post for a friend a bit ago with the intention of cleaning it up to publish at some point. I kind of like the informality, so I’m mostly leaving it as is. Enjoy!

Reading core code was a big step in becoming a better WordPress developer.

I remember having a self-realization moment a few years ago where I knew that if I reached a point where I never referred to the Codex to figure out how to handle something, then I had become good at this stuff. My self-test wasn’t necessarily that the arguments for register_post_type() are locked in memory, but that I know how to get the answer from core code.

My biggest step toward becoming more familiar with core code was the introduction of an IDE into my toolset. I prefer PHPStorm, but Netbeans is great and open source.

The ability to click on a function I was using to jump to the code in core has gone a long, long way for me. Having functions autocomplete and tell me what to expect in return has been huge in understanding how all the pipes are connected. I know not everyone loves that world and would rather spend time in a text editor, but I’ve been much more efficient since I switched from Sublime Text.

I guess at some point you transcend to Nacin level and have it all memorized.

Previously: Thoughts on Contributing to WordPress Core

My first contribution to core (which I talk about in that post) was a direct result of reading core code. We were poring over different possibilities and a typo just appeared. Now that I’m familiar enough with the codebase, there are probably some areas I could stare at for an hour or two and find similar little nuggets. See the documented return value(s) in get_blog_details() as an example of a patch waiting to be discovered. 😉

And actually, now that I’m trying to think of more examples, read that post. The “Types of Trac activity in Jeremy’s perceived order of difficulty” is important. The tedious task of going through Trac tickets is more fun when you comment on things. And every time you comment on something you become more familiar. Testing and trying to reproduce outstanding bugs will go a long way as well. Once you have the patch applied locally, there’s a good chance you’ll see something to change.

And that’s where the advice cuts off. There’s plenty more, but reading core code alone will get you started. 🛩

Pieter Hintjens on Building Open Source Communities

This was a really interesting listen. Pieter Hintjens, the founder of ZeroMQ, lays out a handful of rules for building open source communities.

  1. Put people before code.
  2. Make progress before you get consensus.
  3. Problems before solutions.
  4. Contracts before internals.

Everything you do as a founder of a community should be aimed at getting the people into your project and getting them happy and getting them productive.

And one of the ways to do that, according to Hintjens in rule 2, is to merge pull requests liberally and get new contributors’ “vision of progress on record” so that they immediately become members of the community. Worry about fixing the progress later.

His thoughts around licensing (our contract) were also interesting. Without formal contracts, pull requests rely on the license applied to a project. If a project has a very lenient license, such as MIT, and somebody forks your project, it’s feasible that a different license could be applied to code they submit back to the project through a pull request. If a project has a share-alike license (his example was MPLv2; I’m more familiar with the GPL), then you can rely on incoming patches already being compatible without additional paperwork.

I’d like to explore more around that, as I’m sure there are some other legal angles. It does further stress that paying attention to the license you choose is important. It would be interesting to know whether this would have changed my thinking during our late license selection process for VVV.

The Q/A has some good gems too, great questions and great answers, so keep listening. Here are two of my favorite answers. 🙂

“throw away your ego, it will help a lot”

And, in response to someone asking why you shouldn’t just let everyone commit directly to master.

“If you send a pull request and somebody says merge, it feels good. […] If you’re merging your own work, […] it feels lonely, you feel uncertain, you start making mistakes, and no one is there to stop you.”

 

How we’re using the WP REST API at Washington State University

As I write this, we have the WP REST API enabled for 1083 sites across 54 networks on our single installation of WordPress at Washington State University.

It’s probably worth noting that this only counts as one active installation in the WordPress.org repository stats. 🙁 It’s definitely worth noting how we use it! 🙂

Our primary use for the WP REST API is to share content throughout the University via our WSUWP Content Syndicate plugin. With the simple wsuwp_json shortcode, anyone is able to embed a list of headlines from articles published on news.wsu.edu.

And just by changing the host, I can switch over and embed a couple of recent headlines from web.wsu.edu.
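
The idea behind the plugin is simple enough to sketch. This isn’t the actual WSUWP Content Syndicate code, and the function name and arguments are made up for illustration, but the pattern is just a request to another site’s posts endpoint followed by a little markup.

function wsu_example_headlines( $host = 'news.wsu.edu', $count = 5 ) {
    // Ask the remote WordPress site for its most recent posts via the REST API.
    $response = wp_remote_get( 'https://' . $host . '/wp-json/wp/v2/posts?per_page=' . absint( $count ) );

    if ( is_wp_error( $response ) ) {
        return '';
    }

    $posts = json_decode( wp_remote_retrieve_body( $response ) );
    $html  = '<ul>';

    foreach ( (array) $posts as $post ) {
        $html .= '<li><a href="' . esc_url( $post->link ) . '">' . esc_html( $post->title->rendered ) . '</a></li>';
    }

    return $html . '</ul>';
}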

Having the ability to share information across the University is very useful to us. It helps various groups and sites throughout the ecosystem feel more connected as visitors and as site owners.

Of course, we could have used a pre-existing syndication format like RSS as a solution, but a REST API is so much more flexible. It didn’t take much work to extend the initial plugin using things like register_rest_field() to support and display results from the central people directory we have in progress.

Jeremy Felt
SR WORDPRESS ENGR, UCOMM
INFO TECH 2008
jeremy.felt@wsu.edu

That’s me, pulled in from our people API.
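
register_rest_field() is what makes that kind of extension straightforward. Here’s a rough sketch; the field name and meta key are hypothetical, not what our people directory actually uses.

add_action( 'rest_api_init', function() {
    register_rest_field( 'post', 'wsu_directory_title', array(
        // Expose an extra value on the existing posts endpoint response.
        'get_callback' => function( $post_data ) {
            return get_post_meta( $post_data['id'], '_wsu_directory_title', true );
        },
        'schema'       => null,
    ) );
} );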

This kind of data flexibility is a big part of our vision for the future of the web at WSU. Soon we’ll be able to highlight information for research faculty that may help to connect them with other groups working on similar topics. We’ll have ways to create articles on the edge of the network and have them bubble up through the various layers of the university: department, college, central news. And we’ll be able to start tying data to people in a smarter way so that we can help to make sure voices throughout the university are heard.

And that’s just our first angle! One day I’ll expand on how we see the REST API changing our front end workflow in creative ways.

Thoughts on merging the WP REST API plugin

Daniel asked for official feedback from WordPress core committers on the REST API. Here goes. 🙂

I’ve been thinking a lot about this since last week’s status meeting, and I think I can now sum up my thoughts in a nutshell.

I’m in favor of the REST API team’s proposal to merge the endpoints for the primary objects in WordPress (posts, comments, users, terms) when they’re ready.

When the endpoints for these objects are ready, I would like to see them merged early in a release cycle.

With these primary endpoints in, front end workflows can immediately start to take advantage. This is something groups have been doing for years with custom code already. Getting these groups to use the same structure is valuable.

Exposing all of wp-admin via the REST API is important for the future. I would like to see more discussion from groups planning on creating these interfaces. Determining what the most valuable endpoints are for creating initial versions of these custom admin interfaces could help guide iteration while also allowing progress on those interfaces to begin. Ideally, there should be a wider discussion about what this all means for the default WordPress admin interface.

In general, I think these status meetings should happen more often so that any disparities of opinion are not as surprising to the community at large. A good definition of what “ready” means for each endpoint would be valuable as well.

Managing SSL certificates and HTTPS configuration at scale

Our multi-network multisite WordPress installation at WSU has 1022 sites spread across 342 unique domain names. We have 481 SSL certificates on the server to help secure the traffic to and from these domains. And we have 1039 unique server blocks in our nginx configuration to help route that traffic.

Configuring a site for HTTPS is often portrayed as a difficult process. This is mostly true depending on your general familiarity with server configuration and encryption.

The good thing about process is only having to figure it out a few times before you can automate it or define it in a way that makes things less difficult.

Pieces used during SSL certification

A key (get it?) to understanding and defining the process of HTTPS configuration is to first understand the pieces you’re working with.

  • Private Key: This should be secret and unique. The server uses it to prove its identity and establish encrypted connections.
  • Public Key: This key can be distributed anywhere. Clients use it to verify that they’re talking to the server holding the matching private key.
  • CSR: A Certificate Signing Request. This contains your public key and other information about you and your server. It is used to request a digital certificate from a certificate authority.
  • Certificate Authority: The issuer of SSL certificates. This authority is trusted by servers and clients to verify and sign public keys. Ideally, a certificate authority is trusted by the maximum number of clients (i.e. all browsers).
  • SSL Certificate: Also known as a digital certificate or public key certificate. This contains your public key and is signed by a certificate authority. That signature applies a level of trust to your public key to help clients when deciding its validity.

Of the files and keys generated, the most important for the final configuration are the private key and the SSL certificate. The public key can be generated at any time from the private key and the CSR is only a vessel to send that public key to a certificate signing authority.

Losing or deleting the SSL certificate means downloading the SSL certificate again. Losing or deleting the private key means restarting the process entirely.

Obtaining an SSL certificate

The first step in the process is to generate the private key for a domain and a CSR containing the corresponding public key.

openssl req -new -newkey rsa:2048 -nodes -sha256 -keyout jeremyfelt.com.key -out jeremyfelt.com.csr

This command will generate a 2048-bit RSA private key and a CSR signed using the SHA-256 hash algorithm. No separate public key file is generated; the public key is embedded directly in the CSR.

Next, submit the CSR to a certificate signing authority. The certificate signing authority will sign the public key and return a digital certificate including the signature, your public key, and other information.

The certificate signing authority is often the part of the process that is annoying and difficult to automate.

If you’re purchasing the signature of a certificate through a certificate authority or reseller such as GoDaddy or Namecheap, the steps to purchase the initial request, submit the CSR, and download the correct certificate file can often be confusing and very time consuming.

Luckily, in WSU’s case, we have a university subscription to InCommon, a reseller of Comodo certificates. This allows us to request as many certificates as we need for one flat annual fee. It also provides a relatively straightforward web interface for requesting certificates. Similar to other resellers, we still need to wait as the request is approved by central IT and then generated by Comodo via InCommon.

Even better is the new certificate authority, Let’s Encrypt, which provides an API and a command line tool for submitting and finishing a certificate signing request immediately and for free.

Configuring the SSL certificate

This is where the process starts becoming more straightforward again, and where I’ll focus only on nginx, as my familiarity with Apache disappeared years ago.

A cool thing about nginx when you’re serving HTTP requests is the flexibility of server names. It can use one server block in the configuration to serve thousands of sites.

server {
    listen 80;
    server_name *.wsu.edu wsu.io jeremyfelt.com foo.bar;
    root /var/www/wordpress;
}

However, when you serve HTTPS requests, you must specify which files to use for the private key and SSL certificate:

server {
    listen 443 ssl http2;
    server_name jeremyfelt.com;
    root /var/www/wordpress;

    ssl on;
    ssl_certificate /etc/nginx/ssl/jeremyfelt.com.cer;
    ssl_certificate_key /etc/nginx/ssl/jeremyfelt.com.key;
}

If you are creating private keys and requesting SSL certificates for individual sites as you configure them, this means having a server block for each server name.

There are three possibilities here:

  1. Use a wildcard certificate. This would allow for one server block for each set of subdomains. Anything at *.wsu.edu would be covered.
  2. Use a multi-domain certificate. This uses the SubjectAltName portion of a certificate to list multiple domains in a single certificate.
  3. Generate individual server blocks for each server name.

A wildcard certificate would be great if you control the domain and its subdomains. Unfortunately, at WSU, subdomains point to services all over the state. If everybody managing multiple subdomains also had a wildcard certificate to make it easier to manage HTTPS, the likelihood of that private key and certificate leaking out and becoming untrustworthy would increase.

Multi-domain certificates can be useful when you have some simple combinations like www.site.foo.bar and site.foo.bar. To redirect an HTTPS request from www to non-www, you need HTTPS configured for both. A minor issue is the size of the certificate. Every domain added to a SubjectAltName field increases the size of the certificate by the size of that domain text.

Not a big deal with a few small domains. A bigger deal with 100 large domains.

The convenience of multi-domain certificates also depends on how frequently domains are added. Any time a domain is added to a multi-domain certificate, it would need to be re-signed. If you know of several in advance, it may make sense.

If you hadn’t guessed yet, we use option 3 at WSU. Hence the 1039 unique server blocks! 🙂

From time to time we’ll request a small multi-domain certificate to handle the www to non-www redirects. But that too fits right into our process of putting the private key and certificate files in the proper place and generating a corresponding server block.

Using many server blocks in nginx for HTTPS

Private keys are generated, CSRs are submitted, SSL certificates are generated and downloaded.

Here’s what a generated server block at WSU looks like:

# BEGIN generated server block for fancy.wsu.edu
#
# Generated 2016-01-16 14:11:15 by jeremy.felt
server {
    listen 80;
    server_name fancy.wsu.edu;
    return 301 https://fancy.wsu.edu$request_uri;
}

server {
    server_name fancy.wsu.edu;

    include /etc/nginx/wsuwp-common-header.conf;

    ssl_certificate /etc/nginx/ssl/fancy.wsu.edu.cer;
    ssl_certificate_key /etc/nginx/ssl/fancy.wsu.edu.key;

    include /etc/nginx/wsuwp-ssl-common.conf;
    include /etc/nginx/wsuwp-common.conf;
}
# END generated server block for fancy.wsu.edu

We listen to requests on port 80 for fancy.wsu.edu and redirect those to HTTPS.

We listen to requests on port 443 for fancy.wsu.edu using a common header, provide directives for the SSL certificate and private key, and include the SSL configuration common to all server blocks.

wsuwp-common-header.conf

This is the smallest configuration file, so I’ll just include it here.

listen 443 ssl http2;
root /var/www/wordpress;

Listen on 443 for SSL and HTTP2 requests and use the directory where WordPress is installed as the web root.

These directives used to be part of the generated server blocks until nginx added support for HTTP2 and immediately deprecated support for SPDY. I had to replace spdy with http2 in all of our server blocks, so I decided to create a common config and include it instead.

WSU’s wsuwp-common-header.conf is open source if you’d like to use it.

wsuwp-ssl-common.conf

This is my favorite configuration file and one I often revisit. It contains all of the HTTPS specific nginx configuration.

# Enable HTTPS.
ssl on;

# Pick the allowed protocols
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

# And much, much more...

This is a case where so much of the hard stuff is figured out for you. I regularly visit things like Mozilla’s intermediate set of ciphers and this boilerplate nginx configuration and then make adjustments as they make sense.

WSU’s wsuwp-ssl-common.conf is open source if you’d like to use it.

wsuwp-common.conf

And the configuration file for WordPress and other things. It’s the least interesting to talk about in this context. But! It too is open source if you’d like to use it.

The process of maintaining all of this

At the beginning I mentioned defining and automating the process as a way of making it less difficult. We haven’t yet reached full automation at WSU, but our process is now well defined.

  1. Generate a private key and CSR using our WSUWP TLS plugin. This provides an interface in the main network admin to type in a domain name and generate the required files. The private key stays on the server and the CSR is available to copy so that it can be submitted to InCommon. (A rough sketch of this generation step follows the list.)
  2. Submit the CSR through the InCommon web interface. Wait.
  3. Upon receipt of the approval email, download the SSL certificate from the embedded link.
  4. Upload the SSL certificate through the WSUWP TLS interface. This verifies the certificate’s domain, places it on the server alongside the private key, and generates the server block for nginx.
  5. Deploy the private key, SSL certificate, and generated server block file. At the moment, this process involves the command line.
  6. Run nginx -t to test the configuration and service nginx reload to pull it into production.
  7. In the WSUWP TLS plugin interface, verify the domain responds on HTTPS and remove it from the list.

Looking at the steps above, it’s not hard to imagine a completely automated process, especially if your certificate authority has a way of immediately approving requests and responding with a certificate. And even without automation, having this process well defined allows several members of our team to generate, request, and deploy certificates.

I’d love to know what other ways groups are approaching this. I’ve often hoped and spent plenty of time searching for easier ways. Share your thoughts, especially if you see any holes! 🙂


David Bowie

David Bowie performs at Tweeter Center outside Chicago in Tinley Park, IL, USA on August 8, 2002. Photo by Adam Bielawski.

I have so many good memories around David Bowie.

My second band, Cycle Pinsetter, was obsessed. We ate up everything Bowie. Covering at least Ziggy Stardust, Rebel Rebel, Andy Warhol, and Queen Bitch, and playing them over and over and over again. The recordings that I have will always be fun to revisit.

The music. He was one of the first artists I actually dug in and discovered after finally getting over a sole obsession with The Smashing Pumpkins. I remember vividly the Bowie section of my CD rack: Hunky Dory, The Rise and Fall of Ziggy Stardust, Aladdin Sane, Diamond Dogs.

The movies. Obviously, eating up Labyrinth again and again. My sister and I still do the “you remind me of the babe” back and forth. Merry Christmas Mr. Lawrence. Pontius Pilate in The Last Temptation of Christ.

We were fortunate to see him in concert! August 8th, 2002 at the Tweeter Center, where the picture above was taken. I don’t remember much about the performance except that he went on as the sun was going down. It was the perfect rock star moment where the sky was a backdrop on this huge stage and Bowie’s hair was blowing just enough in the wind on the large video monitor as he belted out Fame or something similar. Such a great time.

It’s a sad moment. But the music still plays and I’m dancing while I write this. Thanks, Bowie. ⚡️