Pieter Hintjens on Building Open Source Communities

This was a really interesting listen. Pieter Hintjens, the founder of ZeroMQ, lays out a handful of rules for building open source communities.

  1. Put people before code.
  2. Make progress before you get consensus.
  3. Problems before solutions.
  4. Contracts before internals.

Everything you do as a founder of a community should be aimed at getting the people into your project and getting them happy and getting them productive.

And one of the ways to do that, according to Hintjens in rule 2, is to merge pull requests liberally and get new contributors’ “vision of progress on record” so that they immediately become members of the community. Worry about fixing any problems later.

His thoughts around licensing (our contract) were also interesting. Without formal contributor agreements, pull requests rely on the license applied to a project. If a project has a very permissive license, such as MIT, and somebody forks your project, it’s feasible that a different license could be applied to code they submit back through a pull request. If a project has a share-alike license—his example was MPLv2, I’m more familiar with GPL—then you can rely on incoming patches already being compatible without additional paperwork.

I’d like to explore more around that, as I’m sure there are other legal angles. It further stresses that paying close attention to the license you choose is worthwhile. It would be interesting to revisit whether my thinking changed during our late license selection process for VVV.

The Q/A has some good gems too, great questions and great answers, so keep listening. Here are two of my favorite answers. 🙂

“throw away your ego, it will help a lot”

And, in response to someone asking why you shouldn’t just let everyone commit directly to master:

“If you send a pull request and somebody says merge, it feels good. […] If you’re merging your own work, […] it feels lonely, you feel uncertain, you start making mistakes, and no one is there to stop you.”


How we’re using the WP REST API at Washington State University

As I write this, we have the WP REST API enabled for 1083 sites across 54 networks on our single installation of WordPress at Washington State University.

It’s probably worth noting that this only counts as one active installation in the WordPress.org repository stats. 🙁 It’s definitely worth noting how we use it! 🙂

Our primary use for the WP REST API is to share content throughout the University via our WSUWP Content Syndicate plugin. With the simple wsuwp_json shortcode, anyone is able to embed a list of headlines from articles published on news.wsu.edu.

And just by changing the host, I can switch over and embed a couple of recent headlines from web.wsu.edu.
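
As a rough sketch of that usage, assuming only the host attribute described above (any other options are the plugin’s defaults):

[wsuwp_json host="web.wsu.edu"]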

Having the ability to share information across the University is very useful to us. It helps various groups and sites throughout the ecosystem feel more connected as visitors and as site owners.

Of course, we could have used a pre-existing syndication format like RSS as a solution, but a REST API is so much more flexible. It didn’t take much work to extend the initial plugin using things like register_rest_field() to support and display results from the central people directory we have in progress.
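
That extension is built on register_rest_field(). Here’s a minimal sketch of the approach, with a made-up field name and meta key rather than our actual implementation:

add_action( 'rest_api_init', function() {
    // "office_phone" and the "wsu_office_phone" meta key are hypothetical.
    register_rest_field( 'user', 'office_phone', array(
        'get_callback' => function( $user ) {
            // $user is the prepared response data, so the ID is at $user['id'].
            return get_user_meta( $user['id'], 'wsu_office_phone', true );
        },
        'update_callback' => null, // Read-only in this sketch.
        'schema' => array(
            'description' => 'Office phone number from the people directory.',
            'type'        => 'string',
        ),
    ) );
} );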

Jeremy Felt
SR WORDPRESS ENGR, UCOMM
INFO TECH 2008
jeremy.felt@wsu.edu

That’s me, pulled in from our people API.

This kind of data flexibility is a big part of our vision for the future of the web at WSU. Soon we’ll be able to highlight information for research faculty that may help to connect them with other groups working on similar topics. We’ll have ways to create articles on the edge of the network and have them bubble up through the various layers of the university—department, college, central news. And we’ll be able to start tying data to people in a smarter way so that we can help to make sure voices throughout the university are heard.

And that’s just our first angle! One day I’ll expand on how we see the REST API changing our front end workflow in creative ways.

Thoughts on merging the WP REST API plugin

Daniel asked for official feedback from WordPress core committers on the REST API. Here goes. 🙂

I’ve been thinking about this a lot since last week’s status meeting, and I think I can now sum up my thoughts in a nutshell.

I’m in favor of the REST API team’s proposal to merge the endpoints for the primary objects in WordPress—posts, comments, users, terms—when they’re ready.

When the endpoints for these objects are ready, I would like to see them merged early in a release cycle.

With these primary endpoints in, front end workflows can immediately start to take advantage. This is something groups have been doing for years with custom code already. Getting these groups to use the same structure is valuable.

Exposing all of wp-admin via the REST API is important for the future. I would like to see more discussion from groups planning on creating these interfaces. Determining what the most valuable endpoints are for creating initial versions of these custom admin interfaces could help guide iteration while also allowing progress on those interfaces to begin. Ideally, there should be a wider discussion about what this all means for the default WordPress admin interface.

In general, I think these status meetings should happen more often so that any disparities of opinion are not as surprising to the community at large. A good definition of what “ready” means for each endpoint would be valuable as well.

Managing SSL certificates and HTTPS configuration at scale

Our multi-network multisite WordPress installation at WSU has 1022 sites spread across 342 unique domain names. We have 481 SSL certificates on the server to help secure the traffic to and from these domains. And we have 1039 unique server blocks in our nginx configuration to help route that traffic.

Configuring a site for HTTPS is often portrayed as a difficult process. How true that is depends mostly on your general familiarity with server configuration and encryption.

The good thing about a process is that you only have to figure it out a few times before you can automate it, or at least define it in a way that makes things less difficult.

Pieces used during SSL certification

A key—get it—to understanding and defining the process of HTTPS configuration is to first understand the pieces you’re working with.

  • Private Key: This should be secret and unique. It is used by the server to sign encrypted traffic that it sends.
  • Public Key: This key can be distributed anywhere. It is used by clients to verify that encrypted traffic was signed by your private key.
  • CSR: A Certificate Signing Request. This contains your public key and other information about you and your server, and is used to request a digital certificate from a certificate authority.
  • Certificate Authority: The issuer of SSL certificates. This authority is trusted by the server and clients to verify and sign public keys. Ideally, a certificate authority is trusted by as many clients as possible (i.e., all browsers).
  • SSL Certificate: Also known as a digital certificate or public key certificate. This contains your public key and is signed by a certificate authority. This signature applies a level of trust to your public key to help clients when deciding its validity.

Of the files and keys generated, the most important for the final configuration are the private key and the SSL certificate. The public key can be generated at any time from the private key, and the CSR is only a vessel for sending that public key to a certificate authority.
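
For example, with an RSA key, OpenSSL can extract the public key at any time:

openssl rsa -in jeremyfelt.com.key -pubout -out jeremyfelt.com.pub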

Losing or deleting the SSL certificate just means downloading it again. Losing or deleting the private key means restarting the process entirely.

Obtaining an SSL certificate

The first step in the process is to generate the private key for a domain and a CSR containing the corresponding public key.

openssl req -new -newkey rsa:2048 -nodes -sha256 -keyout jeremyfelt.com.key -out jeremyfelt.com.csr

This command generates a 2048-bit RSA private key and a CSR signed with the SHA-256 hash algorithm. No separate public key file is generated, as the public key is embedded directly in the CSR.
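
Before submitting, you can inspect the CSR to confirm its details and the embedded public key:

openssl req -in jeremyfelt.com.csr -noout -text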

Next, submit the CSR to a certificate authority. The certificate authority will sign the public key and return a digital certificate that includes the signature, your public key, and other information.

The certificate authority is often the part of the process that is annoying and difficult to automate.

If you’re purchasing a certificate through a certificate authority or reseller such as GoDaddy or Namecheap, the steps to make the initial purchase, submit the CSR, and download the correct certificate file can be confusing and very time consuming.

Luckily, in WSU’s case, we have a university subscription to InCommon, a reseller of Comodo certificates. This allows us to request as many certificates as we need for one flat annual fee. It also provides a relatively straightforward web interface for requesting certificates. As with other resellers, we still need to wait while the request is approved by central IT and then generated by Comodo via InCommon.

Even better is the new certificate authority, Let’s Encrypt, which provides an API and a command line tool for submitting and finishing a certificate signing request immediately and for free.

Configuring the SSL certificate

This is where the process starts to become more straightforward again, and where I’ll focus only on nginx, as my familiarity with Apache disappeared years ago.

A cool thing about nginx when serving HTTP requests is the flexibility of server names. One server block in the configuration can serve thousands of sites.

server {
    listen 80;
    server_name *.wsu.edu wsu.io jeremyfelt.com foo.bar;
    root /var/www/wordpress;
}

However, when you serve HTTPS requests, you must specify which files to use for the private key and SSL certificate:

server {
    listen 443 ssl http2;
    server_name jeremyfelt.com;
    root /var/www/wordpress;

    ssl on;
    ssl_certificate /etc/nginx/ssl/jeremyfelt.com.cer;
    ssl_certificate_key /etc/nginx/ssl/jeremyfelt.com.key;
}

If you are creating private keys and requesting SSL certificates for individual sites as you configure them, this means having a server block for each server name.

There are three possibilities here:

  1. Use a wildcard certificate. This would allow for one server block for each set of subdomains. Anything at *.wsu.edu would be covered.
  2. Use a multi-domain certificate. This uses the SubjectAltName portion of a certificate to list multiple domains in a single certificate.
  3. Generate individual server blocks for each server name.

A wildcard certificate would be great if you control the domain and its subdomains. Unfortunately, at WSU, subdomains point to services all over the state. If everybody managing multiple subdomains also had a wildcard certificate to make it easier to manage HTTPS, the likelihood of that private key and certificate leaking out and becoming untrustworthy would increase.

Multi-domain certificates can be useful when you have some simple combinations like www.site.foo.bar and site.foo.bar. To redirect an HTTPS request from www to non-www, you need HTTPS configured for both. A minor issue is the size of the certificate. Every domain added to a SubjectAltName field increases the size of the certificate by the size of that domain text.

Not a big deal with a few small domains. A bigger deal with 100 large domains.

The convenience of multi-domain certificates also depends on how frequently domains are added. Any time a domain is added, the certificate needs to be reissued and re-signed. If you know of several domains in advance, it may make sense.
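
If you do go the multi-domain route, the extra names are carried in the CSR. One way to request them is with a small OpenSSL config (a sketch with a hypothetical san.cnf; exact details vary by OpenSSL version):

[req]
distinguished_name = req_distinguished_name
req_extensions = san_ext

[req_distinguished_name]

[san_ext]
subjectAltName = DNS:site.foo.bar, DNS:www.site.foo.bar

Then generate the CSR against an existing private key:

openssl req -new -key site.foo.bar.key -out site.foo.bar.csr -config san.cnf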

If you hadn’t guessed yet, we use option 3 at WSU. Hence the 1039 unique server blocks! 🙂

From time to time we’ll request a small multi-domain certificate to handle the www to non-www redirects. But that too fits right into our process of putting the private key and certificate files in the proper place and generating a corresponding server block.

Using many server blocks in nginx for HTTPS

Private keys are generated, CSRs are submitted, SSL certificates are generated and downloaded.

Here’s what a generated server block at WSU looks like:

# BEGIN generated server block for fancy.wsu.edu
#
# Generated 2016-01-16 14:11:15 by jeremy.felt
server {
    listen 80;
    server_name fancy.wsu.edu;
    return 301 https://fancy.wsu.edu$request_uri;
}

server {
    server_name fancy.wsu.edu;

    include /etc/nginx/wsuwp-common-header.conf;

    ssl_certificate /etc/nginx/ssl/fancy.wsu.edu.cer;
    ssl_certificate_key /etc/nginx/ssl/fancy.wsu.edu.key;

    include /etc/nginx/wsuwp-ssl-common.conf;
    include /etc/nginx/wsuwp-common.conf;
}
# END generated server block for fancy.wsu.edu

We listen to requests on port 80 for fancy.wsu.edu and redirect those to HTTPS.

We listen to requests on port 443 for fancy.wsu.edu using a common header, provide directives for the SSL certificate and private key, and include the SSL configuration common to all server blocks.

wsuwp-common-header.conf

This is the smallest configuration file, so I’ll just include it here.

listen 443 ssl http2;
root /var/www/wordpress;

Listen on 443 for SSL and HTTP2 requests and use the directory where WordPress is installed as the web root.

These directives used to be part of the generated server blocks until nginx added support for HTTP/2 and immediately deprecated support for SPDY. I had to replace spdy with http2 in all of our server blocks, so I decided to create a common config and include it instead.

WSU’s wsuwp-common-header.conf is open source if you’d like to use it.

wsuwp-ssl-common.conf

This is my favorite configuration file and one I often revisit. It contains all of the HTTPS specific nginx configuration.

# Enable HTTPS.
ssl on;

# Pick the allowed protocols
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

# And much, much more...

This is a case where so much of the hard stuff is figured out for you. I regularly visit things like Mozilla’s intermediate set of ciphers and this boilerplate nginx configuration and then make adjustments as they make sense.

WSU’s wsuwp-ssl-common.conf is open source if you’d like to use it.

wsuwp-common.conf

And the configuration file for WordPress and other things. It’s the least interesting to talk about in this context. But! It too is open source if you’d like to use it.

The process of maintaining all of this

At the beginning I mentioned defining and automating the process as a way of making it less difficult. We haven’t yet reached full automation at WSU, but our process is now well defined.

  1. Generate a private key and CSR using our WSUWP TLS plugin. This provides an interface in the main network admin to type in a domain name and generate the required files. The private key stays on the server and the CSR is available to copy so that it can be submitted to InCommon.
  2. Submit the CSR through the InCommon web interface. Wait.
  3. Upon receipt of the approval email, download the SSL certificate from the embedded link.
  4. Upload the SSL certificate through the WSUWP TLS interface. This verifies the certificate’s domain, places it on the server alongside the private key, and generates the server block for nginx.
  5. Deploy the private key, SSL certificate, and generated server block file. At the moment, this process involves the command line (steps 5 and 6 are sketched just after this list).
  6. Run nginx -t to test the configuration and service nginx reload to pull it into production.
  7. In the WSUWP TLS plugin interface, verify the domain responds on HTTPS and remove it from the list.
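
For illustration, steps 5 and 6 might boil down to something like this. The host name and paths are placeholders, not our actual tooling:

# Illustrative only: "web1" and these paths are placeholders.
# Copy the private key, certificate, and generated server block into place.
scp fancy.wsu.edu.key fancy.wsu.edu.cer web1:/etc/nginx/ssl/
scp fancy.wsu.edu.conf web1:/etc/nginx/sites-enabled/

# Validate the configuration, then reload nginx only if the test passes.
ssh web1 "nginx -t && service nginx reload"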

Looking at the steps above, it’s not hard to imagine a completely automated process, especially if your certificate authority has a way of immediately approving requests and responding with a certificate. And even without automation, having this process well defined allows several members of our team to generate, request, and deploy certificates.

I’d love to know what other ways groups are approaching this. I’ve often hoped and spent plenty of time searching for easier ways. Share your thoughts, especially if you see any holes! 🙂


David Bowie

David Bowie performs at the Tweeter Center outside Chicago in Tinley Park, IL, USA on August 8, 2002. Photo by Adam Bielawski.

I have so many good memories around David Bowie.

My second band, Cycle Pinsetter, was obsessed. We ate up everything Bowie, covering at least Ziggy Stardust, Rebel Rebel, Andy Warhol, and Queen Bitch, and playing them over and over and over again. The recordings that I have will always be fun to revisit.

The music. He was one of the first artists I actually dug in and discovered after finally getting over a sole obsession with The Smashing Pumpkins. I remember vividly the Bowie section of my CD rack—Hunky Dory, The Rise and Fall of Ziggy Stardust, Aladdin Sane, Diamond Dogs.

The movies. Obviously, eating up Labyrinth again and again. My sister and I still do the “you remind me of the babe” back and forth. Merry Christmas Mr. Lawrence. Pontius Pilate in The Last Temptation of Christ.

We were fortunate to see him in concert! August 8th, 2002 at the Tweeter Center, where the picture above was taken. I don’t remember much about the performance except that he went on as the sun was going down. It was the perfect rock star moment where the sky was a backdrop on this huge stage and Bowie’s hair was blowing just enough in the wind on the large video monitor as he belted out Fame or something similar. Such a great time.

It’s a sad moment. But the music still plays and I’m dancing while I write this. Thanks, Bowie. ⚡️

Email to Slack bot idea

It would be fun to have a Slack bot that could be CC’d late into a long email thread of reply-alls. It would create a channel or join an existing one, parse the thread into a conversation with messages attributed to the people in it, and then reply-all to the email thread with a link to the conversation in Slack.

This would allow you to politely suggest a conversation move to Slack and move it there at the same time.

See also: the same, but with a new GitHub issue.

Specific focuses and vague goals for 2016

It’s funny how I’ve been looking forward more to writing this post than I was to writing my reflections on 2015. I guess it’s good to get some of that out of the way first to help focus more on what’s next.

I’ve adjusted last year’s self-reflection projection title a bit to better apply specificity and vagueness.

As 2016 progresses, I hope to revisit this post as a guide for what I thought I would enjoy doing as the year went on.

Reading.

I’m successfully falling in love with reading again. And now that I’m back on track I think I can get a bit more focused. I set a goal of 15 books in my 2016 Goodreads reading challenge. Here’s how I want that to break down.

2/3 should be fiction. 1/3 should be non-fiction.

I’d like to continue my Orwell streak. I rounded things out pretty well with some of his lesser known novels last year. Now it’s time to revisit my favorites, specifically Homage to Catalonia and 1984, and finally get to (finish?) The Road to Wigan Pier and Down and Out in Paris and London. Three of those count as non-fiction. Sweet!

I’m slightly more fascinated with James Joyce after finishing Ulysses. A Portrait of the Artist as a Young Man should be part of 2016.

I’d like to finish the original Foundation trilogy and I have Second Foundation waiting for me on the Kindle. The first took a bit and the second was better. If the story continues to be interesting I may find myself reading even more Asimov.

I’ll mention Hemingway because I finally read A Moveable Feast last year and want to dig in, but he’s a backup for the time being.

William Hertling’s Singularity series has been a ton of fun and I still need to finish the 4th, The Turing Exception. I’d like to explore more fiction along these lines if you have any suggestions!

As for non-fiction, I have a very specific list to start with.

I’m forcing myself to stop recommending myself books now. Time to read!

Learning.

In a similar vein, I’m starting to focus more on learning and how I can establish patterns for learning in my day-to-day life. Rather than attempting to adapt on the fly, I want to start being proactive.

In 2016…

A passable amount of German. We’re planning on being in Vienna for about a month in June around WordCamp Europe. I’d like to be prepared for some limited conversation. The time we spent in France and Spain in 2011 helped me realize how much I stumble in situations where English is not an option. While a large number of people in Vienna speak English, I’d rather attempt German more often than not.

JavaScript. Deeply. 😜

But seriously. I can sit down and hack at JavaScript. I can build things that rely on JavaScript. But I can’t give you a comparison of frameworks or really tell you why React seems like overkill and something like Ember might be better. And I’d like to.

One of my goals for WSU in 2016 is to establish a more friendly front-end development workflow for WordPress themes involving templates and a local environment requiring HTML, not PHP. The only way this really happens is if I start to know what I’m doing. 😉

And JavaScript, for now, is the future. There is a lot of fun to be had and I’d like to start digging in.

One of the things I’m going to try to introduce to my daily workflow is spaced repetition through Anki. I’m expecting this to help mostly with memorizing frequent German words, though it would be interesting if I can apply it the right way to JavaScript as well.

Writing.

Still a goal! It feels good to be 650 words into something; I should do it more often. I have so much to share that fades away once I do something else instead. I should start sharing instead of doing something else.

Speaking.

I was surprised to look back and see that I spoke 4 times last year. I remember feeling burnt out halfway through the year and not wanting to apply or speak at all.

My talk at WordCamp Vancouver was invigorating, mostly because, as a last-minute fill-in, I didn’t have time to prepare from scratch and instead modified a talk I had already given.

Until then I had focused a lot on never giving the same talk twice. I used the process of creating the talk as a way to dive deeply into the subject and learn something about it.

But it’s fun to talk about something you’re the expert on! And I want to focus more on that in 2016. I’ll probably submit the same talk to a few camps, and be less worried about missing out. And when I do give the talk the 3rd or 4th time, those kinks will be gone and we’ll all have a better time.

I have a few weeks left to apply for WordCamp Europe. I’ll cross my fingers for that, LoopConf 2, Vancouver, Seattle, Portland, and US. I’d love to hit Denver or Chicago, but we’ll see. Budgets!

WordPress

Make multisite better? 😘

But really. Some big stuff should be figured out this year.

WP_Site, WP_Site_Query, and WP_Network_Query to start.

In the process I’d like to become less afraid of taking a scythe to stuff that’s been there since the beginning and replacing it with some definition of expectations.

I want to adapt our configuration at WSU to use WP Multi Network instead so that we aren’t reinventing the wheel. Doing so should help us make better decisions around the future of multi-network in WordPress core.

Oh, and I’d like to introduce a new site switcher.

Washington State University

It’s going to be 3 years in July! Big things are going to happen in these first 6 months. To start, we’ll have a new WordPress developer joining the team at some point in the next couple months!

This means I’ll have time to focus more on connecting big picture stuff:

  1. Content syndication throughout the University.
  2. Give everyone a place to share their work with open registration for students, faculty, and staff. Free websites!
  3. A better search experience for the University built on Elasticsearch.

In the process, I’d like to do a much better job for our team of defining how we work with the web at WSU. Now that we have a new WordPress developer coming on board, it will be especially helpful to have documentation to match.

I’d also like to do a much better job of talking about the work that we’re doing. Things like being HTTPS forward and embracing HTTP/2.0 immediately are pretty cool. Professors and labs that are inspired to share their work with WordPress are awesome.

These should be written about in better ways.

And other wonderful experiences.

Have a great 2016!


2015 Reflection and Check-in

2015 went by in a hurry, so much that I missed the usual day of reflection and am starting off 2016 with one instead. 🙂

Some notables.

I did better at reading.

The challenge of 25 books was too high, but 14 feels good. I’ll read more next year.

One of the reasons that I read more books over the last year is because I started focusing on it more. As of sometime in the last several months, our phones started spending the night elsewhere in the house. The lack of looping distractions before bed—Twitter, Slack, Facebook, Twitter, Slack, Facebook—allows for much more focused reading time instead. Much more focused reading time makes for faster and more attuned reading.

All in all, a good decision.

That much more focused reading time finally allowed me to finish Ulysses after a 3 year struggle. And now that I’ve finished it once I’ll probably go back and try to read it again to understand. But not in 2016. 🙂

If you read, you should add me as a friend on Goodreads! If you haven’t used Goodreads yet, here’s a good explainer.

But not necessarily so great at writing.

I published 30 posts on jeremyfelt.com in 2015, compared to 26 in 2014. That’s not exactly what I had in mind last year when I wrote “An average of one thoughtful post a week wouldn’t be horrible.”

But closer I guess. 🙂

We’re still in Pullman.

We moved in June from a rental house to apartment land. The transition has been nice in some ways, though the house was also pretty nice. We’re enjoying the area quite a bit and while we’re consistent in our back-and-forth about leaving or staying, we now tend to land on staying during most conversations. See also the part where moving is just a crappy experience.

We got rid of some stuff.

In the move from house to apartment, we were able to downsize a bunch of crap that had collected. I did finally get rid of that netbook from 2010 and those two laptops from 2008. How they managed to tag along this long is a disappointment.

I freelanced.

And actually met my goal of 100 hours even though I didn’t really get moving until June or July. I learned quite a bit about myself and some about working too much and getting burned out. I’m happy to have a steady and well paying job and I love contributing to open source software when I’m at home. I’m not entirely sure how freelance fits into that schedule yet, but I’m still working out the details. 2016 will probably be a bit more focused in how I apply freelance time.

I bottled a beer.

Almost a no-brew year. I couldn’t even remember if I had brewed this year until I looked at my photo library. It appears I brewed my last batch on December 28th, 2014, which means I bottled it in late January. So, I’m still a homebrewer, technically. I’ll get started on some small batch stuff soon.

Travel!

I should have known when I set a goal of visiting a new country last year that there would be no new country. Oh well. We had a blast anyway.

  • Silverton, OR in February for Zach and Jennifer’s wedding. We took advantage of being in the area and drove out to the coast for a day and night at Cannon Beach before heading home.
  • Seattle, WA in March for WordCamp Seattle where I didn’t speak but ended up on a panel at the last minute.
  • Las Vegas (and Henderson) in May for LoopConf (where I spoke) and some Vegas-ing.
  • Portland, OR in May for, get this, an Ikea trip. We basically arrived, ate, and went to Ikea.
  • Penticton, BC in June for a couple sunny days in gorgeous Canada wine country.
  • Seattle, WA in July for a night to catch the NoFilterShow, which starred several YouTube personalities.
  • Vancouver, BC in August for WordCamp Vancouver. I had a chance to visit the UBC campus and hang out with Richard and team after stopping by the massive (!) blue whale exhibit. And then of course a few days of beer touring from the ever so knowledgeable Flynn and friends. The Vancouver crowd is so great.
  • Glacier National Park for the first time in August! We only spent a couple days, and forest fires were blazing, but it was still such a gorgeous area. We’ll be back. There’s a great breakfast spot in Whitefish, MT and the Cheap Sleep Motel was shockingly pleasant. We then drove from Whitefish, MT to meet our friends in West Yellowstone for a couple days of hanging out in Yellowstone. It was Michelle’s first time and I hadn’t been there since 2004 or something. Such a fascinating place. And then! We drove from West Yellowstone to Bozeman for a nice last minute visit with my Aunt and Uncle for a couple days. We got in a couple great hikes and many great conversations. Lucky for us, we stumbled in with perfect timing to catch the local premiere of Meru, including a nice Q+A afterward with Conrad Anker. You should see that movie.
  • In September, we drove off on another adventure. We stopped for a night near Devils Tower, hiked in the morning, and then took off for Estes Park, CO. I’m not entirely sure why we stayed in Estes Park, but it was a fun reminder of one of the first trips Michelle and I took together (West!) from the Chicago area. We then kept going to Denver to celebrate my Mom’s birthday. On the long way home we stopped in Glenwood Springs, CO for one night, spent an afternoon touching f’ing dinosaur bones in Dinosaur National Monument, and then relaxed for a couple days in Park City, UT, enjoying a really excellent hike in the process. On the (still going) long way home, we made a stop in Portland specifically for Vegan Beer Fest, at which we met Flynn! A lot of miles on that trip. 🙂
  • Made it back to Portland, OR a few weeks later for the reborn WordCamp Portland in October, where I spoke and had a great time being in Portland with everyone.
  • New York City for the first time right at the end of November for a WordPress core committer summit. I did not have time to sightsee, but I did witness the existence of the Statue of Liberty at 3am from an Uber headed to my hotel from the airport. Sweet!
  • Philadelphia, PA for the first week of December for the WordPress community summit and first WordCamp US. That was an excellent, though draining week. Can’t wait for next year! 🙂

I found myself speaking.

I didn’t apply and/or didn’t get accepted much this year, but still ended up in a speaker role several times.

And of course, WordPress.

I’m still a fan, still a student, still plugging away, and still a committer. 🙂

We had what felt like a pretty consistent set of releases this year in 4.2, 4.3, and 4.4. No big surprises, everything on time for the most part.

And I now have a great memory of sitting down for lunch in the lodge next to Old Faithful with an Old Faithful beer and receiving a Twitter notification during a brief moment of cell service letting me know I had been given “permanent commit”. 🙂

I need to do a bigger mental regroup on what we accomplished in 2015 for the multisite component. We at least got WP_Network in, but there are several smaller wins as well. There’s a goal for 2016—better reflection!

Washington State University (#gocougs)

We’re still cruising! Right at the beginning of December, we hit 1000 hosted sites on our platform with just about 2000 users and 2 million page views per month.

These numbers are important because we still haven’t enabled open registration. Instead, that growth comes from a large number of institutional sites now in WordPress, including many that we thought would take years to get there, though plenty could probably be considered stagnant.

Bonus highlight – we launched a brand new wsu.edu in March! Having that in WordPress has been amazing. Having it default to HTTPS on HTTP/2.0 makes me personally happy. 🙂

And that’s that.

There’s always more. See you in December!

Previous reflective posts: 2014, 2013.

Configure Nginx to allow for embedded WordPress posts

The ability to embed WordPress posts in WordPress posts is a pretty sweet feature from 4.4 and I’ve been looking forward to finding ways of using it throughout WSU. Today, when I tried it for the first time, I got an error because of our strict X-Frame-Options header that we had set to SAMEORIGIN for all page views.

To get around this, I added a block to our Nginx configuration that modifies this header whenever /embed/ is part of the requested URL. It’s a little sloppy, but it works.

Before our final location block, I added a new one to capture /embed/:

# We'll want to set a different X-Frame-Options header on posts that
# are embedded in other sites.
location ~ /embed/ {
    set $embed_request 1;
    try_files $uri $uri/ /index.php$is_args$args;
}

This sets the $embed_request variable to be used later in our final .php location block:

location ~ \.php$ {
    try_files $uri =404;

    # Set slightly different headers for oEmbed requests
    if ( $embed_request = 1 ) {
        add_header X-Frame-Options ALLOWALL;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";
    }

    # Include the fastcgi_params defaults provided by nginx
    include /etc/nginx/fastcgi_params;
    ...etc...

Now, all URLs except those specifically for embedding are prevented from being used in iframes on other domains.

And here we are!

Still searching for Amelia


My first Let’s Encrypt certificate

The timing of the Let’s Encrypt beta could not be more perfect as my previous certificate expires on November 18th. I purposely purchased only a 1 year certificate because I knew Let’s Encrypt was coming. Let’s see how this works!

6:00pm

In my email, I have an invite to Let’s Encrypt for 3 whitelisted domains—jeremyfelt.com, www.jeremyfelt.com, and content.jeremyfelt.com. Per the documentation, I cloned the git repository to a spot on my server—I chose /home/jeremyfelt/—so that I could use the client.

I admit that I haven’t read any documentation up until this point, so I’m flying blind and impatient like normal. 🙂

My first attempt at running the ./letsencrypt-auto command was interesting, but kind of a failure. A ton of dependencies were installed, which is good. I have an outdated version of Python apparently, which is annoying.

WARNING: Python 2.6 support is very experimental at present…
if you would like to work on improving it, please ensure you have backups and then run this script again with the --debug flag!

It took me several attempts before I finally read the message above and figured out that I was supposed to run the Let’s Encrypt command as ./letsencrypt-auto --debug to even get to the next level. If you’re on a Python version other than 2.6, this probably won’t be an issue.

Ok. Figured that out, then that crazy light blue ncurses GUI comes up… progress! Go through a couple steps and get this:

[Screenshot: the Let’s Encrypt client’s error screen]

Right then. By default, the Let’s Encrypt client wants to run as its own web server so that it can answer the domain validation challenge itself. This would be excellent if I didn’t already have a web server running.

At this point, I read down the page of documentation a bit and realized I could (a) use a config file and (b) use text only instead of ncurses. Sweet!

When I set up the default config file as /etc/letsencrypt/cli.ini, I noticed an option for the webroot authenticator. This looked more promising as a way to handle authentication through Nginx. I enabled it and tried again.

And failed! “Failed authorization procedure” to be exact. My client told the other side to verify at http://jeremyfelt.com/.well-known/acme-challenge/BIGLONGSTRING, but my default Nginx configuration blocks public access to all hidden files.

I added a location block to Nginx specifically to allow .well-known and tried again.
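
A block along these lines does the trick; this is a sketch, and the exact form depends on how your configuration denies dotfiles:

# Let ACME challenge requests through the hidden file deny rule.
# The ^~ modifier makes this prefix location win over regex locations.
location ^~ /.well-known/ {
    allow all;
}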

Success! Authorization worked and a bunch of files were generated that look like what I need. I went into my Nginx configuration and updated the ssl_certificate directive to point at fullchain.pem and the ssl_certificate_key directive to point to privkey.pem. nginx -t has no complaints… let’s restart the server.

Failure! Big red X, invalid everything! The issuing CA is….. happy hacker fake CA.

Oh. Quick Google search and sure enough:

“happy hacker fake CA” is the issuer used in our staging/testing server. This is what the Let’s Encrypt client currently uses when you don’t specify a different server using the --server option like you did in the original post. Because of this, I believe the --server flag was not included when you ran the client. Try running the client again, but make sure you include the --server option from your original post.

Thank you, bmw!

I failed to update the cli.ini file that I had copied from the web to use the production API instead of the staging API.

Fix the server URL, try again. Success! And for real this time.

[Screenshot: the Let’s Encrypt client reporting a successfully issued certificate]

I repeated the process with www.jeremyfelt.com and content.jeremyfelt.com, making fewer mistakes along the way and that’s that.

  • Here’s the final cli.ini file that worked for me.
  • And the final command line arguments: ./letsencrypt-auto --config /etc/letsencrypt/cli.ini --debug --agree-dev-preview certonly
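
For completeness, the nginx directives mentioned earlier end up pointing at the directory the client maintains, something like this assuming the default live/ layout:

ssl_certificate /etc/letsencrypt/live/jeremyfelt.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/jeremyfelt.com/privkey.pem;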

6:36pm

I have 3 new certificates. I still have an A+ in SSL Labs. Nothing is broken. The future is here.

Thank you to everyone at Let’s Encrypt!