First thoughts on our new wsu.edu

Today we launched a gorgeous new home page at WSU. For the most part everything went as planned and definitely without catastrophe. We’ll have a full stack write-up at some point soon on web.wsu.edu with more details (still a few more things to launch), but I’ve had a few thoughts throughout the day that I wanted to note.

We’re HTTPS only. And that’s pretty freaking cool. It was a year ago today that we flipped the switch on WSU News to SPDY, and ever since then I couldn’t wait to do the same for the root domain. I had anticipated some push-back, though I don’t know why, and we haven’t heard a peep. I plan on running a script through a massive list of public university websites to see how many do this. Many don’t even support TLS on the home page, let alone force it.
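
For what it’s worth, that script doesn’t need to be fancy. A rough sketch of the check, assuming a hypothetical domains.txt with one home page hostname per line:

#!/bin/bash
# Sketch: request each home page over plain HTTP, follow redirects, and report
# whether we end up on HTTPS. domains.txt is a hypothetical list of hostnames.
while read -r domain; do
  final=$(curl -s -I -L -o /dev/null -w '%{url_effective}' --max-time 10 "http://$domain/")
  case "$final" in
    https://*) echo "forces HTTPS:         $domain -> $final" ;;
    *)         echo "does not force HTTPS: $domain -> $final" ;;
  esac
done < domains.txt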

Our root domain is on WordPress. Typing that address in for the first time today after everything went live felt really, really cool. I don’t think that feeling is going to wear off. Even though this is site ~600 to launch on our platform, it’s a huge statement that the University is behind us on this. I don’t remember all of our conversations, but I don’t think that having the root on WordPress was really on our radar for the first 2 years. Dig it.

We’re pretty damn fast. That’s become a lot easier these days. But we have a lot of content on the home page—and a really big map—and we still serve the document to the browser super quickly. I actually screwed up pretty big here by adding Nginx microcaching at the last minute. It made things even faster, but it cached a bad redirect for quite a while. Lessons learned, and we’ll keep tweaking (especially with image optimization), but I love that we went out of the gate with such a good-looking waterfall.

And as I stormed earlier in my series of “I heart GPL” tweets, every part of our central web is open source. We publish our server provisioning, our WordPress multi-network setup, our brand framework, our themes, and our plugins. 134 repositories and counting. Not everything is pretty enough or documented enough, and much of it will serve more as an example than as a product. But everything is out there, we’re sharing, and we’re doing our best to talk about it more.

Lots of this makes me happy. More to come! :)

Wired on Drone Geo-Fencing

“This is NOT something users want,” another critic added. “I have a good relationship with my local airports and have worked with every local tower or control center. I get clearance to fly and they have been great, but this ‘update’ takes away my control.”

Ryan Calo, a University of Washington law professor who studies robots and the law, traces the resistance to two sources. “One is a complaint about restricting innovation. The second one says you should own your own stuff, and it’s a liberty issue: corporate versus individual control and autonomy,” Calo says. “When I purchase something I own it, and when someone else controls what I own, it will be serving someone else’s interest, not mine.”

Source: Why the US Government Is Terrified of Hobbyist Drones | WIRED

Intersections of technology and government regulation are interesting.

When a piece of technology is so small and cheap, it’s easy to apply personal ideas of how you should be able to interact with it. At some level it makes sense to compare geo-fence restrictions on drones to DRM on e-books. But really, it’s not the same concept at all.

When something is large and expensive, such as a private plane, it’s probably easier to agree with (and understand) restrictions on where you can use it. The same thing applies to cars—just because I own a vehicle doesn’t mean I can drive it down a one-way street or onto private property without consequence.

Academic publishing and scholarly communication: a status report | Harvard Magazine Jan-Feb 2015

That price pressure from commercial journal publishers highlights the core conundrum of academic publishing: the conflict between the scholarly ideal of universal, open sharing of information, and the economic model of business: to make money by selling things.

Source: Academic publishing and scholarly communication: a status report | Harvard Magazine Jan-Feb 2015

Various Networking Configurations in VVV

I dug into some different configurations in VVV today and decided to write them up as I went. This will be posted in some form to the VVV wiki as well. There are other networking configurations available in Vagrant, though I’m not sure that any would be useful in development with VVV.

I would recommend using default settings for initial provisioning as things can get quirky inside the VM when trying to access outside sources. Run vagrant reload to process any network configuration changes.

Private Network (default)

config.vm.network :private_network, ip: "192.168.50.4"

This is the default configuration provided in VVV. A private network is created by VirtualBox between your host machine and the guest machine. The guest is assigned an IP address of 192.168.50.4 and your host machine is able to access it on that IP. VVV is configured to provide access to several default domains on this IP address so that browser requests from your host machine just work.

Outside access from other devices to this IP address is not available as the network interface is private to your machine.

Port Forwarding

config.vm.network "forwarded_port", guest: 80, host: 8080

One option to provide other devices access to your guest machine is port forwarding. Uncommenting or adding this line in VVV’s Vagrantfile and then running vagrant reload will cause any traffic on port 8080 directed at your host machine to instead communicate with port 80 on the guest machine.

This configuration will work with private or public IP configurations as it deals with port forwarding rather than the IP of the virtual machine itself.

An immediate way to test this once configured would be to type your host machine’s IP address into a browser followed by :8080. With port forwarding enabled, something like http://192.168.1.119:8080 would bring up the default VVV dashboard.

Of course, this doesn’t do you much good with the default WordPress sites, as you’ll be stuck adding port 8080 to every request you make.

The easiest hack around this is to set up port forwarding on your router. Point incoming requests for port 80 to port 8080 on the IP address of your host machine. Requests through the router will then traverse ports 80 (public IP) -> 8080 (host) -> 80 (guest), and your development work can be shared with devices inside and outside of your network.

Say my router’s public IP is 14.15.16.17 and my computer’s local IP is 192.168.1.100.

  • Enable port forwarding in Vagrantfile.
  • Configure router to forward incoming port 80 to port 8080 on 192.168.1.100.
  • Visit src.wordpress-develop.14.15.16.17.xip.io on my phone, connected through LTE.

There are other things you can do on your local machine to reroute traffic from 80 to 8080 so that it forwards properly without the use of a router. Sal Ferrallelo has posted steps to take advantage of port forwarding directly in OSX using pfctl.
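
The core of that approach is a pf rdr rule loaded with pfctl. Roughly, and only as an approximation (his post has the exact steps):

# Approximate pf rule: redirect traffic destined for port 80 to port 8080.
echo "rdr pass inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080" | sudo pfctl -ef -

# Undo it later by flushing the rules and disabling pf.
sudo pfctl -F all -d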

Public Network

config.vm.network "public_network"

Replacing our default private network configuration with a public network configuration immediately provides access to other devices on your local network. Using this configuration without specifying an IP address causes the guest machine to request an address dynamically from an available DHCP server—likely your router. During vagrant up, an option may be presented to choose which interface should be bridged. I chose my AirPort interface as that is what my local machine is using.

==> default: Available bridged network interfaces:
1) en0: Wi-Fi (AirPort)
2) en1: Thunderbolt 1
3) p2p0
4) bridge0
5) vnic0
6) vnic1
    default: What interface should the network bridge to? 1

Once the guest machine receives an IP address, access is immediately available to other devices on the network.

  • vagrant ssh and type ifconfig to determine the IP address of the guest – mine was 192.168.1.141.
  • Visit src.wordpress-develop.192.168.1.141.xip.io on my phone, connected to the wireless network.

To me this is most desirable as it provides access to devices on the local network, not to the outside. If you are using public Wi-Fi or another insecure network, be aware that this does open your machine up to other devices on that network.

config.vm.network "public_network", ip: "192.168.1.141"

The same configuration would be available without DHCP by specifying the IP address to use. If you know what subnet your network is on, this may be a shortcut for providing access without having to use ifconfig inside the guest machine.

Science for the People!

Partly inspired by the experience, BSSRS staff member David Dickson later wrote in New Scientist magazine calling for “Community Science Resource Councils”. The idea, which sadly never took off, was a sort of scientific equivalent of legal aid. It would have provided scientific knowledge and technical expertise to minority and under-represented groups, and also allowed them a greater chance to shape what questions get asked and answered by science. “Perhaps the greatest gain would be in public education,” he wrote. “Members of the community would be able to answer back.”

People today often call for evidence-based policies, but the problem is that the power to collect evidence isn’t evenly distributed. In the 1970s, BSSRS worked to change this – and build a science for the people.

There are some fun parts to this story, which was passed to me in an internal email thread today. Especially great in the context of land grant universities.

Why the modern world is bad for your brain | Science | The Guardian

research found that being in a situation where you are trying to concentrate on a task, and an email is sitting unread in your inbox, can reduce your effective IQ by 10 points.

And.

Wilson showed that the cognitive losses from multitasking are even greater than the cognitive losses from pot‑smoking.

via Why the modern world is bad for your brain | Science | The Guardian.

Reactions

On reactions, from Twin Peaks, which I’ll write about more when we’re done. This from S02E10.

– Oh, as usual you’re overreacting.
– Am I? Maybe I am, but they’re my reactions. And the hurt I feel is my hurt, and how I react is none of your damn business.
– Dear, be sensible.
– I’m being very sensible. I want you out of this place. I want you out of my life. I don’t wanna be hurt by you anymore.

It’s a great exchange.

Deployment Workflows Wrap-up

This is a wrap-up post for the last few days I spent documenting a series of deployment workflows I use to get code into production.

While writing all of this up over the last several days, I was able to compile some of my thoughts about what a proper workflow should look like. I’m not convinced I’ve reached anything perfect, though I’m fairly happy with how much work we’re able to get done with these in use. It’s definitely a topic I’ll continue to think about.

Guidelines

All in all, I think my main guidelines for a successful workflow are:

  1. The one you use is better than the one you never implement.
  2. Communication and documentation are more important than anything else.

And I think there are a few questions that should be asked before you settle on anything.

Who is deploying?

This is the most important question. A good team deserves a workflow they’re comfortable with.

If a developer is comfortable with the command line and is the only one responsible for deploys, the possibilities for deployment are pretty much endless. The possibilities for pain are pretty high as well. You’ll likely change your mind a lot and that’s okay.

When you’re working with a team of varying talents and a mix of operating systems, you’ll need to craft something that is straightforward to use and straightforward to support. The front-end interface is most important; the dots that connect it can usually be changed.

Push or Pull?

One of my workflows is completely push-based through Fabric. This means that every time I want to deploy code, Fabric pushes my codebase to the remote server with rsync. With a large codebase, this can take a bit. With a crappy Internet connection, things can get completely unreliable.

Two workflows are entirely pull-based. Well, mostly. GitHub is told to ping the server. The server then pulls down whatever data it needs from GitHub. A push from a local machine to GitHub could initiate this. Clicking the “New Release” button on the web could do the same.

One workflow combines things a bit. I push all changes in the repository to a git remote. That machine then pulls from the master repository. This workflow is weird and should be replaced.

Overall I prefer the workflows that are pull-oriented. I can’t imagine using a push workflow if more than one person were deploying to jeremyfelt.com, as the odds of things happening out of order rise as more people are involved.

When do we deploy?

Whatever processes are in place to get code from a local environment to production, there needs to be some structure about the when.

I’m very much in favor of using real versioning. I’ve become more and more of a proponent of semantic versioning because it’s fairly easy to communicate. For some of the repositories I deploy, I’ll also use a raw build version – 0001, 0002, etc… – and that works as well.

This goes hand in hand with communication. Either on GitHub or somewhere else, a conversation about milestones and release dates is happening so that everyone knows version 0.10.1 is shipping this afternoon. Or, everyone expects the 5-10 deployments happening each day.

The Guardian’s developer team posted an article yesterday on continuous delivery and the use of frequent deployments. I would recommend reading through that to get an idea of some of the benefits and challenges.

I think the following is my favorite from that piece:

We view an application with a long uptime as a risk. It can be a sign that there’s fear to deploy it, or that a backlog of changes is building up, leading to a more risky release. Even for systems that are not being actively developed, there’s value in deploying with some regularity to make sure we still have confidence in the process. One note of caution: deploying so frequently can mask resource leaks. We once had a service fail over a bank holiday weekend, as it had never previously run for three days without being restarted by a deploy!

Missing

You may have noticed—I’m missing a ton of possibilities.

I think the one that stands out the most is Capistrano, something I’ve never gotten too familiar with. The answer to “who deploys?” at WSU made me exclude this early to avoid either the Ruby dependency or having to create a complex workflow in a virtual machine. From what I’ve heard, this is powerful and I think it’s worth a look.

Beanstalk provides repositories and automatic deployments over FTP and SSH. If you’re already a Beanstalk customer, this is definitely worth a look as the perceivable pain is pretty low. I have not actually administered this myself, only used it, so I’m not sure what it looks like from an admin perspective.

And there are more, I’m very certain.

Wrapped.

That’s all I have for deployment. :)

I’m very much interested in continuing the conversation. If you document a workflow, let me know and I’ll add it to a master list on the first post. I’m also wide open for feedback and/or critique. Leave a comment on any of the posts or reach out!

And once again, here are mine:

  1. The WSUWP Platform
  2. jeremyfelt.com
  3. WSU Indie Sites
  4. The WSU Spine

Deployment Workflows, Part 4: WSU Spine

This post is the fourth in a series of deployment workflows I use to get code into production.

This is the one non-WordPress deployment writeup, though still interesting.

The WSU Spine plays the role of both branding and framework for websites created at WSU. It provides a consistent navigation experience, consistent default styles, and the proper University marks. At the same time, a fully responsive CSS framework makes it easy for front end developers at WSU to create mobile friendly pages. For sites that are in the WSUWP Platform, we provide a parent theme that harnesses this framework.

One of the great parts about maintaining a central framework like this is being able to serve it from a single location – repo.wsu.edu – so that the browsers of various visitors can cache the file once and not be continually downloading Spine versions while they traverse the landscape of WSU web.

It took us a bit to get going with our development workflow, but we finally settled on a good model later in 2014 centered around semantic versioning. We now follow a process similar to other libraries hosted on CDNs.

Versions of spine.min.js and spine.min.css for our current version 1.2.2 are provided at:

  • repo.wsu.edu/spine/1/* – Files here are cached for an hour. This major version URL will always be up to date with the latest version of the Spine. If we break backward compatibility, the URL will move up a major version to /spine/2/ so that we don’t break live sites. This is our most popular URL.
  • repo.wsu.edu/spine/1.2.2 – Files here are cached for 120 days. This is built for every minor and patch release. It allows for longer cache times and gives developers fine-grained control over the version used in their own projects. It does increase the chance that older versions of the Spine will be in the wild. We have not seen any traffic on this URL yet.
  • repo.wsu.edu/spine/develop/ – Files here are cached for 10 minutes. This is built every time the develop branch is updated in the repository and is considered bleeding edge and often unstable.
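
If you’re curious which lifetime a given URL is serving, a quick header check works, assuming the lifetimes above are exposed through standard caching headers:

# Check the caching headers on the major version URL (a sketch; header names may vary).
curl -sI https://repo.wsu.edu/spine/1/spine.min.css | grep -iE '^(cache-control|expires)'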

So, our objectives:

  1. Deploy to one directory for every change to develop.
  2. Deploy to the major version directory whenever a new release is tagged.
  3. Create and deploy to a minor/patch version directory whenever a new release is tagged.

As with the WSUWP Platform, we use GitHub’s webhooks. For this deployment process, we watch for both the create and push events using a very basic PHP script on the server rather than a plugin.

The short version of what that script does:

  • If we receive a push event and it is on the develop branch, then we fire a deploy script with develop as the only argument.
  • If we receive a create event and it is for a new tag that matches our #.#.# convention, then we fire a deploy script with this tag as the only argument.

And here’s the deploy script that it’s firing:
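
A minimal sketch follows; the repository path, web root, and build output directory are assumptions rather than the production values. The ref (develop or a #.#.# tag) arrives as the only argument from the webhook handler.

#!/bin/bash
# Sketch of a Spine deploy script. $1 is the ref to deploy: develop or a #.#.# tag.
# Paths and the dist/ output directory are assumptions.
ref="$1"
repo_dir="/var/repos/spine"
web_root="/var/www/repo.wsu.edu/spine"

cd "$repo_dir" || exit 1

# 1. Get things clean in the current repo.
git fetch --all --tags
git reset --hard
git clean -fd

# 2. Check out whichever branch or tag was passed as the argument.
git checkout "$ref"
git reset --hard "origin/$ref" 2>/dev/null || true   # fast-forward branches; a no-op for tags

# 3. Update NPM dependencies.
npm install

# 4. Run the build process.
grunt prod

# 5. Move files to the appropriate directory.
mkdir -p "$web_root/$ref"
cp dist/spine.min.js dist/spine.min.css "$web_root/$ref/"

# Tagged releases also refresh the major version directory (e.g. /spine/1/).
if [[ "$ref" =~ ^([0-9]+)\.[0-9]+\.[0-9]+$ ]]; then
  mkdir -p "$web_root/${BASH_REMATCH[1]}"
  cp dist/spine.min.js dist/spine.min.css "$web_root/${BASH_REMATCH[1]}/"
fi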

The brief play-by-play from the script above is:

  1. Get things clean in the current repo.
  2. Checkout whichever branch or tag was passed as the argument.
  3. Update NPM dependencies.
  4. grunt prod to run through our build process.
  5. Move files to the appropriate directories.

Really, really smooth. Also really, really basic and ugly. :)

We’ve actually only done 2 or 3 releases with this model so far, so I’m not completely convinced there aren’t any bugs. It’s pretty easy to maintain as there really are few tasks that need to be completed. Build files, get files to production.

Another great bonus is the Spine version switcher we have built as a WordPress Customizer option in the theme. We can go to any site on the platform and test out the develop branch if needed.

In the near future, I’d like to create directories for any new branch that is created in the repository. This will allow us to test various bug fixes or new features before committing them to develop, making for a more stable environment overall.

As with other posts in the series, I would love feedback! Please leave a comment or reach out with suggestions or questions.