Wired on Drone Geo-Fencing

“This is NOT something users want,” another critic added. “I have a good relationship with my local airports and have worked with every local tower or control center. I get clearance to fly and they have been great, but this ‘update’ takes away my control.”

Ryan Calo, a University of Washington law professor who studies robots and the law, traces the resistance to two sources. “One is a complaint about restricting innovation. The second one says you should own your own stuff, and it’s a liberty issue: corporate versus individual control and autonomy,” Calo says. “When I purchase something I own it, and when someone else controls what I own, it will be serving someone else’s interest, not mine.”

Source: Why the US Government Is Terrified of Hobbyist Drones | WIRED

Intersections of technology and government regulation are interesting.

When a piece of technology is so small and cheap, it’s easy to apply personal ideas of how you should be able to interact with it. At some level it makes sense to compare geo-fence restrictions on drones to DRM on e-books. But really, it’s not the same concept at all.

When something is large and expensive, such as a private plane, then it’s probably easier to agree with (and understand) restrictions on where you can use it. The same thing applies to cars—just because I own a vehicle doesn’t mean I can drive it down a one-way street or onto private property without consequence.

Academic publishing and scholarly communication: a status report | Harvard Magazine Jan-Feb 2015

That price pressure from commercial journal publishers highlights the core conundrum of academic publishing: the conflict between the scholarly ideal of universal, open sharing of information, and the economic model of business: to make money by selling things.

Source: Academic publishing and scholarly communication: a status report | Harvard Magazine Jan-Feb 2015

Various Networking Configurations in VVV

I dug into some different configurations in VVV today and decided to write them up as I went. This will be posted in some form to the VVV wiki as well. There are other networking configurations available in Vagrant, though I’m not sure that any would be useful in development with VVV.

I would recommend using default settings for initial provisioning, as things can get quirky inside the VM when trying to access outside sources. Run vagrant reload to process any network configuration changes.

Private Network (default)

config.vm.network :private_network, ip: ""

This is the default configuration provided in VVV. A private network is created by VirtualBox between your host machine and the guest machine. The guest is assigned an IP address, and your host machine is able to access it on that IP. VVV is configured to provide access to several default domains on this IP address so that browser requests from your host machine just work.

Outside access from other devices to this IP address is not available as the network interface is private to your machine.

Port Forwarding

config.vm.network "forwarded_port", guest: 80, host: 8080

One option to provide other devices access to your guest machine is port forwarding. Uncommenting or adding this line in VVV’s Vagrantfile and then running vagrant reload will cause any traffic on port 8080 directed at your host machine to instead communicate with port 80 on the guest machine.

This configuration will work with private or public IP configurations as it deals with port forwarding rather than the IP of the virtual machine itself.

An immediate way to test this once configured is to type your host machine’s IP address into a browser, followed by :8080. With port forwarding enabled, that address would bring up the default VVV dashboard.

Of course, this doesn’t do you much good with the default WordPress sites, as you’ll be stuck adding port 8080 to every request you make.

The easiest hack around this is to set up port forwarding on your router. Point incoming requests for port 80 to port 8080 on the IP address of your host machine. Requests through the router will then traverse ports 80 (public IP) -> 8080 (host) -> 80 (guest) and your development work can be shared with devices inside and outside of your network.

Say I know my router’s public IP and my computer’s local IP:

  • Enable port forwarding in Vagrantfile.
  • Configure the router to forward incoming port 80 to port 8080 on my computer’s local IP.
  • Visit src.wordpress-develop. on my phone, connected through LTE.

There are other things you can do on your local machine to reroute traffic from 80 to 8080 so that it forwards properly without the use of a router. Sal Ferrarello has posted steps to take advantage of port forwarding directly in OS X using pfctl.
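For reference, that pf approach boils down to a redirect rule of roughly this shape. This is my own sketch of pf’s rdr syntax rather than a copy from his post, and the en0 interface is an assumption, so verify against the original steps:

```
# Hypothetical pf rule: redirect incoming port 80 to 8080 on this machine
rdr pass on en0 inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080
```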

Public Network

config.vm.network "public_network"

Replacing our default private network configuration with a public network configuration immediately provides access to other devices on your local network. Using this configuration without specifying an IP address causes the guest machine to request an address dynamically from an available DHCP server—likely your router. During vagrant up, an option may be presented to choose which interface should be bridged. I chose my AirPort interface as that is what my local machine is using.

==> default: Available bridged network interfaces:
1) en0: Wi-Fi (AirPort)
2) en1: Thunderbolt 1
3) p2p0
4) bridge0
5) vnic0
6) vnic1
    default: What interface should the network bridge to? 1

Once the guest machine receives an IP address, access is immediately available to other devices on the network.

  • vagrant ssh and type ifconfig to determine the IP address of the guest.
  • Visit src.wordpress-develop. on my phone, connected to the wireless network.

To me this is most desirable as it provides access to devices on the local network, not to the outside. If you are using public Wi-Fi or another insecure network, be aware: this does open your machine up to other devices on that network.

config.vm.network "public_network", ip: ""

The same configuration would be available without DHCP by specifying the IP address to use. If you know what subnet your network is on, this may be a shortcut for providing access without having to use ifconfig inside the guest machine.

Science for the People!

Partly inspired by the experience, BSSRS staff member David Dickson later wrote in New Scientist magazine calling for “Community Science Resource Councils”. The idea, which sadly never took off, was a sort of scientific equivalent of legal aid. It would have provided scientific knowledge and technical expertise to minority and under-represented groups, and also allowed them a greater chance to shape what questions get asked and answered by science. “Perhaps the greatest gain would be in public education,” he wrote. “Members of the community would be able to answer back.”

People today often call for evidence-based policies, but the problem is that the power to collect evidence isn’t evenly distributed. In the 1970s, BSSRS worked to change this – and build a science for the people.

There are some fun parts to this story, which was passed to me in an internal email thread today. Especially great in the context of land grant universities.

Why the modern world is bad for your brain | Science | The Guardian

research found that being in a situation where you are trying to concentrate on a task, and an email is sitting unread in your inbox, can reduce your effective IQ by 10 points.


Wilson showed that the cognitive losses from multitasking are even greater than the cognitive losses from pot‑smoking.

via Why the modern world is bad for your brain | Science | The Guardian.


On reactions, from Twin Peaks, which I’ll write about more when we’re done. This is from S02E10.

– Oh, as usual you’re overreacting.
– Am I? Maybe I am, but they’re my reactions. And the hurt I feel is my hurt, and how I react is none of your damn business.
– Dear, be sensible.
– I’m being very sensible. I want you out of this place. I want you out of my life. I don’t wanna be hurt by you anymore.

It’s a great exchange.

Deployment Workflows Wrap-up

This is a wrap-up post for the last few days I spent documenting a series of deployment workflows I use to get code into production.

While writing all of this up over the last several days, I was able to compile some of my thoughts about what a proper workflow should look like. I’m not convinced I’ve reached anything perfect, though I’m fairly happy with how much work we’re able to get done with these in use. It’s definitely a topic I’ll continue to think about.


All in all, I think my main guidelines for a successful workflow are:

  1. The one you use is better than the one you never implement.
  2. Communication and documentation are more important than anything else.

And I think there are a few questions that should be asked before you settle on anything.

Who is deploying?

This is the most important question. A good team deserves a workflow they’re comfortable with.

If a developer is comfortable with the command line and is the only one responsible for deploys, the possibilities for deployment are pretty much endless. The possibilities for pain are pretty high as well. You’ll likely change your mind a lot and that’s okay.

When you’re working with a team of varying talents and a mix of operating systems, you’ll need to craft something that is straightforward to use and straightforward to support. The front-end interface is most important; the dots that connect it can usually be changed.

Push or Pull?

One of my workflows is completely push based through Fabric. This means that every time I want to deploy code, Fabric syncs my codebase to the remote server with rsync. With a large codebase, this can take a bit. With a crappy Internet connection, things can get completely unreliable.

Two workflows are entirely pull based. Well, mostly. GitHub is told to ping the server. The server then pulls down whatever data it needs to from GitHub. A push from a local machine to GitHub could initiate this. Clicking the “New Release” button on the web could do the same.

One workflow combines things a bit. I push all changes in the repository to a git remote. That machine then pulls from the master repository. This workflow is weird and should be replaced.

Overall I prefer the workflows that are pull oriented. I can’t imagine using a push workflow if more than one person was deploying to jeremyfelt.com as the possibilities for things happening out of order rise as more people are involved.

When do we deploy?

Whatever processes are in place to get code from a local environment to production, there needs to be some structure about the when.

I’m very much in favor of using real versioning. I’ve become more and more a proponent of semantic versioning because it’s fairly easy to communicate. For some of the repositories I deploy, I’ll also use a raw build version – 0001, 0002, etc… – and that works as well.

This goes hand in hand with communication. Either on GitHub or somewhere else, a conversation about milestones and release dates is happening so that everyone knows version 0.10.1 is shipping this afternoon. Or, everyone expects the 5-10 deployments happening each day.

The Guardian’s developer team posted an article yesterday on continuous delivery and the use of frequent deployments. I would recommend reading through that to get an idea for some of the benefits and challenges.

I think the following is my favorite from that piece:

We view an application with a long uptime as a risk. It can be a sign that there’s fear to deploy it, or that a backlog of changes is building up, leading to a more risky release. Even for systems that are not being actively developed, there’s value in deploying with some regularity to make sure we still have confidence in the process. One note of caution: deploying so frequently can mask resource leaks. We once had a service fail over a bank holiday weekend, as it had never previously run for three days without being restarted by a deploy!


You may have noticed—I’m missing a ton of possibilities.

I think the one that stands out the most is Capistrano, something I’ve never gotten too familiar with. The answer to “who deploys?” at WSU made me exclude this early to avoid either the Ruby dependency or having to create a complex workflow in a virtual machine. From what I’ve heard, this is powerful and I think it’s worth a look.

Beanstalk provides repositories and automatic deployments over FTP and SSH. If you’re already a Beanstalk customer, this is definitely worth a look as the perceivable pain is pretty low. I have not actually administered this myself, only used it, so I’m not sure what it looks like from an admin perspective.

And there are more, I’m very certain.


That’s all I have for deployment. :)

I’m very much interested in continuing the conversation. If you document a workflow, let me know and I’ll add it to a master list on the first post. I’m also wide open for feedback and/or critique. Leave a comment on any of the posts or reach out!

And once again, here are mine:

  1. The WSUWP Platform
  2. jeremyfelt.com
  3. WSU Indie Sites
  4. The WSU Spine

Deployment Workflows, Part 4: WSU Spine

This post is the fourth in a series of deployment workflows I use to get code into production.

This is the one non-WordPress deployment writeup, though still interesting.

The WSU Spine plays the role of both branding and framework for websites created at WSU. It provides a consistent navigation experience, consistent default styles, and the proper University marks. At the same time, a fully responsive CSS framework makes it easy for front end developers at WSU to create mobile friendly pages. For sites that are in the WSUWP Platform, we provide a parent theme that harnesses this framework.

One of the great parts about maintaining a central framework like this is being able to serve it from a single location – repo.wsu.edu – so that the browsers of various visitors can cache the file once and not be continually downloading Spine versions while they traverse the landscape of WSU web.

It took us a bit to get going with our development workflow, but we finally settled on a good model late in 2014, centered on semantic versioning. We now follow a process similar to other libraries hosted on CDNs.

Versions of spine.min.js and spine.min.css for our current version 1.2.2 are provided at:

  • repo.wsu.edu/spine/1/* – Files here are cached for an hour. This major version URL will always be up to date with the latest version of the Spine. If we break backward compatibility, the URL will move up a major version to /spine/2/ so that we don’t break live sites. This is our most popular URL.
  • repo.wsu.edu/spine/1.2.2 – Files here are cached for 120 days. This is built for every minor and patch release. This allows for longer cache times and fine-grained control for individual projects. It does increase the chance that older versions of the Spine will be in the wild. We have not seen any traffic on this URL yet.
  • repo.wsu.edu/spine/develop/ – Files here are cached for 10 minutes. This is built every time the develop branch is updated in the repository and is considered bleeding edge and often unstable.

So, our objectives:

  1. Deploy to one directory for every change to develop.
  2. Deploy to the major version directory whenever a new release is tagged.
  3. Create and deploy to a minor/patch version directory whenever a new release is tagged.

As with the WSUWP Platform, we use GitHub’s webhooks. For this deployment process, we watch for both the create and push events using a very basic PHP script on the server rather than a plugin.

In short, that script works like this:

  • If we receive a push event and it is on the develop branch, then we fire a deploy script with develop as the only argument.
  • If we receive a create event and it is for a new tag that matches our #.#.# convention, then we fire a deploy script with this tag as the only argument.
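Sketched in shell rather than the PHP actually used on the server (event and ref names follow GitHub’s webhook payloads, and the deploy script path is hypothetical), the dispatch logic amounts to:

```shell
#!/bin/sh
# Sketch of the webhook dispatch logic, in shell rather than the PHP
# actually used on the server. Echoes the argument that a deploy script
# (hypothetical path: /var/scripts/deploy-spine.sh) would be fired with.
decide_deploy() {
    event="$1"   # GitHub webhook event name: "push" or "create"
    ref="$2"     # branch or tag name pulled from the payload

    if [ "$event" = "push" ] && [ "$ref" = "develop" ]; then
        echo "develop"
    elif [ "$event" = "create" ] && echo "$ref" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$'; then
        echo "$ref"   # a #.#.# tag such as 1.2.2
    fi
}

decide_deploy push develop      # -> develop
decide_deploy create 1.2.2      # -> 1.2.2
decide_deploy create new-branch # -> (nothing; ignored)
```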

The deploy script it fires breaks down like this:

  1. Get things clean in the current repo.
  2. Checkout whichever branch or tag was passed as the argument.
  3. Update NPM dependencies.
  4. grunt prod to run through our build process.
  5. Move files to the appropriate directories.
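A sketch of that deploy script’s shape follows. The paths, the grunt task name, and the major-version directory mapping are my assumptions; only the directory mapping is actually exercised at the end:

```shell
#!/bin/sh
# Sketch of the Spine deploy steps; paths and the grunt task are assumptions.

# Map the ref being deployed to the public directories it should land in:
# develop -> /spine/develop, a 1.2.2 tag -> /spine/1.2.2 and /spine/1.
target_dirs() {
    ref="$1"
    case "$ref" in
        develop) echo "/spine/develop" ;;
        [0-9]*.[0-9]*.[0-9]*) echo "/spine/$ref /spine/${ref%%.*}" ;;
    esac
}

deploy() {
    ref="$1"
    cd /var/repos/spine || return 1
    git reset --hard && git checkout "$ref"  # 1-2: clean repo, check out ref
    npm install                              # 3: update dependencies
    grunt prod                               # 4: build spine.min.js / .css
    for dir in $(target_dirs "$ref"); do     # 5: move builds into place
        cp -r dist/. "/var/www/repo$dir/"
    done
}

target_dirs 1.2.2     # -> /spine/1.2.2 /spine/1
target_dirs develop   # -> /spine/develop
```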

Really, really smooth. Also really, really basic and ugly. :)

We’ve actually only done 2 or 3 releases with this model so far, so I’m not completely convinced there aren’t any bugs. It’s pretty easy to maintain as there really are few tasks that need to be completed. Build files, get files to production.

Another great bonus is the Spine version switcher we have built as a WordPress Customizer option in the theme. We can go to any site on the platform and test out the develop branch if needed.

In the near future, I’d like to create directories for any new branch that is created in the repository. This will allow us to test various bug fixes or new features before committing them to develop, making for a more stable environment overall.

As with other posts in the series, I would love feedback! Please leave a comment or reach out with suggestions or questions.

Deployment Workflows, Part 3: WSU Indie Sites

This post is the third in a series of deployment workflows I use to get code into production.

While the WSUWP Platform is the preferred place for new sites at WSU, there are times when a single site WordPress installation is required.

Until about February of 2014, all new sites were configured on our WSU Indie server because the platform was not ready and setting things up in IIS was getting old. Now, a site only goes here if it is going to do something crazy that may affect the performance of the platform as a whole. It’s a separate server and definitely an outlier in our day to day workflows. In fact, I’ve been able to remove all sites except for three from that server, with one more scheduled.

Because WSU Indie is set up to handle multiple single site installs, I needed a way to deploy everything except for WordPress, which is controlled on this server as part of provisioning.

The best current example of how each repository is configured is probably our old P2 configuration:

  • config/ – Contains the Nginx configuration for both local (Vagrant) and production instances.
  • wp-content/ – Contains mu-plugins, plugins, and themes.
  • .rsync-exclude – A list of files to exclude when rsync is used on the server during deployment.

When placed alongside a wordpress/ directory and an existing wp-config.php file, everything just works.

On the server, each configured indie site has its own directory set up based on descriptions in a Salt pillar file. An example of what this pillar file looks like is part of our local indie development README.

When provisioning runs, the final web directory looks something like this:

  • /var/www/
  • /var/www/news.wsu.edu/
  • /var/www/news.wsu.edu/wp-config.php
  • /var/www/news.wsu.edu/wordpress/
  • /var/www/news.wsu.edu/wp-content/
  • /var/www/project.web.wsu.edu/
  • /var/www/project.web.wsu.edu/wp-config.php
  • /var/www/project.web.wsu.edu/wordpress/
  • /var/www/project.web.wsu.edu/wp-content/

Our job with deployment is to get the files from the repository into the production directory.

I decided git hooks would do the trick and be fun to try. In a nutshell, a git repository has a .git/hooks/ directory where you can define one or more shell scripts that fire at various points during various git actions. Pre-commit, post-commit, etc… These hooks can be configured locally or on a remote server. They are similar in a way to GitHub’s webhooks and can be very powerful.
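A tiny, self-contained demonstration of the mechanism (this isn’t our deployment code, just the concept): an executable script dropped into .git/hooks/ and named after an event runs whenever git reaches that point.

```shell
#!/bin/sh
# Demonstrate git hooks: any executable script in .git/hooks/ named after
# an event (pre-commit, post-commit, ...) runs at that point in the action.
set -e
TMP=$(mktemp -d)
cd "$TMP"
git init -q

# A post-commit hook that appends each commit subject to a log file.
cat > .git/hooks/post-commit <<'EOF'
#!/bin/sh
git log -1 --format=%s >> hook.log
EOF
chmod +x .git/hooks/post-commit

# Making a commit fires the hook automatically.
echo hi > file.txt
git add file.txt
git -c user.email=dev@example.com -c user.name=dev commit -qm "first commit"

cat hook.log   # -> first commit
```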

We have a repository which contains all of these scripts, but it’s currently one of the few that we have as private. I’m not entirely sure why, so I’ll replicate a smaller version of one here.

The file structure we’re dealing with during deployment is this:

  • /var/repos/news.wsu.edu.git/ – A bare git repository setup for receiving deploy instructions.
  • /var/repos/news.wsu.edu.git/hooks/post-receive – The hook that will fire after receiving a push from a developer.
  • /var/repos/news.wsu.edu/ – The directory where the master branch is always checked out.
  • /var/www/news.wsu.edu/ – Production.

This is a shortened version of the post-receive hook for the WSU News repository:

For this to work, I add a remote named “deploy” to my local environment for news.wsu.edu that points to the /var/repos/news.wsu.edu.git/ directory in production. Whenever I want to deploy the current master branch, I type git push deploy master. This sets in motion the following:

  • /var/repos/news.wsu.edu.git/ receives my push of whatever happens to come from my local repo.
  • /var/repos/news.wsu.edu.git/hooks/post-receive fires when that push is done.
  • /var/repos/news.wsu.edu/ is instructed by the post-receive hook to fetch the latest master.
  • rsync is then used to sync the non-excluded files from /var/repos/news.wsu.edu/ to /var/www/news.wsu.edu/

And boom, latest master is in production.

There are a few caveats:

  1. There is no way to rollback, unless you revert in master.
  2. There is no way to deploy specific versions; you must assume that master is stable.
  3. It is user friendly only to someone comfortable with the git command line, though it’s probably possible to set up a GUI with multiple remotes. This is not necessarily a bad thing.
  4. It requires a public key from each developer to be configured on the server for proper authentication. This is also not necessarily a bad thing.

I think this is a pretty good example of how one can start to use git hooks for various things. The script above is definitely not a great long term solution, and if I hadn’t started focusing more on the health of the WSUWP Platform, things likely would have changed drastically. That said, once it was there it worked for multiple sites and multiple people on our team—and having something that works is pretty much the biggest objective of all.

As with other posts in the series, I would love feedback! Please leave a comment or reach out with suggestions or questions.

Deployment Workflows, Part 2: jeremyfelt.com

This post is the second in a series of deployment workflows I use to get code into production.

The site you’re reading this on is often a neglected playground of sorts. I’ll run into an interesting method of provisioning the server or maintaining code, apply it halfway and then get distracted with something else.

That said, somewhere around 16 months ago, I tried out Fabric to manage deployment actions for individual sites and decided to stick with it.

A fair warning is in order—I found an immediate, quick way to use Fabric to accomplish a handful of tasks and then let it be. It’s likely much more powerful than a wrapper for rsync, but that’s basically how I’m using it.

Ok. Backing up a bit, I think this workflow makes the most sense when you look at how the code is arranged.

  • www/ is where WordPress and its config files live.
  • www/wordpress is a submodule pointed at core.git.wordpress.org so that I can run trunk or a specific branch.
  • content/ is my wp-content directory. 1
  • tweets/ is a submodule pointed at https://github.com/jeremyfelt/tweets.jeremyfelt.com, where I maintain a version controlled copy of my Twitter archive.

The arrangement goes along with the types of code involved:

  • (wp-config, mu-plugins) Code custom or very specific to jeremyfelt.com. This should exist in the repository.
  • (plugins, themes) Code that cannot be sanely managed with submodules AND that I want to manage through the WordPress admin. This should exist in the repository.
  • (WordPress, tweets) Code that is part of sanely available repositories and can be used as submodules.

Anything in a submodule is really only necessary in the working directory we’re deploying from, so the pain points I normally have with submodules go away. I think (assume?) that’s because this is the purpose of submodules. 😉

Ok, all the code is in its right place at this point.

With the fabfile.py file in my directory, I have access to several commands at the command line. Here are the ones I actually use:

  • fab pull_plugins – Pull any plugins from production into the local plugins directory. This allows me to use WordPress to manage my plugins in production while keeping them in some sort of version control.
  • fab pull_themes – Pull any themes from production into the local themes directory. Again, I can use WordPress to manage these in production.
  • fab pull_uploads – Pull my uploads directory from production to maintain a backup of all uploaded content.
  • fab sync_themes – When I update WordPress via git, this command is a nice shortcut to sync over any changes to the default theme(s).
  • fab push_www – Push any local changes in the www directory to production. This is primarily used for any changes to WordPress.
  • fab push_tweets – Push any local changes in the tweets directory to production. This effectively updates tweets.jeremyfelt.com.
  • fab push_content – Push any plugin and theme updates that I may have done locally to production. I don’t have to separate these actions because I already have them in version control, so I’m not as worried about having a lot happen at once.
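Under the hood these are thin wrappers. A rough shell picture of the pattern follows; the host and remote path are hypothetical, and the real fabfile does more than this:

```shell
#!/bin/sh
# Rough shell equivalents of the fab commands above. Fabric is used here
# mostly as an rsync wrapper; host and remote path are hypothetical.
HOST="user@jeremyfelt.com"
REMOTE="/var/www/jeremyfelt.com"

# Build the rsync invocation for pushing or pulling one directory.
sync_cmd() {
    direction="$1"
    dir="$2"
    case "$direction" in
        push) echo "rsync -avz $dir/ $HOST:$REMOTE/$dir/" ;;
        pull) echo "rsync -avz $HOST:$REMOTE/$dir/ $dir/" ;;
    esac
}

sync_cmd push www               # roughly fab push_www
sync_cmd pull content/plugins   # roughly fab pull_plugins
```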

An obvious thing I’m missing is a method for backing up the database and syncing it locally. This is really only a few more lines in the fabfile.

I also need to reimplement a previous method I had for copying the site’s Nginx configuration, though that may also be part of another half completed provisioning attempt.

This workflow has been painless to me as an individual maintaining a production copy of code that is version controlled and locally accessible. It allows me to test new plugins and themes before adding them to version control. It also allows me to quickly test specific revisions of WordPress core in production.

I’m not sure how well this would expand for a team of developers deploying code. There are likely some easy ways to tighten up loose ends, mostly around internal communication and defining a release workflow. I am also certain that Fabric is more powerful than how I am using it. I look forward to digging in deeper at some point.

As with other posts in the series, I would love feedback! Please leave a comment or reach out with suggestions or questions.


1: Renaming wp-content/ to content/… Don’t ever do this, ugh. Keeping it a directory up from WordPress is great, but keep it named wp-content. It is possible some of my problems also came from changing my content domain to content.jeremyfelt.com, but I blame a lot of it on content/. My current favorite is www/wordpress/, www/wp-content/, and www/wp-config.php.