This is a wrap-up post for the series I spent the last few days writing, documenting the deployment workflows I use to get code into production.
While writing all of this up over the last several days, I was able to collect some of my thoughts about what a proper workflow should look like. I’m not convinced I’ve arrived at anything perfect, though I’m fairly happy with how much work we’re able to get done with these in place. It’s definitely a topic I’ll keep thinking about.
All in all, I think my main guidelines for a successful workflow are:
- The one you use is better than the one you never implement.
- Communication and documentation are more important than anything else.
And I think there are a few questions that should be asked before you settle on anything.
Who is deploying?
This is the most important question. A good team deserves a workflow they’re comfortable with.
If a developer is comfortable with the command line and is the only one responsible for deploys, the possibilities for deployment are pretty much endless. The possibilities for pain are pretty high as well. You’ll likely change your mind a lot and that’s okay.
When you’re working with a team of varying talents and a mix of operating systems, you’ll need to craft something that is straightforward to use and straightforward to support. The front-end interface is most important; the dots that connect it can usually be changed.
Push or Pull?
One of my workflows is completely push based through Fabric. This means that every time I want to deploy code, Fabric syncs my codebase to the remote server with rsync. With a large codebase, this can take a bit. With a crappy Internet connection, things can get completely unreliable.
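As a rough illustration (not my actual fabfile — the user, host, and paths here are made-up placeholders), the core of a push-based deploy is really just one rsync invocation:

```python
# Sketch of the rsync call at the heart of a push-based deploy.
# User, host, and paths are hypothetical placeholders.
import subprocess

def build_rsync_command(src, user, host, dest, dry_run=False):
    """Assemble the rsync argument list for a push deploy."""
    cmd = [
        "rsync",
        "-az",                # archive mode, compress over the wire
        "--delete",           # remove remote files that were deleted locally
        "--exclude", ".git",  # never ship the repository itself
    ]
    if dry_run:
        cmd.append("--dry-run")  # preview what would change
    cmd += [src, f"{user}@{host}:{dest}"]
    return cmd

def deploy(src="./", user="deploy", host="example.com", dest="/var/www/site/"):
    """Run the push; raises if rsync exits non-zero."""
    subprocess.run(build_rsync_command(src, user, host, dest), check=True)
```

The `--dry-run` flag is worth wiring in from the start — previewing a deploy before it touches production saves a lot of the pain mentioned above.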
Two workflows are entirely pull based. Well, mostly. GitHub is told to ping the server. The server then pulls down whatever data it needs to from GitHub. A push from a local machine to GitHub could initiate this. Clicking the “New Release” button on the web could do the same.
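A pull setup like this usually means a small endpoint on the server that GitHub pings. Before the server runs its pull, it should verify the payload actually came from GitHub; GitHub signs webhook bodies with a shared secret via HMAC. A minimal sketch of that check (the secret below is a placeholder, not a real one):

```python
# Verify a GitHub webhook payload signature before acting on it.
# The shared secret is configured on the GitHub webhook settings page.
import hashlib
import hmac

def verify_signature(payload: bytes, secret: bytes, signature_header: str) -> bool:
    """Check GitHub's X-Hub-Signature-256 header against the raw payload."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, signature_header)
```

Only if this returns true should the handler go on to trigger the `git pull`.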
One workflow combines things a bit. I push all changes in the repository to a git remote. That machine then pulls from the master repository. This workflow is weird and should be replaced.
Overall, I prefer the workflows that are pull oriented. I can’t imagine using a push workflow if more than one person were deploying to jeremyfelt.com, as the possibilities for things happening out of order rise as more people are involved.
When do we deploy?
Whatever processes are in place to get code from a local environment to production, there needs to be some structure around the when.
I’m very much in favor of using real versioning. I’ve become more and more a proponent of semantic versioning because it’s fairly easy to communicate. For some of the repositories I deploy, I’ll also use a raw build version – 0002, etc… – and that works as well.
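Part of what makes semantic versioning easy to communicate is that versions sort predictably once you treat them as numbers. A quick sketch of parsing a MAJOR.MINOR.PATCH string so comparisons come out right:

```python
# Parse a MAJOR.MINOR.PATCH string into a tuple of ints, which
# Python compares element by element in the order we want.
def parse_semver(version: str) -> tuple:
    major, minor, patch = version.split(".")
    return (int(major), int(minor), int(patch))
```

Note that naive string comparison would sort “0.10.1” before “0.9.2”; comparing tuples of integers gets it right.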
This goes hand in hand with communication. Either on GitHub or somewhere else, a conversation about milestones and release dates should be happening so that everyone knows version 0.10.1 is shipping this afternoon. Or, so that everyone expects the 5-10 deployments happening each day.
The Guardian’s developer team posted an article yesterday on continuous delivery and the use of frequent deployments. I would recommend reading through that to get an idea for some of the benefits and challenges.
I think the following is my favorite from that piece:
We view an application with a long uptime as a risk. It can be a sign that there’s fear to deploy it, or that a backlog of changes is building up, leading to a more risky release. Even for systems that are not being actively developed, there’s value in deploying with some regularity to make sure we still have confidence in the process. One note of caution: deploying so frequently can mask resource leaks. We once had a service fail over a bank holiday weekend, as it had never previously run for three days without being restarted by a deploy!
You may have noticed that I’m missing a ton of possibilities.
I think the one that stands out the most is Capistrano, something I’ve never gotten too familiar with. The answer to “who deploys?” at WSU made me exclude this early to avoid either the Ruby dependency or having to create a complex workflow in a virtual machine. From what I’ve heard, this is powerful and I think it’s worth a look.
Beanstalk provides repositories and automatic deployments over FTP and SSH. If you’re already a Beanstalk customer, this is definitely worth a look, as the perceivable pain is pretty low. I have not administered this myself, only used it, so I’m not sure what it looks like from an admin’s perspective.
And there are more, I’m very certain.
That’s all I have for deployment. 🙂
I’m very much interested in continuing the conversation. If you document a workflow, let me know and I’ll add it to a master list on the first post. I’m also wide open for feedback and/or critique. Leave a comment on any of the posts or reach out!
And once again, here are mine: