Using Fabric and the new Invoke to simplify and codify both development and deployment patterns.
We’ve been using Fabric to manage deployments for a while, and lately Invoke to add similar functionality where SSH-based deployment isn’t required. It’s been fun to compare notes about workflow with developers in other dev shops, especially around deployment, so we thought we’d share some of what we do.
Anywhere an Invoke task is provided as an example, a Fabric task could be used in the same way.
By wrapping your test commands in a task you gain a modicum of simplicity, and you can also bake other options into that command.
You’ve just pulled down fresh changes from the remote repository: changes across several branches, involving new dependencies, static files, and database migrations. Just run one command and get up to speed.
In many of our projects we use Vagrant to manage virtual machines for development, getting something closer to production parity and simplifying configuration across developer workstations. But for some projects it’s just as simple to make sure Postgres.app is running and just use a virtualenv on your laptop.
This task installs from a development mode pip requirements file which includes the primary requirements file but adds in development-only dependencies (like Sphinx).
Last note: this actually takes an extra step and assumes your team is using Homebrew on Mac. That particular step could be removed or replaced with something else. This is rarely necessary unless you have C dependencies like libmemcached. Of course if these start piling up it probably makes more sense to use a virtual machine.
Presuming your project documentation is compiled using Sphinx, this is a pretty simple task: just “make html”. Using an Invoke or Fabric task there’s no need to specify or change directories. And we’ll make it easier to access the results.
This default task builds the docs and then opens the documentation index in your default browser. If you just want to build them you can do that of course.
And if you want to start perusing the documentation without building, you can skip that step.
Here’s the simple code.
What we’ll call the Capistrano-style deploy works like so: update a remote clone of the repository (e.g. Git or Hg), then copy the app files into a new release directory. Run the necessary deployment commands against this release location and, upon completion, symlink the active or latest app directory to the latest release.
fab production deploy
This calls several tasks in order, so as to update the remote cached copy of the repository, create a new release directory, run remote tasks required for the release, and then restart the application server using the latest release.
The release task uses Cuisine, a “Chef-like” library for Fabric, to simplify the directory updates.
For when deploying to Heroku consists of more than just pushing a commit.
git push heroku master
Works until you need to run additional tasks, like migrations or static asset generation. We’ll replace the Git push command with this Invoke task.
This task will push to our Heroku remote and then run the additional tasks like migrating database schema changes.
If you deploy frequently to Heroku then a custom buildpack integrating these steps might prove superior.
Sometimes you want to be able to watch the logs on a server.
This is just a simple task for tailing a default or specified log file.
A great example of this is accessing the remote shell if need be. The Heroku CLI lets you attach a command to the remote shell, e.g. a Django management command.
fab staging dj:shell_plus
fab staging dj:update_index,--remove
fab staging dj:import_locations,http://dataurl.com/locations.csv
For systems with only a few servers it’s really simple to just apply Puppet scripts locally rather than use a master-agent setup. One thing this task does depend on is that the application repo has already been set up on the server and Puppet has already been installed.
This can be run upon making system changes by updating the repo cache and reapplying the Puppet configuration.
fab production db deploy.refresh puppet
system would specify the manifest for the node type, e.g. “web”, “db”, “search”, or “dev” (a box running all services).
For this last use case, at least, we’re looking to replace this setup completely. Our exploration of Ansible has been pretty basic so far, but the feedback from very different corners has been so enthusiastically positive that we expect this is the direction we’ll take.