The dev / DevOps boundary
In Part 1 of this series, I gave you some tips on adding an environment setup script to your Rails app. That makes a new hire’s life easier, but what about you and your current teammates? You’ve all got your development environments set up just the way you want.
There are plenty of useful things you can put in your Rails bin/ directory. In parts 2 and 3 of this blog series, I’ll focus on one category: managing the boundary between your dev and DevOps teams.

DevOps grew up around me, like moss on an old tree trunk. I remember the days of having to SSH into a Linux machine in a datacenter (and, later, “the cloud”) to manage Java and Rails apps. I even got to visit a datacenter once—it’s cold in there! 🥶 Back then, “Ops” was just Linux system administration.
Now we have multiple competing cloud services (like AWS and Azure) and multiple DevOps tools (like Docker and Kubernetes). There are still seeds of SysAdmin in there. And it’s often helpful to drop down to that level. But true “DevOps” folks work at much higher levels of abstraction these days. And it may be tough for your typical application developer to adapt to this new world. (Oh, the number of failed local env Dockerfiles I’ve seen … 🫣.)
Heroku, the first PaaS I ever used that allowed me to deploy with Git pushes, was a revelation for me when it was first introduced. You may notice some inspiration from there in these scripts. But there’s also some good-old-fashioned Linux SysAdmin sprinkled in, particularly the inimitable power of the <tab> key.
Docker
Imagine an app that has gone all in on Docker. It’s used both in the local development environment and remotely in staging and production. In this scenario, here’s an example of how I’ve organized a ./bin/docker/ directory.
bin/docker
├── README.md
├── dev
│   ├── exec
│   ├── setup
│   └── up
└── prod
    ├── deploy
    │   ├── exec
    │   ├── setup
    │   └── up
    ├── exec
    ├── setup
    └── up
Using this directory structure, I can quickly navigate from my app’s root directory to the script I want to execute via tab-autocomplete.
./b<tab>/d<tab>/d<tab>/e<tab>
=> ./bin/docker/dev/exec
Think of these directories as defining arguments for a backwards function invocation:
("hello world")foo
("bin", "docker", "dev")exec
Supplying the arguments up front via this directory structure is similar to currying a function, locking in certain values while allowing others to change. Treating Rails utility scripts as “partial function applications” of underlying, more complicated scripts is an interesting exercise. Try it any time you find yourself writing a long line of code at your terminal.
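To make the analogy concrete, here’s a minimal bash sketch. The function and names (compose_exec, dev_exec, prod_exec) are hypothetical, for illustration only; each wrapper “locks in” some arguments of a more general command, just as each script in the tree locks in its directory path.

```shell
# A general-purpose function with three "arguments" (illustrative only).
compose_exec() {
  echo "docker compose -p $1 exec $2 ${3:-bash}"
}

# Each wrapper "partially applies" the first two arguments,
# just as bin/docker/dev/exec locks in "dev" and "web".
dev_exec()  { compose_exec appname_dev  web "$1"; }
prod_exec() { compose_exec appname_prod web "$1"; }

dev_exec "./bin/rails c"
# prints: docker compose -p appname_dev exec web ./bin/rails c
```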
The README.md is, naturally, the documentation. Always document all the things.
I won’t describe docker-compose.yml or Dockerfile. I don’t consider myself a Docker expert. (Would that be called a “Doxpert”? 🤔) I wish there were an official Rails Dockerfile (that I liked). A standard Rails Dockerfile may be a good topic for a future blog post. Reach out to us if you agree!
I will note, however, that if you need different docker-compose.yml files for different environments, you can drop them into the dev/ and prod/ directories in this bin/docker/ tree or in config/.
For the remainder of this blog, assume that, in your app setup script, you’ve installed Docker Compose. Docker is a good example of a dependency, like a database, that Bundler won’t install into your local environment for you.
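Since Bundler can’t enforce that, your setup script can at least fail fast when a system-level dependency is missing. Here’s a minimal sketch; the require_tool helper is hypothetical, not part of any standard tooling.

```shell
# Fail early if a required system tool isn't on the PATH.
require_tool() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }
}

require_tool docker || echo "Docker is required but not installed." >&2
```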
Let’s start by defining a “Docker API” for your team. I like to break Docker scripts into four different categories.
- up: To build and launch the container. The name comes from docker compose up.
- exec: To execute a command within a running container. The name comes from docker exec and Heroku’s exec.
- One or more scripts that use exec to do common tasks.
- deploy: To do any of the above in a remote container.
These categories define a subset of commands your app dev team is likely to use most, especially if they’re unfamiliar with Docker.
Let’s take a look at the scripts.
dev/up
#!/usr/bin/env bash
#
# build and run a docker development env
#
set -euo pipefail
./bin/rails tmp:clear log:clear
./bin/bundle package
BUILDKIT_PROGRESS=plain docker compose -f ./somewhere/docker-compose.yml -p appname_dev build
docker compose -f ./somewhere/docker-compose.yml -p appname_dev up
bin/docker/dev/up
First, I configure “bash strict mode”. I also used this in Part 1. It’s a general recommendation for making these kinds of scripts better behaved.
Second, I clear the Rails tmp/ and log/ directories. This is part of my undescribed Dockerfile strategy. I’m reducing the volatility of the app to improve layer caching. (💡ref: “organize layers from most stable to most volatile”). I’m also reducing the container size. You can skip this if you’d like.
Third, I package my gems. This is also an optional tweak for improving Docker build performance.
I’m hand-waving here. In general, use this section of the script to optimize your app’s Docker build however you like.
The next line builds the application containers.
- I prefer the output produced by BUILDKIT_PROGRESS=plain. It makes layer caching and parallelization more transparent, but those details aren’t necessary for everyone. Doxperts only. 🤓
- I’ve added a -f flag in case your docker-compose.yml is located somewhere other than the current (i.e., app root) directory.
- The -p flag defines a project name. The project name is used as a prefix for all the containers you create in your docker-compose.yml. So, for example, if you name your Compose services simply web and db, they’ll appear as corresponding containers appname_dev_web and appname_dev_db. That avoids name collisions with other Docker apps running on your laptop. It also avoids conflicts with other application environments, like appname_prod, that you may be testing locally. I’ll show you an example of running in prod mode later.
The last line runs the container apps you’ve just built. I combine the build and up steps to guarantee every newly launched container is up-to-date and to incentivize keeping the Dockerfile efficient.
dev/exec
#!/usr/bin/env bash
#
# run something (bash by default) in a running dev web container
#
# e.g. "exec ./bin/rails c"
#
set -euo pipefail
docker compose -p appname_dev -f ./somewhere/docker-compose.yml exec web "${@:-bash}"
bin/docker/dev/exec
The dev/exec script assumes dev/up is already running in a separate Terminal tab or window.
By default, if you just run ./bin/docker/dev/exec, it “shells into” your running app. That’s what the "${@:-bash}" magic does—run the bash shell inside your Rails web application container.
You can override that default by appending a different command to the end of exec, as demonstrated in the comment. So, for example, to execute the Rails console in your container, run ./bin/docker/dev/exec ./bin/rails c. From the root of your Rails app, the ./bin/rails c command will tab-autocomplete, which I think is pretty cool. 😎
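The default-or-override behavior comes from bash’s ${parameter:-word} expansion: expand to all arguments, or to the fallback word when none are given. A standalone sketch (show_command is a hypothetical stand-in for the real exec call):

```shell
# "${@:-bash}": expand to all arguments, or to "bash" when none are given.
show_command() {
  echo "would exec: ${@:-bash}"
}

show_command                  # prints: would exec: bash
show_command ./bin/rails c    # prints: would exec: ./bin/rails c
```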
The web part is the name of my Docker service, sans the prefix, as written in docker-compose.yml. All the naming conventions (web, db, appname_dev) are arbitrary. Just be consistent. Consistent naming is the secret ingredient for many of these scripts.
dev/setup
#!/usr/bin/env bash
#
# setup the docker dev env in an already running container
#
set -euo pipefail
./bin/docker/dev/exec ./bin/rails db:environment:set RAILS_ENV=development db:setup
bin/docker/dev/setup
This dev/setup script is the first example of a script that reuses exec to execute a different default command. Here, I’m only setting up the Rails database. My Dockerfile took care of installing Ruby, Bundler, Node, and all required Gems and Node packages. Only the database setup remained. The combination of the Dockerfile and this script represents the ./bin/setup script I wrote about in Part 1, but in Docker-land.
As your homework, try to create:
- bin/docker/dev/console to launch a Rails console in the app dev container.
- bin/docker/dev/db/exec to shell into the db dev container.
With this foundation in place, your team is free to create whatever Docker dev env shortcuts you want.
prod/up
#!/usr/bin/env bash
#
# build and run a docker production env
#
set -euo pipefail
./bin/rails tmp:clear log:clear
./bin/bundle package
BUILDKIT_PROGRESS=plain docker compose -f ./somewhere/docker-compose.yml -f ./somewhere/docker-compose.prod.yml -p appname_prod build
docker compose -f ./somewhere/docker-compose.yml -f ./somewhere/docker-compose.prod.yml -p appname_prod up "$@"
bin/docker/prod/up
The prod/up script is similar to the dev/up script. There are only three differences.
First, note that there are two -f flags. docker compose allows multiple config files to be set at once. The resulting configuration is the merge of the two, with the latter overriding the former.
Without getting into too much Docker detail, the things you may want to override in your docker-compose.prod.yml file are:
- Database credentials
- The build target, if a production one is specified in your Dockerfile
- Ports to expose, probably 80 or 443 in prod (as opposed to 3000 in dev)
- Environment variables, such as RAILS_ENV=production and SECRET_KEY_BASE
- The production server’s start command
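Here’s a hypothetical docker-compose.prod.yml sketch overriding a few of those values. The service name web, the production build target, and the Puma command are assumptions for illustration, not a recommendation.

```yaml
# docker-compose.prod.yml — merged over the base file; later values win
services:
  web:
    build:
      target: production   # assumes a "production" stage in your Dockerfile
    ports:
      - "80:3000"
    environment:
      RAILS_ENV: production
    command: bundle exec puma -C config/puma.rb
```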
According to the docs, there are other ways to “extend” Docker Compose files. Do what you prefer.
The second difference is that the project name (-p) is now appname_prod, to avoid conflicting with all of the appname_dev_* containers I created earlier.
The third difference is the magic Bash incantation "$@". That allows me to append more text to the end of the command, e.g., prod/up something. I’ll describe that “something” later.
So that’s how the prod/up script works. But why does it exist?
Rails apps perform differently in development and production modes. The ability to test your app in production mode locally before deploying it may be helpful.
prod/exec and others
#!/usr/bin/env bash
#
# run something (bash by default) in a running prod web container
#
# e.g. "exec ./bin/rails c"
#
set -euo pipefail
docker compose -p appname_prod -f ./somewhere/docker-compose.yml -f ./somewhere/docker-compose.prod.yml exec web "${@:-bash}"
bin/docker/prod/exec
The prod/exec script is similar to dev/exec. As with prod/up, I have:
- a new project name: appname_prod
- multiple merged Docker Compose files
As with dev/setup, you can reuse this script to define prod/setup and prod/console.
prod/deploy
What are the scripts in the prod/deploy directory for?
Say you are renting cloud compute, such as an AWS EC2 instance, a Google Cloud instance, or a DigitalOcean droplet. Or maybe you’ve just been given SSH access to a VM in some corporation’s datacenter.
If so, you can install Docker on that instance and run it as a service.
Then you can deploy images you’ve built in your local Docker to the remote instance. Think of it as your own homemade Capistrano script.
prod/deploy/up
#!/usr/bin/env bash
#
# build and run a remote docker production env
#
# see additional setup & docs in ./bin/docker/README.md
#
# assumes an SSH config (~/.ssh/config) named "myapp"
# -d means --detach
#
set -euo pipefail
DOCKER_HOST="ssh://myapp" ./bin/docker/prod/up -d
bin/docker/prod/deploy/up
The prod/deploy/up script uses the prod/up script to build and launch your Rails app in production mode, but with two differences.
First, it defines a DOCKER_HOST. By default, Docker commands talk to the Docker service running on your local machine. By specifying a DOCKER_HOST, those commands will instead talk to the Docker service running on a remote machine.
To avoid describing the SSH complexity, I’ve configured a passwordless login and hidden all the host details in the configuration file ~/.ssh/config. If you can type ssh myapp from your local Terminal to shell into your production host, this DOCKER_HOST value should work.
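For reference, here’s a hypothetical ~/.ssh/config entry that would make both ssh myapp and DOCKER_HOST="ssh://myapp" work. The host address, user, and key path are placeholders; substitute your own.

```
# Hypothetical entry; adjust the address, user, and key to your server
Host myapp
  HostName 203.0.113.10
  User deploy
  IdentityFile ~/.ssh/id_ed25519
```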
The second difference is the -d flag, which turns on Docker’s “detached mode”. Instead of the remote production server running in the foreground, it’ll run detached on the remote host. That way, you can close your laptop at the end of the day without worrying that you’ve accidentally killed your production app.
prod/deploy/exec
#!/usr/bin/env bash
#
# run something (bash by default) in a running remote prod web container
#
# e.g. "exec ./bin/rails c"
#
set -euo pipefail
DOCKER_HOST="ssh://myapp" ./bin/docker/prod/exec "${@}"
bin/docker/prod/deploy/exec
The prod/deploy/exec script uses the same trick as prod/deploy/up. It reuses the prod/exec script, but runs it in your remote server’s Docker instead of locally.
Just like the prod/exec script, it’ll default to running a bash shell in your remote host’s Dockerized Rails app. You can override that by appending a command, which this script will forward along to prod/exec.
For example, to run a Rails console in your production app, you can execute ./bin/docker/prod/deploy/exec ./bin/rails c.
Cool, right?
By combining all these scripts with the power of tab-autocomplete, you have most of what you need to quickly and easily run Docker both locally and in production.
If you have a staging environment, create a ./bin/docker/staging directory. It should be similar to ./bin/docker/prod, but with different SSH settings. You can even deploy your development environment to a remote server! (You shouldn’t, but you can! 🙊)
What about the test environment?
I prefer not to run Rails tests within Docker. Weird things happen in tests, like breakpoints and parallelization. Additionally, I suspect, but haven’t proven, that the overhead of connecting to a local Docker service slows down test performance. I need my tests to execute quickly. Otherwise, I may get distracted by my phone or the latest blog post about how AI is both savior and scourge.
That concludes part 2. I hope you can see the power of hiding common Docker interactions inside Rails scripts. In part 3, I’ll show you how to do the same with Kubernetes.
Ed Toro is a Senior Consultant at Test Double and has experience in Ruby, Rails, and Docker.