
Power up with Rails scripts Part 2: Docker

In part 2 of the three part series on Rails scripts, learn about short shell scripts for simplifying Docker interactions from Rails apps.
Ed Toro | November 17, 2025

The dev / DevOps boundary

In Part 1 of this series, I gave you some tips on adding an environment setup script to your Rails app. That makes a new hire’s life easier, but what about you and your current teammates? You’ve all got your development environments set up just the way you want.

There are plenty of useful things you can put in your Rails bin/ directory. In parts 2 and 3 of this blog series, I’ll focus on one category: managing the boundary between your dev and DevOps teams.

[Image: Base of a moss-covered tree with roots covering the ground. Photo: https://picryl.com/media/root-tree-root-tree-nature-landscapes-3dcf4b by https://pixabay.com/users/cocoparisienne-127419/]

DevOps grew up around me, like moss on an old tree trunk. I remember the days of having to SSH into a Linux machine in a datacenter (and, later, “the cloud”) to manage Java and Rails apps. I even got to visit a datacenter once—it’s cold in there! 🥶 Back then, “Ops” was just Linux system administration. 

Now we have multiple competing cloud services (like AWS and Azure) and multiple DevOps tools (like Docker and Kubernetes). There are still seeds of SysAdmin in there. And it’s often helpful to drop down to that level. But true “DevOps” folks work at much higher levels of abstraction these days. And it may be tough for your typical application developer to adapt to this new world. (Oh, the number of failed local env Dockerfiles I’ve seen … 🫣.)

Heroku, the first PaaS I ever used that let me deploy with a Git push, was a revelation. You may notice some inspiration from it in these scripts. But there’s also some good-old-fashioned Linux SysAdmin sprinkled in, particularly the inimitable power of the <tab> key.

Docker

Imagine an app that has gone all in on Docker. It’s used both in the local development environment and remotely in staging and production. In this scenario, here’s an example of how I’ve organized a ./bin/docker/ directory.

bin/docker
├── README.md
├── dev
│   ├── exec
│   ├── setup
│   └── up
└── prod
    ├── deploy
    │   ├── exec
    │   ├── setup
    │   └── up
    ├── exec
    ├── setup
    └── up

Using this directory structure, I can quickly navigate from my app’s root directory to the script I want to execute via tab-autocomplete.

./b<tab>/d<TAB>/d<TAB>/e<TAB!>

=> ./bin/docker/dev/exec

Think of these directories as defining arguments for a backwards function invocation:

(“hello world”)foo

("bin", "docker", "dev")exec

“Preceding the arguments” via this directory structure is similar to currying a function, locking in certain values while allowing others to change. Treating Rails utility scripts as “partial function applications” of underlying, more complicated scripts is an interesting exercise. Try it any time you find yourself writing a long line of code at your terminal.
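In shell terms, each script in this tree behaves like a partial application: a thin wrapper that locks in the leading arguments and forwards the rest. A minimal sketch of the idea (names are illustrative, and echo stands in for actually invoking Docker):

```shell
# A wrapper that "locks in" the project flag, like a curried function.
# echo prints the command instead of running it, so you can see the
# partial application at work.
compose_dev() {
  echo docker compose -p appname_dev "$@"
}

compose_dev up              # → docker compose -p appname_dev up
compose_dev exec web bash   # → docker compose -p appname_dev exec web bash
```

Each script below is essentially this pattern with a shebang and strict mode on top.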

The README.md is, naturally, the documentation. Always document all the things.

I won’t describe docker-compose.yml or Dockerfile.  I don’t consider myself a Docker expert. (Would that be called a “Doxpert”? 🤔) I wish there were an official Rails Dockerfile (that I liked). A standard Rails Dockerfile may be a good topic for a future blog post.  Reach out to us if you agree! 

I will note, however, that if you need different docker-compose.yml files for different environments, you can drop them into the dev/ and prod/ directories in this bin/docker/ tree or in config/.

For the remainder of this blog, assume that your app setup script has installed Docker and the Compose plugin. Docker is a good example of a dependency, like a database, that bundler won’t install into your local environment.

Let’s start by defining a “Docker API” for your team. I like to break Docker scripts into four different categories.

  • up: To build and launch the container. The name comes from docker compose up.
  • exec: To execute a command within a running container. The name comes from docker exec and Heroku’s exec.
  • One or more scripts that use exec to do common tasks.
  • deploy: To do any of the above in a remote container.

These categories define a subset of commands your app dev team is likely to use most, especially if they’re unfamiliar with Docker.

Let’s take a look at the scripts.

dev/up

#!/usr/bin/env bash
#
# build and run a docker development env
#
set -euo pipefail

./bin/rails tmp:clear log:clear
./bin/bundle package

BUILDKIT_PROGRESS=plain docker compose -f ./somewhere/docker-compose.yml -p appname_dev build

docker compose -f ./somewhere/docker-compose.yml -p appname_dev up 

bin/docker/dev/up

First, I configure “bash strict mode”. I also used this in Part 1. It’s a general recommendation for making these kinds of scripts better behaved. 

Second, I clear the Rails tmp/ and log/ directories. This is part of my undescribed Dockerfile strategy. I’m reducing the volatility of the app to improve layer caching. (💡ref:  “organize layers from most stable to most volatile”). I’m also reducing the container size. You can skip this if you’d like.

Third, I package my gems. This is also an optional tweak for improving Docker build performance. 

I’m hand-waving here. In general, use this section of the script to optimize your app’s Docker build however you like.

The next line builds the application containers. 

  • I prefer the output produced by BUILDKIT_PROGRESS=plain. It makes layer caching and parallelization more transparent, but those details aren’t necessary for everyone. Doxperts only. 🤓
  • I’ve added a -f flag in case your docker-compose.yml is located somewhere other than the current (i.e., app root) directory.
  • The -p flag defines a project name. The project name is used as a prefix for all the containers you create in your docker-compose.yml. So, for example, if you name your Compose services simply web and db, they’ll appear as corresponding containers appname_dev_web and appname_dev_db. That avoids name collisions with other Docker apps running on your laptop. It also avoids conflicts with other application environments, like appname_prod, that you may be testing locally. I’ll show you an example of running in prod mode later.

The last line runs the container apps you’ve just built. I combine the build and up steps to guarantee every newly launched container is up-to-date and to incentivize keeping the Dockerfile efficient.

dev/exec

#!/usr/bin/env bash
#
# run something (bash by default) in a running dev web container
#
# e.g. "exec ./bin/rails c"
#
set -euo pipefail

docker compose -p appname_dev -f ./somewhere/docker-compose.yml exec web "${@:-bash}"

bin/docker/dev/exec

The dev/exec script assumes dev/up is already running in a separate Terminal tab or window.

By default, if you just run ./bin/docker/dev/exec, it “shells into” your running app. That’s what the "${@:-bash}" magic does—run the bash shell inside your Rails web application container.

You can override that default by appending a different command to the end of exec, as demonstrated in the comment. So, for example, to execute the Rails console in your container, run ./bin/docker/dev/exec ./bin/rails c. From the root of your Rails app, the ./bin/rails c command will tab-autocomplete, which I think is pretty cool. 😎

The web part is the name of my Docker service, sans the prefix, as written in docker-compose.yml. All the naming conventions (web, db, appname_dev) are arbitrary. Just be consistent. Consistent naming is the secret ingredient for many of these scripts.
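The "${@:-bash}" trick is standard Bash defaulting: expand to the forwarded arguments, or to bash when there are none. A quick illustration (the function name is made up, and echo stands in for docker compose exec):

```shell
# Expands to the passed arguments, or to "bash" if none were given.
run_in_web() {
  echo "${@:-bash}"
}

run_in_web                  # → bash
run_in_web ./bin/rails c    # → ./bin/rails c
```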

dev/setup

#!/usr/bin/env bash
#
# setup the docker dev env in an already running container
#
set -euo pipefail

./bin/docker/dev/exec ./bin/rails db:environment:set RAILS_ENV=development db:setup

bin/docker/dev/setup

This dev/setup script is the first example of a script that reuses exec to execute a different default command. Here, I’m only setting up the Rails database. My Dockerfile took care of installing Ruby, Bundler, Node, and all required Gems and Node packages. Only the database setup remained. The combination of the Dockerfile and this script represents the ./bin/setup script I wrote about in Part 1, but in Docker-land.

As your homework, try to create:

  • bin/docker/dev/console to launch a Rails console in the app dev container.
  • bin/docker/dev/db/exec to shell into the db dev container.

With this foundation in place, your team is free to create whatever Docker dev env shortcuts you want.

prod/up

#!/usr/bin/env bash
#
# build and run a docker production env
#
set -euo pipefail

./bin/rails tmp:clear log:clear
./bin/bundle package


BUILDKIT_PROGRESS=plain docker compose -f ./somewhere/docker-compose.yml -f ./somewhere/docker-compose.prod.yml -p appname_prod build

docker compose -f ./somewhere/docker-compose.yml -f ./somewhere/docker-compose.prod.yml -p appname_prod up "$@"


bin/docker/prod/up

The prod/up script is similar to the dev/up script. There are only three differences.

First, note that there are two -f flags. docker compose allows multiple config files to be set at once. The resulting configuration is the merge of the two, with the latter overriding the former.

Without getting into too much Docker detail, the things you may want to override in your docker-compose.prod.yml file are:

  • Database credentials
  • The build target, if a production one is specified in your Dockerfile
  • Ports to expose, probably 80 or 443 in prod (as opposed to 3000 in dev)
  • Environment variables, such as RAILS_ENV=production and SECRET_KEY_BASE
  • The production server’s start command
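For instance, a hypothetical docker-compose.prod.yml covering a few of those overrides might look like this (service names, ports, and variables are illustrative, not from a real app):

```yaml
# docker-compose.prod.yml — merged on top of docker-compose.yml
services:
  web:
    build:
      target: production        # assumes a "production" stage in your Dockerfile
    ports:
      - "443:443"               # instead of 3000 in dev
    environment:
      RAILS_ENV: production
      SECRET_KEY_BASE: ${SECRET_KEY_BASE}   # interpolated from the host env
    command: ["./bin/rails", "server", "-e", "production"]
  db:
    environment:
      POSTGRES_PASSWORD: ${PROD_DB_PASSWORD}
```

You can inspect the merged result with docker compose -f docker-compose.yml -f docker-compose.prod.yml config.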

According to the docs, there are other ways to “extend” Docker Compose files. Do what you prefer.

The second difference is that the project name (-p) is now appname_prod, to avoid conflicting with all of the appname_dev_* containers I created earlier.

The third difference is the magic Bash incantation "$@". That allows me to append more text to the end of the command, e.g., prod/up something. I’ll describe that “something” later.

So that’s how the prod/up script works. But why does it exist?

Rails apps perform differently in development and production modes. The ability to test your app in production mode locally before deploying it may be helpful.

prod/exec and others

#!/usr/bin/env bash
#
# run something (bash by default) in a running prod web container
#
# e.g. "exec ./bin/rails c"
#
set -euo pipefail

docker compose -p appname_prod -f ./somewhere/docker-compose.yml -f ./somewhere/docker-compose.prod.yml exec web "${@:-bash}"

bin/docker/prod/exec

The prod/exec script is similar to dev/exec. As with prod/up, I have:

  • a new project name: appname_prod
  • multiple merged Docker compose files

As with dev/setup, you can reuse this script to define prod/setup and prod/console.

prod/deploy

What are the scripts in the prod/deploy directory for? 

Say you are renting cloud compute, such as an AWS EC2 instance, a Google Cloud instance, or a DigitalOcean droplet. Or maybe you’ve just been given SSH access to a VM in some corporation’s datacenter.

If so, you can install Docker on that instance and run it as a service.
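On most Linux hosts that’s only a couple of commands, e.g. using Docker’s convenience script (read your distro’s docs before piping a download into a shell):

```
$ curl -fsSL https://get.docker.com | sh
$ sudo systemctl enable --now docker
```

The enable --now both starts the Docker service and makes it survive reboots.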


Then you can deploy images you’ve built in your local Docker to the remote instance. Think of it like your own homemade Capistrano script.

prod/deploy/up

#!/usr/bin/env bash
#
# build and run a remote docker production env
#
# see additional setup & docs in ./bin/docker/README.md
#
# assumes an SSH config (~/.ssh/config) named "myapp"
# -d means --detach
#
set -euo pipefail

DOCKER_HOST="ssh://myapp" ./bin/docker/prod/up -d

bin/docker/prod/deploy/up

The prod/deploy/up script uses the prod/up script to build and launch your Rails app in production mode, but with two differences.

First, it defines a DOCKER_HOST. By default, Docker commands talk to the Docker service running on your local machine. By specifying a DOCKER_HOST, those commands will instead talk to the Docker service running on a remote machine.

To avoid describing the SSH complexity, I’ve configured a passwordless login and hidden all the host details in the configuration file ~/.ssh/config. If you can type ssh myapp from your local Terminal to shell into your production host, this DOCKER_HOST value should work.
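A matching ~/.ssh/config entry might look like this (the host address, user, and key path are placeholders):

```
Host myapp
  HostName 203.0.113.10
  User deploy
  IdentityFile ~/.ssh/id_ed25519
```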

The second difference is the -d flag, which turns on Docker’s “detached mode”. Instead of the remote production server running in the foreground, it’ll run detached on the remote host. That way, you can close your laptop at the end of the day without worrying that you’ve accidentally killed your production app.
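Once it’s detached, you can still peek at it from your laptop by pointing the same Compose commands at the remote host, e.g. (flags as in the prod scripts above):

```
$ DOCKER_HOST="ssh://myapp" docker compose -p appname_prod \
    -f ./somewhere/docker-compose.yml \
    -f ./somewhere/docker-compose.prod.yml logs -f web
```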

prod/deploy/exec

#!/usr/bin/env bash
#
# run something (bash by default) in a running remote prod web container
#
# e.g. "exec ./bin/rails c"
#
set -euo pipefail

DOCKER_HOST="ssh://myapp" ./bin/docker/prod/exec "${@}"

bin/docker/prod/deploy/exec

The prod/deploy/exec script uses the same trick as prod/deploy/up. It reuses the prod/exec script, but runs it in your remote server’s Docker instead of locally.

Just like the prod/exec script, it’ll default to running a bash shell in your remote host’s Dockerized Rails app. You can override that by appending a command, which this script will forward along to prod/exec.

For example, to run a Rails console in your production app, you can execute ./bin/docker/prod/deploy/exec ./bin/rails c. 

Cool, right?

By combining all these scripts with the power of tab-autocomplete, you have most of what you need to quickly and easily run Docker both locally and in production.

If you have a staging environment, create a ./bin/docker/staging directory. It should be similar to ./bin/docker/prod, but with different SSH settings. You can even deploy your development environment to a remote server! (You shouldn’t, but you can! 🙊)

What about the test environment?

I prefer not to run Rails tests within Docker. Weird things happen in tests, like breakpoints and parallelization. Additionally, I suspect, but haven’t proven, that the overhead of connecting to a local Docker service slows down test performance. I need my tests to execute quickly. Otherwise, I may get distracted by my phone or the latest blog post about how AI is both savior and scourge.

That concludes part 2. I hope you can see the power of hiding common Docker interactions inside Rails scripts. In part 3, I’ll show you how to do the same with Kubernetes.

Ed Toro is a Senior Consultant at Test Double and has experience in Ruby, Rails, and Docker.
