Developers
Software tooling & tips

Power up scripts for Rails apps Part 3: Kubernetes

In part 3 of the three part series on Rails scripts, learn about short shell scripts for simplifying Kubernetes interactions from Rails apps.
Ed Toro
|
November 24, 2025

In Parts 1 and 2 of this blog series, I described how short Rails scripts can simplify app setup and Docker interactions. In this final chapter, I’ll show you how to script common Kubernetes actions. I’ve never received more compliments for something I’ve written than I have for this.

Kubernetes (K8s) seems like it’s everywhere these days. 

If you thought Docker was hard to wrap your head around, just wait until you’re buried under miles of k8s config files.

The additional complexity makes sense. The Docker deployment strategy I described in Part 2 works for one or a few remote servers. K8s works for thousands. 

How can we use Rails bin/ scripts to define a sensible API for our app developers, allowing them to focus on Dev instead of DevOps?

The k8s concepts I think matter to most devs are:

  • ☁️ Cloud configuration: Assuming you’re using a hosted k8s provider, the first thing you’ll have to configure is access to those resources. I use AWS EKS in this example.
  • 🔑 K8s configuration: Once you have permission to access k8s, how do you configure it?
  • 👀 K8s querying: Once you have configured access to the k8s resources, how do you see what’s going on?
  • ⚡ K8s execution: Once you can see what’s going on, how do you act on what you see?

Here’s an example directory structure.

bin
└── aws
    ├── configure
    ├── current
    ├── exec
    ├── exec_in_pod
    ├── login
    ├── pods
    ├── prod
    │   ├── configure
    │   ├── exec
    │   └── switch
    ├── README.md
    ├── staging
    │   ├── configure
    │   ├── exec
    │   └── switch
    └── tail_logs

These scripts assume you’ve installed the following libraries during your app’s setup:

  • aws-cli
  • kubectl

As in the previous blogs, don’t forget to document your scripts in the README.md. For science! 🔬

AWS configure and login

#!/usr/bin/env bash
#
# ./bin/aws/configure
#
# https://github.com/aws/aws-cli/issues/7835

set -euo pipefail

echo "sso-session-name
https://sso-alias.awsapps.com/start
us-east-1
sso:account:access" | aws configure sso-session

When doing anything with AWS, the first step is to configure your session. Your “sso session” sets up how you “log into” AWS so you can interact with its API via the Terminal.

Unfortunately, aws-cli doesn’t offer a non-interactive way to configure an AWS session. This hack works, but it may cause your Terminal to pause and ding at you. I assume that’s the sound of aws-cli cursing at me. 🤬 Or vice versa. 🫢

This script echoes the text inputs you must manually enter in response to the questions aws-cli asks you. Your AWS admins will provide these values. The sso-session-name and sso-alias are placeholders.

These values are stored in the config file ~/.aws/config. That means you could also edit that file directly in your version of this configure script. But I prefer to use the AWS CLI.
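For reference, the session block that aws configure sso-session writes into ~/.aws/config looks roughly like this (same placeholder names as the script above; your admins’ real values will differ):

```ini
[sso-session sso-session-name]
sso_start_url = https://sso-alias.awsapps.com/start
sso_region = us-east-1
sso_registration_scopes = sso:account:access
```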

Thankfully, you typically only need to configure your AWS session once. 

Even though these values don’t change often, it’s still helpful to hide them away in a script file instead of copy-pasting them from an onboarding document.

#!/usr/bin/env bash
#
# ./bin/aws/login

set -euo pipefail

aws sso login --sso-session sso-session-name

This is how you log in to the session you’ve defined above. You’ll have to run this every one to twelve hours, depending on how your AWS account is configured. Otherwise, you’ll see some strange errors in later scripts. Hopefully, you’ll learn to recognize the errors generated by an expired AWS session. Just run this login script again to make them go away.
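If you’d rather not memorize those errors, one option is a small guard script (hypothetical, not part of the directory tree above) that probes the session and logs in only when needed:

```shell
#!/usr/bin/env bash
#
# ./bin/aws/ensure_login (hypothetical helper)

set -euo pipefail

# `sts get-caller-identity` is a cheap call that fails once the SSO
# session has expired, so it works as a freshness probe.
if ! aws sts get-caller-identity > /dev/null 2>&1; then
  "$(dirname "$0")/login"
fi
```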

aws/<env>/configure

To support multiple environments, k8s uses contexts. Each context must be configured separately.

AWS sessions may contain multiple roles. An easy way to check which roles you have access to is to log in to the web-based version of the AWS console and check the header menu. You may have the ability to switch between all of your roles from there.

AWS roles usually define the set of AWS resources you can access. A common pattern is to allow everyone on the team to access the “developer” role and its associated AWS resources (EC2 instances, S3 buckets, etc.), but allow only a select few to use the “production” role.

AWS EKS k8s clusters may be associated with AWS roles. For example, the “staging” k8s cluster may only be available to AWS users who have access to the “staging” role.

All of these role and cluster names are arbitrary.

What all that means is, after “logging into” AWS overall, you may need to “log in” to any k8s clusters you have been given access to via your AWS roles. That’s what these configure scripts do.

#!/usr/bin/env bash
#
# ./bin/aws/staging/configure

set -euo pipefail

aws configure sso --profile appname-eks-staging

aws eks update-kubeconfig --name staging-cluster-name --profile appname-eks-staging --region us-east-1 --alias appname-staging

kubectl config set-context --current --namespace=appname

This script does three things:

1. Configures the AWS staging role and stores it in a profile named appname-eks-staging.

As with the top-level configure, you will be prompted to enter various text values the first time you run this script. Your AWS admins will provide the correct inputs. You may be able to use the echo trick from bin/aws/configure here as well. However, the next time you run this script, your previous values will be assumed by default, so you can just press <return> over and over again until AWS stops interrogating you. Also, as with configure, your staging profile will be stored in ~/.aws/config. 

You can name your profile (appname-eks-staging) whatever you’d like. The data you’ll be asked to enter at the prompts will be whatever gibberish your DevOps team decides. They have to work around AWS-wide name conflicts. This AWS profile name is your opportunity to pick a name you prefer.

2. Configures k8s.

Using the AWS staging profile you just created, this line configures the associated staging cluster. In this example, the profile appname-eks-staging has permission to access the EKS cluster staging-cluster-name. Like aws-cli, this line will generate a config file at ~/.kube/config. 

The cluster name (staging-cluster-name) will be given to you. As with the AWS profile name, you can select a cluster alias (appname-staging) that makes more sense to you. You don’t have to worry about it conflicting with your teammates or other EKS users. You only have to worry about other apps on your local machine.

3. Sets a default namespace for your current “staging” context.

This is similar to Docker’s project name in Part 2. It helps you avoid conflicts with any other apps on your machine that may be using k8s.

You typically only need to run this once per cluster. Each cluster should correspond to an environment, like staging and production. You’ll only have to rerun configure if your AWS admins send you new values to enter into the prompts.
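To sanity-check the result, you can list every context kubectl now knows about; the aliases you chose during configure should show up here:

```shell
# Reads the ~/.kube/config generated above; look for appname-staging
# (and, later, appname-prod) in the NAME column.
kubectl config get-contexts
```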

The production (and/or any other environment’s) k8s configuration will look similar to staging.

#!/usr/bin/env bash
#
# ./bin/aws/prod/configure

set -euo pipefail

aws configure sso --profile appname-eks-prod

aws eks update-kubeconfig --name prod-cluster-name --profile appname-eks-prod --region us-east-1 --alias appname-prod

kubectl config set-context --current --namespace=appname

If all goes well, your app devs will only need to run configure once for all the k8s environments they have and need access to.

That’s a lot of configuration! Your managed k8s setup may vary, so the configuration process may be different for you. The goal of these scripts is to make k8s configuration as simple as possible for your teammates.

Now that that’s out of the way, what can we do with our newly configured access?

aws/<env>/switch

#!/usr/bin/env bash
# 
# ./bin/aws/staging/switch

set -euo pipefail

kubectl config use-context appname-staging

Once kubectl is configured for all the EKS clusters that you can access with your various AWS profiles, the switch scripts are used to change your current context. So, for example, by running bin/aws/staging/switch, all of your kubectl commands will hit the staging k8s cluster. 

The only difference between this and bin/aws/prod/switch is the cluster alias name: appname-prod instead of appname-staging. 

Make sure the cluster alias name you configured in bin/aws/<env>/configure (e.g., appname-staging) matches the name in the corresponding bin/aws/<env>/switch. Like in Part 2, much of the magic of these scripts is in using a consistent naming scheme.

aws/current

#!/usr/bin/env bash
#
# ./bin/aws/current

set -euo pipefail

kubectl config current-context

The current script simply reminds you which k8s context you’re currently operating within. Context/env switching can get confusing. You don’t want to accidentally execute something in production that you meant to run in staging! 😱
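If you switch environments often, you can go a step further and surface the context in your shell prompt (a bash sketch; entirely optional, and the styling is up to you):

```shell
# In ~/.bashrc: prefix each prompt with the active kubectl context,
# falling back to "none" when kubectl isn't configured yet.
PS1='[$(kubectl config current-context 2>/dev/null || echo none)] \$ '
```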

We now have scripts to change and query our current environment. Next, let’s get some real work done!

aws/pods

#!/usr/bin/env bash
#
# ./bin/aws/pods

set -euo pipefail

./bin/aws/current

kubectl get pods

K8s clusters are made up of pods. Each pod is an instance of some piece of your app. For example, you may have a dozen clones of Rails app pods and a dozen different kinds of Sidekiq worker pods.

K8s experts should be called “pod people”.

To keep you oriented, this script will first output the current context/environment. Then it’ll dump the list of pods along with their statuses.

If you use an alternative to kubectl, like K9s, it should be able to use the same ~/.kube/config file you generated earlier. You can script that alternative into this file instead.

Pods also have names and labels you can use to reference them later. That’ll be important in the next section.
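For example, assuming your Rails pods share an app label (the label names are up to your DevOps team, so treat this as a sketch), you can filter the listing down to just those pods:

```shell
# Hypothetical label selector; substitute whatever labels your
# cluster actually applies to the Rails deployment.
kubectl get pods -l app=rails
```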

exec

#!/usr/bin/env bash
#
# e.g. ./bin/aws/exec rails db:migrate:status

set -euo pipefail

./bin/aws/current

# Default to a bash shell when no command is given; "$@" preserves
# the quoting of whatever command you pass in.
if [ "$#" -eq 0 ]; then set -- /bin/bash; fi

# app pod's name: rails_pod
# container within the pod: container_name
kubectl exec -it rails_pod -c container_name -- "$@"

Similar to the Docker scripts in Part 2, you can “shell” into a pod. 

Pods have unique names, but may be grouped into categories by labels (what I’m loosely calling tags here). This exec script hard-codes a particular reference, rails_pod. By using it, running this script is like saying, “Shell into a Rails app pod and I don’t care which one.”

Pods may be composed of multiple containers, like their own mini Docker services. By passing the -c flag, you can specify that you want, by default, to shell into a particular container within the pod. For example, there may be a container in each Rails pod dedicated to running the Rails console.

Putting aside all the pod and container naming details, the goal of the exec script is to allow your app devs to easily run a command within a Rails app pod without worrying about selecting any particular one. Since they all should be the same, any pod will do.

Like the Docker exec script in Part 2, this script will default to running a bash shell. Append a different command after exec to run whatever you’d like.

Also, like the Docker script, you can create your own scripts that reuse exec to execute other common tasks, like running a Rails console or, as in the example, querying the database migration status.
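For instance, a tiny wrapper (hypothetical, named here as bin/aws/console) turns the most common case into a single command:

```shell
#!/usr/bin/env bash
#
# ./bin/aws/console (hypothetical wrapper around exec)

set -euo pipefail

# Open a Rails console in whichever cluster/context is currently active.
"$(dirname "$0")/exec" rails console
```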

#!/usr/bin/env bash
#
# ./bin/aws/exec_in_pod sidekiq-pod-XXXXXXX ./bin/rails db:migrate:status

set -euo pipefail

./bin/aws/current

POD="$1"
shift
# Default to a bash shell when no command is given.
if [ "$#" -eq 0 ]; then set -- /bin/bash; fi
kubectl exec -it "$POD" -- "$@"

If you want to execute a command within a specific pod, copy the pod name from the output of ./bin/aws/pods and use exec_in_pod instead. It works like exec, but the first argument is the pod name. Everything that follows it is the command you want to execute in that pod.

So, for example, you can query the status of a single worker pod instead of whatever random one k8s selects for you.

#!/usr/bin/env bash
#
# ./bin/aws/staging/exec

set -euo pipefail

./bin/aws/staging/switch

./bin/aws/exec "$@"

All the exec scripts so far have executed ./bin/aws/current to display the current cluster name, so you can be sure you’re in the correct environment.

The environment-specific exec scripts will guarantee that you execute your command in the correct cluster by calling the associated switch script before passing your command to exec—for those times when you want to be absolutely, positively sure you’re in the right place.
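The prod counterpart (bin/aws/prod/exec from the directory tree) presumably differs only in which switch it calls:

```shell
#!/usr/bin/env bash
#
# ./bin/aws/prod/exec

set -euo pipefail

./bin/aws/prod/switch

./bin/aws/exec "$@"
```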

tail_logs

#!/usr/bin/env bash
#
# ./bin/aws/tail_logs

set -euo pipefail

echo "Streaming logs from..."
./bin/aws/current
echo -n 3...
sleep 1
echo -n 2...
sleep 1
echo 1...
sleep 1

kubectl logs -f rails_pod

For my final trick, here’s a script I often use to stream the logs from a random Rails pod. I added a countdown to give myself a chance to read the current context before entering The Matrix.


Wrap Up

These are, I hope, the Rails scripts that’ll earn you a raise. From Part 1, new hires and folks with new laptops will thank you. From Part 2 and Part 3, anyone struggling with Docker and K8s will praise you. And you’ll save tons of time tabbing through these commands like a CLI master.

Docker and K8s are powerful tools. I don’t know all that they can do. These scripts exist to make it so I don’t have to. 

With power comes complexity. The long command lines these tools require, especially for configuration and builds, cause my brain to run out of RAM. 🤯 By hiding those details in simple scripts, I can focus on feature work instead of Googling or asking AI agents for advice on the proper command-line flags.

The scripts themselves aren’t as valuable as the general strategy. They’ll probably go stale as soon as any of the underlying libraries are updated. There may already be bugs in them. (Schedule office hours with me if you need help getting these working in your Rails app.) You may not want to depend on Bash. You may want to write these in another shell-scripting language, or even in Ruby! You can even try some of these in non-Rails apps.

Regardless of the scripts’ contents, the big ideas to take away are:

  1. Define a simplified API around external libraries your team often uses.
  2. When that API requires enumerated arguments (like environment names), as an alternative to conditional blocks, use a tab-autocompleted directory structure to pave paths toward the most common argument combinations. 

I hope these Rails scripts bring you as much joy as they have for my clients. 🎁

Ed Toro is a Senior Consultant at Test Double and has experience in Ruby, Rails, and Kubernetes.
