Unless you’re a real Test Double superfan, you’ve probably never heard of Lineman.js. Lineman was our attempt to create a pleasant developer experience for building single-page JavaScript applications without tying developers to any particular backend language or framework. And last week Lineman’s initial release turned 10 years old, which is an impressive 70 years in dog years. I was its primary contributor, so I’m biased, but I really believe Lineman was the best tool available at the time for what is now an incredibly common job.
When I noticed the upcoming anniversary in my calendar, I could only think of one question to reflect on: why did Lineman fail and what can we learn from it today?
Setting the scene a bit
If you rewind to 2011 or 2012, a web application’s backend stack was more or less all that mattered to anyone. Teams hired programmers for .NET, Java, Ruby, or Python experience—having JavaScript on your resume (but let’s be real, “jQuery”) was at best a nice-to-have. And each programming language’s leading application framework might have given developers a dozen different directories to stash things like routers, controllers, models, and mailers, but the entirety of an app’s JavaScript was always relegated to a single file or directory (I felt like DHH was punishing me every time I was forced to type `javascripts/`). Whatever engineering practices a team applied to their backend code, that rigor and diligence were typically nowhere to be found when it came to JavaScript.
(You might be reading this thinking “all that is still true,” and you wouldn’t be wrong, but it was even more true a decade ago, if you’ll believe it!)
The status quo was holding back the innovation of better web applications in numerous ways. Developers attended meetups and conferences associated with their backend affiliation, so frontend thought leadership was unnecessarily Balkanized. In addition to a lack of cross-pollination across communities, tools that made it easier to build and deliver JavaScript were always coupled to a single backend framework (sprockets may have been a buggy mess, but it was better than anything Java had, and it only worked with Ruby on Rails). And while the best frontend developers were capable of rapidly prototyping user interfaces, they were routinely hobbled by slow, cumbersome development servers that weren’t designed to serve static assets.
The birth and death of Lineman
Test Double was barely a year old, but it seemed clear that improving the status quo of frontend tooling could accelerate our growth by enabling us to work with a broader base of clients. Countless companies were building single-page JavaScript apps in Backbone & AngularJS and we had the expertise to help them, but there was no reason they should be deprived of a first-class frontend experience because of their choice of backend tech stack. At the same time, many organizations were struggling to coordinate separate frontend and backend efforts—development of JavaScript interfaces was frequently blocked for weeks waiting for the requisite backend JSON APIs to materialize.
We created Lineman to solve all of these problems.
(Also, Dave Mosher drew this logo. And once your project has a logo, you’re obligated to follow through with a bespoke marketing web site—that was just an ironclad rule of open source at the time.)
A few highlights of Lineman’s main features (and an archive of that aforementioned doc site):
- `lineman new` generated a ready-to-deploy project that, importantly, didn’t duplicate a bunch of code or configuration. Defaults were stored in Lineman itself, so users only had to specify how their app deviated from the golden path. This compared favorably to Yeoman scaffolds or (much later) Create React App, which littered your project with details that were out of date as soon as they were generated
- The testing story (`lineman spec` and `spec-ci`) was a sheer delight by contemporary standards and probably still beats most frontend developers’ lived experience today. This was thanks in large part to Toby Ho’s Testem runner, which could run all your tests against multiple browsers simultaneously in a slick interactive CLI after every file change
- It’s commonplace now, but `lineman build` was the first tool I’d ever seen designed to create performant and cacheable assets for production out-of-the-box. It was CDN-friendly from day one: concatenating, compressing, and fingerprinting production JS, CSS, and image assets without requiring users to configure anything or maintain a file manifest
- The plugin API could be used to add new JS/CSS/HTML languages and compilers with zero config for users beyond adding the plugin as a dependency. Because Lineman’s core functionality was implemented with plugins, third-party plugins could so dramatically change Lineman’s default behavior that a one-line change to `package.json` could transform Lineman into a competent static site generator or JavaScript library packager
- The `lineman run` development server included my favorite feature—and the one I’ve seen least often elsewhere—the ability to mock out HTTP endpoints to facilitate rapid prototyping and define executable API specifications. This empowered frontend developers to stay productive when a backend API was unfinished, while still proxying requests to the routes that were available. And if that backend was too slow, stubbed routes could unlock much faster feedback than running each page load through the full stack
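To make that last bullet a little more concrete, here’s a rough sketch of the stub-and-proxy idea using plain Express. This is not Lineman’s actual configuration format, and http-proxy-middleware is simply the proxy library I’ve picked for illustration: stub the endpoints that don’t exist yet, and let everything else fall through to the real backend.

```js
// Illustrative sketch of the stub-and-proxy idea (not Lineman's real config format).
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware'); // proxy lib chosen for this sketch

const app = express();

// An executable API "specification" the frontend can build against today,
// even though the real backend hasn't shipped this endpoint yet.
app.get('/api/users/:id', (req, res) => {
  res.json({ id: req.params.id, name: 'Jane Doe', plan: 'trial' });
});

// Any /api route not stubbed above falls through to the real, work-in-progress backend.
app.use('/api', createProxyMiddleware({ target: 'http://localhost:3000', changeOrigin: true }));

app.listen(8000, () => console.log('Stub server listening on http://localhost:8000'));
```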
I did what I could to tell people about Lineman. Made a few screencasts. Appeared on the Changelog podcast. Gave a talk at RailsConf 2014 on Lineman’s Rails integration, and another presentation at Øredev. Showed up at a number of regional user groups. It got a bit of traction, but mostly with other consultants based in the Midwestern United States—presumably because we were all inundated with large enterprise clients trying to create complex JavaScript apps but with no competent frontend tooling to speak of.
Later on, Jo Liss created the much more rigorous Broccoli asset pipeline and Stefan Penner used it to build the really-very-good Ember CLI. I toodled around on a framework-agnostic Broccoli-based build tool, but never managed to cross the finish line. Lineman was basically “finished” by 2015, in both senses of the word.
But, in the end, we all know what really happened: React ate the universe.
The worst timeline—the one I spent years warning people of—ultimately came to pass: myriad frontend teams would be saddled with hundreds of out-of-date dependencies, configuration files generated by a CLI command that doesn’t work anymore, and application code so hopelessly tangled with their build tooling that by 2018 Test Double was seeing a glut of “legacy rescue” React projects. I started hearing from VPs of Engineering who’d decided to pursue server-side rendered React but were seeing such anemic productivity they were now reversing course and upgrading their atrophying Rails monoliths (whose frontend strategy is also finally coming together a decade later).
What makes any of these tools popular?
Before drawing any lessons from Lineman’s failure, it’s worth asking “what drives the popularity of the most successful frontend tools in the first place?”
As a standard that runs on numerous fiercely-competitive platforms, JavaScript is fundamentally different than any other programming language. Want to guess what the most popular Elixir tools are? Probably whatever its benevolent dictator for life recommends. Outcomes are even more obviously predetermined when a corporation controls a language, as Oracle does with Java, Microsoft with C#, and Apple with Swift.
Making a cool thing, sharing it, and moving on with your life won’t cut it in JavaScript the way it will in Ruby.
Thanks to JavaScript’s unparalleled ubiquity and lack of centralized control, open source adoption is far more competitive on the frontend than anywhere else. That’s why corporate sponsors are willing to bankroll glitzy marketing campaigns for free-as-in-beer open source. But, while the benefits to vendors for bolstering their backend language ecosystems are pretty cut-and-dry, the incentives driving companies to invest aggressively in JavaScript open source “growth hacking” are typically much less clear—and sometimes intentionally obfuscated.
A few examples from blockbuster frontend open source projects:
- When React started spreading internally at Facebook, open sourcing it gave them a rare opportunity to win goodwill with the software community they needed to hire from (unforced licensing controversies aside)
- Google burnished its “open web” bona fides by hiring a team around AngularJS and a stack of related tools to deepen the moat surrounding its vision for the web as a platform
- Speaking of, Microsoft winked at AngularJS once, and suddenly half the booths at NGConf were populated by legacy .NET vendors selling paid component libraries
- Cypress provided a genuinely novel UI testing experience, but they also took on enough outside funding that they ended up blocking libraries that undercut their cloud service
- Insomnia, an open source API client that looks an awful lot like Postman (right down to its business model), apparently started requiring a registered account to function, despite the fact most of its users adopted it to avoid signing into Postman
Large or small, what each of these projects had in common was a powerful incentive to not only invest in the creation of open source tools but in marketing them, too. And in each case, the roundabout nature of that underlying incentive wasn’t exactly spelled out in the README.
So why did Lineman fail?
Lineman did something people didn’t think they needed, and we lacked the marketing horsepower to convince them otherwise. Perhaps more importantly, Test Double’s business model doesn’t incentivize us to incur significant expense to maximize our open source projects’ adoption.
I was under no delusions that a tool like Lineman would be an easy sell. It was a struggle to get engineering organizations to take JavaScript seriously for more than ten minutes at a time, even as they were trying to staple impossibly complex JavaScript apps onto their legacy systems. I probably gave 30 presentations on introductory JavaScript testing from 2009 to 2014, and I was always bewildered by developers who were adamant about testing every line of their backend code but would nevertheless laugh off the idea of testing their JavaScript.
Lineman may have failed to take the world by storm, but it was hardly a waste of time: it supercharged Test Double’s ability to deliver great web apps and informed our team’s thinking about how to coach clients amid an increasingly dizzying landscape of frontend technologies. But Test Double pays the bills by serving clients who (rightly) care more about outcomes than implementation details, so scoring points on Reddit and Hacker News wouldn’t have really moved the needle for us. And while I empathize with anyone who had a Bad Time™ fighting half-baked frontend tooling this past decade, I’m not too salty that our little tool didn’t succeed. Hell, if Lineman had taken off, the open source maintenance burden would have been such a distraction that it probably would have hampered Test Double’s success in other areas.
To be clear, I don’t mean to say that Lineman would have been a wild success if it had been backed by a massive budget that afforded full-time developer “evangelists,” conference sponsorships, and paid advertising. All I’m saying is that when so many vendors have a financial incentive to put a thumb on the scale to influence which frontend tools developers adopt, it complicates the picture of open source as a purely technical, meritocratic marketplace of ideas. Making a cool thing, sharing it, and moving on with your life won’t cut it in JavaScript the way it will in Ruby.
How to pick the best tool for the job
Every developer has a handful of tires they kick before adding a dependency to a project. Check the download count. See how many GitHub stars it has. Verify the most recent release wasn’t too long ago. See how many contributors are actively committing to the project. Make sure there aren’t a thousand open issues with titles like, “postinstall script contains ransomware that stole my customer data.”
You know, the basics.
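If you’d rather script those basics than click around, npm’s public registry and downloads APIs will answer most of them. A minimal sketch (the package name is a placeholder), runnable on any recent Node with a global `fetch`:

```js
// Quick-and-dirty dependency tire-kicking: recent downloads and last publish date.
// "some-package" is a placeholder; substitute whatever you're evaluating.
async function kickTires(name) {
  const downloads = await fetch(`https://api.npmjs.org/downloads/point/last-month/${name}`)
    .then((res) => res.json());
  const packument = await fetch(`https://registry.npmjs.org/${name}`)
    .then((res) => res.json());

  console.log(`${name}: ${downloads.downloads} downloads last month`);
  console.log(`Latest version published: ${packument.time[packument['dist-tags'].latest]}`);
}

kickTires('some-package');
```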
But one thing they don’t teach you at programmer school is how to evaluate open source tools in the context of the broader economy. What company is sponsoring it and why? How would that company stand to benefit if the tool they’re promoting dominated the industry?
Going forward, consider not just the popularity of a tool, but the incentives of whoever’s promoting it. When two tools both have GitHub READMEs, both `npm install` the same way, and both purport to solve the same problems, we tend to compare them on equal footing. But, if one tool is backed by a growth-stage SaaS company and one is a humble community effort, it’s probably time we start factoring that into our decision criteria.
As a worst-case thinker, here are a few questions I think through before adopting a nontrivial dependency:
- How many dependencies does it have? If I install this thing, how many transitive dependencies am I saddling myself with?
- Can I isolate it from the rest of my code via an adapter object I own or must it “infect” the rest of my code, tests, or build configuration in a way that makes switching more difficult?
- If a project isn’t particularly active, is it because it’s dangerously unmaintained or is it simply a small, encapsulated “solved problem”?
- Do I know any likeminded developers who’ve used it in anger for more than a few months and, if so, what’s the worst thing they’ll say about it?
- If the company sponsoring it turns heel and embraces the dark side, would a community-maintained fork be feasible?
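To put a finer point on the isolation question: what I mean is a thin adapter the application owns, so the dependency’s API shows up in exactly one file. A minimal sketch, where `some-http-client` is a made-up stand-in for whatever library is being evaluated:

```js
// users-gateway.js: the one file in the app that knows about the third-party client.
// "some-http-client" is a hypothetical dependency used only for illustration.
const SomeHttpClient = require('some-http-client');

const client = new SomeHttpClient({ baseUrl: 'https://api.example.com' });

module.exports = {
  // Callers depend on this return shape, not on the library's response format,
  // so swapping the library later means editing this file and nothing else.
  async fetchUser(id) {
    const response = await client.get(`/users/${id}`);
    return { id: response.body.id, name: response.body.name };
  },
};
```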
All of these questions basically redound to, “how screwed would I be if this thing goes south?” If I don’t like the answer to that question, I’ll go out of my way to explore alternatives or find a way to eliminate the need for the tool entirely—adopting it only as a last resort.
Leering skepticism comes naturally to me, but even if you’re wired differently, it’s probably safe to say the industry has earned our critical thinking before we adopt whatever tools they’re peddling.
Some parting advice
As someone who’s created a lot of open source and who consorts with other open source maintainers, I dare say my experience has been much more positive than average. So here’s some free advice on why open source contribution has worked for me and why it doesn’t work for others.
How to have a good time:
- Create an open source thing in the course of needing it yourself; don’t try dreaming up what others might need (and why would they? You don’t even need it!)
- Focus your efforts on helping those who express an interest in your thing, not on convincing the people who don’t
- Look for an employer that will benefit from you open sourcing things, but don’t try to get paid directly to make open source. Developer relations budgets are always first on the chopping block. Sponsorships significant enough to pay the bills are ephemeral and come with strings attached
- Don’t add configuration options when people ask you to. Every additional `if` statement compounds the combinatorial complexity of cases to test, and the code paths you don’t use yourself will be the ones you find yourself fixing most often
- Don’t add dependencies that aren’t absolutely necessary. Every direct dependency for you is a transitive dependency for your users, and represents a point of failure that could strike at literally any time without your involvement. The median number of dependencies in libraries I’ve published in recent years? Zero
- As soon as you’re not having a good time, stop. You don’t owe anyone shit, much less your free labor
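The configuration-option point above is really about combinatorics: options multiply rather than add. A toy sketch of how two innocuous flags become four code paths somebody has to keep under test:

```js
// Two boolean options yield four behaviors to test; three would yield eight, and so on.
// Most of those combinations are ones the maintainer never exercises personally.
function buildAssets(files, { minify = false, sourceMaps = false } = {}) {
  let output = files.join('\n'); // pretend this is real concatenation

  if (minify) {
    output = output.replace(/\s+/g, ' '); // stand-in for a real minifier
  }
  if (sourceMaps) {
    output += '\n//# sourceMappingURL=bundle.js.map'; // stand-in for real source map generation
  }
  return output;
}

// The four cases now on the maintainer's plate:
// buildAssets(files)
// buildAssets(files, { minify: true })
// buildAssets(files, { sourceMaps: true })
// buildAssets(files, { minify: true, sourceMaps: true })
```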
If this discussion has stimulated anything you’d like to sound off about, shoot me an email. If not, join me in pouring one out for Lineman, anyway. 🫗