Imagine you’ve worked for years to try to create a better way to write software, and now you’ve finally succeeded. But nobody wants to use it. What do you do next?
This is a question I’ve thought about a lot over the past few years. I’ve tried a variety of programming languages and frameworks that purport to make software development better (and I think they do!). But most of them haven’t caught on, and as a result they often struggle to stay maintained. Seeing this play out time and time again, I have to ask myself: Is it worth it to try to make software better? Or is my desire for better software one that will inevitably leave me disappointed?
This question came to mind again recently as I’ve been setting up a vintage Macintosh to run BeOS, an alternative operating system developed in the 1990s. I’ve really been getting into vintage Macintoshes lately. I used Macs as a kid, and now I can do things I couldn’t afford back then, like buy 1995’s most powerful Macintosh just to run BeOS.
BeOS was supposed to be revolutionary, to change the way computers worked. But it never caught on, and the company behind it was dissolved in 2001. So why didn’t BeOS work out? Even in failure, its story is instructive for those of us who want to make computing better.
Where Be has been
To understand how I saw BeOS at the time, we have to go back further. The Macintosh was released in 1984, just two years after I was born. My dad was a Macintosh true believer, and as my interest in computers quickly developed, I picked up the same mindset from him. In our view, the Macintosh was making the world better: the graphical user interface was a huge improvement over the command line, helping computers reach their potential to be usable by and helpful to people.
Into the ’90s, this utopian ideal was still all I saw in the Macintosh, but most observers realized what I didn’t: something was very wrong at Apple. Architectural decisions made in creating the Macintosh system software, its operating system, were not scaling with the advances in computing. A radically new operating system would need to be developed, but Apple’s management was struggling to make the decisions necessary to tackle this problem. Two former Apple employees, executive Jean-Louis Gassée and engineering leader Steve Sakoman, recognized the problem, and in 1990 they founded a company named Be Inc. to create a new kind of computer.
Creating a new kind of computer was audacious even in 1990: the Microsoft/Intel combo already dominated the personal computer market, with Apple taking a sizable second place. (Versions of Unix existed, but not yet Linux.) But Be had reason to be confident: They were going to create a new OS from scratch without the encumbrances of backward compatibility and past technical decisions. They could design it right.
And it worked. Public BeOS demos showed a commercially available machine accomplishing much more than a Macintosh or Windows PC with the same processing power: multiple windows simultaneously rendering 3D images in real time, playing a movie, rendering a web page, and playing audio. As a kid at the time, all I could think about was how amazing it was that a personal computer could do all that; I didn’t dig deeper to ask how it was possible. But it was possible because BeOS was written from scratch to take full advantage of the PowerPC processor (even multiple processors per machine) and to target the multimedia needs of the day.
Not to Be
So why didn’t these advancements lead to BeOS taking over the computing industry? Others are more qualified than I am to analyze that, but from my reading, a key factor seems to have been difficulty gaining traction. Users couldn’t do anything with BeOS until there were applications for it. Developers would have needed to completely rewrite their applications to work on BeOS, and they weren’t motivated to do that when there were no users. Despite the problems facing Mac OS and Windows, those operating systems were “good enough” for most users. Was the potential of computing being weighed down by the limitations of these OSes’ designs? Yes. But most users wouldn’t adopt BeOS just for the sake of the future of computing; they just wanted to get their work or web browsing done.
There is a classic article by Richard P. Gabriel titled “The Rise of Worse is Better.” In it, Gabriel contrasts the Lisp and C programming languages. He argues that although Lisp was a much better language (an opinion many people I respect share), C had attributes that let it meet needs more quickly and achieve wider distribution. C’s “worse” attributes were actually better for helping it catch on. One key takeaway from the article is that what is theoretically best will not necessarily fare best in the dynamics of the real world.
BeOS was attempting to solve particular problems of its day, but you can extrapolate these dynamics if you think in terms of potential and waste. If you get excited about the potential of computers, it feels like a waste when they aren’t reaching that potential. It feels like a wrong you need to make right. It’s painful for you to endure the waste day in and day out. These dynamics are the same regardless of which specific aspect of a computer’s potential you personally care about—whether performance, usability, software reliability, ease of programming, or personal control of your data.
3 ways to make a difference
So what do you do when you care deeply about optimizing one of these things, but the average user, and therefore the world, doesn’t care about it? If Be Inc. wasn’t able to succeed with all the engineering expertise they had, what hope do I have of being part of something that changes computing for the better?
So far, I’ve come up with a few possibilities:
One option is to do just what Gassée and Sakoman did, regardless: Start a company to try to bring about the change you want to see, to “make a dent in the universe.” If you do, the story of Be Inc. serves as a warning in the same way that many startups’ stories do. Your chances of success are statistically low, and whether you succeed or not, you’ll pay a high cost in terms of time and stress. And that success isn’t entirely based on whether your technical ideas are good or not. Building the right thing is not enough.
Another way to make a difference is research. The world of research isn’t one I’m deeply immersed in, so my understanding of it is limited, but I would define “research” as “exploring new ideas where the direct goal is not monetization.” Research happens at institutions like universities and research labs, but my definition of research also includes hobbyists who explore new ideas in their free time. There are online communities where you can find researchers of all kinds talking together; two examples that my interests have led me to are the Malleable Systems Collective and Future of Coding.
The third way to make a difference in computing is to build your own tools. This means starting with what you can personally influence and building something that makes things better right where you are. This might mean something as simple as writing some shared code in the programming language and framework you already work within, code that pays off right away in making your daily work easier. A more radical approach is described by Devine Lu Linvega in their Strange Loop talk that I wrote about before: radically adjusting how you engage with computing in a way that fits a more human scale. What these approaches have in common is that they are applied to real-world software (unlike research), and yet they are focused on something maintainable by a person or a small community without needing to create a product that outsells competitors.
Settling in for the journey
Looking at the dynamics of potential and waste through the lens of BeOS has helped me better understand the worries I feel. From my introduction to computers with the Macintosh, I’ve believed computers have a great potential to make a positive difference.
Since I got into consulting in 2015, I’ve been looking for ways to make software development better (hence the appeal to me of Test Double’s mission: “Software is broken. We’re here to fix it.”). Recognizing the dynamic of potential and waste helps me see that it’s not just my weird interests that leave me feeling stuck: many people over the decades have wanted to see computers reach their potential in various ways, and we all face the same challenges. That’s why, whatever approach I take to making computing better at a given time, it’s encouraging to dive into computing history, whether by reading, watching videos, or using old computers myself. There’s perspective to be gained: ideas that were tried and worked or didn’t, and trends you can still see today. And there’s the encouragement of knowing that we aren’t the first people to see the potential of computers and struggle to get them there.
If you’re part of that group, let’s find a way to keep going.