The video above was recorded at Agile and Beyond 2017 in May 2017.
It's an accepted truth in software development: when deadlines loom, the team will be pressured to GO FASTER.
Unable to convince management of the risks, we resign ourselves to cutting corners and working longer hours. Unfortunately, the result is just as predictable: a short spike in velocity and many delays to come.
This talk is an exploration of how to reverse the velocity spiral by building high trust relationships with management.
We'll discuss root causes of our distrust, like estimation, communication, and abuse of power, so that the team can resist short-sighted solutions that would plague them for years to come.
Transcript of the talk
[00:00:00] This talk is called Under Pressure, and it's about how we resist the urge to go faster. Ironically, I'm the last thing standing between you and happy hour and free drinks. And I have 120 slides, so I'm gonna fly through them. I'm gonna go as fast as I possibly can. For those of you who don't know me, my name is Todd Kaufman.
[00:00:18] About five years ago, I started up a consultancy with Justin Searls called Test Double. Our mission is to really improve the way the world builds software. It's something that we're all extremely passionate about. My experience has spanned a broad number of roles with regards to software development.
[00:00:35] So I've developed a little bit of a unique perspective. I've been a software developer professionally for about 20 years. Steve works with me, though he might dispute my capabilities as a software developer at this point. I've been a certified Project Management Professional. Lately I do a lot of sales and account management.
[00:00:52] So I'm working with vice presidents, CIOs, various stakeholders. And that experience has really helped me build up a lot of empathy, so hopefully I have a little bit to share with you regarding some of these various viewpoints. I'd love your feedback; please feel free to tweet at me, @toddkaufman. At Test Double, we're always looking for fun, challenging projects to work on.
[00:01:14] So if you think you could use our help, reach out at hello@testdouble.com. Or if you think you'd like to join our ranks as a double agent, also feel free to reach out there.
[00:01:26] Minor disclaimer: these stories are completely hypothetical. I would never get up at a conference and talk about any of our clients or my past.
[00:01:39] All right. So for the team that I'm referring to, we're going to talk about a few personas, just for the purpose of this discussion. First are our stakeholders, right? These people typically care a lot about things like how much this is going to cost, when it's going to be done, and how it's going to align with their expectations.
[00:01:56] What level of business value it's going to derive for the business, right? Typically at the start of a project, they have a ton of good intentions, and they're just looking for something that's going to provide that ROI for their business. Further, I'll use "developers" a little bit loosely here; this could mean a user experience specialist or a quality engineer, but for the most part, when I'm talking about developers, I mean people with hands on keyboards, writing code. And typically at the start of the project, our developers have a lot of good intentions as well.
[00:02:29] They want to build something they can be proud of, that does exactly what the users want, what's been specified. And the final persona is the project manager. One of the interesting things when I was studying for the PMP that really stuck with me was that 90 percent of project management is good communication.
[00:02:46] Those of you who have worked with good project managers know this to be true, right? Project managers are the glue that keeps this team together and keeps them speaking the same language. So again, completely hypothetical: say we've done some initial planning.
[00:03:02] We have maybe 180 pandas' worth of work that we need to get done, and 20 iterations to do it in. And we get off to maybe a little bit of a slow start. We're starting to slip a little bit, but the last couple of iterations we've picked up our pace, so everybody feels pretty good. Then we have that one backlog grooming session where we pull a story, and it's something generic that we hadn't accounted for.
[00:03:26] It's something like "the ability to report on data." When we read that initially, when we were pointing this thing out, we said that's probably about five pandas' worth of effort, because we're just going to throw a bunch of data on a screen and that'll suffice. But as we talked more to our stakeholders about this, it turned out they really want a dynamic reporting engine.
[00:03:46] So it's blown up our scope quite a bit. So we say, that's okay. We made a lot of good progress. We can still muddle on here. But then a few more iterations happen and we see we're really not catching up. We have started to build up enough technical debt where it's slowing down our teams. Maybe some of the developers have gotten a little frustrated and moved on to other projects.
[00:04:07] So we've had some people move in and move out, right? This is the point typically where a project manager starts looking at this and saying, we have an issue, right? This thing was only supposed to take four months, but now it's looking like it's going to take six. And it's at this point when we have this big gap, the gap of disillusionment here, that we start to make some really bad decisions.
[00:04:31] Our stakeholders may own the purse strings for this project, and to them the milestones are more important than the budget. They're like, hey, easy: we had four developers on it, it was going to take four months, and now it's going to take two extra months, so we'll just add two more developers.
[00:04:47] Problem solved. Our project manager knows that maybe that's not the best approach, so they start mentioning, hey, we should work weekends. We're going to have to go with the team that we have, so let's just work weekends until this thing's back on track. The developers hear "let's work weekends" and they're like, hell no, I don't want to do that.
[00:05:07] I'll just cut out some tests, right? I'll stop automating all these pesky unit tests; it'll save me a bunch of time. Or maybe I'll just come back to them later. That's the biggest lie in software development, by the way: we'll get back to it later. So my question for all of you is, does this make us speed up? I agree.
[00:05:27] Survey says: in my experience, about nine times out of ten, we actually go slower immediately. In both the short term and the long term, we wind up taking a lot more time; our velocity dips, and it never recovers. But that's not the biggest problem. The biggest problem is that one-in-ten shot we have of actually going faster.
[00:05:50] Because what happens then? First of all, we only go faster in the short term. We've probably cut enough corners that we've created so much technical debt, it's going to slow us down in the long term. But even worse, we've reinforced all of these bad decisions that people have made. We've basically told stakeholders, look, just keep throwing people at a problem whenever it's late.
[00:06:10] We've told the project manager with this that, hey, we're willing to work weekends. Whenever we start running into a problem with the timeline, we'll just start working weekends. So how does this set us up for the next time we have to do a project? My guess is they start using the data from this project to predict future outcomes.
[00:06:31] They see somewhat inflated velocity numbers to use for estimation the next time. Let alone the developers who have to work in this code base now have to deal with all this technical debt, so they can't go as fast. So the project starts slipping again. So they start getting more pressure. So they start making more bad decisions.
[00:06:50] We keep repeating the cycle again and again, and we can't snap out of it. So this talk really is about how we react to that pressure. At that point where we have that gap, we have to make better decisions. And ideally, can we do enough things that we can avoid that pressure entirely? So we can actually become a more predictable software development organization and get stuff done when we predict it should be done.
[00:07:19] All right. So let's talk about some of these things specifically, right? Our stakeholders, again, they're a little bit distant from the problem, so they're reaching for whatever levers or knobs they can turn to try and make this thing successful. Problem is they're so distant, there's really not a lot that they can do.
[00:07:36] So hopefully this conjured up thoughts of The Mythical Man-Month in your mind when I mentioned this. Somewhat of a popular book, written over 40 years ago, by the way. It's taught us so much that I still see vice presidents and CIOs time and time again just throwing bodies at a problem. Fred Brooks wrote this back in, I think, 1975.
[00:07:58] And really, if I'm going to summarize it for you, it's this question: does adding resources to a late project actually speed it up? And I'll save you the $20 at Amazon: it does not. Exactly. All right. More succinctly: maybe we can't communicate that to our stakeholders effectively. Try just showing them this Dilbert comic strip instead.
[00:08:23] Maybe this will make it resonate for them. Why is this? Why is it that adding people to the problem doesn't make it go faster? I think if we can start communicating with our stakeholders about the reasons why, we'll be less apt to try and make that happen. So first of all, ramp-up is key.
[00:08:42] I run a consulting company. We start probably 20 or 30 new projects every year, right? And every time, there's a gradual ramp-up before our developers are productive. It's just the way it is. You're learning the team, the process, the existing code base, the business domain. All of these things take time before you can be truly productive, right?
[00:09:02] Two: if there are only two people on a software development project, there's one channel of communication, person A to person B. What about if there are 10? That's 45 channels of communication. That is a cost, and it's significant. Making sure that you have 10 developers who all know what the others are doing, who understand the code base as a cohesive whole instead of working in silos,
[00:09:28] takes a lot of communication. There's a definite cost to that. As you add people, that cost goes up. That's less time programming, less time getting things done.
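The arithmetic behind that claim is the classic pairwise-channels formula, n(n-1)/2. A quick sketch (my illustration, not from the talk; the function name is hypothetical):

```python
# Pairwise communication channels on a team of n people: n * (n - 1) / 2.
def communication_channels(team_size: int) -> int:
    """Every pair of teammates is a potential channel that must be maintained."""
    return team_size * (team_size - 1) // 2

for n in (2, 4, 10, 20):
    print(f"{n} people -> {communication_channels(n)} channels")
# 2 people -> 1 channel; 10 people -> 45 channels; 20 people -> 190 channels.
```

The quadratic growth is the point: doubling the team roughly quadruples the coordination overhead.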
[00:09:41] And further, I haven't seen a ton of giant code bases, so my data kind of skews towards the smaller projects, I'll grant you that. But how many different lines of work can we successfully maintain on a software project in a single code base? There are only so many before we start stepping on each other, before I start implementing something that affects something you're working on, before we have all these merge conflicts, et cetera, right?
[00:10:11] There are only so many lines of development that we can pursue. I mentioned this already, but integration costs are real. When you have a lot of different developers, somehow their work has to come together and actually do things as they're supposed to be done, right? And somebody has to test this. There's cost there, right?
[00:10:29] What I don't hear discussed a lot is the level of dysfunction, and how that is amplified when we have bigger teams. Somebody mentioned Brooks's law earlier, which is this:
[00:10:42] adding people to a late software project makes it later. A slightly less famous corollary is that adding people to a dysfunctional software environment actually makes it more inept. You start exposing all of these issues that may have been minor when there were only a couple of people working on the project; they become major issues.
[00:11:04] When you add a lot of people, it consumes time. All of this makes your team slow down, right? Instead, what if we focus more on fixing the existing issues? Instead of just throwing people at this problem and hoping it works out, can we maybe be a little bit more deliberate here? So, I mentioned separate lines of work.
[00:11:27] Say you're looking at the code base, and really there are only about three lines of work that can be completed in parallel, right? But you have eight developers, and maybe they're pairing every day, so you really have four lines of work. The thing I think you should do to speed up? Get rid of two people.
[00:11:46] You can pick them however you like. All right, nobody does this, though. This is never anyone's instinct when a project is starting to run late, but you really have to inspect and see how big of a team you can possibly support and still work efficiently. Because I think nine times out of ten, you'll actually speed up by doing this if you can't support that many lines of development.
[00:12:08] Further: one of the first development manager jobs I had, I was, hypothetically, in charge of about 30 or so .NET developers who were extremely talented. It was a great group of people to work with. Except for one, right? This one person was extremely negative, very ego-driven, and extremely frustrating for the rest of our developers to work with on a day-to-day basis.
[00:12:34] We spent a ton of time arguing about what really were minor technical decisions. This person was a boat anchor to five of the best developers I've ever worked with. Slowed them down probably by 50 percent or more. Maybe you haven't worked with people like that in the past. I have. And when you do,
[00:12:52] there's one solution to it and one solution only, especially on a late project: get them out of there. So remove negativity and waste instead of focusing on adding more people. That's step one. All right, so project managers. Again, they're somewhat like our stakeholders in that they're not extremely close to the solutions.
[00:13:15] They're not the ones who are typically implementing the software, so they don't have as much control as software developers do. All right, so they start reaching for anything they can do to influence this outcome. And one of the things that I've seen more junior project managers do is try to get a level of commitment down at the iteration level, or the sprint level.
[00:13:39] Project managers like per-sprint estimation, and I get this, because in their minds it yields predictable outcomes. Okay, and there's this thing called the cone of uncertainty that I think has reinforced this belief. For those of you who are just seeing it: basically it's saying that at the very inception of the project, when we estimate it, we're probably going to be off by a factor of four, or a factor of one fourth, right?
[00:14:02] That's the variance. But as we work and get to know more about the solution and each other, and normalize as a team, we'll wind up being more productive, and we'll be able to estimate accurately when we get farther along in that project. I have serious issues with this graph, though, right?
[00:14:21] A realistic cone of uncertainty looks a lot more like this. If you can estimate a project without writing a line of code, and it comes in at only four times what your estimate was, please go apply at Test Double; we want to talk to you. Seriously, it's usually more like 8x or 10x or something like that.
[00:14:40] And nobody in the history of software development has ever come in at one fourth of their estimate. On anything. That has never happened.
[00:14:49] So again, the other thing I'm trying to show with this more realistic cone of uncertainty is that even farther along in the project, you can still have a significant amount of variance. That's more like 2x or 4x halfway through the project, because you never know when you're going to pull that vague requirement that winds up multiplying like rabbits, right?
[00:15:08] Like, that winds up being 20 different requirements you just hadn't uncovered yet. So again, our project managers: show a little empathy for them. They're getting beat on, probably by stakeholders and users and others, asking them, when are we going to be done? When are we going to be done? So when they ask you as a development team, when will we be done?
[00:15:29] It's a valid question. But project managers, you also have to understand: "we don't know" is a valid answer. It's inherently hard to predict, and you can't simply change estimates into commitments and feel like you've made that problem go away. And I've seen people do this time and again, right? What used to be an estimation session at the beginning of an iteration that came up with 18 pandas' worth of work somehow gets turned into "the team committed to doing 18 pandas' worth of work this iteration."
[00:16:04] What? No, we didn't. Yeah, you did. Got to get them done; that's what we've committed to, right? This is why #NoEstimates was born. #NoEstimates wasn't born because people hated estimating. I've estimated probably a hundred projects in my lifetime. I don't mind estimation. I mind getting beaten over the head with my estimates after the fact when they're wrong.
[00:16:32] This has a lot of validity, right? So let's think about this for a second. If we're doing per-sprint estimation or commitments, there are three possible outcomes. One is that you absolutely nailed it. You said, I'm going to get 18 pandas' worth of work done, and you basically submit that PR right before walking into the demo to get that 18th panda's worth of work done.
[00:16:57] Does anybody think this happens on their projects? You are all correct. This never happens. What happens is you sometimes get done early, and something called Parkinson's law takes over, which basically says that work expands to meet the estimate. And this isn't something that's malicious, right?
[00:17:16] Developers aren't stealing your money. It's not like they're off Googling and reading blogs or something like that. What they're doing is saying, hey, I can go back and add a little bit more fidelity to that story I just finished. I can go back and add a few more automated tests that I wish it would have had the first time.
[00:17:34] I can make that user interface a little bit better. That's how things start expanding, right? These arbitrary sprint boundaries lead us to just chewing up time. If they didn't exist, we'd probably just pull the next card and move on. Obviously the third outcome is that we're late, and we're all probably pretty familiar with this, right?
[00:17:51] Cause software developers are horrible at estimation. But when we're late, what happens?
[00:17:59] How does that affect the next estimation session or commitment session? Think back to turning this into a commitment. If we only got 16 pandas' worth of work done, and our project manager beats on us that no, you committed to 18, you have to get these other two pandas' worth of work done, maybe by the next sprint, or maybe over the weekend, something like that. What do we do as developers heading into the next estimation session?
[00:18:28] We game it. This system is easily gamed. We're dealing with pandas or t-shirt sizes or Fibonacci points or whatever, right? They aren't real. These are numbers we're coming up with on the fly. So if you start beating on your developers that, hey, they're not hitting enough velocity: okay, problem solved.
[00:18:46] That's a five, not a three. Oh, and by the way, this defect's big enough, it should also be a three. And you get in these big arguments, right? I've seen teams do this, arguing about things like: what do you do when you only get a story partially done in the sprint? Do you credit yourself three points for the 60 percent you got done, and carry over two points as a new card?
[00:19:09] It's a complete and utter waste of time. Instead, let's think a little bit differently. No Estimates is valid, especially at the per-sprint level. Stop it. Stop estimating completely at the sprint level. Just don't do it. Our project managers are trying to figure out when we're done. If you want to figure out when we're done, track cycle time instead.
[00:19:34] Cycle time is basically real data that says: on average, it's taking us this amount of time to get a card through the system, right? And you can keep this as a rolling window, so that you see, hey, this is the last month's cycle time, or cycle time from project inception. You can actually start to track this as a relative number, right?
[00:19:53] Hey, our cycle time is trending down. We're getting faster. This is a good thing. Or, hey, our cycle time is trending up. Do we have too much technical debt? Do we need to start focusing on that? This is a much better number at predicting how the team is going to do in the future. It's based on real things, calendar days.
[00:20:14] The big gripe with cycle time is typically that people can't break down the system into similarly sized features. Okay, fine. Do a really quick sizing and then track cycle time based on those sizes. Really quick, like small and large. If your estimation session takes more than 30 minutes, get rid of it.
[00:20:34] And just deal with the variance. But then you can track cycle time at small and large, right? So you can say, hey, we know it takes about two days for a small to get through the system, and about five days for a large.
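The bookkeeping described above is small enough to sketch in a few lines. This is my illustration, not from the talk; the card data, dates, and function name are all hypothetical:

```python
from datetime import date
from statistics import mean

# Hypothetical finished cards: (size bucket, date started, date finished).
finished_cards = [
    ("small", date(2017, 5, 1), date(2017, 5, 3)),
    ("small", date(2017, 5, 8), date(2017, 5, 10)),
    ("large", date(2017, 5, 1), date(2017, 5, 6)),
    ("large", date(2017, 5, 9), date(2017, 5, 14)),
]

def average_cycle_time(cards, size):
    """Mean calendar days for a card of the given size to get through the system."""
    days = [(done - started).days for s, started, done in cards if s == size]
    return mean(days) if days else None

print(average_cycle_time(finished_cards, "small"))  # 2
print(average_cycle_time(finished_cards, "large"))  # 5
```

A rolling window falls out of the same function: filter `finished_cards` to the last month before averaging, and watch whether the number trends up or down.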
[00:20:48] Okay. So we've hampered our project manager and our stakeholders ability to do anything at this point.
[00:20:59] Unfortunately, this just makes them stress out, because they're still having to answer questions about why we're late and when we're going to be done. So their typical reaction is to radiate that stress out to you all as developers. And they'll start applying pressure to you, sometimes subtly, sometimes not so subtly.
[00:21:20] I was part of a project where this person, as a project manager, wasn't really very experienced with that. And the first retrospective item, every iteration, for like our first eight iterations, was velocity. No other detail. Velocity. Velocity's bad. It's like, okay, great, we get the message. We've got to go fast.
[00:21:40] But does applying pressure to software developers work? Has anybody seen this before? This is called the stress response curve. There's been a lot of study on how pressure affects our performance, dating back over 100 years now. This version is, I think, from the late 70s or something. But what they're trying to articulate here is that, look, left to our own devices, developers get nothing done.
[00:22:06] Kind of accurate. We do yak shaves and stuff like that all the time. We aren't the most productive when no pressure is applied. Similarly, a month ago when I was working on this slide deck, there was no pressure, and I got very little done. Then at the end of last week I'm like, oh God, I've got to give a talk here in less than a week.
[00:22:21] I started making a lot more progress, right? As that stress increased, I started producing more. Eventually you hit this comfort zone where it's the ideal amount of stress. You're getting enough pressure to where you're highly productive, but there's a precipitous drop off, right? As people apply more pressure beyond that point, you start to become exhausted.
[00:22:43] You even become ill; you start to have breakdowns, and performance just plummets. The challenge with this, though, is: do we even know where we are on this graph? I feel like I don't at the time. I guarantee our project managers don't. And these studies were typically about rote jobs, right? Manufacturing jobs, things like that.
[00:23:10] As knowledge workers, how does this play with creativity? So Harvard Business Review had a really good article called Creativity Under the Gun. And it was basically a research project that set out to answer this question, like, how is our creativity affected by pressure? Unsurprisingly, they found that we were less creative when a lot of pressure was applied.
[00:23:32] And the way they studied this was basically having people answer a daily survey, where they had to articulate in detail what they had produced that day, how much pressure they felt they were under at the time, how creative they felt they were, et cetera, et cetera.
[00:23:47] What was interesting is that they found a couple of other things with this. One, people really had no idea that they were less creative when this pressure was applied. They still felt like they were pretty creative, but the solutions they were articulating were less so. Two, there's this interesting facet: tracking this on a daily basis, they found that, say, on Monday you're writing some software, doing a great job, feeling highly creative. Tuesday, the same thing. Then all of a sudden your planning meeting happens on Wednesday, and you get this huge speech about velocity that creates all this pressure on your team.
[00:24:21] It immediately drops your creativity, but then your creativity stays down the next day, and the day after that. So it wasn't even a situation like, oh hey, it's a minor blip on the radar when you apply the stress. You completely derail a developer for the better part of a week.
[00:24:40] And I think the problem that we're running into here is that, again, project managers don't know where we're at on the stress response curve. When we're behind on a project, they're assuming, hey, these people are at the beginning of that stress response curve. They're largely bored.
[00:24:55] Maybe they're lazy. And developers, admittedly, don't do a good job of combating this belief, right? When we talk about yak shaves, or how we have to wait four hours for our Cucumber suites to finish, and we say that in front of project managers and stakeholders, what they typically hear is: I'm throwing money out the window on these knuckleheads.
[00:25:17] We have to be a little bit more disciplined in how we communicate to people. But it's my firm belief that developers aren't lazy, at least when they're properly motivated. Anybody recognize this picture? Apollo 13. That's right. Apollo 13, I'll summarize: spaceship blows up in space. The three astronauts on board wound up having to go into a lunar module that was designed to hold two people for a period of hours.
[00:25:44] They had to be in there for four days while the spacecraft was coming back towards Earth. And there were a boatload of issues with this, right? The things that blew up caused a lack of electricity and a lack of oxygen, and the issue they were dealing with here was that the lunar module wasn't designed to filter out the amount of carbon dioxide that three astronauts were producing.
[00:26:07] So they were slowly being asphyxiated. Presented with this problem, NASA basically got a bunch of engineers in a room with all of the materials the astronauts would have on board, and put a laser focus on them to solve this problem. And this is what they came up with. They basically took one of the filters that didn't fit from the command module into the lunar module, a sock, the cover of a flight manual, a bunch of duct tape (because of course), some tubing, and a bungee cord.
[00:26:42] They MacGyver'd the hell out of this thing. And as it happens, it managed to save the lives of three people. So under tremendous amounts of stress, these people were able to come up with what I would argue is a very creative solution to a problem at hand. So I know some of you are thinking like, look, we're not saving lives here.
[00:27:04] And I'll grant you that, but we had a client recently who was in this very same mode of applying a ton of pressure to our developers, so much so that we wound up rolling some of the developers off of their team. Okay. And this client didn't do a great job of talking about the impact that this software would have.
[00:27:26] They just tried to radiate stress onto the team, hoping people would work harder, faster, longer.
[00:27:34] Once the project was completed (we actually did get it done on time), we got a message from the client about how their end user had called them on the verge of tears, because they had spent two years of their career building up to the point when this software would be released. They had expended a ton of political capital.
[00:27:54] Basically, their career at this company hinged on the balance of this successful release. And they called, on the verge of tears, articulating how thankful they were when they first saw these applications rolling through the system, and how much gratitude they had for the people behind it.
[00:28:09] What if we had heard that story at the beginning, instead of the stress? Instead of just blanket stress that we've got to be done by this date. Do you think the developers would be more productive? More creative? I do. We need to focus less on stress and maybe focus a little bit more on the why. What is meaningful to our end users?
[00:28:34] What impact does this have on the stakeholders' careers? On the business? Do we know this on all of our software projects? Unfortunately, not often enough. All right. Developers are not immune to criticism when it comes to these things. Oftentimes we get all this pressure, we get all this stress, and we do want to do a good job.
[00:28:57] We want to make sure this thing gets done by the date, if the date's important. So what we wind up doing is cutting corners. Okay. And this is hard to talk about, because quality is so ephemeral, right? Do we reduce quality in order to speed up? Hat tip to the late Robert Pirsig, who wrote a great book called Zen and the Art of Motorcycle Maintenance, whose primary character is slowly battling insanity while trying to define quality.
[00:29:24] I feel like this is me with regards to trying to come up with a definition of software quality, but we'll give it a shot anyway. With software quality, we tend to grab for the easy metrics, like cyclomatic complexity. For those of you who aren't developers or are unfamiliar: this is largely measuring the number of branches in a code base.
[00:29:47] And it is true, the more branches there are in a section of code, the harder it is to reason about and understand. But I can still name things really horribly. I can use abstractions that make no sense to anyone but myself. I can still write arguably horrible-quality code, even with a low cyclomatic complexity.
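To make that concrete, here's a hypothetical example of my own (not from the talk): both functions below have a cyclomatic complexity of 1 (a single path, no branches) and compute the same thing, yet one is far harder to reason about. The metric can't tell them apart.

```python
# Cyclomatic complexity 1 (no branches), but the names tell you nothing:
def f(x, y):
    return x * 1.08 + y

# Same logic, same complexity of 1, but readable:
def total_with_tax_and_shipping(subtotal: float, shipping: float) -> float:
    SALES_TAX_RATE = 0.08  # hypothetical rate for illustration
    return subtotal * (1 + SALES_TAX_RATE) + shipping
```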
[00:30:06] So that's definitely not the whole story. Some people glom onto Uncle Bob's SOLID principles, right? We think the SOLID principles define what's good code and what's not, but SOLID doesn't cover everything either. If you're in Node.js, dealing with idiomatic callbacks and promises, show me where that's covered in SOLID.
[00:30:24] It isn't. Or if you're dealing with a functional language, right? How does this apply? It may not. So we can't use that alone as a measurement of quality. Typically, the one people try to glom onto the most is high test coverage. We have 98 percent code coverage.
[00:30:42] We're good. Maybe. If you have 8 percent code coverage, I guarantee you have problems. But you may still have a lot of problems with 98 percent code coverage. Quality is inherently hard to define, but I'm going to take a stab at it. With regards to software development, I think quality should be how easily and safely we can change a code base.
[00:31:03] Because this affects our likelihood of creating issues in front of a user in the form of defects or downtime or what have you. And it affects how quickly we can get things done, which everybody else cares about, stakeholders and project managers alike. So again, this is largely nebulous though. So it's hard to put a number on.
[00:31:26] But again, coming back to the original question, do we reduce our quality in order to speed up? Let's think about the corners that we cut as software developers. Do we start writing less automated tests? Yep. What does that lead to? Typically less refactoring, right? If I have code coverage of 8 percent on the code base, once it works, I slowly back away from the keyboard, right?
[00:31:50] I ship that thing and get the hell out of there. I don't go back and refactor it to make sure that Amber, who's following in my footsteps, will understand it. Or that future Todd, who is even more of an adult than current Todd, will understand it. This leads to more complexity.
[00:32:09] Oftentimes we'll start doing things like less pairing. Anybody ever done this when timeline pressure rears its head? Constantly, right? So if we have more complexity in a code base, and less pairing, what do we get? We have less understanding of this code base across a greater number of people. Alright, this is inevitably going to lead to more defects.
[00:32:33] You have fewer tests, less refactoring, more complexity, less comprehension among your team members, and you're gonna have more issues rear their head. When you have defects, you have rework. This is leading to less quality; you can't argue against that. We cut these corners, and it leads to less quality. But remember, the reason we cut these corners in the first place was to speed up.
[00:32:55] I'd argue it also leads to less speed. If you're reworking the solution because you're finding defects late, instead of automating tests early, if it's taking others longer to come up to speed on what the hell you were thinking when you wrote this code, the team is going to be moving less fast. So instead, let's focus on being intentional.
[00:33:13] Steve McConnell had a great white paper, maybe 10 years ago at this point, about intentional versus unintentional technical debt. He likens intentional technical debt to a mortgage, because you have a finite date when you're going to pay that thing off. Maybe 45 freaking days or 45 years in the future, right?
[00:33:31] But there's a date there. You have a plan. Unintentional technical debt is like credit card debt. You have no plan for this. And that's typically what we're doing when we're cutting corners. Just saying, hey, I'm going to cut corners and write less tests? That doesn't mean you're intentional. That just means you're doing a half assed job, right?
[00:33:49] Being intentional is something like, hey, we're running on two databases that are basically the same thing right now, which we know is not ideal, but we're going to migrate to this one once we have this project done. Something along those lines. So be intentional and make that debt visible. I once was critiqued by people who I really respect in the community for saying we should have technical debt cards up on the board.
[00:34:10] And their question to me was like, why do you have technical debt? I'm like, what the hell are you guys doing where you don't have technical debt? Every project I've ever worked on has had something that I wish I could go back and change. Treat it like any other feature, throw it up on the board, do a quick estimation of it, keep tabs on it, prioritize it like anything else.
[00:34:30] And we've got to get away from this mindset that we slow down when we're pairing. I talk to a lot of clients. We're not dogmatic about pairing, don't get me wrong. If it's simple Rails CRUD screens and things like that, yeah, people are going to follow in my footsteps and probably know exactly what I was thinking when I built it.
[00:34:47] But if it's something that has a little bit more complexity, make sure you have another set of eyes on it. You're probably going to create some abstractions or make some decisions that may be hard to follow for others. So you want a second set of eyes on it. Alright, so all of these things are how we react when that pressure is applied.
[00:35:06] None of them are really talking about how we might be able to avoid it in the future. So let's come back to estimation here a little, right? No Estimates, which I mentioned, is extremely valid as a movement that's trying to stop people from clubbing developers over the head with these numbers, right? But are there times where we should be estimating?
[00:35:31] Should we take No Estimates at its core and just never ever estimate again? Again, I have a little bit of empathy for our stakeholder friend, right? Typically, they're trying to determine, should I undertake this software development initiative or not? And more than likely, they're probably communicating this up to a C level person or a board of directors or something like that, right?
[00:35:54] This is a big decision. Oftentimes, the investment itself may be 500, 000 million. The return on that investment may be well under the seven figures. This is a big decision. So when they're trying to understand, okay, what's our investment going to be, what kind of data do we provide them with, right?
[00:36:14] If they say, look, I estimate this will make 500 grand in three months if we release this. That's our return. Developers go through a first pass of estimation. They're like, hey, stakeholder, it could take $150K to build this. The stakeholder's like, yep, let's get rolling. They didn't hear the second part of this, right?
[00:36:32] Which is, it could also take $780,000. We have no idea how to estimate, by the way. That variance is still 1 to 8, right? Remember? So there's a pretty big range there. As a consulting company, we've tried to bid in this manner against people who are just hungry for business in the past.
[00:36:49] They all bid at $150K and we bid at $600K, and the client's like, those guys are a whole bunch of idiots, they don't know what they're doing. Of course, then the project actually comes in at $600K, but by then it's too late, right? So we can't leave them with just this data. With just this data, they're going to make bad decisions.
[00:37:06] They're going to outsource it as a fixed bid contract to some unscrupulous vendor, right? Or even worse, they'll actually bring it to your team, under the assumption that you can get it done for $150K. And if you can't, they're just gonna beat on you until you get it done for somewhere under $500K. Right?
[00:37:23] This is the cause of all this pressure. We don't do a good job of setting expectations with them at the early part of the project, and it winds up killing us in the later part. We need estimation. I'm not saying at the sprint level, get rid of that completely. But stakeholders need data to make decisions.
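The asymmetry the speaker describes can be sketched in a few lines. The figures come from the talk's own example; framing the 500 grand as a quarterly return is my assumption:

```ruby
low_estimate  = 150_000  # the number the stakeholder heard
high_estimate = 780_000  # the number they didn't
quarterly_return = 500_000

# With only the low bound, the decision looks obvious...
puts quarterly_return > low_estimate   # prints true
# ...but at the high bound, the first quarter's return doesn't even cover the build.
puts quarterly_return > high_estimate  # prints false
```

Handing over only the low bound isn't data, it's an anchor, and the team inherits the gap between the two numbers as schedule pressure.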
[00:37:43] So we need to get better at it. That's on us. All right, for us to get better at this, let's think a little bit about why this is so difficult as a practice. Why is it that we have a one-to-eight-times range? I'd say first and foremost, we are inherently optimistic as developers, right? We try to pull from historical data, and we filter out a lot of noise and bad experiences when we do it.
[00:38:10] So I think of that one time it took me like four hours to build a pretty complex page in Rails, and I'm like, yep, it takes four hours. I forget about the time it took me like 34 hours to build a moderately complex page in Rails, because I didn't know what I was doing. I fixate on that one shining moment where I was actually productive.
[00:38:27] Further, when we do this, we tend to aggregate all these things, right? So we have, all right, it took me four hours to do this, it took me eight hours to do this. And somebody says, okay, let's throw these in a spreadsheet and we'll add them all up, and okay, it takes 4,000 hours for this project to be done.
[00:38:44] So they say, okay, 4,000 hours. That means we'll put four developers on it for a happy year and we'll be done. And they're thinking the whole time, look, there's maybe some distractions and there's a little ramp up, so that will consume 5 percent of the time, but for the most part, developers will get it done, right?
[00:39:00] This is not what the day of any developer I know looks like, right? Their days look more like this, right? Especially on bigger teams, you're dealing with meetings constantly because nobody's on the same page, so you've got to communicate. God forbid you're using Cucumber, because that means you're waiting four hours every day on something to finish when you made a minor change.
[00:39:18] Ten percent, who knows what. This is much more accurate, but we don't factor this in when we're estimating. We try to be very optimistic with it instead. Additionally, estimates are hard because nothing is the same. Our environment that we're working within is constantly changing. Not just technology, which, like Andy said this morning, there's probably three different MVC frameworks that were created in JavaScript today.
[00:39:42] We'll be using another one here in a week, right? They aren't all the same. When I estimate something in Angular, doing the same thing in React may take more or less time. I have no idea. Let alone our team, right? Two of us working together may be more productive than four of us. We don't know when we're estimating what the team's gonna be like, so we can't predict that.
[00:40:02] We can't use historical data to predict it.
[00:40:08] Alright, third, we're really pretty horrible at trying to understand what the hell a business wants at the start of the project. And a lot of that's on them, right? They can't articulate what the hell they want either. So requirements suck. I literally had this on a fixed bid government RFP, right? This was like item 176 in this giant list of 500 things they wanted built.
[00:40:30] And it was at the same level. Like, item 175 was the ability to calculate sales tax. I know what that means. The ability to report on data? What is that? That could be $3,000 or $350,000. I have no idea how to respond to this, right? So we have to coach them into actually voicing what they want, what problems they want solved, and what they think are the solutions, right?
[00:40:53] Instead, for us to get better at estimation, again, I'm coming back to cycle time, because this is a more accurate predictor of the future, right? Especially if we start tracking this at a team level. So say we know that when this team of four people was working on an epic, say Facebook integration or something like that, it took them about 11 days.
[00:41:17] And then when they were doing like a Twitter integration, it took them like 17 days. Now when they were doing like a Salesforce integration, it took them 24 days. Then when we get some random third party API integration, and we know this team's going to be working on it in the future, we can say, look, it's probably somewhere between 11 and 24.
[00:41:32] It's new, so we'll buffer it. Our average was 17; we'll call it 20 days. That's a lot more accurate than what we would have been in the past. Because I guarantee developers are like, yep, took us 11 days last time, we'll do it in 11 days, not a problem. Or more likely they're thinking, hey, it was eight hours of effort spread across 11 days.
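The buffering heuristic just described could be sketched like this. The cycle times are the talk's numbers; the three-day buffer for unfamiliar work is my assumption, not a rule from the talk:

```ruby
# Past cycle times, in days, for this team's third-party integrations.
past_cycle_times = [11, 17, 24] # Facebook, Twitter, Salesforce

average = (past_cycle_times.sum / past_cycle_times.size.to_f).round
buffer  = 3 # padding because the next integration is brand new (assumed)

estimate = average + buffer
puts "Likely range: #{past_cycle_times.min}-#{past_cycle_times.max} days"
puts "Buffered estimate: #{estimate} days" # prints "Buffered estimate: 20 days"
```

Note this predicts elapsed days for the team, not effort hours for one developer, which is exactly the distinction the speaker is drawing.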
[00:41:50] So track team level cycle time. The caveat to that though, is if your team changes drastically, throw those numbers out the window. And this sounds very unagile to put up on a screen at an agile conference, right? Like resist change. Change is constant. I get it. I've sat through two team forming presentations today.
[00:42:10] I know that we change teams often. Don't get me wrong. I'm just saying resist it if you can. If you want accurate and predictable estimates, try to keep a core team together and use their past data to predict future results. If you can't do that, you'll have to make some sacrifices. Some of those sacrifices may be in the form of technical spikes.
[00:42:32] So oftentimes we're presented with, hey, estimate this project, we may want to use Test Double for it. We say, great, it's between $150K and $380K. And they're like, you have no idea what you're doing. And I'm like, yes, correct. Here are the areas where we have no idea what we're doing, right? You say you want us to integrate to this third party API that I've never seen.
[00:42:52] Can we spend a week and maybe $10,000 of your budget to go see what that's like? And maybe we end up building the start of a solution there that you can just pick up and run with. Or maybe it's throwaway, I don't know. Or maybe we realize that, hey, this is actually closer to $380K, and we shouldn't do this project.
[00:43:09] Leverage these technical spikes to get more data and mitigate that risk. Further, we need to adjust the fidelity of these solutions, all right? This is for our stakeholders. Our stakeholders have to be comfortable saying, the project is going to take longer than what we thought, so we need to ramp down the fidelity of these solutions.
[00:43:29] You can't just continue to think, hey, we're going to get everything, right? And it's all going to have the same amount of polish that we thought it would when we originally set out to do this project. You can't. You've got to ratchet it down.
[00:43:43] All right. So this may leave you thinking like, are we really just running into a lack of trust? Is that our core issue? Project managers don't trust that developers are going to work with any level of urgency. Stakeholders don't trust developers to come up with an estimate that doesn't have 500 grand between the upper and lower bounds.
[00:44:03] Developers don't trust stakeholders or project managers to not beat them on the head with estimates or milestones. Of course, it's a lack of trust. That is definitely the problem that we have, but have we earned it? Probably not, right? The reason that stakeholders don't trust us to provide estimates is because we provide horrible estimates and then we consistently miss them, right?
[00:44:30] Devs don't trust the project managers and stakeholders to not beat on them with random dates and commitments, because it happens all the time, right? If we want to break out of this cycle, in my opinion, we have to get better at a couple of things: working with a level of discipline, and understanding accountability across these groups.
[00:44:46] For each one of these personas, our stakeholder friends, they need to be able to communicate the objectives of the software, hopefully in a way that's meaningful for developers, that's intrinsically motivating. They need to be able to articulate what the constraints are and why they're there. Then those numbers have meaning, those dates have meaning, and it will affect the development teams.
[00:45:10] And they need to be responsible for adjusting fidelity down when we start to run into issues. Alright, project managers. Communication is your primary goal. This hasn't wavered in my mind since I first read it in the PMBOK. 90 percent of project management is communication. 10 percent is not beating developers over the head with estimates.
[00:45:32] You're responsible for clearing impediments from this team and making sure this team is organized in such a way that they're the most productive version of themselves. And if that means shrinking the team down to four developers instead of eight, do it. Alright, developers, we have the most work to do here.
[00:45:48] We have to get better at estimation. We can't just hide behind no estimates. It's going to do us a disservice, and it's going to just keep this cycle of timeline pressure rolling. We're also responsible for quality. We're the only ones who have our hands on the keyboard affecting the end product. We have to have a level of discipline and resist that urge to go faster at the sake of quality.
[00:46:11] I think if we do all these things, the conversation becomes more about how predictable we are instead of how fast we are. Because it doesn't matter how fast we are if we're able to estimate at the beginning of a project that, hey, this is going to be done in six months, and then we get it done within five to six months.
[00:46:30] All right, that's all I have. I'm standing between you and free drinks, so I'd be happy to answer questions, but I won't be ashamed if you just walk out of here. Thank you very much for your time.