There are murmurs that a type system will be coming to Ruby. Before that happens, you should get informed about what is hot in the current type system market.
Haskell is known for its type system, but instead of describing it with dense language, let’s take a journey through code examples. Throughout our trip, we will avoid scary buzzwords like “monad” and “algebraic data type” because, honestly, what good is a formal definition when you don’t understand the power behind the concept?
We will compare solutions in Ruby to solutions in Haskell, and each stop on our trip will introduce a new mind-blowing paradigm brought to you by Haskell’s type system.
You will learn about type systems through a non-threatening story, and you'll understand the value a type system can bring to your code. Key stops on our route will be “forget about nil,” “declaratively model your domain,” and “allow your compiler to drive your design.”
The video above was recorded at CodeMash on January 12, 2017.
Transcript of the talk
[00:00:00] Alright, how's everybody doing today? I'm Sam Jones, and I'm on Twitter, at Sam Jonester. Go ahead and follow me, send some tweets my way. I would love to hear from you. So I live in Philadelphia, but I'm a recent transplant from Cleveland. So it's nice to see some familiar faces from Cleveland out in the audience today.
[00:00:20] Let's catch up after the talk. So I'm passionate about code design, outside in testing, and I love going point free. Hopefully that'll make more sense as the talk progresses. When I'm not coding, I like to, I like to cook, and I cycle. I'm also a husband, and I have a new 8 month old daughter. And I work for Test Double.
[00:00:41] Test Double is a group of awesome people who write nice code. And if you'd like to write nice code with us, or if you'd like our help writing nice code, we're on Twitter at Test Double, or send us an email at hello@testdouble.com, or go talk to Todd or me after the talk. So I told you I was a new father, and I've got this gigantic projector here, everybody, this is Quinn. This is my daughter. She was three months in this picture, so she's a little bit bigger now, but she's still just as cute. So let's get started. So there are rumors that Ruby is getting a type system. So everybody's getting upset about it because Ruby is a dynamic language. And because it's dynamic, it allows you to do some really crazy things like completely redefine your system at runtime.
[00:01:29] But instead of talking about Ruby's new type system, what it's going to look like, and how it's going to function, let's ask ourselves a more important question. Is there anything positive that can be gained by adding a type system to our coding? So it turns out there is, which is good for me because otherwise I wouldn't have a talk.
[00:01:46] To illustrate that, I'm going to hopefully teach four valuable lessons that can be learned from a type system. I'm going to use Haskell as the example language. You might have heard that Haskell is a bunch of buzzwords, that it's an intimidating language to approach, that there are a lot of really weird symbols, and that the documentation is dense and hard to absorb.
[00:02:08] Or you might have heard that Haskell is known as a highfalutin language that's only accessible to programmers up in their secluded ivory towers. And some people even describe writing Haskell as writing a mathematical equation. And who here didn't do that well in math in college? I'm one of you too. So all of those opinions give Haskell a bad name.
[00:02:28] And it makes you wonder why I'm even here to talk to you about it today. But, while some of that might be true, Haskell also has a renowned type system. And, when I learned Haskell, and appreciated the value that type system added to my day to day coding, it was an illuminating experience. Today, I'm gonna try to light that bulb over your head too, and make your face look like Quinn's.
[00:02:53] I'm not going to try to convince you to drop Ruby or whatever language you're programming in for Haskell. I haven't, and I don't think you should either. Instead, we're just going to look at some of the programming paradigms brought to you by Haskell's type system and examine how they can improve the code that you're writing today.
[00:03:14] So we're going to do that by looking at four concepts. First, we can forget about nil. It adds a hidden implicit contract, it causes runtime errors, and it adds a lot of complexity to the code that you're writing. Next, we'll declaratively model our domain, because when we say what our domain is by using types, we can make transitions from one domain object to the next much more clear.
[00:03:37] And then finally, we'll compose larger things from smaller pieces and name them with domain concepts. This is going to make our code more cleanly describe what it's doing. After we've learned those three concepts, we'll circle back and look at them again in the last section, letting the compiler drive.
[00:03:55] We're going to work top down on a little example problem. First we'll define an API from the outside, then we'll use a type signature and a test to design the rest of the code. We're going to explore the first three topics in more detail in this section. Awesome. So let's get started. So to get started, we're going to play a game.
[00:04:20] And the game is called, What's Being Returned. So we can start with a really simple function here, it's called divide, it takes a top and a bottom, and divides the two values. It's simple, and that lets us focus on a problem that you see in a lot of type systems, without being bogged down by any complex algorithms.
[00:04:38] What does this return? That's easy, right? If we give it two numbers, we get back a number. But, what happens if the bottom is zero? We raise a zero division error. Or if any of the inputs coming into the function are nil, we raise two separate errors. So we can see how this causes a lot of problems, right?
[00:04:59] All over the code, we're going to need to check the types of the inputs going into this function or wrap it to ensure that no errors are thrown by the function. So how can we prevent some of those errors? We can be a little bit more safe and prevent the division by zero error. We can ensure that no exceptions are being thrown, and just return nil if the input is invalid.
[00:05:22] And we can do that with a guard clause. So a guard clause is pretty idiomatic in Ruby. You've probably seen these a bunch of times, or even written a few yourself. But what are we really saying when we write that guard clause? I think we're saying that divide maybe returns that Fixnum value. So why don't we just say that instead?
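A minimal Ruby sketch of the guard-clause divide described here (the float conversion is my own choice, so the result matches the fractional division in the Haskell version that follows):

```ruby
# Guard clause: bail out with nil instead of raising ZeroDivisionError.
def divide(top, bottom)
  return nil if bottom.zero?
  top.to_f / bottom
end

divide(5, 2)  # => 2.5
divide(5, 0)  # => nil
```

Callers no longer get an exception, but they do now get a value that is sometimes nil — which is exactly the hidden contract the talk goes on to examine.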
[00:05:44] So this is a Haskell type signature for that same divide function. A type signature says what kinds of values a function accepts and what kinds it produces. And you can read this one something like: divide has the type a to a to Maybe a, where a is any fractional number. So it accepts two fractional numbers and maybe produces a third.
[00:06:06] In this context you can think of maybe as a box that holds either just a value or nothing at all. And we can implement the same function with the guard clause in Haskell. So if the divide, if we divide and the bottom is zero, then we just return nothing. Otherwise, we return just the top divided by the bottom.
[00:06:27] That's pretty elegant, right? We've used types to provide more valuable information about this function. The type system lets us encode the knowledge that divide returns a Maybe value. It tells the callers of this function explicitly what to expect as a response, and that it's going to be an optional value.
[00:06:48] But if we go back to that Ruby example, the last two lines indicate that we're missing a requirement. In addition to producing a maybe value, divide should also accept maybe values. That's not a big deal, right? We have this awesome new maybe thing. Let's just go ahead and use it and see what happens.
[00:07:06] That would have been too easy, wouldn't it? This code is really hard to understand. It's hard to tell what's happening. It's hard to see which pattern up here is the main important path through the function. There's just way too much noise to really understand what's going on. When we're writing in languages like Ruby, there's a phrase for this concept.
[00:07:23] It's called the Pyramid of Doom, and we already know we don't want this polluting our codebases, right? If we saw this, we'd probably run out that door screaming. And it adds a lot of complexity. It makes the code difficult to debug, difficult to understand, and it makes it difficult to test.
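For context, the pyramid of doom the talk is describing looks something like this in Ruby (a sketch of my own, not the speaker's slide): each nil check adds another level of nesting.

```ruby
# Accepting possibly-nil inputs by hand: every check adds a nesting level.
def divide(top, bottom)
  if top
    if bottom
      if bottom.zero?
        nil
      else
        top.to_f / bottom
      end
    end
  end
end

divide(nil, 2)  # => nil
divide(5, 2)    # => 2.5
```

The main path through the function (the actual division) is buried three levels deep, which is the noise problem being described.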
[00:07:41] Luckily, when we're writing Haskell, we're able to program as if that maybe value just doesn't exist at all. So here are a few new fancy symbols. They basically just allow us to ignore the fact that we're dealing with that maybe value. And we can read this function something like, The function divide applied to the top and the bottom if they exist.
[00:08:05] That last piece is really important to this, right? It lets us ignore that the Maybes are there and just focus on the code. Focus on the function that we'd like to apply those values to. So if the top and the bottom are both Just values, then we execute divide and return the result. Otherwise, that nothingness that comes into the function passes right on through.
[00:08:26] And my favorite part about this is that it's built into the standard library. It's a language feature in Haskell, and that's really cool. So let's try out that new function. If we provide Just 5 and Just 2, we're going to get back Just 2.5, just as we would expect. If we provide zero as the bottom, we hit that guard clause, and we get back Nothing.
[00:08:53] And when we provide nothing for either of the inputs, if we provide invalid inputs, that special new syntax lets that nothingness pass right on through the function. And we've done a very important thing here. We've defined a consistent return type, so there's no more guessing, and there's no more hoping that we'll get back a value that we can use, and a value that won't raise an exception later on down the process.
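One way to approximate this pass-the-nothing-through behavior in plain Ruby is a small lifting helper. This is a sketch of my own — `lift` is not a standard Ruby method, just an illustration of the idea behind Haskell's applicative operators:

```ruby
# Apply the block only when every argument is present; otherwise let
# the nil pass straight through, the way Nothing flows through in Haskell.
def lift(*args)
  return nil if args.any?(&:nil?)
  yield(*args)
end

def divide(top, bottom)
  lift(top, bottom) { |t, b| b.zero? ? nil : t.to_f / b }
end

divide(5, 2)    # => 2.5
divide(nil, 2)  # => nil
divide(5, 0)    # => nil
```

The divide body stays focused on division; the "if they exist" bookkeeping lives in one place instead of being repeated at every call site.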
[00:09:16] We know exactly what we're getting back, and that it's going to be easy to deal with. So one thing you may have noticed, if you're one of the more astute listeners in this group, is that that Maybe will go viral in the code base. If we start accepting Maybes and returning Maybes in our functions, it's going to go everywhere, all the way up and down the stack. But I actually think that's a good thing. I think that gives you a contract that states that the value is optional at that point in time. There's no more guessing and no more hoping.
[00:09:47] This contract allows us to handle the Maybe at the edges of the application, like when we're writing values to a database or when we're presenting them to a user. It allows the complexity around handling non-existent values to be defined where I think it should be. We can handle our errors and create error messages where they're being displayed, instead of at the bottom of the stack where they'd be too generic to mean anything to the end user.
[00:10:11] And that's a really powerful concept. Alright, so there was a lot there. Let's recap and understand how we can use this concept to write better Ruby. When we strive to return a single type in our code, we can eliminate any hidden implicit contracts, and we make interacting with our APIs a lot more pleasant.
[00:10:32] We reduce complexity downstream, where consumers of the APIs that we build have no choice about what types of objects we're providing to them. And when we push the complexity of handling missing information to the edges of our application, we put control around error handling and error messaging where it should be.
[00:10:50] This allows the presenters of the error message to define the error message that makes the most sense to them.
[00:10:58] Awesome, so we made it through forget about nil. That was a pretty cool concept. So let's look at the next concept, declaratively modeling our domain.
[00:11:11] So we saw how to use a maybe data type, that it's either nothing or just a value. Here is the real Haskell source code for defining that data type. Nice and elegant, just like the function that we looked at before. So this is pretty cool. It makes it really simple to say that a type is one thing or something else.
[00:11:29] So how can we use that in real life? To figure that out, let's look at a domain that we're probably all familiar with, customers. Customers make us money, so we all know about them, right? We have a customer and the customer is either confirmed or unconfirmed. If the customer is unconfirmed, then they have the login information that they've provided and the confirmation token to turn them into a confirmed customer.
But if they're already confirmed, they just need the login information. That confirmation token is no longer relevant. We don't need to worry about it in our system. So how could we model this first in Ruby? So here's the simplest thing that we could do. We can use a plain Ruby object, and we can define attr_accessors for both the login information and the confirmation token.
[00:12:13] This allows us to set that information on the customer to store it later. And to tell if a customer has already been confirmed, we can just maybe accept a confirmation date, right? When they've been confirmed, we set this date, and then we have a flag that tests for the presence of that date. This is pretty standard.
[00:12:29] You've probably seen ActiveModel records like this before, or the equivalent in other languages. But when we write code like this, we lose control. We lose control of how the customer is created. We just accept the values, which are set elsewhere in the code. We lose control of how the customer is toggled to a confirmed state.
[00:12:49] Okay. Our customer really has no idea what it even means to be a confirmed customer at this point. And we lose control of how the confirmation token is removed. All right, the logic to do all of this stuff is spread throughout the code base. Some of the logic might be in like a login controller and the rest in like a user model, and who knows where to start looking when it changes.
[00:13:07] And it always does change, right? So to combat this, we act defensively. So please don't try to read this. I know you can't see it in the back, but that's the point. All of this code here is basically just to protect the creation of a customer and the transition from an unconfirmed to a confirmed customer.
[00:13:24] And the reason why I'm showing it to you, and I know that you aren't able to see it, is because I want to make the point that it's hard to read, it's hard to do correctly, and it's probably inconsistent from one class to the next because it's so hard. But we do this because it adds a lot of value.
[00:13:41] We've centralized the logic of creating a customer and toggling the types of customers into one place to change in the codebase. And we've clearly defined the transitions from one domain concept to another. But what if this kind of safety was free? If it was part of the language, and our type system helped us accomplish it?
[00:14:02] That would be pretty cool, right? We can accomplish this in Haskell. First though, let's refresh our memories. So we have a customer that is confirmed or unconfirmed. Keyword or. And the two types of customers hold different types of information. We can implement this with a declarative domain in Haskell by creating a data type customer and saying that the customer is confirmed or unconfirmed.
[00:14:26] And only the relevant data for that type of customer can be defined in this data type. And each piece of information within that type documents what it is and basically how it's being used. So that final Ruby solution that you weren't able to read protected the creation of a customer object. We can do that in Haskell as well.
[00:14:46] We can create a function for making a customer, and all we have to do is put it right next to the customer constructors. Here we're declaring that we can make a new customer by defining an unconfirmed customer with a generated token. And since that unconfirmed customer still accepts login information, so does this function.
[00:15:07] Nice and easy. Next thing that we did was control the state change from an unconfirmed customer to a confirmed customer. So we can do that in Haskell too. First we'll define a type signature that accepts a customer and produces a new customer that has already been confirmed. If the customer is unconfirmed, we can destructure the unconfirmed customer to get the login information.
[00:15:36] And then we can use that login information to create a new confirmed customer. And if the customer is already confirmed, we just return it. That's what the function id does. It just returns the value that you provided. But why do we even care about this path through the application? Haskell forces us to handle all types.
[00:15:55] We have defined a function that accepts a customer and produces a new customer. And the compiler forces us to handle both types of customers in our code. This is awesome, because when we add a new type of customer for example, an invalid customer that no longer wants to be billed, our compiler tells us everywhere in the code base that we need to go add a path to handle this new type of customer.
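A rough Ruby approximation of this or-type might use two distinct classes, with creation and the confirm transition centralized in one place. This is a sketch of my own — the class names and the use of SecureRandom for the token are illustrative choices, not the talk's slides:

```ruby
require 'securerandom'

# An unconfirmed customer carries login info plus a confirmation token.
class UnconfirmedCustomer
  attr_reader :login_info, :confirmation_token

  def initialize(login_info)
    @login_info = login_info
    @confirmation_token = SecureRandom.hex(8)
  end

  # The one place where the unconfirmed -> confirmed transition lives.
  def confirm
    ConfirmedCustomer.new(login_info)
  end
end

# A confirmed customer no longer carries a token at all.
class ConfirmedCustomer
  attr_reader :login_info

  def initialize(login_info)
    @login_info = login_info
  end

  # Confirming a confirmed customer is a no-op, like Haskell's id.
  def confirm
    self
  end
end
```

Unlike Haskell, nothing in Ruby forces every caller to handle both classes; that exhaustiveness check is exactly what the compiler adds.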
[00:16:20] So let's recap again, and understand how this concept can help us write better Ruby. When we type the objects within our domain, they become richer objects. They gain context about the values that they hold, and how they're being used. When we act defensively and protect the creation of our domain objects, and the transitions between them, we are able to centralize the logic about these changes into just one place in our codebase.
[00:16:46] There's only one place that you need to go look to change this stuff.
[00:16:52] Alright, so we've made it through the first two concepts, forgetting about nil and declaratively modeling our domain. Let's look at the last important concept in this talk: composing larger pieces from smaller things.
[00:17:09] We're going to do that by talking about an operator that you're all familiar with, the dot. We're going to look at how it's used differently in Ruby versus Haskell, and how it's used to do function composition. So to set the story, here's a set of sequential events that we're telling our Ruby interpreter to perform.
[00:17:29] This is a map reduce pattern. It's something that we see all the time. We're taking some data in, performing a few transformations on it, and then returning the results.
We can chain those methods together in Ruby with the dot. Ruby's dot operator is used to send messages to objects. Here, we're using it to send messages to a string and to different enumerable objects, telling them explicitly how to perform the transformations that we care about. This is a lot cleaner, but it's still a sequential list of steps that say how to do each transformation.
[00:18:03] And what happens if the array that we're providing is too large? We'll end up iterating the entire thing three times. First, we'll iterate it to add one, a second time to multiply by three, and a third time to reduce for the sum. We can make this a little bit better by using this fancy method in Ruby's enumerable module called lazy to prevent the processing of the transformations until it's absolutely necessary.
[00:18:27] By waiting to perform the transformations until necessary, we're able to basically group them together to be processed all at once. And you can visualize this as wrapping the more specific transformations into a new function that's composed of those specific transformations. So for each item, when we would like to read the results, all of the transformations get performed at once, and that's a very powerful concept.
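The eager and lazy versions of the kind of map-reduce chain described here might look like this in Ruby (a sketch; the sample data is mine):

```ruby
numbers = [1, 2, 3]

# Eager: each map walks the whole array before the next step runs.
eager = numbers.map { |n| n + 1 }.map { |n| n * 3 }.sum

# Lazy: the transformations are grouped and applied per element,
# only when the sum finally forces evaluation.
lazy = numbers.lazy.map { |n| n + 1 }.map { |n| n * 3 }.sum

eager  # => 27
lazy   # => 27
```

Same result either way; the lazy version just avoids building two intermediate arrays for large inputs.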
[00:18:54] So this is the Haskell equivalent to that same function. We're still doing the same transformations, but in Haskell lazy evaluation is a language feature. And that's why you don't explicitly see a call to any lazy function like you did in the last example. But more importantly, notice what's being used to group those transformations together.
[00:19:15] That dot again. So that dot allows us to compose larger transformations from a set of smaller ones. The only difference between this and that Ruby example now is that this one's backwards. And that's because of math. So this is the proof of function composition. Basically it just means that f composed with g is the same as f applied to the result of g applied to x: (f . g)(x) = f(g(x)).
[00:19:41] And notice the dot? That's why the composition operator in Haskell is a dot as well. And why it's backwards. So that's cool because it allows us to combine the composition of three smaller functions with the compiler. And when we do that, awesome things can happen. The Haskell compiler actually does glue those functions together with a process that's called lazy partial evaluation.
[00:20:05] And this is somewhat possible because of the type safety guarantee. Our compiler tells us if the things coming from one transformation don't have the right shape to go to the next transformation. This is also cool because function composition allows us to create a pipeline by gluing together lots of smaller transformations.
[00:20:23] The new composition now takes data and sends it through the entire pipeline. And it's also cool because we've used a technique that's called partial application. So that means we can supply some of the arguments to a function, and get back a new function that later accepts the rest of the arguments and eventually, computes the result.
[00:20:42] So this lets us create a new, specific version of a generic function like map. Here we're using it for map. Map typically takes two things. First, it takes the function that tells it how to transform each piece of data. Then it takes the data to perform that transformation on. But we're using it here to create a specific version of that transformation, and to create larger building blocks from those specific pieces.
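Ruby can mimic partial application with Proc#curry. A sketch (the lambda names are mine, not the talk's):

```ruby
# A curried map: supply the transformation now, the data later.
mapper = ->(f, xs) { xs.map(&f) }.curry

# Supplying only the first argument returns a new function
# that still waits for the data, like `map (+1)` in Haskell.
add_one = mapper.(->(n) { n + 1 })

add_one.([1, 2, 3])  # => [2, 3, 4]
```

`add_one` is now a specific building block built from the generic `mapper`, which is the move the talk describes.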
[00:21:11] We've used a domain concept to describe the types in our system. We can expand on that. So let's recap the idea a little bit here in this example as well. So in Haskell, we can use a named function in the composition to say what each specific transformation is, rather than how to explicitly do it. And I really don't think code gets much prettier than this.
[00:21:31] So since we've named the transformations, the composition now becomes sum of times 3 of plus 1. The composition operator just gets out of the way and lets us focus on the specific transformations that are happening in this pipeline. And it basically just reads like English, and how cool is that?
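Ruby's Proc#<< composes in the same right-to-left order as Haskell's dot, so the named-pipeline idea translates fairly directly (a sketch; the lambda names are mine):

```ruby
plus_1  = ->(xs) { xs.map { |n| n + 1 } }
times_3 = ->(xs) { xs.map { |n| n * 3 } }
sum     = ->(xs) { xs.sum }

# Reads like the Haskell composition: sum . times_3 . plus_1
pipeline = sum << times_3 << plus_1

pipeline.([1, 2, 3])  # => 27
```

(Proc#>> composes left-to-right instead, if you find that order more natural to read.)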
[00:21:50] So let's recap and look at how we can use this concept to write better Ruby code. When we use domain concepts as names, our anonymous blocks can become a set of well-named transformations. And the code basically just reads like English. And that's really awesome. We're able to tell at just a glance exactly what's happening.
[00:22:08] And then when we break the problem up into the composition of little pieces, we can influence the design of our entire system. And this is going to become very important when we go into the next section and explore top down TDD with Haskell. All right. So we've made it through the first three important concepts.
Let's look at how we would use them in a real-life example and let the compiler drive us through the implementation of a small exercise.
[00:22:40] We're going to start at the top of a very small application by defining an acceptance test with a type signature that describes how the application should function in a happy-path, high-level type of situation. And then next we're going to let the compiler, with the help of our tests, drive us through the implementation of that code.
[00:23:00] So who remembers this song? It's been stuck in my head for about eight months, so it's the perfect example for me in this talk. We're going to make trigrams with the phrase "row, row, row your boat." A trigram is just a two-word phrase matched to a list of words that come after the phrase. For example, if we look at the phrase "row row," the words that come after that phrase are "row" and "your."
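As a Ruby sketch of that idea (the method name echoes the talk's coinage; the implementation is my own, leaning on each_cons, which comes up again later in the talk):

```ruby
# Map each two-word phrase to the list of words that follow it.
def trigramerate(text)
  text.split
      .each_cons(3)
      .with_object(Hash.new { |h, k| h[k] = [] }) do |(a, b, c), acc|
        acc[[a, b]] << c
      end
end

trigramerate("row row row your boat")
# => {["row", "row"]=>["row", "your"], ["row", "your"]=>["boat"]}
```

Using a hash gives each two-word phrase exactly one key, which is the shape requirement the talk runs into shortly.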
[00:23:27] So we can start by defining a test that describes how we would like to interact with the outside of the API. In Haskell, there's a testing framework called Hspec. It gives us something that's similar to RSpec syntax, and whether you love it or you hate it, RSpec does allow us to gather our thoughts and describe the functionality in sentences, in plain English.
[00:23:47] Hspec gives us this ability, and it doesn't have a lot of the more controversial features that you see in RSpec. So it's perfect if you're a hater too. So here we're writing a test that defines how we would like to interact with the entry point into our application.
[00:24:04] This is a powerful concept. It allows us to create an API by first documenting exactly how we would like to use it. We can iterate on the design while it's cheapest to do. We haven't built anything yet, so there's absolutely no cost associated with tweaking that public API until it's perfect. We can write a test to match that first description that I gave you for trigrammation, but we're missing a requirement.
Because we just copy-pasted this, the shape of the data being returned doesn't describe all of the requirements. It doesn't describe that each phrase should be a key that is only present in the results once. So we can fix that by defining a better type.
[00:24:49] So there we go. We can use a map instead. It gives us the shape that we're after, so we can just update our test to use it. And it's a good thing that we hadn't implemented this yet, or we might be looking at a painful refactor. We made the change while it was cheap. We defined the API that we wished that we were interacting with, rather than fighting with something that's awkward.
[00:25:10] So the next step is to listen to the compiler. So the compiler tells us that the function doesn't exist. So we can listen, and create a type signature for this function. We can use the map that we just looked at in our tests, but that creates a situation that's called primitive obsession. Primitive data types don't really describe the values very well, and they don't tell us how the function uses those types.
Instead, we'll create a type alias to an already existing data type. So we can create a new type called Trigrams, and it allows us to give more context about what this function is doing. So now trigramerate turns a string into trigrams. That's pretty powerful, right? Our type signature describes what the function does, and the signature outlines what's happening in this function.
[00:26:00] So the next thing that the compiler tells us to do, now that we have a type signature, is to go ahead and implement the function. So we learned how to compose functionality from small, named concepts. We can apply that here to define the steps that need to be taken to trigramerate a string. We can name the actions that we would like to perform before they exist.
[00:26:22] And this allows us to experiment with dividing the responsibilities before they're even written and before they're awkward to use. Now that we've divided the responsibilities up, we can let the compiler tell us to implement each of these smaller pieces. We'll listen by first defining type signatures for them.
[00:26:40] We can define type signatures to ensure that the transformations work together. We can use the compiler as a sounding board to experiment with the way that these components interact with each other. Once again, defining the API between components while it's still cheapest to change. So we made sure that our small pieces wire together properly, just by defining types for the functions, but there's still something wrong.
[00:27:05] Just like in the last step, the types were confusing and hard to read, and they don't describe the transformations very well. I wouldn't even want to try to read them out loud to you if they were that confusing. So this is just another example of primitive obsession. And we can fix that with the same strategy.
[00:27:22] We can define type aliases for those specific pieces of data. And this is much better. Now our type signatures basically document the purpose of our code. And we've once again circled back to the concept of declaratively modeling our domain. The types describe the functionality and just reading them gives us an outline of the transformations that are being performed in each step of the process.
[00:27:46] They also provide safety and we know that our components are going to glue together properly because the compiler tells us that they will. In Ruby, I would have had to write a test to ensure that all of my objects are implementing the proper interface. And I love mocking, so I probably would need to ensure that my mocks implement that interface as well.
[00:28:05] Luckily, we have Haskell's compiler to perform that action for us. We can leverage the compiler as a "did our things wire together properly" type of integration test and ensure that one function returns a value that has the right shape for the next function in the transformation. It just can't make sure that the data within that shape is correct.
[00:28:24] We're still going to need to test that ourselves. So let's recap on what we've done so far. First, we created the top level function by composing three smaller domain concepts. And then we defined type signatures for the smaller pieces that declaratively modeled our domain. And now we can let the compiler tell us to implement those functions.
[00:28:49] There's a keyword that we're going to use called undefined. This allows us to basically stub the method body and let the code compile. But if we run this, we're going to see one of the only runtime errors you ever see in Haskell. It just blows up and tells you to go implement that method that you left behind.
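The closest Ruby analog to that undefined keyword might be stubbing a method body with a raise, so the file loads but any call blows up loudly. A sketch (the method name follows the talk's cons-by-three step):

```ruby
# Stub the body so the code loads, but fail loudly at runtime if it's
# ever called -- much like Haskell's `undefined` does.
def cons_by_three(phrase)
  raise NotImplementedError, "cons_by_three: not implemented yet"
end
```

Running the tests against this stub gives you the same "go implement the method you left behind" signal the talk describes.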
[00:29:07] So what's next? We can run our tests and they tell us to go implement cons by three. That's where they blow up. So let's listen. It's driving, after all. So let's start with the same process again. We'll start by describing how we would like to interact with cons by three. And we know that we like the way that it interacts with the other components, so we just need to describe its functionality here.
[00:29:32] If you've ever played enumerable golf in Ruby, there's a method called each_cons that we're going to try to reproduce here. We're going to chunk a phrase into consecutive elements of size three. And here's a failing test that serves as an example of what that means. So our code compiles and our test fails.
[00:29:51] And we've also got a type signature describing the function. So what's left? We just need to fill in the blanks. The last thing that's left to do is just implement the code. We've front loaded all of the really hard work of designing the interactions with our API. And even our private API.
[00:30:06] So that coding our methods basically just becomes a fill-in-the-blank exercise. And this is a commonality that you'll see in all top-down TDD workflows. And if you'd like to learn more about that, I'd love to chat with you after this talk. Or you can check out our blog, testdouble.com.
[00:30:23] Alright, let's get back on track. Just like that last slide might have been a distraction, we have distractions all day in our workplace, right? Somebody comes over and chats with you for five minutes, and you don't know where you left off. This process allows us to jump right back on track. We can just run our tests, and the tests tell us where to go next.
[00:30:42] They tell us that we need to replace that undefined placeholder that we left behind in consByThree. We need to replace it with a real implementation. So once again, we can use domain concepts to break the work up into smaller components, following the same process again as we move down another level in the stack.
[00:30:59] So words is already in the standard library. It splits the string into words, pretty simple. And that just leaves us to implement the generic function here, consBy, that operates on a list of words. The compiler drives us to the next step, telling us that consBy doesn't exist and that we need to go implement it.
[00:31:16] So we'll start by defining a type signature for consBy. This, once again, allows us to experiment with the function, make sure that it interacts with the smaller components the way that we would like it to, and make sure that it's not awkward to use once it's finished. It also lets us make sure that the standard library function words doesn't have a negative impact on our design.
[00:31:35] That being lazy and not implementing it ourselves won't hurt the design of our code. And if the code compiles like this, then we know that our types line up. That the data coming from one piece in the transformation has the right shape to go through the next. So what's next? You might know the answer by now, right?
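To make that type alignment concrete, here is a hedged sketch (my reconstruction, not the slide's exact code) of how the pieces could line up at this stage, with the generic consBy still stubbed out:

```haskell
-- Sketch: the top-level function composed from `words` (standard library)
-- and a generic `consBy` that we haven't implemented yet. If this compiles,
-- the types line up: `words` produces the [String] that `consBy 3` expects.
consByThree :: String -> [[String]]
consByThree = consBy 3 . words

consBy :: Int -> [a] -> [[a]]
consBy = undefined  -- the compiler will drive us here next
```

Getting this to compile is itself the "wiring" check the talk describes: each stage's output shape matches the next stage's input.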
[00:31:53] It's to go ahead and write a test for the next step in the process. So the description looks a little bit familiar, but it's focused on just lists now. We can define a test that describes how consBy should work: if we give it a list of three items to cons by two, we should get back two lists with two items each.
[00:32:14] And we can also define what should happen when we have invalid input. What should happen if we call consBy with a number that's larger than the size of the list? Your first instinct might be to just raise an error. But when we were forgetting about nil at the beginning of the talk, we learned the value in returning a consistent type.
[00:32:38] We could use that awesome new Maybe here, or there are other data types like Maybe that can also hold the cause of an error. But in this case, I think the best thing to do is simply to return a list. The code and the test compile, and when we run the test, we're told what to do next. The test documents what should happen when edge cases pop up, too.
[00:33:01] So the last step in creating this slice of the application is implementing that function, consBy. This slide is mostly just here to show off how cool Haskell looks. To implement consBy, there are no looping constructs and no conditionals. We're using recursion and guards, and that's a bunch of new buzzwords
[00:33:19] that make the syntax hard to understand at first. But there is an important thing happening. We're able to clearly see exactly what's happening when we call this function with different types of inputs. It's easy to see which is the important path through this function and which is just the termination clause for this recursion.
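The slide's code isn't reproduced in the transcript, but a plausible implementation matching the behaviour described (overlapping windows in the spirit of Ruby's each_cons, and an empty list when n is larger than the list) might look like this. Treat it as my sketch, not the speaker's exact code:

```haskell
-- consBy: overlapping chunks of size n, like Ruby's each_cons.
-- Guards make the two paths explicit: the termination clause (not enough
-- elements left, or invalid input) and the important path, which takes a
-- window and recurses one element further along.
consBy :: Int -> [a] -> [[a]]
consBy n xs
  | length xs < n = []                               -- termination / invalid input
  | otherwise     = take n xs : consBy n (tail xs)   -- the important path

consByThree :: String -> [[String]]
consByThree = consBy 3 . words
```

Note how the edge case from earlier falls out for free: calling `consBy 4 [1,2,3]` simply hits the first guard and returns `[]`, the consistent type we decided on.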
[00:33:39] The takeaway here is that Ruby methods should be small, understandable, and focused on the important path through that method. While we've been implementing this vertical slice, we've been playing a game with our compiler and with our tests. We've been asking them to tell us what to do next. And this allows us to zoom in and just focus on what we're implementing.
[00:33:58] We don't need to hold the entire system in our head anymore. We don't need to remember how we want all of the pieces to fit together in the end. And we don't need to think about how this tiny little thing that we're building at the bottom is going to be used by the top of our application. We've defined the public entry point.
[00:34:15] And filled in the blanks with named domain concepts. Then we asked our compiler and our tests to drive us to the next task. We don't have to think about it ourselves. So it drove us down a layer, and we just continued with the same process again. And we kept doing this until we reached the bottom. And this allowed our application to emerge much more naturally.
[00:34:35] So how can we use this concept to write better Ruby? When we build from the top down, we can define the private API that we want to use, rather than the one that we had hoped we would need. This ultimately makes the API much more clean and makes it less awkward to use. When we build abstractions by starting with domain terms, our intent is much more clear, and our code basically just documents itself.
[00:34:57] It reads like English. It's very friendly for bringing new team members into the code base. And you know this is important to me, because my shirt says "nice code." When we have a type system, we can leverage it as an "are things wired together properly?" type of integration test. That guarantees that our functions return a value of the shape that we're expecting in the next step of the transformation.
[00:35:19] Just don't assume that it's going to validate the correctness of the values in that shape, though. We're still going to need to test that ourselves. And then, by using named domain concepts to describe functionality, and giving those concepts a type signature, we allowed our tests to focus on just that specific piece of functionality.
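One way to see the limit of that wiring check is a deliberately wrong sketch (mine, purely for illustration) that still satisfies the compiler:

```haskell
-- This type-checks, so the "is it wired together properly?" integration
-- test passes -- yet the values are wrong for almost every input.
-- Only a real test on the values would catch it.
consBy :: Int -> [a] -> [[a]]
consBy _ _ = []  -- satisfies the signature, ignores the actual behaviour
```

The compiler happily accepts this, which is exactly why the value-level tests still earn their keep.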
[00:35:38] We front loaded all of the hard design work and all of the decisions that we need to make about our code when they were cheapest to make, before the code was implemented, before we would need to change it. And this means that when the time came to implement it, it was basically just fill in the blanks.
[00:35:59] So we made it through all four concepts. I hope I was able to light a bulb over your head and make your face look like Quinn's. If you'd like to learn more about Haskell, here's what I would recommend. The first link here is a book called Maybe Haskell. It teaches you how to think in Haskell, the way I did in this presentation, at a higher level.
[00:36:21] The second is to understand all of the crazy syntax and the weird symbols you saw. That's a book called Learn You a Haskell. And then finally, if you'd just like to try it out yourself, go to haskell.org. So that wraps it up. I really appreciate your attention. And if you take only one thing away from this talk, I hope that it's that learning new languages and new paradigms can make you a better programmer in all of the languages that you write.
[00:36:48] I know that's certainly been the case for me with Haskell. So once again, I'm Sam Jones, and I'm on Twitter, at Sam Jonester. And I know there was a lot packed into these slides. So I'd love to hear any questions that you have. I left enough time for a few questions at the end. So please don't be shy.