The video above was recorded at JazzCon 2018 in New Orleans, Louisiana.
Many teams relish building applications with React these days. Composable components and unidirectional data flow with Redux create a solid foundation for modular applications. By this point, you know higher-order components, selectors, and render props like the back of your hand. But what about testing?
If you neglect to write React tests because you don’t know what to test, which types of tests provide the most value, or how to start testing, then this talk will guide your path. I explain the difference among unit, integration, and end-to-end tests, and provide a general framework for how to test your application.
You will learn about the testing framework Jest, how to test-drive components with Enzyme for design feedback, the magic and trade-offs of snapshot testing, how to unit and integration test your reducer and actions, and how to test your entire application in a browser by simulating user scenarios. Of course, I offer both pros and cons to each type of test, so you and your team can discover what works best for you. You will also learn how to use our nifty testdouble.js library.
If this talk excites you and you want to level up on testing, please reach out to us. We love helping teams grow.
Transcript of the talk
[00:00:00] Good afternoon, everyone. Thank you for joining me. I'm Jeremy Fairbank, known as El Papapoyo on the Twitters, and this is Effective React Testing. Really briefly about me. I work for Test Double as a remote software developer and consultant. We believe that software is broken, and we're here to fix it.
[00:00:17] Our mission is to improve how the world builds software, and we like to do that by not only working alongside your team to help you get stuff done, but also helping your team grow and improve. If you're interested in learning more, you can check us out at testdouble.com. I'm also the author of Programming Elm from Pragmatic Programmers.
[00:00:36] If you're interested in checking this language out, it's very similar to the React and Redux world. It allows you to build applications in a similar manner, but along with that it gives you strong static types, no runtime exceptions, and lots of other cool features. So if you're interested, you can check that out at
[00:00:51] bit.ly slash programming elm. When we think about React applications, we have this grand picture in our heads of these sleek and elegant applications that are just ready to sail the seven seas of production. When in reality, they probably look a little something more like this, at least if we're not testing.
[00:01:08] We're possibly just barely staying afloat, and we don't really have confidence that our application is fully behaving as we expect and that there aren't bugs in there. It raises the question, though: is that really a big deal? Do we need to bother testing? Is it really that important? And what do I get out of it as a developer?
[00:01:26] There's a few things. First and foremost, we want to ensure that our applications behave as expected. We probably have requirements from our product management that say this application needs to do this, that, and the other. So there's benefit to us as a development team in having automated tests that ensure these things work.
[00:01:47] Check, check. We all good? Cool. Can you all hear me on the lavalier now? Awesome. The other thing we get out of testing, though, is useful feedback when it comes to designing our APIs. And I'm not talking about RESTful APIs per se, but the functions and utilities we write, our modules and our components.
[00:02:08] Test-driven design is really useful for helping us figure out what these things need to look like, and I can get useful feedback without even having to write a lot of my source code yet, which is really crucial for refactoring. And speaking of refactoring, when we don't have testing, we can also run into this dumpster fire where, if we try to refactor our applications, things could just break.
[00:02:30] But with test coverage, we have good confidence that if I make these types of changes to my application, I can hopefully catch any bugs that I might introduce. So we get a lot of confidence out of testing. So we've established that testing is good; we should probably be doing this. But what should we be testing when it comes to React?
[00:02:50] For our purposes, we're going to assume that we have a React Redux application. And for our asynchronous side, we're just going to use Redux Thunks. That's typically something simple you can get up and rolling with. So when it comes to testing, though, when we talk about components, the primary thing we're interested in is testing that the component is going to produce the output that we expect.
[00:03:11] And there's some caveats there. If your component is mostly static HTML, that's pretty low value; that's not something you want to focus on testing. But if you're taking certain props, or doing a little bit of logic in your components, then those are useful things that you do need to test.
[00:03:27] On the Redux side of things, your reducer is clearly something that's really important. It's driving how the state changes in your application, so that's going to be something you want to test. And along with that, your actions. You're going to want to test your actions, and you can test your synchronous actions through the reducer.
[00:03:44] Because the reducer takes the state and it takes an action, it's easy to get that stuff tested through your reducer. For asynchronous things, though, like your thunks from Redux Thunk, you're going to want to do a little more to test those, because there can be some important business logic there that it's crucial to make sure is behaving as expected.
[00:04:02] So I have this testing pyramid. It's a modification of Martin Fowler's testing pyramid, where I've added a new level. But there's four types of tests that we're going to be concerned with when it comes to testing our applications. We have snapshot tests, unit tests, integration tests, and end to end tests.
[00:04:19] And the idea with this testing pyramid is that toward the bottom we have tests that are really easy to write, they're going to run really fast, and they're cheap to debug and write. But as we move up the pyramid, tests are going to get slightly slower and slightly more expensive to write. So when we look at these different levels, snapshot tests, I've added this on because we're talking about React, and if you've not done snapshot testing, we'll cover that a little bit more later, but it's a way for us to not have to do so many manual assertions.
[00:04:49] When we test our components, we can just render the component out, save the output, and then compare it later. On unit tests, this is where we're going to want to test our components, or our functions, whatever utilities we have, in isolation. And the idea being, we want to make sure this single thing is behaving as we expect it.
[00:05:08] So we're going to probably isolate all of our dependencies that it depends upon, and we want to focus on, does this thing do exactly what it's supposed to do in isolation? And that's going to be where we incorporate that idea of isolated design feedback. As we move up, we're going to hit this level called integration tests.
[00:05:25] And integration tests can be fuzzy as far as what does that really mean. But the main gist of it is, I want to test when things work together versus a unit test, I'm testing that single thing does its job well. So with integration tests, I could say test the integration between a parent component and its child component.
[00:05:44] Or I could test the whole stack of my components with my Redux store. So it's, again, fuzzy what you want to focus on. And because of that, some of these tests can be fast, they can be slow. And they can be expensive to write or cheap. And they can also end up being harder to debug than unit tests.
[00:06:02] And then finally we have end to end tests. So end to end tests are going to be similar to the integration tests, but we're going to test the full stack. We're going to test our API calls, probably with still some controlled test data. And we're going to test this in our browser, simulating a user interacting with our application.
[00:06:19] So we get the benefit of the integration tests, of seeing things work together, But we also get the benefit of running that in a real browser, like a user might use our application. How do we get started scaling this pyramid of tests? I want to just give one disclaimer. This is a 50 minute talk, so this is not a replacement for a thorough testing workshop.
[00:06:39] There's going to be lots of nuance that I won't be able to get into details with, and there's going to be lots of trade-offs, too, to all the things we're going to talk about. But hopefully I'm going to provide you with a really good framework of ideas for different ways you can approach testing your React applications, so you can get good coverage, including coverage of all the components working together.
[00:07:01] So we're going to use a test framework called Jest. It comes from Facebook, and it's really nice for React applications. There's built-in support for React, it's fast, and it's got an intelligent file watcher for running your tests. Jest comes automatically with Create React App if you're using that for building your applications, or you can add it yourself with a Yarn or npm install.
[00:07:28] In addition to Jest, we're going to use another tool called Enzyme, which provides a lot of testing utilities for React. We add a couple of libraries, Enzyme and an Enzyme adapter, and there are different adapters based on which version of React you're using. So let's start by talking about testing these components.
[00:07:47] So we're going to take a TDD approach. If you're not familiar with TDD, that's test driven development. And the idea is, I can isolate a component, I can design out what it needs to take as props, what kind of output I expect out of it, and I can write tests for it before I even implement it. And that gives us good, useful feedback, that API design feedback that I mentioned.
[00:08:08] So I'm going to switch gears to a demo that we're going to use for our purposes. So I have an application here, it lists 100 of the top jazz albums from the past century. We're at JazzCon, so it feels appropriate to talk about some jazz. So this app just lists these albums out, we can scroll down and see more.
[00:08:27] We can search for a particular artist, like John Coltrane. We can filter albums by liked and disliked albums, and we currently can't like or dislike albums, so we're going to implement that. And then we can also sort albums. So, a pretty typical front-end application. Now the thing we're going to implement is that if I click on an album, I want to be able to display the album title, the artist, and the cover, as well as being able to like or dislike that album and maybe leave a review on the album.
[00:08:56] So we're going to use some test driven development for that. So this is going to be a test file. If you're not familiar with doing any sort of testing in JavaScript, I'm using Jest, like we mentioned, and it uses, underneath the hood, a modification of the Jasmine testing framework. So it uses this syntax with describe, which is a way of grouping tests together.
[00:09:16] So when I say describe here, I'm saying I'm going to create tests for the album component. So I'm going to start off by pulling in the album component. And then down here I have an example album with a title, a list of artists, a cover URL, a rating (I'm using essentially an enum here), and a couple of reviews on it.
[00:09:41] Now the implementation of this component currently is just a functional component that returns null, so nothing's implemented yet. So first off, let's say we want to display the title of the album. I can use an it here, and it is a way for us to write individual tests for the thing that we're testing, which is the album component.
[00:10:09] So I'm going to have to pull in that enzyme tool that I mentioned as well. I'm going to pull in a function called shallow. Now this is somewhat controversial as of late in the JavaScript testing world, but I think there are still benefits to using it, especially when we get into test driven development and kind of isolating our components and making sure they do what we expect on their own.
[00:10:32] So this shallow function I imported, I can call it and give it JSX. So I could say, here's my album component. Now, I want to display information about the album. That's what's important for us to test here: that given certain props, it has certain output that comes out. So let's say we want it to take an album prop.
[00:10:53] And so far, we're just deciding what does the API of this thing need to look like. We haven't even implemented anything yet. And that's one of the benefits we get out of test driven development. So when I call shallow on this, it's going to give me back something called a wrapper, which has a host of methods on it for you to be able to query this rendered component tree.
[00:11:14] Check for certain things in it, and make other types of assertions. So in this case, let's start off by looking for an h1 tag that we want to render. There's a find method on it, and it takes typical CSS-like selectors. So I'm going to say, find the h1 tag. Then there's a text method.
[00:11:34] Say, give me your text. And then I want to make an assertion about this. So with Jest and Jasmine, which it uses underneath, there's an expect function. I can provide a value to it, and then I want to compare that to what I expect to come out of this. So I can say I expect the h1's text to equal the album's title.
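Put together, that first test might look roughly like this sketch; the file paths and the example album data are my own placeholders standing in for the test album the talk describes:

```js
// Album.test.js – a minimal sketch of the first TDD test described above
// (assumes the Enzyme adapter is configured in a Jest setup file)
import React from 'react';
import { shallow } from 'enzyme';
import Album from './Album'; // path is a guess

const album = {
  title: 'Blue Train',          // placeholder data
  artists: ['John Coltrane'],
  coverUrl: 'album.jpg',
  rating: 'LIKED',              // stand-in for the rating enum the talk mentions
  reviews: [],
};

describe('Album', () => {
  it('displays the title of the album', () => {
    const wrapper = shallow(<Album album={album} />);

    // Find the h1, grab its text, and compare it to the album's title
    expect(wrapper.find('h1').text()).toEqual(album.title);
  });
});
```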
[00:12:00] So I can save that, and then over here, I can start up our tests with npm test. And this will start booting up my Jest tests.
[00:12:12] And so far we have a failing test, which is exactly what we would expect. We're just now implementing this, so we're going to see a failing test first. And the goal of TDD is to not only design the component, but when we have a failing test, to then go implement and make the test green, or pass. So with Jest, it's going to let me know that this test failed, and it's going to show me what line.
[00:12:33] It's really useful for showing you in your source code where the failures are happening. So let's implement this. So we're going to change the implementation of album.
[00:12:47] We'll give it a div tag and an h1 tag, and we'll say it's going to have the album title. And then I will pull in the album here as a prop. So I'm just using an ES6 arrow function here and ES6 destructuring for the props. Now when I save this, the watcher will automatically rerun my test for me, which is really nice.
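A minimal version of that implementation, as I understand it from the talk:

```jsx
// Album.js – just enough to make the first test pass
import React from 'react';

const Album = ({ album }) => (
  <div>
    <h1>{album.title}</h1>
  </div>
);

export default Album;
```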
[00:13:09] And now we have a passing test. And so that's the gist of what we're going to do with test driven development. We'll just make an assertion about what this thing should do, have a failing test, make the test pass, And now we've implemented it. So let's do that with something else. Let's say we want to display the artist.
[00:13:29] We'll copy all of that and reuse it. So we'll create a wrapper here with the shallow function. And let's say for this we want it to be an h2 tag. So over here, I'm going to expect the text of that to be both artists with a dash between their names. We have two artists in an array. So I hit save.
[00:13:52] It's going to fail, and we can see the particular line where it's failing,
[00:14:00] and then we can implement that.
[00:14:04] We know it's an array, so we can do album.artists.join, and I'm going to use the join method to put a delimiter between every item, which is a dash in this case. And it ran so fast we didn't even see it, but it is passing, so we're making good progress here. Now the next thing is we want to display the album cover.
[00:14:28] So we're going to need an image tag for that. So I'll just copy this test to reuse it. We'll say displays the album cover. Still have this same wrapper that we want to create. But now we're going to do something different here. Instead of looking for h1 or h2, we're going to use a helper function called containsMatchingElement.
[00:14:52] It's a little wordy, but it allows us to inspect this whole rendered component tree and look for a specific component with some specific props. So I can say I want to look for an image tag whose source will be album.jpg. Now containsMatchingElement returns a boolean, and we want it to return true. So down here in my assertion, instead of doing toEqual a string, we can just say toBe true.
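As a sketch, continuing in the same hypothetical test file, the cover test might look like this:

```js
it('displays the album cover', () => {
  const wrapper = shallow(<Album album={album} />);

  // containsMatchingElement only checks the props we specify (here, src)
  expect(
    wrapper.containsMatchingElement(<img src={album.coverUrl} />)
  ).toBe(true);
});
```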
[00:15:22] I save this, and Prettier auto-formatted that, so it's going to be really long, but that's what we have. And over here we have a failing test, because we need to implement that. So we can come back here, we can add the image element, say its source is album.coverUrl, and we'll leave the alt attribute there for accessibility, so we can indicate that this is just for display purposes.
[00:15:50] And now the test is passing. But the nice thing with containsMatchingElement is I can look in the component tree to see if something was included; I didn't have to specify that the alt attribute was important in this particular case. containsMatchingElement says look for this thing, matching only the props I specify.
[00:16:07] We don't have to worry about any props that I leave out. And so that's good for checking the DOM for things that exist. Now we do have some problems, though, with the current way we've approached writing this. So if I go back to our implementation of the album component, let's say I've made some design changes or something, and I have changed this to an h2 tag.
[00:16:31] If I save this, my tests are failing now. But the album component is still printing out a title and the list of artists. The tests are failing, and that seems like a false negative. So what's the problem here? The issue is we're coupled to our markup.
[00:16:49] And so that's going to set us up for failure when it comes to any style or markup changes we need to make that aren't affecting the real core thing we're interested in: does this thing print out what I expect? So I can fix this in a less coupled way, and in a few ways. One way is, instead of looking for the h1 tag, I can use a contains method, similar to containsMatchingElement.
[00:17:15] And it can also look for strings. So I can say wrapper contains the album title, to be true. And I can do something similar for the artists. Because both of these are failing: we can't find an h1 anymore, it doesn't exist, and this one is failing because there are two h2s and it's looking at the first one instead of the second one,
[00:17:38] so it's getting the album title instead of the artists.
[00:17:45] So we can also say that we expect the wrapper to contain both of the artists. And now we're passing again. So the benefit of this is we can still ensure this component renders what we expect, but we're not deeply coupled to our markup. Now, if we still want to make sure that we are rendering pieces of this component in certain places, we can take an alternative approach.
[00:18:10] So we could add identifiers or anchors in our markup that we use strictly for testing purposes. We could use something like className, but that wouldn't be a good idea, because we're still going to end up with issues if we're using class names for styling purposes and we need to change class names.
[00:18:30] We're going to run into the same issue where our tests fail for the wrong reasons. So I like using a custom attribute like data-test. I could add a data-test here for my album title and for my artists. And now I can go back here and, instead of looking for h1, I can look for my data-test attribute like this.
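Roughly like this sketch; the attribute value is my own, and it assumes the component renders something like `<h2 data-test="album-title">{album.title}</h2>`:

```js
it('displays the album title via a test anchor', () => {
  const wrapper = shallow(<Album album={album} />);

  // Select by the data-test attribute instead of the tag name,
  // so markup changes (h1 vs. h2) don't break the test
  expect(wrapper.find('[data-test="album-title"]').text()).toEqual(album.title);
});
```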
[00:18:56] And if we get rid of the wrapper contains, we still have a passing test, and we can still look for a specific spot without being coupled to our markup. Now the trade-off here is we have to deal with peppering our source code with these custom attributes. So that's one of the trade-offs that you and your team have to decide: is it worth putting that in your source code, or would you rather take the sort of higher-level wrapper contains approach?
[00:19:19] But those are ways we can easily test that this component renders what we expect. We're only focusing on the things with logic: that it takes these props and renders things based on them. Now another thing we mentioned is that we want to be able to like and dislike the album. That's a feature we need to implement.
[00:19:38] So we can start off by writing an "it likes an album" test here. We'll create a
[00:19:47] wrapper again. But now we've got to think: we need some way to dispatch back that we want to rate an album, whether we liked it or disliked it. So we're going to have to add a new prop here. Let's say we call it onRate. So I'm just going to create a no-op function here, just an arrow function that does nothing.
[00:20:07] And now we'll say that the album component should take an onRate prop callback. So anytime we like or dislike the album, we expect that onRate would be called.
[00:20:20] Now there's a couple of ways we can approach this. And the first one I'm going to take is probably one of the more controversial uses of shallow rendering. So I can say wrapper contains, and I'm going to expect it to have a certain child component. And this is one of the things shallow rendering does.
[00:20:38] When I render with shallow, it renders the component, but it does not render any of its children components. It leaves it at one level. It'll render normal markup like div tags, h1 tags, etc., but children components will not get rendered; we'll just have a reference to the child component.
[00:20:56] And in fact, I need to import the component we expect to use, which will be a like icon that's already been implemented.
[00:21:07] So I can say that I expect the wrapper to contain a matching element of the Like icon, with the rating from the test album that we created up top, and that it will also use that onRate callback. So what we're going to test is that we're using the like icon, and that we're passing along the onRate prop callback here.
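Continuing inside that "likes an album" test, something along these lines; the Like component's exact prop names are my guess from the talk's description:

```js
import Like from './Like'; // the already-implemented like icon; path is a guess

// Shallow rendering leaves Like unrendered, so we can assert that Album
// renders it with the props we expect (prop names are assumptions)
wrapper.containsMatchingElement(<Like rating={album.rating} onRate={onRate} />);
```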
[00:21:32] So I can wrap that in expect, and we can expect that to be true. So currently that's failing because it's not implemented. And for time's sake, instead of implementing that as well, I do have a finished version of this component that I'm going to bring over.
[00:21:52] And so the test will pass now. The main difference with this new component is that I have more styling, more class names in here for styling, and then we have the like and dislike buttons right here, as well as being able to leave reviews. And we can see now that this is implemented in our app: we can like the album, we can leave reviews, and so on.
[00:22:16] And our test is also passing because it now finds this like icon here. But, as some people point out, this is very coupled to the implementation now. We are reaching in and saying, I expect you to have this like component here. And some people would argue that's too coupled. That if you want to make changes in there, that could make your test fail.
[00:22:37] And so we could take a different approach here with integration testing. Instead of shallow rendering it, I could use a different function called mount, which also comes from Enzyme. I'm also going to import testdouble, from your friends at Test Double, which is a library for making fakes, mocks, and spies,
[00:23:05] and I'm going to do this in a different way. So I'm going to create a mock, or fake, of my onRate function. We'll comment out the old test.
[00:23:18] So we're still going to do a render here, but we're going to do it with that mount. What's different with mount is that mount is going to render out the children components as well, so we're going to have full access to the whole component tree. And this wrapper then allows me to do interactions. So I could say find
[00:23:39] the icon. In this case, I know it's a Font Awesome icon I'm using, and it's an SVG, so it's going to have a thumbs up. And this is going to come from the like icon, because we're fully rendering it all. So I can find that, and then there's a simulate method, where I can simulate events like a click event.
[00:24:00] So now I'm acting like I am really interacting: I've clicked on the like icon, and my expectation is that I will have called this onRate callback that I provided earlier. I can do that with testdouble by just calling td.verify with how I expected onRate to have been called. In this case, I expect it to be called with the liked rating.
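A sketch of that mount-based test; the SVG class selector and the rating value are assumptions based on what the talk describes (a Font Awesome thumbs-up icon and a rating enum):

```js
import React from 'react';
import { mount } from 'enzyme';
import td from 'testdouble';
import Album from './Album';

it('calls onRate when the like icon is clicked', () => {
  const onRate = td.func('onRate');
  // Reuses the example album defined at the top of the file
  const wrapper = mount(<Album album={album} onRate={onRate} />);

  // The like icon renders a Font Awesome thumbs-up SVG internally;
  // the exact class name here is a guess
  wrapper.find('svg.fa-thumbs-up').simulate('click');

  // Verify the callback was invoked with the "liked" rating value
  td.verify(onRate('LIKED'));
});
```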
[00:24:27] And that test passes. And just to show that it can fail: if I gave it a wrong value, it would fail, because it expected onRate to have been called with the right one.
[00:24:39] So the benefit we got out of this was that we were able to test the behavior of liking an album from the album component, but we also tested the integration with that like icon. So this would be an integration test between two components. But one of the downsides is that we had to reach into the implementation details of the like icon.
[00:25:01] I had to know that there's an SVG element in there that I have to find and click on. And if I have tests for the like icon in addition to my album component, then we're also going to run into redundant test coverage. If I change implementation details of the like icon, not only could its tests fail, but the album tests could fail as well.
[00:25:21] So I have doubly failing tests. That's one of the trade-offs you have to decide: if you want some of these more integrative, realistic-looking tests versus completely isolated tests, you could run into issues like that. So when it comes to just doing TDD and designing these components, I still lean toward this approach,
[00:25:42] where I can at least ensure that it's coordinating its dependencies as I expect, and I don't have to worry about the test failing if I change the implementation of the like icon.
[00:25:54] So that's doing TDD, shallow rendering, and some mount rendering with Enzyme and React. Now, start imagining you have a larger code base with lots and lots of components. And you might argue, as some teams have, that it gets really tedious to test lots and lots of components and do all these manual assertions, that it just slows you down.
[00:26:14] There are different approaches we can take, but there are big trade-offs that we accept if we do. Let's say we now want to test the whole list of albums here. The list will render out an individual sort of thumbnail for each album. And the implementation of it is pretty simple.
[00:26:35] It's just an unordered list; we map over the provided albums and print out a list item with the album list item component. So if we took the shallow rendering approach that we have been using, we could give it some albums, shallowly render the album list, and then basically iterate over each album and verify that the wrapper contains each individual album list item.
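The shallow-rendering version of that test might look roughly like this; AlbumList and AlbumListItem are the names I'm assuming for the components the talk describes:

```js
import React from 'react';
import { shallow } from 'enzyme';
import AlbumList from './AlbumList';         // paths/names are guesses
import AlbumListItem from './AlbumListItem';

it('renders a list item for each album', () => {
  const albums = [{ id: 1, title: 'Blue Train' }, { id: 2, title: 'Kind of Blue' }];
  const wrapper = shallow(<AlbumList albums={albums} />);

  // Assert that each album gets its own AlbumListItem
  albums.forEach(album => {
    expect(
      wrapper.containsMatchingElement(<AlbumListItem album={album} />)
    ).toBe(true);
  });
});
```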
[00:27:01] But again, you could argue that this is very coupled to the implementation and it's a low-value test, and it's really tedious to write lots of tests like these. So a different approach we could take is to import something called renderer from react-test-renderer. What I'm about to show you we can do with Enzyme as well, but I like the output a little bit better with the React Test Renderer.
[00:27:32] And instead of doing this shallow render here, I'm going to call renderer.create, and it's going to give me back an output object. And instead of doing my own manual assertions here with Jest, I can provide that to expect and use something called toMatchSnapshot and create what we call a snapshot test.
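A snapshot-test sketch using react-test-renderer:

```js
import React from 'react';
import renderer from 'react-test-renderer';
import AlbumList from './AlbumList'; // path is a guess

it('renders the album list', () => {
  const albums = [{ id: 1, title: 'Blue Train' }, { id: 2, title: 'Kind of Blue' }];

  // Render the full tree to a serializable object and snapshot it
  const tree = renderer.create(<AlbumList albums={albums} />).toJSON();

  expect(tree).toMatchSnapshot();
});
```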
[00:27:54] And now, over here,
[00:28:00] I can run the tests, and notice here we have this "one snapshot written" message. If I look at my file tree, we have this album list test .snap file that was written. So what this React Test Renderer and toMatchSnapshot did is render out my whole component, including the children components,
[00:28:26] to HTML, and then save that as a file, a snapshot in time of what this component looked like with certain props. So there's the unordered list, there are the individual list items, and inside each of those is the rendered sort of thumbnail for each album. So we see the album title and the album artists, and then there's the other album.
[00:28:49] And so what's nice is that if I rerun my test,
[00:28:57] it'll rerun, but it's not going to write another snapshot, because it now detects I already have a snapshot saved. It's going to make a new rendered output and see that these two things match. So nothing's broken here. But if I go to my component, whether it's the album list, or even if I made a change in the album list item, let's say I did some refactoring and I accidentally deleted the album artist.
[00:29:20] Now I'm going to have a failing test over here, failing on the expect output toMatchSnapshot. And it's going to show a diff: it created a rendered output, compared it to the previous snapshot, and said, hey, the artist is now missing. So I can go through here and verify that, oh yeah, that looks like a problem, and go fix it. Or if I know, yes, this is expected, I made the change, the artist does not need to be displayed here anymore, then I can press U and basically take this new snapshot and update it,
[00:29:54] to say this is now the new valid snapshot for my test. We'll go ahead and replace that and update it again. And so the benefit of snapshots is we can just render everything out and compare it, and if anything ever changes, then we have some output to inspect to know whether there is a true bug or not. But along those lines, what happens if I change this to an h3, for example?
[00:30:23] Now my snapshot's failing, but it's not failing for the right reasons. It's failing just because I changed some markup, possibly for style changes. And so now I have to go inspect this and verify, okay, it's just a style change. We can update it, but in a larger code base, if we have tons of these components, it's the end of the day and you have some failing snapshot tests.
[00:30:44] You're hoping that when you look through all the output, you're not going to miss a real bug. So this can open it up to human error when it comes to inspecting a lot of these snapshots. That's one of the big trade-offs that comes with snapshot testing.
[00:31:01] But there can be benefits to those. So let's briefly review where we're at so far before we keep going forward.
[00:31:12] So recapping the unit tests we did, we saw that they were easy to write. We were able to use Enzyme and do some shallow rendering. They're very fast because we're just testing things in isolation, so they have that as a good benefit. We also got isolated design feedback through test-driven development.
[00:31:28] We can decide: what does this component need to look like? What kind of props is it going to take in? What should it output? And I can make all those expectations up front and then go implement them and get passing tests. Now one of the downsides is, in a larger code base, it can feel tedious to test all your components.
[00:31:46] But I would argue in any code base where you're doing test-driven development, you're going to have to test a lot of things, whether that's functions or components. So it's one of those trade-offs. Now there is also the problem of coupling to the implementation. We saw that with that like icon where we did shallow rendering.
[00:32:04] We didn't fully render, and so we're coupled to the implementation details of our component. That also means we didn't really get good feedback on the interaction between our components. Now for snapshot tests, we saw that we could easily test the component tree. We just fully render it out and we can inspect it to verify that it looks correct.
[00:32:25] These tests are fast as well. And another benefit that I didn't quite highlight is that snapshot tests are good for filling in test gaps very quickly. So if you don't have any tests today and you quickly want to get some test coverage, you could just go through, add snapshot tests to everything and that at least gives you a bit of a regression safety net.
[00:32:45] And in fact, snapshot tests do work fairly well as a regression safety net even when you do have other tests. But some of the downsides: we don't get any design feedback. If we only ever use snapshot tests, then we're not getting feedback on the design of our components and how we use them, and so we don't know if they're hard to use from an API perspective unless we're really doing test-driven development with them.
[00:33:10] We also saw that we get failing tests from markup changes, so we have to deal with looking through the output, which means it can be prone to human error. If we get really tired and we have to look through tons of rendered output, we could miss real bugs. And then finally, for integration tests, we saw the one integration test with the like icon.
[00:33:31] The big benefit is that we can now test interaction between components to verify that they are working correctly together. But the downsides are that we were coupled to the child component's implementation, and we did end up with potential redundant test coverage. Another thing I didn't quite highlight is that it can be slightly harder to debug integration tests.
[00:33:54] Because if you don't have any unit tests and you have an integration test that fails, it might not be immediately evident: is it the integration that's failing, or is an individual unit not behaving correctly? So armed with the feedback we've gotten so far, I would make this slight adjustment to our testing pyramid and have the testing Christmas tree,
[00:34:15] where we're going to go a little light on the snapshot tests. There are some benefits that they provide, like that good regression safety net, but I would not go overboard on just snapshot testing and completely avoiding all these other types of tests. So we've talked about components.
[00:34:33] Let's start looking at how we would test the Redux side of this. We'll look at some unit testing and some integration testing approaches. So we'll return back to our app here.
[00:34:48] And I have some fully implemented tests here that we'll just look through to see different ways to test React. So we can start with the reducer. It's a pure function, which means its output depends solely on its arguments. We take in whatever the existing state is, we take in an action, and we're supposed to return new state.
[00:35:10] So we can start off with a simple test here for when we receive the albums from the API and want to store them in our state. I have a list of fake albums here; in this case, I'm just using two empty objects, which is fine for this test. And up here inside of my describe, I start off with my initial state.
[00:35:29] So I'm going to call my reducer with undefined and an empty object. And if you've not seen that pattern before, that typically should give you your initial state, if you follow the approach of setting the default value of your state with ES6 default parameters to be your initial state.
[00:35:49] So if that's undefined, you'll end up getting your initial state. And the action, because it has no type, will typically hit the default case in your switch statement there. So that'll give you your initial state, and that's something we're going to need for running these tests. Back in my actual test, I can call my reducer with that initial state and use the action creator that I have for receiving a list of albums.
[00:36:15] And then for the assertion, it's pretty straightforward. I just take the result, and I expect it to be equal to the new state. In this instance, I'm using the ES6 spread operator to essentially copy what the previous state was, which was initial state, and then specify the new property I expect to be in here.
[00:36:34] In this case, I expect to have the albums. This remote data is a special type I'm using mainly to allow me to include metadata about whether I'm currently fetching the data, whether it successfully loaded, or whether there was an error loading it. So that's what remote data is doing here for us. So I can save that.
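Roughly what that first reducer test could look like; the remoteData helper's API and the file paths are assumptions on my part:

```js
import reducer from './reducer';
import { receiveAlbums } from './actions';
import * as remoteData from './remoteData'; // hypothetical helper module

describe('reducer', () => {
  // undefined state + an action with no type hits the default case,
  // giving back the initial state
  const initialState = reducer(undefined, {});

  it('stores albums when it receives them', () => {
    const albums = [{}, {}];

    const result = reducer(initialState, receiveAlbums(albums));

    expect(result).toEqual({
      ...initialState,
      albums: remoteData.success(albums), // assumed shape of the remote-data type
    });
  });
});
```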
[00:36:53] Over here, I run the test, and we have a passing test. Now another nice thing about Jest: when I have the test runner going, I can press P and specify part of a file name, and it'll only run tests for that file. So that's another nice little benefit we get out of Jest. So now I'm only running tests for Redux.
[00:37:14] So we see our receives albums test is passing, and we can do another reducer test here, a slightly more complex example. This one allows me to update an album. So when I click on an album here and like or dislike it, it's going to update that album and dispatch that to the Redux store, to my reducer, to update the album inside the list.
[00:37:37] And that's what this test is doing, so it's going to verify that this update here propagates up to here. To do that, we're going to need identifiers now, to know which particular album in the list I want to update. So I'm again going to start off with some sort of initial state. In this case, I'm going to take the initial state and receive the albums, so that our current state already has the albums list in there, because we want to update an album that already exists in the store. So I can have an updated version of this first album with ID 1.
[00:38:12] In this case, I'm going to leave a review, so it's going to now include "Great album" as one of the reviews. So I can call my reducer on that state with my update album action, and the payload on that will be the updated album. And so my expectation from a testing perspective is that now this first album is going to have that review inside of it.
[00:38:34] So in my test, I can use the spread operator to copy that previous state and then say that I now expect albums to have the updated album in the first place, where the old album was, and still have the original second album unchanged. So I can save that, and that is passing. And we could keep going.
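As a sketch in the same describe block (updateAlbum is assumed to come from the same actions module, and exactly how the albums are wrapped by the remote-data type is an assumption):

```js
it('updates an existing album', () => {
  const albums = [{ id: 1, reviews: [] }, { id: 2, reviews: [] }];
  // Build on the previous action so the state already holds the albums
  const state = reducer(initialState, receiveAlbums(albums));
  const updatedAlbum = { id: 1, reviews: ['Great album'] };

  const result = reducer(state, updateAlbum(updatedAlbum));

  expect(result).toEqual({
    ...state,
    // The updated album replaces the old one; the second album is unchanged
    albums: remoteData.success([updatedAlbum, albums[1]]),
  });
});
```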
[00:38:56] It's fairly straightforward with reducer tests. They're just unit tests: I give it certain state and actions, and I get certain output back. But now let's look at some of our asynchronous actions. I mentioned we're using Redux Thunk here, and there can be important business logic that we need to test.
[00:39:14] So let's say we want to test the process of fetching and loading albums from the server. The action's going to look something like this: I have a loadAlbums thunk. And if you're not familiar with the terminology, a thunk is essentially a function that takes no arguments and returns some sort of result.
[00:39:34] In this case, it's a function that's going to return another function. There's a middleware for Redux that allows me to dispatch these thunks, and it will be responsible for injecting certain dependencies that I can then use inside my innermost function. So in this case, it gives me the store's dispatch method and the getState method.
[00:39:55] And I can also inject some of my own dependencies; in this case, I'm injecting my API object. I'm using async/await here just to not have to do the promise .then chaining. And here's how I load my albums: I call the injected API's all method, and then when I get the albums, I call that receiveAlbums action that we tested earlier.
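In code, the thunk described here looks roughly like this sketch (the api.all method and action names follow the talk; the action's exact shape is an assumption):

```js
// actions.js – sketch of the loadAlbums thunk
export const receiveAlbums = albums => ({ type: 'RECEIVE_ALBUMS', albums }); // assumed shape

export const loadAlbums = () => async (dispatch, getState, { api }) => {
  // Fetch the albums from the injected API dependency...
  const albums = await api.all();
  // ...and send them to the Redux store
  dispatch(receiveAlbums(albums));
};
```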
[00:40:17] So there's some important business logic here. We're coordinating things: we're getting the albums, and then we're dispatching them to the Redux store. As far as testing this, though, if we're going to unit test it, and I know some people don't love mocks, but if we're going to unit test it, by design we need to isolate all of the dependencies.
[00:40:38] So what I'm doing here is, for getState, dispatch, and my additional dependencies, I'm going to mock all of them out with testdouble, the idea being that with all of that isolated, I can just focus on the specific behavior, or job, that this particular thunk should do. And that's just coordination: fetch the data and send it to the store.
[00:41:03] So I can have a list of fake albums. With testdouble, I can do a setup here and say, when this code calls my api.all method, then resolve with a promise that contains the albums. So I'm just mocking out this fake data, like mocking the server layer. And then I can call the loadAlbums thunk, which returns back a function I call.
[00:41:28] And that function is where I'm going to inject my mocked-out dependencies: my dispatch, my getState, and my additional dependencies with the API. Then I can verify that internally that code would have called the store's dispatch method, because I've mocked it out, with the receiveAlbums action. So if I save that, the test is now passing.
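Put together, the unit test for the thunk might look like this sketch with testdouble:

```js
import td from 'testdouble';
import { loadAlbums, receiveAlbums } from './actions'; // paths are guesses

it('fetches albums and dispatches them to the store', async () => {
  const albums = [{}, {}];
  const dispatch = td.func('dispatch');
  const getState = td.func('getState');
  const api = { all: td.func('api.all') };

  // When the thunk calls api.all(), resolve with our fake albums
  td.when(api.all()).thenResolve(albums);

  // Call the thunk, injecting the mocked-out dependencies
  await loadAlbums()(dispatch, getState, { api });

  // Verify it coordinated correctly: dispatched receiveAlbums with the data
  td.verify(dispatch(receiveAlbums(albums)));
});
```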
[00:41:53] And so that's good, but now we can make the argument, just as we did with components, that we're really coupled to the implementation here, because we had to mock out all these dependencies. Could we get more bang for our buck if we were to integration test this instead? So here's what I mean.
[00:42:09] What if we were able to test the actions and the reducer all together with a real store? So we can write an integration test here. The idea is that the only thing I'm going to mock out is the fetch function, which is a built-in function in modern browsers that gives us a cleaner alternative to XMLHttpRequest.
[00:42:31] So that's the only thing I'm going to mock out: fetch. But when I create my store, I'm going to create a full-blown store. I'm going to get the real API dependency, and also my selectors, which we're not covering here. So I'm using my real dependencies here in a real store, and I can have a similar load albums test.
[00:42:49] I still have fake albums, but I'll just mock out the fetch call. So I'm mocking out my REST API instead, so to speak. It looks a little weird here, but it deals with the response that fetch gives. So I'm just mocking out that response and making sure it resolves with the albums.
[00:43:09] And then I can call the real store's dispatch method with my asynchronous loadAlbums thunk. It returns a promise, so I'm going to await that. And then, because everything's integrated together, it's going to really dispatch to the store and really call my reducer. After the fact, I can inspect the store's getState, check the albums, and verify that it equals remote data success with the albums.
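As a sketch, with only fetch mocked and everything else real; the store setup, the fake response shape, and the remoteData assertion are my assumptions about how this particular app is wired:

```js
import { createStore, applyMiddleware } from 'redux';
import thunk from 'redux-thunk';
import td from 'testdouble';
import reducer from './reducer';
import * as api from './api';            // the real API module
import * as remoteData from './remoteData';
import { loadAlbums } from './actions';

it('loads albums into the real store', async () => {
  const albums = [{ id: 1, title: 'Blue Train' }];

  // Mock only fetch, faking the shape of a fetch Response
  global.fetch = td.func('fetch');
  td.when(fetch(td.matchers.anything())).thenResolve({
    ok: true,
    json: () => Promise.resolve(albums),
  });

  const store = createStore(
    reducer,
    applyMiddleware(thunk.withExtraArgument({ api }))
  );

  // Really dispatch the thunk and wait for it to finish
  await store.dispatch(loadAlbums());

  expect(store.getState().albums).toEqual(remoteData.success(albums));
});
```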
[00:43:38] So I can save that, and that's now passing. The benefit is that we are verifying all of these things are working together, and that I can load albums and get them into the store. But what if we want to take that one step further? If we're already going through the trouble of that, why not test this in terms of my whole application,
[00:43:57] and bring in the React side of this too? So we can do a full-blown integration test with Redux and React together. The idea is I want to load up the entire application and verify that when it mounts, it's going to dispatch that asynchronous action to load the albums and then render them out for me. So I'm going to have a real album now inside this list of albums that I'm going to mock out.
[00:44:25] So it'll have an ID, a title, and all those properties. I'll still mock out fetch and resolve with this list of fake albums. But then I'm going to use the mount function that we saw earlier from Enzyme that does a full render, and I'm going to provide to it the Provider component from React Redux and give it that real store that I'm configuring with real dependencies.
[00:44:50] And then inside, I'm going to give it the real app. So this is just how I'm rendering it over here in the real application. And when I do that, I can then first verify that it's displaying some sort of loading message when it starts off. And then I have to do a little bit of trickiness here, which kind of stinks: because it's doing something asynchronous, it's going to put that off on the event loop.
[00:45:16] So I'm going to have to force myself to also wait for that to complete. This little utility I wrote basically allows me to do a setImmediate/setTimeout thing where I'm just going to wait until that asynchronous action completes. Once I know that completes, there's an update method on the wrapper that allows me to force it to re-render everything given the new context of what's in the Redux store.
[00:45:40] So the expectation is that it's already dispatched that action, got the data back, ran it through the reducer, and added the albums into the store. And so now I can verify that the wrapper contains the title and the artist as I might expect. And so that's a nice full-blown integration test with all that.
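Here's roughly what that full React + Redux test looks like; configureStore and the flushPromises helper are hypothetical stand-ins for the store setup and waiting utility the talk mentions:

```js
import React from 'react';
import { mount } from 'enzyme';
import { Provider } from 'react-redux';
import td from 'testdouble';
import App from './App';                   // path is a guess
import { configureStore } from './store';  // hypothetical helper wiring the real deps

// Wait for work queued on the event loop (the async thunk) to finish
const flushPromises = () => new Promise(resolve => setImmediate(resolve));

it('loads and displays albums on mount', async () => {
  const album = { id: 1, title: 'Blue Train', artists: ['John Coltrane'] };
  global.fetch = td.func('fetch');
  td.when(fetch(td.matchers.anything())).thenResolve({
    ok: true,
    json: () => Promise.resolve([album]),
  });

  const wrapper = mount(
    <Provider store={configureStore()}>
      <App />
    </Provider>
  );

  // Initially we should see some kind of loading message
  expect(wrapper.text()).toContain('Loading');

  await flushPromises();
  wrapper.update(); // force a re-render with the new store state

  expect(wrapper.text()).toContain('Blue Train');
  expect(wrapper.text()).toContain('John Coltrane');
});
```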
[00:46:00] But it was painful having to do some of that setup. So let's recap this and see if there's maybe a better way that we can approach writing these sort of full blown integration tests.
[00:46:16] So the pros for doing some of the unit testing in Redux: those were easier, right? They were fast. We got isolated design feedback, just like we saw with components. Some of the downsides, though: for thunk tests, we were heavily coupled to the implementation. We had to do a lot of mocking, and that can be annoying sometimes.
[00:46:32] We also had no interaction with the rest of the application, because everything was isolated. So when we took the integration test approach, we were able to test the reducer, the actions, and the API all in concert, working together. In fact, we could even bring in React, and we were able to test all of it working together.
[00:46:53] But the cons for that were that we had more setup. We had to manually wait. We had to force a re-render. So that's a lot of extra steps, and that's not ideal for writing some of these tests. These tests are also harder to debug. Like I mentioned before, if something fails, I may not know: is it the integration failing, or is an individual thing failing?
[00:47:13] And finally, this is the big one for me: it doesn't test user interaction. It would be nice if we could test the full stack and do it in a way like a user might use our application. And so that leads us into our final section on end-to-end testing. The idea is that we want to test the entire application like we were doing with integration tests,
[00:47:34] but we want to do it in a way where we're simulating user interactions. How would our user actually use the application? And ideally we want to interact with a real API. We want to make real REST API calls, but we might still use a test server with fake data for consistency. And we primarily want to focus on the happy path here.
[00:47:54] Because these tests are going to run in the browser, they're going to be a little slower. Now, you can run them in headless browsers for a little bit of gain in speed, but there's still going to be a lot of debugging with them, and so we don't want to test every possible thing. That's just going to be a ginormous test suite that's going to run really slow.
[00:48:11] For our purposes, we're going to look at Cypress for writing these tests. Cypress is a relatively new tool out there for writing end to end tests, and it's really amazing. So we're not going to do a full blown here's how you use Cypress, but it's mainly just, let's look at high level, what does end to end testing give us, and especially, how can Cypress be useful for that?
[00:48:33] So we're going to return back to our demo, and I'm going to take a brief detour over here, and I'm going to start up my test server. So we want to have consistent data for our tests.
[00:48:47] And then over here, I'm going to run this command, npm run cypress. This allows us to run it with a local desktop app. I already have an app spec file for these end-to-end tests, so I can load it up. It's going to load up Chrome for me, and it's going to start running through all of these tests. But it's interacting with my real application,
[00:49:11] testing that it works as I expect, as a user might use the application. And over here we have all the assertions that I make in my test file displayed on the left. What's nice about Cypress is that it snapshots all of these states, and I can go back and look at how it interacted with the application.
[00:49:31] So when we start off, we expected it to display a loading message. And if I make a little adjustment (it might be tiny; it's hard for me to bring up the size of the iframe there on the right), it's saying loading, and then I can verify that it displays the loaded albums here. So when I visit, we have loading, and then I can verify each album is in there. Now for more complex tests, though, let's say we're adding a review to an album. I can say: visit the page, look for an album that contains Blue Train in the album title, and click on that. And then it'll show me a before and after: before I clicked and after I clicked.
[00:50:13] And then I can drive looking for the reviews, looking for the text box, typing in the text "great album", and then clicking the save button. What I'm doing is interacting with the application like a user might. And just to give you an idea of what that looks like in code: Cypress gives me tons of methods for chaining, where I can visit URLs and check that the page contains things. This is very similar to what we saw with Enzyme, where we can say, does it contain a certain kind of text?
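A rough Cypress sketch of those two scenarios; the routes, text, and data-test attribute names are placeholders of my own:

```js
// cypress/integration/app.spec.js
describe('Albums app', () => {
  it('loads and displays albums', () => {
    cy.visit('/');
    cy.contains('Loading');     // loading message shows first
    cy.contains('Blue Train');  // then the loaded albums appear
  });

  it('adds a review to an album', () => {
    cy.visit('/');
    cy.contains('Blue Train').click();

    // data-test anchors, as discussed earlier; names are assumptions
    cy.get('[data-test="review-input"]').type('Great album');
    cy.get('[data-test="save-review"]').click();

    cy.contains('Great album');
  });
});
```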
[00:50:47] When it comes to reviewing an album, if you look at that test down here, I'm using that data-test attribute that we were talking about, which gives us good anchors to get into the DOM and find things. It's got a very clean syntax for finding everything and verifying it's in there, and we're testing everything:
[00:51:03] the reducer, the components, and we're testing it in a browser with real API calls. The other nice thing is that I'm not having to manually wait like I was doing in that integration test. Cypress automatically knows how to wait long enough when there are asynchronous things going on.
[00:51:23] And so I don't have to worry about all that setup, which makes these tests a lot less fragile than the integration tests. So just to recap all of what we've discussed: for end-to-end tests, we can test all those pieces working together in user scenarios. How would our user interact with our application?
[00:51:44] And we can verify the pieces work together to make that happen. If we use a tool like Cypress, which I definitely highly recommend, it makes end-to-end testing a lot more pleasant and easier to debug than a lot of the tools that I've used in the past. Now, some of the downsides. These tests are going to be slow.
[00:52:02] The test suite we just looked at usually runs in around 5 seconds, and that's very slow if we compare it to, say, unit tests. We also have to deal with breaking from markup changes unless we're using that data-test custom attribute, and the trade-off there is that now we've peppered our source code with all these data-test attributes, and that's going into production as well.
[00:52:25] So that's not ideal, but it's the trade-off we have to pay. And then finally, even though Cypress is a really good tool, it can still be slightly harder to debug end-to-end tests as compared to, say, a unit test, because we may not know at what layer things are failing. So if we return back to our Christmas tree, given what we know now, I think we could go a little lighter on the integration tests.
[00:52:52] I think the benefits we get out of integration tests we can get more of out of end-to-end tests, because we're exercising the full stack in a browser with user scenarios. So my main focus would be isolated design with unit tests, and then getting most of your integration coverage out of end-to-end tests.
[00:53:11] And so I think it looks like a falcon. So I'm proposing the obsessed testing falcon, which would admonish us all to write tests as much as we can. Thank you all so much for joining me today. I hope you have a good framework for beginning to test your React applications.