090: Big Testing in JavaScript

Hosted by Charles Lowell and Wil Wilsman

November 30th, 2017.

How do we ensure a high level of quality and maximize the refactorability of our code? Frontsiders Wil and Charles talk about their battle-tested techniques for testing web applications, not only in React, but in any JavaScript framework.


Transcript

CHARLES: Hello everybody and welcome to The Frontside Podcast, Episode 90. My name is Charles Lowell, a developer here at The Frontside and your podcast host-in-training. And with me today is Mr. Wil Wilsman. Wil, who just got back from Nodevember, just walked straight into the office and is ready to podcast with us on a very, very, very interesting subject, I think, today. We're going to be talking about acceptance testing in JavaScript applications, especially some of the techniques that we've developed here around testing React applications based on the lessons that we learned from the Ember community. But really, more than just React applications. Really, testing any JavaScript application from the inside out, making acceptance tests for that.

So, I think we're going to talk about some of the challenges that you encounter and some of the really novel solutions that are out there that we had nothing to do with. And I guess we really didn't invent much ourselves; it's more of a cobbling together of various techniques into a powerful witch's brew for acceptance testing.

Anyway, so Wil, just to round out the problem space or explore the problem space, what are some of the challenges that you encounter with an acceptance test? Actually, let me back it up even further. What is an acceptance test in a JavaScript application compared to what people normally encounter?

WIL: Acceptance testing or end-to-end testing is just a problem that every JavaScript app should face. Not everyone does, but they definitely should. And basically, it's how the user interacts with your app through the browser. And we want to test every part of that through the browser: triggering real browser events and interacting with the app, not calling functions directly, but clicking buttons and pretending we're a user.

CHARLES: Yeah. You know, I know that when we showed up in the React space, that was not really the way that most people tested their applications.

WIL: No, not at all. They're all about unit testing. Make sure every small piece of your code works, and to some degree integration testing, making sure your components work with other components. But nothing is out there really for those big acceptance tests where you want the user to click a button and expect them to be brought to a page or these fields to be filled out, et cetera.

CHARLES: Mmhmm. Yeah. And there certainly was a very high level of maturity around unit testing, like you said. There are tools like Enzyme and…

WIL: Jest.

CHARLES: Yeah, Jest. But I was actually shocked to find out that Jest didn’t even run in the browser.

WIL: Yeah, it’s all virtual.

CHARLES: It’s all virtual. It’s completely and totally simulated and stubbed. And that presents some problems.

WIL: Yeah. The main problem is cross-browser testing. Some people might consider that to be separate from their acceptance testing but you should be able to just run your acceptance tests in multiple browsers and be able to also test cross-browser support.

CHARLES: Mmhmm. Yeah. And so, if you’re using something like Jest, you’re never actually running the code inside Safari. You’re never actually running it inside Internet Explorer. You’re actually running it in NodeJS.

WIL: And you know, your user is not going to run it in Node.

[Laughter]

WIL: They’re going to use a browser.

CHARLES: I don’t know about your users.

WIL: [Chuckles]

CHARLES: [Laughs] You know, we like to stick to the pretty advanced. It’s like, go to getNodeJSBrowser.com.

WIL: [Laughs]

CHARLES: Enough of this Firefox BS. But no, seriously, it was certainly a problem. We were looking around, because we never like to build anything ourselves if we can avoid it. But it really just seemed like there was not an off-the-shelf solution for writing these big-style acceptance tests in JavaScript. There are some services out there. There's a couple now. What was the…

WIL: I think the main one here is Cypress.

CHARLES: Cypress, yeah. So, there’s Cypress now. I’ve watched the instructional videos but never actually tried to integrate it into my application.

WIL: Yeah. I think at its core it takes the same approach that we’ve been doing with how we’re interacting with our tests.

CHARLES: Mmhmm. Okay. The main difference is, is it that it’s a service? Like you have to edit your tests through…

WIL: Yeah.

CHARLES: Their web browser, their web interface, and use their assertion library?

WIL: Yeah. I’m not sure about the editing part. But yeah, it’s their assertion library. I’m pretty sure it’s their test runner and it’s their testing environment. Really, the only control through that is through their UI, or through settings, basically. And you’re stuck with those. You can’t use other… I don’t think you can use Mocha with Cypress.

CHARLES: Right.

WIL: Although it’s very much like Mocha…

CHARLES: Right.

WIL: It’s not.

CHARLES: Right, right. And I also noticed, we’ll touch on this later, the assertions, most of the side effects that were happening were happening right there inline inside your assertions. And that might be an opaque statement, but we will actually get into that later.

WIL: Yeah. And I think one of the things about their side effects so to speak is everything leading up to a side effect is a promise with Cypress.

CHARLES: Mmhmm.

WIL: So, when you select a button and click it, Cypress is going to wait for that button to actually exist before it clicks it.

CHARLES: Right, right, which is actually pretty cool. So, that's actually a perfect intro into one of the primary challenges with doing acceptance testing in general in a JavaScript application. This is a problem when you're doing it in Ember. It's a problem in React. It's really a problem anywhere. And that is, how do you know when the effects of a user's interaction have been realized? Right?

WIL: Yeah. And in Ember you take advantage of the run loop. Once that action happens, you wait for the run loop to complete and then your tests run.

CHARLES: Right. So, the idea is that I’ve clicked some button or I’ve typed some key or I’ve moved the mouse. And then I listen for the run loop and when it’s “settled” then I can now run my assertions because I know that the side effects that I was looking for have now been realized.

WIL: Yeah, hopefully, if you’re...

CHARLES: Hopefully.

WIL: Writing your app right. [Chuckles]
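
(For reference, a rough sketch of the classic Ember acceptance-test pattern being described here, using Ember's old-style global test helpers; `andThen` waits for the run loop and any pending async to settle before running the assertion.)

```javascript
test('disables the save button after clicking it', function(assert) {
  visit('/orders');
  click('button.save');

  // andThen only runs once the run loop has "settled"
  andThen(function() {
    assert.ok(find('button.save').is(':disabled'));
  });
});
```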

CHARLES: Right, right. But that actually presents some problems in itself because it requires visibility into the internals of the framework.

WIL: Yeah, so Ember is built with testing in mind.

CHARLES: Mmhmm.

WIL: And other libraries like React just being a view library might not be built with testing in mind. So, we don’t have those hooks to wait for this loop to complete, wait for all of these things to be rendered before you continue.

CHARLES: Exactly. And so, this is, I think it’s actually kind of both a blessing and a curse. Because there are such strong conventions in Ember, they were able to build this wonderful acceptance testing regimen from the get go.

WIL: Yeah.

CHARLES: But like you said, that doesn’t exist at all in the React ecosystem. And so, what do you do? There’s no run loop. You’re cobbling together a bunch of different components. And maybe you’re using Redux, maybe you’re using MobX, maybe you’re using… you’re certainly using React. And all of these things have their own asynchronies built-in. And there’s not one unifying abstraction that’s keeping track of all the asynchrony in the system. And so that presents a challenge. So, the question is then, if you’re trying to not actually check and observe the state of a system until the right moment, how do you know when that right moment is?

WIL: Yeah. And in early testing of a side React project I had, I would basically wait for a state to be complete before I continued my ‘before each’. And in the testing we’re doing now, it’s essentially what we’re doing except the state is what the browser sees, or what the user would see in the browser.

CHARLES: So you were actually querying the...

WIL: Yeah. So, I was using Redux. So, in my app I was saying, when the Redux app is done loading...

CHARLES: Mmhmm.

WIL: The instance is set to true or false, then continue the test.
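
(A hypothetical sketch of that earlier approach: the setup hook subscribes to the Redux store and only resolves once some loading flag flips. Redux's `subscribe` and `getState` are the real APIs; the `app.isLoaded` field is made up for illustration.)

```javascript
beforeEach(() => {
  return new Promise((resolve) => {
    const isLoaded = () => store.getState().app.isLoaded;

    if (isLoaded()) { return resolve(); }

    // otherwise, wait for the (hypothetical) loading flag to flip
    const unsubscribe = store.subscribe(() => {
      if (isLoaded()) {
        unsubscribe();
        resolve();
      }
    });
  });
});
```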

CHARLES: Mmhmm. And so, what that means, what we’re doing, is doing the same thing except observing at the DOM...

WIL: Yeah, exactly.

CHARLES: Level. And what it means is we actually… I would love to set this up and have a big reveal but I guess we’ll just have a big reveal, is that essentially what we do is polling.

WIL: Yeah.

CHARLES: Right? So, when we run an assertion, let’s say you click a button and you want the button to become disabled, there’s an inherent asynchrony there. But what we will do is we’ll actually run the assertion to see if it’s disabled not one time. We’ll run it a thousand times.

WIL: Yeah, as many times as needed until it passes.

CHARLES: Right, exactly. As many times as needed until it passes. And I think that is, at least to most programmer instincts, an odious idea.

WIL: Yeah.

CHARLES: [Laughs]

WIL: It’s like, “Oh wait, you’re just looping over every single assertion how many times?”

CHARLES: Yeah, exactly. And it feels, yeah, it feels weird as an idea. But when you actually see the code that it produces, it just sweeps away so much complexity.

WIL: Yeah. And…

CHARLES: Because you don’t worry about asynchrony at all.

WIL: Yeah. And it’s pretty genius. If I’m a user and I click a button, it’s loading when I see that it’s loading. So, our tests are going to wait until that button says it’s loading. And then the test passes.

CHARLES: Right. And so, what we do is we essentially, we use Mocha but you could do it with QUnit or anything else, is that when you run your assertion, you declare, you have an ‘it’ block or I guess, what would it be in QUnit?

WIL: A test?

CHARLES: A test?

WIL: I think it’s just a test.

CHARLES: You have your test block. And so that function that actually runs the assertion and checks the state will actually run, yeah it could run three times. It could run a thousand times. It’s just sitting there waiting. And it will time out. And it will only fail if that assertion has failed a thousand times or it has failed through, I think two seconds is our default.

WIL: Yeah, yeah. I think we default to the runner’s default timeout.

CHARLES: To the runner’s default timeout, yeah.

WIL: Yeah. Or you can set that yourself with how we have it set up. And the other thing that comes from that is if your tests are only failing when they time out, how do you know what’s actually failing? And our solution to that was we catch the error every time it fails and right before the timeout actually happens we throw the real error.
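
(A minimal sketch of what a convergent assertion helper could look like; the `convergent` name and the details here are hypothetical rather than the actual Frontside implementation, but it shows the idea: keep re-running the assertion until it passes, and reject with the last real assertion error just before the timeout expires.)

```javascript
// Hypothetical helper: re-run the assertion until it passes, or fail with
// the real assertion error once the timeout has been used up.
function convergent(assertion, timeout = 2000) {
  return function() {
    const start = Date.now();

    return new Promise((resolve, reject) => {
      (function loop() {
        try {
          resolve(assertion());
        } catch (error) {
          if (Date.now() - start >= timeout) {
            reject(error); // surface the real failure, not a generic timeout
          } else {
            setTimeout(loop, 10); // not converged yet; try again shortly
          }
        }
      })();
    });
  };
}

// Usage with Mocha (chai's `expect` assumed to be available as a global):
it('disables the button after clicking', convergent(() => {
  expect(document.querySelector('button.save').disabled).to.be.true;
}));
```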

CHARLES: Yeah. Exactly. But the net effect is that you’re able to write your assertions completely and totally oblivious of asynchrony. You don’t, we don’t have to worry about asynchrony pretty much at all. I mean, we do, and we’ll get into that. So, I made a global statement and then immediately contradicted it.

WIL: [Chuckles]

CHARLES: But hey, you got to be controversial. But for the most part, asynchrony just disappears because asynchrony is baked into the fabric. So, rather than thinking about it as a one-off concern or a onesie-twosie, it’s just every single assertion is just assumed to be asynchronous. And so, that actually means you don’t have to deal with promises. You don’t have to deal with run loops. You don’t have to deal with anything. You just write your assertion and when it passes, it passes. And there are some really unique benefits for this. And there are some challenges. So, I think one of the first benefits is that it’s actually way faster.

WIL: Right.

CHARLES: Which is counterintuitive.

WIL: It’s incredibly fast.

CHARLES: It’s very fast.

WIL: Yeah. With all the loops that are happening, you might think every loop is going to slow it down slightly. But it really doesn't. Each test, even though it asserts five or six times, takes milliseconds.

CHARLES: Mmhmm.

WIL: The test itself might only loop twice.

CHARLES: Right. Exactly. Whereas if you’re waiting for a run loop to settle, you might have some… you click a button, it disables, it also fires off an Ajax request and does all this stuff. But if all my assertion wants to know is “Is this button disabled?” then I only need to assert until that has happened. I don’t need to wait until all the side effects have settled...

WIL: Mmhmm.

CHARLES: And then do the assertion. I just know, “Hey, my assertion, the thing that I was waiting for - that happened. Let’s move on.” Yeah. And so, it’s so, so fast. And that was actually, I didn’t predict that. But I was definitely pleasantly surprised.

WIL: Yeah, that was a very nice surprise. And all of our tests ran so much quicker than they would have in a run loop environment with Ember or something.

CHARLES: Right. Yeah, yeah. We had actually just come off a project where we were having that exact problem, which was that, yeah, our animations were slow. Or, the animations were fine. They were perfect. [Laughs] But they were slowing down the tests.

WIL: Yes. I think in that project, it was like, 30-minute tests...

CHARLES: Mmhmm

WIL: For the whole suite to run.

CHARLES: Yeah. To which I’ll add a public service announcement. I think this is a conjecture but I do believe that animations are best applied not to individual components but by the thing that uses a component. So, I shouldn’t have an animation that’s like, implicit to a dialog. It should be the thing that’s showing the dialog that gets to decide the animation to use. Anyway, just throwing that out there.

WIL: [Laughs]

CHARLES: Because animations are about context. And so, the context should provide the animation, not the individual atom. Anyway, moving on.

WIL: Some other podcast.

CHARLES: Yeah.

[Laughter]

CHARLES: That’s another podcast right there. But there’s also, this does present some challenges or requires code to be structured in a way that facilitates this. So, there are some challenges with this approach, some things you need to be aware of if you’re using this kind of system. We’ve kind of settled on a name for what we call these types of assertions and these types of systems.

WIL: Yeah. We call them convergent assertions, because you’re converging on something to happen. It’s going over and over until it happens.

CHARLES: Right.

WIL: And yeah, a lot of these challenges that we’ve come across are things that you might not think of, like there are a few instances of false positives...

CHARLES: Mmhmm.

WIL: That happen with these convergent assertions.

CHARLES: Right. So, what would be an example there?

WIL: So, the most common example that I’m seeing so far is when you’re asserting that something didn’t happen.

CHARLES: Mm, right.

WIL: That would immediately pass. But if it takes your app...

CHARLES: [Laughs]

WIL: A few seconds for it to actually happen, then you could still have an actual failure but your test passed immediately.

CHARLES: Right, right. So, what’s the countermeasure then?

WIL: We invert our assertions. So, we make sure they fail for a certain amount of time.

CHARLES: Right. So, the normal case where you just want to say, “I want to make sure that my state converges to this particular state.”

WIL: Alright. I said fail at first. I meant, pass. We have to make sure it passes for a certain period of time.

CHARLES: Right, exactly.

WIL: So yeah, the normal way is it fails until it passes, and then it passes. When you invert one of these convergent assertions, you’re just making sure it passes repeatedly and if it fails at any point, you throw a failure.

CHARLES: Right, okay. And so, that’s like, if I want to check that the button is not disabled, I need to check again and again and again and again.

WIL: Until you’re comfortable with saying, “Alright. It’s probably not going to be disabled.”
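
(Again a hypothetical sketch, this time inverted: the assertion has to keep passing for the whole window, and the moment it fails the test fails with that error.)

```javascript
// Hypothetical inverted helper: the assertion must pass repeatedly for the
// whole window; any failure during the window fails the test immediately.
function convergentStays(assertion, timeout = 2000) {
  return function() {
    const start = Date.now();

    return new Promise((resolve, reject) => {
      (function loop() {
        try {
          assertion();
        } catch (error) {
          return reject(error); // it failed at some point within the window
        }

        if (Date.now() - start >= timeout) {
          resolve(); // it held the whole time; we're comfortable now
        } else {
          setTimeout(loop, 10);
        }
      })();
    });
  };
}

// "The button never becomes disabled":
it('keeps the button enabled', convergentStays(() => {
  expect(document.querySelector('button.save').disabled).to.be.false;
}));
```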

CHARLES: Yeah, exactly. And so there, it’s kind of weird because it is dependent on a timeout.

WIL: Mmhmm.

CHARLES: You could go for two seconds and then at the very end it becomes disabled. So, you just kind of have to take that on faith. But...

WIL: Yeah.

CHARLES: In practice, I don’t think that’s been much of a problem.

WIL: No.

CHARLES: It’s more indicative of, if your button disables after...

WIL: A few seconds.

CHARLES: A few seconds, what’s up with your...

WIL: Yeah, what’s up with your app?

CHARLES: Yeah, exactly.

WIL: If you’re waiting for an Ajax request or something, an example, then you should be using something like Mirage Server.

CHARLES: Right. Which is, man, we got to get into that, too. There are a couple of other things that I wanted to talk about too, with these convergent assertions. And that is, typically when you look through the READMEs for most testing frameworks, you see the simple case of the entire test, the setup, the teardown, and the actual assertions, are in the actual test.

WIL: Yeah. The 'it' block in Mocha or the test block in QUnit. You click a button, and then make sure it's disabled, and it moves on to the next test.

CHARLES: Mmhmm.

WIL: Then you click a different button or the same button and you assert something else in the next test.

CHARLES: Mmhmm. Right.

WIL: And yeah, you can’t do that with convergent assertions because they’re looping. So, if you click a button in a loop it’s going to keep clicking that button over and over and over again.

CHARLES: Yeah. [Laughs] Right, right. So, it means that you need to be very conscientious about separating the parts of your test that actually act the part of the user from the parts of your test that are about observation.

WIL: Yeah. So, our solution to that is we move out all of our things that have side effects like clicking a button or filling in a form, all that stuff happens in ‘before each’s. And all of our actual assertions happen in these convergent ‘it’ blocks that loop over and over again. So, our ‘before each’ runs and clicks the button and then we have 10 or so tests that will loop and wait for various states to...

CHARLES: Yeah.

WIL: Be true.

CHARLES: Right. That means that yeah, all these assertions do is they read state. And you just have to, you do have to be conscientious. You’re not allowed to have any side effects inside your tests, your actual assertion blocks.

WIL: Mmhmm.

CHARLES: But that's actually, it's a good use case for the whole Arrange-Act-Assert pattern, which has been around way, way, way before these techniques. But here, we're doing the Arrange and Act in our 'before each'...

WIL: Mmhmm.

CHARLES: And then we’re doing Assert later. And I think it actually leads for more readable...

WIL: Yeah, definitely.

CHARLES: Things.
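
(A sketch of that shape in practice, reusing the hypothetical `convergent` helper from above: the one side effect lives in the `beforeEach`, and each `it` block is a read-only convergent assertion.)

```javascript
describe('clicking the save button', () => {
  beforeEach(() => {
    // Arrange / Act: the one and only side effect, performed once
    $('button.save').click();
  });

  // Assert: read-only convergent assertions that are free to loop
  it('disables the button', convergent(() => {
    expect($('button.save').is(':disabled')).to.be.true;
  }));

  it('shows a saving indicator', convergent(() => {
    expect($('.saving-indicator').length).to.equal(1);
  }));
});
```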

WIL: And it also opens the door to something that we can’t really take advantage of yet but if you have 10 assertions with one ‘before each’ side effect, you could run all of those assertions in parallel.

CHARLES: That’s right.

WIL: And your tests would be 10 times faster.

CHARLES: Mmhmm. Yeah, exactly. Or you could run them in parallel or you could just run them one after the other but you wouldn’t have to run that ‘before each’ 10 times.

WIL: Yeah. But something with that, that I found, is if we move all of our side effects to ‘before’ blocks instead of ‘before each’ blocks, sometimes a test three tests down that’s waiting for something to happen...

CHARLES: Yeah.

WIL: That thing might have already happened earlier...

CHARLES: Yeah.

WIL: And it already went away. A loading state is the best example of that.

CHARLES: Mmhmm.

WIL: You show the loading state, the loading state goes away. So, if you move that button click into a ‘before’ and that loading state test is three tests in, that loading state is already going to be gone.

CHARLES: Yeah, so I think the long story short is we’ve kind of come to the conclusion that we would have to write our own runner.

WIL: Yeah.

CHARLES: Essentially to take advantage of this. But that said, we’ve done some sketching about what we would gain by writing our own runner. And the speed, we’re talking about exponential speedups.

WIL: Yeah.

CHARLES: Maybe taking an entire acceptance test suite and having it run in five or six seconds.

WIL: Yeah. We’re talking about these tests that are already extremely fast.

CHARLES: Mmhmm.

WIL: Each test takes a few milliseconds or tens of milliseconds to complete. But then if you can run all of those at the same time, all of your tests for that entire 'describe' block just ran in tens of milliseconds.

CHARLES: Right. Yeah. So, it’s really exciting and pretty tantalizing. And we would love to invest the time in that. I’ve always wanted to write our own test runner. But never had, [chuckles] never really had a reason.

WIL: Yeah.

CHARLES: Certainly not just for the sheer joy of it. Although I’m sure there is joy in writing it. But that, yeah, we’ll have to wait on that. But I am actually really excited about the idea of being able to maybe bring this back to the Ember community.

WIL: Yeah.

CHARLES: Because acceptance tests getting out of control in terms of the speed is I think a problem with Ember applications. And I think this would do a lot to address that.

WIL: Yeah.

CHARLES: I think, how long, if we were just using a stock Ember acceptance testing setup for this, I think we have about 250 tests in this React app...

WIL: Yeah.

CHARLES: How long does it take to run?

WIL: Right now I think our tests take something like 20 seconds. And that's also somewhat due to the fact that they have to print the tests on the screen on Travis, so that takes a little time. In an Ember setup, that could maybe take a few minutes. I mean, that's not that big of a deal, a few minutes.

CHARLES: Right.

WIL: But compared to 20 seconds.

CHARLES: Right. You're still talking about an order of magnitude...

WIL: Yeah, exactly.

CHARLES: Difference. And using this, I think you could get a 30-minute test suite...

WIL: Yeah.

CHARLES: Down to the order of 3 minutes.

WIL: Now when we’re talking about those times, we’re talking about the tests themselves. Of course, the CI would have to download stuff and set up the [inaudible].

CHARLES: Mmhmm, right, right.

WIL: And that of course all adds to the time.

CHARLES: Yeah, mmhmm, yeah. Earlier you mentioned, we talked about Ember CLI Mirage. This is actually something that, having now been using it for what, 2 years or something like that, it's just… it's impossible...

WIL: To go back, yeah.

CHARLES: To go back. It is. It’s like [chuckles] you come outside the Ember community and you’re like, “How is anybody ever dealing without this?”

WIL: Yeah.

CHARLES: [Laughs]

WIL: A lot of the mocks out there are usually mocking the function that makes the request and returning the data from that function. That's what's out there currently, minus the Mirage stuff.

CHARLES: Mmhmm.

WIL: But once you use Mirage, you’re mocking the requests themselves.

CHARLES: Yeah. And you've got such great support for factories. I love factories. It's something that is very prevalent in the Ruby community, and maybe not so much elsewhere. But the ability to very, very quickly crank out high-fidelity production data...

WIL: Yeah. And you don’t have to have files upon files of fixtures.

CHARLES: Yeah, exactly. And you can change, if something about your schema changes, you can change the factory and now your test data is up and running. So, Ember has this tool called Mirage which is just, like I said, it's so fantastic. Oh yeah, it's also got support for running your application. Not just in your tests, but you can actually run your application with Mirage on and...

WIL: Oh right, yeah.

CHARLES: And you’ve got now the most incredible rapid prototyping tool.

WIL: Yeah. You don’t need to connect to a server to see fake data.
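
(A rough sketch of the kind of setup being described. The factory and route-handler syntax follows Ember CLI Mirage's documented conventions; the import path and the exact server-construction shape for the extracted package are assumptions for illustration.)

```javascript
// Import path is an assumption; see the show notes for the extracted package.
import { Server, Model, Factory } from 'mirage-server';

const server = new Server({
  models: {
    user: Model
  },

  factories: {
    // Factories crank out high-fidelity fake data on demand
    user: Factory.extend({
      name(i) { return `User ${i}`; },
      email(i) { return `user${i}@example.com`; }
    })
  },

  routes() {
    // Mock the request itself, not the function that makes it
    this.get('/api/users', (schema) => schema.users.all());
  }
});

// In a test, or while prototyping with no real backend at all:
server.createList('user', 5);
```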

CHARLES: Right, right. And we were even talking about this yesterday to a potential client. They're trying to, they've got to present something to investors. And how wonderful is it to just be like, "You know what? We just don't want to invest the money to generate the force to move the inertia of a backend." Especially in this particular use case, the backend was going to be really, really heavy.

WIL: Yeah. And there were some questions about the backend that we couldn’t address quite yet but we wanted to start working on something that we could show, something demo-able.

CHARLES: Right, exactly. And so, Mirage is just so wonderful for that. But again, Mirage is, it’s an Ember-specific project.

WIL: Right.

CHARLES: So, the question was, “How are we going to use that?”

WIL: And you actually took this on yourself.

CHARLES: Mmhmm.

WIL: I just saw this pop up one day and boom, you converted Ember Mirage to vanilla JavaScript.

[Chuckles]

CHARLES: So, I did extract it. But the lion’s share of the credit goes to the developers of Mirage themselves. Sam Selikoff and the Mirage community, they built Mirage not using much of Ember. There were some utilities that they were using, but mainly things like string helpers to convert between camel-case and dash-case, and using a Broccoli build, or using an Ember CLI build.

WIL: Yeah. That was one of the challenges that we came across using Mirage outside of Ember, was how do we autoload this Mirage folder with all this Mirage config and Mirage factories and models, et cetera.

CHARLES: Right. The internals were all just straight up JavaScript classes, for the most part. And so, extracting it, it was a lot of work. But 90% of the work was already done. It only took three or four days to do it.

WIL: Amazing.

CHARLES: Yeah. So, it was actually a really pleasant experience. I was able to swap out all of the Ember string helpers for Lodash. So now, it’s good to go. It shares a Git history with Ember CLI Mirage, so it’s basically a fork.

WIL: Mmhmm.

CHARLES: Like, a very heavily patched Ember CLI Mirage. But I keep it up-to-date so that it doesn’t...

WIL: Good. [Inaudible]

CHARLES: Yeah, so I think the last time I merged in from master was about a month ago, something like that. Because it’s got all the features that we need but it’s not a big deal to rebase or just to merge it on over in. Because yeah, it’s a really straightforward set of patches.

WIL: Was there any talk with the creators of Ember Mirage about getting this upstream?

CHARLES: So, I’ve talked a little bit with Sam about it. And from what I can tell, his feeling on it is like, “Hey, my goal right now is to focus on this being the best testing and data stubbing platform for Ember. Anything that happens out there, outside of that scope, that’s great. And I certainly won’t get in the way of it. But I’m pretty maxed out in terms of the open source credits that I have to spend.”

WIL: [Chuckles]

CHARLES: And there hasn’t been much motion there. I’m happy where it is right now. I would like to see it merged into upstream. I think it would be great to have basically this Mirage Server and then have Ember CLI bindings for it.

WIL: Yeah, yeah. I was going to say, either another Ember CLI specific package for Mirage or maybe to make it a non-breaking change or something.

CHARLES: Yeah.

WIL: Just like an Ember-specific entry point.

CHARLES: Right, exactly. And I think that’s definitely doable, if someone wants to take it on. I will say, we have been using this extracted plain vanilla JavaScript Mirage Server now for what, almost six months?

WIL: Yeah.

CHARLES: And it really hasn’t...

WIL: Yeah, I don’t think I ran into one issue with that.

CHARLES: Yeah. It’s solid. It’s really, really good. So, kudos to the Mirage team for doing that. And if anybody is interested in using Mirage in their projects, it’s definitely there and we’ll put it in the show notes.

WIL: Yeah. We call it Mirage Server.

CHARLES: Mirage Server, yeah. So, I don’t know. Maybe it’s time to reopen that conversation. But it has become a very integral and critical piece of the way that we test our JavaScript applications now.

So, what are the foundations of it? We’ve got, we’re using Mirage. We’re using these convergent assertions. We’re using Mocha, although that’s really...

WIL: Yeah, we have jQuery and Chai jQuery just to help us out with interacting with the browser as a user would. And I think one of the big challenges with that actually, I just remembered, was triggering changes in React.

CHARLES: Yeah.

WIL: I think this is pretty specific to React. You might run into problems with Vue; I don't know how Vue messes with it. But in React, at least I think React 15 or 16, one of them, they changed the descriptor of the value property on an element so that they can appropriately interact with it, make changes, watch for changes, et cetera. So, when you set this value property using jQuery or just straight up '.value', that change event isn't triggered in React. Your on-change handlers are never called.

CHARLES: Wait, they actually update the JavaScript property descriptor of the DOM element.

WIL: Yes.

CHARLES: Boo.

WIL: Yeah. So, there’s a nice little helper out there called React Trigger Change. I dug through it and I’ve stripped some of it down to just be for more modern browsers, more modern React. But there’s a lot of good code in there. And basically, it just caches that descriptor, updates the value, triggers the change, and then adds the descriptor back. And that ends up triggering that React element.

CHARLES: Okay. [Laughs]

WIL: [Chuckles] Yeah, so that calls your handlers and that’s how we get around that.
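
(A stripped-down, hypothetical sketch of that descriptor-swapping trick for a modern browser and React 15.6+/16; the real, battle-tested version is the react-trigger-change library mentioned above.)

```javascript
function triggerReactChange(input, value) {
  // React installs its own `value` descriptor on the element so it can track
  // changes; cache it and temporarily remove it.
  const descriptor = Object.getOwnPropertyDescriptor(input, 'value');
  delete input.value;

  // Set the value through the native prototype setter, so React's tracker
  // doesn't see the write...
  input.value = value;

  // ...then fire the event React listens for, which makes it notice the
  // difference and call your onChange handlers.
  input.dispatchEvent(new Event('input', { bubbles: true }));

  // Finally, put React's descriptor back so it keeps working afterwards.
  if (descriptor) {
    Object.defineProperty(input, 'value', descriptor);
  }
}
```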

CHARLES: Right. There’s a few fun little hacks there. But I think it is good to tie that into a larger point, is that the amount of touch that you have with the framework is actually very low.

WIL: Yeah.

CHARLES: So, the amount of affordances that we’ve had to make just for React, there’s that, that you just mentioned. And is there… there’s not much else. We had to write a test harness to mount the app. But that’s like...

WIL: Yeah, yeah. Our describe application helper is pretty React-specific.

CHARLES: Right.

WIL: So, you’d have to render it and set up a Mirage server, et cetera.

CHARLES: Right. But that’s application-specific setup.

WIL: Yeah, that’s one file.

CHARLES: Right.

WIL: So, your goal for acceptance tests is you want to be able to have a refactor and your acceptance tests still pass.

CHARLES: Right.

WIL: So, what if that refactor involves switching libraries?

CHARLES: Right.

WIL: If you’re writing Ember acceptance tests, you’re going to have to rewrite all your acceptance tests.

CHARLES: Right.

WIL: That’s a huge downside.

CHARLES: Right.

WIL: So, with this method we interact with the actual library very little. We have that one file that sets up our app and that one trigger change helper; if we remove those, we can use whatever framework we want underneath. And our tests would still work.

CHARLES: Yeah, exactly. And I think that we actually could, theoretically. Honestly, I have enough confidence in this style that we're developing the tests in now that we could refactor this application to Ember and not have to rewrite our tests.

WIL: Yeah, exactly.

CHARLES: In fact, the tests would be an aide to do that.

WIL: Yeah, and the tests would be faster than Ember testing with that run loop problem.

CHARLES: Yeah, exactly. That's really something to think about or to think on, is like, "Wow. You're really at this point completely, not completely, but very loosely coupled to the actual internal library code." Which is one of the goals of a nice, big acceptance test, is to be able to make major changes, break big bones, and be able to set them and have your acceptance test suite be the bulwark that holds it all together.

WIL: Yeah.

CHARLES: So, I actually don’t know what a bulwark is.

WIL: [Laughs]

CHARLES: I just know that it’s a really strong thing.

[Laughter]

CHARLES: Maybe we could put that in the show notes.

[Laughter]

WIL: A link to what that is.

CHARLES: [Laughs] So, alright. Well, I’m trying to think if there is anything else that we wanted to mention. Any challenges? Any next things?

WIL: So, one of our next steps is something we mentioned that Cypress does, is they wait for elements to exist before they interact with them. And we’re actually not doing that in our app currently. And we don’t have helpers out there for it yet.

CHARLES: Right.

WIL: But that’s very much the next step. When we go to click an element in our ‘before each’, we have these describes that are nested. Say, you have nested describes and you get down three levels into a ‘before each’ when you’re clicking a button. That button might not exist yet.

CHARLES: Right.

WIL: And especially since we’re using jQuery, if you trigger a change on an empty jQuery element, it’s not going to throw an error. It’s just not going to tell you that it triggered anything.

CHARLES: Right.

WIL: So, we get those skips where that button’s not getting clicked and we should really be waiting for that button to exist.

CHARLES: Right. So, what we’ve done right now is we’re converging on our assertions at the backend of a test. But at the frontend of a test we need to also be converging at some state before we can actually interact with the application.

WIL: Right, yeah.

CHARLES: So yeah, so that part is missing. And that actually brings up, we are very slowly but nevertheless doing, we’re collecting these convergent assertions and convergent helpers in a repository on our GitHub account. We’re going to be adding these things so that you can either use them out of the box or use them to make your own testing library.

WIL: Yeah. And one of the other next steps that goes along with waiting for the element to exist is when you need to chain convergences. Like, wait for this element to exist and then click it and then wait for this thing to happen before actually running a test. And that presents the problem of our convergences are waiting for that timeout and those timeouts will accumulate. So if you have three chained convergences, that’s now a six thousand millisecond timeout as opposed to a two thousand millisecond timeout.

CHARLES: Right.

WIL: So, one of the next steps is getting that tracking under control so if you chain three convergences together, they’re smart about it and they still fail under the two thousand millisecond timeout.
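
(A hypothetical sketch of that idea: a single `convergeOn` primitive that accepts a shared start time, so chained convergences draw from one two-second budget instead of stacking fresh timeouts on top of each other.)

```javascript
// Hypothetical primitive: poll the assertion until it passes, or give up once
// the shared budget (measured from `start`) runs out.
function convergeOn(assertion, timeout = 2000, start = Date.now()) {
  return new Promise((resolve, reject) => {
    (function loop() {
      try {
        resolve(assertion());
      } catch (error) {
        if (Date.now() - start >= timeout) {
          reject(error);
        } else {
          setTimeout(loop, 10);
        }
      }
    })();
  });
}

// Chained: waiting for the button to exist, clicking it, and waiting for the
// result all share the same two-second budget.
async function clickAndWaitForDisabled(selector, timeout = 2000) {
  const start = Date.now();

  const button = await convergeOn(() => {
    const el = document.querySelector(selector);
    if (!el) { throw new Error(`expected ${selector} to exist`); }
    return el;
  }, timeout, start);

  button.click();

  await convergeOn(() => {
    if (!button.disabled) { throw new Error('expected the button to be disabled'); }
  }, timeout, start);
}
```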

CHARLES: Right, right. So yeah, so we’re going to be collecting all this stuff that we’re learning into some publicly available code. We have a repository set up. I don’t know if I want to announce it just yet, because it’s really early days.

WIL: Yeah.

CHARLES: But that definitely is the plan. And that way, whether you’re using Mocha or whether you’re using QUnit or whether you’re using Chai or jQuery, you’ve got these underlying primitives that help you converge on a state, whether that state is to interact with some piece of the DOM or to just assert some observation is made about that state. We’ll be continuing on that. But by all means, get in touch if this is something that is of interest to you. Let’s make something happen, because it’s something that we’re pretty excited about. And honestly, it’s pretty comfortable living inside the four walls of this test suite.

WIL: Yeah.

CHARLES: It feels pretty good.

WIL: It does, yeah. They're very fast. And some places in the tests might need a little reworking, but for the most part all of our tests are very well-written and very readable. And you can just open up a test and know exactly what's going on.

CHARLES: Yeah. Alright. Well, I think that about does it for Episode 90. Wow, Episode 90.

WIL: Man, coming up on that 100.

CHARLES: Yeah. We’re going to have to have a birthday cake or something.

WIL: Do we celebrate Episode 100 or Episode 104?

CHARLES: What’s 104?

WIL: 104 would be 2 years.

CHARLES: Oh really?

WIL: Well, I mean 2 years’ worth of podcasts.

CHARLES: Oh, right. 2 years’ worth of podcasts. Yeah.

WIL: Yeah, like if you go every week.

CHARLES: Maybe we should celebrate a hundred hours or something like that.

WIL: Oh yeah.

CHARLES: We can add up the thing or celebrate… I don’t know, be like, “You’ve literally wasted 2 years of your life.”

WIL: [Laughs]

CHARLES: "2 weeks of your life listening to the podcast." Anyway, so that's it for Episode 90. And thank you so much, Wil.

WIL: Thanks for having me.

CHARLES: It’s always a pleasure to talk about these topics with you. And as always, if you need to get in touch with us, please reach out to us on Twitter. We are @TheFrontside. Or you can send an email to contact@frontside.io.
