Ori Moisis
November 26, 2020
In the past year we've been writing and expanding our open-source TypeScript project. One of the first things we prioritized when setting the project up was establishing a solid automated test baseline, so we could develop with confidence and maintain a level of safety when accepting contributions from the community.
In this post I will share a few issues we encountered while working with the very popular and powerful Jest framework, along with some mitigations and workarounds.
Jest has a really nice framework for creating mock functions for unit tests and we use that framework quite extensively.
In our early tests we would create mock functions in the most straightforward way, with jest.fn().
This worked great for a while, but the problem with using jest.fn() is that it creates a mock function that is completely decoupled from the interface of the function being mocked. There is nothing preventing the mock function from returning the wrong type of answer.
This is usually not a major problem when the test is first written, because it is fairly easy to create a mock of a specific function / interface. The problem begins when the code undergoes refactoring and some interfaces change over time.
Having mock implementations decoupled from the interfaces they supposedly mimic makes it very easy to miss mocks during a refactor and could lead to a situation where some tests are not failing even when the code has a bug in the scenario under test.
For example, let's consider testing the following function:
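A minimal sketch of such a function - the names predicateOrZero and NumberPredicate are the ones used throughout this post, though the exact original code may have differed:

```typescript
// A predicate over numbers - the dependency we will later mock
type NumberPredicate = (num: number) => boolean

// Returns the input number if the predicate holds for it, zero otherwise
const predicateOrZero = (num: number, predicate: NumberPredicate): number => (
  predicate(num) ? num : 0
)
```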
We have a function that accepts a number and a predicate. If that predicate is true for the input number then it will return the number. If it is false then it will return zero.
To test this function we can create a few tests and use a mock predicate:
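A sketch of such tests, using jest.fn() directly (the scenario names follow the ones referenced later in this post):

```typescript
describe('predicateOrZero', () => {
  let mockPredicate: jest.Mock

  beforeEach(() => {
    // a mock completely decoupled from the NumberPredicate interface
    mockPredicate = jest.fn()
  })

  describe('when the predicate is true', () => {
    beforeEach(() => {
      mockPredicate.mockReturnValue(true)
    })
    it('should return the input number', () => {
      expect(predicateOrZero(10, mockPredicate)).toEqual(10)
    })
  })

  describe('when the predicate is false', () => {
    beforeEach(() => {
      mockPredicate.mockReturnValue(false)
    })
    it('should return zero', () => {
      expect(predicateOrZero(10, mockPredicate)).toEqual(0)
    })
  })
})
```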
So far so good.
Now let's consider what happens if somewhere down the line we decide that we need to change the NumberPredicate interface to be asynchronous.
Assuming we do not change the function implementation:
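A sketch of this change - the type becomes asynchronous while the implementation stays the same:

```typescript
// NumberPredicate is now asynchronous
type NumberPredicate = (num: number) => Promise<boolean>

// Implementation unchanged - but predicate(num) is now a Promise,
// which is always truthy, so the function can never return 0
const predicateOrZero = (num: number, predicate: NumberPredicate): number => (
  predicate(num) ? num : 0
)
```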
We will have a bug.
The code itself is still valid TypeScript, but it is no longer correct - predicate now returns a promise, and a Promise object is always truthy, which means that predicateOrZero will never return 0, even when the predicate evaluates to false.
In our tests, however, this bug will not be caught, even though in theory we have this case covered by the "when the predicate is false" test case. This is because our mock predicate was not updated to return a promise as the interface now requires; it still returns a plain false.
This mistake is not caught by TypeScript because jest.Mock and jest.fn() are typed as a function from any to any, so the compiler is willing to accept our mock as the predicate even though its implementation returns the wrong type.
Luckily for us, we can use TypeScript to create this coupling between an interface and a mock function - we just have to be a little more explicit when we write the mock function and specify the type of the function it is supposed to mock.
Jest's types already provide the useful MockedFunction type for this purpose; in our repository we added a couple of additional helpers:
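A helper along these lines - illustrative, the actual helpers in the repository may differ - wraps jest.fn() with the mocked function's own type:

```typescript
// Creates a jest.fn() that is typed according to the function it mocks,
// so TypeScript checks the mock's arguments and return values
const mockFunction = <T extends (...args: never[]) => unknown>(): jest.MockedFunction<T> =>
  jest.fn() as jest.MockedFunction<T>
```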
We can use the more explicit mock type to define our mockPredicate:
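A sketch - the essential part is the explicit type annotation tying the mock to NumberPredicate:

```typescript
// typed as a mock of NumberPredicate rather than a function from any to any
const mockPredicate = jest.fn() as jest.MockedFunction<NumberPredicate>

// with the asynchronous NumberPredicate, mockReturnValue(true) no longer
// compiles - the mock has to resolve a promise instead:
mockPredicate.mockResolvedValue(true)
```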
By using the correct type in the mock function we add a "reference" to the interface, so the TypeScript compiler helps us catch mocks that do not comply with the interface they are supposed to implement.
In our example this would cause the TypeScript compiler to emit an error on the line where the mock's return value is set, saying:
error TS2345: Argument of type 'true' is not assignable to parameter of type 'Promise<boolean>'
So as soon as someone makes a breaking change to the interface of the function being mocked, they will have to fix these tests as well.
One of the nice things Jest mock functions do is record the arguments passed in every call.
This is useful for testing functions that use callbacks. For example, it can be used to check that a function calls the callback with the correct parameters, as we did in the following test:
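A sketch of such a test:

```typescript
it('should call the predicate with the input number', () => {
  predicateOrZero(10, mockPredicate)
  // the mock recorded the call, so we can assert on its arguments
  expect(mockPredicate).toHaveBeenCalledWith(10)
})
```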
One thing to keep in mind when using this capability is that mock functions which are declared outside of beforeEach will accumulate all calls from all unit tests unless they are reset.
For example, if we were to refactor the tests a bit such that they use the same mock function for both scenarios:
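A sketch of such a refactor, where mockPredicate is created once at the describe level instead of inside beforeEach:

```typescript
describe('predicateOrZero', () => {
  // created once - shared by every test below, accumulating all calls
  const mockPredicate = jest.fn()

  describe('when the predicate is true', () => {
    beforeEach(() => {
      mockPredicate.mockReturnValue(true)
      predicateOrZero(10, mockPredicate)
    })
    it('should call the predicate with the input number', () => {
      expect(mockPredicate).toHaveBeenCalledWith(10)
    })
  })

  describe('when the predicate is false', () => {
    beforeEach(() => {
      mockPredicate.mockReturnValue(false)
      predicateOrZero(10, mockPredicate)
    })
    it('should call the predicate with the input number', () => {
      expect(mockPredicate).toHaveBeenCalledWith(10)
    })
  })
})
```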
In the example above, the second "should call the predicate with the input number" test is not really testing anything.
Even if we change the test setup to not call predicateOrZero at all like so:
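For example, dropping the call from the second scenario's setup entirely:

```typescript
describe('when the predicate is false', () => {
  beforeEach(() => {
    mockPredicate.mockReturnValue(false)
    // note: predicateOrZero is never called here
  })
  it('should call the predicate with the input number', () => {
    // still passes - the call with 10 was recorded by the previous scenario
    expect(mockPredicate).toHaveBeenCalledWith(10)
  })
})
```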
The test that expects mockPredicate to have been called would pass because the previous test scenario already called the mock function with 10.
There are two main ways to resolve such issues:
1. As we did in the original example, we can create all test context in beforeEach hooks rather than globally / directly under describe.
As a rule of thumb for not accidentally sharing context between tests, try to avoid const definitions outside the test setup unless they hold immutable values like strings or numbers.
This is generally my preferred method, as it requires little to no thinking and it solves the problem at the root - we will not share context between tests because the context is created separately for each test.
The disadvantage of this approach shows when creating the mock implementation relies on something that is time- or resource-heavy to create, in which case running the setup before each test could cause performance issues.
2. Reset the mock calls in beforeEach or before specific tests.
This requires a little more attention to detail and understanding the different ways to reset mock functions. For the purposes of tracking calls, it would be sufficient to add a mockClear for our function before each test.
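For example:

```typescript
beforeEach(() => {
  // clears recorded calls and results, but keeps the mock implementation
  // (mockReset would also drop the implementation; mockRestore additionally
  // restores the original function for mocks created with jest.spyOn)
  mockPredicate.mockClear()
})
```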
The reason I prefer the other solution is that this option doesn't actually fix the root cause of our problem - we are still sharing context between tests through the mock function - we only address the symptom by explicitly clearing the part of the shared context that we noticed was affecting our tests.
As our codebase grew, we started creating index.ts files inside package subdirectories to have better segmentation of submodules.
These index.ts files basically act as a separator between the "internal" implementation of a certain submodule / collection of files and the "external" interface of it. Kind of like a package within a package.
Consider, for example, a rather minimal (and admittedly silly) math utility "submodule":
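A sketch of such a layout - the file names here are assumptions, and getOneMore comes from the error message quoted below; addTwo is a hypothetical function that depends on the submodule:

```typescript
// mathUtils/internal.ts - the "internal" implementation
export const getOneMore = (num: number): number => num + 1

// mathUtils/index.ts - the "external" interface of the submodule
export * from './internal'

// addTwo.ts - a function that uses the submodule through its index
import * as mathUtils from './mathUtils'
export const addTwo = (num: number): number => (
  mathUtils.getOneMore(mathUtils.getOneMore(num))
)
```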
And an example of a test that would like to spyOn an exported method from mathUtils:
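A sketch of such a test (addTwo is the hypothetical dependent function from the layout above):

```typescript
import * as mathUtils from './mathUtils'
import { addTwo } from './addTwo'

describe('when getOneMore fails', () => {
  beforeEach(() => {
    // replace the dependency with a failing implementation
    jest.spyOn(mathUtils, 'getOneMore').mockImplementation(() => {
      throw new Error('getOneMore failed')
    })
  })
  it('should propagate the error', () => {
    expect(() => addTwo(1)).toThrow('getOneMore failed')
  })
})
```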
Here, we wish to test our function when one of its dependencies fails, so we mock that dependency with a failing implementation.
The example above looks fine and even passes the TypeScript compiler, but if we tried to run it we would find that Jest simply cannot spy on the method we asked it to spy on, and we would get the following error:
TypeError: Cannot set property getOneMore of #<Object> which has only a getter
When we encounter such an error but still want to use spyOn, we can do one of two things:
1. We can import the function from the "internal" file that exported it in the first place instead of mocking it through the index.ts file, by changing our import to:
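Assuming the internal file is named internal.ts (the actual file name may differ):

```typescript
// instead of importing through the submodule's index.ts:
// import * as mathUtils from './mathUtils'
// import the internal file that defines getOneMore directly:
import * as mathUtils from './mathUtils/internal'
```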
This is not ideal, as it requires more intimate knowledge of the file structure, but it might be acceptable for tests that are this coupled to the implementation anyway.
2. We can mock the whole package and use jest.requireActual to avoid mocking the other functions and to also preserve the original implementation:
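A sketch:

```typescript
jest.mock('./mathUtils', () => ({
  // keep the real implementation of everything the submodule exports...
  ...jest.requireActual('./mathUtils'),
  // ...except getOneMore, which becomes a mock function
  getOneMore: jest.fn(),
}))
```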
Then we just need to slightly change our test setup to get the mock function:
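For example - the cast is needed because TypeScript does not know the imported function was replaced with a mock:

```typescript
import * as mathUtils from './mathUtils'

const mockGetOneMore = mathUtils.getOneMore as jest.MockedFunction<typeof mathUtils.getOneMore>

beforeEach(() => {
  mockGetOneMore.mockImplementation(() => {
    throw new Error('getOneMore failed')
  })
})
```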
This method has a few disadvantages: the jest.mock call applies to the entire test file and is hoisted above the imports, and TypeScript does not know the imported function has been replaced with a mock, so accessing the mock's methods requires a cast.
In this post I tried to describe three specific issues we faced while writing tests for our codebase over the past year, issues that have shaped how I think about best practices for writing tests, specifically with the Jest framework.
I hope my view on these issues and their solutions can be helpful to anyone who encounters similar dilemmas, as I am sure there are many more interesting details around this topic and this framework :)