Note: this is best read on my website. The original post includes runnable React demos that I had to remove, as dev.to does not support MDX.
This is also my first post here, hope you'll enjoy it :)
Many blog articles talk about loading API/async data in React apps, with `componentDidMount`, `useEffect`, Redux, Apollo...

Yet, all those articles are generally optimistic, and never mention something important to consider: race conditions can happen, and your UI may end up in an inconsistent state.
A picture is worth a thousand words:
You search for Macron, then change your mind and search for Trump, and you end up with a mismatch between what you want (Trump) and what you get (Macron).
If there is a non-zero probability that your UI could end up in such a state, your app is subject to race conditions.
Why does this happen?
Sometimes, multiple requests are fired in parallel (competing to render the same view), and we just assume the last request will resolve last. Actually, the last request may resolve first, or just fail, leading to the first request resolving last.
It happens more often than you think. For some apps, it can lead to very serious problems, like a user buying the wrong product, or a doctor prescribing the wrong drug to a patient.
A non-exhaustive list of reasons:
- The network is slow, bad, unpredictable, with variable request latencies...
- The backend is under heavy load, throttling some requests, under a Denial-of-Service attack...
- The user is clicking fast, commuting, travelling, in the countryside...
- You are just unlucky
Developers rarely see these issues in development, where network conditions are generally good, and where the backend API sometimes runs on your own computer, with close to 0ms latency.
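To see the core problem outside of React, here is a minimal sketch (with a hypothetical `fakeRequest` helper) where the last request issued is not the last one to resolve:

```javascript
// fakeRequest resolves after the given delay, simulating variable latency
const fakeRequest = (id, delayMs) =>
  new Promise(resolve => setTimeout(() => resolve({ id }), delayMs));

let rendered;

// The request for hero 1 is slow, the request for hero 2 is fast:
// the last response to ARRIVE wins, not the last request ISSUED
const demo = Promise.all([
  fakeRequest(1, 300).then(data => (rendered = data)),
  fakeRequest(2, 50).then(data => (rendered = data)),
]).then(() => rendered);

demo.then(result => console.log(result)); // { id: 1 }
```

Even though hero 2 was requested last, the UI ends up showing hero 1: exactly the mismatch shown in the image above.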
In this post, I'll show you what those issues do, using realistic network simulations and runnable demos. I'll also explain how you can fix those issues, depending on the libraries you already use.
Disclaimer: to keep the focus on race conditions, the following code samples will not prevent the React warning if you `setState` after unmounting.
The offending code:
You probably already read tutorials with the following code:
const StarwarsHero = ({ id }) => {
const [data, setData] = useState(null);
useEffect(() => {
setData(null);
fetchStarwarsHeroData(id).then(
result => setData(result),
e => console.warn('fetch failure', e),
);
}, [id]);
return <div>{data ? data.name : <Spinner />}</div>;
};
Or with the class API:
class StarwarsHero extends React.Component {
state = { data: null };
fetchData = id => {
fetchStarwarsHeroData(id).then(
result => this.setState({ data: result }),
e => console.warn('fetch failure', e),
);
};
componentDidMount() {
this.fetchData(this.props.id);
}
componentDidUpdate(prevProps) {
if (prevProps.id !== this.props.id) {
this.fetchData(this.props.id);
}
}
render() {
const { data } = this.state;
return <div>{data ? data.name : <Spinner />}</div>;
}
}
Both versions above lead to the same result. When changing the id very fast, even on a good home network with a very fast API, something is wrong: sometimes the previous request's data is rendered. Please don't think debouncing protects you: it just reduces the chances of being unlucky.
Now let's see what happens when you are on a train with a few tunnels.
Simulating bad network conditions
Let's build some utils to simulate bad network conditions:
import { sample } from 'lodash';
// Will return a promise delayed by a random amount, picked in the delay array
const delayRandomly = () => {
const timeout = sample([0, 200, 500, 700, 1000, 3000]);
return new Promise(resolve =>
setTimeout(resolve, timeout),
);
};
// Will throw randomly with a 1/4 chance ratio
const throwRandomly = () => {
const shouldThrow = sample([true, false, false, false]);
if (shouldThrow) {
throw new Error('simulated async failure');
}
};
Adding network delays
You might be on a slow network, or the backend may take time to answer.
useEffect(() => {
setData(null);
fetchStarwarsHeroData(id)
.then(async data => {
await delayRandomly();
return data;
})
.then(
result => setData(result),
e => console.warn('fetch failure', e),
);
}, [id]);
Adding network delays + failures
You are on a train in the countryside, and there are a few tunnels: requests are delayed randomly and some of them might fail.
useEffect(() => {
setData(null);
fetchStarwarsHeroData(id)
.then(async data => {
await delayRandomly();
throwRandomly();
return data;
})
.then(
result => setData(result),
e => console.warn('fetch failure', e),
);
}, [id]);
This code very easily leads to weird, inconsistent UI states.
How to avoid this problem
Let's suppose 3 requests R1, R2 and R3 get fired in this order, and are all still pending. The solution is to only handle the response of R3, the last issued request.
There are a few ways to do so:
- Ignoring responses from former api calls
- Cancelling former api calls
- Cancelling and ignoring
Ignoring responses from former api calls
Here is one possible implementation.
// A ref to store the last issued pending request
const lastPromise = useRef();
useEffect(() => {
setData(null);
// fire the api request
const currentPromise = fetchStarwarsHeroData(id).then(
async data => {
await delayRandomly();
throwRandomly();
return data;
},
);
// store the promise to the ref
lastPromise.current = currentPromise;
// handle the result with filtering
currentPromise.then(
result => {
if (currentPromise === lastPromise.current) {
setData(result);
}
},
e => {
if (currentPromise === lastPromise.current) {
console.warn('fetch failure', e);
}
},
);
}, [id]);
Some might be tempted to use the `id` to do this filtering, but it's not a good idea: if the user clicks `next` and then `previous`, we might end up with 2 distinct requests for the same hero. Generally this is not a problem (as the 2 requests will often return the exact same data), but using promise identity is a more generic and portable solution.
Cancelling former api calls
It is better to cancel former API requests in-flight: the browser can avoid parsing the response and prevent some useless CPU/network usage. `fetch` supports cancellation thanks to `AbortSignal`:
const abortController = new AbortController();
// fire the request, with an abort signal,
// which will permit premature abortion
fetch(`https://swapi.co/api/people/${id}/`, {
signal: abortController.signal,
});
// abort the request in-flight
// the request will be marked as "cancelled" in devtools
abortController.abort();
An abort signal is like a little event emitter: you can trigger it (through the `AbortController`), and every request started with this signal will be notified and cancelled.
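You don't even need `fetch` to observe this event-emitter behavior; a small sketch:

```javascript
const controller = new AbortController();
const { signal } = controller;

// Any consumer of the signal can subscribe to the abortion event
signal.addEventListener('abort', () => {
  console.log('aborted!');
});

console.log(signal.aborted); // false
controller.abort(); // synchronously notifies every listener
console.log(signal.aborted); // true
```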
Let's see how to use this feature to solve race conditions:
// Store abort controller which will permit to abort
// the last issued request
const lastAbortController = useRef();
useEffect(() => {
setData(null);
// When a new request is going to be issued,
// the first thing to do is cancel the previous request
if (lastAbortController.current) {
lastAbortController.current.abort();
}
// Create new AbortController for the new request and store it in the ref
const currentAbortController = new AbortController();
lastAbortController.current = currentAbortController;
// Issue the new request, that may eventually be aborted
// by a subsequent request
const currentPromise = fetchStarwarsHeroData(id, {
signal: currentAbortController.signal,
}).then(async data => {
await delayRandomly();
throwRandomly();
return data;
});
currentPromise.then(
result => setData(result),
e => console.warn('fetch failure', e),
);
}, [id]);
This code looks good at first, but actually we are still not safe.
Let's consider the following code:
const abortController = new AbortController();
fetch('/', { signal: abortController.signal }).then(
async response => {
await delayRandomly();
throwRandomly();
return response.json();
},
);
If we abort the request during the fetch, the browser will be notified and can do something about it. But if the abortion happens while the browser is running the `then()` callback, it has no way to handle the abortion of this part of the code, and you have to write this logic on your own. If the abortion happens during the fake delay we added, it won't cancel that delay or stop the flow.
fetch('/', { signal: abortController.signal }).then(
async response => {
await delayRandomly();
throwRandomly();
const data = await response.json();
// Here you can decide to handle the abortion the way you want.
// Throwing or never resolving are valid options
if (abortController.signal.aborted) {
return new Promise(() => {});
}
return data;
},
);
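One possible way to also make the fake delay itself abortable (a sketch, not part of the original demos: the hypothetical `abortableDelay` would replace `delayRandomly`) is to pass the signal down and cancel the timer:

```javascript
// A delay that rejects as soon as the signal fires,
// instead of letting the timer run to completion
const abortableDelay = (ms, signal) =>
  new Promise((resolve, reject) => {
    if (signal && signal.aborted) {
      return reject(new Error('aborted'));
    }
    const timer = setTimeout(resolve, ms);
    if (signal) {
      signal.addEventListener('abort', () => {
        clearTimeout(timer);
        reject(new Error('aborted'));
      });
    }
  });
```

This way, an abortion interrupts the whole chain immediately instead of waiting for the timer to fire.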
Let's get back to our problem. Here's the final, safe version: it aborts the request in-flight, but also uses the abortion to filter out results that arrive late. Also, let's use the hook's cleanup function, as was suggested to me on Twitter, which makes the code a bit simpler.
useEffect(() => {
setData(null);
// Create the current request's abort controller
const abortController = new AbortController();
// Issue the request
fetchStarwarsHeroData(id, {
signal: abortController.signal,
})
// Simulate some delay/errors
.then(async data => {
await delayRandomly();
throwRandomly();
return data;
})
// Set the result, if not aborted
.then(
result => {
// IMPORTANT: we still need to filter the results here,
// in case abortion happens during the delay.
// In real apps, abortion could happen when you are parsing the json,
// with code like "fetch().then(res => res.json())"
// but also any other async then() you execute after the fetch
if (abortController.signal.aborted) {
return;
}
setData(result);
},
e => console.warn('fetch failure', e),
);
// Trigger the abortion in useEffect's cleanup function
return () => {
abortController.abort();
};
}, [id]);
And only now are we safe.
Using libraries
Doing all this manually is complex and error-prone. Fortunately, some libraries solve this problem for you. Let's explore a non-exhaustive list of libraries generally used for loading data into React.
Redux
There are multiple ways to load data into a Redux store. Generally, if you are using Redux-saga or Redux-observable, you are fine. For Redux-thunk, Redux-promise and other middlewares, check the "vanilla React/Promise" solutions in the next sections.
Redux-saga
You might notice there are multiple `take` methods in the Redux-saga API, but generally you'll find many examples using `takeLatest`. This is because `takeLatest` protects you against those race conditions.
> Forks a saga on each action dispatched to the Store that matches pattern. And automatically cancels any previous saga task started previously if it's still running.
function* loadStarwarsHeroSaga() {
yield* takeLatest(
'LOAD_STARWARS_HERO',
function* loadStarwarsHero({ payload }) {
try {
const hero = yield call(fetchStarwarsHero, [
payload.id,
]);
yield put({
type: 'LOAD_STARWARS_HERO_SUCCESS',
hero,
});
} catch (err) {
yield put({
type: 'LOAD_STARWARS_HERO_FAILURE',
err,
});
}
},
);
}
The previous `loadStarwarsHero` generator executions will be "cancelled". Unfortunately, the underlying API request will not really be cancelled (you need an `AbortSignal` for that), but Redux-saga will ensure that the success/error actions are only dispatched to Redux for the last requested Starwars hero. For in-flight request cancellation, follow this issue.
You can also opt out of this protection and use `take` or `takeEvery`.
Redux-observable
Similarly, Redux-observable (actually RxJS) has a solution: `switchMap`:
> The main difference between switchMap and other flattening operators is the cancelling effect. On each emission the previous inner observable (the result of the function you supplied) is cancelled and the new observable is subscribed. You can remember this by the phrase switch to a new observable.
const loadStarwarsHeroEpic = action$ =>
action$.ofType('LOAD_STARWARS_HERO').switchMap(action =>
Observable.ajax(`http://data.com/${action.payload.id}`)
.map(hero => ({
type: 'LOAD_STARWARS_HERO_SUCCESS',
hero,
}))
.catch(err =>
Observable.of({
type: 'LOAD_STARWARS_HERO_FAILURE',
err,
}),
),
);
You can also use other RxJS operators like `mergeMap` if you know what you are doing, but many tutorials use `switchMap`, as it's a safer default. Like Redux-saga, it won't cancel the underlying request in-flight, but there are solutions to add this behavior.
Apollo
Apollo lets you pass down GraphQL query variables. Whenever the Starwars hero id changes, a new request is fired to load the appropriate data. Whether you use the HOC, the render props or the hooks, Apollo will always guarantee that if you request `id: 2`, your UI will never render the data of another Starwars hero.
const { data } = useQuery(GET_STARWARS_HERO, {
variables: { id },
});
if (data) {
// This is always true, hopefully!
assert(data.id === id);
}
Vanilla React
There are many libraries to load data into React components, without needing a global state management solution.
I created react-async-hook: a very simple and tiny hooks library to load async data into React components. It has very good native TypeScript support, and protects you against race conditions by using the techniques discussed above.
import { useAsync } from 'react-async-hook';
const fetchStarwarsHero = async id =>
(await fetch(
`https://swapi.co/api/people/${id}/`,
)).json();
const StarwarsHero = ({ id }) => {
const asyncHero = useAsync(fetchStarwarsHero, [id]);
return (
<div>
{asyncHero.loading && <div>Loading</div>}
{asyncHero.error && (
<div>Error: {asyncHero.error.message}</div>
)}
{asyncHero.result && (
<div>
<div>Success!</div>
<div>Name: {asyncHero.result.name}</div>
</div>
)}
</div>
);
};
Other options protecting you:
- react-async: quite similar, also with render props api
- react-refetch: older project, based on HOCs
There are many other library options, for which I won't be able to tell you if they are protecting you: take a look at the implementation.
Note: it's possible `react-async-hook` and `react-async` will merge in the coming months.
Note: it's possible to use `<StarwarsHero key={id} id={id} />` as a simple React workaround, to ensure the component remounts every time the id changes. This will protect you (and is sometimes a useful feature), but gives more work to React.
Vanilla promises and Javascript
If you are dealing with vanilla promises and Javascript, here are simple tools you can use to prevent those issues.
Those tools can also be useful to handle race conditions if you are using thunks or promises with Redux.
Note: some of these tools are actually low-level implementation details of react-async-hook.
Cancellable promises
React has an old blog post, isMounted() is an antipattern, in which you'll learn how to make a promise cancellable to avoid the setState-after-unmount warning. The promise is not really cancellable (the underlying API call won't be cancelled), but you can choose to ignore or reject the response of a promise.
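The wrapper described in that post can be sketched like this (adapted from the React blog post's idea):

```javascript
// Wraps a promise so that the wrapper can be "cancelled":
// the underlying work still runs, but a cancelled wrapper
// rejects with { isCanceled: true } instead of reaching your setState
const makeCancelable = promise => {
  let hasCanceled = false;
  const wrappedPromise = new Promise((resolve, reject) => {
    promise.then(
      val => (hasCanceled ? reject({ isCanceled: true }) : resolve(val)),
      error => (hasCanceled ? reject({ isCanceled: true }) : reject(error)),
    );
  });
  return {
    promise: wrappedPromise,
    cancel() {
      hasCanceled = true;
    },
  };
};
```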
I made a library awesome-imperative-promise to make this process easier:
import { createImperativePromise } from 'awesome-imperative-promise';
const id = 1;
const { promise, resolve, reject, cancel } = createImperativePromise(fetchStarwarsHero(id));
// will resolve the returned promise manually
resolve({
id,
name: "R2D2"
});
// will reject the returned promise manually
reject(new Error("can't load Starwars hero"));
// will ensure the returned promise never resolves or rejects
cancel();
Note: all those methods have to be called before the underlying API request resolves or rejects. If the promise has already resolved, there's no way to "unresolve" it.
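A minimal sketch of how such an imperative wrapper can work (the actual library implementation may differ):

```javascript
// Wraps an optional promise and exposes imperative handles to
// resolve/reject/cancel the WRAPPER (the underlying promise keeps running)
const createImperativePromise = promiseArg => {
  let resolveHandle;
  let rejectHandle;
  let canceled = false;
  const wrappedPromise = new Promise((resolve, reject) => {
    resolveHandle = value => {
      if (!canceled) resolve(value);
    };
    rejectHandle = error => {
      if (!canceled) reject(error);
    };
  });
  // Forward the wrapped promise's outcome, unless cancel() was called first
  if (promiseArg) {
    promiseArg.then(resolveHandle, rejectHandle);
  }
  return {
    promise: wrappedPromise,
    resolve: resolveHandle,
    reject: rejectHandle,
    cancel: () => {
      canceled = true;
    },
  };
};
```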
Automatically ignoring last call
awesome-only-resolves-last-promise is a library to ensure we only handle the result of the last async call:
import { onlyResolvesLast } from 'awesome-only-resolves-last-promise';
const fetchStarwarsHeroLast = onlyResolvesLast(
fetchStarwarsHero,
);
const promise1 = fetchStarwarsHeroLast(1);
const promise2 = fetchStarwarsHeroLast(2);
const promise3 = fetchStarwarsHeroLast(3);
// promise1: won't resolve
// promise2: won't resolve
// promise3: WILL resolve
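Under the hood, such a helper can be built with the promise-identity technique shown earlier. A minimal sketch of the idea (the actual library implementation may differ):

```javascript
// Wraps an async function: only the promise returned by the LAST call
// will ever resolve; earlier pending calls simply never settle
const onlyResolvesLast = asyncFn => {
  let lastCallId = 0;
  return (...args) => {
    const callId = ++lastCallId;
    return new Promise((resolve, reject) => {
      asyncFn(...args).then(
        result => {
          // Only settle if no newer call has been issued since
          if (callId === lastCallId) resolve(result);
        },
        error => {
          if (callId === lastCallId) reject(error);
        },
      );
    });
  };
};
```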
What about Suspense?
It should prevent those issues, but let's wait for the official release :)
Conclusion
For your next React data loading usecase, I hope you will consider handling race conditions properly.
I can also recommend hardcoding some small delays into your API requests in the development environment: potential race conditions and bad loading experiences will be easier to notice. I think it's safer to make this delay mandatory, instead of expecting each developer to turn on the slow-network option in devtools.
I hope you've found this post interesting and you learned something, it was my first technical blog post ever :)
Originally posted on my website
If you like it, spread the word with a Retweet
Browse the demos code or correct my post typos on the blog repo
For more content like this, subscribe to my mailing list and follow me on Twitter.
Thanks to my reviewers: Shawn Wang, Mateusz Burzyński, Andrei Calazans, Adrian Carolli, Clément Oriol, Thibaud Duthoit, Bernard Pratz