
I'm using the ultimate combination of React + Redux + Reselect + Immutable.js in my application. I like the idea of reselect because it lets me keep my state (maintained by the reducers) as simple as possible. I use a selector to calculate the actual state I need which is then fed to the React components.

The problem here is that a small change in one of the reducers causes the selectors to recalculate the whole derived output, and as a result the whole React UI is updated. My pure components don't work. It's slow.

Typical example: The first part of my data comes from the server and is basically immutable. The second part is maintained by the client and is mutated using Redux actions. The two parts are maintained by separate reducers.

I use a selector to merge both parts into a single list of Records which is then passed to the React components. But obviously, when I change a single thing in one of the objects, the whole list is regenerated and new instances of Records are created. And the UI is completely re-rendered.

Obviously running the selector every time is not exactly efficient but is still reasonably fast and I'd be willing to make that trade off (because it does make the code way simpler and cleaner). The problem is the actual rendering which is slow.

What I'd need to do would be to deep merge the new selector output with the old one, because the Immutable.js library is smart enough not to create new instances when nothing has changed. But as selectors are simple functions that do not have access to their previous outputs, I guess it's not possible.

I assume that my current approach is wrong and I'd like to hear other ideas.

Probably the way to go would be to get rid of reselect in this case and move the logic into a hierarchy of reducers that would use incremental updates to maintain the desired state.

I solved my problem but I guess there is no right answer as it really depends on a specific situation. In my case, I decided to go with this approach:


One of the challenges that the original selector handled nicely was that the final information was compiled from many pieces that were delivered in an arbitrary order. If I decided to build up the final information in my reducers incrementally, I'd have to account for all possible scenarios (all possible orders in which the information pieces could arrive) and define transformations between all possible states. Whereas with reselect, I can simply take what I currently have and make something out of it.

To keep this functionality, I decided to move the selector logic into a wrapping parent reducer.

Okay, let's say that I have three reducers, A, B and C, and corresponding selectors. Each handles one piece of information. The piece could be loaded from server or it could originate from the user on the client side. This would be my original selector:

const makeFinalState = (a, b, c) => new List(a).map(item =>
  new MyRecord({ ...item, ...(b[item.id] || {}), ...(c[item.id] || {}) }));

export const finalSelector = createSelector(
  [selectorA, selectorB, selectorC],
  (a, b, c) => makeFinalState(a, b, c));

(This is not the actual code but I hope it makes sense. Note that regardless of the order in which the contents of individual reducers become available, the selector will eventually generate the correct output.)
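For illustration, here is a plain-object version of the same merge (hypothetical shapes: `a` is an array of items, `b` and `c` are objects keyed by id), showing the order-independence:

```javascript
// Plain-object sketch of makeFinalState (no Immutable.js, hypothetical shapes):
// each item from `a` is overlaid with whatever `b` and `c` currently know
// about it, so pieces that haven't arrived yet simply contribute nothing.
const makeFinalStatePlain = (a, b, c) =>
  a.map(item => ({ ...item, ...(b[item.id] || {}), ...(c[item.id] || {}) }));

// `c` has arrived but `b` has not; the merge still produces the best
// output available from the current state:
makeFinalStatePlain([{ id: 1, name: 'x' }], {}, { 1: { selected: true } });
// → [{ id: 1, name: 'x', selected: true }]
```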

I hope my problem is clear now. In case the content of any of those reducers changes, the selector is recalculated from scratch, generating completely new instances of all records which eventually results in complete re-renders of React components.

My current solution looks like this:

export default function finalReducer(state = new Map(), action) {
  state = state
    .update('a', a => aReducer(a, action))
    .update('b', b => bReducer(b, action))
    .update('c', c => cReducer(c, action));

  switch (action.type) {
    case HEAVY_ACTION_AFFECTING_A:
    case HEAVY_ACTION_AFFECTING_B:
    case HEAVY_ACTION_AFFECTING_C:
      return state.update('final', final => (final || new List()).mergeDeep(
        makeFinalState(state.get('a'), state.get('b'), state.get('c'))));

    case LIGHT_ACTION_AFFECTING_C: {
      const update = makeSmallIncrementalUpdate(state, action.payload);
      return state.update('final', final => (final || new List()).mergeDeep(update));
    }

    default:
      return state;
  }
}

export const finalSelector = state => state.get('final');

The core idea is this:

  • If something big happens (e.g. I get a huge chunk of data from the server), I rebuild the whole derived state.
  • If something small happens (e.g. a user selects an item), I just make a quick incremental change, both in the original reducer and in the wrapping parent reducer (there is a certain duplication, but it's necessary to achieve both consistency and good performance).

The main difference from the selector version is that I always merge the new state with the old one. The Immutable.js library is smart enough not to replace the old Record instances with new Record instances if their content is exactly the same. Therefore the original instances are kept, and as a result the corresponding pure components are not re-rendered.

Obviously, the deep merge is a costly operation, so this won't work for really large data sets. But the truth is that this kind of operation is still fast compared to React re-renders and DOM operations. So this approach can be a nice compromise between performance and code readability/conciseness.

Final note: If it wasn't for those light actions handled separately, this approach would be essentially equivalent to replacing shallowEqual with deepEqual inside shouldComponentUpdate method of pure components.
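To make that note concrete, here is a minimal sketch of the two checks (plain JS, simplified versions of both functions):

```javascript
// shallowEqual only compares top-level references, so a rebuilt object
// with identical content still fails the check; a deep check compares
// content recursively and passes.
const shallowEqual = (a, b) =>
  Object.keys(a).length === Object.keys(b).length &&
  Object.keys(a).every(k => a[k] === b[k]);

const deepEqual = (a, b) =>
  a === b ||
  (!!a && !!b && typeof a === 'object' && typeof b === 'object' &&
    Object.keys(a).length === Object.keys(b).length &&
    Object.keys(a).every(k => deepEqual(a[k], b[k])));

const prev = { item: { id: 1 } };
const next = { item: { id: 1 } }; // rebuilt from scratch, same content

shallowEqual(prev, next); // false → a pure component would re-render
deepEqual(prev, next);    // true  → a deep check would skip the re-render
```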

Answered by tobik (6 upvotes)

This kind of scenario can often be solved by refactoring how the UI is connected to the state. Let's say you have a component displaying a list of items: instead of connecting it to the already built list of items, you could connect it to a simple list of ids, and connect each individual item to its record by id. This way, when a record changes, the list of ids itself doesn't change and only the corresponding connected component is re-rendered.
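A minimal plain-object sketch of this normalization (hypothetical state shape with an `ids` array and a `byId` map):

```javascript
// The list of ids is stored separately from the records, so updating one
// record leaves the id list untouched: a list component connected only to
// `ids` sees reference-equal props and skips re-rendering, and only the
// item component connected to the changed record re-renders.
const state1 = {
  ids: [1, 2],
  byId: { 1: { id: 1, done: false }, 2: { id: 2, done: false } },
};

// Reducer-style update of a single record:
const state2 = {
  ...state1,
  byId: { ...state1.byId, 1: { ...state1.byId[1], done: true } },
};

state2.ids === state1.ids;         // true: the list component skips re-render
state2.byId[2] === state1.byId[2]; // true: the untouched item skips re-render
state2.byId[1] === state1.byId[1]; // false: only this item re-renders
```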

If, in your case, the record is assembled from different parts of the state, the selector yielding an individual record could itself be connected to the relevant parts of the state for that particular record.

Now, about the use of Immutable.js with reselect: this combination works best if the raw parts of your state are already Immutable.js objects. This way you can take advantage of the fact that they use persistent data structures, and the default memoization function from reselect works best. You can always override this memoization function, but feeling that a selector should access its previous return value is often a sign that it is in charge of data that should be held in the state, or that it is gathering too much data at once, and that maybe more granular selectors could help.
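As a sketch of such an override (plain objects for brevity; Immutable.js structures would use their own merge methods instead): a wrapper that reconciles each new selector output against the previous one, reusing old references wherever the content is unchanged, so downstream shallow-equality checks still pass.

```javascript
// Hypothetical wrapper: keep the previous output and, after every
// recomputation, replace freshly built but content-identical objects
// with their old instances (structural sharing by hand).
function withStructuralSharing(selector) {
  let prev;
  return (...args) => {
    const next = selector(...args);
    prev = reconcile(prev, next);
    return prev;
  };
}

// Returns `prev` unchanged when `next` has the same content; otherwise
// returns a new container that still reuses every unchanged child.
function reconcile(prev, next) {
  if (prev === next) return prev;
  if (prev && next && typeof prev === 'object' && typeof next === 'object') {
    const keys = Object.keys(next);
    let changed = Object.keys(prev).length !== keys.length;
    const out = Array.isArray(next) ? [] : {};
    for (const k of keys) {
      out[k] = reconcile(prev[k], next[k]);
      if (out[k] !== prev[k]) changed = true;
    }
    return changed ? out : prev;
  }
  return next;
}
```

With this wrapper, a selector that rebuilds its whole output on every call still hands back the exact same references for the parts that didn't change.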

Answered by VonD (1 upvote)

It looks like you are describing a scenario very close to the one why I wrote re-reselect.

re-reselect is a small reselect wrapper, which initializes selectors on the fly using a memoized factory.

(Disclaimer: I'm the author of re-reselect).

Answered by Andrea Carraro (0 upvotes)

Copyright © 2022 QueryThreads

All content on Query Threads is licensed under the Creative Commons Attribution-ShareAlike 3.0 license (CC BY-SA 3.0).