
How to Prevent Unnecessary Renders in React

React's rendering process is powerful but can become inefficient when components re-render without meaningful changes.

Let's explore strategies to prevent these unnecessary renders.

Understanding the Problem

React components typically re-render in three scenarios:

  • Its state changes
  • Its props change
  • Its parent component re-renders

The last scenario can lead to wasted renders when child components don't actually need updating. Let's look at an example:

function ParentComponent() {
  const [count, setCount] = useState(0);
  
  return (
    <div>
      <button onClick={() => setCount(count + 1)}>
        Clicked {count} times
      </button>
      <ChildComponent /> {/* Re-renders on every click despite no prop changes */}
    </div>
  );
}

React.memo for Function Components

Wrap function components with React.memo() to skip renders when props haven't changed:

const ChildComponent = React.memo(function ChildComponent() {
  console.log("Child rendered!");
  return <div>I'm a memoized component</div>;
});

// Now ChildComponent only re-renders when its props change

shouldComponentUpdate for Class Components

For class components, implement shouldComponentUpdate():

class ListItem extends React.Component {
  shouldComponentUpdate(nextProps) {
    // Only re-render if the item data changed
    return nextProps.item.id !== this.props.item.id || 
           nextProps.item.content !== this.props.item.content;
  }
  
  render() {
    return <div>{this.props.item.content}</div>;
  }
}

useMemo and useCallback Hooks

Use useMemo to cache expensive calculations and useCallback to keep a stable function reference between renders:

function SearchComponent({ data }) {
  const [query, setQuery] = useState("");
  
  // Without useMemo, filteredData would be recalculated on every render
  const filteredData = useMemo(() => {
    return data.filter(item => item.name.includes(query));
  }, [data, query]); // Only recalculate when data or query changes
  
  // useCallback returns the same handleClick reference on every render,
  // which pays off when the handler is passed to memoized children
  const handleClick = useCallback(() => {
    console.log("Button clicked!");
  }, []); // Empty dependency array: the reference never changes
  
  return (
    <div>
      <input value={query} onChange={e => setQuery(e.target.value)} />
      <button onClick={handleClick}>Search</button>
      <DataList data={filteredData} />
    </div>
  );
}

By implementing these techniques, you'll significantly reduce unnecessary renders and improve your React application's performance!

Related

Removing duplicates from a list in C# is a common task, especially when working with large datasets. C# provides multiple ways to achieve this efficiently, leveraging built-in collections and LINQ.

Using HashSet (Fastest for Unique Elements)

A HashSet<T> automatically removes duplicates since it only stores unique values. This is one of the fastest methods:

List<int> numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };
numbers = new HashSet<int>(numbers).ToList();
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 4, 5

Using LINQ Distinct (Concise and Readable)

LINQ’s Distinct() method provides an elegant way to remove duplicates:

List<int> numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };
numbers = numbers.Distinct().ToList();
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 4, 5

Removing Duplicates by Custom Property (For Complex Objects)

When working with objects, DistinctBy() from .NET 6+ simplifies duplicate removal based on a property:

using System.Linq;
using System.Collections.Generic;

class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

List<Person> people = new List<Person>
{
    new Person { Name = "Alice", Age = 30 },
    new Person { Name = "Bob", Age = 25 },
    new Person { Name = "Alice", Age = 30 }
};

people = people.DistinctBy(p => p.Name).ToList();
Console.WriteLine(string.Join(", ", people.Select(p => p.Name))); // Output: Alice, Bob

For earlier .NET versions, use GroupBy():

people = people.GroupBy(p => p.Name).Select(g => g.First()).ToList();

Performance Considerations

  • HashSet<T> is typically the fastest; for complex objects it needs correct Equals/GetHashCode overrides or a custom IEqualityComparer<T> (see the sketch below).
  • Distinct() is concise and readable, with performance close to building a HashSet<T> directly.
  • DistinctBy() handles complex objects efficiently; GroupBy() works on older .NET versions but builds intermediate groups, which costs extra memory.
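
As a rough sketch of the first point, here is one way to make HashSet<T> handle complex objects: continuing from the Person class and people list above, a hypothetical PersonNameComparer tells the set to treat two people with the same Name as duplicates.

// The HashSet uses PersonNameComparer to decide what counts as a duplicate.
people = new HashSet<Person>(people, new PersonNameComparer()).ToList();
Console.WriteLine(string.Join(", ", people.Select(p => p.Name))); // Output: Alice, Bob

// Treats two Person instances as equal when their Name values match.
class PersonNameComparer : IEqualityComparer<Person>
{
    public bool Equals(Person? x, Person? y) => x?.Name == y?.Name;
    public int GetHashCode(Person obj) => obj.Name?.GetHashCode() ?? 0;
}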

Conclusion

Choosing the best approach depends on the data type and use case. HashSet<T> is ideal for primitive types, Distinct() is simple and readable, and DistinctBy() (or GroupBy()) is effective for objects.

Slow initial load times can drive users away from your React application. One powerful technique to improve performance is lazy loading - loading components only when they're needed.

Let's explore how to implement this in React.

The Problem with Eager Loading

By default, your bundler packages all of your components into a single file, so users download everything upfront. Once that initial download completes, navigation is quick and streamlined.

However, depending on the size of your application, it can also mean a long initial load time.

import HeavyComponent from './HeavyComponent';
import AnotherHeavyComponent from './AnotherHeavyComponent';

function App() {
  return (
    <div>
      {/* These components load even if the user never sees them */}
      <HeavyComponent />
      <AnotherHeavyComponent />
    </div>
  );
}

React.lazy() to the Rescue

React.lazy() lets you defer loading components until they're actually needed:

import React, { lazy, Suspense } from 'react';

// Components are now loaded only when rendered
const HeavyComponent = lazy(() => import('./HeavyComponent'));
const AnotherHeavyComponent = lazy(() => import('./AnotherHeavyComponent'));

function App() {
  return (
    <div>
      <Suspense fallback={<div>Loading...</div>}>
        <HeavyComponent />
        <AnotherHeavyComponent />
      </Suspense>
    </div>
  );
}

Route-Based Lazy Loading

Combine React.lazy() with React Router so each route's code is downloaded only when the user navigates to it:

import React, { lazy, Suspense } from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

const Home = lazy(() => import('./pages/Home'));
const Dashboard = lazy(() => import('./pages/Dashboard'));
const Settings = lazy(() => import('./pages/Settings'));

function App() {
  return (
    <BrowserRouter>
      <Suspense fallback={<div>Loading...</div>}>
        <Routes>
          <Route path="/" element={<Home />} />
          <Route path="/dashboard" element={<Dashboard />} />
          <Route path="/settings" element={<Settings />} />
        </Routes>
      </Suspense>
    </BrowserRouter>
  );
}

Implement these techniques in your React application today and watch your load times improve dramatically!

Reading a file line by line is useful when handling large files without loading everything into memory at once.

✅ Best Practice: Use File.ReadLines(), which streams lines instead of loading the whole file into memory.

Example

foreach (string line in File.ReadLines("file.txt"))
{
    Console.WriteLine(line);
}

Why use ReadLines()?

Reads one line at a time, reducing overall memory usage. Ideal for large files (e.g., logs, CSVs).
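
For instance, here is a minimal sketch (the app.log file name is just a placeholder, and the usual System.IO/System.Linq usings are assumed) that counts error lines in a large log:

// ReadLines() streams the file and Count() enumerates it lazily,
// so only one line is held in memory at a time.
int errorCount = File.ReadLines("app.log")
    .Count(line => line.Contains("ERROR"));

Console.WriteLine($"Found {errorCount} error lines.");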

Alternative: Use StreamReader (More Control)

For scenarios where you need custom processing while reading the contents of the file:

using (StreamReader reader = new StreamReader("file.txt"))
{
    string? line;
    while ((line = reader.ReadLine()) != null)
    {
        Console.WriteLine(line);
    }
}

Why use StreamReader?

Lets you handle exceptions, encoding, and buffering. Supports custom processing (e.g., search for a keyword while reading).
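
As a small sketch of that kind of custom processing, here is the keyword search mentioned above (the file name and the "ERROR" keyword are placeholders):

using (StreamReader reader = new StreamReader("file.txt"))
{
    string? line;
    int lineNumber = 0;

    while ((line = reader.ReadLine()) != null)
    {
        lineNumber++;

        // Stop as soon as the keyword is found, so the rest of the file is never read.
        if (line.Contains("ERROR"))
        {
            Console.WriteLine($"Found \"ERROR\" at line {lineNumber}: {line}");
            break;
        }
    }
}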

When to Use ReadAllLines()?

If you need all lines at once, use:

string[] lines = File.ReadAllLines("file.txt");

Caution: Loads the entire file into memory—avoid for large files!
