
How to Prevent Unnecessary Renders in React

React's rendering process is powerful but can become inefficient when components re-render without meaningful changes.

Let's explore strategies to prevent these unnecessary renders.

Understanding the Problem

React components typically re-render in three scenarios:

  • Its state changes
  • Its props change
  • Its parent re-renders

The last scenario can lead to wasted renders when child components don't actually need updating.

Let's take a look at an example of this scenario:

function ParentComponent() {
  const [count, setCount] = useState(0);
  
  return (
    <div>
      <button onClick={() => setCount(count + 1)}>
        Clicked {count} times
      </button>
      <ChildComponent /> {/* Re-renders on every click despite no prop changes */}
    </div>
  );
}

React.memo for Function Components

Wrap function components with React.memo() to skip renders when props haven't changed:

const ChildComponent = React.memo(function ChildComponent() {
  console.log("Child rendered!");
  return <div>I'm a memoized component</div>;
});

// Now ChildComponent only re-renders when its props change (compared shallowly)
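
By default, React.memo() compares props shallowly. It also accepts an optional second argument, a custom comparison function, for cases where shallow comparison isn't the right fit. A minimal sketch (UserCard and its user prop are made up for this example):

const UserCard = React.memo(
  function UserCard({ user }) {
    return <div>{user.name}</div>;
  },
  // Return true when the props count as equal, which tells React to skip the re-render
  (prevProps, nextProps) => prevProps.user.id === nextProps.user.id
);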

shouldComponentUpdate for Class Components

For class components, implement shouldComponentUpdate():

class ListItem extends React.Component {
  shouldComponentUpdate(nextProps) {
    // Only re-render if the item data changed
    return nextProps.item.id !== this.props.item.id || 
           nextProps.item.content !== this.props.item.content;
  }
  
  render() {
    return <div>{this.props.item.content}</div>;
  }
}
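
If a shallow comparison of all props and state is enough, extending React.PureComponent gives the same effect without hand-writing shouldComponentUpdate():

class ListItem extends React.PureComponent {
  // PureComponent shallowly compares props and state before re-rendering.
  // It won't help if a parent passes a freshly created object on every render.
  render() {
    return <div>{this.props.item.content}</div>;
  }
}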

useMemo and useCallback Hooks

Prevent recreating objects and functions on each render:

function SearchComponent({ data }) {
  const [query, setQuery] = useState("");
  
  // Without useMemo, filteredData would be recalculated on every render
  const filteredData = useMemo(() => {
    return data.filter(item => item.name.includes(query));
  }, [data, query]); // Only recalculate when data or query changes
  
  // Prevent handleClick from being recreated on every render
  const handleClick = useCallback(() => {
    console.log("Button clicked!");
  }, []); // Empty dependency array means this function never changes
  
  return (
    <div>
      <input value={query} onChange={e => setQuery(e.target.value)} />
      <button onClick={handleClick}>Search</button>
      <DataList data={filteredData} />
    </div>
  );
}
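
Note that useCallback on its own doesn't skip any renders; it pays off when the stable function is passed to a memoized child. A quick sketch (SearchButton is a hypothetical component, not part of the example above):

const SearchButton = React.memo(function SearchButton({ onClick }) {
  console.log("SearchButton rendered!");
  return <button onClick={onClick}>Search</button>;
});

// Because handleClick keeps the same identity across renders,
// a memoized SearchButton skips re-rendering when only the query state changes.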

By applying these techniques where renders are measurably wasteful, you can cut unnecessary work and improve your React application's performance!


Related

Removing duplicates from a list in C# is a common task, especially when working with large datasets. C# provides multiple ways to achieve this efficiently, leveraging built-in collections and LINQ.

Using HashSet (Fastest for Unique Elements)

A HashSet<T> automatically removes duplicates since it only stores unique values. This is one of the fastest methods, though keep in mind that a HashSet<T> does not guarantee the original element order:

List<int> numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };
numbers = new HashSet<int>(numbers).ToList();
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 4, 5

Using LINQ Distinct (Concise and Readable)

LINQ’s Distinct() method provides an elegant way to remove duplicates:

List<int> numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };
numbers = numbers.Distinct().ToList();
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 4, 5

Removing Duplicates by Custom Property (For Complex Objects)

When working with objects, DistinctBy() from .NET 6+ simplifies duplicate removal based on a property:

using System;
using System.Collections.Generic;
using System.Linq;

List<Person> people = new List<Person>
{
    new Person { Name = "Alice", Age = 30 },
    new Person { Name = "Bob", Age = 25 },
    new Person { Name = "Alice", Age = 30 }
};

people = people.DistinctBy(p => p.Name).ToList();
Console.WriteLine(string.Join(", ", people.Select(p => p.Name))); // Output: Alice, Bob

// With top-level statements, type declarations must come after the statements.
class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

For earlier .NET versions, use GroupBy():

people = people.GroupBy(p => p.Name).Select(g => g.First()).ToList();

Performance Considerations

  • HashSet<T> is the fastest, but it uses default equality, so it suits simple types unless Equals()/GetHashCode() are overridden; it also doesn't guarantee element order.
  • Distinct() is concise and readable but can be slower than HashSet<T> for large lists.
  • DistinctBy() (or GroupBy()) works well for complex objects but may carry performance trade-offs; a custom equality comparer is another option, as sketched below.
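
On versions before .NET 6, a custom IEqualityComparer<T> passed to Distinct() or HashSet<T> achieves the same per-property deduplication. A minimal sketch (PersonNameComparer is a name made up for this example):

using System.Collections.Generic;

// Treats two Person objects as equal when their Name values match.
class PersonNameComparer : IEqualityComparer<Person>
{
    public bool Equals(Person x, Person y) => x?.Name == y?.Name;
    public int GetHashCode(Person p) => p.Name?.GetHashCode() ?? 0;
}

// Usage:
// people = people.Distinct(new PersonNameComparer()).ToList();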

Conclusion

Choosing the best approach depends on the data type and use case. HashSet<T> is ideal for primitive types, Distinct() is simple and readable, and DistinctBy() (or GroupBy()) is effective for objects.


When working with SQL Server, you may often need to count the number of unique values in a specific column. This is useful for analyzing data, detecting duplicates, and understanding dataset distributions.

Using COUNT(DISTINCT column_name)

To count the number of unique values in a column, SQL Server provides the COUNT(DISTINCT column_name) function. Here’s a simple example:

SELECT COUNT(DISTINCT column_name) AS distinct_count
FROM table_name;

This query returns the number of unique non-NULL values in column_name; NULLs are ignored by COUNT(DISTINCT ...).
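
For instance, to count how many different cities appear in a hypothetical Customers table:

SELECT COUNT(DISTINCT City) AS unique_cities
FROM Customers;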

Counting Distinct Values Across Multiple Columns

If you need to count distinct combinations of multiple columns, you can use a subquery:

SELECT COUNT(*) AS distinct_count
FROM (SELECT DISTINCT column1, column2 FROM table_name) AS subquery;

This approach ensures that only unique pairs of column1 and column2 are counted.

Why Use COUNT DISTINCT?

  • Helps in identifying unique entries in a dataset.
  • Useful for reporting and analytics.
  • Efficient way to check for duplicates, as shown below.
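
For example, comparing COUNT(column_name) with COUNT(DISTINCT column_name) reveals how many duplicate (non-NULL) values a column holds:

SELECT COUNT(column_name) - COUNT(DISTINCT column_name) AS duplicate_count
FROM table_name;

-- A result of 0 means every non-NULL value in column_name is unique.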

By leveraging COUNT(DISTINCT column_name), you can efficiently analyze your database and extract meaningful insights. Happy querying!


Reading a file line by line is useful when handling large files without loading everything into memory at once.

✅ Best Practice: Use File.ReadLines(), which is more memory-efficient than loading the whole file at once.

Example

foreach (string line in File.ReadLines("file.txt"))
{
    Console.WriteLine(line);
}

Why use ReadLines()?

Reads one line at a time, reducing overall memory usage. Ideal for large files (e.g., logs, CSVs).
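
Because File.ReadLines() streams lazily, it also composes well with LINQ. A small sketch (the log.txt file name and the "ERROR" marker are assumptions for illustration):

using System;
using System.IO;
using System.Linq;

// Collect the first 10 error lines without reading the whole file into memory.
var firstErrors = File.ReadLines("log.txt")
    .Where(line => line.Contains("ERROR"))
    .Take(10)
    .ToList();

firstErrors.ForEach(Console.WriteLine);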

Alternative: Use StreamReader (More Control)

For scenarios where you need custom processing while reading the contents of the file:

using (StreamReader reader = new StreamReader("file.txt"))
{
    string? line;
    while ((line = reader.ReadLine()) != null)
    {
        Console.WriteLine(line);
    }
}

Why use StreamReader?

Lets you handle exceptions, encoding, and buffering. Supports custom processing (e.g., search for a keyword while reading).
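
For example, a sketch that stops reading as soon as a keyword is found (the "TODO" keyword and file name are assumptions for illustration):

using System;
using System.IO;

using (StreamReader reader = new StreamReader("file.txt"))
{
    string? line;
    int lineNumber = 0;
    while ((line = reader.ReadLine()) != null)
    {
        lineNumber++;
        if (line.Contains("TODO"))
        {
            Console.WriteLine($"Found at line {lineNumber}: {line}");
            break; // stop early; no need to read the rest of the file
        }
    }
}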

When to Use ReadAllLines()?

If you need all lines at once (e.g., for indexing or random access), use:

string[] lines = File.ReadAllLines("file.txt");

Caution: This loads the entire file into memory, so avoid it for large files!
