How to Use Common Table Expressions (CTEs) in SQL Server for Readable Queries

Common Table Expressions (CTEs) are a powerful SQL Server feature that can dramatically improve query readability and maintainability.

Introduced in SQL Server 2005, CTEs let you define a temporary result set that you can reference within a SELECT, INSERT, UPDATE, DELETE, or MERGE statement.

Basic CTE Syntax

A CTE follows this pattern:

WITH CTE_Name AS (
    -- Your query here
)
SELECT * FROM CTE_Name;

The main components are:

  • The WITH keyword to start the CTE
  • A name for your CTE
  • The AS keyword
  • Parentheses containing your query
  • A statement that references the CTE
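
Filled in with the Customers table used later in this article (column names assumed from the examples below), the pattern might look like this:

WITH VipCustomers AS (
    SELECT CustomerID, CustomerName
    FROM Customers
    WHERE CustomerCategory = 'VIP'
)
SELECT CustomerName FROM VipCustomers;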

Why Use CTEs?

CTEs offer several advantages:

  • Improved readability: Breaking complex queries into named, logical segments
  • Self-referencing capability: Useful for hierarchical or recursive data
  • Query simplification: Reducing nested subqueries
  • Code reusability: Using the same temporary result multiple times in a query, as shown in the sketch below
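
To illustrate that last point, a CTE can be referenced more than once in the statement that follows it. Here is a minimal sketch, reusing the Customers and Orders tables from the examples below, that reports each category's share of total revenue:

WITH CategoryTotals AS (
    SELECT c.CustomerCategory, SUM(o.TotalAmount) AS CategoryTotal
    FROM Customers c
    JOIN Orders o ON c.CustomerID = o.CustomerID
    GROUP BY c.CustomerCategory
)
SELECT
    ct.CustomerCategory,
    ct.CategoryTotal,
    ct.CategoryTotal * 100.0 / (SELECT SUM(CategoryTotal) FROM CategoryTotals) AS PercentOfTotal
FROM CategoryTotals ct;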

Simple CTE Example

Here's a basic example that calculates average order values by customer category:

-- Without CTE
SELECT 
    c.CustomerCategory,
    SUM(o.TotalAmount) / COUNT(DISTINCT o.OrderID) AS AvgOrderValue
FROM Customers c
JOIN Orders o ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerCategory;

-- With CTE
WITH OrderSummary AS (
    SELECT 
        c.CustomerCategory,
        o.OrderID,
        o.TotalAmount
    FROM Customers c
    JOIN Orders o ON c.CustomerID = o.CustomerID
)
SELECT 
    CustomerCategory,
    SUM(TotalAmount) / COUNT(DISTINCT OrderID) AS AvgOrderValue
FROM OrderSummary
GROUP BY CustomerCategory;

The CTE version clearly separates the data gathering from the aggregation logic.

Multiple CTEs in a Single Query

You can chain CTEs for even more complex scenarios:

WITH 
CustomerOrders AS (
    SELECT 
        c.CustomerID,
        c.CustomerName,
        COUNT(o.OrderID) AS OrderCount
    FROM Customers c
    LEFT JOIN Orders o ON c.CustomerID = o.CustomerID
    GROUP BY c.CustomerID, c.CustomerName
),
OrderCategories AS (
    SELECT
        CustomerID,
        CASE 
            WHEN OrderCount = 0 THEN 'Inactive'
            WHEN OrderCount BETWEEN 1 AND 5 THEN 'Regular'
            ELSE 'VIP'
        END AS CustomerCategory
    FROM CustomerOrders
)
SELECT 
    c.CustomerName,
    o.CustomerCategory
FROM CustomerOrders c
JOIN OrderCategories o ON c.CustomerID = o.CustomerID
ORDER BY o.CustomerCategory, c.CustomerName;

Recursive CTEs

One of the most powerful CTE features is recursion, which is perfect for hierarchical data like organizational charts or category trees:

WITH EmployeeHierarchy AS (
    -- Anchor member (starting point)
    SELECT 
        EmployeeID,
        EmployeeName,
        ManagerID,
        0 AS Level
    FROM Employees
    WHERE ManagerID IS NULL -- Start with top-level employees
    
    UNION ALL
    
    -- Recursive member (references itself)
    SELECT 
        e.EmployeeID,
        e.EmployeeName,
        e.ManagerID,
        eh.Level + 1
    FROM Employees e
    INNER JOIN EmployeeHierarchy eh ON e.ManagerID = eh.EmployeeID
)
SELECT 
    EmployeeID,
    EmployeeName,
    Level,
    REPLICATE('--', Level) + EmployeeName AS HierarchyDisplay
FROM EmployeeHierarchy
ORDER BY Level, EmployeeName;

This query produces an indented organization chart starting from top-level managers.

CTEs vs. Temporary Tables or Table Variables

Unlike temporary tables or table variables (compared in the sketch after this list), CTEs:

  • Exist only during query execution
  • Don't require explicit cleanup
  • Can't have indexes added to them
  • Are primarily for improving query structure and readability
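
For comparison, here is a rough sketch of the temp-table alternative using the same Customers/Orders schema; unlike a CTE, the temp table persists for the session, can be indexed, and needs explicit cleanup:

-- Materialize the data once
SELECT c.CustomerCategory, o.OrderID, o.TotalAmount
INTO #OrderSummary
FROM Customers c
JOIN Orders o ON c.CustomerID = o.CustomerID;

-- Indexing is possible here, which a CTE never allows
CREATE INDEX IX_OrderSummary_Category ON #OrderSummary (CustomerCategory);

SELECT CustomerCategory,
       SUM(TotalAmount) / COUNT(DISTINCT OrderID) AS AvgOrderValue
FROM #OrderSummary
GROUP BY CustomerCategory;

DROP TABLE #OrderSummary; -- explicit cleanup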

Best Practices

  1. Use meaningful names that describe what the data represents
  2. Keep individual CTEs focused on a single logical operation
  3. Comment complex CTEs to explain their purpose
  4. Consider performance - CTEs are not always more efficient than subqueries
  5. Avoid excessive nesting - if your query becomes too complex, consider stored procedures or multiple queries

When Not to Use CTEs

CTEs might not be the best choice when:

  • You need to reference the same large dataset multiple times (temp tables may be more efficient)
  • You need to add indexes for performance optimization
  • Your recursive CTE might exceed the default recursion limit of 100 (see the sketch below for raising it)
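
On that last point, the limit can be raised (or removed) with the MAXRECURSION query hint. A sketch reusing the EmployeeHierarchy CTE from above:

WITH EmployeeHierarchy AS (
    SELECT EmployeeID, EmployeeName, ManagerID, 0 AS Level
    FROM Employees
    WHERE ManagerID IS NULL

    UNION ALL

    SELECT e.EmployeeID, e.EmployeeName, e.ManagerID, eh.Level + 1
    FROM Employees e
    INNER JOIN EmployeeHierarchy eh ON e.ManagerID = eh.EmployeeID
)
SELECT EmployeeID, EmployeeName, Level
FROM EmployeeHierarchy
OPTION (MAXRECURSION 500); -- 0 removes the limit entirely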

By mastering CTEs, you can write SQL that's not only more maintainable but also easier to understand and debug.


Related

Closing a SqlDataReader correctly prevents memory leaks and connection issues caused by unclosed resources. Here’s the best way to do it.

Use 'using' to Auto-Close

Wrapping the SqlDataReader and SqlConnection in using statements ensures they are closed and disposed even if an exception occurs.

Example

using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlCommand cmd = new SqlCommand("SELECT * FROM Users", conn))
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine(reader["Username"]);
        }
    } // ✅ Auto-closes reader here
} // ✅ Auto-closes connection here

This approach automatically closes resources when they go out of scope, and it is cleaner and less error-prone than closing them manually.

⚡ Alternative: Manually Close in finally Block

If you need explicit control, you can manually close it inside a finally block.

SqlDataReader? reader = null;
try
{
    using SqlConnection conn = new SqlConnection(connectionString);
    conn.Open();
    using SqlCommand cmd = new SqlCommand("SELECT * FROM Users", conn);
    reader = cmd.ExecuteReader();

    while (reader.Read())
    {
        Console.WriteLine(reader["Username"]);
    }
}
finally
{
    reader?.Close();  // ✅ Closes reader if it was opened
}

This approach is more error-prone, since it relies on remembering the finally block, but it can make sense when you need to manage the reader's lifetime separately from the command or connection.


XML (Extensible Markup Language) is a widely used format for storing and transporting data.

In C#, you can create XML files efficiently using the XmlWriter and XDocument classes. This guide covers both methods with practical examples.

Writing XML Using XmlWriter

XmlWriter provides a fast and memory-efficient way to generate XML files by writing elements sequentially.

Example:

using System;
using System.Xml;

class Program
{
    static void Main()
    {
        // Indent = true produces the formatted layout shown below (the default is a single unindented line)
        XmlWriterSettings settings = new XmlWriterSettings { Indent = true, IndentChars = "    " };
        using (XmlWriter writer = XmlWriter.Create("person.xml", settings))
        {
            writer.WriteStartDocument();
            writer.WriteStartElement("Person");

            writer.WriteElementString("FirstName", "John");
            writer.WriteElementString("LastName", "Doe");
            writer.WriteElementString("Age", "30");

            writer.WriteEndElement();
            writer.WriteEndDocument();
        }
        Console.WriteLine("XML file created successfully.");
    }
}

Output (person.xml):

<?xml version="1.0" encoding="utf-8"?>
<Person>
    <FirstName>John</FirstName>
    <LastName>Doe</LastName>
    <Age>30</Age>
</Person>

Writing XML Using XDocument

The XDocument class from LINQ to XML provides a more readable and flexible way to create XML files.

Example:

using System;
using System.Xml.Linq;

class Program
{
    static void Main()
    {
        XDocument doc = new XDocument(
            new XElement("Person",
                new XElement("FirstName", "John"),
                new XElement("LastName", "Doe"),
                new XElement("Age", "30")
            )
        );
        doc.Save("person.xml");
        Console.WriteLine("XML file created successfully.");
    }
}

This approach is ideal for working with complex XML structures and integrating LINQ queries.
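
For example, here is a minimal sketch (assuming the person.xml file created above) that loads the document back and filters it with a LINQ query:

using System;
using System.Linq;
using System.Xml.Linq;

class Program
{
    static void Main()
    {
        XDocument doc = XDocument.Load("person.xml");

        // Select the FirstName of every Person element older than 25
        var names = doc.Descendants("Person")
            .Where(p => (int)p.Element("Age") > 25)
            .Select(p => (string)p.Element("FirstName"));

        foreach (string name in names)
        {
            Console.WriteLine(name); // John
        }
    }
}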

When to Use Each Method

  • Use XmlWriter when performance is critical and you need to write XML sequentially.
  • Use XDocument when you need a more readable, maintainable, and flexible way to manipulate XML.

Conclusion

Writing XML files in C# is straightforward with XmlWriter and XDocument. Choose the method that best suits your needs for performance, readability, and maintainability.


Removing duplicates from a list in C# is a common task, especially when working with large datasets. C# provides multiple ways to achieve this efficiently, leveraging built-in collections and LINQ.

Using HashSet (Fastest for Unique Elements)

A HashSet<T> automatically removes duplicates since it only stores unique values. This is one of the fastest methods:

List<int> numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };
numbers = new HashSet<int>(numbers).ToList();
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 4, 5

Using LINQ Distinct (Concise and Readable)

LINQ’s Distinct() method provides an elegant way to remove duplicates:

List<int> numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };
numbers = numbers.Distinct().ToList();
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 4, 5

Removing Duplicates by Custom Property (For Complex Objects)

When working with objects, DistinctBy() from .NET 6+ simplifies duplicate removal based on a property:

using System;
using System.Collections.Generic;
using System.Linq;

List<Person> people = new List<Person>
{
    new Person { Name = "Alice", Age = 30 },
    new Person { Name = "Bob", Age = 25 },
    new Person { Name = "Alice", Age = 30 }
};

people = people.DistinctBy(p => p.Name).ToList();
Console.WriteLine(string.Join(", ", people.Select(p => p.Name))); // Output: Alice, Bob

class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

For earlier .NET versions, use GroupBy():

people = people.GroupBy(p => p.Name).Select(g => g.First()).ToList();

Performance Considerations

  • HashSet<T> is the fastest, but on its own it deduplicates by the type's default equality, which suits simple types (see the sketch after this list for complex objects).
  • Distinct() is easy to use but typically a bit slower than building a HashSet<T> directly for large lists.
  • DistinctBy() (or GroupBy()) is useful for complex objects but may have performance trade-offs.
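
Expanding on the first point, HashSet<T> can also deduplicate complex objects if you supply an IEqualityComparer<T> that defines which key makes two items "the same". A sketch, reusing the Person class from above (comparing by Name is an assumption for illustration):

using System.Collections.Generic;
using System.Linq;

// Treats two Person instances as equal when their Name values match
class PersonNameComparer : IEqualityComparer<Person>
{
    public bool Equals(Person? x, Person? y) => x?.Name == y?.Name;
    public int GetHashCode(Person p) => p.Name?.GetHashCode() ?? 0;
}

// Usage:
// people = new HashSet<Person>(people, new PersonNameComparer()).ToList();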

Conclusion

Choosing the best approach depends on the data type and use case. HashSet<T> is ideal for primitive types, Distinct() is simple and readable, and DistinctBy() (or GroupBy()) is effective for objects.
