How to Serialize and Deserialize JSON in C#

JSON serialization and deserialization in C# has become remarkably straightforward with the System.Text.Json namespace, introduced in .NET Core 3.0 as a modern alternative to Newtonsoft.Json.

The JsonSerializer class provides static methods to convert objects to JSON strings (Serialize) and parse JSON strings back into objects (Deserialize).

For basic serialization, you can simply call JsonSerializer.Serialize(object) on any object, and it will automatically convert public properties into their JSON representation.

Similarly, JsonSerializer.Deserialize<T>(jsonString) converts JSON back into strongly-typed objects. The process becomes even more powerful when combined with custom attributes like [JsonPropertyName] to control property naming and [JsonIgnore] to exclude specific properties from serialization.

When working with more complex scenarios, you can customize the serialization process using JsonSerializerOptions.

This allows you to control various aspects such as case sensitivity, indentation, handling of null values, and custom converters. For example, setting PropertyNameCaseInsensitive = true enables case-insensitive property matching during deserialization, while WriteIndented = true produces formatted JSON output.
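
A minimal sketch of two options not shown in the example below (both JsonIgnoreCondition and JsonStringEnumConverter live in System.Text.Json.Serialization, and DefaultIgnoreCondition requires .NET 5 or later):

var options = new JsonSerializerOptions
{
    // Omit properties whose value is null when writing JSON
    DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull
};
// Register a converter; the built-in JsonStringEnumConverter writes enums as their names
options.Converters.Add(new JsonStringEnumConverter());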

It's also worth noting that System.Text.Json is designed with performance in mind and generally outperforms Newtonsoft.Json in most scenarios.

Example

using System;
using System.Collections.Generic;
using System.Text.Json;
using System.Text.Json.Serialization;

// Define a class to serialize
public class Person
{
    public string Name { get; set; }
    [JsonPropertyName("birth_date")]
    public DateTime BirthDate { get; set; }
    [JsonIgnore]
    public int InternalId { get; set; }
}

// Serialization example
Person person = new Person 
{ 
    Name = "John Doe", 
    BirthDate = new DateTime(1990, 1, 1) 
};
string json = JsonSerializer.Serialize(person);
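// json: {"Name":"John Doe","birth_date":"1990-01-01T00:00:00"} (InternalId is omitted by [JsonIgnore])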

// Deserialization example
Person deserializedPerson = JsonSerializer.Deserialize<Person>(json);

// Using JsonSerializerOptions
var options = new JsonSerializerOptions
{
    WriteIndented = true,
    PropertyNameCaseInsensitive = true,
    PropertyNamingPolicy = JsonNamingPolicy.CamelCase
};
string prettyJson = JsonSerializer.Serialize(person, options);

// Working with collections
List<Person> people = new List<Person> { person };
string jsonArray = JsonSerializer.Serialize(people);
List<Person> deserializedPeople = JsonSerializer.Deserialize<List<Person>>(jsonArray);
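
As noted earlier, System.Text.Json is built with performance in mind; one small, optional optimization is serializing straight to UTF-8 bytes instead of a string. A minimal sketch reusing the person instance above:

// Serialize directly to UTF-8 bytes (skips the intermediate string allocation)
byte[] utf8Json = JsonSerializer.SerializeToUtf8Bytes(person);

// Deserialize straight from the UTF-8 bytes as well
Person roundTripped = JsonSerializer.Deserialize<Person>(utf8Json);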

Related

Removing duplicates from a list in C# is a common task, especially when working with large datasets. C# provides multiple ways to achieve this efficiently, leveraging built-in collections and LINQ.

Using HashSet (Fastest for Unique Elements)

A HashSet<T> automatically removes duplicates since it only stores unique values. This is one of the fastest methods:

List<int> numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };
numbers = new HashSet<int>(numbers).ToList();
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 4, 5

Using LINQ Distinct (Concise and Readable)

LINQ’s Distinct() method provides an elegant way to remove duplicates:

List<int> numbers = new List<int> { 1, 2, 2, 3, 4, 4, 5 };
numbers = numbers.Distinct().ToList();
Console.WriteLine(string.Join(", ", numbers)); // Output: 1, 2, 3, 4, 5

Removing Duplicates by Custom Property (For Complex Objects)

When working with objects, DistinctBy() from .NET 6+ simplifies duplicate removal based on a property:

using System;
using System.Collections.Generic;
using System.Linq;

class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

List<Person> people = new List<Person>
{
    new Person { Name = "Alice", Age = 30 },
    new Person { Name = "Bob", Age = 25 },
    new Person { Name = "Alice", Age = 30 }
};

people = people.DistinctBy(p => p.Name).ToList();
Console.WriteLine(string.Join(", ", people.Select(p => p.Name))); // Output: Alice, Bob

For earlier .NET versions, use GroupBy():

people = people.GroupBy(p => p.Name).Select(g => g.First()).ToList();

Performance Considerations

  • HashSet<T> is the fastest, but it relies on the element type's equality, so it works out of the box only for simple types (see the comparer sketch after this list for complex objects).
  • Distinct() is easy to use but slower than HashSet<T> for large lists.
  • DistinctBy() (or GroupBy()) is useful for complex objects but may have performance trade-offs.
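
For complex objects, HashSet<T> can still be used by supplying an IEqualityComparer<T>. A minimal sketch, reusing the Person class from the snippet above and treating Name as the key (a hypothetical equality rule):

class PersonNameComparer : IEqualityComparer<Person>
{
    public bool Equals(Person x, Person y) => x?.Name == y?.Name;
    public int GetHashCode(Person p) => p.Name?.GetHashCode() ?? 0;
}

// The HashSet drops duplicates according to the comparer
people = new HashSet<Person>(people, new PersonNameComparer()).ToList();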

Conclusion

Choosing the best approach depends on the data type and use case. HashSet<T> is ideal for primitive types, Distinct() is simple and readable, and DistinctBy() (or GroupBy()) is effective for objects.


String interpolation, introduced in C# 6.0, provides a more readable and concise way to format strings than traditional concatenation (+) or string.Format(). Instead of manually inserting variables or placeholders, you prefix a string with the $ symbol and embed expressions directly inside curly braces.

string name = "Walt";
string job = "Software Engineer";

string message = $"Hello, my name is {name} and I am a {job}";
Console.WriteLine(message);

This would produce the final output of:

Hello, my name is Walt and I am a Software Engineer
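
Interpolation holes can also take alignment and format specifiers after a comma and a colon; a small sketch (the exact output depends on the current culture, shown here for en-US):

string item = "Coffee";
decimal price = 4.5m;
// Left-align item within 10 characters, right-align price within 8 and format it as currency
Console.WriteLine($"{item,-10}{price,8:C2}"); // e.g. "Coffee       $4.50"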

String interpolation can also be combined with a verbatim string ($@) to produce cleaner multiline results:

string name = "Walt";
string html = $@"
    <div>
        <h1>Welcome, {name}!</h1>
    </div>";

When working with SQL Server, you may often need to count the number of unique values in a specific column. This is useful for analyzing data, detecting duplicates, and understanding dataset distributions.

Using COUNT(DISTINCT column_name)

To count the number of unique values in a column, SQL Server supports COUNT(DISTINCT column_name). Here’s a simple example:

SELECT COUNT(DISTINCT column_name) AS distinct_count
FROM table_name;

This query will return the number of unique values in column_name.

Counting Distinct Values Across Multiple Columns

If you need to count distinct combinations of multiple columns, you can use a subquery:

SELECT COUNT(*) AS distinct_count
FROM (SELECT DISTINCT column1, column2 FROM table_name) AS subquery;

This approach ensures that only unique pairs of column1 and column2 are counted.

Why Use COUNT DISTINCT?

  • Helps in identifying unique entries in a dataset.
  • Useful for reporting and analytics.
  • Efficient way to check for duplicates.

By leveraging COUNT(DISTINCT column_name), you can efficiently analyze your database and extract meaningful insights. Happy querying!
