EF Core Getting Started

I am learning Entity Framework Core as part of my Azure journey. The database is an important part of an application. In the old days, developers wrote raw SQL queries. Later came ADO.NET, and more recently, ORMs. I have had a chance to work with the two big players: NHibernate and Entity Framework.

An ORM does more than map an object model to its database representation (SQL tables and columns). Each ORM framework ships with plenty of features and supports a variety of scenarios; a good ORM helps you build a better application. Let’s discover some of them in the latest ORM from Microsoft: Entity Framework Core.

I was amazed when I visited the official documentation site. Everything you need to learn is there, in well-written, understandable pages. For my learning, I started with Julie Lerman’s courses on Pluralsight. If you happen to have a Pluralsight account, go ahead and watch them; they are worth your time. Then I read the EF documentation on the official site.

It is easy to say “Hey, I know Entity Framework Core.” Yes, I understand it. But I need the skill, not just a mental understanding. To make sure I build that EF skill, I write blog posts and I write code. That is my advice to you as well, developers.


Getting Started Objectives

  1. Define a simple domain model and hook it up with EF Core in an ASP.NET Core project
  2. Migration: from code to database
  3. API testing with Postman or Fiddler (I do not want to spend time building a UI)
  4. Unit testing with the In-Memory provider and a real database.
  5. Running on Azure with Azure SQL
  6. Retry strategy

1 – Domain Model

To get started, I have only this super simple domain model:

namespace Aduze.Domain
{
    public abstract class Entity
    {
        public int Id { get; set; }
    }

    public class User : Entity
    {
        public string LoginName { get; set; }
        public string FullName { get; set; }
        public Image Avatar { get; set; }
    }

    public class Image : Entity
    {
        public string Uri { get; set; }
    }
}

A User with an avatar (Image).

Next, I set up the DbContext:

namespace Aduze.Data
{
    public class AduzeContext : DbContext
    {
        public DbSet<User> Users { get; set; }

        public AduzeContext(DbContextOptions options)
            : base(options)
        {
        }
        
        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
        }

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            base.OnModelCreating(modelBuilder);
        }
    }
}

Pretty simple, just like the example on the documentation site. A quick note: I organize domain classes in the Domain project and the data access layer in the Data project. I do not like the term Repository very much.

Wire them up in the ASP.NET Core Web project

       public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
            services.AddDbContext<AduzeContext>(options =>
            {
                options.UseSqlServer(Configuration.GetConnectionString("AduzeSqlConnection"))
                    .EnableSensitiveDataLogging();
            });
            services.AddLogging(log =>
                log.AddAzureWebAppDiagnostics()
                    .AddConsole());
        }

Just call the AddDbContext extension method and you are done. Dead simple!

2 – Migration

The system cannot work unless there is a database. There are two possible approaches:

  1. Use your SQL skills and create the database with the correct schema yourself.
  2. Use what EF offers.

I have done the former for many years. Let’s explore the latter.

With VS 2017 open, go to the Package Manager Console window.

Add-Migration

EF Core Add Migration
  1. Default project: Aduze.Data, where the DbContext is configured.
  2. Add-Migration: a PowerShell command supplied by EF Core. Tip: type Get-Help Add-Migration for help.
  3. InitializeUser: the migration name. You can give it whatever name makes sense.

After the command executes, a “Migrations” folder is added to the Data project. Visit the EF Core documentation to understand what it does and its syntax.
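The generated migration is a plain C# class with Up and Down methods. Here is a trimmed sketch of what the InitializeUser migration might look like for the Image table (illustrative only; your generated file will contain both tables, the foreign key, and a companion Designer file):

```csharp
using Microsoft.EntityFrameworkCore.Metadata;
using Microsoft.EntityFrameworkCore.Migrations;

namespace Aduze.Data.Migrations
{
    // Trimmed sketch of a generated migration; do not hand-edit the real
    // one unless you know what the model snapshot expects.
    public partial class InitializeUser : Migration
    {
        protected override void Up(MigrationBuilder migrationBuilder)
        {
            migrationBuilder.CreateTable(
                name: "Image",
                columns: table => new
                {
                    Id = table.Column<int>(nullable: false)
                        .Annotation("SqlServer:ValueGenerationStrategy",
                            SqlServerValueGenerationStrategy.IdentityColumn),
                    Uri = table.Column<string>(nullable: true)
                },
                constraints: table => table.PrimaryKey("PK_Image", x => x.Id));
        }

        protected override void Down(MigrationBuilder migrationBuilder)
        {
            // Down reverses Up, so Remove-Migration / rollbacks work.
            migrationBuilder.DropTable(name: "Image");
        }
    }
}
```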

Script-Migration

So what does the generated SQL script look like?

PM> Script-Migration
IF OBJECT_ID(N'__EFMigrationsHistory') IS NULL
BEGIN
    CREATE TABLE [__EFMigrationsHistory] (
        [MigrationId] nvarchar(150) NOT NULL,
        [ProductVersion] nvarchar(32) NOT NULL,
        CONSTRAINT [PK___EFMigrationsHistory] PRIMARY KEY ([MigrationId])
    );
END;

GO

CREATE TABLE [Image] (
    [Id] int NOT NULL IDENTITY,
    [Uri] nvarchar(max) NULL,
    CONSTRAINT [PK_Image] PRIMARY KEY ([Id])
);

GO

CREATE TABLE [Users] (
    [Id] int NOT NULL IDENTITY,
    [AvatarId] int NULL,
    [FullName] nvarchar(max) NULL,
    [LoginName] nvarchar(max) NULL,
    CONSTRAINT [PK_Users] PRIMARY KEY ([Id]),
    CONSTRAINT [FK_Users_Image_AvatarId] FOREIGN KEY ([AvatarId]) REFERENCES [Image] ([Id]) ON DELETE NO ACTION
);

GO

CREATE INDEX [IX_Users_AvatarId] ON [Users] ([AvatarId]);

GO

INSERT INTO [__EFMigrationsHistory] ([MigrationId], [ProductVersion])
VALUES (N'20180420112151_InitializeUser', N'2.0.2-rtm-10011');

GO

Cool! I can take the script and run it in SQL Server Management Studio. With the scripts ready, I can use them to create the Azure SQL database later on.

Update-Database

Update-Database allows me to create the database directly from the Package Manager Console (which is a PowerShell host). Let’s see:

PM> Update-Database -Verbose

With Verbose turned on, it logs everything to the console. The result: my database is created.

EF Update Database

It is very smart. How does it do that?

  1. It reads the startup project Aduze.Web and extracts the connection string from appsettings.json.
  2. It runs the migrations created by the Add-Migration command.
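For reference, the connection string it reads lives in appsettings.json under the ConnectionStrings section; the key AduzeSqlConnection must match the name passed to GetConnectionString (the server and database names below are placeholders):

```json
{
  "ConnectionStrings": {
    "AduzeSqlConnection": "Server=(localdb)\\mssqllocaldb;Database=Aduze;Trusted_Connection=True;"
  }
}
```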

3 – API Testing

So far the application does not touch the database yet. Let’s add a controller:

namespace Aduze.Web.Controllers
{
    public class UserController : Controller
    {
        private readonly AduzeContext _context;

        public UserController(AduzeContext context)
        {
            _context = context;
        }
        [HttpPost]
        public async Task<IActionResult> Create([FromBody]User user)
        {
            _context.Add(user);
            await _context.SaveChangesAsync();
            return Json(user);
        }

        [HttpGet]
        public async Task<IActionResult> Index()
        {
            var users = await _context.Users.ToListAsync();
            return Json(users);
        }
    }
}

A typical Web API controller.

  1. Create: inserts a user. There is no validation and no mapping from request to domain model; this is not production code.
  2. Index: lists all users.

Here is the test using Postman

API Test with Postman

If I invoke the /user endpoint, the user is on the list.

Hey, what was going on behind the scenes?

EF SQL Log

There is plenty of information you can inspect in the Debug window. When inserting a user, those are the queries sent to the database (you should also see the one that inserts the avatar image).

So far so good. I have gone from a domain model to a full end-to-end API. How about unit testing?

4 – Unit Test

One of the biggest concerns in unit testing is the database dependency. How can EF Core help? It has an In-Memory provider. But first, I have to refactor my code, since I do not want to test the API controller.

namespace Aduze.Data
{
    public class UserData
    {
        private readonly AduzeContext _context;

        public UserData(AduzeContext context)
        {
            _context = context;
        }

        public async Task<User> Create(User user)
        {
            _context.Add(user);
            await _context.SaveChangesAsync();
            return user;
        }

        public async Task<IEnumerable<User>> GetAll()
        {
            return await _context.Users.ToListAsync();
        }
    }
}

namespace Aduze.Web.Controllers
{
    public class UserController : Controller
    {
        private readonly UserData _userData;

        public UserController(UserData userData)
        {
            _userData = userData;
        }
        [HttpPost]
        public async Task<IActionResult> Create([FromBody]User user)
        {
            return Json(await _userData.Create(user));
        }

        [HttpGet]
        public async Task<IActionResult> Index()
        {
            return Json(await _userData.GetAll());
        }
    }
}

That should do the trick. Then register the new UserData service with the IoC container:

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
            services.AddDbContext<AduzeContext>(options =>
            {
                options.UseSqlServer(Configuration.GetConnectionString("AduzeSqlConnection"))
                    .EnableSensitiveDataLogging();
            });
            services.AddScoped<UserData>();
        }

Time to create a test project, Aduze.Tests, and install the Microsoft.EntityFrameworkCore.InMemory package:

PM> Install-Package Microsoft.EntityFrameworkCore.InMemory

This is really cool, see below

Unit Test DbContext in Memory

Because my refactored UserData uses the async APIs, it seems to have a problem with the MSTest runner. But the approach is the same as testing directly against AduzeContext.

  1. Use DbContextOptionsBuilder to tell EF Core that the context will use the In-Memory provider.
  2. Pass the options to the DbContext constructor.
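Put together, a test against UserData with the In-Memory provider looks roughly like this (sketched with MSTest; the database name is arbitrary and isolates this test from others):

```csharp
using System.Threading.Tasks;
using Aduze.Data;
using Aduze.Domain;
using Microsoft.EntityFrameworkCore;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace Aduze.Tests
{
    [TestClass]
    public class UserDataTests
    {
        [TestMethod]
        public async Task Create_ShouldPersistUser()
        {
            // 1. Tell EF Core that this context will use the In-Memory provider.
            var options = new DbContextOptionsBuilder<AduzeContext>()
                .UseInMemoryDatabase(databaseName: "Create_ShouldPersistUser")
                .Options;

            // 2. Pass the options to the DbContext constructor.
            using (var context = new AduzeContext(options))
            {
                var userData = new UserData(context);
                await userData.Create(new User { LoginName = "bob", FullName = "Bob" });
            }

            // A fresh context against the same named store verifies persistence.
            using (var context = new AduzeContext(options))
            {
                Assert.AreEqual(1, await context.Users.CountAsync());
            }
        }
    }
}
```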

Having the power to control which provider will be used is a powerful design. One can have a test suite that is independent of the provider. Most of the time we test with the In-Memory provider, but when the time comes to verify that the database schema is correct, we can switch to a real database.

5 – Azure SQL

Time to grow up … to the cloud, with these simple steps:

  1. Publish the web to Azure
  2. Create Azure SQL database
  3. Update connection string
  4. Run the script (remember the Script-Migration command?) to create database schema
Azure set ConnectionString

Just add the connection string AduzeSqlConnection (the one defined in appsettings.json for local development).

Test again with Postman. Oh yeah baby. It works like a charm.

6 – Retry Strategy

This topic is not something I want to explore at this stage of my learning journey. But it is important to be aware of, so at least I note down the reference link: Connection Resiliency.
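For the record, enabling a retry strategy on the SQL Server provider is a one-liner: EnableRetryOnFailure turns on the built-in retrying execution strategy (shown here with defaults; it also accepts a maximum retry count and delay):

```csharp
services.AddDbContext<AduzeContext>(options =>
{
    options.UseSqlServer(
        Configuration.GetConnectionString("AduzeSqlConnection"),
        // Retries transient SQL errors (e.g. Azure SQL throttling) automatically.
        sqlOptions => sqlOptions.EnableRetryOnFailure());
});
```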

 

Wrap Up

It is not something new or complicated if we look at the surface. However, when I get my hands dirty with the code and the writing, I learn so much. Knowing how to define a DbContext is easy; understanding why it was designed that way is a completely different story.

But is that all of EF Core? No, it is just the beginning. There are many things developers will look at when they hit problems in real projects. The documentation is there, the community is there. And Stack Overflow has all the answers.

What I will look at next is how EF Core supports developers with DDD (Domain Driven Design).

Welcome to Azure – Getting Started: Codename Aduze

I have not had an official chance to work deeply with Azure. Most of my knowledge comes from reading here and there and from watching Pluralsight courses. I have knowledge about Azure but lack the skills. Besides Azure, ASP.NET Core has been around for a while; the latest is ASP.NET Core 2.0. I need to catch the train before it goes too far.

To learn technologies, you have to build something with them, so I will build a web application to get started. The focus is not the business domain; it is learning the technologies. What should I call my project? Naming is always a problem 🙁

While thinking about Azure, I noticed that, pronounced in Vietnamese, there is a similar-sounding word: “a dua”. It means “follow the trend, follow the crowd”. Sounds like a good fit, I said, because I am learning the newest technology stacks. Let’s call it: Aduze.

Aduze – Vietnamese Azure

I like it. Let’s start.

I start by creating a new project with the ASP.NET Core MVC (Razor) project template. Here is what I got:

ASP.NET Core 2 Project Template

It looks clean and simple. Press F5 and you have a website. Because I have been working with ASP.NET MVC, I understand most of the parts. The new piece is the project file, so let’s take a quick tour of the .NET Core project file. One can click through for a detailed explanation on MS Docs; for my own learning, I summarize (repeat) it here:

  1. Sdk: specifies the MSBuild tasks and targets that build the project. There are two valid IDs: Microsoft.NET.Sdk and Microsoft.NET.Sdk.Web.
  2. TargetFramework: the framework ID, here netcoreapp2.0.
  3. PackageReference: defines NuGet packages to restore while building.
  4. DotNetCliToolReference: a CLI tool to restore. Not sure I understand what it is 😛
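A minimal Aduze.Web.csproj showing all four elements might look like this (the versions are the ones that shipped around ASP.NET Core 2.0; yours may differ):

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
  </ItemGroup>

  <ItemGroup>
    <DotNetCliToolReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Tools" Version="2.0.0" />
  </ItemGroup>

</Project>
```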

What is special about this project file? I could not find any file or folder references. Remember the old days when a project file listed every file and folder in the project? We did not pay much attention to that until there was a conflict in the team: when two members added two different files, quite often the other guy had to resolve the conflict.

wwwroot serves publicly accessible files such as images, CSS files, or anything else meant for public access. If I need to view the jQuery license file, I simply browse to http://localhost/lib/jquery/license.txt; there is no wwwroot in the URL.

On Visual Studio, run the application with F5 and see the output

Core Web Server Console Output

 

ConfigureServices is called first, then Configure. There is a whole bunch of other things we can learn here, such as which middleware is invoked.

 

Stepping back and looking at the default scaffold template, there are things I want to fully understand first. When MVC was first introduced, I jumped directly into the business code on top of the default scaffold template, and I have not looked back since, mostly because there was no need to. It was one of my learning mistakes.

The Dependencies node tells us there are two dependency sources:

  1. NuGet: Package manager
  2. SDK: Manage the build

Entry Point and Integration

The entry point is the Program class, which starts the application server: Kestrel. Kestrel is responsible for running the MVC application code. In a production environment, there are two servers:

  1. External server: our old friend IIS (there are others on other operating systems, but I just know IIS). IIS takes care of the heavy tasks of dealing with the outside world, such as security and DDoS protection, before it forwards requests to the internal server.
  2. Internal server: Kestrel.

A developer can start a Kestrel server using the dotnet command line (.NET Core CLI): dotnet run

ASP.NET Core itself does not require or use a web.config file. However, when hosting in IIS, there is a web.config file used by IIS; see the full explanation of the AspNetCoreModule.

Startup

Three important things:

  1. ConfigureServices: where all dependencies are registered and set up.
  2. Configure: configures system components and the request pipeline (middleware).
  3. Configuration (IConfiguration): allows access to external configuration; the obvious example is reading values from the appsettings.json file.
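The three pieces map onto a Startup skeleton roughly like this (trimmed; the middleware order inside Configure matters):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    // 3. IConfiguration is injected by the host and exposes appsettings.json values.
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    // 1. Register dependencies with the built-in IoC container.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    // 2. Build the request pipeline out of middleware; order matters.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseStaticFiles();
        app.UseMvc(routes =>
        {
            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}");
        });
    }
}
```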

 

So far so good. I can understand the default project structure and how things work. Let’s deploy the application to Azure and explore more from there.

Deploying to Azure from VS 2017 is super easy: just right-click the web project and choose Publish. What is interesting is the set of files being deployed to Azure.

Publishing Azure

Those are the files deployed to Azure App Service. Notice there is a web.config file, which does not exist in the .NET Core 2.0 project:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
    </handlers>
    <aspNetCore processPath="dotnet" arguments=".\Aduze.Web.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" />
  </system.webServer>
</configuration>
<!--ProjectGuid: 677f4bbf-84eb-44b7-a91e-b45ebfa48586-->

It registers the AspNetCoreModule with the App Service IIS so the communication between IIS and Kestrel works.

Wrap Up

I have not done anything special so far. I just looked around the default template and tried to understand the pieces here and there.

  1. Understand the ASP.NET Core 2 project template.
  2. Understand how IIS and Kestrel work together.
  3. Understand the roles of the Program and Startup classes, as well as the middleware pipeline. I wrote about it a year ago.
  4. It is easy to wire up a Kestrel web server (dotnet run in the CLI).

I will explore Azure, data storage, and many other cool things using this project. More will come.

Have a nice weekend! (This post was started days ago; I wrote it along the journey.)

Async Await and Parallelism

In C#, async and await were first introduced in C# 5; there is a detailed explanation in MS Docs, and you should go ahead and read those articles in full. Starting from MVC 4, the async controller was introduced along with async/await, and since then developers have been using async/await whenever possible.

What if a senior developer is asked: hey, explain async/await to the juniors? Thinking we know something is only step 1, and that understanding might be wrong until we can explain it to someone else.

Some believe that asynchronous operations make the system faster because more threads are used. Some say async does not block the main thread, allowing better responsiveness in Windows applications, or serving more requests in a web application.

How could we explain it? Not many people have the deep knowledge to dig into the low-level implementation. And even for someone who can (yes, there are many of them), it is not easy to explain to others. Where is the balance? How about mapping the complex concept onto something familiar?

I am not discussing the performance aspect, or whether async is faster than sync; that is a complex topic depending on many factors. Let’s analyze the “serving more requests in a web application” part.

Here is the truth; if you are in doubt, ask Google:

Async controller actions allow a web application to serve more requests.

Welcome to the Visa Department! You are here to get your visa to the US. There are three people in the department: Bob, John, and Lisa. Every day, hundreds of citizens come.

An applicant comes in and Bob takes them. He does all the steps defined in the process; he might only finish three hours later. One application is processed and returned to the client.

While Bob is busy, John and Lisa take the next applicants and the process repeats. Until Bob, John, and Lisa finish, only three applicants are served. The rest have to wait.

Only three applicants are served. The rest have to wait.

That is the ASP.NET MVC synchronous controller in action: it blocks the processing pipeline.

The three decide to make a change. They reorganize their work and responsibilities:

  1. Bob: takes incoming applications and puts them in boxes (labeled Requests Box) on tables next to him.
  2. John and Lisa: take applications from the boxes and process them. When an application is processed, they put it in other boxes (labeled Responses Box) in the department.
  3. Whenever there is an application in the Responses Box, whoever is free (Bob, John, or Lisa) takes it and returns it to the client.

What has changed in this model?

  1. Bob can take as many applications as he can, so many clients are served. They can return to their seats, grab a coffee, and wait for the result.
  2. John and Lisa can coordinate and utilize the resources they have to finish the job.
  3. Anyone can return the result. Bob receives the application, but maybe Lisa returns the result.

Is it faster to process one application? We do not know.

Can we serve more clients (applications) in a day? Yes, definitely!

That is the ASP.NET MVC async controller in action.

Concept mapping

  1. Citizen (visa application): request.
  2. Visa Department: the web server hosting the application.
  3. Bob, John, and Lisa: threads.
  4. Processing an application: application domain logic.
  5. Accepting an application: controller action.

 

Ok Cool! Let’s see some code.

    public class ThreadModel
    {
        public int Id { get; set; }
        public string Message { get; set; }
    }
    public class ThreadTestController : Controller
    {
        [HttpGet]
        public async Task<ActionResult> Info()
        {
            var stopwatch = new Stopwatch();
            stopwatch.Start();
            var model = new List<ThreadModel>();
            model.Add(new ThreadModel
            {
                Id = Thread.CurrentThread.ManagedThreadId,
                Message = "Bob receives a visa applicant"
            });
            await Task.Delay(TimeSpan.FromSeconds(30));
            stopwatch.Stop();
            model.Add(new ThreadModel
            {
                Id = Thread.CurrentThread.ManagedThreadId,
                Message = $"Lisa returns the applicant after: {stopwatch.Elapsed}"
            });
            return Json(model);
        }
    }

And the outcome

[{"id":3,"message":"Bob receives a visa applicant"},
{"id":25,"message":"Lisa returns the applicant after: 00:00:30.0024240"}]

The request is handled on thread 3 (Bob), and the response is handled on thread 25 (Lisa). The elapsed time is 30 seconds.

Ok, then let’s see how long it would take if we await twice:

    public class ThreadModel
    {
        public int Id { get; set; }
        public string Message { get; set; }
    }
    public class ThreadTestController : Controller
    {
        [HttpGet]
        public async Task<ActionResult> Info()
        {
            var stopwatch = new Stopwatch();
            stopwatch.Start();
            var model = new List<ThreadModel>();
            model.Add(new ThreadModel
            {
                Id = Thread.CurrentThread.ManagedThreadId,
                Message = "Bob receives a visa applicant"
            });
            await Task.Delay(TimeSpan.FromSeconds(30));
            await Task.Delay(TimeSpan.FromSeconds(30));
            stopwatch.Stop();
            model.Add(new ThreadModel
            {
                Id = Thread.CurrentThread.ManagedThreadId,
                Message = $"Lisa returns the applicant after: {stopwatch.Elapsed}"
            });
            return Json(model);
        }
    }

And the result

[{"id":29,"message":"Bob receives a visa applicant"},
{"id":32,"message":"Lisa returns the applicant after: 00:01:00.0099118"}]

It is 1 minute. Can we make it faster? How about this?

   public class ThreadTestController : Controller
    {
        [HttpGet]
        public async Task<ActionResult> Info()
        {
            var stopwatch = new Stopwatch();
            stopwatch.Start();
            var model = new List<ThreadModel>();
            model.Add(new ThreadModel
            {
                Id = Thread.CurrentThread.ManagedThreadId,
                Message = "Bob receives a visa applicant"
            });
            var t1 = Task.Delay(TimeSpan.FromSeconds(30));
            var t2 = Task.Delay(TimeSpan.FromSeconds(30));
            await Task.WhenAll(t1, t2);
            stopwatch.Stop();
            model.Add(new ThreadModel
            {
                Id = Thread.CurrentThread.ManagedThreadId,
                Message = $"Lisa returns the applicant after: {stopwatch.Elapsed}"
            });
            return Json(model);
        }
    }

Hey, look

[{"id":4,"message":"Bob receives a visa applicant"},
{"id":27,"message":"Lisa returns the applicant after: 00:00:30.0134799"}]

It is 30 seconds.

 

Asynchronous programming is hard, and a proper understanding is very important. You do not understand something unless you can explain it.

Find the Devil in Log Files with PowerShell

Early one morning, a good developer comes to the office and starts the normal routine: checking assigned tasks. Alert! Alert! There is a critical bug. The customer reported:

Error Code: ahde67g4-23ab-78bc-92ad-abhvbed753g2. Time: 2018-03-04 17:28:00

The system is well designed not to disclose any sensitive information. The system administrator sends the development team a bunch of log files from that day.

So far so good, except there are 4 servers, each server has around 20 files, and each file is 20MB in size. In short, you have to find a golden piece of information among 80 files of 20MB each.

The system is designed so that when a request comes in, it is assigned a unique GUID value called the CorrelationId. Every log entry recorded for that request carries the CorrelationId, and when a request fails, the CorrelationId is returned.

Having a correlation id lets us trace everything that happened in a request. When a request fails, we extract all the log entries carrying that correlation id.
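To make it concrete, the entries for one failed request might look like this (the format is invented for illustration; the GUID column is the CorrelationId that ties the lines together):

```
2018-03-04 17:27:58 INFO  [ahde67g4-23ab-78bc-92ad-abhvbed753g2] Request started: POST /api/orders
2018-03-04 17:27:59 DEBUG [ahde67g4-23ab-78bc-92ad-abhvbed753g2] Validating order payload
2018-03-04 17:28:00 ERROR [ahde67g4-23ab-78bc-92ad-abhvbed753g2] SqlException: Timeout expired
```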

Let’s see how to handle this with the power of PowerShell. PowerShell ships with Windows; you have it for free.

What do we have?

  1. A bunch of log files
  2. A keyword to search for: the CorrelationId, also known as the Error Code.

What do we need?

  1. All the log entries for that CorrelationId.
  2. Extracted to a file so we can investigate deeper.

To many developers, this is a trivial task. But if it is your first time, it will be cool. I promise.

Servers Log

Inside a server

Log files from a server. Each file has a maximum size of 20MB; there might be many files.

Open PowerShell

In Explorer, navigate to the folder containing all the log files. There might be many subfolders grouped by server name.

Type “Powershell” in the address bar. PowerShell shows up with the current path.

Type Magic Command

dir -filter "*logging.txt.2018-03-04*" -recurse | select-string -pattern "ahde67g4-23ab-78bc-92ad-abhvbed753g2" | select-object -ExpandProperty Line > GoldenLog.txt

Explanation

There are four parts in that single pipeline.

dir -filter "*logging.txt.2018-03-04*" -recurse

It says: give me all the files with logging.txt.2018-03-04 in their name, including files in subfolders. This narrows the search to files from 2018-03-04.

select-string -pattern "ahde67g4-23ab-78bc-92ad-abhvbed753g2"

This finds all the lines (log entries) in the files returned by the previous command that contain the keyword specified after -pattern. You can use a regular expression to broaden the search.

If you run just the combination of these two commands, all the matching records are displayed right in the PowerShell window. In many cases, that might be enough to find the information you need.

select-object -ExpandProperty Line

What you see in PowerShell is the string representation of a matching object. This command extracts just the matching line, i.e., a line from a log file.

> GoldenLog.txt

Finally, redirect the result to a text file. Having a text file lets you explore deeper, especially when many records are returned.
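One more flag worth knowing: Select-String has a -Context parameter that also captures the lines around each match, which is handy when the entries just before an error matter (here, 2 lines before and 2 after):

```powershell
dir -filter "*logging.txt.2018-03-04*" -recurse |
    select-string -pattern "ahde67g4-23ab-78bc-92ad-abhvbed753g2" -Context 2,2 > GoldenLogWithContext.txt
```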

 

Troubleshooting is a special job that requires a developer (or tester) to use various tools at hand. I suggest you get started with PowerShell if you are on Windows. I started this blog post two weeks ago, and I spent those two weeks using what I wrote here to troubleshoot issues and fix bugs. I am happy I utilized it, and so will you when you give it a try.

C# Delegate

When was the last time you wrote code using the delegate keyword in C#? Do you even remember what it means? To be honest, I have not used the delegate keyword for years; long ago, whenever I wrote one, I had to google the syntax. With the evolution of C#, we rarely see it used directly in production code. But it is important, or at least nice, to know what it is.

Years ago, when I first read about it, I did not understand it at all. I could not wrap my head around it, so I avoided using it. Maybe because I did not understand the why behind the keyword.

Time flies. I am reading C# fundamentals again, and luckily, with 10 years of experience, I can understand the delegate keyword; not bad.

Here is the official definition from MS Docs: a delegate is a type that represents references to methods with a particular parameter list and return type. However, I would also suggest you read Jon Skeet’s explanation.

Such a definition might cause confusion and be hard to understand. One way to understand a new concept is to map it onto something we already know. I hope you can build your own mapping.

Let’s explore an example to understand delegate. Imagine a team with a team leader and some developers. There are bugs that need to be fixed within a week, and the team leader is responsible for leading the team to finish the mission.

The example might confuse you because, well, it is an imaginary context; in real life, or in real project modeling, things are much more complicated. The main point is to have a playground to write some code with delegate, forgetting all the newer fancy stuff such as Action, Func, inline methods, or expressions.

I wired up a very simple .NET Core project using VS Code. Here is the first version I got (after some failing builds; it was my first time using VS Code on a Mac):

using System;
using System.Collections.Generic;
namespace funApp
{
    
    delegate void KillBug(Bug bug);
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello Team Leader");
            var tl = new TeamLeader();
            tl.StartSprint(new List<Bug>{
                new Bug("Fix homepage layout"),
                new Bug("Timeout exception when searching")
            });
            tl.RunSprint(new KillBug(SayBugName));
            
            Console.Read();
        }
        private static void SayBugName(Bug bug)
        {
            Console.WriteLine($"Hi {bug.Name}, how are you today?");
        }
        
    }
    class TeamLeader
    {
        private IList<Bug> _bugs = new List<Bug>();
        public void StartSprint(IList<Bug> bugsFromBacklog)
        {
            foreach (var bug in bugsFromBacklog)
            {
                _bugs.Add(bug);
            }
        }
        public void RunSprint(KillBug knowHowToKillBug)
        {
            foreach (var bug in _bugs)
            {
                knowHowToKillBug(bug);
            }
        }
    }
    class Bug
    {
        public string Name { get; set; }
        
        public Bug(string name)
        {
            Name = name;
        }
    }
}

It works.

The example demonstrates a very important concept in programming: separation of concerns. Look at the RunSprint method for a moment. A team leader usually does not know how to kill a bug themselves; I know many team leaders are great developers, but they cannot kill all the bugs on their own. They rely on someone who knows how to kill a bug, usually a developer on the team, or they can outsource it, as long as that someone knows how to kill a bug. That “knowing how to kill a bug” is modeled via the KillBug delegate.

In later versions of C# (2, 3, …) and .NET, there are more options to rewrite the code and get rid of the KillBug delegate declaration, but the concept remains:

       public void RunSprint(Action<Bug> knowHowToKillBug)
        {
            foreach (var bug in _bugs)
            {
                knowHowToKillBug(bug);
            }
        }

That method will produce the same outcome.
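With C# 3 lambdas, the caller does not even need a named method; the same Action&lt;Bug&gt; overload accepts an inline lambda. A self-contained sketch (types trimmed from the example above):

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch: the same TeamLeader shape as above, but the caller
// passes a lambda instead of wrapping a method in new KillBug(...).
class Bug
{
    public string Name { get; }
    public Bug(string name) { Name = name; }
}

class TeamLeader
{
    private readonly List<Bug> _bugs = new List<Bug>();

    public void StartSprint(IEnumerable<Bug> bugsFromBacklog)
    {
        _bugs.AddRange(bugsFromBacklog);
    }

    // Action<Bug> replaces the hand-declared KillBug delegate type.
    public void RunSprint(Action<Bug> knowHowToKillBug)
    {
        foreach (var bug in _bugs) knowHowToKillBug(bug);
    }
}

class Program
{
    public static void Main()
    {
        var tl = new TeamLeader();
        tl.StartSprint(new[] { new Bug("Fix homepage layout") });

        // The lambda takes the place of new KillBug(SayBugName).
        tl.RunSprint(bug => Console.WriteLine($"Hi {bug.Name}, how are you today?"));
    }
}
```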

Do I need to know delegate to write C# code and get my job done? No, you don’t.

Do I need to understand it? Yes, you should. Understanding the why behind the delegate concept is very important for making proper designs. Every language feature exists for a good reason, and we developers must understand that reason to make the best use of it.