Manage Energy not Time

OK. So now you have time. But you are tired, exhausted, or, even worse, sick. What do you do? How do you feel?

You are running on your energy. Without it, nothing else matters. In the worst case, when the energy is out, time is up.

I used to try to optimize time, manage time, find ways to have more time. None of it succeeded. I got frustrated when there were so many things to do but the clock said, "Hey man, time’s up!" Sometimes, I was lucky enough to find some blocks of time. Unfortunately, I could not do anything useful with them. I was exhausted. The body and the brain simply refused to work. Stress accumulated!

I recall the days when I felt good: I woke up with full energy, got so many things done, and still felt good at the end of the day. Unfortunately, those days were few.

I managed the wrong factor! I should have focused on managing my energy, not my time.

Time is a lost battle

24 hours a day is what you get, and so does everybody else. No exceptions. "Free time" does not exist. The mind is always searching for something to fill the void. Some of it is good and some is bad.

Energy is in your hands

Not everyone feels good or was born with good health. Regardless of your current status, you can always do something to improve your health tomorrow, next week, next month, or even next year. It is a factor I can control.

Sounds easy, huh? Hmm, hell no. It is easier said than done. But, and this is important, I can make progress.

Here are some things I can start building as habits and integrating into my schedule:

  1. Exercise properly
  2. Eat less sugar
  3. Eat a variety of fruits and vegetables
  4. Get enough sleep
  5. Educate myself on how to live healthier

Just to name a few. I do not have to do them all at once. Just get started. I own my outcome, my life.

It is a mindset shift. It also helps me deal with outside factors. When something happens, I simply ask:

  1. How does it impact my energy?
  2. Will it consume too much of my energy?

If the answer is yes, then I had better consider saying no. I have a list of important things in my life that need a lot of energy.

While writing this post, I feel very energized. I have enough energy to rock the day.

Have a good day = have a day full of energy!

Learning How To Learn from Coursera

It was kind of funny that I took this course, Learning How To Learn. I graduated from university, have been a professional programmer for 15 years, and what? I have to learn how to learn? I am glad I took the course.

I learned about the course while listening to the Tim Ferriss Show podcast on a morning run (a wonderful habit, by the way). The guest mentioned the course and said he was taking it at night. It is free. Cool! Why not me? So I did.

Below are my notes and key takeaways from the course.

Focused and Diffuse Modes

When it comes to learning and solving problems, the brain has 2 modes: focused and diffuse.

Focused mode helps us solve known problems. There are patterns; we kind of know how to approach the problem.

Diffuse mode is where all the creativity comes from. There are problems that we have no idea how to approach, problems that require completely new ways of thinking. In my words, it means I should not focus on the problem. Instead, I feed the problem in and let my brain figure it out without my awareness. Banging my head against the wall will not create anything except a broken, bleeding head.

The key point is that I have to trust these 2 modes and decide when to use which one. However, do not assume it will just work. The diffuse mode only works if the brain has a variety of inputs: sources, data, knowledge from different fields. If you do not study anything, nothing will work. Cross-learning is powerful. Knowledge in one area will help solve problems in others, via the diffuse mode.

Chunks/Chunking

Chunks are groups of knowledge, collected and stored in your brain. I remember reading materials and thinking I understood them; it was all gone after a day or two. Now I know that I had not created chunks. There are many practices and steps to form chunks. Understanding is not enough. There is understanding, practicing, and context.

I have to do my homework. I have to stay with the concepts. I have to familiarize myself with the subjects. Chunks are fundamental, and I used to skip building them a lot.

Recall NOT Reread

This is a mistake I made a lot. When learning materials, I reread them. Rereading is a passive action; it barely invokes any brain activity. When I reread, the brain simply overrides the existing data or stores duplicates somewhere.

Recall, however, is an active action. It triggers the process of retrieving information: trying to reason about what the things are and how they connect to each other or to existing knowledge.

There is a 30-second rule: after a meeting, a conversation, … spend 30 seconds jotting down the things that matter most to you.

Spaced Repetition

Another mistake I made. Speaking of recall, I used to recall continuously throughout the day, because I was afraid I would forget things. Spaced repetition means spreading the recalls out over days. For example, if I learn something on Monday, I should recall it on Tuesday or Wednesday, and again on Friday.

Recently, I have been recalling (spaced repetition) while running in the morning. It is the perfect time.

Metaphor and Visualization

This is about the learning technique itself. We should create metaphors, visual analogies, and connections when learning new concepts. The more you do, the better you will remember them.

Take the Hard Stuff

I used to learn the easiest stuff first. This advice changed my approach: start from the easy parts, but also challenge yourself with the hard stuff. This gives my diffuse mode a chance to figure things out.

Sleep and Schedule Before Sleep

I recently learned how crucial sleep is. Staying up late and waking up early to learn is a big mistake. If you want to do something tomorrow, plan it before going to bed; the brain (the diffuse mode, remember) will help you figure it out.

If you want to know more, visit Learning How to Learn on Coursera. I am sure you will not regret it.

An Opinionated Usage of Interface

Every C# developer knows what an interface is. From the code perspective, it is interface ISomething. There is no requirement for the I prefix in the name; that is just a naming convention. And it is good to know something is an interface just by looking at its name. A more interesting question is: when do we use an interface? I guess each developer has their own answer and the reasons behind it.

About 10 years ago, I started to use interfaces a lot. Back then, the obvious reason was mocking and unit testing. And then dependency injection came into my developer life. I read somewhere that you should inject interfaces instead of concrete implementations.

Have you ever heard of these?

  • Depend on interfaces instead of concrete implementations
  • It is easier for you to change the implementation later
  • It helps you mock the implementation in unit tests
  • Your code looks clean and professional

In some codebases, I ended up with interfaces everywhere. The unit tests were a bunch of mocks upon mocks, everywhere. It looked good in the beginning. However, after a while, it was a pain.

  • It was hard to refactor. For example, when I moved a piece of code from one class to another without changing the outcome behavior, the unit tests failed, because the expected interface was no longer there. The unit tests knew too much about the implementation details. I have the habit of refactoring code quite often, and I expect the unit tests to catch my mistakes if I accidentally change the outcome behavior. With mocking, I failed to achieve that
  • I had to test at every layer. Basically, there were behavior tests with mocking and there were tests for the actual implementations. There was too much test code to maintain. That was a waste of time and effort, and error-prone on top of it
  • The chance of actually changing an implementation was rare

OK, so are interfaces useful? Of course they are. Here are my opinions on when to use them.

Code Contract

The interface tells its consumers which functionality it supports. A class might have 10 methods, but not every consumer uses or cares about all 10; some might be interested in only 2 or 3. It is fine to inject the class. However, the consumer can get confused and might misuse it.

Let's take an imaginary log service as an example. Here is the concrete implementation

public class SimpleLogService
{
    public void WriteLog(string logMessage)
    {
        // Write the log message to the underlying store (omitted here).
    }

    public IList<string> GetLogs()
    {
        return new List<string>();
    }
}

// API Controller to read the log
public class LogController : Controller
{
    private readonly SimpleLogService _logService;
    public LogController(SimpleLogService logService)
    {
        _logService = logService;
    }

    public IActionResult Get()
    {
        return Ok(_logService.GetLogs());
    }
}

There is nothing wrong with the above code. However, I do not want LogController to see or use the WriteLog method. That method is for other controllers or services. And the SimpleLogService class might grow over time as more and more methods are added.

To solve that problem, I want to create a contract to tell LogController what it can use.

public interface ILogReaderService
{
    IList<string> GetLogs();
}

public class SimpleLogService : ILogReaderService
{
    public void WriteLog(string logMessage)
    {

    }

    public IList<string> GetLogs()
    {
        return new List<string>();
    }
}

// API Controller to read the log
public class LogController : Controller
{
    private readonly ILogReaderService _logService;
    public LogController(ILogReaderService logService)
    {
        _logService = logService;
    }

    public IActionResult Get()
    {
        return Ok(_logService.GetLogs());
    }
}

And I can do the same for the WriteLog part if necessary.
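If both contracts end up being extracted, the DI container can expose one shared instance behind each of them. Here is a minimal sketch with Microsoft.Extensions.DependencyInjection; the ILogWriterService name is my assumption, not something from the code above.

using Microsoft.Extensions.DependencyInjection;

public interface ILogWriterService
{
    void WriteLog(string logMessage);
}

// SimpleLogService would then implement both narrow contracts:
// public class SimpleLogService : ILogReaderService, ILogWriterService { ... }

public void ConfigureServices(IServiceCollection services)
{
    // Register one shared instance, exposed through the two narrow contracts.
    services.AddSingleton<SimpleLogService>();
    services.AddSingleton<ILogReaderService>(sp => sp.GetRequiredService<SimpleLogService>());
    services.AddSingleton<ILogWriterService>(sp => sp.GetRequiredService<SimpleLogService>());
}

This way, LogController only sees GetLogs, while writers only see WriteLog, even though both talk to the same object.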

Decouple Implementation Dependency

In many projects, data is involved. There are databases. And then comes the concept of a Repository. If the repository implementation were easy and the database ready, a developer could write a complete feature from the API layer down to the database layer. But I doubt that is the reality. The situation might look like this

  • One developer takes care of front end development
  • One developer takes care of the API (controller) implementation
  • One developer takes care of designing the database and writing the repository. This might be the same developer who implements the API

The API layer depends on the Repository. However, we also want to see the whole flow working and speed up development. Let's see some code

public class UserController : Controller
{
    private readonly IUserRepository _repository;

    public UserController(IUserRepository repository)
    {
        _repository = repository;
    }

    public async Task<IActionResult> GetUsers()
    {
        var users = await _repository.GetAllUsers();

        return Ok(users);
    }
}

The IUserRepository is the Code Contract between the API and the repository implementation. To unblock the development flow, a simple in-memory repository implementation is introduced

public class InMemoryUserRepository : IUserRepository
{
    public async Task<IList<User>> GetAllUsers()
    {
        await Task.CompletedTask;

        return new List<User>{
            new User("Elsa"),
            new User("John"),
            new User("Anna")
        };
    }
}

And the API can function. This removes the dependency on the actual repository implementation. When that implementation is ready, switch over to it.
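For completeness, the snippets above assume a contract and a model that are never shown. A minimal sketch of what they might look like; the User shape is my assumption, using a C# 9 record for brevity:

using System.Collections.Generic;
using System.Threading.Tasks;

public record User(string Name);

public interface IUserRepository
{
    Task<IList<User>> GetAllUsers();
}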

However, do not overuse this. Otherwise, you end up with interfaces everywhere, and each developer supplying their own temporary implementations. Choosing the right dependencies is an art, and context matters a lot.

Conclusion

I rarely create interfaces with unit testing in mind. Rather, they are the outcome of writing code: refactoring a concrete implementation and extracting interfaces where they make the most sense. When I do, I pay close attention to their meaning. If I can avoid an interface, I will.

Code Contract and Decouple Implementation Dependency are the 2 big benefits of having proper interfaces. There are other reasons to use interfaces, and they are all valid depending on the context. Sometimes, it is the way the project architect wants it.

What are yours?

Have Fun with Fibonacci

I barely remember the last time I implemented the famous Fibonacci. It was the textbook example of a recursive implementation. I am not sure whether it still is these days.

Over the weekend, I was reading up on some recent stuff in C#, and Fibonacci came to my mind. I had never tried implementing it differently, nor tested how bad the recursive implementation really is. So it was fun to write some code.

Given that we need the result of F30 (or F40), how long would it take, and how many calls would the recursive approach make? And how do the alternative implementations compare?

Recursive Implementation

using System;
using System.Collections.Generic;
using System.Diagnostics;

class Program
{
    private static int _recursiveCount = 0;
    private static long RecursiveFibonacci(int n)
    {
        _recursiveCount++;
        return n < 2 ? n : RecursiveFibonacci(n - 1) + RecursiveFibonacci(n - 2);
    }

    private static void DisplayFibonacci(Func<int, long> fibo, int number)
    {
        Console.WriteLine($"{number}: {fibo(number)}");
    }

    static void Main(string[] args)
    {
        const int number = 30;
        var stopwatch = new Stopwatch();
        stopwatch.Start();
        DisplayFibonacci(RecursiveFibonacci, number);
        stopwatch.Stop();

        Console.WriteLine($"Completed after: {stopwatch.Elapsed}; Recursive Counts: {_recursiveCount}");
    }
}

Run and observe the result

Completed after: 00:00:00.0457925; Recursive Counts: 2692537

The completion time depends on the computer the code runs on. But the recursive count is impressive: 2,692,537 calls. The call count grows like the Fibonacci numbers themselves (it is exactly 2 * F(n+1) - 1; for n = 30 that is 2 * 1346269 - 1 = 2692537), which explains why I increased the number to 50 and lost my patience waiting.

Recursive with Cache

We can improve the recursive solution by caching results (memoization). Again, this is a simple implementation, just for fun.

private static readonly List<long> _fiboCache = new List<long> { 0, 1 };
private static long FibonacciRecursiveWithCache(int n)
{
    _recursiveCount++;
    // Grow the cache with -1 placeholders for values not computed yet.
    while (_fiboCache.Count <= n)
    {
        _fiboCache.Add(-1);
    }

    // Compute and store the value only on the first visit.
    if (_fiboCache[n] < 0)
    {
        _fiboCache[n] = n < 2 ? n : FibonacciRecursiveWithCache(n - 1) + FibonacciRecursiveWithCache(n - 2);
    }

    return _fiboCache[n];
}

For Loop Implementation

And I gave the non-recursive approach a try. There are more lines of code, but it runs so much faster

private static long FibonacciForLoop(int n)
{
    if (n < 2)
    {
        return n; // F(0) = 0, F(1) = 1
    }

    long n_1 = 1;   // F(1)
    long n_2 = 1;   // F(2)
    long total = 1; // covers n = 2, when the loop does not run
    for (int i = 3; i <= n; i++)
    {
        total = n_1 + n_2;
        n_1 = n_2;
        n_2 = total;
    }

    return total;
}

I do not need 3 variables (n_1, n_2, total); the solution only needs 2. However, 3 variables felt natural to me, since it follows the way I would calculate it by hand.
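For the curious, here is a sketch of the 2-variable version, using C# tuple assignment to roll the pair forward. It produces the same results for n >= 0:

private static long FibonacciTwoVariables(int n)
{
    long previous = 0; // F(0)
    long current = 1;  // F(1)
    for (int i = 0; i < n; i++)
    {
        // Shift the window: (F(i), F(i+1)) becomes (F(i+1), F(i+2)).
        (previous, current) = (current, previous + current);
    }

    return previous; // after n shifts, previous holds F(n)
}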

So let's put them together and see the differences

static void Main(string[] args)
{
    const int number = 30;

    Console.WriteLine("Recursive");
    var stopwatch = new Stopwatch();
    stopwatch.Start();
    DisplayFibonacci(RecursiveFibonacci, number);
    stopwatch.Stop();

    Console.WriteLine($"Completed after: {stopwatch.Elapsed}; Recursive Counts: {_recursiveCount}");

    Console.WriteLine("Recursive with cache");
    _recursiveCount = 0;
    stopwatch.Reset();
    stopwatch.Start();
    DisplayFibonacci(FibonacciRecursiveWithCache, number);
    stopwatch.Stop();

    Console.WriteLine($"Completed after: {stopwatch.Elapsed}; Recursive Counts: {_recursiveCount}");

    Console.WriteLine("For loop");
    stopwatch.Reset();
    stopwatch.Start();
    DisplayFibonacci(FibonacciForLoop, number);
    stopwatch.Stop();

    Console.WriteLine($"Completed after: {stopwatch.Elapsed}");
}

Ladies and gentlemen, I present to you

Recursive
30: 832040
Completed after: 00:00:00.0277573; Recursive Counts: 2692537
Recursive with cache
30: 832040
Completed after: 00:00:00.0003827; Recursive Counts: 59
For loop
30: 832040
Completed after: 00:00:00.0001500

That’s it! Now I know how to implement Fibonacci.

Technical Notes – CosmosDB Change Feed and Azure Function

Some notes from looking deeper into the integration between the Azure CosmosDB Change Feed and Azure Functions. Most of the time, we simply use the built-in trigger, and it just works. That is the beauty of Azure.

// Azure Function code, CosmosDb trigger. Taken from the MS example
public static class CosmosTrigger
{
    [FunctionName("CosmosTrigger")]
    public static void Run([CosmosDBTrigger(
        databaseName: "ToDoItems",
        collectionName: "Items",
        ConnectionStringSetting = "CosmosDBConnection",
        LeaseCollectionName = "leases",
        CreateLeaseCollectionIfNotExists = true)]IReadOnlyList<Document> documents,
        ILogger log)
    {
        if (documents != null && documents.Count > 0)
        {
            log.LogInformation($"Documents modified: {documents.Count}");
            log.LogInformation($"First document Id: {documents[0].Id}");
        }
    }
}

The example is everywhere. Nothing fancy about it.

In a project, we took advantage of that feature to migrate data from CosmosDB to Azure SQL Database for later processing. I wanted to make sure that we were well-prepared for production, so I did some learning. Here are the notes. None of them are mine or new; they are simply written the way I want to remember them, in the areas I am interested in.

Container, Logical Partition, and Physical Partition

The Change Feed works per container. If a database has 3 containers, each has its own Change Feed. The Change Feed ensures that documents are sent in the order they were written; that order is guaranteed per logical partition key.

The Change Feed is as reliable as the database itself.

Under the hood, data is stored in many physical partitions. At that level, the Change Feed actually works per physical partition. Data might be shuffled from one physical partition to another; when that happens, the Change Feed moves as well. So how is document order ensured across physical partitions, especially after a move? The Change Feed Processor (CFP) manages all of that complexity for us.

Change Feed Processor (CFP)

In theory, developers can write code to interact directly with the Change Feed. It is possible but not practical; not many (I cannot say none) want to. Instead, most depend on the Change Feed Processor (CFP). The MS docs have sample code if you want to write your own consumer.
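For reference, here is a minimal sketch of hosting the CFP yourself with the Microsoft.Azure.Cosmos v3 SDK. The database, container, and lease names are assumptions borrowed from the trigger example above:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class ChangeFeedProcessorHost
{
    public static async Task<ChangeFeedProcessor> StartAsync(CosmosClient client)
    {
        Container monitored = client.GetContainer("ToDoItems", "Items");
        Container leases = client.GetContainer("ToDoItems", "leases");

        ChangeFeedProcessor processor = monitored
            .GetChangeFeedProcessorBuilder<dynamic>("itemsProcessor", HandleChangesAsync)
            .WithInstanceName(Environment.MachineName)
            .WithLeaseContainer(leases)
            .Build();

        await processor.StartAsync();
        return processor;
    }

    private static Task HandleChangesAsync(
        IReadOnlyCollection<dynamic> changes,
        CancellationToken cancellationToken)
    {
        // The CFP checkpoints progress in the lease container;
        // this handler only has to process the batch.
        Console.WriteLine($"Received {changes.Count} changed documents");
        return Task.CompletedTask;
    }
}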

Azure Function with CosmosDb trigger

Azure CosmosDb trigger configuration

By default, the poll interval is 5 seconds (see the FeedPollDelay property on the trigger attribute).
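If the default does not fit, the delay can be tuned on the trigger attribute. A sketch lowering it to 1 second; FeedPollDelay is in milliseconds:

[FunctionName("CosmosTriggerFastPoll")]
public static void Run([CosmosDBTrigger(
    databaseName: "ToDoItems",
    collectionName: "Items",
    ConnectionStringSetting = "CosmosDBConnection",
    LeaseCollectionName = "leases",
    FeedPollDelay = 1000)] IReadOnlyList<Document> documents, // poll every second instead of every 5
    ILogger log)
{
    log.LogInformation($"Documents modified: {documents.Count}");
}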

Azure Functions with the CosmosDb trigger are a layer on top of the CFP. It saves us from dealing with hosting, deployment, scaling, … thanks to the power of Azure Functions.

If the function execution fails by throwing an exception, the changed documents are sent again in the next run. So there is a risk that the flow gets stuck if failure handling has not been designed properly. The Change Feed and Azure Functions ensure that your code receives the changed documents at least once.
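One way to keep a single poison document from blocking the feed is to catch failures per document and park them for later inspection, instead of throwing and replaying the whole batch. A hedged sketch; the Queue output binding (from the Storage extension) and the ProcessAsync helper are my assumptions, not part of the trigger:

[FunctionName("CosmosTriggerSafe")]
public static async Task Run([CosmosDBTrigger(
    databaseName: "ToDoItems",
    collectionName: "Items",
    ConnectionStringSetting = "CosmosDBConnection",
    LeaseCollectionName = "leases")] IReadOnlyList<Document> documents,
    [Queue("poison-documents")] IAsyncCollector<string> deadLetter,
    ILogger log)
{
    foreach (var document in documents)
    {
        try
        {
            await ProcessAsync(document); // the real work, e.g. copy to Azure SQL
        }
        catch (Exception ex)
        {
            // Park the failing document instead of throwing,
            // so the checkpoint still advances past this batch.
            log.LogError(ex, $"Failed to process document {document.Id}");
            await deadLetter.AddAsync(document.ToString());
        }
    }
}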