
Ecommerce DDD - Hands-on Domain-driven Design and Event Sourcing - 5/6


In the previous post, I talked about persisting domain events into the event store and projecting and reading them, all using MartenDb. Now it’s time to wrap up everything I have covered so far, add some missing infrastructural pieces, and finish the backend.


Docker containers


I couldn’t finish this series without mentioning how important it was to give you an out-of-the-box solution that doesn’t require installing anything other than Docker and runs everything with a few command lines. If you’re not familiar with Docker, it is the most popular and widely used open-source platform for deploying and managing containerized applications. I composed the backend with a Docker container for each microservice, and I also used public container images for the database (Postgres), the message broker (Kafka), the API Gateway, and the Identity Server. You can learn more about public Docker images in the Docker Hub library.

With everything in place in the docker-compose.yml file, all you need to get the project up and running is:

$ docker-compose up


Ocelot - API Gateway


Because of the microservice architecture, there are many independent APIs, at least one per service. The SPA must make sense of all of them and send each request to the right API for the right need, yet each API runs somewhere else, on a different port and, in this case, in a different Docker container.

Imagine the consumer (the SPA) having to know all these details just to send a command or a query. We don’t want to expose that, for many reasons: not only should the external world not know about the inner architecture when using the endpoints, but handling all this routing on the client wouldn’t be practical at all.

The solution for this is an API Gateway, and I used Ocelot in this project. Ocelot did the job fairly well, allowing me to centralize the API at http://localhost:5000, and that’s all the SPA needs to know.

All the routes are set in ocelot.json; the only thing I needed to declare in the DownstreamHostAndPorts/Host field was the Docker image name of the corresponding service. Please check the Ocelot documentation for more details; I probably haven’t explored most of its features.

├── Crosscutting
│   ├── EcommerceDDD.ApiGateway
│   │   ├── ocelot.json
{
  "Routes": [
    {
      "DownstreamPathTemplate": "/api/customers",
      "UpstreamPathTemplate": "/api/customers",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [
        {
          "Host": "ecommerceddd-customers",
          "Port": 80
        }
      ],
      "UpstreamHttpMethod": [ "GET", "POST" ]
    }
  ]
}


EcommerceDDD.IdentityServer


When creating a new customer, you must provide an email and a password. These fields are not part of the customer’s bounded context; they exist merely for account/security purposes. However, they play a bigger role in authenticating a user or the application. I placed all the logic for handling them in the Crosscutting/EcommerceDDD.IdentityServer project.

ASP.NET Core Identity

ASP.NET Core Identity is an API that supports user interface (UI) login functionality. It manages users, passwords, profile data, roles, claims, tokens, email confirmation, and more.

I implemented it using the same PostgreSQL server instance we saw for persisting domain events, but now backing a database created from the ASP.NET Core Identity migrations, which you’ll find in the Database/Migrations folder.

For this specific set of migrations, I’m using the IdentityApplicationDbContext, and I added migrations using:

dotnet ef migrations add InitialMigration -c IdentityApplicationDbContext

IdentityServer

IdentityServer is an OpenID Connect and OAuth 2.0 framework for ASP.NET Core.

IdentityServer is handy for authentication and integrates easily with ASP.NET Core Identity. Check out the Program.cs below and notice how I made it support the application using the .AddAspNetIdentity extension method:
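Roughly, that wiring looks like the sketch below (ApplicationUser, the connection string name, and the store configuration are assumptions of mine; the repo’s actual Program.cs may differ):

using Microsoft.AspNetCore.Identity;
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);
var connectionString = builder.Configuration.GetConnectionString("DefaultConnection");

// ASP.NET Core Identity backed by the IdentityApplicationDbContext migrations
builder.Services.AddDbContext<IdentityApplicationDbContext>(options =>
    options.UseNpgsql(connectionString));

builder.Services.AddIdentity<ApplicationUser, IdentityRole>()
    .AddEntityFrameworkStores<IdentityApplicationDbContext>()
    .AddDefaultTokenProviders();

// IdentityServer using EF Core stores and the Identity users registered above
builder.Services.AddIdentityServer()
    .AddConfigurationStore(options =>
        options.ConfigureDbContext = db => db.UseNpgsql(connectionString))
    .AddOperationalStore(options =>
        options.ConfigureDbContext = db => db.UseNpgsql(connectionString))
    .AddAspNetIdentity<ApplicationUser>();

var app = builder.Build();
app.UseIdentityServer();
app.Run();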

Two more migrations were added to complete the persistence setup for IdentityServer:

dotnet ef migrations add InitialIdentityServerConfigurationDbMigration -c ConfigurationDbContext -o Migrations/IdentityServer/ConfigurationDb
dotnet ef migrations add InitialIdentityServerPersistedGrantDbMigration -c PersistedGrantDbContext -o Migrations/IdentityServer/PersistedGrantDb

There’s also a DataSeeder.cs file for setting up the environment with some default clients, resources, and scopes used by IdentityServer. These are fundamental for issuing and validating tokens.

With both migrations applied when the project runs, we should have this complete database structure:

Issuing tokens

With everything up and running, you should see the ecommerceddd-identityserver container running on port 5001. This project has the AccountsController, which is used both for creating a user (for the Customer) and for requesting a user token using the email and password.
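As a rough sketch of what such a controller might look like (the routes, request records, and the ITokenRequester method name below are illustrative, not the repo’s exact signatures):

using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc;

// Illustrative shapes only; the real contracts live in the repo
public class ApplicationUser : IdentityUser { }
public record RegisterUserRequest(string Email, string Password);
public record LoginRequest(string Email, string Password);
public interface ITokenRequester
{
    Task<string> RequestUserTokenAsync(string email, string password);
}

[ApiController]
[Route("api/accounts")]
public class AccountsController : ControllerBase
{
    private readonly UserManager<ApplicationUser> _userManager;
    private readonly ITokenRequester _tokenRequester;

    public AccountsController(UserManager<ApplicationUser> userManager, ITokenRequester tokenRequester)
    {
        _userManager = userManager;
        _tokenRequester = tokenRequester;
    }

    // Creates the Identity user that backs a Customer account
    [HttpPost("register")]
    public async Task<IActionResult> Register(RegisterUserRequest request)
    {
        var user = new ApplicationUser { UserName = request.Email, Email = request.Email };
        var result = await _userManager.CreateAsync(user, request.Password);
        return result.Succeeded ? Ok() : BadRequest(result.Errors);
    }

    // Exchanges email/password for a user token issued by IdentityServer
    [HttpPost("login")]
    public async Task<IActionResult> Login(LoginRequest request)
    {
        var token = await _tokenRequester.RequestUserTokenAsync(request.Email, request.Password);
        return Ok(token);
    }
}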


Notice that the controller injects ITokenRequester, a simple service I created to wrap the logic of requesting tokens from this microservice; it is used everywhere. It also makes it easy for the application itself to request application tokens from time to time, since the microservices call one another through internal HTTP requests.

ITokenRequester relies on TokenIssuerSettings.cs, a configuration record matching a section in the appsettings.json of each microservice, from which it gathers the information needed to request tokens:

User Token

"TokenIssuerSettings": {
  "Authority": "http://ecommerceddd-identityserver",
  "ClientId": "ecommerceddd.user_client",
  "ClientSecret": "secret234554^&%&^%&^f2%%%",
  "Scope": "openid email read write delete"
}

Application Token

"TokenIssuerSettings": {
  "Authority": "http://ecommerceddd-identityserver",
  "ClientId": "ecommerceddd.application_client",
  "ClientSecret": "secret33587^&%&^%&^f3%%%",
  "Scope": "ecommerceddd-api.scope read write delete"
}

Notice that User tokens are generated on behalf of a specific user during the authentication process. They represent the identity of the user, contain information such as the user ID, claims, and other data, and have a shorter lifespan. Application tokens, in comparison, are used to authenticate and authorize the application itself rather than a specific user; since this machine-to-machine communication does not depend on individual user sessions, these tokens have a longer lifespan.
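To make the application-token case concrete, here is a minimal sketch of how a token requester could use those settings, assuming a standard OAuth 2.0 client_credentials request against IdentityServer’s /connect/token endpoint (the class and method names are illustrative; the real ITokenRequester may differ):

using System.Net.Http;
using System.Text.Json;

public record TokenIssuerSettings(string Authority, string ClientId, string ClientSecret, string Scope);

public class TokenRequester
{
    private readonly HttpClient _httpClient;
    private readonly TokenIssuerSettings _settings;

    public TokenRequester(HttpClient httpClient, TokenIssuerSettings settings)
    {
        _httpClient = httpClient;
        _settings = settings;
    }

    public async Task<string> RequestApplicationTokenAsync()
    {
        // client_credentials: the microservice authenticates as itself, no user involved
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "client_credentials",
            ["client_id"] = _settings.ClientId,
            ["client_secret"] = _settings.ClientSecret,
            ["scope"] = _settings.Scope
        });

        var response = await _httpClient.PostAsync($"{_settings.Authority}/connect/token", form);
        response.EnsureSuccessStatusCode();

        using var payload = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        return payload.RootElement.GetProperty("access_token").GetString()!;
    }
}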


Kafka topics


Apache Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.

One last but essential aspect of the infrastructure is allowing different bounded contexts to communicate through a message broker. I mentioned integration events in the previous chapters; they implement the IIntegrationEvent interface, which inherits from MediatR’s INotification interface.
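In code, that relationship boils down to a marker interface on top of MediatR; a minimal sketch (the example event and its properties are illustrative):

using MediatR;

// Marker interface: integration events are MediatR notifications that cross bounded contexts
public interface IIntegrationEvent : INotification { }

// Illustrative integration event published by the order processing context
public record OrderPlaced(Guid OrderId, Guid CustomerId) : IIntegrationEvent;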

I’m using Kafka as the message broker here, but there are other good options, such as RabbitMQ, Memphis, Azure Service Bus, and more.

The idea is simple: some microservices produce integration events, while others consume them. In EcommerceDDD.Core.Infrastructure/Kafka, you will find a KafkaConsumer class, wired up from Program.cs in the EcommerceDDD.OrderProcessing microservice, which subscribes to a list of topics defined in appsettings.json.
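To give an idea of what the consuming side does, here is a simplified sketch using the Confluent.Kafka client (broker address, group id, and topic names are illustrative; the real KafkaConsumer runs as a hosted background service):

using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "kafka:9092",            // broker address from configuration
    GroupId = "ecommerceddd-orderprocessing",   // one consumer group per microservice
    AutoOffsetReset = AutoOffsetReset.Earliest
};

using var consumer = new ConsumerBuilder<string, string>(config).Build();
consumer.Subscribe(new[] { "order_placed", "payment_finalized", "shipment_finalized" });

using var cts = new CancellationTokenSource();
while (!cts.IsCancellationRequested)
{
    var result = consumer.Consume(cts.Token);

    // Deserialize result.Message.Value into the matching IIntegrationEvent
    // and publish it through MediatR so handlers such as OrderSaga can react.
}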

When an event reaches a topic, the consumer receives it from the stream and deserializes it into the corresponding integration event, which OrderSaga.cs is configured to handle:

public class OrderSaga :
    IEventHandler<OrderPlaced>,
    IEventHandler<OrderProcessed>,
    IEventHandler<PaymentFinalized>,
    IEventHandler<ShipmentFinalized>

I based this idea on the examples I found in this awesome repo I mentioned before. Check it out!

Now back to Kafka; when using kafka-ui, you can easily see the existing topics and check their messages.

How to ensure transactional consistency across microservices?

If events are consumed, it means they were published first. Now think about how important it is to ensure consistency in this process; otherwise, the entire flow can be compromised.

Back in MartenRepository.cs, you will see a method I haven’t covered so far because it wouldn’t have made sense before:

AppendToOutbox(INotification @event)

Unlike the AppendEventsAsync method, which stores domain events, this one is meant to store only integration events, whose purpose is to tell the other microservices subscribed to that event that they can move the flow along. From placing an order to payment and, finally, to shipment, we need to guarantee at-least-once delivery of the integration event throughout the flow. AppendToOutbox adds the integration event to the same Unit of Work used for the domain events, so the integration event is saved into an Outbox table only when the whole transaction committing the aggregate’s changes takes effect.
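Conceptually, it looks something like the sketch below, using Marten’s unit of work (the OutboxMessage document and method shapes are illustrative; the repo’s actual implementation may differ):

using Marten;
using MediatR;

// Illustrative outbox document persisted alongside the aggregate's events
public record OutboxMessage(Guid Id, string Type, string Payload, DateTime CreatedAt);

public class OutboxSketch
{
    private readonly IDocumentSession _session;

    public OutboxSketch(IDocumentSession session) => _session = session;

    public void AppendToOutbox(INotification @event)
    {
        // Queued in the same Marten unit of work as the domain events...
        _session.Store(new OutboxMessage(
            Guid.NewGuid(),
            @event.GetType().Name,
            System.Text.Json.JsonSerializer.Serialize(@event, @event.GetType()),
            DateTime.UtcNow));
    }

    public async Task CommitAsync(Guid streamId, params object[] domainEvents)
    {
        _session.Events.Append(streamId, domainEvents);

        // ...so the aggregate changes and the integration event commit atomically
        await _session.SaveChangesAsync();
    }
}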

Initially, I had a KafkaConsumer background service constantly checking for messages put into the outbox table of each microservice, until I changed everything to use Debezium: once a message is inserted, it is automatically published to a Kafka topic.

I wrote an entire post, Consistent message delivery with Transactional Outbox Pattern, with details on using this technology. Remember to check it out!


SAGA - Placing order into the chaos


At this point, we can place orders; orders are processed asynchronously on the server, and integration events fan out other internal commands. To coordinate all of this in a logical sequence, we need a SAGA, a design pattern for managing data consistency in distributed transaction scenarios like this one. To illustrate the flow, check the diagram below, with Events in orange and Commands in blue sticky notes:


The successful ordering workflow is handled in OrderSaga.cs. However, there are failure cases you have to be prepared to handle, compensating the flow somehow. For example, what if you purchase more products than are available in stock? Or what if you exceed the credit limit and can’t complete the payment? I implemented compensation events in each microservice and placed the handling for these cases in OrderSagaCompensation.cs to cancel the order.

To test the compensation flow, try to either spend more than your credit limit or order the maximum quantity of several products, as shown below:

Notice that all events in the compensation flow trigger a cancel command, quick and dirty, with a cancellation reason and a reference key to whatever originated it. In real-world e-commerce, each case could be treated in a friendlier, more sophisticated way, handling backorders and so on before canceling the order. For mere demonstration purposes, what I’ve done here will do.
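As a sketch of that idea, using MediatR directly instead of the repo’s IEventHandler abstraction (the compensation event and command shapes are illustrative):

using MediatR;

// Illustrative compensation event published when the payment cannot be completed
public record CustomerCreditLimitExceeded(Guid OrderId, string Reason) : INotification;

// Illustrative cancel command handled by the order processing context
public record CancelOrder(Guid OrderId, string CancellationReason) : IRequest;

public class OrderSagaCompensationSketch : INotificationHandler<CustomerCreditLimitExceeded>
{
    private readonly IMediator _mediator;

    public OrderSagaCompensationSketch(IMediator mediator) => _mediator = mediator;

    public async Task Handle(CustomerCreditLimitExceeded @event, CancellationToken cancellationToken)
    {
        // Every compensation event simply triggers a cancel command,
        // carrying the reason and a reference to what originated it.
        await _mediator.Send(new CancelOrder(@event.OrderId, @event.Reason), cancellationToken);
    }
}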


Final thoughts


Everything we’ve seen so far completes the backend portion of the solution. Keep in mind that microservices aren’t a must, nor are monoliths dead, as many developers think nowadays. They solve different types of problems and bring great benefits, but they also introduce complexity you have to be aware of and ready to face; otherwise, they can be pointless and even harmful to the success of your project, especially when dealing with minimum viable products where development timing is crucial.

With all that said, we’re now ready for the next and final chapter, where I’ll focus exclusively on the SPA that makes all this shine. See you there!


Check the project on GitHub



This post is licensed under CC BY 4.0 by the author.