
Hands-on DDD and Event Sourcing [3/6] - Domain events and Event Sourcing


In the previous post, we went a bit deeper into bounded contexts and some of the building blocks of the implementation. Now, let’s extend the implementation to domain events.


Domain Events


Before we jump into Event Sourcing, let’s clarify a common confusion: there are many types of events in software architecture, but not all events are domain events, and domain events don’t necessarily imply Event Sourcing.

A domain event represents an immutable fact that has already occurred in the domain — the result of a business behavior. Ideally, your aggregates should expose explicit behaviors (rather than being anemic), and from those behaviors, domain events are born. Once again, ubiquitous language plays a key role in how these events are named and understood.

Domain events are always context-bound: their meaning only holds inside the bounded context where they originated.

When naming events, choose names with high-value semantics for their context, and use the [Noun][PastTenseVerb] pattern (e.g. CustomerRegistered, OrderShipped).
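As a minimal sketch of that convention, domain events can be modeled as immutable records implementing a marker interface. The interface name IDomainEvent matches the one used later in this series; the record names and fields below are illustrative examples, not the project’s actual types:

```csharp
using System;

// Marker interface for domain events (named as in the post).
public interface IDomainEvent { }

// [Noun][PastTenseVerb] names describing facts that already happened.
// C# records give us value equality and immutability by default,
// which matches the "immutable fact" semantics of a domain event.
public record CustomerRegistered(Guid CustomerId, string Name) : IDomainEvent;
public record OrderShipped(Guid OrderId, DateTime ShippedAt) : IDomainEvent;
```

Using records here is a deliberate choice: two events with the same data compare equal, and their properties cannot be mutated after the fact is recorded.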

What about Integration events?

Although they look similar, Domain and Integration events serve different purposes and operate at different scopes.

  • Domain Events: Trigger reactions within the same bounded context. They usually use in-memory synchronous dispatching (e.g., calling handlers inside the same process).
  • Integration Events: Trigger reactions across different bounded contexts or external systems, and are typically handled asynchronously using a messaging infrastructure. Since they fan out across service boundaries, they require decoupling and tolerate non-deterministic response times.

To stream integration events, you’ll usually use a message broker or event bus. There are many options available. In this project, I’m using Kafka for educational purposes, but I’ve also had great experiences with RabbitMQ. We’ll cover the implementation when tackling the infrastructure.


Event Sourcing


In short, Event Sourcing is an architectural pattern where state changes are represented as a sequence of events, and these events become the source of truth.

Logging the events that surround us is nothing new, but for software design purposes, Greg Young is credited with shaping the technique into the form we call Event Sourcing today.

Events are persisted chronologically in what’s called an Event Store. For this to work consistently, the store has to be immutable, which means events are only ever appended, never changed or deleted. Furthermore, Event Sourcing shifts us away from the conventional approach, where we retrieve only the latest state of the domain object, toward reading the sequential stream of events and rehydrating the object until it reaches its latest state.

Also, writing and reading events are completely separate operations that can scale and perform independently, which is why Event Sourcing pairs so naturally with CQRS.

Once again, there is no rule of thumb for the technology you use to implement Event Sourcing. Dedicated products in the market, such as Event Store, do the job very efficiently, but since I’ll be using Postgres as a document database, MartenDB was the natural choice.

With Event Sourcing:

  • Each state transition in an aggregate is captured as a domain event.
  • Events are persisted chronologically in an event store, not as overwrites of current state.
  • The system rebuilds aggregate state by rehydrating it from its stream of past events.
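The rehydration step above can be sketched in a few lines. This is an illustrative, in-memory example under assumed names (Customer, Rehydrate, and the two event records are not the project’s actual API): the aggregate starts blank and applies every past event in chronological order until it reaches its latest state.

```csharp
using System.Collections.Generic;

// Illustrative events for a customer stream.
public record CustomerRegistered(string Name);
public record CustomerRenamed(string NewName);

public class Customer
{
    public string Name { get; private set; } = "";

    // Rebuild the aggregate by replaying its chronological event stream.
    public static Customer Rehydrate(IEnumerable<object> stream)
    {
        var customer = new Customer();
        foreach (var @event in stream)
            customer.Apply(@event);
        return customer;
    }

    // Each event type mutates the corresponding part of the state.
    private void Apply(object @event)
    {
        switch (@event)
        {
            case CustomerRegistered e: Name = e.Name; break;
            case CustomerRenamed e: Name = e.NewName; break;
        }
    }
}
```

Replaying `CustomerRegistered("Ana")` followed by `CustomerRenamed("Ana Silva")` leaves the aggregate with the name from the most recent event, exactly as if the mutations had happened live.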

Why use Event Sourcing?

This approach offers powerful advantages:

  • You no longer face object-relational impedance mismatch — you store data as it was intended: event-based and serialized.
  • You get a natural audit trail — the complete event history reveals how and why the current state exists.
  • You get independent scalability between reads and writes, which leads us to CQRS.

Embedded complexity

Keep in mind that the learning curve can be pretty steep, depending on your implementation. Consider concurrency, for example: multiple users can edit the same record simultaneously, and you need to ensure the resulting events are applied in the proper order. There’s a very nice article where the author covers this subject excellently, and I won’t try to explain it better. He also maintains an awesome repo that inspired many of the ideas behind converting this study project into something event-sourced.

That said, Event Sourcing comes with complexity. You’ll need to deal with:

  • Concurrency and optimistic locking.
  • Schema evolution (when event structures change over time).
  • Event versioning.
  • Performance tuning of long event streams.
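The first item on that list, optimistic locking, can be illustrated with a hypothetical in-memory store (this is a sketch I’m adding for clarity, not the project’s code): an append only succeeds when the caller’s expected version matches the stream’s current version, so two writers racing on the same stream cannot silently clobber each other. Real event stores expose a similar expected-version check on append.

```csharp
using System;
using System.Collections.Generic;

// Thrown when another writer appended to the stream first.
public class ConcurrencyException : Exception { }

public class InMemoryEventStore
{
    private readonly List<object> _stream = new();

    // The stream version is simply how many events it holds.
    public int Version => _stream.Count;

    public void Append(object @event, int expectedVersion)
    {
        // Optimistic locking: reject writes based on a stale view of the stream.
        if (expectedVersion != Version)
            throw new ConcurrencyException();

        _stream.Add(@event); // append-only: events are never updated or deleted
    }
}
```

A caller reads the stream at version 0, appends, and succeeds; a second caller still holding version 0 gets a ConcurrencyException and must re-read and retry.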

CQRS: Command Query Responsibility Segregation

CQRS is an architectural pattern that is often mentioned alongside Event Sourcing, and for good reason. They pair perfectly.

  • Commands are defined in the domain and express user intents and actions. They trigger state changes on the aggregate and fan out events on the write side.
  • Queries retrieve materialized views of the current state. They are handled in the read model.

This separation allows your write model to focus purely on domain logic and emitting events, while your read model is optimized for performance and user experience. In the next post, I’ll explore Projections, the mechanism to build and update read models.


Hands-on


Let’s walk through a simple example using the Customer aggregate root. After the domain invariants are validated and the domain object is built, the AppendEvent and Apply methods are called in sequence:

AppendEvent(@event);
Apply(@event);
  • AppendEvent is defined in the AggregateRoot base class and adds the event to the uncommitted events Queue of IDomainEvent.
  • The Apply method mutates the aggregate state based on the @event argument it receives; each overload handles a specific event type and mutates the corresponding part of the aggregate.

Take this example from the UpdateInformation command: instead of directly modifying customer fields, it emits a CustomerUpdated event, which is then handled through Apply.
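A hedged sketch of what that behavior could look like follows. The CustomerUpdated field names and the private method shapes are my assumptions based on the description above, not the project’s exact contract:

```csharp
using System.Collections.Generic;

public interface IDomainEvent { }

// Fact emitted by the UpdateInformation behavior (fields assumed).
public record CustomerUpdated(string Name, string Email) : IDomainEvent;

public class Customer
{
    public string Name { get; private set; } = "";
    public string Email { get; private set; } = "";

    // Uncommitted events queued for later persistence to the event store.
    private readonly Queue<IDomainEvent> _uncommittedEvents = new();

    public void UpdateInformation(string name, string email)
    {
        // Domain invariants would be validated here before the event is born.
        var @event = new CustomerUpdated(name, email);

        AppendEvent(@event); // queue the fact for persistence
        Apply(@event);       // mutate the in-memory state
    }

    private void AppendEvent(IDomainEvent @event) => _uncommittedEvents.Enqueue(@event);

    private void Apply(IDomainEvent @event)
    {
        if (@event is CustomerUpdated e)
        {
            Name = e.Name;
            Email = e.Email;
        }
    }
}
```

The key point is that the fields are never set directly by the command: every state change flows through an event, so the same Apply logic works both for live commands and for rehydration.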


Final thoughts


Everything we’ve seen so far works with in-memory collections, which technically means we’re not event sourcing yet. In the next chapter, I’ll walk you through persisting domain events to the write database and projecting them to a read-optimized database using MartenDB projections.

Thanks for sticking with me so far, and see you in the next post!


Check the project on GitHub



This post is licensed under CC BY 4.0 by the author.