Defining Entities, Value Objects, and Aggregate Roots

One of the most important considerations in implementing a project backed by Domain-Driven Design is that we must be able to express its Strategic Patterns in both code and architectural constraints.  If we do not define that framework up front, we are not setting the team up for success.  Even within that framework, there is a central set of concepts that must come ahead of others, as they will be both built upon and extended as we introduce other concepts.  To that end, we will begin by taking a look at Entities, Value Objects, and Aggregate Roots.

Building Out Our Core

In our last article, Structuring Projects for Domain-Driven Solutions, we went over the various types of Projects that we could add to our Solution.  The most logical place to start is defining the interfaces and base classes which will make up the building blocks of our implementation.  These live in Core, because they are reusable and represent framework concepts.  Since they are not specific to the implementation, it makes sense to abstract them for future reuse.  I am using Visual Studio 2015, which is also the version the supplied example files will be provided in.  This exercise is simple enough that it should be the same for any version of Visual Studio.

Inside of Visual Studio:

  1. Start by creating an Empty Solution, named “FrontLines”, or whatever name suits your liking.
  2. To that Solution, add a Class Library Project called “FrontLines.Core”.
  3. Within that Class Library Project, add a folder called “Domain”

When you are done, your Solution should look like this:
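In outline form, that structure is:

```
FrontLines (Solution)
└── FrontLines.Core (Class Library Project)
    └── Domain (folder)
```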


Remember that our goal for this exercise is to add a base implementation of Entities, Value Objects, and Aggregate Roots to this Project.  It sounds simple enough, and there is no point in trying to redefine what those concepts are, since the definitions provided by Eric Evans are already concise and easy to interpret.  As an aside, I cannot recommend the Blue Book enough; otherwise known by its official title, Domain-Driven Design: Tackling Complexity in the Heart of Software.  Additionally, Evans has provided a companion reference book on his site, which acts as a “pocket guide”.  For those who prefer the more tactile feel of physical books, you can procure a printed copy of Domain-Driven Design Reference: Definitions and Pattern Summaries.  It is, very graciously, also available free of charge from his site, on the DDD Reference page.  Eric Evans’ site is an outstanding resource all-around, and it is worth bookmarking his Domain Language site, in general.

Using the freely available “Definitions and Pattern Summaries” guide, there are clearly marked sections with concise definitions of the three concepts we are looking to cover.


Probably the most well-known of the three concepts is the Entity.  It plays a primary role in the Domain, as previously discussed.  Generically, it combines defining the business logic and containing the data that reflects the Domain concept being expressed.  The full definition provides us with a bit more information.

When an object is distinguished by its identity, rather than its attributes, make this primary to its definition in the model. Keep the class definition simple and focused on life cycle continuity and identity.
Define a means of distinguishing each object regardless of its form or history. Be alert to requirements that call for matching objects by attributes. Define an operation that is guaranteed to produce a unique result for each object, possibly by attaching a symbol that is guaranteed unique. This means of identification may come from the outside, or it may be an arbitrary identifier created by and for the system, but it must correspond to the identity distinctions in the model.
The model must define what it means to be the same thing.

You will hear the terminology “Strategic Pattern” and “Tactical Pattern” often in Domain-Driven Design, and this is an example of that distinction.  What Evans has described is a Strategic Pattern, which covers the decisions relating to adoption of a concept, architecture, or platform; a Tactical Pattern, by contrast, covers the technical decisions on how to support the Strategic Pattern.  Domain-Driven Design itself is entirely about the Strategic Patterns; how we implement them is the Tactical Patterns, in motion.  Look at the Strategic Patterns as theory, and the Tactical Patterns as practice.  “Tactical Pattern” is often used interchangeably with “Design Pattern”, which misses the subtle nuance that we are truly after: Design Patterns are Tactical Patterns, but not all Tactical Patterns are Design Patterns.

When we talked about the validity of an Entity, the concept of maintaining validity was a Strategic Pattern, discussed in detail in the Blue Book.  The one possible solution that we started to investigate encompasses the corresponding Tactical Patterns that we put in place to achieve it.  That will continue to be our goal for this exercise: to take that Strategic Pattern and determine how we enforce it through architecture, development, testing, and any other discipline where we need to apply technology to adhere to those goals.  At this point, we have already established some of the Tactical Patterns which make up our Entity.  Those were:

  • An Entity must not be directly instantiated through its constructor.  We have chosen to favor Entity construction through factory-type methods.  Our constructors will be privately scoped.
  • An Entity must express the availability of the data through properties which are publicly scoped for read operations, but are privately scoped for write operations.
  • An Entity must expose behaviors, in the form of methods, to perform any operation on the Entity, itself.

The last two items are both constrained in the last sentence of the first paragraph of the quote, above.  They are solely implementations intended to maintain continuity, of which validity is a part.  Additionally, the definition itself provides two new requirements for an Entity.  We also need to address the following two items:

  • An Entity must be able to be uniquely identifiable without depending on the concepts expressed by the Domain.
  • An Entity must be able to be differentiated from another Entity, even given the same data.

Implementation of those two concepts is actually very straightforward, because they express the same underlying requirement.  It is very tempting to start down the path of implementing IEquatable<T> and overriding the Equals() and GetHashCode() methods of the Entity.  Looking at it from a non-developer perspective, the question becomes: why would this be done?  Evans has already defined what equality is, and that is through an explicit identity and non-reliance on the actual data contained in the Entity.  Referring again to our discussion on Entity validity, remember that every operation that we incur is related to transitioning state.  Within that theoretical boundary, equality loses a bit of the conceptual baggage that is traditionally associated with the developer-centric view of the same terminology.

Given an identity, an Entity should simply be either the same or different, and unless we are looking at two references to the same Entity, the likelihood of it being the same is remote.  Since we are operating within the context of atomic operations, we are hard-pressed to come up with a valid use case where we require more than simply comparing identities of any two Entities.  If we have a single Entity in two different transitional states within an atomic operation, we are likely either being overly defensive (explicitly comparing to ensure a behavior made the change it was asked to make), or are trying to look at something outside of the scope of that atomic operation.  Does that mean it is “wrong” to put the framework concept of equality in an Entity?  No, not at all.  The problem that I most commonly see in this approach is that it is used solely to support asserting equality in Unit Tests or Integration Tests.
As a matter of personal preference, I find it better suited to the nature of Domain-Driven Design to explicitly compare two exposed identifiers, because it more clearly represents the definition of the Entity.

As to the identity itself, we have already established that it can be system-generated.  Because the Domain has no external dependencies, this is usually a primitive type for which we can assure uniqueness.  Two common approaches are to have a low-level service interface injected into the Domain, which fulfills the role of generating a system-unique identifier from whichever external sources enforce them, or to simply use a globally unique identifier, better known as a GUID.  The choice comes down to a simple fact: there is no need to introduce complexity into the system unless it is explicitly required.  Start simple, and introduce complexity only when it is required.  It is too easy to think of the “what if” scenarios up front and come up with elegant solutions that are never used.

To express the Entity, do we start with an interface or an abstract class?  Again, looking at only implementing what is required, an interface does not make sense at this point.  Were we to create an interface, say IEntity, would we have two distinct base implementations of it?  Since we would not, we will bypass the interface and start with a base class, shown here:
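A minimal sketch of such a base class follows; the property name Id is my own assumption, while GenerateIdentity() matches the method discussed below:

```csharp
using System;

namespace FrontLines.Core.Domain
{
    /// <summary>
    /// Base class for all Entities.  The identity is a system-generated GUID,
    /// independent of any concepts expressed by the Domain.
    /// </summary>
    public abstract class Entity
    {
        /// <summary>The unique identity of the Entity.  The setter is
        /// privately scoped, so only this class can assign it.</summary>
        public Guid Id { get; private set; }

        /// <summary>
        /// Establishes the identity.  Protected scoping requires
        /// implementations to use this method, while preventing them
        /// from altering its behavior.
        /// </summary>
        protected void GenerateIdentity()
        {
            Id = Guid.NewGuid();
        }
    }
}
```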

At least for now, the Entity is simple.  Starting with an abstract class, it exposes a GUID as the identity, which all Entities will inherit.  We have controlled how the identity can be established, using the concept of system-generated identifiers through a single implementation.  By scoping the GenerateIdentity() method as protected, we can require that any implementation use this method to access the private setter operation, while not allowing an implementation to alter that behavior.  The base implementation is a lot lighter than the requirements would suggest, isn’t it?  When it comes to the other requirements of an Entity, there is no clear way to enforce them just yet.  That is something we will return to, once we take a look at Value Objects and Aggregates, as this will become YART (Yet Another Recurring Theme).

Value Objects

If the Entity is the most well-known of the three concepts, then the Value Object is probably the most misunderstood and underutilized.  Its deceptiveness comes from its simplicity, as defined:

When you care only about the attributes and logic of an element of the model, classify it as a value object. Make it express the meaning of the attributes it conveys and give it related functionality. Treat the value object as immutable. Make all operations Side-effect-free Functions that don’t depend on any mutable state. Don’t give a value object any identity and avoid the design complexities necessary to maintain entities.

The single most important aspect of a Value Object is that it is immutable.  Simply put, once a Value Object is created, the only way to change it is to create it again.  Immutability is a concept that extends beyond Value Objects, and is used to establish invariability across simple, but related, concepts.  In the real world, we deal with this concept on a regular basis, when we deal with other people.  We refer to other people by his or her name, but is that what truly identifies a person?  Being named after my father, I knew the answer to this at a very early age.  If one of us were to get mail or receive a phone call, our name did not uniquely identify who we were.  Our name is part of who we are, but it does not define us as a unique person.  For most people, the simple act of searching for yourself on the Internet will help establish this fact, as well.

In modeling a person, there is something more that signifies who that person is.  Without making it philosophical or scientific, perhaps that unique identity would be modeled with a Social Security Number.  The concept of a name, however, can be abstracted from a person.  It commonly consists of related concepts that express data; in its simplest form, a first name, middle name, and last name will suffice.  We can additionally include things such as a suffix or honorary title.  While that might have addressed the issue for my father and myself within the scope of the household, that scope was very narrowly confined.  Imagine that if the world were capable of supporting the sheer awesomeness of a “Joseph Ferris Convention”, there would inevitably be an inability to identify the individual persons by naming alone.  For this reason, the name expresses related attributes, but is neither unique in itself, nor within the context of the Entity which uses it.

Like with Entities, we will construct a list of our technical requirements for a Value Object:

  • A Value Object must be immutable.
  • A Value Object must have a constructor that accepts the arguments used to populate the properties exposed.
  • A Value Object must not expose any publicly scoped setter operations on its properties.
  • A Value Object must be able to establish equality through the properties which define it.

Again, we apply the same rationale used in choosing an abstract class over an interface.  There is not a use case for requiring multiple base classes, so we can again start directly with an abstract class.  Since the Value Object contains no unique identity, we can do equality checks based on the exposed properties.  This all means that a Value Object is, essentially, an implementation of IEquatable<T> that, again, has some requirements that are difficult to enforce in code alone.

As much as I enjoy reinventing the wheel, a wonderful solution for the equality of a Value Object already exists.  Several years ago, Jimmy Bogard tackled this exact same exercise.  In his excellent article, Generic Value Object Equality, he has already built exactly what we are looking for: equality established by reflecting over the exposed properties of the objects being compared.  I have adopted his approach, formatted to my own style and standard practices, and with proper attribution.
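A sketch in the spirit of that approach, comparing public properties via reflection (this is my own reconstruction, not Bogard's exact listing):

```csharp
using System;
using System.Linq;
using System.Reflection;

namespace FrontLines.Core.Domain
{
    /// <summary>
    /// Base class for Value Objects.  Equality is established through the
    /// publicly exposed properties, discovered via reflection, rather than
    /// through any identity.
    /// </summary>
    public abstract class ValueObject<T> : IEquatable<T> where T : ValueObject<T>
    {
        public bool Equals(T other)
        {
            if (other == null)
            {
                return false;
            }

            // Two Value Objects are equal when every public instance
            // property holds an equal value.
            return GetType()
                .GetProperties(BindingFlags.Public | BindingFlags.Instance)
                .All(property => Equals(property.GetValue(this, null),
                                        property.GetValue(other, null)));
        }

        public override bool Equals(object obj)
        {
            return Equals(obj as T);
        }

        public override int GetHashCode()
        {
            // Combine the hash codes of all public instance properties.
            return GetType()
                .GetProperties(BindingFlags.Public | BindingFlags.Instance)
                .Select(property => property.GetValue(this, null))
                .Aggregate(17, (hash, value) =>
                    hash * 31 + (value == null ? 0 : value.GetHashCode()));
        }
    }
}
```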

As already alluded to, and just like with an Entity, we cannot enforce that an implementation is using a public constructor to populate non-accessible setters.  This is something we will handle outside of the base implementation, again.  However, we can now do equality checks without having to pollute the implementations with equality comparisons.

Aggregate Roots

An Aggregate, in general, is a special circumstance, and the terms Aggregate and Aggregate Root tend to get used interchangeably, when they should not.  Let’s jump straight into the definition, to understand why:

Cluster the entities and value objects into aggregates and define boundaries around each. Choose one entity to be the root of each aggregate, and allow external objects to hold references to the root only (references to internal members passed out for use within a single operation only). Define properties and invariants for the aggregate as a whole and give enforcement responsibility to the root or some designated framework mechanism.

There is a subtlety in the definition that needs to be taken into account.  An Aggregate is an agglomeration of Entities and Value Objects which express a common boundary.  The Aggregate Root, on the other hand, is simply a single Entity within the Aggregate that is designated as the gatekeeper of the enforced boundary.  This behavior, though, adds another imposition on how an Entity works.  The part about how we have to “give enforcement responsibility to the root” means that access to any Entity or Value Object within the Aggregate’s boundaries is, at all times, controlled by the Aggregate Root.  We enforce this through scoping behaviors, but we have already established that design-time enforcement of this is problematic.  What makes the Aggregate Root different from the Entity, from a technical perspective, comes down to the scoping of the behaviors.

To this end, that means that we have the following requirements for an Aggregate Root:

  • An Aggregate Root must derive from an Entity.
  • An Aggregate Root must control access to any contained Entities or Value Objects.
  • An Aggregate Root can return a reference to any Entity or Value Object, scoped within the lifetime of the call being made.
  • An Aggregate Root must not contain references to Entities or Value Objects that do not belong to the Aggregate.

Conversely, the definition of an Aggregate Root alters the known definition of both an Entity and a Value Object:

  • An Entity or Value Object must only be created, retrieved, or modified by the Aggregate Root within the Aggregate boundary.

For a simple concept, an Aggregate Root imposes architectural restraints that are not easily captured at design-time.  Right now, we are trusting the developer to implement each of these types of objects correctly.  The architectural constraints on the system, however, need to be a bit more concrete.  Because we are not able to capture the requirements for an Aggregate Root in code, having already expressed as much as we can with the Entity that it inherits from, we are left with the equivalent of a marker class:
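Something as simple as this will do, assuming the Entity base class from earlier in this article:

```csharp
namespace FrontLines.Core.Domain
{
    /// <summary>
    /// Marker base class identifying an Entity that acts as an Aggregate
    /// Root.  Empty for now, but usable for Type Constraints and
    /// code-based inspections.
    /// </summary>
    public abstract class AggregateRoot : Entity
    {
    }
}
```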

Even though there are no divergent implementation details in an Aggregate Root yet, the implementations of this base class do have an important role.  Even if we were never to extend this class beyond this, it allows us to identify an Entity which is acting in the role of an Aggregate Root.  Essentially, it currently acts as a Marker Interface would, which allows us to easily use it for Type Constraints, or for code-based inspections.  Speaking of which…

“Houston, We Have a Problem”

So far, we have defined basic constructs that will allow us to easily create any of the three DDD concepts that we set out to achieve.  But what happens when a developer uses one, or more, of these concepts incorrectly?  To better understand what we are up against, let us do the complete opposite of what we should do, and fight the framework.

Building on our initial example, we will expand the concept of an Account into something more representative of a real-world scenario.  Here are the new requirements, in terms of the Ubiquitous Language:

  • To create an Account, a First Name, Last Name, Password, and Email Address are required.
  • An Account can have multiple Email Addresses, but only one can be identified as a Primary Email Address.

To start, we will take the smallest granular implementation that we need to meet our requirements, since we need look no further to identify the problem that we are now facing.  Looking at our two requirements, and applying what we have learned, we see that we have a Value Object.  From our look at Value Objects, we can immediately recognize First Name and Last Name as forming a simple Name.  We know how we should implement a Value Object, but being intentionally mischievous, let’s create it this way:
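Something like the following, where the namespace is illustrative:

```csharp
using FrontLines.Core.Domain;

namespace FrontLines.Domain
{
    // Intentionally wrong: the publicly scoped setters make this a
    // mutable Data Transfer Object, not a proper Value Object.
    public class Name : ValueObject<Name>
    {
        public string FirstName { get; set; }

        public string LastName { get; set; }
    }
}
```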

While we did implement ValueObject<T>, our Name is not immutable.  It is nothing more than a Data Transfer Object in a Value Object’s clothing.  This can easily be discovered through a code review, but what if no code review is conducted here?  Furthermore, this type of feedback should be given to the developer in a more timely fashion, so that the improper approach does not become so embedded that it is more difficult to change.  For a moment, we need to take off our developer’s hat and put on our architect’s hat.  The goal is to provide a series of Tactical Patterns that meet the definition of the Strategic Patterns.  From a developer’s point of view, it begins and ends with the source code.  The implementation, however, negatively impacts testability and the overall build quality.  Leaving the above sample as it is, defects can be introduced and Strategic Patterns lose their power.  An example of this would be a Value Object that is used in a particular phase of the state transition, but is then modified again after any required validation has occurred.  As for build quality, it simply means that we do not want to let code into the build which violates the architectural intent.  If we were able to control the build quality, the testability aspect would simply fall into place, because we would be preventing these aberrations in the first place.  Ideally, it would be best if we could get a design-time warning, but Visual Studio does not make that easily attainable.
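For contrast, a sketch of a Name that honors our requirements, populated through the constructor with privately scoped setters (again, the namespace is illustrative):

```csharp
using FrontLines.Core.Domain;

namespace FrontLines.Domain
{
    // Immutable: once constructed, the only way to "change" a Name
    // is to construct a new one.
    public class Name : ValueObject<Name>
    {
        public Name(string firstName, string lastName)
        {
            FirstName = firstName;
            LastName = lastName;
        }

        public string FirstName { get; private set; }

        public string LastName { get; private set; }
    }
}
```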

There are commercial solutions that can help us with this, such as the very powerful NDepend.  With NDepend, we can write CQL (Code Query Language) queries against the code, and create violations off of the result of those queries.  For our purposes, however, it is a little out of scope and an incurred physical cost that not everyone has the luxury of taking on for the purposes of a demonstration.  In order to keep our examples as approachable as possible, we will next focus on alternative solutions that are freely available, even if that means that we will need to roll our sleeves up a bit.  In our next article, we will look at an alternative approach, using architecturally-focused Unit Tests to provide post-build events to inspect the code for architectural violations.  This will be just one of the tools that we will use to enforce architectural validity, as we build on the sample solution.

The sample code, which currently is contained in this article, is available on GitHub.