This chapter contains a checklist and some guidelines to take into consideration when getting ready for production-level performance. By now, you have probably used the test fixtures to test your command handling logic and sagas. The production environment isn't as forgiving as a test environment, though. Aggregates tend to live longer, be used more frequently and concurrently. For the extra performance and stability, you're better off tweaking the configuration to suit your specific needs.
If you have generated the tables automatically using your JPA implementation (e.g. Hibernate), you probably do not have all the right indexes set on your tables. Different uses of the Event Store require different indexes for optimal performance. The list below suggests the indexes to add for the different types of queries used by the default JpaEventStore and JpaSagaRepository:
Normal operational use (storing and loading events):
Table 'DomainEventEntry', columns aggregateIdentifier, type and sequenceNumber (primary key or unique index)
Table 'SnapshotEventEntry', columns aggregateIdentifier, type and sequenceNumber
Replaying the Event Store contents:
Table 'DomainEventEntry', column timestamp
Saga storage and association lookups:
Table 'AssociationValueEntry', columns sagaType, associationKey and associationValue
Table 'SagaEntry', column sagaId (unique index)
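As an illustration, the suggested indexes could be created with DDL along the following lines. The index names are made up, and the table and column names assume the default mapping; verify them against your actual schema before running anything.

```sql
-- Hypothetical DDL; adjust table, column and index names to your schema.
-- Unique index used when storing and loading events:
CREATE UNIQUE INDEX idx_domainevent_lookup
    ON DomainEventEntry (aggregateIdentifier, type, sequenceNumber);

-- Index supporting event store replays in timestamp order:
CREATE INDEX idx_domainevent_replay
    ON DomainEventEntry (timestamp);

-- Indexes for saga storage and association lookups:
CREATE INDEX idx_associationvalue_lookup
    ON AssociationValueEntry (sagaType, associationKey, associationValue);
CREATE UNIQUE INDEX idx_sagaentry_sagaid
    ON SagaEntry (sagaId);
```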
The default column lengths generated by e.g. Hibernate may work, but won't be optimal. A UUID, for example, will always have the same length. Instead of a variable length column of 255 characters, you could use a fixed length column of 36 characters for the aggregate identifier.
The 'timestamp' column in the DomainEventEntry table only stores ISO 8601 timestamps. If all times are stored in the UTC timezone, a column length of 24 characters suffices. If you use another timezone, up to 28 characters may be needed. Using variable length columns is generally not necessary, since timestamps always have the same length.
The 'type' column in the DomainEventEntry stores the Type Identifiers of aggregates. Generally, these are the 'simple name' of the aggregate. Even the infamous 'AbstractDependencyInjectionSpringContextTests' in Spring only counts 45 characters. Here, again, a shorter (but variable) length field should suffice.
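For example, using MySQL-style DDL (column names assumed from the default mapping; check your dialect and schema), the fixed-length columns described above could look like:

```sql
-- Hypothetical MySQL-style DDL; verify against your dialect and schema.
ALTER TABLE DomainEventEntry MODIFY aggregateIdentifier CHAR(36) NOT NULL; -- a UUID is always 36 chars
ALTER TABLE DomainEventEntry MODIFY timestamp CHAR(24) NOT NULL;           -- ISO 8601 in UTC
ALTER TABLE DomainEventEntry MODIFY type VARCHAR(64) NOT NULL;             -- short aggregate type names
```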
By default, the MongoEventStore will only generate the index it requires for correct operation. That means the required unique index on "Aggregate Identifier", "Aggregate Type" and "Event Sequence Number" is created when the Event Store is created. However, when using the MongoEventStore for certain operations, it might be worthwhile to add some extra indices.
Note that there is always a balance between query optimization and update speed. Load testing is ultimately the best way to discover which indices provide the best performance.
Normal operational use
An index is automatically created on "aggregateIdentifier", "type" and "sequenceNumber" in the domain events (default name: "domainevents") collection
Put a (unique) index on "aggregateIdentifier", "type" and "sequenceNumber" in the snapshot events (default name: "snapshotevents") collection
Put a non-unique index on "timestamp" and "sequenceNumber" in the domain events (default name: "domainevents") collection
Put a (unique) index on the "sagaIdentifier" in the saga (default name: "sagas") collection
Put an index on the "sagaType", "associations.key" and "associations.value" properties in the saga (default name: "sagas") collection
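In the mongo shell, the extra indexes described above could be created as follows. The collection and property names assume the defaults; adjust them if you configured different names.

```javascript
// Run in the mongo shell (on older MongoDB versions, use ensureIndex).
db.snapshotevents.createIndex(
    { aggregateIdentifier: 1, type: 1, sequenceNumber: 1 }, { unique: true });
db.domainevents.createIndex({ timestamp: 1, sequenceNumber: 1 });
db.sagas.createIndex({ sagaIdentifier: 1 }, { unique: true });
db.sagas.createIndex(
    { sagaType: 1, "associations.key": 1, "associations.value": 1 });
```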
A well designed command handling module should pose no problems when implementing caching. Especially when using Event Sourcing, loading an aggregate from an Event Store is an expensive operation. With a properly configured cache in place, loading an aggregate can be converted into a pure in-memory process.
Here are a few guidelines that help you get the most out of your caching solution:
Make sure the Unit Of Work never needs to perform a rollback for functional reasons. A rollback means that an aggregate has reached an invalid state, and Axon will automatically invalidate the cache entries involved. The next request will force the aggregate to be reconstructed from its Events. If you use exceptions as a potential (functional) return value, you can configure a RollbackConfiguration on your Command Bus. By default, the Unit Of Work will be rolled back on runtime exceptions.
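As a sketch, assuming Axon 2's API, configuring rollback behavior on a SimpleCommandBus could look like this:

```java
// Sketch assuming Axon 2's API: only roll back the Unit of Work on unchecked
// exceptions, so functional (checked) exceptions don't invalidate cache entries.
SimpleCommandBus commandBus = new SimpleCommandBus();
commandBus.setRollbackConfiguration(new RollbackOnUncheckedExceptionConfiguration());
```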
All commands for a single aggregate must arrive on the machine that has the aggregate in its cache.
This means that commands should be consistently routed to the same machine, for as long as that machine is "healthy". Routing commands consistently prevents the cache from going stale. A hit on a stale cache will cause a command to be executed and fail at the moment events are stored in the event store.
Configure a sensible time to live / time to idle
By default, caches have a tendency to have a relatively short time to live, a matter of minutes. For a command handling component with consistent routing, a longer time-to-idle and time-to-live is usually better. This prevents the need to re-initialize an aggregate based on its events, just because its cache entry expired. The time-to-live of your cache should match the expected lifetime of your aggregate.
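As an illustration, an Ehcache 2.x configuration with a long time-to-live and time-to-idle might look like the fragment below. The cache name and values are only examples; tune them to the expected lifetime of your aggregates.

```xml
<!-- Example Ehcache configuration; values are illustrative only -->
<cache name="aggregateCache"
       maxElementsInMemory="10000"
       eternal="false"
       timeToIdleSeconds="3600"
       timeToLiveSeconds="86400"
       overflowToDisk="false"/>
```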
Snapshotting removes the need to reload and replay large numbers of events. A single snapshot represents the entire aggregate state at a certain moment in time. The process of snapshotting itself, however, also takes processing time. Therefore, there should be a balance between the time spent building snapshots and the time they save by preventing large numbers of events from being read back in.
There is no default behavior for all types of applications. Some will specify a number of events after which a snapshot will be created, while other applications require a time-based snapshotting interval. Whatever way you choose for your application, make sure snapshotting is in place if you have long-living aggregates.
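For a count-based strategy, assuming Axon 2's API, an event-count trigger could be configured like this:

```java
// Sketch assuming Axon 2's API: create a snapshot after every 50 events.
EventCountSnapshotterTrigger trigger = new EventCountSnapshotterTrigger();
trigger.setTrigger(50);
trigger.setSnapshotter(snapshotter); // a previously configured Snapshotter instance
```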
See Section 5.5, “Snapshotting” for more about snapshotting.
The actual structure of your aggregates has a large impact on the performance of command handling. Since Axon manages the concurrency around your aggregate instances, you don't need to use special locks or concurrent collections inside the aggregates.
By default, the getChildEntities method in AbstractEventSourcedAggregateRoot and AbstractEventSourcedEntity uses reflection to inspect all the fields of each entity to find related entities. Especially when an aggregate contains large collections, this inspection could take more time than desired.
To gain a performance benefit, you can override the getChildEntities method and return the collection of child entities yourself.
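A minimal sketch of such an override, assuming Axon 2's AbstractEventSourcedAggregateRoot and a hypothetical 'orderLines' field on the aggregate:

```java
// Sketch: return the child entities directly instead of relying on
// reflection-based field inspection. 'orderLines' is a hypothetical field.
@Override
protected Iterable<? extends EventSourcedEntity> getChildEntities() {
    return orderLines;
}
```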
XStream is very configurable and extensible. If you just use a plain
XStreamSerializer, there are some quick wins ready to pick up. XStream
allows you to configure aliases for package names and event class names. Aliases are
typically much shorter (especially if you have long package names), making the
serialized form of an event smaller. And since we're talking XML, each character removed
from XML is twice the profit (one for the start tag, and one for the end tag).
A more advanced topic in XStream is creating custom converters. The default reflection based converters are simple, but do not generate the most compact XML. Always look carefully at the generated XML and see if all the information there is really needed to reconstruct the original instance.
Avoid the use of upcasters when possible. XStream allows aliases to be used for fields, when they have changed name. Imagine revision 0 of an event, that used a field called "clientId". The business prefers the term "customer", so revision 1 was created with a field called "customerId". This can be configured completely in XStream, using field aliases. You need to configure two aliases, in the following order: alias "customerId" to "clientId" and then alias "customerId" to "customerId". This will tell XStream that if it encounters a field called "customerId", it will call the corresponding XML element "customerId" (the second alias overrides the first). But if XStream encounters an XML element called "clientId", it is a known alias and will be resolved to field name "customerId". Check out the XStream documentation for more information.
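The two aliases from the example above could be registered as follows. CustomerRenamedEvent is a hypothetical event class whose field is now called "customerId":

```java
// Sketch: CustomerRenamedEvent is hypothetical; its field is now 'customerId'.
XStream xstream = new XStream();
// 1. Old revision: element "clientId" deserializes into field "customerId".
xstream.aliasField("clientId", CustomerRenamedEvent.class, "customerId");
// 2. New revision: field "customerId" serializes as element "customerId"
//    (registered last, so it wins when writing).
xstream.aliasField("customerId", CustomerRenamedEvent.class, "customerId");
```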
For ultimate performance, you're probably better off without reflection-based mechanisms altogether. In that case, it is probably wisest to create a custom serialization mechanism. The DataInputStream and DataOutputStream classes allow you to easily write the contents of the Events to an output stream and read them back in. The ByteArrayOutputStream and ByteArrayInputStream allow writing to and reading from byte arrays.
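A minimal sketch of such a hand-rolled mechanism, for a hypothetical event with an aggregate identifier and a sequence number:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class EventSerializationSketch {

    // Writes the fields of a hypothetical event to a byte array.
    static byte[] serialize(String aggregateIdentifier, long sequenceNumber) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeUTF(aggregateIdentifier);
        out.writeLong(sequenceNumber);
        out.flush();
        return bytes.toByteArray();
    }

    // Reads the fields back in the same order they were written.
    static Object[] deserialize(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        return new Object[] { in.readUTF(), in.readLong() };
    }

    public static void main(String[] args) throws IOException {
        byte[] data = serialize("order-42", 7L);
        Object[] fields = deserialize(data);
        System.out.println(fields[0] + ":" + fields[1]); // prints "order-42:7"
    }
}
```

Note that with this approach you take full responsibility for versioning: the read order must always match the write order for each event revision.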
Especially in distributed systems, Event Messages need to be serialized in multiple occasions. In the case of a Command Handling component that uses Event Sourcing, each message is serialized twice: once for the Event Store, and once to publish it on the Event Bus. Axon's components are aware of this and have support for SerializationAware messages. If a SerializationAware message is detected, its methods are used to serialize an object, instead of simply passing the payload to a serializer. This allows for performance optimizations.
By configuring the SerializationOptimizingInterceptor, all generated Events are wrapped into SerializationAware messages, and thus benefit from this optimization. Note that the optimization only helps if the same serializer is used for different components. If you use the asynchronous saga manager (AsyncAnnotatedSagaManager), serialization can be optimized by providing a Serializer in its constructor. The saga manager will then use an extra thread (or more, when so configured) to pre-serialize the Event Message using that serializer.
When you serialize messages yourself and want to benefit from the SerializationAware optimization, use the MessageSerializer class to serialize the payload and meta data of messages. All optimization logic is implemented in that class. See the JavaDoc of the MessageSerializer for more information.
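A short sketch, assuming Axon 2's MessageSerializer API ('serializer' and 'eventMessage' are pre-existing instances):

```java
// Sketch assuming Axon 2's API: MessageSerializer wraps a Serializer and
// applies the SerializationAware optimization when the message supports it.
MessageSerializer messageSerializer = new MessageSerializer(serializer);
SerializedObject<byte[]> payload = messageSerializer.serializePayload(eventMessage, byte[].class);
SerializedObject<byte[]> metaData = messageSerializer.serializeMetaData(eventMessage, byte[].class);
```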
The Axon Framework uses an IdentifierFactory to generate all the identifiers, whether they are for Events or Commands. The default IdentifierFactory uses randomly generated java.util.UUID based identifiers. Although they are very safe to use, the process of generating them doesn't excel in performance.
IdentifierFactory is an abstract factory that uses Java's ServiceLoader (since Java 6) mechanism to find the implementation to use. This means you can create your own implementation of the factory and put the name of the implementation in a file called '/META-INF/services/org.axonframework.domain.IdentifierFactory'. Java's ServiceLoader mechanism will detect that file and attempt to create an instance of the class named inside.
There are a few requirements for the implementation. It must:
have its fully qualified class name as the contents of the '/META-INF/services/org.axonframework.domain.IdentifierFactory' file on the classpath,
have an accessible zero-argument constructor,
extend IdentifierFactory,
be accessible by the context classloader of the application or by the classloader that loaded the IdentifierFactory class, and must
be thread-safe.
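A sketch of such a factory, assuming Axon 2's org.axonframework.domain.IdentifierFactory (the class and identifier scheme below are hypothetical):

```java
import java.util.concurrent.atomic.AtomicLong;
import org.axonframework.domain.IdentifierFactory;

// Hypothetical factory: cheaper than SecureRandom-backed UUIDs, but the
// generated identifiers are NOT globally unique across JVMs.
public class TimeBasedIdentifierFactory extends IdentifierFactory {

    private final AtomicLong counter = new AtomicLong();

    @Override
    public String generateIdentifier() {
        return Long.toHexString(System.currentTimeMillis())
                + "-" + counter.incrementAndGet();
    }
}
```

To register it, create a file '/META-INF/services/org.axonframework.domain.IdentifierFactory' on the classpath containing the factory's fully qualified class name (e.g. com.example.TimeBasedIdentifierFactory).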