This chapter contains a checklist and some guidelines to take into consideration when getting ready for production-level performance. By now, you have probably used the test fixtures to test your command handling logic and sagas. The production environment isn't as forgiving as a test environment, though: aggregates tend to live longer and are used more frequently and concurrently. For extra performance and stability, you're better off tweaking the configuration to suit your specific needs.
If you have generated the tables automatically using your JPA implementation (e.g.
Hibernate), you probably do not have all the right indexes set on your tables. Different
usages of the Event Store require different indexes to be set for optimal performance.
This list explains which fields are used for the different types of queries by the
default EventEntryStore
implementation:
- Normal operational use (storing and loading events):
  Table 'DomainEventEntry', columns type, aggregateIdentifier and sequenceNumber (unique index)
- Snapshotting:
  Table 'SnapshotEventEntry', columns type, aggregateIdentifier and sequenceNumber (optionally a unique index)
- Replaying the Event Store contents:
  Table 'DomainEventEntry', column timestamp (optionally also sequenceNumber)
- Sagas:
  Table 'AssociationValueEntry', columns associationKey and sagaId
  Table 'SagaEntry', column sagaId (unique index)
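Purely as an illustration, the sketch below issues the corresponding DDL through plain JDBC. The index names and the exact DDL syntax are assumptions; they depend on your database vendor and on the schema your JPA implementation generated, so verify the statements against your actual tables.

    import java.sql.Connection;
    import java.sql.Statement;
    import javax.sql.DataSource;

    public class EventStoreIndexCreator {

        // Creates the indexes listed above. Index names and DDL syntax are
        // illustrative; adapt them to your database and generated schema.
        public void createIndexes(DataSource dataSource) throws Exception {
            try (Connection connection = dataSource.getConnection();
                 Statement statement = connection.createStatement()) {
                // Normal operational use: storing and loading events
                statement.execute("CREATE UNIQUE INDEX idx_domainevententry ON DomainEventEntry "
                        + "(type, aggregateIdentifier, sequenceNumber)");
                // Snapshotting
                statement.execute("CREATE UNIQUE INDEX idx_snapshotevententry ON SnapshotEventEntry "
                        + "(type, aggregateIdentifier, sequenceNumber)");
                // Replaying the Event Store contents
                statement.execute("CREATE INDEX idx_domainevententry_replay ON DomainEventEntry "
                        + "(timestamp, sequenceNumber)");
                // Sagas
                statement.execute("CREATE INDEX idx_associationvalueentry ON AssociationValueEntry "
                        + "(associationKey, sagaId)");
                statement.execute("CREATE UNIQUE INDEX idx_sagaentry ON SagaEntry (sagaId)");
            }
        }
    }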
A well designed command handling module should pose no problems when implementing caching. Especially when using Event Sourcing, loading an aggregate from an Event Store is an expensive operation. With a properly configured cache in place, loading an aggregate can be converted into a pure in-memory process.
Here are a few guidelines that help you get the most out of your caching solution:
Make sure the Unit Of Work never needs to perform a rollback for functional reasons.
A rollback means that an aggregate has reached an invalid state, and it will invalidate the cache entry. The next request will then force the aggregate to be reconstructed from its events. If you use exceptions as a potential (functional) return value, you can configure a RollbackConfiguration on your Command Bus (see the sketch after these guidelines). By default, the Unit Of Work is rolled back on every exception.
All commands for a single aggregate must arrive on the machine that has the aggregate in its cache.
This means that commands should be consistently routed to the same machine, for as long as that machine is "healthy". Routing commands consistently prevents the cache from going stale. A hit on a stale cache would cause a command to be executed against outdated state, only to fail at the moment its events are appended to the event store.
Configure a sensible time to live / time to idle
By default, caches tend to have a relatively short time-to-live, in the order of minutes. For a command handling component with consistent routing, an eternal time-to-idle and time-to-live is the better default. This prevents the need to re-initialize an aggregate from its events just because its cache entry expired. The time-to-live of your cache should match the expected lifetime of your aggregate.
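The sketch below illustrates the first guideline: a rollback policy that only rolls back the Unit of Work for unchecked exceptions, so that functional (checked) exceptions leave the cached aggregate intact. The names used here (SimpleCommandBus#setRollbackConfiguration and RollbackOnUncheckedExceptionConfiguration) are assumptions that may differ between Axon versions, so check them against the API of the version you are using.

    import org.axonframework.commandhandling.RollbackOnUncheckedExceptionConfiguration;
    import org.axonframework.commandhandling.SimpleCommandBus;

    public class CommandBusConfiguration {

        public SimpleCommandBus commandBus() {
            SimpleCommandBus commandBus = new SimpleCommandBus();
            // Only roll back (and thus invalidate the cached aggregate) on unchecked
            // exceptions; checked exceptions are treated as functional return values
            // and leave the Unit of Work, and therefore the cache, intact.
            commandBus.setRollbackConfiguration(new RollbackOnUncheckedExceptionConfiguration());
            return commandBus;
        }
    }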
Snapshotting removes the need to reload and replay large numbers of events. A single snapshot represents the entire aggregate state at a certain moment in time. The process of snapshotting itself, however, also takes processing time. Therefore, there should be a balance between the time spent building snapshots and the time they save by preventing large numbers of events from being read back in.
There is no single default that suits all types of applications. Some will specify a number of events after which a snapshot is created, while other applications require a time-based snapshotting interval. Whichever way you choose for your application, make sure snapshotting is in place if you have long-living aggregates.
See Section 5.4, “Snapshotting” for more about snapshotting.
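As an example, event-count based snapshotting could be wired along the lines of the sketch below. The class and setter names (EventCountSnapshotterTrigger, setSnapshotterTrigger) are assumptions about the event sourcing API and may vary per Axon version; Section 5.4 describes the available options in detail.

    import org.axonframework.eventsourcing.EventCountSnapshotterTrigger;
    import org.axonframework.eventsourcing.EventSourcingRepository;
    import org.axonframework.eventsourcing.Snapshotter;

    public class SnapshottingConfiguration {

        public void configureSnapshotting(EventSourcingRepository<?> repository, Snapshotter snapshotter) {
            EventCountSnapshotterTrigger trigger = new EventCountSnapshotterTrigger();
            // Ask the snapshotter for a new snapshot once 50 events have been applied
            // since the last snapshot; tune this number to your aggregate's size.
            trigger.setTrigger(50);
            trigger.setSnapshotter(snapshotter);
            repository.setSnapshotterTrigger(trigger);
        }
    }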
The actual structure of your aggregates has a large impact on the performance of command handling. Since Axon manages the concurrency around your aggregate instances, you don't need to use special locks or concurrent collections inside the aggregates.
By default, the getChildEntities method in AbstractEventSourcedAggregateRoot and AbstractEventSourcedEntity uses reflection to inspect all the fields of each entity to find related entities. Especially when an aggregate contains large collections, this inspection could take more time than desired.
To gain a performance benefit, you can override the getChildEntities method and return the collection of child entities yourself. If an entity is a leaf node (i.e. has no child entities), you may either return an empty collection or null.
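A sketch of such an override, assuming a hypothetical Order aggregate that keeps its entities in a single collection field. The OrderLine class is purely illustrative, and the exact signature of getChildEntities may differ slightly between Axon versions.

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.List;
    import org.axonframework.eventsourcing.AbstractEventSourcedAggregateRoot;
    import org.axonframework.eventsourcing.AbstractEventSourcedEntity;

    public class Order extends AbstractEventSourcedAggregateRoot {

        // The only entities this aggregate contains; OrderLine is a hypothetical
        // entity class extending AbstractEventSourcedEntity.
        private final List<OrderLine> orderLines = new ArrayList<OrderLine>();

        @Override
        protected Collection<AbstractEventSourcedEntity> getChildEntities() {
            // Return the children directly instead of letting Axon scan all
            // fields through reflection.
            return new ArrayList<AbstractEventSourcedEntity>(orderLines);
        }

        // Constructors and event handler methods omitted for brevity.
    }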
XStream is very configurable and extensible. If you just use a plain XStreamEventSerializer, there are some quick wins ready to pick up.
XStream allows you to configure aliases for package names and event class names. Aliases are typically much shorter (especially if you have long package names), making the serialized form of an event smaller. And since we're talking XML, every character removed is twice the profit (one for the start tag, and one for the end tag).
A more advanced topic in XStream is creating custom converters. The default reflection based converters are simple, but do not generate the most compact XML. Always look carefully at the generated XML and see if all the information there is really needed to reconstruct the original instance.
Avoid the use of upcasters when possible. XStream allows aliases to be used for fields when they have changed name. Imagine revision 0 of an event that used a field called "clientId". The business prefers the term "customer", so revision 1 was created with a field called "customerId". This can be handled completely in XStream using field aliases. You need to configure two aliases, in the following order: alias the field "customerId" to the element "clientId", and then alias the field "customerId" to the element "customerId". This tells XStream that when it encounters a field called "customerId", it will name the corresponding XML element "customerId" (the second alias overrides the first). But when XStream encounters an XML element called "clientId", that is a known alias and will be resolved to the field name "customerId". Check out the XStream documentation for more information.
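A sketch of these quick wins, assuming you have access to the underlying XStream instance of your serializer. The package, event class names (OrderCreatedEvent, CustomerRenamedEvent) and alias names are assumptions chosen for illustration only.

    import com.thoughtworks.xstream.XStream;

    public class SerializerTuning {

        public void configureAliases(XStream xstream) {
            // Shorten the package prefix of all events in this (hypothetical) package.
            xstream.aliasPackage("axon-sample", "org.sample.eventsourcing.events");
            // Give an individual event class a short element name.
            xstream.alias("order-created", OrderCreatedEvent.class);
            // Field rename without an upcaster: first map the old element name to the
            // new field, then map the new element name to the same field. The second
            // alias wins when writing; both are recognized when reading.
            xstream.aliasField("clientId", CustomerRenamedEvent.class, "customerId");
            xstream.aliasField("customerId", CustomerRenamedEvent.class, "customerId");
        }
    }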
For ultimate performance, you're probably better off without reflection-based mechanisms altogether. In that case, it is probably wisest to create a custom serialization mechanism. The DataInputStream and DataOutputStream classes allow you to easily write the contents of the Events to an output stream, while ByteArrayOutputStream and ByteArrayInputStream allow writing to and reading from byte arrays. The DomainEvent class provides a constructor that you can use to do a full reconstruction based on existing data: DomainEvent(String eventIdentifier, DateTime creationTimeStamp, long eventRevision, long sequenceNumber, AggregateIdentifier aggregateIdentifier).
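A minimal sketch of such hand-rolled serialization, using only JDK streams. The fields written here are hypothetical; reconstructing the concrete event from the values read back (for example via the DomainEvent constructor mentioned above) is left as a comment.

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    public class HandRolledEventSerializer {

        // Writes a few hypothetical event fields to a byte array.
        public byte[] serialize(String eventIdentifier, long sequenceNumber, String aggregateId)
                throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeUTF(eventIdentifier);
            out.writeLong(sequenceNumber);
            out.writeUTF(aggregateId);
            out.flush();
            return bytes.toByteArray();
        }

        // Reads the fields back in the exact order they were written.
        public void deserialize(byte[] data) throws IOException {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            String eventIdentifier = in.readUTF();
            long sequenceNumber = in.readLong();
            String aggregateId = in.readUTF();
            // From these values, construct the concrete event instance.
        }
    }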
The Axon Framework uses an IdentifierFactory to generate all the identifiers, whether they are for Events or Aggregates. The default IdentifierFactory uses randomly generated java.util.UUID based identifiers. Although they are very safe to use, the process of generating them doesn't excel in performance.
IdentifierFactory is an abstract factory that uses Java's ServiceLoader mechanism (available since Java 6) to find the implementation to use. This means you can create your own implementation of the factory and put the name of that implementation in a file called "/META-INF/services/org.axonframework.domain.IdentifierFactory". Java's ServiceLoader mechanism will detect that file and attempt to create an instance of the class named inside.
There are a few requirements for the IdentifierFactory implementation. It must:
- have its fully qualified class name as the contents of the /META-INF/services/org.axonframework.domain.IdentifierFactory file on the classpath,
- have an accessible zero-argument constructor,
- extend IdentifierFactory,
- be accessible by the context classloader of the application or by the classloader that loaded the IdentifierFactory class, and
- be thread-safe.
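A sketch of a custom factory that meets these requirements, using a time-plus-counter scheme purely as an illustration. The generateIdentifier() method name is an assumption about the IdentifierFactory contract, so verify it against your Axon version.

    import java.util.concurrent.atomic.AtomicLong;
    import org.axonframework.domain.IdentifierFactory;

    // Registered via a classpath file named
    // /META-INF/services/org.axonframework.domain.IdentifierFactory containing a
    // single line with this class's fully qualified name.
    public class TimeBasedIdentifierFactory extends IdentifierFactory {

        // AtomicLong keeps the factory thread-safe without explicit locking.
        private final AtomicLong counter = new AtomicLong();

        // The implicit zero-argument constructor satisfies the accessibility requirement.

        @Override
        public String generateIdentifier() {
            // Illustrative scheme only: wall-clock time plus a process-wide counter.
            return Long.toHexString(System.currentTimeMillis())
                    + "-" + Long.toHexString(counter.incrementAndGet());
        }
    }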