
CQRS - Command Query Responsibility Segregation

A lot of information systems have been built with a data manipulation focus in mind. Often CRUD (create, read, update, delete) operations built on top of a predefined relational data model are the first functionalities to be implemented, and they serve as a foundation for the rest of the application. This is mainly because when we think about information systems we have a mental model of some record structure in which we can create new records, read records, update existing records, and delete records. This mindset has been learned throughout the last decades of data-centric and transaction-oriented IT systems. The approach often leads to shortcomings when it comes to querying and analyzing the system's data.

Classical layered architecture


This is where CQRS comes into play. CQRS stands for Command Query Responsibility Segregation and was first described by Greg Young and later on by Martin Fowler. This architectural pattern calls for dividing the software architecture into two parts with clearly separate responsibilities: one for executing commands that change the state of the software system, and one for side-effect-free queries. This separation is an evolution of Bertrand Meyer's CQS principle (Command Query Separation), which requires a similar separation for the individual methods of an interface.
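To illustrate, here is a minimal sketch of the CQS principle in Scala, using a hypothetical ShoppingCart interface (not taken from any framework): commands mutate state and return nothing, queries return data without side effects.

trait ShoppingCart {
  // Command: changes the state of the cart, returns nothing
  def addItem(productId: String, quantity: Int): Unit

  // Query: side-effect free, only returns data
  def totalItems: Int
}

class InMemoryCart extends ShoppingCart {
  private var items = Map.empty[String, Int]

  def addItem(productId: String, quantity: Int): Unit =
    items = items.updated(productId, items.getOrElse(productId, 0) + quantity)

  def totalItems: Int = items.values.sum
}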

CQRS is not a new idea. It can rather be seen as an alternative or a further improvement to the ubiquitous vertical layers model of a typical n-tier application. The common layers of relational database management system (RDBMS), data access layer (often an O/R mapper), business logic, application layer, and presentation layer usually result in a violation of the Single Responsibility Principle in the software architecture. The data access layer and the business logic are used both for the validation and execution of user actions and for the provision of data for display in the presentation layer and for reporting. This interweaving of two different responsibilities can have fatal consequences. It can mean that the implementation of requirements for the process logic is compromised by performance necessities in the query processing. On the other hand, the behavior implemented in the process and business logic can result in poor performance of more complex queries that need to return larger sets of data; in the worst case it may not be feasible to implement certain requirements at all.
A simple CQRS architecture


CQRS answers these often contradictory sets of requirements by introducing at least two models: one for the representation of the business logic (often in the form of an object-oriented domain model) and one for the provisioning of the data (the query model). The division not only allows both models to be optimized for their respective responsibilities. It also makes the business logic much simpler to test and opens the way for a whole series of simplifications, which in the end might even allow forgoing some "heavyweight" technologies like O/R mapper frameworks, database clusters, or caches, or might result in choosing alternatives to relational databases.
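As a rough sketch of what the two models might look like in Scala (all types here are hypothetical examples, not part of any framework): the command side carries behavior and validation, while the query side serves flat, display-ready data.

// Command side: a behavior-rich domain model with validation
final case class Account(id: String, balance: BigDecimal) {
  def withdraw(amount: BigDecimal): Account = {
    require(amount <= balance, "insufficient funds")
    copy(balance = balance - amount)
  }
}

final case class Withdraw(accountId: String, amount: BigDecimal)

trait CommandHandler {
  def handle(command: Withdraw): Unit
}

// Query side: a flat read model, optimized for display and reporting
final case class AccountSummary(id: String, balance: BigDecimal, lastActivity: String)

trait AccountQueries {
  def summariesFor(ownerId: String): Seq[AccountSummary]
}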

In a simple case CQRS might result in different data access and business logic components for the segregated responsibilities of [create, update, delete] on the one hand and [read, query] on the other hand. Depending on the characteristics of the application it might also be reasonable to divide the underlying persistence layer. For example, using a relational database for the storage of the raw data model on the command side and a read- and query-optimized, high-throughput storage mechanism like a NoSQL search platform on the query side might be a good choice. In this case a data synchronization and transformation mechanism needs to be implemented, as shown in the sketch below. As an additional benefit, scalability and availability can be achieved much more easily, although this might also lead to a model of eventual consistency (see CAP theorem) - something that is often justifiable in the given context.
CQRS with separated persistence mechanisms
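One way such a synchronization mechanism could look in Scala - a hypothetical sketch, assuming the command side publishes change events and a projection keeps the read-optimized store eventually consistent:

// A change event published by the command side after a successful write
final case class OrderPlaced(orderId: String, customer: String, total: BigDecimal)

// Abstraction over the read-optimized store (e.g. a NoSQL search platform)
trait ReadStore {
  def index(document: Map[String, Any]): Unit
}

// Projection: transforms write-model events into denormalized read documents;
// typically invoked asynchronously, which is where eventual consistency comes from
class OrderProjection(readStore: ReadStore) {
  def onEvent(event: OrderPlaced): Unit =
    readStore.index(Map(
      "orderId"  -> event.orderId,
      "customer" -> event.customer,
      "total"    -> event.total
    ))
}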
