
Creating load tests with Gatling instead of JMeter

I just came across a tool called Gatling (http://gatling-tool.org/) for creating load tests for web applications. I had been using JMeter for a long time, but JMeter has weaknesses that Gatling doesn't have.

JMeter uses a synchronous request model: for every request JMeter generates, an internal thread is spawned and blocked until a response is received or a timeout occurs. This ties up resources on the load injector. The maximum number of threads within a JVM is limited, depending on the underlying infrastructure, and even if you manage to run a large number of parallel threads, this results in high CPU and memory utilization. Performance tweaking and scaling out to distributed testing can help, but both make testing more complex and error-prone.

This behavior can distort metrics. Think of a typical breakpoint load test, where you want to determine the maximum number of requests per second the tested system can serve. JMeter limits this to

max_requests_per_second = (1000 / average_request_time_in_milliseconds) * max_jmeter_threads

Even if the system could serve more, this is the maximum number of requests JMeter can inject. This matters especially when the tested application has long response cycles, e.g. because of long-running transactions or long-running calculations within the requests.
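To make the bound concrete, here is a small worked example in Scala. The numbers are hypothetical, not from any measurement: assume an average response time of 2000 ms and a JMeter thread pool capped at 500 threads.

```scala
// Hypothetical figures for illustration only.
val averageRequestTimeMs = 2000.0
val maxJmeterThreads = 500

// Each blocked thread can complete at most 1000 / averageRequestTimeMs
// requests per second, so the injector itself caps throughput at:
val maxRequestsPerSecond = (1000.0 / averageRequestTimeMs) * maxJmeterThreads

println(maxRequestsPerSecond) // 250.0
```

So with a 2-second response time, 500 threads can never push the system beyond 250 requests per second, no matter how much capacity the tested system actually has.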

Gatling is a tool built on Scala and Akka that can drive far more parallel requests. It does not use a "thread per request" model; requests are generated using Akka's concurrency features. Akka is built on actors, lightweight units of concurrency that can be created very quickly and managed independently in large numbers within a single JVM. This makes it easy to generate tens of thousands of parallel requests from a load injector running on commodity hardware.
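The difference to the thread-per-request model can be sketched with plain Scala futures. This is a minimal illustration, not Gatling's actual engine: each virtual user is modelled as a lightweight asynchronous task multiplexed over a small thread pool, instead of dedicating one blocked OS thread per in-flight request.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Hypothetical sketch: a "virtual user" as a lightweight asynchronous task.
def virtualUser(id: Int): Future[Int] = Future {
  // stand-in for a non-blocking HTTP request/response cycle
  id
}

// Launch 10,000 virtual users without needing 10,000 OS threads;
// the default execution context multiplexes them over a few worker threads.
val results = Await.result(Future.sequence((1 to 10000).map(virtualUser)), 60.seconds)
println(results.size) // 10000
```

A JMeter-style design would need 10,000 blocked threads for the same concurrency level.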

Like JMeter, Gatling can be configured using scripts, called "scenarios" in Gatling terminology. These are written in a domain-specific language built on Scala features. For example, a test with a simple clickstream in a web application can be defined as follows:

val stdSearch = scenario("Standard Search")
    .exec( http("Access Google").get("http://www.google.com") )
    .pause(2, 3)
    .exec( http("Search for 'auconsil'").get("http://google.com/#").queryParam("q","auconsil"))
    .pause(2)

setUp(stdSearch.users(4000).ramp(180))

The script is more or less self-explanatory: it creates a scenario with two requests and two pauses. Afterwards a load test, a so-called simulation, is defined with 4000 parallel users ramped up over 180 seconds, after which all users are active.
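Scenarios can also verify responses. As a sketch, assuming the Gatling DSL of the time (a `status` check was part of the Gatling 1.x DSL; the exact syntax may differ between versions), the first request above could assert a successful response:

```scala
val checkedSearch = scenario("Checked Search")
    .exec( http("Access Google").get("http://www.google.com").check(status.is(200)) )
```

This fragment is not standalone; it requires the Gatling runtime and its DSL imports.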

This combination of a Scala-based DSL for scripting, an actor-based concurrency model, and an extremely lightweight CPU and memory footprint makes Gatling my first choice for future projects.
