Introduction
This article is about Artillery, a popular load and smoke testing framework.
Recently I used Artillery to evaluate the performance of some of our production services. I'd like to present some of the scenarios I encountered, and ways to solve them. So if you're new to load testing, I hope this article can serve as a helpful introduction to Artillery.
Now let's get started!
Regarding the code samples
Note that in the samples below, everything is installed into a local folder that we create, so you can follow along and run all of these samples without installing anything globally on your machine. There's no need to worry about side effects or changes to your system's configuration -- simply delete the folder when you are done!
The only prerequisite is to install Node.
JSONPlaceholder (a simple test server)
In these samples, I'm going to be using a publicly-available test REST API service known as JSONPlaceholder as the server. The public version is available at https://jsonplaceholder.typicode.com/, but we're going to run the same code locally instead -- because Artillery is designed to put heavy load on the server, we don't want to cause problems for this free and useful service!
Creating and running tests
Installation
Create a local directory that we'll use to install the dependencies and run our tests:

```shell
mkdir load-testing
cd load-testing
```
Install Artillery (and also the csv-parse module, which we'll need later):

```shell
npm install --save artillery
npm install --save csv-parse
```
Install JSONPlaceholder:

```shell
npm install --save jsonplaceholder
```
(Note: you might get some warnings here about your Node version being too new, but you can ignore those. I used Node 15 without problems)
Run the JSONPlaceholder server:

```shell
node ./node_modules/jsonplaceholder/index.js
```
Running the first test sample
Now that our server is running, let's get our first test code up and running!
```yaml
# load-testing.yml
config:
  target: "http://localhost:3000"
  phases:
    - duration: 60
      arrivalRate: 10
      name: "Run queries"
scenarios:
  - name: "Run queries"
    flow:
      - get:
          url: "/todos/1"
```
Let's see what we've got here:

- We set the location of the server with `target: "http://localhost:3000"`
- In the `phases:` section we configure the test to run for 60 seconds, with 10 new virtual users arriving per second
- In the scenario "Run queries" each virtual user makes a GET request to one of the endpoints (users keep arriving, so the requests continue until the time is up)
To run it:

```shell
./node_modules/artillery/bin/run run load-testing.yml
```
Reading test cases from a CSV file (payload files)
This is all well and good so far, but we're just requesting the same data from the same resource repeatedly. For most systems this allows every request to be served from cached code and data, so it isn't a very good simulation of real-world usage. We'd therefore like to vary the resources and parameters to provide more realistic testing, but it would be unwieldy to hard-code each value into the YAML file. This is where "payload" files come in -- we store these parameters in a CSV file, so we can easily create and change test cases without needing to modify the code.
Let's add the CSV file and the related code:
```csv
# queries.csv
resource,queryparam1,queryparam2,queryparam3
posts,_start=20,_end=30,
posts,views_gte=10,views_lte=20,
posts,_sort=views,_order=asc,
posts,_page=7,_limit=20,
posts,title=json-server,author=typicode,
comments,name_like=alias,,
posts,title_like=est,,
posts,q=internet,,
users,_limit=25,,
users,_sort=firstName,_order=desc,
users,age_gte=40,,
users,q=Sachin,,
```
```yaml
# load-testing.yml
config:
  target: "http://localhost:3000"
  payload:
    path: "queries.csv" # path is relative to the location of the test script
    skipHeader: true
    fields:
      - resource
      - queryparam1
      - queryparam2
      - queryparam3
  phases:
    - duration: 60
      arrivalRate: 10
      name: "Run queries"
scenarios:
  - name: "Run queries"
    flow:
      - get:
          url: "/{{ resource }}?{{ queryparam1 }}&{{ queryparam2 }}&{{ queryparam3 }}"
```
Now we have the parameters in the CSV. In the `payload:` section we define the location of the file and a variable name for each field; then in the "Run queries" scenario we use those variable names in the URL template. The nice thing is that Artillery automatically advances through the CSV rows for us!
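To make the substitution concrete, here's a tiny sketch (not Artillery's actual implementation) of how one CSV row gets mapped onto the configured `fields` names and spliced into the URL template:

```javascript
// A rough sketch (not Artillery's real code) of mapping a CSV row onto
// the configured `fields` names and substituting it into the URL template.
function expandTemplate(template, vars) {
  // Replace each {{ name }} placeholder with the matching variable's value
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_, name) => vars[name] ?? "")
}

const fields = ["resource", "queryparam1", "queryparam2", "queryparam3"]
const row = ["posts", "_start=20", "_end=30", ""] // first data row of queries.csv

// Zip the field names with the row's values to build the variable map
const vars = Object.fromEntries(fields.map((f, i) => [f, row[i]]))

const url = expandTemplate(
  "/{{ resource }}?{{ queryparam1 }}&{{ queryparam2 }}&{{ queryparam3 }}",
  vars
)
console.log(url) // "/posts?_start=20&_end=30&"
```

Note that rows with empty fields produce a trailing `&` in the query string, which servers generally ignore.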
Creating an initial test data set
With the test server we've been using, the data is just static JSON, so it's easy to make every test run start from a consistent dataset. When testing real services, however, you may need to use an API to populate the initial data. Fortunately, this is possible in Artillery without any additional external tools -- we can use a "processor" (a custom JavaScript plugin) and call it from the `before` block (initialization code which runs before the test cases).
```javascript
// utils.js
const fs = require("fs")
const parse = require("csv-parse") // csv-parse v4 callback API (in v5+, use require("csv-parse").parse)

function loadCsvIntoJson(context, events, done) {
  fs.readFile(context.vars["csvFilePath"], function (err, fileData) {
    parse(fileData, { columns: false, trim: true }, function (err, rows) {
      // CSV data is in an array of arrays passed to this callback as `rows`
      context.vars["csvRows"] = rows
      context.vars["row"] = 1 // start at 1 to skip the header row
      done()
    })
  })
}

function getNextRow(context, events, done) {
  let row = context.vars["row"]
  context.vars["userId"] = context.vars["csvRows"][row][0]
  context.vars["id"] = context.vars["csvRows"][row][1]
  context.vars["title"] = context.vars["csvRows"][row][2]
  context.vars["completed"] = context.vars["csvRows"][row][3]
  row++
  context.vars["row"] = row
  done()
}

function hasMoreRows(context, next) {
  return next(context.vars["row"] < context.vars["csvRows"].length)
}

// Export the functions so Artillery can find them by name
module.exports = { loadCsvIntoJson, getNextRow, hasMoreRows }
```
```yaml
# load-testing.yml
config:
  target: "http://localhost:3000"
  processor: "./utils.js"
  variables:
    csvFilePath: "todos.csv" # Path is relative to the location of the test script
  payload:
    path: "queries.csv"
    skipHeader: true
    fields:
      - resource
      - queryparam1
      - queryparam2
      - queryparam3
  phases:
    - duration: 60
      arrivalRate: 10
      name: "Run queries"
before:
  flow:
    - log: "Adding Todos..."
    - function: "loadCsvIntoJson"
    - loop:
        - function: "getNextRow"
        - log: "Inserting Todo (id={{ id }})"
        - post:
            url: "/todos"
            json:
              userId: "{{ userId }}"
              id: "{{ id }}"
              title: "{{ title }}"
              completed: "{{ completed }}"
      whileTrue: "hasMoreRows"
scenarios:
  - name: "Run queries"
    flow:
      - get:
          url: "/{{ resource }}?{{ queryparam1 }}&{{ queryparam2 }}&{{ queryparam3 }}"
```
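The contents of `todos.csv` aren't shown here; based on the column order `utils.js` reads (userId, id, title, completed), a minimal hypothetical version might look like this (the header row is skipped because `loadCsvIntoJson` starts reading at row 1):

```csv
userId,id,title,completed
1,201,delectus aut autem,false
1,202,quis ut nam facere,true
2,203,fugiat veniam minus,false
```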
Using .env (dotenv) for configuration
Until now these examples have simply hard-coded many values, but in a real-world test automation setup we probably want to separate configuration from code (many teams use .env as a standard place to store secrets). For our setup, .env was the way to go, but Artillery doesn't support it directly. Fortunately there is a tool called dotenv-cli which can run any executable with the variables from .env loaded into its environment. You can install it by running:

```shell
npm install --save dotenv-cli
```
For example, we might put the location of the server into our .env file:
```
# .env
ARTILLERY_TARGET=http://localhost:3000
```
Then we can load this from the environment in the YAML file:

```yaml
# load-testing.yml
config:
  target: "{{ $processEnvironment.ARTILLERY_TARGET }}"
  ...
```
Finally, run with `dotenv-cli` to use the .env values in the tests:

```shell
./node_modules/dotenv-cli/cli.js ./node_modules/artillery/bin/run run load-testing.yml
```
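As an optional convenience (relying on npm resolving local package binaries in scripts), you could wrap this in an npm script so the test runs with `npm run load-test`:

```json
{
  "scripts": {
    "load-test": "dotenv -- artillery run load-testing.yml"
  }
}
```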
Interpreting the Output
After the test run completes, you will get some information like this:
```
All VUs finished. Total time: 1 minute, 7 seconds

--------------------------------
Summary report @ 17:59:44(+0900)
--------------------------------

http.codes.200: ................................................................ 600
http.request_rate: ............................................................. 10/sec
http.requests: ................................................................. 600
http.response_time:
  min: ......................................................................... 13
  max: ......................................................................... 202
  median: ...................................................................... 104.6
  p95: ......................................................................... 147
  p99: ......................................................................... 179.5
http.responses: ................................................................ 600
vusers.completed: .............................................................. 600
vusers.created: ................................................................ 600
vusers.created_by_name.Run queries: ............................................ 600
vusers.failed: ................................................................. 0
vusers.session_length:
  min: ......................................................................... 15.6
  max: ......................................................................... 339.2
  median: ...................................................................... 111.1
  p95: ......................................................................... 156
  p99: ......................................................................... 228.2
```
Most of these are pretty self-explanatory, but the meaning of "p95" and "p99" might not be immediately obvious. From the documentation:
> Request latency is in milliseconds, and p95 and p99 values are the 95th and 99th percentile values (a request latency p99 value of 500ms means that 99 out of 100 requests took 500ms or less to complete).
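To build some intuition for these numbers, here's a simple nearest-rank percentile calculation (Artillery's internal implementation differs; this is just to illustrate what p95/p99 mean):

```javascript
// Nearest-rank percentile: the smallest sample value such that at least
// p% of samples are less than or equal to it. (Artillery computes its
// percentiles differently internally; this is only for illustration.)
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b)
  const rank = Math.ceil((p / 100) * sorted.length)
  return sorted[Math.max(rank - 1, 0)]
}

// A small made-up set of latencies (milliseconds)
const latencies = [13, 20, 35, 50, 80, 104, 110, 120, 147, 202]
console.log(percentile(latencies, 50)) // 80
console.log(percentile(latencies, 95)) // 202
```

So a p95 of 147ms in the report above means 95% of requests completed in 147ms or less.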
You may also see lines like:
```
errors.ETIMEDOUT: .............................................................. 9412
errors.ESOCKETTIMEDOUT: ........................................................ 30
errors.ECONNREFUSED: ........................................................... 16550
```
These are socket-level errors, where the client couldn't connect to the server or timed out waiting for a response. As you increase the number of users and requests, you'll eventually reach a limit where the service cannot process all of the incoming requests.
Authorization - when the API requires an access token
In our case, our API server requires an authentication token. You can add it to the HTTP headers for requests (where `access_token` is the token returned by your authentication function):
```yaml
- post:
    url: "/path/to/resource"
    headers:
      authorization: "Bearer {{ access_token }}"
```
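If you'd rather not repeat the header on every request, Artillery processors also support a `beforeRequest` hook that attaches headers programmatically. Here's a sketch (assuming the token is already stored in `context.vars`; the function and file names are my own):

```javascript
// auth.js - sketch of a beforeRequest hook that attaches the token from
// context.vars to every outgoing request's headers.
function addAuthHeader(requestParams, context, events, done) {
  requestParams.headers = requestParams.headers || {}
  requestParams.headers["authorization"] = `Bearer ${context.vars["access_token"]}`
  done()
}

module.exports = { addAuthHeader }

// Standalone check with a stub request and context
const requestParams = { url: "/todos" }
const context = { vars: { access_token: "abc123" } }
addAuthHeader(requestParams, context, null, () => {})
console.log(requestParams.headers["authorization"]) // "Bearer abc123"
```

In the YAML you would reference it with `processor: "./auth.js"` in `config` and `beforeRequest: "addAuthHeader"` on the request.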
Other Resources
JSONPlaceholder (the free server we used) is based on a framework called JSON Server, an extremely powerful tool that lets you create a mock REST server from any arbitrary JSON in just a few minutes! It can be very useful for development and testing.
Conclusion
That's it for this article! I hope you found it useful, and I encourage you to check out the Artillery Docs if you are interested in learning more!