Why a Benchmark Runner
As the number of APIs grows with the move to microservices, we need a way to determine whether changes in a service's response times are related to code changes. For this purpose, a defined load with repeatable requests seems most useful.
Regarding existing libraries
Other tools like matteofigus/api-benchmark, bvanderlaan/api-bench-runner or jeffbski/bench-rest have all been untouched for quite a while and don't provide type definitions for TypeScript. That makes them less desirable for bigger projects, where better static code checking is a huge boost in development speed. Additionally, this tool separates the validation thread from the thread processing the actual requests, to further minimize the effect of complicated validations or large response bodies on the data gathering.
Usage & Examples
Basically, require/include the main module and supply the executor method with the required parameters. There is an example available in /examples.
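As a rough sketch of the flow described above: require the main module and hand the executor a configuration describing the load. Note that every name here (the `executor` function, the option fields, the result shape) is an assumption for illustration, not the real API; the placeholder body stands in for the actual package export. See /examples for the real usage.

```typescript
// Assumed option shape -- the real parameter names may differ.
interface RunnerOptions {
  url: string;               // endpoint under test
  requestsPerSecond: number; // fixed, repeatable load
  durationSeconds: number;
  middlewares: string[];     // absolute paths or short forms
}

// Placeholder standing in for the package's executor export;
// the real one would issue the defined load and gather timings.
async function executor(options: RunnerOptions): Promise<{ meanMs: number }> {
  return { meanMs: 0 };
}

executor({
  url: "http://localhost:3000/health",
  requestsPerSecond: 50,
  durationSeconds: 30,
  middlewares: ["json"],
}).then((result) => console.log(result.meanMs));
```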
Middlewares are loaded by absolute file path. The following short forms are expanded to bundled middlewares:
- json and form encode
- access token handling
- csrf-header handling
- status 2xx check
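The expansion of short forms to bundled middleware paths could be sketched like this. The short-form names, file names, and directory layout in this map are all assumptions for illustration; only the general idea (pass absolute paths through, expand known keywords, reject unknown ones) comes from the description above.

```typescript
import * as path from "path";

// Hypothetical mapping of short forms to bundled middleware files.
const SHORT_FORMS: Record<string, string> = {
  json: "encode-json.js",   // JSON body encoding
  form: "encode-form.js",   // form body encoding
  token: "access-token.js", // access token handling
  csrf: "csrf-header.js",   // CSRF header handling
  status: "status-2xx.js",  // status 2xx check
};

// Expand a short form to an absolute path; pass absolute paths through as-is.
function resolveMiddleware(name: string): string {
  if (path.isAbsolute(name)) return name;
  const file = SHORT_FORMS[name];
  if (!file) throw new Error(`unknown middleware short form: ${name}`);
  return path.resolve(process.cwd(), "middlewares", file);
}
```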
Any logger can be used that either implements the logger interface or has a wrapper. So far, a wrapper for pino is available.
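A minimal sketch of what such an interface and the pino wrapper might look like. The exact method set required by the interface is an assumption; `PinoLike` is a stand-in type so the sketch stays dependency-free, since a real pino logger already exposes these level methods and the wrapper can therefore be thin.

```typescript
// Assumed logger interface -- the real one may require more methods.
interface Logger {
  trace(msg: string): void;
  debug(msg: string): void;
  info(msg: string): void;
  warn(msg: string): void;
  error(msg: string): void;
}

// Stand-in for the real pino type, to keep this sketch self-contained.
type PinoLike = Record<"trace" | "debug" | "info" | "warn" | "error", (msg: string) => void>;

// Wrap a pino(-like) logger so it satisfies the interface.
function wrapPino(pino: PinoLike): Logger {
  return {
    trace: (msg) => pino.trace(msg),
    debug: (msg) => pino.debug(msg),
    info: (msg) => pino.info(msg),
    warn: (msg) => pino.warn(msg),
    error: (msg) => pino.error(msg),
  };
}
```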
Logging levels used
Most log entries are at debug level, with the major steps written to info. Trace is currently unused but may be used for detailed argument printing at some point.
As usual with my projects, this is MIT-licensed.
Planned features
- more unit tests (yeah, it's one of those projects)
- soap support
- graphql support