Introduction
Automating benchmarks is important for reproducibility, and hence for comparability, which is the major intent when performing benchmarks. Furthermore, managing different combinations of parameters is error-prone and often results in a significant amount of work, especially if the parameter space gets large.
To alleviate these problems, JUBE helps to perform and analyze benchmarks in a systematic way. It allows custom workflows that can adapt to new architectures.
For each benchmark application, the benchmark data is written out in a certain format that enables JUBE to deduce the desired information. This data can be parsed by automatic pre- and post-processing scripts that extract the relevant values and store them in a condensed form for manual interpretation.
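As a sketch of how this parsing can be configured, the following fragment uses JUBE's XML input format to define a regular-expression pattern and apply it to a job's output file. The step name, file name, and pattern text are illustrative assumptions, not taken from this document; `$jube_pat_fp` is JUBE's built-in placeholder for a floating-point number.

```xml
<!-- Illustrative sketch: extract a runtime value from a run's stdout -->
<patternset name="runtime_patterns">
  <!-- $jube_pat_fp matches a floating-point number in the output -->
  <pattern name="runtime" type="float">Total runtime: $jube_pat_fp s</pattern>
</patternset>

<analyser name="analyse_runtime">
  <use>runtime_patterns</use>
  <!-- "run" is an assumed step name; stdout is the captured output file -->
  <analyse step="run">
    <file>stdout</file>
  </analyse>
</analyser>
```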
The JUBE benchmarking environment provides a script-based framework to easily create benchmark sets, run those sets on different computer systems, and evaluate the results. It is actively developed by the Jülich Supercomputing Centre of Forschungszentrum Jülich, Germany.
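As a minimal sketch of such a benchmark set, the following configuration, following the structure of JUBE's XML input format, defines a small parameter space and a step that is executed once for every parameter combination. The benchmark name, output path, and parameter values are illustrative.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jube>
  <benchmark name="hello_world" outpath="bench_run">
    <!-- Each comma-separated value creates its own run of the step below -->
    <parameterset name="params">
      <parameter name="tasks" type="int">1,2,4</parameter>
    </parameterset>
    <step name="run">
      <use>params</use>                          <!-- pull in the parameter set -->
      <do>echo "running with $tasks tasks"</do>  <!-- $tasks is substituted per run -->
    </step>
  </benchmark>
</jube>
```

Running `jube run hello_world.xml` would expand the parameter space into three separate runs, one per value of `tasks`, keeping each combination reproducible without manual bookkeeping.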