COIN-OR::LEMON - Graph Library

Opened 16 years ago

Last modified 10 years ago

#33 new task


Reported by: Alpar Juttner Owned by: Alpar Juttner
Priority: major Milestone: LEMON 1.4 release
Component: core Version:
Keywords: Cc:
Revision id:


Lemon seriously needs a benchmarking environment to test

  • the effect of implementation changes on the running time
  • performance of different compilers and optimization settings

This environment should contain

  • scripts for automated testing,
  • a carefully chosen set of test files and graph generators.

Change History (12)

comment:1 Changed 15 years ago by Alpar Juttner

Milestone: LEMON 1.0 release → Post 1.0
Owner: changed from Alpar Juttner to Peter Kovacs

comment:2 Changed 15 years ago by Peter Kovacs

Status: new → assigned

comment:3 Changed 15 years ago by Peter Kovacs

Milestone: LEMON 1.1 release

comment:4 Changed 14 years ago by Peter Kovacs

The test file in #309 could be a small step for this task.

comment:5 Changed 14 years ago by Alpar Juttner

Peter has put together a repository consisting of a couple of benchmarks. Currently, it can be found here at

comment:6 in reply to:  5 ; Changed 14 years ago by Alpar Juttner

Replying to alpar:

Peter has put together a repository consisting of a couple of benchmarks. Currently, it can be found here at

Some comments on this repo:

  • It would be important to apply a rule that test code is never changed once it has settled down. This will, however, result in a lot of scripts in the long run, so we need some naming convention for the executables and/or some directory hierarchy.
  • For the sake of post processing, the output of the tests should also be standardized. My suggestion is that
    • a test prog can write arbitrary informative messages to stdout, but
    • the real measurements must be printed in a special format that makes them easy to filter out and interpret. I would suggest a whitespace-separated triplet of
      • a prefix, say ***,
      • a label identifying the measurement, like ListDigraph-ArcIt, and
      • the measured amount itself.

Instead of a single label we might also allow several sublabels, like itertest ListDigraph ArcIt.

  • The scripts producing the reports should be platform independent (i.e. they should work on Windows, too), thus probably written in Python. It is a good idea to allow executing them from the build system (e.g. make test-flows).
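A minimal sketch of such a Python filter, assuming the suggested *** prefix and the whitespace-separated label/value format above (the function name and sample labels are hypothetical, not part of the repository):

```python
def filter_measurements(lines):
    """Keep only the standardized measurement lines from a test
    program's output and split them into (labels, value) pairs,
    ignoring ordinary informative messages."""
    results = []
    for line in lines:
        parts = line.split()
        # A measurement line is "*** <label or sublabels...> <value>".
        if len(parts) >= 3 and parts[0] == "***":
            labels = tuple(parts[1:-1])  # one label or several sublabels
            value = float(parts[-1])
            results.append((labels, value))
    return results

# Example: mixed informative and measurement output.
sample = [
    "building test graph...",
    "*** ListDigraph-ArcIt 0.42",
    "*** itertest ListDigraph ArcIt 0.13",
]
for labels, value in filter_measurements(sample):
    print(" ".join(labels), value)
```

Because the script only splits on whitespace and checks the prefix, it works the same on Windows and Unix, which fits the platform-independence requirement.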

comment:7 in reply to:  6 Changed 14 years ago by Peter Kovacs

Replying to alpar:

Some comments on this repo.

I agree with all your comments. The current version of this repository is far from a good state. I just collected some of my benchmark codes, and I am waiting for a volunteer to improve the repository. :)

comment:8 Changed 14 years ago by Peter Kovacs

Type: enhancement → task

comment:9 Changed 13 years ago by Peter Kovacs

Owner: changed from Peter Kovacs to Alpar Juttner
Status: assigned → new

comment:10 Changed 12 years ago by Alpar Juttner

Milestone: LEMON 1.3 release

Please find a tentative benchmarking framework here:

It is based on our project template, thus it is possible to build the same source with different LEMON versions in parallel by setting the LEMON_SOURCE_ROOT_DIR CMake variable like this:

$ hg clone
$ cd lemon-benchmark
$ hg clone lemon
$ mkdir build; cd build
$ cmake ..; make
$ cd ..
$ hg clone [...]/mylemon mylemon
$ mkdir build-my; cd build-my
$ cmake -DLEMON_SOURCE_ROOT_DIR=../mylemon ..; make

The directory layout is the following:

  • generators/ It contains the source codes that generate input files for the algorithms. Currently only netgen is ported, but other generators are more than welcome.
  • data/ The CMakeLists.txt generates the input files. Currently, there is one make target called net-data that generates a DIMACS min cost flow instance called
    $ make net-data
  • tests/ The actual benchmark codes live here. Currently, there is one benchmark code called In addition, there are some utilities here, e.g.
    • benchmark_tools.h contains utility functions for unified running time logging
    • factors out the main() function of benchmark programs that work on a file input. Look at and CMakeLists.txt to see how it works.

The convention for the benchmark codes is that they can write anything to stdout, but the actual running time measurements must appear on separate lines containing the following fields, separated by single spaces:

*** test-name instance-name subtest-name runtime ...
  • test-name The name of the test code.
  • instance-name The instance the benchmark code is running on, e.g. the name of the input file or some graph size given as a command line parameter.
  • subtest-name A benchmark code may measure more than one thing at once. This name identifies each particular measurement.
  • runtime The (wall clock) running time in seconds.
  • ... Other fields are allowed at the end of the line. For example, the logTime() function from benchmark_tools.h prints the value of realtime/(usertime+systime)-1.0. This value indicates the accuracy of the measurement in some sense.

It is recommended to use logTime() for printing time reports.
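A line in this format could be parsed like this in one of the planned evaluation scripts (a sketch; the dictionary keys follow the field names above, and the function name and sample line are hypothetical):

```python
def parse_measurement(line):
    """Parse a "*** test-name instance-name subtest-name runtime ..."
    log line, returning None for ordinary informative output."""
    parts = line.split()
    if len(parts) < 5 or parts[0] != "***":
        return None
    return {
        "test": parts[1],
        "instance": parts[2],
        "subtest": parts[3],
        "runtime": float(parts[4]),      # wall clock seconds
        "extra": parts[5:],              # e.g. the accuracy value from logTime()
    }

# Hypothetical line from a benchmark run:
record = parse_measurement("*** mcf-bench net-1000 total 1.25 0.02")
print(record["test"], record["runtime"])
```

Since informative messages simply yield None, the same function can be mapped over a whole stdout capture without any pre-filtering.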

Many, many things remain to be done, of course. A quick TODO list:

  • Port more input file generators.
  • Add more tests. This is the most important thing, of course.
  • Scripts that execute the benchmarks (with different compilers and/or lemon versions) and evaluate the results.
  • Ability to switch off the compilation of benchmark codes that are not available with a certain version of LEMON.

Any feedback is very welcome. Contributions are even more welcome :)

comment:11 Changed 12 years ago by Alpar Juttner

I added one more field to the running time log lines:

*** build-name test-name instance-name subtest-name runtime ...

Here, build-name refers to the build that is actually running. Using logTime(), it defaults to the last component of the build directory (i.e. build or build-my in the examples above), but it can be overridden by the CMake variable BENCHMARK_BUILD_ID.
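The extra build-name field is what makes comparing different builds straightforward. A sketch of how an evaluation script might group runtimes by build and compute a speedup ratio (function names and sample lines are hypothetical, assuming the five-field format above):

```python
from collections import defaultdict

def runtimes_by_build(lines):
    """Collect runtimes from "*** build-name test-name instance-name
    subtest-name runtime ..." lines, keyed by build name, so that
    different builds of the same benchmarks can be compared."""
    table = defaultdict(dict)
    for line in lines:
        parts = line.split()
        if len(parts) < 6 or parts[0] != "***":
            continue  # informative message, not a measurement
        build, test, instance, subtest = parts[1:5]
        table[build][(test, instance, subtest)] = float(parts[5])
    return table

def speedup(table, base, other, key):
    """How many times faster `other` is than `base` on one measurement."""
    return table[base][key] / table[other][key]

# Hypothetical log lines from two builds of the same benchmark:
logs = [
    "*** build mcf net-1000 total 2.0",
    "*** build-my mcf net-1000 total 1.0",
]
table = runtimes_by_build(logs)
print(speedup(table, "build", "build-my", ("mcf", "net-1000", "total")))
```

Keying on the (test, instance, subtest) triple ensures that only runs of the same measurement are ever compared across builds.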

comment:12 Changed 10 years ago by Balazs Dezso

Milestone: LEMON 1.3 release → LEMON 1.4 release