Main Results
We run our benchmarks on two different configurations. The Short tab below contains results for the smaller benchmarks (fewer than a million variables), run with a 1-hour timeout on a smaller machine (c4-standard-2). The Long tab contains the larger benchmarks (more than a million variables), run with a 10-hour timeout on a larger machine (c4-highmem-8). Select the desired tab to view a summary of the results for that configuration.
Configuration (Short)
- Instance: c4-standard-2
- vCPUs: 2
- Memory: 7 GB
- Timeout: 1h
Filters: Sectors, Technique, Kind of Problem, Problem Size, Realistic (Realistic only / Other only), Model
🔔 Note: As with all benchmarks, our results provide only an indication of which solvers might be good for your problems. We recommend using our scripts to benchmark on your own problems before picking a solver. See also the section on Caveats below.
Runtime vs Memory
This graph shows all the benchmark results (potentially filtered) that are summarized by the table above. Every data point is the result of running one solver on one benchmark problem instance. The more (circular) data points you see for a particular solver, the more benchmark instances it solved successfully. A point that is lower than another used less memory, and a point that is further to the left ran faster.
Click on any point in this graph to see details of the benchmark instance. A non-circular marker represents a benchmark instance that timed out or errored, while a circular marker indicates a successful run.
Caveats
Here are some key points to keep in mind when interpreting these results:
- We run benchmarks on commercial cloud virtual machines (VMs) for efficiency and cost reasons. The shared nature of cloud resources means there is some error in our runtime measurements, which we estimate at a coefficient of variation of no more than 4% (see the first sketch after this list). More details on this here.
- All solvers are run with their default options, except for the duality gap tolerance for mixed-integer benchmarks (MILPs), which we set to 0.0001 (see the second sketch after this list). You can check the duality gap for each solver on the details page of each benchmark instance.
- All results on this website use the runtime measured by our benchmarking script (see the third sketch after this list). This may not be the same as the runtime of the solving algorithm as reported by the solver, since it can include things like time for input file parsing and license checks. See more details and join the discussion on whether to use reported or measured runtime here.
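To make the 4% figure concrete, here is a minimal sketch of how a coefficient of variation could be computed from repeated runs of one solver on one instance. The function name, the example values, and the idea of re-running each instance several times are our own illustration, not part of the benchmarking scripts.

```python
import statistics

def coefficient_of_variation(runtimes):
    """Relative spread of repeated runtime measurements: standard deviation / mean."""
    return statistics.stdev(runtimes) / statistics.mean(runtimes)

# Placeholder values: replace with repeated measurements (in seconds)
# of the same solver on the same benchmark instance.
runtimes = [102.4, 98.7, 101.1, 99.9, 104.0]

cv = coefficient_of_variation(runtimes)
print(f"coefficient of variation: {cv:.1%}")  # we estimate this stays below ~4% on cloud VMs
```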
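As an illustration of the duality-gap setting, the sketch below shows how a relative MIP gap of 0.0001 could be passed to one open-source solver, HiGHS, through its Python interface. This is only an example of the kind of option override we apply, not the actual benchmarking code; other solvers expose the same tolerance under different option names, and the input file name here is hypothetical.

```python
import highspy

h = highspy.Highs()
# Relative MIP gap tolerance, analogous to the 0.0001 we use for MILP benchmarks.
h.setOptionValue("mip_rel_gap", 1e-4)
h.readModel("instance.mps")  # hypothetical input file
h.run()
```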
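To show what "measured runtime" means in practice, here is a minimal sketch of wall-clock timing around a solver invoked as a subprocess. The command line and file name are placeholders, and the real benchmarking scripts may do this differently; the point is that the measured time spans the whole process, not just the solving algorithm.

```python
import subprocess
import time

# Hypothetical solver invocation; replace with the actual command line.
cmd = ["highs", "instance.mps"]

start = time.perf_counter()
subprocess.run(cmd, check=True)
measured_runtime = time.perf_counter() - start

# This wall-clock time includes input file parsing, license checks, etc.,
# so it can exceed the solving time the solver itself reports in its log.
print(f"measured runtime: {measured_runtime:.2f} s")
```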