IR Benchmarks

Synopsis

A collection of information retrieval benchmarks covering 15 corpora (1.9 billion documents) on which 32 well-known shared tasks are based. We filled the leaderboards with Docker images of 50 standard retrieval approaches. Within this setup, we automatically ran and evaluated the 50 approaches on the 32 tasks (1,600 runs). All benchmarks are added as training datasets because their qrels are already publicly available. A detailed tutorial on how to submit approaches is available on GitHub.
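
Because the qrels are public, any run file in TREC format can be scored locally. The following is a minimal sketch using the ir_measures Python package; the file names qrels.txt and run.txt are placeholders for a benchmark's qrels and a system's run output.

    # Minimal evaluation sketch (assumes: pip install ir_measures;
    # qrels.txt and run.txt are placeholder paths to TREC-format files).
    import ir_measures
    from ir_measures import nDCG, P, RR

    qrels = ir_measures.read_trec_qrels('qrels.txt')  # qid, iteration, docid, relevance
    run = ir_measures.read_trec_run('run.txt')        # qid, Q0, docid, rank, score, tag

    # Aggregate scores over all queries, e.g. nDCG@10, Precision@10, MRR.
    print(ir_measures.calc_aggregate([nDCG @ 10, P @ 10, RR], qrels, run))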

View on TIRA: https://tira.io/task-overview/ir-benchmarks
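
The run files produced in this setup can also be fetched programmatically. This is only a sketch, assuming the tira Python client (pip install tira) exposes Client.get_run_output as described in the project's tutorials; the quoted approach and dataset identifiers are examples, not guaranteed to exist.

    # Sketch only: assumes the tira client's Client.get_run_output method and
    # the example approach/dataset identifiers; see the GitHub tutorial for
    # the identifiers that are actually available.
    from tira.rest_api_client import Client

    tira = Client()

    # Download the cached run of one dockerized approach on one benchmark;
    # the returned value is the local directory containing the run file.
    output_dir = tira.get_run_output(
        'ir-benchmarks/tira-ir-starter/BM25 (tira-ir-starter-pyterrier)',
        'antique-test-20230107-training',
    )
    print(output_dir)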

Access

Please cite this publication when referring to the dataset. To link the dataset, please use the dataset permalink [doi].

People

Publications