Query-biased Summarization Track
RIRES: Russian Information Retrieval Evaluation Seminar


Overview

The purpose of this track is to evaluate methods of query-biased summarization of text documents.

This track follows the seminar's standard evaluation procedure.

Collection

The source dataset is the union of the Narod.ru and the legal documents (2004) collections.

Participants should annotate only the documents listed in the task description.

Task Description for Participating Systems

Each participant is granted access to the Narod.ru and legal documents collections and to a list of tasks. Each task is a (query, document) pair. For each task, a participating system must generate an annotation of the document for the given query.

The task list is based on the set of queries used in the previous ROMIP workshops (2003-2006).

The expected result for each task is a plain-text snippet no longer than 300 characters.
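
To make the task's input/output contract concrete, here is a minimal baseline sketch of a query-biased summarizer that respects the 300-character limit. It is an illustration only, not a reference implementation: the function name, the naive sentence splitting, and the term-overlap scoring are all assumptions.

```python
import re

MAX_SNIPPET_LEN = 300  # hard limit from the task description


def make_snippet(query: str, document: str) -> str:
    """Greedy query-biased snippet builder (illustrative baseline only)."""
    # \w matches Unicode word characters in Python 3, so this also
    # works for Cyrillic text from the Narod.ru collection.
    terms = set(re.findall(r"\w+", query.lower()))
    # Naive sentence split; a real system would use a proper tokenizer.
    sentences = re.split(r"(?<=[.!?])\s+", document)
    # Rank sentences by how many distinct query terms they contain.
    ranked = sorted(
        sentences,
        key=lambda s: len(terms & set(re.findall(r"\w+", s.lower()))),
        reverse=True,
    )
    snippet = ""
    for sentence in ranked:
        sentence = sentence.strip()
        if not sentence:
            continue
        candidate = (snippet + " " + sentence).strip()
        if len(candidate) > MAX_SNIPPET_LEN:
            continue  # skip sentences that would overflow the limit
        snippet = candidate
    # Fall back to hard truncation if no whole sentence fits.
    return snippet or document[:MAX_SNIPPET_LEN].strip()


print(make_snippet(
    "information retrieval",
    "ROMIP runs several tracks. Each track evaluates information "
    "retrieval methods. Other material follows.",
))
```

A participating system would of course replace the overlap scoring with its own summarization method; the sketch only demonstrates the expected shape of the output.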

Evaluation Methodology

  • instructions for assessors:
    Assessors evaluate the relevance of a document to a query based on the annotation of the document (generated by a participating system), without seeing the document itself.
    For each result under evaluation, assessors receive a title (no longer than 100 characters) and an annotation (no longer than 300 characters).
  • relevance scales:
    • yes / probably yes / perhaps yes / no / impossible to evaluate
    • yes / no / impossible to evaluate
  • official metric:
    the concordance between relevance judgments obtained for full documents in the ad hoc track and judgments obtained for annotations in this track (a sketch of one possible computation follows this list)
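
The page does not spell out how concordance is computed. As one hedged illustration, a simple per-task agreement rate between the two sets of judgments could look like the following; the function name, data layout, and label strings are assumptions.

```python
def concordance(full_doc_judgments: dict, annotation_judgments: dict) -> float:
    """Fraction of shared tasks on which the two judgments agree.

    Both arguments map a (query_id, doc_id) task to a relevance label,
    e.g. "yes" / "no" / "impossible to evaluate".
    """
    shared = full_doc_judgments.keys() & annotation_judgments.keys()
    if not shared:
        return 0.0
    agree = sum(
        full_doc_judgments[task] == annotation_judgments[task]
        for task in shared
    )
    return agree / len(shared)


full = {("q1", "d1"): "yes", ("q1", "d2"): "no", ("q2", "d3"): "yes"}
anno = {("q1", "d1"): "yes", ("q1", "d2"): "yes", ("q2", "d3"): "yes"}
print(round(concordance(full, anno), 3))  # -> 0.667
```

An agreement rate is only the simplest choice here; a chance-corrected statistic such as Cohen's kappa could be substituted without changing the data layout.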

Data Formats