Report on INEX 2009

Abstract

INEX investigates focused retrieval from structured documents by providing large test collections of structured documents, uniform evaluation measures, and a forum for organizations to compare their results. This paper reports on the INEX 2009 evaluation campaign, which consisted of a wide range of tracks: Ad hoc, Book, Efficiency, Entity Ranking, Interactive, QA, Link the Wiki, and XML Mining. INEX is run entirely on volunteer effort by the IR research community: anyone with an idea and some time to spend can have a major impact.

References

  1. G. Demartini, T. Iofciu, and A. P. de Vries. Overview of the INEX 2009 entity ranking track. In Geva et al. [4].
  2. N. Fuhr, C. Klas, A. Schaefer, and P. Mutschke. Daffodil: An integrated desktop for supporting high-level search activities in federated digital libraries. In 6th European Conference on Research and Advanced Technology for Digital Libraries (ECDL), pages 597--612, 2002.
  3. S. Geva, J. Kamps, M. Lehtonen, R. Schenkel, J. A. Thom, and A. Trotman. Overview of the INEX 2009 ad hoc track. In Geva et al. [4].
  4. S. Geva, J. Kamps, and A. Trotman, editors. Focused Retrieval and Evaluation: 8th International Workshop of the Initiative for the Evaluation of XML Retrieval (INEX 2009), LNCS. Springer-Verlag, Berlin, Heidelberg, 2010.
  5. W.-C. Huang, S. Geva, and A. Trotman. Overview of the INEX 2009 Link the Wiki track. In Geva et al. [4].
  6. N. Jardine and C. J. van Rijsbergen. The use of hierarchical clustering in information retrieval. Information Storage and Retrieval, 7:217--240, 1971.
  7. G. Kazai, A. Doucet, M. Koolen, and M. Landoni. Overview of the INEX 2009 book track. In Geva et al. [4].
  8. G. Kazai, N. Milic-Frayling, and J. Costello. Towards methods for the collective gathering and quality control of relevance assessments. In Proceedings of the 32nd Annual International ACM SIGIR Conference. ACM Press, 2009.
  9. A. Louis and A. Nenkova. Performance confidence estimation for automatic summarization. In EACL, pages 541--548. Association for Computational Linguistics, 2009.
  10. V. Moriceau, E. SanJuan, X. Tannier, and P. Bellot. Overview of the INEX 2009 question answering track: A common task for QA, focused IR and automatic summarization systems. In Geva et al. [4].
  11. R. Nayak. XML data mining: Process and applications. In M. Song and Y.-F. Wu, editors, Handbook of Research on Text and Web Mining Technologies. Idea Group Inc., USA, 2008.
  12. R. Nayak, C. M. De Vries, S. Kutty, S. Geva, L. Denoyer, and P. Gallinari. Overview of the INEX 2009 XML mining track: Clustering and classification of XML documents. In Geva et al. [4].
  13. N. Pharo, R. Nordlie, N. Fuhr, T. Beckers, and K. N. Fachry. Overview of the INEX 2009 interactive track. In Geva et al. [4].
  14. R. Schenkel, F. M. Suchanek, and G. Kasneci. YAWN: A semantically annotated Wikipedia XML corpus. In 12. GI-Fachtagung für Datenbanksysteme in Business, Technologie und Web (BTW 2007), pages 277--291, 2007.
  15. R. Schenkel and M. Theobald. Overview of the INEX 2009 efficiency track. In Geva et al. [4].
  16. F. M. Suchanek, G. Kasneci, and G. Weikum. Yago: A core of semantic knowledge. In 16th International World Wide Web Conference (WWW 2007), New York, NY, USA, 2007. ACM Press.
  17. M. Theobald and R. Schenkel. Overview of the INEX 2008 efficiency track. In S. Geva, J. Kamps, and A. Trotman, editors, 7th INEX Workshop, volume 5631 of Lecture Notes in Computer Science, pages 179--191. Springer, 2009.
  18. E. Voorhees. The TREC question answering track. Natural Language Engineering, 7:361--378, 2001.
  19. E. Yilmaz, E. Kanoulas, and J. A. Aslam. A simple and efficient sampling method for estimating AP and NDCG. In Proceedings of the 31st Annual International ACM SIGIR Conference, pages 603--610. ACM, 2008.

Published in

ACM SIGIR Forum, Volume 44, Issue 1 (June 2010), 88 pages
ISSN: 0163-5840
DOI: 10.1145/1842890

Copyright © 2010 Authors

Publisher: Association for Computing Machinery, New York, NY, United States

Published: 18 August 2010

Article type: Review article
