DOI: 10.1145/3178876.3185998
Parabel: Partitioned Label Trees for Extreme Classification with Application to Dynamic Search Advertising

Published: 10 April 2018

ABSTRACT

This paper develops the Parabel algorithm for extreme multi-label learning, where the objective is to learn classifiers that can annotate each data point with the most relevant subset of labels from an extremely large label set. The state-of-the-art 1-vs-All algorithms, DiSMEC and PPDSparse, are the most accurate, but they learn and apply an independent linear classifier per label and can therefore take up to months for training and prediction. Consequently, they do not scale to large datasets with millions of labels. Parabel addresses both limitations by learning a balanced label hierarchy such that: (a) the 1-vs-All classifiers in the leaf nodes of the label hierarchy can be trained on a small subset of the training set, reducing training time to a few hours on a single core of a standard desktop; and (b) novel points can be classified by traversing the learned hierarchy in logarithmic time and applying the 1-vs-All classifiers present in just the reached leaf, reducing prediction time to a few milliseconds per test point. This allows Parabel to scale to tasks infeasible for DiSMEC and PPDSparse, such as predicting the subset of 7 million Bing queries that might lead to a click on a given ad-landing page for dynamic search advertising. Experiments on multiple benchmark datasets revealed that Parabel could be almost as accurate as PPDSparse and DiSMEC while being up to 1,000x faster at training and 40x-10,000x faster at prediction. Furthermore, Parabel was demonstrated to significantly improve dynamic search advertising on Bing, more than doubling ad recall and improving the click-through rate by 20%. Source code for Parabel can be downloaded from [1].
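The abstract's two key ideas, a balanced label tree and leaf-level 1-vs-All classifiers reached by a logarithmic-time traversal, can be illustrated with a minimal sketch. Everything below is illustrative rather than the published implementation: the `Node` class is hypothetical, the median split is a simplification of the paper's balanced spherical 2-means clustering over label representations, and centroid-based routing stands in for the learned node classifiers.

```python
import numpy as np

class Node:
    """A node of a (hypothetical) balanced binary label tree."""
    def __init__(self, labels, left=None, right=None, centroid=None):
        self.labels = labels        # label ids covered by this subtree
        self.left, self.right = left, right
        self.centroid = centroid    # mean label representation at this node

def build_tree(label_ids, label_vecs, max_leaf, rng):
    """Recursively partition labels into a balanced binary tree.

    Balance is enforced by projecting each label's representation onto
    the direction between two sampled seed labels and splitting at the
    median -- a simplification of the paper's balanced 2-means step,
    which guarantees leaves of bounded size and hence tree depth
    logarithmic in the number of labels.
    """
    centroid = label_vecs[label_ids].mean(axis=0)
    if len(label_ids) <= max_leaf:
        return Node(label_ids, centroid=centroid)
    seeds = rng.choice(label_ids, size=2, replace=False)
    direction = label_vecs[seeds[0]] - label_vecs[seeds[1]]
    order = np.argsort(label_vecs[label_ids] @ direction)
    half = len(label_ids) // 2
    left_ids = [label_ids[i] for i in order[:half]]
    right_ids = [label_ids[i] for i in order[half:]]
    return Node(label_ids,
                left=build_tree(left_ids, label_vecs, max_leaf, rng),
                right=build_tree(right_ids, label_vecs, max_leaf, rng),
                centroid=centroid)

def predict_leaf(node, x):
    """Route a test point to a single leaf in O(log L) node visits.

    Centroid similarity is a stand-in for the learned node classifiers;
    at the reached leaf, Parabel would apply only that leaf's 1-vs-All
    classifiers instead of all L of them.
    """
    while node.left is not None:
        if x @ node.left.centroid >= x @ node.right.centroid:
            node = node.left
        else:
            node = node.right
    return node.labels
```

Because each split halves the label set, a tree over L labels reaches a leaf of at most `max_leaf` labels after about log2(L / max_leaf) routing decisions, which is the source of the prediction-time savings the abstract describes.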


References

  1. R. Agrawal, A. Gupta, Y. Prabhu, and M. Varma. 2013. Multi-label Learning with Millions of Labels: Recommending Advertiser Bid Phrases for Web Pages. In WWW.
  2. R. Babbar and B. Schölkopf. 2017. DiSMEC: Distributed Sparse Machines for Extreme Multi-label Classification. In WSDM.
  3. S. Bengio, J. Weston, and D. Grangier. 2010. Label Embedding Trees for Large Multi-class Tasks. In NIPS. 163--171.
  4. A. Bertoni, M. Goldwurm, J. Lin, and F. Saccà. 2012. Size Constrained Distance Clustering: Separation Properties and Some Complexity Results. Vol. 115 (2012), 125--139.
  5. K. Bhatia, H. Jain, P. Kar, M. Varma, and P. Jain. 2015. Sparse Local Embeddings for Extreme Multi-label Classification. In NIPS.
  6. P. S. Bradley, K. P. Bennett, and A. Demiriz. 2000. Constrained K-Means Clustering. Technical Report MSR-TR-2000-65, Microsoft Research.
  7. Y. N. Chen and H. T. Lin. 2012. Feature-aware Label Space Dimension Reduction for Multi-label Classification. In NIPS.
  8. Y. Choi, M. Fontoura, E. Gabrilovich, V. Josifovski, M. R. Mediano, and B. Pang. 2010. Using landing pages for sponsored search ad selection. In WWW.
  9. M. Cissé, N. Usunier, T. Artières, and P. Gallinari. 2013. Robust Bloom Filters for Large MultiLabel Classification Tasks. In NIPS.
  10. J. Deng, S. Satheesh, A. C. Berg, and L. Fei-Fei. 2011. Fast and Balanced: Efficient Label Tree Learning for Large Scale Object Recognition. In NIPS. 567--575.
  11. R. E. Fan, K. W. Chang, C. J. Hsieh, X. R. Wang, and C. J. Lin. 2008. LIBLINEAR: A library for large linear classification. JMLR (2008).
  12. T. Gao and D. Koller. (n.d.). Discriminative Learning of Relaxed Hierarchy for Large-scale Visual Recognition. In ICCV. 2072--2079.
  13. D. Hsu, S. Kakade, J. Langford, and T. Zhang. 2009. Multi-Label Prediction via Compressed Sensing. In NIPS.
  14. P. S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. P. Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In CIKM.
  15. H. Jain, Y. Prabhu, and M. Varma. 2016. Extreme Multi-label Loss Functions for Recommendation, Tagging, Ranking & Other Missing Label Applications. In KDD.
  16. K. Jasinska, K. Dembczynski, R. Busa-Fekete, K. Pfannschmidt, T. Klerx, and E. Hüllermeier. 2016. Extreme F-measure Maximization Using Sparse Probability Estimates. In ICML. 1435--1444.
  17. Y. Jernite, A. Choromanska, and D. Sontag. 2017. Simultaneous Learning of Trees and Representations for Extreme Classification and Density Estimation. In ICML.
  18. K. S. Jones, S. Walker, and S. E. Robertson. 2000. A probabilistic model of information retrieval: development and comparative experiments. Inf. Process. Manage. (2000).
  19. Z. Lin, G. Ding, M. Hu, and J. Wang. 2014. Multi-label Classification via Feature-aware Implicit Label Space Encoding. In ICML.
  20. J. Liu, W. Chang, Y. Wu, and Y. Yang. 2017. Deep Learning for Extreme Multi-label Text Classification. In SIGIR. 115--124.
  21. C. D. Manning, P. Raghavan, and H. Schütze. 2008. Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA.
  22. J. McAuley and J. Leskovec. 2013. Hidden factors and hidden topics: understanding rating dimensions with review text. In RecSys.
  23. E. L. Mencia and J. Fürnkranz. 2008. Efficient pairwise multilabel classification for large-scale problems in the legal domain. In SIGIR.
  24. P. Mineiro and N. Karampatziakis. 2015. Fast Label Embeddings for Extremely Large Output Spaces. In ECML.
  25. A. Niculescu-Mizil and E. Abbasnejad. 2017. Label Filters for Large Scale Multilabel Classification. In International Conference on Artificial Intelligence and Statistics. 1448--1457.
  26. Y. Prabhu, A. Kag, S. Gopinath, K. Dahiya, S. Harsola, R. Agrawal, and M. Varma. 2018. Extreme multi-label learning with label features for warm-start tagging, ranking and recommendation. In WSDM.
  27. Y. Prabhu and M. Varma. 2014. FastXML: A fast, accurate and stable tree-classifier for extreme multi-label learning. In KDD.
  28. S. Ravi, A. Z. Broder, E. Gabrilovich, V. Josifovski, S. Pandey, and B. Pang. 2010. Automatic generation of bid phrases for online advertising. In WSDM.
  29. Y. Shen, X. He, J. Gao, L. Deng, and G. Mesnil. 2014. Learning semantic representations using convolutional neural networks for web search. In WWW.
  30. S. Si, H. Zhang, S. S. Keerthi, D. Mahajan, I. S. Dhillon, and C. J. Hsieh. 2017. Gradient Boosted Decision Trees for High Dimensional Sparse Output. In ICML. 3182--3190.
  31. Y. Tagami. 2017. AnnexML: Approximate Nearest Neighbor Search for Extreme Multi-label Classification. In KDD. 455--464.
  32. G. Tsoumakas, I. Katakis, and I. Vlahavas. 2008. Effective and efficient multilabel classification in domains with large number of labels. In Proc. ECML/PKDD 2008 Workshop on Mining Multidimensional Data.
  33. X. Wei and W. B. Croft. 2006. LDA-based document models for ad-hoc retrieval. In SIGIR.
  34. J. Weston, S. Bengio, and N. Usunier. 2011. Wsabie: Scaling Up To Large Vocabulary Image Annotation. In IJCAI.
  35. J. Weston, A. Makadia, and H. Yee. 2013. Label Partitioning For Sublinear Ranking. In ICML.
  36. C. Xu, D. Tao, and C. Xu. 2016. Robust Extreme Multi-label Learning. In KDD. 1275--1284.
  37. I. E. H. Yen, X. Huang, W. Dai, P. Ravikumar, I. Dhillon, and E. Xing. 2017. PPDsparse: A Parallel Primal-Dual Sparse Method for Extreme Classification. In KDD. 545--553.
  38. I. E. H. Yen, X. Huang, P. Ravikumar, K. Zhong, and I. S. Dhillon. 2016. PD-Sparse: A primal and dual sparse approach to extreme multiclass and multilabel classification. In ICML.
  39. W. T. Yih, J. Goodman, and V. R. Carvalho. 2006. Finding advertising keywords on web pages. In WWW.
  40. H. F. Yu, P. Jain, P. Kar, and I. S. Dhillon. 2014. Large-scale Multi-label Learning with Missing Labels. In ICML.
  41. W. Zhang, D. Wang, G. Xue, and H. Zha. 2012. Advertising Keywords Recommendation for Short-Text Web Pages Using Wikipedia. ACM TIST (2012).
  42. W. Zhang, L. Wang, J. Yan, X. Wang, and H. Zha. 2017. Deep Extreme Multi-label Learning. CoRR (2017).

Published in: WWW '18: Proceedings of the 2018 World Wide Web Conference, April 2018, 2000 pages. ISBN: 9781450356398. Copyright © 2018 ACM.

Publisher: International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland.

        Acceptance Rates

WWW '18 paper acceptance rate: 170 of 1,155 submissions (15%). Overall acceptance rate: 1,899 of 8,196 submissions (23%).
