ABSTRACT
This paper develops the Parabel algorithm for extreme multi-label learning, where the objective is to learn classifiers that can annotate each data point with the most relevant subset of labels from an extremely large label set. The state-of-the-art 1-vs-All based DiSMEC and PPDSparse algorithms are the most accurate but can take up to months for training and prediction, as they learn and apply an independent linear classifier per label. Consequently, they do not scale to large datasets with millions of labels. Parabel addresses both limitations by learning a balanced label hierarchy such that: (a) the 1-vs-All classifiers in the leaf nodes of the label hierarchy can be trained on a small subset of the training set, reducing the training time to a few hours on a single core of a standard desktop; and (b) novel points can be classified by traversing the learned hierarchy in logarithmic time and applying only the 1-vs-All classifiers present in the visited leaves, reducing the prediction time to a few milliseconds per test point. This allows Parabel to scale to tasks considered infeasible for DiSMEC and PPDSparse, such as predicting the subset of 7 million Bing queries that might lead to a click on a given ad-landing page for dynamic search advertising. Experiments on multiple benchmark datasets revealed that Parabel could be almost as accurate as PPDSparse and DiSMEC while being up to 1,000x faster at training and 40x-10,000x faster at prediction. Furthermore, Parabel was demonstrated to significantly improve dynamic search advertising on Bing, more than doubling the ad recall and improving the click-through rate by 20%. Source code for Parabel can be downloaded from [1].
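The mechanism summarized in the abstract, recursively partitioning the labels into a balanced binary tree and then shortlisting labels for a test point by descending that tree, can be illustrated with the minimal Python sketch below. This is not the authors' implementation: the label representation (normalized mean of each label's positive training points), the simple balanced 2-means split, and all names (`label_representations`, `balanced_two_means`, `build_tree`, `shortlist`) are assumptions made purely for exposition.

```python
# Minimal sketch of a Parabel-style balanced label tree: build by recursive
# balanced 2-means over label representations, predict by greedy traversal.
# This is an illustrative approximation, not the published algorithm.
import numpy as np

class Node:
    def __init__(self, labels, centroid):
        self.labels = labels        # label indices reachable from this node
        self.centroid = centroid    # mean of the label representations
        self.left = None
        self.right = None

def label_representations(X, Y):
    """Represent each label by the normalized mean of its positive points.
    X: (n_points, n_features) features, Y: (n_points, n_labels) 0/1 matrix."""
    reps = Y.T @ X                                        # sum of positives per label
    counts = np.maximum(Y.sum(axis=0), 1).reshape(-1, 1)  # avoid division by zero
    reps = reps / counts
    norms = np.maximum(np.linalg.norm(reps, axis=1, keepdims=True), 1e-12)
    return reps / norms

def balanced_two_means(reps, iters=10):
    """Split label representations into two equal-sized clusters."""
    m = reps.shape[0]
    idx = np.random.permutation(m)
    c0, c1 = reps[idx[0]], reps[idx[1]]
    for _ in range(iters):
        # Rank labels by how much closer they are to c0 than to c1, then
        # assign the top half to cluster 0: this enforces the balance.
        gain = reps @ c0 - reps @ c1
        order = np.argsort(-gain)
        left, right = order[: m // 2], order[m // 2:]
        c0 = reps[left].mean(axis=0)
        c1 = reps[right].mean(axis=0)
        c0 /= max(np.linalg.norm(c0), 1e-12)
        c1 /= max(np.linalg.norm(c1), 1e-12)
    return left, right

def build_tree(labels, reps, max_leaf=100):
    """Recursively partition the label set until leaves hold <= max_leaf labels."""
    node = Node(labels, reps[labels].mean(axis=0))
    if len(labels) <= max_leaf:
        return node                 # a leaf; Parabel trains 1-vs-All classifiers here
    l, r = balanced_two_means(reps[labels])
    node.left = build_tree(labels[l], reps, max_leaf)
    node.right = build_tree(labels[r], reps, max_leaf)
    return node

def shortlist(x, node):
    """Greedy root-to-leaf traversal: O(log L) node visits for L labels.
    Returns the reached leaf's label set as the candidate shortlist."""
    while node.left is not None:
        sim_l = x @ node.left.centroid / max(np.linalg.norm(node.left.centroid), 1e-12)
        sim_r = x @ node.right.centroid / max(np.linalg.norm(node.right.centroid), 1e-12)
        node = node.left if sim_l >= sim_r else node.right
    return node.labels
```

In this sketch, `reps = label_representations(X, Y)`, `tree = build_tree(np.arange(Y.shape[1]), reps)`, and `shortlist(x, tree)` visit only O(log L) nodes and return the label set of a single leaf. The actual Parabel algorithm additionally learns probabilistic classifiers at the internal nodes to route points, explores several root-to-leaf paths with a beam search, and ranks the shortlisted labels using the 1-vs-All classifiers stored in the reached leaves.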
Published in the ACM Digital Library on April 23, 2018. Replaced on July 20, with the approval of the program chairs, because an older draft was published in error.
REFERENCES
- R. Agrawal, A. Gupta, Y. Prabhu, and M. Varma. 2013. Multi-label Learning with Millions of Labels: Recommending Advertiser Bid Phrases for Web Pages. In WWW.
- R. Babbar and B. Schölkopf. 2017. DiSMEC: Distributed Sparse Machines for Extreme Multi-label Classification. In WSDM.
- S. Bengio, J. Weston, and D. Grangier. 2010. Label Embedding Trees for Large Multi-class Tasks. In NIPS, 163--171.
- A. Bertoni, M. Goldwurm, J. Lin, and F. Saccà. 2012. Size Constrained Distance Clustering: Separation Properties and Some Complexity Results. Vol. 115 (2012), 125--139.
- K. Bhatia, H. Jain, P. Kar, M. Varma, and P. Jain. 2015. Sparse Local Embeddings for Extreme Multi-label Classification. In NIPS.
- P. S. Bradley, K. P. Bennett, and A. Demiriz. 2000. Constrained K-Means Clustering. Technical Report MSR-TR-2000-65, Microsoft Research.
- Y. N. Chen and H. T. Lin. 2012. Feature-aware Label Space Dimension Reduction for Multi-label Classification. In NIPS.
- Y. Choi, M. Fontoura, E. Gabrilovich, V. Josifovski, M. R. Mediano, and B. Pang. 2010. Using landing pages for sponsored search ad selection. In WWW.
- M. Cissé, N. Usunier, T. Artières, and P. Gallinari. 2013. Robust Bloom Filters for Large MultiLabel Classification Tasks. In NIPS.
- J. Deng, S. Satheesh, A. C. Berg, and L. Fei-Fei. 2011. Fast and Balanced: Efficient Label Tree Learning for Large Scale Object Recognition. In NIPS, 567--575.
- R. E. Fan, K. W. Chang, C. J. Hsieh, X. R. Wang, and C. J. Lin. 2008. LIBLINEAR: A library for large linear classification. JMLR (2008).
- T. Gao and D. Koller. 2011. Discriminative Learning of Relaxed Hierarchy for Large-scale Visual Recognition. In ICCV, 2072--2079.
- D. Hsu, S. Kakade, J. Langford, and T. Zhang. 2009. Multi-Label Prediction via Compressed Sensing. In NIPS.
- P. S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. P. Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In CIKM.
- H. Jain, Y. Prabhu, and M. Varma. 2016. Extreme Multi-label Loss Functions for Recommendation, Tagging, Ranking & Other Missing Label Applications. In KDD.
- K. Jasinska, K. Dembczynski, R. Busa-Fekete, K. Pfannschmidt, T. Klerx, and E. Hüllermeier. 2016. Extreme F-measure Maximization Using Sparse Probability Estimates. In ICML, 1435--1444.
- Y. Jernite, A. Choromanska, and D. Sontag. 2017. Simultaneous Learning of Trees and Representations for Extreme Classification and Density Estimation. In ICML.
- K. S. Jones, S. Walker, and S. E. Robertson. 2000. A probabilistic model of information retrieval: development and comparative experiments. Inf. Process. Manage. (2000).
- Z. Lin, G. Ding, M. Hu, and J. Wang. 2014. Multi-label Classification via Feature-aware Implicit Label Space Encoding. In ICML.
- J. Liu, W. Chang, Y. Wu, and Y. Yang. 2017. Deep Learning for Extreme Multi-label Text Classification. In SIGIR, 115--124.
- C. D. Manning, P. Raghavan, and H. Schütze. 2008. Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA.
- J. McAuley and J. Leskovec. 2013. Hidden factors and hidden topics: understanding rating dimensions with review text. In RecSys.
- E. L. Mencia and J. Fürnkranz. 2008. Efficient pairwise multilabel classification for large-scale problems in the legal domain. In SIGIR.
- P. Mineiro and N. Karampatziakis. 2015. Fast Label Embeddings for Extremely Large Output Spaces. In ECML.
- A. Niculescu-Mizil and E. Abbasnejad. 2017. Label Filters for Large Scale Multilabel Classification. In AISTATS, 1448--1457.
- Y. Prabhu, A. Kag, S. Gopinath, K. Dahiya, S. Harsola, R. Agrawal, and M. Varma. 2018. Extreme multi-label learning with label features for warm-start tagging, ranking and recommendation. In WSDM.
- Y. Prabhu and M. Varma. 2014. FastXML: A fast, accurate and stable tree-classifier for extreme multi-label learning. In KDD.
- S. Ravi, A. Z. Broder, E. Gabrilovich, V. Josifovski, S. Pandey, and B. Pang. 2010. Automatic generation of bid phrases for online advertising. In WSDM.
- Y. Shen, X. He, J. Gao, L. Deng, and G. Mesnil. 2014. Learning semantic representations using convolutional neural networks for web search. In WWW.
- S. Si, H. Zhang, S. S. Keerthi, D. Mahajan, I. S. Dhillon, and C. J. Hsieh. 2017. Gradient Boosted Decision Trees for High Dimensional Sparse Output. In ICML, 3182--3190.
- Y. Tagami. 2017. AnnexML: Approximate Nearest Neighbor Search for Extreme Multi-label Classification. In KDD, 455--464.
- G. Tsoumakas, I. Katakis, and I. Vlahavas. 2008. Effective and efficient multilabel classification in domains with large number of labels. In Proc. ECML/PKDD 2008 Workshop on Mining Multidimensional Data.
- X. Wei and W. B. Croft. 2006. LDA-based document models for ad-hoc retrieval. In SIGIR.
- J. Weston, S. Bengio, and N. Usunier. 2011. Wsabie: Scaling Up To Large Vocabulary Image Annotation. In IJCAI.
- J. Weston, A. Makadia, and H. Yee. 2013. Label Partitioning For Sublinear Ranking. In ICML.
- C. Xu, D. Tao, and C. Xu. 2016. Robust Extreme Multi-label Learning. In KDD, 1275--1284.
- I. E. H. Yen, X. Huang, W. Dai, P. Ravikumar, I. Dhillon, and E. Xing. 2017. PPDsparse: A Parallel Primal-Dual Sparse Method for Extreme Classification. In KDD, 545--553.
- I. E. H. Yen, X. Huang, P. Ravikumar, K. Zhong, and I. S. Dhillon. 2016. PD-Sparse: A primal and dual sparse approach to extreme multiclass and multilabel classification. In ICML.
- W. T. Yih, J. Goodman, and V. R. Carvalho. 2006. Finding advertising keywords on web pages. In WWW.
- H. F. Yu, P. Jain, P. Kar, and I. S. Dhillon. 2014. Large-scale Multi-label Learning with Missing Labels. In ICML.
- W. Zhang, D. Wang, G. Xue, and H. Zha. 2012. Advertising Keywords Recommendation for Short-Text Web Pages Using Wikipedia. ACM TIST (2012).
- W. Zhang, L. Wang, J. Yan, X. Wang, and H. Zha. 2017. Deep Extreme Multi-label Learning. CoRR (2017).