Abstract
We introduce a new approach for designing computationally efficient learning algorithms that are tolerant to noise, and we demonstrate its effectiveness by designing algorithms with improved noise-tolerance guarantees for learning linear separators. We consider both the malicious noise model of Valiant [1985] and Kearns and Li [1988] and the adversarial label noise model of Kearns, Schapire, and Sellie [1994]. For malicious noise, where the adversary can corrupt both the label and the features, we provide a polynomial-time algorithm for learning linear separators in ℝ^d under isotropic log-concave distributions that can tolerate a nearly information-theoretically optimal noise rate of η = Ω(ϵ), improving on the Ω(ϵ^3/log^2(d/ϵ)) noise tolerance of Klivans et al. [2009a]. In the case that the distribution is uniform over the unit ball, this improves on the Ω(ϵ/d^{1/4}) noise tolerance of Kalai et al. [2005] and the Ω(ϵ^2/log(d/ϵ)) noise tolerance of Klivans et al. [2009a]. For the adversarial label noise model, where the distribution over the feature vectors is unchanged and the overall probability of a noisy label is constrained to be at most η, we also give a polynomial-time algorithm for learning linear separators in ℝ^d under isotropic log-concave distributions that can handle a noise rate of η = Ω(ϵ). In the case of the uniform distribution, this improves over the results of Kalai et al. [2005], which either required runtime super-exponential in 1/ϵ (ours is polynomial in 1/ϵ) or tolerated less noise. Our algorithms are also efficient in the active learning setting, where the learning algorithm receives the label of an example only when it asks for it. We show that, in this model, our algorithms achieve a label complexity whose dependence on the error parameter ϵ is polylogarithmic (and thus exponentially better than that of any passive algorithm). This provides the first polynomial-time active learning algorithm for learning linear separators in the presence of malicious noise or adversarial label noise. Our algorithms and analysis combine several ingredients, including aggressive localization, minimization of a progressively rescaled hinge loss, and a novel localized and soft outlier-removal procedure. We use localization techniques (previously used to obtain better sample complexity results) to obtain better noise-tolerant polynomial-time algorithms.
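To make the abstract's recipe concrete (localize to a band around the current hypothesis, query labels only there, and minimize a hinge loss whose scale shrinks with the band), here is a minimal Python/NumPy sketch in that spirit. It is an illustration, not the authors' exact algorithm: the band schedule b_k, the rescaling parameter τ_k, the subgradient solver, and the synthetic Gaussian data are all assumptions made for this example, and the soft outlier-removal step needed for malicious noise is omitted.

```python
# Illustrative sketch of margin-based localization with a progressively
# rescaled hinge loss. Constants and solver are hypothetical choices.
import numpy as np

rng = np.random.default_rng(0)

def minimize_rescaled_hinge(X, y, w0, tau, steps=300, lr=0.05):
    """Subgradient descent on the tau-rescaled hinge loss
    mean_i max(0, 1 - y_i * <w, x_i> / tau), projected onto the unit ball."""
    w = w0.copy()
    for _ in range(steps):
        active = y * (X @ w) < tau                # points with nonzero loss
        if active.any():
            g = -(y[active, None] * X[active]).mean(axis=0) / tau
            w = w - lr * tau * g                  # step scaled with tau for stability
        norm = np.linalg.norm(w)
        if norm > 1.0:
            w = w / norm                          # project back onto the unit ball
    return w / max(np.linalg.norm(w), 1e-12)

def localized_active_learner(X, label_oracle, rounds=8, m=400):
    """Round k queries labels only inside the band |<w, x>| <= b_k around the
    current hypothesis; b_k and tau_k follow a hypothetical shrinking schedule."""
    idx = rng.choice(len(X), size=m, replace=False)
    w = (label_oracle(idx)[:, None] * X[idx]).mean(axis=0)  # crude warm start
    w = w / np.linalg.norm(w)
    for k in range(1, rounds + 1):
        b_k = 2.0 ** (1 - k)                      # geometrically shrinking band
        tau_k = b_k / 4.0                         # hinge scale shrinks with the band
        band = np.flatnonzero(np.abs(X @ w) <= b_k)
        if len(band) == 0:
            break
        q = rng.choice(band, size=min(m, len(band)), replace=False)
        w = minimize_rescaled_hinge(X[q], label_oracle(q), w, tau_k)
    return w

# Synthetic check: isotropic Gaussian marginal (a log-concave distribution),
# with the labels of a target halfspace flipped independently at rate eta
# (a simple instance within the at-most-eta noisy-label budget).
d, n, eta = 20, 60_000, 0.02
X = rng.standard_normal((n, d))
w_star = np.eye(d)[0]
y_true = np.sign(X @ w_star)
y_noisy = np.where(rng.random(n) < eta, -y_true, y_true)
w_hat = localized_active_learner(X, lambda idx: y_noisy[idx])
print("disagreement with target:", np.mean(np.sign(X @ w_hat) != y_true))
```

Note how the label oracle is consulted only for points inside the current band, which is the intuition behind the polylogarithmic label complexity the abstract describes: the bands shrink geometrically, so each round needs only a fixed-size labeled sample at a progressively finer scale.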
References
- M. Anthony and P. L. Bartlett. 1999. Neural Network Learning: Theoretical Foundations. Cambridge University Press.
- D. Applegate and R. Kannan. 1991. Sampling and integration of near log-concave functions. In STOC.
- S. Arora, L. Babai, J. Stern, and Z. Sweedyk. 1993. The hardness of approximate optima in lattices, codes, and systems of linear equations. In FOCS.
- P. Awasthi, M.-F. Balcan, and P. M. Long. 2014. The power of localization for efficiently learning linear separators with noise. In STOC. 449--458. See also arXiv:1307.8371v7.
- P. Awasthi, M.-F. Balcan, N. Haghtalab, and R. Urner. 2015. Efficient learning of linear separators under bounded noise. In COLT.
- P. Awasthi, M.-F. Balcan, N. Haghtalab, and H. Zhang. 2016. Learning and 1-bit compressed sensing under asymmetric noise. In COLT.
- P. Awasthi, A. Blum, and O. Sheffet. 2010. Improved guarantees for agnostic learning of disjunctions. In COLT.
- M.-F. Balcan, A. Beygelzimer, and J. Langford. 2006. Agnostic active learning. In ICML.
- M.-F. Balcan, A. Broder, and T. Zhang. 2007. Margin based active learning. In COLT.
- M.-F. Balcan and V. Feldman. 2013. Statistical active learning algorithms. In NIPS.
- M.-F. Balcan and S. Hanneke. 2012. Robust interactive learning. In COLT.
- M.-F. Balcan, S. Hanneke, and J. Wortman. 2008. The true sample complexity of active learning. In COLT.
- M.-F. Balcan and P. M. Long. 2013. Active and passive learning of linear separators under log-concave distributions. In COLT.
- P. L. Bartlett, O. Bousquet, and S. Mendelson. 2005. Local Rademacher complexities. Annals of Statistics 33, 4 (2005), 1497--1537.
- A. Beygelzimer, D. Hsu, J. Langford, and T. Zhang. 2010. Agnostic active learning without constraints. In NIPS.
- D. Bienstock and A. Michalka. 2014. Polynomial solvability of variants of the trust-region subproblem. In SODA.
- A. Birnbaum and S. Shalev-Shwartz. 2012. Learning halfspaces with the zero-one loss: Time-accuracy tradeoffs. In NIPS.
- A. Blum, A. Frieze, R. Kannan, and S. Vempala. 1997. A polynomial time algorithm for learning noisy linear threshold functions. Algorithmica 22, 1/2 (1997), 35--52.
- A. Blum, M. L. Furst, M. J. Kearns, and R. J. Lipton. 1994. Cryptographic primitives based on hard learning problems. In CRYPTO.
- S. Boucheron, O. Bousquet, and G. Lugosi. 2005. Theory of classification: A survey of recent advances. ESAIM: Probability and Statistics 9 (2005), 323--375.
- N. H. Bshouty, Y. Li, and P. M. Long. 2009. Using the doubling dimension to analyze the generalization of learning algorithms. JCSS 75, 6 (2009), 323--335.
- T. Bylander. 1994. Learning linear threshold functions in the presence of classification noise. In COLT.
- R. Castro and R. Nowak. 2007. Minimax bounds for active learning. In COLT.
- G. Cavallanti, N. Cesa-Bianchi, and C. Gentile. 2011. Learning noisy linear classifiers via adaptive and selective sampling. Machine Learning 83, 1 (2011), 71--102.
- D. Cohn, L. Atlas, and R. Ladner. 1994. Improving generalization with active learning. Machine Learning 15, 2 (1994).
- N. Cristianini and J. Shawe-Taylor. 2000. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press.
- A. Daniely. 2015. A PTAS for agnostically learning halfspaces. In COLT. See also arXiv:1410.7050.
- S. Dasgupta. 2005. Coarse sample complexity bounds for active learning. In NIPS.
- S. Dasgupta. 2011. Active learning. In Encyclopedia of Machine Learning.
- S. Dasgupta, D. J. Hsu, and C. Monteleoni. 2007. A general agnostic active learning algorithm. In NIPS.
- O. Dekel, C. Gentile, and K. Sridharan. 2012. Selective sampling and active learning from single and multiple teachers. JMLR 13 (2012), 2655--2697.
- I. Diakonikolas, G. Kamath, D. Kane, J. Li, A. Moitra, and A. Stewart. 2016. Robust estimators in high dimensions without the computational intractability. arXiv e-prints (2016).
- V. Feldman, P. Gopalan, S. Khot, and A. Ponnuswami. 2006. New results for learning noisy parities and halfspaces. In FOCS. 563--576.
- Y. Freund, H. S. Seung, E. Shamir, and N. Tishby. 1997. Selective sampling using the query by committee algorithm. Machine Learning 28, 2--3 (1997), 133--168.
- M. R. Garey and D. S. Johnson. 1990. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co.
- A. Gonen, S. Sabato, and S. Shalev-Shwartz. 2013. Efficient pool-based active learning of halfspaces. In ICML.
- A. Gupta, M. Hardt, A. Roth, and J. Ullman. 2011. Privately releasing conjunctions and the statistical query barrier. In STOC.
- V. Guruswami and P. Raghavendra. 2006. Hardness of learning halfspaces with noise. In FOCS.
- V. Guruswami and P. Raghavendra. 2009. Hardness of learning halfspaces with noise. SIAM Journal on Computing 39, 2 (2009), 742--765.
- S. Hanneke. 2007. A bound on the label complexity of agnostic active learning. In ICML.
- S. Hanneke. 2011. Rates of convergence in active learning. The Annals of Statistics 39, 1 (2011), 333--361.
- S. Hanneke. 2014. Theory of Disagreement-Based Active Learning. Foundations and Trends in Machine Learning. Now Publishers.
- S. Hanneke, V. Kanade, and L. Yang. 2015. Learning with a drifting target concept. In ALT.
- D. S. Johnson and F. Preparata. 1978. The densest hemisphere problem. Theoretical Computer Science 6, 1 (1978), 93--107.
- A. T. Kalai, A. R. Klivans, Y. Mansour, and R. A. Servedio. 2005. Agnostically learning halfspaces. In FOCS.
- M. Kearns and M. Li. 1988. Learning in the presence of malicious errors. In STOC.
- M. Kearns, R. Schapire, and L. Sellie. 1994. Toward efficient agnostic learning. Machine Learning 17, 2--3 (1994), 115--141.
- M. Kearns and U. Vazirani. 1994. An Introduction to Computational Learning Theory. MIT Press, Cambridge, MA.
- A. R. Klivans, P. M. Long, and R. A. Servedio. 2009a. Learning halfspaces with malicious noise. JMLR 10 (2009), 2715--2740.
- A. R. Klivans, P. M. Long, and A. Tang. 2009b. Baum's algorithm learns intersections of halfspaces with respect to log-concave distributions. In RANDOM.
- V. Koltchinskii. 2010. Rademacher complexities and bounding the excess risk in active learning. JMLR 11 (2010), 2457--2485.
- P. M. Long and R. A. Servedio. 2006. Attribute-efficient learning of decision lists and linear threshold functions under unconcentrated distributions. In NIPS.
- P. M. Long and R. A. Servedio. 2011. Learning large-margin halfspaces with more malicious noise. In NIPS.
- L. Lovász and S. Vempala. 2007. The geometry of logconcave functions and sampling algorithms. Random Structures and Algorithms 30, 3 (2007), 307--358.
- C. Monteleoni. 2006. Efficient algorithms for general active learning. In COLT.
- D. Pollard. 2011. Convergence of Stochastic Processes. Springer-Verlag.
- M. Raginsky and A. Rakhlin. 2011. Lower bounds for passive and active learning. In NIPS.
- O. Regev. 2005. On lattices, learning with errors, random linear codes, and cryptography. In STOC.
- R. A. Servedio. 2001. Smooth boosting and learning with malicious noise. In COLT.
- J. Sturm and S. Zhang. 2003. On cones of nonnegative quadratic functions. Mathematics of Operations Research 28 (2003), 246--267.
- L. G. Valiant. 1985. Learning disjunctions of conjunctions. In IJCAI.
- V. Vapnik. 1998. Statistical Learning Theory. Wiley-Interscience.
- S. Vempala. 2010. A random-sampling-based algorithm for learning intersections of halfspaces. JACM 57, 6 (2010).
- L. Wang. 2011. Smoothness, disagreement coefficient, and the label complexity of agnostic active learning. JMLR 12 (2011), 2269--2292.
- C. Zhang and K. Chaudhuri. 2014. Beyond disagreement-based agnostic active learning. In NIPS.
- T. Zhang. 2006. Information theoretical upper and lower bounds for statistical estimation. IEEE Transactions on Information Theory 52, 4 (2006), 1307--1321.