
The Power of Localization for Efficiently Learning Linear Separators with Noise

Published: 9 January 2017

Abstract

We introduce a new approach for designing computationally efficient learning algorithms that are tolerant to noise, and we demonstrate its effectiveness by designing algorithms with improved noise-tolerance guarantees for learning linear separators. We consider both the malicious noise model of Valiant [1985] and Kearns and Li [1988] and the adversarial label noise model of Kearns, Schapire, and Sellie [1994].

For malicious noise, where the adversary can corrupt both the label and the features, we provide a polynomial-time algorithm for learning linear separators in ℝ^d under isotropic log-concave distributions that can tolerate a nearly information-theoretically optimal noise rate of η = Ω(ϵ), improving on the Ω(ϵ^3/log^2(d/ϵ)) noise tolerance of Klivans et al. [2009a]. In the case that the distribution is uniform over the unit ball, this improves on the Ω(ϵ/d^{1/4}) noise tolerance of Kalai et al. [2005] and the Ω(ϵ^2/log(d/ϵ)) noise tolerance of Klivans et al. [2009a]. For the adversarial label noise model, where the distribution over the feature vectors is unchanged and the overall probability of a noisy label is constrained to be at most η, we also give a polynomial-time algorithm for learning linear separators in ℝ^d under isotropic log-concave distributions that can handle a noise rate of η = Ω(ϵ). In the case of the uniform distribution, this improves over the results of Kalai et al. [2005], which either required runtime super-exponential in 1/ϵ (ours is polynomial in 1/ϵ) or tolerated less noise.

Our algorithms are also efficient in the active learning setting, where learning algorithms only receive the classifications of examples when they ask for them. We show that, in this model, our algorithms achieve a label complexity whose dependence on the error parameter ϵ is polylogarithmic (and thus exponentially better than that of any passive algorithm). This provides the first polynomial-time active learning algorithm for learning linear separators in the presence of malicious noise or adversarial label noise. Our algorithms and analysis combine several ingredients, including aggressive localization, minimization of a progressively rescaled hinge loss, and a novel localized and soft outlier removal procedure. We use localization techniques (previously used for obtaining better sample complexity results) to obtain better noise-tolerant polynomial-time algorithms.
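
As a concrete illustration of the ingredients named above, the following is a minimal Python sketch of a margin-based localization loop in the spirit the abstract describes: each round draws unlabeled examples, requests labels only for points inside a band around the current hypothesis' decision boundary, minimizes a hinge loss rescaled to the band width, and then halves the band. The oracle interfaces sample_oracle and label_oracle, the halving schedule, the gradient-descent minimizer, and all constants are assumptions made for illustration; the paper's actual algorithm (including its soft outlier removal step for the malicious noise case) sets these quantities via its analysis.

# A minimal sketch (assumed interfaces and constants, not the authors' exact
# procedure) of margin-based localization with a progressively rescaled hinge loss.
import numpy as np

def minimize_rescaled_hinge(X, y, tau, w0, lr=0.05, steps=500):
    """Approximately minimize the tau-rescaled hinge loss
    (1/n) * sum_i max(0, 1 - y_i <w, x_i> / tau) over unit-norm w
    by projected gradient descent (illustrative optimizer choice)."""
    w = w0.copy()
    n = len(y)
    for _ in range(steps):
        margins = y * (X @ w) / tau
        active = margins < 1                       # points with nonzero hinge loss
        if active.any():
            grad = -(y[active, None] * X[active]).sum(axis=0) / (tau * n)
            w = w - lr * grad
            w = w / np.linalg.norm(w)              # project back to the unit sphere
    return w

def localized_active_learner(sample_oracle, label_oracle, d, rounds=10, batch=2000):
    """Margin-based active learning sketch: labels are requested only for
    points inside a band around the current separator, and the band (and
    the hinge-loss scale) is halved each round."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal(d)
    w = w / np.linalg.norm(w)
    band = 1.0                                     # initial band width (placeholder)
    for _ in range(rounds):
        X = sample_oracle(batch)                   # unlabeled draws, shape (batch, d)
        X_band = X[np.abs(X @ w) <= band]          # localize to the current band
        y_band = np.array([label_oracle(x) for x in X_band])  # labels requested only here
        if len(y_band) > 0:
            w = minimize_rescaled_hinge(X_band, y_band, tau=band / 2, w0=w)
        band = band / 2                            # aggressive localization
    return w

The intent of shrinking the hinge-loss scale together with the band, as we read the abstract, is that the loss remains a faithful proxy for the error rate within the region where the current hypothesis is still uncertain; the exact schedule above is a placeholder.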

References

1. M. Anthony and P. L. Bartlett. 1999. Neural Network Learning: Theoretical Foundations. Cambridge University Press.
2. D. Applegate and R. Kannan. 1991. Sampling and integration of near log-concave functions. In STOC.
3. S. Arora, L. Babai, J. Stern, and Z. Sweedyk. 1993. The hardness of approximate optima in lattices, codes, and systems of linear equations. In FOCS.
4. P. Awasthi, M.-F. Balcan, and P. M. Long. 2014. The power of localization for efficiently learning linear separators with noise. In STOC. 449--458. See also arXiv:1307.8371v7.
5. P. Awasthi, M.-F. Balcan, N. Haghtalab, and R. Urner. 2015. Efficient learning of linear separators under bounded noise. In COLT.
6. P. Awasthi, M.-F. Balcan, N. Haghtalab, and H. Zhang. 2016. Learning and 1-bit compressed sensing under asymmetric noise. In COLT.
7. P. Awasthi, A. Blum, and O. Sheffet. 2010. Improved guarantees for agnostic learning of disjunctions. In COLT.
8. M.-F. Balcan, A. Beygelzimer, and J. Langford. 2006. Agnostic active learning. In ICML.
9. M.-F. Balcan, A. Broder, and T. Zhang. 2007. Margin based active learning. In COLT.
10. M.-F. Balcan and V. Feldman. 2013. Statistical active learning algorithms. In NIPS.
11. M.-F. Balcan and S. Hanneke. 2012. Robust interactive learning. In COLT.
12. M.-F. Balcan, S. Hanneke, and J. Wortman. 2008. The true sample complexity of active learning. In COLT.
13. M.-F. Balcan and P. M. Long. 2013. Active and passive learning of linear separators under log-concave distributions. In COLT.
14. P. L. Bartlett, O. Bousquet, and S. Mendelson. 2005. Local Rademacher complexities. Annals of Statistics 33, 4 (2005), 1497--1537.
15. A. Beygelzimer, D. Hsu, J. Langford, and T. Zhang. 2010. Agnostic active learning without constraints. In NIPS.
16. D. Bienstock and A. Michalka. 2014. Polynomial solvability of variants of the trust-region subproblem. In SODA.
17. A. Birnbaum and S. Shalev-Shwartz. 2012. Learning halfspaces with the zero-one loss: Time-accuracy tradeoffs. In NIPS.
18. A. Blum, A. Frieze, R. Kannan, and S. Vempala. 1997. A polynomial time algorithm for learning noisy linear threshold functions. Algorithmica 22, 1/2 (1997), 35--52.
19. A. Blum, M. L. Furst, M. J. Kearns, and R. J. Lipton. 1994. Cryptographic primitives based on hard learning problems. In CRYPTO.
20. S. Boucheron, O. Bousquet, and G. Lugosi. 2005. Theory of classification: A survey of recent advances. ESAIM: Probability and Statistics 9 (2005), 323--375.
21. N. H. Bshouty, Y. Li, and P. M. Long. 2009. Using the doubling dimension to analyze the generalization of learning algorithms. JCSS 75, 6 (2009), 323--335.
22. T. Bylander. 1994. Learning linear threshold functions in the presence of classification noise. In COLT.
23. R. Castro and R. Nowak. 2007. Minimax bounds for active learning. In COLT.
24. G. Cavallanti, N. Cesa-Bianchi, and C. Gentile. 2011. Learning noisy linear classifiers via adaptive and selective sampling. Machine Learning 83, 1 (2011), 71--102.
25. D. Cohn, L. Atlas, and R. Ladner. 1994. Improving generalization with active learning. Machine Learning 15, 2 (1994).
26. N. Cristianini and J. Shawe-Taylor. 2000. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press.
27. A. Daniely. 2015. A PTAS for agnostically learning halfspaces. In COLT. See also arXiv:1410.7050.
28. S. Dasgupta. 2005. Coarse sample complexity bounds for active learning. In NIPS.
29. S. Dasgupta. 2011. Active learning. Encyclopedia of Machine Learning (2011).
30. S. Dasgupta, D. J. Hsu, and C. Monteleoni. 2007. A general agnostic active learning algorithm. In NIPS.
31. O. Dekel, C. Gentile, and K. Sridharan. 2012. Selective sampling and active learning from single and multiple teachers. JMLR 13 (2012), 2655--2697.
32. I. Diakonikolas, G. Kamath, D. Kane, J. Li, A. Moitra, and A. Stewart. 2016. Robust estimators in high dimensions without the computational intractability. arXiv e-prints (April 2016).
33. V. Feldman, P. Gopalan, S. Khot, and A. Ponnuswami. 2006. New results for learning noisy parities and halfspaces. In FOCS. 563--576.
34. Y. Freund, H. S. Seung, E. Shamir, and N. Tishby. 1997. Selective sampling using the query by committee algorithm. Machine Learning 28, 2--3 (1997), 133--168.
35. M. R. Garey and D. S. Johnson. 1990. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman.
36. A. Gonen, S. Sabato, and S. Shalev-Shwartz. 2013. Efficient pool-based active learning of halfspaces. In ICML.
37. A. Gupta, M. Hardt, A. Roth, and J. Ullman. 2011. Privately releasing conjunctions and the statistical query barrier. In STOC.
38. V. Guruswami and P. Raghavendra. 2006. Hardness of learning halfspaces with noise. In FOCS.
39. V. Guruswami and P. Raghavendra. 2009. Hardness of learning halfspaces with noise. SIAM Journal on Computing 39, 2 (2009), 742--765.
40. S. Hanneke. 2007. A bound on the label complexity of agnostic active learning. In ICML.
41. S. Hanneke. 2011. Rates of convergence in active learning. The Annals of Statistics 39, 1 (2011), 333--361.
42. S. Hanneke. 2014. Theory of Disagreement-Based Active Learning. Foundations and Trends in Machine Learning. Now Publishers Inc.
43. S. Hanneke, V. Kanade, and L. Yang. 2015. Learning with a drifting target concept. In ALT.
44. D. S. Johnson and F. Preparata. 1978. The densest hemisphere problem. Theoretical Computer Science 6, 1 (1978), 93--107.
45. A. T. Kalai, A. R. Klivans, Y. Mansour, and R. A. Servedio. 2005. Agnostically learning halfspaces. In FOCS.
46. M. Kearns and M. Li. 1988. Learning in the presence of malicious errors. In STOC.
47. M. Kearns, R. Schapire, and L. Sellie. 1994. Toward efficient agnostic learning. Machine Learning 17, 2--3 (1994), 115--141.
48. M. Kearns and U. Vazirani. 1994. An Introduction to Computational Learning Theory. MIT Press, Cambridge, MA.
49. A. R. Klivans, P. M. Long, and R. A. Servedio. 2009a. Learning halfspaces with malicious noise. Journal of Machine Learning Research 10 (2009), 2715--2740.
50. A. R. Klivans, P. M. Long, and A. Tang. 2009b. Baum's algorithm learns intersections of halfspaces with respect to log-concave distributions. In RANDOM.
51. V. Koltchinskii. 2010. Rademacher complexities and bounding the excess risk in active learning. Journal of Machine Learning Research 11 (2010), 2457--2485.
52. P. M. Long and R. A. Servedio. 2006. Attribute-efficient learning of decision lists and linear threshold functions under unconcentrated distributions. In NIPS.
53. P. M. Long and R. A. Servedio. 2011. Learning large-margin halfspaces with more malicious noise. In NIPS.
54. L. Lovász and S. Vempala. 2007. The geometry of logconcave functions and sampling algorithms. Random Structures and Algorithms 30, 3 (2007), 307--358.
55. C. Monteleoni. 2006. Efficient algorithms for general active learning. In COLT.
56. D. Pollard. 2011. Convergence of Stochastic Processes. Springer-Verlag.
57. M. Raginsky and A. Rakhlin. 2011. Lower bounds for passive and active learning. In NIPS.
58. O. Regev. 2005. On lattices, learning with errors, random linear codes, and cryptography. In STOC.
59. R. A. Servedio. 2001. Smooth boosting and learning with malicious noise. In COLT.
60. J. Sturm and S. Zhang. 2003. On cones of nonnegative quadratic functions. Mathematics of Operations Research 28 (2003), 246--267.
61. L. G. Valiant. 1985. Learning disjunctions of conjunctions. In IJCAI.
62. V. Vapnik. 1998. Statistical Learning Theory. Wiley-Interscience.
63. S. Vempala. 2010. A random-sampling-based algorithm for learning intersections of halfspaces. JACM 57, 6 (2010).
64. L. Wang. 2011. Smoothness, disagreement coefficient, and the label complexity of agnostic active learning. JMLR 12 (2011), 2269--2292.
65. C. Zhang and K. Chaudhuri. 2014. Beyond disagreement-based agnostic active learning. In NIPS.
66. T. Zhang. 2006. Information theoretical upper and lower bounds for statistical estimation. IEEE Transactions on Information Theory 52, 4 (2006), 1307--1321.


Published in

Journal of the ACM, Volume 63, Issue 6 (February 2017), 233 pages
ISSN: 0004-5411
EISSN: 1557-735X
DOI: 10.1145/3038256

      Copyright © 2017 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 9 January 2017
      • Accepted: 1 October 2016
      • Revised: 1 June 2016
      • Received: 1 August 2015

