DOI: 10.1145/3159652.3159699 · WSDM Conference Proceedings · research-article

Consistent Transformation of Ratio Metrics for Efficient Online Controlled Experiments

Published: 02 February 2018

ABSTRACT

We study ratio overall evaluation criteria (user behavior quality metrics) and, in particular, average values of non-user-level metrics, which are widely used in A/B testing as an important part of modern Internet companies' evaluation instruments (e.g., abandonment rate, a user's absence time after a session).

We focus on the problem of improving the sensitivity of these criteria, since there is a large gap between the wide variety of sensitivity improvement techniques designed for user-level metrics and the few such techniques available for ratio criteria.

We propose a novel transformation of a ratio criterion into the average value of a user-level (randomization-unit-level, in general) metric, which creates an opportunity to directly apply the wide range of sensitivity improvement techniques designed for the user level, making A/B tests more efficient. We provide theoretical guarantees on the novel metric's consistency in terms of preservation of two crucial properties (directionality and significance level) w.r.t. the source ratio criterion.
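As a concrete illustration (notation ours; a minimal sketch of the construction rather than the paper's exact definitions): if the ratio criterion is $R = \sum_u x_u / \sum_u y_u$, where $x_u$ and $y_u$ are user $u$'s contributions to the numerator and denominator, the transformation replaces $R$ with the user-level metric $L_u = x_u - \kappa \cdot y_u$, where $\kappa$ is the control-group ratio $\sum_{u \in C} x_u / \sum_{u \in C} y_u$. The experiment then compares the plain per-user averages $\bar{L}$ between treatment and control, which is exactly the setting that user-level sensitivity techniques are designed for.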

The experimental evaluation of the approach, done on hundreds of large-scale real A/B tests run at one of the most popular global search engines, reinforces the theoretical results and demonstrates up to +34% sensitivity rate improvement achieved by the transformation combined with the best known regression adjustment.
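For intuition, below is a minimal, self-contained Python sketch of how such a transformation might be applied in practice (the linearized form $L_u = x_u - \kappa y_u$, the toy data, and all names are our illustrative assumptions, not the authors' code):

```python
import numpy as np
from scipy import stats

def linearize(x, y, kappa):
    """Per-user linearized metric L_u = x_u - kappa * y_u (assumed form).

    x, y: per-user numerator/denominator contributions to the ratio metric.
    kappa: the control-group ratio sum(x_C) / sum(y_C).
    """
    return x - kappa * y

# Toy data (assumed example): y_u = sessions per user, x_u = "good" sessions.
rng = np.random.default_rng(42)
y_control = rng.poisson(5.0, size=10_000) + 1
x_control = rng.binomial(y_control, 0.30)
y_treatment = rng.poisson(5.0, size=10_000) + 1
x_treatment = rng.binomial(y_treatment, 0.31)

# kappa is fixed from the control group, so E[L] is ~0 on control by construction.
kappa = x_control.sum() / y_control.sum()

l_control = linearize(x_control, y_control, kappa)
l_treatment = linearize(x_treatment, y_treatment, kappa)

# L is now a plain per-user average, so a standard two-sample t-test (or any
# user-level variance-reduction technique, e.g. regression adjustment on
# pre-experiment covariates) applies directly.
t_stat, p_value = stats.ttest_ind(l_treatment, l_control, equal_var=False)
print(f"control ratio kappa = {kappa:.4f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3g}")
```

Because $\kappa$ is estimated on the control group only, the transformed metric has zero mean on control by construction, and any user-level method, such as the regression adjustment mentioned above, can then be applied to $L_u$ as to an ordinary per-user average.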


          • Published in

            cover image ACM Conferences
            WSDM '18: Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining
            February 2018
            821 pages
            ISBN:9781450355810
            DOI:10.1145/3159652

            Copyright © 2018 ACM

            Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

            Publisher

            Association for Computing Machinery

            New York, NY, United States

            Publication History

            • Published: 2 February 2018


            Qualifiers

            • research-article

            Acceptance Rates

WSDM '18 Paper Acceptance Rate: 81 of 514 submissions, 16%. Overall Acceptance Rate: 498 of 2,863 submissions, 17%.

