DOI: 10.1145/3219104.3219157

Homogenizing OSG and XSEDE: Providing Access to XSEDE Allocations through OSG Infrastructure

Published: 22 July 2018

ABSTRACT

We present a system that allows individual researchers and virtual organizations (VOs) to access allocations on Stampede2 and Bridges through the Open Science Grid (OSG), a national grid infrastructure for running high throughput computing (HTC) tasks. Using this system, VOs and researchers can run larger workflows than are possible with OSG resources alone. The system allows a VO or user to run on XSEDE resources, under their own allocation, using the same framework used with OSG resources. It consists of two parts: a compute element (CE) that routes workloads to the appropriate user accounts and allocations on XSEDE resources, and simulated access to the CernVM Filesystem (CVMFS) servers used by OSG and VOs to distribute software and data. Jobs submitted through this system therefore run in a homogeneous environment regardless of whether they land on XSEDE HPC resources (such as Stampede2 and Bridges) or on OSG.
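
For illustration, a minimal sketch of a job submitted through the same HTCondor-based framework is shown below as a submit description. The executable, the project tag, and the use of a +ProjectName attribute to select an allocation are assumptions for this example, not the exact configuration of the system described in the paper.

    # Hypothetical HTCondor submit description (sketch only).
    # "+ProjectName" is a placeholder for whatever attribute identifies the
    # XSEDE allocation/project; the real routing attribute is an assumption.
    universe       = vanilla
    executable     = analyze.sh
    arguments      = input.dat
    +ProjectName   = "TG-ABC123"
    request_cpus   = 1
    request_memory = 2GB
    log            = job.log
    output         = job.out
    error          = job.err
    queue

From the user's perspective, the same submit description applies whether the job runs at an OSG site or, via the compute element, on an XSEDE allocation.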


Published in

      PEARC '18: Proceedings of the Practice and Experience on Advanced Research Computing
      July 2018
      652 pages
ISBN: 9781450364461
DOI: 10.1145/3219104

      Copyright © 2018 ACM

      Publication rights licensed to ACM. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of the United States government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

      Publisher

      Association for Computing Machinery

      New York, NY, United States



      Qualifiers

      • research-article
      • Research
      • Refereed limited

      Acceptance Rates

PEARC '18 paper acceptance rate: 79 of 123 submissions (64%). Overall acceptance rate: 133 of 202 submissions (66%).
