OSG Document 1026-v1

Virtual Environments for Prototyping Tier-3 Clusters

Submitted by: Marco Mambelli
Updated by: Marco Mambelli
Document Created: 22 Feb 2011, 21:08
Contents Revised: 22 Feb 2011, 21:08
Metadata Revised: 22 Feb 2011, 21:08
Viewable by: Public document

The deployed hierarchy of Tier-1 and Tier-2 data centers, organized within the Worldwide LHC Computing Grid (WLCG), has without question been exceedingly successful in meeting the large-scale, group-level production and grid-level data analysis requirements of the experiments in the first full year of LHC operations. However, the plethora of derived datasets and formats thus produced, in particular large volumes of n-tuple-like refined datasets, has underscored the need for additional resource configurations that facilitate data access and analysis at the institutional or individual-physicist scale. The ATLAS and CMS collaborations long ago formalized another level in the hierarchy of their respective computing infrastructures to meet this need: the Tier-3 center. Only now, as the computing models and analysis modalities evolve, are the detailed requirements and optimal deployment configurations for these facilities being understood. Since Tier-3 centers are typically smaller in scale than Tier-2s, and may have limited staffing with the requisite computing expertise, reliable and easy-to-deploy cluster and storage-system configurations and recipes targeted for common deployment need to be prototyped and tested in advance. In addition, Tier-3s come in a wide variety of configurations reflecting available resources and institutional goals. This adds complexity to the task of technology providers such as the Open Science Grid (OSG), which aspires to support Tier-3 groups. This paper describes a prototyping environment for creating virtual Tier-3 clusters using virtual machine technology based on VMware ESX deployed on a simple laptop. Using virtual machines with varying network topologies, the components of a Tier-3 cluster have been studied and modularized to simplify and streamline deployment in real environments.
The virtual cluster made it possible to test different solutions, simplified the verification of the software and, more importantly, allowed the testing of its installation and configuration instructions for Tier-3 managers. Using virtual machines and machine templates, complete prototype clusters could be brought up quickly to exercise different systems on a virtual Tier-3 platform, such as the Xrootd distributed storage system, configurations of the Condor resource management system, and data transfer services such as GridFTP and SRM.
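As an illustration of the kind of modular Condor configuration that such a virtual cluster lets a Tier-3 manager rehearse, a minimal head-node/worker-node split might look like the following sketch. The hostnames and subnet are hypothetical placeholders, and this is not the configuration distributed with the paper, only an example of the head/worker role separation:

```
# condor_config.local on the head node (hypothetical hostname)
CONDOR_HOST = t3-head.example.org
# The head node runs the central-manager and submit daemons
DAEMON_LIST = MASTER, COLLECTOR, NEGOTIATOR, SCHEDD
# Limit write access to the cluster's private network (example subnet)
ALLOW_WRITE = 192.168.1.*

# condor_config.local on each worker node
CONDOR_HOST = t3-head.example.org
# Workers run only the execute daemon
DAEMON_LIST = MASTER, STARTD
ALLOW_WRITE = 192.168.1.*
```

Because the roles differ only in `DAEMON_LIST`, a single virtual machine template can be cloned into either a head or a worker node with a one-line change, which is precisely the kind of modularization the prototyping environment was used to verify.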
Associated with Events:
CHEP10 held on 18 Oct 2010 in Taipei, Taiwan

Supported by the National Science Foundation and the U.S. Department of Energy's Office of Science.
