
Agenda - BNL, 18-22 October, 2004


For a summary of the meeting, please see Alan Silverman's trip report


Note: Archived video of the talks will be available soon in the RealMedia streaming video format. When the talks are available, the links in the "Video" column of the agenda will become active. RealPlayer is required to view the video stream. Downloads are available for Windows, UNIX/Linux, and Mac OS X, as well as legacy players.
Monday 18 October — Berkner Hall, Room B
08:00-09:00 Registration Video
09:00-09:25 Peter Bond — BNL Deputy Director for Science & Technology / Welcome Stream
download
XX Mb
09:25-09:40 Multiple / Start of the HEPiX meeting. Introduction, status, goals... Stream
download
XX Mb
09:40-10:00 Walter Schoen / GSI Site Report Stream
download
XX Mb
10:00-10:20 Shane Canon / NERSC Site Report Stream
download
XX Mb
10:20-10:40 Christopher Hollowell / BNL Site Report Stream
download
XX Mb
10:40-11:10 Coffee
11:10-11:30 Jiri Kosina / Prague Site Report Stream
download
XX Mb
11:30-11:50 Corrie Kost / TRIUMF Site Report (Disk tuning table) Stream
download
XX Mb
11:50-12:10 Paul Kuipers / NIKHEF Site Report Stream
download
XX Mb
12:10-12:30 Wojciech Wojcik / CCIN2P3 Site Report Stream
download
XX Mb
12:30-13:30 Lunch
13:30-14:00 Mirko Corosu (INFN) / INFN TRIP Project
Abstract:

This talk presents the current status, activities, and plans of the INFN TRIP project. The aim of the project is the definition and development of an authentication and authorization infrastructure that allows any authorized user to access the wireless network services of any INFN section. At this time we have implemented an architecture in which access to the service is based on MAC address authentication, X.509 certificates, or username and password.

The system uses a captive portal, dynamic VLAN assignment, and a MAC address database (via the RADIUS protocol).

We are evaluating a solution that integrates the system with 802.1X authentication.
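
The MAC-plus-RADIUS scheme described above can be illustrated with a FreeRADIUS-style configuration fragment (a hypothetical sketch; the MAC address and VLAN ID are invented, and the talk does not specify the actual RADIUS server or attribute values used):

```
# Hypothetical FreeRADIUS "users" entry: the switch authenticates the
# client by its MAC address, and the RADIUS reply carries the standard
# RFC 2868 tunnel attributes that trigger dynamic VLAN assignment.
00:11:22:33:44:55  Cleartext-Password := "00:11:22:33:44:55"
        Tunnel-Type = VLAN,
        Tunnel-Medium-Type = IEEE-802,
        Tunnel-Private-Group-Id = "42"
```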

Stream
download
XX Mb
14:00-14:30 Chuck Boeheim (SLAC) / Using Ranger as a better SWATCH
Stream
download
XX Mb
14:30-15:00 Enrico Maria Fasanelli (INFN) / INFN K5 project
Stream
download
XX Mb
15:00-15:30 Coffee
15:30-15:50 David Kelsey (CCLRC-RAL) / Update on LCG/EGEE Security
Abstract:
This talk will cover current work on LCG/EGEE security policy and procedures, including changes for LHC user registration.
Stream
download
XX Mb
15:50-16:30 Robert Cowles (SLAC) / Computer Security Update
Stream
download
XX Mb
16:30-17:00 Gabriele Carcassi (BNL) / GUMS: Grid User Management System
Abstract:
We describe our work on GUMS, a site tool for resource authorization (AuthZ) and grid user identity mapping. We will first define the scope of the work and describe the general direction we are taking. We will describe the current functionality provided to BNL, such as the ability to have a flexible site policy controlled by a single XML file and the ability to integrate with site databases. We will then describe the current work being done for OSG, which includes the use of account pools, a GT3/4-based service, and role-based authorization using VOMS extended proxy credentials.
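
As a rough illustration of the mapping step (not GUMS's actual code, XML schema, or API; all DNs, VO names, and accounts below are invented), policy-driven grid identity mapping can be sketched in Python:

```python
# Hypothetical sketch of grid identity mapping in the style of GUMS:
# a site policy maps certificate subjects (DNs) to local accounts,
# with a fallback pool of group accounts per virtual organization (VO).

# Explicit DN mappings take precedence over the VO pools.
DN_MAP = {
    "/DC=org/DC=example/CN=Alice Analyst": "alice",
}
VO_POOL = {
    "atlas": ["atlas001", "atlas002", "atlas003"],
}
_pool_index = {}  # next free pool slot per VO

def map_user(dn, vo):
    """Return the local account for a grid user, or None if denied."""
    if dn in DN_MAP:
        return DN_MAP[dn]
    pool = VO_POOL.get(vo)
    if not pool:
        return None  # no policy for this VO: access denied
    i = _pool_index.get(vo, 0)
    if i >= len(pool):
        return None  # pool exhausted
    _pool_index[vo] = i + 1
    return pool[i]

print(map_user("/DC=org/DC=example/CN=Alice Analyst", "atlas"))  # alice
print(map_user("/DC=org/DC=example/CN=Bob Builder", "atlas"))    # atlas001
```

The real system persists pool assignments and reads its policy from the XML file mentioned in the abstract; this sketch only shows the precedence logic.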
Stream
download
XX Mb
Tuesday 19 October — Berkner Hall, Room B
09:00-09:20 Roberto Gomezel / INFN Site Report
Stream
download
XX Mb
09:20-09:40 Len Moss / SLAC Site Report
Stream
download
XX Mb
09:40-10:00 Stephan Wiesand / DESY Site Report Stream
download
XX Mb
10:00-10:20 Manfred Alef / Karlsruhe Site Report Stream
download
XX Mb
10:20-10:50 Coffee
10:50-11:10 Michel Jouvin / LAL Site Report Stream
download
XX Mb
11:10-11:30 Helge Meinhard / CERN Site Report Stream
download
XX Mb
11:30-11:50 Kelvin Edwards / Jefferson Lab Site Report Stream
download
XX Mb
11:50-12:10 Martin Bly / RAL Site Report Stream
download
XX Mb
12:10-12:30 Multiple / IHEPCCC Discussion Stream
download
XX Mb
12:30-13:30 Lunch
13:30-14:00 Walter Schoen (GSI) / Performance tests and tuning with SATA Linux file servers
Abstract:
SATA-based Linux file servers can be used as relatively cheap disk storage. Important questions are their reliability and performance in RAID setups. This talk presents our experiences and performance tests, taking different kernel and RAID configurations into account.
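
As an illustration of the kind of measurement involved (a minimal sketch, not the authors' actual test setup; serious tests would use files larger than RAM, direct I/O, and tools such as bonnie++ or iozone), sequential write throughput can be estimated like this:

```python
# Minimal sequential-write throughput probe. Writes zeros in fixed-size
# blocks to a temporary file and reports MB/s, including the fsync time
# so the data actually reaches the disk rather than the page cache.
import os
import tempfile
import time

def write_throughput(total_mb=64, block_kb=256):
    """Write total_mb of zeros in block_kb chunks; return MB/s."""
    block = b"\0" * (block_kb * 1024)
    nblocks = total_mb * 1024 // block_kb
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(nblocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # force the data out of the page cache
        elapsed = time.perf_counter() - start
    finally:
        os.unlink(path)
    return total_mb / elapsed

print("%.1f MB/s" % write_throughput())
```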
Stream
download
XX Mb
14:00-14:30 Robert Petkus (BNL) / An Evaluation of Panasas at BNL
Abstract:
The RHIC Computing Facility at BNL has performed an evaluation of Panasas, a distributed, networked storage solution. The objective of the evaluation was to find a fast, scalable, fault tolerant, and easy to manage product that offered a global namespace. The discussion will include a review of our current NFS/SAN architecture, Panasas, testing methodologies, expectations and results.
Stream
download
XX Mb
14:30-15:00 Jan van Eldik (CERN) / Managing managed storage: CERN Disk Server operations
Stream
download
XX Mb
15:00-15:30 Coffee
15:30-16:00 Tomasz Wlodek (BNL) / CRS - a Condor-based batch system for massive job submission
Abstract:
We present a Condor-based system for managing data reconstruction jobs in RHIC experiments.
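
As an illustration only (not CRS's actual job description; the executable and file names are invented), a single reconstruction job of the kind such a system would generate and queue might be described to Condor as:

```
# Hypothetical Condor submit description file for one reconstruction job.
universe   = vanilla
executable = reco.sh
arguments  = run1234.raw
output     = run1234.out
error      = run1234.err
log        = crs.log
queue
```

A massive-submission system like CRS would generate one such description (or one `queue` entry) per raw data file and track job state through the Condor log.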
Stream
download
Mb
16:00-16:30 Dirk Duellmann (CERN) / The LCG Persistency Framework (POOL) and distributed database deployment projects
Abstract:

As part of the LCG Application Area, the persistency framework (POOL) has been developed; it has just passed its first large-scale deployment in experiment data challenges, resulting in some 400 TB of stored data. As an extension to POOL, a Relational Abstraction Layer (RAL) has been introduced this year, which will couple database back-end services such as Oracle and MySQL as storage for metadata.

To prepare the database deployment side of LCG, the 3D project has recently been started to define, together with the experiments and the service providers at Tier 1 and Tier 2, the service infrastructure hosting POOL and other relational data. This project will also propose a data distribution environment connecting the database services provided as part of LCG.

The CERN database group is preparing for the deployment of databases for physics by consolidating the many currently disconnected services into a small number of Linux-based Oracle clusters, which are expected to run RAC. I will give a brief overview of these three activities, focusing on their impact on the basic computing infrastructure.

Stream
download
16:30-17:00 Matthias Schroder (CERN) / Licensing Infrastructure at CERN
Stream
download
103.14 Mb
Wednesday 20 October — Berkner Hall, Room B
09:00-09:30 Alf Wachsmann (SLAC) / Perl API to AFS monitoring and debugging tools
Abstract:
I will present a new Perl module which provides APIs to the following AFS monitoring and debugging tools:
afsmonitor, cmdebug, rxdebug, scout, udebug, xstat_cm_test, xstat_fs_test
I will present implementation details, usage, and some real-life example applications for the Nagios monitoring system.
Stream
download
XX Mb
09:30-10:00 Rafael Garcia Leiva (Universidad Autonoma de Madrid) / Experience in the use of the quattor toolsuite outside CERN
Abstract:
In this talk I will very briefly review the goals of the quattor toolsuite and how it works. I will then present the current experience with quattor at two different sites: why they use quattor, its advantages over other fabric management tools, and the problems found so far.
Stream
download
XX Mb
10:00-10:30 Matthias Schroder (CERN) / UIMON to LEMON Migration
Stream
download
XX Mb
10:30-11:00 Coffee
11:00-11:30 Thorsten Kleinwort (CERN) / Large Farm 'Real Life Problems' and their Solutions
Abstract:
Having described our new farm management tools, ELFms with LEAF, Lemon, and Quattor, at previous HEPiX meetings, in this talk I will concentrate on how we handle day-to-day work with these tools on our various farms in the CERN Computer Centre. This includes software/kernel updates, hardware exchanges/failures, and how we cope with the increasing demand for more machines/clusters managed by quattor for other groups at CERN.
Stream
download
XX Mb
11:30-12:00 Karin Miers (GSI) / High availability with Linux using DRBD and heartbeat
Abstract:
This talk presents the design of a highly available Linux file service based on open-source tools. Two identical computers in a master/slave shared-nothing setup provide a hot-standby solution. The heartbeat package is used for communication between the nodes and for IP address takeover. File system synchronisation between the nodes is done using DRBD, which mirrors a whole block device over the network (a kind of network RAID-1). Additionally, service reliability and server status are monitored using the tool mon. Tests show that ongoing operations are not aborted during failover, and recovery is completely transparent to the clients.
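
The takeover configuration described above can be sketched as a heartbeat (v1) haresources entry (hypothetical; the host name, IP address, DRBD resource, device, mount point, and service names are all invented):

```
# node1 is the preferred master. On failover, the surviving node takes
# over the service IP, promotes its DRBD device, mounts the file system,
# and starts the NFS service -- in this order, reversed on release.
node1 IPaddr::192.168.1.100 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 nfs-kernel-server
```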
Stream
download
XX Mb
12:00-12:30 Shane Canon (NERSC) / CHOS - Chroot OS
Stream
download
XX Mb
12:30-13:30 Lunch
13:30-14:00 Ruben Domingo Gaspar Aparicio (CERN) / Designing, deploying and supporting Windows Terminal Services at CERN
Abstract:
The CERN Windows Terminal Server Service was started on 1 April 2004. It provides the CERN community with a useful and easy way to:
  • run Windows applications independently of the platform you are on
  • securely access the CERN environment from outside CERN
  • provide a service that can be used as a base to set up customized clones for special needs (e.g. controls)
We will present the service, its architecture and functional components, how to profit from it, and our experience during these first six months of running the service.
Stream
download
XX Mb
14:00-14:30 Reinhard Baltrusch (DESY) / DESY Windows 2003 domain - features, migration and caveats
Stream
download
XX Mb
14:30-15:00 Rafal Otto (CERN) / SMS 2003 Deployment and Managing Windows Security at CERN
Stream
download
XX Mb
15:00-15:30 Coffee
15:30-16:00 Rafal Otto (CERN) / SPAM Fighting and Exchange 2003 at CERN
Stream
download
XX Mb
16:00-16:30 Ruben Domingo Gaspar Aparicio (CERN) / A new Service for Electronic mailing lists at CERN
Abstract:
The new distribution list service is now being deployed at CERN. It is integrated with the existing mail infrastructure and benefits from it in several respects: anti-spam, anti-virus, flood control, less spam sent by the system to the list owners, ... A new web interface developed with ASP.NET technology is available, which introduces new features: bulk operations, archive management, ...
Stream
download
XX Mb
16:30-17:00 Conclusions Stream
download
XX Mb
Conference Dinner at Atlantis Marine World - Starting at 7:00pm
Thursday 21 October — Building 490 Conference Room
08:00-09:00 Large System SIG / Platforms for Physics Registration 
09:00-09:30 Phil King (Intel) / Slides
Stream
download
XX Mb
09:30-10:00 Stephan Wiesand (DESY) / AMD64/EM64T for HEP
Abstract:
I'll briefly introduce these new platforms, showing that there's more to them than just an extended address space. Then the results of performance comparisons for physics codes on several systems with Opteron, Nocona, Prescott and conventional x86 CPUs will be shown. I'll also talk about experiences with managing 64-bit Linux on these systems.
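
As a sketch of how such comparisons are typically reduced to relative figures (the timings below are invented placeholders, not the results presented in this talk):

```python
# Reduce per-platform benchmark timings to performance relative to a
# baseline CPU: speedup = baseline time / platform time (higher = faster).

# Hypothetical wall-clock seconds for the same physics job on each CPU.
timings = {
    "reference x86": 1000.0,
    "Opteron": 640.0,
    "Nocona": 710.0,
}

def relative_performance(timings, baseline="reference x86"):
    """Return speed of each CPU relative to the baseline."""
    base = timings[baseline]
    return {cpu: base / t for cpu, t in timings.items()}

for cpu, rel in sorted(relative_performance(timings).items()):
    print("%-14s %.2fx" % (cpu, rel))
```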
Stream
download
XX Mb
10:00-10:15 Jan Iven (for Andreas Hirstius) (CERN/IT Dept) / Next-generation IA64 storage servers (PDF Version)
Stream
download
XX Mb
10:15-10:25 Maxim Potekhin (BNL) / Price-performance estimates for some compute platforms (AMD, Intel)
Stream
download
XX Mb
10:30-11:00 Coffee
11:00-11:30 Manfred Alef (Karlsruhe) / Experiences with Water-Cooled Clusters at GridKa
Abstract:

The demand for computing power is increasing considerably in many applications. Often it is impossible to expand the data center due to limitations of time and money. Using smaller computers such as 1U servers or blade systems is one way to fit more computing power into the available building.

At the same time, the electric power consumption, and with it the heat dissipation, of a single computer system is increasing dramatically. This results in a very large heat load concentrated in very small enclosures. The problem is to remove the heat and carry it outside.

To overcome these difficulties, the Grid Computing Centre Karlsruhe (GridKa) decided to install water-cooled computer cabinets. The first cabinets have been working without any problems for two years.

Stream
download
XX Mb
11:30-12:00 Alf Wachsmann (SLAC) on behalf of Richard Mount / Huge memory systems for data intensive science
Stream
download
XX Mb
12:00-13:30 Lunch
13:30-14:00 Chuck Boeheim (SLAC) / Installing a Mac OS X Cluster
Stream
download
XX Mb
14:00-14:30 Patrick Fuhrmann (DESY) for Andreas Gellrich (DESY) / The DESY Production Grid
Abstract:

DESY is one of the world's leading centers for research with particle accelerators and a center for research with synchrotron light. The hadron-electron collider HERA houses four experiments which are taking data and will be operated until 2007.

The H1 and ZEUS collaborations at HERA-II have started to exploit the Grid for their growing demand for Monte Carlo events after the upgrade of the collider.

DESY participates in a number of national and international Grid projects. Among them are the German e-science initiative D-GRID, the EU project Enabling Grids for E-sciencE (EGEE), and the International Lattice Data Grid (ILDG). In addition, activities have started to exchange simulation data within the International Linear Collider community (ILC).

The Grid infrastructure at DESY is based on the so-called DESY Production Grid, which makes use of the LCG-2 middleware and is operated in the LCG TestZone. It is set up to provide generic Grid services for all Grid activities at DESY. The DESY Production Grid therefore incorporates all necessary elements of a complete and independent Grid. This includes a Resource Broker, catalog services for replica and metadata management, and a dCache-based Storage Element which provides Grid access to the entire DESY data space of 0.5 PB.

In this contribution to HEPiX we will describe the DESY Production Grid in the context of the DESY Grid activities and present operational experiences.

Stream
download
XX Mb
14:30-15:00 Multiple / Platform Discussion/Conclusions
Stream
download
XX Mb
15:00-15:30 Coffee
15:30-16:00 Roberto Gomezel (INFN/Trieste) / Scientific Linux Experience at INFN/Trieste
Stream
download
XX Mb
16:00-17:00 Alan Silverman - chair / Panel on experience with Scientific Linux and the use of the Red Hat Subscription Model

Contributions from FNAL, CERN, SLAC and BNL plus open discussion.
Stream
download
XX Mb
Friday 22 October — Building 490 Conference Room
  Grid Operations Experience  
09:00-09:25 Bob Cowles (SLAC) / OSG incident response etc.
Stream
download
XX Mb
09:25-09:50 Leigh Grundhoefer / iVDGL/Grid3/OSG iGOC - how they do ops and incident response
Stream
download
XX Mb
09:50-10:15 Dave Kant (RAL) / LCG Grid Monitoring and Accounting
Abstract:
A description of the tools developed by the Grid Operations Centre to monitor LCG.
Stream
download
XX Mb
10:15-10:45Coffee 
10:45-11:10 Dave Kelsey (RAL) / LCG/EGEE Security Operations
Abstract:
This talk will present plans for a new Security Operations group, including incident response and service challenges.
Stream
download
XX Mb
11:10-11:30 Ian Bird (CERN) / LCG operations, status and problems
Stream
download
XX Mb
11:30-12:30 Site Managers / Discussion on how sites would accept remote operations
Markus Schulz and Ian Bird
Introduction and strawman ops model for discussion
Stream
download
XX Mb
12:30 Adjourn  

Last Modified: Tuesday, 26-Oct-2004 11:38:02 EDT


The Department of Energy's Brookhaven National Laboratory conducts research in the physical, biomedical, and environmental sciences, as well as in energy technologies. Brookhaven also builds and operates major facilities available to university, industrial, and government scientists. The Laboratory is managed by Brookhaven Science Associates, a limited liability company founded by Stony Brook University and Battelle, a nonprofit applied science and technology organization.
