CSCS - Piz Daint



Summary

Resources per team: maximum 36'000 compute node hours and up to 5 TB of storage per team

Resource access: SSH access or interactive access through Jupyter notebooks. UI-based applications can be run via X11 forwarding.
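
As an illustration, an interactive session with X11 forwarding could be opened along the lines of the sketch below, shown in Python for consistency with the later examples (in practice this is a single ssh command). The username and login host are placeholders; the actual host name to use is given in the CSCS documentation (user.cscs.ch).

# Minimal sketch: interactive SSH login with X11 forwarding.
# The username and host are placeholders; see user.cscs.ch for the
# actual login host to use for Piz Daint.
import subprocess

USER = "sdc2user"          # placeholder CSCS username
HOST = "daint.cscs.ch"     # placeholder login host

# "-X" enables X11 forwarding so UI-based applications display locally.
subprocess.run(["ssh", "-X", f"{USER}@{HOST}"])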

Data cube access: The data cube will be made available in a shared location by the SDC2 organizers

Resource management: The teams have to use the SLURM workload manager; all analyses must be submitted to it as jobs.
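
For illustration only, the sketch below shows one way to wrap an analysis in a SLURM batch script and submit it with sbatch. The project ID, module name and analysis script are placeholders, and the sbatch options shown should be checked against the CSCS documentation (user.cscs.ch) before use.

# Minimal sketch of submitting an analysis as a SLURM batch job.
# The project ID, module name and analysis script are placeholders;
# verify the sbatch options for Piz Daint at user.cscs.ch.
import subprocess

jobscript = """#!/bin/bash -l
#SBATCH --job-name=sdc2-analysis
#SBATCH --time=02:00:00
#SBATCH --nodes=1
#SBATCH --constraint=gpu        # request the GPU part of the system
#SBATCH --account=<project-id>  # placeholder: the team's project ID

module load daint-gpu           # example module; adjust to the team's software stack
srun python analyse_cube.py     # placeholder analysis script
"""

with open("submit.sbatch", "w") as f:
    f.write(jobscript)

# sbatch prints a line such as "Submitted batch job <id>"
subprocess.run(["sbatch", "submit.sbatch"], check=True)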

Software management: Participants should install their own software, but support can be requested from the CSCS support team (help@cscs.ch).

Documentation: Resource access information can be hosted on the SDC2 webpage. Information on how to access Piz Daint is available at the CSCS user portal (user.cscs.ch).

Support: Support is available via the CSCS ticketing system (help@cscs.ch). Ticket responses are limited to business days. Moderate knowledge of Linux and job schedulers is expected.

Resource location: Switzerland

Technical specifications

Overview

  • Named after Piz Daint, a prominent peak in Grisons that overlooks the Fuorn pass, this supercomputer is a hybrid Cray XC40/XC50 system and is the flagship system of the national HPC service.

Per user resource

  • Up to 36'000 compute node hours on the GPU part of the system. If the GPU cannot be used, the CPU on the node can still be used

  • Up to 5 TB of storage per team

  • 10 GB of home space per user (dedicated) and up to 8.8 PB of scratch capacity to use (shared)

  • Up to 2400 nodes can be requested by each job

Software installed

Volume of resource

  • The teams can use up to 36'000 compute node hours and up to 5 TB of storage.

  • The specific amounts required should be stated when the request is made.

GPUs if any

  • The teams can make use of the P100 GPUs on the system (recommended), but can also choose to use only the CPUs on the nodes.
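
A minimal sketch of this "GPU if possible, CPU otherwise" pattern is shown below. It assumes the team has installed CuPy in its own environment (CuPy is not implied to be provided by CSCS) and falls back to NumPy on the CPU.

# Minimal sketch: use the node's P100 GPU via CuPy when available,
# otherwise fall back to NumPy on the CPU cores.
import numpy as np

try:
    import cupy as cp      # assumed to be installed by the team
    xp = cp                # array module backed by the GPU
except ImportError:
    xp = np                # CPU-only fallback

# Example reduction that runs identically on either backend.
data = xp.random.random((4096, 4096))
print(float(data.sum()))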

User access


  • Each group must submit a formal Small Development Project proposal in order to get access to the available resources: https://www.cscs.ch/user-lab/allocation-schemes/development-projects/

  • To start the process, applicants must first send an email to projectoffice@cscs.ch requesting that their accounts be opened so that they can apply for a development project.

  • Approval is given at CSCS's discretion after a technical review, which can take around one month.

  • Users should be aware that the service is shared with other users and that their usage patterns may impact others. The typical problem areas that groups should take special care to avoid, and should address when writing the proposal, are:

    • Creating many small files in the $SCRATCH file system (a Lustre file system)

    • Submitting thousands of short-lived jobs that each use very few nodes (in this case, the GREASY scheduler should be used: https://user.cscs.ch/tools/high_throughput/)

    • Querying the queue status too frequently (e.g. watch squeue). The SLURM scheduler has a 5-minute scheduling cycle, so probing it every 2 seconds makes no difference (see the polling sketch after this list).

    • Running applications on the login nodes of the cluster. Piz Daint has dedicated pre- and post-processing partitions for this purpose.
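
As a sketch of the recommended polling behaviour, the snippet below checks a job's state at five-minute intervals using the standard squeue command; the job ID is a placeholder for the value printed by sbatch.

# Minimal sketch: poll the state of a submitted job at a sensible interval
# instead of probing the queue every few seconds.
import subprocess
import time

JOB_ID = "123456"      # placeholder: the ID printed by sbatch at submission
POLL_INTERVAL = 300    # seconds; matches SLURM's ~5-minute scheduling cycle

while True:
    result = subprocess.run(
        ["squeue", "--noheader", "--job", JOB_ID, "--format=%T"],
        capture_output=True, text=True,
    )
    state = result.stdout.strip()
    if not state:      # job no longer in the queue: it finished or was cancelled
        print(f"Job {JOB_ID} has left the queue.")
        break
    print(f"Job {JOB_ID} state: {state}")
    time.sleep(POLL_INTERVAL)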

Logging in

How to run a workflow

Accessing the data cube

  • The data cube will be made available in a shared location by the SDC2 organizers (to be communicated at the beginning of the challenge)
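
Once the location is announced, the cube could be opened along the lines of the sketch below, which assumes the cube is delivered as a FITS file and that astropy is available in the team's own software environment; the path is a placeholder.

# Minimal sketch: open the data cube from the shared location once it is
# announced.  The path is a placeholder; astropy is assumed to be installed
# in the team's own environment.
from astropy.io import fits

CUBE_PATH = "/path/to/shared/sdc2_cube.fits"     # placeholder path

with fits.open(CUBE_PATH, memmap=True) as hdul:  # memmap avoids reading the
    header = hdul[0].header                      # whole cube into memory at once
    cube = hdul[0].data
    print(cube.shape, header.get("BUNIT"))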

Software management

  • Users can install and compile their own software themselves

  • CSCS-provided software can be accessed through environment modules.

Containerisation

Documentation

Resource management

Support

  • Support can be requested by email through the CSCS user support ticketing system (help@cscs.ch). Ticket responses are limited to business days.

  • Moderate knowledge of Linux and job schedulers is expected.

Credits and acknowledgements

Users must quote and acknowledge the use of CSCS resources in all publications related to their production and development projects as follows: "This work was supported by a grant from the Swiss National Supercomputing Centre (CSCS) under project ID ###"