Summary

Resources per team: HPC Cluster + Virtual Machines (VMs) cluster

Resource Access: Currently SSH, web-based and X2GO access are all foreseen. More details and options will be explained on a dedicated instruction page.

Data cube access: shared directory, read-only mode

Resource management: Team environments will be isolated in VMs or via queuing systems on the clusters. The resources available per team (not per user!) can be limited to a value to be defined (TBD).

Software management: A containerisation environment will be provided, with the possibility to load or run containers.

Documentation: Since two types of platform will be available, two dedicated sections of the descriptive web pages will explain their functioning and details. The main page is: https://www.ict.inaf.it/computing/skadc2/

Support: An FAQ and a mailing list will be provided. Support will be given on a best-effort basis.

Resource location: Italy

Technical specifications

Overview

INAF - ICT, the coordination office of the National Institute of Astrophysics, aims to support all the computational, storage, archival and software activities related to the SKA, including the SKA Data Challenge 2 call. The INAF-ICT portfolio is composed of several infrastructures, both internally handled (IA2 - https://www.ia2.inaf.it/, CHIPP - https://www.ict.inaf.it/computing/chipp/ ...) and provided by external vendors (Cineca, AWS, ...). More details can be found on the ICT main page (https://www.ict.inaf.it/). The most relevant features offered by the ICT are the technical support offered to the astronomy scientific community, the deep knowledge of the astronomy use case, and the participation of most of the staff in EU projects to enhance the user experience in data handling, reduction and analysis in the perspective of the SKA era. Cluster (HPC/HTC) and virtualization/cloud approaches, containerization and team management are customized to provide an easy-to-use and tuned infrastructure for this particular case. The user space is allocated per team, so concurrent access to the resources will be managed autonomously by each team.

Technical specifications

To offer a comparable solution to each team in terms of computing power, we propose to limit the hardware capacity of both the HPC/HTC and the virtualization/cloud resources (per team space) to:

or to what the SKA Data Challenge 2 coordinators will suggest, if needed.

Per-user resources

All the physical resources of one VM are available to the team and shared among the team users. On the HPC cluster, each user is assigned to a SLURM partition. To avoid unlimited usage and to ensure a fair challenge, some constraints will be defined.
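As a purely illustrative sketch (the partition name and resource caps below are placeholders; the actual values and constraints will be communicated by the administrators), a job submission respecting such limits on the HPC cluster could look like:

    # Submit a job to the team's SLURM partition with explicit resource limits
    sbatch --partition=<team_partition> \
           --nodes=1 --cpus-per-task=8 --mem=32G --time=12:00:00 \
           job_script.sh

    # Check the state of your own jobs
    squeue -u $USER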

Software installed

CASA, CARTA, SOFIA, IDL functions and more. Details will be available on the site https://www.ict.inaf.it/computing/skadc2/

Volume of resource

The INAF-ICT infrastructure can easily accommodate four teams on a virtualized environment and another four on a cluster platform. The access methodology will be as homogeneous as possible, so the end user will be able to access both in a similar way. Since the infrastructure is backed by a robust and state-of-the-art authentication and authorization mechanism, the team composition can range from one to an unspecified number of participants. After a team member's registration (first login), the team manager (PI) will be able to add that member to his/her own group. The infrastructure administrator will create the group on the system after agreement with the SKA Data Challenge coordination team. No automatic creation of groups is foreseen. As previously mentioned, the account resources are shared among the group members, so concurrent utilization will reduce the available resource capacity.

GPUs if any

GPUs are currently available but are not planned to be included in this proposal. If an actual need emerges, an extension to the current documentation will be provided.



User access

Teams will access the infrastructure after registration and group creation on the authorization system. The available authentication mechanisms will be the most widespread protocols, such as eduGAIN, X.509, OAuth2 (Google, LinkedIn, ORCID), or self-registration. All users must be added by the PI to the relevant group.

Logging in

A dedicated web interface to perform the first registration and team setup on the authorization system is provided at https://www.ict.inaf.it/computing/skadc2/science-gateway/registration-and-team-setup/. The interface also redirects to web applications where more information on the account and on the access methods can be found.
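Once an account is active, SSH is one of the foreseen access methods (see the Summary). A minimal sketch, assuming a hypothetical login-node hostname (the real one will be published on the instruction page):

    # Log in to the cluster login node (username and hostname are placeholders)
    ssh <username>@<login-node-hostname>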

How to run a workflow

Containers with the most widespread radio astronomy application toolkits will be provided. It will also be possible to run self-developed containers on the infrastructure.
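As a hedged sketch (the partition, container path and script name below are placeholders), a typical workflow step on the HPC cluster could be a SLURM batch script that runs inside one of the provided containers:

    #!/bin/bash
    #SBATCH --partition=<team_partition>
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G
    #SBATCH --time=06:00:00

    # Run one step of the workflow inside a provided container
    singularity exec /path/to/provided_container.sif \
        python /home/<user>/workflow/run_step.py

The script would then be submitted with sbatch, as shown in the Per-user resources section.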

Accessing the data cube

The data cube will be accessible in a directory on a read-only shared file system.
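The exact mount point will be documented on the SKADC2 pages; the paths below are placeholders only. The same directory can also be exposed, still read-only, inside a container:

    # List the shared data cube directory (path is a placeholder)
    ls /shared/skadc2/

    # Make it visible inside a Singularity container, read-only
    singularity exec --bind /shared/skadc2:/data:ro container.sif ls /data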

Software management

The Singularity containerisation environment will be provided, with the possibility to use a predefined container, in which the most common software is already available and loadable with a click or through a standard command-line procedure, or to load a self-implemented container. Singularity also offers the possibility to run Docker containers. No software support for adapting applications to the use case will be provided.

Installation of missing libraries is handled through a regular support request. Users can also install their own libraries or run their own containers.

Three main software management approaches are foreseen: (i) system-wide installation, which requires system administration support; (ii) a predefined container that can be extended by the users (a tutorial on how to manage containers may eventually be provided); (iii) user-level installation in the user home directory or on the team shared storage.
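As an illustrative sketch of approach (iii), assuming Python-based tooling (package names and paths are placeholders), a user-level installation can be done without administrative rights:

    # Install into the user's home directory
    pip install --user astropy

    # Or keep a dedicated virtual environment on the team shared storage
    python3 -m venv /path/to/team_storage/envs/skadc2
    source /path/to/team_storage/envs/skadc2/bin/activate
    pip install spectral-cube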

Environment modules are also available for more complex configurations.
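The standard module commands apply; the module names depend on the actual cluster setup and are not listed here:

    module avail                  # list the software configurations available
    module load <name>/<version>  # load one of them
    module list                   # show what is currently loaded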

Containerisation

Singularity or Docker environments will certainly be provided. Since the community will most probably need pre-installed applications, some already prepared containers will be offered, accessible through a graphical user interface or command-line calls. The recommended way to proceed is to make proper use of the tools available.
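Because Singularity can run Docker images directly, a self-prepared Docker image can be pulled and executed on the infrastructure. A minimal sketch, using a public image purely as an example (it is not one of the provided containers):

    # Pull a Docker image and convert it to a Singularity image file
    singularity pull docker://ubuntu:20.04

    # Run a command inside the resulting image
    singularity exec ubuntu_20.04.sif cat /etc/os-release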

Documentation

A PDF serving as the user guide will be uploaded to the description page https://www.ict.inaf.it/computing/skadc2/

An FAQ and a mailing list will be available to support system administration issues in running containers and to complement the list of available containers offered.

Resource management


Support

A mailing list will be the preferred channel to request support on the INAF infrastructure made available for the SKA Data Challenge 2. The support will be offered on a best-effort basis, meaning that answers will be provided within a few working days, compatibly with the tasks of the other applications and projects.

Credits and acknowledgements 

We acknowledge the computing infrastructures of INAF, under the coordination of the ICT office of the Scientific Directorate, for the availability of computing resources and support.