EngageSKA - LCA-UC Portugal SKA Regional Centre prototype

Summary

Resources per team: Virtual Machines (VMs) assigned per team on the EngageSKA cluster + HPC cluster (LCA-UC)

Resource access: SSH access to the VMs after setting up the VPN; direct SSH to HPC centre accounts (managed by LCA-UC). More details and options will be provided in dedicated access documentation.

Data cube access: Shared directory, read-only mode

Resource management: VMs will naturally isolate team environments, and the VM flavour will be fixed.

Software management: Users can install software, but EngageSKA only supports the tools that are central to the system and does not offer help troubleshooting other software/code. Participants will have sudo access on their VMs.

Documentation: Information on accessing the resources and running workflows will be hosted on the SDC2 webpage. Workflows can be run freely, as each team will have full access to its VM.

Support: Contacts are listed in the Support section. An FAQ and a mailing list will be provided. Support will be given on a best-effort basis.

For LCA-UC accounts, support is available via a ticketing system (helpdesk). Ticket responses are limited to business days. Moderate knowledge of Linux and job schedulers is expected.

Resource location: Portugal

Technical specifications

Overview

The facility provides Virtual Machines (VMs) through OpenStack. To access a VM, users need to request an account and set up the EngageSKA VPN (OpenStack is only reachable from inside the VPN). Once connected to the VPN, a VM can be reached through its floating IP address with any SSH client.
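As an illustration only (the key file, login user and floating IP below are placeholders, and the default login user may differ), connecting to a team VM from inside the VPN could look like:

    # Log in to the team VM through its floating IP (placeholder values)
    ssh -i ~/.ssh/sdc2_key centos@<floating-ip>

    # Copy files to the VM in the same way
    scp -i ~/.ssh/sdc2_key my_script.py centos@<floating-ip>:~/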

Detailed information about the EngageSKA cluster can be found in the SKA telescope developer portal, including the cluster specifications, how to access the cluster and the network, how to use the VPN, and the OpenStack platform.

In addition, LCA-UC will provide access to its HPC facility.

Per user resource

Suitable flavours for VMs on the EngageSKA cloud are:

16/32+ vCPUs

48+ GB RAM

200+ GB disk

(These may increase after the current upgrade cycle.)

Suitable allocations on the LCA-UC HPC cluster (for a fixed CPU-time duration agreed with the SDC2 hosts) are:

32+ cores

48+ GB RAM

200+ GB disk


Software installed

The VMs run Linux (CentOS). Users can install software, but EngageSKA only supports the tools that are central to the system and does not offer help troubleshooting other software/code.
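As a sketch of installing extra tools on a CentOS VM with sudo (the packages below are examples only, not a required or supported set):

    # System packages from the CentOS repositories
    sudo yum install -y epel-release
    sudo yum install -y git gcc python3 python3-pip

    # User-level Python packages, no sudo needed
    pip3 install --user numpy astropy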

Volume of resource

The EngageSKA infrastructure can easily accommodate five teams in a virtualised environment (increasing to up to ten teams by January 2021); any number of accounts (corresponding to team members) can access the assigned VM. A further five teams can be hosted on the HPC platform.

GPUs if any

The gpu partition provides access to NVIDIA Tesla V100 SXM2 GPUs (5120 CUDA cores).
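As a hedged sketch of requesting a GPU through SLURM (the partition name follows the description above, but the exact partition and gres names should be checked in the access documentation):

    # Ask for one V100 on the gpu partition and verify it is visible
    srun --partition=gpu --gres=gpu:1 --time=00:10:00 nvidia-smi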

User access

Opening accounts

Logging in

Setting up the VPN 

Access to OpenStack requires setting up a VPN. Instructions can be found in the access documentation, to be provided.
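Purely as an illustration, assuming an OpenVPN-based client and a connection profile supplied with the access documentation (the file name is a placeholder), the VPN could be started with:

    # Start the VPN using the supplied profile (placeholder file name)
    sudo openvpn --config engageska.ovpn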

Accessing OpenStack

Once inside the VPN, you can access OpenStack through its web dashboard. Set the domain to "default" (to be confirmed) and use your VPN credentials. It is highly recommended that you reset your password on first login (user settings menu in the OpenStack dashboard).
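Besides the dashboard, the generic OpenStack command-line client can also be used once an RC file for your project has been downloaded from the dashboard; this is standard OpenStack usage rather than an EngageSKA-specific instruction, and the file name below is a placeholder:

    # Load the project credentials downloaded from the dashboard
    source sdc2-team-openrc.sh

    # List the project's instances and floating IPs
    openstack server list
    openstack floating ip list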

Authentication  

How to run a workflow

Users are free to run their workflows as they require, since they have uninterrupted access to their assigned VM.
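Because the VM is always available, a long workflow can simply be left running in the background so it survives SSH disconnections; for example (the script name is a placeholder):

    # Run the workflow detached from the terminal and log its output
    nohup ./run_workflow.sh > workflow.log 2>&1 &

    # Alternatively, keep an interactive session alive with tmux
    tmux new -s sdc2      # start a named session and launch the workflow inside it
    # detach with Ctrl-b d, reattach later with: tmux attach -t sdc2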

Accessing the data cube

The data cube will be available via a volume mounted on the team's VM.
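The exact mount point will be given in the access documentation; as a sketch with a placeholder path, the read-only cube can be inspected and outputs written elsewhere:

    # Check the mounted data volume and list the cube (placeholder path)
    df -h /mnt/sdc2_data
    ls -lh /mnt/sdc2_data

    # The volume is read only, so write any products to your own area
    mkdir -p ~/sdc2_results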

Software management

Users can install software, but EngageSKA only supports the tools that are central to the system. 

Containerisation

Users can install and use Docker or Singularity to run containers on their VMs; on the HPC cluster, Singularity is available.
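As an illustration of both options (the image names are examples only):

    # Docker on a team VM (requires sudo)
    sudo docker run --rm ubuntu:20.04 cat /etc/os-release

    # Singularity, usable on the VMs and on the HPC cluster
    singularity pull docker://ubuntu:20.04        # creates ubuntu_20.04.sif
    singularity exec ubuntu_20.04.sif cat /etc/os-release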

Documentation

Documentation will be hosted on the SDC2 website and in the dedicated access documentation to be provided.

Resource management

Separate VMs per team will isolate the team environments. The VM flavour will be fixed for the duration of the project and will thus cap usage.

For the HPC cluster, resource management will be done via the SLURM job scheduler; each team will be assigned an account with a predefined number of hours.
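As a sketch of a minimal SLURM submission (the account name, resources and script name are placeholders; the real values will be provided when accounts are created), a batch script job.slurm could contain:

    #!/bin/bash
    # Placeholder account name, resources and workflow script
    #SBATCH --account=sdc2_team01
    #SBATCH --ntasks=32
    #SBATCH --time=24:00:00
    ./run_workflow.sh

It would then be submitted and monitored, and the consumed time checked, with:

    sbatch job.slurm
    squeue -u $USER
    sacct -u $USER --format=JobID,Elapsed,CPUTime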

Statistics 

Limitation of use


Support

Technical support

For support with our services, please contact us at: [support email to be confirmed].

Contacts

Credits and acknowledgements 
