Resources per team: 1 Virtual Machine (VM) assigned per team
Resource Access: SSH access to the VMs via a login node. UI-based applications can be run via X11 forwarding.
Data cube access: Multiple copies of the data cube, plus a backup copy, available via volume mounts on the team VMs. Each copy will likely be shared among a few teams.
Resource management: Separate VMs isolate team environments, and the VM flavour will be fixed for the duration of the challenge, so usage is capped.
Software management: Participants will install their own software on their VMs where possible but may need to request installations.
Documentation: Resource access information can be hosted on the SDC2 webpage, and workflows can be run freely as teams will have full access to their VMs. STFC Cloud user docs are hosted at https://stfc-cloud-docs.readthedocs.io/en/latest/
Support: Support will be provided via the teams in the SKA SAFe program. VM/network-level support will be provided by STFC Cloud support, who are available via their ticketing system and aim to address tickets by the next business day.
Resource location: UK
The STFC Cloud, hosted at the Rutherford Appleton Laboratory, provides IaaS cloud resources provisioned via OpenStack. Within an OpenStack project, VMs can be spun up with a desired flavour (i.e. CPU, RAM and disk requirements). VMs of a predetermined flavour would be spun up for each team.
Per user resource
A suitable flavour for VMs on the STFC Cloud (c1.3xl) provides:
124 GB RAM
800 GB disk
VMs can run Ubuntu Bionic or Scientific Linux 7. Team members can install software onto the VMs themselves.
Volume of resource
Roughly 10 simultaneous teams can be supported for 6 months, each with a VM of the c1.3xl flavour described above. Supporting fewer than 10 simultaneous teams would make it easier to provision a larger VM per team (i.e. flavour c1.4xl). If the SDC2 timeline shifts further into 2021, it would become easier to support 10 simultaneous teams each with a c1.4xl VM.
Any number of accounts (corresponding to team members) can access the assigned VM.
GPUs if any
SSH access will be used to log in to the assigned VM. Users would SSH to a login node, which serves as a jump machine, and from there SSH to their assigned VM. We would likely give SSH access to the team leader and provide them with a username and an IP address. The team leader can then add, or request the addition of, other team members and grant them access as needed.
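As a sketch, access through the jump machine might look like the following. The hostname, username and IP address here are placeholders; the actual values would be provided to the team leader.

```
# Hop through the login node to the team VM in one step.
# "login.example.ac.uk" and "192.0.2.10" are placeholders, not real addresses.
ssh -J teamuser@login.example.ac.uk teamuser@192.0.2.10

# Equivalently, in ~/.ssh/config:
Host sdc2-vm
    HostName 192.0.2.10
    User teamuser
    ProxyJump teamuser@login.example.ac.uk
```

With the config entry in place, `ssh sdc2-vm` connects via the login node automatically.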
How to run a workflow
Users are free to run their workflows as they require, since they have uninterrupted access to their assigned VM.
Accessing the data cube
The data cube will be available via a volume mounted on the team's VM.
Users will install their own software. They are encouraged to use containerisation applications where possible, and can request software installations where absolutely required.
Users can install and use docker or singularity to run containers on their VMs.
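For example, a team could run their pipeline inside a container with the mounted data cube volume bound in. This is a sketch only: the image names, script and mount point (/mnt/sdc2) are hypothetical.

```
# Docker: bind-mount the data cube volume read-only into the container.
# "myteam/pipeline" and "/mnt/sdc2" are placeholders.
docker run --rm -v /mnt/sdc2:/data:ro myteam/pipeline python find_sources.py /data/cube.fits

# Singularity: bind the same host path into the container image.
singularity exec --bind /mnt/sdc2:/data pipeline.sif python find_sources.py /data/cube.fits
```

Binding the volume read-only (`:ro`) is a sensible precaution when a data cube copy is shared among several teams.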
Instructions for SDC2 use are here; general STFC Cloud user docs are hosted at https://stfc-cloud-docs.readthedocs.io/en/latest/
Separate VMs per team will isolate the team environments. The VM flavour will be fixed for the duration of the project and will thus limit/cap usage.
Currently, OpenStack does not have the capability to enforce per-user resource limits or resource management within a project.
Support will be provided via the teams in the SKA SAFe program. Please email us at SDC2-IRISemail@example.com
Credits and acknowledgements
IRIS is funded by the Science & Technology Facilities Council.
STFC is one of the seven research councils within UK Research & Innovation.