Scoring code

A description of the scoring procedure is now available. 

Scoring your performance during the challenge

Challenge submissions have been scored using a dedicated scoring service, provided as a pip-installable Python package here.

This service was designed to allow participating teams to score their own source catalogues from any machine using a simple command-line tool. The tool submits a catalogue to a remote server, where sources are cross-matched against a truth catalogue and a score, weighted for accuracy, is computed.
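The exact server-side metric is specific to the challenge, but the general idea of cross-matching a submitted catalogue against a truth catalogue and weighting each match by accuracy can be sketched as follows. This is a minimal illustration only, assuming hypothetical file names, hypothetical column names (ra, dec, flux) and an arbitrary 5 arcsec match tolerance; it does not reproduce the scoring service's actual pipeline.

    # Illustrative sketch only; file names, column names and the tolerance
    # are assumptions, not the challenge's real formats or metric.
    import numpy as np
    import pandas as pd
    import astropy.units as u
    from astropy.coordinates import SkyCoord

    submitted = pd.read_csv("submitted_catalogue.csv")   # hypothetical submitted catalogue
    truth = pd.read_csv("truth_catalogue.csv")           # hypothetical truth catalogue

    sub_coords = SkyCoord(ra=submitted["ra"].values * u.deg,
                          dec=submitted["dec"].values * u.deg)
    truth_coords = SkyCoord(ra=truth["ra"].values * u.deg,
                            dec=truth["dec"].values * u.deg)

    # Nearest-neighbour cross-match of each submitted source against the truth catalogue.
    idx, sep2d, _ = sub_coords.match_to_catalog_sky(truth_coords)

    # Keep matches within an assumed positional tolerance.
    matched = sep2d < 5.0 * u.arcsec

    # Weight each match by how accurately a measured property (here, flux)
    # reproduces the corresponding truth value.
    flux_err = np.abs(submitted["flux"].values[matched] - truth["flux"].values[idx[matched]])
    accuracy = 1.0 - np.clip(flux_err / truth["flux"].values[idx[matched]], 0.0, 1.0)

    print(f"Matched {matched.sum()} sources; illustrative score = {accuracy.sum():.2f}")

In practice this matching is performed remotely by the service, so during the challenge teams only needed to submit their catalogue through the command-line tool.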

At the start of the challenge, only scores for the development datasets were returned. The service later returned scores for all three datasets ('full', 'ldev' and 'dev'). A live leaderboard was automatically updated every time a team achieved a new high score against the full challenge dataset. The maximum number of submissions per team was set to 30 per day.

Scoring your performance after the end of the challenge

For evaluating performance after the close of the challenge, a scoring module is now available as a pip-installable Python package here. To run this package, you need to download the truth catalogues for the datasets of interest (available here).
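As a rough guide, offline scoring then amounts to pointing the module at your own catalogue and the downloaded truth catalogue. The sketch below is purely illustrative: the package, class and method names (sdc_scorer, Scorer, run) are placeholders, not the real API; refer to the package's own documentation for the actual interface.

    # Hypothetical offline-scoring sketch; sdc_scorer, Scorer and run() are
    # placeholder names, not the package's real API.
    import pandas as pd
    from sdc_scorer import Scorer  # placeholder import

    submitted = pd.read_csv("my_catalogue.csv")   # your own source catalogue
    truth = pd.read_csv("truth_full.csv")         # downloaded truth catalogue

    scorer = Scorer(submitted, truth)             # placeholder constructor
    result = scorer.run()                         # placeholder scoring call
    print(result.score)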