Urban Radiological Search Data Competition

This competition was conducted in the winter and spring of 2018. The leaderboard is now frozen for posterity, but competitors old and new may still access the data and have their submissions scored.

Welcome to the Urban Radiological Search Data Competition at datacompetitions.lbl.gov! Our goals as competition hosts are to spur new, innovative thinking in radiation detection algorithms, to engage a broader community in nonproliferation problems, and to demonstrate the use of a competition framework to explore mobile searches for non-natural radiation sources in urban environments.

For this competition, we have used radiation transport simulations to create thousands of data sets resembling typical radiological search data collected on urban streets in a mid-sized U.S. city. Each data set (a “run”) simulates the detection events recorded by a standard 2”×4”×16” thallium-doped sodium iodide, NaI(Tl), detector carried along several city street blocks in a search vehicle. All of the runs contain radiation-detector interaction events from natural background sources (the roadway, sidewalks, and buildings), and some of the runs also contain events arising from non-natural extraneous radiation sources.
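The exact on-disk layout of a run is specified in the competition’s data description; as a minimal sketch, the snippet below assumes each run is a two-column CSV of list-mode events (time since the previous event, deposited energy) and shows how one might recover absolute event times and a coarse count-rate profile. The file name and column layout are illustrative assumptions, not the confirmed format.

```python
import numpy as np

def load_run(path):
    # ASSUMED layout: two CSV columns per event --
    # (time since previous event, deposited energy).
    # Check the competition data description for the real format.
    dt, energy = np.loadtxt(path, delimiter=",", unpack=True)
    t = np.cumsum(dt)  # absolute event times from inter-arrival times
    return t, energy

# Hypothetical file name; bin events into a coarse count-rate profile.
t, energy = load_run("run-000001.csv")
counts, edges = np.histogram(t, bins=np.arange(0.0, t[-1] + 1.0, 1.0))
```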

The event data created for this competition are derived from a simplified radiation transport model: a street without parked cars, pedestrians, or other ‘clutter’; a constant search-vehicle speed with no stoplights; and no vehicles surrounding the search vehicle. In fact, the search vehicle itself is not even in the model; the detector moves down the street by itself, 1 meter off the ground. This simple model provides a starting point for comparing detection algorithms at their most basic level.
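One consequence of this simplified geometry is that the signal from a static point source has a predictable shape: with the detector moving at constant speed v past a source at closest-approach distance d0, the standoff distance is d(t) = sqrt(d0² + v²(t − t0)²), and the unattenuated count rate falls off roughly as 1/d(t)². Below is a minimal sketch of that profile; all parameter values are illustrative assumptions, not values from the competition model.

```python
import numpy as np

def point_source_profile(t, t0, v, d0, s0):
    # Count rate vs. time for a detector passing a static point source
    # at constant speed, ignoring attenuation and scatter.
    # t0: time of closest approach (s), v: speed (m/s),
    # d0: closest-approach distance (m), s0: rate at 1 m (counts/s).
    d_sq = d0**2 + (v * (t - t0)) ** 2  # squared standoff distance
    return s0 / d_sq                    # inverse-square falloff

# Illustrative parameters: 60 s run, closest approach at t = 30 s.
t = np.linspace(0.0, 60.0, 601)
profile = point_source_profile(t, t0=30.0, v=5.0, d0=10.0, s0=1.0e4)
```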

The runs are separated into two sets: a training set, for which a file with the correct answers (source type and the time at which the detector was closest to the source) is also provided, and a test set, for which you will populate and submit an answers file for online scoring. For each run in the test set, you’ll apply your detection algorithm to the event data to determine (1) whether there is a non-natural extraneous source along the path (detection) and, if so, (2) what type of source it is (identification) and (3) at what point the detector was closest to it during the run (location, reported in this competition as seconds from the start of the run).
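The precise submission format is defined by the competition site; as a minimal sketch, the snippet below writes an answers file under the assumption that each test run gets one row of (run ID, source type, time of closest approach in seconds), with a source type of 0 meaning no non-natural source is present. The column names and type codes are assumptions for illustration only.

```python
import csv

# Hypothetical per-run outputs from a detection algorithm:
# (run_id, source_type, seconds_to_closest_approach).
# source_type 0 is assumed to mean "no non-natural source present".
results = [
    ("100001", 0, 0.0),    # no source detected in this run
    ("100002", 3, 42.5),   # source of (assumed) type 3, closest at 42.5 s
]

with open("answers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["RunID", "SourceID", "SourceTime"])  # assumed headers
    writer.writerows(results)
```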

This effort is supported by the Enabling Capabilities for Nonproliferation and Arms Control (EC) Program Area of the Office of Defense Nuclear Nonproliferation Research and Development (DNN R&D), part of the National Nuclear Security Administration (NNSA), a semi-autonomous agency within the US Department of Energy. The DNN R&D office funds basic research to support other programs within the NNSA and other government agencies that have missions involving radiation detection.

How It Works

The competition will run from January 22 to April 9, 2018. Within that window, each team can submit its answers for the runs in the test set (login required). Competitors may form their own autonomous teams or join a larger team, in which case the team’s best scores are used. Users (e.g., national laboratory employees or other researchers receiving US government sponsorship) may request an account here. During registration, or after the account has been approved, users may request to join an existing team within any active competition (for now, that’s just this competition). Approved users may also request to create their own team. Requests to join or create a team require approval by the site administrators or the existing team’s point of contact. Once a team is created, its submitted answers will appear on the public leaderboard.

We allow up to 1000 submissions per team during the competition, each of which will be scored. Each team’s best score will be reported on the public leaderboard (login required). The public rankings will be based on your answers to approximately 43% of the test set runs.

The final rankings will be based on the remaining 57% of runs and so may be different from what the public leaderboard shows. The top three teams in the final rankings will be recognized at an upcoming DNN R&D review meeting.

Follow-on Public Competition

We plan to host the same competition (same data sets, same scoring system) on a public forum such as Kaggle (www.kaggle.com) or TopCoder (www.topcoder.com) at a later date. Participants in the current competition will not be eligible to participate in the public competition.

LA-UR-17-26899

Teams

LBNL
LANL-WAREHOUSE
ORNL
INL-PINS
Team Python Hacks
PNNL
SNL_Statistical_Sciences
LANL-PetaVision
RSLA-Tom
NNSS-NLV
RSLA-DSI
Roll_Tide
LLNL-NS
LLNL-DS
JHUAPL
Roqueta