GeoAI Arctic Mapping Challenge

Welcome to the 2025 GeoAI Arctic Mapping Challenge

words from organizers

Registration & Setup

  1. Go to the Codabench Challenge Page.
  2. Click β€œJoin Challenge”.
  3. Sign in using GitHub, Google, or your institutional email.
  4. Accept the data license agreement before accessing the dataset.
  5. You’re ready to start! πŸš€

Download the Dataset

Once registered, you can:

  • Download the dataset directly from Codabench.
  • Or visit the Dataset Page for detailed information and visualizations.

Dataset Highlights:

  • RGB + Sentinel-2 multi-spectral imagery.
  • High-resolution ArcticDEM elevation data.
  • COCO-style instance segmentation annotations.

Data Format

The dataset is organized into train and test splits with COCO-style annotations:

dataset/
β”œβ”€β”€ train/
β”‚   β”œβ”€β”€ images/              # Multi-band GeoTIFFs
β”‚   β”œβ”€β”€ masks/               # Per-instance RTS masks
β”‚   └── annotations.json     # COCO-style labels
└── test/
    β”œβ”€β”€ images/
    └── sample_submission.json

Item          Format   Description
Imagery       .tif     Multi-band satellite imagery
Annotations   .json    COCO-style instance segmentation
Masks         .png     Per-instance RTS masks
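
To make this layout concrete, here is a minimal loading sketch. It assumes rasterio and pycocotools are installed and uses a hypothetical tile name (image_001.tif); adjust the paths to the actual files you downloaded.

import rasterio                      # reads multi-band GeoTIFFs
from pycocotools.coco import COCO    # parses COCO-style annotations

# Hypothetical paths; substitute real tile names from the dataset.
image_path = "dataset/train/images/image_001.tif"
annotation_path = "dataset/train/annotations.json"

# Read every band into a (bands, height, width) array.
with rasterio.open(image_path) as src:
    image = src.read()
    print("array shape:", image.shape)
    print("bands:", src.count, "size:", src.width, "x", src.height)
    print("CRS:", src.crs)

# Load the COCO-style labels and count instances for the first image.
coco = COCO(annotation_path)
first_image_id = coco.getImgIds()[0]
ann_ids = coco.getAnnIds(imgIds=[first_image_id])
print(len(coco.loadAnns(ann_ids)), "RTS instances in the first image")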

Baseline Notebook & Starter Kit

We provide a baseline notebook to help you get started quickly.

Starter Kit Includes:

  • Data loading & visualization examples (see the sketch after this list).
  • Baseline multimodal GeoAI model from Li et al. (2025).
  • Sample inference code.
  • Example submission formatting.
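
As a taste of the kind of visualization the notebook provides, the sketch below overlays instance masks on an RGB composite. It assumes the first three bands are RGB and that masks follow an image_001_*.png naming pattern; both are illustrative assumptions, and the starter kit uses the real layout.

import glob

import matplotlib.pyplot as plt
import numpy as np
import rasterio
from PIL import Image

image_path = "dataset/train/images/image_001.tif"                      # hypothetical tile
mask_paths = sorted(glob.glob("dataset/train/masks/image_001_*.png"))  # assumed naming

# Build a displayable RGB composite from the first three bands (assumption).
with rasterio.open(image_path) as src:
    rgb = src.read([1, 2, 3]).astype(np.float32)
rgb = np.transpose(rgb, (1, 2, 0))
rgb = (rgb - rgb.min()) / (rgb.max() - rgb.min() + 1e-6)   # quick contrast stretch

plt.imshow(rgb)
for path in mask_paths:
    mask = np.array(Image.open(path).convert("L")) > 0     # single-channel binary mask
    plt.contour(mask.astype(float), levels=[0.5], colors="red", linewidths=1)
plt.title("RTS instances (red outlines) over RGB")
plt.axis("off")
plt.show()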

Task Description

Your task is to predict per-instance retrogressive thaw slump (RTS) masks using satellite imagery and topographic data.

Input                                 Output
Multi-band satellite imagery (.tif)   RTS instance segmentation masks
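
For orientation only, here is a sketch of the expected input/output shapes using a generic torchvision Mask R-CNN on a three-band composite. This is not the Li et al. (2025) baseline; the model, band selection, and use of the elevation data are yours to design.

import rasterio
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Generic off-the-shelf instance segmentation model, NOT the challenge baseline.
model = maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Hypothetical test tile; only the first three bands are used for illustration.
with rasterio.open("dataset/test/images/image_001.tif") as src:
    img = src.read([1, 2, 3]).astype("float32")
img = (img - img.min()) / (img.max() - img.min() + 1e-6)   # scale to [0, 1]

with torch.no_grad():
    outputs = model([torch.from_numpy(img)])               # list with one dict per image

pred = outputs[0]
masks = pred["masks"] > 0.5    # (N, 1, H, W) boolean instance masks
scores = pred["scores"]        # one confidence score per instance
print(f"{masks.shape[0]} candidate instances")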

Evaluation Metrics

Submissions are ranked on Codabench using the following metrics:

Metric       Definition                                                      Range   Higher = Better?
mAP          Mean Average Precision averaged over IoU thresholds             0–1     ✅
mAP50        Mean Average Precision at IoU threshold 0.50                    0–1     ✅
mAP75        Mean Average Precision at IoU threshold 0.75                    0–1     ✅
mAP-small    Mean Average Precision over IoU thresholds for small objects    0–1     ✅
mAP-medium   Mean Average Precision over IoU thresholds for medium objects   0–1     ✅
mAP-large    Mean Average Precision over IoU thresholds for large objects    0–1     ✅

Primary ranking is based on mAP.
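
Official scoring happens on Codabench, but you can reproduce the same COCO-style metrics locally with pycocotools, assuming you hold out a validation split with ground-truth annotations and write your predictions in the COCO results format (the file names below are hypothetical).

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Hypothetical local validation split and matching predictions file.
gt = COCO("val_annotations.json")             # ground-truth COCO annotations
dt = gt.loadRes("val_predictions.json")       # your predictions, COCO results format

evaluator = COCOeval(gt, dt, iouType="segm")  # "segm" evaluates the masks themselves
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints mAP, mAP50, mAP75 and the small/medium/large breakdown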

Results Format (Submission Format)

Your submission must follow the COCO-style JSON results format expected by Codabench.

Submission Folder Structure:

submission/
β”œβ”€β”€ predictions/
β”‚   β”œβ”€β”€ image_001.json
β”‚   β”œβ”€β”€ image_002.json
β”‚   └── ...
└── submission.json

Example submission.json:

[
  {
    "image_id": "image_001",
    "category_id": 1,
    "segmentation": [ ... ],
    "score": 0.93
  },
  {
    "image_id": "image_002",
    "category_id": 1,
    "segmentation": [ ... ],
    "score": 0.87
  }
]

⚠️ Important: Submissions that do not follow this format will be automatically rejected.
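
The sketch below shows one way to turn binary instance masks into records of this shape and write submission.json. It encodes masks as COCO RLE via pycocotools; the example above shows polygon-style segmentation fields, so confirm on Codabench which encoding the scorer accepts. The mask_to_record helper and the dummy mask are illustrative, not part of the starter kit.

import json

import numpy as np
from pycocotools import mask as mask_utils

def mask_to_record(image_id, binary_mask, score, category_id=1):
    # Encode one boolean instance mask as COCO run-length encoding (RLE).
    rle = mask_utils.encode(np.asfortranarray(binary_mask.astype(np.uint8)))
    rle["counts"] = rle["counts"].decode("utf-8")   # bytes -> str so json can serialize it
    return {
        "image_id": image_id,
        "category_id": category_id,
        "segmentation": rle,
        "score": float(score),
    }

# Tiny dummy prediction so the sketch runs end to end.
dummy_mask = np.zeros((64, 64), dtype=bool)
dummy_mask[16:48, 16:48] = True
results = [mask_to_record("image_001", dummy_mask, score=0.93)]

with open("submission.json", "w") as f:
    json.dump(results, f)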

Submission Instructions

  1. Train your model locally.
  2. Generate predictions for the test set.
  3. Package predictions in the required submission format.
  4. Compress into a .zip file (see the helper sketch after this list).
  5. Upload to Codabench Submissions.
  6. Check your score on the leaderboard.
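
For step 4, a small helper like the following (assuming the submission/ folder layout shown earlier) produces the .zip to upload:

import shutil

# Zip the submission/ folder into submission.zip for upload.
shutil.make_archive("submission", "zip", root_dir="submission")
print("wrote submission.zip")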

What Participants Should Submit

In addition to leaderboard submissions, top-performing teams will be invited to contribute to an outcome publication. To be eligible, please prepare the following:

1. Predictions (Required for All Teams)

  • COCO-style submission.json file (see Results Format).
  • Uploaded via Codabench Submissions.

2. Technical Report (Required for Top Teams)

Teams aiming for awards and paper inclusion must submit a short technical report (2–4 pages) covering:

  • Team & Affiliations (members, institution, country).
  • Model Architecture (diagrams encouraged: CNN, Transformer, hybrid, etc.).
  • Training Strategy (losses, augmentations, optimizers, batch size, hardware).
  • Special Techniques (domain adaptation, self-training, ensembling, band selection, etc.).
  • Results & Ablations (validation metrics, ablation tables, sample outputs).
  • Insights (challenges faced, lessons learned, potential future improvements).
  • References to related works.

A report template (LaTeX/Word) will be provided to keep submissions consistent.

3. Code & Reproducibility Package (Required for Award Eligibility)

  • Final inference code or container to verify results (an illustrative entry-point sketch follows this list).
  • Training scripts are encouraged but optional.
  • Instructions for reproducing predictions on the test set.
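
One possible shape for the inference entry point is sketched below; the script name, flags, and internal logic are illustrative only, not a required interface. Whatever you submit, the key point is that reviewers can run a single documented command to regenerate your test-set predictions.

# run_inference.py -- illustrative entry point for the reproducibility package.
import argparse
import json
from pathlib import Path

def main():
    parser = argparse.ArgumentParser(description="Regenerate test-set predictions")
    parser.add_argument("--images", required=True, help="directory of test GeoTIFFs")
    parser.add_argument("--weights", required=True, help="trained model checkpoint")
    parser.add_argument("--output", default="submission.json", help="output predictions file")
    args = parser.parse_args()

    results = []
    for tif in sorted(Path(args.images).glob("*.tif")):
        # Load the tile, run your trained model, and append COCO-style records here.
        pass

    with open(args.output, "w") as f:
        json.dump(results, f)

if __name__ == "__main__":
    main()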

Challenge Phases

The competition consists of two phases:

Phase             Timeline                      Description                                                Awards
Challenge Phase   Oct 01, 2025 → May 01, 2026   Compete with others, submit predictions, improve models   🏅 Yes
Benchmark Phase   Starts Dec 01, 2026           Evaluate models freely, test reproducibility, no awards   ❌ No

  • Challenge Phase β†’ Compete for prizes and leaderboard positions.
  • Benchmark Phase β†’ Keep testing your models after the official challenge ends.

Important Dates

Event                    Date
Competition Opens        October 1, 2025
Dataset Released         October 1, 2025
Final Submission         May 1, 2026
Winners Announced        June 1, 2026
Benchmark Phase Starts   December 1, 2026

Qualification, Rules, and Awards

Qualification

The competition is open to everyone, including students, academics, and industry professionals. We encourage participants from diverse backgrounds in computer science, remote sensing, geoscience, and related fields to form teams and compete.

Rules

  • Each team can consist of 1 to 5 members.
  • Each team is limited to 2 submissions per day and 100 submissions in total.
  • The use of external data is permitted. Please see the Codabench page for specifics.
  • Code for the top-performing models must be submitted for verification to be eligible for awards.
  • For a complete list of rules, please visit the competition page on Codabench.

Awards

Prizes will be awarded to the top-performing teams based on the final leaderboard standings.

  • 1st Place: $1,000 USD
  • 2nd Place: $500 USD
  • 3rd Place: $200 USD

Support & Discussion

Leaderboard

The leaderboard will be updated regularly with the latest submissions.