Welcome to the 2025 GeoAI Arctic Mapping Challenge
[Words from the organizers]
Registration & Setup
- Go to the Codabench Challenge Page.
- Click “Join Challenge”.
- Sign in using GitHub, Google, or your institutional email.
- Accept the data license agreement before accessing the dataset.
- You're ready to start! 🎉
Download the Dataset
Once registered, you can:
- Download the dataset directly from Codabench.
- Visit the Dataset Page for detailed information and visualizations.
Dataset Highlights:
- RGB + Sentinel-2 multi-spectral imagery.
- High-resolution ArcticDEM elevation data.
- COCO-style instance segmentation annotations.
Data Format
The dataset is organized into train and test splits with COCO-style annotations:
```
dataset/
├── train/
│   ├── images/            # Multi-band GeoTIFFs
│   ├── masks/             # Per-instance RTS masks
│   └── annotations.json   # COCO-style labels
└── test/
    ├── images/
    └── sample_submission.json
```
| Item | Format | Description |
|---|---|---|
| Imagery | `.tif` | Multi-band satellite imagery |
| Annotations | `.json` | COCO-style instance segmentation |
| Masks | `.png` | Per-instance RTS masks |
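To get oriented with the data, here is a minimal loading sketch. It assumes the `rasterio` and `pycocotools` packages and the directory layout above; nothing in it is part of the official starter kit.

```python
import rasterio                    # reads multi-band GeoTIFFs
from pycocotools.coco import COCO  # parses COCO-style annotations

# Load the training annotations (path follows the layout above).
coco = COCO("dataset/train/annotations.json")

# Take the first image record and open its GeoTIFF.
img_info = coco.loadImgs(coco.getImgIds())[0]
with rasterio.open(f"dataset/train/images/{img_info['file_name']}") as src:
    bands = src.read()  # array of shape (num_bands, height, width)
    print("bands:", src.count, "size:", src.width, "x", src.height)

# Rasterize the first RTS instance mask for this image, if any.
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_info["id"]))
if anns:
    mask = coco.annToMask(anns[0])  # binary (height, width) array
    print("instances:", len(anns), "first mask area:", int(mask.sum()))
```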
Baseline Notebook & Starter Kit
We provide a baseline notebook to help you get started quickly.
Starter Kit Includes:
- Data loading & visualization examples.
- Baseline multimodal GeoAI model from Li et al. (2025).
- Sample inference code.
- Example submission formatting.
Task Description
Your task is to predict per-instance retrogressive thaw slump (RTS) masks using satellite imagery and topographic data.
| Input | Output |
|---|---|
| Multi-band satellite imagery (`.tif`) | RTS instance segmentation masks |
Evaluation Metrics
Submissions are ranked on Codabench using the following metrics:

| Metric | Definition | Range | Higher = Better? |
|---|---|---|---|
| mAP | Mean Average Precision averaged over IoU thresholds 0.50–0.95 | 0–1 | ✅ |
| mAP50 | Average Precision at IoU threshold 0.50 | 0–1 | ✅ |
| mAP75 | Average Precision at IoU threshold 0.75 | 0–1 | ✅ |
| mAP-small | mAP over IoU thresholds 0.50–0.95 for small objects | 0–1 | ✅ |
| mAP-medium | mAP over IoU thresholds 0.50–0.95 for medium objects | 0–1 | ✅ |
| mAP-large | mAP over IoU thresholds 0.50–0.95 for large objects | 0–1 | ✅ |
Primary ranking is based on mAP.
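You can reproduce these metrics offline with `pycocotools`, which implements the standard COCO evaluation. A minimal sketch, assuming you hold out a validation split with ground-truth annotations and that your prediction file uses the same image IDs as that ground truth (the file names are placeholders):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations_val.json")            # hypothetical held-out split
coco_dt = coco_gt.loadRes("val_predictions.json")

# iouType="segm" scores instance masks; use "bbox" for boxes.
evaluator = COCOeval(coco_gt, coco_dt, iouType="segm")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints mAP, mAP50, mAP75, and small/medium/large APs
```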
Results Format (Submission Format)
Your submission must follow the Codabench COCO-style JSON format.
Submission Folder Structure:
```
submission/
├── predictions/
│   ├── image_001.json
│   ├── image_002.json
│   └── ...
└── submission.json
```
Example submission.json:
```json
[
  {
    "image_id": "image_001",
    "category_id": 1,
    "segmentation": [ ... ],
    "score": 0.93
  },
  {
    "image_id": "image_002",
    "category_id": 1,
    "segmentation": [ ... ],
    "score": 0.87
  }
]
```
⚠️ Important: Submissions that do not follow this format will be automatically rejected.
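A sketch of how predictions might be serialized into this format, assuming your model outputs binary instance masks as NumPy arrays. It RLE-encodes each mask with `pycocotools`; whether the scorer expects RLE or polygon segmentations is not specified here, so confirm against the starter kit's example submission:

```python
import json
import numpy as np
from pycocotools import mask as mask_utils

def to_result(image_id, binary_mask, score, category_id=1):
    """Convert one predicted binary mask into a COCO-style result dict."""
    rle = mask_utils.encode(np.asfortranarray(binary_mask.astype(np.uint8)))
    rle["counts"] = rle["counts"].decode("ascii")  # bytes -> str for JSON
    return {
        "image_id": image_id,
        "category_id": category_id,
        "segmentation": rle,
        "score": float(score),
    }

# Hypothetical predictions: (image_id, mask, confidence) triples.
results = [to_result("image_001", np.zeros((256, 256), dtype=np.uint8), 0.93)]

with open("submission/submission.json", "w") as f:
    json.dump(results, f)
```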
Submission Instructions
- Train your model locally.
- Generate predictions for the test set.
- Package predictions in the required submission format.
- Compress into a `.zip` file.
- Upload to Codabench Submissions.
- Check your score on the leaderboard.
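For the packaging step, Python's standard library is enough. A sketch that zips the `submission/` folder from above; whether the scorer expects the folder's contents or the folder itself at the archive root is platform-specific, so check the Codabench page:

```python
import shutil

# Creates submission.zip containing the contents of submission/ at its root.
shutil.make_archive("submission", format="zip", root_dir="submission")
```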
Challenge Phases
The competition consists of two phases:
| Phase | Timeline | Description | Awards |
|---|---|---|---|
| Challenge Phase | Sep 09 – Nov 30, 2025 | Compete with others, submit predictions, improve models | 🏆 Yes |
| Benchmark Phase | Starts Dec 01, 2025 | Evaluate models freely, test reproducibility, no awards | ❌ No |
- Challenge Phase: Compete for prizes and leaderboard positions.
- Benchmark Phase: Keep testing your models after the official challenge ends.
Important Dates
| Event | Date |
|---|---|
| Competition Opens | Sep 09, 2025 |
| Dataset Released | Sep 09, 2025 |
| Final Submission | Nov 30, 2025 |
| Benchmark Phase Starts | Dec 01, 2025 |
| Winners Announced | Dec 15, 2025 |
Qualification, Rules, and Awards
Qualification
The competition is open to everyone, including students, academics, and industry professionals. We encourage participants from diverse backgrounds in computer science, remote sensing, geoscience, and related fields to form teams and compete.
Rules
- Each team can consist of 1 to [Number] members.
- Each team is limited to [Number] submissions per day.
- The use of external data is [permitted/not permitted]. Please see the Codabench page for specifics.
- Code for the top-performing models must be submitted for verification to be eligible for awards.
- For a complete list of rules, please visit the competition page on Codabench.
Awards
Prizes will be awarded to the top-performing teams based on the final leaderboard standings.
- 1st Place: [Prize]
- 2nd Place: [Prize]
- 3rd Place: [Prize]
Support & Discussion
- Codabench Discussion Forum: Join Here
- Contact Email: [email]
Leaderboard
The leaderboard will be updated regularly with the latest submissions.