Welcome to the 2025 GeoAI Arctic Mapping Challenge
Registration & Setup
- Go to the Codabench Challenge Page.
- Click "Join Challenge".
- Sign in using GitHub, Google, or your institutional email.
- Accept the data license agreement before accessing the dataset.
- You're ready to start! 🎉
Download the Dataset
Once registered, you can:
- Download the dataset directly from Codabench.
- Or visit the Dataset Page for detailed information and visualizations.
Dataset Highlights:
- RGB + Sentinel-2 multi-spectral imagery.
- High-resolution ArcticDEM elevation data.
- COCO-style instance segmentation annotations.
Data Format
The dataset is organized into train and test splits with COCO-style annotations:
dataset/
├── train/
│   ├── images/            # Multi-band GeoTIFFs
│   ├── masks/             # Per-instance RTS masks
│   └── annotations.json   # COCO-style labels
└── test/
    ├── images/
    └── sample_submission.json
Item | Format | Description |
---|---|---|
Imagery | .tif | Multi-band satellite imagery |
Annotations | .json | COCO-style instance segmentation |
Masks | .png | Per-instance RTS masks |
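For orientation, here is a minimal Python sketch of reading one training tile and its labels, assuming the rasterio and pycocotools packages are installed; the tile file name below is illustrative and not taken from the actual dataset.

```python
import rasterio
from pycocotools.coco import COCO

# Read a multi-band GeoTIFF as a (bands, height, width) array.
with rasterio.open("dataset/train/images/tile_0001.tif") as src:  # illustrative name
    image = src.read()            # numpy array, dtype per the source raster
    print(src.count, src.crs)     # number of bands and coordinate reference system

# Load the COCO-style annotations and list the instances for one image.
coco = COCO("dataset/train/annotations.json")
image_ids = coco.getImgIds()
ann_ids = coco.getAnnIds(imgIds=image_ids[0])
annotations = coco.loadAnns(ann_ids)
print(f"{len(annotations)} RTS instances in the first image")

# Convert one annotation to a binary (height, width) mask if needed.
mask = coco.annToMask(annotations[0])
```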
Baseline Notebook & Starter Kit
We provide a baseline notebook to help you get started quickly.
Starter Kit Includes:
- Data loading & visualization examples.
- Baseline multimodal GeoAI model from Li et al. (2025).
- Sample inference code.
- Example submission formatting.
Task Description
Your task is to predict per-instance retrogressive thaw slump (RTS) masks using satellite imagery and topographic data.
Input | Output |
---|---|
Multi-band satellite imagery (.tif) | RTS instance segmentation masks |
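To make the input/output contract concrete, the sketch below runs an off-the-shelf instance-segmentation model on one test tile and inspects the per-instance masks and scores it returns. It uses torchvision's Mask R-CNN purely as a placeholder (the starter kit's baseline from Li et al. (2025) is the intended starting point), and it assumes the first three bands approximate RGB; adapt the band selection and the model to the dataset's actual band order.

```python
import numpy as np
import rasterio
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Placeholder model; swap in the starter-kit baseline or your own network.
model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

# Illustrative file name; read the multi-band tile as (bands, H, W).
with rasterio.open("dataset/test/images/tile_0001.tif") as src:
    bands = src.read().astype(np.float32)

# Assumption: the first three bands approximate RGB; rescale to [0, 1].
rgb = bands[:3]
rgb = (rgb - rgb.min()) / (rgb.max() - rgb.min() + 1e-6)

with torch.no_grad():
    pred = model([torch.from_numpy(rgb)])[0]

# pred["masks"] has shape (N, 1, H, W); threshold to binary instance masks.
instance_masks = (pred["masks"][:, 0] > 0.5).numpy()
scores = pred["scores"].numpy()
print(instance_masks.shape, scores[:5])
```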
Evaluation Metrics
Submissions are ranked on Codabench using the following metrics:
Metric | Definition | Range | Higher = Better? |
---|---|---|---|
mAP | Mean Average Precision averaged over IoU thresholds | 0–1 | ✅ |
mAP50 | Mean Average Precision at IoU threshold 0.5 | 0–1 | ✅ |
mAP75 | Mean Average Precision at IoU threshold 0.75 | 0–1 | ✅ |
mAP-small | Mean Average Precision at IoU threshold 0.5 for small objects | 0–1 | ✅ |
mAP-medium | Mean Average Precision at IoU threshold 0.5 for medium objects | 0–1 | ✅ |
mAP-large | Mean Average Precision at IoU threshold 0.5 for large objects | 0–1 | ✅ |
Primary ranking is based on mAP.
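If you hold out part of the training data for validation, you can compute the same mAP family locally with pycocotools before submitting. A minimal sketch, assuming a COCO-style ground-truth file for your held-out split and a results file in the submission format described below (both file names are placeholders):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground truth for a locally held-out split and your predictions for it.
coco_gt = COCO("my_val_annotations.json")
coco_dt = coco_gt.loadRes("my_val_predictions.json")

# "segm" evaluates the instance masks rather than bounding boxes.
evaluator = COCOeval(coco_gt, coco_dt, iouType="segm")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP, AP50, AP75 and small/medium/large breakdowns
```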
Results Format (Submission Format)
Your submission must follow the COCO-style JSON format expected by Codabench.
Submission Folder Structure:
submission/
├── predictions/
│   ├── image_001.json
│   ├── image_002.json
│   └── ...
└── submission.json
Example submission.json:
[
{
"image_id": "image_001",
"category_id": 1,
"segmentation": [ ... ],
"score": 0.93
},
{
"image_id": "image_002",
"category_id": 1,
"segmentation": [ ... ],
"score": 0.87
}
]
⚠️ Important: Submissions that do not follow this format will be automatically rejected.
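As one way to produce this file, the sketch below converts binary instance masks into COCO-style records, encoding each segmentation as RLE with pycocotools. The helper name, the dummy mask, and the output path are illustrative; check the provided sample_submission.json for the exact segmentation encoding expected (polygon lists are also valid in the COCO results format).

```python
import json
import numpy as np
from pycocotools import mask as mask_utils

def to_result(image_id, category_id, binary_mask, score):
    """Build one COCO-style result record from a binary (H, W) mask."""
    rle = mask_utils.encode(np.asfortranarray(binary_mask.astype(np.uint8)))
    rle["counts"] = rle["counts"].decode("ascii")  # bytes -> str for JSON
    return {
        "image_id": image_id,
        "category_id": category_id,
        "segmentation": rle,
        "score": float(score),
    }

results = []
# For each test image, append one record per predicted RTS instance.
dummy_mask = np.zeros((512, 512), dtype=np.uint8)   # placeholder prediction
results.append(to_result("image_001", 1, dummy_mask, 0.93))

with open("submission.json", "w") as f:
    json.dump(results, f)
```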
Submission Instructions
- Train your model locally.
- Generate predictions for the test set.
- Package predictions in the required submission format.
- Compress your predictions into a .zip file (see the packaging sketch after this list).
- Upload to Codabench Submissions.
- Check your score on the leaderboard.
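A minimal sketch of the packaging step, assuming your predictions and submission.json already sit in a local submission/ folder laid out as shown above:

```python
import shutil

# Produces submission.zip containing the contents of ./submission
shutil.make_archive("submission", "zip", root_dir="submission")
```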
What Participants Should Submit
In addition to leaderboard submissions, top-performing teams will be invited to contribute to an outcome publication. To be eligible, please prepare the following:
1. Predictions (Required for All Teams)
- COCO-style submission.json file (see Results Format).
- Uploaded via Codabench Submissions.
2. Technical Report (Required for Top Teams)
Teams aiming for awards and paper inclusion must submit a short technical report (2–4 pages) covering:
- Team & Affiliations (members, institution, country).
- Model Architecture (diagrams encouraged: CNN, Transformer, hybrid, etc.).
- Training Strategy (losses, augmentations, optimizers, batch size, hardware).
- Special Techniques (domain adaptation, self-training, ensembling, band selection, etc.).
- Results & Ablations (validation metrics, ablation tables, sample outputs).
- Insights (challenges faced, lessons learned, potential future improvements).
- References to related works.
A report template (LaTeX/Word) will be provided to keep submissions consistent.
3. Code & Reproducibility Package (Required for Award Eligibility)
- Final inference code or container (to verify results).
- Training scripts are encouraged but optional.
- Instructions for reproducing predictions on the test set.
Challenge Phases
The competition consists of two phases:
Phase | Timeline | Description | Awards |
---|---|---|---|
Challenge Phase | Oct 01, 2025 – May 01, 2026 | Compete with others, submit predictions, improve models | 🏆 Yes |
Benchmark Phase | Starts Dec 01, 2026 | Evaluate models freely, test reproducibility, no awards | ❌ No |
- Challenge Phase: Compete for prizes and leaderboard positions.
- Benchmark Phase: Keep testing your models after the official challenge ends.
Important Dates
Event | Date |
---|---|
Competition Opens | October 1, 2025 |
Dataset Released | October 1, 2025 |
Final Submission | May 1, 2026 |
Winners Announced | June 1, 2026 |
Benchmark Phase Starts | December 1, 2026 |
Qualification, Rules, and Awards
Qualification
The competition is open to everyone, including students, academics, and industry professionals. We encourage participants from diverse backgrounds in computer science, remote sensing, geoscience, and related fields to form teams and compete.
Rules
- Each team can consist of 1 to 5 members.
- Each team is limited to 2 submissions per day and 100 submissions in total.
- The use of external data is permitted. Please see the Codabench page for specifics.
- Code for the top-performing models must be submitted for verification to be eligible for awards.
- For a complete list of rules, please visit the competition page on Codabench.
Awards
Prizes will be awarded to the top-performing teams based on the final leaderboard standings.
- 1st Place: $1,000 USD
- 2nd Place: $500 USD
- 3rd Place: $200 USD
Support & Discussion
- Codabench Discussion Forum: Join Here
Leaderboard
The leaderboard will be updated regularly with the latest submissions.