AtmosArena: Benchmarking Foundation Models for Atmospheric Sciences (Papers Track)

Tung Nguyen (University of California, Los Angeles); Prateik Sinha (University of California, Los Angeles); Advit Deepak (University of California, Los Angeles); Karen A. McKinnon (University of California, Los Angeles); Aditya Grover (University of California, Los Angeles)

NeurIPS 2024
Climate Science & Modeling · Meta- and Transfer Learning

Abstract

Deep learning has emerged as a powerful tool for atmospheric sciences, showing significant utility across various tasks in weather and climate modeling. In line with recent progress in language and vision foundation models, there are growing efforts to scale and finetune such models for multi-task spatiotemporal reasoning. Despite promising results, existing works often evaluate their models on a small set of non-uniform tasks, which makes it hard to quantify broad generalization across diverse tasks and domains. To address this challenge, we introduce AtmosArena, the first multi-task benchmark dedicated to foundation models in atmospheric sciences. AtmosArena comprises a suite of tasks that cover a broad spectrum of applications in atmospheric physics and atmospheric chemistry. To showcase the capabilities and key features of our benchmark, we conducted extensive experiments evaluating two state-of-the-art deep learning models, ClimaX and Stormer, on AtmosArena, and compared their performance with other deep learning and traditional baselines. By providing a standardized, open-source benchmark, we aim to facilitate further advancements in the field, much like open-source benchmarks have driven the development of foundation models for language and vision.