Artificial intelligence is rapidly transforming how we monitor natural hazards, predict risks, and design early warning systems. From satellite imagery to real-time analytics, AI now sits at the heart of disaster risk reduction. But innovation alone is not enough. As AI tools move from experimentation to operational use, responsibility, transparency, and global standards become just as critical as technical performance.
A newly published commentary in npj Natural Hazards explores an important question: Can online AI challenges train developers not only to innovate, but also to follow international standards for trustworthy AI? The answer, according to the authors, is cautiously optimistic: yes, if challenges are designed with intention.
Online AI competitions have become global classrooms for machine learning. They attract participants from diverse backgrounds, offering hands-on experience with real-world datasets and real societal problems. These challenges are especially valuable for early-career scientists and practitioners in regions with limited access to formal AI training.
The commentary focuses on a large international AI challenge for landslide detection using Earth observation data. Participants developed models to classify satellite images as landslide or non-landslide areas using optical (Sentinel-2) and radar (Sentinel-1) imagery.
The scale was impressive. Nearly 1,000 participants from more than 90 countries submitted close to 8,000 model runs. Winning teams used advanced deep-learning techniques, combining multiple data sources and ensemble methods to improve accuracy. From a technical standpoint, the challenge was a success.
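The commentary does not publish the winning code, but the general recipe it describes, stacking optical and radar bands into one input and averaging the predictions of several models, can be sketched as follows. The function names, array shapes, and toy "models" here are illustrative assumptions, not the participants' actual methods.

```python
import numpy as np

def stack_modalities(optical, radar):
    """Concatenate Sentinel-2 optical bands and Sentinel-1 radar bands
    along the channel axis to form one multi-modal input tensor."""
    return np.concatenate([optical, radar], axis=-1)

def ensemble_predict(models, x, threshold=0.5):
    """Average per-model landslide probabilities (a simple ensemble)
    and apply a decision threshold to get binary labels."""
    probs = np.mean([m(x) for m in models], axis=0)
    return (probs >= threshold).astype(int), probs

# Toy stand-ins for trained classifiers: each maps a batch of image
# patches to per-patch landslide probabilities. Real challenge entries
# would be deep networks trained on labeled patches.
model_a = lambda x: np.clip(x.mean(axis=(1, 2, 3)), 0.0, 1.0)
model_b = lambda x: np.clip(x.max(axis=(1, 2, 3)) * 0.5, 0.0, 1.0)

optical = np.random.rand(4, 32, 32, 10)  # 4 patches, 10 Sentinel-2 bands
radar = np.random.rand(4, 32, 32, 2)     # 2 Sentinel-1 polarizations
x = stack_modalities(optical, radar)     # shape (4, 32, 32, 12)
labels, probs = ensemble_predict([model_a, model_b], x)
print(labels, probs)
```

Averaging probabilities across models is one of the simplest ensembling schemes; it tends to smooth out the idiosyncratic errors of any single model, which is one reason ensembles featured among the top-scoring submissions.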
The Blind Spot: AI Standards and Responsibility
However, the study revealed a critical gap. While participants excelled at improving model performance, most paid little attention to international AI standards. Concepts such as transparency, explainability, fairness, sustainability, and reproducibility were rarely discussed in project documentation.
This matters because international standards—developed by organizations such as ISO, ITU, and WMO—exist to ensure AI systems are safe, trustworthy, and fit for real-world deployment. In disaster risk reduction, failures in these areas can erode trust, reinforce inequalities, or lead to poor decision-making during crises. Accuracy alone does not guarantee reliability.
Why Standards Matter for Disaster Risk Reduction
AI systems used for early warning and hazard monitoring influence high-stakes decisions. Governments, emergency managers, and communities rely on these tools to act under uncertainty. If an AI model is biased, opaque, or poorly documented, its outputs may be misunderstood or misused. In the worst cases, this can delay warnings or misallocate resources. International standards provide a shared framework to reduce these risks and enable interoperability across regions and institutions.

The challenge is that standards are often invisible to developers, especially those learning through informal or competitive platforms. The authors propose a practical solution: embed international standards directly into the design of online AI challenges. This could include:
- Requiring participants to document ethical considerations and data limitations
- Evaluating submissions on transparency and reproducibility, not just accuracy
- Providing accessible guidance on relevant AI and Earth-observation standards
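One lightweight way a challenge platform could operationalize requirements like these is to demand a machine-readable documentation record with every submission and reject entries with missing fields. The sketch below is purely illustrative: the field names are assumptions for this example, not taken from any ISO, ITU, or WMO document.

```python
# Hypothetical documentation check a challenge platform might run on
# each submission. Field names are illustrative, not standard-mandated.
REQUIRED_FIELDS = {
    "model_description",       # transparency: what the model does and how
    "training_data_sources",   # provenance of the satellite inputs
    "data_limitations",        # known biases and coverage gaps
    "ethical_considerations",  # potential harms of false alarms / misses
    "random_seed",             # reproducibility of training runs
    "code_repository",         # reproducibility: where the code lives
}

def missing_documentation(docs: dict) -> list:
    """Return the required documentation fields absent from a submission,
    so the platform can flag or reject undocumented entries."""
    return sorted(REQUIRED_FIELDS - docs.keys())

example = {
    "model_description": "Ensemble of CNNs on stacked optical + radar bands",
    "training_data_sources": "Challenge-provided Sentinel-1/2 patches",
    "data_limitations": "Few labeled samples from arid regions",
    "ethical_considerations": "False negatives may delay warnings",
    "random_seed": 42,
    "code_repository": "https://example.org/landslide-model",
}
print(missing_documentation(example))            # complete record
print(missing_documentation({"random_seed": 1})) # mostly undocumented
```

Scoring could then weight such documentation alongside accuracy, which is exactly the shift from pure leaderboard optimization toward responsible practice that the authors argue for.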
By doing this, challenges become more than competitions. They become training grounds for responsible AI.
What This Means for the MedEWSa Community
For the early warning and climate risk community, this work carries an important message. Capacity building must go beyond technical skills. Future AI practitioners need to understand how their models interact with society, policy, and decision-making systems.
Online challenges offer a scalable way to reach global audiences, especially in low- and middle-income countries. When aligned with international standards, they can help close knowledge gaps and promote more equitable participation in AI innovation.
Nava, L., Kuglitsch, M.M., Selby, J. et al. Introducing AI practitioners to international standards through online competitions. npj Nat. Hazards 2, 102 (2025).