Research · Wednesday, March 18, 2026 · 2 min read

DeepMind Launches Cognitive Framework and Kaggle Hackathon to Measure AGI Progress

TL;DR

DeepMind unveiled a new cognitive framework designed to measure progress toward artificial general intelligence and has launched a Kaggle hackathon to crowdsource the evaluations. This open, community-driven effort aims to create rigorous, realistic benchmarks that accelerate safe and transparent AGI development.

Key Takeaways

  1. DeepMind introduced a cognitive framework to systematically evaluate AGI-related capabilities across diverse cognitive domains.
  2. A Kaggle hackathon invites the global community to build and contribute practical evaluation tasks and datasets.
  3. The initiative promotes open, standardized benchmarks to improve transparency, comparability, and accountability in AGI research.
  4. Community-built evaluations can accelerate progress while aligning measurement with real-world, safety-minded goals.

DeepMind launches a community-driven approach to measuring AGI progress

DeepMind has released a new cognitive framework intended to provide a clearer, structured way to measure progress toward artificial general intelligence (AGI). Rather than relying on ad hoc or single-use benchmarks, the framework maps cognitive abilities across domains and proposes concrete evaluation types that can be used to track systems' capabilities over time.

The team is pairing the framework with a practical, open participation effort: a Kaggle hackathon where researchers, practitioners, and the broader community are invited to design and submit evaluation tasks and datasets. By crowdsourcing the creation of realistic, diverse evaluations, DeepMind aims to build robust, scalable benchmarks that reflect real-world challenges.

These efforts emphasize openness and comparability. Standardized, community-vetted evaluations help ensure that progress claims are meaningful and reproducible, and they enable comparison across labs. Importantly, the framework centers on measuring capabilities in ways that can inform both research priorities and safety considerations as systems approach more general intelligence.

Why it matters: bringing the research community together to develop shared measurement tools accelerates progress, improves transparency, and lays practical groundwork for responsible AGI development. The Kaggle hackathon creates an accessible entry point for contributors to shape the benchmarks that will guide future advances.

If you’re working in AI research or evaluation, consider joining the hackathon to contribute evaluations that will help steer AGI development toward measurable, accountable outcomes.
