ICML 2025 Workshop on Assessing World Models: Methods and Metrics for Evaluating Understanding
Paper deadline: May 20, 2025
Date: TBD (either July 18 or July 19, 2025, at ICML 2025 in Vancouver, Canada).
Overview
Generative models across many domains can produce outputs that appear to mimic the real world. But do these models truly understand the world?
Researchers across fields are exploring this question. For example:
In NLP, large language models are mechanistically probed to evaluate whether they encode real-world knowledge.
In video generation, models are being evaluated to see if they've recovered the laws of physics.
In scientific fields, foundation models are being developed to uncover new theories about the world.
This workshop will explore the question: how can we evaluate whether generative models have understood the real world? Although this question matters across computer science communities, there is no unified framework for formalizing and evaluating world models. This workshop will bring these communities together, along with scientists outside computer science who work with foundation models.
See our call for papers for more information. Papers (max. 4 pages) are due by May 20, 2025.
Invited Speakers
MIT
Eleuther AI
TTIC and Google DeepMind
Flatiron Institute
MIT
Harvard
Organizing Committee
MIT
Harvard
Brown
Cornell
Harvard
Stanford