World Models: Five Competing Approaches

As of late 2025, there are five distinct approaches to creating world models. Here, we do a contrast-and-compare, identifying not only the specific areas in which each is (or can be) useful, but also the companies and “Chief Magicians” (leading scientists) behind each, and – most importantly – the money.

This blogpost is still under development; we’ll seal it up and put out pointers to it when it’s complete (or nearly so). Until then, we’re publishing early, since many of you will want to use the References and Resources list to get started.

Figure 1. Five competing approaches now exist for world models, potentially supporting AGI.

There are five general world model approaches, each of which can potentially contribute to AGI:

  • LLMs (large language models) plus RLHF (reinforcement learning with human feedback) – the approach currently used by all major LLM-based AI companies, including OpenAI (with its symbiotic relationship with Microsoft), Google (Gemini 3), Anthropic (Claude 4.5), and several others.
  • Fei-Fei Li’s World Labs company, whose Marble is another gen-AI (generative AI) approach – though more of an attempt to build an internal world model than the more straightforward gen-AI approaches used by others (e.g., Google’s Veo 3.1, OpenAI’s Sora).
  • Yann LeCun has recently announced that he’ll be forming his own company (no name yet); we expect it will be built around “LeJEPA” (LeCun’s invention: Joint Embedding Predictive Architecture),
  • Karl Friston and members of Verses.ai (under CEO Gabriel René) have developed AXIOM, which “represents scenes as compositions of objects, whose dynamics are modeled as piecewise linear trajectories that capture sparse object-object interactions.” Object properties (including dynamic properties) are modeled using active inference, so AXIOM blends a semi-symbolic (or at least physical-object-modeling) approach with active inference, and
  • Neuro-symbolic world models, which remain the least developed and least emphasized within the world model community.
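To make the JEPA idea named in the third bullet concrete: rather than generating pixels, a JEPA-style model predicts the *embedding* of a target view from the embedding of a context view, with a predictor operating entirely in embedding space. The following is a minimal toy sketch; the random linear “encoders,” shapes, and names are our illustrative assumptions, not LeCun’s actual architecture or the LeJEPA training recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_EMB = 16, 4
W_ctx = rng.normal(size=(D_IN, D_EMB))    # context encoder (toy: one linear map)
W_tgt = rng.normal(size=(D_IN, D_EMB))    # target encoder (in practice, an EMA copy)
W_pred = rng.normal(size=(D_EMB, D_EMB))  # predictor, acting in embedding space

def jepa_loss(x_context, x_target):
    """Squared error between predicted and actual target embeddings."""
    z_ctx = x_context @ W_ctx
    z_tgt = x_target @ W_tgt   # no gradient flows here in real JEPA training
    z_hat = z_ctx @ W_pred     # predict the target's embedding from context
    return float(np.mean((z_hat - z_tgt) ** 2))

x = rng.normal(size=(D_IN,))
x_masked = x + 0.01 * rng.normal(size=(D_IN,))  # a slightly perturbed "view"
loss = jepa_loss(x, x_masked)
```

The point of the design is that the loss lives in embedding space, so the model never has to reconstruct irrelevant pixel detail; LeJEPA’s contribution (per the Balestriero and LeCun paper in the references) is a principled regularizer that prevents the embeddings from collapsing.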

We’ll be doing a contrast-and-compare among these different approaches throughout this post.


Starting Thoughts

The dominant conversations (see the excellent EntropyTown (2025) article) compare Google DeepMind’s approaches (e.g., SIMA 2), Fei-Fei Li’s (World Labs) Marble, and LeCun’s likely advocacy of LeJEPA.

Notably absent from these conversations is any mention of Friston’s active inference, even though Verses.ai has released AXIOM, a system in which users can experiment with “slotted” objects that have associated properties, and which uses active inference to postulate future actions.
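A toy sketch of the mechanism in the AXIOM quote above: object “slots,” each with a state that evolves along piecewise linear trajectories, switching regime only on sparse object-object contact. Everything here (the state layout, the velocity-swap “bounce”) is our illustrative assumption, not Verses’ implementation, and the regime switch stands in for what AXIOM would infer via active inference:

```python
import numpy as np

def step(slots, dt=0.1, contact_radius=1.0):
    """Advance each slot [x, y, vx, vy] one step of piecewise-linear dynamics."""
    slots = slots.copy()
    n = len(slots)
    for i in range(n):
        for j in range(i + 1, n):
            # Sparse interaction: switch linear regime only when two
            # objects come within contact range of each other.
            if np.linalg.norm(slots[i, :2] - slots[j, :2]) < contact_radius:
                # Toy "bounce": exchange velocities on contact.
                slots[i, 2:], slots[j, 2:] = slots[j, 2:].copy(), slots[i, 2:].copy()
    slots[:, :2] += dt * slots[:, 2:]  # linear motion within the current regime
    return slots

slots = np.array([[0.0, 0.0,  1.0, 0.0],   # object A moving right
                  [2.0, 0.0, -1.0, 0.0]])  # object B moving left
for _ in range(20):
    slots = step(slots)
# A and B approach, "bounce" once at close range, then separate.
```

The appeal of this object-slot framing is that each trajectory segment is cheap to fit and easy to read off, while the sparse interaction events carry the interesting structure of the scene.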

And, of course, neuro-symbolic computing – even though it’s been brought up over this past year (and certain companies and organizations are hiring for leadership roles in this area) – is still not mentioned as a viable contender in the world models arena.

{* More to come. AJM, Monday, Nov. 24, 2025. *}


References and Resources

Cross-Comparisons

This article cross-compares Google DeepMind’s approach, Fei-Fei Li’s (World Labs) Marble system, and Yann LeCun’s JEPA (likely now LeJEPA).

  • EntropyTown. 2025. “Why Fei-Fei Li, Yann LeCun and DeepMind Are All Betting on ‘World Models’ — and How Their Bets Differ.” Twitter (X) post (Nov. 13, 2025). (Accessed Nov. 24, 2025.)

Balestriero and LeCun’s “LeJEPA”

  • Balestriero, Randall, and Yann LeCun. 2025. “LeJEPA: Provable and Scalable Self-Supervised Learning Without the Heuristics.” arXiv:2511.08544v3 [cs.LG] (14 Nov 2025). (Accessed Nov. 24, 2025.)
