The biggest challenge we face now is to get beyond the simple combination of LLMs (large language models, including all forms of multimodal models) with RLHF (reinforcement learning from human feedback, or any other form of feedback). Our current methods are pushing the limits of what that combination can do; it supports some fairly substantial and useful real-world applications, but it also has severe limitations.
To get beyond these limits, we need a different architecture.
One of the most promising avenues has been quietly under development at Verses.ai. (They're now labeling their product line "Genius"; see verses.ai/genius.)

Verses.ai has been basing its AI/AGI approach on Karl Friston's work on active inference and, more recently, on joint work between Friston and other Verses.ai researchers that has led to renormalizing generative models (RGMs).
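To give a flavor of what "active inference" means computationally, here is a minimal sketch of the standard discrete-state formulation: an agent infers its hidden state from an observation, then chooses the action with the lowest expected free energy (risk relative to preferred outcomes plus ambiguity). This is my own toy illustration of the general idea, not Verses.ai's Genius code or the mobile-manipulation stack in the paper discussed below; all matrix values and names (A, B, C, D, infer_state, expected_free_energy) are assumed for illustration only.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def entropy(p, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p))

def kl(p, q, eps=1e-12):
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q))

# Toy generative model: 2 hidden states, 2 observations, 2 actions (assumed values).
A = np.array([[0.9, 0.1],      # A[o, s] = P(o | s): likelihood mapping
              [0.1, 0.9]])
B = np.stack([
    np.array([[1.0, 0.0],      # action 0: stay in the current state
              [0.0, 1.0]]),
    np.array([[0.0, 1.0],      # action 1: switch states
              [1.0, 0.0]]),
])                             # B[a][s', s] = P(s' | s, a): transition model
C = softmax(np.array([2.0, 0.0]))  # preferred observation distribution (prefers o = 0)
D = np.array([0.5, 0.5])           # prior over initial hidden states

def infer_state(obs, prior):
    """Posterior over hidden states given an observation (Bayes rule)."""
    post = A[obs] * prior
    return post / post.sum()

def expected_free_energy(qs, action):
    """G(a) = risk (KL from preferred outcomes) + ambiguity (expected observation entropy)."""
    qs_next = B[action] @ qs           # predicted state distribution after the action
    qo_next = A @ qs_next              # predicted observation distribution
    risk = kl(qo_next, C)
    ambiguity = sum(qs_next[s] * entropy(A[:, s]) for s in range(len(qs_next)))
    return risk + ambiguity

# One perception-action cycle: observe o = 1, infer the hidden state,
# then pick the action whose expected free energy is lowest.
qs = infer_state(obs=1, prior=D)
G = np.array([expected_free_energy(qs, a) for a in range(B.shape[0])])
print("posterior over states:", qs)
print("expected free energy per action:", G)
print("selected action:", int(np.argmin(G)))
```

The point of the sketch is the decision rule: instead of maximizing a reward signal, the agent minimizes expected free energy, which trades off steering toward preferred observations against reducing uncertainty about the world.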
On July 23, 2025, Verses.ai released a white paper on arXiv, titled “Mobile Manipulation with Active Inference for Long-Horizon Rearrangement Tasks.” This is a landmark paper.

For easier reading, here’s the Abstract alone.

References and Resources
- Pezzato, Corrado, Ozan Çatal, Toon Van de Maele, Riddhi J. Pitliya, and Tim Verbelen. 2025. "Mobile Manipulation with Active Inference for Long-Horizon Rearrangement Tasks." arXiv:2507.17338v1 [cs.RO], 23 Jul 2025. doi:10.48550/arXiv.2507.17338. (Accessed Aug. 18, 2025.)