Self-evolving World Gen: From Prompting to Living Intelligence
What is the next platform shift for embodied, self-evolving AI? We are entering the age of living intelligence, in which world models become the foundation of that shift: moving from content generation to reality orchestration. Built on large-scale, multimodal simulation models that understand physics, agency, and context, and that maintain long memory, these world models evolve into the substrate of embodied AI, capable of reasoning spatially, emotionally, and socially within persistent environments. Above this foundation sits the agentic platform layer, where generative agents act as creators, curators, and collaborators, shaping adaptive worlds that learn from every interaction. On the surface, this manifests as a new AI consumer experience: users no longer open apps or watch media but step into continuously co-evolving realities that remember, respond, and reconfigure themselves around human intent. This keynote explores how creators can harness the triad of video, mesh, and LLM×G-Splat generation to build AI-native worlds that behave like living creatures.
About the Speaker:
Yiqi Zhao, Product Design Lead at Meta, specializes in AI-driven spatial computing and wearable devices.
Her team has delivered human-centric innovations, including the Meta Quest headsets and Metaverse 3D AI efforts such as Meta Horizon Engine, AI NPC, WorldGen, AssetGen, and Creator Assistant. With an HCI research background at Harvard and the MIT Media Lab, Yiqi began her 3D journey in brain-computer interfaces (EEG) and wearable AI. Previously, she led Unity’s platform initiatives, including visionOS support for Apple Vision Pro. She also made her mark in the gaming sector by shipping Destiny 2. Additionally, she leads Deepcake, an AIGC community recognized with global awards.