
Overview

There has been growing interest in learning robot skills from humans; humans and robots can both be viewed as physical agents that interact with the world. In recent years, computer vision researchers have focused on creating digital twins of humans, virtual agents that behave like real people, while roboticists have been building physical agents capable of interacting with the real world. We believe that progress in one field can greatly benefit the other. On one hand, virtual humans can be regarded as a special form of robotic agent; on the other, robots can learn manipulation and locomotion from human demonstrations, including those performed by simulated humans. Through this workshop, we aim to bring the two fields together to explore common challenges such as retargeting, the embodiment gap, contact modeling, data scarcity, and the role of foundation models. We hope the workshop will serve as fertile ground for inspiring new research directions and fostering cross-disciplinary collaboration in this space.

Call for Papers

We invite submissions of four-page extended abstracts (in CVPR format). Workshop papers are non-archival, and we encourage submissions that have already been submitted to or accepted at other venues. We will host a poster session featuring up to 30 posters.

Reviewer Nomination: We are looking for reviewers to help evaluate submissions. All reviewers will be acknowledged at the workshop. The nomination deadline is April 3, 2026. Nominate a Reviewer →

Submission Timeline

Call for Papers:          March 13, 2026
Submission Deadline:      April 10, 2026
Acceptance Notification:  May 1, 2026
Final Version Due:        May 15, 2026

Topics of Interest

We welcome papers on topics including, but not limited to:

  • Human-object and human-scene interaction reconstruction and synthesis
  • Long-horizon interaction understanding and planning
  • Imitation learning from human demonstrations
  • Real-time human reconstruction and teleoperation
  • Foundation models for human and robot agents
  • Agent-agnostic representations and the embodiment gap
  • Data collection and curation for embodied agents
  • Contact modeling, motion prediction, and physics-based simulation
  • Humanoid learning and whole-body control


Schedule

[Zoom]
Time           Session          Speaker(s)
9:25 - 9:30    Opening Remarks
9:30 - 10:00   Keynote          Hanbyul Joo
10:00 - 10:45  Spotlight        Aditya Prakash, Mandi Zhao, Rick Akkerman
10:45 - 11:15  Keynote          Michael Black
11:15 - 11:45  Keynote          Richard Newcombe
11:45 - 12:15  Poster Session   @ ExHall D #182 - #201
Lunch Break
13:30 - 14:00  Keynote          Karen Liu
14:00 - 14:30  Keynote          Siyuan Huang
14:30 - 15:00  Spotlight        Roei Herzig, Sirui Xu
15:15 - 15:45  Keynote          Jitendra Malik
15:45 - 16:15  Keynote          Dinesh Jayaraman
16:15 - 16:45  Spotlight        Neerja Thakkar, Junyao Shi
16:45 - 16:50  Closing Remarks


Accepted / Invited Paper List

  • Poly-Autoregressive Prediction for Modeling Interactions
    Neerja Thakkar, Tara Sadjadpour, Jathushan Rajasegaran, Shiry Ginosar, Jitendra Malik
  • InterAct: Advancing Large-Scale Versatile 3D Human-Object Interaction Generation
    Sirui Xu, Dongting Li, Yucheng Zhang, Xiyan Xu, Qi Long, Ziyin Wang, Yunzhi Lu, Shuchang Dong, Hezi Jiang, Akshat Gupta, Yu-Xiong Wang, Liangyan Gui
  • How Do I Do That? Synthesizing 3D Hand Motion and Contacts for Everyday Interactions
    Aditya Prakash, Benjamin Lundell, Dmitry Andreychuk, David Forsyth, Saurabh Gupta, Harpreet Sawhney
  • InterDyn: Controllable Interactive Dynamics with Video Diffusion Models
    Rick Akkerman, Haiwen Feng, Michael J. Black, Dimitrios Tzionas, Victoria Fernández Abrevaya
  • InterMimic: Towards Universal Whole-Body Control for Physics-Based Human-Object Interactions
    Sirui Xu, Hung Yu Ling, Yu-Xiong Wang, Liang-Yan Gui
  • DexMachina: Functional Retargeting for Bimanual Dexterous Manipulation
    Zhao Mandi, Yifan Hou, Dieter Fox, Yashraj Narang, Ajay Mandlekar, Shuran Song
  • ZeroMimic: Distilling Robotic Manipulation Skills from Web Videos
    Junyao Shi, Zhuolun Zhao, Tianyou Wang, Ian Pedroza, Amy Luo, Jie Wang, Yecheng Jason Ma, Dinesh Jayaraman
  • UniSkill: Imitating Human Videos via Cross-Embodiment Skill Representations
    Jaehyun Kang, Hanjung Kim, Hyolim Kang, Meedeum Cho, Seon Joo Kim, Youngwoon Lee
  • Ponimator: Unfolding Interactive Pose for Versatile Human-human Interaction Animation
    Shaowei Liu, Chuan Guo, Bing Zhou, Jian Wang
  • HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction
    Chen Bao, Jiarui Xu, Xiaolong Wang, Abhinav Gupta, Homanga Bharadhwaj
  • Sparse MoE Students for Efficient Knowledge Distillation
    Jongwon Ryu, Mingyu Jeon, Woojun Jung, Minuk Ma, Junyeong Kim
  • DiffCogNav: Diffusion-based Trajectory Planning for Cognitively-Aware Human Navigation Behavior
    Zhiwen Qiu, Ziang Liu, Tapomayukh Bhattacharjee, Saleh Kalantari
  • DemoDiffusion: One-Shot Human Imitation using pre-trained Diffusion Policy
    Sungjae Park, Homanga Bharadhwaj, Shubham Tulsiani
  • BG-HOP: A Bimanual Generative Hand-Object Prior
    Sriram Krishna, Sravan Chittupalli, Sungjae Park
  • Agent-Agnostic Semantic Reasoning for Material-Aware Obstacle Handling in Autonomous Vehicles
    Ayush Bheemaiah, Seungyong Yang
  • Visual imitation enables contextual humanoid control
    Arthur Allshire, Hongsuk Choi, Junyi Zhang, David McAllister, Anthony Zhang, Chung Min Kim, Trevor Darrell, Pieter Abbeel, Jitendra Malik, Angjoo Kanazawa
  • VidBot: Learning Generalizable 3D Actions from In-the-Wild 2D Human Videos for Zero-Shot Robotic Manipulation
    Hanzhi Chen, Boyang Sun, Anran Zhang, Marc Pollefeys, Stefan Leutenegger
  • Learning Physics-Based Full-Body Human Reaching and Grasping from Brief Walking References
    Yitang Li, Mingxian Lin, Zhuo Lin, Yipeng Deng, Yue Cao, Li Yi
  • OmniManip: Towards General Robotic Manipulation via Object-Centric Interaction Primitives as Spatial Constraints
    Mingjie Pan, Jiyao Zhang, Tianshu Wu, Yinghao Zhao, Wenlong Gao, Hao Dong
  • GigaHands: A Massive Annotated Dataset of Bimanual Hand Activities
    Rao Fu, Dingxi Zhang, Alex Jiang, Wanjia Fu, Austin Funk, Daniel Ritchie, Srinath Sridhar
  • SkillMimic: Learning Basketball Interaction Skills from Demonstrations
    Yinhuai Wang, Qihan Zhao, Runyi Yu, Ailing Zeng, Jing Lin, Zhengyi Luo, Hok Wai Tsui, Jiwen Yu, Xiu Li, Qifeng Chen, Jian Zhang, Lei Zhang, Ping Tan
  • InteractAnything: Zero-shot Human Object Interaction Synthesis via LLM Feedback and Object Affordance Parsing
    Jinlu Zhang, Yixin Chen, Zan Wang, Jie Yang, Yizhou Wang, Siyuan Huang
  • DexHandDiff: Interaction-aware Diffusion Planning for Adaptive Dexterous Manipulation
    Zhixuan Liang, Yao Mu, Yixiao Wang, Tianxing Chen, Wenqi Shao, Wei Zhan, Masayoshi Tomizuka, Ping Luo, Mingyu Ding
  • EasyHOI: Unleashing the Power of Large Models for Reconstructing Hand-Object Interactions in the Wild
    Yumeng Liu, Xiaoxiao Long, Zemin Yang, Yuan Liu, Marc Habermann, Christian Theobalt, Yuexin Ma, Wenping Wang

Reviewer Acknowledgement

We thank all reviewers who helped us with the review process: Nischal Reddy Chandra, Sichang Su, Zhiwen Qiu, Chen Bao, Rakhil Immidisetti, Xinpeng Liu, Abhiroop Chatterjee, Zi-ang Cao, Yu Wu, Yuqi Xie, Aditya Prakash, Sholder Lyko, Rynaa Grover, Susmita Ghosh, Poorvi Hebbar.