Multimodal AI and Robotic Systems (MARS) Lab
Assistant Professor Yang Jianfei
Research Vision
- MARS Lab studies Physical AI, focusing on how artificial intelligence can empower physical systems, such as robots and IoT devices, to perceive, understand, and interact with the world through multimodal learning.
- Our vision is to create intelligent robots and systems that seamlessly integrate into human society, enhancing productivity and empowering people to lead more fulfilling and prosperous lives.
Core Research Areas
- Multimodal AI (e.g., vision-language models and multimodal LLMs)
- Embodied AI (e.g., robot learning and vision-language-action models)
- Efficient AI (e.g., transfer learning and TinyML)
Research Applications
- Multimodal LLMs for human-robot interaction
- AIoT sensing for healthcare and elderly care
- AI-powered robotic arms for lab and factory automation
- Edge intelligence for smart homes and buildings