Research
Research Interests
My research mainly focuses on Physical AI, where AI enables physical systems (such as robotics, IoT, and industrial systems) to perceive, understand, and interact with the physical world. Broadly speaking, I study how AI models empower Artificial Intelligence of Things (AIoT) edge devices to sense the environment (i.e., Multimodal AIoT Sensing), how AI models can be efficiently deployed in resource-constrained environments (i.e., Efficient AI), how multimodal IoT data can be represented to support human-machine interaction via large language models (i.e., Multimodal LLM), and how AI enables trustworthy and automated transactions and business (i.e., AI Blockchain). My ultimate vision is to create AI that learns and acts in the physical world, accomplishing various tasks for humans through edge intelligence and LLMs.
Toward these goals, my team mainly studies the following topics:
- Deep Learning Empowered AIoT Sensing: Computer vision is powerful, but it remains limited in real-world scenarios by illumination, occlusion, and privacy issues. Wireless AIoT sensing leverages non-intrusive sensors, e.g., mmWave radar, WiFi signals, and ultra-wideband (UWB), for human perception. Handling these IoT data is more challenging because such sensor modalities are far less well understood than images. We develop deep learning, transfer learning, few-shot learning, and self-supervised learning algorithms to enable cost-effective, data-efficient, privacy-preserving, and fine-grained wireless sensing technology.
- Efficient AI: Deep networks enable AI systems to achieve remarkable performance, but they require massive labeled data and powerful computational devices. In AIoT sensing, AI models should be lightweight enough to run in real-time systems, which is our objective. We develop efficient AI models for real-time systems on edge devices such as the Raspberry Pi and NVIDIA Jetson Nano, through which everything can be interconnected.
- Multimodal LLM: LLMs have revolutionized the world with their zero-shot task capabilities, but they currently recognize only text and images. My team aims to enable LLMs to connect with IoT data and become deeply involved in IoT-enabled systems, e.g., robotics and smart manufacturing.
Collaborations
I am interested in collaborations on the topics mentioned above. Apart from AI and IoT, I am also interested in AI-enabled interdisciplinary research, e.g., bioinformatics. Please drop me an email if you are interested in collaborating on a cool idea.