Research


Research Interests

My research focuses on Artificial Intelligence of Things (AIoT), where state-of-the-art AI models are designed for edge devices to interconnect everything. Broadly speaking, I study how AI models empower IoT-enabled edge devices to sense the environment (i.e., AIoT Sensing), how AI models can be deployed efficiently in resource-constrained environments (i.e., Efficient AI), how multimodal IoT data can be represented to support human-machine interaction via large language models (i.e., Multimodal LLM), and how AIoT enables trustworthy, automated transactions and business (i.e., IoT Blockchain). My ultimate vision is a world of Artificial Internet of Everything powered by edge intelligence and LLMs.

Around these goals, my team mainly studies the following topics:

  • Deep Learning Empowered Wireless AIoT Sensing: Computer vision is powerful, but in real-world scenarios it remains limited by illumination, occlusion, and privacy issues. Wireless AIoT sensing leverages non-intrusive sensors, e.g., mmWave radar, WiFi signals, and ultra-wideband (UWB), for human perception. Handling such IoT data is more challenging because these sensor modalities are far less well understood than images. We develop deep learning, transfer learning, few-shot learning, and self-supervised learning algorithms to enable cost-effective, data-efficient, privacy-preserving, and fine-grained wireless sensing technology.
  • Efficient AI: Deep networks help AI systems achieve remarkable performance, but they are limited by their demand for massive labeled data and powerful computational devices. Our objective is to make AI models for AIoT sensing lightweight enough to run in real-time systems. We develop efficient AI models for real-time systems on edge devices such as Raspberry Pi and NVIDIA Jetson Nano, through which everything can be interconnected.
  • Multimodal LLM: LLMs have revolutionized the world with their zero-shot capabilities, but current LLMs recognize only text and images. My team aims to enable LLMs to connect to IoT data and become deeply involved in IoT-enabled systems, e.g., robotics and smart manufacturing.

Collaborations

I am interested in collaborations in the following directions:

  • IoT-enabled Human Perception and Its Applications (Smart Home and Robotics)
  • Affective Computing via Computer Vision and IoT Sensors (Human-Centric AI Systems)
  • Deep Learning and Transfer Learning Algorithms and Applications for Interdisciplinary Research (Biology and Medicine)