Research
Research Interests
My research mainly focuses on Artificial Intelligence of Things (AIoT), where state-of-the-art AI models are designed and deployed at the edge to interconnect everything. Broadly speaking, I study how AI models empower IoT-enabled edge devices to sense the environment (i.e., Wireless Sensing), how AI models can be deployed efficiently on resource-constrained devices (i.e., Efficient AI Systems), and how multimodal data from IoT sensors can be represented and engineered to support decision-making (i.e., Multimodal Learning).
Around these goals, my team mainly studies the following topics:
- Deep Learning for IoT-enabled Wireless Sensing: State-of-the-art deep learning models enable stronger representation learning for demanding smart-sensing tasks such as device-free human activity recognition, gesture recognition, and human vital sign detection. While deep models have been explored extensively for camera data, many other IoT sensing modalities remain underexplored, such as mmWave radar, WiFi signals, and ultra-wideband (UWB). To this end, we develop deep learning, transfer learning, few-shot learning, and self-supervised learning algorithms that enable cost-effective, data-efficient, privacy-preserving, and fine-grained wireless sensing technology (a minimal classifier sketch follows this list).
- Multimodal Learning for Sensor Fusion: In smart sensing, a single modality often fails under certain conditions. For instance, visual recognition models degrade under poor illumination or occlusion, limitations that wireless sensing (e.g., WiFi or radar) can overcome. To leverage the complementarity of multiple modalities, we propose multimodal learning algorithms for robust smart sensing around the clock, in all kinds of circumstances (see the fusion sketch below).
- Efficient Edge AI Systems: To improve accuracy, ever-deeper networks have been proposed, yet smart sensing via edge computing demands models light enough to run in real time; bridging this gap is our objective. We develop efficient deep models for real-time systems on cost-effective edge devices such as the Raspberry Pi and NVIDIA Jetson Nano, laying the groundwork for interconnecting everything (see the quantization sketch below).
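To make the wireless-sensing direction concrete, below is a minimal sketch (PyTorch) of a CNN classifier over a window of WiFi channel state information. The input shape (antennas x subcarriers x time), layer sizes, and class count are illustrative assumptions, not our actual system configuration.

```python
import torch
import torch.nn as nn

class CSIActivityNet(nn.Module):
    """Classify human activities from a CSI window of shape
    (batch, antennas, subcarriers, time). Sizes are hypothetical."""
    def __init__(self, num_antennas=3, num_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(num_antennas, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> fixed-size feature
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.classifier(z)

# Example: 3 antennas x 30 subcarriers x 100 time samples per window.
csi_window = torch.randn(8, 3, 30, 100)
logits = CSIActivityNet()(csi_window)  # shape: (8, 6)
```

For the sensor-fusion topic, here is a minimal late-fusion sketch: each modality gets its own encoder, and the classification head sees the concatenated embeddings. The stand-in encoders, feature sizes, and input shapes are placeholders for whatever modality-specific backbones are actually used.

```python
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    def __init__(self, dim=128, num_classes=6):
        super().__init__()
        # Stand-in encoders; in practice each would be a real backbone.
        self.camera_enc = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim), nn.ReLU())
        self.radar_enc = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim), nn.ReLU())
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, camera, radar):
        # Concatenating embeddings lets the head rely on whichever
        # modality is reliable (e.g., radar when the camera is occluded).
        z = torch.cat([self.camera_enc(camera), self.radar_enc(radar)], dim=-1)
        return self.head(z)

model = LateFusionNet()
out = model(torch.randn(4, 3, 64, 64), torch.randn(4, 2, 32, 32))  # (4, 6)
```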
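For the edge-efficiency topic, one common way to shrink a trained model for devices like the Raspberry Pi is post-training dynamic quantization, sketched below. The model here is a toy placeholder, not our deployed network.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 6))
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8  # int8 weights for Linear layers
)
print(quantized(torch.randn(1, 256)).shape)  # torch.Size([1, 6])
```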
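Dynamic quantization only touches weights at load time and activations on the fly, so it needs no calibration data, which makes it a convenient first step before heavier techniques such as pruning or static quantization.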
Research Projects
- WiFi-based Human Sensing System: We build a WiFi-based sensing system on off-the-shelf WiFi routers and chips that extracts fine-grained channel state information (CSI) for human sensing. Using WiFi signals, human poses and activities are recognized even through walls, enabling security and healthcare applications. We have collaborated on projects with Panasonic and Singapore Airlines.
- Multimodal Learning and Systems: We build a multimodal sensing system in our lab that integrates radar, camera, and WiFi. Multimodal learning algorithms are proposed for various sensing applications, such as human sensing and autonomous driving.
- Data-Efficient Machine Learning: To overcome data scarcity, we study transfer learning, domain adaptation, and unsupervised learning for visual recognition and wireless sensing (see the fine-tuning sketch after this list).
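As a concrete instance of the transfer-learning direction, below is a minimal fine-tuning sketch: freeze a pretrained backbone and retrain only a small task head on limited target data. torchvision's ResNet-18 stands in for whatever source model is actually transferred, and the 6-way head is an illustrative assumption.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and freeze its weights.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False  # keep source-domain features fixed

# Replace the final layer with a new, trainable head for the target task.
backbone.fc = nn.Linear(backbone.fc.in_features, 6)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
# ...train as usual; only the 6-way head is updated on the scarce target data.
```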
Collaborations
I am interested in collaborating in the following directions:
- IoT-enabled Human Sensing and Its Applications for Smart Home and Healthcare
- Affective Computing via Computer Vision and IoT Sensors
- Deep Learning and Transfer Learning Algorithms and Applications for Interdisciplinary Research