By AIoT Lab

This seminar brings together invited researchers working at the intersection of embodied intelligence, multimodal sensing, and wireless systems. Through a series of talks spanning embodied AI, multimodal perception, urban sensing, and radio-frequency fingerprinting, the seminar aims to foster interdisciplinary discussion and explore emerging research directions at the boundary of AI and physical-world systems. Faculty members and students with related interests are warmly welcome to attend and participate in open discussions.

Event Information

  • Series: AIoT Lab Invited Talks / Research Seminar
  • Date: January 14, 2026
  • Time: 15:00–17:30
  • Venue: Lab1, Satsuki, Reitaku University
  • Host: AIoT Lab
  • MC: Jinxiao Zhu (Assistant Professor, Tokyo Denki University)

Program Schedule

Time · Speaker · Affiliation · Title
15:00–15:45 · Chenren Xu · Peking University · Towards Hypervision Embodied AI
15:45–16:30 · Lin Gu · Tohoku University · Artificial Cortex from Evolution: Synthesizing Multimodal Intelligence for Next-Generation AI
16:30–17:00 · Wenhao Huang · Keio University · Real-time Fine-grained Garbage Disposal Sensing
17:00–17:30 · Zhen Jia · Keio University · A Review and Research Agenda for Radio Frequency Fingerprinting

Speakers & Talks

Chenren Xu (Peking University)

Associate Professor
Lab / Group: SOAR Group
Talk: Towards Hypervision Embodied AI
Keywords: Multimodal sensing · Embodied AI

Abstract
As multimodal embodied intelligence is increasingly deployed in real-world environments, the requirements for environmental adaptability, operational safety, and robust understanding of complex scenes continue to intensify. This talk is grounded in the concept of super-vision intelligent sensing, which aims to overcome the inherent limitations of conventional vision under conditions such as low illumination, occlusion, homogeneous materials, and structurally invisible objects. Leveraging penetrative, multi-physics modalities, including electrical, acoustic, magnetic, and radio-frequency sensing, the talk systematically reviews recent advances in multimodal super-vision perception across tasks such as warehouse management, biological behavior tracking, liquid recognition, and agent interaction, and discusses their implications for advancing next-generation embodied intelligence capabilities. It then examines current bottlenecks in multimodal data scale and quality and proposes a data engine tailored to super-vision scenarios that expands the coverage of both real and synthetic data. Finally, it explores the potential of cross-modal synthesis, weakly supervised learning, and Real-to-Sim-to-Real paradigms for constructing a unified data foundation for multimodal super-vision perception.

Bio
Prof. Chenren Xu is a Boya Young Fellow Associate Professor, Deputy Director of the Institute of Networking and Energy-efficient Computing, and Assistant Dean of the School of Computer Science at Peking University. His recent research focuses on embodied AIoT and wireless AI for science. He earned his Ph.D. from WINLAB, Rutgers University, and has worked as a postdoctoral fellow at Carnegie Mellon University, a guest professor at Keio University, and a visiting scholar at AT&T Shannon Labs and Microsoft Research. He serves in multiple editorial and leadership roles and has received several academic awards.


Wenhao Huang (Keio University)

PhD Candidate
Talk: Real-time Fine-grained Garbage Disposal Sensing
Keywords: Edge computing · Automotive sensing · Smart cities · Object detection

Bio
Wenhao Huang is a Ph.D. candidate in the Graduate School of Media and Governance at Keio University's Shonan Fujisawa Campus. He received his B.S. degree in Computer Science from Shanghai Jian Qiao University in 2020 and his M.S. degree (Cyber Informatics Program) from the Graduate School of Media and Governance, Keio University, Japan, in 2022. His research interests include multimodal ubiquitous sensing, wireless sensing, and image processing.


Lin Gu (Tohoku University)

Assistant Professor
Lab / Group: Lin Gu Lab
Talk: Artificial Cortex from Evolution: Synthesizing Multimodal Intelligence for Next-Generation AI
Keywords: Artificial intelligence · Evolution simulation

Abstract
“Artificial Cortex from Evolution” seeks to design a next-generation artificial cortical architecture inspired by natural evolution. By emulating the robustness and adaptability of biological neural systems, this work aims to develop AI systems with human-like cognitive capabilities, minimizing reliance on human annotation. This research is built upon three key pillars: (1) Computational Photography, which enables the extraction and reconstruction of physical properties from visual data; (2) Human Gaze-Based Attention Mechanism, which aligns AI perception with human cognitive processes to enhance interpretability and efficiency; and (3) Multi-Modal Large Language Models (LLMs), which integrate diverse data modalities for advanced reasoning and understanding. By synthesizing these foundations, we propose an artificial cortex capable of learning from multimodal inputs, with broad applications in Medical AI, Nuclear Fusion, and Human-Friendly Robotics, demonstrating its transformative potential in addressing critical global challenges.

Bio
Dr. Lin Gu is an assistant professor at Tohoku University with particular interest and expertise in applying artificial intelligence to medical imaging and computational photography. Before moving to Japan, he was a postdoctoral research fellow at A*STAR, Singapore, working on machine learning for biomedical imaging. He is now the project manager for the Moonshot Program on Continuous Learning and Memory Mechanism and the ACT-X Program on Gaze Assisted AI.


Zhen Jia (Keio University)

Assistant Professor
Talk: A Review and Research Agenda for Radio Frequency Fingerprinting
Keywords: RF fingerprinting · RF features · Cross-domain

Abstract
Radio frequency fingerprinting (RFF) has emerged as a promising technique for wireless device identification that exploits subtle hardware-induced signal characteristics. Although recent studies have reported impressive performance on closed, well-controlled benchmarks, the deployability of RFF systems in the real world remains an open challenge. We first present a structured review of representative RFF feature extraction paradigms, covering both explicit model-driven features and implicit data-driven representations. We then examine how channel effects, receiver variability, and temporal dynamics affect fingerprint reliability, revealing fundamental limitations of existing evaluation practices that rely primarily on closed-set accuracy. Building on these observations, we highlight the critical gap between benchmark-level identification and deployment-level requirements. To bridge this gap, we outline an application-driven research agenda focused on practical challenges in real-world scenarios, including open-set and open-world RFF with unseen devices, cross-domain robustness under channel and receiver shifts, cold-start identification, and scaling laws related to device population size, time span, and data efficiency. Across these directions, we emphasize evaluation protocols that reflect deployment constraints rather than performance on a single closed benchmark.

Bio
Zhen Jia received his B.S. degree in 2018 and M.S. degree in 2021 from Shaanxi Normal University, China, and his Ph.D. degree in 2025 from Future University Hakodate, Japan. He is currently a Project Assistant Professor at Keio University and a Visiting Researcher at Reitaku University, Japan. His research interests include radio frequency fingerprinting, physical layer security, and nano-networks.