At the Crossroads of AI and Robotics: SLAMTEC Showcases Aurora S at the 2025 AI & Robotics Summit (AIRS)
In an era defined by the deep integration of artificial intelligence and the robotics industry, the 2025 AI & Robotics Industry Summit (AIRS) was recently held in grand fashion at the AsiaWorld-Expo in Hong Kong.
The summit brought together top global tech companies, academic experts, and industry leaders to explore the future pathways of cutting-edge technologies.
During one of the summit’s core sessions — the Embodied Intelligence Robotics Industry Chain Forum — Chen Shikai, Founder and CEO of SLAMTEC, was invited to deliver a keynote speech titled: “Aurora: A Multimodal Sensor for Spatial Perception in Embodied Intelligence.” His presentation drew significant attention across the robotics and AI industries.
Since its founding, SLAMTEC has focused on technological innovation at the core of robot spatial perception and autonomous mobility.
From consumer-grade cleaning robots to industrial AGVs, from commercial service robots to embodied intelligent agents — each technological breakthrough has brought machines closer to truly understanding the world.
In his speech, Chen emphasized:
“Spatial perception is the fundamental capability for all practical robots.
In the era of embodied intelligence, the essence lies in enabling machines to truly comprehend the world — and the foundation of understanding is still perception.”
Chen further noted that the development of embodied intelligent robots has entered a critical stage. Unlike traditional AI models that rely primarily on language understanding, embodied intelligence focuses on interaction between the body and the environment.
The key challenge lies in how to make robots truly understand the physical world, realizing the leap from seeing to understanding.
To tackle this, SLAMTEC integrates geometric, semantic, tactile, and depth-based perception to build a “perception brain” that enables robots to sense space, understand scenes, and predict interactions.
Chen’s keynote highlighted SLAMTEC’s newly released Aurora S, a next-generation AI-integrated spatial perception system under the company’s Aurora product line.
Aurora S deeply fuses vision, inertial navigation, and SLAMTEC’s proprietary AI-VSLAM technology to provide robots and intelligent agents with out-of-the-box, high-precision 3D perception, mapping, and semantic understanding.
It dramatically reduces integration complexity and accelerates intelligent applications — a tailor-made sensing system for the age of embodied intelligence.
In embodied intelligence, the breadth of perception defines the limits of understanding. Aurora S employs an end-to-end neural network architecture to achieve 120° ultra-wide stereo depth perception, allowing robots to capture a complete environmental view in a single frame.
The Aurora S's field of view is noticeably wider when capturing the same object indoors
Compared to traditional depth cameras, Aurora S nearly doubles the field of view, greatly reducing obstacle-avoidance blind spots and enhancing safety and decision speed.
In short: The ultra-wide vision of Aurora S enhances environmental comprehension and reaction time — a foundation for safe embodied perception.
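As a rough illustration of why the wider field of view matters, the standard pinhole-camera relation below computes how much horizontal scene width each FOV covers at a given range. The 120° figure comes from the article; the ~70° comparison value is an assumed typical figure for a conventional depth camera, not a number from the source.

```python
import math

def horizontal_coverage(fov_deg: float, distance_m: float) -> float:
    """Width of scene visible at a given distance for a camera with the
    given horizontal field of view (simple pinhole model)."""
    return 2 * distance_m * math.tan(math.radians(fov_deg / 2))

# 120 deg (Aurora S, per the article) vs. ~70 deg (assumed typical depth camera)
wide = horizontal_coverage(120, 1.0)    # ~3.46 m of scene width at 1 m range
narrow = horizontal_coverage(70, 1.0)   # ~1.40 m of scene width at 1 m range
print(f"120 deg: {wide:.2f} m, 70 deg: {narrow:.2f} m")
```

Under these assumptions the wide lens sees well over twice the scene width at the same range, which is consistent with the article's claim of greatly reduced blind spots.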
Traditional depth cameras often fail with low-texture or reflective surfaces. Aurora S easily overcomes these challenges.
Depth comparison of a traditional camera and the Aurora S on a black bar counter
Depth comparison of a traditional camera and the Aurora S when imaging stainless-steel elevator doors
Through deep learning–based perception algorithms, Aurora S delivers stable depth maps even for difficult materials such as black countertops or stainless-steel elevator doors — where traditional cameras produce holes or missing data.
This powerful weak-texture adaptability vastly expands the range of environments in which machine vision can operate, freeing robots from the limits of high-contrast or high-texture settings.
Bright sunlight and dynamic motion are common pitfalls for depth cameras. However, Aurora S, built on an end-to-end deep learning framework, achieves cross-environment stability.
Depth comparison of a traditional camera and the Aurora S while walking outdoors
Depth comparison of a traditional camera and the Aurora S at a roadside guardrail
Performs robustly under strong light and in dynamic scenes
Outputs seamless, loss-free depth maps during motion
Captures fine environmental details such as roadsides and metal guardrails
With Aurora S, robots can move confidently from indoor to outdoor environments, achieving all-weather, all-scene perception.
Perception is more than measuring distance — it’s about understanding meaning. Aurora S integrates an AI semantic recognition engine, capable of identifying:
80 indoor object categories, such as people, tables, chairs, and monitors
18 outdoor elements, including roads, buildings, vehicles, and vegetation
Accurate identification of people and objects indoors
Outdoor element identification
From geometric measurement to semantic comprehension, Aurora S provides the cognitive foundation for embodied intelligence, enabling robots to interact with their environment on a semantic level.
Aurora S not only identifies semantics — it embeds them into three-dimensional space.
Through real-time 3D semantic point cloud mapping, the world seen by robots becomes structured and meaningful — with labelled elements like “roads,” “pedestrians,” and “vehicles.”
This allows robots to plan and make decisions directly within a 3D semantic space, marking a leap from 2D perception to full spatial comprehension.
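The idea of a semantic point cloud can be sketched minimally: each 3D point carries a semantic label alongside its coordinates, so a planner can query space by meaning rather than just geometry. The structure and label names below are illustrative assumptions, not SLAMTEC's actual data format.

```python
from dataclasses import dataclass

@dataclass
class SemanticPoint:
    """One point of a labelled 3D map: position plus a semantic class."""
    x: float
    y: float
    z: float
    label: str  # e.g. "road", "pedestrian", "vehicle" (illustrative labels)

def points_with_label(cloud, label):
    """Select every point in the cloud carrying the given semantic label."""
    return [p for p in cloud if p.label == label]

# A tiny hand-made cloud standing in for real sensor output
cloud = [
    SemanticPoint(0.0, 0.0, 0.0, "road"),
    SemanticPoint(1.2, 0.4, 0.0, "road"),
    SemanticPoint(2.0, 1.0, 1.7, "pedestrian"),
]
print(len(points_with_label(cloud, "road")))  # 2
```

With such a structure, a planner could, for example, restrict path candidates to "road" points while treating "pedestrian" points as dynamic obstacles.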
In robotics, breakthroughs are only the beginning — true competitiveness lies in rapid deployment and stable operation.
Aurora S is designed with developers in mind:
Plug-and-play setup, no external dependencies
Supports DC 9–24V power and USB Type-C PD 3.0
Open platform with multiple SDKs and system compatibility
The Aurora Remote UI and Aurora Remote SDK provide developers with visual tools for rapid implementation of dense mapping, semantic mapping, and 3D Gaussian Splatting (3DGS) reconstruction.
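To make the developer workflow concrete, the stub below sketches the typical session flow such an SDK implies: connect to the device, then stream frames that pair depth with semantic labels. Every class and method name here is invented for illustration; this is not the actual Aurora Remote SDK API.

```python
# Hypothetical client sketching a connect-and-stream session with a
# spatial-perception device. Names are illustrative assumptions only.
class PerceptionClient:
    def __init__(self, address: str):
        self.address = address
        self.connected = False

    def connect(self):
        # A real SDK would open a network session to the device here.
        self.connected = True
        return self

    def grab_frame(self):
        # A real device would return a live depth image plus per-pixel
        # semantic labels; placeholder data stands in for that here.
        return {
            "depth": [[1.2, 1.3], [1.1, 1.4]],          # metres
            "labels": [["floor", "floor"], ["table", "table"]],
        }

client = PerceptionClient("192.168.1.100").connect()
frame = client.grab_frame()
print(frame["labels"][1][0])  # table
```

The point of the sketch is the shape of the workflow: one connection, then a stream of frames that already fuse geometry and semantics, so the application layer never stitches the two together itself.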
Aurora S is a complete 3D scene understanding solution, enabling developers to efficiently build embodied intelligent systems.
As SLAMTEC sees it — strong spatial perception is the first step toward truly embodied intelligence. Aurora S is the robot’s “bionic eye.”
From single-mode LiDARs to multimodal fusion, from 2D space to semantic understanding, SLAMTEC continues to push the boundaries of perception.
The Aurora series represents not just SLAMTEC’s strategic focus in the embodied intelligence era, but also the foundation for autonomous perception and spatial intelligence in robotics.
As Chen Shikai summarized:
“Every robot should possess the ability to understand the world. The mission of the Aurora series is to provide a stable, scalable perception base for embodied intelligence — enabling robots to truly become helpers to humankind.”
Looking ahead, SLAMTEC will continue advancing spatial perception technologies and collaborate with more partners to build an embodied intelligence ecosystem, accelerating the arrival of a new era where robots truly understand the world.
Keywords: Media Coverage