Aurora S is the new-generation fully integrated AI spatial perception system from SLAMTEC's Insight Series. By deeply integrating vision, inertial navigation, and SLAMTEC's self-developed AI-VSLAM technology, it provides robots and intelligent agents with out-of-the-box high-precision 3D perception, mapping, and semantic understanding. This greatly lowers integration and development barriers, accelerates the deployment of intelligent applications, and serves as a dedicated perception system for the era of embodied intelligence.
Aurora S integrates AI-VSLAM for full-scene 3D mapping and localization, stereo depth estimation, semantic recognition, and dedicated compute hardware. It delivers real-time, high-quality spatial perception outputs—mapping, localization, and more—directly to the user, without the need for additional compute resources or in-house algorithm development.
Unlike traditional SLAM systems that output only sparse point clouds, Aurora S employs built-in deep learning multimodal perception to generate dense, richly textured maps with precise localization. Its semantic recognition engine enhances geometric maps with real-time semantic labeling, ushering spatial navigation into the multimodal perception era—purpose-built for embodied intelligence.
Traditional SLAM System
Dense Textured Map by Aurora
Aurora S integrates a stereo dense depth sensing system that delivers wide-angle point clouds (120° HFOV) at 15 fps. Powered by an end-to-end deep learning pipeline, it maintains stable performance even in weak-texture or high-glare environments, with strict pixel-level alignment to RGB imagery. Users can directly leverage this data for real-time obstacle detection and 3D scene reconstruction—without additional sensors.
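Because the depth map is pixel-aligned to the RGB image, a colored 3D point cloud follows from standard pinhole back-projection. The sketch below is an illustrative, SDK-independent example using numpy on a toy frame; the intrinsics (`fx`, `fy`, `cx`, `cy`) are made-up values, not Aurora S calibration data.

```python
import numpy as np

def backproject_depth(depth, rgb, fx, fy, cx, cy):
    """Back-project a dense depth map into a colored 3D point cloud.

    depth: (H, W) float array in meters; rgb: (H, W, 3) uint8, pixel-aligned.
    Returns an (N, 6) array of XYZ + RGB for pixels with valid depth.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                            # pinhole model
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    cols = rgb.reshape(-1, 3).astype(np.float64)
    valid = pts[:, 2] > 0                            # drop pixels with no depth return
    return np.hstack([pts[valid], cols[valid]])

# Toy 4x4 frame with illustrative intrinsics (principal point at image center).
depth = np.full((4, 4), 1.5)
depth[0, 0] = 0.0                                    # one invalid pixel
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
cloud = backproject_depth(depth, rgb, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
print(cloud.shape)                                   # (15, 6): 16 pixels minus 1 invalid
```

The same arithmetic applies per frame at the sensor's 15 fps output rate; only the resolution and calibrated intrinsics change.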
Aurora S integrates a high-performance real-time semantic segmentation and object recognition engine, delivering pixel-level results. It supports recognition of 18 outdoor scene categories and over 80 indoor object types, providing foundational support for VLN and VLA. The system also allows customized recognition tasks, integrates depth data to generate semantic maps, and advances spatial intelligence.
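Aurora S fuses semantic labels with depth on-device; the snippet below is only a minimal, SDK-independent sketch of that fusion idea: given a labeled point cloud (per-pixel class IDs aligned with the depth map), aggregate each class into a 3D centroid, a toy stand-in for a semantic map entry. The class IDs and names used here are hypothetical.

```python
import numpy as np

def semantic_centroids(points, labels):
    """Aggregate a labeled point cloud into per-class 3D centroids.

    points: (N, 3) XYZ positions; labels: (N,) integer class IDs,
    pixel-aligned with the depth used to build `points`.
    Returns {class_id: centroid} as a dict of (3,) arrays.
    """
    out = {}
    for cls in np.unique(labels):
        out[int(cls)] = points[labels == cls].mean(axis=0)
    return out

# Toy data: two "objects" at different depths (class IDs are made up).
points = np.array([[0.0, 0.0, 1.0], [0.2, 0.0, 1.0],    # class 7, e.g. "chair"
                   [1.0, 1.0, 3.0], [1.2, 1.0, 3.0]])   # class 12, e.g. "door"
labels = np.array([7, 7, 12, 12])
centers = semantic_centroids(points, labels)
print(centers[7])    # centroid of the class-7 points
```

A real semantic map would keep richer statistics per object (extent, confidence, observation count), but the pixel-aligned label-plus-depth pairing is the core of the fusion.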
Rich interfaces and expandability, with support for DC 9–24 V power input or USB Type-C PD 3.0
C++
Python
macOS
ROS1/ROS2
Use Aurora for fast 3DGS map reconstruction
Humanoid robots
Quadruped robots
Outdoor Robotics: Lawn mowers, smart agriculture, yard inspection
Industrial Automation: AGVs, AMRs
Digital Twin: 3D scene reconstruction, VLN/VLA training data collection
Low-Speed Autonomous Driving: Campus logistics, inspection robots
Built-in computing algorithms with direct output of mapping and localization results
Self-developed AI-VSLAM engine designed to meet demanding requirements
180° fisheye colour camera
120° Ultra-wide angle stereo depth vision (end-to-end deep learning solution)
Pixel-level semantic recognition of over a hundred object types
Multi-modal fusion for semantic mapping
One-click dense 3D reconstruction with support for 3DGS solutions
Seamless integration with VLA/VLN training systems for embodied AI
Open platform supporting various SDKs including C++, ROS1/ROS2, and Python
Aurora provides the Remote UI visualization tool and Remote SDK, enabling fast implementation of advanced functions such as dense mapping, semantic mapping, and 3DGS reconstruction.