Born for Embodied Intelligence — The Robot’s Eyes
Weighing just 238 g, Aurora S is the first to integrate multiple spatial perception capabilities into a single compact module — including real-time 3D visual SLAM, dense depth perception, and AI semantic understanding.
Powered by SLAMTEC’s deep-learning inference engine, Aurora S requires no external computing unit or algorithm development. Simply power it on, and it delivers high-quality localization and mapping for any embodied-intelligence application.

In a world rich with color and texture, representing space using only sparse geometric points or planar maps is no longer enough. As the newest member of the Aurora family, Aurora S not only integrates a real-time high-precision visual SLAM system, but also incorporates AI-based object recognition and semantic segmentation.

These sensing modules operate in tight fusion, elevating spatial perception to a new level and enabling robots to truly understand and interact with the world around them.

SLAMTEC’s Deep-Learning Engine — Robust in Any Environment
Deep learning lies at the heart of embodied intelligence, enabling neural-network inference far beyond traditional rule-based algorithms. SLAMTEC has embedded its proprietary deep-learning AI-vSLAM engine directly within Aurora S, allowing it to perform reliably under a wide range of complex and dynamic conditions.
AI-vSLAM: Proven Reliability Across Diverse Scenarios
High reliability is essential for any real-time localization and mapping system. With its deep-learning core, Aurora S maintains stable operation even in low-light, high-contrast, low-texture, and large-scale outdoor environments — consistently outperforming conventional vSLAM systems.

Figure: feature-point extraction, traditional method versus Aurora S deep-learning-based extraction
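For context, the kind of traditional pipeline referenced above can be sketched in a few lines with OpenCV's ORB detector and a brute-force matcher. This is a generic baseline for comparison, not SLAMTEC code, and the image paths are placeholders.

    # Minimal sketch of a traditional feature-point pipeline (ORB detection
    # plus brute-force matching), the kind of baseline a learned extractor
    # is compared against. Generic OpenCV code; image paths are placeholders.
    import cv2

    img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Hamming distance suits ORB's binary descriptors; cross-check keeps
    # only mutually best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print(f"{len(kp1)} / {len(kp2)} keypoints, {len(matches)} matches")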
Aurora S operates stably across multi-floor indoor spaces, uneven outdoor terrain, and low-light nighttime scenes.


1,500 m² open grass field — stable mapping and localization

Singapore National Stadium (75,000 m²) — large-scale outdoor mapping

Low-light outdoor environment — reliable real-time performance
At the same time, built-in modules in Aurora S handle loop-closure correction, map optimization, and re-localization without any external computing resources.

Aurora S built-in real-time loop-closure and graph-optimization engine
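To make the graph-optimization idea concrete, here is a minimal 2D pose-graph sketch in Python: odometry edges around a square plus one loop-closure edge, refined with scipy. It illustrates the principle only; Aurora S's on-device engine is not published at this level of detail, and all numbers here are toy values.

    # Minimal 2D pose-graph optimization: each pose is (x, y, theta), each
    # edge constrains the relative pose between two nodes, and the solver
    # adjusts all poses so every edge (including the loop closure) agrees.
    import numpy as np
    from scipy.optimize import least_squares

    def wrap(a):
        # Wrap an angle to (-pi, pi].
        return (a + np.pi) % (2 * np.pi) - np.pi

    def rel(pi, pj):
        # Pose of j expressed in the frame of i (SE(2)).
        dx, dy = pj[0] - pi[0], pj[1] - pi[1]
        c, s = np.cos(pi[2]), np.sin(pi[2])
        return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(pj[2] - pi[2])])

    # Four odometry steps around a 1 m square, plus one loop-closure edge
    # asserting that pose 4 coincides with pose 0.
    edges = [(k, k + 1, np.array([1.0, 0.0, np.pi / 2])) for k in range(4)]
    edges.append((4, 0, np.array([0.0, 0.0, 0.0])))  # loop closure

    def residuals(x):
        p = x.reshape(-1, 3)
        res = []
        for i, j, m in edges:
            e = rel(p[i], p[j]) - m
            e[2] = wrap(e[2])
            res.append(e)
        res.append(p[0])  # anchor the first pose at the origin
        return np.concatenate(res)

    # Noisy initial guess for the five poses; optimization pulls the
    # drifted trajectory back onto a consistent square.
    x0 = np.array([[0, 0, 0], [1.1, 0.1, 1.5], [1.2, 1.1, 3.0],
                   [0.1, 1.2, -1.7], [-0.1, 0.1, -0.2]], float).ravel()
    sol = least_squares(residuals, x0)
    print(sol.x.reshape(-1, 3).round(2))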
End-to-End Stereo Depth Perception Engine — Wide FOV, Outdoor-Ready
Using deep learning, Aurora S delivers dense stereo depth perception at 15 fps, outputting 120° wide-FOV point clouds with rich detail. It remains stable even in strong sunlight or low-texture indoor scenes, outperforming traditional stereo-depth systems. Depth data and RGB imagery are synchronized at the pixel level for perfect color alignment.
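Because depth and RGB are aligned per pixel, a colored point cloud follows directly from standard pinhole back-projection. The sketch below assumes hypothetical intrinsics (fx, fy, cx, cy) and random stand-in arrays; real values come from the device's calibration and synchronized streams.

    # Sketch: fusing pixel-aligned depth and RGB into a colored point cloud
    # via pinhole back-projection. Intrinsics and input arrays are
    # placeholders standing in for one synchronized frame pair.
    import numpy as np

    def depth_rgb_to_points(depth, rgb, fx, fy, cx, cy):
        """depth: HxW meters; rgb: HxWx3 uint8 -> (N, 6) rows of x,y,z,r,g,b."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
        cols = rgb.reshape(-1, 3).astype(np.float64)
        valid = pts[:, 2] > 0          # drop pixels with no depth reading
        return np.hstack([pts[valid], cols[valid]])

    depth = np.random.uniform(0.5, 5.0, (480, 640))
    rgb = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    cloud = depth_rgb_to_points(depth, rgb, fx=400.0, fy=400.0, cx=320.0, cy=240.0)
    print(cloud.shape)  # (N, 6): position plus color per point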


AI Object Recognition & Semantic Segmentation — Dynamic Dual-Model System
Aurora S integrates two neural-network models for real-time semantic segmentation and object detection, enabling recognition of 18+ outdoor scene types and 80+ common indoor objects, with instant model switching to match application needs. Semantic data is deeply fused with other perception modules, enabling semantic map construction and other advanced AI-mapping capabilities.
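One simple way to picture fusing semantic labels into a map is per-voxel label voting: each labeled point casts a vote in its voxel, and the voxel keeps the majority class. The sketch below is illustrative only, with an assumed voxel size and placeholder class ids; it is not SLAMTEC's published fusion method.

    # Sketch of semantic-map construction by voxel label voting: quantize
    # labeled points into voxels, count class votes per voxel, and report
    # the majority class per cell. Illustrative only.
    from collections import Counter, defaultdict
    import numpy as np

    VOXEL = 0.10  # assumed voxel edge length in meters

    votes = defaultdict(Counter)  # voxel index -> class-label vote counts

    def integrate(points, labels):
        """points: (N, 3) world coordinates; labels: (N,) class ids."""
        keys = np.floor(points / VOXEL).astype(int)
        for key, lab in zip(map(tuple, keys), labels):
            votes[key][int(lab)] += 1

    def semantic_map():
        # Majority vote per voxel -> {voxel: class id}
        return {k: c.most_common(1)[0][0] for k, c in votes.items()}

    # Toy usage: two observations of the same region; class ids 3 and 7
    # are arbitrary placeholders.
    pts = np.random.uniform(0, 1, (1000, 3))
    integrate(pts, np.full(1000, 3))
    integrate(pts + 0.01, np.full(1000, 7))
    print(len(semantic_map()), "voxels labeled")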

PhotoReal Mapping — Building a Digital Twin Effortlessly

Traditional SLAM systems generate sparse, abstract point clouds that poorly represent the richness of the real world. Leveraging its multimodal sensing pipeline, Aurora S introduces PhotoReal Mapping, allowing users to easily build dense 3D maps with realistic color and texture.
Figure: traditional SLAM point cloud versus Aurora S dense color 3D map
These photorealistic models can be directly used for digital-twin creation, 3D reconstruction, and VLA/VLN training, accelerating the development of embodied-intelligence “world models.”
Comprehensive Development Support — From Research to Product Integration
Aurora S includes a full developer ecosystem for effortless adoption:
Aurora Remote UI — a no-code visualization tool: preview 3D maps and semantic-segmentation results in real time, and manage maps and parameters intuitively.
Aurora Remote SDK — available in C++, ROS 1/ROS 2, and Python; supports rapid integration for advanced functions such as dense mapping, semantic mapping, and 3D Gaussian Splatting (3DGS) reconstruction; a brief usage sketch follows below.
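Purely to show the shape of an integration, here is a hypothetical Python session. Every name in it (aurora_sdk, AuroraSession, the method names) is an invented placeholder, not the published Aurora Remote SDK API; consult SLAMTEC's SDK documentation for the real interface.

    # HYPOTHETICAL sketch of what a Python integration might look like.
    # The module and method names below are invented for illustration and
    # are NOT the published Aurora Remote SDK API.
    import aurora_sdk  # placeholder import

    session = aurora_sdk.AuroraSession(host="192.168.1.1")  # device IP assumed

    while session.is_connected():
        pose = session.current_pose()          # 6-DoF pose from AI-vSLAM
        cloud = session.depth_point_cloud()    # dense 120-degree stereo cloud
        seg = session.semantic_frame()         # per-pixel class labels
        print(pose, cloud.shape, seg.shape)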

To ensure compatibility, all tools and SDKs support multiple platforms and architectures:
macOS (Apple M-series)
Windows
Linux (x64 & ARM64)

macOS tools & SDK

Aurora Python SDK tutorials in Jupyter Notebook
Multi-Format Dataset Export — Accelerating 3DGS and VLA/VLN Training
Aurora S enables one-click export of pre-generated maps and datasets in COLMAP and other formats compatible with cutting-edge 3DGS frameworks. These datasets can be imported directly into systems like NVIDIA Omniverse, enabling VLA/VLN data generation or sim-to-real training for advanced AI models.
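As one concrete downstream step, camera poses in a COLMAP-format export can be read with a few lines of Python. The two-lines-per-image layout below is COLMAP's documented sparse-model text format; the file path is a placeholder for wherever the Aurora export lands.

    # Sketch: reading camera poses from a COLMAP-format text export
    # (images.txt). COLMAP stores two lines per image: a pose header, then
    # the image's 2D point list. The path is a placeholder.
    import numpy as np

    def read_colmap_images(path):
        poses = {}
        with open(path) as f:
            lines = [l.rstrip("\n") for l in f if not l.startswith("#")]
        for header in lines[0::2]:  # every other line is a pose header
            if not header.strip():
                continue
            t = header.split()
            image_id, name = int(t[0]), t[9]
            qw, qx, qy, qz = map(float, t[1:5])   # world-to-camera rotation
            tx, ty, tz = map(float, t[5:8])       # world-to-camera translation
            poses[image_id] = (name, np.array([qw, qx, qy, qz]),
                               np.array([tx, ty, tz]))
        return poses

    poses = read_colmap_images("export/sparse/0/images.txt")
    print(len(poses), "registered images")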
3DGS scene reconstruction using Aurora S
PLY-format export supported
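For reference, writing a colored point cloud to ASCII PLY takes only a standard header plus one line per point. The sketch below follows the common PLY convention for xyz plus 8-bit RGB; the (N, 6) input array and output path are placeholders.

    # Sketch: writing a colored point cloud to ASCII PLY. `cloud` is an
    # (N, 6) array of x, y, z, r, g, b, e.g. from the back-projection
    # sketch above; the header follows the standard PLY specification.
    import numpy as np

    def write_ply(path, cloud):
        n = len(cloud)
        header = "\n".join([
            "ply", "format ascii 1.0", f"element vertex {n}",
            "property float x", "property float y", "property float z",
            "property uchar red", "property uchar green", "property uchar blue",
            "end_header", ""])
        with open(path, "w") as f:
            f.write(header)
            for x, y, z, r, g, b in cloud:
                f.write(f"{x:.4f} {y:.4f} {z:.4f} {int(r)} {int(g)} {int(b)}\n")

    write_ply("map.ply", np.random.rand(100, 6) * [1, 1, 1, 255, 255, 255])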
Empowering Every Industry — Aurora S in Action
Aurora S’s exceptional performance is driving intelligent transformation across sectors:
Embodied intelligence: core visual perception for humanoid and quadruped robots
Outdoor robotics: reliable perception for lawn-mowing and agricultural robots in unstructured environments
Industrial automation: enhanced navigation for AGV/AMR in dynamic factory settings
Digital twins: efficient 3D scene reconstruction and VLA/VLN dataset collection
Low-speed autonomous driving: perception for campus logistics and security-patrol robots