
NEW LAUNCH: SLAMTEC Aurora S — The “Dedicated Eye” for Embodied Intelligence

We are thrilled to announce the official release of Aurora S, SLAMTEC’s next-generation all-in-one AI spatial perception system. Watch the product introduction video to learn more: https://www.youtube.com/watch?v=rt_SaY0oGGo

Born for Embodied Intelligence — The Robot’s Eyes

Weighing just 238 g, Aurora S is the first to integrate multiple spatial perception capabilities into a single compact module — including real-time 3D visual SLAM, dense depth perception, and AI semantic understanding.
Powered by SLAMTEC’s deep-learning inference engine, Aurora S requires no external computing unit or algorithm development. Simply power it on, and it delivers high-quality localization and mapping for any embodied-intelligence application.
In a world rich with color and texture, representing space using only sparse geometric points or planar maps is no longer enough. As the newest member of the Aurora family, Aurora S not only integrates a real-time high-precision visual SLAM system, but also incorporates AI-based object recognition and semantic segmentation.
Each sensing module operates in tight fusion, elevating spatial perception to a new level and enabling robots to truly understand and interact with the world around them.

SLAMTEC’s Deep-Learning Engine — Robust in Any Environment

Deep learning lies at the heart of embodied intelligence, enabling neural-network inference far beyond traditional rule-based algorithms. SLAMTEC has embedded its proprietary deep-learning AI-vSLAM engine directly within Aurora S, allowing it to perform reliably under a wide range of complex and dynamic conditions.

AI-vSLAM: Proven Reliability Across Diverse Scenarios

High reliability is essential for any real-time localization and mapping system. With its deep-learning core, Aurora S maintains stable operation even in low-light, high-contrast, low-texture, and large-scale outdoor environments — consistently outperforming conventional vSLAM systems.
Feature-point extraction — traditional method vs. Aurora S deep-learning-based extraction
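The contrast is easiest to see from the traditional side. A conventional detector scores corners directly from local image gradients, which is exactly what fails when gradients are weak (low light) or ambiguous (low texture); a learned detector replaces the handcrafted score with a trained network. A minimal NumPy sketch of the traditional approach, with illustrative parameters and a synthetic test image:

```python
import numpy as np

def harris_like_score(img, k=0.05):
    """Handcrafted corner score (the traditional approach): built directly
    from local image gradients, which is why it degrades in low-light and
    low-texture scenes where gradients are weak or ambiguous."""
    gy, gx = np.gradient(img.astype(float))
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy

    def box3(a):
        # 3x3 box filter: sum each pixel's 8 neighbours plus itself
        p = np.pad(a, 1, mode="edge")
        return sum(np.roll(np.roll(p, dy, 0), dx, 1)
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1))[1:-1, 1:-1]

    sxx, syy, sxy = box3(ixx), box3(iyy), box3(ixy)
    det = sxx * syy - sxy * sxy       # structure-tensor determinant
    trace = sxx + syy
    # High only where gradients point in two directions at once (corners)
    return det - k * trace * trace

def top_keypoints(score, n=10):
    flat = np.argsort(score, axis=None)[::-1][:n]
    ys, xs = np.unravel_index(flat, score.shape)
    return list(zip(ys.tolist(), xs.tolist()))

# Synthetic image with one bright square: only its four corners should score
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
kps = top_keypoints(harris_like_score(img))
print(kps[0])  # strongest response lies at one of the square's corners
```

A deep-learning extractor keeps the same interface (image in, scored keypoints out) but learns the scoring function from data, so its response does not collapse when the raw gradients do.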
Aurora S operates stably across multi-floor indoor spaces, uneven outdoor terrains, and low-light nighttime scenes.
1,500 m² open grass field — stable mapping and localization
Singapore National Stadium (75,000 m²) — large-scale outdoor mapping
Low-light outdoor environment — reliable real-time performance
At the same time, built-in modules in Aurora S handle loop-closure correction, map optimization, and re-localization without any external computing resources.
Aurora S built-in real-time loop-closure and graph-optimization engine
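The core idea behind loop-closure correction can be shown in miniature. When the robot re-observes a known place, the new constraint contradicts the drifted odometry chain, and a least-squares solve over the pose graph redistributes the error. Below is a deliberately tiny sketch — 1-D poses, illustrative numbers, and a linear solve standing in for the full nonlinear optimization:

```python
import numpy as np

# Toy 1-D pose graph: five poses linked by odometry, plus one loop-closure
# constraint (the robot re-observes its starting point). Each edge (i, j, z)
# says "pose j minus pose i should equal z".
odom = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 4, 1.1)]  # last step drifts
loop = [(0, 4, 4.0)]   # loop closure: start point seen again 4.0 m away
edges = odom + loop

n = 5
A = np.zeros((len(edges) + 1, n))
b = np.zeros(len(edges) + 1)
for r, (i, j, z) in enumerate(edges):
    A[r, i], A[r, j], b[r] = -1.0, 1.0, z
A[-1, 0] = 1.0          # gauge constraint: anchor pose 0 at the origin

# Least squares spreads the 0.1 m of accumulated drift around the cycle
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 3))   # approximately [0, 0.98, 1.96, 2.94, 4.02]
```

A production engine solves the same kind of problem with thousands of 6-DoF poses, robust costs, and incremental updates, but the correction mechanism — a loop edge pulling the whole trajectory back into consistency — is the one shown here.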

End-to-End Stereo Depth Perception Engine — Wide FOV, Outdoor-Ready

Using deep learning, Aurora S delivers dense stereo depth perception at 15 fps, outputting 120° wide-FOV point clouds with rich detail. It remains stable even in strong sunlight or low-texture indoor scenes, outperforming traditional stereo-depth systems. Depth data and RGB imagery are synchronized at the pixel level for perfect color alignment.
Depth camera output — comparison with a traditional stereo-depth system
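The pixel-level synchronization of depth and RGB means every depth pixel back-projects to a 3-D point that already carries its colour. A minimal sketch of that back-projection through a pinhole model — the intrinsics and the synthetic scene are illustrative, not Aurora S's actual calibration:

```python
import numpy as np

# Back-project a depth map through a pinhole camera model and attach the
# pixel-aligned RGB colour to every 3-D point.
fx = fy = 320.0                 # focal length in pixels (assumed)
cx, cy = 160.0, 120.0           # principal point (assumed)
h, w = 240, 320

depth = np.full((h, w), 2.0)    # synthetic scene: flat wall 2 m away
rgb = np.zeros((h, w, 3), dtype=np.uint8)
rgb[..., 0] = 255               # a red wall

v, u = np.mgrid[0:h, 0:w]       # pixel coordinates
z = depth
x = (u - cx) / fx * z           # pinhole back-projection
y = (v - cy) / fy * z
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
colors = rgb.reshape(-1, 3)     # same pixel order: colour i belongs to point i

center = 120 * w + 160          # the principal-point pixel
print(points[center])           # lies on the optical axis: [0. 0. 2.]
```

When depth and RGB are captured by separate, unsynchronized sensors this per-pixel pairing requires reprojection and interpolation; hardware-level alignment removes that step entirely.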

AI Object Recognition & Semantic Segmentation — Dynamic Dual-Model System

Aurora S integrates two neural-network models for real-time semantic segmentation and object detection, enabling recognition of 18+ outdoor scene types and 80+ common indoor objects, with instant model switching to match application needs. Semantic data is deeply fused with other perception modules, enabling semantic map construction and other advanced AI-mapping capabilities.
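Fusing per-frame semantic output into a persistent map can be sketched as a voting scheme: each frame votes a class label for the map cell a 3-D point falls in, and the map keeps the majority. The class names and cell size below are illustrative, not the module's actual label set:

```python
from collections import Counter

# Two interchangeable label sets, echoing the dual-model idea of switching
# between indoor and outdoor recognition (names are illustrative).
INDOOR = {0: "floor", 1: "chair", 2: "person"}
OUTDOOR = {0: "grass", 1: "road", 2: "tree"}

class SemanticMap:
    """Grid map where each cell accumulates class-label votes over time."""
    def __init__(self, cell=0.5):
        self.cell, self.votes = cell, {}

    def integrate(self, points, labels):
        # One vote per observed 3-D point, keyed by its grid cell
        for (x, y, _z), lab in zip(points, labels):
            key = (int(x // self.cell), int(y // self.cell))
            self.votes.setdefault(key, Counter())[lab] += 1

    def label_at(self, x, y, classes):
        c = self.votes.get((int(x // self.cell), int(y // self.cell)))
        return classes[c.most_common(1)[0][0]] if c else None

m = SemanticMap()
m.integrate([(0.1, 0.1, 0.0), (0.2, 0.3, 0.0), (0.1, 0.2, 0.0)], [1, 1, 0])
print(m.label_at(0.0, 0.0, INDOOR))  # "chair" wins the vote 2:1
```

Majority voting is the simplest fusion rule; real systems typically weight votes by detection confidence and decay stale observations, but the map structure — geometry cells annotated with class labels — is the same.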

