The Embodied Eye: Aurora S Unveiled
Hello everyone!
We often say that for machines to truly understand the world, they first need a good pair of “eyes.” Today, Xiaolan takes you on-site to see how Aurora S, beyond its powerful spatial localization and mapping capabilities, also serves as the “visual powerhouse” of robots.
As the core sensor for robot obstacle avoidance, the depth camera’s field of view (FOV) directly determines how wide a robot can “see” during motion. Unfortunately, traditional depth cameras often have limited FOVs, creating blind spots in obstacle detection.
In Aurora S, we’ve innovatively implemented an end-to-end neural network that achieves 120° ultra-wide-angle depth perception — wide enough to cover the entire area in front of the robot.
We placed Aurora S and a conventional depth camera in the same position to capture the same scene. Let’s see how they compare.
Third-person perspective shooting
The result? A striking difference! Thanks to its 120° horizontal FOV, Aurora S captures a much broader area in a single shot, taking in the full environmental picture. In contrast, the traditional camera’s view looks noticeably cramped.
Traditional camera vs. Aurora S depth comparison
Indoor shot of same object — Aurora’s FOV is wider
In short: Robots and intelligent devices can now gather more comprehensive environmental data at a glance — enabling faster decisions and safer actions.
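To give a sense of what the wider FOV means in practice, here is a small sketch using the idealized pinhole relation width = 2·d·tan(FOV/2). The 70° figure for a "typical" depth camera is an illustrative assumption, not a spec from this article, and real wide-angle lenses deviate from the pinhole model.

```python
import math

def coverage_width(fov_deg: float, distance_m: float) -> float:
    """Horizontal coverage of a camera at a given distance.

    Uses the idealized pinhole relation width = 2 * d * tan(FOV / 2);
    real lenses (especially wide-angle ones) distort, so treat this
    as a rough estimate.
    """
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

# At 2 m ahead of the robot, a 120° FOV spans roughly 6.9 m across,
# while an assumed ~70° conventional camera spans only about 2.8 m.
wide = coverage_width(120, 2.0)
narrow = coverage_width(70, 2.0)
print(round(wide, 1), round(narrow, 1))  # → 6.9 2.8
```

Even under this simplified model, the 120° FOV covers more than twice the width at the same distance — which is why obstacles at the edges of a scene stay visible.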
Many depth cameras fail when facing smooth, low-texture surfaces. So, we ran a series of tough weak-texture tests:
1. Black Counter Test: The traditional camera’s depth map shows obvious “holes” and missing data, while Aurora S delivers a smooth, complete, and stable output.
Traditional vs. Aurora S depth comparison
Counter overlay mode
2. Elevator Door Test (Stainless Steel): Traditional cameras almost fail entirely on reflective metallic surfaces. Yet Aurora S clearly reveals the depth contrast and 3D structure of the elevator door.
Traditional vs. Aurora S depth comparison
Elevator recognition overlay mode
The result: Whether in a dim café or a shiny elevator lobby, Aurora S provides reliable 3D perception — effectively eliminating visual blind spots.
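One simple way to quantify the “holes” seen in these comparisons is the fraction of pixels with no valid depth. The sketch below assumes a depth map represented as a 2D list of floats where 0.0 marks a dropout; this metric and representation are illustrative, not part of the Aurora S API.

```python
def hole_fraction(depth_map):
    """Fraction of pixels with no valid depth reading.

    depth_map: 2D list of floats, with 0.0 marking a missing
    (dropped-out) measurement — an assumed convention.
    """
    total = sum(len(row) for row in depth_map)
    holes = sum(1 for row in depth_map for d in row if d == 0.0)
    return holes / total if total else 0.0

# Toy 3x4 depth map with two missing pixels → 2/12 ≈ 16.7% holes
toy = [
    [1.2, 1.3, 0.0, 1.4],
    [1.2, 1.3, 1.3, 1.4],
    [0.0, 1.3, 1.3, 1.5],
]
print(f"{hole_fraction(toy):.1%}")  # → 16.7%
```

Running such a metric over the black-counter and elevator-door scenes would make the “smooth, complete output” claim directly measurable.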
Bright outdoor light and long-range distances are traditionally nightmares for depth cameras. But Aurora S, powered by its end-to-end deep learning architecture, shines in these conditions. Here’s what we found:
1. Dynamic Capture on the Move
During outdoor walking tests, the Aurora S depth map remained stable and continuous — no tearing or data loss during rapid movement. The 3D structure of the environment was reconstructed in real time.
Traditional vs. Aurora S depth comparison
Outdoor walk overlay mode
2. Accurate Roadside Detail Detection — Outdoor Safety Enhanced
We tested metal guardrails along the road. Traditional depth cameras produced sparse or missing data, but Aurora S sharply outlined the continuous structure of the rails, providing vital perception for stable outdoor obstacle avoidance.
Traditional vs. Aurora S depth comparison
Guardrail overlay mode, long view
In essence: Aurora S enables robots to operate seamlessly from indoors to outdoors, maintaining clear 3D perception even under strong light or dynamic movement.
Aurora S doesn’t just measure distances — it helps machines understand their surroundings.
Outdoors: It distinguishes over 18 types of scene elements, including buildings, roads, vegetation, and vehicles.
Indoors: It accurately recognizes over 80 types of common objects, such as people, tables, chairs, and monitors.
Its recognition capability is incredibly fine-grained — even detecting a person from just an arm or leg, and identifying the “phone” in someone’s hand. This leads to strong occlusion resistance and opens up rich interaction possibilities.
Aurora S projects all recognition results directly and accurately into a 3D point cloud in real time.
In this 3D world, you no longer see random points, but clearly color-labeled “roads,” “vehicles,” “pedestrians,” and “buildings.” This empowers robots to make decisions and interact directly in 3D space.
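The idea of attaching semantic labels to points can be sketched as a simple mapping from per-point class labels to display colors. The class names and RGB palette below are hypothetical placeholders for illustration — Aurora S's actual label set and output format are not specified here.

```python
# Hypothetical label-to-color palette (illustrative values only).
PALETTE = {
    "road": (128, 64, 128),
    "vehicle": (0, 0, 142),
    "pedestrian": (220, 20, 60),
    "building": (70, 70, 70),
}

def colorize_points(points, labels, palette=PALETTE,
                    default=(128, 128, 128)):
    """Attach an RGB color to each (x, y, z) point by its semantic label.

    points: list of (x, y, z) tuples; labels: list of class-name strings.
    Unknown labels fall back to a neutral gray.
    """
    return [(x, y, z, *palette.get(label, default))
            for (x, y, z), label in zip(points, labels)]

pts = [(0.5, 0.0, 2.0), (1.0, 0.2, 3.5)]
labs = ["road", "pedestrian"]
colored = colorize_points(pts, labs)
print(colored[0])  # → (0.5, 0.0, 2.0, 128, 64, 128)
```

The result is exactly the kind of color-labeled point cloud described above: each point carries both geometry and a category a planner can act on.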
We conducted a real-time demo of Aurora S in an office setting.
As shown above, when connected via Remote UI, you can observe two live modules on the main interface:
Depth Perception: Accurate 3D geometry of the environment
Semantic Recognition: Precisely color-labeled object categories
All in real time — demonstrating Aurora S as a powerful and ready-to-use 3D scene understanding solution.
Aurora S is not just a depth camera — it’s a comprehensive visual hub for intelligent machines.
With its 120° ultra-wide depth perception, reliable imaging of weak-texture and reflective surfaces, stable outdoor and dynamic performance, and fine-grained semantic understanding, Aurora S gives intelligent machines a complete, real-time view of their world.
We believe powerful tools spark endless innovation.
May Aurora S become the most reliable “pair of eyes” behind your next breakthrough.
Keywords: SLAM, Product Introduction