The Embodied Eye: Aurora S Unveiled

    Hello everyone!
    We often say that for machines to truly understand the world, they first need a good pair of “eyes.” Today, Xiaolan takes you on-site to witness how Aurora S, beyond providing powerful spatial localization and mapping capabilities, also becomes the “visual powerhouse” of robots through sheer technological strength!

    Highlight 1: 120° Ultra-Wide Stereo Depth Perception — Far Beyond Traditional Solutions

    As the core sensor for robot obstacle avoidance, the depth camera’s field of view (FOV) directly determines how wide a robot can “see” during motion. Unfortunately, traditional depth cameras often have limited FOVs, creating blind spots in obstacle detection.

    In Aurora S, we’ve innovatively implemented an end-to-end neural network that achieves a 120° ultra-wide-angle depth perception — wide enough to cover the entire area in front of the robot.
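    To put that number in perspective, here is a minimal sketch (not from the article, and using an assumed ~70° FOV to stand in for a typical traditional depth camera) of how horizontal field of view translates into coverage width at a given distance under a simple pinhole model:

```python
import math

def horizontal_coverage(fov_deg: float, distance_m: float) -> float:
    """Width of the strip visible at distance_m for a camera with
    horizontal field of view fov_deg (simple pinhole geometry)."""
    return 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)

# 120 deg is the Aurora S figure from the text; 70 deg is an assumed
# value standing in for a typical traditional depth camera.
for fov in (120.0, 70.0):
    print(f"{fov:5.1f} deg FOV covers {horizontal_coverage(fov, 1.0):.2f} m at 1 m")
```

    At one meter, the 120° camera covers a strip roughly 3.5 m wide versus about 1.4 m for the narrower one, which is why the traditional view looks so cramped in the comparison shots below.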

    We placed Aurora S and a conventional depth camera in the same position to capture the same scene. Let’s see how they compare.

    Third-person perspective shot

    The result? A striking difference! Thanks to its 120° horizontal FOV, Aurora S captures a much broader area in a single shot, taking in the full environmental picture. In contrast, the traditional camera’s view looks noticeably cramped.

    Traditional camera vs. Aurora S depth comparison

    Indoor shot of the same object — Aurora’s FOV is wider

    In short: Robots and intelligent devices can now gather more comprehensive environmental data at a glance — enabling faster decisions and safer actions.

    Highlight 2: The “Weak Texture Terminator” — No Fear of Black Counters or Elevator Doors

    Many depth cameras fail when facing smooth, low-texture surfaces. So, we ran a series of tough weak-texture tests:

    1. Black Counter Test: The traditional camera’s depth map shows obvious “holes” and missing data, while Aurora S delivers a smooth, complete, and stable output.

    Bar counter detection, third-person shot
    Traditional vs. Aurora S depth comparison

    Counter overlay mode

    2. Elevator Door Test (Stainless Steel): Traditional cameras almost fail entirely on reflective metallic surfaces. Yet Aurora S clearly reveals the depth contrast and 3D structure of the elevator door.

    Elevator door, third-person shot
    Traditional vs. Aurora S depth comparison

    Elevator recognition overlay mode

    The result: Whether in a dim café or a shiny elevator lobby, Aurora S provides reliable 3D perception — effectively eliminating visual blind spots.

    Highlight 3: Excellent Performance Under Bright Outdoor Light

    Bright outdoor light and long-range distances are traditionally nightmares for depth cameras. But Aurora S, powered by its end-to-end deep learning architecture, shines in these conditions. Here’s what we found:

    1. Dynamic Capture on the Move

    During outdoor walking tests, the Aurora S depth map remained stable and continuous — no tearing or data loss during rapid movement. The 3D structure of the environment was reconstructed in real time.

    Traditional vs. Aurora S depth comparison

    Outdoor walk overlay mode

    2. Accurate Roadside Detail Detection — Outdoor Safety Enhanced

    We tested metal guardrails along the road. Traditional depth cameras produced sparse or missing data, but Aurora S sharply outlined the continuous structure of the rails, providing vital perception for stable outdoor obstacle avoidance.

    Walking shot, third-person perspective
    Traditional vs. Aurora S depth comparison

    Guardrail overlay mode, long view

    In essence: Aurora S enables robots to operate seamlessly from indoors to outdoors, maintaining clear 3D perception even under strong light or dynamic movement.

    Highlight 4: Dynamic Semantic Recognition — Seeing Not Just “Where,” But “What”

    Aurora S doesn’t just measure distances — it helps machines understand their surroundings.

    Outdoors: It distinguishes over 18 types of scene elements, including buildings, roads, vegetation, and vehicles.

    Outdoor scene semantic recognition

    Indoors: It accurately recognizes over 80 types of common objects, such as people, tables, chairs, and monitors.

    Semantic recognition of indoor objects

    Its recognition capability is incredibly fine-grained — even detecting a person from just an arm or leg, and identifying the “phone” in someone’s hand. This leads to strong occlusion resistance and opens up rich interaction possibilities.

    Fine-grained recognition of different objects

    Highlight 5: 3D Semantic Point Cloud — From 2D Understanding to Full Spatial Awareness

    Aurora S projects all recognition results directly and accurately into a 3D point cloud in real time.

    In this 3D world, you no longer see random points, but clearly color-labeled “roads,” “vehicles,” “pedestrians,” and “buildings.” This empowers robots to make decisions and interact directly in 3D space.

    Aurora S point cloud overlay
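    For readers curious what “projecting recognition results into a 3D point cloud” involves in principle, the sketch below back-projects a depth map plus a per-pixel label map into a colored point cloud using the standard pinhole model. It is a generic illustration under assumed intrinsics and a made-up palette, not the Aurora S SDK or its internal pipeline:

```python
import numpy as np

# Hypothetical label-to-color palette; the real class set is product-defined.
PALETTE = {0: (128, 128, 128),   # road
           1: (0, 200, 0),       # vegetation
           2: (200, 0, 0)}       # vehicle

def depth_to_semantic_cloud(depth, labels, fx, fy, cx, cy):
    """Back-project a depth map (meters) and per-pixel class ids into
    an (N, 3) point array plus matching (N, 3) RGB colors."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                       # drop invalid depth pixels

    z = depth[valid]
    x = (u[valid] - cx) * z / fx            # pinhole model: X = (u - cx) * Z / fx
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=1)

    colors = np.array([PALETTE.get(int(i), (255, 255, 255)) for i in labels[valid]])
    return points, colors
```

    Each point keeps both its 3D position and its class color, which is the kind of structure a planner can query directly, for example asking whether a “pedestrian” cluster lies within two meters ahead.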

    Seeing Is Believing: Real-Time Office Demo

    We conducted a real-time demo of Aurora S in an office setting.

    Aurora S dense point cloud of an office

    As shown above, when connected via Remote UI, you can observe two live modules on the main interface:

    Depth Perception: Accurate 3D geometry of the environment

    Semantic Recognition: Precisely color-labeled object categories

    All in real time — demonstrating Aurora S as a powerful and ready-to-use 3D scene understanding solution.
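    As a rough idea of how two such streams can be combined into a single overlay view, here is a hedged, stand-alone sketch that blends a color-coded semantic map over a grayscale depth rendering. The array layout and function name are assumptions for illustration, not the Remote UI’s actual rendering code:

```python
import numpy as np

def overlay_semantics_on_depth(depth, semantic_rgb, alpha=0.5):
    """Blend a color-coded semantic map (H, W, 3 uint8) over a grayscale
    rendering of a depth map (H, W float, 0 = invalid)."""
    gray = np.zeros(depth.shape, dtype=np.uint8)
    valid = depth > 0
    if valid.any():
        gray[valid] = (255 * depth[valid] / depth[valid].max()).astype(np.uint8)
    gray_rgb = np.repeat(gray[..., None], 3, axis=2)   # replicate to 3 channels
    blended = alpha * semantic_rgb.astype(float) + (1.0 - alpha) * gray_rgb
    return blended.astype(np.uint8)
```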

    In Summary

    Aurora S is not just a depth camera — it’s a comprehensive visual hub for intelligent machines.

    With its:

    • 120° wide-angle neural depth perception — seeing more,
    • Robustness to weak textures — seeing stably,
    • Strong-light resistance — seeing farther,
    • AI semantic segmentation & 3D semantic projection — seeing and understanding better.

    We believe powerful tools spark endless innovation.

    May Aurora S become the most reliable “pair of eyes” behind your next breakthrough.

    Keywords: SLAM, Product Introduction
