SHENZHEN, China — October 24, 2025 — Driven by the wave of artificial intelligence, embodied intelligent robots are ushering in unprecedented development opportunities.
Large-model technology gives robots a powerful “brain,” enabling them to gradually acquire the abilities to understand, make decisions, and evolve.
From industrial production to home services, from medical care to space exploration, the innovative application boundaries of embodied intelligent robots are constantly expanding, with unlimited potential for the future.
However, one of the keys to truly bringing robots to life lies in their ability to perceive the world. As the core of a robot’s vision system, the binocular (stereo) camera module carries the mission of giving machines three-dimensional vision. Binocular modules built on Shanghai HiSilicon chips, with their superior performance, are becoming a key force driving technological innovation in robotics.
This module is built on the Hi3519DV500 platform and supports AI binocular depth algorithms, producing a denser depth map than traditional binocular depth algorithms. It can also continuously gather additional training data from difficult, pain-point scenarios, becoming smarter with use.
The Hi3519DV500 features hardware binocular synchronization and is equipped with a DPU 2.0 binocular hardware acceleration unit, enabling high-performance depth computation at 720p and 30 frames per second.
Through its proprietary hardware acceleration architecture and AI computing capabilities, it can output high-precision, low-latency, and low-noise binocular depth data with extremely low power consumption, providing a stable and reliable perception foundation for upper-layer algorithms.
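For readers unfamiliar with how a binocular depth pipeline works, the sketch below shows the generic principle: a stereo matcher estimates, for each pixel, the disparity between the left and right views, and depth follows from Z = f·B/d (focal length times baseline divided by disparity). This is a minimal illustration using OpenCV’s standard matcher, not the Hi3519DV500’s proprietary DPU 2.0 pipeline; the focal length, baseline, and file names are placeholder assumptions.

```python
# Minimal stereo-depth sketch (generic OpenCV pipeline, not the Hi3519DV500 DPU path).
# Focal length, baseline, and file names below are placeholder assumptions.
import cv2
import numpy as np

FOCAL_LENGTH_PX = 700.0   # focal length in pixels (placeholder)
BASELINE_M = 0.06         # distance between the two cameras in metres (placeholder)

# Load a synchronized left/right pair as grayscale images.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching: estimates, for each left-image pixel, how far its
# match lies in the right image (the disparity, in pixels).
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,        # search range; must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
    uniquenessRatio=10,
)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM returns fixed-point

# Depth from disparity: Z = f * B / d. Invalid (<= 0) disparities are masked out.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]
```

The hardware binocular synchronization mentioned above matters here because the Z = f·B/d relation assumes both frames were captured at the same instant; on a moving robot, any timing offset between the two cameras shows up directly as disparity error and therefore as depth error.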
To address the challenges of real-world robot applications, the module has undergone in-depth algorithm optimization:
For slender objects, recognition accuracy has been significantly improved by refining the feature extraction method.
For weakly textured areas, a dedicated texture enhancement algorithm effectively improves stereo matching.
For noise suppression, filtering techniques yield noticeably cleaner depth data.
Sub-pixel-level computation is supported, pushing depth measurement accuracy to a new level (a simplified illustration of sub-pixel refinement follows below).
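To make the sub-pixel step above concrete (the module’s actual sub-pixel method is not disclosed), the following sketch shows the textbook approach of refining an integer disparity by fitting a parabola through the matching costs at neighboring disparities; all names and numbers are illustrative.

```python
# Illustrative sub-pixel disparity refinement by parabolic interpolation.
# This is a generic textbook technique shown for explanation; the module's
# actual sub-pixel method is not described in the announcement.
import numpy as np

def refine_subpixel(costs: np.ndarray, d: int) -> float:
    """Given matching costs indexed by integer disparity and the integer
    disparity d with the lowest cost, fit a parabola through the costs at
    d-1, d, d+1 and return the disparity of its minimum."""
    if d <= 0 or d >= len(costs) - 1:
        return float(d)  # cannot interpolate at the border of the search range
    c_prev, c_best, c_next = costs[d - 1], costs[d], costs[d + 1]
    denom = c_prev - 2.0 * c_best + c_next
    if denom <= 0:
        return float(d)  # degenerate (flat or non-convex) cost profile
    offset = 0.5 * (c_prev - c_next) / denom  # within [-0.5, 0.5] for a true minimum
    return d + offset

# Example: costs over a disparity search range of 0..7, integer minimum at d = 4.
costs = np.array([9.0, 7.5, 6.0, 4.2, 3.9, 4.8, 6.5, 8.0])
d_int = int(np.argmin(costs))
print(refine_subpixel(costs, d_int))  # refined estimate of about 3.75
```

Refinements of this kind shift the estimated disparity by a fraction of a pixel, which is what allows depth resolution finer than the pixel grid at longer ranges.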
These technological innovations greatly enhance the robot’s perception robustness in complex environments, helping it cope with challenges such as lighting changes, cluttered backgrounds, and dynamic scenes, and providing strong support for core functions such as path planning, obstacle avoidance, and precision manipulation.
Leveraging its open ecosystem and accumulated CANN technology stack, Shanghai HiSilicon has worked with Qianxun Intelligent to complete the integration and adaptation of its edge computing platform with Qianxun Intelligent’s Moz robot, providing strong support for independent innovation in robots’ core compute.
The collaboration benefits from the powerful computing capabilities of AI chips built on the CANN architecture, which allow Qianxun Intelligent’s VLA (vision-language-action) large model and self-developed core algorithms to run smoothly on the edge computing platform. On top of the CANN hardware and software ecosystem, Moz robots have successfully implemented key robotic tasks such as environmental perception, task planning, and environmental interaction.
The collaboration with Qianxun Intelligent not only validates the competitiveness of CANN-architecture AI chips in the robotics field, but also offers a replicable model for deeply integrating the CANN-optimized software, hardware, and chip ecosystem with the robotics industry.
In the future, the two parties will continue to deepen their cooperation, promote the application of a range of chip solutions, including sensing, analog, MCU, StarFlash, and RedCap, in embodied-intelligence scenarios, and help the robotics industry move toward a new stage of upgrading driven by “device AI+”.
Source: HiSilicon