Abstract:
An Unmanned Aerial Vehicle (UAV) is a flying robot that operates without constant involvement of a human pilot. UAVs are applied in military and civilian domains to tasks such as search and rescue operations, 3D mapping, and simultaneous localization and mapping (SLAM). SLAM approaches rely on various sensors, including lidars and cameras. Visual SLAM approaches use visual sensing systems and operate successfully in GPS-denied environments. Further, deploying several UAVs makes it possible to handle complex tasks that are beyond the capabilities of a single robot, reduces exploration time, and provides redundancy in case a single robot fails. This paper presents a comparison of the two most applicable vision-based collaborative monocular SLAM methods available in the Robot Operating System (ROS), CORB-SLAM and CCM-SLAM, running on a pair of UAVs. The evaluation is performed on preassembled datasets that correspond to a virtual environment in the Gazebo simulator. The error estimation in the virtual experiments demonstrated that CCM-SLAM achieves higher global localization accuracy than CORB-SLAM.