Autonomous pose estimation of underground disaster rescue drones based on visual and laser fusion

HE Yijing, YANG Wei

Citation: HE Yijing, YANG Wei. Autonomous pose estimation of underground disaster rescue drones based on visual and laser fusion[J]. Journal of Mine Automation, 2024, 50(4): 94-102. DOI: 10.13272/j.issn.1671-251x.2023080124


Funding: This work was supported by the National Natural Science Foundation of China (51874299).

About the authors:

    HE Yijing (2000—), female, from Zaozhuang, Shandong; master's student. Her research interests include broadband mobile communication and underground drone positioning. E-mail: 21120060@bjtu.edu.cn

Corresponding author:

    YANG Wei (1964—), male, from Beijing; professor. His research interests include broadband mobile communication systems and dedicated mobile communication. E-mail: wyang@bjtu.edu.cn

  • CLC number: TD67

Autonomous pose estimation of underground disaster rescue drones based on visual and laser fusion

  • Abstract: The autonomous navigation capability of drones in post-disaster mines is a prerequisite for performing rescue and disaster-relief tasks, and autonomous pose estimation in unknown three-dimensional space is one of the key technologies for drone autonomous navigation. At present, vision-based pose estimation algorithms suffer from scale ambiguity and poor positioning performance, because a monocular camera cannot directly obtain depth information in three-dimensional space and is susceptible to dim underground lighting; laser-based pose estimation algorithms, in turn, are prone to errors because of the LiDAR's small field of view, uneven scanning pattern, and the constraints imposed by the structural characteristics of mine scenes. To address these problems, an autonomous pose estimation algorithm for underground post-disaster rescue drones based on visual and laser fusion is proposed. First, the monocular camera and LiDAR carried by the underground drone acquire image data and laser point-cloud data of the mine. ORB feature points are uniformly extracted from each frame of mine image data, their depth is recovered using the depth information of the laser point cloud, and vision-based drone pose estimation is achieved through inter-frame matching of the feature points. Second, feature corner points and feature plane points are extracted from each frame of underground laser point-cloud data, and laser-based drone pose estimation is achieved through inter-frame matching of these feature points. Third, the visual matching error function and the laser matching error function are placed under a single pose optimization function, and the pose of the underground drone is estimated through visual-laser fusion.
Finally, historical frame data are introduced through a visual sliding window and a laser local map to construct an error function between the historical frames and the latest estimated pose; nonlinear optimization of this error function refines and corrects the drone pose under local constraints, preventing accumulated estimation errors from causing trajectory drift. Simulation experiments were conducted in an environment modeling the complex conditions of a post-disaster mine. The results show that the average relative translation error and relative rotation error of the fusion-based pose estimation algorithm are 0.001 1 m and 0.000 8°, respectively, that the average processing time per frame of data is below 100 ms, and that the algorithm exhibits no trajectory drift during long-term underground operation. Compared with pose estimation based solely on vision or laser, the fusion algorithm improves accuracy and stability while meeting real-time requirements.
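The depth-recovery step described in the abstract (assigning LiDAR depth to monocular ORB features) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the intrinsic matrix, the assumption that the point cloud is already transformed into the camera frame, and the nearest-projected-point association rule (with a pixel-distance threshold) are all illustrative choices, and the function and parameter names are hypothetical.

```python
import numpy as np

def recover_feature_depth(features_uv, cloud_cam, K, max_px_dist=3.0):
    """Assign each image feature the depth of its nearest projected laser point.

    features_uv: (N, 2) pixel coordinates of ORB features.
    cloud_cam:   (M, 3) LiDAR points already transformed into the camera frame.
    K:           3x3 pinhole intrinsic matrix.
    Returns an (N,) array of depths; NaN where no laser point projects nearby.
    """
    # Keep only points in front of the camera (positive depth).
    pts = cloud_cam[cloud_cam[:, 2] > 0.1]
    # Pinhole projection to homogeneous pixel coordinates, then normalize.
    proj = (K @ pts.T).T
    uv = proj[:, :2] / proj[:, 2:3]
    depths = np.full(len(features_uv), np.nan)
    for i, f in enumerate(features_uv):
        d2 = np.sum((uv - f) ** 2, axis=1)   # squared pixel distances
        j = np.argmin(d2)
        if d2[j] <= max_px_dist ** 2:
            depths[i] = pts[j, 2]            # adopt the laser point's depth
    return depths
```

In a full system the association step would typically interpolate among several neighboring laser points rather than copy the single nearest one, but the nearest-neighbor rule keeps the sketch short while showing how the sparse laser depth resolves the monocular scale ambiguity.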
  • Figure 1. Underground roadway drone coordinate system

    Figure 2. Process of drone autonomous pose estimation

    Figure 3. Depth recovery of ORB feature points

    Figure 4. Keyframe selection strategy

    Figure 5. Comparison of estimated and real trajectories of different algorithms

    Figure 6. Comparison of absolute and relative pose errors of different algorithms

    Figure 7. Average translation and rotation errors of different algorithms

    Table 1. Average running time of the algorithm's main modules (ms)

    Stage              Module                                       Average running time
    Pose estimation    ORB feature extraction and matching          25.81
                       Laser feature extraction and matching        21.57
                       Visual-laser pose fusion                     12.69
    Pose optimization  Sliding-window and local-map optimization    94.46

    Table 2. Comparison of average resource usage of different algorithms (%)

    Algorithm                                        CPU usage    Memory usage
    Vision-based pose estimation                     19.7         30.8
    Laser-based pose estimation                      18.9         26.8
    Visual-laser fusion pose estimation              21.2         31.6


Publication history
  • Received: 2023-08-30
  • Revised: 2024-04-23
  • Published online: 2024-05-09
  • Issue date: 2024-03-31
