SLAM技术及其在矿山无人驾驶领域的研究现状与发展趋势

崔邵云, 鲍久圣, 胡德平, 袁晓明, 张可琨, 阴妍, 王茂森, 朱晨钟

崔邵云,鲍久圣,胡德平,等. SLAM技术及其在矿山无人驾驶领域的研究现状与发展趋势[J]. 工矿自动化,2024,50(10):38-52. DOI: 10.13272/j.issn.1671-251x.2024070010
CUI Shaoyun, BAO Jiusheng, HU Deping, et al. Research status and development trends of SLAM technology in autonomous mining field[J]. Journal of Mine Automation,2024,50(10):38-52. DOI: 10.13272/j.issn.1671-251x.2024070010


基金项目: 江苏省科技成果转化专项资金项目(BA2023035);煤矿采掘机械装备国家工程实验室开放课题项目(GCZX-2023-01);江苏高校优势学科建设工程资助项目(PAPD)。
    作者简介:

    崔邵云(1999—),男,山西晋城人,硕士研究生,主要研究方向为煤矿井下激光SLAM,E-mail:1055462124@qq.com

    通讯作者:

    鲍久圣(1979—),男,安徽桐城人,教授,博士,博士研究生导师,主要研究方向为矿山运输及其智能化,E-mail: cumtbjs@cumt.edu.cn

  • 中图分类号: TD67

Research status and development trends of SLAM technology in autonomous mining field

  • 摘要: 无人驾驶是矿山智能化关键技术之一,其中即时定位与地图构建(SLAM)技术是实现无人驾驶的关键环节。为推动SLAM技术在矿山无人驾驶领域的发展,对SLAM技术原理、成熟地面SLAM方案、现阶段矿山SLAM研究现状、未来矿山SLAM发展趋势进行了探讨。根据SLAM技术所使用的传感器,从视觉、激光及多传感器融合3个方面分析了各自的技术原理及相应框架,指出视觉和激光SLAM技术通过单一相机或激光雷达实现,存在易受环境干扰、无法适应复杂环境等缺点,多传感器融合SLAM是目前最佳的解决方法。探究了目前矿山SLAM技术的研究现状,分析了视觉、激光、多传感器融合3种SLAM技术在井工煤矿、露天矿山的适用性与研究价值,指出多传感器融合SLAM是井工煤矿领域的最佳方案,SLAM技术在露天矿山领域研究价值不高。基于现阶段井下SLAM技术存在的难点(随时间及活动范围积累误差、各类场景引起的不良影响、各类传感器无法满足高精度SLAM算法的硬件要求),提出矿山无人驾驶领域SLAM技术未来应向多传感器融合、固态化、智能化方向发展。
    Abstract: Autonomous driving is identified as one of the key technologies for mining intelligence, with simultaneous localization and mapping (SLAM) technology serving as a key link to realize autonomous driving. To advance the development of SLAM technology in autonomous mining, this paper discusses the principles of SLAM technology, mature ground SLAM solutions, the current research status of mining SLAM, and future development trends. Based on the sensors employed in SLAM technology, the study analyzes the technical principles and corresponding frameworks from three aspects: vision, laser, and multi-sensor fusion. It is noted that visual and laser SLAM technologies, which utilize single cameras or LiDAR, are susceptible to environmental interference and cannot adapt to complex environments. Multi-sensor fusion SLAM emerges as the most effective solution. The research examines the status of mining SLAM technology, analyzing the applicability and research value of visual, laser, and multi-sensor fusion SLAM technologies in underground coal mines and open-pit mines. It concludes that multi-sensor fusion SLAM represents the optimal research approach for underground coal mines, while the research value of SLAM technology in open-pit mines is limited. Based on the challenges identified in underground SLAM technology, such as accumulated errors over time and activity range, adverse effects from various scenes, and the inadequacy of various sensors to meet the hardware requirements for high-precision SLAM algorithms, it is proposed that future developments in SLAM technology for autonomous mining should focus on multi-sensor fusion, solid-state solutions, and intelligent development.
  • 煤炭是我国最主要的能源之一,并被誉为“工业之粮”,为我国的经济增长提供了坚实的后盾,因此国家高度重视并鼓励煤炭资源的环保使用[1-2]。矸石是一种在采煤过程中与煤一起开采出来的伴生产物,如果不将煤中的矸石分选出来,矸石的燃烧过程会释放出众多的有毒有害气体(CO,H2S,SO2等),造成巨大的污染,并破坏居民的生活环境,损害人体健康[3-4],不符合国家对煤炭资源清洁利用的要求。因此,在实际生产中需对煤和矸石进行分选。传统的人工分选方法劳动量大,分选效率较低,已逐渐被淘汰[5-7]。目前,煤和矸石分选方法主要有重介选煤、跳汰选煤、浮选、干法选煤、γ射线检测法等,普遍存在投资成本高、分选效率低、环境污染严重等问题[8]。伴随着机器视觉和图像处理技术的飞速进步,国内外研究人员通过CCD相机获得煤和矸石的可见光图像,用机器视觉和图像处理技术相结合的方法对煤和矸石进行分选。王家臣等[9]探究了煤和矸石在不同照度下的响应特性,对煤和矸石可见光图像的灰度、纹理特征进行了提取,以支持向量机(Support Vector Machine,SVM)模型对煤和矸石进行识别,正确率达98.39%。吴开兴等[10]利用灰度共生矩阵描述煤矸纹理特征,并用SVM进行识别。上述研究主要涉及对可见光图像特征的提取,但井下工况复杂,且存在因煤矸扬起的粉尘,CCD相机获得的煤和矸石可见光图像质量不高,进而分选率较低。随着光电技术的不断进步,X射线在煤矸分选领域得到了广泛研究和应用。郭永存等[11]利用双能X射线技术对煤和矸石进行透射和成像,提出了一种结合R值图像和高低能图像特征来进行煤和矸石多维度分析的方法,能够对不同的煤种实现较高的识别准确率。王文鑫等[12]提出一种X射线透射煤矸智能识别方法,提取煤矸图像灰度特征和纹理特征,采用多层感知机(Multilayer Perceptron,MLP)模型实现煤矸识别。上述煤矸分选方法均能达到良好的分选效果,但X射线会对工作人员的健康带来伤害。

    红外热成像技术[13]具有不受光照、粉尘影响的特点,且不会对人体造成伤害,在发电[14]、冶炼矿物[15]、绿色环保[16]、煤岩界面识别[17-19]等领域广泛应用。本文提出一种基于红外热成像的煤矸识别方法。考虑煤和矸石在室温下表面温度接近,导致它们在红外图像中没有较大差异,将煤和矸石在传送带的输送下依次经过加热区域,由红外热成像仪对煤和矸石进行拍摄,得到煤和矸石加热后的红外图像,对红外图像进行预处理并提取特征,采用SVM进行分类识别,达到识别煤和矸石的目的。

    为了获取煤和矸石加热后的红外图像,搭建了煤矸识别红外热成像实验装置,如图1所示。

    图  1  煤矸识别红外热成像实验装置
    Figure  1.  Experimental device for coal and gangue recognition

    实验装置由计算机、红外热成像仪、加热区域、传送带、带速控制器、控温器组成。红外热成像仪在监测物体表面温度和实时热成像的同时,可获取物体的红外灰度图像及各种色彩模板下的红外伪彩色图像。本实验使用的红外热成像仪红外响应波段为8~14 μm,测温范围为−40~400 ℃,红外分辨率为320×240。加热区域由3盏碳纤维加热灯组成。加热区域和红外热成像仪通过钢架固定在传送带上方,加热区域距传送带20 cm,红外热成像仪距传送带50 cm,并通过数据线与计算机相连;控温器和带速控制器固定在传送带一侧。控温器带有测温探头,可控制加热区域温度。加热区域温度越高,煤和矸石加热后的红外图像特征差异越明显,但考虑到实验室条件限制及真实井下环境,过高的温度可能带来安全隐患,因此本实验通过控温器将加热区域温度控制在70 ℃。煤炭的燃点因煤种不同而有所差异,但基本都在300 ℃以上,70 ℃的加热温度远低于煤炭燃点,可避免引燃风险。传送带速度保持恒定,为0.06 m/s。

    自然界中任何表面温度高于绝对零度的物体都会向周围环境辐射电磁波,波长介于0.75~1 000 μm的电磁波被称为红外线[20]。红外热成像技术主要是探测物体表面的红外辐射能量,以获得物体表面的温度。红外热成像技术基于斯蒂芬−玻耳兹曼定律,该定律指出,物体单位面积在单位时间内的热辐射能量为

    $$ J = \sigma \varepsilon {T^4} $$ (1)

    式中:σ为斯蒂芬−玻耳兹曼常数; ε为物体表面发射率;T为物体的热力学温度。

    根据斯蒂芬−玻耳兹曼定律可知,物体单位面积的热辐射能量与物体的表面发射率成正比,与物体热力学温度的4次方成正比。物体的表面发射率介于0与1之间,且对同一物体近似为常数,而热辐射能量随热力学温度的4次方增长,因此热辐射能量主要取决于物体的热力学温度,温度越高,红外辐射的能量就越强。
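为直观说明该定律,下面给出一段示意性Python代码,计算加热温度70 ℃与室温22 ℃下单位面积热辐射能量之比(发射率取0.95仅为演示假设,由式(1)可知其在比值中抵消):

```python
# 依据斯蒂芬-玻耳兹曼定律 J = σεT^4 计算单位面积热辐射能量的示意代码
SIGMA = 5.670374419e-8  # 斯蒂芬-玻耳兹曼常数, W/(m^2·K^4)

def radiant_exitance(temperature_k: float, emissivity: float) -> float:
    """返回表面温度 temperature_k(单位K)、发射率 emissivity 的物体单位面积热辐射能量 (W/m^2)。"""
    return SIGMA * emissivity * temperature_k ** 4

# 70 ℃ (343.15 K) 与室温 22 ℃ (295.15 K) 下的辐射能量对比
j_hot = radiant_exitance(343.15, 0.95)
j_room = radiant_exitance(295.15, 0.95)
print(j_hot / j_room)  # 约为 (343.15/295.15)^4 ≈ 1.83 倍
```

可见即使温升不大,4次方关系也会显著放大红外辐射能量的差异,这正是加热后煤矸红外图像差异增大的依据。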

    红外热成像仪通过光学系统将物体发出的红外辐射汇聚到红外探测器的光敏元件上,由探测器将红外辐射能转换为电信号,再经进一步处理后传输到红外设备显示器上,得到被测物体的红外图像。

    由于煤和矸石的颜色、纹理、光泽和其他物理性质不同,它们表面的吸热能力也不同。为了更直观体现二者吸热能力的差异,在煤矸样本中随机选择煤和矸石样本各20块,在传送带的输送下经过加热区域,经红外热成像仪获取煤和矸石中心点的温度,得到煤和矸石加热后的温度,如图2所示。初始煤和矸石样本的温度与室温(22 ℃)相同。由图2可看出,经过相同时间的加热后,煤的表面温度明显高于矸石的表面温度,从而导致煤和矸石的红外图像有较大不同,因此,利用二者红外图像的差异对煤和矸石进行分选。

    图  2  煤矸加热后温度
    Figure  2.  Temperature of coal and gangue after heating

    本文选用安徽省淮南矿区的烟煤作为实验样本,样本煤和矸石在传送带的输送下依次经过加热区域。利用红外热成像仪对经均匀加热后的煤和矸石进行拍摄,得到煤和矸石的红外灰度图像(物体红外光强度被红外热成像仪获取后得到的图像)和红外彩色图像(红外热成像仪高彩虹颜色模板下形成的伪彩色图像)。将捕捉到的图像传输至计算机进行保存,以便后续的图像处理。选取300块煤矸样本进行图像采集,其中煤和矸石各150块,得到煤矸样本的红外灰度图像和红外彩色图像各300张。为了提高样本训练和识别的效率,并考虑煤和矸石表面凹凸情况对温升的影响,在计算机中对每张图像进行扫描后提取平均温度最高的部分,提取后的图像大小为200×200。图3为部分样本图像。

    图  3  煤矸样本图像
    Figure  3.  Images of coal and gangue samples

    红外图像的信噪比[21]较可见光图像低,因此在保证图像质量的前提下,选用中值滤波、高斯滤波、均值滤波对红外图像进行滤波处理。为了确定滤波过程中卷积核的大小,选取3×3,5×5,7×7这3种卷积核分别对同一张煤样本图像进行滤波处理,结果如图4所示。可看出在卷积过程中,卷积核越大,图像越模糊,卷积核尺寸为3×3时,图像的去噪效果最佳,因此选择3×3卷积核。

    图  4  煤样本图像的不同滤波处理结果
    Figure  4.  Results of different filtering process for a coal sample image
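上述3种滤波操作可用scipy.ndimage示意实现(此处以带噪合成图像代替实际红外图像,σ=1近似对应3×3高斯核,均为演示假设):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
# 用带噪声的合成灰度图代替实际红外图像(实际应用中读入 200×200 红外灰度图)
img = np.clip(rng.normal(120, 30, size=(200, 200)), 0, 255)

k = 3  # 3×3 卷积核
median = ndimage.median_filter(img, size=k)    # 中值滤波
gauss = ndimage.gaussian_filter(img, sigma=1)  # 高斯滤波(σ=1 近似对应 3×3 核)
mean = ndimage.uniform_filter(img, size=k)     # 均值滤波
```

三种滤波均会压低噪声引起的灰度波动,核越大平滑越强、细节损失也越多,与正文中"卷积核越大,图像越模糊"的观察一致。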

    为了客观体现图像的预处理结果,本文采用均方误差(Mean Squared Error,MSE)和峰值信噪比(Peak Signal-to-Noise Ratio,PSNR)来评价图像的降噪效果。均方误差越小,图像失真越小;峰值信噪比越大,图像失真越小。

    $$ {\text{MSE}} = \frac{1}{{M N}}\sum\limits_{i = 1}^M {\sum\limits_{j = 1}^N {{{[P(i,j) - B(i,j)]}^2}} } $$ (2)
    $$ {\text{PSNR}} = 10\lg \frac{{{{255}^2}}}{{{\text{MSE}}}} $$ (3)

    式中:M、N分别为图像的行数和列数;P(i,j)为原始图像在坐标(i,j)处的像素值;B(i,j)为降噪后图像在坐标(i,j)处的像素值。
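式(2)、式(3)可直接用NumPy实现,示意如下(示例图像为演示假设):

```python
import numpy as np

def mse(p: np.ndarray, b: np.ndarray) -> float:
    """式(2):原始图像 p 与降噪后图像 b 的均方误差。"""
    return float(np.mean((p.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(p: np.ndarray, b: np.ndarray) -> float:
    """式(3):峰值信噪比,8位图像像素峰值取255。"""
    return float(10 * np.log10(255.0 ** 2 / mse(p, b)))

# 演示:两幅像素值相差5的4×4图像
p = np.zeros((4, 4), dtype=np.uint8)
b = np.full((4, 4), 5, dtype=np.uint8)
print(mse(p, b), psnr(p, b))  # 25.0 与约 34.15 dB
```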

    以3×3的卷积核对3种滤波方式的滤波效果进行比较,结果见表1,对矸石图像的滤波结果如图5所示。由表1可看出,高斯滤波的MSE较中值滤波和均值滤波更低,PSNR更高。由图4和图5可看出,高斯滤波对煤和矸石红外图像的处理结果更为清晰,平滑度更好。因此本文选用高斯滤波对煤和矸石的红外灰度图像、红外彩色图像进行预处理。

    表  1  煤矸石图像滤波结果
    Table  1.  Filtering results for coal and gangue images
    滤波方式 煤图像MSE 煤图像PSNR 矸石图像MSE 矸石图像PSNR
    高斯滤波 7.1382 39.5949 1.9222 45.2929
    中值滤波 7.4326 39.4194 2.3143 44.4865
    均值滤波 17.9601 35.5877 9.7932 38.2216
    图  5  矸石图像滤波结果
    Figure  5.  Gangue image filtering results

    灰度特征描述的是图像区域所对应的表面性质。灰度均值、灰度方差、最大频数对应的灰度值等灰度特征可通过灰度分析得到。煤和矸石红外灰度图像的像素值和像素点分布存在差异,导致它们的灰度分布不同,因此需对它们进行灰度分析。分别从煤和矸石的样本中随机挑选75块样本,对其红外灰度图像进行灰度特征提取,得到煤和矸石灰度图像的各特征值分布(表2)。

    表  2  煤矸石图像灰度特征分布范围
    Table  2.  Range of grayscale feature distribution of coal and gangue images
    样本 灰度均值 灰度方差 最大频数对应的灰度值 偏度
    煤 89.8~163.3 106.9~3301.7 91.0~195.0 −1.8~0.7
    矸石 5.4~46.7 9.4~553.6 1.0~67.0 −1.1~2.2

    由表2可看出,煤和矸石的灰度方差和偏度的分布范围重合度较高,致使区分度低,不利于分选。相比之下,灰度均值和最大频数对应的灰度值的分布范围差异较大,有利于分选。因此,在灰度特征中选择灰度均值和最大频数对应的灰度值这2个参数作为初步的识别特征。灰度均值和最大频数对应的灰度值的分布曲线如图6所示,可看出煤和矸石的分布差异十分明显。为了方便表示,记灰度均值、最大频数对应的灰度值分别为H1、H2。

    图  6  灰度均值和最大频数对应的灰度值的分布曲线
    Figure  6.  Distribution curves of the grayscale mean and the gray value corresponding to the maximum frequency number
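上述灰度特征的提取可用NumPy示意实现(函数名与示例数据均为演示假设):

```python
import numpy as np

def gray_features(img: np.ndarray):
    """提取灰度均值、灰度方差、最大频数对应的灰度值与偏度。"""
    g = img.astype(np.float64).ravel()
    mean, var, std = g.mean(), g.var(), g.std()
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=256)  # 灰度直方图
    mode_gray = int(hist.argmax())  # 最大频数对应的灰度值
    skew = float(np.mean(((g - mean) / std) ** 3)) if std > 0 else 0.0  # 偏度
    return mean, var, mode_gray, skew

# 演示:2×4 的小图,灰度值仅含 0 与 255
img = np.array([[0, 0, 0, 255], [0, 0, 255, 255]], dtype=np.uint8)
m, v, mode, s = gray_features(img)
```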

    灰度共生矩阵是一种用于统计图像中所有像素以描述其灰度分布的统计方法,它综合反映了图像灰度的各种信息。基于灰度共生矩阵可得到许多特征参数来反映图像的纹理特征,常用的特征参数有熵、对比度、相关性、能量、同质性。为了观察煤和矸石红外灰度图像纹理方面的差别,选取与上文相同的煤和矸石样本,对其红外灰度图像进行纹理分析,得到熵、对比度、相关性、能量、同质性纹理特征参数的分布范围(表3)。

    表  3  煤矸石图像纹理特征参数的分布范围
    Table  3.  Distribution range of texture feature parameters of coal and gangue images
    样本 熵 对比度 相关性 能量 同质性
    煤 5.2~7.1 0.03~0.14 0.97~0.99 0.11~0.30 0.93~0.98
    矸石 3.2~6.2 0.03~0.25 0.93~0.99 0.11~0.36 0.88~0.98

    为了从熵、对比度、相关性、能量、同质性这5个纹理特征中进一步筛选,做出各特征的分布曲线,如图7所示。

    图  7  煤矸石图像纹理特征的分布曲线
    Figure  7.  Distribution curves of texture features of coal and gangue images

    由表3和图7可看出,能量和同质性的分布范围接近,分布曲线重合较多,而熵、对比度、相关性的分布范围差异较大。因此,在纹理特征中选择熵、对比度、相关性这3个参数作为初步的识别特征。记熵、对比度、相关性分别为W1、W2、W3。
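灰度共生矩阵及上述纹理特征可用NumPy按如下方式计算(取水平相邻像素对、16级灰度量化,均为演示假设):

```python
import numpy as np

def glcm_features(img: np.ndarray, levels: int = 16, offset=(0, 1)):
    """由归一化灰度共生矩阵计算熵、对比度、相关性、能量、同质性。"""
    q = (img.astype(np.float64) / 256 * levels).astype(int)  # 灰度量化到 levels 级
    di, dj = offset
    a = q[: q.shape[0] - di, : q.shape[1] - dj]
    b = q[di:, dj:]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)  # 统计像素对 (a, b) 的出现次数
    p = glcm / glcm.sum()                        # 归一化为联合概率
    i, j = np.indices(p.shape)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    contrast = np.sum((i - j) ** 2 * p)
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    s_i = np.sqrt(np.sum((i - mu_i) ** 2 * p))
    s_j = np.sqrt(np.sum((j - mu_j) ** 2 * p))
    correlation = np.sum((i - mu_i) * (j - mu_j) * p) / (s_i * s_j) if s_i * s_j > 0 else 1.0
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1 + (i - j) ** 2))
    return entropy, contrast, correlation, energy, homogeneity

# 演示:灰度完全均匀的图像,对比度与熵为0,能量与同质性为1
img = np.full((10, 10), 100, dtype=np.uint8)
ent, con, cor, ene, hom = glcm_features(img)
```

实际应用中也可直接使用成熟库(如scikit-image)的共生矩阵接口,此处手写实现仅为说明各特征的定义。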

    彩色图像特征提取是指从彩色图像中提取出能表现图像特点的属性集合。在彩色图像中,经常使用的特征包括颜色特征、形状特征和空间位置关系特征等。其中,颜色特征是最常用和基本的特征之一。煤和矸石的红外彩色图像差异很大,通过提取颜色特征可很好地反映二者之间的差异。颜色矩是一种有效的颜色特征,利用颜色的一阶、二阶和三阶矩来表示图像中的颜色分布。颜色矩具有简洁的特点,但其一般分辨能力较弱。因此,通常将颜色矩与其他特征相结合使用,以达到缩小范围的目的。选取纹理分析使用的煤和矸石样本,对其红外彩色图像进行颜色特征提取。对红外彩色图像的R,G,B通道分别提取一阶矩和二阶矩,得到各参数的分布范围(表4)。

    表  4  煤矸图像颜色特征参数分布
    Table  4.  Distribution of colour features of coal and gangue images
    样本 R通道一阶矩 G通道一阶矩 B通道一阶矩 R通道二阶矩 G通道二阶矩 B通道二阶矩
    煤 12.5~133.9 72.8~152.6 3.9~185.9 23.3~88.6 32.3~83.1 9.1~89.2
    矸石 39.1~177.2 0.5~1.9 38.0~188.7 13.2~71.9 0.8~9.1 13.5~75.2

    由表4可看出,G通道一阶矩和G通道二阶矩的特征值分布范围差异大,其分布曲线如图8所示。因此,在颜色特征中选择G通道一阶矩和G通道二阶矩这2个参数作为初步的识别特征。记G通道一阶矩、G通道二阶矩分别为C1、C2。

    图  8  具有高区分度的煤矸石图像颜色特征分布曲线
    Figure  8.  Distribution curves of highly distinguishable colour features of coal and gangue images
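上述颜色一阶矩与二阶矩的提取可示意如下(以均值实现一阶矩、标准差实现二阶矩,示例图像为演示假设):

```python
import numpy as np

def color_moments(img_rgb: np.ndarray) -> dict:
    """对 R、G、B 三通道分别计算一阶矩(均值)与二阶矩(标准差)。"""
    feats = {}
    for c, name in enumerate("RGB"):
        ch = img_rgb[..., c].astype(np.float64)
        feats[name + "一阶矩"] = float(ch.mean())
        feats[name + "二阶矩"] = float(ch.std())
    return feats

# 演示:构造一幅 2×2 的 RGB 图像,仅 G 通道有非零值
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 1] = [[10, 10], [30, 30]]  # G 通道
f = color_moments(img)
```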

    由于图像特征提取过程需耗费大量时间,且提取出的特征中存在冗余特征,所以选择适当的特征选择算法来剔除冗余特征,得到有利于分类的最优特征子集,对于改善识别性能、减少计算成本非常重要。对H1、H2、W1、W2、W3、C1、C2这7个特征进行进一步选取。随机森林是一种集成学习方法,可用于评估特征重要性以实现特征选择。本文选用包含100棵树的随机森林分类器,计算上述7个特征的重要性,结果如图9所示。

    图  9  特征选择结果
    Figure  9.  Feature selection results
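随机森林特征重要性评估可用scikit-learn示意实现。以下用模拟数据演示:两维高区分度特征仿照表2、表4中H2、C2的量级构造,另加两维与类别无关的噪声特征,数据与取值范围均为演示假设:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 150
y = np.repeat([0, 1], n // 2)  # 0:矸石, 1:煤
# 高区分度特征:模拟 H2(最大频数对应的灰度值)与 C2(G通道二阶矩)
h2 = np.where(y == 1, rng.uniform(91, 195, n), rng.uniform(1, 67, n))
c2 = np.where(y == 1, rng.uniform(32, 83, n), rng.uniform(0.8, 9.1, n))
# 低区分度特征:两维与类别无关的噪声
X = np.column_stack([h2, c2, rng.normal(size=n), rng.normal(size=n)])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = clf.feature_importances_  # 各特征重要性,总和为1
```

区分度高的特征会获得明显更大的重要性得分,与正文中H2、C2等特征重要性突出的结果相呼应。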

    由图9可看出,特征重要性从大到小依次为H2、C2、C1、H1、W1、W3、W2,且前4个特征的重要性较为突出,因此选择H2、C2、C1、H1,即最大频数对应的灰度值、G通道二阶矩、G通道一阶矩、灰度均值这4个特征作为最终的分类特征,并以这4个特征作为分类模型的输入。部分样本的特征见表5。

    表  5  部分样本的特征
    Table  5.  Characteristics of selected samples
    样本序号 H2 C2 C1 H1
    煤1 147 59.95 108.14 118.39
    煤2 131 46.28 116.56 122.03
    煤3 195 62.22 108.43 153.84
    煤4 158 58.30 118.04 117.38
    矸石1 9 1.77 1.25 10.89
    矸石2 17 1.47 1.02 23.03
    矸石3 11 1.79 1.23 13.02
    矸石4 27 1.40 0.98 29.97

    SVM是属于监督学习的分类模型,煤和矸石的分类是一种非线性、小样本的二分类问题,SVM能够对二分类数据进行有效分类[22],因此选用SVM作为本文的分类模型。SVM分类超平面判别函数f(x)与核函数K(x,xk)分别为

    $$ f(\boldsymbol{x})=\text{sgn}\left(\sum\limits_{k=1}^n\alpha_k^*y_kK(\boldsymbol{x},\boldsymbol{x}_k)+b^*\right) $$ (4)
    $$ K(\boldsymbol{x},\boldsymbol{x}_k)=\exp\left(-\frac{||\boldsymbol{x}-\boldsymbol{x}_k||^2}{2g^2}\right) $$ (5)

    式中:sgn为符号函数;αk*为拉格朗日乘子;yk为第k个样本的类别标签;n为样本总数量;K为高斯核函数;xk为第k个样本的特征向量;x为输入的特征向量;b*为分类阈值;g为高斯核参数。

    实验得到300块煤矸样本对应的红外图像,将煤和矸石样本各75块对应的图像作为模型的训练集,另外150块样本对应的图像作为测试集对模型进行实验验证。给煤和矸石赋予不同的标签后,按照特征选择的结果对训练集中图像提取H2、C2、C1、H1特征,组成150×4的数据集矩阵,将该数据集作为分类模型的输入。将数据集随机分成5份,通过交叉验证法进行5次实验,以5次实验的平均分类准确率作为最终结果。通过实验得出,SVM分类器模型在训练集上的分类准确率为100%。
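上述"RBF核SVM+5折交叉验证"流程可用scikit-learn示意实现(模拟的H2、C2、C1、H1四维特征取值范围仿照表2、表4的量级构造,均为演示假设,实际应来自红外图像):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 150
y = np.repeat([0, 1], n // 2)  # 0:矸石, 1:煤
# 模拟 H2、C2、C1、H1 四维特征(每类样本各 75 块)
X = np.column_stack([
    np.where(y == 1, rng.uniform(91, 195, n), rng.uniform(1, 67, n)),    # H2
    np.where(y == 1, rng.uniform(32, 83, n), rng.uniform(0.8, 9.1, n)),  # C2
    np.where(y == 1, rng.uniform(72, 152, n), rng.uniform(0.5, 1.9, n)), # C1
    np.where(y == 1, rng.uniform(90, 163, n), rng.uniform(5, 47, n)),    # H1
])

model = SVC(kernel="rbf", gamma="scale")       # 式(5)的高斯核
scores = cross_val_score(model, X, y, cv=5)    # 5 折交叉验证
acc = scores.mean()                            # 平均分类准确率
```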

    使用训练好的SVM分类模型对煤和矸石样本进行分类实验。对测试集中图像进行特征提取后,组成150×4的数据集矩阵作为分类模型的输入,得到模型在训练集上的识别率为100%,在测试集上的验证识别率为99.4%。

    为了验证图像预处理的必要性,采用没有进行图像预处理的红外图像以相同的流程进行实验,最终得到验证识别率为98.7%。这说明对红外图像进行预处理有利于提高煤矸分选的准确率。

    为了验证基于红外热成像的煤矸识别方法的普适性,分别选取无烟煤和褐煤各75块作为实验样本,分别与75块矸石样本组成测试集样本,然后进行分类实验,得到无烟煤的分类准确率为98.9%,褐煤的分类准确率为98.6%。

    实验结果表明,本文提出的基于红外热成像的煤矸识别方法对3种煤炭的分类准确率均达到98%以上。对于小样本集的分类,SVM分类模型的分类能力较强且十分稳定。因此,在实际工程应用中,利用煤和矸石加热后红外图像的差异对煤和矸石进行分选的方法是切实可行的。依据本文方法预先调整相关参数后,可基本保证煤矸分选效果的稳定性。

    1) 煤和矸石的表面吸热能力不同,在70 ℃的加热区域加热相同时间后,煤的表面温度明显高于矸石,二者的红外图像差异明显。

    2) 卷积核为3×3的高斯滤波对煤和矸石的红外图像降噪效果最佳。

    3) 对煤和矸石的红外图像提取特征,最终选取4个最有利于分选的特征作为机器学习模型的输入,利用训练好的机器学习模型能够对煤和矸石进行有效的分选。

    4) 实验结果表明,基于红外热成像的煤矸识别方法对烟煤、无烟煤、褐煤的分类准确率均达到了98%以上,效果良好。

    5) 初步验证了小样本集下对煤和矸石分选的可行性,在后续的研究当中,应广泛采集不同矿区的煤矸样本,进一步验证本文方法的普适性。

  • 图  1   SLAM系统框架

    Figure  1.   Simultaneous localization and mapping(SLAM) system framework

    图  2   视觉SLAM建图类型

    Figure  2.   Visual SLAM mapping types

    图  3   ORB−SLAM框架

    Figure  3.   Oriented features from accelerated segment test and rotated binary robust independent elementary features simultaneous localization and mapping(ORB-SLAM) framework

    图  4   LSD−SLAM框架

    Figure  4.   Large-scale direct monocular simultaneous localization and mapping (LSD-SLAM )framework

    图  5   激光SLAM建图类型

    Figure  5.   Laser SLAM mapping types

    图  6   LOAM框架

    Figure  6.   LiDAR odometry andmapping (LOAM) framework

    图  7   LOAM−Livox框架

    Figure  7.   LiDAR odometry and mapping for Livox (LOAM-Livox) framework

    图  8   VINS−Mono框架

    Figure  8.   Visual inertial navigation system-monocular(VINS-Mono) framework

    图  9   LIO−SAM框架

    Figure  9.   Lidar-inertial odometry via smoothing and mapping (LIO-SAM) framework

    图  10   LVI−SAM框架

    Figure  10.   Lidar-visual-inertial odometry via smoothing and mapping(LVI-SAM) framework

    图  11   矿山应用场景

    Figure  11.   Mining application scenarios

    图  12   文献[47]提出的关键帧选取流程

    Figure  12.   Keyframe selection flow proposed in literature [47]

    图  13   文献[48]提出的改进双边滤波Retinex算法流程

    Figure  13.   Flow of improved bilateral filtering Retinex algorithm proposed by literature [48]

    图  14   文献[52]提出的基于NDT的激光SLAM框架

    Figure  14.   Normal distributions transform(NDT )-based laser SLAM framework proposed in literature [52]

    图  15   文献[53]提出的基于GICP的激光SLAM框架

    Figure  15.   Generalized iterative closest point(GICP)-based laser SLAM framework proposed by literature [53]

    图  16   改进LeGO−LOAM框架

    Figure  16.   Improved LeGO-LOAM framework

    图  17   井下相机−IMU融合SLAM框架

    Figure  17.   Underground Camera-IMU fusion SLAM framework

    图  18   井下激光雷达−IMU融合SLAM框架

    Figure  18.   Underground LiDAR-IMU fusion SLAM framework

    图  19   井下相机−激光雷达−IMU融合SLAM框架

    Figure  19.   Underground Camera-LiDAR-IMU fusion SLAM framework

    表  1   上文未提及的常见多传感器融合SLAM方案

    Table  1   Common multi-sensor fusion SLAM solutions

    多传感器融合SLAM方案 所属类型 优点 缺点
    激光−惯性里程计与地图构建[31] 激光雷达−IMU 率先开源的激光雷达与IMU融合方案 计算效率不高
    激光−惯性状态估计器[32] 激光雷达−IMU 相较LIO−mapping运行速度提高近1个数量级 复杂程度高
    快速激光−惯性里程计系列[33-35] 激光雷达−IMU 轻量级定位建图,运行效率高;Faster−LIO可应用至固态激光雷达 牺牲了一定精度;更适合小尺度场景
    固态激光雷达−惯性里程计与地图构建[36] 激光雷达−IMU 适用于固态激光雷达的融合方案 狭窄长廊特征匹配退化严重
    激光−惯性里程计与地图构建[37] 激光雷达−IMU 可消除动态物体影响;低漂移;强鲁棒性 实时性较差
    基于关键帧的视觉−惯性SLAM系统[38] 相机−IMU 轨迹估计精确 无回环检测;无法构建环境地图
    基于方向加速分割测试特征检测子和旋转二进制鲁棒独立特征描述子的视觉−惯性SLAM系统[39] 相机−IMU 通过融合IMU数据解决快速运动下特征点丢失的问题 无法长时间应用至光照变化明显、欠特征场景
    基于方向加速分割测试特征检测子和旋转二进制鲁棒独立特征描述子的即时定位与地图构建[40] 相机−IMU 定位精度、实时性好 快速运动场景存在特征丢失问题;大尺度场景计算消耗量大
    视觉−激光里程计与地图构建[41] 激光雷达−相机−IMU 精度高;鲁棒性好 无“回环检测”
    激光−单目视觉里程计[42] 激光雷达−相机−IMU 环境信息丰富,便于后续语义分割等操作 精度低于V−LOAM
    稳健实时的激光−惯性−视觉联合估计[43-45] 激光雷达−相机−IMU 实时性强;可拓展性优秀 计算资源需求大
    快速激光−惯性−视觉里程计[46] 激光雷达−相机−IMU 计算效率高;鲁棒性好 硬件要求高
  • [1] 鲍久圣,刘琴,葛世荣,等. 矿山运输装备智能化技术研究现状及发展趋势[J]. 智能矿山,2020,1(1):78-88.

    BAO Jiusheng,LIU Qin,GE Shirong,et al. Research status and development trend of intelligent technologies for mine transportation equipment[J]. Journal of Intelligent Mine,2020,1(1):78-88.

    [2] 鲍久圣,张牧野,葛世荣,等. 基于改进A*和人工势场算法的无轨胶轮车井下无人驾驶路径规划[J]. 煤炭学报,2022,47(3):1347-1360.

    BAO Jiusheng,ZHANG Muye,GE Shirong,et al. Underground driverless path planning of trackless rubber tyred vehicle based on improved A* and artificial potential field algorithm[J]. Journal of China Coal Society,2022,47(3):1347-1360.

    [3]

    SMITH R C,CHEESEMAN P. On the representation and estimation of spatial uncertainty[J]. The International Journal of Robotics Research,1986,5(4):56-68. DOI: 10.1177/027836498600500404

    [4] 刘铭哲,徐光辉,唐堂,等. 激光雷达SLAM算法综述[J]. 计算机工程与应用,2024,60(1):1-14. DOI: 10.54254/2755-2721/60/20240821

    LIU Mingzhe,XU Guanghui,TANG Tang,et al. Review of SLAM based on lidar[J]. Computer Engineering and Applications,2024,60(1):1-14. DOI: 10.54254/2755-2721/60/20240821

    [5]

    HUANG Leyao. Review on LiDAR-based SLAM techniques[C]. International Conference on Signal Processing and Machine Learning,Stanford,2021:163-168.

    [6] 李云天,穆荣军,单永志. 无人系统视觉SLAM技术发展现状简析[J]. 控制与决策,2021,36(3):513-522.

    LI Yuntian,MU Rongjun,SHAN Yongzhi. A survey of visual SLAM in unmanned systems[J]. Control and Decision,2021,36(3):513-522.

    [7]

    DAVISON A J,REID I D,MOLTON N D,et al. MonoSLAM:real-time single camera SLAM[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2007,29(6):1052-1067. DOI: 10.1109/TPAMI.2007.1049

    [8]

    KLEIN G,MURRAY D. Parallel tracking and mapping for small AR workspaces[C]. 6th IEEE and ACM International Symposium on Mixed and Augmented Reality,Nara,2007:225-234.

    [9]

    MUR-ARTAL R,MONTIEL J M M,TARDOS J D. ORB-SLAM:a versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics,2015,31(5):1147-1163. DOI: 10.1109/TRO.2015.2463671

    [10]

    MUR-ARTAL R,TARDOS J D. ORB-SLAM2:an open-source SLAM system for monocular,stereo,and RGB-D cameras[J]. IEEE Transactions on Robotics,2017,33(5):1255-1262. DOI: 10.1109/TRO.2017.2705103

    [11]

    NEWCOMBE R A,LOVEGROVE S J,DAVISON A J. DTAM:dense tracking and mapping in real-time[C]. International Conference on Computer Vision,Barcelona,2011:2320-2327.

    [12] 张继贤,刘飞. 视觉SLAM环境感知技术现状与智能化测绘应用展望[J]. 测绘学报,2023,52(10):1617-1630.

    ZHANG Jixian,LIU Fei. Review of visual SLAM environment perception technology and intelligent surveying and mapping application[J]. Acta Geodaetica et Cartographica Sinica,2023,52(10):1617-1630.

    [13]

    ENGEL J,STUCKLER J,CREMERS D. Large-scale direct SLAM with stereo cameras[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems,Hamburg,2015:1935-1942.

    [14]

    TATENO K,TOMBARI F,LAINA I,et al. CNN-SLAM:real-time dense monocular SLAM with learned depth prediction[C]. IEEE Conference on Computer Vision and Pattern Recognition,Honolulu,2017:6243-6252.

    [15] 尹鋆泰. 动态场景下基于深度学习的视觉SLAM技术研究[D]. 北京:北京邮电大学,2023.

    YIN Juntai. Research on visual SLAM technology based on deep learning in dynamic scene[D]. Beijing:Beijing University of Posts and Telecommunications,2023.

    [16]

    MONTEMERLO M,THRUN S,KOLLER D,et al. FastSLAM:a factored solution to the simultaneous localization and mapping problem[C]. AAAI National Conference on Artificial Intelligence,2002:593-598.

    [17]

    HESS W,KOHLER D,RAPP H,et al. Real-time loop closure in 2D LIDAR SLAM[C]. IEEE International Conference on Robotics and Automation,Stockholm,2016:1271-1278.

    [18]

    ZHANG Ji,SINGH S. LOAM:lidar odometry and mapping in real-time[J]. Robotics:Science and Systems,2014. DOI: 10.15607/RSS.2014.X.007.

    [19]

    SHAN Tixiao,ENGLOT B. LeGO-LOAM:lightweight and ground-optimized lidar odometry and mapping on variable terrain[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems,Madrid,2018:4758-4765.

    [20]

    LIN Jiarong,ZHANG Fu. Loam livox:a fast,robust,high-precision LiDAR odometry and mapping package for LiDARs of small FoV[C]. IEEE International Conference on Robotics and Automation,Paris,2020:3126-3131.

    [21]

    LI Lin,KONG Xin,ZHAO Xiangrui,et al. SA-LOAM:semantic-aided LiDAR SLAM with loop closure[C]. IEEE International Conference on Robotics and Automation,Xi'an,2021:7627-7634.

    [22]

    CHEN X,MILIOTO A,PALAZZOLO E,et al. SuMa++:efficient LiDAR-based semantic SLAM[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems,Macau,2019:4530-4537.

    [23]

    WANG Guangming,WU Xinrui,JIANG Shuyang,et al. Efficient 3D deep LiDAR odometry[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2023,45(5):5749-5765.

    [24]

    QIN Tong,LI Peiliang,SHEN Shaojie. VINS-mono:a robust and versatile monocular visual-inertial state estimator[J]. IEEE Transactions on Robotics,2018,34(4):1004-1020. DOI: 10.1109/TRO.2018.2853729

    [25]

    LI Peiliang,QIN Tong,HU Botao,et al. Monocular visual-inertial state estimation for mobile augmented reality[C]. International Symposium on Mixed and Augmented Reality,Nantes,2017:11-21.

    [26]

    QIN Tong,PAN Jie,GAO Shaozu,et al. A general optimization-based framework for local odometry estimation with multiple sensors[EB/OL]. (2019-01-11)[2024-06-22]. https://arxiv.org/abs/1901.03638.

    [27]

    SHAN Tixiao,ENGLOT B,MEYERS D,et al. LIO-SAM:tightly-coupled lidar inertial odometry via smoothing and mapping[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems,Las Vegas,2020:5135-5142.

    [28]

    SHAN Tixiao,ENGLOT B,RATTI C,et al. LVI-SAM:tightly-coupled lidar-visual-inertial odometry via smoothing and mapping[C]. IEEE International Conference on Robotics and Automation,Xi'an,2021:5692-5698.

    [29] 祝晓轩. 基于单目相机与IMU融合的SLAM系统研究[D]. 青岛:青岛大学,2023.

    ZHU Xiaoxuan. Research on SLAM system based on monocular camera and IMU fusion[D]. Qingdao:Qingdao University,2023.

    [30] 秦晓辉,周洪,廖毅霏,等. 动态环境下基于时序滑动窗口的鲁棒激光SLAM系统[J]. 湖南大学学报(自然科学版),2023,50(12):49-58.

    QIN Xiaohui,ZHOU Hong,LIAO Yifei,et al. Robust laser SLAM system based on temporal sliding window in dynamic scenes[J]. Journal of Hunan University(Natural Sciences),2023,50(12):49-58.

    [31]

    YE Haoyang,CHEN Yuying,LIU Ming. Tightly coupled 3D lidar inertial odometry and mapping[C]. International Conference on Robotics and Automation,Montreal,2019:3144-3150.

    [32]

    QIN Chao,YE Haoyang,PRANATA C E,et al. LINS:a lidar-inertial state estimator for robust and efficient navigation[C]. IEEE International Conference on Robotics and Automation,Paris,2020:8899-8906.

    [33]

    XU Wei,ZHANG Fu. FAST-LIO:a fast,robust LiDAR-inertial odometry package by tightly-coupled iterated Kalman filter[J]. IEEE Robotics and Automation Letters,2021,6(2):3317-3324. DOI: 10.1109/LRA.2021.3064227

    [34]

    XU Wei,CAI Yixi,HE Dongjiao,et al. FAST-LIO2:fast direct LiDAR-inertial odometry[J]. IEEE Transactions on Robotics,2022,38(4):2053-2073. DOI: 10.1109/TRO.2022.3141876

    [35]

    BAI Chunge,XIAO Tao,CHEN Yajie,et al. Faster-LIO:lightweight tightly coupled lidar-inertial odometry using parallel sparse incremental voxels[J]. IEEE Robotics and Automation Letters,2022,7(2):4861-4868. DOI: 10.1109/LRA.2022.3152830

    [36]

    LI Kailai,LI Meng,HANEBECK U D. Towards high-performance solid-state-LiDAR-inertial odometry and mapping[J]. IEEE Robotics and Automation Letters,2021,6(3):5167-5174. DOI: 10.1109/LRA.2021.3070251

    [37]

    ZHAO Shibo,FANG Zheng,LI Haolai,et al. A robust laser-inertial odometry and mapping method for large-scale highway environments[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems,Macau,2019:1285-1292.

    [38]

    LEUTENEGGER S,LYNEN S,BOSSE M,et al. Keyframe-based visual–inertial odometry using nonlinear optimization[J]. The International Journal of Robotics Research,2015,34(3):314-334. DOI: 10.1177/0278364914554813

    [39]

    MUR-ARTAL R,TARDOS J D. Visual-inertial monocular SLAM with map reuse[J]. IEEE Robotics and Automation Letters,2017,2(2):796-803. DOI: 10.1109/LRA.2017.2653359

    [40]

    CAMPOS C,ELVIRA R,RODRIGUEZ J J G,et al. ORB-SLAM3:an accurate open-source library for visual,visual-inertial,and multimap SLAM[J]. IEEE Transactions on Robotics,2021,37(6):1874-1890. DOI: 10.1109/TRO.2021.3075644

    [41]

    ZHANG Ji,SINGH S. Visual-lidar odometry and mapping:low-drift,robust,and fast[C]. IEEE International Conference on Robotics and Automation,Seattle,2015:2174-2181.

    [42]

    GRAETER J,WILCZYNSKI A,LAUER M. LIMO:lidar-monocular visual odometry[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems,Madrid,2018:7872-7879.

    [43]

    LIN Jiarong,ZHENG Chunran,XU Wei,et al. R(2)LIVE:a robust,real-time,LiDAR-inertial-visual tightly-coupled state estimator and mapping[J]. IEEE Robotics and Automation Letters,2021,6(4):7469-7476. DOI: 10.1109/LRA.2021.3095515

    [44]

    LIN Jiarong,ZHANG Fu. R3LIVE:a robust,real-time,RGB-colored,LiDAR-inertial-visual tightly-coupled state estimation and mapping package[C]. International Conference on Robotics and Automation,Philadelphia,2022:10672-10678.

    [45]

    LIN Jiarong,ZHANG Fu. R3LIVE++:a robust,real-time,radiance reconstruction package with a tightly-coupled LiDAR-inertial-visual state estimator[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2024. DOI: 10.1109/TPAMI.2024.3456473.

    [46]

    ZHENG Chunran,ZHU Qingyan,XU Wei,et al. FAST-LIVO:fast and tightly-coupled sparse-direct LiDAR-inertial-visual odometry[C]. IEEE/RSJ International Conference on Intelligent Robots and Systems,Kyoto,2022:4003-4009.

    [47] 高毅楠,姚顽强,蔺小虎,等. 煤矿井下多重约束的视觉SLAM关键帧选取方法[J]. 煤炭学报,2024,49(增刊1):472-482.

    GAO Yinan,YAO Wanqiang,LIN Xiaohu,et al. Visual SLAM keyframe selection method with multiple constraints in underground coal mines[J]. Journal of China Coal Society,2024,49(S1):472-482.

    [48] 冯玮,姚顽强,蔺小虎,等. 顾及图像增强的煤矿井下视觉同时定位与建图算法[J]. 工矿自动化,2023,49(5):74-81.

    FENG Wei,YAO Wanqiang,LIN Xiaohu,et al. Visual simultaneous localization and mapping algorithm of coal mine underground considering image enhancement[J]. Journal of Mine Automation,2023,49(5):74-81.

    [49] 马宏伟,王岩,杨林. 煤矿井下移动机器人深度视觉自主导航研究[J]. 煤炭学报,2020,45(6):2193-2206.

    MA Hongwei,WANG Yan,YANG Lin. Research on depth vision based mobile robot autonomous navigation in underground coal mine[J]. Journal of China Coal Society,2020,45(6):2193-2206.

    [50]

    HUBER D F,VANDAPEL N. Automatic three-dimensional underground mine mapping[J]. The International Journal of Robotics Research,2006,25(1):7-17. DOI: 10.1177/0278364906061157

    [51] 安震. 自主导航搜救机器人关键技术研究[D]. 沈阳:东北大学,2015.

    AN Zhen. Research on key technologies of autonomous navigation search and rescue robot[D]. Shenyang:Northeastern University,2015.

    [52]

    LI Menggang,ZHU Hua,YOU Shaoze,et al. Efficient laser-based 3D SLAM for coal mine rescue robots[J]. IEEE Access,2019,7:14124-14138. DOI: 10.1109/ACCESS.2018.2889304

    [53]

    REN Zhuli,WANG Liguan,BI Lin. Robust GICP-based 3D LiDAR SLAM for underground mining environment[J]. Sensors,2019,19(13). DOI: 10.3390/s19132915.

    [54] 邹筱瑜,黄鑫淼,王忠宾,等. 基于集成式因子图优化的煤矿巷道移动机器人三维地图构建[J]. 工矿自动化,2022,48(12):57-67,92.

    ZOU Xiaoyu,HUANG Xinmiao,WANG Zhongbin,et al. 3D map construction of coal mine roadway mobile robot based on integrated factor graph optimization[J]. Journal of Mine Automation,2022,48(12):57-67,92.

    [55] 许鹏程. 基于粒子群优化的煤矿井下机器人FASTSLAM算法研究[D]. 北京:煤炭科学研究总院,2017.

    XU Pengcheng. Research on FASTSLAM algorithm of coal mine underground robot based on particle swarm optimization[D]. Beijing:China Coal Research Institute,2017.

    [56] 杨林,马宏伟,王岩,等. 煤矿巡检机器人同步定位与地图构建方法研究[J]. 工矿自动化,2019,45(9):18-24.

    YANG Lin,MA Hongwei,WANG Yan,et al. Research on method of simultaneous localization and mapping of coal mine inspection robot[J]. Industry and Mine Automation,2019,45(9):18-24.

    [57] 代嘉惠. 大功率本安驱动煤矿救援机器人定位与建图算法研究[D]. 重庆:重庆大学,2019.

    DAI Jiahui. Study on localization and mapping algorithm of high-power intrinsically safe coal mine rescue robot[D]. Chongqing:Chongqing University,2019.

    [58] 李仲强. 煤矿救援机器人自主建图和导航技术研究[D]. 淮南:安徽理工大学,2019.

    LI Zhongqiang. Research on self-construction and navigation technology of coal mine rescue robot[D]. Huainan:Anhui University of Science and Technology,2019.

    [59] 李芳威,鲍久圣,王陈,等. 基于LD改进Cartographer建图算法的无人驾驶无轨胶轮车井下SLAM自主导航方法及试验[J/OL]. 煤炭学报:1-12[2024-06-22]. https://doi.org/10.13225/j.cnki.jccs.2023.0731.

    LI Fangwei,BAO Jiusheng,WANG Chen,et al. Unmanned trackless rubber wheeler based on LD improved Cartographer mapping algorithm underground SLAM autonomous navigation method and test[J/OL]. Journal of China Coal Society:1-12[2024-06-22]. https://doi.org/10.13225/j.cnki.jccs.2023.0731.

    [60] 顾清华,白昌鑫,陈露,等. 基于多线激光雷达的井下斜坡道无人矿卡定位与建图方法[J]. 煤炭学报,2024,49(3):1680-1688.

    GU Qinghua,BAI Changxin,CHEN Lu,et al. Localization and mapping method for unmanned mining trucks in underground slope roads based on multi-line lidar[J]. Journal of China Coal Society,2024,49(3):1680-1688.

    [61] 薛光辉,李瑞雪,张钲昊,等. 基于激光雷达的煤矿井底车场地图融合构建方法研究[J]. 煤炭科学技术,2023,51(8):219-227.

    XUE Guanghui,LI Ruixue,ZHANG Zhenghao,et al. Lidar based map construction fusion method for underground coal mine shaft bottom[J]. Coal Science and Technology,2023,51(8):219-227.

    [62]

    ZHU Daixian,JI Kangkang,WU Dong,et al. A coupled visual and inertial measurement units method for locating and mapping in coal mine tunnel[J]. Sensors,2022,22(19):7437. DOI: 10.3390/s22197437

    [63] 汪雷. 煤矿探测机器人图像处理及动态物体去除算法研究[D]. 徐州:中国矿业大学,2020.

    WANG Lei. Research on image processing and dynamic object removal algorithm of coal mine detection robot[D]. Xuzhou:China University of Mining and Technology,2020.

    [64]

    YANG Xin,LIN Xiaohu,YAO Wanqiang,et al. A robust LiDAR SLAM method for underground coal mine robot with degenerated scene compensation[J]. Remote Sensing,2022,15(1). DOI: 10.3390/RS15010186.

    [65]

    YANG Lin,MA Hongwei,NIE Zhen,et al. 3D LiDAR point cloud registration based on IMU preintegration in coal mine roadways[J]. Sensors,2023,23(7). DOI: 10.3390/S23073473.

    [66] 司垒,王忠宾,魏东,等. 基于IMU−LiDAR紧耦合的煤矿防冲钻孔机器人定位导航方法[J]. 煤炭学报,2024,49(4):2179-2194.

    SI Lei,WANG Zhongbin,WEI Dong,et al. Positioning and navigation method of underground drilling robot for rock-burst prevention based on IMU-LiDAR tight coupling[J]. Journal of China Coal Society,2024,49(4):2179-2194.

    [67] 李猛钢,胡而已,朱华. 煤矿移动机器人LiDAR/IMU紧耦合SLAM方法[J]. 工矿自动化,2022,48(12):68-78.

    LI Menggang,HU Eryi,ZHU Hua. LiDAR/IMU tightly-coupled SLAM method for coal mine mobile robot[J]. Journal of Mine Automation,2022,48(12):68-78.

    [68] 董志华,姚顽强,蔺小虎,等. 煤矿井下顾及特征点动态提取的激光SLAM算法研究[J]. 煤矿安全,2023,54(8):241-246.

    DONG Zhihua,YAO Wanqiang,LIN Xiaohu,et al. LiDAR SLAM algorithm considering dynamic extraction of feature points in underground coal mine[J]. Safety in Coal Mines,2023,54(8):241-246.

    [69] 薛光辉,张钲昊,张桂艺,等. 煤矿井下点云特征提取和配准算法改进与激光SLAM研究[J/OL]. 煤炭科学技术:1-12[2024-06-22]. http://kns.cnki.net/kcms/detail/11.2402.TD.20240722.1557.003.html.

    XUE Guanghui,ZHANG Zhenghao,ZHANG Guiyi,et al. Improvement of point cloud feature extraction and alignment algorithms and LiDAR SLAM in coal mine underground[J/OL]. Coal Science and Technology:1-12[2024-06-22]. http://kns.cnki.net/kcms/detail/11.2402.TD.20240722.1557.003.html.

    [70] 李栋. 基于多源信息融合的巷道语义地图构建与复用方法研究[D]. 苏州:苏州大学,2022.

    LI Dong. A Method of construction and reuse of roadway semantic map based on multi-source information fusion[D]. Suzhou:Soochow University,2022.

    [71] 陈步平. 矿用搜救机器人多源信息融合SLAM方法研究[D]. 徐州:中国矿业大学,2023.

    CHEN Buping. Research on SLAM method of multi-source information fusion for mine search and rescue robot[D]. Xuzhou:China University of Mining and Technology,2023.

    [72] 马艾强,姚顽强. 煤矿井下移动机器人多传感器自适应融合SLAM方法[J]. 工矿自动化,2024,50(5):107-117.

    MA Aiqiang,YAO Wanqiang. Multi sensor adaptive fusion SLAM method for underground mobile robots in coal mines[J]. Journal of Mine Automation,2024,50(5):107-117.

    [73] 滕睿. 露天矿运输车辆无人驾驶关键技术研究[D]. 阜新:辽宁工程技术大学,2023.

    TENG Rui. Research on key technologies of unmanned driving of transport vehicles in open-pit mine[D]. Fuxin:Liaoning Technical University,2023.

    [74] 张清宇,崔丽珍,李敏超,等. 倾斜地面3D点云快速分割算法[J]. 无线电工程,2024,54(2):447-456.

    ZHANG Qingyu,CUI Lizhen,LI Minchao,et al. A fast segmentation algorithm for 3D point cloud on inclined ground[J]. Radio Engineering,2024,54(2):447-456.

    [75] 张清宇. 煤矿环境下LiDAR/IMU融合定位算法研究与实现[D]. 包头:内蒙古科技大学,2023.

    ZHANG Qingyu. Research and implementation of LiDAR/IMU fusion positioning algorithm in coal mine environment[D]. Baotou:Inner Mongolia University of Science & Technology,2023.

    [76] 马宝良,崔丽珍,李敏超,等. 露天煤矿环境下基于LiDAR/IMU的紧耦合SLAM算法研究[J]. 煤炭科学技术,2024,52(3):236-244.

    MA Baoliang,CUI Lizhen,LI Minchao,et al. Study on tightly coupled LiDAR-Inertial SLAM for open pit coal mine environment[J]. Coal Science and Technology,2024,52(3):236-244.

    [77] 李慧,李敏超,崔丽珍,等. 露天煤矿三维激光雷达运动畸变算法研究[J/OL]. 煤炭科学技术:1-12[2024-06-22]. http://kns.cnki.net/kcms/detail/11.2402.td.20240325.1558.006.html.

    LI Hui,LI Minchao,CUI Lizhen,et al. Research on 3D LiDAR motion distortion algorithm for open-pit coal mine[J/OL]. Coal Science and Technology:1-12[2024-06-22]. http://kns.cnki.net/kcms/detail/11.2402.td.20240325.1558.006.html.


出版历程
  • 收稿日期:  2024-07-02
  • 修回日期:  2024-10-27
  • 网络出版日期:  2024-09-28
  • 刊出日期:  2024-10-24
