Volume 48, Issue 10, Oct. 2022
Citation: RAO Tianrong, PAN Tao, XU Huijun. Unsafe action recognition in underground coal mine based on cross-attention mechanism[J]. Journal of Mine Automation, 2022, 48(10): 48-54. doi: 10.13272/j.issn.1671-251x.17949

Unsafe action recognition in underground coal mine based on cross-attention mechanism

doi: 10.13272/j.issn.1671-251x.17949
  • Received Date: 2022-05-18
  • Revised Date: 2022-10-08
  • Available Online: 2022-10-22
  • Abstract: Real-time video monitoring and alarming of unsafe actions by coal mine personnel is an important means of improving production safety. The underground coal mine environment is complex and surveillance video quality is poor, so conventional action recognition methods based on image features or human key point features alone are of limited use underground. A multi-feature fusion action recognition model based on a cross-attention mechanism is therefore proposed to recognize unsafe actions of coal mine personnel. For each video segment, a 3D ResNet101 model extracts image features, while the OpenPose algorithm and a spatial-temporal graph convolutional network (ST-GCN) extract human key point features. The cross-attention mechanism fuses the image features with the key point features, and the fused features are then concatenated with the self-attention-processed image features and key point features, respectively, to obtain the final recognition features. These features are passed through a fully connected layer and the softmax (normalized exponential) function to obtain the action recognition result. Action recognition experiments were conducted on the public HMDB51 and UCF101 datasets and on a self-built coal mine video dataset. The results show that the cross-attention mechanism enables the model to fuse image and key point features more effectively and substantially improves recognition accuracy. Compared with SlowFast, currently the most widely used action recognition model, the proposed model improves recognition accuracy by 1.8% on HMDB51, 0.9% on UCF101, and 6.7% on the self-built dataset, verifying that it is better suited to recognizing unsafe actions in the complex underground coal mine environment.
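
The fusion step described in the abstract can be illustrated with a minimal PyTorch sketch. The module below applies cross-attention between image features and key point features, concatenates the results with self-attention-processed single-modality features, and classifies with a fully connected layer and softmax. The module name, feature dimension, number of heads, temporal pooling, and class count are illustrative assumptions; it does not reproduce the paper's exact architecture or its 3D ResNet101, OpenPose, and ST-GCN backbones.

```python
# Hedged sketch of cross-attention feature fusion; dimensions and names are assumptions.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Fuse image and human key point features with cross-attention, then splice
    the fused features with self-attended single-modality features and classify."""

    def __init__(self, dim: int = 512, num_heads: int = 8, num_classes: int = 10):
        super().__init__()
        # Cross-attention: each modality queries the other.
        self.img_to_kp = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.kp_to_img = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Self-attention for the individual modalities.
        self.img_self = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.kp_self = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Fully connected classification head over the spliced features.
        self.fc = nn.Linear(4 * dim, num_classes)

    def forward(self, img_feat: torch.Tensor, kp_feat: torch.Tensor) -> torch.Tensor:
        # img_feat: (B, T, dim) clip-level image features, e.g. from a 3D CNN.
        # kp_feat:  (B, T, dim) key point features, e.g. from a graph convolutional network.
        fused_img, _ = self.img_to_kp(img_feat, kp_feat, kp_feat)  # image queries, key point keys/values
        fused_kp, _ = self.kp_to_img(kp_feat, img_feat, img_feat)  # key point queries, image keys/values
        self_img, _ = self.img_self(img_feat, img_feat, img_feat)
        self_kp, _ = self.kp_self(kp_feat, kp_feat, kp_feat)
        # Splice cross-attended and self-attended features, pool over time, classify.
        feat = torch.cat([fused_img, self_img, fused_kp, self_kp], dim=-1).mean(dim=1)
        return torch.softmax(self.fc(feat), dim=-1)


if __name__ == "__main__":
    model = CrossAttentionFusion(dim=512, num_heads=8, num_classes=10)
    img = torch.randn(2, 16, 512)  # dummy image features: batch of 2, 16 time steps
    kp = torch.randn(2, 16, 512)   # dummy key point features with matching shape
    print(model(img, kp).shape)    # torch.Size([2, 10])
```

In practice the dummy 16-step, 512-dimensional inputs would be replaced by features produced by the image and skeleton backbones, and the feature sizes, attention heads, and pooling would follow the paper's configuration.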

     

  • [1]
    DANG Weichao,SHI Yunlong,BAI Shangwang,et al. Inspection behavior detection of underground power distribution room based on conditional variational auto-encoder[J]. Industry and Mine Automation,2021,47(12):98-105. doi: 10.13272/j.issn.1671-251x.2021030087
    [2]
    WANG Guofa,REN Huaiwei,ZHAO Guorui,et al. Analysis and countermeasures of ten 'pain points' of intelligent coal mine[J]. Industry and Mine Automation,2021,47(6):1-11. doi: 10.13272/j.issn.1671-251x.17808
    [3]
    SIMONYAN K, ZISSERMAN A. Two-stream convolutional networks for action recognition in videos[Z]. arXiv Preprint, arXiv: 1406.2199v2.
    [4]
    WANG Limin, XIONG Yuanjun, WANG Zhe, et al. Temporal segment networks: towards good practices for deep action recognition[C]. European Conference on Computer Vision, Amsterdam, 2016: 20-36.
    [5]
    LIN Ji, GAN Chuang, HAN Song. TSM: temporal shift module for efficient video understanding[C]. The IEEE International Conference on Computer Vision, Seoul, 2019: 7083-7093.
    [6]
    LIU Kun, LIU Wu, GAN Chuang, et al. T-C3D: temporal convolutional 3D network for real-time action recognition[C]. The AAAI Conference on Artificial Intelligence, New Orleans, 2018: 7138-7145.
    [7]
    FEICHTENHOFER C, FAN H, MALIK J, et al. SlowFast networks for video recognition[C]. The IEEE International Conference on Computer Vision, Seoul, 2019: 6202-6211.
    [8]
    DANG Weichao,ZHANG Zejie,BAI Shangwang,et al. Inspection behavior recognition of underground power distribution room based on improved two-stream CNN method[J]. Industry and Mine Automation,2020,46(4):75-80. doi: 10.13272/j.issn.1671-251x.2019080074
    [9]
    LIU Hao,LIU Haibin,SUN Yu,et al. Intelligent recognition system of unsafe behavior of underground coal miners[J]. Journal of China Coal Society,2021,46(S2):1159-1169. doi: 10.13225/j.cnki.jccs.2021.0670
    [10]
    ZHANG Liya. Safety control technology of coal mine based on image recognition[J]. Safety in Coal Mines,2021,52(2):165-168. doi: 10.13347/j.cnki.mkaq.2021.02.032
    [11]
    YAN Sijie, XIONG Yuanjun, LIN Dahua. Spatial temporal graph convolutional networks for skeleton-based action recognition[C]. The AAAI Conference on Artificial Intelligence, New Orleans, 2018: 7444-7452.
    [12]
    SHI Lei, ZHANG Yifan, CHENG Jian, et al. Two-stream adaptive graph convolutional networks for skeleton-based action recognition[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, 2019: 12026-12035.
    [13]
    HUANG Han,CHENG Xiaozhou,YUN Xiao,et al. DA-GCN-based coal mine personnel action recognition method[J]. Industry and Mine Automation,2021,47(4):62-66. doi: 10.13272/j.issn.1671-251x.17721
    [14]
    WANG Xuan,WU Jiaqi,YANG Kang,et al. Human posture detection method in coal mine[J]. Journal of Mine Automation,2022,48(5):79-84. doi: 10.13272/j.issn.1671-251x.17867
    [15]
    HARA K, KATAOKA H, SATOH Y. Can spatiotemporal 3D CNNs retrace the history of 2D CNNs and ImageNet?[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, 2018: 6546-6555.
    [16]
    CAO Zhe, SIMON T, WEI S-E, et al. Realtime multi-person 2D pose estimation using part affinity fields[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, 2017: 7291-7299.
    [17]
    WANG Xiaolong, GIRSHICK R, GUPTA A, et al. Non-local neural networks[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, 2018: 7794-7803.
    [18]
    VELICKOVIC P, CUCURULL G, CASANOVA A, et al. Graph attention networks[Z]. arXiv Preprint, arXiv: 1710.10903.
    [19]
    KUEHNE H, JHUANG H, GARROTE E, et al. HMDB: a large video database for human motion recognition[C]. International Conference on Computer Vision, Barcelona, 2011: 2556-2563.
    [20]
    SOOMRO K, ZAMIR A R, SHAH M. UCF101: a dataset of 101 human actions classes from videos in the wild[Z]. arXiv Preprint, arXiv: 1212.0402.