Abstract: To address the insufficient extraction of local features and the lack of contextual feature fusion in 3D laser point clouds, we propose MAKNet, a point cloud feature extraction network that integrates a self-attention mechanism with a multi-level feature extraction architecture. The network takes 3D laser point cloud data as input. The introduced SAA module extracts global and local features of the point cloud by combining the farthest point sampling algorithm and K-nearest neighbor grouping with self-attention, increasing the attention weights between center-point features and neighborhood-point features to mitigate the poor recognition of sparse point features. The SAA module is then applied twice in succession at different regional scales to extract and fuse multi-level point features, and the resulting point cloud features are finally concatenated. This design captures finer point cloud detail, enlarges the receptive field of each point, broadens the coverage of the extracted point information, and improves the generalization ability of the network. Experimental results show that on the public S3DIS dataset the Overall Accuracy (OA) improves from 80.1% with PointNet++ to 86.9%, and on a self-built collection and transmission line corridor dataset the OA reaches 96.4%, demonstrating that MAKNet is robust in semantic segmentation tasks and generalizes well to real-world scene data.
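To make the sampling-grouping-attention pipeline concrete, the following is a minimal sketch (not the authors' implementation) of an SAA-style block: farthest point sampling selects centroids, K-nearest-neighbor grouping forms local regions, and a self-attention layer re-weights neighborhood features against the centroid feature. All class, function, and parameter names here (SAABlock, k, n_sample, the linear q/kv projections) are illustrative assumptions.

```python
# Hypothetical SAA-style block: FPS centroids + KNN grouping + neighborhood self-attention.
import torch
import torch.nn as nn


def farthest_point_sample(xyz, n_sample):
    """xyz: (B, N, 3) -> indices (B, n_sample) of FPS-selected centroids."""
    B, N, _ = xyz.shape
    idx = torch.zeros(B, n_sample, dtype=torch.long, device=xyz.device)
    dist = torch.full((B, N), float("inf"), device=xyz.device)
    farthest = torch.zeros(B, dtype=torch.long, device=xyz.device)
    batch = torch.arange(B, device=xyz.device)
    for i in range(n_sample):
        idx[:, i] = farthest
        centroid = xyz[batch, farthest].unsqueeze(1)           # (B, 1, 3)
        dist = torch.minimum(dist, ((xyz - centroid) ** 2).sum(-1))
        farthest = dist.argmax(-1)                             # next farthest point
    return idx


def knn_group(xyz, centroids, feats, k):
    """Gather features of the k nearest neighbors of each centroid."""
    d = torch.cdist(centroids, xyz)                            # (B, S, N)
    nn_idx = d.topk(k, largest=False).indices                  # (B, S, k)
    batch = torch.arange(xyz.shape[0], device=xyz.device).view(-1, 1, 1)
    return feats[batch, nn_idx]                                # (B, S, k, C)


class SAABlock(nn.Module):
    """Self-attention aggregation over each local neighborhood (illustrative layout)."""

    def __init__(self, in_dim, out_dim, k=16):
        super().__init__()
        self.k = k
        self.out_dim = out_dim
        self.q = nn.Linear(in_dim, out_dim)                    # centroid query
        self.kv = nn.Linear(in_dim, 2 * out_dim)               # neighbor keys/values

    def forward(self, xyz, feats, n_sample):
        centre_idx = farthest_point_sample(xyz, n_sample)      # (B, S)
        batch = torch.arange(xyz.shape[0], device=xyz.device).unsqueeze(1)
        new_xyz = xyz[batch, centre_idx]                       # (B, S, 3)
        grouped = knn_group(xyz, new_xyz, feats, self.k)       # (B, S, k, C)
        q = self.q(feats[batch, centre_idx]).unsqueeze(2)      # (B, S, 1, D)
        k_, v = self.kv(grouped).chunk(2, dim=-1)              # (B, S, k, D) each
        # Attention weights between the center point and its neighborhood points.
        attn = torch.softmax((q * k_).sum(-1) / self.out_dim ** 0.5, dim=-1)
        new_feats = (attn.unsqueeze(-1) * v).sum(dim=2)        # (B, S, D)
        return new_xyz, new_feats


if __name__ == "__main__":
    pts = torch.rand(2, 1024, 3)
    feats = torch.rand(2, 1024, 32)
    block = SAABlock(in_dim=32, out_dim=64, k=16)
    xyz2, f2 = block(pts, feats, n_sample=256)
    print(xyz2.shape, f2.shape)   # torch.Size([2, 256, 3]) torch.Size([2, 256, 64])
```

Stacking two such blocks with decreasing n_sample and different k would correspond to the multi-scale, two-stage feature extraction described above, with the outputs concatenated before the segmentation head.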