Please use this identifier to cite or link to this item: http://hdl.handle.net/10553/130229
Title: SKD: keypoint detection for point clouds using saliency estimation
Authors: Tinchev, Georgi
Peñate Sánchez, Adrián 
Fallon, Maurice
UNESCO Classification: 1203 Computer science
Keywords: Deep learning for visual perception
Recognition
Visual learning
Issue Date: 2021
Journal: IEEE Robotics and Automation Letters 
Abstract: We present SKD, a novel keypoint detector that uses saliency to determine the best candidates from a point cloud for tasks such as registration and reconstruction. The approach can be applied to any differentiable deep learning descriptor by using the gradients of that descriptor with respect to the 3D position of the input points as a measure of their saliency. The saliency is combined with the original descriptor and context information in a neural network, which is trained to learn robust keypoint candidates. The key intuition behind this approach is that keypoints are not extracted solely as a result of the geometry surrounding a point, but also take into account the descriptor's response. The approach was evaluated on two large LIDAR datasets - the Oxford RobotCar dataset and the KITTI dataset, where we obtain up to 50% improvement over the state-of-the-art in both matchability and repeatability. When performing sparse matching with the keypoints computed by our method we achieve a higher inlier ratio and faster convergence.
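The gradient-based saliency idea described in the abstract can be illustrated with a short sketch. This is a minimal, hypothetical example (not the authors' implementation), assuming a PyTorch descriptor network descriptor_net that maps an (N, 3) tensor of point coordinates to (N, D) descriptors; per-point saliency is taken as the norm of the descriptor's gradient with respect to that point's 3D position.

import torch

def descriptor_saliency(descriptor_net, points):
    # points: (N, 3) tensor of 3D coordinates.
    # Returns a (N,) tensor of saliency scores, one per point.
    points = points.detach().clone().requires_grad_(True)  # track gradients w.r.t. 3D positions
    descriptors = descriptor_net(points)                    # (N, D) learned descriptors
    # Summing over all descriptor dimensions lets a single backward pass
    # produce d(descriptor)/d(point) for every point at once.
    descriptors.sum().backward()
    return points.grad.norm(dim=1)                          # gradient magnitude per point

In the paper, such saliency scores are not used directly; they are combined with the descriptor and context information in a learned network that selects the keypoint candidates.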
URI: http://hdl.handle.net/10553/130229
ISSN: 2377-3766
DOI: 10.1109/LRA.2021.3065224
Source: IEEE Robotics and Automation Letters [ISSN: 2377-3766], vol. 6 (2), 2021
Appears in Collections: Artículos
Scopus citations: 13 (checked on May 19, 2024)
Items in accedaCRIS are protected by copyright, with all rights reserved, unless otherwise indicated.