Peng S, Zhou X, Liu Y, Lin H, Huang Q, Bao H (2022) PVNet: Pixel-wise voting network for 6DoF object pose estimation.
Xiang Y, Schmidt T, Narayanan V, Fox D (2018) PoseCNN: A convolutional neural network for 6D object pose estimation in cluttered scenes.
Kendall A, Grimes M, Cipolla R (2015) PoseNet: A convolutional network for real-time 6-DOF camera relocalization. In: 2015 IEEE international conference on computer vision (ICCV), pp 2938–2946.
Rad M, Lepetit V (2017) BB8: A scalable, accurate, robust to partial occlusion method for predicting the 3D poses of challenging objects without using depth. In: 2017 IEEE international conference on computer vision (ICCV), pp 3848–3856.
Redmon J, Farhadi A (2018) YOLOv3: An incremental improvement.
Kehl W, Manhardt F, Tombari F, Ilic S, Navab N (2017) SSD-6D: Making RGB-based 3D detection and 6D pose estimation great again. In: 2017 IEEE international conference on computer vision (ICCV), pp 1530–1538.
Park K, Patten T, Vincze M (2019) Pix2Pose: Pixel-wise coordinate regression of objects for 6D pose estimation. In: 2019 IEEE/CVF international conference on computer vision (ICCV), pp 7667–7676.
Wang C, Xu D, Zhu Y, Martin-Martin R, Lu C, Fei-Fei L, Savarese S (2019) DenseFusion: 6D object pose estimation by iterative dense fusion. In: 2019 IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp 3338–3347.
Malyavej V, Torteeka P, Wongkharn S, Wiangtong T (2009) Pose estimation of unmanned ground vehicle based on dead-reckoning/GPS sensor fusion by unscented Kalman filter. In: 2009 6th International conference on electrical engineering/electronics, computer, telecommunications and information technology, vol 01.
Xiao Z, Wang X, Wang J, Wu Z (2017) Monocular ORB SLAM based on initialization by marker pose estimation. In: 2017 IEEE international conference on information and automation (ICIA), pp 678–682.
Ruan X, Wang F, Huang J (2019) Relative pose estimation of visual SLAM based on convolutional neural networks.
Li X, Ling H (2020) Hybrid camera pose estimation with online partitioning for SLAM.
Hachiuma R, Saito H (2016) Recognition and pose estimation of primitive shapes from depth images for spatial augmented reality. In: 2016 IEEE 2nd workshop on everyday virtual reality (WEVR), pp 32–35.
Lu Y, Kourian S, Salvaggio C, Xu C, Lu G (2019) Single image 3D vehicle pose estimation for augmented reality. In: 2019 IEEE global conference on signal and information processing (GlobalSIP), pp 1–5.
Zhang S, Song C, Radkowski R (2019) Setforge - synthetic RGB-D training data generation to support CNN-based pose estimation for augmented reality. In: 2019 IEEE international symposium on mixed and augmented reality adjunct (ISMAR-Adjunct), pp 237–242.
Kothari N, Gupta M, Vachhani L, Arya H (2017) Pose estimation for an autonomous vehicle using monocular vision. In: 2017 Indian control conference (ICC), pp 424–431.
Gu R, Wang G, Hwang J-n (2019) Efficient multi-person hierarchical 3D pose estimation for autonomous driving. In: 2019 IEEE conference on multimedia information processing and retrieval (MIPR), pp 163–168.
Drummond T, Cipolla R (2002) Real-time visual tracking of complex structures. IEEE Trans Pattern Anal Mach Intell 24(7):932–946.

In the field of augmented reality, 6D pose estimation of rigid objects remains a limiting challenge. This study addresses the problem of 6D pose estimation from a single RGB image under severe occlusion. Most previous 6D pose estimation methods train deep neural networks either to regress poses directly from input images or to predict the 2D locations of 3D keypoints; thus, they are vulnerable to large occlusions. A novel method is proposed that builds on PVNet and improves its performance. Similar to PVNet, our method regresses target object segments and pixel-wise direction vectors from an RGB image. Subsequently, the 2D locations of the 3D keypoints are computed from the direction vectors of the object pixels, and the 6D object pose is obtained using a PnP algorithm. However, accurate segmentation of object pixels is difficult, particularly under severe occlusion. To this end, a focal segmentation mechanism is proposed that ensures accurate and complete segmentation of occluded objects. Extensive experiments on the LINEMOD and LINEMOD-Occlusion datasets validate the effectiveness and superiority of our method. Our method improves the accuracy of PVNet by 1.09 and 5.14 on average in terms of the 2D reprojection error and the ADD metric, respectively, without increasing the computation time.
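The keypoint-localization step described here (pixel-wise direction vectors voting for a 2D keypoint, which then feeds a PnP solver) can be sketched with a minimal NumPy example. This is not the paper's or PVNet's implementation: PVNet uses RANSAC-based voting over network-predicted vectors, whereas this sketch recovers the keypoint as the least-squares intersection of the voted 2D lines, and the function name `vote_keypoint` is a placeholder.

```python
import numpy as np

def vote_keypoint(pixels, directions):
    """Least-squares intersection of the 2D lines {p_i + t * d_i}.

    Each foreground pixel p_i casts a vote along its predicted direction
    d_i; the keypoint is the point minimizing the summed squared
    perpendicular distance to all voting lines. (Simplified stand-in for
    PVNet's RANSAC voting.)
    """
    d = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    # Projector onto each line's normal space: A_i = I - d_i d_i^T
    A = np.eye(2)[None] - d[:, :, None] * d[:, None, :]
    lhs = A.sum(axis=0)                          # sum_i A_i
    rhs = (A @ pixels[:, :, None]).sum(axis=0)[:, 0]  # sum_i A_i p_i
    return np.linalg.solve(lhs, rhs)

# Synthetic check: pixels whose directions all point at a known keypoint.
rng = np.random.default_rng(0)
keypoint = np.array([50.0, 30.0])
pixels = rng.uniform(0.0, 100.0, size=(200, 2))
estimate = vote_keypoint(pixels, keypoint - pixels)
```

In the full pipeline, the estimated 2D locations of all predefined 3D keypoints would then be passed, together with the object model and the camera intrinsics, to a PnP solver (e.g., OpenCV's `cv2.solvePnP`) to recover the 6D pose.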