
Visual odometry

In robotics and computer vision, visual odometry is the process of determining the position and orientation of a robot by analyzing the associated camera images. It has been used in a wide variety of robotic applications, such as on the Mars Exploration Rovers.[1]

The optical flow vector of a moving object in a video sequence

Overview

In navigation, odometry is the use of data from the movement of actuators to estimate change in position over time through devices such as rotary encoders to measure wheel rotations. While useful for many wheeled or tracked vehicles, traditional odometry techniques cannot be applied to mobile robots with non-standard locomotion methods, such as legged robots. In addition, odometry universally suffers from precision problems, since wheels tend to slip and slide on the floor creating a non-uniform distance traveled as compared to the wheel rotations. The error is compounded when the vehicle operates on non-smooth surfaces. Odometry readings become increasingly unreliable as these errors accumulate and compound over time.
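The encoder-based dead reckoning described above can be sketched in a few lines. The following is a minimal pose integrator for a hypothetical differential-drive robot; the wheel radius, wheel base, and encoder resolution are illustrative values, not from any particular platform. Slip and uneven terrain violate the model's assumptions, which is exactly the error source the paragraph describes.

```python
import math

def integrate_odometry(pose, ticks_left, ticks_right,
                       ticks_per_rev=2048, wheel_radius=0.05,
                       wheel_base=0.3):
    """Dead-reckon a differential-drive pose (x, y, heading) from one
    pair of encoder tick counts. All parameters are illustrative."""
    x, y, th = pose
    # Arc length rolled by each wheel, assuming no slip.
    dl = 2 * math.pi * wheel_radius * ticks_left / ticks_per_rev
    dr = 2 * math.pi * wheel_radius * ticks_right / ticks_per_rev
    d = (dl + dr) / 2            # distance traveled by the midpoint
    dth = (dr - dl) / wheel_base  # heading change
    # Integrate along the mean heading over the interval.
    return (x + d * math.cos(th + dth / 2),
            y + d * math.sin(th + dth / 2),
            th + dth)
```

Any systematic error in the assumed wheel radius, or any slip, biases every step, and the bias accumulates without bound, which is why visual odometry is attractive as a complement.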

Visual odometry is the process of determining equivalent odometry information using sequential camera images to estimate the distance traveled. Visual odometry allows for enhanced navigational accuracy in robots or vehicles using any type of locomotion on any[citation needed] surface.

Types

There are various types of VO.

Monocular and stereo

Depending on the camera setup, VO can be categorized as monocular VO (a single camera) or stereo VO (two cameras in a stereo configuration).

 
VIO is widely used in commercial quadcopters, providing localization in GPS-denied situations.

Feature-based and direct method

In traditional VO, visual information is obtained by the feature-based method, which extracts image feature points and tracks them through the image sequence. Recent developments in VO research provided an alternative, called the direct method, which uses the pixel intensities in the image sequence directly as visual input. There are also hybrid methods.

Visual inertial odometry

If an inertial measurement unit (IMU) is used within the VO system, it is commonly referred to as Visual Inertial Odometry (VIO).

Algorithm

Most existing approaches to visual odometry are based on the following stages.

  1. Acquire input images: using either single cameras,[2][3] stereo cameras,[3][4] or omnidirectional cameras.[5][6]
  2. Image correction: apply image processing techniques for lens distortion removal, etc.
  3. Feature detection: define interest operators, and match features across frames and construct optical flow field.
    1. Feature extraction and correlation. Use correlation, not long-term feature tracking, to establish the correspondence between two images.
    2. Construct optical flow field (Lucas–Kanade method).
  4. Check flow field vectors for potential tracking errors and remove outliers.[7]
  5. Estimation of the camera motion from the optical flow.[8][9][10][11]
    1. Choice 1: use a Kalman filter to maintain a probability distribution over the state estimate.
    2. Choice 2: find the geometric and 3D properties of the features that minimize a cost function based on the re-projection error between two adjacent images. This can be done by mathematical minimization or random sampling.
  6. Periodic repopulation of trackpoints to maintain coverage across the image.
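The flow-construction step above (the Lucas–Kanade method of step 3) can be sketched for a single patch: solve the brightness-constancy constraint in least squares for the small translation of the patch between two frames. This is a minimal numpy illustration, not any particular VO system's implementation; real pipelines run it per feature window, pyramidally, and iteratively.

```python
import numpy as np

def lucas_kanade_patch(prev, curr):
    """Estimate the (dx, dy) translation of a small patch between two
    frames by solving the Lucas-Kanade least-squares system."""
    prev = np.asarray(prev, float)
    curr = np.asarray(curr, float)
    # Spatial gradients (central differences) and temporal gradient.
    Ix = np.gradient(prev, axis=1)
    Iy = np.gradient(prev, axis=0)
    It = curr - prev
    # Brightness constancy: Ix*dx + Iy*dy + It = 0 at every pixel,
    # stacked into an overdetermined linear system A d = -It.
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # (dx, dy)
```

The normal equations are well conditioned only where the patch has gradient in two directions, which is why step 3 first selects interest points (corners) rather than solving on arbitrary windows.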

An alternative to feature-based methods is the "direct" or appearance-based visual odometry technique which minimizes an error directly in sensor space and subsequently avoids feature matching and extraction.[4][12][13]
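A toy illustration of the direct idea: minimize a photometric (intensity) error in sensor space over candidate motions instead of matching features. Real direct methods optimize a full 6-DoF warp by gradient descent; this sketch brute-forces integer image translations, and all names are illustrative.

```python
import numpy as np

def photometric_error(ref, tgt, shift):
    """Sum of squared intensity differences after undoing an assumed
    integer (dx, dy) shift -- the error a direct method minimizes."""
    dx, dy = shift
    shifted = np.roll(np.roll(tgt, -dy, axis=0), -dx, axis=1)
    return float(np.sum((ref - shifted) ** 2))

def direct_translation(ref, tgt, radius=3):
    """Exhaustively search integer translations for the one that
    minimizes the photometric error (a toy stand-in for the iterative
    optimization used by real direct methods)."""
    best = min(((photometric_error(ref, tgt, (dx, dy)), (dx, dy))
                for dx in range(-radius, radius + 1)
                for dy in range(-radius, radius + 1)))
    return best[1]
```

Because every pixel contributes to the error, no detection or matching stage is needed, at the cost of sensitivity to illumination changes that violate brightness constancy.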

Another method, coined 'visiodometry', estimates the planar roto-translations between images using phase correlation instead of extracting features.[14][15]
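The phase-correlation idea can be sketched with numpy's FFT: the normalized cross-power spectrum of two translated images has an inverse transform that peaks at the shift. This hypothetical sketch handles integer, wrap-around translations only; recovering rotation as well, as visiodometry does, requires an additional log-polar step not shown here.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation taking image a to image b from
    the peak of the normalized cross-power spectrum."""
    Fa = np.fft.fft2(a)
    Fb = np.fft.fft2(b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12       # keep phase only
    corr = np.fft.ifft2(cross).real      # impulse at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peaks to signed shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dx, dy
```

Because only the spectral phase is kept, the estimate uses the whole image and is robust to uniform illumination changes, which is the appeal of this approach over sparse features.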

Egomotion

 
Egomotion estimation using corner detection

Egomotion is defined as the 3D motion of a camera within an environment.[16] In the field of computer vision, egomotion refers to estimating a camera's motion relative to a rigid scene.[17] An example of egomotion estimation would be estimating a car's moving position relative to lines on the road or street signs being observed from the car itself. The estimation of egomotion is important in autonomous robot navigation applications.[18]

Overview

The goal of estimating the egomotion of a camera is to determine the 3D motion of that camera within the environment using a sequence of images taken by the camera.[19] The process of estimating a camera's motion within an environment involves the use of visual odometry techniques on a sequence of images captured by the moving camera.[20] This is typically done using feature detection to construct an optical flow from two image frames in a sequence[16] generated from either single cameras or stereo cameras.[20] Using stereo image pairs for each frame helps reduce error and provides additional depth and scale information.[21][22]

Features are detected in the first frame, and then matched in the second frame. This information is then used to make the optical flow field for the detected features in those two images. The optical flow field illustrates how features diverge from a single point, the focus of expansion. The focus of expansion can be detected from the optical flow field, indicating the direction of the motion of the camera, and thus providing an estimate of the camera motion.
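Under pure forward translation, the constraint that every flow vector points away from the focus of expansion is linear in the FoE coordinates, so a least-squares estimate takes a few lines of numpy. This is a hypothetical sketch; real flow fields need outlier rejection before the fit, and camera rotation adds a component this model ignores.

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares focus of expansion: for pure forward translation
    the flow v at image point p is parallel to (p - foe), so the cross
    product (p - foe) x v vanishes. Each flow vector therefore gives
    one linear equation  v_y*fx - v_x*fy = px*v_y - py*v_x."""
    pts = np.asarray(points, float)
    v = np.asarray(flows, float)
    A = np.stack([v[:, 1], -v[:, 0]], axis=1)
    b = pts[:, 0] * v[:, 1] - pts[:, 1] * v[:, 0]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe  # (fx, fy) in image coordinates
```

The recovered point gives the heading direction of the camera; the magnitude of each flow vector then carries the depth-scaled speed information.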

There are other methods of extracting egomotion information from images as well, including a method that avoids feature detection and optical flow fields and directly uses the image intensities.[16]

See also

Dead reckoning
Odometry
Optical flow
Optical motion capture

References

  1. ^ Maimone, M.; Cheng, Y.; Matthies, L. (2007). "Two years of Visual Odometry on the Mars Exploration Rovers" (PDF). Journal of Field Robotics. 24 (3): 169–186. CiteSeerX 10.1.1.104.3110. doi:10.1002/rob.20184. S2CID 17544166. Retrieved 2008-07-10.
  2. ^ Chhaniyara, Savan; Althoefer, Kaspar; Seneviratne, Lakmal D. (2008). "Visual Odometry Technique Using Circular Marker Identification for Motion Parameter Estimation". Advances in Mobile Robotics: Proceedings of the Eleventh International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines, Coimbra, Portugal. Vol. 11. World Scientific. Archived from the original on 2012-02-24. Retrieved 2010-01-22.
  3. ^ a b Nister, D; Naroditsky, O.; Bergen, J (Jan 2004). Visual Odometry. Computer Vision and Pattern Recognition, 2004. CVPR 2004. Vol. 1. pp. I–652 – I–659 Vol.1. doi:10.1109/CVPR.2004.1315094.
  4. ^ a b Comport, A.I.; Malis, E.; Rives, P. (2010). F. Chaumette; P. Corke; P. Newman (eds.). "Real-time Quadrifocal Visual Odometry". International Journal of Robotics Research. 29 (2–3): 245–266. CiteSeerX 10.1.1.720.3113. doi:10.1177/0278364909356601. S2CID 15139693.
  5. ^ Scaramuzza, D.; Siegwart, R. (October 2008). "Appearance-Guided Monocular Omnidirectional Visual Odometry for Outdoor Ground Vehicles". IEEE Transactions on Robotics. 24 (5): 1015–1026. doi:10.1109/TRO.2008.2004490. hdl:20.500.11850/14362. S2CID 13894940.
  6. ^ Corke, P.; Strelow, D.; Singh, S. "Omnidirectional visual odometry for a planetary rover". Intelligent Robots and Systems, 2004.(IROS 2004). Proceedings. 2004 IEEE/RSJ International Conference on. Vol. 4. doi:10.1109/IROS.2004.1390041.
  7. ^ Campbell, J.; Sukthankar, R.; Nourbakhsh, I.; Pittsburgh, I.R. "Techniques for evaluating optical flow for visual odometry in extreme terrain". Intelligent Robots and Systems, 2004.(IROS 2004). Proceedings. 2004 IEEE/RSJ International Conference on. Vol. 4. doi:10.1109/IROS.2004.1389991.
  8. ^ Sunderhauf, N.; Konolige, K.; Lacroix, S.; Protzel, P. (2005). "Visual odometry using sparse bundle adjustment on an autonomous outdoor vehicle". In Levi; Schanz; Lafrenz; Avrutin (eds.). Tagungsband Autonome Mobile Systeme 2005 (PDF). Reihe Informatik aktuell. Springer Verlag. pp. 157–163. Archived from the original (PDF) on 2009-02-11. Retrieved 2008-07-10.
  9. ^ Konolige, K.; Agrawal, M.; Bolles, R.C.; Cowan, C.; Fischler, M.; Gerkey, B.P. (2008). "Outdoor Mapping and Navigation Using Stereo Vision". Experimental Robotics. Springer Tracts in Advanced Robotics. Vol. 39. pp. 179–190. doi:10.1007/978-3-540-77457-0_17. ISBN 978-3-540-77456-3.
  10. ^ Olson, C.F.; Matthies, L.; Schoppers, M.; Maimone, M.W. (2002). "Rover navigation using stereo ego-motion" (PDF). Robotics and Autonomous Systems. 43 (4): 215–229. doi:10.1016/s0921-8890(03)00004-6. Retrieved 2010-06-06.
  11. ^ Cheng, Y.; Maimone, M.W.; Matthies, L. (2006). "Visual Odometry on the Mars Exploration Rovers". IEEE Robotics and Automation Magazine. 13 (2): 54–62. CiteSeerX 10.1.1.297.4693. doi:10.1109/MRA.2006.1638016. S2CID 15149330.
  12. ^ Engel, Jakob; Schöps, Thomas; Cremers, Daniel (2014). "LSD-SLAM: Large-Scale Direct Monocular SLAM" (PDF). In Fleet D.; Pajdla T.; Schiele B.; Tuytelaars T. (eds.). Computer Vision. European Conference on Computer Vision 2014. Lecture Notes in Computer Science. Vol. 8690. doi:10.1007/978-3-319-10605-2_54.
  13. ^ Engel, Jakob; Sturm, Jürgen; Cremers, Daniel (2013). "Semi-Dense Visual Odometry for a Monocular Camera" (PDF). IEEE International Conference on Computer Vision (ICCV). CiteSeerX 10.1.1.402.6918. doi:10.1109/ICCV.2013.183.
  14. ^ Zaman, M. (2007). "High Precision Relative Localization Using a Single Camera". Robotics and Automation, 2007.(ICRA 2007). Proceedings. 2007 IEEE International Conference on. doi:10.1109/ROBOT.2007.364078.
  15. ^ Zaman, M. (2007). "High resolution relative localisation using two cameras". Journal of Robotics and Autonomous Systems. 55 (9): 685–692. doi:10.1016/j.robot.2007.05.008.
  16. ^ a b c Irani, M.; Rousso, B.; Peleg S. (June 1994). "Recovery of Ego-Motion Using Image Stabilization" (PDF). IEEE Computer Society Conference on Computer Vision and Pattern Recognition: 21–23. Retrieved 7 June 2010.
  17. ^ Burger, W.; Bhanu, B. (Nov 1990). "Estimating 3D egomotion from perspective image sequence". IEEE Transactions on Pattern Analysis and Machine Intelligence. 12 (11): 1040–1058. doi:10.1109/34.61704. S2CID 206418830.
  18. ^ Shakernia, O.; Vidal, R.; Shankar, S. (2003). "Omnidirectional Egomotion Estimation From Back-projection Flow" (PDF). Conference on Computer Vision and Pattern Recognition Workshop. 7: 82. CiteSeerX 10.1.1.5.8127. doi:10.1109/CVPRW.2003.10074. S2CID 5494756. Retrieved 7 June 2010.
  19. ^ Tian, T.; Tomasi, C.; Heeger, D. (1996). "Comparison of Approaches to Egomotion Computation" (PDF). IEEE Computer Society Conference on Computer Vision and Pattern Recognition: 315. Archived from the original (PDF) on August 8, 2008. Retrieved 7 June 2010.
  20. ^ a b Milella, A.; Siegwart, R. (January 2006). "Stereo-Based Ego-Motion Estimation Using Pixel Tracking and Iterative Closest Point" (PDF). IEEE International Conference on Computer Vision Systems: 21. Archived from the original (PDF) on 17 September 2010. Retrieved 7 June 2010.
  21. ^ Olson, C. F.; Matthies, L.; Schoppers, M.; Maimoneb M. W. (June 2003). "Rover navigation using stereo ego-motion" (PDF). Robotics and Autonomous Systems. 43 (9): 215–229. doi:10.1016/s0921-8890(03)00004-6. Retrieved 7 June 2010.
  22. ^ Sudin Dinesh; Koteswara Rao, K.; Unnikrishnan, M.; Brinda, V.; Lalithambika, V.R.; Dhekane, M.V. (2013). "Improvements in Visual Odometry Algorithm for Planetary Exploration Rovers". IEEE International Conference on Emerging Trends in Communication, Control, Signal Processing & Computing Applications (C2SPCA).
