
Corner detection

Corner detection is an approach used within computer vision systems to extract certain kinds of features and infer the contents of an image. Corner detection is frequently used in motion detection, image registration, video tracking, image mosaicing, panorama stitching, 3D reconstruction and object recognition. Corner detection overlaps with the topic of interest point detection.

Output of a typical corner detection algorithm
Egomotion estimation using corner detection

Formalization

A corner can be defined as the intersection of two edges. A corner can also be defined as a point for which there are two dominant and different edge directions in a local neighbourhood of the point.

An interest point is a point in an image which has a well-defined position and can be robustly detected. This means that an interest point can be a corner but it can also be, for example, an isolated point of local intensity maximum or minimum, line endings, or a point on a curve where the curvature is locally maximal.

In practice, most so-called corner detection methods detect interest points in general, and in fact, the term "corner" and "interest point" are used more or less interchangeably through the literature.[1] As a consequence, if only corners are to be detected it is necessary to do a local analysis of detected interest points to determine which of these are real corners. Examples of edge detection that can be used with post-processing to detect corners are the Kirsch operator and the Frei-Chen masking set.[2]

"Corner", "interest point" and "feature" are used interchangeably in literature, confusing the issue. Specifically, there are several blob detectors that can be referred to as "interest point operators", but which are sometimes erroneously referred to as "corner detectors". Moreover, there exists a notion of ridge detection to capture the presence of elongated objects.

Corner detectors are not usually very robust and often require the introduction of large amounts of redundancy to prevent the effect of individual errors from dominating the recognition task.

One determination of the quality of a corner detector is its ability to detect the same corner in multiple similar images, under conditions of different lighting, translation, rotation and other transforms.

A simple approach to corner detection in images is to use correlation, but this is computationally expensive and suboptimal. An alternative approach used frequently is based on a method proposed by Harris and Stephens (below), which in turn is an improvement of a method by Moravec.

Moravec corner detection algorithm

This is one of the earliest corner detection algorithms and defines a corner to be a point with low self-similarity.[3] The algorithm tests each pixel in the image to see if a corner is present, by considering how similar a patch centered on the pixel is to nearby, largely overlapping patches. The similarity is measured by taking the sum of squared differences (SSD) between the corresponding pixels of two patches. A lower number indicates more similarity.

If the pixel is in a region of uniform intensity, then the nearby patches will look similar. If the pixel is on an edge, then nearby patches in a direction perpendicular to the edge will look quite different, but nearby patches in a direction parallel to the edge will result in only a small change. If the pixel is on a feature with variation in all directions, then none of the nearby patches will look similar.

The corner strength is defined as the smallest SSD between the patch and its neighbours (horizontal, vertical and on the two diagonals). The reason is that if this number is high, then the variation along all shifts is either equal to it or larger than it, thus capturing the fact that all nearby patches look different.

If the corner strength is computed for all locations, a location where it is locally maximal indicates that a feature of interest is present there.

As pointed out by Moravec, one of the main problems with this operator is that it is not isotropic: if an edge is present that is not in the direction of the neighbours (horizontal, vertical, or diagonal), then the smallest SSD will be large and the edge will be incorrectly chosen as an interest point.[4]
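
For illustration, a minimal NumPy sketch of the Moravec corner strength described above is given below; the window size and the set of shifts are illustrative choices rather than values prescribed by Moravec's original implementation.

```python
import numpy as np

def moravec_corner_strength(image, window=3, shifts=((1, 0), (0, 1), (1, 1), (1, -1))):
    """Moravec corner strength: smallest SSD between a patch and its shifted copies."""
    img = image.astype(float)
    h, w = img.shape
    r = window // 2
    strength = np.zeros_like(img)
    for y in range(r + 1, h - r - 1):
        for x in range(r + 1, w - r - 1):
            patch = img[y - r:y + r + 1, x - r:x + r + 1]
            ssds = []
            for dy, dx in shifts:
                shifted = img[y + dy - r:y + dy + r + 1, x + dx - r:x + dx + r + 1]
                ssds.append(np.sum((patch - shifted) ** 2))
            # The corner strength is the smallest SSD over the tested shifts.
            strength[y, x] = min(ssds)
    return strength
```

Interest points would then be taken as local maxima of this strength map above a threshold.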

The Harris & Stephens / Shi–Tomasi corner detection algorithms

Harris and Stephens[5] improved upon Moravec's corner detector by considering the differential of the corner score with respect to direction directly, instead of using shifted patches. (This corner score is often referred to as autocorrelation, since the term is used in the paper in which this detector is described. However, the mathematics in the paper clearly indicate that the sum of squared differences is used.)

Without loss of generality, we will assume a grayscale 2-dimensional image is used. Let this image be given by $I$. Consider taking an image patch over the area $(u, v)$ and shifting it by $(x, y)$. The weighted sum of squared differences (SSD) between these two patches, denoted $S$, is given by:

$S(x, y) = \sum_u \sum_v w(u, v) \, \left( I(u + x, v + y) - I(u, v) \right)^2$

$I(u + x, v + y)$ can be approximated by a Taylor expansion. Let $I_x$ and $I_y$ be the partial derivatives of $I$, such that

$I(u + x, v + y) \approx I(u, v) + I_x(u, v) x + I_y(u, v) y$

This produces the approximation

$S(x, y) \approx \sum_u \sum_v w(u, v) \, \left( I_x(u, v) x + I_y(u, v) y \right)^2$

which can be written in matrix form:

$S(x, y) \approx \begin{pmatrix} x & y \end{pmatrix} A \begin{pmatrix} x \\ y \end{pmatrix}$

where A is the structure tensor,

$A = \sum_u \sum_v w(u, v) \begin{bmatrix} I_x(u, v)^2 & I_x(u, v) I_y(u, v) \\ I_x(u, v) I_y(u, v) & I_y(u, v)^2 \end{bmatrix} = \begin{bmatrix} \langle I_x^2 \rangle & \langle I_x I_y \rangle \\ \langle I_x I_y \rangle & \langle I_y^2 \rangle \end{bmatrix}$

In words, we find the covariance of the partial derivative of the image intensity $I$ with respect to the $x$ and $y$ axes.

Angle brackets denote averaging (i.e. summation over $(u, v)$). $w(u, v)$ denotes the type of window that slides over the image. If a Box filter is used the response will be anisotropic, but if a Gaussian is used, then the response will be isotropic.

A corner (or in general an interest point) is characterized by a large variation of $S$ in all directions of the vector $(x, y)$. By analyzing the eigenvalues of $A$, this characterization can be expressed in the following way: $A$ should have two "large" eigenvalues for an interest point. Based on the magnitudes of the eigenvalues, the following inferences can be made based on this argument:

  1. If $\lambda_1 \approx 0$ and $\lambda_2 \approx 0$, then this pixel $(x, y)$ has no features of interest.
  2. If $\lambda_1 \approx 0$ and $\lambda_2$ has some large positive value, then an edge is found.
  3. If $\lambda_1$ and $\lambda_2$ have large positive values, then a corner is found.

Harris and Stephens note that exact computation of the eigenvalues is computationally expensive, since it requires the computation of a square root, and instead suggest the following function $M_c$, where $\kappa$ is a tunable sensitivity parameter:

$M_c = \lambda_1 \lambda_2 - \kappa \, (\lambda_1 + \lambda_2)^2 = \det(A) - \kappa \, \operatorname{trace}^2(A)$

Therefore, the algorithm[6] does not have to actually compute the eigenvalue decomposition of the matrix $A$ and instead it is sufficient to evaluate the determinant and trace of $A$ to find corners, or rather interest points in general.

The Shi–Tomasi[7] corner detector directly computes $\min(\lambda_1, \lambda_2)$ because under certain assumptions, the corners are more stable for tracking. Note that this method is also sometimes referred to as the Kanade–Tomasi corner detector.

The value of $\kappa$ has to be determined empirically, and in the literature values in the range 0.04–0.15 have been reported as feasible.
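
A compact sketch of how the Harris and Shi–Tomasi responses can be computed from the structure tensor is given below; the derivative scale, window scale and $\kappa$ value are illustrative assumptions, not values prescribed by the original papers.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_responses(image, sigma_d=1.0, sigma_w=2.0, kappa=0.05):
    """Harris (det - kappa*trace^2) and Shi-Tomasi (smallest eigenvalue) responses."""
    img = image.astype(float)
    # Image derivatives I_x, I_y computed with Gaussian derivative filters.
    Ix = gaussian_filter(img, sigma_d, order=(0, 1))
    Iy = gaussian_filter(img, sigma_d, order=(1, 0))
    # Entries of the structure tensor A, averaged with a Gaussian window w(u, v).
    Axx = gaussian_filter(Ix * Ix, sigma_w)
    Axy = gaussian_filter(Ix * Iy, sigma_w)
    Ayy = gaussian_filter(Iy * Iy, sigma_w)
    det_A = Axx * Ayy - Axy ** 2
    trace_A = Axx + Ayy
    harris = det_A - kappa * trace_A ** 2
    # Smallest eigenvalue of the 2x2 symmetric structure tensor (Shi-Tomasi measure).
    shi_tomasi = 0.5 * (trace_A - np.sqrt((Axx - Ayy) ** 2 + 4.0 * Axy ** 2))
    return harris, shi_tomasi
```

Corners would then be taken as local maxima of either response above a threshold, for example after non-maximum suppression.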

One can avoid setting the parameter $\kappa$ by using Noble's[8] corner measure $M_c'$, which amounts to the harmonic mean of the eigenvalues:

$M_c' = 2 \, \frac{\det(A)}{\operatorname{trace}(A) + \epsilon}$

$\epsilon$ being a small positive constant.

If $A$ can be interpreted as the precision matrix for the corner position, the covariance matrix for the corner position is $A^{-1}$, i.e.

$\frac{1}{\langle I_x^2 \rangle \langle I_y^2 \rangle - \langle I_x I_y \rangle^2} \begin{bmatrix} \langle I_y^2 \rangle & -\langle I_x I_y \rangle \\ -\langle I_x I_y \rangle & \langle I_x^2 \rangle \end{bmatrix}$

The sum of the eigenvalues of $A^{-1}$, which in that case can be interpreted as a generalized variance (or a "total uncertainty") of the corner position, is related to Noble's corner measure $M_c'$ by the following equation:

$\lambda_1(A^{-1}) + \lambda_2(A^{-1}) = \frac{\operatorname{trace}(A)}{\det(A)} \approx \frac{2}{M_c'}$

The Förstner corner detector

Corner detection using the Förstner Algorithm

In some cases, one may wish to compute the location of a corner with subpixel accuracy. To achieve an approximate solution, the Förstner[9] algorithm solves for the point closest to all the tangent lines of the corner in a given window and is a least-square solution. The algorithm relies on the fact that for an ideal corner, tangent lines cross at a single point.

The equation of a tangent line $T_{\mathbf{x}'}(\mathbf{x})$ at pixel $\mathbf{x}'$ is given by:

$T_{\mathbf{x}'}(\mathbf{x}) = \nabla I(\mathbf{x}')^\top \left( \mathbf{x} - \mathbf{x}' \right) = 0$

where $\nabla I(\mathbf{x}') = \begin{bmatrix} I_x & I_y \end{bmatrix}^\top$ is the gradient vector of the image $I$ at $\mathbf{x}'$.

The point $\mathbf{x}_0$ closest to all the tangent lines in the window $N$ is:

$\mathbf{x}_0 = \underset{\mathbf{x} \in \mathbb{R}^{2 \times 1}}{\operatorname{argmin}} \int_{\mathbf{x}' \in N} T_{\mathbf{x}'}(\mathbf{x})^2 \, d\mathbf{x}'$

The distance from $\mathbf{x}_0$ to the tangent lines $T_{\mathbf{x}'}$ is weighted by the gradient magnitude, thus giving more importance to tangents passing through pixels with strong gradients.

Solving for $\mathbf{x}_0$:

$\begin{aligned} \mathbf{x}_0 &= \underset{\mathbf{x} \in \mathbb{R}^{2 \times 1}}{\operatorname{argmin}} \int_{\mathbf{x}' \in N} \left( \nabla I(\mathbf{x}')^\top \left( \mathbf{x} - \mathbf{x}' \right) \right)^2 d\mathbf{x}' \\ &= \underset{\mathbf{x} \in \mathbb{R}^{2 \times 1}}{\operatorname{argmin}} \int_{\mathbf{x}' \in N} \left( \mathbf{x} - \mathbf{x}' \right)^\top \nabla I(\mathbf{x}') \, \nabla I(\mathbf{x}')^\top \left( \mathbf{x} - \mathbf{x}' \right) d\mathbf{x}' \\ &= \underset{\mathbf{x} \in \mathbb{R}^{2 \times 1}}{\operatorname{argmin}} \left( \mathbf{x}^\top A \mathbf{x} - 2 \mathbf{x}^\top \mathbf{b} + c \right) \end{aligned}$

$A \in \mathbb{R}^{2 \times 2}$, $\mathbf{b} \in \mathbb{R}^{2 \times 1}$ and $c \in \mathbb{R}$ are defined as:

$\begin{aligned} A &= \int \nabla I(\mathbf{x}') \, \nabla I(\mathbf{x}')^\top \, d\mathbf{x}' \\ \mathbf{b} &= \int \nabla I(\mathbf{x}') \, \nabla I(\mathbf{x}')^\top \, \mathbf{x}' \, d\mathbf{x}' \\ c &= \int \mathbf{x}'^\top \nabla I(\mathbf{x}') \, \nabla I(\mathbf{x}')^\top \, \mathbf{x}' \, d\mathbf{x}' \end{aligned}$

Minimizing this equation can be done by differentiating with respect to $\mathbf{x}$ and setting it equal to 0:

$2 A \mathbf{x} - 2 \mathbf{b} = 0 \; \Rightarrow \; A \mathbf{x} = \mathbf{b}$

Note that $A \in \mathbb{R}^{2 \times 2}$ is the structure tensor. For the equation to have a solution, $A$ must be invertible, which implies that $A$ must be full rank (rank 2). Thus, the solution

$\mathbf{x}_0 = A^{-1} \mathbf{b}$

only exists where an actual corner exists in the window $N$.
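
A minimal sketch of this subpixel localization within a single image window, solving $A \mathbf{x} = \mathbf{b}$ as derived above, is given below; the use of np.gradient for the image gradients and the window handling are illustrative assumptions.

```python
import numpy as np

def foerstner_corner(window):
    """Sub-pixel corner location within an image window by solving A x = b."""
    win = window.astype(float)
    Iy, Ix = np.gradient(win)                     # image gradients inside the window
    ys, xs = np.mgrid[0:win.shape[0], 0:win.shape[1]]
    # A = sum of grad * grad^T; b = sum of grad * grad^T applied to the pixel positions.
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = np.array([np.sum(Ix * Ix * xs + Ix * Iy * ys),
                  np.sum(Ix * Iy * xs + Iy * Iy * ys)])
    if np.linalg.matrix_rank(A) < 2:              # A must be full rank for a corner
        return None
    return np.linalg.solve(A, b)                  # (x, y) corner position within the window
```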

A methodology for performing automatic scale selection for this corner localization method has been presented by Lindeberg[10][11] by minimizing the normalized residual

$\tilde{d}_{\min} = \frac{c - \mathbf{b}^\top A^{-1} \mathbf{b}}{\operatorname{trace}(A)}$

over scales. Thereby, the method has the ability to automatically adapt the scale levels for computing the image gradients to the noise level in the image data, by choosing coarser scale levels for noisy image data and finer scale levels for near ideal corner-like structures.

Notes:

  • $c$ can be viewed as a residual in the least-square solution computation: if $c = 0$, then there was no error.
  • this algorithm can be modified to compute centers of circular features by changing tangent lines to normal lines.

The multi-scale Harris operator

The computation of the second moment matrix (sometimes also referred to as the structure tensor) $A$ in the Harris operator requires the computation of image derivatives $I_x, I_y$ in the image domain, as well as the summation of non-linear combinations of these derivatives over local neighbourhoods. Since the computation of derivatives usually involves a stage of scale-space smoothing, an operational definition of the Harris operator requires two scale parameters: (i) a local scale for smoothing prior to the computation of image derivatives, and (ii) an integration scale for accumulating the non-linear operations on derivative operators into an integrated image descriptor.

With $I$ denoting the original image intensity, let $L$ denote the scale space representation of $I$ obtained by convolution with a Gaussian kernel

$g(x, y, t) = \frac{1}{2 \pi t} e^{-(x^2 + y^2)/2t}$

with local scale parameter $t$:

$L(x, y, t) = g(x, y, t) * I(x, y)$

and let $L_x = \partial_x L$ and $L_y = \partial_y L$ denote the partial derivatives of $L$. Moreover, introduce a Gaussian window function $g(x, y, s)$ with integration scale parameter $s$. Then, the multi-scale second-moment matrix[12][13][14] can be defined as

$\mu(x, y; t, s) = \int_{\xi = -\infty}^{\infty} \int_{\eta = -\infty}^{\infty} \begin{bmatrix} L_x^2(x - \xi, y - \eta; t) & L_x(x - \xi, y - \eta; t) \, L_y(x - \xi, y - \eta; t) \\ L_x(x - \xi, y - \eta; t) \, L_y(x - \xi, y - \eta; t) & L_y^2(x - \xi, y - \eta; t) \end{bmatrix} g(\xi, \eta; s) \, d\xi \, d\eta$

Then, we can compute eigenvalues of $\mu$ in a similar way as the eigenvalues of $A$ and define the multi-scale Harris corner measure as

$M_c(x, y; t, s) = \det(\mu(x, y; t, s)) - \kappa \, \operatorname{trace}^2(\mu(x, y; t, s))$

Concerning the choice of the local scale parameter $t$ and the integration scale parameter $s$, these scale parameters are usually coupled by a relative integration scale parameter $\gamma$ such that $s = \gamma^2 t$, where $\gamma$ is usually chosen in the interval $[1, 2]$.[12][13] Thus, we can compute the multi-scale Harris corner measure $M_c(x, y; t, \gamma^2 t)$ at any scale $t$ in scale-space to obtain a multi-scale corner detector, which responds to corner structures of varying sizes in the image domain.
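
A sketch of the multi-scale Harris measure at a single scale level $t$, with $s = \gamma^2 t$, following the definitions above; the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_harris(image, t, gamma=1.5, kappa=0.05):
    """Multi-scale Harris measure M_c(x, y; t, s) with integration scale s = gamma^2 * t."""
    img = image.astype(float)
    sigma_t = np.sqrt(t)                 # the scale parameter t is a variance; sigma = sqrt(t)
    sigma_s = gamma * sigma_t            # integration scale s = gamma^2 * t
    # Scale-space derivatives L_x, L_y at local scale t.
    Lx = gaussian_filter(img, sigma_t, order=(0, 1))
    Ly = gaussian_filter(img, sigma_t, order=(1, 0))
    # Components of the second-moment matrix mu, integrated at scale s.
    mu_xx = gaussian_filter(Lx * Lx, sigma_s)
    mu_xy = gaussian_filter(Lx * Ly, sigma_s)
    mu_yy = gaussian_filter(Ly * Ly, sigma_s)
    return (mu_xx * mu_yy - mu_xy ** 2) - kappa * (mu_xx + mu_yy) ** 2
```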

In practice, this multi-scale corner detector is often complemented by a scale selection step, where the scale-normalized Laplacian operator[11][12]

$\nabla^2_{\mathrm{norm}} L(x, y; t) = t \, \nabla^2 L(x, y; t) = t \, \left( L_{xx}(x, y; t) + L_{yy}(x, y; t) \right)$

is computed at every scale in scale-space and scale adapted corner points with automatic scale selection (the "Harris-Laplace operator") are computed from the points that are simultaneously:[15]

  • spatial maxima of the multi-scale corner measure $M_c(x, y; t, \gamma^2 t)$
     $(\hat{x}, \hat{y}; t) = \operatorname{argmaxlocal}_{(x, y)} M_c(x, y; t, \gamma^2 t)$
  • local maxima or minima over scales of the scale-normalized Laplacian operator[11] $\nabla^2_{\mathrm{norm}} L(x, y; t)$:
     $\hat{t} = \operatorname{argmaxminlocal}_{t} \nabla^2_{\mathrm{norm}} L(\hat{x}, \hat{y}; t)$

The level curve curvature approach

An earlier approach to corner detection is to detect points where the curvature of level curves and the gradient magnitude are simultaneously high.[16][17] A differential way to detect such points is by computing the rescaled level curve curvature (the product of the level curve curvature and the gradient magnitude raised to the power of three)

$\tilde{\kappa}(x, y; t) = L_x^2 L_{yy} + L_y^2 L_{xx} - 2 L_x L_y L_{xy}$

and to detect positive maxima and negative minima of this differential expression at some scale $t$ in the scale space representation $L$ of the original image.[10][11] A main problem when computing the rescaled level curve curvature entity at a single scale, however, is that it may be sensitive to noise and to the choice of the scale level. A better method is to compute the $\gamma$-normalized rescaled level curve curvature

$\tilde{\kappa}_{\mathrm{norm}}(x, y; t) = t^{2\gamma} \left( L_x^2 L_{yy} + L_y^2 L_{xx} - 2 L_x L_y L_{xy} \right)$

with $\gamma = 7/8$ and to detect signed scale-space extrema of this expression, that is, points and scales that are positive maxima and negative minima with respect to both space and scale

$(\hat{x}, \hat{y}; \hat{t}) = \operatorname{argminmaxlocal}_{(x, y; t)} \tilde{\kappa}_{\mathrm{norm}}(x, y; t)$

in combination with a complementary localization step to handle the increase in localization error at coarser scales.[10][11][12] In this way, larger scale values will be associated with rounded corners of large spatial extent while smaller scale values will be associated with sharp corners with small spatial extent. This approach is the first corner detector with automatic scale selection (prior to the "Harris-Laplace operator" above) and has been used for tracking corners under large scale variations in the image domain[18] and for matching corner responses to edges to compute structural image features for geon-based object recognition.[19]
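
As an illustration, a sketch of the $\gamma$-normalized rescaled level curve curvature at a single scale is given below, following the expression above; computing the derivatives by Gaussian derivative filtering is an assumption of this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rescaled_level_curve_curvature(image, t, gamma=7.0 / 8.0):
    """Gamma-normalized rescaled level curve curvature at scale t."""
    img = image.astype(float)
    sigma = np.sqrt(t)
    Lx = gaussian_filter(img, sigma, order=(0, 1))
    Ly = gaussian_filter(img, sigma, order=(1, 0))
    Lxx = gaussian_filter(img, sigma, order=(0, 2))
    Lyy = gaussian_filter(img, sigma, order=(2, 0))
    Lxy = gaussian_filter(img, sigma, order=(1, 1))
    kappa = Lx ** 2 * Lyy + Ly ** 2 * Lxx - 2.0 * Lx * Ly * Lxy
    # Positive maxima and negative minima of this response over space and scale are corners.
    return t ** (2.0 * gamma) * kappa
```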

Laplacian of Gaussian, differences of Gaussians and determinant of the Hessian scale-space interest points

LoG[11][12][15] is an acronym standing for Laplacian of Gaussian, DoG[20] is an acronym standing for difference of Gaussians (DoG is an approximation of LoG), and DoH is an acronym standing for determinant of the Hessian.[11] These scale-invariant interest points are all extracted by detecting scale-space extrema of scale-normalized differential expressions, i.e., points in scale-space where the corresponding scale-normalized differential expressions assume local extrema with respect to both space and scale[11]

$(\hat{x}, \hat{y}; \hat{t}) = \operatorname{argminmaxlocal}_{(x, y; t)} (D_{\mathrm{norm}} L)(x, y; t)$

where $D_{\mathrm{norm}} L$ denotes the appropriate scale-normalized differential entity (defined below).

These detectors are more completely described in blob detection. The scale-normalized Laplacian of the Gaussian and difference-of-Gaussian features (Lindeberg 1994, 1998; Lowe 2004)[11][12][20]

$\nabla^2_{\mathrm{norm}} L(x, y; t) = t \, (L_{xx} + L_{yy}) \approx \frac{t \left( L(x, y; t + \Delta t) - L(x, y; t) \right)}{\Delta t}$

do not necessarily make highly selective features, since these operators may also lead to responses near edges. To improve the corner detection ability of the differences of Gaussians detector, the feature detector used in the SIFT[20] system therefore uses an additional post-processing stage, where the eigenvalues of the Hessian of the image at the detection scale are examined in a similar way as in the Harris operator. If the ratio of the eigenvalues is too high, then the local image is regarded as too edge-like, so the feature is rejected. Also Lindeberg's Laplacian of the Gaussian feature detector can be defined to comprise complementary thresholding on a complementary differential invariant to suppress responses near edges.[21]

The scale-normalized determinant of the Hessian operator (Lindeberg 1994, 1998)[11][12]

$\det (H_{\mathrm{norm}} L)(x, y; t) = t^2 \left( L_{xx} L_{yy} - L_{xy}^2 \right)$

is, on the other hand, highly selective to well localized image features and only responds when there are significant grey-level variations in two image directions,[11][14] and is in this and other respects a better interest point detector than the Laplacian of the Gaussian. The determinant of the Hessian is an affine covariant differential expression and has better scale selection properties under affine image transformations than the Laplacian operator (Lindeberg 2013, 2015).[21][22] Experimentally, this implies that determinant-of-the-Hessian interest points have better repeatability properties under local image deformation than Laplacian interest points, which in turn leads to better performance of image-based matching in terms of higher efficiency scores and lower 1−precision scores.[21]
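
A sketch of the scale-normalized Laplacian and determinant-of-the-Hessian responses at a single scale, following the expressions above; interest points would then be taken as extrema of these responses over both space and scale.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_normalized_blob_responses(image, t):
    """Scale-normalized Laplacian t*(Lxx+Lyy) and determinant of the Hessian t^2*(Lxx*Lyy - Lxy^2)."""
    img = image.astype(float)
    sigma = np.sqrt(t)
    Lxx = gaussian_filter(img, sigma, order=(0, 2))
    Lyy = gaussian_filter(img, sigma, order=(2, 0))
    Lxy = gaussian_filter(img, sigma, order=(1, 1))
    laplacian_norm = t * (Lxx + Lyy)
    det_hessian_norm = t ** 2 * (Lxx * Lyy - Lxy ** 2)
    return laplacian_norm, det_hessian_norm
```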

The scale selection properties, affine transformation properties and experimental properties of these and other scale-space interest point detectors are analyzed in detail in (Lindeberg 2013, 2015).[21][22]

Scale-space interest points based on the Lindeberg Hessian feature strength measures

Inspired by the structurally similar properties of the Hessian matrix $Hf$ of a function $f$ and the second-moment matrix (structure tensor) $\mu$, as can e.g. be manifested in terms of their similar transformation properties under affine image deformations[13][21]

$(Hf)' = A^{-\mathrm{T}} \, (Hf) \, A^{-1}$,
$\mu' = A^{-\mathrm{T}} \, \mu \, A^{-1}$,

Lindeberg (2013, 2015)[21][22] proposed to define four feature strength measures from the Hessian matrix in related ways as the Harris and Shi-and-Tomasi operators are defined from the structure tensor (second-moment matrix). Specifically, he defined the following unsigned and signed Hessian feature strength measures:

  • the unsigned Hessian feature strength measure I:
     $D_{1,\mathrm{norm}} L = \begin{cases} t^2 \left( \det HL - k \, \operatorname{trace}^2 HL \right) & \text{if } \det HL - k \, \operatorname{trace}^2 HL > 0 \\ 0 & \text{otherwise} \end{cases}$
  • the signed Hessian feature strength measure I:
     $\tilde{D}_{1,\mathrm{norm}} L = \begin{cases} t^2 \left( \det HL - k \, \operatorname{trace}^2 HL \right) & \text{if } \det HL - k \, \operatorname{trace}^2 HL > 0 \\ t^2 \left( \det HL + k \, \operatorname{trace}^2 HL \right) & \text{if } \det HL + k \, \operatorname{trace}^2 HL < 0 \\ 0 & \text{otherwise} \end{cases}$
  • the unsigned Hessian feature strength measure II:
     $D_{2,\mathrm{norm}} L = t \, \min\left( |\lambda_1(HL)|, |\lambda_2(HL)| \right)$
  • the signed Hessian feature strength measure II:
     $\tilde{D}_{2,\mathrm{norm}} L = \begin{cases} t \, \lambda_1(HL) & \text{if } |\lambda_1(HL)| \leq |\lambda_2(HL)| \\ t \, \lambda_2(HL) & \text{otherwise} \end{cases}$

where $\operatorname{trace} HL$ and $\det HL$ denote the trace and the determinant of the Hessian matrix $HL$ of the scale-space representation $L$ at any scale $t$, whereas

$\lambda_1(HL) = \frac{1}{2} \left( L_{xx} + L_{yy} - \sqrt{(L_{xx} - L_{yy})^2 + 4 L_{xy}^2} \right)$
$\lambda_2(HL) = \frac{1}{2} \left( L_{xx} + L_{yy} + \sqrt{(L_{xx} - L_{yy})^2 + 4 L_{xy}^2} \right)$

denote the eigenvalues of the Hessian matrix.[23]

The unsigned Hessian feature strength measure $D_{1,\mathrm{norm}} L$ responds to local extrema by positive values and is not sensitive to saddle points, whereas the signed Hessian feature strength measure $\tilde{D}_{1,\mathrm{norm}} L$ does additionally respond to saddle points by negative values. The unsigned Hessian feature strength measure $D_{2,\mathrm{norm}} L$ is insensitive to the local polarity of the signal, whereas the signed Hessian feature strength measure $\tilde{D}_{2,\mathrm{norm}} L$ responds to the local polarity of the signal by the sign of its output.
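
A sketch of the two unsigned Hessian feature strength measures at a single scale, following the formulas as reconstructed above; the value of k is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_feature_strength(image, t, k=0.06):
    """Unsigned Hessian feature strength measures D1,norm and D2,norm at scale t."""
    img = image.astype(float)
    sigma = np.sqrt(t)
    Lxx = gaussian_filter(img, sigma, order=(0, 2))
    Lyy = gaussian_filter(img, sigma, order=(2, 0))
    Lxy = gaussian_filter(img, sigma, order=(1, 1))
    det_H = Lxx * Lyy - Lxy ** 2
    trace_H = Lxx + Lyy
    # Unsigned measure I: Harris-like combination of determinant and trace of the Hessian.
    D1 = t ** 2 * np.maximum(det_H - k * trace_H ** 2, 0.0)
    # Unsigned measure II (as reconstructed above): smallest eigenvalue magnitude of the Hessian.
    disc = np.sqrt((Lxx - Lyy) ** 2 + 4.0 * Lxy ** 2)
    lam1 = 0.5 * (trace_H - disc)
    lam2 = 0.5 * (trace_H + disc)
    D2 = t * np.minimum(np.abs(lam1), np.abs(lam2))
    return D1, D2
```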

In Lindeberg (2015)[21] these four differential entities were combined with local scale selection based on either scale-space extrema detection

$(\hat{x}, \hat{y}; \hat{t}) = \operatorname{argminmaxlocal}_{(x, y; t)} (D_{\mathrm{norm}} L)(x, y; t)$

or scale linking. Furthermore, the signed and unsigned Hessian feature strength measures were combined with complementary thresholding.

These detectors were evaluated by experiments on image matching under scaling transformations, using a poster dataset with 12 posters, with multi-view matching over scaling transformations up to a scaling factor of 6 and viewing direction variations up to a slant angle of 45 degrees. The local image descriptors were defined from reformulations of the pure image descriptors in the SIFT and SURF operators into image measurements in terms of Gaussian derivative operators (Gauss-SIFT and Gauss-SURF), instead of original SIFT as defined from an image pyramid or original SURF as defined from Haar wavelets. In these experiments, it was shown that scale-space interest point detection based on the unsigned Hessian feature strength measure allowed for the best performance, and better performance than scale-space interest points obtained from the determinant of the Hessian. The unsigned Hessian feature strength measure, the signed Hessian feature strength measure and the determinant of the Hessian all allowed for better performance than the Laplacian of the Gaussian. When combined with scale linking and complementary thresholding, the signed Hessian feature strength measure did additionally allow for better performance than the Laplacian of the Gaussian.

Furthermore, it was shown that all these differential scale-space interest point detectors defined from the Hessian matrix allow for the detection of a larger number of interest points and better matching performance compared to the Harris and Shi-and-Tomasi operators defined from the structure tensor (second-moment matrix).

A theoretical analysis of the scale selection properties of these four Hessian feature strength measures and other differential entities for detecting scale-space interest points, including the Laplacian of the Gaussian and the determinant of the Hessian, is given in Lindeberg (2013)[22] and an analysis of their affine transformation properties as well as experimental properties in Lindeberg (2015).[21]

Affine-adapted interest point operators

The interest points obtained from the multi-scale Harris operator with automatic scale selection are invariant to translations, rotations and uniform rescalings in the spatial domain. The images that constitute the input to a computer vision system are, however, also subject to perspective distortions. To obtain an interest point operator that is more robust to perspective transformations, a natural approach is to devise a feature detector that is invariant to affine transformations. In practice, affine invariant interest points can be obtained by applying affine shape adaptation, where the shape of the smoothing kernel is iteratively warped to match the local image structure around the interest point, or equivalently a local image patch is iteratively warped while the shape of the smoothing kernel remains rotationally symmetric (Lindeberg 1993, 2008; Lindeberg and Garding 1997; Mikolajczyk and Schmid 2004).[12][13][14][15] Hence, besides the commonly used multi-scale Harris operator, affine shape adaptation can be applied to other corner detectors as listed in this article as well as to differential blob detectors such as the Laplacian/difference of Gaussian operator, the determinant of the Hessian[14] and the Hessian–Laplace operator.

The Wang and Brady corner detection algorithm

The Wang and Brady[24] detector considers the image to be a surface, and looks for places where there is large curvature along an image edge. In other words, the algorithm looks for places where the edge changes direction rapidly. The corner score,  , is given by:

 

where   is the unit vector perpendicular to the gradient, and   determines how edge-phobic the detector is. The authors also note that smoothing (Gaussian is suggested) is required to reduce noise.

Smoothing also causes displacement of corners, so the authors derive an expression for the displacement of a 90 degree corner, and apply this as a correction factor to the detected corners.

The SUSAN corner detector

SUSAN[25] is an acronym standing for smallest univalue segment assimilating nucleus. This method is the subject of a 1994 UK patent which is no longer in force.[26]

For feature detection, SUSAN places a circular mask over the pixel to be tested (the nucleus). The region of the mask is $M$, and a pixel in this mask is represented by $\vec{m} \in M$. The nucleus is at $\vec{m}_0$. Every pixel is compared to the nucleus using the comparison function:

$c(\vec{m}) = e^{-\left( \frac{I(\vec{m}) - I(\vec{m}_0)}{t} \right)^6}$

where $t$ is the brightness difference threshold,[27] $I$ is the brightness of the pixel and the power of the exponent has been determined empirically. This function has the appearance of a smoothed top-hat or rectangular function. The area of the SUSAN is given by:

$n(M) = \sum_{\vec{m} \in M} c(\vec{m})$

If $c$ is the rectangular function, then $n$ is the number of pixels in the mask which are within $t$ of the nucleus. The response of the SUSAN operator is given by:

$R(M) = \begin{cases} g - n(M) & \text{if } n(M) < g \\ 0 & \text{otherwise} \end{cases}$

where $g$ is named the 'geometric threshold'. In other words, the SUSAN operator only has a positive score if the area is small enough. The smallest SUSAN locally can be found using non-maximal suppression, and this is the complete SUSAN operator.
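
A minimal sketch of the SUSAN response at a single pixel, following the description above; the mask radius, the default thresholds and the border handling are illustrative assumptions.

```python
import numpy as np

def susan_response(image, y, x, radius=3, t=27.0, g=None):
    """SUSAN response at (y, x): g - n if the USAN area n is below the geometric threshold g."""
    img = image.astype(float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = ys ** 2 + xs ** 2 <= radius ** 2      # circular mask around the nucleus
    if g is None:
        g = 0.5 * np.sum(mask)                   # illustrative choice of geometric threshold
    nucleus = img[y, x]
    # Assumes (y, x) lies at least `radius` pixels away from the image border.
    patch = img[y + ys, x + xs]
    c = np.exp(-(((patch - nucleus) / t) ** 6))  # similarity of each pixel to the nucleus
    n = np.sum(c[mask])                          # area of the USAN
    return g - n if n < g else 0.0
```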

The value $t$ determines how similar points have to be to the nucleus before they are considered to be part of the univalue segment. The value of $g$ determines the minimum size of the univalue segment. If $g$ is large enough, then this becomes an edge detector.

For corner detection, two further steps are used. Firstly, the centroid of the SUSAN is found. A proper corner will have the centroid far from the nucleus. The second step insists that all points on the line from the nucleus through the centroid out to the edge of the mask are in the SUSAN.

The Trajkovic and Hedley corner detector

In a manner similar to SUSAN, this detector[28] directly tests whether a patch under a pixel is self-similar by examining nearby pixels. $\vec{c}$ is the pixel to be considered, and $\vec{p}$ is a point on a circle $P$ centered around $\vec{c}$. The point $\vec{p}'$ is the point opposite to $\vec{p}$ along the diameter.

The response function is defined as:

$r(\vec{c}) = \min_{\vec{p} \in P} \left( \left( I(\vec{p}) - I(\vec{c}) \right)^2 + \left( I(\vec{p}\,') - I(\vec{c}) \right)^2 \right)$

This will be large when there is no direction in which the centre pixel is similar to two nearby pixels along a diameter. $P$ is a discretised circle (a Bresenham circle), so interpolation is used for intermediate diameters to give a more isotropic response. Since any computation gives an upper bound on the $\min$, the horizontal and vertical directions are checked first to see if it is worth proceeding with the complete computation of $r$.

AST-based feature detectors

AST is an acronym standing for accelerated segment test. This test is a relaxed version of the SUSAN corner criterion. Instead of evaluating the circular disc, only the pixels in a Bresenham circle of radius $r$ around the candidate point are considered. If $n$ contiguous pixels are all brighter than the nucleus by at least $t$ or all darker than the nucleus by $t$, then the pixel under the nucleus is considered to be a feature. This test is reported to produce very stable features.[29] The choice of the order in which the pixels are tested is a so-called Twenty Questions problem. Building short decision trees for this problem results in the most computationally efficient feature detectors available.
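
A sketch of the segment test for a single candidate pixel on the 16-pixel Bresenham circle of radius 3 used by FAST; the threshold value is an illustrative assumption, and the candidate is assumed to lie at least 3 pixels from the image border.

```python
# Offsets (dy, dx) of the 16-pixel Bresenham circle of radius 3, listed in order around the circle.
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def segment_test(image, y, x, n=9, t=20):
    """Accelerated segment test: n contiguous circle pixels all brighter or all darker by t."""
    p = float(image[y, x])
    vals = [float(image[y + dy, x + dx]) for dy, dx in CIRCLE]
    brighter = [v > p + t for v in vals]
    darker = [v < p - t for v in vals]
    # Look for a run of n contiguous pixels; the list is doubled so runs may wrap around the circle.
    for states in (brighter, darker):
        run = 0
        for s in states + states:
            run = run + 1 if s else 0
            if run >= n:
                return True
    return False
```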

The first corner detection algorithm based on the AST is FAST (features from accelerated segment test).[29] Although $r$ can in principle take any value, FAST uses only a value of 3 (corresponding to a circle of 16 pixels circumference), and tests show that the best results are achieved with $n$ being 9. This value of $n$ is the lowest one at which edges are not detected. The order in which pixels are tested is determined by the ID3 algorithm from a training set of images. Confusingly, the name of the detector is somewhat similar to the name of the paper describing Trajkovic and Hedley's detector.

Automatic synthesis of detectors

Trujillo and Olague[30] introduced a method by which genetic programming is used to automatically synthesize image operators that can detect interest points. The terminal and function sets contain primitive operations that are common in many previously proposed man-made designs. Fitness measures the stability of each operator through the repeatability rate, and promotes a uniform dispersion of detected points across the image plane. The performance of the evolved operators has been confirmed experimentally using training and testing sequences of progressively transformed images. Hence, the proposed GP algorithm is considered to be human-competitive for the problem of interest point detection.

Spatio-temporal interest point detectors

The Harris operator has been extended to space-time by Laptev and Lindeberg.[31] Let   denote the spatio-temporal second-moment matrix defined by

 

Then, for a suitable choice of  , spatio-temporal interest points are detected from spatio-temporal extrema of the following spatio-temporal Harris measure:

 

The determinant of the Hessian operator has been extended to joint space-time by Willems et al [32] and Lindeberg,[33] leading to the following scale-normalized differential expression:

 

In the work by Willems et al,[32] a simpler expression corresponding to   and   was used. In Lindeberg,[33] it was shown that   and   implies better scale selection properties in the sense that the selected scale levels obtained from a spatio-temporal Gaussian blob with spatial extent   and temporal extent   will perfectly match the spatial extent and the temporal duration of the blob, with scale selection performed by detecting spatio-temporal scale-space extrema of the differential expression.

The Laplacian operator has been extended to spatio-temporal video data by Lindeberg,[33] leading to the following two spatio-temporal operators, which also constitute models of receptive fields of non-lagged vs. lagged neurons in the LGN:

 
 

For the first operator, scale selection properties call for using   and  , if we want this operator to assume its maximum value over spatio-temporal scales at a spatio-temporal scale level reflecting the spatial extent and the temporal duration of an onset Gaussian blob. For the second operator, scale selection properties call for using   and  , if we want this operator to assume its maximum value over spatio-temporal scales at a spatio-temporal scale level reflecting the spatial extent and the temporal duration of a blinking Gaussian blob.

Colour extensions of spatio-temporal interest point detectors have been investigated by Everts et al.[34]

Bibliography

  1. ^ Andrew Willis and Yunfeng Sui (2009). "An Algebraic Model for fast Corner Detection". 2009 IEEE 12th International Conference on Computer Vision. IEEE. pp. 2296–2302. doi:10.1109/ICCV.2009.5459443. ISBN 978-1-4244-4420-5.
  2. ^ Shapiro, Linda and George C. Stockman (2001). Computer Vision, p. 257. Prentice Books, Upper Saddle River. ISBN 0-13-030796-3.
  3. ^ H. Moravec (1980). "Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover". Tech Report CMU-RI-TR-3 Carnegie-Mellon University, Robotics Institute.
  4. ^ Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover, Hans Moravec, March 1980, Computer Science Department, Stanford University (Ph.D. thesis)
  5. ^ C. Harris and M. Stephens (1988). "A Combined Corner and Edge Detector" (PDF). Proceedings of the 4th Alvey Vision Conference. pp. 147–151. Archived from the original (PDF) on 2022-04-01. Retrieved 2010-12-30.
  6. ^ Javier Sánchez, Nelson Monzón and Agustín Salgado (2018). "An Analysis and Implementation of the Harris Corner Detector". Image Processing on Line. 8: 305–328. doi:10.5201/ipol.2018.229. hdl:10553/43499. Archived from the original on 2020-05-11. Retrieved 2020-05-06.
  7. ^ J. Shi and C. Tomasi (June 1994). "Good Features to Track". 9th IEEE Conference on Computer Vision and Pattern Recognition. Springer. pp. 593–600. CiteSeerX 10.1.1.36.2669. doi:10.1109/CVPR.1994.323794.
    C. Tomasi and T. Kanade (1991). Detection and Tracking of Point Features (Technical report). School of Computer Science, Carnegie Mellon University. CiteSeerX 10.1.1.45.5770. CMU-CS-91-132.
  8. ^ A. Noble (1989). Descriptions of Image Surfaces (Ph.D.). Department of Engineering Science, Oxford University. p. 45.
  9. ^ Förstner, W.; Gülch, E. (1987). "A Fast Operator for Detection and Precise Location of Distinct Points, Corners and Centres of Circular Features" (PDF). ISPRS.
  10. ^ a b c T. Lindeberg (1994). "Junction detection with automatic selection of detection scales and localization scales". Proc. 1st International Conference on Image Processing. Vol. I. Austin, Texas. pp. 924–928.
  11. ^ a b c d e f g h i j k Tony Lindeberg (1998). "Feature detection with automatic scale selection". International Journal of Computer Vision. Vol. 30, no. 2. pp. 77–116.
  12. ^ a b c d e f g h T. Lindeberg (1994). Scale-Space Theory in Computer Vision. Springer. ISBN 978-0-7923-9418-1.
  13. ^ a b c d T. Lindeberg and J. Garding "Shape-adapted smoothing in estimation of 3-D depth cues from affine distortions of local 2-D structure". Image and Vision Computing 15 (6): pp 415–434, 1997.
  14. ^ a b c d T. Lindeberg (2008). "Scale-Space". In Benjamin Wah (ed.). Wiley Encyclopedia of Computer Science and Engineering. Vol. IV. John Wiley and Sons. pp. 2495–2504. doi:10.1002/9780470050118.ecse609. ISBN 978-0-470-05011-8.
  15. ^ a b c K. Mikolajczyk and C. Schmid (2004). "Scale and affine invariant interest point detectors" (PDF). International Journal of Computer Vision. 60 (1): 63–86. doi:10.1023/B:VISI.0000027790.02288.f2. S2CID 1704741.
  16. ^ L. Kitchen and A. Rosenfeld (1982). "Gray-level corner detection". Pattern Recognition Letters. Vol. 1, no. 2. pp. 95–102.
  17. ^ J. J. Koenderink and W. Richards (1988). "Two-dimensional curvature operators". Journal of the Optical Society of America A. Vol. 5, no. 7. pp. 1136–1141.
  18. ^ L. Bretzner and T. Lindeberg (1998). "Feature tracking with automatic selection of spatial scales". Computer Vision and Image Understanding. Vol. 71. pp. 385–392.
  19. ^ T. Lindeberg and M.-X. Li (1997). "Segmentation and classification of edges using minimum description length approximation and complementary junction cues". Computer Vision and Image Understanding. Vol. 67, no. 1. pp. 88–98.
  20. ^ a b c D. Lowe (2004). "Distinctive Image Features from Scale-Invariant Keypoints". International Journal of Computer Vision. 60 (2): 91. CiteSeerX 10.1.1.73.2924. doi:10.1023/B:VISI.0000029664.99615.94. S2CID 221242327.
  21. ^ a b c d e f g h T. Lindeberg (2015). "Image matching using generalized scale-space interest points". Journal of Mathematical Imaging and Vision. 52 (1): 3–36.
  22. ^ a b c d T. Lindeberg (2013). "Scale selection properties of generalized scale-space interest point detectors". Journal of Mathematical Imaging and Vision. 46 (2): 177–210.
  23. ^ Lindeberg, T. (1998). "Edge detection and ridge detection with automatic scale selection". International Journal of Computer Vision. 30 (2): 117–154. doi:10.1023/A:1008097225773. S2CID 35328443.
  24. ^ H. Wang and M. Brady (1995). "Real-time corner detection algorithm for motion estimation". Image and Vision Computing. 13 (9): 695–703. doi:10.1016/0262-8856(95)98864-P.
  25. ^ S. M. Smith and J. M. Brady (May 1997). "SUSAN – a new approach to low level image processing". International Journal of Computer Vision. 23 (1): 45–78. doi:10.1023/A:1007963824710. S2CID 15033310.
    S. M. Smith and J. M. Brady (January 1997), "Method for digitally processing images to determine the position of edges and/or corners therein for guidance of unmanned vehicle". UK Patent 2272285, Proprietor: Secretary of State for Defence, UK.
  26. ^ GB patent 2272285, Smith, Stephen Mark, "Determining the position of edges and corners in images", published 1994-05-11, issued 1994-05-11, assigned to Secr Defence 
  27. ^ "The SUSAN Edge Detector in Detail".
  28. ^ M. Trajkovic and M. Hedley (1998). "Fast corner detection". Image and Vision Computing. 16 (2): 75–87. doi:10.1016/S0262-8856(97)00056-5.
  29. ^ a b E. Rosten and T. Drummond (May 2006). "Machine learning for high-speed corner detection". European Conference on Computer Vision.
  30. ^ Leonardo Trujillo and Gustavo Olague (2008). "Automated design of image operators that detect interest points" (PDF). Evolutionary Computation. 16 (4): 483–507. doi:10.1162/evco.2008.16.4.483. PMID 19053496. S2CID 17704640. Archived from the original (PDF) on 2011-07-17.
  31. ^ Ivan Laptev and Tony Lindeberg (2003). "Space-time interest points". International Conference on Computer Vision. IEEE. pp. 432–439.
  32. ^ a b Geert Willems, Tinne Tuytelaars and Luc van Gool (2008). "An efficient dense and scale-invariant spatio-temporal interest point detector". European Conference on Computer Vision. Springer Lecture Notes in Computer Science. Vol. 5303. pp. 650–663. doi:10.1007/978-3-540-88688-4_48.
  33. ^ a b c Tony Lindeberg (2018). "Spatio-temporal scale selection in video data". Journal of Mathematical Imaging and Vision. 60 (4): 525–562. doi:10.1007/s10851-017-0766-9. S2CID 254649837.
  34. ^ I. Everts, J. van Gemert and T. Gevers (2014). "Evaluation of color spatio-temporal interest points for human action recognition". IEEE Transactions on Image Processing. 23 (4): 1569–1589. doi:10.1109/TIP.2014.2302677. PMID 24577192. S2CID 1999196.

Reference implementations

This section provides external links to reference implementations of some of the detectors described above. These reference implementations are provided by the authors of the paper in which the detector is first described. These may contain details not present or explicit in the papers describing the features.

  • DoG detection (as part of the SIFT system), Windows and x86 Linux executables
  • Harris-Laplace, static Linux executables. Also contains DoG and LoG detectors and affine adaptation for all detectors included.
  • FAST detector, C, C++, MATLAB source code and executables for various operating systems and architectures.
  • lip-vireo (archived 2017-05-11 at the Wayback Machine), [LoG, DoG, Harris-Laplacian, Hessian and Hessian-Laplacian], [SIFT, flip invariant SIFT, PCA-SIFT, PSIFT, Steerable Filters, SPIN], [Linux, Windows and SunOS] executables.
  • SUSAN Low Level Image Processing, C source code.
  • Online Implementation of the Harris Corner Detector - IPOL

corner, detection, approach, used, within, computer, vision, systems, extract, certain, kinds, features, infer, contents, image, frequently, used, motion, detection, image, registration, video, tracking, image, mosaicing, panorama, stitching, reconstruction, o. Corner detection is an approach used within computer vision systems to extract certain kinds of features and infer the contents of an image Corner detection is frequently used in motion detection image registration video tracking image mosaicing panorama stitching 3D reconstruction and object recognition Corner detection overlaps with the topic of interest point detection Output of a typical corner detection algorithm Egomotion estimation using corner detection Contents 1 Formalization 2 Moravec corner detection algorithm 3 The Harris amp Stephens Shi Tomasi corner detection algorithms 4 The Forstner corner detector 5 The multi scale Harris operator 6 The level curve curvature approach 7 Laplacian of Gaussian differences of Gaussians and determinant of the Hessian scale space interest points 8 Scale space interest points based on the Lindeberg Hessian feature strength measures 9 Affine adapted interest point operators 10 The Wang and Brady corner detection algorithm 11 The SUSAN corner detector 12 The Trajkovic and Hedley corner detector 13 AST based feature detectors 14 Automatic synthesis of detectors 15 Spatio temporal interest point detectors 16 Bibliography 17 Reference implementations 18 See also 19 External linksFormalization editA corner can be defined as the intersection of two edges A corner can also be defined as a point for which there are two dominant and different edge directions in a local neighbourhood of the point An interest point is a point in an image which has a well defined position and can be robustly detected This means that an interest point can be a corner but it can also be for example an isolated point of local intensity maximum or minimum line endings or a point on a curve where the curvature is locally maximal In practice most so called corner detection methods detect interest points in general and in fact the term corner and interest point are used more or less interchangeably through the literature 1 As a consequence if only corners are to be detected it is necessary to do a local analysis of detected interest points to determine which of these are real corners Examples of edge detection that can be used with post processing to detect corners are the Kirsch operator and the Frei Chen masking set 2 Corner interest point and feature are used interchangeably in literature confusing the issue Specifically there are several blob detectors that can be referred to as interest point operators but which are sometimes erroneously referred to as corner detectors Moreover there exists a notion of ridge detection to capture the presence of elongated objects Corner detectors are not usually very robust and often require large redundancies introduced to prevent the effect of individual errors from dominating the recognition task One determination of the quality of a corner detector is its ability to detect the same corner in multiple similar images under conditions of different lighting translation rotation and other transforms A simple approach to corner detection in images is using correlation but this gets very computationally expensive and suboptimal An alternative approach used frequently is based on a method proposed by Harris and Stephens below which in turn is an improvement of a method by Moravec Moravec 
corner detection algorithm editThis is one of the earliest corner detection algorithms and defines a corner to be a point with low self similarity 3 The algorithm tests each pixel in the image to see if a corner is present by considering how similar a patch centered on the pixel is to nearby largely overlapping patches The similarity is measured by taking the sum of squared differences SSD between the corresponding pixels of two patches A lower number indicates more similarity If the pixel is in a region of uniform intensity then the nearby patches will look similar If the pixel is on an edge then nearby patches in a direction perpendicular to the edge will look quite different but nearby patches in a direction parallel to the edge will result in only a small change If the pixel is on a feature with variation in all directions then none of the nearby patches will look similar The corner strength is defined as the smallest SSD between the patch and its neighbours horizontal vertical and on the two diagonals The reason is that if this number is high then the variation along all shifts is either equal to it or larger than it so capturing that all nearby patches look different If the corner strength number is computed for all locations that it is locally maximal for one location indicates that a feature of interest is present in it As pointed out by Moravec one of the main problems with this operator is that it is not isotropic if an edge is present that is not in the direction of the neighbours horizontal vertical or diagonal then the smallest SSD will be large and the edge will be incorrectly chosen as an interest point 4 The Harris amp Stephens Shi Tomasi corner detection algorithms editFurther information Harris Corner Detector Harris and Stephens 5 improved upon Moravec s corner detector by considering the differential of the corner score with respect to direction directly instead of using shifted patches This corner score is often referred to as autocorrelation since the term is used in the paper in which this detector is described However the mathematics in the paper clearly indicate that the sum of squared differences is used Without loss of generality we will assume a grayscale 2 dimensional image is used Let this image be given by I displaystyle I nbsp Consider taking an image patch over the area u v displaystyle u v nbsp and shifting it by x y displaystyle x y nbsp The weighted sum of squared differences SSD between these two patches denoted S displaystyle S nbsp is given by S x y u v w u v I u x v y I u v 2 displaystyle S x y sum u sum v w u v left I u x v y I u v right 2 nbsp I u x v y displaystyle I u x v y nbsp can be approximated by a Taylor expansion Let I x displaystyle I x nbsp and I y displaystyle I y nbsp be the partial derivatives of I displaystyle I nbsp such that I u x v y I u v I x u v x I y u v y displaystyle I u x v y approx I u v I x u v x I y u v y nbsp This produces the approximation S x y u v w u v I x u v x I y u v y 2 displaystyle S x y approx sum u sum v w u v left I x u v x I y u v y right 2 nbsp which can be written in matrix form S x y x y A x y displaystyle S x y approx begin bmatrix x amp y end bmatrix A begin bmatrix x y end bmatrix nbsp where A is the structure tensor A u v w u v I x u v 2 I x u v I y u v I x u v I y u v I y u v 2 I x 2 I x I y I x I y I y 2 displaystyle A sum u sum v w u v begin bmatrix I x u v 2 amp I x u v I y u v I x u v I y u v amp I y u v 2 end bmatrix begin bmatrix langle I x 2 rangle amp langle I x I y rangle langle I x I y 
rangle amp langle I y 2 rangle end bmatrix nbsp In words we find the covariance of the partial derivative of the image intensity I displaystyle I nbsp with respect to the x displaystyle x nbsp and y displaystyle y nbsp axes Angle brackets denote averaging i e summation over u v displaystyle u v nbsp w u v displaystyle w u v nbsp denotes the type of window that slides over the image If a Box filter is used the response will be anisotropic but if a Gaussian is used then the response will be isotropic A corner or in general an interest point is characterized by a large variation of S displaystyle S nbsp in all directions of the vector x y displaystyle begin bmatrix x amp y end bmatrix nbsp By analyzing the eigenvalues of A displaystyle A nbsp this characterization can be expressed in the following way A displaystyle A nbsp should have two large eigenvalues for an interest point Based on the magnitudes of the eigenvalues the following inferences can be made based on this argument If l 1 0 displaystyle lambda 1 approx 0 nbsp and l 2 0 displaystyle lambda 2 approx 0 nbsp then this pixel x y displaystyle x y nbsp has no features of interest If l 1 0 displaystyle lambda 1 approx 0 nbsp and l 2 displaystyle lambda 2 nbsp has some large positive value then an edge is found If l 1 displaystyle lambda 1 nbsp and l 2 displaystyle lambda 2 nbsp have large positive values then a corner is found Harris and Stephens note that exact computation of the eigenvalues is computationally expensive since it requires the computation of a square root and instead suggest the following function M c displaystyle M c nbsp where k displaystyle kappa nbsp is a tunable sensitivity parameter M c l 1 l 2 k l 1 l 2 2 det A k trace 2 A displaystyle M c lambda 1 lambda 2 kappa left lambda 1 lambda 2 right 2 det A kappa operatorname trace 2 A nbsp Therefore the algorithm 6 does not have to actually compute the eigenvalue decomposition of the matrix A displaystyle A nbsp and instead it is sufficient to evaluate the determinant and trace of A displaystyle A nbsp to find corners or rather interest points in general The Shi Tomasi 7 corner detector directly computes min l 1 l 2 displaystyle min lambda 1 lambda 2 nbsp because under certain assumptions the corners are more stable for tracking Note that this method is also sometimes referred to as the Kanade Tomasi corner detector The value of k displaystyle kappa nbsp has to be determined empirically and in the literature values in the range 0 04 0 15 have been reported as feasible One can avoid setting the parameter k displaystyle kappa nbsp by using Noble s 8 corner measure M c displaystyle M c nbsp which amounts to the harmonic mean of the eigenvalues M c 2 det A trace A ϵ displaystyle M c 2 frac det A operatorname trace A epsilon nbsp ϵ displaystyle epsilon nbsp being a small positive constant If A displaystyle A nbsp can be interpreted as the precision matrix for the corner position the covariance matrix for the corner position is A 1 displaystyle A 1 nbsp i e 1 I x 2 I y 2 I x I y 2 I y 2 I x I y I x I y I x 2 displaystyle frac 1 langle I x 2 rangle langle I y 2 rangle langle I x I y rangle 2 begin bmatrix langle I y 2 rangle amp langle I x I y rangle langle I x I y rangle amp langle I x 2 rangle end bmatrix nbsp The sum of the eigenvalues of A 1 displaystyle A 1 nbsp which in that case can be interpreted as a generalized variance or a total uncertainty of the corner position is related to Noble s corner measure M c displaystyle M c nbsp by the following equation l 1 A 1 l 2 A 1 
trace A det A 2 M c displaystyle lambda 1 A 1 lambda 2 A 1 frac operatorname trace A det A approx frac 2 M c nbsp The Forstner corner detector edit nbsp Corner detection using the Forstner Algorithm In some cases one may wish to compute the location of a corner with subpixel accuracy To achieve an approximate solution the Forstner 9 algorithm solves for the point closest to all the tangent lines of the corner in a given window and is a least square solution The algorithm relies on the fact that for an ideal corner tangent lines cross at a single point The equation of a tangent line T x x displaystyle T mathbf x mathbf x nbsp at pixel x displaystyle mathbf x nbsp is given by T x x I x x x 0 displaystyle T mathbf x mathbf x nabla I mathbf x top mathbf x mathbf x 0 nbsp where I x I x I y displaystyle nabla I mathbf x begin bmatrix I mathbf x amp I mathbf y end bmatrix top nbsp is the gradient vector of the image I displaystyle I nbsp at x displaystyle mathbf x nbsp The point x 0 displaystyle mathbf x 0 nbsp closest to all the tangent lines in the window N displaystyle N nbsp is x 0 argmin x R 2 1 x N T x x 2 d x displaystyle mathbf x 0 underset mathbf x in mathbb R 2 times 1 operatorname argmin int mathbf x in N T mathbf x mathbf x 2 d mathbf x nbsp The distance from x 0 displaystyle mathbf x 0 nbsp to the tangent lines T x displaystyle T mathbf x nbsp is weighted by the gradient magnitude thus giving more importance to tangents passing through pixels with strong gradients Solving for x 0 displaystyle mathbf x 0 nbsp x 0 argmin x R 2 1 x N I x x x 2 d x argmin x R 2 1 x N x x I x I x x x d x argmin x R 2 1 x A x 2 x b c displaystyle begin aligned mathbf x 0 amp underset mathbf x in mathbb R 2 times 1 operatorname argmin int mathbf x in N left nabla I left mathbf x right top left mathbf x mathbf x right right 2 d mathbf x amp underset mathbf x in mathbb R 2 times 1 operatorname argmin int mathbf x in N mathbf x mathbf x top nabla I mathbf x nabla I mathbf x top mathbf x mathbf x d mathbf x amp underset mathbf x in mathbb R 2 times 1 operatorname argmin left mathbf x top A mathbf x 2 mathbf x top mathbf b c right end aligned nbsp A R 2 2 b R 2 1 c R displaystyle A in mathbb R 2 times 2 textbf b in mathbb R 2 times 1 c in mathbb R nbsp are defined as A I x I x d x b I x I x x d x c x I x I x x d x displaystyle begin aligned A amp int nabla I mathbf x nabla I mathbf x top d mathbf x mathbf b amp int nabla I mathbf x nabla I mathbf x top mathbf x d mathbf x c amp int mathbf x top nabla I mathbf x nabla I mathbf x top mathbf x d mathbf x end aligned nbsp Minimizing this equation can be done by differentiating with respect to x displaystyle x nbsp and setting it equal to 0 2 A x 2 b 0 A x b displaystyle 2A mathbf x 2 mathbf b 0 Rightarrow A mathbf x mathbf b nbsp Note that A R 2 2 displaystyle A in mathbb R 2 times 2 nbsp is the structure tensor For the equation to have a solution A displaystyle A nbsp must be invertible which implies that A displaystyle A nbsp must be full rank rank 2 Thus the solution x 0 A 1 b displaystyle x 0 A 1 mathbf b nbsp only exists where an actual corner exists in the window N displaystyle N nbsp A methodology for performing automatic scale selection for this corner localization method has been presented by Lindeberg 10 11 by minimizing the normalized residual d min c b T A 1 b trace A displaystyle tilde d min frac c b T A 1 b operatorname trace A nbsp over scales Thereby the method has the ability to automatically adapt the scale levels for computing the image gradients 
to the noise level in the image data by choosing coarser scale levels for noisy image data and finer scale levels for near ideal corner like structures Notes c displaystyle c nbsp can be viewed as a residual in the least square solution computation if c 0 displaystyle c 0 nbsp then there was no error this algorithm can be modified to compute centers of circular features by changing tangent lines to normal lines The multi scale Harris operator editThe computation of the second moment matrix sometimes also referred to as the structure tensor A displaystyle A nbsp in the Harris operator requires the computation of image derivatives I x I y displaystyle I x I y nbsp in the image domain as well as the summation of non linear combinations of these derivatives over local neighbourhoods Since the computation of derivatives usually involves a stage of scale space smoothing an operational definition of the Harris operator requires two scale parameters i a local scale for smoothing prior to the computation of image derivatives and ii an integration scale for accumulating the non linear operations on derivative operators into an integrated image descriptor With I displaystyle I nbsp denoting the original image intensity let L displaystyle L nbsp denote the scale space representation of I displaystyle I nbsp obtained by convolution with a Gaussian kernel g x y t 1 2 p t e x 2 y 2 2 t displaystyle g x y t frac 1 2 pi t e left x 2 y 2 right 2t nbsp with local scale parameter t displaystyle t nbsp L x y t g x y t I x y displaystyle L x y t g x y t I x y nbsp and let L x x L displaystyle L x partial x L nbsp and L y y L displaystyle L y partial y L nbsp denote the partial derivatives of L displaystyle L nbsp Moreover introduce a Gaussian window function g x y s displaystyle g x y s nbsp with integration scale parameter s displaystyle s nbsp Then the multi scale second moment matrix 12 13 14 can be defined as m x y t s 3 h L x 2 x 3 y h t L x x 3 y h t L y x 3 y h t L x x 3 y h t L y x 3 y h t L y 2 x 3 y h t g 3 h s d 3 d h displaystyle mu x y t s int xi infty infty int eta infty infty begin bmatrix L x 2 x xi y eta t amp L x x xi y eta t L y x xi y eta t L x x xi y eta t L y x xi y eta t amp L y 2 x xi y eta t end bmatrix g xi eta s d xi d eta nbsp Then we can compute eigenvalues of m displaystyle mu nbsp in a similar way as the eigenvalues of A displaystyle A nbsp and define the multi scale Harris corner measure as M c x y t s det m x y t s k trace 2 m x y t s displaystyle M c x y t s det mu x y t s kappa operatorname trace 2 mu x y t s nbsp Concerning the choice of the local scale parameter t displaystyle t nbsp and the integration scale parameter s displaystyle s nbsp these scale parameters are usually coupled by a relative integration scale parameter g displaystyle gamma nbsp such that s g 2 t displaystyle s gamma 2 t nbsp where g displaystyle gamma nbsp is usually chosen in the interval 1 2 displaystyle 1 2 nbsp 12 13 Thus we can compute the multi scale Harris corner measure M c x y t g 2 t displaystyle M c x y t gamma 2 t nbsp at any scale t displaystyle t nbsp in scale space to obtain a multi scale corner detector which responds to corner structures of varying sizes in the image domain In practice this multi scale corner detector is often complemented by a scale selection step where the scale normalized Laplacian operator 11 12 n o r m 2 L x y t t 2 L x y t t L x x x y t L y y x y t displaystyle nabla mathrm norm 2 L x y t t nabla 2 L x y t t L xx x y t L yy x y t nbsp is computed at every scale in 
scale space and scale adapted corner points with automatic scale selection the Harris Laplace operator are computed from the points that are simultaneously 15 spatial maxima of the multi scale corner measure M c x y t g 2 t displaystyle M c x y t gamma 2 t nbsp x y t argmaxlocal x y M c x y t g 2 t displaystyle hat x hat y t operatorname argmaxlocal x y M c left x y t gamma 2 t right nbsp local maxima or minima over scales of the scale normalized Laplacian operator 11 n o r m 2 x y t displaystyle nabla mathrm norm 2 x y t nbsp t argmaxminlocal t n o r m 2 L x y t displaystyle hat t operatorname argmaxminlocal t nabla mathrm norm 2 L hat x hat y t nbsp The level curve curvature approach editAn earlier approach to corner detection is to detect points where the curvature of level curves and the gradient magnitude are simultaneously high 16 17 A differential way to detect such points is by computing the rescaled level curve curvature the product of the level curve curvature and the gradient magnitude raised to the power of three k x y t L x 2 L y y L y 2 L x x 2 L x L y L x y displaystyle tilde kappa x y t L x 2 L yy L y 2 L xx 2L x L y L xy nbsp and to detect positive maxima and negative minima of this differential expression at some scale t displaystyle t nbsp in the scale space representation L displaystyle L nbsp of the original image 10 11 A main problem when computing the rescaled level curve curvature entity at a single scale however is that it may be sensitive to noise and to the choice of the scale level A better method is to compute the g displaystyle gamma nbsp normalized rescaled level curve curvature k n o r m x y t t 2 g L x 2 L y y L y 2 L x x 2 L x L y L x y displaystyle tilde kappa mathrm norm x y t t 2 gamma L x 2 L yy L y 2 L xx 2L x L y L xy nbsp with g 7 8 displaystyle gamma 7 8 nbsp and to detect signed scale space extrema of this expression that are points and scales that are positive maxima and negative minima with respect to both space and scale x y t argminmaxlocal x y t k n o r m x y t displaystyle hat x hat y hat t operatorname argminmaxlocal x y t tilde kappa mathrm norm x y t nbsp in combination with a complementary localization step to handle the increase in localization error at coarser scales 10 11 12 In this way larger scale values will be associated with rounded corners of large spatial extent while smaller scale values will be associated with sharp corners with small spatial extent This approach is the first corner detector with automatic scale selection prior to the Harris Laplace operator above and has been used for tracking corners under large scale variations in the image domain 18 and for matching corner responses to edges to compute structural image features for geon based object recognition 19 Laplacian of Gaussian differences of Gaussians and determinant of the Hessian scale space interest points editLoG 11 12 15 is an acronym standing for Laplacian of Gaussian DoG 20 is an acronym standing for difference of Gaussians DoG is an approximation of LoG and DoH is an acronym standing for determinant of the Hessian 11 These scale invariant interest points are all extracted by detecting scale space extrema of scale normalized differential expressions i e points in scale space where the corresponding scale normalized differential expressions assume local extrema with respect to both space and scale 11 x y t argminmaxlocal x y t D n o r m L x y t displaystyle hat x hat y hat t operatorname argminmaxlocal x y t D mathrm norm L x y t nbsp where D n o r m L 
Laplacian of Gaussian, differences of Gaussians and determinant of the Hessian scale-space interest points

LoG[11][12][15] is an acronym standing for Laplacian of Gaussian, DoG[20] is an acronym standing for difference of Gaussians (DoG is an approximation of LoG), and DoH is an acronym standing for determinant of the Hessian.[11] These scale-invariant interest points are all extracted by detecting scale-space extrema of scale-normalized differential expressions, i.e., points in scale space where the corresponding scale-normalized differential expressions assume local extrema with respect to both space and scale,[11]

$(\hat{x}, \hat{y}; \hat{t}) = \operatorname{argminmaxlocal}_{(x, y; t)} D_{\mathrm{norm}} L(x, y; t),$

where D_norm L denotes the appropriate scale-normalized differential entity (defined below). These detectors are more completely described in blob detection. The scale-normalized Laplacian of the Gaussian and difference-of-Gaussians features (Lindeberg 1994, 1998; Lowe 2004)[11][12][20]

$\nabla^2_{\mathrm{norm}} L(x, y; t) = t \left( L_{xx} + L_{yy} \right) \approx \frac{t \left( L(x, y; t + \Delta t) - L(x, y; t) \right)}{\Delta t}$

do not necessarily make highly selective features, since these operators may also lead to responses near edges. To improve the corner detection ability of the difference-of-Gaussians detector, the feature detector used in the SIFT[20] system therefore uses an additional post-processing stage, where the eigenvalues of the Hessian of the image at the detection scale are examined in a similar way as in the Harris operator. If the ratio of the eigenvalues is too high, then the local image is regarded as too edge-like and the feature is rejected. Lindeberg's Laplacian of the Gaussian feature detector can also be defined to comprise complementary thresholding on a complementary differential invariant to suppress responses near edges.[21]

The scale-normalized determinant of the Hessian operator (Lindeberg 1994, 1998)[11][12]

$\det H_{\mathrm{norm}} L = t^2 \left( L_{xx} L_{yy} - L_{xy}^2 \right)$

is, on the other hand, highly selective to well-localized image features and only responds when there are significant grey-level variations in two image directions,[11][14] and is in this and other respects a better interest point detector than the Laplacian of the Gaussian. The determinant of the Hessian is an affine-covariant differential expression and has better scale selection properties under affine image transformations than the Laplacian operator (Lindeberg 2013, 2015).[21][22] Experimentally this implies that determinant-of-the-Hessian interest points have better repeatability properties under local image deformation than Laplacian interest points, which in turn leads to better performance of image-based matching in terms of higher efficiency scores and lower 1−precision scores.[21]

The scale selection properties, affine transformation properties and experimental properties of these and other scale-space interest point detectors are analyzed in detail in Lindeberg (2013, 2015).[21][22]
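For concreteness, the two scale-normalized operators just discussed can be evaluated at a given scale as in the sketch below (again an illustrative example under the assumptions of the earlier sketches; interest points would then be detected as extrema of these responses over both space and scale):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_normalized_log_and_doh(image, t):
    """Scale-normalized Laplacian t*(Lxx + Lyy) and determinant of the
    Hessian t^2*(Lxx*Lyy - Lxy^2), both evaluated at scale t."""
    image = image.astype(float)
    sigma = np.sqrt(t)

    Lxx = gaussian_filter(image, sigma, order=(0, 2))
    Lyy = gaussian_filter(image, sigma, order=(2, 0))
    Lxy = gaussian_filter(image, sigma, order=(1, 1))

    log_norm = t * (Lxx + Lyy)                  # scale-normalized Laplacian
    doh_norm = t ** 2 * (Lxx * Lyy - Lxy ** 2)  # scale-normalized det of Hessian
    return log_norm, doh_norm
```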
Scale-space interest points based on the Lindeberg Hessian feature strength measures

Inspired by the structurally similar properties of the Hessian matrix Hf of a function f and the second moment matrix (structure tensor) μ, as can e.g. be manifested in terms of their similar transformation properties under affine image deformations,[13][21]

$Hf \mapsto A^{-\mathrm{T}} \, (Hf) \, A^{-1}, \qquad \mu \mapsto A^{-\mathrm{T}} \mu \, A^{-1},$

Lindeberg (2013, 2015)[21][22] proposed to define four feature strength measures from the Hessian matrix in related ways as the Harris and Shi-and-Tomasi operators are defined from the structure tensor (second moment matrix). Specifically, he defined the following unsigned and signed Hessian feature strength measures:

the unsigned Hessian feature strength measure I:
$D_{1,\mathrm{norm}} L = \begin{cases} t^2 \left( \det HL - k \operatorname{trace}^2 HL \right) & \text{if } \det HL - k \operatorname{trace}^2 HL > 0 \\ 0 & \text{otherwise} \end{cases}$

the signed Hessian feature strength measure I:
$\tilde{D}_{1,\mathrm{norm}} L = \begin{cases} t^2 \left( \det HL - k \operatorname{trace}^2 HL \right) & \text{if } \det HL - k \operatorname{trace}^2 HL > 0 \\ t^2 \left( \det HL + k \operatorname{trace}^2 HL \right) & \text{if } \det HL + k \operatorname{trace}^2 HL < 0 \\ 0 & \text{otherwise} \end{cases}$

the unsigned Hessian feature strength measure II:
$D_{2,\mathrm{norm}} L = t \, \min\left( |\lambda_1(HL)|, |\lambda_2(HL)| \right)$

the signed Hessian feature strength measure II:
$\tilde{D}_{2,\mathrm{norm}} L = \begin{cases} t \, \lambda_1(HL) & \text{if } |\lambda_1(HL)| < |\lambda_2(HL)| \\ t \, \lambda_2(HL) & \text{if } |\lambda_2(HL)| < |\lambda_1(HL)| \\ t \left( \lambda_1(HL) + \lambda_2(HL) \right)/2 & \text{otherwise} \end{cases}$

where trace HL = L_xx + L_yy and det HL = L_xx L_yy − L_xy² denote the trace and the determinant of the Hessian matrix HL of the scale-space representation L at any scale t, whereas

$\lambda_1(HL) = L_{pp} = \frac{1}{2} \left( L_{xx} + L_{yy} - \sqrt{(L_{xx} - L_{yy})^2 + 4 L_{xy}^2} \right)$

$\lambda_2(HL) = L_{qq} = \frac{1}{2} \left( L_{xx} + L_{yy} + \sqrt{(L_{xx} - L_{yy})^2 + 4 L_{xy}^2} \right)$

denote the eigenvalues of the Hessian matrix.[23]

The unsigned Hessian feature strength measure $D_{1,\mathrm{norm}}L$ responds to local extrema by positive values and is not sensitive to saddle points, whereas the signed Hessian feature strength measure $\tilde{D}_{1,\mathrm{norm}}L$ does additionally respond to saddle points by negative values. The unsigned Hessian feature strength measure $D_{2,\mathrm{norm}}L$ is insensitive to the local polarity of the signal, whereas the signed Hessian feature strength measure $\tilde{D}_{2,\mathrm{norm}}L$ responds to the local polarity of the signal by the sign of its output.

In Lindeberg (2015),[21] these four differential entities were combined with local scale selection based on either scale-space extrema detection

$(\hat{x}, \hat{y}; \hat{t}) = \operatorname{argminmaxlocal}_{(x, y; t)} D_{\mathrm{norm}} L(x, y; t)$

or scale linking. Furthermore, the unsigned and signed Hessian feature strength measures $D_{2,\mathrm{norm}}L$ and $\tilde{D}_{2,\mathrm{norm}}L$ were combined with complementary thresholding on $D_{1,\mathrm{norm}}L > 0$.
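The definitions above translate directly into code from the second-order Gaussian derivatives. The sketch below evaluates the two unsigned measures $D_{1,\mathrm{norm}}L$ and $D_{2,\mathrm{norm}}L$ at one scale (a hedged illustration; the value of k and the use of scipy.ndimage are assumptions of this sketch rather than choices prescribed by the cited papers):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_feature_strength(image, t, k=0.06):
    """Unsigned Hessian feature strength measures D1,norm and D2,norm at scale t."""
    image = image.astype(float)
    sigma = np.sqrt(t)

    Lxx = gaussian_filter(image, sigma, order=(0, 2))
    Lyy = gaussian_filter(image, sigma, order=(2, 0))
    Lxy = gaussian_filter(image, sigma, order=(1, 1))

    det_H = Lxx * Lyy - Lxy ** 2
    trace_H = Lxx + Lyy

    # D1,norm: t^2 (det HL - k trace^2 HL) where positive, zero otherwise.
    d1 = t ** 2 * (det_H - k * trace_H ** 2)
    D1_norm = np.where(d1 > 0.0, d1, 0.0)

    # Eigenvalues of the Hessian, lambda_1 <= lambda_2.
    disc = np.sqrt((Lxx - Lyy) ** 2 + 4.0 * Lxy ** 2)
    lambda1 = 0.5 * (Lxx + Lyy - disc)
    lambda2 = 0.5 * (Lxx + Lyy + disc)

    # D2,norm: t * min(|lambda_1|, |lambda_2|).
    D2_norm = t * np.minimum(np.abs(lambda1), np.abs(lambda2))
    return D1_norm, D2_norm
```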
By experiments on image matching under scaling transformations on a poster dataset with 12 posters, with multi-view matching over scaling transformations up to a scaling factor of 6 and viewing direction variations up to a slant angle of 45 degrees, and with local image descriptors defined from reformulations of the pure image descriptors in the SIFT and SURF operators to image measurements in terms of Gaussian derivative operators (Gauss-SIFT and Gauss-SURF) instead of original SIFT as defined from an image pyramid or original SURF as defined from Haar wavelets, it was shown that scale-space interest point detection based on the unsigned Hessian feature strength measure $D_{1,\mathrm{norm}}L$ allowed for the best performance, and better performance than scale-space interest points obtained from the determinant of the Hessian $\det H_{\mathrm{norm}} L = t^2 (L_{xx} L_{yy} - L_{xy}^2)$. Both the unsigned Hessian feature strength measure $D_{1,\mathrm{norm}}L$, the signed Hessian feature strength measure $\tilde{D}_{1,\mathrm{norm}}L$ and the determinant of the Hessian $\det H_{\mathrm{norm}} L$ allowed for better performance than the Laplacian of the Gaussian $\nabla^2_{\mathrm{norm}} L = t (L_{xx} + L_{yy})$. When combined with scale linking and complementary thresholding on $D_{1,\mathrm{norm}}L > 0$, the signed Hessian feature strength measure $\tilde{D}_{2,\mathrm{norm}}L$ did additionally allow for better performance than the Laplacian of the Gaussian.

Furthermore, it was shown that all these differential scale-space interest point detectors defined from the Hessian matrix allow for the detection of a larger number of interest points and better matching performance compared to the Harris and Shi-and-Tomasi operators defined from the structure tensor (second moment matrix).

A theoretical analysis of the scale selection properties of these four Hessian feature strength measures and other differential entities for detecting scale-space interest points, including the Laplacian of the Gaussian and the determinant of the Hessian, is given in Lindeberg (2013),[22] and an analysis of their affine transformation properties as well as experimental properties in Lindeberg (2015).[21]

Affine-adapted interest point operators

The interest points obtained from the multi-scale Harris operator with automatic scale selection are invariant to translations, rotations and uniform rescalings in the spatial domain. The images that constitute the input to a computer vision system are, however, also subject to perspective distortions. To obtain an interest point operator that is more robust to perspective transformations, a natural approach is to devise a feature detector that is invariant to affine transformations. In practice, affine-invariant interest points can be obtained by applying affine shape adaptation, where the shape of the smoothing kernel is iteratively warped to match the local image structure around the interest point, or equivalently a local image patch is iteratively warped while the shape of the smoothing kernel remains rotationally symmetric (Lindeberg 1993, 2008; Lindeberg and Garding 1997; Mikolajczyk and Schmid 2004).[12][13][14][15] Hence, besides the commonly used multi-scale Harris operator, affine shape adaptation can be applied to other corner detectors as listed in this article, as well as to differential blob detectors such as the Laplacian/difference-of-Gaussian operator, the determinant of the Hessian[14] and the Hessian–Laplace operator.

The Wang and Brady corner detection algorithm

The Wang and Brady[24] detector considers the image to be a surface, and looks for places where there is large curvature along an image edge. In other words, the algorithm looks for places where the edge changes direction rapidly. The corner score, C, is given by

$C = \left( \frac{\partial^2 I}{\partial \mathbf{t}^2} \right)^2 - c \, |\nabla I|^2,$

where $\mathbf{t}$ is the unit vector perpendicular to the gradient, and c determines how edge-phobic the detector is. The authors also note that smoothing (Gaussian is suggested) is required to reduce noise. Smoothing also causes displacement of corners, so the authors derive an expression for the displacement of a 90-degree corner, and apply this as a correction factor to the detected corners.
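Expressed in terms of image derivatives, the second derivative along the direction perpendicular to the gradient is $(I_y^2 I_{xx} - 2 I_x I_y I_{xy} + I_x^2 I_{yy}) / (I_x^2 + I_y^2)$, so the score can be sketched as follows (a hedged illustration; the Gaussian-derivative smoothing, the value of c and the small regularization constant are assumptions of this sketch rather than the authors' exact formulation):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def wang_brady_score(image, sigma=1.0, c=0.05, eps=1e-12):
    """Wang-Brady-style corner score C = (d^2 I / dt^2)^2 - c * |grad I|^2,
    where t is the direction perpendicular to the image gradient."""
    image = image.astype(float)

    Ix  = gaussian_filter(image, sigma, order=(0, 1))
    Iy  = gaussian_filter(image, sigma, order=(1, 0))
    Ixx = gaussian_filter(image, sigma, order=(0, 2))
    Iyy = gaussian_filter(image, sigma, order=(2, 0))
    Ixy = gaussian_filter(image, sigma, order=(1, 1))

    grad_sq = Ix ** 2 + Iy ** 2
    # Second derivative along the tangent direction (perpendicular to the gradient).
    d2I_dt2 = (Iy ** 2 * Ixx - 2.0 * Ix * Iy * Ixy + Ix ** 2 * Iyy) / (grad_sq + eps)

    return d2I_dt2 ** 2 - c * grad_sq
```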
The SUSAN corner detector

SUSAN[25] is an acronym standing for smallest univalue segment assimilating nucleus. This method is the subject of a 1994 UK patent which is no longer in force.[26]

For feature detection, SUSAN places a circular mask over the pixel to be tested (the nucleus). The region of the mask is M, and a pixel in this mask is represented by $\vec{m} \in M$. The nucleus is at $\vec{m}_0$. Every pixel is compared to the nucleus using the comparison function

$c(\vec{m}) = e^{-\left( \frac{I(\vec{m}) - I(\vec{m}_0)}{t} \right)^6},$

where t is the brightness difference threshold,[27] I is the brightness of the pixel, and the power of the exponent has been determined empirically. This function has the appearance of a smoothed top-hat or rectangular function. The area of the SUSAN is given by

$n(M) = \sum_{\vec{m} \in M} c(\vec{m}).$

If c is the rectangular function, then n is the number of pixels in the mask which are within t of the nucleus. The response of the SUSAN operator is given by

$R(M) = \begin{cases} g - n(M) & \text{if } n(M) < g \\ 0 & \text{otherwise} \end{cases}$

where g is named the geometric threshold. In other words, the SUSAN operator only has a positive score if the area is small enough. The smallest SUSAN locally can be found using non-maximal suppression, and this is the complete SUSAN operator.

The value t determines how similar points have to be to the nucleus before they are considered to be part of the univalue segment. The value of g determines the minimum size of the univalue segment. If g is large enough, then this becomes an edge detector.

For corner detection, two further steps are used. Firstly, the centroid of the SUSAN is found. A proper corner will have the centroid far from the nucleus. The second step insists that all points on the line from the nucleus through the centroid out to the edge of the mask are in the SUSAN.
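Putting the comparison function, the USAN area and the response together gives roughly the following (a simplified, unoptimized sketch; the mask radius, the thresholds and the choice of g as half the maximal USAN area are illustrative assumptions, and the centroid-based corner steps described above are omitted):

```python
import numpy as np

def susan_response(image, radius=3, t=25.0, g=None):
    """Simplified SUSAN response: g - n(M) wherever the USAN area n(M) < g."""
    image = image.astype(float)
    h, w = image.shape

    # Offsets of a circular mask of the given radius (nucleus excluded).
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    inside = (ys ** 2 + xs ** 2 <= radius ** 2) & ~((ys == 0) & (xs == 0))
    offsets = list(zip(ys[inside], xs[inside]))

    if g is None:
        g = 0.5 * len(offsets)  # illustrative choice: half the maximal USAN area

    response = np.zeros_like(image)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            nucleus = image[y, x]
            # c(m) = exp(-((I(m) - I(m0)) / t)^6), summed over the mask.
            n = sum(np.exp(-((image[y + dy, x + dx] - nucleus) / t) ** 6)
                    for dy, dx in offsets)
            response[y, x] = g - n if n < g else 0.0
    return response
```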
The Trajkovic and Hedley corner detector

In a manner similar to SUSAN, this detector[28] directly tests whether a patch under a pixel is self-similar by examining nearby pixels. Here $\vec{c}$ is the pixel to be considered, and $\vec{p} \in P$ is a point on a circle P centered around $\vec{c}$. The point $\vec{p}\,'$ is the point opposite to $\vec{p}$ along the diameter.

The response function is defined as

$r(\vec{c}) = \min_{\vec{p} \in P} \left( \left( I(\vec{p}) - I(\vec{c}) \right)^2 + \left( I(\vec{p}\,') - I(\vec{c}) \right)^2 \right).$

This will be large when there is no direction in which the centre pixel is similar to two nearby pixels along a diameter. P is a discretised circle (a Bresenham circle), so interpolation is used for intermediate diameters to give a more isotropic response. Since any single diameter gives an upper bound on the min, the horizontal and vertical directions are checked first to see if it is worth proceeding with the complete computation of $r(\vec{c})$.

AST-based feature detectors

AST is an acronym standing for accelerated segment test. This test is a relaxed version of the SUSAN corner criterion. Instead of evaluating the circular disc, only the pixels in a Bresenham circle of radius r around the candidate point are considered. If n contiguous pixels are all brighter than the nucleus by at least t, or all darker than the nucleus by at least t, then the pixel under the nucleus is considered to be a feature. This test is reported to produce very stable features.[29] The choice of the order in which the pixels are tested is a so-called twenty questions problem. Building short decision trees for this problem results in the most computationally efficient feature detectors available.

The first corner detection algorithm based on the AST is FAST (features from accelerated segment test).[29] Although r can in principle take any value, FAST uses only a value of 3 (corresponding to a circle of 16 pixels circumference), and tests show that the best results are achieved with n being 9. This value of n is the lowest one at which edges are not detected. The order in which pixels are tested is determined by the ID3 algorithm from a training set of images. Confusingly, the name of the detector is somewhat similar to the name of the paper describing Trajkovic and Hedley's detector.
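The segment test itself is straightforward to state in code. The sketch below checks whether n contiguous pixels on the 16-pixel Bresenham circle of radius 3 are all brighter or all darker than the nucleus by a threshold t (an illustrative, unoptimized version; production FAST implementations instead use a learned decision tree and a fast pre-test on a few circle pixels, and the threshold value here is an assumption):

```python
import numpy as np

# The 16 offsets (drow, dcol) of a Bresenham circle of radius 3 around the nucleus.
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def segment_test(image, y, x, t=20.0, n=9):
    """Accelerated segment test at pixel (y, x), which must lie at least
    3 pixels from the image border: returns True if n contiguous circle
    pixels are all brighter than the nucleus by at least t, or all darker by t."""
    nucleus = float(image[y, x])
    ring = np.array([float(image[y + dy, x + dx]) for dy, dx in CIRCLE])

    brighter = ring > nucleus + t
    darker = ring < nucleus - t

    # Duplicate the ring so that contiguous runs wrapping around the circle count.
    for flags in (np.concatenate([brighter, brighter]),
                  np.concatenate([darker, darker])):
        run = 0
        for f in flags:
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False
```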
Automatic synthesis of detectors

Trujillo and Olague[30] introduced a method by which genetic programming is used to automatically synthesize image operators that can detect interest points. The terminal and function sets contain primitive operations that are common in many previously proposed man-made designs. Fitness measures the stability of each operator through the repeatability rate, and promotes a uniform dispersion of detected points across the image plane. The performance of the evolved operators has been confirmed experimentally using training and testing sequences of progressively transformed images. Hence, the proposed GP algorithm is considered to be human-competitive for the problem of interest point detection.

Spatio-temporal interest point detectors

The Harris operator has been extended to space-time by Laptev and Lindeberg.[31] Let μ denote the spatio-temporal second moment matrix defined by

$\mu = \sum_{u} \sum_{v} \sum_{w} h(u, v, w) \begin{bmatrix} L_x(u, v, w)^2 & L_x(u, v, w) L_y(u, v, w) & L_x(u, v, w) L_t(u, v, w) \\ L_x(u, v, w) L_y(u, v, w) & L_y(u, v, w)^2 & L_y(u, v, w) L_t(u, v, w) \\ L_x(u, v, w) L_t(u, v, w) & L_y(u, v, w) L_t(u, v, w) & L_t(u, v, w)^2 \end{bmatrix} = \begin{bmatrix} \langle L_x^2 \rangle & \langle L_x L_y \rangle & \langle L_x L_t \rangle \\ \langle L_x L_y \rangle & \langle L_y^2 \rangle & \langle L_y L_t \rangle \\ \langle L_x L_t \rangle & \langle L_y L_t \rangle & \langle L_t^2 \rangle \end{bmatrix}.$

Then, for a suitable choice of k < 1/27, spatio-temporal interest points are detected from spatio-temporal extrema of the following spatio-temporal Harris measure:

$H = \det \mu - k \, \operatorname{trace}^3 \mu.$

The determinant of the Hessian operator has been extended to joint space-time by Willems et al.[32] and Lindeberg,[33] leading to the following scale-normalized differential expression:

$\det(H_{(x,y,t),\mathrm{norm}} L) = s^{2\gamma_s} \tau^{\gamma_\tau} \left( L_{xx} L_{yy} L_{tt} + 2 L_{xy} L_{xt} L_{yt} - L_{xx} L_{yt}^2 - L_{yy} L_{xt}^2 - L_{tt} L_{xy}^2 \right).$

In the work by Willems et al.,[32] a simpler expression corresponding to γ_s = 1 and γ_τ = 1 was used. In Lindeberg,[33] it was shown that γ_s = 5/4 and γ_τ = 5/4 implies better scale selection properties, in the sense that the selected scale levels obtained from a spatio-temporal Gaussian blob with spatial extent s = s₀ and temporal extent τ = τ₀ will perfectly match the spatial extent and the temporal duration of the blob, with scale selection performed by detecting spatio-temporal scale-space extrema of the differential expression.

The Laplacian operator has been extended to spatio-temporal video data by Lindeberg,[33] leading to the following two spatio-temporal operators, which also constitute models of receptive fields of non-lagged vs. lagged neurons in the LGN:

$\partial_{t,\mathrm{norm}} \left( \nabla^2_{(x,y),\mathrm{norm}} L \right) = s^{\gamma_s} \tau^{\gamma_\tau / 2} \left( L_{xxt} + L_{yyt} \right),$

$\partial_{tt,\mathrm{norm}} \left( \nabla^2_{(x,y),\mathrm{norm}} L \right) = s^{\gamma_s} \tau^{\gamma_\tau} \left( L_{xxtt} + L_{yytt} \right).$

For the first operator, scale selection properties call for using γ_s = 1 and γ_τ = 1/2, if we want this operator to assume its maximum value over spatio-temporal scales at a spatio-temporal scale level reflecting the spatial extent and the temporal duration of an onset Gaussian blob. For the second operator, scale selection properties call for using γ_s = 1 and γ_τ = 3/4, if we want this operator to assume its maximum value over spatio-temporal scales at a spatio-temporal scale level reflecting the spatial extent and the temporal duration of a blinking Gaussian blob.

Colour extensions of spatio-temporal interest point detectors have been investigated by Everts et al.[34]
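A rough discretization of the spatio-temporal Harris measure, operating on a video volume with axes (t, y, x), could look as follows (a hedged sketch; the separable Gaussian smoothing used for the window h, the scale values and the value of k are assumptions of this illustration rather than the settings of the cited papers):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatio_temporal_harris(video, sigma_space=1.5, sigma_time=1.5, k=0.005):
    """Spatio-temporal Harris measure H = det(mu) - k * trace(mu)^3 for a video
    volume with axes (t, y, x)."""
    video = video.astype(float)
    d_sigma = (sigma_time, sigma_space, sigma_space)   # derivative scales
    w_sigma = tuple(2.0 * s for s in d_sigma)          # integration window (assumed)

    # First-order Gaussian derivatives along t, y and x.
    Lt = gaussian_filter(video, d_sigma, order=(1, 0, 0))
    Ly = gaussian_filter(video, d_sigma, order=(0, 1, 0))
    Lx = gaussian_filter(video, d_sigma, order=(0, 0, 1))

    # Components of the 3x3 spatio-temporal second moment matrix mu.
    mxx = gaussian_filter(Lx * Lx, w_sigma); mxy = gaussian_filter(Lx * Ly, w_sigma)
    mxt = gaussian_filter(Lx * Lt, w_sigma); myy = gaussian_filter(Ly * Ly, w_sigma)
    myt = gaussian_filter(Ly * Lt, w_sigma); mtt = gaussian_filter(Lt * Lt, w_sigma)

    # det(mu) for the symmetric 3x3 matrix [[mxx, mxy, mxt],[mxy, myy, myt],[mxt, myt, mtt]].
    det_mu = (mxx * (myy * mtt - myt ** 2)
              - mxy * (mxy * mtt - myt * mxt)
              + mxt * (mxy * myt - myy * mxt))
    trace_mu = mxx + myy + mtt
    return det_mu - k * trace_mu ** 3
```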
Bibliography

1. Andrew Willis and Yunfeng Sui (2009). "An Algebraic Model for fast Corner Detection". 2009 IEEE 12th International Conference on Computer Vision. IEEE. pp. 2296–2302. doi:10.1109/ICCV.2009.5459443. ISBN 978-1-4244-4420-5.
2. Shapiro, Linda and George C. Stockman (2001). Computer Vision, p. 257. Prentice Hall, Upper Saddle River. ISBN 0-13-030796-3.
3. H. Moravec (1980). "Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover". Tech Report CMU-RI-TR-3, Carnegie-Mellon University, Robotics Institute; also Ph.D. thesis, Computer Science Department, Stanford University, March 1980.
4. C. Harris and M. Stephens (1988). "A combined corner and edge detector". Proceedings of the 4th Alvey Vision Conference. pp. 147–151.
5. Javier Sánchez, Nelson Monzón and Agustín Salgado (2018). "An Analysis and Implementation of the Harris Corner Detector". Image Processing On Line. 8: 305–328. doi:10.5201/ipol.2018.229. hdl:10553/43499.
6. J. Shi and C. Tomasi (June 1994). "Good Features to Track". 9th IEEE Conference on Computer Vision and Pattern Recognition. pp. 593–600. doi:10.1109/CVPR.1994.323794.
7. C. Tomasi and T. Kanade (1991). "Detection and Tracking of Point Features". Technical Report CMU-CS-91-132, School of Computer Science, Carnegie Mellon University.
8. A. Noble (1989). Descriptions of Image Surfaces. Ph.D. thesis, Department of Engineering Science, Oxford University. p. 45.
9. Förstner, W.; Gülch (1987). "A Fast Operator for Detection and Precise Location of Distinct Points, Corners and Centres of Circular Features". ISPRS.
10. T. Lindeberg (1994). "Junction detection with automatic selection of detection scales and localization scales". Proc. 1st International Conference on Image Processing, Vol. I, Austin, Texas. pp. 924–928.
11. Tony Lindeberg (1998). "Feature detection with automatic scale selection". International Journal of Computer Vision. 30 (2): 77–116.
12. T. Lindeberg (1994). Scale-Space Theory in Computer Vision. Springer. ISBN 978-0-7923-9418-1.
13. T. Lindeberg and J. Garding (1997). "Shape-adapted smoothing in estimation of 3-D depth cues from affine distortions of local 2-D structure". Image and Vision Computing. 15 (6): 415–434.
14. T. Lindeberg (2008). "Scale-space". In Benjamin Wah (ed.). Wiley Encyclopedia of Computer Science and Engineering. Vol. IV. John Wiley and Sons. pp. 2495–2504. doi:10.1002/9780470050118.ecse609. ISBN 978-0-470-05011-8.
15. K. Mikolajczyk and C. Schmid (2004). "Scale and affine invariant interest point detectors". International Journal of Computer Vision. 60 (1): 63–86. doi:10.1023/B:VISI.0000027790.02288.f2.
16. L. Kitchen and A. Rosenfeld (1982). "Gray-level corner detection". Pattern Recognition Letters. 1 (2): 95–102.
17. J. J. Koenderink and W. Richards (1988). "Two-dimensional curvature operators". Journal of the Optical Society of America A. 5 (7): 1136–1141.
18. L. Bretzner and T. Lindeberg (1998). "Feature tracking with automatic selection of spatial scales". Computer Vision and Image Understanding. 71: 385–392.
19. T. Lindeberg and M.-X. Li (1997). "Segmentation and classification of edges using minimum description length approximation and complementary junction cues". Computer Vision and Image Understanding. 67 (1): 88–98.
20. D. Lowe (2004). "Distinctive Image Features from Scale-Invariant Keypoints". International Journal of Computer Vision. 60 (2): 91–110. doi:10.1023/B:VISI.0000029664.99615.94.
21. T. Lindeberg (2015). "Image matching using generalized scale-space interest points". Journal of Mathematical Imaging and Vision. 52 (1): 3–36.
22. T. Lindeberg (2013). "Scale selection properties of generalized scale-space interest point detectors". Journal of Mathematical Imaging and Vision. 46 (2): 177–210.
23. T. Lindeberg (1998). "Edge detection and ridge detection with automatic scale selection". International Journal of Computer Vision. 30 (2): 117–154. doi:10.1023/A:1008097225773.
24. H. Wang and M. Brady (1995). "Real-time corner detection algorithm for motion estimation". Image and Vision Computing. 13 (9): 695–703. doi:10.1016/0262-8856(95)98864.
25. S. M. Smith and J. M. Brady (May 1997). "SUSAN – a new approach to low level image processing". International Journal of Computer Vision. 23 (1): 45–78. doi:10.1023/A:1007963824710.
26. S. M. Smith and J. M. Brady (January 1997). "Method for digitally processing images to determine the position of edges and/or corners therein for guidance of unmanned vehicle". UK Patent GB 2272285; proprietor: Secretary of State for Defence, UK; published 1994-05-11.
27. "The SUSAN Edge Detector in Detail".
28. M. Trajkovic and M. Hedley (1998). "Fast corner detection". Image and Vision Computing. 16 (2): 75–87. doi:10.1016/S0262-8856(97)00056-5.
29. E. Rosten and T. Drummond (May 2006). "Machine learning for high-speed corner detection". European Conference on Computer Vision.
30. Leonardo Trujillo and Gustavo Olague (2008). "Automated design of image operators that detect interest points". Evolutionary Computation. 16 (4): 483–507. doi:10.1162/evco.2008.16.4.483. PMID 19053496.
31. Ivan Laptev and Tony Lindeberg (2003). "Space-time interest points". International Conference on Computer Vision. IEEE. pp. 432–439.
32. Geert Willems, Tinne Tuytelaars and Luc van Gool (2008). "An efficient dense and scale-invariant spatio-temporal interest point detector". European Conference on Computer Vision. Springer Lecture Notes in Computer Science, Vol. 5303. pp. 650–663. doi:10.1007/978-3-540-88688-4_48.
33. Tony Lindeberg (2018). "Spatio-temporal scale selection in video data". Journal of Mathematical Imaging and Vision. 60 (4): 525–562. doi:10.1007/s10851-017-0766-9.
34. I. Everts, J. van Gemert and T. Gevers (2014). "Evaluation of color spatio-temporal interest points for human action recognition". IEEE Transactions on Image Processing. 23 (4): 1569–1589. doi:10.1109/TIP.2014.2302677. PMID 24577192.
Reference implementations

This section provides external links to reference implementations of some of the detectors described above. These reference implementations are provided by the authors of the paper in which the detector is first described, and they may contain details not present or explicit in the papers describing the features.

- DoG detection (as part of the SIFT system): Windows and x86 Linux executables
- Harris–Laplace: static Linux executables; also contains DoG and LoG detectors and affine adaptation for all detectors included
- FAST detector: C, C++, MATLAB source code and executables for various operating systems and architectures
- lip-vireo (archived 2017-05-11 at the Wayback Machine): LoG, DoG, Harris-Laplacian, Hessian and Hessian-Laplacian detectors; SIFT, flip-invariant SIFT, PCA-SIFT, PSIFT, Steerable Filters and SPIN descriptors; Linux, Windows and SunOS executables
- SUSAN Low Level Image Processing: C source code
- Online Implementation of the Harris Corner Detector, IPOL

See also

- blob detection
- affine shape adaptation
- scale space
- ridge detection
- interest point detection
- feature detection (computer vision)
- image derivative

External links

- Lindeberg, Tony (2001) [1994]. "Corner detection". Encyclopedia of Mathematics. EMS Press.
- Brostow. "Corner Detection". UCL Computer Science.
