Matching Key Points of Two Images Using ORB and BFMatcher in OpenCV with Python

πŸ’‘ Problem Formulation: In image processing, matching key points is vital for tasks like object detection, image stitching, and tracking. Say we have two images of the same object taken from different angles, and we want to find corresponding key points to establish the relationship between the views. The input consists of two images; the desired output is a set of matched key points present in both images.

Method 1: Basic ORB Key Points Detection and BFMatcher Matching

The ORB (Oriented FAST and Rotated BRIEF) algorithm is a fast, robust feature detector and descriptor extractor that can be used in conjunction with the BFMatcher (Brute-Force Matcher) to find matching key points between two images. ORB detects distinctive features in each image and computes their binary descriptors; BFMatcher then finds the closest matches between these descriptors.

Here’s an example:

import cv2

# Load the two images in grayscale (flag 0 = cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread('image1.jpg', 0)
img2 = cv2.imread('image2.jpg', 0)

# Initialize the ORB detector
orb = cv2.ORB_create()

# Detect and compute key points and descriptors with ORB
keypoints1, descriptors1 = orb.detectAndCompute(img1, None)
keypoints2, descriptors2 = orb.detectAndCompute(img2, None)

# Create BFMatcher object
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Match descriptors
matches = bf.match(descriptors1, descriptors2)

# Sort matches by descriptor distance (best matches first)
matches = sorted(matches, key=lambda x: x.distance)

# Draw the matches
matching_result = cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches[:10], None, flags=2)

cv2.imshow('Matches', matching_result)
cv2.waitKey(0)
cv2.destroyAllWindows()

The output will be an image displaying the top 10 matches between the key points in the two images, shown in the window ‘Matches’.

In this code snippet, we first load the two images and initialize the ORB detector. We then detect and compute key points and descriptors within each image. Next, we create the BFMatcher object and use it to find matching descriptors. Finally, we draw and display the top 10 matched key points between the two images.

Method 2: ORB with Ratio Test

Sometimes matches found by the brute-force approach are not good enough, and we can refine them using the ratio test proposed by D. Lowe in his SIFT paper. This method involves comparing the distance of the closest match to that of the second-closest in order to filter out poor matches.

Here’s an example:

import cv2

# Initialize the ORB detector and BFMatcher
orb = cv2.ORB_create()
# Use NORM_HAMMING for ORB's binary descriptors; leave crossCheck=False
# (the default) so knnMatch can return two neighbors per query
bf = cv2.BFMatcher(cv2.NORM_HAMMING)

# ... [rest of the code is similar to Method 1 until match descriptors]

# Instead of using match() we use knnMatch()
matches = bf.knnMatch(descriptors1, descriptors2, k=2)

# Apply ratio test
good_matches = []
for match_pair in matches:
    if len(match_pair) < 2:
        continue  # a query can occasionally have fewer than two neighbors
    m, n = match_pair
    if m.distance < 0.75 * n.distance:
        good_matches.append([m])

# Draw matches
matching_result = cv2.drawMatchesKnn(img1, keypoints1, img2, keypoints2, good_matches, None, flags=2)

# Display
cv2.imshow('Good Matches', matching_result)
cv2.waitKey(0)
cv2.destroyAllWindows()

The output here is an image showing the matches that passed the ratio test, resulting in higher-quality matches.

This code uses the knnMatch() method of BFMatcher to find the two nearest neighbors. The ratio test checks if the closest match’s distance is significantly smaller than the second closest. If it passes the test, the match is considered good, and we only keep those matches. Then we use the drawMatchesKnn() function to render the good matches on the images for visual inspection.

Method 3: ORB with Homography Check

For more robustness, one can verify the geometric consistency between the matches using homography. This method involves calculating the homography matrix using a subset of the matches and then using this matrix to filter out outliers that do not fit the calculated transformation well.

Here’s an example:

import cv2
import numpy as np

# ... [rest of the code is similar to Method 2 until getting good matches]

# Extract location of good matches
points1 = np.zeros((len(good_matches), 2), dtype=np.float32)
points2 = np.zeros_like(points1)

for i, match in enumerate(good_matches):
    points1[i, :] = keypoints1[match[0].queryIdx].pt
    points2[i, :] = keypoints2[match[0].trainIdx].pt

# Find homography (RANSAC needs at least 4 point pairs; 5.0 is the
# reprojection threshold in pixels)
H, mask = cv2.findHomography(points1, points2, cv2.RANSAC, 5.0)

# Use mask to filter matches
matchesMask = mask.ravel().tolist()

# Draw matches
draw_params = dict(matchColor = (0,255,0),
                   singlePointColor = None,
                   matchesMask = matchesMask,
                   flags = 2)

# drawMatches expects a flat list of DMatch, so unwrap the
# single-element lists produced by the ratio test
matching_result = cv2.drawMatches(img1, keypoints1, img2, keypoints2,
                                  [m[0] for m in good_matches], None, **draw_params)

# Display
cv2.imshow('Homography Matches', matching_result)
cv2.waitKey(0)
cv2.destroyAllWindows()

The output will be an image that shows the matches that are consistent with the geometric transformation suggested by the homography.

After finding the good matches, we use the coordinates of the matched key points to compute a homography with RANSAC. The resulting mask tells us which matches are inliers (fit the homography) and which are outliers. We only draw the matches that pass this geometrical check, resulting in a set of matches that are consistent with the spatial arrangement between the images.

Method 4: ORB with BFMatcher and Distance Threshold

You can set a distance threshold manually to fine-tune the BFMatcher's matches. By setting a maximum distance, you discard matches whose descriptors are too dissimilar, thereby improving the quality of the matches that remain.

Here’s an example:

import cv2

# ... [rest of the code is similar to Method 1 until match descriptors]

# Match descriptors with a distance threshold
distance_threshold = 30
matches = [m for m in bf.match(descriptors1, descriptors2) if m.distance < distance_threshold]

# Draw the matches
matching_result = cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches, None, flags=2)

# Display
cv2.imshow('Distance Threshold Matches', matching_result)
cv2.waitKey(0)
cv2.destroyAllWindows()

The output is an image showing the matches that are below the specified distance threshold.

This method utilizes the distance property of each match. We manually specify a threshold and filter the matches to include only those whose distance is smaller than this threshold, indicating a stronger descriptor similarity. The selected matches are then visualized using the drawMatches() method.

Bonus One-Liner Method 5: ORB with BFMatcher in a Compact Form

For quick prototyping or minimalistic implementations, the entire process of feature matching using ORB and BFMatcher can be condensed into fewer lines of code. This approach is less verbose and useful for rapid testing of feature matching.

Here’s an example:

import cv2
img1, img2 = cv2.imread('image1.jpg',0), cv2.imread('image2.jpg',0)
(kpts1, d1), (kpts2, d2) = [cv2.ORB_create().detectAndCompute(img, None) for img in (img1, img2)]
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
cv2.imshow('Compact Matches', cv2.drawMatches(img1, kpts1, img2, kpts2, sorted(matches, key=lambda x: x.distance)[:10], None, flags=2))
cv2.waitKey(0)
cv2.destroyAllWindows()

The output will be the same as in Method 1, but achieved in a more compact form.

In this compact approach, image loading, ORB initialization, key point detection, matching, and display are compressed into a handful of lines. It serves the same purpose as the more verbose methods but sacrifices clarity, which may not be ideal for educational purposes or complex implementations.

Summary/Discussion

  • Method 1: Basic ORB and BFMatcher. Straightforward and fast. May include less accurate matches.
  • Method 2: ORB with Ratio Test. Eliminates weaker matches, improving quality. Potentially discards useful matches if threshold not tuned properly.
  • Method 3: ORB with Homography Check. Verifies geometric consistency; robust to viewpoint changes. Computationally expensive due to the homography estimation.
  • Method 4: ORB with BFMatcher and Distance Threshold. Simple manual filtering based on match quality. Threshold selection is crucial and may require experimentation.
  • Method 5: Compact ORB and BFMatcher. Quick to write and useful for rapid testing. Lacks detailed steps for educational context or debuggability.