VIVA Lab Matching Tool - User Manual

Table of Contents

  1. Introduction
  2. Image Input
  3. Feature Point Detection
  4. Feature Point Correspondence
Introduction

This section gives a short overview of the functionality of the Matching Tool.

The VIVA Lab Matching Tool can be used to identify a sparse set of matching feature points between weakly calibrated image triplets. It requires as input three images, and calibration information in the form of a fundamental matrix and a trifocal tensor, or of three projection matrices.

This tool was developed using Microsoft Visual C++ .NET, and the Open Source Computer Vision Library OpenCV, by Etienne Vincent and Robert Laganière.

This tool can conveniently be used in conjunction with the freely available Projective Vision Toolkit (PVT).

This tool is freely available, but acknowledgement should be given by referencing our publication.




Image Input

Overview

Three images and the calibration information relating them are needed before matching can take place.

Opening Images

Click on "Image - Open", and select successively the three images to be matched. These may be in JPEG (.jpg/.jpeg) or Windows Bitmap format (.bmp). The images can be viewed by clicking on "Image - Show".

Opening the Calibration

Calibration information is essential in this guided matching tool, as the search for correspondence is conducted along epipolar lines, and verified using the camera system's trinocular geometry. The geometry given as input must be reasonably accurate. There are three choices for the source of calibration information:

  1. Click on "Calibration - Open F,T", with the box "PVT Tensor" checked to open a fundamental matrix file and a trifocal tensor file in the format returned by the PVT. The fundamental matrix should relate Images 1 and 2, while the tensor relates Images 1,2 and 3. Note that it might be necessary to tune the PVT parameters to obtain sufficiently accurate weak calibration.
  2. Click on "Calibration - Open F,T", with the box "PVT Tensor" unchecked to open a text file containing the entries of a fundamental matrix, and another containing the entries of a trifocal tensor. The fundamental matrix should relate Images 1 and 2, while the tensor relates Images 3,1 and 2. (This is different from using the PVT toolkit input where the entries of the tensor are transposed in the file.)
  3. Click on "Calibration - Open P1,P2,P3" to open three text files containing the entries of three projection matrices corresponding to the three images. These are then used to compute a fundamental matrix and tensor, so the subsequent steps are not influenced by the nature of the calibration input.




Feature Point Detection

Overview

Feature Points must be selected in Images 1 and 2. The search for correspondence will then be restricted to these points. Two types of feature points are available and can be used together or separately: Harris feature points, and Epipolar Gradient feature points. There are also functionalities for saving and opening files containing the x-y coordinates of feature points, and saving images on which the feature points are drawn. The number of feature points selected is displayed at the bottom-left of the window.

Harris Feature Points

These feature points are related to those described in (Harris, C., Stephens, M., 1988, "A Combined Corner and Edge Detector"). The threshold in Images 1 and 2 should generally be set to the same value. Lower thresholds result in more points.
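As a rough sketch of what this detector computes (the tool's own implementation, built on OpenCV, may differ in detail, e.g. by using Gaussian-smoothed derivatives), the Harris response at a pixel is R = det(M) - k*trace(M)^2, where M is the structure tensor accumulated over a small window; points with a response above the threshold are kept, so lowering the threshold yields more points:

```cpp
#include <vector>

// Minimal Harris response map for a grayscale image (values in [0,255]).
// Derivatives by central differences; structure tensor summed over a
// (2r+1)x(2r+1) window. Illustration only, not the tool's exact code.
std::vector<std::vector<double>>
harrisResponseMap(const std::vector<std::vector<double>>& img,
                  int r = 1, double k = 0.04) {
    int h = (int)img.size(), w = (int)img[0].size();
    std::vector<std::vector<double>> R(h, std::vector<double>(w, 0.0));
    for (int y = r + 1; y < h - r - 1; ++y) {
        for (int x = r + 1; x < w - r - 1; ++x) {
            double Sxx = 0, Syy = 0, Sxy = 0;
            for (int v = -r; v <= r; ++v) {
                for (int u = -r; u <= r; ++u) {
                    double Ix = (img[y+v][x+u+1] - img[y+v][x+u-1]) / 2.0;
                    double Iy = (img[y+v+1][x+u] - img[y+v-1][x+u]) / 2.0;
                    Sxx += Ix * Ix; Syy += Iy * Iy; Sxy += Ix * Iy;
                }
            }
            double det = Sxx * Syy - Sxy * Sxy;
            double tr  = Sxx + Syy;
            R[y][x] = det - k * tr * tr;  // keep pixels above the threshold
        }
    }
    return R;
}
```

Corners score high, edges score negative, and flat regions score near zero, which is why a single threshold suffices to separate them.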

Epipolar Gradient Feature Points

This feature point detector was developed by us, and is very effective in this context of calibrated matching. A lower threshold results in more points. The "Line/Column Skip in Image 1" parameters can be used to skip some lines or columns in the search for feature points, and thus limit their number. If the epipolar lines in Image 2 are near horizontal, the "Line Skip" should be used, whereas if they are near vertical, the "Column Skip" should be used. The detector is also influenced by the "Derivation Filter Diameter" and "Scale of Derivative" parameters on the bottom-right of the application window.
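The precise detector is described in our publication. Purely to illustrate the idea of an epipolar gradient, the sketch below scores a pixel by the component of its intensity gradient along the local epipolar line direction; this is an assumed simplification, and `epipolarGradientScore` is a hypothetical name, not part of the tool:

```cpp
#include <cmath>

// Illustrative only: score a pixel by the magnitude of its intensity
// gradient (Ix, Iy) projected onto the epipolar line direction (dx, dy).
// Pixels scoring above the detector threshold would be kept as features,
// so a lower threshold admits more points.
double epipolarGradientScore(double Ix, double Iy, double dx, double dy) {
    double norm = std::sqrt(dx * dx + dy * dy);
    if (norm == 0.0) return 0.0;
    return std::fabs(Ix * dx + Iy * dy) / norm;  // |gradient . unit direction|
}
```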




Feature Point Correspondence

Overview

Two methods are available for matching the detected feature points: the traditional Correlation-based approach, and an Edge Transfer-based approach. Both methods are guided by the epipolar and trinocular geometry, that is, matches are only sought along epipolar lines in the second image, and at locations determined by trifocal transfer in the third image. To eliminate the mismatches that might be present, a consistency check based on a disparity gradient constraint is also available. The tool shows images of the displacement vectors of matched points between Images 1 and 2, and between Images 2 and 3. These are lines starting at the coordinate of a point and ending at the coordinate of the corresponding point in the next view. There are also functionalities for saving and opening files containing the coordinates of feature point triplets, and saving images on which the displacement vectors of matched points are drawn. The number of matched points found is displayed at the bottom-left of the window.

Correlation-based Correspondence

This is the traditional method of comparing points, by normalized correlation of the neighboring pixels. The "Window Radius" parameter determines the size of the correlation window, which is a (2r+1)*(2r+1) square. The threshold should be in the interval [-1,1], a higher value being more discriminating. The "Search Radius in Image 3" determines the area over which a correspondence will be sought around a coordinate determined by trifocal transfer. Finally, the "Distance from Epipolar Line in Image 2" determines the search area for correspondences in Image 2.
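The comparison score can be sketched as the zero-mean normalized cross-correlation of two windows; the function below is an illustration of the measure, not the tool's exact code:

```cpp
#include <vector>
#include <cmath>

// Zero-mean normalized cross-correlation of two (2r+1)x(2r+1) windows
// centered at (x1,y1) in img1 and (x2,y2) in img2. The result lies in
// [-1,1]; matches scoring above the user threshold are accepted.
double ncc(const std::vector<std::vector<double>>& img1, int x1, int y1,
           const std::vector<std::vector<double>>& img2, int x2, int y2,
           int r) {
    double m1 = 0, m2 = 0;
    int n = (2 * r + 1) * (2 * r + 1);
    for (int v = -r; v <= r; ++v)
        for (int u = -r; u <= r; ++u) {
            m1 += img1[y1 + v][x1 + u];
            m2 += img2[y2 + v][x2 + u];
        }
    m1 /= n; m2 /= n;
    double num = 0, d1 = 0, d2 = 0;  // numerator and the two variances
    for (int v = -r; v <= r; ++v)
        for (int u = -r; u <= r; ++u) {
            double a = img1[y1 + v][x1 + u] - m1;
            double b = img2[y2 + v][x2 + u] - m2;
            num += a * b; d1 += a * a; d2 += b * b;
        }
    if (d1 == 0 || d2 == 0) return 0;  // flat window: no usable texture
    return num / std::sqrt(d1 * d2);
}
```

Because the mean is subtracted and the result is normalized, the score is insensitive to affine changes in brightness between the two views.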

Edge Transfer-based Correspondence

This method was introduced by us. In general, it produces more mismatches, and thus relies heavily on the subsequent use of the disparity gradient constraint. A lower threshold is more discriminating. The "Distance from Epipolar Line in Image 2" still determines the search area for correspondences in Image 2. In general, the other parameters do not need to be altered. This method requires the prior detection of epipolar gradient feature points for its initialization.

Disparity Gradient Consistency Check

To eliminate matches that are inconsistent with their neighbors, click on "Disparity Gradient Constraint - Remove Inconsistencies". This can eliminate most mismatches, but can also eliminate accurate matches where there is significant parallax. This constraint requires that for each matched point, "Consistent Neighbors" out of the "Neighbors Considered" closest points be consistent with a disparity gradient of less than the threshold. Thus to strengthen the constraint, the threshold can be lowered, or the proportion "Consistent Neighbors"/"Neighbors Considered" increased.
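The disparity gradient between two matches can be sketched with the usual definition: the difference of their disparity vectors divided by the distance between their cyclopean (midpoint) positions. The code below is illustrative, not the tool's implementation:

```cpp
#include <cmath>

struct Pt { double x, y; };

// Disparity gradient between two matches (p1 -> q1) and (p2 -> q2):
// |disparity difference| / |cyclopean separation|. Small values mean
// the two matches displace consistently; a match is kept only if enough
// of its nearest neighbors yield a gradient below the threshold.
double disparityGradient(Pt p1, Pt q1, Pt p2, Pt q2) {
    double ddx = (q1.x - p1.x) - (q2.x - p2.x);  // disparity difference
    double ddy = (q1.y - p1.y) - (q2.y - p2.y);
    double cx = (p1.x + q1.x) / 2 - (p2.x + q2.x) / 2;  // cyclopean sep.
    double cy = (p1.y + q1.y) / 2 - (p2.y + q2.y) / 2;
    double sep = std::sqrt(cx * cx + cy * cy);
    return sep == 0 ? 0 : std::sqrt(ddx * ddx + ddy * ddy) / sep;
}
```

Two matches with identical disparities give a gradient of zero, so a smooth scene yields low gradients everywhere; isolated mismatches disagree with their neighbors and are removed.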