Laser Scanner Calibration Dependency on the Line Detection Method

Calibration is an important step for 3D structured light laser scanners in order to obtain accurate measurement results. During calibration, a relation between the pixels in the 3D laser scanner camera image and real-world units is established. The laser line detection method plays an important role in this calibration process, but this aspect has not been investigated before. In this paper the dependence of the calibration and measurement results on laser line detection with pixel or sub-pixel resolution in the calibration process is investigated. The calibration process of the developed 3D laser scanner is described. The results show a dependence between the laser line detection method used in the calibration process and the final measurement results. DOI: http://dx.doi.org/10.5755/j01.eee.21.6.13765


I. INTRODUCTION
Calibration is the process in 3D laser scanners by which a relation between the camera image pixels and real-world units is established. Calibration is an important step in 3D laser scanners in order to obtain an accurate measurement result.
Many 3D laser scanner calibration methods exist. Some of them [1] propose calibrating the laser scanner using the known dimensions of an object, calculating all of the parameters needed for the calibration from the laser line projected onto the surface of this object and captured by the 3D laser scanner camera. Other calibration methods calibrate the 3D laser scanner by separately finding the camera lens distortions [2], [3] and the laser light plane parameters, using known distances of an object from the camera.
All of those methods use laser line detection methods as a part of the calibration process, which means that the accuracy of the laser scanner is directly related to the precision of the laser line detection algorithm used in the calibration process. So far the laser line detection accuracy has not been investigated or taken into account in the calibration process.

Manuscript received January 3, 2015; accepted May 28, 2015. Current work has been supported by EU (FP7-SME project "Hermes" and European Regional Development Fund), Estonian Science Foundation (target financing SF0140061s12 and grant ETF8905), Doctoral School in Information and Communication Technology of Estonia, Estonian IT Academy scholarship, CEBE (Centre for Integrated Electronic Systems and Biomedical Engineering) and Tallinn University of Technology, Thomas Johann Seebeck electronics institute.
The laser line on a real image of interest is typically 1 to 10 pixels wide. Many laser line detection algorithms [4], [5] for 3D laser line scanners detect the laser line with pixel resolution. Those laser line detection algorithms are usually simpler and thus preferred in cases where the accuracy of the laser scanner is not very important or where the processor resources are limited.
In other cases one-pixel resolution is not sufficient, as the actual laser line center position is located between two pixels; when moving to sub-millimeter measurements, the accuracy of the laser line detection algorithm significantly influences [6] the real measurement results.
A similar issue also exists in other structured light based scanning technologies [7], where the method of detecting the fringe pattern edges during the projector calibration process might influence the end result.
Several sub-pixel laser line detection methods [8], [9] exist, but how much do they affect the real measurement results compared to pixel resolution methods in the calibration process, and where is the trade-off between the complexity of the laser line detection algorithm and the laser scanner calibration and thus the measurement accuracy? This influence has not been investigated previously.
In this paper the dependence of the measurement results on the laser line detection method with pixel or sub-pixel resolution in the calibration process is investigated.
The developed laser scanner is calibrated with pixel and sub-pixel resolution laser line detection algorithms. The measurements after the calibration process are made with a sub-pixel resolution laser line detection method. The measurement results are compared against real-world objects with known dimensions.

II. 3D LASER SCANNER MODEL AND CALIBRATION PROCESS
The simplest approximation of a camera is a pinhole camera. In this model the lens is replaced by an infinitesimally small hole located in the focal plane, through which all light must pass. The image of the depicted object is projected onto the image sensor [1].
In order to convert the laser line in pixels captured by the camera sensor into real-world measurements in millimeters, one must understand the relationship between the camera sensor and the laser light plane.
By knowing the camera sensor width W and height H, the full-resolution image optical center in pixels can be calculated as

c_x = W/2,  c_y = H/2.

Laser line coordinates on the captured image in pixels are defined as u and v. The camera focal lengths f_x, f_y in pixels for a given focus depend on the lens used by the camera and can be estimated by using objects with known dimensions in front of the camera at different distances.
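As a small illustration of these relations, the optical center and the conversion from pixel to normalized camera coordinates can be sketched as follows (the resolution and focal length values here are illustrative placeholders, not the parameters of the scanner described in this paper):

```python
# Sketch of the pinhole-camera pixel normalization described above.
# W, H, fx, fy would come from the actual camera calibration;
# the values below are example placeholders only.
W, H = 1920, 1080          # sensor resolution in pixels (example)
fx, fy = 1250.0, 1250.0    # focal lengths in pixels (example)

# Optical center of the full-resolution image
cx, cy = W / 2.0, H / 2.0

def normalize(u, v):
    """Convert a pixel coordinate (u, v) to normalized camera coordinates."""
    return (u - cx) / fx, (v - cy) / fy
```

A pixel at the optical center maps to (0, 0) in normalized coordinates, which is the reference for the plane intersection described next.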
Knowing that the light emitted by the line laser forms a proper virtual laser light plane and that the laser line captured by the camera is located on this laser light plane, the laser line position in the camera coordinate system can be calculated. The normalized ray through the pixel is

x' = (u - c_x)/f_x,  y' = (v - c_y)/f_y,

and intersecting this ray with the laser light plane A·X + B·Y + C·Z + D = 0 gives

Z = -D / (A·x' + B·y' + C).
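The intersection of a laser line pixel's viewing ray with the laser light plane A·X + B·Y + C·Z + D = 0 can be sketched as follows (a minimal illustration of the pinhole ray-plane intersection, not the paper's implementation; the function name is illustrative):

```python
def laser_point_3d(u, v, plane, cx, cy, fx, fy):
    """Intersect the camera ray through pixel (u, v) with the laser
    light plane A*X + B*Y + C*Z + D = 0 (camera coordinate system)."""
    A, B, C, D = plane
    # Normalized ray direction (x', y', 1)
    xp = (u - cx) / fx
    yp = (v - cy) / fy
    denom = A * xp + B * yp + C
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the laser plane")
    Z = -D / denom
    return xp * Z, yp * Z, Z
```

For example, with a plane parallel to the sensor at Z = 1000 mm, the ray through the optical center hits the plane at (0, 0, 1000).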
From this the real-world coordinates of the laser line can be calculated:

X = x'·Z,  Y = y'·Z.

Ideal pinhole cameras do not have any image distortions, because the lens, being the main source of the distortions, is substituted with an infinitely small hole. Unfortunately this is not the case with real-world cameras. The two main types of distortions are radial and tangential distortions. Both of those distortions, however, can be taken into account and the camera image can be corrected. Radial distortions can be corrected with the following equations:

u_corrected = u·(1 + k1·r² + k2·r⁴ + k3·r⁶),
v_corrected = v·(1 + k1·r² + k2·r⁴ + k3·r⁶),

where u_corrected and v_corrected are the image pixel coordinates (u, v) after correction, k_n is the n-th radial distortion coefficient and r is the distance of the pixel from the optical center. Tangential distortions can be corrected with the following equations:

u_corrected = u + (2·p1·u·v + p2·(r² + 2u²)),
v_corrected = v + (p1·(r² + 2v²) + 2·p2·u·v),

where p_n is the n-th tangential distortion coefficient. To calculate the focal length and distortion coefficients and to undistort the input image, Zhang's method [10] has been used with a known chessboard size and square side length in millimeters.
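The radial and tangential correction terms above follow the standard Brown-Conrady distortion model used by Zhang-style calibration tools. A forward-model sketch on normalized coordinates (the function name is illustrative; real undistortion inverts this model iteratively, as calibration libraries do internally):

```python
def apply_distortion(x, y, k, p):
    """Apply the radial (k1..k3) and tangential (p1, p2) terms of the
    Brown-Conrady model to normalized coordinates (x, y).
    This is the forward distortion model; undistortion inverts it."""
    k1, k2, k3 = k
    p1, p2 = p
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

With all coefficients set to zero the model reduces to the identity, which is the ideal pinhole case described above.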
Figure 4 shows an input image with two measurement objects after the undistortion process.
To find the laser light plane parameters without knowing the exact positions of the laser and camera relative to each other and the distance from the ground, the laser line is projected onto a chessboard pattern with different poses of the chessboard pattern relative to the camera (Fig. 1). While at least 2 poses of the chessboard are required to construct a mathematical description of the laser plane, the more poses are used, the more accurate the description of the laser light plane that can be constructed.
By knowing the camera intrinsic parameters (focal length, image optical center, distortion coefficients) and the chessboard size, the chessboard rotation and translation vectors (extrinsic parameters) can be calculated for every chessboard image by using object pose estimation. The extrinsic parameters translate the coordinates of a point (X, Y, Z) to a coordinate system fixed with respect to the camera. The rotation and translation transformation is

(x', y', z')ᵀ = R·(X, Y, Z)ᵀ + t,

and the projection back to the image (when z' ≠ 0) is

u = f_x·x'/z' + c_x,  v = f_y·y'/z' + c_y.

By this transformation every point on the chessboard can be converted to real-world 3D points. Due to the fact that the laser line lies on the same chessboard, the real-world 3D location of the laser line can be found.
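The extrinsic transformation and the projection step can be sketched as follows (a minimal illustration under the pinhole model; the function names are illustrative, not the paper's code):

```python
import numpy as np

def chessboard_point_to_camera(P, R, t):
    """Transform a chessboard-frame point (X, Y, Z) into the camera
    coordinate system using the extrinsic rotation R (3x3) and
    translation t (3,): p_cam = R @ P + t."""
    return R @ np.asarray(P, dtype=float) + np.asarray(t, dtype=float)

def project(p_cam, fx, fy, cx, cy):
    """Project a camera-frame point (x', y', z') to pixel coordinates;
    valid only when z' != 0."""
    x, y, z = p_cam
    if z == 0:
        raise ValueError("point lies in the camera plane (z' = 0)")
    return fx * x / z + cx, fy * y / z + cy
```

Applying this transformation to the laser line pixels detected on each chessboard pose yields the 3D points from which the laser plane is fitted below.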
To find the laser light plane parameters A, B, C and D, the least squares fitting (LSF) method is used. LSF determines the parameters of the best-fitted plane A·x + B·y + C·z + D = 0 by minimizing the sum of squared normal distances from all 3D points to the plane:

P_i = |A·x_i + B·y_i + C·z_i + D| / √(A² + B² + C²),

Σ_{i=1..m} P_i² → min,

where P_i denotes the normal distance from the i-th point to the plane and m is the number of points.
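A common way to realize this LSF plane fit is total least squares via SVD, which minimizes exactly the sum of squared normal distances described above (a sketch; the paper does not specify which solver was used):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane A*x + B*y + C*z + D = 0 through 3D points,
    minimizing the sum of squared normal distances (total least squares)."""
    P = np.asarray(points, dtype=float)
    centroid = P.mean(axis=0)
    # The plane normal is the right singular vector belonging to the
    # smallest singular value of the centered point cloud.
    _, _, vt = np.linalg.svd(P - centroid)
    normal = vt[-1]                      # (A, B, C), unit length
    D = -normal @ centroid
    return normal[0], normal[1], normal[2], D
```

Because the normal is returned with unit length, the residual A·x_i + B·y_i + C·z_i + D directly equals the signed normal distance P_i.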

III. DETERMINING THE LASER LIGHT PLANE PARAMETERS WITH PIXEL AND SUB-PIXEL LASER LINE DETECTION METHODS
Two laser line detection methods were used in order to find the laser light plane A, B, C, D parameters. The first method, with pixel resolution, used a second-order Gaussian derivative [11]. The second method, used to detect the laser line from the chessboard image with sub-pixel resolution, corrects the convolution maximum with a parabola, as described in previous work [6].
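The parabola correction of a convolution maximum can be sketched as a three-point parabola fit around the integer peak of the intensity/convolution profile of each image column (a generic sketch of the technique, not the authors' exact code from [6]):

```python
import numpy as np

def subpixel_peak(profile):
    """Refine the integer maximum of a 1-D convolution/intensity profile
    by fitting a parabola through the peak and its two neighbours.
    Returns the peak position with sub-pixel resolution."""
    profile = np.asarray(profile, dtype=float)
    i = int(np.argmax(profile))
    if i == 0 or i == len(profile) - 1:
        return float(i)                  # cannot refine at the borders
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    denom = y0 - 2 * y1 + y2
    if denom == 0:
        return float(i)                  # flat peak: keep pixel resolution
    return i + 0.5 * (y0 - y2) / denom
```

A symmetric profile keeps its integer peak, while an asymmetric one shifts the estimate between two pixels, which is exactly the case the pixel-resolution method cannot resolve.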

IV. RESULTS
In order to see how the A, B, C, D parameters, and thus the measurement results, deviate, a set of chessboard images was chosen for laser light plane A, B, C, D parameter detection. The following A, B, C, D parameters were obtained with the pixel and sub-pixel laser line detection methods, as shown in Table I and Table II.
It can be seen that there are small differences between the obtained A, B and C parameters. The biggest difference is in parameter A, which describes the rotation of the laser light plane. In order to see how laser line detection with pixel or sub-pixel resolution in the calibration process influences the real measurement results, two objects with known dimensions were used for the comparison of the results. The objects, with heights of 6 mm and 25 mm, were placed under the laser line and measured. Before the measurement the optical distortions were corrected, with an average re-projection error of RMS = 0.3122 pixels.
The camera was at a height of ~1874 mm from the floor surface. A closer view shows the objects and the laser line detected from the image with sub-pixel resolution. It can be seen that there is actually a rather noticeable difference between the two measurements. It can also be seen that the deviation is rather systematic and linear from one side of the image to the other. This comes mainly from the fact that the A parameter of the laser light plane, which describes its rotation, was the one most influenced.
Table III gives the deviation of the measurements from the reference objects with different A, B, C, D values. It can be seen that the sub-pixel laser line detection method in the calibration process has improved the measurement results.
V. CONCLUSIONS

This paper presents how the calibration and measurement results depend on the laser line detection method with pixel or sub-pixel resolution in the calibration process. Measurement results are compared with reference objects with known dimensions. The experiments show that there is a noticeable dependence on the laser line detection method in the calibration process when doing sub-millimeter measurements. Sub-pixel laser line detection improves the measurement results. However, if the measurement accuracy can be within a few millimeters, then pixel resolution laser line detection in the calibration process is sufficient.

Figure 2 and Fig. 3 show the differences in the laser line detected from the same chessboard image with the pixel and sub-pixel resolution methods.

Fig. 4. Undistorted input image with two measurement objects.

After the laser line detection from the image, the detected laser line is converted to real-world millimeters using the two sets of A, B, C, D parameters (A, B, C, D with the pixel and sub-pixel laser line detection methods) gained in the calibration process. The gained measurements are from the camera sensor perspective. To get the actual measurements, the floor surface height is subtracted from the initial measurements.

Figure 6 shows the detected laser line in real millimeters with pixel-accuracy A, B, C, D (a) and sub-pixel-accuracy A, B, C, D (b). The two measured objects are clearly distinguishable, of which one object's height is approximately 6 mm and the other's is around 25 mm. The deviation between the pixel-accuracy A, B, C, D (a) and sub-pixel-accuracy A, B, C, D (b) measurements can be seen more clearly in Fig. 7, where the smaller object is shown more closely.

Fig. 5. Measured objects and laser line detected with sub-pixel resolution.

Fig. 6. Measured objects in real millimeters with pixel-accuracy A, B, C, D (a) and sub-pixel-accuracy A, B, C, D (b).

Fig. 7. Measurement of the smaller object in real millimeters with pixel-accuracy A, B, C, D (a) and sub-pixel-accuracy A, B, C, D (b).

By subtracting the measurement results gained with pixel-accuracy A, B, C, D from the measurement results gained with sub-pixel-accuracy A, B, C, D across the whole laser line, the deviation between the measurements becomes clearer.

Fig. 8. Deviation between measurements using the pixel (Z1) or sub-pixel (Z2) resolution laser line detection method in the calibration process with different image sets (a, b, c, d).

Figure 8 shows the deviation between the two measurements Z1 and Z2 across the whole laser line with different image sets (a, b, c, d). With the current setup and image sets the deviation is between -1 mm and +1.2 mm, which is considerable when doing sub-millimeter measurements.

TABLE I. A, B, C, D PARAMETERS WITH PIXEL RESOLUTION OVER VARIOUS IMAGE SETS.

TABLE II. A, B, C, D PARAMETERS WITH SUB-PIXEL RESOLUTION OVER VARIOUS IMAGE SETS.

TABLE III. MEASUREMENTS OF OBJECT HEIGHTS WITH DIFFERENT A, B, C, D.