KITTI dataset

Purpose

Use the depth information provided by the lidar point cloud.

Approach

Project the lidar's three-dimensional point cloud onto the camera's two-dimensional image.

KITTI dataset introduction

KITTI's data acquisition platform is equipped with four cameras and one lidar; of the four cameras, two are grayscale and two are color.

The coordinate conventions for the camera coordinate system (camera) and the lidar coordinate system (velodyne) are as follows:

camera: x = right, y = down, z = forward

velodyne: x = forward, y = left, z = up

Thus, in the point cloud data collected by the Velodyne, the x coordinate of each point is the required depth information.

More detailed descriptions can be found online; only the information necessary for the current purpose is listed here.

raw_data of the KITTI dataset

For each sequence, raw_data provides synchronized and rectified data as well as calibration data.

Synchronized and rectified data:

./image_XX contains the image sequence collected by camera XX

./velodyne_points contains the lidar scans in point-cloud form; each point is stored as (x, y, z, i), where i is the reflectance value

(When collecting data, the lidar rotates and scans around its vertical axis; the camera is only triggered to capture an image when the lidar rotates to face the same direction as the camera. There is no need to worry about this here: just use the provided synchronized and rectified data, in which the lidar data has already been aligned with the camera data. That is, the image file and the lidar point-cloud file with the same file name can be considered to belong to the same scene.)
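As a quick illustration of the (x, y, z, i) layout, here is a minimal sketch for loading one scan with numpy (the file name is a placeholder):

```python
import numpy as np

# Each scan is a flat binary file of float32 values, 4 per point: x, y, z, reflectance.
scan = np.fromfile("0000000000.bin", dtype=np.float32).reshape(-1, 4)

points = scan[:, :3]        # x, y, z in the lidar frame
reflectance = scan[:, 3]
depth = points[:, 0]        # x (forward) is the depth mentioned above
```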

Calibration data:

./cam_to_cam contains the calibration parameters of each camera

./velo_to_cam contains the transformation parameters from the lidar to the camera

For raw_data, KITTI also provides sample tools that make it easy to read and output the various data files; see the development kit on the raw_data download page of the official website.

Using the devkit provided by KITTI and the calib files of the corresponding dataset

Interpreting the calib folder

cam_to_cam, containing the calibration parameters of each camera:

-S_xx: 1×2 image size of camera xx before rectification
-K_xx: 3×3 calibration (intrinsic) matrix of camera xx before rectification
-D_xx: 1×5 distortion coefficients of camera xx before rectification
-R_xx: 3×3 extrinsic rotation matrix of camera xx
-T_xx: 3×1 extrinsic translation vector of camera xx
-S_rect_xx: 1×2 image size of camera xx after rectification
-R_rect_xx: 3×3 rectifying rotation matrix, used to make the image planes of the cameras co-planar (the original wording is "make image planes co-planar")
-P_rect_0x: 3×4 projection matrix, used to project from the rectified camera 0 coordinate system to the image plane of camera x

Only the last two matrices, R_rect and P_rect, are used here.
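Since the calib files are plain text with one "KEY: values" pair per line, they can be parsed with a few lines of code. A minimal Python sketch (the file path is assumed; the keys match the raw-data calib files):

```python
import numpy as np

def read_calib_file(path):
    """Parse a KITTI calibration file into a dict of numpy arrays."""
    data = {}
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, value = line.split(":", 1)
            try:
                data[key] = np.array([float(v) for v in value.split()])
            except ValueError:
                pass  # skip non-numeric entries such as calib_time
    return data

calib = read_calib_file("calib_cam_to_cam.txt")
R_rect_00 = calib["R_rect_00"].reshape(3, 3)
P_rect_02 = calib["P_rect_02"].reshape(3, 4)  # camera 2 = left color camera
```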

velo_to_cam, the transformation from the lidar coordinate system to the camera 0 coordinate system:

-R: 3×3 rotation matrix
-T: 3×1 translation vector
-delta_f and delta_c have been deprecated

From this, the formula for transforming from the lidar coordinate system to the image coordinate system of camera x can be derived:

Suppose X = [x y z 1]' is a homogeneous coordinate in the lidar coordinate system, and Y = [u v 1]' is the corresponding homogeneous coordinate in the image coordinate system of camera x. Then:

Y = P_rect_0x * R_rect_00 * (R|T) * X

where:

(R|T): lidar coordinate system -> camera 0 coordinate system
R_rect_00: camera 0 coordinate system -> rectified camera 0 coordinate system
P_rect_0x: rectified camera 0 coordinate system -> image plane of camera x
For a more detailed and complete interpretation, please refer to readme.txt in the devkit.
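To multiply the chain out in practice, R_rect_00 and (R|T) are padded to 4×4, as described in the devkit readme. A sketch continuing the parsing example above (camera 2 chosen for illustration):

```python
# Reusing read_calib_file, calib, and P_rect_02 from the sketch above.
velo = read_calib_file("calib_velo_to_cam.txt")

# (R|T) as a 4x4 homogeneous transform: lidar frame -> camera 0 frame.
Tr_velo_to_cam = np.eye(4)
Tr_velo_to_cam[:3, :3] = velo["R"].reshape(3, 3)
Tr_velo_to_cam[:3, 3] = velo["T"]

# R_rect_00 padded to 4x4 so the chain multiplies out.
R_rect = np.eye(4)
R_rect[:3, :3] = R_rect_00

# Full 3x4 projection matrix: lidar frame -> image plane of camera 2.
P_velo_to_img = P_rect_02 @ R_rect @ Tr_velo_to_cam
```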

Interpreting the devkit

run_demoVelodyne.m in the sample code provided on the official website implements the projection of the lidar point cloud onto the camera image.

Code flow

  1. Read the calibration files from the given path to obtain the specific matrix values.
  2. Using the formula above, compute the projection matrix P_velo_to_img, i.e. Y = P_velo_to_img * X.
  3. Read the camera image from the given path and load the lidar point cloud. Since this is for display only, only every 5th point of the point cloud is kept to speed up the computation.
  4. Remove points within 5 meters of the lidar (along the lidar's x direction). (Presumably these points fall between the camera and the lidar, so they would not appear on the image plane.)
  5. Apply the projection to obtain the points projected onto the two-dimensional image (see the sketch after this list).
  6. Draw the projected points on the image, with color determined by depth (the x value in the lidar frame): warm colors near and cool colors far, or, in grayscale, dark near and light far.
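A rough Python equivalent of steps 3–5, continuing the sketches above (drawing and coloring omitted; the variable names are mine, not the devkit's):

```python
# Keep only every 5th point, then drop points within 5 m in front of the lidar.
scan = scan[::5]
scan = scan[scan[:, 0] >= 5]

# Project the homogeneous lidar points onto the image plane.
X = np.hstack([scan[:, :3], np.ones((scan.shape[0], 1))]).T  # 4 x N
Y = P_velo_to_img @ X                                        # 3 x N
u = Y[0] / Y[2]     # pixel column
v = Y[1] / Y[2]     # pixel row
depth = scan[:, 0]  # lidar x, used to color each point by depth
```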

If you need to recover depth values from the resulting depth map, you should write the depth value itself into the pixel (grayscale) value when drawing the projected points, rather than mapping it through a display colormap.
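For instance, one way to keep the depth recoverable is to store it directly in a float image (continuing the variables above; 1242×375 is the typical rectified KITTI image size, assumed here):

```python
# A sparse depth map: pixel value = depth in meters, 0 where no lidar point lands.
h, w = 375, 1242
depth_map = np.zeros((h, w), dtype=np.float32)

ui = np.round(u).astype(int)
vi = np.round(v).astype(int)
valid = (ui >= 0) & (ui < w) & (vi >= 0) & (vi < h)
depth_map[vi[valid], ui[valid]] = depth[valid]
```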

Source: https://www.cnblogs.com/notesbyY/p/10478645.html
