Finding Lane Lines on the Road - Part Deuce

The goal of this project (from the Udacity Self-driving Car nanodegree):

In this project, your goal is to write a software pipeline to identify the lane boundaries in a video from a front-facing camera on a car.


I used a combination of computer vision techniques (camera calibration, region-of-interest masking, perspective transforms) to create a software pipeline that processes images and detects traffic lanes.

The pipeline consisted of these components:

Camera Calibration

Cameras typically introduce some distortion into the images they capture, which can make parts of an image appear warped. Since we will be using the images to infer the dimensions of the pictured objects, we need to make sure this distortion is corrected.

Calibration Images


Calibration images are a set of images of various calibration objects that have known attributes. By determining the transformation required to go from the known attributes to the actual attributes displayed in the image, we are able to generate a function that can correct for distortion.

Performing the calibration is relatively straightforward (assuming you have multiple calibration images and are using a chessboard pattern):

  1. For each of the calibration images find all the corners in the image with cv2.findChessboardCorners(image, patternSize[, corners[, flags]])
  2. Generate the transformation matrix for distortion correction using cv2.calibrateCamera(objectPoints, imagePoints, imageSize[, cameraMatrix[, distCoeffs[, rvecs[, tvecs[, flags[, criteria]]]]]]) → retval, cameraMatrix, distCoeffs, rvecs, tvecs

My implementation for this project can be found here:

https://github.com/bayne/CarND-Advanced-Lane-Lines-solution/blob/master/main.py#L9

Saving the calibration

Since the distortion is a property of the camera, we only need to calculate the distortion-correction matrix once. Because I would be running the pipeline many times during development, I saved the matrix to a pickle file and reloaded it from disk instead of recalculating it each run.
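A minimal sketch of this caching approach (the file name and helper are hypothetical, not the project's exact code):

```python
import os
import pickle

CALIBRATION_FILE = "calibration.p"  # hypothetical cache path


def load_calibration(compute_fn, path=CALIBRATION_FILE):
    """Return (camera_matrix, dist_coeffs), computing them only on the
    first run and reloading the pickled result on every run after that."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            data = pickle.load(f)
    else:
        camera_matrix, dist_coeffs = compute_fn()  # the expensive calibration
        data = {"camera_matrix": camera_matrix, "dist_coeffs": dist_coeffs}
        with open(path, "wb") as f:
            pickle.dump(data, f)
    return data["camera_matrix"], data["dist_coeffs"]
```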

Pipeline

The pipeline consisted of 12 tunable parameters that were used to configure how each step ran:

Region of Interest

(example output on the straight_lines1 test image)

I removed the parts of the image that do not contain lane lines by masking out everything outside a specified region.

https://github.com/bayne/CarND-Advanced-Lane-Lines-solution/blob/master/main.py#L439

Distortion Correction

(example output on the straight_lines1 test image)

Using the pre-calculated distortion correction matrix, the next step is to undistort the image:

https://github.com/bayne/CarND-Advanced-Lane-Lines-solution/blob/master/main.py#L131

Perspective Transform

(example output on the straight_lines1 test image)

The image is transformed to a bird’s eye view to help accentuate curvature in the road:

https://github.com/bayne/CarND-Advanced-Lane-Lines-solution/blob/master/main.py#L251

Lane pixel detection

Detecting the lane pixels is done by reducing the image to a binary image of the pixels that belong to lane lines.

Color Threshold

(example output on the straight_lines1 test image)

Color thresholding removes all colors that fall outside a given range:

https://github.com/bayne/CarND-Advanced-Lane-Lines-solution/blob/master/main.py#L145

Edge Detection

(example output on the straight_lines1 test image)

By tuning a Sobel filter to focus on characteristics found in lane lines, I was able to reduce the amount of noise unrelated to lane lines.

https://github.com/bayne/CarND-Advanced-Lane-Lines-solution/blob/master/main.py#L172

Lane Detection

(example output on the straight_lines1 test image)

Lanes are detected using a sliding window that searches for lane pixels, guided by the pixels detected in the previous steps as belonging to the lane:

https://github.com/bayne/CarND-Advanced-Lane-Lines-solution/blob/master/main.py#L274
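The sliding-window search can be sketched as follows for a single lane line (a simplified illustration; the project tracks both lines and overlays the result on the original image):

```python
import numpy as np


def sliding_window_lane(binary, n_windows=9, margin=25, minpix=5):
    """Trace one lane line through a binary image: start at the peak of
    the bottom-half column histogram, then slide a window upwards,
    re-centering it on the mean x of the pixels it captures."""
    h, w = binary.shape
    ys, xs = binary.nonzero()
    # Starting x: the strongest column in the lower half of the image
    histogram = binary[h // 2:, :].sum(axis=0)
    center = int(np.argmax(histogram))
    window_height = h // n_windows
    lane_xs, lane_ys = [], []
    for window in range(n_windows):
        y_low = h - (window + 1) * window_height
        y_high = h - window * window_height
        inside = ((ys >= y_low) & (ys < y_high) &
                  (xs >= center - margin) & (xs < center + margin))
        lane_xs.append(xs[inside])
        lane_ys.append(ys[inside])
        if inside.sum() > minpix:  # enough pixels: follow the line's drift
            center = int(xs[inside].mean())
    return np.concatenate(lane_xs), np.concatenate(lane_ys)
```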

Position & Curvature

(example output on the straight_lines1 test image)

I was able to calculate the curvature and the position of the car with respect to the lane lines by carefully choosing the source_points and destination_points used in the perspective-transform step. Knowing that a lane is 12 feet wide and a dashed lane line is 10 feet long, I was able to create a pixel-to-feet conversion function.

The position of the car with respect to the center of the lane is calculated by finding the offset between the middle of the lane and the middle of the image.

The curvature of the lane is computed with np.polyfit, which finds a best-fit polynomial for the provided lane pixels.

https://github.com/bayne/CarND-Advanced-Lane-Lines-solution/blob/master/main.py#L393
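A sketch of these calculations (the pixel spans behind the conversion factors are illustrative guesses, not the project's tuned values): each lane line is fit with a second-order polynomial x = Ay² + By + C in feet, and the radius of curvature R = (1 + (2Ay + B)²)^(3/2) / |2A| is evaluated at the car's position:

```python
import numpy as np

# Illustrative pixel-to-feet conversions, derived from a 12 ft lane width and a
# 10 ft dashed line measured in the warped image (the actual pixel spans depend
# on the chosen source/destination points).
FEET_PER_PIXEL_X = 12 / 700   # lane assumed ~700 px wide in the warp
FEET_PER_PIXEL_Y = 10 / 90    # one dash assumed to span ~90 px vertically


def curvature_and_position(leftx, lefty, rightx, righty, image_width, y_eval):
    """Radius of curvature (feet) of the left line at y_eval, plus the
    car's offset (feet) from the lane center (camera assumed centered)."""
    # Fit x = Ay^2 + By + C with both axes converted to feet
    A, B, _ = np.polyfit(lefty * FEET_PER_PIXEL_Y, leftx * FEET_PER_PIXEL_X, 2)
    y_ft = y_eval * FEET_PER_PIXEL_Y
    radius = (1 + (2 * A * y_ft + B) ** 2) ** 1.5 / abs(2 * A)
    # Offset: image center vs. midpoint between the two lane lines at the car
    lane_center_px = (np.polyval(np.polyfit(lefty, leftx, 2), y_eval) +
                      np.polyval(np.polyfit(righty, rightx, 2), y_eval)) / 2
    offset_ft = (image_width / 2 - lane_center_px) * FEET_PER_PIXEL_X
    return radius, offset_ft
```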

Problems & Improvements