Smart mobility homepage perception


Transcript of Smart mobility homepage perception

Page 1: Smart mobility homepage perception


2-2-1. Perception of Surroundings

This image is a concept image.

Page 2: Smart mobility homepage perception


2-2-1-1. Vision System – Traffic Signal Perception

1) Traffic Light Perception

Processing pipeline: Detection → Classification → Tracking & Decision

Detection and Classification operate on a single image; Tracking & Decision operates on multiple images. The classifier outputs a per-frame label such as 'Left' or 'Right', and voting over the sequence of frame results ('Left', 'Left', 'Right', 'Left', …) yields the final decision, 'Left'.

- Detection: color-based detection (RGB & HSV[1] thresholding → blob labeling; sketched below)
- Classification: learning-based classification (PCA[2] feature extraction → SVM[3] classifier)
- Tracking & Decision: point tracking & result voting (deterministic tracking[4] → result voting)
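The detection step can be illustrated with a short OpenCV sketch. The HSV range, the blob-size limits, and the function name detect_light_candidates are illustrative assumptions, not the tuned values or names used in the original system.

```python
import cv2

def detect_light_candidates(frame_bgr):
    """Color-based candidate detection: HSV threshold -> blob labeling."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # Threshold for a bright green light (placeholder range, not the real calibration).
    mask = cv2.inRange(hsv, (40, 80, 80), (90, 255, 255))

    # Blob labeling via connected components; keep blobs of plausible size.
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    candidates = []
    for i in range(1, num):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if 20 <= area <= 2000:  # illustrative size limits
            candidates.append((x, y, w, h))
    return candidates
```

Each returned rectangle would then be cropped and handed to the classification stage.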

Page 3: Smart mobility homepage perception


2-2-1-1. Vision System – Traffic Signal Perception

2) Traffic Sign Perception

Processing pipeline: Detection → Classification → Tracking & Decision

Detection and Classification operate on a single image; Tracking & Decision operates on multiple images.

- Detection: learning-based detection (Haar-like feature extraction → cascade classifier[6])
- Classification: learning-based classification (PCA feature extraction → SVM classifier)
- Tracking & Decision: ring buffer & result voting (simple ring buffer → result voting); the detection and voting steps are sketched below
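Below is a minimal sketch of the detection and tracking/decision steps, assuming OpenCV's Haar cascade interface and a Python deque as the ring buffer; the cascade file name sign_cascade.xml, the buffer size, and the class names are hypothetical.

```python
import collections
import cv2

# Hypothetical cascade file; the original project would train its own cascade.
sign_cascade = cv2.CascadeClassifier("sign_cascade.xml")

def detect_signs(gray):
    """Learning-based detection: Haar-like features -> cascade classifier."""
    return sign_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)

class RingBufferVoter:
    """Tracking & decision: simple ring buffer -> result voting."""
    def __init__(self, size=10):
        self.buffer = collections.deque(maxlen=size)  # behaves as a ring buffer

    def push(self, label):
        self.buffer.append(label)

    def decision(self):
        if not self.buffer:
            return None
        # Majority vote over the last `size` per-frame classification results.
        return collections.Counter(self.buffer).most_common(1)[0][0]
```

A voter instance receives one classification result per frame and reports the majority label as the final decision.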

Page 4: Smart mobility homepage perception


2-2-1-1. Vision System – Traffic Signal Perception

3) Library for Developing Perception System

We developed the traffic signal perception system using OpenCV (http://opencv.org/), which provides well-qualified source code. OpenCV was helpful for the following image processing topics:

- Image Processing

- Machine Learning

- Object Detection
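As an example of the machine-learning support mentioned above, the classification stage (PCA feature extraction followed by an SVM classifier) can be sketched with OpenCV's own PCA and SVM APIs; the array shapes, component count, and function names below are assumptions for illustration.

```python
import cv2
import numpy as np

def train_pca_svm(patches, labels, n_components=32):
    """Classification-stage sketch: PCA feature extraction -> SVM classifier.

    `patches` is an (N, D) float32 array of flattened candidate patches and
    `labels` an (N,) integer array of class ids (illustrative shapes).
    """
    # PCA: learn a low-dimensional subspace of the training patches.
    mean, eigenvectors = cv2.PCACompute(patches, mean=None, maxComponents=n_components)
    features = cv2.PCAProject(patches, mean, eigenvectors)

    # SVM: train a multi-class classifier on the projected features.
    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_RBF)
    svm.train(np.float32(features), cv2.ml.ROW_SAMPLE,
              np.int32(labels).reshape(-1, 1))
    return mean, eigenvectors, svm

def classify(patch, mean, eigenvectors, svm):
    """Project one patch into the PCA subspace and predict its class id."""
    feature = cv2.PCAProject(np.float32(patch.reshape(1, -1)), mean, eigenvectors)
    _, result = svm.predict(np.float32(feature))
    return int(result[0, 0])
```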

4) Camera for Developing Perception System

A Dragonfly2 camera made by Point Grey (http://ww2.ptgrey.com/) is used as the image sensor. The Point Grey camera SDK supplies various functions for developing the system.
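For completeness, a generic frame-acquisition loop is sketched below; it uses cv2.VideoCapture as a stand-in, since the Point Grey SDK calls used in the actual system are not shown here, and the device index is an assumption.

```python
import cv2

# Generic acquisition loop feeding frames to the perception pipeline.
cap = cv2.VideoCapture(0)  # stand-in for the Point Grey camera driver
try:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Each frame would be handed to the detection / classification /
        # tracking & decision pipeline described on the previous pages.
        cv2.imshow("frame", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```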

Page 5: Smart mobility homepage perception

2-2-1-1. Vision System – Detection of Lane Markers

1) System Overview

Processing pipeline: Top view → selective Gaussian spatial filters → thresholding → Hough transform → RANSAC line fitting

(1) Top View vs. Perspective View

- Removes perspective effects using inverse perspective mapping
- Focuses on only a subregion of the input image, which helps reduce the run time
- Result data can be transformed directly into real-world coordinates

(2) Selective Gaussian Spatial Filters vs. Edge Detection

- Simpler and more robust than edge detection
- Reduces computing time by using a separable kernel
- Optimized for detecting vertical and horizontal lines

The system therefore combines Top View + Filtering rather than Perspective View + Edge Detection; a sketch of this pipeline follows.
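A minimal sketch of this pipeline is given below, assuming a calibrated road-plane homography (src_pts/dst_pts); the kernel sizes, thresholds, and Hough parameters are illustrative, and the final RANSAC line-fitting refinement is omitted for brevity.

```python
import cv2
import numpy as np

def detect_lane_markers(frame_bgr, src_pts, dst_pts, top_view_size):
    """Lane-marker detection sketch: inverse perspective mapping ->
    separable Gaussian spatial filtering -> thresholding -> Hough transform."""
    # (1) Remove perspective effects: warp to a top (bird's-eye) view.
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    top = cv2.warpPerspective(frame_bgr, H, top_view_size)
    gray = cv2.cvtColor(top, cv2.COLOR_BGR2GRAY)

    # (2) Separable spatial filter: Gaussian smoothing along the lane
    # direction (vertical in the top view), second-derivative response
    # across it, so vertical bright stripes stand out.
    smooth = cv2.getGaussianKernel(9, 2.0)
    d2 = np.array([-1.0, 2.0, -1.0], dtype=np.float32)
    response = cv2.sepFilter2D(np.float32(gray), -1, d2, smooth)

    # (3) Thresholding keeps only strong lane-like responses.
    _, mask = cv2.threshold(np.abs(response), 40, 255, cv2.THRESH_BINARY)

    # (4) Hough transform fits line segments in the top view; their
    # coordinates map directly to the road plane after calibration.
    lines = cv2.HoughLinesP(np.uint8(mask), 1, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return lines
```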

Page 6: Smart mobility homepage perception

2-2-1-1. Vision System – Detection of Lane Markers

2) Result

[Result figures: lane and stop line detection; speed bump detection]

Page 7: Smart mobility homepage perception


2-2-1-2. Lidar System

Processing pipeline: Laser Data Acquisition → Grid Map Generation → Segmentation & Feature Extraction → Mission Detection → Decision System

- Grid Map Generation builds a local grid map from the acquired laser data.
- Segmentation & Feature Extraction uses a scan point clustering algorithm together with line fitting and corner fitting (sketched below).
- Mission Detection maintains an object queue and passes mission and object information (laser) to the Decision System.
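A minimal sketch of the scan point clustering and line fitting steps is given below, assuming an ordered 2-D scan given as (x, y) points in metres; the gap threshold and function names are illustrative, and the grid map and object queue bookkeeping are omitted.

```python
import numpy as np

def cluster_scan_points(points, gap_threshold=0.5):
    """Scan point clustering sketch: split an ordered 2-D laser scan into
    segments wherever two consecutive points are farther apart than
    gap_threshold metres (an illustrative value, not the tuned one)."""
    segments, current = [], []
    for p in points:
        if current and np.linalg.norm(np.asarray(p) - np.asarray(current[-1])) > gap_threshold:
            segments.append(current)
            current = []
        current.append(p)
    if current:
        segments.append(current)
    return segments

def fit_segment_line(segment):
    """Total-least-squares line fit for one segment via SVD.
    A corner could be handled by splitting the segment at the point of
    maximum deviation from this line and fitting two lines (not shown)."""
    pts = np.asarray(segment, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]  # a point on the line and its direction vector
```

Each clustered segment yields line or corner features that can populate the local grid map and the object queue described above.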