Traffic Sign Recognition (TSR)

Introduction

Traffic-sign recognition is a safety technology that detects traffic signs and relays the information displayed on them to the driver through the instrument cluster, infotainment screen, or head-up display. Most TSR systems can identify speed limit, stop, and "do not enter" signs; more sophisticated systems may recognize other types of signs.

The primary purpose of TSR is to support driver attention. The idea is simple: if a driver misses a sign, TSR identifies it and alerts them to its presence so they can react accordingly.

The technology uses forward-facing cameras positioned high on the windshield, generally adjacent to the rearview mirror housing and aimed so they can "see" the signs at the side of the road as the car passes them.

Once the camera captures a sign, the system's software processes the image to establish its classification and meaning, then relays this information to the driver almost instantaneously as an icon or graphic representation of the sign. TSR's ability to accurately identify a sign depends on the speed of the vehicle and its distance from the sign.

Some TSR systems also work in conjunction with adaptive cruise control. For example, if TSR detects a 40-mph speed limit, the system updates the cruise set speed to 40 mph, unless the driver has configured an offset above or below the detected limit.

Which Car Brands Offer TSR?

Because this safety technology is relatively new, not many auto brands include TSR as standard or optional equipment on their models. Premium brands like Audi, BMW, and Mercedes-Benz commonly offer TSR on their models, while safety stalwart Volvo provides the technology on every model in its lineup. Though less common among mainstream brands, several of them, including Ford, Honda, and Mazda, also offer TSR as part of their higher-level ADAS packages on specific models.

 

Camera-based TSR systems available today use the same fundamental principles as the Opel system (one of the first production implementations), but have evolved to detect other types of traffic signs in addition to speed limits. A forward-facing camera is connected to an onboard computer and constantly feeds it live video.

The computer employs computer-vision algorithms and other image-recognition techniques such as Optical Character Recognition (similar to how some software detects text in a scanned document) to determine the shape and content of a traffic sign. Once this information has been determined, it is usually displayed as a graphic within the instrument cluster, making it easy to see, for example, the current speed limit.
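As a toy illustration of the detection step (not the pipeline of any particular production system), circular sign candidates can be located in a camera frame with OpenCV before a classifier or OCR stage reads their content. The file name and all parameter values below are illustrative assumptions:

import cv2
import numpy as np

frame = cv2.imread('frame.jpg')        # one frame from the forward camera (placeholder file)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)         # smooth sensor noise before circle detection

# Speed-limit signs are circular, so look for circle candidates
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=60,
                           param1=100, param2=40, minRadius=10, maxRadius=80)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        candidate = frame[max(y - r, 0):y + r, max(x - r, 0):x + r]
        # ...hand `candidate` to a classifier / OCR stage to read the sign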


Types of signs that can be detected

 

At a basic level, TSR systems can show the local speed limit on a particular road. Depending on the manufacturer, more advanced systems may also detect other traffic signs, such as ‘Stop’, ‘Give Way’, ‘Wrong Way’ and ‘No Entry’ signs, and similarly display these within the instrument cluster.


Types of road signs in the USA

 

Sign shape can also alert roadway users to the type of information a sign displays. Traffic regulations are conveyed by signs that are square or rectangular with the longer dimension vertical. Additional regulatory signs are octagons for stop and inverted triangles for yield. Diamond-shaped signs signify warnings. Rectangular signs with the longer dimension horizontal provide guidance information. Pentagons indicate school zones, and a circular sign warns of a railroad crossing.
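This shape convention can serve as a cheap prior in a TSR system; a hypothetical lookup table capturing the paragraph above might be:

# Hypothetical shape-to-meaning prior based on MUTCD conventions
SIGN_SHAPE_MEANING = {
    'octagon':              'stop',
    'inverted triangle':    'yield',
    'diamond':              'warning',
    'vertical rectangle':   'regulation',
    'horizontal rectangle': 'guidance',
    'pentagon':             'school zone',
    'circle':               'railroad crossing',
}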


Standards for the design and application of these signs, as well as for other traffic control devices, are contained in the Manual on Uniform Traffic Control Devices (MUTCD).

 

Dimension drawings for signs can be found in the Standard Highway Signs book. Both of these books are available in electronic format online at https://mutcd.fhwa.dot.gov.

 

 

This website also contains information on the standard lettering used on highway signs and pavement markings, as well as highway sign color specifications.

The MUTCD groups the signs it illustrates into the following categories:

  • Warning signs

  • Railroad and light rail transit grade crossing signs, including signs warning of limited vehicle storage space between an intersection and the tracks

  • Temporary traffic control signs

  • Regulatory signs, including signs for mandatory movements in lanes at an intersection

  • Guide signs

  • Motorist services and recreation signs

  • Pedestrian and bicycle signs, including pedestrian detour signs and bicycle guide signs

Implementing TSR with a CNN

The examples that follow train a classifier on images of German road signs. You can get the dataset from this link – Data. It contains 4 files –


  • signnames.csv – It has all the labels and their descriptions.

  • train.p – It contains all the training image pixel intensities along with the labels.

  • valid.p – It contains all the validation image pixel intensities along with the labels.

  • test.p – It contains all the testing image pixel intensities along with the labels.


The above files with the .p extension are pickle files, which serialize Python objects into byte streams. They can be deserialized and reused later by loading them with the pickle library in Python.

 

Let’s implement a Convolutional Neural Network (CNN) using Keras in simple, easy-to-follow steps. A CNN consists of a series of convolutional and pooling layers that extract features from the input. A convolutional layer applies many filters, which learn to detect low-level features such as edges. A pooling layer reduces the spatial dimensions, which decreases computation while retaining the dominant features.

 

 

To read more about CNNs, go to this link.

Importing the libraries

We will need the following libraries. Make sure you install NumPy, Pandas, Keras, Matplotlib and OpenCV before implementing the following code.
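A minimal set of imports consistent with the steps below might look like this (standalone Keras is assumed; with recent TensorFlow the same names live under tensorflow.keras):

import pickle                    # to deserialize the .p data files
import numpy as np               # numerical computation
import pandas as pd              # to load signnames.csv
import cv2                       # OpenCV, for image preprocessing
import matplotlib.pyplot as plt  # plotting the loss and accuracy curves

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import to_categorical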


Here, we use NumPy for numerical computation, pandas to import and manage the dataset, Keras to build the Convolutional Neural Network quickly with little code, cv2 (OpenCV) for the preprocessing steps that help the CNN extract features from the images efficiently, and Matplotlib for plotting the training curves.


 
Loading the dataset


Time to load the data. We will use pandas to load signnames.csv, and pickle to load the train, validation and test pickle files. Each pickle file holds a dictionary, which we split into images and labels using the keys "features" and "labels".
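A sketch of this step, using the file names given above (the variable names are our own):

# Label descriptions
signnames = pd.read_csv('signnames.csv')

# Deserialize the pickled image data
with open('train.p', 'rb') as f:
    train = pickle.load(f)
with open('valid.p', 'rb') as f:
    valid = pickle.load(f)
with open('test.p', 'rb') as f:
    test = pickle.load(f)

# Each dictionary holds image arrays under "features" and class IDs under "labels"
X_train, y_train = train['features'], train['labels']
X_val, y_val = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']

print(X_train.shape)
print(X_val.shape)
print(X_test.shape)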


Output:

(34799, 32, 32, 3)

(4410, 32, 32, 3)

(12630, 32, 32, 3)

 
Preprocessing the data using OpenCV
Preprocessing the images before feeding them into the model improves accuracy, as it makes the important features of each image easier to extract. OpenCV has built-in functions like cvtColor() and equalizeHist() for this task. Follow the steps below (a sketch of the whole step appears after the list) –

  • First, the images are converted to grayscale using the cvtColor() function, reducing computation.

  • The equalizeHist() function increases the contrast of the image by redistributing pixel intensities more evenly across the available range.

  • At the end, we normalize the pixel values to the range 0 to 1 by dividing them by 255.
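A sketch of the preprocessing, including the reshape mentioned next (the grayscale conversion drops the channel axis, so we add it back for Keras):

def preprocess(img):
    img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)  # grayscale: one channel, less computation
    img = cv2.equalizeHist(img)                  # spread out pixel intensities for contrast
    return img / 255.0                           # scale pixel values to [0, 1]

X_train = np.array([preprocess(img) for img in X_train])
X_val = np.array([preprocess(img) for img in X_val])
X_test = np.array([preprocess(img) for img in X_test])

# Restore the channel axis: (N, 32, 32) -> (N, 32, 32, 1)
X_train = X_train.reshape(X_train.shape + (1,))
X_val = X_val.reshape(X_val.shape + (1,))
X_test = X_test.reshape(X_test.shape + (1,))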


After reshaping the arrays, it's time to feed them into the model for training. But to improve the accuracy of our CNN, we will add one more step: generating augmented images with the ImageDataGenerator, as in the sketch below.

This reduces overfitting, since training on more varied data produces a model that generalizes better. A value of 0.1 is interpreted as 10%, whereas 10 is the rotation in degrees. We also convert the labels to categorical (one-hot) values, as usual.
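A sketch of the augmentation setup; the specific transformations chosen here (shifts, shear, zoom) are illustrative assumptions, with 0.1 and 10 used as described above:

datagen = ImageDataGenerator(width_shift_range=0.1,   # shift horizontally by up to 10%
                             height_shift_range=0.1,  # shift vertically by up to 10%
                             zoom_range=0.1,          # zoom in or out by up to 10%
                             shear_range=0.1,         # shear by up to 10%
                             rotation_range=10)       # rotate by up to 10 degrees
datagen.fit(X_train)

# One-hot encode the labels for the 43 sign classes
num_classes = 43
y_train = to_categorical(y_train, num_classes)
y_val = to_categorical(y_val, num_classes)
y_test = to_categorical(y_test, num_classes)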


Building the model
As the dataset contains 43 classes of images, we set num_classes to 43. The model contains two Conv2D layers followed by one MaxPooling2D layer; this block is repeated twice for effective feature extraction and is followed by the Dense layers. A dropout layer with a rate of 0.5 is added to avoid overfitting.
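A sketch of such a model; the filter counts, kernel sizes and dense-layer width are illustrative assumptions, while the overall structure, steps_per_epoch=2000 and epochs=10 follow the description and the training log below:

model = Sequential()

# First block: two Conv2D layers, then pooling
model.add(Conv2D(60, (5, 5), activation='relu', input_shape=(32, 32, 1)))
model.add(Conv2D(60, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

# Second block: two more Conv2D layers, then pooling
model.add(Conv2D(30, (3, 3), activation='relu'))
model.add(Conv2D(30, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

# Dense layers, with dropout to fight overfitting
model.add(Flatten())
model.add(Dense(500, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

history = model.fit_generator(datagen.flow(X_train, y_train, batch_size=50),
                              steps_per_epoch=2000,
                              epochs=10,
                              validation_data=(X_val, y_val))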


Output:

Epoch 1/10

2000/2000 [==============================] - 129s 65ms/step - loss: 0.9130 - acc: 0.7322 - val_loss: 0.0984 - val_acc: 0.9669

Epoch 2/10

2000/2000 [==============================] - 119s 60ms/step - loss: 0.2084 - acc: 0.9352 - val_loss: 0.0609 - val_acc: 0.9803

Epoch 3/10

2000/2000 [==============================] - 116s 58ms/step - loss: 0.1399 - acc: 0.9562 - val_loss: 0.0409 - val_acc: 0.9878

Epoch 4/10

2000/2000 [==============================] - 115s 58ms/step - loss: 0.1066 - acc: 0.9672 - val_loss: 0.0262 - val_acc: 0.9925

Epoch 5/10

2000/2000 [==============================] - 116s 58ms/step - loss: 0.0890 - acc: 0.9726 - val_loss: 0.0268 - val_acc: 0.9925

Epoch 6/10

2000/2000 [==============================] - 115s 58ms/step - loss: 0.0777 - acc: 0.9756 - val_loss: 0.0237 - val_acc: 0.9927

Epoch 7/10

2000/2000 [==============================] - 132s 66ms/step - loss: 0.0700 - acc: 0.9779 - val_loss: 0.0327 - val_acc: 0.9900

Epoch 8/10

2000/2000 [==============================] - 122s 61ms/step - loss: 0.0618 - acc: 0.9812 - val_loss: 0.0267 - val_acc: 0.9914

Epoch 9/10

2000/2000 [==============================] - 115s 57ms/step - loss: 0.0565 - acc: 0.9830 - val_loss: 0.0146 - val_acc: 0.9957

Epoch 10/10

2000/2000 [==============================] - 120s 60ms/step - loss: 0.0577 - acc: 0.9828 - val_loss: 0.0222 - val_acc: 0.9939

After successfully compiling the model and fitting it on the training and validation data, let us evaluate it using Matplotlib.
 
Evaluation and testing
Plotting the loss function.
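A sketch using the history object returned by fit_generator above:

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.legend(['training', 'validation'])
plt.title('Loss')
plt.xlabel('Epoch')
plt.show()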


Plotting the accuracy function.
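And similarly for accuracy (the keys 'acc' and 'val_acc' match the training log above; newer Keras versions name them 'accuracy' and 'val_accuracy'):

plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.legend(['training', 'validation'])
plt.title('Accuracy')
plt.xlabel('Epoch')
plt.show()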


Output: plots of the training and validation loss and accuracy curves over the 10 epochs.

As the plots show, the model fits the data well, keeping both the training and validation loss low. Time to evaluate how our model performs on the test data.
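A sketch of the test-set evaluation:

score = model.evaluate(X_test, y_test, verbose=0)
print('Test Loss: ', score[0])
print('Test Accuracy: ', score[1])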


Output:

Test Loss:  0.16352852963907774

Test Accuracy:  0.9701504354899777


Let us check one test image by feeding it into the model. The model predicts class 0 (Speed limit 20 km/h), which is correct.
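A sketch of this check; the test index is arbitrary, and the SignName column name is taken from signnames.csv:

idx = 0  # arbitrary test image, chosen for illustration
plt.imshow(X_test[idx].reshape(32, 32), cmap='gray')
plt.show()

pred = np.argmax(model.predict(X_test[idx].reshape(1, 32, 32, 1)))
print(pred, signnames.loc[pred, 'SignName'])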

