The deployment of highly intelligent and efficient machine vision systems has reached new heights in many fields of human activity. In transportation, replacing manual intervention with automated systems has improved safety, security and alertness. Automatic number plate recognition (ANPR) has become a common component of intelligent transportation systems. Beyond the license plate itself, identifying the exact make and model of a car provides additional cues for certain applications; authentication systems, for example, can use the vehicle model as an extra confirmation. Different car models are characterized by the uniqueness of the overall car shape and the position and structure of the headlights, among other attributes. The majority of research works rely on a frontal/rear view of the car for recognition, while some others handle an arbitrary viewpoint. A template matching strategy is usually employed to find an exact match for the query image in a database of known car models. Alternatively, discriminative features can be selected and extracted from the region of interest (ROI) in the car image, and a suitable similarity measure such as the Euclidean distance can then demarcate the various classes/models. The main objective of this paper is to understand the significance of certain detectors and descriptors in the field of car make and model recognition. A performance evaluation of the SIFT, SURF and ORB feature descriptors for implementing a car recognition system is already available in the literature. In this paper, we study the effectiveness of various combinations of feature detectors and descriptors for car model detection. The combinations of six detectors (DoG, Hessian, Harris Laplace, Hessian Laplace, Multiscale Harris, Multiscale Hessian) with three descriptors (SIFT, LIOP and patch) were tested on three car databases.
The Scale Invariant Feature Transform (SIFT), a popular feature detection and description algorithm, allows the user to match different images and spot the similarities between them. Based on keypoint selection and description, the algorithm yields features that are invariant to illumination, scale, noise and rotation variations. Matching between images is carried out using the Euclidean distance between descriptors: for a given keypoint in the test image, the smallest Euclidean distance between its descriptor and all the descriptors of the training image indicates the best match. Our experiments were carried out in MATLAB using the VLFeat toolbox. The DoG-SIFT approach achieved a maximum accuracy of 91.67% on database 1, comprising cropped ROIs of toy car images. On database 2, consisting of cropped ROIs of real car images, Multiscale Hessian-SIFT yielded the maximum accuracy of 96.88%. Database 3 comprised high-resolution real car images with background; testing was conducted on the cropped and resized ROIs of these images, and a maximum accuracy of 93.78% was obtained with the Multiscale Harris-SIFT combination. As a whole, these feature detectors and descriptors succeeded in recognizing the car models with an overall accuracy above 90%.
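The matching rule described above (for each test-image descriptor, the training descriptor at the smallest Euclidean distance is the best match) can be sketched as follows. This is a minimal illustrative NumPy version, not the MATLAB/VLFeat code used in the experiments; the helper name `match_descriptors` and the toy 4-D vectors standing in for 128-D SIFT descriptors are assumptions for the sketch.

```python
import numpy as np

def match_descriptors(test_desc, train_desc):
    """Nearest-neighbour matching by Euclidean distance.

    For each row of test_desc, return the index of the closest row of
    train_desc and the corresponding distance.
    """
    # Pairwise Euclidean distances, shape (n_test, n_train)
    dists = np.linalg.norm(test_desc[:, None, :] - train_desc[None, :, :], axis=2)
    best = dists.argmin(axis=1)                       # index of nearest training descriptor
    return best, dists[np.arange(len(best)), best]    # best-match distances

# Toy 4-D "descriptors" standing in for 128-D SIFT vectors
train = np.array([[0.0, 0.0, 0.0, 0.0],
                  [1.0, 1.0, 1.0, 1.0]])
test = np.array([[0.1, 0.0, 0.0, 0.0]])
idx, dist = match_descriptors(test, train)  # test descriptor matches train[0]
```

In a recognition setting, the query image would then be assigned the class of the car model whose training descriptors accumulate the most such best matches.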

