
Abstract

Object detection in images and video has been studied extensively and is deployed in commercial, residential, and industrial environments. Most existing strategies and techniques, however, have significant limitations. One is the scarcity of computational resources at the user level. Others that need to be tackled are inadequate analysis of the training data, dependence on object motion, inability to distinguish one object from another, and sensitivity to object speed and illumination. There is therefore a need to design, apply, and evaluate new detection techniques that address these limitations. In our project we have built a model based on Scalable Object Detection using Deep Neural Networks to localize and track people, cars, potted plants, and 16 other categories in the camera preview in real time. We use Google's large ImageNet visual-recognition package 'inception5h', a model pre-trained on images of the respective categories, which is converted to a graph file using neural networks. The graph usually contains a very large number of nodes, and these are optimized for use on Android. The pre-trained model is used purely for ease and convenience; any set of images can be trained and used in the Android application, although training images requires substantial computational power and more than one GPU-equipped computer. A .jar file built with Bazel is added to Android Studio to support the integration of Java and TensorFlow; this jar file is the key to running TensorFlow on a mobile device. The jar file is built with the help of OpenCV, a library of programming functions aimed mainly at real-time computer vision.
Once this integration is complete, any real-time input to the Android application is predicted with Tiny-YOLO (You Only Look Once, a Darknet reference network). The application supports multi-object detection, which is very useful. All steps run simultaneously at high speed, giving remarkable results and detecting all categories of the trained model under good illumination. The real-time detection network also copes with an acceptable degree of acceleration in moving objects, but it is not very effective in low illumination. Detection is currently limited to 20 categories, but the scope can be broadened with a revised trained model. The 20 categories are "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", and "tvmonitor". The application can be used conveniently on a mobile phone or any other smart device with minimal computational resources, i.e., without internet connectivity. Its main challenges are speed and illumination. Effective results will help in real-time detection of traffic signs and pedestrians from a moving vehicle. This goes hand in hand with similar intelligence in cameras, which can serve as an artificial eye in areas such as surveillance, robotics, traffic monitoring, and facial recognition.
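Multi-object detection of the kind described above typically relies on non-maximum suppression (NMS) to keep only the strongest of several overlapping boxes predicted for the same object; YOLO-style post-processing commonly works this way. The following is a minimal pure-Python sketch of that step; the box coordinates, scores, and threshold are illustrative assumptions, not values from the paper:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring box and
    discard remaining boxes that overlap it above the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Two overlapping detections of one object plus one separate object:
boxes = [(10, 10, 60, 60), (12, 12, 62, 62), (100, 100, 150, 150)]
scores = [0.9, 0.75, 0.8]
print(nms(boxes, scores))  # -> [0, 2]
```

The duplicate box at index 1 overlaps the stronger box at index 0 by more than the threshold and is suppressed, leaving one detection per object.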


/content/papers/10.5339/qfarc.2018.ICTPP417
2018-03-15
2024-03-29