Robust Vehicle Tracking


Unmanned Aerial Vehicles (UAVs) have been widely used in commercial and surveillance applications in recent years, and vehicle tracking from aerial video is one of the most common applications. In this paper, a self-learning mechanism is proposed for real-time vehicle tracking. The main contribution of this paper is that the proposed system can automatically detect and track multiple vehicles, with a self-learning process that enhances tracking and detection accuracy. Two detection methods are used: the Features from Accelerated Segment Test (FAST) with Histograms of Oriented Gradients (HoG), and the HSV colour feature with the Grey Level Co-occurrence Matrix (GLCM). A Forward and Backward Tracking (FBT) mechanism is employed for vehicle tracking. The main purpose of this research is to increase vehicle detection accuracy by using the tracking results together with a learning process that monitors detection and tracking performance through their outputs. Videos captured from UAVs are used to evaluate the performance of the proposed method. According to the results, the proposed learning system increases detection performance.


INTRODUCTION
Unmanned Aerial Vehicles (UAVs) have become a key research area in recent years in both military and civilian applications. They have the advantages of being small, lightweight, fast and easy to deploy, and they can achieve "zero" casualties, so they can be deployed on extreme missions. Vehicle detection from UAVs has drawn great attention in research areas such as automatic traffic monitoring, aerial surveillance and other security-related applications. UAVs face various challenges; one of the main challenges of detection and tracking is that target objects may change their shapes in the aerial images, or suddenly disappear and reappear during the tracking process. Thus, the detection and tracking process needs to handle several problems. First, the tracking and detection system has to be scale-invariant with respect to the target, to avoid errors caused by the UAV changing its altitude during tracking. Secondly, rotationally invariant features should be considered, as a UAV's flight direction can change rapidly and unpredictably, which changes the direction of the target's movement. Furthermore, the illumination of the targets may vary depending on the flight direction of the UAV and the shooting angle to the targets, and blur can be caused by the UAV's shaking; therefore, transformation invariance is needed. In addition, background confusion and target occlusions may occur. Finally, and most importantly, the detection and tracking process has to run in real time. In this paper, a vehicle tracking and detection method with self-learning is proposed, as shown in Figure 1. In the input video, vehicles are detected automatically using features extracted with Histograms of Oriented Gradients (HoG) [1] and Features from Accelerated Segment Test (FAST) [2], with a Support Vector Machine (SVM) classifier [3].
It is assumed that vehicles have a higher density of corners than other objects in the environment, so finding the distribution of corners is the first step to narrow the area for further HoG processing. The FAST corner detection method can quickly and accurately detect relevant corner points. A second detection method, using the Grey Level Co-occurrence Matrix (GLCM) with the HSV colour feature, has also been used, in order to demonstrate that the proposed self-learning tracking method can increase detection accuracy regardless of the underlying detector.
    




A. FAST-HoG Detection Method

In this detection method, we integrate FAST corner detection with the HoG descriptor, because FAST detection can narrow down the Region of Interest (RoI) for HoG detection and thus reduce the large processing time of the sliding-window process. The FAST detector classifies a pixel p as a corner by performing a simple brightness test on a discretised circle of sixteen pixels around p. A corner is detected at p if there are twelve contiguous pixels in the circle whose intensities are all brighter or all darker than the centre pixel p by a threshold t. In order to perform non-maximal suppression for the final detection, a score function V is evaluated for each candidate corner:

V = max( Σ_{x∈Sbright} |I_x − I_p| − t , Σ_{x∈Sdark} |I_p − I_x| − t )

where Sbright is the subset of pixels in the circle that are brighter than p by the threshold t, and Sdark is the subset of pixels that are darker than p by t. The HoG feature was originally developed for detecting humans. The idea of the HoG descriptor is that the shape of an object can be identified by the distribution of its edges, even without precise information about the edges themselves. However, a weakness of the HoG descriptor is that it is not rotationally invariant. To solve this problem, four different orientations (0, 45, 90 and 135 degrees) of each training sample are used in the proposed method. Each group of orientated training samples has its own classification model, and the final classification model is computed from all four orientated classification models. The extraction of a HoG feature vector starts with colour and gamma normalisation; edges are then detected by convolving the image patch with the simple mask [-1, 0, 1] both horizontally and vertically. The image patch is then subdivided into rectangular regions called cells, and within each cell the gradient of each pixel is computed. In the next step, each pixel casts a vote for an orientation bin of its cell, weighted by the gradient magnitude.
These votes are accumulated into orientation bins spanning the range 0 to 180 degrees, identified by the gradient angle, and stored in a histogram. Local contrast normalisation is used to suppress the effects of changes in illumination, and of contrast with the background, on the gradient magnitude. This step was found to be essential for good performance; it is achieved by grouping cells into larger blocks and normalising within these blocks, ensuring that low-contrast regions are stretched. The HoG feature vectors extracted from the regions of interest are fed into a binary classifier that determines the presence of a vehicle in the image patch. The method uses separate SVMs trained on sample vehicle images categorised into four angular offsets (0, 45, 90 and 135 degrees). These four SVM models are then integrated into a single classifier that evaluates a rotationally invariant response for a single HoG feature vector. Support Vector Machines were chosen as the classification algorithm because they have demonstrated very high accuracy in previous vehicle detection research.
 
