Vehicle Tracking System in Python
You can train a deep learning model for object detection, or you can pick a pre-trained model and fine-tune it on your data. If you're looking to learn about object detection from scratch, I recommend these tutorials. Let's look at some of the exciting real-world use cases of object detection. We will also work with a bus image, which we will fetch from the internet. The five steps are listed below. The CentroidTracker class is covered in the following resources on PyImageSearch. In order to track and calculate the speed of objects in a video stream, we need an easy way to store information regarding the object itself, including its ID, its centroid history, its timestamps, and its estimated speed. To accomplish all of these goals we can define an instance of TrackableObject: open up the trackableobject.py file and insert the following code. The TrackableObject constructor accepts an objectID and a centroid. Lines 122-124 initialize the frame dimensions and calculate meterPerPixel. Vehicles traveling less than this speed will not be logged. Related projects include KITTI data processing and 3D CNN for vehicle detection. Also, as mentioned in the second bullet of point 1, there is a simple project called GooMPy which apparently provides a GUI for the Google Maps API, although I haven't researched it much. What will you need? We can also specify the threshold to view predictions at different confidence levels. This project provides prediction of the speed, color, and size of vehicles with the TensorFlow Object Counting API. One drawback of our automated system is that it is only as good as the key distance constant. Is there a way to optimize the traffic and distribute it through a different street? That said, you will still need to use the workon command to activate your virtual environment. Other related projects include a combined lane and vehicle detection pipeline comparing YOLOv2 and LeNet-5, and a system in which a GSM/GPRS module is used to transmit and update the vehicle location to a database. YOLOv8 Object Tracking using PyTorch, OpenCV, and DeepSORT implements the following tasks: 1. vehicle counting, 2. lane detection, 3. lane change detection, and 4. speed estimation. Your credentials are available at your account dashboard setup page. Instead of relying on RADAR or LIDAR, VASCAR is a simple timing method based on the equation speed = distance / elapsed time. Police use VASCAR where RADAR and LIDAR are illegal, or when they don't want to be detected by RADAR/LIDAR detectors.
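To make that timing equation concrete, here is a minimal sketch of a VASCAR-style speed estimate from two timestamps taken at reference points a known distance apart. The function name and the sample values are illustrative, not taken from the original code.

```python
# Minimal VASCAR-style speed estimate (illustrative, not the tutorial's exact code).
# speed = distance / elapsed time, converted to MPH and KMPH.

def estimate_speed(distance_meters, t_start, t_end):
    """Return (mph, kmph) for an object covering distance_meters between two timestamps (seconds)."""
    elapsed = t_end - t_start
    if elapsed <= 0:
        raise ValueError("end timestamp must be after start timestamp")
    meters_per_second = distance_meters / elapsed
    kmph = meters_per_second * 3.6      # 3600 s/h divided by 1000 m/km
    mph = kmph * 0.621371               # 0.621371 miles in one kilometer
    return mph, kmph

# Example: a car covers 14.94 m (about 49 ft) in 1.2 s.
print(estimate_speed(14.94, 0.0, 1.2))  # roughly (27.9 mph, 44.8 km/h)
```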
Performing object detection only every N frames reduces the expensive inference operations. Here's a GIF demonstrating the idea.
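A rough sketch of that skip-frame pattern is shown below, combining a MobileNet SSD detector with dlib correlation trackers. The model filenames, the value of N, and the confidence threshold are placeholders, and class filtering is omitted for brevity; treat this as an outline of the idea rather than the tutorial's exact implementation.

```python
import cv2
import dlib

N = 10                    # run the detector every N frames (placeholder value)
CONF_THRESHOLD = 0.4      # minimum detection confidence (placeholder value)

# Hypothetical model filenames; substitute your own MobileNet SSD files.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

cap = cv2.VideoCapture("cars.mp4")   # or 0 for a webcam
trackers = []
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    h, w = frame.shape[:2]

    if frame_idx % N == 0:
        # Expensive path: run the SSD and re-seed the correlation trackers.
        trackers = []
        blob = cv2.dnn.blobFromImage(frame, 0.007843, (300, 300), 127.5)
        net.setInput(blob)
        detections = net.forward()
        for i in range(detections.shape[2]):
            # Class filtering (keeping only cars/buses) is omitted for brevity.
            if detections[0, 0, i, 2] < CONF_THRESHOLD:
                continue
            x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
            tracker = dlib.correlation_tracker()
            tracker.start_track(rgb, dlib.rectangle(int(x1), int(y1), int(x2), int(y2)))
            trackers.append(tracker)
    else:
        # Cheap path: just update the existing trackers.
        for tracker in trackers:
            tracker.update(rgb)
            pos = tracker.get_position()
            cv2.rectangle(frame, (int(pos.left()), int(pos.top())),
                          (int(pos.right()), int(pos.bottom())), (0, 255, 0), 2)

    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
    frame_idx += 1

cap.release()
cv2.destroyAllWindows()
```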
vanish_frames: the number of frames the object remains absent from the frame before it is considered vanished. Yes, there is a human component in this verification method. Related repositories include "Vehicle Detection Using Deep Learning and YOLO Algorithm" and a project billed as "MORE THAN VEHICLE COUNTING!". In this tutorial, we will review the concept of VASCAR, a method that police use for measuring the speed of moving objects using distance and timestamps. Put the tape down on the ground at that point. Let's go ahead and load our configuration: Lines 27-33 parse the --conf command line argument and load the contents of the configuration into the conf dictionary. When there is a speed bump, they speed up almost as if they are trying to catch some air! We have a handful more initializations to take care of: for object tracking purposes, Lines 68-71 initialize our CentroidTracker, trackers list, and trackableObjects dictionary. Here we will use OpenCV's CascadeClassifier function to load the pre-trained XML cascade file for cars and apply it to our images. We will use the lr_find() method to find an optimum learning rate. Note: for nighttime use (outside the scope of this tutorial), you may need infrared cameras and infrared lights and/or adjustments to your camera parameters (refer to the Raspberry Pi for Computer Vision Hobbyist Bundle Chapters 6, 12, and 13 for these topics). Step 1: Open the file. Have the helper watch the screen and tell you when you are standing at the very edge of the frame.
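To make the CascadeClassifier step concrete, here is a minimal sketch of loading a car cascade and drawing rectangles around detections. The cascade filename, the input image, and the detectMultiScale parameters are assumptions; use whatever pre-trained car cascade file you have.

```python
import cv2

# Load a pre-trained Haar cascade for cars (filename is an assumption).
car_cascade = cv2.CascadeClassifier("cars.xml")

img = cv2.imread("car_image.jpg")            # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# scaleFactor and minNeighbors are typical starting values, not tuned ones.
cars = car_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=2)

# Draw a red rectangle around every detected car.
for (x, y, w, h) in cars:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imshow("cars", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```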
Why do we need a Region-Based Convolutional Neural Network? We'll also understand how there is a human component that leads to error, and how our method can correct for it. A testing script, speed_estimation_dl_video.py, is included. Finally, we'll deploy and test our system. Rinse and repeat until you are satisfied. This means we can get the locations of the highlighted regions. Distance in pixels is calculated as the difference between the centroids as they pass by the columns for the zone (Equation 1.3). We need to detect multiple objects, i.e., cars, so we use detectMultiScale. The correlation tracker from Davis King's dlib is also part of our object tracking method. Don't worry, you won't burn too much fuel in the process. The first check is whether the top-left y-coordinate of the contour is >= 80 (I am including one more check: x-coordinate <= 200). As we did with the car cascade, we will perform the same contour operations on the bus image and draw a rectangle around the bus if one is detected. assignment_iou_thrd: there might be multiple trackers detecting and tracking the same objects. So, let's use the technique on the above two frames: now we can clearly see the moving objects in the 13th and 14th frames. I highly recommend that you conduct a handful of controlled drive-bys and tweak the variables in the config file until you are achieving accurate speed readings. Vehicle detection, tracking and counting by blob detection with OpenCV on C++. Line 328 begins a loop over our pairs of points: we calculate the distanceInPixels using the position values (Lines 330-331). Let me know if you need any help. So, if we apply contours on the image after the thresholding step, we get the following result: the white regions are surrounded by grayish boundaries, which are nothing but contours. In this article, we will focus on the unsupervised way of object detection in videos, i.e., object detection without using any labeled data. The most noteworthy challenges are operating in real time to accurately locate and classify vehicles in traffic flows, and working around total occlusions. Vehicle detection finds its applications in traffic control, car tracking, creating parking sensors, and many more. We can specify how many epochs we want to train for.
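The frame differencing, thresholding, dilation, and contour steps described above can be sketched roughly as follows. The threshold value, kernel size, and minimum contour area are illustrative, while the y >= 80 and x <= 200 zone checks mirror the values mentioned in the text; treat the whole listing as an outline rather than the article's exact code.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.mp4")    # placeholder video path
kernel = np.ones((4, 4), np.uint8)       # dilation kernel (illustrative size)

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # 1. Frame differencing: moving objects light up in the difference image.
    diff = cv2.absdiff(gray, prev_gray)

    # 2. Thresholding turns the difference into a binary mask.
    _, thresh = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

    # 3. Dilation makes the highlighted regions more solid.
    dilated = cv2.dilate(thresh, kernel, iterations=1)

    # 4. Contours give us the locations of the highlighted regions.
    contours, _ = cv2.findContours(dilated, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)

    for cntr in contours:
        x, y, w, h = cv2.boundingRect(cntr)
        # Keep only contours inside the detection zone (below y = 80, left of x = 200)
        # and above a minimum area, to ignore noise and stationary objects.
        if y >= 80 and x <= 200 and cv2.contourArea(cntr) >= 25:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("moving objects", frame)
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break
    prev_gray = gray

cap.release()
cv2.destroyAllWindows()
```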
Many of us live in apartment complexes or housing neighborhoods where ignorant drivers disregard safety and zoom by, going way too fast. When a vehicle passes the first reference point, the officer presses a button to start the timer. Vehicle detection is one of the most widely used applications of computer vision. Consider the following two frames from a video: can you spot the difference between them? For further reading about VASCAR, please refer to the VASCAR Wikipedia article. I sincerely hope it will make a difference in your neighborhood. With all of our initializations taken care of, let's begin looping over frames: our frame processing loop begins on Line 87. Train dataset: .xml files which capture the image details of the target object. Test dataset: live stream video or recorded video. There are 0.621371 miles in one kilometer (Line 34). Note that there are multiple highlighted regions, and each region is encircled by a contour. Our logFile object will be opened later on (Line 77). Our project relies on a VASCAR approach, but with four reference points. This project aims to count every vehicle (motorcycle, bus, car, cycle, truck, train) detected in the input video using the YOLOv3 object detection algorithm. When this process is done for multiple intersections within the city, an ArcGIS dashboard can be created. It is advisable to get rid of unwanted detections of stationary objects. Note: today's tutorial is actually a chapter from my new book, Raspberry Pi for Computer Vision. You will see a pop-up window with the video playing. See also "Vehicle (car) Detection in Real-Time and Recorded Videos in Python (Windows and macOS)" by Venkatesh Chandra on Analytics Vidhya / Medium. You can download more royalty-free videos here. So, let me show you the zone that we will be working with: the area below the horizontal line y = 80 is our vehicle detection zone. Our system relies on a combination of object detection and object tracking to find cars in a video stream at different waypoints. Line 134 initializes our new list of object trackers to update with accurate bounding box rectangles so that correlation tracking can do its job later. Zero-VIRUS: Zero-shot Vehicle Route Understanding System for Intelligent Transportation (CVPR 2020 AI City Challenge Track 1). You can download vehicle training data from here. Vehicle detection and classification on a video from an Indian highway. Take note of the distance in meters; all your calculations will depend on this value. arcgis.learn provides object detection models based on pretrained convnets, such as ResNet, that act as backbones.
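As a rough sketch of how such an arcgis.learn model could be set up (prepare_data, batch_size, chip_size, and lr_find() are mentioned elsewhere in this write-up; the data path, the RetinaNet model choice, and the epoch count are my own assumptions):

```python
from arcgis.learn import prepare_data, RetinaNet

# Path to a folder of exported, labeled image chips (placeholder path).
data = prepare_data(r"path/to/training_chips", batch_size=8, chip_size=480)

model = RetinaNet(data)          # model choice is an assumption; another detector would also work
lr = model.lr_find()             # find an optimum learning rate
model.fit(10, lr=lr)             # epoch count is illustrative

model.average_precision_score()  # classes with very few examples may score 0.0
model.save("vehicle_detector")
```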
It is evident that the classes that have a score of 0.0 have an extremely low number of examples in the training dataset. Perhaps they will even ask for your data to provide to the city to encourage them to place speed bumps, stop signs, or traffic signals in your area! Helps traffic police: a vehicle detection and counting system could be beneficial for the traffic police because they can monitor everything from one place, such as how many vehicles have crossed a given toll and which vehicles they were. We can count the number of vehicles per unit of time and update a feature layer with the live count of cars, buses, trucks, etc. Real-time multichannel video analysis is significant for intelligent transportation. The video is read in individual frames. Vehicle detection and tracking is a common problem with multiple use cases. I'll explain the code in steps and blocks to help you understand how it works. Step 3: reference the input to your webcam or to the video file saved on your hard drive (mp4 format). Step 4: we will use a pre-trained .xml file which has data on cars, built using individual images. The second argument specifies what operations must be done, and you may need elliptical or circular shaped kernels. After acquiring a series of images from the video, trucks are detected using a Haar Cascade Classifier. Lines 354 and 355 then print the speed in the terminal. We see the warning above because there are a few images in our dataset with missing corresponding label files. The calculated speed is reported in MPH and KMPH. In this article, we will see how vehicle detection can be done with Python and OpenCV using an image file, a webcam, or a video file. We'll then intermittently perform object detection every N frames to re-associate objects and improve our tracking. This project aims to count every vehicle (motorcycle, bus, car, cycle, truck) detected in the input image or video. Line 23 holds a dictionary of the frame's columns (i.e., the reference columns described above). We'll perform object tracking whenever possible to reduce computational load. We need to do a pip install for the OpenCV library.
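A minimal sketch of that input step: install OpenCV, then open either the webcam or an mp4 file and read frames one by one. The elliptical kernel is included because the text mentions such kernels for the morphological operations; the source, kernel size, and window handling are placeholders.

```python
# pip install opencv-python   (one-time setup)
import cv2

# Use 0 for the default webcam, or a path such as "cars.mp4" for a recorded video.
source = 0
cap = cv2.VideoCapture(source)

# An elliptical kernel for later morphological operations (size is a placeholder).
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()          # the video is read in individual frames
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cleaned = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)

    cv2.imshow("input", cleaned)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```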
detect_frames: the number of frames an object must remain present in the frame before tracking starts. Watch this video to learn how to build a geo-tracking application with in-app messaging. The vehicle counting system is made up of three main components: a detector, a tracker, and a counter (code: https://github.com/sarful/vehicle-counting-system-). This is broadly how the frame differencing method works. So, we can apply image dilation over this image: the moving objects now have more solid highlighted regions. This is how we will detect vehicles in all the frames. It's time to stack up the frames and create a video. Next, we will read the final frames into a list. Finally, we will use the below code to make the object detection video. Congratulations on building your own vehicle object detection! There is still scope for improvement. See also the video tutorial "Python Project Tutorial: Vehicle Detection and Counting using OpenCV" by CodeWithKiran. OpenCV 3 & Keras implementation of vehicle tracking with video data. A dictionary of timestamps corresponding to each of the four columns in our frame. Instead, we use an object tracker to lessen the load on the Pi. Here we will see that it creates a rectangle with a red boundary around every car it detects. Here are the steps to create a Django Vehicle Service Management System with source code. So, when we see an object moving in a video, it means that the object is at a different location in every consecutive frame. The model for the classifier is trained using lots of positive and negative images to make an XML file. Note: OpenCV cannot automatically throttle a video file's framerate according to the true framerate. Recall that meterPerPixel is based on (1) the width of the FOV at the roadside and (2) the width of the frame.
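That relationship can be written out directly. The sketch below assumes the configured distance is the width of the camera's field of view at the roadside in meters, and converts a pixel displacement into an estimated speed; the variable names and sample numbers are illustrative.

```python
# meterPerPixel = FOV width at the roadside (meters) / frame width (pixels).
def meters_per_pixel(fov_width_meters, frame_width_px):
    return fov_width_meters / frame_width_px

def speed_kmph(pixel_distance, mpp, elapsed_seconds):
    """Convert a pixel displacement over elapsed_seconds into km/h."""
    distance_meters = pixel_distance * mpp
    return (distance_meters / elapsed_seconds) * 3.6

# Example: a ~14.94 m (49 ft) field of view imaged across a 400-pixel-wide frame.
mpp = meters_per_pixel(14.94, 400)
print(speed_kmph(pixel_distance=120, mpp=mpp, elapsed_seconds=0.35))  # about 46 km/h
```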
Note, however, that these deep learning detectors are supervised approaches, and they require labeled data to train the object detection model. Once training is complete, we can view the results, including the training and validation losses. To avoid cropping, we can set resize_to=480 so that every chip is an entire frame and doesn't miss any object, but there is a risk of poorer detection of smaller objects. Our computer vision system is designed to collect timestamps of cars as they pass reference points separated by a known distance, which is what makes the speed measurement possible. To verify operation, we measured the distance carefully and then conducted drive-bys while watching the speedometer; a screencast of the RPi desktop shows both the video stream and the terminal output during such a run. Your neighbors might think you're weird as you drive back and forth past your house, but just give them a nice smile! Eureka! Let's define our upload_file function now; it will run in one or more separate threads. We'll then initialize our pretrained MobileNet SSD CLASSES and the Dropbox client if required. From there, we'll load our object detector and initialize our video stream: Lines 50-52 load the MobileNet SSD net and set the target processor to the Movidius NCS Myriad.
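For the detector loading step just described, a minimal sketch might look like this. The model filenames are placeholders, the class list is the standard 20-class MobileNet SSD (Caffe) list, and the Myriad target line requires the OpenVINO build of OpenCV.

```python
import cv2

# Hypothetical model filenames; substitute the MobileNet SSD files you downloaded.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

# Send inference to the Movidius NCS (Myriad). This requires the OpenVINO build of
# OpenCV; use the default CPU target if no stick is attached.
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)

CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
           "bus", "car", "cat", "chair", "cow", "diningtable", "dog",
           "horse", "motorbike", "person", "pottedplant", "sheep",
           "sofa", "train", "tvmonitor"]   # standard MobileNet SSD class list
```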
We'll use the prepare_data function to create a fastai databunch with the necessary parameters, such as batch_size and chip_size. The three speed estimates will be averaged for an overall speed (Equation 1.5). The measurement for the "distance" was taken at the side of the road, on the far edges of the FOV lines for the camera. Now we will use another image, i.e., the bus image. If you use speed_estimation_dl_video.py as well as the supplied cars.mp4 testing file, keep in mind that the speeds reported will be inaccurate. Think of it as the train and test datasets of any machine learning model. Line 350 makes a call to the TrackableObject class method calculate_speed to average our three estimatedSpeeds in both miles per hour and kilometers per hour (Equation 1.5). Now that a car's lastPoint is True, we can calculate the speed: when the trackable object's (1) last point timestamp and position have been recorded, and (2) the speed has not yet been estimated (Line 322), we'll proceed to estimate the speed. For the stubborn few who wish to configure their Raspberry Pi 4 + OpenVINO on their own, here is a brief guide: at the end of it, your RPi will have both a normal OpenCV environment and an OpenVINO-OpenCV environment. It is one of the longer scripts we cover in Raspberry Pi for Computer Vision. This video is provided for demo purposes; however, take note that you should not rely on video files for accurate speeds, because the FPS of the video, in addition to the speed at which frames are read from the file, will impact speed readouts. You can download the file here. Refer to the next section, Calibrating for Accuracy, for a real live demo in which a screencast was recorded of the live system in action. For this article, a small GPS dataset is used, collected with the Columbus V-990 logger that I've carried in my pocket for two days.
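Pulling together the attributes referenced above (an objectID, a centroid history, one timestamp and position per reference column, a lastPoint flag, and the averaged speed), a TrackableObject could be sketched as follows. The exact fields of the original trackableobject.py differ in detail; treat this as an approximation of the idea.

```python
import numpy as np

class TrackableObject:
    """Approximate sketch of the trackable object used for speed estimation."""

    def __init__(self, objectID, centroid):
        self.objectID = objectID
        # The centroids list will contain the object's centroid location history.
        self.centroids = [centroid]
        # One timestamp and x-position per reference column (four columns assumed).
        self.timestamps = {"A": 0, "B": 0, "C": 0, "D": 0}
        self.positions = {"A": None, "B": None, "C": None, "D": None}
        self.lastPoint = False       # True once the object has passed the last column
        self.estimated = False       # True once the speed has been computed
        self.speedMPH = None
        self.speedKMPH = None

    def calculate_speed(self, estimatedSpeeds):
        # Average the per-pair estimates (Equation 1.5) in KMPH, then convert to MPH.
        self.speedKMPH = float(np.average(estimatedSpeeds))
        self.speedMPH = self.speedKMPH * 0.621371   # miles per kilometer
        self.estimated = True
```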
Vehicle detection, tracking and counting, where an SVM is trained with HOG features, using OpenCV in C++. A complete list of parameters can be found in the API reference. Our preconfigured .img includes a fix: Abhishek Thanki edited the source code and compiled OpenVINO from source. The way the technology works is that we train the model on various image parameters of the object to be detected (a car in this case), and the model is then used to identify the object in our target image or video. Let's first import the required libraries and modules. These come in the form of commented JSON or Python files. As shown in Figure 3, there are 49 feet between the edges of where cars will travel in the frame, relative to the positioning of my camera. Once you are familiar with these basic concepts, you will be able to build your own detection system for any use case of your choice. Line 119 converts the frame to RGB format for dlib's correlation tracker. In order to utilize the VASCAR method, police must know the distance between two fixed points on the road (such as signs, lines, trees, bridges, or other reference points). Yes, it is the position of the hand holding the pen that has changed from frame 1 to frame 2. Positive direction values indicate left-to-right movement and negative values indicate right-to-left movement. This repository contains a Python implementation of an automatic parallel parking system in a virtual environment that includes path planning, path tracking, and parallel parking. If it is a significant number, we might want to fix this issue by adding the label files for those images or removing those images.
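One simple way to obtain such a signed direction value is to compare an object's latest x-coordinate with the mean of its earlier x-coordinates; the helper below is illustrative and not the project's exact code.

```python
import numpy as np

def movement_direction(centroids):
    """Positive means left-to-right movement, negative means right-to-left (illustrative helper)."""
    xs = [c[0] for c in centroids]
    # Compare the latest x-coordinate against the mean of all previous ones.
    return xs[-1] - np.mean(xs[:-1])

history = [(10, 52), (35, 50), (62, 51), (90, 49)]   # sample centroid history
print(movement_direction(history))  # positive: the car moves left to right
```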
Videos of vehicles passing the camera will be logged to Dropbox. We will first select a zone, and if a vehicle moves into that zone, only then will it be detected. This provides a solution for all of the problems described above. For accurate speeds, you must set up the full experiment with a camera and have real cars drive by. The centroids list will contain an object's centroid location history.
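Since clips are logged to Dropbox and the upload_file helper is meant to run in its own thread, here is a minimal sketch of that idea using the official dropbox package. The access token, file paths, and the exact function signature are placeholders.

```python
import threading
import dropbox

# An access token generated from your Dropbox app console (placeholder value).
DROPBOX_TOKEN = "YOUR_ACCESS_TOKEN"

def upload_file(local_path, remote_path):
    """Upload one logged clip to Dropbox; intended to run in its own thread."""
    dbx = dropbox.Dropbox(DROPBOX_TOKEN)
    with open(local_path, "rb") as f:
        dbx.files_upload(f.read(), remote_path)
    print(f"[INFO] uploaded {local_path} -> {remote_path}")

# Fire-and-forget so the frame processing loop is never blocked by the upload.
t = threading.Thread(target=upload_file,
                     args=("logs/car_0001.mp4", "/vehicle_logs/car_0001.mp4"),
                     daemon=True)
t.start()
```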