Live Video Streaming from Drone over the Internet (VPN) and Image Detection Using Artificial Intelligence Tools (YOLOv4 & YOLOv4-tiny)

Object detection using YOLO

            One of the most important aspects of any drone mission is the ability to live-stream the video feed from an onboard camera. This live feed can then be viewed at the ground control station or be utilized by Artificial Intelligence (AI) tools like You Only Look Once (YOLO) for object detection. The AI algorithm can be trained for specific drone mission requirements such as surveillance, rescue, disaster relief or intelligent payload delivery. In this chapter I have illustrated the configuration for creating a streaming server on the onboard companion computer to live-stream the video feed over an encrypted and secure VPN link. The configuration has been tested on both the Raspberry Pi 4 and the Jetson Nano. The live feed has been received at a Jetson TX2 and processed for object detection using YOLOv4 and YOLOv4-tiny. However, training the algorithm on a specific dataset has been left outside the scope of this chapter.

Making a video streaming server on a companion computer like Raspberry Pi 4/Jetson Nano

            In order to get a live stream from the drone, we need to create a streaming server on the companion computer. The companion computer can be either a Raspberry Pi 4 or a Jetson Nano; the process is the same. We need to be careful with the directory paths, which will differ between the Raspberry Pi 4 and the Jetson Nano as per your environment.

Step 1.      Install the MJPEG server on the Raspberry Pi 4 or Jetson Nano. First update and upgrade the OS, then download the streameye repository from GitHub.
sudo apt-get update
sudo apt-get upgrade
git clone https://github.com/ccrisan/streameye.git

Downloading streameye github repository for MJPEG streaming server for onboard companion computer

cd streameye
make
sudo make install

If there are no particular drivers required to be installed, all connected video devices/cameras will be displayed by the following Linux command:

ls /dev/video*

Detecting all video devices connected to companion computer

If you are using one of the official Raspberry Pi camera modules, it is important to run the following so that the camera appears as a video device immediately (preferably on autostart):

sudo modprobe bcm2835-v4l2

To get camera or HDMI capture card details on the companion computer:
v4l2-ctl -V

Getting camera/capture card device information

Step 2.             Create a script to run the streameye application installed on the companion computer with our required configuration.

sudo nano run.sh

Add the following commands:

#!/bin/bash
ffmpeg -re -f video4linux2 -i /dev/video0 -s 640x480 -fflags nobuffer -f mjpeg -qscale 8 - 2>/dev/null | streameye

Figure 11.5: Creating content of run.sh, a script for triggering streameye MJPEG server with particular video streaming options
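The same script can also be created non-interactively, as in the sketch below. Note the chmod step, which is easy to miss: systemd will fail to start the service if the script lacks the executable bit.

```shell
# Create run.sh in one go (equivalent to editing it in nano) and make it
# executable so the systemd service can launch it.
cat > run.sh <<'EOF'
#!/bin/bash
ffmpeg -re -f video4linux2 -i /dev/video0 -s 640x480 -fflags nobuffer -f mjpeg -qscale 8 - 2>/dev/null | streameye
EOF
chmod +x run.sh
```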

Now we need to create camera_stream.service to run the camera script run.sh automatically on startup. For this, use the command:
sudo nano /etc/systemd/system/camera_stream.service

Creating startup service file for running run.sh script automatically on startup using systemd

[Unit]
Description=Camera Streaming Service
After=networking.service
StartLimitIntervalSec=0

[Service]
Type=idle
ExecStart=/home/pi/run.sh
Restart=on-failure
RestartSec=1

[Install]
WantedBy=multi-user.target

Press Ctrl+O, Enter, and then Ctrl+X to save and exit. Enable the camera service, then start it and check its status. Once the status shows the service is active and running, it is good to go.

sudo systemctl enable camera_stream.service
sudo systemctl start camera_stream.service
sudo systemctl status camera_stream.service

camera_stream.service successfully created, enabled and started with running status

Receiving video feed on the ground control station application, i.e., Mission Planner


To view the feed in Mission Planner, we first need to install GStreamer version 1.14 for Windows (the MSI package). Then open Mission Planner, right-click on the Mission Planner panel > Video > Set GStreamer Source, and add the link below.

Importing video feed directly into Mission Planner GCS application

Add this link
souphttpsrc location=http://100.96.1.66:8080/mjpeg do-timestamp=true ! decodebin  ! videoconvert ! video/x-raw,format=BGRA ! appsink name=outsink

Successfully imported video feed directly into Mission Planner GCS application

The same feed is simultaneously available through a browser. Open a browser on a ground station connected to the OpenVPN network and type http://100.96.1.66:8080/ to view the live feed. On a local network, use the local IP of the Raspberry Pi 4; on the VPN network, use its VPN IP; and if it is on an open network, use its public IP.
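As a small illustration of choosing the right address, the sketch below builds the stream URL for each case. The helper and the LAN address are hypothetical (the LAN IP is a placeholder; substitute your own); only 100.96.1.66 is the VPN IP used in this chapter.

```shell
# Hypothetical helper to build the stream URL for whichever network the
# ground station is on.
PI_LAN_IP="192.168.1.50"   # placeholder local LAN address (assumption)
PI_VPN_IP="100.96.1.66"    # VPN address used in this chapter
PORT=8080

stream_url() {
  case "$1" in
    lan) echo "http://${PI_LAN_IP}:${PORT}/" ;;
    vpn) echo "http://${PI_VPN_IP}:${PORT}/" ;;
    *)   echo "unknown network: $1" >&2; return 1 ;;
  esac
}

stream_url vpn   # the address to open in the browser when on the VPN
```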

Preparing Jetson TX2 with Ubuntu 18.04 LTS and Jetpack 4.5.1

            Jetson TX2 is the computer on the ground. It will receive the video feed from the drone over the VPN and perform object detection on it using the YOLOv4 Darknet framework. Installing the latest Jetpack 4.5.1 on the Jetson TX2 installs all the prerequisites. The prerequisites required for YOLOv4 are:

  • CMake >= 3.18
  • CUDA >= 10.2
  • OpenCV >= 2.4
  • cuDNN >= 8.0.2
  • GPU with CC >= 3.0
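After flashing, these minimum versions can be checked with a simple "version greater-or-equal" comparison. Below is a minimal sketch using GNU sort's -V (version sort); the helper name and usage are my own, e.g. version_ge "$(cmake --version | awk 'NR==1{print $3}')" 3.18.

```shell
# Minimal version-comparison helper (sketch): succeeds if $1 >= $2.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Examples against the prerequisite list above:
version_ge 3.20 3.18  && echo "CMake version OK"
version_ge 8.0.2 8.0.2 && echo "cuDNN version OK"
```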

Jetpack 4.5.1 installs all these prerequisites as a package. We will flash the Jetson TX2 with a fresh copy of Ubuntu 18.04 LTS via Jetpack 4.5.1. We require a host Linux machine with a minimum of 8 GB RAM, running at least Ubuntu 16.04 LTS; I am using my dual-boot laptop for this. Connect the Jetson TX2 to your laptop using a micro-USB cable, while power to the Jetson TX2 is provided by the power adaptor. Install the Nvidia SDK Manager on the Linux host machine.

 Step 1.      Power the Jetson TX2 with the power adaptor and connect it to the Linux laptop (host machine) over the micro-USB cable.

  • Press the power button and wait for the boot LED to light up.
  • Press the Reset and Recovery buttons together.
  • Release the Reset button.
  • Release the Recovery button 3 seconds later.

Step 2.    Open a terminal window on the host machine running Linux. Use the command lsusb and you should see an "NVidia Corp." entry in the output.

Detecting Jetson TX2 connected to Laptop/PC running Linux Ubuntu ready for configuration

Step 3.      Next, we need to set the configuration in the Nvidia SDK Manager. We are required to set up the SDK Manager by creating an account. The target hardware should be auto-detected as your Jetson TX2. Accept the license agreement and continue.

Using Nvidia SDK Manager application running on host Linux PC/Laptop to configure Jetson TX2 via micro USB cable

Continue to the next step.

Installing Ubuntu 18.04 as the operating system with Jetpack 4.5.1

Step 4.       The Nvidia SDK Manager will ask for your password to complete the installation. Provide the password to continue.

Authorization of superuser/administrator

Step 5   After all packages have been downloaded, Jetson OS will be installed on the Jetson TX2. 

Downloading OS and Jetpack on Jetson TX2

Step 6.     The SDK Manager asks for your Jetson TX2's username and password. Connect a monitor and keyboard to the Jetson TX2, complete Step 7, and then return here to proceed.

Login credentials of Jetson TX2 to proceed

 Step 7.     Complete the SDK Manager installation progress. Configure your Ubuntu installation (language, keyboard type, location, username & password, etc.).

Configuring user profile on Ubuntu 18.04 installed on Jetson TX2

Step 8.      Type your username and password in the SDK Manager, then click "Install".

Downloading remaining packages
Ubuntu 18.04 with Jetpack 4.5.1 successfully installed

Now we have a fresh copy of Ubuntu 18.04 LTS with Jetpack 4.5.1, ready for installation of the YOLOv4 Darknet framework.

Installing VPN as a service to make the Jetson TX2 part of our private VPN

            Now we need to install a VPN client so the Jetson TX2 can receive the video feed over the private VPN from the onboard companion computer. The process is similar to the one done in Chapter 3 above.

Step 1.     First install the OpenVPN application on the Jetson TX2.

sudo apt-get install openvpn

Use the WinSCP application on Windows to transfer the folder VPN_Folder that contains all my .ovpn files. As a reminder, VPN_Folder contains the .ovpn files for the FCU, the companion computer, and the Ground Control Station (GCS). For this AI-based object detection, I created another profile for the Jetson TX2 and named it jetsontx2.ovpn. The rest of the process remains the same as shown in Chapter 4 for transferring the .ovpn file. Again, I first transferred the entire VPN_Folder to the default location using WinSCP, which allows transferring files/folders from Windows to Linux. Then, using the command below, I will move the folder to the OpenVPN directory for execution.

Now move the folder from /home/yash to /etc/openvpn for execution

sudo mv /home/yash/VPN_Folder /etc/openvpn/

Step 2.     Create a systemd file for startup at boot:

sudo nano /etc/systemd/system/vpn.service

Creating vpn.service at /etc/systemd/system for running VPN as a service on startup

[Unit]
Description=VPN Service
After=networking.service
StartLimitIntervalSec=0

[Service]
Type=idle
ExecStart=/usr/sbin/openvpn /etc/openvpn/VPN_Folder/jetsontx2.ovpn
Restart=on-failure
RestartSec=1

[Install]
WantedBy=multi-user.target

Creating content of vpn.service startup file

sudo systemctl enable vpn.service
sudo systemctl start vpn.service
sudo systemctl status vpn.service

sudo systemctl daemon-reload
(only needed to reload the unit file in case changes are made to it later)
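Once vpn.service is active, OpenVPN brings up a tunnel interface (usually tun0). A small sketch to wait for that interface before starting anything that depends on the VPN; the helper is my own, and it is demonstrated here on the always-present loopback interface lo.

```shell
# Wait until a network interface appears (sketch). OpenVPN typically
# creates tun0 when the tunnel is up; pass that name in practice.
wait_for_iface() {  # usage: wait_for_iface <iface> <timeout-seconds>
  for _ in $(seq 1 "$2"); do
    [ -e "/sys/class/net/$1" ] && return 0   # interface exists
    sleep 1
  done
  return 1
}

# Demonstration on the loopback interface, which always exists:
wait_for_iface lo 3 && echo "interface is up"
```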

Using the Darknet framework and the YOLOv4 artificial intelligence algorithm for detecting objects on the live camera feed using the Jetson TX2 located at the ground station

            Now that both the drone and the Jetson TX2 are connected to the VPN (over the Internet), we need to install the Darknet framework to perform object detection on the live video stream. The installation process is as below:

Step 1.       Git clone the darknet repository from GitHub:

git clone https://github.com/AlexeyAB/darknet.git

Step 2.    Move inside the darknet folder.

Darknet folder containing YOLO weights and other configuration file for Artificial Intelligence implementation on Jetson TX2
Changes to Makefile for make install on Jetson TX2

You can amend the Makefile either through the GUI, by connecting a keyboard and monitor to the Jetson TX2, or by using remote SSH.

If doing this through a terminal window or remote SSH on the Jetson TX2, use the commands below. I have made the changes to the Makefile as per my hardware, a Jetson TX2 running Ubuntu. Amend it as per the hardware used.

cd darknet
sudo nano Makefile

Amend the Makefile as above, then press Ctrl+O and Ctrl+X to save and exit.
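For reference, the switches usually changed at the top of the AlexeyAB Darknet Makefile for a CUDA-capable Jetson look like the fragment below. This is a sketch, not my exact file: verify each flag against your copy of the Makefile, and note that the ARCH line here assumes the Jetson TX2's compute capability of 6.2.

```makefile
# Fragment of darknet/Makefile (not a standalone file):
GPU=1        # build with CUDA
CUDNN=1      # use cuDNN
OPENCV=1     # required to read the MJPEG stream and draw detections
# For Jetson TX2 (compute capability 6.2):
ARCH= -gencode arch=compute_62,code=[sm_62,compute_62]
```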

Step 3.    After making the changes to the Makefile, we need to build Darknet with make. Do this in a terminal window or through remote SSH; we are still in the darknet directory.

make

Step 4.   If all goes well, we are ready to test the live video feed for image detection. First make sure the companion computer is powered on, connected to the onboard camera, and accessible over the VPN from the Jetson TX2 via the Internet. The Jetson TX2, being on the ground, can use any type of Internet connection, from fibre-optic (OFC) to a 4G LTE dongle, as per availability. Also, connecting a monitor and keyboard to the Jetson TX2 is preferable to taking remote SSH from the GCS laptop.

Open a browser on the Jetson TX2 and check the availability of the video feed from the companion computer at the URL http://100.96.1.66:8080. You must use the IP address of your own companion computer (VPN or local LAN).

Step 5. Testing the live video feed for object detection using YOLO version 4. Open a terminal window. Note that the pre-trained weight files (yolov4.weights and yolov4-tiny.weights) are not included in the repository and must first be downloaded into the darknet directory; the download links are in the repository README.

If we want to use YOLOv4

cd darknet
./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights http://100.96.1.66:8080/video?dummy=param.mjpg

Executing YOLO version 4 for object detection through terminal on live feed from onboard camera over private VPN
Objects detected with Frame per second (FPS) statistics

If we want to use YOLOv4-tiny

cd darknet
./darknet detector demo cfg/coco.data cfg/yolov4-tiny.cfg yolov4-tiny.weights http://100.96.1.66:8080/video?dummy=param.mjpg

Executing YOLO-tiny version 4 for object detection through terminal on live feed from onboard camera over private VPN
Objects detected with Frame per second (FPS) statistics
Object detection on live video feed

YOLOv4 is more accurate than YOLOv4-tiny; however, YOLOv4-tiny is much faster. Which one to use depends entirely on the requirements and the resources available. A quick visual comparison of the concept is given in the pictures below.

Speed vs accuracy comparison of YOLO versions
