One of the most important aspects of any drone mission is the ability to live-stream video from an onboard camera. This live feed can then be viewed at the ground control station or be utilized by Artificial Intelligence (AI) tools like You Only Look Once (YOLO) for object detection. The AI algorithm can be trained for specific drone mission requirements such as surveillance, rescue, disaster relief or intelligent payload delivery. In this chapter I have illustrated the configuration for creating a streaming server on the onboard companion computer to live-stream the video feed over an encrypted and secure VPN link. The configuration has been tested on both the Raspberry Pi 4 and the Jetson Nano. The live feed has been received at a Jetson TX2 and processed for object detection using YOLOv4-tiny. However, training the algorithm on a specific dataset has been left out of the scope of this chapter.
Video streaming server on companion computer onboard drone

In order to get a live stream from the drone, we need to create a streaming server on the companion computer. The companion computer could be a Raspberry Pi 4 or a Jetson Nano; the process is the same. We need to be careful with the directory paths, which will differ between the Raspberry Pi 4 and the Jetson Nano as per your environment.
Step 1. Install the MJPEG streaming server (streameye) on the Raspberry Pi 4 or Jetson Nano. First update and upgrade the OS, then download the streameye repository from GitHub and build it.
sudo apt-get update
sudo apt-get upgrade
git clone https://github.com/ccrisan/streameye.git
cd streameye
make
sudo make install
If you are using one of the official Raspberry Pi camera modules, it is important to load the V4L2 driver first so that the camera appears immediately (preferably via autostart):

sudo modprobe bcm2835-v4l2

If no particular drivers need to be installed, all connected video devices / cameras (including HDMI capture cards) on the companion computer can be listed with the following Linux command:

ls /dev/video*
Step 2. Create a script to run the streameye application installed on companion computer with our required configuration.
sudo nano run.sh
Add the following command:
ffmpeg -re -f video4linux2 -i /dev/video0 -s 640x480 -fflags nobuffer -f mjpeg -qscale 8 - 2>/dev/null | streameye
Now we need to create a camera_stream.service to auto-run the camera script run.sh on startup. For this, use the command
sudo nano /etc/systemd/system/camera_stream.service
Description=Camera Streaming Service
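Only the Description line of the unit file is shown above. A minimal complete camera_stream.service could look like the sketch below; the run.sh path and the user are assumptions, adjust them for your setup:

```ini
[Unit]
Description=Camera Streaming Service
After=network.target

[Service]
# Path is an assumption: point ExecStart at wherever you saved run.sh
ExecStart=/bin/bash /home/pi/run.sh
Restart=always
User=root

[Install]
WantedBy=multi-user.target
```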
Press Ctrl+O, Enter, and then Ctrl+X to save and exit. Then enable the camera service, start it and check its status. Once the status shows active, the service is good to go.
sudo systemctl enable camera_stream.service
sudo systemctl start camera_stream.service
sudo systemctl status camera_stream.service
Receiving video feed on the ground control station application, i.e., Mission Planner
To view the feed in Mission Planner, we first need to install GStreamer version 1.14 (the Windows MSI package). Then open Mission Planner, right-click on the Mission Planner HUD panel > Video > Set GStreamer Source, and add the following pipeline:
souphttpsrc location=http://100.96.1.66:8080/mjpeg do-timestamp=true ! decodebin ! videoconvert ! video/x-raw,format=BGRA ! appsink name=outsink
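Before wiring the pipeline into Mission Planner, it can be handy to sanity-check it from a command line with gst-launch-1.0 (assuming a desktop GStreamer installation; replace the IP with your companion computer's address). This is the same pipeline with the Mission Planner appsink swapped for an on-screen video sink:

```
gst-launch-1.0 souphttpsrc location=http://100.96.1.66:8080/mjpeg do-timestamp=true ! decodebin ! videoconvert ! autovideosink
```

If a video window opens showing the camera feed, the stream and pipeline are good, and only the Mission Planner side remains to be configured.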
The same feed is simultaneously available through a browser. Open a browser on a ground station connected to the OpenVPN network and type http://100.96.1.66:8080/ to view the live feed. If on a local network, use the local IP of the Raspberry Pi 4; use the VPN IP on the VPN network, or the public IP in case it is on an open network.
Preparing Jetson TX2 with latest Ubuntu 16.04 LTS and Jetpack 4.5.1
Jetson TX2 is the computer on the ground. It will receive the video feed from the drone over the VPN and perform object detection on it using the YOLOv4 Darknet framework. Installing the latest JetPack 4.5.1 on the Jetson TX2 installs all the prerequisites. The prerequisites required for YOLOv4 are:
- CMake >= 3.18
- CUDA >= 10.2
- OpenCV >= 2.4
- cuDNN >= 8.0.2
- GPU with CC >= 3.0
JetPack 4.5.1 installs all these prerequisites as a package. We will refresh the operating system, Ubuntu 16.04 LTS with JetPack 4.5.1, on the Jetson TX2. We require a host Linux machine with at least 8 GB of RAM running Ubuntu 16.04 LTS or later. I am using my dual-boot laptop to do the same. Connect the Jetson TX2 to your laptop using a micro-USB cable, while power to the Jetson TX2 is provided by the power adaptor. Install the Nvidia SDK Manager on the Linux host machine.
Step 1. Power the Jetson TX2 with the power adaptor and connect it to the Linux laptop (host machine) over a micro-USB cable. Then put the board into recovery mode:
- Press the power button and wait for the boot LED to light up.
- Press the reset and recovery buttons together.
- Release the reset button.
- Release the recovery button 3 seconds later.
Step 2. Open a terminal window on the host machine running Linux. Use the command lsusb and you should see an "NVidia Corp." entry in the output.
Step 3. Next, we need to set the configurations in the Nvidia SDK Manager. We are required to set up the SDK Manager by creating an account. The target hardware should be auto-detected as your Jetson TX2. Accept the license agreement and continue.
Step 4. The Nvidia SDK Manager will ask for your password to complete the installation. Provide the password to continue.
Step 5. After all packages have been downloaded, the Jetson OS will be installed on the Jetson TX2.
Step 6. The SDK Manager asks for your Jetson TX2's username and password. Connect a monitor and keyboard to the Jetson TX2, complete Step 7, and then return here.
Step 7. Complete the initial Ubuntu configuration on the Jetson TX2 (language, keyboard type, location, username & password, etc.).
Step 8. Type the username and password into the SDK Manager, then click "Install".
Now we have a fresh copy of Ubuntu 16.04 LTS with JetPack 4.5.1, ready for installation of the YOLOv4 Darknet framework.
Now we need to install a VPN client to receive the video feed over the private VPN from the onboard companion computer. The process is similar to the one done in Chapter III above.
Step 1. First install the OpenVPN application on the Jetson TX2.
sudo apt-get install openvpn
Use the WinSCP application on Windows to transfer the folder VPN_Folder that contains all my .ovpn files. As a reminder, VPN_Folder contains the .ovpn profiles for the FCU, the companion computer and the Ground Control Station (GCS). For this AI-based object detection setup, I created another profile for the Jetson TX2 and named it jetsontx2.ovpn. The rest of the process remains the same as shown in Chapter 4 for transferring the .ovpn files. Again, I transferred the entire VPN_Folder to the default location using WinSCP, which allows copying files/folders from Windows to Linux. Then, using the command below, I move the folder to the OpenVPN configuration location for execution.
Now move the folder from /home/yash to /etc/openvpn for execution
sudo mv /home/yash/VPN_Folder /etc/openvpn/
Step 2 Create systemd file for startup at boot
sudo nano /etc/systemd/system/vpn.service
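The contents of vpn.service are not shown here. A minimal sketch, assuming the jetsontx2.ovpn profile sits in /etc/openvpn/VPN_Folder as moved above (adjust the profile name and path to match your files), could be:

```ini
[Unit]
Description=OpenVPN Client Service
After=network-online.target

[Service]
# Profile name and folder follow the chapter's naming; adjust if yours differ
ExecStart=/usr/sbin/openvpn --config /etc/openvpn/VPN_Folder/jetsontx2.ovpn
Restart=always

[Install]
WantedBy=multi-user.target
```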
sudo systemctl daemon-reload (run this to reload systemd in case changes are made to the service file)
sudo systemctl enable vpn.service
sudo systemctl start vpn.service
sudo systemctl status vpn.service
Using Darknet framework and YOLO v4 artificial intelligence algorithm for detecting objects on live camera feed using Jetson TX2 located at ground station
Installation of Darknet framework

Now that the drone and the Jetson TX2 are both connected to the VPN (over the Internet), we need to install the Darknet framework to do object detection on the live video stream. The installation process is as below:
Step 1. Git clone the darknet repository from GitHub:

git clone https://github.com/AlexeyAB/darknet.git

Step 2. Move inside the darknet folder:

cd darknet
You can amend the Makefile either through the GUI, by connecting a keyboard and monitor to the Jetson TX2, or over remote ssh.
In case you are doing it through a terminal window or remote ssh on the Jetson TX2, use the command below. I have made changes to the Makefile as per my Jetson TX2 hardware; amend it as per the hardware used.
sudo nano Makefile
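For reference, a typical set of Makefile changes for a Jetson TX2 build looks like the following. Treat these as a sketch and match them to your own hardware:

```makefile
GPU=1        # build with CUDA support
CUDNN=1      # use cuDNN for faster convolutions
CUDNN_HALF=0
OPENCV=1     # enable OpenCV for video input and display
LIBSO=0

# The Jetson TX2 GPU has compute capability 6.2
ARCH= -gencode arch=compute_62,code=[sm_62,compute_62]
```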
After amending the Makefile, press Ctrl+O, Enter to save and Ctrl+X to exit.
Step 3. After making changes to the Makefile, we need to compile darknet. Do this in a terminal window or through remote ssh, while still inside the darknet directory:

make
Step 4. If all goes well, we are ready to test the live video feed for object detection. First make sure the companion computer is on and connected to the onboard camera, and that it is accessible over the VPN from the Jetson TX2. The Jetson TX2, being on the ground, can utilize any type of Internet connection, from optical fiber to a 4G LTE dongle, as per availability. Also, connecting a monitor and keyboard to the Jetson TX2 is suggested over taking remote ssh from the GCS laptop.
Open a browser on the Jetson TX2 and check the availability of the video feed from the companion computer at the URL http://100.96.1.66:8080. You must put the IP address of your own companion computer (VPN or local LAN).
Step 5. Test the live video feed for object detection using YOLO version 4. Open a terminal window. (The pre-trained yolov4.weights / yolov4-tiny.weights files must be present in the darknet directory; they are linked from the darknet repository's README.)
If we want to use YOLOv4
./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights http://100.96.1.66:8080/video?dummy=param.mjpg
If we want to use YOLOv4-tiny

./darknet detector demo cfg/coco.data cfg/yolov4-tiny.cfg yolov4-tiny.weights http://100.96.1.66:8080/video?dummy=param.mjpg
YOLOv4 is more accurate than YOLOv4-tiny; however, YOLOv4-tiny is much faster. The choice entirely depends upon the requirements and the resources available. A quick understanding of the concept is given in the pictures below.