Getting Started With OpenCV and Raspberry Pi

I bought a Raspberry Pi nearly a year ago, always intending to use it in OpenCV experiments. Just recently, I got around to starting the project. The current plan is to make a small robot that first uses computer vision to track and follow an object. The next phase will be obstacle avoidance, which is where the project becomes more of an OpenCV experiment.

 

The main tasks are as follows:

  1. Install and compile OpenCV on the Pi
  2. Track and find centroid of an object
  3. Send out the object centroid over UART
  4. Get a simple motor control system on the auxiliary MCU working
  5. Start developing obstacle avoidance functionality

 

As of this post, the first two tasks are complete and the third is underway. I’m writing the Pi-side software in Python and the auxiliary MCU code (used primarily for motor control) in C. For the MCU, I’ll probably use a TI Hercules LaunchPad, since I’ve got one sitting around.

 

What follows are my notes on getting the Pi up and running. I haven’t used one of these for about 2 years, so I was rusty. Hopefully these notes and linked resources will be helpful to others looking to get OpenCV going on their devices.

 

Pi Bringup

First, the Pi needs to be started up, and we need to verify that we can SSH into it. I’m going to assume that you can boot into Raspbian reliably.

While trying to set up my WiFi connection, I realized that my keyboard layout was wrong: I couldn’t type the tilde and pipe characters, for example, and they’re pretty important for CLI’ing around the device.

Keyboard Layout: To fix this, you need to modify the keyboard configuration. This link had the right instructions for me: https://www.raspberrypi.org/forums/viewtopic.php?f=26&t=39806&p=331516#p331516

In short, run the following commands:

cd /etc/default

sudo nano keyboard

In nano, edit the config file to show the following:

XKBMODEL="pc105"

XKBLAYOUT="us"

To save and exit nano,

ctrl + x
y (to confirm saving)
enter (to keep the file name)

To reboot so the new layout takes effect, run: sudo shutdown -r now

Scroll CLI: To scroll in your CLI, you can run commands and pipe the output to the “less” utility, using the following syntax: command | less
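For example, dmesg | less pages through the kernel log; the space bar pages down and q quits.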

WiFi: I followed this tutorial to get WiFi going. I had to use the tip above to see all the details: https://www.raspberrypi.org/documentation/configuration/wireless/wireless-cli.md
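For reference, the core of that tutorial is adding your network details to wpa_supplicant.conf (the SSID and passphrase below are placeholders):

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

network={
    ssid="YourNetworkSSID"
    psk="YourNetworkPassword"
}

After saving, reboot (or restart the wireless interface) and the Pi should pick up the network.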

SSH: Finally, you can set up SSH access by following these instructions: https://www.raspberrypi.org/documentation/remote-access/ssh/
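Depending on the Raspbian image, SSH may not be enabled out of the box. It can be turned on through sudo raspi-config (look for the SSH option under the advanced/interfacing menu) or with systemd:

sudo systemctl enable ssh
sudo systemctl start ssh

Then, from another machine on the same network: ssh pi@<ip address of Rpi>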

 

OpenCV Install

Now that the Pi is ready to go, we can install OpenCV and all of the relevant utilities for developing with it. The de facto standard OpenCV tutorials are by Adrian over at pyimagesearch.com, and I followed his instructions for getting OpenCV going. Total install and compile time was about 4 hours. The tutorial is here: http://www.pyimagesearch.com/2016/04/18/install-guide-raspberry-pi-3-raspbian-jessie-opencv-3/
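A quick sanity check once the build finishes (this assumes you followed the tutorial’s virtual environment setup and named the environment cv):

workon cv
python -c "import cv2; print(cv2.__version__)"

If the install succeeded, this prints the OpenCV version number rather than an ImportError.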

 

Testing OpenCV

To try out OpenCV, I first wrote up the class to access the Pi Camera. The instructions are here: http://www.pyimagesearch.com/2015/03/30/accessing-the-raspberry-pi-camera-with-opencv-and-python/
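The core of that camera-access pattern looks roughly like the sketch below. This is a simplified version built on the picamera module, not my exact class:

# Minimal sketch: grab frames from the Pi Camera as numpy arrays for OpenCV.
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
import cv2

camera = PiCamera()
camera.resolution = (640, 480)   # same resolution I use later in the project
camera.framerate = 32
raw_capture = PiRGBArray(camera, size=(640, 480))
time.sleep(0.1)                  # give the camera a moment to warm up

for frame in camera.capture_continuous(raw_capture, format="bgr", use_video_port=True):
    image = frame.array          # BGR numpy array, ready for OpenCV
    cv2.imshow("Frame", image)
    key = cv2.waitKey(1) & 0xFF
    raw_capture.truncate(0)      # clear the buffer for the next frame
    if key == ord("q"):
        break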

 

Object Tracking

As a simple intro, I again followed one of Adrian’s tutorials. This time, it was for ball tracking, which is based on colour. http://www.pyimagesearch.com/2015/09/14/ball-tracking-with-opencv/
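The essence of the colour-based tracking, boiled down to the part I care about (a sketch; the HSV bounds are placeholders you’d tune to your own object):

import cv2
import numpy as np

# Placeholder HSV range -- tune these bounds to the colour being tracked.
LOWER = np.array([29, 86, 6])
UPPER = np.array([64, 255, 255])

def find_centroid(frame):
    # Threshold in HSV space, then clean up the mask with erode/dilate.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    mask = cv2.erode(mask, None, iterations=2)
    mask = cv2.dilate(mask, None, iterations=2)

    # findContours returns 2 or 3 values depending on the OpenCV version.
    result = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = result[0] if len(result) == 2 else result[1]
    if not contours:
        return None

    # Assume the largest contour is the object; its centroid comes from image moments.
    c = max(contours, key=cv2.contourArea)
    M = cv2.moments(c)
    if M["m00"] == 0:
        return None
    return (int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"]))

The returned centroid is in pixel coordinates, which is exactly what gets sent downstream later.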

 

Project Development

With those experiments and introductions complete, I started on the tasks for my project. First, I integrated the Pi camera access code with the object tracking. Next, I cleaned up a lot of the code. Some of the pyimagesearch tutorials do things that I view as unnecessary, such as resizing the image before processing. I bring in images from the Pi at 640×480 and found that the code ran faster when processing frames at that size than when I scaled them down to a width of 600px. I also removed all of the image overlays except for the centroid marker and the bounding box, since those are the only ones my project needs.

Next, I examined the example ball tracking code and looked at how the centroids are calculated. It’s pretty simple, and the units are in pixels. It’ll be really straightforward to send these values out to the auxiliary MCU, and I’ll probably only need a P controller to have the robot follow an object of the correct colour. Pretty neat! So far, thanks to the great documentation available for the Pi and for OpenCV, and the ease of use of Python, it has been very straightforward to get this project rolling. I can’t wait to dig deeper into the processing capabilities of OpenCV and keep experimenting.
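Here’s a rough sketch of what the UART side of task 3 might look like with pyserial. The port name, baud rate, and packet format are placeholders, not a settled protocol:

import struct
import serial

# Assumed port and baud rate: /dev/serial0 maps to the UART header pins on
# recent Raspbian images (older images expose it as /dev/ttyAMA0).
uart = serial.Serial("/dev/serial0", baudrate=115200, timeout=1)

def send_centroid(cx, cy):
    # Hypothetical packet: start byte, then x and y as little-endian uint16 pixel values.
    uart.write(struct.pack("<BHH", 0xAA, cx, cy))

# Example: centre of a 640x480 frame.
send_centroid(320, 240)

On the MCU side, the P controller would just take the error between the centroid’s x coordinate and the frame centre and scale it into a wheel speed difference.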

Test subjects: squishy promotional things from industry events.

 

Useful Raspberry Pi Commands

Here’s a list of commands that I use frequently when interfacing with the Pi:

Start desktop from CLI: startx

Leave Python virtual environment: deactivate

Enter Python virtual environment: workon <name of environment>

SSH into Pi with X window forwarding: ssh -X <ip address of Rpi> -l <username on Rpi>
