The FIRST Robotics Competition (FRC) challenges high school students to design and build a robot capable of performing multiple demanding tasks. These annual challenges typically involve computer vision components, such as identifying reflective markers and using them to locate targets. High school computer science curricula rarely cover software engineering topics, let alone advanced topics like computer vision.
To help FRC teams, I have written the robovision Python library. It includes functions useful for the kinds of vision tasks typically involved in an FRC competition. The goal of the library is to reduce and hide some of the complexity involved in target identification, measurement, field orientation, and so on.
Some of the functions included in robovision are:
- Multi-threaded image acquisition from a webcam, IP camera (e.g. an Axis camera), Raspberry Pi camera, or Jetson onboard GStreamer camera
- Lens distortion removal based on a camera calibration created with the provided autocalibrate.py script
- Retroreflective target identification, contour finding, and geometry finding functions
- Image resizing, equalization, brightness and contrast adjustments, and more
- A preprocessor class, which lets you set up a pipeline of functions that will be applied in series to an image
- Overlaying arrows, text, borders, or crosshairs on images
- Rolling (moving) average calculations
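The preprocessor idea above is worth a quick illustration. This is a minimal sketch of the function-pipeline pattern, not robovision's actual API; the class and method names here are placeholders of my own choosing:

```python
class Pipeline:
    """Applies a series of functions, in order, to an input image (or any value)."""

    def __init__(self):
        self._steps = []

    def add(self, func):
        """Append a processing step; returns self so calls can be chained."""
        self._steps.append(func)
        return self

    def apply(self, image):
        """Run the image through every step in the order they were added."""
        for step in self._steps:
            image = step(image)
        return image


# Toy usage with plain numbers standing in for image transforms:
pipe = Pipeline()
pipe.add(lambda x: x * 2).add(lambda x: x + 1)
print(pipe.apply(10))  # 21
```

In a real vision pipeline each step would be an image transform (resize, equalize, threshold), and the same preconfigured pipeline can then be applied to every incoming frame.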
Robovision complements the RobotPy library. It does not duplicate functionality, nor is it meant to replace that excellent project. Team 1518 uses both robovision and the cscore and networktables components of RobotPy.
Requirements

- Python 3.5+ (Python 2 is not supported)
- OpenCV 3.4+ (4.x will probably work, but is untested)
Robovision is meant to run on a coprocessor (e.g. a Jetson or Raspberry Pi) and hasn't been tested on the roboRIO itself. While it probably works on a Windows computer, it was developed and is tested only on macOS and Linux systems.
Installation details are covered in the wiki's Installation and System Setup page.
Here's a sample of how you might use robovision to calculate the distance in inches to a 12" piece of retroreflective tape held horizontally and face-on to an IP camera:
Examples and documentation
I include a selection of example scripts in the extras folder of the GitHub repo. While not exactly production ready, they show how to perform selected vision tasks that might be helpful in an FRC challenge. There are also a couple of scripts to help you perform lens calibration (used to remove lens distortion, also called field flattening).
Be sure to check out the project's wiki for documentation on the library's classes and functions. You'll need the numpy and OpenCV Python packages to use robovision. Some of the example scripts use additional libraries.
FRC Team 1518, Raider Robotics, used robovision successfully in its 2019 Deep Space challenge bot. So another good source of examples (as well as some "incomplete thoughts") is the team's GitHub repository.
Looking ahead, the library needs even more simplification and abstraction to make it easy for FRC teams to use. Additional documentation and examples are also needed. I have not extensively tested or optimized the library for the Raspberry Pi (we're using a Jetson TX2, which has plenty of horsepower for our vision tasks). I very much welcome pull requests and contributions to the project.