Vision and Object Tracking

Outline

In 2018 we are using vision to track the power cubes in FIRST Power Up. To do this we use a Raspberry Pi 3, which sends the angle to the cube and the processing time to the roboRIO. This is then used by the robot to drive over to the cube autonomously.

Vision Processing

Our vision this year runs on a Raspberry Pi 3 with a PlayStation Eye camera for capturing images. The first step in the vision process is setting up the camera and camera server; to do this we use cscore, a part of RobotPy. We then grab each frame and use OpenCV to generate a 'mask', which is a filter that discards any part of the image that isn't the colour we want - in this case yellow. From the mask we generate contours, which define each blob of yellow as a separate object. Each blob which is big enough is then tracked, and its angle is found using the formula atan(-[centre of contour] / [focal length]). This angle is then sent to the roboRIO.

Message Passing

To pass our data to the roboRIO we use NetworkTables, also a part of RobotPy.

Source Code

Source code is available in the thedropbears repository on GitHub.