
PowerPlay Programming & Sensing: Initial Approach

Dec 29, 2022 | News, Programming

Photo of a custom signal sleeve

We approached the programming of our robots for PowerPlay by breaking the work into the challenges we would face during typical gameplay: navigating the field, operating during the autonomous period, and computer vision. For each challenge we started with an initial approach, made improvements based on our first experiences, and noted a future direction for improving our robots' performance. Below are our notes on the process:

 

1. Navigation

Initial:

We reused our encoder-based autonomous navigation code from last year's challenge, Freight Frenzy. Because we are using different motors, wheels, and a different chassis this year, we updated the mapping from encoder values to rotational and translational robot motion.
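
The remapping itself is a small piece of arithmetic: ticks per motor revolution, the gear ratio, and the wheel circumference determine how many encoder ticks correspond to a given travel distance. Below is a minimal sketch of that conversion in Java; the constants are illustrative placeholders, not our actual drivetrain numbers.

    public class DriveConstants {
        // Placeholder values -- fill these in from your motor and wheel specs.
        static final double TICKS_PER_REV     = 537.7; // encoder ticks per motor revolution
        static final double GEAR_RATIO        = 1.0;   // motor revolutions per wheel revolution
        static final double WHEEL_DIAMETER_IN = 3.78;  // 96 mm wheel, in inches

        // Convert a desired travel distance into an encoder target position.
        static int inchesToTicks(double inches) {
            double wheelCircumference = Math.PI * WHEEL_DIAMETER_IN;
            return (int) Math.round((inches / wheelCircumference) * GEAR_RATIO * TICKS_PER_REV);
        }
    }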

Improvement:

An improvement we are going to make is to implement Road Runner, which uses the built-in inertial measurement units (IMUs) in the Control and Expansion Hubs. This should help correct the navigation if the robot veers off course.
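
Road Runner handles the heading correction through its localizer, but the underlying sensor read is something the FTC SDK exposes directly. Here is a rough sketch using the SDK's BNO055IMU driver for the hub's built-in IMU; the hardware-map name "imu" is the usual default, and this code would live inside an OpMode.

    import com.qualcomm.hardware.bosch.BNO055IMU;
    import org.firstinspires.ftc.robotcore.external.navigation.AngleUnit;
    import org.firstinspires.ftc.robotcore.external.navigation.AxesOrder;
    import org.firstinspires.ftc.robotcore.external.navigation.AxesReference;
    import org.firstinspires.ftc.robotcore.external.navigation.Orientation;

    // Get and initialize the hub's IMU from the hardware map.
    BNO055IMU imu = hardwareMap.get(BNO055IMU.class, "imu");
    BNO055IMU.Parameters imuParams = new BNO055IMU.Parameters();
    imuParams.angleUnit = BNO055IMU.AngleUnit.DEGREES;
    imu.initialize(imuParams);

    // Read the current heading (rotation about the vertical axis).
    // Road Runner polls this same sensor to correct drift in its pose estimate.
    Orientation angles = imu.getAngularOrientation(
            AxesReference.INTRINSIC, AxesOrder.ZYX, AngleUnit.DEGREES);
    double headingDeg = angles.firstAngle;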

Future Direction:

We also plan on using the navigation images to line up directly with the stack of cones during the autonomous period.

 

2. Autonomous

Initial:

We were able to modify the example code from REV Robotics to correctly identify the images on the signal cone. One nice feature of this year's game, as opposed to last year's, is that the field is symmetrical between alliance sides, which means we can use the same code regardless of which alliance we are on.

Improvement:

Since meet zero, we have been able to use the FTC machine learning tool to train a model for our custom signal sleeve. Because of the mirror symmetry between starting positions, we will need separate code depending on whether we start on the left or right side of the alliance area. This will allow us to place our pre-loaded cone on the tallest junction.
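
Rather than maintain two complete copies of the routine, we expect to parameterize a single routine by starting side and mirror the side-to-side motions. A sketch of the idea, with illustrative names rather than our actual code:

    enum StartSide { LEFT, RIGHT }

    // Forward/backward moves are identical from either start tile; only
    // strafes and turns flip under the mirror symmetry.
    static double mirror(double value, StartSide side) {
        return (side == StartSide.LEFT) ? value : -value;
    }

    // Usage: strafe toward the junction from either start tile.
    // drive.strafe(mirror(24.0, startSide)); // hypothetical drive helper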

Future Direction:

In the future, our goal is to place multiple cones on junctions during the autonomous period.

3. Computer Vision

Initial:

We were able to use the pre-trained model and example code for detecting the stock signal sleeve.

Improvement:

We have since made our own sleeve for the signal cone and trained a TensorFlow model using the FTC machine learning tool. On the first sleeve we made, we put only a single image on each side, and the model had a hard time correctly detecting the images. We re-made the sleeve with more images on each side, which improved the model's performance.
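
Loading the custom model follows the SDK's TensorFlow object detection samples: create a TFObjectDetector on top of Vuforia, then point it at the .tflite file exported from the FTC machine learning tool. A sketch of that setup; the file name and labels are placeholders standing in for our sleeve's actual ones, and it assumes a VuforiaLocalizer named vuforia was already created with a valid license key and webcam.

    import java.util.List;
    import org.firstinspires.ftc.robotcore.external.ClassFactory;
    import org.firstinspires.ftc.robotcore.external.tfod.Recognition;
    import org.firstinspires.ftc.robotcore.external.tfod.TFObjectDetector;

    // Labels must match the ones used when training in the FTC ML tool.
    String[] LABELS = { "side1", "side2", "side3" }; // placeholder labels

    TFObjectDetector.Parameters tfodParams = new TFObjectDetector.Parameters();
    tfodParams.minResultConfidence = 0.75f; // ignore low-confidence detections
    TFObjectDetector tfod = ClassFactory.getInstance()
            .createTFObjectDetector(tfodParams, vuforia);

    // Models exported from the ML tool get copied onto the Robot Controller.
    tfod.loadModelFromFile("/sdcard/FIRST/tflitemodels/customSleeve.tflite", LABELS);
    tfod.activate();

    // Poll for detections; returns null when nothing new has been seen.
    List<Recognition> recognitions = tfod.getUpdatedRecognitions();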

Future Direction:

We will eventually use Vuforia to find the navigation images and line up precisely with the stack of cones, since there is a navigation image right next to the stack.
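
The SDK's ConceptVuforiaFieldNavigation sample shows the pattern we expect to follow: load the season's image targets, check which one is visible, and read the robot's field pose from it. A rough sketch, again assuming vuforia is already initialized; the asset name follows the SDK sample for this season, and a full version would also set the target and camera locations before the pose is meaningful.

    import org.firstinspires.ftc.robotcore.external.matrices.OpenGLMatrix;
    import org.firstinspires.ftc.robotcore.external.navigation.VuforiaTrackable;
    import org.firstinspires.ftc.robotcore.external.navigation.VuforiaTrackableDefaultListener;
    import org.firstinspires.ftc.robotcore.external.navigation.VuforiaTrackables;

    // Load the season's navigation image targets and start tracking.
    VuforiaTrackables targets = vuforia.loadTrackablesFromAsset("PowerPlay");
    targets.activate();

    for (VuforiaTrackable trackable : targets) {
        VuforiaTrackableDefaultListener listener =
                (VuforiaTrackableDefaultListener) trackable.getListener();
        if (listener.isVisible()) {
            // Robot pose on the field, derived from the image; we can
            // drive on this until we are square with the cone stack.
            OpenGLMatrix robotLocation = listener.getUpdatedRobotLocation();
            if (robotLocation != null) {
                telemetry.addData("Visible target", trackable.getName());
            }
        }
    }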

Watch our YouTube video short to see our dueling autonomous robots, FTC Teams 9887 and 20799:
