
Software Platform

FTC WIRES Software Platform for easy autonomous coding

(designed for Rookie teams to have a good autonomous mode at their first qualifier!)


New: FTC WIRES Blocks Platform for CENTERSTAGE with OpenCV-based Team Element Detection. Click here for details

Includes an OpenCV-based Vision Processor helper class in OnBot Java for creating vision blocks for Team Element detection, plus sample code for the autonomous path.


Updated: FTC WIRES Software Platform for CENTERSTAGE with SDK 9.0.1 and Roadrunner 1.10 is now released. Click here for details

Integrates the new Roadrunner 1.10 for motion planning, the Vision library for pixel detection, and OpenCV-based Team Element Detection (new), plus sample code for the autonomous path.


Access the code and get instructions to tune and integrate

Click the link below and provide your team name, number, location, and contact information. Detailed instructions and access to the sample source code will be provided. We will also provide continual updates through the season to stay current with FIRST SDK revisions and updates to Roadrunner, and we will be happy to help you integrate the code.

The FTC WIRES Software Platform is a quick-start FTC software platform for coding an autonomous mode for the CENTERSTAGE season.

 

Intention: This platform is targeted at rookie teams and teams who are learning autonomous programming. The aim is to have all teams in Wisconsin have a basic autonomous mode working before their qualifiers.

 

Why: During the FTC WIRES survey in the 2021-22 season, we observed that many rookie and newer teams did not have a working autonomous mode at the early qualifiers. This was a demotivator for those teams as well as their alliance partners. This platform should ease the process of building a good, working autonomous mode in 1-2 days. We released the first FTC WIRES Software Platform in the 2022-23 POWERPLAY season; most Wisconsin teams had an autonomous mode that year, and the platform was adopted by more than 70 teams worldwide as well. By request, we are releasing a new version of the FTC WIRES Software Platform for the CENTERSTAGE 2023-24 season.

Blocks-based platform:

  • This platform is based on the FTC Robot Controller 9.0.1 Blocks platform released by FIRST.

  • It includes sample autonomous modes for operating a mecanum-wheel robot, and a sample TeleOp mode.

  • It also includes an OnBot Java helper class for OpenCV-based vision that identifies the team game element on the spike marks, which determines where to drop the purple pixel and where to drop the yellow pixel on the backdrop in autonomous mode (see the sketch after this list).

  • Using these, an example autonomous mode for CENTERSTAGE is implemented. You can easily modify it to develop your own autonomous mode.
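For reference, here is a minimal sketch of what such an OpenCV-based vision helper class can look like in OnBot Java, in the spirit of the “Learn Java For FTC” approach. The region coordinates, the saturation threshold, and the class and method names other than the SDK's VisionProcessor interface are assumptions you will need to adapt and tune for your own camera placement.

import android.graphics.Canvas;
import org.firstinspires.ftc.robotcore.internal.camera.calibration.CameraCalibration;
import org.firstinspires.ftc.vision.VisionProcessor;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

// Sketch: classifies the team element position by comparing color saturation
// in two regions of the camera frame. The third spike mark is assumed to be
// outside the camera's view.
public class TeamElementProcessor implements VisionProcessor {
    public enum Position { LEFT, MIDDLE, RIGHT }
    private volatile Position position = Position.RIGHT;

    // Hypothetical regions of interest (x, y, width, height in pixels for a
    // 640x480 frame); move these to where YOUR spike marks appear on screen.
    private final Rect leftRegion   = new Rect(40, 200, 180, 160);
    private final Rect middleRegion = new Rect(300, 200, 180, 160);

    private final Mat hsv = new Mat();

    @Override
    public void init(int width, int height, CameraCalibration calibration) { }

    @Override
    public Object processFrame(Mat frame, long captureTimeNanos) {
        // A colored team element is far more saturated than the gray tiles,
        // so the region containing it has the highest mean saturation.
        Imgproc.cvtColor(frame, hsv, Imgproc.COLOR_RGB2HSV);
        double leftSat   = Core.mean(hsv.submat(leftRegion)).val[1];
        double middleSat = Core.mean(hsv.submat(middleRegion)).val[1];

        final double THRESHOLD = 100; // assumed value; tune on your field
        if (leftSat > THRESHOLD && leftSat > middleSat) {
            position = Position.LEFT;
        } else if (middleSat > THRESHOLD) {
            position = Position.MIDDLE;
        } else {
            position = Position.RIGHT; // nothing seen: assume the far spike mark
        }
        return position;
    }

    @Override
    public void onDrawFrame(Canvas canvas, int onscreenWidth, int onscreenHeight,
                            float scaleBmpPxToCanvasPx, float scaleCanvasDensity,
                            Object userContext) { }

    public Position getPosition() { return position; }
}

You would register this processor with a VisionPortal (for example, VisionPortal.easyCreateWithDefaults(webcamName, processor)) and read getPosition() during Init.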

Assumptions:

  • You have a robot that uses mecanum wheels with the motor encoders connected for odometry. We call this drive-encoder odometry.

  • You should also have a webcam connected and positioned so it can see the pixel on the spike mark.

  • The robot design assumes pixels are picked up from the front of the robot and dropped on the backdrop from the back of the robot. (If you have a different orientation, all you need to change is the position coordinates.)

  • You will need to add code for the mechanism that drops the purple pixel, the pixel intake, and the mechanism that drops pixels on the backstage or backdrop.

  • You have a basic understanding of Blocks coding for FTC.

If you answered “No” to one of the assumptions, you can still use the platform:

  • If you are using tank drive, you can still use this, but you will have to modify the functions for moving forward and turning to use your tank drive motors.

Roadrunner 1.10-based platform:

  1. The platform is a fork of road-runner-quickstart, based on the FtcRobotController SDK 9.0.1 released by FIRST.

  2. Roadrunner (Rev 1.10) is the newly released version of the motion planning library developed by Acme Robotics (Ryan Brott). Designed primarily for autonomous robot movement, it allows complex path generation and following while maintaining control of velocity and acceleration, enabling more accurate and advanced path following. We are going to use drive-encoder odometry. (Detailed information is available at https://rr.brott.dev/ and https://github.com/acmerobotics/road-runner-quickstart, but the idea here is to help teams who find those pages overwhelming, so you don't need to look! A short sketch of what a Roadrunner path looks like appears after this list.)

  3. It also includes integration of the VisionPortal TensorFlow detection of the white pixel (the default model, not your customized team game element) to find the spike mark for dropping the purple pixel and the matching location for the yellow pixel on the backdrop in autonomous mode. This code is derived from ConceptTensorFlowObjectDetection.java, provided as an example in the FTC SDK.

  4. New! Autonomous mode sample code has been added with an OpenCV-based Vision Processor for Team Element detection. The code has been adapted from “Learn Java For FTC” by Alan G. Smith (Chapter 16): https://github.com/alan412/LearnJavaForFTC/blob/master/LearnJavaForFTC.pdf

  5. Using these, an example autonomous mode for CENTERSTAGE is implemented. You can easily modify it to develop your own autonomous mode. This also includes a simple tuning process for Roadrunner, as well as an easy way to program positions for autonomous modes based on a robot-centric coordinate system.

  6. It also includes a sample version of TeleOp with motion management based on Roadrunner.
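To give a flavor of item 2, here is a minimal sketch of how a path is built and run with Roadrunner 1.10. It assumes the MecanumDrive class generated by the Roadrunner quickstart (after tuning); the coordinates are placeholders, not real CENTERSTAGE positions.

import com.acmerobotics.roadrunner.Pose2d;
import com.acmerobotics.roadrunner.Vector2d;
import com.acmerobotics.roadrunner.ftc.Actions;
import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;

@Autonomous(name = "Trajectory Sketch")
public class TrajectorySketch extends LinearOpMode {
    @Override
    public void runOpMode() {
        // Robot-centric coordinates: the starting pose is the origin,
        // with the robot facing along +X.
        Pose2d startPose = new Pose2d(0, 0, 0);
        MecanumDrive drive = new MecanumDrive(hardwareMap, startPose);

        waitForStart();
        if (isStopRequested()) return;

        // Drive forward to a spike mark, turn to face the backdrop,
        // then strafe over to it. Distances are in inches (placeholders).
        Actions.runBlocking(
                drive.actionBuilder(startPose)
                        .strafeTo(new Vector2d(26, 0))   // forward to the spike mark
                        .turn(Math.toRadians(90))        // face the backdrop
                        .strafeTo(new Vector2d(26, 34))  // drive to the backdrop
                        .build());
    }
}

Each call on actionBuilder adds a path segment; build() produces a single action that runBlocking follows with velocity and acceleration control.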

Assumptions:

  1. You have a robot that uses mecanum drive (about 80% of FTC teams do): mecanum wheels and motors with encoders connected to the REV hub.

  2. You should also have a webcam connected and positioned so it can see the pixel on the spike mark.

  3. The robot design assumes pixels are picked up from the front and dropped on the backdrop from the back of the robot. (If you have a different orientation, all you need to change is the position coordinates.)

  4. You will need to add code for the mechanism that drops the purple pixel, the pixel intake, and the mechanism that drops pixels on the backstage or backdrop.

  5. You have a basic understanding of Java, using the SDK, and coding in Android Studio.

 

If you answered “No” to one of the assumptions, you can still use the platform:

  1. If you are using Blocks programming, you can still finish your TeleOp mode in Blocks for all your non-drive systems, and then transition to this platform's Java code and add it there.

  2. If you are using tank drive, you can still use this, but you will have to figure out how Roadrunner works with tank drive from the Roadrunner tuning docs.

  3. If you want to use dead-wheel encoders, you can still use this. You just need to make the localizer changes and do the tuning (based on the Roadrunner docs); a sketch of the swap is shown below.
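For item 3, the change is localized to the quickstart's MecanumDrive constructor. A minimal sketch, assuming the ThreeDeadWheelLocalizer class that ships with the Roadrunner 1.10 quickstart (your encoder ports and inPerTick value come from your own tuning):

// Inside MecanumDrive's constructor in the quickstart, replace the
// drive-encoder localizer...
// localizer = new DriveLocalizer();
// ...with the dead-wheel localizer, then rerun the tuning OpModes:
localizer = new ThreeDeadWheelLocalizer(hardwareMap, PARAMS.inPerTick);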

 

Disclaimer:

  1. This is a basic autonomous mode: better than a crude one, but certainly not the best.

  2. This uses only minimal Roadrunner capability in terms of the motion profiles possible. Roadrunner has several additional motion profiles, plus the ability to run actions in parallel or in sequence, stop and start, etc., which are not used here (a sketch of those combinators appears after this list).

  3. Roadrunner also provides the ability to visualize motion on a digital dashboard. This version of the program won't support it, since the coordinate system assumed (for simplicity) is based on the starting position of the robot, while the dashboard requires a field-centric coordinate system.

  4. If you feel you will miss the fun of discovering how to code autonomous the hard way, don't look; this is just to make the journey easier for those who want to start with an example.
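For the curious, this is roughly what the unused combinators from item 2 look like in Roadrunner 1.10. A fragment only: drive and startPose are as in the earlier trajectory sketch, and armToDropPosition() is a hypothetical Action your own robot code would provide.

import com.acmerobotics.roadrunner.ParallelAction;
import com.acmerobotics.roadrunner.SequentialAction;
import com.acmerobotics.roadrunner.SleepAction;
import com.acmerobotics.roadrunner.Vector2d;
import com.acmerobotics.roadrunner.ftc.Actions;

// Drive to the backdrop while the (hypothetical) arm raises, then pause
// briefly before whatever comes next.
Actions.runBlocking(
        new SequentialAction(
                new ParallelAction(
                        drive.actionBuilder(startPose)
                                .strafeTo(new Vector2d(26, 34))
                                .build(),
                        armToDropPosition()  // hypothetical Action from your code
                ),
                new SleepAction(0.5)         // wait half a second
        ));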

How the FTC WIRES autonomous mode works:

The sample code provided implements the autonomous mode based on the instructions in Game Manual 2. There are 4 modes to select from, based on the starting location of the robot (Red Left, Red Right, Blue Left, Blue Right). The starting point of the robot is assumed to be on the starting tile, along the edge farther from the truss legs. You should also have a webcam connected and positioned so it can see the middle spike mark and the spike mark away from the truss (and ideally nothing else). We assume the camera is mounted at the center of the robot.

Blocks-based platform: Autonomous mode

  1. Robot starts in position S marked in the picture (aligned to the tile edge farther from the truss).

  2. On Init, the robot's vision is started and it identifies which spike mark (Left, Middle, or Right) holds the team element.

  3. On Start, the robot first moves to the spike mark detected by vision. The code for dropping the purple pixel needs to be executed at this point.

  4. Robot then moves to the backdrop position corresponding to the spike mark. The code for dropping the yellow pixel on the backdrop needs to be executed at this point.

  5. Robot then moves to the parking position.

These paths are shown in the pictures below. The code contains samples for all 4 starting positions and all 3 spike mark locations.


Roadrunner-based platform: Blue Left and Red Right Autonomous mode

  1. Robot starts in position S marked in the picture (aligned to the tile edge farther from the truss).

  2. On Init, the robot's vision is started and it identifies which spike mark (Left, Middle, or Right) holds the white pixel (see the sketch after this list).

  3. On Start, the robot first moves to position Mb (to avoid hitting the truss legs) and then to the spike mark detected by vision (position P). The code for dropping the purple pixel needs to be executed at this point.

  4. Robot then moves back to M1 (to avoid the purple pixel) and then to position Y (based on the spike mark detected). The code for dropping the yellow pixel on the backdrop needs to be executed at this point.

  5. Robot then moves to the parking position Pk.
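Here is a minimal sketch of the Init/Start flow above, assuming the SDK's TfodProcessor and the quickstart MecanumDrive. The pixel x-coordinate boundaries that map a detection to Left/Middle/Right are assumptions to tune for your camera, and "Webcam 1" must match your robot configuration.

import com.qualcomm.robotcore.eventloop.opmode.Autonomous;
import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;

import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.firstinspires.ftc.robotcore.external.tfod.Recognition;
import org.firstinspires.ftc.vision.VisionPortal;
import org.firstinspires.ftc.vision.tfod.TfodProcessor;

@Autonomous(name = "Vision Flow Sketch")
public class VisionFlowSketch extends LinearOpMode {
    enum SpikeMark { LEFT, MIDDLE, RIGHT }

    @Override
    public void runOpMode() {
        TfodProcessor tfod = TfodProcessor.easyCreateWithDefaults();
        VisionPortal portal = new VisionPortal.Builder()
                .setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"))
                .addProcessor(tfod)
                .build();

        // Default when no pixel is seen: the spike mark outside the camera's
        // view (an assumption; match it to your own camera placement).
        SpikeMark spikeMark = SpikeMark.RIGHT;

        // On Init: keep classifying until the driver presses start.
        while (opModeInInit()) {
            for (Recognition r : tfod.getRecognitions()) {
                double x = (r.getLeft() + r.getRight()) / 2;  // box center
                if (x < 250)      spikeMark = SpikeMark.LEFT;    // assumed boundary
                else if (x < 450) spikeMark = SpikeMark.MIDDLE;  // assumed boundary
                else              spikeMark = SpikeMark.RIGHT;
            }
            telemetry.addData("Detected spike mark", spikeMark);
            telemetry.update();
        }
        if (isStopRequested()) return;

        portal.close();  // release the camera before driving

        // On Start: run the path for the detected spike mark (S -> Mb -> P ->
        // M1 -> Y -> Pk), executing your purple/yellow pixel drop code at P
        // and Y; see the earlier trajectory sketch for how paths are built.
    }
}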


Blue Right and Red Left Autonomous mode

  1. Robot starts in position S marked in the picture (aligned to the tile edge farther from the truss).

  2. On Init, the robot's vision is started and it identifies which spike mark (Left, Middle, or Right) holds the white pixel.

  3. On Start, the robot first moves to position Mb (to avoid hitting the truss legs) and then to the spike mark detected by vision (position P). The code for dropping the purple pixel needs to be executed at this point.

  4. Robot then moves back to M1 and M1a (to avoid the purple pixel) and then to position In, in front of the pixel stack. Code for picking up a pixel needs to be added here.

  5. Robot then moves to position M2 through the central path. If the stage door needs to be opened for the robot to pass, code needs to be added for that.

  6. Robot then moves to position Y (based on the spike mark detected). The code for dropping the yellow pixel on the backdrop needs to be executed at this point.

  7. Robot then moves to the parking position Pk.


