
Software Platform

The FTC WIRES Software Platform is a quick-start FTC software platform for coding an Autonomous mode for the Powerplay season. https://github.com/ftcwires/FtcRobotController-PowerPlay

 

Intention: This platform is targeted at rookie teams or teams who are learning autonomous programming. The aim is to have all teams in Wisconsin have a basic autonomous mode working before their qualifiers.

 

Why: During the FTC WIRES survey in the 2021-22 season, it was observed that many of the rookie teams and newer teams did not have a working autonomous mode in the early qualifiers. This was a demotivator for those teams as well as their alliance partners. This platform should ease the process of building a good, working autonomous mode in 1-2 days.

 

What does the platform contain : 

  1. The platform is a fork of the FtcRobotController SDK 8.0 released by FIRST.

  2. On it, the motion planning library roadrunner, by Acme Robotics, is integrated. Designed primarily for autonomous robot movement, it allows complex path generation and following while maintaining control of velocity and acceleration. This gives robots more accurate and advanced path-following capabilities. We are going to use Drive Encoder based odometry. (Detailed information on this is available at www.learnroadrunner.com or https://acme-robotics.gitbook.io/road-runner/, but the idea here is to help teams who find those pages overwhelming, so don't look yet!)

  3. It also includes vision detection of the signal cone (the default one, not your customized signal sleeve) to find the parking location in autonomous mode. This code is derived from ConceptTensorFlowObjectDetection.java, provided as an example in the FTC SDK.

  4. Using these, a partially completed example Autonomous mode for Powerplay is implemented. You can modify this easily to develop your own autonomous mode that works from all 4 starting positions in Powerplay - Blue Left, Blue Right, Red Left and Red Right.

 

Assumptions : 

  1. You have a robot that uses mecanum drive, i.e. mecanum wheels and motors with encoders connected to the REV hub (80% of FTC teams use this).

  2. Your robot has a mechanism that can pick cones from a stack and drop them on a junction.

  3. You have a webcam connected and positioned to identify the signal cone.

  4. You have a basic understanding of Java, the FTC SDK, and coding in Android Studio.

 

If you answered “No” to one of the assumptions, you can still use the platform :

  1. If you are using tank drive, you can still use this, but you will have to figure out how roadrunner works with tank drive.

  2. If you are using block programming, you can finish your TeleOp mode in blocks for all your non-drive systems, then transition to this platform, get into the Java code, and add those systems there.

 

Disclaimer : 

  1. This is a basic autonomous mode - better than a crude one, but it is certainly not the best.

  2. If you feel you will miss the fun of discovering how to code autonomous the hard way - don't look; this will make the journey easy.

 

 

How will the Autonomous Mode work : 

Below is the picture from Game Manual 2 that explains the field with respect to autonomous mode. Notice that on the right-side image of the field there are cartesian coordinates marked on the field. That axis notation will be used to denote any position on the field. The quadrants are also divided, with each one corresponding to a start position of the robot, e.g. Red Left is when the robot starts on Tile F2.

[Figure: Powerplay field diagram from Game Manual 2, showing the coordinate axes and the start position quadrants]

Auto Pick and Drop option https://github.com/ftcwires/FtcRobotController-PowerPlay/blob/master/TeamCode/src/main/java/org/firstinspires/ftc/teamcode/OpModes/AutoOpMode.java

Assuming the start position is F2 / Red Left, below is the motion of the robot in the autonomous mode. (The arrow indicates the direction the front of the robot is facing, denoted by the angle in degrees.)

 

  1. Start on Tile F2 (Position S) 

  2. Detect the signal and determine the parking location based on it

  3. On pressing start, move to Midway position M

  4. Move to position to drop Cone D and drop preloaded cone on junction

  5. Move to position to pick cone P and pick the first one from the stack

  6. Move to position to drop Cone D and drop cone on junction

  7. Repeat steps 5-6 to pick second cone from stack and drop on junction

  8. Move to parking position Pk1, Pk2 or Pk3 corresponding to Location 1, 2, 3 on field and determined by what was detected on the signal cone.

  9. End of autonomous

If the above is done and you dropped all the cones on the High junction, you score 25 points (3 cones on the high junction at 5 points each, plus 10 points for parking in the correct signal zone using the default signal).

 

[If you use your own signal sleeve, you can score 35, since parking indicated by a custom sleeve is worth 20 points instead of 10. You will need to update the Vision code for this and use TensorFlow detection for your sleeve.]

 

The coordinates for all 4 start positions are as below :


Auto Only Park Option https://github.com/ftcwires/FtcRobotController-PowerPlay/blob/master/TeamCode/src/main/java/org/firstinspires/ftc/teamcode/OpModes/AutoOnlyPark.java

For simplicity, an Only Parking option is also provided, as follows:

 

  1. Start on Tile F2 (Position S) 

  2. Detect the signal and determine the parking location based on it

  3. On pressing start, move to Midway position M

  4. Move to parking position Pk1, Pk2 or Pk3 corresponding to Location 1, 2, 3 on field and determined by what was detected on the signal cone.

  5. End of autonomous

[Figures: autonomous path diagram and the start position coordinates for all 4 start positions]

Setup, configuration and calibration steps : 

Great! Now let’s get started with the setup, configuration and calibration steps : 

Software Setup

  1. Open the webpage on your computer : https://github.com/ftcwires/FtcRobotController-PowerPlay

    • If you don’t have a GitHub account, create one and log in to see this.

  2. Click on the green Code button, download the zip file, and uncompress it on your computer.

  3. Create a new project in Android Studio with this downloaded code (or your fork) and build the code. And (hopefully) it completes successfully!

    • Congratulations, your setup is ready!

 

 

Setting up Vuforia License Key :

You need to obtain your own license key to use Vuforia. A Vuforia 'Development' license key can be obtained free of charge from the Vuforia developer web site at https://developer.vuforia.com/license-manager.

 

Vuforia license keys are always 380 characters long and look as if they contain mostly random data. As an example, here is a fragment of a valid key:

      ... yIgIzTqZ4mWjk9wd3cZO9T1axEqzuhxoGlfOOI2dRzKS4T0hQ8kT ...

Once you've obtained a license key, copy the string from the Vuforia web site and paste it into Teamcode/Subsystems/Vision.java on line 86, between the double quotes.

    //" -- YOUR NEW VUFORIA KEY GOES HERE  --- ";


Configuring the code for your robot and calibrating roadrunner.

Here you have 2 options : 

  1. [Long Cut ] Go to https://learnroadrunner.com/drive-constants.html#drive-constants and follow the steps listed under Drive Constants, Drive Velocity PID tuning, Straight Test, Track Width Tuner, Turn test, Localization test, Follower PID tuner, SplineTest. This will ensure maximum accuracy for your Autonomous mode and would be the basis for developing complex autonomous routes. The coordinates you use with respect to the field (as in the diagram before) will be accurate. Estimate 1-2 days to get this done properly, and have a good amount of patience for this.

 

  2. [Short Cut, try this first, but at your own risk] The following steps will get your autonomous mode working fairly well. The coordinates mentioned in the earlier diagram may not match, but you should be able to iterate on the coordinate position values to get the robot moving to the physical point on the field quite well. The substeps are as follows :

    1. Go to https://learnroadrunner.com/drive-constants.html#drive-constants and click on the “Configure Me!” button. Follow the instructions to enter the type of your chassis, the type of motor you are using, the gear ratio on your drive motors, the wheel radius of your mecanum wheels, and a track width estimate (distance from the middle of your left wheel to the middle of your right wheel), and answer “Yep” for whether you are using drive encoders. Then download the file, which will be called “DriveConstants.java”.

    2. Open your project in Android Studio and find the file DriveConstants.java under TeamCode/drive/opmode/DriveConstants.java. Replace all the content of this file with the content of the file downloaded in the previous step (a sketch of the kind of values it contains is shown below).
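
For reference, the generated file sets drive constants roughly like the sketch below. The numbers are placeholders for a typical goBILDA 312 RPM motor with 96 mm mecanum wheels - use the values from the file you downloaded, not these.

      public static final double TICKS_PER_REV = 537.7;     // encoder ticks per motor revolution
      public static final double MAX_RPM = 312;              // free speed of the drive motor
      public static double WHEEL_RADIUS = 1.89;              // in inches (96 mm wheel diameter / 2)
      public static double GEAR_RATIO = 1;                   // output (wheel) speed / input (motor) speed
      public static double TRACK_WIDTH = 14;                 // in inches, your left-to-right wheel estimate
      public static final boolean RUN_USING_ENCODER = true;  // 'Yep', we are using drive encoders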

    3. Find line 35, which reads :

“public static PIDFCoefficients MOTOR_VELO_PID = new PIDFCoefficients(0, 0, 0, getMotorVelocityF(MAX_RPM / 60 * TICKS_PER_REV));”

And update it to :

“public static PIDFCoefficients MOTOR_VELO_PID = new PIDFCoefficients(1, 0, 0, getMotorVelocityF(MAX_RPM / 60 * TICKS_PER_REV));”

(Basically, update the first parameter of PIDFCoefficients to 1, instead of 0.)

 

    4. Next, open TeamCode/drive/opmode/SampleMecanumDrive.java and go to line 125, which reads :

“BNO055IMUUtil.remapZAxis(imu, AxisDirection.POS_Z);”

Change the line to one of the following, based on which direction the REV Control Hub logo is pointing on the robot. If pointing :

  • Upwards “BNO055IMUUtil.remapZAxis(imu, AxisDirection.POS_Z);”

  • Downwards “BNO055IMUUtil.remapZAxis(imu, AxisDirection.NEG_Z);”

  • Left “BNO055IMUUtil.remapZAxis(imu, AxisDirection.NEG_X);”

  • Right “BNO055IMUUtil.remapZAxis(imu, AxisDirection.POS_X);”

  • Forward “BNO055IMUUtil.remapZAxis(imu, AxisDirection.POS_Y);”

  • Backward “BNO055IMUUtil.remapZAxis(imu, AxisDirection.NEG_Y);”

 

5. In SampleMecanumDrive.java, check lines 151 and 152 to ensure your drive motors are reversed correctly for opposite motion. It is currently set for the right motors to rotate opposite to the left motors, as follows:

          rightFront.setDirection(DcMotor.Direction.REVERSE);

          rightRear.setDirection(DcMotor.Direction.REVERSE);

  • You will know if you did this correctly when you run TeleOp: if the robot moves opposite to what your gamepad joystick is doing, you need to reverse the left motors instead of the right (a sketch of that change follows below).
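
In that case, the swap would look roughly like the sketch below - same file, same section; only the direction assignments change.

          // Reverse the left side instead of the right side
          leftFront.setDirection(DcMotor.Direction.REVERSE);
          leftRear.setDirection(DcMotor.Direction.REVERSE);
          // ...and let the right side run forward (the default)
          rightFront.setDirection(DcMotor.Direction.FORWARD);
          rightRear.setDirection(DcMotor.Direction.FORWARD);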

 

6. Build and download your code to the robot's REV Control Hub (or phone + Expansion Hub).

 

7. Configure the hardwareMap on your REV Control Hub (the code looks these names up as sketched below).

  • Ensure your drive motors are called leftFront, leftRear, rightRear, and rightFront

  • Ensure your webcam shows up and is called “Webcam 1”

  • Ensure you add “imu” as “I2C device 0” on your REV Control Hub
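
Here is a minimal sketch of the kind of hardwareMap lookups behind those names (the actual lines live in SampleMecanumDrive.java and the vision code; the local variable names here are illustrative):

          // Drive motors - names must match the Robot Configuration exactly
          DcMotorEx leftFront  = hardwareMap.get(DcMotorEx.class, "leftFront");
          DcMotorEx leftRear   = hardwareMap.get(DcMotorEx.class, "leftRear");
          DcMotorEx rightRear  = hardwareMap.get(DcMotorEx.class, "rightRear");
          DcMotorEx rightFront = hardwareMap.get(DcMotorEx.class, "rightFront");

          // Webcam and IMU - these names must also match the configuration
          WebcamName webcam = hardwareMap.get(WebcamName.class, "Webcam 1");
          BNO055IMU imu     = hardwareMap.get(BNO055IMU.class, "imu");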

 

8. You should now try running the robot using the “TeleOp” mode that shows up. Use the gamepad 1 left and right sticks to move the robot around. You are now running roadrunner motion control.

  • You could use this TeleOp code template to add other systems and make it your primary TeleOp.  

 

9. Congratulations! You have completed the configuration of the robot, and if you used the “Short Cut”, you have winged the calibration step. Fingers crossed that it works well!


Autonomous Op Mode

Now that the setup is done, let's get to the real deal - the Autonomous mode.

 

Open the Autonomous mode code in Teamcode/OpModes/AutoOpMode.java

 

The code should be fairly self-explanatory. The steps are the following :

 

  1. Assume the robot is placed on the field. Once Init is pressed (runOpMode() starts), the driveTrain and vision objects are created.

 

  2. The selectStartingPosition() function takes inputs from gamepad 1: pressing X initiates the Blue Left path, Y - Blue Right, B - Red Left and A - Red Right. (A sketch of this mapping follows the list.)

 

  3. Once this is done, vision starts working when “vision.activateVuforiaTensorFlow()” is run. (This code is adapted from ConceptTensorFlowObjectDetection.java.)

 

  4. Based on the selected start location, buildAuto() creates the trajectory for running the Autonomous mode.

 

  5. The while(!isStopRequested() && !opModeIsActive()) loop starts now, and the vision object keeps “seeing” the signal and detecting the image in a loop until Start is pressed.

 

  6. Once the Start button is pressed, opModeIsActive() becomes true and the code enters the if (opModeIsActive() && !isStopRequested()) condition.

 

  7. Here vision is first deactivated, and buildParking() creates the trajectories for running the parking sequence based on the last target detected by vision.

 

  8. runAutoAndParking() runs the trajectories for Autonomous and parking.
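
Here is a minimal sketch of the gamepad mapping described in step 2. The enum and field names below (START_POSITION, startPosition) are assumptions for illustration; the actual names in AutoOpMode.java may differ.

      // Loop during Init until the drive team picks a start position on gamepad 1
      public void selectStartingPosition() {
          while (!isStopRequested()) {
              telemetry.addLine("Select start position: X = Blue Left, Y = Blue Right, B = Red Left, A = Red Right");
              if (gamepad1.x) { startPosition = START_POSITION.BLUE_LEFT;  return; }
              if (gamepad1.y) { startPosition = START_POSITION.BLUE_RIGHT; return; }
              if (gamepad1.b) { startPosition = START_POSITION.RED_LEFT;   return; }
              if (gamepad1.a) { startPosition = START_POSITION.RED_RIGHT;  return; }
              telemetry.update();
          }
      }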

 

buildAuto()

This function contains the first section of the code, where the different positions - initPose, midwayPose, pickConePose, dropConePose, parkPose - are set for each start location. A “Pose” is coded as (x-coordinate, y-coordinate, direction in radians). To learn more about Pose and Trajectories check https://learnroadrunner.com/before-you-start.html#terms-to-know and https://learnroadrunner.com/trajectories.html
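
For example, a pose can be set roughly as in the sketch below (Pose2d comes from com.acmerobotics.roadrunner.geometry; the numbers are placeholders, not the platform's tuned coordinates):

      // Pose2d(x, y, heading): x and y in field inches, heading in radians
      Pose2d initPose     = new Pose2d(-54, -36, Math.toRadians(0));
      Pose2d midwayPose   = new Pose2d(-36, -36, Math.toRadians(0));
      Pose2d dropConePose = new Pose2d(-30, -30, Math.toRadians(45));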

 

Trajectories are built using trajectorySequenceBuilder(). Information on this is at https://learnroadrunner.com/trajectory-sequence.html

 

Motion is coded using .lineToLinearHeading(). Information on this is at https://learnroadrunner.com/trajectorybuilder-functions.html

 

The .addDisplacementMarker() creates a marker for adding the functionality of your subsystem (intake / arm / claw, etc.) to pickCone and dropCone. Information on markers is at https://learnroadrunner.com/markers.html#temporal-markers-basics

 

pickCone() and dropCone() functions can be used to code the actions of motors, servos, etc. to create the required action. Putting these pieces together, a trajectory is assembled roughly as sketched below.
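
This sketch shows how the calls above combine; the variable names follow the text, but the exact chain in buildAuto() may differ:

      trajectoryAuto = driveTrain.trajectorySequenceBuilder(initPose)
              .lineToLinearHeading(midwayPose)
              .lineToLinearHeading(dropConePose)
              .addDisplacementMarker(() -> dropCone())   // run your drop action at this point
              .lineToLinearHeading(pickConePose)
              .addDisplacementMarker(() -> pickCone())   // run your pick action at this point
              .lineToLinearHeading(parkPose)
              .build();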

 

 

 

Try it out

Once the functionality is added, try it out on the field - it is best to try it on mats without the junctions, to see how the functionality works. Update the coordinates based on how your robot needs to be positioned on the field and which junctions you chose to put the cones on.

 

Tip : Try running the code a few times. If the motion profile is consistent but has drift, you can try modifying the coordinates for each position to achieve the desired locations. If the motion is inconsistent, you have to perform the [Long Cut] tuning for roadrunner. (Start from https://learnroadrunner.com/drive-constants.html and perform the steps in Drive Constants, Drive Velocity PID tuning, Straight Test, Track Width Tuner, Turn test, Localization test, Follower PID tuner, SplineTest.)

 

Tip : Don't make two consecutive coordinate positions identical. Roadrunner will not handle this well.

 

Tip : If the program crashes right at the beginning and shows an error with the Vision files, you missed updating the license string for Vuforia in Vision.java. Follow the instructions mentioned above to get your license string and update it.

 

Tip : To speed up the robot, try .setVelConstraint(getVelocityConstraint(90, DriveConstants.MAX_VEL, DriveConstants.TRACK_WIDTH)). Information at https://learnroadrunner.com/trajectory-sequence.html#setvelconstraint-trajectoryvelocityconstraint 
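
In context, that call slots into the trajectory sequence chain, roughly as in the sketch below. Note that getVelocityConstraint() is a static helper in the roadrunner quickstart's SampleMecanumDrive, so you may need to qualify it (or use a static import); the rest of the chain is the same builder shown earlier.

      trajectoryAuto = driveTrain.trajectorySequenceBuilder(initPose)
              .setVelConstraint(SampleMecanumDrive.getVelocityConstraint(90, DriveConstants.MAX_VEL, DriveConstants.TRACK_WIDTH))
              .lineToLinearHeading(dropConePose)
              .build();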

 

TeleOp Mode

TeleOpMode.java provides a basic TeleOp to run the robot using roadrunner functionality. You can add your TeleOp code for other subsystems here to utilize it. The drive loop inside looks roughly like the sketch below.
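
This is a minimal sketch of that kind of drive loop, assuming the quickstart's setWeightedDrivePower() is used; the exact stick mapping in TeleOpMode.java may differ:

      while (opModeIsActive()) {
          driveTrain.setWeightedDrivePower(
                  new Pose2d(
                          -gamepad1.left_stick_y,   // forward / backward
                          -gamepad1.left_stick_x,   // strafe left / right
                          -gamepad1.right_stick_x   // turn
                  )
          );
          driveTrain.update();   // keep roadrunner's localization up to date
      }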

 

 

That’s it : Enjoy programming Autonomous mode and score well!

In case of any comments, questions, or errors you found, or if you just need help - send a mail to ftcwires@gmail.com or DM us on Instagram @ftcwires

Acknowledgement : 

Creators of Roadrunner (Acme Robotics), learnroadrunner.com, and Team Hazmat 13201

For contributions to this page - please contact ftcwires@gmail.com 
