Integration with Code

This page shows how to wire Ironclad's vision classes into your robot code (no new class files needed): instantiate the cameras, create a PoseEstimator, register the cameras with it, and call a single update each loop.


1) Create AprilTag Cameras (PhotonVision)

Use the names you configured in PhotonVision (e.g., tag-front-left, tag-front-right).
The robot→camera transforms must match your physical mounts.

import edu.wpi.first.math.geometry.*;
import edu.wpi.first.math.VecBuilder;
import frc.robot.vision.odometry.PhotonOdometryCamera;

// Example transforms — measure your own!
Transform3d frontLeftXform = new Transform3d(
    new Translation3d(0.22, 0.10, 0.62),            // +X forward, +Y left, +Z up (meters)
    new Rotation3d(0.0, Math.toRadians(-10.0), 0.0) // roll, pitch, yaw (radians)
);

Transform3d frontRightXform = new Transform3d(
    new Translation3d(0.22, -0.10, 0.62),
    new Rotation3d(0.0, Math.toRadians(-10.0), 0.0)
);

// Tuning: single‑tag is noisier than multi‑tag
var singleTagStd = VecBuilder.fill(0.8, 0.8, 1.0); // (x,y,θ) std devs
var multiTagStd = VecBuilder.fill(0.25, 0.25, 0.35);

// Construct cameras by PhotonVision names
var frontLeftCamera = new PhotonOdometryCamera(
    "tag-front-left",
    frontLeftXform.getTranslation(),
    frontLeftXform.getRotation(),
    singleTagStd,
    multiTagStd
);

var frontRightCamera = new PhotonOdometryCamera(
    "tag-front-right",
    frontRightXform.getTranslation(),
    frontRightXform.getRotation(),
    singleTagStd,
    multiTagStd
);
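These fixed std devs are a reasonable starting point. A common refinement is to inflate them with distance to the visible tags, since solvePnP error grows with range. A minimal sketch, assuming your camera class can report an average tag distance (how Ironclad surfaces that, if at all, depends on PhotonOdometryCamera):

import edu.wpi.first.math.Matrix;
import edu.wpi.first.math.numbers.N1;
import edu.wpi.first.math.numbers.N3;

// Sketch: grow measurement noise roughly with distance squared.
// The 30.0 divisor is an arbitrary tuning knob, not an Ironclad constant.
Matrix<N3, N1> scaledStdDevs(Matrix<N3, N1> base, double avgTagDistanceMeters) {
    double scale = 1.0 + (avgTagDistanceMeters * avgTagDistanceMeters) / 30.0;
    return base.times(scale);
}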

2) Create and Register the PoseEstimator

Ironclad fuses vision with drivetrain odometry through a single PoseEstimator.
Pass in a simulation pose supplier for sim; on real hardware it's ignored.

import frc.robot.vision.odometry.PoseEstimator;

// Supply the sim pose (YAGSL example); on real robot it's ignored
var poseEstimator = new PoseEstimator(() -> swerveDrive.getSimulationDriveTrainPose().get());

// Register all AprilTag cameras
poseEstimator.addCameras(frontLeftCamera, frontRightCamera);

3) Update Once Per Loop

Call this from a periodic context (e.g., in your drivetrain subsystem or Robot):

// Fuse vision into the swerve odometry
poseEstimator.updatePoseEstimation(swerveDrive);
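If your drivetrain is a subsystem, its periodic() is the natural home for this call. A sketch, using the field names from the snippets above:

@Override
public void periodic() {
    // Fuse any new vision estimates before other code reads the pose this loop.
    poseEstimator.updatePoseEstimation(swerveDrive);
}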

Under the hood, each camera:

  • pulls the latest PhotonVision results,
  • computes a pose estimate with appropriate std‑devs,
  • and hands that estimate to the PoseEstimator, which injects it via swerveDrive.addVisionMeasurement(...) (sketched below).
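For reference, a rough sketch of that flow using standard PhotonLib calls (getAllUnreadResults and PhotonPoseEstimator.update); Ironclad's actual internals live in PhotonOdometryCamera and may differ. Here photonCamera and photonPoseEstimator stand in for the PhotonLib objects the camera class wraps:

// Approximate per-camera flow (sketch, not Ironclad's exact code).
for (var result : photonCamera.getAllUnreadResults()) {
    photonPoseEstimator.update(result).ifPresent(est -> {
        // More tags seen → tighter std devs.
        var stdDevs = result.getTargets().size() > 1 ? multiTagStd : singleTagStd;
        swerveDrive.addVisionMeasurement(
            est.estimatedPose.toPose2d(), // field pose solved from the tags
            est.timestampSeconds,         // capture time, for latency compensation
            stdDevs);
    });
}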

(Optional) Game‑Piece Camera

Use the OV9782 color camera to detect game pieces and project each detection into field coordinates.

import frc.robot.vision.GamePieceCamera;

// Robot→camera (2D) transform for the piece cam
Transform2d robotToPieceCam = new Transform2d(0.30, 0.00, new Rotation2d(Math.toRadians(0)));

var gameCam = new GamePieceCamera(
    "piece-cam",
    () -> swerveDrive.getPose(), // current field pose
    0.55,                        // camera height (m)
    Math.toRadians(-12.0),       // camera pitch (radians; down is negative)
    0.05,                        // expected piece height (m)
    robotToPieceCam
);
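The height, pitch, and piece-height parameters drive a single-tangent ground projection: the distance to the piece is (piece height − camera height) / tan(camera pitch + detection pitch). PhotonLib ships exactly this as PhotonUtils.calculateDistanceToTargetMeters; a sketch with the numbers above, where targetPitchDegrees is a placeholder for the detection's vertical offset in the image:

import org.photonvision.PhotonUtils;

// Ground distance from camera to piece:
// (pieceHeight - camHeight) / tan(camPitch + targetPitch)
double distanceMeters = PhotonUtils.calculateDistanceToTargetMeters(
    0.55,                                // camera height (m), as above
    0.05,                                // expected piece height (m)
    Math.toRadians(-12.0),               // camera mount pitch (radians)
    Math.toRadians(targetPitchDegrees)); // detection pitch from PhotonVision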

// In periodic:
gameCam.update();
var fieldPieces = gameCam.getDetectedGamePieces(); // Pose3d list (field‑relative)
var robotPieces = gameCam.getRobotRelativeGamePieces(); // Pose3d list (robot‑relative)
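A typical consumer of the robot-relative list picks the nearest piece and aims at it; a minimal sketch:

import java.util.Comparator;

// Nearest piece in the robot frame; heading via atan2 on its (x, y).
robotPieces.stream()
    .min(Comparator.comparingDouble(p -> p.getTranslation().getNorm()))
    .ifPresent(nearest -> {
        double headingToPieceRad = Math.atan2(nearest.getY(), nearest.getX());
        // e.g. feed headingToPieceRad into a turn-to-heading command
    });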

Telemetry & Visualization

  • Cameras publish to NetworkTables under Vision/<cameraName>/*
  • Visualize pose, tags, and latency in AdvantageScope (a minimal publishing sketch follows)
  • Log detection counts and confidence alongside drivetrain odometry
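The per-camera topics above are published for you. If you also want the fused pose as a struct topic that AdvantageScope can render, a minimal sketch (the topic name Vision/FusedPose is a placeholder, not something Ironclad publishes):

import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.networktables.NetworkTableInstance;
import edu.wpi.first.networktables.StructPublisher;

// One-time setup: a struct publisher AdvantageScope can render as a pose.
StructPublisher<Pose2d> fusedPosePub = NetworkTableInstance.getDefault()
    .getStructTopic("Vision/FusedPose", Pose2d.struct)
    .publish();

// In periodic, after updatePoseEstimation:
fusedPosePub.set(swerveDrive.getPose());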

That’s it: create cameras → register with PoseEstimator → call updatePoseEstimation() each loop.