TRINITY

TRINITY unites Human, Vehicle, and Agent to create next‑gen micromobility while turning our Boyle Heights factory into a robotics college—bringing AI opportunities & not just AI vehicles to our communities. Brains on wheels, built from the agent up

176 MILES RANGE

1.1 HOURS CHARGE

120 MPH TOP SPEED

0-60 MPH IN 1.8 SECONDS

Power & Performance

Advanced Connectivity and Intelligence
TRINITY is a next-gen micromobility platform that aligns Human + Vehicle + Agent, using NVIDIA professional GPUs as the onboard brain to power conversational workflows.

Self-Balancing and Collaborative Design:
Developed in collaboration with DEKA for self-balancing functionality.
Tri-wheel construction designed for car-like driving.

800 HP ENGINE (TWO YASA MOTORS)

2,000 POUNDS LIGHTWEIGHT

0-60 MPH IN 1.8 SECONDS

120 MPH TOP SPEED

Innovative Delivery

Innovation for delivery services:

Ideal for small-package delivery services such as DoorDash, Uber Eats, Amazon, UPS, and FedEx.

Allows drivers to maneuver through traffic like a motorcycle while maintaining safety.

Reduces city congestion and emissions, leading to faster deliveries and a cleaner environment.

Community Integration & Safety

Law Enforcement Applications

  • Safer alternative to motorcycles for police departments.

  • Enhances policing with dual capabilities: human officer and digital agent.

  • Provides compassion-driven interactions through community data integration.


YASA P400R Motors

300 HP @ 400 V

400 HP @ 600 V

Spark + TRINITY = Vision for FYI RAiDiO

The Spark runs containerized open-source vision models that continuously interpret incoming video frames. These real-time visual insights are streamed as contextual signals into the FYI RAiDiO agent, giving it live awareness of what's happening around the user. That vision layer keeps RAiDiO's responses grounded in the listener's environment, actions, location, and journey, turning it into a dynamic, interactive, agentic hyper-media experience that sees, understands, and responds in the moment.