Edge Computing GPUs for Ground Robotics

Overview

For my senior capstone project with the URI College of Engineering, I’m working with a multidisciplinary engineering team in collaboration with the Charles Stark Draper Laboratory to develop an edge computing system for autonomous ground robots. The goal is to demonstrate how multiple compact robots can collaboratively explore and map complex indoor environments using GPU acceleration.

This project integrates embedded systems, computer vision, and swarm autonomy, leveraging NVIDIA Jetson hardware to process real-time sensor and camera data directly on the robots without relying on cloud computation.


Current Status

My partner and I have finished 3D printing the robots’ custom chassis and assembled them:

[Photo: the assembled NVIDIA Jetson Orin Nano robots]

We have also successfully flashed the SD cards for each Jetson, gotten every board up and running on an external monitor, and installed VSCode. We will now work on programming the robots’ basic functionality.


Motivation

Draper’s ongoing work in autonomous systems inspired this project. Their vision is to design reusable autonomy architectures that can operate in any environment: terrestrial, underwater, aerial, or space. Our capstone focuses on ground-based swarm autonomy, where robots share mapping and navigation data to cooperatively explore unknown spaces.


Technical Goals

The project is divided into three main milestones; a short, hypothetical code sketch for each appears after the list:

Baseline Development:

  • Implement 2D SLAM (Simultaneous Localization and Mapping)

  • Integrate centralized map fusion and visual mark detection

Advanced Phase:

  • Develop multi-robot navigation logic and data sharing

Stretch Goals:

  • Enable 3D map generation using stereo or monocular vision

  • Integrate CUDA acceleration for image and depth processing
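
To make the baseline milestone concrete: 2D SLAM has two halves, estimating the robot’s pose and building a map from range data. The sketch below shows the mapping half using the standard log-odds occupancy-grid formulation. The class, method names, and tuning constants are illustrative, not taken from our codebase.

```cpp
// Minimal log-odds occupancy grid, as used by 2D SLAM back ends.
// All names and constants here are illustrative placeholders.
#include <cmath>
#include <vector>

struct OccupancyGrid {
    int width, height;           // grid size in cells
    double resolution;           // meters per cell
    std::vector<double> logOdds; // one log-odds value per cell; 0 = unknown

    OccupancyGrid(int w, int h, double res)
        : width(w), height(h), resolution(res), logOdds(w * h, 0.0) {}

    // Fold one range measurement into the map: cells along the beam are
    // evidence of free space; the endpoint cell is evidence of occupancy.
    void integrateBeam(double rx, double ry, double hitX, double hitY) {
        const double lFree = -0.4, lOcc = 0.85;  // tuning constants (assumed)
        double dx = hitX - rx, dy = hitY - ry;
        double range = std::hypot(dx, dy);
        int steps = static_cast<int>(range / resolution);
        for (int i = 0; i < steps; ++i) {        // ray-march toward the hit
            double t = static_cast<double>(i) / steps;
            update(rx + t * dx, ry + t * dy, lFree);
        }
        update(hitX, hitY, lOcc);                // the hit cell itself
    }

    void update(double x, double y, double delta) {
        int cx = static_cast<int>(x / resolution);
        int cy = static_cast<int>(y / resolution);
        if (cx < 0 || cy < 0 || cx >= width || cy >= height) return;
        logOdds[cy * width + cx] += delta;
    }

    // Recover P(occupied) from the accumulated log odds.
    double probability(int cx, int cy) const {
        double l = logOdds[cy * width + cx];
        return 1.0 - 1.0 / (1.0 + std::exp(l));
    }
};
```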
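For visual mark detection, we expect to lean on OpenCV. Here is a hypothetical fiducial-detection loop using the ArUco module, assuming OpenCV 4.7 or newer (where ArUco lives in objdetect); the dictionary choice and camera index are placeholders, since we have not settled on a marker type yet.

```cpp
// Hypothetical example: detecting ArUco fiducial markers in a camera feed.
// Detected marker IDs could tag rooms or rendezvous points that robots
// report back to the central server.
#include <opencv2/opencv.hpp>
#include <opencv2/objdetect/aruco_detector.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::VideoCapture cap(0);                 // default camera (placeholder)
    if (!cap.isOpened()) return 1;

    cv::aruco::Dictionary dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
    cv::aruco::ArucoDetector detector(dict, cv::aruco::DetectorParameters());

    cv::Mat frame;
    while (cap.read(frame)) {
        std::vector<int> ids;
        std::vector<std::vector<cv::Point2f>> corners;
        detector.detectMarkers(frame, corners, ids);
        if (!ids.empty()) {
            cv::aruco::drawDetectedMarkers(frame, corners, ids);
            std::cout << "saw " << ids.size() << " marker(s)\n";
        }
        cv::imshow("markers", frame);
        if (cv::waitKey(1) == 27) break;     // Esc to quit
    }
    return 0;
}
```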
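For the multi-robot data-sharing milestone, one plausible shape is each robot publishing its local occupancy grid over ROS 2 for the central server to fuse. A minimal sketch, assuming rclcpp and nav_msgs; the topic name, rate, and grid dimensions are invented for illustration, since our real interface is still being designed.

```cpp
// Sketch: a robot node periodically publishing its local map for fusion.
#include <chrono>
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "nav_msgs/msg/occupancy_grid.hpp"

using namespace std::chrono_literals;

class MapSharer : public rclcpp::Node {
public:
    MapSharer() : Node("map_sharer") {
        pub_ = create_publisher<nav_msgs::msg::OccupancyGrid>(
            "/robot1/local_map", 10);                 // topic name assumed
        timer_ = create_wall_timer(1s, [this] { publishMap(); });
    }

private:
    void publishMap() {
        nav_msgs::msg::OccupancyGrid msg;
        msg.header.stamp = now();
        msg.header.frame_id = "map";
        msg.info.resolution = 0.05f;                  // 5 cm cells (assumed)
        msg.info.width = 100;
        msg.info.height = 100;
        msg.data.assign(msg.info.width * msg.info.height, -1);  // -1 = unknown
        pub_->publish(msg);
    }

    rclcpp::Publisher<nav_msgs::msg::OccupancyGrid>::SharedPtr pub_;
    rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char** argv) {
    rclcpp::init(argc, argv);
    rclcpp::spin(std::make_shared<MapSharer>());
    rclcpp::shutdown();
    return 0;
}
```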
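For the 3D mapping stretch goal, stereo vision starts with a disparity map. A quick sketch using OpenCV’s block matcher; the file names and matcher settings are placeholders.

```cpp
// Minimal stereo-disparity sketch for the 3D mapping stretch goal.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) return 1;

    // 64 disparity levels, 15x15 matching block (typical starting values).
    auto bm = cv::StereoBM::create(64, 15);
    cv::Mat disparity;
    bm->compute(left, right, disparity);   // fixed-point disparities (x16)

    // With a calibrated rig, depth = focal_length * baseline / disparity;
    // here we just normalize for display.
    cv::Mat vis;
    cv::normalize(disparity, vis, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::imwrite("disparity.png", vis);
    return 0;
}
```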
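And for the CUDA stretch goal (which also previews the benchmarking work described under My Role below), the pattern we expect to follow is: time a CPU baseline, then run the same operation through OpenCV’s CUDA module. This sketch assumes an OpenCV build that includes the CUDA filter module; the kernel size and iteration count are arbitrary.

```cpp
// Rough benchmarking sketch: CPU vs. CUDA Gaussian blur with OpenCV.
#include <opencv2/opencv.hpp>
#include <opencv2/cudafilters.hpp>
#include <iostream>

int main() {
    cv::Mat img = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;
    const int iters = 100;

    // CPU baseline.
    cv::TickMeter cpu;
    cv::Mat outCpu;
    cpu.start();
    for (int i = 0; i < iters; ++i)
        cv::GaussianBlur(img, outCpu, cv::Size(9, 9), 2.0);
    cpu.stop();

    // Same filter on the GPU; one warm-up call excludes setup cost.
    auto filter = cv::cuda::createGaussianFilter(CV_8UC1, CV_8UC1,
                                                 cv::Size(9, 9), 2.0);
    cv::cuda::GpuMat gIn(img), gOut;       // constructor uploads the image
    filter->apply(gIn, gOut);              // warm-up
    cv::TickMeter gpu;
    gpu.start();
    for (int i = 0; i < iters; ++i)
        filter->apply(gIn, gOut);
    gpu.stop();

    std::cout << "CPU: " << cpu.getTimeMilli() / iters << " ms/frame\n"
              << "GPU: " << gpu.getTimeMilli() / iters << " ms/frame\n";
    return 0;
}
```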


My Role

As the Computer Engineer on the team, I’m responsible for:

  • Designing and testing the network architecture between robot nodes and the central server

  • Developing autonomy software for navigation and visual perception

  • Benchmarking system performance and optimizing compute efficiency with CUDA and OpenCV

  • Collaborating on integration of IMU, encoder, and camera data for reliable localization (see the fusion sketch below)
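
As a taste of that localization work, here is a small complementary-filter sketch blending wheel-encoder odometry with the IMU’s gyro yaw rate; the wheel base and blend gain are placeholder values, not measurements from our chassis.

```cpp
// Complementary filter: gyro dominates short-term heading change, while
// encoder odometry keeps the estimate anchored. All constants are assumed.
#include <cmath>

struct Pose { double x = 0, y = 0, theta = 0; };

class OdomFuser {
public:
    // dLeft/dRight: wheel travel since last update (m); gyroZ: yaw rate (rad/s)
    void update(double dLeft, double dRight, double gyroZ, double dt) {
        const double wheelBase = 0.20;   // meters (placeholder chassis value)
        const double alpha = 0.98;       // trust in the gyro for heading

        double dCenter    = 0.5 * (dLeft + dRight);
        double dThetaOdom = (dRight - dLeft) / wheelBase;
        double dThetaGyro = gyroZ * dt;

        // Blend the two heading estimates, then dead-reckon position.
        double dTheta = alpha * dThetaGyro + (1.0 - alpha) * dThetaOdom;
        pose_.theta = std::remainder(pose_.theta + dTheta, 2.0 * M_PI);
        pose_.x += dCenter * std::cos(pose_.theta);
        pose_.y += dCenter * std::sin(pose_.theta);
    }

    Pose pose() const { return pose_; }

private:
    Pose pose_;
};
```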


Tools and Technologies

Hardware: NVIDIA Jetson Orin Nano

Software: C++, VSCode, ROS 2, OpenCV, CUDA

Concepts: SLAM, path planning, sensor fusion, real-time processing


Expected Impact

By the end of the academic year, we aim to demonstrate a multi-robot swarm system that performs real-time exploration, mapping, and decision-making entirely on-device. Draper plans to use the project’s outcomes to expand its edge computing capabilities for future defense and national security applications.

Beyond its immediate technical goals, the project also reflects a larger industry trend: moving computation closer to the edge to improve speed, security, and scalability in autonomous systems.

