Every Move You Make: Helping Operators See Where Their Robot Will Go
Our paper "Every Move You Make: Visualizing Near-Future Motion Under Delay for Telerobotics" () has been accepted at CHI 2026 in Barcelona — the premier conference for human-computer interaction research. This is joint work with my PhD student Dries Cardinaels, Raf Ramakers, Tom Veuskens, Thomas Pietrzak (Univ. Lille, Inria), and Gustavo Rovelo Ruiz at the Digital Future Lab. More details on the publication page.
The Problem: Driving a Robot When You Can’t See the Present
Imagine driving a car, but everything you see through the windshield happened 2.5 seconds ago. You press the brake, but you won’t see the car slow down until 2.5 seconds later. That’s the reality of controlling robots over long distances — whether it’s a rover on the Moon, a rescue robot in a collapsed building, or a surgical robot across the ocean.
This delay creates a frustrating cycle: operators press a button, wait to see what happens, realize they’ve gone too far, try to correct, wait again… This “move-and-wait” pattern makes remote control slow, exhausting, and error-prone.
The minimum Earth–Moon communication delay is about 2.5 seconds round-trip. That’s fast enough for a robot to drive off a cliff before you even see it approaching the edge.
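To make those numbers concrete, here is a minimal back-of-the-envelope sketch (the speed value is illustrative, not taken from our study) of when a command's effect first becomes visible to the operator, and how far a rover travels "blind" during the round trip:

```python
ONE_WAY = 1.25  # seconds; approximate one-way Earth-Moon light time

def feedback_time(t_sent, one_way=ONE_WAY):
    """Return (execution time, first-visible time) for a command sent at t_sent."""
    t_executed = t_sent + one_way      # uplink: the command reaches the rover
    t_visible = t_executed + one_way   # downlink: video of the effect returns
    return t_executed, t_visible

def blind_distance(speed_mps, round_trip=2 * ONE_WAY):
    """Distance covered before the operator can possibly see and react."""
    return speed_mps * round_trip

t_exec, t_seen = feedback_time(0.0)  # executes at 1.25 s, visible at 2.5 s
d = blind_distance(0.4)              # a 0.4 m/s rover covers 1 m unseen
```

Even at a cautious 0.4 m/s, a full metre of driving happens before the operator sees any of it, which is exactly what fuels the move-and-wait cycle.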
Our Insight: Not All Uncertainty is the Same
Previous research treated communication delay as a single problem. We discovered it actually creates three different types of confusion for operators:

- “When will my command happen?” — You pressed forward, but when will the robot actually start moving?
- “Where will the robot end up?” — If you hold forward for 2 seconds, how far will it travel?
- “What if the ground is slippery?” — Even if you know the intended path, the robot might drift on loose terrain.
Each type of confusion needs a different solution.
Three Visual Helpers We Designed
We built three different visual aids, each targeting one source of confusion:

- Network Timeline — Shows your commands as blocks traveling along a timeline, so you can see exactly when each button press will take effect.
- Path Preview — Draws lines on the ground showing where the robot will go based on your current commands — like the backup camera lines in modern cars, but for forward driving too.
- Uncertainty Envelope — Shows a cone-shaped region representing where the robot might end up, accounting for slippery terrain or mechanical imperfections.
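A rough sketch of the idea behind the last two helpers, assuming simple unicycle kinematics and a drift term that grows linearly with distance (both are illustrative simplifications, not the model from the paper): dead-reckon the queued velocity commands to get the predicted path, and widen an envelope around it as the rover travels farther.

```python
import math

def predict_path(pose, commands, dt=0.1, drift_rate=0.05):
    """Dead-reckon a near-future path from queued (v, omega) velocity
    commands. pose is (x, y, heading); each command is held for dt seconds.
    drift_rate (envelope half-width per metre travelled) is an illustrative
    stand-in for wheel slip and terrain uncertainty.
    Returns a list of (x, y, envelope_half_width) samples."""
    x, y, th = pose
    path = [(x, y, 0.0)]
    travelled = 0.0
    for v, omega in commands:
        th += omega * dt                    # turn first, then advance
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        travelled += abs(v) * dt
        path.append((x, y, drift_rate * travelled))
    return path

# Drive straight for 2.5 s at 0.4 m/s (25 commands at 10 Hz):
preview = predict_path((0.0, 0.0, 0.0), [(0.4, 0.0)] * 25)
```

Rendering the (x, y) samples gives the Path Preview lines; rendering the growing half-width on either side gives the Uncertainty Envelope's cone shape.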
What We Found: Show Them Where, Not When
We tested all three visualizations with 24 people controlling a simulated lunar rover under realistic delay conditions. The results were striking:
The Path Preview was the clear winner:
- People completed tasks significantly faster
- They felt less mentally exhausted
- They stopped using the frustrating “move-and-wait” strategy
The Envelope helped with stress but not speed — knowing about possible drift made people feel better, but didn’t help them drive faster.
The Timeline didn’t help at all — knowing when commands take effect isn’t useful if you don’t know where you’ll end up.
“The path visualization helped me adapt my actions — I could finally plan ahead instead of constantly fixing mistakes.” — Study participant
The Key Lesson
Simply showing people more information isn’t enough. The information has to match what they actually need to make decisions. When you’re steering a robot, you need to know where it will go — not when your command will execute or how uncertain the outcome might be.
This is like the difference between a GPS showing your route vs. showing your network latency. One helps you drive; the other is just technical trivia.
Why This Matters
These insights apply far beyond lunar rovers:
- Space exploration — Future Moon and Mars missions will rely on teleoperated robots
- Disaster response — Search and rescue robots often operate through unstable network connections
- Remote surgery — Surgeons operating robotic instruments across distances face similar delays
- Drone delivery — As drones operate in more complex environments, operators may need to take over remotely
By understanding which information actually helps human operators, we can design better interfaces that keep humans effectively in control — even when physics makes direct control impossible.
Citation
@inproceedings{cardinaels2026everymove,
  author    = {Cardinaels, Dries and Ramakers, Raf and Veuskens, Tom and Pietrzak, Thomas and {Rovelo Ruiz}, Gustavo and Luyten, Kris},
  title     = {Every Move You Make: Visualizing Near-Future Motion Under Delay for Telerobotics},
  booktitle = {Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems},
  series    = {CHI '26},
  year      = {2026},
  location  = {Barcelona, Spain},
  publisher = {ACM},
  address   = {New York, NY, USA}
}
We’re excited to present this work in Barcelona!