Posts tagged: Robotics

Extended Abstract accepted at CHI 2026: Teaching Cobots What to Do by Watching an Expert

DELEGACT: Let the Robot Watch, Then Decide Who Does What

Our extended abstract "Learning to Delegate and Act with DELEGACT: Multimodal Language Models for Task-Level Human–Cobot Planning in Industrial Assembly" has been accepted at CHI 2026 in Barcelona. This is work by Bram Verstappen together with Dries Cardinaels, Danny Leen, and Raf Ramakers at the Digital Future Lab (UHasselt - Flanders Make).

Read more →

Paper accepted at CHI 2026: Helping Humans Control Robots on the Moon

Every Move You Make: Helping Operators See Where Their Robot Will Go

Our paper "Every Move You Make: Visualizing Near-Future Motion Under Delay for Telerobotics" has been accepted at CHI 2026 in Barcelona — the premier conference for human-computer interaction research. This is joint work with my PhD student Dries Cardinaels, Raf Ramakers, Tom Veuskens, Thomas Pietrzak (Univ. Lille, Inria), and Gustavo Rovelo Ruiz at the Digital Future Lab (UHasselt - Flanders Make). More details on the publication page.

Paper page on driescardinaels.be

Read more →

Learning to delegate and act with DELEGACT: Multimodal language models for task-level human-cobot planning in industrial assembly

Industrial assembly is shifting toward human-robot collaboration (HRC) to leverage the complementary strengths of both agents. However, traditional task allocation, referred to as the Robotic Assembly Line Balancing Problem (RALBP), remains labor-intensive and often lacks transparency. We introduce DELEGACT, a framework designed to produce workable, intelligible human-cobot task allocations. The framework uses a Vision-Language Model (VLM) to extract atomic operations from expert demonstration videos, then employs a Large Language Model (LLM) to delegate these tasks based on robot specifications, operator competencies, and material definitions. We provide a proof-of-concept prototype and preliminary testing on illustrative cases. Results demonstrate the system's ability to reason about complex constraints such as precision, weight, and ergonomics. This paper illustrates how off-the-shelf foundation models can automate HRC decision-making via a human-in-the-loop paradigm while preserving operator agency and understanding.
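
To give a feel for the delegation step, here is a minimal rule-based sketch of assigning atomic operations to human or cobot based on payload, precision, and ergonomic constraints. This is an illustration only, not the DELEGACT implementation (which prompts an LLM with robot specifications and operator competencies); all names and thresholds below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Operation:
    name: str
    weight_kg: float      # payload handled in this step
    precision_mm: float   # required placement tolerance

# Illustrative limits; in DELEGACT these come from robot specs and operator profiles.
COBOT_MAX_PAYLOAD_KG = 5.0
COBOT_MIN_PRECISION_MM = 0.5
ERGONOMIC_LIMIT_KG = 10.0

def delegate(op: Operation) -> str:
    """Assign an atomic operation to 'human' or 'cobot' from simple constraints."""
    if op.weight_kg > COBOT_MAX_PAYLOAD_KG:
        # Too heavy for the cobot; flag ergonomics if also heavy for the operator.
        return "human" if op.weight_kg <= ERGONOMIC_LIMIT_KG else "human+lift-assist"
    if op.precision_mm < COBOT_MIN_PRECISION_MM:
        return "human"    # tolerance below the cobot's repeatability
    return "cobot"

plan = [delegate(op) for op in [
    Operation("place housing", 1.2, 1.0),
    Operation("insert pin", 0.1, 0.2),
    Operation("lift frame", 8.0, 5.0),
]]
print(plan)  # ['cobot', 'human', 'human']
```

An LLM-based delegator can weigh far softer constraints than these hard thresholds, but the hard rules make the kind of reasoning involved easy to inspect.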

Read more →

Every move you make: Visualizing near-future motion under delay for telerobotics

Delays in direct teleoperation decouple operator input from robot feedback. We frame this not as a unitary problem but as three facets of operator uncertainty: (1) communication: when commands take effect; (2) trajectory: how inputs map to motion; and (3) environmental: how external factors alter outcomes. We externalized each facet through predictive visualizations: Network, Path, and Envelope. In a controlled study with 24 participants (novices in telerobotics) navigating a simulated robot under a fixed 2.56 s round-trip delay, we compared these visualizations against a delayed-video baseline. Path significantly shortened task time, lowered perceived cognitive load, and reduced reliance on reactive "move-and-wait" behavior. Envelope lowered cognitive load but did not significantly reduce reactive behavior or improve performance, while Network had no measurable effect. These results indicate that predictive support is effective only when trajectory uncertainty is externalized, enabling operators to move from reactive to more proactive control.
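
The idea behind a Path-style overlay can be sketched as dead reckoning over the commands that are still in flight during the round-trip delay. The unicycle motion model, loop period, and function names below are assumptions for illustration, not the paper's implementation.

```python
import math

DELAY_S = 2.56   # fixed round-trip delay from the study
DT_S = 0.08      # control-loop period (assumed for illustration)

def predict_path(x, y, heading, pending_cmds):
    """Integrate queued (v, omega) commands that the robot has not yet executed,
    returning the predicted positions a Path-style overlay could draw."""
    path = [(x, y)]
    for v, omega in pending_cmds:
        heading += omega * DT_S
        x += v * math.cos(heading) * DT_S
        y += v * math.sin(heading) * DT_S
        path.append((x, y))
    return path

# Commands sent during the last DELAY_S seconds are still "in flight".
n_pending = int(DELAY_S / DT_S)           # 32 queued commands
cmds = [(0.5, 0.0)] * n_pending           # drive straight at 0.5 m/s
path = predict_path(0.0, 0.0, 0.0, cmds)
print(round(path[-1][0], 2))  # 1.28 m ahead of the last confirmed pose
```

Rendering this predicted path over the delayed video is what lets the operator act on where the robot will be, rather than where it was.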

Read more →

Challenges and opportunities for delay-invariant telerobotic interactions (short paper)

Effective operation in direct-control telerobotics relies heavily on real-time communication between the operator and the robot, as the operator retains full control over the robot's actions. However, in scenarios involving long distances, communication delays disrupt this feedback loop, creating significant challenges for precise control. To investigate these challenges, we conducted a user study where participants operated a TurtleBot3 Waffle Pi under varying delay conditions. Post-experiment brainstorming and analysis revealed recurring challenges, including over-correction, unpredictable robot behavior, and reduced situational awareness. Potential solutions identified include improving robot behavior predictability, integrating feedforward mechanisms, and enhancing visual feedback. These findings underscore the importance of designing intelligent interfaces to mitigate the impact of delays on telerobotic performance.

Read more →

Master thesis Maties Claesen nominated for the EOS thesis award!

Very proud of my thesis student Maties Claesen, who has been nominated for the EOS thesis award! His work, “ZeroTraining: Extending Zero-Gravity Objects Simulation in Virtual Reality Using Robotics,” combines virtual reality and robotics to simulate weightless objects more realistically – crucial for astronaut training and space exploration. Maties demonstrated some impressive creative problem-solving skills, especially in combining diverse fields to tackle complex challenges with limited hardware resources.

Special thanks to Andreas Treuer, Martial Costantini, and Lionel Ferra at ESA for their support, valuable insights, and feedback on this work. Andreas was particularly instrumental, sharing his experience and providing feedback throughout the project, which was crucial in refining both the scope and the implementation of this work.

Read more →

AntHand: Interaction techniques for precise telerobotic control using scaled objects in virtual environments

This paper introduces AntHand, a set of interaction techniques for enhancing precision and adaptability in telerobotics through the use of scaled objects in virtual environments. AntHand operates in three phases: up-scaling interaction, for detailed control through a magnified virtual model; constraining interaction, which locks movement dimensions for accuracy; and post-editing, allowing manipulation trace optimization and noise reduction. Leveraging a use case related to surgery, we showcase AntHand in a scenario demanding high accuracy and precise manipulation. AntHand demonstrates how collaboration between humans and robots can improve precise control of robot actions in telerobotic operations, while maintaining the familiar use of traditional tools, rather than relying on specialized controllers.
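
The up-scaling and constraining phases can be sketched as a simple mapping: hand motion applied to a magnified virtual model is scaled back down before being sent to the robot, and locked axes are zeroed out. The scale factor, axis-lock interface, and function name are illustrative assumptions, not AntHand's actual API.

```python
SCALE = 10.0  # assumed: the virtual model is rendered 10x larger than the workpiece

def to_robot_delta(hand_delta_mm, locked_axes=()):
    """Map a hand displacement in the scaled scene to a robot tool displacement,
    zeroing any axes locked during the constraining-interaction phase."""
    return tuple(
        0.0 if axis in locked_axes else d / SCALE
        for axis, d in zip("xyz", hand_delta_mm)
    )

# A coarse 10 mm hand motion along x, with y and z locked for a straight cut:
print(to_robot_delta((10.0, 3.0, -2.0), locked_axes=("y", "z")))
# (1.0, 0.0, 0.0) — a fine 1 mm tool movement along x only
```

This is why working on a scaled object improves precision: operator tremor and overshoot are divided by the same factor as the intended motion.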

Read more →

A VR prototype for one-dimensional movement visualizations for robotic arms

To enable effective communication between users and autonomous robots, it is crucial to have a shared understanding of goals and actions. This is made possible through an intelligible interface that communicates relevant information. This intelligibility enhances user comprehension, enabling users to anticipate the robot's actions and respond appropriately. However, because robots can perform a wide variety of actions while communication resources, such as the number of available "pixels", are limited, visualizations must be carefully designed. To tackle this challenge, we have developed a visual design framework. Leveraging Unity, we developed a Virtual Reality implementation to prototype and evaluate our framework. Within this framework, we introduce two visualization techniques for visualizing the movement of a robotic arm, laying a foundation for subsequent development and user testing.

Read more →

A visual design space for one-dimensional intelligible human-robot interaction visualizations

To enable effective communication between users and autonomous robots, it is crucial to have a shared understanding of goals and actions. This is made possible through an intelligible interface that communicates relevant information. This intelligibility enhances user comprehension, enabling users to anticipate the robot's actions and respond appropriately. However, because robots can perform a wide variety of actions while communication resources, such as the number of available "pixels", are limited, visualizations must be carefully designed. To tackle this challenge, we have developed a visual design framework and design space that can be used to create intelligible visualizations for human-robot interaction. Our framework focuses on three key components: information type, pixel layout, and robot type. We demonstrate how intelligibility can be integrated into interactions through prototype visualizations featuring a one-dimensional pixel layout, laying the groundwork for developing more detailed and understandable visualizations.
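
One possible one-dimensional encoding, sketched below under assumptions of my own (the strip size, the direction/magnitude scheme, and the function name are not from the paper): the robot's upcoming movement direction is shown by which half of a pixel strip lights up, and its magnitude by how many pixels.

```python
N = 8  # pixels in the strip: indices 0..3 = negative direction, 4..7 = positive

def encode_motion(velocity, max_velocity=1.0):
    """Return an N-element list of 0/1 pixels for a signed 1-D velocity."""
    level = min(abs(velocity) / max_velocity, 1.0)
    lit = round(level * (N // 2))          # how many pixels to light on one side
    strip = [0] * N
    if velocity >= 0:
        for i in range(lit):
            strip[N // 2 + i] = 1          # fill outward to the right
    else:
        for i in range(lit):
            strip[N // 2 - 1 - i] = 1      # fill outward to the left
    return strip

print(encode_motion(0.5))   # [0, 0, 0, 0, 1, 1, 0, 0]
print(encode_motion(-1.0))  # [1, 1, 1, 1, 0, 0, 0, 0]
```

Even this toy encoding shows the design trade-off the framework addresses: with eight pixels, direction and magnitude fit, but richer information types would need a different layout.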

Read more →

Choreobot: A reference framework and online visual dashboard for supporting the design of intelligible robotic systems

As robots are equipped with software that makes them increasingly autonomous, it becomes harder for humans to understand and control them. Human users should be able to understand and, to a certain extent, predict what the robot will do. The software that drives a robotic system is often very complex and hard for human users to understand, and there is only limited support for ensuring robotic systems are also intelligible. Adding intelligibility to the behavior of a robotic system improves the predictability, trust, safety, usability, and acceptance of such autonomous systems. Applying intelligibility to interface design can be challenging for developers and designers of robotic systems, as they are experts in robot programming but not necessarily in interaction design. We propose Choreobot, an interactive, online, visual dashboard that works with our reference framework to help identify where and when adding intelligibility to the interface design is required, desired, or optional. The reference framework and accompanying input cards allow developers and designers of robotic systems to specify a usage scenario as a set of actions and, for each action, capture the context data that is indispensable for revealing when feedforward is required. The Choreobot interactive dashboard generates a visualization that presents this data on a timeline for the sequence of actions that make up the usage scenario. A set of heuristics and rules is included that highlights where and when feedforward is desired. Based on these insights, developers and designers can adjust the interactions to improve the experience for the human users working with the robotic system.
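
The framework's core idea — a usage scenario as a sequence of actions, each carrying context data, with heuristics flagging where feedforward is needed — can be sketched as follows. The field names and the specific rule are illustrative assumptions, not Choreobot's actual heuristics.

```python
def feedforward_need(action):
    """Return 'required', 'desired', or 'optional' for one scenario action,
    using a simple illustrative safety heuristic."""
    if action["human_in_workspace"] and action["robot_moves"]:
        return "required"   # motion near a person must be announced beforehand
    if action["robot_moves"]:
        return "desired"    # upcoming motion is useful to show, not safety-critical
    return "optional"

# A usage scenario specified as a set of actions with their context data:
scenario = [
    {"name": "fetch part",  "robot_moves": True,  "human_in_workspace": False},
    {"name": "hand over",   "robot_moves": True,  "human_in_workspace": True},
    {"name": "idle / wait", "robot_moves": False, "human_in_workspace": True},
]
print([(a["name"], feedforward_need(a)) for a in scenario])
# [('fetch part', 'desired'), ('hand over', 'required'), ('idle / wait', 'optional')]
```

Laid out on a timeline, such per-action flags are exactly the kind of overview the dashboard visualizes for developers who are not interaction-design experts.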

Read more →
