Posts tagged: Paper

Extended Abstract accepted at CHI 2026: Teaching Cobots What to Do by Watching an Expert

DELEGACT: Let the Robot Watch, Then Decide Who Does What

Our extended abstract "Learning to Delegate and Act with DELEGACT: Multimodal Language Models for Task-Level Human–Cobot Planning in Industrial Assembly" has been accepted at CHI 2026 in Barcelona. This is work by Bram Verstappen together with Dries Cardinaels, Danny Leen, and Raf Ramakers at the Digital Future Lab (UHasselt - Flanders Make).

Read more →

Presented at EURECA-PRO Education & Research Days: Teaching as Training

Teaching as Training: Incremental and Iterative AI Skill Development

We presented our contribution “Teaching as Training: Iterative and Incremental AI Skill Development” at the EURECA-PRO Education & Research Days in Hasselt, held under the theme Glocalising Universities: A Shifting Horizon. This is joint work with Jolien Notermans (Department of Educational Development, Policy and Quality Assurance) and Sarah Doumen (Faculty of Sciences) at Hasselt University. More details on the publication page. The visual story is generated using StoryBookly.

Read more →

Paper accepted at ICLR 2026: DIVERSE: Disagreement-Inducing Vector Evolution for Rashomon Set Exploration

DIVERSE: Finding the Many Faces of AI Decision-Making

Our paper “DIVERSE: Disagreement-Inducing Vector Evolution for Rashomon Set Exploration” has been accepted at ICLR 2026, one of the top venues for machine learning research. This is joint work with my PhD student Gilles Eerlings, Brent Zoomers, Jori Liesenborgs, and Gustavo Rovelo Ruiz at the Digital Future Lab. More details on the publication page.

Read more →

Paper accepted at CHI 2026: Helping Humans Control Robots on the Moon

Every Move You Make: Helping Operators See Where Their Robot Will Go

Our paper "Every Move You Make: Visualizing Near-Future Motion Under Delay for Telerobotics" has been accepted at CHI 2026 in Barcelona — the premier conference for human-computer interaction research. This is joint work with my PhD student Dries Cardinaels, Raf Ramakers, Tom Veuskens, Thomas Pietrzak (Univ. Lille, Inria), and Gustavo Rovelo Ruiz at the Digital Future Lab (UHasselt - Flanders Make). More details on the publication page.

Paper page on driescardinaels.be

Read more →

Paper on A Visual Dashboard for Model Multiplicity

In AI research, model multiplicity can help users better understand the diversity of AI predictions. Our new system “AI-Spectra” provides a visual dashboard to harness this concept effectively. Instead of relying on a single AI model, AI-Spectra uses multiple models—each treated as an expert—to produce predictions for the same task. This helps users see not only where different models agree or disagree, but also why these differences occur. Gilles Eerlings (a FAIR PhD student) and Sebe Vanbrabant were the main contributors to this work, combining machine learning, model multiplicity, and visualisations that focus on the characteristics of an AI model rather than explaining its behaviour.

Read more →

Paper on Anthropomorphic User Interfaces

Anthropomorphic User Interfaces

Together with Eva Geurts, we explored Anthropomorphic User Interfaces (AUIs) and created a taxonomy that helps us analyze, identify, and design appropriate AUIs. The paper is available here, and our interactive tool, which helps you find related resources for specific aspects of our taxonomy, is available at https://anthropomorphic-ui.onrender.com.

Citation

@inproceedings{geurtsantropomorphic2024,
  author = {Eva Geurts and Kris Luyten},
  title = {Anthropomorphic User Interfaces: Past, Present and Future of Anthropomorphic Aspects for Sustainable Digital Interface Design},
  booktitle = {Proceedings of the European Conference on Cognitive Ergonomics 2024},
  series = {ECCE '24},
  year = {2024},
  publisher = {Association for Computing Machinery},
  location = {Paris, France},
  articleno = {31},
  numpages = {7},
  keywords = {Anthropomorphism, Human-like interfaces, Taxonomy, User interface design},
  url = {https://anthropomorphic-ui.onrender.com},
  doi = {10.1145/3673805.3673831},
  isbn = {9798400718243}
}

Abstract

Interactions with computing systems and conversational services such as ChatGPT have become an inherent part of our daily lives. It is surprising that user interfaces, the gateways through which we communicate with an interactive intelligent system, are still predominantly devoid of hedonic aspects. There is little attempt to make communication through user interfaces intentionally more like communication with humans. Anthropomorphic user interfaces can transform interactions with intelligent software into more pleasant experiences by integrating human-like attributes. Anthropomorphic user interfaces expose human-like attributes that enable people to perceive, connect, and interact with the interfaces as social actors. This integration of human-like aspects not only enhances user experience but also holds the potential to make interfaces more sustainable, as they rely on familiar human interaction patterns, thus potentially reducing the learning curve and increasing user adoption rates. However, there is little consensus on how to build these anthropomorphic user interfaces. We conducted an extensive literature review on existing anthropomorphic user interfaces for software systems (past), in order to map and connect existing definitions and interpretations in an overarching taxonomy (present). The taxonomy is used to organize and structure examples of anthropomorphic user interfaces into an accessible collection. The taxonomy and an accompanying web tool provide designers with a reference framework for analyzing and dissecting existing anthropomorphic user interfaces, and for designing new anthropomorphic user interfaces (future).

Read more →

Two Contributions Accepted for ACM VRST 2024 - AR Pattern Guidance and VR Text Input Modalities

Paper and Poster Accepted for ACM VRST 2024: AR Pattern Guidance and VR Text Input Modalities

We are excited to announce that both our paper and poster have been conditionally accepted for presentation at the ACM Symposium on Virtual Reality Software and Technology (VRST) 2024, which will take place in Trier, Germany.

Paper: Evaluation of AR Pattern Guidance Methods for a Surface Cleaning Task

Our full paper titled “Evaluation of AR Pattern Guidance Methods for a Surface Cleaning Task” has been conditionally accepted.

Read more →

Paper accepted for ISMAR 2024 - The Art of Timing in AR Guidance

Paper accepted for ISMAR 2024: The Art of Timing in AR Guidance

We are excited to announce that our paper titled “The Art of Timing: Effects of AR Guidance Timing on Speed Control” (with Jeroen Ceyssens, Bram van Deurzen, Gustavo Rovelo Ruiz and Fabian Di Fiore) has been accepted for presentation at the 2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR).

[Graphical abstract]

Abstract

Augmented Reality (AR) holds significant potential to facilitate users in executing manual tasks. For effective support, however, we need to understand how showing movement instructions in AR affects how well people can follow those movements in real life. In this paper, we examine the degree to which users can synchronize the speed of their movements with speed cues presented through an AR environment. Specifically, we investigate the effects of timing in AR visual guidance. We assess performance using a highly realistic Mixed Reality (MR) welding simulation. Welding is a task that requires very precise timing and control over hand and arm motion. Our results show that upfront visual guidance (before manual task execution) alone often fails to transfer the knowledge of intended speeds, especially at higher target speeds. Live guidance during manual task execution provides more accurate speed results but typically requires a higher overshoot at the start. Optimal outcomes occur when visual guidance appears upfront and continues during the activity for users to follow through.

Read more →

Papers accepted on Anthropomorphic UIs and Model Multiplicity

Model Multiplicity in Interactive Software Systems

We got a workshop paper accepted, presenting the initial work of Gilles Eerlings et al. We explore how model multiplicity can help reduce overtrust in AI while also avoiding undertrust. A lot of work still lies ahead, but this seems like a promising direction.

Citation

@inproceedings{luyteneerlings-modelmultiplicity2024,
  author = {Kris Luyten and Gilles Eerlings and Jori Liesenborgs and Gustavo {Rovelo Ruiz} and Sebe Vanbrabant and Davy Vanacken},
  title = {Opportunities and Challenges of Model Multiplicity in Interactive Software Systems},
  booktitle = {The Second Workshop on Engineering Interactive Systems Embedding AI Technologies},
  year = {2024}
}

Abstract

The proliferation of artificial intelligence (AI) in interactive systems has led to significant challenges, not only in model integration but also in end-user-related aspects such as over- and undertrust. This paper explores how multiple AI models with the same performance and behavior but different internal workings—a phenomenon called model multiplicity—affect system integration and user interaction. We discuss the implications of model multiplicity for transparency, trust, and operational effectiveness in interactive software systems.

Read more →
