Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI)
5th Edition
Workshop at HRI 2025 - March 3
9:00 am to 12:30 pm Melbourne time (AEDT)
Location: Hybrid (Melbourne Convention and Exhibition Centre, Melbourne, Australia)
Time (AEDT) | Session | Chair
---|---|---
09:00 am - 09:10 am | Introductory Remarks: Bahar Irfan
09:10 am - 09:40 am | Keynote by Dana Kulic: Human Variability and Long-term HRI
Abstract
While there are many challenges to long-term human-robot interaction, one of the key challenges is how to handle human variability over time. Human variability can arise both from the need for the robot to interact with diverse users and from changes to a single user's profile over time, e.g., due to ageing, skill improvement, or habituation. This talk will describe two types of approaches to handling variability: designing policies that adapt to individual users, and designing policies that are robust to variations in users. Example HRI applications and domains will be highlighted, including warehouse robot supervision, gait rehabilitation, and shared-autonomy drone flight.
| Chair: Nikhil Churamani
09:40 am - 09:50 am | Long-Term Planning Around Humans in Domestic Environments [PDF]
Authors: Ermanno Bartoli, Dennis Rotondi, Iolanda Leite
Abstract
Long-term planning for robots operating in domestic environments poses unique challenges due to the interactions between humans, objects, and spaces. Recent advancements in trajectory planning have leveraged vision-language models (VLMs) to extract contextual information for robots operating in real-world environments. While these methods achieve satisfying performance, they do not explicitly model human activities. Such activities influence surrounding objects and reshape spatial constraints. This paper presents a novel approach to trajectory planning that integrates human preferences, activities, and spatial context through an enriched 3D scene graph (3DSG) representation. By incorporating activity-based relationships, our method captures the spatial impact of human actions, leading to more context-sensitive trajectory adaptation. Preliminary results demonstrate that our approach effectively assigns costs to spaces influenced by human activities, ensuring that the robot’s trajectory remains contextually appropriate and sensitive to the ongoing environment. This balance between task efficiency and social appropriateness enhances context-aware human-robot interactions in domestic settings. Future work includes implementing a full planning pipeline and conducting user studies to evaluate trajectory acceptability.
| Chair: Michelle Zhao
09:50 am - 10:00 am | A Data-Driven Framework for Skill Representation [PDF]
Authors: Mariah Schrum, Allison Morgan, Deepak Gopinath, Jean Costa, Emily Sumner, Guy Rosman, Tiffany Chen
Abstract
Understanding and modeling human skill is critical for advancing human-robot and human-AI interaction, particularly in domains requiring nuanced cooperation and long-term personalization. Effective collaboration depends on aligning AI behavior with human capabilities, but quantifying skill is challenging, requiring both task knowledge and expert intuition. This work proposes a data-driven approach to modeling human skill through repeated interactions. Using high-performance driving education (HPDE) as a case study, we synthesize literature and expert insights to identify supervision signals for learning a robust skill representation. Leveraging a dataset of novice and expert drivers, we demonstrate the feasibility of automatically extracting skill representations from track-driving trajectory data. This foundation enables applications such as personalized coaching, skill-targeted challenges, and adaptive robotic interventions. Ultimately, we aim to democratize HPDE by making high-quality instruction more accessible through AI-driven personalization.
10:00 am - 10:10 am | Task-Relevant Active Learning without Prior Knowledge using Vision-Language Models [PDF]
Authors: Usman Irshad Bhatti, Ali Ayub
Abstract
Traditional active learning methods rely on uncertainty-based selection, which assumes the presence of initial labeled data to guide sample selection. These methods also assume that all the unknown data is relevant to the task to be performed by the robot, which is untrue in real-world settings. The problem becomes more challenging when robots know only the names of the classes relevant to their tasks but have no labeled data, rendering uncertainty sampling and other informative selection strategies infeasible. The only practical method in these situations is random sampling, which results in an ineffective use of labeling resources. As a solution to this problem, we propose a task-relevant active learning method that pre-selects samples most likely to belong to relevant classes, using embeddings generated by vision-language models and similarity scores computed between written task descriptions and object images. This minimizes annotation waste by ensuring that the labeled dataset begins with a high percentage of pertinent samples. Experimental findings show that our approach outperforms random sampling and traditional active learning in terms of model performance and relevant data selection, making it a workable option for robotic learning systems that have no initial labels and limited data to learn from in the real world.
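For readers unfamiliar with this style of pre-selection, the sketch below illustrates the general idea only; it is not the authors' implementation. It assumes an off-the-shelf CLIP model from the Hugging Face transformers library, and the helper name `rank_by_task_relevance` and the example task text are hypothetical: a written task description and candidate object images are embedded with the same VLM, and images are ranked by cosine similarity so labeling can start from the samples most likely to be relevant.

```python
# Illustrative sketch (not the workshop paper's code): seed an active-learning
# labeling pool with images that are most similar to a written task description,
# using CLIP embeddings and cosine similarity.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def rank_by_task_relevance(task_description, image_paths, top_k=10):
    """Return the top_k image paths ranked by similarity to the task description."""
    images = [Image.open(p).convert("RGB") for p in image_paths]
    inputs = processor(text=[task_description], images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # Cosine similarity between each image embedding and the single text embedding.
    sims = torch.nn.functional.cosine_similarity(out.image_embeds, out.text_embeds)
    order = sims.argsort(descending=True)[:top_k]
    return [image_paths[i] for i in order]

# Hypothetical usage: label these images first instead of a random sample.
# shortlist = rank_by_task_relevance("pick up kitchen utensils", unlabeled_image_paths)
```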
10:10 am - 10:30 am | Break-out Session #1 | Chair: Ali Ayub
10:30 am - 11:00 am | Coffee break
11:00 am - 11:50 am | Panel: Overcoming Inequalities with Adaptation | Moderator: Silvia Rossi; Online Chair: Michelle Zhao
11:50 am - 12:15 pm | Break-out Session #2 | Chair: Ali Ayub
12:15 pm - 12:30 pm | Break-out Wrap-up and Concluding Remarks | Chair: Bahar Irfan