Invited Speakers

KEYNOTES

Short Bio: Pablo Barros is a machine learning scientist at the Sony R&D Center in Brussels, focusing on physiological and social signal processing applied to mental health solutions. Pablo holds a Ph.D. in computer science from the University of Hamburg, Germany, and in recent years has worked on several projects involving social robotics and affective computing. His research focuses on social perception, particularly facial expression recognition, and on using artificial intelligence to model the role of social agents in interactions with humans.

Title of the Talk: How can we overcome feature-based and contextual biases when learning facial expression representations?

Abstract: Facial Expression Recognition (FER) has become a popular topic within the hyper-active computer vision community, leading to the development of a plethora of FER solutions, most of them based on deep-learned facial expression representations, that are easily accessible to the general public. Such solutions have become the backbone of human-based interaction research, serving as a means of human behavior analysis, the foundation of interaction-driven models, and one of the most fundamental building blocks of proposed cognitive architectures. Much of this important research relies blindly on the objective performance of FER systems and their capability to categorize a face, in most cases even at the frame level, into one known and pre-determined emotional category. Once you understand how deep-learned FER models actually categorize faces, it is easy to see that trusting their outputs might drastically bias all of the previously mentioned research areas. These models are mostly trained on a supervised task, where groups of pixels are pushed to compose a specific and pre-determined emotional category. In most cases, these affective labels are deeply connected to the scenario represented by the datasets these models were trained on, which drastically changes the interpretation of their FER results. In line with recent advances in non-universal facial perception, understanding the context in which these models were trained might help to avoid a strong bias in their application to fundamental research, and help us to be more responsible in our claims and findings. The goal of this talk is to discuss the core of the problem of blindly trusting FER systems, and to foster a discussion on the importance of understanding how they function.
In this regard, I will present our most recent research on facial expression perception, how we can address bias in affective categorization based on the non-universal perception theory, and how this can impact future uses of FER technology in other fields.

Short Bio: Dorsa Sadigh is an assistant professor in Computer Science and Electrical Engineering at Stanford University. Her research interests lie at the intersection of robotics, learning, and control theory. Specifically, she is interested in developing algorithms for safe and adaptive human-robot and human-AI interaction. Dorsa received her doctoral degree in Electrical Engineering and Computer Sciences (EECS) from UC Berkeley in 2017, and her bachelor’s degree in EECS from UC Berkeley in 2012. She has been awarded the Sloan Fellowship, NSF CAREER Award, ONR Young Investigator Award, AFOSR Young Investigator Award, DARPA Young Faculty Award, Okawa Foundation Fellowship, MIT TR35, and the IEEE RAS Early Academic Career Award.

Title of the Talk: Aligning Robots with Human Preferences

Abstract: Aligning robot objectives with human preferences is a key challenge in robot learning. In this talk, I will start by discussing how active learning of human preferences can effectively query humans with the most informative questions to learn their preference reward functions. I will discuss some of the limitations of prior work, and how approaches such as few-shot learning can be integrated with active preference-based learning to reduce the number of queries to a human expert and truly bring humans into the loop of learning neural reward functions. I will then talk about how we could go beyond active learning from a single human, and tap into large language models (LLMs) as another source of information to capture human preferences that are hard to specify. For example, we can capture complex notions of being “versatile”, “push-over”, “stubborn”, or “competitive” in negotiations using the rich context present in LLMs. I will discuss how LLMs can be queried within a reinforcement learning loop and help with reward design. In the second part of the talk, I will discuss how the robot can also provide useful information to the human and be more transparent about its learning process. We demonstrate how the robot’s transparent behavior can guide the human to provide compatible demonstrations that are more useful and informative for learning. Finally, I will briefly discuss Language-Informed Latent Actions with Corrections (LILAC), which will be presented at HRI as a shared autonomy paradigm for providing online corrections to robots, introducing a natural approach for aligning robots with human preferences.

DEBATERS

It is more impactful to focus on building robots that personalize effectively to users in a constrained task or application than to build more general-purpose robots

FOR

Short Bio: Brian Scassellati is a Professor of Computer Science, Cognitive Science, and Mechanical Engineering at Yale University and Director of the NSF Expedition on Socially Assistive Robotics. His research focuses on building embodied computational models of human social behavior, especially the developmental progression of early social skills. Using computational modeling and socially interactive robots, his research evaluates models of how infants acquire social skills and assists in the diagnosis and quantification of disorders of social development (such as autism). His other interests include humanoid robots, human-robot interaction, artificial intelligence, machine perception, and social learning. Dr. Scassellati received his Ph.D. in Computer Science from the Massachusetts Institute of Technology in 2001. His dissertation work (Foundations for a Theory of Mind for a Humanoid Robot) with Rodney Brooks used models drawn from developmental psychology to build a primitive system for allowing robots to understand people. His work at MIT focused mainly on two well-known humanoid robots named Cog and Kismet. He also holds a Master of Engineering in Computer Science and Electrical Engineering (1995), and Bachelor’s degrees in Computer Science and Electrical Engineering (1995) and Brain and Cognitive Science (1995), all from MIT.

Short Bio: Since 2021, Dr. Séverin Lemaignan has been a Senior Scientist at Barcelona-based PAL Robotics. He leads the Social Intelligence team, in charge of designing and developing the socio-cognitive capabilities of robots such as TIAGo and ARI. He was previously Associate Professor in Social Robotics and AI at the Bristol Robotics Laboratory, University of the West of England, Bristol. He obtained a joint PhD in Cognitive Robotics from CNRS/LAAS (France) and the Technical University of Munich (Germany) in 2012. He then joined EPFL (Switzerland) and Plymouth University (UK) as a postdoc, then as a lecturer in Robotics, until 2018, when he joined the Bristol Robotics Lab. His research interests primarily concern the socio-cognitive aspects of human-robot interaction, and his recent experimental work has focused on child-robot interaction and human-in-the-loop machine learning for social robots. Séverin Lemaignan has been involved in several European projects related to social and cognitive robotics (SPRING, SHAPES, DREAM,…) and in 2015 was awarded an EU H2020 Marie Skłodowska-Curie Individual Fellowship for his project on robots and theory of mind.

AGAINST

Short Bio: Eric Eaton is a research associate professor in the Department of Computer and Information Science at the University of Pennsylvania, with a secondary appointment in Biomedical and Health Informatics at Children’s Hospital of Philadelphia. He directs the lifelong machine learning research group within the General Robotics, Automation, Sensing, and Perception (GRASP) lab at Penn. His research focuses on developing versatile AI systems that can learn multiple tasks over a lifetime of experience in complex environments, transfer learned knowledge to rapidly acquire new abilities, and collaborate effectively with humans and other agents through interaction. This work spans a number of topics, including lifelong and continual learning, transfer learning, deep learning, reinforcement learning, machine perception, and interactive AI. Dr. Eaton’s research applies these methods to problems in service robotics and precision medicine.

Short Bio: Karol Hausman is a Staff Research Scientist at Google Brain and an Adjunct Professor at Stanford working on robotics and machine learning. He is interested in enabling robots to acquire general-purpose skills with minimal supervision in the real world. He received his PhD in Computer Science from the University of Southern California and his Master’s degree from the Technical University of Munich. When he is not debugging robots at Google, he teaches a Deep RL class at Stanford.