REACT

viRtual agEnts Against disCriminaTion

The first workshop on Discrimination at the International Conference on Intelligent Virtual Agents (IVA)

Glasgow, Sept 19th 2024

Intelligent Virtual Agents are increasingly used to embody roles potentially involving strong socio-emotional relationships, such as friends, companions, or therapists. As IVAs enter society, it is desirable that they act ethically toward humans.

The objective of the workshop is to gather researchers and developers of IVAs to discuss and identify the ethical issues in the modeling, implementation, and evaluation of IVAs in order to prevent discrimination.

We consider all types of discrimination (gender, race, age, disability, etc.). At the workshop, we aim to explore both the risks and the benefits of IVAs with respect to discrimination.

The workshop’s objective is to learn about each other’s work and to collaboratively address challenges, in particular concerning:

  • the present and prospective risks of IVAs in terms of discrimination;
  • the present and prospective benefits of using IVAs to prevent discrimination;
  • guidelines and principles on how IVAs should behave and be used, and the study of the potential socio-cultural biases behind such guidelines;
  • guidelines and principles on how to model the appearance and behavior of IVAs to prevent bias, stereotypes, and discriminatory behavior.

Program

9:15-9:20: Welcome

9:20-10:05 (45 min, Invited Speaker):

Fairness for Affective and Wellbeing Computing Systems and Agents
Hatice Gunes, University of Cambridge

10:05-10:25 (20 min, Paper presentation):

Using psychological science to diversify IVAs and address bias
Valentina Gosetti and Rachael E. Jack

10:25-10:45 (20 min, Paper presentation):

Analyzing gender bias in the non-verbal behaviors of
generative systems

Alice Delbosc, Marjorie Armando, Nicolas Sabouret, Brian Ravenet, Stéphane Ayache and Magalie Ochs

10:45-11:05: Coffee break

11:05-11:45 (40 min, Invited Speaker):

Overview of Work from the EPSRC Project Designing Conversational
Assistants to Reduce Gender Bias

Matthew Aylett, Heriot-Watt University

11:45-12:05 (20 min, Paper presentation):

Chatbots to strengthen democracy: An interdisciplinary seminar
to train identifying argumentation techniques of science denial

Ingo Siegert, Jan Nehring, Aranxa Márquez Ampudia, Matthias Busch and Stefan Hillmann

12:05-12:10: Closing

Call for Papers

We encourage contributions of (ongoing) research from different fields, including human-computer interaction, psychology, and education, concerning studies on:

  • the stereotypes that virtual agents can convey through their multimodal behavior, and the discriminatory behaviors that virtual agents may adopt;
  • the stereotypes that users attribute to IVAs depending on their multimodal behavior;
  • the potential impacts of interactions with stereotyped or discriminatory virtual agents on the user’s behavior outside the virtual world;
  • the detection of biases and stereotypes in the models integrated into virtual agents (e.g. LLM-based dialogue models);
  • discrimination by users against virtual agents;
  • perspectives on the use of IVAs to prevent stereotypes and discrimination (for instance, with training systems);
  • the impact of socio-cultural characteristics (e.g. age, gender, culture) on the perception of stereotypes and discrimination.

Submission

The workshop is organized as a half-day event to encourage discussion and collaboration. We welcome both technical and theoretical contributions on discrimination issues in IVAs. We encourage researchers from different domains, such as Computer Science, Psychology, Neuroscience, and Computer Graphics, to submit their work and attend the workshop. Contributions include, but are not limited to, overviews of existing work on the topic, human behavior and evaluation studies, models and methods to assess or prevent bias in machine learning models, use cases and field applications, etc.

Workshop paper submission deadline: July 20, 2024 (extended from June 29, 2024)
Notifications: August 16, 2024
Deadline for camera-ready version: September 2, 2024
Workshop: September 19, 2024

Submission format: maximum 6 pages (excluding references).

Submitted contributions must be written in English and should be anonymized. All submissions will be peer-reviewed by two anonymous, independent reviewers.

Please submit your contribution via e-mail to react(at)lis-lab.fr as a PDF, using the predefined CEUR template (an Overleaf page for LaTeX users is also available). Accepted papers will be published in the workshop’s proceedings on CEUR Workshop Proceedings (http://ceur-ws.org/).

Authors of accepted submissions will be invited to give an oral presentation or to present a poster of their work.

In submitting a manuscript to this workshop, the authors acknowledge that no paper substantially similar in content has been submitted to another conference or workshop.

The half-day program features two keynote talks, short paper presentations, and a debate on current and future challenges.

Keynote Speakers

Dr Aylett is an Associate Professor at Heriot-Watt University and a co-founder and Chief Scientific Officer of CereProc, a speech synthesis company based in Edinburgh, Scotland. He is a Royal Society Industrial Fellowship alumnus and Deputy Chief Scientist of the Scott Morgan Foundation. He received his PhD and MSc in Speech and Language Technology from the University of Edinburgh and holds a BA (Hons) in Computing and Artificial Intelligence from the University of Sussex. His career has spanned commercial, charity, and academic environments, and his work on speech synthesis, human-robot interaction, and digital narrative in both commercial and academic settings has given him the opportunity to work with many companies in the creative industries. He has a good understanding of the priorities and constraints of start-ups and SMEs, and from working closely with large companies such as Intel, Sony, Bloomberg, and Honda he also has a good understanding of the corporate world. His experience is chiefly in applied AI, taking ideas from AI and using them in creative and concrete ways. He has over 20 years’ experience in speech technology (including issues such as audio deepfakes) and over a decade’s experience in human-robot interaction and social robotics.

Presentation by Dr. Aylett

Overview of Work from the EPSRC Project Designing Conversational
Assistants to Reduce Gender Bias

Conversational assistants are rapidly developing from purely transactional systems into social companions with “personality”. UNESCO has pointed out that the female and submissive personality of current digital assistants gives rise to concern, as it reinforces gender stereotypes. In the Gender Bias project, we explored this claim and established a principled framework for designing and developing alternative conversational personas, following the Responsible Innovation approach of anticipate, reflect, engage and act (AREA) in a new collaborative partnership combining expertise from Computer Science, Social Psychology and Digital Education. In this talk I will give some context, present some of the work carried out in the project, and discuss the future: do we see an industry coming to grips with its ethical dilemmas, or one that is happy to regard them as someone else’s problem?

***************

Hatice Gunes is a Full Professor of Affective Intelligence and Robotics (AFAR) and the Director of the AFAR Lab at the University of Cambridge’s Department of Computer Science and Technology. She is an internationally recognized leader in affective computing and affective robotics, a former President of the Association for the Advancement of Affective Computing (AAAC), and a former Faculty Fellow of the Alan Turing Institute, the UK’s national centre for data science and artificial intelligence. Prof Gunes obtained her PhD in computer science from the University of Technology Sydney (UTS) in Australia as an awardee of the Australian Government International Postgraduate Research Scholarship (IPRS), a prestigious scholarship awarded on the basis of academic merit and research capacity. As a postdoctoral researcher at Imperial College London, she played a crucial role in the EU SEMAINE project, which created the world’s first publicly available multimodal, fully autonomous, and real-time human-agent interaction system (the SAL system). Attentive to user affect and nonverbal expressions, the project developed novel nonverbal audiovisual human behaviour analysis and multimodal agent behaviour synthesis capabilities, and won the Best Demo Award at IEEE ACII’09.

Now directing the Cambridge Affective Intelligence and Robotics Lab (AFAR Lab), Prof Gunes spearheads research on multimodal, social, and affective intelligence for AI systems, particularly embodied agents and robots, by cross-fertilizing research in Machine Learning, Affective Computing, Social Signal Processing, and Human Nonverbal Behaviour Understanding. Honoured with prestigious funding, including a 5-year EPSRC Fellowship (2019-present) and an EU Horizon 2020 grant (2019-2022), she has led the AFAR team in establishing new collaborations with the Departments of Psychiatry and Psychology, which made the team a finalist for the RSJ/KROS Distinguished Interdisciplinary Research Award at IEEE RO-MAN’21, and in exploring ambitious research directions that have been consistently recognized with awards and honours. These range from affective intelligence for service robotics (Best Paper Award Finalist at IEEE RO-MAN’20) and graph representation learning of multimodal behaviour for automatic depression assessment (Best Student Paper Award Finalist at IEEE FG’24), to using robots for mental wellbeing assessment in children (with over 1,000 global media reports and an interview with The Guardian) and taking robotic wellbeing coaches from the lab to the workplace (attracting over 700 media reports). The latter were honoured with the Runner-up for the Collaboration Award at the 2023 University of Cambridge Vice-Chancellor’s Awards for Research Impact and Engagement, and the Better Future Award at the Department of Computer Science and Technology’s Hall of Fame Awards 2023. Her team’s ongoing work on mitigating bias in affective and wellbeing computing also earned them the Best Paper Award in Responsible Affective Computing at ACII 2023.

Presentation by Prof. Gunes

Fairness for Affective and Wellbeing Computing Systems and Agents

Datasets, algorithms, machine learning models, and AI-powered tools used for perception, prediction, and decision making constitute the core of affective and wellbeing computing. The majority of these are prone to data or algorithmic bias (e.g., along demographic attributes such as race, age, and gender) that could have catastrophic consequences for various members of society. Therefore, considering such bias and providing solutions to avoid and/or mitigate it is of utmost importance for creating and deploying fair and unbiased affective and wellbeing computing systems, as well as agents and robots embedded with these systems. This talk will present the Cambridge Affective Intelligence and Robotics (AFAR) Lab’s (https://cambridge-afar.github.io/) research explorations in this area and will outline recommendations for achieving greater fairness in affective and wellbeing computing, while emphasising the need for such models to be deployed and tested in real-world settings and applications, for example robotic wellbeing coaching via physical robots.

***************

Program Committee

The program committee brings together researchers from different disciplines working on these issues:

  • Ruth Aylett – Heriot-Watt University
  • Birgit Lugrin – University of Wuerzburg
  • Jean-Claude Martin – Université Paris Saclay
  • Gale Lucas – USC Institute for Creative Technologies
  • Rachael Jack – University of Glasgow
  • Minha Lee – Eindhoven University of Technology
  • Deborah Richards – Macquarie University
  • Zerrin Yumak – Utrecht University
  • Benoit Favre – Aix Marseille Université
  • Dimosthenis Kontogiorgos – Massachusetts Institute of Technology
  • Lucile Sassatelli – Université Côte d’Azur
  • Willem-Paul Brinkman – Delft University of Technology
  • Tanvi Dinkar – Heriot-Watt University



Organizers

Magalie Ochs – Aix Marseille Université (contact person)

magalie(dot)ochs(at)lis-lab.fr

Chloé Clavel – INRIA Paris

Catherine Pelachaud – CNRS – ISIR, Sorbonne Université