About the Workshop

Smart systems that apply complex reasoning to make decisions, such as decision support or recommender systems, are difficult for people to understand. Algorithms allow the exploitation of rich and varied data sources to support human decision-making; however, because these processes are typically opaque to users, there are growing concerns about their fairness, bias, and accountability. Transparency and accountability have attracted increasing interest as a means toward more effective system training, better reliability, appropriate trust, and improved usability. The workshop on Transparency and Explanations in Smart Systems (TExSS) provides a venue for exploring issues that arise when designing, developing, or evaluating transparent intelligent user interfaces, with an additional focus on explaining systems and models toward ensuring fairness and social justice.

This workshop was preceded by the joint workshop on Explainable Smart Systems for Algorithmic Transparency in Emerging Technologies (ExSS-ATEC 2020), the 2nd Workshop on Explainable Smart Systems (ExSS 2019), and the 2nd International Workshop on Intelligent User Interfaces for Algorithmic Transparency in Emerging Technologies (IUI-ATEC 2019).

Schedule & Papers

The TExSS workshop has merged with the HUMANIZE workshop to offer an exciting joint program.
The 2022 event will be held fully virtually on 21 March. All times are CET.

SESSION 1: INTRODUCTION (Session Chair: Stella Kleanthous)
16:00 – 16:10 Welcome by organizers
16:10 – 17:00 Keynote (30 min + 20 min Q&A)

Experiencing the COVID-19 pandemic through the lens of Google (Frank Hopfgartner)
Abstract: During times of crisis such as the COVID-19 pandemic, information access is crucial. Given the opaque processes behind modern search engines, it is important to understand the extent to which information accessed by users differs. In this talk, I present findings of a research study on a similarity analysis of Google search results. We crowdsourced pandemic-related image queries from participants in four countries and used these to simulate users of Google Image search in four different countries.

Frank Hopfgartner is a Senior Lecturer in Data Science at the Information School of the University of Sheffield. His research to date sits at the intersection of information systems (e.g., information retrieval and recommender systems), content analysis, and data science. He has (co-)authored over 150 publications in the above-mentioned research fields, including a book on smart information systems, various book chapters, and papers in peer-reviewed journals, conferences, and workshops.

17:00 – 17:10 Break
SESSION 2: CONCEPTS (Session Chair: Marko Tkalcic)
17:10 – 17:30 A Framework for Predicting Fairness Perception – Towards Personalized Explanations of Algorithmic Systems Results
Avital Shulner Tal, Doron Kliger and Tsvi Kuflik
17:30 – 17:50 Position: The Case Against Case-Based Explanation
Jonathan Dodge
17:50 – 18:10 Is explainable AI a race against model complexity?
Advait Sarkar
18:10 – 18:30 Development of an Instrument for Measuring Users’ Perception of Transparency in Recommender Systems
Marco Hellmann, Diana C. Hernandez-Bocanegra and Jürgen Ziegler
18:30 – 18:40 Break
SESSION 3: APPLICATIONS (Session Chair: Jon Dodge)
18:40 – 19:00 Explaining Podcast Recommendations To Users with Content Diversity Labels
Bernd Huber, Yixue Wang, Jean Garcia-Gathright and Jenn Thom
19:00 – 19:20 Supporting Responsible Data and Algorithmic Practices in The News Media
Dilruba Showkat
19:20 – 19:40 Towards Understanding the Transparency of Automations in Daily Environments
Fabio Paternò
19:40 – 19:50 Break
SESSION 4: INTERACTION (Session Chair: Tsvi Kuflik)
19:50 – 20:40 Moderated group discussion