Smart systems that apply complex reasoning to make decisions and plan behavior, such as decision support systems and personalized recommendations, are difficult for users to understand. A wide variety of algorithms exploit rich and varied data sources to support human decision-making or to take direct action. However, there are increasing concerns about their transparency and accountability, as these processes are typically opaque to the user – e.g., because they are too technically complex to be explained or are protected trade secrets. Transparency and accountability have attracted increasing interest in recent years, with the aims of more effective system training, better reliability, and improved usability.
This workshop will provide a venue for exploring issues that arise in designing, developing, and evaluating intelligent user interfaces that provide responsible, explainable AI, taking into account the diversity of the stakeholders involved and ensuring trust through system transparency. Furthermore, understanding users’ fairness perceptions, especially when interacting with such systems (e.g., how to explain systems and models so as to ensure social justice and trust), will lead to more effective system interactions, better reliability, improved usability, and a better user experience.
Suggested themes include, but are not limited to:
- How can we build inclusive transparency and explanations of algorithmic systems, particularly those that demonstrate that they are fair, accountable, and not biased?
- How do different stakeholders perceive algorithmic fairness, especially when interacting with AI-enabled systems?
- Through explanations, transparency, or other means, how can we raise stakeholders’ awareness of the potential risk for biases and social harms that could result from developing and using intelligent interactive systems?
- How do different groups of users (e.g. experts, developers, end-users) perceive the explanations provided by those systems?
- How can we build (good) algorithmic systems, particularly those that demonstrate that they are fair and accountable?
- What are the optimal points at which explanations are needed for transparency?
- What is important in user modeling for system transparency and explanations?
- What are possible metrics that can be used when evaluating transparent systems and explanations?
- How can we evaluate explanations and their ability to accurately explain underlying algorithms and overall systems’ behavior, especially for the goals of fairness and accountability?
- What techniques can we apply for testing models and assumptions of transparent and explainable intelligent interactive systems, being mindful of the potential for social and discriminatory harm?
- How can explanations allow human evaluators to select model(s) that are unbiased, such as by revealing traits or outcomes of the underlying learned system?
- What are important social aspects in interaction design for system transparency and explanations?
- How can we account for stakeholders’ diversity when designing and evaluating transparency and explanations?
Researchers and practitioners in academia or industry who have an interest in these areas are invited to submit papers of up to 8 pages (not including references) in ACM SIGCHI Paper Format. Submissions must be original and relevant contributions to the workshop theme. Examples include, but are not limited to, position papers summarizing the authors’ existing research in this area and how it relates to the workshop theme, papers offering an industrial perspective or a real-world approach to the workshop theme, papers that review the related literature and offer a new perspective, and papers that describe work-in-progress research projects.
Papers should be submitted via EasyChair by the end of January 9th, 2023, and will be reviewed by committee members. Position papers do not need to be anonymized. At least one author of each accepted position paper must register for and attend the workshop. It is anticipated that accepted contributions will be published in dedicated workshop proceedings.
For further questions please contact the workshop organizers at firstname.lastname@example.org.