About the Workshop

This workshop follows on from the very successful ExSS 2018 workshop held at IUI. It will bring together researchers in academia and industry who have an interest in making smart systems explainable to users and therefore more intelligible and transparent. This topic has attracted increasing interest because glimpses into the black-box behavior of these systems can enable more effective steering or training of the system, better reliability and improved usability. The workshop will provide a venue for exploring issues that arise in designing, developing and evaluating smart systems that use or provide explanations of their behavior.


Tentative Schedule

Time Activity
09:00-09:15 Welcome address
09:15-10:15 Keynote talk by Prof. Margaret Burnett – “Explaining AI Fairly (Well)”
10:15-10:45 Paper Panel Session 1

  • 18 min: 3 papers, 6 min each
  • 12 min: Q&A
10:45-11:00 Coffee Break
11:00-11:30 Paper Panel Session 2
11:30-12:00 Paper Panel Session 3
12:00-12:30 Paper Panel Session 4
12:30-13:30 Lunch
13:30-14:00 Poster Session
14:00-15:00 Hands-on Activity 1

Affinity Diagramming for Definitions, Criteria, Objectives for ExSS

15:00-15:15 Coffee Break (Poster Session continued)
15:15-16:45 Hands-on Activity 2

Application of existing frameworks to make an explainable recommender system

16:45-17:00 Workshop Wrap-Up and Next Steps

Accepted Papers and Posters

We have accepted 20 papers: 12 as panel presentations and 8 as posters. These will be downloadable soon after our camera-ready deadline.

Panel: Frameworks

  • Michael Chromik, Malin Eiband, Sarah Völkel and Daniel Buschek. Dark Patterns of Explainability, Transparency, and User Control for Intelligent Systems.
  • Brian Lim, Qian Yang, Ashraf Abdul and Danding Wang. Why these Explanations? Selecting Intelligibility Types for Explanation Goals.
  • Simone Stumpf. Horses For Courses: Making The Case For Persuasive Engagement In Smart Systems.

Panel: Human-in-the-Loop

  • Mandana Hamidi Haines, Zhongang Qi, Alan Fern, Fuxin Li and Prasad Tadepalli. Interactive Naming for Explaining Deep Neural Networks: A Formative Study.
  • Ruixue Liu, Advait Sarkar, Erin Solovey and Sebastian Tschiatschek. Evaluating Rule-based Programming and Reinforcement Learning for Personalising an Intelligent System.
  • Yiwei Yang, Eser Kandogan, Yunyao Li, Prithviraj Sen and Walter Lasecki. A Study on Human-AI Cooperative Model Development: Explainability and Interactivity Leads to Generalizability.

Panel: Real-world Systems and Industry

  • Prajwal Paudyal, Junghyo Lee, Azamat Kamzin, Mohamad Soudki, Ayan Banerjee and Sandeep Gupta. Learn2Sign: Explainable AI for Sign Language Learning.
  • Vanessa Putnam and Cristina Conati. Exploring the Need for Explainable Artificial Intelligence (XAI) in Intelligent Tutoring Systems (ITS).
  • Christine T. Wolf and Jeanette Blomberg. Explainability in Context: Lessons from an Intelligent System in the IT Services Domain.

Panel: Visualization and Attention

  • Chiradeep Roy, Mahesh Shanbhag, Mahsan Nourani, Tahrima Rahman, Samia Kabir, Vibhav Gogate, Nicholas Ruozzi and Eric Ragan. Explainable Activity Recognition in Videos.
  • Alison Smith-Renner, Rob Rua and Mike Colony. Towards an Explainable Threat Detection Tool.
  • Mukund Sundararajan, Jinhua Xu, Ankur Taly, Rory Sayres and Amir Najmi. Exploring Principled Visualizations for Deep Network Attributions.

Posters

  • Federica Di Castro and Enrico Bertini. Surrogate Decision Tree Visualization.
  • Yuri Nakao, Junichi Shigezumi, Hikaru Yokono and Takuya Takagi. Requirements for Explainable Smart Systems in the Enterprises from Users and Society Based on FAT.
  • An Nguyen, Byron Wallace and Matt Lease. Mash: software tools for developing interactive and transparent machine learning systems.
  • Mahsan Nourani, Sina Mohseni, Eric Ragan, Chad A. Steed and John R. Goodall. Explanations in the Loop: Leveraging Implicit and Explicit Interactions as Feedback in Interpretable Systems.
  • Mireia Ribera and Agata Lapedriza. Can we do better explanations? a proposal of user-centered explainable AI.
  • Pedro Sequeira, Eric Yeh and Melinda Gervasio. Interestingness Elements for Explainable Reinforcement Learning through Introspection.
  • Jun Wang, Changsheng Zhao, Junfu Xiang and Kanji Uchino. Interactive Topic Model with Enhanced Interpretability.
  • Yao Xie, Xiang Chen and Ge Gao. Outlining the Design Space of Intelligible Intelligent Systems for Medical Diagnosis.
