About the Workshop
This workshop follows on from the very successful ExSS 2018 workshop held at IUI. It will bring together researchers from academia and industry who share an interest in making smart systems explainable to users, and therefore more intelligible and transparent. Interest in this topic has grown because explanations offer glimpses into the black-box behavior of these systems, enabling more effective steering or training of the system, better reliability, and improved usability. The workshop will provide a venue for exploring issues that arise in designing, developing, and evaluating smart systems that use or provide explanations of their behavior.
Keynote Speaker: Margaret Burnett
Margaret Burnett is an OSU Distinguished Professor at Oregon State University. She began her career in industry, where she was the first woman software developer ever hired at Procter & Gamble Ivorydale. A few degrees and start-ups later, she joined academia, with a research focus on people who are engaged in some form of software development. Together with her collaborators and students, she has contributed some of the seminal work on explaining AI to ordinary end users. She also co-founded the area of end-user software engineering, which aims to improve software for computer users that are not trained in programming, and leads the team that created GenderMag, a software inspection process that uncovers gender biases in software from spreadsheets to programming environments. Burnett is an ACM Fellow, a member of the ACM CHI Academy, an award-winning mentor, and serves on the Academic Alliance Advisory Board of the National Center for Women In Technology (NCWIT).
Explaining AI Fairly (Well)
How can the field of Explainable AI (XAI) get from where we are now, explaining some aspects of AI fairly well, to where we need to be—explaining AI fairly and well? In this keynote, I’ll talk about three critical challenges to our field, focusing especially on the third of these: explaining AI fairly.
Tentative Schedule
Time | Activity |
---|---|
09:00-09:15 | Welcome address |
09:15-10:15 | Keynote talk by Prof. Margaret Burnett – “Explaining AI Fairly (Well)” |
10:15-10:45 | Paper Panel Session 1: Visualization and Attention |
10:45-11:00 | Coffee Break |
11:00-11:30 | Paper Panel Session 2: Human-in-the-Loop |
11:30-12:00 | Paper Panel Session 3: Frameworks |
12:00-12:30 | Paper Panel Session 4: Real-world Systems and Industry |
12:30-13:30 | Lunch |
13:30-14:00 | Poster Session |
14:00-15:00 | Hands-on Activity 1: Affinity Diagramming for Definitions, Criteria, and Objectives for ExSS |
15:00-15:30 | Hands-on Activity 2: Application of Existing Frameworks to Make an Explainable Recommender System |
15:30-15:45 | Coffee Break (Poster Session continued) |
15:45-16:45 | Hands-on Activity 2 (continued) |
16:45-17:00 | Workshop Wrap-Up and Next Steps |
Accepted Papers and Posters
We have one keynote and 19 accepted papers (12 panel presentations and 7 posters). You may download all the accepted position papers for this workshop here.
Keynote
- Margaret Burnett. Explaining AI Fairly (Well).
Panel: Visualization and Attention
- Chiradeep Roy, Mahesh Shanbhag, Mahsan Nourani, Tahrima Rahman, Samia Kabir, Vibhav Gogate, Nicholas Ruozzi and Eric Ragan. Explainable Activity Recognition in Videos.
- Alison Smith-Renner, Rob Rua and Mike Colony. Towards an Explainable Threat Detection Tool.
- Mukund Sundararajan, Jinhua Xu, Ankur Taly, Rory Sayres and Amir Najmi. Exploring Principled Visualizations for Deep Network Attributions.
Panel: Human-in-the-Loop
- Mandana Hamidi Haines, Zhongang Qi, Alan Fern, Fuxin Li and Prasad Tadepalli. Interactive Naming for Explaining Deep Neural Networks: A Formative Study.
- Ruixue Liu, Advait Sarkar, Erin Solovey and Sebastian Tschiatschek. Evaluating Rule-based Programming and Reinforcement Learning for Personalising an Intelligent System.
- Yiwei Yang, Eser Kandogan, Yunyao Li, Prithviraj Sen and Walter Lasecki. A Study on Interaction in Human-In-The-Loop Machine Learning for Text Analytics.
Panel: Frameworks
- Michael Chromik, Malin Eiband, Sarah Völkel and Daniel Buschek. Dark Patterns of Explainability, Transparency, and User Control for Intelligent Systems.
- Brian Lim, Qian Yang, Ashraf Abdul and Danding Wang. Why these Explanations? Selecting Intelligibility Types for Explanation Goals.
- Simone Stumpf. Horses For Courses: Making The Case For Persuasive Engagement In Smart Systems.
Panel: Real-world Systems and Industry
- Prajwal Paudyal, Junghyo Lee, Azamat Kamzin, Mohamad Soudki, Ayan Banerjee and Sandeep Gupta. Learn2Sign: Explainable AI for Sign Language Learning.
- Vanessa Putnam and Cristina Conati. Exploring the Need for Explainable Artificial Intelligence (XAI) in Intelligent Tutoring Systems (ITS).
- Christine T. Wolf and Jeanette Blomberg. Explainability in Context: Lessons from an Intelligent System in the IT Services Domain.
Posters
- Federica Di Castro and Enrico Bertini. Surrogate Decision Tree Visualization.
- Yuri Nakao, Junichi Shigezumi, Hikaru Yokono and Takuya Takagi. Requirements for Explainable Smart Systems in the Enterprises from Users and Society Based on FAT.
- An Nguyen, Byron Wallace and Matt Lease. Mash: software tools for developing interactive and transparent machine learning systems.
- Mireia Ribera and Agata Lapedriza. Can we do better explanations? a proposal of user-centered explainable AI.
- Pedro Sequeira, Eric Yeh and Melinda Gervasio. Interestingness Elements for Explainable Reinforcement Learning through Introspection.
- Jun Wang, Changsheng Zhao, Junfu Xiang and Kanji Uchino. Interactive Topic Model with Enhanced Interpretability.
- Yao Xie, Xiang Chen and Ge Gao. Outlining the Design Space of Intelligible Intelligent Systems for Medical Diagnosis.