Important dates:
- Submission deadline: 17 December 2017
- Notification: 23 January 2018
- Camera ready copy due: 6 February 2018
- Workshop: 11 March 2018
Held in conjunction with the ACM Conference on Intelligent User Interfaces (IUI), Tokyo, Japan, 11 March 2018
This workshop will bring together researchers in academia and industry who have an interest in making smart systems explainable to users and therefore more intelligible and transparent.
Smart systems that apply complex reasoning to make decisions and plan behavior, such as clinical decision support systems, personalized recommendations, and machine learning classifiers, are difficult for users to understand. While research to make systems more explainable and therefore more intelligible and transparent is gaining pace, numerous issues and problems regarding these systems demand further attention. The goal of this workshop is to bring together researchers from academia and industry to address these issues, such as when and how to provide an explanation to a user. The workshop will include a keynote, poster panels, and group activities, with the goal of developing concrete approaches to handling challenges related to the design and development of explainable smart systems.
David Gunning is a DARPA program manager in the Information Innovation Office (I2O). Dave has an extensive background in the development and application of artificial intelligence (AI) technology. At DARPA, Dave now manages the Explainable AI (XAI) and the Communicating with Computers (CwC) programs. Dave comes to DARPA as an IPA from Pacific Northwest National Laboratory (PNNL). Prior to PNNL, Dave was a Program Director for Data Analytics and Contextual Intelligence at the Palo Alto Research Center (PARC), a Senior Research Manager at Vulcan Inc., a Program Manager at DARPA (twice before), SVP of SET Corp., VP of Cycorp, and a Senior Scientist in the Air Force Research Labs. At DARPA previously, Dave managed the Personalized Assistant that Learns (PAL) project that produced Siri and the Command Post of the Future (CPoF) project that was adopted by the US Army as their command and control system for use in Iraq and Afghanistan. Dave holds an M.S. in Computer Science from Stanford University, an M.S. in Cognitive Psychology from the University of Dayton, and a B.S. in Psychology from Otterbein College.
You may download the keynote slides.
Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by the machine’s current inability to explain their decisions and actions to human users. The Department of Defense is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI—especially explainable machine learning—will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.
XAI is developing new machine-learning systems with the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future. The strategy for achieving that goal is to develop new or modified machine-learning techniques that will produce more explainable models. These models will be combined with state-of-the-art human-computer interface techniques capable of translating models into understandable and useful explanation dialogues for the end user. Our strategy is to pursue a variety of techniques in order to generate a portfolio of methods that will provide future developers with a range of design options covering the performance-versus-explainability trade space.
XAI research prototypes will be tested and continually evaluated throughout the course of the program. At the end of the program, the final delivery will be a toolkit library consisting of machine learning and human-computer interface software modules that could be used to develop future explainable AI systems. After the program is complete, these toolkits would be available for further refinement and transition into defense or commercial applications.
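To make the idea of a model that can "explain its rationale" more concrete, here is a minimal, hypothetical sketch (not part of the XAI program or any of its prototypes): a tiny rule-list classifier whose `predict` method returns both a prediction and the human-readable rule that produced it. All names, features, and thresholds are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    description: str                   # human-readable rationale
    condition: Callable[[dict], bool]  # predicate over a feature dict
    label: str                         # predicted class if the rule fires

class ExplainableRuleList:
    """Applies rules in order; the first matching rule decides the class
    and supplies the explanation for that decision."""

    def __init__(self, rules: list[Rule], default_label: str):
        self.rules = rules
        self.default_label = default_label

    def predict(self, features: dict) -> tuple[str, str]:
        """Return (predicted label, explanation of why)."""
        for rule in self.rules:
            if rule.condition(features):
                return rule.label, rule.description
        return self.default_label, "no rule matched; default class used"

# Toy loan-screening example (features and thresholds are invented).
model = ExplainableRuleList(
    rules=[
        Rule("income below 20k", lambda f: f["income"] < 20_000, "deny"),
        Rule("debt ratio above 0.6", lambda f: f["debt_ratio"] > 0.6, "deny"),
    ],
    default_label="approve",
)

label, why = model.predict({"income": 15_000, "debt_ratio": 0.2})
print(label, "-", why)  # deny - income below 20k
```

Rule lists trade raw predictive power for transparency: every decision comes with its own justification, which is one simple point in the performance-versus-explainability trade space described above.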
| Time | Session |
|---|---|
| 09:15-10:00 | Keynote talk by Dave Gunning: "What are the current challenges for explainable smart systems?" |
| 10:00-10:45 | Themed poster panels. Each accepted paper should prepare a poster giving an overview of the paper, presented in 3 minutes. Posters must not exceed A1 format (roughly 594 x 841 mm, or 23.4 x 33.1 inches). |
| 11:00-11:30 | Introduction of activities, sub-group assignment (3-4 groups, no more than 6 per group), and example systems |
| 11:30-12:15 | Sub-group activity: choose a system to focus on and explore its concrete challenges |
| 12:15-12:30 | Presentation of sub-group activity |
| 13:30-15:00 | Sub-group activity: designing concrete approaches for the challenges identified for the chosen system |
| 15:15-16:15 | Presentation of sub-group activity and design crit |
| 16:15-16:45 | Sub-group activity: write-up and planning of next steps |
You may download all accepted position papers for this workshop here.