Introduction: The origin story, part 2

Upol Ehsan shares the "why" behind the workshop, followed by an introduction of the organizing committee. Watch this if you want to learn how the workshop came about, what our goals were, and how we plan to build on the conversation around Human-centered XAI.


[Embedded YouTube video]

Fireside Chat with Tania Lombrozo

Tania Lombrozo is the Arthur W. Marks ’19 Professor of Psychology at Princeton University and will share her views on XAI.

Moderator: Upol Ehsan


Paper Session #1: Concerns and Issues

Session Chair: Q. Vera Liao

Paper #4: Explainable AI: Another Successful Failure?
Bran Knowles

Paper #6: On the Relationship Between Explanations, Fairness Perceptions, and Decisions
Jakob Schoeffer*, Maria De-Arteaga*, Niklas Kühl* (* equal contributions)

Paper #8: How XAI May Be Exploited To Create Seemingly Blameworthy AI
Gabriel Lima, Meeyoung Cha, Nina Grgić-Hlača, Jin Keun Jeong

Paper #15: "If it didn't happen, why would I change my decision?": How Judges Respond to Counterfactual Explanations for the Public Safety Assessment
Yaniv Yacoby, Ben Green, Christopher L. Griffin Jr., Finale Doshi-Velez 


[Embedded YouTube video]

Paper Session #2: Trust and Cooperation


Session Chair: Carina Manger

Paper #36: Human Trust for AI Partnerships
Robin Welsch, Thomas Weber

Paper #12: How to Facilitate Mental Model Building and Mitigate Overtrust Using HCXAI
Lisa Graichen, Matthias Graichen, Mareike Petrosjan  

Paper #28: The Role of Information Asymmetry in Human-AI Decision-Making
Patrick Hemmer*, Max Schemmer*, Niklas Kühl, Michael Vössing, Gerhard Satzger (* equal contributions) 


Paper Session #3: Human-centered Explanation Design

 

Session Chair: Elizabeth Watkins

Paper #18: Fighting Deceit and Disguise in Language Interfaces with WYHIWYS
Claudio S. Pinhanez

Paper #27: Explainability in Automated Training and Feedback Systems
Rooja Rao S. B., Dinesh Babu Jayagopi, Mauro Cherubini

Paper #33: Dangerously Convincing: Applying Guidelines of Risk Communication Message Design to Human-Centered Explainable AI
Angel Hsing-Chi Hwang   

Paper #13: User-friendly Conversational Explanations: A Research Summary
Fatemeh Alizadeh, Aikaterini Mniestri, Dominik Pins, Gunnar Stevens 

[Embedded YouTube video]

Poster Spotlight Videos

Paper #10: Fair and Trustworthy Welfare Systems: Rethinking Explainable AI in the Public Sector
Dilruba Showkat, Shereen Bellamy and Alexandra To

Paper #19: Explaining Binary Time-Series Classification with Counterfactuals in an Industrial Use Case
Anahid Jalali, Bernhard Haslhofer, Clemens Heistracher, Denis Katic and Andreas Rauber

Paper #30: Towards a Learner-Centered Explainable AI
Anna Kawakami, Luke Guerdan, Yanghuidi Cheng, Anita Sun, Alison Hu, Kate Glazko, Nikos Arechiga, Matthew Lee, Scott Carter, Haiyi Zhu and Kenneth Holstein

Paper #23: Co-Designing Explainable AI for a Mobile Banking App
Alexander Blandin, Matt Roach, Matt Jones, Jen Pearson, Daniele Doneddu and David Sullivan

Paper #9: Explaining the envelope of acceptability
Alistair Sutcliffe

Paper #14: Designing Explainable Chatbot Interfaces: Enhancing Usefulness, Transparency, and Trust
Anjali Khurana and Parmit Chilana

Paper #7: Towards Human-Centric XAI Chatbots in Mental Health for End-User Experiences
Elena Korshakova and Sang Won Bae

Paper #45: Treading on Thin Ice: Using Analogies to Promote Appropriate Reliance in Human-AI Decision Making
Gaole He and Ujwal Gadiraju

Paper #16: Perceptions on Explanations in Automated Fact-Checkers
Gionnieve Lim and Simon Perrault

Paper #22: Why and How? A User Study to Evaluate Explainability Within the Automotive Context
Julia Graefe, Selma Paden and Klaus Bengler

Paper #38: A Quantitative Human-Grounded Evaluation Process for Explainable ML
Katharina Beckh, Sebastian Müller and Stefan Rüping

Paper #24: A Meta-Analysis of the Utility of Explainable Artificial Intelligence-assisted Decision-Making
Max Schemmer, Patrick Hemmer, Niklas Kühl and Maximilian Nitsche

Paper #20: Creative Uses of AI Systems and their Explanations: A Case Study from Insurance
Michaela Benk, Raphael Weibel and Andrea Ferrario

Paper #44: Interpreting a Mirage: Lessons from a Design Study Toward Synthetic Weather Visualizations
Steven Gomez and Kevin Nam

Paper #5: HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim, Nicole Meister, Vikram V. Ramaswamy, Ruth Fong and Olga Russakovsky

Paper #40: Combining user-centred Explainability and xAI
Janin Koch and Vitor Fortes Rey

Paper #41: Grounding Explainability Within the Context of Global South in XAI
Deepa Singh, Michal Slupczynski, Ajit G. Pillai and Vinoth Pandian Sermuga Pandian

Paper #46: Runaway Models: Scrollytelling for Language Model Bias Communication to Lay Audiences
Tamara Lottering, Mennatallah El-Assady and Benjamin Bach


Wrap Up & Closing

Upol Ehsan concludes the workshop: where we are right now, where we can go in the future, and the next steps to shape the discourse around HCXAI.


[Embedded YouTube video]

Group Work Timelapse

Watch what our breakout groups achieved in 2 hours, condensed into just 20 seconds!