ACM CHI 2023 Workshop on 

Human-Centered Explainable AI (HCXAI)

Apr 28 - 29, 2023

10:00-15:30 ET / 16:00-21:30 CEST (each day)

Tentative Schedule

 

Friday, April 28, 2023

10:00 ET | 16:00 CEST - Introduction and Main Event

Live discussion: What happens when a philosopher, a psychologist, and a computer scientist walk into a bubble tea bar to talk about Explainable AI? (scroll below the schedule for more information)

12:00 ET | 18:00 CEST - Paper Session #1

Why Don't You Do Something About It? Outlining Connections between AI Explanations & User Actions (by Gennie Mansi and Mark Riedl)

Helpful, Misleading or Confusing: How Humans Perceive Fundamental Building Blocks of Artificial Intelligence Explanations (by Edward Small, Yueqing Xuan, Danula Hettiachchi and Kacper Sokol)

Gig Workers and HCXAI in the Global South: An Evaluative Typology (by Isha Bhallamudi)

13:00 ET | 19:00 CEST - Paper Session #2

Generating Process-Centric Explanations to Enable Contestability in Algorithmic Decision Making: Challenges and Opportunities (by Mireia Yurrita, Agathe Balayn and Ujwal Gadiraju)

Using Learning Theories to Evolve Human-Centered XAI: Future Perspectives and Challenges (by Karina Cortiñas Lorenzo and Gavin Doherty)

Understanding and Mitigating the Negative Consequences of Training Dataset Explanations (by Ariful Islam Anik and Andrea Bunt)

14:00 ET | 20:00 CEST - Group Work Part #1

15:30 ET | 21:30 CEST - End of Day 1


Saturday, April 29, 2023

10:00 ET | 16:00 CEST - Group Work Part #2

12:00 ET | 18:00 CEST - Poster Spotlight & Video Session

Posters
Redesigning the HCXAI Agenda for the Post-ChatGPT Era of AI Systems
Trust and Transparency in Recommender Systems 
Understanding the Role of Human Intuition on Reliance in Human-AI Decision-Making with Explanations
Insights from the Folk Theories of Recommender System Users
A human-centered XAI system for phishing detection
Developing AI Educational Tools for Children
Peeking Inside the Schufa Blackbox: Explaining the German Housing Scoring System
XAI human-centered design framework for Automated Vehicles
Acceptance of AI – Is It Domain-Specific?
Conceptualizing the Relationship between AI Explanations and User Agency
Explainable AI for End-Users
Adaptation of AI Explanations to Users’ Roles
Explainable AI for Strep Throat Increases Clinicians Ability to Identify Positive Cases in Telehealth
The Guide Dog Metaphor: Explainable AI via Coachable AI
Towards Feminist Intersectional XAI: From Explainability to Response-Ability
The role of XAI in creating conversation agents for early adolescents

Short Videos
Democratizing Healthcare with VitalsAI 
The Dilemma of Whitebox Models
A Human-Centric Assessment Framework for AI 

13:00 ET | 19:00 CEST - Paper Session #3

Explainable AI for the Arts (by Nick Bryan-Kinns)

Explainable AI And Visual Reasoning: Insights From Radiology (by Robert Kaufman and David Kirsh)

Exploring how expertise impacts acceptability of AI explanations: A case study from manufacturing (by Zibin Zhao and Cagatay Turkay)

14:00 ET | 20:00 CEST - Ask the Organizers & Closing

15:30 ET | 21:30 CEST - End of Day 2

Main Event at HCXAI 2023


What happens when a philosopher, a psychologist, and a computer scientist walk into a bubble tea bar to talk about Explainable AI?


Only one way to find out!


Join our main event at the workshop for a deep dive with leading experts Andrés Páez (philosopher, Universidad de los Andes), Tania Lombrozo (psychologist, Princeton University), and Tim Miller (computer scientist, University of Melbourne).

This will be an engaging and interactive hangout. You (the audience) will have plenty of opportunity to directly ask your questions! 

If you know anything about #HCXAI, you probably know the following: we don't like boring keynotes! 

We've always spiced things up and innovated on main events (e.g., the 2022 edition had an interactive fireside chat). This is the latest evolution in that process. 

We're so excited for this crossover episode of the XAI multiverse! 


Broadening Participation

 📢 You can apply to attend the Human-centered Explainable AI (#HCXAI) workshop without an accepted paper! 

💡 Why are we opening things up? Over the last three years, practitioners and policymakers have shared the challenges of submitting a paper (bandwidth, resources, etc.).


So, we reduced the barriers 🎁

🔥 Note that this is unique: workshops at CHI typically require an accepted paper to attend.

💡 Spots are extremely limited, so fill out this form ASAP. The form will close on Apr 14, 2023, at 11:59 pm AoE.

About the Workshop

Our lives are increasingly algorithmically mediated by Artificial Intelligence (AI) systems. The purview of these systems has reached consequential and safety-critical domains such as healthcare, finance, and automated driving. Despite their continuously improving capabilities, these AI systems suffer from opacity issues: the mechanics underlying their decisions often remain invisible or incomprehensible to end-users. The explainability of AI systems is therefore crucial to trustworthy and accountable Human-AI collaboration: these systems need to be able to make their decisions explainable and comprehensible to the humans they affect. Explainability has been sought as a primary means, even a fundamental right, for people to understand AI in order to contest and improve it, to guarantee fair and ethical AI, and to foster human-AI cooperation. Consequently, "explainable artificial intelligence" (XAI) has become a prominent interdisciplinary domain in the past years, drawing researchers from fields such as machine learning, data science and visualization, human-computer interaction/human factors, design, psychology, and law. Although XAI is a fast-growing field, there is no agreed-upon definition of it, let alone established methods to evaluate it or guidelines to create XAI technology. Discussions to chart the domain and shape these important topics call for human-centered and socio-technical perspectives, input from diverse stakeholders, and the participation of the broader HCI community.

The last two workshops have facilitated progress in all these areas of HCXAI. In 2021, we opened the stage towards HCXAI, which resulted in paper sessions addressing the involvement of end users, explanation design, and theoretical frameworks. The 2022 workshop took the conversation further and branched out into topics such as dark patterns and problems in XAI (Concerns and Issues), how end users perceive and directly engage with explanations (Trust and Cooperation), and how explanations can be tailored for individuals (Human-centered explanation design). Given the progress so far, it is imperative to continue the critically constructive narrative around HCXAI to address intellectual blind spots and propose human-centered interventions. The goal is not to impose a normativity but to systematically articulate the different interpretive flexibilities of each relevant social group in XAI. This allows us to make actionable progress at all three levels: conceptual, methodological, and technical.

To read the workshop proposal, please click here.

Call for Papers

Explainability is an essential pillar of Responsible AI. Explanations can improve real-world efficacy, provide harm-mitigation levers, and serve as a primary means to ensure humans' right to understand and contest decisions made about them by AI systems. In ensuring this right, XAI can foster equitable, efficient, and resilient Human-AI collaboration. In this workshop, we serve as a junction point for cross-disciplinary stakeholders of the XAI landscape, from designers to engineers, from researchers to end-users. The goal is to examine how human-centered perspectives in XAI can be operationalized at the conceptual, methodological, and technical levels. Consequently, we call for position papers making justifiable arguments (up to 4 pages excluding references) that address topics involving the who (e.g., relevant diverse stakeholders), why (e.g., social/individual factors influencing explainability goals), when (e.g., when to trust the AI's explanations vs. not), or where (e.g., diverse application areas, XAI for actionability or human-AI collaboration, or XAI evaluation). Papers should follow the CHI Extended Abstract format and be submitted through the workshop's submission site (https://hcxai.jimdosite.com/).

All accepted papers must have at least one author register and attend the workshop. CHI now offers workshop-only registration (https://cvent.me/l8GAXB), which makes our workshop accessible (registration-cost-wise) if it's the only thing you are coming for. Further, contributing authors are invited to provide their views in the form of short panel discussions with the workshop audience. In addition to papers, we will host a video track in 2023 ("The Sanities & Insanities of XAI: Provocations & Evocations"). Participants submit 90-second videos with provocative content (for example, design fiction, speculative design, or other creative ideas) discussing the future of XAI and human-AI interactions. With an effort towards an equitable discourse, we particularly welcome participation from the Global South and from stakeholders whose voices are underrepresented in the dominant XAI discourse.
 
The following list of guiding questions is by no means exhaustive; rather, it is provided as a source of inspiration:

  • How might we chart the landscape of different ‘whos’ (relevant stakeholders) in XAI and their respective explainability needs? 
  • What user goals should XAI aim to support, for whom, and why? 
  • How can we address value tensions amongst stakeholders in XAI? 
  • How do user characteristics (e.g., educational background, profession) impact needs around explainability? 
  • Where, or in what categories of AI applications, should we prioritize our XAI efforts? 
  • How might we develop transferable evaluation methods for XAI? What key constructs need to be considered? 
  • Given the contextual nature of explanations, what are the potential pitfalls of evaluation metrics standardization? 
  • How might we take into account the who, why, and where in the evaluation methods? 
  • How might we stop AI explanations from being weaponized (e.g., inducing dark patterns)? 
  • Not all harms are intentional. How might we address unintentional negative effects of AI explanations (e.g., inadvertently triggering cognitive biases that lead to over-trust)? 
  • What steps should we take to hold organizations/creators of XAI systems accountable and prevent “ethics washing” (the practice of ethical window dressing where ‘lip service’ is provided around AI ethics)? 
  • From an AI governance perspective, how can we address perverse incentives in organizations that might lead to harmful effects (e.g., privileging growth and AI adoption above all else)? 
  • How do we address power dynamics in the XAI ecosystem to promote equity and diversity? 
  • What are issues in the Global South that impact Human-centered XAI? Why? How might we address them? 

 


Researchers, practitioners, and policymakers in academia or industry who have an interest in these areas are invited to submit papers of up to 4 pages (not including references) in the two-column (landscape) Extended Abstract Format that CHI workshops have traditionally used. Templates: [Overleaf] [Word] [PDF]

Submissions are single-blind reviewed; i.e., submissions must include the authors' names and affiliations. The workshop's organizing and program committees will review the submissions, and accepted papers will be presented at the workshop. We ask that at least one of the authors of each accepted position paper attend the workshop. Presenting authors must register for the workshop and at least one full day of the conference.

Submissions must be original and relevant contributions to the workshop's theme. Each paper should directly and explicitly address how it speaks to the workshop's goals and themes. Pro tip: a direct mapping to a question or goal posed above will help. We are looking for position papers that take a well-justified stance and can generate productive and lively discussions during the workshop. Examples include, but are not limited to, position papers that include research summaries, literature reviews, industrial perspectives, real-world approaches, study results, or work-in-progress research projects.

Since this workshop will be held virtually, which reduces visa- and travel-related burdens, we aim for global and diverse participation. With an effort towards equitable conversations, we welcome participation from under-represented perspectives and communities in XAI (e.g., lessons from the Global South, civil liberties and human rights perspectives, etc.).

Submission pro tips:

1. Explicitly align your submission with the workshop's goals and topics. How? (a) Refer to the questions in the Call for Papers. (b) Read the workshop proposal.

2. Engage with past submissions (build on, don't repeat). This year, we are putting extra emphasis on how authors build on prior papers from this workshop. All past papers are available on the website; please engage with them and build on them.

3. Position papers must make a well-justified argument, not just summarize findings. Even if you are summarizing findings, make an argument around that summary and justify why that position is discussion-worthy and valuable to the community.


 Submit your paper here

Call for Videos

The Sanities and Insanities of XAI: Provocations & Evocations
Are explanations for end users a good or bad idea? What can go wrong when decision-makers wrongly interpret explanations when deciding on policy? How long can a team of pilots discuss an explanation before hitting the ground? How will our world look in 100 years, with or without explainable AI?

For the first time, we are hosting a dedicated video track at our HCXAI workshop. Submissions do not need deep scientific grounding, but they should address provocative ideas or important questions relevant to the XAI community (for example, design fiction, speculative design, or other creative ideas).

Submission guidelines:

  • A 60-90 second video (full HD, mp4, 100 MB max)
  • A 150-word abstract describing the contents

Submit your video here

FAQs

Do our papers need to deal with explanations generated by an AI system to be applicable?
Not necessarily; in fact, we encourage an end-to-end perspective. If there are aspects that we aren't currently considering in the way we conceptualize explainability and you want to highlight them, that could be an interesting discussion point. For example, if there is an upstream aspect (such as dataset preparation) that could have a downstream effect (such as explanation generation) but is not currently considered, that would be a fair contribution. The goal is to connect the many facets of explainability and devise ways of operationalizing human-centered perspectives on it.

Do papers need to build on prior work, or can they present early work or a case study?
Case studies or new takes on a literature review are fine as long as there is a clear line to human-centered perspectives and explainability.

Can I submit a paper describing a potential dissertation idea?
Absolutely! We encourage you to discuss planned and future work at the workshop, but please provide a scientifically grounded proposal with a focus on research questions and methodologies. Be aware, however, that your ideas will then be publicly discussed.
 
Can I attend the workshop if I do not have an accepted paper?
As of now, the short answer is no. You need an accepted paper to attend the workshop. However, once all submissions are reviewed, the organizing committee will discuss the possibility of opening the workshop to those without accepted papers. Our goal is to strike the right balance between the size of the workshop, interactivity, and the depth of discussions. Please keep a close eye on the website for an update.

I am a non-academic practitioner. How may I join the workshop?
Previously, you needed an accepted paper to be invited to the workshop regardless of your background; accepted authors then registered through the CHI conference (registration details here) and, since the workshop is virtual, only had to pay the virtual registration fee. Note that the workshop organizers do not control the registration process; for all registration-related issues, please contact the registration team of the CHI conference.
Update: This year, based on popular demand, we have opened up participation for people without accepted papers. Refer to the form shared above to apply.

If accepted, do I need to pay to attend the workshop?
Yes. As with all CHI workshops, there is a registration fee to attend. Everyone, including the organizers, has to pay it.

Do you offer fee waivers?
Unfortunately, no. We would love to offer fee waivers but do not have the budget to accommodate them.


What you can get out of this workshop

All workshop participants will be provided with state-of-the-art knowledge on explainable artificial intelligence (before the workshop in the form of downloadable content, and during the workshop in the form of presentations), and you will be able to initiate contact and cooperation with other attendees. We will form small groups to brainstorm and work on problems relevant to this emerging domain.

  • If you are an HCI researcher/practitioner: Learn about state-of-the-art methods for visualizing algorithmic transparency. 
  • If you are an AI/ML researcher/practitioner: Learn how to tailor your explanation according to your users' needs. 
  • If you are a policymaker or work in AI governance: Learn how different stakeholders approach the same problem and how that speaks to your perspectives.

Organizers

Upol Ehsan

Georgia Institute of Technology

Philipp Wintersberger

University of Applied Sciences Upper Austria
TU Wien

Elizabeth Anne Watkins

 Princeton University

Carina Manger

Technische Hochschule Ingolstadt

Gonzalo Ramos

Microsoft

Hal Daumé III

University of Maryland & Microsoft Research

Justin D. Weisz

IBM Research

Andreas Riener

Technische Hochschule Ingolstadt

Mark Riedl

Georgia Institute of Technology
