ACM CHI 2022 Workshop on 

Human-Centered Explainable AI (HCXAI)

Fully Virtual 

May 12 - 13, 2022 

@ 10:00-15:00 EDT / 16:00-21:00 CET (on both days)

Important Dates

Paper Notifications

April 22, 2022


Camera Ready Deadline

May 2, 2022 @ 11:59pm AoE (update your submission via EasyChair)

Workshop Dates

May 12-13, 2022 
10:00-15:00 EDT / 16:00-21:00 CET (on both days)
 

Fireside Chat with Tania Lombrozo

We are excited to have Dr. Tania Lombrozo as the key speaker for the workshop. In lieu of a traditional keynote, we are switching it up with an engaging fireside chat with her. Dr. Lombrozo is the Arthur W. Marks ’19 Professor of Psychology at Princeton University where she directs the Concepts & Cognition Lab.

Dr. Lombrozo's work has been formative for many threads of XAI, especially the psychological and philosophical dimensions of explanations. Just like us, she is excited to engage with the HCXAI community. Here is her message:


 "As a cognitive scientist who studies explanation from psychological and philosophical perspectives, I’m really looking forward to learning from the HCXAI community and identifying ways our respective disciplines can benefit from further interaction."

 

Broadening Participation

 📢 You can now attend the workshop even if you don't have an accepted paper! 

🎯 Want to join? Please fill out this form thoughtfully. Deadline: May 6, 2022, 11:59pm AoE

💡 Spots are limited, so the quality of responses will guide decisions

🎁 We're opening things up to reduce barriers to participation, especially for practitioners and newcomers to the domain who couldn't submit a paper. 

Please note:


  • Filling out the survey does **not** guarantee a spot in the workshop. 
  • Your responses will be reviewed and evaluated by the organizing committee. 
  • If your responses have adequate depth and align with the workshop goals, and if we have the capacity, we will email you a registration code (you will need to add the workshop through CHI's registration). 
  • You will be attending as a non-presenting participant. You can fully engage in the discussions but will not be able to present anything. All presentations have already been finalized. 
  • We aim to balance the diversity of perspectives while being conscious of logistical constraints. 


About the Workshop

Our lives are increasingly algorithmically mediated by Artificial Intelligence (AI) systems. The purview of these systems has reached consequential and safety-critical domains such as healthcare, finance, and automated driving. Despite their continuously improving capabilities, these AI systems suffer from opacity issues: the mechanics underlying their decisions often remain invisible or incomprehensible to end-users. The explainability of AI systems is thus crucial for trustworthy and accountable human-AI collaboration: these systems need to be able to make their decisions explainable and comprehensible to the humans they affect. Explainability has been sought as a primary means, even a fundamental right, for people to understand AI in order to contest and improve it, to guarantee fair and ethical AI, and to foster human-AI cooperation. Consequently, “explainable artificial intelligence” (XAI) has become a prominent interdisciplinary domain in recent years, drawing researchers from fields such as machine learning, data science and visualization, human-computer interaction/human factors, design, psychology, and law. Although XAI has been a fast-growing field, there is no agreed-upon definition of it, let alone methods to evaluate it or guidelines to create XAI technology. Discussions to chart the domain and shape these important topics call for human-centered and socio-technical perspectives, input from diverse stakeholders, and the participation of the broader HCI community.

 

In this workshop, we want to examine how human-centered perspectives in XAI can be operationalized at the conceptual, methodological, and technical levels towards a Human-Centered Explainable AI (HCXAI). We put the emphasis on “operationalizing”: aiming to produce actionable frameworks, contextually transferable evaluation methods, concrete design guidelines, etc. for explainable AI, and encouraging a holistic approach to articulating the operationalization of these human-centered perspectives.

Download the HCXAI CHI workshop proposal

Call for Papers

We are interested in a wide range of topics, from sociotechnical aspects of XAI to human-centered evaluation techniques to the responsible use of XAI. We are especially interested in the discourse around one or more of these questions: who (e.g., clarifying who the human is in XAI, how different ‘whos’ interpret explainability), why (e.g., how social and individual factors influence explainability goals), and where (e.g., contextual explainability differences in diverse application areas). Beyond these, we invite work on topics including but not limited to weaponizing AI explanations (e.g., inducing over-trust in AI), harmful effects of XAI, appropriate trust calibration, designing for accountability, and avoiding “ethics washing” in XAI.

The following list of guiding questions is by no means exhaustive; rather, it is provided as a source of inspiration:

  • How might we chart the landscape of different ‘whos’ (relevant stakeholders) in XAI and their respective explainability needs? 
  • What user goals should XAI aim to support, for whom, and why? 
  • How can we address value tensions amongst stakeholders in XAI? 
  • How do user characteristics (e.g., educational background, profession) impact needs around explainability? 
  • Where, or in which categories of AI applications, should we prioritize our XAI efforts? 
  • How might we develop transferable evaluation methods for XAI? What key constructs need to be considered? 
  • Given the contextual nature of explanations, what are the potential pitfalls of evaluation metrics standardization? 
  • How might we take into account the who, why, and where in the evaluation methods? 
  • How might we stop AI explanations from being weaponized (e.g., inducing dark patterns)? 
  • Not all harms are intentional. How might we address unintentional negative effects of AI explanations (e.g., inadvertently triggering cognitive biases that lead to over-trust)? 
  • What steps should we take to hold organizations/creators of XAI systems accountable and prevent “ethics washing” (the practice of ethical window dressing where ‘lip service’ is provided around AI ethics)? 
  • From an AI governance perspective, how can we address perverse incentives in organizations that might lead to harmful effects (e.g., privileging growth and AI adoption above all else)? 
  • How do we address power dynamics in the XAI ecosystem to promote equity and diversity? 
  • What are issues in the Global South that impact Human-centered XAI? Why? How might we address them? 



Researchers, practitioners, or policy makers in academia or industry who have an interest in these areas are invited to submit papers up to 4 pages (not including references) in the two-column (landscape) Extended Abstract Format that CHI workshops have traditionally used. Templates: [Overleaf] [Word] [PDF]

Submissions are single-blind reviewed; i.e., submissions must include the authors’ names and affiliations. The workshop's organizing and program committees will review the submissions, and accepted papers will be presented at the workshop. We ask that at least one author of each accepted position paper attend the workshop. Presenting authors must register for the workshop and at least one full day of the conference.

Submissions must be original and relevant contributions to the workshop's theme. Each paper should directly and explicitly address how it speaks to the workshop's goals and themes. Pro-tip: a direct mapping to a question or goal posed above will help. We are looking for position papers that take a well-justified stance and can generate productive and lively discussions during the workshop. Examples include, but are not limited to, position papers containing research summaries, literature reviews, industrial perspectives, real-world approaches, study results, or work-in-progress research projects.

Since this workshop will be held virtually, which reduces visa- and travel-related burdens, we aim for global and diverse participation. In an effort towards equitable conversations, we welcome participation from under-represented perspectives and communities in XAI (e.g., lessons from the Global South, civil liberties and human rights perspectives, etc.).

Submit your paper here:  
https://easychair.org/conferences/?conf=hcxai2022  

What you can get out of this workshop

All workshop participants will be provided with state-of-the-art knowledge on explainable artificial intelligence (before the workshop in the form of downloadable content, and during the workshop in the form of presentations), and you will be able to initiate contact and cooperation with other attendees. We will form small groups to brainstorm and work on problems relevant to this emerging domain. 

  • If you are an HCI researcher/practitioner: Learn about state-of-the-art methods for visualizing algorithmic transparency. 
  • If you are an AI/ML researcher/practitioner: Learn how to tailor your explanation according to your users' needs. 
  • If you are a policymaker or work in AI governance: Learn how different stakeholders approach the same problem and how that speaks to your perspectives.

FAQs


Do papers need to deal with explanations generated by an AI system to be applicable?
Not necessarily; in fact, we encourage an end-to-end perspective. So if there are aspects that we aren't currently considering in the way we conceptualize explainability and you want to highlight that, that could be an interesting discussion point. E.g., if there is an upstream aspect (such as dataset preparation) that could have a downstream effect (such as explanation generation) but is not currently considered, that'd be a fair contribution. The goal is to connect the many facets of explainability and devise ways of operationalizing human-centered perspectives on explainability.

Do papers need to have prior work or can they be early work or have a case study?
Case studies or new takes on literature reviews are fine as long as there is a clear line to human-centered perspectives and explainability.

Can I submit a paper describing a potential dissertation idea?
Absolutely! We encourage you to discuss planned and future work at the workshop, but please provide a scientifically grounded proposal with a focus on research questions and methodologies. Still, be aware that your ideas will then be publicly discussed.

Can I attend the workshop if I do not have an accepted paper?
UPDATE: We have opened up the workshop to participants without accepted papers. Check out the "Broadening Participation" section above for more details. The previous answer (an accepted paper was required to attend) no longer applies.

Organizers

Upol Ehsan

Georgia Institute of Technology

Philipp Wintersberger

TU Wien

Q. Vera Liao

Microsoft Research Montreal

Elizabeth Anne Watkins

 Princeton University

Carina Manger

Technische Hochschule Ingolstadt

Hal Daumé III

University of Maryland & Microsoft Research

Andreas Riener

Technische Hochschule Ingolstadt

Mark Riedl

Georgia Institute of Technology

Program Committee

Niels van Berkel, Aalborg University
Daniel Buschek, University of Bayreuth
Jürgen Cito, TU Wien and Facebook
Jay DeYoung, Northeastern University
Ujwal Gadiraju, Delft University of Technology
Matthew Guzdial, University of Alberta
Sarthak Jain, Northeastern University
Anahid Jalali, TU Wien
Andreas Löcken, Technische Hochschule Ingolstadt (THI)
Mahsan Nourani, University of Florida
Jeroen Ooge, KU Leuven
Samir Passi, Independent Researcher
Tim Schrills, Universität zu Lübeck
Xinru Wang, Purdue University
