ACM CHI 2024 Workshop on
Human-Centered Explainable AI (HCXAI)
May 12, 2024 (hybrid)
Main Event at HCXAI 2024
Janet Haven is the executive director of Data & Society. She has worked at the
intersection of technology policy, governance, and accountability for more than twenty
years, both domestically and internationally. Janet is a member of the National Artificial Intelligence Advisory Committee (NAIAC), which advises the president on a range of issues related to artificial intelligence. She writes and speaks regularly on matters related to technology and society, federal AI research and development, and AI governance and policy. Before joining Data & Society, Janet spent more than a decade at the Open Society Foundations leading a global grantmaking program on technology, accountability, and human rights.
Kush R. Varshney is an IBM Fellow responsible for innovations in AI governance. He is based at the IBM T. J. Watson Research Center where he directs the trustworthy machine intelligence and human-centered artificial intelligence teams. He co-founded the IBM Science for Social Good initiative in 2016, created the AI Fairness 360 and AI Explainability 360 open-source toolkits in 2018-2019, was a visiting scientist at IBM Research – Africa in 2019, and published the book “Trustworthy Machine Learning” in 2022.
Schedule and Proceedings
All times refer to Hawaii Standard Time (HST)
09:00 - Workshop Introduction
09:30 - Keynote and Discussion
10:30 - Coffee Break
11:00 - Paper Session #1
11:30 - Paper Session #2
12:00 - Paper Session #3
LUNCH BREAK
14:00 - Poster Spotlight Videos & Discussion (2-min videos per paper)
14:30 - Group Work Introduction
14:45 - Group Work
16:00 - Coffee Break
16:30 - Group Work Presentations
17:15 - Open Discussion - the Future of HCXAI
17:45 - Wrap-up & Closing
Paper Session #1: Personalization (5-min presentation per paper)
- Personalized Human-Centered Explainable AI by Cristina Conati
- Beyond One-Size-Fits-All: Adapting Counterfactual Explanations to User Objectives by Orfeas Menis Mastromichalakis, Jason Liartis and Giorgos Stamou
- Categorizing Sources of Information for Explanations in Conversational AI Systems in the Home for Older Adults Aging in Place by Niharika Mathur, Elizabeth Mynatt and Tamara Zubatiy
Paper Session #2: Consideration of Stakeholders (5-min presentation per paper)
- Establishing Control via Targeted Explanations: How XAI enables Stakeholder Negotiations about the Distribution of Accountability and Control in the Context of AI-based Systems in Safety-Critical Domains by Lena Schneider, Daniel Boos and Gudela Grote
- The Drawback of Insight: Detailed Explanations Can Reduce Agreement with XAI by Sabid Bin Habib Pias, Alicia Freel, Timothy Trammel, Taslima Akter, Donald Williamson and Apu Kapadia
- Manspl(AI)ning: A Feminist Approach to Explainable AI by Natalie Nova, Mark Hancock and Cayley MacArthur
Paper Session #3: Large Language Models (5-min presentation per paper)
- Addressing Social Misattributions of Large Language Models: an HCXAI-based Approach by Andrea Ferrario, Alberto Termine and Alessandro Facchini
- ACE, Action and Control via Explanations: A Proposal for LLMs to Provide Human-Centered Explainability for Multimodal AI Assistants by Elizabeth Watkins, Ramesh Manuvinakurike, Richard Beckwith, Emanuel Moss, Meng Shi and Giuseppe Raffa
- LLMs for XAI: Future Directions for Explaining Explanations by Alexandra Zytek, Sara Pidò and Kalyan Veeramachaneni
- Understanding Stakeholders' Perceptions and Needs Across the LLM Supply Chain by Agathe Balayn, Lorenzo Corti, Fanny Rancourt, Fabio Casati and Ujwal Gadiraju
Poster Spotlight Session
- Expressive HCXAI: An Art-Science Design Framework for Ethical and Usable AI Systems by Dashiel Carrera
- Balancing Act: Improving Privacy in AI through Explainability by Rithika Lakshminarayanan and Sanjana Gautam
- Explainable Interfaces for Rapid Gaze-Based Interactions in Mixed Reality by Mengjie Yu et al.
- Large Language Models Cannot Explain Themselves by Advait Sarkar
- More Questions than Answers? Lessons from Integrating Explainable AI into a Cyber-AI Tool by Ashley Suh, Harry Li, Caitlin Kenney, Kenneth Alperin and Steven Gomez
- Design Requirements for Human-Centered Graph Neural Network Explanations by Pantea Habibi, Peyman Baghershahi, Sourav Medya and Debaleena Chattopadhyay
- Exploring Personality-Driven Personalization in XAI: Enhancing User Trust in Gameplay by Zhaoxin Li, Sophie Yang and Shijie Wang
Important Dates
Submission Deadlines
Papers: February 28, 2024, 23:59, AoE (extended from February 21, 2024)
Videos: March 5, 2024, 23:59, AoE
Acceptance Notifications
March 25, 2024
Camera Ready Deadline
April 5, 2024
Call for Papers
Explainability is an essential pillar of Responsible AI. Explanations can improve real-world efficacy, provide harm-mitigation levers, and serve as a primary means of ensuring humans’ right to understand and contest decisions made about them by AI systems. In safeguarding this right, XAI can foster equitable, efficient, and resilient human-AI collaboration. This workshop serves as a junction point for cross-disciplinary stakeholders of the XAI landscape, from designers to engineers and from researchers to end users. The goal is to examine how human-centered perspectives in XAI can be operationalized at the conceptual, methodological, and technical levels.

We therefore call for position papers making justifiable arguments (up to 4 pages excluding references) that address topics involving the who (e.g., relevant diverse stakeholders), why (e.g., social/individual factors influencing explainability goals), when (e.g., when to trust the AI’s explanations vs. not), or where (e.g., diverse application areas, XAI for actionability or human-AI collaboration, or XAI evaluation). Papers should follow the CHI Extended Abstract format and be submitted through the workshop’s submission site (https://hcxai.jimdosite.com/). All accepted papers will be presented, provided at least one author attends the workshop and registers for at least one day of the conference. Contributing authors are also invited to share their views in short panel discussions with the workshop audience. With an effort towards an equitable discourse, we particularly welcome participation from the Global South and from stakeholders whose voices are underrepresented in the dominant XAI discourse.
All accepted papers must have at least one author register for and attend the workshop. CHI now offers a workshop-only registration option, which makes our workshop more accessible (registration cost-wise) if it is the only part of the conference you are attending. In addition to papers, we will again host the video track in 2024 (“The Sanities and Insanities of XAI: Provocations & Evocations”). Participants submit short (60-90 second) videos with provocative content (for example, design fiction, speculative design, or other creative ideas) discussing the future of XAI and human-AI interactions.
The following list of guiding questions is by no means exhaustive; rather, it is provided as a source of inspiration:
- From an HCXAI angle, how should we think about the explainability of LLMs given the challenges of translating multi-billion-parameter models into meaningful and accessible explanations for lay users?
- Just because LLMs can respond to why-questions, does that mean LLMs can “explain” themselves?
- All LLMs hallucinate. How might we use HCXAI to detect hallucinations & mitigate negative effects?
- How do we address the power dynamics in XAI? Whose “voices” are represented in AI explanations? Who gets to say what explanations users see?
- How should we practice Responsible AI when it comes to XAI? How might we mitigate risks with explanations, what risks would those be, and how does risk mitigation map to different stakeholders?
- How can we create XAI Impact Assessments (similar to Algorithmic Impact Assessments)?
- How should organizations/creators of XAI systems be held accountable to prevent “ethics washing” (the practice of ethical window dressing where “lip service” is provided around AI ethics)?
- How might we design XAI systems that are resistant to dark patterns? How might we prevent AI explanations from being weaponized to drive over-reliance or over-adoption?
- Can we reconcile the tension between XAI and privacy? If yes, how? If no, why?
- Given the contextual nature of explanations, what are the potential pitfalls of standardized evaluation metrics? How might we take into account the who, why, and where in the evaluation methods?
- How might explanations be designed for actionability, to provide action-oriented nudges to enable users to become better collaborators with AI systems?
- How might we address XAI issues in the Global South (Majority World)?
- How should we think about explanations in physical systems (e.g., self-driving cars) vs. those in non-physical ones (e.g., automated lending)? Are they effectively the same? Are they different?
- From an AI governance perspective, how can we address perverse incentives in organizations that might lead to harmful effects (e.g., privileging growth and AI adoption above all else)?
- How do we address power dynamics in the XAI ecosystem to promote equity and diversity?
- What are issues in the Global South that impact Human-centered XAI? Why? How might we address them?
Researchers, practitioners, or policy makers in academia or industry who have an interest in these areas are invited to submit papers up to 4 pages (not including references) in the two-column (landscape) Extended Abstract Format that CHI workshops have traditionally used. Templates: [Overleaf] [Word] [PDF]
Submissions are single-blind reviewed; i.e., submissions must include the authors’ names and affiliations. The workshop's organizing and program committees will review the submissions, and accepted papers will be presented at the workshop. We ask that at least one author of each accepted position paper attend the workshop. Presenting authors must register for the workshop and at least one full day of the conference.
Submissions must be original and relevant contributions to the workshop's theme. Each paper should directly and explicitly address how it speaks to the workshop's goals and themes. Pro tip: a direct mapping to a question or goal posed above will help. We are looking for position papers that take a well-justified stance and can generate productive and lively discussions during the workshop. Examples include, but are not limited to, position papers containing research summaries, literature reviews, industrial perspectives, real-world approaches, study results, or work-in-progress research projects. Submissions are non-archival and will not be part of any official proceedings. They will likely be hosted on the website, in line with what we have done in past years.
We aim to have global and diverse participation in the workshop; its hybrid (virtual-first) design reduces visa- and travel-related burdens. With an effort towards equitable conversations, we welcome participation from under-represented perspectives and communities in XAI (e.g., lessons from the Global South, civil liberties and human rights perspectives, etc.).
Submission pro tips:
1. Explicitly align your submission with the workshop's goals and topics. How? (a) Refer to the questions in the Call for Papers. (b) Read the workshop proposal.
2. Engage with past submissions (build on, don't repeat). This year, we are putting extra emphasis on how authors are building on prior papers in this workshop. All papers are available on the website. Please engage with them, and build on them.
3. Position papers must make a well-justified argument, not just summarize findings. This means that even if you are summarizing findings, make an argument around that summary and justify why that position is discussion-worthy and valuable to the community.
Call for Videos
The Sanities and Insanities of XAI: Provocations & Evocations
Are explanations for end users a good or bad idea? What can go wrong when decision-makers wrongly interpret explanations when deciding on policy? How long can a team of pilots discuss an explanation before hitting the ground? How will our world look in 100 years, with or without explainable AI?
For the second time, we host a dedicated video track at our HCXAI workshop. Submissions do not need deep scientific grounding but should address provocative ideas or important questions relevant to the XAI community (for example, design fiction, speculative design, or other creative ideas).
Submission guidelines:
- A 60-90 second video (full HD, mp4, 100 MB max)
- A 150-word abstract describing the contents
FAQs
Do our papers need to be dealing with explanations generated by an AI system to be applicable?
Not necessarily; in fact, we encourage an end-to-end perspective. If there are aspects we aren't currently considering in how we conceptualize explainability and you want to highlight them, that could be an interesting discussion point. For example, if there is an upstream aspect (such as dataset preparation) that could have a downstream effect (such as explanation generation) but is not currently considered, that would be a fair contribution. The goal is to connect explainability across its many facets and devise ways of operationalizing human-centered perspectives on explainability.
Do papers need to build on prior work, or can they present early work or a case study?
Case studies or new takes on literature reviews are fine as long as there is a clear line to human-centered perspectives and explainability.
Can I submit a paper describing a potential dissertation idea?
Absolutely! We encourage you to discuss planned and future work at the workshop, but please provide a scientifically grounded proposal with a focus on research questions and methodologies. Still, be aware that your ideas will then be discussed publicly.
Can I attend the workshop if I do not have an accepted paper?
As of now, the short answer is no: you need an accepted paper to attend the workshop. However, once all submissions are reviewed, the organizing committee will discuss the possibility of opening the workshop to those without accepted papers. Our goal is to strike the right balance between the size of the workshop, interactivity, and the depth of discussions. Please keep a close eye on the website for updates.
I am a non-academic practitioner. How may I join the workshop?
Regardless of your background, you will need an accepted paper to be invited to the workshop. If your paper is accepted, you will register through the CHI conference.
If accepted, do I need to pay to attend the workshop?
Yes, like all CHI workshops, there is a registration fee to attend. Everyone, including the organizers, has to pay it.
Do you offer fee waivers?
Unfortunately, no. We'd love to offer fee waivers but do not have the financial budget to accommodate that.