ACM CHI Workshop on
Operationalizing Human-Centered Perspectives in Explainable AI
Sat. May 8 & Sun. May 9, 2021
@ 1300 EDT/ 1900 CEST
Videos are out!
We received many requests from those who couldn't join us live to make the videos available online. Thank you for all the emails and messages.
Broadening participation is central to the organizers' ethos.
We want to reduce the barrier to your participation in the dialogue around Human-centered XAI.
Here are the videos.
Accommodations around Ramadan
CHI 2021 overlaps with Ramadan, which is observed globally by a majority of the world's 1.2 billion Muslims, many of whom are integral members of the CHI community. CHI 2021 also falls on Eid al-Fitr, the festival marking the end of Ramadan. For perspective, imagine CHI happening during December 21-28.
We cannot change the dates of CHI 2021. However, as workshop organizers, we can, should, and will accommodate participants observing Ramadan and attending the HCXAI workshop.
Because of the virtual format, the workshop times will likely clash with Iftar (the meal to break the fast) and Sehri/Suhoor (the pre-dawn meal before the fast begins) in some parts of the world. If you need any accommodation, please reach out to the organizers. Beyond needs around Ramadan, we will also accommodate anyone else who needs it, whether for caregiving, childcare, a bad internet connection, or anything else. If it's within our power to help, we will make it happen.
To those observing Ramadan, we see you and acknowledge the potential compromises you’re having to make to participate in our workshop and CHI at large. Thank you for choosing to participate despite the challenges.
Expert Panel Discussion
We are excited to have a stellar lineup of renowned scholars -- Michael Muller, Simone Stumpf, Brian Lim, and Enrico Bertini -- engage in an interactive panel discussion at the workshop!
Bridging the diverse threads of their work, the panelists will discuss:
- What is the biggest barrier to operationalizing human-centered perspectives in XAI?
- How might we address this barrier?
- What can go wrong if we don't address this?
We are excited to announce that our keynote speaker will be Tim Miller! Tim is a Professor in the School of Computing and Information Systems at The University of Melbourne, and Co-Director of the Centre for AI and Digital Ethics.
Tim's work has been formative for many threads of XAI, from bridging lessons from the social sciences to designing counterfactual explanations.
Just like us, Tim is also excited to engage with the participants at the workshop. Here is what he had to say about the workshop:
"I'm looking forward to engaging with participants because the HCXAI workshop is encouraging an interdisciplinary approach to XAI -- just look at the breadth of knowledge on the organising committee! My talk will focus on evaluation in XAI, including asking whether we should care about trust in XAI evaluation, and how it should be done. This sits right at the heart of interdisciplinary research!"
Virtual Workshop: May 8 & 9, 2021 (two half-day sessions)
To maximize the benefit of this virtual format, we will conduct two sessions: one on Saturday, May 8, 2021, and one on Sunday, May 9, 2021. We chose the two-session format to allow participation from a wide variety of time zones. Based on the time zones of the accepted authors and attendees, we tried to find slots that work for as many people across as many time zones as possible.
May 8, 2021: 1300 EDT/ 1900 CEST - 1700 EDT/ 2300 CEST
May 9, 2021: 1300 EDT/ 1900 CEST - 1700 EDT/ 2300 CEST
The sessions will consist of topic introductions, a keynote speech, (video) paper presentations of accepted position papers, panel discussions with the respective authors, as well as collaborative group work to progress in relevant topics and to foster (future) cooperation between participants. The schedule is here.
About the Workshop
Our lives are increasingly algorithmically mediated by Artificial Intelligence (AI) systems. The purview of these systems has reached consequential and safety-critical domains such as healthcare, finance, and automated driving. Despite their continuously improving capabilities, these AI systems suffer from opacity: the mechanics underlying their decisions often remain invisible or incomprehensible to end-users. Crucial to trustworthy and accountable human-AI collaboration, then, is the explainability of AI systems: they need to be able to make their decisions explainable and comprehensible to the humans they affect. Explainability has been sought as a primary means, even a fundamental right, for people to understand AI in order to contest and improve it, to guarantee fair and ethical AI, and to foster human-AI cooperation. Consequently, "explainable artificial intelligence" (XAI) has become a prominent interdisciplinary domain in recent years, drawing researchers from fields such as machine learning, data science and visualization, human-computer interaction/human factors, design, psychology, and law. Although XAI is a fast-growing field, there is no agreed-upon definition of it, let alone methods to evaluate it or guidelines for creating XAI technology. Discussions to chart the domain and shape these important topics call for human-centered and socio-technical perspectives, input from diverse stakeholders, and the participation of the broader HCI community.
In this workshop, we want to examine how human-centered perspectives in XAI can be operationalized at the conceptual, methodological, and technical levels towards a Human-Centered Explainable AI (HCXAI). We put the emphasis on "operationalizing": we aim to produce actionable frameworks, contextually transferable evaluation methods, concrete design guidelines, and more for explainable AI, and we encourage a holistic approach to articulating how these human-centered perspectives can be operationalized.
Download the HCXAI CHI workshop proposal
What you will get out of this workshop
All workshop participants will receive state-of-the-art knowledge on explainable artificial intelligence (before the workshop in the form of downloadable content, and during the workshop in the form of presentations), and you will be able to initiate contact and cooperation with other attendees. We will form small groups to brainstorm and work on problems relevant to this emerging domain.
- If you are an HCI researcher/practitioner: Learn about state-of-the-art methods for visualizing algorithmic transparency.
- If you are an AI/ML researcher/practitioner: Learn how to tailor your explanation according to your users' needs.
Call for Papers
We are interested in a wide range of topics, from sociotechnical aspects of XAI to human-centered evaluation techniques to the responsible use of XAI. We are especially interested in the discourse around one or more of the questions: who (e.g., clarifying who the human is in XAI, and how different whos interpret explainability), why (e.g., how social and individual factors influence explainability goals), and where (e.g., how explainability differs across diverse application areas). We particularly welcome participation from the Global South and from stakeholders whose voices are under-represented in the dominant XAI discourse. The following list of guiding questions is by no means exhaustive; rather, it is provided as a source of inspiration:
- Who are the consumers and relevant stakeholders of XAI? What are their needs for explainability? What values are reflected, and what tensions arise, in these needs?
- Why is explainability sought? What user goals should XAI aim to support? How are these goals shaped by technological, individual, and social factors?
- Where, or in what categories of AI applications, should we prioritize our XAI efforts? What do we need to understand about the users as well as the socio-organizational contexts of these applications?
- What are we missing from a technocentric view of XAI? Which human-centered and socio-technical perspectives should we bring in to better understand the who, why, and where, and to move towards human-centered XAI?
- How can we develop transferable evaluation methods for XAI? What key constructs need to be considered?
- Given the contextual nature of explanations, what are the potential pitfalls of standardizing evaluation metrics? How can we take the who, why, and where into account in evaluation methods?
- What are the explainability challenges when we move beyond the dominant one-to-one human-AI interaction paradigm? How might a human-centered perspective address these challenges? [Here, one-to-one human-AI interaction refers to one user interacting with one AI system; beyond this paradigm lie settings where many users interact with one or many AI systems.]
- What are the important research questions to answer as we move towards a human-centered explainable artificial intelligence? Why is it important to address them now?
- What might operationalizing XAI in the Global South entail? Where are the points of alignment and departure? What insights should we be aware of while considering Human-centered XAI in the Global South?
Researchers, practitioners, and policy makers in academia or industry who have an interest in these areas are invited to submit papers of up to 4 pages (not including references) in the two-column (landscape) Extended Abstract Format that CHI workshops have traditionally used. Templates: [Overleaf] [Word] [PDF]
Submissions are single-blind reviewed; i.e., submissions must include the authors' names and affiliations. The workshop's organizing and program committees will review the submissions, and accepted papers will be presented at the workshop. We ask that at least one author of each accepted position paper attend the workshop. Presenting authors must register for the workshop and at least one full day of the conference.
Submissions must be original and relevant contributions to the workshop's theme. We are looking for position papers that take a well-justified stance and can generate productive and lively discussions during the workshop. Examples include, but are not limited to, position papers with research summaries, literature reviews, industrial perspectives, real-world approaches, study results, or work-in-progress research projects. Since this workshop will be held virtually, we welcome global and diverse participation. We encourage participation from under-represented perspectives and communities in XAI (e.g., lessons from the Global South, civil liberties and human rights perspectives, etc.).
Paper Submission (Extended!)
February 17, 2021, 11pm EST / 8pm PST (extended to March 14, 2021)
Camera Ready Copy Due
March 26, 2021, 11:59pm AoE (Anywhere on Earth)
Do papers need to deal with explanations generated by an AI system to be eligible?
Not necessarily; in fact, we encourage an end-to-end perspective. If there are aspects we aren't currently considering in the way we conceptualize explainability and you want to highlight them, that could be an interesting discussion point. For example, if there is an upstream aspect (such as dataset preparation) that could have a downstream effect (such as explanation generation) but is not currently considered, that would be a fair contribution. The goal is to connect the many facets of explainability and devise ways of operationalizing human-centered perspectives on it.
Do papers need to build on prior work, or can they present early work or a case study?
Case studies or new takes on literature reviews are fine as long as there is a clear line to human-centered perspectives and explainability.
Can I submit a paper describing a potential dissertation idea?
Absolutely! We encourage you to discuss planned and future work at the workshop, but please provide a scientifically grounded proposal with a focus on research questions and methodologies. Be aware, though, that your ideas will then be discussed publicly.
Can I attend the workshop if I do not have an accepted paper?
Earlier answer (superseded): As of now, the short answer is no. The default plan requires an accepted paper to attend the workshop. However, once all the submissions are reviewed, the organizing committee will discuss the possibility of allowing participants who haven't submitted papers to attend. We want to strike the right balance between the size of the workshop, the interactivity, and the value participants get from it. Please keep a close eye on the website for an update.
Update (Mar 25, 2021, superseded): You requested. We listened! Based on popular demand, the organizing committee has opened up the possibility of participation even if you do not have an accepted paper at the workshop. We still have constraints around headcount. If you want to join, please fill out this survey to give us an understanding of who you are, what you hope to gain from attending the workshop, and what value you aim to add to it. The form will be live till 11pm ET, April 15, 2021.
Update (April 21, 2021): The form is now closed and the list of attendees has been finalized; no one else can be added. Thanks to everyone who filled it out!
- Allison Renner (University of Maryland)
- Daniel Buschek (University of Bayreuth)
- Justin Weisz (IBM Research)
- Michael Madaio (Microsoft Research, NYC)
- Samir Passi (Cornell University)
- Sarah Theres Völkel (LMU Munich)
- Sherry Wu (University of Washington)
- Vivian Lai (CU Boulder)