These notes are a summary of concepts presented in “Exploring Tangible Explainable AI (TangXAI): A User Study of Two XAI Approaches.”
Ashley Colley, Matilda Kalving, Jonna Häkkilä, and Kaisa Väänänen. 2024. Exploring Tangible Explainable AI (TangXAI): A User Study of Two XAI Approaches. In Proceedings of the 35th Australian Computer-Human Interaction Conference (OzCHI ’23). Association for Computing Machinery, New York, NY, USA, 679–683. https://doi.org/10.1145/3638380.3638426
- Overview of Explainable AI Approaches
- Highlight input parameters most significant to the AI’s decision (feature relevance)
- Explain how input parameters would need to change to alter the AI’s conclusion (local explanations)
- Concept of Data Physicalization
- Translating digital data into tangible, physical forms (e.g., 3D printed models)
- Aims to make abstract AI concepts more tangible and understandable
- Tangible Interaction Design for Explainable AI
- Enables physical interaction with AI explanations
- Supports collaborative data exploration and deeper understanding
- General XAI Approaches
- Simplified rule extraction
- Breaking AI logic into simple rules
- Feature relevance
- Scoring the importance of input parameters (sketched in code after this list)
- Local explanations
- Highlighting parameter changes needed to alter AI outcomes
- Visual explanations
- Using visual representations to convey AI behavior
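To make the feature-relevance idea concrete, here is a minimal sketch, assuming a toy recipe-scoring model: a permutation-importance routine ranks each input parameter by how much shuffling it disturbs the model’s output. All names and values (`recipe_model`, the weights, the sample data) are hypothetical illustrations, not from the paper.

```python
import random

def recipe_model(features):
    """Toy stand-in for an AI recipe recommender: returns a suitability score."""
    weights = {"spiciness": 0.5, "prep_time": -0.3, "calories": -0.2}  # hypothetical
    return sum(weights[name] * value for name, value in features.items())

def permutation_importance(model, dataset, n_repeats=10, seed=0):
    """Score each parameter by how much shuffling it changes the model's output."""
    rng = random.Random(seed)
    baseline = [model(row) for row in dataset]
    scores = {}
    for name in dataset[0]:
        total = 0.0
        for _ in range(n_repeats):
            column = [row[name] for row in dataset]
            rng.shuffle(column)  # break the link between this parameter and the output
            total += sum(
                abs(model({**row, name: value}) - base)
                for row, base, value in zip(dataset, baseline, column)
            )
        scores[name] = total / (n_repeats * len(dataset))
    return scores

data = [  # features kept on comparable scales; calories are in hundreds
    {"spiciness": 3, "prep_time": 2.0, "calories": 4.5},
    {"spiciness": 7, "prep_time": 4.5, "calories": 6.0},
    {"spiciness": 1, "prep_time": 1.0, "calories": 3.0},
    {"spiciness": 5, "prep_time": 3.0, "calories": 5.0},
]
print(permutation_importance(recipe_model, data))
# Larger scores mark the parameters most significant to the model's decision.
```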
- Tangible Interfaces for Feature Relevance and Local Explanations
- Feature relevance
- Scores parameter importance in AI decisions
- Helps identify critical and irrelevant parameters
- Local explanations
- Demonstrates the minimum input changes needed to shift the AI’s decision (see the counterfactual sketch after this list)
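A local explanation in this sense is a counterfactual: the smallest input change that flips the decision. A minimal sketch, assuming a hypothetical loan-style decision rule; the rule, step sizes, and names below are invented for illustration, not the paper’s implementation.

```python
def approve(income, debt):
    """Toy decision rule standing in for an AI classifier (True = approve)."""
    return income - 2 * debt > 50

def single_change_counterfactuals(decide, point, steps, max_steps=50):
    """For each parameter, find the smallest single-parameter change that
    flips the decision (a brute-force scan; real counterfactual methods
    optimize a distance-based objective instead)."""
    original = decide(**point)
    flips = {}
    for name, step in steps.items():
        for k in range(1, max_steps + 1):
            for delta in (k * step, -k * step):
                if decide(**{**point, name: point[name] + delta}) != original:
                    flips[name] = delta
                    break
            else:
                continue  # neither direction flipped the decision; widen the change
            break
    return flips

applicant = {"income": 80, "debt": 20}  # 80 - 2*20 = 40, so rejected
print(single_change_counterfactuals(approve, applicant, {"income": 1, "debt": 1}))
# {'income': 11, 'debt': -6}: raising income by 11 or cutting debt by 6 flips it
```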
- Case Study: Tangible Interfaces in Action
- Lego Duplo bar chart
- Visualizes parameters driving AI recipe recommendations
- Users adjust recommendations by adding or removing bricks (a score-to-brick mapping is sketched after this list)
- Parameter expansion
- Encourages users to suggest additional relevant parameters
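The study’s interface turns relevance scores into stacks of bricks that users can edit. Below is a minimal sketch of the mapping such an interface needs; `MAX_BRICKS`, the rounding rule, and the normalization are hypothetical design choices, not details from the paper.

```python
MAX_BRICKS = 8  # hypothetical: tallest column the physical baseboard allows

def scores_to_bricks(scores):
    """Scale relevance scores so the most important parameter gets a full column."""
    top = max(scores.values())
    return {name: round(MAX_BRICKS * s / top) for name, s in scores.items()}

def bricks_to_weights(bricks):
    """Read user-edited brick counts back as normalized parameter weights,
    so adding or removing bricks adjusts the recommendation."""
    total = sum(bricks.values()) or 1
    return {name: count / total for name, count in bricks.items()}

relevance = {"spiciness": 0.50, "prep_time": 0.30, "calories": 0.20}
bricks = scores_to_bricks(relevance)
print(bricks)                     # {'spiciness': 8, 'prep_time': 5, 'calories': 3}
bricks["prep_time"] += 2          # user stacks two more bricks on 'prep_time'
print(bricks_to_weights(bricks))  # re-weighted inputs for the recommender
```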
- Insights on Trust and Explainable AI Interaction
- Discussions on trust emphasized training data accuracy and parameter selection
- Participants built trust experimentally, by comparing the AI’s outputs against familiar tools (e.g., route planners)
- AI model performance not explicitly linked to trust perceptions
- Challenges in Understanding Explainable AI Interfaces
- Participants conflated using the AI tool itself with using the Explainable AI interface intended to explain it
- Feature relevance
- Simple physical representations (e.g., Lego blocks) appreciated
- Sometimes misinterpreted as an interface for selecting inputs rather than an explanation
- Local explanations
- Less comprehensible; alternative design approaches needed
- Impact of Tangible Interfaces
- Slower interactions promote deeper understanding
- Enhanced trust and comprehension, but less suited for:
- High-speed interactions
- Scalability to numerous parameters
- Portability
- Future Considerations
- Balance between tangible and digital interfaces based on use-case requirements
- Address misunderstandings and usability issues in local explanation designs
- Evaluate scalability and speed trade-offs in tangible Explainable AI applications