Two papers accepted at the Hawaii International Conference on System Sciences (HICSS) 2024
(27.09.2023) Two papers by the cii research group have been accepted for publication at the 57th Hawaii International Conference on System Sciences (HICSS), which will take place on O’ahu, Hawaii, from January 3–6, 2024.
Authors: Manuel Schmidt-Kraepelin, Maroua Ben Ayed, Simon Warsinsky, Shanshan Hu, Scott Thiebes, Ali Sunyaev
Title: Leaderboards in Gamified Information Systems for Health Behavior Change: The Role of Positioning, Psychological Needs, and Gamification User Types
Abstract: Leaderboards are widely used in gamified information systems (IS) for health behavior change (HBC) to evoke both instrumental and experiential outcomes within users. In the literature, however, they are discussed controversially, as some users perceive them positively while others find them discouraging. In this work, we investigate under which circumstances users’ position on the leaderboard influences their attitudes toward an mHealth app. Based on self-determination theory and the gamification user types hexad, we conducted an online experiment among 179 potential users. The results support our hypotheses that positioning influences perceived competence and relatedness, which, alongside perceived autonomy, positively impact users’ attitudes. Yet, our findings do not support the assumption that the relationship between needs and attitude is moderated by gamification user type. This finding reinforces recent research that questions the effectiveness of user type-based gamification and calls for a focus on general need satisfaction.
Authors: Philipp A. Toussaint, Simon Warsinsky, Manuel Schmidt-Kraepelin, Scott Thiebes, Ali Sunyaev
Title: Designing Gamification Concepts for Expert Explainable Artificial Intelligence Evaluation Tasks: A Problem Space Exploration
Abstract: Artificial intelligence (AI) models are often complex and require additional explanations for use in high-stakes decision-making contexts like healthcare. To this end, explainable AI (XAI) developers must evaluate their explanations with domain experts to ensure understandability. As these evaluations are tedious and repetitive, we look at gamification as a means to motivate and engage experts in XAI evaluation tasks. We explore the problem space associated with gamified expert XAI evaluation. Based on a literature review of 22 relevant studies and seven interviews with experts in XAI evaluation, we elicit knowledge about affected stakeholders, eight needs, eight goals, and seven requirements. Our results help us better understand the problems associated with expert XAI evaluation and point to broad application potential for gamification in improving expert XAI evaluations. In doing so, we lay the foundation for the design of successful gamification concepts for expert XAI evaluation.