Two Articles by the Research Group Accepted at the Hawaii International Conference on System Sciences (HICSS) 2024

(27.09.2023) Two articles by the research group have been accepted for publication at the 57th Hawaii International Conference on System Sciences (HICSS) 2024. The conference will take place from January 3 to 6, 2024, on O'ahu, Hawaii.

Authors: Manuel Schmidt-Kraepelin, Maroua Ben Ayed, Simon Warsinsky, Shanshan Hu, Scott Thiebes, Ali Sunyaev
Title: Leaderboards in Gamified Information Systems for Health Behavior Change: The Role of Positioning, Psychological Needs, and Gamification User Types
Abstract: Leaderboards are widely used in gamified information systems (IS) for health behavior change (HBC) to evoke both instrumental and experiential outcomes within users. In the literature, however, they are discussed controversially, as they are perceived positively by some users but as discouraging by others. In this work, we investigate under which circumstances users' position on the leaderboard influences their attitudes toward an mHealth app. Based on self-determination theory and the gamification user types hexad, we conducted an online experiment among 179 potential users. The results support our hypotheses that positioning influences perceived competence and relatedness, which, alongside perceived autonomy, positively impact users' attitude. Yet, our findings do not support the assumption that the relationship between needs and attitude is moderated by gamification user type. This finding reinforces recent research that questions the effectiveness of user type-based gamification and calls for a focus on general need satisfaction.

Authors: Philipp A. Toussaint, Simon Warsinsky, Manuel Schmidt-Kraepelin, Scott Thiebes, Ali Sunyaev
Title: Designing Gamification Concepts for Expert Explainable Artificial Intelligence Evaluation Tasks: A Problem Space Exploration
Abstract: Artificial intelligence (AI) models are often complex and require additional explanations for use in high-stakes decision-making contexts like healthcare. To this end, explainable AI (XAI) developers must evaluate their explanations with domain experts to ensure understandability. As these evaluations are tedious and repetitive, we look at gamification as a means to motivate and engage experts in XAI evaluation tasks. We explore the problem space associated with gamified expert XAI evaluation. Based on a literature review of 22 relevant studies and seven interviews with experts in XAI evaluation, we elicit knowledge about affected stakeholders, eight needs, eight goals, and seven requirements. Our results help us better understand the problems associated with expert XAI evaluation and point to broad application potential for gamification to improve expert XAI evaluations. In doing so, we lay the foundation for the design of successful gamification concepts for expert XAI evaluation.