Four Articles by the Research Group Accepted at the Hawaii International Conference on System Sciences (HICSS) 2023

(13.09.2022) Four articles by the research group have been accepted for publication at the 56th Hawaii International Conference on System Sciences (HICSS) 2023. The conference will take place from January 3 to 6, 2023, in Maui, Hawaii.

  1. Authors: Maximilian Renner, Sebastian Lins, Matthias Söllner, Sirkka L. Jarvenpaa, Ali Sunyaev
    Title: Artificial Intelligence-Driven Convergence and its Moderating Effect on Multi-Source Trust Transfer
    Abstract: We are witnessing AI-driven convergence, in which new converged products emerge from the interplay of embedding artificial intelligence (AI) in existing technologies. Trust transfer theory provides an excellent opportunity to deepen prevailing discussions about trust in such converged products. However, AI-driven convergence challenges existing theoretical assumptions. The context-specific interplay of multiple trust sources may affect users’ trust transfer and the predominance of trust sources. We contextualized AI-driven convergence and investigated its impact on multi-source trust transfer. We conducted semi-structured interviews with 25 participants in the context of autonomous vehicles. Our results indicate that users’ perceived source control, perceived source accessibility, and perceived value creation share of the sources may moderate users’ trust transfer. We contribute to research by contextualizing convergence in AI and revealing the impact of AI-driven convergence on trust transfer and the importance of trust as a dynamic construct.
     
  2. Authors: Shanshan Hu, Aylin Usta, Manuel Schmidt-Kraepelin, Simon Warsinsky, Scott Thiebes, Ali Sunyaev
    Title: Be Mindful of User Preferences: An Explorative Study on Game Design Elements in Mindfulness Applications
    Abstract: Mindfulness practices are valuable exercises for physical and mental health. Various digital applications exist that support individuals in practicing mindfulness. Following the trend of gamifying utilitarian systems, many mindfulness applications (MAs) incorporate game design elements (GDEs). However, little is known about users’ GDE preferences in this unique context. In line with extant research that investigated users’ GDE preferences in other contexts, we conducted an online survey among 168 potential users of MAs. The results indicate that users generally prefer progress, levels, and goals in MAs, while leaderboards and avatars are not highly rated. Furthermore, we identified four context-independent and three context-dependent rationales that help explain users’ GDE preferences. By providing first insights into MAs as a peculiar application context for gamification, our work contributes to advancing knowledge of contextual differences in users’ GDE preferences while challenging the extant research assumptions regarding the dominance of contextual factors in forming user preferences.
     
  3. Authors: Kathrin Brecker, Sebastian Lins, Ali Sunyaev
    Title: Why it Remains Challenging to Assess Artificial Intelligence
    Abstract: Artificial Intelligence (AI) assessment to mitigate risks arising from biased, unreliable, or regulatory non-compliant systems remains an open challenge for researchers, policymakers, and organizations across industries. Due to the scattered nature of research on AI across disciplines, there is a lack of overview of the challenges that need to be overcome to move AI assessment forward. In this study, we synthesize existing research on AI assessment by applying a descriptive literature review. Our study reveals seven challenges along three main categories: ethical implications, regulatory gaps, and socio-technical limitations. This study contributes to a better understanding of the challenges in AI assessment so that AI researchers and practitioners can resolve these challenges to move AI assessment forward.
     
  4. Authors: Florian Leiser, Simon Warsinsky, Marie Daum, Manuel Schmidt-Kraepelin, Scott Thiebes, Martin Wagner, Ali Sunyaev
    Title: Understanding the Role of Expert Intuition in Medical Image Annotation: A Cognitive Task Analysis Approach
    Abstract: To improve contemporary machine learning (ML) models, research is increasingly looking at tapping into and incorporating the knowledge of domain experts. However, expert knowledge often relies on intuition, which is difficult to formalize for incorporation into ML models. Against this backdrop, we investigate the role of intuition in the context of expert medical image annotation. We apply a cognitive task analysis approach, in which we observe and interview six expert medical image annotators to gain insights into pertinent decision cues and the role of intuition during annotation. Our results show that intuition plays an important role in various steps of the medical image annotation process, particularly in the appraisals of very easy or very difficult images, and in cases where purely cognitive appraisals remain inconclusive. Overall, we contribute to a better understanding of expert intuition in medical image annotation and provide possible interfaces to incorporate said intuition into ML models.