2026
Murat Yalcin, Marc Erich Latoschik,
End-to-End Non-Invasive ECG Signal Generation from PPG Signal: A Self-Supervised Learning Approach, In
Frontiers in Physiology.
2026. To be published.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@article{yalcin2026endtoend,
author = {Murat Yalcin and Marc Erich Latoschik},
journal = {Frontiers in Physiology},
url = {https://www.frontiersin.org/journals/physiology/articles/10.3389/fphys.2026.1694995/abstract},
year = {2026},
title = {End-to-End Non-Invasive ECG Signal Generation from PPG Signal: A Self-Supervised Learning Approach}
}
Abstract:
Electrocardiogram (ECG) signals are frequently utilized for detecting important cardiac events, such as variations in ECG intervals, as well as for monitoring essential physiological metrics, including heart rate (HR) and heart rate variability (HRV). However, accurate ECG measurement traditionally requires a clinical environment, limiting its feasibility for continuous, everyday monitoring. In contrast, photoplethysmography (PPG) offers a non-invasive, cost-effective optical method for capturing cardiac data in daily settings and is increasingly utilized in various clinical and commercial wearable devices. PPG measurements, however, are significantly less detailed than those of ECG. In this study, we propose a novel approach to synthesize ECG signals from PPG signals, facilitating the generation of robust ECG waveforms using a simple, unobtrusive wearable setup. Our approach utilizes a Transformer-based Generative Adversarial Network model, designed to accurately capture ECG signal patterns and enhance generalization capabilities. Additionally, we incorporate self-supervised learning techniques to enable the model to learn diverse ECG patterns through specific tasks. Model performance is evaluated using various metrics, including heart rate calculation and root mean squared error (RMSE), on two different datasets. The comprehensive performance analysis demonstrates that our model exhibits superior efficacy in generating accurate ECG signals (reducing the heart rate calculation error by 83.9% and 72.4% on the MIMIC III and Who is Alyx? datasets, respectively), suggesting its potential application in the healthcare domain to enhance heart rate prediction and overall cardiac monitoring. As an empirical proof of concept, we also present an Atrial Fibrillation (AF) detection task, showcasing the practical utility of the generated ECG signals for cardiac diagnostic applications. To encourage replicability and reuse in future ECG generation studies, we have shared the dataset and will also make the code publicly available.
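To make the reported evaluation concrete, here is a minimal sketch (assuming numpy and scipy; not taken from the paper's released code) of how RMSE and a heart rate calculation error between a generated and a reference ECG could be computed; the peak-detection parameters are illustrative assumptions.

# Illustrative sketch, not the authors' code: RMSE and heart rate error
# between a generated ECG and a reference ECG sampled at fs Hz.
import numpy as np
from scipy.signal import find_peaks

def rmse(generated: np.ndarray, reference: np.ndarray) -> float:
    # Root mean squared error between two equally sampled signals.
    return float(np.sqrt(np.mean((generated - reference) ** 2)))

def heart_rate_bpm(ecg: np.ndarray, fs: float) -> float:
    # Estimate HR from R-peaks; the distance constraint assumes a
    # physiological maximum of roughly 200 bpm.
    peaks, _ = find_peaks(ecg, distance=int(0.3 * fs), prominence=np.std(ecg))
    rr_intervals = np.diff(peaks) / fs            # seconds between beats
    return 60.0 / float(np.mean(rr_intervals))    # beats per minute

def hr_error(generated: np.ndarray, reference: np.ndarray, fs: float) -> float:
    # Absolute heart rate deviation of the generated signal in bpm.
    return abs(heart_rate_bpm(generated, fs) - heart_rate_bpm(reference, fs))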
Marie Luisa Fiedler, Christian Merz, Lukas Schach, Jonathan Tschanter, Mario Botsch, Carolin Wienrich, Marc Erich Latoschik,
Am I Still Me? Visual Congruence Across Reality–Virtuality and Avatar Appearance in Shaping Self-Perception and Behavior, In
IEEE Transactions on Visualization and Computer Graphics.
2026. To be published.
[BibTeX]
[Abstract]
[BibSonomy]
@article{fiedler2026still,
author = {Marie Luisa Fiedler and Christian Merz and Lukas Schach and Jonathan Tschanter and Mario Botsch and Carolin Wienrich and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2026},
title = {Am I Still Me? Visual Congruence Across Reality–Virtuality and Avatar Appearance in Shaping Self-Perception and Behavior}
}
Abstract:
This paper presents the first systematic investigation of how congruence in visual self-representation influences self-perception and behavior. We span a continuum from the physical self through avatars with graded self-similarity to clearly dissimilar avatars in virtual reality (VR). In a 1x4 within-user study, participants completed movement and quiz tasks in either physical reality or a digital twin environment in VR, where they embodied one of three avatars: a photorealistic self-similar avatar, a dissimilar same-gender avatar, or a dissimilar opposite-gender avatar. Subjective measures included presence, sense of embodiment, self-identification, and perceived change, and were complemented by an objective movement metric of behavioral change. Compared to physical reality, VR, even with a self-similar avatar, produced lower presence, a weaker sense of embodiment, and reduced self-identification, revealing a persistent gap in visual congruence. Within VR, self-similar avatars enhanced body ownership, self-location, and self-identification relative to dissimilar avatars. Conversely, dissimilar avatars produced measurable behavioral changes compared with self-similar ones. Gender cues, however, had little impact in gender-neutral tasks. Overall, the findings show that photorealistic self-similar avatars reinforce embodiment and self-identification. However, VR still falls short of achieving congruence with physical reality, underscoring key challenges for avatar realism and ecological validity.
Jonathan Tschanter, Christian Merz, Marie Luisa Fiedler, Carolin Wienrich, Marc Erich Latoschik,
Use Case Matters: Comparing the User Experience and Task Performance Across Tasks for Embodied Interaction in VR, In
IEEE Transactions on Visualization and Computer Graphics.
2026. To be published.
[BibTeX]
[Abstract]
[BibSonomy]
@article{tschanter2026matters,
author = {Jonathan Tschanter and Christian Merz and Marie Luisa Fiedler and Carolin Wienrich and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2026},
title = {Use Case Matters: Comparing the User Experience and Task Performance Across Tasks for Embodied Interaction in VR}
}
Abstract:
Integrated Virtual Reality (IVR) systems are central to avatar-mediated use cases in Virtual Reality (VR), reconstructing users' movements on avatars. They differ primarily in their tracking architectures, which determine how completely and accurately those movements are captured and reconstructed.
Many current IVR systems reduce user-worn hardware, trading reconstruction accuracy against cost and setup complexity, yet their impact on user experience and task performance across use cases remains underexplored. We compared three reduced user-worn IVR systems, each with a distinct technical approach: (1) Captury (markerless outside-in optical tracking), (2) Meta Movement SDK (markerless inside-out optical tracking), and (3) Vive Trackers (marker-based outside-in optical tracking with IMUs).
In a 3x5 mixed design, participants performed five tasks simulating different use cases to probe distinct aspects of these systems. No system consistently outperformed the others. Meta excelled in hand-based, fast-paced interactions, while Captury and Vive performed better in lower-body tasks and during full-body pose observation. These findings underscore the need to evaluate reduced user-worn IVR systems within their specific use cases. We offer practical guidance for system selection based on use-case demands and release our tasks as an open-source, extensible framework to support future IVR system evaluations.
Jonathan Tschanter, Christian Merz, Carolin Wienrich, Marc Erich Latoschik,
How Harassment Shapes Self-Perception and Well-Being in Social VR: Evidence from a Controlled Lab Study, In
IEEE Transactions on Visualization and Computer Graphics.
2026. To be published.
[BibTeX]
[Abstract]
[BibSonomy]
@article{tschanter2026harassment,
author = {Jonathan Tschanter and Christian Merz and Carolin Wienrich and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2026},
title = {How Harassment Shapes Self-Perception and Well-Being in Social VR: Evidence from a Controlled Lab Study}
}
Abstract:
Social Virtual Reality (SVR) allows users to meet and build relationships through embodied avatars and real-time interaction in virtual spaces. While embodiment can strengthen social connections and presence, it can also intensify negative encounters, making SVR particularly vulnerable to harassment. Despite frequent reports of verbal, visual, and "physical" violations in SVR, little is known about how harassment reshapes users' self-perception, including their sense of embodiment, self-identification, closeness, and avatar customization preferences. We conducted a controlled experiment with 52 participants who experienced either a neutral or a harassment condition in a scenario modeled after real SVR incidents. Participants perceived the harassing peer as significantly more negative, annoying, and disturbing than the neutral peer. Contrary to prior reports, harassment did not significantly affect well-being measures, including emotional state, self-esteem, and physiological arousal, within this controlled scenario. However, participants reported stronger bodily change, attributed more of their own attitudes and emotions to their avatars, and increased interpersonal distance when personal space was invaded. Self-reported coping strategies included ignoring, stepping back, using humor, and retaliating. Notably, avatar customization preferences shifted across conditions. Participants in the neutral condition favored personalized avatars, whereas those in the harassment condition more frequently preferred anonymity in public spaces. Together, these findings demonstrate that harassment in SVR not only exploits embodiment but also reshapes self-perception. We further contribute methodological insights into how harassment can be ethically and reproducibly studied in controlled SVR-like experiments.
David Obremski, Paula Friedrich, Carolin Wienrich,
To be Healed or Hacked? - User-Centered Ethical Design for Embodied AI in Mental Health Care, In
IEEE Transactions on Visualization and Computer Graphics.
2026. To be published.
[BibTeX]
[Abstract]
[BibSonomy]
@article{obremski2026healed,
author = {David Obremski and Paula Friedrich and Carolin Wienrich},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2026},
title = {To be Healed or Hacked? - User-Centered Ethical Design for Embodied AI in Mental Health Care}
}
Abstract:
The global prevalence of mental health disorders has created a substantial treatment gap. To support clinicians and increase access to care, researchers in the field of Artificial Intelligence (AI) and Virtual Reality (VR) have investigated technology-mediated psychotherapy for years.
However, research about stakeholders' concerns and their readiness to use AI in psychotherapy remains scarce. This study focuses on a user-centered approach to accommodate patients' concerns and, based on the results, implement measures to foster self-disclosure and trust towards an embodied AI therapist in VR.
First, we conducted an online study with mental health patients (N = 152), which identified data autonomy and transparency as their primary ethical concerns. In a subsequent in-person VR study (N = 90), we compared the effects of increased data autonomy and transparency on self-disclosure and trust towards an embodied AI therapist.
Results indicated that higher data autonomy led to greater self-disclosure, while transparency had no significant effect. Manipulating data autonomy and transparency did not affect perceived trust, though exploratory analyses revealed that women reported significantly higher trust levels than men. These findings illuminate patients' priorities and provide implications for the technical design of AI-driven mental health care.
Florian Kern, Lukas Polifke, Paula Friedrich, Marc Erich Latoschik, Carolin Wienrich, David Obremski,
CECA - A Configurable Framework for Embodied Conversational AI Agents in Extended Reality, In
2026 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW).
2026. To be published.
[BibTeX]
[Abstract]
[BibSonomy]
@inproceedings{kern2026configurable,
author = {Florian Kern and Lukas Polifke and Paula Friedrich and Marc Erich Latoschik and Carolin Wienrich and David Obremski},
year = {2026},
booktitle = {2026 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
title = {CECA - A Configurable Framework for Embodied Conversational AI Agents in Extended Reality}
}
Abstract:
We present CECA, a configurable framework for embodied conversational AI agents in Unity-based extended reality (XR) applications. CECA employs a client–server architecture to decouple agent logic from game engine–based embodiment. Built on LiveKit Agents, our approach integrates speech-to-text (STT), large language models (LLMs), and text-to-speech (TTS) into a unified, streaming voice-to-voice pipeline configured via metadata rather than code changes. We outline how this architecture flexibly integrates local and cloud AI providers while mitigating limited provider SDK support in Unity. Finally, we highlight opportunities for future work, including multi-agent scenarios, higher-level templates for XR research, and systematic user studies.
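As a hypothetical illustration of the metadata-driven configuration idea described above (the PipelineConfig and build_pipeline names are invented for this sketch and are not CECA's or LiveKit's actual API):

# Hypothetical sketch of configuring the voice-to-voice pipeline via
# metadata rather than code changes; provider names are placeholders.
from dataclasses import dataclass

@dataclass
class PipelineConfig:
    stt: str   # e.g., "whisper-local" or "deepgram-cloud"
    llm: str   # e.g., "llama-local" or "gpt-4o-cloud"
    tts: str   # e.g., "piper-local" or "elevenlabs-cloud"

def build_pipeline(metadata: dict) -> PipelineConfig:
    # Agents are reconfigured by swapping metadata: the same XR client
    # can address local or cloud AI providers per session.
    return PipelineConfig(
        stt=metadata.get("stt", "whisper-local"),
        llm=metadata.get("llm", "llama-local"),
        tts=metadata.get("tts", "piper-local"),
    )

# Example: a cloud-backed agent vs. a fully local demo agent.
cloud_agent = build_pipeline({"stt": "deepgram-cloud", "llm": "gpt-4o-cloud", "tts": "elevenlabs-cloud"})
local_agent = build_pipeline({})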
2025
Smi Hinterreiter, Martin Wessel, Fabian Schliski, Isao Echizen, Marc Erich Latoschik, Timo Spinde,
NewsUnfold: Creating a News-Reading Application That Indicates Linguistic Media Bias and Collects Feedback, In
Proceedings of the International AAAI Conference on Web and Social Media, Vol. 19.
2025.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@article{hinterreiter2025newsunfold,
author = {Smi Hinterreiter and Martin Wessel and Fabian Schliski and Isao Echizen and Marc Erich Latoschik and Timo Spinde},
journal = {Proceedings of the International AAAI Conference on Web and Social Media},
url = {https://ojs.aaai.org/index.php/ICWSM/article/view/35847},
year = {2025},
volume = {19},
title = {NewsUnfold: Creating a News-Reading Application That Indicates Linguistic Media Bias and Collects Feedback}
}
Abstract:
Media bias is a multifaceted problem, leading to one-sided views and impacting decision-making. A way to address digital media bias is to detect and indicate it automatically through machine-learning methods. However, such detection is limited due to the difficulty of obtaining reliable training data. Human-in-the-loop-based feedback mechanisms have proven an effective way to facilitate the data-gathering process. Therefore, we introduce and test feedback mechanisms for the media bias domain, which we then implement on NewsUnfold, a news-reading web application to collect reader feedback on machine-generated bias highlights within online news articles. Our approach augments dataset quality by significantly increasing inter-annotator agreement by 26.31% and improving classifier performance by 2.49%. As the first human-in-the-loop application for media bias, the feedback mechanism shows that a user-centric approach to media bias data collection can return reliable data while being scalable and evaluated as easy to use. NewsUnfold demonstrates that feedback mechanisms are a promising strategy to reduce data collection expenses and continuously update datasets to changes in context.
Peter Kullmann, Theresa Schell, Timo Menzel, Mario Botsch, Marc Erich Latoschik,
Coverage of Facial Expressions and Its Effects on Avatar Embodiment, Self-Identification, and Uncanniness, In
IEEE Transactions on Visualization and Computer Graphics, Vol. 31(5), pp. 3613-3622.
2025.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{kullmann2025coverage,
author = {Peter Kullmann and Theresa Schell and Timo Menzel and Mario Botsch and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics},
number = {5},
url = {https://ieeexplore.ieee.org/document/10919002},
year = {2025},
pages = {3613-3622},
volume = {31},
doi = {10.1109/TVCG.2025.3549887},
title = {Coverage of Facial Expressions and Its Effects on Avatar Embodiment, Self-Identification, and Uncanniness}
}
Abstract:
Facial expressions are crucial for many eXtended Reality (XR) use cases, from mirrored self-exposure to social XR, where users interact via their avatars as digital alter egos. However, current XR devices differ in sensor coverage of the face region. Hence, a faithful reconstruction of facial expressions either has to exclude these areas or synthesize missing animation data with model-based approaches, potentially leading to perceivable mismatches between executed and perceived expressions. This paper investigates potential effects of the coverage of facial animations (none, partial, or whole) on important factors of self-perception. We exposed 83 participants to their mirrored personalized avatar. They were shown their mirrored avatar face with upper and lower face animation, upper face animation only, lower face animation only, or no face animation. Whole animations were rated higher in virtual embodiment and slightly lower in uncanniness. Missing animations did not differ from partial ones in terms of virtual embodiment. Contrasts showed significantly lower humanness, lower eeriness, and lower attractiveness for the partial conditions. For questions related to self-identification, effects were mixed. We discuss participants' shift in body part attention across conditions. Qualitative results show participants perceived their virtual representation as fascinating yet uncanny.
Larissa Brübach, Deniz Celikhan, Lennard Rüffert, Franziska Westermeier, Marc Erich Latoschik, Carolin Wienrich,
When Fear Overshadows Perceived Plausibility: The Influence of Incongruencies on Acrophobia in VR, In
Proceedings of the 32nd IEEE Virtual Reality conference (VR '25).
IEEE Computer Science,
2025.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{brubach2025overshadows,
author = {Larissa Brübach and Deniz Celikhan and Lennard Rüffert and Franziska Westermeier and Marc Erich Latoschik and Carolin Wienrich},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ieeevr-bruebach-height-and-plausibility-preprint.pdf},
year = {2025},
booktitle = {Proceedings of the 32nd IEEE Virtual Reality conference (VR '25)},
publisher = {IEEE Computer Science},
doi = {10.1109/VR59515.2025.00089},
title = {When Fear Overshadows Perceived Plausibility: The Influence of Incongruencies on Acrophobia in VR}
}
Abstract:
Virtual Reality Exposure Therapy (VRET) has become an effective, customizable, and affordable treatment for various psychological and physiological disorders. In particular, it has been used for decades to treat specific anxiety disorders such as acrophobia or arachnophobia. However, to ensure a positive outcome for patients, we must understand and control the effects potentially caused by the technology and medium of Virtual Reality (VR) itself. This article specifically investigates the impact of the Plausibility illusion (Psi), one of the two theorized presence components, on the fear of heights. In two experiments, 30 participants each experienced two different heights with congruent and incongruent object behaviors in a 2 x 2 within-subject design. Results show that the strength of the congruence manipulation plays a significant role. Incongruencies are only recognized by users when they are strong enough, particularly in high-fear conditions triggered by exposure to greater heights. If incongruencies are too subtle, they seem to be overshadowed by the stronger fear reactions. Our evidence contributes to recent theories of VR effects and emphasizes the importance of understanding and controlling factors often assumed to be incidental, particularly in VRET designs. Incongruencies should be controlled so that they do not have an unwanted influence on the patient's fear response.
Franziska Westermeier, Chandni Murmu, Kristopher Kohm, Christopher Pagano, Carolin Wienrich, Sabarish V. Babu, Marc Erich Latoschik,
Interpupillary to Inter-Camera Distance of Video See-Through AR and its Impact on Depth Perception, In
Proceedings of the 32nd IEEE Virtual Reality conference (VR '25), pp. 537-547.
2025.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{westermeier2025interpupillary,
author = {Franziska Westermeier and Chandni Murmu and Kristopher Kohm and Christopher Pagano and Carolin Wienrich and Sabarish V. Babu and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ieeevr-ipd-icd.pdf},
year = {2025},
booktitle = {Proceedings of the 32nd IEEE Virtual Reality conference (VR '25)},
pages = {537-547},
doi = {10.1109/VR59515.2025.00077},
title = {Interpupillary to Inter-Camera Distance of Video See-Through AR and its Impact on Depth Perception}
}
Abstract:
Interpupillary distance (IPD) is a crucial characteristic of head-mounted displays (HMDs) because it defines an important property for generating a stereoscopic parallax, which is essential for correct depth perception. This is why contemporary HMDs offer adjustable lenses to adapt to users' individual IPDs.
However, today's Video See-Through Augmented Reality (VST AR) HMDs use fixed camera placements to reconstruct the stereoscopic view of a user's environment.
This leads to a potential mismatch between individual IPD settings and the fixed Inter-Camera Distances (ICD), which in turn can lead to perceptual incongruencies, limiting the usability and potentially the applicability of VST AR in depth-sensitive use cases. To investigate this incongruency between IPD and ICD, we conducted an empirical evaluation with a 2x3 mixed-factor design, using a near-field, open-loop reaching task to compare distance judgments in Virtual Reality (VR) and VST AR. We also explored improvements in reaching performance via perceptual calibration by incorporating a feedback phase between pre- and post-phase conditions, with a particular focus on the influence of IPD-ICD differences. Our Linear Mixed Model (LMM) analysis showed a significant difference between VR and VST AR, a significant effect of IPD-ICD mismatch, as well as a combined effect of both factors. This novel insight and its consequences are discussed specifically for depth perception tasks in AR, eXtended Reality (XR), and potential use cases.
Marie Luisa Fiedler, Mario Botsch, Carolin Wienrich, Marc Erich Latoschik,
Self-Similarity Beats Motor Control in Augmented Reality Body Weight Perception, In
IEEE Transactions on Visualization and Computer Graphics, Vol. 31(5).
2025. Honorable Mention 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{fiedler2025selfsimilarity,
author = {Marie Luisa Fiedler and Mario Botsch and Carolin Wienrich and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics},
number = {5},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ieeevr-fiedler-self-similarity-beats-motor-control.pdf},
year = {2025},
volume = {31},
doi = {10.1109/TVCG.2025.3549851},
title = {Self-Similarity Beats Motor Control in Augmented Reality Body Weight Perception}
}
Abstract:
This paper investigates whether and how self-similarity and motor control impact sense of embodiment, self-identification, and body weight perception in Augmented Reality (AR). We conducted a 2x2 mixed design experiment involving 60 participants who interacted with either synchronously moving virtual humans or independently moving ones, each with self-similar or generic appearances, across two consecutive AR sessions. Participants evaluated their sense of embodiment, self-identification, and body weight perception of the virtual human. Our results show that self-similarity significantly enhanced sense of embodiment, self-identification, and the accuracy of body weight estimates for the virtual human. However, the effects of having motor control over the virtual human's movements were notably weaker on these measures than in similar VR studies. Further analysis indicated that not only the virtual human itself but also the participants' body weight, self-esteem, and body shape concerns predicted body weight estimates across all conditions. Our work advances the understanding of virtual human body weight perception in AR systems, emphasizing the importance of factors such as coherence with the real-world environment.
Christian Merz, Carolin Wienrich, Marc Erich Latoschik,
Does Task Matter? Task-Dependent Effects of Cross-Device Collaboration on Social Presence, In
2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW).
IEEE Computer Science,
2025.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{merz2025taskasymmetry,
author = {Christian Merz and Carolin Wienrich and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ieeevrw-task-cross-device.pdf},
year = {2025},
booktitle = {2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW)},
publisher = {IEEE Computer Science},
doi = {10.1109/VRW66409.2025.00116},
title = {Does Task Matter? Task-Dependent Effects of Cross-Device Collaboration on Social Presence}
}
Abstract:
In this work, we explored asymmetric collaboration across two distinct tasks: a collaborative sorting task and a conversational talking task. We answer the research question of how different tasks impact the user experience in asymmetric interaction. Our mixed design compared one symmetric and one asymmetric interaction across the two tasks, assessing self-perception (presence, embodiment), other-perception (co-presence, social presence, plausibility), and task perception (task load, enjoyment). Fifty-two participants collaborated in dyads on the two tasks, either with both using head-mounted displays (HMDs) or with one participant using an HMD and the other a desktop setup. Results indicate that differences in social presence diminished or disappeared during the purely conversational talking task in comparison to the sorting task. This suggests that the differences in how a social interaction is perceived, which are caused by asymmetric interaction, only occur in specific use cases. These findings underscore the critical role of task characteristics in shaping users' social XR experiences and highlight that asymmetric collaboration can be effective across different use cases and is even on par with symmetric interaction during conversations.
Jonathan Tschanter, Christian Merz, Carolin Wienrich, Marc Erich Latoschik,
Towards Understanding Harassment in Social Virtual Reality: A Study Design on the Impact of Avatar Self-Similarity, In
2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW).
IEEE Computer Science,
2025. IDEATExR Best Paper 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{tschanter2025harassment,
author = {Jonathan Tschanter and Christian Merz and Carolin Wienrich and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ieeevrw-towards-understanding-harassment-in-social-virtual-reality.pdf},
year = {2025},
booktitle = {2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW)},
publisher = {IEEE Computer Science},
title = {Towards Understanding Harassment in Social Virtual Reality: A Study Design on the Impact of Avatar Self-Similarity}
}
Abstract:
In social virtual reality (VR), harassment persists as a pervasive and critical issue. Prior work emphasizes its perceived realness and emotional impact. However, the influence of avatar design, particularly the role of self-similarity, remains underexplored. Self-similar avatars can enhance user identification and engagement, yet potentially intensify the psychological and physiological effects of harassment. Existing studies often rely on interviews or user-generated content, lacking systematic analysis and controlled comparisons. To address these gaps, we present a process for creating realistic VR harassment scenarios. We built a scenario based on existing literature and validated it with expert reviews and user feedback. We propose a 2 x 2 between-subjects design to systematically examine users' emotional and physiological states, their identification with avatars, and the effects of avatar self-similarity. The study design will deepen the understanding of harassment dynamics in VR. Additionally, it can provide actionable insights for designing safer, more inclusive virtual environments that promote user well-being and foster equitable communities.
Marie Luisa Fiedler, Arne Bürger, Sabrina Mittermeier, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
Evaluating VR and AR Mirror Exposure for Anorexia Nervosa Therapy in Adolescents: A Method Proposal for Understanding Stakeholder Perspectives, In
2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW), pp. 965-970.
IEEE Computer Science,
2025.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{fiedler2025evaluating,
author = {Marie Luisa Fiedler and Arne Bürger and Sabrina Mittermeier and Mario Botsch and Marc Erich Latoschik and Carolin Wienrich},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ieeevr-fiedler-stakeholder-focus-group-proposal.pdf},
year = {2025},
booktitle = {2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW)},
publisher = {IEEE Computer Science},
pages = {965-970},
title = {Evaluating VR and AR Mirror Exposure for Anorexia Nervosa Therapy in Adolescents: A Method Proposal for Understanding Stakeholder Perspectives}
}
Abstract:
Body image distortions in anorexia nervosa pose significant therapeutic challenges, requiring innovative interventions. Virtual Reality (VR) and Augmented Reality (AR) technologies offer promising solutions, yet the preferences of stakeholders such as therapists and patients remain unexplored. This methodological proposal outlines focus groups to compare VR and AR mirror exposures using personalized and body-weight-modifiable avatars in anorexia nervosa therapy. Therapists will evaluate therapeutic potential, risks, and practicality, while adolescent patients will assess comfort, stress responses, and usability. The findings aim to advance the user-centered integration of VR and AR into anorexia nervosa therapy, addressing critical treatment gaps.
Lena Holderrieth, Erik Wolf, Marie Luisa Fiedler, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
Do You Feel Better? The Impact of Embodying Photorealistic Avatars with Ideal Body Weight on Attractiveness and Self-Esteem in Virtual Reality, In
2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW), pp. 1404-1405.
IEEE Computer Science,
2025. Best Poster 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{holderrieth2025better,
author = {Lena Holderrieth and Erik Wolf and Marie Luisa Fiedler and Mario Botsch and Marc Erich Latoschik and Carolin Wienrich},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ieeevr-holderrieth-do-you-feel-better.pdf},
year = {2025},
booktitle = {2025 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (IEEE VRW)},
publisher = {IEEE Computer Science},
pages = {1404-1405},
doi = {10.1109/VRW66409.2025.00348},
title = {Do You Feel Better? The Impact of Embodying Photorealistic Avatars with Ideal Body Weight on Attractiveness and Self-Esteem in Virtual Reality}
}
Abstract:
Body weight issues can manifest as low self-esteem through a negative body image or a feeling of unattractiveness. To explore potential interventions, this pilot study examined whether embodying a photorealistically personalized avatar with enhanced attractiveness affects self-esteem. Participants in the manipulation group adjusted their avatar's body weight to their self-defined ideal, while a control group used unmodified avatars. To confirm the manipulation, we measured the perceived attractiveness of the avatars. Results showed that participants found avatars at their ideal weight significantly more attractive, confirming an effective manipulation. Further, the ideal-weight group showed a clear trend towards higher self-esteem post-exposure.
Peter Kullmann, Theresa Schell, Mario Botsch, Marc Erich Latoschik,
Eye-to-eye or face-to-face? Face and head substitution for co-located augmented reality, In
Frontiers in Virtual Reality, Vol. 6.
2025.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{kullmann2025eyetoeye,
author = {Peter Kullmann and Theresa Schell and Mario Botsch and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2025.1594350},
year = {2025},
volume = {6},
doi = {10.3389/frvir.2025.1594350},
title = {Eye-to-eye or face-to-face? Face and head substitution for co-located augmented reality}
}
Abstract:
In co-located extended reality (XR) experiences, headsets occlude their wearers' facial expressions, impeding natural conversation. We introduce two techniques to mitigate this using off-the-shelf hardware: compositing a view of a personalized avatar behind the visor (“see-through visor”) and reducing the headset's visibility and showing the avatar's head (“head substitution”). We evaluated them in a repeated-measures dyadic study (N = 25) that indicated promising effects. Compared to a no-avatar baseline, collaboration with a confederate using our techniques resulted in quicker consensus in a judgment task and enhanced perceived mutual understanding. However, the avatar was also rated and commented on as uncanny, though participants' comments indicate a tolerance for this uncanniness since the techniques restore gaze utility. Furthermore, performance in an executive task deteriorated in the presence of our techniques, indicating that our implementation drew participants' attention to their partner's avatar and away from the task. We suggest giving users agency over how these techniques are applied and recommend using the same representation across interaction partners to avoid power imbalances.
Ronja Heinrich, Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik,
A Systematic Review of Fusion Methods for the User-Centered Design of Multimodal Interfaces, In
Proceedings of the 27th International Conference on Multimodal Interaction (ICMI '25).
Association for Computing Machinery,
2025.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{heinrich2025systematic,
author = {Ronja Heinrich and Chris Zimmerer and Martin Fischbach and Marc Erich Latoschik},
url = {https://dl.acm.org/doi/10.1145/3716553.3750790},
year = {2025},
booktitle = {Proceedings of the 27th International Conference on Multimodal Interaction (ICMI '25)},
publisher = {Association for Computing Machinery},
doi = {10.1145/3716553.3750790},
title = {A Systematic Review of Fusion Methods for the User-Centered Design of Multimodal Interfaces}
}
Abstract:
This systematic review investigates the current state of research on multimodal fusion methods, i.e., the joint analysis of multimodal inputs, for intentional, instruction-based human-computer interactions, focusing on the combination of speech and spatially expressive modalities such as gestures, touch, pen, and gaze.
We examine 50 systems from a User-Centered Design perspective, categorizing them by modality combinations, fusion strategies, application domains and media, as well as reusability. Our findings highlight a predominance of descriptive late fusion methods, limited reusability, and a lack of standardized tool support, hampering rapid prototyping and broader applicability. We identify emerging trends in machine learning-based fusion and outline future research directions to advance reusable and user-centered multimodal systems.
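As a minimal illustration of the decision-level ("late") fusion pattern the review finds predominant, the following sketch fuses independently recognized speech and pointing hypotheses; the data model and combination rule are simplifying assumptions for exposition, not a system from the surveyed corpus.

# Minimal late-fusion sketch: speech and deictic gesture are recognized
# independently and merged only at the decision level ("put that there").
from dataclasses import dataclass
from typing import Optional

@dataclass
class Hypothesis:
    label: str        # recognized intent or referent
    score: float      # recognizer confidence in [0, 1]
    timestamp: float  # seconds, used for temporal alignment

def late_fuse(speech: Hypothesis, gesture: Hypothesis,
              max_gap: float = 1.0) -> Optional[dict]:
    # Fuse only if the modalities are temporally aligned; the confidence
    # product is one simple decision-level combination rule.
    if abs(speech.timestamp - gesture.timestamp) > max_gap:
        return None
    return {"command": speech.label,
            "target": gesture.label,
            "confidence": speech.score * gesture.score}

command = late_fuse(Hypothesis("move", 0.92, 3.1),
                    Hypothesis("red_cube", 0.85, 3.4))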
Christian Merz, Niklas Krome, Carolin Wienrich, Stefan Kopp, Marc Erich Latoschik,
The Impact of AI-Based Real-Time Gesture Generation and Immersion on the Perception of Others and Interaction Quality in Social XR, In
IEEE Transactions on Visualization and Computer Graphics.
2025. IEEE ISMAR Best Paper Award Honorable Mention 🏆
[BibTeX]
[Abstract]
[BibSonomy]
[Doi]
@article{merz2025impact,
author = {Christian Merz and Niklas Krome and Carolin Wienrich and Stefan Kopp and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2025},
doi = {10.1109/TVCG.2025.3616864},
title = {The Impact of AI-Based Real-Time Gesture Generation and Immersion on the Perception of Others and Interaction Quality in Social XR}
}
Abstract:
This study explores how people interact in dyadic social eXtended Reality (XR), focusing on two main factors: the animation type of a conversation partner’s avatar and how immersed the user feels in the virtual environment. Specifically, we investigate how 1) idle behavior, 2) AI-generated gestures, and 3) motion-captured movements from a confederate (a controlled partner in the study) influence the quality of conversation and how that partner is perceived. We examined these effects in both symmetric interactions (where both participants use VR headsets and controllers) and asymmetric interactions (where one participant uses a desktop setup). We developed a social XR platform that supports asymmetric device configurations to provide varying levels of immersion. The platform also supports a modular avatar animation system providing idle behavior, real-time AI-generated co-speech gestures, and full-body motion capture. Using a 2×3 mixed design with 39 participants, we measured users’ sense of spatial presence, their perception of the confederate, and the overall conversation quality. Our results show that users who were more immersed felt a stronger sense of presence and viewed their partner as more human-like and believable. Surprisingly, however, the type of avatar animation did not significantly affect conversation quality or how the partner was perceived. Participants often reported focusing more on what was said rather than how the avatar moved.
Andrea Zimmerer, Lydia Bartels, Marc Erich Latoschik,
The Impact of Performance-Specific Feedback from a Virtual Coach in a Virtual Reality Exercise Application, In
2025 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 1031-1041.
IEEE Computer Society,
2025.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{zimmerer2025feedback,
author = {Andrea Zimmerer and Lydia Bartels and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2025-ismar-feedback-from-a-virtual-coach-in-vr-exercise.pdf},
year = {2025},
booktitle = {2025 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
publisher = {IEEE Computer Society},
pages = {1031-1041},
doi = {10.1109/ISMAR67309.2025.00110},
title = {The Impact of Performance-Specific Feedback from a Virtual Coach in a Virtual Reality Exercise Application}
}
Abstract:
Virtual reality (VR) exercise applications are promising tools, e.g., for at-home training and rehabilitation. However, existing applications vary significantly in key design choices such as environments, embodiment, and virtual coaching, making it difficult to derive clear design guidelines. A prominent design choice is the use of embodied virtual coaches, which guide user interaction and provide feedback. In a user study with 76 participants, we investigated how different levels of performance specificity in feedback from an embodied virtual coach affect intermediate factors such as VR experience, motivation, and coach perception. Participants performed lower-body movement exercises, i.e., Leg Raises and Knee Extensions, commonly used in knee rehabilitation. We found that highly performance-specific feedback led to higher ratings of perceived realism and of the virtual coach's anthropomorphism and sympathy than medium-specificity feedback, but did not affect motivation. Based on our findings, we suggest including precise, performance-specific details when creating feedback for a virtual coach. We observed a descriptive pattern of higher scores in the low-specificity condition compared to the medium condition on most measures, which raises the possibility that less specific feedback may, in some cases, be perceived more positively than moderately specific feedback. These findings provide valuable insights into how design choices impact relevant intermediate factors that are crucial for maximizing both workout effectiveness and the quality of the virtual coaching experience.
Joanna Grause, Larissa Brübach, Franziska Westermeier, Carolin Wienrich, Marc Erich Latoschik,
The Stability of Plausibility and Presence in Claustrophobic Virtual Reality Exposure Therapy, In
Proceedings of the Mensch und Computer 2025, pp. 181–192. New York, NY, USA:
Association for Computing Machinery,
2025.
[BibTeX]
[Download]
[BibSonomy]
[Doi]
@inproceedings{noauthororeditor2025stability,
author = {Joanna Grause and Larissa Brübach and Franziska Westermeier and Carolin Wienrich and Marc Erich Latoschik},
url = {https://doi.org/10.1145/3743049.3743068},
year = {2025},
booktitle = {Proceedings of the Mensch Und Computer 2025},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {MuC '25},
pages = {181–192},
doi = {10.1145/3743049.3743068},
title = {The Stability of Plausibility and Presence in Claustrophobic Virtual Reality Exposure Therapy}
}
Samantha Monty, Dennis Alexander Mevißen, Marc Erich Latoschik,
Improving Mid-Air Sketching in Room-Scale Virtual Reality with Dynamic Color-to-Depth and Opacity Cues, In
IEEE Transactions on Visualization and Computer Graphics.
2025. To be published.
[BibTeX]
[Abstract]
[BibSonomy]
@article{monty2025improving,
author = {Samantha Monty and Dennis Alexander Mevißen and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics},
year = {2025},
title = {Improving Mid-Air Sketching in Room-Scale Virtual Reality with Dynamic Color-to-Depth and Opacity Cues}
}
Abstract:
Immersive 3D mid-air sketching systems liberate users from the confines of traditional 2D sketching canvases. However, perceptual challenges in Virtual Reality (VR), combined with the ergonomic and cognitive challenges of sketching in all three dimensions in mid-air, lower the accuracy and aesthetic quality of 3D sketches. This paper explores how color-to-depth and opacity cues support users in creating and perceiving freehand 3D strokes in room-scale sketching, unlocking a full 360° of freedom for creation. We implemented three graphic depth shader cues modifying the (1) alpha, (2) hue, and (3) value levels of a single color to dynamically adjust the color and transparency of meshes relative to their depth from the user. We investigated how these depth cues influence sketch efficiency, sketch quality, and total sketch experience with 24 participants in a comparative, counterbalanced, 4 x 1 within-subjects user study. First, with our graphic depth shader cues we successfully transferred results of prior research on seated sketching tasks to room-scale scenarios. Our color-to-depth cues improved the similarity of sketches to target models. This highlights the usefulness of the color-to-depth approach even for the increased range of motion and depth in room-scale sketching. Second, our shaders helped participants complete tasks faster, spend a greater percentage of task time sketching, feel less mentally tired, and feel more efficient while sketching at room scale. We discuss these findings and share our insights and conclusions to advance research on improving spatial cognition in immersive sketching systems.
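The three cue types could be approximated roughly as follows; this is an illustrative Python sketch rather than the study's GPU shaders, and the mapping ranges are assumptions, not the calibrated values from the experiment.

# Illustrative depth-cue mapping: adjust alpha, hue, or value of a single
# base color as a function of a stroke's normalized depth from the user.
import colorsys

def depth_cue(depth: float, max_depth: float, mode: str,
              base_hue: float = 0.6) -> tuple:
    t = min(max(depth / max_depth, 0.0), 1.0)  # normalized depth in [0, 1]
    if mode == "alpha":   # nearer strokes opaque, farther more transparent
        r, g, b = colorsys.hsv_to_rgb(base_hue, 1.0, 1.0)
        return (r, g, b, 1.0 - 0.8 * t)
    if mode == "hue":     # shift the hue of the single color with depth
        r, g, b = colorsys.hsv_to_rgb((base_hue + 0.25 * t) % 1.0, 1.0, 1.0)
        return (r, g, b, 1.0)
    if mode == "value":   # darken the color with increasing depth
        r, g, b = colorsys.hsv_to_rgb(base_hue, 1.0, 1.0 - 0.7 * t)
        return (r, g, b, 1.0)
    raise ValueError(f"unknown cue mode: {mode}")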
2024
Christian Rack, Vivek Nair, Lukas Schach, Felix Foschum, Marcel Roth, Marc Erich Latoschik,
Navigating the Kinematic Maze: Analyzing, Standardizing and Unifying XR Motion Datasets, In
2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW).
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{noauthororeditor2024navigating,
author = {Christian Rack and Vivek Nair and Lukas Schach and Felix Foschum and Marcel Roth and Marc Erich Latoschik},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2024-01-Rack-Navigating_the_Kinematic_Maze.pdf},
year = {2024},
booktitle = {2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
doi = {10.1109/VRW62533.2024.00098},
title = {Navigating the Kinematic Maze: Analyzing, Standardizing and Unifying XR Motion Datasets}
}
Abstract:
This paper addresses the critical importance of standards and documentation in kinematic research, particularly within Extended Reality (XR) environments. We focus on the pivotal role of motion data, emphasizing the challenges posed by the current lack of standardized practices in XR user motion datasets. Our work involves a detailed analysis of 8 existing datasets, identifying gaps in documentation and essential specifications such as coordinate systems, rotation representations, and units of measurement. We highlight how these gaps can lead to misinterpretations and irreproducible results. Based on our findings, we propose a set of guidelines and best practices for creating and documenting motion datasets, aiming to improve their quality, usability, and reproducibility. We also created a web-based tool for visual inspection of motion recordings, further aiding in dataset evaluation and standardization. Furthermore, we introduce the XR Motion Dataset Catalogue, a collection of the analyzed datasets in a unified and aligned format. This initiative significantly streamlines access for researchers, allowing them to download partial or entire datasets with a single line of code and without the need for additional alignment efforts. Our contributions enhance dataset integrity and reliability in kinematic research, paving the way for more consistent and scientifically robust studies in this evolving field.
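The kind of alignment step such a catalogue automates might look as follows; the source convention here (millimeters, right-handed Z-up) is a made-up example for illustration, not the convention of any specific analyzed dataset.

# Hedged sketch of unifying units and coordinate conventions for one
# motion recording: millimeters to meters, Z-up to Y-up (both right-handed).
import numpy as np

def standardize_positions(pos_mm: np.ndarray) -> np.ndarray:
    # pos_mm: (n_frames, 3) positions in millimeters, right-handed Z-up.
    pos_m = pos_mm / 1000.0                      # unify units to meters
    x, y, z = pos_m[:, 0], pos_m[:, 1], pos_m[:, 2]
    # Rotate the frame so Y points up: (x, y, z) -> (x, z, -y).
    return np.stack([x, z, -y], axis=1)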
Erik Wolf, Carolin Wienrich, Marc Erich Latoschik,
Towards an Altered Body Image Through the Exposure to a Modulated Self in Virtual Reality, In
2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 857-858.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{wolf2024towards,
author = {Erik Wolf and Carolin Wienrich and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ieeevr-altered-body-perception-through-modulated-self-preprint.pdf},
year = {2024},
booktitle = {2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
pages = {857-858},
doi = {10.1109/VRW62533.2024.00225},
title = {Towards an Altered Body Image Through the Exposure to a Modulated Self in Virtual Reality}
}
Abstract:
Self-exposure using modulated embodied avatars in virtual reality (VR) may support a positive body image. However, further investigation is needed to address methodological challenges and to understand the concrete effects, including their quantification. We present an iteratively refined paradigm for studying the tangible effects of exposure to a modulated self in VR. Participants perform body-centered movements in front of a virtual mirror, encountering their photorealistically personalized embodied avatar with increased, decreased, or unchanged body size. Additionally, we propose different body size estimation tasks conducted in reality and VR before and after exposure to assess putatively elicited perceptual adaptations in participants.
Sebastian Oberdörfer, Sandra Birnstiel, Marc Erich Latoschik,
Influence of Virtual Shoe Formality on Gait and Cognitive Performance in a VR Walking Task, In
Proceedings of the 31st IEEE Virtual Reality conference (VR '24).
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{oberdorfer2024influence,
author = {Sebastian Oberdörfer and Sandra Birnstiel and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ieeevr-stroop-shoes-preprint.pdf},
year = {2024},
booktitle = {Proceedings of the 31st IEEE Virtual Reality conference (VR '24)},
title = {Influence of Virtual Shoe Formality on Gait and Cognitive Performance in a VR Walking Task}
}
Abstract:
Depending on their formality, clothes not only change one's appearance but can also influence behavior and cognitive processes. Shoes are a special aspect of an outfit. Besides coming in various degrees of formality, their structure can affect human gait. Avatars used to embody users in immersive Virtual Reality (VR) can wear any kind of clothing. According to the Proteus Effect, the appearance of a user's avatar can influence their behavior: users change their behavior in accordance with the expected behavior of the avatar. In our study, we embodied 39 participants with a generic avatar of their gender wearing three different pairs of shoes as a within-subjects condition. The shoes differed in their degree of formality. We measured gait during a 2-minute walking task, during which participants wore the same real shoes, and assessed selective attention using the Stroop task. Our results show significant differences in gait between the tested virtual shoe pairs. We found small effects between the three shoe conditions with respect to selective attention. However, we found no significant differences with respect to correct items and response time in the Stroop task. Thus, our results indicate that virtual shoes are accepted by users and, although not eliciting any physical constraints, lead to changes in gait. This suggests that users not only adjust personal behavior according to the Proteus Effect but are also affected by virtual biomechanical constraints. Our results also suggest a potential influence of virtual clothing on cognitive performance.
Pascal Martinez Pankotsch, Sebastian Oberdörfer, Marc Erich Latoschik,
Effects of Nonverbal Communication of Virtual Agents on Social Pressure and Encouragement in VR, In
Proceedings of the 31st IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '24).
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{martinezpankotsch2024effects,
author = {Pascal Martinez Pankotsch and Sebastian Oberdörfer and Marc Erich Latoschik},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2024-ieeevr-agent-encouragement-peer-pressure-preprint.pdf},
year = {2024},
booktitle = {Proceedings of the 31st IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '24)},
title = {Effects of Nonverbal Communication of Virtual Agents on Social Pressure and Encouragement in VR}
}
Abstract:
Our study investigated how virtual agents impact users in challenging VR environments, exploring whether nonverbal animations affect social pressure, positive encouragement, and trust in 30 female participants. Although participants showed signs of pressure and support during the experimental trials, we found no significant differences in post-exposure measurements of social pressure and encouragement, interpersonal trust, and well-being. While inconclusive, the findings suggest potential, indicating the need for further research with improved animations and a larger sample size for validation.
Carolin Wienrich, Viktoria Horn, Arne Bürger, Jana Krauss,
Personal Space Invasion to Prevent Cyberbullying: Design, Development, and Evaluation of an Immersive Prevention Measure for Children and Adolescents, In
Springer Virtual Reality.
2024.
[BibTeX]
[Abstract]
[BibSonomy]
@article{wienrich2024personal,
author = {Carolin Wienrich and Viktoria Horn and Arne Bürger and Jana Krauss},
journal = {Springer Virtual Reality},
year = {2024},
title = {Personal Space Invasion to Prevent Cyberbullying: Design, Development, and Evaluation of an Immersive Prevention Measure for Children and Adolescents}
}
Abstract:
Previous work on cyberbullying has shown that the number of victims is increasing and that the need for prevention is exceptionally high among younger school students (5th-9th grade). Due to the omnipresence of cyberattacks, victims can hardly distance themselves psychologically and thus experience an intrusion into almost all areas of life. The perpetrators, on the other hand, feel the consequences of their actions even less in cyberspace. However, there is a gap between this need and the existence of innovative prevention programs that are tied to the digital reality of the target group and treat essential aspects of psychological distance. This article explores the design space, feasibility, and effectiveness of a unique VR-based cyberbullying prevention component in a human-centered iterative approach. The central idea is to create a virtual personal space invasion with virtual objects associated with cyberbullying, making the everyday intrusion experienced by victims tangible. A pre-study revealed that harmful speech texts in bright, non-removable message boxes best transferred the psychological determinants associated with a personal space invasion to virtual objects contextualized in cyberbullying scenarios. Therefore, these objects were incorporated into a virtual prevention program that was then tested in a laboratory study with 41 participants. The results showed that the intervention could trigger cognitive dissonance and empathy. In the second step, the intervention was evaluated and improved in a focus group with the actual target group of children and adolescents. The improved application was then evaluated in a five-day school workshop with 100 children and adolescents. The children understood the metaphor of virtual space invasion through the harmful text boxes and reported the expected psychological effects. They also showed great interest in VR. In summary, this paper contributes to the innovative and effective prevention of cyberbullying by using the potential of VR. It provides empirical evidence from a laboratory experiment and a field study with a large sample from the target group of children and adolescents and discusses implications for future developments.
Franziska Westermeier, Larissa Brübach, Carolin Wienrich, Marc Erich Latoschik,
Assessing Depth Perception in VR and Video See-Through AR: A Comparison on Distance Judgment, Performance, and Preference, In
IEEE Transactions on Visualization and Computer Graphics, Vol. 30(5), pp. 2140-2150.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{westermeier2024assessing,
author = {Franziska Westermeier and Larissa Brübach and Carolin Wienrich and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics},
number = {5},
url = {https://ieeexplore.ieee.org/document/10458408},
year = {2024},
pages = {2140 - 2150},
volume = {30},
doi = {10.1109/TVCG.2024.3372061},
title = {Assessing Depth Perception in VR and Video See-Through AR: A Comparison on Distance Judgment, Performance, and Preference}
}
Abstract:
Spatial User Interfaces along the Reality-Virtuality continuum heavily depend on accurate depth perception. However, current display technologies still exhibit shortcomings in the simulation of accurate depth cues, and these shortcomings also vary between Virtual and Augmented Reality (VR, AR), together referred to as eXtended Reality (XR). This article compares depth perception between VR and Video See-Through (VST) AR. We developed a digital twin of an existing office room where users had to perform five depth-dependent tasks in VR and VST AR. Thirty-two participants took part in a user study using a 1×4 within-subjects design. Our results reveal higher misjudgment rates in VST AR due to conflicting depth cues between virtual and physical content. Increased head movements observed in participants were interpreted as a compensatory response to these conflicting cues. Furthermore, a longer task completion time in the VST AR condition indicates lower task performance in VST AR. Interestingly, although participants rated the VR condition as easier, and despite their increased misjudgments and lower performance with the VST AR display, a majority still expressed a preference for the VST AR experience. We discuss and explain these findings with the high visual dominance and referential power of the physical content in the VST AR condition, leading to higher spatial presence and plausibility.
Florian Kern, Jonathan Tschanter, Marc Erich Latoschik,
Handwriting for Text Input and the Impact of XR Displays, Surface Alignments, and Sentence Complexities, In
IEEE Transactions on Visualization and Computer Graphics, pp. 1-11.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{10460576,
author = {Florian Kern and Jonathan Tschanter and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics},
url = {https://ieeexplore.ieee.org/document/10460576},
year = {2024},
pages = {1-11},
doi = {10.1109/TVCG.2024.3372124},
title = {Handwriting for Text Input and the Impact of XR Displays, Surface Alignments, and Sentence Complexities}
}
Abstract:
Text input is desirable across various eXtended Reality (XR) use cases and is particularly crucial for knowledge and office work. This article compares handwriting text input between Virtual Reality (VR) and Video See-Through Augmented Reality (VST AR), facilitated by physically aligned and mid-air surfaces when writing simple and complex sentences. In a 2x2x2 experimental design, 72 participants performed two ten-minute handwriting sessions, each including ten simple and ten complex sentences representing text input in real-world scenarios. Our developed handwriting application supports different XR displays, surface alignments, and handwriting recognition based on digital ink. We evaluated usability, user experience, task load, text input performance, and handwriting style. Our results indicate high usability with a successful transfer of handwriting skills to the virtual domain. XR displays and surface alignments did not impact text input speed and error rate. However, sentence complexities did, with participants achieving higher input speeds and fewer errors for simple sentences (17.85 WPM, 0.51% MSD ER) than complex sentences (15.07 WPM, 1.74% MSD ER). Handwriting on physically aligned surfaces showed higher learnability and lower physical demand, making them more suitable for prolonged handwriting sessions. Handwriting on mid-air surfaces yielded higher novelty and stimulation ratings, which might diminish with more experience. Surface alignments and sentence complexities significantly affected handwriting style, leading to enlarged and more connected cursive writing in both mid-air and for simple sentences. The study also demonstrated the benefits of using XR controllers in a pen-like posture to mimic styluses and pressure-sensitive tips on physical surfaces for input detection. We additionally provide a phrase set of simple and complex sentences as a basis for future text input studies, which can be expanded and adapted.
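The two reported text-input metrics follow standard definitions; a brief sketch (not the authors' code) of words per minute with the usual five-characters-per-word convention and the minimum string distance (MSD) error rate between presented and transcribed sentences:

# Standard text-entry metrics: WPM and MSD error rate.
def wpm(transcribed: str, seconds: float) -> float:
    # One "word" is conventionally five characters.
    return (len(transcribed) / 5.0) / (seconds / 60.0)

def msd(a: str, b: str) -> int:
    # Levenshtein edit distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def msd_error_rate(presented: str, transcribed: str) -> float:
    # Fraction of the longer string that had to be edited.
    return msd(presented, transcribed) / max(len(presented), len(transcribed))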
Murat Yalcin, Andreas Halbig, Martin Fischbach, Marc Erich Latoschik,
Automatic Cybersickness Detection by Deep Learning of Augmented Physiological Data from Off-the-Shelf Consumer-Grade Sensors, In
Frontiers in Virtual Reality, Vol. 5.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{10.3389/frvir.2024.1364207,
author = {Murat Yalcin and Andreas Halbig and Martin Fischbach and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2024.1364207},
year = {2024},
volume = {5},
doi = {10.3389/frvir.2024.1364207},
title = {Automatic Cybersickness Detection by Deep Learning of Augmented Physiological Data from Off-the-Shelf Consumer-Grade Sensors}
}
Abstract:
Cybersickness is still a prominent risk factor potentially affecting the usability of virtual reality applications. Automated real-time detection of cybersickness promises to support a better general understanding of the phenomenon and to avoid and counteract its occurrence. It could be used to facilitate application optimization, that is, to systematically link potential causes (technical development and conceptual design decisions) to cybersickness in closed-loop user-centered development cycles. In addition, it could be used to monitor, warn, and hence safeguard users against any onset of cybersickness during a virtual reality exposure, especially in healthcare applications. This article presents a novel real-time-capable cybersickness detection method by deep learning of augmented physiological data. In contrast to related preliminary work, we are exploring a unique combination of mid-immersion ground truth elicitation, an unobtrusive wireless setup, and moderate training performance requirements. We developed a proof-of-concept prototype to compare (combinations of) convolutional neural networks, long short-term memory, and support vector machines with respect to detection performance. We demonstrate that the use of a conditional generative adversarial network-based data augmentation technique increases detection performance significantly and showcase the feasibility of real-time cybersickness detection in a genuine application example. Finally, a comprehensive performance analysis demonstrates that a four-layered bidirectional long short-term memory network with the developed data augmentation delivers superior performance (91.1% F1-score) for real-time cybersickness detection. To encourage replicability and reuse in future cybersickness studies, we publicly released the code and the dataset.
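For readers who want a feel for the winning architecture named above (a four-layered bidirectional LSTM), here is a minimal PyTorch sketch, assuming windowed multi-channel physiological input; the layer sizes, window length, and channel count are illustrative assumptions, and the authors' released code remains the authoritative implementation.

import torch
import torch.nn as nn

class BiLSTMDetector(nn.Module):
    """Sketch of a four-layer bidirectional LSTM cybersickness classifier."""
    def __init__(self, n_channels: int = 4, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(
            input_size=n_channels,  # e.g., heart rate, EDA, temperature channels (assumed)
            hidden_size=hidden,
            num_layers=4,           # "four-layered" as in the abstract
            bidirectional=True,
            batch_first=True,
        )
        self.head = nn.Linear(2 * hidden, n_classes)  # 2x: forward + backward states

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_channels), one window of sensor samples
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # classify from the final time step

# One 10-second window sampled at 30 Hz with 4 channels (toy data):
logits = BiLSTMDetector()(torch.randn(1, 300, 4))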
Smi Hinterreiter, Timo Spinde, Sebastian Oberdörfer, Isao Echizen, Marc Erich Latoschik,
News Ninja: Gamified Annotation Of Linguistic Bias In Online News, In
Proceedings of the ACM Human-Computer Interaction, Vol. 8(CHI PLAY, Article 327), p. 29.
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{hinterreiter2024ninja,
author = {Smi Hinterreiter and Timo Spinde and Sebastian Oberdörfer and Isao Echizen and Marc Erich Latoschik},
journal = {Proceedings of the ACM Human-Computer Interaction},
number = {CHI PLAY, Article 327},
url = {https://dl.acm.org/doi/10.1145/3677092},
year = {2024},
pages = {29},
volume = {8},
doi = {10.1145/3677092},
title = {News Ninja: Gamified Annotation Of Linguistic Bias In Online News}
}
Abstract:
Recent research shows that visualizing linguistic bias mitigates its negative effects. However, reliable automatic detection methods to generate such visualizations require costly, knowledge-intensive training data. To facilitate data collection for media bias datasets, we present News Ninja, a game employing data-collecting game mechanics to generate a crowdsourced dataset. Before annotating sentences, players are educated on media bias via a tutorial. Our findings show that datasets gathered with crowdsourced workers trained on News Ninja can reach significantly higher inter-annotator agreements than expert and crowdsourced datasets with similar data quality. As News Ninja encourages continuous play, it allows datasets to adapt to the reception and contextualization of news over time, presenting a promising strategy to reduce data collection expenses, educate players, and promote long-term bias mitigation.
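Since the study's central quantity is inter-annotator agreement, the sketch below shows one common way to compute it for two annotators, Cohen's kappa; the abstract does not state which agreement measure News Ninja uses, so the measure and all names here are illustrative assumptions.

from collections import Counter

def cohens_kappa(a: list, b: list) -> float:
    """Chance-corrected agreement between two annotators' label lists."""
    assert len(a) == len(b) and a, "annotations must be paired and non-empty"
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a.keys() | counts_b.keys()) / (n * n)
    return (observed - expected) / (1 - expected)

# Two annotators labeling five sentences as biased (1) or neutral (0):
print(cohens_kappa([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))  # ~0.62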
Christian Rack, Lukas Schach, Felix Achter, Yousof Shehada, Jinghuai Lin, Marc Erich Latoschik,
Motion Passwords, In
Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology (19), pp. 1-11. New York, NY, USA:
Association for Computing Machinery,
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@conference{rack2024motion,
author = {Christian Rack and Lukas Schach and Felix Achter and Yousof Shehada and Jinghuai Lin and Marc Erich Latoschik},
number = {19},
url = {https://doi.org/10.1145/3641825.3687711},
year = {2024},
booktitle = {Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {VRST '24},
pages = {1-11},
doi = {10.1145/3641825.3687711},
title = {Motion Passwords}
}
Abstract:
This paper introduces “Motion Passwords”, a novel biometric authentication approach where virtual reality users verify their identity by physically writing a chosen word in the air with their hand controller. This method allows combining three layers of verification: knowledge-based password input, handwriting style analysis, and motion profile recognition. As a first step towards realizing this potential, we focus on verifying users based on their motion profiles. We conducted a data collection study with 48 participants, who performed over 3800 Motion Password signatures across two sessions. We assessed the effectiveness of feature-distance and similarity-learning methods for motion-based verification using the Motion Passwords as well as specific and uniform ball-throwing signatures used in previous works. In our results, the similarity-learning model was able to verify users with the same accuracy for both signature types. This demonstrates that Motion Passwords, even when applying only the motion-based verification layer, achieve reliability comparable to previous methods. This highlights the potential for Motion Passwords to become even more reliable with the addition of knowledge-based and handwriting style verification layers. Furthermore, we present a proof-of-concept Unity application demonstrating the registration and verification process with our pretrained similarity-learning model. We publish our code, the Motion Password dataset, the pretrained model, and our Unity prototype on https://github.com/cschell/MoPs
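As a rough illustration of the motion-profile verification layer described above, the following Python sketch embeds a motion sequence with a small recurrent encoder and accepts an attempt if its embedding is sufficiently similar to an enrolled signature; the encoder, channel count, and threshold are illustrative assumptions, not the authors' pretrained similarity-learning model (see the linked repository for that).

import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionEncoder(nn.Module):
    """Maps a motion sequence to a unit-length embedding vector."""
    def __init__(self, n_channels: int = 18, dim: int = 128):
        super().__init__()
        # n_channels: e.g., positions and rotations of head + both controllers (assumed)
        self.gru = nn.GRU(n_channels, dim, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, h = self.gru(x)                 # x: (batch, time, n_channels)
        return F.normalize(h[-1], dim=-1)  # final hidden state, L2-normalized

def verify(encoder, attempt, enrolled, threshold: float = 0.8) -> bool:
    """Accept if the attempt is close enough to any enrolled signature."""
    e = encoder(attempt)
    sims = [F.cosine_similarity(e, encoder(s)).item() for s in enrolled]
    return max(sims) >= threshold

# Toy usage: one enrolled signature, one verification attempt (random data):
enc = MotionEncoder()
accepted = verify(enc, torch.randn(1, 120, 18), [torch.randn(1, 120, 18)])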
Larissa Brübach, Mona Röhm, Franziska Westermeier, Marc Erich Latoschik, Carolin Wienrich,
Manipulating Immersion: The Impact of Perceptual Incongruence on Perceived Plausibility in VR, In
23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
IEEE Computer Society,
2024. IEEE ISMAR Best Paper Nominee 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{brubach2024manipulating,
author = {Larissa Brübach and Mona Röhm and Franziska Westermeier and Marc Erich Latoschik and Carolin Wienrich},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ismar-manipulating-immersion.pdf},
year = {2024},
booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
publisher = {IEEE Computer Society},
doi = {10.1109/ISMAR62088.2024.00124},
title = {Manipulating Immersion: The Impact of Perceptual Incongruence on Perceived Plausibility in VR}
}
Abstract:
This work presents a study where we used incongruencies on the cognitive and the perceptual layer to investigate their effects on perceived plausibility and, thereby, presence and spatial presence. We used a 2x3 within-subject design with the factors familiar size (cognitive manipulation) and immersion (perceptual manipulation). For the different levels of immersion, we implemented three different tracking qualities: rotation-and-translation tracking, rotation-only tracking, and stereoscopic-view-only tracking. Participants scanned products in a virtual supermarket where the familiar size of these objects was manipulated. Simultaneously, they could either move their head normally or had to use the thumbsticks to navigate their view of the environment. Results show that both manipulations had a negative effect on perceived plausibility and, thereby, presence. In addition, the tracking manipulation also had a negative effect on spatial presence. These results are especially interesting in light of the ongoing discussion about the role of plausibility and congruence in evaluating XR environments. The results can hardly be explained by traditional presence models, where immersion should not be an influencing factor for perceived plausibility. However, they are in agreement with the recently introduced Congruence and Plausibility (CaP) model and provide empirical evidence for the model's predicted pathways.
Larissa Brübach, Marius Röhm, Franziska Westermeier, Carolin Wienrich, Marc Erich Latoschik,
The Influence of a Low-Resolution Peripheral Display Extension on the Perceived Plausibility and Presence, In
Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology (3). New York, NY, USA:
Association for Computing Machinery,
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{brubach2024influence,
author = {Larissa Brübach and Marius Röhm and Franziska Westermeier and Carolin Wienrich and Marc Erich Latoschik},
number = {3},
url = {https://doi.org/10.1145/3641825.3687713},
year = {2024},
booktitle = {Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {VRST '24},
doi = {10.1145/3641825.3687713},
title = {The Influence of a Low-Resolution Peripheral Display Extension on the Perceived Plausibility and Presence}
}
Abstract:
The Field of View (FoV) is a central technical display characteristic of Head-Mounted Displays (HMDs), which has been shown to have a notable impact on important aspects of the user experience. For example, an increased FoV has been shown to foster a sense of presence and improve peripheral information processing, but it also increases the risk of VR sickness. This article investigates the impact of a wider but inhomogeneous FoV on the perceived plausibility, measuring its effects on presence, spatial presence, and VR sickness as a comparison to and replication of effects from prior work. We developed a low-resolution peripheral display extension to pragmatically increase the FoV, taking into account the lower peripheral acuity of the human eye. While this design results in inhomogeneous resolutions of HMDs at the display edges, it is also a low-complexity and low-cost extension. However, its effects on important VR qualities have to be identified. We conducted two experiments with 30 and 27 participants, respectively. In a randomized 2x3 within-subject design, participants played three rounds of bowling in VR, both with and without the display extension. Two rounds contained incongruencies to induce breaks in plausibility. In experiment 2, we enhanced one incongruency to make it more noticeable and improved the shortcomings of the display extension that had previously been identified. However, in neither study did the low-resolution FoV extension show a measurable effect on perceived plausibility, presence, spatial presence, or VR sickness. We found that one of the incongruencies could cause a break in plausibility without the extension, confirming the results of a previous study.
Sophia Maier, Sebastian Oberdörfer, Marc Erich Latoschik,
Ballroom Dance Training with Motion Capture and Virtual Reality, In
Proceedings of Mensch Und Computer 2024 (MuC '24), pp. 617-621. New York, NY, USA:
Association for Computing Machinery,
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{maier2024ballroom,
author = {Sophia Maier and Sebastian Oberdörfer and Marc Erich Latoschik},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2024-muc-ballroom-dance-training-with-motion-capture-and-virtual-reality-preprint.pdf},
year = {2024},
booktitle = {Proceedings of Mensch Und Computer 2024 (MuC '24)},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
pages = {617-621},
doi = {10.1145/3670653.3677499},
title = {Ballroom Dance Training with Motion Capture and Virtual Reality}
}
Abstract:
This paper investigates the integration of motion capture and virtual reality (VR) technologies in competitive ballroom dancing (slow waltz, tango, slow foxtrot, Viennese waltz, quickstep), aiming to analyze posture correctness and provide feedback to dancers for posture enhancement. Through qualitative interviews, the study identifies specific requirements and gathers insights into potentially helpful feedback mechanisms. Using Unity and motion capture technology, we implemented a prototype system featuring real-time visual cues for posture correction and a replay function for analysis. A validation study with competitive ballroom dancers reveals generally positive feedback on the system’s usefulness, though challenges like cable obstruction and poor usability of the user interface are noted. Insights from participants inform future refinements, emphasizing the need for precise feedback, cable-free movement, and user-friendly interfaces. While the program is promising for ballroom dance training, further research is needed to evaluate the system’s overall efficacy.
Samantha Monty, Florian Kern, Marc Erich Latoschik,
Analysis of Immersive Mid-Air Sketching Behavior, Sketch Quality, and User Experience in Design Ideation Tasks, In
23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
IEEE Computer Society,
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{monty2024,
author = {Samantha Monty and Florian Kern and Marc Erich Latoschik},
url = {https://ieeexplore.ieee.org/document/10765456},
year = {2024},
booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
publisher = {IEEE Computer Society},
doi = {10.1109/ISMAR62088.2024.00041},
title = {Analysis of Immersive Mid-Air Sketching Behavior, Sketch Quality, and User Experience in Design Ideation Tasks}
}
Abstract:
Immersive 3D sketching systems empower users with tools to create sketches directly in the air around themselves, in all three dimensions, using only simple hand gestures. These sketching systems have the potential to greatly extend the interactive capabilities of immersive learning environments. The perceptual challenges of Virtual Reality (VR), however, combined with the ergonomic and cognitive challenges of creating mid-air 3D sketches reduce the effectiveness of immersive sketching used for problem-solving, reflection, and to capture fleeting ideas. We contribute to the understanding of the potential challenges of mid-air sketching systems in educational settings, where expression is valued higher than accuracy, and sketches are used to support problem-solving and to explain abstract concepts. We conducted an empirical study with 36 participants with different spatial abilities to investigate if the way that people sketch in mid-air is dependent on the goal of the sketch. We compare the technique, quality, efficiency, and experience of participants as they create 3D mid-air sketches in three different tasks. We examine how users approach mid-air sketching when the sketches they create serve to convey meaning and when sketches are merely reproductions of geometric models created by someone else. We found that in tasks aimed at expressing personal design ideas, between starting and ending strokes, participants moved their heads more and their controllers at higher velocities and created strokes in faster times than in tasks aimed at recreating 3D geometric figures. They reported feeling less time pressure to complete sketches but redacted a larger percentage of strokes. These findings serve to inform the design of creative virtual environments that support reasoning and reflection through mid-air sketching. With this work, we aim to strengthen the power of immersive systems that support mid-air 3D sketching by exploiting natural user behavior to assist users to more quickly and faithfully convey their meaning in sketches.
Christian Merz, Carolin Wienrich, Marc Erich Latoschik,
Does Voice Matter? The Effect of Verbal Communication and Asymmetry on the Experience of Collaborative Social XR, In
23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 1127-1136.
IEEE Computer Society,
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{merz2024voice,
author = {Christian Merz and Carolin Wienrich and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ismar-does-voice-matter-preprint.pdf},
year = {2024},
booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
publisher = {IEEE Computer Society},
pages = {1127-1136},
doi = {10.1109/ISMAR62088.2024.00129},
title = {Does Voice Matter? The Effect of Verbal Communication and Asymmetry on the Experience of Collaborative Social XR}
}
Abstract:
This work evaluates how the asymmetry of device configurations and verbal communication influence the user experience of social eXtended Reality (XR) for self-perception, other-perception, and task perception. We developed an application that enables social collaboration between two users with varying device configurations. We compare the conditions of one symmetric interaction, where both device configurations are Head-Mounted Displays (HMDs) with tracked controllers, with the conditions of one asymmetric interaction, where one device configuration is an HMD with tracked controllers and the other device configuration is a desktop screen with a mouse. In our study, 52 participants collaborated in a dyadic interaction on a sorting task while talking to each other. We compare our results to previous work that evaluated the same scenario without verbal communication. In line with prior research, self-perception is influenced by the immersion of the used device configuration and verbal communication. While co-presence was not affected by the device configuration or the inclusion of verbal communication, social presence was only higher for HMD configurations that allowed verbal communication. Task perception was hardly affected by the device configuration or verbal communication. We conclude that the device configuration in social XR is important for self-perception, with or without verbal communication. However, the results indicate that the device configuration only affects the qualities of social interaction in collaborative scenarios when verbal communication is enabled. To sum up, asymmetric collaboration maintains the high quality of self-perception and interaction for highly immersed users while still enabling the participation of less immersed users.
Jinghuai Lin, Christian Rack, Carolin Wienrich, Marc Erich Latoschik,
Usability, Acceptance, and Trust of Privacy Protection Mechanisms and Identity Management in Social Virtual Reality, In
23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
IEEE Computer Society,
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{lin2024usability,
author = {Jinghuai Lin and Christian Rack and Carolin Wienrich and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ismar-social-vr-identity-management-preprint.pdf},
year = {2024},
booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
publisher = {IEEE Computer Society},
doi = {10.1109/ISMAR62088.2024.00027},
title = {Usability, Acceptance, and Trust of Privacy Protection Mechanisms and Identity Management in Social Virtual Reality}
}
Abstract:
In social virtual reality (social VR), users are threatened by potential cybercrimes, such as identity theft, sensitive data breaches, and embodied harassment. These concerns are heightened by the increasing interest in the metaverse, the advancements in photorealistic 3D user reconstructions, and the rising incidents of online privacy violations. Designing secure social VR applications that protect users while enhancing their experience, acceptance and trust remains a challenge. This article investigates potential identity management solutions in social VR, and their impacts on usability and user acceptance. We developed a social VR prototype with novel and established countermeasures, including motion biometric verification, and conducted a study with 52 participants. Our findings reveal diverse preferences for identity management and underscore the importance of authenticity, autonomy, and reciprocity. Key findings include: passive verification is favored for pragmatic user experience, while active verification is preferred for its hedonic quality; continuous or periodic verification strengthens users’ confidence in their privacy; and while user awareness promotes authentic engagement, it may also diminish the willingness to disclose personal information. This research not only offers foundational insights into the evaluated scenarios and countermeasures, but also sheds light on the designs of more trustworthy and inclusive social VR applications.
Andreas Halbig, Marc Erich Latoschik,
Common Cues? Toward the Relationship of Spatial Presence and the Sense of Embodiment, In
23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 1117-1126. Los Alamitos, CA, USA:
IEEE Computer Society,
2024.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{halbig2024common,
author = {Andreas Halbig and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ISMAR-halbig-common-cues.pdf},
year = {2024},
booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
pages = {1117-1126},
doi = {10.1109/ISMAR62088.2024.00128},
title = {Common Cues? Toward the Relationship of Spatial Presence and the Sense of Embodiment}
}
Abstract:
The sense of presence and the sense of embodiment are two fundamental qualia, pivotal to many virtual reality experiences. Empirical research indicates a notable interdependence between these two qualia, where manipulations designed to affect one often exhibit a concurrent influence on the other. Existing theories on the development of qualia in virtual reality make no or only insufficient statements on this deep interdependence. In this work, we present a novel theoretical perspective on this connection. Based on existing theories, we argue that all the fundamental cues influencing one quale have the potential to impact the other one too. We present three studies (n = 42, n = 42, n = 32) that generally support this novel perspective. Among other things, they show that traditional spatial presence cues such as head-tracking and passive depth cues (stereoscopy, linear perspective, etc.) can potentially also serve as embodiment cues. Conversely, they show that typical embodiment cues such as the visuotactile and visuoproprioceptive synchrony of a virtual hand are also spatial presence cues. The cues only differ in terms of how strongly they influence the respective quale. This novel perspective not only enhances our understanding of fundamental mechanics of virtual reality, but it can also guide the development of more effective measurement instruments.
2023
Nina Döllinger, Erik Wolf, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
Are Embodied Avatars Harmful to our Self-Experience? The Impact of Virtual Embodiment on Body Awareness, In
2023 CHI Conference on Human Factors in Computing Systems, pp. 1-14.
2023. Honorable Mention 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{dollinger2023embodied,
author = {Nina Döllinger and Erik Wolf and Mario Botsch and Marc Erich Latoschik and Carolin Wienrich},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-chi-virtual-mirrors-body-awareness-preprint.pdf},
year = {2023},
booktitle = {2023 CHI Conference on Human Factors in Computing Systems},
pages = {1-14},
doi = {10.1145/3544548.3580918},
title = {Are Embodied Avatars Harmful to our Self-Experience? The Impact of Virtual Embodiment on Body Awareness}
}
Abstract:
Virtual Reality (VR) allows us to replace our visible body with a virtual self-representation (avatar) and to explore its effects on our body perception. While the feeling of owning and controlling a virtual body is widely researched, how VR affects the awareness of internal body signals (body awareness) remains open. Forty participants performed moving meditation tasks in reality and VR, either facing their mirror image or not. Both the virtual environment and avatars photorealistically matched their real counterparts. We found a negative effect of VR on body awareness, mediated by feeling embodied in and changed by the avatar. Further, we revealed a negative effect of a mirror on body awareness. Our results indicate that assessing body awareness should be essential in evaluating VR designs and avatar embodiment aimed at mental health, as even a scenario as close to reality as possible can distract users from their internal body signals.
Franziska Westermeier, Larissa Brübach, Marc Erich Latoschik, Carolin Wienrich,
Exploring Plausibility and Presence in Mixed Reality Experiences, In
IEEE Transactions on Visualization and Computer Graphics, Vol. 29(5), pp. 2680-2689.
2023. IEEE VR Best Paper Nominee 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{westermeier2023exploring,
author = {Franziska Westermeier and Larissa Brübach and Marc Erich Latoschik and Carolin Wienrich},
journal = {IEEE Transactions on Visualization and Computer Graphics},
number = {5},
url = {https://ieeexplore.ieee.org/document/10049710},
year = {2023},
pages = {2680-2689},
volume = {29},
doi = {10.1109/TVCG.2023.3247046},
title = {Exploring Plausibility and Presence in Mixed Reality Experiences}
}
Abstract:
Mixed Reality (MR) applications along Milgram's Reality-Virtuality (RV) continuum motivated a number of recent theories on potential constructs and factors describing MR experiences. This paper investigates the impact of incongruencies on the sensation/perception and cognition layers to provoke breaks in plausibility, and the effects these breaks have on spatial and overall presence as prominent constructs of Virtual Reality (VR). We developed a simulated maintenance application to test virtual electrical devices. Participants performed test operations on these devices in a counterbalanced, randomized 2x2 between-subject design in either VR as congruent, or Augmented Reality (AR) as incongruent on the sensation/perception layer. Cognitive incongruency was induced by the absence of traceable power outages, decoupling perceived cause and effect after activating potentially defective devices. Our results indicate significant differences in the plausibility ratings between the VR and AR conditions, hence between congruent/incongruent conditions on the sensation/perception layer. In addition, spatial presence revealed a comparable interaction pattern with the VR vs AR conditions. Both factors decreased for the AR condition (incongruent sensation/perception) compared to VR (congruent sensation/perception) for the congruent cognitive case but increased for the incongruent cognitive case. The results are discussed and put into perspective in the scope of recent theories of MR experiences.
Florian Kern, Florian Niebling, Marc Erich Latoschik,
Text Input for Non-Stationary XR Workspaces: Investigating Tap and Word-Gesture Keyboards in Virtual and Augmented Reality, In
IEEE Transactions on Visualization and Computer Graphics, pp. 2658--2669.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{kern2023input,
author = {Florian Kern and Florian Niebling and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics},
url = {https://ieeexplore.ieee.org/document/10049665/},
year = {2023},
pages = {2658--2669},
doi = {10.1109/TVCG.2023.3247098},
title = {Text Input for Non-Stationary XR Workspaces: Investigating Tap and Word-Gesture Keyboards in Virtual and Augmented Reality}
}
Abstract:
This article compares two state-of-the-art text input techniques between non-stationary virtual reality (VR) and video see-through augmented reality (VST AR) use-cases as XR display conditions. The developed contact-based mid-air virtual tap and word-gesture (swipe) keyboards provide established support functions for text correction, word suggestions, capitalization, and punctuation. A user evaluation with 64 participants revealed that XR displays and input techniques strongly affect text entry performance, while subjective measures are only influenced by the input techniques. We found significantly higher usability and user experience ratings for tap keyboards compared to swipe keyboards in both VR and VST AR. Task load was also lower for tap keyboards. In terms of performance, both input techniques were significantly faster in VR than in VST AR. Further, the tap keyboard was significantly faster than the swipe keyboard in VR. Participants showed a significant learning effect with only ten sentences typed per condition. Our results are consistent with previous work in VR and optical see-through (OST) AR, but additionally provide novel insights into usability and performance of the selected text input techniques for VST AR. The significant differences in subjective and objective measures emphasize the importance of specific evaluations for each possible combination of input techniques and XR displays to provide reusable, reliable, and high-quality text input solutions. With our work, we form a foundation for future research and XR workspaces. Our reference implementation is publicly available to encourage replicability and reuse in future XR workspaces.
Peter Kullmann, Timo Menzel, Mario Botsch, Marc Erich Latoschik,
An Evaluation of Other-Avatar Facial Animation Methods for Social VR, In
Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, pp. 1--7. New York, NY, USA:
Association for Computing Machinery,
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{kullmann2023facialExpressionComparison,
author = {Peter Kullmann and Timo Menzel and Mario Botsch and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-chi-Kullmann-An-Evaluation-of-Other-Avatar-Facial-Animation-Methods-for-Social-VR.pdf},
year = {2023},
booktitle = {Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {CHI EA '23},
pages = {1--7},
doi = {10.1145/3544549.3585617},
title = {An Evaluation of Other-Avatar Facial Animation Methods for Social VR}
}
Abstract:
We report a mixed-design study on the effect of facial animation method (static, synthesized, or tracked expressions) and its synchronization to speaker audio (in sync or delayed by the method’s inherent latency) on an avatar’s perceived naturalness and plausibility. We created a virtual human for an actor and recorded his spontaneous half-minute responses to conversation prompts. As a simulated immersive interaction, 44 participants unfamiliar with the actor observed and rated performances rendered with the avatar, each with the different facial animation methods. Half of them observed performances in sync and the others with the animation method’s latency. Results show audio synchronization did not influence ratings and static faces were rated less natural and less plausible than animated faces. Notably, synthesized expressions were rated as more natural and more plausible than tracked expressions. Moreover, ratings of verbal behavior naturalness differed in the same way. We discuss implications of these results for avatar-mediated communication.
Larissa Brübach, Franziska Westermeier, Carolin Wienrich, Marc Erich Latoschik,
A Systematic Evaluation of Incongruencies and Their Influence on Plausibility in Virtual Reality, In
2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 894-901.
IEEE Computer Society,
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{brubach2023systematic,
author = {Larissa Brübach and Franziska Westermeier and Carolin Wienrich and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-ismar-bruebach-a-systematic-evaluation-of-incongruencies-preprint.pdf},
year = {2023},
booktitle = {2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
publisher = {IEEE Computer Society},
series = {ISMAR '23},
pages = {894-901},
doi = {10.1109/ISMAR59233.2023.00105},
title = {A Systematic Evaluation of Incongruencies and Their Influence on Plausibility in Virtual Reality}
}
Abstract:
Currently, there is an ongoing debate about the influencing factors of one's extended reality (XR) experience. Plausibility, congruence, and their role have recently gained more and more attention. One of the latest models to describe XR experiences, the Congruence and Plausibility model (CaP), puts plausibility and congruence right in the center. However, it is unclear what influence they have on the overall XR experience and what influences our perceived plausibility rating. In this paper, we implemented four different incongruencies within a virtual reality scene using breaks in plausibility as an analogy to breaks in presence. These manipulations were either located on the cognitive or perceptual layer of the CaP model. They were also either connected to the task at hand or not. We tested these manipulations in a virtual bowling environment to see what influence they had. Our results show that manipulations connected to the task caused a lower perceived plausibility. Additionally, cognitive manipulations seem to have a larger influence than perceptual manipulations. We were able to cause a break in plausibility with one of our incongruencies. These results provide a first direction for how the influence of plausibility in XR can be systematically investigated in the future.
Franziska Westermeier, Larissa Brübach, Carolin Wienrich, Marc Erich Latoschik,
A Virtualized Augmented Reality Simulation for Exploring Perceptual Incongruencies, In
Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology. New York, NY, USA:
Association for Computing Machinery,
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{westermeier2023virtualized,
author = {Franziska Westermeier and Larissa Brübach and Carolin Wienrich and Marc Erich Latoschik},
url = {https://doi.org/10.1145/3611659.3617227},
year = {2023},
booktitle = {Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {VRST '23},
doi = {10.1145/3611659.3617227},
title = {A Virtualized Augmented Reality Simulation for Exploring Perceptual Incongruencies}
}
Abstract:
When blending virtual and physical content, certain incongruencies emerge from hardware limitations, inaccurate tracking, or different appearances of virtual and physical content. They prevent us from perceiving virtual and physical content as one experience. Hence, it is crucial to investigate these issues to determine how they influence our experience. We present a virtualized augmented reality simulation that can systematically examine single incongruencies or different configurations of them.
Florian Kern, Marc Erich Latoschik,
Reality Stack I/O: A Versatile and Modular Framework for Simplifying and Unifying XR Applications and Research, In
2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), pp. 74-76.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@inproceedings{10322199,
author = {Florian Kern and Marc Erich Latoschik},
url = {https://ieeexplore.ieee.org/document/10322199},
year = {2023},
booktitle = {2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
pages = {74-76},
doi = {10.1109/ISMAR-Adjunct60411.2023.00023},
title = {Reality Stack I/O: A Versatile and Modular Framework for Simplifying and Unifying XR Applications and Research}
}
Abstract:
This paper introduces Reality Stack I/O (RSIO), a versatile and modular framework designed to facilitate the development of extended reality (XR) applications. Researchers and developers often spend a significant amount of time enabling cross-device and cross-platform compatibility, leading to delays and increased complexity. RSIO provides the essential features to simplify and unify the development of XR applications. It enhances cross-device and cross-platform compatibility, expedites integration, and allows developers to focus more on building XR experiences rather than device integration. We offer a public Unity reference implementation with examples.
Carolin Wienrich, Jana Krauss, Lukas Polifke, Viktoria Horn, Arne Bürger, Marc-Erich Latoschik,
Harnessing the Potential of the Metaverse to Counter its Dangers.
2023.
[BibTeX]
[BibSonomy]
@misc{wienrich2023harnessing,
author = {Carolin Wienrich and Jana Krauss and Lukas Polifke and Viktoria Horn and Arne Bürger and Marc-Erich Latoschik},
year = {2023},
title = {Harnessing the Potential of the Metaverse to Counter its Dangers}
}
Felix Sittner, Oliver Hartmann, Sergio Montenegro, Jan-Philipp Friese, Larissa Brübach, Marc Erich Latoschik, Carolin Wienrich,
An Update on the Virtual Mission Control Room, In
2023 Small Satellite Conference. Utah State University, Logan, UT.
2023.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@proceedings{sittner2023update,
author = {Felix Sittner and Oliver Hartmann and Sergio Montenegro and Jan-Philipp Friese and Larissa Brübach and Marc Erich Latoschik and Carolin Wienrich},
url = {https://digitalcommons.usu.edu/smallsat/2023/all2023/193/},
year = {2023},
booktitle = {2023 Small Satellite Conference},
address = {Utah State University, Logan, UT},
title = {An Update on the Virtual Mission Control Room}
}
Abstract:
In 2021 we presented the Virtual Mission Control Room (VMCR) on the verge of evolving from a fun educational project into a testing ground for remote cooperative mission control. Since then, we successfully participated in ESA's 2022 campaign "New ideas to make XR a reality", which granted us additional funding to improve the VMCR software and conduct usability testing in cooperation with the Chair of Human-Computer Interaction. In this paper and the corresponding poster session we give an update on the current state of the project, the new features, and the project structure. We explain the changes suggested by early test users and ESA to make operators feel more at home in the virtual environment. Subsequently, our project partners present their first suggestions for improvements to the VMCR as well as their plans for user testing. We conclude with lessons learned and a look ahead into our plans for the future of the project.
2022
Larissa Brübach, Franziska Westermeier, Carolin Wienrich, Marc Erich Latoschik,
Breaking Plausibility Without Breaking Presence - Evidence For The Multi-Layer Nature Of Plausibility, In
IEEE Transactions on Visualization and Computer Graphics, Vol. 28(5), pp. 2267-2276.
2022. IEEE VR Best Journal Paper Nominee 🏆
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
[Doi]
@article{9714117,
author = {Larissa Brübach and Franziska Westermeier and Carolin Wienrich and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics},
number = {5},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-ieeevr-breaking-plausibility-without-breaking-presence.pdf},
year = {2022},
pages = {2267-2276},
volume = {28},
doi = {10.1109/TVCG.2022.3150496},
title = {Breaking Plausibility Without Breaking Presence - Evidence For The Multi-Layer Nature Of Plausibility}
}
Abstract:
A novel theoretical model recently introduced coherence and plausibility as the essential conditions of XR experiences, challenging contemporary presence-oriented concepts. This article reports on two experiments validating this model, which assumes coherence activation on three layers (cognition, perception, and sensation) as the potential sources leading to a condition of plausibility and from there to other XR qualia such as presence or body ownership. The experiments introduce and utilize breaks in plausibility (in analogy to breaks in presence): We induce incoherence on the perceptual and the cognitive layer simultaneously by a simulation of object behaviors that do not conform to the laws of physics, i.e., gravity. We show that this manipulation breaks plausibility and hence confirm that it results in the desired effects in the theorized condition space but that the breaks in plausibility did not affect presence. In addition, we show that a cognitive manipulation by a storyline framing is too weak to successfully counteract the strong bottom-up inconsistencies. Both results are in line with the predictions of the recently introduced three-layer model of coherence and plausibility, which incorporates well-known top-down and bottom-up rivalries and its theorized increased independence between plausibility and presence.
Andrea Bartl, Christian Merz, Daniel Roth, Marc Erich Latoschik,
The Effects of Avatar and Environment Design on Embodiment, Presence, Activation, and Task Load in a Virtual Reality Exercise Application, In
IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
2022.
[BibTeX]
[Abstract]
[Download]
[BibSonomy]
@inproceedings{bartl2022effects,
author = {Andrea Bartl and Christian Merz and Daniel Roth and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-ismar-ilast-avatar-environment-design-vr-exercise-application.pdf},
year = {2022},
booktitle = {IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
title = {The Effects of Avatar and Environment Design on Embodiment, Presence, Activation, and Task Load in a Virtual Reality Exercise Application}
}
Abstract:
The development of embodied Virtual Reality (VR) systems involves multiple central design choices. These design choices affect user perception and therefore require thorough consideration. This article reports on two user studies investigating the influence of common design choices on relevant intermediate factors (sense of embodiment, presence, motivation, activation, and task load) in a VR application for physical exercises. The first study manipulated the avatar fidelity (abstract, partial body vs. anthropomorphic, full-body) and the environment (with vs. without mirror). The second study manipulated the avatar type (healthy vs. injured) and the environment type (beach vs. hospital) and, hence, the avatar-environment congruence. The full-body avatar significantly increased the sense of embodiment and decreased mental demand. Interestingly, the mirror did not influence the dependent variables. The injured avatar significantly increased the temporal demand. The beach environment significantly reduced tense activation. On the beach, participants felt more present in the incongruent condition embodying the injured avatar.