ICAT-EGVE2024
Browsing ICAT-EGVE2024 by Title
Now showing 1 - 20 of 27
Item: Analysis of Tennis Forehand Technique using Machine Learning (The Eurographics Association, 2024)
Kán, Peter; Gerstweiler, Georg; Sebernegg, Anna; Kaufmann, Hannes; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
Analysis of human motion is instrumental in many areas, including sports, arts, and rehabilitation. This paper presents a novel method for human motion analysis with a focus on tennis training and forehand technique assessment. We address the problems of automatic motion analysis and incorrect technique identification with a machine learning approach. We utilize the concept of training rules that are used to individually assess specific aspects of a given type of motion. Our method for motion analysis is based on insights from professional trainers, and our training rules are co-designed with them. The presented method is evaluated quantitatively using a recorded dataset of tennis forehand motions. This evaluation compares two variants of sport technique correctness classification: informed and uninformed learning. Both learning variants fall into the category of supervised learning, but informed learning additionally utilizes motion features and motion phases derived from tennis training methodology. Our experiments suggest that informed learning leads to higher accuracy and a faster algorithm. Finally, we evaluated our method in a qualitative expert study.

Item: An Asymmetric Multiplayer Augmented Reality Game with Spatial Sharing of a Physical Environment (The Eurographics Association, 2024)
Sawanobori, Yuki; Iriyama, Taishi; Komuro, Takashi; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
In this paper, we propose a competitive game in which a player wearing an augmented reality (AR) head-mounted display (HMD) and a player not wearing an HMD share not only a virtual environment but also the structure of a physical environment. Through the proposed game, we explore interaction between players in an online multiplayer game using an AR HMD that is enjoyable and offers a high social presence. For this exploration, we created a game design that actively utilizes the physical environment and the asymmetry between players wearing and not wearing an HMD. We implemented the designed game and conducted a user study (n=14) to evaluate it using the Game Experience Questionnaire and our own questionnaire. The results revealed that players had a highly positive affect toward the game and showed a high social presence. We also obtained insights into how to make the game more interesting and increase players' social presence.

Item: BlendPCR: Seamless and Efficient Rendering of Dynamic Point Clouds captured by Multiple RGB-D Cameras (The Eurographics Association, 2024)
Mühlenbrock, Andre; Weller, Rene; Zachmann, Gabriel; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
Traditional techniques for rendering continuous surfaces from dynamic, noisy point clouds captured by multi-camera setups often suffer from disruptive artifacts in overlapping areas, similar to z-fighting. We introduce BlendPCR, an advanced rendering technique that effectively addresses these artifacts through a dual approach of point cloud processing and screen-space blending. Additionally, we present a UV coordinate encoding scheme to enable high-resolution texture mapping via standard camera SDKs. We demonstrate that our approach offers superior visual rendering quality over traditional splat- and mesh-based methods and exhibits no artifacts in overlapping areas, which still occur in leading-edge NeRF and Gaussian Splat based approaches such as Pointersect and P2ENet. In practical tests with seven Microsoft Azure Kinects, processing, including uploading the point clouds to the GPU, requires only 13.8 ms (when using one color per point) or 29.2 ms (when using high-resolution color textures), and rendering at a resolution of 3580 x 2066 takes just 3.2 ms, proving its suitability for real-time VR applications.
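The abstract above does not spell out how its screen-space blending step works. As a loose illustration of the general idea only (not the authors' BlendPCR implementation), the following NumPy sketch composites per-camera renderings with per-pixel weights; the array layout, the weighting scheme, and the function name blend_screen_space are all assumptions.

```python
# Minimal NumPy sketch of screen-space blending across multiple RGB-D cameras.
# Illustrative only, NOT the authors' BlendPCR implementation: the weighting
# scheme and array layout are assumptions.
import numpy as np

def blend_screen_space(colors: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Blend per-camera renderings in screen space.

    colors  -- (num_cams, H, W, 3) color image rendered from each camera's point cloud
    weights -- (num_cams, H, W) per-pixel confidence, e.g. lower near depth edges
    Returns a single (H, W, 3) composited image.
    """
    w = weights[..., None]                        # (num_cams, H, W, 1)
    total = w.sum(axis=0)                         # (H, W, 1)
    blended = (colors * w).sum(axis=0) / np.maximum(total, 1e-6)
    return np.where(total > 0, blended, 0.0)      # leave uncovered pixels black

# Toy usage with random data standing in for per-camera renderings.
rng = np.random.default_rng(0)
colors = rng.random((7, 480, 640, 3))
weights = rng.random((7, 480, 640))
image = blend_screen_space(colors, weights)
print(image.shape)  # (480, 640, 3)
```

In a real pipeline the weights would typically come from view angle or distance to depth edges, and the blend would run in a fragment shader rather than NumPy.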
Item: Character-Voice Embodiment Impacts on the Cognitive Task Performance with the Voice Ownership Illusion. (The Eurographics Association, 2024)
Kunimi, Yusuke; Kimura, Kenta; Matsumoto, Keigo; Takamichi, Shinnosuke; Narumi, Takuji; Mochimaru, Masaaki; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
Embodying a voice quality different from one's innate voice through real-time voice conversion has attracted attention as a way to enhance cognitive abilities and manipulate emotions during social interaction in physical activity. Past research has shown that embodying voice qualities that evoke specific stereotypes can induce a variety of cognitive and emotional effects. However, such an approach has been criticized because its active use of stereotypes reinforces stereotypes about certain groups within society. In contrast, the use of images of well-known characters from stories has the potential to influence thinking and behavior without reinforcing stereotypes of specific social groups. This paper investigates the impact of voice conversion to an animation character's voice quality on attitude, behavior, and personality. The results show that animation-character-based voice conversion enhanced planning ability in line with the social image of the character.

Item: Conversational Agent for Procedural Building Design in Virtual Reality (The Eurographics Association, 2024)
Bosco, Matteo; Kán, Peter; Kaufmann, Hannes; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
With the emergence of large language models (LLMs), conversational agents have gained significant attention across various domains, including virtual reality (VR). This paper investigates the use of conversational agents as an interface for procedural building design in VR. We propose a voice interface that allows a user to control parameters of procedural generation and gain insights about building construction metrics through natural conversation. The pipeline introduced for the conversational agent uses LLMs in two separate API calls for natural language understanding and natural language generation. This separation enables the invocation of various actions in procedural generation as well as meaningful agent responses to building-related questions. Furthermore, we conducted a user study to assess our proposed conversational interface in comparison to a traditional graphical user interface (GUI) in a VR architectural design task focused on circular economy. The study scrutinizes the user-reported usability, presence, realism, errors, and effectiveness of both interfaces. Results suggest that while the non-embodied conversational agent enhances effectiveness due to its explanatory capabilities, it surprisingly decreases realism compared to the GUI. Overall, the preference between the conversational agent and the GUI varied greatly among participants, highlighting the need for further research into the evolving shift towards speech interaction in VR.
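The abstract above describes a pipeline with two separate LLM API calls, one for natural language understanding and one for natural language generation. The sketch below shows what such a split might look like in Python; call_llm is a hypothetical placeholder for whatever chat-completion API is actually used, and the prompts, JSON schema, and parameter handling are invented for illustration.

```python
# Illustrative sketch of a two-call LLM pipeline (separate NLU and NLG calls),
# loosely following the abstract's description. `call_llm` is a hypothetical
# stand-in for a real chat-completion API; prompts and the parameter schema
# are assumptions, not the paper's implementation.
import json

def call_llm(system_prompt: str, user_message: str) -> str:
    """Hypothetical stand-in for a chat-completion request; returns raw model text."""
    raise NotImplementedError("wire up your LLM provider here")

def handle_utterance(utterance: str, building_params: dict) -> str:
    # Call 1 (NLU): map the spoken request onto procedural-generation parameters.
    nlu_prompt = (
        "Extract a JSON object {\"action\": ..., \"parameter\": ..., \"value\": ...} "
        f"from the user's request. Known parameters: {list(building_params)}."
    )
    command = json.loads(call_llm(nlu_prompt, utterance))
    if command.get("action") == "set" and command.get("parameter") in building_params:
        building_params[command["parameter"]] = command["value"]

    # Call 2 (NLG): produce a conversational reply grounded in the current state.
    nlg_prompt = (
        "You are a voice assistant for procedural building design. "
        "Answer briefly, referring to the current parameters."
    )
    state = json.dumps({"params": building_params, "last_command": command})
    return call_llm(nlg_prompt, f"User said: {utterance}\nState: {state}")
```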
Item: Do we study an archaeological artifact differently in VR and in reality? (The Eurographics Association, 2024)
Dumonteil, Maxime; Gouranton, Valérie; Macé, Marc; Nicolas, Théophane; Gaugne, Ronan; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
The use of virtual reality (VR) in archaeological research is increasing year over year. Nevertheless, the influence of VR technologies on researchers' perception and interpretation is frequently overlooked. These device-induced biases require careful consideration and mitigation strategies to ensure the integrity and reliability of archaeological research results. Our aim is to identify potential interpretation biases introduced by the use of VR tools in this field by analyzing both eye-tracking patterns and participants' behavior. We designed an experimental protocol for a user study involving an analysis task on a corpus of archaeological artifacts across different modalities: a real environment and two virtual environments, one using a head-mounted device and the other an immersive room. The aim of this experiment is to compare participants' behavior (head movements, gaze patterns, and task performance) across the three modalities. The main contribution of this work is a methodology for generating comparable and consistent results from the data recorded during the experiment in the three different contexts. The results highlight a number of points to watch out for when using VR in archaeology for analysis and interpretive purposes.

Item: Effect of Physical Extension on the Range of Demonstrative Indicators by Wearing Non-Humanoid Avatars with Different Looks (The Eurographics Association, 2024)
Yamada, Takayoshi; Horii, Moeki; Ebihara, Tadashi; Wakatsuki, Naoto; Zempo, Keiichi; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
Users can interact in virtual reality (VR) spaces through avatars that differ markedly from their real-world looks. These avatars can be customized to any appearance and size, whether they are based on real entities or are entirely fictitious, and they include non-humanoid avatars as well. Some non-humanoid avatars do not have hands, in which case they cannot make references using gestures. The interlocutor must then determine the referenced object from the direction of the referring user's gaze and from context. Given the impact of avatar characteristics on the visual communication process of joint attention among users, it is essential to elucidate the connection between avatar traits and the range of reference in order to facilitate smooth interaction. In this study, the influence of avatar looks on the referential range of demonstrative indicators was investigated. Experiments were conducted in VR spaces using avatars of different appearances and sizes, with the aim of understanding how these differences affect the ability to refer to objects using both distal and proximal indicators. Specifically, the study aimed to identify the transition point from the proximal to the distal referential field for each type of avatar. This research seeks to deepen the understanding of how avatars, as proxies for humans in VR spaces, influence communication dynamics.
Looking forward, it is anticipated that the findings will enhance the VR experience by improving referential communication among avatars of diverse appearances and sizes. This enhancement is expected to foster richer user interactions, thereby contributing to the future growth of the VR market.

Item: Empathy in Virtual Agents: How Emotional Expressions can Influence User Perception (The Eurographics Association, 2024)
Rings, Sebastian; Schmidt, Susanne; Janßen, Julia; Lehmann-Willenbrock, Nale; Steinicke, Frank; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
With advancements in natural language processing, intelligent virtual agents (IVAs) are increasingly integrated into various sectors such as education, customer service, personal assistance, and healthcare. Medical counseling and digital therapy, fields that require trusting relationships between patient and practitioner, profit immensely from the use of IVAs. A key component of these social relationships is empathy, which helps build trust and understanding. This paper investigates whether simulated empathy through emotional expressions in IVAs can improve interactions and influence users' emotional contagion. Additionally, it explores the correlation between self-reported empathy and users' expressiveness. Participants alternate reading an emotional story with a virtual agent (VA) that mirrors the users' emotional expressions in one condition while remaining neutral in the baseline condition. The results show that simulated emotions can prompt participants to produce more facial expressions in response to the VA's, and that this effect correlates with users' self-reported empathy.

Item: Examining the Effects of Teleportation on Semantic Memory of a Virtual Museum Compared to Natural Walking (The Eurographics Association, 2024)
Choudhary, Zubin Datta; Battistel, Laura; Syamil, Raiffa; Furuya, Hiroshi; Argelaguet, Ferran; Bruder, Gerd; Welch, Greg; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
Over the past decades there has been extensive research investigating the trade-offs between various Virtual Reality (VR) locomotion techniques. One of the most highly researched techniques is teleportation, due to its ability to quickly traverse large virtual spaces even in limited physical tracking spaces. The majority of teleportation research has focused on its effects on spatial cognition, such as spatial understanding and retention. However, relatively little is known about whether the use of teleportation in immersive learning experiences can affect the acquisition of semantic knowledge - our knowledge about facts, concepts, and ideas - which is essential for long-term learning. In this paper we present a human-subjects study investigating the effects of teleportation compared to natural walking on the retention of semantic information about artifacts in a virtual museum. Participants visited unique 3D artifacts accompanied by audio clips and artifact names. Our results show that participants reached the same semantic memory performance with both locomotion techniques but with different behaviors, self-assessed performance, and preferences. In particular, participants subjectively indicated that they felt they recalled more semantic memory with walking than with teleportation. However, objectively, they spent more time with the artifacts while walking, meaning that they learned less per unit of time than with teleportation.
We discuss the relationships, implications, and guidelines for VR experiences designed to help users acquire new knowledge.

Item: An Exploration of the Effects of in-VR Assessment Format on User Performance and Experience (The Eurographics Association, 2024)
Acevedo, Pedro; Jimenez, Angela L.; Magana, Alejandra J.; Benes, Bedrich; Mousas, Christos; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
For virtual reality (VR) training and learning applications, post-intervention assessment serves as a means to validate the effectiveness of the designed practice. These assessments can occur in the virtual environment by embedding questionnaires and the necessary response mechanisms. Researchers have explored embedded VR (in-VR) assessment to minimize disruption to immersion and interference with the user's sense of presence compared to 2D screen-based (out-VR) surveys. However, the influence of in-VR assessment formats on user experience and performance still needs to be explored. Therefore, we conducted a within-group study (N = 25) to compare three assessment formats on task load, usability, user experience, self-efficacy, and performance metrics (i.e., completion time, movement, and response correctness). Using an educational application focused on charged particles and electric fields, we observed no significant differences in self-reported user experience metrics across the in-VR assessment formats. However, participants achieved higher scores when interacting with the 3DStatic assessment. This advantage of the 3DStatic format highlights the benefits of 3D visualizations in VR over traditional 2D user interfaces.

Item: Extension of Wearable Olfactory Display for Multisensory VR Experience (The Eurographics Association, 2024)
Zou, Zhe; Prasetyawan, Dani; Wu, Hsueh Han; Cheng, Kelvin; Nakamoto, Takamichi; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
An olfactory display is a device that allows users to experience a range of olfactory stimuli. Despite its potential to enhance user experience, challenges remain, including limited odor variety, unwanted odor persistence, constraints on continuous operation, and a restricted range of scent generation. We propose a novel wearable olfactory display that incorporates up to eight odor components to expand the variety of generated scents. Additionally, the device integrates an airflow control system, deodorant filtering, and optimized electrical and mechanical structures. This design aims to provide a more immersive user experience in virtual reality (VR) environments by significantly improving the generation of olfactory stimuli.

Item: High-Speed Vision-Based Haptic Sensor for Robotic Dermatological Palpation: Force Sensing Method Using Asymmetric Stiffness Coefficient Matrix (The Eurographics Association, 2024)
Kato, Fumihiro; Shi, Miaohui; Kamishima, Kaito; Iwata, Hiroyasu; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
In this paper, we propose a vision-based haptic sensor (VHS) capable of acquiring force at the speed required for softness discrimination during palpation, along with a force measurement algorithm. Palpation requires the simultaneous acquisition of surface imagery from the affected area and haptic information such as softness and surface texture. Additionally, the sensor must exhibit softness comparable to human skin to avoid causing discomfort to the patient. By designing a force sensor that tracks markers embedded in transparent gel using a camera, we enable the concurrent capture of visual and haptic data. An algorithm is also presented for calculating normal forces based on the extension of the markers in the image plane. Accurate force modeling was achieved by training a normal force estimation model using an asymmetric stiffness coefficient matrix, which effectively mitigates cross-talk effects. Furthermore, the process was optimized by employing sparse search techniques with narrow marker search ranges between frames during high-speed imaging, enabling rapid detection of circular force markers and force acquisition at 601.25 Hz. Compared to previous methods, the proposed approach offers higher measurement accuracy and speed within the force range required for palpation. It can measure at 500 Hz or higher, which is crucial for discriminating the five levels of softness important in dermatological palpation. The proposed haptic sensor therefore shows promise for use in robotic palpation.
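The abstract above mentions a normal-force estimation model based on an asymmetric stiffness coefficient matrix. As a rough, generic illustration of how such a matrix could be calibrated by least squares without imposing a symmetry constraint (not the authors' actual procedure), consider the following NumPy sketch; all data are synthetic and the feature definitions are assumptions.

```python
# Rough sketch of calibrating a stiffness coefficient matrix that maps per-marker
# displacements to per-marker normal forces, without forcing the matrix to be
# symmetric so that cross-talk between neighbouring markers can be modeled.
# This only illustrates the general idea of such a linear model; the actual
# calibration procedure and features used in the paper are not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
num_samples, num_markers = 500, 9

# Ground-truth matrix: strong diagonal (each marker's own stiffness) plus small,
# non-symmetric off-diagonal cross-talk terms.
K_true = np.eye(num_markers) * 5.0 + 0.3 * rng.normal(size=(num_markers, num_markers))

# Synthetic calibration data standing in for real measurements:
# D -- marker displacement features extracted from the camera images
# F -- reference normal forces at each marker (e.g., from a calibrated load cell)
D = rng.normal(size=(num_samples, num_markers))
F = D @ K_true.T + 0.01 * rng.normal(size=(num_samples, num_markers))

# Least-squares fit of F ≈ D @ K.T; no symmetry constraint is imposed on K.
K_fit, *_ = np.linalg.lstsq(D, F, rcond=None)
K_fit = K_fit.T

# Estimate forces for a new displacement reading.
d_new = rng.normal(size=num_markers)
f_est = K_fit @ d_new
print(np.round(f_est, 3))
```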
Item: ICAT-EGVE 2024 - International Conference on Artificial Reality and Telexistence and Eurographics Symposium on Virtual Environments (The Eurographics Association, 2024)
Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica

Item: The Influence of a Prop Mass on Task Performance in Virtual Reality (The Eurographics Association, 2024)
Thomesse, Lucas; Cauquis, Julien; Peillard, Etienne; Dominjon, Lionel; Duval, Thierry; Moreau, Guillaume; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
As Virtual Reality (VR) applications continue to develop, many questions persist regarding how to optimize user performance in virtual environments. Among the numerous variables that could influence performance, the mass of the props used within VR applications is particularly noteworthy. This paper therefore proposes a user study investigating the influence of the mass of a prop (a tool replica) on users' performance in a pointing task. A within-subject VR experiment was conducted with three differently weighted replicas to collect objective and subjective data from participants. Results suggest that the mass of the prop can influence task performance in terms of error-free selection time, number of errors, and subjective perceptions such as perceived difficulty and cognitive load. Indeed, performance was significantly better when using a lighter replica than a heavier one, and subjective user-experience-related metrics were also significantly improved with a light replica. These results help pave the way for additional research on user performance within virtual environments.

Item: Influence of Virtual Reality Setup on Locomotion Technique Usage during Navigation with Walking, Steering and Teleportation (The Eurographics Association, 2024)
Brument, Hugo; Zhang, Renate; Kaufmann, Hannes; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
Evaluations of Locomotion Techniques (LTs) provide information regarding the advantages and shortcomings of LTs for navigating in Virtual Reality (VR). While the primary approach is to assess LTs separately (e.g., comparing walking versus steering versus teleportation), little is known about how LTs are used when offered simultaneously (i.e., how users navigate when several options are available), especially in different VR setups. This paper investigates, for the first time, the influence of real and virtual environment size on LT usage during VR navigation. We conducted a user study (n=24) in which participants had to explore a virtual garden and pick up mushrooms.
Participants could choose to walk, steer, or teleport. We varied the size of the virtual environment as well as the size of the user's physical workspace. We found that users' LT usage depends on the VR setup. For instance, users tend to perform more of their displacements with teleportation (which was the favorite technique overall) but prefer to walk or steer when the virtual environment is the same size as the workspace. This work contributes to understanding user behavior in VR, particularly regarding LT usage, which tends to be an overlooked topic.

Item: Insights from an Experiment Investigating the Relationship between the Effect of Electrical Stimulation of the Ankle Tendons and the User's Biological Structure, Gender, or Age (The Eurographics Association, 2024)
Ota, Takashi; Kuzuoka, Hideaki; Amemiya, Tomohiro; Aoyama, Kazuma; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
The effects of transcutaneous electrical nerve stimulation (TENS) show individual differences in sensory presentation. These differences may stem from variations in the user's biological structure, including body size and skin condition. In particular, TENS of the lower limbs is assumed to be affected by differences in biological structure because the muscles of the lower limbs are larger than those of the upper limbs, and a certain number of people have more hair on the skin of the lower limbs than on the upper limbs. Identifying the factors that explain these individual differences in TENS is crucial for evaluating the potential applications of TENS and for developing appropriate research protocols in the future. In this study, we examined individual differences in the effects of TENS by focusing on tendon electrical stimulation of the ankle, a method that presents body tilt sensations. Specifically, we investigated the correlation between body tilt sensations and demographic (age, gender) or biostructure metrics (body weight, body fat percentage, etc.) in 28 experimental participants. The results revealed significant differences in the correct answer rate and the magnitude of body tilt sensations based on gender. Furthermore, there was a correlation between the correct answer rate or magnitude and the age of female participants at specific stimulation intensities. No biostructure metric in this study was sufficiently correlated with the correct answer rate or the magnitude of body tilt sensations.

Item: Learning-based Event-based Human Gaze Tracking with Blink Detection (The Eurographics Association, 2024)
Kanno, Mao; Isogawa, Mariko; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
This paper proposes an eye-tracking system using a CNN-LSTM network that utilizes only event data. This method holds potential for future applications in a wide range of fields, including AR/VR headsets, healthcare, and sports. Compared to traditional frame-based camera methods, our proposed approach achieves high FPS and low power consumption by utilizing event cameras. To improve estimation accuracy, our gaze estimation system incorporates blink detection, which was absent in existing systems. Our results show that our method achieves better performance than existing approaches.
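The abstract above names a CNN-LSTM network operating on event data with added blink detection. The PyTorch sketch below shows one generic way such an architecture can be wired up, with a per-frame CNN, an LSTM over time, and separate gaze and blink heads; the layer sizes, the two-channel event-frame representation, and the head design are assumptions rather than the authors' model.

```python
# Minimal PyTorch sketch of a CNN-LSTM that regresses gaze from event frames and
# also predicts a blink logit. Generic illustration of the architecture class
# named in the abstract, not the authors' network; all sizes are assumptions.
import torch
import torch.nn as nn

class EventGazeNet(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(                                   # per-frame feature extractor
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),    # 2 = on/off polarity channels
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.lstm = nn.LSTM(32 * 4 * 4, hidden, batch_first=True)   # temporal model
        self.gaze_head = nn.Linear(hidden, 2)                       # (x, y) gaze point
        self.blink_head = nn.Linear(hidden, 1)                      # blink logit

    def forward(self, events):                                      # events: (B, T, 2, H, W)
        b, t = events.shape[:2]
        feats = self.cnn(events.flatten(0, 1)).flatten(1).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.gaze_head(out), self.blink_head(out)

model = EventGazeNet()
frames = torch.randn(4, 10, 2, 64, 64)                              # toy batch of event-frame sequences
gaze, blink_logit = model(frames)
print(gaze.shape, blink_logit.shape)                                # (4, 10, 2) (4, 10, 1)
```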
Item: Preliminary Analysis of Emergency Vehicle Driving Behavior in Traffic Signal Violation Scenarios using a VR Simulator (The Eurographics Association, 2024)
Sudou, Takuma; Inoue, Sota; Yamaguchi, Shingo; Nagata, Shouhei; Yamazoe, Hirotake; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
This paper introduces our investigation into driving behaviors during emergency operations, such as entering an intersection on a red traffic light, to compare and analyze behavioral differences based on the driver's emergency driving experience and skill. As a preliminary step, we developed a VR-based simulator that replicates emergency driving scenarios, in consultation with firefighters. Drivers with varied levels of experience and skill in emergency driving executed emergency driving maneuvers using this VR simulator, during which the system recorded their behaviors, including eye movement, head orientation, throttle and brake positions, and steering angle. Our analysis revealed distinct behavioral differences based on experience and skill; for example, professional firefighters repeatedly decelerated, briefly stopped, and checked both left and right sides before entering the intersection.

Item: Priming and personality effects on the Sense of Embodiment for human and non-human avatars in Virtual Reality (The Eurographics Association, 2024)
Higgins, Darragh; McDonnell, Rachel; Normand, Jean-Marie; Fribourg, Rebecca; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
The increasingly widespread use of Virtual Reality (VR) technology necessitates a deeper understanding of virtual embodiment and its relationship to human subjectivity. Individual differences and primed perceptual associations that could influence the perception of one's virtual body remain incompletely explored. In the study outlined below, we exposed participants to human and non-human avatars, with half of the sample experiencing a concept primer beforehand. We also gathered measurements of subjective traits using the Multidimensional Assessment of Interoceptive Awareness and the Ten Item Personality Inventory. Results support previous work suggesting greater body ownership for human as opposed to non-human avatars, and suggest that concept priming could influence embodiment and state body mindfulness. Additionally, the results highlight an array of personality trait influences on embodiment and body mindfulness measures.

Item: Psychophysical Analysis of Delay Detection in a VR Avatar's Standing-up Motion (The Eurographics Association, 2024)
Olimov, Muhammadolim; Goto, Yuta; Okamoto, Shogo; Hasegawa, Shoichi; Sakata, Nobuchika; Sundstedt, Veronica
Discrepancies between an avatar's movements in virtual space and a participant's movements in the real world can degrade the quality of the virtual reality (VR) experience. One prominent form of such a discrepancy is delay. Many previous studies have investigated the acceptable delay between head tracking and landscape rendering, or the delay in the rendering of the user's own hand movements. However, the minimum detectable delay during full-body movements, particularly those involving significant changes in viewpoint, has not yet been fully investigated. In this study, we investigated the detection threshold for delays between participants' real-world movements, in which the head and viewpoint positions move substantially, and the corresponding avatar movements in virtual space.
In the experiment, participants wearing VR goggles performed stand-up motions. The corresponding stand-up motion of the avatar in the VR space was delayed by up to 300 ms. Participants observed the avatar's movements in a mirror placed in front of them in the VR space. The detection thresholds of five individuals were measured using the psychophysical method of constant stimuli: participants reported whether or not the avatar's movements appeared delayed compared with their own. The mean detection threshold, at which a participant reports the presence of delay 50% of the time, was found to be 129.70 ms, with a 95% confidence interval of 31.59 ms. These findings provide insights for designers of VR applications.
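The abstract above defines the threshold as the delay detected on 50% of trials under the method of constant stimuli. The sketch below shows how such a threshold is commonly read off by fitting a logistic psychometric function to the proportion of "delayed" responses; the delay levels and response proportions are invented for illustration and do not reproduce the study's data.

```python
# Sketch of estimating a 50% detection threshold from constant-stimuli data by
# fitting a logistic psychometric function. The data below are invented for
# illustration; only the procedure mirrors the abstract's description
# (threshold = delay reported as detected on 50% of trials).
import numpy as np
from scipy.optimize import curve_fit

def psychometric(delay_ms, threshold, slope):
    """Logistic function; equals 0.5 exactly at `threshold`."""
    return 1.0 / (1.0 + np.exp(-(delay_ms - threshold) / slope))

delays = np.array([0, 50, 100, 150, 200, 250, 300], dtype=float)   # tested delays (ms)
p_detect = np.array([0.05, 0.15, 0.40, 0.60, 0.80, 0.95, 1.00])    # fraction of "delayed" answers

(threshold, slope), _ = curve_fit(psychometric, delays, p_detect, p0=(150.0, 30.0))
print(f"estimated 50% detection threshold: {threshold:.1f} ms")
```

In practice this fit would be done per participant, with the reported mean threshold and confidence interval computed across the individual estimates.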