Virtual Reality (VR) sickness is often accompanied by symptoms such as nausea and dizziness, and a prominent theory explaining this phenomenon is sensory conflict theory. Recently, studies have used deep learning to classify VR sickness levels; however, there is a paucity of research on deep learning models that, following sensory conflict theory, utilize both visual information and motion data. In this paper, the authors propose a parallel-merged deep learning model (4bay) that classifies the level of VR sickness using the user's motion data (HMD and controller data) and visual data (rendered image and depth image), based on sensory conflict theory. The proposed model consists of a visual processing module, a motion processing module, and an FC-based VR sickness level classification module. Its performance was compared with that of models available at the time of design; the comparison confirmed that the proposed model classifies the user's VR sickness level better than both a single-input model and a two-input merged (2bay) model.
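To make the "parallel merging" idea concrete, the following is a minimal shape-level sketch, not the paper's actual architecture: four input streams (rendered image, depth image, HMD motion, controller motion) are each encoded into a feature vector, the four vectors are concatenated, and a fully connected (FC) head classifies the sickness level. All layer sizes, input shapes, and the number of classes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, out_dim=32):
    """Stand-in encoder: a fixed random linear projection with ReLU.
    In the real model, each bay would be a trained CNN or RNN branch."""
    w = rng.standard_normal((x.size, out_dim))
    return np.maximum(x.flatten() @ w, 0.0)

# Dummy inputs for one sample (shapes are assumptions, not from the paper).
rendered = rng.standard_normal((64, 64))  # rendered image  (visual bay 1)
depth    = rng.standard_normal((64, 64))  # depth image     (visual bay 2)
hmd      = rng.standard_normal(6)         # HMD motion data (motion bay 1)
ctrl     = rng.standard_normal(6)         # controller data (motion bay 2)

# Parallel merge: encode each bay independently, then concatenate.
features = np.concatenate(
    [encode(rendered), encode(depth), encode(hmd), encode(ctrl)]
)  # 4 bays x 32 dims = 128 dims

# FC classification head over an assumed 3 sickness levels.
w_fc = rng.standard_normal((features.size, 3))
logits = features @ w_fc
level = int(np.argmax(logits))
print(features.shape, level)
```

The point of the 4-bay layout is that each modality is processed by its own branch before fusion, so the visual and motion streams contribute separate feature vectors that the FC head can weigh against each other, mirroring the visual/vestibular mismatch posited by sensory conflict theory.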