13 research outputs found
Cybersickness and Its Severity Arising from Virtual Reality Content: A Comprehensive Study
Virtual reality (VR) experiences often elicit a negative effect, cybersickness, which results in nausea, disorientation, and visual discomfort. To quantitatively analyze the degree of cybersickness depending on various attributes of VR content (i.e., camera movement, field of view, path length, frame reference, and controllability), we generated cybersickness reference (CYRE) content with 52 VR scenes that represent different content attributes. A protocol for cybersickness evaluation was designed to collect subjective opinions from 154 participants as reliably as possible, in conjunction with objective data such as rendered VR scenes and biological signals. By investigating the data obtained through the experiment, the statistically significant relationships (the degree to which cybersickness varies with each isolated content factor) are identified separately. We showed that cybersickness severity was highly correlated with six biological features reflecting brain activity (i.e., the relative power spectral densities of the Fp1 delta, Fp1 beta, Fp2 delta, Fp2 gamma, T4 delta, and T4 beta waves), with a coefficient of determination greater than 0.9. Moreover, our experimental results show that individual characteristics (age and susceptibility) are also quantitatively associated with cybersickness level. Notably, the constructed dataset contains a number of labels (i.e., subjective cybersickness scores) that correspond to each VR scene. We used these labels to build cybersickness prediction models and obtained reliable predictive performance. Hence, the proposed dataset should be widely applicable in general-purpose scenarios involving cybersickness quantification.
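As a rough illustration of the signal side of such a pipeline, the sketch below computes relative band-power features for the channels and bands named in the abstract (Fp1, Fp2, T4; delta, beta, gamma) and fits a simple regressor to subjective scores. It is not the authors' code; the sampling rate, band edges, feature-extraction details, and model choice are illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): relative EEG band-power features
# and a simple regressor mapping them to a subjective cybersickness score.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import Ridge

FS = 256  # assumed EEG sampling rate (Hz)
BANDS = {"delta": (1, 4), "beta": (13, 30), "gamma": (30, 45)}

def relative_band_power(signal, band, fs=FS):
    """Power in `band` divided by total 1-45 Hz power for one EEG channel."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    total = psd[(freqs >= 1) & (freqs <= 45)].sum()
    lo, hi = band
    return psd[(freqs >= lo) & (freqs <= hi)].sum() / total

def features(eeg):
    """eeg: dict of channel name -> 1-D array. Returns the six features from the abstract."""
    keys = [("Fp1", "delta"), ("Fp1", "beta"), ("Fp2", "delta"),
            ("Fp2", "gamma"), ("T4", "delta"), ("T4", "beta")]
    return np.array([relative_band_power(eeg[ch], BANDS[b]) for ch, b in keys])

# Example with synthetic data: 20 trials, each with three channels of 60 s EEG.
rng = np.random.default_rng(0)
X = np.stack([features({ch: rng.standard_normal(FS * 60) for ch in ("Fp1", "Fp2", "T4")})
              for _ in range(20)])
y = rng.uniform(0, 10, size=20)          # placeholder subjective sickness scores
model = Ridge(alpha=1.0).fit(X, y)       # any regressor could stand in here
print(model.predict(X[:3]))
```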
Effect of Emotional Information Preference on Cognitive Processes: Specifics for Decision-making
Emotions affect cognitive processes such as memory and reasoning, but cognitive evaluation and control processes are also important to our emotional experiences. The current study examines the effect of emotional information preference on decision-making processes and whether cognitive processing is affected by task self-relevance and cognitive load. Participants completed the Mini-Mental State Examination (MMSE) and decision-making tasks, and the cognitive load by valence interaction was analyzed. We found that both young and older adults show emotional information preferences under cognitive load and emotional preference in the absence of load across all levels of self-relevance. The cognitive load by valence interaction revealed that young adults exhibited a more robust decrease in preference for negative information under cognitive load than for positive information, with the positive information preference maintained both with and without cognitive load. The findings from the present study support DIT's suitability for examining emotional information selectivity during decision-making processes based on information acquisition goals.
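For readers unfamiliar with this kind of analysis, the sketch below shows one way a cognitive load by valence interaction could be tested on preference scores. It is not the study's analysis code; the synthetic data, cell sizes, and column names are illustrative assumptions.

```python
# Minimal sketch: a 2x2 (cognitive load x valence) interaction test on
# information-preference scores, using an ordinary least squares ANOVA.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
n = 40  # participants per cell (assumed)
rows = []
for load in ("load", "no_load"):
    for valence in ("positive", "negative"):
        # placeholder scores; a drop for negative information under load is simulated
        shift = -0.15 if (load == "load" and valence == "negative") else 0.0
        for score in rng.normal(0.5 + shift, 0.1, size=n):
            rows.append({"load": load, "valence": valence, "preference": score})
df = pd.DataFrame(rows)

model = ols("preference ~ C(load) * C(valence)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # the load x valence interaction term
```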
Deep Transformer Based Video Inpainting Using Fast Fourier Tokenization
Bridging distant space-time interactions is important for high-quality video inpainting with large moving masks. Most existing technologies exploit patch similarities within the frames, or leverage large-scale training data to fill the hole along the spatial and temporal dimensions. Recent works introduce the promising Transformer architecture into deep video inpainting to escape the dominance of nearby interactions and achieve superior performance over their baselines. However, such methods still struggle to complete larger holes containing complicated scenes. To alleviate this issue, we first employ fast Fourier convolutions, which cover the frame-wide receptive field, for token representation. Then, the tokens pass through a separated spatio-temporal transformer that explicitly models long-range context relations and simultaneously completes the missing regions in all input frames. By formulating video inpainting as a directionless sequence-to-sequence prediction task, our model fills in visually consistent content, even under conditions such as large missing areas or complex geometries. Furthermore, our spatio-temporal transformer iteratively fills the hole from the boundary, enabling it to exploit rich contextual information. We validate the superiority of the proposed model by using standard stationary masks and more realistic moving object masks. Both qualitative and quantitative results show that our model compares favorably against state-of-the-art algorithms.
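To make the tokenization step concrete, the sketch below (assuming PyTorch) shows a fast-Fourier-convolution block that gives each token a frame-wide receptive field before the tokens are handed to a spatio-temporal transformer. The layer sizes, patch size, and input layout are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch: spectral (fast Fourier) mixing followed by patch tokenization.
import torch
import torch.nn as nn

class SpectralBlock(nn.Module):
    """Applies a 1x1 convolution in the 2-D Fourier domain of each frame."""
    def __init__(self, channels):
        super().__init__()
        # real and imaginary parts are stacked along channels, hence 2 * channels
        self.conv = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1)

    def forward(self, x):                                   # x: (B, C, H, W)
        B, C, H, W = x.shape
        spec = torch.fft.rfft2(x, norm="ortho")             # complex, (B, C, H, W//2+1)
        spec = torch.cat([spec.real, spec.imag], dim=1)     # (B, 2C, H, W//2+1)
        spec = self.conv(spec)
        real, imag = spec.chunk(2, dim=1)
        return torch.fft.irfft2(torch.complex(real, imag), s=(H, W), norm="ortho")

class FourierTokenizer(nn.Module):
    """Turns masked frames into patch tokens whose features already mix global context."""
    def __init__(self, in_ch=4, dim=256, patch=8):          # in_ch = RGB + mask (assumed)
        super().__init__()
        self.local = nn.Conv2d(in_ch, dim, kernel_size=3, padding=1)
        self.spectral = SpectralBlock(dim)
        self.to_tokens = nn.Conv2d(dim, dim, kernel_size=patch, stride=patch)

    def forward(self, frames):                               # frames: (B*T, 4, H, W)
        feat = self.local(frames)
        feat = feat + self.spectral(feat)                    # frame-wide receptive field
        tokens = self.to_tokens(feat)                        # (B*T, dim, H/patch, W/patch)
        return tokens.flatten(2).transpose(1, 2)             # (B*T, N, dim) for the transformer

tok = FourierTokenizer()
print(tok(torch.randn(2, 4, 64, 64)).shape)                  # -> torch.Size([2, 64, 256])
```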
Stereoscopic 3D Visual Discomfort Prediction: A Dynamic Accommodation and Vergence Interaction Model
Deep visual saliency on stereoscopic images
Visual saliency on stereoscopic 3D (S3D) images has been shown to be heavily influenced by image quality. This dependency is therefore an important factor in image quality prediction, image restoration, and discomfort reduction, but such a nonlinear relation remains very difficult to predict. In addition, most algorithms specialized in detecting visual saliency on pristine images may unsurprisingly fail when facing distorted images. In this paper, we investigate a deep learning scheme named Deep Visual Saliency (DeepVS) to achieve a more accurate and reliable saliency predictor even in the presence of distortions. Since visual saliency is influenced by low-level features (contrast, luminance, and depth information) from a psychophysical point of view, we propose seven low-level features derived from S3D image pairs and utilize them in the context of deep learning to detect visual attention adaptively to human perception. Our analysis shows that the low-level features play a role in extracting distortion and saliency information. To construct saliency predictors, we weight and model human visual saliency through two different network architectures: a regression network and a fully convolutional neural network. Results from thorough experiments confirm that the predicted saliency maps are up to 70% correlated with human gaze patterns, which emphasizes the value of hand-crafted features as input to deep neural networks in S3D saliency detection.
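As a concrete illustration of feeding hand-crafted low-level maps into a fully convolutional predictor, the sketch below (assuming PyTorch) stacks luminance, local contrast, and a disparity placeholder as input channels. It does not reproduce the paper's seven features or its networks; the feature set and layer sizes are illustrative assumptions.

```python
# Minimal sketch: low-level feature maps as input to a tiny fully convolutional
# saliency predictor.
import torch
import torch.nn as nn
import torch.nn.functional as F

def luminance(rgb):                          # rgb: (B, 3, H, W) in [0, 1]
    w = torch.tensor([0.299, 0.587, 0.114], device=rgb.device).view(1, 3, 1, 1)
    return (rgb * w).sum(dim=1, keepdim=True)

def local_contrast(lum, k=9):
    """Absolute difference between a pixel and its local mean."""
    mean = F.avg_pool2d(lum, k, stride=1, padding=k // 2)
    return (lum - mean).abs()

class SaliencyFCN(nn.Module):
    """Fully convolutional network mapping low-level feature maps to a saliency map."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),
        )

    def forward(self, feats):
        return self.net(feats)

left = torch.rand(1, 3, 128, 128)            # left view of an S3D pair
disparity = torch.rand(1, 1, 128, 128)       # placeholder depth/disparity map
lum = luminance(left)
feats = torch.cat([lum, local_contrast(lum), disparity], dim=1)
print(SaliencyFCN()(feats).shape)             # -> torch.Size([1, 1, 128, 128])
```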
Targeted Inhibition of the NCOA1/STAT6 Protein-Protein Interaction
The complex formation between transcription factors (TFs) and coactivator proteins is required for transcriptional activity, and thus disruption of aberrantly activated TF/coactivator interactions could be an attractive therapeutic strategy. However, modulation of such protein-protein interactions (PPIs) has proven challenging. Here we report a cell-permeable, proteolytically stable, stapled helical peptide that directly targets nuclear receptor coactivator 1 (NCOA1), a coactivator required for the transcriptional activity of signal transducer and activator of transcription 6 (STAT6). We demonstrate that this stapled peptide disrupts the NCOA1/STAT6 complex, thereby repressing STAT6-mediated transcription. Furthermore, we solved the first crystal structure of a stapled peptide in complex with NCOA1. The stapled peptide therefore represents an invaluable chemical probe for understanding the precise role of the NCOA1/STAT6 interaction and an excellent starting point for the development of a novel class of therapeutic agents.