    Visual working memory in immersive visualization: a change detection experiment and an image-computable model

    Visual working memory (VWM) is a cognitive mechanism essential for interacting with the environment and accomplishing ongoing tasks, as it allows fast processing of visual inputs at the expense of the amount of information that can be stored. A better understanding of its functioning would benefit research fields such as simulation and training in immersive Virtual Reality, information visualization, and computer graphics. The current work focuses on the design and implementation of a paradigm for evaluating VWM in immersive visualization and of a novel image-based computational model that mimics human behavioral data on VWM. We evaluated VWM while varying four conditions: set size, spatial layout, visual angle (VA) subtending the stimulus presentation space, and observation time. We adopted a full factorial design and analysed participants' performance in the change detection experiment. The analysis of hit rates and false alarm rates confirms a VWM capacity limit of around 7 ± 2 items, as found in the literature based on 2D videos and images. Only VA and observation time influence performance (p < 0.0001): with VA enlargement, participants need more time to obtain a complete overview of the presented stimuli. Moreover, we show that our model has a high level of agreement with the human data, r > 0.88 (p < 0.05).
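    The abstract does not specify the estimator behind the 7 ± 2 figure; the sketch below uses Cowan's K, a standard capacity measure for single-probe change detection tasks, purely to illustrate how hit and false alarm rates map onto an item-capacity estimate. All values are hypothetical, not taken from the study.

```python
# Illustrative sketch (not the authors' code): estimating VWM capacity from
# change-detection hit and false-alarm rates with Cowan's K, K = N * (H - FA),
# a standard estimator for single-probe change-detection experiments.

def cowan_k(set_size: int, hit_rate: float, false_alarm_rate: float) -> float:
    """Estimate the number of items held in visual working memory."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical example: with 12 items on screen, an 80% hit rate and a 20%
# false-alarm rate imply a capacity of roughly 7 items.
print(cowan_k(set_size=12, hit_rate=0.80, false_alarm_rate=0.20))  # 7.2
```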

    Visualization and Interaction Technologies in Serious and Exergames for Cognitive Assessment and Training: A Survey on Available Solutions and Their Validation

    Exergames and serious games, based on standard personal computers, mobile devices and gaming consoles or on novel immersive Virtual and Augmented Reality techniques, have become popular in the last few years and are now applied in various research fields, among them the cognitive assessment and training of heterogeneous target populations. Moreover, the adoption of Web-based solutions, together with the integration of Artificial Intelligence and Machine Learning algorithms, could bring countless advantages for both patients and clinical personnel, such as allowing the early detection of some pathological conditions, improving the efficacy of and adherence to rehabilitation processes through the personalisation of training sessions, and optimizing the allocation of resources by the healthcare system. The current work proposes a systematic survey of existing solutions in the field of cognitive assessment and training. We evaluate the visualization and interaction technologies commonly adopted and the measures taken to fulfil the needs of the pathological target populations. Moreover, we analyze how the implemented solutions are validated, i.e., the chosen experimental designs, data collection and analysis. Finally, we consider the availability of the applications and raw data to the large community of researchers and medical professionals, and the actual application of the proposed solutions in standard clinical practice. Despite the potential of these technologies, research is still at an early stage. Notwithstanding the recent release of accessible immersive virtual reality headsets and the increasing interest in vision-based techniques for tracking body and hand movements, many studies still rely on non-immersive virtual reality (67.2%), mainly mobile devices and personal computers, and on standard gaming tools for interaction (41.5%). Finally, we highlight that, although the interest of the research community in this field keeps growing, the sharing of datasets (10.6%) and implemented applications (3.8%) should be promoted, and the number of healthcare structures that have successfully introduced the new technological approaches in the treatment of their patients remains limited (10.2%).

    The Effects of Weather on the Life Time of Wireless Sensor Networks Using FSO/RF Communication

    The growing interest in long-lasting wireless sensor networks motivates the use of a Free Space Optics (FSO) link alongside a radio frequency (RF) link for communication. Earlier results show that hybrid RF/FSO wireless sensor networks have a lifetime twice as long as RF-only wireless sensor networks. However, for terrestrial applications, the effect of weather conditions such as fog, rain or snow on the optical wireless communication link is a major concern that should be taken into account in the performance analysis. In this paper, the lifetime performance of hybrid wireless sensor networks is compared to that of RF-only wireless sensor networks for terrestrial applications under the weather effects of fog, rain and snow. The results show that the combined hybrid network with a three-threshold scheme can provide efficient power consumption for 6548 seconds, 2118 seconds and 360 seconds for the measured fog, snow and rain events respectively, resulting in approximately twice the lifetime achieved with the RF link alone.
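    The abstract does not detail how weather attenuation enters the lifetime analysis. The sketch below, under assumed parameters, uses the Kim visibility model (a common empirical model for fog-induced FSO attenuation, not necessarily the one used in the paper) to illustrate how a hybrid node might fall back from its FSO link to its RF link when optical loss exceeds the link margin; node names, margins and hop lengths are hypothetical.

```python
# Minimal sketch (assumptions, not the paper's model): FSO specific attenuation
# from visibility via the Kim model, and a simple FSO-to-RF fallback rule for a
# hybrid node when the optical loss over the hop exceeds its link margin.

def kim_q(visibility_km: float) -> float:
    """Size-distribution coefficient q of the Kim visibility model."""
    v = visibility_km
    if v > 50:
        return 1.6
    if v > 6:
        return 1.3
    if v > 1:
        return 0.16 * v + 0.34
    if v > 0.5:
        return v - 0.5
    return 0.0

def fso_attenuation_db_per_km(visibility_km: float, wavelength_nm: float = 1550.0) -> float:
    """Specific attenuation (dB/km) of an FSO link as a function of visibility."""
    q = kim_q(visibility_km)
    return (3.91 / visibility_km) * (wavelength_nm / 550.0) ** (-q)

def choose_link(visibility_km: float, hop_km: float, fso_margin_db: float = 20.0) -> str:
    """Fall back to RF when the optical loss over the hop exceeds the FSO margin."""
    loss_db = fso_attenuation_db_per_km(visibility_km) * hop_km
    return "FSO" if loss_db <= fso_margin_db else "RF"

# Hypothetical 1 km hop: clear air keeps the optical link, dense fog forces RF.
print(choose_link(visibility_km=20.0, hop_km=1.0))  # FSO
print(choose_link(visibility_km=0.1, hop_km=1.0))   # RF
```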

    Near-optimal combination of disparity across a log-polar scaled visual field

    The human visual system is foveated: we can see fine spatial details in central vision, whereas resolution is poor in our peripheral visual field, and this loss of resolution follows an approximately logarithmic decrease. Additionally, our brain organizes visual input in polar coordinates. Therefore, the image projection occurring between the retina and the primary visual cortex can be mathematically described by the log-polar transform. Here, we test and model how this space-variant visual processing affects how we process binocular disparity, a key component of human depth perception. We observe that the fovea preferentially processes disparities at fine spatial scales, whereas the visual periphery is tuned for coarse spatial scales, in line with the naturally occurring distributions of depths and disparities in the real world. We further show that the visual system integrates disparity information across the visual field in a near-optimal fashion. We develop a foveated, log-polar model that mimics the processing of depth information in the primary visual cortex and that can process disparity directly in the cortical domain representation. This model takes real images as input and recreates the observed topography of human disparity sensitivity. Our findings support the notion that our foveated, binocular visual system has been moulded by the statistics of our visual environment.
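    As an illustration of the two ingredients named above, the sketch below implements a plain log-polar coordinate mapping and the inverse-variance (reliability-weighted) rule that defines statistically optimal cue combination; function names and numerical values are assumptions, not the authors' model.

```python
# Illustrative sketch (not the authors' model): the log-polar mapping that
# approximates the retina-to-cortex projection, and inverse-variance weighting,
# the maximum-likelihood rule for combining independent disparity estimates.
import numpy as np

def log_polar(x: np.ndarray, y: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Map retinal coordinates (x, y), in degrees, to cortical-like (rho, theta)."""
    r = np.hypot(x, y)
    rho = np.log(r)           # eccentricity compressed logarithmically (r = 0 needs special handling)
    theta = np.arctan2(y, x)  # polar angle is preserved
    return rho, theta

def combine_disparities(estimates: np.ndarray, variances: np.ndarray) -> tuple[float, float]:
    """Optimally combine independent disparity cues: each estimate is weighted
    by its reliability, i.e. the inverse of its variance."""
    weights = 1.0 / variances
    combined = np.sum(weights * estimates) / np.sum(weights)
    combined_var = 1.0 / np.sum(weights)  # never larger than the best single cue
    return combined, combined_var

# Hypothetical example: a precise foveal estimate and a noisier peripheral one.
d, v = combine_disparities(np.array([0.10, 0.25]), np.array([0.01, 0.09]))
print(d, v)  # combined estimate stays close to the foveal value, with reduced variance
```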