5 research outputs found

    Does the human brain really like ICT tools and being outdoors? A brief overview of the cognitive neuroscience perspective of the CyberParks concept

    Get PDF
    The paper presents an overview of the latest studies in cognitive neuroscience that can help evaluate concepts promoting technologically-enhanced outdoor activities, such as CyberParks. The paper asks the following questions: does the human brain really like ICT tools? Does the human brain really like being outdoors? And finally: does the human brain really like technologically-enhanced outdoor activities? The studies presented show that the human brain does not like ICT tools, yet it likes being outdoors very much. At the same time, they show that outdoor activities may be encouraged by ICT tools, but the outdoor activities themselves should be free of ICT tools. Using ICT tools during physical activity is a dual task, a type of activity in which cognitive and physical processes destabilise each other, weakening the effects of both the cognitive and the physical task. From the perspective of cognitive neuroscience, CyberParks are therefore not a solution that the human brain really likes. A further issue is also discussed, namely: do technologically-enhanced outdoor activities, such as those in CyberParks, really increase the quality of life? The study was supported by European Cooperation in Science and Technology Action: Fostering knowledge about the relationship between Information and Communication Technologies and Public Spaces supported by strategies to improve their use and attractiveness (CYBERPARKS) (TUD COST Action TU1306). Peer-reviewed.

    Explaining the Timing of Natural Scene Understanding with a Computational Model of Perceptual Categorization.

    No full text
    Observers can rapidly perform a variety of visual tasks, such as categorizing a scene as open, as outdoor, or as a beach. Although we know that different tasks are typically associated with systematic differences in behavioral responses, to date little is known about the underlying mechanisms. Here, we implemented a single integrated paradigm that links perceptual processes with categorization processes. Using a large image database of natural scenes, we trained machine-learning classifiers to derive quantitative measures of task-specific perceptual discriminability based on the distance between individual images and different categorization boundaries. We showed that the resulting discriminability measure accurately predicts variations in behavioral responses across categorization tasks and stimulus sets. We further used the model to design an experiment that challenged previous interpretations of the so-called "superordinate advantage." Overall, our study suggests that observed differences in behavioral responses across rapid categorization tasks reflect natural variations in perceptual discriminability.
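    The core idea of the abstract above, using an image's distance from a learned categorization boundary as a measure of its perceptual discriminability, can be sketched in a few lines. This is a minimal illustration, not the study's actual pipeline: the feature vectors are synthetic, and the "classifier" is a simple nearest-mean linear boundary rather than the trained machine-learning classifiers the paper used.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "image features" for two scene categories (e.g. beach vs. forest),
    # each a cloud of feature vectors around a category mean.
    beach = rng.normal(loc=+1.0, scale=1.0, size=(50, 8))
    forest = rng.normal(loc=-1.0, scale=1.0, size=(50, 8))

    # A minimal linear categorization boundary: the hyperplane midway between
    # the two class means, oriented along the mean-difference direction.
    w = beach.mean(axis=0) - forest.mean(axis=0)
    w /= np.linalg.norm(w)                                   # unit normal
    b = -w @ (beach.mean(axis=0) + forest.mean(axis=0)) / 2.0

    def discriminability(x):
        """Signed distance from feature vector x to the category boundary.

        A larger |distance| means the image is easier to categorize;
        the sign gives the predicted category.
        """
        return w @ x + b

    d_beach = np.array([discriminability(x) for x in beach])
    d_forest = np.array([discriminability(x) for x in forest])
    ```

    Under this scheme, images far from the boundary would be predicted to yield faster and more accurate categorization responses than images close to it, which is the mapping from model distance to behavior the paper tests.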

    Comparing computational models of vision to human behaviour

    Get PDF
    Biological vision and computational models of vision can be split into three independent components (image description, decision process, and image set). The thesis presented here aimed to investigate the influence of each of these core components on computational models' similarity to human behaviour. Chapter 3 investigated the similarity of different computational image descriptors to their biological counterparts, using an image matching task. The results showed that several of the computational models could explain a significant amount of the variance in human performance on individual images. The deep supervised convolutional neural net explained the most variance, followed by GIST, HMAX and then PHOW. Chapter 4 investigated which computational decision process best explained observers' behaviour on an image categorization task. The results showed that Decision Bound theory produced behaviour closest to that of observers, followed by Exemplar theory and then Prototype theory. Chapter 5 examined whether the naturally differing image sets seen by computational models and observers could partially account for the differences in their behaviour. The results showed that the differing image sets did indeed affect the similarity of their behaviour. This gap did not alter which image descriptor best fit observers' behaviour, and it could be reduced by training observers on the image set the computational models were using. Chapter 6 investigated, using computational models of vision, the impact of the neighbouring (masking) images on the target images in an RSVP task. This was done by combining the neighbouring images with the target image in the computational models' simulation of each trial.
The results showed that the models' behaviour became closer to that of the human observers when the neighbouring mask images were included in the computational simulations, as would be expected given an integration period for neural mechanisms. This thesis has shown that computational models can exhibit behaviour quite similar to that of human observers, even at the level of how they perform on individual images. While this shows the potential utility of computational models as a tool for studying visual processing, it has also shown the need to take into account many aspects of the overall model of the visual process and task: not only the image description, but also the task requirements, the decision processes, the images used as stimuli, and even the sequence in which they are presented.
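    The decision processes compared in Chapter 4 differ in what they compare a stimulus against: Prototype theory uses one summary representation (e.g. the mean) per category, while Exemplar theory sums similarity over every stored training item. A minimal sketch of that contrast, with invented synthetic data and parameter values rather than the thesis's actual implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    # Stored training items (4-dimensional "features") for two categories.
    cat_a = rng.normal(0.0, 1.0, size=(30, 4))
    cat_b = rng.normal(3.0, 1.0, size=(30, 4))

    def prototype_choice(x):
        # Prototype theory: compare the stimulus to a single summary
        # representation (here, the mean) of each category.
        da = np.linalg.norm(x - cat_a.mean(axis=0))
        db = np.linalg.norm(x - cat_b.mean(axis=0))
        return "A" if da < db else "B"

    def exemplar_choice(x, c=1.0):
        # Exemplar theory (GCM-style): summed similarity to every stored
        # exemplar, with similarity decaying exponentially in distance.
        # The sensitivity parameter c = 1.0 is an arbitrary illustrative value.
        sim_a = np.exp(-c * np.linalg.norm(cat_a - x, axis=1)).sum()
        sim_b = np.exp(-c * np.linalg.norm(cat_b - x, axis=1)).sum()
        return "A" if sim_a > sim_b else "B"

    probe = np.full(4, 0.2)   # a new stimulus lying near category A
    ```

    The two theories agree on easy probes like this one; they come apart for stimuli near the boundary or in categories with uneven exemplar distributions, which is what makes model comparison against observers' trial-by-trial behaviour informative. Decision Bound theory, the best-fitting process in the thesis, instead classifies by which side of a learned boundary in feature space the stimulus falls on.
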
