
    Bonnard's representation of the perception of substance

    Artists have been said to be like neuroscientists, able to exploit the capacities of the brain to generate aesthetic experience (Zeki, 2001). Pierre Bonnard (1867-1947) is recognized as one of the greatest and most enigmatic masters of 20th-century painting. He is considered a revolutionary painter for his understanding of eye movements and attentional-shift mechanisms, and for representing in his paintings the complexity of the physiological process of visual perception, something he famously referred to as "the transcription of the adventures of the optic nerve". Our recent eye-movement study of Bonnard's paintings provides evidence for a "temporally extended" mechanism in the control of scanpaths, namely a progression of the scanpath pattern over repeated viewings, and supports the phenomenon of a late emotional response, which was one of the artist's artistic and perceptual objectives.

    Parsing eye movement analysis of scanpaths of naïve viewers of art: How do we differentiate art from non-art pictures?

    Following G. Buswell's early work, we posed the questions: How do art-naïve people look at pairs of artful pictures and similar-looking snapshots? Does analysis of their eye-movement recordings reveal a difference in their perception? By parsing eye scanpaths with string editing, similarity coefficients can be computed and represented for the two measures 'Sp' (similarity of position) and 'Ss' (similarity of sequence). 25 picture pairs were shown 5 times to 7 subjects with no specific task; the subjects were 'art-naïve' to avoid confounding the results through specific art knowledge. No significant difference between scanpaths for artful pictures and for snapshots was found across our subjects' repeated viewing sessions. Auto-similarity (the same subject viewing the same picture) and cross-similarity (different subjects viewing the same picture) demonstrated this result, for sequences of eye fixations (Ss) as well as for their positions (Sp). For global sequential similarity Ss (different subjects and different pairs), about 84 percent of the picture pairs were viewed with very low similarity, in a quasi-random mode within the range of random values. A high similarity was found in only 4 of the 25 artful-picture/snapshot pairs. A specific, restricted set of representative regions in the internal cognitive model of a picture is essential for the brain to perceive and eventually recognize the picture: this representative set is quite similar across different subjects and different picture pairs, independently of their art versus non-art features, which were in most cases not recognized by our subjects. Furthermore, our study shows that the distinction between art and non-art has vanished, causing confusion about the ratio of signal to noise in the communication between artists and viewers of art.
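    The string-editing approach described above can be sketched as follows. This is a minimal illustration, not the authors' code: fixations are assumed to be coded as letters naming the picture region they fall in, and an 'Ss'-style sequence similarity is taken as one minus the normalized edit distance between two such strings (the exact coding scheme and normalization used in the study may differ).

    ```python
    def levenshtein(a: str, b: str) -> int:
        # Classic dynamic-programming edit distance with unit-cost
        # insertions, deletions, and substitutions.
        m, n = len(a), len(b)
        prev = list(range(n + 1))
        for i in range(1, m + 1):
            curr = [i] + [0] * n
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                curr[j] = min(prev[j] + 1,        # deletion
                              curr[j - 1] + 1,    # insertion
                              prev[j - 1] + cost) # substitution
            prev = curr
        return prev[n]

    def sequence_similarity(s1: str, s2: str) -> float:
        # 'Ss'-style coefficient: 1 - normalized edit distance between
        # two scanpath strings (1.0 = identical sequences).
        if not s1 and not s2:
            return 1.0
        return 1.0 - levenshtein(s1, s2) / max(len(s1), len(s2))

    # Hypothetical scanpaths: each letter labels the region of one fixation.
    print(sequence_similarity("ABCDA", "ABCEA"))  # one substitution -> 0.8
    ```

    A positional 'Sp' coefficient would be computed analogously, but over the spatial bins of the fixations rather than their temporal order.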

    Rock segmentation in the navigation vision of the planetary rovers

    Visual navigation is an essential part of planetary rover autonomy. Rock segmentation has emerged as an important interdisciplinary topic spanning image processing, robotics, and mathematical modeling. It is a challenging topic for rover autonomy because of its high computational cost, real-time requirements, and annotation difficulty. This research proposes a rock segmentation framework and a rock segmentation network (NI-U-Net++) to aid the visual navigation of rovers. The framework consists of two stages: a pre-training process and a transfer-training process. The pre-training process applies a synthetic algorithm to generate synthetic images, then uses the generated images to pre-train NI-U-Net++. The synthetic algorithm increases the size of the image dataset and provides pixel-level masks, both of which are challenges in machine learning tasks. The pre-training process achieves state-of-the-art results compared with related studies, with an accuracy, intersection over union (IoU), Dice score, and root mean squared error (RMSE) of 99.41%, 0.8991, 0.9459, and 0.0775, respectively. The transfer-training process fine-tunes the pre-trained NI-U-Net++ on real-life images, achieving an accuracy, IoU, Dice score, and RMSE of 99.58%, 0.7476, 0.8556, and 0.0557, respectively. Finally, the transfer-trained NI-U-Net++ is integrated into a planetary rover navigation vision system and achieves real-time performance of 32.57 frames per second (an inference time of 0.0307 s per frame). The framework requires manual annotation of only about 8% (183 images) of the 2250 images in the navigation vision dataset, making it a labor-saving solution for rock segmentation tasks. The proposed rock segmentation framework and NI-U-Net++ improve on the performance of state-of-the-art models, and the synthetic algorithm improves the process of creating valid data for the rock segmentation challenge.
    All source code, datasets, and trained models of this research are openly available in Cranfield Online Research Data (CORD).
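    The IoU and Dice metrics reported above can be computed from binary masks as in the sketch below. This is an illustrative NumPy implementation of the standard pixel-level definitions, not the evaluation code from the study; the toy masks are purely hypothetical.

    ```python
    import numpy as np

    def iou_and_dice(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
        # Pixel-level intersection over union and Dice score for
        # binary segmentation masks (1 = rock pixel, 0 = background).
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        total = pred.sum() + truth.sum()
        iou = inter / union if union else 1.0
        dice = 2 * inter / total if total else 1.0
        return float(iou), float(dice)

    # Toy 4x4 masks, purely illustrative: prediction covers one extra pixel.
    pred = np.array([[1, 1, 0, 0],
                     [1, 1, 0, 0],
                     [0, 0, 0, 0],
                     [0, 0, 0, 0]])
    truth = np.array([[1, 1, 0, 0],
                      [1, 0, 0, 0],
                      [0, 0, 0, 0],
                      [0, 0, 0, 0]])
    iou, dice = iou_and_dice(pred, truth)
    print(iou, dice)  # 0.75 and 6/7 ≈ 0.857
    ```

    Dice weights the intersection twice, so it is always at least as large as IoU for the same pair of masks, which matches the pattern in the reported numbers.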