    Selecting the special or choosing the common? A high-powered conceptual replication of Kim and Markus’ (1999) pen study

    Kim and Markus (1999; Study 3) found that 74% of European Americans selected a pen with an uncommon (vs. common) color, whereas only 24% of East Asians made such a choice, highlighting a pronounced cross-cultural difference in the extent to which people opt for originality or make majority-based choices. The present high-powered study (N = 729) conceptually replicates the results from Kim and Markus (1999; Study 3), although our effect size (r = .12) is significantly weaker than that of the original study (r = .52). Interestingly, a larger proportion of Chinese, but not US, participants selected a pen with an uncommon color than in the original study. Thus, our findings indicate a potential transmission of certain Western values to cultures traditionally characterized by collectivism and conformity, likely exacerbated by the globalization of mass media and the rapid economic growth in many East Asian countries.
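
    The effect sizes reported above can be recovered from a 2x2 culture-by-choice table as a phi coefficient. Below is a minimal Python sketch using hypothetical cell counts chosen only to mirror the reported 74% vs. 24% proportions; they are not the actual data from either study.

        # Illustrative only: recovers an effect size r (phi coefficient) from a
        # 2x2 culture-by-choice table. Cell counts are hypothetical, chosen to
        # mirror the reported proportions, not the studies' actual data.
        import numpy as np
        from scipy.stats import chi2_contingency

        # rows: European American, East Asian; columns: uncommon pen, common pen
        table = np.array([[74, 26],
                          [24, 76]])

        chi2, p, dof, expected = chi2_contingency(table, correction=False)
        n = table.sum()
        phi = np.sqrt(chi2 / n)   # for a 2x2 table, phi equals Pearson's r

        print(f"chi2 = {chi2:.2f}, p = {p:.4f}, r (phi) = {phi:.2f}")

    With these illustrative counts the sketch returns r of roughly .50, close to the original study's reported effect size.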

    The deep past in the virtual present: developing an interdisciplinary approach towards understanding the psychological foundations of palaeolithic cave art

    Virtual Reality (VR) has vast potential for developing systematic, interdisciplinary studies to understand ephemeral behaviours in the archaeological record, such as the emergence and development of visual culture. Upper Palaeolithic cave art forms the most robust record for investigating this, and the methods of its production, its themes, and its temporal and spatial changes have been researched extensively, but without consensus over its functions or meanings. More compelling arguments draw from visual psychology and posit that the immersive, dark conditions of caves elicited particular psychological responses, resulting in the perception—and depiction—of animals on suggestive features of cave walls. Our research developed and piloted a novel VR experiment that allowed participants to perceive 3D models of cave walls from El Castillo cave (Cantabria, Spain), with the Palaeolithic art digitally removed. Results indicate that modern participants’ visual attention corresponded to the same topographic features of cave walls utilised by Palaeolithic artists, and that they perceived such features as resembling animals. Although preliminary, our results support the hypothesis that pareidolia—a product of our cognitive evolution—was a key mechanism in Palaeolithic art making, and demonstrate the potential of interdisciplinary VR research for understanding the evolution of art and the efficacy of the methodology.

    Influence of dynamic content on visual attention during video advertisements

    Purpose: Dynamic advertising, including television and online video ads, demands new theory and tools developed to understand attention to moving stimuli. The purpose of this study is to empirically test the predictions of a new dynamic attention theory, Dynamic Human-Centred Communication Systems Theory, against the predictions of salience theory.
    Design/methodology/approach: An eye-tracking study used a sample of consumers to measure visual attention to potential areas of interest (AOIs) in a random selection of unfamiliar video ads. An eye-tracking software feature called intelligent bounding boxes (IBBs) was used to track attention to moving AOIs. AOIs were coded for the presence of static salience variables (size, brightness, colour and clutter) and dynamic attention theory dimensions (imminence, motivational relevance, task relevance and stability).
    Findings: Static salience variables contributed 90% of explained variance in fixation and 57% in fixation duration. However, the data further supported the three-way interaction uniquely predicted by dynamic attention theory: between imminence (central vs peripheral), relevance (motivational or task relevant vs not) and stability (fleeting vs stable). The findings indicate that viewers treat dynamic stimuli like real life, paying less attention to central, relevant and stable AOIs, which are available across time and space in the environment and so do not need to be memorised.
    Research limitations/implications: Despite the limitations of small samples of consumers and video ads, the results demonstrate the potential of two relatively recent innovations, which have received limited emphasis in the marketing literature: dynamic attention theory and IBBs.
    Practical implications: This study documents what does and does not attract attention to video advertising. What gets attention according to salience theory (e.g. central location) may not always get attention in dynamic advertising because of the effects of relevance and stability. To better understand how to execute video advertising to direct and retain attention to important AOIs, advertisers and advertising researchers are encouraged to use IBBs.
    Originality/value: This study makes two original contributions: to marketing theory, by showing how dynamic attention theory can predict attention to video advertising better than salience theory, and to marketing research, by showing the utility of tracking visual attention to moving objects in video advertising with IBBs, which appear underutilised in advertising research.
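
    The three-way interaction described in the findings can, in principle, be tested with an ordinary regression that crosses the three theory dimensions while controlling for static salience variables. The Python sketch below runs on simulated AOI-level data; the variable names, codings, and simulated outcome are stand-ins and not the authors' analysis.

        # Hedged sketch: testing an imminence x relevance x stability interaction
        # on AOI-level fixation counts with a standard linear model. The data
        # frame is simulated; it is not the authors' dataset or analysis code.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 200  # hypothetical AOI observations

        aois = pd.DataFrame({
            "imminence":  rng.integers(0, 2, n),   # 1 = central, 0 = peripheral
            "relevance":  rng.integers(0, 2, n),   # 1 = motivationally/task relevant
            "stability":  rng.integers(0, 2, n),   # 1 = stable, 0 = fleeting
            "size":       rng.uniform(0.05, 0.5, n),
            "brightness": rng.uniform(0.2, 0.9, n),
        })
        # Simulated outcome: static salience effects plus a negative three-way
        # interaction (less attention to central, relevant, stable AOIs).
        aois["fixations"] = (
            5 + 8 * aois["size"] + 3 * aois["brightness"]
            - 2 * aois["imminence"] * aois["relevance"] * aois["stability"]
            + rng.normal(0, 1, n)
        )

        # '*' expands to all main effects and interactions up to the three-way term.
        model = smf.ols(
            "fixations ~ imminence * relevance * stability + size + brightness",
            data=aois,
        ).fit()
        print(model.summary())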

    Overcoming coordination failure in games with focal points: An experimental investigation

    Focal points (Schelling, 1960) have shown limitations as coordination devices in games with conflict, such as battle of the sexes games. We experimentally test whether an increase in their salience can counteract the negative impact of conflict on coordination. The intuition is that, in the presence of conflict, the solution to the coordination dilemma offered by the focal point loses importance; increasing its salience increases its relevance and therefore coordination success. Our results provide strong support for this conjecture. Furthermore, when games feature outcomes with different degrees of payoff inequality (i.e. the difference between players’ payoffs) and efficiency (i.e. the sum of players’ payoffs), increasing salience does not lead to an obvious increase in coordination unless the salience of the focal point is maximal.
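
    For readers unfamiliar with the game structure, a battle-of-the-sexes-style game has two pure-strategy coordination equilibria that the payoffs alone cannot select between, which is where a salient focal label can help. The Python sketch below uses hypothetical payoffs purely as an illustration of that structure.

        # Minimal sketch of a battle-of-the-sexes-style coordination game with a
        # payoff-irrelevant focal label. Payoff numbers are hypothetical; the
        # label changes salience, not the payoff structure.
        import numpy as np

        # Row and column players' payoffs; actions: 0 = "focal" label, 1 = other.
        row_payoff = np.array([[3, 0],
                               [0, 2]])
        col_payoff = np.array([[2, 0],
                               [0, 3]])

        def pure_nash_equilibria(a, b):
            """Return action pairs where neither player can gain by deviating."""
            eq = []
            for i in range(a.shape[0]):
                for j in range(a.shape[1]):
                    if a[i, j] >= a[:, j].max() and b[i, j] >= b[i, :].max():
                        eq.append((i, j))
            return eq

        print(pure_nash_equilibria(row_payoff, col_payoff))
        # -> [(0, 0), (1, 1)]: both coordination outcomes are equilibria, so the
        #    payoffs alone cannot select one; a salient focal label can.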

    Biologically Inspired Computer Vision: Applications of Computational Models of Primate Visual Systems in Computer Vision and Image Processing

    Abstract by Reza Hojjaty Saeedy: Biological vision systems are remarkable at extracting and analyzing the information that is essential for vital functional needs. They perform all these tasks with both high sensitivity and strong reliability. They can efficiently and quickly solve most of the difficult computational problems that are still challenging for artificial systems, such as scene segmentation, 3D/depth perception and motion recognition, so it is no surprise that biological vision systems have been a source of inspiration for computer vision problems. In this research, we aim to provide a task-centric computer vision framework built from models primarily originating in biological vision studies. We address two specific tasks here: saliency detection and object classification. In both tasks we use features extracted from computational models of biological vision systems as a starting point for further processing. Saliency maps are 2D topographic maps that capture the most conspicuous regions of a scene, i.e. the pixels in an image that stand out against their neighboring pixels; these maps can therefore be thought of as representations of the human attention process and have many applications in computer vision. We propose a cascade that combines two well-known computational models for the perception of color and orientation in order to simulate the responses of the primary areas of the primate visual cortex. We use these responses as inputs to a spiking neural network (SNN), and the output of this SNN serves as the input to our post-processing algorithm for saliency detection. Object classification/detection is the most studied task in computer vision and machine learning, and it is interesting that, while it looks trivial for humans, it is a difficult problem for artificial systems. For this part of the thesis we also design a pipeline comprising feature extraction using biologically inspired systems, manifold learning for dimensionality reduction, and a self-organizing (vector quantization) neural network as a supervised method for prototype learning.
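
    As a rough illustration of the general feature-map idea (not the thesis's SNN-based cascade), a saliency map can be sketched by combining colour-opponency and orientation-energy maps through centre-surround filtering, in the spirit of classic Itti/Koch-style models. The Python sketch below uses generic operations and made-up parameters.

        # Rough sketch of the general idea only: combine colour-contrast and
        # orientation (edge-energy) feature maps into a simple saliency map via
        # centre-surround differences. A generic Itti/Koch-style baseline, not
        # the thesis's spiking-neural-network cascade.
        import numpy as np
        from scipy import ndimage

        def saliency_map(rgb):
            """rgb: float array in [0, 1] with shape (H, W, 3)."""
            intensity = rgb.mean(axis=2)
            # Crude colour-opponency channels (red-green, blue-yellow).
            rg = rgb[..., 0] - rgb[..., 1]
            by = rgb[..., 2] - 0.5 * (rgb[..., 0] + rgb[..., 1])
            # Gradient energy as a stand-in for an orientation (Gabor) bank.
            gx = ndimage.sobel(intensity, axis=1)
            gy = ndimage.sobel(intensity, axis=0)
            orient = np.hypot(gx, gy)

            def center_surround(fmap, sigma_c=2, sigma_s=8):
                # Difference of Gaussians approximates centre-surround receptive fields.
                return np.abs(ndimage.gaussian_filter(fmap, sigma_c)
                              - ndimage.gaussian_filter(fmap, sigma_s))

            conspicuity = sum(center_surround(f) for f in (rg, by, orient))
            return conspicuity / (conspicuity.max() + 1e-8)

        # Example: saliency of a random test image.
        sal = saliency_map(np.random.rand(120, 160, 3))
        print(sal.shape, sal.min(), sal.max())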

    Overcoming coordination failure in games with focal points: An experimental investigation

    We experimentally test whether increasing the salience of payoff-irrelevant focal points (Schelling, 1960) can counteract the negative impact of conflicts of interest on coordination. The intuition is that, in the presence of conflict, the solution to the coordination dilemma offered by the focal point loses importance; increasing its salience increases its relevance and, therefore, coordination success. When we vary label salience between subjects, we find support for this conjecture in games with a constant degree of conflict, similar to battle of the sexes games, but not in games that feature outcomes with different degrees of payoff inequality and efficiency. In an additional experiment in which we vary label salience within subjects, choices are not affected by our salience manipulation; yet the proportion of choices consistent with the focal point is significantly greater than in the between-subject design.
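
    The within- versus between-subject comparison of focal-point-consistent choices is the kind of contrast a simple two-proportion test can express. The Python sketch below uses purely hypothetical counts, not the paper's data.

        # Hypothetical numbers only: comparing the share of focal-point-consistent
        # choices in a within-subject vs. a between-subject design with a
        # two-proportion z-test.
        from statsmodels.stats.proportion import proportions_ztest

        consistent = [72, 55]   # focal-point-consistent choices (within, between)
        totals = [100, 100]     # choices observed in each design

        stat, pval = proportions_ztest(consistent, totals)
        print(f"z = {stat:.2f}, p = {pval:.4f}")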

    UEyes: Understanding Visual Saliency across User Interface Types

    Funding Information: This work was supported by Aalto University’s Department of Information and Communications Engineering, the Finnish Center for Artificial Intelligence (FCAI), the Academy of Finland through the projects Human Automata (grant 328813) and BAD (grant 318559), the Horizon 2020 FET program of the European Union (grant CHISTERA-20-BCI-001), and the European Innovation Council Pathfinder program (SYMBIOTIK project, grant 101071147). We appreciate Chuhan Jiao’s initial implementation of the baseline methods for saliency prediction and active discussion with Yao (Marc) Wang.
    While user interfaces (UIs) display elements such as images and text in a grid-based layout, UI types differ significantly in the number of elements and how they are displayed. For example, webpage designs rely heavily on images and text, whereas desktop UIs tend to feature numerous small images. To examine how such differences affect the way users look at UIs, we collected and analyzed a large eye-tracking-based dataset, UEyes (62 participants and 1,980 UI screenshots), covering four major UI types: webpage, desktop UI, mobile UI, and poster. We analyze differences across UI types in biases related to such factors as color, location, and gaze direction. We also compare state-of-the-art predictive models and propose improvements for better capturing typical tendencies across UI types. Both the dataset and the models are publicly available.
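
    Comparing predictive models against eye-tracking ground truth typically relies on saliency metrics such as Pearson's correlation coefficient (CC) and KL divergence. The Python sketch below is a toy illustration of these two metrics on random stand-in maps; it is not the UEyes evaluation code.

        # Toy illustration of two common saliency-evaluation metrics (CC and KL
        # divergence) on random stand-in maps; shapes and inputs are arbitrary.
        import numpy as np

        def normalize(sal):
            sal = sal - sal.min()
            return sal / (sal.sum() + 1e-12)

        def cc(pred, gt):
            """Pearson correlation between predicted and ground-truth maps."""
            return np.corrcoef(pred.ravel(), gt.ravel())[0, 1]

        def kl_divergence(pred, gt):
            """KL divergence of the ground truth from the prediction."""
            p, g = normalize(pred).ravel(), normalize(gt).ravel()
            return float(np.sum(g * np.log((g + 1e-12) / (p + 1e-12))))

        rng = np.random.default_rng(1)
        predicted = rng.random((48, 64))
        ground_truth = rng.random((48, 64))
        print(f"CC = {cc(predicted, ground_truth):.3f}, "
              f"KL = {kl_divergence(predicted, ground_truth):.3f}")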

    Early Visual Processing of Feature Saliency Tasks: A Review of Psychophysical Experiments

    The visual system is constantly bombarded with information from the outside world, but it cannot process all of the received information at any given time. Instead, the most salient parts of the visual scene are selected for processing involuntarily, immediately after the first glance, together with endogenous signals in the brain. Vision scientists have shown that the early visual system, from the retina to the lateral geniculate nucleus (LGN) and then the primary visual cortex, selectively processes the low-level features of the visual scene. Everything we perceive in the visual scene is based on these feature properties and their subsequent combination in higher visual areas. Different experiments have been designed to investigate the impact of these features on saliency and to understand the relevant visual mechanisms. In this paper, we review psychophysical experiments published in recent decades that indicate how low-level salient features are processed in the early visual cortex and how they extract the most important and basic information of the visual scene. Important open questions are also discussed, which could be pursued to investigate the impact of higher-level features on saliency in complex scenes or natural images.
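
    A toy numerical example of feature contrast may help illustrate the pop-out effects studied in these experiments: a single tilted element among vertical distractors yields the largest local orientation contrast. The Python sketch below is purely illustrative and not a model from the reviewed work.

        # Toy demonstration of feature-contrast "pop-out": one tilted element among
        # vertical distractors has the largest local orientation contrast.
        import numpy as np

        orientations = np.zeros((5, 5))   # a 5x5 array of vertical bars (0 degrees)
        orientations[2, 3] = 45           # a single tilted target

        def local_contrast(field, i, j):
            """Mean absolute orientation difference from the surrounding elements."""
            neighbours = [field[a, b]
                          for a in range(max(i - 1, 0), min(i + 2, field.shape[0]))
                          for b in range(max(j - 1, 0), min(j + 2, field.shape[1]))
                          if (a, b) != (i, j)]
            return float(np.mean(np.abs(np.array(neighbours) - field[i, j])))

        contrast = np.array([[local_contrast(orientations, i, j)
                              for j in range(5)] for i in range(5)])
        i, j = np.unravel_index(contrast.argmax(), contrast.shape)
        print(f"most salient element: ({i}, {j}) with contrast {contrast[i, j]:.1f} deg")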

    Understanding, Modeling, and Simulating the Discrepancy Between Intended and Perceived Image Appearance on Optical See-Through Augmented Reality Displays

    Augmented reality (AR) displays are transitioning from being primarily used in research and development settings to being used by the general public. With this transition, these displays will be used by more people, in many different environments, and in many different contexts. Like other displays, the user's perception of virtual imagery is influenced by the characteristics of the user's environment, creating a discrepancy between the intended appearance and the perceived appearance of virtual imagery shown on the display. However, this problem is much more apparent for optical see-through AR displays, such as the HoloLens. For these displays, imagery is superimposed onto the user's view of their environment, which can cause the imagery to become transparent and washed out in appearance from the user's perspective. Any change in the user's environment conditions or in the user's position introduces changes to the perceived appearance of the AR imagery, and current AR displays do not adapt to maintain a consistent perceived appearance of the imagery being displayed. Because of this, in many environments the user may misinterpret or fail to notice information shown on the display. In this dissertation, I investigate the factors that influence user perception of AR imagery and demonstrate examples of how the user's perception is affected for applications involving user interfaces, attention cues, and virtual humans. I establish a mathematical model that relates the user, their environment, their AR display, and AR imagery in terms of luminance or illuminance contrast. I demonstrate how this model can be used to classify the user's viewing conditions and identify problems the user is prone to experience when in these conditions. I demonstrate how the model can be used to simulate changes in the user's viewing conditions and to identify methods to maintain the perceived appearance of the AR imagery in changing conditions.
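
    One simplified instance of the kind of luminance-contrast relationship described here: on an optical see-through display the virtual image adds to the environment luminance, so its Weber contrast against the background falls as the environment brightens. The short Python sketch below uses made-up luminance values and is only an illustration of that relationship, not the dissertation's model.

        # Simplified, illustrative instance: additive AR imagery loses Weber
        # contrast against the real background as the environment gets brighter.
        # Numbers are made up for illustration.
        def weber_contrast_ost(display_luminance, background_luminance):
            """Weber contrast of additive AR imagery seen against the background."""
            perceived = display_luminance + background_luminance  # additive combination
            return (perceived - background_luminance) / background_luminance

        for background in (10, 100, 1000, 10000):   # cd/m^2: dim room -> daylight
            c = weber_contrast_ost(display_luminance=200, background_luminance=background)
            print(f"background {background:>6} cd/m^2 -> contrast {c:.2f}")

    With these made-up numbers the imagery goes from high contrast in a dim room to being nearly washed out in daylight, which is the kind of discrepancy the dissertation's model is designed to capture and simulate.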