
    Products as Affective Modifiers of Identities

    Are salesclerks seen as better, more powerful, or more active when they drive Mustangs? What about entrepreneurs? What about driving a mid-sized car? Intuitively, we have ideas about these, but much of the research on the affective nature of products focuses on purchasing, desires, and self-fulfillment. Drawing on symbolic interactionism, we argue that people's association with products has some basis in the impression management of their identity. For this to occur, there must be some cultural consensus about the way that products modify identities. Drawing on affect control theory's (ACT) methodology and equations, we measure the goodness, powerfulness, and activeness of several products, identities, and the associated product-modified identities to explore how products function as affective modifiers of identities. We find consistent effects across several types of technology products, whereby products pull the modified identity in the direction of the products' affective qualities. We also establish support for the ACT equations that predict how traits modify identities as having utility for predicting how products modify identities. This suggests that the opening questions can be answered empirically by measuring culture-specific sentiments of the identity and the product and by developing equations to predict the identity modification process.
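    As a rough illustration of the amalgamation logic described above, the sketch below combines a product's evaluation-potency-activity (EPA) profile with an identity's profile through a simple linear modifier equation of the kind ACT uses for traits. All ratings and coefficients are hypothetical placeholders, not the values measured or estimated in the study, and the diagonal form omits the cross-dimension terms of the full ACT equations.

```python
# Minimal sketch of an ACT-style modifier-identity amalgamation.
# All EPA ratings and weights below are illustrative placeholders,
# NOT the quantities estimated in the paper.

import numpy as np

# EPA profiles: (Evaluation, Potency, Activity), roughly on a -4.3..+4.3 scale.
IDENTITIES = {"salesclerk": np.array([0.9, -0.2, 0.5]),    # hypothetical ratings
              "entrepreneur": np.array([1.5, 1.8, 1.9])}
PRODUCTS = {"Mustang": np.array([1.2, 1.6, 2.3]),          # hypothetical ratings
            "mid-sized car": np.array([0.4, 0.1, -0.3])}

# Hypothetical amalgamation weights: constant, modifier weight, identity weight
# for each dimension (a simplified, diagonal version of ACT's trait equations).
CONST = np.array([-0.2, -0.1, 0.0])
W_MOD = np.array([0.5, 0.5, 0.4])
W_ID = np.array([0.6, 0.6, 0.5])

def modified_identity(product_epa, identity_epa):
    """Predict the EPA profile of a product-modified identity."""
    return CONST + W_MOD * product_epa + W_ID * identity_epa

for prod, p_epa in PRODUCTS.items():
    for ident, i_epa in IDENTITIES.items():
        e, p, a = modified_identity(p_epa, i_epa)
        print(f"{ident} driving a {prod}: E={e:+.2f} P={p:+.2f} A={a:+.2f}")
```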

    Snapshot navigation in the wavelet domain

    Many animals rely on robust visual navigation that can be explained by snapshot models, in which an agent is assumed to store egocentric panoramic images and subsequently use them to recover a heading by comparing current views to the stored snapshots. Long-range route navigation can also be explained by such models, by storing multiple snapshots along a training route and comparing the current image to these. For such models, memory capacity and comparison time increase dramatically with route length, rendering them infeasible for small-brained insects and low-power robots where computation and storage are limited. One way to reduce the requirements is to use a compressed image representation. Inspired by the filter-bank-like arrangement of the visual system, we investigate here how a frequency-based image representation influences the performance of a typical snapshot model. By decomposing views into wavelet coefficients at different levels and orientations, we achieve a compressed visual representation that remains robust when used for navigation. Our results indicate that route following based on wavelet coefficients is not only possible but gives increased performance over a range of other models.
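    To make the compressed-snapshot idea concrete, the sketch below stores wavelet-coefficient signatures of training views and recovers a heading by rotating the current view and minimizing the coefficient-space distance to the nearest stored signature. It assumes panoramic views given as 2-D grayscale arrays and uses PyWavelets; the wavelet family, decomposition level, and toy data are arbitrary choices rather than the settings used in the paper.

```python
# Minimal sketch of a wavelet-domain snapshot model, assuming panoramic
# views as 2-D grayscale arrays (rows = elevation, cols = azimuth).

import numpy as np
import pywt

def wavelet_signature(view, wavelet="haar", level=2):
    """Compress a panoramic view into a flat vector of wavelet coefficients."""
    coeffs = pywt.wavedec2(view, wavelet, level=level)
    arr, _ = pywt.coeffs_to_array(coeffs)
    return arr.ravel()

def recover_heading(current_view, stored_signatures, step_deg=4):
    """Rotate the current view in azimuth and return the rotation (degrees)
    whose wavelet signature best matches any stored snapshot."""
    n_cols = current_view.shape[1]
    best_deg, best_dist = None, np.inf
    for deg in range(0, 360, step_deg):
        shift = int(round(deg / 360 * n_cols))
        rotated = np.roll(current_view, shift, axis=1)
        sig = wavelet_signature(rotated)
        dist = min(np.sum((sig - s) ** 2) for s in stored_signatures)
        if dist < best_dist:
            best_deg, best_dist = deg, dist
    return best_deg

# Toy usage with random "views" standing in for real panoramas.
rng = np.random.default_rng(0)
route_views = [rng.random((32, 128)) for _ in range(5)]    # training snapshots
memory = [wavelet_signature(v) for v in route_views]
heading = recover_heading(np.roll(route_views[2], 10, axis=1), memory)
print("recovered heading (deg):", heading)
```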

    An Infomax algorithm can perform both familiarity discrimination and feature extraction in a single network.

    Psychological experiments have shown that the capacity of the brain for discriminating visual stimuli as novel or familiar is almost limitless. Neurobiological studies have established that the perirhinal cortex is critically involved in both familiarity discrimination and feature extraction. However, opinion is divided as to whether these two processes are performed by the same neurons. Previously proposed models have been unable to simultaneously extract features and discriminate familiarity for large numbers of stimuli. We show that a well-known model of visual feature extraction, Infomax, can perform familiarity discrimination and feature extraction simultaneously and efficiently. This model has a significantly larger capacity than previously proposed models combining these two processes, particularly when correlations exist between inputs, as is the case in the perirhinal cortex. Furthermore, we show that once the model has fully extracted features, its ability to perform familiarity discrimination increases markedly.
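    For orientation, the sketch below implements the standard natural-gradient Infomax learning rule and, purely for illustration, scores familiarity with the model's log-likelihood of an input. The familiarity measure, stimulus statistics, and parameters are assumptions on our part, not necessarily the formulation analysed in the paper.

```python
# Minimal sketch of the Bell-Sejnowski natural-gradient Infomax rule,
# with an illustrative familiarity score based on the model log-likelihood.

import numpy as np

rng = np.random.default_rng(1)
n = 16                          # number of input/output units
A = rng.normal(size=(n, n))     # mixing matrix defining shared "features"

# A fixed set of familiar stimuli (seen during training) and novel stimuli
# drawn from the same statistics (never seen).
familiar = rng.laplace(size=(50, n)) @ A.T
novel = rng.laplace(size=(50, n)) @ A.T

def logistic(u):
    return 1.0 / (1.0 + np.exp(-u))

def infomax_step(W, x, eta=0.01):
    """One natural-gradient Infomax update for a single input vector x."""
    u = W @ x
    y = logistic(u)
    return W + eta * (np.eye(len(u)) + np.outer(1.0 - 2.0 * y, u)) @ W

def log_likelihood(W, x):
    """log p(x) under the logistic source model; familiarity discrimination
    corresponds to this score tending to be higher for trained stimuli."""
    u = W @ x
    y = logistic(u)
    return np.linalg.slogdet(W)[1] + np.sum(np.log(y * (1.0 - y) + 1e-12))

W = np.eye(n)
for _ in range(20):             # repeated presentations of the familiar set
    for x in familiar:
        W = infomax_step(W, x)

print("mean familiar score:", np.mean([log_likelihood(W, x) for x in familiar]))
print("mean novel score:   ", np.mean([log_likelihood(W, x) for x in novel]))
```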

    Computational models can replicate the capacity of human recognition memory.

    The capacity of human recognition memory was investigated by Standing, who presented several groups of participants with different numbers of pictures (from 20 to 10 000) and subsequently tested their ability to distinguish between previously presented and novel pictures. The estimated number of pictures retained in recognition memory by the different groups, when plotted as a logarithmic function of the number of pictures presented, formed a straight line, representing a power-law relationship. Here, we investigate whether published models of familiarity discrimination can replicate Standing's results. We first consider the simplified assumption that visual stimuli are represented by uncorrelated patterns of firing of the visual neurons providing input to the familiarity discrimination network. We show that for this case, three models (Familiarity discrimination based on Energy (FamE), Anti-Hebbian, and Infomax) can reproduce the observed power-law relationship when their synaptic weights are appropriately initialized. For more realistic assumptions about the neural representation of stimuli, the FamE model is no longer able to reproduce the power-law relationship in simulations, while the Anti-Hebbian and Infomax models can. Nevertheless, the slopes of the power-law relationships produced by the models in all simulations differ from that observed by Standing. We discuss possible reasons for this difference, including separate contributions of familiarity and recollection processes, and describe experimentally testable predictions based on our analysis.
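    As a worked illustration of the power-law test described above, the short sketch below fits a straight line to log-transformed retention counts; the numbers are invented for illustration and are not Standing's data or the models' outputs.

```python
# If retained = c * presented**b, the points fall on a straight line in
# log-log coordinates; the slope b is the quantity compared across datasets.
# The values below are made-up illustrative counts.

import numpy as np

presented = np.array([20, 100, 1000, 10000])
retained = np.array([19, 88, 770, 6600])        # hypothetical counts

# Least-squares fit of log(retained) = log(c) + b * log(presented).
b, log_c = np.polyfit(np.log10(presented), np.log10(retained), 1)
print(f"power-law exponent (log-log slope): b = {b:.3f}")
print(f"prefactor: c = {10 ** log_c:.3f}")
```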