A Heuristic for Distance Fusion in Cover Song Identification
In this paper, we propose a method to integrate the results of different cover song identification algorithms into a single measure which, on average, gives better results than the initial algorithms. The different distance measures are fused by projecting them all into a multi-dimensional space, where the dimensionality of the space is the number of distances considered. In our experiments, we test two distance measures, namely Dynamic Time Warping and the Qmax measure, applied in different combinations to two features, namely a Salience feature and the Harmonic Pitch Class Profile (HPCP). While the HPCP is meant to extract a purely harmonic description, the Salience feature better discerns melodic differences. We show that combining two or more distance measures improves the overall performance.
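As a rough sketch of the projection idea described in the abstract: each candidate pair's distances form a point in an N-dimensional space, which can be collapsed back to a single fused distance. The min-max normalization and the Euclidean norm used here are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def fuse_distances(distance_lists):
    """Fuse several per-algorithm distances for the same candidate pairs.

    Each measure is min-max normalized to [0, 1] so no single scale dominates
    (an assumed normalization); each candidate pair then becomes a point in an
    N-dimensional space (N = number of measures), and the fused distance is
    that point's Euclidean norm from the origin.
    """
    d = np.asarray(distance_lists, dtype=float)  # shape: (n_measures, n_pairs)
    mins = d.min(axis=1, keepdims=True)
    spans = d.max(axis=1, keepdims=True) - mins
    spans[spans == 0] = 1.0  # guard against constant measures
    d_norm = (d - mins) / spans
    return np.linalg.norm(d_norm, axis=0)  # one fused distance per pair

# e.g. a DTW-style and a Qmax-style distance for three candidate pairs
fused = fuse_distances([[0.2, 0.9, 0.5], [10.0, 30.0, 20.0]])
```

The pair that is far under both measures ends up with the largest fused distance, which is the intended ranking behavior.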
Salience-based selection: attentional capture by distractors less salient than the target
Current accounts of attentional capture predict that the most salient stimulus is invariably selected first. However, existing salience and visual search models assume noise in the map computation or selection process. Consequently, they predict the first selection to be stochastically dependent on salience, implying that attention could even be captured first by the second most salient (instead of the most salient) stimulus in the field. Yet, capture by less salient distractors has not been reported, and salience-based selection accounts claim that the distractor has to be more salient in order to capture attention. We tested this prediction using empirical and modeling approaches to the visual search distractor paradigm. For the empirical part, we manipulated the salience of target and distractor parametrically and measured reaction time interference when a distractor was present compared to when it was absent. Reaction time interference was strongly correlated with distractor salience relative to the target. Moreover, even distractors less salient than the target captured attention, as measured by reaction time interference and oculomotor capture. In the modeling part, we simulated the first selection in the distractor paradigm using behavioral measures of salience and considering the time course of selection, including noise. We were able to replicate the result pattern we obtained in the empirical part. We conclude that each salience value follows a specific selection time distribution and that attentional capture occurs when the selection time distributions of target and distractor overlap. Hence, selection is stochastic in nature, and attentional capture occurs with a certain probability depending on relative salience.
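The overlap argument in this abstract can be illustrated with a toy simulation (not the authors' model: the inverse-salience mean selection times and the Gaussian noise are illustrative assumptions). Even when the distractor is less salient than the target, its noisy selection time sometimes comes first, so capture occurs with some probability rather than never.

```python
import random

def capture_probability(target_salience, distractor_salience,
                        trials=20000, noise=0.3):
    """Toy sketch: mean selection time is assumed to decrease with salience
    (here 1/salience), with Gaussian noise added per trial; 'capture' means
    the distractor's selection time beats the target's."""
    rng = random.Random(0)
    captures = 0
    for _ in range(trials):
        t_target = 1.0 / target_salience + rng.gauss(0, noise)
        t_distractor = 1.0 / distractor_salience + rng.gauss(0, noise)
        if t_distractor < t_target:
            captures += 1
    return captures / trials

# A distractor LESS salient than the target still captures on some trials
p = capture_probability(target_salience=2.0, distractor_salience=1.5)
```

Because the two selection-time distributions overlap, `p` is well above zero even though the distractor is the less salient item, matching the paper's qualitative conclusion.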
Multi-Sensor Image Fusion Based on Moment Calculation
An image fusion method based on salient features is proposed in this paper. In this work, we concentrate on the salient features of the images for fusion, in order to preserve all relevant information contained in the input images, while enhancing the contrast of the fused image and suppressing noise to the maximum extent. In our system, we first apply a mask to the two input images in order to conserve the high-frequency information, along with some low-frequency information, and to stifle noise. Thereafter, to identify salient features in the source images, a local moment is computed in the neighborhood of each coefficient. Finally, a decision map is generated from the local moments in order to obtain the fused image. To verify the proposed algorithm, we tested it on 120 sensor image pairs collected from the Manchester University UK database. The experimental results show that the proposed method provides superior fused images in terms of several quantitative fusion evaluation indices.
Comment: 5 pages, International Conferenc
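A minimal sketch of the moment-based decision map described above, assuming local variance as the "local moment" and a simple per-pixel winner-takes-all rule; both choices are illustrative, not necessarily the paper's exact design.

```python
import numpy as np

def fuse_by_local_moment(img_a, img_b, win=3):
    """Per-pixel fusion sketch: compute a local second moment (variance) in a
    win x win neighborhood around each coefficient, then keep the coefficient
    from whichever image is more 'salient' there (higher moment)."""
    pad = win // 2

    def local_var(img):
        p = np.pad(img.astype(float), pad, mode="reflect")
        h, w = img.shape
        out = np.empty((h, w), dtype=float)
        for i in range(h):
            for j in range(w):
                out[i, j] = p[i:i + win, j:j + win].var()
        return out

    # Decision map: True where image A's neighborhood has the larger moment
    decision = local_var(img_a) >= local_var(img_b)
    return np.where(decision, img_a, img_b)
```

For example, fusing a high-contrast image with a flat one keeps the high-contrast coefficients everywhere, since the flat image's local variance is zero.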
Towards Egocentric Person Re-identification and Social Pattern Analysis
Wearable cameras capture a first-person view of the daily activities of the
camera wearer, offering a visual diary of the user's behaviour. Detecting the
appearance of the people the camera wearer interacts with is of high interest
for the analysis of social interactions. Generally speaking, social events,
lifestyle and health are highly correlated, but there is a lack of tools to
monitor and analyse them. We consider that egocentric vision provides a tool
to obtain information about and understand users' social interactions. We
propose a model that enables us to evaluate and visualize social traits
obtained by analysing the appearance of social interactions within egocentric
photostreams. Given sets of egocentric images, we detect the appearance of
faces across the camera wearer's days, and rely on clustering algorithms to
group their feature descriptors in order to re-identify people. The recurrence
of detected faces within photostreams allows us to form an idea of the user's
social pattern of behaviour. We validated our model over several weeks of
recordings from different camera wearers. Our findings indicate that social
profiles are potentially useful for social behaviour interpretation.
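The re-identification step relies on clustering face descriptors; the sketch below uses a greedy cosine-similarity scheme with a fixed threshold as a stand-in for whichever clustering algorithm the authors actually use (both the scheme and the threshold are assumptions).

```python
import numpy as np

def cluster_descriptors(descriptors, threshold=0.7):
    """Greedy clustering sketch: each face descriptor joins the existing
    cluster whose centroid is most cosine-similar (above the threshold),
    otherwise it starts a new cluster, i.e. a new person identity."""
    clusters = []   # lists of descriptor indices, one list per identity
    centroids = []  # running (unnormalized) sum of member descriptors
    for idx, d in enumerate(descriptors):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        best, best_sim = None, threshold
        for c, cen in enumerate(centroids):
            sim = float(d @ (cen / np.linalg.norm(cen)))
            if sim >= best_sim:
                best, best_sim = c, sim
        if best is None:
            clusters.append([idx])
            centroids.append(d)
        else:
            clusters[best].append(idx)
            centroids[best] = centroids[best] + d  # update running centroid
    return clusters

# Two near-identical face descriptors and one very different one
groups = cluster_descriptors([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]])
```

Descriptors of the same face land in one cluster, so a face recurring across many days maps back to a single identity.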
Context and perceptual salience influence the formation of novel stereotypes via cumulative cultural evolution
We use a transmission chain method to establish how context and category salience influence the formation of novel stereotypes through cumulative cultural evolution. We created novel alien targets by combining features from three category dimensions (color, movement, and shape), thereby creating social targets that were individually unique but that also shared category membership with other aliens (e.g., two aliens might be the same color and shape but move differently). At the start of the transmission chains, each alien was randomly assigned attributes that described it (e.g., arrogant, caring, confident). Participants were given training on the alien-attribute assignments and were then tested on their memory for these. The alien-attribute assignments participants produced during the test were used as the training materials for the next participant in the transmission chain. As information was repeatedly transmitted, an increasingly simplified, learnable, stereotype-like structure emerged for targets who shared the same color, such that by the end of the chains targets who shared the same color were more likely to share the same attributes (a reanalysis of data from Martin et al., 2014, which we term Experiment 1). The apparent bias toward the formation of novel stereotypes around the color category dimension was also found for objects (Experiment 2). However, when the category dimension of color was made less salient, it no longer dominated the formation of novel stereotypes (Experiment 3). The current findings suggest that context and category salience influence category dimension salience, which in turn influences the cumulative cultural evolution of information.
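The transmission-chain dynamic can be illustrated with a toy iterated-learning simulation; the category-confusion mechanism and every parameter below are illustrative assumptions, not the experimental design. The idea is that imperfect recall occasionally substitutes the attribute of another alien sharing the salient category, so attributes gradually align within that category.

```python
import random

def transmission_chain(colors, n_attributes=6, generations=10,
                       confuse_p=0.3, seed=1):
    """Toy iterated-learning sketch: each 'participant' recalls alien-attribute
    pairs, but with probability confuse_p confuses an alien with a random
    alien of the same color, copying that alien's attribute. Over generations,
    same-colored aliens tend to converge on shared attributes, a simple
    stereotype-like structure."""
    rng = random.Random(seed)
    # Start of the chain: each alien gets a random attribute
    attrs = [rng.randrange(n_attributes) for _ in colors]
    for _ in range(generations):
        new_attrs = []
        for i, color in enumerate(colors):
            same_color = [j for j, c in enumerate(colors) if c == color]
            if rng.random() < confuse_p:
                new_attrs.append(attrs[rng.choice(same_color)])  # confusion
            else:
                new_attrs.append(attrs[i])  # accurate recall
        attrs = new_attrs
    return attrs

out = transmission_chain(["red"] * 4 + ["blue"] * 4, generations=300)
```

With many generations, each color group drifts toward a shared attribute, mirroring the color-based convergence the chains produced.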