
    Salient Objects in Clutter: Bringing Salient Object Detection to the Foreground

    We provide a comprehensive evaluation of salient object detection (SOD) models. Our analysis identifies a serious design bias in existing SOD datasets, which assume that each image contains at least one clearly outstanding salient object in low clutter. This design bias has led to saturated, high performance for state-of-the-art SOD models when they are evaluated on existing datasets; the models, however, still perform far from satisfactorily when applied to real-world daily scenes. Based on our analyses, we first identify 7 crucial aspects that a comprehensive and balanced dataset should fulfill. Then, we propose a new high-quality dataset and update the previous saliency benchmark. Specifically, our SOC (Salient Objects in Clutter) dataset includes images with salient and non-salient objects from daily object categories. Beyond object-category annotations, each salient image is accompanied by attributes that reflect common challenges in real-world scenes. Finally, we report an attribute-based performance assessment on our dataset. Comment: ECCV 2018
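    The attribute-based assessment mentioned above can be made concrete with a short sketch. This is a minimal illustration, not the paper's own protocol: the per-image metric (mean absolute error) and the attribute names are assumptions for the example.

```python
import numpy as np
from collections import defaultdict

def attribute_based_mae(pred_masks, gt_masks, attributes):
    """Average a per-image score over every attribute group an image belongs to.

    pred_masks, gt_masks -- lists of float saliency maps in [0, 1], one per image
    attributes           -- list of attribute-name sets, one per image,
                            e.g. {"small object", "heterogeneous clutter"} (illustrative names)
    """
    per_attribute = defaultdict(list)
    for pred, gt, attrs in zip(pred_masks, gt_masks, attributes):
        mae = float(np.mean(np.abs(pred - gt)))  # assumed per-image metric
        for attr in attrs:
            per_attribute[attr].append(mae)
    # One aggregate score per attribute subset of the dataset.
    return {attr: float(np.mean(scores)) for attr, scores in per_attribute.items()}
```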

    A graph theoretic approach to scene matching

    The ability to match two scenes is a fundamental requirement in a variety of computer vision tasks. A graph theoretic approach to inexact scene matching is presented which is useful in dealing with problems due to imperfect image segmentation. A scene is described by a set of graphs, with nodes representing objects and arcs representing relationships between objects. Each node has a set of values representing the relations between pairs of objects, such as angle, adjacency, or distance. With this method of scene representation, the task in scene matching is to match two sets of graphs. Because of segmentation errors, variations in camera angle, illumination, and other conditions, an exact match between the sets of observed and stored graphs is usually not possible. In the developed approach, the problem is represented as an association graph, in which each node represents a possible mapping of an observed region to a stored object, and each arc represents the compatibility of two mappings. Nodes and arcs have weights indicating the merit of a region-object mapping and the degree of compatibility between two mappings. A match between the two graphs corresponds to a clique, or fully connected subgraph, in the association graph. The task is to find the clique that represents the best match. Fuzzy relaxation is used to update the node weights using the contextual information contained in the arcs and neighboring nodes. This simplifies the evaluation of cliques. A method of handling oversegmentation and undersegmentation problems is also presented. The approach is tested with a set of realistic images which exhibit many types of segmentation errors.
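    The pipeline described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the function names are invented, a greedy selection stands in for the paper's clique evaluation, and the relaxation update is a generic averaging scheme rather than the exact rule used in the work.

```python
import numpy as np

def build_association_graph(merit, compat):
    """Association graph: one node per (observed region, stored object) mapping.

    merit[r, o]  -- initial merit of mapping observed region r to stored object o
    compat(a, b) -- compatibility in [0, 1] of two mappings a = (r1, o1), b = (r2, o2)
    """
    n_regions, n_objects = merit.shape
    nodes = [(r, o) for r in range(n_regions) for o in range(n_objects)]
    weights = np.array([merit[r, o] for r, o in nodes], dtype=float)
    compat_matrix = np.array([[compat(a, b) if a != b else 0.0 for b in nodes]
                              for a in nodes], dtype=float)
    return nodes, weights, compat_matrix

def fuzzy_relaxation(weights, compat_matrix, n_iter=10):
    """Pull each node weight toward the average support of its compatible neighbours."""
    w = weights.copy()
    for _ in range(n_iter):
        support = compat_matrix @ w / np.maximum(compat_matrix.sum(axis=1), 1e-9)
        w = np.clip(0.5 * (w + support), 0.0, 1.0)
    return w

def best_clique(nodes, weights, compat_matrix, threshold=0.5):
    """Greedy stand-in for clique search: take mappings in decreasing weight order,
    keeping each only if it is compatible with everything already chosen."""
    chosen = []
    for i in np.argsort(-weights):
        if all(compat_matrix[i, j] > threshold for j in chosen):
            chosen.append(int(i))
    return [nodes[i] for i in chosen]
```

    After relaxation the node weights already encode contextual support, so even this greedy selection tends to return a consistent set of region-object mappings; an exact maximum-weight clique search could replace it when the association graph is small.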

    I'm sorry to say, but your understanding of image processing fundamentals is absolutely wrong

    The ongoing discussion of whether modern vision systems have to be viewed as visually enabled cognitive systems or as cognitively enabled vision systems is groundless, because the perceptual and cognitive faculties of vision are separate components in the modeling of the human (and, consequently, artificial) information processing system. Comment: To be published as chapter 5 in "Frontiers in Brain, Vision and AI", I-TECH Publisher, Vienna, 200

    Identifying residential sub-markets using intra-urban migrations: the case study of Barcelona’s neighborhoods

    The dynamic evolution of the real estate market, as well as the growing sophistication of the interactions among the actors involved in it, has meant that, contrary to classical economic theory, the real estate market is increasingly thought of as a set of submarkets. Among other things, modeling a segmented housing market makes it possible, on the one hand, to design housing policies that are better adapted to the needs of the population and, on the other hand, to generate marketing and supply strategies oriented to specific population sectors. In theory, such strategies should behave as options with relatively low uncertainty, thus representing an attractive offer to all market players. In practice, however, the segmentation of the real estate market is usually modeled from the supply side. This paper therefore proposes a modeling based on observed preferences, seen through intra-urban migrations. In particular, we propose to model the market through Coombes' interaction value, scaling the results in order to visualize the submarket structure that results from a PAM (Partitioning Around Medoids) clustering.
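    The method the abstract outlines lends itself to a compact sketch. The following is only illustrative, under assumptions the abstract does not spell out: a standard Coombes-style interaction value computed from a migration flow matrix, a simple conversion to dissimilarities, and a Voronoi-iteration k-medoids standing in for full PAM; the flow numbers are hypothetical.

```python
import numpy as np

def coombes_interaction(flows):
    """Symmetric Coombes-style interaction value between zones (assumed formulation):
    T[i, j] = F[i, j]**2 / (O[i] * D[j]) + F[j, i]**2 / (O[j] * D[i]),
    where F holds migration flows, O total out-flows and D total in-flows."""
    out_tot = flows.sum(axis=1, keepdims=True)   # O, shape (n, 1)
    in_tot = flows.sum(axis=0, keepdims=True)    # D, shape (1, n)
    with np.errstate(divide="ignore", invalid="ignore"):
        t = flows ** 2 / (out_tot * in_tot) + flows.T ** 2 / (out_tot.T * in_tot.T)
    return np.nan_to_num(t)

def k_medoids(dist, k, n_iter=100, seed=0):
    """Simple k-medoids on a precomputed dissimilarity matrix (a Voronoi-iteration
    variant, not the full PAM swap search)."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(dist.shape[0], size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if members.size:
                new_medoids[c] = members[np.argmin(dist[np.ix_(members, members)].sum(axis=0))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(dist[:, medoids], axis=1)

# Hypothetical neighbourhood-to-neighbourhood migration counts (rows = origins).
flows = np.array([[ 0, 120, 10,  5],
                  [95,   0,  8,  4],
                  [12,   9,  0, 80],
                  [ 6,   3, 70,  0]], dtype=float)
interaction = coombes_interaction(flows)
dissimilarity = interaction.max() - interaction   # strong interaction -> small distance
np.fill_diagonal(dissimilarity, 0.0)
medoids, submarkets = k_medoids(dissimilarity, k=2)  # e.g. two candidate submarkets
```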

    From holism to compositionality: memes and the evolution of segmentation, syntax, and signification in music and language

    Get PDF
    Steven Mithen argues that language evolved from an antecedent he terms “Hmmmmm, [meaning it was] Holistic, manipulative, multi-modal, musical and mimetic”. Owing to certain innate and learned factors, a capacity for segmentation and cross-stream mapping in early Homo sapiens broke the continuous line of Hmmmmm, creating discrete replicated units which, with the initial support of Hmmmmm, eventually became the semantically freighted words of modern language. That which remained after what was a bifurcation of Hmmmmm arguably survived as music, existing as a sound stream segmented into discrete units, although one without the explicit and relatively fixed semantic content of language. All three types of utterance – the parent Hmmmmm, language, and music – are amenable to a memetic interpretation which applies Universal Darwinism to what are understood as language and musical memes. On the basis of Peter Carruthers’ distinction between ‘cognitivism’ and ‘communicativism’ in language, and William Calvin’s theories of cortical information encoding, a framework is hypothesized for the semantic and syntactic associations between, on the one hand, the sonic patterns of language memes (‘lexemes’) and of musical memes (‘musemes’) and, on the other hand, ‘mentalese’ conceptual structures, in Chomsky’s ‘Logical Form’ (LF).