
    Weakly Supervised Domain-Specific Color Naming Based on Attention

    The majority of existing color naming methods focus on the eleven basic color terms of the English language. However, many applications use different sets of color names for the accurate description of objects. Labeling data to learn these domain-specific color names is an expensive and laborious task. Therefore, in this article we aim to learn color names from weakly labeled data. For this purpose, we add an attention branch to the color naming network, which modulates the network's pixel-wise color naming predictions. In experiments, we illustrate that the attention branch correctly identifies the relevant regions. Furthermore, we show that our method obtains state-of-the-art results for pixel-wise and image-wise classification on the EBAY dataset and is able to learn color names for various domains.
    Comment: Accepted at ICPR201
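    The attention-modulated prediction described in this abstract can be sketched roughly as follows. This is a minimal NumPy illustration of the general idea — per-pixel color-name scores reweighted by a spatial attention map — not the paper's actual architecture; all names and shapes are assumptions.

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        # numerically stable softmax
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def image_level_prediction(pixel_logits, attention_logits):
        """Combine pixel-wise color-name predictions with a spatial
        attention map into one image-level color-name distribution.

        pixel_logits:     (H, W, C) per-pixel color-name scores
        attention_logits: (H, W)    per-pixel relevance scores
        """
        H, W, C = pixel_logits.shape
        # per-pixel distribution over the C color names
        pixel_probs = softmax(pixel_logits, axis=-1)           # (H, W, C)
        # attention normalized over all spatial locations
        attn = softmax(attention_logits.reshape(-1)).reshape(H, W)
        # attention-weighted average of the pixel predictions
        return (attn[..., None] * pixel_probs).sum(axis=(0, 1))  # (C,)

    rng = np.random.default_rng(0)
    probs = image_level_prediction(rng.normal(size=(4, 4, 11)),
                                   rng.normal(size=(4, 4)))
    ```

    Because the attention weights sum to one over the image, pixels the attention branch deems irrelevant contribute almost nothing to the image-level label, which is how weak (image-level) supervision can still train pixel-wise predictions.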

    CoRe: Color Regression for Multicolor Fashion Garments

    Developing deep networks that analyze fashion garments has many real-world applications. Among all fashion attributes, color is one of the most important yet most challenging to detect. Existing approaches are classification-based and thus cannot go beyond a list of discrete predefined color names. In this paper, we treat color detection as a regression problem and predict exact RGB values. To this end, our newly proposed architecture follows a first color classifier with a second regression stage for refinement. This second stage combines two attention models: the first depends on the type of clothing, the second on the color previously detected by the classifier. The final prediction is a weighted spatial pooling over the illumination-corrected RGB values of the image pixels. The architecture is modular and easily extended to detect the RGB values of all colors in a multicolor garment. In our experiments, we show the benefits of each component of our architecture.
    Comment: 6 pages, 3 figures, 1 table
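    The "weighted spatial pooling over the image pixels' RGB values" that this abstract describes can be sketched in a few lines. This is an illustrative NumPy reduction under stated assumptions (non-negative attention weights, an already illumination-corrected image), not the paper's implementation.

    ```python
    import numpy as np

    def weighted_rgb_pooling(image, weights):
        """Predict a garment's RGB value as a weighted spatial pooling
        of the image's pixel RGB values.

        image:   (H, W, 3) illumination-corrected RGB image, floats in [0, 1]
        weights: (H, W)    non-negative spatial attention weights
        """
        w = weights / weights.sum()                       # normalize to sum to 1
        return (w[..., None] * image).sum(axis=(0, 1))    # (3,) predicted RGB

    # toy example: attention concentrated on a single red pixel
    image = np.zeros((2, 2, 3))
    image[0, 0] = [1.0, 0.0, 0.0]                  # the red pixel
    weights = np.array([[10.0, 0.1], [0.1, 0.1]])  # attention peaks there
    rgb = weighted_rgb_pooling(image, weights)
    ```

    With the attention mass on the red pixel, the pooled prediction is close to pure red; running one such pooling per attended region is one plausible way the approach extends to multicolor garments.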

    Assessment of Neuropsychological Trajectories in Longitudinal Population-Based Studies of Children

    This paper provides a strategy for the assessment of brain function in longitudinal cohort studies of children. The proposed strategy invokes both domain-specific and omnibus intelligence test approaches. In order to minimise testing burden and practice effects, the cohort is divided into four groups, with one-quarter tested at 6-monthly intervals in the 0–2-year age range (at ages 6 months, 1.0, 1.5 and 2.0 years) and at annual intervals from ages 3–20 (one-quarter of the children at age 3, another at age 4, etc.). This strategy allows investigation of cognitive development and of the relationship between environmental influences and development at each age. It also allows introduction of new domains of function when age-appropriate. As far as possible, tests are used that will provide a rich source of both longitudinal and cross-sectional data. The testing strategy allows the introduction of novel tests and new domains, as well as piloting of tests when the test burden is relatively light. In addition to the recommended tests for each age and domain, alternative tests are described. Assessment methodology and knowledge about child cognitive development will change over the next 20 years, and strategies are suggested for altering the proposed test schedule as appropriate.

    Phrase Frequency Effects in Language Production

    A classic debate in the psychology of language concerns the grain-size of the linguistic information that is stored in memory. One view is that only morphologically simple forms are stored (e.g., ‘car’, ‘red’), and that more complex forms of language, such as multi-word phrases (e.g., ‘red car’), are generated on-line from the simple forms. In two experiments we tested this view. In Experiment 1, participants produced noun+adjective and noun+noun phrases elicited by experimental displays consisting of colored line drawings and of two superimposed line drawings. In Experiment 2, participants produced noun+adjective and determiner+noun+adjective utterances elicited by colored line drawings. In both experiments, naming latencies decreased with increasing frequency of the multi-word phrase and were unaffected by the frequency of the object name in the utterance. These results suggest that the language system is sensitive to the distribution of linguistic information at grain-sizes beyond individual words.

    Three symbol ungrounding problems: Abstract concepts and the future of embodied cognition

    A great deal of research has focused on the question of whether or not concepts are embodied as a rule. Supporters of embodiment have pointed to studies that implicate affective and sensorimotor systems in cognitive tasks, while critics of embodiment have offered nonembodied explanations of these results and pointed to studies that implicate amodal systems. Abstract concepts have tended to be viewed as an important test case in this polemical debate. This essay argues that we need to move beyond a pretheoretical notion of abstraction. Against the background of current research and theory, abstract concepts do not pose a single, unified problem for embodied cognition but, instead, three distinct problems: the problem of generalization, the problem of flexibility, and the problem of disembodiment. Identifying these problems provides a conceptual framework for critically evaluating, and perhaps improving upon, recent theoretical proposals.

    The rationality of vagueness


    Color inference from semantic labeling for person search in videos

    We propose an explainable model to generate semantic color labels for person search. In this context, persons are described by their semantic parts, such as hat, shirt, etc., and person search consists of looking for people based on these descriptions. In this work, we aim to improve the accuracy of color labels for people; our goal is to handle the high variability of human perception. Existing solutions are based on hand-crafted features or on learnt features that are not explainable; moreover, most of them focus only on a limited set of colors. We propose a method based on binary search trees and a large peer-labelled color name dataset, which allows us to synthesize the human perception of colors. Using semantic segmentation and our color labeling method, we label segments of pedestrians with their associated colors. We evaluate our solution on person search on datasets such as PCN, and show a precision as high as 80.4%.
    Comment: 8 pages, 7 figures, ICIAR 202
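    The core step this abstract describes — mapping a segmented region's pixels to a human color name — can be illustrated with a simplified nearest-reference lookup. This toy sketch stands in for the paper's binary-search-tree method and large peer-labelled dataset; the tiny reference table and all names here are assumptions for illustration only.

    ```python
    import numpy as np

    # A handful of reference colors (illustrative; the paper relies on a
    # large peer-labelled color-name dataset, not this toy table).
    REFERENCE = {
        "red":   (255, 0, 0),
        "green": (0, 128, 0),
        "blue":  (0, 0, 255),
        "black": (0, 0, 0),
        "white": (255, 255, 255),
    }

    def label_segment(pixels):
        """Assign a color name to a segment: average the segment's pixels,
        then pick the nearest reference color by Euclidean distance in RGB.
        (A simplified stand-in for the paper's tree-based lookup.)
        """
        mean_rgb = np.asarray(pixels, dtype=float).mean(axis=0)
        names = list(REFERENCE)
        dists = [np.linalg.norm(mean_rgb - np.array(REFERENCE[n]))
                 for n in names]
        return names[int(np.argmin(dists))]

    # pixels from a (hypothetical) segmented shirt region
    label = label_segment([(250, 10, 10), (240, 5, 0)])  # → "red"
    ```

    The appeal of a lookup against labelled reference colors, whatever the index structure, is explainability: every predicted label can be traced back to the reference entries that produced it.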

    Guidelines for writing definitions in ontologies

    Ontologies are being used increasingly to promote the reusability of scientific information by allowing heterogeneous data to be integrated under a common, normalized representation. Definitions play a central role in the use of ontologies both by humans and by computers. Textual definitions allow ontologists and data curators to understand the intended meaning of ontology terms and to use these terms in a consistent fashion across contexts. Logical definitions allow machines to check the integrity of ontologies and to reason over data annotated with ontology terms, making inferences that promote knowledge discovery. It is therefore important not only to include in ontologies multiple types of definitions, in both formal and natural languages, but also to ensure that these definitions meet good quality standards so that they are useful. While tools such as Protégé can assist in creating well-formed logical definitions, producing good definitions in a natural language is still to a large extent a matter of human ingenuity, supported at best by a small number of general principles. In the absence of more precise guidelines, definition authors are often left to their own devices. This paper aims to fill this gap by providing the ontology community with a set of principles and conventions to assist in definition writing, editing, and validation, drawing on existing definition-writing principles and guidelines in lexicography, terminology, and logic.