5,517 research outputs found

    A functional-cognitive framework for attitude research

    In attitude research, behaviours are often used as proxies for attitudes and attitudinal processes. This practice is problematic because it conflates the behaviours that need to be explained (explanandum) with the mental constructs that are used to explain these behaviours (explanans). In the current chapter we propose a meta-theoretical framework that resolves this problem by distinguishing between two levels of analysis. According to the proposed framework, attitude research can be conceptualised as the scientific study of evaluation. Evaluation is defined not in terms of mental constructs but in terms of elements in the environment, more specifically, as the effect of stimuli on evaluative responses. From this perspective, attitude research provides answers to two questions: (1) Which elements in the environment moderate evaluation? (2) What mental processes and representations mediate evaluation? Research on the first question provides explanations of evaluative responses in terms of elements in the environment (functional level of analysis); research on the second question offers explanations of evaluation in terms of mental processes and representations (cognitive level of analysis). These two levels of analysis are mutually supportive, in that better explanations at one level lead to better explanations at the other level. However, their mutually supportive relation requires a clear distinction between the concepts of their explanans and explanandum, which are conflated if behaviours are treated as proxies for mental constructs. The value of this functional-cognitive framework is illustrated by applying it to four central questions of attitude research.

    Dramaturgy in Archival Research: A Frame Analysis of Disciplinary Reconstruction in Sociology

    Research in the history of sociology has with few exceptions depended primarily on interviews, reminiscences, and information gleaned from published sources rather than upon archival data such as unpublished correspondence, manuscripts, diaries, and memos. Recently, however, Mary Jo Deegan (1988) and others have demonstrated the power of archival data for rehabilitating the history of American sociology. Archival research is not without its own set of pitfalls and problems, but archival data can at times provide needed corrections to the skewed and often self-serving historical images portrayed in many of the standard published accounts of our disciplinary history.

    A reusable iterative optimization software library to solve combinatorial problems with approximate reasoning

    Real-world combinatorial optimization problems such as scheduling are typically too complex to solve with exact methods. Additionally, the problems often have to observe vaguely specified constraints of different importance, the available data may be uncertain, and compromises between antagonistic criteria may be necessary. We present a combination of approximate reasoning based constraints and iterative optimization based heuristics that help to model and solve such problems in a framework of C++ software libraries called StarFLIP++. While initially developed to schedule continuous caster units in steel plants, we present in this paper results from reusing the library components in a shift scheduling system for the workforce of an industrial production plant.
    Comment: 33 pages, 9 figures; for a project overview see http://www.dbai.tuwien.ac.at/proj/StarFLIP
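The idea of scoring a candidate schedule against vaguely specified constraints of different importance can be sketched with fuzzy satisfaction degrees. This is an illustrative toy in Python, not the StarFLIP++ C++ API; the constraint functions, the shift data, and the weighted-average aggregation are all assumptions chosen for the example.

```python
def linear_satisfaction(value, ideal, tolerance):
    """Degree in [0, 1]: 1 at the ideal value, falling linearly to 0
    once the deviation exceeds the tolerance (a simple fuzzy membership)."""
    deviation = abs(value - ideal)
    return max(0.0, 1.0 - deviation / tolerance)

def evaluate_schedule(assignment, constraints):
    """Combine satisfaction degrees of vague constraints of different
    importance into one score via an importance-weighted average."""
    total_weight = sum(weight for _, weight in constraints)
    score = sum(weight * degree(assignment) for degree, weight in constraints)
    return score / total_weight

# Hypothetical shift assignment judged on workload and rest time.
shift = {"hours": 9, "rest": 10}
constraints = [
    (lambda s: linear_satisfaction(s["hours"], ideal=8, tolerance=4), 2.0),  # prefer ~8h shifts (important)
    (lambda s: linear_satisfaction(s["rest"], ideal=12, tolerance=6), 1.0),  # prefer ~12h rest (less so)
]
print(round(evaluate_schedule(shift, constraints), 4))  # → 0.7222
```

An iterative heuristic in this style would mutate the assignment and keep changes that raise the aggregate score, so a mildly violated soft constraint can be traded off against a more important one.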

    Semantic knowledge integration for learning from semantically imprecise data

    Low availability of labeled training data often poses a fundamental limit to the accuracy of computer vision applications using machine learning methods. While these methods are improved continuously, e.g., through better neural network architectures, there cannot be a single methodical change that increases the accuracy on all possible tasks. This statement, known as the no free lunch theorem, suggests that we should consider aspects of machine learning other than learning algorithms for opportunities to escape the limits set by the available training data. In this thesis, we focus on two main aspects, namely the nature of the training data, where we introduce structure into the label set using concept hierarchies, and the learning paradigm, which we change in accordance with requirements of real-world applications as opposed to more academic setups. Concept hierarchies represent semantic relations, which are sets of statements such as "a bird is an animal." We propose a hierarchical classifier to integrate this domain knowledge in a pre-existing task, thereby increasing the information the classifier has access to. While the hierarchy's leaf nodes correspond to the original set of classes, the inner nodes are "new" concepts that do not exist in the original training data. However, we posit that such "imprecise" labels are valuable and should occur naturally, e.g., as an annotator's way of expressing their uncertainty. Furthermore, the increased number of concepts leads to more possible search terms when assembling a web-crawled dataset or using an image search. We propose CHILLAX, a method that learns from semantically imprecise training data, while still offering precise predictions to integrate seamlessly into a pre-existing application.
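The core data structure here, a concept hierarchy whose inner nodes serve as imprecise labels, can be sketched in a few lines. This is a minimal illustration, not the CHILLAX implementation; the hierarchy contents and helper names are invented for the example.

```python
# A concept hierarchy as a child -> parent map; leaves are the
# original classes, inner nodes ("animal", "vehicle", "entity")
# are imprecise concepts an annotator might use.
hierarchy = {"bird": "animal", "dog": "animal", "car": "vehicle",
             "animal": "entity", "vehicle": "entity"}
leaves = {"bird", "dog", "car"}

def ancestors(concept):
    """All concepts above a node, e.g. bird -> {animal, entity}."""
    result = set()
    while concept in hierarchy:
        concept = hierarchy[concept]
        result.add(concept)
    return result

def admissible_leaves(imprecise_label):
    """Leaf classes consistent with an imprecise label: an annotator
    who writes 'animal' could mean any leaf below that node."""
    return {leaf for leaf in leaves
            if imprecise_label == leaf or imprecise_label in ancestors(leaf)}

print(admissible_leaves("animal"))  # → {'bird', 'dog'}
print(admissible_leaves("bird"))    # → {'bird'}
```

A training example labeled with an inner node then constrains the classifier to the corresponding leaf set rather than a single class, which is how imprecise labels still carry usable supervision.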

    Completeness and properness of refinement operators in inductive logic programming

    Within Inductive Logic Programming, refinement operators compute a set of specializations or generalizations of a clause. They are applied in model inference algorithms to search in a quasi-ordered set for clauses of a logical theory that consistently describes an unknown concept. Ideally, a refinement operator is locally finite, complete, and proper. In this article we show that if an element in a quasi-ordered set 〈S, ≥〉 has an infinite or incomplete cover set, then an ideal refinement operator for 〈S, ≥〉 does not exist. We translate the nonexistence conditions to a specific kind of infinite ascending and descending chains and show that these chains exist in unrestricted sets of clauses that are ordered by θ-subsumption. Next we discuss how the restriction to a finite ordered subset can enable the construction of ideal refinement operators. Finally, we define an ideal refinement operator for restricted θ-subsumption ordered sets of clauses.
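The three ideality properties can be illustrated on a deliberately restricted ordering where they are easy to verify. The sketch below is not the operator from the article: it uses ground clauses (sets of literals) ordered by subset inclusion, under which subsumption degenerates to C ⊆ D, and a downward operator that adds one literal at a time; the literal vocabulary is invented for the example.

```python
# Illustrative sketch: on a finite set of ground clauses ordered by
# subset inclusion, the "add one literal" operator is locally finite
# (finitely many refinements), complete (every strict specialization is
# reachable by repeated steps), and proper (each step is strictly more
# specific) -- the three properties of an ideal refinement operator.
LITERALS = {"p(a)", "q(a)", "r(a)"}

def subsumes(c, d):
    """Ground-clause subsumption degenerates to set inclusion: C ⊆ D."""
    return c <= d

def refine(clause):
    """Downward refinement: add one literal not yet in the clause."""
    return [frozenset(clause | {lit}) for lit in LITERALS - clause]

top = frozenset()  # the most general clause
for specialization in refine(top):
    print(sorted(specialization))
```

With variables and full θ-subsumption the cover sets are no longer this well-behaved, which is exactly the source of the nonexistence results the abstract describes.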