
    Parsing of Spoken Language under Time Constraints

    Spoken language applications in natural dialogue settings place serious requirements on the choice of processing architecture. Especially under adverse phonetic and acoustic conditions, parsing procedures must be developed which not only analyse the incoming speech in a time-synchronous and incremental manner, but are also able to schedule their resources according to the varying conditions of the recognition process. Depending on the actual degree of local ambiguity, the parser has to select among the available constraints in order to narrow down the search space with as little effort as possible. A parsing approach based on constraint satisfaction techniques is discussed. It provides important characteristics of the desired real-time behaviour and attempts to mimic some of the attention-focussing capabilities of the human speech comprehension mechanism.
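The incremental narrowing of the search space that the abstract describes can be illustrated with a toy word lattice pruned by a bigram constraint. Everything here (the slots, the bigram set, the function name) is invented for this sketch and is not the paper's parser:

```python
# A toy illustration of constraint-based pruning of an incremental word
# lattice (the slot/bigram setup is invented for this sketch).
def prune_incremental(slots, allowed_bigrams):
    """Process slots left to right, keeping only hypotheses that can
    follow at least one surviving hypothesis of the previous slot."""
    survivors = [set(slots[0])]
    for slot in slots[1:]:
        prev = survivors[-1]
        survivors.append({w for w in slot
                          if any((p, w) in allowed_bigrams for p in prev)})
    return survivors

slots = [{"I", "eye"}, {"scream", "screen"}, {"loudly"}]
bigrams = {("I", "scream"), ("scream", "loudly")}
pruned = prune_incremental(slots, bigrams)
print(pruned)  # the bigram constraint eliminates "screen"
```

Each slot is pruned as soon as it arrives, mirroring the time-synchronous, incremental processing the abstract calls for; a real system would of course propagate richer constraints in both directions.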

    The relationships between internal and external threat and right-wing attitudes: A three-wave longitudinal study

    The interplay between threat and right-wing attitudes has received much research attention, but its longitudinal relationship has hardly been investigated. In this study, we investigated the longitudinal relationships between internal and external threat and right-wing attitudes using a cross-lagged design at three different time points in a large nationally representative sample (N = 800). We found evidence for bidirectional relationships. Higher levels of external threat were related to higher levels of Right-Wing Authoritarianism (RWA) and to both the egalitarianism and dominance dimensions of Social Dominance Orientation at a later point in time. Conversely, higher levels of RWA were also related to increased perception of external threat later in time. Internal threat did not yield significant direct or indirect longitudinal relationships with right-wing attitudes. Theoretical and practical implications of these longitudinal effects are discussed.
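The core of a cross-lagged design is regressing each wave-2 variable on both wave-1 variables, so that the coefficient on the *other* construct captures the lagged cross-effect. A minimal sketch on synthetic data (the coefficients below are illustrative, not the study's estimates):

```python
import numpy as np

# Synthetic two-wave data with a known cross-lagged effect of threat on RWA.
rng = np.random.default_rng(0)
n = 800
threat1 = rng.normal(size=n)                  # external threat, wave 1
rwa1 = 0.3 * threat1 + rng.normal(size=n)     # RWA, wave 1
# wave 2 generated with known autoregressive (0.5) and cross-lagged (0.2) effects
rwa2 = 0.5 * rwa1 + 0.2 * threat1 + 0.3 * rng.normal(size=n)

X = np.column_stack([np.ones(n), threat1, rwa1])  # intercept + wave-1 scores
b_rwa2, *_ = np.linalg.lstsq(X, rwa2, rcond=None)
print(b_rwa2[1])  # cross-lagged effect of threat1 on rwa2, recovered near 0.2
```

A full analysis would fit both directions (and a third wave) simultaneously in a structural equation model; the single regression above only shows where the cross-lagged coefficient comes from.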

    Rough set and rule-based multicriteria decision aiding

    The aim of multicriteria decision aiding is to give the decision maker a recommendation concerning a set of objects evaluated from multiple points of view called criteria. Since a rational decision maker acts with respect to his/her value system, in order to recommend the most-preferred decision, one must identify the decision maker's preferences. In this paper, we focus on preference discovery from data concerning some past decisions of the decision maker. We consider the preference model in the form of a set of "if..., then..." decision rules discovered from the data by inductive learning. To structure the data prior to induction of rules, we use the Dominance-based Rough Set Approach (DRSA). DRSA is a methodology for reasoning about data, which handles ordinal evaluations of objects on considered criteria and monotonic relationships between these evaluations and the decision. We review applications of DRSA to a large variety of multicriteria decision problems.
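The "if..., then..." rules induced by DRSA are dominance-based: an "at least" rule fires when an object meets or exceeds a threshold on every criterion it mentions. A minimal sketch (the criteria, thresholds, and class names are made up for illustration):

```python
# A single "at least" decision rule in the DRSA spirit.
def rule_at_least(evaluation, thresholds):
    """Fire when the object is at least as good as every threshold."""
    return all(evaluation[c] >= t for c, t in thresholds.items())

# if math >= 7 and literature >= 6 then student is at least "good"
rule = {"math": 7, "literature": 6}
students = {
    "s1": {"math": 8, "literature": 6},
    "s2": {"math": 7, "literature": 5},
}
covered = {s for s, ev in students.items() if rule_at_least(ev, rule)}
print(covered)  # only s1 satisfies both elementary conditions
```

The monotonicity the abstract mentions is what licenses the `>=` form: better evaluations on criteria can never push an object into a worse class.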

    Generalized Discernibility Function Based Attribute Reduction in Incomplete Decision Systems

    A rough set approach for attribute reduction is an important research subject in data mining and machine learning. However, most attribute reduction methods are performed on a complete decision system table. In this paper, we propose methods for attribute reduction in static incomplete decision systems and in dynamic incomplete decision systems with dynamically-increasing and decreasing conditional attributes. Our methods use a generalized discernibility matrix and function in tolerance-based rough sets.
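A discernibility matrix under a tolerance relation can be sketched as follows. Here `'*'` marks a missing value, and two objects tolerate each other on an attribute when either value is missing or the values are equal; these conventions are common in tolerance-based rough sets but are assumptions here, not the paper's exact definitions:

```python
# Sketch of a discernibility matrix for an incomplete decision table
# under a tolerance relation ('*' marks a missing value).
def discernibility_matrix(objects, decisions, attrs):
    """For each pair with different decisions, collect the attributes on
    which the two objects have distinct, known values."""
    m = {}
    for i in range(len(objects)):
        for j in range(i + 1, len(objects)):
            if decisions[i] == decisions[j]:
                continue  # pairs with equal decisions need not be discerned
            m[(i, j)] = {a for a in attrs
                         if '*' not in (objects[i][a], objects[j][a])
                         and objects[i][a] != objects[j][a]}
    return m

objs = [{"a": 1, "b": "*"}, {"a": 1, "b": 2}, {"a": 0, "b": 2}]
dec = ["yes", "yes", "no"]
m = discernibility_matrix(objs, dec, ["a", "b"])
print(m)  # {(0, 2): {'a'}, (1, 2): {'a'}}
```

Reducts are then obtained from the discernibility function, the conjunction over all matrix entries of the disjunction of their attributes; in this toy table every entry contains `a`, so `{a}` alone already discerns all decision-relevant pairs.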

    Modeling the Psychology of Consumer and Firm Behavior with Behavioral Economics

    Marketing is an applied science that tries to explain and influence how firms and consumers actually behave in markets. Marketing models are usually applications of economic theories. These theories are general and produce precise predictions, but they rely on strong assumptions of rationality of consumers and firms. Theories based on rationality limits could prove similarly general and precise, while grounding theories in psychological plausibility and explaining facts which are puzzles for the standard approach. Behavioral economics explores the implications of limits of rationality. The goal is to make economic theories more plausible while maintaining formal power and accurate prediction of field data. This review focuses selectively on six types of models used in behavioral economics that can be applied to marketing. Three of the models generalize consumer preference to allow (1) sensitivity to reference points (and loss-aversion); (2) social preferences toward outcomes of others; and (3) preference for instant gratification (quasi-hyperbolic discounting). The three models are applied to industrial channel bargaining, salesforce compensation, and pricing of virtuous goods such as gym memberships. The other three models generalize the concept of game-theoretic equilibrium, allowing decision makers to make mistakes (quantal response equilibrium), encounter limits on the depth of strategic thinking (cognitive hierarchy), and equilibrate by learning from feedback (self-tuning EWA). These are applied to marketing strategy problems involving differentiated products, competitive entry into large and small markets, and low-price guarantees. The main goal of this selected review is to encourage marketing researchers of all kinds to apply these tools to marketing. Understanding the models and applying them is a technical challenge for marketing modelers, which also requires thoughtful input from psychologists studying details of consumer behavior. As a result, models like these could create a common language for modelers who prize formality and psychologists who prize realism.
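Of the six model families, quasi-hyperbolic discounting is the easiest to state in a few lines: a present-bias factor beta < 1 multiplies all future utilities on top of exponential discounting. The numbers below are illustrative, chosen only to show why a gym trade-off looks attractive in advance but not on the day:

```python
# Quasi-hyperbolic (beta-delta) discounting: beta < 1 creates present bias.
def qh_value(stream, beta=0.7, delta=0.95):
    """Present value of stream[t] = utility t periods from now:
    u_0 + beta * sum over t >= 1 of delta**t * u_t."""
    return stream[0] + beta * sum(delta**t * u
                                  for t, u in enumerate(stream) if t > 0)

now = qh_value([-10, 12])       # pay 10 today for a benefit of 12 tomorrow
ahead = qh_value([0, -10, 12])  # the same trade-off pushed one period out
print(now, ahead)               # negative today, positive when viewed ahead
```

The sign flip is the present bias: once both the cost and the benefit are in the future, beta scales them equally and the trade-off looks worthwhile, which is exactly the pattern exploited in the pricing of gym memberships.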

    A Maximum Entropy Procedure to Solve Likelihood Equations

    In this article, we provide initial findings regarding the problem of solving likelihood equations by means of a maximum entropy (ME) approach. Unlike standard procedures that require equating the score function of the maximum likelihood problem to zero, we propose an alternative strategy where the score is instead used as an external informative constraint on the maximization of the convex Shannon's entropy function. The problem involves the reparameterization of the score parameters as expected values of discrete probability distributions where probabilities need to be estimated. This leads to a simpler situation where parameters are searched in a smaller (hyper)simplex space. We assessed our proposal by means of empirical case studies and a simulation study, the latter involving the most critical case of logistic regression under data separation. The results suggested that the maximum entropy reformulation of the score problem solves the likelihood equation problem. Similarly, when maximum likelihood estimation is difficult, as is the case of logistic regression under separation, the maximum entropy proposal achieved results (numerically) comparable to those obtained by Firth's bias-corrected approach. Overall, these first findings reveal that a maximum entropy solution can be considered as an alternative technique to solve the likelihood equation.
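The building block of any such approach is maximizing Shannon entropy over a simplex subject to a moment constraint, whose solution is an exponential tilt of the uniform distribution. The sketch below shows only this generic building block (support, target mean, and bisection bounds are illustrative), not the paper's score reparameterization:

```python
import numpy as np

def maxent_given_mean(support, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    """Max-entropy pmf on `support` with E[X] = target_mean.
    The solution has the closed form p_i proportional to exp(lam * x_i);
    lam is found by bisection since the mean is increasing in lam."""
    x = np.asarray(support, dtype=float)

    def mean(lam):
        z = lam * x
        w = np.exp(z - z.max())     # shift for numerical stability
        p = w / w.sum()
        return p @ x

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    z = lam * x
    w = np.exp(z - z.max())
    return w / w.sum()

p = maxent_given_mean([0, 1, 2, 3], 1.5)
print(p)  # uniform, since 1.5 is the midpoint of the support
```

When the constrained mean sits at the centre of the support, the tilt parameter is zero and the maximum-entropy distribution is uniform, which is a convenient sanity check for an implementation.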

    Data-driven Inverse Optimization with Imperfect Information

    In data-driven inverse optimization an observer aims to learn the preferences of an agent who solves a parametric optimization problem depending on an exogenous signal. Thus, the observer seeks the agent's objective function that best explains a historical sequence of signals and corresponding optimal actions. We focus here on situations where the observer has imperfect information, that is, where the agent's true objective function is not contained in the search space of candidate objectives, where the agent suffers from bounded rationality or implementation errors, or where the observed signal-response pairs are corrupted by measurement noise. We formalize this inverse optimization problem as a distributionally robust program minimizing the worst-case risk that the predicted decision (i.e., the decision implied by a particular candidate objective) differs from the agent's actual response to a random signal. We show that our framework offers rigorous out-of-sample guarantees for different loss functions used to measure prediction errors and that the emerging inverse optimization problems can be exactly reformulated as (or safely approximated by) tractable convex programs when a new suboptimality loss function is used. We show through extensive numerical tests that the proposed distributionally robust approach to inverse optimization often attains better out-of-sample performance than the state-of-the-art approaches.
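The role of a suboptimality loss can be illustrated in a stripped-down setting: a finite action set, a scalar signal, and a grid search over candidate objectives. All details here (the action set, how the signal enters the cost, the grid) are illustrative assumptions, not the paper's distributionally robust formulation:

```python
import numpy as np

# Toy forward problem: the agent picks the cheapest action under its
# (unknown) objective weights, given an exogenous signal s.
rng = np.random.default_rng(1)
actions = rng.uniform(size=(50, 2))      # candidate decisions x in [0,1]^2
theta_true = np.array([1.0, 2.0])        # agent's true objective weights

def cost(theta, s):
    # cost of every action: theta . x, with the signal s discounting x[0]
    return actions @ theta - s * actions[:, 0]

signals = rng.uniform(0.0, 2.0, size=30)
observed = np.array([actions[np.argmin(cost(theta_true, s))] for s in signals])

def subopt_loss(theta):
    """Average suboptimality of the observed responses under a candidate
    objective theta (zero iff every observation is theta-optimal)."""
    total = 0.0
    for s, x in zip(signals, observed):
        total += (x @ theta - s * x[0]) - cost(theta, s).min()
    return total / len(signals)

# grid search over candidate objectives normalized so theta[0] = 1
grid = np.linspace(0.1, 4.0, 40)
best = min(grid, key=lambda t: subopt_loss(np.array([1.0, t])))
```

With noiseless observations the true objective attains zero suboptimality loss, and any other zero-loss candidate explains the data equally well; the paper's contribution is making this estimation robust when the observations are noisy and the true objective lies outside the search space.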