1,270 research outputs found

    Binary extensions and choice theory

    The computational complexity of boundedly rational choice behavior

    This research examines the computational complexity of two boundedly rational choice models that use multiple rationales to explain observed choice behavior. First, we show that rationalizability by K rationales, as introduced by Kalai, Rubinstein, and Spiegler (2002), is NP-complete for K greater than or equal to two. Second, we show that sequential rationalizability by K rationales, introduced by Manzini and Mariotti (2007), is NP-complete for K greater than or equal to three if choices are single-valued, and NP-complete for K greater than or equal to one if we allow for multi-valued choice correspondences. Motivated by these results, we present two binary integer feasibility programs that characterize the two boundedly rational choice models, and we compute their power. Finally, using results from descriptive complexity theory, we explain why it has been so difficult to obtain 'nice' characterizations for these models.
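    As a concrete illustration of rationalizability by K rationales, the following is a minimal brute-force sketch in Python: each observed choice c(A) must be ranked first within A by at least one of K linear orders. It is exponential-time and intended only for toy instances; it is not the binary integer feasibility program presented in the paper, and the function name and data layout are illustrative assumptions.

        from itertools import permutations, product

        def rationalizable_by_K(choices, alternatives, K):
            """choices: dict mapping frozenset(A) -> chosen alternative."""
            orders = list(permutations(alternatives))        # all linear orders
            for profile in product(orders, repeat=K):        # all K-tuples of rationales
                # every chosen element must be ranked first within its set
                # by at least one rationale in the profile
                if all(any(min(A, key=order.index) == chosen for order in profile)
                       for A, chosen in choices.items()):
                    return True
            return False

        # Toy data set over {x, y, z}: rationalizable with K = 2 but not with K = 1
        data = {frozenset({"x", "y"}): "x",
                frozenset({"y", "z"}): "y",
                frozenset({"x", "z"}): "z",
                frozenset({"x", "y", "z"}): "x"}
        print(rationalizable_by_K(data, ["x", "y", "z"], K=2))   # True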

    The computational complexity of rationalizing Pareto optimal choice behavior

    We consider a setting where a coalition of individuals chooses one or several alternatives from each set in a collection of choice sets. We examine the computational complexity of Pareto rationalizability. Pareto rationalizability requires that we can endow each individual in the coalition with a preference relation such that the observed choices are Pareto efficient. We differentiate between the situation where the choice function is considered to select all Pareto optimal alternatives from a choice set and the situation where it only contains one or several Pareto optimal alternatives. In the former case we find that Pareto rationalizability is an NP-complete problem. For the latter case we demonstrate that, if we have no additional information on the individual preference relations, then all choice behavior is Pareto rationalizable. However, if we have such additional information, then Pareto rationalizability is again NP-complete. Our results are valid for any coalition of size greater than or equal to two.
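    To make the former (NP-complete) notion concrete, here is a hedged brute-force sketch in Python: it asks whether some profile of linear orders makes each observed choice set coincide exactly with the set of Pareto-undominated alternatives. It is exponential-time, intended only for toy instances, and the names and data layout are illustrative assumptions rather than the paper's formulation.

        from itertools import permutations, product

        def pareto_set(A, profile):
            # alternatives in A that no other element of A Pareto-dominates,
            # i.e. no b is strictly preferred to a by every individual
            return {a for a in A
                    if not any(all(order.index(b) < order.index(a) for order in profile)
                               for b in A if b != a)}

        def pareto_rationalizable(choices, alternatives, n_individuals):
            """choices: dict mapping frozenset(A) -> set of chosen alternatives."""
            orders = list(permutations(alternatives))
            for profile in product(orders, repeat=n_individuals):
                if all(pareto_set(A, profile) == chosen for A, chosen in choices.items()):
                    return True
            return False

        # Toy example with a coalition of two individuals
        data = {frozenset({"x", "y", "z"}): {"x", "y"},
                frozenset({"y", "z"}): {"y"}}
        print(pareto_rationalizable(data, ["x", "y", "z"], n_individuals=2))   # True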

    Porting concepts from DNNs back to GMMs

    Deep neural networks (DNNs) have been shown to outperform Gaussian Mixture Models (GMMs) on a variety of speech recognition benchmarks. In this paper we analyze the differences between the DNN and GMM modeling techniques and port the best ideas from DNN-based modeling to a GMM-based system. By going both deep (multiple layers) and wide (multiple parallel sub-models) and by sharing model parameters, we are able to close the gap between the two modeling techniques on the TIMIT database. Since the 'deep' GMMs retain the maximum-likelihood trained Gaussians as the first layer, advanced techniques such as speaker adaptation and model-based noise robustness can be readily incorporated. Despite their similarities, the DNNs and the deep GMMs still show a sufficient amount of complementarity to allow effective system combination.
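    As a small illustration of what the maximum-likelihood Gaussian first layer mentioned above computes, the sketch below evaluates per-frame log-likelihoods of a diagonal-covariance GMM in numpy. The layer stacking, parameter sharing, and system combination of the paper are not reproduced; the shapes and names are assumptions for illustration.

        import numpy as np

        def gmm_frame_loglik(frames, weights, means, variances):
            """frames: (T, D); weights: (M,); means, variances: (M, D) -> (T,) log-likelihoods."""
            diff = frames[:, None, :] - means[None, :, :]                          # (T, M, D)
            # log of the diagonal-covariance Gaussian density per frame and component
            log_gauss = -0.5 * np.sum(diff**2 / variances
                                      + np.log(2.0 * np.pi * variances), axis=-1)  # (T, M)
            # log sum_m w_m N(x_t | mu_m, Sigma_m), computed with a stable log-sum-exp
            a = np.log(weights)[None, :] + log_gauss
            amax = a.max(axis=1, keepdims=True)
            return (amax + np.log(np.exp(a - amax).sum(axis=1, keepdims=True)))[:, 0]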

    A revealed preference analysis of the rational addiction model.

    We provide a revealed preference analysis of the rational addiction model. The revealed preference approach avoids the need to impose an a priori unverifiable functional form on the underlying utility function. Our results extend the previously established revealed preference characterizations for the life cycle model and the one-lag habits model. We show that our characterization is easily testable by means of linear programming methods, and we demonstrate its practical usefulness by means of an application to Spanish household consumption data.
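    To illustrate what "testable by means of linear programming methods" looks like in practice, here is a minimal sketch of the classical Afriat-style linear feasibility test for static utility maximization, written with scipy. It is not the rational addiction characterization of the paper (which extends the life cycle and habits conditions), and the function name and data layout are illustrative assumptions.

        import numpy as np
        from scipy.optimize import linprog

        def afriat_feasible(prices, quantities):
            """prices, quantities: (T, n) observed prices and bundles. True if the
            Afriat inequalities U_k <= U_j + lambda_j * p_j.(q_k - q_j) have a solution."""
            T = prices.shape[0]
            A_ub, b_ub = [], []            # variables: U_1..U_T, lambda_1..lambda_T
            for j in range(T):
                for k in range(T):
                    if j == k:
                        continue
                    # U_k - U_j - lambda_j * p_j.(q_k - q_j) <= 0
                    row = np.zeros(2 * T)
                    row[k] += 1.0
                    row[j] -= 1.0
                    row[T + j] = -prices[j] @ (quantities[k] - quantities[j])
                    A_ub.append(row)
                    b_ub.append(0.0)
            bounds = [(None, None)] * T + [(1.0, None)] * T     # lambda_j >= 1
            res = linprog(np.zeros(2 * T), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                          bounds=bounds, method="highs")
            return res.success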

    ECAPA-TDNN: Emphasized Channel Attention, Propagation and Aggregation in TDNN Based Speaker Verification

    Current speaker verification techniques rely on a neural network to extract speaker representations. The successful x-vector architecture is a Time Delay Neural Network (TDNN) that applies statistics pooling to project variable-length utterances into fixed-length speaker-characterizing embeddings. In this paper, we propose multiple enhancements to this architecture based on recent trends in the related fields of face verification and computer vision. Firstly, the initial frame layers can be restructured into 1-dimensional Res2Net modules with impactful skip connections. Similarly to SE-ResNet, we introduce Squeeze-and-Excitation blocks in these modules to explicitly model channel interdependencies. The SE block expands the temporal context of the frame layer by rescaling the channels according to global properties of the recording. Secondly, neural networks are known to learn hierarchical features, with each layer operating on a different level of complexity. To leverage this complementary information, we aggregate and propagate features of different hierarchical levels. Finally, we improve the statistics pooling module with channel-dependent frame attention, which enables the network to focus on different subsets of frames during each channel's statistics estimation. The proposed ECAPA-TDNN architecture significantly outperforms state-of-the-art TDNN-based systems on the VoxCeleb test sets and the 2019 VoxCeleb Speaker Recognition Challenge. Comment: Proceedings of INTERSPEECH 2020.
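    The channel-dependent attentive statistics pooling mentioned above can be sketched in a few lines of numpy: per channel, the frames are weighted by a softmax over attention scores, and a weighted mean and standard deviation are concatenated into a fixed-length embedding. This is a hedged sketch under stated assumptions, not the full ECAPA-TDNN module; the attention network that produces the scores (as well as the Res2Net and SE blocks) is omitted.

        import numpy as np

        def attentive_stats_pooling(features, scores):
            """features: (C, T) frame-level features; scores: (C, T) attention logits."""
            # softmax over the time axis, separately for every channel
            e = np.exp(scores - scores.max(axis=1, keepdims=True))
            alpha = e / e.sum(axis=1, keepdims=True)                  # (C, T)
            mean = (alpha * features).sum(axis=1)                     # weighted mean, (C,)
            var = (alpha * features**2).sum(axis=1) - mean**2         # weighted variance
            std = np.sqrt(np.clip(var, 1e-8, None))
            return np.concatenate([mean, std])                        # embedding, (2C,)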

    Improving large vocabulary continuous speech recognition by combining GMM-based and reservoir-based acoustic modeling

    In earlier work we have shown that good phoneme recognition is possible with a so-called reservoir, a special type of recurrent neural network. In this paper, different architectures based on Reservoir Computing (RC) for large vocabulary continuous speech recognition are investigated. Besides experiments with HMM hybrids, it is shown that an RC-HMM tandem can achieve the same recognition accuracy as a classical HMM, which is a promising result for such a fairly new paradigm. It is also demonstrated that a state-level combination of the scores of the tandem and the baseline HMM leads to a significant improvement over the baseline. A word error rate reduction of the order of 20% relative is possible.
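    For readers unfamiliar with the paradigm, the sketch below shows a minimal reservoir (echo state network) in numpy: a fixed, randomly initialized recurrent layer produces states, and only a linear readout is trained, here with ridge regression. It is an illustrative sketch under stated assumptions, not the RC-HMM hybrid or tandem architecture of the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def run_reservoir(inputs, n_reservoir=200, spectral_radius=0.9, input_scale=0.5):
            """inputs: (T, D) acoustic feature sequence -> reservoir states (T, n_reservoir)."""
            D = inputs.shape[1]
            W_in = rng.uniform(-input_scale, input_scale, (n_reservoir, D))
            W = rng.normal(size=(n_reservoir, n_reservoir))
            W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))   # echo state scaling
            states = np.zeros((inputs.shape[0], n_reservoir))
            x = np.zeros(n_reservoir)
            for t, u in enumerate(inputs):
                x = np.tanh(W_in @ u + W @ x)    # fixed random recurrence, never trained
                states[t] = x
            return states

        def train_readout(states, targets, ridge=1e-2):
            """Ridge-regression readout mapping reservoir states to (e.g. phoneme) targets."""
            return np.linalg.solve(states.T @ states + ridge * np.eye(states.shape[1]),
                                   states.T @ targets)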