
    Multilateral productivity comparisons and homotheticity

    In this paper it is shown that a well-known procedure (GEKS) for transitivizing a bilateral system of productivity comparisons is implicitly a way of imposing a homothetic structure onto the data. The main implication of this result is that deviations between the bilateral and the multilateral (GEKS) indexes can be interpreted as a measure of local deviation from the homotheticity assumption. This establishes an additional link between homotheticity and transitivity.
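    As a concrete illustration, the GEKS index between units j and k is the geometric mean, over all bridge units l, of the chained bilateral indexes B[j, l] * B[l, k]. A minimal sketch (the bilateral matrix B below is invented for illustration and assumed to satisfy reciprocity, B[k, j] = 1 / B[j, k]):

```python
import numpy as np

# Hypothetical bilateral productivity index matrix: B[j, k] compares
# unit j with unit k, with reciprocity B[k, j] = 1 / B[j, k].
B = np.array([
    [1.00,   1.20,   0.90],
    [1/1.20, 1.00,   0.80],
    [1/0.90, 1/0.80, 1.00],
])

def geks(B):
    """Transitivize a bilateral index matrix via the GEKS formula:
    G[j, k] = geometric mean over l of B[j, l] * B[l, k]."""
    n = B.shape[0]
    G = np.ones_like(B)
    for j in range(n):
        for k in range(n):
            G[j, k] = np.prod([B[j, l] * B[l, k] for l in range(n)]) ** (1.0 / n)
    return G

G = geks(B)
# Transitivity holds by construction: G[j, k] * G[k, m] == G[j, m].
assert np.allclose(G[0, 1] * G[1, 2], G[0, 2])
```

    The gap between B[j, k] and the transitivized G[j, k] is exactly the bilateral-multilateral deviation that the paper interprets as a local departure from homotheticity.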

    Efficiency decomposition for multi-level multi-components production technologies

    This paper addresses the efficiency measurement of firms composed of multiple components and assessed at different decision levels. In particular, it develops models for three levels of decision/production: the subunit (production division/process), the DMU (firm) and the industry (system). For each level, inefficiency is measured using a directional distance function, and the developed measures are contrasted with existing radial models. The paper also investigates how the efficiency scores computed at different levels relate to each other by proposing a decomposition into exhaustive and mutually exclusive components. The proposed method is illustrated using data on Portuguese hospitals. Since most of the topics addressed in this paper relate to more general network structures, avenues for future research are proposed and discussed.
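    A directional distance function is computed by a small linear program. The sketch below solves the standard single-level formulation under constant returns to scale, with the direction vector set to the unit's own input-output mix; the data are invented, and the paper's multi-level, multi-component structure is not shown:

```python
import numpy as np
from scipy.optimize import linprog

def directional_distance(X, Y, o):
    """Directional distance inefficiency of DMU o (constant returns),
    direction g = (x_o, y_o):
        max beta  s.t.  X @ lam <= x_o - beta * x_o,
                        Y @ lam >= y_o + beta * y_o,  lam >= 0.
    beta = 0 means DMU o is on the frontier."""
    m, n = X.shape
    s = Y.shape[0]
    x_o, y_o = X[:, o], Y[:, o]
    # Decision vector v = [beta, lam_1, ..., lam_n]; maximize beta.
    c = np.zeros(1 + n)
    c[0] = -1.0
    A_ub = np.zeros((m + s, 1 + n))
    A_ub[:m, 0] = x_o               # beta * x_o + X lam <= x_o
    A_ub[:m, 1:] = X
    A_ub[m:, 0] = y_o               # beta * y_o - Y lam <= -y_o
    A_ub[m:, 1:] = -Y
    b_ub = np.concatenate([x_o, -y_o])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

# Toy data: three DMUs (columns), one input, one output.
X = np.array([[2.0, 4.0, 4.0]])
Y = np.array([[2.0, 4.0, 2.0]])
# DMUs 0 and 1 are on the frontier (output/input = 1); DMU 2 is not.
```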

    Mixed up? That's good for motivation

    An essential ingredient in models of career concerns is ex ante uncertainty about an agent's type. This paper shows how career concerns can arise even in the absence of any such ex ante uncertainty, if the unobservable actions that an agent takes influence his future productivity. By implementing effort in mixed strategies, the principal can endogenously induce uncertainty about the agent's ex post productivity and generate reputational incentives. Our main result is that creating such ambiguity can be optimal for the principal, even though it exposes the agent to additional risk and reduces output. This finding demonstrates the importance of mixed strategies in contracting environments with imperfect commitment, in contrast with standard agency models, where implementing mixed-strategy actions is typically not optimal if pure strategies are also implementable.

    Prioritized Sweeping Neural DynaQ with Multiple Predecessors, and Hippocampal Replays

    During sleep and awake rest, the hippocampus replays sequences of place cells that were activated during prior experiences. These replays have been interpreted as a memory-consolidation process, but recent results suggest a possible interpretation in terms of reinforcement learning. The Dyna reinforcement learning algorithms use offline replays to improve learning. Under a limited replay budget, a prioritized sweeping approach, which requires a model of the transitions to predecessor states, can be used to improve performance. We investigate whether such algorithms can explain the experimentally observed replays. We propose a neural network version of prioritized sweeping Q-learning, for which we developed a growing multiple-expert algorithm able to cope with multiple predecessors. The resulting architecture improves the learning of simulated agents confronted with a navigation task. We predict that, in animals, learning the world model should occur during rest periods, and that the corresponding replays should be shuffled.
    Comment: Living Machines 2018 (Paris, France)
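    The core mechanics of prioritized sweeping with a predecessor model can be sketched in tabular form (the paper's contribution is a neural, multiple-predecessor version; the learning rate, thresholds, and toy chain below are hypothetical):

```python
import heapq
from collections import defaultdict

# Tabular prioritized-sweeping Dyna-Q sketch.
ALPHA, GAMMA, THETA = 0.5, 0.95, 1e-4
ACTIONS = [0]                     # single action, for the toy chain below

Q = defaultdict(float)            # Q[(s, a)] value estimates
model = {}                        # model[(s, a)] = (reward, next_state)
predecessors = defaultdict(set)   # predecessors[s2] = {(s, a) leading to s2}
queue = []                        # max-priority queue (priorities negated)

def priority(s, a, r, s2):
    """Absolute TD error of (s, a) under the current value table."""
    best_next = max(Q[(s2, b)] for b in ACTIONS)
    return abs(r + GAMMA * best_next - Q[(s, a)])

def observe(s, a, r, s2):
    """Record one real transition and queue it if surprising."""
    model[(s, a)] = (r, s2)
    predecessors[s2].add((s, a))
    p = priority(s, a, r, s2)
    if p > THETA:
        heapq.heappush(queue, (-p, (s, a)))

def sweep(budget):
    """Run up to `budget` prioritized offline replays."""
    for _ in range(budget):
        if not queue:
            break
        _, (s, a) = heapq.heappop(queue)
        r, s2 = model[(s, a)]
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        # Propagate the change backwards through every known predecessor.
        for (sp, ap) in predecessors[s]:
            rp, _ = model[(sp, ap)]
            p = priority(sp, ap, rp, s)
            if p > THETA:
                heapq.heappush(queue, (-p, (sp, ap)))

# Toy 3-state chain 0 -> 1 -> 2, reward 1 on reaching state 2.
observe(0, 0, 0.0, 1)
observe(1, 0, 1.0, 2)
sweep(budget=10)   # the rewarded transition replays first, then its predecessor
```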

    Adaptive cluster expansion for the inverse Ising problem: convergence, algorithm and tests

    We present a procedure to solve the inverse Ising problem, that is, to find the interactions between a set of binary variables from the measurement of their equilibrium correlations. The method consists of constructing and selecting specific clusters of variables, based on their contributions to the cross-entropy of the Ising model. Small contributions are discarded to avoid overfitting and to make the computation tractable. The properties of the cluster expansion and its performance on synthetic data are studied. To make the implementation easier, we give the pseudo-code of the algorithm.
    Comment: Paper submitted to Journal of Statistical Physics
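    The leading terms of such an expansion have closed forms. The sketch below computes the cluster contribution of a single pair of +/-1 spins from its measured magnetizations and pair correlation; the full algorithm recursively builds and selects larger clusters, which is not shown here:

```python
import numpy as np
from itertools import product

def entropy_1(m):
    """Entropy of a single +/-1 spin with magnetization m."""
    p = np.array([(1 + m) / 2, (1 - m) / 2])
    return -np.sum(p * np.log(p))

def entropy_2(mi, mj, cij):
    """Entropy of a spin pair with magnetizations mi, mj and
    pair correlation cij = <s_i s_j>."""
    s = 0.0
    for si, sj in product([1, -1], repeat=2):
        p = (1 + mi * si + mj * sj + cij * si * sj) / 4
        s -= p * np.log(p)
    return s

def pair_contribution(mi, mj, cij):
    """Cluster-expansion contribution of the pair {i, j}: the pair
    entropy minus the single-spin terms. A small |contribution|
    means the pair is discarded from the expansion."""
    return entropy_2(mi, mj, cij) - entropy_1(mi) - entropy_1(mj)

# Independent spins (cij = mi * mj) contribute nothing; correlated
# spins give a non-negligible (negative) contribution and are kept.
assert abs(pair_contribution(0.2, -0.1, 0.2 * -0.1)) < 1e-9
```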

    Principal component analysis of ensemble recordings reveals cell assemblies at high temporal resolution

    Simultaneous recordings of many single neurons reveal unique insights into network processing, spanning the timescale from single spikes to global oscillations. Neurons dynamically self-organize in subgroups of coactivated elements referred to as cell assemblies. Furthermore, these cell assemblies are reactivated, or replayed, preferentially during subsequent rest or sleep episodes, a proposed mechanism for memory trace consolidation. Here we employ Principal Component Analysis to isolate such patterns of neural activity. In addition, a measure is developed to quantify the similarity of instantaneous activity with a template pattern, and we derive theoretical distributions for the null hypothesis of no correlation between spike trains, allowing one to evaluate the statistical significance of instantaneous coactivations. Hence, when applied in an epoch different from the one where the patterns were identified (e.g. subsequent sleep), this measure allows one to identify times and intensities of reactivation. The distribution of this measure provides information on the dynamics of reactivation events: in sleep these occur as transients rather than as a continuous process.
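    A minimal sketch of the template-matching step on synthetic data (all sizes, rates, and the embedded assembly below are invented for illustration; the statistical-significance machinery is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 20 neurons x 1000 time bins; an assembly of the
# first 5 neurons co-activates in 100 randomly chosen bins.
n_neurons, n_bins = 20, 1000
spikes = rng.poisson(1.0, size=(n_neurons, n_bins)).astype(float)
assembly_bins = rng.choice(n_bins, size=100, replace=False)
spikes[:5, assembly_bins] += rng.poisson(3.0, size=(5, 100))

# Z-score each neuron, then take the leading principal component of
# the neuron-by-neuron correlation matrix as the assembly template.
z = (spikes - spikes.mean(axis=1, keepdims=True)) / spikes.std(axis=1, keepdims=True)
corr = z @ z.T / n_bins
eigvals, eigvecs = np.linalg.eigh(corr)
template = eigvecs[:, -1]

# Instantaneous similarity of activity with the template; zeroing the
# diagonal removes each neuron's own-rate contribution.
proj = np.outer(template, template)
np.fill_diagonal(proj, 0.0)
strength = np.einsum('it,ij,jt->t', z, proj, z)

# Bins containing the embedded assembly should score higher.
mask = np.zeros(n_bins, dtype=bool)
mask[assembly_bins] = True
assert strength[mask].mean() > strength[~mask].mean()
```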

    Modelling generalized firms' restructuring using inverse DEA

    The key consideration in firms’ restructuring is improving their operational efficiency. Market conditions often offer opportunities or pose threats that can be handled by restructuring scenarios: through consolidation, to create synergy, or through a split, to create reverse synergy. A generalized restructuring refers to a move in a business market where a homogeneous set of firms, a set of pre-restructuring decision making units (DMUs), proceeds with a restructuring to produce a new set of post-restructuring entities in the same market so as to meet efficiency targets. This paper aims to develop a novel inverse Data Envelopment Analysis based methodology, called GInvDEA (Generalized Inverse DEA), for modelling generalized restructuring. Moreover, the paper suggests a linear programming model that determines the lowest performance levels, measured by efficiency, that can be achieved through a given generalized restructuring. An application in banking operations illustrates the theory developed in the paper.
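    The efficiency scores such a restructuring targets are computed with a standard DEA program. The sketch below solves the classical input-oriented, constant-returns envelopment LP, a building block rather than the paper's GInvDEA model, on invented data:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR (constant-returns) efficiency of DMU o.
    X: inputs (m x n), Y: outputs (s x n), columns = DMUs.
    Solves: min theta  s.t.  X @ lam <= theta * X[:, o],
                             Y @ lam >= Y[:, o],  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision vector v = [theta, lam_1, ..., lam_n].
    c = np.zeros(1 + n)
    c[0] = 1.0
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, o]          # X lam - theta * x_o <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y               # -Y lam <= -y_o
    b_ub[m:] = -Y[:, o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

# Toy data: three DMUs (columns), one input, one output.
X = np.array([[2.0, 4.0, 4.0]])
Y = np.array([[2.0, 4.0, 2.0]])
# DMUs 0 and 1 are efficient (output/input = 1); DMU 2 could produce
# its output with half its input, so its efficiency is 0.5.
```

    Inverse DEA runs this logic in reverse: it fixes a target efficiency and solves for the inputs and outputs of the post-restructuring entities that achieve it.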