Multilateral productivity comparisons and homotheticity
In this paper it is shown that a well-known procedure (GEKS) for transitivizing a bilateral system of productivity comparisons is implicitly a way of imposing a homothetic structure onto the data. The main implication of this result is that deviations between the bilateral and the multilateral (GEKS) indexes can be interpreted as a measure of local deviation from the homotheticity assumption. This establishes an additional link between homotheticity and transitivity.
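For reference, below is a minimal sketch of the GEKS transitivization step the abstract refers to. The bilateral matrix, the numbers, and the function name geks are illustrative; the bilateral index is assumed to satisfy the reversal property B[k, j] = 1/B[j, k] (as a Fisher-type index does), which is what makes the GEKS result transitive.

```python
import numpy as np

def geks(bilateral):
    """GEKS transitivization: replace each bilateral comparison B[j, k] by the
    geometric mean, over all bridge units l, of the chained comparison
    B[j, l] * B[l, k]; the resulting matrix is transitive."""
    B = np.asarray(bilateral, dtype=float)
    m = B.shape[0]
    G = np.empty_like(B)
    for j in range(m):
        for k in range(m):
            G[j, k] = np.prod([B[j, l] * B[l, k] for l in range(m)]) ** (1.0 / m)
    return G

# Toy bilateral comparisons for three units, satisfying B[k, j] = 1 / B[j, k]
# but not transitivity.
B = np.array([[1.00,     1.10,     0.85],
              [1 / 1.10, 1.00,     0.78],
              [1 / 0.85, 1 / 0.78, 1.00]])
G = geks(B)
assert not np.isclose(B[0, 1] * B[1, 2], B[0, 2])   # bilateral chain is intransitive
assert np.isclose(G[0, 1] * G[1, 2], G[0, 2])       # GEKS chain is transitive
```

The gap between B and G, pair by pair, is the local deviation from homotheticity that the paper interprets.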
The decomposition of efficiency in parallel network production models
Kao (2012) proposed a method to decompose DMU efficiency into sub-unit efficiencies for parallel production systems. We provide a numerical example showing that the proposed method can yield negative sub-unit efficiency scores under variable returns to scale, contradicting both common sense and the standard postulate that these scores be non-negative. As a solution, we propose a decomposition based on the directional distance function that does not suffer from this problem. In particular, we recognize that the overall inefficiency of the DMU is composed of the sub-units' technical inefficiencies and a reallocation inefficiency component. The proposed method can also be applied to non-convex technologies, thereby providing a more general way to implement such a decomposition. Given the connection between the directional distance function and slack-based efficiency measurement, the method can easily be extended to this case as well.
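As a point of reference, here is a minimal sketch of the building block behind such a decomposition: the directional distance function of a single DMU under variable returns to scale, solved as a linear program with SciPy. The toy data, the direction vector, and the function name are illustrative; the paper's sub-unit and reallocation components are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def directional_distance(X, Y, o, gx, gy, vrs=True):
    """Directional distance function for DMU `o`:
    max beta  s.t.  sum_j lam_j x_j <= x_o - beta*gx,
                    sum_j lam_j y_j >= y_o + beta*gy,
                    sum_j lam_j = 1 (VRS), lam >= 0.
    Decision variables are [lam_1 .. lam_n, beta]; linprog minimizes -beta."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n = X.shape[0]
    c = np.zeros(n + 1)
    c[-1] = -1.0                                                    # maximize beta
    A_in = np.hstack([X.T, np.asarray(gx, float).reshape(-1, 1)])   # input constraints
    A_out = np.hstack([-Y.T, np.asarray(gy, float).reshape(-1, 1)]) # output constraints
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([X[o], -Y[o]])
    A_eq = b_eq = None
    if vrs:
        A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
        b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1))
    return res.x[-1]   # beta = 0 means the DMU lies on the frontier

# Toy data: 4 DMUs, one input, one output; direction g = (x_o, y_o).
X = np.array([[2.0], [4.0], [5.0], [6.0]])
Y = np.array([[1.0], [3.0], [3.5], [2.0]])
print(directional_distance(X, Y, o=3, gx=X[3], gy=Y[3]))
```

Because the directional distance function is additive in the direction g, inefficiencies measured this way can be summed across sub-units, which is what makes the decomposition in the paper well defined.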
Efficiency decomposition for multi-level multi-components production technologies
This paper addresses the efficiency measurement of firms composed of multiple components and assessed at different decision levels. In particular, it develops models for three levels of decision/production: the sub-unit (production division/process), the DMU (firm) and the industry (system). For each level, inefficiency is measured using a directional distance function, and the developed measures are contrasted with existing radial models. The paper also investigates how the efficiency scores computed at different levels are related to each other by proposing a decomposition into exhaustive and mutually exclusive components. The proposed method is illustrated using data on Portuguese hospitals. Since most of the topics addressed in this paper are related to more general network structures, avenues for future research are proposed and discussed.
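Purely to fix ideas, the additive shape of an exhaustive and mutually exclusive decomposition of this kind can be illustrated with made-up numbers; none of the figures or residual labels below come from the paper, they only show how lower-level inefficiencies plus level-specific residuals add up to the higher-level score.

```python
# Illustrative arithmetic only; the figures and the labelling of the residual
# terms are assumptions, not the paper's model or results.
subunit_ineff = {"hospital_A": [0.10, 0.05],   # inefficiency of each production division
                 "hospital_B": [0.00, 0.20]}
dmu_ineff = {d: sum(v) + 0.02 for d, v in subunit_ineff.items()}   # + within-DMU residual
industry_ineff = sum(dmu_ineff.values()) + 0.06                    # + industry-level residual
print(dmu_ineff, industry_ineff)   # {'hospital_A': 0.17, 'hospital_B': 0.22} 0.45
```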
The quality and efficiency of public service delivery in the UK and China
This paper examines the efficiency of public service delivery at the regional level in both the UK and China using a method based on data envelopment analysis (DEA) that measures aggregate country-level inefficiency. This country-level inefficiency is then decomposed into three components: (1) lack of best practices at the regional level; (2) quality of the public service delivery; and (3) potential efficiency gains realizable via reallocation of expenditure across regions. The empirical results indicate that most UK inefficiency comes from the reallocation effect, while most Chinese inefficiency is attributable to a lack of best practices; quality explains more of the expenditure variation in the UK than in China. The paper speculates about fiscal (de)centralization as a possible explanation for these differences.
Mixed up? That's good for motivation
An essential ingredient in models of career concerns is ex ante uncertainty about an agent's type. This paper shows how career concerns can arise even in the absence of any such ex ante uncertainty, if the unobservable actions that an agent takes influence his future productivity. By implementing effort in mixed strategies, the principal can endogenously induce uncertainty about the agent's ex post productivity and generate reputational incentives. Our main result is that creating such ambiguity can be optimal for the principal, even though this exposes the agent to additional risk and reduces output. This finding demonstrates the importance of mixed strategies in contracting environments with imperfect commitment, in contrast with standard agency models, where implementing mixed-strategy actions is typically not optimal if pure strategies are also implementable.
Prioritized Sweeping Neural DynaQ with Multiple Predecessors, and Hippocampal Replays
During sleep and awake rest, the hippocampus replays sequences of place cells that were activated during prior experiences. These replays have been interpreted as a memory consolidation process, but recent results suggest a possible interpretation in terms of reinforcement learning. The Dyna family of reinforcement learning algorithms uses off-line replays to improve learning. Under a limited replay budget, a prioritized sweeping approach, which requires a model of the transitions to the predecessors, can be used to improve performance. We investigate whether such algorithms can explain the experimentally observed replays. We propose a neural network version of prioritized sweeping Q-learning, for which we developed a growing multiple-expert algorithm able to cope with multiple predecessors. The resulting architecture improves the learning of simulated agents confronted with a navigation task. We predict that, in animals, learning of the world model should occur during rest periods, and that the corresponding replays should be shuffled.
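For orientation, here is a minimal tabular sketch of prioritized-sweeping Dyna-Q with an explicit predecessor model, in the standard Sutton-and-Barto form. The class name and hyper-parameters are illustrative; the paper's contribution is a neural-network, growing multiple-expert version of this scheme, not this tabular one.

```python
import heapq
from collections import defaultdict

class PrioritizedSweepingDynaQ:
    """Tabular Dyna-Q where planning backups are ordered by the size of the
    expected value change, sweeping backwards through known predecessors."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, theta=1e-4, n_planning=10):
        self.Q = defaultdict(float)            # Q[(state, action)]
        self.model = {}                        # (s, a) -> (reward, next state)
        self.predecessors = defaultdict(set)   # s' -> {(s, a) observed to lead to s'}
        self.actions, self.alpha, self.gamma = actions, alpha, gamma
        self.theta, self.n_planning = theta, n_planning
        self.queue = []                        # max-priority queue via negated priorities

    def _priority(self, s, a, r, s2):
        target = r + self.gamma * max(self.Q[(s2, b)] for b in self.actions)
        return abs(target - self.Q[(s, a)])

    def update(self, s, a, r, s2):
        # learn the (deterministic) world model and remember the predecessor link
        self.model[(s, a)] = (r, s2)
        self.predecessors[s2].add((s, a))
        p = self._priority(s, a, r, s2)
        if p > self.theta:
            heapq.heappush(self.queue, (-p, (s, a)))
        # planning phase: replay the highest-priority transitions from the model
        for _ in range(self.n_planning):
            if not self.queue:
                break
            _, (s1, a1) = heapq.heappop(self.queue)
            r1, s1n = self.model[(s1, a1)]
            target = r1 + self.gamma * max(self.Q[(s1n, b)] for b in self.actions)
            self.Q[(s1, a1)] += self.alpha * (target - self.Q[(s1, a1)])
            # sweep backwards: re-prioritize every known predecessor of s1
            for (sp, ap) in self.predecessors[s1]:
                rp, _ = self.model[(sp, ap)]
                pp = self._priority(sp, ap, rp, s1)
                if pp > self.theta:
                    heapq.heappush(self.queue, (-pp, (sp, ap)))
```

The backward sweep through predecessors is what produces the reverse-ordered replays that the paper compares with hippocampal recordings.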
Adaptive cluster expansion for the inverse Ising problem: convergence, algorithm and tests
We present a procedure to solve the inverse Ising problem, that is, to find the interactions between a set of binary variables from measurements of their equilibrium correlations. The method consists of constructing and selecting specific clusters of variables based on their contributions to the cross-entropy of the Ising model. Small contributions are discarded to avoid overfitting and to make the computation tractable. The properties of the cluster expansion and its performance on synthetic data are studied. To make the implementation easier, we give the pseudo-code of the algorithm.
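To make the per-cluster computation concrete, here is a hedged sketch of the kind of small-cluster inference the expansion relies on: an exact maximum-likelihood fit of fields and couplings for a handful of ±1 spins by brute-force enumeration (feasible only for small clusters, which is precisely the regime used inside the expansion). The function name and hyper-parameters are illustrative and this is not the paper's adaptive selection rule.

```python
import itertools
import numpy as np

def fit_small_cluster(samples, n_iter=2000, lr=0.1):
    """Fit fields h and couplings J of an Ising model (spins in {-1,+1}) to the
    empirical magnetizations and pair correlations of a small cluster, by
    gradient ascent on the exact log-likelihood (enumeration over 2^n states)."""
    S = np.asarray(samples, float)                # shape (n_samples, n_spins)
    n = S.shape[1]
    m_data = S.mean(axis=0)                       # empirical magnetizations
    C_data = S.T @ S / S.shape[0]                 # empirical pair correlations
    h, J = np.zeros(n), np.zeros((n, n))
    confs = np.array(list(itertools.product([-1, 1], repeat=n)), float)
    for _ in range(n_iter):
        E = confs @ h + 0.5 * np.einsum('ki,ij,kj->k', confs, J, confs)
        p = np.exp(E - E.max())
        p /= p.sum()                              # Boltzmann distribution over all states
        m_model = p @ confs
        C_model = confs.T @ (p[:, None] * confs)
        h += lr * (m_data - m_model)              # moment-matching gradient for fields
        dJ = lr * (C_data - C_model)              # moment-matching gradient for couplings
        J += dJ - np.diag(np.diag(dJ))            # keep the diagonal at zero
    return h, J
```

The fitted log-likelihood of such a cluster, compared with that of its sub-clusters, gives the cross-entropy contribution that the adaptive expansion keeps or discards.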
Principal component analysis of ensemble recordings reveals cell assemblies at high temporal resolution
Simultaneous recordings of many single neurons reveal unique insights into network processing, spanning the timescale from single spikes to global oscillations. Neurons dynamically self-organize into subgroups of coactivated elements referred to as cell assemblies. Furthermore, these cell assemblies are reactivated, or replayed, preferentially during subsequent rest or sleep episodes, a proposed mechanism for memory trace consolidation. Here we employ Principal Component Analysis to isolate such patterns of neural activity. In addition, a measure is developed to quantify the similarity of instantaneous activity with a template pattern, and we derive theoretical distributions for the null hypothesis of no correlation between spike trains, allowing one to evaluate the statistical significance of instantaneous coactivations. Hence, when applied in an epoch different from the one in which the patterns were identified (e.g. subsequent sleep), this measure allows one to identify the times and intensities of reactivation. The distribution of this measure provides information on the dynamics of reactivation events: in sleep these occur as transients rather than as a continuous process.
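A simplified sketch of this pipeline is given below: bin and z-score spike trains, extract principal components of the neuron-by-neuron correlation matrix in a template epoch, keep components above a random-matrix (Marchenko-Pastur) bound as candidate assemblies, and score instantaneous similarity to each pattern in a different epoch. Variable names, the squared-projection score and the significance threshold are illustrative simplifications, not the paper's exact measure.

```python
import numpy as np

def zscore_bins(spike_counts):
    """Z-score binned spike counts per neuron; input shape (n_neurons, n_bins)."""
    X = np.asarray(spike_counts, float)
    mu = X.mean(axis=1, keepdims=True)
    sd = X.std(axis=1, keepdims=True)
    return (X - mu) / np.where(sd > 0, sd, 1.0)

def assembly_patterns(template_counts):
    """Principal components of the template-epoch correlation matrix that exceed
    the Marchenko-Pastur bound for independent spike trains."""
    Z = zscore_bins(template_counts)
    n_neurons, n_bins = Z.shape
    corr = Z @ Z.T / n_bins
    eigval, eigvec = np.linalg.eigh(corr)
    q = n_bins / n_neurons
    lambda_max = (1.0 + 1.0 / np.sqrt(q)) ** 2      # null bound, assumes q >= 1
    return eigvec[:, eigval > lambda_max]           # one column per assembly pattern

def reactivation_strength(match_counts, patterns):
    """Squared projection of instantaneous (z-scored) activity onto each pattern,
    evaluated in a different epoch (e.g. subsequent sleep)."""
    Z = zscore_bins(match_counts)
    return (patterns.T @ Z) ** 2                    # shape (n_patterns, n_bins)

# Usage sketch: patterns = assembly_patterns(counts_task)
#               R = reactivation_strength(counts_sleep, patterns)
```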