Robustly Solvable Constraint Satisfaction Problems
An algorithm for a constraint satisfaction problem is called robust if it
outputs an assignment satisfying at least a (1 - g(ε))-fraction of the
constraints given a (1 - ε)-satisfiable instance, where g(ε) → 0 as
ε → 0. Guruswami and Zhou conjectured a characterization of constraint
languages for which the corresponding constraint satisfaction problem
admits an efficient robust algorithm. This paper confirms their
conjecture.
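As an illustration of the robustness notion (a sketch, not code from the paper; the instance encoding and helper name are invented), the fraction of constraints an assignment satisfies can be computed directly:

```python
def satisfied_fraction(constraints, assignment):
    """constraints: list of (scope, relation) pairs, where scope is a tuple
    of variable names and relation is the set of allowed value tuples."""
    satisfied = sum(
        1 for scope, relation in constraints
        if tuple(assignment[v] for v in scope) in relation
    )
    return satisfied / len(constraints)

# toy 2-colouring instance: three "not equal" constraints on a triangle
neq = {(0, 1), (1, 0)}
constraints = [(("x", "y"), neq), (("y", "z"), neq), (("x", "z"), neq)]
assignment = {"x": 0, "y": 1, "z": 0}   # a triangle is not 2-colourable
print(satisfied_fraction(constraints, assignment))  # 2 of 3 constraints hold
```

A robust algorithm guarantees this fraction is at least 1 - g(ε) whenever the instance is (1 - ε)-satisfiable.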
Application of the European Customer Satisfaction Index to Postal Services. Structural Equation Models versus Partial Least Squares
Customer satisfaction and retention are key issues for organizations in today's competitive marketplace. As such, much research and revenue has been invested in developing accurate ways of assessing consumer satisfaction at both the macro (national) and micro (organizational) level, facilitating comparisons in performance both within and between industries. Since the introduction of the national customer satisfaction indices (CSI), partial least squares (PLS) has been used to estimate the CSI models in preference to structural equation models (SEM) because it does not rely on strict assumptions about the data. However, this choice was based upon some misconceptions about the use of SEMs and does not take into consideration more recent advances in SEM, including estimation methods that are robust to non-normality and missing data. In this paper, both SEM and PLS approaches were compared by evaluating perceptions of the Isle of Man Post Office's products and customer service using a CSI format. The new robust SEM procedures were found to be advantageous over PLS. Product quality was found to be the only driver of customer satisfaction, while image and satisfaction were the only predictors of loyalty, thus arguing for the specificity of postal services.
Keywords: European Customer Satisfaction Index; ECSI; Structural Equation Models; Robust Statistics; Missing Data; Maximum Likelihood
Robust Satisfaction of Temporal Logic Specifications via Reinforcement Learning
We consider the problem of steering a system with unknown, stochastic
dynamics to satisfy a rich, temporally layered task given as a signal temporal
logic formula. We represent the system as a Markov decision process in which
the states are built from a partition of the state space and the transition
probabilities are unknown. We present provably convergent reinforcement
learning algorithms to maximize the probability of satisfying a given formula
and to maximize the average expected robustness, i.e., a measure of how
strongly the formula is satisfied. We demonstrate via a pair of robot
navigation simulation case studies that reinforcement learning with robustness
maximization performs better than probability maximization in terms of both
probability of satisfaction and expected robustness.
Comment: 8 pages, 4 figures
Robust satisfaction of temporal logic specifications via reinforcement learning
We consider the problem of steering a system with unknown, stochastic dynamics to satisfy a rich, temporally layered task given as a signal temporal logic formula. We represent the system as a finite-memory Markov decision process with unknown transition probabilities and whose states are built from a partition of the state space. We present provably convergent reinforcement learning algorithms to maximize the probability of satisfying a given specification and to maximize the average expected robustness, i.e., a measure of how strongly the formula is satisfied. Robustness allows us to quantify progress towards satisfying a given specification. We demonstrate via a pair of robot navigation simulation case studies that, due to the quantification of progress towards satisfaction, reinforcement learning with robustness maximization performs better than probability maximization in terms of both probability of satisfaction and expected robustness with a low number of training examples.
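The reinforcement-learning machinery these papers build on can be sketched with plain tabular Q-learning on a toy two-state MDP. This is a hedged illustration, not the papers' algorithm: the dynamics, reward, and hyperparameters are invented, and a simple reward stands in for the STL satisfaction or robustness objective.

```python
import random

random.seed(0)
states, actions = [0, 1], [0, 1]
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def step(s, a):
    # toy unknown dynamics: action 1 usually reaches the "goal" state 1
    s2 = 1 if (a == 1 and random.random() < 0.9) else 0
    return s2, (1.0 if s2 == 1 else 0.0)   # reward proxy for satisfaction

s = 0
for _ in range(2000):
    if random.random() < eps:
        a = random.choice(actions)                   # explore
    else:
        a = max(actions, key=lambda b: Q[(s, b)])    # exploit
    s2, r = step(s, a)
    # standard Q-learning update
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                          - Q[(s, a)])
    s = s2

# after training, the learned values prefer the goal-reaching action
```

In the papers, the states of such an MDP come from a partition of the continuous state space and the objective is the probability or expected robustness of satisfying an STL formula, rather than a per-step reward.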
Learning-based predictive control for linear systems: a unitary approach
A comprehensive approach addressing identification and control for
learning-based Model Predictive Control (MPC) for linear systems is presented.
The design technique yields a data-driven MPC law, based on a dataset collected
from the working plant. The method is indirect, i.e., it relies on a model
learning phase and a model-based control design phase, devised in an integrated
manner. In the model learning phase, a twofold outcome is achieved: first,
different optimal p-steps ahead prediction models are obtained, to be used in
the MPC cost function; secondly, a perturbed state-space model is derived, to
be used for robust constraint satisfaction. Resorting to Set Membership
techniques, a characterization of the bounded model uncertainties is obtained,
which is a key feature for a successful application of the robust control
algorithm. In the control design phase, a robust MPC law is proposed, able to
track piece-wise constant reference signals, with guaranteed recursive
feasibility and convergence properties. The controller embeds multistep
predictors in the cost function, ensures robust constraint satisfaction
thanks to the learnt uncertainty model, and can deal with possibly
infeasible reference values. The proposed approach is finally tested in a
numerical example.
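The p-steps-ahead prediction idea can be sketched as follows (a hedged illustration with an invented scalar system, not the paper's Set Membership procedure): a p-step linear predictor is fit to plant data by least squares, and in the noise-free case it recovers the true p-step dynamics exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
T, p = 200, 3
u = rng.uniform(-1, 1, T)            # exciting input sequence
y = np.zeros(T)
for k in range(T - 1):               # toy stable plant (assumed, not the paper's)
    y[k + 1] = 0.8 * y[k] + 0.5 * u[k]

# regressor [y[k], u[k], u[k+1], u[k+2]] -> target y[k+p]
rows = range(T - p)
Phi = np.array([[y[k], u[k], u[k + 1], u[k + 2]] for k in rows])
target = np.array([y[k + p] for k in rows])
theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)

# unrolling the plant gives the true 3-step model:
# y[k+3] = 0.8^3 y[k] + 0.8^2*0.5 u[k] + 0.8*0.5 u[k+1] + 0.5 u[k+2]
print(theta)   # ≈ [0.512, 0.32, 0.4, 0.5]
```

With noisy data, the fit is only approximate, which is where the paper's Set Membership characterization of the bounded model uncertainty comes in.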
Formal Synthesis of Control Strategies for Positive Monotone Systems
We design controllers from formal specifications for positive discrete-time
monotone systems that are subject to bounded disturbances. Such systems are
widely used to model the dynamics of transportation and biological networks.
The specifications are described using signal temporal logic (STL), which can
express a broad range of temporal properties. We formulate the problem as a
mixed-integer linear program (MILP) and show that under the assumptions made in
this paper, which are not restrictive for traffic applications, the existence
of open-loop control policies is sufficient and almost necessary to ensure the
satisfaction of STL formulas. We establish a relation between satisfaction of
STL formulas in infinite time and set-invariance theories and provide an
efficient method to compute robust control invariant sets in high dimensions.
We also develop a robust model predictive framework to plan controls optimally
while ensuring the satisfaction of the specification. Illustrative examples and
a traffic management case study are included.
Comment: To appear in IEEE Transactions on Automatic Control (TAC) (2018), 16
pages, double column
Performance pay, sorting and the dimensions of job satisfaction
This paper investigates the influence of performance-related pay on several dimensions of job satisfaction. In cross-sectional estimates, performance-related pay is associated with increased overall satisfaction, satisfaction with pay, satisfaction with job security and satisfaction with hours. It appears to be negatively associated with satisfaction with the work itself. Yet, after accounting for worker fixed effects, the positive associations remain and the negative association vanishes. These results appear robust to a variety of alternative specifications and support the notion that performance pay allows increased opportunities for worker optimization and does not generally demotivate workers or crowd out intrinsic motivation.
Q-learning for robust satisfaction of signal temporal logic specifications
This paper addresses the problem of learning optimal policies for satisfying signal temporal logic (STL) specifications by agents with unknown stochastic dynamics. The system is modeled as a Markov decision process, in which the states represent partitions of a continuous space and the transition probabilities are unknown. We formulate two synthesis problems where the desired STL specification is enforced by maximizing the probability of satisfaction, and the expected robustness degree, that is, a measure quantifying the quality of satisfaction. We discuss why Q-learning is not directly applicable to these problems: based on the quantitative semantics of STL, the probability of satisfaction and expected robustness degree are not in the standard objective form of Q-learning. To resolve this issue, we propose an approximation of STL synthesis problems that can be solved via Q-learning, and we derive some performance bounds for the policies obtained by the approximate approach. The performance of the proposed method is demonstrated via simulations.
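The robustness degree referred to above comes from the STL quantitative semantics; for the simplest temporal operators over a finite trace it reduces to a worst-case or best-case margin. A minimal sketch (function names are invented; only the "always x > c" and "eventually x > c" cases are shown):

```python
def rob_always_gt(signal, c):
    """Robustness of "always x > c": the worst-case margin over the trace."""
    return min(x - c for x in signal)

def rob_eventually_gt(signal, c):
    """Robustness of "eventually x > c": the best-case margin over the trace."""
    return max(x - c for x in signal)

trace = [0.5, 1.2, 0.8, 2.0]
print(rob_always_gt(trace, 0.0))      # 0.5: satisfied, worst-case margin 0.5
print(rob_eventually_gt(trace, 1.5))  # 0.5: the peak exceeds 1.5 by 0.5
```

A positive value means the formula is satisfied with that margin, a negative value means it is violated; this signed margin is what makes robustness a useful learning objective compared with a 0/1 satisfaction signal.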