A Supervisor for Control of Mode-switch Processes
Many processes operate around only a limited number of operating points. To achieve adequate control around each operating point, an adaptive controller could be used. When the operating point changes often, however, a large number of parameters would have to be adapted over and over again. This makes conventional adaptive control, which is better suited to processes with slowly changing parameters, unattractive for such applications. Furthermore, continuous adaptation is not always needed or desired. An extension of adaptive control is presented in which the process behaviour at each operating point can be stored in a memory, retrieved from it, and evaluated. These functions are coordinated by a 'supervisor'; the concept is referred to as a supervisor for control of mode-switch processes. It leads to an adaptive control structure which quickly adjusts the controller parameters by retrieving old information, without the need to relearn fully each time. The approach has been tested on experimental set-ups of a flexible beam and a flexible two-link robot arm, but it is directly applicable to other processes, for instance in the (petro)chemical industry.
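The store-and-retrieve idea described in this abstract can be sketched in a few lines. The class and function names below are hypothetical illustrations, not taken from the paper:

```python
# Hypothetical sketch of the supervisor idea: controller parameters learned
# at each operating point are cached, so a revisited mode is restored
# instantly instead of being relearned by the adaptive layer.
class ModeSwitchSupervisor:
    def __init__(self):
        self.memory = {}  # operating point -> stored controller parameters

    def on_mode_change(self, mode, adapt_fn):
        """Retrieve stored parameters for `mode`, or adapt from scratch."""
        if mode in self.memory:
            return self.memory[mode]          # fast retrieval, no relearning
        params = adapt_fn(mode)               # fall back to full adaptation
        self.memory[mode] = params            # store for future visits
        return params

calls = []
def slow_adaptation(mode):
    """Stand-in for a slow adaptive-control identification run."""
    calls.append(mode)
    return {"gain": 2.0 * mode}

sup = ModeSwitchSupervisor()
p1 = sup.on_mode_change(1, slow_adaptation)   # adapts and stores
p2 = sup.on_mode_change(1, slow_adaptation)   # retrieved from memory
```

The second mode change returns the cached parameters without invoking the slow adaptation again, which is the claimed benefit over continuous adaptation.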
On the interpretation and identification of dynamic Takagi-Sugeno fuzzy models
Dynamic Takagi-Sugeno fuzzy models are not always easy to interpret, in particular when they are identified from experimental data. It is shown that there exists a close relationship between dynamic Takagi-Sugeno fuzzy models and dynamic linearization when affine local model structures are used, which suggests that a solution to the multiobjective identification problem exists. However, it is also shown that the affine local model structure is a highly sensitive parametrization when applied in transient operating regimes. Due to the multiobjective nature of the identification problem studied here, special considerations must be made during model structure selection, experiment design, and identification in order to meet both objectives. Some guidelines for experiment design are suggested, and some robust nonlinear identification algorithms are studied. These include constrained and regularized identification and locally weighted identification. Their usefulness in the present context is illustrated by examples.
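As a minimal illustration of a Takagi-Sugeno model built from affine local models (assuming Gaussian membership functions and a one-dimensional input; this is generic background, not the paper's identification procedure):

```python
import math

# Illustrative sketch: a Takagi-Sugeno model blends affine local models
# y_i = a_i*x + b_i with normalized membership weights around rule centers.
def ts_predict(x, rules):
    """rules: list of (center, width, a, b); local model is y = a*x + b."""
    weights = [math.exp(-((x - c) / s) ** 2) for c, s, _, _ in rules]
    total = sum(weights)
    return sum(w * (a * x + b) for w, (_, _, a, b) in zip(weights, rules)) / total

# Two affine local models around operating points x=0 and x=10.
rules = [(0.0, 2.0, 1.0, 0.0), (10.0, 2.0, 3.0, -20.0)]
y0 = ts_predict(0.0, rules)    # dominated by the first local model
y10 = ts_predict(10.0, rules)  # dominated by the second local model
```

Near each rule center the output approaches the corresponding affine model, which is the interpretation-as-local-linearization relationship the abstract refers to.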
A Framework for Uplink Intercell Interference Modeling with Channel-Based Scheduling
This paper presents a novel framework for modeling the uplink intercell
interference (ICI) in a multiuser cellular network. The proposed framework
assists in quantifying the impact of various fading channel models and
state-of-the-art scheduling schemes on the uplink ICI. Firstly, we derive a
semianalytical expression for the distribution of the location of the scheduled
user in a given cell considering a wide range of scheduling schemes. Based on
this, we derive the distribution and moment generating function (MGF) of the
uplink ICI considering a single interfering cell. Consequently, we determine
the MGF of the cumulative ICI observed from all interfering cells and derive
explicit MGF expressions for three typical fading models. Finally, we utilize
the obtained expressions to evaluate important network performance metrics such
as the outage probability, ergodic capacity, and average fairness numerically.
Monte-Carlo simulation results are provided to demonstrate the efficacy of the
derived analytical expressions.
Comment: IEEE Transactions on Wireless Communications, 2013. arXiv admin note: substantial text overlap with arXiv:1206.229
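The kind of Monte-Carlo validation mentioned in the abstract can be sketched for a single interfering cell. The geometry and fading model below are assumptions for illustration, not the paper's exact setup:

```python
import random

# Monte-Carlo sketch (assumed model): uplink ICI from one interfering cell,
# modeled as path loss d^-alpha times Rayleigh fading power (exponential).
random.seed(0)

def sample_mean_ici(n, alpha=3.0):
    """Interferer at uniform distance d in [0.5, 1.5] (normalized cell radii)
    from the victim base station; fading power g ~ Exp(1)."""
    total = 0.0
    for _ in range(n):
        d = random.uniform(0.5, 1.5)
        g = random.expovariate(1.0)     # Rayleigh amplitude -> exponential power
        total += g * d ** -alpha
    return total / n

mean_ici = sample_mean_ici(100_000)
# Analytical check: E[g] = 1 and, for d ~ U[0.5, 1.5] with alpha = 3,
# E[d^-3] = [-1/(2 d^2)] from 0.5 to 1.5 = 2 - 2/9 = 16/9.
```

Matching a simulated moment against its closed-form counterpart like this is the standard way to check a derived MGF or distribution.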
Control Aware Radio Resource Allocation in Low Latency Wireless Control Systems
We consider the problem of allocating radio resources over wireless
communication links to control a series of independent wireless control
systems. Low-latency transmissions are necessary in enabling time-sensitive
control systems to operate over wireless links with high reliability. Achieving
fast data rates over wireless links thus comes at the cost of reliability in
the form of high packet error rates compared to wired links due to channel
noise and interference. However, the effect of the communication link errors on
the control system performance depends dynamically on the control system state.
We propose a novel control-communication co-design approach to the low-latency
resource allocation problem. We incorporate control and channel state
information to make scheduling decisions over time on frequency, bandwidth and
data rates across the next-generation Wi-Fi based wireless communication links
that close the control loops. Control systems that are closer to instability or
further from a desired range in a given control cycle are given higher packet
delivery rate targets to meet. Rather than a simple priority ranking, we derive
precise packet error rate targets for each system needed to satisfy stability
targets and make scheduling decisions to meet such targets while reducing total
transmission time. The resulting Control-Aware Low Latency Scheduling (CALLS)
method is tested in numerous simulation experiments that demonstrate its
effectiveness in meeting control-based goals under tight latency constraints
relative to control-agnostic scheduling.
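The core control-aware idea, stricter reliability targets for systems closer to instability, can be sketched as follows. The exponential mapping and its parameters are assumptions for illustration, not the paper's derived targets:

```python
import math

# Illustrative sketch (assumed rule, not the CALLS derivation): systems whose
# state is further from the setpoint receive stricter packet-error-rate
# (PER) targets, instead of a simple priority ranking.
def per_targets(states, setpoint=0.0, base_per=0.3, sharpness=0.5):
    """Map each system's state error to a PER target in (0, base_per]."""
    return [base_per * math.exp(-sharpness * abs(s - setpoint)) for s in states]

states = [0.1, 2.0, 5.0]   # the third system is furthest from the setpoint
targets = per_targets(states)
```

A scheduler can then choose rates and bandwidth per link so each target is met while minimizing total transmission time, which is the co-design the abstract describes.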
Achieving the Dispatchability of Distribution Feeders through Prosumers Data Driven Forecasting and Model Predictive Control of Electrochemical Storage
We propose and experimentally validate a control strategy to dispatch the
operation of a distribution feeder interfacing heterogeneous prosumers by using
a grid-connected battery energy storage system (BESS) as a controllable element
coupled with a minimally invasive monitoring infrastructure. It consists of a
two-stage procedure: day-ahead dispatch planning, where the feeder 5-minute
average power consumption trajectory for the next day of operation (called
\emph{dispatch plan}) is determined, and intra-day/real-time operation, where
the mismatch with respect to the \emph{dispatch plan} is corrected by applying
receding horizon model predictive control (MPC) to decide the BESS
charging/discharging profile while accounting for operational constraints. The
consumption forecast necessary to compute the \emph{dispatch plan} and the
battery model for the MPC algorithm are built by applying adaptive data driven
methodologies. The discussed control framework currently operates on a daily
basis to dispatch the operation of a 20~kV feeder of the EPFL university campus
using a 750~kW/500~kWh lithium titanate BESS.
Comment: Submitted for publication, 201
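The intra-day correction stage can be sketched with a one-step greedy rule standing in for the paper's receding-horizon MPC; the function and its sign convention are assumptions for illustration:

```python
# Simplified sketch of the intra-day stage: the BESS absorbs the mismatch
# between realized feeder consumption and the dispatch plan, subject to its
# power rating and energy capacity. Positive power = discharging.
def bess_correction(dispatch_kw, load_kw, soc_kwh, p_max_kw=750.0,
                    e_max_kwh=500.0, dt_h=5 / 60):
    """Return (battery power in kW, updated state of charge in kWh)."""
    p = load_kw - dispatch_kw                      # mismatch to compensate
    p = max(-p_max_kw, min(p_max_kw, p))           # respect power rating
    p = min(p, soc_kwh / dt_h)                     # cannot discharge below empty
    p = max(p, -(e_max_kwh - soc_kwh) / dt_h)      # cannot charge above full
    return p, soc_kwh - p * dt_h

# Feeder draws 200 kW more than planned; the battery discharges to cover it.
p, soc = bess_correction(dispatch_kw=1000.0, load_kw=1200.0, soc_kwh=250.0)
```

A true MPC additionally looks ahead over the horizon so that state-of-charge constraints are not hit later in the day; the greedy rule above only handles the current 5-minute slot.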
A fine-grain time-sharing Time Warp system
Although Parallel Discrete Event Simulation (PDES) platforms relying on the Time Warp (optimistic) synchronization
protocol already allow for exploiting parallelism, several techniques have been proposed to
further improve performance. These include optimized approaches for state restore, as well as
techniques for load balancing or (dynamically) controlling the degree of speculation, the latter being
specifically aimed at reducing the incidence of causality errors and the computation they waste. However,
in state-of-the-art Time Warp systems, event processing is not preemptable, which can prevent prompt
reaction to the injection of higher-priority (i.e., lower-timestamp) events. Delaying the processing
of these events may, in turn, increase the incidence of incorrect speculation. In this article we present
the design and realization of a fine-grain time-sharing Time Warp system, to be run on multi-core Linux
machines, which makes systematic use of event preemption in order to dynamically reassign the CPU to
higher priority events/tasks. Our proposal is based on a truly dual mode execution, application vs platform,
which includes a timer-interrupt based support for bringing control back to platform mode for possible CPU
reassignment according to very fine grain periods. The latter facility is offered by an ad-hoc timer-interrupt
management module for Linux, which we release, together with the overall time-sharing support, within the
open source ROOT-Sim platform. An experimental assessment based on the classical PHOLD benchmark and
two real world models is presented, which shows how our proposal effectively leads to the reduction of the
incidence of causality errors, as compared to traditional Time Warp, especially when running with higher
degrees of parallelism.
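The preemption mechanism can be illustrated with a toy single-worker model; this is a deliberately simplified sketch, not ROOT-Sim's dual-mode, timer-interrupt implementation:

```python
import heapq

# Toy sketch: while an event is being processed in fine-grain slices, the
# worker checks whether a lower-timestamp (higher-priority) event has been
# injected; if so, the current event is preempted and requeued.
def run(initial, injections):
    """initial: [(timestamp, work_slices)]; injections: {slice_no: event}.
    Returns the order in which events finish processing."""
    heap = list(initial)
    heapq.heapify(heap)
    finished, clock = [], 0
    while heap:
        ts, work = heapq.heappop(heap)
        while work > 0:
            clock += 1                            # one fine-grain time slice
            work -= 1
            if clock in injections:               # a new event arrives mid-run
                heapq.heappush(heap, injections[clock])
            if work > 0 and heap and heap[0][0] < ts:
                heapq.heappush(heap, (ts, work))  # preempt the current event
                break
        else:
            finished.append(ts)
    return finished

# Event with timestamp 3 is injected while timestamp 10 is mid-processing,
# so it preempts and completes first.
order = run([(10, 5)], {2: (3, 2)})
```

Without the preemption check, the timestamp-10 event would run to completion first, which is exactly the delayed reaction the article sets out to eliminate.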
Human-Machine Collaborative Optimization via Apprenticeship Scheduling
Coordinating agents to complete a set of tasks with intercoupled temporal and
resource constraints is computationally challenging, yet human domain experts
can solve these difficult scheduling problems using paradigms learned through
years of apprenticeship. A process for manually codifying this domain knowledge
within a computational framework is necessary to scale beyond the
"single-expert, single-trainee" apprenticeship model. However, human domain
experts often have difficulty describing their decision-making processes,
causing the codification of this knowledge to become laborious. We propose a
new approach for capturing domain-expert heuristics through a pairwise ranking
formulation. Our approach is model-free and does not require enumerating or
iterating through a large state space. We empirically demonstrate that this
approach accurately learns multifaceted heuristics on a synthetic data set
incorporating job-shop scheduling and vehicle routing problems, as well as on
two real-world data sets consisting of demonstrations of experts solving a
weapon-to-target assignment problem and a hospital resource allocation problem.
We also demonstrate that policies learned from human scheduling demonstration
via apprenticeship learning can substantially improve the efficiency of a
branch-and-bound search for an optimal schedule. We employ this human-machine
collaborative optimization technique on a variant of the weapon-to-target
assignment problem. We demonstrate that this technique generates solutions
substantially superior to those produced by human domain experts at a rate up
to 9.5 times faster than an optimization approach and can be applied to
optimally solve problems twice as complex as those solved by a human
demonstrator.
Comment: Portions of this paper were published in the Proceedings of the
International Joint Conference on Artificial Intelligence (IJCAI) in 2016 and
in the Proceedings of Robotics: Science and Systems (RSS) in 2016. The paper
consists of 50 pages with 11 figures and 4 tables.
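The pairwise-ranking formulation can be sketched with a simple perceptron on feature differences; the learner and features below are illustrative assumptions, not necessarily the paper's model:

```python
# Illustrative sketch of pairwise-ranking policy learning: each expert
# demonstration yields pairs (features of the scheduled task, features of a
# task passed over), and we fit w so that w . chosen > w . rejected.
def train(pairs, epochs=50, lr=0.1):
    """pairs: list of (chosen_features, rejected_features) tuples."""
    w = [0.0] * len(pairs[0][0])
    for _ in range(epochs):
        for chosen, rejected in pairs:
            diff = [c - r for c, r in zip(chosen, rejected)]
            if sum(wi * di for wi, di in zip(w, diff)) <= 0:  # ranked wrongly
                w = [wi + lr * di for wi, di in zip(w, diff)]  # perceptron step
    return w

def pick(w, candidates):
    """Schedule the candidate task with the highest learned score."""
    return max(range(len(candidates)),
               key=lambda i: sum(wi * f for wi, f in zip(w, candidates[i])))

# Assumed expert behavior: always schedule the task with the earlier
# deadline (feature 0 = -deadline; feature 1 = an irrelevant feature).
pairs = [((-1.0, 0.3), (-4.0, 0.9)), ((-2.0, 0.1), (-5.0, 0.2))]
w = train(pairs)
best = pick(w, [(-3.0, 0.5), (-1.0, 0.5), (-7.0, 0.5)])
```

Because only pairwise comparisons within a single decision are needed, the learner never enumerates the scheduling state space, which is the model-free property the abstract emphasizes.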