A High Reliability Asymptotic Approach for Packet Inter-Delivery Time Optimization in Cyber-Physical Systems
In cyber-physical systems such as automobiles, measurement data from sensor
nodes should be delivered to other consumer nodes such as actuators in a
regular fashion. However, in practical systems over unreliable media such as
wireless, it is a significant challenge to guarantee small enough
inter-delivery times for different clients with heterogeneous channel
conditions and inter-delivery requirements. In this paper, we design scheduling
policies aiming at satisfying the inter-delivery requirements of such clients.
We formulate the problem as a risk-sensitive Markov Decision Process (MDP).
Although the resulting problem involves an infinite state space, we first prove
that there is an equivalent MDP involving only a finite number of states. Then
we prove the existence of a stationary optimal policy and establish an
algorithm to compute it in a finite number of steps.
However, the bane of this and many similar problems is the resulting
complexity, and, in an attempt to make fundamental progress, we further propose
a new high reliability asymptotic approach. In essence, this approach considers
the scenario when the channel failure probabilities for different clients are
of the same order, and asymptotically approach zero. We thus proceed to
determine the asymptotically optimal policy: in a two-client scenario, we show
that the asymptotically optimal policy is a "modified least time-to-go" policy,
which is intuitively appealing and easily implementable; in the general
multi-client scenario, we are led to an SN policy, and we develop an algorithm
of low computational complexity to obtain it. Simulation results show that the
resulting policies perform well even in the pre-asymptotic regime with moderate
failure probabilities.
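As an illustration only, a plain least-time-to-go scheduler (a simplified stand-in for the paper's modified policy; the client parameters, miss metric, and tie-breaking rule below are assumptions) can be sketched in a few lines:

```python
import random

def least_time_to_go(clients, horizon=1000, seed=0):
    """Schedule one transmission per slot to the client whose deadline is nearest.

    clients: list of dicts with keys
      'period' - required inter-delivery time (slots)
      'p_fail' - channel failure probability
    Returns per-client counts of deadline misses (an illustrative metric).
    """
    rng = random.Random(seed)
    time_to_go = [c['period'] for c in clients]
    misses = [0] * len(clients)
    for _ in range(horizon):
        # serve the client with the least time-to-go (ties -> lowest index)
        k = min(range(len(clients)), key=lambda i: time_to_go[i])
        if rng.random() >= clients[k]['p_fail']:   # transmission succeeded
            time_to_go[k] = clients[k]['period']   # deadline resets
        # clocks advance for every client
        for i in range(len(clients)):
            time_to_go[i] -= 1
            if time_to_go[i] < 0:                  # inter-delivery deadline missed
                misses[i] += 1
                time_to_go[i] = clients[i]['period']
    return misses
```

The heterogeneous channel conditions enter only through each client's failure probability; the asymptotic regime in the paper corresponds to driving all `p_fail` values toward zero at the same order.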
Using Recurrent Neural Networks to Optimize Dynamical Decoupling for Quantum Memory
We utilize machine learning models based on recurrent neural networks to
optimize dynamical decoupling (DD) sequences. DD is a relatively
simple technique for suppressing the errors in quantum memory for certain noise
models. In numerical simulations, we show that with minimum use of prior
knowledge and starting from random sequences, the models are able to improve
over time and eventually output DD-sequences with performance better than that
of the well known DD-families. Furthermore, our algorithm is easy to implement
in experiments to find solutions tailored to the specific hardware, as it
treats the figure of merit as a black box.
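Since the key property is that the figure of merit is queried as a black box, the optimization loop can be illustrated with a simple random-search stand-in for the recurrent model; the pulse alphabet and the toy scoring function below are assumptions, not the paper's quantum-memory benchmark:

```python
import random

# A DD sequence is a list over a small pulse alphabet; the "experiment"
# scores it as a black box. The score here is a toy surrogate that rewards
# balanced X/Y content, standing in for a real quantum-memory benchmark.
PULSES = ['I', 'X', 'Y', 'Z']

def figure_of_merit(seq):
    # toy black-box score: reward equal numbers of X and Y pulses
    return -abs(seq.count('X') - seq.count('Y'))

def random_search(length=8, iters=200, seed=1):
    """Improve a random sequence by single-pulse mutations, querying only
    the black-box figure of merit (no gradients, no model of the noise)."""
    rng = random.Random(seed)
    best = [rng.choice(PULSES) for _ in range(length)]
    best_score = figure_of_merit(best)
    for _ in range(iters):
        cand = list(best)
        cand[rng.randrange(length)] = rng.choice(PULSES)
        score = figure_of_merit(cand)
        if score >= best_score:
            best, best_score = cand, score
    return best, best_score
```

Swapping the mutation step for a trained recurrent model changes how candidates are proposed, but the interface to the hardware stays the same black-box query, which is what makes the approach portable across experiments.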
Debiased Regression Adjustment in Completely Randomized Experiments with Moderately High-dimensional Covariates
The completely randomized experiment is the gold standard for causal inference.
When the covariate information for each experimental candidate is available,
a typical approach is to include it in covariate adjustment for more accurate
treatment effect estimation. In this paper, we investigate this problem under
the randomization-based framework, i.e., the covariates and potential
outcomes of all experimental candidates are assumed as deterministic quantities
and the randomness comes solely from the treatment assignment mechanism. Under
this framework, to achieve asymptotically valid inference, existing estimators
usually require either (i) that the dimension p of the covariates grows
sufficiently slowly relative to the sample size n; or (ii) certain
sparsity constraints on the linear representations of potential outcomes
constructed via possibly high-dimensional covariates. In this paper, we
consider the moderately high-dimensional regime where p is allowed to be of
the same order of magnitude as n. We develop a novel debiased estimator with
a corresponding inference procedure and establish its asymptotic normality
under mild assumptions. Our estimator is model-free and does not require any
sparsity constraint on potential outcome's linear representations. We also
discuss its asymptotic efficiency improvements over the unadjusted treatment
effect estimator under different dimensionality constraints. Numerical analysis
confirms that compared to other regression adjustment based treatment effect
estimators, our debiased estimator performs well in moderately high dimensions.
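The debiased estimator itself is not specified in the abstract, but the classical regression-adjusted difference-in-means that it improves upon can be sketched as follows (a low-dimensional illustration under an assumed linear working model, not the paper's method):

```python
import numpy as np

def adjusted_ate(y, z, x):
    """Regression-adjusted treatment effect estimate in a completely
    randomized experiment (classical low-dimensional adjustment, shown
    for contrast with a debiased high-dimensional estimator).

    y: outcomes (n,), z: 0/1 assignments (n,), x: covariates (n, p).
    Fits separate OLS in each arm and averages predictions over all units,
    which is model-free in the sense that the linear fit need not be correct
    for the estimator to be consistent under randomization.
    """
    x = np.column_stack([np.ones(len(y)), x])   # add intercept column
    beta1, *_ = np.linalg.lstsq(x[z == 1], y[z == 1], rcond=None)
    beta0, *_ = np.linalg.lstsq(x[z == 0], y[z == 0], rcond=None)
    return float(np.mean(x @ beta1 - x @ beta0))
```

When p grows in proportion to n, the per-arm OLS fits above accumulate bias of the same order as the standard error, which is the failure mode the paper's debiasing corrects.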
Control of Ocean Wave Energy Converters with Finite Stroke
In the design of ocean wave energy converters, proper control design is essential for the maximization of power generation performance. However, in practical applications, this control must be undertaken in the presence of stroke saturation and model uncertainty. In this dissertation, we address these challenges separately.
To address stroke saturation, a nonlinear control design procedure is proposed, which guarantees to keep the stroke within its limits. The technique exploits the passivity of the wave energy converter to guarantee closed-loop stability. The proposed technique consists of three steps: 1) design of a linear feedback controller using multi-objective optimization techniques; 2) augmentation of this design with an extra input channel that adheres to a closed-loop passivity condition; and 3) design of an outer, nonlinear passive feedback loop that controls this augmented input in such a way as to ensure stroke limits are maintained. The discrete-time version of this technique is also presented.
To address model uncertainty, we consider in particular the nonlinear viscous drag effect. This robust control design problem can be regarded as a multi-objective optimization problem, whose primary objective is to optimize nominal performance, while the secondary objective is to robustly stabilize the closed-loop system. The robust stability constraint can be posed using the circle criterion. Because this optimization is non-convex, Loop Transfer Recovery methods are used to find sub-optimal solutions to the problem.
These techniques are demonstrated in simulation for arrays of buoy-type wave energy converters.
PhD dissertation, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163263/1/waynelao_1.pd
Limit Theorems for Fast-slow partially hyperbolic systems
We prove several limit theorems for a simple class of partially hyperbolic
fast-slow systems. We start with some well-known results on averaging, then we
give a substantial refinement of known large (and moderate) deviation results
and conclude with a completely new result (a local limit theorem) on the
distribution of the process determined by the fluctuations around the average.
The method of proof is based on a mixture of standard pairs and Transfer
Operators that we expect to be applicable in a much wider generality.
NASA Formal Methods Workshop, 1990
The workshop brought together researchers involved in the NASA formal methods research effort for detailed technical interchange and provided a mechanism for interaction with representatives from the FAA and the aerospace industry. The workshop also included speakers from industry to debrief the formal methods researchers on the current state of practice in flight critical system design, verification, and certification. The goals were: define and characterize the verification problem for ultra-reliable life critical flight control systems and the current state of practice in industry today; determine the proper role of formal methods in addressing these problems; and assess the state of the art and recent progress toward applying formal methods to this area.
A Representation for Serial Robotic Tasks
The representation for serial robotic tasks proposed in this thesis is a language of temporal constraints derived directly from a model of the space of serial plans. It was specifically designed to encompass problems that include disjunctive ordering constraints. This guarantees that the proposed language can completely and, to a certain extent, compactly represent all possible serial robotic tasks. The generality of this language carries a penalty: reasoning in the proposed language of temporal constraints is NP-complete. Specific methods have been demonstrated for normalizing constraints posed in this language in order to make subsequent sequencing and analysis more tractable. Using this language, the planner can specify necessary and alternative orderings to control undesirable interactions between steps of a plan. For purposes of analysis, the planner can factor a plan into strategies, and decompose those strategies into essential components. Using properly normalized constraint expressions, the sequencer can derive admissible sequences and admissible next operations. Using these facilities, a robot can be given the specification of a task and can adapt its sequence of operations according to run-time events and the constraints on the operations to be performed.
Applying Bayesian networks to model uncertainty in project scheduling
PhD thesis. Risk Management has become an important part of Project Management. In spite
of numerous advances in the field of Project Risk Management (PRM), handling
uncertainty in complex projects still remains a challenge. An important
component of PRM is risk analysis, which attempts to
measure risk and its impact on different project parameters such as time, cost and
quality. By highlighting the trade-off between project parameters, the thesis
concentrates on project time management under uncertainty.
The earliest research incorporating uncertainty/risk in projects started in the late
1950s. Since then, several techniques and tools have been introduced, and many
of them are widely used and applied throughout different industries. However,
they often fail to capture uncertainty properly and produce inaccurate, inconsistent
and unreliable results. This is evident from consistent problems of cost and
schedule overrun.
The thesis will argue that simulation-based techniques, the dominant and
state-of-the-art approach for modelling uncertainty in projects, suffer from
serious shortcomings, and that more advanced techniques are required.
Bayesian Networks (BNs) are a powerful technique for decision support under
uncertainty that has attracted considerable attention in different fields. However,
applying BNs in project risk management is novel.
The thesis aims to show that BN modelling can improve project risk assessment.
A literature review explores the important limitations of the current practice of
project scheduling under uncertainty. A new model is proposed which applies
BNs to perform the well-known Critical Path Method (CPM) calculation. The
model subsumes the benefits of CPM while adding BN capability to properly
capture different aspects of uncertainty in project scheduling.
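The deterministic CPM forward pass that the proposed BN model subsumes can be sketched as follows; the task data are hypothetical, and the BN layer that replaces the fixed durations with probability distributions is not shown:

```python
def cpm(tasks):
    """Classical Critical Path Method forward pass.

    tasks: dict of name -> (duration, [predecessor names]); must be acyclic.
    Returns (earliest finish time per task, project makespan).
    """
    finish = {}

    def earliest_finish(name):
        # earliest finish = duration + latest earliest-finish among predecessors
        if name not in finish:
            duration, preds = tasks[name]
            finish[name] = duration + max(
                (earliest_finish(p) for p in preds), default=0)
        return finish[name]

    for task in tasks:
        earliest_finish(task)
    return finish, max(finish.values())
```

In the BN formulation described above, each fixed duration becomes a random variable and the max/sum recursion is carried out over distributions, so the schedule output is itself a distribution rather than a single makespan.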