The relevance of outsourcing and leagile strategies in performance optimization of an integrated process planning and scheduling
Over the past few years, growing global competition has forced manufacturing industries to replace their old production strategies with modern-day approaches. As a result, interest has developed in finding an appropriate policy that enables them to compete with others and emerge as market winners. With these facts in mind, the authors propose an integrated process planning and scheduling model that inherits the salient features of outsourcing and leagile principles to compete in the existing market scenario. The paper also proposes a model based on leagile principles, in which integrated planning management is practiced. In the present work, a scheduling problem is considered with the aim of minimizing the overall makespan. The paper shows the relevance of both strategies to performance enhancement of the industries, in terms of reduced makespan. The authors also propose a new hybrid Enhanced Swift Converging Simulated Annealing (ESCSA) algorithm to solve complex real-time scheduling problems. The proposed algorithm inherits the prominent features of the Genetic Algorithm (GA), Simulated Annealing (SA), and the Fuzzy Logic Controller (FLC). The ESCSA algorithm reduces the makespan significantly in less computational time and fewer iterations. The efficacy of the proposed algorithm is shown by comparing its results with GA, SA, Tabu, and hybrid Tabu-SA optimization methods.
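The ESCSA algorithm itself is not given in the abstract; as a minimal sketch of the simulated-annealing ingredient applied to makespan minimization, here is a stdlib-only toy for a permutation flow shop (the function names, swap neighbourhood, and geometric cooling schedule are illustrative assumptions, not the paper's model):

```python
import math
import random

def makespan(order, proc):
    """Completion time of the last job on the last machine for a
    permutation flow shop; proc[j][k] is job j's time on machine k."""
    m = len(proc[0])
    finish = [0.0] * m
    for j in order:
        for k in range(m):
            # a job starts on machine k when both the machine and the
            # job's previous operation are free
            start = max(finish[k], finish[k - 1] if k else 0.0)
            finish[k] = start + proc[j][k]
    return finish[-1]

def anneal(proc, iters=5000, t0=10.0, alpha=0.999, seed=0):
    """Plain simulated annealing over job permutations (toy sketch)."""
    rng = random.Random(seed)
    n = len(proc)
    cur = list(range(n))
    best = cur[:]
    t = t0
    for _ in range(iters):
        cand = cur[:]
        i, j = rng.sample(range(n), 2)      # swap-neighbourhood move
        cand[i], cand[j] = cand[j], cand[i]
        delta = makespan(cand, proc) - makespan(cur, proc)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            cur = cand
            if makespan(cur, proc) < makespan(best, proc):
                best = cur[:]
        t *= alpha                           # geometric cooling
    return best, makespan(best, proc)
```

The hybrid ESCSA method additionally borrows GA operators and a fuzzy logic controller to tune parameters; this sketch shows only the SA core.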
Robust H2/H∞ state estimation for discrete-time systems with error variance constraints
Copyright [1997] IEEE. This material is posted here with permission of the IEEE; reprint or republication for advertising, promotional, or resale purposes requires permission from the IEEE ([email protected]).

This paper studies the problem of H∞-norm and variance-constrained state estimator design for uncertain linear discrete-time systems. The system under consideration is subject to time-invariant norm-bounded parameter uncertainties in both the state and measurement matrices. The problem addressed is the design of a gain-scheduled linear state estimator such that, for all admissible uncertainties, the variance of the estimation error of each state is not more than an individually prespecified value and, simultaneously, the transfer function from disturbances to error-state outputs satisfies a prespecified H∞-norm upper-bound constraint. Conditions for the existence of the desired estimators are obtained in terms of matrix inequalities, and an explicit expression for these estimators is derived. A numerical example demonstrates various aspects of the theoretical results.
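The paper's synthesis is via matrix inequalities; as a much simpler illustration of what an error-variance constraint means, here is a scalar closed-form check (the system x⁺ = a·x + w, y = c·x + v and the estimator gain are my own toy assumptions, not the paper's construction):

```python
def error_variance(a, c, gain, qw, rv):
    """Steady-state estimation-error variance for the scalar system
    x+ = a*x + w, y = c*x + v with estimator xh+ = a*xh + gain*(y - c*xh).
    Error dynamics: e+ = (a - gain*c)*e + w - gain*v, so the stationary
    variance solves p = f^2 p + qw + gain^2 rv with f = a - gain*c."""
    f = a - gain * c
    assert abs(f) < 1, "estimator error dynamics must be stable"
    return (qw + gain * gain * rv) / (1 - f * f)
```

A variance-constrained design, in this toy setting, simply requires choosing the gain so this value stays below the prespecified bound.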
Research on output feedback control
A summary is presented of the main results obtained during the course of research on output feedback control. The term output feedback denotes a controller design approach that does not rely on an observer to estimate the states of the system. Thus, the order of the controller is fixed and can even be zero, which amounts to constant-gain output feedback. The emphasis has been on optimal output feedback; that is, a fixed-order controller is designed by minimizing a suitably chosen quadratic performance index. A number of problem areas arising in this context have been addressed, including methods for selecting an index of performance, time-domain and frequency-domain methods for achieving robustness of the closed-loop system, canonical forms achieving a minimal parameterization of the controller, two-time-scale design formulations for ill-conditioned systems, and convergent numerical algorithms for solving the output feedback problem.
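As a minimal sketch of "constant-gain output feedback minimizing a quadratic index" (not the report's algorithms), consider a scalar plant where the output is the state, so the controller is a single constant gain found here by crude grid search; all names and numbers are illustrative assumptions:

```python
def lq_cost(a, b, gain, x0=1.0, q=1.0, r=1.0, horizon=200):
    """Finite-horizon quadratic cost for u = -gain * y, y = x
    (scalar case: static output feedback is just one number)."""
    x, cost = x0, 0.0
    for _ in range(horizon):
        u = -gain * x
        cost += q * x * x + r * u * u
        x = a * x + b * u
    return cost

# crude one-dimensional search over the constant gain for an
# unstable plant x+ = 1.2 x + u; stabilizing gains lie in (0.2, 2.2)
gains = [k / 100 for k in range(221)]
best_cost, best_gain = min((lq_cost(1.2, 1.0, g), g) for g in gains)
```

Real fixed-order designs replace the grid search with the convergent numerical algorithms the summary mentions, but the objective being minimized has exactly this shape.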
Optimized pulses for the control of uncertain qubits
Constructing high-fidelity control fields that are robust to control, system, and/or surrounding-environment uncertainties is a crucial objective for quantum information processing. Using the two-state Landau-Zener model for illustrative simulations of a controlled qubit, we generate optimal controls for π/2- and π-pulses and investigate their inherent robustness to uncertainty in the magnitude of the drift Hamiltonian. Next, we construct a quantum-control protocol to improve system-drift robustness by combining environment-decoupling pulse criteria and optimal control theory for unitary operations. By perturbatively expanding the unitary time-evolution operator for an open quantum system, previous analysis of environment-decoupling control pulses calculated explicit control-field criteria to suppress environment-induced errors up to (but not including) third order for π/2- and π-pulses. We systematically integrate these criteria with optimal control theory, incorporating an estimate of the uncertain parameter, to produce improvements in gate fidelity and robustness, demonstrated via a numerical example based on double-quantum-dot qubits. For the qubit model used in this work, post facto analysis of the resulting controls suggests that realistic control-field fluctuations and noise may contribute just as significantly to gate errors as system and environment fluctuations.
Domain Decomposition for Stochastic Optimal Control
This work proposes a method for solving linear stochastic optimal control (SOC) problems using sum-of-squares and semidefinite programming. Previous work used polynomial optimization to approximate the value function, requiring a high polynomial degree to capture local phenomena. To improve the scalability of the method to problems of interest, a domain decomposition scheme is presented. By using local approximations, lower-degree polynomials become sufficient, and both local and global properties of the value function are captured. The domain of the problem is split into a non-overlapping partition, with added constraints ensuring continuity. The Alternating Direction Method of Multipliers (ADMM) is used to optimize over each domain in parallel and to ensure consistency on the boundaries of the partitions. This results in improved conditioning of the problem and allows much larger and more complex problems to be addressed with improved performance.
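The ADMM mechanics behind "optimize each domain in parallel, then enforce agreement" can be sketched on the simplest consensus problem, min_x Σᵢ ½(x − aᵢ)²; the quadratic subproblems and the targets are illustrative stand-ins for the paper's per-domain semidefinite programs:

```python
def admm_consensus(targets, rho=1.0, iters=100):
    """Consensus ADMM for min_x sum_i 0.5*(x - a_i)^2.
    Each 'subdomain' i keeps a local copy x_i and solves its own
    subproblem; the z- and u-updates enforce agreement (the mean)."""
    n = len(targets)
    x = [0.0] * n          # local copies, one per subdomain
    u = [0.0] * n          # scaled dual variables
    z = 0.0                # shared (boundary) value
    for _ in range(iters):
        # local step: argmin 0.5*(x - a)^2 + (rho/2)*(x - z + u)^2
        x = [(a + rho * (z - ui)) / (1 + rho) for a, ui in zip(targets, u)]
        # consensus step: average of local variables plus duals
        z = sum(xi + ui for xi, ui in zip(x, u)) / n
        # dual ascent on the agreement constraints x_i = z
        u = [ui + xi - z for ui, xi in zip(u, x)]
    return z
```

In the paper the "agreement" variables live on partition boundaries of the value-function approximation rather than being a single scalar, but the splitting structure is the same.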
JALAD: Joint Accuracy- and Latency-Aware Deep Structure Decoupling for Edge-Cloud Execution
Recent years have witnessed rapid growth of deep-network-based services and applications. A practical and critical problem has thus emerged: how to deploy deep neural network models so that they can be executed efficiently. Conventional cloud-based approaches usually run the deep models in data-center servers, causing large latency because a significant amount of data has to be transferred from the network edge to the data center. In this paper, we propose JALAD, a joint accuracy- and latency-aware execution framework, which decouples a deep neural network so that one part runs at edge devices and the other part inside the conventional cloud, while only a minimal amount of data is transferred between them. Though the idea seems straightforward, we face several challenges: i) how to find the best partition of a deep structure; ii) how to deploy the component at an edge device that has only limited computation power; and iii) how to minimize the overall execution latency. Our answers to these questions are a set of strategies in JALAD: 1) a normalization-based in-layer data compression strategy that jointly considers compression rate and model accuracy; 2) a latency-aware deep decoupling strategy that minimizes the overall execution latency; and 3) an edge-cloud structure adaptation strategy that dynamically changes the decoupling under different network conditions. Experiments demonstrate that our solution significantly reduces execution latency: it speeds up overall inference while keeping the model accuracy loss within a guaranteed bound.
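The latency-aware split in challenge i) reduces, in its simplest form, to picking the cut point that minimizes edge compute + transfer + cloud compute; this toy enumeration is an assumption about the shape of the problem, not JALAD's actual decoupling strategy:

```python
def best_split(edge_ms, cloud_ms, transfer_ms):
    """Choose where to cut an n-layer network between edge and cloud.
    edge_ms[i] / cloud_ms[i]: latency of layer i on each side;
    transfer_ms[s]: cost of shipping the (compressed) activation at
    cut s, where s = 0 ships the raw input and s = n the final result.
    Returns (total latency, split index)."""
    n = len(edge_ms)
    candidates = []
    for s in range(n + 1):  # layers [0, s) on the edge, [s, n) in the cloud
        total = sum(edge_ms[:s]) + transfer_ms[s] + sum(cloud_ms[s:])
        candidates.append((total, s))
    return min(candidates)
```

With a slow edge but an expensive raw-input upload, the optimum lands at an interior cut, which is exactly the regime JALAD targets; its adaptation strategy then re-solves this choice as network conditions change.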
Decoupling with random diagonal unitaries
We investigate decoupling, one of the most important primitives in quantum Shannon theory, by replacing the uniformly distributed random unitaries commonly used to achieve the protocol with repeated applications of random unitaries diagonal in the Pauli-Z and -X bases. This strategy was recently shown to achieve an approximate unitary 2-design after a number of repetitions of the process, which implies that the strategy gradually achieves decoupling. Here, we prove that even fewer repetitions of the process achieve decoupling at the same rate as with uniform unitaries, showing that rather imprecise approximations of unitary 2-designs are sufficient for decoupling. We also briefly discuss efficient implementations and the implications of our decoupling theorem for coherent state merging and relative thermalisation.
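For a single qubit, one repetition of the scheme is a random phase gate in the computational (Z) basis followed by one in the Hadamard-rotated (X) basis; the sketch below only constructs such repeated products and checks unitarity, making no claim about design quality (matrix helpers and parameters are my own minimal choices):

```python
import cmath
import math
import random

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(a):
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def random_diag(rng):
    """Random unitary diagonal in the computational (Pauli-Z) basis."""
    return [[cmath.exp(1j * rng.uniform(0, 2 * math.pi)), 0],
            [0, cmath.exp(1j * rng.uniform(0, 2 * math.pi))]]

def one_round(rng):
    """One repetition: Z-diagonal unitary, then an X-diagonal one
    (a Z-diagonal unitary conjugated by the Hadamard)."""
    dz = random_diag(rng)
    dx = matmul(H, matmul(random_diag(rng), H))
    return matmul(dx, dz)

rng = random.Random(1)
u = [[1, 0], [0, 1]]
for _ in range(3):                      # repeated applications
    u = matmul(one_round(rng), u)
```

The abstract's result is about how few such repetitions suffice for decoupling on many qubits; proving that requires the moment calculations in the paper, not a simulation.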