Entanglement for any definition of two subsystems
The notion of entanglement of quantum states is usually defined with respect
to a fixed bipartition. Indeed, a global basis change can always map an
entangled state to a separable one. The situation is however different when
considering a set of states. In this work we define the notion of an
"absolutely entangled set" of quantum states: for any possible choice of global
basis, at least one of the states in the set is entangled. Hence, for all
bipartitions, i.e. any possible definition of the subsystems, the set features
entanglement. We present a minimal example of this phenomenon, with a set of
four states. Moreover, we
propose a quantitative measure for absolute set entanglement. To lower-bound
this quantity, we develop a method based on polynomial optimization to perform
convex optimization over unitaries, which is of independent interest.
Comment: Main: 5 pages, 2 figures; Appendix: 5 pages and 1 figure
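The central notion of the abstract can be stated formally; the following is a sketch of the definition under notation assumed here, not quoted verbatim from the paper:

```latex
% Sketch of the definition; notation is assumed, not quoted from the paper.
% Fix a bipartition \mathcal{H} = \mathcal{H}_A \otimes \mathcal{H}_B of a
% d-dimensional space. A set S = \{ |\psi_1\rangle, \dots, |\psi_n\rangle \}
% is absolutely entangled when no global unitary makes all states separable:
\[
  \forall\, U \in \mathrm{U}(d)\ \ \exists\, k \in \{1, \dots, n\} :
  \quad U\lvert\psi_k\rangle \ \text{is entangled w.r.t.}\ \mathcal{H}_A : \mathcal{H}_B .
\]
```

Equivalently, entanglement of the set cannot be removed by any redefinition of the two subsystems.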
Penalty alternating direction methods for mixed-integer optimal control with combinatorial constraints
We consider mixed-integer optimal control problems with combinatorial constraints that couple over time, such as minimum dwell times. We analyze a lifting and decomposition approach into a mixed-integer optimal control problem without combinatorial constraints and a mixed-integer problem for the combinatorial constraints in the control space. Both problems can be solved very efficiently with existing methods, such as outer convexification with sum-up rounding strategies and mixed-integer linear programming techniques. The coupling is handled using a penalty approach. We provide an exactness result for the penalty, which yields a solution approach that converges to partial minima. We compare the quality of these dedicated points with those of
other heuristics on an academic example and on the optimization of electric transmission lines with switching of the network topology for flow reallocation in order to satisfy demands.
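The sum-up rounding strategy mentioned above can be sketched for a single binary control on a uniform time grid. This is a generic illustration of the standard technique, not the paper's implementation; in particular, the dwell-time coupling that the penalty approach handles is omitted here:

```python
def sum_up_rounding(alpha, dt=1.0):
    """Round a relaxed control alpha[i] in [0, 1] to a binary control
    w[i] in {0, 1} so that the accumulated control deficit, the integral
    of (alpha - w), stays bounded by the grid width dt."""
    w = []
    accumulated = 0.0  # integral of (alpha - w) up to the current interval
    for a in alpha:
        accumulated += a * dt
        if accumulated >= 0.5 * dt:
            w.append(1)
            accumulated -= dt
        else:
            w.append(0)
    return w

# Relaxed control from the outer-convexified problem (hypothetical values).
alpha = [0.3, 0.6, 0.8, 0.2, 0.9, 0.1]
rounded = sum_up_rounding(alpha)  # → [0, 1, 1, 0, 1, 0]
```

The bounded deficit is what makes the rounded trajectory approximate the relaxed one as the grid is refined; the combinatorial constraints are then restored through the penalty coupling described in the abstract.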
Optimal configuration of digital communication network
As the costs of maintaining computer communication networks are rapidly rising, it is particularly important to design such networks efficiently. The objective of this thesis is to model the minimum-cost design of digital communication networks and to propose a heuristic solution approach to the formulated model. The minimum-cost design has been modeled as a zero-one integer programming problem. The Lagrangian relaxation method and a subgradient optimization procedure have been used to find reasonably good feasible solutions. Although the reliability of computer communication networks is as important as the cost factor, only the cost factor is considered in this thesis.
http://archive.org/details/optimalconfigura1094527604
Major, Republic of Korea Army
Approved for public release; distribution is unlimited.
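Lagrangian relaxation with projected subgradient updates, as used in the thesis, can be sketched on a hypothetical knapsack-style zero-one instance; the instance, names, and step-size rule below are illustrative assumptions, and the thesis's actual network-design model is a larger zero-one program:

```python
def solve_lagrangian(values, weights, capacity, lam):
    """Solve the relaxed problem max v.x - lam*(w.x - capacity), x binary.
    The relaxation decomposes per item: pick item i iff v_i - lam*w_i > 0."""
    x = [1 if v - lam * w > 0 else 0 for v, w in zip(values, weights)]
    bound = (sum(v * xi for v, xi in zip(values, x))
             - lam * (sum(w * xi for w, xi in zip(weights, x)) - capacity))
    return x, bound

def subgradient_method(values, weights, capacity, steps=50, step0=1.0):
    """Minimize the dual (upper) bound over lam >= 0 with a diminishing step."""
    lam, best = 0.0, float("inf")
    for k in range(1, steps + 1):
        x, bound = solve_lagrangian(values, weights, capacity, lam)
        best = min(best, bound)  # tightest upper bound found so far
        g = capacity - sum(w * xi for w, xi in zip(weights, x))  # subgradient
        lam = max(0.0, lam - (step0 / k) * g)  # projected subgradient step
    return lam, best

values, weights, capacity = [10, 7, 4], [5, 4, 3], 8  # hypothetical instance
lam, upper_bound = subgradient_method(values, weights, capacity)
```

The dual bound brackets the optimum from above, while the binary solutions encountered along the way supply the "reasonably good feasible solutions" once repaired for feasibility.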
Training issues and learning algorithms for feedforward and recurrent neural networks
Ph.D. (Doctor of Philosophy)
A Matter of Perspective - Three-dimensional Placement of Multiple Cameras to Maximize their Coverage
Power System Stability Analysis using Neural Network
This work focuses on the design of modern power system controllers for
automatic voltage regulators (AVR) and the applications of machine learning
(ML) algorithms to correctly classify the stability of the IEEE 14 bus system.
The LQG controller exhibits the best time-domain characteristics compared to
the PID and LQR controllers, while the sensor and amplifier gains are changed in a
dynamic fashion. After that, the IEEE 14 bus system is modeled, and contingency
scenarios are simulated in the Modelica-based Dymola environment. Application
of the Monte Carlo principle with a modified Poisson probability distribution,
reviewed from the literature, reduces the total number of contingencies
from 1000k to 20k. The damping ratio of each contingency is then extracted,
pre-processed, and fed to ML algorithms, such as logistic regression, support
vector machine, decision trees, random forests, Naive Bayes, and k-nearest
neighbor. A neural network (NN) of one, two, three, five, seven, and ten hidden
layers with 25%, 50%, 75%, and 100% data size is considered to observe and
compare prediction time, accuracy, precision, and recall. At the lowest
data size, 25%, the two-hidden-layer and single-hidden-layer networks reach
accuracies of 95.70% and 97.38%, respectively. Increasing
the number of hidden layers beyond two does not increase the overall score and
takes much longer prediction time; such configurations can thus be discarded for similar
analyses. Moreover, when five, seven, or ten hidden layers are used, the F1
score decreases. However, in practical scenarios, where the data set contains
more features and a greater variety of classes, a larger data size is required for
proper NN training. This research provides further insight into the damping
ratio-based system stability prediction with traditional ML algorithms and
neural networks.
Comment: Master's thesis
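The core of the pipeline, labeling contingencies by their damping ratio and fitting a classifier, can be sketched on synthetic data. The stability threshold, the synthetic damping ratios, and the plain one-feature logistic-regression trainer below are illustrative assumptions, not the thesis's actual dataset or models:

```python
import math
import random

# Assumed for illustration: a contingency is labeled stable when the
# damping ratio of its least-damped mode exceeds this threshold.
STABLE_THRESHOLD = 0.05

def label(damping_ratio):
    """1 = stable, 0 = unstable (threshold assumed, not from the thesis)."""
    return 1 if damping_ratio >= STABLE_THRESHOLD else 0

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Plain batch gradient descent on the logistic loss, one feature."""
    w = b = 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += p - y
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

random.seed(0)
xs = [random.uniform(-0.1, 0.3) for _ in range(200)]  # synthetic damping ratios
ys = [label(x) for x in xs]
w, b = train_logistic(xs, ys)
predict = lambda x: 1 if w * x + b >= 0.0 else 0
accuracy = sum(predict(x) == y for x, y in zip(xs, ys)) / len(xs)
```

The thesis's comparison across logistic regression, SVM, trees, ensembles, Naive Bayes, k-NN, and neural networks of varying depth follows the same pattern with a richer feature set and library implementations in place of this hand-rolled trainer.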