Interior Point Decoding for Linear Vector Channels
In this paper, a novel decoding algorithm for low-density parity-check (LDPC)
codes based on convex optimization is presented. The decoding algorithm, called
interior point decoding, is designed for linear vector channels. The linear
vector channels include many practically important channels such as intersymbol
interference channels and partial response channels. It is shown that
the maximum likelihood decoding (MLD) rule for a linear vector channel can be
relaxed to a convex optimization problem, which is called a relaxed MLD
problem. The proposed decoding algorithm is based on a numerical optimization
technique, the so-called interior point method with a barrier function.
Approximate variants of the gradient descent and Newton methods are used to
solve the convex optimization problem. Throughout the decoding process, the
search point always lies in the fundamental polytope defined by a
low-density parity-check matrix. Compared with a conventional joint message
passing decoder, the proposed decoding algorithm achieves better BER
performance with lower complexity for partial response channels in many cases.
Comment: 18 pages, 17 figures. The paper has been submitted to IEEE
Transactions on Information Theory
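The barrier-based interior-point idea can be sketched under heavy simplification: below, the fundamental polytope is replaced by the unit box 0 < x_i < 1 and the objective by a toy squared distance to received soft values, so the code only illustrates the log-barrier machinery, not the paper's relaxed-MLD objective or parity-check constraints.

```python
def barrier_decode(grad_f, n, t0=1.0, mu=10.0, outer=6, inner=200, lr=0.05):
    """Toy log-barrier interior-point sketch.  Minimizes a convex f(x) over
    the open box 0 < x_i < 1 (a stand-in for the fundamental polytope) by
    gradient descent on  f(x) - (1/t) * sum_i [log x_i + log(1 - x_i)],
    sharpening the barrier (increasing t) between outer iterations."""
    x = [0.5] * n                          # strictly interior starting point
    t = t0
    for _ in range(outer):
        for _ in range(inner):
            g = grad_f(x)
            for i in range(n):
                # gradient of the barrier-augmented objective
                gi = g[i] - (1.0 / t) / x[i] + (1.0 / t) / (1.0 - x[i])
                x[i] = min(1.0 - 1e-6, max(1e-6, x[i] - lr * gi))
        t *= mu                            # barrier update: weaken the barrier
    return x

# Toy objective: squared distance to received soft values r (NOT the paper's
# relaxed-MLD objective, just an illustration of the machinery).
r = [0.9, 0.1, 0.8]
x = barrier_decode(lambda x: [2 * (xi - ri) for xi, ri in zip(x, r)], 3)
bits = [round(xi) for xi in x]             # hard decision
```

Note how every iterate stays strictly inside the feasible region, which is the defining property of interior-point methods that the abstract highlights.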
Assembly and Disassembly Planning by using Fuzzy Logic & Genetic Algorithms
The authors propose the implementation of hybrid Fuzzy Logic-Genetic
Algorithm (FL-GA) methodology to plan the automatic assembly and disassembly
sequence of products. The GA-Fuzzy Logic approach is implemented on two
levels. The first level of hybridization consists of the development of a Fuzzy
controller for the parameters of an assembly or disassembly planner based on
GAs. This controller acts on mutation probability and crossover rate in order
to adapt their values dynamically while the algorithm runs. The second level
consists of the identification of the optimal assembly or disassembly sequence
by a Fuzzy function, in order to obtain a closer control of the technological
knowledge of the assembly/disassembly process. Two case studies were analyzed
in order to test the efficiency of the Fuzzy-GA methodologies.
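The abstract does not give the actual fuzzy rule base or the assembly-sequence encoding, so the following toy sketch only illustrates the first hybridization level: a crude fuzzy-style controller that adapts the mutation probability to population diversity while the GA runs, applied to a stand-in OneMax fitness (the controller's thresholds, and the fitness itself, are assumptions).

```python
import random

def fuzzy_mutation_rate(diversity):
    """Crude stand-in for the fuzzy controller: low population diversity ->
    mutate more to keep exploring; high diversity -> mutate less to exploit.
    (The paper's controller also acts on the crossover rate, omitted here.)"""
    if diversity < 0.05:
        return 0.20        # diversity LOW    -> mutation HIGH
    if diversity < 0.15:
        return 0.05        # diversity MEDIUM -> mutation MEDIUM
    return 0.01            # diversity HIGH   -> mutation LOW

def ga(n_bits=20, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)       # toy OneMax stand-in for sequence cost
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # diversity = mean per-bit variance across the population
        means = [sum(ind[i] for ind in pop) / pop_size for i in range(n_bits)]
        diversity = sum(m * (1.0 - m) for m in means) / n_bits
        pm = fuzzy_mutation_rate(diversity)   # controller adapts pm online
        new_pop = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)         # binary tournament selection
            parent = max(a, b, key=fitness)
            new_pop.append([bit ^ (rng.random() < pm) for bit in parent])
        pop = new_pop
    return max(pop, key=fitness)

best = ga()
```

The point of the adaptation is visible in the rule base: as the population converges (diversity drops), mutation pressure rises, preventing premature convergence without hand-tuning a fixed rate.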
Getting Feasible Variable Estimates From Infeasible Ones: MRF Local Polytope Study
This paper proposes a method for construction of approximate feasible primal
solutions from dual ones for large-scale optimization problems possessing
certain separability properties. Whereas infeasible primal estimates can
typically be produced from (sub-)gradients of the dual function, it is often
not easy to project them to the primal feasible set, since the projection
itself has a complexity comparable to the complexity of the initial problem. We
propose an alternative efficient method to obtain feasibility and show that its
properties influencing the convergence to the optimum are similar to the
properties of the Euclidean projection. We apply our method to the local
polytope relaxation of inference problems for Markov Random Fields and
demonstrate its superiority over existing methods.
Comment: 20 pages, 4 figures
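The paper's own feasibility construction is not reproduced in the abstract; as a point of reference, the Euclidean projection it is compared against can be computed for a single unary marginal with the classic sort-and-threshold simplex-projection algorithm:

```python
def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {x : x_i >= 0, sum_i x_i = 1}, via the standard sort-and-threshold
    algorithm.  This is the baseline whose convergence-relevant properties
    the proposed method is shown to match."""
    u = sorted(v, reverse=True)
    cumulative, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cumulative += ui
        t = (cumulative - 1.0) / i
        if ui - t > 0.0:
            theta = t      # threshold from the largest admissible prefix
    return [max(vi - theta, 0.0) for vi in v]

# An infeasible unary estimate (negative mass, sums to 1.1) made feasible:
mu = project_to_simplex([0.7, 0.5, -0.1])
```

For the full local polytope the projection must additionally couple unary and pairwise marginals, which is exactly why it becomes as hard as the original problem and why a cheaper feasibility construction is attractive.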
Complete hierarchies of efficient approximations to problems in entanglement theory
We investigate several problems in entanglement theory from the perspective
of convex optimization. This list of problems comprises (A) the decision
whether a state is multi-party entangled, (B) the minimization of expectation
values of entanglement witnesses with respect to pure product states, (C) the
closely related evaluation of the geometric measure of entanglement to quantify
pure multi-party entanglement, (D) the test whether states are multi-party
entangled on the basis of witnesses based on second moments and on the basis of
linear entropic criteria, and (E) the evaluation of instances of maximal output
purities of quantum channels. We show that these problems can be formulated as
certain optimization problems: as polynomially constrained problems employing
polynomials of degree three or less. We then apply recently established
methods from the theory of semi-definite relaxations to the formulated
optimization problems. By this construction we arrive at a hierarchy of
efficiently solvable approximations to the solution, approximating the exact
solution as closely as desired, in a way that is asymptotically complete. For
example, this results in a hierarchy of novel, efficiently decidable sufficient
criteria for multi-particle entanglement, such that every entangled state will
necessarily be detected in some step of the hierarchy. Finally, we present
numerical examples to demonstrate the practical accessibility of this approach.
Comment: 14 pages, 3 figures, tiny modifications, version to be published in
Physical Review
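To make problem (C) concrete: for a pure state, the geometric measure is one minus the maximal squared overlap with product states, a non-convex polynomial optimization that the hierarchy relaxes into efficiently solvable semidefinite programs. The sketch below is NOT that hierarchy; it is a brute-force grid search restricted to real two-qubit states, useful only as a sanity check of the quantity being optimized.

```python
import math

def geometric_measure(psi, grid=60):
    """Brute-force baseline for problem (C), restricted to REAL two-qubit
    states psi (length-4 list): returns 1 - max |<a,b|psi>|^2 over real
    product states |a> = (cos s, sin s), |b> = (cos t, sin t)."""
    best = 0.0
    for i in range(grid + 1):
        s = math.pi * i / grid
        a = (math.cos(s), math.sin(s))
        for j in range(grid + 1):
            t = math.pi * j / grid
            b = (math.cos(t), math.sin(t))
            overlap = sum(a[k] * b[l] * psi[2 * k + l]
                          for k in range(2) for l in range(2))
            best = max(best, overlap * overlap)
    return 1.0 - best

# Bell state (|00> + |11>)/sqrt(2): its geometric measure is known to be 1/2.
bell = [1.0 / math.sqrt(2), 0.0, 0.0, 1.0 / math.sqrt(2)]
```

The exponential cost of such grid searches in the number of parties is precisely what motivates replacing them with a complete hierarchy of polynomial-size relaxations.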
Evaluation of the applicability of investment appraisal techniques for assessing the business value of IS services.
There is a consensus among academics and practitioners that ICT investments should be carefully justified, measured and controlled. This is no different for the development of a service architecture or of particular services as such. In practice, the traditional capital investment appraisal techniques (CIATs) such as payback period or net present value are by far the most used techniques for assessing the feasibility of ICT investments. Nevertheless, serious doubts arise about the fitness of these techniques in a service-based value net environment. Value nets have special characteristics, such as high flexibility and agility, re-use of services, … that make the use of these techniques very difficult and the reliability of the outcome most uncertain.
Efforts have been made to find more appropriate techniques. In the past, CIATs have been adjusted to make them more reliable in an ICT environment, and new justification methods and techniques have been developed. However, neither the adjusted nor the new techniques are frequently used. This might be explained by the fact that the outcome of these techniques is difficult to interpret and to use, and by the fact that some significant problems (like the estimation of hidden costs) remain unsolved. Moreover, most of the new techniques are still in the conceptual phase.
In this paper we evaluate these adjusted and new techniques in the light of service oriented architectures. We argue that none of the techniques offers a good solution for assessing the business value of IS services. Despite the existence of a wealth of literature, the IS community appears to be no nearer to a solution to many problems associated with ICT appraisal. This is potentially problematic when dealing with investments in emerging technology such as IS services or service architectures.
Since all techniques presented in the article have their drawbacks, it is safe to say that reliance on a sole technique may lead to sub-optimalisation or even failure. Therefore it makes sense to use a mixture of techniques, eliminating or diminishing the weaknesses of each of the techniques used. We strongly recommend a multi-layer evaluation process, or an evaluation process derived from the balanced scorecard, for the appraisal of investments in services or service architectures.
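For concreteness, the two traditional CIATs the abstract names can be computed in a few lines; the cash flows below are hypothetical, purely to illustrate how the techniques can disagree.

```python
def npv(rate, cash_flows):
    """Net present value: discount each period's cash flow back to t = 0.
    cash_flows[0] is the (typically negative) initial investment."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_period(cash_flows):
    """First period at which cumulative undiscounted cash flow turns
    non-negative; None if the outlay is never recovered."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0.0:
            return t
    return None

# Hypothetical IS-service investment: 100 upfront, 40 back per year.
flows = [-100.0, 40.0, 40.0, 40.0]
```

On these figures the investment pays back in period 3, yet its NPV at a 10% discount rate is slightly negative, a small illustration of why relying on a single appraisal technique can mislead.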
Constraint handling strategies in Genetic Algorithms application to optimal batch plant design
Optimal batch plant design is a recurrent issue in Process Engineering, which can be formulated as a Mixed Integer Non-Linear Programming (MINLP) optimisation problem involving specific constraints, typically the respect of a time horizon for the synthesis of various
products. Genetic Algorithms constitute a common option for the solution of these problems, but their basic operating mode is not always well-suited to every kind of constraint treatment: if the constraints cannot be integrated in the variable encoding or accounted for through adapted genetic operators,
their handling turns out to be a thorny issue. The point of this study is thus to test a few constraint handling techniques on a mid-size example in order to determine which one is the best suited, in the framework of one particular problem formulation. The investigated methods are the elimination of infeasible individuals, the use of a penalty term added to the minimized criterion, the relaxation of the upper bounds on the discrete variables, dominance-based tournaments and, finally, a multiobjective strategy. The numerical computations, analysed in terms of result quality and of computational time, show the superiority of the elimination technique with respect to the former criterion, but only when the latter does not become a bottleneck. Besides, when the problem complexity makes the random location of the feasible space too difficult, a single tournament technique proves to be the most efficient
one.
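Of the compared techniques, the penalty-term strategy is the simplest to sketch: infeasibility of the time-horizon constraint is charged directly to the minimized criterion, so standard selection operators can rank infeasible individuals without any special handling. The cost and synthesis-time functions below are hypothetical stand-ins, since the abstract does not specify the plant model.

```python
def penalized_fitness(x, horizon=10.0, weight=1e3):
    """Penalty-term constraint handling: a quadratic exterior penalty on the
    violation of g(x) <= horizon is added to the minimized criterion f(x).
    f and g are toy stand-ins for the batch-plant cost and synthesis time."""
    f = sum(xi ** 2 for xi in x)            # toy investment cost to minimize
    g = sum(xi for xi in x)                 # toy total synthesis time
    violation = max(0.0, g - horizon)       # zero when the constraint holds
    return f + weight * violation ** 2

feasible = [2.0, 3.0]       # g = 5  <= 10 : no penalty applied
infeasible = [5.0, 6.0]     # g = 11 >  10 : penalized, ranked worse
```

The elimination technique that the study finds superior would instead discard the infeasible individual outright, which works well until feasible points become too hard to find at random.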
Stochastic Nonlinear Model Predictive Control with Efficient Sample Approximation of Chance Constraints
This paper presents a stochastic model predictive control approach for
nonlinear systems subject to time-invariant probabilistic uncertainties in
model parameters and initial conditions. The stochastic optimal control problem
entails a cost function in terms of expected values and higher moments of the
states, and chance constraints that ensure probabilistic constraint
satisfaction. The generalized polynomial chaos framework is used to propagate
the time-invariant stochastic uncertainties through the nonlinear system
dynamics, and to efficiently sample from the probability densities of the
states to approximate the satisfaction probability of the chance constraints.
To increase computational efficiency by avoiding excessive sampling, a
statistical analysis is proposed to systematically determine a-priori the least
conservative constraint tightening required at a given sample size to guarantee
a desired feasibility probability of the sample-approximated chance constraint
optimization problem. In addition, a method is presented for sample-based
approximation of the analytic gradients of the chance constraints, which
increases the optimization efficiency significantly. The proposed stochastic
nonlinear model predictive control approach is applicable to a broad class of
nonlinear systems, under the sufficient condition that each term of the
dynamics is analytic with respect to the states and separable in the inputs, states and
parameters. The closed-loop performance of the proposed approach is evaluated
using the Williams-Otto reactor with seven states and ten uncertain parameters
and initial conditions. The results demonstrate the efficiency of the approach
for real-time stochastic model predictive control and its capability to
systematically account for probabilistic uncertainties in contrast to
nonlinear model predictive control approaches.
Comment: Submitted to Journal of Process Control
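The core of the sample approximation can be sketched independently of the polynomial chaos machinery: draw parameter samples, and require a tightened constraint to hold on at least a given fraction of them. The Gaussian samples and the scalar constraint below are illustrative assumptions, and the fixed `backoff` merely plays the role of the a-priori tightening that the paper determines statistically from the sample size.

```python
import random

def sample_chance_constraint(g, thetas, beta=0.9, backoff=0.0):
    """Sample approximation of the chance constraint P[g(theta) <= 0] >= beta:
    the tightened constraint g(theta) <= -backoff must hold on at least a
    beta fraction of the parameter samples thetas."""
    hits = sum(1 for th in thetas if g(th) <= -backoff)
    return hits / len(thetas) >= beta

rng = random.Random(0)
thetas = [rng.gauss(0.0, 1.0) for _ in range(2000)]

# g(theta) = theta - 1.5 <= 0 holds with probability ~0.933 for a standard
# normal, so the beta = 0.9 chance constraint is satisfied without tightening,
# but fails once a 0.5 backoff is applied (P[theta <= 1.0] ~ 0.841).
ok_plain = sample_chance_constraint(lambda th: th - 1.5, thetas)
ok_tight = sample_chance_constraint(lambda th: th - 1.5, thetas, backoff=0.5)
```

The trade-off the paper analyzes is visible here: a larger backoff makes the sampled constraint more conservative, which raises the probability that the sample-approximated problem is feasible for the true system but shrinks the admissible decision space.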