Convex hull ranking algorithm for multi-objective evolutionary algorithms
Due to the many applications of multi-objective evolutionary algorithms to real-world optimization problems, several studies have been conducted in recent years to improve these algorithms. Since most multi-objective evolutionary algorithms are based on the non-dominance principle, and their complexity depends on finding non-dominated fronts, this paper introduces a new method for ranking the solutions of an evolutionary algorithm's population. First, we investigate the relation between the convex hull and non-dominated solutions, and discuss the time complexity of the convex hull and non-dominated sorting problems. Then, we use convex hull concepts to present a new ranking procedure for multi-objective evolutionary algorithms. The proposed algorithm is well suited to convex multi-objective optimization problems. Finally, we apply this method as an alternative ranking procedure to NSGA-II for non-dominated comparisons, and test it on some benchmark problems.
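The dominance relation underlying such ranking procedures can be made concrete with a small sketch. The following is a minimal Python example for a two-objective minimization population; the naive O(n^2) pairwise check and all names are illustrative, not the paper's convex-hull procedure:

```python
def dominates(a, b):
    """a dominates b (minimization): a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def first_front(population):
    """Naive O(n^2) extraction of the first non-dominated front."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

pop = [(1, 5), (2, 3), (4, 1), (3, 3), (5, 5)]
print(first_front(pop))  # [(1, 5), (2, 3), (4, 1)]
```

Here (3, 3) is dominated by (2, 3) and (5, 5) by every other point; the cost of this pairwise check is what motivates replacing full non-dominated sorting with cheaper geometric ranking.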
Counting and Enumerating Crossing-free Geometric Graphs
We describe a framework for counting and enumerating various types of
crossing-free geometric graphs on a planar point set. The framework generalizes
ideas of Alvarez and Seidel, who used them to count triangulations in time
O(2^n n^2), where n is the number of points. The main idea is to reduce the
problem of counting geometric graphs to counting source-sink paths in a
directed acyclic graph.
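The reduction target, counting source-sink paths in a DAG, is itself a standard dynamic program. A minimal sketch (the toy graph and names are hypothetical, not from the paper):

```python
from functools import lru_cache

# Hypothetical DAG as adjacency lists; 's' is the source, 't' the sink.
dag = {'s': ['a', 'b'], 'a': ['t', 'b'], 'b': ['t'], 't': []}

def count_paths(u, sink='t'):
    """Number of u -> sink paths, computed with memoization.
    Each vertex is evaluated once, so this runs in O(V + E) on a DAG."""
    @lru_cache(maxsize=None)
    def rec(v):
        if v == sink:
            return 1
        return sum(rec(w) for w in dag[v])
    return rec(u)

print(count_paths('s'))  # 3: s->a->t, s->a->b->t, s->b->t
```

Because path counts compose additively over out-neighbors, the same recurrence extends to weighted counts, which is what makes the reduction useful for counting classes of geometric graphs.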
The following new results will emerge. The number of objects in each of the
following classes can be computed in single-exponential time O(c^n), for a
constant c depending on the class: all crossing-free geometric graphs,
crossing-free convex partitions, crossing-free perfect matchings, convex
subdivisions, crossing-free spanning trees, and crossing-free spanning cycles.
With the same bounds on the running time we can construct data structures
which allow fast enumeration of the respective classes. For example, after
the corresponding preprocessing time we can enumerate the set of all
crossing-free perfect matchings using polynomial time per enumerated object.
For crossing-free perfect matchings and convex partitions we further obtain
enumeration algorithms where the time delay for each (in particular, the
first) output is bounded by a polynomial in n.
All described algorithms are comparatively simple, both in terms of their
analysis and implementation.
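The enumeration side can likewise be illustrated on the DAG abstraction: a depth-first generator yields each source-sink path one at a time, with bounded work between consecutive outputs. This is a toy sketch of the general idea, not the paper's data structure; it assumes every vertex of the DAG can reach the sink (no dead ends):

```python
def enumerate_paths(dag, source, sink):
    """Yield every source -> sink path in the DAG.
    If every vertex reaches the sink, the work between consecutive
    yields is bounded by (path length) x (max out-degree)."""
    stack = [(source, [source])]
    while stack:
        v, path = stack.pop()
        if v == sink:
            yield path
            continue
        for w in dag[v]:
            stack.append((w, path + [w]))

dag = {'s': ['a', 'b'], 'a': ['t'], 'b': ['t'], 't': []}
for p in enumerate_paths(dag, 's', 't'):
    print('->'.join(p))
```

After each yield, the next pop descends directly along a pending branch to the sink, which is why the dead-end-free assumption gives a polynomial delay per output.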
Manhattan Cutset Sampling and Sensor Networks.
Cutset sampling is a new approach to acquiring two-dimensional data, i.e., images, in which values are recorded densely along straight lines. This type of sampling is motivated by physical scenarios where data must be taken along straight paths, such as a boat taking water samples. Additionally, the dense data collected along lines may make it possible to reconstruct image edges more accurately. Finally, cutset sampling offers an advantage in the design of wireless sensor networks: if battery-powered sensors are placed densely along straight lines, the transmission energy required for communication between sensors can be reduced, thereby extending the network lifetime.
A special case of cutset sampling is Manhattan sampling, where data is recorded along evenly-spaced rows and columns. This thesis examines Manhattan sampling in three contexts.

First, we prove a sampling theorem demonstrating that an image can be perfectly reconstructed from Manhattan samples when its spectrum is bandlimited to the union of two Nyquist regions corresponding to the two lattices forming the Manhattan grid. An efficient "onion peeling" reconstruction method is provided, and we show that the Landau bound is achieved. This theorem is generalized to dimensions higher than two, where again signals are reconstructible from a Manhattan set if they are bandlimited to a union of Nyquist regions.

Second, for non-bandlimited images, we present several algorithms for reconstructing natural images from Manhattan samples. The Locally Orthogonal Orientation Penalization (LOOP) algorithm is the best of the proposed algorithms in both subjective quality and mean-squared error. The LOOP algorithm reconstructs images well in general, and outperforms competing algorithms for reconstruction from non-lattice samples.

Finally, we study cutset networks, which are new placement topologies for wireless sensor networks. Assuming a power-law model for communication energy, we show that cutset networks offer reduced communication energy costs over lattice and random topologies. Additionally, when solving centralized and decentralized source localization problems, cutset networks offer reduced energy costs over other topologies for fixed sensor densities and localization accuracies. With the eventual goal of analyzing different cutset topologies, we also analyze the energy per distance required for efficient long-distance communication in lattice networks.

PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120876/1/mprelee_1.pd
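The Manhattan sampling pattern itself is easy to picture: a pixel is kept whenever it lies on one of the evenly-spaced rows or columns. A minimal sketch (the spacing k is an illustrative parameter, not a value from the thesis):

```python
def manhattan_mask(height, width, k):
    """True on every k-th row and every k-th column (Manhattan grid)."""
    return [[(i % k == 0) or (j % k == 0) for j in range(width)]
            for i in range(height)]

mask = manhattan_mask(12, 12, 4)
sampled = sum(v for row in mask for v in row) / (12 * 12)
# Fraction of pixels sampled: 1 - (1 - 1/k)^2 = 2/k - 1/k^2
print(sampled)  # 0.4375 for k = 4
```

The closed-form fraction 2/k - 1/k^2 follows from inclusion-exclusion over the row and column lattices, mirroring the union-of-two-lattices structure in the sampling theorem above.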
Choosing among notions of multivariate depth statistics
Classical multivariate statistics measures the outlyingness of a point by its
Mahalanobis distance from the mean, which is based on the mean and the
covariance matrix of the data. A multivariate depth function is a function
which, given a point and a distribution in d-space, measures centrality by a
number between 0 and 1, while satisfying certain postulates regarding
invariance, monotonicity, convexity and continuity. Accordingly, numerous
notions of multivariate depth have been proposed in the literature, some of
which are also robust against extremely outlying data. The departure from
classical Mahalanobis distance does not come without cost. There is a trade-off
between invariance, robustness and computational feasibility. In the last few
years, efficient exact algorithms as well as approximate ones have been
constructed and made available in R-packages. Consequently, in practical
applications the choice of a depth statistic is no longer restricted by
computational limits to one or two notions; rather, several notions are often
feasible, among which the researcher has to decide. The article debates
theoretical and practical aspects of this choice, including invariance and
uniqueness, robustness and computational feasibility. Complexity and speed of
exact algorithms are compared. The accuracy of approximate approaches like the
random Tukey depth is discussed as well as the application to large and
high-dimensional data. Extensions to local and functional depths and
connections to regression depth are briefly addressed.
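Two of the notions discussed can be sketched in a few lines: the Mahalanobis depth, which maps squared Mahalanobis distance from the mean into (0, 1], and the random Tukey depth, which approximates halfspace depth by the worst case over finitely many random projections. A toy NumPy sketch (sample size, dimension, and number of directions are illustrative choices, not recommendations from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))            # sample in d = 2 dimensions

mu = X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))

def mahalanobis_depth(x):
    """Mahalanobis depth 1 / (1 + squared Mahalanobis distance):
    equals 1 at the mean and decreases toward 0 for outlying points."""
    d2 = (x - mu) @ S_inv @ (x - mu)
    return 1.0 / (1.0 + d2)

def random_tukey_depth(x, n_dirs=200):
    """Approximate halfspace (Tukey) depth: the smallest fraction of
    sample points in a closed halfspace through x, minimized over
    finitely many random projection directions."""
    U = rng.normal(size=(n_dirs, 2))
    proj_X = X @ U.T                     # shape (500, n_dirs)
    proj_x = x @ U.T
    frac_ge = (proj_X >= proj_x).mean(axis=0)
    frac_le = (proj_X <= proj_x).mean(axis=0)
    return float(np.minimum(frac_ge, frac_le).min())

print(mahalanobis_depth(mu))                                   # 1.0
print(random_tukey_depth(mu) >
      random_tukey_depth(np.array([5.0, 5.0])))                # True
```

The trade-off mentioned above is visible even here: the Mahalanobis depth is cheap but tied to the non-robust mean and covariance, while the random Tukey depth is more robust but only approximates the exact halfspace depth, with accuracy depending on the number of directions.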