A Computational Comparison of Optimization Methods for the Golomb Ruler Problem
The Golomb ruler problem is defined as follows: Given a positive integer n,
locate n marks on a ruler such that the distances between all distinct pairs
of marks differ from each other and the total length of the ruler is
minimized. The Golomb ruler problem has applications in information theory,
astronomy and communications, and it can be seen as a challenge for
combinatorial optimization algorithms. Although constructing high quality
rulers is well-studied, proving optimality is a far more challenging task. In
this paper, we provide a computational comparison of different optimization
paradigms, each using a different model (linear integer, constraint programming
and quadratic integer) to certify that a given Golomb ruler is optimal. We
propose several enhancements to improve the computational performance of each
method by exploring bound tightening, valid inequalities, cutting planes and
branching strategies. We conclude that a certain quadratic integer programming
model solved through a Benders decomposition and strengthened by two types of
valid inequalities performs the best in terms of solution time for small-sized
Golomb ruler problem instances. On the other hand, a constraint programming
model improved by range reduction and a particular branching strategy could
have more potential to solve larger size instances due to its promising
parallelization features.
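The distinct-distances condition in the definition above is easy to verify directly. A minimal Python sketch of such a check (the function name is illustrative, not from the paper):

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """Check that all pairwise distances between marks are distinct."""
    dists = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(dists) == len(set(dists))

# The optimal 4-mark ruler of length 6:
print(is_golomb_ruler([0, 1, 4, 6]))  # True
# Not a Golomb ruler: distance 1 occurs twice (1-0 and 2-1):
print(is_golomb_ruler([0, 1, 2, 4]))  # False
```

A check like this only certifies validity of a given ruler; certifying optimality, as the paper discusses, requires proving that no shorter ruler with the same number of marks exists.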
Volumetric diffusers: pseudorandom cylinder arrays on a periodic lattice
Most conventional diffusers take the form of a surface-based treatment, and as a result can only operate in hemispherical space. Placing a diffuser in the volume of a room might provide greater efficiency by allowing scattering into the whole space. A periodic cylinder array (or sonic crystal) produces periodicity lobes and uneven scattering. Introducing defects into an array, by removing or varying the size of some of the cylinders, can enhance its diffusing abilities. This paper applies number-theoretic concepts to create cylinder arrays that have more even scattering. Predictions using a Boundary Element Method are compared to measurements to verify the model, and suitable metrics are adopted to evaluate performance. Arrangements with good aperiodic autocorrelation properties tend to produce the best results. At low frequencies, power is controlled by object size; at high frequencies, diffusion is dominated by lattice spacing and structural similarity. Consequently, the operational bandwidth is rather small. By using sparse arrays and varying cylinder sizes, a wider bandwidth can be achieved.
Two-dimensional patterns with distinct differences: constructions, bounds, and maximal anticodes
A two-dimensional (2-D) grid with dots is called a configuration with distinct differences if any two lines which connect two dots are distinct either in their length or in their slope. These configurations are known to have many applications such as radar, sonar, physical alignment, and time-position synchronization. Rather than restricting dots to lie in a square or rectangle, as previously studied, we restrict the maximum distance between dots of the configuration; the motivation for this is a new application of such configurations to key distribution in wireless sensor networks. We consider configurations in the hexagonal grid as well as in the traditional square grid, with distances measured both in the Euclidean metric, and in the Manhattan or hexagonal metrics. We note that these configurations are confined inside maximal anticodes in the corresponding grid. We classify maximal anticodes for each diameter in each grid. We present upper bounds on the number of dots in a pattern with distinct differences contained in these maximal anticodes. Our bounds settle (in the negative) a question of Golomb and Taylor on the existence of honeycomb arrays of arbitrarily large size. We present constructions and lower bounds on the number of dots in configurations with distinct differences contained in various 2-D shapes (such as anticodes) by considering periodic configurations with distinct differences in the square grid.
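For dots at integer coordinates in the square grid, the distinct-differences condition (connecting lines differing in length or slope) is equivalent to all pairwise difference vectors being distinct up to sign. A minimal sketch of that check, with an illustrative function name not taken from the paper:

```python
from itertools import combinations

def has_distinct_differences(dots):
    """Check that every pair of dots defines a distinct difference vector,
    i.e. every connecting line differs in length or slope."""
    diffs = set()
    for (x1, y1), (x2, y2) in combinations(dots, 2):
        d = (x2 - x1, y2 - y1)
        # Canonicalize sign so (dx, dy) and (-dx, -dy) count as the same line.
        if d[0] < 0 or (d[0] == 0 and d[1] < 0):
            d = (-d[0], -d[1])
        if d in diffs:
            return False
        diffs.add(d)
    return True

print(has_distinct_differences([(0, 0), (1, 0), (0, 2)]))  # True
print(has_distinct_differences([(0, 0), (1, 0), (2, 0)]))  # False: (1, 0) repeats
```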
GPGPU for Difficult Black-box Problems
Difficult black-box problems arise in many scientific and industrial areas. In this paper, efficient use of a hardware accelerator to implement dedicated solvers for such problems is discussed and studied based on the example of the Golomb Ruler problem. The actual solution of the problem is shown based on evolutionary and memetic algorithms accelerated on GPGPU. The presented results show that GPGPU outperforms CPU in some memetic algorithms, which can be used as part of a hybrid algorithm for finding near-optimal solutions of the Golomb Ruler problem. The presented research is part of building a heterogeneous parallel algorithm for the difficult black-box Golomb Ruler problem.
Manx Arrays: Perfect Non-Redundant Interferometric Geometries
Interferometry applications (e.g., radio astronomy) often require optimizing the placement of
the interferometric elements. One such optimality criterion is a uniform distribution of non-redundant element
spacings (in both distance and position angle). While large systems, with many elements, can rely on saturating
the sample space and disregard "wasted" sampling, placement is far more critical for small arrays with only a
few elements, where a single element can represent a significant fraction of the overall cost. This paper
defines a "perfect array" as a mathematical construct having uniform and complete element spacings within a
circle of radius equal to the maximum element spacing. Additionally, the largest perfect non-redundant array,
comprising six elements, is presented. The geometry is described, along with the properties of the layout and
situations where it would be of significant benefit to array applications and non-redundant masking designs.
Layered cellular automata for pseudorandom number generation
The proposed Layered Cellular Automata (L-LCA), which comprises a main CA with L additional layers of memory registers, has simple local interconnections and a high operating speed. The time-varying L-LCA transformation at each clock can be reduced to a single transformation in the set formed by the transformation matrix of a maximum-length Cellular Automata (CA), and the entire transformation sequence for a single period can be obtained. The analysis of the period characteristics of state sequences is simplified by analyzing representative transformation sequences determined by the phase difference between the initial states of each layer. The L-LCA model can be extended by adding more layers of memory or by using a larger main CA based on widely available maximum-length CA. Several L-LCA (L=1,2,3,4) with 10- to 48-bit main CA are subjected to the DIEHARD test suite, and better results are obtained than for other CA designs reported in the literature. The experiments are repeated with well-known nonlinear functions used in place of the linear function in the L-LCA; linear complexity is significantly increased when a nonlinear function is used.
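As an illustration of the kind of building block an abstract like this relies on, the sketch below implements one synchronous step of a 1-D rule 90/150 hybrid CA under null boundary conditions, a standard construction for maximum-length CA. The rule vector and function name are illustrative assumptions, not taken from the paper:

```python
def ca_step(state, rules):
    """One synchronous update of a 1-D hybrid CA under null boundaries.
    rules[i] selects rule 90 (XOR of the two neighbors) or rule 150
    (XOR of the two neighbors and the cell itself) for cell i."""
    n = len(state)
    nxt = []
    for i in range(n):
        left = state[i - 1] if i > 0 else 0    # null boundary on the left
        right = state[i + 1] if i < n - 1 else 0  # null boundary on the right
        bit = left ^ right
        if rules[i] == 150:
            bit ^= state[i]
        nxt.append(bit)
    return nxt

print(ca_step([1, 0, 0], [90, 150, 90]))  # [0, 1, 0]
```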
Large-scale parallelism for constraint-based local search: the costas array case study
We present the parallel implementation of a constraint-based Local Search algorithm and investigate its performance on several hardware platforms with several hundreds or thousands of cores. We chose as the basis for these experiments the Adaptive Search method, an efficient sequential Local Search method for Constraint Satisfaction Problems (CSP). After preliminary experiments on some CSPLib benchmarks, we detail the modeling and solving of a hard combinatorial problem related to radar and sonar applications: the Costas Array Problem. Performance evaluation on some classical CSP benchmarks shows that speedups are very good for a few tens of cores, and good up to a few hundred cores. However, for a hard combinatorial search problem such as the Costas Array Problem, the sequential version outperforms previous Local Search implementations, while the parallel version shows nearly linear speedups up to 8,192 cores. The proposed parallel scheme is simple and based on independent multi-walks with no communication between processes during search. We also investigated a cooperative multi-walk scheme where processes share simple information, but this scheme does not seem to improve performance.
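The Costas Array Problem mentioned above asks for an n x n permutation matrix in which all displacement vectors between pairs of dots are distinct. A minimal check of that property (the helper name is illustrative; this is a validity check, not the paper's Local Search solver):

```python
from itertools import combinations

def is_costas(perm):
    """perm[i] is the row of the dot in column i. A Costas array has all
    displacement vectors between pairs of dots pairwise distinct."""
    vecs = set()
    for i, j in combinations(range(len(perm)), 2):
        v = (j - i, perm[j] - perm[i])
        if v in vecs:
            return False
        vecs.add(v)
    return True

print(is_costas([2, 1, 3, 0]))  # True: all 6 displacement vectors distinct
print(is_costas([0, 1, 2]))     # False: (1, 1) occurs twice
```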