Optimal Pattern Synthesis of Linear Antenna Array Using Grey Wolf Optimization Algorithm
The aim of this paper is to introduce the grey wolf optimization (GWO) algorithm to the electromagnetics and antenna community. GWO is a recent metaheuristic inspired by the social hierarchy and hunting behavior of grey wolves, and it has the potential to perform well on both unconstrained and constrained optimization problems. In this work, GWO is applied to linear antenna arrays for optimal pattern synthesis in two ways: by optimizing the element positions under uniform excitation, and by optimizing the element current amplitudes while keeping the spacing and phase of a uniform array. GWO is used to achieve an array pattern with minimum side lobe level (SLL) together with null placement in specified directions, and is also applied to minimize the first side lobe nearest to the main beam (the near side lobe). Several examples illustrate the application of GWO to linear array optimization, and the results are validated by benchmarking against other state-of-the-art nature-inspired evolutionary algorithms. The results suggest that optimizing linear antenna arrays with GWO yields considerable improvements over both the uniform array and syntheses obtained with other optimization techniques.
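For readers new to the algorithm, the core GWO position update can be sketched in a few lines of Python. This is a generic, minimal implementation of the standard update driven by the three best wolves (alpha, beta, delta), not the authors' antenna-synthesis code; the sphere function used below merely stands in for an SLL or null-placement cost.

```python
import numpy as np

def gwo(objective, dim, bounds, n_wolves=20, n_iters=100, seed=0):
    """Minimal grey wolf optimizer: alpha, beta, and delta wolves guide the pack."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iters):
        fitness = np.array([objective(w) for w in wolves])
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]   # three best wolves (copies)
        a = 2 - 2 * t / n_iters                  # decreases linearly from 2 to 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a               # exploration/exploitation factor
                C = 2 * r2
                D = np.abs(C * leader - wolves[i])
                new += leader - A * D            # step towards this leader
            wolves[i] = np.clip(new / 3, lo, hi) # average of the three moves
    fitness = np.array([objective(w) for w in wolves])
    return wolves[np.argmin(fitness)]

# Toy usage: minimize the sphere function as a stand-in for a pattern cost.
best = gwo(lambda x: float(np.sum(x ** 2)), dim=4, bounds=(-5.0, 5.0))
```

In an array-synthesis setting, `objective` would instead evaluate the array factor and penalize SLL and null-direction violations for a candidate position or amplitude vector.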
Ant Colony Based Hybrid Approach for Optimal Compromise Sum-Difference Patterns Synthesis
In the synthesis of monopulse array antennas, many stochastic optimization algorithms have been used to solve the so-called optimal compromise problem between sum and difference patterns when sub-arrayed feed networks are considered. More recently, hybrid approaches that exploit the convexity of the functional with respect to a subset of the unknowns (i.e., the sub-array excitation coefficients) have demonstrated their effectiveness. In this letter, a hybrid approach based on Ant Colony Optimization (ACO) is proposed. In the first step, ACO defines the sub-array membership of the array elements; in the second step, the sub-array weights are computed by solving a convex programming problem. The definitive version is available at www3.interscience.wiley.co
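The two-step structure of such a hybrid can be illustrated with a toy sketch. Here a plain random search stands in for ACO's pheromone-guided construction of sub-array memberships (step 1), and an ordinary least-squares fit stands in for the convex programming step (step 2); the function names and the target excitation vector are illustrative, not taken from the letter.

```python
import numpy as np

def subarray_weights(memberships, element_target, n_sub):
    """Step 2: given a fixed membership vector, the per-sub-array weights that
    best match a target element excitation follow from a convex (least-squares) fit."""
    n = len(memberships)
    T = np.zeros((n, n_sub))                 # T[i, c] = 1 iff element i is in sub-array c
    T[np.arange(n), memberships] = 1.0
    w, *_ = np.linalg.lstsq(T, element_target, rcond=None)
    return w, T @ w

def hybrid_search(element_target, n_sub, n_trials=2000, seed=0):
    """Step 1 stand-in: random membership search in place of ACO; the outer
    discrete search / inner convex solve structure is the point being shown."""
    rng = np.random.default_rng(seed)
    n = len(element_target)
    best = (np.inf, None, None)
    for _ in range(n_trials):
        m = rng.integers(0, n_sub, size=n)
        w, approx = subarray_weights(m, element_target, n_sub)
        err = float(np.linalg.norm(approx - element_target))
        if err < best[0]:
            best = (err, m, w)
    return best

# Toy target: ideal per-element excitations to approximate with 3 sub-arrays.
target = np.array([1.0, 0.9, 0.7, 0.5, 0.3, 0.1])
err, membership, weights = hybrid_search(target, n_sub=3)
```

Replacing the random membership search with ACO's probabilistic, pheromone-driven construction recovers the letter's structure: the discrete combinatorial part is handled stochastically, while the continuous weights are always obtained exactly from a convex sub-problem.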
Transformations of High-Level Synthesis Codes for High-Performance Computing
Specialized hardware architectures promise a major step in performance and energy efficiency over the traditional load/store devices currently employed in large-scale computing systems. The adoption of high-level synthesis (HLS) from languages such as C/C++ and OpenCL has greatly increased programmer productivity when designing for such platforms. While this has enabled a wider audience to target specialized hardware, the optimization principles known from traditional software design are no longer sufficient to implement high-performance codes, so fast and efficient codes for reconfigurable platforms remain challenging to design. To alleviate this, we present a set of optimizing transformations for HLS, targeting scalable and efficient architectures for high-performance computing (HPC) applications. Our work provides a toolbox for developers, in which we systematically identify classes of transformations, the characteristics of their effect on the HLS code and the resulting hardware (e.g., increased data reuse or resource consumption), and the objectives that each transformation can target (e.g., resolving interface contention or increasing parallelism). We show how these can be used to efficiently exploit pipelining, on-chip distributed fast memory, and on-chip streaming dataflow, allowing for massively parallel architectures. To quantify the effect of our transformations, we use them to optimize a set of throughput-oriented FPGA kernels, demonstrating that our enhancements are sufficient to scale up parallelism within the hardware constraints. With the transformations covered, we hope to establish a common framework for performance engineers, compiler developers, and hardware developers to tap into the performance potential offered by specialized hardware architectures using HLS.
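The paper's transformations target HLS C/C++, but the intent of one common class, interleaving independent accumulations to break a loop-carried dependency that would otherwise stall the pipeline, can be shown behaviorally in plain Python. The function names and the choice of eight partial sums are illustrative assumptions; on an FPGA the same restructuring lets a pipelined floating-point adder accept a new input every cycle.

```python
import numpy as np

def dot_naive(a, b):
    """Single accumulator: in hardware, the loop-carried dependency on `acc`
    limits throughput to one addition per adder latency."""
    acc = 0.0
    for x, y in zip(a, b):
        acc += x * y
    return acc

def dot_interleaved(a, b, k=8):
    """Transformed version: k independent partial accumulators interleave the
    additions, removing the inter-iteration dependency; the partials are
    combined in a final reduction."""
    partial = [0.0] * k
    for i, (x, y) in enumerate(zip(a, b)):
        partial[i % k] += x * y
    return sum(partial)

rng = np.random.default_rng(1)
a, b = rng.random(64), rng.random(64)
```

Both variants compute the same dot product (up to floating-point reassociation); only the dependency structure, and hence the achievable hardware pipelining, differs.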
Distributed and parallel sparse convex optimization for radio interferometry with PURIFY
Next-generation radio interferometric telescopes are entering an era of big data with extremely large data sets. While these telescopes can observe the sky with higher sensitivity and resolution than before, computational challenges in image reconstruction must be overcome to realize their potential. New methods in sparse image reconstruction and convex optimization (cf. compressive sensing) have been shown to produce higher-fidelity reconstructions of simulations and real observations than traditional methods. This article presents distributed and parallel algorithms and implementations for sparse image reconstruction, with significant practical considerations that are important when implementing these algorithms for big data. We benchmark the algorithms presented, showing that they are considerably faster than their serial equivalents. We then pre-sample gridding kernels to scale the distributed algorithms to larger data sizes, reporting application times for 1 Gb to 2.4 Tb data sets over 25 to 100 nodes for up to 50 billion visibilities, and find that run-times for the distributed algorithms range from 100 milliseconds to 3 minutes per iteration. This work is an important step towards the computationally scalable and efficient algorithms and implementations needed to image observations of both extended and compact sources from next-generation radio interferometers such as the SKA. The algorithms are implemented in the latest versions of the SOPT (https://github.com/astro-informatics/sopt) and PURIFY (https://github.com/astro-informatics/purify) software packages (version 3.1.0), which have been released alongside this article.
Comment: 25 pages, 5 figures