Optimized techniques for real-time microwave and millimeter wave SAR imaging
Microwave and millimeter wave synthetic aperture radar (SAR)-based imaging techniques, used for nondestructive evaluation (NDE), have shown tremendous usefulness for the inspection of a wide variety of complex composite materials and structures. Studies were performed to optimize uniform and nonuniform sampling (i.e., measurement positions), since existing formulations of SAR resolution and sampling criteria do not account for all of the physical characteristics of a measurement (e.g., a 2D limited-size aperture, an electric field that decreases with distance from the measuring antenna, etc.), and nonuniform sampling criteria support sampling below the Nyquist rate. The results of these studies demonstrate optimum sampling given design requirements and fully explain the dependence of resolution on the sampling criteria. This work was then extended to manually selected, nonuniformly distributed samples, so that the intelligence of the user may be utilized by observing SAR images updated in real time. Furthermore, a novel reconstruction method was devised that uses components of the SAR algorithm to exploit the inherent spatial information contained in the data, resulting in a superior final SAR image. Better SAR images can also be obtained if multiple frequencies are utilized rather than a single frequency; to this end, the design of an existing microwave imaging array was modified to support multiple-frequency measurement. Lastly, the data of interest in such an array may be corrupted by coupling among the closely spaced elements, resulting in images with an increased level of artifacts. A method for correcting or pre-processing the data using an adaptation of a correlation-canceling technique is presented as well. --Abstract, page iii
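For context, the classical uniform sampling bound that such optimization studies refine can be sketched numerically. The formula below is the textbook SAR aperture-sampling criterion, not the optimized criteria developed in this work, and the 24 GHz example frequency is an arbitrary illustrative choice:

```python
import math

def max_uniform_step(freq_hz, beamwidth_deg=180.0, c=3.0e8):
    """Classical uniform SAR sampling bound (textbook criterion): the
    measurement step must satisfy dx <= lambda / (4 * sin(beamwidth/2))
    so the round-trip phase changes by less than pi between adjacent
    measurement positions. For a very wide beam this reduces to lambda/4."""
    lam = c / freq_hz  # free-space wavelength
    return lam / (4.0 * math.sin(math.radians(beamwidth_deg) / 2.0))

# e.g. a K-band measurement at 24 GHz with a wide-beam probe
step_m = max_uniform_step(24e9)
print(f"{step_m * 1e3:.2f} mm")  # lambda/4, about 3.1 mm
```

A narrower antenna beam relaxes the bound, which is one reason fixed Nyquist-style criteria over-prescribe samples for real measurement setups.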
From Theory to Practice: Sub-Nyquist Sampling of Sparse Wideband Analog Signals
Conventional sub-Nyquist sampling methods for analog signals exploit prior
information about the spectral support. In this paper, we consider the
challenging problem of blind sub-Nyquist sampling of multiband signals, whose
unknown frequency support occupies only a small portion of a wide spectrum. Our
primary design goals are efficient hardware implementation and low
computational load on the supporting digital processing. We propose a system,
named the modulated wideband converter, which first multiplies the analog
signal by a bank of periodic waveforms. The product is then lowpass filtered
and sampled uniformly at a low rate, which is orders of magnitude smaller than
Nyquist. Perfect recovery from the proposed samples is achieved under certain
necessary and sufficient conditions. We also develop a digital architecture,
which allows either reconstruction of the analog input, or processing of any
band of interest at a low rate, that is, without interpolating to the high
Nyquist rate. Numerical simulations demonstrate many engineering aspects:
robustness to noise and mismodeling, potential hardware simplifications,
real-time performance for signals with time-varying support, and stability to
quantization effects. We compare our system with two previous approaches:
periodic nonuniform sampling, which is bandwidth limited by existing hardware
devices, and the random demodulator, which is restricted to discrete multitone
signals and has a high computational load. In the broader context of Nyquist
sampling, our scheme has the potential to break through the bandwidth barrier
of state-of-the-art analog conversion technologies such as interleaved
converters. Comment: 17 pages, 12 figures, to appear in IEEE Journal of Selected Topics in
Signal Processing, the special issue on Compressed Sensing
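As a rough illustration of the acquisition chain described above (mix with a periodic waveform, lowpass filter, sample uniformly at a low rate), here is a one-channel sketch in NumPy. All rates, lengths, and names are invented toy values, and the recovery stage is omitted entirely:

```python
import numpy as np

def mwc_channel(x, fs, pattern, f_cut, decim):
    """One channel of a modulated-wideband-converter-style front end
    (illustrative sketch). x is a Nyquist-rate simulation stand-in for
    the analog input; pattern is a periodic +/-1 chipping sequence."""
    # 1) mix: multiply the input by the periodic waveform (tiled to length)
    mixed = x * np.resize(pattern, x.shape)
    # 2) lowpass: simple windowed-sinc FIR filter with cutoff f_cut
    n = np.arange(-64, 65)
    h = np.sinc(2 * f_cut / fs * n) * np.hamming(n.size)
    h /= h.sum()
    filtered = np.convolve(mixed, h, mode="same")
    # 3) sample uniformly at the low rate fs / decim
    return filtered[::decim]

# toy multiband input: two narrow tones inside a wide spectrum
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.cos(2 * np.pi * 310 * t) + 0.5 * np.cos(2 * np.pi * 432 * t)

rng = np.random.default_rng(0)
pattern = rng.choice([-1.0, 1.0], size=20)  # period of 20 chips
y = mwc_channel(x, fs, pattern, f_cut=25.0, decim=20)
print(y.shape)  # one low-rate channel: 50 samples instead of 1000
```

The mixing stage aliases every spectral slice down to baseband with a distinct weighting, which is what makes recovery of the unknown support possible from several such channels in parallel.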
Doctor of Philosophy dissertation
Balancing the trade-off between the spatial and temporal quality of interactive computer graphics imagery is one of the fundamental design challenges in the construction of rendering systems. Inexpensive interactive rendering hardware may deliver a high level of temporal performance if the level of spatial image quality is sufficiently constrained. In these cases, the spatial fidelity level is an independent parameter of the system and temporal performance is a dependent variable. The spatial quality parameter is selected for the system by the designer based on the anticipated graphics workload. Interactive ray tracing is one example: the algorithm is often selected for its ability to deliver a high level of spatial fidelity, and the relatively lower level of temporal performance is readily accepted. This dissertation proposes an algorithm to perform fine-grained adjustments to the trade-off between the spatial quality of images produced by an interactive renderer and the temporal performance or quality of the rendered image sequence. The approach first determines the minimum amount of sampling work necessary to achieve a certain fidelity level, and then allows the surplus capacity to be directed towards spatial or temporal fidelity improvement. The algorithm consists of an efficient parallel spatial and temporal adaptive rendering mechanism and a control optimization problem that adjusts the sampling rate based on a characterization of the rendered imagery and constraints on the capacity of the rendering system.
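The minimum-work-then-surplus idea can be illustrated with a back-of-the-envelope budget calculation. All names and numbers here are hypothetical, not taken from the dissertation:

```python
def allocate_budget(capacity_sps, width, height, min_spp, target_fps):
    """Illustrative sketch of surplus allocation: first fund the minimum
    spatial fidelity (min_spp samples per pixel) at the target frame
    rate, then report the leftover sampling capacity, which could buy
    either extra samples per pixel (spatial) or extra frames (temporal)."""
    min_cost = width * height * min_spp * target_fps  # samples/sec required
    if capacity_sps < min_cost:
        # cannot hit the fidelity floor: frame rate becomes the dependent variable
        return {"fps": capacity_sps / (width * height * min_spp),
                "spp": min_spp, "surplus_sps": 0.0}
    surplus = capacity_sps - min_cost
    return {"fps": target_fps,
            "spp": min_spp,
            "surplus_sps": surplus,
            "extra_spp": surplus / (width * height * target_fps),  # spatial option
            "extra_fps": surplus / (width * height * min_spp)}     # temporal option

plan = allocate_budget(capacity_sps=2e8, width=1280, height=720,
                       min_spp=2, target_fps=30)
print(plan["surplus_sps"])
```

The dissertation's control problem decides, per frame, how to split that surplus based on the rendered imagery; the sketch only exposes the two extremes.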
AH Method: a novel routine for vicinity examination of the optimum found with a genetic algorithm
The paper presents a novel heuristic procedure (further called the AH Method) to investigate the shape of a function in the direct vicinity of a found optimum. The survey uses only the space sampling collected during optimization with an evolutionary algorithm; for this purpose, a finite point-set model is considered. The statistical analysis of sampling quality is based on how well the points in question cover the entire attraction region. The tolerance boundaries of the parameters are determined for a user-specified increase of the objective function value above the found minimum. The presented test-case data show that the proposed approach is comparable to other optimum-neighborhood examination algorithms, while the AH Method requires noticeably shorter computational time than its counterparts. This is achieved by reusing points from the optimization without additional objective function calls, as well as by a significant reduction of the repository size during preprocessing.
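The core reuse step, reading tolerance boundaries off the already-collected sample archive instead of issuing new objective calls, might look like the following sketch. The real AH Method also performs the coverage analysis of the attraction region; this toy omits it, and the test function is invented:

```python
import numpy as np

def tolerance_bounds(points, values, delta):
    """Per-parameter tolerance boundaries estimated from an optimization
    archive: keep every archived sample whose objective value stays within
    `delta` of the found minimum, and report the per-coordinate extent of
    that set. No additional objective function calls are made."""
    points = np.asarray(points, dtype=float)
    values = np.asarray(values, dtype=float)
    keep = values <= values.min() + delta
    return points[keep].min(axis=0), points[keep].max(axis=0)

# toy archive from an optimization run on f(x, y) = x^2 + 10*y^2
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(500, 2))
vals = pts[:, 0] ** 2 + 10 * pts[:, 1] ** 2
lo, hi = tolerance_bounds(pts, vals, delta=0.1)
print(lo, hi)  # the y-range is much tighter than the x-range
```

Because the objective is ten times more sensitive in y, the recovered tolerance interval for y is correspondingly narrower, which is exactly the kind of vicinity information the procedure reports to the user.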
Robust Mission Design Through Evidence Theory and Multi-Agent Collaborative Search
In this paper, the preliminary design of a space mission is approached
introducing uncertainties on the design parameters and formulating the
resulting reliable design problem as a multiobjective optimization problem.
Uncertainties are modelled through evidence theory and the belief, or
credibility, in the successful achievement of mission goals is maximised along
with the reliability of constraint satisfaction. The multiobjective
optimisation problem is solved through a novel algorithm based on the
collaboration of a population of agents searching for the set of highly
reliable solutions. Two typical problems in mission analysis are used to
illustrate the proposed methodology.
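The belief measure that such a formulation maximises is the standard Dempster-Shafer quantity; for a single uncertain parameter with interval focal elements it reduces to the small sketch below. The focal elements and masses are invented toy values, not data from the paper:

```python
def belief(mass, event):
    """Belief of an interval event under a 1-D body of evidence (standard
    Dempster-Shafer definition): sum the basic probability masses of all
    focal elements (intervals) entirely contained in the event interval."""
    lo, hi = event
    return sum(m for (a, b), m in mass.items() if lo <= a and b <= hi)

# body of evidence: focal interval -> basic probability assignment (sums to 1)
m = {(0.0, 1.0): 0.5, (0.5, 1.5): 0.3, (1.0, 2.0): 0.2}
print(belief(m, (0.0, 1.5)))  # only the first two focal elements fit inside
```

Belief is a conservative lower bound on the probability of the event, which is why maximising it yields designs that remain credible under the modelled epistemic uncertainty.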
Xampling: Signal Acquisition and Processing in Union of Subspaces
We introduce Xampling, a unified framework for signal acquisition and
processing of signals in a union of subspaces. The framework has two main
functions: analog compression, which narrows down the input bandwidth prior to
sampling with commercial devices, and a nonlinear algorithm, which then detects
the input subspace prior to conventional signal processing. A representative
union model of spectrally-sparse signals serves as a test-case to study these
Xampling functions. We adopt three metrics for the choice of analog
compression: robustness to model mismatch, required hardware accuracy and
software complexities. We conduct a comprehensive comparison between two
sub-Nyquist acquisition strategies for spectrally-sparse signals, the random
demodulator and the modulated wideband converter (MWC), in terms of these
metrics and draw operative conclusions regarding the choice of analog
compression. We then address low-rate signal processing and develop an
algorithm that enables convenient signal processing at sub-Nyquist rates
from samples obtained by the MWC. We conclude by showing that a variety of
other sampling approaches for different union classes fit nicely into our
framework. Comment: 16 pages, 9 figures, submitted to IEEE for possible publication