
    Using Bayesian Inference in Design Applications

    This dissertation presents a new approach for solving engineering design problems such as the design of antenna arrays and finite impulse response (FIR) filters. In this approach, a design problem is cast as an inverse problem, and the tools and methods previously developed for Bayesian inference are adapted to solve it. Given a desired design output, Bayesian parameter estimation and model comparison are employed to produce designs that meet the prescribed specifications and requirements. In the Bayesian inference framework, the solution to a design problem is the posterior distribution, which is proportional to the product of the likelihood and the priors. The likelihood is obtained by assigning a distribution to the error between the desired and achieved design output. The priors are distributions that express constraints on the design parameters; other design requirements are implemented by modifying the likelihood. The posterior, which cannot be determined analytically, is approximated by drawing a sufficient number of samples from it with a Markov chain Monte Carlo method. Each posterior sample represents a design candidate, and the designer selects a single candidate as the final design based on additional design criteria. The framework has been applied to the design of antenna arrays and FIR filters. The antenna array examples presented here use different array types, including planar arrays and symmetric, asymmetric, and reconfigurable linear arrays, to realize various desired radiation patterns, including broadside, end-fire, shaped-beam, and three-dimensional patterns. Practical design requirements have been incorporated, such as a minimum spacing between adjacent elements, limits on the dynamic range and accuracy of the current amplitudes and phases, the ability to maintain antenna performance over a frequency band, and the ability to sustain the loss of an arbitrary element. For the filter design application, all presented examples employ a linear-phase FIR filter to produce various desired frequency responses. In practice, the filter coefficients are limited in dynamic range and accuracy; this requirement has been incorporated into two examples in which the filter coefficients are represented by a sum of signed power-of-two terms.
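
To make the workflow described in this abstract concrete, the following is a minimal, illustrative Python/numpy sketch (not the dissertation's implementation) of posterior sampling for a linear-phase FIR design: a Gaussian likelihood is assigned to the error between the desired and achieved zero-phase response, a uniform prior acts as a box constraint on the coefficients, and a random-walk Metropolis sampler draws design candidates from the posterior. The filter length, frequency grid, cutoff, error scale sigma, and coefficient bound are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative design target: length-21 type-I linear-phase lowpass FIR.
N, M = 21, 10                        # filter length and symmetry centre
w = np.linspace(0, np.pi, 256)       # frequency grid
desired = np.where(w <= 0.3 * np.pi, 1.0, 0.0)   # ideal magnitude response
sigma = 0.05                         # assumed scale of the error distribution

def magnitude(half):
    """Zero-phase response of a symmetric FIR from its M+1 unique taps."""
    # A(w) = h[M] + 2 * sum_{n=1..M} h[M-n] cos(n w)
    n = np.arange(1, M + 1)
    return half[-1] + 2 * (np.cos(np.outer(w, n)) @ half[-2::-1])

def log_posterior(half):
    if np.any(np.abs(half) > 1.0):            # uniform prior as a box constraint
        return -np.inf
    err = magnitude(half) - desired
    return -0.5 * np.sum((err / sigma) ** 2)  # Gaussian error likelihood

# Random-walk Metropolis over the M+1 unique coefficients.
x = np.zeros(M + 1)
lp = log_posterior(x)
samples = []
for it in range(20000):
    prop = x + rng.normal(0, 0.01, M + 1)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        x, lp = prop, lp_prop
    if it >= 10000 and it % 10 == 0:          # keep thinned post-burn-in samples
        samples.append(x.copy())

samples = np.array(samples)                   # each row is a design candidate
best = samples[np.argmax([log_posterior(s) for s in samples])]
h = np.concatenate([best[:-1], best[::-1]])   # full symmetric impulse response

Each retained sample is a candidate impulse response; as the abstract describes, the final design would be selected from these candidates according to additional criteria.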

    Design of Polynomial-Based Filters for Continuously Variable Sample Rate Conversion with Applications in Synthetic Instrumentation

    In this work, the design and application of Polynomial-Based Filters (PBF) for continuously variable Sample Rate Conversion (SRC) is studied. The major contributions of this work are summarized as follows. First, an explicit formula for the Fourier transform of both a symmetrical and a nonsymmetrical PBF impulse response with variable basis function coefficients is derived. In the literature only one explicit formula is given, and that only for a symmetrical even-length filter with fixed basis function coefficients. The frequency-domain optimization of PBFs via linear programming has been proposed in the literature; however, the algorithm was not detailed, nor were explicit formulas derived. In this contribution, a minimax optimization procedure is derived for the frequency-domain optimization of a PBF with time-domain constraints. Explicit formulas are given for direct input to a linear programming routine. Additionally, accompanying Matlab code implementing this optimization in terms of the derived formulas is given in the appendix. In the literature, it has been pointed out that the frequency response of the Continuous-Time (CT) filter decays as frequency goes to infinity. It has also been observed that when implemented in SRC, the CT filter is sampled, resulting in CT frequency response aliasing; thus, for example, the stopband sidelobes of the Discrete-Time (DT) implementation rise above the CT designed level. Building on these observations, it is shown how the roll-off rate of the frequency response of a PBF can be adjusted by adding continuous derivatives to the impulse response. This is of great advantage, especially when the PBF is used for decimation, as the aliasing band attenuation can be made to increase with frequency. It is shown how this technique can be used to dramatically reduce the effect of alias build-up in the passband. In addition, it is shown that as the number of continuous derivatives of the PBF increases, the resulting DT implementation more closely matches the CT design. When implemented for SRC, samples from a PBF impulse response are computed by evaluating the polynomials using a so-called fractional interval, µ. In the literature, the effect of quantizing µ on the frequency response of the PBF has been studied, and formulas have been derived to determine the number of bits required to keep frequency response distortion below prescribed bounds. Elsewhere, a formula has been given to compute the number of bits required to represent µ to obtain a given SRC accuracy for rational-factor SRC. In this contribution, it is shown that these two apparently competing requirements are in fact independent: the wordlength required for SRC accuracy need only be kept in the µ generator, which is a single accumulator, and the output of the µ generator may then be truncated prior to polynomial evaluation. This results in significant computational savings, as polynomial evaluation can require several multiplications and additions. Under the heading of applications, a new Wideband Digital Downconverter (WDDC) for Synthetic Instruments (SI) is introduced. DDCs first tune to a signal's center frequency using a numerically controlled oscillator and mixer, and then zoom in to the bandwidth of interest using SRC. The SRC is required to produce continuously variable output sample rates from a fixed input sample rate over a large range. Current implementations accomplish this using a pre-filter, an arbitrary-factor resampler, and integer decimation filters. In this contribution, the SRC of the WDDC is simplified, reducing the computational requirements by a factor of three or more. In addition, it is shown how this system can be used to develop a novel, computationally efficient FFT-based spectrum analyzer with continuously variable frequency spans. Finally, after giving the theoretical foundation, a real Field Programmable Gate Array (FPGA) implementation of a novel Arbitrary Waveform Generator (AWG) is presented. The new approach uses a fixed Digital-to-Analog Converter (DAC) sample clock in combination with an arbitrary-factor interpolator. Waveforms created at any sample rate are interpolated to the fixed DAC sample rate in real time. As a result, the additional lower-performance analog hardware required in current approaches, namely multiple reconstruction filters and/or additional sample clocks, is avoided. Measured results are given confirming the performance of the system predicted by the theoretical design and simulation.
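
The fractional-interval idea above can be illustrated with a short numpy sketch of arbitrary-factor sample rate conversion using a piecewise cubic Lagrange polynomial evaluated at µ, the kind of computation a Farrow-structure PBF performs; it is a generic textbook construction under stated assumptions, not one of the optimised PBF designs of this work. It also mirrors the wordlength observation in the abstract: the µ generator is a full-precision accumulator, while µ is truncated to mu_bits bits before polynomial evaluation. The signal, sample rates, and mu_bits value are illustrative.

import numpy as np

def lagrange3(x, n, mu):
    """Cubic Lagrange interpolation between x[n] and x[n+1] at fraction mu in [0, 1)."""
    xm1, x0, x1, x2 = x[n - 1], x[n], x[n + 1], x[n + 2]
    return (xm1 * (-mu * (mu - 1) * (mu - 2) / 6)
            + x0 * ((mu + 1) * (mu - 1) * (mu - 2) / 2)
            + x1 * (-(mu + 1) * mu * (mu - 2) / 2)
            + x2 * ((mu + 1) * mu * (mu - 1) / 6))

def resample(x, ratio, mu_bits=8):
    """Resample x by an arbitrary factor ratio = f_out / f_in.

    The accumulator 'acc' holds the exact output-sample position at full
    precision (for SRC accuracy); mu is truncated to mu_bits bits before the
    polynomial evaluation, as discussed in the abstract.
    """
    step = 1.0 / ratio                     # input samples advanced per output sample
    acc, y = 1.0, []                       # start where a 4-point stencil exists
    while acc < len(x) - 2:
        n = int(acc)                       # integer part: base sample index
        mu = acc - n                       # exact fractional interval
        mu_q = np.floor(mu * 2**mu_bits) / 2**mu_bits   # truncated mu
        y.append(lagrange3(x, n, mu_q))
        acc += step                        # full-precision accumulator update
    return np.array(y)

# Example: convert a 1 kHz tone from 44.1 kHz to 48 kHz (a non-integer factor).
fs_in = 44_100.0
t = np.arange(2048) / fs_in
x = np.sin(2 * np.pi * 1000.0 * t)
y = resample(x, ratio=48_000.0 / 44_100.0, mu_bits=8)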

    Sparse Filter Design Under a Quadratic Constraint: Low-Complexity Algorithms

    This paper considers three problems in sparse filter design, the first involving a weighted least-squares constraint on the frequency response, the second a constraint on mean squared error in estimation, and the third a constraint on signal-to-noise ratio in detection. The three problems are unified under a single framework based on sparsity maximization under a quadratic performance constraint. Efficient and exact solutions are developed for specific cases in which the matrix in the quadratic constraint is diagonal, block-diagonal, banded, or has a low condition number. For the more difficult general case, a low-complexity algorithm based on backward greedy selection is described, with emphasis on its efficient implementation. Examples in wireless channel equalization and minimum-variance distortionless-response beamforming show that the backward selection algorithm yields optimally sparse designs in many instances while also highlighting the benefits of sparse design. (Texas Instruments Leadership University Consortium Program)
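
A minimal sketch of the backward greedy selection idea, under illustrative assumptions, is given below: writing the quadratic performance constraint as (b - c)^T Q (b - c) <= gamma, the procedure starts from the unconstrained optimum c and repeatedly forces to zero the coefficient whose removal (with the remaining coefficients re-optimised) increases the quadratic cost least, stopping when no further coefficient can be zeroed without violating the constraint. Q, c, and gamma here are randomly generated placeholders rather than the paper's design examples, and the code is not the authors' implementation.

import numpy as np

def min_cost_with_zeros(Q, c, zero_idx):
    """Minimum of (b - c)^T Q (b - c) subject to b[i] = 0 for all i in zero_idx,
    returned together with the minimising b."""
    n = len(c)
    zi = sorted(zero_idx)
    free = [i for i in range(n) if i not in zero_idx]
    b = np.zeros(n)
    if free:
        b[free] = c[free]
        if zi:
            # Stationarity over the free block: b_F = c_F + Q_FF^{-1} Q_FZ c_Z
            dF = np.linalg.solve(Q[np.ix_(free, free)], Q[np.ix_(free, zi)] @ c[zi])
            b[free] = c[free] + dF
    d = b - c
    return float(d @ Q @ d), b

def backward_greedy(Q, c, gamma):
    """Grow the zero set greedily while the quadratic constraint stays feasible."""
    zeros = set()
    while True:
        best = None
        for i in set(range(len(c))) - zeros:
            cost, _ = min_cost_with_zeros(Q, c, zeros | {i})
            if cost <= gamma and (best is None or cost < best[0]):
                best = (cost, i)
        if best is None:               # no further coefficient can be removed
            break
        zeros.add(best[1])
    return min_cost_with_zeros(Q, c, zeros)[1]

# Tiny illustrative instance (not from the paper).
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))
Q = A @ A.T + np.eye(8)                # positive-definite constraint matrix
c = rng.normal(size=8)                 # unconstrained optimum
b = backward_greedy(Q, c, gamma=0.5)
print("nonzero coefficients:", np.count_nonzero(b))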

    A Branch-and-Bound Algorithm for Quadratically-Constrained Sparse Filter Design

    This paper presents an exact algorithm for sparse filter design under a quadratic constraint on filter performance. The algorithm is based on branch-and-bound, a combinatorial optimization procedure that can either guarantee an optimal solution or produce a sparse solution with a bound on its deviation from optimality. To reduce the complexity of branch-and-bound, several methods are developed for bounding the optimal filter cost. Bounds based on infeasibility yield incrementally accumulating improvements with minimal computation, while two convex relaxations, referred to as linear and diagonal relaxations, are derived to provide stronger bounds. The approximation properties of the two relaxations are characterized analytically as well as numerically. Design examples involving wireless channel equalization and minimum-variance distortionless-response beamforming show that the complexity of obtaining certifiably optimal solutions can often be significantly reduced by incorporating diagonal relaxations, especially in more difficult instances. In the case of early termination due to computational constraints, diagonal relaxations strengthen the bound on the proximity of the final solution to the optimum. (Texas Instruments Leadership University Consortium Program)
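
A stripped-down branch-and-bound sketch in the same spirit is shown below, using only infeasibility-based pruning (the linear and diagonal relaxations of the paper are omitted): each node either forces a coefficient to zero or allows it to be nonzero, a node is pruned when even the relaxed problem, with only the forced zeros imposed, violates the quadratic constraint, and the incumbent's nonzero count prunes branches that cannot improve on it. The helper and the random test instance repeat those of the previous sketch; none of this is the paper's implementation.

import numpy as np

def min_cost_with_zeros(Q, c, zeros):
    """Same helper as in the previous sketch: minimum of (b - c)^T Q (b - c)
    with b[i] = 0 for all i in zeros, together with the minimising b."""
    n = len(c)
    zi = sorted(zeros)
    free = [i for i in range(n) if i not in zeros]
    b = np.zeros(n)
    if free:
        b[free] = c[free]
        if zi:
            b[free] += np.linalg.solve(Q[np.ix_(free, free)], Q[np.ix_(free, zi)] @ c[zi])
    d = b - c
    return float(d @ Q @ d), b

def bnb_sparse(Q, c, gamma):
    """Branch-and-bound: minimise the number of nonzero coefficients subject to
    (b - c)^T Q (b - c) <= gamma."""
    n = len(c)
    best = {"count": n + 1, "b": None}

    def recurse(idx, zeros, kept):
        cost, b = min_cost_with_zeros(Q, c, zeros)
        if cost > gamma:                  # infeasibility bound: no completion can satisfy the constraint
            return
        if len(kept) >= best["count"]:    # cannot improve on the incumbent
            return
        if idx == n:                      # leaf: every coefficient is decided
            best["count"], best["b"] = len(kept), b
            return
        recurse(idx + 1, zeros | {idx}, kept)   # branch 1: force coefficient idx to zero
        recurse(idx + 1, zeros, kept | {idx})   # branch 2: allow coefficient idx to be nonzero

    recurse(0, set(), set())
    return best["b"]

# Same illustrative instance as in the previous sketch.
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 8))
Q = A @ A.T + np.eye(8)
c = rng.normal(size=8)
b = bnb_sparse(Q, c, gamma=0.5)
print("minimal nonzero count:", np.count_nonzero(b))

In the paper, the diagonal and linear relaxations would supply much stronger lower bounds at each node than the simple incumbent count used here, which is what makes certifiably optimal solutions tractable for harder instances.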

    A New Hybrid Descent Method with Application to the Optimal Design of Finite Precision FIR Filters

    In this paper, the problem of the optimal design of discrete-coefficient FIR filters is considered. A novel hybrid descent method, consisting of a simulated annealing algorithm and a gradient-based method, is proposed. The simulated annealing algorithm operates on the space of orthogonal matrices and is used to locate descent points for previously converged local minima. The gradient-based method is derived from converting the discrete problem to a continuous problem via the Stiefel manifold, where convergence can be guaranteed. Several numerical examples demonstrate the effectiveness of the proposed hybrid descent method, showing that it can find better discrete filter designs.
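
For illustration only, the sketch below shows a generic simulated-annealing search over quantized linear-phase FIR taps, minimising the peak error outside the transition band; it does not reproduce the paper's formulation on the space of orthogonal matrices, the Stiefel-manifold gradient stage, or the hybrid coupling between the two. Filter length, wordlength, desired response, and the cooling schedule are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

N, M = 21, 10                          # type-I linear-phase FIR, unique taps 0..M
B = 8                                  # coefficient wordlength (fractional bits)
q = 2.0 ** (-B)                        # quantization step
w = np.linspace(0, np.pi, 512)
desired = np.where(w <= 0.25 * np.pi, 1.0, 0.0)
dont_care = (w > 0.25 * np.pi) & (w < 0.35 * np.pi)   # transition band

def cost(half):
    # Zero-phase response A(w) = h[M] + 2 * sum_{n=1..M} h[M-n] cos(n w)
    n = np.arange(1, M + 1)
    A = half[-1] + 2 * (np.cos(np.outer(w, n)) @ half[-2::-1])
    err = np.abs(A - desired)
    return err[~dont_care].max()       # minimax error outside the transition band

# Start from a quantized windowed-sinc design.
n = np.arange(N) - M
ideal = 0.25 * np.sinc(0.25 * n) * np.hamming(N)
half = np.round(ideal[:M + 1] / q) * q

T, alpha = 0.1, 0.999                  # initial temperature and cooling factor
best, best_cost = half.copy(), cost(half)
cur_cost = best_cost
for _ in range(20000):
    cand = half.copy()
    i = rng.integers(M + 1)
    cand[i] += q * rng.choice([-1, 1]) # move one tap by one quantization step
    c = cost(cand)
    if c < cur_cost or rng.uniform() < np.exp(-(c - cur_cost) / T):
        half, cur_cost = cand, c
        if c < best_cost:
            best, best_cost = cand.copy(), c
    T *= alpha

h = np.concatenate([best, best[-2::-1]])   # full symmetric impulse response
print("peak error of the discrete design:", best_cost)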

    Sonic Booms in Atmospheric Turbulence (SonicBAT): The Influence of Turbulence on Shaped Sonic Booms

    The objectives of the Sonic Booms in Atmospheric Turbulence (SonicBAT) Program were to develop and validate, via research flight experiments under a range of realistic atmospheric conditions, one numeric turbulence model research code and one classic turbulence model research code using traditional N-wave booms in the presence of atmospheric turbulence, and to apply these models to assess the effects of turbulence on the levels of shaped sonic booms predicted from low-boom aircraft designs. The SonicBAT program successfully investigated sonic boom turbulence effects through the execution of flight experiments at two NASA centers, Armstrong Flight Research Center (AFRC) and Kennedy Space Center (KSC), collecting a comprehensive set of acoustic and atmospheric turbulence data that were used to validate the numeric and classic turbulence models developed. The validated codes were incorporated into the PCBoom sonic boom prediction software and used to estimate the effect of turbulence on the levels of shaped sonic booms associated with several low-boom aircraft designs. The SonicBAT program was a four-year effort consisting of turbulence model development and refinement throughout the entire period, as well as extensive flight test planning that culminated in the two research flight tests being conducted in the second and third years of the program. The SonicBAT team, led by Wyle, included partners from the Pennsylvania State University, Lockheed Martin, Gulfstream Aerospace, Boeing, Eagle Aeronautics, Technical & Business Systems, and the Laboratory of Fluid Mechanics and Acoustics (France). A number of collaborators, including the Japan Aerospace Exploration Agency, also participated by supporting the experiments with human and equipment resources at their own expense. Three NASA centers, AFRC, Langley Research Center (LaRC), and KSC, were essential to the planning and conduct of the experiments. The experiments involved precision flight of either an F-18A or F-18B executing steady, level passes at supersonic airspeeds in a turbulent atmosphere to create sonic boom signatures distorted by turbulence. The flights spanned a range of atmospheric turbulence conditions at NASA Armstrong and Kennedy in order to provide a variety of conditions for code validation. The SonicBAT experiments at both sites were designed to capture simultaneous F-18A or F-18B onboard flight instrumentation data, high-fidelity ground-based and airborne acoustic data, surface and upper-air meteorological data, and additional meteorological data from ultrasonic anemometers and SODARs to determine the local atmospheric turbulence and boundary layer height.

    Wideband data-independent beamforming for subarrays

    The desire to operate large antenna arrays for, e.g., radar applications over a wider frequency range is currently limited by the hardware, which due to weight, cost and size only permits complex multipliers behind each element. In contrast, wideband processing would have to rely on tap delay lines enabling digital filters for every element. As an intermediate step, in this thesis we consider a design where elements are grouped into subarrays, within which elements are still individually controlled by narrowband complex weights, but where each subarray output is given a tap delay line or finite impulse response digital filter for further wideband processing. Firstly, this thesis explores how a tap delay line attached to every subarray can be designed as a delay-and-sum beamformer. This filter is set to realise a fractional-delay design based on a windowed sinc function. At the element level, we show that designing a narrowband beam w.r.t. a centre frequency of wideband operation is suboptimal, and suggest an optimisation technique that can yield sufficiently accurate gain over a frequency band of interest for an arbitrary look direction, which, however, comes at the cost of reduced aperture efficiency as well as significantly increased sidelobes. We also suggest an adaptive method to enhance the frequency characteristic of a partial wideband array design, by utilising subarrays pointing in different directions in different frequency bands, resolved by means of a filter bank, to adaptively suppress undesired components in the beam patterns of the subarrays. Finally, the thesis proposes a novel array design approach obtained by rotational tiling of subarrays, such that the overall array aperture is densely constructed from the same geometric subarray by rotation and translation only. Since the grating lobes of differently oriented subarrays do not necessarily align, an effective grating lobe attenuation w.r.t. the main beam is achieved. Based on a review of findings from geometry, a number of designs are highlighted and transformed into numerical examples, and the theoretically expected grating lobe suppression is compared to uniformly spaced arrays. Supported by a number of models and simulations, the thesis thus suggests various numerical and hardware design techniques, mainly the addition of a tap delay line per subarray and some added processing overhead, that can help to construct a large partial wideband array close in wideband performance to currently existing hardware.
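
The per-subarray tap delay line realising a windowed-sinc fractional delay, as mentioned above, can be sketched in a few lines of numpy; this is a generic textbook fractional-delay FIR, not the thesis's optimised design, and the tap count, window, and test signal are illustrative.

import numpy as np

def fractional_delay_fir(delay, num_taps=31):
    """Windowed-sinc FIR approximating a fractional-sample delay.

    The filter imposes a total delay of (num_taps - 1) / 2 + delay samples."""
    n = np.arange(num_taps)
    centre = (num_taps - 1) / 2
    h = np.sinc(n - centre - delay)        # shifted ideal (sinc) interpolator
    h *= np.hamming(num_taps)              # window to control sidelobes
    return h / h.sum()                     # normalise DC gain to unity

# Example: delay a subarray output by 0.37 samples (plus the bulk delay of 15 samples).
fs = 1.0e9                                 # illustrative sample rate
t = np.arange(512) / fs
x = np.cos(2 * np.pi * 80e6 * t)           # illustrative subarray output (80 MHz tone)
y = np.convolve(x, fractional_delay_fir(0.37))[: len(x)]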

    NATURAL ALGORITHMS IN DIGITAL FILTER DESIGN

    Digital filters are an important part of Digital Signal Processing (DSP), which plays vital roles within the modern world, but their design is a complex task requiring a great deal of specialised knowledge. An analysis of this design process is presented, which identifies opportunities for the application of optimisation. The Genetic Algorithm (GA) and Simulated Annealing are problem-independent and increasingly popular optimisation techniques. They do not require detailed prior knowledge of the nature of a problem, and are unaffected by a discontinuous search space, unlike traditional methods such as calculus-based optimisation and hill-climbing. Potential applications of these techniques to the filter design process are discussed and presented with practical results. Investigations into the design of Frequency Sampling (FS) Finite Impulse Response (FIR) filters using a hybrid GA/hill-climber proved especially successful, improving on published results. An analysis of the search space for FS filters provided useful information on the performance of the optimisation technique. The ability of the GA to trade off a filter's performance with respect to several design criteria simultaneously, without intervention by the designer, is also investigated. Methods of simplifying the design process by using this technique are presented, together with an analysis of the difficulty of the non-linear FIR filter design problem from a GA perspective. This gave an insight into the fundamental nature of the optimisation problem and also suggested future improvements. The results gained from these investigations allowed the framework for a potential 'intelligent' filter design system to be proposed, in which embedded expert knowledge, Artificial Intelligence techniques and traditional design methods work together. This could deliver a single tool capable of designing a wide range of filters with minimal human intervention, and of proposing solutions to incomplete problems. It could also provide the basis for the development of tools for other areas of DSP system design.
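
As a toy illustration of GA-based frequency-sampling FIR design (far simpler than the hybrid GA/hill-climber investigated in the thesis, and not its implementation), the sketch below evolves the transition-band sample values of a type-I frequency-sampling filter to minimise peak stopband ripple, using tournament selection, blend crossover, Gaussian mutation, and elitism. Filter length, band edges, population size, and operator settings are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

N = 33                                  # odd length, type-I linear phase
M = (N - 1) // 2
n_pass, n_trans = 5, 2                  # passband and transition bins among 0..M
w = np.linspace(0, np.pi, 1024)
stop = w >= 2 * np.pi * (n_pass + n_trans) / N   # stopband frequency grid

def fs_filter(trans):
    """Frequency-sampling design: IDFT of linear-phase frequency samples."""
    A = np.zeros(N)
    A[:n_pass] = 1.0
    A[n_pass:n_pass + n_trans] = trans
    A[N - np.arange(1, M + 1)] = A[1:M + 1]       # mirror for a real impulse response
    k = np.arange(N)
    H = A * np.exp(-1j * 2 * np.pi * k * M / N)   # linear-phase frequency samples
    return np.real(np.fft.ifft(H))

def fitness(trans):
    h = fs_filter(trans)
    Hw = np.abs(np.exp(-1j * np.outer(w, np.arange(N))) @ h)
    return Hw[stop].max()               # peak stopband ripple (lower is better)

# A minimal generational GA.
pop = rng.uniform(0.0, 1.0, (40, n_trans))
for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    new = [pop[scores.argmin()].copy()]           # elitism: keep the best design
    while len(new) < len(pop):
        i, j = rng.choice(len(pop), 2), rng.choice(len(pop), 2)
        p1 = pop[i[scores[i].argmin()]]           # tournament winner 1
        p2 = pop[j[scores[j].argmin()]]           # tournament winner 2
        lam = rng.uniform(size=n_trans)
        child = lam * p1 + (1 - lam) * p2         # blend crossover
        child += rng.normal(0, 0.02, n_trans)     # Gaussian mutation
        new.append(np.clip(child, 0.0, 1.0))
    pop = np.array(new)

best = pop[np.argmin([fitness(p) for p in pop])]
print("peak stopband ripple (dB):", 20 * np.log10(fitness(best)))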

    The design and implementation of a wideband digital radio receiver

    Historically, radio has been implemented using largely analogue circuitry. Improvements in mixed-signal and digital signal processing technology are rapidly leading towards a largely digital approach, with down-conversion and filtering moving into the digital signal processing domain. Advantages of this technology include increased performance and functionality, as well as reduced cost. Wideband receivers place the heaviest demands on both mixed-signal and digital signal processing technology, requiring high spurious-free dynamic range (SFDR) and large signal processing bandwidths. This dissertation investigates the extent to which current digital technology is able to meet these demands and compete with the proven architectures of analogue receivers. A scalable, generalised digital radio receiver capable of operating in the HF and VHF bands was designed, implemented and tested, yielding instantaneous bandwidths in excess of 10 MHz with a spurious-free dynamic range exceeding 80 decibels below carrier (dBc). The results achieved reflect favourably on the digital receiver architecture. While the necessity for minimal analogue circuitry will possibly always exist, digital radio architectures are currently able to compete with their analogue counterparts. The digital receiver is simple to manufacture, being based largely on commercial off-the-shelf (COTS) components, and exhibits extreme flexibility and high performance when compared with comparably priced analogue receivers.
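
The down-conversion and filtering chain described above can be illustrated with a minimal numpy sketch of a digital down-converter: a numerically controlled oscillator mixes the band of interest to baseband, a windowed-sinc low-pass filter limits the bandwidth, and the result is decimated to the output rate. This is a generic illustration rather than the receiver's actual implementation; the sample rate, tuning frequency, decimation factor, and test signal are assumptions.

import numpy as np

fs = 100e6                                  # ADC sample rate (illustrative)
f0 = 21.4e6                                 # tuning (NCO) frequency
decim = 16                                  # decimation factor

# Wideband input: the wanted carrier plus an out-of-band interferer.
n = np.arange(65536)
x = np.cos(2 * np.pi * f0 / fs * n) + 0.5 * np.cos(2 * np.pi * 37e6 / fs * n)

# 1) NCO + mixer: shift the band of interest to complex baseband (I/Q).
nco = np.exp(-2j * np.pi * f0 / fs * n)
bb = x * nco

# 2) Low-pass FIR (windowed sinc) sized for the decimated bandwidth.
taps = 128
cutoff = 0.5 * (fs / decim) / fs            # normalised cutoff in cycles/sample
m = np.arange(taps) - (taps - 1) / 2
h = 2 * cutoff * np.sinc(2 * cutoff * m) * np.hamming(taps)

# 3) Filter and decimate to the output sample rate fs / decim.
y = np.convolve(bb, h)[: len(bb)][::decim]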