337 research outputs found

    Model Uncertainty and Test of a Segmented Mirror Telescope

    Get PDF
    The future of large aperture telescopes relies heavily on the development of segmented array designs. Today's monolithic mirror technology has reached a barrier, particularly for space-based telescopes. These large-diameter, dense mirrors allow stable high-resolution imaging but are incompatible with optimized space launch. Segmented mirror telescopes are designed to balance light weight with compact stowage. The structure necessary to support the flexible mirror array often combines isogrid geometry and complex actuation hardware. High-fidelity finite element models are commonly used to economically predict how the optics will perform under different environmental conditions. The research detailed herein integrates superelement partitioning and complexity-simplifying techniques, resulting in a 92% size reduction of a nodally dense (1×10^6 degrees of freedom) model to allow efficient tuning and validation. Measured vibration data of a segmented mirror telescope were collected to allow system characterization and preliminary tuning. A single frequency-comparison tuning iteration decreased the model's error in predicting system dynamics, up to 500 Hz, by 4% on average. Results demonstrate it is possible to drastically reduce model size while preserving analytical accuracy. The methodologies presented, applied to similar models with complex isogrid structures, would allow efficient model validation using standard-equipped US Air Force desktop computers.
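    The abstract does not specify which reduction scheme underlies the superelement partitioning, so the following is only a minimal sketch of static (Guyan) condensation, one common way to shrink a finite element model onto a set of retained "master" degrees of freedom; the function name and the master/slave split are illustrative assumptions, not the thesis' implementation.

    ```python
    import numpy as np

    def guyan_reduce(K, M, master_idx):
        """Condense a stiffness/mass pair onto retained 'master' DOFs.

        K, M       : (n, n) stiffness and mass matrices
        master_idx : indices of the DOFs kept in the reduced model
        Returns the reduced (Kr, Mr) and the transformation matrix T.
        """
        n = K.shape[0]
        master_idx = np.asarray(master_idx)
        slave_idx = np.setdiff1d(np.arange(n), master_idx)

        # Static condensation: slave DOFs follow the masters through
        # G = -Kss^{-1} Ksm (inertia of the slave set is neglected).
        Ksm = K[np.ix_(slave_idx, master_idx)]
        Kss = K[np.ix_(slave_idx, slave_idx)]
        G = -np.linalg.solve(Kss, Ksm)

        T = np.zeros((n, master_idx.size))
        T[master_idx, :] = np.eye(master_idx.size)
        T[slave_idx, :] = G

        Kr = T.T @ K @ T
        Mr = T.T @ M @ T
        return Kr, Mr, T
    ```

    In schemes of this kind, keeping only interface and instrumented DOFs as masters is what yields size reductions of the order reported above while the low-frequency dynamics are approximately preserved.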

    High performance digital signal processing: Theory, design, and applications in finance

    Get PDF
    The way scientific research and business are conducted has drastically changed over the last decade. Big data and data-intensive scientific discovery are two terms that have been coined recently. They describe the tremendous amounts of noisy data, created extremely rapidly by various sensing devices and methods, that need to be explored for information inference. Researchers and practitioners who can obtain meaningful information out of big data in the shortest time gain a competitive advantage. Hence, there is more need than ever for a variety of high performance computational tools for scientific and business analytics. Interest in developing efficient data processing methods, such as compression and noise filtering tools enabling real-time analytics of big data, is increasing. A common concern in digital signal processing applications has been the lack of fast handling of observed data. This problem has been an active research topic, addressed by progress in analytical tools allowing fast processing of big data. One particular tool is the Karhunen-Loève transform (KLT), also known as principal component analysis, in which the covariance matrix of a stochastic process is decomposed into its eigenvectors and eigenvalues as the optimal orthonormal transform. Specifically, eigenanalysis is utilized to determine the KLT basis functions. KLT is a widely employed signal analysis method used in applications including noise filtering of measured data and compression. However, deriving the KLT basis for a given signal covariance matrix demands prohibitive computational resources in many real-world scenarios. In this dissertation, the engineering implementation of KLT as well as the theory of eigenanalysis for auto-regressive order one, AR(1), discrete stochastic processes are investigated and novel improvements are proposed. The new findings are applied to well-known problems in quantitative finance (QF). First, an efficient method to derive the explicit KLT kernel for AR(1) processes is introduced, utilizing a simple root-finding method for the transcendental equations. Performance improvement over a popular numerical eigenanalysis algorithm, called divide and conquer, is shown. Second, the implementation of the parallel Jacobi algorithm for eigenanalysis on graphics processing units is improved such that access to the dynamic random access memory is entirely coalesced. The proposed method improves speed by a factor of 68.5 compared to a CPU implementation for a square matrix of size 1,024. Third, several tools developed and implemented in the dissertation are applied to QF problems such as risk analysis and portfolio risk management. In addition, several topics in QF, such as price models, the Epps effect, and jump processes, are investigated and new insights are suggested from a multi-resolution (multi-rate) signal processing perspective. This dissertation is expected to contribute to a better understanding and bridging of the analytical methods in digital signal processing and applied mathematics, and to their wider utilization in the finance sector. The emerging joint research and technology development efforts in QF and financial engineering will help investors, bankers, and regulators build and maintain more robust and fair financial markets in the future.
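    As a rough illustration of the core operation (not the explicit-kernel derivation contributed by the dissertation), the sketch below builds the Toeplitz covariance matrix of a unit-variance AR(1) process and obtains the KLT basis by ordinary numerical eigendecomposition; the parameter values are arbitrary.

    ```python
    import numpy as np

    def ar1_covariance(n, rho):
        """Covariance matrix R[i, j] = rho**|i - j| of a unit-variance AR(1) process."""
        idx = np.arange(n)
        return rho ** np.abs(idx[:, None] - idx[None, :])

    def klt(x, R):
        """Project a signal block x onto the KLT basis of covariance R."""
        # The eigenvectors of R are the KLT basis functions; eigh returns them in
        # ascending eigenvalue order, so flip to put high-variance components first.
        eigvals, eigvecs = np.linalg.eigh(R)
        order = np.argsort(eigvals)[::-1]
        return eigvecs[:, order].T @ x, eigvals[order]

    # Toy usage: how much energy the first k KLT coefficients capture for a highly
    # correlated process (this concentration is what makes KLT useful for compression).
    rng = np.random.default_rng(0)
    n, rho, k = 64, 0.95, 8
    R = ar1_covariance(n, rho)
    x = rng.multivariate_normal(np.zeros(n), R)
    coeffs, lam = klt(x, R)
    print(f"top {k} of {n} eigenvalues hold {lam[:k].sum() / lam.sum():.1%} of the variance")
    ```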

    Pole-mounted sonar vibration prediction using CMAC neural networks

    Get PDF
    The efficiency and accuracy of pole-mounted sonar systems are severely affected by pole vibration. Traditional signal processing techniques are not appropriate for the pole vibration problem due to the nonlinearity of the pole vibration and the lack of a priori knowledge about the statistics of the data to be processed. A novel approach to predicting pole-mounted sonar vibration using CMAC neural networks is presented. The feasibility of this approach is studied in theory, evaluated by simulation and verified with a real-time laboratory prototype. Analytical bounds on the learning rate of a CMAC neural network are derived which guarantee convergence of the weight vector in the mean. Both simulation and experimental results indicate the CMAC neural network is an effective tool for this vibration prediction problem.
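    The abstract does not give the network architecture or the derived learning-rate bounds, so the following is only a minimal single-input CMAC sketch (overlapping tilings with an LMS weight update) to show the kind of predictor involved; the names and parameter values are illustrative.

    ```python
    import numpy as np

    class CMAC:
        """Minimal single-input CMAC: overlapping tilings with LMS weight updates."""

        def __init__(self, n_tilings=8, n_tiles=64, x_min=-1.0, x_max=1.0, lr=0.1):
            self.n_tilings, self.n_tiles = n_tilings, n_tiles
            self.x_min, self.x_max = x_min, x_max
            self.lr = lr                              # illustrative value, not a derived bound
            self.w = np.zeros((n_tilings, n_tiles))   # one weight row per tiling

        def _active_cells(self, x):
            # Each tiling is offset by a fraction of a tile width, so nearby inputs
            # share most, but not all, of their active cells (local generalisation).
            width = (self.x_max - self.x_min) / (self.n_tiles - 1)
            for t in range(self.n_tilings):
                offset = t * width / self.n_tilings
                idx = int((x - self.x_min + offset) / width)
                yield t, min(max(idx, 0), self.n_tiles - 1)

        def predict(self, x):
            return sum(self.w[t, i] for t, i in self._active_cells(x))

        def train(self, x, target):
            # LMS update spread evenly over the active cells; convergence in the
            # mean depends on keeping lr within bounds such as those derived in the thesis.
            err = target - self.predict(x)
            for t, i in self._active_cells(x):
                self.w[t, i] += self.lr * err / self.n_tilings
            return err
    ```

    Trained online on past vibration samples, such a network can be used to predict the next sample, which is the flavour of prediction problem described above.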

    On the use of spectral element methods for under-resolved simulations of transitional and turbulent flows

    Get PDF
    The present thesis comprises a sequence of studies that investigate the suitability of spectral element methods for model-free under-resolved computations of transitional and turbulent flows. More specifically, the continuous and the discontinuous Galerkin (i.e. CG and DG) methods have their performance assessed for under-resolved direct numerical simulations (uDNS) / implicit large eddy simulations (iLES). In these approaches, the governing equations of fluid motion are solved in unfiltered form, as in a typical direct numerical simulation, but the degrees of freedom employed are insufficient to capture all the turbulent scales. Numerical dissipation introduced by appropriate stabilisation techniques complements molecular viscosity in providing small-scale regularisation at very large Reynolds numbers. Added spectral vanishing viscosity (SVV) is considered for CG, while upwind dissipation is relied upon for DG-based computations. In both cases, the use of polynomial dealiasing strategies is assumed. Focus is given to the so-called eigensolution analysis framework, where numerical dispersion and diffusion errors are appraised in wavenumber/frequency space for simplified model problems, such as the one-dimensional linear advection equation. In the assessment of CG and DG, both temporal and spatial eigenanalyses are considered. While the former assumes periodic boundary conditions and is better suited for temporally evolving problems, the latter considers inflow/outflow type boundaries and should be favoured for spatially developing flows. Despite the simplicity of linear eigensolution analyses, surprisingly useful insights can be obtained from them and verified in actual turbulence problems. In fact, one of the most important contributions of this thesis is to highlight how linear eigenanalysis can be helpful in explaining why and how to use spectral element methods (particularly CG and DG) in uDNS/iLES approaches. Various aspects of solution quality and numerical stability are discussed by connecting observations from eigensolution analyses and under-resolved turbulence computations. First, DG’s temporal eigenanalysis is revisited and a simple criterion named "the 1% rule" is devised to estimate DG’s effective resolution power in spectral space. This criterion is shown to pinpoint the wavenumber beyond which a numerically induced dissipation range appears in the energy spectra of Burgers turbulence simulations in one dimension. Next, the temporal eigenanalysis of CG is discussed with and without SVV. A modified SVV operator based on DG’s upwind dissipation is proposed to enhance CG’s accuracy and robustness for uDNS/iLES. Then, an extensive set of DG computations of the inviscid Taylor-Green vortex model problem is considered. These are used for the validation of the 1% rule in actual three-dimensional transitional/turbulent flows. The performance of various Riemann solvers is also discussed in this infinite Reynolds number scenario, with high-quality solutions being achieved. Subsequently, the capabilities of CG for uDNS/iLES are tested through a complex turbulent boundary layer (periodic) test problem. While LES results of this test case are known to require sophisticated modelling and relatively fine grids, high-order CG approaches are shown to deliver surprisingly good quality with significantly fewer degrees of freedom, even without SVV. Finally, spatial eigenanalyses are conducted for DG and CG. 
    Differences caused by upwinding levels and Riemann solvers are explored in the DG case, while robust SVV design is considered for CG, again by reference to DG’s upwind dissipation. These aspects are then tested in a two-dimensional test problem that mimics spatially developing grid turbulence. In summary, a point is made that uDNS/iLES approaches based on high-order spectral element methods, when properly stabilised, are very powerful tools for the computation of practically all types of transitional and turbulent flows. This capability is argued to stem essentially from their superior resolution power per degree of freedom and the absence of (often restrictive) modelling assumptions. Conscientious usage is however necessary as solution quality and numerical robustness may depend strongly on discretisation variables such as polynomial order, appropriate mesh spacing, Riemann solver, SVV parameters, dealiasing strategy and alternative stabilisation techniques.
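    To make the eigensolution-analysis idea concrete, the sketch below performs a temporal eigenanalysis of the 1D linear advection equation for a first-order upwind finite-difference stencil, used here only as a simple stand-in for the DG/CG operators studied in the thesis; the 1%-style cutoff shown is a simplified analogue of the thesis' criterion, not its exact definition.

    ```python
    import numpy as np

    # Temporal eigenanalysis of u_t + a u_x = 0 on a uniform periodic grid.
    # Substituting u_j = exp(i k x_j) into the semi-discrete upwind scheme
    #   du_j/dt = -a (u_j - u_{j-1}) / h
    # gives the eigenvalue lambda(k) = -a (1 - exp(-i k h)) / h.  The modified
    # wavenumber k_mod = i lambda / a then exposes dispersion error (Re) and
    # numerically induced dissipation (Im < 0) as functions of k h.
    a, h = 1.0, 1.0
    k = np.linspace(1e-3, np.pi / h, 400)        # wavenumbers up to the grid Nyquist
    lam = -a * (1.0 - np.exp(-1j * k * h)) / h
    k_mod = 1j * lam / a

    # "1%-rule"-style resolution estimates: largest wavenumber whose dissipation
    # (respectively dispersion) error stays within 1% of the exact behaviour.
    ok_diss = np.abs(k_mod.imag) <= 0.01 * k
    ok_disp = np.abs(k_mod.real - k) <= 0.01 * k
    nyq = np.pi / h
    print(f"dissipation-limited resolution: {k[ok_diss].max() / nyq:.2%} of Nyquist")
    print(f"dispersion-limited resolution:  {k[ok_disp].max() / nyq:.2%} of Nyquist")
    ```

    For the high-order DG/CG operators analysed in the thesis the same procedure is applied to the full per-element discretisation matrix, and the resolving fraction is much larger than for this deliberately dissipative low-order stand-in.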

    Sensor Array Signal Processing via Eigenanalysis of Matrix Pencils Composed of Data Derived from Translationally Invariant Subarrays

    Get PDF
    An algorithm is developed for estimating characteristic parameters associated with a scene of radiating sources given the data derived from a pair of translationally invariant arrays, the X and Y arrays, which are displaced relative to one another. The algorithm is referred to as PRO-ESPRIT and is predicated on invoking two recent mathematical developments: (1) the SVD-based solution to the Procrustes problem of optimally approximating an invariant subspace rotation, and (2) the Total Least Squares method for perturbing each of the two estimates of a common subspace in a minimal fashion until the two perturbed spaces are the same. For uniform linear array scenarios, the use of forward-backward averaging (FBAVG) in conjunction with PRO-ESPRIT is shown to effect a substantial reduction in the computational burden, a significant improvement in performance, a simple scheme for estimating the number of sources, and source decorrelation. These gains may be attributed to FBAVG’s judicious exploitation of the diagonal invariance operator relating the Direction of Arrival matrix of the Y array to that associated with the X array. Similar gains may be achieved in the case where the X and Y arrays are either not linear or not uniformly spaced through the use of pseudo-forward-backward averaging (PFBAVG). However, the use of PFBAVG does not effect source decorrelation and reduces the maximum number of resolvable sources by a factor of two. Simulation studies and the results of applying PRO-ESPRIT to real data demonstrate the excellent performance of the method.
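    For readers unfamiliar with the ESPRIT family, the sketch below shows a generic TLS-ESPRIT estimator for a uniform linear array split into two overlapping subarrays; it is not the PRO-ESPRIT algorithm itself (no Procrustes step, no forward-backward averaging), and the array geometry and phase convention are assumptions made for illustration.

    ```python
    import numpy as np

    def tls_esprit_doa(snapshots, n_sources, spacing=0.5):
        """Generic TLS-ESPRIT DOA estimation for a uniform linear array.

        snapshots : (n_elements, n_snapshots) complex array data
        spacing   : element spacing in wavelengths
        Assumes steering phases exp(j*2*pi*spacing*m*sin(theta)) for element m.
        Returns DOA estimates in degrees.
        """
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]
        _, vecs = np.linalg.eigh(R)
        Es = vecs[:, -n_sources:]                 # signal subspace (dominant eigenvectors)

        # Two translationally invariant subarrays: elements 0..d-2 (X) and 1..d-1 (Y).
        Ex, Ey = Es[:-1, :], Es[1:, :]

        # Total least squares solution of Ex @ Psi ~= Ey via the SVD of [Ex | Ey].
        _, _, Vh = np.linalg.svd(np.hstack([Ex, Ey]))
        V = Vh.conj().T
        V12 = V[:n_sources, n_sources:]
        V22 = V[n_sources:, n_sources:]
        Psi = -V12 @ np.linalg.inv(V22)

        # The eigenvalues of Psi estimate the inter-subarray phase factors.
        phases = np.angle(np.linalg.eigvals(Psi))
        u = np.clip(phases / (2 * np.pi * spacing), -1.0, 1.0)
        return np.degrees(np.arcsin(u))
    ```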

    Array signal processing robust to pointing errors

    No full text
    The objective of this thesis is to design computationally efficient DOA (direction-of-arrival) estimation algorithms and beamformers robust to pointing errors, by harnessing the antenna geometrical information and received signals. Initially, two fast root-MUSIC-type DOA estimation algorithms are developed, which can be applied to arbitrary arrays. Instead of computing all roots, the first proposed iterative algorithm calculates only the wanted roots. The second, IDFT-based method obtains the DOAs by scanning a few circles in parallel, so that rooting is avoided. Both proposed algorithms, with less computational burden, have asymptotically similar performance to the extended root-MUSIC. The second main contribution of this thesis concerns the matched direction beamformer (MDB), which does not use the interference subspace. The manifold vector of the desired signal is modeled as a vector lying in a known linear subspace, but the associated linear combination vector is otherwise unknown due to pointing errors. This vector can be found by computing the principal eigenvector of a certain rank-one matrix. An MDB is then constructed which is robust to both pointing errors and overestimation of the signal subspace dimension. Finally, an interference cancellation beamformer robust to pointing errors is considered. By means of vector space projections, much of the pointing error can be eliminated. A one-step power estimation is derived using the theory of covariance fitting. Then an estimate-and-subtract interference canceller beamformer is proposed, in which the power inversion problem is avoided and the interferences can be cancelled completely.
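    As a baseline for the root-MUSIC-type algorithms mentioned above, the sketch below implements classical root-MUSIC for a uniform linear array; the iterative and IDFT-based variants of the thesis, and the MDB construction, are not reproduced here, and the geometry and parameters are illustrative.

    ```python
    import numpy as np

    def root_music_doa(snapshots, n_sources, spacing=0.5):
        """Classical root-MUSIC for a uniform linear array (baseline version)."""
        d, n = snapshots.shape
        R = snapshots @ snapshots.conj().T / n
        _, vecs = np.linalg.eigh(R)
        En = vecs[:, :d - n_sources]              # noise subspace
        C = En @ En.conj().T

        # Coefficients of z^{d-1} * a^T(1/z) C a(z): sums along the diagonals of C,
        # ordered from the highest power of z to the lowest for np.roots.
        coeffs = np.array([np.trace(C, offset=k) for k in range(d - 1, -d, -1)])
        roots = np.roots(coeffs)

        # Roots come in conjugate-reciprocal pairs; keep those inside the unit
        # circle and pick the n_sources closest to it.
        roots = roots[np.abs(roots) < 1.0]
        roots = roots[np.argsort(1.0 - np.abs(roots))][:n_sources]

        u = np.clip(np.angle(roots) / (2 * np.pi * spacing), -1.0, 1.0)
        return np.degrees(np.arcsin(u))
    ```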

    Fast soft-tissue deformations with FEM

    Get PDF
    Soft body simulation has been a very active research area in computer animation since Baraff and Witkin's 1998 work on cloth simulation, which led Pixar to start using such techniques in all of its animated movies that followed. The challenges in these simulations arise from several distinct sources. From a numerical point of view, deformable systems are large sparse problems that can become numerically unstable at surprising rates and may need to be modified at each time step. From a mathematical point of view, hyperelastic models defined by continuum mechanics need to be derived, established and configured. And from the geometric side, physical interaction with the environment and self-collisions may need to be detected and introduced into the solver. Computer graphics academia primarily focuses on offline methods, both for rendering and simulation, while advances from industry mainly apply to real-time rendering. We therefore wondered how such high-quality simulation methods would map to a real-time use case. In this thesis, we delve into the simulation system used by Pixar's Fizt2 simulator, based on the Finite Element Method, and investigate how to apply the same techniques in real time while preserving robustness and fidelity, while also providing the user with interaction mechanisms. A 3D engine for simulating deformable materials has been developed following the described models, with an interactive interface that allows the definition and configuration of scenes and later interaction with the simulation.
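    The abstract only names the ingredients (FEM, hyperelastic models, per-step system modification), so the following is merely a sketch of the linearised implicit-Euler velocity update that underlies many deformable-body solvers of this kind; the matrices and the absence of damping are simplifying assumptions, not Fizt2's actual formulation.

    ```python
    import numpy as np

    def implicit_euler_step(M, K, x, v, f_ext, dt):
        """One backward-Euler step for a linearised deformable system.

        Internal forces are modelled as f_int = -K x, with x the displacement
        from rest, so the update solves
            (M + dt^2 K) dv = dt (f_ext - K x - dt K v)
        for the velocity increment dv; larger steps remain stable because the
        stiffness is treated implicitly.
        """
        A = M + dt * dt * K
        b = dt * (f_ext - K @ x - dt * (K @ v))
        dv = np.linalg.solve(A, b)
        v_new = v + dv
        x_new = x + dt * v_new
        return x_new, v_new
    ```

    In a real engine the system matrix would be sparse, re-linearised around the current state each step, and handed to a sparse or iterative solver instead of a dense solve.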