
    A new generation 99 line Matlab code for compliance Topology Optimization and its extension to 3D

    Compact and efficient Matlab implementations of compliance Topology Optimization (TO) for 2D and 3D continua are given, consisting of 99 and 125 lines respectively. On discretizations ranging from 3·10^4 to 4.8·10^5 elements, the 2D version, named top99neo, shows speedups of 2.55 to 5.5 times compared to the well-known top88 code (Andreassen et al., 2011). The 3D version, named top3D125, is the most compact and efficient Matlab implementation for 3D TO to date, showing a speedup of 1.9 times compared to the code of Amir et al. (2014) on a discretization with 2.2·10^5 elements. For both codes, the improvements are due to much more efficient procedures for the assembly and implementation of filters, and to shortcuts in the design update step. The use of an acceleration strategy, yielding major cuts in the overall computational time, is also discussed, stressing its easy integration within the basic codes.
    Comment: 17 pages, 8 figures, 4 tables
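    A central building block in these compliance TO codes is the density filter, which replaces each element density by a weighted average of its neighbours within a radius rmin. The sketch below is an illustrative Python version of that classic linear "hat" filter, not the paper's Matlab implementation; grid size and rmin are arbitrary choices.

    ```python
    import numpy as np

    def density_filter(x, rmin):
        """Linear 'hat' density filter: each element becomes a weighted
        average of neighbours within radius rmin (weight = rmin - distance)."""
        nely, nelx = x.shape
        r = int(rmin)
        xf = np.zeros_like(x)
        for i in range(nely):
            for j in range(nelx):
                acc, wsum = 0.0, 0.0
                for k in range(max(i - r, 0), min(i + r + 1, nely)):
                    for l in range(max(j - r, 0), min(j + r + 1, nelx)):
                        w = max(0.0, rmin - np.hypot(i - k, j - l))  # cone weight
                        acc += w * x[k, l]
                        wsum += w
                xf[i, j] = acc / wsum  # normalized convex combination
        return xf
    ```

    Because the weights are normalized, a uniform density field passes through unchanged, and filtered values always stay within the range of the input. The vectorized assembly tricks that give top99neo its speedup replace loops like these with precomputed sparse operators.
    
    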

    Spectral filtering for the reduction of the Gibbs phenomenon of polynomial approximation methods on Lissajous curves with applications in MPI

    Polynomial interpolation and approximation on sampling points along Lissajous curves using Chebyshev series is an effective approach for fast image reconstruction in Magnetic Particle Imaging (MPI). Due to the nature of spectral methods, a Gibbs phenomenon occurs in the reconstructed image if the underlying function has discontinuities. A possible remedy is spectral filtering acting on the coefficients of the approximating polynomial. In this work, after a description of the Gibbs phenomenon and classical filtering techniques in one and several dimensions, we present an adaptive spectral filtering process for the resolution of this phenomenon and for an improved approximation of the underlying function or image. In this adaptive technique, the spectral filter depends on the distance of a spatial point to the nearest discontinuity. We show the effectiveness of this filtering approach in theory, in numerical simulations, and in the application to Magnetic Particle Imaging.
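    To illustrate the underlying mechanism (with a plain Fourier series rather than the paper's Chebyshev setting, and a classical non-adaptive exponential filter rather than the adaptive one proposed here), the sketch below damps the high-order coefficients of a square-wave partial sum; the filter parameters alpha and p are illustrative values.

    ```python
    import numpy as np

    def square_partial_sum(x, N, sigma=None):
        """Partial Fourier sum of the square wave sign(sin x);
        sigma, if given, is a spectral filter applied to coefficient k/N."""
        s = np.zeros_like(x)
        for k in range(1, N + 1, 2):      # odd harmonics only
            c = 4.0 / (np.pi * k)         # Fourier coefficient of the square wave
            if sigma is not None:
                c *= sigma(k / N)         # damp high modes
            s += c * np.sin(k * x)
        return s

    # Classical exponential filter of order p (a standard choice in the
    # spectral literature); alpha = 36 pushes the top mode to machine epsilon.
    alpha, p = 36.0, 4
    exp_filter = lambda eta: np.exp(-alpha * eta**p)

    x = np.linspace(0.01, np.pi - 0.01, 2000)
    raw = square_partial_sum(x, 129)              # exhibits Gibbs overshoot
    filt = square_partial_sum(x, 129, exp_filter)  # overshoot suppressed
    ```

    Away from the discontinuity the filtered sum converges rapidly to 1, while the unfiltered sum keeps its familiar ~9%-of-the-jump overshoot; the paper's contribution is to make the filter strength depend on the distance to the nearest discontinuity instead of being fixed globally.
    
    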

    Study of interpolation methods for high-accuracy computations on overlapping grids

    The overset strategy can be an efficient way to retain a high-accuracy discretization by decomposing a complex geometry into topologically simple subdomains. Apart from the grid assembly algorithm, the key point of the overset technique lies in the interpolation processes which ensure the communication between the overlapping grids. The family of explicit Lagrange and optimized interpolation schemes is studied. The a priori interpolation error is analyzed in Fourier space and combined with the error of the chosen discretization to highlight the modification of the numerical error. When high-accuracy algorithms are used, an optimization of the interpolation coefficients can enhance the resolvability, which is useful when high-frequency waves or small turbulent scales need to be supported by a grid. For general curvilinear grids in more than one space dimension, a mapping into a computational space followed by a tensorization of 1-D interpolations is preferred to a direct evaluation of the coefficients in the physical domain. A high-order extension of the isoparametric mapping is accurate and robust since it avoids the inversion of a matrix which may be ill-conditioned. A posteriori error analyses indicate that the interpolation stencil size must be tailored to the accuracy of the discretization scheme. For well-discretized wavelengths, the results show that choosing a stencil smaller than that of the corresponding finite-difference scheme can be acceptable. Besides, the gain brought by optimization in capturing high-frequency phenomena is underlined. Adding order constraints to the optimization allows an interesting trade-off when a large range of scales is considered. Finally, the ability of the present overset strategy to preserve accuracy is illustrated by the diffraction of an acoustic source by two cylinders, and by the generation of acoustic tones in a rotor–stator interaction. Some recommendations are formulated in the closing section.
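    The tensorization idea mentioned above — building multi-dimensional interpolation from explicit 1-D Lagrange weights — can be sketched as follows. This is a generic illustration of the scheme family studied, not the paper's optimized coefficients; stencil nodes and the test point are arbitrary.

    ```python
    import numpy as np

    def lagrange_weights(nodes, x):
        """Explicit Lagrange interpolation weights for point x on a 1-D stencil."""
        n = len(nodes)
        w = np.ones(n)
        for j in range(n):
            for m in range(n):
                if m != j:
                    w[j] *= (x - nodes[m]) / (nodes[j] - nodes[m])
        return w

    def interp2d_tensor(xn, yn, f, x, y):
        """Tensorized 2-D interpolation: 1-D weights in each direction,
        applied to grid values f[i, j] = f(xn[i], yn[j])."""
        wx = lagrange_weights(xn, x)
        wy = lagrange_weights(yn, y)
        return wx @ f @ wy
    ```

    With a 4-point stencil per direction this reproduces any polynomial of degree up to 3 in each variable exactly, which is the baseline the optimized (resolvability-enhancing) coefficients trade off against: they give up some formal order to reduce dispersive error on marginally resolved wavelengths.
    
    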

    PynPoint: a modular pipeline architecture for processing and analysis of high-contrast imaging data

    The direct detection and characterization of planetary and substellar companions at small angular separations is a rapidly advancing field. Dedicated high-contrast imaging instruments deliver unprecedented sensitivity, enabling detailed insights into the atmospheres of young low-mass companions. In addition, improvements in data reduction and PSF subtraction algorithms are equally relevant for maximizing the scientific yield, both from new and archival data sets. We aim at developing a generic and modular data reduction pipeline for processing and analysis of high-contrast imaging data obtained with pupil-stabilized observations. The package should be scalable and robust for future implementations and in particular well suited to the 3-5 micron wavelength range, where typically (ten) thousands of frames have to be processed and an accurate subtraction of the thermal background emission is critical. PynPoint is written in Python 2.7 and applies various image processing techniques, as well as statistical tools for analyzing the data, building on open-source Python packages. The current version of PynPoint has evolved from an earlier version that was developed as a PSF subtraction tool based on PCA. The architecture of PynPoint has been redesigned with the core functionalities decoupled from the pipeline modules. Modules have been implemented for dedicated processing and analysis steps, including background subtraction, frame registration, PSF subtraction, photometric and astrometric measurements, and estimation of detection limits. The pipeline package enables end-to-end data reduction of pupil-stabilized data and supports classical dithering and coronagraphic data sets.
    As an example, we processed archival VLT/NACO L' and M' data of beta Pic b, reassessed the planet's brightness and position with an MCMC analysis, and we provide a derivation of the photometric error budget.
    Comment: 16 pages, 9 figures, accepted for publication in A&A. PynPoint is available at https://github.com/PynPoint/PynPoin
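    The PCA-based PSF subtraction at the heart of PynPoint's heritage can be sketched in a few lines of NumPy: build principal components from the mean-subtracted frame stack itself, project each frame onto the first few components, and subtract that low-rank PSF model. This is a generic illustration, not the PynPoint API.

    ```python
    import numpy as np

    def pca_psf_subtract(frames, n_components):
        """Subtract a low-rank PSF model built from the stack itself.
        frames: array of shape (n_frames, ny, nx)."""
        n = frames.shape[0]
        X = frames.reshape(n, -1).astype(float)
        mean = X.mean(axis=0)
        Xc = X - mean                                   # mean-subtracted stack
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        basis = Vt[:n_components]                       # principal components
        model = Xc @ basis.T @ basis                    # projection onto PSF subspace
        return (Xc - model).reshape(frames.shape)       # residuals
    ```

    In pupil-stabilized observations the stellar PSF is quasi-static while a companion rotates with the field, so it survives the subtraction when the residual frames are de-rotated and combined; the number of components trades PSF suppression against companion self-subtraction.
    
    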

    Low-Complexity Reduced-Rank Beamforming Algorithms

    A reduced-rank framework with set-membership filtering (SMF) techniques is presented for adaptive beamforming problems encountered in radar systems. We develop and analyze stochastic gradient (SG) and recursive least squares (RLS)-type adaptive algorithms, which achieve enhanced convergence and tracking performance at low computational cost compared to existing techniques. Simulations show that the proposed algorithms outperform prior methods while having lower complexity.
    Comment: 7 figures
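    The computational saving of set-membership filtering comes from its data-selective updates: the weights change only when the error magnitude exceeds a prescribed bound. The sketch below shows this idea in its simplest form, a set-membership NLMS run on a toy system-identification problem; it is an illustration of the SMF principle, not the paper's reduced-rank beamformer, and the bound gamma is an arbitrary choice.

    ```python
    import numpy as np

    def sm_nlms(x, d, n_taps, gamma, eps=1e-8):
        """Set-membership NLMS: normalized gradient step only when |e| > gamma."""
        w = np.zeros(n_taps)
        updates = 0
        for n in range(n_taps - 1, len(x)):
            u = x[n - n_taps + 1:n + 1][::-1]   # regressor, newest sample first
            e = d[n] - w @ u                    # a priori error
            if abs(e) > gamma:                  # data-selective update (SMF)
                mu = 1.0 - gamma / abs(e)       # step size shrinks near the bound
                w += mu * e * u / (u @ u + eps)
                updates += 1
        return w, updates

    # Toy example: identify a known 4-tap filter from noisy observations.
    rng = np.random.default_rng(1)
    w_true = np.array([0.5, -0.3, 0.2, 0.1])
    x = rng.standard_normal(4000)
    d = np.convolve(x, w_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    w_hat, n_up = sm_nlms(x, d, 4, gamma=0.05)
    ```

    Once the error falls inside the bound, iterations become essentially free, which is the same mechanism the paper exploits (combined with rank reduction) to cut the cost of its SG and RLS beamforming updates.
    
    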

    A robust and scalable implementation of the Parks-McClellan algorithm for designing FIR filters

    Preliminary version accepted for publication.
    With a long history dating back to the beginning of the 1970s, the Parks-McClellan algorithm is probably the best-known approach for designing finite impulse response (FIR) filters. Despite being a standard routine in many signal processing packages, there are practical design specifications on which existing codes fail. Our goal is twofold. We first examine and present solutions for the practical difficulties related to weighted minimax polynomial approximation problems on multi-interval domains (i.e., the general setting in which the Parks-McClellan algorithm operates). Using these ideas, we then describe a robust implementation of this algorithm that routinely outperforms existing minimax filter design routines.
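    A widely available Parks-McClellan routine of the kind discussed here is SciPy's `signal.remez`. The snippet below designs an equiripple lowpass filter for a hypothetical specification (band edges, tap count, and fs are illustrative, not taken from the paper) and checks the response.

    ```python
    import numpy as np
    from scipy import signal

    # Hypothetical lowpass spec: passband up to 0.2, stopband from 0.25
    # (frequencies normalized so that fs = 1.0); 73 taps gives roughly
    # 60 dB of stopband attenuation for this transition width.
    taps = signal.remez(73, [0.0, 0.2, 0.25, 0.5], [1.0, 0.0], fs=1.0)

    # Evaluate the frequency response on a fine grid.
    freqs, resp = signal.freqz(taps, worN=8000, fs=1.0)
    mag = np.abs(resp)
    ```

    Designs like this one are benign; the failure cases the paper targets arise for tighter multi-band specifications, where the exchange step of naive implementations loses extremal points and the iteration stalls.
    
    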