
    Wavelet/shearlet hybridized neural networks for biomedical image restoration

    Get PDF
    Recently, new programming paradigms have emerged that combine parallelism and numerical computation with algorithmic differentiation. This approach allows neural network techniques for inverse imaging problems to be hybridized with more traditional methods such as wavelet-based sparsity modelling. The benefits are twofold: on the one hand, traditional methods with well-known properties can be incorporated into neural networks, either as separate layers or tightly integrated into the network; on the other hand, the parameters of traditional methods can be trained end-to-end from datasets in a neural-network "fashion" (e.g., using the Adagrad or Adam optimizers). In this paper, we explore these hybrid neural networks in the context of shearlet-based regularization for the purpose of biomedical image restoration. Due to the reduced number of parameters, this approach appears to be a promising strategy, especially when dealing with small training data sets.
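    As an illustration of the hybridization idea, the minimal sketch below (assuming PyTorch; not the authors' code) wraps a fixed analysis/synthesis transform around a soft-thresholding step whose single threshold parameter is learned end-to-end with Adam. A random orthonormal basis stands in for a wavelet or shearlet transform, and the toy denoising loop, data sizes, and learning rate are purely illustrative.

```python
# Hedged sketch: a sparsity layer in a fixed transform domain with a trainable
# soft-threshold, optimized end-to-end with Adam. The basis W is a placeholder
# for a wavelet/shearlet analysis operator.
import torch
import torch.nn as nn

class LearnedSoftThreshold(nn.Module):
    """Soft-thresholding in a fixed transform domain with a trainable threshold."""
    def __init__(self, basis: torch.Tensor):
        super().__init__()
        self.register_buffer("W", basis)                  # analysis operator (fixed)
        self.log_tau = nn.Parameter(torch.tensor(-2.0))   # threshold, kept positive via exp

    def forward(self, x):
        coeffs = x @ self.W.T                             # analysis transform
        tau = self.log_tau.exp()
        shrunk = torch.sign(coeffs) * torch.relu(coeffs.abs() - tau)  # soft-threshold
        return shrunk @ self.W                            # synthesis transform

# Toy denoising setup: learn tau end-to-end from (noisy, clean) pairs.
torch.manual_seed(0)
W = torch.linalg.qr(torch.randn(64, 64)).Q                # placeholder orthonormal basis
model = LearnedSoftThreshold(W)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

clean = torch.randn(128, 64)
noisy = clean + 0.1 * torch.randn_like(clean)
for _ in range(200):
    opt.zero_grad()
    loss = ((model(noisy) - clean) ** 2).mean()
    loss.backward()
    opt.step()
```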

    Accelerating scientific codes by performance and accuracy modeling

    Full text link
    Scientific software is often driven by multiple parameters that affect both accuracy and performance. Since finding the optimal configuration of these parameters is a highly complex task, it is extremely common for the software to be used suboptimally. In a typical scenario, accuracy requirements are imposed and attained, but at the cost of suboptimal performance. In this paper, we present a methodology for the automatic selection of parameters for simulation codes, and a corresponding prototype tool. To be amenable to our methodology, the target code must expose the parameters affecting accuracy and performance, and formulas must be available for the error bounds and computational complexity of the underlying methods. As a case study, we consider the particle-particle particle-mesh method (PPPM) from the LAMMPS suite for molecular dynamics, and use our tool to identify configurations of the input parameters that achieve a given accuracy in the shortest execution time. When compared with the configurations suggested by expert users, the parameters selected by our tool yield reductions in time-to-solution ranging between 10% and 60%. In other words, for the typical scenario where a fixed number of core-hours is granted and simulations of a fixed number of timesteps are to be run, our tool may allow up to twice as many simulations. While we develop our ideas using LAMMPS as the computational framework and the PPPM method for dispersion as a case study, the methodology is general and valid for a range of software tools and methods.
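    A minimal sketch of the selection strategy, under stated assumptions: if the code exposes its parameters and closed-form error and cost formulas are available, a tool can enumerate candidate configurations, discard those that miss the accuracy target, and return the cheapest remaining one. The error_bound and cost_model functions below are illustrative placeholders, not the actual PPPM bounds used in the paper.

```python
# Hedged sketch of parameter selection from error/cost formulas (not the paper's tool).
from itertools import product

def error_bound(grid, cutoff, order):
    # Placeholder error model: finer grids, larger cutoffs, higher orders -> smaller error.
    return 1e-1 / (grid * cutoff * order)

def cost_model(grid, cutoff, order, n_particles=1_000_000):
    # Placeholder cost model: real-space part grows with cutoff^3, mesh part with grid^3.
    return n_particles * cutoff**3 + 5.0 * grid**3 * order

def select_parameters(target_error):
    grids   = [16, 32, 64, 128]
    cutoffs = [2.0, 3.0, 4.0, 5.0]
    orders  = [3, 5, 7]
    feasible = [
        (cost_model(g, c, o), (g, c, o))
        for g, c, o in product(grids, cutoffs, orders)
        if error_bound(g, c, o) <= target_error       # keep only accurate-enough configs
    ]
    return min(feasible)[1] if feasible else None     # cheapest feasible configuration

print(select_parameters(1e-4))
```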

    Physics-informed neural networks of the Saint-Venant equations for downscaling a large-scale river model

    Full text link
    Large-scale river models are being refined over coastal regions to improve the scientific understanding of coastal processes, hazards, and responses to climate change. However, coarse mesh resolutions and approximations in the physical representation of tidal rivers limit the ability of such models to resolve the complex flow dynamics, especially near the river-ocean interface, resulting in inaccurate simulations of flood inundation. In this research, we propose a machine learning (ML) framework based on the state-of-the-art physics-informed neural network (PINN) to simulate the downscaled flow at the subgrid scale. First, we demonstrate that the PINN is able to assimilate observations of various types and solve the one-dimensional (1-D) Saint-Venant equations (SVE) directly. We perform flow simulations over a floodplain and along an open channel in several synthetic case studies. The PINN performance is evaluated against analytical solutions and numerical models. Our results indicate that the PINN solutions of water depth achieve satisfactory accuracy with only limited observations assimilated. For the case of flood-wave propagation induced by storm surge and tide, we propose a new neural network architecture based on Fourier feature embeddings that seamlessly encodes the periodic tidal boundary condition in the PINN formulation. Furthermore, we show that PINN-based downscaling can produce more reasonable subgrid solutions of the along-channel water depth by assimilating observational data. The PINN solution outperforms simple linear interpolation in resolving the topography and dynamic flow regimes at the subgrid scale. This study provides a promising path towards improving the emulation capabilities of large-scale models for characterizing fine-scale coastal processes.
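    A minimal sketch of the ingredients described above (assuming PyTorch; not the authors' implementation): a small network maps (x, t) to water depth h and velocity u, a Fourier feature embedding of time encodes the periodic tidal forcing, and automatic differentiation supplies the 1-D Saint-Venant residuals. Gravity, bed slope, the tidal period, and the omission of the friction slope are simplifying assumptions for illustration.

```python
# Hedged sketch: PINN residuals for the 1-D Saint-Venant equations with a
# Fourier feature embedding of time for the periodic tidal boundary.
import torch
import torch.nn as nn

class TidalPINN(nn.Module):
    def __init__(self, tidal_period=12.42 * 3600.0, width=64):
        super().__init__()
        self.omega = 2 * torch.pi / tidal_period
        self.net = nn.Sequential(
            nn.Linear(4, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 2),              # outputs: water depth h, velocity u
        )

    def forward(self, x, t):
        # Fourier feature embedding of t encodes the tidal periodicity.
        feats = torch.cat([x, t, torch.sin(self.omega * t), torch.cos(self.omega * t)], dim=1)
        out = self.net(feats)
        return out[:, :1], out[:, 1:2]        # h, u

def sve_residuals(model, x, t, g=9.81, S0=1e-4):
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    h, u = model(x, t)
    grad = lambda y, z: torch.autograd.grad(y, z, torch.ones_like(y), create_graph=True)[0]
    h_t, h_x = grad(h, t), grad(h, x)
    u_t, u_x = grad(u, t), grad(u, x)
    mass = h_t + u * h_x + h * u_x                 # continuity
    momentum = u_t + u * u_x + g * h_x - g * S0    # momentum (friction slope omitted)
    return mass, momentum

# Training would minimize mean-square PDE residuals plus a data misfit at the
# assimilated observation points, e.g. with torch.optim.Adam.
```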

    A selective control information detection scheme for OFDM receivers

    Get PDF
    In wireless communications, both control information and payload (user data) are transmitted concurrently and must be successfully recovered. This paper focuses on block-level detection, which is applicable for detecting transmitted control information, particularly when this information is selected from a finite set that is known to both the transmitting and receiving devices. Using an orthogonal frequency division multiplexing architecture, this paper investigates and evaluates the performance of a time-domain decision criterion in comparison with a Maximum Likelihood (ML) estimation method. Unlike the ML method, the proposed time-domain detection technique requires no channel estimation, as it uses the time-domain correlation between the received signal and the candidate transmitted information as the means of detection. In comparison with the ML method, results show that the proposed method offers improved detection performance, particularly when the control information is drawn from a set of at least 16. However, the implementation of the proposed method requires a slightly increased number of mathematical computations.
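    A minimal sketch of the detection idea, with assumed signal parameters (64 subcarriers, a 16-entry candidate set, QPSK mapping) and a toy multipath channel: the receiver correlates the received time-domain samples against stored time-domain versions of each candidate control word and selects the largest correlation magnitude, with no channel estimation. This is an illustration of the general principle, not the paper's exact criterion.

```python
# Hedged sketch of time-domain correlation detection of selective control information.
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 16                      # subcarriers, size of the candidate set (illustrative)

# Candidate control words as QPSK-modulated OFDM symbols, known at both ends.
qpsk = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))
candidates_fd = qpsk[rng.integers(0, 4, size=(M, N))]
candidates_td = np.fft.ifft(candidates_fd, axis=1)     # stored time-domain references

def detect(received_td):
    # Normalized correlation against each stored candidate; no CSI required.
    scores = np.abs(candidates_td.conj() @ received_td) / np.linalg.norm(candidates_td, axis=1)
    return int(np.argmax(scores))

# Toy test: transmit candidate 5 through a 3-tap multipath channel plus noise.
tx = candidates_td[5]
channel = np.array([1.0, 0.4 + 0.2j, 0.1])
rx = np.convolve(tx, channel)[:N] + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
print(detect(rx))                  # expected: 5 (detection can fail at low SNR)
```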

    Spectral Analysis Techniques using Prism Signal Processing

    Get PDF
    The Prism is a new signal processing module that implements fully recursive, linear-phase FIR filtering, so its computational cost is fixed irrespective of filter length. The Prism also has negligible design cost. Recent work has demonstrated how, using simple design rules, a chain of six Prisms can create a narrowband filter with arbitrary central frequency and bandwidth. In this paper, the technique is applied to the spectral analysis of data, whereby a sequence of filters is applied to a data set to provide narrowband frequency analysis, yielding accurate estimates of the frequencies and amplitudes of spectral peaks in the data. Although this time-domain technique remains computationally expensive compared to the FFT, it can identify and reject spectral leakage, offering an alternative analysis for low-amplitude and/or adjacent spectral peaks, including hidden tones, where FFT discrimination may be limited.
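    The sketch below illustrates only the analysis strategy, not the Prism itself: a narrowband time-domain filter is swept across candidate centre frequencies and the amplitude passed at each is estimated, which localizes closely spaced spectral peaks without an FFT. A generic FIR bandpass (scipy.signal.firwin) stands in for the six-Prism chain, and the sample rate, tone frequencies, and bandwidth are illustrative.

```python
# Hedged sketch: sweep a narrowband time-domain bandpass filter to estimate
# the amplitude of spectral content near each candidate centre frequency.
import numpy as np
from scipy.signal import firwin, lfilter

fs = 1000.0                                     # sample rate (Hz), illustrative
t = np.arange(0, 10.0, 1 / fs)
x = 1.0 * np.sin(2 * np.pi * 100 * t) + 0.05 * np.sin(2 * np.pi * 103 * t)  # adjacent tones

def narrowband_amplitude(signal, f0, bandwidth=2.0, ntaps=4001):
    """Estimate the amplitude of signal content in a narrow band around f0."""
    taps = firwin(ntaps, [f0 - bandwidth / 2, f0 + bandwidth / 2], pass_zero=False, fs=fs)
    y = lfilter(taps, 1.0, signal)[ntaps:]      # drop the filter transient
    return np.sqrt(2) * np.std(y)               # sine amplitude from RMS

for f0 in (100.0, 103.0, 110.0):
    print(f0, narrowband_amplitude(x, f0))      # roughly 1.0, 0.05, and near 0
```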