    Solutions of large-scale electromagnetics problems involving dielectric objects with the parallel multilevel fast multipole algorithm

    Fast and accurate solutions of large-scale electromagnetics problems involving homogeneous dielectric objects are considered. Problems are formulated with the electric and magnetic current combined-field integral equation and discretized with the Rao-Wilton-Glisson functions. Solutions are performed iteratively using the multilevel fast multipole algorithm (MLFMA). For the solution of large-scale problems discretized with millions of unknowns, MLFMA is parallelized on distributed-memory architectures using a rigorous technique, namely, the hierarchical partitioning strategy. The efficiency and accuracy of the developed implementation are demonstrated on very large problems involving as many as 100 million unknowns.
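
    As a rough illustration of the solution structure described above (an iterative Krylov solver whose dense matrix-vector product is replaced by a fast operator), the sketch below uses SciPy's GMRES with a placeholder `mlfma_matvec`; the operator, problem size, and right-hand side are toy assumptions, not the paper's parallel MLFMA implementation.

```python
# Minimal sketch: iterative solution of a discretized integral equation where the
# matrix-vector product is delegated to a fast operator. `mlfma_matvec` is a
# hypothetical stand-in for an MLFMA-accelerated product.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 1000                                   # number of RWG unknowns (toy size)
rng = np.random.default_rng(0)
Z_toy = np.eye(n) + 0.01 * rng.standard_normal((n, n))  # dense toy stand-in for the system matrix

def mlfma_matvec(x):
    # In a real solver this call would traverse the multilevel tree
    # (aggregation, translation, disaggregation) instead of a dense product.
    return Z_toy @ x

op = LinearOperator((n, n), matvec=mlfma_matvec, dtype=float)
rhs = rng.standard_normal(n)               # toy excitation vector
coeffs, info = gmres(op, rhs)
print("GMRES converged" if info == 0 else f"GMRES info={info}")
```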

    Reciprocity Calibration for Massive MIMO: Proposal, Modeling and Validation

    This paper presents a mutual coupling based calibration method for time-division-duplex massive MIMO systems, which enables downlink precoding based on uplink channel estimates. The entire calibration procedure is carried out solely at the base station (BS) side by sounding all BS antenna pairs. An Expectation-Maximization (EM) algorithm is derived, which processes the measured channels in order to estimate calibration coefficients. The EM algorithm outperforms current state-of-the-art narrow-band calibration schemes in a mean squared error (MSE) and sum-rate capacity sense. Like its predecessors, the EM algorithm is general in the sense that it can calibrate not only a co-located massive MIMO BS but also multiple BSs in distributed MIMO systems. The proposed method is validated with experimental evidence obtained from a massive MIMO testbed. In addition, we treat the estimated narrow-band calibration coefficients as a stochastic process across frequency and study the subspace of this process based on measurement data. With the insights of this study, we propose an estimator that exploits the structure of the process in order to reduce the calibration error across frequency. A model for the calibration error is also proposed based on the asymptotic properties of the estimator, and is validated with measurement results. Comment: Submitted to IEEE Transactions on Wireless Communications, 21/Feb/201
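
    To make the pairwise-sounding idea concrete, here is a minimal least-squares sketch (not the paper's EM algorithm): each antenna's calibration coefficient c_i = t_i / r_i is recovered, up to a common scale, from bidirectional soundings of all antenna pairs, since channel reciprocity implies c_j * y_ij ≈ c_i * y_ji. All quantities below are synthetic toy values.

```python
# Minimal sketch (not the paper's EM algorithm): least-squares reciprocity
# calibration from pairwise antenna soundings. For a reciprocal propagation
# channel h_ij = h_ji, the soundings satisfy c_j * y_ij = c_i * y_ji, so the
# coefficient vector c is the right singular vector of A for its smallest
# singular value.
import numpy as np

rng = np.random.default_rng(1)
M = 8                                                       # number of BS antennas (toy size)
t = rng.standard_normal(M) + 1j * rng.standard_normal(M)    # transmit-chain responses
r = rng.standard_normal(M) + 1j * rng.standard_normal(M)    # receive-chain responses
H = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
H = (H + H.T) / 2                                           # reciprocal propagation channel

Y = np.outer(t, r) * H                                      # y_ij = t_i * h_ij * r_j (noise-free toy)

rows = []
for i in range(M):
    for j in range(i + 1, M):
        row = np.zeros(M, dtype=complex)
        row[j] = Y[i, j]                                    # +c_j * y_ij
        row[i] = -Y[j, i]                                   # -c_i * y_ji
        rows.append(row)
A = np.array(rows)

_, _, Vh = np.linalg.svd(A)
c_hat = Vh[-1].conj()                                       # estimate, up to a common scale factor
c_true = t / r
print(np.allclose(c_hat / c_hat[0], c_true / c_true[0]))    # should print True in this noise-free toy
```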

    A Fast and Accurate Algorithm for Spherical Harmonic Analysis on HEALPix Grids with Applications to the Cosmic Microwave Background Radiation

    The Hierarchical Equal Area isoLatitude Pixelation (HEALPix) scheme is used extensively in astrophysics for data collection and analysis on the sphere. The scheme was originally designed for studying the Cosmic Microwave Background (CMB) radiation, which represents the first light to travel during the early stages of the universe's development and gives the strongest evidence for the Big Bang theory to date. Refined analysis of the CMB angular power spectrum can lead to revolutionary developments in understanding the nature of dark matter and dark energy. In this paper, we present a new method for performing spherical harmonic analysis for HEALPix data, which is a central component of computing and analyzing the angular power spectrum of the massive CMB data sets. The method uses a novel combination of a non-uniform fast Fourier transform, the double Fourier sphere method, and Slevinsky's fast spherical harmonic transform (Slevinsky, 2019). For a HEALPix grid with $N$ pixels (points), the computational complexity of the method is $\mathcal{O}(N\log^2 N)$, with an initial set-up cost of $\mathcal{O}(N^{3/2}\log N)$. This compares favorably with the $\mathcal{O}(N^{3/2})$ runtime complexity of the current methods available in the HEALPix software when multiple maps need to be analyzed at the same time. Using numerical experiments, we demonstrate that the new method also appears to provide better accuracy over the entire angular power spectrum of synthetic data when compared to the current methods, with a convergence rate at least two times higher.
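
    For context, the baseline pipeline that the new method is compared against can be reproduced with the HEALPix software's Python bindings (healpy): synthesize a map, expand it in spherical harmonics, and form the angular power spectrum. The resolution and input spectrum below are arbitrary placeholders, not CMB data, and this is the existing $\mathcal{O}(N^{3/2})$ route, not the paper's NUFFT/double-Fourier-sphere method.

```python
# Minimal sketch of the baseline spherical harmonic analysis on a HEALPix grid:
# map -> a_lm -> angular power spectrum C_ell, using healpy.
import numpy as np
import healpy as hp

nside = 256
lmax = 3 * nside - 1
cl_in = 1.0 / (np.arange(lmax + 1) + 10.0) ** 2   # placeholder input power spectrum

cmb_map = hp.synfast(cl_in, nside, lmax=lmax)     # synthetic map on N = 12 * nside^2 pixels
alm = hp.map2alm(cmb_map, lmax=lmax)              # spherical harmonic coefficients
cl_out = hp.anafast(cmb_map, lmax=lmax)           # recovered angular power spectrum

print(hp.nside2npix(nside), cl_out[:5])
```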

    Lattice QCD thermodynamics at finite chemical potential and its comparison with Experiments

    We compare higher moments of baryon numbers measured at the RHIC heavy ion collision experiments with those from lattice QCD calculations. We employ the canonical approach, in which we can access the real chemical potential region while avoiding the sign problem. In the lattice QCD simulations, we study several fits of the number density at pure imaginary chemical potential and analyze how these fits affect the behavior at real chemical potential. In the energy region between $\sqrt{s}_{NN} = 19.6$ and 200 GeV, the susceptibility calculated at $T/T_c = 0.93$ is consistent with the experimental data at $0 \le \mu_B/T < 1.5$, while the kurtosis shows behavior similar to that of the experimental data in the small $\mu_B/T$ region $0 \le \mu_B/T < 0.3$. The experimental data at $\sqrt{s}_{NN} = 11.5$ GeV show quite different behavior. The lattice result in the deconfinement region, $T/T_c = 1.35$, is far from the experimental data.
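
    The analytic-continuation step implied above can be sketched as follows, with made-up Fourier-fit coefficients standing in for the lattice fits: a number density fitted at pure imaginary chemical potential as a sine series continues to a sinh series at real $\mu_B/T$, and the susceptibility and kurtosis ratio follow by differentiation.

```python
# Minimal sketch of the analytic continuation from imaginary to real chemical
# potential. The fit coefficients are hypothetical placeholders, not lattice results.
import sympy as sp

mu = sp.symbols('mu', real=True)          # mu_B / T
a = [0.20, -0.05, 0.01]                   # hypothetical Fourier-fit coefficients

# Imaginary-mu fit  Im(n/T^3) = sum_k a_k sin(k*theta)  continues to
# n/T^3 = sum_k a_k sinh(k * mu_B/T) at real chemical potential.
n_over_T3 = sum(ak * sp.sinh((k + 1) * mu) for k, ak in enumerate(a))

chi2 = sp.diff(n_over_T3, mu)             # baryon-number susceptibility chi_2
chi4 = sp.diff(n_over_T3, mu, 3)          # fourth-order cumulant chi_4
kurtosis_ratio = sp.simplify(chi4 / chi2) # kappa * sigma^2 = chi_4 / chi_2

for value in (0.0, 0.3, 1.0, 1.5):        # sample points in mu_B/T
    print(value, float(chi2.subs(mu, value)), float(kurtosis_ratio.subs(mu, value)))
```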

    Short-term fire front spread prediction using inverse modelling and airborne infrared images

    A wildfire forecasting tool capable of estimating the fire perimeter position sufficiently in advance of the actual fire arrival will assist firefighting operations and optimise available resources. However, owing to limited knowledge of fire event characteristics (e.g. fuel distribution and characteristics, weather variability) and the short time available to deliver a forecast, most of the current models only provide a rough approximation of the forthcoming fire positions and dynamics. The problem can be tackled by coupling data assimilation and inverse modelling techniques. We present an inverse modelling-based algorithm that uses infrared airborne images to forecast short-term wildfire dynamics with a positive lead time. The algorithm is applied to two real-scale mallee-heath shrubland fire experiments, of 9 and 25 ha, successfully forecasting the fire perimeter shape and position in the short term. Forecast dependency on the assimilation windows is explored to prepare the system to meet real scenario constraints. It is envisaged the system will be applied at larger time and space scales.
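
    The inverse-modelling loop can be illustrated with a deliberately over-simplified sketch: a circular fire-growth toy model whose single rate-of-spread parameter is fitted to two observed perimeter radii (stand-ins for the airborne IR perimeters) over an assimilation window, then run forward to give a positive-lead-time forecast. The model, numbers, and window length are illustrative assumptions only, not the paper's algorithm.

```python
# Minimal sketch of inverse modelling for short-term fire-front forecasting,
# using a toy circular spread model fitted to two "observed" perimeter radii.
import numpy as np
from scipy.optimize import minimize_scalar

def perimeter_radius(ros, t, r0=5.0):
    """Toy spread model: perimeter radius (m) after t seconds at constant rate of spread."""
    return r0 + ros * t

# Hypothetical observations from two airborne IR images (the assimilation window).
t_obs = np.array([0.0, 600.0])            # s
r_obs = np.array([5.0, 47.0])             # m

def misfit(ros):
    # Mismatch between simulated and observed perimeters over the window.
    return np.sum((perimeter_radius(ros, t_obs) - r_obs) ** 2)

ros_hat = minimize_scalar(misfit, bounds=(0.0, 1.0), method='bounded').x

# Positive-lead-time forecast: project the perimeter 10 minutes past the last image.
t_forecast = 1200.0
print(f"fitted ROS = {ros_hat:.3f} m/s, "
      f"forecast radius = {perimeter_radius(ros_hat, t_forecast):.1f} m")
```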

    Soft Consistency Reconstruction: A Robust 1-bit Compressive Sensing Algorithm

    A class of recovery algorithms for 1-bit compressive sensing (CS), named Soft Consistency Reconstructions (SCRs), is proposed. Recognizing that CS recovery is essentially an optimization problem, we endeavor to improve the characteristics of the objective function under noisy environments. With a family of re-designed consistency criteria, SCRs achieve a remarkable noise-robustness gain over existing counterparts, thus acquiring the robustness desired in many real-world applications. The benefits of soft decisions are exemplified through structural analysis of the objective function, with intuition described for better understanding. As expected, through comparisons with existing methods in simulations, SCRs demonstrate preferable robustness against noise in the low signal-to-noise ratio (SNR) regime, while maintaining comparable performance in the high-SNR regime.
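
    To illustrate the consistency-based setting the abstract refers to, the sketch below runs generic binary iterative hard thresholding (BIHT) on synthetic sign measurements. It is not the paper's SCR algorithm, whose softened consistency objective is the point of the work, but it shows the objects involved: sign measurements, a consistency step, and a sparsity projection.

```python
# Minimal sketch of consistency-based 1-bit CS recovery via generic binary
# iterative hard thresholding (BIHT); NOT the paper's SCR algorithm.
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 256, 512, 8                      # signal length, 1-bit measurements, sparsity

x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.standard_normal(k)
x_true /= np.linalg.norm(x_true)           # 1-bit CS recovers the direction only

A = rng.standard_normal((m, n))
y = np.sign(A @ x_true)                    # noiseless sign measurements

def hard_threshold(v, k):
    # Keep the k largest-magnitude entries, zero the rest.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

x = np.zeros(n)
tau = 1.0 / m
for _ in range(200):
    step = A.T @ (y - np.sign(A @ x))      # subgradient-type consistency step
    x = hard_threshold(x + tau * step, k)
x /= max(np.linalg.norm(x), 1e-12)

print("cosine similarity:", float(x @ x_true))
```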