
    Galaxy shape measurement with convolutional neural networks

    We present our results from training and evaluating a convolutional neural network (CNN) to predict galaxy shapes from wide-field survey images of the first data release of the Dark Energy Survey (DES DR1). We use conventional shape measurements as ground truth from an overlapping, deeper survey with less sky coverage, the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We demonstrate that CNN predictions from single-band DES images reproduce the results of CFHTLenS at bright magnitudes and show higher correlation with CFHTLenS at fainter magnitudes than the maximum likelihood model-fitting estimates in the DES Y1 im3shape catalogue. Prediction of shape parameters with a CNN is also extremely fast: it takes only 0.2 milliseconds per galaxy, an improvement of more than 4 orders of magnitude over forward model fitting. The CNN can also accurately predict shapes when using multiple images of the same galaxy, even in different color bands, with no additional computational overhead. The CNN is again more precise for faint objects, and its advantage is more pronounced for blue galaxies than red ones when compared to the DES Y1 metacalibration catalogue, which fits a single Gaussian profile using riz-band images. We demonstrate that CNN shape predictions within the metacalibration self-calibrating framework yield shear estimates with negligible multiplicative bias, m < 10^{-3}, and no significant PSF leakage. Our proposed setup is applicable to current and next-generation weak lensing surveys where higher-quality ground truth shapes can be measured in dedicated deep fields.
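
    A minimal, hypothetical sketch of the kind of network the abstract describes: a small CNN that regresses the two ellipticity components (e1, e2) from a single-band postage-stamp image, trained against ground-truth shapes from a deeper survey. The architecture, stamp size and training details below are illustrative assumptions, not the authors' actual model; PyTorch is assumed.

```python
# Illustrative sketch (not the authors' network): a small CNN regressing the two
# ellipticity components (e1, e2) from a single-band postage-stamp image.
import torch
import torch.nn as nn

class ShapeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 2),                      # outputs (e1, e2)
        )

    def forward(self, x):
        return self.head(self.features(x))

# One training step against "ground-truth" shapes from the deeper overlapping survey
model = ShapeCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

stamps = torch.randn(16, 1, 64, 64)   # placeholder postage stamps
targets = torch.randn(16, 2)          # placeholder (e1, e2) labels
optimizer.zero_grad()
loss = loss_fn(model(stamps), targets)
loss.backward()
optimizer.step()
```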

    On the Impact of Hardware Impairments on Massive MIMO

    Massive multi-user (MU) multiple-input multiple-output (MIMO) systems are one possible key technology for next generation wireless communication systems. Claims have been made that massive MU-MIMO will increase both the radiated energy efficiency and the sum-rate capacity by orders of magnitude, because of the high transmit directivity. However, due to the very large number of transceivers needed at each base station (BS), a successful implementation of massive MU-MIMO will be contingent on the availability of very cheap, compact and power-efficient radio and digital-processing hardware. This may in turn impair the quality of the modulated radio frequency (RF) signal due to an increased amount of power-amplifier distortion, phase noise, and quantization noise. In this paper, we examine the effects of hardware impairments on a massive MU-MIMO single-cell system by means of theory and simulation. The simulations are performed using simplified, well-established statistical hardware impairment models as well as more sophisticated and realistic models based upon measurements and electromagnetic antenna array simulations. Comment: 7 pages, 9 figures, accepted for presentation at the GlobeCom workshop on Massive MIMO.
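
    For orientation, a toy sketch of the simplified statistical impairment model the abstract alludes to: additive Gaussian distortion noise whose power scales with the per-antenna signal power is injected at the base station and at each user, on top of a zero-forcing massive MIMO downlink. All parameters (array size, impairment levels kappa_tx/kappa_rx) are assumed for illustration and are not taken from the paper or its measurement-based models.

```python
# Toy sketch of a simplified additive hardware-impairment model (illustrative
# parameters; not the paper's measurement-based models).
import numpy as np

rng = np.random.default_rng(0)
M, K = 100, 10                     # BS antennas, single-antenna users
kappa_tx, kappa_rx = 0.05, 0.05    # assumed transmit/receive impairment levels

def cgauss(*shape):
    # circularly symmetric complex Gaussian samples with unit variance
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H = cgauss(K, M)                   # i.i.d. Rayleigh downlink channel
s = cgauss(K)                      # unit-power user symbols

# Zero-forcing precoding: H @ W = I in the ideal, impairment-free case
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
x = W @ s

# Transmitter distortion: noise power proportional to per-antenna signal power
tx_dist = np.sqrt(kappa_tx) * np.abs(x) * cgauss(M)
y_clean = H @ (x + tx_dist)

# Receiver distortion plus thermal noise
rx_dist = np.sqrt(kappa_rx) * np.abs(y_clean) * cgauss(K)
noise = np.sqrt(1e-3) * cgauss(K)
y = y_clean + rx_dist + noise

print("per-user distortion magnitude:", np.round(np.abs(y - s), 3))
```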

    Integration of a Precolouring Matrix in the Random Demodulator Model for Improved Compressive Spectrum Estimation

    The random demodulator (RD) is a compressive sensing (CS) architecture for acquiring frequency-sparse, bandlimited signals. Such signals occur in cognitive radio networks, for instance, where efficient sampling is a critical design requirement. A recent RD-based CS system has been shown to effectively acquire and recover frequency-sparse, high-order modulated multiband signals which have been precoloured by an autoregressive (AR) filter. A shortcoming of this AR-RD architecture is that precolouring imposes additional computational cost on the signal transmission system. This paper introduces a novel CS architecture which seamlessly embeds a precolouring matrix (PM) into the signal recovery stage of the RD model (iPM-RD), with the PM depending only upon the AR filter coefficients, which are readily available. Experimental results using sparse wideband quadrature phase shift keying (QPSK) and 64-ary quadrature amplitude modulation (64QAM) signals confirm that the iPM-RD model provides improved CS performance compared with the RD, while incurring no performance degradation compared with the original AR-RD architecture.
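
    A toy numpy sketch of the building blocks described above: a random-demodulator-style acquisition (pseudo-random chipping followed by integrate-and-dump), a precolouring matrix constructed purely from assumed AR filter coefficients, and sparse recovery against a dictionary into which the PM has been embedded. The dimensions, the AR filter, and the hand-rolled OMP solver are illustrative assumptions; this is not the authors' iPM-RD implementation.

```python
# Illustrative building blocks of a PM-aware random demodulator model
# (toy dimensions and AR filter; not the paper's iPM-RD implementation).
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

rng = np.random.default_rng(1)
N, M, K = 128, 32, 4                         # signal length, measurements, sparsity

# Frequency-sparse signal: K active DFT bins
F = np.fft.ifft(np.eye(N)) * np.sqrt(N)      # inverse-DFT synthesis dictionary
support = rng.choice(N, K, replace=False)
a = np.zeros(N, dtype=complex)
a[support] = rng.standard_normal(K) + 1j * rng.standard_normal(K)
x = F @ a

# Precolouring matrix from assumed AR coefficients: Toeplitz of the AR impulse response
ar = np.array([1.0, -0.8, 0.3])
h = lfilter([1.0], ar, np.eye(N)[:, 0])      # impulse response of 1/A(z)
P = np.tril(toeplitz(h))
z = P @ x                                    # precoloured (AR-filtered) signal

# Random demodulator: +/-1 chipping followed by integrate-and-dump
D = np.diag(rng.choice([-1.0, 1.0], N))
S = np.kron(np.eye(M), np.ones(N // M))
y = S @ D @ z

# Key idea: the PM enters only the recovery dictionary, via the known AR coefficients
A = S @ D @ P @ F

def omp(A, y, k):
    """Plain orthogonal matching pursuit (illustrative)."""
    resid, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.conj().T @ resid))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        resid = y - A[:, idx] @ coef
    a_hat = np.zeros(A.shape[1], dtype=complex)
    a_hat[idx] = coef
    return a_hat

a_hat = omp(A, y, K)
print("recovered bins:", sorted(np.flatnonzero(np.abs(a_hat) > 1e-6).tolist()))
print("true bins:     ", sorted(support.tolist()))
```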

    Neural Connectivity with Hidden Gaussian Graphical State-Model

    Noninvasive procedures for estimating neural connectivity are under question. Theoretical models hold that the electromagnetic field registered at external sensors is elicited by currents in neural space. Nevertheless, what we observe at the sensor space is a superposition of projected fields from the whole gray matter. This is the reason for a major pitfall of noninvasive electrophysiology methods: distorted reconstruction of neural activity and its connectivity, known as leakage. It has been proven that current methods produce incorrect connectomes. Somewhat related to this incorrect connectivity modelling, they disregard both Systems Theory and Bayesian Information Theory. We introduce a new formalism that accounts for this, the Hidden Gaussian Graphical State-Model (HIGGS): a neural Gaussian Graphical Model (GGM) hidden by the observation equation of magneto/electroencephalographic (MEEG) signals. HIGGS is equivalent to a frequency-domain Linear State Space Model (LSSM) but with a sparse connectivity prior. The mathematical contribution here is the theory for high-dimensional, frequency-domain HIGGS solvers. We demonstrate that HIGGS can attenuate the leakage effect in the most critical case: the distortion of the EEG signal due to head volume conduction heterogeneities. Its application in EEG is illustrated with connectivity patterns retrieved from human steady-state visual evoked potentials (SSVEP). We provide, for the first time, confirmatory evidence for noninvasive procedures of neural connectivity: concurrent EEG and electrocorticography (ECoG) recordings in monkey. Open-source packages are freely available online to reproduce the results presented in this paper and to analyze external MEEG databases.
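
    A toy two-step illustration of the ingredients involved (this is explicitly not the HIGGS solver): hidden sources with a sparse precision matrix are projected to sensors through a leadfield, then connectivity is estimated from a regularized minimum-norm inverse followed by a graphical lasso. HIGGS instead infers the GGM directly through the observation equation; the dimensions, leadfield, and regularization values below are all assumed.

```python
# Toy sketch (not the HIGGS solver): linear MEEG observation model followed by a
# sparse Gaussian graphical estimate of source connectivity.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(2)
n_sensors, n_sources, n_samples = 32, 8, 2000

# Ground-truth sparse precision (inverse covariance) over the hidden sources
Theta = np.eye(n_sources)
Theta[0, 1] = Theta[1, 0] = 0.4
Theta[2, 3] = Theta[3, 2] = -0.3
Sigma = np.linalg.inv(Theta)
J = rng.multivariate_normal(np.zeros(n_sources), Sigma, size=n_samples)  # sources

# Observation equation: sensors = leadfield @ sources + noise (volume conduction)
L = rng.standard_normal((n_sensors, n_sources))
V = J @ L.T + 0.1 * rng.standard_normal((n_samples, n_sensors))

# Naive two-step estimate: regularized minimum-norm inverse, then graphical lasso
lam = 1.0
W = L.T @ np.linalg.inv(L @ L.T + lam * np.eye(n_sensors))   # minimum-norm inverse
J_hat = V @ W.T
J_hat = (J_hat - J_hat.mean(axis=0)) / J_hat.std(axis=0)     # standardize features

model = GraphicalLasso(alpha=0.1).fit(J_hat)
print(np.round(model.precision_, 2))      # estimated sparse connectivity pattern
```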

    Privacy Against Statistical Inference

    We propose a general statistical inference framework to capture the privacy threat incurred by a user that releases data to a passive but curious adversary, given utility constraints. We show that applying this general framework to the setting where the adversary uses the self-information cost function naturally leads to a non-asymptotic information-theoretic approach for characterizing the best achievable privacy subject to utility constraints. Based on these results we introduce two privacy metrics, namely average information leakage and maximum information leakage. We prove that under both metrics the resulting design problem of finding the optimal mapping from the user's data to a privacy-preserving output can be cast as a modified rate-distortion problem which, in turn, can be formulated as a convex program. Finally, we compare our framework with differential privacy. Comment: Allerton 2012, 8 pages.
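
    As a concrete toy calculation of the first metric: under the self-information cost function the average information leakage reduces to the mutual information I(S;Y) between the private attribute S and the released output Y. The joint distribution and privacy mapping below are made up for illustration; they are not from the paper.

```python
# Toy numpy calculation of average information leakage I(S;Y) = H(S) + H(Y) - H(S,Y)
# for a discrete secret S, data X, and a randomized privacy mapping p(y|x).
import numpy as np

# Joint distribution p(S=s, X=x) over a binary secret and ternary data (made up)
p_sx = np.array([[0.20, 0.15, 0.15],
                 [0.10, 0.10, 0.30]])

# Privacy mapping p(Y=y | X=x): the released Y is a randomized version of X (made up)
p_y_given_x = np.array([[0.8, 0.1, 0.1],
                        [0.1, 0.8, 0.1],
                        [0.1, 0.1, 0.8]])

# Joint p(S, Y) under the Markov chain S - X - Y
p_sy = p_sx @ p_y_given_x          # (2, 3): sum_x p(s, x) p(y | x)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_s = p_sy.sum(axis=1)
p_y = p_sy.sum(axis=0)
leakage = entropy(p_s) + entropy(p_y) - entropy(p_sy.ravel())   # I(S;Y) in bits
print(f"average information leakage: {leakage:.4f} bits")
```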

    Homomorphic Data Isolation for Hardware Trojan Protection

    The interest in homomorphic encryption/decryption is increasing due to its excellent security properties and operating facilities. It allows operating on data without revealing its content. In this work, we suggest using homomorphism for Hardware Trojan protection. We implement two partial homomorphic designs based on the ElGamal encryption/decryption scheme. The first design is multiplicatively homomorphic, whereas the second one is additively homomorphic. We implement the proposed designs on a low-cost Xilinx Spartan-6 FPGA. Area utilization, delay, and power consumption are reported for both designs. Furthermore, we introduce a dual-circuit design that combines the two earlier designs using resource sharing in order to have minimum area cost. Experimental results show that our dual-circuit design saves 35% of the logic resources compared to a regular design without resource sharing. The saving in power consumption is 20%, whereas the number of cycles needed remains almost the same.
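
    A toy software sketch of the two homomorphic properties the designs exploit (textbook ElGamal with deliberately tiny, insecure parameters; the paper implements these operations in FPGA hardware): ciphertext-wise multiplication yields an encryption of the product, and an additive variant is obtained by encrypting g^m so that ciphertext products add the exponents.

```python
# Toy demonstration of ElGamal's multiplicative homomorphism and the additive
# (exponential) variant, with tiny insecure parameters for illustration only.
import random

p = 467            # small prime (insecure, illustrative)
g = 2              # generator of Z_p*
x = random.randrange(2, p - 1)     # private key
h = pow(g, x, p)                   # public key

def encrypt(m):
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), (m * pow(h, r, p)) % p)

def decrypt(c):
    c1, c2 = c
    return (c2 * pow(c1, p - 1 - x, p)) % p     # c2 * (c1^x)^(-1) mod p

# Multiplicative homomorphism: E(m1) * E(m2) decrypts to m1 * m2
m1, m2 = 7, 11
c1, c2 = encrypt(m1), encrypt(m2)
c_prod = ((c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p)
assert decrypt(c_prod) == (m1 * m2) % p

# Additive variant (exponential ElGamal): encrypt g^m, so ciphertext products add exponents
a1, a2 = 5, 9
d1, d2 = encrypt(pow(g, a1, p)), encrypt(pow(g, a2, p))
d_sum = ((d1[0] * d2[0]) % p, (d1[1] * d2[1]) % p)
plain = decrypt(d_sum)
# recover a1 + a2 by a small brute-force discrete log (fine for tiny messages)
recovered = next(k for k in range(200) if pow(g, k, p) == plain)
assert recovered == a1 + a2
print("multiplicative and additive homomorphisms verified")
```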

    Space Time MUSIC: Consistent Signal Subspace Estimation for Wide-band Sensor Arrays

    Wide-band Direction of Arrival (DOA) estimation with sensor arrays is an essential task in sonar, radar, acoustics, biomedical and multimedia applications. Many state-of-the-art wide-band DOA estimators coherently process frequency-binned array outputs by approximate Maximum Likelihood, Weighted Subspace Fitting or focusing techniques. This paper shows that bin signals obtained by filter-bank approaches do not obey the finite-rank narrow-band array model, because spectral leakage and the change of the array response with frequency within the bin create ghost sources dependent on the particular realization of the source process. Therefore, existing DOA estimators based on binning cannot claim consistency even with perfect knowledge of the array response. In this work, a more realistic array model with a finite length of the sensor impulse responses is assumed, which still has finite rank under a space-time formulation. It is shown that signal subspaces at arbitrary frequencies can be consistently recovered under mild conditions by applying MUSIC-type (ST-MUSIC) estimators to the dominant eigenvectors of the wide-band space-time sensor cross-correlation matrix. A novel Maximum Likelihood based ST-MUSIC subspace estimate is developed in order to recover consistency. The number of sources active at each frequency is estimated by Information Theoretic Criteria. The sample ST-MUSIC subspaces can be fed to any subspace fitting DOA estimator at single or multiple frequencies. Simulations confirm that the new technique clearly outperforms binning approaches at sufficiently high signal-to-noise ratio, when model mismatches exceed the noise floor. Comment: 15 pages, 10 figures. Accepted in a revised form by the IEEE Trans. on Signal Processing on 12 February 2018.
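
    For orientation, a minimal narrowband MUSIC sketch on a half-wavelength uniform linear array: the noise subspace is taken from the sample covariance and a pseudospectrum is scanned over angle. ST-MUSIC applies a related subspace step to the dominant eigenvectors of the wide-band space-time cross-correlation matrix; the array size, source directions and SNR below are assumed for illustration, and this is not the paper's estimator.

```python
# Minimal narrowband MUSIC sketch (illustrative; not the paper's ST-MUSIC estimator).
import numpy as np

rng = np.random.default_rng(3)
n_ant, n_src, n_snap = 8, 2, 500
true_doas = np.deg2rad([-20.0, 35.0])          # assumed source directions

def steering(theta):
    # half-wavelength ULA steering vector
    return np.exp(1j * np.pi * np.arange(n_ant) * np.sin(theta))

A = np.column_stack([steering(t) for t in true_doas])
S = rng.standard_normal((n_src, n_snap)) + 1j * rng.standard_normal((n_src, n_snap))
X = A @ S + 0.1 * (rng.standard_normal((n_ant, n_snap)) +
                   1j * rng.standard_normal((n_ant, n_snap)))

R = X @ X.conj().T / n_snap                    # sample spatial covariance
eigvals, eigvecs = np.linalg.eigh(R)           # eigenvalues in ascending order
En = eigvecs[:, : n_ant - n_src]               # noise subspace

grid = np.deg2rad(np.linspace(-90, 90, 721))
pseudo = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid])

# Pick the n_src largest local maxima of the pseudospectrum
is_peak = np.r_[False, (pseudo[1:-1] > pseudo[:-2]) & (pseudo[1:-1] > pseudo[2:]), False]
cand = np.flatnonzero(is_peak)
top = cand[np.argsort(pseudo[cand])[-n_src:]]
print("estimated DOAs (deg):", np.sort(np.round(np.rad2deg(grid[top]), 1)))
```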