    The Standing Wave Phenomenon in Radio Telescopes; Frequency Modulation of the WSRT Primary Beam

    Inadequacies in the knowledge of the primary beam response of current interferometric arrays often limit image fidelity. We hope to overcome these limitations by constructing a frequency-resolved, full-polarization empirical model for the primary beam of the Westerbork Synthesis Radio Telescope (WSRT). Holographic observations of a bright compact source (3C147) were obtained, sampling angular scales between about 5 arcmin and 11 degrees. These permitted measurement of voltage response patterns for seven of the fourteen telescopes in the array and allowed calculation of the mean cross-correlated power beam. Good sampling of the main lobe, near-in side-lobes, and far side-lobes out to a radius of more than 5 degrees was obtained. A robust empirical beam model was determined in all polarization products and at frequencies between 1322 and 1457 MHz with 1 MHz resolution. Substantial departures from axisymmetry are apparent in the main lobe, as are systematic differences between the polarization properties. Surprisingly, many beam properties are modulated at the 5 to 10% level with changing frequency. These include: (1) the main beam area, (2) the side-lobe to main-lobe power ratio, and (3) the effective telescope aperture. These semi-sinusoidal modulations have a basic period of about 17 MHz, consistent with the natural 'standing wave' period of an 8.75 m focal distance. The deduced frequency modulations of the beam pattern were verified in an independent long-duration observation using compact continuum sources at very large off-axis distances. Application of our frequency-resolved beam model should enable higher dynamic range and improved image fidelity for interferometric observations in complex fields. (abridged) Comment: 12 pages, 11 figures, Accepted for publication in A&A, figures compressed to low resolution; high-resolution version available at: http://www.astro.rug.nl/~popping/wsrtbeam.pd
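
    A minimal sketch of the standing-wave relation invoked above, assuming the ripple comes from reflections along the optical axis with a round-trip path of twice the focal distance, so the frequency period is c/(2F). The function name and the small demonstration are illustrative, not taken from the paper.

    # Minimal sketch, assuming a round-trip reflection path of 2F,
    # so the standing-wave ripple period in frequency is c / (2F).
    C = 299_792_458.0  # speed of light [m/s]

    def standing_wave_period_mhz(focal_distance_m):
        """Frequency spacing of standing-wave ripples, in MHz."""
        return C / (2.0 * focal_distance_m) / 1e6

    # WSRT focal distance quoted above: 8.75 m -> ~17.1 MHz,
    # consistent with the ~17 MHz modulation period reported.
    print(f"{standing_wave_period_mhz(8.75):.1f} MHz")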

    SiNeRF: Sinusoidal Neural Radiance Fields for Joint Pose Estimation and Scene Reconstruction

    NeRFmm is a Neural Radiance Fields (NeRF) variant that addresses the joint optimization task, i.e., reconstructing real-world scenes and registering camera parameters simultaneously. Although NeRFmm produces precise scene synthesis and pose estimates, it still struggles to outperform the fully annotated baseline on challenging scenes. In this work, we identify a systematic sub-optimality in joint optimization and further identify multiple potential sources for it. To diminish the impact of these sources, we propose Sinusoidal Neural Radiance Fields (SiNeRF), which leverage sinusoidal activations for radiance mapping and a novel Mixed Region Sampling (MRS) scheme for selecting ray batches efficiently. Quantitative and qualitative results show that, compared to NeRFmm, SiNeRF achieves comprehensive and significant improvements in image synthesis quality and pose estimation accuracy. Code is available at https://github.com/yitongx/sinerf. Comment: Accepted but not yet published by BMVC202
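
    As a rough illustration of the sinusoidal activations mentioned above, the PyTorch sketch below implements a SIREN-style sine layer of the kind SiNeRF uses for radiance mapping. The layer widths, the omega_0 frequency factor, and the initialization scheme are illustrative assumptions, not the authors' exact configuration (see the linked repository for that).

    import math
    import torch
    import torch.nn as nn

    class SineLayer(nn.Module):
        """Linear layer followed by a sin(omega_0 * x) activation."""
        def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
            super().__init__()
            self.omega_0 = omega_0
            self.linear = nn.Linear(in_features, out_features)
            # SIREN-style initialization keeps activations well distributed.
            with torch.no_grad():
                bound = (1.0 / in_features) if is_first else \
                        math.sqrt(6.0 / in_features) / omega_0
                self.linear.weight.uniform_(-bound, bound)

        def forward(self, x):
            return torch.sin(self.omega_0 * self.linear(x))

    # Example: map 3D sample positions to a feature vector.
    mlp = nn.Sequential(SineLayer(3, 256, is_first=True), SineLayer(256, 256))
    features = mlp(torch.rand(1024, 3))  # shape (1024, 256)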

    Associations between body dissatisfaction and self-reported anxiety and depression in otherwise healthy men: a systematic review and meta-analysis

    Introduction It is unknown whether male body dissatisfaction is related to anxiety and depression. This study investigates whether there is an association between body dissatisfaction and self-reported anxiety and/or depression in otherwise healthy adult males. Method A systematic review was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses as the reporting guideline. Four databases, including CINAHL Complete, Health Source: Nursing/Academic Edition, MEDLINE and PsycINFO, were searched for observational studies with a correlational design. Studies were appraised using the Appraisal tool for Cross-Sectional Studies to measure quality and risk of bias. Data were extracted from the studies and synthesised using content analysis and random-effects meta-analyses of the associations between male body dissatisfaction and anxiety, depression, and combined anxiety and depression. Results Twenty-three cross-sectional studies were included in the review. Nineteen studies found positive correlations between male body dissatisfaction and anxiety and/or depression. Meta-analyses of Pearson's correlation coefficients found statistically significant associations between body dissatisfaction and anxiety, 0.40 (95% CI 0.28 to 0.51); depression, 0.34 (95% CI 0.22 to 0.45); and combined anxiety and depression, 0.47 (95% CI 0.33 to 0.59). The quality appraisal found that study samples were homogeneous, being mostly ascertained through academic institutions where participants were predominantly young, Caucasian and of relatively high educational attainment. Measures of body dissatisfaction focused predominantly on muscularity and thinness. Discussion This study provides the first pooled estimates of the correlation between body dissatisfaction and anxiety and depression in men. Findings need to be interpreted with respect to the samples and outcomes of the included studies. It is recommended that future research increase the diversity of men in studies and measure a wider range of body dissatisfaction types found in men. Conclusion The findings demonstrate that an association between male body dissatisfaction and anxiety and depression is likely to exist. Future research should address the temporal relationship between body dissatisfaction and anxiety and depression.
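
    For readers unfamiliar with how such pooled correlations are obtained, the sketch below implements a standard DerSimonian-Laird random-effects meta-analysis on Fisher z-transformed Pearson correlations, the usual approach for estimates of this kind. The per-study r and n values are invented for illustration; the abstract does not report the underlying study data.

    import math

    def pool_correlations(r_values, n_values):
        """DerSimonian-Laird random-effects pooling of Pearson correlations."""
        # Fisher z-transform; the variance of z is 1 / (n - 3).
        z = [math.atanh(r) for r in r_values]
        v = [1.0 / (n - 3) for n in n_values]
        w = [1.0 / vi for vi in v]                        # fixed-effect weights
        z_fe = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
        q = sum(wi * (zi - z_fe) ** 2 for wi, zi in zip(w, z))
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(z) - 1)) / c)           # between-study variance
        w_re = [1.0 / (vi + tau2) for vi in v]            # random-effects weights
        z_re = sum(wi * zi for wi, zi in zip(w_re, z)) / sum(w_re)
        se = math.sqrt(1.0 / sum(w_re))
        lo, hi = z_re - 1.96 * se, z_re + 1.96 * se
        # Back-transform to the correlation scale.
        return math.tanh(z_re), (math.tanh(lo), math.tanh(hi))

    # Illustrative, made-up study data.
    r_pooled, ci = pool_correlations([0.35, 0.44, 0.29], [120, 85, 200])
    print(f"pooled r = {r_pooled:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")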

    Multi-GPU maximum entropy image synthesis for radio astronomy

    The maximum entropy method (MEM) is a well-known deconvolution technique in radio interferometry. The method solves a non-linear optimization problem with an entropy regularization term. Other heuristics such as CLEAN are faster but highly user-dependent. Nevertheless, MEM has the following advantages: it is unsupervised, it has a statistical basis, and it offers better resolution and image quality under certain conditions. This work presents a high-performance GPU version of non-gridding MEM, which is tested using real and simulated data. We propose a single-GPU and a multi-GPU implementation for single- and multi-spectral data, respectively. We also make use of the Peer-to-Peer and Unified Virtual Addressing features of newer GPUs, which allow multiple GPUs to be exploited transparently and efficiently. Several ALMA data sets are used to demonstrate the effectiveness of the method in imaging and to evaluate GPU performance. The results show that a speedup of 1000 to 5000 times over a sequential version can be achieved, depending on data and image size. This allows the HD142527 CO(6-5) short-baseline data set to be reconstructed in 2.1 minutes, instead of the 2.5 days taken by a sequential CPU version. Comment: 11 pages, 13 figure
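
    The sketch below writes out the kind of objective a non-gridding MEM solver minimizes: a chi-squared misfit to the un-gridded visibilities plus an entropy regularizer on the positive model image. It is a NumPy illustration of the general technique only; the regularization weight, prior level, and array conventions are assumptions, not the paper's GPU implementation.

    import numpy as np

    def mem_objective(image, x, y, u, v, vis, sigma, lam=0.01, prior=1e-3):
        """image: (N,) positive pixel values; x, y: (N,) pixel directions [rad];
        u, v, vis, sigma: (M,) visibility samples and their noise."""
        # Direct (non-gridded) model visibilities for every (u, v) sample.
        phase = -2j * np.pi * (np.outer(u, x) + np.outer(v, y))   # (M, N)
        model_vis = np.exp(phase) @ image
        chi2 = np.sum(np.abs(vis - model_vis) ** 2 / sigma ** 2)
        # Image entropy relative to a flat prior level.
        entropy = -np.sum(image * np.log(image / prior))
        # Minimizing this trades data fidelity against maximum entropy.
        return 0.5 * chi2 - lam * entropy

    In practice this objective is minimized with a gradient-based optimizer, which is the computation the paper parallelizes across GPUs.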

    Multi-Scale CLEAN deconvolution of radio synthesis images

    Radio synthesis imaging depends on deconvolution algorithms to counteract the sparse sampling of the Fourier plane. These deconvolution algorithms estimate the true sky brightness from the necessarily incompletely sampled visibility data. The most widely used radio synthesis deconvolution method is the CLEAN algorithm of Hogbom. This algorithm works extremely well for collections of point sources and surprisingly well for extended objects. However, its performance on extended objects can be improved by adopting a multi-scale approach. We describe and demonstrate a conceptually simple and algorithmically straightforward extension to CLEAN that models the sky brightness as a summation of emission components having different size scales. While previous multi-scale algorithms work sequentially on decreasing scale sizes, our algorithm works simultaneously on a range of specified scales. Applications to both real and simulated data sets are given. Comment: Submitted to IEEE Special Issue on Signal Processin
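
    The following sketch illustrates the simultaneous multi-scale idea described above: the residual image is smoothed at several scales at once, the strongest (scale, position) peak across all scales is chosen, and that component's point-spread-function response is subtracted. The Gaussian scale kernels, the absence of a scale bias, and the fixed iteration count are simplified assumptions rather than the authors' algorithm.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.signal import fftconvolve

    def multiscale_clean(dirty, psf, scales=(0, 2, 4, 8), gain=0.1, n_iter=200):
        residual = dirty.copy()
        model = np.zeros_like(dirty)
        for _ in range(n_iter):
            # Smooth the residual at every scale and locate each peak.
            smoothed = [gaussian_filter(residual, s) if s else residual
                        for s in scales]
            peaks = [np.unravel_index(np.argmax(np.abs(im)), im.shape)
                     for im in smoothed]
            # Pick the strongest peak across all scales simultaneously.
            best = max(range(len(scales)),
                       key=lambda k: abs(smoothed[k][peaks[k]]))
            s, (py, px) = scales[best], peaks[best]
            amp = gain * smoothed[best][py, px]
            # Build the scale-sized component and subtract its PSF response.
            comp = np.zeros_like(dirty)
            comp[py, px] = amp
            if s:
                comp = gaussian_filter(comp, s)
            model += comp
            residual -= fftconvolve(comp, psf, mode="same")
        return model, residual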

    On the Anatomy of MCMC-Based Maximum Likelihood Learning of Energy-Based Models

    This study investigates the effects of Markov chain Monte Carlo (MCMC) sampling in unsupervised Maximum Likelihood (ML) learning. Our attention is restricted to the family of unnormalized probability densities for which the negative log density (or energy function) is a ConvNet. We find that many of the techniques used to stabilize training in previous studies are not necessary. ML learning with a ConvNet potential requires only a few hyper-parameters and no regularization. Using this minimal framework, we identify a variety of ML learning outcomes that depend solely on the implementation of MCMC sampling. On one hand, we show that it is easy to train an energy-based model which can sample realistic images with short-run Langevin. ML can be effective and stable even when MCMC samples have much higher energy than true steady-state samples throughout training. Based on this insight, we introduce an ML method with purely noise-initialized MCMC, high-quality short-run synthesis, and the same budget as ML with informative MCMC initialization such as CD or PCD. Unlike previous models, our energy model can obtain realistic high-diversity samples from a noise signal after training. On the other hand, ConvNet potentials learned with non-convergent MCMC do not have a valid steady-state and cannot be considered approximate unnormalized densities of the training data because long-run MCMC samples differ greatly from observed images. We show that it is much harder to train a ConvNet potential to learn a steady-state over realistic images. To our knowledge, long-run MCMC samples of all previous models lose the realism of short-run samples. With correct tuning of Langevin noise, we train the first ConvNet potentials for which long-run and steady-state MCMC samples are realistic images. Comment: Code available at: https://github.com/point0bar1/ebm-anatom
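
    A hedged PyTorch sketch of the noise-initialized short-run Langevin sampling discussed above. The toy ConvNet energy, step size, noise scale, and number of steps are illustrative assumptions, not the paper's settings (those are in the linked repository).

    import torch
    import torch.nn as nn

    # Toy ConvNet potential U(x); architecture chosen only for illustration.
    energy_net = nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.SiLU(),
        nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.SiLU(),
        nn.Flatten(), nn.LazyLinear(1),
    )

    def short_run_langevin(x, n_steps=100, step_size=1.0, noise_scale=0.01):
        """Run K Langevin updates: x <- x - (step/2) * grad U(x) + noise."""
        x = x.clone().requires_grad_(True)
        for _ in range(n_steps):
            energy = energy_net(x).sum()
            grad, = torch.autograd.grad(energy, x)
            with torch.no_grad():
                x = x - 0.5 * step_size * grad \
                      + noise_scale * torch.randn_like(x)
            x.requires_grad_(True)
        return x.detach()

    # Purely noise-initialized chains, as in the proposed ML method.
    samples = short_run_langevin(torch.randn(16, 3, 32, 32))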