
    Weak Lensing Study in VOICE Survey II: Shear Bias Calibrations

    The VST Optical Imaging of the CDFS and ES1 Fields (VOICE) Survey is proposed to obtain deep optical ugri imaging of the CDFS and ES1 fields using the VLT Survey Telescope (VST). At present, the observations for the CDFS field have been completed, comprising in total about 4.9 deg² down to r_AB ∼ 26 mag. In the companion paper by Fu et al. (2018), we present the weak lensing shear measurements for r-band images with seeing ≤ 0.9 arcsec. In this paper, we perform image simulations to calibrate possible biases of the measured shear signals. Statistically, the properties of the simulated point spread function (PSF) and galaxies show good agreement with those of the observations. The multiplicative bias is calibrated to reach an accuracy of ∼3.0%. We study the sensitivity of the bias to undetected faint galaxies and to neighboring galaxies. We find that undetected galaxies contribute to the multiplicative bias at the level of ∼0.3%. Further analysis shows that galaxies with lower signal-to-noise ratio (SNR) are impacted more significantly because the undetected galaxies skew the background noise distribution. For the neighboring galaxies, we find that although most have been rejected in the shape measurement procedure, about one third of them still remain in the final shear sample. They show a larger ellipticity dispersion and contribute ∼0.2% of the multiplicative bias. Such a bias can be removed by further eliminating these neighboring galaxies, but the effective number density of the galaxies is then reduced considerably. Therefore, efficient methods should be developed for future weak lensing deep surveys. Comment: 11 pages, 13 figures, 2 tables. MNRAS accepted
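    As a sketch of the calibration idea, the standard linear shear bias model g_obs = (1 + m) g_true + c can be fitted to simulation outputs to recover the multiplicative bias m and additive bias c. The input shears, bias values, and noise level below are illustrative, not the survey's actual numbers:

```python
import numpy as np

# Hypothetical calibration demo: simulate galaxies with known input shears,
# "measure" them with an assumed bias, and recover m and c by a linear fit.
rng = np.random.default_rng(0)
g_true = rng.uniform(-0.05, 0.05, size=1000)   # input shears of the simulation
m_true, c_true = 0.03, 1e-4                    # assumed biases for the demo
g_obs = (1 + m_true) * g_true + c_true + rng.normal(0, 1e-3, g_true.size)

# Least-squares fit of g_obs = slope * g_true + intercept
A = np.vstack([g_true, np.ones_like(g_true)]).T
(slope, intercept), *_ = np.linalg.lstsq(A, g_obs, rcond=None)
m_hat, c_hat = slope - 1.0, intercept
print(f"m = {m_hat:.4f}, c = {c_hat:.2e}")
```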

    In-situ measurement and characterization of cloud particles using digital in-line holography

    Satellite measurement validations, climate models, atmospheric radiative transfer models, and cloud models all depend on accurate measurements of cloud particle size distributions, number densities, spatial distributions, and other parameters relevant to cloud microphysical processes. Yet many airborne instruments designed to measure size distributions and concentrations of cloud particles have large uncertainties in measuring number densities and size distributions of small ice crystals. HOLODEC (Holographic Detector for Clouds) is a new instrument that avoids many of these uncertainties and makes possible measurements that other probes have never made. The advantages of HOLODEC are inherent to the holographic method. In this dissertation, I describe HOLODEC, its in-situ measurements of cloud particles, and the results of its test flights. I present a hologram reconstruction algorithm whose sample spacing does not vary with reconstruction distance. This reconstruction algorithm accurately reconstructs the field at all distances inside a typical holographic measurement volume, as verified by comparison with analytical solutions to the Huygens-Fresnel diffraction integral. It is fast to compute and has diffraction-limited resolution. Further, I describe an algorithm that can find the position along the optical axis of small particles as well as large complex-shaped particles. I explain an implementation of these algorithms as an efficient, robust, automated program that allows us to process holograms on a computer cluster in a reasonable time. I show size distributions and number densities of cloud particles, and show that they are within the uncertainty of independent measurements made with another method. This demonstrates the feasibility of a cloud particle instrument with advantages over current standard instruments. These advantages include a unique ability to detect shattered particles using three-dimensional positions, and a sample volume size that does not vary with particle size or airspeed. It is also able to yield two-dimensional particle profiles using the same measurements.
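    One common reconstruction scheme with exactly this property is the angular spectrum method, whose output grid spacing equals the input spacing at every propagation distance. The sketch below is a generic illustration under assumed wavelength and pixel pitch, not the HOLODEC processing code:

```python
import numpy as np

# Minimal angular-spectrum propagation sketch (assumed parameters, generic
# implementation). The sample spacing dx of the reconstructed field is the
# same as that of the hologram, independent of the distance z.
def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field a distance z; grid spacing is unchanged."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function H = exp(i 2*pi/lambda * z * sqrt(1 - (lam fx)^2 - (lam fy)^2))
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0  # suppress evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Example: a uniform plane wave propagates to a uniform plane wave.
hologram = np.ones((256, 256), dtype=complex)
out = angular_spectrum_propagate(hologram, wavelength=532e-9, dx=3e-6, z=0.01)
```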

    Efficient Human Pose Estimation with Image-dependent Interactions

    Human pose estimation from 2D images is one of the most challenging and computationally demanding problems in computer vision. Standard models such as Pictorial Structures consider interactions between kinematically connected joints or limbs, leading to inference cost that is quadratic in the number of pixels. As a result, researchers and practitioners have restricted themselves to simple models which only measure the quality of limb-pair possibilities by their 2D geometric plausibility. In this talk, we propose novel methods that allow for efficient inference in richer models with data-dependent interactions. First, we introduce structured prediction cascades, a structured analog of binary cascaded classifiers, which learn to focus computational effort where it is needed, filtering out many states cheaply while ensuring the correct output is unfiltered. Second, we propose a way to decompose models of human pose with cyclic dependencies into a collection of tree models, and provide novel methods to impose model agreement. Finally, we develop a local linear approach that learns bases centered around modes in the training data, giving us image-dependent local models which are fast and accurate. These techniques allow for sparse and efficient inference on the order of minutes or seconds per image. As a result, we can afford to model pairwise interaction potentials much more richly with data-dependent features such as contour continuity, segmentation alignment, color consistency, optical flow and multiple modes. We show empirically that these richer models are worthwhile, obtaining significantly more accurate pose estimation on popular datasets.
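    The cascade idea can be illustrated with a toy filtering stage: cheap per-state scores prune most candidate states, keeping only those above a convex combination of the maximum and mean score, and the survivors are passed to a more expensive model. The scores below are random stand-ins, not real pose features:

```python
import numpy as np

# Toy cascade stage. The threshold rule (alpha * max + (1 - alpha) * mean)
# mirrors the max-marginal thresholding used in structured prediction
# cascades; everything else here is illustrative.
def cascade_filter(scores, alpha=0.5):
    """Keep indices of states whose score >= alpha*max + (1-alpha)*mean."""
    threshold = alpha * scores.max() + (1 - alpha) * scores.mean()
    return np.flatnonzero(scores >= threshold)

rng = np.random.default_rng(1)
cheap_scores = rng.normal(size=10000)        # cheap scores for 10,000 states
survivors = cascade_filter(cheap_scores)     # only these reach the rich model
```

With Gaussian scores and alpha = 0.5, the vast majority of states are pruned before any expensive pairwise potentials are evaluated.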

    Fast and robust image feature matching methods for computer vision applications

    Service robotic systems are designed to solve tasks such as recognizing and manipulating objects, understanding natural scenes, and navigating in dynamic and populated environments. It is immediately evident that such tasks cannot be modeled in all necessary detail as easily as industrial robot tasks; therefore, a service robotic system must be able to sense and interact with the surrounding physical environment through a multitude of sensors and actuators. Environment sensing is one of the core problems that limit the deployment of mobile service robots, since existing sensing systems are either too slow or too expensive. Visual sensing is the most promising way to provide a cost-effective solution to the mobile robot sensing problem. It is usually achieved using one or several digital cameras placed on the robot or distributed in its environment. Digital cameras are information-rich, relatively inexpensive sensors that can be used to solve a number of key problems for robotics and other autonomous intelligent systems, such as visual servoing, robot navigation, object recognition, pose estimation, and much more. The key challenge in taking advantage of this powerful and inexpensive sensor is to develop algorithms that can reliably and quickly extract and match the visual information needed to automatically interpret the environment in real time. Although considerable research has been conducted in recent years on the development of algorithms for computer and robot vision problems, there are still open research challenges regarding reliability, accuracy, and processing time. The Scale Invariant Feature Transform (SIFT) is one of the most widely used methods and has recently attracted much attention in the computer vision community because SIFT features are highly distinctive and invariant to scale, rotation, and illumination changes.
    In addition, SIFT features are relatively easy to extract and to match against a large database of local features. Generally, the SIFT algorithm has two main drawbacks. The first is that the computational complexity of the algorithm increases rapidly with the number of key-points, especially at the matching step, due to the high dimensionality of the SIFT feature descriptor. The second is that SIFT features are not robust to large viewpoint changes. These drawbacks limit the use of the SIFT algorithm in robot vision applications, which often require real-time performance and must deal with large viewpoint changes. This dissertation proposes three new approaches to address the constraints faced when using SIFT features for robot vision applications: speeded-up SIFT feature matching, robust SIFT feature matching, and the inclusion of a closed-loop control structure in object recognition and pose estimation systems. The proposed methods are implemented and tested on the FRIEND II/III service robotic system. The achieved results are valuable for adapting the SIFT algorithm to robot vision applications.
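    For context, the baseline that speeded-up matching methods aim to improve is exhaustive nearest-neighbour search over 128-dimensional descriptors with Lowe's distance-ratio test, which is linear in the database size per query. The sketch below uses random stand-in descriptors rather than real SIFT output:

```python
import numpy as np

# Brute-force descriptor matching with the distance-ratio test: a match is
# accepted only if the nearest neighbour is clearly closer than the second
# nearest. Descriptors here are random 128-d stand-ins for SIFT vectors.
def match_ratio_test(desc1, desc2, ratio=0.8):
    """Return (i, j) pairs where desc1[i] matches desc2[j]."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)  # O(len(desc2)) per query
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, nearest))
    return matches

rng = np.random.default_rng(2)
desc2 = rng.normal(size=(50, 128))                            # "database"
desc1 = desc2[:10] + rng.normal(scale=0.01, size=(10, 128))   # noisy copies
matches = match_ratio_test(desc1, desc2)
```

The per-query cost of this loop is what grows painfully with the number of key-points, motivating the speeded-up matching strategies described above.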

    Shapes and Shears, Stars and Smears: Optimal Measurements for Weak Lensing

    We present the theoretical and analytical bases of optimal techniques to measure weak gravitational shear from images of galaxies. We first characterize the geometric space of shears and ellipticity, then use this geometric interpretation to analyse images. The steps of this analysis include: measurement of object shapes on images, combining measurements of a given galaxy on different images, estimating the underlying shear from an ensemble of galaxy shapes, and compensating for the systematic effects of image distortion, bias from PSF asymmetries, and "dilution" of the signal by the seeing. These methods minimize the ellipticity measurement noise, provide calculable shear uncertainty estimates, and allow removal of systematic contamination by PSF effects to arbitrary precision. Galaxy images and PSFs are decomposed into a family of orthogonal 2D Gaussian-based functions, making the PSF correction and shape measurement relatively straightforward and computationally efficient. We also discuss sources of noise-induced bias in weak lensing measurements and provide a solution for these and previously identified biases. Comment: Version accepted to AJ. Minor fixes, plus a simpler method of shape weighting. Version with full vector figures available via http://www.astro.lsa.umich.edu/users/garyb/PUBLICATIONS
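    As a schematic of the step "estimating the underlying shear from an ensemble of galaxy shapes", one common distortion-style estimator divides the mean ellipticity by a responsivity factor. Conventions differ between papers; the factors below are illustrative assumptions, not necessarily the ones adopted in this work:

```python
import numpy as np

# Toy shear estimation from an ensemble of ellipticities. We assume the
# first-order response <e> = 2 R g with responsivity R = 1 - <e^2>; both the
# response model and the numbers are illustrative.
rng = np.random.default_rng(3)
g = 0.02                                   # assumed true shear component
e_int = rng.normal(0, 0.3, size=200_000)   # intrinsic ellipticities (toy)
e_obs = e_int + 2 * (1 - e_int**2) * g     # first-order shear response

R = 1 - np.mean(e_obs**2)                  # responsivity estimate
g_hat = np.mean(e_obs) / (2 * R)           # recovered shear
```

Averaging over many galaxies beats down the intrinsic "shape noise", which is why the ellipticity measurement noise per galaxy matters so much.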

    15th SC@RUG 2018 proceedings 2017-2018
