Weak Lensing Study in VOICE Survey II: Shear Bias Calibrations
The VST Optical Imaging of the CDFS and ES1 Fields (VOICE) Survey is proposed
to obtain deep optical imaging of the CDFS and ES1 fields using the VLT
Survey Telescope (VST). At present, the observations for the CDFS field have
been completed, comprising in total about 4.9 deg² down to
26 mag. In the companion paper by Fu et al. (2018), we
present the weak lensing shear measurements for -band images with seeing
0.9 arcsec. In this paper, we perform image simulations to calibrate
possible biases of the measured shear signals. Statistically, the properties of
the simulated point spread function (PSF) and galaxies show good agreement
with those of the observations. The multiplicative bias is calibrated to reach an
accuracy of 3.0%. We study the bias sensitivities to the undetected faint
galaxies and to the neighboring galaxies. We find that undetected galaxies
contribute to the multiplicative bias at the level of 0.3%. Further
analysis shows that galaxies with lower signal-to-noise ratio (SNR) are
impacted more significantly because the undetected galaxies skew the background
noise distribution. For the neighboring galaxies, we find that although most
have been rejected in the shape measurement procedure, about one third of them
still remain in the final shear sample. They show a larger ellipticity
dispersion and contribute to the multiplicative bias at the 0.2% level. Such
a bias can be removed by further eliminating these neighboring galaxies, but
doing so considerably reduces the effective number density of galaxies.
Therefore, efficient methods should be developed for future weak lensing deep surveys.
Comment: 11 pages, 13 figures, 2 tables. MNRAS accepted.
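The multiplicative and additive biases calibrated above are conventionally defined through the linear model g_obs = (1 + m) g_true + c and recovered by fitting measured shears against the known input shears of the image simulations. A minimal sketch of that fit, using entirely synthetic numbers (the real calibration is driven by the VOICE image simulations, not by this toy data):

```python
import numpy as np

# Hedged sketch of multiplicative/additive shear-bias calibration:
# measured shear is modelled as g_obs = (1 + m) * g_true + c, and (m, c)
# are recovered by a linear least-squares fit. The data below are
# synthetic; real pipelines fit against simulated galaxy images.
rng = np.random.default_rng(0)
g_true = rng.uniform(-0.05, 0.05, size=1000)   # input shears of the simulation
m_in, c_in = 0.02, 1e-4                        # biases injected for the demo
g_obs = (1.0 + m_in) * g_true + c_in + rng.normal(0.0, 1e-4, size=1000)

# Fit g_obs = (1 + m) g_true + c  ->  slope = 1 + m, intercept = c
slope, intercept = np.polyfit(g_true, g_obs, 1)
m_hat, c_hat = slope - 1.0, intercept
```

A real calibration repeats this fit in bins of galaxy properties such as SNR and size, since the bias varies across the sample.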
In-situ measurement and characterization of cloud particles using digital in-line holography
Satellite measurement validations, climate models, atmospheric radiative transfer models, and cloud models all depend on accurate measurements of cloud particle size distributions, number densities, spatial distributions, and other parameters relevant to cloud microphysical processes. Yet many airborne instruments designed to measure size distributions and concentrations of cloud particles have large uncertainties when measuring number densities and size distributions of small ice crystals. HOLODEC (Holographic Detector for Clouds) is a new instrument that avoids many of these uncertainties and makes possible measurements that other probes have never made. The advantages of HOLODEC are inherent to the holographic method.
In this dissertation, I describe HOLODEC, its in-situ measurements of cloud particles, and the results of its test flights. I present a hologram reconstruction algorithm whose sample spacing does not vary with reconstruction distance. This algorithm accurately reconstructs the field at all distances inside a typical holographic measurement volume, as verified by comparison with analytical solutions to the Huygens-Fresnel diffraction integral. It is fast to compute and has diffraction-limited resolution. I also describe an algorithm that can find the position along the optical axis of small particles as well as of large, complex-shaped particles. I explain an implementation of these algorithms as an efficient, robust, automated program that allows us to process holograms on a computer cluster in a reasonable time.
I show size distributions and number densities of cloud particles and show that they are within the uncertainty of independent measurements made with another measurement method. The feasibility of a cloud particle instrument with advantages over current standard instruments is thus demonstrated. These advantages include a unique ability to detect shattered particles using three-dimensional positions, and a sample volume size that does not vary with particle size or airspeed. The instrument is also able to yield two-dimensional particle profiles from the same measurements.
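A reconstruction whose sample spacing does not vary with distance is characteristic of the angular-spectrum (convolution) approach, as opposed to the single-FFT Fresnel transform. The dissertation's actual algorithm is not reproduced here; the sketch below shows a standard angular-spectrum propagator with assumed wavelength and pixel-pitch values:

```python
import numpy as np

# Minimal sketch (not the HOLODEC code) of angular-spectrum hologram
# reconstruction: unlike the single-FFT Fresnel transform, the output
# keeps the detector's sample spacing at every reconstruction distance z.
def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z (all units in metres)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                  # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)                       # free-space transfer function
    H[arg < 0] = 0.0                              # discard evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Sanity check: a plane wave stays a plane wave (up to phase) at any z.
# The 658 nm wavelength and 6.8 um pitch are illustrative assumptions.
plane = np.ones((64, 64), dtype=complex)
out = angular_spectrum(plane, 658e-9, 6.8e-6, 0.05)
```

Because the output grid equals the input grid, particle sizes measured in pixels remain comparable at every reconstruction depth.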
Efficient Human Pose Estimation with Image-dependent Interactions
Human pose estimation from 2D images is one of the most challenging
and computationally-demanding problems in computer vision. Standard
models such as Pictorial Structures consider interactions between
kinematically connected joints or limbs, leading to inference cost
that is quadratic in the number of pixels. As a result, researchers
and practitioners have restricted themselves to simple models which
only measure the quality of limb-pair possibilities by their 2D
geometric plausibility.
In this talk, we propose novel methods which allow for efficient
inference in richer models with data-dependent interactions. First, we
introduce structured prediction cascades, a structured analog of
binary cascaded classifiers, which learn to focus computational effort
where it is needed, filtering out many states cheaply while ensuring
the correct output is unfiltered. Second, we propose a way to
decompose models of human pose with cyclic dependencies into a
collection of tree models, and provide novel methods to impose model
agreement. Finally, we develop a local linear approach that learns
bases centered around modes in the training data, giving us
image-dependent local models which are fast and accurate.
These techniques allow for sparse and efficient inference on the order
of minutes or seconds per image. As a result, we can afford to model
pairwise interaction potentials much more richly with data-dependent
features such as contour continuity, segmentation alignment, color
consistency, optical flow and multiple modes. We show empirically that
these richer models are worthwhile, obtaining significantly more
accurate pose estimation on popular datasets
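As a rough illustration of the cascade idea (not the authors' code), the sketch below computes a max-marginal score for every state of a tiny two-variable model and prunes states scoring below a threshold interpolated between the best score and the mean score; by construction this never eliminates the highest-scoring joint state:

```python
import numpy as np

# Illustrative sketch of structured-prediction-cascade filtering on a
# two-variable chain. unary[i, s] scores state s of variable i and
# pairwise[s, t] scores the pair; alpha trades pruning aggressiveness
# against safety. All names and numbers here are assumptions.
def prune_states(unary, pairwise, alpha=0.5):
    """Return boolean keep-masks for the states of each variable."""
    # Max-marginal of state s: best total score of any output using s.
    total = unary[0][:, None] + pairwise + unary[1][None, :]
    mm_x = total.max(axis=1)          # max-marginals for variable 0
    mm_y = total.max(axis=0)          # max-marginals for variable 1
    best = total.max()
    def mask(mm):
        # Threshold between the best score and the mean max-marginal;
        # thresh <= best, so the argmax state always survives.
        thresh = alpha * best + (1 - alpha) * mm.mean()
        return mm >= thresh
    return mask(mm_x), mask(mm_y)

# Tiny example: the (state 0, state 1) pairing dominates and is kept.
unary = np.array([[3.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
pairwise = np.zeros((3, 3))
keep_x, keep_y = prune_states(unary, pairwise)
```

In the pose setting, the "states" are candidate joint locations over the image grid, so pruning most of them early is what makes the richer pairwise features affordable.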
Fast and robust image feature matching methods for computer vision applications
Service robotic systems are designed to solve tasks such as recognizing and manipulating objects, understanding natural scenes, and navigating in dynamic and populated environments. It is immediately evident that such tasks cannot be modeled in all necessary detail as easily as industrial robot tasks; a service robotic system therefore has to be able to sense and interact with the surrounding physical environment through a multitude of sensors and actuators. Environment sensing is one of the core problems limiting the deployment of mobile service robots, since existing sensing systems are either too slow or too expensive. Visual sensing is the most promising way to provide a cost-effective solution to the mobile robot sensing problem. It is usually achieved using one or several digital cameras placed on the robot or distributed in its environment. Digital cameras are information-rich, relatively inexpensive sensors that can be used to solve a number of key problems for robotics and other autonomous intelligent systems, such as visual servoing, robot navigation, object recognition, pose estimation, and much more. The key challenge in taking advantage of this powerful and inexpensive sensor is to come up with algorithms that can reliably and quickly extract and match the useful visual information necessary to automatically interpret the environment in real time. Although considerable research has been conducted in recent years on the development of algorithms for computer and robot vision problems, there are still open research challenges regarding reliability, accuracy, and processing time. The Scale Invariant Feature Transform (SIFT) is one of the most widely used methods and has recently attracted much attention in the computer vision community because SIFT features are highly distinctive and invariant to scale, rotation, and illumination changes.
In addition, SIFT features are relatively easy to extract and to match against a large database of local features. Generally, the SIFT algorithm has two main drawbacks. The first is that its computational complexity increases rapidly with the number of key-points, especially at the matching step, due to the high dimensionality of the SIFT feature descriptor. The second is that SIFT features are not robust to large viewpoint changes. These drawbacks limit the practical use of the SIFT algorithm for robot vision applications, since such applications often require real-time performance and must cope with large viewpoint changes. This dissertation proposes three new approaches to address the constraints faced when using SIFT features for robot vision applications: speeded-up SIFT feature matching, robust SIFT feature matching, and the inclusion of a closed-loop control structure in object recognition and pose estimation systems. The proposed methods are implemented and tested on the FRIEND II/III service robotic system. The achieved results are valuable for adapting the SIFT algorithm to robot vision applications.
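The matching-step bottleneck described above comes from comparing every 128-dimensional query descriptor against every database descriptor. A plain NumPy sketch of brute-force matching with Lowe's ratio test (the descriptor values and the 0.8 threshold are illustrative assumptions, not the dissertation's code):

```python
import numpy as np

# Sketch of the SIFT matching step and its cost: each query descriptor
# is compared against every database descriptor (128-D), and Lowe's
# ratio test rejects ambiguous matches.
def ratio_test_match(query, database, ratio=0.8):
    """Return (query_idx, db_idx) pairs passing the nearest/second-nearest test."""
    # Squared Euclidean distances, shape (n_query, n_db). This
    # O(n_query * n_db * 128) step is why matching dominates the run
    # time as the number of key-points grows.
    d2 = ((query[:, None, :] - database[None, :, :]) ** 2).sum(axis=2)
    order = np.argsort(d2, axis=1)
    nearest, second = order[:, 0], order[:, 1]
    rows = np.arange(len(query))
    good = d2[rows, nearest] < (ratio ** 2) * d2[rows, second]
    return [(int(i), int(nearest[i])) for i in np.nonzero(good)[0]]

# Synthetic demo: a query near database entry 2 should match only it.
rng = np.random.default_rng(1)
db = rng.normal(size=(5, 128))
query = db[2:3] + 0.01 * rng.normal(size=(1, 128))
matches = ratio_test_match(query, db)
```

Approaches that speed up this step typically replace the exhaustive distance matrix with an approximate nearest-neighbor index or a reduced-dimension descriptor.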
Shapes and Shears, Stars and Smears: Optimal Measurements for Weak Lensing
We present the theoretical and analytical bases of optimal techniques to
measure weak gravitational shear from images of galaxies. We first characterize
the geometric space of shears and ellipticity, then use this geometric
interpretation to analyse images. The steps of this analysis include:
measurement of object shapes on images, combining measurements of a given
galaxy on different images, estimating the underlying shear from an ensemble of
galaxy shapes, and compensating for the systematic effects of image distortion,
bias from PSF asymmetries, and "dilution" of the signal by the seeing. These
methods minimize the ellipticity measurement noise, provide calculable shear
uncertainty estimates, and allow removal of systematic contamination by PSF
effects to arbitrary precision. Galaxy images and PSFs are decomposed into a
family of orthogonal 2d Gaussian-based functions, making the PSF correction and
shape measurement relatively straightforward and computationally efficient. We
also discuss sources of noise-induced bias in weak lensing measurements and
provide a solution for these and previously identified biases.
Comment: Version accepted to AJ. Minor fixes, plus a simpler method of shape weighting. Version with full vector figures available via http://www.astro.lsa.umich.edu/users/garyb/PUBLICATIONS
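The simplest building block of such a shape-measurement pipeline is an ellipticity estimate from Gaussian-weighted second moments; the paper's full method (orthogonal Gaussian-based decomposition with PSF correction) is considerably more elaborate. A hedged sketch, assuming a centered postage stamp and an arbitrary weight scale:

```python
import numpy as np

# Hedged sketch of ellipticity from Gaussian-weighted second moments.
# Assumes the object is centered on the stamp; sigma_w is an assumed
# weight scale, and no PSF correction is applied.
def weighted_ellipticity(img, sigma_w=3.0):
    n = img.shape[0]
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    w = np.exp(-(x**2 + y**2) / (2 * sigma_w**2))   # circular Gaussian weight
    f = img * w
    norm = f.sum()
    qxx = (f * x * x).sum() / norm                  # weighted second moments
    qyy = (f * y * y).sum() / norm
    qxy = (f * x * y).sum() / norm
    denom = qxx + qyy
    # Standard ellipticity components e1, e2
    return (qxx - qyy) / denom, 2 * qxy / denom

# An x-elongated Gaussian blob should give e1 > 0 and e2 ~ 0.
yy, xx = np.mgrid[:41, :41] - 20.0
blob = np.exp(-(xx**2 / (2 * 4.0**2) + yy**2 / (2 * 2.0**2)))
e1, e2 = weighted_ellipticity(blob)
```

The weight suppresses noise in the stamp's outskirts but also biases the raw moments, which is one reason calculable corrections like those in the paper are needed.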
High-quality dense stereo vision for whole body imaging and obesity assessment
The prevalence of obesity has necessitated developing safe and convenient tools for timely assessment and monitoring of this condition across a broad range of the population. Three-dimensional (3D) body imaging has become a new means for obesity assessment. Moreover, it generates body shape information that is meaningful for fitness, ergonomics, and personalized clothing. In previous work in our lab, we developed a prototype active stereo vision system that demonstrated the potential to fulfill this goal, but the prototype required four computer projectors to cast artificial textures on the body to facilitate stereo matching on texture-deficient surfaces (e.g., skin). This decreases the mobility of the system when used to collect data from a large population. In addition, the resolution of the generated 3D images was limited by the cameras and projectors available during the project. The study reported in this dissertation highlights our continued effort to improve the capability of 3D body imaging through simplified hardware for passive stereo and advanced computation techniques.
The system utilizes high-resolution single-lens reflex (SLR) cameras, which have become widely available, and is configured in a two-stance design to image the front and back surfaces of a person. A total of eight cameras are used to form four stereo units, each covering a quarter of the body surface. The stereo units are individually calibrated with a specific pattern to determine the cameras' intrinsic and extrinsic parameters for stereo matching. The global orientation and position of each stereo unit within a common world coordinate system is calculated through a 3D registration step. The stereo calibration and 3D registration procedures do not need to be repeated for a deployed system if the cameras' relative positions have not changed. This property contributes to the portability of the system and greatly alleviates the maintenance task. The image acquisition time is around two seconds for a whole-body capture. The system works in an indoor environment with moderate ambient light.
Advanced stereo computation algorithms are developed by taking advantage of high-resolution images and by tackling the ambiguity problem in stereo matching. A multi-scale, coarse-to-fine matching framework is proposed to match large-scale textures at a low resolution and refine the matched results over higher resolutions. This matching strategy reduces the complexity of the computation and avoids ambiguous matching at the native resolution. The pixel-to-pixel stereo matching algorithm follows a classic, four-step strategy which consists of matching cost computation, cost aggregation, disparity computation and disparity refinement.
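The four-step strategy named above can be illustrated with a toy block-matching pipeline: absolute-difference costs, box-filter aggregation, and winner-take-all disparity selection (the refinement step is omitted). This is a generic textbook sketch, not the dissertation's multi-scale algorithm:

```python
import numpy as np

def box_aggregate(cost_slice, win=1):
    """Step 2: average the per-pixel cost over a (2*win+1)^2 window."""
    k = 2 * win + 1
    pad = np.pad(cost_slice, win, mode='edge')
    h, w = cost_slice.shape
    return sum(pad[i:i + h, j:j + w] for i in range(k) for j in range(k)) / k**2

def stereo_wta(left, right, max_disp, win=1):
    """Toy rectified-pair stereo: cost, aggregation, winner-take-all."""
    h, w = left.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        # Step 1: matching cost -- absolute intensity difference at disparity d
        diff = np.abs(left[:, d:] - right[:, :w - d])
        cost[d, :, d:] = box_aggregate(diff, win)
    # Step 3: winner-take-all disparity (step 4, refinement, is omitted)
    return cost.argmin(axis=0)

# Synthetic check: shift a random texture by 2 pixels between views.
rng = np.random.default_rng(3)
right = rng.uniform(size=(20, 30))
left = np.empty_like(right)
left[:, 2:] = right[:, :-2]          # true disparity of 2 pixels
left[:, :2] = right[:, :2]           # crude border fill
disp = stereo_wta(left, right, max_disp=4)
```

The multi-scale framework in the dissertation wraps a refinement loop around this kind of matcher so that full-resolution ambiguity is never faced directly.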
The system performance has been evaluated on mannequins and human subjects in comparison with other measurement methods. It was found that geometrical measurements from the reconstructed 3D body models, including body circumferences and whole-body volume, are highly repeatable and consistent with manual and other instrumental measurements (CV 0.99). The agreement of percent body fat (%BF) estimation on human subjects between stereo and dual-energy X-ray absorptiometry (DEXA) was improved over the previous active stereo system, and the limits of agreement with 95% confidence were reduced by half. Our achieved limits of agreement for %BF estimation are among the lowest reported in comparative studies with commercialized air displacement plethysmography (ADP) and DEXA. In practice, %BF estimation through a two-component model is sensitive to the body volume measurement, and the estimation of lung volume could be a source of variation. Protocols for this type of measurement should be created with an awareness of this factor.
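The two-component model mentioned in the closing caveat converts body density into %BF, most commonly via the Siri equation, which makes the stated sensitivity to volume errors easy to quantify. The subject's mass, volume, and the 0.5 L error below are illustrative numbers, not values from the dissertation:

```python
# Two-component body-composition model (Siri 1961): %BF from body density.
# A small volume error, e.g. unsubtracted lung volume, shifts the estimate
# noticeably, which is the sensitivity the text warns about.
def percent_body_fat(mass_kg, volume_l):
    density = mass_kg / volume_l        # g/ml, since kg/L equals g/ml
    return 495.0 / density - 450.0      # Siri equation

bf = percent_body_fat(70.0, 66.0)       # nominal (hypothetical) subject
bf_err = percent_body_fat(70.0, 66.5)   # same subject with +0.5 L volume error
```

Here a 0.5 L overestimate of body volume inflates the %BF estimate by several percentage points, which is why measurement protocols must account for lung volume.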