Motion blur in digital images - analysis, detection and correction of motion blur in photogrammetry
Unmanned aerial vehicles (UAVs) have become an interesting and active research topic in photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitude combined with a high-resolution camera. UAV image flights are also cost-effective and have become attractive for many applications, including change detection in small-scale areas.
One of the main problems preventing full automation of UAV imagery data processing is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of such blurred images is currently achieved manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated process is necessary, which must be both reliable and fast.
This thesis demonstrates the negative effect that blurred images have on photogrammetric processing. It shows that even small amounts of blur have serious impacts on target detection and slow down processing due to the need for human intervention. Larger blur can make an image completely unusable, so such images need to be excluded from processing. To exclude blurred images from large image datasets, an algorithm was developed. The newly developed method detects blur caused by linear camera displacement and is modelled on how humans detect blur: humans judge best whether an image is blurred by comparing it to other images. The developed algorithm simulates this procedure by creating a comparison image using image processing. Creating the comparison image internally makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard deviation), does not on its own provide an absolute number for judging whether an image is blurred. To achieve a reliable judgement of image sharpness, a SIEDS value has to be compared to the other SIEDS values of the same dataset.
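The thesis' exact SIEDS formula is not reproduced in this abstract, but the underlying idea, comparing an image against an internally generated blurred copy of itself, can be sketched as a no-reference blur metric. All function names below are illustrative, not from the thesis:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box blur (edges wrap via np.roll; acceptable for a demo)."""
    acc = np.zeros(img.shape, dtype=float)
    r = k // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / (k * k)

def edge_strength(img):
    """Mean absolute gradient as a crude edge-energy measure."""
    return np.abs(np.diff(img, axis=1)).mean() + np.abs(np.diff(img, axis=0)).mean()

def blur_score(img):
    """Relative loss of edge energy after re-blurring.
    A sharp image loses much of its edge energy when blurred again, while an
    already blurred image loses little, so higher scores mean a sharper input."""
    e0 = edge_strength(img)
    e1 = edge_strength(box_blur(img))
    return (e0 - e1) / (e0 + 1e-12)
```

As with SIEDS, the score is only meaningful relative to other scores from the same dataset, since absolute edge energy depends on scene content.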
This algorithm enables the exclusion of blurred images and subsequently allows photogrammetric processing without them. However, it is also possible to use deblurring techniques to restore blurred images. Deblurring is a widely researched topic and is often based on Wiener or Richardson-Lucy deconvolution, both of which require precise knowledge of the blur path and extent. Even with knowledge of the blur kernel, the correction introduces errors such as ringing, and the deblurred image appears muddy and not completely sharp. In the study reported here, overlapping images are used to support the deblurring process. An algorithm based on the Fourier transform is presented. It works well in flat areas, but the need for geometrically correct sharp images for deblurring may limit its application. Another way to enhance the image is the unsharp mask method, which improves images significantly and makes photogrammetric processing more successful. However, deblurring needs to focus on geometrically correct deblurring to assure geometrically correct measurements. To this end, a novel edge-shifting approach was developed which aims at geometrically correct deblurring. The idea of edge shifting appears promising but requires more advanced programming.
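Wiener deconvolution, mentioned above, is a standard frequency-domain technique. A minimal sketch, assuming the point-spread function (PSF) is known and the blur is a circular convolution:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Frequency-domain Wiener deconvolution with a known point-spread
    function (psf) and a scalar noise-to-signal ratio (nsr)."""
    H = np.fft.fft2(psf, s=blurred.shape)        # transfer function of the blur
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + nsr)  # Wiener filter: H* / (|H|^2 + nsr)
    return np.real(np.fft.ifft2(F))
```

The `nsr` term regularises frequencies where the PSF response is near zero; these are exactly the frequencies where ringing artefacts originate, which is why, as noted above, the restored image is never completely sharp.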
Non-contact vision-based deformation monitoring on bridge structures
Information on deformation is an important metric for bridge condition and performance assessment, e.g. identifying abnormal events, calibrating bridge models and estimating load-carrying capacities. However, accurate measurement of bridge deformation, especially for long-span bridges, remains a challenging task. The major aim of this research is to develop practical and cost-effective techniques for accurate deformation monitoring of bridge structures. Vision-based systems are taken as the study focus for several reasons: low cost, easy installation, suitable sample rates, and remote, distributed sensing.
This research proposes a custom-developed vision-based system for bridge deformation monitoring. The system supports either consumer-grade or professional cameras and incorporates four advanced video tracking methods to adapt to different test situations. The sensing accuracy is first quantified under laboratory conditions. The working performance in field testing is evaluated on one short-span and one long-span bridge, considering several influential factors, i.e., long-range sensing, low-contrast target patterns, pattern changes and lighting changes. Through case studies, suggestions for tracking-method selection in field testing are summarised. Possible limitations of vision-based systems are illustrated as well.
To overcome the observed limitations of vision-based systems, this research further proposes a mixed system combining cameras with accelerometers for accurate deformation measurement. To integrate displacement with acceleration data autonomously, a novel data fusion method based on a Kalman filter and maximum likelihood estimation is proposed. Field test validation shows that the method is effective in improving displacement accuracy and widening the frequency bandwidth. The mixed system based on data fusion is applied in field testing of a railway bridge under undesirable test conditions (e.g. low-contrast target patterns and camera shake). Analysis results indicate that the system offers higher accuracy than a camera alone and is viable for bridge influence line estimation.
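The displacement/acceleration fusion described above can be illustrated with a basic linear Kalman filter (the thesis additionally tunes noise parameters by maximum likelihood, which is omitted here; the noise levels `q` and `r` below are illustrative assumptions):

```python
import numpy as np

def fuse_displacement_acceleration(displ, accel, dt, q=1e-3, r=1e-2):
    """Sketch of camera/accelerometer fusion with a linear Kalman filter.
    State is [position, velocity]; the acceleration samples drive the
    prediction, and the noisier displacement measurements correct the drift."""
    A = np.array([[1.0, dt], [0.0, 1.0]])              # constant-velocity model
    B = np.array([0.5 * dt * dt, dt])                  # acceleration input
    Hm = np.array([[1.0, 0.0]])                        # we measure position only
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                      [dt**3 / 2, dt**2]])             # process noise
    R = np.array([[r]])                                # measurement noise
    x = np.array([displ[0], 0.0])
    P = np.eye(2)
    fused = []
    for z, a in zip(displ, accel):
        # predict with the acceleration sample
        x = A @ x + B * a
        P = A @ P @ A.T + Q
        # correct with the displacement sample
        S = Hm @ P @ Hm.T + R
        K = P @ Hm.T @ np.linalg.inv(S)
        x = x + K @ (np.atleast_1d(z) - Hm @ x)
        P = (np.eye(2) - K @ Hm) @ P
        fused.append(x[0])
    return np.array(fused)
```

The fused estimate inherits the low-noise high-frequency content of the accelerometer while the camera measurement prevents the drift that pure double integration of acceleration would cause.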
With considerable accuracy and resolution in the time and frequency domains, the potential of vision-based measurement for vibration monitoring is investigated. The proposed vision-based system is applied to a cable-stayed footbridge for deck deformation and cable vibration measurement under pedestrian loading. Analysis results indicate that the measured data enables accurate estimation of modal frequencies and could be used to investigate variations of modal frequencies under varying pedestrian loads. The vision-based system in this application is used for multi-point vibration measurement and provides results comparable to those obtained using an array of accelerometers.
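The first step of the modal-frequency estimation mentioned above is typically spectral peak picking on the measured response. A minimal sketch (the thesis' actual identification procedure is more elaborate):

```python
import numpy as np

def dominant_frequency(x, fs):
    """Return the frequency (Hz) of the largest spectral peak of signal x
    sampled at fs Hz, a first step toward modal-frequency estimation."""
    x = np.asarray(x, dtype=float)
    spectrum = np.abs(np.fft.rfft(x - x.mean()))   # remove DC before the FFT
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return freqs[np.argmax(spectrum)]
```

The frequency resolution is `fs / len(x)`, which is why long records are needed to resolve closely spaced modes.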
Image processing for plastic surgery planning
This thesis presents image processing tools for plastic surgery planning. In particular, it presents a novel method that combines local and global context in a probabilistic relaxation framework to identify cephalometric landmarks used in maxillofacial plastic surgery. It also presents a method that utilises global and local symmetry to identify abnormalities in frontal CT images of the human body. The proposed methodologies are evaluated on several clinical datasets supplied by collaborating plastic surgeons.
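The symmetry-based abnormality detection is not detailed in this abstract, but its core intuition, that the healthy body is approximately bilaterally symmetric, can be sketched as a mirror-difference map (a deliberately minimal stand-in for the thesis' global/local symmetry method):

```python
import numpy as np

def asymmetry_map(img):
    """Compare an image with its left-right mirror; large values flag
    regions that break the expected bilateral symmetry of the body.
    Assumes the symmetry axis is the vertical centre line of the image."""
    img = np.asarray(img, dtype=float)
    return np.abs(img - img[:, ::-1])
```

A practical system would first align the symmetry axis and then combine such global comparisons with local symmetry cues, as the thesis describes.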
Computer Vision Sensing Systems for Structural Health Monitoring in Challenging Field Conditions
Computer vision sensing techniques enable easy-to-install, remote, non-contact monitoring of structures and have great potential in field applications. This study develops and implements novel computer vision techniques for two sensing systems monitoring different aspects of infrastructure in challenging field conditions. The dissertation is therefore composed of two parts: robust measurement of global multi-point structural displacements, and accurate and robust monitoring of local surface displacements/strains.
Computer vision based displacement measurement has become popular over the last decade. The first part presents InnoVision, a vision sensing system developed to address a number of challenging problems associated with applying vision sensors to multi-point structural displacement measurement in field conditions, problems that are rarely studied comprehensively in the literature. These include tracking low-contrast natural targets on the structural surface, insufficient resolution for long-distance measurement, inevitable camera vibration, and image distortion due to heat haze in hot weather. Several techniques are developed in InnoVision to tackle these challenges, and laboratory and field tests are conducted to evaluate their performance.
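A common building block of vision-based displacement tracking is template matching by normalised cross-correlation: a reference patch around the target is located in each new frame, and the shift of the best match is the displacement. A minimal integer-pixel sketch (real systems add subpixel refinement; the specific techniques in InnoVision are not reproduced here):

```python
import numpy as np

def track_target(ref, cur, top, left, h, w, search=5):
    """Locate the h x w reference patch at (top, left) in the current frame
    by exhaustive zero-normalised cross-correlation over a small search
    window, returning the integer-pixel displacement (dy, dx)."""
    tmpl = ref[top:top + h, left:left + w].astype(float)
    tmpl = tmpl - tmpl.mean()
    best_score, best = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = cur[top + dy:top + dy + h, left + dx:left + dx + w].astype(float)
            win = win - win.mean()
            denom = np.sqrt((tmpl ** 2).sum() * (win ** 2).sum()) + 1e-12
            score = (tmpl * win).sum() / denom
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```

The zero-mean normalisation makes the score invariant to uniform brightness changes, which matters for the lighting variations noted in field testing.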
In the second part, another vision sensing system, SurfaceVision, is developed for accurate and robust monitoring of two-dimensional (2D) structural surface displacements/strains. Important structures, such as nuclear power plants, require continuous inspection of surface conditions. As an alternative to human inspection, conventional digital-image-correlation (DIC) based methods have been applied to surfaces painted with speckle patterns in controlled environments. However, it is highly challenging for DIC methods to accurately measure displacement on natural concrete surfaces in outdoor conditions with changing illumination and weather. Additionally, common surface displacement measurement segments the surface image into small subsets and tracks each subset individually through template matching; the surface displacement thus obtained has obvious discontinuities and low spatial resolution. Therefore, for applicability in the outdoor environment, SurfaceVision is proposed for accurate and robust monitoring of surface displacements/strains. Advanced computer vision techniques are developed and implemented to enable surface displacement measurement with high continuity, spatial resolution, accuracy, and robustness. An intuitive strain calculation method is also developed for converting surface displacements into surface strains. A numerical simulation based on four-point bending tests is formulated to validate the accuracy and robustness of SurfaceVision in measuring surface displacements. Four-point bending experiments using reinforced concrete specimens are conducted to demonstrate the performance of SurfaceVision under different optical noise conditions and its effectiveness in predicting crack formation.
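Converting a dense displacement field into strains, the last step described above, amounts to differentiating the displacement field on the measurement grid. A minimal small-strain sketch (SurfaceVision's own strain method is not specified in this abstract):

```python
import numpy as np

def strain_fields(u, v, spacing=1.0):
    """Small-strain tensor components from dense displacement fields
    u (x-displacement) and v (y-displacement) on a regular grid with the
    given pixel spacing."""
    exx = np.gradient(u, spacing, axis=1)          # normal strain du/dx
    eyy = np.gradient(v, spacing, axis=0)          # normal strain dv/dy
    exy = 0.5 * (np.gradient(u, spacing, axis=0) +
                 np.gradient(v, spacing, axis=1))  # shear strain
    return exx, eyy, exy
```

Because differentiation amplifies noise, this is exactly where the continuity and spatial resolution of the displacement field, emphasised above, pay off; noisy per-subset displacements would produce unusable strain maps.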
Guided Stereo Matching
Stereo is a prominent technique to infer dense depth maps from images, and deep learning has further pushed forward the state of the art, making end-to-end architectures unrivaled when enough data is available for training. However, deep networks suffer from significant drops in accuracy when dealing with new environments. Therefore, in this paper, we introduce Guided Stereo Matching, a novel paradigm leveraging a small amount of sparse yet reliable depth measurements retrieved from an external source to ameliorate this weakness. The additional sparse cues required by our method can be obtained with any strategy (e.g., a LiDAR) and are used to enhance features linked to the corresponding disparity hypotheses. Our formulation is general and fully differentiable, enabling the exploitation of the additional sparse inputs both in pre-trained deep stereo networks and when training a new instance from scratch. Extensive experiments on three standard datasets and two state-of-the-art deep architectures show that, even with a small set of sparse input cues, i) the proposed paradigm enables significant improvements to pre-trained networks; ii) training from scratch notably increases accuracy and robustness to domain shifts; and iii) it is suited and effective even with traditional stereo algorithms such as SGM.

Comment: CVPR 2019
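The feature enhancement described above peaks the network's disparity features around each sparse hint. A simplified sketch of this modulation on a per-pixel feature/cost volume (the constants `k` and `c` and the function name are our notation, not necessarily the paper's):

```python
import numpy as np

def guide_features(volume, hints, valid, k=10.0, c=1.0):
    """Multiply a feature/cost volume of shape (H, W, D) by a Gaussian
    centred on the hinted disparity wherever a sparse depth hint is
    available (valid mask), leaving unhinted pixels untouched."""
    d = np.arange(volume.shape[-1], dtype=float)
    gauss = 1.0 + k * np.exp(-(d - hints[..., None]) ** 2 / (2.0 * c ** 2))
    gauss = np.where(valid[..., None], gauss, 1.0)  # no hint -> factor 1
    return volume * gauss
```

Because the modulation is a differentiable elementwise product, it can be applied both at inference on a pre-trained network and during training from scratch, which is the generality the abstract claims.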
Active skeleton for bacteria modeling
The investigation of spatio-temporal dynamics of bacterial cells and their molecular components requires automated image analysis tools to track cell shape properties and molecular component locations inside the cells. In the study of bacterial aging, the molecular components of interest are protein aggregates that accumulate near bacteria boundaries. This particular location makes the correspondence between aggregates and cells very ambiguous, since accurately computing bacteria boundaries in phase-contrast time-lapse imaging is a challenging task. This paper proposes an active skeleton formulation for bacteria modeling which provides several advantages: easy computation of shape properties (perimeter, length, thickness, orientation), improved boundary accuracy in noisy images, and a natural bacteria-centered coordinate system that permits the intrinsic location of molecular components inside the cell. Starting from an initial skeleton estimate, the medial axis of the bacterium is obtained by minimizing an energy function which incorporates bacteria shape constraints. Experimental results on biological images and a comparative evaluation of performance validate the proposed approach for modeling cigar-shaped bacteria such as Escherichia coli. The ImageJ plugin of the proposed method can be found online at http://fluobactracker.inrialpes.fr.

Comment: Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization, to appear
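The energy-minimization step of such an active skeleton typically combines an internal smoothness term with an image term. A sketch of the internal term alone, iteratively relaxing a polyline skeleton toward low bending energy (the paper's full energy also includes image forces and bacteria shape constraints, omitted here):

```python
import numpy as np

def relax_skeleton(pts, alpha=0.3, iters=100):
    """Internal-energy step of an active-skeleton iteration: each interior
    point of the (N, 2) polyline moves toward the midpoint of its two
    neighbours, smoothing the medial axis. Endpoints stay fixed."""
    pts = np.asarray(pts, dtype=float).copy()
    for _ in range(iters):
        mid = 0.5 * (pts[:-2] + pts[2:])
        pts[1:-1] += alpha * (mid - pts[1:-1])
    return pts

def bending_energy(pts):
    """Sum of squared second differences along the polyline."""
    d2 = pts[:-2] - 2 * pts[1:-1] + pts[2:]
    return (d2 ** 2).sum()
```

In a complete implementation, an image force pulling the skeleton toward the intensity ridge of the bacterium would be added to each update so the smoothed medial axis still follows the cell body.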
Object-based 2D-to-3D video conversion for effective stereoscopic content generation in 3D-TV applications
Three-dimensional television (3D-TV) has gained increasing popularity in the broadcasting domain, as it enables enhanced viewing experiences compared with conventional two-dimensional (2D) TV. However, its application has been constrained by the lack of essential content, i.e., stereoscopic videos. To alleviate this content shortage, an economical and practical solution is to reuse the huge media resources available in monoscopic 2D and convert them to stereoscopic 3D. Although stereoscopic video can be generated from monoscopic sequences using depth measurements extracted from cues such as focus blur, motion and size, the quality of the resulting video may be poor, as such measurements are usually arbitrarily defined and appear inconsistent with the real scenes. To help solve this problem, a novel method for object-based stereoscopic video generation is proposed which features i) optical-flow based occlusion reasoning to determine depth order, ii) object segmentation using improved region-growing from masks of determined depth layers, and iii) a hybrid depth estimation scheme using content-based matching (inside a small library of true stereo image pairs) and depth-order based regularization. Comprehensive experiments have validated the effectiveness of the proposed 2D-to-3D conversion method in generating stereoscopic videos with consistent depth measurements for 3D-TV applications.
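Once a per-pixel depth (or nearness) map is available, the second view of the stereo pair is synthesized by shifting pixels in proportion to their nearness, drawing far layers first so near objects correctly occlude them. A minimal depth-image-based rendering sketch (the paper's actual renderer and its hole-filling are not specified in this abstract; disoccluded pixels are simply left as NaN here):

```python
import numpy as np

def render_right_view(left, nearness, max_disp=8):
    """Synthesize a right-eye view from a left image (H, W) and a nearness
    map in [0, 1] (1 = closest). Each pixel is shifted left by a disparity
    proportional to its nearness; far layers are painted first so nearer
    pixels overwrite them (simple occlusion handling)."""
    h, w = left.shape
    right = np.full((h, w), np.nan)
    disp = np.round(max_disp * nearness).astype(int)
    for level in range(disp.max() + 1):        # far (small shift) to near
        ys, xs = np.where(disp == level)
        tx = xs - level
        ok = tx >= 0                           # drop pixels shifted off-frame
        right[ys[ok], tx[ok]] = left[ys[ok], xs[ok]]
    return right
```

The NaN gaps are the disocclusions that make arbitrary or inconsistent depth estimates so visible in the final video, which is why the method above invests in consistent, object-based depth ordering.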