343 research outputs found

    Laser beams-based localization methods for Boom-type roadheader using underground camera non-uniform blur model

    The efficiency of automatic underground tunneling depends significantly on the localization accuracy and reliability of the boom-type roadheader. Compared with other underground equipment positioning methods, vision-based measurement has gained attention for its advantages of non-contact operation and freedom from accumulated error. However, the harsh underground environment, especially the geometric errors that machine-body vibration introduces into the underground camera model, affects the accuracy and stability of vision-based underground localization. This paper presents a laser-beam-based localization method for the machine body of a boom-type roadheader that can cope with dense dust, low illumination, and stray-light interference. Taking mining vibration into consideration, an underground camera non-uniform blur model that incorporates the refraction effect of two-layer glass was established to eliminate vibration errors. The blur model explicitly reveals how the imaging optical path changes under the influence of vibration and the double-layer explosion-proof glass. On this basis, underground laser-beam extraction and positioning with good environmental adaptability are presented, and an improved 2P3L (two-points-three-lines) localization model based on line correspondences is developed. Experiments were designed to verify the performance of the proposed method, and the deblurring algorithm was investigated and evaluated. The results show that the proposed method effectively restores laser-beam images blurred by vibration and can meet the precision requirements of roadheader body localization for roadway construction in coal mines.
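    The non-uniform blur idea above can be illustrated with a toy spatially varying blur model. The sketch below assumes a coarse patch grid with one linear motion kernel per patch to mimic vibration-induced blur that changes across the frame; the paper's actual camera model, including the double-layer glass refraction, is considerably richer.

        # Toy sketch of a non-uniform (spatially varying) blur model: each patch of
        # the image is convolved with its own linear motion kernel. Grid size and
        # kernel parametrization are illustrative assumptions, not the paper's model.
        import numpy as np
        from scipy.ndimage import convolve

        def motion_kernel(length, angle_deg, size=15):
            """Linear motion-blur kernel of a given length and orientation."""
            k = np.zeros((size, size))
            c, a = size // 2, np.deg2rad(angle_deg)
            for t in np.linspace(-length / 2, length / 2, 4 * size):
                x, y = int(round(c + t * np.cos(a))), int(round(c + t * np.sin(a)))
                if 0 <= x < size and 0 <= y < size:
                    k[y, x] = 1.0
            return k / k.sum()

        def nonuniform_blur(img, kernels, grid=(2, 2)):
            """Apply a different kernel per grid cell (image dims assumed divisible)."""
            out = np.zeros_like(img, dtype=float)
            gh, gw = img.shape[0] // grid[0], img.shape[1] // grid[1]
            for i in range(grid[0]):
                for j in range(grid[1]):
                    sl = np.s_[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
                    out[sl] = convolve(img.astype(float), kernels[i * grid[1] + j])[sl]
            return out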

    A MAP-Estimation Framework for Blind Deblurring Using High-Level Edge Priors

    In this paper we propose a general MAP-estimation framework for blind image deconvolution that allows the incorporation of powerful priors for predicting the edges of the latent image, which is known to be a crucial factor for the success of blind deblurring. This is achieved in a principled, robust and unified manner through the use of a global energy function that can take into account multiple constraints. Based on this framework, we show how to successfully make use of a particular prior of this type that is quite strong and also applicable to a wide variety of cases. It relates to the strong structural regularity exhibited by many scenes, which affects the location and distribution of the corresponding image edges. We validate the excellent performance of our approach through an extensive set of experimental results and comparisons to the state of the art.
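    In LaTeX notation, a hedged sketch of the kind of global MAP energy the abstract describes, with a data-fidelity term, a sparse gradient prior, a high-level edge prior, and a kernel regularizer (the weights and the form of the edge term are illustrative, not taken from the paper):

        E(x,k) = \|k \otimes x - y\|_2^2
               + \lambda \|\nabla x\|_p^p
               + \gamma\, \Phi_{\mathrm{edge}}(\nabla x)
               + \beta \|k\|_2^2,
        \qquad (\hat{x},\hat{k}) = \arg\min_{x,\,k} E(x,k)

    Here y is the blurry input, x the latent image, and k the blur kernel; MAP estimation alternates between updating x and k until the energy converges.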

    Interactive removal and ground truth for difficult shadow scenes

    A user-centric method for fast, interactive, robust, and high-quality shadow removal is presented. Our algorithm can perform detection and removal in a range of difficult cases, such as highly textured and colored shadows. To perform detection, an on-the-fly learning approach is adopted, guided by two rough user inputs for the pixels of the shadow and the lit area. After detection, shadow removal is performed by registering the penumbra to a normalized frame, which allows efficient estimation of nonuniform shadow illumination changes, resulting in accurate and robust removal. Another major contribution of this work is the first validated and multiscene-category ground truth for shadow removal algorithms. This data set, containing 186 images, eliminates inconsistencies between shadow and shadow-free images and provides a range of different shadow types such as soft, textured, colored, and broken shadows. Using this data, the most thorough comparison of state-of-the-art shadow removal methods to date is performed, showing our proposed algorithm to outperform the state of the art across several measures and shadow categories. To complement our data set, an online shadow removal benchmark website is also presented to encourage future open comparisons in this challenging field of research.
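    As a rough illustration of the on-the-fly detection step, the sketch below trains a quick classifier on the pixels under the two user strokes and labels the rest of the image. The KNN-on-RGB learner and the boolean stroke masks are stand-in assumptions, not the paper's actual detector.

        # Hypothetical sketch: user-guided shadow detection from two rough strokes.
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def detect_shadow(img, shadow_stroke, lit_stroke, k=15):
            """img: HxWx3 floats; strokes: HxW boolean masks from user scribbles."""
            X = np.vstack([img[shadow_stroke], img[lit_stroke]])
            y = np.hstack([np.ones(shadow_stroke.sum()), np.zeros(lit_stroke.sum())])
            clf = KNeighborsClassifier(n_neighbors=k).fit(X, y)
            return clf.predict(img.reshape(-1, 3)).reshape(img.shape[:2]).astype(bool)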

    Motion blur in digital images - analysis, detection and correction of motion blur in photogrammetry

    Unmanned aerial vehicles (UAVs) have become an interesting and active research topic in photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution owing to the low flight altitude combined with a high-resolution camera. UAV image flights are also cost-effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as by strong winds, turbulence or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently done manually, which is both time-consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated process is necessary, one that is both reliable and quick. This thesis demonstrates the negative effect that blurred images have on photogrammetric processing. It shows that even small amounts of blur have a serious impact on target detection and slow down processing because human intervention is required. Larger blur can make an image completely unusable, and such images need to be excluded from processing. To exclude blurred images from large image datasets, an algorithm was developed. The newly developed method makes it possible to detect blur caused by linear camera displacement. It is based on how humans detect blur: people judge best whether an image is blurred by comparing it with other images. The developed algorithm simulates this procedure by creating a comparison image through image processing; creating the comparison image internally makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard-deviation), does not on its own provide an absolute number for judging whether an image is blurred. To achieve a reliable judgement of image sharpness, the SIEDS value has to be compared with other SIEDS values from the same dataset. This algorithm enables the exclusion of blurred images and subsequently allows photogrammetric processing without them. However, it is also possible to use deblurring techniques to restore blurred images. Deblurring of images is a widely researched topic, often based on Wiener or Richardson-Lucy deconvolution, which requires precise knowledge of both the blur path and its extent. Even with knowledge of the blur kernel, the correction causes errors such as ringing, and the deblurred image appears muddy and not completely sharp. In the study reported here, overlapping images are used to support the deblurring process, and an algorithm based on the Fourier transform is presented. This works well in flat areas, but the need for geometrically correct sharp images for deblurring may limit its application. Another method to enhance the image is unsharp masking, which improves images significantly and makes photogrammetric processing more successful. However, deblurring of images needs to focus on geometrically correct deblurring to ensure geometrically correct measurements.
Furthermore, a novel edge-shifting approach was developed which aims to perform geometrically correct deblurring. The idea of edge shifting appears promising but requires more advanced programming.
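    Reading the SIEDS measure described above literally, a minimal sketch might look like the following: build the comparison image internally by re-blurring the saturation channel, then take the standard deviation of the difference between the two edge images. The filter types and sizes are assumptions, not the thesis's exact recipe.

        # Sketch of a SIEDS-style blur indicator (saturation image edge difference
        # standard-deviation); higher values suggest a sharper image.
        import numpy as np
        import matplotlib.colors as mcolors
        from scipy.ndimage import gaussian_filter, sobel

        def sieds(rgb):
            """rgb: HxWx3 float array in [0, 1]."""
            sat = mcolors.rgb_to_hsv(rgb)[..., 1]            # saturation channel
            edges = np.hypot(sobel(sat, 0), sobel(sat, 1))   # edge magnitude
            comparison = gaussian_filter(sat, sigma=2.0)     # internally created image
            edges_cmp = np.hypot(sobel(comparison, 0), sobel(comparison, 1))
            return float(np.std(edges - edges_cmp))

    As the thesis stresses, such a value is only meaningful relative to the SIEDS values of other images in the same dataset, not as an absolute threshold.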

    A Study on New Models, Algorithms, and Analysis for Dynamic Scene Deblurring

    Ph.D. dissertation, Seoul National University, Department of Electrical and Computer Engineering, August 2016 (advisor: Kyoung Mu Lee). Blurring artifacts are the most common flaws in photographs. To remove these artifacts, many deblurring methods that restore sharp images from blurry ones have been studied in the field of computational photography. However, state-of-the-art deblurring methods rest on the strong assumption that the captured scene is static, so a great deal remains to be done. In particular, these conventional methods fail on blurry images captured in dynamic environments, which exhibit spatially varying blurs caused by sources such as camera shake including out-of-plane motion, moving objects, and depth variation. The deblurring problem is therefore far more difficult and challenging for dynamic scenes. This dissertation addresses the deblurring problem of general dynamic scenes and introduces new solutions that remove spatially varying blurs, unlike conventional methods built on the static-scene assumption. Three kinds of dynamic scene deblurring methods are proposed, based on (1) segmentation, (2) sharp exemplars, and (3) kernel-parametrization. The proposed approaches progress from segment-wise to pixel-wise, handling general pixel-wise varying blurs in the end. First, the segmentation-based deblurring method estimates the latent image, multiple different kernels, and the associated segments jointly. With the aid of this joint approach, the segmentation-based method achieves an accurate blur kernel within each segment, removes segment-wise varying blurs, and reduces the artifacts at motion boundaries that are common in conventional approaches. Next, an exemplar-based deblurring method is proposed, which utilizes a sharp exemplar to estimate a highly accurate blur kernel and overcomes the limitation of the segmentation-based method, namely that it cannot handle small or texture-less segments. Lastly, the deblurring method using kernel-parametrization approximates the locally varying kernel as linear using motion flows; it is thus generally applicable to removing pixel-wise varying blurs and estimates the latent image and motion flow at the same time.
    With the proposed methods, significantly improved deblurring quality is achieved, and intensive experimental evaluations demonstrate their superiority in dynamic scene deblurring, where state-of-the-art methods fail. Contents: Chapter 1, Introduction; Chapter 2, Image Deblurring with Segmentation; Chapter 3, Image Deblurring with Exemplar; Chapter 4, Image Deblurring with Kernel-Parametrization; Chapter 5, Video Deblurring with Kernel-Parametrization; Chapter 6, Conclusion.
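    The kernel-parametrization idea above admits a compact sketch: if the local kernel at each pixel is a line segment along that pixel's motion flow, the blurred image is the scene averaged along the motion path. The bilinear sampling and the number of taps below are implementation assumptions, not the dissertation's formulation.

        # Sketch of linear kernel-parametrization: blur as averaging along per-pixel
        # motion flow over the exposure.
        import numpy as np
        from scipy.ndimage import map_coordinates

        def blur_from_flow(img, flow, taps=15):
            """img: HxW array; flow: HxWx2 per-pixel motion (dy, dx)."""
            yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
            acc = np.zeros_like(img, dtype=float)
            for s in np.linspace(-0.5, 0.5, taps):           # sample the linear path
                coords = [yy + s * flow[..., 0], xx + s * flow[..., 1]]
                acc += map_coordinates(img.astype(float), coords, order=1, mode='nearest')
            return acc / taps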

    Non-contact vision-based deformation monitoring on bridge structures

    Information on deformation is an important metric for bridge condition and performance assessment, e.g. identifying abnormal events, calibrating bridge models and estimating load-carrying capacities. However, accurate measurement of bridge deformation, especially for long-span bridges, remains a challenging task. The major aim of this research is to develop practical and cost-effective techniques for accurate deformation monitoring of bridge structures. Vision-based systems are taken as the study focus for several reasons: low cost, easy installation, desirable sample rates, and remote and distributed sensing. This research proposes a custom-developed vision-based system for bridge deformation monitoring. The system supports either consumer-grade or professional cameras and incorporates four advanced video tracking methods to adapt to different test situations. The sensing accuracy is first quantified in laboratory conditions. The working performance in field testing is evaluated on one short-span and one long-span bridge, considering several influential factors, i.e. long-range sensing, low-contrast target patterns, pattern changes and lighting changes. Through case studies, some suggestions about tracking method selection are summarised for field testing, and possible limitations of vision-based systems are illustrated. To overcome the observed limitations of vision-based systems, this research further proposes a mixed system combining cameras with accelerometers for accurate deformation measurement. To integrate displacement with acceleration data autonomously, a novel data fusion method based on the Kalman filter and maximum likelihood estimation is proposed. Field test validation shows the method is effective in improving displacement accuracy and widening the frequency bandwidth. The mixed system based on data fusion is implemented in field testing of a railway bridge under undesirable test conditions (e.g. low-contrast target patterns and camera shake). Analysis results indicate that the system offers higher accuracy than a camera alone and is viable for bridge influence line estimation. With considerable accuracy and resolution in the time and frequency domains, the potential of vision-based measurement for vibration monitoring is investigated. The proposed vision-based system is applied to a cable-stayed footbridge for deck deformation and cable vibration measurement under pedestrian loading. Analysis results indicate that the measured data enable accurate estimation of modal frequencies and could be used to investigate variations of modal frequencies under varying pedestrian loads. In this application the vision-based system is used for multi-point vibration measurement and provides results comparable to those obtained using an array of accelerometers.
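    The displacement-acceleration fusion described above can be sketched with a two-state Kalman filter in which accelerometer samples drive the prediction and the lower-rate vision displacement corrects it. The noise magnitudes q and r below are placeholders; the thesis tunes such quantities via maximum likelihood estimation.

        # Minimal Kalman-filter sketch fusing vision displacement with acceleration.
        import numpy as np

        def fuse(disp, acc, dt, q=1e-4, r=1e-2):
            """disp: displacement samples (NaN where no vision sample); acc: accel at the same rate."""
            F = np.array([[1.0, dt], [0.0, 1.0]])   # [displacement, velocity] transition
            B = np.array([0.5 * dt**2, dt])         # acceleration enters as control input
            H = np.array([[1.0, 0.0]])              # only displacement is observed
            Q, R = q * np.outer(B, B), np.array([[r]])
            x, P, out = np.zeros(2), np.eye(2), np.empty(len(acc))
            for i, (a, d) in enumerate(zip(acc, disp)):
                x, P = F @ x + B * a, F @ P @ F.T + Q          # predict
                if not np.isnan(d):                            # correct with vision sample
                    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
                    x = x + K @ (np.array([d]) - H @ x)
                    P = (np.eye(2) - K @ H) @ P
                out[i] = x[0]
            return out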

    On Deep Image Deblurring: The Blur Factorization Approach

    This thesis investigated whether the single-image deblurring problem can be factorized into the subproblems of camera shake and object motion blur removal for enhanced performance. Two deep learning-based deblurring methods were introduced to answer this question, both following a variation of the proposed blur factorization strategy. Furthermore, a novel pipeline was developed for generating synthetic blurry images, as no existing datasets or data generation methods could meet the requirements of the suggested deblurring models. The proposed data generation pipeline allows three blurry versions of a single ground truth image to be generated: one with both blur types, another with camera shake blur alone, and a third with only object motion blur. The pipeline, based on mathematical models of real-world blur formation, was used to generate a dataset of 2850 triplets of blurry images, which was further divided into a training set of 2500 and a test set of 350 triplets, plus the sharp ground truth images. These datasets were used to train and test both proposed methods, which achieved satisfactory performance. Two variations of the first method, based on strict factorization into subproblems, were tested; they differed in the order in which the blur types were removed. The pipeline that removed object motion blur first proved superior to the pipeline with the reverse processing order, but both variations were still far inferior to the control test, in which both blurs were removed simultaneously. The second method, based on joint training of two sub-models, achieved more promising test results. Two of the four variations tested outperformed the corresponding control test model, albeit by relatively small margins. The variations differed in the processing order and in the weighting of the loss functions between the sub-models. Both variations that outperformed the control test model were trained to remove object motion blur first, although the loss function weights were set so that the pipelines' main focus was on the final sharp image. The performance improvements demonstrate that the proposed blur factorization strategy had a positive impact on deblurring results. Still, even the second method can be deemed only partly successful, because a greater performance improvement was gained with an alternative strategy that results in a model with the same number of parameters as the proposed approach.
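    A hedged sketch of the joint-training variant that worked best: two sub-models in sequence, the first targeting object motion blur and the second camera shake, trained with a weighted sum of an intermediate loss and a final loss, with the main weight on the final sharp image. The tiny networks, weights, and L1 loss are placeholders, not the thesis architecture.

        # Illustrative joint training of a two-stage blur-factorization pipeline.
        import torch.nn as nn

        object_net = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                   nn.Conv2d(32, 3, 3, padding=1))
        shake_net = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 3, 3, padding=1))
        loss_fn = nn.L1Loss()
        w_mid, w_final = 0.3, 0.7   # main focus on the final sharp image

        def joint_loss(blurry, shake_only_gt, sharp_gt):
            mid = object_net(blurry)      # stage 1: remove object motion blur
            final = shake_net(mid)        # stage 2: remove camera shake
            return w_mid * loss_fn(mid, shake_only_gt) + w_final * loss_fn(final, sharp_gt)

    The triplet dataset described above supplies exactly the targets this loss needs: the camera-shake-only image as the intermediate target and the sharp image as the final one.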

    Motion blur removal from photographs

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Includes bibliographical references (p. 135-143). One of the long-standing challenges in photography is motion blur. Blur artifacts are generated by relative motion between a camera and a scene during exposure. While blur can be reduced by using a shorter exposure, this comes at an unavoidable trade-off with increased noise. Therefore, it is desirable to remove blur computationally. To remove blur, we need to (i) estimate how the image is blurred (i.e. the blur kernel, or point-spread function) and (ii) restore a natural-looking image through deconvolution. Blur kernel estimation is challenging because the algorithm needs to distinguish the correct image-blur pair from incorrect ones that can also adequately explain the blurred image. Deconvolution is also difficult because the algorithm needs to restore high-frequency image content attenuated by blur. In this dissertation, we address several aspects of these challenges. We introduce the insight that a blur kernel can be estimated by analyzing edges in a blurred photograph: edge profiles in a blurred image encode projections of the blur kernel, from which we can recover the blur using the inverse Radon transform. This method is computationally attractive and well suited to images with many edges. Blurred edge profiles can also serve as additional cues for existing kernel estimation algorithms, and we introduce a method to integrate this information into a maximum-a-posteriori kernel estimation framework and show its benefits. Deconvolution algorithms restore information attenuated by blur using an image prior that exploits the heavy-tailed gradient profile of natural images. We show, however, that such a sparse prior does not accurately model textures, thereby degrading texture renditions in restored images. To address this issue, we introduce a content-aware image prior that adapts its characteristics to local textures and improves the quality of textures in restored images. Sometimes even the content-aware image prior may be insufficient for restoring rich textures. This issue can be addressed by matching the restored image's gradient distribution to the original image's gradient distribution, estimated directly from the blurred image; this new deconvolution technique, called iterative distribution reweighting (IDR), improves the visual realism of reconstructed images. Subject motion can also cause blur, and removing subject motion blur is especially challenging because the blur is often spatially variant. In this dissertation, we address a restricted class of subject motion blur, in which the subject moves locally at a constant velocity, and we design a new computational camera that improves local motion estimation while reducing the image information loss due to blur. by Taeg Sang Cho.
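    The Radon-transform insight admits a compact sketch: differentiating a blurred step-edge profile yields a 1-D projection of the blur kernel along that edge's orientation, and projections gathered from edges at several angles can be inverted with the inverse Radon transform. Extracting clean, equal-length edge profiles from a real photograph is the hard part and is assumed done here.

        # Sketch: recover a blur kernel from blurred edge profiles via inverse Radon.
        import numpy as np
        from skimage.transform import iradon

        def kernel_from_edge_profiles(profiles, angles_deg):
            """profiles: list of equal-length 1-D blurred step-edge profiles."""
            # The derivative of a blurred step edge is the kernel's projection.
            proj = np.stack([np.gradient(p) for p in profiles], axis=1)  # (detector, angle)
            return iradon(proj, theta=angles_deg, filter_name='ramp')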