    RESTORATION OF 2-D BAR CODES IN CAMERA-CAPTURED IMAGES USING THE WAVELET METHOD

    Get PDF
    Abstract: Two-dimensional (2-D) bar codes captured with a camera are often out of focus, yielding blurred images (blur noise). Because 2-D bar code images have a distinctive structure, a dedicated deblurring step is required. This paper proposes a wavelet-based method that is robust for restoration and designed specifically for 2-D bar code images. After analyzing the bar code image, the standard deviation of the Gaussian blur kernel is estimated; the image is then restored with a wavelet filter. Experiments yielded an average PSNR of 30.14 for a standard deviation of 10. The wavelet method used for deblurring 2-D bar code images thus produces good-quality results. Keywords: two-dimensional bar codes, deblurring, restoration, wavelet
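The average PSNR of 30.14 quoted above is the standard peak signal-to-noise ratio; a minimal NumPy sketch of the metric (the function name is ours, assuming 8-bit images):

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a restored image."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(restored, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

A uniform error of 10 gray levels on an 8-bit image, for example, comes out to roughly 28.1 dB.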

    Fast restoration for out-of-focus blurred images of QR code with edge prior information via image sensing.

    Get PDF
    Out-of-focus blurring of QR codes is very common in mobile Internet systems; it often causes authentication failures through misread information and thus adversely affects system operation. To tackle this difficulty, this work first introduces an edge prior: the average distance between the center point and the edge of the sharp QR code images in the same batch. The prior is motivated by theoretical analysis and practical observation of CMOS image sensing, optics, blur invariants, and the invariance of the center of the diffuse light spots. Combining the edge prior with the iterative image and the center point of the binarized image, the proposed method accurately estimates the parameter of the out-of-focus blur kernel. The sharp image is then obtained with a Wiener filter, a non-blind image deblurring algorithm, which avoids excessive redundant computation. Experimental results validate that the proposed method has great practical utility in terms of deblurring quality, robustness, and computational efficiency, making it suitable for barcode application systems, e.g., warehouses, logistics, and automated production.
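The non-blind Wiener step described above can be sketched in a few lines once a defocus kernel is assumed; this toy version (ours, not the paper's code) models the out-of-focus PSF as a uniform disk and deconvolves in the Fourier domain:

```python
import numpy as np

def disk_psf(radius, shape):
    """Uniform disk PSF (a common out-of-focus blur model), normalised to sum 1."""
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = ((x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2).astype(np.float64)
    return psf / psf.sum()

def wiener_deblur(blurred, psf, k=1e-3):
    """Non-blind Wiener deconvolution with a constant noise-to-signal ratio k."""
    H = np.fft.fft2(np.fft.ifftshift(psf))      # transfer function of the blur
    B = np.fft.fft2(blurred)
    X = np.conj(H) / (np.abs(H) ** 2 + k) * B   # Wiener filter in Fourier domain
    return np.real(np.fft.ifft2(X))

# Synthetic check: blur a random binary (QR-like) pattern, then restore it.
rng = np.random.default_rng(0)
image = (rng.random((64, 64)) > 0.5).astype(np.float64)
psf = disk_psf(2, image.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deblur(blurred, psf)
```

Estimating the kernel radius is the hard part, which the paper's edge prior addresses; here it is simply assumed known.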

    A Regularization Approach to Blind Deblurring and Denoising of QR Barcodes

    Full text link
    QR bar codes are prototypical images for which part of the image is a priori known (required patterns). Open-source bar code readers, such as ZBar, are readily available. We exploit both these facts to provide and assess purely regularization-based methods for blind deblurring of QR bar codes in the presence of noise. Comment: 14 pages, 19 figures (with a total of 57 subfigures), 1 table; v3: previously missing reference [35] added
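As a flavour of what "purely regularization-based" means (a toy stand-in, not the authors' algorithm), the sketch below runs gradient descent on a smoothed total-variation objective, which favours the piecewise-constant structure of QR codes:

```python
import numpy as np

def tv_denoise(f, lam=0.2, step=0.05, iters=300, eps=1e-2):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum(sqrt(|grad u|^2 + eps))."""
    u = f.astype(np.float64).copy()
    for _ in range(iters):
        # Forward differences (periodic boundary) and the smoothed TV gradient.
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        # Discrete divergence of the normalised gradient field.
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) - lam * div)
    return u
```

Blind deblurring as in the paper also estimates the blur kernel; this shows only the denoising role of the regularizer.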

    Visual and Camera Sensors

    Get PDF
    This book includes 13 papers published in the Special Issue "Visual and Camera Sensors" of the journal Sensors. The goal of this Special Issue was to invite high-quality, state-of-the-art research papers dealing with challenging issues in visual and camera sensors.

    Image Restoration

    Get PDF
    This book represents a sample of recent contributions of researchers around the world in the field of image restoration. The book consists of 15 chapters organized into three main sections (Theory, Applications, Interdisciplinarity). Topics cover different aspects of the theory of image restoration, but the book is also an occasion to highlight new research topics related to the emergence of original imaging devices. These devices give rise to challenging problems in image reconstruction/restoration that open the way to new fundamental scientific questions closely related to the world we interact with.

    Improving Range Estimation of a 3D FLASH LADAR via Blind Deconvolution

    Get PDF
    The purpose of this research effort is to improve and characterize range estimation in a three-dimensional FLASH LAser Detection And Ranging (3D FLASH LADAR) sensor by investigating spatial blurring effects. The myriad of emerging applications for 3D FLASH LADAR, as both a primary and a supplemental sensor, necessitates superior performance, including accurate range estimates. Along with range information, this sensor also provides an imaging or laser vision capability. Consequently, accurate range estimates also greatly aid the image quality of a target or remote scene under interrogation. Unlike previous efforts, this research accounts for pixel coupling by defining the range image mathematical model as a convolution between the system spatial impulse response and the object (target or remote scene) at a particular range slice. Using this model, improved range estimation is possible through object restoration from the data observations. Object estimation is principally performed by deriving a blind-deconvolution Generalized Expectation Maximization (GEM) algorithm, with the range determined from the estimated object by a normalized correlation method. Theoretical derivations and simulation results are verified with experimental data of a bar target taken from a 3D FLASH LADAR system in a laboratory environment. Additionally, among other factors, the range separation estimation variance is a function of two LADAR design parameters (range sampling interval and transmitted pulse width), which can be optimized using the expected range resolution between two point sources. Using both CRB theory and an unbiased estimator, an investigation finds the optimal pulse width for several range sampling scenarios using a range resolution metric.
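The normalized-correlation range step can be illustrated independently of the GEM derivation; this toy 1-D sketch (ours, with an assumed Gaussian pulse shape) picks the range sample where the received waveform best matches the transmitted pulse:

```python
import numpy as np

def estimate_range_bin(waveform, pulse):
    """Return the sample delay maximising normalised correlation with the pulse."""
    n = len(pulse)
    scores = np.empty(len(waveform) - n + 1)
    for k in range(len(scores)):
        w = waveform[k:k + n]
        denom = np.linalg.norm(w) * np.linalg.norm(pulse)
        scores[k] = np.dot(w, pulse) / denom if denom > 0 else 0.0
    return int(np.argmax(scores))

# Synthetic return: a Gaussian pulse embedded at sample 40 plus weak noise.
t = np.arange(-8, 9)
pulse = np.exp(-0.5 * (t / 3.0) ** 2)
rng = np.random.default_rng(1)
waveform = 0.02 * rng.standard_normal(128)
waveform[40:40 + len(pulse)] += pulse
delay = estimate_range_bin(waveform, pulse)
```

Converting the delay to range multiplies it by the range sampling interval; the trade-off between that interval and the pulse width is what the CRB analysis above optimizes.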

    Model-based Optical Flow: Layers, Learning, and Geometry

    Get PDF
    The estimation of motion in video sequences establishes temporal correspondences between pixels and surfaces and allows reasoning about a scene using multiple frames. Despite being a focus of research for over three decades, computing motion, or optical flow, remains challenging due to a number of difficulties, including the treatment of motion discontinuities and occluded regions, and the integration of information from more than two frames. One reason for these issues is that most optical flow algorithms only reason about the motion of pixels on the image plane, while not taking the image formation pipeline or the 3D structure of the world into account. One approach to address this uses layered models, which represent the occlusion structure of a scene and provide an approximation to the geometry. The goal of this dissertation is to show ways to inject additional knowledge about the scene into layered methods, making them more robust, faster, and more accurate. First, this thesis demonstrates the modeling power of layers using the example of motion blur in videos, which is caused by fast motion relative to the exposure time of the camera. Layers segment the scene into regions that move coherently while preserving their occlusion relationships. The motion of each layer therefore directly determines its motion blur. At the same time, the layered model captures complex blur overlap effects at motion discontinuities. Using layers, we can thus formulate a generative model for blurred video sequences, and use this model to simultaneously deblur a video and compute accurate optical flow for highly dynamic scenes containing motion blur. Next, we consider the representation of the motion within layers. Since, in a layered model, important motion discontinuities are captured by the segmentation into layers, the flow within each layer varies smoothly and can be approximated using a low dimensional subspace. 
We show how this subspace can be learned from training data using principal component analysis (PCA), and that flow estimation using this subspace is computationally efficient. The combination of the layered model and the low-dimensional subspace gives the best of both worlds: sharp motion discontinuities from the layers and computational efficiency from the subspace. Lastly, we show how layered methods can be dramatically improved using simple semantics. Instead of treating all layers equally, a semantic segmentation divides the scene into its static parts and moving objects. Static parts of the scene constitute a large majority of what is shown in typical video sequences; yet, in such regions optical flow is fully constrained by the depth structure of the scene and the camera motion. After segmenting out moving objects, we consider only static regions, and explicitly reason about the structure of the scene and the camera motion, yielding much better optical flow estimates. Furthermore, computing the structure of the scene makes it possible to better combine information from multiple frames, resulting in high accuracy even in occluded regions. For moving regions, we compute the flow using a generic optical flow method, and combine it with the flow computed for the static regions to obtain a full optical flow field. By combining layered models of the scene with reasoning about the dynamic behavior of the real, three-dimensional world, the methods presented herein push the envelope of optical flow computation in terms of robustness, speed, and accuracy, giving state-of-the-art results on benchmarks and pointing to important future research directions for the estimation of motion in natural scenes.
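The PCA subspace idea can be demonstrated on toy data (a 1-D stand-in for flattened flow fields; all names and data here are illustrative, not from the dissertation):

```python
import numpy as np

# Toy "training flows": 200 samples built from two smooth basis functions
# (constant + ramp) plus a little noise, each flattened to a 50-vector.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
basis = np.stack([np.ones_like(x), x])
flows = rng.standard_normal((200, 2)) @ basis + 0.01 * rng.standard_normal((200, 50))

# PCA: centre the data, then keep the top-2 right singular vectors.
mean = flows.mean(axis=0)
_, _, vt = np.linalg.svd(flows - mean, full_matrices=False)
subspace = vt[:2]                       # learned low-dimensional flow basis

# A new flow is now represented by 2 coefficients instead of 50 values.
new_flow = 1.5 * basis[0] - 0.7 * basis[1]
code = (new_flow - mean) @ subspace.T
reconstruction = mean + code @ subspace
```

In the layered setting each layer's flow gets such a coefficient vector, so flow estimation within a layer reduces to a small search over subspace coefficients.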

    Computer Vision Approaches to Liquid-Phase Transmission Electron Microscopy

    Get PDF
    Electron microscopy (EM) is a technique that exploits the interaction between electrons and matter to produce high-resolution images down to the atomic level. To avoid undesired scattering in the electron path, EM samples are conventionally imaged in the solid state under vacuum. Recently, this limit has been overcome by liquid-phase electron microscopy (LP EM), a technique that enables the analysis of samples in their native liquid state. LP EM paired with a high-frame-rate direct-detection camera allows tracking the motion of particles in liquids, as well as their temporal dynamic processes. In this research work, LP EM is adopted to image the dynamics of particles undergoing Brownian motion, exploiting their natural rotation to access all particle views and reconstruct their 3D structure via tomographic techniques. Because of the limitations of LP EM, dedicated computer vision tools were designed to process the imaging results; in particular, different deblurring and denoising approaches were adopted to improve image quality. The processed LP EM images were then used to reconstruct 3D models of the imaged samples. This task was performed by developing two methods: Brownian tomography (BT) and Brownian particle analysis (BPA). The former tracks a single particle in time, capturing the evolution of its dynamics. The latter is an extension in time of the single particle analysis (SPA) technique, which is conventionally paired with cryo-EM to reconstruct 3D density maps from thousands of EM images capturing hundreds of particles of the same species frozen on a grid. In contrast, BPA can process image sequences that do not contain thousands of particles, monitoring individual particle views across consecutive frames rather than within a single frame.