
    Performance analysis on color image mosaicing techniques on FPGA

    Today, surveillance and other monitoring systems increasingly capture image sequences that must be combined into a single frame. The captured images can be combined to produce a mosaiced image, but the individual captures may suffer from quality issues such as brightness differences, alignment (correlation) errors, limited resolution, and the need for manual image registration. Existing techniques such as cross correlation can offer good image mosaicing but suffer from brightness inconsistencies. This paper therefore introduces two mosaicing methods on a Field Programmable Gate Array (FPGA): (a) Sliding Window Module (SWM) based Color Image Mosaicing (CIM) and (b) Discrete Cosine Transform (DCT) based CIM. The SWM-based CIM detects corners in the two images and performs automatic image registration, while the DCT-based CIM handles both local and global alignment of the images using a phase correlation approach. Finally, the performance of the two methods is analyzed by comparing parameters such as PSNR, MSE, device utilization, and execution time. The analysis concludes that the DCT-based CIM offers significantly better results than the SWM-based CIM.
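The DCT-based CIM described above relies on phase correlation to align images. As an illustrative sketch (not the paper's FPGA implementation), the standard FFT form of phase correlation recovers an integer translation between two frames; the function name and array sizes here are assumptions:

```python
import numpy as np

def phase_correlation(ref, moved):
    """Estimate the integer (row, col) translation between two equally
    sized grayscale images from the normalized cross-power spectrum,
    whose inverse FFT peaks at the shift."""
    F_ref, F_mov = np.fft.fft2(ref), np.fft.fft2(moved)
    cross = np.conj(F_ref) * F_mov
    cross /= np.abs(cross) + 1e-12          # keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks in the upper half of each axis around to negative shifts.
    return tuple(int(p) if p <= n // 2 else int(p) - n
                 for p, n in zip(peak, corr.shape))
```

Because the cross-power spectrum is normalized to unit magnitude, the estimate is largely insensitive to global brightness differences, which is the weakness of plain cross correlation the abstract mentions.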

    An in Depth Review Paper on Numerous Image Mosaicing Approaches and Techniques

    Image mosaicing is currently one of the most important research subjects in computer vision. Image mosaicing requires the integration of direct techniques and feature-based techniques. Direct techniques are found to be very useful for mosaicing large overlapping regions with small translations and rotations, while feature-based techniques are useful for small overlapping regions. Feature-based image mosaicing is a combination of corner detection, corner matching, motion parameter estimation, and image stitching. Furthermore, image mosaicing can be seen as the process of obtaining a wider field of view of a scene from a sequence of partial views; it has been an attractive research area because of its wide range of applications, including motion detection, resolution enhancement, monitoring of global land usage, and medical imaging. Numerous algorithms for image mosaicing have been proposed over the last two decades. In this paper the authors review the different approaches to image mosaicing and the literature of the past few years on image mosaicing methodologies. This review also provides an in-depth survey of existing image mosaicing algorithms by classifying them into several groups. For each group, the fundamental concepts are first clearly explained. Finally, the paper reviews and discusses the strengths and weaknesses of each group of mosaicing methods.
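The feature-based pipeline summarized above begins with corner detection. A minimal sketch of the classic Harris corner response, one common choice for that first stage (the function and the crude 3x3 box smoothing are illustrative, not taken from any particular surveyed paper):

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response for a grayscale float image: large where
    the local structure tensor has two large eigenvalues (a corner),
    negative along straight edges, near zero in flat regions."""
    Iy, Ix = np.gradient(img)               # image gradients
    def box(a):                             # 3x3 box-filter smoothing
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2
```

Corners found this way would then be matched across images, the motion parameters estimated, and the images stitched, following the four-step decomposition the review describes.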

    Development Of A High Performance Mosaicing And Super-Resolution Algorithm

    In this dissertation, a high-performance mosaicing and super-resolution algorithm is described. The scale-invariant feature transform (SIFT)-based mosaicing algorithm builds an initial mosaic, which is iteratively updated by the robust super-resolution algorithm to achieve the final high-resolution mosaic. Two different types of datasets are used for testing: high-altitude balloon data and unmanned aerial vehicle data. To evaluate the algorithm, five performance metrics are employed: mean square error, peak signal-to-noise ratio, singular value decomposition, slope of the reciprocal singular value curve, and cumulative probability of blur detection. Extensive testing shows that the proposed algorithm is effective in improving the captured aerial data and that the performance metrics accurately quantify this improvement.
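Two of the metrics listed above, mean square error and peak signal-to-noise ratio, are simple to state. A minimal sketch (function names are illustrative; `peak` assumes 8-bit imagery):

```python
import numpy as np

def mse(a, b):
    """Mean square error between two equally sized images."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```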

    Medical Image Analysis using Deep Relational Learning

    In the past ten years, with the help of deep learning, and especially the rapid development of deep neural networks, medical image analysis has made remarkable progress. However, how to effectively use the relational information between various tissues or organs in medical images is still a very challenging problem that has not been fully studied. In this thesis, we propose two novel solutions to this problem based on deep relational learning. First, we propose a context-aware fully convolutional network that effectively models implicit relational information between features to perform medical image segmentation. The network achieves state-of-the-art segmentation results on the Multi Modal Brain Tumor Segmentation 2017 (BraTS2017) and 2018 (BraTS2018) datasets. Subsequently, we propose a new hierarchical homography estimation network that achieves accurate medical image mosaicing by learning the explicit spatial relationship between adjacent frames. We use the UCL Fetoscopy Placenta dataset to conduct experiments, and our hierarchical homography estimation network outperforms other state-of-the-art mosaicing methods while generating robust and meaningful mosaicing results on unseen frames. (arXiv admin note: substantial text overlap with arXiv:2007.0778)
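Mosaicing from homographies between adjacent frames, as described above, requires composing the pairwise estimates into one common reference frame. A small sketch of that chaining step (illustrative only; the thesis's contribution is the learned hierarchical estimator, not this composition):

```python
import numpy as np

def chain_to_reference(pairwise_H):
    """Given 3x3 homographies H_i mapping frame i+1 into frame i,
    return homographies mapping every frame into frame 0."""
    H_to_ref = [np.eye(3)]
    for H in pairwise_H:
        H_acc = H_to_ref[-1] @ H
        H_to_ref.append(H_acc / H_acc[2, 2])  # normalize projective scale
    return H_to_ref
```

Because errors accumulate along the chain, the accuracy of each pairwise estimate matters, which is why robust homography estimation between adjacent frames is central to mosaicing quality.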

    Development of Multi-Strip Image Mosaicking for KOMPSAT-3A Images

    High-resolution satellite imagery has a limitation in terms of coverage area. This limitation presents challenges for extensive-scale analysis at regional or national levels. To maximize the utility of high-resolution satellite imagery, the implementation of image mosaicking techniques is essential. In this paper, we have developed seamline extraction techniques and relative geometric correction optimized for high-resolution satellite imagery. Ultimately, we propose a multi-strip image mosaicking method for KOMPSAT-3A (Korea Multi-Purpose Satellite-3A) images. We applied Dijkstra's shortest-path algorithm to efficiently extract seamlines. We also performed image registration based on feature matching and homography transformation to correct the relative geometric errors between input images. We conducted experiments with our methods using 29 scenes from KOMPSAT-3A L1G data. The results indicated high relative geometric accuracy, with an average error of 1.63 pixels. Furthermore, we were able to obtain high-quality seamless mosaic images. Our proposed method is expected to enhance the utility of KOMPSAT-3A imagery for large-scale environmental and urban analysis and to provide more accurate and comprehensive data.
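Seamline extraction via Dijkstra's algorithm, as used above, can be sketched on a toy cost grid. In practice the cost would come from a difference image over the overlap region; the function name and the downward-only moves here are simplifying assumptions:

```python
import heapq
import numpy as np

def seamline(cost):
    """Minimum-cost top-to-bottom path through a cost grid via Dijkstra:
    a toy version of seamline extraction, where low cost means the two
    overlapping images agree and the seam will be invisible."""
    h, w = cost.shape
    dist = np.full((h, w), float("inf"))
    prev = {}
    pq = [(float(cost[0, c]), (0, c)) for c in range(w)]
    for d, (r, c) in pq:
        dist[r, c] = d
    heapq.heapify(pq)
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist[r, c]:
            continue                       # stale queue entry
        if r == h - 1:                     # reached the bottom row
            path = [(r, c)]
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return path[::-1]
        for dr, dc in ((1, -1), (1, 0), (1, 1)):  # move downward only
            nr, nc = r + dr, c + dc
            if 0 <= nc < w:
                nd = d + float(cost[nr, nc])
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    return []
```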

    Joint Rectification and Stitching of Images Formulated as Camera Pose Estimation Problems

    Ph.D. dissertation, Department of Electrical and Computer Engineering, Graduate School, Seoul National University, August 2015. Advisor: Nam Ik Cho. This dissertation presents a study of image rectification and stitching problems formulated as camera pose estimation problems. There have been many approaches to the rectification and/or stitching of images, owing to their importance in image processing and computer vision. This dissertation adds a new approach to these problems, which finds appropriate optimization problems whose solutions give the camera pose parameters for the given problems. Specifically, the contribution of this dissertation is to develop (i) a new optimization problem that can handle image rectification and stitching in a unified framework through the pose estimation formulation, and (ii) a new approach to the planar object rectification problem, which is also formulated as an optimal homography estimation problem. First, a unified framework for the image rectification and stitching problem is studied, which can handle both of the conditions that (i) the optical center of the camera is fixed or (ii) the camera captures a planar target. For this, the camera pose is modeled with six parameters (three for rotation and three for translation) and a cost function is developed that reflects the registration errors on a reference plane (the image stitching result). The designed cost function is effectively minimized via the Levenberg-Marquardt algorithm. From the estimated camera poses, the relative camera motion is computed: when the optical center is moved (i.e., the camera motion is large), metric rectification is possible, and thus rectified composites as well as camera poses are obtained. Second, this dissertation presents a rectification method for planar objects using line segments, which can be combined with the previous framework for further rectification or applied independently to single images when there are planar objects in the image, such as building facades or name cards.
Based on the 2D Manhattan world assumption (i.e., the majority of line segments are aligned with the principal axes), a cost function is formulated as an optimal homography estimation problem that makes the line segments horizontally or vertically straight. Since there are outliers in line segment detection, an iterative optimization scheme for robust estimation is also developed. An application of the proposed methods is the stitching of many images of the same scene into a high-resolution image along with its rectification. They can also be applied to the rectification of building facades, documents, name cards, etc., which improves optical character recognition (OCR) rates for text in the scene, as well as the recognition of buildings and the visual quality of scenery images. Finally, this dissertation presents an application of the proposed method to finding document boundaries in videos for mobile-device applications. This is a challenging problem due to perspective distortion, focus and motion blur, partial occlusion, and so on.
For this, a cost function is formulated which comprises a data term (color distributions of the document and background), a boundary term (alignment and contrast errors after the contour of the document is rectified), and a temporal term (temporal coherence in consecutive frames).
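The Levenberg-Marquardt minimization at the core of the unified framework can be illustrated on a simpler cousin of the problem: refining a homography from point correspondences by minimizing reprojection error. This is a sketch, not the dissertation's six-parameter pose model; the damping schedule, the numeric Jacobian, and fixing h33 = 1 are all assumptions:

```python
import numpy as np

def project(H, pts):
    """Apply a 3x3 homography to Nx2 points."""
    q = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return q[:, :2] / q[:, 2:3]

def refine_homography(src, dst, iters=50):
    """Tiny Levenberg-Marquardt loop: 8 free parameters (h33 fixed to 1),
    forward-difference Jacobian, multiplicative damping update."""
    p = np.eye(3).ravel()[:8]
    lam = 1e-3
    def residuals(p):
        H = np.append(p, 1.0).reshape(3, 3)
        return (project(H, src) - dst).ravel()
    r = residuals(p)
    for _ in range(iters):
        J = np.empty((r.size, 8))
        eps = 1e-6
        for j in range(8):                   # numeric Jacobian
            dp = p.copy(); dp[j] += eps
            J[:, j] = (residuals(dp) - r) / eps
        A = J.T @ J + lam * np.eye(8)        # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        r_new = residuals(p + step)
        if r_new @ r_new < r @ r:            # accept step, relax damping
            p, r, lam = p + step, r_new, lam * 0.5
        else:                                # reject step, raise damping
            lam *= 10.0
    return np.append(p, 1.0).reshape(3, 3)
```

The damping term interpolates between gradient descent (large lam) and Gauss-Newton (small lam), which is what makes Levenberg-Marquardt robust for the registration-error cost functions described above.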

    Digital Image Transformations and Image Stacking of Latent Prints Processed Using Multiple Physical and Chemical Techniques

    Latent fingerprints are among the most common types of evidence found at crime scenes. While fingerprint evidence can be very reliable, comparison and identification of a print is highly affected by the quality of the fingerprint image. Fingerprint examiners ideally want an image of the best possible quality in order to make an accurate identification and avoid missing pertinent details. This thesis presents the use of digital image processing to merge multiple images of one fingerprint into a final image of greater quality. The research was conducted using latent fingerprint processing techniques that are widely used in the forensic science community: ninhydrin, DFO, zinc chloride, cyanoacrylate, and fluorescent dye stains. The latents were photographed after each technique was applied. Images of the same print under different wavelengths and filters were merged to create a final image with ideally better contrast, quality, and friction ridge detail than observed in the original images prior to merging. Quality was determined using three different scoring methods: NFIQ, the Bandey scale, and AFIX Tracker. A print was considered improved if the merged score was better than the scores of the original images. Of the prints, 12.1% were improved based on NFIQ scores, 2.8% based on Bandey scores, and 15.0% based on AFIX match scores. Image fusion for increasing the quality of latent fingerprint images is a method that shows small benefits for the examiner when performing a comparison.
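The merging step described above can be reduced, at its simplest, to pixel-wise fusion of aligned captures of the same print. A minimal sketch (illustrative only, not the specific fusion procedure used in the thesis; names are assumptions):

```python
import numpy as np

def stack_images(images, mode="mean"):
    """Fuse aligned grayscale captures of the same print into one image.
    'mean' averaging suppresses sensor noise; 'max' keeps, per pixel,
    the strongest response across captures (e.g. across wavelengths)."""
    stack = np.stack([img.astype(np.float64) for img in images])
    return stack.mean(axis=0) if mode == "mean" else stack.max(axis=0)
```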

    An Artificial Intelligence Approach to Concatenative Sound Synthesis

    Sound examples are included with this thesis. Technological advancements such as increases in processing power, hard disk capacity, and network bandwidth have opened up many exciting new techniques for synthesising sound, one of which is Concatenative Sound Synthesis (CSS). CSS uses a data-driven method to synthesise new sounds from a large corpus of small sound snippets. This technique closely resembles the art of mosaicing, where small tiles are arranged together to create a larger image. A ‘target’ sound is often specified by the user so that segments in the database that match those of the target sound can be identified and then concatenated together to generate the output sound. Whilst the practicality of CSS in synthesising sounds currently looks promising, there are still areas to be explored and improved, in particular the algorithm used to find the matching segments in the database. One of the main issues in CSS is the basis of similarity, as there are many perceptual attributes on which sound similarity can be based, for example timbre, loudness, rhythm, and tempo. An ideal CSS system needs to be able to decipher which of these perceptual attributes are anticipated by the user and then accommodate them by synthesising sounds that are similar with respect to that particular attribute. Failure to communicate the basis of sound similarity between the user and the CSS system generally results in output that mismatches the sound the user envisioned. In order to understand how humans perceive sound similarity, several elements that affect sound similarity judgment were first investigated. Of the four elements tested (timbre, melody, loudness, tempo), it was found that the basis of similarity depends on the listener's musical training: musicians based similarity on timbral information, whilst non-musicians relied on melodic information.
Thus, for the rest of the study, only features that represent timbral information were included, as musicians are the target users for the findings of this study. Another issue with the current state of CSS systems is user control flexibility, in particular during segment matching, where features can be assigned different weights depending on their importance to the search. Typically, the weights (in the existing CSS systems that support a weight-assigning mechanism) can only be assigned manually, resulting in a process that is both labour-intensive and time-consuming. Additionally, another problem was identified in this study: the lack of a mechanism to handle homosonic and equidistant segments. These conditions arise when too few features are compared, causing otherwise aurally different sounds to be represented by the same sonic values; they can also result from rounding off the values of the extracted features. This study addresses both of these problems through an extended use of Artificial Intelligence (AI). The Analytic Hierarchy Process (AHP) is employed to enable order-dependent feature selection, allowing weights to be assigned to each audio feature according to its relative importance. Concatenation distance is used to overcome the issues with homosonic and equidistant sound segments. The inclusion of AI results in a more intelligent system that can better handle tedious tasks and minimize human error, allowing users (composers) to worry less about mundane tasks and focus more on the creative aspects of music making. In addition to the above, this study also aims to enhance user control flexibility in a CSS system and improve similarity results. The key factors that affect the synthesis results of CSS were first identified and then included as parametric options which users can control in order to communicate their intended creations to the system.
Comprehensive evaluations were carried out to validate the feasibility and effectiveness of the proposed solutions (the timbral feature set, AHP, and concatenation distance). The final part of the study investigates the relationship between perceived sound similarity and perceived sound interestingness. A new framework that integrates all of these solutions, the query-based CSS framework, was then proposed. The proof of concept of this study, ConQuer, was developed based on this framework. This study has critically analysed the problems in existing CSS systems; novel solutions have been proposed to overcome them, and their effectiveness has been tested and discussed. These are also the main contributions of this study. Malaysian Ministry of Higher Education, Universiti Putra Malaysia
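The AHP weighting step mentioned above derives feature weights from a pairwise-comparison matrix. A minimal sketch of the standard principal-eigenvector method (illustrative; not code from the thesis):

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive weights from an AHP pairwise-comparison matrix, where
    entry (i, j) states how much more important feature i is than
    feature j: the normalized principal eigenvector of the matrix."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    principal = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return principal / principal.sum()
```

For a perfectly consistent matrix (entry (i, j) exactly w_i/w_j) this recovers the underlying weights; for real, slightly inconsistent judgments it gives the standard AHP compromise.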

    Geo-rectification and cloud-cover correction of multi-temporal Earth observation imagery

    Over the past decades, improvements in remote sensing technology have led to a mass proliferation of aerial imagery. This, in turn, has opened vast new possibilities in land cover classification, cartography, and so forth. As applications in these fields became increasingly complex, the amount of data required rose accordingly, and so, to satisfy these new needs, automated systems had to be developed. Geometric distortions in raw imagery must be rectified, otherwise the high accuracy requirements of the newest applications will not be attained. This dissertation proposes an automated solution for the pre-processing stages of multi-spectral satellite imagery classification, focusing on geo-rectification based on the Fourier shift theorem and on multi-temporal cloud-cover correction. By automating the first stages of image processing, automatic classifiers can take advantage of a larger supply of image data, eventually allowing for the creation of semi-real-time mapping applications.
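Multi-temporal cloud-cover correction over co-registered scenes can be sketched as a per-pixel clear-sky composite: cloudy observations are masked out and each pixel is filled from whichever acquisition dates were clear there (an illustrative simplification of the dissertation's approach; the function name and median rule are assumptions):

```python
import numpy as np

def cloud_free_composite(scenes, cloud_masks):
    """Per-pixel composite over co-registered acquisitions: mask out
    cloudy samples, then take the median of the remaining clear
    observations at each pixel."""
    stack = np.stack(scenes).astype(np.float64)
    stack[np.stack(cloud_masks)] = np.nan   # drop cloudy samples
    return np.nanmedian(stack, axis=0)      # clear-sky median per pixel
```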