    Parameters Estimation For Image Restoration

    Image degradation generally arises from transmission channel errors, camera defocus, atmospheric turbulence, relative object-camera motion and similar causes, and is unavoidable when a scene is captured through a camera. Because degraded images have little scientific value, restoring them is essential in many practical applications. In this thesis, attempts are made to recover images from their degraded observations. Various degradations, including out-of-focus blur, motion blur and atmospheric turbulence blur, together with Gaussian noise, are considered. Image restoration schemes are broadly based on classical approaches, regularisation parameter estimation and PSF estimation. Five contributions are made, addressing different aspects of restoration: four deal with spatially invariant degradation, and one attempts to remove spatially variant degradation. Two schemes are proposed to estimate the motion blur parameters. A two-dimensional Gabor filter is used to estimate the blur direction and a radial basis function neural network (RBFNN) to find the blur length; a Wiener filter then restores the images. The noise robustness of the proposed scheme is tested at different noise strengths. The blur parameter estimation problem is also modelled as a pattern classification problem and solved using a support vector machine (SVM): the length parameter of motion blur and the sigma (σ) parameter of Gaussian blur are identified through a multi-class SVM. Support vector regression (SVR) is used to learn a mapping from the observed noisy blurred image to the true image; the SVR parameters, which play a key role in its performance, are optimised through particle swarm optimisation (PSO), and the optimised SVR model is used to restore noisy blurred images. Blur in the presence of noise makes the restoration problem ill-conditioned, so the regularisation parameter required to restore a noisy blurred image is addressed: PSO, a global optimisation scheme, is used to minimise the generalised cross-validation (GCV) cost function, which depends on the regularisation parameter, thereby avoiding local minima. The scheme adapts to degradations due to motion and out-of-focus blur combined with noise of varying strengths. In another contribution, images degraded by rotational motion are restored. This is treated as spatially variant blur and handled as a combination of several spatially invariant blurs: the proposed scheme divides the blurred image into a number of sub-images using elliptical path modelling, deblurs each separately with a Wiener filter, and finally integrates them to reconstruct the whole image. Each model is studied separately, and experiments are conducted to evaluate its performance. The visual quality and the peak signal-to-noise ratio (PSNR, in dB) of the restored images are compared with recent competing schemes.
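
    Several of the contributions above reduce to the same final step: once a blur kernel is estimated (angle and length for motion blur), the image is restored with a Wiener filter. The sketch below illustrates only that step; it is a minimal approximation, not the thesis implementation, and the PSF construction and the noise-to-signal constant nsr are illustrative assumptions.

```python
import numpy as np

def motion_psf(length, angle_deg, size=31):
    """Approximate a linear motion-blur PSF with the given length and direction."""
    psf = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2.0, length / 2.0, 4 * int(length) + 1):
        row = int(round(c + t * np.sin(theta)))
        col = int(round(c + t * np.cos(theta)))
        if 0 <= row < size and 0 <= col < size:
            psf[row, col] += 1.0
    return psf / psf.sum()

def wiener_restore(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener filter: F_hat = H* G / (|H|^2 + NSR)."""
    pad = np.zeros(blurred.shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    # Shift the PSF centre to the origin so the restored image is not displaced.
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + nsr)))

# Example usage, with blur parameters assumed to come from upstream estimators
# (e.g. direction from a Gabor-filter analysis, length from a trained regressor):
#     restored = wiener_restore(blurred_img, motion_psf(length=15, angle_deg=30))
```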

    Nighttime Driver Behavior Prediction Using Taillight Signal Recognition via CNN-SVM Classifier

    This paper aims to enhance the ability to predict nighttime driving behavior by identifying the taillights of both human-driven and autonomous vehicles. The proposed model incorporates a customized detector designed to accurately detect front-vehicle taillights on the road. At the beginning of the detector, a learnable pre-processing block extracts deep features from the input images and calculates the data rarity for each feature. In the next step, drawing inspiration from soft attention, a weighted binary mask is designed that guides the model to focus on predetermined regions. The method uses Convolutional Neural Networks (CNNs) to extract distinguishing characteristics from these areas, reduces the dimensionality with Principal Component Analysis (PCA), and finally predicts vehicle behavior with a Support Vector Machine (SVM). To train and evaluate the model, a large-scale dataset is collected with two types of cameras, dash-cams and Insta360 cameras, from the rear view of Ford Motor Company vehicles. This dataset includes over 12k frames captured during both daytime and nighttime hours. To address the limited nighttime data, a pixel-wise image processing technique is implemented to convert daytime images into realistic night images. The experiments demonstrate that the proposed methodology categorizes vehicle behavior with 92.14% accuracy, 97.38% specificity, 92.09% sensitivity, 92.10% F1-measure, and a Cohen's Kappa statistic of 0.895. Further details are available at https://github.com/DeepCar/Taillight_Recognition. (12 pages, 10 figures.)
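
    As a rough illustration of the classification stage described above (deep features, PCA, then an SVM), the following sketch chains those steps with scikit-learn. The hyperparameters, split and feature dimensions are assumptions for the example, not values from the paper, and the CNN feature extraction and attention masking are outside its scope.

```python
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_taillight_classifier(X, y, n_components=128):
    """PCA -> SVM pipeline; hyperparameters are illustrative, not the paper's."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    clf = make_pipeline(
        StandardScaler(),                # scale CNN features before PCA
        PCA(n_components=n_components),  # reduce dimensionality
        SVC(kernel="rbf", C=10.0, gamma="scale"),  # final behaviour classifier
    )
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)

# X: (n_samples, n_features) array of CNN embeddings from the taillight regions,
# y: behaviour labels. For example:
#     model, test_accuracy = train_taillight_classifier(X, y)
```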

    Modeling and model-aware signal processing methods for enhancement of optical systems

    Theoretical and numerical modeling of optical systems is increasingly being utilized in a wide range of areas in physics and engineering for characterizing and improving existing systems or developing new methods. This dissertation focuses on determining and improving the performance of imaging and non-imaging optical systems through modeling and through the development of model-aware enhancement methods. We evaluate performance, demonstrate enhancements in terms of resolution and light collection efficiency, and improve the capabilities of the systems through changes to the system design and through post-processing techniques. We consider application areas in integrated circuit (IC) imaging for fault analysis and malicious circuitry detection, and free-form lens design for creating prescribed illumination patterns. The first part of this dissertation focuses on sub-surface imaging of ICs for fault analysis using a solid immersion lens (SIL) microscope. We first derive the Green's function of the microscope and use it to determine its resolution limits for bulk silicon and silicon-on-insulator (SOI) chips. We then propose an optimization framework for designing super-resolving apodization masks that utilizes the developed model, and demonstrate the trade-offs in designing such masks. Finally, we derive the full electromagnetic model of the SIL microscope, which models the image of an arbitrary sub-surface structure. With the rapidly shrinking dimensions of ICs, we are increasingly limited in resolving the features and identifying potential modifications despite the resolution improvements provided by state-of-the-art microscopy techniques and the enhancement methods described here. In the second part of this dissertation, we shift our focus away from improving resolution and consider an optical framework that does not require high-resolution imaging to detect malicious circuitry. We develop a classification-based high-throughput gate identification method that utilizes the physical model of the optical system. We then propose a lower-throughput system, based on higher-resolution imaging, to increase detection accuracy and supplement the former method. Finally, we consider the problem of free-form lens design for forming prescribed illumination patterns as a non-imaging application. Common methods that design free-form lenses for forming patterns treat the input light source as a point source; however, using extended light sources with such lenses leads to significant blurring in the resulting pattern. We propose a deconvolution-based framework that utilizes the lens geometry to model the blurring effects and eliminate this degradation, resulting in sharper patterns.
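
    For the last part of the abstract, the following sketch illustrates the general pre-compensation idea under a simplifying assumption: if the extended source blurs the projected pattern approximately as a convolution with the source footprint, the target irradiance can be deconvolved by that kernel before the lens is designed for it. The Richardson-Lucy iteration used here is a generic choice for the illustration, not necessarily the dissertation's algorithm.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(target, kernel, n_iter=30, eps=1e-12):
    """Deconvolve a non-negative 2-D `target` pattern by a blur `kernel`."""
    target = np.array(target, dtype=float)
    kernel = np.array(kernel, dtype=float)
    kernel /= kernel.sum()
    kernel_mirror = kernel[::-1, ::-1]
    estimate = np.full(target.shape, target.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, kernel, mode="same")
        estimate *= fftconvolve(target / (blurred + eps), kernel_mirror, mode="same")
    return estimate

# Designing the lens for the deconvolved pattern rather than the original target
# should, under these assumptions, yield a sharper realised pattern:
#     precompensated = richardson_lucy(target_irradiance, source_footprint)
```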

    Analysis of Image Processing Strategies Dedicated to Underwater Scenarios

    Underwater images suffer from quality degradation such as blur, poor contrast and non-uniform illumination, so image processing is needed to handle these degraded images. In this paper, two important image processing approaches, image restoration and image enhancement, are compared. The paper also discusses the quality measures used in image processing, which help assess how clear the processed images are.

    State of the Art in Face Recognition

    Notwithstanding the tremendous effort devoted to the face recognition problem, it is not yet possible to design a face recognition system that comes close to human performance. New computer vision and pattern recognition approaches need to be investigated, and knowledge and perspectives from other fields, such as psychology and neuroscience, must be incorporated into face recognition research to design robust systems. Indeed, much more effort is required to arrive at a human-like face recognition system. This book is an effort to reduce the gap between the present state of face recognition research and that future goal.

    Eye Detection and Face Recognition Across the Electromagnetic Spectrum

    Biometrics, or the science of identifying individuals based on their physiological or behavioral traits, is increasingly being used to replace typical identifying markers such as passwords, PINs and passports. Different modalities, such as face, fingerprint, iris and gait, can be used for this purpose, and one of the most studied forms of biometrics is face recognition (FR). Due to a number of advantages over typical visible-to-visible FR, recent trends have been pushing the FR community to perform cross-spectral matching of visible images to face images from higher bands of the electromagnetic (EM) spectrum. In this work, the short-wave infrared (SWIR) band of the EM spectrum is the primary focus. Four main contributions relating to automatic eye detection and cross-spectral FR are discussed. First, a novel eye localization algorithm is introduced for geometrically normalizing a face across multiple SWIR bands for FR algorithms. Using a template-based scheme and a novel summation range filter, an extensive experimental analysis shows that this algorithm is fast, robust and highly accurate compared with other available eye detection methods, and that the eye locations it produces yield higher FR results than all other tested approaches. This algorithm is then augmented and updated to quickly and accurately detect eyes in more challenging unconstrained datasets spanning the EM spectrum. Additionally, a novel cross-spectral matching algorithm is introduced that attempts to bridge the gap between the visible and SWIR spectra. By fusing multiple photometric normalization combinations, the proposed algorithm is not only more efficient than other visible-SWIR matching algorithms but also more accurate on multiple challenging datasets. Finally, a novel pre-processing algorithm is discussed that bridges the gap between document (passport) and live face images; the proposed scheme, using inpainting and denoising techniques, significantly increases cross-document face recognition performance.
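
    The cross-spectral matching contribution fuses multiple photometric normalization combinations. The sketch below shows one generic way such a fusion could look, comparing a visible/SWIR image pair under two simple normalizations and averaging the correlation scores; the specific normalizations, matcher and fusion rule in the thesis may differ, so everything here is illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_normalize(img, s1=1.0, s2=2.0):
    """Difference-of-Gaussians band-pass followed by variance normalization."""
    img = img.astype(float)
    band = gaussian_filter(img, s1) - gaussian_filter(img, s2)
    return (band - band.mean()) / (band.std() + 1e-8)

def gamma_normalize(img, gamma=0.2):
    """Gamma correction to compress the dynamic range."""
    img = img.astype(float)
    return (img / (img.max() + 1e-8)) ** gamma

def fused_score(probe, gallery):
    """Average normalized-correlation scores over several photometric normalizations."""
    scores = []
    for norm in (dog_normalize, gamma_normalize):
        a, b = norm(probe).ravel(), norm(gallery).ravel()
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        scores.append(float(np.dot(a, b) / a.size))
    return float(np.mean(scores))
```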

    Face recognition technologies for evidential evaluation of video traces

    Human recognition from video traces is an important task in forensic investigations and evidence evaluations. Compared with other biometric traits, the face is one of the most widely used modalities for human recognition because its collection is non-intrusive and requires little cooperation from the subjects. Moreover, face images taken at a long distance can still provide reasonable resolution, whereas most biometric modalities, such as iris and fingerprint, do not have this merit. In this chapter, we discuss automatic face recognition technologies for the evidential evaluation of video traces. We first introduce the general concepts in both forensic and automatic face recognition, then analyse the difficulties of face recognition from videos. We summarise and categorise the approaches for handling different uncontrollable factors in difficult recognition conditions. Finally, we discuss some challenges and trends in face recognition research in both forensics and biometrics. Given its merits, demonstrated in many deployed systems, and its great potential in other emerging applications, considerable research and development effort is expected to be devoted to face recognition in the near future.

    Digital forensic techniques for the reverse engineering of image acquisition chains

    In recent years a number of new methods have been developed to detect image forgery. Most forensic techniques use footprints left on images to infer their history. Images, however, may go through a series of processing and modification steps during their lifetime, so detecting tampering is difficult when the footprints have been distorted or removed by a complex chain of operations. In this research we propose digital forensic techniques that allow us to reverse engineer and determine the history of images that have gone through chains of image acquisition and reproduction. This thesis presents two approaches to the problem. In the first part we propose a novel theoretical framework for the reverse engineering of signal acquisition chains. Based on a simplified chain model, we describe how signals evolve at the different stages of the chain using the theory of sampling signals with finite rate of innovation. Under particular conditions, our technique allows us to detect whether a given signal has been reacquired through the chain, and it also makes it possible to estimate important parameters of the chain from the acquisition-reconstruction artefacts left on the signal. The second part of the thesis presents our new algorithm for image recapture detection based on edge blurriness. Two overcomplete dictionaries are trained using the K-SVD approach to learn distinctive blurring patterns from sets of single-captured and recaptured images. An SVM classifier is then built using dictionary approximation errors and the mean edge spread width from the training images. The algorithm, which requires no user intervention, was tested on a database that included more than 2500 high-quality recaptured images. Our results show that our method achieves a detection rate exceeding 99% for recaptured images and 94% for single-captured images.
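
    A simplified sketch of the recapture-detection pipeline described above: learn one overcomplete dictionary per class from edge patches and use the two reconstruction errors (together with, in the thesis, the mean edge spread width) as features for an SVM. scikit-learn's dictionary learner stands in for K-SVD here, so this approximates the described method rather than reproducing it.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def fit_dictionary(patches, n_atoms=128):
    """patches: (n_patches, patch_dim) array of vectorised edge patches."""
    return MiniBatchDictionaryLearning(
        n_components=n_atoms,
        transform_algorithm="omp",   # sparse coding at transform time
        transform_n_nonzero_coefs=5,
        random_state=0,
    ).fit(patches)

def reconstruction_error(dico, patches):
    """Mean squared error of the sparse approximation of `patches`."""
    codes = dico.transform(patches)
    recon = codes @ dico.components_
    return float(np.mean(np.sum((patches - recon) ** 2, axis=1)))

# One dictionary is trained on single-captured edge patches and one on
# recaptured edge patches; each test image is then described by its two
# reconstruction errors (plus an edge-spread feature) and classified with an
# SVM, e.g. sklearn.svm.SVC(kernel="rbf"):
#     D_single = fit_dictionary(single_capture_patches)
#     D_recap = fit_dictionary(recaptured_patches)
#     features = [reconstruction_error(D_single, test_patches),
#                 reconstruction_error(D_recap, test_patches)]
```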