
    INTEGRATED LOW LIGHT IMAGE ENHANCEMENT IN TRANSPORTATION SYSTEM

    Recent Intelligent Transportation Systems (ITS) focus on both traffic management and homeland security. They involve advanced detection systems of all kinds, but proper analysis of the image data is required for control and further processing. This becomes even more difficult for low light images because of limitations of the image sensor and heavy noise. An ITS supports all levels of operation (transport policy level, traffic control tactical level, traffic control measure level, and traffic control operation level). For this it uses several subsystems such as Real Time Passenger Information (RTPI), Automatic Number Plate Recognition (ANPR), Variable Message Signs (VMS), Vehicle to Infrastructure (V2I) and Vehicle to Vehicle (V2V) systems. While analysing critical scenarios, mostly for the development of applications for the Vehicle to Infrastructure (V2I) system, several cases are taken into consideration. Some of these cases are very difficult to analyse because of poor background visibility when fine structural detail is needed. Processing low light images or video frames directly, as if they were daytime images, leads to loss of required data, so an efficient enhancement method is needed that gives acceptable results for further transformation and analysis with minimal processing. An adaptive enhancement method is therefore presented that applies different enhancement methods to daylight and low light images separately: a combination of image fusion, edge detection filtering and contourlet transformation is used for low light images, while tone level adjustment and low level feature extraction are used to enhance daylight images.
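
    The abstract does not give implementation details, but the day/night branching idea can be sketched roughly as follows, assuming OpenCV is available. Since the contourlet transform has no standard Python implementation, denoising plus CLAHE stands in for the low light branch and a simple gamma curve stands in for the daylight tone adjustment; the brightness threshold and all parameter values are illustrative assumptions, not the authors' method.

    import cv2
    import numpy as np

    DARK_THRESHOLD = 60  # assumed mean-intensity cut-off separating low light from daylight frames

    def enhance_frame(bgr):
        """Apply a different enhancement path to low light and daylight frames."""
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        if gray.mean() < DARK_THRESHOLD:
            # Low light branch: denoise, then boost local contrast on the luminance channel
            # (stand-in for the fusion / edge filtering / contourlet pipeline in the abstract).
            denoised = cv2.fastNlMeansDenoisingColored(bgr, None, 10, 10, 7, 21)
            lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
            l, a, b = cv2.split(lab)
            clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
            return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
        # Daylight branch: mild tone-level adjustment via a gamma curve.
        gamma = 0.9
        lut = np.array([(i / 255.0) ** gamma * 255 for i in range(256)], dtype=np.uint8)
        return cv2.LUT(bgr, lut)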

    Car make and model recognition under limited lighting conditions at night

    Car make and model recognition (CMMR) has become an important part of intelligent transport systems. Information provided by CMMR can be utilized when license plate numbers cannot be identified or fake number plates are used. CMMR can also be used when a certain model of vehicle needs to be automatically identified by cameras. The majority of existing CMMR methods are designed to be used only in daytime, when most car features can be easily seen. Few methods have been developed to cope with limited lighting conditions at night, where many vehicle features cannot be detected. The aim of this work was to identify car make and model at night by using available rear-view features. This paper presents a one-class classifier ensemble designed to identify a particular car model of interest among other models. A combination of salient geographical and shape features of taillights and license plates from the rear view is extracted and used in the recognition process. The majority vote from a support vector machine, a decision tree, and k-nearest neighbors is applied to verify a target model in the classification process. Experiments on 421 car makes and models captured under limited lighting conditions at night show a classification accuracy of about 93%.
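
    The majority vote over a support vector machine, a decision tree and k-nearest neighbors could be sketched with scikit-learn's hard-voting ensemble, as below. The feature matrix is a random placeholder for the taillight and license plate descriptors, and the binary target-model-versus-others labelling is an assumption about how the problem would be posed with off-the-shelf classifiers, not the paper's implementation.

    import numpy as np
    from sklearn.ensemble import VotingClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Placeholder features: rows = rear-view samples, columns = taillight/plate shape descriptors.
    X = np.random.rand(200, 12)
    y = np.random.randint(0, 2, 200)  # 1 = target car model, 0 = any other model

    ensemble = VotingClassifier(
        estimators=[
            ("svm", SVC()),                      # support vector machine
            ("tree", DecisionTreeClassifier()),  # decision tree
            ("knn", KNeighborsClassifier(5)),    # k-nearest neighbors
        ],
        voting="hard",  # simple majority vote, as described in the abstract
    )
    ensemble.fit(X, y)
    print(ensemble.predict(X[:5]))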

    Use of Coherent Point Drift in computer vision applications

    This thesis presents the novel use of Coherent Point Drift (CPD) in improving the robustness of a number of computer vision applications. The CPD approach includes two methods for registering two images, rigid and non-rigid point set registration, depending on the transformation model used. The key characteristic of a rigid transformation is that the distance between points is preserved, which means it can be used in the presence of translation, rotation, and scaling. Non-rigid transformations, such as affine transforms, provide the opportunity of registering under non-uniform scaling and skew. The idea is to move one point set coherently to align with the second point set. The CPD method finds both the non-rigid transformation and the correspondence between the two point sets at the same time, without requiring an a priori declaration of the transformation model used. The first part of this thesis is focused on speaker identification in video conferencing. A real-time, audio-coupled, video-based approach is presented, which focuses more on the video analysis side rather than the audio analysis that is known to be prone to errors. CPD is effectively utilised for lip movement detection, and a temporal face detection approach is used to minimise false positives if the face detection algorithm fails to perform. The second part of the thesis is focused on multi-exposure and multi-focus image fusion with compensation for camera shake. Scale Invariant Feature Transform (SIFT) keypoints are first detected in the images being fused. This point set is then reduced to remove outliers using RANSAC (RANdom SAmple Consensus), and finally the point sets are registered using CPD with non-rigid transformations. The registered images are then fused with a contourlet-based image fusion algorithm that makes use of a novel alpha blending and filtering technique to minimise artefacts. The thesis evaluates the performance of the algorithm in comparison to a number of state-of-the-art approaches, including the key commercial products currently available in the market, showing significantly improved subjective quality in the fused images. The final part of the thesis presents a novel approach to vehicle make and model recognition (VMMR) in CCTV video footage. CPD is used to effectively remove the skew of detected vehicles, since CCTV cameras are not specifically configured for the VMMR task and may capture vehicles at different approach angles. A LESH (Local Energy Shape Histogram) feature based approach is used for vehicle make and model recognition, with the novelty that temporal processing is used to improve reliability. A number of further algorithms are used to maximise the reliability of the final outcome. Experimental results demonstrate an accuracy in excess of 95% when the proposed system is tested on real CCTV footage with no prior camera calibration.
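
    A rough sketch of the SIFT, RANSAC and CPD chain used in the image fusion part, assuming OpenCV and the third-party pycpd package are available; the matcher settings, RANSAC threshold and the choice of DeformableRegistration are illustrative assumptions rather than the thesis implementation.

    import cv2
    import numpy as np
    from pycpd import DeformableRegistration  # third-party CPD implementation (assumed available)

    def register_keypoints(img_a, img_b):
        """Detect SIFT keypoints, prune outliers with RANSAC, then align with non-rigid CPD."""
        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(img_a, None)
        kp_b, des_b = sift.detectAndCompute(img_b, None)

        # Match descriptors and keep only the RANSAC inliers.
        matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_a, des_b)
        pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
        pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
        _, mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 5.0)
        inliers = mask.ravel().astype(bool)

        # Coherent Point Drift: move pts_a coherently onto pts_b.
        reg = DeformableRegistration(X=pts_b[inliers], Y=pts_a[inliers])
        aligned_a, _ = reg.register()
        return aligned_a, pts_b[inliers]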

    Enhancement of Single and Composite Images Based on Contourlet Transform Approach

    Image enhancement is an imperative step in almost every image processing algorithm. Numerous image enhancement algorithms have been developed for grayscale images, even though grayscale images are rarely used in applications nowadays. This thesis proposes new image enhancement techniques for 8-bit single and composite digital color images. Recently, it has become evident that wavelet transforms are not necessarily best suited to images. Therefore, the enhancement approaches are based on a new 'true' two-dimensional transform called the contourlet transform. The proposed enhancement techniques discussed in this thesis are developed based on an understanding of the working mechanisms of the multiresolution property of the contourlet transform. This research also investigates the effects of using different color space representations for color image enhancement applications. Based on this investigation, an optimal color space is selected for both the single image and composite image enhancement approaches. The objective evaluation shows that the new enhancement method is superior not only to a commonly used transform-based method (e.g. the wavelet transform) but also to various spatial models (e.g. histogram equalization). The results found are encouraging, and the enhancement algorithms have proved to be more robust and reliable.
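
    The contourlet transform has no widely packaged Python implementation, so the general idea of enhancing an image by rescaling multiresolution detail coefficients of the luminance channel is illustrated below with a wavelet decomposition (PyWavelets) as a stand-in; the color space, wavelet and gain factor are assumptions for illustration only.

    import cv2
    import numpy as np
    import pywt

    def enhance_luminance(bgr, gain=1.3):
        """Boost detail coefficients of the luminance channel in a multiresolution domain."""
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
        y = ycrcb[:, :, 0].astype(np.float32)

        # Two-level wavelet decomposition (stand-in for the contourlet decomposition).
        coeffs = pywt.wavedec2(y, "db2", level=2)
        approx, details = coeffs[0], coeffs[1:]
        # Amplify the detail subbands to sharpen edges and contours.
        boosted = [tuple(gain * band for band in level) for level in details]

        y_enh = pywt.waverec2([approx] + boosted, "db2")
        y_enh = np.clip(y_enh[: y.shape[0], : y.shape[1]], 0, 255).astype(np.uint8)
        ycrcb[:, :, 0] = y_enh
        return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)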

    Video content analysis for intelligent forensics

    The networks of surveillance cameras installed in public places and private territories continuously record video data with the aim of detecting and preventing unlawful activities. This enhances the importance of video content analysis applications, either for real-time (i.e. analytic) or post-event (i.e. forensic) analysis. In this thesis, the primary focus is on four key aspects of video content analysis, namely: 1. moving object detection and recognition; 2. correction of colours in video frames and recognition of the colours of moving objects; 3. make and model recognition of vehicles and identification of their type; 4. detection and recognition of text information in outdoor scenes.

    To address the first issue, a framework is presented in the first part of the thesis that efficiently detects and recognizes moving objects in videos. The framework targets the problem of object detection in the presence of complex backgrounds. The object detection part of the framework relies on a background modelling technique and a novel post-processing step in which the contours of the foreground regions (i.e. moving objects) are refined by classifying edge segments as belonging either to the background or to the foreground region. Further, a novel feature descriptor is devised for the classification of moving objects into humans, vehicles and background. The proposed feature descriptor captures the texture information present in the silhouette of foreground objects.

    To address the second issue, a framework for the correction and recognition of the true colours of objects in videos is presented, with novel noise reduction, colour enhancement and colour recognition stages. The colour recognition stage makes use of temporal information to reliably recognize the true colours of moving objects across multiple frames. The proposed framework is specifically designed to perform robustly on videos of poor quality caused by surrounding illumination, camera sensor imperfections and artefacts due to high compression.

    In the third part of the thesis, a framework for vehicle make and model recognition and type identification is presented. As part of this work, a novel feature representation technique for the distinctive representation of vehicle images has emerged. The feature representation technique uses dense feature description and a mid-level feature encoding scheme to capture the texture in the frontal view of vehicles. The proposed method is insensitive to minor in-plane rotation and skew within the image. The proposed framework can be extended to any number of vehicle classes without re-training. Another important contribution of this work is the publication of a comprehensive, up-to-date dataset of vehicle images to support future research in this domain.

    The problem of text detection and recognition in images is addressed in the last part of the thesis. A novel technique is proposed that exploits the colour information in the image to identify text regions. Apart from detection, the colour information is also used to segment characters from words. The recognition of identified characters is performed using shape features and supervised learning. Finally, a lexicon-based alignment procedure is adopted to finalize the recognition of strings present in word images.

    Extensive experiments have been conducted on benchmark datasets to analyse the performance of the proposed algorithms. The results show that the proposed moving object detection and recognition technique outperformed well-known baseline techniques. The proposed framework for the correction and recognition of object colours in video frames achieved all the aforementioned goals. The performance analysis of the vehicle make and model recognition framework on multiple datasets has shown the strength and reliability of the technique in various scenarios. Finally, the experimental results for the text detection and recognition framework on benchmark datasets have revealed the potential of the proposed scheme for accurate detection and recognition of text in the wild.
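
    As a hedged illustration of the moving object detection stage in the first part of the thesis, the baseline background modelling step might look like the following with OpenCV; the edge segment classification and the texture-based object descriptor from the thesis are not reproduced here, and the morphology and area parameters are assumptions.

    import cv2

    def detect_moving_objects(video_path, min_area=500):
        """Yield per-frame bounding boxes of foreground blobs found by background subtraction."""
        cap = cv2.VideoCapture(video_path)
        subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = subtractor.apply(frame)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)        # remove speckle noise
            mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
            yield frame, boxes
        cap.release()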

    Video Forgery Detection: A Comprehensive Study of Inter and Intra Frame Forgery With Comparison of State-Of-Art

    Sophisticated and low-cost smartphones, digital cameras, camcorders and surveillance CCTV cameras are extensively used to create videos in our daily lives. Video sharing platforms presently available in the market, such as YouTube, Facebook, Instagram, Snapchat and many more, are used to share video content. Besides this, much software can edit video content, such as Windows Movie Maker, Video Editor and Adobe Photoshop; with such software anyone can edit a video, which is called forgery if the edited content is harmful. Videos often play a vital role as evidence in criminal cases, and the verdict rests on the proof submitted by the lawyer to the court. Many such cases have shown that video submitted as proof had been forged, so checking the authenticity of a video is essential before it is submitted as evidence. Rapid developments in deep learning techniques have created deepfake videos in which faces are replaced with other faces, reinforcing the saying that "seeing is no longer believing". Face-morphing software such as FakeApp and FaceSwap has made the authentication of evidence doubtful and untrustworthy, so videos are not accepted as proof without proper validation. This survey presents methods that are capable of accurately analysing videos to detect different kinds of forgeries. It reveals that most of the existing methods rely on the number of tampered frames. The techniques covered address compressed and double-compressed codec videos, with research carried out from 2016 to the present. This paper gives a comprehensive study of the techniques, algorithms and applications designed and developed to detect forgery in videos.
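
    Many inter-frame forgery detectors of the kind surveyed rely on continuity cues between consecutive frames. A simplified, hedged example of that idea is given below, flagging abrupt drops in histogram similarity that may indicate frame insertion or deletion; the threshold and histogram settings are illustrative assumptions, not a method from the surveyed papers.

    import cv2

    def flag_discontinuities(video_path, threshold=0.7):
        """Return frame indices where consecutive-frame similarity drops sharply."""
        cap = cv2.VideoCapture(video_path)
        prev_hist, suspects, idx = None, [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
            cv2.normalize(hist, hist)
            if prev_hist is not None:
                similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
                if similarity < threshold:  # unusually large change between adjacent frames
                    suspects.append(idx)
            prev_hist, idx = hist, idx + 1
        cap.release()
        return suspects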