    Recent Progress in Image Deblurring

    This paper comprehensively reviews recent developments in image deblurring, covering non-blind/blind and spatially invariant/variant techniques. These techniques share the objective of inferring a latent sharp image from one or more corresponding blurry images; blind deblurring techniques must additionally estimate an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must deliver high-quality images under complex conditions such as motion, poor lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how they handle ill-posedness, a crucial issue in deblurring tasks, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite considerable progress, the success of image deblurring, especially in the blind case, remains limited by complex application conditions that make the blur kernel hard to estimate and spatially variant. This review provides a holistic understanding of and deep insight into image deblurring. An analysis of the empirical evidence for representative methods and practical issues, as well as a discussion of promising future directions, is also presented. Comment: 53 pages, 17 figures
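
    As a concrete illustration of the non-blind setting surveyed above, here is a minimal sketch of Richardson-Lucy deconvolution (a classic Bayesian-inference-style method), assuming a known, spatially invariant blur kernel; the toy motion-blur PSF and iteration count are illustrative choices, not values from the paper.

```python
# Minimal sketch of non-blind deblurring via Richardson-Lucy deconvolution.
# Assumes the blur kernel (PSF) is known -- the "non-blind" setting above.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    """Iteratively estimate the latent sharp image from a blurry one."""
    estimate = np.full_like(blurred, 0.5)        # flat initial guess
    psf_flipped = psf[::-1, ::-1]                # adjoint of convolution
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)      # data-fidelity ratio
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate

# Toy usage: horizontal motion blur of length 9
psf = np.zeros((9, 9)); psf[4, :] = 1.0 / 9.0
sharp = np.random.rand(64, 64)
blurry = fftconvolve(sharp, psf, mode="same")
restored = richardson_lucy(blurry, psf)
```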

    Image Restoration for Remote Sensing: Overview and Toolbox

    Remote sensing provides valuable information about objects or areas from a distance, in either active (e.g., RADAR and LiDAR) or passive (e.g., multispectral and hyperspectral) modes. The quality of data acquired by remotely sensed imaging sensors (both active and passive) is often degraded by a variety of noise types and artifacts. Image restoration, a vibrant field of research in the remote sensing community, is the task of recovering the true unknown image from the degraded observed image. Each imaging sensor induces unique noise types and artifacts into the observed image, which has led restoration techniques to develop along different paths for each sensor type. This review paper brings together advances in image restoration techniques, with particular focus on synthetic aperture radar and hyperspectral images as the most active sub-fields of image restoration in the remote sensing community. We thereby provide a comprehensive, discipline-specific starting point for researchers at different levels (i.e., students, researchers, and senior researchers) who wish to investigate the vibrant topic of data restoration, supplying sufficient detail and references. Additionally, this review paper is accompanied by a toolbox that provides a platform encouraging interested students and researchers in the field to further explore restoration techniques and fast-forward the community. The toolboxes are provided at https://github.com/ImageRestorationToolbox. Comment: This paper is under review in GRS
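
    To make the degradation/restoration setting concrete, here is a generic toy sketch, not a method from the toolbox, of the common model y = Hx + n (blur plus noise) and a Tikhonov-regularized Fourier-domain inverse; the box PSF, noise level, and regularization weight lam are illustrative assumptions.

```python
# Illustrative sketch of the degradation model y = H x + n and a
# Tikhonov-regularized (Wiener-like) inverse computed in the Fourier domain.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((128, 128))                      # "true" scene

# Degradation: circular blur with a 5x5 box PSF plus Gaussian noise
psf = np.zeros_like(x); psf[:5, :5] = 1.0 / 25.0
H = np.fft.fft2(psf)
y = np.real(np.fft.ifft2(H * np.fft.fft2(x))) \
    + 0.01 * rng.standard_normal(x.shape)

# Restoration: x_hat = conj(H) Y / (|H|^2 + lam); lam trades noise vs. detail
lam = 1e-2
Y = np.fft.fft2(y)
x_hat = np.real(np.fft.ifft2(np.conj(H) * Y / (np.abs(H) ** 2 + lam)))
```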

    Deep Learning Techniques for Geospatial Data Analysis

    Consumer electronic devices such as mobile handsets, goods tagged with RFID labels, and location and position sensors continuously generate a vast amount of location-enriched data called geospatial data. Conventionally, such geospatial data was used for military applications. In recent times, many useful civilian applications have been designed and deployed around such data, for example, recommendation systems that suggest restaurants or places of attraction to a tourist visiting a particular locality. At the same time, civic bodies are harnessing geospatial data generated through remote sensing devices to provide better services to citizens, such as traffic monitoring, pothole identification, and weather reporting. Typically, such applications are built on non-hierarchical machine learning techniques such as Naive Bayes classifiers, Support Vector Machines, and decision trees. Recent advances in deep learning have shown that neural network-based techniques outperform these conventional techniques and provide effective solutions for many geospatial data analysis tasks such as object recognition, image classification, and scene understanding. The chapter presents a survey of the current state of applications of deep learning techniques for analyzing geospatial data. The chapter is organized as follows: (i) a brief overview of deep learning algorithms; (ii) geospatial analysis: a data science perspective; (iii) deep learning techniques for remote sensing data analytics; (iv) deep learning techniques for GPS data analytics; (v) deep learning techniques for RFID data analytics. Comment: This is a pre-print of the following chapter: Arvind W. Kiwelekar, Geetanjali S. Mahamunkar, Laxman D. Netak, Valmik B Nikam, {\em Deep Learning Techniques for Geospatial Data Analysis}, published in {\bf Machine Learning Paradigms}, edited by George A. Tsihrintzis and Lakhmi C. Jain, 2020, publisher Springer, Cham; reproduced with permission of Springer, Cham
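
    As a hedged sketch of the neural-network side of this comparison, the following PyTorch snippet defines a small CNN of the kind used for remote sensing scene classification; the architecture, input size (64x64 RGB tiles), and class count are illustrative assumptions, not the chapter's models.

```python
# Hypothetical sketch of a CNN scene classifier of the kind contrasted with
# Naive Bayes / SVM / decision-tree baselines for geospatial imagery.
import torch
import torch.nn as nn

class SceneCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # for 64x64 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = SceneCNN()(torch.randn(4, 3, 64, 64))   # batch of 4 RGB tiles
```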

    Neural Network Methods for Radiation Detectors and Imaging

    Recent advances in image data processing through machine learning, and especially deep neural networks (DNNs), allow for new optimization and performance-enhancement schemes for radiation detectors and imaging hardware through data-endowed artificial intelligence. We give an overview of data generation at photon sources, deep learning-based methods for image processing tasks, and hardware solutions for deep learning acceleration. Most existing deep learning approaches are trained offline, typically using large amounts of computational resources. However, once trained, DNNs can achieve fast inference speeds and can be deployed to edge devices. A new trend is edge computing, with lower energy consumption (hundreds of watts or less) and real-time analysis potential. While popularly used for edge computing, electronic hardware accelerators, ranging from general-purpose processors such as central processing units (CPUs) to application-specific integrated circuits (ASICs), are constantly reaching performance limits in latency, energy consumption, and other physical constraints. These limits give rise to next-generation analog neuromorphic hardware platforms, such as optical neural networks (ONNs), for highly parallel, low-latency, and low-energy computing to boost deep learning acceleration.
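
    A minimal sketch of the offline-training, fast-edge-inference pattern the overview describes, using PyTorch post-training dynamic quantization as one common CPU-oriented route; the tiny model and layer choice are illustrative, and the paper's own targets (ASICs, ONNs) are different hardware classes.

```python
# Sketch: a network trained offline is shrunk with post-training dynamic
# quantization (int8 weights), then run cheaply at inference time.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()                                     # assume weights were trained offline

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8        # quantize the linear layers
)

with torch.no_grad():                            # fast inference at the edge
    out = quantized(torch.randn(1, 256))
```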

    Non-Retinotopic Reference Frames in Human Vision: A Dynamic Journey from Visual Chaos to Clarity

    The optics of the eye maps neighboring points in the environment to neighboring retinal photoreceptors, and these neighborhood relations, known as retinotopic organization, are qualitatively preserved in early visual cortical areas. Under normal viewing conditions, due to object and observer movements in the environment, the stimuli impinging on retinotopic representations are highly dynamic and unstable. Thus, understanding ecological vision requires an understanding of how visual processes operate under these dynamic conditions. Retinotopically based theories, however, are not sufficient to explain how clarity of form is achieved in a dynamic environment. Non-retinotopic theories provide an alternative that addresses the dynamic issues associated with purely retinotopic theories. Indeed, recent studies have indicated that many visual attributes of a stimulus are computed according to non-retinotopic reference frames. While those studies show the involvement of non-retinotopic reference frames in visual computation, the nature and spatio-temporal characteristics of these reference frames remain largely unknown. The primary goal of our research was to understand the nature and spatio-temporal properties of the reference frames involved in non-retinotopic computations. Our results indicate that the effect of a dynamic non-retinotopic reference frame extends over space, creating a field within which target stimuli are localized and perceived relative to the reference. The fields of neighboring dynamic reference frames interact; static neighbors do not affect the fields of dynamic references; the non-retinotopic field effect is maximized when the target and the reference stimuli are in phase; and the field strength decreases with target-reference phase shift. The results of our visual masking experiments indicate that while masking mechanisms operate in the retinotopic domain, the masking effect attenuates significantly in the presence of predictable non-retinotopic reference frames. We suggest that the reference frame revealed by our studies is better described in terms of a “field” than an object. Our results also indicate that interactions between reference frames occur only when they are in motion, suggesting that the fields generated by non-retinotopic reference frames are motion-based. In conclusion, this work reveals that the dynamic nature of our visual experience should be viewed as part of the solution, rather than a problem, in ecological vision.
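
    Purely as an illustration of the central idea, our own construction rather than the thesis' model, the following sketch computes a target's position relative to a moving reference frame by discounting the reference trajectory from retinal coordinates; in retinotopic coordinates the target sweeps across the retina, while in the reference-relative (non-retinotopic) frame it is stationary.

```python
# Illustrative sketch: retinotopic vs. reference-relative coordinates of a
# target that moves together with a moving reference frame.
import numpy as np

t = np.linspace(0.0, 1.0, 50)
reference = np.stack([10.0 * t, np.zeros_like(t)], axis=1)  # reference drifting right
target_retinal = reference + np.array([2.0, 1.0])           # target rides along

# Retinotopic coordinates change over time; reference-relative ones are stable.
target_nonretinotopic = target_retinal - reference          # constant (2, 1)
```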

    Automatic Car Number Plate Extraction Using Connected Components and Geometrical Features Approach

    In today's era of advanced and secure digital technology, monitoring systems and security mechanisms play a critical role. Specialized security cameras in public areas and at pedestrian crossings can monitor and record real-time events as video clips used to track criminals. To obtain important data clearly and correctly from these video clips, reliable detection and extraction methods are essential. The proposed system focuses on the detection and extraction of car number plates captured from speeding cars; these number plates are deblurred with an enhanced motion deblurring technique to overcome some of the security threat. Our proposed method combines a connected-component-based approach with regional geometrical features. In this method, key frames are generated from an input video clip using a Discrete Wavelet Transform (DWT)-based approach. From the key frame images, rectangular areas with high luminance are detected and extracted as foreground regions, and the rest is discarded as background using regional geometric features. Finally, each rectangular region is checked for text; if it contains text, the system accepts it as a number plate, and the other regions are discarded. The accuracy of the method is then evaluated in various experiments and compared with previous research. This system can be widely used in
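
    A hedged sketch of the candidate-detection stage described above: threshold high-luminance regions in a key frame, label connected components with OpenCV, and keep those whose geometry (aspect ratio, extent, area) is plate-like. The threshold values and the file name keyframe.png are illustrative guesses, not the paper's parameters, and the final text check is left as a stub.

```python
# Plate-candidate detection: connected components + geometric filtering.
import cv2
import numpy as np

def plate_candidates(gray):
    _, bright = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)  # high-luminance areas
    n, _, stats, _ = cv2.connectedComponentsWithStats(bright)
    boxes = []
    for i in range(1, n):                                         # label 0 is background
        x, y, w, h, area = stats[i]
        aspect = w / float(h)
        extent = area / float(w * h)                              # fill of bounding box
        if 2.0 < aspect < 6.0 and extent > 0.5 and area > 500:    # plate-like geometry
            boxes.append((x, y, w, h))
    return boxes                                                  # then run a text check

frame = cv2.imread("keyframe.png", cv2.IMREAD_GRAYSCALE)          # hypothetical key frame
candidates = plate_candidates(frame) if frame is not None else []
```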

    Machine learning for flow field measurements: a perspective

    Advancements in machine-learning (ML) techniques are driving a paradigm shift in image processing, and flow diagnostics with optical techniques is no exception. Considering the existing and foreseeable disruptive developments in flow field measurement techniques, we elaborate this perspective with a particular focus on the field of particle image velocimetry. The driving forces behind the advancements in ML methods for flow field measurements in recent years are reviewed in terms of image preprocessing, data treatment, and conditioning. Finally, possible routes for further developments are highlighted. Stefano Discetti acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 949085). Yingzheng Liu acknowledges financial support from the National Natural Science Foundation of China (11725209).
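
    For background on the baseline that the surveyed ML methods augment or replace, here is a minimal sketch of the classical FFT cross-correlation step of particle image velocimetry, which estimates the integer-pixel shift of a particle pattern between two interrogation windows; the window size and the synthetic test case are illustrative.

```python
# Classical PIV building block: displacement as the cross-correlation peak.
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the integer-pixel shift between two interrogation windows."""
    a = win_a - win_a.mean()                       # remove mean intensity
    b = win_b - win_b.mean()
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shape = np.array(corr.shape)
    shift = np.array(peak, dtype=float)
    wrap = shift > shape / 2                       # undo circular wrap-around
    shift[wrap] -= shape[wrap]
    return shift                                   # (dy, dx) in pixels

rng = np.random.default_rng(1)
frame_a = rng.random((32, 32))
frame_b = np.roll(frame_a, (3, 5), axis=(0, 1))    # impose a known displacement
print(piv_displacement(frame_a, frame_b))          # expect approximately [3. 5.]
```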

    Light field image processing: an overview

    Light field imaging has emerged as a technology that allows us to capture richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision problems such as depth sensing, post-capture refocusing, segmentation, video stabilization, and material classification. On the other hand, the high dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We focus on all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
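
    As a toy illustration of the post-capture refocusing the overview discusses, the following sketch implements the classic shift-and-sum formulation on a 4D light field L(u, v, s, t): each angular view is shifted in proportion to its angular offset and the views are averaged; the integer shifts and the parameter alpha (selecting the focal plane) are simplifying assumptions.

```python
# Shift-and-sum refocusing of a 4D light field, simplified to integer shifts.
import numpy as np

def refocus(lightfield, alpha):
    """lightfield: array of shape (U, V, S, T); returns a refocused (S, T) image."""
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))  # shift proportional to angle
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)                           # average over all views

lf = np.random.rand(5, 5, 64, 64)                  # synthetic 5x5 angular views
image = refocus(lf, alpha=1.0)
```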

    Using emotions in intelligent virtual environments: the EJaCalIVE framework

    Nowadays, there is a need for new applications that allow the definition and implementation of safe environments that attend to users' needs and increase their wellbeing. In this sense, this paper introduces the EJaCalIVE framework, which allows the creation of emotional virtual environments that incorporate agents, eHealth-related devices, human actors, and emotions, projecting them virtually and managing the interaction between all the elements. In this way, the proposed framework allows the design and programming of intelligent virtual environments, as well as the simulation and detection of human emotions, which can be used to improve the decision-making processes of the developed entities. The paper also presents a case study that reinforces the need for this framework in common environments like nursing homes or assisted living facilities. Concretely, the case study proposes the simulation of a residence for the elderly. The main goal is to have an emotion-based simulation to train an assistance robot while avoiding the complexity involved in working with real elderly people. The main advantage of the proposed framework is that it provides a safe environment, that is, an environment where users are able to interact safely with the system. This work is partially supported by the MINECO/FEDER TIN2015-65515-C4-1-R and the FPI Grant AP2013-01276 awarded to Jaime-Andres Rincon. This work is also supported by COMPETE: POCI-01-0145-FEDER-007043 and Fundacao para a Ciencia e Tecnologia (FCT) within the projects UID/CEC/00319/2013 and Post-Doc scholarship SFRH/BPD/102696/2014 (A. Costa).
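
    Purely hypothetical sketch, not the EJaCalIVE API, of the interaction pattern the case study describes: an assistance agent selects an action from a simulated resident's emotional state, so the robot policy can be exercised without involving real elderly users; all class and function names here are invented for illustration.

```python
# Hypothetical emotion-driven assistance loop (invented names, not EJaCalIVE).
from dataclasses import dataclass
import random

@dataclass
class SimulatedResident:
    emotion: str = "neutral"                       # e.g., emitted by the simulated IVE

    def step(self):
        self.emotion = random.choice(["calm", "anxious", "sad", "neutral"])

def assistance_policy(emotion: str) -> str:
    """Map a detected emotion to an assistance action."""
    return {
        "anxious": "approach slowly and reassure",
        "sad": "offer conversation",
        "calm": "continue routine tasks",
    }.get(emotion, "observe")

resident = SimulatedResident()
for _ in range(3):                                 # toy training/interaction loop
    resident.step()
    print(resident.emotion, "->", assistance_policy(resident.emotion))
```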