
    An Improved Metric Space for Pixel Signatures


    Recent Developments of NEMO: Detection of Solar Eruptions Characteristics

    Recent developments in space instrumentation for solar observations and telemetry have created the need for advanced pattern recognition tools for the different classes of solar events. The Extreme ultraviolet Imaging Telescope (EIT) of the solar corona on board the SOHO spacecraft has uncovered a new class of eruptive events, which are often identified as signatures of Coronal Mass Ejection (CME) initiations on the solar disk. A crucial task is therefore the development of a tool for the automatic detection of CME precursors. The Novel EIT wave Machine Observing (NEMO) (http://sidc.be/nemo) code is an operational tool that automatically detects solar eruptions in EIT image sequences. NEMO applies techniques based on the general statistical properties of the underlying physical mechanisms of eruptive events on the solar disc. In this work, the most recent updates of the NEMO code - which have increased the recognition efficiency for solar eruptions linked to CMEs - are presented. These updates provide calculations of the surface of the dimming region, implement a novel clustering technique for the dimmings, and set new criteria for flagging eruptive dimmings based on their complex characteristics. The efficiency of NEMO has increased significantly, enabling the extraction of dimmings observed near the solar limb as well as the detection of small-scale events. As a consequence, the detection efficiency of CME precursors and the forecasting of CMEs have been drastically improved. Furthermore, the catalogues of solar eruptive events constructed by NEMO can include a larger number of physical parameters associated with the dimming regions.
    Comment: 12 pages, 5 figures, submitted to Solar Physics
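    The abstract mentions surface (area) calculations and a clustering technique for dimming regions, but does not specify NEMO's actual algorithms. As an illustrative sketch only, dimming pixels in a base-difference image can be grouped by connected-component labelling and flagged by area; the threshold and minimum-area values below are hypothetical, not NEMO's:

```python
import numpy as np

def find_dimming_clusters(diff_image, thresh=-0.5, min_area=4):
    """Label connected dimming regions in a base-difference image.

    Pixels darker than `thresh` are dimming candidates; 4-connected
    components are grouped, their surface (pixel count) is computed,
    and clusters at or above `min_area` are flagged as candidates.
    """
    mask = diff_image < thresh
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                labels[i, j] = current
                stack = [(i, j)]
                while stack:                      # flood fill
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            stack.append((ny, nx))
    areas = {k: int((labels == k).sum()) for k in range(1, current + 1)}
    flagged = [k for k, a in areas.items() if a >= min_area]
    return labels, areas, flagged
```

    In practice an operational pipeline would add criteria beyond raw area (the abstract mentions contrast and morphological complexity), but the labelling step above is the common starting point.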

    Kepler Presearch Data Conditioning I - Architecture and Algorithms for Error Correction in Kepler Light Curves

    Kepler provides light curves of 156,000 stars with unprecedented precision. However, the raw data as they come from the spacecraft contain significant systematic and stochastic errors. These errors, which include discontinuities, systematic trends, and outliers, obscure the astrophysical signals in the light curves. Correcting these errors is the task of the Presearch Data Conditioning (PDC) module of the Kepler data analysis pipeline. The original version of PDC did not meet the extremely high performance requirements for the detection of minuscule planet transits or for highly accurate analysis of stellar activity and rotation. One particular deficiency was that astrophysical features were often removed as a side effect of error removal. In this paper we introduce the completely new and significantly improved version of PDC implemented in Kepler SOC 8.0. This new PDC version, which uses a Bayesian approach for the removal of systematics, reliably corrects errors in the light curves while preserving planet transits and other astrophysically interesting signals. We describe the architecture and algorithms of this new PDC module, show typical errors encountered in Kepler data, and illustrate the corrections using real light curve examples.
    Comment: Submitted to PASP. Also see the companion paper "Kepler Presearch Data Conditioning II - A Bayesian Approach to Systematic Error Correction" by Jeff C. Smith et al.
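    The abstract describes a Bayesian approach to removing systematics while preserving astrophysical signal, without giving the mathematics. A minimal sketch of the underlying idea - fitting systematic trends as a linear combination of basis vectors with a Gaussian prior on the coefficients (equivalent to ridge regression), so that the fit is shrunk toward the prior rather than absorbing astrophysical features - is shown below; the function name and parameters are hypothetical, not the PDC API:

```python
import numpy as np

def map_cotrend(flux, basis, prior_var=1.0, noise_var=1.0):
    """MAP estimate of a systematic trend as a linear combination of
    basis vectors, with a zero-mean Gaussian prior on the coefficients.

    Equivalent to ridge regression with lambda = noise_var / prior_var;
    a tighter prior (smaller prior_var) shrinks the fit more strongly,
    reducing the risk of removing real astrophysical variability.
    Returns (corrected_flux, coefficients).
    """
    B = np.asarray(basis, dtype=float)        # shape (n_samples, n_basis)
    y = np.asarray(flux, dtype=float)
    lam = noise_var / prior_var
    coeffs = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ y)
    trend = B @ coeffs
    return y - trend, coeffs
```

    The actual PDC module uses empirically derived priors from the ensemble behaviour of stars on the same CCD channel; the zero-mean prior here is only the simplest stand-in for that idea.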

    Review of Person Re-identification Techniques

    Person re-identification across different surveillance cameras with disjoint fields of view has become one of the most interesting and challenging subjects in the area of intelligent video surveillance. Although several methods have been developed and proposed, certain limitations and unresolved issues remain. In all existing re-identification approaches, feature vectors are extracted from segmented still images or video frames, and different similarity or dissimilarity measures are applied to these vectors. Some methods use simple constant metrics, whereas others use models to obtain optimised metrics. Some build models based on local colour or texture information, and others build models based on the gait of people. In general, the main objective of all these approaches is to achieve a higher accuracy rate at a lower computational cost. This study summarises several developments in the recent literature and discusses the various available methods used in person re-identification. Specifically, their advantages and disadvantages are compared.
    Comment: Published 201
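    The common core of the approaches surveyed - extracting a feature vector per person image and ranking gallery candidates by a similarity or dissimilarity measure - can be sketched in a few lines. This is a generic illustration of "simple constant metrics" only, not any specific method from the review:

```python
import numpy as np

def rank_gallery(probe, gallery, metric="euclidean"):
    """Rank gallery feature vectors by dissimilarity to a probe vector.

    Returns indices of gallery entries, best match first. Learned
    metrics (e.g. Mahalanobis-style) would replace the distance here.
    """
    g = np.asarray(gallery, dtype=float)
    p = np.asarray(probe, dtype=float)
    if metric == "euclidean":
        d = np.linalg.norm(g - p, axis=1)
    elif metric == "cosine":
        d = 1.0 - (g @ p) / (np.linalg.norm(g, axis=1) * np.linalg.norm(p))
    else:
        raise ValueError(f"unknown metric: {metric}")
    return np.argsort(d)
```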

    A robust nonlinear scale space change detection approach for SAR images

    In this paper, we propose a change detection approach based on nonlinear scale space analysis of change images, for robust detection of the various changes incurred by natural phenomena and/or human activities in Synthetic Aperture Radar (SAR) images, using Maximally Stable Extremal Regions (MSERs). To achieve this, a variant of the log-ratio image of the multitemporal images is calculated, followed by Feature Preserving Despeckling (FPD) to generate nonlinear scale space images exhibiting different trade-offs between speckle reduction and preservation of shape detail. The MSERs of each scale space image are found and then combined through a decision-level fusion strategy, namely "selective scale fusion" (SSF), in which the contrast and boundary curvature of each MSER are considered. The performance of the proposed method is evaluated using real multitemporal high-resolution TerraSAR-X images and synthetically generated multitemporal images composed of shapes with several orientations, sizes, and backscatter amplitude levels representing a variety of possible signatures of change. One of the main outcomes of this approach is that objects of different sizes and levels of contrast with their surroundings appear as stable regions in different scale space images; the fusion of results across scale space images thus yields good overall performance.
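    The pipeline starts from a log-ratio change image, a standard construction for SAR because speckle is multiplicative: taking the ratio of the two acquisitions (in the log domain) highlights change while suppressing the common backscatter level. A minimal sketch of the standard log-ratio operator is below; the paper uses a variant of it, which the abstract does not specify:

```python
import numpy as np

def log_ratio(img1, img2, eps=1e-6):
    """Standard log-ratio change image for two co-registered SAR
    amplitude images. `eps` guards against division by, or log of,
    zero-valued pixels. Large values indicate likely change."""
    a = np.asarray(img1, dtype=float)
    b = np.asarray(img2, dtype=float)
    return np.abs(np.log((b + eps) / (a + eps)))
```

    Each nonlinear scale space level would then be produced by despeckling this image at increasing strengths before MSER extraction.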

    Offline signature verification using classifier combination of HOG and LBP features

    We present an offline signature verification system based on a signature's local histogram features. The signature is divided into zones using both the Cartesian and polar coordinate systems, and two different histogram features are calculated for each zone: histogram of oriented gradients (HOG) and histogram of local binary patterns (LBP). Classification is performed using Support Vector Machines (SVMs), and two different training approaches are investigated, namely global and user-dependent SVMs. User-dependent SVMs, trained separately for each user, learn to differentiate that user's signature from others, whereas a single global SVM, trained with difference vectors between the features of query and reference signatures of all users, learns how to weight dissimilarities. The global SVM classifier is trained using genuine and forgery signatures of subjects excluded from the test set, while user-dependent SVMs are trained separately for each subject using genuine signatures and random forgeries. The fusion of all classifiers (global and user-dependent classifiers trained with each feature type) achieves a 15.41% equal error rate on the skilled forgery test of the GPDS-160 signature database, without using any skilled forgeries in training.
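    The writer-independent (global) classifier operates not on raw zone histograms but on difference vectors between a query and a reference signature, so a single model can be shared across users. A minimal sketch of that input construction, using the common absolute-difference form (the abstract does not state which difference operator the paper uses), is:

```python
import numpy as np

def difference_vectors(query_feats, ref_feats):
    """Build |query - reference| difference vectors for every
    (query, reference) pair: the input representation for a single
    global (writer-independent) classifier. A small difference vector
    suggests a genuine match; a large one suggests a forgery.

    query_feats: (n_query, dim), ref_feats: (n_ref, dim)
    returns: (n_query * n_ref, dim)
    """
    q = np.asarray(query_feats, dtype=float)
    r = np.asarray(ref_feats, dtype=float)
    return np.abs(q[:, None, :] - r[None, :, :]).reshape(-1, q.shape[1])
```

    An SVM trained on such vectors (genuine pairs vs. forgery pairs) effectively learns a per-dimension weighting of dissimilarities, as the abstract describes.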

    Aerial Vehicle Tracking by Adaptive Fusion of Hyperspectral Likelihood Maps

    Hyperspectral cameras provide unique spectral signatures for consistently distinguishing materials, which can be exploited in surveillance tasks. In this paper, we propose a novel real-time hyperspectral likelihood maps-aided tracking method (HLT), inspired by an adaptive hyperspectral sensor. A moving object tracking system generally consists of registration, object detection, and tracking modules. We focus on the target detection part and remove the need to build offline classifiers and tune a large number of hyperparameters; instead, we learn a generative target model online for hyperspectral channels ranging from visible to infrared wavelengths. The key idea is that our adaptive fusion method can combine likelihood maps from multiple bands of hyperspectral imagery into a single, more distinctive representation, increasing the margin between the mean values of foreground and background pixels in the fused map. Experimental results show that HLT not only outperforms all established fusion methods but is also on par with the current state-of-the-art hyperspectral target tracking frameworks.
    Comment: Accepted at the International Conference on Computer Vision and Pattern Recognition Workshops, 201
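    The stated goal of the fusion step - combining per-band likelihood maps so that the foreground/background mean margin of the fused map grows - can be illustrated with a simple margin-weighted average. This is a hedged sketch of the idea only; the paper's adaptive fusion rule is not given in the abstract, and the weighting below is hypothetical:

```python
import numpy as np

def fuse_likelihood_maps(maps, fg_mask):
    """Fuse per-band likelihood maps into a single map, weighting each
    band by the margin between its mean foreground and mean background
    likelihood. Bands with negative margin are dropped (weight 0).

    maps: (n_bands, H, W) likelihood maps; fg_mask: (H, W) boolean.
    """
    maps = np.asarray(maps, dtype=float)
    fg = np.asarray(fg_mask, dtype=bool)
    margins = np.array([m[fg].mean() - m[~fg].mean() for m in maps])
    w = np.clip(margins, 0.0, None)
    w = w / w.sum() if w.sum() > 0 else np.full(len(maps), 1.0 / len(maps))
    return np.tensordot(w, maps, axes=1)        # weighted sum over bands
```

    By construction, uninformative bands (margin near zero) contribute little, so the fused map's foreground/background separation is at least as good as a plain average of bands.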