
    Methodology for anomalous source detection in sparse gamma-ray spectra

    The dangers of rogue nuclear material remain a top concern despite increased attention and strides in computational protocols. Single, mobile-detector methodologies for localizing sources via autonomous surveying have become popular with the maturation of the machine learning (ML) and statistical learning (SL) fields and with increased access to drone (quad-copter) technology. These options, however, face task-inherent impediments that either degrade the quality of collected gamma-ray spectra or require high-quality information on source and background spectrum compositions. Such hurdles include limited dwell periods, fluctuating and/or unknown background, weak source signal due to large distance and/or small or shielded activity, and the low sensitivity of mobile detectors. As a result, collected gamma-ray spectra are sparse, containing many zero-count energy channels, and carry a relatively large background presence. This combination of factors, together with the natural variance in second-to-second count rates, yields low-quality information for making navigational decisions. In this thesis, an SL algorithm is presented for extracting source-count estimates from time-series, sparse gamma-ray spectra with no prior training required. A Gaussian process with a linear innovation sequences procedure is used to efficiently update ongoing spectral estimates, with real-time training and hyperparameters defined by detector characteristics. Being free of prior training and assumptions allows such an algorithm to be used in a wide variety of sparse-data settings, whereas a trained solution would have very narrow applications. We evaluated the effectiveness of this approach for anomaly detection using a background spectra dataset collected with a Kromek D3S and simulated source spectra. In anomaly detection tests with a source count rate at half that of the background, the method achieves an area under the ROC curve of 0.9. Further, deployment with an ML-guided navigation scheme shows that, after an anomaly is detected, estimated and true gross source counts have an average correlation of 0.998, while estimated and true gross background counts have an average correlation of 0.876.
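    The core of such a scheme can be illustrated compactly. Below is a minimal Python sketch, not the thesis implementation, of a Kalman-style linear innovation update applied independently per energy channel; the channel count, priors, injected source, and Poisson-motivated noise model are illustrative assumptions.

```python
# A minimal sketch (not the thesis implementation) of a Kalman-style linear
# innovation update applied independently per energy channel. Channel count,
# priors, and the Poisson-motivated noise model are illustrative assumptions.
import numpy as np

def innovation_update(mean, var, obs, obs_var):
    """One linear innovation step: blend the running estimate with new counts."""
    gain = var / (var + obs_var)        # Kalman gain
    innovation = obs - mean             # surprise relative to the estimate
    return mean + gain * innovation, (1.0 - gain) * var

rng = np.random.default_rng(0)
n_channels = 512
true_bkg = rng.gamma(shape=2.0, scale=0.5, size=n_channels)  # ~1 count/s/channel

mean = np.full(n_channels, 1.0)     # prior mean counts per dwell per channel
var = np.full(n_channels, 4.0)      # broad prior variance

for _ in range(60):                 # 60 one-second background dwells
    obs = rng.poisson(true_bkg).astype(float)
    mean, var = innovation_update(mean, var, obs, np.maximum(obs, 1.0))

# A new dwell containing a source: excess counts around a photopeak
new_obs = rng.poisson(true_bkg).astype(float)
new_obs[200:210] += 6.0             # injected source counts (illustrative)
z = (new_obs - mean) / np.sqrt(var + mean)   # ~N(0,1) under background only
print("channels flagged:", np.flatnonzero(z > 4.0))
```

    Because the update is a per-channel scalar recursion, its cost grows only linearly with the number of energy channels, which is what makes real-time operation on a mobile detector plausible.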

    Object Image Linking of Earth Orbiting Objects in the Presence of Cosmics

    In survey series of unknown Earth-orbiting objects, no a priori orbital elements are available. In surveys with wide-field telescopes, many non-resolved object images may be present on the single frames of a series. Reliable methods are needed to associate the object images stemming from the same object with each other, so-called linking. The presence of cosmic-ray events, so-called Cosmics, complicates reliable linking of non-resolved images. The tracklets of object images allow exact positions to be extracted for a first orbit determination. A two-step method is used and tested on observation frames from space debris surveys of the ESA Space Debris Telescope, located on Tenerife, Spain: in a first step, a cosmic filter is applied to the single observation frames; four different filter approaches are compared and tested for performance. In a second step, the detected object images are linked across the observation series based on the assumption of a linearly accelerated movement of the objects over the frame during the series, which is updated with every object image that is successfully linked.

    Comment: Accepted for Publication; Advances in Space Research, 201
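    As an illustration of the second step, the following Python sketch (our assumption, not the paper's code) links detections across frames under a linearly accelerated motion model: each tracklet's next position is extrapolated quadratically, and the nearest detection within a gate radius is accepted, which tends to reject Cosmics that appear far from the prediction. The gate radius and coordinates are hypothetical.

```python
# A minimal sketch (assumed, not the paper's code): linking detections across
# frames under the assumption of linearly accelerated motion. After two linked
# positions we predict the next one by quadratic extrapolation and accept the
# nearest detection inside a gate radius.
import numpy as np

def predict_next(track, dt=1.0):
    """Extrapolate the next (x, y) position of a tracklet."""
    p = np.asarray(track)
    if len(p) == 1:
        return p[-1]                          # no velocity estimate yet
    v = (p[-1] - p[-2]) / dt                  # current velocity estimate
    if len(p) == 2:
        return p[-1] + v * dt
    a = (p[-1] - 2 * p[-2] + p[-3]) / dt**2   # constant-acceleration term
    return p[-1] + v * dt + 0.5 * a * dt**2

def link(track, detections, gate=3.0):
    """Attach the detection closest to the prediction, if inside the gate."""
    pred = predict_next(track)
    d = np.linalg.norm(detections - pred, axis=1)
    i = int(np.argmin(d))
    if d[i] <= gate:
        track.append(detections[i])
        return True
    return False                              # likely a Cosmic or a miss

track = [np.array([10.0, 10.0]), np.array([12.0, 11.0])]
frame3 = np.array([[50.0, 80.0],    # cosmic-ray event, far from prediction
                   [14.1, 12.1]])   # consistent with the motion model
print(link(track, frame3), track[-1])
```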

    Dynamic mode decomposition in vector-valued reproducing kernel Hilbert spaces for extracting dynamical structure among observables

    Understanding nonlinear dynamical systems (NLDSs) is challenging in a variety of engineering and scientific fields. Dynamic mode decomposition (DMD), a numerical algorithm for the spectral analysis of Koopman operators, has been attracting attention as a way of obtaining global modal descriptions of NLDSs without requiring explicit prior knowledge. However, since existing DMD algorithms are in principle formulated based on the concatenation of scalar observables, they are not directly applicable to data with dependent structures among observables, which take, for example, the form of a sequence of graphs. In this paper, we formulate Koopman spectral analysis for NLDSs with structures among observables and propose an estimation algorithm for this problem. This method can extract and visualize the underlying low-dimensional global dynamics of NLDSs with structures among observables from data, which can be useful in understanding the underlying dynamics of such NLDSs. To this end, we first formulate the problem of estimating spectra of the Koopman operator defined in vector-valued reproducing kernel Hilbert spaces, and then develop an estimation procedure for this problem by reformulating tensor-based DMD. As a special case of our method, we propose Graph DMD, a numerical algorithm for Koopman spectral analysis of graph dynamical systems using a sequence of adjacency matrices. We investigate the empirical performance of our method using synthetic and real-world data.

    Comment: 34 pages with 4 figures; published in Neural Networks, 201
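    For context, the baseline that the paper generalizes is standard (exact) DMD, which eigendecomposes the best-fit linear operator between successive data snapshots. The Python sketch below shows only this baseline, not the vector-valued RKHS or Graph DMD formulation; the rotation-matrix test system is illustrative.

```python
# A minimal sketch of standard (exact) DMD, the baseline the paper
# generalizes; not the vector-valued RKHS / Graph DMD method itself.
import numpy as np

def dmd(X, r):
    """Exact DMD: eigendecompose the best-fit linear operator A with
    X[:, 1:] ~ A @ X[:, :-1], via a rank-r truncated SVD."""
    X0, X1 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X0, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ X1 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)               # DMD (Koopman) eigenvalues
    modes = X1 @ Vh.conj().T @ np.diag(1.0 / s) @ W   # exact DMD modes
    return eigvals, modes

# Illustrative linear test system: a slow rotation, so the true operator's
# eigenvalues lie on the unit circle.
theta = 0.1
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
X = np.empty((2, 100))
X[:, 0] = [1.0, 0.0]
for k in range(99):
    X[:, k + 1] = A_true @ X[:, k]

eigvals, _ = dmd(X, r=2)
print(np.abs(eigvals))   # both ~1.0 (purely oscillatory dynamics)
```

    The paper's contribution can be read as replacing the columns of X, plain concatenated scalar observables, with elements of a vector-valued RKHS, so that structured observables such as adjacency matrices can be handled directly.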

    A comprehensive overview of the Cold Spot

    The report of a significant deviation from Gaussianity in the distribution of the CMB temperature anisotropies (soon after the public release of the WMAP data in 2003) has become one of the most solid WMAP anomalies. This detection is based on an excess of the kurtosis of the Spherical Mexican Hat Wavelet (SMHW) coefficients at scales of around 10 degrees. At these scales, a prominent feature, located in the southern Galactic hemisphere, was highlighted from the rest of the SMHW coefficients: the Cold Spot. This article presents a comprehensive overview of the study of the Cold Spot, paying attention to the non-Gaussianity detection methods, the morphological characteristics of the Cold Spot, and the possible sources studied in the literature to explain its nature. Special emphasis is placed on the Cold Spot's compatibility with a cosmic texture, commenting on future tests that would help to support or discard this hypothesis.

    Comment: 21 pages, 14 figures. Accepted for publication in the Advances in Astronomy special issue "Testing the Gaussianity and Statistical Isotropy of the Universe"
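    To make the detection statistic concrete, here is a minimal flat-sky Python sketch (our simplification; the actual analyses use the Spherical Mexican Hat Wavelet on the full sphere) that measures the excess kurtosis of Mexican-hat wavelet coefficients against Gaussian simulations. The map size, wavelet scale, and injected cold feature are illustrative.

```python
# A minimal flat-sky sketch (our simplification, not the published pipeline):
# excess kurtosis of Mexican-hat wavelet coefficients, calibrated against
# Gaussian simulations. Map size, scale, and injected feature are illustrative.
import numpy as np

def mexican_hat(n, scale):
    """2-D Mexican hat (negative Laplacian of a Gaussian) kernel."""
    x = np.arange(n) - n // 2
    xx, yy = np.meshgrid(x, x)
    r2 = (xx**2 + yy**2) / scale**2
    return (2.0 - r2) * np.exp(-r2 / 2.0)

def wavelet_kurtosis(cmb_map, scale):
    """Excess kurtosis of the wavelet coefficients (0 for a Gaussian field)."""
    kern = mexican_hat(cmb_map.shape[0], scale)
    coeff = np.real(np.fft.ifft2(np.fft.fft2(cmb_map) * np.fft.fft2(kern)))
    c = (coeff - coeff.mean()) / coeff.std()
    return np.mean(c**4) - 3.0

rng = np.random.default_rng(1)
n, scale = 256, 12.0
sims = np.array([wavelet_kurtosis(rng.normal(size=(n, n)), scale)
                 for _ in range(100)])

test_map = rng.normal(size=(n, n))
test_map[64:96, 64:96] -= 1.5         # inject a localized cold feature
k_obs = wavelet_kurtosis(test_map, scale)
print(k_obs, "fraction of Gaussian sims below:", np.mean(sims < k_obs))
```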

    Weakly and Partially Supervised Learning Frameworks for Anomaly Detection

    The automatic detection of abnormal events in surveillance footage remains an open concern for the research community. Since protection is the primary purpose of installing video surveillance systems, maintaining the monitoring capability needed to keep the public safe, and responding rapidly enough to serve this purpose, is a significant challenge even for humans. Human capacity has not kept pace with the increased use of surveillance systems, which require substantial supervision to identify unusual events that could put any person or company at risk; moreover, a great deal of labor and time is wasted because anomalous events are extremely rare compared to normal ones. Consequently, the need for an automatic detection algorithm for abnormal events has become crucial in video surveillance. Even though it has been the scope of various research works published in the last decade, state-of-the-art performance is still unsatisfactory and far below what is required for an effective deployment of this kind of technology in fully unconstrained scenarios. Despite all the research done in this area, the automatic detection of abnormal events remains a challenge for many reasons. Starting with environmental diversity, the resemblance between movements in different actions, and crowded scenarios, accounting for all possible standard patterns that define a normal action is undoubtedly difficult or impossible. Beyond the difficulty of solving these problems, the substantive problem lies in obtaining sufficient amounts of labeled abnormal samples, which is fundamental for computer vision algorithms. More importantly, obtaining an extensive set of different videos that satisfy the previously mentioned conditions is not a simple task. In addition to being effort- and time-consuming, defining the boundary between normal and abnormal actions is usually unclear.

    Hence, the main objective of this work is to provide several solutions to the problems mentioned above, by analyzing previous state-of-the-art methods and presenting an extensive overview to clarify the concepts employed in capturing normal and abnormal patterns. By exploring different strategies, we were also able to develop new approaches that consistently advance state-of-the-art performance. Moreover, we announce the availability of a new large-scale, first-of-its-kind dataset, fully annotated at the frame level for a specific anomaly detection event with a wide diversity of fighting scenarios, that can be freely used by the research community. With the purpose of requiring minimal supervision, two different proposals are described: the first method employs the recent technique of self-supervised learning to avoid the laborious task of annotation, where the training set is autonomously labeled using an iterative learning framework composed of two independent experts that feed data to each other through a Bayesian framework. The second proposal explores a new method to learn an anomaly ranking model in the multiple instance learning paradigm by leveraging weakly labeled videos, where labels are provided at the video level. The experiments were conducted on several well-known datasets, and our solutions solidly outperform the state-of-the-art.
    Additionally, as a proof-of-concept system, we also present the results of real-world simulations collected in different environments to field-test our learned models.
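    To illustrate the paradigm behind the second proposal, the following Python sketch shows a multiple-instance ranking objective of the kind commonly used for weakly labeled videos (in the spirit of Sultani et al., 2018). It is an illustration of the paradigm, not the thesis' exact model; the regularization weights and segment scores are hypothetical.

```python
# A minimal sketch of a MIL ranking objective for weakly labeled videos
# (video-level labels only), in the spirit of Sultani et al. (2018); an
# illustration of the paradigm, not the thesis' exact model.
import numpy as np

def mil_ranking_loss(scores_anom, scores_norm, l_smooth=8e-5, l_sparse=8e-5):
    """Hinge ranking loss between two bags of segment anomaly scores.

    scores_anom : segment scores of a video labeled anomalous (at least one
                  segment is truly anomalous, but we don't know which one)
    scores_norm : segment scores of a video labeled normal
    """
    # Rank the most anomalous segment of the anomalous bag above the most
    # anomalous segment of the normal bag.
    hinge = max(0.0, 1.0 - np.max(scores_anom) + np.max(scores_norm))
    smooth = np.sum(np.diff(scores_anom) ** 2)   # scores vary smoothly in time
    sparse = np.sum(scores_anom)                 # anomalous segments are rare
    return hinge + l_smooth * smooth + l_sparse * sparse

anom = np.array([0.1, 0.2, 0.9, 0.3])   # one segment scores high
norm = np.array([0.1, 0.2, 0.1, 0.2])
print(mil_ranking_loss(anom, norm))     # small: the bags are already well ranked
```

    Because only the bag maxima enter the hinge term, a model can be trained from video-level labels alone, which is what makes the weak supervision practical.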