8 research outputs found

    Using Convolutional Neural Networks for the Helicity Classification of Magnetic Fields

    Full text link
    The presence of non-zero helicity in intergalactic magnetic fields is a smoking gun for their primordial origin, since such fields have to be generated by processes that break CP invariance. As an experimental signature of helical magnetic fields, an estimator Q based on the triple scalar product of the wave-vectors of photons generated in electromagnetic cascades from, e.g., TeV blazars has been suggested previously. We propose to apply deep learning to helicity classification employing Convolutional Neural Networks and show that this method outperforms the Q estimator.
    Comment: 14 pages, extended version of a contribution to the proceedings of the 37th ICRC 2021
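    The parity-sensitive quantity behind the Q estimator is the triple scalar product (k1 × k2) · k3, whose sign flips under a parity transformation; averaging it over photon triplets can therefore probe helicity. A minimal sketch of that idea (the published estimator involves additional weighting and binning not shown here):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    """Dot product of two 3-vectors."""
    return sum(x * y for x, y in zip(a, b))

def triple_product(k1, k2, k3):
    """Signed volume (k1 x k2) . k3; it changes sign under parity,
    which is what makes averages of it sensitive to helicity."""
    return dot(cross(k1, k2), k3)

def q_estimator(triplets):
    """Illustrative average of the triple product over photon wave-vector
    triplets (a hypothetical simplification of the published Q)."""
    return sum(triple_product(*t) for t in triplets) / len(triplets)
```

    A right-handed triplet such as the coordinate axes gives +1, and negating all three vectors (a parity flip) gives -1, so a parity-symmetric ensemble averages to zero.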

    ODIN AD: a framework supporting the life-cycle of time series anomaly detection applications

    Get PDF
    Anomaly detection (AD) in numerical temporal data series is a prominent task in many domains, including the analysis of industrial equipment operation, the processing of IoT data streams, and the monitoring of appliance energy consumption. The life-cycle of an AD application with a Machine Learning (ML) approach requires data collection and preparation, algorithm design and selection, training, and evaluation. All these activities contain repetitive tasks which could be supported by tools. This paper describes ODIN AD, a framework assisting the life-cycle of AD applications in the phases of data preparation, prediction performance evaluation, and error diagnosis.
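    The prediction-performance-evaluation phase mentioned above typically scores binary anomaly labels against a ground truth. A minimal point-wise sketch (the abstract does not detail ODIN AD's actual metric set, so this is only illustrative):

```python
def anomaly_prf(y_true, y_pred):
    """Point-wise precision, recall, and F1 for binary anomaly labels
    (1 = anomaly). A hypothetical, simplified stand-in for the kind of
    evaluation an AD framework supports."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

    Point-wise scoring is the simplest choice; range-based variants that credit partially detected anomalous windows are common for time series but omitted here.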

    Identification of salient iconography features in artwork analysis

    No full text
    Iconography studies the visual content of artworks by considering the themes portrayed in them and their representation. Computer Vision has been used to identify iconographic subjects in paintings, and Convolutional Neural Networks (CNNs) enabled the effective classification of characters in Christian art paintings. However, it still has to be demonstrated whether the classification results obtained by CNNs rely on the same iconographic properties that human experts exploit when studying iconography. A suitable approach for exposing the classification process of neural models relies on Class Activation Maps, which emphasize the areas of an image contributing the most to the classification. This work compares state-of-the-art algorithms (CAM, Grad-CAM, Grad-CAM++, and Smooth Grad-CAM++) in terms of their capacity to identify the iconographic attributes that determine the classification of characters in Christian art paintings. Quantitative and qualitative analyses show that Grad-CAM, Grad-CAM++, and Smooth Grad-CAM++ have similar performances, while CAM has lower efficacy. Smooth Grad-CAM++ isolates multiple disconnected image regions that identify small iconographic symbols well. Grad-CAM produces wider and more contiguous areas that cover large iconographic symbols better. The illustrated analysis is a step towards the computer-aided study of the variations in the positioning and mutual relations of iconographic elements in artworks, and opens the way to the automatic creation of bounding boxes for training detectors of iconographic symbols in Christian art images.
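    The original CAM construction compared above is a weighted sum of the last convolutional layer's feature maps, using the classifier weights of the target class. A minimal sketch with plain nested lists standing in for feature maps (shapes and names are illustrative, not the paper's code):

```python
def class_activation_map(feature_maps, weights):
    """CAM for one class: weighted sum of the final conv layer's feature
    maps, weighted by that class's classifier weights (Zhou et al.'s CAM).
    feature_maps: list of HxW grids (lists of lists), one per channel;
    weights: one scalar per channel."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, wgt in zip(feature_maps, weights):
        for i in range(h):
            for j in range(w):
                cam[i][j] += wgt * fmap[i][j]
    return cam
```

    Grad-CAM and its variants differ only in how the per-channel weights are obtained (from gradients rather than classifier weights), which is why they apply to architectures without a global-average-pooling head.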

    DeepGraviLens: a Multi-Modal Architecture for Classifying Gravitational Lensing Data

    Get PDF
    Gravitational lensing is the relativistic effect generated by massive bodies, which bend the space-time surrounding them. It is a deeply investigated topic in astrophysics and allows validating theoretical relativistic results and studying faint astrophysical objects that would not be visible otherwise. In recent years Machine Learning methods have been applied to support the analysis of gravitational lensing phenomena by detecting lensing effects in data sets consisting of images associated with brightness variation time series. However, the state-of-the-art approaches either consider only images and neglect time-series data or achieve relatively low accuracy on the most difficult data sets. This paper introduces DeepGraviLens, a novel multi-modal network that classifies spatio-temporal data belonging to one non-lensed system type and three lensed system types. It surpasses the current state-of-the-art accuracy results by ≈19% to ≈43%, depending on the considered data set. Such an improvement will enable the acceleration of the analysis of lensed objects in upcoming astrophysical surveys, which will exploit the petabytes of data collected, e.g., from the Vera C. Rubin Observatory.
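    A common way to combine an image branch and a time-series branch in a multi-modal classifier is late fusion: each branch produces an embedding, the embeddings are concatenated, and a final layer scores the classes. The sketch below is a hypothetical simplification (the actual DeepGraviLens architecture is not described in the abstract):

```python
def late_fusion_logits(img_feats, ts_feats, weights, biases):
    """Hypothetical late-fusion head: concatenate the image-branch and
    time-series-branch embeddings and apply one linear layer to produce
    a score per class (e.g., one non-lensed and three lensed types).
    weights: one row of coefficients per class; biases: one per class."""
    fused = list(img_feats) + list(ts_feats)
    return [sum(w * x for w, x in zip(row, fused)) + b
            for row, b in zip(weights, biases)]
```

    Fusing after each branch has summarized its modality lets the image network and the sequence network keep architectures suited to their inputs, which is the usual motivation for this design.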

    Proposals Generation for Weakly Supervised Object Detection in Artwork Images

    No full text
    Object Detection requires many precise annotations, which are available for natural images but not for many non-natural data sets such as artwork data sets. A solution is using Weakly Supervised Object Detection (WSOD) techniques that learn accurate object localization from image-level labels. Studies have demonstrated that state-of-the-art end-to-end architectures may not be suitable for domains in which images or classes differ considerably from those used to pre-train networks. This paper presents a novel two-stage Weakly Supervised Object Detection approach for obtaining accurate bounding boxes on non-natural data sets. The proposed method exploits existing classification knowledge to generate pseudo-ground truth bounding boxes from Class Activation Maps (CAMs). The automatically generated annotations are used to train a robust Faster R-CNN object detector. Quantitative and qualitative analysis shows that bounding boxes generated from CAMs can compensate for the lack of manually annotated ground truth (GT) and that an object detector, trained with such pseudo-GT, surpasses end-to-end WSOD state-of-the-art methods on ArtDL 2.0 (≈41.5% mAP) and IconArt (≈17% mAP), two artwork data sets. The proposed solution is a step towards the computer-aided study of non-natural images and opens the way to more advanced tasks, e.g., automatic artwork image captioning for digital archive applications.
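    A standard way to turn a class activation map into a pseudo-ground-truth box is to threshold the map relative to its peak and take the extent of the surviving cells. A minimal sketch of that step (the paper's actual pipeline may refine this considerably):

```python
def cam_to_box(cam, thresh):
    """Pseudo-GT box from a class activation map: keep cells whose
    activation is at least `thresh` times the map's peak and return
    their bounding extent as (x1, y1, x2, y2) in cell coordinates.
    Illustrative sketch, not the published method."""
    peak = max(max(row) for row in cam)
    if peak <= 0:
        return None  # no activation to localize
    cells = [(i, j) for i, row in enumerate(cam)
             for j, v in enumerate(row) if v >= thresh * peak]
    ys = [i for i, _ in cells]
    xs = [j for _, j in cells]
    return (min(xs), min(ys), max(xs), max(ys))
```

    Boxes produced this way can then serve as training annotations for a fully supervised detector such as Faster R-CNN, which is the two-stage idea the abstract describes.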

    Comparing CAM Algorithms for the Identification of Salient Image Features in Iconography Artwork Analysis

    No full text
    Iconography studies the visual content of artworks by considering the themes portrayed in them and their representation. Computer Vision has been used to identify iconographic subjects in paintings, and Convolutional Neural Networks enabled the effective classification of characters in Christian art paintings. However, it still has to be demonstrated whether the classification results obtained by CNNs rely on the same iconographic properties that human experts exploit when studying iconography, and whether the architecture of a classifier trained on whole artwork images can be exploited to support the much harder task of object detection. A suitable approach for exposing the classification process of neural models relies on Class Activation Maps, which emphasize the areas of an image contributing the most to the classification. This work compares state-of-the-art algorithms (CAM, Grad-CAM, Grad-CAM++, and Smooth Grad-CAM++) in terms of their capacity to identify the iconographic attributes that determine the classification of characters in Christian art paintings. Quantitative and qualitative analyses show that Grad-CAM, Grad-CAM++, and Smooth Grad-CAM++ have similar performances, while CAM has lower efficacy. Smooth Grad-CAM++ isolates multiple disconnected image regions that identify small iconographic symbols well. Grad-CAM produces wider and more contiguous areas that cover large iconographic symbols better. The salient image areas computed by the CAM algorithms have been used to estimate object-level bounding boxes, and a quantitative analysis shows that the boxes estimated with Grad-CAM reach 55% average IoU, 61% GT-known localization, and 31% mAP. The obtained results are a step towards the computer-aided study of the variations in the positioning and mutual relations of iconographic elements in artworks and open the way to the automatic creation of bounding boxes for training detectors of iconographic symbols in Christian art images.
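    The 55% average IoU reported above refers to Intersection-over-Union, the standard overlap metric between an estimated box and a ground-truth box. A self-contained sketch of the computation:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as
    (x1, y1, x2, y2) with x1 <= x2 and y1 <= y2. Returns a value in
    [0, 1]; 1.0 means identical boxes, 0.0 means no overlap."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap extents, clamped to zero when the boxes are disjoint.
    ix = max(0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0
```

    GT-known localization, also cited above, counts a box as correct when its IoU with the ground truth exceeds a threshold (commonly 0.5), given that the class is known.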

    On the Visualization of Semantic-based Mappings

    No full text
    The popularity of the semantic web in many domains, such as transportation, has led to an ever-increasing development of standards, vocabularies, and ontologies, which generates problems of heterogeneity and lack of interoperability. To address this issue, a large body of research focused on providing various mapping tools and techniques to translate data from one standard to another to foster smooth communication among them. While valuable advancements in mapping techniques have been achieved so far, the explainability and usability of such tools have been overlooked. Since explainability of software is being recognized as a crucial non-functional requirement for complex systems, the development of self-explaining and user-friendly graphical interfaces is becoming a pressing need. In this paper we present S2SMaT, our contribution to the problem of visualization of mappings. The tool helps users easily navigate the structure of standards, understand the suggested mappings between their terms, and in general more easily interact with the system.
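    At its core, translating data from one standard to another applies a term-level mapping between the two vocabularies. A minimal sketch, with hypothetical field names, of the kind of mapping a tool like S2SMaT visualizes (the tool's internal representation is not described in the abstract):

```python
def apply_mapping(record, term_map):
    """Rename each source term in `record` to its target term using
    `term_map`; terms with no mapping are returned separately so a
    reviewer can see what still needs alignment. Hypothetical
    simplification for illustration."""
    translated, unmapped = {}, {}
    for term, value in record.items():
        target = (translated if term in term_map else unmapped)
        target[term_map.get(term, term)] = value
    return translated, unmapped
```

    Separating mapped from unmapped terms mirrors what a mapping-visualization interface must surface: which correspondences are suggested, and which parts of a standard remain uncovered.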