15 research outputs found

    Dynamical Modeling of NGC 6809: Selecting the best model using Bayesian Inference

    Full text link
    The precise cosmological origin of globular clusters remains uncertain, a situation hampered by the difficulty observational approaches face in conclusively identifying the presence, or not, of dark matter in these systems. In this paper, we address this question through an analysis of the particular case of NGC 6809. While previous studies have performed dynamical modeling of this globular cluster using a small number of available kinematic data, they did not perform appropriate statistical inference tests for the choice of best model description; such statistical inference for model selection is important since, in general, different models can result in significantly different inferred quantities. With the latest kinematic data, we use Bayesian inference tests for model selection and thus obtain the best fitting models, as well as mass and dynamic mass-to-light ratio estimates. For this, we introduce a new likelihood function that provides more constrained distributions for the defining parameters of dynamical models. Initially we consider models with a known distribution function, and then model the cluster using solutions of the spherically symmetric Jeans equation; this latter approach depends upon the mass density profile and the anisotropy parameter $\beta$. In order to find the best description for the cluster we compare these models by calculating their Bayesian evidence. We find smaller mass and dynamic mass-to-light ratio values than previous studies, with the best fitting Michie model giving a constant mass-to-light ratio of $\Upsilon = 0.90^{+0.14}_{-0.14}$ and $M_{\text{dyn}} = 6.10^{+0.51}_{-0.88} \times 10^4\, M_{\odot}$. We exclude the significant presence of dark matter throughout the cluster, showing that no physically motivated distribution of dark matter can be present away from the cluster core.
    Comment: 12 pages, 10 figures, accepted for publication in MNRAS
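
    The paper's model comparison rests on computing the Bayesian evidence of each dynamical model and ranking the models by it. As a rough illustration only, the Python sketch below compares two toy velocity-dispersion models of simulated data by their log-evidence under a flat prior on a single parameter; the models, data, and prior range here are hypothetical stand-ins, not the NGC 6809 analysis.

```python
# Toy Bayesian model selection: rank two candidate dispersion profiles by
# their log-evidence, ln Z = ln \int L(data | s0) p(s0) ds0.
# Everything below (models, data, prior range) is illustrative only.
import numpy as np

rng = np.random.default_rng(0)
r = np.linspace(0.5, 10.0, 30)                       # projected radii (arbitrary units)
sigma_obs = 5.0 * np.exp(-r / 8.0) + rng.normal(0.0, 0.3, r.size)
sigma_err = 0.3

def model_flat(r, s0):                               # toy model 1: constant dispersion
    return np.full_like(r, s0)

def model_exp(r, s0):                                # toy model 2: declining dispersion
    return s0 * np.exp(-r / 8.0)

def log_likelihood(model, s0):
    resid = (sigma_obs - model(r, s0)) / sigma_err
    return -0.5 * np.sum(resid**2) - r.size * np.log(sigma_err * np.sqrt(2.0 * np.pi))

def log_evidence(model, s0_min=0.1, s0_max=20.0, n=2000):
    s0_grid = np.linspace(s0_min, s0_max, n)
    logL = np.array([log_likelihood(model, s0) for s0 in s0_grid])
    # marginalise over s0 with a flat prior, factoring out max(logL) for stability
    lmax = logL.max()
    integral = np.sum(np.exp(logL - lmax)) * (s0_grid[1] - s0_grid[0])
    return lmax + np.log(integral / (s0_max - s0_min))

ln_bayes_factor = log_evidence(model_exp) - log_evidence(model_flat)
print("ln Bayes factor (declining vs constant):", ln_bayes_factor)
```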

    Looking for change? Roll the Dice and demand Attention

    Full text link
    Change detection, i.e. the per-pixel identification of changes for some classes of interest from a set of bi-temporal co-registered images, is a fundamental task in the field of remote sensing. It remains challenging due to unrelated forms of change that appear at different times in the input images. Here, we propose a reliable deep learning framework for the task of semantic change detection in very high-resolution aerial images. Our framework consists of a new loss function, new attention modules, new feature extraction building blocks, and a new backbone architecture that is tailored for the task of semantic change detection. Specifically, we define a new form of set similarity that is based on an iterative evaluation of a variant of the Dice coefficient. We use this similarity metric to define a new loss function as well as a new spatial and channel convolution attention layer (the FracTAL). The new attention layer, designed specifically for vision tasks, is memory efficient and thus suitable for use at all levels of deep convolutional networks. Based on these, we introduce two new efficient self-contained feature extraction convolution units. We validate the performance of these feature extraction building blocks on the CIFAR10 reference data and compare the results with standard ResNet modules. Further, we introduce a new encoder/decoder scheme, a network macro-topology, that is tailored for the task of change detection. Our network moves away from any notion of subtraction of feature layers for identifying change. We validate our approach by showing excellent performance and achieving state-of-the-art scores (F1 and Intersection over Union, hereafter IoU) on two building change detection datasets, namely LEVIRCD (F1: 0.918, IoU: 0.848) and WHU (F1: 0.938, IoU: 0.882).
    Comment: 28 pages, under review in ISPRS P&RS, 1st revision. Figures of low quality due to compression for arXiv. Reduced abstract on arXiv due to character limitation.
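
    The loss in this framework is built from a set-similarity score between predicted and reference change masks. The snippet below is a minimal sketch of the plain soft Dice coefficient turned into a loss; the paper's actual metric is an iterated variant of a Dice/Tanimoto similarity, which is not reproduced here, and the random masks stand in for real network outputs.

```python
# Minimal soft-Dice similarity and loss between a predicted probability map and
# a binary reference mask. Illustrative only; not the iterated variant used in
# the paper, and numpy is used in place of a differentiable framework.
import numpy as np

def soft_dice(pred, target, eps=1e-6):
    """pred: probabilities in [0, 1]; target: binary mask of the same shape."""
    intersection = np.sum(pred * target)
    return (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

def dice_loss(pred, target):
    return 1.0 - soft_dice(pred, target)      # perfect prediction -> zero loss

rng = np.random.default_rng(1)
target = (rng.random((64, 64)) > 0.9).astype(float)          # sparse "change" pixels
pred = np.clip(target + rng.normal(0.0, 0.1, target.shape), 0.0, 1.0)
print("Dice loss:", dice_loss(pred, target))
```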

    SSG2: A new modelling paradigm for semantic segmentation

    Full text link
    State-of-the-art models in semantic segmentation primarily operate on single, static images, generating corresponding segmentation masks. This one-shot approach leaves little room for error correction, as the models lack the capability to integrate multiple observations for enhanced accuracy. Inspired by work on semantic change detection, we address this limitation by introducing a methodology that leverages a sequence of observables generated for each static input image. By adding this "temporal" dimension, we exploit strong signal correlations between successive observations in the sequence to reduce error rates. Our framework, dubbed SSG2 (Semantic Segmentation Generation 2), employs a dual-encoder, single-decoder base network augmented with a sequence model. The base model learns to predict the set intersection, union, and difference of labels from dual-input images. Given a fixed target input image and a set of support images, the sequence model builds the predicted mask of the target by synthesizing the partial views from each sequence step and filtering out noise. We evaluate SSG2 across three diverse datasets: UrbanMonitor, featuring orthoimage tiles from Darwin, Australia with five spectral bands and 0.2 m spatial resolution; ISPRS Potsdam, which includes true orthophoto images with multiple spectral bands and a 5 cm ground sampling distance; and ISIC2018, a medical dataset focused on skin lesion segmentation, particularly melanoma. The SSG2 model demonstrates rapid convergence within the first few tens of epochs and significantly outperforms UNet-like baseline models with the same number of gradient updates. However, the addition of the temporal dimension results in an increased memory footprint. While this could be a limitation, it is offset by the advent of higher-memory GPUs and by coding optimizations.
    Comment: 19 pages, under review
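
    The abstract describes a base network that predicts, for a target/support image pair, the intersection, union, and difference of their label sets, with a sequence model fusing these partial views across support images. The sketch below illustrates, with simulated binary masks rather than real network outputs, how such per-pair predictions can be re-synthesised into a target mask and averaged over the support set to suppress noise; it is a schematic, not the SSG2 architecture.

```python
# Schematic of fusing per-pair set predictions into a target mask.
# For binary labels, target = (target AND support) OR (target AND NOT support),
# so each support image yields a noisy partial reconstruction that can be
# majority-voted over the sequence. All masks below are simulated.
import numpy as np

rng = np.random.default_rng(2)
target = rng.random((32, 32)) > 0.7            # hypothetical ground-truth mask

votes = []
for _ in range(8):                             # 8 support images in the sequence
    support = rng.random((32, 32)) > 0.7
    noise = rng.random((32, 32)) < 0.05        # simulated prediction errors
    inter = (target & support) ^ noise         # surrogate "intersection" head
    diff = (target & ~support) ^ noise         # surrogate "difference" head
    votes.append((inter | diff).astype(float)) # partial reconstruction of target

fused = np.mean(votes, axis=0) > 0.5           # majority vote filters the noise
print("pixel accuracy of fused mask:", (fused == target).mean())
```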

    Deep Symbolic Regression for Physics Guided by Units Constraints: Toward the Automated Discovery of Physical Laws

    No full text
    Symbolic regression (SR) is the study of algorithms that automate the search for analytic expressions that fit data. While recent advances in deep learning have generated renewed interest in such approaches, the development of SR methods has not been focused on physics, where we have important additional constraints due to the units associated with our data. Here we present Φ-SO, a physical symbolic optimization framework for recovering analytical symbolic expressions from physics data using deep reinforcement learning techniques by learning units constraints. Our system is built, from the ground up, to propose solutions in which the physical units are consistent by construction. This is useful not only in eliminating physically impossible solutions, but also because the grammatical rules of dimensional analysis enormously restrict the freedom of the equation generator, thus vastly improving performance. The algorithm can be used to fit noiseless data, which can be useful, for instance, when attempting to derive an analytical property of a physical model, and it can also be used to obtain analytical approximations of noisy data. We test our machinery on a standard benchmark of equations from the Feynman Lectures on Physics and other physics textbooks, achieving state-of-the-art performance in the presence of noise (exceeding 0.1%) and showing that it is robust even in the presence of substantial (10%) noise. We showcase its abilities on a panel of examples from astrophysics.
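
    The central idea is that every quantity carries physical units, and the expression generator is only allowed to combine quantities in dimensionally consistent ways. The snippet below is a toy version of that bookkeeping, representing units as exponent vectors over base dimensions; it is a hand-rolled illustration, not the Φ-SO implementation.

```python
# Toy dimensional-analysis bookkeeping: units are exponent vectors over base
# dimensions [length, mass, time]; multiplication adds vectors, division
# subtracts them, and addition is only legal when the vectors match.
import numpy as np

UNITS = {
    "r":  np.array([1, 0, 0]),    # distance: m
    "m1": np.array([0, 1, 0]),    # mass: kg
    "m2": np.array([0, 1, 0]),
    "G":  np.array([3, -1, -2]),  # gravitational constant: m^3 kg^-1 s^-2
}

def mul(a, b): return a + b
def div(a, b): return a - b

def add(a, b):
    if not np.array_equal(a, b):
        raise ValueError("dimensionally inconsistent addition")
    return a

# candidate expression: G * m1 * m2 / r**2
units_out = div(mul(mul(UNITS["G"], UNITS["m1"]), UNITS["m2"]),
                mul(UNITS["r"], UNITS["r"]))
print("units of candidate:", units_out)        # [1, 1, -2] = kg m s^-2, a force
```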

    Symbolic regression driven by dimensional analysis for the automated discovery of physical laws and constants of nature

    No full text
    Given the abundance of empirical laws in astrophysics, the rise of agnostic and automatic methods for deriving them from data is of great interest. This concept is embodied in symbolic regression, which seeks to identify the best functional form fitting a dataset. Here we present a protocol for deducing not only physical laws but also the constants of nature that appear in them, together with their associated units. Our method is grounded in the Physical Symbolic Optimization framework, which integrates dimensional analysis with deep reinforcement learning. We showcase our approach on a panel of equations from (astro)-physics.
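
    When a constant of nature appears in a candidate law, its units follow from dimensional balance with the known quantities. The toy example below works this out for the constant in an inverse-square force law, using the same exponent-vector convention as the sketch above; it is a hand-worked illustration, not the paper's protocol.

```python
# Recovering the units of an unknown constant C in F = C * m1 * m2 / r**2 by
# dimensional balance: units(C) = units(F) - 2*units(m) + 2*units(r).
# Exponent vectors are over [length, mass, time]; illustrative only.
import numpy as np

F = np.array([1, 1, -2])      # force: kg m s^-2
m = np.array([0, 1, 0])       # mass: kg
r = np.array([1, 0, 0])       # distance: m

C = F - 2 * m + 2 * r
print("units of the constant:", C)   # [3, -1, -2] -> m^3 kg^-1 s^-2, i.e. G's units
```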