
    Dynamical Modeling of NGC 6809: Selecting the best model using Bayesian Inference

    The cosmological origin of globular clusters remains uncertain, in part because observational approaches have struggled to conclusively establish whether or not dark matter is present in these systems. In this paper, we address this question through an analysis of the particular case of NGC 6809. While previous studies have performed dynamical modeling of this globular cluster using a small amount of available kinematic data, they did not apply appropriate statistical inference tests for the choice of the best model description; such statistical inference for model selection is important since, in general, different models can result in significantly different inferred quantities. With the latest kinematic data, we use Bayesian inference tests for model selection and thus obtain the best-fitting models, as well as mass and dynamic mass-to-light ratio estimates. To this end, we introduce a new likelihood function that provides more constrained distributions for the defining parameters of dynamical models. Initially we consider models with a known distribution function, and then model the cluster using solutions of the spherically symmetric Jeans equation; this latter approach depends upon the mass density profile and the anisotropy parameter $\beta$. In order to find the best description of the cluster we compare these models by calculating their Bayesian evidence. We find smaller mass and dynamic mass-to-light ratio values than previous studies, with the best-fitting Michie model giving a constant mass-to-light ratio of $\Upsilon = 0.90^{+0.14}_{-0.14}$ and $M_{\text{dyn}} = 6.10^{+0.51}_{-0.88} \times 10^4\, M_{\odot}$. We exclude the significant presence of dark matter throughout the cluster, showing that no physically motivated distribution of dark matter can be present away from the cluster core.
    Comment: 12 pages, 10 figures, accepted for publication in MNRAS
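    As a rough illustration of evidence-based model selection (not the paper's actual models, likelihood, or data), the Python sketch below compares two toy one-parameter velocity-dispersion models for mock line-of-sight velocities by numerically integrating each likelihood over a flat prior; all names and the mock data are hypothetical.

        # Minimal sketch of Bayesian model comparison via the evidence,
        # using mock line-of-sight velocities and two toy one-parameter models.
        import numpy as np
        from scipy.integrate import simpson
        from scipy.stats import norm, t

        rng = np.random.default_rng(0)
        v = rng.normal(0.0, 5.0, size=200)   # mock line-of-sight velocities (km/s)

        def log_like_gaussian(sigma):
            # single, radius-independent dispersion sigma
            return norm.logpdf(v, scale=sigma).sum()

        def log_like_heavy_tail(sigma):
            # toy alternative with heavier tails (stand-in for a different model family)
            return t.logpdf(v, df=5, scale=sigma).sum()

        def log_evidence(log_like, sigma_grid):
            # evidence Z = integral of L(sigma) * p(sigma) d(sigma), flat prior on sigma
            prior = 1.0 / (sigma_grid[-1] - sigma_grid[0])
            logL = np.array([log_like(s) for s in sigma_grid])
            shift = logL.max()   # subtract the maximum for numerical stability
            return shift + np.log(simpson(np.exp(logL - shift) * prior, x=sigma_grid))

        sigmas = np.linspace(1.0, 15.0, 400)
        ln_bayes_factor = log_evidence(log_like_gaussian, sigmas) - log_evidence(log_like_heavy_tail, sigmas)
        print("ln(Bayes factor):", ln_bayes_factor)

    A positive ln(Bayes factor) favours the first model; the paper performs the analogous comparison between full dynamical models such as Michie models and Jeans-equation solutions.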

    Looking for change? Roll the Dice and demand Attention

    Change detection, i.e. the per-pixel identification of changes in classes of interest between a set of bi-temporal co-registered images, is a fundamental task in the field of remote sensing. It remains challenging due to unrelated forms of change that appear in the input images at different times. Here, we propose a reliable deep learning framework for the task of semantic change detection in very high-resolution aerial images. Our framework consists of a new loss function, new attention modules, new feature extraction building blocks, and a new backbone architecture that is tailored to the task of semantic change detection. Specifically, we define a new form of set similarity based on an iterative evaluation of a variant of the Dice coefficient. We use this similarity metric to define a new loss function as well as a new spatial and channel convolution attention layer (the FracTAL). The new attention layer, designed specifically for vision tasks, is memory efficient and thus suitable for use at all levels of deep convolutional networks. Based on these, we introduce two new efficient self-contained feature extraction convolution units. We validate the performance of these feature extraction building blocks on the CIFAR10 reference data and compare the results with standard ResNet modules. Further, we introduce a new encoder/decoder scheme, a network macro-topology, that is tailored to the task of change detection. Our network moves away from any notion of subtraction of feature layers for identifying change. We validate our approach by showing excellent performance and achieving state-of-the-art scores (F1 and Intersection over Union, hereafter IoU) on two building change detection datasets, namely LEVIRCD (F1: 0.918, IoU: 0.848) and WHU (F1: 0.938, IoU: 0.882).
    Comment: 28 pages, under review in ISPRS P&RS, 1st revision. Figures of low quality due to compression for arXiv. Reduced abstract on arXiv due to character limitation.
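    The exact FracTAL layer and the iterated similarity are defined in the paper itself; purely as a hedged sketch of the underlying idea, the PyTorch snippet below implements a standard soft Dice loss and a Tanimoto-style set similarity for binary segmentation masks (shapes and names are illustrative only, not the authors' implementation).

        import torch

        def soft_dice_loss(pred, target, eps=1e-6):
            # pred: probabilities in [0, 1]; target: binary masks; both shaped (B, C, H, W)
            dims = (2, 3)
            intersection = (pred * target).sum(dims)
            cardinality = pred.sum(dims) + target.sum(dims)
            dice = (2.0 * intersection + eps) / (cardinality + eps)
            return 1.0 - dice.mean()

        def soft_tanimoto(pred, target, eps=1e-6):
            # Tanimoto-style set similarity; the paper iterates a variant of this idea
            dims = (2, 3)
            tp = (pred * target).sum(dims)
            denom = (pred ** 2 + target ** 2 - pred * target).sum(dims)
            return (tp + eps) / (denom + eps)

        # toy usage
        pred = torch.sigmoid(torch.randn(2, 1, 64, 64))
        target = (torch.rand(2, 1, 64, 64) > 0.5).float()
        print(soft_dice_loss(pred, target).item(), soft_tanimoto(pred, target).mean().item())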

    SSG2: A new modelling paradigm for semantic segmentation

    State-of-the-art models in semantic segmentation primarily operate on single, static images, generating corresponding segmentation masks. This one-shot approach leaves little room for error correction, as the models lack the capability to integrate multiple observations for enhanced accuracy. Inspired by work on semantic change detection, we address this limitation by introducing a methodology that leverages a sequence of observables generated for each static input image. By adding this "temporal" dimension, we exploit strong signal correlations between successive observations in the sequence to reduce error rates. Our framework, dubbed SSG2 (Semantic Segmentation Generation 2), employs a dual-encoder, single-decoder base network augmented with a sequence model. The base model learns to predict the set intersection, union, and difference of labels from dual-input images. Given a fixed target input image and a set of support images, the sequence model builds the predicted mask of the target by synthesizing the partial views from each sequence step and filtering out noise. We evaluate SSG2 across three diverse datasets: UrbanMonitor, featuring orthoimage tiles from Darwin, Australia, with five spectral bands and 0.2 m spatial resolution; ISPRS Potsdam, which includes true orthophoto images with multiple spectral bands and a 5 cm ground sampling distance; and ISIC2018, a medical dataset focused on skin lesion segmentation, particularly melanoma. The SSG2 model demonstrates rapid convergence within the first few tens of epochs and significantly outperforms UNet-like baseline models with the same number of gradient updates. However, the addition of the temporal dimension results in an increased memory footprint. While this could be a limitation, it is offset by the advent of higher-memory GPUs and coding optimizations.
    Comment: 19 pages, under review
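    As a hedged illustration of the set-style targets described above (not the SSG2 implementation itself), the sketch below builds intersection, union, and difference masks from a pair of binary label maps and fuses a sequence of partial predictions with a simple majority vote; the learned sequence model in the paper is considerably more sophisticated.

        import numpy as np

        def dual_targets(mask_a, mask_b):
            # set-style training targets for a pair of binary masks
            inter = mask_a & mask_b    # pixels labelled in both views
            union = mask_a | mask_b    # pixels labelled in either view
            diff = mask_a & ~mask_b    # pixels labelled in A but not in B
            return inter, union, diff

        def fuse_sequence(partial_masks):
            # crude stand-in for the sequence model: majority vote over partial views
            stack = np.stack(partial_masks, axis=0).astype(float)
            return (stack.mean(axis=0) >= 0.5).astype(np.uint8)

        # toy usage with random 8x8 binary masks
        rng = np.random.default_rng(1)
        a = rng.random((8, 8)) > 0.5
        b = rng.random((8, 8)) > 0.5
        inter, union, diff = dual_targets(a, b)
        fused = fuse_sequence([a, b, a & b])
        print(inter.sum(), union.sum(), diff.sum(), fused.sum())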

    A noise robust automatic radiolocation animal tracking system

    Agriculture is becoming increasingly reliant upon accurate data from sensor arrays, with localization an emerging application in the livestock industry. Ground-based time difference of arrival (TDoA) radio location methods have the advantage of being lightweight and exhibit higher energy efficiency than methods reliant upon Global Navigation Satellite Systems (GNSS). Such methods can employ small primary battery cells, rather than rechargeable cells, and still deliver a multi-year deployment. In this paper, we present a novel deep learning algorithm adapted from a one-dimensional U-Net, a convolutional neural network (CNN) model originally developed for the task of semantic segmentation. The presented model (ResUnet-1d) both converts TDoA sequences directly to positions and reduces positional errors introduced by sources such as multipathing. We have evaluated the model using simulated animal movements in the form of TDoA position sequences in combination with real-world distributions of TDoA error. These animal tracks were simulated at various step intervals to mimic potential TDoA transmission intervals. We compare ResUnet-1d to a Kalman filter to evaluate the performance of our algorithm against a more traditional noise reduction approach. On average, for simulated tracks with added noise of 50 m standard deviation, the described approach was able to reduce localization error by between 66.3% and 73.6%, whereas the Kalman filter only achieved a reduction of between 8.0% and 22.5%. For a scenario with larger added noise of 100 m standard deviation, the described approach was able to reduce average localization error by between 76.2% and 81.9%, whereas the Kalman filter only achieved a reduction of between 31.0% and 39.1%. These results indicate that this novel U-Net-like 1D CNN encoder/decoder for TDoA location error correction outperforms the Kalman filter, reducing average localization errors to between 16 and 34 m across all simulated experimental treatments, while the uncorrected average TDoA error ranged from 55 to 188 m.
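    The ResUnet-1d architecture itself is specified in the paper; purely as a hedged sketch of the general idea, the PyTorch snippet below defines a tiny 1D encoder/decoder that maps a noisy (x, y) position sequence to a denoised sequence of the same length. Layer sizes and names are illustrative, not the authors' design.

        import torch
        import torch.nn as nn

        class TinyUNet1D(nn.Module):
            # Minimal 1D encoder/decoder: input is a (batch, 2, T) sequence of noisy
            # x/y positions, output is a denoised (batch, 2, T) sequence.
            def __init__(self, ch=16):
                super().__init__()
                self.enc1 = nn.Sequential(nn.Conv1d(2, ch, 3, padding=1), nn.ReLU())
                self.down = nn.MaxPool1d(2)
                self.enc2 = nn.Sequential(nn.Conv1d(ch, 2 * ch, 3, padding=1), nn.ReLU())
                self.up = nn.Upsample(scale_factor=2, mode="nearest")
                self.dec = nn.Sequential(nn.Conv1d(3 * ch, ch, 3, padding=1), nn.ReLU(),
                                         nn.Conv1d(ch, 2, 3, padding=1))

            def forward(self, x):
                s1 = self.enc1(x)                 # (B, ch, T)
                s2 = self.enc2(self.down(s1))     # (B, 2*ch, T/2)
                up = self.up(s2)                  # (B, 2*ch, T)
                return self.dec(torch.cat([up, s1], dim=1))  # skip connection, then decode

        # toy usage: a batch of 128-step noisy tracks
        net = TinyUNet1D()
        track = torch.randn(4, 2, 128)
        print(net(track).shape)   # torch.Size([4, 2, 128])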

    On the dynamics of spherically symmetric self gravitating systems

    This thesis focuses on the investigation of the dark matter content of spherically symmetric self-gravitating systems. The first system under investigation is the Galactic globular cluster NGC 6809; we use a variety of dynamical models and Bayesian inference in order to conclusively determine whether dark matter is present. Our findings exclude the hypothesis of a surrounding dark matter halo, and we conclude at the 95% confidence level that there is no significant amount of dark matter in the cluster. Pushing the limits of theoretical modelling further, in the first two of a series of three papers we attack the problem of the mass-anisotropy degeneracy of the spherically symmetric Jeans equation. At the heart of our method lies the representation of the radial second-order velocity moment with flexible B-splines. In the first of the papers we set out the theoretical foundation of the method and present a simple example. In the second paper, we define an optimum smoothing algorithm for the flexible B-spline and validate our method through a series of examples. The overall result of these two papers is that, for an assumed free functional form of the potential and mass density, we identify a unique anisotropy profile (within statistical uncertainties), as described by the radial and tangential second-order velocity moments. This holds for both constant and variable mass-to-light ratios. The third paper is currently a project under development. In it we perform the full mass-anisotropy resolution, i.e. the reconstruction from data of the stellar mass profile, the dark matter mass profile, and the second-order velocity moments of the radial and tangential components. In Chapter 5 I describe the basic mathematical framework for the full resolution of the mass-anisotropy degeneracy and present a simple example.
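    As a rough, hedged sketch of representing a radial second-order velocity moment with splines (not the thesis's optimum-smoothing algorithm), the snippet below fits a smoothing cubic B-spline to a mock moment profile and evaluates its radial derivative, the quantity that enters the spherically symmetric Jeans equation; the data and smoothing parameter are purely illustrative.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        # mock "observed" radial second-order velocity moment versus radius
        rng = np.random.default_rng(2)
        r = np.linspace(0.1, 10.0, 60)                   # radius (arbitrary units)
        true_moment = 25.0 * np.exp(-r / 4.0)            # smooth toy profile (km^2/s^2)
        obs = true_moment + rng.normal(0.0, 1.0, r.size) # noisy mock data

        # smoothing cubic spline (a B-spline under the hood); `s` controls smoothing strength
        spline = UnivariateSpline(r, obs, k=3, s=float(len(r)))
        moment = spline(r)                     # reconstructed second-order moment
        dmoment_dr = spline.derivative()(r)    # radial derivative used in the Jeans equation
        print(moment[:3], dmoment_dr[:3])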