
    Aggregated primary detectors for generic change detection in satellite images

    Detecting changes between two satellite images of the same scene generally requires an accurate (and thus often difficult to obtain) model that discriminates relevant changes from irrelevant ones. We present a generic method based on four different a-contrario detection models (associated with arbitrary features), whose aggregation is then trained on specific examples with gradient boosting. The results we present are encouraging; in particular, the low false-positive rate is notable.
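    The aggregation step described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the four "detector" scores are synthetic stand-ins for the a-contrario NFA values, and the labels are toy data.

    ```python
    # Sketch: learning to aggregate four primary change detectors with
    # gradient boosting. The detector scores and labels are synthetic;
    # the paper's actual a-contrario models are not reproduced here.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)

    # Each pixel/region gets one score per primary detector (e.g. an NFA value).
    n_samples = 1000
    scores = rng.normal(size=(n_samples, 4))          # four detector outputs
    # Toy ground truth: a "relevant change" when the detectors agree strongly.
    labels = (scores.sum(axis=1) > 1.0).astype(int)

    # The aggregation is trained from labeled examples, as in the abstract.
    clf = GradientBoostingClassifier(n_estimators=100, max_depth=2)
    clf.fit(scores, labels)
    change_prob = clf.predict_proba(scores)[:, 1]
    print(change_prob.shape)  # one change probability per sample
    ```

    The learned aggregation replaces a hand-tuned combination rule, which is what allows the method to stay generic across features.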

    Weakened magnetic braking as the origin of anomalously rapid rotation in old field stars

    Knowledge of stellar ages is crucial for our understanding of many astrophysical phenomena, yet ages can be difficult to determine. As they become older, stars lose mass and angular momentum, resulting in an observed slowdown in surface rotation. The technique of 'gyrochronology' uses the rotation period of a star to calculate its age. However, stars of known age must be used for calibration, and, until recently, the approach was untested for old stars (older than 1 gigayear, Gyr). Rotation periods are now known for stars in an open cluster of intermediate age (NGC 6819; 2.5 Gyr old), and for old field stars whose ages have been determined with asteroseismology. The data for the cluster agree with previous period-age relations, but these relations fail to describe the asteroseismic sample. Here we report stellar evolutionary modelling, and confirm the presence of unexpectedly rapid rotation in stars that are more evolved than the Sun. We demonstrate that models incorporating dramatically weakened magnetic braking for old stars can, unlike existing models, reproduce both the asteroseismic and the cluster data. Our findings might suggest a fundamental change in the nature of ageing stellar dynamos, with the Sun being close to the critical transition to much weaker magnetized winds. This weakened braking limits the diagnostic power of gyrochronology for those stars that are more than halfway through their main-sequence lifetimes.
    Comment: 25 pages, 3 figures in main paper, 6 extended data figures, 1 table. Published in Nature, January 2016. See https://youtu.be/O6HzYgP5uyc for a video description of the results.
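    The gyrochronology idea the abstract starts from can be illustrated with the classic Skumanich spin-down law, P ∝ t^0.5, calibrated on the Sun. This is a textbook relation used only as an illustration; the paper's point is precisely that such period-age relations break down for stars older than the Sun.

    ```python
    # Sketch: a minimal gyrochronology age estimate under the Skumanich
    # law (P proportional to sqrt(t)), anchored to the Sun. Illustrative
    # only; the paper's evolutionary models are more sophisticated, and
    # this relation fails for old, weakly braked stars.

    T_SUN_GYR = 4.57      # solar age in gigayears
    P_SUN_DAYS = 25.4     # solar rotation period in days

    def gyro_age_gyr(period_days: float) -> float:
        """Age implied by a surface rotation period under P ~ t**0.5."""
        return T_SUN_GYR * (period_days / P_SUN_DAYS) ** 2

    # A star rotating twice as fast as the Sun is inferred to be young:
    print(round(gyro_age_gyr(12.7), 2))  # -> 1.14
    ```

    The paper's result implies that for evolved stars this mapping overestimates the spin-down, so the inferred age from a short period can be badly wrong.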

    Neural Architecture Search in operational context: a remote sensing case-study

    In recent years, deep learning has become a cornerstone tool fueling key industrial innovations such as autonomous driving. To attain good performance, the neural network architecture used for a given application must be chosen with care. These architectures are often handcrafted and therefore prone to human biases and sub-optimal selection. Neural Architecture Search (NAS) is a framework introduced to mitigate such risks by jointly optimizing the network architecture and its weights. Despite its novelty, it has been applied to complex tasks with significant results, e.g. semantic image segmentation. In this technical paper, we evaluate its ability to tackle a challenging operational task: semantic segmentation of objects of interest in satellite imagery. Designing a NAS framework is not trivial and depends strongly on hardware constraints. We therefore motivate our choice of NAS approach and provide the corresponding implementation details. We also present novel ideas for carrying out other such use-case studies.
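    The core NAS loop the abstract refers to can be sketched in its simplest form: random search over a small discrete architecture space, scored by a validation metric. Everything below is illustrative; the search space, the candidate count, and the mocked IoU scorer are assumptions, and real NAS frameworks (e.g. DARTS or ENAS) optimize architecture and weights jointly rather than by exhaustive retraining.

    ```python
    # Sketch: the simplest NAS baseline, random search over a discrete
    # architecture space. The validation scorer is a mock standing in for
    # "train the candidate, then evaluate segmentation IoU".
    import random

    SEARCH_SPACE = {
        "depth": [2, 4, 8],
        "width": [32, 64, 128],
        "kernel": [3, 5],
    }

    def mock_validation_iou(arch: dict) -> float:
        # Stand-in for an expensive train-and-evaluate step; here it just
        # prefers depth 4 and 3x3 kernels.
        return 1.0 / (1.0 + abs(arch["depth"] - 4) + abs(arch["kernel"] - 3))

    random.seed(0)
    candidates = [
        {k: random.choice(v) for k, v in SEARCH_SPACE.items()} for _ in range(10)
    ]
    best = max(candidates, key=mock_validation_iou)
    print(best)
    ```

    The hardware dependence mentioned in the abstract enters exactly at the scorer: every candidate evaluation is a full (or proxy) training run, which is what makes the choice of search strategy consequential.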

    Photometric magnetic-activity metrics tested with the Sun: application to Kepler M dwarfs

    The Kepler mission has provided high-quality photometric data, leading to many breakthroughs in the search for exoplanets and in stellar physics. Stellar magnetic activity results from the interaction between rotation, convection, and the magnetic field, and constraining these processes is important for a better understanding of stellar magnetic activity. Using the Sun, we test a magnetic-activity index based on the analysis of the photometric response and then apply it to a sample of M dwarfs observed by Kepler. We estimate a global stellar magnetic-activity index, S_ph, by measuring the standard deviation of the whole time series. Because stellar variability can be related to convection, pulsations, or magnetism, we must ensure that this index mostly reflects magnetic effects. We therefore define another magnetic-activity index as the average of the standard deviations of shorter subseries whose lengths are set by the rotation period of the star; this ensures that the measured photometric variability is related to starspots crossing the visible stellar disc. This new index, combined with a time-frequency analysis based on Morlet wavelets, allows us to determine the existence of magnetic activity cycles. We measure magnetic indices for the Sun and for 34 M dwarfs observed by Kepler. As expected, we find that the sample of M dwarfs studied in this work is much more active than the Sun. Moreover, we find a small correlation between the rotation period and the magnetic index. Finally, by combining a time-frequency analysis with phase diagrams, we discover long-lived features suggesting the existence of active longitudes on the surface of these stars.
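    The rotation-linked index described in this abstract can be sketched directly: average the standard deviation of consecutive subseries whose length scales with the rotation period. The subseries-length factor of 5 x P_rot is an assumption borrowed from common usage in the S_ph literature, and the light curve below is synthetic.

    ```python
    # Sketch: an S_ph-style activity proxy, the mean standard deviation
    # over subseries of length k * P_rot. The factor k = 5 is an assumed
    # convention, and the starspot light curve is simulated.
    import numpy as np

    def s_ph(flux: np.ndarray, cadence_days: float, p_rot_days: float,
             k: float = 5.0) -> float:
        """Average std. dev. of consecutive subseries of length k * P_rot."""
        n_sub = max(1, int(round(k * p_rot_days / cadence_days)))
        n_chunks = len(flux) // n_sub
        chunks = flux[: n_chunks * n_sub].reshape(n_chunks, n_sub)
        return float(chunks.std(axis=1, ddof=1).mean())

    rng = np.random.default_rng(1)
    t = np.arange(0, 90, 0.02)                        # ~90 days, 30-min cadence
    spot_signal = 0.002 * np.sin(2 * np.pi * t / 11)  # starspot modulation
    flux = 1.0 + spot_signal + rng.normal(0, 1e-4, t.size)
    print(f"S_ph = {s_ph(flux, 0.02, 11.0):.5f}")
    ```

    Tying the window length to the rotation period is what makes the index sensitive to spot modulation rather than to slower instrumental or convective drifts.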

    Self-Supervised Pretraining on Satellite Imagery: a Case Study on Label-Efficient Vehicle Detection

    In defense-related remote sensing applications, such as vehicle detection on satellite imagery, supervised learning requires a huge number of labeled examples to reach operational performance. Such data are challenging to obtain, as labeling requires military experts and some observables are intrinsically rare. This limited labeling capability, together with the large number of unlabeled images available from the growing number of sensors, makes object detection on remote sensing imagery highly relevant for self-supervised learning. We study in-domain self-supervised representation learning for object detection on very-high-resolution optical satellite imagery, which is as yet poorly explored. For the first time to our knowledge, we study the problem of label efficiency on this task. We use the large land-use classification dataset Functional Map of the World to pretrain representations with an extension of the Momentum Contrast framework. We then investigate this model's transferability on a real-world task of fine-grained vehicle detection and classification on Preligens proprietary data, which is designed to be representative of an operational use case of strategic site surveillance. We show that our in-domain self-supervised learning model is competitive with ImageNet pretraining, and outperforms it in the low-label regime.
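    The Momentum Contrast framework the abstract builds on rests on two mechanics: a key encoder updated as an exponential moving average of the query encoder, and an InfoNCE loss contrasting one positive pair against a queue of negatives. The sketch below reduces the encoders to linear maps on toy vectors; it makes no claim about the paper's extension, its architecture, or the Functional Map of the World data.

    ```python
    # Sketch: MoCo-style mechanics with toy linear "encoders": L2-normalized
    # features, an InfoNCE loss over queued negatives, and an EMA update of
    # the key encoder. Dimensions and the temperature are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    dim, queue_len, m, tau = 8, 16, 0.999, 0.07

    def l2norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    W_q = rng.normal(size=(dim, dim))                  # query encoder (trained by SGD)
    W_k = W_q.copy()                                   # key encoder (momentum copy)
    queue = l2norm(rng.normal(size=(queue_len, dim)))  # queue of negative keys

    x_q, x_k = rng.normal(size=(2, dim))               # two augmentations of one image
    q = l2norm(W_q @ x_q)
    k = l2norm(W_k @ x_k)

    # InfoNCE: the positive pair (q, k) against the queued negatives.
    logits = np.concatenate([[q @ k], queue @ q]) / tau
    loss = -np.log(np.exp(logits[0]) / np.exp(logits).sum())

    # The key encoder follows the query encoder through an EMA update.
    W_k = m * W_k + (1 - m) * W_q
    print(f"InfoNCE loss: {loss:.4f}")
    ```

    The queue is what lets contrastive pretraining scale to the large pools of unlabeled satellite imagery the abstract mentions: negatives are reused across batches instead of being recomputed.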
