    Domain invariant representation learning with domain density transformations

    Domain generalization refers to the problem of training a model on data from a set of source domains so that it generalizes to unseen target domains. Naively training a model on the aggregate set of data (pooled from all source domains) has been shown to perform suboptimally, since the information learned by such a model may be domain-specific and generalize imperfectly to target domains. To tackle this problem, a predominant approach is to find and learn domain-invariant information and use it for the prediction task. In this paper, we propose a theoretically grounded method to learn a domain-invariant representation by enforcing the representation network to be invariant under all transformation functions among domains. We also show how to use generative adversarial networks to learn such domain transformations and implement our method in practice. We demonstrate the effectiveness of our method on several widely used datasets for the domain generalization problem, on all of which we achieve results competitive with state-of-the-art models.
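
    A minimal sketch of the invariance penalty described above, assuming a pretrained domain-to-domain generator (the paper learns these transformations with GANs; the names and the MSE penalty below are illustrative, not the authors' implementation):

        import torch
        import torch.nn.functional as F

        def invariance_loss(encoder, x, domain_transform, lam=1.0):
            # domain_transform is assumed to be a frozen, pretrained GAN
            # generator mapping inputs from one source domain to another.
            z = encoder(x)                     # representation of the original input
            with torch.no_grad():
                x_t = domain_transform(x)      # same content, different domain
            z_t = encoder(x_t)                 # representation of the transformed input
            return lam * F.mse_loss(z, z_t)    # push the two representations together

    In training, this term would be added to the usual classification loss, so the encoder is rewarded for features that survive the change of domain.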

    KL guided domain adaptation

    Domain adaptation is an important problem, often needed for real-world applications. In this problem, instead of i.i.d. training and testing datapoints, we assume that the source (training) data and the target (testing) data have different distributions. Under that setting, the empirical risk minimization training procedure often does not perform well, since it does not account for the change in the distribution. A common approach in the domain adaptation literature is to learn a representation of the input that has the same (marginal) distribution over the source and the target domains. However, these approaches often require additional networks and/or optimizing an adversarial (minimax) objective, which can be very expensive or unstable in practice. To improve upon these marginal alignment techniques, in this paper we first derive a generalization bound for the target loss based on the training loss and the reverse Kullback-Leibler (KL) divergence between the source and the target representation distributions. Based on this bound, we derive an algorithm that minimizes the KL term to obtain better generalization to the target domain. We show that with a probabilistic representation network, the KL term can be estimated efficiently via minibatch samples, without any additional network or a minimax objective. This leads to a theoretically sound alignment method that is also very efficient and stable in practice. Experimental results also suggest that our method outperforms other representation-alignment approaches.
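
    A rough sketch of the minibatch KL estimate with a probabilistic (Gaussian) representation network; approximating each marginal as a mixture of the per-example Gaussians is an assumption here, not necessarily the paper's exact estimator:

        import math
        import torch

        def kl_alignment_loss(encoder, x_src, x_tgt):
            # The encoder is assumed to return the mean and log-variance
            # of a diagonal Gaussian q(z|x) for each input.
            mu_s, log_var_s = encoder(x_src)
            mu_t, log_var_t = encoder(x_tgt)
            # One sample per target example (reparameterization trick).
            z = mu_t + torch.randn_like(mu_t) * (0.5 * log_var_t).exp()

            def log_marginal(z, mu, log_var):
                # Minibatch approximation of the marginal p(z) as a uniform
                # mixture of per-example Gaussians: log (1/B) sum_i N(z | mu_i, var_i).
                diff = z.unsqueeze(1) - mu.unsqueeze(0)              # (B, B', D)
                log_p = -0.5 * (log_var.unsqueeze(0)
                                + diff ** 2 / log_var.exp().unsqueeze(0)
                                + math.log(2 * math.pi)).sum(-1)     # (B, B')
                return torch.logsumexp(log_p, dim=1) - math.log(mu.shape[0])

            # Monte Carlo estimate of the KL between the two representation
            # distributions, requiring no extra network or minimax game.
            return (log_marginal(z, mu_t, log_var_t)
                    - log_marginal(z, mu_s, log_var_s)).mean()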

    High-Cadence Thermospheric Density Estimation enabled by Machine Learning on Solar Imagery

    Accurate estimation of thermospheric density is critical for precise modeling of satellite drag forces in low Earth orbit (LEO). Improving this estimation is crucial for tasks such as state estimation, collision avoidance, and re-entry calculations. The largest source of uncertainty in determining thermospheric density is modeling the effects of space weather driven by solar and geomagnetic activity. Current operational models rely on ground-based proxy indices, which correlate imperfectly with the complexity of solar outputs and geomagnetic responses. In this work, we directly incorporate NASA's Solar Dynamics Observatory (SDO) extreme ultraviolet (EUV) spectral images into a neural thermospheric density model to determine whether the model's predictive performance is improved by using space-based EUV imagery instead of, or in addition to, the ground-based proxy indices. We demonstrate that EUV imagery can enable predictions with much higher temporal resolution and replace ground-based proxies while significantly increasing performance relative to current operational models. Our method paves the way for assimilating EUV image data into operational thermospheric density forecasting models for use in LEO satellite navigation processes.
    Comment: Accepted at the Machine Learning and the Physical Sciences workshop, NeurIPS 202
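
    As an illustration of the input/output setup only (the abstract does not specify the architecture), a hypothetical network regressing density at a query point from an EUV image plus auxiliary position/epoch features might look like:

        import torch
        import torch.nn as nn

        class EUVDensityNet(nn.Module):
            def __init__(self, n_channels=1, n_aux=4):
                super().__init__()
                self.cnn = nn.Sequential(          # encode the EUV image
                    nn.Conv2d(n_channels, 16, 5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                self.head = nn.Sequential(         # fuse with auxiliary inputs
                    nn.Linear(32 + n_aux, 64), nn.ReLU(),
                    nn.Linear(64, 1),              # e.g. log-density at the query point
                )

            def forward(self, euv_image, aux):
                # aux: e.g. altitude, latitude, local solar time, day of year
                return self.head(torch.cat([self.cnn(euv_image), aux], dim=-1))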

    PyATMOS: A Scalable Grid of Hypothetical Planetary Atmospheres

    Cloud computing offers an opportunity to run compute-intensive climate models at scale by parallelising model runs, such that datasets useful to the exoplanet community can be produced efficiently. To better understand the statistical distributions and properties of potentially habitable planetary atmospheres, we implemented a parallelised climate modelling tool to scan a range of hypothetical atmospheres. Starting with a modern-day Earth atmosphere, we iteratively and incrementally simulated a range of atmospheres to infer the landscape of the multi-parameter space, such as the abundances of biologically mediated gases (O2, CO2, H2O, CH4, H2, and N2) that would yield 'steady state' planetary atmospheres on Earth-like planets around solar-type stars. Our current dataset comprises \numatmospheres simulated models of exoplanet atmospheres and is publicly available on the NASA Exoplanet Archive. Our scalable approach to analysing atmospheres could also help interpret future observations of planetary atmospheres by providing estimates of atmospheric gas fluxes and temperatures as a function of altitude. Such data could enable high-throughput, first-order assessment of the potential habitability of exoplanetary surfaces and can serve as a learning dataset for machine learning applications in the atmospheric and exoplanet science domains.
    Comment: 9 pages, 6 figures
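
    A toy sketch of the kind of parallelised parameter sweep described above; the gas grid, the run_one placeholder, and the pool-based fan-out are hypothetical and stand in for PyATMOS's actual entry points:

        import itertools
        from concurrent.futures import ProcessPoolExecutor

        # Hypothetical abundance grid (mole fractions) for a few gases.
        GRID = {
            "O2":  [0.01, 0.1, 0.21, 0.3],
            "CO2": [4e-4, 4e-3, 4e-2],
            "CH4": [1.8e-6, 1.8e-5, 1.8e-4],
        }

        def run_one(abundances):
            # Placeholder: in PyATMOS this step would invoke the ATMOS
            # climate/photochemistry model and iterate it to steady state.
            return {"inputs": abundances, "converged": True}

        def sweep():
            combos = [dict(zip(GRID, vals))
                      for vals in itertools.product(*GRID.values())]
            with ProcessPoolExecutor() as pool:   # parallelise model runs
                return list(pool.map(run_one, combos))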

    Kessler: A machine learning library for spacecraft collision avoidance

    As megaconstellations are launched and the space sector grows, space debris pollution poses an increasing threat to operational spacecraft. Low Earth orbit is a junkyard of dead satellites, rocket bodies, shrapnel, and other debris that travel at very high speed in an uncontrolled manner. Collisions at orbital speeds can generate fragments and potentially trigger a cascade of further collisions endangering the whole population, a scenario known since the late 1970s as the Kessler syndrome. In this work, we present Kessler: an open-source Python package for machine learning (ML) applied to collision avoidance. Kessler provides functionality to import and export conjunction data messages (CDMs) in their standard format and to predict the evolution of conjunction events based on explainable ML models. In Kessler, we provide Bayesian recurrent neural networks that can be trained with existing collections of CDM data and then deployed to predict the contents of future CDMs in a given conjunction event, conditioned on all CDMs received so far, with associated uncertainty estimates for all predictions. Furthermore, Kessler includes a novel generative model of conjunction events and CDM sequences implemented using probabilistic programming, simulating the CDM generation process of the Combined Space Operations Center (CSpOC). The model allows Bayesian inference and the generation of large datasets of realistic synthetic CDMs, which we believe will be pivotal in enabling further ML approaches, given the sensitive nature and public unavailability of real CDM data.
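
    As an illustration of how a conjunction event could be turned into inputs for such models (the field selection mirrors standard CCSDS CDM keywords such as MISS_DISTANCE and RELATIVE_SPEED, but this is not Kessler's actual schema):

        from dataclasses import dataclass

        @dataclass
        class CDMSnapshot:
            # A few illustrative fields from one conjunction data message.
            days_to_tca: float        # time to closest approach, in days
            miss_distance_m: float    # MISS_DISTANCE, in metres
            relative_speed_ms: float  # RELATIVE_SPEED, in m/s
            collision_prob: float     # collision probability, if provided

        def to_features(event):
            # A conjunction event is the time-ordered sequence of CDMs
            # received so far; each CDM becomes one step of the input
            # sequence fed to a recurrent model.
            return [[c.days_to_tca, c.miss_distance_m,
                     c.relative_speed_ms, c.collision_prob] for c in event]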

    Towards automated satellite conjunction management with Bayesian deep learning

    After decades of space travel, low Earth orbit is a junkyard of discarded rocket bodies, dead satellites, and millions of pieces of debris from collisions and explosions. Objects at high enough altitudes do not re-enter and burn up in the atmosphere, but stay in orbit around Earth for a long time. At speeds of around 28,000 km/h, collisions in these orbits can generate fragments and potentially trigger a cascade of further collisions, known as the Kessler syndrome. This could pose a planetary challenge, because the phenomenon could escalate to the point of hindering future space operations and damaging satellite infrastructure critical for space and Earth science applications. As commercial entities place mega-constellations of satellites in orbit, the burden on operators conducting collision avoidance manoeuvres will increase. For this reason, the development of automated tools that predict potential collision events (conjunctions) is critical. We introduce a Bayesian deep learning approach to this problem and develop recurrent neural network architectures (LSTMs) that work with time series of conjunction data messages (CDMs), a standard data format used by the space community. We show that our method can be used to model all CDM features simultaneously, including the time of arrival of future CDMs, providing predictions of conjunction event evolution with associated uncertainties.
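
    A minimal sketch of the approach, using MC dropout at prediction time as a stand-in for the paper's Bayesian recurrent network (the architecture, sizes, and names are assumptions, not the authors' exact model):

        import torch
        import torch.nn as nn

        class CDMPredictor(nn.Module):
            def __init__(self, n_features=4, hidden=64):
                super().__init__()
                self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
                self.drop = nn.Dropout(0.2)
                self.head = nn.Linear(hidden, n_features)  # predict the next CDM

            def forward(self, cdm_seq):                    # (batch, time, features)
                out, _ = self.lstm(cdm_seq)
                return self.head(self.drop(out[:, -1]))

        @torch.no_grad()
        def predict_with_uncertainty(model, cdm_seq, n_samples=100):
            model.train()                                  # keep dropout stochastic
            samples = torch.stack([model(cdm_seq) for _ in range(n_samples)])
            return samples.mean(0), samples.std(0)         # predictive mean and spread

    The spread of the sampled predictions gives the per-feature uncertainty an operator could weigh when deciding whether a manoeuvre is warranted.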