Thermospheric nitric oxide from the ATLAS 1 and Spacelab 1 missions
Spectral and spatial images obtained with the Imaging Spectrometric Observatory on the ATLAS 1 and Spacelab 1 missions are used to study the ultraviolet emissions of nitric oxide in the thermosphere. By synthetically fitting the measured NO gamma bands, intensities are derived as a function of altitude and latitude. We find that the NO concentrations inferred from the ATLAS 1 measurements are higher than predicted by our thermospheric airglow model and tend to lie to the high side of a number of earlier measurements. By comparison with synthetic spectral fits, the shape of the NO gamma bands is used to derive temperature as a function of altitude. Using the simultaneous spectral and spatial imaging capability of the instrument, we present the first simultaneously acquired altitude images of NO gamma band temperature and intensity in the thermosphere. The lower thermospheric temperature images show structure as a function of altitude. The spatial imaging technique appears to be a viable means of obtaining temperatures in the middle and lower thermosphere, provided that good information is also obtained at the higher altitudes, as the contribution of the overlying, hotter NO is non-negligible. By fitting both self-absorbed and non-absorbed bands of the NO gamma system, we show that self-absorption effects are observable up to 200 km, although small above 150 km. The spectral resolution of the instrument (1.6 Å) allows separation of the N+(5S) doublet, and we show the contribution of this feature to the combination of the NO gamma (1, 0) band and the N+(5S) doublet as a function of altitude (less than 10% below 200 km). Spectral images including the NO delta bands support previous findings that the fluorescence efficiency is much higher than that determined from laboratory measurements. The Spacelab 1 data indicate the presence of a significant population of hot NO in the vehicle environment of that early shuttle mission.
Adversarial masking for self-supervised learning
We propose ADIOS, a masked image modeling (MIM) framework for self-supervised learning, which simultaneously learns a masking function and an image encoder using an adversarial objective. The image encoder is trained to minimise the distance between the representation of the original image and that of a masked image. The masking function, conversely, aims at maximising this distance. ADIOS consistently improves on state-of-the-art self-supervised learning (SSL) methods on a variety of tasks and datasets, including classification on ImageNet100 and STL10, transfer learning on CIFAR10/100, Flowers102 and iNaturalist, as well as robustness evaluated on the backgrounds challenge (Xiao et al., 2021), while generating semantically meaningful masks. Unlike modern MIM models such as MAE, BEiT and iBOT, ADIOS does not rely on the image-patch tokenisation construction of Vision Transformers, and can be implemented with convolutional backbones. We further demonstrate that the masks learned by ADIOS are more effective in improving representation learning of SSL methods than masking schemes used in popular MIM models.
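The min-max objective described above can be illustrated with a toy numpy sketch. Everything here is an illustrative assumption, not the authors' architecture: the "encoder" is a fixed random linear map, and the two candidate masks stand in for outputs of a trainable masking network. The point is only the direction of the game: the masker prefers masks that move the representation further away, while the encoder is then trained to shrink that distance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoder": a fixed random linear map from flattened pixels to features.
# In ADIOS this would be a trainable ConvNet or ViT; here it only serves to
# illustrate the adversarial objective.
W = rng.normal(size=(8, 64))

def encode(x):
    return W @ x.reshape(-1)

def distance(z1, z2):
    # Cosine distance between two representations.
    return 1.0 - (z1 @ z2) / (np.linalg.norm(z1) * np.linalg.norm(z2))

x = rng.random((8, 8))  # toy 8x8 single-channel "image"

# Two candidate soft masks in [0, 1]: a weak random mask, and an aggressive
# mask that hides half the image (stand-ins for masking-network outputs).
mask_weak = rng.random((8, 8)) * 0.1
mask_half = np.zeros((8, 8))
mask_half[:, :4] = 1.0

d_weak = distance(encode(x), encode(x * (1 - mask_weak)))
d_half = distance(encode(x), encode(x * (1 - mask_half)))

# The adversarial masker would prefer the mask with the larger distance;
# the encoder is then updated to minimise that same distance.
print(round(d_weak, 4), round(d_half, 4))
```

In the full method both players are trained jointly by gradient descent/ascent on this distance; the sketch only evaluates the objective for two fixed masks to show why hiding informative regions scores higher for the masker.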
Charge exchange of metastable 2D oxygen ions with N2 in the thermosphere
Measurements of N2+ and supporting data made on the Atmosphere Explorer-C satellite in the ionosphere are used to study the charge exchange process O+(2D) + N2 -> N2+ + O, yielding a rate coefficient k = (5 ± 1.7) × 10^-10 cm^3 s^-1. This value lies close to the lower limit of experimental uncertainty of the rate coefficient determined in the laboratory. We have also investigated atomic oxygen quenching of O+(2D) and find that the rate coefficient is 2 × 10^-11 cm^3 s^-1 to within approximately a factor of two.
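As a quick sanity check on the magnitude of such a rate coefficient: the loss frequency of O+(2D) against charge exchange is simply k times the local N2 number density. The density value below is an assumed, illustrative mid-thermosphere figure, not a number from the paper.

```python
# Loss frequency of O+(2D) against charge exchange with N2: nu = k * n(N2).
# k is the rate coefficient quoted in the abstract; n_N2 is an assumed
# illustrative N2 density (roughly ~300 km altitude), not from the paper.
k = 5e-10          # cm^3 s^-1, charge-exchange rate coefficient
n_N2 = 1e8         # cm^-3, assumed N2 number density
nu = k * n_N2      # s^-1, loss frequency
lifetime = 1 / nu  # s, mean lifetime against this loss process
print(nu, lifetime)  # ≈ 0.05 s^-1, i.e. a lifetime of ≈ 20 s
```

A lifetime of tens of seconds is short enough that charge exchange with N2 competes with other O+(2D) loss channels, which is why the satellite N2+ data constrain k.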
Capturing Label Characteristics in VAEs
We present a principled approach to incorporating labels in VAEs that
captures the rich characteristic information associated with those labels.
While prior work has typically conflated these by learning latent variables
that directly correspond to label values, we argue this is contrary to the
intended effect of supervision in VAEs: capturing rich label characteristics
with the latents. For example, we may want to capture the characteristics of a
face that make it look young, rather than just the age of the person. To this
end, we develop the CCVAE, a novel VAE model and concomitant variational
objective which captures label characteristics explicitly in the latent space,
eschewing direct correspondences between label values and latents. Through
judicious structuring of mappings between such characteristic latents and
labels, we show that the CCVAE can effectively learn meaningful representations
of the characteristics of interest across a variety of supervision schemes. In
particular, we show that the CCVAE allows for more effective and more general
interventions to be performed, such as smooth traversals within the
characteristics for a given label, diverse conditional generation, and
transferring characteristics across datapoints. Comment: Accepted to ICLR 202
Simulation-Based Inference for Global Health Decisions
The COVID-19 pandemic has highlighted the importance of in-silico
epidemiological modelling in predicting the dynamics of infectious diseases to
inform health policy and decision makers about suitable prevention and
containment strategies. Work in this setting involves solving challenging
inference and control problems in individual-based models of ever increasing
complexity. Here we discuss recent breakthroughs in machine learning,
specifically in simulation-based inference, and explore its potential as a
novel avenue for model calibration to support the design and evaluation of
public health interventions. To further stimulate research, we are developing
software interfaces that turn two cornerstone COVID-19 and malaria epidemiology
models, COVID-sim (https://github.com/mrc-ide/covid-sim/) and OpenMalaria
(https://github.com/SwissTPH/openmalaria), into probabilistic programs, enabling
efficient, interpretable Bayesian inference within those simulators.
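The idea of calibrating a simulator against data can be illustrated with the simplest form of simulation-based inference, rejection ABC, on a toy SIR model. The simulator, the "observed" summary statistic, the prior, and the tolerance below are all stand-ins for the far richer COVID-sim and OpenMalaria models, chosen only to make the inference loop concrete.

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_peak(beta, gamma=0.2, n_days=60, s0=0.99, i0=0.01):
    """Toy deterministic discrete-time SIR simulator.

    Returns the peak infected fraction of the epidemic; beta is the
    contact rate we want to infer, gamma the recovery rate.
    """
    s, i = s0, i0
    peak = i
    for _ in range(n_days):
        new_inf = beta * s * i
        s, i = s - new_inf, i + new_inf - gamma * i
        peak = max(peak, i)
    return peak

# Pretend we observed a peak infected fraction of 0.25 and want the
# posterior over beta (all numbers illustrative, not real epidemic data).
observed_peak = 0.25
prior_samples = rng.uniform(0.1, 1.0, size=5000)  # uniform prior on beta

# Rejection ABC: simulate under each prior draw and keep the parameters
# whose simulated summary statistic lands close to the observation.
accepted = [b for b in prior_samples
            if abs(sir_peak(b) - observed_peak) < 0.01]
posterior_mean = float(np.mean(accepted))
print(len(accepted), round(posterior_mean, 3))
```

Turning a simulator into a probabilistic program, as the abstract describes, replaces this brute-force rejection loop with far more sample-efficient inference engines, but the calibration problem being solved is the same.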
Optimizing the colour and fabric of targets for the control of the tsetse fly Glossina fuscipes fuscipes
Background:
Most cases of human African trypanosomiasis (HAT) start with a bite from one of the subspecies of Glossina fuscipes. Tsetse use a range of olfactory and visual stimuli to locate their hosts, and this response can be exploited to lure tsetse to insecticide-treated targets, thereby reducing transmission. To provide a rational basis for cost-effective target designs, we undertook studies to identify the optimal target colour.
Methodology/Principal Findings:
On the Chamaunga islands of Lake Victoria, Kenya, studies were made of the numbers of G. fuscipes fuscipes attracted to targets consisting of a panel (25 cm square) of various coloured fabrics flanked by a panel (also 25 cm square) of fine black netting. Both panels were covered with an electrocuting grid to catch tsetse as they contacted the target. The reflectances of the 37 different-coloured cloth panels utilised in the study were measured spectrophotometrically. Catch was positively correlated with percentage reflectance at the blue (460 nm) wavelength and negatively correlated with reflectance at UV (360 nm) and green (520 nm) wavelengths. The best target was subjectively blue, with percentage reflectances of 3%, 29%, and 20% at 360 nm, 460 nm and 520 nm respectively. The worst target was also, subjectively, blue, but with high reflectances at UV (35% reflectance at 360 nm) wavelengths as well as blue (36% reflectance at 460 nm); the best low UV-reflecting blue caught 3× more tsetse than the high UV-reflecting blue.
Conclusions/Significance:
Insecticide-treated targets to control G. f. fuscipes should be blue with low reflectance in both the UV and green bands of the spectrum. Targets that are subjectively blue will perform poorly if they also reflect UV strongly. The selection of fabrics for targets should be guided by spectral analysis of the cloth across both the spectrum visible to humans and the UV region.
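The reported sign pattern, catch rising with blue reflectance and falling with UV and green reflectance, amounts to a linear model on the three reflectance bands. The sketch below fits such a model by ordinary least squares; the 37 "fabrics" and their catches are synthetic stand-ins generated to obey those signs, not the study's measured data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic percentage reflectances at 360 nm (UV), 460 nm (blue) and
# 520 nm (green) for 37 hypothetical fabrics -- placeholders, not the
# spectrophotometric measurements from the study.
R = rng.uniform(0, 40, size=(37, 3))

# Generate catches obeying the reported sign pattern: negative weight on
# UV, positive on blue, negative on green (weights are invented).
true_w = np.array([-2.0, 3.0, -1.0])
catch = R @ true_w + 150 + rng.normal(0, 5, size=37)

# Ordinary least squares with an intercept recovers the three signs.
X = np.column_stack([np.ones(37), R])
coef, *_ = np.linalg.lstsq(X, catch, rcond=None)
uv_w, blue_w, green_w = coef[1], coef[2], coef[3]
print(round(uv_w, 2), round(blue_w, 2), round(green_w, 2))
```

Screening candidate cloths then reduces to measuring their reflectance spectra and preferring fabrics that score high on the blue band while staying low in UV and green, which is exactly the selection rule the conclusions recommend.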
DGPose: Deep Generative Models for Human Body Analysis
Deep generative modelling for human body analysis is an emerging problem with
many interesting applications. However, the latent space learned by such
approaches is typically not interpretable, resulting in less flexibility. In
this work, we present deep generative models for human body analysis in which
the body pose and the visual appearance are disentangled. Such a
disentanglement allows independent manipulation of pose and appearance, and
hence enables applications such as pose-transfer without specific training for
such a task. Our proposed models, the Conditional-DGPose and the Semi-DGPose,
have different characteristics. In the first, body pose labels are taken as
conditioners, from a fully-supervised training set. In the second, our
structured semi-supervised approach allows for pose estimation to be performed
by the model itself and relaxes the need for labelled data. Therefore, the
Semi-DGPose aims for the joint understanding and generation of people in
images. It is not only capable of mapping images to interpretable latent
representations but also able to map these representations back to the image
space. We compare our models with relevant baselines, the ClothNet-Body and the
Pose Guided Person Generation networks, demonstrating their merits on the
Human3.6M, ChictopiaPlus and DeepFashion benchmarks. Comment: IJCV 2020 special issue on 'Generating Realistic Visual Data of Human
Behavior' preprint. Keywords: deep generative models, semi-supervised
learning, human pose estimation, variational autoencoders, generative
adversarial network