Geodesics on the manifold of multivariate generalized Gaussian distributions with an application to multicomponent texture discrimination
We consider the Rao geodesic distance (GD) based on the Fisher information as a similarity measure on the manifold of zero-mean multivariate generalized Gaussian distributions (MGGD). The MGGD is shown to be an adequate model for the heavy-tailed wavelet statistics in multicomponent images, such as color or multispectral images. We discuss the estimation of MGGD parameters using various methods. We apply the GD between MGGDs to color texture discrimination in several classification experiments, taking into account the correlation structure between the spectral bands in the wavelet domain. We compare the performance, both in terms of texture discrimination capability and computational load, of the GD and the Kullback-Leibler divergence (KLD). Likewise, both uni- and multivariate generalized Gaussian models, characterized by a fixed or a variable shape parameter, are evaluated. Modeling the interband correlation significantly improves classification efficiency, and the GD consistently outperforms the KLD as a similarity measure.
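For the univariate case, the KLD referred to above has a well-known closed form (Do & Vetterli, 2002); a minimal sketch follows, with alpha as the scale and beta as the shape parameter (this notation is ours, not necessarily the paper's):

```python
import math

# Closed-form KL divergence between two zero-mean *univariate*
# generalized Gaussians, the simpler of the two similarity measures
# compared in the abstract above.
def ggd_kld(alpha1, beta1, alpha2, beta2):
    """D(p1 || p2); equals zero for identical parameters."""
    lg = math.lgamma
    return (math.log((beta1 * alpha2) / (beta2 * alpha1))
            + lg(1.0 / beta2) - lg(1.0 / beta1)
            + (alpha1 / alpha2) ** beta2
            * math.exp(lg((beta2 + 1.0) / beta1) - lg(1.0 / beta1))
            - 1.0 / beta1)
```

The multivariate GD, by contrast, is generally costlier to evaluate, which is one motivation for the computational-load comparison mentioned in the abstract.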
Constraining the Atmospheric Composition of the Day-Night Terminators of HD 189733b : Atmospheric Retrieval with Aerosols
A number of observations have shown that Rayleigh scattering by aerosols
dominates the transmission spectrum of HD 189733b at wavelengths shortward of 1
μm. In this study, we retrieve a range of aerosol distributions consistent
with transmission spectroscopy between 0.3 and 24 μm that was recently
re-analyzed by Pont et al. (2013). To constrain the particle size and the
optical depth of the aerosol layer, we investigate the degeneracies between
aerosol composition, temperature, planetary radius, and molecular abundances
that prevent unique solutions for transit spectroscopy. Assuming that the
aerosol is composed of MgSiO3, we suggest that a vertically uniform aerosol
layer over all pressures with a monodisperse particle size smaller than about
0.1 μm and an optical depth in the range 0.002-0.02 at 1 μm provides
statistically meaningful solutions for the day/night terminator regions of HD
189733b. Generally, we find that a uniform aerosol layer provides adequate fits
to the data if the optical depth is less than 0.1 and the particle size is
smaller than 0.1 μm, irrespective of the atmospheric temperature, planetary
radius, aerosol composition, and gaseous molecules. Strong constraints on the
aerosol properties are provided by spectra at wavelengths shortward of 1 μm
as well as longward of 8 μm, if the aerosol material has absorption
features in this region. We show that these are the optimal wavelengths for
quantifying the effects of aerosols, which may guide the design of future space
observations. The present investigation indicates that the current data offer
sufficient information to constrain some of the aerosol properties of
HD 189733b, but the chemistry in the terminator regions remains uncertain.
Comment: Transferred to ApJ and accepted. 11 pages, 10 figures, 1 table
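The small-particle regime constrained above can be illustrated with a back-of-envelope sketch (not the retrieval code used in the study): in the Rayleigh limit, aerosol extinction scales as the inverse fourth power of wavelength, so an optical depth anchored at 1 μm extrapolates across the observed range.

```python
# Rayleigh-limit extrapolation of aerosol optical depth. tau_ref is a
# mid-range value picked from the retrieved 0.002-0.02 interval; the
# lambda^-4 slope is the small-particle scattering limit.
def aerosol_tau(wavelength_um, tau_ref=0.01, ref_um=1.0, slope=-4.0):
    """Optical depth at wavelength_um, anchored at ref_um."""
    return tau_ref * (wavelength_um / ref_um) ** slope
```

This steep falloff is why short-wavelength spectra constrain the aerosol so strongly, while constraints longward of 8 μm require absorption features of the aerosol material itself.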
Evolutionary constraints on the complexity of genetic regulatory networks allow predictions of the total number of genetic interactions
Genetic regulatory networks (GRNs) have been widely studied, yet the final
size and properties of these networks remain poorly understood, mainly
because no network is currently complete. In this study, we
analyzed the distribution of GRN structural properties across a large set of
distinct prokaryotic organisms and found a set of constrained characteristics
such as network density and number of regulators. Our results allowed us to
estimate the number of interactions that complete networks would have, a
valuable insight that could aid in the daunting task of network curation,
prediction, and validation. Using state-of-the-art statistical approaches, we
also provided new evidence to settle a previously stated controversy that
raised the possibility of complete biological networks being random and
therefore attributing the observed scale-free properties to an artifact
emerging from the sampling process during network discovery. Furthermore, we
identified a set of properties that enabled us to assess the consistency of the
connectivity distribution for various GRNs against different alternative
statistical distributions. Our results favor the hypothesis that highly
connected nodes (hubs) are not a consequence of network incompleteness.
Finally, an interaction coverage computed for the GRNs as a proxy for
completeness revealed that high-throughput based reconstructions of GRNs could
yield biased networks with a low average clustering coefficient, showing that
classical targeted discovery of interactions is still needed.
Comment: 28 pages, 5 figures, 12 pages supplementary information
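The density-based extrapolation mentioned above can be sketched in one line: if network density is roughly conserved across prokaryotes, the total interaction count of a complete network follows from the gene count alone (the density value in the example below is an illustrative placeholder, not the paper's estimate).

```python
# Back-of-envelope projection of a complete GRN's size from a
# conserved network density. Density here is edges divided by the
# number of possible directed edges without self-loops.
def projected_interactions(n_genes, density):
    """Expected edges in a complete directed GRN at the given density."""
    return density * n_genes * (n_genes - 1)
```

For example, at an assumed density of 0.002, a 1000-gene genome would project to roughly 2,000 regulatory interactions.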
Modelling dynamic decision making with the ACT-R cognitive architecture
This paper describes a model of dynamic decision making in the Dynamic Stocks and Flows (DSF) task, developed using the ACT-R cognitive architecture. The task is a simple simulation of a water tank in which the water level must be kept constant while the inflow and outflow change at varying rates. The model operates in three steps. First, it predicts the water level in the next cycle by adding the predicted net inflow to the current water level. Second, based on this projection, it adjusts the net outflow to bring the water level back to the target. Third, it adjusts the predicted net inflow to improve future accuracy: if the prediction overestimated net inflow, it is reduced; if it underestimated net inflow, it is increased. The model was entered into a model comparison competition, the Dynamic Stocks and Flows Challenge, to model human performance on four conditions of the DSF task and then be tested on five unseen transfer conditions. The model reproduced the main features of the development data reasonably well but did not reproduce human performance under the transfer conditions. This suggests that the principles underlying human performance across the different conditions differ considerably despite their apparent similarity. Lessons for the future development of our model and for model comparison challenges are considered.
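The three steps above can be sketched as a minimal discrete-time loop. This is an illustrative reimplementation of the stated logic, not the authors' ACT-R code; the learning rate `alpha` and the starting values are assumptions.

```python
# Toy simulation of the three-step control loop: predict the level,
# correct via outflow, then learn a better net-inflow prediction.
def run_dsf(inflows, target=4.0, level=4.0, alpha=0.5):
    """Return the water level after each cycle."""
    predicted_inflow = 0.0
    levels = []
    for inflow in inflows:
        # Step 1: predict the next level from the current level plus
        # the predicted net inflow.
        predicted_level = level + predicted_inflow
        # Step 2: set the net outflow to pull the predicted level back
        # to the target.
        outflow = predicted_level - target
        # Environment update: the true inflow, not the prediction,
        # drives the actual level change.
        level = level + inflow - outflow
        # Step 3: nudge the inflow prediction toward the observed
        # inflow (reduced if overestimated, increased if underestimated).
        predicted_inflow += alpha * (inflow - predicted_inflow)
        levels.append(level)
    return levels
```

With a constant inflow, the level converges to the target; under abruptly changing inflows, the lag in the prediction step produces the kind of over- and undershoot the task elicits.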
A holistic model to infer mathematics performance: the interrelated impact of student, family and school context variables
The present study explores predictors of mathematics performance. In particular, it focuses on internal student characteristics (gender, age, metacognitive experience, mathematics self-efficacy) and external contextual factors (GDP of the school location, parents' educational level, teachers' educational level, and teacher beliefs). A sample of 1749 students and 91 teachers from Chinese primary schools was involved in the study. Path analysis was used to test the direct and indirect relations between the predictors and mathematics performance. Results reveal that a large proportion of the variance in mathematics performance can be directly predicted from students' metacognitive experiences. In addition, other student characteristics and contextual variables influence mathematics performance directly or indirectly.
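The direct/indirect decomposition that path analysis provides can be illustrated with a small synthetic sketch; the data, variable roles, and effect sizes below are assumptions for illustration, not the study's sample or estimates.

```python
import numpy as np

# Synthetic mediation example: a predictor (e.g. self-efficacy) acts on
# performance both directly and indirectly through a mediator (e.g.
# metacognitive experience). All coefficients are made up.
rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)                       # predictor
m = 0.6 * x + rng.normal(size=n)             # mediator
y = 0.3 * x + 0.5 * m + rng.normal(size=n)   # performance

a = np.polyfit(x, m, 1)[0]                   # path x -> m
X = np.column_stack([x, m])
coef = np.linalg.lstsq(X - X.mean(axis=0), y - y.mean(), rcond=None)[0]
b_direct, b_med = coef                       # direct path, path m -> y
indirect = a * b_med                         # indirect effect of x via m
total = b_direct + indirect                  # total effect of x on y
```

The total effect splits into a direct path and a mediated path; with these synthetic coefficients the two components are of comparable size.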
Five seconds or sixty? Presentation time in expert memory
The template theory presented in Gobet and Simon (1996a, 1998) is based on the EPAM theory (Feigenbaum & Simon, 1984; Richman et al., 1995), including the numerical parameters that have been estimated in tests of the latter; it therefore offers precise predictions for the timing of cognitive processes during the presentation and recall of chess positions. This paper describes the behavior of CHREST, a computer implementation of the template theory, in a task in which the presentation time is systematically varied from one second to sixty seconds for the recall of both game and random positions, and compares the model to human data. As predicted by the model, strong players are better than weak players with both types of positions. Their superiority with random positions is especially clear with long presentation times, but is also present after brief presentation times, although smaller in absolute value. CHREST accounts for the data, both qualitatively and quantitatively. Strong players' superiority with random positions is explained by the large number of chunks they hold in LTM; their high recall percentage with short presentation times is explained by the presence of templates, a special class of chunks. The model is compared to other theories of chess skill, which either cannot account for the superiority of Masters with random positions (models based on high-level descriptions and on levels of processing) or predict too strong a performance of Masters with random positions (long-term working memory).
Toward Explainable Fashion Recommendation
Many studies have been conducted so far to build systems for recommending
fashion items and outfits. Although they achieve good performances in their
respective tasks, most of them cannot explain their judgments to the users,
which compromises their usefulness. Toward explainable fashion recommendation,
this study proposes a system that is able not only to provide a goodness score
for an outfit but also to explain the score by providing the reasons behind it. For
this purpose, we propose a method for quantifying how influential each feature
of each item is to the score. Using this influence value, we can identify which
item and what feature make the outfit good or bad. We represent the image of
each item with a combination of human-interpretable features, and thereby the
identification of the most influential item-feature pair gives a useful
explanation of the output score. To evaluate the performance of this approach,
we design an experiment that can be performed without human annotation; we
replace a single item-feature pair in an outfit so that the score will
decrease, and then we test if the proposed method can detect the replaced item
correctly using the above influence values. The experimental results show that
the proposed method can accurately detect the bad items in outfits that
lower their scores.
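The influence idea can be illustrated with a toy ablation sketch. The linear scorer and its weights `w` are assumptions standing in for whatever trained model produces the goodness score; the influence of an item-feature pair is measured as the score change when that single feature is removed.

```python
import numpy as np

# Toy influence computation: ablate each interpretable feature of each
# item and record how much the outfit score drops. The most negative
# influence flags the item-feature pair dragging the score down.
def influences(features, w):
    """features: (n_items, n_feats) array; w: hypothetical weights."""
    base = float(np.sum(features @ w))        # full-outfit score
    infl = np.zeros_like(features)
    for i in range(features.shape[0]):
        for j in range(features.shape[1]):
            ablated = features.copy()
            ablated[i, j] = 0.0               # remove one item-feature pair
            infl[i, j] = base - float(np.sum(ablated @ w))
    return infl
```

The paper's replacement experiment can then be mimicked by planting a score-lowering feature and checking whether the argmin of the influence matrix points at the planted pair.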
On the potential of the EChO mission to characterise gas giant atmospheres
Space telescopes such as EChO (Exoplanet Characterisation Observatory) and
JWST (James Webb Space Telescope) will be important for the future study of
extrasolar planet atmospheres. Both of these missions are capable of performing
high sensitivity spectroscopic measurements at moderate resolutions in the
visible and infrared, which will allow the characterisation of atmospheric
properties using primary and secondary transit spectroscopy. We use the NEMESIS
radiative transfer and retrieval tool (Irwin et al. 2008, Lee et al. 2012) to
explore the potential of the proposed EChO mission to solve the retrieval
problem for a range of H2-He planets orbiting different stars. We find that
EChO should be capable of retrieving temperature structure to ~200 K precision
and detecting H2O, CO2 and CH4 from a single eclipse measurement for a hot
Jupiter orbiting a Sun-like star and a hot Neptune orbiting an M star, also
providing upper limits on CO and NH3. We provide a table of retrieval
precisions for these quantities in each test case. We expect around 30
Jupiter-sized planets to be observable by EChO; hot Neptunes orbiting M dwarfs
are rarer, but we anticipate observations of at least one similar planet.
Comment: 22 pages, 30 figures, 4 tables. Accepted for publication in MNRAS