884 research outputs found
Automated supervised classification of variable stars I. Methodology
The fast classification of new variable stars is an important step in making
them available for further research. Selection of science targets from large
databases is much more efficient if they have been classified first. Defining
the classes in terms of physical parameters is also important to get an
unbiased statistical view on the variability mechanisms and the borders of
instability strips. Our goal is twofold: provide an overview of the stellar
variability classes that are presently known, in terms of some relevant stellar
parameters; use the class descriptions obtained as the basis for an automated
`supervised classification' of large databases. Such automated classification
will compare and assign new objects to a set of pre-defined variability
training classes. For every variability class, a literature search was
performed to find as many well-known member stars as possible, or a
considerable subset if too many were present. Next, we searched on-line and
private databases for their light curves in the visible band and performed
period analysis and harmonic fitting. The derived light curve parameters are
used to describe the classes and define the training classifiers. We compared
the performance of different classifiers in terms of percentage of correct
identification, of confusion among classes and of computation time. We describe
how well the classes can be separated using the proposed set of parameters and
how future improvements can be made, based on new large databases such as the
light curves to be assembled by the CoRoT and Kepler space missions.
Comment: This paper has been accepted for publication in Astronomy and Astrophysics (reference AA/2007/7638). Number of pages: 27. Number of figures: 1.
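As a concrete illustration of the kind of pipeline the abstract describes (period search, harmonic fitting, supervised classification on the derived light-curve parameters), the sketch below uses synthetic light curves, astropy's Lomb-Scargle periodogram, and a scikit-learn random forest. It is not the authors' code; the toy classes and feature set are assumptions.

```python
# Minimal sketch: find the dominant period with a Lomb-Scargle periodogram,
# fit a low-order harmonic model, and feed the fitted parameters to a
# supervised classifier. Synthetic toy classes only; the real training classes
# come from the literature search described above.
import numpy as np
from astropy.timeseries import LombScargle
from sklearn.ensemble import RandomForestClassifier

def lightcurve_features(t, mag, n_harmonics=3):
    """Dominant frequency plus harmonic amplitudes of a light curve."""
    freq, power = LombScargle(t, mag).autopower(nyquist_factor=5)
    f0 = freq[np.argmax(power)]
    # Linear least-squares harmonic fit: mag ~ c0 + sum_k a_k sin(2*pi*k*f0*t) + b_k cos(...)
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * k * f0 * t), np.cos(2 * np.pi * k * f0 * t)]
    coeffs, *_ = np.linalg.lstsq(np.column_stack(cols), mag, rcond=None)
    amps = [float(np.hypot(coeffs[2 * k - 1], coeffs[2 * k])) for k in range(1, n_harmonics + 1)]
    return [float(f0)] + amps

rng = np.random.default_rng(0)
X, y = [], []
for label, period in [("RRLyrae-like", 0.5), ("Cepheid-like", 5.0)]:  # hypothetical classes
    for _ in range(30):
        t = np.sort(rng.uniform(0, 100, 200))
        mag = np.sin(2 * np.pi * t / period) + 0.1 * rng.normal(size=t.size)
        X.append(lightcurve_features(t, mag))
        y.append(label)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict([X[0]]))  # assign a "new" object to one of the training classes
```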
The Gaia Ultra-Cool Dwarf Sample -- II: Structure at the end of the main sequence
© 2019 The Author(s). Published by Oxford University Press on behalf of the Royal Astronomical Society.
We identify and investigate known late M, L, and T dwarfs in the Gaia second data release. This sample is being used as a training set in the Gaia data processing chain of the ultracool dwarfs work package. We find 695 objects in the optical spectral range M8-T6 with accurate Gaia coordinates, proper motions, and parallaxes, which we combine with published spectral types and photometry from large-area optical and infrared sky surveys. We find that 100 objects are in 47 multiple systems, of which 27 systems are published and 20 are new. These will be useful benchmark systems, and we discuss the requirements to produce a complete catalogue of multiple systems with an ultracool dwarf component. We examine the magnitudes in the Gaia passbands and find that the G_BP magnitudes are unreliable and should not be used for these objects. We examine progressively redder colour-magnitude diagrams and see a notable increase in the main-sequence scatter and a bivariate main sequence for old and young objects. We provide an absolute magnitude versus spectral subtype calibration for the G and G_RP passbands, along with linear fits over the range M8-L8 for other passbands.
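The calibration mentioned at the end of the abstract (absolute magnitude as a linear function of spectral subtype over M8-L8) can be sketched as a simple least-squares fit. Every number and the subtype encoding below are invented placeholders, not values from the paper.

```python
# Toy linear absolute-magnitude vs. spectral-subtype calibration.
import numpy as np

# Spectral subtype on a running index (assumed encoding): M8 = 8, L0 = 10, ..., L8 = 18.
subtype = np.arange(8, 19, dtype=float)
# Hypothetical parallax-based absolute magnitudes in a red passband for those subtypes.
abs_mag = 13.0 + 0.45 * subtype + np.random.default_rng(1).normal(0.0, 0.15, subtype.size)

slope, intercept = np.polyfit(subtype, abs_mag, deg=1)  # linear calibration fit
print(f"M ~ {slope:.2f} * subtype + {intercept:.2f}")
```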
Game-theoretic analysis of development practices: Challenges and opportunities
Developers continuously invent new practices, usually grounded in hard-won experience, not theory. Game theory studies cooperation and conflict; its use will speed the development of effective processes. A survey of game theory in software engineering finds highly idealised models that are rarely based on process data. This is because software processes are hard to analyse using traditional game theory, since they generate huge game models. We are the first to show how to use game abstractions, developed in artificial intelligence, to produce tractable game-theoretic models of software practices. We present Game-Theoretic Process Improvement (GTPI), built on top of empirical game-theoretic analysis. Some teams fall into the habit of preferring 'quick-and-dirty' code to slow-to-write, careful code, incurring technical debt. We showcase GTPI's ability to diagnose and improve such a development process. Using GTPI, we discover a lightweight intervention that incentivises developers to write careful code: add a single code reviewer who needs to catch only 25% of kludges. This 25% accuracy is key; it means that a reviewer does not need to examine each commit in depth, making this process intervention cost-effective.
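The flavour of the reported intervention can be illustrated with a toy expected-payoff calculation. The payoffs below are invented (the paper's model is estimated empirically from process data) and chosen only so that the switch in best response lands near the 25% figure quoted above.

```python
# Toy best-response check: does a reviewer who catches a fraction of kludges
# make careful coding the better choice? All payoff numbers are hypothetical.
def expected_payoff(strategy, catch_rate):
    if strategy == "careful":
        return 6.0                                    # hypothetical: slower, never reworked
    # hypothetical kludge payoffs: 8 if it slips through, -2 if caught and reworked
    return (1 - catch_rate) * 8.0 + catch_rate * (-2.0)

for catch_rate in (0.0, 0.25, 0.5):
    best = max(("careful", "kludge"), key=lambda s: expected_payoff(s, catch_rate))
    print(f"reviewer catch rate {catch_rate:.2f}: best response = {best}")
```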
Cluster membership probabilities from proper motions and multiwavelength photometric catalogues: I. Method and application to the Pleiades cluster
We present a new technique designed to take full advantage of the high
dimensionality (photometric, astrometric, temporal) of the DANCe survey to
derive self-consistent and robust membership probabilities of the Pleiades
cluster. We aim at developing a methodology to infer membership probabilities
to the Pleiades cluster from the DANCe multidimensional astro-photometric data
set in a consistent way throughout the entire derivation. The determination of
the membership probabilities has to be applicable to censored data and must
incorporate the measurement uncertainties into the inference procedure.
We use Bayes' theorem and a curvilinear forward model for the likelihood of
the measurements of cluster members in the colour-magnitude space, to infer
posterior membership probabilities. The distribution of the cluster members'
proper motions and the distribution of contaminants in the full
multidimensional astro-photometric space are modelled with a
mixture-of-Gaussians likelihood. We analyse several representation spaces
composed of the proper motions plus a subset of the available magnitudes and
colour indices. We select two prominent representation spaces composed of
variables selected using feature relevance determination techniques based on
Random Forests, and analyse the resulting samples of high probability
candidates. We consistently find lists of high probability (p > 0.9975)
candidates with about 1000 sources, 4 to 5 times more than obtained in the
most recent astro-photometric studies of the cluster.
The methodology presented here is ready for application in data sets that
include more dimensions, such as radial and/or rotational velocities, spectral
indices, and variability.
Comment: 14 pages, 4 figures, accepted by A&A.
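A minimal two-component version of the membership computation (Bayes' theorem with Gaussian likelihoods for members and contaminants in proper-motion space) might look like the sketch below. The data are synthetic, the component parameters are assumed known rather than fitted, and the paper's actual model also handles photometry, censored data, and measurement uncertainties.

```python
# Two-Gaussian sketch of posterior membership probabilities from proper motions.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(42)
cluster_pm = rng.multivariate_normal([19.9, -45.5], np.diag([1.0, 1.0]), 300)  # Pleiades-like proper motions (mas/yr)
field_pm = rng.multivariate_normal([0.0, 0.0], np.diag([25.0, 25.0]), 3000)    # broad contaminant distribution
pm = np.vstack([cluster_pm, field_pm])

prior_cluster = len(cluster_pm) / len(pm)
like_cluster = multivariate_normal([19.9, -45.5], np.diag([1.0, 1.0])).pdf(pm)
like_field = multivariate_normal([0.0, 0.0], np.diag([25.0, 25.0])).pdf(pm)

# Posterior p(member | proper motion) from Bayes' theorem
p_member = prior_cluster * like_cluster / (
    prior_cluster * like_cluster + (1 - prior_cluster) * like_field)
print("candidates with p > 0.9975:", int((p_member > 0.9975).sum()))
```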
Artefact Relation Graphs for Unit Test Reuse Recommendation
The reuse of artefacts is fundamental to software development and can reduce development cost and time as well as improve the quality of the output. For example, developers often create new tests from existing tests by copying and adapting them. However, reuse opportunities are often missed due to the cost of discovering suitable artefacts to reuse. Development artefacts form groups that have both internal connections between artefacts of the same type and cross-group connections between artefacts of different types. When a pair of artefact groups is considered, the cross-group connections form a bipartite graph. This paper presents Rashid, an abstract framework to assist artefact reuse by predicting edges in these bipartite graphs. We instantiate Rashid with Relatest, an approach to assist developers to reuse tests. Relatest recommends existing tests that are closely related to a new function and can, therefore, be easily adapted to test the new function. Our evaluation finds that Relatest's recommendations result in an average 58% reduction in developer effort (measured in tokens) for 75% of functions, resulting in an overall saving of 43% of the effort required to create tests. A user study revealed that, on average, developers needed 10 minutes less to develop a test when given Relatest recommendations, and all developers reported that the recommendations were useful.
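A toy version of the recommendation step (predict edges in the function-test bipartite graph and reuse the tests of the most similar function) is sketched below. The token-overlap similarity and the example graph are placeholders, not Relatest's actual artefact-relation features.

```python
# Recommend existing tests for a new function via its most similar tested functions.
import re

existing_tests = {  # hypothetical function -> tests edges
    "parse_config": ["test_parse_config_defaults", "test_parse_config_missing_file"],
    "load_user": ["test_load_user_not_found"],
}

def tokens(name):
    return set(re.split(r"[_\W]+", name.lower())) - {""}

def recommend_tests(new_function, graph, top_k=2):
    """Rank existing tests by token overlap between their function and the new one."""
    scored = []
    for func, tests in graph.items():
        sim = len(tokens(func) & tokens(new_function)) / len(tokens(func) | tokens(new_function))
        scored.extend((sim, test) for test in tests)
    return [test for sim, test in sorted(scored, reverse=True)[:top_k] if sim > 0]

print(recommend_tests("parse_settings", existing_tests))
```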
The Seven Sisters DANCe. I. Empirical isochrones, Luminosity and Mass Functions of the Pleiades cluster
The DANCe survey provides photometric and astrometric (position and proper
motion) measurements for approximately 2 million unique sources in a region
encompassing 80 deg² centered on the Pleiades cluster.
We aim at deriving a complete census of the Pleiades, and measure the mass
and luminosity function of the cluster. Using the probabilistic selection
method described in Sarro+2014, we identify high probability members in the
DANCe (14 mag) and Tycho-2 (12 mag) catalogues, and study the
properties of the cluster over the corresponding luminosity range. We find a
total of 2109 high probability members, of which 812 are new, making it the
most extensive and complete census of the cluster to date. The luminosity and
mass functions of the cluster are computed from the most massive members down
to 0.025 solar masses. The size, sensitivity, and quality of the sample
result in the most precise luminosity and mass functions observed to date for a
cluster. Our census supersedes previous studies of the Pleiades cluster
populations, both in terms of sensitivity and accuracy.
Comment: Language editing done. Final version to be published in A&A. Tables will be published at the CDS; meanwhile, they can be requested from H. Bouy (hbouy -at- cab . inta - csic . es).
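The mass function construction mentioned above amounts to counting members per logarithmic mass bin. The sketch below uses placeholder masses; only the quoted 0.025 solar-mass sensitivity limit is taken from the abstract.

```python
# Turn a list of member masses into a mass function: counts per log mass bin.
import numpy as np

rng = np.random.default_rng(7)
masses = rng.lognormal(mean=np.log(0.3), sigma=0.7, size=2109)  # placeholder masses in solar units
masses = masses[masses >= 0.025]                                # apply the quoted sensitivity limit

bins = np.logspace(np.log10(0.025), np.log10(10.0), 20)
counts, edges = np.histogram(masses, bins=bins)
mass_function = counts / np.diff(np.log10(edges))               # dN / dlog10(m)
print(np.round(mass_function, 1))
```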
Mutation analysis for evaluating code translation
Source-to-source code translation automatically translates a program from one programming language to another. Existing research on code translation evaluates the effectiveness of the proposed approaches using either syntactic similarity (e.g., BLEU score) or test execution results. The former does not consider semantics; the latter considers semantics but falls short because of insufficient data and tests. In this paper, we propose MBTA (Mutation-based Code Translation Analysis), a novel application of mutation analysis for code translation assessment. We also introduce MTS (Mutation-based Translation Score), a measure to compute the level of trustworthiness of a translator. If a mutant of an input program shows different test execution results from its translated version, the mutant is killed and a translation bug is revealed. Fewer killed mutants indicate better code translation. MBTA is novel in the sense that mutants are compared to their translated counterparts, and not to their original program's translation. We conduct a proof-of-concept case study with 612 Java-Python program pairs and 75,082 mutants on the code translators TransCoder and j2py to evaluate the feasibility of MBTA. The results reveal that TransCoder and j2py fail to translate 70.44% and 70.64% of the mutants, respectively, i.e., more than two-thirds of all mutants are incorrectly translated by these translators. By analysing the MTS results more closely, we were able to reveal translation bugs not captured by the conventional comparison between the original and translated programs.
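A sketch of the scoring idea follows: run the same tests on each mutant and on its translation, count a mutant as killed when the outcomes disagree, and score the translator by the surviving fraction. The result layout and the exact MTS formula here are assumptions for illustration, not the paper's definition.

```python
# Compare each mutant's test outcomes with those of its translation.
def mutation_translation_score(results):
    """results: list of (source_outcomes, translated_outcomes) pairs, one per mutant."""
    killed = sum(1 for src, trans in results if src != trans)
    return 1.0 - killed / len(results)

# Hypothetical pass/fail outcomes for three mutants over two tests (True = pass).
results = [
    ((True, False), (True, False)),  # agreement: mutant survives translation intact
    ((True, True), (True, False)),   # disagreement: mutant killed, translation bug revealed
    ((False, True), (False, True)),  # agreement: survives
]
print(f"MTS = {mutation_translation_score(results):.2f}")  # 0.67 on this toy data
```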
- …