Dissecting Massive YSOs with Mid-Infrared Interferometry
The very inner structure of massive YSOs is difficult to trace. With
conventional observational methods we often identify structures that are still
several hundred AU in size. But we also need information about the innermost
regions where the actual mass transfer onto the forming high-mass star occurs.
An innovative way to probe these scales is to utilise mid-infrared
interferometry. Here, we present first results of our MIDI GTO programme at the
VLTI. We observed 10 well-known massive YSOs down to scales of 20 mas. We
clearly resolve these objects, which results in low visibilities and sizes on
the order of 30-50 mas. Thus, with MIDI we can for the first time quantify
the extent of the thermal emission from the warm circumstellar dust and thus
calibrate existing concepts regarding the compactness of such emission in the
pre-UCHII region phase. Special emphasis will be given to the BN-type object
M8E-IR where our modelling is most advanced and where there is indirect
evidence for a strongly bloated central star.
Comment: 8 pages, 6 figures, proceedings contribution for the conference
"Massive Star Formation: Observations confront Theory", held in September
2007 in Heidelberg, Germany; to appear in ASP Conf. Ser. 387, H. Beuther et
al. (eds.)
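The quoted scales follow from the small-angle relation: an angular size in arcseconds times the distance in parsecs gives the physical size in AU. A minimal sketch (the 1.5 kpc distance is an illustrative assumption, not a value stated in the abstract):

```python
# Small-angle conversion behind milliarcsecond-scale interferometry:
# theta [arcsec] * d [pc] = size [AU]; milliarcseconds divide by 1000.
def mas_to_au(theta_mas, d_pc):
    """Physical size in AU subtended by theta_mas at distance d_pc."""
    return theta_mas * d_pc / 1000.0

# 20 mas at an assumed distance of 1.5 kpc corresponds to 30 AU.
print(mas_to_au(20, 1500))
```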
Evolution of statistical analysis in empirical software engineering research: Current state and steps forward
Software engineering research is evolving and papers are increasingly based
on empirical data from a multitude of sources, using statistical tests to
determine if and to what degree empirical evidence supports their hypotheses.
To investigate the practices and trends of statistical analysis in empirical
software engineering (ESE), this paper presents a review of a large pool of
papers from top-ranked software engineering journals. First, we manually
reviewed 161 papers and in the second phase of our method, we conducted a more
extensive semi-automatic classification of papers spanning the years 2001--2015
and 5,196 papers. Results from both review steps were used to: i) identify and
analyze the predominant practices in ESE (e.g., using t-test or ANOVA), as well
as relevant trends in usage of specific statistical methods (e.g.,
nonparametric tests and effect size measures) and, ii) develop a conceptual
model for a statistical analysis workflow with suggestions on how to apply
different statistical methods as well as guidelines to avoid pitfalls. Lastly,
we confirm existing claims that current ESE practices lack a standard to report
practical significance of results. We illustrate how practical significance can
be discussed in terms of both the statistical analysis and in the
practitioner's context.
Comment: journal submission, 34 pages, 8 figures
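The distinction between statistical and practical significance that this review highlights can be made concrete by reporting an effect size alongside the test statistic. A minimal stdlib-only sketch with invented numbers (not data from the reviewed papers), using Welch's t and Cohen's d with a pooled standard deviation:

```python
import math
import statistics

def cohens_d(a, b):
    """Cohen's d: standardized mean difference with pooled sample SD."""
    na, nb = len(a), len(b)
    sa, sb = statistics.stdev(a), statistics.stdev(b)
    pooled = math.sqrt(((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2))
    return (statistics.mean(a) - statistics.mean(b)) / pooled

def welch_t(a, b):
    """Welch's t statistic (does not assume equal variances)."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va / na + vb / nb)

# Hypothetical task-completion times (minutes) for two treatments.
group_a = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3]
group_b = [11.9, 11.7, 12.2, 11.8, 11.6, 12.0]

print(f"t = {welch_t(group_a, group_b):.2f}, d = {cohens_d(group_a, group_b):.2f}")
```

A small p-value with a small d means the difference, while detectable, may not matter to practitioners.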
Towards Causal Analysis of Empirical Software Engineering Data: The Impact of Programming Languages on Coding Competitions
There is abundant observational data in the software engineering domain,
whereas running large-scale controlled experiments is often practically
impossible. Thus, most empirical studies can only report statistical
correlations -- instead of potentially more insightful and robust causal
relations. To support analyzing purely observational data for causal relations,
and to assess any differences between purely predictive and causal models of
the same data, this paper discusses some novel techniques based on structural
causal models (such as directed acyclic graphs of causal Bayesian networks).
Using these techniques, one can rigorously express, and partially validate,
causal hypotheses; and then use the causal information to guide the
construction of a statistical model that captures genuine causal relations --
such that correlation does imply causation. We apply these ideas to analyzing
public data about programmer performance in Code Jam, a large world-wide coding
contest organized by Google every year. Specifically, we look at the impact of
different programming languages on a participant's performance in the contest.
While the overall effect associated with programming languages is weak compared
to other variables -- regardless of whether we consider correlational or causal
links -- we found considerable differences between a purely associational and a
causal analysis of the very same data. The takeaway message is that even an
imperfect causal analysis of observational data can help answer the salient
research questions more precisely and more robustly than with just purely
predictive techniques -- where genuine causal effects may be confounded.
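The gap between an associational and a causal reading of the same data can be sketched with the backdoor adjustment over a single confounder. The counts below are classic Simpson's-paradox-style numbers invented for illustration, not the paper's Code Jam data:

```python
# Z (task difficulty) confounds X (choice, 0/1) and Y (success).
# Each entry: (x, z) -> (successes, trials).
data = {
    (1, "easy"): (81, 87),   (1, "hard"): (192, 263),
    (0, "easy"): (234, 270), (0, "hard"): (55, 80),
}
LEVELS = ("easy", "hard")

def naive(x):
    """P(Y=1 | X=x): plain conditional probability, ignoring Z."""
    s = sum(data[(x, z)][0] for z in LEVELS)
    n = sum(data[(x, z)][1] for z in LEVELS)
    return s / n

def adjusted(x):
    """P(Y=1 | do(X=x)) via the backdoor formula: sum_z P(Y|x,z) P(z)."""
    total = sum(n for _, n in data.values())
    out = 0.0
    for z in LEVELS:
        p_z = sum(data[(xx, z)][1] for xx in (0, 1)) / total
        s, n = data[(x, z)]
        out += (s / n) * p_z
    return out

print(naive(1), naive(0))        # the raw association favors X=0
print(adjusted(1), adjusted(0))  # adjusting for Z reverses the conclusion
```

With these counts the naive comparison and the adjusted one point in opposite directions, which is exactly the kind of discrepancy a causal analysis of observational data is meant to expose.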
VLTI observations of IRS~3: The brightest compact MIR source at the Galactic Centre
The dust enshrouded star IRS~3 in the central light year of our galaxy was
partially resolved in a recent VLTI experiment. The presented observation is
the first step in investigating both IRS~3 in particular and the stellar
population of the Galactic Centre in general with the VLTI at highest angular
resolution. We will outline which scientific issues can be addressed by a
complete MIDI dataset on IRS~3 in the mid-infrared.
Comment: 4 pages, 3 figures, published in: The ESO Messenger
Bayesian Data Analysis in Empirical Software Engineering Research
Statistics comes in two main flavors: frequentist and Bayesian. For
historical and technical reasons, frequentist statistics have traditionally
dominated empirical data analysis, and certainly remain prevalent in empirical
software engineering. This situation is unfortunate because frequentist
statistics suffer from a number of shortcomings---such as lack of flexibility
and results that are unintuitive and hard to interpret---that curtail their
effectiveness when dealing with the heterogeneous data that is increasingly
available for empirical analysis of software engineering practice.
In this paper, we pinpoint these shortcomings, and present Bayesian data
analysis techniques that provide tangible benefits---as they can provide
clearer results that are simultaneously robust and nuanced. After a short,
high-level introduction to the basic tools of Bayesian statistics, we present
the reanalysis of two empirical studies on the effectiveness of automatically
generated tests and the performance of programming languages. By contrasting
the original frequentist analyses with our new Bayesian analyses, we
demonstrate the concrete advantages of the latter. To conclude we advocate a
more prominent role for Bayesian statistical techniques in empirical software
engineering research and practice.
Comment: To appear in IEEE Transactions on Software Engineering
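A taste of the Bayesian workflow the paper advocates, in a minimal stdlib-only sketch (invented numbers, not the paper's reanalyses): a Beta-Binomial model for a pass rate, using the conjugate closed form so no sampler is needed.

```python
def beta_binomial_posterior(s, n, a=1.0, b=1.0):
    """Posterior (mean, variance) of a pass rate under a Beta(a, b) prior
    after observing s successes in n trials; Beta(1, 1) is uniform."""
    a_post, b_post = a + s, b + (n - s)
    mean = a_post / (a_post + b_post)
    var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
    return mean, var

# Hypothetical data: 47 of 50 generated tests pass.
mean, var = beta_binomial_posterior(s=47, n=50)
print(f"posterior mean = {mean:.3f}, sd = {var ** 0.5:.3f}")
```

Unlike a frequentist point estimate with a p-value, the posterior is a full distribution over the quantity of interest, which is what makes the results both nuanced and directly interpretable.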
Applying Bayesian Analysis Guidelines to Empirical Software Engineering Data: The Case of Programming Languages and Code Quality
Statistical analysis is the tool of choice to turn data into information, and
then information into empirical knowledge. To be valid, the process that goes
from data to knowledge should be supported by detailed, rigorous guidelines,
which help ferret out issues with the data or model, and lead to qualified
results that strike a reasonable balance between generality and practical
relevance. Such guidelines are being developed by statisticians to support the
latest techniques for Bayesian data analysis. In this article, we frame these
guidelines in a way that is apt to empirical research in software engineering.
To demonstrate the guidelines in practice, we apply them to reanalyze a
GitHub dataset about code quality in different programming languages. The
dataset's original analysis (Ray et al., 2014) and a critical reanalysis
(Berger et al., 2019) have attracted considerable attention -- in no small part
because they target a topic (the impact of different programming languages) on
which strong opinions abound. The goals of our reanalysis are largely
orthogonal to this previous work, as we are concerned with demonstrating, on
data in an interesting domain, how to build a principled Bayesian data analysis
and to showcase some of its benefits. In the process, we will also shed light
on some critical aspects of the analyzed data and of the relationship between
programming languages and code quality.
The high-level conclusions of our exercise will be that Bayesian statistical
techniques can be applied to analyze software engineering data in a way that is
principled, flexible, and leads to convincing results that inform the state of
the art while highlighting the boundaries of its validity. The guidelines can
support building solid statistical analyses and connecting their results, and
hence help buttress continued progress in empirical software engineering
research.
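One guideline step that helps "ferret out issues with the data or model" is a posterior predictive check. A hedged stdlib-only sketch with invented numbers (not the GitHub code-quality data), again using a Beta-Binomial model:

```python
import random

random.seed(0)

# Hypothetical observation: 47 passes in 50 runs; conjugate posterior.
s, n = 47, 50
a_post, b_post = 1 + s, 1 + (n - s)

# Simulate replicated datasets from the posterior predictive distribution.
replicated = []
for _ in range(2000):
    theta = random.betavariate(a_post, b_post)  # draw a rate from the posterior
    replicated.append(sum(random.random() < theta for _ in range(n)))

# Posterior predictive p-value: fraction of replicates at least as extreme
# as the observed count. Values very near 0 or 1 would flag model misfit.
p = sum(r >= s for r in replicated) / len(replicated)
print(f"posterior predictive p = {p:.2f}")
```

If the observed statistic sits comfortably inside the distribution of replicates, the model is at least not contradicted by the data it was fit to.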
Supernova Remnant in a Stratified Medium: Explicit, Analytical Approximations for Adiabatic Expansion and Radiative Cooling
We propose simple, explicit, analytical approximations for the kinematics of
an adiabatic blast wave propagating in an exponentially stratified ambient
medium, and for the onset of radiative cooling, which ends the adiabatic era.
Our method, based on the Kompaneets implicit solution and the Kahn
approximation for the radiative cooling coefficient, gives straightforward
estimates for the size, expansion velocity, and progression of cooling times
over the surface, when applied to supernova remnants (SNRs). The remnant shape
is remarkably close to spherical for moderate density gradients, but even a
small gradient in ambient density causes the cooling time to vary substantially
over the remnant's surface, so that for a considerable period there will be a
cold dense expanding shell covering only a part of the remnant. Our
approximation provides an effective tool for identifying the approximate
parameters when planning 2-dimensional numerical models of SNRs, the example of
W44 being given in a subsequent paper.
Comment: ApJ accepted, 11 pages, 2 figures embedded, AAS style with
ecmatex.sty and lscape.sty packages
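For orientation (a standard textbook result, not the paper's stratified solution): in the limit of a uniform ambient medium the adiabatic kinematics must reduce to the Sedov-Taylor law,

```latex
R(t) = \xi \left( \frac{E\, t^{2}}{\rho_{0}} \right)^{1/5},
\qquad
v(t) = \dot{R}(t) = \frac{2}{5}\,\frac{R(t)}{t},
```

where $E$ is the explosion energy, $\rho_{0}$ the ambient density, and $\xi \approx 1.15$ for an adiabatic index $\gamma = 5/3$. The exponential stratification treated here deforms this spherical solution and desynchronizes the cooling time over the shock surface.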
Detecting Extrasolar Planets with Integral Field Spectroscopy
Observations of extrasolar planets using Integral Field Spectroscopy (IFS),
if coupled with an extreme Adaptive Optics system and analyzed with a
Simultaneous Differential Imaging technique (SDI), are a powerful tool to
detect and characterize extrasolar planets directly; they enhance the signal of
the planet and, at the same time, reduce the impact of stellar light and,
consequently, of important noise sources like speckles. In order to verify the
efficiency of such a technique, we developed a simulation code able to test the
capabilities of this IFS-SDI technique for different kinds of planets and
telescopes, modelling the atmospheric and instrumental noise sources. The first
results obtained by the simulations show that many significant extrasolar
planet detections are indeed possible using the present 8m-class telescopes
within a few hours of exposure time. The procedure adopted to simulate IFS
observations is presented here in detail, explaining in particular how we
obtain estimates of the speckle noise, Adaptive Optics corrections, specific
instrumental features, and how we test the efficiency of the SDI technique to
increase the signal-to-noise ratio of the planet detection. The most important
results achieved by simulations of various objects, from 1 M_J to brown dwarfs
of 30 M_J, for observations with an 8 meter telescope, are then presented and
discussed.
Comment: 60 pages, 37 figures, accepted in PASP, 4 tables added
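The principle behind SDI can be shown in a deliberately idealized toy sketch (arbitrary invented fluxes, not the paper's simulation code): two spectral channels share the same stellar speckle pattern, while the planet signal appears in only one (e.g., inside versus outside a deep absorption band of the planet's spectrum), so their difference cancels the common speckles.

```python
# Common stellar speckle halo seen identically in both channels
# (idealized: real channels need rescaling and share only partially).
speckles = [0.9, 1.4, 0.7, 2.1, 1.1, 0.8]
planet   = [0.0, 0.0, 0.0, 0.0, 0.3, 0.0]  # faint planet in one pixel

channel_in  = [s + p for s, p in zip(speckles, planet)]  # planet visible
channel_out = speckles[:]                                # planet absorbed

# Simultaneous difference: speckles subtract out, planet signal remains.
residual = [a - b for a, b in zip(channel_in, channel_out)]
print(residual)
```

In practice the cancellation is imperfect because speckle patterns are chromatic, which is why the paper's simulations model the residual speckle noise explicitly.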