High-resolution imaging of planet host candidates. A comprehensive comparison of different techniques
The Kepler mission has discovered thousands of planet candidates. Currently,
some of them have already been discarded; more than 200 have been confirmed by
follow-up observations, and several hundred have been validated. However,
most of them still await confirmation. Thus, priorities (in terms of the
probability of the candidate being a real planet) must be established for
subsequent observations. The motivation of this work is to provide a set of
isolated (good) host candidates to be further tested by other techniques. We
identify close companions of the candidates that could have contaminated the
light curve of the planet host. We used the AstraLux North instrument located
at the 2.2 m telescope in the Calar Alto Observatory to obtain
diffraction-limited images of 174 Kepler objects of interest. The lucky-imaging
technique used in this work is compared with adaptive optics (AO) and speckle imaging
observations of Kepler planet host candidates. We define a new parameter, the
blended source confidence level (BSC), to assess the probability that an
object has blended, undetected eclipsing binaries capable of producing the
detected transit. We find that 67.2% of the observed Kepler hosts are isolated
within our detectability limits, and 32.8% have at least one visual companion
at angular separations below 6 arcsec. We find close companions (below 3
arcsec) for 17.2% of the sample. The planet properties of this sample of
non-isolated hosts are revised. We report one possible S-type binary
(KOI-3158). We also report three possible false positives (KOIs 1230.01,
3649.01, and 3886.01) due to the presence of close companions. The BSC
parameter is calculated for all the isolated targets and compared to both the
value prior to any high-resolution image and, when possible, to observations
from previous high-spatial-resolution surveys in the Kepler sample.
Comment: Accepted for publication in A&A on April 29, 2014; 32 pages, 11 figures, 11 tables
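The abstract does not give the functional form of the BSC, so the following Python sketch is only a hypothetical illustration of the idea: estimate, via Poisson statistics, the probability that at least one background eclipsing binary bright enough to mimic the observed transit hides below the image's contrast curve. The function names, the 50% maximum-eclipse-depth assumption, and the toy background source density are all assumptions, not the paper's method.

    import numpy as np

    def faintest_mimic_mag(m_target, depth):
        # An eclipsing binary diluted by the target can reproduce a transit of
        # the given depth only if brighter than this magnitude (assuming a
        # maximum intrinsic eclipse depth of 50%).
        return m_target - 2.5 * np.log10(depth / 0.5)

    def n_background(m_lo, m_hi):
        # Toy background-source count per arcsec^2 between two magnitudes; a
        # real analysis would use a Galactic stellar population model.
        return 1e-3 * (10 ** (0.3 * m_hi) - 10 ** (0.3 * m_lo))

    def bsc(m_target, depth, sep_grid, contrast_grid, r_aperture=6.0):
        """Poisson probability of >= 1 undetected blended EB mimicking the transit."""
        m_max = faintest_mimic_mag(m_target, depth)
        n_exp, edges = 0.0, np.linspace(0.0, r_aperture, 61)
        for r0, r1 in zip(edges[:-1], edges[1:]):
            area = np.pi * (r1**2 - r0**2)                 # annulus, arcsec^2
            m_lim = m_target + np.interp(0.5*(r0 + r1), sep_grid, contrast_grid)
            if m_max > m_lim:                              # blind magnitude range
                n_exp += area * n_background(m_lim, m_max)
        return 1.0 - np.exp(-n_exp)

    # Example: Kp = 14 host, 1000 ppm transit, contrast reaching 6 mag at 2 arcsec
    sep = np.array([0.1, 0.5, 1.0, 2.0, 6.0])
    contrast = np.array([0.0, 3.0, 5.0, 6.0, 6.5])
    print(bsc(14.0, 1e-3, sep, contrast))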
Routine Crime in Exceptional Times: The Impact of the 2002 Winter Olympics on Citizen Demand for Police Services
Despite their rich theoretical and practical importance, criminologists have paid scant attention to the patterns of crime and the responses to crime during exceptional events. Throughout the world, large-scale political, social, economic, cultural, and sporting events have become commonplace. Disasters and disruptions such as blackouts, hurricanes, tornadoes, and tsunamis present similar opportunities. Such events often tax the capacities of jurisdictions to provide safety and security in response to the exceptional event, as well as to meet “routine” public safety needs. This article examines “routine” crime as measured by calls for police service, official crime reports, and police arrests in Salt Lake City before, during, and after the 2002 Olympic Games. The analyses suggest that while a rather benign demographic among attendees and the presence of large numbers of social control agents might have been expected to decrease calls for police service for minor crime, such calls actually increased in Salt Lake City during this period. The implications of these findings are considered for theories of routine activities as well as for systems capacity.
A management architecture for active networks
In this paper we present an architecture for network and applications management which is based on the Active Networks paradigm and shows the advantages of network programmability. The stimulus to develop this architecture arises from an actual need to manage a cluster of active nodes, where it is often required to redeploy network assets and modify node connectivity. In our architecture, a remote front-end of the managing entity allows the operator to design new network topologies, to check the status of the nodes, and to configure them. Moreover, the proposed framework allows the operator to explore an active network, monitor the active applications, query each node, and install programmable traps. In order to take advantage of the Active Networks technology, we introduce active SNMP-like MIBs and agents, which are dynamic and programmable. The programmable management agents make tracing distributed applications a feasible task. We propose a general framework that can inter-operate with any active execution environment. In this framework, both the manager and the monitor front-ends communicate with an active node (the Active Network Access Point) using XML. A gateway service translates the queries from XML to an active packet language and injects the code into the network. We demonstrate the implementation of an active network gateway for PLAN (Packet Language for Active Networks) on a testbed of forty active nodes. Finally, we discuss an application of the active management architecture to detect the causes of network failures by tracing network events in time.
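As a purely hypothetical illustration of the gateway step (the XML schema and the emitted program below are invented for this sketch and are not the paper's actual formats, nor real PLAN syntax), the translation from an XML query to an injectable program might look like:

    import xml.etree.ElementTree as ET

    QUERY = """
    <query target="node7">
      <mib object="active.apps.count"/>
      <trap object="link.status" condition="down"/>
    </query>
    """

    def translate(xml_text):
        # Parse the manager's XML query and emit a PLAN-like program string;
        # the real gateway would inject the resulting code into the network
        # as an active packet addressed to the target node.
        root = ET.fromstring(xml_text)
        parts = []
        for child in root:
            if child.tag == "mib":
                parts.append(f'get("{child.get("object")}")')
            elif child.tag == "trap":
                parts.append(f'installTrap("{child.get("object")}", '
                             f'"{child.get("condition")}")')
        return f'onRemote({root.get("target")}, [{"; ".join(parts)}])'

    print(translate(QUERY))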
Determination of the Joint Confidence Region of Optimal Operating Conditions in Robust Design by Bootstrap Technique
Robust design has been widely recognized as a leading method in reducing
variability and improving quality. Most of the engineering statistics
literature mainly focuses on finding "point estimates" of the optimum operating
conditions for robust design. Various procedures for calculating point
estimates of the optimum operating conditions are considered. Although this
point estimation procedure is important for continuous quality improvement, the
immediate question is "how accurate are these optimum operating conditions?"
The answer is to consider interval estimation for a single variable or
joint confidence regions for multiple variables.
In this paper, with the help of the bootstrap technique, we develop
procedures for obtaining joint "confidence regions" for the optimum operating
conditions. Two different procedures using Bonferroni and multivariate normal
approximation are introduced. The proposed methods are illustrated and
substantiated using a numerical example.
Comment: Two tables, three figures
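A minimal sketch of one way to implement this, under assumed details that are not taken from the paper (a quadratic response surface, percentile-based Bonferroni limits, and a normal-approximation ellipsoid built from the bootstrap covariance):

    import numpy as np
    from scipy.stats import chi2

    rng = np.random.default_rng(1)

    def fit_optimum(X, y):
        # Quadratic response surface in two factors:
        # y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
        D = np.column_stack([np.ones(len(y)), X[:, 0], X[:, 1],
                             X[:, 0]**2, X[:, 1]**2, X[:, 0]*X[:, 1]])
        b = np.linalg.lstsq(D, y, rcond=None)[0]
        # Stationary point: solve the 2x2 linear system from gradient = 0
        H = np.array([[2*b[3], b[5]], [b[5], 2*b[4]]])
        return np.linalg.solve(H, -b[1:3])

    # toy experiment with true optimum near (0.3, -0.2)
    X = rng.uniform(-1, 1, size=(40, 2))
    y = 5 - (X[:, 0] - 0.3)**2 - 2*(X[:, 1] + 0.2)**2 + rng.normal(0, 0.1, 40)

    B = 2000
    opts = np.empty((B, 2))
    for i in range(B):
        idx = rng.integers(0, len(y), len(y))   # resample cases with replacement
        opts[i] = fit_optimum(X[idx], y[idx])

    # (a) Bonferroni: per-coordinate percentile intervals at level alpha/p
    alpha, p = 0.05, 2
    lo, hi = np.percentile(opts, [100*alpha/(2*p), 100*(1 - alpha/(2*p))], axis=0)
    # (b) multivariate-normal approximation: the joint region is the ellipsoid
    # {x : (x - m)' C^{-1} (x - m) <= chi2_{p, 1-alpha}}
    m, C = opts.mean(axis=0), np.cov(opts.T)
    print("Bonferroni box:", list(zip(lo, hi)))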
Application of Bayesian model averaging to measurements of the primordial power spectrum
Cosmological parameter uncertainties are often stated assuming a particular
model, neglecting the model uncertainty, even when Bayesian model selection is
unable to identify a conclusive best model. Bayesian model averaging is a
method for assessing parameter uncertainties in situations where there is also
uncertainty in the underlying model. We apply model averaging to the estimation
of the parameters associated with the primordial power spectra of curvature and
tensor perturbations. We use CosmoNest and MultiNest to compute the model
evidences and posteriors, using cosmic microwave background data from WMAP, ACBAR,
BOOMERanG and CBI, plus large-scale structure data from the SDSS DR7. We find
that the model-averaged 95% credible interval for the spectral index using all
of the data is 0.940 < n_s < 1.000, where n_s is specified at a pivot scale
of 0.015 Mpc^{-1}. For the tensors, model averaging can tighten the credible
upper limit, depending on prior assumptions.
Comment: 7 pages with 7 figures included
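Conceptually, the model-averaged posterior is P(theta | D) = sum_i P(theta | D, M_i) P(M_i | D), with P(M_i | D) proportional to the evidence Z_i under equal model priors. A minimal Python sketch of mixing per-model posterior samples by evidence weights (the toy chains and log-evidences below are invented, not the paper's outputs):

    import numpy as np

    def model_average(samples_per_model, log_evidences, seed=0):
        rng = np.random.default_rng(seed)
        logZ = np.asarray(log_evidences, dtype=float)
        w = np.exp(logZ - logZ.max())
        w /= w.sum()                        # posterior model probabilities
        n = min(len(s) for s in samples_per_model)
        counts = rng.multinomial(n, w)      # pick a model, then a sample of it
        pooled = np.concatenate([rng.choice(s, size=c, replace=True)
                                 for s, c in zip(samples_per_model, counts)])
        return pooled, w

    # stand-ins for per-model n_s chains and log-evidences from nested sampling
    ns_m1 = np.random.default_rng(1).normal(0.963, 0.013, 5000)
    ns_m2 = np.random.default_rng(2).normal(0.975, 0.018, 5000)
    pooled, w = model_average([ns_m1, ns_m2], [-3500.2, -3501.0])
    print("P(M|D):", w, "  95% interval:", np.percentile(pooled, [2.5, 97.5]))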
A new look inside Planetary Nebula LoTr 5: A long-period binary with hints of a possible third component
LoTr 5 is a planetary nebula with an unusual long-period binary central star.
As far as we know, the pair consists of a rapidly rotating G-type star and a
hot star, which is responsible for the ionization of the nebula. The rotation
period of the G-type star is 5.95 days and the orbital period of the binary is
now known to be 2700 days, one of the longest known among central stars of
planetary nebulae. The spectrum of the G central star shows a complex Hα
double-peaked profile which varies on very short time scales, also reported
in other central stars of planetary nebulae and whose origin is still unknown.
We present new radial velocity observations of the central star which allow us
to confirm the orbital period for the long-period binary and discuss the
possibility of a third component in the system, orbiting the G star with a period of 129 days.
This is complemented with the analysis of archival light curves from SuperWASP,
ASAS and OMC. From the spectral fitting of the G-type star, we obtain an
effective temperature of T_eff = 5410 ± 250 K and a surface gravity of
log g = 2.7 ± 0.5, consistent with both giant and subgiant stars. We also
present a detailed analysis of the Hα double-peaked profile and conclude
that it is not correlated with the rotation period and that the presence of
an accretion disk fed via Roche lobe overflow is unlikely.
Comment: 12 pages, 12 figures, accepted for publication in MNRAS
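As an illustration of how a long orbital period can be confirmed from sparse radial velocities (the data below are synthetic, not the paper's measurements), a Lomb-Scargle periodogram recovers the dominant periodicity:

    import numpy as np
    from astropy.timeseries import LombScargle

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 6000, 120))     # observation epochs in days
    rv = 4.5 * np.sin(2*np.pi*t/2700) + rng.normal(0, 0.4, t.size)  # km/s

    freq, power = LombScargle(t, rv).autopower(minimum_frequency=1/6000,
                                               maximum_frequency=1/50)
    print(f"best period ~ {1/freq[np.argmax(power)]:.0f} d")  # expect ~2700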
A Bayesian spatio-temporal model of panel design data: airborne particle number concentration in Brisbane, Australia
This paper outlines a methodology for semi-parametric spatio-temporal
modelling of data which is dense in time but sparse in space, obtained from a
split panel design, the most feasible approach to covering space and time with
limited equipment. The data are hourly averaged particle number concentration
(PNC) and were collected as part of the Ultrafine Particles from Transport
Emissions and Child Health (UPTECH) project. Two weeks of continuous
measurements were taken at each of a number of government primary schools in
the Brisbane Metropolitan Area. The monitoring equipment was taken to each
school sequentially. The school data are augmented by data from long term
monitoring stations at three locations in Brisbane, Australia.
Fitting the model helps describe the spatial and temporal variability at a
subset of the UPTECH schools and the long-term monitoring sites. The temporal
variation is modelled hierarchically with penalised random walk terms, one
common to all sites and a term accounting for the remaining temporal trend at
each site. Parameter estimates and their uncertainty are computed in a
computationally efficient approximate Bayesian inference environment, R-INLA.
The temporal part of the model explains daily and weekly cycles in PNC at the
schools, which can be used to estimate the exposure of school children to
ultrafine particles (UFPs) emitted by vehicles. At each school and long-term
monitoring site, peaks in PNC can be attributed to the morning and afternoon
rush hour traffic and new particle formation events. The spatial component of
the model describes the school to school variation in mean PNC at each school
and within each school ground. It is shown how the spatial model can be
expanded to identify spatial patterns at the city scale with the inclusion of
more spatial locations.
Comment: Draft of this paper presented at ISBA 2012 as a poster; part of the UPTECH project
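Read schematically from the abstract (this is our paraphrase, not the paper's exact specification), the model for site s at hour t might be written as

    \log \mathrm{PNC}_{s,t} = \mu + f(t) + g_s(t) + u_s + \varepsilon_{s,t},
    \qquad \varepsilon_{s,t} \sim N(0, \sigma^2),

where f(t) is the penalised random-walk trend common to all sites, g_s(t) absorbs the remaining site-specific temporal trend, u_s captures between-site (spatial) variation, and all components are estimated jointly in R-INLA.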
Implications of Compressed Supersymmetry for Collider and Dark Matter Searches
Martin has proposed a scenario dubbed "compressed supersymmetry" (SUSY),
where the MSSM is the effective field theory between the energy scales
M_weak and M_GUT, but with the GUT-scale SU(3) gaugino mass M_3 << M_1, M_2. As
a result, squark and gluino masses are suppressed relative to slepton, chargino
and neutralino masses, leading to a compressed sparticle mass spectrum, and
where the dark matter relic density in the early universe may be dominantly
governed by neutralino annihilation into ttbar pairs via exchange of a light
top squark. We explore the dark matter and collider signals expected from
compressed SUSY for two distinct model lines with differing assumptions about
GUT scale gaugino mass parameters. For dark matter signals, the compressed
squark spectrum leads to an enhancement in direct detection rates compared to
models with unified gaugino masses. Meanwhile, neutralino halo annihilation
rates to gamma rays and anti-matter are also enhanced relative to related
scenarios with unified gaugino masses but, depending on the halo dark matter
distribution, may yet be below the sensitivity of indirect searches underway.
In the case of collider signals, we compare the rates for the potentially
dominant decay modes of the stop_1 which may be expected to be produced in
cascade decay chains at the LHC: stop_1 -> c neutralino_1 and stop_1 -> b W neutralino_1. We
examine the extent to which multilepton signal rates are reduced when the
two-body decay mode dominates. For the model lines that we examine here, the
multi-lepton signals, though reduced, still remain observable at the LHC.
Comment: 22 pages including 24 EPS figures
Synthetic LISA: Simulating Time Delay Interferometry in a Model LISA
We report on three numerical experiments on the implementation of Time-Delay
Interferometry (TDI) for LISA, performed with Synthetic LISA, a C++/Python
package that we developed to simulate the LISA science process at the level of
scientific and technical requirements. Specifically, we study the laser-noise
residuals left by first-generation TDI when the LISA armlengths have a
realistic time dependence; we characterize the armlength-measurements
accuracies that are needed to have effective laser-noise cancellation in both
first- and second-generation TDI; and we estimate the quantization and
telemetry bitdepth needed for the phase measurements. Synthetic LISA generates
synthetic time series of the LISA fundamental noises, as filtered through all
the TDI observables; it also provides a streamlined module to compute the TDI
responses to gravitational waves according to a full model of TDI, including
the motion of the LISA array and the temporal and directional dependence of the
armlengths. We discuss the theoretical model that underlies the simulation, its
implementation, and its use in future investigations on system characterization
and data-analysis prototyping for LISA.
Comment: 18 pages, 14 EPS figures, REVTeX 4. Accepted PRD version. See http://www.vallis.org/syntheticlisa for information on the Synthetic LISA software package
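A toy first-principles illustration of the laser-noise cancellation that TDI achieves (a static unequal-arm Michelson with integer-sample delays; this is not Synthetic LISA's code, and real TDI must interpolate fractional, time-dependent delays):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000                               # samples at 1 s cadence
    L1, L2 = 1600, 1700                       # arm round-trip delays in samples
    p = np.cumsum(rng.normal(size=n))         # random-walk laser phase noise
    sec1 = 1e-6 * rng.normal(size=n)          # secondary noise, arm 1
    sec2 = 1e-6 * rng.normal(size=n)          # secondary noise, arm 2

    def delay(x, d):                          # integer-sample time delay
        return np.concatenate([np.zeros(d), x[:-d]]) if d else x

    s1 = delay(p, L1) - p + sec1              # Michelson signals: delayed minus
    s2 = delay(p, L2) - p + sec2              # local laser phase, plus noise
    X = (s1 - delay(s1, L2)) - (s2 - delay(s2, L1))   # TDI-X-style combination

    m = L1 + L2                               # skip the zero-padded edge
    print("raw rms:", s1[m:].std(), " TDI rms:", X[m:].std())

In the static case the four laser-phase terms cancel exactly, leaving only the secondary noises; the abstract's point is how accurately the (moving) armlengths must be known for this cancellation to survive.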