Optimisation of Mobile Communication Networks - OMCO NET
The mini conference “Optimisation of Mobile Communication Networks” focuses on advanced methods for search and optimisation applied to wireless communication networks. It is sponsored by the Research & Enterprise Fund of Southampton Solent University.
The conference strives to widen knowledge of advanced search methods capable of optimising wireless communication networks. The aim is to provide a forum for the exchange of recent knowledge, new ideas and trends in this progressive and challenging area. The conference will popularise successful new approaches to hard tasks such as transmit-power minimisation and cooperative and optimal routing.
Using Parameterized Black-Box Priors to Scale Up Model-Based Policy Search for Robotics
The most data-efficient algorithms for reinforcement learning in robotics are
model-based policy search algorithms, which alternate between learning a
dynamical model of the robot and optimizing a policy to maximize the expected
return given the model and its uncertainties. Among the few proposed
approaches, the recently introduced Black-DROPS algorithm exploits a black-box
optimization algorithm to achieve both high data-efficiency and good
computation times when several cores are used; nevertheless, like all
model-based policy search approaches, Black-DROPS does not scale to high
dimensional state/action spaces. In this paper, we introduce a new model
learning procedure in Black-DROPS that leverages parameterized black-box priors
to (1) scale up to high-dimensional systems, and (2) be robust to large
inaccuracies of the prior information. We demonstrate the effectiveness of our
approach with the "pendubot" swing-up task in simulation and with a physical
hexapod robot (48D state space, 18D action space) that has to walk forward as
fast as possible. The results show that our new algorithm is more
data-efficient than previous model-based policy search algorithms (with and
without priors) and that it can allow a physical 6-legged robot to learn new
gaits in only 16 to 30 seconds of interaction time. Comment: Accepted at ICRA 2018; 8 pages, 4 figures, 2 algorithms, 1 table;
Video at https://youtu.be/HFkZkhGGzTo ; Spotlight ICRA presentation at
https://youtu.be/_MZYDhfWeL
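The model-then-optimise alternation this abstract describes can be sketched on a toy problem. Everything below (the 1-D linear system, the linear policy parametrised by a single gain, the random-search optimiser) is an illustrative stand-in, not the Black-DROPS implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(s, a):
    # Toy 1-D system standing in for the robot (not from the paper).
    return 0.9 * s + 0.5 * a

def rollout_return(policy_gain, model, s0=1.0, horizon=20):
    # Return predicted by the learned model: penalise distance from 0.
    s, ret = s0, 0.0
    for _ in range(horizon):
        a = policy_gain * s
        s = model(s, a)
        ret -= s * s
    return ret

# Alternate between (1) learning a dynamics model from real transitions and
# (2) black-box policy search against that model.
data = []
gain = 0.0
for episode in range(5):
    # (1) Collect real transitions with the current policy plus exploration noise.
    s = 1.0
    for _ in range(10):
        a = gain * s + 0.1 * rng.standard_normal()
        s_next = true_dynamics(s, a)
        data.append((s, a, s_next))
        s = s_next
    # Fit a linear model s' ~ w0*s + w1*a by least squares.
    X = np.array([[d[0], d[1]] for d in data])
    y = np.array([d[2] for d in data])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    model = lambda s, a, w=w: w[0] * s + w[1] * a
    # (2) Black-box search over the policy parameter (plain random search here;
    # Black-DROPS uses a more capable black-box optimiser).
    candidates = rng.uniform(-3.0, 0.0, size=200)
    returns = [rollout_return(g, model) for g in candidates]
    gain = candidates[int(np.argmax(returns))]

print(round(gain, 2))
```

For this system the return-maximising gain is -1.8 (it drives the next state to zero), so the loop should settle near that value once the model is accurate.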
Transit Least Squares: Optimized transit detection algorithm to search for periodic transits of small planets
We present a new method to detect planetary transits from time-series
photometry, the Transit Least Squares (TLS) algorithm. TLS searches for
transit-like features while taking the stellar limb darkening and planetary
ingress and egress into account. We have optimized TLS for both signal
detection efficiency (SDE) of small planets and computational speed. TLS
analyses the entire, unbinned phase-folded light curve. We compensate for the
higher computational load by (i) using algorithms like "Mergesort" (for the
trial orbital phases) and by (ii) restricting the trial transit durations to a
smaller range that encompasses all known planets, and using stellar density
priors where available. A typical K2 light curve, including 80d of observations
at a cadence of 30min, can be searched with TLS in ~10s real time on a standard
laptop computer, as fast as the widely used Box Least Squares (BLS) algorithm.
We perform a transit injection-retrieval experiment of Earth-sized planets
around sun-like stars using synthetic light curves with 110ppm white noise per
30min cadence, corresponding to a photometrically quiet KP=12 star observed
with Kepler. We determine the SDE thresholds for both BLS and TLS to reach a
false positive rate of 1% to be SDE~7 in both cases. The resulting true
positive (or recovery) rates are ~93% for TLS and ~76% for BLS, implying more
reliable detections with TLS. We also test TLS with the K2 light curve of the
TRAPPIST-1 system and find six of seven Earth-sized planets using an iterative
search for increasingly lower signal detection efficiency, the phase-folded
transit of the seventh planet being affected by a stellar flare. TLS is more
reliable than BLS in finding any kind of transiting planet but it is
particularly suited for the detection of small planets in long time series from
Kepler, TESS, and PLATO. We make our Python implementation of TLS publicly
available. Comment: A&A accepted. Code, documentation and tutorials at
https://github.com/hippke/tl
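The core loop of a phase-folding transit search can be sketched as follows. This sketch uses a simple box template rather than the limb-darkened transit shape TLS fits, and the synthetic light curve and search grids are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic light curve: flat flux with white noise plus a boxy transit
# (a stand-in for the limb-darkened template TLS actually uses).
time = np.arange(0.0, 80.0, 30.0 / (60.0 * 24.0))  # 80 d at 30 min cadence
flux = 1.0 + 1e-4 * rng.standard_normal(time.size)
true_period, true_duration, depth = 7.3, 0.12, 5e-4  # days, days, relative flux
flux[(time % true_period) < true_duration] -= depth

def chi2_at_period(period, duration):
    # Phase-fold the light curve and compare a box model against it.
    phase = time % period
    mask = phase < duration
    if mask.sum() < 5:
        return np.inf
    d = 1.0 - flux[mask].mean()            # fitted transit depth
    model = np.where(mask, 1.0 - d, 1.0)
    return np.sum((flux - model) ** 2)

# Grid search over trial periods; TLS additionally sorts the trial phases and
# restricts trial durations to a physically plausible range.
periods = np.arange(2.0, 15.0, 0.01)
chi2 = np.array([chi2_at_period(p, true_duration) for p in periods])
best_period = periods[np.argmin(chi2)]
print(round(best_period, 2))
```

The minimum of the chi-squared curve recovers the injected 7.3-day period; the half- and double-period aliases fit worse because they dilute or drop transits.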
Detecting extreme mass ratio inspiral events in LISA data using the Hierarchical Algorithm for Clusters and Ridges (HACR)
One of the most exciting prospects for the Laser Interferometer Space Antenna
(LISA) is the detection of gravitational waves from the inspirals of
stellar-mass compact objects into supermassive black holes. Detection of these
sources is an extremely challenging computational problem due to the large
parameter space and low amplitude of the signals. However, recent work has
suggested that the nearest extreme mass ratio inspiral (EMRI) events will be
sufficiently loud that they might be detected using computationally cheap,
template-free techniques, such as a time-frequency analysis. In this paper, we
examine a particular time-frequency algorithm, the Hierarchical Algorithm for
Clusters and Ridges (HACR). This algorithm searches for clusters in a power map
and uses the properties of those clusters to identify signals in the data. We
find that HACR applied to the raw spectrogram performs poorly, but when the
data is binned during the construction of the spectrogram, the algorithm can
detect typical EMRI events at distances of up to Gpc. This is a little
further than the simple Excess Power method that has been considered
previously. We discuss the HACR algorithm, including tuning for single and
multiple sources, and illustrate its performance for detection of typical EMRI
events, and other likely LISA sources, such as white dwarf binaries and
supermassive black hole mergers. We also discuss how HACR cluster properties
could be used for parameter extraction. Comment: 21 pages, 11 figures, submitted to Class. Quantum Gravity. Modified
and shortened in light of referee's comments. Updated results consider tuning
over all three HACR thresholds, and show 10-15% improvement in detection rate.
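The kind of two-threshold cluster search HACR performs on a power map can be sketched roughly as below. The thresholds, the toy spectrogram, and the injected "signal" track are all illustrative, not tuned HACR values or LISA data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy time-frequency power map: exponential noise plus a bright chirping
# track standing in for an EMRI signal.
power = rng.exponential(1.0, size=(64, 64))
for t in range(20, 44):
    power[t, t // 2 + 10] += 8.0

def two_threshold_clusters(pmap, upper=6.0, lower=3.0):
    """Seed clusters at pixels above `upper`, then grow each cluster over
    8-connected neighbours above `lower` (a flood fill)."""
    visited = np.zeros(pmap.shape, dtype=bool)
    clusters = []
    for seed in map(tuple, np.argwhere(pmap > upper)):
        if visited[seed]:
            continue
        stack, members = [seed], []
        visited[seed] = True
        while stack:
            i, j = stack.pop()
            members.append((i, j))
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < pmap.shape[0] and 0 <= nj < pmap.shape[1]
                            and not visited[ni, nj] and pmap[ni, nj] > lower):
                        visited[ni, nj] = True
                        stack.append((ni, nj))
        clusters.append(members)
    return clusters

clusters = two_threshold_clusters(power)
largest = max(clusters, key=len)
print(len(largest))
```

A signal then shows up as one anomalously large or bright cluster, whose size and total power can feed a detection statistic, while noise clusters stay small.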
CoPhy: A Scalable, Portable, and Interactive Index Advisor for Large Workloads
Index tuning, i.e., selecting the indexes appropriate for a workload, is a
crucial problem in database system tuning. In this paper, we solve index tuning
for large problem instances that are common in practice, e.g., thousands of
queries in the workload, thousands of candidate indexes and several hard and
soft constraints. Our work is the first to reveal that the index tuning problem
has a well structured space of solutions, and this space can be explored
efficiently with well known techniques from linear optimization. Experimental
results demonstrate that our approach outperforms state-of-the-art commercial
and research techniques by a significant margin (up to an order of magnitude). Comment: VLDB201
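The combinatorial structure underneath index tuning, choosing a set of indexes that maximises workload benefit subject to hard constraints such as a storage budget, can be sketched with a toy instance. The numbers and the brute-force subset search below are illustrative; CoPhy's point is that this structure can be encoded as a linear program and solved efficiently at scale:

```python
from itertools import chain, combinations

# Toy index-selection instance (illustrative numbers, not from the paper):
# each candidate index has a size, and each query a per-index benefit.
candidates = {"idx_a": 4, "idx_b": 3, "idx_c": 2}   # size in GB
benefit = {                                          # cost saved per query
    "q1": {"idx_a": 10, "idx_b": 6},
    "q2": {"idx_b": 7, "idx_c": 5},
    "q3": {"idx_c": 4},
}
budget = 5  # storage budget in GB (a "hard constraint")

def workload_saving(chosen):
    # Each query uses its single best available index (a common modelling choice).
    return sum(max((benefit[q].get(i, 0) for i in chosen), default=0)
               for q in benefit)

# Exhaustive search over index subsets respecting the budget; a linear-program
# encoding replaces this enumeration for thousands of queries and candidates.
subsets = chain.from_iterable(combinations(candidates, r)
                              for r in range(len(candidates) + 1))
best = max((s for s in subsets
            if sum(candidates[i] for i in s) <= budget),
           key=workload_saving)
print(sorted(best))
```

Here the budget rules out the largest index, and the optimum picks the two smaller indexes that together cover all three queries.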
An Alternating Trust Region Algorithm for Distributed Linearly Constrained Nonlinear Programs, Application to the AC Optimal Power Flow
A novel trust region method for solving linearly constrained nonlinear
programs is presented. The proposed technique is amenable to a distributed
implementation, as its salient ingredient is an alternating projected gradient
sweep in place of the Cauchy point computation. It is proven that the algorithm
yields a sequence that globally converges to a critical point. As a result of
some changes to the standard trust region method, namely a proximal
regularisation of the trust region subproblem, it is shown that the local
convergence rate is linear with an arbitrarily small ratio. Thus, convergence
is locally almost superlinear, under standard regularity assumptions. The
proposed method is successfully applied to compute local solutions to
alternating current optimal power flow problems in transmission and
distribution networks. Moreover, the new mechanism for computing a Cauchy point
compares favourably with the standard projected search in terms of its activity
detection properties.
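A projected gradient sweep over a constrained problem, the ingredient that replaces the Cauchy point computation here, can be sketched on a toy box-constrained quadratic. The problem, step rule, and sweep schedule below are illustrative, not the paper's algorithm:

```python
import numpy as np

# Minimise f(x) = 0.5 x'Qx - b'x over the box 0 <= x <= 1 by alternating
# coordinate-wise projected gradient steps (toy stand-in for the sweep).
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 4.0])

def grad(x):
    return Q @ x - b

x = np.zeros(2)
step = 1.0 / np.linalg.norm(Q, 2)   # safe step size for a convex quadratic
for _ in range(200):
    # Alternating sweep: update one block (here one coordinate) at a time,
    # projecting back onto the feasible box after each step.
    for i in range(2):
        x[i] = np.clip(x[i] - step * grad(x)[i], 0.0, 1.0)

print(np.round(x, 3))
```

For this instance the unconstrained minimiser lies outside the box, and the sweep converges to the constrained solution (0, 1), with the active bounds identified correctly, which is the "activity detection" property the abstract refers to.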
Gaussianisation for fast and accurate inference from cosmological data
We present a method to transform multivariate unimodal non-Gaussian posterior
probability densities into approximately Gaussian ones via non-linear mappings,
such as Box-Cox transformations and generalisations thereof. This permits an
analytical reconstruction of the posterior from a point sample, like a Markov
chain, and simplifies the subsequent joint analysis with other experiments.
This way, a multivariate posterior density can be reported efficiently, by
compressing the information contained in MCMC samples. Further, the model
evidence integral (i.e. the marginal likelihood) can be computed analytically.
This method is analogous to the search for normal parameters in the cosmic
microwave background, but is more general. The search for the optimally
Gaussianising transformation is performed computationally through a
maximum-likelihood formalism; its quality can be judged by how well the
credible regions of the posterior are reproduced. We demonstrate that our
method outperforms kernel density estimates in this objective. Further, we
select marginal posterior samples from Planck data with several distinct
strongly non-Gaussian features, and verify the reproduction of the marginal
contours. To demonstrate evidence computation, we Gaussianise the joint
distribution of data from weak lensing and baryon acoustic oscillations (BAO),
for different cosmological models, and find a preference for flat ΛCDM.
Comparing to values computed with the Savage-Dickey density ratio, and
Population Monte Carlo, we find good agreement of our method within the spread
of the other two. Comment: 14 pages, 9 figures
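The maximum-likelihood search for a Gaussianising Box-Cox parameter can be sketched in one dimension as follows. The synthetic "posterior" sample and the coarse grid search are illustrative; the paper optimises multivariate generalisations of the Box-Cox mapping:

```python
import numpy as np

rng = np.random.default_rng(3)

# Skewed, non-Gaussian "posterior" samples: log-normal draws, for which the
# exactly Gaussianising Box-Cox parameter is lambda = 0 (the log transform).
samples = rng.lognormal(mean=0.0, sigma=0.7, size=5000)

def boxcox(x, lam):
    if abs(lam) < 1e-8:          # lambda -> 0 limit is the log transform
        return np.log(x)
    return (x ** lam - 1.0) / lam

def boxcox_loglike(x, lam):
    # Profile log-likelihood of lambda under a Gaussian model for the
    # transformed data (standard Box-Cox result; constants dropped).
    y = boxcox(x, lam)
    return -0.5 * x.size * np.log(y.var()) + (lam - 1.0) * np.log(x).sum()

# Maximum-likelihood search for the Gaussianising parameter (coarse grid here).
lams = np.linspace(-1.0, 1.5, 251)
best_lam = max(lams, key=lambda l: boxcox_loglike(samples, l))
print(round(best_lam, 2))
```

The recovered parameter sits near zero, i.e. the search correctly identifies the log transform as the Gaussianising map; the quality of the fit can then be judged by how well the transformed sample's credible regions match a Gaussian's, as the abstract describes.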