Robust Monotonic Optimization Framework for Multicell MISO Systems
The performance of multiuser systems is difficult both to measure fairly and
to optimize. Most resource allocation problems are non-convex and NP-hard, even
under simplifying assumptions such as perfect channel knowledge, homogeneous
channel properties among users, and simple power constraints. We establish a
general optimization framework that systematically solves these problems to
global optimality. The proposed branch-reduce-and-bound (BRB) algorithm handles
general multicell downlink systems with single-antenna users, multiantenna
transmitters, arbitrary quadratic power constraints, and robustness to channel
uncertainty. A robust fairness-profile optimization (RFO) problem is solved at
each iteration, which is a quasi-convex problem and a novel generalization of
max-min fairness. The BRB algorithm is computationally costly, but it shows
better convergence than the previously proposed outer polyblock approximation
algorithm. Our framework is suitable for computing benchmarks in general
multicell systems with or without channel uncertainty. We illustrate this by
deriving and evaluating a zero-forcing solution to the general problem.
Comment: Published in IEEE Transactions on Signal Processing, 16 pages, 9
figures, 2 tables
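The branch-reduce-and-bound idea in this abstract can be illustrated by a minimal sketch; this is not the paper's algorithm (it omits the reduction step and the RFO subproblem), just a generic branch-and-bound loop for maximizing a monotonically increasing objective over a downward-closed feasible set, which is the structure monotonic optimization exploits. The function name and interface are hypothetical.

```python
def brb_maximize(f, feasible, lo, hi, tol=1e-2):
    """Maximize increasing f over the feasible (downward-closed) part of the
    box [lo, hi]; returns a value within tol of the global optimum.
    Assumes the lower corner lo is feasible."""
    boxes = [(tuple(lo), tuple(hi))]
    best_val, best_pt = f(tuple(lo)), tuple(lo)
    while boxes:
        a, b = boxes.pop()
        if f(b) <= best_val + tol:
            continue                      # bound: box cannot improve enough
        if not feasible(a):
            continue                      # downward-closed set: whole box infeasible
        if f(a) > best_val:
            best_val, best_pt = f(a), a   # feasible incumbent at lower corner
        # branch: bisect the longest edge of the box
        k = max(range(len(a)), key=lambda i: b[i] - a[i])
        m = 0.5 * (a[k] + b[k])
        boxes.append((a, b[:k] + (m,) + b[k + 1:]))
        boxes.append((a[:k] + (m,) + a[k + 1:], b))
    return best_val, best_pt
```

Because any discarded box satisfied f(upper corner) <= best + tol, the returned value brackets the optimum to within tol, at the exponential worst-case cost the abstract acknowledges.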
Algorithms for Graph-Constrained Coalition Formation in the Real World
Coalition formation typically involves the coming together of multiple,
heterogeneous, agents to achieve both their individual and collective goals. In
this paper, we focus on a special case of coalition formation known as
Graph-Constrained Coalition Formation (GCCF) whereby a network connecting the
agents constrains the formation of coalitions. We focus on this type of problem
given that in many real-world applications, agents may be connected by a
communication network or only trust certain peers in their social network. We
propose a novel representation of this problem based on the concept of edge
contraction, which allows us to model the search space induced by the GCCF
problem as a rooted tree. Then, we propose an anytime solution algorithm
(CFSS), which is particularly efficient when applied to a general class of
characteristic functions called m+a functions. Moreover, we show how CFSS can
be efficiently parallelised to solve GCCF using a non-redundant partition of
the search space. We benchmark CFSS on both synthetic and realistic scenarios,
using a real-world dataset consisting of the energy consumption of a large
number of households in the UK. Our results show that, in the best case, the
serial version of CFSS is 4 orders of magnitude faster than the state of the
art, while the parallel version is 9.44 times faster than the serial version on
a 12-core machine. Moreover, CFSS is the first approach to provide anytime
approximate solutions with quality guarantees for very large systems of agents
(i.e., with more than 2700 agents).
Comment: Accepted for publication, cite as "in press"
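The edge-contraction representation the abstract builds on can be sketched in a few lines: each node of the contracted graph is a coalition (a frozenset of agents), and contracting an edge merges the two incident coalitions, so repeated contractions generate exactly the coalition structures whose coalitions are connected in the original network. This is an illustrative fragment, not the CFSS implementation; the `contract` helper is hypothetical.

```python
def contract(adj, u, v):
    """Merge adjacent coalitions u and v of the graph `adj`
    (a dict mapping each coalition frozenset to its neighbour set)."""
    merged = u | v
    new_adj = {}
    for node, nbrs in adj.items():
        if node in (u, v):
            continue
        new_adj[node] = {merged if n in (u, v) else n for n in nbrs}
    # The merged coalition inherits both neighbourhoods, minus u and v.
    new_adj[merged] = {n for n in (adj[u] | adj[v]) if n not in (u, v)}
    return new_adj

# Start from singleton coalitions on the path network 1 - 2 - 3.
adj0 = {
    frozenset({1}): {frozenset({2})},
    frozenset({2}): {frozenset({1}), frozenset({3})},
    frozenset({3}): {frozenset({2})},
}
adj1 = contract(adj0, frozenset({1}), frozenset({2}))
# adj1 holds coalitions {1, 2} and {3}, still joined by one edge.
```

Organizing the sequence of contractions as a rooted tree, as the paper does, is what makes the search space non-redundant and parallelisable.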
Design and Implementation of an Extensible Variable Resolution Bathymetric Estimator
For grid-based bathymetric estimation techniques, determining the right resolution at which to work is essential. Appropriate grid resolution can be related, roughly, to data density and thence to sonar characteristics, survey methodology, and depth. It is therefore variable in almost all survey scenarios, and methods of addressing this problem can have enormous impact on the correctness and efficiency of computational schemes of this kind. This paper describes the design and implementation of a bathymetric depth estimation algorithm that attempts to address this problem by combining the computational efficiency of locally regular grids with piecewise-variable estimation resolution to provide a single logical data structure and associated algorithms that can adjust to local data conditions, change resolution where required to best support the data, and operate over essentially arbitrarily large areas as a single unit. The algorithm, which is in part a development of CUBE, is modular and extensible, and is structured as a client-server application to support different implementation modalities. The algorithm is called “CUBE with Hierarchical Resolution Techniques”, or CHRT.
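The coupling of resolution to data density that the abstract describes can be sketched with a toy quadtree: a cell keeps refining while it holds more soundings than a density cap, so resolution locally follows the data. This is a simplified illustration, not CHRT itself; the `build` function and its cap parameter are assumptions for the sketch.

```python
def build(points, x0, y0, size, cap=4, depth=0, max_depth=8):
    """Build a quadtree over square cell (x0, y0, size): refine any cell
    holding more than `cap` (x, y) soundings, up to max_depth levels."""
    if len(points) <= cap or depth == max_depth:
        return {"bounds": (x0, y0, size), "points": points, "children": None}
    half = size / 2
    kids = []
    for dx in (0, 1):
        for dy in (0, 1):
            cx, cy = x0 + dx * half, y0 + dy * half
            sub = [p for p in points
                   if cx <= p[0] < cx + half and cy <= p[1] < cy + half]
            kids.append(build(sub, cx, cy, half, cap, depth + 1, max_depth))
    return {"bounds": (x0, y0, size), "points": None, "children": kids}
```

Dense survey lines end up in small deep cells, sparse outer regions stay coarse, and the whole area remains one logical structure, which is the behaviour the paper engineers at production scale.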
DC-SPP-YOLO: Dense Connection and Spatial Pyramid Pooling Based YOLO for Object Detection
Although the YOLOv2 approach is extremely fast at object detection, its
backbone network has limited feature-extraction ability and fails to make full
use of multi-scale local region features, which restricts improvements in
object detection accuracy. Therefore, this paper proposes DC-SPP-YOLO (Dense
Connection and Spatial Pyramid Pooling Based YOLO), an approach for improving
the object detection accuracy of YOLOv2. Specifically, dense connection of
convolution layers is employed in the backbone network of YOLOv2 to strengthen
feature extraction and alleviate the vanishing-gradient problem. Moreover, an
improved spatial pyramid pooling is introduced to pool and concatenate the
multi-scale local region features, so that the network can learn object
features more comprehensively. The DC-SPP-YOLO model is established and trained
with a new loss function composed of mean-squared error and cross-entropy.
Experiments demonstrate that the mAP (mean Average Precision) of the proposed
DC-SPP-YOLO on the PASCAL VOC and UA-DETRAC datasets is higher than that of
YOLOv2; strengthening feature extraction and exploiting the multi-scale local
region features make the detection accuracy of DC-SPP-YOLO superior to YOLOv2.
Comment: 23 pages, 9 figures, 9 tables
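The spatial pyramid pooling operation the abstract relies on can be shown on a single-channel feature map: max-pool the map over 1x1, 2x2, and 4x4 grids and concatenate the results, giving a fixed-length vector (1 + 4 + 16 = 21 values) that aggregates multi-scale local region features. This is a minimal sketch of the generic SPP idea, not the paper's improved variant; it assumes the map is at least as large as the finest grid.

```python
def spp(feature, levels=(1, 2, 4)):
    """Concatenated max-pooling of `feature` (list of equal-length rows)
    over an n x n grid for each n in `levels`."""
    h, w = len(feature), len(feature[0])
    out = []
    for n in levels:                          # one pyramid level per grid size
        for i in range(n):
            for j in range(n):
                r0, r1 = i * h // n, (i + 1) * h // n
                c0, c1 = j * w // n, (j + 1) * w // n
                out.append(max(feature[r][c]
                               for r in range(r0, r1)
                               for c in range(c0, c1)))
    return out
```

Because the bin edges are fractions of the input size, the output length depends only on `levels`, which is what lets a detection head consume features from variable receptive fields.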
Conforming restricted Delaunay mesh generation for piecewise smooth complexes
A Frontal-Delaunay refinement algorithm for mesh generation in piecewise
smooth domains is described. Built using a restricted Delaunay framework, this
new algorithm combines a number of novel features, including: (i) an
unweighted, conforming restricted Delaunay representation for domains specified
as a (non-manifold) collection of piecewise smooth surface patches and curve
segments, (ii) a protection strategy for domains containing curve segments that
subtend sharply acute angles, and (iii) a new class of off-centre refinement
rules designed to achieve high-quality point-placement along embedded curve
features. Experimental comparisons show that the new Frontal-Delaunay algorithm
outperforms a classical (statically weighted) restricted Delaunay-refinement
technique for a number of three-dimensional benchmark problems.
Comment: To appear at the 25th International Meshing Roundtable
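A small illustrative helper, not taken from the paper: Delaunay-refinement schemes of the kind compared here typically judge element quality by the circumradius-to-shortest-edge ratio and insert new points (at circumcentres or, as in this paper's off-centre rules, at shifted locations) for triangles whose ratio exceeds a bound.

```python
import math

def radius_edge_ratio(a, b, c):
    """Circumradius / shortest-edge ratio for a 2-D triangle a, b, c;
    smaller is better, with 1/sqrt(3) attained by the equilateral triangle."""
    la = math.dist(b, c)
    lb = math.dist(a, c)
    lc = math.dist(a, b)
    area = abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2
    R = la * lb * lc / (4 * area)             # circumradius formula
    return R / min(la, lb, lc)
```

Where new points land relative to the circumcentre is exactly what the off-centre rules of the paper tune to get high-quality placement along embedded curves.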
Long-term Tracking in the Wild: A Benchmark
We introduce the OxUvA dataset and benchmark for evaluating single-object
tracking algorithms. Benchmarks have enabled great strides in the field of
object tracking by defining standardized evaluations on large sets of diverse
videos. However, these works have focused exclusively on sequences that are
just tens of seconds in length and in which the target is always visible.
Consequently, most researchers have designed methods tailored to this
"short-term" scenario, which is poorly representative of practitioners' needs.
Aiming to address this disparity, we compile a long-term, large-scale tracking
dataset of sequences with average length greater than two minutes and with
frequent target object disappearance. The OxUvA dataset is much larger than the
object tracking datasets of recent years: it comprises 366 sequences spanning
14 hours of video. We assess the performance of several algorithms, considering
both the ability to locate the target and to determine whether it is present or
absent. Our goal is to offer the community a large and diverse benchmark to
enable the design and evaluation of tracking methods ready to be used "in the
wild". The project website is http://oxuva.netComment: To appear at ECCV 201