An Automated Algorithm for Approximation of Temporal Video Data Using Linear Bézier Fitting
This paper presents an efficient method for approximating temporal video
data using linear Bézier fitting. For a given sequence of frames, the proposed
method estimates the intensity variation of each pixel in the temporal
dimension using linear Bézier fitting in Euclidean space. Fitting of each
segment respects an upper bound on the specified mean squared error, and a
break-and-fit criterion is employed to minimize the number of segments
required to fit the data. The proposed method is well suited for lossy
compression of temporal video data and automates the fitting process for each
pixel. Experimental results show that the proposed method yields good results
in terms of both objective and subjective quality measures without introducing
any blocking artifacts.
Comment: 14 Pages, IJMA 201
Hypervolume-based Multi-objective Bayesian Optimization with Student-t Processes
Student-t processes have recently been proposed as an appealing alternative
non-parametric function prior. They feature enhanced flexibility and
predictive variance. In this work, the use of Student-t processes is explored
for multi-objective Bayesian optimization. In particular, an analytical
expression for the hypervolume-based probability of improvement is developed
for independent Student-t process priors over the objectives. Its
effectiveness is shown on a multi-objective optimization problem that is known
to be difficult for traditional Gaussian processes.
Comment: 5 pages, 3 figures
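The "enhanced predictive variance" claim can be illustrated by comparing a Student-t process posterior with a GP's: in the formulation of Shah et al. (2014), the predictive mean matches the GP while the variance is rescaled by how well the kernel explains the observed data. The sketch below assumes that formulation; the kernel choice, hyperparameters, and data are illustrative, not taken from the paper.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel on 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def tp_predict(x_train, y_train, x_test, nu=5.0, noise=1e-6):
    """Posterior predictive of a Student-t process (Shah et al., 2014):
    GP mean, with the variance rescaled by the observed data fit."""
    n = len(x_train)
    K = rbf(x_train, x_train) + noise * np.eye(n)
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test) + noise * np.eye(len(x_test))
    Kinv_y = np.linalg.solve(K, y_train)
    mean = Ks.T @ Kinv_y
    beta = y_train @ Kinv_y                       # data-fit term y^T K^{-1} y
    cov_gp = Kss - Ks.T @ np.linalg.solve(K, Ks)  # the usual GP covariance
    scale = (nu + beta - 2.0) / (nu + n - 2.0)    # TP variance rescaling
    return mean, scale * np.diag(cov_gp)

x = np.array([-2.0, -0.5, 0.3, 1.5])
y = np.sin(x)
m, v = tp_predict(x, y, np.linspace(-3, 3, 7))
```

When the data are surprising under the kernel (large beta), the rescaling inflates the predictive variance, which is the flexibility the abstract refers to.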
Scaling Laws of Cognitive Networks
We consider a cognitive network consisting of n random pairs of cognitive
transmitters and receivers communicating simultaneously in the presence of
multiple primary users. Of interest is how the maximum throughput achieved by
the cognitive users scales with n, and how far these users must be from a
primary user to guarantee a given primary outage. Two scenarios are
considered for the network scaling law: (i) when each cognitive transmitter
uses constant power to communicate with a cognitive receiver at a bounded
distance away, and (ii) when each cognitive transmitter scales its power
according to the distance to a considered primary user, allowing the cognitive
transmitter-receiver distances to grow. Using single-hop transmission, suitable
for cognitive devices of opportunistic nature, we show that, in both scenarios,
with path loss larger than 2, the cognitive network throughput scales linearly
with the number of cognitive users. We then explore the radius of a primary
exclusive region void of cognitive transmitters. We obtain bounds on this
radius for a given primary outage constraint. These bounds can help in the
design of a primary network with exclusive regions, outside of which cognitive
users may transmit freely. Our results show that opportunistic secondary
spectrum access using single-hop transmission is promising.
Comment: significantly revised and extended, 30 pages, 13 figures, submitted
to IEEE Journal of Selected Topics in Signal Processing
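In sketch form, the linear-scaling claim for scenario (i) can be written as follows. This is a hedged paraphrase of the abstract, with C_alpha an unspecified per-user rate constant depending on the path-loss exponent alpha > 2, not a symbol taken from the paper:

```latex
% With n cognitive pairs at bounded Tx--Rx distance and path-loss
% exponent \alpha > 2, the aggregate interference at each receiver
% stays bounded, so each pair sustains at least a constant rate:
T_n \;=\; \sum_{i=1}^{n} \log_2\!\left(1 + \mathrm{SINR}_i\right)
\;\geq\; n\, C_\alpha, \qquad \alpha > 2,
\quad\text{hence}\quad T_n = \Theta(n).
```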
SHADHO: Massively Scalable Hardware-Aware Distributed Hyperparameter Optimization
Computer vision is experiencing an AI renaissance, in which machine learning
models are expediting important breakthroughs in academic research and
commercial applications. Effectively training these models, however, is not
trivial due in part to hyperparameters: user-configured values that control a
model's ability to learn from data. Existing hyperparameter optimization
methods are highly parallel but make no effort to balance the search across
heterogeneous hardware or to prioritize searching high-impact spaces. In this
paper, we introduce a framework for massively Scalable Hardware-Aware
Distributed Hyperparameter Optimization (SHADHO). Our framework calculates the
relative complexity of each search space and monitors performance on the
learning task over all trials. These metrics are then used as heuristics to
assign hyperparameters to distributed workers based on their hardware. We first
demonstrate that our framework achieves double the throughput of a standard
distributed hyperparameter optimization framework by optimizing SVM for MNIST
using 150 distributed workers. We then conduct model search with SHADHO over
the course of one week using 74 GPUs across two compute clusters to optimize
U-Net for a cell segmentation task, discovering 515 models that achieve a lower
validation loss than standard U-Net.
Comment: 10 pages, 6 figures
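The complexity-and-hardware heuristic described above can be sketched as a greedy assignment: rank search spaces by relative complexity, rank workers by hardware capability, and pair the most complex spaces with the fastest workers. This is a hypothetical illustration, not the SHADHO API; all names, scores, and the round-robin policy are invented.

```python
def assign_spaces(spaces, workers):
    """spaces: list of (name, complexity) with complexity a relative score.
    workers: list of (name, speed), e.g. a GPU node scores above a CPU node.
    Greedily pairs the most complex spaces with the fastest workers,
    cycling through workers round-robin once each has one assignment."""
    ranked_spaces = sorted(spaces, key=lambda s: s[1], reverse=True)
    ranked_workers = sorted(workers, key=lambda w: w[1], reverse=True)
    schedule = {}
    for i, (space, _) in enumerate(ranked_spaces):
        worker, _ = ranked_workers[i % len(ranked_workers)]
        schedule.setdefault(worker, []).append(space)
    return schedule

# Hypothetical search spaces and workers for illustration.
spaces = [("svm_rbf", 3.0), ("svm_linear", 1.0), ("unet", 8.0)]
workers = [("gpu-node", 10.0), ("cpu-node", 2.0)]
print(assign_spaces(spaces, workers))
# {'gpu-node': ['unet', 'svm_linear'], 'cpu-node': ['svm_rbf']}
```

In the actual framework these scores would be updated from the complexity metrics and per-trial performance monitored during the search, rather than fixed up front.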
Fast calculation of multiobjective probability of improvement and expected improvement criteria for Pareto optimization
The use of surrogate-based optimization (SBO) is widespread in engineering design to reduce the number of computationally expensive simulations. However, "real-world" problems often consist of multiple, conflicting objectives leading to a set of competitive solutions (the Pareto front). The objectives are often aggregated into a single cost function to reduce the computational cost, though a better approach is to use multiobjective optimization methods to directly identify a set of Pareto-optimal solutions, which the designer can use to make more informed design decisions (instead of weighting and aggregating the costs upfront). Most work in multiobjective optimization has focused on multiobjective evolutionary algorithms (MOEAs). While MOEAs are well suited to handling large, intractable design spaces, they typically require thousands of expensive simulations, which is prohibitive for the problems under study. Therefore, the use of surrogate models in multiobjective optimization, denoted as multiobjective surrogate-based optimization, may prove even more worthwhile than SBO methods for expediting the optimization of computationally expensive systems. In this paper, the authors propose the efficient multiobjective optimization (EMO) algorithm, which uses Kriging models and multiobjective versions of the probability of improvement and expected improvement criteria to identify the Pareto front with a minimal number of expensive simulations. The EMO algorithm is applied to multiple standard benchmark problems and compared against the well-known NSGA-II, SPEA2 and SMS-EMOA multiobjective optimization methods.
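For two objectives with independent Kriging (Gaussian) predictions, the multiobjective probability of improvement admits a closed form by decomposing the non-dominated region into axis-aligned cells. The sketch below uses this standard construction for the bi-objective minimization case; the function names and sample values are illustrative, not the paper's code.

```python
import numpy as np
from scipy.stats import norm

def poi_2d(mu, sigma, pareto):
    """Probability that a candidate with independent Gaussian predictions
    (mean mu, std sigma, one entry per objective) is NOT dominated by
    `pareto`, a (k, 2) array of mutually non-dominated points (minimization).

    Sorting the front by f1 ascending makes f2 descending; the improvement
    region then splits into k+1 vertical strips, each unbounded below in f2
    up to the tightest Pareto bound active in that strip."""
    P = pareto[np.argsort(pareto[:, 0])]
    a = np.concatenate([[-np.inf], P[:, 0], [np.inf]])  # f1 cell edges
    b = np.concatenate([[np.inf], P[:, 1]])             # f2 upper bounds
    total = 0.0
    for i in range(1, len(a)):
        p1 = (norm.cdf((a[i] - mu[0]) / sigma[0])
              - norm.cdf((a[i - 1] - mu[0]) / sigma[0]))
        p2 = norm.cdf((b[i - 1] - mu[1]) / sigma[1])
        total += p1 * p2  # cells are disjoint, so probabilities add
    return total

# A toy Pareto front and a promising candidate near its knee.
pareto = np.array([[0.0, 2.0], [1.0, 1.0], [2.0, 0.0]])
print(poi_2d(np.array([0.2, 0.2]), np.array([0.3, 0.3]), pareto))
```

Because each term is a product of univariate normal CDFs, the criterion stays cheap to evaluate even as the front grows, which is the point of the fast calculation in the title.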