A Cost-based Optimizer for Gradient Descent Optimization
As the use of machine learning (ML) permeates diverse application
domains, there is an urgent need to support a declarative framework for ML.
Ideally, a user will specify an ML task in a high-level and easy-to-use
language and the framework will invoke the appropriate algorithms and system
configurations to execute it. An important observation towards designing such a
framework is that many ML tasks can be expressed as mathematical optimization
problems, which take a specific form. Furthermore, these optimization problems
can be efficiently solved using variations of the gradient descent (GD)
algorithm. Thus, to decouple a user specification of an ML task from its
execution, a key component is a GD optimizer. We propose a cost-based GD
optimizer that selects the best GD plan for a given ML task. To build our
optimizer, we introduce a set of abstract operators for expressing GD
algorithms and propose a novel approach to estimate the number of iterations a
GD algorithm requires to converge. Extensive experiments on real and synthetic
datasets show that our optimizer not only chooses the best GD plan but also
allows for optimizations that achieve orders-of-magnitude performance speed-ups.
Comment: Accepted at SIGMOD 201
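To make the gradient descent family these plans draw from concrete, here is a minimal batch-GD sketch for a least-squares fit. This is an illustrative toy, not the paper's optimizer; the function name, data, and hyperparameters are all hypothetical.

```python
# Minimal batch gradient descent for a least-squares fit y ≈ w * x.
# Illustrative toy only; the paper's optimizer selects among GD variants
# (batch, stochastic, mini-batch) rather than fixing one plan.

def gd_fit(xs, ys, lr=0.1, iters=100):
    """Fit a single weight w minimizing mean((w*x - y)^2) by batch GD."""
    w = 0.0
    n = len(xs)
    for _ in range(iters):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w

# Data generated from y = 3x, so GD should converge to w ≈ 3.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]
w = gd_fit(xs, ys)
```

A cost-based optimizer in the paper's sense would choose among such variants (and their parameters) per task, rather than running one fixed loop.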
PF-OLA: A High-Performance Framework for Parallel On-Line Aggregation
Online aggregation provides estimates to the final result of a computation
during the actual processing. The user can stop the computation as soon as the
estimate is accurate enough, typically early in the execution. This enables
interactive exploration of even the largest datasets. In this paper we
introduce the first framework for parallel online aggregation in which
estimation incurs virtually no overhead on top of the actual execution. We
define a generic interface for expressing any estimation model that completely
abstracts away the execution details. We design a novel estimator
specifically targeted at parallel online aggregation. When executed by the
framework over a massive TPC-H instance, the estimator provides
accurate confidence bounds early in the execution even when the cardinality of
the final result is seven orders of magnitude smaller than the dataset size and
without incurring overhead.
Comment: 36 page
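As a rough illustration of the idea (a generic sketch, not the PF-OLA framework or its estimator), a running estimate of a SUM with a normal-approximation confidence bound can be built by processing rows in random order and scaling the partial sum:

```python
# Sketch of online aggregation: rows are consumed in random order, and at any
# point the scaled partial sum estimates the final SUM with a CLT-based bound.
# Hypothetical example; PF-OLA's estimator and interface are more general.
import math
import random

def online_sum(rows, seed=0):
    """Yield (estimate, half_width) after each processed row."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # random order justifies the estimator
    n = len(rows)
    total, total_sq = 0.0, 0.0
    for k, v in enumerate(rows, start=1):
        total += v
        total_sq += v * v
        mean = total / k
        est = n * mean                 # scale the partial sum to the full size
        var = max(total_sq / k - mean * mean, 0.0)
        # 95% normal-approximation half-width for the SUM estimate
        half = 1.96 * n * math.sqrt(var / k) if k > 1 else float("inf")
        yield est, half

rows = range(1000)                     # true sum = 499500
for i, (est, half) in enumerate(online_sum(rows), start=1):
    if i == 100:                       # a user could stop once 10% is processed
        break
```

The user-visible behavior is the point: the interval shrinks as more rows are processed, so the computation can be stopped early once it is tight enough.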
Technical Report: Distribution Temporal Logic: Combining Correctness with Quality of Estimation
We present a new temporal logic called Distribution Temporal Logic (DTL)
defined over predicates of belief states and hidden states of partially
observable systems. DTL can express properties involving uncertainty and
likelihood that cannot be described by existing logics. A co-safe formulation
of DTL is defined and algorithmic procedures are given for monitoring
executions of a partially observable Markov decision process with respect to
such formulae. A simulation case study of a rescue robotics application
outlines our approach.
Comment: Expanded version of "Distribution Temporal Logic: Combining
Correctness with Quality of Estimation", to appear in IEEE CDC 201
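The belief states that DTL predicates range over can be illustrated with a minimal discrete Bayes filter. This is a generic sketch, not the paper's monitoring procedure; the states, models, probabilities, and the example predicate are all hypothetical.

```python
# One step of discrete Bayes filtering over hidden states. A belief is a
# distribution over states, updated after each observation; DTL-style
# predicates would then be evaluated on this belief at runtime.

def belief_update(belief, trans, obs_model, obs):
    """belief: {state: prob}, trans: {(s, s2): P(s2|s)},
    obs_model: {(s, o): P(o|s)}, obs: observation received this step."""
    # Predict: push the belief through the transition model.
    predicted = {}
    for s, p in belief.items():
        for s2 in belief:
            predicted[s2] = predicted.get(s2, 0.0) + p * trans.get((s, s2), 0.0)
    # Correct: weight by observation likelihood and renormalize.
    weighted = {s: predicted[s] * obs_model.get((s, obs), 0.0) for s in predicted}
    z = sum(weighted.values())
    return {s: w / z for s, w in weighted.items()}

# Two hidden states; observation 'hot' is likelier in state 'A'.
belief = {"A": 0.5, "B": 0.5}
trans = {("A", "A"): 0.9, ("A", "B"): 0.1, ("B", "A"): 0.1, ("B", "B"): 0.9}
obs_model = {("A", "hot"): 0.8, ("A", "cold"): 0.2,
             ("B", "hot"): 0.3, ("B", "cold"): 0.7}
belief = belief_update(belief, trans, obs_model, "hot")
# A DTL-flavored predicate on the belief, e.g. "P(state = A) > 0.6":
likely_a = belief["A"] > 0.6
```

DTL's contribution is combining such belief predicates with predicates on the hidden state itself inside one temporal logic; the filter above only shows what a belief state is.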
An Intermediate Language and Estimator for Automated Design Space Exploration on FPGAs
We present the TyTra-IR, a new intermediate language intended as a
compilation target for high-level language compilers and a front-end for HDL
code generators. We derive the requirements of this new language from the FPGA
design space it should be able to express and from the estimation space into
which each design-space configuration should be mappable in an automated design
flow. We use a simple kernel to illustrate
multiple configurations using the semantics of TyTra-IR. The key novelty of
this work is the cost model for resource-costs and throughput for different
configurations of interest for a particular kernel. Through the realistic
example of a Successive Over-Relaxation kernel implemented both in TyTra-IR and
HDL, we demonstrate both the expressiveness of the IR and the accuracy of our
cost model.
Comment: Pre-print and extended version of a poster paper accepted at the
International Symposium on Highly Efficient Accelerators and Reconfigurable
Technologies (HEART2015), Boston, MA, USA, June 1-2, 201
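The flavor of such a cost model can be sketched with a toy estimator that scores alternative configurations of one kernel by resource use and throughput, then keeps the fastest one that fits the device. All formulas, names, and numbers here are invented for illustration and are not TyTra-IR's model.

```python
# Toy cost estimator (hypothetical, not TyTra-IR's): score each configuration
# of a kernel by LUT cost and throughput, keep the fastest feasible one.

def estimate(lanes, ops_per_item, luts_per_lane, clock_hz):
    """Return (luts_used, items_per_second) for a given degree of parallelism."""
    luts = lanes * luts_per_lane
    # Fully pipelined lanes each retire one item every ops_per_item cycles.
    throughput = lanes * clock_hz / ops_per_item
    return luts, throughput

DEVICE_LUTS = 50_000          # invented device capacity
feasible = []
for lanes in (1, 2, 4, 8, 16):
    luts, tput = estimate(lanes, ops_per_item=10,
                          luts_per_lane=5_000, clock_hz=200e6)
    if luts <= DEVICE_LUTS:   # discard configurations that do not fit
        feasible.append((tput, lanes))
best_tput, best_lanes = max(feasible)
```

The point of a calibrated model of this shape is that the design-space search runs over cheap estimates instead of full synthesis runs for every candidate configuration.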