479,748 research outputs found
Turbo-Aggregate: Breaking the Quadratic Aggregation Barrier in Secure Federated Learning
Federated learning is a distributed framework for training machine learning
models over the data residing at mobile devices, while protecting the privacy
of individual users. A major bottleneck in scaling federated learning to a
large number of users is the overhead of secure model aggregation across many
users. In particular, the overhead of the state-of-the-art protocols for secure
model aggregation grows quadratically with the number of users. In this paper,
we propose the first secure aggregation framework, named Turbo-Aggregate, that
in a network with $N$ users achieves a secure aggregation overhead of
$O(N\log N)$, as opposed to $O(N^2)$, while tolerating up to a user dropout
rate of $50\%$. Turbo-Aggregate employs a multi-group circular strategy for
efficient model aggregation, and leverages additive secret sharing and novel
coding techniques for injecting aggregation redundancy in order to handle user
dropouts while guaranteeing user privacy. We experimentally demonstrate that
Turbo-Aggregate achieves a total running time that grows almost linearly in the
number of users, and provides up to $40\times$ speedup over the
state-of-the-art protocols with up to $N=200$ users. Our experiments also
demonstrate the impact of model size and bandwidth on the performance of
Turbo-Aggregate.
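As a rough illustration of the additive secret sharing ingredient mentioned above (a minimal toy sketch, not the paper's multi-group circular protocol; the function names, the scalar updates and the modulus PRIME are assumptions of the example):

    import random

    PRIME = 2**31 - 1  # assumed field size for this toy example

    def share(value, n_shares):
        # Split one (non-negative integer) model update into additive shares
        # mod PRIME; any single share reveals nothing about the update.
        shares = [random.randrange(PRIME) for _ in range(n_shares - 1)]
        shares.append((value - sum(shares)) % PRIME)
        return shares

    def aggregate(all_shares):
        # Each party sums the shares it received; adding the partial sums
        # recovers only the sum of the updates, never an individual update.
        partial_sums = [sum(col) % PRIME for col in zip(*all_shares)]
        return sum(partial_sums) % PRIME

    updates = [3, 7, 5]                        # toy scalar updates from three users
    shared = [share(u, n_shares=3) for u in updates]
    assert aggregate(shared) == sum(updates) % PRIME

Protocols such as Turbo-Aggregate additionally inject coded redundancy so that the sum survives user dropouts, which this sketch does not model.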
Cross-Scale Cost Aggregation for Stereo Matching
Human beings process stereoscopic correspondence across multiple scales.
However, this bio-inspiration is ignored by state-of-the-art cost aggregation
methods for dense stereo correspondence. In this paper, a generic cross-scale
cost aggregation framework is proposed to allow multi-scale interaction in cost
aggregation. We firstly reformulate cost aggregation from a unified
optimization perspective and show that different cost aggregation methods
essentially differ in the choices of similarity kernels. Then, an inter-scale
regularizer is introduced into optimization and solving this new optimization
problem leads to the proposed framework. Since the regularization term is
independent of the similarity kernel, various cost aggregation methods can be
integrated into the proposed general framework. We show that the cross-scale
framework is important as it effectively and efficiently expands
state-of-the-art cost aggregation methods and leads to significant
improvements, when evaluated on Middlebury, KITTI and New Tsukuba datasets.
Comment: To appear in the 2014 IEEE Conference on Computer Vision and Pattern
Recognition (CVPR 2014), poster (29.88%)
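A sketch of the unified view described in the abstract (the notation here is illustrative, not necessarily the paper's): single-scale cost aggregation at pixel $p$ and disparity $l$ can be written as a kernel-weighted least-squares problem,

    \tilde{C}(p, l) = \arg\min_{z} \sum_{q \in N_p} K(p, q)\,\bigl(z - C(q, l)\bigr)^2,

so that different aggregation methods correspond to different similarity kernels $K$. The cross-scale extension couples the per-scale problems with an inter-scale regularizer,

    \{\tilde{C}^{s}\}_{s=0}^{S} = \arg\min_{\{z^{s}\}} \sum_{s=0}^{S} \sum_{q \in N_{p^{s}}} K(p^{s}, q)\,\bigl(z^{s} - C^{s}(q, l)\bigr)^2 + \lambda \sum_{s=1}^{S} \bigl(z^{s} - z^{s-1}\bigr)^2,

where $C^{s}$ is the cost volume at scale $s$ and $\lambda$ controls inter-scale consistency; because the regularization term does not involve $K$, any kernel-based aggregation method can be plugged in.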
Understanding Price Variation Across Stores and Supermarket Chains: Some Implications for CPI Aggregation Methods
The empirical literature on price indices consistently finds that aggregation methods have a considerable impact, particularly when scanner data are used. This paper outlines a novel approach to test for the homogeneity of goods and hence for the appropriateness of aggregation. A hedonic regression framework is used to test for item homogeneity across four supermarket chains and across stores within each of these supermarket chains. We find empirical support for the aggregation of prices across stores which belong to the same supermarket chain. Support was also found for the aggregation of prices across three of the four supermarket chains.
Keywords: price indexes; aggregation; scanner data; unit values; item homogeneity; hedonics
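The hedonic test can be pictured roughly as follows (our notation, for illustration; the paper's exact specification may differ): regress log prices on item characteristics together with chain and store dummies,

    \ln p_{isct} = \alpha + \sum_{k} \beta_k x_{ki} + \sum_{c} \gamma_c D^{\text{chain}}_{c} + \sum_{s} \delta_s D^{\text{store}}_{s} + \varepsilon_{isct},

where $x_{ki}$ are observed characteristics of item $i$ and $D^{\text{chain}}_{c}$, $D^{\text{store}}_{s}$ are chain and store indicators. Failing to reject $\delta_s = 0$ for stores within a chain (or $\gamma_c = 0$ across chains) is evidence that items are priced homogeneously at that level, which supports aggregating their prices.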
Approximation with Error Bounds in Spark
We introduce a sampling framework to support approximate computing with
estimated error bounds in Spark. Our framework allows sampling to be performed
at the beginning of a sequence of multiple transformations ending in an
aggregation operation. The framework constructs a data provenance tree as the
computation proceeds, then combines the tree with multi-stage sampling and
population estimation theories to compute error bounds for the aggregation.
When information about output keys is available early, the framework can also
use adaptive stratified reservoir sampling to avoid (or reduce) key losses in
the final output and to achieve more consistent error bounds across popular and
rare keys. Finally, the framework includes an algorithm to dynamically choose
sampling rates to meet user specified constraints on the CDF of error bounds in
the outputs. We have implemented a prototype of our framework called
ApproxSpark, and used it to implement five approximate applications from
different domains. Evaluation results show that ApproxSpark can (a)
significantly reduce execution time if users can tolerate small amounts of
uncertainties and, in many cases, loss of rare keys, and (b) automatically find
sampling rates to meet user specified constraints on error bounds. We also
extensively explore and discuss the trade-offs between sampling rate, execution
time, accuracy and key loss.
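To give a flavor of the population-estimation step, here is a toy single-stage sketch in plain Python; it is not ApproxSpark's multi-stage estimator or API, and the function name estimate_sum is made up for the example:

    import math
    import random
    import statistics

    def estimate_sum(population, rate, z=1.96):
        # Bernoulli-sample the data at the given rate, scale the sample sum up
        # to the population, and attach a normal-approximation error bound
        # (finite-population correction ignored for simplicity).
        sample = [x for x in population if random.random() < rate]
        n, N = len(sample), len(population)
        estimate = (N / n) * sum(sample)
        std_error = N * statistics.stdev(sample) / math.sqrt(n)
        return estimate, z * std_error

    data = list(range(1_000_000))
    est, bound = estimate_sum(data, rate=0.01)
    print(f"estimated sum = {est:.0f} +/- {bound:.0f}, true sum = {sum(data)}")

ApproxSpark combines estimators of this kind across the stages recorded in its provenance tree, which is what allows error bounds to be attached to the final aggregation.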
The Present, Future and Imperfect of Financial Risk Management
Current research on financial risk management applications of econometrics centres on the accurate assessment of individual market and credit risks, with relatively little theoretical or applied econometric research on other types of risk, aggregation risk, data incompleteness and optimal risk control. We argue that consideration of the model risk arising from crude aggregation rules and inadequate data could lead to a new class of reduced-form Bayesian risk assessment models. Logically, these models should be set within a common factor framework that allows proper risk aggregation methods to be developed. We explain how such a framework could also provide the essential links between risk control, risk assessments and the optimal allocation of resources.
Keywords: financial risk assessment; risk control; RAROC; economic capital; regulatory capital; optimal allocation of resources
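The common-factor framework advocated above can be sketched as follows (our notation, purely illustrative): if each exposure loads on a shared set of risk factors,

    r_i = \alpha_i + \sum_{k} \beta_{ik} f_k + \epsilon_i, \qquad \sigma^{2}_{\text{agg}} = w^{\top}\bigl(B \Sigma_f B^{\top} + D\bigr)\, w,

where $B = [\beta_{ik}]$ collects the factor loadings, $\Sigma_f$ is the factor covariance matrix, $D$ the diagonal matrix of idiosyncratic variances and $w$ the vector of exposures, then aggregate risk follows from the common factor structure rather than from crude summation of stand-alone risk numbers.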
