9,941 research outputs found
Generalization Bounds via Information Density and Conditional Information Density
We present a general approach, based on an exponential inequality, to derive
bounds on the generalization error of randomized learning algorithms. Using
this approach, we provide bounds on the average generalization error as well as
bounds on its tail probability, for both the PAC-Bayesian and single-draw
scenarios. Specifically, for the case of subgaussian loss functions, we obtain
novel bounds that depend on the information density between the training data
and the output hypothesis. When suitably weakened, these bounds recover many of
the available information-theoretic bounds in the literature. We also extend
the proposed exponential-inequality approach to the setting recently introduced
by Steinke and Zakynthinou (2020), where the learning algorithm depends on a
randomly selected subset of the available training data. For this setup, we
present bounds for bounded loss functions in terms of the conditional
information density between the output hypothesis and the random variable
determining the subset choice, given all training data. Through our approach,
we recover the average generalization bound presented by Steinke and
Zakynthinou (2020) and extend it to the PAC-Bayesian and single-draw scenarios.
For the single-draw scenario, we also obtain novel bounds in terms of the
conditional α-mutual information and the conditional maximal leakage.
Comment: Published in Journal on Selected Areas in Information Theory (JSAIT).
Important note: the proof of the data-dependent bounds provided in the paper
contains an error, which is rectified in the following document:
https://gdurisi.github.io/files/2021/jsait-correction.pd
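For orientation, one classical result that suitably weakened versions of such information-density bounds recover is the mutual-information bound of Xu and Raginsky (2017); this formula is a known external result restated here, not one taken from the abstract. With training data S of n samples, output hypothesis W, and a σ-subgaussian loss:

```latex
\left| \mathbb{E}\!\left[ \operatorname{gen}(W, S) \right] \right|
  \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(W; S)}
```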
To develop an efficient variable speed compressor motor system
This research presents a proposed new method for improving the energy efficiency of a Variable Speed Drive (VSD) for induction motors. The principles of VSD are reviewed, with emphasis on the efficiency and power losses associated with operating a variable speed compressor motor drive, particularly at low speed.

The efficiency of an induction motor operated at rated speed and load torque is high. At low-load operation, however, running the induction motor at rated flux causes the iron losses to increase excessively, so its efficiency drops dramatically. To improve the efficiency, it is essential to find the flux level that minimizes the total motor losses. This technique is known as an efficiency or energy optimization control method. In practice, a typical compressor load does not require a high dynamic response, so the efficiency optimization control proposed in this research is based on a scalar control model.

In this research, a new neural network controller for efficiency optimization control is developed. The controller is designed to generate both voltage and frequency reference signals simultaneously. To make the controller robust to variations in the motor parameters, a real-time (on-line) learning algorithm based on the second-order Levenberg-Marquardt optimization method is employed. A simulation of the proposed controller for a variable speed compressor is presented. The results clearly show that the efficiency at low speed is significantly increased, while the motor speed is maintained. Furthermore, the controller is robust to variations in the motor parameters. The simulation results are also verified by experiment.
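The core idea of loss-minimizing flux control can be sketched with a toy loss model: iron losses grow with flux, copper losses grow as load current (torque over flux) rises, and the optimal flux balances the two. The coefficients and the quadratic loss model below are illustrative assumptions, not the motor model from this work.

```python
import numpy as np

# Hypothetical per-unit loss model (illustrative, not from the paper):
# iron losses ~ k_iron * flux^2, copper losses ~ k_cu * (torque/flux)^2.
def total_losses(flux, torque, k_iron=1.0, k_cu=1.0):
    return k_iron * flux**2 + k_cu * (torque / flux)**2

def optimal_flux(torque, k_iron=1.0, k_cu=1.0):
    # Setting d(total_losses)/d(flux) = 0 gives the closed form below.
    return (k_cu / k_iron) ** 0.25 * torque ** 0.5

# At a low-load operating point, a grid search over flux levels should
# land near the closed-form optimum.
torque = 0.25
flux_grid = np.linspace(0.05, 2.0, 4000)
best = flux_grid[np.argmin(total_losses(flux_grid, torque))]
print(best, optimal_flux(torque))
```

At low torque the optimum flux is well below rated flux, which is exactly why running at rated flux under low load wastes energy in iron losses.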
Quantitative information flow under generic leakage functions and adaptive adversaries
We put forward a model of action-based randomization mechanisms to analyse
quantitative information flow (QIF) under generic leakage functions, and under
possibly adaptive adversaries. This model subsumes many of the QIF models
proposed so far. Our main contributions include the following: (1) we identify
mild general conditions on the leakage function under which it is possible to
derive general and significant results on adaptive QIF; (2) we contrast the
efficiency of adaptive and non-adaptive strategies, showing that the latter are
as efficient as the former in terms of length up to an expansion factor bounded
by the number of available actions; (3) we show that the maximum information
leakage over strategies, given a finite time horizon, can be expressed in terms
of a Bellman equation. This can be used to compute an optimal finite strategy
recursively, by resorting to standard methods like backward induction.
Comment: Revised and extended version of a conference paper with the same title that appeared in Proc. of FORTE 2014, LNC
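The finite-horizon Bellman recursion mentioned in point (3) can be solved by a generic backward-induction pass. The per-step leakage gain and transition kernel below are random placeholders standing in for a concrete QIF model; only the recursion structure V_t(s) = max_a [g(s,a) + Σ_s' P(s'|s,a) V_{t+1}(s')] is the point.

```python
import numpy as np

def backward_induction(gain, trans, horizon):
    """Solve V_t(s) = max_a [gain(s,a) + sum_s' trans(s,a,s') * V_{t+1}(s')].

    gain:  (S, A) one-step leakage for each state/action pair (placeholder).
    trans: (S, A, S) transition probabilities (placeholder kernel).
    Returns the horizon-0 value function and the optimal policy per step.
    """
    S, A = gain.shape
    V = np.zeros(S)                           # terminal condition V_T = 0
    policy = np.zeros((horizon, S), dtype=int)
    for t in reversed(range(horizon)):
        Q = gain + trans @ V                  # (S, A): immediate + expected future
        policy[t] = Q.argmax(axis=1)          # optimal action at step t
        V = Q.max(axis=1)
    return V, policy

rng = np.random.default_rng(0)
S, A, T = 3, 2, 4
gain = rng.random((S, A))
trans = rng.random((S, A, S))
trans /= trans.sum(axis=2, keepdims=True)     # normalize to a stochastic kernel
V0, pi = backward_induction(gain, trans, T)
```

Because the recursion only looks one step ahead at each stage, the optimal finite strategy falls out of a single sweep from the time horizon back to step 0.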
Comparison of data-driven uncertainty quantification methods for a carbon dioxide storage benchmark scenario
A variety of methods is available to quantify uncertainties arising within
the modeling of flow and transport in carbon dioxide storage, but there is a
lack of thorough comparisons. Usually, raw data from such storage sites can
hardly be described by theoretical statistical distributions since only very
limited data is available. Hence, exact information on distribution shapes for
all uncertain parameters is very rare in realistic applications. We discuss and
compare four different methods tested for data-driven uncertainty
quantification based on a benchmark scenario of carbon dioxide storage. In the
benchmark, for which we provide data and code, carbon dioxide is injected into
a saline aquifer modeled by the nonlinear capillarity-free fractional flow
formulation for two incompressible fluid phases, namely carbon dioxide and
brine. To cover different aspects of uncertainty quantification, we incorporate
various sources of uncertainty such as uncertainty of boundary conditions, of
conceptual model definitions and of material properties. We consider recent
versions of the following non-intrusive and intrusive uncertainty
quantification methods: arbitrary polynomial chaos, spatially adaptive sparse
grids, kernel-based greedy interpolation and hybrid stochastic Galerkin. The
performance of each approach is demonstrated by assessing the expectation value and
standard deviation of the carbon dioxide saturation against a reference
statistic based on Monte Carlo sampling. We compare the convergence of all
methods, reporting on accuracy with respect to the number of model runs and
resolution. Finally, we offer suggestions about the methods' advantages and
disadvantages that can guide the modeler in uncertainty quantification for
carbon dioxide storage and beyond.
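A Monte Carlo reference statistic of the kind used for such comparisons is easy to sketch: sample the uncertain inputs, run the model, and report mean and standard deviation of the quantity of interest. The model function, distributions, and parameter names below are stand-ins, not the two-phase flow simulator or data from the benchmark.

```python
import numpy as np

def model(perm, injection_rate):
    # Placeholder for the quantity of interest (e.g. CO2 saturation at an
    # observation point); NOT the fractional flow model from the paper.
    return 1.0 - np.exp(-injection_rate / perm)

rng = np.random.default_rng(42)
n = 10_000
perm = rng.lognormal(mean=0.0, sigma=0.3, size=n)  # uncertain material property
rate = rng.uniform(0.8, 1.2, size=n)               # uncertain boundary condition
samples = model(perm, rate)
mean, std = samples.mean(), samples.std(ddof=1)    # reference statistics
```

The surrogate-based methods in the comparison are then judged by how quickly their estimates of these two statistics converge to the Monte Carlo reference as the number of model runs grows.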
Exploring Differential Obliviousness
In a recent paper, Chan et al. [SODA '19] proposed a relaxation of the notion of (full) memory obliviousness, which was introduced by Goldreich and Ostrovsky [J. ACM '96] and extensively researched by cryptographers. The new notion, differential obliviousness, requires that any two neighboring inputs exhibit similar memory access patterns, where the similarity requirement is that of differential privacy. Chan et al. demonstrated that differential obliviousness allows achieving improved efficiency for several algorithmic tasks, including sorting, merging of sorted lists, and range query data structures.
In this work, we continue the exploration of differential obliviousness, focusing on algorithms that do not necessarily examine all their input. This choice is motivated by the fact that the existence of logarithmic-overhead ORAM protocols implies that differential obliviousness can yield at most a logarithmic improvement in efficiency for computations that need to examine all their input. In particular, we explore property testing, where we show that differential obliviousness yields an almost linear improvement in overhead in the dense graph model, and at most a quadratic improvement in the bounded degree model. We also explore tasks where a non-oblivious algorithm would need to examine different portions of the input, with the choice of portions depending on the input itself; we show that such behavior can be maintained under differential obliviousness, but not under full obliviousness. Our examples suggest that there would be benefits in further exploring which classes of computational tasks are amenable to differential obliviousness.
Automatic programming methodologies for electronic hardware fault monitoring
This paper presents three variants of Genetic Programming (GP) approaches for intelligent online performance monitoring of electronic circuits and systems. Reliability modeling of electronic circuits can be best performed by the stressor-susceptibility interaction model: a circuit or a system is considered to have failed once the stressor has exceeded the susceptibility limits. For on-line prediction, validated stressor vectors may be obtained by direct measurements or sensors, which, after pre-processing and standardization, are fed into the GP models. Empirical results are compared with artificial neural networks trained using the backpropagation algorithm and with classification and regression trees. The performance of the proposed method is evaluated by comparing the experimental results with the actual failure model values. The developed model reveals that GP could play an important role in future fault monitoring systems. This research was supported by the International Joint Research Grant of the IITA (Institute of Information Technology Assessment) foreign professor invitation program of the MIC (Ministry of Information and Communication), Korea.
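The stressor-susceptibility failure criterion described above reduces to a simple threshold check: a unit is flagged as failed once any measured stressor exceeds its susceptibility limit. The sensor names and limit values below are hypothetical, chosen only to illustrate the rule.

```python
# Stressor-susceptibility interaction check (illustrative sketch):
# failure is declared when any stressor exceeds its susceptibility limit.
def has_failed(stressors, limits):
    """stressors, limits: dicts keyed by the same (hypothetical) sensor names."""
    return any(stressors[k] > limits[k] for k in limits)

limits = {"temperature_C": 125.0, "voltage_V": 5.5}   # hypothetical limits
print(has_failed({"temperature_C": 130.0, "voltage_V": 5.0}, limits))  # True
print(has_failed({"temperature_C": 90.0, "voltage_V": 5.2}, limits))   # False
```

In the paper's pipeline, the GP models take the pre-processed stressor vectors as inputs and predict this failure behavior rather than applying a fixed threshold directly.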