Towards Smart Hybrid Fuzzing for Smart Contracts
Smart contracts are Turing-complete programs that are executed across a
blockchain network. Unlike traditional programs, once deployed they cannot be
modified. As smart contracts become more popular and carry more value, they
become an increasingly attractive target for attackers. In recent years, smart
contracts have suffered major exploits caused by programming errors, costing
millions of dollars. As a result, a variety of tools for detecting bugs have
been proposed. However, the majority of these tools either yield many false
positives due to over-approximation or achieve poor code coverage due to
complex path constraints.
Fuzzing, or fuzz testing, is a popular and effective software testing
technique. However, traditional fuzzers tend to be effective at finding
shallow bugs but less effective at finding bugs that lie deeper in the
execution. In
this work, we present CONFUZZIUS, a hybrid fuzzer that combines evolutionary
fuzzing with constraint solving in order to execute more code and find more
bugs in smart contracts. Evolutionary fuzzing is used to exercise shallow parts
of a smart contract, while constraint solving is used to generate inputs which
satisfy complex conditions that prevent the evolutionary fuzzing from exploring
deeper paths. Moreover, we use data dependency analysis to efficiently
generate sequences of transactions that create specific contract states in
which bugs may be hidden. We evaluate the effectiveness of our fuzzing
strategy by
comparing CONFUZZIUS with state-of-the-art symbolic execution tools and
fuzzers. Our evaluation shows that our hybrid fuzzing approach produces
significantly better results than state-of-the-art symbolic execution tools and
fuzzers.
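
Below is a minimal Python sketch of the hybrid loop described above, not the
CONFUZZIUS implementation: mutate, execute, solve_branch, and the stall
threshold are illustrative assumptions, with the z3 solver standing in for
the constraint-solving component. Evolutionary mutation exercises shallow
paths; once coverage stalls, the solver produces an input satisfying a hard
branch condition that mutation could not hit.

import random
from z3 import Solver, BitVec, sat

def mutate(seed):
    # Evolutionary step: replace one random byte of a seed input.
    if not seed:
        return bytes([random.randrange(256)])
    i = random.randrange(len(seed))
    return seed[:i] + bytes([random.randrange(256)]) + seed[i + 1:]

def solve_branch(target):
    # Constraint-solving step: ask z3 for a 4-byte input whose 32-bit value
    # satisfies a hypothetical hard branch condition (x * 7 == target).
    x = BitVec("x", 32)
    s = Solver()
    s.add(x * 7 == target)
    if s.check() == sat:
        return s.model()[x].as_long().to_bytes(4, "little")
    return None

def hybrid_fuzz(execute, rounds=1000):
    # execute(data) is assumed to run the contract and return the set of
    # covered branch ids; alternate mutation with solving on stalls.
    seed, coverage, stall = b"\x00" * 4, set(), 0
    for _ in range(rounds):
        candidate = mutate(seed)
        new_edges = execute(candidate)
        if new_edges - coverage:
            coverage |= new_edges
            seed, stall = candidate, 0
        else:
            stall += 1
        if stall > 50:  # mutation is stuck: fall back to the solver
            solved = solve_branch(0x1337)
            if solved is not None:
                seed, stall = solved, 0
    return coverage

In CONFUZZIUS the solved constraints come from the concrete path conditions
collected during execution, and transaction sequences are assembled via the
data dependency analysis; the fixed arithmetic condition above merely stands
in for such a branch.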
Collaboration Networks, Structural Holes, And Innovation: A Longitudinal Study
To assess the effects of a firm's network of relations on innovation, this paper elaborates a theoretical framework that relates three aspects of a firm's ego network to the firm's subsequent innovation output: direct ties, indirect ties, and structural holes (disconnections between a firm's partners). It posits that direct and indirect ties both have a positive impact on innovation, but that the impact of indirect ties is moderated by the number of a firm's direct ties. Structural holes are proposed to have both positive and negative influences on subsequent innovation. Results from a longitudinal study of firms in the international chemicals industry indicate support for the predictions on direct and indirect ties, but in the interfirm collaboration network, increasing structural holes has a negative effect on innovation. Among the implications for interorganizational network theory is that the optimal structure of interfirm networks depends on the objectives of the network members.
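
As a reading aid, here is a hedged Python sketch, on an invented toy
collaboration graph, of the three ego-network quantities the study relates to
innovation output; it uses the networkx library and is not the paper's
measurement procedure.

import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("firm", "A"), ("firm", "B"), ("firm", "C"),  # direct ties
    ("A", "A1"), ("A", "A2"), ("B", "B1"),        # partners' partners
    ("A", "B"),                                   # closes one structural hole
])

firm = "firm"
direct = G.degree(firm)                               # number of direct ties
dist = nx.single_source_shortest_path_length(G, firm, cutoff=2)
indirect = sum(1 for d in dist.values() if d == 2)    # indirect ties
# Burt-style structural-hole measures: low constraint / high effective size
# indicate many disconnections (holes) among the firm's partners.
constraint = nx.constraint(G)[firm]
effective = nx.effective_size(G)[firm]
print(direct, indirect, round(constraint, 3), round(effective, 3))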
Identifying and attacking the saddle point problem in high-dimensional non-convex optimization
A central challenge to many fields of science and engineering involves
minimizing non-convex error functions over continuous, high dimensional spaces.
Gradient descent or quasi-Newton methods are almost ubiquitously used to
perform such minimizations, and it is often thought that a main source of
difficulty for these local methods in finding the global minimum is the
proliferation of local minima with much higher error than the global minimum.
Here we argue, based on results from statistical physics, random matrix theory,
neural network theory, and empirical evidence, that a deeper and more profound
difficulty originates from the proliferation of saddle points, not local
minima, especially in high dimensional problems of practical interest. Such
saddle points are surrounded by high error plateaus that can dramatically slow
down learning, and give the illusory impression of the existence of a local
minimum. Motivated by these arguments, we propose a new approach to
second-order optimization, the saddle-free Newton method, that can rapidly
escape high dimensional saddle points, unlike gradient descent and quasi-Newton
methods. We apply this algorithm to deep or recurrent neural network training,
and provide numerical evidence for its superior optimization performance.
Comment: The theoretical review and analysis in this article draw heavily
from arXiv:1405.4604 [cs.LG].
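
A minimal numpy sketch of the rescaling idea behind the saddle-free Newton
method follows, on an invented two-dimensional objective with a pure saddle
at the origin: the gradient is scaled by the inverse of |H|, the Hessian with
its eigenvalues replaced by their absolute values, so that negative-curvature
directions are descended rather than approached.

import numpy as np

def grad(p):   # gradient of f(x, y) = x^2 - y^2
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

def hess(p):   # Hessian of f; indefinite, so (0, 0) is a saddle point
    return np.array([[2.0, 0.0], [0.0, -2.0]])

def saddle_free_newton_step(p, eps=1e-6):
    g, H = grad(p), hess(p)
    lam, V = np.linalg.eigh(H)                        # H = V diag(lam) V^T
    abs_H_inv = V @ np.diag(1.0 / (np.abs(lam) + eps)) @ V.T
    return p - abs_H_inv @ g                          # step scaled by |H|^-1

p = np.array([0.5, 0.3])
newton = p - np.linalg.solve(hess(p), grad(p))
print("plain Newton :", newton)                      # -> [0, 0], the saddle
print("saddle-free  :", saddle_free_newton_step(p))  # x -> 0, |y| grows

On this toy function plain Newton jumps straight into the saddle at the
origin, while the saddle-free step moves x toward the minimum of x^2 and
pushes y away from the saddle, which is exactly the behaviour the abstract
describes.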
Statistical modelling of summary values leads to accurate Approximate Bayesian Computations
Approximate Bayesian Computation (ABC) methods rely on asymptotic arguments,
implying that parameter inference can be systematically biased even when
sufficient statistics are available. We propose to construct the ABC
accept/reject step from decision theoretic arguments on a suitable auxiliary
space. This framework, referred to as ABC*, fully specifies which test
statistics to use, how to combine them, how to set the tolerances and how long
to simulate in order to obtain accuracy properties on the auxiliary space. Akin
to maximum-likelihood indirect inference, regularity conditions establish when
the ABC* approximation to the posterior density is accurate on the original
parameter space in terms of the Kullback-Leibler divergence and the maximum a
posteriori point estimate. Fundamentally, escaping asymptotic arguments
requires knowledge of the distribution of the test statistics, which we
obtain by modelling the distribution of summary values, i.e., data points on
a summary level. Synthetic examples and an application to time series data of influenza A
(H3N2) infections in the Netherlands illustrate ABC* in action.
Comment: Videos can be played with Acrobat Reader. Manuscript under review
and not accepted.
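
For context, here is a plain rejection-ABC sketch in Python, not the
calibrated ABC* procedure the paper derives; the prior, tolerance, and
summary statistic below are ad hoc choices, and removing exactly this
arbitrariness is what ABC*'s decision-theoretic construction is for.

import numpy as np

rng = np.random.default_rng(0)
y_obs = rng.normal(loc=2.0, scale=1.0, size=100)   # observed data
s_obs = y_obs.mean()                               # observed summary value

def abc_rejection(n_draws=20000, tol=0.05):
    accepted = []
    for _ in range(n_draws):
        theta = rng.normal(0.0, 5.0)               # draw from the prior
        y_sim = rng.normal(theta, 1.0, size=100)   # simulate data at theta
        if abs(y_sim.mean() - s_obs) < tol:        # accept/reject step
            accepted.append(theta)
    return np.array(accepted)

post = abc_rejection()
print(f"ABC posterior mean ~ {post.mean():.3f} from {post.size} draws")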
A flexible architecture for privacy-aware trust management
In service-oriented systems, a constellation of services cooperates, sharing potentially sensitive information and responsibilities. Cooperation is only possible if the different participants trust each other. As trust may depend on many different factors, in a flexible framework for Trust Management (TM) trust must be computed by combining different types of information. In this paper we describe the TAS3 TM framework, which integrates independent TM systems into a single trust decision point. The TM framework supports intricate combinations whilst still remaining easily extensible. It also provides a unified trust evaluation interface to the (authorization framework of the) services. We demonstrate the flexibility of the approach by integrating three distinct TM paradigms: reputation-based TM, credential-based TM, and Key Performance Indicator TM. Finally, we discuss privacy concerns in TM systems and the directions to be taken for the definition of a privacy-friendly TM architecture.
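
The following Python sketch illustrates the architectural idea with entirely
hypothetical class names and scoring rules, not the TAS3 API: independent TM
engines sit behind one evaluation interface, and a single trust decision
point combines their verdicts, so a new TM paradigm plugs in without touching
callers.

class ReputationTM:
    def __init__(self, scores):
        self.scores = scores                      # subject -> reputation
    def evaluate(self, subject, action):
        return self.scores.get(subject, 0.5)      # neutral default

class CredentialTM:
    def __init__(self, authorized):
        self.authorized = authorized              # set of (subject, action)
    def evaluate(self, subject, action):
        return 1.0 if (subject, action) in self.authorized else 0.0

class KPITM:
    def __init__(self, uptime):
        self.uptime = uptime                      # performance indicator
    def evaluate(self, subject, action):
        return self.uptime.get(subject, 0.0)

class TrustDecisionPoint:
    # Unified interface: every engine exposes evaluate(subject, action)
    # returning a score in [0, 1]; the combination rule is pluggable.
    def __init__(self, engines, threshold=0.6):
        self.engines, self.threshold = engines, threshold
    def permit(self, subject, action):
        score = min(e.evaluate(subject, action) for e in self.engines)
        return score >= self.threshold            # conservative: all agree

tdp = TrustDecisionPoint([
    ReputationTM({"svcA": 0.9}),
    CredentialTM({("svcA", "read")}),
    KPITM({"svcA": 0.95}),
])
print(tdp.permit("svcA", "read"))                 # True

Taking the minimum is a deliberately conservative combination; a real
deployment might weight engines or require only a quorum, which is the kind
of intricate combination the framework is said to support.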
Accelerating Asymptotically Exact MCMC for Computationally Intensive Models via Local Approximations
We construct a new framework for accelerating Markov chain Monte Carlo in
posterior sampling problems where standard methods are limited by the
computational cost of the likelihood, or of numerical models embedded therein.
Our approach introduces local approximations of these models into the
Metropolis-Hastings kernel, borrowing ideas from deterministic approximation
theory, optimization, and experimental design. Previous efforts at integrating
approximate models into inference typically sacrifice either the sampler's
exactness or efficiency; our work seeks to address these limitations by
exploiting useful convergence characteristics of local approximations. We prove
the ergodicity of our approximate Markov chain, showing that it samples
asymptotically from the \emph{exact} posterior distribution of interest. We
describe variations of the algorithm that employ either local polynomial
approximations or local Gaussian process regressors. Our theoretical results
reinforce the key observation underlying this paper: when the likelihood has
some \emph{local} regularity, the number of model evaluations per MCMC step can
be greatly reduced without biasing the Monte Carlo average. Numerical
experiments demonstrate multiple order-of-magnitude reductions in the number of
forward model evaluations used in representative ODE and PDE inference
problems, with both synthetic and real data.
Comment: A major update of the theory and examples.
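
Below is a much-simplified Python sketch of the idea, not the authors'
algorithm, whose refinement criteria are what preserve asymptotic exactness:
Metropolis-Hastings runs on a local quadratic surrogate fitted to the nearest
stored exact evaluations, and the design set is occasionally enriched with a
true model evaluation. The target density and all tuning constants are
invented for illustration.

import numpy as np

rng = np.random.default_rng(1)

def expensive_logpost(x):            # stand-in for a costly ODE/PDE model
    return -0.5 * (x - 1.0) ** 2     # true target: N(1, 1)

design_x = np.linspace(-4.0, 6.0, 8)             # initial exact evaluations
design_f = np.array([expensive_logpost(x) for x in design_x])
true_evals = len(design_x)

def surrogate(x, k=5):
    # Local approximation: quadratic fit through the k nearest exact points.
    idx = np.argsort(np.abs(design_x - x))[:k]
    return np.polyval(np.polyfit(design_x[idx], design_f[idx], 2), x)

x, chain = 0.0, []
for step in range(5000):
    prop = x + rng.normal(scale=1.0)
    if np.log(rng.uniform()) < surrogate(prop) - surrogate(x):  # MH accept
        x = prop
    if step % 500 == 0:              # occasional refinement with the true
        design_x = np.append(design_x, x)          # model keeps the local
        design_f = np.append(design_f, expensive_logpost(x))   # fit sharp
        true_evals += 1
    chain.append(x)

print(f"posterior mean ~ {np.mean(chain[1000:]):.2f} "
      f"using {true_evals} true model evaluations")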
Implicit particle methods and their connection with variational data assimilation
The implicit particle filter is a sequential Monte Carlo method for data
assimilation that guides the particles to the high-probability regions via a
sequence of steps that includes minimizations. We present a new and more
general derivation of this approach and extend the method to particle smoothing
as well as to data assimilation for perfect models. We show that the
minimizations required by implicit particle methods are similar to the ones one
encounters in variational data assimilation and explore the connection of
implicit particle methods with variational data assimilation. In particular, we
argue that existing variational codes can be converted into implicit particle
methods at a low cost, often yielding better estimates that are also equipped
with quantitative measures of uncertainty. A detailed example is presented
- …
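
To make the connection concrete, the following Python sketch works through a
one-dimensional linear-Gaussian toy, which is invented here and far simpler
than the paper's general setting: each particle is placed by minimizing a
per-particle cost F_j that is exactly a one-step variational objective, is
then perturbed according to the curvature of F_j at its minimum, and receives
an importance weight proportional to exp(-phi_j), with phi_j the attained
minimum.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
q, r = 0.5, 0.2        # model-noise and observation-noise variances
y = 1.3                # current observation (identity observation operator)

def F(x, x_prev):
    # Variational cost: background (model) term plus observation term,
    # the same objective a variational assimilation code would minimize.
    return 0.5 * (x - x_prev) ** 2 / q + 0.5 * (x - y) ** 2 / r

prev = rng.normal(0.0, 1.0, size=200)              # prior ensemble
particles, phi = np.empty_like(prev), np.empty_like(prev)
sigma2 = 1.0 / (1.0 / q + 1.0 / r)                 # 1 / F''(x) at the minimum
for j, xp in enumerate(prev):
    res = minimize_scalar(F, args=(xp,))           # per-particle minimization
    phi[j] = res.fun
    particles[j] = res.x + rng.normal(0.0, np.sqrt(sigma2))

w = np.exp(-(phi - phi.min()))                     # implicit-filter weights
w /= w.sum()
print(f"weighted analysis mean ~ {np.sum(w * particles):.2f} "
      f"(observation {y}, prior mean 0.0)")

Because the model here is linear and Gaussian, the curvature is constant and
the Gaussian perturbation around each minimizer samples the per-particle
conditional exactly; in the nonlinear settings the paper targets, the
minimization is precisely the step shared with variational data assimilation.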