
    I Don't Want to Think About it Now: Decision Theory With Costly Computation

    Computation plays a major role in decision making. Even if an agent is willing to ascribe a probability to all states and a utility to all outcomes, and to maximize expected utility, doing so might present serious computational problems. Moreover, computing the outcome of a given act might be difficult. In a companion paper we develop a framework for game theory with costly computation, where the objects of choice are Turing machines. Here we apply that framework to decision theory. We show how well-known phenomena like first-impression-matters biases (i.e., people tend to put more weight on evidence they hear early on), belief polarization (two people with different prior beliefs, hearing the same evidence, can end up with diametrically opposed conclusions), and the status quo bias (people are much more likely to stick with what they already have) can be easily captured in that framework. Finally, we use the framework to define some new notions: value of computational information (a computational variant of value of information) and computational value of conversation. Comment: In Conference on Knowledge Representation and Reasoning (KR '10)
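    The cost-benefit idea behind costly computation is easy to sketch. Below is a minimal, hypothetical illustration in Python, not the paper's Turing-machine formalism: each candidate "machine" carries an expected utility and a running time, and the agent maximizes utility net of a per-step computation charge. The names, numbers, and the linear cost model are all assumptions for illustration.

    ```python
    # A minimal sketch of decision-making with costly computation (assumed
    # setup): each "machine" is a strategy with an intrinsic expected utility
    # and a per-step computation cost deducted from its payoff.

    def net_utility(expected_utility, steps, cost_per_step):
        """Utility of running a machine, net of its computation cost."""
        return expected_utility - steps * cost_per_step

    def best_machine(machines, cost_per_step):
        """Pick the machine maximizing utility minus computation cost."""
        return max(machines,
                   key=lambda m: net_utility(m["eu"], m["steps"], cost_per_step))

    # Two hypothetical choices: careful deliberation vs. a cheap heuristic.
    machines = [
        {"name": "deliberate", "eu": 10.0, "steps": 100},
        {"name": "heuristic",  "eu": 7.0,  "steps": 5},
    ]

    # When computation is cheap, deliberating wins; when it is expensive,
    # the quick heuristic dominates -- a status-quo-like effect.
    print(best_machine(machines, cost_per_step=0.01)["name"])  # deliberate
    print(best_machine(machines, cost_per_step=0.50)["name"])  # heuristic
    ```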

    Bounding Rationality by Discounting Time

    Consider a game where Alice generates an integer and Bob wins if he can factor that integer. Traditional game theory tells us that Bob will always win this game, even though in practice Alice will win given our usual assumptions about the hardness of factoring. We define a new notion of bounded rationality, where the payoffs of players are discounted by the computation time they take to produce their actions. We use this notion to give a direct correspondence between the existence of equilibria where Alice has a winning strategy and the hardness of factoring. Namely, under a natural assumption on the discount rates, there is an equilibrium where Alice has a winning strategy iff there is a linear-time samplable distribution with respect to which Factoring is hard on average. We also give general results for discounted games over countable action spaces, including showing that any game with bounded and computable payoffs has an equilibrium in our model, even if each player is allowed a countable number of actions. It follows, for example, that the Largest Integer game has an equilibrium in our model, though it has no Nash equilibria or epsilon-Nash equilibria. Comment: To appear in Proceedings of The First Symposium on Innovations in Computer Science
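    To make the discounting concrete, here is a minimal sketch under assumed parameters: a payoff earned after t computation steps is worth raw_payoff * delta**t. The geometric discount and the specific step counts are illustrative assumptions, not the paper's exact model.

    ```python
    # A minimal sketch of time-discounted payoffs (assumed geometric model):
    # a player's raw payoff is multiplied by delta**t, where t is the
    # computation time spent producing the action.

    def discounted_payoff(raw_payoff, steps, delta=0.99):
        """Payoff discounted geometrically by computation time."""
        return raw_payoff * (delta ** steps)

    # Bob can factor Alice's integer, but only via a slow search. If the
    # winning move takes ~2**20 steps, the discount wipes out his unit
    # payoff, so a fast losing move can dominate a slow winning one.
    slow_win  = discounted_payoff(1.0, steps=2**20)   # effectively zero
    fast_loss = discounted_payoff(0.1, steps=10)      # roughly 0.09
    print(slow_win < fast_loss)  # True
    ```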

    Computationally Data-Independent Memory Hard Functions

    Memory hard functions (MHFs) are an important cryptographic primitive used to design egalitarian proofs of work and to construct moderately expensive key-derivation functions resistant to brute-force attacks. Broadly speaking, MHFs can be divided into two categories: data-dependent memory hard functions (dMHFs) and data-independent memory hard functions (iMHFs). iMHFs are resistant to certain side-channel attacks, as the memory access pattern induced by the honest evaluation algorithm is independent of the potentially sensitive input, e.g., the password. While dMHFs are potentially vulnerable to side-channel attacks (the induced memory access pattern might leak useful information to a brute-force attacker), they can achieve higher cumulative memory complexity (CMC) than an iMHF. In particular, any iMHF that can be evaluated in N steps on a sequential machine has CMC at most O((N^2 log log N)/log N). By contrast, the dMHF scrypt achieves maximal CMC Ω(N^2), though the CMC of scrypt would be reduced to just O(N) after a side-channel attack. In this paper, we introduce the notion of computationally data-independent memory hard functions (ciMHFs). Intuitively, we require that the memory access pattern induced by the (randomized) ciMHF evaluation algorithm appears to be independent from the standpoint of a computationally bounded eavesdropping attacker, even if the attacker selects the initial input. We then ask whether it is possible to circumvent the known upper bound for iMHFs and build a ciMHF with CMC Ω(N^2). Surprisingly, we answer the question in the affirmative when the ciMHF evaluation algorithm is executed on a two-tiered memory architecture (RAM/cache). We introduce the notion of a k-restricted dynamic graph to quantify the continuum between unrestricted dMHFs (k = N) and iMHFs (k = 1). For any ε > 0 we show how to construct a k-restricted dynamic graph with k = Ω(N^(1-ε)) that provably achieves maximum cumulative pebbling cost Ω(N^2). We can use k-restricted dynamic graphs to build a ciMHF, provided that the cache is large enough to hold k hash outputs and the dynamic graph satisfies a certain property that we call "amenable to shuffling". In particular, we prove that the induced memory access pattern is indistinguishable to a polynomial-time attacker who can monitor the locations of read/write requests to RAM, but not cache. We also show that when k = o(N^(1/log log N)), any k-restricted graph with constant indegree has cumulative pebbling cost o(N^2). Our results almost completely characterize the spectrum of k-restricted dynamic graphs.
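    CMC can be illustrated through the pebbling view the abstract invokes: the cumulative pebbling cost of an evaluation strategy is the total number of memory blocks (pebbles) held, summed over all steps. The toy below is a sketch on an assumed line-graph structure, not a real MHF graph; it contrasts a keep-everything strategy (Θ(N^2) cost) with a keep-only-the-latest strategy (Θ(N) cost).

    ```python
    # A minimal sketch of cumulative pebbling cost: at each step, sum the
    # number of pebbles (memory blocks) currently held. This toy pebbles a
    # simple line graph, not an actual MHF computation graph.

    def cumulative_pebbling_cost(pebble_sets):
        """Sum of pebbles held on the graph across all steps."""
        return sum(len(step) for step in pebble_sets)

    # Naive strategy on a line graph of N nodes: keep every pebble.
    # Memory grows 1, 2, ..., N, so the cost is Theta(N^2).
    N = 8
    keep_all = [set(range(i + 1)) for i in range(N)]
    print(cumulative_pebbling_cost(keep_all))   # N*(N+1)/2 = 36

    # Frugal strategy: keep only the most recent pebble, cost Theta(N).
    keep_last = [{i} for i in range(N)]
    print(cumulative_pebbling_cost(keep_last))  # 8
    ```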

    Quantifying Resource Use in Computations

    It is currently not possible to quantify the resources needed to perform a computation. As a consequence, it is not possible to reliably evaluate the hardware resources needed for the application of algorithms or the running of programs. This is apparent in both computer science, for instance in cryptanalysis, and in neuroscience, for instance in comparative neuro-anatomy. A System versus Environment game formalism is proposed, based on Computability Logic, that allows one to define a computational work function describing the theoretical and physical resources needed to perform any purely algorithmic computation. Within this formalism, the cost of a computation is defined as the sum of information storage over the steps of the computation. The size of the computational device, e.g., the action table of a Universal Turing Machine, the number of transistors in silicon, or the number and complexity of synapses in a neural net, is explicitly included in the computational cost. The proposed cost function leads in a natural way to known computational trade-offs and can be used to estimate the computational capacity of real silicon hardware and neural nets. The theory is applied to a historical case of 56-bit DES key recovery as an example of application to cryptanalysis. Furthermore, the relative computational capacities of human brain neurons and the C. elegans nervous system are estimated as an example of application to neural nets. Comment: 26 pages, no figures
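    Under one natural reading of the proposed cost function, a sketch is straightforward: sum the storage in use at each step, counting the device's own size (action table, transistors, synapses) at every step. The function and figures below are illustrative assumptions, not the paper's exact formalism.

    ```python
    # A minimal sketch of a step-summed storage cost (assumed reading of the
    # abstract): cost = sum over steps of (device size + working storage),
    # so the device itself is charged on every step.

    def computational_work(storage_per_step, device_size):
        """Cumulative storage (device + working memory) over all steps."""
        return sum(device_size + used for used in storage_per_step)

    # Hypothetical run: a machine with a 100-symbol action table touching
    # 1, 2, 3, 4, 5 cells over five steps. Short runs are dominated by the
    # device term, matching the trade-offs the abstract alludes to.
    print(computational_work([1, 2, 3, 4, 5], device_size=100))  # 515
    ```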

    An Investigation Into the Use of Swarm Intelligence for an Evolutionary Algorithm Optimisation; The Optimisation Performance of Differential Evolution Algorithm Coupled with Stochastic Diffusion Search

    The integration of Swarm Intelligence (SI) algorithms and Evolutionary Algorithms (EAs) might be one of the future approaches in Evolutionary Computation (EC). This work reports early research on using Stochastic Diffusion Search (SDS) -- a swarm intelligence algorithm -- to empower Differential Evolution (DE) -- an evolutionary algorithm -- over a set of optimisation problems. The results reported herein suggest that the powerful resource allocation mechanism deployed in SDS has the potential to improve the optimisation capability of the classical evolutionary algorithm used in this experiment. Different performance measures and statistical analyses were utilised to monitor the behaviour of the final coupled algorithm.
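    For reference, the DE half of the coupling is standard. Below is a minimal sketch of one generation of classic DE/rand/1/bin; the SDS resource-allocation layer described in the paper is not reproduced here, and the parameter values (F, CR, population size) are conventional defaults, not the paper's settings.

    ```python
    # One generation of classic Differential Evolution (DE/rand/1/bin),
    # sketched for minimisation over real-valued vectors.
    import random

    def de_step(pop, fitness, F=0.5, CR=0.9):
        """Apply mutation, binomial crossover, and greedy selection once."""
        new_pop = []
        for i, x in enumerate(pop):
            # Mutation: donor built from three distinct non-target members.
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            donor = [a[d] + F * (b[d] - c[d]) for d in range(len(x))]
            # Binomial crossover; j_rand guarantees at least one donor gene.
            j_rand = random.randrange(len(x))
            trial = [donor[d] if (random.random() < CR or d == j_rand) else x[d]
                     for d in range(len(x))]
            # Greedy selection: keep the trial only if it is no worse.
            new_pop.append(trial if fitness(trial) <= fitness(x) else x)
        return new_pop

    # Usage: minimise the sphere function from a random population.
    sphere = lambda v: sum(t * t for t in v)
    pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
    for _ in range(50):
        pop = de_step(pop, sphere)
    print(min(sphere(v) for v in pop))  # should approach 0
    ```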