376,384 research outputs found

    On the complexity of trial and error for constraint satisfaction problems

    In 2013, Bei, Chen and Zhang introduced a trial and error model of computing and applied it to some constraint satisfaction problems. In this model the input is hidden by an oracle which, for a candidate assignment, reveals some information about a violated constraint if the assignment is not satisfying. In this paper we initiate a systematic study of constraint satisfaction problems in the trial and error model, by adopting a formal framework for CSPs and defining several types of revealing oracles. Our main contribution is to develop a transfer theorem for each type of revealing oracle. To any hidden CSP with a specific type of revealing oracle, the transfer theorem associates another CSP in the normal setting, such that their complexities are polynomial-time equivalent. This in principle transfers the study of a large class of hidden CSPs to the study of normal CSPs. We apply the transfer theorems to get polynomial-time algorithms or hardness results for several families of concrete problems.

    A Biologically Plausible Learning Rule for Deep Learning in the Brain

    Researchers have proposed that deep learning, which is providing important progress in a wide range of high-complexity tasks, might inspire new insights into learning in the brain. However, the methods used for deep learning by artificial neural networks are biologically unrealistic and would need to be replaced by biologically realistic counterparts. Previous biologically plausible reinforcement learning rules, like AGREL and AuGMEnT, showed promising results but focused on shallow networks with three layers. Will these learning rules also generalize to networks with more layers, and can they handle tasks of higher complexity? We demonstrate the learning scheme on classical and hard image-classification benchmarks, namely MNIST, CIFAR10 and CIFAR100, cast as direct reward tasks, for fully connected, convolutional, and locally connected architectures. We show that our learning rule - Q-AGREL - performs comparably to supervised learning via error-backpropagation, with this type of trial-and-error reinforcement learning requiring only 1.5-2.5 times more epochs, even when classifying 100 different classes as in CIFAR100. Our results provide new insights into how deep learning may be implemented in the brain.
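The reward-gated update at the heart of such rules can be sketched in a few lines. The following toy is an illustrative sketch only (a single linear layer rather than the paper's deep network, and made-up function names): it trains a classifier from a scalar reward alone, updating only the output unit whose class was actually tried on that trial.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, n_classes, epochs=50, lr=0.5):
    """Trial-and-error training: try a class at random, receive a scalar
    reward, and nudge only the tried unit by the reward prediction error."""
    W = np.zeros((n_classes, X.shape[1]))
    for _ in range(epochs):
        for x, label in zip(X, y):
            a = int(rng.integers(n_classes))  # explore: try a class at random
            q = W[a] @ x                      # predicted value of that choice
            r = 1.0 if a == label else 0.0    # scalar reward is all feedback
            W[a] += lr * (r - q) * x          # reward-prediction-error update
    return W

X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([0, 1])
W = train(X, y, n_classes=2)
```

Because only the unit that was tried is updated, learning proceeds by trial and error rather than by backpropagating a full error vector, which is the property that makes such rules candidates for biological plausibility.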

    12 Confused Men: Using Flowchart Verdict Sheets To Mitigate Inconsistent Civil Verdicts

    The finality of jury verdicts reflects an implicit societal acceptance of the soundness of the jury's decision. Regardless, jurors are not infallible, and the questions they are often tasked with deciding are unfortunately neither obvious nor clear. The length of trial, complexity of subject matter, volume of factual background, and opaqueness of law can converge in a perfect storm that may confound even the most capable juror. Although the Federal Rules of Civil Procedure provide decision rules to resolve inconsistent verdicts, the current remedies authorized by Rule 49—notably, the resubmission of the verdict to the jury and the ordering of a new trial—impose time and money costs on the jury, litigants, and judicial system. The increasing complexity of civil litigation raises the stakes by increasing the likelihood of juror error and the costs of relitigating the case. This Note proposes the creation of flowchart verdict sheets as a prophylactic against juror confusion. The flowchart verdict sheet builds upon current legal reform proposals to increase juror understanding while decreasing juror confusion and incorporates principles of effective visual design. By mitigating the confusion that can result in inconsistencies before the verdict is rendered, the flowchart verdict sheet enables the judicial system to avoid the costs associated with remedying inconsistent verdicts.

    On the complexity of trial and error for constraint satisfaction problems

    In a recent work of Bei, Chen and Zhang (STOC 2013), a trial and error model of computing was introduced and applied to some constraint satisfaction problems. In this model the input is hidden by an oracle which, for a candidate assignment, reveals some information about a violated constraint if the assignment is not satisfying. In this paper we initiate a systematic study of constraint satisfaction problems in the trial and error model. To achieve this, we first adopt a formal framework for CSPs, and based on this framework we define several types of revealing oracles. Our main contribution is to develop a transfer theorem for each type of revealing oracle, under a broad class of parameters. To any hidden CSP with a specific type of revealing oracle, the transfer theorem associates another, potentially harder CSP in the normal setting, such that their complexities are polynomial-time equivalent. This in principle transfers the study of a large class of hidden CSPs, possibly with a promise on the instances, to the study of CSPs in the normal setting. We then apply the transfer theorems to get polynomial-time algorithms or hardness results for hidden CSPs, including satisfaction problems, monotone graph properties, isomorphism problems, and the exact version of the Unique Games problem.
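The hidden-input setting is easy to illustrate. In the sketch below (illustrative only; the function names are made up, and the oracle type shown — revealing the index of some violated clause — is just one of several studied), a SAT instance is hidden behind an oracle, and the solver interacts with the instance only through that oracle:

```python
import itertools

def make_oracle(clauses):
    """Hide a CNF formula behind an oracle: given an assignment, return None
    if every clause is satisfied, else the index of some violated clause."""
    def oracle(assignment):
        for i, clause in enumerate(clauses):
            # clause: iterable of (variable, wanted_value) literals
            if not any(assignment[v] == val for v, val in clause):
                return i
        return None
    return oracle

def solve_hidden_sat(oracle, n_vars):
    """Brute-force trial and error: propose assignments until the oracle
    accepts one. Only the oracle's answers are used, never the formula."""
    for bits in itertools.product([False, True], repeat=n_vars):
        assignment = dict(enumerate(bits))
        if oracle(assignment) is None:
            return assignment
    return None

# Hidden instance: (x0 or x1) and (not x0 or x2)
hidden = [[(0, True), (1, True)], [(0, False), (2, True)]]
sol = solve_hidden_sat(make_oracle(hidden), 3)
```

The point of the transfer theorems is precisely to say when such oracle access is no harder (up to polynomial time) than seeing the instance directly; the brute-force loop above only illustrates the interaction model.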

    On Multiple Decoding Attempts for Reed-Solomon Codes: A Rate-Distortion Approach

    One popular approach to soft-decision decoding of Reed-Solomon (RS) codes is based on using multiple trials of a simple RS decoding algorithm in combination with erasing or flipping a set of symbols or bits in each trial. This paper presents a framework based on rate-distortion (RD) theory to analyze these multiple-decoding algorithms. By defining an appropriate distortion measure between an error pattern and an erasure pattern, the successful decoding condition, for a single errors-and-erasures decoding trial, becomes equivalent to distortion being less than a fixed threshold. Finding the best set of erasure patterns also turns into a covering problem which can be solved asymptotically by rate-distortion theory. Thus, the proposed approach can be used to understand the asymptotic performance-versus-complexity trade-off of multiple errors-and-erasures decoding of RS codes. This initial result is also extended in a few directions. The rate-distortion exponent (RDE) is computed to give more precise results for moderate blocklengths. Multiple trials of algebraic soft-decision (ASD) decoding are analyzed using this framework. Analytical and numerical computations of the RD and RDE functions are also presented. Finally, simulation results show that sets of erasure patterns designed using the proposed methods outperform other algorithms with the same number of decoding trials. (To appear in the IEEE Transactions on Information Theory, Special Issue on Facets of Coding Theory: from Algorithms to Networks.)
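The distortion measure in question can be made concrete. Assuming a letter-by-letter cost (a sketch in the spirit of the paper, not its exact definition): an erased symbol consumes 1 unit of the minimum-distance budget, an unerased error consumes 2, and the classical errors-and-erasures condition 2e + s < d_min then reads "distortion below a fixed threshold":

```python
def distortion(error_pattern, erasure_pattern):
    """Letter-by-letter distortion between a binary error pattern and a
    binary erasure pattern: an erasure costs 1, an unerased error costs 2,
    an unerased correct symbol costs 0."""
    d = 0
    for err, ers in zip(error_pattern, erasure_pattern):
        if ers:
            d += 1   # erasure: one unit of the d_min budget
        elif err:
            d += 2   # unerased error: two units
    return d

def decodes(error_pattern, erasure_pattern, d_min):
    """Single errors-and-erasures trial succeeds iff 2e + s < d_min,
    i.e. iff the distortion is below the threshold d_min."""
    return distortion(error_pattern, erasure_pattern) < d_min

# Three errors, two of them erased: 2*1 + 2 = 4 < 5, so a d_min = 5 code decodes.
ok = decodes([1, 1, 1, 0, 0], [1, 1, 0, 0, 0], d_min=5)
```

Casting success as "distortion below threshold" is what lets the choice of an erasure-pattern set be treated as a covering problem in the rate-distortion sense.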

    Reset-free Trial-and-Error Learning for Robot Damage Recovery

    The high probability of hardware failures prevents many advanced robots (e.g., legged robots) from being confidently deployed in real-world situations (e.g., post-disaster rescue). Instead of attempting to diagnose the failures, robots could adapt by trial-and-error in order to be able to complete their tasks. In this situation, damage recovery can be seen as a Reinforcement Learning (RL) problem. However, the best RL algorithms for robotics require the robot and the environment to be reset to an initial state after each episode, that is, the robot is not learning autonomously. In addition, most of the RL methods for robotics do not scale well to complex robots (e.g., walking robots) and either cannot be used at all or take too long to converge to a solution (e.g., hours of learning). In this paper, we introduce a novel learning algorithm called "Reset-free Trial-and-Error" (RTE) that (1) breaks the complexity by pre-generating hundreds of possible behaviors with a dynamics simulator of the intact robot, and (2) allows complex robots to quickly recover from damage while completing their tasks and taking the environment into account. We evaluate our algorithm on a simulated wheeled robot, a simulated six-legged robot, and a real six-legged walking robot that are damaged in several ways (e.g., a missing leg, a shortened leg, a faulty motor) and whose objective is to reach a sequence of targets in an arena. Our experiments show that the robots can recover most of their locomotion abilities in an environment with obstacles, and without any human intervention. (18 pages, 16 figures, 3 tables, 6 pseudocodes/algorithms; video at https://youtu.be/IqtyHFrb3BU, code at https://github.com/resibots/chatzilygeroudis_2018_rt)
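The core loop of this approach — select from a pre-generated repertoire, execute, correct the outcome predictions from what was actually observed — can be sketched as follows. This is a deliberately simplified stand-in with made-up numbers: RTE builds its repertoire with quality-diversity search and corrects predictions with a Gaussian process, whereas here a running mean-error estimate plays that role.

```python
import numpy as np

# Repertoire of behaviors with displacements predicted on the intact robot.
repertoire = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
damage = np.array([-0.4, 0.1])       # hidden: how damage shifts every behavior

def execute(idx):
    """One trial on the (simulated) damaged robot."""
    return repertoire[idx] + damage

def reach(target, trials=30):
    correction = np.zeros(2)         # running estimate of the damage offset
    pos = np.zeros(2)
    n = 0
    for _ in range(trials):
        # Pick the behavior whose corrected prediction gets closest to target.
        predicted = repertoire + correction
        idx = int(np.argmin(np.linalg.norm(pos + predicted - target, axis=1)))
        step = execute(idx)          # trial...
        n += 1                       # ...and error: refine the correction
        correction += (step - repertoire[idx] - correction) / n
        pos = pos + step
        if np.linalg.norm(pos - target) < 0.2:
            break
    return pos

final = reach(np.array([2.0, 1.0]))
```

Note there is no reset anywhere in the loop: every trial both makes progress toward the target and improves the model of the damaged robot, which is the reset-free property the title refers to.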

    Coupled analysis of material flow and die deflection in direct aluminum extrusion

    The design of extrusion dies depends on the experience of the designer. After the die has been manufactured, it is tested during an extrusion trial and machined several times until it works properly. The die is thus designed by a trial and error method, which is an expensive process in terms of time and the amount of scrap. In order to decrease both, research is going on to replace the trial pressing with finite element simulations. The goal of these simulations is to predict the material flow through the die. In these simulations, it is required to calculate the material flow and the tool deformation simultaneously. Solving the system of equations concerning the material flow and the tool deformation becomes more difficult with increasing complexity of the die; for example, the total number of degrees of freedom can reach a value of 500,000 for a flat die. Therefore, actions must be taken to solve the material flow and the tool deformation simultaneously and faster. This paper describes the calculation of the deformation of a flat die used in the production of a U-shape profile with a coupled method. In this calculation, an Arbitrary Lagrangian Eulerian and an Updated Lagrangian formulation are applied for the aluminum and the tool finite element models, respectively. In addition, to decrease the total number of degrees of freedom, the stiffness matrix of the tool is condensed to the contact nodes between the aluminum and the tool finite element models. Finally, the numerical results are compared with experimental results in terms of extrusion force and the angular deflection of the tongue.
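The condensation step mentioned at the end is standard static (Guyan) condensation: the internal degrees of freedom are eliminated so that only the contact DOFs remain in the reduced stiffness matrix. A minimal sketch with made-up matrix values:

```python
import numpy as np

def condense(K, contact, internal):
    """Static (Guyan) condensation of stiffness matrix K onto the contact
    DOFs: K* = Kcc - Kci Kii^-1 Kic, with internal DOFs eliminated."""
    Kcc = K[np.ix_(contact, contact)]
    Kci = K[np.ix_(contact, internal)]
    Kic = K[np.ix_(internal, contact)]
    Kii = K[np.ix_(internal, internal)]
    # Solve Kii X = Kic instead of forming the inverse explicitly.
    return Kcc - Kci @ np.linalg.solve(Kii, Kic)

# Toy 3-DOF stiffness matrix; DOFs 0 and 1 are in contact, DOF 2 is internal.
K = np.array([[ 4.0, -1.0, -1.0],
              [-1.0,  3.0, -1.0],
              [-1.0, -1.0,  2.0]])
Kstar = condense(K, contact=[0, 1], internal=[2])
```

The condensed matrix reproduces the force-displacement behavior at the contact nodes exactly (for zero load on the internal DOFs), which is why the coupled aluminum-tool system can be solved on the much smaller contact set.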