7,594 research outputs found

    Search versus Knowledge: An Empirical Study of Minimax on KRK

    This article presents the results of an empirical experiment designed to gain insight into the effect of the minimax algorithm on the evaluation function. The experiment’s simulations were performed on the KRK chess endgame. Our results show that dependencies between evaluations of sibling nodes in a game tree and the abundance of opportunities to commit blunders present in the KRK endgame are not sufficient to explain the success of the minimax principle in practical game-playing, as was previously believed. The article shows that minimax in combination with a noisy evaluation function introduces a bias into the backed-up evaluations and argues that this bias is what masked the effectiveness of minimax in previous studies.
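
The bias the abstract describes is easy to reproduce in miniature. The sketch below is not the paper's KRK experiment; it is an assumed toy setup: a two-ply minimax tree whose leaf evaluations are perturbed by zero-mean Gaussian noise, with the difference between the noisy and noise-free backed-up root values measured over many trials.

```python
# Minimal sketch (toy setup, not the paper's KRK experiment): measure how
# minimax backup of noisy leaf evaluations shifts the root value relative to
# the noise-free backup.
import random
import statistics

def backed_up_root(leaf_evals, branching):
    # Two-ply tree: the root is a max node, its children are min nodes.
    mins = [min(leaf_evals[i:i + branching])
            for i in range(0, len(leaf_evals), branching)]
    return max(mins)

random.seed(0)
branching, trials, noise_sd = 5, 10_000, 1.0   # illustrative choices
bias_samples = []
for _ in range(trials):
    true_leaves = [random.gauss(0.0, 1.0) for _ in range(branching * branching)]
    noisy_leaves = [v + random.gauss(0.0, noise_sd) for v in true_leaves]
    bias_samples.append(backed_up_root(noisy_leaves, branching)
                        - backed_up_root(true_leaves, branching))

print("mean backed-up bias:", statistics.mean(bias_samples))
```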

    Frequency-splitting Dynamic MRI Reconstruction using Multi-scale 3D Convolutional Sparse Coding and Automatic Parameter Selection

    In this thesis, we propose a novel image reconstruction algorithm using multi-scale 3D convolutional sparse coding and a spectral decomposition technique for highly undersampled dynamic Magnetic Resonance Imaging (MRI) data. The proposed method recovers high-frequency information using a shared 3D convolution-based dictionary built progressively during the reconstruction process in an unsupervised manner, while low-frequency information is recovered using a total variation-based energy minimization method that leverages temporal coherence in dynamic MRI. Additionally, the proposed 3D dictionary is built across three different scales to adapt more efficiently to various feature sizes, and elastic net regularization is employed to promote a better approximation to the sparse input data. Furthermore, the computational complexity of each component in our iterative method is analyzed. We also propose an automatic parameter selection technique based on a genetic algorithm to find optimal parameters for our numerical solver, which is a variant of the alternating direction method of multipliers (ADMM). We demonstrate the performance of our method by comparing it with state-of-the-art methods on 15 single-coil cardiac, 7 single-coil DCE, and one multi-coil brain MRI dataset at different sampling rates (12.5%, 25% and 50%). The results show that our method significantly outperforms the other state-of-the-art methods in reconstruction quality with a comparable running time and is resilient to noise.
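
As a rough illustration of the frequency-splitting idea only (none of this is the thesis code; the Gaussian k-space mask and its width are assumptions), the sketch below separates one frame into low- and high-frequency parts. In the full method the low band would go to the TV-based solver and the high band to the learned multi-scale 3D dictionary.

```python
# Minimal sketch: split an image into low- and high-frequency components
# with an assumed Gaussian low-pass mask in the Fourier domain.
import numpy as np

def split_frequencies(image, sigma=0.1):
    """Return (low_freq, high_freq) such that low_freq + high_freq == image."""
    ny, nx = image.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    lowpass = np.exp(-(fx**2 + fy**2) / (2.0 * sigma**2))   # Gaussian mask
    k_space = np.fft.fft2(image)
    low = np.real(np.fft.ifft2(k_space * lowpass))
    return low, image - low

phantom = np.random.rand(64, 64)          # stand-in for one dynamic MRI frame
low, high = split_frequencies(phantom)
print(np.allclose(low + high, phantom))   # True: the split is lossless
```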

    Learning to Auto Weight: Entirely Data-driven and Highly Efficient Weighting Framework

    Example weighting is an effective solution to the training bias problem; however, most previous methods are limited by human knowledge and require laborious tuning of hyperparameters. In this paper, we propose a novel example weighting framework called Learning to Auto Weight (LAW). The proposed framework finds step-dependent weighting policies adaptively and can be jointly trained with target networks without any assumptions or prior knowledge about the dataset. It consists of three key components: a Stage-based Searching Strategy (3SM) is adopted to shrink the huge search space over a complete training process; a Duplicate Network Reward (DNR) gives more accurate supervision by removing randomness from the search process; and a Full Data Update (FDU) further improves updating efficiency. Experimental results demonstrate the superiority of the weighting policies explored by LAW over the standard training pipeline. Compared with baselines, LAW finds better weighting schedules that achieve markedly higher accuracy on both biased CIFAR and ImageNet. Comment: Accepted by AAAI 2020.
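
As a hedged illustration of where such a policy plugs in (this is not LAW itself), the sketch below applies per-example weights to the loss during one training step, assuming PyTorch; the random `weights` tensor is a stand-in for the step-dependent policy LAW would search for.

```python
# Minimal sketch: any example-weighting policy ultimately enters training as
# a per-example weight on the loss before back-propagation.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(32, 10)                    # toy classifier
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def weighted_step(x, y, weights):
    """One SGD step with per-example loss weights (a learned policy would supply `weights`)."""
    per_example = F.cross_entropy(model(x), y, reduction="none")
    loss = (weights * per_example).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

x = torch.randn(16, 32)
y = torch.randint(0, 10, (16,))
w = torch.rand(16)                                  # placeholder weighting policy
print(weighted_step(x, y, w))
```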

    A study of pattern recovery in recurrent correlation associative memories

    In this paper, we analyze the recurrent correlation associative memory (RCAM) model of Chiueh and Goodman. This is an associative memory in which stored binary memory patterns are recalled via an iterative update rule. The update of the individual pattern bits is controlled by an excitation function, which takes as its argument the inner product between the stored memory patterns and the input pattern. Our contribution is to analyze the dynamics of pattern recall when the input patterns are corrupted by noise of a relatively unrestricted class. We make three contributions. First, we show how to identify the excitation function which maximizes the separation (the Fisher discriminant) between the uncorrupted realization of the noisy input pattern and the remaining patterns residing in the memory. Moreover, we show that the excitation function which gives maximum separation is exponential when the input bit errors follow a binomial distribution. Our second contribution is to develop an expression for the expected bit-error probability on the input pattern after one iteration. We show how to identify the excitation function which minimizes this bit-error probability; however, there is no closed-form solution and the excitation function must be recovered numerically. The relationship between the excitation functions which result from the two different approaches is examined for a binomial distribution of bit errors. The final contribution is to develop a semi-empirical approach to modeling the dynamics of the RCAM. This provides us with a numerical means of predicting the recall error rate of the memory. It also allows us to develop an expression for the storage capacity at a given recall error rate.
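
A minimal sketch of the RCAM recall loop described above, assuming bipolar {-1, +1} patterns; the exponential excitation function and its gain are illustrative choices, not values from the paper.

```python
# Minimal sketch of recurrent correlation associative memory recall.
import numpy as np

def rcam_recall(probe, memories, excitation=lambda s: np.exp(10.0 * s), iters=10):
    """Iteratively update `probe` toward a stored memory.

    memories: (m, n) array of stored bipolar patterns
    probe:    (n,)   bipolar input, possibly corrupted by bit flips
    The default excitation exp(10*s) uses an assumed gain of 10.
    """
    x = probe.copy()
    n = memories.shape[1]
    for _ in range(iters):
        overlaps = memories @ x / n              # normalized inner products
        x_new = np.sign(excitation(overlaps) @ memories)
        if np.array_equal(x_new, x):             # converged to a fixed point
            break
        x = x_new
    return x

rng = np.random.default_rng(0)
memories = rng.choice([-1, 1], size=(5, 64))
noisy = memories[0] * rng.choice([1, -1], size=64, p=[0.9, 0.1])  # ~10% bit flips
print(np.array_equal(rcam_recall(noisy, memories), memories[0]))
```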

    End-to-End Differentiable Proving

    We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules. Comment: NIPS 2017 camera-ready, NIPS 2017.
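
The core differentiable-unification step can be sketched in a few lines; the embedding dimensionality, the kernel width `mu`, and the relation names below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of soft unification: symbols are embedded as vectors and
# "unify" with a similarity score from an RBF kernel, so proof scores are
# differentiable and embeddings can be trained by gradient descent.
import numpy as np

def rbf_unify(u, v, mu=1.0):
    """Soft unification score in (0, 1]; 1.0 means the symbols match exactly."""
    return np.exp(-np.sum((u - v) ** 2) / (2.0 * mu ** 2))

rng = np.random.default_rng(0)
# Random placeholder embeddings; after training, synonyms such as
# grandfatherOf / grandpaOf would end up close together and score highly.
emb = {s: rng.normal(size=8) for s in ["grandfatherOf", "grandpaOf", "bornIn"]}

print(rbf_unify(emb["grandfatherOf"], emb["grandpaOf"]))
print(rbf_unify(emb["grandfatherOf"], emb["bornIn"]))
```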

    Shape Interaction Matrix Revisited and Robustified: Efficient Subspace Clustering with Corrupted and Incomplete Data

    The Shape Interaction Matrix (SIM) is one of the earliest approaches to subspace clustering (i.e., separating points drawn from a union of subspaces). In this paper, we revisit the SIM and reveal its connections to several recent subspace clustering methods. Our analysis lets us derive a simple yet effective algorithm to robustify the SIM and make it applicable to realistic scenarios where the data is corrupted by noise. We justify our method by intuitive examples and by matrix perturbation theory. We then show how this approach can be extended to handle missing data, thus yielding an efficient and general subspace clustering algorithm. We demonstrate the benefits of our approach over state-of-the-art subspace clustering methods on several challenging motion segmentation and face clustering problems, where the data includes corrupted and missing measurements. Comment: This is an extended version of our ICCV 2015 paper.
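
For context, the classical (non-robustified) SIM construction that the paper starts from can be sketched as follows; the data sizes and the noise-free setting are assumptions made purely for illustration.

```python
# Minimal sketch of the classical Shape Interaction Matrix: for noise-free
# points drawn from independent subspaces, Q = V_r V_r^T is block-diagonal up
# to permutation, so |Q| can drive any spectral clustering routine.
import numpy as np

rng = np.random.default_rng(0)
# Two independent 2-D subspaces in R^5, 20 points each (assumed toy data).
bases = [np.linalg.qr(rng.normal(size=(5, 2)))[0] for _ in range(2)]
X = np.hstack([B @ rng.normal(size=(2, 20)) for B in bases])   # 5 x 40

_, s, Vt = np.linalg.svd(X, full_matrices=False)
r = int(np.sum(s > 1e-8))          # effective rank (2 + 2 = 4 here)
Q = np.abs(Vt[:r].T @ Vt[:r])      # shape interaction matrix

within = Q[:20, :20].mean()
between = Q[:20, 20:].mean()
print(f"mean |Q| within subspace: {within:.3f}, across subspaces: {between:.3f}")
```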