
    Missing Data Imputation and Corrected Statistics for Large-Scale Behavioral Databases

    This paper presents a new methodology for addressing problems caused by missing data in large-scale item-performance behavioral databases. Useful statistics corrected for missing data are described, and a new imputation method for missing data is proposed. The methodology is applied to the DLP database recently published by Keuleers et al. (2010), leading to the conclusion that this database fulfills the conditions of use of the method recently proposed by Courrieu et al. (2011) for testing item performance models. Two application programs in Matlab code are provided, one for the imputation of missing data in databases and one for the computation of corrected statistics for model testing. (To appear in Behavior Research Methods, 2011.)
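
    As a minimal sketch of what imputation for such an item x participant matrix can look like, the following Python function fills missing cells under a simple additive (item effect + participant effect) model. This is an assumed illustration only, not the paper's actual Matlab algorithm.

        import numpy as np

        def impute_additive(X, n_iter=50, tol=1e-8):
            """Fill missing cells of an item x participant performance matrix X
            (missing entries coded as np.nan) under a simple additive model:
            X[i, j] ~ mu + row_effect[i] + col_effect[j].
            Illustrative sketch; the method of Courrieu et al. may differ."""
            X = np.asarray(X, dtype=float)
            mask = np.isnan(X)                         # True where data are missing
            filled = np.where(mask, np.nanmean(X), X)  # start from the grand mean
            for _ in range(n_iter):
                mu = filled.mean()
                row = filled.mean(axis=1, keepdims=True) - mu
                col = filled.mean(axis=0, keepdims=True) - mu
                estimate = mu + row + col              # additive reconstruction
                new = np.where(mask, estimate, X)      # only overwrite missing cells
                if np.max(np.abs(new - filled)) < tol:
                    return new
                filled = new
            return filled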

    Solving Time of Least Square Systems in Sigma-Pi Unit Networks

    Solving least-squares systems is a useful operation in neurocomputational modeling of learning, pattern matching, and pattern recognition. In the last two cases the solution must be obtained on-line, so the time required to solve a system in a plausible neural architecture is critical. This paper presents a recurrent network of Sigma-Pi neurons whose solving time increases at most like the logarithm of the system size and of its condition number, which yields plausible computation times for biological systems.
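
    For illustration, the Python sketch below iterates a generic recurrent scheme whose fixed point is the least-squares solution. Unlike the Sigma-Pi network described in the paper, its convergence time is not logarithmic in the system size; it only shows the kind of dynamics involved.

        import numpy as np

        def recurrent_lsq(A, b, n_steps=10000):
            """Iterate the gradient-flow dynamics x <- x - eta * A^T (A x - b),
            whose fixed point is the least-squares solution of A x = b.
            Generic recurrent scheme for illustration; the paper's Sigma-Pi
            network achieves much faster (logarithmic) solving times."""
            A = np.asarray(A, dtype=float)
            b = np.asarray(b, dtype=float)
            # any step size below 2 / lambda_max(A^T A) guarantees convergence
            eta = 1.0 / np.linalg.norm(A, 2) ** 2
            x = np.zeros(A.shape[1])
            for _ in range(n_steps):
                x -= eta * A.T @ (A @ x - b)
            return x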

    Fast solving of Weighted Pairing Least-Squares systems

    This paper presents a generalization of weighted least squares (WLS), named "weighted pairing least-squares" (WPLS), which uses a rectangular weight matrix and is suitable for data alignment problems. Two fast solving methods, suitable for full-rank as well as rank-deficient systems, are studied. Computational experiments clearly show that the best method, in terms of speed, accuracy, and numerical stability, is based on a special {1, 2, 3}-inverse whose computation reduces to a very simple generalization of the usual "Cholesky factorization-backward substitution" method for solving linear systems.
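
    The classical "Cholesky factorization-backward substitution" route that the WPLS method generalizes can be sketched as follows for an ordinary full-rank weighted least-squares system; the {1, 2, 3}-inverse generalization itself is not reproduced here.

        import numpy as np
        from scipy.linalg import cho_factor, cho_solve

        def wls_cholesky(A, b, w):
            """Solve the ordinary weighted least-squares problem
            min_x || diag(sqrt(w)) (A x - b) ||^2
            via Cholesky factorization of the normal equations.
            Classical full-rank case only; the paper's {1, 2, 3}-inverse
            also handles rank-deficient WPLS systems."""
            W = np.diag(w)
            G = A.T @ W @ A              # Gram matrix of the weighted system
            c = A.T @ W @ b
            L, low = cho_factor(G)       # G = L L^T (full rank assumed)
            return cho_solve((L, low), c)  # forward + backward substitution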

    Fast Density Codes for Image Data

    Recently, a new method for encoding data sets in the form of "density codes" was proposed in the literature (Courrieu, 2006). This method makes it possible to compare sets of points belonging to any multidimensional space and to build shape spaces invariant to a wide variety of affine and non-affine transformations. However, this general method does not take advantage of the special properties of image data, resulting in a quite slow encoding process that makes the tool practically unusable for processing large image databases on conventional computers. This paper proposes a very simple variant of the density code method that works directly on the image function and is thousands of times faster than the original Parzen window based method, without losing its useful properties.
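
    As a rough illustration of the two routes contrasted in the abstract, the Python sketch below computes a Gaussian Parzen-window density estimate from sample points (the slow, general route) and, as the fast alternative, reads a density directly off the normalized image function. Both functions are assumptions for illustration, not the paper's code.

        import numpy as np

        def parzen_density(points, grid, h=0.05):
            """Gaussian Parzen-window density estimate at the grid locations.
            points : (n, 2) sample coordinates; grid : (m, 2) query coordinates."""
            d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
            k = np.exp(-d2 / (2 * h ** 2))       # Gaussian kernel values
            return k.mean(axis=1) / (2 * np.pi * h ** 2)

        def image_density(img):
            """Fast variant: treat a grayscale image directly as an
            (unnormalized) density by rescaling intensities to sum to one."""
            img = np.asarray(img, dtype=float)
            return img / img.sum()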

    Accounting for Item Variance in Large-scale Databases

    A commentary on: Practice effects in large-scale visual word recognition studies: a lexical decision study on 14,000 Dutch mono- and disyllabic words and nonwords.

    Anagram Effects in Visual Word Recognition

    Unpublished manuscript, 40 pages. Four experiments using a lexical decision task showed systematic effects of the anagram relationship between lexical units as well as between prime and target stimuli, even though the letter strings had no common letters in the same position. An "anagram frequency effect", similar to the well-known "neighborhood frequency effect", was observed in Experiment 1. An anagram priming effect was observed in Experiment 4. An anagram prime x lexical anagram interaction effect was observed in Experiments 2 and 3. We concluded that the mental lexicon is activated by position-free letter codes, together with other units that separately encode the order information.
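
    A toy illustration of a position-free letter code (assumed here to be a simple letter multiset) and the anagram relation it induces:

        from collections import Counter

        def letter_multiset(word):
            """Position-free letter code: the multiset of letters, order discarded.
            A toy stand-in for the coding scheme suggested by the results."""
            return Counter(word.lower())

        def are_anagrams(a, b):
            """Two strings are anagrams iff their position-free codes match
            while the strings themselves differ."""
            return a.lower() != b.lower() and letter_multiset(a) == letter_multiset(b)

        # e.g. are_anagrams("listen", "silent") -> True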

    Affinely Invariant Features in Visual Perception of Letters and Words

    Unpublished manuscript, 14 pages. This paper describes two experiments using a masked priming method with a 60 ms SOA. In the first experiment, the task was an alphabetical decision. The stimuli were isolated letters or non-alphabetical symbols preceded by a similar or different prime, and the primes were scaled down or rotated 180°. Response times to letters revealed priming effects for both prime transformations. In the second experiment, the task was a lexical decision, and the stimuli were five-letter lower-case words or pseudo-words. The priming conditions were similar to those of the first experiment. Response times to words revealed priming effects for both prime transformations; however, the priming effect was only marginally significant for rotated primes and appeared to depend on the frequency of use of the prime. A significant correlation between priming effects and the frequency of use of the different prime words was observed. We concluded that scale-invariant features are used in the perception of both letters and words, while 180°-rotation-invariant features are used in the perception of letters; no such conclusion can be drawn for words in general.

    Numerical orthographic coding: merging Open Bigrams and Spatial Coding theories

    Simple numerical versions of the Spatial Coding and Open Bigrams coding of character strings are presented, together with a natural merging of these two approaches. Comparing the predictive performance of these three orthographic coding schemes on orthographic masked priming data, we observe that the merged coding scheme always provides the best fits. Testing the ability of the orthographic codes, used as regressors, to capture relevant regularities in lexical decision data, we also observe that the merged code provides the best fits and that both the spatial coding component and the open bigrams component make specific and significant contributions. This sheds new light on the probable mechanisms involved in orthographic coding and provides new tools for modelling behavioural and electrophysiological data collected in word recognition tasks.
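
    A simplified, illustrative Python rendering of the two component codes and one way of merging them; the paper's numerical versions are more elaborate, and all function names and parameters below are assumptions.

        from math import exp

        def open_bigrams(word):
            """Set of ordered letter pairs (i < j): a simplified open-bigram code.
            Words are assumed to have at least two letters."""
            return {word[i] + word[j]
                    for i in range(len(word)) for j in range(i + 1, len(word))}

        def ob_match(prime, target):
            """Open-bigram overlap, normalized by the target code size."""
            bp, bt = open_bigrams(prime), open_bigrams(target)
            return len(bp & bt) / len(bt)

        def spatial_match(prime, target, sigma=1.0):
            """Simplified spatial-coding match: each shared letter contributes
            a Gaussian function of its positional offset (cf. Davis, 2010)."""
            score = sum(
                max((exp(-((i - j) ** 2) / (2 * sigma ** 2))
                     for j, c in enumerate(target) if c == p), default=0.0)
                for i, p in enumerate(prime))
            return score / len(target)

        def merged_match(prime, target, alpha=0.5):
            """Convex combination of the two codes: an illustrative stand-in
            for the paper's merged coding scheme."""
            return alpha * spatial_match(prime, target) + (1 - alpha) * ob_match(prime, target)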