
    Normal, Abby Normal, Prefix Normal

    A prefix normal word is a binary word with the property that no substring has more 1s than the prefix of the same length. This class of words is important in the context of binary jumbled pattern matching. In this paper we present results about the number $pnw(n)$ of prefix normal words of length $n$, showing that $pnw(n) = \Omega\left(2^{n - c\sqrt{n\ln n}}\right)$ for some $c$ and $pnw(n) = O\left(\frac{2^n (\ln n)^2}{n}\right)$. We introduce efficient algorithms for testing the prefix normal property and a "mechanical algorithm" for computing prefix normal forms. We also include games which can be played with prefix normal words. In these games Alice wishes to stay normal but Bob wants to drive her "abnormal" -- we discuss which parameter settings allow Alice to succeed.
    Comment: Accepted at FUN '1
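    The defining property above can be checked directly: compare, for every length k, the maximum number of 1s over all length-k substrings against the number of 1s in the length-k prefix. The sketch below is only a naive quadratic illustration of the definition; the paper's own testing algorithms are more efficient.

    ```python
    def is_prefix_normal(w: str) -> bool:
        """Naive check of the prefix normal property for a binary word w:
        no substring may contain more 1s than the prefix of the same length."""
        n = len(w)
        # prefix_ones[i] = number of 1s in w[:i]
        prefix_ones = [0] * (n + 1)
        for i, c in enumerate(w):
            prefix_ones[i + 1] = prefix_ones[i] + (c == "1")
        for k in range(1, n + 1):
            # most 1s found in any substring of length k
            max_ones = max(prefix_ones[i + k] - prefix_ones[i]
                           for i in range(n - k + 1))
            if max_ones > prefix_ones[k]:
                return False
        return True
    ```

    For example, "1101" passes, while "10011" fails because the substring "11" has more 1s than the prefix "10" of the same length.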

    On the Parikh-de-Bruijn grid

    We introduce the Parikh-de-Bruijn grid, a graph whose vertices are fixed-order Parikh vectors, and whose edges are given by a simple shift operation. This graph gives structural insight into the nature of sets of Parikh vectors as well as that of the Parikh set of a given string. We show its utility by proving some results on Parikh-de-Bruijn strings, the abelian analog of de-Bruijn sequences.
    Comment: 18 pages, 3 figures, 1 table
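    A Parikh vector records how many times each alphabet letter occurs in a string, discarding order (the abelian view), and the Parikh set of a string collects these vectors over all its factors of a fixed length. A minimal sketch of both notions, with function names of our own choosing (the grid's precise edge relation is as described in the abstract; this code only illustrates the vertices):

    ```python
    from collections import Counter

    def parikh_vector(s: str, alphabet: str) -> tuple:
        """Counts of each alphabet letter in s, in alphabet order."""
        c = Counter(s)
        return tuple(c[a] for a in alphabet)

    def parikh_set(s: str, k: int, alphabet: str) -> set:
        """Parikh vectors of all length-k factors (substrings) of s.
        Shifting the length-k window by one position changes the vector
        in at most two coordinates."""
        return {parikh_vector(s[i:i + k], alphabet)
                for i in range(len(s) - k + 1)}
    ```

    For instance, the length-2 factors of "aabab" over {a, b} are "aa", "ab", "ba", "ab", giving the Parikh set {(2, 0), (1, 1)}.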

    Hierarchically-Attentive RNN for Album Summarization and Storytelling

    We address the problem of end-to-end visual storytelling. Given a photo album, our model first selects the most representative (summary) photos, and then composes a natural language story for the album. For this task, we make use of the Visual Storytelling dataset and a model composed of three hierarchically-attentive Recurrent Neural Nets (RNNs) to: encode the album photos, select representative (summary) photos, and compose the story. Automatic and human evaluations show our model achieves better performance on selection, generation, and retrieval than baselines.
    Comment: To appear at EMNLP-2017 (7 pages)

    Visual Recognition for Dynamic Scenes

    Recognition memory was investigated for naturalistic dynamic scenes. Although visual recognition for static objects and scenes has been investigated previously and found to be extremely robust in terms of fidelity and retention, visual recognition for dynamic scenes has received much less attention. In four experiments, participants viewed a number of clips from novel films and were then tasked to complete a recognition test containing frames from the previously viewed films and difficult foil frames. Recognition performance was good when foils were taken from other parts of the same film (Experiment 1), but degraded greatly when foils were taken from unseen gaps within the viewed footage (Experiments 3 and 4). Removing all non-target frames had a serious effect on recognition performance (Experiment 2). Across all experiments, presenting the films as a random series of clips seemed to have no effect on recognition performance. Patterns of accuracy and response latency in Experiments 3 and 4 appear to be a result of a serial-search process. It is concluded that visual representations of dynamic scenes may be stored as units of events, and participants' old/new judgments of individual frames were better characterized by a cued-recall paradigm than by traditional recognition judgments.
    Dissertation/Thesis. Ph.D. Psychology 201

    Visual processing of words in a patient with visual form agnosia: A behavioural and fMRI study

    Patient D.F. has a profound and enduring visual form agnosia due to a carbon monoxide poisoning episode suffered in 1988. Her inability to distinguish simple geometric shapes or single alphanumeric characters can be attributed to a bilateral loss of cortical area LO, a loss that has been well established through structural and functional fMRI. Yet despite this severe perceptual deficit, D.F. is able to “guess” remarkably well the identity of whole words. This paradoxical finding, which we were able to replicate more than 20 years following her initial testing, raises the question as to whether D.F. has retained specialized brain circuitry for word recognition that is able to function to some degree without the benefit of inputs from area LO. We used fMRI to investigate this, and found regions in the left fusiform gyrus, left inferior frontal gyrus, and left middle temporal cortex that responded selectively to words. A group of healthy control subjects showed similar activations. The left fusiform activations appear to coincide with the area commonly named the visual word form area (VWFA) in studies of healthy individuals, and appear to be quite separate from the fusiform face area. We hypothesize that there is a route to this area that lies outside area LO, and which remains relatively unscathed in D.F.

    The effect of frequency on learners’ ability to recall the forms of deliberately learned L2 multiword expressions

    In incidental learning, vocabulary items with high or relatively high objective frequency in input are comparatively likely to be acquired. However, many single words and most multiword expressions (MWEs) occur infrequently in authentic input. It has therefore been argued that learners of school age or older can benefit from episodes of instructed or self-managed deliberate (or intentional) L2 vocabulary learning, especially when L2 is learned in an EFL environment and most especially when productive knowledge is the goal. A relevant question is whether the objective frequency of vocabulary items is an important factor in production-oriented deliberate L2 vocabulary learning. We report three small-scale interim meta-analyses addressing this question with regard to two-word English Adj-Noun and Noun-Noun expressions. The data derive from 8 original studies involving 406 learners and 139 different MWEs. Our results suggest that objective frequency has a weak, possibly negative effect in the deliberate learning of MWE forms.