
    Lesions impairing regular versus irregular past tense production

    We investigated selective impairments in the production of regular and irregular past tense by examining language performance and lesion sites in a sample of twelve stroke patients. A disadvantage in regular past tense production was observed in six patients when phonological complexity was greater for regular than irregular verbs, and in three patients when phonological complexity was closely matched across regularity. These deficits were not consistently related to grammatical difficulties or phonological errors but were consistently related to lesion site. All six patients with a regular past tense disadvantage had damage to the left ventral pars opercularis (in the inferior frontal cortex), an area associated with articulatory sequencing in prior functional imaging studies. In addition, those who maintained a disadvantage for regular verbs when phonological complexity was controlled had damage to the left ventral supramarginal gyrus (in the inferior parietal lobe), an area associated with phonological short-term memory. When these frontal and parietal regions were spared in patients who had damage to subcortical (n = 2) or posterior temporo-parietal regions (n = 3), past tense production was relatively unimpaired for both regular and irregular forms. The remaining (12th) patient was impaired in producing regular past tense but was significantly less accurate when producing irregular past tense. This patient had frontal, parietal, subcortical and posterior temporo-parietal damage, but was distinguished from the other patients by damage to the left anterior temporal cortex, an area associated with semantic processing. We consider the implications of our lesion-site and behavioural observations for theoretical accounts of past tense production.

    Computational Indistinguishability between Quantum States and Its Cryptographic Application

    We introduce a computational problem of distinguishing between two specific quantum states as a new cryptographic problem to design a quantum cryptographic scheme that is "secure" against any polynomial-time quantum adversary. Our problem, QSCDff, is to distinguish between two types of random coset states with a hidden permutation over the symmetric group of finite degree. This naturally generalizes the commonly-used distinction problem between two probability distributions in computational cryptography. As our major contribution, we show that QSCDff has three properties of cryptographic interest: (i) QSCDff has a trapdoor; (ii) the average-case hardness of QSCDff coincides with its worst-case hardness; and (iii) QSCDff is computationally at least as hard as the graph automorphism problem in the worst case. These cryptographic properties enable us to construct a quantum public-key cryptosystem, which is likely to withstand any chosen plaintext attack of a polynomial-time quantum adversary. We further discuss a generalization of QSCDff, called QSCDcyc, and introduce a multi-bit encryption scheme that relies on similar cryptographic properties of QSCDcyc.
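The "distinction problem between two probability distributions" that QSCDff generalizes can be illustrated as a classical guessing game. The distributions, sample count, and threshold adversary below are arbitrary stand-ins (the paper's quantum coset states cannot be toy-simulated this way); the sketch only shows how a distinguisher's advantage over blind guessing is defined and measured:

```python
import random

random.seed(1)

def sample(dist):
    """Draw one sample from one of two toy distributions over {0, 1}.
    D0 is fair; D1 is biased toward 1. These are illustrative stand-ins,
    not the quantum-state ensembles from the paper."""
    p_one = 0.5 if dist == 0 else 0.75
    return 1 if random.random() < p_one else 0

def adversary(samples):
    """Guess which distribution produced the samples by comparing the
    empirical mean against the midpoint of the two biases."""
    return 1 if sum(samples) / len(samples) > 0.625 else 0

def advantage(n_trials=2000, n_samples=25):
    correct = 0
    for _ in range(n_trials):
        b = random.randrange(2)  # challenger secretly picks a distribution
        guess = adversary([sample(b) for _ in range(n_samples)])
        correct += (guess == b)
    return correct / n_trials - 0.5  # advantage over blind guessing

adv = advantage()
```

A problem is cryptographically hard in this framing when no efficient adversary achieves a non-negligible advantage; here the distributions are deliberately easy to tell apart, so the measured advantage is large.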

    Institutional theory and legislatures

    Institutionalism has become one of the dominant strands of theory within contemporary political science. Beginning with the challenge to behavioural and rational choice theory issued by March and Olsen, institutional analysis has developed into an important alternative to more individualistic approaches to theory and analysis. This body of theory has developed in a number of ways, and perhaps the most commonly applied version in political science is historical institutionalism, which stresses the importance of path dependency in shaping institutional behaviour. The fundamental question addressed in this book is whether institutionalism is useful for the various sub-disciplines within political science to which it has been applied, and to what extent the assumptions inherent to institutional analysis can be useful for understanding the range of behaviour of individuals and structures in the public sector. The volume will also examine the relative utility of different forms of institutionalism within the various sub-disciplines. The book consists of a set of strong essays by noted international scholars from a range of sub-disciplines within the field of political science, each analyzing their area of research from an institutionalist perspective and assessing what contributions this form of theorizing has made, and can make, to that research. The result is a balanced and nuanced account of the role of institutions in contemporary political science, and a set of suggestions for the further development of institutional theory.

    Myths and Legends of the Baldwin Effect

    This position paper argues that the Baldwin effect is widely misunderstood by the evolutionary computation community. The misunderstandings appear to fall into two general categories. Firstly, it is commonly believed that the Baldwin effect is concerned with the synergy that results when there is an evolving population of learning individuals. This is only half of the story. The full story is more complicated and more interesting. The Baldwin effect is concerned with the costs and benefits of lifetime learning by individuals in an evolving population. Several researchers have focussed exclusively on the benefits, but there is much to be gained from attention to the costs. This paper explains the two sides of the story and enumerates ten of the costs and benefits of lifetime learning by individuals in an evolving population. Secondly, there is a cluster of misunderstandings about the relationship between the Baldwin effect and Lamarckian inheritance of acquired characteristics. The Baldwin effect is not Lamarckian. A Lamarckian algorithm is not better for most evolutionary computing problems than a Baldwinian algorithm. Finally, Lamarckian inheritance is not a better model of memetic (cultural) evolution than the Baldwin effect.
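The Baldwinian/Lamarckian contrast the abstract draws can be made concrete with a toy evolutionary loop. Everything here (the one-gene problem, the hill-climbing learner, the selection scheme, all parameters) is an illustrative assumption and not taken from the paper; the point is the single line that differs between the two regimes — whether the learned phenotype is written back into the inherited genome:

```python
import random

random.seed(0)

TARGET = 0.7  # optimum of a one-gene toy fitness landscape

def fitness(x):
    return -abs(x - TARGET)

def learn(x, steps=5, step_size=0.05):
    """Lifetime learning: greedy local search starting from the inherited value."""
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

def evolve(pop, generations=30, lamarckian=False):
    for _ in range(generations):
        scored = []
        for genome in pop:
            phenotype = learn(genome)   # learning shapes the phenotype
            score = fitness(phenotype)  # selection sees the learned result
            # Lamarckian: the acquired trait is written back into the genome.
            # Baldwinian: the genome passed on is the unlearned one.
            scored.append((score, phenotype if lamarckian else genome))
        scored.sort(reverse=True)
        survivors = [g for _, g in scored[: len(pop) // 2]]
        pop = [g + random.gauss(0, 0.02) for g in survivors for _ in (0, 1)]
    return pop

initial = [random.uniform(0, 1) for _ in range(20)]
baldwin = evolve(list(initial), lamarckian=False)
lamarck = evolve(list(initial), lamarckian=True)
```

In the Baldwinian run, learning changes only what selection sees, so genetic improvement happens indirectly via selection pressure; in the Lamarckian run, acquired traits are inherited directly. Both converge on this trivial landscape, which is exactly why the costs of learning the paper enumerates matter for telling the regimes apart.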

    Why DO dove: Evidence for register variation in Early Modern English negatives

    The development of “supportive” (or “periphrastic”) DO in English suffered a curious and sharp reversal late in the 16th century in negative declaratives and questions according to Ellegård's (1953) database, with a recovery late in the following century. This article examines the variation between DO and the full verb in negative declaratives in this database, from 1500 to 1710. It is shown that both register variation and age-grading are relevant, and that the periods 1500–1575 and 1600–1710 have radically distinct properties. The second period shows substantial age-grading, and is interpreted as having introduced a fresh evaluative principle governing register variation. Negative questions supply data that suggest that the development of clitic negation may have been implicated in the development of the new evaluation. This change in evaluation accounts for the apparent reversal in the development of DO, and we can abandon the view that it was a consequence of grammatical restructuring.

    Restricted Recurrent Neural Networks

    Recurrent Neural Networks (RNNs) and their variations, such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), have become standard building blocks for learning from sequential data in many research areas, including natural language processing and speech data analysis. In this paper, we present a new methodology to significantly reduce the number of parameters in RNNs while maintaining performance comparable to, or even better than, that of classical RNNs. The new proposal, referred to as the Restricted Recurrent Neural Network (RRNN), restricts the weight matrices corresponding to the input data and hidden states at each time step to share a large proportion of parameters. The new architecture can be regarded as a compression of its classical counterpart, but it does not require pre-training or sophisticated parameter fine-tuning, both of which are major issues in most existing compression techniques. Experiments on natural language modeling show that, compared with its classical counterpart, the restricted recurrent architecture generally produces comparable results at about a 50% compression rate. In particular, the Restricted LSTM can outperform the classical RNN with even fewer parameters.
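The core idea — tying the input-to-hidden and hidden-to-hidden weight matrices together over a shared block of parameters — can be sketched in a few lines of NumPy. The column-block tying scheme, sizes, and initialisation below are illustrative assumptions (the input size is set equal to the hidden size for simplicity); the paper's exact parameterisation may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

class RestrictedRNNCell:
    """Toy vanilla-RNN cell whose input-to-hidden and hidden-to-hidden
    matrices share most of their columns (illustrative sketch, not the
    paper's exact parameterisation)."""

    def __init__(self, hidden_size, shared_frac=0.5):
        h = hidden_size
        k = int(h * shared_frac)  # number of shared columns
        self.W_shared = rng.standard_normal((h, k)) * 0.1
        self.W_x_own = rng.standard_normal((h, h - k)) * 0.1
        self.W_h_own = rng.standard_normal((h, h - k)) * 0.1
        self.b = np.zeros(h)

    @property
    def W_x(self):
        # Input weight matrix: shared block plus its own private columns.
        return np.concatenate([self.W_shared, self.W_x_own], axis=1)

    @property
    def W_h(self):
        # Hidden weight matrix reuses the same shared block.
        return np.concatenate([self.W_shared, self.W_h_own], axis=1)

    def step(self, x, h):
        return np.tanh(self.W_x @ x + self.W_h @ h + self.b)

    def n_params(self):
        return (self.W_shared.size + self.W_x_own.size
                + self.W_h_own.size + self.b.size)

cell = RestrictedRNNCell(hidden_size=8, shared_frac=0.5)
h = np.zeros(8)
for x in rng.standard_normal((5, 8)):  # a length-5 toy input sequence
    h = cell.step(x, h)

# A classical cell stores two full h-by-h matrices plus a bias;
# sharing half the columns removes a quarter of those weights here.
full = 2 * 8 * 8 + 8
```

Raising `shared_frac` toward 1 increases the compression rate at the cost of making the two projections more similar, which is the trade-off the paper's experiments probe.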