
    Noise-based information processing: Noise-based logic and computing: what do we have so far?

    We briefly introduce noise-based logic. After describing the main motivations, we outline classical, instantaneous (squeezed and non-squeezed), continuum, spike, and random-telegraph-signal based schemes, with applications such as circuits that emulate brain functioning and string verification via a slow communication channel. Comment: Invited talk at the 21st International Conference on Noise and Fluctuations, Toronto, Canada, June 12-16, 2011.

    Phoneme Recognition Using Acoustic Events

    This paper presents a new approach to phoneme recognition using non-sequential sub-phoneme units. These units are called acoustic events and are phonologically meaningful as well as recognizable from speech signals. Acoustic events form a phonologically incomplete representation compared to distinctive features. This problem may partly be overcome by incorporating phonological constraints. Currently, 24 binary events describing manner and place of articulation, vowel quality and voicing are used to recognize all German phonemes. Phoneme recognition in this paradigm consists of two steps: after the acoustic events have been determined from the speech signal, a phonological parser is used to generate syllable and phoneme hypotheses from the event lattice. Results obtained on a speaker-dependent corpus are presented. Comment: 4 pages, to appear at ICSLP'94, PostScript version (compressed and uuencoded).
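    A minimal sketch of the two-step idea described above, assuming a toy event inventory and a naive matching step (the event names, the detect_events stub and the phoneme table are illustrative placeholders, not the paper's actual detectors or phonological parser):

        # Toy illustration of event-based phoneme hypothesis generation (Python).
        # Step 1 would detect binary acoustic events (manner, place, vowel quality,
        # voicing) from the speech signal; here it is stubbed out.
        from typing import Dict, FrozenSet, List

        # Hypothetical inventory: each phoneme is described by the set of binary
        # events expected to be active during its realization.
        PHONEME_EVENTS: Dict[str, FrozenSet[str]] = {
            "m": frozenset({"nasal", "labial", "voiced"}),
            "a": frozenset({"vocalic", "open", "voiced"}),
            "s": frozenset({"fricative", "alveolar"}),
        }

        def detect_events(frame_features: List[float]) -> FrozenSet[str]:
            """Step 1 (stub): map acoustic features to active events (trained detectors in the paper)."""
            raise NotImplementedError

        def phoneme_hypotheses(active_events: FrozenSet[str]) -> List[str]:
            """Step 2 (simplified): rank phonemes by how well their expected events
            match the detected ones.  A phonological parser would additionally apply
            syllable-level constraints to prune the hypothesis lattice (not shown)."""
            scored = [
                (len(active_events & ev) - len(ev - active_events), ph)
                for ph, ev in PHONEME_EVENTS.items()
            ]
            scored.sort(reverse=True)
            return [ph for _, ph in scored]

        print(phoneme_hypotheses(frozenset({"nasal", "labial", "voiced"})))  # ['m', 'a', 's']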

    Symphony on strong field approximation

    This paper has been prepared by the Symphony collaboration (University of Warsaw, Uniwersytet Jagiellonski, DESY/CNR and ICFO) on the occasion of the 25th anniversary of the 'simple man's models' which underlie most of the phenomena that occur when intense ultrashort laser pulses interact with matter. The phenomena in question include high-harmonic generation (HHG), above-threshold ionization (ATI), and non-sequential multielectron ionization (NSMI). 'Simple man's models' provide both an intuitive basis for understanding the numerical solutions of the time-dependent Schrödinger equation (TDSE) and the motivation for the powerful analytic approximations generally known as the strong field approximation (SFA). In this paper we first review the SFA in the form developed by us over the last 25 years. In this approach the SFA is a method to solve the TDSE in which the non-perturbative interactions are described by including continuum-continuum interactions in a systematic, perturbation-like theory. In this review we focus on recent applications of the SFA to HHG, ATI and NSMI from multi-electron atoms and from multi-atom molecules. The main novel part of the presented theory concerns generalizations of the SFA to: (i) time-dependent treatment of two-electron atoms, allowing for studies of an interplay between electron impact ionization and resonant excitation with subsequent ionization; (ii) time-dependent treatment, in the single active electron approximation, of 'large' molecules and of targets that are themselves undergoing dynamics during the HHG or ATI processes. In particular, we formulate general expressions for the case of arbitrary molecules, combining input from quantum chemistry and quantum dynamics. We also formulate a theory of time-dependent separable molecular potentials to model analytically the dynamics of realistic electronic wave packets for molecules in strong laser fields. We dedicate this work to the memory of Bertrand Carré, who passed away in March 2018 at the age of 60.
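    For orientation, one common textbook form of the lowest-order SFA ionization amplitude (length gauge, atomic units) is sketched below; it is included only to illustrate the perturbation-like structure mentioned above, not as the collaboration's specific formulation, and sign and phase conventions differ between papers:

        M(\mathbf{p}) \approx -i \int_{-\infty}^{\infty} \mathrm{d}t\,
            \langle \mathbf{p} + \mathbf{A}(t) \,|\, \mathbf{E}(t)\cdot\mathbf{r} \,|\, \psi_0 \rangle\,
            e^{\, i S(\mathbf{p}, t)},
        \qquad
        S(\mathbf{p}, t) = \int_{t}^{\infty} \mathrm{d}t'
            \left[ \tfrac{1}{2} \bigl( \mathbf{p} + \mathbf{A}(t') \bigr)^{2} + I_p \right]

    Here \mathbf{A} is the vector potential, \mathbf{E} the laser field, \psi_0 the initial bound state and I_p the ionization potential; rescattering (continuum-continuum) terms enter as higher-order corrections to this amplitude.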

    Instruction fetch architectures and code layout optimizations

    The design of higher performance processors has been following two major trends: increasing the pipeline depth to allow faster clock rates, and widening the pipeline to allow parallel execution of more instructions. Designing a higher performance processor implies balancing all the pipeline stages to ensure that overall performance is not dominated by any of them. This means that a faster execution engine also requires a faster fetch engine, to ensure that it is possible to read and decode enough instructions to keep the pipeline full and the functional units busy. This paper explores the challenges faced by the instruction fetch stage for a variety of processor designs, from early pipelined processors to the more aggressive wide-issue superscalars. We describe the different fetch engines proposed in the literature, the performance issues involved, and some of the proposed improvements. We also show how compiler techniques that optimize the layout of the code in memory can be used to improve the fetch performance of the engines described. Overall, we show how instruction fetch has evolved from fetching one instruction every few cycles, to fetching one instruction per cycle, to fetching a full basic block per cycle, to fetching several basic blocks per cycle, tracing the evolution of the mechanisms surrounding the instruction cache and of the compiler optimizations used to better exploit them.
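    As a toy illustration of the code layout idea mentioned above, the sketch below chains basic blocks greedily by branch-profile counts so that hot paths become sequential in memory (Pettis-Hansen-style chaining is one classical technique of this kind; the block names and edge counts are invented, and this is not a specific algorithm from the paper):

        # Greedy basic-block chaining from (hypothetical) branch-profile counts.
        # Laying hot fall-through successors contiguously lets a fetch engine read
        # more useful instructions per cache line and per fetch cycle.

        # Hypothetical profile: (source block, successor block) -> execution count.
        edge_counts = {
            ("A", "B"): 900, ("A", "C"): 100,
            ("B", "D"): 850, ("C", "D"): 100,
            ("D", "A"): 700, ("D", "E"): 250,
        }

        def greedy_layout(edges):
            """Merge blocks into chains, hottest edges first (Pettis-Hansen style)."""
            chains = {}  # block -> the chain (list of blocks) that contains it
            for (src, dst), _count in sorted(edges.items(), key=lambda e: -e[1]):
                a = chains.setdefault(src, [src])
                b = chains.setdefault(dst, [dst])
                # Join only when src ends one chain and dst starts a different one.
                if a is not b and a[-1] == src and b[0] == dst:
                    a.extend(b)
                    for blk in b:
                        chains[blk] = a
            seen, order = set(), []
            for chain in chains.values():  # emit each chain once, preserving order
                if id(chain) not in seen:
                    seen.add(id(chain))
                    order.extend(chain)
            return order

        print(greedy_layout(edge_counts))  # ['A', 'B', 'D', 'E', 'C'] -- hot path laid out first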

    Phoneme and sentence-level ensembles for speech recognition

    We address the question of whether and how boosting and bagging can be used for speech recognition. In order to do this, we compare two different boosting schemes, one at the phoneme level and one at the utterance level, with a phoneme-level bagging scheme. We control for many parameters and other choices, such as the state inference scheme used. In an unbiased experiment, we clearly show that the gain of boosting methods compared to a single hidden Markov model is in all cases only marginal, while bagging significantly outperforms all other methods. We thus conclude that bagging methods, which have so far been overlooked in favour of boosting, should be examined more closely as a potentially useful ensemble learning technique for speech recognition.
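    A minimal sketch of phoneme-level bagging under stated assumptions: train one model per bootstrap resample of the training data and average the models' posteriors at decoding time (train_model, the posterior() interface and the data format are placeholders, not the paper's HMM training pipeline):

        import random
        from statistics import mean

        def train_model(samples):
            """Placeholder: would train an HMM or other acoustic model on (features, phoneme) pairs."""
            raise NotImplementedError

        def bagged_models(train_set, n_models=10, seed=0):
            """Train one model per bootstrap resample (sampling with replacement)."""
            rng = random.Random(seed)
            return [
                train_model([rng.choice(train_set) for _ in train_set])
                for _ in range(n_models)
            ]

        def predict(models, features, phoneme_set):
            """Average per-model posteriors (assumed model.posterior(features, phoneme)
            interface) and return the highest-scoring phoneme."""
            return max(
                phoneme_set,
                key=lambda ph: mean(m.posterior(features, ph) for m in models),
            )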

    Finding the way: navigation in hypermedia

    If navigation is recognized as a fundamental problem experienced by hypermedia users, then it merits further investigation. This review will identify observed navigational problems, suspected causes, and proposed solutions. To investigate the problem, it is necessary to examine the methods used to sequence hypermedia components and to identify the various schemes used for navigation. Information resulting from the review of the literature will serve as the basis for a project evaluating current multimedia software packages designed to develop reading skills. The findings will also provide the media specialist or professional educator with a framework for evaluating and subsequently utilizing multimedia learning materials. Additionally, the information compiled can serve as a guide for professionals endeavoring to create their own hypermedia materials.