
    Conceptual Background for Symbolic Computation

    This paper is a tutorial that examines the three major models of computation (the Turing Machine, Combinators, and the Lambda Calculus) with respect to their usefulness for the practical engineering of computing machines. While the classical von Neumann architecture can be deduced from the Turing Machine model, and Combinator machines have been built on an experimental basis, no serious attempts have been made to construct a Lambda Calculus machine. This paper gives a basic outline of how to incorporate a Lambda Calculus capability into a von Neumann type architecture, maintaining full backward compatibility while making optimal use of that architecture's advantages and technological maturity.

    The Albert Schweitzer Papers at Syracuse University

    This article highlights some of the documents contained in the Albert Schweitzer Papers located in the Syracuse University Special Collections. They contain a variety of materials, such as notebooks, letters, manuscripts, miscellanea, and books from Schweitzer's library. The article gives a synopsis of each of these categories and includes some photos from Schweitzer's life.

    A consistent extension of the lambda-calculus as a base for functional programming languages

    Church's lambda-calculus is modified by introducing a new mechanism, the lambda-bar operator #, which neutralizes the effect of one preceding lambda binding. This operator can be used in such a way that renaming of bound variables in any reduction sequence can be avoided, with the effect that efficient interpreters with a comparatively simple machine organization can be designed. It is shown that any semantic model of the pure lambda-calculus also serves as a model of this modified reduction calculus, which guarantees smooth semantic theories. The Berkling Reduction Language (BRL) is a new functional programming language based upon this modification.
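
    To make the idea concrete, the following is a minimal, illustrative Python sketch of one way the lambda-bar operator can be read operationally: each variable occurrence carries a count of preceding # markers, and the evaluator skips that many enclosing bindings of the same name during lookup, so bound variables never need to be renamed. The term representation and the environment-based evaluator are assumptions made for illustration only, not Berkling's actual machine organization.

        from dataclasses import dataclass

        # Minimal term language: a variable occurrence records how many
        # lambda-bar (#) markers precede it.
        @dataclass
        class Var:
            name: str
            bars: int = 0          # number of '#' prefixes on this occurrence

        @dataclass
        class Lam:
            param: str
            body: object

        @dataclass
        class App:
            fun: object
            arg: object

        def lookup(env, name, bars):
            # Walk the bindings from the innermost outward and skip one
            # matching binder for every '#' on the occurrence.
            skip = bars
            for bound_name, value in env:      # innermost binding first
                if bound_name == name:
                    if skip == 0:
                        return value
                    skip -= 1
            raise KeyError(f"unbound variable: {'#' * bars}{name}")

        def evaluate(term, env=()):
            # Environment-based evaluation; a closure pairs a Lam with its env.
            if isinstance(term, Var):
                return lookup(env, term.name, term.bars)
            if isinstance(term, Lam):
                return ("closure", term, env)
            _, lam, cenv = evaluate(term.fun, env)    # term is an App
            arg = evaluate(term.arg, env)
            return evaluate(lam.body, ((lam.param, arg),) + cenv)

        # (lambda x. lambda x. #x) applied to 1 and then 2 yields 1:
        # the '#' makes the inner occurrence refer to the outer x.
        term = App(App(Lam("x", Lam("x", Var("x", bars=1))), Var("a")), Var("b"))
        print(evaluate(term, (("a", 1), ("b", 2))))    # -> 1

    Without the '#', the same occurrence would refer to the innermost x and the result would be 2; the point of the operator is that this distinction is made without ever renaming a bound variable.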

    Arrays and the Lambda Calculus

    Why do functional languages have more difficulties with arrays than procedural languages? The problems that arise in designing functional languages with arrays, and in implementing them, are manifold. They can be classified according to 1) first principles, 2) semantics, 3) pragmatics, and 4) performance. This paper attempts to outline the issues in this area and their relation to the lambda calculus. The lambda calculus is a formal system and as such seemingly remote from practical applications. However, specific representations and implementations of that system may be utilized to realize arrays in a way that makes progress towards a compromise between 1) adherence to first principles, 2) clear semantics, 3) obvious pragmatics, and 4) high performance. More specifically, a form of the lambda calculus that uses a particular representation of variables, namely De Bruijn indices, may be a vehicle for representing arrays and the functions that manipulate them.
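
    As an illustration of the De Bruijn representation mentioned above, the Python sketch below (in the same illustrative style as the previous one) implements shifting, substitution, and beta reduction on nameless terms, and then encodes a small "array" as a selector function that is indexed by applying it to a projection. The array encoding is an assumption chosen for illustration; it is not the representation proposed in the paper.

        from dataclasses import dataclass

        # Nameless lambda terms: a variable is its De Bruijn index, i.e. the
        # number of binders between the occurrence and the binder it refers to.
        @dataclass
        class Var:
            idx: int

        @dataclass
        class Lam:
            body: object

        @dataclass
        class App:
            fun: object
            arg: object

        @dataclass
        class Const:
            value: object      # opaque constants standing in for array elements

        def shift(t, d, cutoff=0):
            # Add d to every free variable (index >= cutoff).
            if isinstance(t, Var):
                return Var(t.idx + d) if t.idx >= cutoff else t
            if isinstance(t, Lam):
                return Lam(shift(t.body, d, cutoff + 1))
            if isinstance(t, App):
                return App(shift(t.fun, d, cutoff), shift(t.arg, d, cutoff))
            return t

        def subst(t, j, s):
            # Substitute s for variable j in t; no renaming is ever needed.
            if isinstance(t, Var):
                return s if t.idx == j else t
            if isinstance(t, Lam):
                return Lam(subst(t.body, j + 1, shift(s, 1)))
            if isinstance(t, App):
                return App(subst(t.fun, j, s), subst(t.arg, j, s))
            return t

        def normalize(t):
            # Simple normalizer: reduce the function position first, then
            # perform the beta step (adequate for the terminating terms here).
            if isinstance(t, App):
                fun = normalize(t.fun)
                if isinstance(fun, Lam):
                    return normalize(shift(subst(fun.body, 0, shift(t.arg, 1)), -1))
                return App(fun, normalize(t.arg))
            if isinstance(t, Lam):
                return Lam(normalize(t.body))
            return t

        def vector(elems):
            # <a0, ..., a(n-1)> encoded as the selector  \s. s a0 ... a(n-1)
            t = Var(0)
            for e in elems:
                t = App(t, Const(e))
            return Lam(t)

        def projection(i, n):
            # \x0 ... x(n-1). xi  -- indexing function for an n-element vector
            body = Var(n - 1 - i)
            for _ in range(n):
                body = Lam(body)
            return body

        arr = vector(["a", "b", "c"])
        print(normalize(App(arr, projection(1, 3))))   # -> Const(value='b')

    Selecting a single element in this encoding costs a number of beta steps proportional to the array length, which already hints at why performance (point 4) is a genuine concern for purely term-based array representations.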

    HMM-based synthesis of child speech

    The synthesis of child speech presents challenges both in the collection of data and in the building of a synthesiser from that data. Because only limited data can be collected, and the domain of that data is constrained, it is difficult to obtain the kind of phonetically balanced corpus usually used in speech synthesis. As a consequence, building a synthesiser from this data is difficult. Concatenative synthesisers are not robust to corpora with many missing units (as is likely when the corpus content is not carefully designed), so we chose to build a statistical parametric synthesiser using the HMM-based system HTS. This technique has previously been shown to perform well for limited amounts of data, and for data collected under imperfect conditions. We compared six different configurations of the synthesiser, using both speaker-dependent and speaker-adaptive modelling techniques, and using varying amounts of data. The output from these systems was evaluated alongside natural and vocoded speech in a Blizzard-style listening test.

    Automatische Analyse von Rechtschreibfähigkeit auf Basis von Speech-Processing-Technologien

    This paper presents an interdisciplinary research project that develops an instrument for the automated analysis of spelling in freely written learner texts. The instrument addresses the well-known dilemma of economical versus differentiated spelling diagnostics, since large amounts of data can be analyzed in orthographic detail within a very short time. The innovative approach combines know-how from didactics, computational linguistics, and automatic speech processing: learner spellings are analyzed together with their automatically generated pronunciation, rather than by inspecting grapheme sequences alone. Techniques from automatic speech recognition and synthesis are used to derive a grapheme-phoneme alignment between the erroneous text and a correct version of the text determined from associated probabilities, so that spelling errors and correct spellings can be automatically annotated and classified. The paper first describes the design of the instrument and then presents results from applying it to 120 learner texts from grades 1 through 4. A comparison of the automatic and the manual analysis shows that the approach is viable, and indicates the steps still needed to arrive at a fully autonomous procedure.
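
    The system itself is not reproduced here, but the core idea of comparing an erroneous spelling with a correct version and classifying the differences can be illustrated with a standard character-level edit-distance alignment. The following Python sketch is a simplified toy: the function name, the error-category labels, and the example word pair ("Farrad" for "Fahrrad") are invented for illustration and are not the project's actual taxonomy or data.

        def align(written, target):
            # Character-level edit-distance alignment (standard dynamic programming).
            n, m = len(written), len(target)
            # dist[i][j] = edit distance between written[:i] and target[:j]
            dist = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(n + 1):
                dist[i][0] = i
            for j in range(m + 1):
                dist[0][j] = j
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = 0 if written[i - 1] == target[j - 1] else 1
                    dist[i][j] = min(dist[i - 1][j] + 1,         # extra letter
                                     dist[i][j - 1] + 1,         # missing letter
                                     dist[i - 1][j - 1] + cost)  # match / substitution
            # Backtrace to recover the aligned operations.
            ops, i, j = [], n, m
            while i > 0 or j > 0:
                if (i > 0 and j > 0 and
                        dist[i][j] == dist[i - 1][j - 1] + (written[i - 1] != target[j - 1])):
                    ops.append(("match" if written[i - 1] == target[j - 1] else "substitution",
                                written[i - 1], target[j - 1]))
                    i, j = i - 1, j - 1
                elif i > 0 and dist[i][j] == dist[i - 1][j] + 1:
                    ops.append(("extra letter", written[i - 1], ""))
                    i -= 1
                else:
                    ops.append(("missing letter", "", target[j - 1]))
                    j -= 1
            return list(reversed(ops))

        # Hypothetical example: a learner writes "Farrad" for the target "Fahrrad".
        for op in align("Farrad", "Fahrrad"):
            if op[0] != "match":
                print(op)    # -> ('missing letter', '', 'h')

    A real system along the lines described above would align at the grapheme and phoneme level and compare against a probabilistically determined target hypothesis rather than a known correct spelling, but the alignment-then-classify structure is the same.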

    Synthesis of Child Speech With HMM Adaptation and Voice Conversion

    The synthesis of child speech presents challenges both in the collection of data and in the building of a synthesizer from that data. We chose to build a statistical parametric synthesizer using the hidden Markov model (HMM)-based system HTS, as this technique has previously been shown to perform well for limited amounts of data, and for data collected under imperfect conditions. Six different configurations of the synthesizer were compared, using both speaker-dependent and speaker-adaptive modeling techniques, and using varying amounts of data. For comparison with HMM adaptation, techniques from voice conversion were used to transform existing synthesizers to the characteristics of the target speaker. Speaker-adaptive voices generally outperformed child speaker-dependent voices in the evaluation. HMM adaptation outperformed voice-conversion-style techniques when using the full target speaker corpus; with less adaptation data, however, no significant listener preference for either HMM adaptation or voice conversion was found.

    A Database of Freely Written Texts of German School Students for the Purpose of Automatic Spelling Error Classification

    The spelling competence of school students is best measured on freely written texts rather than on pre-determined, dictated texts. Since analyzing the error categories in such texts is very labor-intensive and costly, we are working on an automatic system to perform this task. The modules of the system are derived from techniques from the area of natural language processing and are learning systems that need large amounts of training data. To obtain the data necessary for training and evaluating the resulting system, we conducted a collection of freely written German texts by school students. A total of 1,730 students from grades 1 through 8 participated in this data collection. The data were transcribed electronically and annotated with their corrected versions, resulting in a total of 14,563 sentences that can now be used for research on spelling diagnostics. Additional meta-data was collected regarding the writers' language biography, teaching methodology, age, gender, and school year. In order to carry out a detailed manual annotation of the categories of spelling errors committed by the students, we developed a tool specifically tailored to the task.