
    Ada and the rapid development lifecycle

    JPL is under contract, through NASA, with the US Army to develop a state-of-the-art Command Center System for the US European Command (USEUCOM). The Command Center System will receive, process, and integrate force status information from various sources and provide this integrated information to staff officers and decision makers in a format designed to enhance user comprehension and utility. The system is based on distributed workstation-class microcomputers, VAX- and SUN-based data servers, and interfaces to existing military mainframe systems and communication networks. JPL is developing the Command Center System using an incremental delivery methodology called the Rapid Development Methodology, with adherence to government and industry standards including the UNIX operating system, X Windows, OSF/Motif, and the Ada programming language. Through a combination of software engineering techniques specific to the Ada programming language and the Rapid Development Approach, JPL was able to deliver capability to the military user incrementally, with quality comparable to, and economies improved over, projects developed under more traditional software-intensive system implementation methodologies.

    Mixed forms of capital aid


    Language Acquisition: Which Factors Make It Easier to Learn Another Language?

    My presentation looks at factors influencing second language acquisition and whether they facilitate or hinder the learning process. I will begin by reviewing language acquisition theories applicable to infants and toddlers, providing a foundational understanding. As a key factor, I will examine age of acquisition in relation to successful second language acquisition. The role of intelligence in second language acquisition will also be discussed. Additionally, I will investigate whether certain languages are inherently easier to learn based on syntax, vocabulary, and one’s native language. My discussion will extend to examining the ease of acquiring multiple languages after acquiring a second language.

    Robust Speech Recognition via Adaptation for German Oral History Interviews

    Automatic speech recognition systems often achieve remarkable performance when trained on thousands of hours of manually annotated and time-aligned speech. However, when applied in conditions and domains other than those they were trained on, the systems' recognition quality often deteriorates, substantially limiting their real-world application. One of these applications is the automatic transcription of oral history interviews, i.e., interviews with witnesses of historical events. For the past twenty years, oral history interviews have been among the most challenging use cases for speech recognition due to a lack of representative training data, diverse and often poor recording conditions, and the spontaneous and occasionally colloquial nature of the speech. This thesis proposes and studies the combination of different domain adaptation approaches to overcome the lack of representative training data and cope with the unpredictability of oral history interviews. We employ and investigate data augmentation to adapt broadcast training data to cover the challenging recording conditions of oral history interviews, and we compare these data augmentation approaches to conventional speech enhancement. To improve the system's performance further, we study domain adaptation via fine-tuning, using only a minimal amount of manually transcribed oral history interviews to adapt acoustic models that were robustly trained on thousands of hours of annotated speech. We employ automatic transcript alignment to generate adaptation data from transcribed but not time-aligned interviews and investigate the influence of different adaptation data sizes on domain overfitting and generalization. We reduce domain overfitting and improve the generalization of the adapted models by employing cross-lingual adaptation in a multi-staged setup that leverages the vast availability of English speech corpora. Additionally, this thesis experimentally estimates a human word error rate for German oral history interviews recorded under clean conditions, to study and highlight the challenges of transcription even for humans and to put current results of automatic transcription into perspective. The proposed methods are evaluated on a representative oral history test set for the target domain and on several additional German test sets from different domains. With this evaluation, we ensure high robustness, obtain a reliable estimate of the real-world performance for conditions not seen in training, and avoid selecting models that suffer from domain overfitting. Overall, we halved the word error rate compared to the baseline using the proposed methods, while simultaneously improving recognition performance on the other domains by a substantial margin.
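    The word error rates discussed above (the human estimate, the baseline, and the halved error rate after adaptation) are based on the standard word-level Levenshtein alignment between reference and hypothesis transcripts. As a minimal, illustrative sketch of that metric only (not the thesis's evaluation code), the Python function below computes WER for a single sentence pair; the example strings are invented.

```python
# Minimal word error rate (WER) sketch: word-level Levenshtein distance
# normalized by the reference length. Illustrative only.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

if __name__ == "__main__":
    ref = "der zeuge beschreibt die ereignisse von damals"   # invented example
    hyp = "der zeuge beschreibt ereignisse von da mals"
    print(f"WER: {word_error_rate(ref, hyp):.2%}")
```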

    Indigenous Language Publishing in the North American Context

    Developing Indigenous literacy is often seen as a key component of successful language revitalization and maintenance programs (Fishman 1991, Watahomigie and McCarty 1997, Bernard 1997, Grenoble and Whaley 2005), particularly in contexts such as North America (i.e., Native American and First Nations communities) where literacy in the language of wider communication is widespread and necessary for participation in daily life. To date, however, there has been no systematic evaluation of the types of literature available to readers of Indigenous languages, the methods of production and distribution, or the effects of Indigenous literature on perceptions of language prestige. This paper provides a strategic analysis of discoverable and accessible reading material available in Indigenous languages across North America, surveying over 2,100 titles across 80 languages and dialects, taking stock of the state of the art of Indigenous language book publishing and exploring how these publications can affect language attitudes. Cultivating positive language attitudes and language prestige is especially crucial in the North American context, where the great majority of Indigenous languages are highly endangered and most Indigenous people acquire English (or French, in some parts of Canada) as their mother tongue. Likewise, most Indigenous people acquire literacy in English or French first, and are exposed to literature that encompasses a wide variety of subjects, reading levels, and genres. I present the results of the survey of 2,100 Indigenous language publications in North America in terms of the subjects, reading levels, and genres available to Indigenous readers as compared to the literature available in the languages of wider communication, and identify the challenges faced by language revitalization programs that prioritize Indigenous literacy, including the difficulty of expanding literacy beyond the domain of education (Grenoble and Whaley 2005) and the economic hurdles in creating an Indigenous literary tradition. Bernard (1997) argues that building an Indigenous literary tradition via publishing is important for preserving Indigenous languages; I extend this argument by noting the role literature can play in building cultural capital (Bourdieu 1986) and prestige planning, particularly in the North American context where literature is so highly valued. Finally, I highlight some of the features found across successful Indigenous literature publishing endeavors and attempt to provide guidelines for future publishing projects.
    References
    Bernard, H. Russell. 1997. ‘Language Preservation and Publishing.’ In Nancy Hornberger (Ed.) Indigenous Literacies in the Americas: Language Planning from the Bottom Up, 139-156. Berlin: De Gruyter Mouton.
    Bourdieu, Pierre and Richard Nice (translator). 1986. ‘Forms of Capital.’ Reproduced in Imre Szeman and Timothy Kaposy (Eds.). 2011. Cultural Theory: An Anthology. Wiley-Blackwell.
    Fishman, Joshua A. (Ed.) 1991. Reversing Language Shift: Theoretical and Empirical Foundations of Assistance to Threatened Languages. Clevedon: Multilingual Matters.
    Grenoble, Lenore A. and Lindsay J. Whaley. 2005. Saving Languages: An Introduction to Language Revitalization. Cambridge: Cambridge University Press.
    Watahomigie, Lucille J. and Teresa L. McCarty. 1997. ‘Literacy for what? Hualapai literacy and language maintenance.’ In Nancy Hornberger (Ed.) Indigenous Literacies in the Americas: Language Planning from the Bottom Up, 95-113. Berlin: De Gruyter Mouton.

    Performance of Large Language Models in a Computer Science Degree Program

    Large language models such as ChatGPT-3.5 and GPT-4.0 are ubiquitous and dominate the current discourse. Their transformative capabilities have led to a paradigm shift in how we interact with and utilize (text-based) information. Each day, new possibilities to leverage the capabilities of these models emerge. This paper presents findings on the performance of different large language models in an undergraduate computer science degree program at a university of applied sciences. Our primary objective is to assess the effectiveness of these models within the curriculum by employing them as educational aids. By prompting the models with lecture material, exercise tasks, and past exams, we aim to evaluate their proficiency across different computer science domains. We showcase the strong performance of current large language models while highlighting limitations and constraints within the context of such a degree program. We found that ChatGPT-3.5 averaged 79.9% of the total score across 10 tested modules, BingAI achieved 68.4%, and LLaMa, in the 65-billion-parameter variant, 20%. Despite these convincing results, even GPT-4.0 would not pass the degree program, owing to limitations in mathematical calculations.
    Comment: Submitted to AI4AI Workshop 202
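    The evaluation described above amounts to prompting each model with exam material and grading the answers against each module's rubric. The sketch below shows how such prompting might look with the OpenAI Python client; the model name, example questions, and grading step are illustrative assumptions, not the paper's actual setup.

```python
# Illustrative sketch: prompt a chat model with exam questions and collect
# answers for manual grading. Not the paper's pipeline; the model name and
# questions are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

exam_questions = [
    "Explain the difference between a process and a thread.",
    "State the worst-case time complexity of quicksort and justify it.",
]

def ask_model(question: str, model: str = "gpt-3.5-turbo") -> str:
    """Send one exam question to the model and return its answer text."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Answer as a computer science exam candidate."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

answers = [ask_model(q) for q in exam_questions]
# In a study like the one above, answers would then be graded against the
# module rubric, and the per-module percentage would be the summed points
# divided by the maximum attainable score.
```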

    FAA Designated Pilot Examiner System Insights

    As part of the Reauthorization Act of 2018, the FAA was required to assign to the Aviation Rulemaking Advisory Committee (ARAC) a review of the current Designated Pilot Examiner (DPE) policies. The ARAC in turn assigned this task to the Designated Pilot Examiner Reforms Working Group (DPERWG). The group delivered its recommendations to the FAA in June 2021, with an FAA response to the group due by June 2022. The purpose of this research project is to provide more insight into the current DPE system from all stakeholders prior to that deadline. Survey data from both current DPEs and flight schools nationwide will be shared. These surveys address stakeholder perceptions of components of the DPE system, including: 1) wait times for check rides, 2) activity level of DPEs, 3) the effect that rescinding geographical constraints and allowing up to three check rides per day has had, 4) the prevalence of applicants and/or examiners traveling to check ride sites other than their home airport, and 5) feedback on a number of specific recommendations made by the DPERWG. These items include changes to the DPE application process, the development of an applicant feedback system, changes to the number of events per day that can effectively be conducted, a national DPE oversight model versus the current FSDO oversight model, the treatment of oral and flight tests as separate events, and the effectiveness of the DPE locator on the FAA website.

    FAA Designated Pilot Examiner System Insights

    As part of the Federal Aviation Administration (FAA) Reauthorization Act of 2018, the FAA was required by Congress to review Designated Pilot Examiner (DPE) policies and procedures. This task was delegated to the Designated Pilot Examiner Reforms Working Group (DPERWG). The group delivered its recommendations to the FAA in June 2021, and this research study was conducted in late January 2022 to provide additional insights to the agency prior to its required response to the DPERWG in June 2022. The project aimed to capture perceptions of the current DPE system from both DPEs and flight schools nationwide, as well as feedback on selected DPERWG recommendations. Surveys of these two populations sought stakeholder perceptions of the current DPE system, including: 1) wait times for scheduling check rides, 2) the level of activity of DPEs, and 3) the prevalence of applicants and/or examiners traveling to check ride sites other than their home airport. Feedback was also solicited on specific recommendations made by the DPERWG, including: 1) the implementation of a confidential applicant feedback survey system, 2) the possibility of moving to a national oversight model for the DPE system, 3) perceptions of, and improvements seen as necessary for, the current FAA DPE locator website, 4) the possibility of treating oral and flight exams as separate events, and 5) changing medical certificate requirements for DPEs. There were significant differences in the perceptions of DPEs and flight training providers regarding the wait times incurred when scheduling check rides, but there was general consensus regarding the travel of both applicants and DPEs for the conduct of those rides. There was also consensus between the two surveyed groups regarding most of the DPERWG recommendations examined by the surveys.

    Two-Staged Acoustic Modeling Adaption for Robust Speech Recognition by the Example of German Oral History Interviews

    In automatic speech recognition, little training data is often available for specific, challenging tasks, while training state-of-the-art automatic speech recognition systems requires large amounts of annotated speech. To address this issue, we propose a two-staged approach to acoustic modeling that combines noise and reverberation data augmentation with transfer learning to robustly address challenges such as difficult acoustic recording conditions, spontaneous speech, and speech of elderly people. We evaluate our approach using the example of German oral history interviews, where a relative average reduction of the word error rate by 19.3% is achieved.
    Comment: Accepted for IEEE International Conference on Multimedia and Expo (ICME), Shanghai, China, July 201
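    The first stage of the proposed approach is data augmentation with noise and reverberation. As a rough illustration of the additive-noise part of that idea only (not the paper's actual pipeline), the NumPy sketch below mixes a noise signal into clean speech at a chosen signal-to-noise ratio; the signals and the SNR value are placeholders, and reverberation augmentation would additionally convolve the speech with a room impulse response.

```python
# Illustrative additive-noise augmentation: mix a noise signal into clean
# speech at a target SNR. Paths, signals, and the SNR value are placeholder
# assumptions, not the paper's configuration.
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Return speech with noise added at the requested signal-to-noise ratio."""
    # Loop or trim the noise so it matches the speech length.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]

    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale the noise so that 10*log10(speech_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10.0)))
    return speech + scale * noise

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # stand-in for clean speech
    babble = rng.normal(size=8000)                              # stand-in for recorded noise
    augmented = mix_at_snr(clean, babble, snr_db=10.0)
    print(augmented.shape)
```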