549,227 research outputs found

    The QTKanji project : an analysis of the relationship between computer assisted language learning (CALL) and the development of autonomous language learners : a thesis presented in partial fulfilment of the requirements for the degree of Master of Arts in Japanese at Massey University, Palmerston North, New Zealand

    Further thesis content held on disc is unreadable. An analysis of the relationship between computer-assisted language learning (CALL) and the development of autonomous language learners. Computer-assisted language learning (CALL) software is being introduced into tertiary language programmes for a number of reasons. Research has indicated that CALL is effective for language learning, that it caters for individual learning needs, and that it promotes independent learning. Because the software provides structured learning, students can study in their own time without a teacher. While it is now commonly accepted that CALL material must be carefully integrated into the curriculum to be effective, there is a move in CALL research away from the evaluation of software alone towards a greater focus on the learner. It is maintained that understanding different learning styles and learner preferences is essential in the creation of CALL packages, and that packages must be sufficiently flexible to cater for learners with differing abilities to manage their own learning. However, while an attraction of CALL is that it fosters independent learning, it is not clear what learners do when they are in the process of becoming independent learners, which CALL environments will foster the development of independent learning skills, or which types of learner will benefit. This thesis examines the in-house development and trialling of kanji software at the Auckland University of Technology, taking into account the direction of current research into CALL. It provides an initial evaluation of the software's design and use within the framework of research into second language acquisition, learner differences, and independent learning. Findings from this initial study will be used to modify the software where necessary and to provide the basis for further research into CALL and language learning.

    Speech Recognition Technology: Improving Speed and Accuracy of Emergency Medical Services Documentation to Protect Patients

    Because hospital errors, such as mistakes in documentation, cause one in six deaths each year in the United States, the accuracy of health records in the emergency medical services (EMS) must be improved. One possible solution is to incorporate speech recognition (SR) software into the tools currently used by EMS first responders. The purpose of this research was to determine whether SR software could increase the efficiency and accuracy of EMS documentation and thereby improve the safety of EMS patients. An initial review of the literature on the performance of current SR software showed that this software is not 99% accurate, and therefore errors in the medical documentation it produces could harm patients. The literature review also identified weaknesses of SR software that could be overcome so that the software would be accurate enough for use in EMS settings: the inability to differentiate between similar phrases and the inability to filter out background noise. An analysis of natural language processing algorithms showed that the bag-of-words post-processing algorithm can differentiate between similar phrases. This algorithm is well suited to SR applications because it is simple yet effective compared with machine learning algorithms that require large amounts of training data. The findings suggest that if these weaknesses of current SR software are resolved, the software could increase the efficiency and accuracy of EMS documentation. Further studies should integrate the bag-of-words post-processing method into SR software and field test its accuracy in EMS settings.
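    To make the proposed post-processing step concrete, the sketch below shows one way a bag-of-words comparison could snap a noisy recognizer hypothesis to the closest phrase in a known EMS vocabulary. The phrase list, similarity threshold, and function names are illustrative assumptions, not part of the study.

```python
from collections import Counter

# Hypothetical inventory of EMS phrases the recognizer output is checked against.
KNOWN_PHRASES = [
    "patient is alert and oriented",
    "patient is unresponsive",
    "administered 0.4 mg naloxone",
]

def bag_of_words(text: str) -> Counter:
    """Represent a phrase as an unordered multiset of lowercase tokens."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
    return dot / norm if norm else 0.0

def correct_transcript(raw_transcript: str, threshold: float = 0.6) -> str:
    """Snap a noisy SR hypothesis to the closest known phrase, if close enough."""
    raw_vec = bag_of_words(raw_transcript)
    best_phrase, best_score = max(
        ((p, cosine_similarity(raw_vec, bag_of_words(p))) for p in KNOWN_PHRASES),
        key=lambda pair: pair[1],
    )
    return best_phrase if best_score >= threshold else raw_transcript

if __name__ == "__main__":
    # A garbled hypothesis is mapped back to the intended documented phrase.
    print(correct_transcript("patient alert and oriented"))
```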

    The space physics environment data analysis system (SPEDAS)

    With the advent of the Heliophysics/Geospace System Observatory (H/GSO), a complement of multi-spacecraft missions and ground-based observatories to study the space environment, data retrieval, analysis, and visualization of space physics data can be daunting. The Space Physics Environment Data Analysis System (SPEDAS), a grass-roots software development platform (www.spedas.org), is now officially supported by NASA Heliophysics as part of its data environment infrastructure. It serves more than a dozen space missions and ground observatories and can integrate the full complement of past and upcoming space physics missions with minimal resources, following clear, simple, and well-proven guidelines. Free, modular, and configurable to the needs of individual missions, it works in both command-line mode (ideal for experienced users) and Graphical User Interface (GUI) mode (reducing the learning curve for first-time users). Both options have “crib-sheets,” user-command sequences in ASCII format that facilitate record-and-repeat actions, especially for complex operations and plotting. Crib-sheets enhance scientific interactions, as users can move rapidly and accurately from exchanges of technical information on data processing to efficient discussions regarding data interpretation and science. SPEDAS can readily query and ingest all International Solar Terrestrial Physics (ISTP)-compatible products from the Space Physics Data Facility (SPDF), enabling access to a vast collection of historic and current mission data. The planned incorporation of Heliophysics Application Programmer’s Interface (HAPI) standards will facilitate data ingestion from distributed datasets that adhere to these standards. Although SPEDAS is currently Interactive Data Language (IDL)-based (and interfaces to Java-based tools such as Autoplot), efforts are underway to expand it further to work with Python (first as an interface tool and potentially even as an under-the-hood replacement). We review the SPEDAS development history, goals, and current implementation. We explain its “modes of use” with examples geared for users and outline its technical implementation and requirements with software developers in mind. We also describe SPEDAS personnel and software management, interfaces with other organizations, resources and support structure available to the community, and future development plans.
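    As a rough illustration of the record-and-repeat workflow that crib-sheets enable, the snippet below sketches what a Python-side command sequence might look like through the pySPEDAS interface mentioned above. The specific load routine, its arguments, and the plotting call are assumptions based on typical tplot-style usage, not taken from the paper.

```python
# Rough sketch of a Python "crib sheet": a short, replayable command sequence.
# Assumes the pySPEDAS interface provides mission load routines (here a THEMIS
# fluxgate magnetometer loader) and tplot-style plotting; the exact function
# names and signatures are illustrative assumptions, not verified API.
import pyspedas
from pytplot import tplot

# Load one day of THEMIS-A FGM data; the loader returns tplot variable names.
trange = ['2008-02-26', '2008-02-27']
fgm_vars = pyspedas.themis.fgm(trange=trange, probe='a')

# Plot the loaded variables; re-running this script reproduces the same figure.
tplot(fgm_vars)
```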

    Automated Quality Assessment of Natural Language Requirements

    High demands on quality and increasing complexity are major challenges in the development of industrial software in general. The development of automotive software in particular is subject to additional safety, security, and legal demands. In such software projects, the specification of requirements is the first concrete output of the development process and usually the basis for communication between manufacturers and development partners. The quality of this output is therefore decisive for the success of a software development project. In recent years, many efforts in academia and practice have targeted securing and improving the quality of requirement specifications. Early improvement approaches concentrated on assisting developers in formulating their requirements. Other approaches focus on the use of formal methods, but despite several advantages, these are not widely applied in practice today. Most software requirements today are informal and still specified in natural language. Current and previous research mainly focuses on quality characteristics agreed upon by the software engineering community. These are described in the standard ISO/IEC/IEEE 29148:2011, which offers nine essential characteristics for requirements quality. Several approaches additionally focus on measurable indicators that can be derived from text. More recent publications target the automated analysis of requirements by assessing their quality characteristics and by utilizing methods from natural language processing and techniques from machine learning. This thesis focuses in particular on reliability and accuracy in the assessment of requirements and addresses the relationships between textual indicators and quality characteristics as defined by global standards. In addition, an automated quality assessment of natural language requirements is implemented using machine learning techniques. For this purpose, labeled data is captured through assessment sessions in which experts from the automotive industry manually assess the quality characteristics of natural language requirements as defined in ISO 29148. The research is carried out in cooperation with an international engineering and consulting company and enables us to access requirements from automotive software development projects for safety and comfort functions. We demonstrate the applicability of our approach to real requirements and present promising results for industry-wide application.
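    A minimal sketch of the indicator-plus-classifier idea follows: simple textual indicators are derived from a requirement sentence and fed to a classifier trained on expert labels. The chosen indicators, the weak-word list, the example labels, and the use of scikit-learn are assumptions for illustration, not the thesis's actual feature set or model.

```python
# Sketch: derive simple textual indicators from a requirement sentence and
# train a classifier on expert-labeled quality ratings. Indicators, labels,
# and the scikit-learn model are illustrative assumptions.
from sklearn.linear_model import LogisticRegression

WEAK_WORDS = {"should", "may", "possibly", "appropriate", "etc"}

def indicators(requirement: str) -> list[float]:
    tokens = requirement.lower().split()
    return [
        len(tokens),                                       # length in words
        sum(t.strip(".,") in WEAK_WORDS for t in tokens),  # weak/vague words
        requirement.count(","),                            # clause-complexity proxy
    ]

# Hypothetical training data: requirement texts with expert labels
# (1 = acceptable, 0 = needs rework).
texts = [
    "The system shall log every failed login attempt within 1 second.",
    "The software should possibly handle errors in an appropriate way, etc.",
]
labels = [1, 0]

model = LogisticRegression()
model.fit([indicators(t) for t in texts], labels)
print(model.predict([indicators("The ECU shall report battery voltage every 10 ms.")]))
```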

    A Model-Based AI-Driven Test Generation System

    Achieving high software quality today involves manual analysis, test planning, documentation of the testing strategy and test cases, and development of automated test scripts to support regression testing. This thesis is motivated by the opportunity to bridge the gap between current test automation and true test automation by investigating learning-based solutions to software testing. We present an approach that combines a trainable web component classifier, a test case description language, and a trainable test generation and execution system that can learn to generate new test cases. Training data was collected and hand-labeled across 7 systems, 95 web pages, and 17,360 elements. A total of 250 test flows were also hand-crafted for training purposes. Various machine learning algorithms were evaluated. Results showed that Random Forest classifiers performed well on several web component classification problems. In addition, Long Short-Term Memory neural networks were able to model and generate new valid test flows.
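    The sketch below illustrates the web component classification step in the spirit described above: DOM elements are turned into numeric features and a Random Forest is trained on hand-labeled examples. The feature set, element representation, and labels are assumptions made for illustration only.

```python
# Sketch of the web component classification step: featurize DOM elements and
# train a Random Forest, the model family the thesis reports worked well.
# Features, element fields, and labels are illustrative assumptions.
from sklearn.ensemble import RandomForestClassifier

def featurize(element: dict) -> list[int]:
    """Turn a scraped DOM element into a simple numeric feature vector."""
    return [
        int(element["tag"] == "input"),
        int(element["tag"] == "button"),
        int("password" in element.get("type", "")),
        len(element.get("text", "")),
    ]

# Hypothetical hand-labeled elements (label = web component class).
elements = [
    {"tag": "input", "type": "password", "text": ""},
    {"tag": "button", "type": "", "text": "Sign in"},
]
labels = ["password_field", "submit_button"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit([featurize(e) for e in elements], labels)

# Classify a newly scraped element before feeding it to test generation.
print(clf.predict([featurize({"tag": "input", "type": "password", "text": ""})]))
```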

    Social Web Communities

    Blogs, Wikis, and Social Bookmark Tools have rapidly emerged on the Web. The reasons for their immediate success are that people are happy to share information and that these tools provide an infrastructure for doing so without requiring any specific skills. At the moment, there is no foundational research for these systems, and they provide only very simple structures for organising knowledge. Individual users create their own structures, but these cannot currently be exploited for knowledge sharing. The objective of the seminar was to provide theoretical foundations for upcoming Web 2.0 applications and to investigate further applications that go beyond bookmark and file sharing. The main research question can be summarized as follows: how will current and emerging resource-sharing systems support users in leveraging more knowledge and power from the information they share in Web 2.0 applications? Research areas such as the Semantic Web, Machine Learning, Information Retrieval, Information Extraction, Social Network Analysis, Natural Language Processing, Library and Information Sciences, and Hypermedia Systems have been working on these questions for some time. In the workshop, researchers from these areas came together to assess the state of the art and to set up a road map describing the next steps towards the next generation of social software.

    The language data repository: machine readable storage for spoken language data

    The Language Data Repository project is working to develop a software architecture capable of storing the transcripts and recordings of spoken language data and of hosting software tools to aid in the analysis of those data. The proposed software architecture can be used by multiple people to store linguistic data from multiple languages, either on local machines or on non-local machines that can be accessed via a network by multiple users simultaneously. The primary user community for the LDR software comes from a targeted subset of linguists conducting research on language groups with no officially established or standardized writing system. These linguistic field workers are typically involved in activities such as learning these "unwritten" languages, developing orthographic systems, beginning literacy programs, and producing written texts in the new orthographic system (e.g., Bible translations and traditional stories). The secondary user community consists of linguists who need a reliable method of storing spoken language data and the transcripts of those data, regardless of the existence of an established or standardized written code for that language. Such a software system offers two main improvements over current, paper-based methods of recording transcripts of linguistic data. First, by utilizing machine-readable storage, it will enable linguists to use computational tools in their analysis, increasing their ability to quickly and accurately test and evaluate hypotheses about the rules governing the linguistic systems. Second, a standardized method of recording data in a machine-readable format will enhance linguists' ability to document their research and share their results with a greater number of colleagues than previously possible. A benefit of this wider distribution of primary data is that more people can test various hypotheses simultaneously.
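    As a sketch of what such machine-readable storage might look like, the snippet below models recordings and time-aligned transcript segments in a small relational store. The table layout, column names, and example values are assumptions for illustration, not the LDR project's actual design.

```python
# Minimal sketch of a machine-readable store for time-aligned transcripts,
# assuming a relational layout of recordings and transcript segments. Table
# and column names are illustrative assumptions, not the LDR's actual schema.
import sqlite3

conn = sqlite3.connect("ldr.sqlite")
conn.executescript("""
CREATE TABLE IF NOT EXISTS recording (
    id INTEGER PRIMARY KEY,
    language TEXT NOT NULL,          -- e.g. an ISO 639-3 code
    audio_path TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS segment (
    id INTEGER PRIMARY KEY,
    recording_id INTEGER REFERENCES recording(id),
    start_ms INTEGER NOT NULL,       -- offset into the audio file
    end_ms INTEGER NOT NULL,
    transcription TEXT NOT NULL,     -- phonetic or orthographic transcript
    gloss TEXT                       -- optional interlinear gloss
);
""")

# Store one utterance and query it back for analysis.
conn.execute("INSERT INTO recording (language, audio_path) VALUES (?, ?)",
             ("xyz", "session1.wav"))
conn.execute("INSERT INTO segment (recording_id, start_ms, end_ms, transcription) "
             "VALUES (1, 0, 2300, 'example utterance')")
for row in conn.execute("SELECT start_ms, end_ms, transcription FROM segment"):
    print(row)
conn.close()
```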