
    Study to determine potential flight applications and human factors design guidelines for voice recognition and synthesis systems

    A study was conducted to determine potential commercial aircraft flight deck applications and implementation guidelines for voice recognition and synthesis. First, a survey of voice recognition and synthesis technology was undertaken to develop a working knowledge base. Then, numerous potential aircraft and simulator flight deck voice applications were identified, and each proposed application was rated on a number of criteria to produce an overall payoff rating. The potential voice recognition applications fell into five general categories: programming, interrogation, data entry, switch and mode selection, and continuous/time-critical action control. The ratings of the first three categories showed the most promise of benefiting flight deck operations. Possible applications of voice synthesis systems were categorized as automatic or pilot-selectable, and many were rated as potentially beneficial. In addition, voice system implementation guidelines and pertinent performance criteria are proposed. Finally, the findings of this study are compared with those of a recent NASA study of a 1995 transport concept.

    Design and evaluation of acceleration strategies for speeding up the development of dialog applications

    In this paper, we describe a complete development platform featuring several innovative acceleration strategies, not included in any other current platform, that simplify and speed up the definition of the different elements required to design a spoken dialog service. The proposed accelerations are mainly based on using information from the backend database schema and contents, as well as cumulative information produced throughout the different steps of the design. Thanks to these accelerations, the interaction between the designer and the platform is improved, and in most cases the design is reduced to simple confirmations of the “proposals” that the platform dynamically provides at each step. The platform also provides several other accelerations, such as configurable templates for defining the different tasks in the service or the dialogs that obtain or show information to the user, automatic proposals for the best way to request slot contents from the user (i.e. using mixed-initiative or directed forms), an assistant that offers the most probable actions required to complete the definition of the different tasks in the application, and another assistant for resolving modality-specific details such as confirming user answers or presenting the lists of results retrieved from the backend database. Additionally, the platform supports the creation of speech grammars and prompts, database access functions, and the use of mixed-initiative and over-answering dialogs. In the paper we also describe each assistant in the platform in detail, emphasizing the methodology followed to facilitate the design process in each one. Finally, we describe the results of both a subjective and an objective evaluation with different designers, which confirm the viability, usefulness, and functionality of the proposed accelerations. Thanks to the accelerations, design time is reduced by more than 56% and the number of keystrokes by 84%.
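
    The schema-driven “proposal” idea above can be illustrated with a small sketch (Python is used purely for illustration; the class and function names, the type mapping, and the form-style heuristic are assumptions, not the platform's actual API):

        # Hypothetical sketch: deriving dialog-slot proposals and a form-style
        # suggestion from a backend database schema, in the spirit of the
        # schema-driven accelerations described above. All names are illustrative.
        from dataclasses import dataclass

        @dataclass
        class Column:
            name: str
            sql_type: str
            nullable: bool = True

        def propose_slots(columns):
            """Turn schema columns into candidate dialog slots with a type guess."""
            type_map = {"INTEGER": "number", "DATE": "date", "VARCHAR": "free-text"}
            return [{"slot": c.name,
                     "kind": type_map.get(c.sql_type, "free-text"),
                     "required": not c.nullable}
                    for c in columns]

        def propose_form_style(slots):
            """Suggest mixed-initiative when several required slots could be filled
            in one utterance; otherwise a directed (one-question-at-a-time) form."""
            required = [s for s in slots if s["required"]]
            return "mixed-initiative" if len(required) > 1 else "directed"

        if __name__ == "__main__":
            flights = [Column("origin", "VARCHAR", nullable=False),
                       Column("destination", "VARCHAR", nullable=False),
                       Column("departure_date", "DATE", nullable=False),
                       Column("airline", "VARCHAR")]
            slots = propose_slots(flights)
            print(slots)
            print("Proposed form style:", propose_form_style(slots))

    A real designer-facing platform would present such proposals interactively for confirmation rather than printing them, but the sketch shows how schema information alone can seed the slot and form definitions.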

    The Army word recognition system

    The application of speech recognition technology in the Army command and control area is presented. The problems associated with this program are described, as well as its relevance in terms of man/machine interaction, voice inflections, and the amount of training needed to interact with and utilize the automated system.

    ATS ranging and position fixing experiment, 25 November 1968 - 25 August 1969 Quarterly report

    Bandwidth tests for ranging and position fixing by ATS-1 and ATS-

    A simulation system for Space Station extravehicular activity

    America's next major step into space will be the construction of a permanently manned Space Station, which is currently under development and scheduled for full operation in the mid-1990s. Most of the construction of the Space Station will be performed over several flights by suited crew members during extravehicular activity (EVA) from the Space Shuttle. Once the station is fully operational, EVAs will be performed from it on a routine basis to provide, among other services, maintenance and repair of satellites currently in Earth orbit. Both voice recognition and helmet-mounted display technologies can improve the productivity of workers in space by potentially reducing the time, risk, and cost involved in performing EVA. NASA has recognized this potential and is currently developing a voice-controlled information system for Space Station EVA. Two bench-model helmet-mounted displays and an EVA simulation program have been developed to demonstrate the functionality and practicality of the system.

    DTO-675: Voice Control of the Closed Circuit Television System

    This report presents the results of the Detail Test Object (DTO)-675 "Voice Control of the Closed Circuit Television (CCTV)" system. The DTO is a follow-on flight of the Voice Command System (VCS) that flew as a secondary payload on STS-41. Several design changes were made to the VCS for the STS-78 mission. This report discusses those design changes, the data collected during the mission, the recognition problems encountered, and the findings.

    Army Hand Signal Recognition System using Smartwatch Sensors

    The organized armies of the world each have their own hand signal systems for delivering commands and messages between combatants during operations such as search, reconnaissance, and infiltration. For instance, to command a troop to stop, a commander raises his or her fist to face height. When an operation is carried out by a small unit, the hand signal system plays a very important role. However, the method has clear limitations: each signal must be relayed by individuals, and a soldier waiting attentively for a signal may lose attention on front observation and become distracted. Another limitation is that it takes a certain amount of time to convey signals from the first person to the last. Although these limitations concern only brief moments, they can be fatal on the battlefield. Gesture recognition has emerged as a very important and effective mode of human-computer interaction (HCI). Applying inertial measurement unit (IMU) sensor data from smart devices has taken gesture recognition to the next level, because people no longer need to rely on external equipment, such as a camera, to read movements. Wearable devices in particular can be more suitable for gesture recognition than hand-held devices because of their distinct strengths. If soldiers could deliver signals using an off-the-shelf smartwatch, without additional training, many drawbacks of the current hand signal system would be resolved. On the battlefield, cameras that record combatants' movements for image processing can be neither installed nor utilized, and countless obstacles, such as tree branches, trunks, or valleys, hinder a camera from observing the combatants' movements. Because of these unique characteristics of the battlefield, a gesture recognition system using a smartwatch can be the most appropriate solution for making troop mobility more efficient and secure. For the system to be used successfully in a combat zone, it requires high precision and prompt processing, although accuracy and operating speed are inversely related in most cases. This paper presents a gesture recognition tool for army hand signals with high accuracy and fast processing speed. It is expected that the army hand signal recognition system (AHSR) will assist small units in carrying out their maneuvers with higher efficiency.
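
    The abstract describes an IMU-based recognizer without giving its internals, so the following is only a minimal sketch of the general pipeline implied above: window the smartwatch accelerometer stream, extract simple statistical features, and classify with a standard supervised model. The window length, feature set, gesture labels, and synthetic data are assumptions, not AHSR's actual design.

        # Illustrative IMU gesture recognition pipeline (not AHSR's implementation):
        # windowed accelerometer samples -> statistical features -> k-NN classifier.
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def extract_features(window):
            """window: (n_samples, 3) accelerometer readings -> fixed-length feature vector."""
            return np.concatenate([window.mean(axis=0),
                                   window.std(axis=0),
                                   window.max(axis=0) - window.min(axis=0)])

        def make_dataset(recordings, labels):
            """recordings: list of (n_samples, 3) arrays, one per gesture instance."""
            X = np.stack([extract_features(r) for r in recordings])
            return X, np.asarray(labels)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            # Synthetic stand-ins for two gestures with different motion signatures.
            halt = [rng.normal([0, 0, 9.8], [0.2, 0.2, 3.0], size=(50, 3)) for _ in range(20)]
            rally = [rng.normal([1.0, 1.0, 9.8], [1.5, 1.5, 0.3], size=(50, 3)) for _ in range(20)]
            X, y = make_dataset(halt + rally, ["halt"] * 20 + ["rally"] * 20)

            clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
            test = rng.normal([0, 0, 9.8], [0.2, 0.2, 3.0], size=(50, 3))
            print("Predicted gesture:", clf.predict([extract_features(test)])[0])

    On-device use would add a sliding window over the live sensor stream and a rejection threshold for non-gesture motion; those details are beyond what the abstract specifies.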

    Static Voronoi-Based Target Expansion Technique for Distant Pointing

    Addressing the challenges of distant pointing, we present VTE (Voronoi-based Target Expansion), a feedforward static targeting assistance technique. VTE statically displays all activation areas by dividing the total screen space into areas such that there is exactly one target inside each area, i.e. a Voronoi tessellation. The key benefit of VTE is that it gives the user an immediate understanding of the targets' activation boundaries before the pointing task even begins: VTE thus provides static targeting assistance for both phases of a pointing task, the ballistic motion and the corrective phase. With the goal of keeping the environment visually uncluttered, we present a first user study exploring the visual parameters of VTE that affect the technique's performance. In a second user study focusing on static versus dynamic assistance, we compare VTE with Bubble Ray, a dynamic Voronoi-based targeting assistance technique for distant pointing. Results show that VTE significantly outperforms the dynamic assistance technique and is preferred by users both for ray-casting pointing and for relative pointing with a hand-controlled cursor.
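
    The activation rule at the heart of VTE, as described above, is that a cursor anywhere on screen activates the target whose Voronoi cell contains it, which reduces to a nearest-target lookup. A minimal sketch of that selection rule follows; the target names and coordinates are hypothetical.

        # Minimal sketch of Voronoi-based target activation: every screen point
        # belongs to the cell of its nearest target, so selection is a
        # nearest-neighbour query. Target positions below are hypothetical.
        import math

        def nearest_target(cursor, targets):
            """Return the name of the target whose Voronoi cell contains the cursor."""
            return min(targets, key=lambda name: math.dist(cursor, targets[name]))

        if __name__ == "__main__":
            targets = {"play": (120, 80), "stop": (400, 90), "menu": (260, 300)}
            for cursor in [(150, 100), (390, 60), (250, 280)]:
                print(cursor, "->", nearest_target(cursor, targets))

    VTE's contribution is drawing these cell boundaries statically before the pointing motion starts, so the same rule also tells the renderer which region to outline for each target.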