7 research outputs found

    Deriving and Exploiting Situational Information in Speech: Investigations in a Simulated Search and Rescue Scenario

    The need for automatic recognition and understanding of speech is emerging in tasks involving the processing of large volumes of natural conversations. In application domains such as Search and Rescue, exploiting automated systems for extracting mission-critical information from speech communications has the potential to make a real difference. Spoken language understanding has commonly been approached by identifying units of meaning (such as sentences, named entities, and dialogue acts) to provide a basis for further discourse analysis. However, this fine-grained identification of fundamental units of meaning is sensitive to high error rates in the automatic transcription of noisy speech. This thesis demonstrates that topic segmentation and identification techniques can be employed for information extraction from spoken conversations while remaining robust to such errors. Two novel topic-based approaches are presented for extracting situational information within the search and rescue context. The first approach shows that identifying changes in the context and content of first responders' reports over time can provide an estimate of their location. The second approach presents a speech-based topological map estimation technique that is inspired, in part, by automatic mapping algorithms commonly used in robotics. The proposed approaches are evaluated on a goal-oriented conversational speech corpus, which was designed and collected based on an abstract communication model between a first responder and a task leader during a search process. Results confirm that a highly imperfect transcription of noisy speech has limited impact on information extraction performance compared with that obtained on the transcription of clean speech data. This thesis also shows that speech recognition accuracy can benefit from rescoring the initial transcription hypotheses based on the derived high-level location information.
A new two-pass speech decoding architecture is presented. In this architecture, the location estimate from a first decoding pass is used to dynamically adapt a general language model, which is then used to rescore the initial recognition hypotheses. This decoding strategy yields a statistically significant gain in the recognition accuracy of spoken conversations in high background noise. It is concluded that the techniques developed in this thesis can be extended to other application domains that deal with large volumes of natural spoken conversations.
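The two-pass idea can be sketched in miniature: an N-best list from a first decoding pass is rescored after a general language model is adapted toward words tied to the estimated location. All transcripts, scores, and the boost factor below are illustrative assumptions, not values from the thesis.

```python
import math

# Hypothetical N-best list from a first pass: (transcript, acoustic log-score).
nbest = [
    ("proceeding to the second floor corridor", -12.0),
    ("proceeding to the second four corridor", -11.5),
]

# Illustrative general unigram log-probabilities (not from the thesis corpus).
general_lm = {"floor": math.log(0.01), "four": math.log(0.02)}
UNK = math.log(1e-4)  # back-off score for out-of-vocabulary words

def adapt_lm(lm, location_terms, boost=3.0):
    """Bias the general LM toward words tied to the estimated location."""
    adapted = dict(lm)
    for w in location_terms:
        adapted[w] = adapted.get(w, UNK) + math.log(boost)
    return adapted

def rescore(nbest, lm, lm_weight=5.0):
    """Combine acoustic score with (adapted) LM score; return best hypothesis."""
    def total(hyp):
        text, acoustic = hyp
        lm_score = sum(lm.get(w, UNK) for w in text.split())
        return acoustic + lm_weight * lm_score
    return max(nbest, key=total)

# The location estimated in pass one ("second floor") boosts "floor",
# so the location-consistent hypothesis wins the second pass.
best = rescore(nbest, adapt_lm(general_lm, {"floor"}))
print(best[0])
```

With the unadapted `general_lm`, the acoustically stronger but location-inconsistent hypothesis would win instead, which is the error the second pass is meant to correct.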

    A pervasive neural network based fall detection system on smart phone

    This paper presents a pervasive fall detection system for smart phones that monitors the activities of elderly users and identifies the occurrence of falls. The proposed system was developed as a smart phone-based application under the name Smart Fall Detection© (SFD). SFD is a standalone Android-based application that detects falls using a trained multilayer perceptron (MLP) neural network while utilizing smart phone resources such as the accelerometer sensor and GPS. Data from the accelerometer are evaluated by the MLP to determine whether a fall has occurred. When the neural network detects a fall, a help request is sent to a specified emergency contact by SMS and, subsequently, whenever GPS data become available, the exact location of the fallen person is sent. The evaluation shows that SFD can detect falls with an accuracy of 91.25%.
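As an illustration of the detection-and-alert path described above, here is a minimal, hypothetical sketch: a simple threshold on the acceleration magnitude stands in for the trained MLP, and the alert function mirrors the SMS-first, GPS-follow-up flow. The threshold value, function names, and messages are assumptions for illustration, not the app's actual code.

```python
import math

def magnitude(sample):
    """Signal magnitude of one tri-axial accelerometer reading (x, y, z) in g."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def detect_fall(window, impact_g=2.5):
    """Stand-in for the trained MLP: flag a fall if any reading in the window
    exceeds an impact threshold. The real app would pass features extracted
    from the window through the MLP instead."""
    return any(magnitude(s) > impact_g for s in window)

def on_fall(location=None):
    """Sketch of the alert path: SMS first, location follow-up once GPS fixes."""
    messages = ["Fall detected - please check on me."]
    if location is not None:
        messages.append("Location: %.5f, %.5f" % location)
    return messages

adl = [(0.0, 0.0, 1.0), (0.1, 0.2, 1.1)]    # ordinary movement, around 1 g
fall = [(0.0, 0.0, 1.0), (1.9, 1.8, 2.0)]   # impact spike, around 3.3 g
print(detect_fall(adl), detect_fall(fall))  # False True
```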

    Development of wearable human fall detection system using multilayer perceptron neural network

    This paper presents an accurate wearable fall detection system that can identify the occurrence of falls among the elderly population. A waist-worn tri-axial accelerometer was used to capture the movement signals of the human body. A set of laboratory-based falls and activities of daily living (ADL) were performed by volunteers with different physical characteristics. The collected acceleration patterns were classified precisely into falls and ADL using a multilayer perceptron (MLP) neural network. This work resulted in a highly accurate wearable fall detection system with an accuracy of 91.6%.

    Mobile robots communication and control framework for USARSim

    In recent years there have been intensive efforts to introduce robotics research from the earliest stages of education. The subject of this paper is a powerful mobile robot communication and control framework for the USARSim simulator that can be used both for research and for education. The Mobile Robots Communication and Control Framework (MCCF) was developed to offer a faster and easier communication process with the USARSim server from within Matlab, which differentiates it from most existing basic open-source control interfaces. Most notably, it takes advantage of easy integration with the analysis and control methods provided in Matlab toolboxes. MCCF enables communication with, and control of, a wide range of robot platforms including, but not limited to, wheeled robots, legged robots, submarine robots and aerial robots. In this paper we describe its general architecture, its features and examples of its use for researchers interested in mobile robot simulation for education and research.
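To give a flavour of what such a framework wraps, the sketch below formats the line-based text commands that USARSim accepts over TCP (the paper's MCCF does this from Matlab). The `INIT`/`DRIVE` command names and brace-delimited fields follow USARSim's documented protocol, but treat the exact syntax, robot class name, and values here as illustrative assumptions.

```python
def usar_command(name, **fields):
    """Build one USARSim command line: NAME {Key Value} ... ended with CRLF."""
    parts = [name] + ["{%s %s}" % (k, v) for k, v in fields.items()]
    return " ".join(parts) + "\r\n"

def spawn_robot(cls, name, location):
    """Ask the server to spawn a robot of the given class at (x, y, z)."""
    return usar_command("INIT", ClassName=cls, Name=name,
                        Location=",".join(str(c) for c in location))

def drive(left, right):
    """Differential-drive command for a wheeled robot."""
    return usar_command("DRIVE", Left=left, Right=right)

print(spawn_robot("USARBot.P3AT", "R1", (4.5, 1.9, 1.8)))
print(drive(1.0, 1.0))
# In use, these strings would be written to a TCP socket connected to the
# USARSim server, and sensor status messages read back line by line.
```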

    Evaluation of fall detection classification approaches

    As we grow old, our desire for independence does not decrease, while our health needs to be monitored more frequently. Accidents such as falls can be a serious problem for the elderly. An accurate automatic fall detection system can help elderly people stay safe in every situation. In this paper a waist-worn fall detection system is proposed. A tri-axial accelerometer (ADXL345) was used to capture the movement signals of the human body and detect events such as walking and falling to a reasonable degree of accuracy. A set of laboratory-based falls and activities of daily living (ADL) were performed by healthy volunteers with different physical characteristics. This paper presents a comparison of different machine learning classification algorithms, using the Waikato Environment for Knowledge Analysis (WEKA) platform, for distinguishing falling patterns from ADL patterns. The aim is to investigate the performance of different classification algorithms on a set of recorded acceleration data. The algorithms are Multilayer Perceptron, Naive Bayes, Decision Tree, Support Vector Machine, ZeroR and OneR. The acceleration data, comprising a total of 6962 instances with 29 attributes, were used to evaluate the performance of the different classification algorithms. Results show that the Multilayer Perceptron algorithm is the best option among the mentioned algorithms, due to its high accuracy in fall detection.
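The two simplest baselines in that comparison, ZeroR and OneR, are easy to reproduce by hand. The sketch below implements both on a tiny synthetic stand-in for the accelerometer dataset (the MLP, Naive Bayes, decision tree and SVM come from WEKA and are not reimplemented here); the data values are invented for illustration only.

```python
from collections import Counter

# Tiny synthetic stand-in for the paper's dataset (the real one has
# 6962 instances and 29 attributes). Each row: (attribute tuple, label).
data = [
    ((1, 0), "ADL"), ((1, 1), "ADL"), ((1, 0), "ADL"),
    ((0, 1), "fall"), ((0, 0), "fall"), ((1, 1), "ADL"),
]

def zero_r(rows):
    """ZeroR baseline: always predict the majority class."""
    majority = Counter(label for _, label in rows).most_common(1)[0][0]
    return lambda x: majority

def one_r(rows):
    """OneR: pick the single attribute whose value -> majority-class rule
    makes the fewest errors on the training rows."""
    best = None
    for a in range(len(rows[0][0])):
        counts = {}
        for x, label in rows:
            counts.setdefault(x[a], Counter())[label] += 1
        table = {v: c.most_common(1)[0][0] for v, c in counts.items()}
        errors = sum(table[x[a]] != label for x, label in rows)
        if best is None or errors < best[0]:
            best = (errors, a, table)
    _, a, table = best
    return lambda x: table.get(x[a])

def accuracy(model, rows):
    return sum(model(x) == label for x, label in rows) / len(rows)

zr, or_ = zero_r(data), one_r(data)
print(accuracy(zr, data), accuracy(or_, data))  # OneR beats ZeroR here
```

On this toy data ZeroR scores 4/6 (majority class "ADL") while OneR finds a perfect single-attribute rule; on the real data these baselines give the floor against which the MLP's advantage is measured.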

    Dynamic graphical instructions result in improved attitudes and decreased task completion time in human-robot co-working: an experimental manufacturing study

    Collaborative robots offer opportunities to increase the sustainability of work and workforces by increasing productivity, quality, and efficiency, whilst removing workers from hazardous, repetitive, and strenuous tasks. They also offer opportunities for increasing accessibility to work, supporting those who may otherwise be disadvantaged through age, ability, gender, or other characteristics. However, to maximise the benefits, employers must overcome negative attitudes toward, and a lack of confidence in, the technology, and must take steps to reduce errors arising from misuse. This study explores how dynamic graphical signage could be employed to address these issues in a manufacturing task. Forty employees from one UK manufacturing company participated in a field experiment to complete a precision pick-and-place task working in conjunction with a collaborative robotic arm. Twenty-one participants completed the task with the support of dynamic graphical signage that provided information about the robot and the activity, while the rest completed the same task with no signage. The presence of the signage improved the completion time of the task as well as reducing negative attitudes towards the robots. Furthermore, participants provided with no signage had worse outcome expectancies as a function of their response time. Our results indicate that the provision of instructional information conveyed through appropriate graphical signage can improve task efficiency and user wellbeing, contributing to greater workforce sustainability. The findings will be of interest for companies introducing collaborative robots as well as those wanting to improve their workforce wellbeing and technology acceptance.

    Language-free graphical signage improves human performance and reduces anxiety when working collaboratively with robots

    As robots become more ubiquitous, and their capabilities extend, novice users will require intuitive instructional information related to their use. This is particularly important in the manufacturing sector, which is set to be transformed under Industry 4.0 by the deployment of collaborative robots in support of traditionally low-skilled, manual roles. In the first study of its kind, this paper reports how static graphical signage can improve performance and reduce anxiety in participants physically collaborating with a semi-autonomous robot. Three groups of 30 participants collaborated with a robot to perform a manufacturing-type process using graphical information that was relevant to the task, irrelevant, or absent. The results reveal that the group exposed to relevant signage was significantly more accurate in undertaking the task. Furthermore, their anxiety towards robots significantly decreased as a function of increasing accuracy. Finally, participants exposed to graphical signage showed positive emotional valence in response to successful trials. At a time when workers are concerned about the threat posed by robots to jobs, and with advances in technology requiring upskilling of the workforce, it is important to provide intuitive and supportive information to users. Whilst increasingly sophisticated technical solutions are being sought to improve communication and confidence in human-robot co-working, our findings demonstrate how simple signage can still be used as an effective tool to reduce user anxiety and increase task performance.