
    Voice Identification Using Classification Algorithms

    This article discusses classification algorithms for the problem of identifying a person by voice using machine learning methods. We used the MFCC algorithm for speech preprocessing. To solve the problem, a comparative analysis of five classification algorithms was carried out. In the first experiment, the support vector machine (accuracy 0.90) and the multilayer perceptron (accuracy 0.83) showed the best results. In the second experiment, a multilayer perceptron combined with the RobustScaler method reached an accuracy of 0.93 and was proposed for personal identification. Therefore, a multilayer perceptron that takes into account the specifics of the speech signal can be used to solve this problem.
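    A minimal sketch of the pipeline this abstract describes, assuming librosa for MFCC extraction and scikit-learn for the RobustScaler and multilayer perceptron; the file paths, labels and hyperparameters below are placeholders, not the authors' exact settings.

```python
# Hypothetical pipeline: MFCC features -> RobustScaler -> MLP classifier.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler

def mfcc_features(path, n_mfcc=13):
    """Load an utterance and average its MFCCs over time into one vector."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Placeholder corpus: one file and one integer speaker label per utterance.
utterance_paths = ["spk0_a.wav", "spk0_b.wav", "spk1_a.wav", "spk1_b.wav"]
speaker_labels = [0, 0, 1, 1]

X = np.array([mfcc_features(p) for p in utterance_paths])
y = np.array(speaker_labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y)

clf = make_pipeline(RobustScaler(), MLPClassifier(max_iter=1000))
clf.fit(X_tr, y_tr)
print("identification accuracy:", clf.score(X_te, y_te))
```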

    Speaker Recognition: Advancements and Challenges


    Taxonomic Classification of IoT Smart Home Voice Control

    Voice control in the smart home is commonplace, enabling the convenient control of smart home Internet of Things hubs, gateways and devices, along with information-seeking dialogues. Cloud-based voice assistants are used to facilitate the interaction, yet privacy concerns surround the cloud analysis of the resulting data. To what extent can voice control be performed using purely local computation, to ensure user data remains private? In this paper we present a taxonomy of the voice control technologies present in commercial smart home systems. We first review literature on the topic and summarise relevant work categorising IoT devices and voice control in the home. The taxonomic classification of these entities is then presented, and we analyse our findings. Following on, we turn to academic efforts in implementing and evaluating voice-controlled smart home set-ups, and we then discuss open-source libraries and devices that are applicable to the design of a privacy-preserving voice assistant for smart homes and the IoT. Towards the end, we consider additional technologies and methods that could support a cloud-free voice assistant, and conclude the work.
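    As an illustration of the purely local processing the paper argues for, the sketch below runs offline speech recognition with the open-source Vosk library. Vosk is our illustrative choice here, not necessarily one of the libraries the paper reviews, and the model directory and audio file are placeholders.

```python
# Offline (cloud-free) recognition of a voice command with Vosk.
# "model" is a locally downloaded Vosk model directory; "command.wav"
# is a placeholder 16 kHz mono PCM recording.
import json
import wave
from vosk import KaldiRecognizer, Model

model = Model("model")
wf = wave.open("command.wav", "rb")
rec = KaldiRecognizer(model, wf.getframerate())

while True:
    data = wf.readframes(4000)
    if len(data) == 0:
        break
    if rec.AcceptWaveform(data):
        print(json.loads(rec.Result())["text"])  # finalized segment
print(json.loads(rec.FinalResult())["text"])      # remaining tail
```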

    RADIC Voice Authentication: Replay Attack Detection using Image Classification for Voice Authentication Systems

    Systems like Google Home, Alexa, and Siri that use voice-based authentication to verify their users’ identities are vulnerable to voice replay attacks. These attacks gain unauthorized access to voice-controlled devices or systems by replaying recordings of passphrases and voice commands. This demonstrates the need to develop more resilient voice-based authentication systems that can detect voice replay attacks. This thesis implements a system that detects voice-based replay attacks by using deep learning and image classification of voice spectrograms to differentiate between live and recorded speech. Tests of this system indicate that the approach represents a promising direction for detecting voice-based replay attacks.
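    A hedged sketch of the general technique the thesis describes: binary image classification of voice spectrograms with a small convolutional network in PyTorch. The architecture and sizes are illustrative assumptions, not the thesis's actual model.

```python
# Illustrative binary classifier over 1-channel spectrogram "images".
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    """Distinguish live speech from replayed recordings (logits for 2 classes)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # class 0 = live, class 1 = replayed

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = SpectrogramCNN()
dummy_batch = torch.randn(8, 1, 128, 128)  # 8 fake 128x128 spectrograms
logits = model(dummy_batch)                # shape (8, 2)
```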

    Data Augmentation for Speaker Identification under Stress Conditions to Combat Gender-Based Violence

    This article belongs to the Special Issue IberSPEECH 2018: Speech and Language Technologies for Iberian Languages. A speaker identification system for a personalized wearable device to combat gender-based violence is presented in this paper. Speaker recognition systems exhibit a decrease in performance when the user is under emotional or stress conditions. The objective of this paper is therefore to measure the effects of stress on speech and ultimately to mitigate its consequences on a speaker identification task, by using data augmentation techniques specifically tailored for this purpose given the lack of data resources for this condition. Extensive experimentation has been carried out to assess the effectiveness of the proposed techniques. First, we conclude that the best performance is always obtained when naturally stressed samples are included in the training set; second, when these are not available, substituting and augmenting them with synthetically generated stress-like samples improves the performance of the system. This work is partially supported by the Spanish Government-MinECo project TEC2017-84395-P and Madrid Regional Project Y2018/TCS-5046.
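    The abstract does not detail how the stress-like samples are synthesized, but one plausible form of such augmentation is to raise the pitch and speaking rate of neutral recordings. The librosa sketch below is illustrative only, with assumed shift and rate values and a placeholder file.

```python
# Stress-like augmentation of a neutral utterance: higher pitch, faster rate.
import librosa

def stress_like(y, sr, pitch_steps=2.0, rate=1.15):
    """Shift pitch up and compress duration to mimic speech under stress."""
    y = librosa.effects.pitch_shift(y=y, sr=sr, n_steps=pitch_steps)
    return librosa.effects.time_stretch(y=y, rate=rate)

y, sr = librosa.load("neutral_utterance.wav", sr=16000)  # placeholder file
y_augmented = stress_like(y, sr)  # add to the training set alongside y
```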

    Development of a Limb prosthesis by reverse mechanotransduction

    Recent developments in the field of limb prostheses have focused on using the body signals of the user to generate the desired motion in the prosthesis. Unlike earlier designs, this approach is more effective and less stressful for the amputee. The signals used to date are EMG, EEG and neural signals. Another possible source of body signal is the pH value of the neuromuscular junction, which depends upon ion movements across the muscle tissue. Hence, it is reasonable to assume that changes in pH can accurately mimic the intended changes in the amputated limb muscles, and can therefore be used to turn the user’s desired motion into actual motion of the limb prosthesis. In the current model, this is achieved by means of a pH-to-voltage converter that converts the pH value into a voltage, which in turn drives the motor. The direction of movement is controlled by a microcontroller-based circuit. The model presented in this thesis could be improved further if the pH values could be read more accurately and also employed to determine the direction of finger movement. The same working principle could also be applied to more complex models of hand prostheses, producing more widely applicable results.
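    A purely conceptual simulation of the described control idea (not the thesis hardware): map the deviation from resting pH to a bounded drive voltage, with the sign of the deviation choosing the direction. The resting pH, gain and direction rule are assumptions for illustration.

```python
# Conceptual simulation: pH deviation -> bounded drive voltage and direction.
PH_REST = 7.0   # assumed resting pH at the neuromuscular junction
GAIN = 2.5      # assumed volts per pH unit of deviation
V_MAX = 5.0     # assumed supply limit of the converter

def ph_to_voltage(ph):
    """Map the deviation from resting pH to a motor drive voltage."""
    return min(GAIN * abs(ph - PH_REST), V_MAX)

def motor_direction(ph):
    """Assumed rule: acidification flexes, alkalization extends."""
    return "flex" if ph < PH_REST else "extend"

for ph in (6.6, 7.0, 7.3):
    print(f"pH {ph}: {motor_direction(ph)} at {ph_to_voltage(ph):.2f} V")
```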

    Radio2Text: Streaming Speech Recognition Using mmWave Radio Signals

    Millimeter wave (mmWave) based speech recognition opens up new possibilities for audio-related applications, such as conference speech transcription and eavesdropping. However, considering practicality in real scenarios, latency and recognizable vocabulary size are two critical factors that cannot be overlooked. In this paper, we propose Radio2Text, the first mmWave-based system for streaming automatic speech recognition (ASR) with a vocabulary size exceeding 13,000 words. Radio2Text is based on a tailored streaming Transformer that is capable of effectively learning representations of speech-related features, paving the way for streaming ASR with a large vocabulary. To alleviate the inability of streaming networks to access entire future inputs, we propose Guidance Initialization, which transfers feature knowledge related to the global context from a non-streaming Transformer to the tailored streaming Transformer through weight inheritance. Further, we propose a cross-modal structure based on knowledge distillation (KD), named cross-modal KD, to mitigate the negative effect of low-quality mmWave signals on recognition performance. In the cross-modal KD, an audio streaming Transformer provides feature and response guidance carrying rich and accurate speech information to supervise the training of the tailored radio streaming Transformer. The experimental results show that Radio2Text achieves a character error rate of 5.7% and a word error rate of 9.4% on a vocabulary of over 13,000 words. Comment: Accepted by Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (ACM IMWUT/UbiComp 2023).
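    A minimal sketch of the cross-modal knowledge-distillation idea described above, in which an audio teacher supervises the radio student through feature (MSE) and response (softened KL) guidance added to the ASR task loss. The loss weights, temperature and tensor shapes are assumptions, not the paper's exact formulation.

```python
# Sketch of a cross-modal KD objective: task loss + feature + response guidance.
import torch
import torch.nn.functional as F

def cross_modal_kd_loss(task_loss, student_feat, teacher_feat,
                        student_logits, teacher_logits,
                        alpha=0.5, beta=0.5, temperature=2.0):
    """Add teacher feature (MSE) and response (softened KL) terms to the task loss."""
    feat_loss = F.mse_loss(student_feat, teacher_feat.detach())
    resp_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits.detach() / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return task_loss + alpha * feat_loss + beta * resp_loss

# Dummy shapes: batch of 4, 50 frames, 256-dim features, 1000 output tokens.
s_feat, t_feat = torch.randn(4, 50, 256), torch.randn(4, 50, 256)
s_log, t_log = torch.randn(4, 50, 1000), torch.randn(4, 50, 1000)
loss = cross_modal_kd_loss(torch.tensor(1.0), s_feat, t_feat, s_log, t_log)
```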

    A Software Testbed for Assessing Human-Robot Verbal Interaction

    Verbal interaction provides a natural and social-style interaction mode by which robots can communicate with the general public, who are likely to be unknowledgeable about robotics. This interaction mechanism is also very important for a broad range of users, such as hands/eyes-busy users, motor-impaired users, users with vision impairment and users working in hostile environments. Verbal interaction is very popular in robotics, especially in personal assistive robots used to help elderly people and in entertainment robots. Several research endeavors have sought to endow robots with verbal interaction as a high-level faculty. However, many of them used simple language and cannot be considered full speech dialogue systems providing natural language understanding. In this thesis, we investigate a testbed platform that can be deployed to enable human-robot verbal interaction. The proposed approach encompasses a design pattern-based user interface and a user-independent automatic speech recognizer with a modified grammar module in the context of human-robot interaction. The user interface is used to simulate a robot’s responses to multiple users’ voice commands. The performance of the proposed testbed has been evaluated quantitatively using a set of evaluation metrics such as word correct rate, recognition time, and success and false action rates. The conducted experiments show the promising features of the system. The results obtained could be refined even further by training the system on more voice commands, and the whole system could be ported to real robotic platforms such as Peoplebot to endow them with natural language understanding.
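    As a small illustration of one of the metrics mentioned, the sketch below computes a per-command word correct rate as the fraction of reference words recognised in position; this exact definition is our assumption and may differ from the thesis's.

```python
# Word correct rate: fraction of reference words recognised in position.
def word_correct_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    correct = sum(1 for r, h in zip(ref, hyp) if r == h)
    return correct / len(ref) if ref else 0.0

print(word_correct_rate("move to the kitchen", "move to the kitchen"))  # 1.0
print(word_correct_rate("move to the kitchen", "move to the bedroom"))  # 0.75
```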

    State of the art of audio- and video-based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply persons in need with smart assistance, responding to their necessities of autonomy, independence, comfort, security and safety.

    The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives.

    In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed there, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, movements, and overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals’ activities and health status can be derived from processing audio signals (e.g., speech recordings).

    Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, and to meet high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.

    This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethically aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted.

    The report ends with an overview of the challenges, hindrances and opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, it illustrates the current procedural and technological approaches to coping with acceptability, usability and trust in AAL technology, surveying strategies and approaches to co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential of the silver economy is overviewed.