
    Online Deception Detection Using BDI Agents

    This research has two facets within separate research areas. It extends capability development for Belief, Desire and Intention (BDI) agents, and it advances deception detection research through automation built on BDI agents. Because BDI agents perform tasks automatically and autonomously, this study used them to automate deception detection with limited intervention by human users. The resulting capability is general enough to have practical application for private individuals, investigators, organizations and others. The need for this research is grounded in the fact that humans are not very effective at detecting deception, whether in written or spoken form. It also addresses a limitation of typical deception detection tools, which are labor intensive and require the text in question to be extracted and then ingested into the tool. A neural network module was incorporated to give the resulting prototype machine learning capabilities. The prototype developed in this research classified online data as either deceptive or not deceptive with 85% accuracy. The false discovery rate was 20% for entries classified as deceptive and 10% for entries classified as not deceptive. The system was stable during test runs: no crashes or other anomalous behavior were observed during the testing phase. The prototype successfully interacted with an online data communications server database and processed data using neural network input vector generation algorithms within seconds.
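
    The abstract does not spell out the input-vector generation or the network topology. Purely as a hedged illustration, the sketch below assumes a handful of hand-crafted linguistic-cue features and a small feed-forward network (scikit-learn's MLPClassifier); every feature, name, and training example here is hypothetical, not taken from the prototype.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def make_input_vector(text):
            """Hypothetical input-vector generation: a few linguistic cues
            commonly used in deception research (word count, first-person
            pronoun rate, negation rate)."""
            words = text.lower().split()
            n = max(len(words), 1)
            first_person = sum(w in {"i", "me", "my", "mine"} for w in words) / n
            negation = sum(w in {"no", "not", "never"} for w in words) / n
            return np.array([len(words), first_person, negation], dtype=float)

        # Toy training data: 1 = deceptive, 0 = not deceptive (invented examples).
        train = [
            ("i never took the money i swear", 1),
            ("the report was filed on tuesday", 0),
            ("i did not do it no never", 1),
            ("we met at noon and reviewed the data", 0),
        ]
        X = np.stack([make_input_vector(t) for t, _ in train])
        y = np.array([label for _, label in train])

        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
        clf.fit(X, y)
        print(clf.predict([make_input_vector("i never said that, not once")]))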

    Interactive and life-long learning for identification and categorization tasks

    This thesis focuses on life-long and interactive learning for recognition tasks. To achieve these goals, a separation into a short-term memory (STM) and a long-term memory (LTM) is proposed. For the incremental build-up of the STM, a similarity-based one-shot learning method was developed. Furthermore, two consolidation algorithms are proposed that enable the incremental learning of LTM representations. Based on the Learning Vector Quantization (LVQ) network architecture, an error-based node insertion rule and a node-dependent learning rate are proposed to enable life-long learning. For category learning, a forward feature selection method is additionally introduced to separate co-occurring categories. Experiments demonstrate the performance of these learning methods on difficult visual recognition problems.
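
    The thesis's exact insertion rule and learning-rate schedule are not given in the abstract; the following numpy sketch only illustrates the general idea of an LVQ learner with a per-node learning rate and an error-based node insertion rule, under assumptions of my own (threshold, decay factor, one-shot initialization).

        import numpy as np

        class IncrementalLVQ:
            """Toy LVQ with a node-dependent learning rate and error-based
            node insertion; a sketch, not the thesis's exact algorithm."""

            def __init__(self, error_threshold=3):
                self.protos, self.labels = [], []   # prototype vectors, class labels
                self.rates, self.errors = [], []    # per-node learning rate, error count
                self.error_threshold = error_threshold

            def _nearest(self, x):
                return int(np.argmin([np.linalg.norm(x - p) for p in self.protos]))

            def partial_fit(self, x, y):
                x = np.asarray(x, dtype=float)
                if not self.protos:                 # one-shot: first sample becomes a node
                    self._insert(x, y)
                    return
                k = self._nearest(x)
                if self.labels[k] == y:             # correct: attract node, decay its rate
                    self.protos[k] += self.rates[k] * (x - self.protos[k])
                    self.rates[k] *= 0.99
                else:                               # wrong: repel node, count the error
                    self.protos[k] -= self.rates[k] * (x - self.protos[k])
                    self.errors[k] += 1
                    if self.errors[k] >= self.error_threshold:
                        self._insert(x, y)          # error-based node insertion

            def _insert(self, x, y):
                self.protos.append(x.copy())
                self.labels.append(y)
                self.rates.append(0.3)
                self.errors.append(0)

            def predict(self, x):
                return self.labels[self._nearest(np.asarray(x, dtype=float))]

        lvq = IncrementalLVQ(error_threshold=1)
        for x, y in [([0, 0], "a"), ([1, 1], "b"), ([0.1, 0.2], "a")]:
            lvq.partial_fit(x, y)
        print(lvq.predict([0.9, 1.0]))              # -> "b"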

    The challenges and opportunities of human-centred AI for trustworthy robots and autonomous systems

    The trustworthiness of robots and autonomous systems (RAS) has taken a prominent position on the way towards full autonomy. This work is the first to systematically explore the key facets of human-centred AI for trustworthy RAS. We identify five key properties of a trustworthy RAS: it must be (i) safe in any uncertain and dynamic environment; (ii) secure, i.e., able to protect itself from cyber threats; (iii) healthy and fault-tolerant; (iv) trusted and easy to use, to enable effective human-machine interaction (HMI); and (v) compliant with the law and ethical expectations. While applications of RAS have mainly focused on performance and productivity, not enough scientific attention has been paid to the risks posed by advanced AI in RAS. We analytically examine the challenges of implementing trustworthy RAS with respect to the five key properties, and explore the role and roadmap of AI technologies in ensuring the trustworthiness of RAS with respect to safety, security, health, HMI, and ethics. A new acceptance model of RAS is provided as a framework for human-centric AI requirements and for implementing trustworthy RAS by design. This approach promotes human-level intelligence to augment human capabilities and focuses on contributing to humanity.

    A Study on a System That Expresses Emotions and Behaviors Based on Facial Expressions

    Doctoral dissertation, Kyushu Institute of Technology (degree number: Joko-haku-ko No. 320; conferred March 24, 2017). Contents: 1 Introduction | 2 Configuration of CONBE Robot System | 3 Animal-like Behavior of CONBE Robot using CBA | 4 Emotion Generating System of CONBE Robot | 5 Experiment and discussion | 6 Conclusions. Kyushu Institute of Technology, 2016.

    Adaptive user interface for vehicle swarm control

    An algorithm to automatically generate behaviors for robotic vehicles has been created and tested in a laboratory setting. This system is designed for situations in which a large number of robotic vehicles must be controlled by a single operator. The system learns which behaviors the operator typically issues and offers these behaviors to the operator in future missions. The algorithm uses the symbolic clustering method Gram-ART to generate the behaviors; Gram-ART has previously been shown to successfully cluster standard symbolic problems such as the mushroom dataset and the Unix commands dataset. The algorithm was tested by having users complete exploration and tracking missions over two sessions. In the first session, users familiarized themselves with the testing interface and generated training information for Gram-ART. In the second session, they ran missions with and without the generated behaviors to determine the effect of the generated behaviors on their performance. Through these human tests, missions with generated behaviors enabled are shown to reduce operator workload relative to those without: they required fewer button presses while maintaining a similar or greater level of mission success. Users also responded positively in a survey after the second session, with most responses indicating that the generated behaviors increased their ability to complete the missions. --Abstract, page iii
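
    Gram-ART itself clusters grammar-structured representations; as a rough illustration of the vigilance-driven ART dynamics it builds on, the sketch below substitutes flat token sets and a Jaccard match score, so its details (vigilance value, intersection update) are stand-ins rather than the actual algorithm.

        def art_cluster(sequences, vigilance=0.5):
            """ART-style clustering of symbolic token sets: a sample joins the
            best-matching category only if the match beats the vigilance
            threshold; otherwise it founds a new category."""
            categories = []                         # prototype token sets
            assignment = []
            for seq in sequences:
                tokens = set(seq)
                scored = [(len(tokens & proto) / len(tokens | proto), i)
                          for i, proto in enumerate(categories)]
                best_match, best = max(scored, default=(-1.0, None))
                if best is not None and best_match >= vigilance:
                    categories[best] &= tokens      # keep features shared by all members
                    assignment.append(best)
                else:
                    categories.append(set(tokens))  # mismatch: open a new category
                    assignment.append(len(categories) - 1)
            return assignment, categories

        # Invented operator commands; real Gram-ART consumes grammar parses instead.
        commands = [("move", "north"), ("move", "south"),
                    ("scan", "area"), ("move", "north")]
        labels, cats = art_cluster(commands, vigilance=0.4)
        print(labels)                               # assignment: [0, 1, 2, 0]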

    A Generalized Neural Network Approach to Mobile Robot Navigation and Obstacle Avoidance

    In this thesis, we tackle the problem of extending neural network navigation algorithms to various types of mobile robots and 2-dimensional range sensors. We propose a general method to interpret the data from various types of 2-dimensional range sensors, together with a neural network algorithm to perform the navigation task. Our approach yields a global navigation algorithm that can be applied to various types of range sensors and mobile robot platforms. Moreover, the method allows the neural networks to be trained using only one type of 2-dimensional range sensor, which reduces the time required for training the networks. Experimental results obtained in simulation environments demonstrate the effectiveness of our approach to mobile robot navigation for different kinds of robots and sensors. The successful implementation of our method therefore provides a way to apply mobile robot navigation algorithms across robot platforms.
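
    The abstract leaves the sensor-interpretation method unspecified; one plausible reading, sketched below with hypothetical parameters, is to resample any 2-dimensional range scan onto a fixed angular grid and normalize by the sensor's maximum range, so that different sensors yield the same network input format.

        import numpy as np

        def scan_to_input(angles, ranges, max_range, n_bins=36):
            """Resample a 2-D range scan (any resolution or field of view)
            onto a fixed angular grid and normalize, giving every sensor
            the same network input format."""
            angles = np.asarray(angles, dtype=float)
            ranges = np.clip(np.asarray(ranges, dtype=float), 0.0, max_range)
            order = np.argsort(angles)              # np.interp needs increasing x
            grid = np.linspace(-np.pi, np.pi, n_bins)
            return np.interp(grid, angles[order], ranges[order]) / max_range

        # A dense laser scan and a sparse sonar ring both become 36-element inputs.
        laser = scan_to_input(np.linspace(-np.pi, np.pi, 720), np.full(720, 4.0), 30.0)
        sonar = scan_to_input(np.linspace(-np.pi, np.pi, 16), np.full(16, 4.0), 5.0)
        print(laser.shape, sonar.shape)             # (36,) (36,)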

    A Framework For Enhancing Speaker Age And Gender Classification By Using A New Feature Set And Deep Neural Network Architectures

    Speaker age and gender classification is one of the most challenging problems in speech processing. With recent technological developments, identifying a speaker's age and gender has become a necessity for speaker verification and identification systems, with uses such as identifying suspects in criminal cases, improving human-machine interaction, and adapting on-hold music to people waiting in a queue. Although many studies have focused on feature extraction and classifier design, classification accuracies are still not satisfactory. The key issues in identifying a speaker's age and gender are generating robust features and designing an effective deep classifier. Age and gender information is concealed in the speaker's speech, which is subject to many factors such as background noise, speech content, and phonetic divergence. In this work, different methods are proposed to enhance speaker age and gender classification using deep neural networks (DNNs) as feature extractors and classifiers. First, a model for generating new features from a DNN is proposed. The method uses the Hidden Markov Model Toolkit (HTK) to find tied-state triphones for all utterances, which are used as labels for the DNN's output layer. A DNN with a bottleneck layer is trained in an unsupervised manner to calculate the initial weights between layers, then trained and tuned in a supervised manner to generate transformed mel-frequency cepstral coefficients (T-MFCCs). Second, a shared class labels method is introduced among misclassified classes to regularize the weights in the DNN. Third, DNN-based speaker models using the shifted delta cepstral (SDC) feature set are proposed; these speaker-aware models capture the characteristics of speaker age and gender more effectively than a model that represents a group of speakers. In addition, the AGender-Tune system is proposed to classify speaker age and gender by jointly fine-tuning two DNN models: the first pre-trained to classify speaker age, and the second pre-trained to classify speaker gender. Moreover, the new T-MFCC feature set is used as the input to a fusion of two systems, a DNN-based class model and a DNN-based speaker model; utilizing the T-MFCCs as input and fusing the final score with the score of the DNN-based class model enhanced the classification accuracies. Finally, the DNN-based speaker models are embedded into the AGender-Tune system to exploit the advantages of each method for better speaker age and gender classification. Experimental results on a public, challenging database show the effectiveness of the proposed methods, achieving state-of-the-art results on this database.
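
    As a hedged sketch of the bottleneck idea behind the T-MFCCs (the layer sizes, input dimension, and label count here are invented, and the unsupervised pre-training stage is omitted), a small PyTorch network can be trained to predict frame labels, tied-state triphones in the thesis, and its narrow hidden layer reused as a feature extractor:

        import torch
        import torch.nn as nn

        class BottleneckDNN(nn.Module):
            """The network learns to predict frame labels (tied-state
            triphones in the thesis); the narrow hidden layer is then
            reused as a feature extractor."""

            def __init__(self, n_in=39, n_bottleneck=13, n_labels=100):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Linear(n_in, 256), nn.ReLU(),
                    nn.Linear(256, n_bottleneck),   # the bottleneck layer
                )
                self.head = nn.Sequential(nn.ReLU(), nn.Linear(n_bottleneck, n_labels))

            def forward(self, x):
                return self.head(self.encoder(x))

            def transform(self, x):
                return self.encoder(x)              # bottleneck ("T-MFCC"-like) features

        model = BottleneckDNN()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        frames = torch.randn(32, 39)                # stand-in MFCC frames
        labels = torch.randint(0, 100, (32,))       # stand-in triphone labels
        loss = loss_fn(model(frames), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(model.transform(frames).shape)        # torch.Size([32, 13])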

    Visual Attention for Robotic Cognition: A Biologically Inspired Probabilistic Architecture

    The human being, the most magnificent autonomous entity in the universe, frequently decides 'what to look at' in day-to-day life without even realizing the complexities of the underlying process. When it comes to the design of such an attention system for autonomous robots, this apparently simple task suddenly appears extremely complex, with highly dynamic interaction among motor skills, knowledge and experience developed throughout a lifetime, the highly connected circuitry of the visual cortex, and super-fast timing. The most fascinating thing about the visual attention system of the primates is that the underlying mechanism is not yet precisely known. Different influential theories and hypotheses regarding this mechanism, however, have been proposed in psychology and neuroscience. These theories and hypotheses have encouraged research on the synthetic modeling of visual attention in computer vision, computational neuroscience and, very recently, AI robotics. The motivation behind the computational modeling of visual attention is two-fold: understanding the mechanisms underlying primate cognition, and using the principle of focused attention in real-world applications, e.g. in computer vision, surveillance, and robotics. Accordingly, we observe the rise of two different trends in the computational modeling of visual attention. The first is mostly focused on developing mathematical models which mimic, as much as possible, the details of the primates' attention system: the structure, the connectivity among visual neurons and different regions of the visual cortex, the flow of information, and so on. Such models provide a way to test theories of primate visual attention with minimal involvement of live subjects; this is a magnificent way to use technological advancement for the understanding of human cognition. The second trend, on the other hand, uses the methodological sophistication of biological processes (like visual attention) to advance technology. These models are mostly concerned with developing a technical system of visual attention which can be used in real-world applications where the principle of focused attention might play a significant role in managing redundant information. This thesis is focused on developing a computational model of visual attention for robotic cognition and, therefore, belongs to the second trend. The design of a visual attention model for robotic systems, as a component of their cognition, comes with a number of challenges which generally do not appear in traditional computer vision applications of visual attention. Robotic models of visual attention, although heavily inspired by the rich literature on visual attention in computer vision, adopt different measures to cope with these challenges. This thesis proposes a Bayesian model of visual attention designed specifically for robotic systems, and therefore tackles the challenges involved in robotic visual attention. The operation of the proposed model is guided by the theory of biased competition, a popular theory from cognitive neuroscience describing the mechanism of primates' visual attention. The proposed Bayesian attention model offers a robot-centric approach to visual attention in which the head-pose of a robot in the 3D world is estimated recursively, so that the robot can focus on the most behaviorally relevant stimuli in its environment.
    The behavioral relevance of an object is determined based on two criteria inspired by the postulates of the biased competition hypothesis of primate visual attention. Accordingly, the proposed model encourages a robot to focus on novel stimuli, or on stimuli similar to a 'sought for' object, depending on the context. In order to address a number of robot-specific issues of visual attention, the proposed model is further extended to the multi-modal case, where speech commands from the human are used to modulate the visual attention behavior of the robot. Owing to its inherent Bayesian sensor fusion characteristic, the model naturally accommodates multi-modal information during attention selection. This enables the proposed model to serve as the core component of an attention-oriented, speech-based human-robot interaction framework. Extensive experiments are performed in the real world to investigate different aspects of the proposed Bayesian visual attention model.
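
    The abstract does not give the model's equations; the sketch below is only an illustration of a recursive Bayesian update that mixes the two biased-competition cues (novelty and similarity to a sought object) with an optional speech-derived prior, and all scores and weights in it are hypothetical.

        import numpy as np

        def update_attention(prior, novelty, target_sim, speech_prior=None, w=0.5):
            """One recursive update: the likelihood mixes a novelty cue and a
            similarity-to-sought-object cue (the two relevance criteria),
            optionally modulated by a prior derived from a speech command."""
            likelihood = w * novelty + (1.0 - w) * target_sim
            if speech_prior is not None:            # multi-modal modulation
                likelihood = likelihood * speech_prior
            posterior = prior * likelihood
            return posterior / posterior.sum()      # normalize to a distribution

        # Three candidate gaze directions; all cue scores are invented.
        belief = np.full(3, 1 / 3)                  # uniform initial belief
        for novelty, sim in [([0.9, 0.1, 0.2], [0.1, 0.1, 0.8]),
                             ([0.2, 0.1, 0.3], [0.1, 0.2, 0.9])]:
            belief = update_attention(belief, np.array(novelty), np.array(sim))
        print(belief.argmax())                      # index of the attended stimulus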

    On the Recognition of Emotion from Physiological Data

    This work encompasses several objectives, but is primarily concerned with an experiment in which 33 participants were shown 32 slides in order to create 'weakly induced emotions'. Recordings of the participants' physiological state were taken, as well as self-reports of their emotional state. We then used an assortment of classifiers to predict emotional state from the recorded physiological signals, a process known as Physiological Pattern Recognition (PPR). We investigated techniques for recording, processing and extracting features from six different physiological signals: Electrocardiogram (ECG), Blood Volume Pulse (BVP), Galvanic Skin Response (GSR), Electromyography (EMG) of the corrugator muscle, finger skin temperature, and respiratory rate. The state of PPR emotion detection was improved by allowing nine different weakly induced emotional states to be detected at nearly 65% accuracy, an improvement in the number of states readily detectable. The work presents many investigations into numerical feature extraction from physiological signals and dedicates a chapter to collating and trialling facial electromyography techniques. We also created a hardware device to collect participants' self-reported emotional states, which yielded several improvements to the experimental procedure.
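
    The thesis trials an assortment of classifiers over six signals; the following minimal sketch (synthetic data, one signal, an arbitrary window length, a single scikit-learn classifier) only illustrates the windowed statistical feature extraction step that such PPR pipelines share.

        import numpy as np
        from sklearn.svm import SVC

        def window_features(signal, fs, win_s=5.0):
            """Cut a physiological signal into fixed windows and compute
            simple statistics per window: mean, standard deviation, slope."""
            step = int(fs * win_s)
            feats = []
            for start in range(0, len(signal) - step + 1, step):
                w = signal[start:start + step]
                slope = np.polyfit(np.arange(step), w, 1)[0]
                feats.append([w.mean(), w.std(), slope])
            return np.array(feats)

        # Synthetic 30 s "GSR" traces at 32 Hz standing in for two states.
        fs = 32
        calm = np.random.default_rng(0).normal(2.0, 0.05, 30 * fs)
        aroused = calm + np.linspace(0.0, 1.5, 30 * fs)  # rising conductance

        X = np.vstack([window_features(calm, fs), window_features(aroused, fs)])
        y = np.array([0] * 6 + [1] * 6)             # six 5 s windows per trace
        clf = SVC().fit(X, y)
        print(clf.predict(window_features(aroused, fs)))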