104 research outputs found

    An IVR call performance classification system using computational intelligent techniques

    Speech recognition adoption within Interactive Voice Response (IVR) systems is on the increase. If implemented correctly, businesses experience increased IVR utilization by customers and thus benefit from reduced operational costs. However, it is essential for businesses to evaluate the productivity, quality and call-resolution performance of these self-service applications. This research concerns the development of a business analytics application for IVR that could assist contact centers in evaluating these self-service IVR applications. A call classification system for a pay-beneficiary IVR application has been developed. The system comprises field and call performance classification components. ‘Say account’, ‘Say amount’, ‘Select beneficiary’ and ‘Say confirmation’ field classifiers were developed using a Multi-Layer Perceptron (MLP) Artificial Neural Network (ANN), a Radial Basis Function (RBF) ANN, a Fuzzy Inference System (FIS) and a Support Vector Machine (SVM). Call performance classifiers were also developed using these computational intelligence techniques. Binary- and real-coded Genetic Algorithm (GA) solutions were used to determine optimal MLP and RBF ANN classifiers, and produced accurate classifiers of both kinds. To increase the accuracy of the call performance RBF ANN classifier, its classification threshold was optimized, which improved accuracy by approximately eight percent. The field and call performance MLP ANN classifiers were nevertheless the most accurate ANN solutions. Polynomial and RBF SVM kernel functions were best suited to field classification, whereas the linear SVM kernel function was most accurate for call performance classification. Compared to the ANN and SVM field classifiers, the FIS field classifiers did not perform well, although the FIS call performance classifier did outperform the RBF ANN call performance network.
Ensembles of MLP ANN, RBF ANN and SVM field classifiers were developed, and ensembles of FIS, MLP ANN and SVM call performance classifiers were also implemented. All the computational intelligence methods considered were compared on accuracy, sensitivity and specificity. The MLP classifier is most appropriate for ‘Say account’ field classification; the ensemble of field classifiers and the MLP classifier performed best for ‘Say amount’ field classification; and the ensemble of field classifiers and the SVM classifier are best suited to ‘Select beneficiary’ and ‘Say confirmation’ field classifications. For call performance classification, the ensemble of call performance classifiers is the preferred solution.
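The threshold-optimization step described above can be sketched minimally as follows, assuming a binary field or call classifier that outputs scores in [0, 1]; the function names, the threshold grid, and the synthetic data in the usage note are illustrative, not taken from the thesis:

```python
import numpy as np

def best_threshold(scores, labels, grid=None):
    """Sweep candidate thresholds over classifier scores and keep the
    one that maximizes classification accuracy."""
    if grid is None:
        grid = np.linspace(0.0, 1.0, 101)
    best_t, best_acc = 0.5, -1.0
    for t in grid:
        acc = float(np.mean((scores >= t).astype(int) == labels))
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

def sensitivity_specificity(scores, labels, t):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    pred = (scores >= t).astype(int)
    tp = int(np.sum((pred == 1) & (labels == 1)))
    fn = int(np.sum((pred == 0) & (labels == 1)))
    tn = int(np.sum((pred == 0) & (labels == 0)))
    fp = int(np.sum((pred == 1) & (labels == 0)))
    return tp / (tp + fn), tn / (tn + fp)
```

By construction, the accuracy at the selected threshold is at least that of the default 0.5 cut-off, which mirrors how a tuned threshold could lift an RBF ANN classifier's accuracy without retraining the network.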

    Neuro-fuzzy software for intelligent control and education

    Integrated master's thesis. Electrical and Computer Engineering (Automation major). Faculdade de Engenharia, Universidade do Porto. 200

    Prediction, classification and diagnosis of spur gear conditions using artificial neural network and acoustic emission

    The gear system is a critical component in machinery, and predicting the performance of a gear system is an important function. Unpredictable failures of a gear system can pose serious threats to human life and have large-scale economic effects. It is necessary to inspect gear teeth periodically to identify crack propagation and other damage at the earliest stage. This study has two main objectives. First, the research predicted and classified the specific film thickness (λ) of spur gears using Artificial Neural Network (ANN) and regression models. Acoustic emission (AE), temperature and specific film thickness (λ) data were extracted from the work of other researchers. The acoustic emission signals and temperature were used as inputs to the ANN and regression models, while λ was the output. The second objective was to use a third-generation ANN (a Spiking Neural Network) for fault diagnosis and classification of spur gears based on the AE signal. For this purpose, a test rig was built with several gear faults. The AE signal went through preprocessing, feature extraction and feature selection before the ANN diagnosis and classification model was built; these steps improve the accuracy of the diagnosis system by controlling the information, or features, fed into the model. This research investigated the possibility of improving the accuracy of spur gear condition monitoring and fault diagnosis by using Feed-Forward Back-Propagation (FFBP) neural networks, the Elman Network (EN), regression models and a Spiking Neural Network (SNN). The findings showed that using specific film thickness enabled the FFBP network to achieve 99.9% classification accuracy, while the regression and multiple regression models attained 73.3% and 81.2% classification accuracy, respectively. For gear fault diagnosis, the SNN achieved nearly 97% accuracy.
Finally, the methods used in the study have proven to be highly accurate and can be used as tools for prediction, classification and fault diagnosis in spur gears.
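The multiple-regression baseline compared against the ANN above can be sketched as an ordinary least-squares fit of λ against AE and temperature; the function names and the assumed feature layout (AE first, temperature second) are illustrative, not from the thesis:

```python
import numpy as np

def fit_multiple_regression(X, y):
    """Ordinary least squares: lambda_hat = b0 + b1*AE + b2*temperature.
    A stand-in for the thesis's multiple regression model."""
    A = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_lambda(coef, X):
    """Apply the fitted coefficients to new AE/temperature rows."""
    A = np.column_stack([np.ones(len(X)), X])
    return A @ coef
```

An FFBP or Elman network replaces this closed-form linear map with a learned nonlinear one, which is consistent with the accuracy gap the study reports between the regression models and the FFBP network.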

    Cyber-Physical Embedded Systems with Transient Supervisory Command and Control: A Framework for Validating Safety Response in Automated Collision Avoidance Systems

    The ability to design and engineer complex and dynamical Cyber-Physical Systems (CPS) requires a systematic view, including a definition of the intended level of automation for the system. Since CPS covers a diverse range of systemized implementations of smart and intelligent technologies networked within a system of systems (SoS), the terms “smart” and “intelligent” are frequently used to describe systems that perform complex operations with a reduced need for a human agent. Unlike most published research on CPS, which focuses on the performance of the CPS, this research focuses on the correctness of its design. Because both human and machine agency are used at different levels of automation, or autonomy, the level of automation has profound implications for the reliability and safety of the CPS. The human agent and the machine agent are tidally locked in decision-making, using both feedforward and feedback information flows in similar processes, so a transient shift in the level of automation while the CPS is operating can have undesired consequences. As CPS become more common and higher levels of autonomy are embedded within them, the relationship between human agent and machine agent becomes more complex, and the testing methodologies for verification and validation of performance and correctness also become more complex and less clear. A framework is therefore developed to help the practitioner understand the difficulties and pitfalls of CPS designs, and to provide guidance for test engineering of soft computational systems using combinations of modeling, simulation, and prototyping.
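One way to picture the transient-shift hazard is a supervisory guard that refuses abrupt jumps between automation levels; the level names, the one-level-per-transition rule, and the class design below are hypothetical illustrations, not the framework from the dissertation:

```python
from enum import IntEnum

class LOA(IntEnum):
    """Illustrative (hypothetical) levels of automation."""
    MANUAL = 1
    ASSISTED = 2
    SUPERVISED = 3
    AUTONOMOUS = 4

class SupervisoryController:
    """Rejects transient jumps of more than one automation level,
    forcing a deliberate handover between human and machine agents."""
    def __init__(self, level=LOA.MANUAL):
        self.level = level
        self.log = []  # audit trail of (from, to, accepted) for validation

    def request_transition(self, target):
        accepted = abs(int(target) - int(self.level)) <= 1
        self.log.append((self.level, target, accepted))
        if accepted:
            self.level = target
        return accepted
```

The audit trail is the point of the sketch: validating safety response means being able to replay every attempted handover, accepted or not, against the declared automation intent.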

    The 1st International Conference on Computational Engineering and Intelligent Systems

    Computational engineering, artificial intelligence and smart systems constitute a multidisciplinary topic bridging computer science, engineering and applied mathematics that has created a variety of fascinating intelligent systems. Computational engineering combines fundamental engineering and science with advanced knowledge of mathematics, algorithms and computer languages. It is concerned with the modeling and simulation of complex systems and with data processing methods. Computing and artificial intelligence lead to smart systems: advanced machines designed to fulfill certain specifications. This proceedings book is a collection of papers presented at the first International Conference on Computational Engineering and Intelligent Systems (ICCEIS2021), held online December 10-12, 2021. The collection covers a wide scope of engineering topics, including smart grids, intelligent control, artificial intelligence, optimization, microelectronics and telecommunication systems. The contributions included in this book are of high quality, present their topics succinctly, and can serve as an excellent reference for readers in the field of computational engineering, artificial intelligence and smart systems.

    Advanced Topics in Systems Safety and Security

    This book presents valuable research results in the challenging field of systems (cyber)security. It is a reprint of the Information (MDPI, Basel) Special Issue (SI) on Advanced Topics in Systems Safety and Security. The competitive review process of MDPI journals guarantees the quality of the presented concepts and results. The SI comprises high-quality papers on cutting-edge research topics in the cybersecurity of computer networks and industrial control systems. The contributions in this book are mainly extended versions of selected papers presented at the 7th and 8th editions of the International Workshop on Systems Safety and Security (IWSSS), which took place in Romania in 2019 and 2020, respectively. In addition to the selected IWSSS papers, the special issue includes other valuable and relevant contributions. The papers in this reprint discuss subjects ranging from the detection of cyberattacks and criminal activities, the evaluation of attacker skills, and the modeling of cyberattacks, to mobile application security evaluation. Given this diversity of topics and the scientific level of the papers, we consider this book a valuable reference for researchers in the security and safety of systems.

    Temporal Information in Data Science: An Integrated Framework and its Applications

    Data science is a well-known buzzword that is in fact composed of two distinct keywords, i.e., data and science. Data itself is of great importance: each analysis task begins from a set of examples. Based on this consideration, the present work starts with the analysis of a real-case scenario: the development of a data warehouse-based decision support system for an Italian contact center company. Then, relying on the information collected in the developed system, a set of machine learning-based analysis tasks were developed to answer specific business questions, such as employee work anomaly detection and automatic call classification. Although these initial applications rely on already available algorithms, as we shall see, some clever analysis workflows also had to be developed. Afterwards, continuously driven by real data and real-world applications, we turned to the question of how to handle temporal information within classical decision tree models.
Our research led to the development of J48SS, a decision tree induction algorithm based on Quinlan's C4.5 learner, which is capable of dealing with temporal (e.g., sequential and time series) as well as atemporal (e.g., numerical and categorical) data within the same execution cycle. The decision tree has been applied to several real-world analysis tasks, proving its worth. A key characteristic of J48SS is its interpretability, an aspect that we specifically addressed through the study of an evolutionary decision tree pruning technique. Next, since a great deal of work on the management of temporal information has already been done in the automated reasoning and formal verification fields, a natural direction was to investigate how such solutions may be combined with machine learning, following two main tracks. First, we show, through the development of an enriched decision tree capable of encoding temporal information by means of interval temporal logic formulas, how a machine learning algorithm can successfully exploit temporal logic to perform data analysis. Then, we focus on the opposite direction, i.e., employing machine learning techniques to generate temporal logic formulas, in a natural language processing scenario. Finally, as a conclusive development, the architecture of a system is proposed in which formal methods and machine learning techniques are seamlessly combined to perform anomaly detection and predictive maintenance tasks. Such an integration represents an original, promising research direction that may open up new ways of dealing with complex, real-world problems.
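The general idea of feeding temporal data to a tree learner (though not J48SS itself, which handles time series natively) can be sketched as a two-step pipeline: summarize each series into atemporal features, then fit a tree on those; the feature choices and the toy depth-1 stump below are illustrative assumptions:

```python
import numpy as np

def series_features(ts):
    """Collapse a time series into atemporal features (mean, std, slope)
    that an ordinary decision tree can consume."""
    x = np.arange(len(ts))
    slope = np.polyfit(x, ts, 1)[0]   # linear-trend coefficient
    return np.array([np.mean(ts), np.std(ts), slope])

def fit_stump(X, y):
    """Depth-1 decision stump: best (feature, threshold, accuracy),
    trying both orientations of the split."""
    best = (0, 0.0, -1.0)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            acc = max(float(np.mean((X[:, j] >= t).astype(int) == y)),
                      float(np.mean((X[:, j] < t).astype(int) == y)))
            if acc > best[2]:
                best = (j, t, acc)
    return best
```

What J48SS contributes beyond such a pipeline is doing the temporal and atemporal handling inside one execution cycle of the induction algorithm, keeping the resulting model interpretable.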

    A Survey on Compiler Autotuning using Machine Learning

    Since the mid-1990s, researchers have been trying to use machine learning-based approaches to solve a number of different compiler optimization problems. These techniques primarily enhance the quality of the obtained results and, more importantly, make it feasible to tackle two main compiler optimization problems: optimization selection (choosing which optimizations to apply) and phase ordering (choosing the order in which to apply them). The compiler optimization space continues to grow due to the advancement of applications, the increasing number of compiler optimizations, and new target architectures. Generic optimization passes in compilers cannot fully leverage newly introduced optimizations and, therefore, cannot keep up with the pace of increasing options. This survey summarizes and classifies the recent advances in using machine learning for compiler optimization, particularly on the two major problems of (1) selecting the best optimizations and (2) the phase ordering of optimizations. The survey highlights the approaches taken so far, the obtained results, the fine-grained classification among different approaches and, finally, the influential papers of the field.
    Comment: version 5.0 (updated September 2018), preprint of the accepted journal paper at ACM CSUR 2018 (42 pages). This survey will be updated quarterly here (send me your newly published papers to be added in the subsequent version). History: Received November 2016; Revised August 2017; Revised February 2018; Accepted March 2018.
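Why phase ordering is hard can be seen in miniature with a brute-force search over pass permutations; the pass names and the toy cost model below are illustrative stand-ins (a real autotuner compiles and times the program, and ML-based approaches replace the exhaustive search with a learned predictor when n! is intractable):

```python
import itertools

PASSES = ["inline", "unroll", "vectorize"]  # illustrative pass names

def simulated_runtime(order):
    """Stand-in cost model: in this toy, unrolling before vectorizing
    helps, as does inlining first. Lower is better."""
    cost = 10.0
    if order.index("unroll") < order.index("vectorize"):
        cost -= 3.0
    if order[0] == "inline":
        cost -= 1.0
    return cost

def exhaustive_phase_ordering(passes):
    """Brute-force search over all n! orderings of the pass list."""
    return min(itertools.permutations(passes), key=simulated_runtime)
```

Even with three passes there are 3! = 6 orderings; with the dozens of passes in a production compiler the factorial search space is exactly what motivates the learned models the survey classifies.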