19 research outputs found

    Adapting Prosody in a Text-to-Speech System


    Multifunctional optimized group method data handling for software effort estimation

    Accurate software effort estimation is in increasing demand. Stakeholders need effective and efficient software development processes, with estimation that is accurate across all data types. Nevertheless, finding an effort estimation model with good accuracy remains difficult. Group Method of Data Handling (GMDH) algorithms have been widely used for modelling and identifying complex systems and are potentially applicable to software effort estimation. However, few studies have determined the best architecture and optimal weight coefficients of the transfer function for the GMDH model. This study proposes a hybrid multifunctional GMDH with Artificial Bee Colony (GMDH-ABC) based on a combination of four individual GMDH models: GMDH-Polynomial, GMDH-Sigmoid, GMDH-Radial Basis Function and GMDH-Tangent. The best GMDH architecture is determined using an L9 Taguchi orthogonal array. Five datasets (Cocomo, Desharnais, Albrecht, Kemerer and ISBSG) were used to validate the proposed models. Missing values in the datasets were imputed with the developed MissForest Multiple Imputation method (MFMI), and the Mean Absolute Percentage Error (MAPE) was used as the performance measure. The results showed that the GMDH-ABC model outperformed both the conventional GMDH models and the benchmark ANN model on all datasets, with improvements of roughly 50% or more over the conventional GMDH-LSM: 49% on the Cocomo dataset, and 71%, 63%, 67% and 82% on the Desharnais, Albrecht, Kemerer and ISBSG datasets, respectively. These results indicate that the proposed GMDH-ABC model can achieve higher accuracy in software effort estimation.
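    The abstract's accuracy comparisons rest on MAPE; a minimal sketch of how that measure is computed (the effort values below are illustrative, not data from the study):

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Illustrative effort values (person-months), not taken from the study.
actual = [120.0, 60.0, 300.0]
predicted = [100.0, 66.0, 270.0]
print(round(mape(actual, predicted), 2))  # → 12.22
```

    Note that MAPE weights relative error, so small projects with large percentage misses dominate the score, which is one reason it is a common choice for effort estimation benchmarks.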

    Texture-Based Segmentation and Finite Element Mesh Generation for Heterogeneous Biological Image Data

    The design, analysis and control of bio-systems remain an engineering challenge, mainly due to the material heterogeneity, boundary irregularity and nonlinear dynamics associated with these systems. Recent developments in imaging techniques and stochastic upscaling methods provide a window of opportunity to assess these bio-systems more accurately than ever before. However, using image data directly in an upscaled stochastic framework requires the development of certain intermediate steps. The goal of the research presented in this dissertation is to develop a texture-segmentation method and an unstructured mesh generation technique for heterogeneous image data. The following two new techniques are described and evaluated: 1. A new texture-based segmentation method, using stochastic continuum concepts and wavelet multi-resolution analysis, is developed for characterising heterogeneous materials in image data. Feature descriptors are developed to efficiently capture the micro-scale heterogeneity of macro-scale entities. The materials are then segmented at a representative elementary scale at which the statistics of the feature descriptor stabilise. 2. A new unstructured mesh generation technique for image data is developed using a hierarchical data structure. This representation allows quality-guaranteed finite element meshes to be generated. The framework for both methods allows them to be extended to higher dimensions. Experimental results show these methods to be promising tools for unifying data processing concepts within the upscaled stochastic framework across biological systems.
    These are targeted for inclusion in decision support systems where biological image data, simulation techniques and artificial intelligence will be used conjunctively and uniformly to assess bio-system quality and to design effective and appropriate treatments that restore system health.
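    The texture descriptors rest on wavelet multi-resolution analysis; as a rough illustration of the idea (not the dissertation's implementation), a single level of the 2D Haar decomposition splits an image patch into an approximation and three directional detail subbands, whose energies can serve as simple texture features:

```python
def haar2d_level(img):
    """One level of 2D Haar decomposition: returns (LL, LH, HL, HH) subbands.
    img is a list of lists with even dimensions."""
    rows, cols = len(img), len(img[0])
    LL = [[0.0] * (cols // 2) for _ in range(rows // 2)]
    LH = [[0.0] * (cols // 2) for _ in range(rows // 2)]
    HL = [[0.0] * (cols // 2) for _ in range(rows // 2)]
    HH = [[0.0] * (cols // 2) for _ in range(rows // 2)]
    for i in range(0, rows, 2):
        for j in range(0, cols, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4.0  # local average
            LH[i // 2][j // 2] = (a + b - c - d) / 4.0  # horizontal detail
            HL[i // 2][j // 2] = (a - b + c - d) / 4.0  # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 4.0  # diagonal detail
    return LL, LH, HL, HH

def subband_energy(band):
    """Mean squared coefficient: a simple texture feature per subband."""
    n = sum(len(row) for row in band)
    return sum(v * v for row in band for v in row) / n

LL, LH, HL, HH = haar2d_level([[9.0, 7.0], [3.0, 5.0]])
print(LL, LH, HL, HH)  # → [[6.0]] [[2.0]] [[0.0]] [[1.0]]
```

    Tracking how such subband statistics stabilise as the analysis window grows is one way to pick the representative elementary scale the abstract describes.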

    The characterisation and automatic classification of transmission line faults

    Includes bibliographical references. A country's ability to sustain and grow its industrial and commercial activities is highly dependent on a reliable electricity supply. Electrical faults on transmission lines cause both interruptions to supply and voltage dips; these are the most common events affecting electricity users and also have the largest financial impact on them. This research focuses on understanding the causes of transmission line faults and on developing methods to identify these causes automatically. Records of faults occurring on the South African power transmission system over a 16-year period were collected and analysed to find statistical relationships between local climate, key design parameters of the overhead lines and the main causes of power system faults. The results characterise the performance of the South African transmission system on a probabilistic basis and illustrate differences in fault-cause statistics between the summer and winter rainfall areas of South Africa and across different times of the year and day. This analysis lays a foundation for reliability analysis and fault pattern recognition that take environmental features such as local geography, climate and power system parameters into account. A key aspect of using pattern recognition techniques is selecting appropriate classifying features. Transmission line fault waveforms are characterised by instantaneous symmetrical component analysis to describe the transient and steady-state fault conditions. The waveform and environmental features are used to develop single nearest neighbour classifiers to identify the underlying cause of transmission line faults. A classification accuracy of 86% is achieved with a single nearest neighbour classifier, which is found to outperform decision tree, artificial neural network and naïve Bayes classifiers.
    The results demonstrate that transmission line faults can be automatically classified according to cause.
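    The symmetrical component analysis mentioned above is the standard Fortescue transform; a minimal phasor-domain sketch (the textbook transform, not the thesis's instantaneous implementation):

```python
import cmath

def symmetrical_components(ia, ib, ic):
    """Zero-, positive- and negative-sequence components of three phasors
    (Fortescue transform)."""
    a = cmath.exp(2j * cmath.pi / 3)  # 120-degree rotation operator
    i0 = (ia + ib + ic) / 3
    i1 = (ia + a * ib + a * a * ic) / 3
    i2 = (ia + a * a * ib + a * ic) / 3
    return i0, i1, i2

# A balanced three-phase set has only a positive-sequence component.
ia = 1 + 0j
ib = cmath.exp(-2j * cmath.pi / 3)
ic = cmath.exp(2j * cmath.pi / 3)
i0, i1, i2 = symmetrical_components(ia, ib, ic)
print(round(abs(i1), 3), round(abs(i0), 3), round(abs(i2), 3))  # → 1.0 0.0 0.0
```

    Under a fault, the negative- and zero-sequence magnitudes become non-zero in characteristic patterns, which is what makes these components useful as classifying features.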

    Applications of machine learning in diagnostics and prognostics of wind turbine high speed generator failure

    The cost of wind energy has decreased over the last decade as technology has matured and the industry has benefited greatly from economies of scale. That being said, operations and maintenance still make up a significant proportion of overall costs and need to be reduced over the coming years as sites, particularly offshore, become larger and more remote. One of the key tools to achieve this is enhancement of both SCADA and condition monitoring system analytics, leading to more informed and optimised operational decisions. Focusing on the wind turbine generator and high-speed assembly, this thesis aims to show how machine learning techniques can be used to enhance vibration spectral analysis and SCADA analysis for earlier and more automated fault detection. This is first performed separately, based on features extracted from the vibration spectra and performance data in isolation, before a framework is presented that combines the data sources into a single anomaly detection model for early fault diagnosis. Additionally, by further using vibration-based analysis, machine learning techniques and a synchronised database of failures, remaining useful life prediction is explored for generator bearing faults, a key component for increasing wind turbine generator reliability. It is shown that, through early diagnosis and accurate prognosis, component replacements can be planned and optimised before catastrophic failures and long downtimes occur. The results also indicate that this can have a significant impact on the costs of operation and maintenance over the lifetime of an offshore development.
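    One common way to fuse vibration and SCADA features into a single anomaly score, offered here only as an illustrative stand-in for the thesis's model, is to normalise each feature against a healthy-period baseline and combine the resulting z-scores:

```python
import math

def fit_baseline(healthy_rows):
    """Per-feature mean and standard deviation from healthy-period data.
    Each row is one timestamp's fused feature vector (vibration + SCADA)."""
    n = len(healthy_rows)
    dims = len(healthy_rows[0])
    means = [sum(r[d] for r in healthy_rows) / n for d in range(dims)]
    stds = [math.sqrt(sum((r[d] - means[d]) ** 2 for r in healthy_rows) / n)
            for d in range(dims)]
    return means, stds

def anomaly_score(row, means, stds):
    """Root-mean-square z-score across the fused features."""
    z = [(row[d] - means[d]) / stds[d] for d in range(len(row))]
    return math.sqrt(sum(v * v for v in z) / len(z))

# Illustrative baseline: [bearing vibration RMS, generator temperature].
means, stds = fit_baseline([[0.0, 10.0], [2.0, 12.0], [4.0, 14.0]])
print(anomaly_score([8.0, 20.0], means, stds) >
      anomaly_score([3.0, 13.0], means, stds))  # → True: anomalous row scores higher
```

    Thresholding such a score over time gives a simple early-warning signal; richer models (as explored in the thesis) replace the per-feature independence assumption with learned structure.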

    Application of Machine Learning within Visual Content Production

    We are living in an era in which digital content is being produced at a dazzling pace. The heterogeneity of contents and contexts is so varied that a vast number of applications have been created to respond to people's and market demands. The visual content production pipeline is a generalisation of the process that allows a content editor to create and evaluate a product, such as a video, an image or a 3D model. Such data is then displayed on one or more devices such as TVs, PC monitors, virtual reality head-mounted displays, tablets, mobiles or even smartwatches. Content creation can be as simple as clicking a button to film a video and share it on a social network, or as complex as managing a dense user interface full of parameters, using keyboard and mouse, to generate a realistic 3D model for a VR game. In this second example, such sophistication results in a steep learning curve for beginner-level users, while expert users regularly need to refine their skills via expensive lessons, time-consuming tutorials or experience. Thus, user interaction plays an essential role in the diffusion of content creation software, particularly when it is targeted at untrained people. With the rapid spread of virtual reality devices into the consumer market, new opportunities have arisen for designing reliable and intuitive interfaces. Such interactions need to take a step beyond the point-and-click interaction typical of the 2D desktop environment: they need to be smart, intuitive and reliable enough to interpret 3D gestures, and therefore more accurate pattern recognition algorithms are needed.
    In recent years, machine learning, and in particular deep learning, has achieved outstanding results in many branches of computer science, such as computer graphics and human-computer interaction, outperforming algorithms previously considered state of the art; however, there have been only fleeting efforts to translate this into virtual reality. In this thesis, we seek to apply deep learning models to two areas of the content production pipeline: advanced methods for user interaction and visual quality assessment. First, we focus on 3D sketching to retrieve models from an extensive database of complex geometries and textures while the user is immersed in a virtual environment. We explore both 2D and 3D strokes as tools for model retrieval in VR and implement a novel system for improving accuracy in searching for a 3D model. We contribute an efficient method for describing models through 3D sketches via iterative descriptor generation, focusing on both accuracy and user experience, and design a user study to compare different interactions for sketch generation. Second, we explore the combination of sketch input and vocal description to correct and fine-tune the search for 3D models in a database containing fine-grained variation, analysing sketch and speech queries and identifying a way to incorporate both into our system's interaction loop. Third, in the context of the visual content production pipeline, we present a detailed study of visual metrics and propose a novel method for detecting rendering-based artefacts in images, exploiting deep learning algorithms analogous to those used when extracting features from sketches.
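    Sketch-based model retrieval of the kind described above ultimately reduces to nearest-neighbour search over descriptor vectors; a minimal cosine-similarity ranking sketch (the descriptors and model names are illustrative, not the thesis's learned features):

```python
import math

def cosine(u, v):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_models(query_desc, database):
    """Return model ids sorted by descriptor similarity to the query sketch."""
    return sorted(database,
                  key=lambda mid: cosine(query_desc, database[mid]),
                  reverse=True)

# Hypothetical 3-dimensional descriptors for three models.
db = {"chair": [1.0, 0.0, 0.2], "table": [0.1, 1.0, 0.0], "lamp": [0.9, 0.1, 0.3]}
print(rank_models([1.0, 0.0, 0.25], db))  # → ['chair', 'lamp', 'table']
```

    In an iterative descriptor-generation loop, each new stroke updates `query_desc` and the ranking is re-run, so the candidate list refines as the user sketches.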

    A survey of Irish electronic industries towards development of a low cost MRP system to enhance the effectiveness of their inventory control

    This thesis is predominantly concerned with the study of inventory control practices within the electronics industry in Ireland. The study of the inventory control system has been carried out under three main interrelated sections: an industrial survey, the development of an MRP model and the development of a material flow simulation model. First, an industrial survey was carried out to identify the common problems and challenges in the electronics industry sector with respect to inventory control systems. The results of the survey, representing 44 companies, are presented; the survey classifies the Irish electronics industry sector in terms of company size, product structure and MRP levels. Second, based on the survey results, a low-cost MRP model has been developed to enhance the effectiveness of inventory control. The model has been solved for a variety of product structures using standard mathematical programming packages, and the results are compared with those of standard MRP lot-sizing techniques. The third section involves the development of a material flow simulation model using the SIMAN simulation package. The model is tested under a variety of operating conditions, and performance statistics are collected and analysed.
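    The netting step at the heart of any MRP calculation — gross requirements less on-hand stock and scheduled receipts, with lot-for-lot planned orders — can be sketched as follows (an illustrative textbook calculation, not the thesis's optimisation model; lead-time offsetting is omitted):

```python
def mrp_lot_for_lot(gross_reqs, on_hand, scheduled_receipts):
    """Period-by-period MRP netting with lot-for-lot planned orders.
    gross_reqs and scheduled_receipts are lists indexed by period."""
    inventory = on_hand
    planned_orders = []
    for period, gross in enumerate(gross_reqs):
        available = inventory + scheduled_receipts[period]
        net = max(0, gross - available)
        planned_orders.append(net)           # lot-for-lot: order exactly the shortfall
        inventory = available + net - gross  # carry remaining stock forward
    return planned_orders

# Hypothetical four-period plan: 35 units on hand, one scheduled receipt of 10.
print(mrp_lot_for_lot([40, 30, 50, 20],
                      on_hand=35,
                      scheduled_receipts=[0, 10, 0, 0]))  # → [5, 20, 50, 20]
```

    Alternative lot-sizing rules (fixed order quantity, period order quantity, Wagner-Whitin) change only how `net` is batched into orders, which is the dimension along which such models are typically compared.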

    Emotion and Stress Recognition Related Sensors and Machine Learning Technologies

    This book includes impactful chapters that present scientific concepts, frameworks, architectures and ideas on sensing technologies and machine learning techniques. These are relevant to tackling the following challenges: (i) the field readiness and use of intrusive sensor systems and devices for capturing biosignals, including EEG, ECG and electrodermal activity sensor systems; (ii) the quality assessment and management of sensor data; (iii) data preprocessing, noise filtering and calibration concepts for biosignals; (iv) the field readiness and use of nonintrusive sensor technologies, including visual, acoustic, vibration and piezoelectric sensors; (v) emotion recognition using mobile phones and smartwatches; (vi) body area sensor networks for emotion and stress studies; (vii) the use of experimental datasets in emotion recognition, including dataset generation principles and concepts, quality assurance and emotion elicitation material and concepts; (viii) machine learning techniques for robust emotion recognition, including graphical models, neural network methods, deep learning methods, statistical learning and multivariate empirical mode decomposition; (ix) subject-independent emotion and stress recognition concepts and systems, including facial expression-based, speech-based, EEG-based, ECG-based and electrodermal activity-based systems, multimodal recognition systems and sensor fusion concepts; and (x) emotion and stress estimation and forecasting from a nonlinear dynamical system perspective.