
    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, owing to the complex, heterogeneous nature of network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks. Comment: 46 pages, 22 figures
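To make the reinforcement-learning thread of the survey concrete, the sketch below applies tabular (bandit-style) Q-learning to a toy dynamic channel-selection task of the kind that arises in cognitive radio. The channel count, per-channel success probabilities and reward model are invented for illustration; they are not taken from the article.

```python
import random

# Toy channel-selection task: pick one of N_CHANNELS per step; a transmission
# succeeds with a hidden per-channel probability (assumed values below).
N_CHANNELS = 4
SUCCESS_PROB = [0.2, 0.9, 0.5, 0.6]  # hypothetical, channel 1 is best

def step(channel, rng):
    """Reward 1.0 on a successful transmission, else 0.0."""
    return 1.0 if rng.random() < SUCCESS_PROB[channel] else 0.0

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0] * N_CHANNELS          # single-state problem: one Q-value per channel
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            a = rng.randrange(N_CHANNELS)
        else:
            a = max(range(N_CHANNELS), key=lambda c: q[c])
        r = step(a, rng)
        q[a] += alpha * (r - q[a])  # stateless (bandit) form of the Q-update
    return q

q = train()
best = max(range(N_CHANNELS), key=lambda c: q[c])
print("learned Q-values:", [round(v, 2) for v in q], "best channel:", best)
```

With enough exploration the learned Q-values approach the hidden success probabilities, so the greedy policy settles on the most reliable channel.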

    Importance and applications of robotic and autonomous systems (RAS) in railway maintenance sector: a review

    Maintenance, which is critical for safe, reliable, high-quality, and cost-effective service, plays a dominant role in the railway industry. Therefore, this paper examines the importance and applications of Robotic and Autonomous Systems (RAS) in railway maintenance. More than 70 research publications describing RAS developments in railway maintenance, either in practice or under investigation, are analysed. It has been found that the majority of RAS developed are for rolling-stock maintenance, followed by railway track maintenance. Further, there is growing interest in and demand for robotic and autonomous systems in the railway maintenance sector, largely due to increased competition, rapid expansion and ever-increasing expenses.

    Quantitative analysis with machine learning models for multi-parametric brain imaging data

    Gliomas are considered the most common primary malignant brain tumors in adults. With dramatic increases in computational power and improvements in image analysis algorithms, computer-aided medical image analysis has been introduced into clinical applications. Precise tumor grading and genotyping play an indispensable role in clinical diagnosis, treatment and prognosis. Glioma diagnostic procedures include histopathological imaging tests, molecular imaging scans and tumor grading. Pathologic review of tumor morphology in histologic sections is the traditional method for cancer classification and grading, yet human review has limitations that can result in low reproducibility and poor inter-observer agreement. Compared with histopathological images, magnetic resonance (MR) imaging presents different structural and functional features, which might serve as noninvasive surrogates for tumor genotypes. Therefore, computer-aided image analysis has been adopted in clinical applications; it might partially overcome these shortcomings owing to its capacity to quantitatively and reproducibly measure multilevel features from multi-parametric medical information. Imaging features obtained from a single modality do not fully represent the disease, so quantitative imaging features, including morphological, structural, cellular and molecular-level features derived from multi-modality medical images, should be integrated into computer-aided medical image analysis. The difference in image quality between modalities is a further challenge in this field. In this thesis, we aim to integrate quantitative imaging data obtained from multiple modalities into mathematical models of tumor response prediction to achieve additional insights of practical predictive value. Our major contributions in this thesis are as follows:
1. To resolve the imaging-quality differences and observer dependence in histological image diagnosis, we propose an automated machine-learning brain tumor-grading platform that investigates the contributions of multiple parameters from multimodal data, including imaging parameters and features from Whole Slide Images (WSI) and the proliferation marker Ki-67. For each WSI, we extract both visual parameters, such as morphology parameters, and sub-visual parameters, including first-order and second-order features. A quantitative interpretable machine learning approach (Local Interpretable Model-Agnostic Explanations, LIME) is then applied to measure the contribution of features for each single case. Most grading systems based on machine learning models are considered "black boxes," whereas with this system the clinically trusted reasoning can be revealed. The quantitative analysis and explanation may assist clinicians to better understand the disease and accordingly choose optimal treatments for improving clinical outcomes. 2. Building on the automated brain tumor-grading platform, multimodal Magnetic Resonance Images (MRIs) are introduced into our research. A new imaging–tissue-correlation-based approach called RA-PA-Thomics is proposed to predict the IDH genotype. Inspired by the concept of image fusion, we integrate multimodal MRIs and scans of histopathological images for indirect, fast, and cost-saving IDH genotyping. The proposed model has been verified using multiple evaluation criteria on the integrated data set and compared with results from the prior art. The experimental data set includes public data sets and image information from two hospitals. Experimental results indicate that the proposed model improves the accuracy of glioma grading and genotyping.
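The LIME-style interpretability step can be illustrated with a toy local surrogate in plain Python: perturb an instance, query a black-box predictor, weight the samples by proximity, and fit a weighted linear model whose coefficients act as per-feature contributions. The black-box "grade score" and the two feature names (a Ki-67-like level and a cell-density-like value) are hypothetical stand-ins, not the thesis's actual platform.

```python
import math, random

def black_box(ki67, cell_density):
    # Hypothetical nonlinear "tumor grade score" in [0, 1] (invented model).
    return 1.0 / (1.0 + math.exp(-(3.0 * ki67 + ki67 * cell_density - 2.0)))

def solve3(a, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def explain(x0, n=500, width=0.5, seed=1):
    """Fit a proximity-weighted linear surrogate around instance x0."""
    rng = random.Random(seed)
    xtwx = [[0.0] * 3 for _ in range(3)]
    xtwy = [0.0] * 3
    for _ in range(n):
        x = [v + rng.gauss(0.0, 0.3) for v in x0]       # perturbed sample
        d2 = sum((a - b) ** 2 for a, b in zip(x, x0))
        w = math.exp(-d2 / width ** 2)                   # proximity kernel
        y = black_box(*x)
        row = [1.0] + x                                  # intercept + features
        for i in range(3):
            xtwy[i] += w * row[i] * y
            for j in range(3):
                xtwx[i][j] += w * row[i] * row[j]
    return solve3(xtwx, xtwy)   # [intercept, w_ki67, w_density]

coef = explain([0.7, 0.5])
print("local surrogate coefficients:", [round(c, 3) for c in coef])
```

The signs and relative magnitudes of the fitted coefficients give the per-case feature contributions that a clinician could inspect.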

    Automatic classification of power quality disturbances using optimal feature selection based algorithm

    The development of renewable energy sources and power electronic converters in conventional power systems leads to Power Quality (PQ) disturbances. This research aims at the automatic detection and classification of single and multiple PQ disturbances using a novel optimal feature selection approach based on the Discrete Wavelet Transform (DWT) and an Artificial Neural Network (ANN). The DWT is used to extract useful features, which are then used by an ANN classifier to distinguish among different PQ disturbances. The performance of the classifier depends solely on the feature vector used for training; therefore, this research focuses on a constructive feature-selection-based classification system. In this study, an Artificial Bee Colony based Probabilistic Neural Network (ABC-PNN) algorithm is proposed for optimal feature selection. The most common types of single PQ disturbances include sag, swell, interruption, harmonics, oscillatory and impulsive transients, flicker, notches and spikes. Moreover, multiple disturbances consisting of combinations of two disturbances are also considered. The DWT with multi-resolution analysis has been applied to decompose the PQ disturbance waveforms into detail and approximation coefficients down to level eight using the Daubechies wavelet family. Various statistical parameters of all the detail and approximation coefficients have been analysed for feature extraction, out of which the optimal features have been selected using the ABC algorithm. The performance of the proposed algorithm has been analysed with different ANN architectures, such as the multilayer perceptron and the radial basis function neural network; the PNN has been found to be the most suitable classifier. The proposed algorithm is tested on PQ disturbances obtained both from parametric equations and from typical power distribution system models using MATLAB/Simulink and PSCAD/EMTDC.
PQ disturbances with uniformly distributed noise ranging from 20 to 50 dB have also been analysed. The experimental results show that the proposed ABC-PNN based approach is capable of efficiently eliminating unnecessary features to improve the accuracy and performance of the classifier.
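The wavelet-based feature extraction described above can be sketched in a few lines. The example below uses a Haar DWT in place of the paper's Daubechies family (same principle, simpler filters) and computes per-level statistics of the detail coefficients for a synthetic 50 Hz waveform with a voltage sag; the signal parameters and feature choices are assumptions for illustration.

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: returns (approximation, detail) coefficients."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def wavelet_features(signal, levels=4):
    """Per-level statistics (energy, std) of detail coefficients, as a feature vector."""
    feats = []
    a = signal
    for _ in range(levels):
        a, d = haar_dwt(a)
        energy = sum(x * x for x in d)
        mean = sum(d) / len(d)
        std = math.sqrt(sum((x - mean) ** 2 for x in d) / len(d))
        feats.append((energy, std))
    return feats

# Synthetic 50 Hz waveform, with a 40% voltage sag over part of the record.
fs, f0 = 3200, 50
clean = [math.sin(2 * math.pi * f0 * n / fs) for n in range(1024)]
sag = [v * (0.6 if 300 <= n < 700 else 1.0) for n, v in enumerate(clean)]

f_clean = wavelet_features(clean)
f_sag = wavelet_features(sag)
print("clean detail energies:", [round(e, 3) for e, _ in f_clean])
print("sag   detail energies:", [round(e, 3) for e, _ in f_sag])
```

The per-level statistics differ between the clean and disturbed waveforms, which is exactly what lets a downstream classifier separate disturbance types.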

    A new approach to face recognition using Curvelet Transform

    Multiresolution tools have been profusely employed in face recognition. The Wavelet Transform is the best known among these multiresolution tools and is widely used for the identification of human faces. Of late, following the success of wavelets, a number of new multiresolution tools have been developed. The Curvelet Transform is a recent addition to that list. It has better directional ability and effective curved-edge representation capability. These two properties make the curvelet transform a powerful tool for extracting edge information from facial images. Our work aims at exploring the possibilities of the curvelet transform for feature extraction from human faces, in order to introduce a new alternative approach to face recognition.

    Autonomous Recovery Of Reconfigurable Logic Devices Using Priority Escalation Of Slack

    Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in the detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep submicron era, resilient adaptable processing systems are sought to maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault-handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here, an autonomous fault isolation scheme is employed which neither requires test vectors nor suspends computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated by the hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device.
These include a Discrete Cosine Transform (DCT) core, a Motion Estimation (ME) engine, a Finite Impulse Response (FIR) filter, a Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits. A significant reduction in power consumption is achieved, ranging from 83% for low-motion-activity scenes to 12.5% for high-motion-activity video scenes, in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within 4.02 dB to 6.67 dB of the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
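The core idea of priority-aware escalation can be conveyed schematically: when a region's health metric drops below a threshold, the healthiest remaining regions are reassigned to the highest-priority processing functions first, and faulty regions are demoted to slack. This is a simplified software sketch of the policy, not the dissertation's actual PURE implementation; the function names, priorities, and health values are invented.

```python
# Threshold below which a reconfigurable region is demoted to slack (assumed).
HEALTH_THRESHOLD = 0.8

def escalate(functions, regions):
    """Assign healthy regions to functions in descending priority order.

    functions: list of (name, priority); regions: dict name -> health metric.
    Returns (assignment dict, list of regions demoted to slack).
    """
    healthy = sorted((r for r, h in regions.items() if h >= HEALTH_THRESHOLD),
                     key=lambda r: regions[r], reverse=True)
    demoted = [r for r, h in regions.items() if h < HEALTH_THRESHOLD]
    assignment = {}
    # Highest-priority functions claim the healthiest regions first.
    for name, _prio in sorted(functions, key=lambda f: f[1], reverse=True):
        if healthy:
            assignment[name] = healthy.pop(0)
    return assignment, demoted

funcs = [("DCT", 3), ("ME", 2), ("FIR", 1)]          # hypothetical priorities
regions = {"R0": 0.95, "R1": 0.55, "R2": 0.90, "R3": 0.85}
assignment, slack = escalate(funcs, regions)
print(assignment, "slack:", slack)
```

Here the degraded region R1 is demoted, and the three healthy regions are escalated to the DCT, ME, and FIR functions in priority order.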

    AN INVESTIGATION OF ELECTROMYOGRAPHIC (EMG) CONTROL OF DEXTROUS HAND PROSTHESES FOR TRANSRADIAL AMPUTEES

    In reference to IEEE copyrighted material used with permission in this thesis, the IEEE does not endorse any of Plymouth University's products or services. There are many amputees around the world who have lost a limb through conflict, disease or an accident. Upper-limb prostheses controlled using surface electromyography (sEMG) offer a solution to help amputees; however, their functionality is limited by the small number of movements they can perform and by their slow reaction times. Pattern recognition (PR)-based EMG control has been proposed to improve the functional performance of prostheses. It is a very promising approach, offering intuitive control, fast reaction times and the ability to control a large number of degrees of freedom (DOF). However, prostheses controlled with PR systems are not available for everyday use by amputees, because there are many major challenges and practical problems that need to be addressed before clinical implementation is possible. These include the lack of individual finger control, an impractically large number of EMG electrodes, and the lack of deployment protocols for EMG electrode site selection and movement optimisation. Moreover, the inability of PR systems to handle multiple forces is a further practical problem that needs to be addressed. The main aim of this project is to investigate the research challenges mentioned above via non-invasive EMG signal acquisition, and to propose practical solutions to help amputees. In a series of experiments, the PR systems presented here were tested with EMG signals acquired from seven transradial amputees, which is unique to this project; previous studies have been conducted using non-amputees. In this work, the challenges described are addressed and a new protocol is proposed that delivers fast clinical deployment of multi-functional upper-limb prostheses controlled by PR systems.
Controlling finger movement is a step towards the restoration of lost human capabilities, and is important both psychologically and physically. A central thread running through this work is the assertion that no two amputees are the same, each suffering different injuries and retaining differing nerve and muscle structures. This work is very much about individualised healthcare, and aims to provide the best possible solution for each affected individual on a case-by-case basis. Therefore, the approach has been to optimise the solution (in terms of function and reliability) for each individual, as opposed to developing a generic solution whose performance is optimised against a test population. This work is unique in that it contributes to improving the quality of life of each individual amputee by optimising function and reliability. The four main contributions of the thesis are as follows: 1- Individual finger control was achieved with high accuracy for a large number of finger movements, using six optimally placed sEMG channels. This was validated on EMG signals from ten non-amputee and six amputee subjects. Thumb movements were classified successfully with high accuracy for the first time. The outcome of this investigation will help to add more movements to the prosthesis, and to reduce hardware and computational complexity. 2- A new subject-specific protocol for sEMG site selection and reliable movement-subset optimisation, based on the amputee's needs, has been proposed and validated on seven amputees. This protocol will help clinicians to perform an efficient and fast deployment of prostheses, by finding the optimal number and locations of EMG channels. It will also find a reliable subset of movements that can be achieved with high performance.
3- The relationship between the force of contraction and the statistics of EMG signals has been investigated, utilising an experimental design in which visual feedback from a Myoelectric Control Interface (MCI) helped the participants to produce the correct level of force. Kurtosis values were found to decrease monotonically as the contraction level increased, indicating that kurtosis can be used to distinguish different forces of contraction. 4- The real practical problem of the degradation of classification performance as a result of the variation of force levels during daily use of the prosthesis has been investigated, and solved by proposing a training approach and the use of a robust spectrum-based feature extraction method. The recommendations of this investigation improve the practical robustness of prostheses controlled with PR systems and progress a step further towards clinical implementation and improving the quality of life of amputees. The project showed that PR systems achieved reliable performance for a large number of amputees, taking into account real-life issues such as individual finger control for high dexterity, the effect of force-level variation, and optimisation of the movements and EMG channels for each individual amputee. The findings of this thesis showed that PR systems need to be appropriately tuned before usage, for example by training with multiple forces to reduce the effect of force variation and improve practical robustness, and by finding the optimal EMG channels for each amputee to improve the PR system's performance. The outcome of this research enables the implementation of PR systems in real prostheses that can be used by amputees. Ministry of Higher Education and Scientific Research and Baghdad University, Baghdad, Iraq.
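The kurtosis statistic used in contribution 3 is easy to compute directly. The sketch below implements sample kurtosis and illustrates the thesis's reported trend on synthetic stand-ins: a low-force EMG approximated by a sparse, heavy-tailed (Laplacian-like) signal and a high-force EMG by a denser, near-Gaussian one. The signal models are assumptions for illustration, not the thesis's recorded data.

```python
import math, random

def kurtosis(x):
    """Standard (non-excess) sample kurtosis: m4 / m2^2 (Gaussian ~ 3)."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / (m2 * m2)

rng = random.Random(42)
n = 20000
# Laplacian-like draws (difference of two exponentials): heavy tails, kurtosis ~ 6.
low_force = [rng.expovariate(1.0) - rng.expovariate(1.0) for _ in range(n)]
# Gaussian draws: kurtosis ~ 3.
high_force = [rng.gauss(0.0, 1.0) for _ in range(n)]

k_low, k_high = kurtosis(low_force), kurtosis(high_force)
print(f"low-force kurtosis  ~ {k_low:.2f}")
print(f"high-force kurtosis ~ {k_high:.2f}")
```

Under these synthetic models the sparser low-force signal has the higher kurtosis, matching the direction of the effect reported in the thesis.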

    Class-Level Refactoring Prediction by Ensemble Learning with Various Feature Selection Techniques

    Background: Refactoring is changing a software system without affecting its functionality. Current research aims to identify the appropriate method(s) or class(es) that need to be refactored in object-oriented software. Ensemble learning helps to reduce prediction errors by amalgamating different classifiers and their respective performances over the original feature data. This paper additionally considers several ensemble learners, error measures, sampling techniques, and feature selection techniques for refactoring prediction at the class level. Objective: This work aims to develop an ensemble-based refactoring prediction model with structural identification of source code metrics, using different feature selection techniques and data sampling techniques to distribute the data uniformly. Our model finds the best classifier, achieving fewer errors during refactoring prediction at the class level. Methodology: First, our proposed model extracts a total of 125 software metrics computed from object-oriented software systems, which are processed by a robust multi-phased feature selection method encompassing the Wilcoxon significance test, the Pearson correlation test, and principal component analysis (PCA). The proposed multi-phased feature selection method retains the optimal features characterizing inheritance, size, coupling, cohesion, and complexity. After obtaining the optimal set of software metrics, a novel heterogeneous ensemble classifier is developed, using as base classifiers artificial neural networks (ANN-Gradient Descent, ANN-Levenberg Marquardt, ANN-GDX, ANN-Radial Basis Function), least-squares support vector machines with different kernel functions (LSSVM-Linear, LSSVM-Polynomial, LSSVM-RBF), a Decision Tree, Logistic Regression, and an extreme learning machine (ELM).
In our paper, we have calculated four different error measures: Mean Absolute Error (MAE), Mean Magnitude of Relative Error (MORE), Root Mean Square Error (RMSE), and Standard Error of the Mean (SEM). Result: In our proposed model, the maximum voting ensemble (MVE) achieves better accuracy, recall, precision, and F-measure values (99.76, 99.93, 98.96, 98.44) compared to the base trained ensemble (BTE), and it exhibits lower errors (MAE = 0.0057, MORE = 0.0701, RMSE = 0.0068, and SEM = 0.0107) when used to develop the refactoring model. Conclusions: Our experimental results recommend that MVE with upsampling can be implemented to improve the performance of the refactoring prediction model at the class level. Furthermore, the performance of our model with different data sampling techniques and feature selection techniques is shown in boxplot diagrams of accuracy, F-measure, precision, recall, and area under the curve (AUC).
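The maximum (majority) voting step at the heart of the MVE can be sketched in a few lines: each base classifier casts a vote and the plurality label wins. The base "classifiers" below are stand-in threshold rules on invented class-level metrics, not the paper's trained models.

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: list of labels from base classifiers -> winning label."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(classifiers, x):
    """Collect one vote per base classifier and return the majority label."""
    return majority_vote([clf(x) for clf in classifiers])

# Hypothetical base classifiers voting on whether a class needs refactoring,
# each keyed on a different (invented) source-code metric threshold.
clf_a = lambda x: "refactor" if x["loc"] > 200 else "ok"
clf_b = lambda x: "refactor" if x["coupling"] > 5 else "ok"
clf_c = lambda x: "refactor" if x["cohesion"] < 0.3 else "ok"

sample = {"loc": 250, "coupling": 7, "cohesion": 0.6}
print(ensemble_predict([clf_a, clf_b, clf_c], sample))
```

With two of the three rules firing, the ensemble flags the class for refactoring even though one base classifier disagrees, which is exactly how voting suppresses individual-classifier errors.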

    2D and 3D computer vision analysis of gaze, gender and age

    Human-Computer Interaction (HCI) has been an active research area for over four decades. Research studies and commercial designs in this area have been largely facilitated by the visual modality, which brings diversified functionality and improved usability to HCI interfaces by employing various computer vision techniques. This thesis explores a number of facial cues, such as gender, age and gaze, by performing 2D and 3D based computer vision analysis. The ultimate aim is to create a natural HCI strategy that can fulfil user expectations, augment user satisfaction and enrich user experience by understanding user characteristics and behaviours. To this end, salient features have been extracted and analysed from 2D and 3D face representations; 3D reconstruction algorithms and their compatible real-world imaging systems have been investigated; and case-study HCI systems have been designed to demonstrate the reliability, robustness, and applicability of the proposed methods. More specifically, an unsupervised approach has been proposed to localise eye centres in images and videos accurately and efficiently. This is achieved by utilising two types of geometric features and eye models, complemented by an iris radius constraint and a selective oriented gradient filter specifically tailored to this modular scheme. This approach resolves challenges such as interfering facial edges, undesirable illumination conditions, head poses, and the presence of facial accessories and makeup. Tested on three publicly available databases (the BioID database, the GI4E database and the extended Yale Face Database B) and a self-collected database, this method outperforms all methods in comparison and thus proves to be highly accurate and robust. Based on this approach, a gaze gesture recognition algorithm has been designed to increase the interactivity of HCI systems by encoding eye saccades into a communication channel, similar to the role of hand gestures.
As well as analysing eye/gaze data that represent user behaviours and reveal user intentions, this thesis also investigates the automatic recognition of user demographics such as gender and age. The Fisher Vector encoding algorithm is employed to construct visual vocabularies as salient features for gender and age classification. Algorithm evaluations on three publicly available databases (the FERET database, the LFW database and the FRCVv2 database) demonstrate the superior performance of the proposed method in both laboratory and unconstrained environments. In order to achieve enhanced robustness, a two-source photometric stereo method has been introduced to recover surface normals, such that more invariant 3D facial features become available that can further boost classification accuracy and robustness. A 2D+3D imaging system has been designed for the construction of a self-collected dataset including 2D and 3D facial data. Experiments show that utilising 3D facial features can increase the gender classification rate by up to 6% (based on the self-collected dataset) and the age classification rate by up to 12% (based on the Photoface database). Finally, two case-study HCI systems, a gaze-gesture-based map browser and a directed advertising billboard, have been designed by adopting all the proposed algorithms as well as the fully compatible imaging system. Benefits from the proposed algorithms naturally ensure that the case-study systems possess high robustness to head pose and illumination variation, and achieve excellent real-time performance. Overall, the proposed HCI strategy, enabled by reliably recognised facial cues, can serve to spawn a wide array of innovative systems and to bring HCI to a more natural and intelligent state.
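The surface-normal recovery underlying the photometric stereo step can be illustrated with the textbook three-source Lambertian formulation (the thesis develops a two-source variant; the three-light case shown here is the standard baseline it builds on). For a Lambertian point, each measured intensity satisfies I_k = albedo * (l_k . n), so stacking three light directions gives a 3x3 linear system whose solution is the albedo-scaled normal. The light directions and albedo below are made-up sanity-check values.

```python
import math

def solve3(a, b):
    """3x3 linear solve by Gauss-Jordan elimination with partial pivoting."""
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def surface_normal(lights, intensities):
    """Recover the unit normal and albedo from I_k = albedo * (l_k . n)."""
    g = solve3(lights, intensities)          # g = albedo * n
    albedo = math.sqrt(sum(v * v for v in g))
    return [v / albedo for v in g], albedo

# Sanity check against a known ground truth: n = (0, 0, 1), albedo = 0.8.
L = [[0.0, 0.0, 1.0], [0.7, 0.0, 0.714], [0.0, 0.7, 0.714]]
true_n, rho = [0.0, 0.0, 1.0], 0.8
I = [rho * sum(l * nv for l, nv in zip(row, true_n)) for row in L]

n_hat, albedo = surface_normal(L, I)
print("normal:", [round(v, 3) for v in n_hat], "albedo:", round(albedo, 3))
```

Because the intensities were synthesised from a known normal and albedo, the solver recovers them exactly, which is the basic correctness check for any photometric stereo pipeline.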

    Socialising around media. Improving the second screen experience through semantic analysis, context awareness and dynamic communities

    SAM is a social media platform that enhances the experience of watching video content in a conventional living room setting, with a service that lets the viewer use a second screen (such as a smartphone) to interact with content, context and communities related to the main video content. This article describes three key functionalities used in the SAM platform to create an advanced interactive and social second-screen experience for users: semantic analysis, context awareness and dynamic communities. Both dataset-based and end-user evaluations of the system functionalities are reported, in order to determine the effectiveness and efficiency of the components directly involved and of the platform as a whole.