
    Robust algorithm for arrhythmia classification in ECG using extreme learning machine

    Abstract
    Background: Recently, extensive studies have been carried out on arrhythmia classification algorithms using artificial intelligence pattern recognition methods such as neural networks. To improve practicality, many studies have focused on the learning speed and accuracy of neural networks. However, algorithms based on neural networks still have problems for practical application, such as slow learning speeds and unstable performance caused by local minima.
    Methods: In this paper we propose a novel arrhythmia classification algorithm with a fast learning speed and high accuracy, which uses Morphology Filtering, Principal Component Analysis, and an Extreme Learning Machine (ELM). The proposed algorithm can classify six beat types: normal beat, left bundle branch block, right bundle branch block, premature ventricular contraction, atrial premature beat, and paced beat.
    Results: Experimental results on the entire MIT-BIH arrhythmia database demonstrate that the proposed algorithm achieves 98.00% average sensitivity, 97.95% average specificity, and 98.72% average accuracy. These accuracy levels are higher than or comparable with those of existing methods. We also compare algorithms using an ELM, a back-propagation neural network (BPNN), a radial basis function network (RBFN), and a support vector machine (SVM). In terms of learning time, the proposed algorithm using an ELM is about 290, 70, and 3 times faster than algorithms using a BPNN, an RBFN, and an SVM, respectively.
    Conclusion: The proposed algorithm shows effective accuracy with a short learning time. In addition, we ascertained its robustness by evaluating it on the entire MIT-BIH arrhythmia database.
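    The key to the short learning time above is the ELM's single-pass training: hidden-layer weights are drawn at random and fixed, and only the output weights are solved analytically. Below is a minimal sketch of that idea, assuming generic feature vectors; the paper's morphology filtering and PCA preprocessing and its MIT-BIH evaluation are not reproduced, and all sizes and names here are illustrative.

```python
# Minimal ELM classifier sketch: random fixed hidden layer, analytic readout.
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=100):
    """Train an ELM: random input weights, output weights via pseudoinverse."""
    n_features = X.shape[1]
    # Hidden-layer parameters are drawn at random and never updated,
    # which is what gives ELM its fast, single-pass "learning".
    W = rng.uniform(-1.0, 1.0, (n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, n_hidden)
    H = np.tanh(X @ W + b)              # hidden-layer activations
    beta = np.linalg.pinv(H) @ T        # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = np.tanh(X @ W + b)
    return np.argmax(H @ beta, axis=1)  # winning class per sample

# Toy usage with synthetic data standing in for ECG beat features.
X = rng.normal(size=(300, 12))
y = rng.integers(0, 6, size=300)        # six beat types, as in the paper
T = np.eye(6)[y]                        # one-hot targets
W, b, beta = elm_train(X, T, n_hidden=80)
print("train accuracy:", np.mean(elm_predict(X, W, b, beta) == y))
```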

    SW-ELM: A summation wavelet extreme learning machine algorithm with a priori initialization.

    Combining neural networks and wavelet theory in approximation or prediction models has proved effective in many application areas. However, when building such systems one faces a parsimony problem, i.e., a compromise between the complexity of the learning phase and accuracy performance. Accordingly, this paper proposes a new connectionist structure, the Summation Wavelet Extreme Learning Machine (SW-ELM), which achieves good accuracy and generalization performance while limiting learning time and reducing the impact of the random initialization procedure. SW-ELM is based on the Extreme Learning Machine (ELM) algorithm for fast batch learning, but with dual activation functions in the hidden-layer nodes, which handles non-linearity more efficiently. The initialization of the wavelets (of the hidden nodes) and of the neural network parameters (of the input-hidden layer) is performed a priori, before any data are presented to the model. The proposal is illustrated and discussed through tests on three time-series problems: an input-output approximation problem, a one-step-ahead prediction problem, and a multi-step-ahead prediction problem. The performance of SW-ELM is benchmarked against ELM, the Levenberg-Marquardt algorithm for a Single Layer Feed-forward Network (SLFN), and the ELMAN network on six industrial data sets. The results show the significance of the performance achieved by SW-ELM.
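    For intuition, here is a minimal sketch of the dual-activation hidden layer, assuming each hidden node averages a Morlet wavelet and an inverse hyperbolic sine; the a priori initialization of wavelet and network parameters, which is central to SW-ELM, is replaced here by random initialization, so this illustrates only the structure, not the full method.

```python
# SW-ELM-style hidden layer sketch with dual activation functions.
import numpy as np

rng = np.random.default_rng(1)

def morlet(x):
    return np.cos(5.0 * x) * np.exp(-0.5 * x * x)

def dual_hidden_layer(X, W, b):
    Z = X @ W + b
    # Dual activation: each node outputs the mean of two nonlinearities,
    # which is what lets the network capture non-linearity more flexibly.
    return 0.5 * (morlet(Z) + np.arcsinh(Z))

# Output weights are still solved analytically, exactly as in plain ELM.
X = rng.normal(size=(200, 4))
t = np.sin(X).sum(axis=1, keepdims=True)     # toy regression target
W = rng.uniform(-1, 1, (4, 30))
b = rng.uniform(-1, 1, 30)
H = dual_hidden_layer(X, W, b)
beta = np.linalg.pinv(H) @ t
print("fit RMSE:", float(np.sqrt(np.mean((H @ beta - t) ** 2))))
```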

    Analytical assessment of the structural behavior of a specific composite floor system at elevated temperatures using a newly developed hybrid intelligence method

    The aim of this paper is to study the performance of a composite floor system at different heat stages using artificial intelligence, in order to derive a sustainable design and to select the most critical factors for a sustainable floor system at elevated temperatures. In a composite floor system, load bearing relies on composite action between the steel and concrete, which is achieved by shear connectors. Although shear connectors play an important role in the performance of a composite floor system by transferring shear force from the concrete to the steel profile, if the system is exposed to high temperatures, excessive deformations may reduce its shear-bearing capacity. Therefore, in this paper, the slip response of angle shear connectors is evaluated using artificial intelligence techniques to determine the performance of a composite floor system at high temperatures. Authenticated experimental data on monotonic loading of a composite steel-concrete floor system at different heat stages were employed for the analytical assessment. Moreover, an adaptive neuro-fuzzy inference system (ANFIS) was optimized using a genetic algorithm (GA) and particle swarm optimization (PSO), yielding the ANFIS-PSO-GA (ANPG) method. The results of the ANPG method were compared with those of an extreme learning machine (ELM) method and a radial basis function network (RBFN) method. The mechanical and geometrical properties of the shear connectors and the temperatures were included in the dataset. Based on the results, although all three methods accurately predicted the behavior of the composite floor system, the RBFN and ANPG methods gave the most accurate values for split-tensile load and slip prediction, respectively. Since the slip response had a rational relationship with the load and geometrical parameters, it was highly predictable. In addition, slip response and temperature were identified as the most critical factors affecting the shear-bearing capacity of the composite floor system at elevated temperatures.
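    To make the swarm-based part of the ANPG hybrid concrete, the sketch below runs a minimal particle swarm optimization loop to fit a toy linear slip model; the actual method couples PSO with a genetic algorithm to tune ANFIS parameters, so the data, model, and constants below are illustrative assumptions only.

```python
# Minimal PSO loop of the kind used inside hybrid methods such as ANPG.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: slip ~ a*load + b*temperature + c (coefficients unknown).
X = rng.uniform(0, 1, (100, 2))
true_theta = np.array([2.0, -1.5, 0.3])
y = X @ true_theta[:2] + true_theta[2] + 0.01 * rng.normal(size=100)

def loss(theta):
    pred = X @ theta[:2] + theta[2]
    return np.mean((pred - y) ** 2)

n_particles, dim = 20, 3
pos = rng.uniform(-3, 3, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(200):
    # Standard velocity update: inertia + pull toward personal/global bests.
    r1, r2 = rng.uniform(size=(2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("recovered coefficients:", np.round(gbest, 2))
```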

    The Challenge of Non-Technical Loss Detection using Artificial Intelligence: A Survey

    Detection of non-technical losses (NTL), which include electricity theft, faulty meters, and billing errors, has attracted increasing attention from researchers in electrical engineering and computer science. NTLs cause significant harm to the economy: in some countries they may range up to 40% of the total electricity distributed. The predominant research direction is employing artificial intelligence to predict whether a customer causes NTL. This paper first provides an overview of how NTLs are defined and their impact on economies, which includes loss of revenue and profit for electricity providers and decreased stability and reliability of electrical power grids. It then surveys the state-of-the-art research efforts in an up-to-date and comprehensive review of the algorithms, features, and data sets used. It finally identifies the key scientific and engineering challenges in NTL detection and suggests how they could be addressed in the future.

    Overview of the JET results in support to ITER

    The 2014–2016 JET results are reviewed in the light of their significance for optimising the ITER research plan for active and non-active operation. More than 60 h of plasma operation with ITER first wall materials have successfully taken place since their installation in 2011. A new multi-machine scaling of the type I ELM divertor energy flux density to ITER is supported by first-principles modelling. ITER-relevant disruption experiments and first-principles modelling are reported, with a set of three disruption mitigation valves mimicking the ITER setup. Insights into the L–H power threshold in deuterium and hydrogen are given, stressing the importance of the magnetic configuration and the recent measurements of fine-scale structures in the edge radial electric field. Dimensionless scans of the core and pedestal confinement provide new information to elucidate the importance of the first wall material for fusion performance. H-mode plasmas at ITER triangularity (H = 1 at βN ~ 1.8 and n/nGW ~ 0.6) have been sustained at 2 MA for 5 s. The ITER neutronics codes have been validated on high-performance experiments. Prospects for the coming D–T campaign and the 14 MeV neutron calibration strategy are reviewed.

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopt such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms and swarm intelligence, are still being widely explored by researchers aiming to obtain well-generalized FNNs for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future work to cope with the present information-processing era.
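    As one concrete instance of the survey's theme, the sketch below trains a tiny FNN with a simple (mu + lambda) evolution strategy in place of gradient descent; the task, architecture, and hyperparameters are illustrative, and the code represents the generic metaheuristic approach rather than any particular reviewed algorithm.

```python
# Metaheuristic FNN weight optimization: evolution strategy instead of backprop.
import numpy as np

rng = np.random.default_rng(3)

# Toy task: learn XOR with a 2-2-1 network (9 parameters including biases).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, X):
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8]; b2 = w[8]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return np.mean((forward(w, X) - y) ** 2)   # lower is better

mu, lam, sigma = 10, 40, 0.5
pop = rng.normal(0, 1, (mu, 9))
for gen in range(300):
    # Each offspring is a Gaussian mutation of a random parent;
    # the best mu of parents + offspring survive to the next generation.
    parents = pop[rng.integers(0, mu, lam)]
    offspring = parents + sigma * rng.normal(size=(lam, 9))
    pool = np.vstack([pop, offspring])
    pop = pool[np.argsort([fitness(w) for w in pool])[:mu]]

best = pop[0]
print("XOR predictions:", np.round(forward(best, X), 2))
```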

    Efficient channel equalization algorithms for multicarrier communication systems

    A blind adaptive algorithm that updates time-domain equalizer (TEQ) coefficients by Adjacent Lag Auto-correlation Minimization (ALAM) is proposed to shorten the channel for multicarrier modulation (MCM) systems. ALAM is an addition to the family of correlation-based algorithms and achieves similar or better performance than existing algorithms with lower complexity. This is achieved by designing a cost function without the sum-square term and by exploiting the symmetric-TEQ property, reducing the complexity of TEQ adaptation to half that of the existing approach. Furthermore, to avoid the limitations of an unstable, lower bit rate and high complexity, an adaptive TEQ using equal-taps constraints (ETC) is introduced to maximize the bit rate with the lowest complexity. An IP core is developed for the low-complexity ALAM (LALAM) algorithm for implementation on an FPGA. This implementation is extended to include the moving-average (MA) estimate for the ALAM algorithm, referred to as ALAM-MA. A unit-tap constraint (UTC) is used instead of a unit-norm constraint (UNC) when updating the adaptive algorithm, to avoid the all-zero solution for the TEQ taps. The IP core is implemented on a Xilinx Virtex-II Pro XC2VP7-FF672-5 for ADSL receivers, and gate-level simulation guaranteed successful operation at maximum frequencies of 27 MHz and 38 MHz for the ALAM-MA and LALAM algorithms, respectively.

    After channel shortening with the TEQ, a frequency-domain equalizer (FEQ) is used to recover QAM signals distorted by channel effects. A new analytical learning-based framework is proposed to jointly solve the equalization and symbol detection problems in orthogonal frequency division multiplexing (OFDM) systems with QAM signals. The framework uses an extreme learning machine (ELM) to achieve fast training, high performance, and low error rates. It operates in the real domain by transforming each complex signal into a 2-tuple real-valued vector, which offers equalization in the real domain with minimal computational load and high accuracy. Simulation results show that the proposed framework outperforms other learning-based equalizers in terms of symbol error rate and training speed.
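    A minimal sketch of the complex-to-real ELM equalization idea described above: each received QAM sample is mapped to a (real, imaginary) pair so a real-valued ELM can act as the joint equalizer and symbol detector. The channel model, constellation, network size, and the extra one-sample memory feature are assumptions for illustration, not the thesis's exact setup.

```python
# ELM-based QAM equalization in the real domain.
import numpy as np

rng = np.random.default_rng(4)

# 4-QAM symbols through a toy 2-tap channel with additive noise.
symbols = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
labels = rng.integers(0, 4, 2000)
tx = symbols[labels]
rx = (0.9 * tx + 0.3 * np.roll(tx, 1)
      + 0.05 * (rng.normal(size=2000) + 1j * rng.normal(size=2000)))

# Complex -> real 2-tuple features (plus the previous sample for memory).
X = np.stack([rx.real, rx.imag, np.roll(rx, 1).real, np.roll(rx, 1).imag], axis=1)
T = np.eye(4)[labels]                      # one-hot symbol targets

# Plain ELM in the real domain: random hidden layer, analytic readout.
W = rng.uniform(-1, 1, (4, 60)); b = rng.uniform(-1, 1, 60)
H = np.tanh(X @ W + b)
beta = np.linalg.pinv(H) @ T
sym_hat = np.argmax(H @ beta, axis=1)
print("symbol error rate:", np.mean(sym_hat != labels))
```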

    Exploitation of wireless control link in the software-defined LEO satellite network

    Keywords: software-defined satellite network, control link, cross-layer optimization, power-efficient control link algorithm

    The low earth orbit (LEO) satellite network can benefit from software-defined networking (SDN) by lightening forwarding devices and improving service diversity. In order to apply SDN to the network, however, reliable SDN control links must be associated from satellite gateways to satellites, with the wireless and mobile properties of the network taken into account. Since these characteristics affect both control link association and gateway power allocation, we define this new cross-layer problem as the SDN control link problem. The problem is discussed from a multi-layer viewpoint, covering automatic repeat request (ARQ) and gateway power allocation at the link layer, and split transmission control protocol (TCP) and link scheduling at the transport layer. A centralized SDN control framework constrained by a maximum total power is introduced to enhance gateway power efficiency for control link setup. Based on a power control analysis of the problem, a power-efficient control link algorithm is developed that establishes low-latency control links with reduced power consumption. Along with a sensitivity analysis of the proposed algorithm, numerical results demonstrate the low latency and high reliability of the control links it establishes, ultimately suggesting the technical and economic feasibility of the software-defined LEO satellite network.
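    To give intuition for the power/latency trade-off, here is a hypothetical toy allocator in the spirit of the power-efficient control link algorithm: per-link reliability grows with allocated gateway power, ARQ retransmissions inflate expected latency as reliability drops, and satellite-gateway associations are made greedily under a total power budget. The reliability and latency models below are invented for illustration and are not the thesis's actual formulation.

```python
# Toy greedy control-link association under a total power budget.
import numpy as np

rng = np.random.default_rng(5)

n_gw, n_sat = 3, 8
dist = rng.uniform(500, 2000, (n_gw, n_sat))   # km, gateway-satellite
P_MAX = 20.0                                   # total power budget
p_unit = 2.0                                   # power spent per link

def expected_latency(d, p):
    # Reliability improves with power and degrades with distance;
    # with per-attempt success q, ARQ needs 1/q attempts on average.
    q = min(0.999, p / (p + d / 1000.0))
    one_way_ms = d / 300.0                     # ~speed of light, km per ms
    return one_way_ms / q

# Greedy association: each satellite picks its lowest-latency gateway
# while the remaining power budget allows another link.
power_left, links = P_MAX, {}
for s in range(n_sat):
    g = int(np.argmin([expected_latency(dist[g, s], p_unit) for g in range(n_gw)]))
    if power_left >= p_unit:
        links[s] = g
        power_left -= p_unit

lat = [expected_latency(dist[g, s], p_unit) for s, g in links.items()]
print(f"associated {len(links)}/{n_sat} satellites, mean latency {np.mean(lat):.2f} ms")
```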