
    Unsupervised Heart-rate Estimation in Wearables With Liquid States and A Probabilistic Readout

    Heart-rate estimation is a fundamental feature of modern wearable devices. In this paper we propose a machine-intelligence approach for heart-rate estimation from electrocardiogram (ECG) data collected using wearable devices. The novelty of our approach lies in (1) encoding spatio-temporal properties of ECG signals directly into spike trains and using these to excite recurrently connected spiking neurons in a Liquid State Machine computation model; (2) a novel learning algorithm; and (3) an intelligently designed unsupervised readout based on Fuzzy c-Means clustering of spike responses from a subset of neurons (Liquid states), selected using particle swarm optimization. Our approach differs from existing work by learning directly from ECG signals (allowing personalization), without requiring costly data annotations. Additionally, our approach can be easily implemented on state-of-the-art spiking-based neuromorphic systems, offering high accuracy at a significantly lower energy footprint, leading to extended battery life for wearable devices. We validated our approach with CARLsim, a GPU-accelerated spiking neural network simulator modeling Izhikevich spiking neurons with Spike Timing Dependent Plasticity (STDP) and homeostatic scaling. A range of subjects is considered, drawn from in-house clinical trials and public ECG databases. Results show high accuracy and a low energy footprint in heart-rate estimation across subjects with and without cardiac irregularities, signifying the strong potential of this approach to be integrated into future wearable devices.
    Comment: 51 pages, 12 figures, 6 tables, 95 references. Under submission at Elsevier Neural Networks
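
    To make the unsupervised readout idea above concrete, here is a minimal NumPy sketch of Fuzzy c-Means clustering applied to liquid-state feature vectors (e.g., spike counts from a chosen subset of reservoir neurons). The data, cluster count and fuzzifier are placeholders, and the particle-swarm selection of neurons and the mapping from clusters to heart-rate values are omitted; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Plain-NumPy Fuzzy c-Means; X is (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)                  # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None] # weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / d ** (2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=1, keepdims=True)      # standard FCM membership update
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Hypothetical liquid states: spike counts from 32 reservoir neurons over 200 ECG windows
# (random placeholders standing in for real recordings).
liquid_states = np.random.default_rng(1).poisson(5.0, size=(200, 32)).astype(float)
centers, memberships = fuzzy_c_means(liquid_states, n_clusters=4)
labels = memberships.argmax(axis=1)                    # crisp cluster per ECG window
```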

    A Multi-Agent Architecture for the Design of Hierarchical Interval Type-2 Beta Fuzzy System

    This paper presents a new methodology for building and evolving hierarchical fuzzy systems. For the system design, a tree-based encoding method is adopted to hierarchically link low-dimensional fuzzy systems. Such a tree-structured representation is by nature flexible, offering more adjustable and modifiable structures. The proposed hierarchical structure employs a type-2 beta fuzzy system to cope with the uncertainties faced, and the resulting system is called the Hierarchical Interval Type-2 Beta Fuzzy System (HT2BFS). For the system optimization, two main tasks are applied: structure learning and parameter tuning. The structure learning phase aims to evolve and learn the structures of a population of HT2BFSs in a multi-objective context, taking into account the optimization of both accuracy and interpretability metrics. The parameter tuning phase is applied to refine and adjust the parameters of the system. To accomplish these two tasks efficiently, we further employ a multi-agent architecture that provides both distributed and cooperative management of the optimization tasks. Agents are divided into two types based on their functions: structure agents and parameter agents. The main function of a structure agent is to perform a multi-objective evolutionary structure learning step by means of the Multi-Objective Immune Programming algorithm (MOIP). The parameter agents manage different hierarchical structures simultaneously, refining their parameters by means of the Hybrid Harmony Search algorithm (HHS). In this architecture, agents use cooperation and communication concepts to create high-performance HT2BFSs. The performance of the proposed system is evaluated through several comparisons with state-of-the-art approaches on noise-free and noisy time series prediction data sets and regression problems. The results demonstrate a clear improvement in accuracy, convergence speed and the number of rules used compared with existing approaches.
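
    To give a rough, illustrative feel for the building blocks named above, the sketch below evaluates a beta-shaped membership function and an interval type-2 grade around it in Python. The particular beta parameterisation, the scaled lower bound and the interval-averaging step are simplifying assumptions made for this sketch, not the paper's HT2BFS formulation.

```python
import numpy as np

def beta_mf(x, a, b, p, q):
    """Beta-shaped membership on the support [a, b] with peak value 1 at the mode.
    (One common parameterisation; the paper's exact Beta function may differ.)"""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    c = (p * b + q * a) / (p + q)                      # location of the peak
    y = np.zeros_like(x)
    inside = (x > a) & (x < b)
    y[inside] = ((x[inside] - a) / (c - a)) ** p * ((b - x[inside]) / (b - c)) ** q
    return np.clip(y, 0.0, 1.0)

def interval_type2_beta(x, a, b, p, q, h=0.8):
    """Interval type-2 grade: upper bound = beta_mf, lower bound = h * beta_mf.
    (A simple, assumed footprint of uncertainty.)"""
    upper = beta_mf(x, a, b, p, q)
    return h * upper, upper

# Fire one interval type-2 antecedent and collapse the interval by averaging
# (a crude stand-in for Karnik-Mendel type reduction).
x = np.array([0.2, 0.4, 0.6])
lower, upper = interval_type2_beta(x, a=0.0, b=1.0, p=2.0, q=2.0)
firing = 0.5 * (lower + upper)
print(np.round(firing, 3))
```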

    An Integrated Method for Optimizing Bridge Maintenance Plans

    Bridges are one of the vital civil infrastructure assets, essential for economic development and public welfare. Their large numbers, deteriorating condition, public demands for safe and efficient transportation networks, and limited maintenance and intervention budgets pose a challenge, particularly when coupled with the need to respect environmental constraints. This state of affairs creates a wide gap between critical needs for intervention actions and tight maintenance and rehabilitation funds. In an effort to meet this challenge, a newly developed integrated method for optimized maintenance and intervention plans for reinforced concrete bridge decks is introduced. The method encompasses the development of five models: surface defects evaluation, corrosion severity evaluation, deterioration modeling, integrated condition assessment, and optimized maintenance planning. These models were automated in a set of standalone computer applications, coded using C#.net in the Matlab environment, and subsequently combined to form an integrated method for optimized maintenance and intervention plans. Four bridges and a dataset of bridge images were used in testing and validating the developed optimization method and its five models. The developed models have unique features and demonstrated noticeable performance and accuracy gains over methods used in practice and those reported in the literature. For example, the surface defects detection and evaluation model outperforms widely recognized machine learning and deep learning models, reducing detection, recognition and evaluation errors for surface defects by 56.08%, 20.2% and 64.23%, respectively. The corrosion evaluation model comprises the design of a standardized amplitude rating system that circumvents the limitations of numerical amplitude-based corrosion maps. For the integrated condition assessment, the developed model achieved consistent improvement over the visual inspection procedures in use by the Ministry of Transportation in Quebec. Similarly, the deterioration model improved prediction accuracy by 60% on average when compared against the most commonly utilized Weibull distribution. The developed multi-objective optimization model yielded 49% and 25% improvement over a genetic algorithm for a five-year study period and a twenty-five-year study period, respectively. For a thirty-five-year study period, unlike the developed model, classical meta-heuristics failed to find feasible solutions within the assigned constraints. The developed integrated platform is expected to provide an efficient tool that enables decision makers to formulate sustainable maintenance plans that optimize budget allocations and ensure efficient utilization of resources.
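
    The multi-objective character of the maintenance planning problem can be illustrated with a small Pareto-dominance filter over candidate plans scored on cost and residual condition deficiency. The plan data and objective values below are invented placeholders, and this filter is only a generic building block, not the paper's optimization model.

```python
import numpy as np

def pareto_front(objectives):
    """Indices of non-dominated rows when every objective is minimised.
    objectives: (n_plans, n_objectives), e.g. [total cost, condition deficiency]."""
    keep = np.ones(objectives.shape[0], dtype=bool)
    for i in range(objectives.shape[0]):
        # plan i is dominated if some other plan is <= on every objective
        # and strictly < on at least one
        dominated_by = np.all(objectives <= objectives[i], axis=1) & \
                       np.any(objectives < objectives[i], axis=1)
        if dominated_by.any():
            keep[i] = False
    return np.where(keep)[0]

# Hypothetical five-year deck maintenance plans: [cost in k$, residual condition deficiency].
plans = np.array([[120, 0.35],
                  [150, 0.20],
                  [ 90, 0.55],
                  [160, 0.19],
                  [110, 0.40],
                  [130, 0.45]])
print(pareto_front(plans))   # prints [0 1 2 3 4]; the last plan is dominated by plan 0
```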

    The Applications of Soft Computing Methods for Seepage Modeling: A Review

    In recent times, significant research has been carried out into developing and applying soft computing techniques for modeling hydro-climatic processes such as seepage. Seepage, which creates groundwater sources, must be modeled properly to ensure adequate management of scarce water resources. On the other hand, excessive seepage can threaten the stability of earthfill dams and infrastructure, and it can result in severe soil erosion and consequent environmental damage. Considering the complex and nonlinear nature of the seepage process, employing soft computing techniques, especially hybrid methods that apply pre- and post-processing techniques such as wavelet analysis, could be appropriate for enhancing modeling efficiency. This review paper summarizes standard soft computing techniques and reviews their seepage modeling and simulation applications over the last two decades. Accordingly, 48 research papers from 2002 to 2021 were reviewed. According to the reviewed papers, soft computing techniques can, despite some limitations, successfully simulate seepage through groundwater as well as through earthfill dams and hydraulic structures. Moreover, some suggestions for future research are presented. The review was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method.
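
    As an illustration of the wavelet-hybrid idea mentioned above, the sketch below denoises a synthetic seepage series with wavelet thresholding (PyWavelets) and fits a small neural network on lagged values (scikit-learn) for one-step-ahead prediction. The data, wavelet, threshold and network size are assumptions chosen for demonstration, not taken from any reviewed study.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical two-year daily seepage record (L/s): seasonal trend plus noise.
t = np.arange(730)
seepage = 5.0 + 0.8 * np.sin(2 * np.pi * t / 365) + rng.normal(0.0, 0.15, t.size)

# Wavelet pre-processing: decompose, soft-threshold the detail coefficients, reconstruct.
coeffs = pywt.wavedec(seepage, 'db4', level=3)
coeffs[1:] = [pywt.threshold(c, value=0.1, mode='soft') for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, 'db4')[:seepage.size]

# Simple ANN on lagged values of the denoised series (one-step-ahead forecast).
lags = 7
X = np.column_stack([denoised[i:-(lags - i)] for i in range(lags)])
y = denoised[lags:]
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(X, y)
next_value = model.predict(X[-1:])   # forecast for the day after the record ends
```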

    Artificial cognitive architecture with self-learning and self-optimization capabilities. Case studies in micromachining processes

    Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Date of defense: 22-09-201

    Some Clustering Methods, Algorithms and their Applications

    Clustering is a type of unsupervised learning [15]. In an unsupervised learning task, no target values or "supervisors" are available, so the structure of the data must be derived from the inputs themselves. Clustering is central to data mining and machine learning: grouping datasets according to their similarities makes it possible, for example, to predict user behavior more accurately. The purpose of this research is to compare and contrast three widely used data-clustering methods. Clustering techniques include partitioning, hierarchical, density-based, grid-based, and fuzzy clustering. Machine learning, data mining, pattern recognition, image analysis, and bioinformatics are just a few of the many fields where clustering is used as an analytical technique. In addition to defining the various algorithms, specialized forms of cluster analysis, and linkage methods, the paper offers a review of the clustering techniques used in the big data setting.
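
    A short, self-contained comparison of three of the clustering families listed above (partitioning, hierarchical, and density-based) on toy data with scikit-learn; the dataset and hyperparameters are placeholders chosen only for illustration.

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.metrics import silhouette_score

# Toy 2-D data standing in for a real dataset.
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=42)

methods = {
    "partitioning (k-means)": KMeans(n_clusters=3, n_init=10, random_state=0),
    "hierarchical (Ward linkage)": AgglomerativeClustering(n_clusters=3, linkage="ward"),
    "density-based (DBSCAN)": DBSCAN(eps=0.8, min_samples=5),
}

for name, algorithm in methods.items():
    labels = algorithm.fit_predict(X)
    # silhouette needs at least two clusters; DBSCAN marks noise points as -1
    if len(set(labels)) > 1:
        print(f"{name}: silhouette = {silhouette_score(X, labels):.3f}")
```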

    Automatic object classification for surveillance videos.

    PhD thesis. The recent popularity of surveillance video systems, especially in urban scenarios, demands the development of visual techniques for monitoring purposes. A primary step towards intelligent surveillance video systems is automatic object classification, which remains an open research problem and the keystone for the development of more specific applications. Typically, object representation is based on inherent visual features. However, psychological studies have demonstrated that human beings can routinely categorise objects according to their behaviour. The gap between the features automatically extracted by a computer, such as appearance-based features, and the concepts perceived effortlessly by human beings but unattainable for machines, such as behaviour, is commonly known as the semantic gap. Consequently, this thesis proposes to narrow the semantic gap and bring machine and human understanding together for object classification. A Surveillance Media Management framework is proposed to automatically detect and classify objects by analysing the physical properties inherent in their appearance (machine understanding) and the behaviour patterns which require a higher level of understanding (human understanding). Finally, a probabilistic multimodal fusion algorithm bridges the gap by performing automatic classification that considers both machine and human understanding. The performance of the proposed Surveillance Media Management framework has been thoroughly evaluated on outdoor surveillance datasets. The experiments conducted demonstrate that the combination of machine and human understanding substantially enhances object classification performance. Finally, the inclusion of human reasoning and understanding provides the essential information to bridge the semantic gap towards smart surveillance video systems.
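
    A minimal sketch of probabilistic multimodal fusion in the spirit described above: a weighted log-linear (product-rule) combination of an appearance-based posterior and a behaviour-based posterior over the same classes. The class set, probabilities and weight are hypothetical, and this generic rule is not necessarily the fusion algorithm developed in the thesis.

```python
import numpy as np

def fuse_posteriors(p_appearance, p_behaviour, w=0.5, eps=1e-9):
    """Weighted log-linear (product-rule) fusion of two class posteriors.
    w balances the appearance and behaviour modalities; eps guards against log(0)."""
    log_p = w * np.log(p_appearance + eps) + (1.0 - w) * np.log(p_behaviour + eps)
    p = np.exp(log_p - log_p.max())          # subtract max for numerical stability
    return p / p.sum()                       # renormalise to a probability vector

classes = ["pedestrian", "car", "cyclist"]          # hypothetical object classes
p_app = np.array([0.55, 0.30, 0.15])                # appearance-based posterior (placeholder)
p_beh = np.array([0.20, 0.70, 0.10])                # behaviour-based posterior (placeholder)
fused = fuse_posteriors(p_app, p_beh, w=0.6)
print(classes[int(fused.argmax())], np.round(fused, 3))
```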

    A survey of the application of soft computing to investment and financial trading
