5 research outputs found

    Towards Smart Data Selection from Time Series Using Statistical Methods

    Get PDF
    Transmitting and storing the large volumes of dynamic/time series data collected by modern sensors can represent a significant technological challenge. One way to mitigate this challenge is to select a subset of significant data points, reducing data volumes without sacrificing the quality of the results of the subsequent analysis. This paper proposes a method for adaptively identifying optimal data point selection algorithms for sensor time series on a window-by-window basis; the contribution thus focuses on quantifying the effect of applying data selection algorithms to time series windows. The proposed approach is first applied to multiple synthetically generated time series, obtained by concatenating multiple sources one after the other, and then validated on the entire UCR time series public data archive. This work was supported in part by the 3KIA project through ELKARTEK, Basque Government.
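    The abstract describes picking, per window, whichever data point selection algorithm best preserves the signal. The sketch below is only an illustration of that idea, not the authors' implementation: the two candidate selectors, the window and budget sizes, and the interpolation-based RMSE used to score them are all assumptions.

        import numpy as np

        def uniform_select(x, k):
            # Keep k points at evenly spaced indices.
            return np.linspace(0, len(x) - 1, k).astype(int)

        def extrema_select(x, k):
            # Keep the k points that deviate most from the window mean.
            return np.sort(np.argsort(np.abs(x - x.mean()))[-k:])

        def reconstruction_rmse(x, idx):
            # Interpolate the kept points back to full length and measure the error.
            x_rec = np.interp(np.arange(len(x)), idx, x[idx])
            return np.sqrt(np.mean((x - x_rec) ** 2))

        def best_selector_per_window(series, window=128, k=16,
                                     selectors=(uniform_select, extrema_select)):
            # For each window, quantify the effect of every candidate selector
            # and keep the one with the lowest reconstruction error.
            choices = []
            for start in range(0, len(series) - window + 1, window):
                w = series[start:start + window]
                scored = [(reconstruction_rmse(w, sel(w, k)), sel.__name__)
                          for sel in selectors]
                choices.append(min(scored))
            return choices

    On a series that switches between regimes, as in the concatenated synthetic sources described above, the chosen selector would typically change from window to window, which is the behaviour the paper quantifies.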

    Classifier-Based Data Transmission Reduction in Wearable Sensor Network for Human Activity Monitoring

    Get PDF
    The recent development of wireless wearable sensor networks offers a spectrum of new applications in the fields of healthcare, medicine, activity monitoring, sport, safety, human-machine interfacing, and beyond. Successful use of this technology depends on the lifetime of the battery-powered sensor nodes. This paper presents a new method for extending the lifetime of wearable sensor networks by avoiding unnecessary data transmissions. The introduced method is based on embedded classifiers that allow sensor nodes to decide whether current sensor readings have to be transmitted to the cluster head. To train the classifiers, a procedure was elaborated that takes into account the impact of data selection on the accuracy of a recognition system. This approach was implemented in a prototype wearable sensor network for human activity monitoring. Real-world experiments were conducted to evaluate the new method in terms of network lifetime, energy consumption, and accuracy of human activity recognition. The experimental evaluation confirmed that the proposed method enables a significant prolongation of the network lifetime while preserving high accuracy of the activity recognition. The experiments also revealed advantages of the method in comparison with state-of-the-art algorithms for data transmission reduction.
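    As a rough illustration of the transmission-reduction idea, the sketch below gates each window of readings through a small classifier that predicts whether the window needs to reach the cluster head. The feature set, the decision tree, and the placeholder training data are assumptions; the paper's own procedure ties the training labels to the impact of data selection on recognition accuracy.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier  # stand-in for the embedded classifier

        def window_features(window):
            # Per-axis statistics a low-power node could compute on-board.
            return np.concatenate([window.mean(axis=0), window.std(axis=0)])

        class TransmissionGate:
            # Decides, per window of sensor samples, whether the reading should be
            # sent to the cluster head (True) or skipped (False).
            def __init__(self, clf):
                self.clf = clf

            def should_transmit(self, window):
                return bool(self.clf.predict(window_features(window).reshape(1, -1))[0])

        # Hypothetical usage: X_train/y_train would come from the training procedure,
        # and radio_send is a placeholder for the node's transmit routine.
        # clf = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)
        # gate = TransmissionGate(clf)
        # if gate.should_transmit(current_window):
        #     radio_send(current_window)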

    Data Compression Based on Stacked RBM-AE Model for Wireless Sensor Networks

    No full text
    Data compression is very important in wireless sensor networks (WSNs) given the limited energy of sensor nodes. Most of the energy consumption results from data communication, so the lifetime of sensor nodes is usually prolonged by reducing data transmission and reception. In this paper, we propose a new Stacked RBM Auto-Encoder (Stacked RBM-AE) model to compress sensing data, composed of an encode layer and a decode layer: the encode layer compresses the sensing data and the decode layer reconstructs it. The encode layer and the decode layer each consist of four standard Restricted Boltzmann Machines (RBMs). We also provide an energy optimization method that further reduces the energy consumption of model storage and computation by pruning the parameters of the model. We test the performance of the model using the environmental data collected by the Intel Lab. When the compression ratio of the model is 10, the average Percentage RMS Difference value is 10.04% and the average temperature reconstruction error is 0.2815 °C, while node communication energy consumption in the WSN can be reduced by 90%. Compared with traditional methods, the proposed model has better compression efficiency and reconstruction accuracy under the same compression ratio. Our experimental results show that the new neural network model not only applies to data compression for WSNs but also has high compression efficiency and good transfer learning ability.
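    The compress/reconstruct path of a stacked RBM auto-encoder can be sketched as a chain of sigmoid layers; the sketch below assumes pretrained weights and omits layer-wise RBM training, the layer sizes, and the pruning step, all of which are specific to the paper. The standard Percentage RMS Difference metric quoted above is included for reference.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        class StackedRBMAE:
            # Each layer applies the RBM's visible-to-hidden mapping sigmoid(W x + b).
            # encode_params / decode_params are lists of (W, b) pairs, e.g. four each.
            def __init__(self, encode_params, decode_params):
                self.encode_params = encode_params
                self.decode_params = decode_params

            def _forward(self, x, params):
                for W, b in params:
                    x = sigmoid(W @ x + b)
                return x

            def compress(self, x):
                return self._forward(x, self.encode_params)

            def reconstruct(self, code):
                return self._forward(code, self.decode_params)

        def prd(x, x_rec):
            # Percentage RMS Difference between original and reconstructed signal.
            return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))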

    Data analysis and machine learning approaches for time series pre- and post-processing pipelines

    Get PDF
    157 p. In the industrial domain, time series are usually generated continuously by sensors that constantly capture and monitor the operation of machines in real time. It is therefore important that cleaning algorithms support near-real-time operation. Moreover, as the data evolve, the cleaning strategy must change adaptively and incrementally, to avoid having to restart the cleaning process from scratch each time. The objective of this thesis is to assess the feasibility of applying machine learning pipelines to the data preprocessing stages. To this end, this work proposes methods capable of selecting optimal preprocessing strategies, trained on the available historical data by minimizing empirical loss functions. Specifically, this thesis studies the processes of time series compression, variable joining, observation imputation, and surrogate model generation. In each of them, the optimal selection and combination of multiple strategies is pursued. This approach is defined according to the characteristics of the data and the system properties and constraints specified by the user.
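    A minimal sketch of the strategy-selection idea described above: candidate preprocessing strategies are scored by an empirical loss on historical data for which the ground truth is known, and the lowest-loss strategy is kept. The two imputation candidates and the MSE loss are assumptions for illustration only.

        import numpy as np

        def linear_interpolate(series):
            # Candidate strategy: fill missing values by linear interpolation.
            idx = np.flatnonzero(~np.isnan(series))
            return np.interp(np.arange(len(series)), idx, series[idx])

        def forward_fill(series):
            # Candidate strategy: propagate the last observed value forward.
            out = series.copy()
            for i in range(1, len(out)):
                if np.isnan(out[i]):
                    out[i] = out[i - 1]
            return out

        def select_strategy(history_with_gaps, history_truth,
                            candidates=(linear_interpolate, forward_fill)):
            # Pick the preprocessing strategy with the lowest empirical loss (MSE)
            # on the available historical data.
            losses = {c.__name__: np.nanmean((c(history_with_gaps) - history_truth) ** 2)
                      for c in candidates}
            return min(losses, key=losses.get), losses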

    Artificial intelligence methods for security and cyber security systems

    Get PDF
    This research concerns threat analysis and countermeasures employing Artificial Intelligence (AI) methods within the civilian domain, where safety and mission-critical aspects are essential. AI faces challenges of repeatable determinism and decision explanation. This research proposed methods for dense and convolutional networks that provide repeatable determinism. In dense networks, the proposed alternative method had equal performance with more structured learnt weights. The proposed method also showed earlier learning and higher accuracy in convolutional networks: when demonstrated on colour image classification, first-epoch accuracy improved to 67%, from 29% in the existing scheme. Examined in transfer learning with the Fast Gradient Sign Method (FGSM) as an analytical means of controlling the distortion of dissimilarity, a finding was that the proposed method retained the learnt model more strongly, with 31% accuracy instead of 9%. The research also proposed a threat analysis method with set-mappings and first-principle analytical steps applied to a Symbolic AI method using an algebraic expert system with virtualized neurons. The neural expert system method demonstrated the infilling of parameters by calculating beamwidths under variations in the uncertainty of the antenna type. When combined with a proposed formula extraction method, it offers the potential for machine learning of new rules as a Neuro-Symbolic AI method. The proposed method uses extra weights, allocated to neuron input value ranges, as activation strengths; it simplifies the learnt representation and reduces model depth, with less significant dropout potential. Finally, an image classification method for emitter identification is proposed, together with a synthetic dataset generation method, and shows accurate identification between fourteen highly ambiguous radar emission modes, achieving 99.8% accuracy. That method would provide a mechanism to recognize non-threat civilian radars, aimed at raising a threat alert when deviations from those civilian emitters are detected.
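    The FGSM step mentioned above is a standard, well-documented perturbation; a minimal sketch of that single step is given below. It says nothing about the thesis's own dense, convolutional, or neuro-symbolic proposals, and the gradient is assumed to be supplied by whichever framework holds the model.

        import numpy as np

        def fgsm_perturb(x, grad_loss_wrt_x, epsilon=0.01, clip=(0.0, 1.0)):
            # Fast Gradient Sign Method: x_adv = x + epsilon * sign(dL/dx),
            # clipped back to the valid input range.
            x_adv = x + epsilon * np.sign(grad_loss_wrt_x)
            return np.clip(x_adv, *clip)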