
    Wavelet-based filtration procedure for denoising the predicted CO2 waveforms in smart home within the Internet of Things

    The operating cost of a smart home can be reduced by optimizing the management of the building's technical functions based on the current occupancy status of the individual monitored spaces. To respect the privacy of smart home residents, indirect methods (without cameras and microphones) can be used for occupancy recognition. This article describes a newly proposed indirect method to increase the accuracy of occupancy recognition in the monitored spaces of smart homes. The proposed procedure predicts the course of CO2 concentration from operationally measured quantities (indoor temperature and indoor relative humidity) using artificial neural networks with a multilayer perceptron algorithm. The mathematical wavelet transform method is then used to cancel additive noise from the predicted CO2 concentration signal, with the objective of increasing the accuracy of the prediction. The calculated accuracy of the CO2 concentration waveform prediction with additive noise canceling applied was higher than 98% in the selected experiments.
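
    The abstract does not specify how the denoising step is implemented; the following is a minimal sketch of wavelet-based additive noise cancellation on a predicted 1-D waveform, assuming the PyWavelets library, a Daubechies-4 wavelet, and a soft universal threshold (all placeholder choices, not the paper's reported settings).

```python
# Minimal wavelet-denoising sketch (assumptions: PyWavelets, db4 wavelet,
# soft universal threshold; the paper's exact settings are not given here).
import numpy as np
import pywt

def denoise_waveform(signal, wavelet="db4", level=4):
    """Remove additive noise from a 1-D signal via wavelet thresholding."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate the noise sigma from the finest detail coefficients (MAD).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = sigma * np.sqrt(2.0 * np.log(len(signal)))
    # Soft-threshold the detail coefficients, keep the approximation intact.
    denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Example: a noisy synthetic "CO2-like" curve (ppm scale is a placeholder).
t = np.linspace(0, 1, 1024)
clean = 400 + 300 * np.exp(-((t - 0.5) ** 2) / 0.02)
noisy = clean + np.random.normal(0, 15, t.size)
smooth = denoise_waveform(noisy)
```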

    High-Dimensional Bayesian Geostatistics

    With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations whose complexity increases in cubic order with the number of spatial locations and temporal points, which renders such models infeasible for large data sets. This article offers a focused review of two methods for constructing well-defined, highly scalable spatiotemporal stochastic processes. Both of these processes can be used as "priors" for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is $\sim n$ floating point operations (flops) per iteration, where $n$ is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings.
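
    As a rough illustration of the NNGP construction described above (not the authors' implementation), the sketch below evaluates the log-density of a spatial random field by conditioning each location on at most m previously ordered nearest neighbours under an exponential covariance; the ordering rule, covariance parameters, and neighbour count are placeholder choices.

```python
# NNGP log-density sketch: each w_i is conditioned on at most m earlier
# nearest neighbours, which induces a sparse precision matrix and roughly
# linear cost in n. Assumptions: exponential covariance, simple coordinate
# ordering, placeholder parameter values; not the paper's implementation.
import numpy as np

def exp_cov(d, sigma2=1.0, phi=3.0):
    return sigma2 * np.exp(-phi * d)

def nngp_logdensity(coords, w, m=10, sigma2=1.0, phi=3.0):
    order = np.argsort(coords[:, 0])           # order locations by x-coordinate
    coords, w = coords[order], w[order]
    logdens = 0.0
    for i in range(len(w)):
        d_prev = np.linalg.norm(coords[:i] - coords[i], axis=1)
        nbrs = np.argsort(d_prev)[:m]          # nearest *previous* neighbours
        if nbrs.size == 0:
            mean, var = 0.0, sigma2
        else:
            pairwise = np.linalg.norm(
                coords[nbrs, None, :] - coords[None, nbrs, :], axis=2)
            C_nn = exp_cov(pairwise, sigma2, phi)
            c_in = exp_cov(d_prev[nbrs], sigma2, phi)
            b = np.linalg.solve(C_nn, c_in)
            mean = b @ w[nbrs]
            var = sigma2 - b @ c_in
        logdens += -0.5 * (np.log(2 * np.pi * var) + (w[i] - mean) ** 2 / var)
    return logdens

# Example: 500 random locations and a placeholder realization of the field.
coords = np.random.rand(500, 2)
w = np.random.randn(500)
print(nngp_logdensity(coords, w))
```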

    Techniques of EMG signal analysis: detection, processing, classification and applications

    Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human-computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis and to provide efficient and effective ways of understanding the signal and its nature. We further point out some of the hardware implementations using EMG, focusing on applications related to prosthetic hand control, grasp recognition, and human-computer interaction. A comparative study is also given to show the performance of various EMG signal analysis methods. This paper provides researchers with a good understanding of the EMG signal and its analysis procedures. This knowledge will help them develop more powerful, flexible, and efficient applications.
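
    For illustration only (the survey covers far more detection, decomposition, and classification techniques), the sketch below computes a few time-domain features commonly used as classifier inputs in EMG work, namely mean absolute value, root-mean-square, waveform length, and zero crossings; the window length and zero-crossing threshold are arbitrary placeholders.

```python
# Common time-domain EMG features for one analysis window.
# Illustrative only; the survey discusses many more techniques than shown.
import numpy as np

def emg_features(window, zc_threshold=0.01):
    """Return a small feature vector for a 1-D EMG window."""
    mav = np.mean(np.abs(window))                      # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))                # root mean square
    wl = np.sum(np.abs(np.diff(window)))               # waveform length
    signs = window[:-1] * window[1:]
    jumps = np.abs(np.diff(window))
    zc = np.sum((signs < 0) & (jumps > zc_threshold))  # zero crossings
    return np.array([mav, rms, wl, zc])

# Example: split a recording into 200-sample windows and extract features.
emg = np.random.randn(2000) * 0.1                      # placeholder signal
features = np.array([emg_features(w) for w in np.split(emg, 10)])
```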

    The Cascade Neo-Fuzzy Architecture and its Online Learning Algorithm

    In this paper, a learning algorithm for adjusting the weight coefficients of the Cascade Neo-Fuzzy Neural Network (CNFNN) in sequential mode is introduced. The considered architecture has a structure similar to the Cascade-Correlation Learning Architecture proposed by S. E. Fahlman and C. Lebiere, but differs from it in the type of artificial neurons. The CNFNN consists of neo-fuzzy neurons, which can be adjusted using high-speed linear learning procedures. The proposed CNFNN is characterized by a high learning rate and a small required training sample, and its operation can be described by fuzzy linguistic "if-then" rules, providing "transparency" of the obtained results compared with conventional neural networks. The use of an online learning algorithm allows input data to be processed sequentially in real time.
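
    As a rough sketch of the building block involved rather than the full cascade architecture, the code below implements a single neo-fuzzy neuron with triangular membership functions whose linear weights are adapted online with an LMS-style update; the number of membership functions, the input range, and the learning rate are placeholder choices.

```python
# Single neo-fuzzy neuron sketch: each input passes through triangular
# membership functions whose firing levels are combined linearly, so the
# weights can be adapted online with a fast linear (LMS-style) update.
# Placeholders: 5 membership functions per input, inputs in [0, 1], lr=0.1.
import numpy as np

class NeoFuzzyNeuron:
    def __init__(self, n_inputs, n_mf=5, lr=0.1):
        self.centers = np.linspace(0.0, 1.0, n_mf)   # MF centres on [0, 1]
        self.width = self.centers[1] - self.centers[0]
        self.w = np.zeros((n_inputs, n_mf))          # one weight per MF
        self.lr = lr

    def _memberships(self, x):
        # Triangular membership of each input against each centre.
        return np.clip(1.0 - np.abs(x[:, None] - self.centers) / self.width,
                       0.0, 1.0)

    def predict(self, x):
        return np.sum(self._memberships(x) * self.w)

    def update(self, x, target):
        mu = self._memberships(x)
        error = target - np.sum(mu * self.w)
        self.w += self.lr * error * mu               # LMS-style online step
        return error

# Online training example on a toy target function.
neuron = NeoFuzzyNeuron(n_inputs=2)
for _ in range(1000):
    x = np.random.rand(2)
    neuron.update(x, target=np.sin(x[0]) + 0.5 * x[1])
```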

    Neural network-based colonoscopic diagnosis using on-line learning and differential evolution

    In this paper, on-line training of neural networks is investigated in the context of computer-assisted colonoscopic diagnosis. A memory-based adaptation of the learning rate for on-line back-propagation (BP) is proposed and used to seed an on-line evolution process that applies a differential evolution (DE) strategy to (re-)adapt the neural network to modified environmental conditions. Our approach views on-line training as tracking the changing location of an approximate solution of a pattern-based, and thus dynamically changing, error function. The proposed hybrid strategy is compared with other standard training methods that have traditionally been used for training neural networks off-line. Results in interpreting colonoscopy images and frames of video sequences are promising and suggest that networks trained with this strategy detect malignant regions of interest accurately.
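
    The sketch below illustrates one generation of a generic differential-evolution step (DE/rand/1/bin) applied to flattened network weight vectors, in the spirit of the re-adaptation stage described above; the error function, population size, and the F and CR values are stand-ins, not the paper's configuration.

```python
# Differential evolution over flattened weight vectors (DE/rand/1/bin).
# Sketch only: the error() stand-in and the F, CR, and population values
# are placeholders, not the paper's configuration.
import numpy as np

def de_step(population, error, F=0.5, CR=0.9, rng=np.random):
    """One DE generation: mutate, crossover, and greedily select."""
    new_pop = population.copy()
    n, d = population.shape
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = population[a] + F * (population[b] - population[c])
        cross = rng.rand(d) < CR
        cross[rng.randint(d)] = True                  # keep at least one gene
        trial = np.where(cross, mutant, population[i])
        if error(trial) < error(population[i]):       # greedy selection
            new_pop[i] = trial
    return new_pop

# Example: re-adapt 20 candidate weight vectors of length 50.
error = lambda w: np.sum(w ** 2)                      # placeholder error
pop = np.random.randn(20, 50) * 0.1
for _ in range(100):
    pop = de_step(pop, error)
```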

    Data Simulation and Trend Removal Optimization Applied to Electrochemical Noise

    Electrochemical noise analysis (ENA) is a well-known technique that measures the potential fluctuations produced by kinetic variations along the electrochemical corrosion process. This practice requires the application of diverse signal processing methods; therefore, in order to propose and evaluate new methodologies, it is necessary to simulate signals by computer data generation using different algorithms. In the first approach, data were simulated by superimposing Gaussian noise on nontrivial trend lines, and several methods were assessed using this set of computer-simulated data. The results indicate that a new methodology based on medians of moving intervals and cubic spline interpolation shows the best performance; nevertheless, relative errors are acceptable for the trend but not for the noise. In the second approach, we used artificial intelligence for trend removal, combining interval signal processing with backpropagation neural networks. Finally, a non-Gaussian noise function that simulates non-stationary pits was proposed and all detrending methods were re-evaluated, showing that as the difference between trend and noise increased, the accuracy of the artificial neural networks (ANNs) decreased. In addition, when polynomial fitting, moving average removal (MAR), and moving median removal (MMR) were evaluated, MMR yielded the best results, although it is not a definitive solution.
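
    As a simple illustration of the moving median removal (MMR) detrending mentioned above, the sketch below subtracts a running median from a simulated electrochemical-noise record; the window length and the synthetic record are arbitrary placeholders.

```python
# Moving median removal (MMR) sketch: the low-frequency trend is estimated
# with a running median and subtracted, leaving the noise component.
# The 101-sample window and the synthetic record are placeholders.
import numpy as np
from scipy.ndimage import median_filter

def remove_trend_mmr(signal, window=101):
    trend = median_filter(signal, size=window, mode="nearest")
    return signal - trend, trend

# Example: a drifting potential record with superimposed noise.
t = np.linspace(0, 100, 5000)
record = 0.002 * t + 0.01 * np.sin(0.05 * t) + np.random.normal(0, 1e-3, t.size)
noise, trend = remove_trend_mmr(record)
```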