1,007 research outputs found

    Anti-Fall: A Non-intrusive and Real-time Fall Detector Leveraging CSI from Commodity WiFi Devices

    Falls are among the major health threats and obstacles to independent living for the elderly, so timely and reliable fall detection is crucial for mitigating their effects. In this paper, leveraging the fine-grained Channel State Information (CSI) and the multi-antenna setting of commodity WiFi devices, we design and implement a real-time, non-intrusive, and low-cost indoor fall detector called Anti-Fall. For the first time, the CSI phase difference across two antennas is identified as the salient feature for reliably segmenting fall and fall-like activities; both the phase and amplitude information of CSI are then exploited to accurately separate falls from other fall-like activities. Experimental results in two indoor scenarios demonstrate that Anti-Fall consistently outperforms the state-of-the-art approach WiFall, with a 10% higher detection rate and a 10% lower false alarm rate on average. Comment: 13 pages, 8 figures, corrected version, ICOST conference.
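    A minimal Python sketch of the segmentation idea, assuming CSI from two antennas is available as complex arrays: compute the per-packet phase difference and flag high-variance windows as candidate fall segments. The array shapes, window length, and threshold are illustrative assumptions, not values from the paper.

# Minimal sketch: segment candidate fall events from the variance of the CSI
# phase difference across two antennas. Shapes, window length, and threshold
# are illustrative assumptions, not the values used in Anti-Fall.
import numpy as np

def phase_difference(csi_ant1: np.ndarray, csi_ant2: np.ndarray) -> np.ndarray:
    """Per-(packet, subcarrier) phase difference of two complex CSI streams."""
    return np.angle(csi_ant1 * np.conj(csi_ant2))

def segment_candidates(phase_diff: np.ndarray, win: int = 100, thresh: float = 0.5) -> list:
    """Start indices of sliding windows whose phase-difference variance exceeds thresh."""
    starts = []
    for s in range(0, phase_diff.shape[0] - win, win // 2):  # 50% window overlap
        if phase_diff[s:s + win].var() > thresh:
            starts.append(s)
    return starts

# Toy usage with random complex CSI (1000 packets, 30 subcarriers, 2 antennas).
rng = np.random.default_rng(0)
csi1 = rng.standard_normal((1000, 30)) + 1j * rng.standard_normal((1000, 30))
csi2 = rng.standard_normal((1000, 30)) + 1j * rng.standard_normal((1000, 30))
print(segment_candidates(phase_difference(csi1, csi2)))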

    Automatic detection of welding defects using the convolutional neural network

    Quality control of welded joints is an important step before commissioning various types of metal structures. The main obstacles to the commissioning of such facilities are the areas where the welded joint deviates from acceptable defect standards. Defects of welded joints include non-welded areas, foreign inclusions, cracks, pores, etc. The article describes an approach to detecting the main types of welded-joint defects using a combination of convolutional neural networks and the support vector machine method. Convolutional neural networks are used for primary classification. The support vector machine is used to accurately delineate defect boundaries. For preprocessing, we use morphological filtering. A series of experiments confirms the higher efficiency of the proposed method in comparison with a pure CNN method for detecting defects.
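    A minimal sketch of the two-stage idea, assuming a small CNN classifies radiograph patches and an SVM trained on the CNN's features refines the decision; the architecture, patch size, and class list are illustrative assumptions, not the authors' configuration.

# Minimal two-stage sketch: CNN for primary classification, SVM on CNN features
# for refinement. All sizes and class names are illustrative assumptions.
import torch
import torch.nn as nn
from sklearn.svm import SVC

class PatchCNN(nn.Module):
    def __init__(self, n_classes: int = 5):  # e.g. crack, pore, inclusion, non-welded area, no defect
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)  # assumes 64x64 grayscale patches

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.head(z), z  # logits for the CNN stage, features for the SVM stage

# Toy usage: fit the SVM refinement stage on CNN features of 64x64 patches.
cnn = PatchCNN()
patches = torch.randn(8, 1, 64, 64)
labels = [0, 1, 2, 3, 4, 0, 1, 2]
_, feats = cnn(patches)
svm = SVC(kernel="rbf").fit(feats.detach().numpy(), labels)
print(svm.predict(feats.detach().numpy()))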

    Implicit surfaces with globally regularised and compactly supported basis functions

    We consider the problem of constructing a function whose zero set is to represent a surface, given sample points with surface normal vectors. The contributions include a novel means of regularising multi-scale compactly supported basis functions that leads to the desirable properties previously associated only with fully supported bases, and we show equivalence to a Gaussian process with a modified covariance function. We also provide a regularisation framework for simpler and more direct treatment of surface normals, along with a corresponding generalisation of the representer theorem. We demonstrate the techniques on 3D problems of up to 14 million data points, as well as on 4D time series data.
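    The sketch below illustrates the general construction (not the paper's regularisation scheme): off-surface constraints are generated along the normals and a ridge-regularised linear system is solved for the weights of a compactly supported Wendland basis. The support radius, offset, and ridge term are illustrative assumptions.

# Minimal sketch: implicit surface f(x) = 0 from points and normals, using a
# compactly supported Wendland C2 kernel and a simple ridge regularisation.
import numpy as np

def wendland_c2(r: np.ndarray, support: float) -> np.ndarray:
    t = np.clip(r / support, 0.0, 1.0)
    return (1.0 - t) ** 4 * (4.0 * t + 1.0)

def fit_implicit(points, normals, support=0.5, eps=0.1, ridge=1e-6):
    # On-surface constraints (value 0) plus offsets along the normals (value +/- eps).
    centres = np.vstack([points, points + eps * normals, points - eps * normals])
    values = np.concatenate([np.zeros(len(points)), eps * np.ones(len(points)), -eps * np.ones(len(points))])
    d = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
    weights = np.linalg.solve(wendland_c2(d, support) + ridge * np.eye(len(centres)), values)
    return centres, weights

def evaluate(x, centres, weights, support=0.5):
    d = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=-1)
    return wendland_c2(d, support) @ weights

# Toy usage: random points on the unit sphere, where the outward normal equals the position.
rng = np.random.default_rng(0)
pts = rng.standard_normal((80, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
centres, w = fit_implicit(pts, pts.copy())
print(evaluate(np.array([[0.8, 0.0, 0.0], [1.2, 0.0, 0.0]]), centres, w))  # queries just inside/outside the sphere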

    DC-Prophet: Predicting Catastrophic Machine Failures in DataCenters

    When will a server fail catastrophically in an industrial datacenter? Is it possible to forecast these failures so that preventive actions can be taken to increase the reliability of a datacenter? To answer these questions, we have studied what are probably the largest publicly available datacenter traces, containing more than 104 million events from 12,500 machines. Among these samples, we observe and categorize three types of machine failures, all of which are catastrophic and may lead to information loss or, even worse, reliability degradation of a datacenter. We further propose a two-stage framework, DC-Prophet, based on a One-Class Support Vector Machine and a Random Forest. DC-Prophet extracts surprising patterns and accurately predicts the next failure of a machine. Experimental results show that DC-Prophet achieves an AUC of 0.93 in predicting the next machine failure, and an F3-score of 0.88 (out of 1). On average, DC-Prophet outperforms other classical machine learning methods by 39.45% in F3-score. Comment: 13 pages, 5 figures, accepted by ECML PKDD 2017.
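    A minimal sketch in the spirit of the two-stage design, assuming a One-Class SVM acts as an anomaly filter and a Random Forest then classifies whether a flagged machine will fail; the feature layout and hyperparameters are illustrative assumptions, not the paper's feature set or tuning.

# Minimal two-stage sketch: One-Class SVM as anomaly filter, Random Forest as
# failure classifier. Features and thresholds are illustrative assumptions.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))      # e.g. per-machine usage/event counts over recent windows
y_train = rng.integers(0, 2, size=500)   # 1 = machine failed in the next window

# Stage 1: anomaly filter trained on healthy machines only.
ocsvm = OneClassSVM(nu=0.05, kernel="rbf").fit(X_train[y_train == 0])
# Stage 2: failure classifier trained on all machines.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def predict_failure(x: np.ndarray) -> int:
    """Predict a failure only for machines the anomaly filter marks as suspicious."""
    if ocsvm.predict(x.reshape(1, -1))[0] == -1:  # -1 means anomalous
        return int(rf.predict(x.reshape(1, -1))[0])
    return 0

print(predict_failure(X_train[0]))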

    Real Time Trajectory Prediction Using Deep Conditional Generative Models

    Data-driven methods for time series forecasting that quantify uncertainty open important new possibilities for robot tasks with hard real-time constraints, allowing the robot system to make decisions that trade off reaction time against prediction accuracy. Despite recent advances in deep learning, it is still challenging to make accurate long-term predictions with the low latency required by real-time robotic systems. In this paper, we propose a deep conditional generative model for trajectory prediction that is learned from a data set of collected trajectories. Our method uses encoder and decoder deep networks that map complete or partial trajectories to a Gaussian-distributed latent space and back, allowing for fast inference of the future values of a trajectory given previous observations. The encoder and decoder networks are trained using stochastic gradient variational Bayes. In the experiments, we show that our model provides more accurate long-term predictions with lower latency than popular models for trajectory forecasting such as recurrent neural networks or physical models based on differential equations. Finally, we test our proposed approach in a robot table tennis scenario to evaluate the performance of the proposed method in a robotic task with hard real-time constraints.
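    A minimal sketch of the encoder/decoder idea, assuming a small conditional VAE over flattened 2-D trajectories trained with the reparameterisation trick; the network sizes and trajectory lengths are illustrative assumptions, not the architecture evaluated in the paper.

# Minimal conditional-VAE sketch: encode (observed + future) trajectory to a Gaussian
# latent, decode the future conditioned on the observed part. Sizes are illustrative.
import torch
import torch.nn as nn

class TrajectoryCVAE(nn.Module):
    def __init__(self, obs_len=20, pred_len=30, dim=2, latent=16, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear((obs_len + pred_len) * dim, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.decoder = nn.Sequential(
            nn.Linear(latent + obs_len * dim, hidden), nn.ReLU(),
            nn.Linear(hidden, pred_len * dim),
        )

    def forward(self, obs, future):
        h = self.encoder(torch.cat([obs, future], dim=1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return self.decoder(torch.cat([z, obs], dim=1)), mu, logvar

    def predict(self, obs):
        """Fast inference: sample the latent prior and decode, given only the observed part."""
        z = torch.randn(obs.shape[0], self.to_mu.out_features)
        return self.decoder(torch.cat([z, obs], dim=1))

# One training step on toy data (flattened 2-D trajectories).
model = TrajectoryCVAE()
obs, future = torch.randn(4, 20 * 2), torch.randn(4, 30 * 2)
recon, mu, logvar = model(obs, future)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = nn.functional.mse_loss(recon, future) + kl
loss.backward()
print(model.predict(obs).shape)  # torch.Size([4, 60])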

    Forecasting: Adopting the Methodology of Support Vector Machines to Nursing Research

    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/71569/1/j.1741-6787.2006.00062.x.pd

    Domain adaptation with conditional transferable components

    Domain adaptation arises in supervised learning when the training (source domain) and test (target domain) data have different distributions. Let X and Y denote the features and target, respectively; previous work on domain adaptation mainly considers the covariate shift situation, where the distribution of the features P(X) changes across domains while the conditional distribution P(Y|X) stays the same. To reduce domain discrepancy, recent methods try to find invariant components T(X) that have similar P(T(X)) on different domains by explicitly minimizing a distribution discrepancy measure. However, it is not clear whether P(Y|T(X)) in different domains is also similar when P(Y|X) changes. Furthermore, transferable components do not necessarily have to be invariant. If the change in some components is identifiable, we can make use of such components for prediction in the target domain. In this paper, we focus on the case where P(X|Y) and P(Y) both change in a causal system in which Y is the cause for X. Under appropriate assumptions, we aim to extract conditional transferable components whose conditional distribution P(T(X)|Y) is invariant after proper location-scale (LS) transformations, and to identify how P(Y) changes between domains simultaneously. We provide theoretical analysis and empirical evaluation on both synthetic and real-world data to show the effectiveness of our method.
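    A minimal sketch of the underlying objective, assuming a fixed linear component extractor T(X) = XW, a class-conditional location-scale transform of the source components, and an RBF-kernel maximum mean discrepancy (MMD) as the discrepancy measure; the paper optimises these quantities jointly, whereas everything here is fixed purely for illustration.

# Minimal sketch: class-conditional location-scale transform of source components
# and an MMD measure of how far they are from the target components. The projection
# W, the LS parameters, and the bandwidth are illustrative assumptions.
import numpy as np

def rbf_mmd(a: np.ndarray, b: np.ndarray, bandwidth: float = 1.0) -> float:
    """Squared maximum mean discrepancy between samples a and b under an RBF kernel."""
    def k(x, y):
        d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))
    return float(k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean())

rng = np.random.default_rng(0)
Xs, ys = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200)  # source features and labels
Xt = rng.normal(loc=0.5, scale=1.5, size=(200, 5))                # shifted/scaled target features

W = rng.normal(size=(5, 2))                     # component extractor T(X) = X W (fixed here)
scale, loc = np.ones((2, 2)), np.zeros((2, 2))  # per-class location-scale parameters a_y, b_y

Ts_ls = scale[ys] * (Xs @ W) + loc[ys]          # class-conditional LS transform of source components
print("MMD between transformed source and target components:", rbf_mmd(Ts_ls, Xt @ W))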

    Kernel-based independence tests for causal structure learning on functional data

    Measurements of systems taken along a continuous functional dimension, such as time or space, are ubiquitous in many fields, from the physical and biological sciences to economics and engineering. Such measurements can be viewed as realisations of an underlying smooth process sampled over the continuum. However, traditional methods for independence testing and causal learning are not directly applicable to such data, as they do not take into account the dependence along the functional dimension. By using specifically designed kernels, we introduce statistical tests for bivariate, joint, and conditional independence for functional variables. Our method not only extends the applicability of the Hilbert–Schmidt independence criterion (HSIC) and its d-variate version (d-HSIC) to functional data, but also allows us to introduce a test for conditional independence by defining a novel statistic for the conditional permutation test (CPT) based on the Hilbert–Schmidt conditional independence criterion (HSCIC), with optimised regularisation strength estimated through an evaluation rejection rate. Our empirical results on the size and power of these tests using synthetic functional data show good performance, and we then exemplify their application to several constraint- and regression-based causal structure learning problems, including both synthetic examples and real socioeconomic data.
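    A minimal sketch of a kernel independence test, assuming plain vector-valued data: the biased HSIC statistic with RBF kernels and a permutation test for the p-value. This is not the functional-data kernel construction introduced in the paper; bandwidths and the permutation count are illustrative assumptions.

# Minimal sketch: biased HSIC estimator with RBF kernels plus a permutation test.
import numpy as np

def rbf_gram(x: np.ndarray, bandwidth: float = 1.0) -> np.ndarray:
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def hsic(x: np.ndarray, y: np.ndarray) -> float:
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n  # centring matrix
    return float(np.trace(rbf_gram(x) @ H @ rbf_gram(y) @ H)) / n ** 2

def hsic_permutation_test(x, y, n_perm: int = 200, seed: int = 0) -> float:
    """p-value: fraction of permutations whose HSIC is at least the observed statistic."""
    rng = np.random.default_rng(seed)
    observed = hsic(x, y)
    null = [hsic(x, y[rng.permutation(len(y))]) for _ in range(n_perm)]
    return float(np.mean(np.array(null) >= observed))

# Toy usage: y depends nonlinearly on x, so the p-value should be small.
rng = np.random.default_rng(1)
x = rng.normal(size=(100, 1))
y = np.sin(3 * x) + 0.1 * rng.normal(size=(100, 1))
print(hsic_permutation_test(x, y))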