
    Machine Learning in Wireless Sensor Networks for Smart Cities: A Survey

    Artificial intelligence (AI) and machine learning (ML) techniques have huge potential to efficiently manage the automated operation of internet of things (IoT) nodes deployed in smart cities. In smart cities, the major IoT applications are smart traffic monitoring, smart waste management, smart buildings, and patient healthcare monitoring. Small, low-power IoT nodes based on the Bluetooth (IEEE 802.15.1) and wireless sensor network (WSN) (IEEE 802.15.4) standards are generally used to transmit data to remote locations through gateways. Design problems in WSN-based IoT (WSN-IoT) include network coverage and connectivity, energy consumption, bandwidth requirements, network lifetime maximization, communication protocols, and state-of-the-art infrastructure. In this paper, the authors propose machine learning methods as an optimization tool for WSN-IoT nodes deployed in smart city applications. To the authors' knowledge, this is the first in-depth literature survey of ML techniques for low-power WSN-IoT in smart cities. The results of this survey show that supervised learning algorithms have been the most widely used (61%) for smart city applications, compared to reinforcement learning (27%) and unsupervised learning (12%).

    Clustering-based Source-aware Assessment of True Robustness for Learning Models

    We introduce a novel validation framework to measure the true robustness of learning models for real-world applications by creating source-inclusive and source-exclusive partitions in a dataset via clustering. We develop a robustness metric derived from source-aware lower and upper bounds of model accuracy even when data source labels are not readily available. We clearly demonstrate that even on a well-explored dataset like MNIST, challenging training scenarios can be constructed under the proposed assessment framework for two separate yet equally important applications: i) more rigorous learning model comparison and ii) dataset adequacy evaluation. In addition, our findings not only promise a more complete identification of trade-offs between model complexity, accuracy and robustness but can also help researchers optimize their efforts in data collection by identifying the less robust and more challenging class labels. Comment: Submitted to UAI 201
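    The partitioning idea can be illustrated with a minimal sketch (not the authors' implementation): cluster the data, hold out entire clusters as a proxy for unseen sources, and compare accuracy on a source-inclusive random split against a source-exclusive split. The use of k-means, scikit-learn's digits data as a stand-in for MNIST, and logistic regression as the model under test are all assumptions.

```python
# Minimal sketch of cluster-based "source-exclusive" evaluation.
# Assumptions: k-means as the clustering step, the digits dataset as a
# stand-in for MNIST, and logistic regression as the model under test.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)

# Pseudo-"sources": clusters discovered in input space.
sources = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)

def accuracy(train_idx, test_idx):
    model = LogisticRegression(max_iter=2000)
    model.fit(X[train_idx], y[train_idx])
    return model.score(X[test_idx], y[test_idx])

# Source-inclusive split (upper-bound proxy): a random split mixes all clusters.
inc_train, inc_test = train_test_split(np.arange(len(X)), test_size=0.3, random_state=0)
upper = accuracy(inc_train, inc_test)

# Source-exclusive split (lower-bound proxy): hold out whole clusters.
held_out = np.isin(sources, [0, 1, 2])
lower = accuracy(np.where(~held_out)[0], np.where(held_out)[0])

print(f"inclusive acc {upper:.3f}, exclusive acc {lower:.3f}, gap {upper - lower:.3f}")
```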

    Active Learning for One-Class Classification Using Two One-Class Classifiers

    This paper introduces a novel, generic active learning method for one-class classification. Active learning methods play an important role in reducing the effort of manual labeling in the field of machine learning. Although many active learning approaches have been proposed in recent years, most of them are restricted to binary or multi-class problems. One-class classifiers use samples from only one class, the so-called target class, during training and hence require special active learning strategies. The few strategies proposed for one-class classification are either limited to specific one-class classifiers or depend on particular assumptions about the dataset, such as class imbalance. Our proposed method is based on using two one-class classifiers, one for the target class and one for the so-called outlier class. This makes it possible to devise new query strategies, to reuse binary query strategies, and to define simple stopping criteria. Based on the new method, two query strategies are proposed. The experiments compare the proposed approach with known strategies on various datasets and show improved results in almost all situations. Comment: EUSIPCO 201
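    A hedged sketch of the core idea, not the paper's exact algorithm: fit one one-class model on the labeled target samples and one on the labeled outliers, then query the unlabeled points for which the two models are most ambiguous. OneClassSVM and the specific disagreement score below are assumptions.

```python
# Sketch: active learning with two one-class classifiers (target vs. outlier).
# Assumptions: OneClassSVM for both models and an ambiguity score based on the
# difference of their decision functions; not the paper's exact strategy.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
target = rng.normal(loc=0.0, scale=1.0, size=(100, 2))     # labeled target samples
outlier = rng.normal(loc=4.0, scale=1.0, size=(15, 2))     # labeled outliers
unlabeled = rng.normal(loc=2.0, scale=2.0, size=(300, 2))  # pool to query from

clf_target = OneClassSVM(gamma="scale", nu=0.1).fit(target)
clf_outlier = OneClassSVM(gamma="scale", nu=0.1).fit(outlier)

# Points where the two decision functions are closest are the least clearly
# claimed by either class, so they are queried for labeling first.
ambiguity = np.abs(clf_target.decision_function(unlabeled)
                   - clf_outlier.decision_function(unlabeled))
query_order = np.argsort(ambiguity)

print("next 5 points to label:", query_order[:5])
```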

    Regularized Data Programming with Automated Bayesian Prior Selection

    The cost of manual data labeling can be a significant obstacle in supervised learning. Data programming (DP) offers a weakly supervised solution for training dataset creation, wherein the outputs of user-defined programmatic labeling functions (LFs) are reconciled through unsupervised learning. However, DP can fail to outperform an unweighted majority vote in some scenarios, including low-data contexts. This work introduces a Bayesian extension of classical DP that mitigates failures of unsupervised learning by augmenting the DP objective with regularization terms. Regularized learning is achieved through maximum a posteriori estimation with informative priors. Majority vote is proposed as a proxy signal for automated prior parameter selection. Results suggest that regularized DP improves performance relative to maximum likelihood and majority voting, confers greater interpretability, and bolsters performance in low-data regimes.
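    To make the majority-vote proxy and the prior idea concrete, the sketch below uses assumptions that are not in the abstract: labeling functions vote in {-1, 0, +1} (0 = abstain), an unweighted majority vote provides proxy labels, and per-LF agreement with that proxy is smoothed with a Beta prior before weighting the votes. This illustrates the general direction, not the paper's model.

```python
# Sketch of weak supervision with labeling functions (LFs), a majority-vote
# baseline, and Beta-prior (MAP-style) smoothing of per-LF accuracies.
# This is an illustration of the general idea, not the paper's method.
import numpy as np

rng = np.random.default_rng(1)
n, m = 1000, 5                        # data points, labeling functions
true_y = rng.choice([-1, 1], size=n)

# Simulated LF votes: each LF is noisy and sometimes abstains (0).
lf_acc = np.array([0.8, 0.75, 0.7, 0.6, 0.55])
votes = np.zeros((n, m), dtype=int)
for j in range(m):
    correct = rng.random(n) < lf_acc[j]
    votes[:, j] = np.where(correct, true_y, -true_y)
    votes[rng.random(n) < 0.3, j] = 0          # abstentions

# Unweighted majority vote (both the baseline and the proxy signal).
mv = np.sign(votes.sum(axis=1))

# Smoothed accuracy estimate per LF against the proxy, using a Beta(a, b) prior
# that encodes the informative assumption "LFs are better than chance".
a, b = 8.0, 2.0
agree = ((votes == mv[:, None]) & (votes != 0)).sum(axis=0)
fired = (votes != 0).sum(axis=0)
acc_map = (agree + a - 1) / (fired + a + b - 2)

# Weighted vote using log-odds weights derived from the smoothed accuracies.
w = np.log(acc_map / (1 - acc_map))
weighted = np.sign((votes * w).sum(axis=1))

print("majority vote acc:", (mv == true_y).mean())
print("weighted vote acc:", (weighted == true_y).mean())
```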

    Anomaly detection of consumption in Hotel Units: A case study comparing isolation forest and variational autoencoder algorithms

    Buildings are responsible for a high percentage of global energy consumption, and thus the improvement of their efficiency can have a positive impact not only on the costs of the companies they house, but also at a global level. One way to reduce that impact is to constantly monitor the consumption levels of these buildings and to act quickly when unjustified levels are detected. Currently, a variety of sensor networks can be deployed to constantly monitor many variables associated with these buildings, including distinct types of meters, air temperature, solar radiation, etc. However, as consumption is highly dependent on occupancy and environmental variables, the identification of anomalous consumption levels is a challenging task. This study focuses on the implementation of an intelligent system capable of early detection of anomalous sequences of values in consumption time series from distinct hotel unit meters. The development of the system was performed in several steps, which resulted in the implementation of several modules. An initial (i) Exploratory Data Analysis (EDA) phase was carried out to analyze the data, including consumption datasets of electricity, water, and gas obtained over several years. The results of the EDA were used to implement a (ii) data correction module capable of dealing with the transmission losses and erroneous values identified during the EDA phase. Then, a (iii) comparative study was performed between a machine learning (ML) algorithm and a deep learning (DL) one, respectively the isolation forest (IF) and a variational autoencoder (VAE). The study took into consideration a (iv) proposed performance metric for anomaly detection algorithms on unsupervised time series, also considering computational requirements and adaptability to different types of data. (v) The results show that the IF algorithm is a better solution for the presented problem, since it is easily adaptable to different sources of data and different combinations of features, and has lower computational complexity. This allows it to be deployed without major computational requirements, expert knowledge, or a long data history, whilst also being less prone to problems with missing data. As a global outcome, an architecture for a platform is proposed that encompasses the mentioned modules. The platform represents a running system, performing continuous detection and quickly alerting hotel managers about possible anomalous consumption levels, allowing them to take more timely measures to investigate and solve the associated causes.
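    As a hedged illustration of the IF side of the comparison (not the study's pipeline), the sketch below turns a univariate consumption series into sliding windows and flags anomalous windows with scikit-learn's IsolationForest; the synthetic data, window length, and contamination rate are assumed values.

```python
# Sketch: unsupervised anomaly detection on a consumption time series using
# IsolationForest over sliding windows. Synthetic data stands in for meter readings.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)                       # 60 days of hourly readings
consumption = 10 + 3 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.5, hours.size)
consumption[700:712] += 8                        # injected anomalous episode

window = 24                                      # one day per window (assumed)
windows = np.lib.stride_tricks.sliding_window_view(consumption, window)

model = IsolationForest(n_estimators=200, contamination=0.02, random_state=0)
labels = model.fit_predict(windows)              # -1 marks an anomalous window

anomalous_starts = np.where(labels == -1)[0]
print("anomalous windows start at hours:", anomalous_starts[:10])
```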

    Clustering Financial Time Series: How Long is Enough?

    Researchers have used from 30 days to several years of daily returns as source data for clustering financial time series based on their correlations. This paper sets up a statistical framework to study the validity of such practices. We first show that clustering correlated random variables from their observed values is statistically consistent. Then, we also give a first empirical answer to the much debated question: How long should the time series be? If too short, the clusters found can be spurious; if too long, dynamics can be smoothed out. Comment: Accepted at IJCAI 201
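    A minimal sketch of the kind of procedure the paper analyzes: compute the correlation matrix of return series, convert it into a distance, and cluster. The synthetic factor structure, the distance d = sqrt(2(1 - rho)), and average linkage are assumptions for illustration.

```python
# Sketch: cluster return series by their correlations.
# The distance d = sqrt(2 * (1 - rho)) is a common choice; the synthetic
# three-"sector" factor structure and the linkage method are assumptions.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
T, n_per_group = 250, 5                          # ~one year of daily returns
factors = rng.normal(size=(T, 3))                # three latent "sectors"
returns = np.hstack([
    0.8 * factors[:, [g]] + 0.2 * rng.normal(size=(T, n_per_group))
    for g in range(3)
])

rho = np.corrcoef(returns, rowvar=False)
dist = np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))
np.fill_diagonal(dist, 0.0)

Z = linkage(squareform(dist, checks=False), method="average")
labels = fcluster(Z, t=3, criterion="maxclust")
print("recovered cluster labels:", labels)
```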

    One Deep Music Representation to Rule Them All? A comparative analysis of different representation learning strategies

    Inspired by the success of deploying deep learning in the fields of Computer Vision and Natural Language Processing, this learning paradigm has also found its way into the field of Music Information Retrieval. In order to benefit from deep learning in an effective but also efficient manner, deep transfer learning has become a common approach. In this approach, it is possible to reuse the output of a pre-trained neural network as the basis for a new learning task. The underlying hypothesis is that if the initial and new learning tasks show commonalities and are applied to the same type of input data (e.g. music audio), the generated deep representation of the data is also informative for the new task. Since, however, most of the networks used to generate deep representations are trained using a single initial learning source, their representation is unlikely to be informative for all possible future tasks. In this paper, we present the results of our investigation into which factors are most important for generating deep representations for data and learning tasks in the music domain. We conducted this investigation via an extensive empirical study that involves multiple learning sources, as well as multiple deep learning architectures with varying levels of information sharing between sources, in order to learn music representations. We then validate these representations considering multiple target datasets for evaluation. The results of our experiments yield several insights on how to approach the design of methods for learning widely deployable deep data representations in the music domain. Comment: This work has been accepted to "Neural Computing and Applications: Special Issue on Deep Learning for Music and Audio"
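    The transfer-learning setup described above can be sketched generically: freeze a pre-trained network, treat its activations as the representation, and train only a small head for the new task. Everything below (the randomly initialized stand-in backbone, the dimensions, the synthetic data) is an assumption, not the authors' architecture.

```python
# Sketch of deep transfer learning: reuse a frozen "pre-trained" network as a
# feature extractor and train only a small head on the new task.
# The randomly initialized backbone is a stand-in for a real pre-trained model.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))
for p in backbone.parameters():
    p.requires_grad = False          # freeze: the representation is reused, not retrained

head = nn.Linear(64, 10)             # new task head, e.g. 10 target classes
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 128)             # stand-in for audio features of the new task
y = torch.randint(0, 10, (32,))

for _ in range(100):
    with torch.no_grad():
        z = backbone(x)              # deep representation from the initial task
    loss = loss_fn(head(z), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final training loss:", float(loss))
```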

    On the Evaluation of User Privacy in Deep Neural Networks using Timing Side Channel

    Recent Deep Learning (DL) advancements in solving complex real-world tasks have led to its widespread adoption in practical applications. However, this opportunity comes with significant underlying risks, as many of these models rely on privacy-sensitive data for training in a variety of applications, making them an overly exposed threat surface for privacy violations. Furthermore, the widespread use of cloud-based Machine-Learning-as-a-Service (MLaaS) for its robust infrastructure support has broadened the threat surface to include a variety of remote side-channel attacks. In this paper, we first identify and report a novel data-dependent timing side-channel leakage (termed Class Leakage) in DL implementations, originating from a non-constant-time branching operation in PyTorch, a widely used DL framework. We further demonstrate a practical inference-time attack where an adversary with user privilege and hard-label black-box access to an MLaaS can exploit Class Leakage to compromise the privacy of MLaaS users. DL models are vulnerable to Membership Inference Attack (MIA), where an adversary's objective is to deduce whether any particular data has been used while training the model. In this paper, as a separate case study, we demonstrate that a DL model secured with differential privacy (a popular countermeasure against MIA) is still vulnerable to MIA against an adversary exploiting Class Leakage. We develop an easy-to-implement countermeasure by making the branching operation constant-time, which alleviates the Class Leakage and also aids in mitigating MIA. To validate our approach, we use two standard benchmark image classification datasets, CIFAR-10 and CIFAR-100, to train five state-of-the-art pre-trained DL models on two different computing environments with Intel Xeon and Intel i7 processors. Comment: 15 pages, 20 figure
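    The direction of the countermeasure, replacing data-dependent control flow with branch-free tensor operations that always do the same work, can be illustrated with a hypothetical example; the branch shown below is not the specific PyTorch operation identified in the paper.

```python
# Hypothetical illustration of the general countermeasure direction: replace a
# data-dependent Python-level branch (whose timing can depend on the input)
# with a branch-free tensor computation that always does the same work.
# This is NOT the specific non-constant-time operation reported in the paper.
import torch

torch.manual_seed(0)
logits = torch.randn(8, 10)
threshold = 0.0

def data_dependent(logits):
    out = torch.empty_like(logits)
    for i, row in enumerate(logits):
        if row.max() > threshold:          # whether this branch is taken depends on the data
            out[i] = torch.softmax(row, dim=0)
        else:
            out[i] = torch.zeros_like(row)
    return out

def branch_free(logits):
    probs = torch.softmax(logits, dim=1)   # always computed for every row
    mask = (logits.max(dim=1, keepdim=True).values > threshold).float()
    return probs * mask                    # selection without control flow

assert torch.allclose(data_dependent(logits), branch_free(logits))
```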