854 research outputs found

    Estimating Fire Weather Indices via Semantic Reasoning over Wireless Sensor Network Data Streams

    Wildfires are frequent, devastating events in Australia that regularly cause significant loss of life and widespread property damage. Fire weather indices are a widely adopted method for measuring fire danger, and they play a significant role in issuing bushfire warnings and in anticipating demand for bushfire management resources. Existing systems that calculate fire weather indices are limited by low spatial and temporal resolution. Localized wireless sensor networks, on the other hand, gather continuous sensor data measuring variables such as air temperature, relative humidity, rainfall and wind speed at high resolutions. However, using wireless sensor networks to estimate fire weather indices is challenging due to data quality issues, the lack of standard data formats, and the lack of agreement on thresholds and methods for calculating fire weather indices. In this paper, we propose a standardized approach to calculating Fire Weather Indices (also known as fire danger ratings) and overcome a number of these challenges by applying Semantic Web technologies to the processing of data streams from a wireless sensor network deployed in the Springbrook region of South East Queensland. This paper describes the underlying ontologies, the semantic reasoning and the Semantic Fire Weather Index (SFWI) system that we have developed to enable domain experts to specify and adapt rules for calculating Fire Weather Indices. We also describe the Web-based mapping interface we have developed, which enables users to improve their understanding of how fire weather indices vary over time within a particular region. Finally, we discuss our evaluation results, which indicate that the proposed system outperforms state-of-the-art techniques in terms of accuracy, precision and query performance.
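
    As a hedged illustration of the kind of calculation such expert rules encode (not the SFWI system's own rule set), the sketch below computes McArthur's Forest Fire Danger Index, a widely used Australian fire danger rating, following the Noble et al. (1980) formulation; the function name and example inputs are ours.

        import math

        def mcarthur_ffdi(temp_c, rel_humidity, wind_kmh, drought_factor):
            """McArthur Forest Fire Danger Index (Mk 5), Noble et al. (1980).

            temp_c: air temperature (degrees Celsius)
            rel_humidity: relative humidity (percent)
            wind_kmh: wind speed (km/h)
            drought_factor: fuel availability on a 0-10 scale
            """
            return 2.0 * math.exp(
                -0.45
                + 0.987 * math.log(drought_factor)
                + 0.0338 * temp_c
                - 0.0345 * rel_humidity
                + 0.0234 * wind_kmh
            )

        # A hot, dry, windy day yields an "Extreme" rating (FFDI > 75)
        print(round(mcarthur_ffdi(38.0, 12.0, 45.0, 9.5), 1))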

    On the Nature and Types of Anomalies: A Review

    Anomalies are occurrences in a dataset that are in some way unusual and do not fit the general patterns. The concept of the anomaly is generally ill-defined and perceived as vague and domain-dependent. Moreover, despite some 250 years of publications on the topic, no comprehensive and concrete overviews of the different types of anomalies have hitherto been published. By means of an extensive literature review, this study therefore offers the first theoretically principled and domain-independent typology of data anomalies and presents a full overview of anomaly types and subtypes. To concretely define the concept of the anomaly and its different manifestations, the typology employs five dimensions: data type, cardinality of relationship, anomaly level, data structure and data distribution. These fundamental and data-centric dimensions naturally yield 3 broad groups, 9 basic types and 61 subtypes of anomalies. The typology facilitates the evaluation of the functional capabilities of anomaly detection algorithms, contributes to explainable data science, and provides insights into relevant topics such as local versus global anomalies.
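
    As a sketch of how the five dimensions can characterize a concrete anomaly type, consider the following data structure; the dimension names come from the abstract, while the example values are illustrative placeholders rather than the paper's exact vocabulary.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class AnomalyType:
            """Position of an anomaly type along the typology's five dimensions."""
            data_type: str          # e.g. "quantitative", "qualitative", "mixed"
            cardinality: str        # e.g. "univariate", "multivariate"
            anomaly_level: str      # e.g. "atomic", "aggregate"
            data_structure: str     # e.g. "independent", "graph", "time series"
            data_distribution: str  # e.g. "unimodal", "multimodal"

        # An extreme value in a single numeric attribute, as one illustration
        extreme_value = AnomalyType("quantitative", "univariate", "atomic",
                                    "independent", "unimodal")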

    Anomaly detection in unknown environments using wireless sensor networks

    This dissertation addresses the problem of distributed anomaly detection in Wireless Sensor Networks (WSNs). A challenge in designing such systems is that the sensor nodes are battery powered, often have different capabilities, and generally operate in dynamic environments. Programming such sensor nodes at a large scale can be a tedious job if the system is not carefully designed. Data modeling in distributed systems is important for determining the normal operation mode of the system. Being able to model the expected sensor signatures for typical operations greatly simplifies the human designer's job by enabling the system to autonomously characterize the expected sensor data streams. This, in turn, allows the system to perform autonomous anomaly detection and recognize when unexpected sensor signals are detected. This type of distributed sensor modeling can be used in a wide variety of sensor networks, such as detecting the presence of intruders, detecting sensor failures, and so forth. The advantage of this approach is that the human designer does not have to characterize the anomalous signatures in advance. The contributions of this approach include: (1) providing a way for a WSN to autonomously model sensor data with no prior knowledge of the environment; (2) enabling a distributed system to detect anomalies in both sensor signals and temporal events online; (3) providing a way to automatically extract semantic labels from temporal sequences; (4) providing a way for WSNs to save communication power by transmitting compressed temporal sequences; (5) enabling the system to detect time-related anomalies without prior knowledge of abnormal events; and (6) providing a novel missing data estimation method that utilizes temporal and spatial information to replace missing values, as sketched below. The algorithms have been designed, developed, evaluated, and validated experimentally on synthesized data and in real-world sensor network applications.
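
    The dissertation's exact estimator for contribution (6) is not reproduced here; a minimal sketch of the general idea, blending a node's own recent history (temporal) with the consensus of neighboring nodes (spatial), could look like the following, where the function, weighting scheme and parameter names are all illustrative.

        import numpy as np

        def estimate_missing(node_series, t, neighbor_values, alpha=0.5):
            """Illustrative spatio-temporal estimate for a missing reading.

            node_series: the node's own readings, with NaN where missing
            t: index of the missing sample
            neighbor_values: neighboring nodes' readings at time t
            alpha: weight on the temporal estimate vs. the spatial one
            """
            past = node_series[:t]
            past = past[~np.isnan(past)]
            temporal = past[-1] if past.size else np.nan  # last observed value
            spatial = np.nanmean(neighbor_values)         # neighbors' consensus
            if np.isnan(temporal):
                return spatial
            return alpha * temporal + (1 - alpha) * spatial

        series = np.array([20.1, 20.4, np.nan, 21.0])
        print(estimate_missing(series, 2, np.array([20.6, 20.8, 20.5])))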

    Innovative Technologies and Services for Smart Cities

    A smart city is a modern, technology-driven urban area that uses sensing devices and information and communication technology connected through the Internet of Things (IoT) for the optimal and efficient utilization of infrastructure and services, with the goal of improving the living conditions of citizens. Increasing populations, lower budgets, limited resources, and the compatibility of upgraded technologies are among the problems affecting the implementation of smart cities. Hence, technologies for the implementation of smart cities are continuously advancing. The aim of this Special Issue is to report on the design and development of integrated/smart sensors and a universal interfacing platform, along with the IoT framework, extending it to next-generation communication networks for monitoring parameters of interest with the goal of achieving smart cities. The proposed universal interfacing platform with the IoT framework will solve many challenging issues and significantly boost the growth of IoT-related applications, not just in the environmental monitoring domain but also in other key areas such as smart homes, assistive technology for elderly care, smart waste management, smart e-metering, smart water supply, intelligent traffic control, smart grids, and remote healthcare, signifying benefits for all countries.

    Location tracking in indoor and outdoor environments based on the Viterbi principle


    Semantic technologies for supporting KDD processes

    Achieving a comfortable thermal situation within buildings with an efficient use of energy remains an open challenge for most buildings. In this regard, IoT (Internet of Things) and KDD (Knowledge Discovery in Databases) processes may be combined to solve these problems, even though data analysts may feel overwhelmed by the heterogeneity and volume of the data to be considered. Data analysts could benefit from an application assistant that supports them throughout the KDD process. This research work aims at supporting data analysts through the different KDD phases towards the achievement of energy efficiency and thermal comfort in tertiary buildings. To do so, the EEPSA (Energy Efficiency Prediction Semantic Assistant) is proposed, which aids data analysts in discovering the most relevant variables for the matter at hand and informs them about relationships among relevant data. This assistant leverages Semantic Technologies such as ontologies, ontology-driven rules and ontology-driven data access. More specifically, the EEPSA ontology is the cornerstone of the assistant. This ontology is developed on top of three ODPs (Ontology Design Patterns) and is designed so that its customization to address similar problems in different types of buildings can be approached methodically.
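
    As a hedged illustration of the kind of ontology-driven data access the assistant builds on, the sketch below queries an RDF graph of building observations with Python's rdflib, using the W3C SOSA vocabulary as a stand-in; the EEPSA ontology defines its own terms on top of such patterns, and the data file and room IRI are hypothetical.

        from rdflib import Graph

        g = Graph()
        g.parse("building_observations.ttl")  # hypothetical observation data

        # Which properties have been observed for a given room?
        query = """
        PREFIX sosa: <http://www.w3.org/ns/sosa/>
        SELECT DISTINCT ?property WHERE {
            ?obs a sosa:Observation ;
                 sosa:observedProperty ?property ;
                 sosa:hasFeatureOfInterest <http://example.org/room/R101> .
        }
        """
        for row in g.query(query):
            print(row.property)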

    Anomaly Detection Using Hierarchical Temporal Memory in Smart Homes

    This work focuses on unsupervised, biologically-inspired machine learning techniques and algorithms that can detect anomalies. Specifically, the aim is to investigate the applicability of Hierarchical Temporal Memory (HTM) theory to detecting anomalies in the smart home domain. HTM theory proposes a model of neurons that, based on the current Neuroscience understanding, is more faithful to actual neurons than their usual counterparts in Artificial Neural Networks (ANNs). HTM theory has several algorithmic implementations, the most prominent being the Cortical Learning Algorithm (CLA). The CLA model typically consists of three main regions: the encoder, the spatial pooler and the temporal memory. Studying the performance of the CLA in the smart home domain revealed an issue with the standard encoders and high-dimensional datasets. In this domain, it is typical to have a high-dimensional feature space representing the collection of smart devices, yet the standard CLA encoders, which cover categorical and scalar data types, are more suitable for low-dimensional datasets. A novel Hash Indexed Sparse Distributed Representation (HI-SDR) encoder was proposed and developed to overcome the high-dimensionality issue. The HI-SDR encoder creates unique representations of the data from which the rest of the CLA regions can learn. The standard approach when creating HTM models for datasets with many features is to concatenate the output of each encoder. This work concludes that the standard encoders produce representations of the input at every timestep that are too similar and hard for the HTM model to distinguish, which makes it difficult to discern meaningful representations. The proposed novel encoder captures the required properties in terms of sparsity and representation.
    To investigate and validate the performance of a proposed machine learning technique, a representative dataset is needed. The smart home literature offers many real-world datasets that allow researchers to validate their models, but most existing datasets are created for classification and recognition of Activities of Daily Living (ADL). The lack of datasets for anomaly detection applications in the smart home domain motivated the development of a simulation tool. OpenSHS (Open Smart Home Simulator) was developed as an open-source, 3D, cross-platform smart home simulator that offers a novel hybrid approach to dataset generation. The tool allows researchers to design a smart home and populate it with the needed smart devices; participants can then use the designed smart home and simulate their habits and patterns. Anomaly detection in the smart home domain is highly contextual and dependent on the inhabitant's activities. One inhabitant's anomaly could be another's norm, so the definition of anomalies is a complex consideration. Using OpenSHS, seven participants were invited to generate forty-two datasets of their activities. Moreover, each participant defined an anomalous pattern that he/she would like the model to detect, so the resulting datasets are annotated with contextual anomalies specific to each participant. The proposed encoder was evaluated and compared against the standard CLA encoders and several state-of-the-art unsupervised anomaly detection algorithms using the Numenta Anomaly Benchmark (NAB).
    The HI-SDR encoder scored 81.9% accuracy on the forty-two datasets, a 17.8% increase in accuracy over the k-NN algorithm and a 47.5% increase over the standard CLA encoders. Using the Principal Component Analysis (PCA) algorithm as a preprocessing step proved beneficial for some of the tested algorithms: the k-NN algorithm scored 39.9% accuracy without PCA and 64.1% with PCA, and the Histogram-Based Outlier Score (HBOS) algorithm scored 28.5% accuracy without PCA and 61.9% with PCA. The HTM-based models empirically showed good potential and outperformed several algorithms even without the HI-SDR encoder. However, the HTM-based models still lack an optimisation algorithm for their parameters when performing anomaly detection.
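
    As a sketch of the general idea behind a hash-indexed sparse distributed representation (not the thesis's exact construction), the example below hashes each (field, value) pair of a smart-home record onto a fixed number of active bits in a large binary vector, so records sharing device states share the corresponding active bits while remaining sparse overall; the vector size, bits per field and hashing scheme are illustrative choices.

        import hashlib
        import numpy as np

        def hash_sdr_encode(record, n_bits=2048, bits_per_field=16):
            """Encode a dict of device readings as a sparse binary vector."""
            sdr = np.zeros(n_bits, dtype=np.uint8)
            for field, value in record.items():
                # Derive bits_per_field pseudo-random indices per (field, value)
                for i in range(bits_per_field):
                    digest = hashlib.md5(f"{field}={value}#{i}".encode()).digest()
                    sdr[int.from_bytes(digest[:4], "little") % n_bits] = 1
            return sdr

        # Records agreeing on two of three devices overlap on those fields' bits
        a = hash_sdr_encode({"kitchen_light": 1, "door": "closed", "temp_bin": 21})
        b = hash_sdr_encode({"kitchen_light": 0, "door": "closed", "temp_bin": 21})
        print(int(a.sum()), int((a & b).sum()))  # active bits and overlap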