
    Ensuring Reliable Measurements In Remote Aquatic Sensor Networks

    Full text link
    A flood monitoring system comprises an extensive network of water sensors, a suite of forecast simulation models, and a decision-support information system. A cascade of uncertainties in each part of the system affects the reliability of flood alerts and responses. The timeliness and quality of the gathered data, used subsequently in forecasting models, are affected by the pervasive nature of the monitoring network, where aquatic sensors are vulnerable to external disturbances that degrade the accuracy of data acquisition. Existing solutions for aquatic monitoring are composed of heterogeneous sensors that are usually unable to ensure reliable measurements in complex scenarios, owing to technology-specific effects such as transient loss of availability, measurement errors, and limited coverage. In this paper, we present a general study of the criticality of sensor networks in the aquatic monitoring process and motivate the need for reliable data collection in harsh coastal and marine environments. We give an overview of the main challenges, such as sensor power lifetime, sensor hardware compatibility, reliability, and long-range communication, which need to be addressed to improve the robustness of sensor measurements. Solutions that automatically adjust sensor measurements to each disturbance would substantially increase measurement quality, thereby supplying the other parts of a flood monitoring system with dependable monitoring data. With the purpose of providing software solutions to hardware failures, we introduce context-awareness techniques, namely data processing, filtering, and sensor fusion methods, that were applied to a real working monitoring network with several proprietary probes (measuring conductivity, temperature, depth, and various water quality parameters) at distant sites in Portugal. The goal is to assess the best technique to overcome each detected faulty measurement without compromising the time frame of the monitoring process.
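
    The abstract does not give the filtering algorithms themselves; as a minimal illustrative sketch of the kind of software-level correction it argues for, the following Hampel-style median filter flags and replaces transient spikes in a single sensor stream (the window length and threshold are assumptions, not the paper's parameters):

```python
import numpy as np

def hampel_filter(readings, window=7, n_sigmas=3.0):
    """Replace spike outliers in a 1-D sensor stream with the local median.

    A reading is flagged when it deviates from the median of its
    surrounding window by more than n_sigmas robust standard deviations
    (median absolute deviation scaled by 1.4826).
    """
    x = np.asarray(readings, dtype=float)
    cleaned = x.copy()
    half = window // 2
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        med = np.median(x[lo:hi])
        mad = 1.4826 * np.median(np.abs(x[lo:hi] - med))
        if mad > 0 and abs(x[i] - med) > n_sigmas * mad:
            cleaned[i] = med  # substitute the spike with the local median
    return cleaned

# Example: a conductivity trace with one transient spike
trace = [34.1, 34.2, 34.0, 99.9, 34.3, 34.1, 34.2]
print(hampel_filter(trace))  # the 99.9 outlier is replaced by 34.2
```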

    Dagstuhl News January - December 2007

    Get PDF
    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News give a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented by a small abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic

    A dependability framework for WSN-based aquatic monitoring systems

    Get PDF
    Wireless Sensor Networks (WSN) are being progressively used in several application areas, particularly to collect data and monitor physical processes. Sensor nodes used in environmental monitoring applications, such as aquatic sensor networks, are often subject to harsh environmental conditions while monitoring complex phenomena. Non-functional requirements, like reliability, security, or availability, are increasingly important and must be accounted for in application development. For that purpose, there is a large body of knowledge on dependability techniques for distributed systems, which provides a good basis for understanding how to satisfy these non-functional requirements of WSN-based monitoring applications. Given the data-centric nature of monitoring applications, it is of particular importance to ensure that data are reliable or, more generically, that they have the necessary quality. This thesis studies the problem of ensuring the desired quality of data for dependable monitoring using WSNs. With a dependability-oriented perspective, we review the possible impairments to dependability and the prominent existing solutions to remove or mitigate them. Despite the variety of components that may form a WSN-based monitoring system, particular attention is given to understanding which faults can affect sensors, how they affect the quality of the information, and how this quality can be improved and quantified. Open research issues for the specific case of aquatic monitoring applications are also discussed. One of the challenges in achieving dependable system behavior is to overcome the external disturbances affecting sensor measurements and to detect failure patterns in sensor data. This is a particular problem in environmental monitoring, owing to the difficulty of distinguishing a faulty behavior from the representation of a natural phenomenon. Existing solutions for failure detection assume that physical processes can be accurately modeled, or that there are large deviations that can be detected with coarse techniques, or, more commonly, that the network is a high-density sensor network with value-redundant sensors. This thesis defines a new methodology for dependable data quality in environmental monitoring systems, aiming to detect faulty measurements and increase the quality of sensor data. The methodology is framed as a generically applicable design that can be employed with any environmental sensor network dataset. It is evaluated on several datasets from different WSNs, using machine learning to model each sensor's behavior and exploiting the correlated data provided by neighboring sensors. Data fusion strategies are explored to effectively detect potential failures of each sensor and, simultaneously, to distinguish truly abnormal measurements from deviations due to natural phenomena. This is accomplished with the successful application of the methodology to detect and correct outlier, offset, and drift failures in datasets from real monitoring networks. In the future, the methodology can be applied to optimize the data quality control processes of new and already operating monitoring networks and to assist in network maintenance operations. (Laboratório Nacional de Engenharia Civil)
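
    The thesis's exact models are not reproduced in the abstract; the sketch below only illustrates the general neighbor-correlation idea it describes: learn a model of one sensor from correlated neighboring sensors, flag measurements with large residuals as potentially faulty, and substitute the model's prediction (the synthetic data, threshold, and variable names are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative sketch only: model a target sensor from correlated neighbour
# sensors, flag measurements whose residual exceeds a threshold, and
# substitute the model's prediction for the flagged values.
rng = np.random.default_rng(0)
neighbours = rng.normal(20.0, 2.0, size=(500, 3))             # 3 neighbour sensors
target = neighbours @ [0.5, 0.3, 0.2] + rng.normal(0, 0.1, 500)
target[100] += 8.0                                            # inject an outlier
target[300:] += 1.5                                           # inject an offset fault

model = LinearRegression().fit(neighbours[:50], target[:50])  # train on trusted data
predicted = model.predict(neighbours)
residual = np.abs(target - predicted)
threshold = 5 * residual[:50].std()                           # hypothetical threshold

faulty = residual > threshold
corrected = np.where(faulty, predicted, target)               # replace suspect readings
print(f"flagged {faulty.sum()} of {len(target)} measurements")
```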

    Routing in Mobile Wireless Sensor Networks: A Leader-Based Approach

    Get PDF
    This paper presents a leader-based approach to routing in Mobile Wireless Sensor Networks (MWSN). Using local information from neighbour nodes, a leader election mechanism maintains a spanning tree in order to provide the adaptations necessary for efficient routing when connectivity changes as a result of the mobility of sensor or sink nodes. We present two protocols following the leader election approach, which have been implemented using Castalia and OMNeT++. The protocols have been evaluated against other reference MWSN routing protocols to analyse the impact of network size and node velocity on performance, which has demonstrated the validity of our approach. Research supported by the Spanish Research Council (MINECO), Grant TIN2016-79897-P, and the Department of Education, Universities and Research of the Basque Government, Grant IT980-16.
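
    The two protocols are not specified in the abstract; as a hedged sketch of the general idea, the toy routine below has every node repeatedly adopt the highest leader identifier heard from its one-hop neighbours until the network agrees. This is a classic max-id election, not necessarily the paper's mechanism; in a real protocol, parent pointers recorded toward the leader would form the spanning tree used for routing.

```python
# Minimal sketch of neighbourhood-based leader election (assumed, not the
# paper's exact protocol): each node repeatedly adopts the highest leader
# identifier heard from its one-hop neighbours until no node changes.

def elect_leader(adjacency):
    """adjacency: dict mapping node id -> iterable of neighbour ids."""
    leader = {node: node for node in adjacency}   # each node nominates itself
    changed = True
    while changed:                                # iterate until convergence
        changed = False
        for node, neighbours in adjacency.items():
            best = max([leader[node]] + [leader[n] for n in neighbours])
            if best != leader[node]:
                leader[node] = best
                changed = True
    return leader

# 5-node chain: node 4 ends up as everyone's leader
topology = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(elect_leader(topology))
```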

    KnowSafe: Combined Knowledge and Data Driven Hazard Mitigation in Artificial Pancreas Systems

    Full text link
    Significant progress has been made in anomaly detection and run-time monitoring to improve the safety and security of cyber-physical systems (CPS). However, less attention has been paid to hazard mitigation. This paper proposes a combined knowledge- and data-driven approach, KnowSafe, for the design of safety engines that can predict and mitigate safety hazards resulting from safety-critical malicious attacks or accidental faults targeting a CPS controller. We integrate domain-specific knowledge of safety constraints and context-specific mitigation actions with machine learning (ML) techniques to estimate system trajectories in the far and near future, infer potential hazards, and generate optimal corrective actions to keep the system safe. Experimental evaluation on two realistic closed-loop testbeds for artificial pancreas systems (APS) and a real-world clinical trial dataset for diabetes treatment demonstrates that KnowSafe outperforms the state of the art by achieving higher accuracy in predicting system state trajectories and potential hazards, a low false positive rate, and no false negatives. It also maintains the safe operation of the simulated APS despite faults or attacks without introducing any new hazards, with a hazard mitigation success rate of 92.8%, which is at least 76% higher than solely rule-based (50.9%) and data-driven (52.7%) methods. Comment: 16 pages, 10 figures, 9 tables, submitted to the IEEE for possible publication.
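
    KnowSafe's actual predictor and knowledge base are not published in the abstract; the sketch below merely illustrates the combination it describes, pairing a data-driven trajectory forecast with knowledge-based safety constraints to select a corrective action (the glucose bounds, the linear-trend forecaster, and all names are hypothetical):

```python
import numpy as np

# Hypothetical safety constraints for blood glucose (mg/dL); stand-ins for
# the domain-specific knowledge the paper integrates with ML prediction.
GLUCOSE_LOW, GLUCOSE_HIGH = 70.0, 180.0

def forecast(history, horizon=6):
    """Data-driven stand-in: extrapolate the recent linear trend."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)
    future_t = np.arange(len(history), len(history) + horizon)
    return slope * future_t + intercept

def mitigate(history):
    """Knowledge-based check on the predicted trajectory."""
    predicted = forecast(history)
    if predicted.min() < GLUCOSE_LOW:
        return "suspend insulin delivery"       # context-specific corrective action
    if predicted.max() > GLUCOSE_HIGH:
        return "issue correction bolus alert"
    return "no action"

# Falling glucose trend -> predicted hypoglycemia -> mitigation triggered
print(mitigate([120, 112, 104, 95, 88, 81]))
```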

    Airborne Wireless Sensor Networks for Airplane Monitoring System

    Get PDF
    In a traditional airplane monitoring system (AMS), data sensed from the strain, vibration, and ultrasound of structures, or from the temperature and humidity of the cabin environment, are transmitted to a central data repository via wires. However, wired AMS have drawbacks, such as expensive installation and maintenance and complicated wired connections. In recent years, growing interest has been drawn to performing AMS via airborne wireless sensor network (AWSN) systems, which offer flexibility, low cost, and easy deployment. In this review, we present an overview of AMS and AWSN and set out the requirements that AMS places on AWSN. Furthermore, existing wireless hardware prototypes and network communication schemes for AWSN are examined against these requirements. This paper improves the understanding of how an AWSN designed for AMS acquires sensor data accurately and carries out network communication efficiently, providing insights into prognostics and health management (PHM) for AMS in the future.

    Automatic Pain Assessment by Learning from Multiple Biopotentials

    Get PDF
    Accurate pain assessment plays an important role in proper pain management, especially among hospitalized people experiencing acute pain. Pain is subjective in nature: it is not only a sensory feeling but can also combine affective factors. Self-report pain scales are therefore the main assessment tools as long as patients are able to self-report, but assessing the pain of patients who cannot self-report remains a challenge. In clinical practice, physiological parameters like heart rate, and pain behaviors including facial expressions, are observed as empirical references from which to infer pain objectively. The main aim of this study is to automate this process by leveraging machine learning methods and biosignal processing.
To achieve this goal, biopotentials reflecting autonomic nervous system activity, including the electrocardiogram and galvanic skin response, together with facial expressions measured with facial electromyograms, were recorded from healthy volunteers undergoing an experimental pain stimulus. IoT-enabled biopotential acquisition systems were developed to build the database, aiming to provide compact and wearable solutions. Using the database, a biosignal processing flow was developed for continuous pain estimation. Signal features were extracted with customized time window lengths and updated every second. The extracted features were visualized and fed into multiple classifiers trained to estimate the presence of pain and pain intensity separately. Among the tested classifiers, the best sensitivity achieved for estimating the presence of pain was 90% (with 84% specificity), and the best pain intensity estimation accuracy was 62.5%. The results show the validity of the proposed processing flow, especially for estimating the presence of pain at the window level. This study adds one more piece of evidence on the feasibility of developing an automatic pain assessment tool from biopotentials, providing the confidence to move forward to real pain cases. In addition to the method development, the similarities and differences between automatic pain assessment studies were compared and summarized. It was found that, in addition to the diversity of signals, the estimation goals also differed as a result of different study designs, which made cross-dataset comparison challenging. We also discuss which parts of the classical processing flow limit or boost prediction performance, and whether optimization can bring a breakthrough from the system's perspective.
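
    The study's concrete features, window lengths, and classifiers are only summarized above; as a minimal sketch of window-level feature extraction with a one-second update, the code below emits simple statistics per sliding window and computes the sensitivity/specificity metrics the abstract reports (the sampling rate and feature choices are assumptions):

```python
import numpy as np

FS = 250  # assumed sampling rate (samples/second); not stated in the abstract

def window_features(signal, window_s=5, step_s=1):
    """Slide a window over a biopotential and emit simple features
    (mean, standard deviation, RMS), updated every step_s seconds."""
    win, step = window_s * FS, step_s * FS
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        seg = np.asarray(signal[start:start + win], dtype=float)
        feats.append([seg.mean(), seg.std(), np.sqrt((seg ** 2).mean())])
    return np.array(feats)

def sensitivity_specificity(y_true, y_pred):
    """Window-level metrics of the kind reported above (90% / 84%)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = ((y_true == 1) & (y_pred == 1)).sum()
    tn = ((y_true == 0) & (y_pred == 0)).sum()
    return tp / (y_true == 1).sum(), tn / (y_true == 0).sum()

# 30 s of synthetic signal -> one feature row per second after warm-up
ecg = np.random.default_rng(1).normal(size=30 * FS)
print(window_features(ecg).shape)                           # (26, 3)
print(sensitivity_specificity([1, 1, 0, 0], [1, 0, 0, 0]))  # (0.5, 1.0)
```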

    Survey and Systematization of Secure Device Pairing

    Full text link
    Secure Device Pairing (SDP) schemes have been developed to facilitate secure communications among smart devices, both personal mobile devices and Internet of Things (IoT) devices. Comparison and assessment of SDP schemes is troublesome, because each scheme makes different assumptions about out-of-band channels and adversary models and is driven by its particular use cases; a conceptual model that facilitates meaningful comparison among SDP schemes has been missing. We provide such a model. In this article, we survey and analyze a wide range of SDP schemes described in the literature, including a number that have been adopted as standards. On the foundation of this survey we build a system model and consistent terminology for SDP schemes, which are then used to classify existing SDP schemes into a taxonomy that, for the first time, enables their meaningful comparison and analysis. Analyzing the existing SDP schemes with this model reveals common systemic security weaknesses that should become priority areas for future SDP research, such as improving the integration of privacy requirements into the design of SDP schemes. Our results allow SDP scheme designers to create schemes that are more easily comparable with one another, and help prevent the weaknesses common to the current generation of SDP schemes from persisting. Comment: 34 pages, 5 figures, 3 tables, accepted at IEEE Communications Surveys & Tutorials 2017 (Volume: PP, Issue: 99).

    Interviewing data - the art of interpretation in analytics

    Get PDF
    Algorithms and statistical models produce consistent results with confidence, yet they do so with data that are subject to change. Furthermore, the underlying digital traces created within specifically designed platforms are rarely transparent. The emerging field that incorporates analytics, predictive behavior, big data, and data science is still contesting its methodological boundaries. How can we use existing research tools to validate the reliability of the data? This paper explores alternatives to statistical validity by situating analytics as a form of naturalistic inquiry. A naturalistic research model, which has no assumption of an objective truth, places greater emphasis on logical reasoning and researcher reflexivity. "Interviewing data", based on journalistic practices, is introduced as a tool to convey the reliability of the data. The misleading 2013 flu prediction illustrates this approach and is discussed within the context of ethics and accountability in data science.