
    When Things Matter: A Data-Centric View of the Internet of Things

    With the recent advances in radio-frequency identification (RFID), low-cost wireless sensor devices, and Web technologies, the Internet of Things (IoT) approach has gained momentum in connecting everyday objects to the Internet and facilitating machine-to-human and machine-to-machine communication with the physical world. While IoT offers the capability to connect and integrate both digital and physical entities, enabling a whole new class of applications and services, several significant challenges need to be addressed before these applications and services can be fully realized. A fundamental challenge centers around managing IoT data, typically produced in dynamic and volatile environments, which is not only extremely large in scale and volume but also noisy and continuous. This article surveys the main techniques and state-of-the-art research efforts in IoT from a data-centric perspective, including data stream processing, data storage models, complex event processing, and searching in IoT. Open research issues for IoT data management are also discussed.
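The noisy, continuous streams the survey highlights are typically tamed with windowed processing before storage or event detection. The sketch below shows one minimal form of this idea, a sliding-window median filter over sensor readings; the class and parameter names are illustrative, not from the article.

```python
from collections import deque
from statistics import median

class SlidingWindowFilter:
    """Median filter over a fixed-size window of sensor readings.

    A minimal sketch of the denoising step IoT stream pipelines
    commonly apply to raw, spiky sensor data.
    """

    def __init__(self, window_size=5):
        self.window = deque(maxlen=window_size)

    def push(self, reading):
        """Add a raw reading and return the current denoised value."""
        self.window.append(reading)
        return median(self.window)

# A noisy temperature stream: the spike at 90.0 is suppressed.
stream = [21.0, 21.2, 90.0, 21.1, 21.3, 21.2]
f = SlidingWindowFilter(window_size=3)
smoothed = [f.push(x) for x in stream]
```

A real deployment would run this per sensor over timestamped windows, but the smoothing logic is the same.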

    Disaster Data Management in Cloud Environments

    Facilitating decision-making in a vital discipline such as disaster management requires information gathering, sharing, and integration on a global scale and across governments, industries, communities, and academia. A large quantity of immensely heterogeneous disaster-related data is available; however, current data management solutions offer few or no integration capabilities and limited potential for collaboration. Moreover, recent advances in cloud computing, Big Data, and NoSQL have opened the door for new solutions in disaster data management. In this thesis, a Knowledge as a Service (KaaS) framework is proposed for disaster cloud data management (Disaster-CDM) with the objectives of 1) facilitating information gathering and sharing, 2) storing large amounts of disaster-related data from diverse sources, and 3) facilitating search and supporting interoperability and integration. Data are stored in a cloud environment taking advantage of NoSQL data stores. The proposed framework is generic, but this thesis focuses on the disaster management domain and data formats commonly present in that domain, i.e., file-style formats such as PDF, text, MS Office files, and images. The framework component responsible for addressing simulation models is SimOnto. SimOnto, as proposed in this work, transforms domain simulation models into an ontology-based representation with the goal of facilitating integration with other data sources, supporting simulation model querying, and enabling rule and constraint validation. Two case studies presented in this thesis illustrate the use of Disaster-CDM on the data collected during the Disaster Response Network Enabled Platform (DR-NEP) project. The first case study demonstrates Disaster-CDM integration capabilities by full-text search and querying services. In contrast to direct full-text search, Disaster-CDM full-text search also includes simulation model files as well as text contained in image files. 
Moreover, Disaster-CDM provides querying capabilities, and this case study demonstrates how file-style data can be queried by taking advantage of a NoSQL document data store. The second case study focuses on simulation models and uses SimOnto to transform proprietary simulation models into ontology-based models, which are then stored in a graph database. This case study demonstrates Disaster-CDM's benefits by showing how simulation models can be queried and how model compliance with rules and constraints can be validated.
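The full-text search service described above indexes text extracted from heterogeneous files (PDFs, OCR'd images, simulation models) so they can all be queried uniformly. A hypothetical sketch of that indexing step follows; the record schema and function names are invented for illustration and are not the thesis's actual API.

```python
def build_index(records):
    """Map each lowercase term to the set of record ids containing it."""
    index = {}
    for rec in records:
        for term in rec["text"].lower().split():
            index.setdefault(term, set()).add(rec["id"])
    return index

def search(index, query):
    """Return ids of records containing every query term."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        result &= index.get(term, set())
    return result

# Text extracted from three different source formats, stored uniformly.
records = [
    {"id": "r1", "source": "pdf",   "text": "flood response plan for the harbour"},
    {"id": "r2", "source": "image", "text": "evacuation route map flood zone"},
    {"id": "r3", "source": "model", "text": "power grid simulation model"},
]
idx = build_index(records)
```

A production system would delegate this to the NoSQL store's own full-text engine; the sketch only shows why extracting text from every format first makes cross-format search possible.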

    Enabling the Internet of Mobile Crowdsourcing Health Things: A Mobile Fog Computing, Blockchain and IoT Based Continuous Glucose Monitoring System for Diabetes Mellitus Research and Care

    Diabetes patients suffer from abnormal blood glucose levels, which can cause diverse health disorders that affect their kidneys, heart and vision. Due to these conditions, diabetes patients have traditionally checked blood glucose levels through Self-Monitoring of Blood Glucose (SMBG) techniques, like pricking their fingers multiple times per day. Such techniques involve a number of drawbacks that can be solved by using a device called a Continuous Glucose Monitor (CGM), which can measure blood glucose levels continuously throughout the day without having to prick the patient for every measurement. This article details the design and implementation of a system that enhances commercial CGMs by adding Internet of Things (IoT) capabilities that allow for monitoring patients remotely and, thus, warning them about potentially dangerous situations. The proposed system makes use of smartphones to collect blood glucose values from CGMs and then sends them either to a remote cloud or to distributed fog computing nodes. Moreover, in order to exchange reliable, trustworthy and cybersecure data with medical scientists, doctors and caretakers, the system includes the deployment of a decentralized storage system that receives, processes and stores the collected data. Furthermore, in order to motivate users to add new data to the system, an incentive system based on a digital cryptocurrency named GlucoCoin was devised. Such a system makes use of a blockchain that is able to execute smart contracts in order to automate CGM sensor purchases or to reward the users that contribute to the system by providing their own data.
Thanks to all the previously mentioned technologies, the proposed system enables patient data crowdsourcing and the development of novel mobile health (mHealth) applications for diagnosing, monitoring, studying and taking public health actions that can help to advance the control of the disease and raise global awareness of the increasing prevalence of diabetes.
Funding: Xunta de Galicia, ED431G/01; Xunta de Galicia, ED431C 2016-045; Agencia Estatal de Investigación de España, TEC2016-75067-C4-1-
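The incentive idea, rewarding users per contributed reading via smart contracts, can be mirrored in a few lines. The sketch below is only an in-memory stand-in for that logic: the reward rate, the plausibility bounds, and all names are assumptions for illustration, not GlucoCoin's actual contract code.

```python
REWARD_PER_READING = 1  # assumed reward rate, not specified in the article

class RewardLedger:
    """In-memory stand-in for a smart contract that pays contributors."""

    def __init__(self):
        self.balances = {}

    def submit_reading(self, user, mg_dl):
        """Accept a blood-glucose value (mg/dL) and credit the contributor."""
        if not 20 <= mg_dl <= 600:  # reject physiologically implausible values
            raise ValueError("reading out of physiological range")
        self.balances[user] = self.balances.get(user, 0) + REWARD_PER_READING
        return self.balances[user]

ledger = RewardLedger()
ledger.submit_reading("alice", 110)
ledger.submit_reading("alice", 95)
```

On an actual blockchain this validation and crediting would execute inside the contract, so no central party could withhold rewards.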

    A Holistic Methodology for Profiling Ransomware Through Endpoint Detection

    Computer security incident response is a critical capability in light of the growing threat of malware infecting endpoint systems today. Ransomware is one type of malware that is causing increasing harm to organizations. Ransomware infects an endpoint system by encrypting files until a ransom is paid. Ransomware can have a negative impact on an organization’s daily functions if critical business files are encrypted and are not backed up properly. Many tools exist that claim to detect and respond to malware. Organizations and small businesses are often short-staffed and lack the technical expertise to properly configure security tools. One such endpoint detection tool is Sysmon, which logs critical events to the Windows event log. Sysmon is free to download on the Internet. The details contained in Sysmon events can be extremely helpful during an incident response. The author of Sysmon states that the Sysmon configuration needs to be iteratively assessed to determine which Sysmon events are most effective. Unfortunately, an organization may not have the time, knowledge, or infrastructure to properly configure and analyze Sysmon events. If configured incorrectly, the organization may have a false sense of security or lack the logs necessary to respond quickly and accurately during a malware incident. This research seeks to answer the question “What methodology can an organization follow to determine which Sysmon events should be analyzed to identify ransomware in a Windows environment?” The answer to this question helps organizations make informed decisions regarding how to configure Sysmon and analyze Sysmon logs. This study uses design science research methods to create three artifacts: a method, an instantiation, and a tool. The artifacts are used to analyze Sysmon logs against a ransomware dataset consisting of publicly available samples from three ransomware families that were major threats in 2017 according to Symantec.
The artifacts are built using software that is free to download on the Internet. Step-by-step instructions, source code, and configuration files are provided so that other researchers can replicate and expand on the results. The end goal is to provide concrete results that organizations can apply directly to their environment to begin leveraging the benefits of Sysmon and to understand the analytics needed to identify suspicious activity during an incident response.
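One concrete analytic of the kind this methodology yields is counting Sysmon FileCreate events (Event ID 11) per process and alerting when a process writes many files with a known ransom-note extension. The sketch below is a hedged illustration only: the threshold, the extension list, and the event-dict shape are assumptions, not the study's actual tool.

```python
# Extensions and threshold are illustrative assumptions, not from the study.
SUSPICIOUS_EXTENSIONS = {".locky", ".wcry", ".cerber"}
THRESHOLD = 3  # suspicious file creations per process before alerting

def flag_ransomware(events):
    """Return processes that created THRESHOLD+ files with ransom extensions.

    events: dicts with 'event_id', 'process', 'target_file' fields,
    mimicking fields parsed from Sysmon's Windows event log entries.
    """
    counts = {}
    for event in events:
        if event["event_id"] != 11:  # Sysmon Event ID 11 = FileCreate
            continue
        ext = "." + event["target_file"].rsplit(".", 1)[-1].lower()
        if ext in SUSPICIOUS_EXTENSIONS:
            counts[event["process"]] = counts.get(event["process"], 0) + 1
    return {proc for proc, n in counts.items() if n >= THRESHOLD}

events = [
    {"event_id": 11, "process": "evil.exe",    "target_file": "a.docx.locky"},
    {"event_id": 11, "process": "evil.exe",    "target_file": "b.xlsx.locky"},
    {"event_id": 11, "process": "evil.exe",    "target_file": "c.pdf.locky"},
    {"event_id": 11, "process": "winword.exe", "target_file": "report.docx"},
    {"event_id": 1,  "process": "evil.exe",    "target_file": "d.txt.locky"},
]
flagged = flag_ransomware(events)
```

Real Sysmon logs arrive as XML in the Windows event log; the point here is only the shape of the per-process counting analytic.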

    A fog computing based cyber-physical system for the automation of pipe-related tasks in the Industry 4.0 shipyard

    Pipes are one of the key elements in the construction of ships, which usually contain between 15,000 and 40,000 of them. This huge number, as well as the variety of processes that may be performed on a pipe, requires rigorous identification, quality assessment and traceability. Traditionally, such tasks have been carried out by using manual procedures and following documentation on paper, which slows down the production processes and reduces the output of a pipe workshop. This article presents a system that allows for identifying and tracking the pipes of a ship through their construction cycle. For such a purpose, a fog computing architecture is proposed to extend cloud computing to the edge of the shipyard network. The system has been developed jointly by Navantia, one of the largest shipbuilders in the world, and the University of A Coruña (Spain), through a project that makes use of some of the latest Industry 4.0 technologies. Specifically, a Cyber-Physical System (CPS) is described, which uses active Radio Frequency Identification (RFID) tags to track pipes and detect relevant events. Furthermore, the CPS has been integrated and tested in conjunction with Siemens’ Manufacturing Execution System (MES) (Simatic IT). The experiments performed on the CPS show that, in the selected real-world scenarios, fog gateways respond faster than the tested cloud server, with such gateways also being able to successfully process more samples under high-load situations. In addition, under regular loads, fog gateways react between five and 481 times faster than the alternative cloud approach.
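The event-detection step a fog gateway performs on RFID reads can be pictured as tracking each tag's last known zone and emitting a traceability event when it changes. The sketch below assumes invented zone names and an invented event format; it is not Navantia's actual CPS interface.

```python
class PipeTracker:
    """Fog-gateway-style tracker: RFID reads in, movement events out."""

    def __init__(self):
        self.locations = {}  # tag id -> last known workshop zone

    def on_rfid_read(self, tag_id, zone):
        """Return a movement event when a pipe changes zone, else None."""
        previous = self.locations.get(tag_id)
        self.locations[tag_id] = zone
        if previous is not None and previous != zone:
            return {"pipe": tag_id, "from": previous, "to": zone}
        return None

tracker = PipeTracker()
tracker.on_rfid_read("PIPE-0042", "cutting")          # first sighting
event = tracker.on_rfid_read("PIPE-0042", "welding")  # zone change detected
```

Handling this at the gateway rather than in the cloud is what yields the latency advantage the experiments report: only the derived events, not every raw read, need to travel upstream to the MES.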

    Cloud-assisted body area networks: state-of-the-art and future challenges

    Body area networks (BANs) are emerging as an enabling technology for many human-centered application domains such as health care, sport, fitness, wellness, ergonomics, emergency, safety, security, and sociality. A BAN, which basically consists of wireless wearable sensor nodes usually coordinated by a static or mobile device, is mainly used to monitor a single assisted person. Data generated by a BAN can be processed in real time by the BAN coordinator and/or transmitted to a server side for online/offline processing and long-term storage. A network of BANs worn by a community of people produces a large amount of contextual data that requires a scalable and efficient approach for elaboration and storage. Cloud computing can provide a flexible storage and processing infrastructure to perform both online and offline analysis of body sensor data streams. In this paper, we motivate the introduction of Cloud-assisted BANs along with the main challenges that need to be addressed for their development and management. The current state of the art is overviewed and framed according to the main requirements for effective Cloud-assisted BAN architectures. Finally, relevant open research issues in terms of efficiency, scalability, security, interoperability, prototyping, and dynamic deployment and management are discussed.
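The split the paper describes, lightweight real-time processing on the BAN coordinator with batched upload for cloud-side long-term storage, can be sketched minimally as below. The batch size, payload shape, and class name are assumptions for illustration only.

```python
class BanCoordinator:
    """Sketch of a BAN coordinator: summarize locally, batch for the cloud."""

    def __init__(self, batch_size=4):
        self.batch_size = batch_size
        self.samples = []
        self.uploads = []  # stands in for transmission to the cloud side

    def on_sample(self, bpm):
        """Collect a heart-rate sample; ship a summary when a batch fills."""
        self.samples.append(bpm)
        if len(self.samples) == self.batch_size:
            summary = {"mean_bpm": sum(self.samples) / len(self.samples),
                       "count": len(self.samples)}
            self.uploads.append(summary)  # would be an HTTPS/MQTT send
            self.samples = []

coord = BanCoordinator(batch_size=4)
for bpm in [62, 64, 66, 68, 70]:
    coord.on_sample(bpm)
```

Summarizing at the coordinator keeps radio and cloud costs proportional to batches rather than raw samples, which is exactly the scalability concern the paper raises for community-scale BAN deployments.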

    VirIoT: A cloud of things that offers IoT infrastructures as a service

    Many cloud providers offer IoT services that simplify the collection and processing of IoT information. However, the IoT infrastructure composed of sensors and actuators that produces this information remains outside the cloud; therefore, application developers must install, connect and manage this infrastructure themselves. This requirement can be a market barrier, especially for small/medium software companies that cannot afford the associated infrastructural costs and would prefer to focus only on IoT application development. Motivated by the desire to eliminate this barrier, this paper proposes a Cloud of Things platform, called VirIoT, which fully brings the Infrastructure-as-a-Service model typical of cloud computing to the world of the Internet of Things. VirIoT provides users with virtual IoT infrastructures (Virtual Silos) composed of virtual things, with which users can interact through dedicated and standardized broker servers whose technology can be chosen from among those offered by the platform, such as oneM2M, NGSI and NGSI-LD. VirIoT allows developers to focus their efforts exclusively on IoT applications without worrying about infrastructure management, and allows cloud providers to expand their IoT services portfolio. VirIoT uses external things and cloud/edge computing resources to deliver its IoT virtualization services. Its open-source architecture is microservice-based and runs on top of a distributed Kubernetes platform with nodes in central and edge data centers. The architecture is scalable, efficient and able to support the continuous integration of heterogeneous things and IoT standards, taking care of interoperability issues.
Using a VirIoT deployment spanning data centers in Europe and Japan, we conducted a performance evaluation with a two-fold objective: showing the efficiency and scalability of the architecture, and leveraging VirIoT’s ability to integrate different IoT standards in order to make a fair comparison of several open-source IoT broker implementations, namely Mobius for oneM2M, Orion for NGSIv2, and Orion-LD and Scorpio for NGSI-LD.
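The Virtual Silo abstraction can be pictured as a per-tenant container that fixes one broker flavour and binds virtual things to the shared physical things that feed them. The sketch below invents all class and field names for illustration; it is not VirIoT's actual API.

```python
# Broker flavours named in the paper; the data model around them is invented.
SUPPORTED_BROKERS = {"oneM2M", "NGSI", "NGSI-LD"}

class VirtualSilo:
    """Sketch of a tenant's virtual IoT infrastructure (Virtual Silo)."""

    def __init__(self, tenant, broker):
        if broker not in SUPPORTED_BROKERS:
            raise ValueError(f"unsupported broker: {broker}")
        self.tenant = tenant
        self.broker = broker
        self.vthings = {}

    def add_vthing(self, name, physical_source):
        """Bind a virtual thing to the physical thing that feeds it."""
        self.vthings[name] = physical_source

silo = VirtualSilo("acme", "NGSI-LD")
silo.add_vthing("room-temp", "sensor://building-a/t1")
```

The key design point this mirrors is that tenants pick a standard interface per silo while the platform multiplexes many silos over the same physical things, which is what makes the cross-broker comparison in the evaluation possible.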