10 research outputs found

    A Sleep Scheme Based on MQ Broker Using Subscribe/Publish in IoT Network

    Get PDF
    Constrained Application Protocol (CoAP) is a transfer protocol used for Internet of Things (IoT) devices, such as sensors and actuators, equipped with a low-power supply, a limited computing processor, and a constrained network environment. For the constrained devices of a CoAP-based IoT network, we present a sleep scheme based on a Message Queue (MQ) broker that supports the subscribe/publish communication architecture in the IoT middleware, enabling the devices to achieve better energy consumption. The IoT middleware is a server that provides services for storing and retrieving information about IoT nodes using Resource Directory (RD) functionality, and supports the sleep scheme using MQ broker functionality. The RD provides HTTP services to HTTP-client-based applications for discovering and looking up information about IoT nodes registered with the RD. The MQ broker functionality is used to perform store-and-forward messaging. The MQ provides the service for IoT nodes to publish data to the middleware; the data can then be subscribed to by the client application. An IoT node in sleep mode cannot be accessed directly by the client. Through the IoT middleware, the client can subscribe to the IoT node to get its result after wake-up. Once the IoT node wakes from sleep, the IoT middleware publishes the subscribed result to the client.
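The wake-up flow described above (a client subscribes through the middleware while the node sleeps; once the node wakes and publishes, the broker forwards the result) can be sketched in a few lines of Python. This is a minimal illustration of the pattern, not the paper's implementation; the class and topic names are invented.

```python
from collections import defaultdict

class SleepyNodeBroker:
    """Minimal sketch of the MQ broker's store-and-forward role:
    subscriptions are held while the node sleeps, and the node's
    data is forwarded to every subscriber once it wakes up."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> client callbacks

    def subscribe(self, topic, on_result):
        # The client registers interest; the sleeping node is never contacted.
        self.subscribers[topic].append(on_result)

    def node_publish(self, topic, data):
        # Called when the node wakes and pushes a reading:
        # deliver it to every client that subscribed in the meantime.
        for callback in self.subscribers[topic]:
            callback(data)

# A client subscribes while "node1" sleeps; the reading arrives after wake-up.
broker = SleepyNodeBroker()
results = []
broker.subscribe("node1/temperature", results.append)
broker.node_publish("node1/temperature", 21.5)
```

The key property the scheme relies on is decoupling in time: the subscribing client and the sleeping node never need to be reachable simultaneously.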

    Sociology Paradigms for Dynamic Integration of Devices into a Context-Aware System

    Get PDF
    Ubiquitous and mobile context-aware computing is an essential component of the smart-city infrastructure. Widely available wireless networks, the maturity of distributed computing, and the increasing number of mobile devices have significantly influenced the human experience of computing. In the present paper, we discuss the need for a model able to represent a formal structure of a context-aware system on a device. The core functionality of the model is expected to expose context-aware behaviour and support dynamic integration of mobile devices and context-aware behaviour. The major contribution of this work is to identify deficiencies of the existing model, which uses notions from sociology such as Role, Ownership, and Responsibility. The authors gratefully acknowledge funding from the European Commission through the GEO-C project (H2020-MSCA-ITN-2014, Grant Agreement Number 642332, http://www.geo-c.eu/).

    THE STATE OF THE ART OF THE INTERNET OF THINGS: COMPUTER SECURITY THREATS AND VULNERABILITIES EVIDENCED FROM HOME AUTOMATION.

    Get PDF
    The new products and services of the "Internet of Things" will make us more efficient, with greater capacity for action and understanding of the environment; new technical aids will prolong our active life, and more. However, we will coexist with a large number of devices that collect information about our activity, habits, preferences, and so on, which could threaten our privacy.

    Internet of Things in Geospatial Analytics

    Get PDF
    Digital Earth was born with the aim of replicating the real world within the digital world. Many efforts have been made to observe and sense the Earth, both from space and by using in situ sensors. Focusing on the latter, advances in Digital Earth have established vital bridges to exploit these sensors and their networks by taking location as a key element. The current era of connectivity envisions that everything is connected to everything. The concept of the Internet of Things emerged as a holistic proposal to enable an ecosystem of varied, heterogeneous networked objects and devices to speak and interact with each other. To make the IoT ecosystem a reality, it is necessary to understand the electronic components, communication protocols, real-time analysis techniques, and the location of the objects and devices. The IoT ecosystem and the Digital Earth jointly form interrelated infrastructures for addressing modern pressing issues and complex challenges. In this chapter, we explore the synergies and frictions in establishing an efficient and permanent collaboration between the two infrastructures, in order to adequately address multidisciplinary and increasingly complex real-world problems. Although there are still some pending issues, the identified synergies generate optimism for a true collaboration between the Internet of Things and the Digital Earth. Comment: Book chapter in the Manual of Digital Earth, ISDE, September 2019, Editors: Huadong Guo, Michael F. Goodchild and Alessandro Annoni (Publisher: Springer, Singapore).

    Middleware for plug and play integration of heterogeneous sensor resources into the sensor web

    Get PDF
    The study of global phenomena requires the combination of a considerable amount of data coming from different sources, acquired by different observation platforms and managed by institutions working in different scientific fields. Merging this data to provide extensive and complete data sets to monitor the long-term, global changes of our oceans is a major challenge. The data acquisition and data archival procedures usually vary significantly depending on the acquisition platform. This lack of standardization ultimately leads to information silos, preventing the data from being effectively shared across different scientific communities. In the past years, important steps have been taken to improve both standardization and interoperability, such as the Open Geospatial Consortium's Sensor Web Enablement (SWE) framework. Within this framework, standardized models and interfaces to archive, access, and visualize the data from heterogeneous sensor resources have been proposed. However, due to the wide variety of software and hardware architectures presented by marine sensors and marine observation platforms, there is still a lack of uniform procedures to integrate sensors into existing SWE-based data infrastructures. In this work, a framework aimed at enabling plug-and-play sensor integration into existing SWE-based data infrastructures is presented. First, the operations required to automatically identify, configure, and operate a sensor are analysed. Then, the metadata required for these operations is structured in a standard way. Afterwards, a modular, plug-and-play, SWE-based acquisition chain is proposed. Finally, different use cases for this framework are presented. Peer reviewed. Postprint (published version).
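The plug-and-play idea (a sensor self-describes with standardized metadata, and the middleware selects the matching acquisition handler automatically) can be illustrated with a minimal registry sketch. The field names and protocol labels below are assumptions for illustration, not the framework's actual schema.

```python
from dataclasses import dataclass

@dataclass
class SensorMetadata:
    sensor_id: str
    observed_property: str  # e.g. "sea_water_temperature"
    units: str              # e.g. "degC"
    protocol: str           # e.g. "SOS", "SensorThings"

class PlugAndPlayRegistry:
    """On plug-in, route a sensor to an acquisition handler chosen
    from its self-described protocol -- no manual configuration."""

    def __init__(self):
        self.handlers = {}  # protocol -> handler factory
        self.sensors = {}   # sensor_id -> metadata

    def register_handler(self, protocol, handler):
        self.handlers[protocol] = handler

    def plug_in(self, meta):
        if meta.protocol not in self.handlers:
            raise ValueError(f"no acquisition handler for {meta.protocol}")
        self.sensors[meta.sensor_id] = meta
        return self.handlers[meta.protocol](meta)

# A hypothetical SOS-speaking CTD sensor is integrated without manual setup.
registry = PlugAndPlayRegistry()
registry.register_handler("SOS", lambda m: f"SOS chain for {m.sensor_id}")
ctd = SensorMetadata("ctd-01", "sea_water_temperature", "degC", "SOS")
chain = registry.plug_in(ctd)
```

The design choice mirrors the paper's argument: once the metadata is standardized, the acquisition chain can be assembled from it mechanically.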

    Latency performance modelling in hyperledger fabric blockchain: Challenges and directions with an IoT perspective

    Get PDF
    Blockchain is a decentralized and distributed ledger technology that enables secure and transparent recording of transactions across multiple participants. Hyperledger Fabric (HLF), a permissioned blockchain, enhances performance through its modular design and pluggable consensus. However, integrating HLF with enterprise applications introduces latency challenges. Researchers have proposed numerous latency performance modelling techniques to address this issue. These studies contribute to a deeper understanding of HLF's latency by employing various modelling approaches and exploring techniques to improve network latency. However, existing HLF latency modelling studies lack an analysis of how these research efforts apply to specific use cases. This paper examines existing research on latency performance modelling in HLF and the challenges of applying these models to HLF-enabled Internet of Things (IoT) use cases. We propose a novel set of criteria for evaluating HLF latency performance modelling and highlight key HLF parameters that influence latency, aligning them with our evaluation criteria. We then classify existing papers based on their focus on latency modelling and the criteria they address. Additionally, we provide a comprehensive overview of latency performance modelling from various researchers, emphasizing the challenges in adapting these models to HLF-enabled IoT blockchain within the framework of our evaluation criteria. Finally, we suggest directions for future research and highlight open research questions for further exploration.
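The surveyed works use a range of modelling techniques; as one illustrative baseline only (not a model taken from the paper or from any surveyed work), the mean latency of a single pipeline stage is often approximated with an M/M/1 queue, where W = 1/(μ − λ):

```python
def mm1_mean_latency(arrival_rate, service_rate):
    """Mean sojourn time W = 1 / (mu - lambda) of a stable M/M/1 queue.
    Rates are in transactions per second; the result is in seconds."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# 50 tx/s arriving at a stage that serves 100 tx/s -> 0.02 s mean latency.
latency = mm1_mean_latency(50.0, 100.0)
```

Even this toy formula shows why latency grows sharply as the arrival rate approaches the service rate, which is one reason the paper's parameter-to-latency alignment matters for IoT workloads.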

    Managing heterogeneous sensor data to measure the air quality of Bogotá

    Get PDF
    Current technological and scientific progress has promoted the development of systems that improve people's quality of life, providing well-being to the community through the supply of relevant and pertinent information for decision-making. In the technological context of the Internet of Things (IoT), these systems involve the measurement and monitoring of various environmental variables (Karnouskos, 2012). The inherent heterogeneity of the captured data and of the measurement instruments used makes interoperability between the various IoT components difficult. Such problems have generated interest in the development of methods and tools that support the heterogeneity of sensor data, measurements, and measurement devices. Proprietary tools have solved some of these interoperability issues, but they restrict IoT project developers to sensors from specific brands, limiting widespread use in the community. In addition, the challenge of integrating diverse protocols in the same IoT project must be solved. To overcome these difficulties, an architecture based on sensor networks and software inspired by free culture is proposed, allowing communication through diverse protocols in an application scenario where air quality is monitored to inform users and where, through the generation of alerts, decision-making in their daily lives is supported, taking into account the data coming from the sensors.
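The heterogeneity problem described above can be made concrete with a small normalization layer that maps vendor-specific payloads onto one common schema. The vendor field names and the unit conversion below are invented for illustration and do not come from the thesis.

```python
def normalize_reading(raw):
    """Map vendor-specific payloads to one common schema: PM2.5 in ug/m3."""
    if "pm25_ugm3" in raw:        # hypothetical vendor A: already in ug/m3
        value = float(raw["pm25_ugm3"])
    elif "pm_fine_mgm3" in raw:   # hypothetical vendor B: reports mg/m3
        value = float(raw["pm_fine_mgm3"]) * 1000.0
    else:
        raise ValueError("unrecognized sensor payload")
    return {"pollutant": "PM2.5", "value": value, "units": "ug/m3"}

# Two different payload shapes converge on the same record.
a = normalize_reading({"pm25_ugm3": 12})
b = normalize_reading({"pm_fine_mgm3": 0.012})
```

Alert generation and user-facing reporting can then operate on the single normalized record type, regardless of which sensor brand produced the reading.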

    Implementation and Evaluation of Four Interoperable Open Standards for the Internet of Things

    No full text
    Recently, researchers have been focusing on a new use of the Internet called the Internet of Things (IoT), in which enabled electronic devices can be remotely accessed over the Internet. As the realization of the IoT concept is still in its early stages, manufacturers of Internet-connected devices and IoT web service providers are defining proprietary protocols based on their targeted applications. Consequently, the IoT has become heterogeneous in terms of hardware capabilities and communication protocols. Addressing these heterogeneities by following open standards is a necessary step toward communicating with various IoT devices. In this research, we assess the feasibility of applying existing open standards on resource-constrained IoT devices. The standard protocols developed in this research are OGC PUCK over Bluetooth, TinySOS, SOS over CoAP, and the OGC SensorThings API. We believe that by hosting open standard protocols on IoT devices, not only do the devices become self-describable, self-contained, and interoperable, but innovative applications can also be easily developed with standardized interfaces. In addition, we use memory consumption, request message size, response message size, and response latency to benchmark the efficiency of the implemented protocols. In all, this research presents and evaluates standards-based solutions to better understand the feasibility of applying existing standards to the IoT vision.
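Benchmarking of the kind described above reduces to summary statistics over repeated requests per protocol. A minimal sketch follows; the metric names and sample values are assumptions for illustration, not figures from the paper.

```python
import statistics

def summarize_latency(samples_ms):
    """Mean and 95th-percentile response latency (ms) over repeated requests."""
    ordered = sorted(samples_ms)
    # Index-based p95: clamp so small sample sets stay in range.
    p95_index = min(len(ordered) - 1, int(0.95 * len(ordered)))
    return {"mean_ms": statistics.mean(ordered), "p95_ms": ordered[p95_index]}

# Hypothetical response latencies for one protocol under test.
summary = summarize_latency([10, 20, 30, 40, 100])
```

Reporting a tail percentile alongside the mean matters for constrained devices, where a single slow response (e.g. a retransmission) can dominate the user-visible behaviour.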

    Education on the GIS Frontier: CyberGIS and Its Components

    Get PDF
    Geographic information systems (GIS) are a fundamental information technology. Coupled with advancing developments in spatial analysis through geographic information science (GISci), the capabilities and applications of GIS and GISci continue to expand rapidly. This expansion requires practitioners to have new skills and competencies, especially in computer science and programming. One developing framework for GIS's future is that of Cyber Geographic Information Systems (CyberGIS), which fuses the technical capabilities of advanced cyberinfrastructure, like cloud and server computing, with the spatial analysis capabilities of GIS. This structure of GIS requires further computer science and programming abilities, but how GIS practitioners use and value the various components within CyberGIS is unknown. This gap makes teaching and preparing students on the CyberGIS frontier difficult. The GIS skillset is in an ever-present state of re-imagination, but with the growing prominence of CyberGIS, which seeks to capitalize on advanced computing to benefit analysis in GIS, the need for an understanding of the educational implications continues to grow. This dissertation uses a mixed-methods approach to explore how CyberGIS functions academically. First, I explore how university geography departments in the U.S. integrate computer science and programming skills into their undergraduate geography and GIS degree programs by reviewing degree requirements in highly ranked departments. Few departments require computer science or programming courses for undergraduate degrees. Then, I explore the nature of knowledge and skills in CyberGIS using machine reading and Q-methodology to explore viewpoints of how key CyberGIS skills function. The three viewpoints I identify reveal highly conflicting mindsets of how GIS functions.
    Finally, I use syllabi from different GIS programming and computer science courses to identify common topics, course structures, and instructional materials across a broad sample of courses. Three major topic foci emerged: GIS scripting with Python, web-enabling GIS with JavaScript and HTML, and geodatabase manipulation with SQL. Some common instructional materials exist, but syllabi show little consistency in curriculum focus and instructional design within or across topics relating to GIS programming and computer science. There is little consistency or emphasis in current educational efforts concerning computer science and programming and how they build the competencies required in CyberGIS. While CyberGIS promises advanced computing capabilities using complex systems, the fractured and uneven nature of basic computer science and programming instruction in GIS indicates that to achieve a Cyber-enabled GIS future, a much larger chasm between GIS and computer science must be bridged.