    A gap analysis of Internet-of-Things platforms

    We are experiencing an abundance of Internet-of-Things (IoT) middleware solutions that provide connectivity for sensors and actuators to the Internet. To gain widespread adoption, these middleware solutions, referred to as platforms, have to meet the expectations of different players in the IoT ecosystem, including device providers, application developers, and end-users, among others. In this article, we evaluate a representative sample of these platforms, both proprietary and open-source, on the basis of their ability to meet the expectations of different IoT users. The evaluation thus focuses on how ready and usable these platforms are for IoT ecosystem players, rather than on the peculiarities of the underlying technological layers. The evaluation is carried out as a gap analysis of the current IoT landscape with respect to (i) the support for heterogeneous sensing and actuating technologies, (ii) data ownership and its implications for security and privacy, (iii) data processing and data sharing capabilities, (iv) the support offered to application developers, (v) the completeness of an IoT ecosystem, and (vi) the availability of dedicated IoT marketplaces. The gap analysis aims to highlight the deficiencies of today's solutions and thereby improve their integration into tomorrow's ecosystems. To strengthen the findings of our analysis, we conducted a survey among the partners of the Finnish IoT program, a community of over 350 experts, to identify the most critical issues for the development of future IoT platforms. Based on the results of our analysis and our survey, we conclude this article with a list of recommendations for extending these IoT platforms to fill in the gaps.
    Comment: 15 pages, 4 figures, 3 tables. Accepted for publication in Computer Communications, special issue on the Internet of Things: Research challenges and solutions.
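
    As a rough illustration of how such a gap analysis could be operationalized, the sketch below scores platforms against the six criteria and reports the ones that fall short. The criteria labels paraphrase the article; the platform names, scores, and threshold are invented placeholders, not the article's evaluation data.

```python
# Hypothetical sketch of scoring platforms against the six gap-analysis
# criteria; all platform names and numbers are placeholders.

CRITERIA = [
    "heterogeneous_sensing_actuating",
    "data_ownership_security_privacy",
    "data_processing_sharing",
    "developer_support",
    "ecosystem_completeness",
    "iot_marketplace",
]

def gap_report(platforms: dict[str, dict[str, int]],
               threshold: int = 3) -> dict[str, list[str]]:
    """Return, per platform, the criteria scoring below the threshold (the 'gaps').
    Criteria missing from a platform's scores default to 0."""
    return {
        name: [c for c in CRITERIA if scores.get(c, 0) < threshold]
        for name, scores in platforms.items()
    }

# Hypothetical input: scores on a 0-5 scale.
platforms = {
    "PlatformA": {"heterogeneous_sensing_actuating": 4, "iot_marketplace": 1},
    "PlatformB": {"developer_support": 5, "data_ownership_security_privacy": 2},
}
print(gap_report(platforms))
```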

    Internet of robotic things : converging sensing/actuating, hypoconnectivity, artificial intelligence and IoT Platforms

    The Internet of Things (IoT) concept is evolving rapidly and influencing new developments in various application domains, such as the Internet of Mobile Things (IoMT), Autonomous Internet of Things (A-IoT), Autonomous System of Things (ASoT), Internet of Autonomous Things (IoAT), Internet of Things Clouds (IoT-C) and the Internet of Robotic Things (IoRT), all of which are advancing by building on IoT technology. The IoT influence introduces new development and deployment challenges in different areas, such as seamless platform integration, context-based cognitive network integration, new mobile sensor/actuator network paradigms, things identification (addressing and naming in IoT), dynamic things discoverability and many others. The IoRT brings new convergence challenges that need to be addressed: on one side, the programmability and communication of multiple heterogeneous mobile/autonomous/robotic things for cooperation; on the other, their coordination, configuration, exchange of information, security, safety and protection. Developments in IoT heterogeneous parallel processing/communication and dynamic systems based on parallelism and concurrency require new ideas for integrating the intelligent "devices", collaborative robots (COBOTS), into IoT applications. Dynamic maintainability, self-healing, self-repair of resources, changing resource state, (re-)configuration and context-based IoT systems for service implementation and integration with IoT network service composition are of paramount importance when new "cognitive devices" become active participants in IoT applications. This chapter provides an overview of the IoRT concept, technologies, architectures and applications, and a comprehensive coverage of future challenges, developments and applications.
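
    To make the notion of dynamic thing discoverability more concrete, here is a minimal, hypothetical sketch of a capability-based registry for robotic things; none of the class or method names come from the chapter.

```python
# Hypothetical sketch: things register capabilities, and applications
# discover them by capability rather than by a fixed address.

from dataclasses import dataclass, field

@dataclass
class Thing:
    thing_id: str                          # addressing/naming in IoT
    capabilities: set[str] = field(default_factory=set)
    online: bool = True                    # changing resource state

class ThingRegistry:
    def __init__(self):
        self._things: dict[str, Thing] = {}

    def register(self, thing: Thing) -> None:
        self._things[thing.thing_id] = thing

    def discover(self, capability: str) -> list[Thing]:
        """Return currently reachable things that offer the given capability."""
        return [t for t in self._things.values()
                if t.online and capability in t.capabilities]

registry = ThingRegistry()
registry.register(Thing("cobot-1", {"grasp", "navigate"}))
registry.register(Thing("sensor-7", {"temperature"}))
print([t.thing_id for t in registry.discover("navigate")])
```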

    On-Demand Big Data Integration: A Hybrid ETL Approach for Reproducible Scientific Research

    Scientific research requires access, analysis, and sharing of data that is distributed across various heterogeneous data sources at the scale of the Internet. An eager ETL process constructs an integrated data repository as its first step, integrating and loading data in its entirety from the data sources. Bootstrapping this process is not efficient for scientific research that requires access to data from very large and typically numerous distributed data sources. A lazy ETL process, in contrast, loads only the metadata, but still does so eagerly. Lazy ETL is faster to bootstrap; however, queries on the integrated data repository of eager ETL perform faster, due to the availability of the entire data beforehand. In this paper, we propose a novel ETL approach for scientific data integration, as a hybrid of the eager and lazy ETL approaches, applied to both data and metadata. This way, hybrid ETL supports incremental integration and loading of metadata and data from the data sources. We incorporate a human-in-the-loop approach to enhance the hybrid ETL, with selective data integration driven by user queries and sharing of integrated data between users. We implement our hybrid ETL approach in a prototype platform, Obidos, and evaluate it in the context of data sharing for medical research. Obidos outperforms both the eager and lazy ETL approaches for scientific research data integration and sharing, through its selective loading of data and metadata, while storing the integrated data in a scalable integrated data repository.
    Comment: Pre-print submitted to the DMAH Special Issue of the Springer DAPD Journal.
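
    The sketch below is one possible reading of the hybrid strategy, not the Obidos implementation: metadata is integrated eagerly and incrementally, while the data itself is fetched from its source only when a user query first touches it, then cached in the integrated repository. All names and signatures are illustrative.

```python
# Hypothetical sketch of hybrid ETL: eager/incremental metadata,
# lazy query-driven data loading with caching.

from typing import Callable

class HybridETL:
    def __init__(self, sources: dict[str, Callable[[str], object]]):
        self.sources = sources                     # source name -> loader function
        self.metadata: dict[str, dict] = {}
        self.repository: dict[str, object] = {}    # integrated data repository

    def integrate_metadata(self, source: str, entries: dict[str, dict]) -> None:
        """Eager/incremental step: record which items a source offers."""
        for item_id, meta in entries.items():
            self.metadata[item_id] = {"source": source, **meta}

    def query(self, item_id: str) -> object:
        """Lazy step: load data on first access, driven by the user query."""
        if item_id not in self.repository:
            source = self.metadata[item_id]["source"]
            self.repository[item_id] = self.sources[source](item_id)
        return self.repository[item_id]

# Hypothetical usage with a stub data source.
etl = HybridETL(sources={"pacs": lambda item_id: f"<dataset {item_id}>"})
etl.integrate_metadata("pacs", {"scan-42": {"modality": "MRI"}})
print(etl.query("scan-42"))   # fetched now; later queries hit the cache
```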

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems to better understand their goals and their methodology. This would help evaluate their applicability for solving similar problems. This taxonomy also provides a "gap analysis" of this area, through which researchers can potentially identify new issues for investigation. Finally, we hope that the proposed taxonomy and mapping also help to provide an easy way for new practitioners to understand this complex area of research.
    Comment: 46 pages, 16 figures, Technical Report.
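
    As a toy illustration of mapping systems to such a taxonomy, the sketch below classifies a fictional Data Grid system along two simplified axes. The axis names paraphrase aspects mentioned in the abstract; the enum values and the example system are invented, not the paper's actual categories.

```python
# Hypothetical sketch: classifying Data Grid systems along taxonomy axes.

from enum import Enum

class Replication(Enum):
    CENTRALIZED = "centralized replica catalogue"
    DECENTRALIZED = "decentralized replica catalogue"

class Scheduling(Enum):
    DATA_AWARE = "data-aware (jobs moved near data)"
    COMPUTE_ONLY = "compute-only (data moved to jobs)"

def classify(system: str, replication: Replication,
             scheduling: Scheduling) -> dict:
    """Record one system's position in the (simplified) taxonomy."""
    return {"system": system,
            "replication": replication.value,
            "scheduling": scheduling.value}

print(classify("ExampleGrid", Replication.DECENTRALIZED, Scheduling.DATA_AWARE))
```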

    Smart PIN: utility-based replication and delivery of multimedia content to mobile users in wireless networks

    Next generation wireless networks rely on heterogeneous connectivity technologies to support various rich media services, such as personal information storage, file sharing and multimedia streaming. Due to user mobility and the dynamic characteristics of wireless networks, data availability across collaborating devices is a critical issue. In this context, Smart PIN was proposed as a personal information network that focuses on delivery performance and cost efficiency. Smart PIN uses a novel data replication scheme based on individual and overall system utility to best balance the requirements of static data and multimedia content delivery against variable device availability due to user mobility. Simulations show improved results in comparison with other general-purpose data replication schemes in terms of data availability.
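
    The following sketch captures the general flavor of utility-based replication, not Smart PIN's actual utility functions: each item's replication benefit weighs expected access misses (owner device unavailable) against storage cost, and the highest-utility items are replicated within a budget. All weights and parameters are invented.

```python
# Hypothetical utility model for replication decisions; the real paper's
# individual/overall utility functions are not reproduced here.

def replica_utility(access_rate: float, availability: float,
                    storage_cost: float, alpha: float = 0.7) -> float:
    """Benefit of holding a replica: expected accesses that would otherwise
    miss (owner device offline), traded off against storage cost."""
    expected_miss = access_rate * (1.0 - availability)
    return alpha * expected_miss - (1.0 - alpha) * storage_cost

def choose_replicas(items: dict[str, tuple[float, float, float]],
                    budget: int) -> list[str]:
    """Greedily replicate the highest-utility items within a replica budget."""
    scored = {name: replica_utility(*params) for name, params in items.items()}
    ranked = sorted(scored, key=scored.get, reverse=True)
    return [name for name in ranked[:budget] if scored[name] > 0]

# Hypothetical items: (access rate, owner-device availability, storage cost).
items = {"video-1": (10.0, 0.4, 2.0), "notes": (1.0, 0.9, 0.1)}
print(choose_replicas(items, budget=1))   # replicates the hot, often-offline item
```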

    Collective Decision Dynamics in the Presence of External Drivers

    We develop a sequence of models describing information transmission and decision dynamics for a network of individual agents subject to multiple sources of influence. Our general framework is set in the context of an impending natural disaster, where individuals, represented by nodes on the network, must decide whether or not to evacuate. Sources of influence include a one-to-many externally driven global broadcast as well as pairwise interactions, across links in the network, in which agents transmit either continuous opinions or binary actions. We consider both uniform and variable threshold rules on the individual opinion as baseline models for decision-making. Our results indicate that 1) social networks lead to clustering and cohesive action among individuals, 2) binary information introduces high temporal variability and stagnation, and 3) information transmission over the network can either facilitate or hinder action adoption, depending on the influence of the global broadcast relative to the social network. Our framework highlights the essential role of local interactions between agents in predicting collective behavior of the population as a whole.
    Comment: 14 pages, 7 figures.
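
    A minimal simulation in the spirit of the described model is sketched below: agents mix a global broadcast with their neighbors' opinions and evacuate once a variable threshold is crossed, after which their action is visible to neighbors as a binary signal. The topology, weights and threshold range are illustrative assumptions, not the paper's calibrated parameters.

```python
# Hypothetical threshold-based evacuation dynamics with a global broadcast
# and pairwise network influence.

import random

def simulate(adjacency: dict[int, list[int]], broadcast: float,
             w_broadcast: float = 0.5, steps: int = 50, seed: int = 0) -> set[int]:
    rng = random.Random(seed)
    n = len(adjacency)
    threshold = {i: rng.uniform(0.3, 0.8) for i in range(n)}  # variable thresholds
    opinion = {i: 0.0 for i in range(n)}
    evacuated: set[int] = set()
    for _ in range(steps):
        for i in range(n):
            if i in evacuated:
                continue
            neighbors = adjacency[i]
            social = (sum(opinion[j] for j in neighbors) / len(neighbors)
                      if neighbors else 0.0)
            # Mix the external broadcast with local (network) influence.
            opinion[i] = w_broadcast * broadcast + (1 - w_broadcast) * social
            if opinion[i] >= threshold[i]:
                evacuated.add(i)
                opinion[i] = 1.0    # binary action, visible to neighbors
    return evacuated

# Small ring network with a moderately strong broadcast.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(sorted(simulate(ring, broadcast=0.6)))
```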

    Sharing Computer Network Logs for Security and Privacy: A Motivation for New Methodologies of Anonymization

    Logs are one of the most fundamental resources for any security professional. It is widely recognized by government and industry that it is both beneficial and desirable to share logs for the purpose of security research. However, the sharing is not happening, or not to the degree or magnitude that is desired. Organizations are reluctant to share logs because of the risk of exposing sensitive information to potential attackers. We believe this reluctance remains high because current anonymization techniques are weak and one-size-fits-all, or, better put, one size tries to fit all. We must develop standards and make anonymization available at varying levels, striking a balance between privacy and utility. Organizations have different needs and trust other organizations to different degrees. They must be able to map multiple anonymization levels, with defined risks, to the trust levels they share with (would-be) receivers. It is not until there are industry standards for multiple levels of anonymization that we will be able to move forward and achieve the goal of widespread sharing of logs for security researchers.
    Comment: 17 pages, 1 figure.
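
    To make the idea of trust-mapped anonymization levels concrete, the sketch below applies progressively coarser IP transforms depending on the receiver's trust level. The three-level policy and transforms are invented for illustration, not a proposed standard.

```python
# Hypothetical multi-level log anonymization keyed to receiver trust.

import hashlib
import ipaddress

def anonymize_ip(ip: str, level: int) -> str:
    """Level 0: raw IP; level 1: truncate to the /24 network (keeps subnet
    utility); level 2: hash pseudonym (linkable, but no address revealed)."""
    if level == 0:
        return ip
    if level == 1:
        net = ipaddress.ip_network(f"{ip}/24", strict=False)
        return str(net.network_address)
    return hashlib.sha256(ip.encode()).hexdigest()[:12]

TRUST_TO_LEVEL = {"internal": 0, "partner": 1, "public": 2}  # invented policy

def share_log(lines: list[tuple[str, str]], receiver_trust: str) -> list[str]:
    level = TRUST_TO_LEVEL[receiver_trust]
    return [f"{anonymize_ip(ip, level)} {event}" for ip, event in lines]

log = [("192.0.2.17", "ssh login failed"), ("192.0.2.99", "port scan")]
print(share_log(log, "partner"))   # both addresses collapse to 192.0.2.0
```

    Note that a real deployment would use a keyed hash (e.g., HMAC) for the pseudonym level, since a plain hash over the small IPv4 address space is trivially reversible by enumeration.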

    Storage Solutions for Big Data Systems: A Qualitative Study and Comparison

    Big data systems development is full of challenges in view of the variety of application areas and domains that this technology promises to serve. Typically, fundamental design decisions in big data systems design include choosing appropriate storage and computing infrastructures. In this age of heterogeneous systems that integrate different technologies into an optimized solution to a specific real-world problem, big data systems are no exception. As far as the storage aspect of any big data system is concerned, the primary facet is the storage infrastructure, and NoSQL appears to be the right technology to fulfill its requirements. However, every big data application has different data characteristics, and thus its data fits a different data model. This paper presents a feature and use-case analysis and comparison of the four main data models, namely document-oriented, key-value, graph and wide-column. Moreover, a feature analysis of 80 NoSQL solutions is provided, elaborating on the criteria and points that a developer must consider while making a choice. Typically, big data storage needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution. This brings the second facet of big data storage, big data file formats, into the picture. The second half of the paper compares the advantages, shortcomings and possible use cases of available big data file formats for Hadoop, which is the foundation for most big data computing technologies. Decentralized storage and blockchain are seen as the next generation of big data storage, and their challenges and future prospects are also discussed.
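
    As a quick illustration of how the four data models shape the same logical record, the sketch below mimics their layouts with plain Python dictionaries. The record and key names are invented, and real systems naturally expose very different APIs; only the logical structure is being contrasted.

```python
# Hypothetical: one logical record laid out per the four NoSQL data models.

record = {"user": "u42", "name": "Ada", "follows": ["u7"], "city": "Paris"}

# Document-oriented: one self-contained, queryable document per entity.
document_store = {"users/u42": record}

# Key-value: opaque value, lookups by key only.
kv_store = {"u42": str(record)}

# Wide-column: row key -> column families -> columns.
wide_column = {"u42": {"profile": {"name": "Ada", "city": "Paris"},
                       "graph": {"follows:u7": ""}}}

# Graph: nodes plus explicit relationship edges.
graph = {"nodes": {"u42": {"name": "Ada"}, "u7": {}},
         "edges": [("u42", "FOLLOWS", "u7")]}

print(document_store["users/u42"]["city"])    # document: query inside the doc
print("u42" in kv_store)                      # key-value: existence by key
print(wide_column["u42"]["profile"]["name"])  # wide column: column-family access
```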