
    Investigating the tension between cloud-related actors and individual privacy rights

    Historically, little more than lip service has been paid to the rights of individuals to act to preserve their own privacy. Personal information is frequently exploited for commercial gain, often without the person’s knowledge or permission. New legislation, such as the EU General Data Protection Regulation (GDPR), has acknowledged the need for legislative protection. The GDPR places the onus on service providers to preserve the confidentiality of their users’ and customers’ personal information, on pain of punitive fines for lapses. It accords special privileges to users, such as the right to be forgotten, and it applies extraterritorially, covering the rights of any EU resident worldwide. Assuring this legislated privacy protection presents a serious challenge, one that is exacerbated in the cloud environment. A considerable number of actors are stakeholders in cloud ecosystems, each with their own agenda, and these agendas are not necessarily well aligned. Cloud service providers, especially those offering social media services, are interested in growing their businesses and maximising revenue, so there is a strong incentive for them to capitalise on their users’ personal and usage information; privacy is often the first victim. Here, we examine the tensions between the various cloud actors and propose a framework that could be used to ensure that privacy is preserved and respected in cloud systems.

    Performance analysis of feedback-free collision resolution NDMA protocol

    To support communications from a large number of deployed devices while guaranteeing limited signaling load, low energy consumption, and high reliability, future cellular systems require efficient random access protocols. However, collision resolution at the receiver remains the main bottleneck of these protocols. The network-assisted diversity multiple access (NDMA) protocol solves this issue and attains the highest potential throughput, at the cost of keeping devices active to acquire feedback and repeating transmissions until successful decoding. In contrast, another potential approach is the feedback-free NDMA (FF-NDMA) protocol, in which devices repeat packets in a pre-defined number of consecutive time slots without waiting for feedback associated with the repetitions. Here, we investigate the FF-NDMA protocol from a cellular network perspective in order to elucidate under what circumstances this scheme is more energy efficient than NDMA. We characterize the FF-NDMA protocol analytically using a multipacket reception model and a finite Markov chain. Analytic expressions for throughput, delay, capture probability, energy, and energy efficiency are derived. Clues for system design are then established according to the different trade-offs studied. Simulation results show that FF-NDMA is more energy efficient than classical NDMA and HARQ-NDMA at low signal-to-noise ratio (SNR), and at medium SNR when the load increases.
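
    A minimal Monte Carlo sketch may help make the energy trade-off described above concrete: FF-NDMA spends energy on blind repetitions but never listens for feedback, while classical NDMA transmits once per slot of the collision-resolution epoch and pays a listening cost for feedback. All parameter values (E_TX, E_RX, K, P_DECODE, the collision-multiplicity distribution) are illustrative assumptions, not values from the paper, which derives these quantities analytically.

```python
"""Monte Carlo sketch contrasting per-packet device energy in FF-NDMA and
classical NDMA. All numbers are illustrative assumptions."""
import random

E_TX = 1.0      # energy per transmitted slot (assumed unit)
E_RX = 0.3      # energy per slot spent listening for feedback (assumption)
K = 3           # FF-NDMA: fixed number of blind packet repetitions
P_DECODE = 0.7  # prob. the receiver decodes a K-repetition burst (assumption)

def ff_ndma_energy() -> float:
    """Device blindly repeats its packet K times per attempt and never
    listens for feedback; attempts repeat (via some higher-layer retry,
    an assumption here) until the burst is decoded."""
    energy = 0.0
    while True:
        energy += K * E_TX
        if random.random() < P_DECODE:
            return energy

def ndma_energy() -> float:
    """A collision of multiplicity m is resolved by m-1 extra retransmission
    slots (the NDMA principle), so the device transmits m times; it also
    listens for feedback after each slot, costing E_RX per slot."""
    m = random.choices([1, 2, 3, 4], weights=[0.4, 0.3, 0.2, 0.1])[0]
    return m * E_TX + m * E_RX

trials = 100_000
ff = sum(ff_ndma_energy() for _ in range(trials)) / trials
nd = sum(ndma_energy() for _ in range(trials)) / trials
print(f"mean energy per packet  FF-NDMA: {ff:.2f}   NDMA: {nd:.2f}")
```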

    Performance Evaluation of Constrained Application Protocol over TCP

    The Constrained Application Protocol (CoAP) is designed specifically for constrained IoT devices and is being rapidly deployed to serve their communication needs. CoAP has been specified with its own congestion control algorithms because it runs on top of UDP, which does not include any congestion control measures. These algorithms aim to take into account the specific needs of IoT communication. The need to run CoAP over TCP as well has arisen recently, and such deployments are expected to grow alongside CoAP over UDP. To understand the benefits and shortcomings of both CoAP over TCP and CoAP over UDP, we run an extensive set of experiments in different network settings and compare the performance of CoAP over TCP to the existing congestion control algorithms for CoAP over UDP. Our results reveal that even though CoAP over TCP has known limitations, it scales well and performs better than expected in certain wireless settings that the CoAP over UDP algorithms are specifically designed for, often even outperforming CoAP over UDP.
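
    As background for the UDP side of the comparison, CoAP's default congestion control (RFC 7252) retransmits confirmable messages with a binary exponential backoff seeded by a randomised initial timeout. The sketch below computes one such retransmission schedule using the RFC 7252 default constants; it is illustrative background, not the paper's experimental setup.

```python
"""Sketch of CoAP's default congestion control for confirmable messages
over UDP (RFC 7252): binary exponential backoff with a randomised
initial timeout."""
import random

ACK_TIMEOUT = 2.0        # seconds (RFC 7252 default)
ACK_RANDOM_FACTOR = 1.5  # randomisation of the initial timeout
MAX_RETRANSMIT = 4       # retransmissions after the initial send

def retransmission_schedule() -> list[float]:
    """Return the ACK timeout after each transmission attempt
    (initial send plus MAX_RETRANSMIT retransmissions)."""
    timeout = random.uniform(ACK_TIMEOUT, ACK_TIMEOUT * ACK_RANDOM_FACTOR)
    schedule = []
    for _ in range(MAX_RETRANSMIT + 1):
        schedule.append(timeout)
        timeout *= 2  # binary exponential backoff
    return schedule

# Worst case sums to 2 * 1.5 * (2**5 - 1) = 93 s, matching RFC 7252's
# MAX_TRANSMIT_WAIT with these defaults.
print([round(t, 2) for t in retransmission_schedule()])
```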

    The Road Ahead for Networking: A Survey on ICN-IP Coexistence Solutions

    In recent years, the Internet has experienced an unexpected paradigm shift in its usage model, which has pushed researchers towards the design of the Information-Centric Networking (ICN) paradigm as a possible replacement for the existing architecture. Even though both academia and industry have investigated the feasibility and effectiveness of ICN, achieving a complete replacement of the Internet Protocol (IP) is a challenging task. Some research groups have already addressed coexistence by designing their own architectures, but none of these can serve as the final solution for moving towards the future Internet while the underlying network remains unaltered. To design such an architecture, the research community now needs a comprehensive overview of the existing solutions that have so far addressed coexistence. The purpose of this paper is to reach this goal by providing the first comprehensive survey and classification of the coexistence architectures according to their features (i.e., deployment approach, deployment scenarios, addressed coexistence requirements, and architecture or technology used) and evaluation parameters (i.e., challenges emerging during deployment and the runtime behaviour of an architecture). We believe that this paper fills the gap that must be closed before the final coexistence architecture can be designed.
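
    The classification axes named in the abstract could be captured as a simple data model, sketched below. The axis names follow the abstract; every concrete value (e.g., "overlay", the example entry) is a hypothetical placeholder, since the abstract does not list the survey's actual categories.

```python
"""Sketch of the survey's classification axes as a data model. Axis names
come from the abstract; the example values are hypothetical placeholders."""
from dataclasses import dataclass, field

@dataclass
class CoexistenceArchitecture:
    name: str
    deployment_approach: str                              # e.g. "overlay" (assumed value)
    deployment_scenarios: list[str] = field(default_factory=list)
    coexistence_requirements: list[str] = field(default_factory=list)
    technology: str = ""                                  # architecture or technology used
    deployment_challenges: list[str] = field(default_factory=list)
    runtime_behaviour: str = ""                           # evaluation parameter

# Hypothetical entry illustrating how one surveyed solution might be encoded.
example = CoexistenceArchitecture(
    name="HypotheticalICNGateway",
    deployment_approach="overlay",
    deployment_scenarios=["edge network"],
    coexistence_requirements=["incremental deployability"],
    technology="NDN over IP tunnels",
)
print(example)
```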

    Fog Computing in Medical Internet-of-Things: Architecture, Implementation, and Applications

    In an era when the Internet of Things (IoT) market segment tops the charts in various business reports, the field of medicine is envisioned to gain a large benefit from the explosion of wearables and internet-connected sensors that surround us, acquiring and communicating unprecedented data on symptoms, medication, food intake, and daily-life activities impacting one's health and wellness. However, IoT-driven healthcare has to overcome many barriers: 1) there is an increasing demand for data storage on cloud servers, where the analysis of medical big data becomes increasingly complex; 2) the data, when communicated, are vulnerable to security and privacy issues; 3) communicating the continuously collected data is not only costly but also energy hungry; 4) operating and maintaining the sensors directly from the cloud servers are non-trivial tasks. This book chapter defines Fog Computing in the context of medical IoT. Conceptually, Fog Computing is a service-oriented intermediate layer in IoT, providing interfaces between the sensors and cloud servers to facilitate connectivity, data transfer, and a queryable local database. The centerpiece of Fog Computing is a low-power, intelligent, wireless, embedded computing node that carries out signal conditioning and data analytics on raw data collected from wearables or other medical sensors, and offers an efficient means to serve telehealth interventions. We implemented and tested a fog computing system using the Intel Edison and Raspberry Pi that allows acquisition, computing, storage, and communication of various medical data, such as pathological speech data of individuals with speech disorders, phonocardiogram (PCG) signals for heart rate estimation, and electrocardiogram (ECG)-based Q, R, S detection.
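
    To give a flavour of the on-node analytics such a fog node performs, the sketch below implements a naive threshold-based R-peak detector for heart rate estimation from ECG samples. It is a hypothetical stand-in, not the chapter's actual Q, R, S detection algorithm, and all parameter values are assumptions.

```python
"""Minimal sketch of fog-node ECG analytics: a naive threshold-based
R-peak detector used for heart rate estimation. Illustrative only."""
import numpy as np

def detect_r_peaks(ecg: np.ndarray, fs: float, refractory_s: float = 0.25):
    """Return sample indices of R peaks.

    ecg: raw single-lead ECG samples; fs: sampling rate in Hz. A sample is
    a candidate if it exceeds a crude fixed threshold (a real detector
    adapts online) and is a local maximum; a refractory period suppresses
    double detections within one heartbeat.
    """
    threshold = 0.5 * ecg.max()
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(ecg) - 1):
        if (ecg[i] > threshold and ecg[i] >= ecg[i - 1]
                and ecg[i] > ecg[i + 1] and i - last >= refractory):
            peaks.append(i)
            last = i
    return peaks

# Synthetic demo: 10 s of noise with one beat per second (60 bpm), fs = 250 Hz.
fs = 250
t = np.arange(0, 10, 1 / fs)
ecg = 0.05 * np.random.randn(t.size)
ecg[np.arange(10) * fs + fs // 2] += 1.0  # inject R-peak spikes
r = detect_r_peaks(ecg, fs)
print(f"beats: {len(r)}, est. heart rate: {60 * len(r) / 10:.0f} bpm")
```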

    Survey on Quality of Observation within Sensor Web Systems

    The Sensor Web vision refers to the addition of a middleware layer between sensors and applications. To bridge the gap between these two layers, Sensor Web systems must deal with heterogeneous sources, which produce heterogeneous observations of disparate quality. Managing such diversity at the application level can be complex and requires a high level of expertise from application developers. Moreover, as an information-centric system, any Sensor Web should provide support for Quality of Observation (QoO) requirements. In practice, however, only a few Sensor Webs provide satisfactory QoO support and are able to deliver high-quality observations to end consumers in a specific manner. This survey studies why and how observation quality should be addressed in Sensor Webs. It makes three original contributions. First, it provides important insights into quality dimensions and proposes to use the QoO notion to deal with information quality within Sensor Webs. Second, it presents a QoO-oriented review of 29 Sensor Web solutions developed between 2003 and 2016, along with a custom taxonomy that characterises some of their features from a QoO perspective. Finally, it identifies four major requirements for building future adaptive and QoO-aware Sensor Web solutions.
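
    The QoO notion can be illustrated with a small sketch in which observations carry quality metadata and the middleware filters them against consumer-declared requirements. The quality dimensions used here (accuracy, freshness) and all attribute names are assumed for illustration; they are not the survey's taxonomy.

```python
"""Sketch of the QoO idea: observations carry quality metadata, and the
middleware filters them against an application's QoO requirements.
Attribute names and dimensions are illustrative assumptions."""
from dataclasses import dataclass

@dataclass
class Observation:
    sensor_id: str
    value: float
    accuracy: float      # sensor-reported error bound (assumed dimension)
    freshness_s: float   # age of the observation in seconds (assumed dimension)

def meets_qoo(obs: Observation, max_error: float, max_age_s: float) -> bool:
    """A consumer-declared QoO requirement expressed as a simple predicate."""
    return obs.accuracy <= max_error and obs.freshness_s <= max_age_s

readings = [
    Observation("s1", 21.4, accuracy=0.1, freshness_s=2.0),
    Observation("s2", 21.9, accuracy=1.5, freshness_s=0.5),   # too inaccurate
    Observation("s3", 20.8, accuracy=0.2, freshness_s=40.0),  # too stale
]
good = [o for o in readings if meets_qoo(o, max_error=0.5, max_age_s=10.0)]
print([o.sensor_id for o in good])  # -> ['s1']
```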