
    Security protocols suite for machine-to-machine systems

    Nowadays, the widespread adoption of advanced devices such as smartphones shows a growing reliance on new technologies to generate and support progress; society is clearly ready to trust next-generation communication systems to address today's economic and social concerns. The reason for this sociological change is that these technologies have been opened to all users, even those without specific expertise in the field, so the introduction of new user-friendly applications has become both a business opportunity and a key factor in increasing cohesion among citizens. Among the actors in this technological evolution, wireless machine-to-machine (M2M) networks are becoming particularly important. These wireless networks are made up of interconnected low-power devices that provide a great variety of services with little or no user intervention. Examples of such services are fleet management, fire detection, utility consumption monitoring (water and energy distribution, etc.) and patient monitoring. However, since any emerging technology brings security threats that must be faced, further studies are necessary to secure wireless M2M technology. In this context, the main threats are attacks on service availability and on the privacy of both subscribers' and service providers' data. Given the often limited hardware resources of M2M devices, ensuring availability and privacy across the range of M2M applications while minimizing the waste of valuable resources is even more challenging. Based on the above, this Ph.D. thesis aims to provide efficient security solutions for wireless M2M networks that reduce the energy consumption of the network without weakening the overall security services of the system. With this goal, we first propose a coherent taxonomy of M2M networks that allows us to identify which security topics deserve special attention and which entities or specific services are particularly threatened. Second, we define an efficient secure data aggregation scheme that increases the network lifetime by optimizing the energy consumption of the devices. Third, we propose a novel physical authenticator, or frame checker, that minimizes communication costs over wireless channels and successfully withstands exhaustion attacks. Fourth, we study specific aspects of typical key management schemes and provide a novel protocol that ensures the distribution of secret keys for all the cryptographic methods used in the system. Fifth, we describe the collaboration with the WAVE2M community to define a frame format able to support the necessary security services, including the ones we have already proposed; WAVE2M was founded to promote the global use of an emerging wireless communication technology for ultra-low-power and long-range services. Sixth and finally, we provide an accurate analysis of privacy solutions that fit the requirements of M2M network services. All the analyses in this thesis are corroborated by simulations that confirm significant improvements in efficiency while supporting the necessary security requirements for M2M networks.
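
    The secure data aggregation contribution is only summarized above. As a rough illustration of how additive aggregation can be combined with end-to-end confidentiality in resource-constrained networks, the following Python sketch uses a generic additively homomorphic masking construction in the spirit of Castelluccia-Mykletun-Tsudik; it is not the scheme proposed in the thesis, and the modulus, key names, and epoch handling are assumptions made for the example.

```python
# Illustrative sketch only: generic additively homomorphic aggregation,
# NOT the thesis's own secure-data-aggregation scheme (whose details are
# not given in the abstract). Keys, modulus and epoch handling are assumed.
import hashlib

M = 2**32  # modulus assumed large enough to hold the sum of all readings

def keystream(shared_key: bytes, epoch: int) -> int:
    """Per-epoch pseudo-random pad derived from a key shared with the sink."""
    digest = hashlib.sha256(shared_key + epoch.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big") % M

def encrypt_reading(reading: int, shared_key: bytes, epoch: int) -> int:
    return (reading + keystream(shared_key, epoch)) % M

def aggregate(ciphertexts):
    """Intermediate nodes add ciphertexts without decrypting them."""
    return sum(ciphertexts) % M

def decrypt_sum(agg: int, keys, epoch: int) -> int:
    """The sink removes every device's pad to recover the plaintext sum."""
    pads = sum(keystream(k, epoch) for k in keys) % M
    return (agg - pads) % M

# Example: three hypothetical devices report readings for epoch 7.
keys = [b"k-device-1", b"k-device-2", b"k-device-3"]
readings = [13, 42, 5]
cts = [encrypt_reading(r, k, 7) for r, k in zip(readings, keys)]
assert decrypt_sum(aggregate(cts), keys, 7) == sum(readings)
```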

    Blockchain-enabled cybersecurity provision for scalable heterogeneous network: A comprehensive survey

    Blockchain-enabled cybersecurity systems that secure and strengthen decentralized digital transactions are gradually gaining popularity in areas such as finance, transportation, healthcare, education, and supply chain management. Blockchain interactions in heterogeneous networks have attracted growing attention because they authenticate digital application exchanges. However, the exponential growth of storage requirements across a blockchain-based heterogeneous network has become an important obstacle to blockchain distribution and to the extension of blockchain nodes. Data integrity and scalability remain the biggest challenges, including significant computational complexity and unacceptable latency under regional-network, operating-system, bandwidth, and node diversity, when making decisions about data transactions across blockchain-based heterogeneous networks. Data security and privacy are also major concerns when building smart IoT ecosystems on heterogeneous networks. To address these issues, researchers have explored how heterogeneous network devices can perform data transactions while being integrated with blockchain reliably and securely. The key goal of this paper is to conduct a state-of-the-art, comprehensive survey of cybersecurity enhancement using blockchain in heterogeneous networks. The paper proposes a full-fledged taxonomy to identify the main obstacles, research gaps, future research directions, effective solutions, and the most relevant blockchain-enabled cybersecurity systems. In addition, a blockchain-based heterogeneous network framework with cybersecurity is proposed to maintain optimal performance for data transactions among organizations. Overall, the paper provides an in-depth, critical analysis that addresses the gaps in existing work and presents a potential cybersecurity design with the key requirements of blockchain across a heterogeneous network.
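
    The integrity property that such blockchain-based designs rely on can be pictured with a short sketch. The following Python fragment shows a minimal hash-chained ledger in which tampering with any earlier block is detectable; it is a generic illustration, not the framework proposed in the surveyed paper, and the block fields are assumptions.

```python
# Illustrative sketch only: a minimal hash-chained ledger demonstrating
# tamper evidence, not the paper's proposed framework.
import hashlib, json, time
from dataclasses import dataclass, field

@dataclass
class Block:
    index: int
    prev_hash: str
    transactions: list
    timestamp: float = field(default_factory=time.time)

    def hash(self) -> str:
        payload = json.dumps(
            [self.index, self.prev_hash, self.transactions, self.timestamp],
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    prev = chain[-1].hash() if chain else "0" * 64
    chain.append(Block(len(chain), prev, transactions))

def verify_chain(chain: list) -> bool:
    """Tampering with an earlier block breaks every later prev_hash link."""
    return all(chain[i].prev_hash == chain[i - 1].hash()
               for i in range(1, len(chain)))

chain: list = []
append_block(chain, [{"from": "orgA", "to": "orgB", "amount": 10}])
append_block(chain, [{"from": "orgB", "to": "orgC", "amount": 4}])
assert verify_chain(chain)
chain[0].transactions[0]["amount"] = 999   # tampering...
assert not verify_chain(chain)             # ...is detected
```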

    Design and Analysis of a Novel Split and Aggregated Transmission Control Protocol for Smart Metering Infrastructure

    Utility companies (electricity, gas, and water suppliers), governments, and researchers recognize an urgent need to deploy communication-based systems to automate data collection from smart meters and sensors, known as Smart Metering Infrastructure (SMI) or Automatic Meter Reading (AMR). A smart metering system is envisaged to bring tremendous benefits to customers, utilities, and governments. The advantages include reducing peak demand for energy, supporting the time-of-use concept for billing, enabling customers to make informed decisions, and performing effective load management, to name a few. A key element in an SMI is the communication between meters and utility servers. However, the mass deployment of metering devices in the grid calls for studying the scalability of communication protocols. SMI is characterized by the deployment of a large number of small Internet Protocol (IP) devices sending small packets at a low rate to a central server. Although the individual devices generate data at a low rate, the collective traffic produced is significant and is disruptive to network communication functionality. This research work focuses on the scalability of transport-layer functionality. The TCP congestion control mechanism, in particular, would be ineffective for smart meter traffic because a large volume of data comes from a large number of individual sources. This situation leaves the TCP congestion control mechanism unable to lower the transmission rate even when congestion occurs. The consequences are a high loss rate for metered data and degraded throughput for competing traffic in the smart metering network. To enhance the performance of TCP in a smart metering infrastructure, we introduce a novel TCP-based scheme called Split- and Aggregated-TCP (SA-TCP). The scheme is based on the idea of upgrading intermediate devices in the SMI (known in the industry as regional collectors, RCs) to offer the service of aggregating TCP connections. An SA-TCP aggregator collects data packets from the smart meters of its region over separate TCP connections and then reliably forwards the data over another TCP connection to the utility server. The proposed split-and-aggregate scheme provides a better response to traffic conditions and, most importantly, makes the TCP congestion control and flow control mechanisms effective. Supported by extensive ns-2 simulations, we show the effectiveness of the SA-TCP approach in mitigating these problems in terms of the throughput and packet loss rate performance metrics. A full mathematical model of SA-TCP is provided. The model is highly accurate and flexible in predicting the behaviour of the two stages of the SA-TCP scheme, separately and combined, in terms of throughput, packet loss rate, and end-to-end delay. Considering the two stages of the scheme, the modelling approach uses Markovian models to represent smart meters in the first stage and SA-TCP aggregators in the second. The approach then studies the interaction of smart meters and SA-TCP aggregators with the network by means of standard queuing models. The ns-2 simulations validate the mathematical model's results. A comprehensive performance analysis of the SA-TCP scheme is performed. It studies the impact of varying several parameters, including network link capacity, the buffering capacity of the RCs that act as SA-TCP aggregators, the propagation delay between the meters and the utility server, and the number of SA-TCP aggregators.
The performance results show that adjusting these parameters makes it possible to further enhance congestion control in SMI. Therefore, this thesis also formulates an optimization model to achieve better TCP performance and to ensure satisfactory results, such as a minimal loss rate and an acceptable end-to-end delay. The optimization model also minimizes the deployment cost of the SA-TCP scheme by balancing the number of SA-TCP aggregators against link bandwidth, while still satisfying the performance requirements.
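
    The split-and-aggregate principle can be pictured with a short sketch. The following Python fragment illustrates only the connection-splitting idea, under assumed host names and ports; it is not the thesis's ns-2 model or a production regional collector. Meters open short TCP connections to the aggregator, which relays their payloads over one persistent TCP connection to the utility server, so congestion and flow control operate on that single aggregated flow.

```python
# Illustrative sketch only: the SA-TCP split-and-aggregate idea, not the
# thesis's implementation. Host names and ports are hypothetical.
import socket
import threading

SERVER_ADDR = ("utility.example.net", 9000)   # assumed utility-server endpoint
LISTEN_ADDR = ("0.0.0.0", 8000)               # assumed aggregator endpoint

def run_aggregator() -> None:
    # One long-lived "aggregated" connection toward the utility server.
    upstream = socket.create_connection(SERVER_ADDR)
    upstream_lock = threading.Lock()

    def handle_meter(conn: socket.socket) -> None:
        # Each meter gets its own short "split" connection to the aggregator.
        with conn:
            data = conn.recv(4096)           # one small reading per connection
            if data:
                with upstream_lock:          # serialize writes on the shared link
                    upstream.sendall(data)

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(LISTEN_ADDR)
    listener.listen()
    while True:
        meter_conn, _ = listener.accept()
        threading.Thread(target=handle_meter, args=(meter_conn,), daemon=True).start()

if __name__ == "__main__":
    run_aggregator()
```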

    Telecommunications Networks

    This book guides readers from the basics of rapidly emerging networks to more advanced concepts and future expectations of telecommunications networks. It identifies and examines the most pressing research issues in telecommunications and contains chapters written by leading researchers, academics, and industry professionals. Telecommunications Networks - Current Status and Future Trends surveys recent publications that investigate key areas of interest such as IMS, eTOM, 3G/4G, optimization problems, modeling, simulation, and quality of service. The book, which is suitable for both PhD and Master's students, is organized into six sections: New Generation Networks, Quality of Services, Sensor Networks, Telecommunications, Traffic Engineering, and Routing.

    Abstracting Application Development for Resource Constrained Wireless Sensor Networks

    Ubiquitous computing is a concept whereby computing is distributed across smart objects surrounding users, creating ambient intelligence. Ubiquitous applications use technologies such as the Internet, sensors, actuators, embedded computers, wireless communication, and new user interfaces. The Internet-of-Things (IoT) is one of the key concepts in the realization of ubiquitous computing, whereby smart objects communicate with each other and the Internet. Further, Wireless Sensor Networks (WSNs) are a sub-group of IoT technologies that consist of geographically distributed devices, or nodes, capable of sensing and actuating in the environment. WSNs typically contain tens to thousands of nodes that organize and operate autonomously to perform application-dependent sensing and sensor data processing tasks. The projected applications require nodes that are physically small and low-cost and that have a long lifetime with limited energy resources, while performing complex computing and communication tasks. As a result, WSNs are complex distributed systems that are constrained by communication, computing, and energy resources. WSN functionality is dynamic, depending on the environment and application requirements. Dynamic multitasking, task distribution, task injection, and software updates are required in field experiments for possibly thousands of nodes operating in harsh environments. The development of WSN application software requires abstraction of computing, communication, data access, and heterogeneous sensor data sources to reduce these complexities. Abstractions enable the faster development of new applications with better reuse of existing software, as applications are composed of high-level tasks that use the services provided by the devices to execute the application logic. The main research question of this thesis is: what abstractions are needed for application development for resource-constrained WSNs? The thesis models WSN abstractions in three levels that build on top of each other: 1) node abstraction, 2) network abstraction, and 3) infrastructure abstraction. The node abstraction hides the details of using the sensing, communication, and processing hardware. The network abstraction specifies methods for discovering and accessing services and for distributing processing in the network. The infrastructure abstraction unifies different sensing technologies and infrastructure computing platforms. As a contribution, this thesis presents the abstraction model with a review of each abstraction level. Several designs for each of the levels are tested and verified with proofs of concept and analyses of field experiments. The resulting designs consist of an operating system kernel, a software update method, a data unification interface, and an abstraction combining all levels, called an embedded cloud. The presented operating system kernel has a scalable overhead and provides a programming approach similar to a desktop operating system, with threads and processes. An over-the-air update method combines low-overhead, robust software updating with application task dissemination. The data unification interface homogenizes access to the data of heterogeneous sensor networks; the unification model serves various use cases by mapping everything to measurements. The embedded cloud allows resource-constrained WSNs to share services and data and to expand their resources with other technologies. The embedded cloud also allows the distributed processing of applications according to the available services. Applications are implemented as processes using a hardware-independent description language that can be executed on resource-constrained WSNs. The lessons of practical field experimentation are analyzed to study the importance of the abstractions; the software complexities encountered in the field experiments highlight the need for suitable abstractions. The results of this thesis are tested using proof-of-concept implementations on real WSN hardware constrained to computing power on the order of a few MIPS, memory sizes of a few kilobytes, and small batteries. The results will remain usable in the future, as the vast number, tight integration, and low cost of future IoT devices will require combining complex computation with resource-constrained platforms.
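
    The data unification interface is described only conceptually in the abstract. The sketch below illustrates in Python what "mapping everything to measurements" can look like in practice: heterogeneous sensor sources are wrapped behind one measurement-producing interface, so application code is decoupled from the underlying sensor technology. All class and field names are hypothetical and are not taken from the thesis.

```python
# Illustrative sketch only: the "everything is a measurement" unification
# idea, not the thesis's actual data unification interface.
from dataclasses import dataclass
from typing import Protocol, Iterable
import time

@dataclass(frozen=True)
class Measurement:
    node_id: str        # which WSN node produced the value
    quantity: str       # e.g. "temperature", "battery_voltage"
    value: float
    unit: str
    timestamp: float

class MeasurementSource(Protocol):
    """Uniform interface: every sensor technology is mapped to measurements."""
    def read(self) -> Iterable[Measurement]: ...

class TemperatureSensor:
    def __init__(self, node_id: str):
        self.node_id = node_id
    def read(self) -> Iterable[Measurement]:
        raw_celsius = 21.5   # stand-in for a real hardware read
        yield Measurement(self.node_id, "temperature", raw_celsius, "degC", time.time())

class BatteryMonitor:
    def __init__(self, node_id: str):
        self.node_id = node_id
    def read(self) -> Iterable[Measurement]:
        millivolts = 2970.0  # stand-in for a real ADC read
        yield Measurement(self.node_id, "battery_voltage", millivolts / 1000.0, "V", time.time())

def collect(sources: Iterable[MeasurementSource]) -> list:
    """Application code sees one homogeneous stream regardless of sensor type."""
    return [m for s in sources for m in s.read()]

print(collect([TemperatureSensor("node-7"), BatteryMonitor("node-7")]))
```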

    Privacy in RFID and Mobile Objects

    RFID systems allow the fast and automatic identification of RFID tags through a wireless communication channel. These tags are devices with a certain computational power and the capacity to store information. Objects carrying an attached RFID tag therefore allow the reading of a rich and varied set of data that describe and characterize them, for example a unique identification code, the name, the model, or the expiration date. Moreover, this information can be read without the need for visual contact between the reader and the tag, which considerably speeds up inventory, identification, and automatic control processes. For the use of RFID technology to become widespread successfully, several objectives must be met: efficiency, security, and privacy protection. However, designing secure, private, and scalable identification protocols is a difficult challenge given the computational constraints of RFID tags and their wireless nature. In this thesis, we therefore start from secure and private identification protocols and show how scalability can be achieved by means of a distributed and collaborative architecture. In this way, security and privacy are provided by the identification protocol itself, while scalability is achieved through novel collaborative methods that take into account the spatial and temporal position of RFID tags. Regardless of the advances in wireless identification protocols, there exist attacks that can successfully defeat any of these protocols without the need to know or discover valid secret keys or to find vulnerabilities in their cryptographic implementations. The idea of these attacks, known as relay attacks, is to inadvertently create a communication bridge between a legitimate tag and a legitimate reader. In this way, the adversary uses the rights of the legitimate tag to pass the authentication protocol used by the reader. Note that, given the wireless nature of RFID protocols, this type of attack represents an important threat to the security of RFID systems. In this thesis we propose a new protocol that, in addition to authentication, checks the distance between the reader and the tag. Such protocols are known as distance-bounding protocols; they do not prevent this type of attack, but they can thwart it with high probability. Finally, we address the privacy problems associated with the publication of information collected through RFID systems. In particular, we focus on mobility data, which can also be provided by other widely used systems such as the Global Positioning System (GPS) and the Global System for Mobile Communications. Our solution is based on the well-known notion of k-anonymity, achieved through permutation and microaggregation. To this end, we define a novel distance function between trajectories, with which we develop two different trajectory anonymization methods.
Radio Frequency Identification (RFID) is a technology aimed at efficiently identifying and tracking goods and assets. Such identification may be performed without requiring line-of-sight alignment or physical contact between the RFID tag and the RFID reader, whilst tracking is naturally achieved thanks to the short interrogation field of RFID readers. That is why the reduction in the price of RFID tags has been accompanied by increasing attention paid to this technology. However, since tags are resource-constrained devices sending identification data wirelessly, designing secure and private RFID identification protocols is a challenging task. This scenario is even more complex when scalability must also be met by those protocols. Assuming the existence of a lightweight, secure, private, and scalable RFID identification protocol, there exist other concerns surrounding RFID technology. Some of them arise from the technology itself, such as distance checking, but others are related to the potential of RFID systems to gather huge amounts of tracking data. Publishing and mining such moving-object data is essential to improve the efficiency of supervisory control, asset management and localisation, transportation, etc. However, obvious privacy threats arise if an individual can be linked with some of those published trajectories. The present dissertation contributes to the design of algorithms and protocols aimed at dealing with the issues explained above. First, we propose a set of protocols and heuristics based on a distributed architecture that improve the efficiency of the identification process without compromising privacy or security. Moreover, we present a novel distance-bounding protocol based on graphs that is extremely low-resource consuming. Finally, we present two trajectory anonymisation methods aimed at preserving individuals' privacy when their trajectories are released.
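
    As a rough illustration of the trajectory anonymisation goal, the sketch below shows a generic k-anonymity-by-microaggregation approach in Python: trajectories are grouped into clusters of at least k and each is published as its cluster centroid. It is not one of the two methods developed in the thesis, and it replaces the thesis's novel trajectory distance with a simple pointwise Euclidean average over equal-length trajectories, purely for clarity.

```python
# Illustrative sketch only: k-anonymity of trajectories via microaggregation.
# NOT one of the thesis's anonymisation methods and not its distance function;
# a simple average pointwise Euclidean distance over equal-length trajectories
# is assumed for clarity.
import math

Trajectory = list  # list of (x, y) points; all trajectories same length here

def distance(a: Trajectory, b: Trajectory) -> float:
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def centroid(group: list) -> Trajectory:
    n, length = len(group), len(group[0])
    return [(sum(t[i][0] for t in group) / n,
             sum(t[i][1] for t in group) / n) for i in range(length)]

def microaggregate(trajs: list, k: int) -> list:
    """Greedy fixed-size clustering: each original trajectory is replaced by
    the centroid of its group of >= k trajectories, so every published
    trajectory is shared by at least k individuals."""
    assert len(trajs) >= k, "need at least k trajectories to anonymise"
    remaining = list(range(len(trajs)))
    published = [None] * len(trajs)
    while len(remaining) >= 2 * k:
        seed = remaining[0]
        remaining.sort(key=lambda i: distance(trajs[seed], trajs[i]))
        group, remaining = remaining[:k], remaining[k:]
        rep = centroid([trajs[i] for i in group])
        for i in group:
            published[i] = rep
    rep = centroid([trajs[i] for i in remaining])   # last group keeps the rest
    for i in remaining:
        published[i] = rep
    return published
```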