291 research outputs found

    Authentication enhancement in command and control networks: (a study in Vehicular Ad-Hoc Networks)

    Intelligent transportation systems contribute to improved traffic safety by facilitating real-time communication between vehicles. By using wireless channels for communication, vehicular networks are susceptible to a wide range of attacks, such as impersonation, modification, and replay. In this context, securing data exchange between intercommunicating terminals, e.g., vehicle-to-everything (V2X) communication, constitutes a technological challenge that needs to be addressed. Hence, message authentication is crucial to safeguard vehicular ad-hoc networks (VANETs) from malicious attacks. The current state-of-the-art for authentication in VANETs relies on conventional cryptographic primitives, introducing significant computation and communication overheads. In this challenging scenario, physical (PHY)-layer authentication has gained popularity, which involves leveraging the inherent characteristics of wireless channels and the hardware imperfections to discriminate between wireless devices. However, PHY-layer-based authentication cannot be an alternative to crypto-based methods, as the initial legitimacy detection must be conducted using cryptographic methods to extract the communicating terminal's secret features. Nevertheless, it can be a promising complementary solution for the re-authentication problem in VANETs, introducing what is known as “cross-layer authentication.” This thesis focuses on designing efficient cross-layer authentication schemes for VANETs, reducing the communication and computation overheads associated with transmitting and verifying a crypto-based signature for each transmission. The following provides an overview of the methodologies proposed in the various contributions presented in this thesis.
    1. The first cross-layer authentication scheme: This approach consists of a four-step process: initial crypto-based authentication, shared key extraction, re-authentication via a PHY challenge-response algorithm (an illustrative sketch of such an exchange follows this abstract), and adaptive adjustments based on channel conditions. Simulation results validate its efficacy, especially in low signal-to-noise ratio (SNR) scenarios, while proving its resilience against active and passive attacks.
    2. The second cross-layer authentication scheme: Leveraging the spatially and temporally correlated wireless channel features, this scheme extracts high-entropy shared keys that can be used to create dynamic PHY-layer signatures for authentication. A 3-Dimensional (3D) scattering Doppler emulator is designed to investigate the scheme’s performance at different speeds of a moving vehicle and SNRs. Theoretical and hardware implementation analyses prove the scheme’s capability to support a high detection probability for an acceptable false alarm value ≤ 0.1 at SNR ≥ 0 dB and speed ≤ 45 m/s.
    3. The third proposal: Reconfigurable intelligent surfaces (RIS) integration for improved authentication: Focusing on enhancing PHY-layer re-authentication, this proposal explores integrating RIS technology to improve the SNR directed at designated vehicles. Theoretical analysis and practical implementation of the proposed scheme are conducted using a 1-bit RIS consisting of 64 × 64 reflective units. Experimental results show a significant improvement in the detection probability (Pd), increasing from 0.82 to 0.96 at SNR = −6 dB for multicarrier communications.
    4. The fourth proposal: RIS-enhanced vehicular communication security: Tailored for challenging SNR conditions in non-line-of-sight (NLoS) scenarios, this proposal optimises key extraction and defends against denial-of-service (DoS) attacks through selective signal strengthening. Hardware implementation studies prove its effectiveness, showcasing improved key extraction performance and resilience against potential threats.
    5. The fifth cross-layer authentication scheme: Integrating PKI-based initial legitimacy detection and blockchain-based reconciliation techniques, this scheme ensures secure data exchange. Rigorous security analyses and performance evaluations using network simulators and computation metrics showcase its effectiveness, ensuring its resistance against common attacks and its time efficiency in message verification.
    6. The final proposal: Group key distribution: Employing smart contract-based blockchain technology alongside PKI-based authentication, this proposal distributes group session keys securely. Its lightweight symmetric-key cryptography-based method maintains privacy in VANETs, validated via Ethereum’s main network (MainNet) and comprehensive computation and communication evaluations.
    The analysis shows that the proposed methods yield a noteworthy reduction, approximately ranging from 70% to 99%, in both computation and communication overheads compared to conventional approaches. This reduction pertains to the verification and transmission of 1000 messages in total.
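    To make contribution 1 more concrete, the following is a minimal sketch of how a keyed PHY-layer challenge-response step could be realised once a shared key has been extracted; the 1-bit channel quantiser, the HMAC construction, and the bit-flip tolerance are illustrative assumptions, not the algorithm actually proposed in the thesis.

```python
import hmac
import hashlib
import os
from itertools import combinations

def quantize_channel(gains, threshold):
    """Toy 1-bit quantiser: turn per-subcarrier channel gain estimates into a bit string."""
    return bytes(1 if g > threshold else 0 for g in gains)

def phy_response(shared_key: bytes, challenge: bytes, channel_bits: bytes) -> bytes:
    """Prover side: MAC over the fresh challenge and the locally observed channel signature."""
    return hmac.new(shared_key, challenge + channel_bits, hashlib.sha256).digest()

def verify(shared_key, challenge, response, local_channel_bits, max_bit_flips=2):
    """Verifier side: accept if the response matches some channel estimate within a small
    Hamming distance of the verifier's own estimate (channel reciprocity is never perfect)."""
    n = len(local_channel_bits)
    candidates = [local_channel_bits]
    for k in range(1, max_bit_flips + 1):
        for idx in combinations(range(n), k):
            flipped = bytearray(local_channel_bits)
            for i in idx:
                flipped[i] ^= 1
            candidates.append(bytes(flipped))
    return any(hmac.compare_digest(response, phy_response(shared_key, challenge, c))
               for c in candidates)

# Both sides hold `key` from the crypto-based bootstrap / shared-key extraction phase.
key = os.urandom(32)
challenge = os.urandom(16)                                    # sent by the verifier (e.g. an RSU)
vehicle_bits = quantize_channel([0.9, 0.2, 0.7, 0.1, 0.8, 0.3], threshold=0.5)
verifier_bits = quantize_channel([0.8, 0.3, 0.4, 0.1, 0.9, 0.2], threshold=0.5)  # noisy reciprocal estimate
response = phy_response(key, challenge, vehicle_bits)
print(verify(key, challenge, response, verifier_bits))        # True: the two estimates differ by one bit
```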

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Efficient Security Algorithm for Provisioning Constrained Internet of Things (IoT) Devices

    Addressing the security concerns of constrained Internet of Things (IoT) devices, such as client-side encryption and secure provisioning, remains a work in progress. IoT devices characterized by low power and processing capabilities do not exactly fit into the provisions of existing security schemes, as classical security algorithms are built on cryptographic functions that are too complex for constrained IoT devices. Consequently, the option for constrained IoT devices lies in either developing new security schemes or modifying existing ones to be lightweight. This work presents an improved version of the Advanced Encryption Standard (AES), known as the Efficient Security Algorithm for Power-constrained IoT devices, which addresses some of the security concerns of constrained IoT devices, such as client-side encryption and secure provisioning. With cloud computing being the key enabler for the massive provisioning of IoT devices, encryption of data generated by IoT devices before onward transmission to cloud platforms of choice is being advocated via client-side encryption. However, coping with trade-offs remains a notable challenge for lightweight algorithms, making the innovation of cheaper security schemes that do not compromise security highly desirable for the secure provisioning of IoT devices. A cryptanalytic overview of the consequences of complexity reduction, with mathematical justification, while using a Secure Element (ATECC608A) as a trade-off, is given. The extent of constraint of a typical IoT device is investigated by comparing the laptop and SAMG55 implementations of the Efficient algorithm for constrained IoT devices, and an analysis of the implementation and a comparison of the algorithm to lightweight algorithms are given. Based on experimental results, resource constraint causes a 657% increase in the encryption completion time on the IoT device in comparison to the laptop implementation of the Efficient algorithm for constrained IoT devices; the algorithm is 0.9 times cheaper than CLEFIA and 35% cheaper than AES in terms of encryption completion times, compared to current results in the literature at 26%, and achieves a 93% avalanche effect rate, well above the 50% recommended in the literature. The algorithm is utilised for client-side encryption to provision the device onto AWS IoT Core.
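    For context on the avalanche figure quoted above, the sketch below measures the avalanche effect of standard AES-128, used here purely as a stand-in because the modified Efficient algorithm itself is not reproduced in the abstract: flip one plaintext bit and count how many ciphertext bits change. It assumes the third-party `cryptography` package is installed.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_encrypt_block(key: bytes, block: bytes) -> bytes:
    """Encrypt a single 16-byte block with AES in ECB mode (one-block toy use only)."""
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return encryptor.update(block) + encryptor.finalize()

def avalanche_rate(key: bytes, plaintext: bytes, bit_index: int) -> float:
    """Fraction of ciphertext bits that change when a single plaintext bit is flipped."""
    flipped = bytearray(plaintext)
    flipped[bit_index // 8] ^= 1 << (bit_index % 8)
    c1 = aes_encrypt_block(key, plaintext)
    c2 = aes_encrypt_block(key, bytes(flipped))
    differing = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
    return differing / (len(c1) * 8)

key = os.urandom(16)          # AES-128 key
pt = os.urandom(16)           # one 16-byte plaintext block
rates = [avalanche_rate(key, pt, i) for i in range(128)]
print(f"mean avalanche rate: {sum(rates) / len(rates):.2%}")  # for AES this lands close to 50%
```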

    Efficiency and Sustainability of the Distributed Renewable Hybrid Power Systems Based on the Energy Internet, Blockchain Technology and Smart Contracts-Volume II

    The climate changes that are becoming visible today are a challenge for the global research community. In this context, renewable energy sources, fuel cell systems, and other energy-generating sources must be optimally combined and connected to the grid system using advanced energy transaction methods. As this reprint presents the latest solutions for implementing fuel cell systems and renewable energy in mobile and stationary applications, such as hybrid and microgrid power systems based on the Energy Internet, Blockchain technology, and smart contracts, we hope that they will be of interest to readers working in the related fields mentioned above.

    RescueSNN: enabling reliable executions on spiking neural network accelerators under permanent faults

    To maximize the performance and energy efficiency of Spiking Neural Network (SNN) processing on resource-constrained embedded systems, specialized hardware accelerators/chips are employed. However, these SNN chips may suffer from permanent faults which can affect the functionality of weight memory and neuron behavior, thereby causing potentially significant accuracy degradation and system malfunctioning. Such permanent faults may come from manufacturing defects during the fabrication process, and/or from device/transistor damage (e.g., due to wear-out) during run-time operation. However, the impact of permanent faults in SNN chips and the respective mitigation techniques have not been thoroughly investigated yet. Toward this, we propose RescueSNN, a novel methodology to mitigate permanent faults in the compute engine of SNN chips without requiring additional retraining, thereby significantly cutting down the design time and retraining costs, while maintaining the throughput and quality. The key ideas of our RescueSNN methodology are (1) analyzing the characteristics of SNN under permanent faults; (2) leveraging this analysis to improve the SNN fault-tolerance through effective fault-aware mapping (FAM); and (3) devising lightweight hardware enhancements to support FAM. Our FAM technique leverages the fault map of the SNN compute engine for (i) minimizing weight corruption when mapping weight bits on the faulty memory cells, and (ii) selectively employing faulty neurons that do not cause significant accuracy degradation to maintain accuracy and throughput, while considering the SNN operations and processing dataflow. The experimental results show that our RescueSNN improves accuracy by up to 80% while maintaining the throughput reduction below 25% at a high fault rate (e.g., 0.5 of the potential fault locations), as compared to running SNNs on the faulty chip without mitigation. In this manner, the embedded systems that employ RescueSNN-enhanced chips can efficiently ensure reliable executions against permanent faults during their operational lifetime.
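    As a simplified illustration of the fault-aware mapping idea (not the paper's actual FAM algorithm), the sketch below greedily assigns logical weight rows to the physical memory rows whose permanently faulty cells would corrupt the fewest significant bits; the fault-map layout and cost function are assumptions for illustration.

```python
import numpy as np

def fault_cost(faulty_bits: np.ndarray, n_bits: int = 8) -> int:
    """Rough corruption cost of one physical memory row: sum of the bit significances
    of its permanently faulty cells (MSB faults cost more than LSB faults)."""
    significance = 2 ** np.arange(n_bits)[::-1]        # MSB first
    return int((faulty_bits * significance).sum())

def fault_aware_mapping(n_logical_rows: int, fault_map: np.ndarray) -> dict:
    """Greedily map logical weight rows to the physical rows whose faults would
    corrupt the fewest significant weight bits (a simplified take on fault-aware mapping)."""
    costs = [fault_cost(fault_map[r]) for r in range(fault_map.shape[0])]
    best_rows = np.argsort(costs)                      # cheapest physical rows first
    return {logical: int(best_rows[logical]) for logical in range(n_logical_rows)}

# Toy example: 4 logical weight rows, 6 physical rows of 16 cells x 8 bits, ~1% stuck cells.
rng = np.random.default_rng(0)
fault_map = rng.random((6, 16, 8)) < 0.01              # True = permanently faulty bit cell
print(fault_aware_mapping(4, fault_map))               # logical row -> least-damaging physical row
```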

    Direct communication radio interface for new radio multicasting and cooperative positioning

    Cotutelle thesis; defending university: UNIVERSITA’ MEDITERRANEA DI REGGIO CALABRIA. Recently, the popularity of Millimeter Wave (mmWave) wireless networks has increased due to their capability to cope with the escalation of mobile data demands caused by the unprecedented proliferation of smart devices in the fifth generation (5G). The extremely high frequency or mmWave band is a fundamental pillar in the provision of the expected gigabit data rates. Hence, according to both academic and industrial communities, mmWave technology, e.g., 5G New Radio (NR) and WiGig (60 GHz), is considered one of the main components of 5G and beyond networks. Particularly, the 3rd Generation Partnership Project (3GPP) provides for the use of licensed mmWave sub-bands for 5G mmWave cellular networks, whereas IEEE actively explores the unlicensed band at 60 GHz for next-generation wireless local area networks. In this regard, mmWave has been envisaged as a new technology layout for real-time heavy-traffic and wearable applications. This work is devoted to solving the problems of mmWave band communication systems while enhancing their advantages by utilizing the direct communication radio interface for NR multicasting, cooperative positioning, and mission-critical applications. The main contributions presented in this work include: (i) a set of mathematical frameworks and simulation tools to characterize multicast traffic delivery in mmWave directional systems; (ii) exploitation of the sidelink relaying concept to deal with the channel condition deterioration of dynamic multicast systems and to ensure mission-critical and ultra-reliable low-latency communications; (iii) analysis of cooperative positioning techniques for enhancing cellular positioning accuracy for 5G+ emerging applications that require not only improved communication characteristics but also precise localization. Our study indicates the need for additional mechanisms/research that can be utilized: (i) to further improve multicasting performance in 5G/6G systems; (ii) to investigate sidelink aspects, including, but not limited to, the standardization perspective and next-relay selection strategies; and (iii) to design cooperative positioning systems based on Device-to-Device (D2D) technology.
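    As a toy illustration of the kind of cooperative positioning analysed in the thesis (the concrete D2D-assisted estimators are not given in the abstract), the sketch below fuses range measurements to a few cooperating devices with known positions via linearised least squares; the anchor layout and noise level are assumptions.

```python
import numpy as np

def multilaterate(anchors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Linearised least-squares position estimate from ranges to known anchor positions.
    Subtracting the first anchor's range equation removes the quadratic unknown terms."""
    a0, d0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    return est

# Toy scenario: four cooperating devices with known 2D positions and noisy range measurements.
rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
true_pos = np.array([18.0, 31.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.5, size=4)
print(multilaterate(anchors, ranges))   # close to [18, 31] for small range noise
```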

    Jornadas Nacionales de Investigación en Ciberseguridad: actas de las VIII Jornadas Nacionales de Investigación en ciberseguridad: Vigo, 21 a 23 de junio de 2023

    Jornadas Nacionales de Investigación en Ciberseguridad (8ª. 2023. Vigo); atlanTTic; AMTEGA: Axencia para a modernización tecnolóxica de Galicia; INCIBE: Instituto Nacional de Ciberseguridad

    Machine learning as a service for high energy physics (MLaaS4HEP): a service for ML-based data analyses

    With the CERN LHC program underway, data growth in the High Energy Physics (HEP) field has accelerated, and the usage of Machine Learning (ML) in HEP will be critical during the HL-LHC program, when the data produced will reach the exascale. ML techniques have been successfully used in many areas of HEP; nevertheless, the development of an ML project and its implementation for production use is a highly time-consuming task and requires specific skills. Complicating this scenario is the fact that HEP data is stored in the ROOT data format, which is mostly unknown outside of the HEP community. The work presented in this thesis is focused on the development of an ML as a Service (MLaaS) solution for HEP, aiming to provide a cloud service that allows HEP users to run ML pipelines via HTTP calls. These pipelines are executed by using the MLaaS4HEP framework, which allows reading data, processing data, and training ML models directly using ROOT files of arbitrary size from local or distributed data sources. Such a solution provides HEP users who are not ML experts with a tool that allows them to apply ML techniques in their analyses in a streamlined manner. Over the years the MLaaS4HEP framework has been developed, validated, and tested, and new features have been added. A first MLaaS solution has been developed by automating the deployment of a platform equipped with the MLaaS4HEP framework. Then, a service with APIs has been developed, so that a user, after being authenticated and authorized, can submit MLaaS4HEP workflows producing trained ML models ready for the inference phase. A working prototype of this service is currently running on a virtual machine of INFN-Cloud and meets the requirements to be added to the INFN Cloud portfolio of services.
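    To illustrate what running ML pipelines via HTTP calls can look like from the client side, here is a hedged sketch of submitting a workflow to such a service with Python requests; the endpoint paths, payload fields, and token handling are hypothetical placeholders, not the actual MLaaS4HEP API.

```python
import requests

SERVICE_URL = "https://mlaas.example.org"       # hypothetical service endpoint
TOKEN = "REPLACE_WITH_ACCESS_TOKEN"             # bearer token obtained after authentication

# Hypothetical workflow description: where the ROOT files live, which branches to read,
# and which ML model/parameters to train with.
workflow = {
    "data": ["root://eos.example.org//store/user/demo/events.root"],
    "branches": ["pt", "eta", "phi", "mass"],
    "labels_branch": "is_signal",
    "model": {"type": "keras_sequential", "epochs": 5, "batch_size": 256},
}

# Submit the workflow for training.
resp = requests.post(
    f"{SERVICE_URL}/workflows",                 # hypothetical route
    json=workflow,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
job = resp.json()
print("submitted workflow:", job.get("id"))

# Later: poll the (again hypothetical) status route until the trained model is ready for inference.
status = requests.get(f"{SERVICE_URL}/workflows/{job.get('id')}",
                      headers={"Authorization": f"Bearer {TOKEN}"},
                      timeout=30).json()
print("status:", status.get("state"))
```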

    Gaussian Processes for Machine Learning in Robotics

    Mención Internacional en el título de doctor (International Doctoral Mention). Nowadays, machine learning is widely used in robotics for a variety of tasks such as perception, control, planning, and decision making. Machine learning involves learning, reasoning, and acting based on the data. This is achieved by constructing computer programs that process the data, extract useful information or features, make predictions to infer unknown properties, and suggest actions to take or decisions to make. This computer program corresponds to a mathematical model of the data that describes the relationship between the variables that represent the observed data and properties of interest. The aforementioned model is learned based on the available training data, which is accomplished using a learning algorithm capable of automatically adjusting the parameters of the model to agree with the data. Therefore, the architecture of the model needs to be selected accordingly, which is not a trivial task and usually depends on the machine-learning engineer’s insights and past experience. The number of parameters to be tuned varies significantly with the selected machine learning model, ranging from two or three parameters for Gaussian processes (GP) to hundreds of thousands for artificial neural networks. However, as more complex and novel robotic applications emerge, data complexity increases and prior experience may be insufficient to define adequate mathematical models. In addition, traditional machine learning methods are prone to problems such as overfitting, which can lead to inaccurate predictions and catastrophic failures in critical applications. To overcome these challenges, probabilistic machine learning methods, such as Gaussian processes, have gained popularity. These methods provide probabilistic distributions as model outputs, allowing for estimating the uncertainty associated with predictions and making more informed decisions. That is, they provide a mean and variance for the model responses. This thesis focuses on the application of machine learning solutions based on Gaussian processes to various problems in robotics, with the aim of improving current methods and providing a new perspective. Key areas such as trajectory planning for unmanned aerial vehicles (UAVs), motion planning for robotic manipulators, and model identification of nonlinear systems are addressed. In the field of path planning for UAVs, algorithms based on Gaussian processes that allow for more efficient planning and energy savings in exploration missions have been developed. These algorithms are compared with traditional analytical approaches, demonstrating their superiority in terms of efficiency when using machine learning. Area coverage and linear coverage algorithms with UAV formations are presented, as well as a sea surface search algorithm. Finally, these algorithms are compared with a new method that uses Gaussian processes to perform probabilistic predictions and optimise trajectory planning, resulting in improved performance and reduced energy consumption. Regarding motion planning for robotic manipulators, an approach based on Gaussian process models that provides a significant reduction in computational times is proposed. A Gaussian process model is used to approximate the configuration space of a robot, which provides valuable information to avoid collisions and improve safety in dynamic environments. This approach is compared to conventional collision checking methods and its effectiveness in terms of computational time and accuracy is demonstrated. In this application, the variance provides information about dangerous zones for the manipulator.
In terms of creating models of non-linear systems, Gaussian processes also offer significant advantages. This approach is applied to a soft robotic arm system and UAV energy consumption models, where experimental data is used to train Gaussian process models that capture the relationships between system inputs and outputs. The results show accurate identification of system parameters and the ability to make reliable future predictions. In summary, this thesis presents a variety of applications of Gaussian processes in robotics, from trajectory and motion planning to model identification. These machine learning-based solutions provide probabilistic predictions and improve the ability of robots to perform tasks safely and efficiently. Gaussian processes are positioned as a powerful tool to address current challenges in robotics and open up new possibilities in the field.
Doctoral Programme in Electrical, Electronic and Automation Engineering (Programa de Doctorado en Ingeniería Eléctrica, Electrónica y Automática), Universidad Carlos III de Madrid. Thesis committee: Juan Jesús Romero Cardalda (President), María Dolores Blanco Rojas (Secretary), Giuseppe Carbone (Member).
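    Since the abstract leans on the fact that a Gaussian process returns both a mean and a variance for every prediction, the sketch below shows that behaviour with scikit-learn's GaussianProcessRegressor on a toy one-dimensional problem; the kernel choice and data are illustrative and not taken from the thesis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

# Toy 1-D data: noisy samples of an unknown function.
rng = np.random.default_rng(2)
X_train = rng.uniform(0, 10, size=(25, 1))
y_train = np.sin(X_train).ravel() + rng.normal(0, 0.1, size=25)

# GP prior: signal variance times an RBF kernel, plus a noise term; hyperparameters
# are tuned by maximising the marginal likelihood during fit().
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5, normalize_y=True)
gp.fit(X_train, y_train)

# Predictive mean and standard deviation: the variance flags regions with little data,
# which is the kind of signal the thesis uses, e.g., to mark dangerous zones in motion planning.
X_test = np.linspace(0, 12, 50).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
print(mean[:3], std[:3])          # std grows outside the 0-10 range covered by the data
```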