    Intrusion Response Systems for the 5G Networks and Beyond: A New Joint Security-vs-QoS Optimization Approach

    Network connectivity exposes the network infrastructure and assets to vulnerabilities that attackers can exploit. Protecting network assets against attacks requires the application of security countermeasures. Nevertheless, employing countermeasures incurs costs: monetary costs, along with the time and energy needed to prepare and deploy them. Thus, an Intrusion Response System (IRS) must consider both security and QoS costs when dynamically selecting countermeasures to address detected attacks. This has motivated us to formulate a joint Security-vs-QoS optimization problem for selecting the best countermeasures in an IRS. The problem is then transformed into a matching game-theoretical model. Considering the monetary costs and attack coverage constraints, we first derive the theoretical upper bound for the problem and later propose stable matching-based solutions to address the trade-off. The performance of the proposed solution, considering different settings, is validated over a series of simulations.
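    To make the matching-game idea concrete, the sketch below pairs detected attacks with countermeasures using a Gale-Shapley-style deferred-acceptance procedure, a standard way to compute a stable matching. The attack and countermeasure names, preference lists, and the cost-based ranking are invented for illustration and are not the authors' actual model or data.

```python
# Hypothetical sketch of stable matching between detected attacks and
# countermeasures, in the spirit of a matching-game IRS formulation.
# All names, preferences, and rankings below are illustrative assumptions.

def stable_match(attacks, countermeasures, prefs_a, prefs_c):
    """Gale-Shapley: attacks 'propose' to countermeasures.

    prefs_a[a] = countermeasures ordered by decreasing security benefit
    prefs_c[c] = attacks ordered by increasing QoS/monetary cost to handle
    """
    free = list(attacks)
    next_choice = {a: 0 for a in attacks}    # index into prefs_a[a]
    engaged = {}                             # countermeasure -> attack

    while free:
        a = free.pop(0)
        c = prefs_a[a][next_choice[a]]
        next_choice[a] += 1
        if c not in engaged:
            engaged[c] = a
        else:
            rival = engaged[c]
            # the countermeasure keeps the attack it prefers (lower cost rank)
            if prefs_c[c].index(a) < prefs_c[c].index(rival):
                engaged[c] = a
                free.append(rival)
            else:
                free.append(a)
    return {a: c for c, a in engaged.items()}

attacks = ["dos", "sqli"]
cms = ["rate_limit", "waf_rule"]
prefs_a = {"dos": ["rate_limit", "waf_rule"], "sqli": ["waf_rule", "rate_limit"]}
prefs_c = {"rate_limit": ["dos", "sqli"], "waf_rule": ["sqli", "dos"]}
print(stable_match(attacks, cms, prefs_a, prefs_c))
# {'dos': 'rate_limit', 'sqli': 'waf_rule'}
```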

    Attack Taxonomy Methodology Applied to Web Services

    With the rapid evolution of attack techniques and attacker targets, companies and researchers question the applicability and effectiveness of security taxonomies. Although attack taxonomies allow us to propose a classification scheme, they are easily rendered useless by the generation of new attacks. Due to their distributed and open nature, web services give rise to new security challenges. The purpose of this study is to apply a methodology for categorizing and updating attacks in the face of the continuous creation and evolution of new attack schemes against web services. In this research, we also collected thirty-three (33) types of attacks classified into five (5) categories, namely brute force, spoofing, flooding, denial-of-service, and injection attacks, in order to establish the state of the art of vulnerabilities against web services. Finally, the attack taxonomy is applied to a web service, modeled through attack trees. The use of this methodology allows us to prevent future attacks on many technologies, not only web services.
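    Since the taxonomy is ultimately applied by modeling a web service through attack trees, a minimal AND/OR attack-tree sketch may help fix the idea. The node names and the tiny example tree below are invented for illustration and are not taken from the paper's taxonomy.

```python
# Illustrative attack-tree sketch for a web service. The tree and leaf
# states are invented examples, not the paper's actual model.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    gate: str = "LEAF"             # "AND", "OR", or "LEAF"
    children: list = field(default_factory=list)
    achieved: bool = False         # leaf: is this attacker capability present?

    def realized(self) -> bool:
        """A goal is realized if its gate condition holds over its children."""
        if self.gate == "LEAF":
            return self.achieved
        results = [c.realized() for c in self.children]
        return all(results) if self.gate == "AND" else any(results)

# Root goal: compromise the web service (OR over attack categories)
root = Node("compromise_service", "OR", [
    Node("injection", "AND", [
        Node("find_unsanitized_input", achieved=True),
        Node("craft_sql_payload", achieved=True),
    ]),
    Node("brute_force", "AND", [
        Node("enumerate_accounts", achieved=True),
        Node("no_rate_limiting", achieved=False),
    ]),
])

print(root.realized())  # True: the injection branch is fully achieved
```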

    A Survey on Forensics and Compliance Auditing for Critical Infrastructure Protection

    The broadening dependency and reliance that modern societies have on essential services provided by Critical Infrastructures is increasing the relevance of their trustworthiness. However, Critical Infrastructures are attractive targets for cyberattacks, due to the potential for considerable impact, not just at the economic level but also in terms of physical damage and even loss of human life. Complementing traditional security mechanisms, forensics and compliance auditing processes play an important role in ensuring Critical Infrastructure trustworthiness. Compliance auditing contributes to checking whether security measures are in place and compliant with standards and internal policies. Forensics assists the investigation of past security incidents. Since these two areas significantly overlap in terms of data sources, tools, and techniques, they can be merged into unified Forensics and Compliance Auditing (FCA) frameworks. In this paper, we survey the latest developments, methodologies, challenges, and solutions addressing forensics and compliance auditing in the scope of Critical Infrastructure Protection. This survey focuses on relevant contributions capable of tackling the requirements imposed by massively distributed and complex Industrial Automation and Control Systems, in terms of handling large volumes of heterogeneous data (which can be noisy, ambiguous, and redundant) for analytic purposes with adequate performance and reliability. The achieved results produced a taxonomy in the field of FCA whose key categories denote the relevant topics in the literature. The collected knowledge also resulted in the establishment of a reference FCA architecture, proposed as a generic template for a converged platform. These results are intended to guide future research on forensics and compliance auditing for Critical Infrastructure Protection.

    Performance Analytical Modelling of Mobile Edge Computing for Mobile Vehicular Applications: A Worst-Case Perspective

    Quantitative performance analysis plays a pivotal role in theoretically investigating the performance of Vehicular Edge Computing (VEC) systems. Although considerable research effort has been devoted to VEC performance analysis, existing analytical models were designed to derive the average system performance, paying insufficient attention to worst-case performance analysis, which hinders the practical deployment of VEC systems for mission-critical vehicular applications such as collision avoidance. To bridge this gap, we develop an original performance analytical model based on Stochastic Network Calculus (SNC) to investigate the worst-case end-to-end performance of VEC systems. Specifically, to capture the bursty nature of task generation, an innovative bivariate Markov chain is first established and rigorously analysed to derive the stochastic task envelope. Then, an effective service curve is created to capture the severe resource competition among vehicular applications. Driven by the stochastic task envelope and effective service curve, a closed-form end-to-end analytical model is derived to obtain the latency bound for VEC systems. Extensive simulation experiments validate the accuracy of the proposed analytical model under different system configurations. Furthermore, we exploit the proposed analytical model as a cost-effective tool to investigate resource allocation strategies in VEC systems.
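    For readers unfamiliar with SNC-style latency bounds, the hedged sketch below computes a textbook delay-violation bound for a generic (sigma, rho) MGF arrival envelope served at a constant rate. This is a deliberate simplification for illustration only; it is not the paper's bivariate-Markov task envelope or effective service curve, and all parameter values are assumptions.

```python
# Hedged numeric sketch of a stochastic-network-calculus style delay bound,
# assuming a (sigma, rho) MGF arrival envelope and a constant-rate server.
# NOT the paper's model; parameters below are illustrative.

import math

def delay_violation_bound(theta, sigma, rho, R, d):
    """Union-bound tail estimate of P(delay > d) in discrete time slots.

    Assumes E[exp(theta * A(s, t))] <= exp(theta * (rho*(t-s) + sigma))
    and a work-conserving server of constant rate R with rho < R:
      P(W > d) <= sum_{k>=0} exp(theta*(sigma + rho*k - R*(k + d)))
               =  exp(theta*(sigma - R*d)) / (1 - exp(theta*(rho - R)))
    """
    if rho >= R:
        raise ValueError("unstable: arrival rate envelope >= service rate")
    return math.exp(theta * (sigma - R * d)) / (1 - math.exp(theta * (rho - R)))

# Example: bursty task stream (rho = 8 tasks/slot, burst sigma = 20 tasks)
# served at R = 10 tasks/slot; bound on exceeding a 15-slot deadline.
print(delay_violation_bound(theta=0.1, sigma=20.0, rho=8.0, R=10.0, d=15))
# ~1.2e-05: the deadline is violated with very small probability
```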

    Efficient network management and security in 5G enabled internet of things using deep learning algorithms

    The rise of fifth-generation (5G) networks and the proliferation of internet-of-things (IoT) devices have created new opportunities for innovation and increased connectivity. However, this growth has also brought forth several challenges related to network management and security. A review of the literature shows that the majority of existing research addresses either network management or security, but rarely both. In this paper, we present an integrated framework that addresses both network management and security in 5G internet-of-things (IoT) networks using deep learning algorithms. First, a joint approach combining an attention mechanism with a long short-term memory (LSTM) model is proposed to forecast network traffic and optimize network resources in a service-based and user-oriented manner. The second contribution is the development of a reliable network attack detection system based on an autoencoder mechanism. Finally, a contextual model of 5G-IoT is discussed to demonstrate the scope of the proposed models, quantifying network behavior to drive predictive decision-making in resource allocation and attack detection with performance guarantees. Experiments are conducted with respect to various statistical error analyses and other performance indicators to assess the prediction capability of both the traffic forecasting and attack detection models.
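    The attack-detection idea rests on reconstruction error: an autoencoder trained only on benign traffic reconstructs benign samples well and anomalous ones poorly. The minimal sketch below stands in for that mechanism with a linear autoencoder (equivalent to PCA reconstruction); the feature set, synthetic data, and threshold rule are assumptions, not the paper's actual detector.

```python
# Minimal anomaly-detection sketch in the spirit of autoencoder-based
# attack detection. Uses a *linear* autoencoder (PCA reconstruction)
# trained on synthetic benign traffic features; all data are invented.

import numpy as np

rng = np.random.default_rng(0)

# Benign traffic features (e.g. packet rate, mean size, flow duration)
benign = rng.normal(loc=[100.0, 500.0, 2.0], scale=[10.0, 50.0, 0.5],
                    size=(1000, 3))

# "Train": centre the data and keep the top-k principal components
mean = benign.mean(axis=0)
_, _, Vt = np.linalg.svd(benign - mean, full_matrices=False)
k = 2
W = Vt[:k]                        # tied encoder/decoder weights

def reconstruction_error(x):
    z = (x - mean) @ W.T          # encode to k dimensions
    x_hat = z @ W + mean          # decode back to feature space
    return float(np.sum((x - x_hat) ** 2))

# Threshold: 99th percentile of benign reconstruction error (assumed rule)
errors = np.array([reconstruction_error(x) for x in benign])
threshold = np.percentile(errors, 99)

flood = np.array([900.0, 60.0, 0.1])            # e.g. a flooding-style burst
print(reconstruction_error(flood) > threshold)  # True: flagged as an attack
```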

    Serverless Strategies and Tools in the Cloud Computing Continuum

    In recent years, the popularity of Cloud computing has allowed users to access unprecedented compute, network, and storage resources under a pay-per-use model. This popularity has led to new services that solve specific large-scale computing challenges and simplify the development and deployment of applications. Among the most prominent services in recent years are FaaS (Function as a Service) platforms, whose primary appeal is the ease of deploying small pieces of code in certain programming languages to perform specific tasks on an event-driven basis. These functions are executed on the Cloud provider's servers without users worrying about their maintenance or elasticity management, always keeping a fine-grained pay-per-use model. FaaS platforms belong to the computing paradigm known as Serverless, which aims to abstract the management of servers from the users, allowing them to focus their efforts solely on the development of applications. The problem with FaaS is that it focuses on microservices and tends to have limitations regarding execution time and computing capabilities (e.g. lack of support for acceleration hardware such as GPUs). However, it has been demonstrated that the self-provisioning capability and high degree of parallelism of these services can be well suited to broader applications. In addition, their inherent event-driven triggering makes functions perfectly suitable to be defined as steps in file processing workflows (e.g. scientific computing workflows). Furthermore, the rise of smart and embedded devices (IoT), innovations in communication networks, and the need to reduce latency in challenging use cases have led to the concept of Edge computing, which consists of conducting the processing on devices close to the data sources to improve response times. The coupling of this paradigm with Cloud computing, involving architectures with devices at different levels depending on their proximity to the source and their compute capability, has been coined as the Cloud Computing Continuum (or Computing Continuum). Therefore, this PhD thesis aims to apply different Serverless strategies to enable the deployment of generalist applications, packaged in software containers, across the different tiers of the Cloud Computing Continuum. To this end, multiple tools have been developed in order to: i) adapt FaaS services from public Cloud providers; ii) integrate different software components to define a Serverless platform on on-premises and Edge infrastructures; iii) leverage acceleration devices on Serverless platforms; and iv) facilitate the deployment of applications and workflows through user interfaces. Additionally, several use cases have been created and adapted to assess the developments achieved. Risco Gallardo, S. (2023). Serverless Strategies and Tools in the Cloud Computing Continuum [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/202013
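    The event-driven, stateless function pattern the abstract describes can be sketched generically: a handler is invoked when a file lands in an input bucket, processes it, and writes the result where the next workflow step is triggered. The event shape, bucket names, and storage helpers below are invented stand-ins, not any specific platform's real API or the thesis prototypes.

```python
# Hypothetical FaaS-style handler illustrating an event-driven
# file-processing workflow step. Event layout and helpers are assumptions.

import json

def process_file(data: bytes) -> bytes:
    """Placeholder processing step (e.g. transcode, filter, annotate)."""
    return data.upper()

def handler(event: dict) -> dict:
    """Invoked by the platform when a file lands in an input bucket.

    The function is stateless: it reads its input from the event, writes
    its output to the next stage's bucket, and exits, so the provider can
    scale instances freely (pay-per-use, no server management by the user).
    """
    record = event["records"][0]               # assumed event layout
    key = record["object"]["key"]
    data = storage_get(record["bucket"], key)  # hypothetical storage helper
    result = process_file(data)
    # Writing to the output bucket triggers the next workflow step
    storage_put("output-bucket", key, result)  # hypothetical helper
    return {"status": "ok", "key": key}

# Stand-in storage helpers so the sketch runs locally
_store = {("input-bucket", "report.txt"): b"hello edge"}
def storage_get(bucket, key): return _store[(bucket, key)]
def storage_put(bucket, key, data): _store[(bucket, key)] = data

print(json.dumps(handler(
    {"records": [{"bucket": "input-bucket", "object": {"key": "report.txt"}}]}
)))
```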

    E-learning in the Cloud Computing Environment: Features, Architecture, Challenges and Solutions

    Constantly and consistently improving the quality and reach of the educational system is essential, and e-learning has emerged from this rapid cycle of change and the expansion of new technologies. Advances in information technology have increased network bandwidth and data access speed while reducing data storage costs. In recent years, the implementation of cloud computing in educational settings has garnered the interest of major companies, leading to substantial investments in this area. Cloud computing describes the provision of hosting services over the Internet; it is predicted to be the next generation of information technology architecture and offers great potential to enhance productivity and reduce costs. Cloud service providers offer their processing and memory resources to users, who pay for the resources they use and can access them for their calculations and processing anytime and anywhere. For education, this means students can access a wide variety of instructional engineering materials at any time and from any location, and can share their materials with other community members. The use of cloud computing in e-learning offers several advantages, such as virtually unlimited computing resources, high scalability, and reduced costs. The flexibility of cloud resources for educators and students can, in turn, improve the quality of teaching and learning. In light of this, the current research presents cloud computing technology as a suitable and superior option for e-learning systems.

    Configuration Management of Distributed Systems over Unreliable and Hostile Networks

    Economic incentives of large criminal profits and the threat of legal consequences have pushed criminals to continuously improve their malware, especially command and control channels. This thesis applied concepts from successful malware command and control to explore the survivability and resilience of benign configuration management systems. This work expands on existing stage models of the malware life cycle to contribute a new model for identifying malware concepts applicable to benign configuration management. The Hidden Master architecture is a contribution to master-agent network communication: communication between master and agent is asynchronous and can operate through intermediate nodes. This protects the master secret key, which gives full control of all computers participating in configuration management. Multiple improvements to idempotent configuration were proposed, including the definition of a minimal base resource dependency model, simplified resource revalidation, and the use of an imperative general-purpose language for defining idempotent configuration. Following the constructive research approach, the improvements to configuration management were designed into two prototypes. This allowed validation in laboratory testing, in two case studies, and in expert interviews. In laboratory testing, the Hidden Master prototype was more resilient than leading configuration management tools under high load and low memory conditions, and against packet loss and corruption. Only the research prototype was adaptable to a network without stable topology, owing to the asynchronous nature of the Hidden Master architecture. The main case study used the research prototype in a complex environment to deploy a multi-room, authenticated audiovisual system for a client of the organization deploying the configuration. The case studies indicated that an imperative general-purpose language can be used for idempotent configuration in real life, for defining new configurations in unexpected situations using the base resources and abstracting those using standard language features, and that such a system seems easy to learn. Potential business benefits were identified and evaluated using individual semi-structured expert interviews. Respondents agreed that the models and the Hidden Master architecture could reduce costs and risks, improve developer productivity, and allow faster time-to-market. Protection of master secret keys and the reduced need for incident response were seen as key drivers for improved security. Low-cost geographic scaling and leveraging the file serving capabilities of commodity servers were seen to improve scaling and resiliency. Respondents identified jurisdictional legal limitations on encryption and requirements for cloud operator auditing as factors potentially limiting the full use of some concepts.
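    The "imperative general-purpose language for idempotent configuration" idea can be illustrated briefly: each resource checks the current system state before acting, so re-running the whole script converges without repeating side effects. The resource set, file paths, and return convention below are a minimal sketch of the pattern, not the thesis prototype's actual API.

```python
# Hedged sketch of idempotent configuration resources in an imperative
# language: re-running the script is safe. Paths are illustrative only.

import os

def file_resource(path: str, content: str) -> bool:
    """Ensure `path` exists with `content`; return True if anything changed."""
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == content:
                return False          # already converged: do nothing
    with open(path, "w") as f:
        f.write(content)
    return True

def directory_resource(path: str) -> bool:
    """Ensure the directory exists; return True if it had to be created."""
    if os.path.isdir(path):
        return False
    os.makedirs(path)
    return True

# Being a general-purpose language, ordinary control flow and functions can
# abstract and compose resources. A second run reports "converged".
changed = [
    directory_resource("/tmp/demo-app"),
    file_resource("/tmp/demo-app/app.conf", "port = 8080\n"),
]
print("applied changes" if any(changed) else "converged")
```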

    A Multivocal Literature Review on Non-Technical Debt in Software Development: An Insight into Process, Social, People, Organizational, and Culture Debt

    Software development encompasses various factors beyond technical considerations. Neglecting non-technical elements such as individuals, processes, culture, and social and organizational aspects can lead to debt-like characteristics that demand attention. We therefore introduce the concept of non-technical debt (NTD) to encompass and explore these aspects, which indicates the applicability of the debt analogy to non-technical facets of software development. Technical debt (TD) and NTD share similarities and often arise from risky decision-making processes, impacting both software development professionals and software quality. Overlooking either type of debt can have significant implications for software development success. The current study conducts a comprehensive multivocal literature review (MLR) to explore the most recent research on NTD, its causes, and potential mitigation strategies. For analysis, we carefully selected 40 primary studies from 110 records published up to October 1, 2022. The study investigates the factors contributing to the accumulation of NTD in software development and proposes strategies to alleviate its adverse effects. This MLR offers a contemporary overview and identifies prospects for further investigation, making a valuable contribution to the field. The findings highlight that NTD's impacts extend beyond monetary aspects, setting it apart from TD. They also reveal that rectifying NTD is more challenging than addressing TD, and that its consequences contribute to the accumulation of TD. To avert software project failures, a comprehensive approach that addresses NTD and TD concurrently is crucial. Effective communication and coordination play a vital role in mitigating NTD, and the study proposes the 3C model as a recommended framework to tackle NTD concerns.