
    Enhanced Study of Deep Learning Algorithms for Web Vulnerability Scanner

    The detection of web vulnerabilities is a central task in network security. In this paper, deep learning methodologies for this challenging problem are investigated using convolutional neural networks, long short-term memory networks, and generative adversarial networks. Experimental results demonstrate that the deep learning approaches significantly outperform standard methods. In addition, we examine the various factors that affect performance. This work can provide researchers with useful guidance when designing network architectures and parameters for identifying web attacks.
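    As one concrete illustration of the kind of model the abstract mentions (not the paper's exact architecture), the sketch below shows a character-level LSTM classifier that scores an HTTP request string as benign or malicious; the vocabulary size, layer widths, and the example request are assumptions.

```python
# Minimal sketch: an LSTM classifier over character-level tokens of an HTTP
# request string, assuming a binary benign/malicious label per request.
import torch
import torch.nn as nn

class PayloadLSTM(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)      # benign vs. malicious

    def forward(self, token_ids):                 # (batch, seq_len)
        x = self.embed(token_ids)                 # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)                # h_n: (1, batch, hidden_dim)
        return self.head(h_n[-1])                 # (batch, 2) class logits

# Example: encode a request as ASCII codes and score it (untrained weights).
model = PayloadLSTM()
request = "GET /index.php?id=1' OR '1'='1"
tokens = torch.tensor([[min(ord(c), 127) for c in request]])
logits = model(tokens)
```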

    Recent Advances in Deep Learning Techniques for Face Recognition

    In recent years, researchers have proposed many deep learning (DL) methods for various tasks, and face recognition (FR) in particular has made an enormous leap using these techniques. Deep FR systems benefit from the hierarchical architecture of DL methods to learn discriminative face representations. DL techniques have therefore significantly improved state-of-the-art performance of FR systems and encouraged diverse and efficient real-world applications. In this paper, we present a comprehensive analysis of FR systems that leverage different types of DL techniques, summarizing 168 recent contributions from this area. We discuss papers related to different algorithms, architectures, loss functions, activation functions, datasets, challenges, improvement ideas, and current and future trends of DL-based FR systems. We provide a detailed discussion of various DL methods to understand the current state of the art, and then discuss various activation and loss functions for these methods. Additionally, we summarize the datasets widely used for FR tasks and discuss challenges related to illumination, expression, pose variations, and occlusion. Finally, we discuss improvement ideas and current and future trends of FR tasks. Comment: 32 pages. Citation: M. T. H. Fuad et al., "Recent Advances in Deep Learning Techniques for Face Recognition," in IEEE Access, vol. 9, pp. 99112-99142, 2021, doi: 10.1109/ACCESS.2021.309613
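    The survey discusses loss functions for deep face recognition; a representative member of that family is an additive angular margin (ArcFace-style) classification head. The sketch below is illustrative only: the embedding size, number of identities, scale s, and margin m are placeholder values, not taken from the paper, and the easy-margin corrections of the full formulation are omitted.

```python
# Simplified additive angular margin softmax head for face embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginHead(nn.Module):
    def __init__(self, embed_dim=512, num_classes=1000, s=64.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.s, self.m = s, m

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalised embeddings and class centres.
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        # Add the angular margin only to the target class, then rescale.
        target_cos = torch.cos(theta + self.m)
        onehot = F.one_hot(labels, cos.size(1)).float()
        logits = self.s * (onehot * target_cos + (1 - onehot) * cos)
        return F.cross_entropy(logits, labels)

# Usage: loss = ArcMarginHead()(face_embeddings, identity_labels)
```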

    Methods and Techniques for Dynamic Deployability of Software-Defined Security Services

    With the recent trend of “network softwarisation”, enabled by emerging technologies such as Software-Defined Networking and Network Function Virtualisation, system administrators of data centres and enterprise networks have started replacing dedicated hardware-based middleboxes with virtualised network functions running on servers and end hosts. This radical change has facilitated the provisioning of advanced and flexible network services, ultimately helping system administrators and network operators to cope with rapid changes in service requirements and networking workloads. This thesis investigates the challenges of provisioning network security services in “softwarised” networks, where the security of residential and business users can be provided by sets of software-based network functions running on high-performance servers or on commodity devices. The study is approached from the perspective of the telecom operator, whose goal is to protect customers from network threats and, at the same time, maximize the number of provisioned services, and thereby revenue. Specifically, the overall aim of the research presented in this thesis is to propose novel techniques for optimising the resource usage of software-based security services, thereby increasing the operator's ability to accommodate more service requests while respecting the desired level of network security for its customers. In this direction, the contributions of this thesis are: (i) a solution for the dynamic provisioning of security services that minimises the utilisation of computing and network resources, and (ii) novel methods based on Deep Learning and Linux kernel technologies for reducing the CPU usage of software-based security network functions, with a specific focus on the defence against Distributed Denial of Service (DDoS) attacks. The experimental results reported in this thesis demonstrate that the proposed solutions for service provisioning and DDoS defence require fewer computing resources than similar approaches available in the scientific literature or adopted in production networks.
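    As a purely illustrative sketch of a learning-based DDoS filter (not the thesis' actual pipeline, which combines Deep Learning with Linux kernel technologies), the snippet below scores per-source traffic statistics with a tiny neural classifier; the feature set and the example numbers are assumptions.

```python
# Tiny fully-connected classifier over per-source traffic statistics
# (packet rate, mean packet size, SYN ratio); sources flagged as attack
# candidates could then be dropped early in the packet-processing path.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Linear(3, 16), nn.ReLU(),
    nn.Linear(16, 2),            # logits: [legitimate, attack]
)

# One flow record: 9,000 pkt/s, 64-byte average packets, 97% SYN packets.
features = torch.tensor([[9000.0, 64.0, 0.97]])
decision = classifier(features).argmax(dim=1)   # 0 = keep, 1 = drop candidate
```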

    Deep neural networks in the cloud: Review, applications, challenges and research directions

    Deep neural networks (DNNs) are currently being deployed as machine learning technology in a wide range of important real-world applications. DNNs consist of a huge number of parameters that require millions of floating-point operations (FLOPs) to be executed in both learning and prediction modes. An effective way to meet these computational demands is to implement DNNs in a cloud computing system equipped with centralized servers and data storage sub-systems with high-speed, high-performance computing capabilities. This paper presents an up-to-date survey on the current state of the art in deploying DNNs on cloud computing. Various DNN complexities associated with different architectures are presented and discussed, alongside the necessity of using cloud computing. We also present an extensive overview of different cloud computing platforms for the deployment of DNNs and discuss them in detail. Moreover, DNN applications already deployed in cloud computing systems are reviewed to demonstrate the advantages of using cloud computing for DNNs. The paper emphasizes the challenges of deploying DNNs in cloud computing systems and provides guidance on enhancing current and new deployments. Funding: the EGIA project (KK-2022/00119) and the Consolidated Research Group MATHMODE (IT1456-22).
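    To make the resource argument concrete, the sketch below counts parameters and multiply-accumulate operations for a small fully connected network; the layer sizes are arbitrary examples, not figures from the survey.

```python
# Rough back-of-the-envelope sizing of a fully-connected network.
layer_sizes = [784, 4096, 4096, 1000]   # input, two hidden layers, output

params = sum((n_in + 1) * n_out          # weights plus biases per layer
             for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
macs_per_inference = sum(n_in * n_out    # one multiply-accumulate per weight
                         for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

print(f"parameters: {params:,}")                          # ~24 million
print(f"MACs per forward pass: {macs_per_inference:,}")   # ~24 million
```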

    Foundations and Technological Landscape of Cloud Computing

    The cloud computing paradigm has brought the benefits of utility computing to a global scale and has gained paramount attention in recent years. Companies are seriously considering adopting this new paradigm and expect to receive significant benefits. In fact, the concept of cloud computing is not a revolution in terms of technology; it has been established on the solid ground of virtualization, distributed systems, and web services. To comprehend cloud computing, its foundations and technological landscape need to be adequately understood. This paper provides a comprehensive review of the building blocks of cloud computing and the relevant technological aspects. It focuses on four key areas: architecture, virtualization, data management, and security issues.

    Cyber Security of Critical Infrastructures

    Critical infrastructures are vital assets for public safety, economic welfare, and the national security of countries. The vulnerabilities of critical infrastructures have increased with the widespread use of information technologies. As Critical National Infrastructures become more vulnerable to cyber-attacks, their protection becomes a significant issue for organizations as well as nations. The risks to continued operations from failing to upgrade aging infrastructure or to meet mandated regulatory regimes are considered highly significant, given the demonstrable impact of such circumstances. Due to the rapid increase of sophisticated cyber threats targeting critical infrastructures with significant destructive effects, the cybersecurity of critical infrastructures has become an agenda item for academics, practitioners, and policymakers. A holistic view that covers technical, policy, human, and behavioural aspects is essential to handle the cyber security of critical infrastructures effectively. Moreover, the ability to attribute crimes to criminals is a vital element of avoiding impunity in cyberspace. This book presents both research and practical aspects of cyber security considerations in critical infrastructures. Aligned with the interdisciplinary nature of cyber security, authors from academia, government, and industry have contributed 13 chapters. The issues discussed and analysed include cybersecurity training, maturity assessment frameworks, malware analysis techniques, ransomware attacks, security solutions for industrial control systems, and privacy preservation methods.

    Imaging Sensors and Applications

    In past decades, various sensor technologies have been used in all areas of our lives, improving our quality of life. In particular, imaging sensors have been widely applied in the development of imaging approaches such as optical imaging, ultrasound imaging, X-ray imaging, and nuclear imaging, and have contributed to achieving high sensitivity, miniaturization, and real-time imaging. These advanced image sensing technologies play an important role not only in the medical field but also in the industrial field. This Special Issue covers broad topics on imaging sensors and their applications. Its scope extends to novel imaging sensors and diverse imaging systems, including hardware and software advancements. Additionally, biomedical and nondestructive sensing applications are welcome.

    Stochastic Model Predictive Control and Machine Learning for the Participation of Virtual Power Plants in Simultaneous Energy Markets

    The emergence of distributed energy resources in the electricity system gives rise to new scenarios in which domestic consumers (end-users) can be aggregated to participate in energy markets, acting as prosumers. Every prosumer is considered an individual energy node with its own renewable generation source, its controllable and non-controllable loads, and even its own individual tariffs. The nodes can form aggregations that are managed by a system operator. Participation in energy markets is not trivial for individual prosumers because of the technical requirements that must be satisfied and the need to trade a minimum volume of energy; these requirements can be met through aggregated participation. In this context, the aggregator handles the difficult task of coordinating and stabilizing the prosumers' operations, not only at an individual level but also at a system level, so that the set of energy nodes behaves as a single entity with respect to the market. The system operator can act as a trading and distributing company, or as a trading company only. For this reason, the optimization model must consider not only aggregated tariffs but also individual tariffs, so that each energy node can be billed independently. Each energy node must have the required technical and legal competences, as well as the necessary equipment, to manage its participation in energy markets or to delegate it to the system operator. This aggregation according to business rules, and not only to physical location, is known as a virtual power plant.
    Optimizing the aggregated participation in the different energy markets requires the concept of dynamic storage virtualization; therefore, every energy node in the system under study has a battery installed to store excess energy. Dynamic virtualization defines logical partitions in the storage system so that it can be used for different purposes. As an example, two partitions can be defined: one for the aggregated participation in the day-ahead market and one for the demand-response program, allowing the system to serve both markets simultaneously with the same grid and the same storage. These logical partitions can be redefined dynamically, depending on the execution context and on the state of the network and of each individual node.
    Several criteria must be considered when defining the participation strategy. A risky strategy yields higher trading profits, but it is also more likely to incur penalties for not meeting the contract because of forecast uncertainty or operation errors. A conservative strategy, on the other hand, is less profitable in trading terms but reduces these potential penalties. Dynamic intent profiles make it possible to place risky bids when the expected forecast error in generation, load, or failures is low, and conservative bids otherwise. The system operator is the agent who decides how much energy is reserved for trading, how much for the self-consumption of the energy nodes, how much for participation in the demand-response program, and so on. The large number of variables and states makes this problem too complex to solve with classical methods, especially since small errors in these decisions can have significant economic consequences in the short term.
    The concept of dynamic storage virtualization has been studied and implemented to allow simultaneous participation in multiple energy markets. The simultaneous participations can be optimized with respect to potential profit, potential risk, or a combination of both according to more advanced criteria derived from the system operator's know-how. Day-ahead bidding algorithms, an optimization of demand-response program participation, and a penalty-reduction operation control algorithm based on model predictive control have been developed. A stochastic layer has been defined and implemented to improve the robustness of a system that depends so heavily on forecasting. This layer is formulated with chance constraints and combines probabilistic statistical methods with an intelligent agent based on an encoder-decoder architecture built with neural networks composed of gated recurrent units. The formulation and the implementation keep all the algorithms fully decoupled, with no dependencies among them; nevertheless, they remain coordinated, because each individual execution considers both the current scenario and the selected strategy. This enables a wider and better context definition and a more realistic and accurate situation awareness. In addition to the relevant simulation runs, the platform has been tested on a real system composed of 40 suitably equipped energy nodes for one year on the German island of Borkum. This experience allowed very satisfactory conclusions to be drawn about the deployment of the platform in real environments.
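    As a minimal sketch of how dynamic storage virtualization and the chance-constrained formulation can interact (assumed numbers and a Gaussian forecast-error model, not the thesis' actual optimization), the snippet below splits a node's battery into two logical partitions and sizes a day-ahead bid that is met with a chosen probability.

```python
# Chance-constrained day-ahead bid against a logically partitioned battery:
# P(surplus >= bid) >= 1 - epsilon, assuming Gaussian forecast error.
from scipy.stats import norm

battery_kwh = 10.0
partitions = {"day_ahead": 0.7, "demand_response": 0.3}   # dynamic split

surplus_mean_kwh = 6.0       # forecast energy surplus for the delivery hour
surplus_std_kwh = 1.5        # forecast uncertainty (standard deviation)
epsilon = 0.05               # accepted probability of missing the contract

# Largest bid that is met with probability >= 95% under the Gaussian model.
chance_bid = surplus_mean_kwh + norm.ppf(epsilon) * surplus_std_kwh

# The bid can never exceed what the day-ahead partition is allowed to hold.
day_ahead_cap = battery_kwh * partitions["day_ahead"]
bid_kwh = max(0.0, min(chance_bid, day_ahead_cap))
print(f"day-ahead bid: {bid_kwh:.2f} kWh")
```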

    Magnetic and Newtonian noises in Advanced Virgo: evaluation and mitigation strategies

    In the present study, I present the first detailed estimate of the magnetic noise contribution to the Advanced Virgo sensitivity to gravitational waves. I tackle the topic with experimental assessments and numerical finite element simulations, all accompanied by careful data analysis. The results suggest that the impact of magnetic noise on Advanced Virgo is not dramatic, but it will become a considerable issue once the detector approaches its final design. In anticipation of that, I propose a mitigation strategy based on passive magnetic field shielding. In the second part, I deal with seismic Newtonian noise, focusing on two crucial aspects of the noise cancellation pipeline: the choice of the subtraction filter and the optimization of the seismic sensor array placement. The former required defining and fine-tuning a machine learning algorithm based on deep neural networks; the results indicate good performance compared to the standard Wiener filter approach. The problem of sensor deployment is instead addressed with a finite element analysis of the actual Virgo infrastructure and of the underground soil layers surrounding the test masses.
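    For context on the baseline mentioned above, the sketch below computes a static multi-channel Wiener filter that predicts a target channel from an array of witness (seismometer) channels and subtracts the prediction; the data are synthetic stand-ins, and the deep-network alternative studied in the thesis is not shown.

```python
# Zero-lag multi-channel Wiener subtraction on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_sensors = 10_000, 5

seismic = rng.standard_normal((n_samples, n_sensors))   # witness channels
coupling = rng.standard_normal(n_sensors)               # unknown linear coupling
target = seismic @ coupling + 0.1 * rng.standard_normal(n_samples)

# Optimal linear (zero-lag) Wiener coefficients: w = Rxx^-1 rxy.
Rxx = seismic.T @ seismic / n_samples
rxy = seismic.T @ target / n_samples
w = np.linalg.solve(Rxx, rxy)

residual = target - seismic @ w
print("residual / target RMS:", np.std(residual) / np.std(target))
```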