53 research outputs found

    Cyber-Physical Threat Intelligence for Critical Infrastructures Security

    Modern critical infrastructures comprise many interconnected cyber and physical assets and are therefore large-scale cyber-physical systems. Hence, the conventional approach of securing these infrastructures by addressing cyber security and physical security separately is no longer effective. Rather, more integrated approaches that address the security of cyber and physical assets at the same time are required. This book presents integrated (i.e. cyber and physical) security approaches and technologies for the critical infrastructures that underpin our societies. Specifically, it introduces advanced techniques for threat detection, risk assessment and security information sharing, based on leading-edge technologies like machine learning, security knowledge modelling, IoT security and distributed ledger infrastructures. Likewise, it presents how established security technologies like Security Information and Event Management (SIEM), pen-testing, vulnerability assessment and security data analytics can be used in the context of integrated Critical Infrastructure Protection. The novel methods and techniques of the book are exemplified in case studies involving critical infrastructures in four industrial sectors, namely finance, healthcare, energy and communications. The peculiarities of critical infrastructure protection in each of these sectors are discussed and addressed with sector-specific solutions. The advent of the fourth industrial revolution (Industry 4.0) is expected to increase the cyber-physical nature of critical infrastructures as well as their interconnection in the scope of sectorial and cross-sector value chains. Therefore, the demand for solutions that foster the interplay between cyber and physical security, and enable Cyber-Physical Threat Intelligence, is likely to explode. In this book, we have shed light on the structure of such integrated security systems, as well as on the technologies that will underpin their operation. We hope that Security and Critical Infrastructure Protection stakeholders will find the book useful when planning their future security strategies.
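    To make the idea of integrated cyber-physical detection concrete, the sketch below correlates a physical access event with a cyber login event, in the spirit of the SIEM-style analytics the book discusses. It is a minimal illustration only; all event fields, names and the correlation rule are hypothetical and not taken from the book.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Dict, List

# Hypothetical event records; field names are illustrative only.
@dataclass
class BadgeSwipe:          # physical-security event
    badge_id: str
    room: str
    time: datetime

@dataclass
class Login:               # cyber-security event
    user: str
    host: str
    time: datetime

def correlate(swipes: List[BadgeSwipe], logins: List[Login],
              badge_of_user: Dict[str, str],
              window: timedelta = timedelta(minutes=5)) -> List[str]:
    """Toy SIEM-style rule: flag a login on a protected host that is not
    preceded, within `window`, by a badge swipe of the same person
    (the room/host mapping is omitted for brevity)."""
    alerts = []
    for login in logins:
        badge = badge_of_user.get(login.user)
        matched = any(s.badge_id == badge and
                      login.time - window <= s.time <= login.time
                      for s in swipes)
        if not matched:
            alerts.append(f"{login.time:%H:%M} login by {login.user} on "
                          f"{login.host} with no matching physical access")
    return alerts

if __name__ == "__main__":
    t0 = datetime(2024, 1, 1, 9, 0)
    print(correlate([BadgeSwipe("B42", "server-room", t0)],
                    [Login("alice", "scada-gw", t0 + timedelta(minutes=20))],
                    {"alice": "B42"}))
```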

    Journal of Telecommunications and Information Technology, 2004, no. 4

    Quarterly.

    Contributions to security and privacy protection in recommendation systems

    A recommender system is an automatic system that, given a customer model and a set of available documents, is able to select and offer the documents that are most interesting to the customer. From the point of view of security, there are two main issues that recommender systems must face: protection of the users' privacy and protection of the other participants in the recommendation process. Recommenders issue personalized recommendations taking into account not only the profiles of the documents, but also the private information that customers send to the recommender. Hence, user profiles include personal and highly sensitive information, such as their likes and dislikes. In order to have a really useful recommender system and improve its efficiency, we believe that users should not be afraid of stating their preferences. The second challenge from the point of view of security involves protection against a new kind of attack. Copyright holders have shifted their targets to attack document providers and any other participant that aids in the process of distributing documents, even unknowingly. In addition, new legislation trends such as ACTA or the "Sinde-Wert" law in Spain show the interest of states all over the world in controlling and prosecuting these intermediate nodes. We propose the following contributions:

    1. A social model that captures users' interests in user profiles, and a metric function that calculates the similarity between users, queries and documents. This model represents profiles as vectors in a social space. Document profiles are created by inspecting the contents of the document. User profiles are then calculated as an aggregation of the profiles of the documents that the user owns. Finally, queries are a constrained view of a user profile. This way, all profiles are contained in the same social space, and the similarity metric can be used on any pair of them.
    2. Two mechanisms to protect the personal information that user profiles contain. The first mechanism takes advantage of the Johnson-Lindenstrauss and undecomposability-of-random-matrices theorems to project profiles into social spaces of fewer dimensions. Even though the information about the user is reduced in the projected social space, under certain circumstances the distances between the original profiles are preserved. The second approach uses a zero-knowledge protocol to answer the question of whether or not two profiles are affine without leaking any information in case they are not.
    3. A distributed system on a cloud that protects merchants, customers and indexers against legal attacks by providing plausible deniability and oblivious routing to all the participants of the system. We use the term DocCloud to refer to this system. DocCloud organizes databases in a tree-shaped structure over a cloud system and provides a Private Information Retrieval protocol so that no participant or observer of the process can identify the recommender. This way, customers, intermediate nodes and even databases are not aware of the specific database that answered the query.
    4. A social P2P network where users link together according to their similarity and provide recommendations to other users in their neighborhood. We define an epidemic protocol where links are established based on neighbor similarity, clustering and randomness. Additionally, we propose mechanisms such as the use of SoftDHT to aid in the identification of affine users and to speed up the creation of clusters of similar users.
    5. A document distribution system that delivers the recommended documents at the end of the process. In our view of a recommender system, the recommendation is a complete process that ends when the customer receives the recommended document. We propose SCFS, a distributed and secure filesystem in which merchants, documents and users are protected.

    This document explores how to locate documents that are interesting to the user in large distributed networks through the use of recommender systems. A recommender system is defined as an automatic system that, given a customer model and a set of available documents, is able to select and offer the documents that are most interesting to the customer. The desirable characteristics of a recommender system are: (i) to be fast, (ii) distributed and (iii) secure. A fast recommender system improves the customer's shopping experience, since a recommendation is not useful if it arrives too late. A distributed recommender system avoids the creation of centralized databases with sensitive information and improves the availability of the documents. Finally, a secure recommender system protects all the participants in the system: users, content providers, recommenders and intermediate nodes. From the point of view of security, there are two main problems that recommender systems must face: (i) the protection of the users' privacy and (ii) the protection of the other participants in the recommendation process. Recommenders are able to issue personalized recommendations taking into account not only the profiles of the documents, but also the private information that customers send to the recommender. Therefore, user profiles include personal and highly sensitive information, such as their likes and dislikes. In order to develop a useful recommender system and improve its efficiency, we believe that users should not be afraid of expressing their preferences. To that end, the personal information included in user profiles must be protected and the users' privacy guaranteed. The second challenge from the point of view of security involves a new kind of attack. Since preventing the illegal distribution of copyrighted documents by technical means has not been effective, copyright holders have shifted their targets to attack document providers and any other participant that aids in the process of distributing documents. In addition, treaties and laws such as ACTA, the SOPA bill in the USA or the "Sinde-Wert" law in Spain demonstrate the interest of states all over the world in controlling and prosecuting these intermediate nodes. Recent trials such as MegaUpload, PirateBay or the case against Mr. Pablo Soto in Spain show that these threats are a reality.
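    A minimal sketch of the first two contributions, under assumptions the abstract does not spell out (term-count document profiles, sum aggregation, cosine similarity, and a Gaussian random projection shared by all parties); it is illustrative only and not the thesis implementation:

```python
import numpy as np

def document_profile(text: str, vocabulary: list) -> np.ndarray:
    """Toy document profile: term-count vector over a fixed vocabulary."""
    words = text.lower().split()
    return np.array([words.count(term) for term in vocabulary], dtype=float)

def user_profile(owned_doc_profiles: list) -> np.ndarray:
    """User profile as an aggregation (here: a sum) of owned document profiles."""
    return np.sum(owned_doc_profiles, axis=0)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity; works on any pair of profiles in the same social space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def projection_matrix(d: int, k: int, seed: int = 0) -> np.ndarray:
    """Johnson-Lindenstrauss style Gaussian projection to k < d dimensions.
    All parties must share the same matrix so that pairwise distances between
    projected profiles are approximately preserved while the original
    high-dimensional profile is not revealed."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 1.0, size=(k, d)) / np.sqrt(k)

if __name__ == "__main__":
    vocab = ["privacy", "security", "music", "film"]
    docs = [document_profile("privacy security privacy", vocab),
            document_profile("music film music", vocab)]
    alice = user_profile(docs)
    query = document_profile("privacy film", vocab)
    print("similarity in clear:", similarity(alice, query))
    R = projection_matrix(d=len(vocab), k=2)
    print("similarity projected:", similarity(R @ alice, R @ query))
```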

    The 11th Conference of PhD Students in Computer Science


    Contents


    Information security and assurance: Proceedings of the international conference, ISA 2012, Shanghai, China, April 2012


    Group Key Agreement for Ad Hoc Networks

    Over the last 30 years, the study of group key agreement has stimulated much work, and as a result of the increased popularity of ad hoc networks, several approaches for group key establishment in such networks have been proposed. However, they either support only static groups or incur memory, computation and communication costs that are unacceptable for ad hoc networks. In this thesis, several protocol suites from the literature (2^d-cube, 2^d-octopus, Asokan-Ginzboorg, CLIQUES, STR and TGDH) are discussed. We have optimized STR and TGDH by reducing their memory, communication and computation costs; the optimized versions are denoted µSTR and µTGDH, respectively. Based on the protocol suites µSTR and µTGDH, we present a Tree-based group key agreement Framework for Ad-hoc Networks (TFAN). TFAN is especially suitable for ad hoc networks with limited bandwidth and devices with limited memory and computation capability. To simulate the protocols, we have implemented TFAN, µSTR and µTGDH with J2ME CDC. The TFAN API is described in this thesis.
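    As a rough illustration of the tree-based idea behind STR/TGDH-style protocols (not the optimized µSTR/µTGDH of this thesis, and deliberately simplified: a fixed skewed tree, no authentication, no rekeying on membership changes), the sketch below derives a shared group key in which each internal node's secret is the Diffie-Hellman combination of one child's secret with the other child's blinded key:

```python
# Toy STR/TGDH-style key tree (illustrative only).
P = 2**61 - 1   # Mersenne prime; real deployments use large safe primes or EC groups
G = 5

def blind(secret: int) -> int:
    """Blinded (public) key g^secret mod p, the value broadcast for a node."""
    return pow(G, secret, P)

def parent_secret(child_secret: int, sibling_blinded: int) -> int:
    """Parent node secret = DH of one child's secret with the sibling's
    blinded key, i.e. g^(kL * kR) mod p."""
    return pow(sibling_blinded, child_secret, P)

# Three members a, b, c arranged in a skewed (STR-like) tree: ((a, b), c).
a, b, c = 0x1234, 0x5678, 0x9abc

# Member A: knows a, learns blind(b) and blind(c) from broadcasts.
k_ab_A = parent_secret(a, blind(b))
root_A = parent_secret(k_ab_A, blind(c))

# Member C: knows c, learns the blinded key of the (a, b) subtree.
k_ab_public = blind(parent_secret(b, blind(a)))   # computed and broadcast by A or B
root_C = parent_secret(c, k_ab_public)

assert root_A == root_C   # all members derive the same group key
print(hex(root_A))
```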

    Deep Learning and parallelization of Meta-heuristic Methods for IoT Cloud

    Healthcare 4.0 is one of the outcomes of the Fourth Industrial Revolution and is driving a major transformation of the medical field. It brings facilities and advantages that improve average life expectancy and reduce population mortality. This paradigm depends on intelligent medical devices (wearable devices, sensors), which generate massive amounts of data that need to be analyzed and treated with appropriate data-driven algorithms powered by Artificial Intelligence, such as machine learning and deep learning (DL). However, one of the most significant limitations of DL techniques is the long time required for the training process. Meanwhile, the real-time application of DL techniques, especially in sensitive domains such as healthcare, is still an open question that needs to be addressed. On the other hand, meta-heuristics have achieved good results in optimizing machine learning models. The Internet of Things (IoT) integrates billions of smart devices that can communicate with one another with minimal human intervention. IoT technologies are crucial in enhancing several real-life smart applications that can improve quality of life. Cloud Computing has emerged as a key enabler for IoT applications because it provides scalable, on-demand, anytime, anywhere access to computing resources. In this thesis, we are interested in improving the efficacy and performance of computer-aided diagnosis systems in the medical field by decreasing the complexity of the model and increasing the quality of the data. To accomplish this, three contributions are proposed. First, we propose a computer-aided diagnosis system for neonatal seizure detection that uses metaheuristics to optimize a convolutional neural network (CNN) model and enhance the system's performance. Secondly, we focus on the COVID-19 pandemic and propose a computer-aided diagnosis system for its detection, in which we investigate the Marine Predator Algorithm to optimize the configuration of the CNN model and improve the system's performance. The third contribution aims to further improve the performance of the computer-aided diagnosis system for COVID-19 by optimizing the data using methods such as Principal Component Analysis (PCA), the Discrete Wavelet Transform (DWT), and the Teager-Kaiser Energy Operator (TKEO). The proposed methods and the obtained results were validated with comparative studies using benchmark and public medical data.
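    The abstract does not give implementation details; the following sketch illustrates two of the named ingredients under simplifying assumptions: the Teager-Kaiser Energy Operator applied to a 1-D signal, and a generic search over hypothetical CNN hyperparameters with a placeholder fitness function standing in for actual training and validation (the thesis uses the Marine Predator Algorithm, which is not reproduced here):

```python
import numpy as np

def tkeo(x: np.ndarray) -> np.ndarray:
    """Teager-Kaiser Energy Operator: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

# Hypothetical CNN hyperparameter ranges; names are illustrative only.
SPACE = {"learning_rate": (1e-4, 1e-1), "num_filters": (8, 128), "kernel_size": (2, 7)}

def random_config(rng: np.random.Generator) -> dict:
    lo, hi = SPACE["learning_rate"]
    return {"learning_rate": 10 ** rng.uniform(np.log10(lo), np.log10(hi)),
            "num_filters": int(rng.integers(*SPACE["num_filters"])),
            "kernel_size": int(rng.integers(*SPACE["kernel_size"]))}

def fitness(cfg: dict) -> float:
    """Placeholder: in the real system this would train the CNN with `cfg` and
    return validation accuracy; a synthetic score keeps the sketch runnable."""
    return -abs(np.log10(cfg["learning_rate"]) + 2.5) - abs(cfg["num_filters"] - 64) / 64

def search(iterations: int = 50, seed: int = 0):
    """Plain random search over configurations; a metaheuristic such as the
    Marine Predator Algorithm would replace this sampling strategy with
    guided position updates of a candidate population."""
    rng = np.random.default_rng(seed)
    best_cfg, best_fit = None, -np.inf
    for _ in range(iterations):
        cfg = random_config(rng)
        fit = fitness(cfg)
        if fit > best_fit:
            best_cfg, best_fit = cfg, fit
    return best_cfg, best_fit

if __name__ == "__main__":
    print(tkeo(np.sin(np.linspace(0.0, 6.28, 16)))[:4])
    print(search())
```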