
    Networked world: Risks and opportunities in the Internet of Things

    The Internet of Things (IoT) – devices that are connected to the Internet and that collect and use data to operate – is about to transform society. Everything from smart fridges and lightbulbs to remote sensors and entire cities will collect data that can be analysed and used to provide a wealth of bespoke products and services. The impact will be huge: by 2020, some 25 billion devices will be connected to the Internet, with some studies estimating that this number will rise to 125 billion by 2030. These will include many things that have never been connected to the Internet before. Like all new technologies, IoT offers substantial new opportunities, which must be considered in parallel with the new risks that come with it. To make sense of this new world, Lloyd’s worked with University College London’s (UCL) Department of Science, Technology, Engineering and Public Policy (STEaPP) and the PETRAS IoT Research Hub to publish this report. ‘Networked world’ analyses IoT’s opportunities, risks and regulatory landscape. It aims to help insurers understand potential exposures across marine, smart homes, water infrastructure and agriculture, while highlighting the implications for insurance operations and product development. The report also helps risk managers assess how this technology could affect their businesses and consider how they can mitigate the associated risks.

    Digital resilience and financial stability: the quest for policy tools in the financial sector

    As a result of the sweeping transition to a digitalised financial system, digital resilience has become a fundamental pillar of financial stability. Achieving digital resilience poses a broad range of regulatory challenges in responding to a complex combination of risks, consisting essentially of cyber (in)security and the concentration of computing resources in the cloud. This article presents the guiding principles of the new regulatory logic needed in the microprudential and macroprudential fields, highlighting its special features and their relationship to the exceptional combination of risks at stake in the area of digital resilience. It also discusses the need for instrumental innovations, such as greater use of circuit breakers, the singular role of cooperation in cybersecurity regulation, and the unique challenges raised by the regulatory perimeter of digital resilience.

    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society, and the CSE community is at the core of this transformation. However, a combination of disruptive developments – including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers – is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade. (Comment: Major revision, to appear in SIAM Review.)

    A toolbox for Artificial Intelligence Algorithms in Cyber Attacks Prevention and Detection

    Dissertation presented as a partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management. This thesis provides a qualitative view of the use of AI technology in the cybersecurity strategy of businesses. It explores the field of AI technology today and why it is well suited to cybersecurity. The Internet and information technology have transformed today's world, and there is no doubt that they have created huge opportunities for the global economy and humanity. The fact that today's businesses are thoroughly dependent on the Internet and information systems has also exposed new vulnerabilities, in the form of cybercrimes performed by a diversity of hackers, criminals, terrorists, and state and non-state actors. Public and private companies and government agencies are all vulnerable to cybercrime; none is fully protected. In recent years, AI and machine-learning technology have become essential to information security, since these technologies can swiftly analyse millions of data sets and track down a wide range of cyber threats. With the continuing growth of automation in businesses, is it realistic that cybersecurity could move from human interaction to fully independent AI applications covering a business's information-system architecture in the future? This is a field that deserves deeper study in order to take full advantage of the potential of AI technology in cybersecurity. This thesis explores the use of AI algorithms in the prevention and detection of cyberattacks in businesses and how to optimize their use. This knowledge is used to implement a framework and a corresponding hybrid toolbox application whose purpose is to be useful to any business in strengthening its cybersecurity environment.

    Security Aspects of Social Robots in Public Spaces: A Systematic Mapping Study

    Background: As social robots increasingly integrate into public spaces, comprehending their security implications becomes paramount. This study is conducted amidst the growing use of social robots in public spaces (SRPS), emphasising the necessity for tailored security standards for these unique robotic systems. Methods: In this systematic mapping study (SMS), we meticulously review and analyse existing literature from the Web of Science database, following the guidelines of Petersen et al. We employ a structured approach to categorise and synthesise the literature on SRPS security aspects, including physical safety, data privacy, cybersecurity, and legal/ethical considerations. Results: Our analysis reveals a significant gap in existing safety standards, which were originally designed for industrial robots and need to be revised for SRPS. We propose a thematic framework consolidating essential security guidelines for SRPS, substantiated by evidence from a considerable percentage of the primary studies analysed. Conclusions: The study underscores the urgent need for comprehensive, bespoke security standards and frameworks for SRPS. Such standards would ensure that SRPS operate securely and ethically, respecting individual rights and public safety, while fostering seamless integration into diverse human-centric environments. This work is poised to enhance public trust and acceptance of these robots, offering significant value to developers, policymakers, and the general public.

    Machine Learning Threatens 5G Security

    Machine learning (ML) is expected to solve many challenges in the fifth generation (5G) of mobile networks. However, ML will also open the network to several serious cybersecurity vulnerabilities. Most of the learning in ML happens through data gathered from the environment. Unscrutinized data will have serious consequences for machines absorbing that data to produce actionable intelligence for the network. Scrutinizing the data, on the other hand, raises privacy challenges. Unfortunately, most ML systems are borrowed from other disciplines, where they provide excellent results in small closed environments. Deploying such ML systems in 5G can inadvertently open the network to serious security challenges such as unfair use of resources, denial of service, and leakage of private and confidential information. Therefore, in this article we dig into the weaknesses of the most prominent ML systems currently under vigorous research for deployment in 5G. We further classify and survey solutions for avoiding such pitfalls of ML in 5G systems.
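The data-poisoning risk the article describes can be illustrated with a deliberately naive, self-contained sketch. Everything here is invented for illustration (the synthetic traffic figures and the mean-plus-three-sigma detector are not from the article): a detector that learns its baseline from unscrutinized environment data lets an attacker gradually inflate that baseline until real attack traffic evades detection.

```python
import numpy as np

# A naive anomaly detector for network load: flag any sample above
# mean + 3*std of the recent (unscrutinized) measurements it has absorbed.
rng = np.random.default_rng(1)
clean = rng.normal(100, 5, size=500)        # normal traffic, ~100 Mb/s

def threshold(samples):
    """Detection threshold learned directly from observed data."""
    return samples.mean() + 3 * samples.std()

attack = 160.0                              # load of a genuine attack
print(attack > threshold(clean))            # detected on clean data

# The attacker slowly injects inflated-but-plausible samples, widening
# both the learned mean and the learned spread of "normal" traffic...
poisoned = np.concatenate([clean, rng.normal(140, 5, size=300)])
print(attack > threshold(poisoned))         # the same attack now evades
```

The point is not the specific detector but the failure mode: any model that retrains on unvetted environment data inherits whatever an adversary feeds into that environment.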

    Contribution to privacy-enhancing technologies for machine learning applications

    For some time now, big data applications have been enabling revolutionary innovation in every aspect of our daily lives by taking advantage of the wealth of data generated from users' interactions with technology. Supported by machine learning and unprecedented computation capabilities, different entities can efficiently exploit such data to obtain significant utility. However, since personal information is involved, these practices raise serious privacy concerns. Although multiple privacy protection mechanisms have been proposed, several challenges must be addressed before these mechanisms can be adopted in practice, i.e., before they become "usable" beyond the privacy guarantee offered. To start, the real impact of privacy protection mechanisms on data utility is not clear, so an empirical evaluation of that impact is crucial. Moreover, since privacy is commonly obtained through the perturbation of large data sets, usable privacy technologies may require not only preservation of data utility but also algorithms that are efficient in terms of computation speed. Satisfying both requirements is key to encouraging the adoption of privacy initiatives. Although considerable effort has been devoted to designing less "destructive" privacy mechanisms, the utility metrics employed may not be appropriate, so the fitness of such mechanisms may be measured incorrectly. On the other hand, despite the advent of big data, more efficient approaches are not being considered. Failing to meet the requirements of current applications may hinder the adoption of privacy technologies. In the first part of this thesis, we address the problem of measuring the effect of k-anonymous microaggregation on the empirical utility of microdata. We quantify utility as the accuracy of classification models learned from microaggregated data, evaluated over the original test data. 
    Our experiments show that the impact of the de facto microaggregation standard on the performance of machine-learning algorithms is often minor for a variety of data sets. Furthermore, experimental evidence suggests that the traditional measure of distortion in the microdata-anonymization community may be inappropriate for evaluating the utility of microaggregated data. Secondly, we address the problem of preserving the empirical utility of data. By transforming the original data records into a different data space, our approach, based on linear discriminant analysis, enables k-anonymous microaggregation to be adapted to the application domain of the data. To do this, the data is first rotated (projected) towards the direction of maximum discrimination and then scaled in that direction, penalizing distortion across the classification threshold. As a result, data utility is preserved in terms of the accuracy of machine-learned models for a number of standardized data sets. Afterwards, we propose a mechanism to reduce the running time of the k-anonymous microaggregation algorithm, obtained by simplifying the internal operations of the original algorithm. Through extensive experimentation over multiple data sets, we show that the new algorithm is significantly faster. Interestingly, this remarkable speedup is achieved with no additional loss of data utility. Finally, in a more applied vein, a tool is proposed for protecting the privacy of individuals and organizations by anonymizing the sensitive data contained in security logs. Different anonymization mechanisms are designed and implemented on the basis of a defined privacy policy, in the context of a European project whose aim is to build a unified security system.
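As a rough illustration of the k-anonymous microaggregation evaluated in the thesis, the following self-contained sketch groups records and substitutes each group's centroid. It is a simplified stand-in, not the de facto MDAV algorithm the thesis actually measures: here records are simply ordered along the direction of maximum variance and partitioned into groups of at least k.

```python
import numpy as np

def microaggregate(X, k=3):
    """Toy k-anonymous microaggregation: order records along the first
    principal axis, partition into groups of at least k, and replace each
    record by its group centroid. Simplified stand-in for MDAV."""
    X = np.asarray(X, dtype=float)
    centered = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    order = np.argsort(centered @ vt[0])          # sort along max-variance axis
    groups = [order[i:i + k] for i in range(0, len(order), k)]
    if len(groups) > 1 and len(groups[-1]) < k:   # merge an undersized tail group
        tail = groups.pop()
        groups[-1] = np.concatenate([groups[-1], tail])
    Xa = X.copy()
    for g in groups:
        Xa[g] = X[g].mean(axis=0)                 # centroid substitution
    return Xa

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 2))
Xa = microaggregate(X, k=3)
# k-anonymity on the quasi-identifiers: every anonymized record is
# shared by at least k originals.
_, counts = np.unique(Xa, axis=0, return_counts=True)
print(counts.min() >= 3)  # True
```

Note that centroid substitution preserves each attribute's mean exactly; the thesis's contribution is to measure the distortion that remains in terms of downstream classifier accuracy rather than the traditional sum-of-squared-errors metric.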

    Cybersecurity Information Exchange with Privacy (CYBEX-P) and TAHOE – A Cyberthreat Language

    Cybersecurity information sharing (CIS) is envisioned to protect organizations more effectively from advanced cyberattacks. However, a completely automated CIS platform has not been widely adopted. The major challenges are (1) the absence of advanced data-analytics capabilities and (2) the absence of a robust cyberthreat language (CTL). This work introduces Cybersecurity Information Exchange with Privacy (CYBEX-P), a CIS framework, to tackle these challenges. CYBEX-P allows organizations to share heterogeneous data from various sources and correlates the data to automatically generate intuitive reports and defensive rules. To achieve such versatility, we have developed TAHOE, a graph-based CTL. TAHOE is a structure for storing, sharing, and analyzing threat data; it also intrinsically correlates the data. We have further developed a universal Threat Data Query Language (TDQL). In this work, we propose the system architecture for CYBEX-P. We then discuss its scalability, along with a protocol to correlate attributes of threat data. We further introduce TAHOE and TDQL as better alternatives to existing CTLs and formulate ThreatRank, an algorithm to detect new malicious events. We have developed CYBEX-P as a complete CIS platform not only for data sharing but also for advanced threat-data analysis. To that end, we have developed two frameworks that use the CYBEX-P infrastructure as a service (IaaS). The first is a phishing URL detector that uses machine learning to detect new phishing URLs; this real-time system adapts to the ever-changing landscape of phishing URLs and maintains an accuracy of 86%. The second models attacker behavior in a botnet; it combines heterogeneous threat data and analyzes them together to predict the behavior of an attacker in a host infected by bot malware, achieving a prediction accuracy of 85-97%. 
    These two frameworks establish the feasibility of CYBEX-P for advanced threat-data analysis by future researchers.
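To illustrate the kind of lexical machine-learning pipeline a phishing URL detector of this sort typically relies on, here is a toy, self-contained sketch. The feature set, the example URLs, and the plain logistic-regression model below are all invented for illustration; they are not CYBEX-P's actual detector or data.

```python
import numpy as np
from urllib.parse import urlparse

def lexical_features(url):
    """Toy lexical features commonly used in phishing URL detection:
    URL length, dot count, digit count, presence of '@', hyphens in host."""
    host = urlparse(url).netloc
    return np.array([
        len(url),
        url.count('.'),
        sum(c.isdigit() for c in url),
        int('@' in url),
        host.count('-'),
    ], dtype=float)

# Tiny illustrative training set (hypothetical URLs, not real data).
urls = [
    ("https://example.com/login", 0),
    ("https://docs.example.org/guide", 0),
    ("http://paypa1-secure-update.example-attack.com/@verify", 1),
    ("http://192.168.0.1.account-login.bad.example/confirm.php", 1),
]
X = np.stack([lexical_features(u) for u, _ in urls])
y = np.array([label for _, label in urls], dtype=float)

# Logistic regression trained by plain gradient descent.
X = (X - X.mean(0)) / (X.std(0) + 1e-9)          # standardize features
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))           # predicted probabilities
    w -= 0.5 * X.T @ (p - y) / len(y)            # gradient step on weights
    b -= 0.5 * (p - y).mean()                    # gradient step on bias
pred = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(pred)  # expected to separate benign (0) from phishing (1)
```

A production system would of course train on a large, continuously refreshed URL corpus; the "real-time adaptation" the abstract mentions corresponds to periodically retraining such a model on newly labeled URLs.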