8 research outputs found

    Collaborative Networks, Decision Systems, Web Applications and Services for Supporting Engineering and Production Management

    This book focuses on fundamental and applied research on collaborative and intelligent networks, decision systems, and services for supporting engineering and production management, along with other kinds of problems and services. The development and application of innovative collaborative approaches and systems are currently of prime importance in Industry 4.0. Special attention is given to flexible and cyber-physical systems, and to advanced design, manufacturing and management based on artificial intelligence approaches and practices, among others, including social systems and services.

    White Paper 11: Artificial intelligence, robotics & data science

    This CSIC white paper on Artificial Intelligence, Robotics and Data Science sketches a preliminary roadmap for addressing current R&D challenges associated with automated and autonomous machines. More than 50 research challenges, investigated across Spain by more than 150 CSIC experts, are presented in eight chapters. Chapter One introduces key concepts and tackles the issue of integrating knowledge (representation), reasoning and learning in the design of artificial entities. Chapter Two analyses challenges associated with the development of theories, and supporting technologies, for modelling the behaviour of autonomous agents; specifically, it pays attention to the interplay between elements at the micro level (individual autonomous agent interactions) and the macro world (the properties we seek in large and complex societies). Chapter Three discusses the variety of data science applications currently used in all fields of science, paying particular attention to Machine Learning (ML) techniques, while Chapter Four presents current developments in various areas of robotics. Chapter Five explores the challenges associated with computational cognitive models. Chapter Six pays attention to the ethical, legal, economic and social challenges that come alongside the development of smart systems. Chapter Seven engages with the problem of the environmental sustainability of deploying intelligent systems at large scale. Finally, Chapter Eight deals with the complexity of ensuring the security, safety, resilience and privacy protection of smart systems against cyber threats.
    Challenges and coordinators:
    Executive Summary. Topic coordinators: Sara Degli Esposti (IPP-CCHS, CSIC) and Carles Sierra (IIIA, CSIC)
    Challenge 1, Integrating Knowledge, Reasoning and Learning: Felip Manyà (IIIA, CSIC) and Adrià Colomé (IRI, CSIC – UPC)
    Challenge 2, Multiagent Systems: N. Osman (IIIA, CSIC) and D. López (IFS, CSIC)
    Challenge 3, Machine Learning and Data Science: J. J. Ramasco Sukia (IFISC) and L. Lloret Iglesias (IFCA, CSIC)
    Challenge 4, Intelligent Robotics: G. Alenyà (IRI, CSIC – UPC) and J. Villagra (CAR, CSIC)
    Challenge 5, Computational Cognitive Models: M. D. del Castillo (CAR, CSIC) and M. Schorlemmer (IIIA, CSIC)
    Challenge 6, Ethical, Legal, Economic, and Social Implications: P. Noriega (IIIA, CSIC) and T. Ausín (IFS, CSIC)
    Challenge 7, Low-Power Sustainable Hardware for AI: T. Serrano (IMSE-CNM, CSIC – US) and A. Oyanguren (IFIC, CSIC - UV)
    Challenge 8, Smart Cybersecurity: D. Arroyo Guardeño (ITEFI, CSIC) and P. Brox Jiménez (IMSE-CNM, CSIC – US)
    Peer reviewed.

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In reading the present volume, the reader will appreciate the richness of the methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas.
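
    As a minimal, self-contained illustration of the entropy measures the Special Issue builds on, the sketch below computes the Shannon entropy of an 8-bit grayscale intensity histogram. It is our own example for orientation, not code from any of the articles; the class and method names are made up.

        import java.util.Arrays;
        import java.util.Random;

        // Minimal sketch: Shannon entropy of an 8-bit grayscale image,
        // H = -sum_i p_i * log2(p_i), where p_i is the relative frequency
        // of intensity i in the image. Names are illustrative only.
        public class ImageEntropy {

            static double shannonEntropy(int[][] pixels) {
                long[] histogram = new long[256];
                long total = 0;
                for (int[] row : pixels) {
                    for (int value : row) {
                        histogram[value]++;   // assumes intensities in 0..255
                        total++;
                    }
                }
                double entropy = 0.0;
                for (long count : histogram) {
                    if (count == 0) continue;           // 0 * log(0) is taken as 0
                    double p = (double) count / total;
                    entropy -= p * (Math.log(p) / Math.log(2));
                }
                return entropy;
            }

            public static void main(String[] args) {
                int[][] flat = new int[16][16];          // constant image: entropy 0
                Arrays.stream(flat).forEach(r -> Arrays.fill(r, 128));

                int[][] noisy = new int[16][16];         // uniform noise: close to 8 bits
                Random rng = new Random(42);
                for (int[] row : noisy)
                    for (int i = 0; i < row.length; i++) row[i] = rng.nextInt(256);

                System.out.printf("flat: %.3f bits, noisy: %.3f bits%n",
                        shannonEntropy(flat), shannonEntropy(noisy));
            }
        }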

    Secure and efficient processing of outsourced data structures using trusted execution environments

    In recent years, more and more companies make use of cloud computing; in other words, they outsource data storage and data processing to a third party, the cloud provider. From cloud computing, companies expect, for example, cost reductions, fast deployment, and improved security. However, security also presents a significant challenge, as demonstrated by many cloud computing–related data breaches. Whether due to failing security measures, government interventions, or internal attackers, data leakages can have severe consequences, e.g., revenue loss, damage to brand reputation, and loss of intellectual property. A valid strategy to mitigate these consequences is data encryption during storage, transport, and processing. Nevertheless, outsourced data processing should combine three properties: strong security, high efficiency, and arbitrary processing capabilities. Many approaches for outsourced data processing based purely on cryptography are available, for instance, encrypted storage of outsourced data, property-preserving encryption, fully homomorphic encryption, searchable encryption, and functional encryption. However, all of these approaches fail in at least one of the three properties. Besides approaches based purely on cryptography, some approaches use a trusted execution environment (TEE) to process data at the cloud provider. TEEs provide an isolated processing environment for user-defined code and data, i.e., the confidentiality and integrity of code and data processed in this environment are protected against other software and against physical access. Additionally, TEEs promise efficient data processing. Various research papers use TEEs to protect objects at different levels of granularity. At one end of the range, TEEs can protect entire (legacy) applications. This approach reduces the development effort for protected applications, as it requires only minor changes. However, its downsides are that the attack surface is large, it is difficult to capture the exact leakage, and it might not even be possible, as the isolated environment of commercially available TEEs is limited. At the other end of the range, TEEs can protect individual, stateless operations that are called from otherwise unchanged applications. This approach does not suffer from the problems stated before, but it leaks the (encrypted) result of each operation and the detailed control flow through the application. The leakage of this approach is difficult to capture, because it depends on the processed operation and the operation's location in the code. In this dissertation, we propose a trade-off between both approaches: TEE-based processing of data structures. In this approach, otherwise unchanged applications call a TEE for self-contained data structure operations and receive encrypted results. We examine three data structures: TEE-protected B+-trees, TEE-protected database dictionaries, and TEE-protected file systems. Using these data structures, we design three secure and efficient systems: an outsourced system for index searches; an outsourced, dictionary-encoding–based, column-oriented, in-memory database supporting analytic queries on large datasets; and an outsourced system for group file sharing supporting large and dynamic groups. With our approach, the systems have a small attack surface and a low likelihood of security-relevant bugs, and a data owner can easily perform a (formal) code verification of the sensitive code. At the same time, we prevent low-level leakage of individual operation results. For all systems, we present a thorough security evaluation showing lower bounds of security. Additionally, we use prototype implementations to present upper bounds on performance. For our implementations, we use a widely available TEE that has a limited isolated environment: Intel Software Guard Extensions. By comparing our systems to related work, we show that they provide a favorable trade-off regarding security and efficiency.
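
    The following sketch illustrates, under simplifying assumptions, the call pattern described above: the application hands an encrypted query to a self-contained dictionary operation and only ever sees encrypted inputs and outputs. The "enclave" is mocked by an ordinary in-process class standing in for an Intel SGX enclave, and the shared AES key models a key provisioned by the data owner after attestation; all class and method names are hypothetical and not taken from the dissertation.

        import javax.crypto.Cipher;
        import javax.crypto.KeyGenerator;
        import javax.crypto.SecretKey;
        import javax.crypto.spec.GCMParameterSpec;
        import java.nio.charset.StandardCharsets;
        import java.security.SecureRandom;
        import java.util.Map;

        // Schematic sketch of the call pattern only, not a real TEE deployment.
        public class TeeDictionarySketch {

            // AES-GCM encrypt; the 12-byte IV is prepended to the ciphertext.
            static byte[] seal(SecretKey k, byte[] pt) throws Exception {
                byte[] iv = new byte[12];
                new SecureRandom().nextBytes(iv);
                Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
                c.init(Cipher.ENCRYPT_MODE, k, new GCMParameterSpec(128, iv));
                byte[] ct = c.doFinal(pt);
                byte[] out = new byte[12 + ct.length];
                System.arraycopy(iv, 0, out, 0, 12);
                System.arraycopy(ct, 0, out, 12, ct.length);
                return out;
            }

            // AES-GCM decrypt of an IV-prefixed blob.
            static byte[] open(SecretKey k, byte[] blob) throws Exception {
                Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
                c.init(Cipher.DECRYPT_MODE, k, new GCMParameterSpec(128, blob, 0, 12));
                return c.doFinal(blob, 12, blob.length - 12);
            }

            // Stand-in for the self-contained dictionary operation running inside the TEE.
            static class EnclaveStub {
                private final SecretKey key;   // provisioned by the data owner in a real system
                private final Map<String, String> dict = Map.of("alice", "reads", "bob", "writes");
                EnclaveStub(SecretKey key) { this.key = key; }

                byte[] lookup(byte[] encryptedWord) throws Exception {
                    String word = new String(open(key, encryptedWord), StandardCharsets.UTF_8);
                    return seal(key, dict.getOrDefault(word, "").getBytes(StandardCharsets.UTF_8));
                }
            }

            public static void main(String[] args) throws Exception {
                SecretKey ownerKey = KeyGenerator.getInstance("AES").generateKey();
                EnclaveStub enclave = new EnclaveStub(ownerKey);

                byte[] encryptedQuery = seal(ownerKey, "alice".getBytes(StandardCharsets.UTF_8));
                byte[] encryptedResult = enclave.lookup(encryptedQuery);  // untrusted host sees only ciphertext

                System.out.println(new String(open(ownerKey, encryptedResult), StandardCharsets.UTF_8));
            }
        }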

    A Systematic Approach to Benchmark and Improve Automated Static Detection of Java-API Misuses

    Today's software industry relies heavily on the reuse of existing software libraries. Such libraries provide the building blocks for modern software products. Reusing them allows developers to focus on innovation, while standing on the shoulders of giants. To use libraries effectively, developers need to know the Application Programming Interfaces (APIs) through which they communicate with the libraries. This includes both the APIs' semantics and the (implicit) usage constraints that come with them. In the face of the rapidly growing and evolving supply of software libraries, this has become a challenging task. As a result, incorrect usages of APIs, or API misuses, are a prevalent cause of software bugs, crashes, and vulnerabilities. In reaction to this problem, researchers have proposed a multitude of developer-assistance tools. One particular class of such tools automates the detection of API misuses in software code. We call these tools API-misuse detectors. Existing misuse detectors address different aspects of API misuse. However, no attempt has been made to systematically define the problem space of API misuse and to assess the prevalence of API misuses compared to other types of bugs. This makes it impossible to judge the relevance of research on API-misuse detection. Moreover, previous empirical evaluations of misuse detectors commonly measure the detectors' precision. However, since these studies use different datasets, it is unclear to what extent the results are comparable. It is also unclear where the detectors make trade-offs between their precision and their recall. In this thesis, we first present a systematic analysis of the problem of API misuse. We find that API misuse causes 9.1% of all software bugs in real-world projects, including many critical issues, such as program crashes, data loss, and security vulnerabilities. Then, we survey the literature to consolidate over a decade of research on API-misuse detection and build MUBench, a public automated benchmark for API-misuse detectors. This enables us to conduct the first-ever qualitative and quantitative comparison of existing misuse detectors. We find that these detectors have the potential to discover many API misuses, but suffer from extremely low precision and recall in practice. Finally, we systematically design MUDetect, a new API-misuse detector that addresses many of the problems of existing detectors. Using MUBench, we demonstrate that MUDetect clearly outperforms existing detectors with respect to both precision and recall. Our results provide strong evidence that, following our systematic approach, we can develop API-misuse detectors that are fit for practical application.
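
    To make the notion of an API misuse concrete, here is a classic example of the kind of implicit usage constraint such detectors check, written as our own illustration rather than a case drawn from MUBench: java.util.Iterator requires a successful hasNext() check before next() may be called safely.

        import java.util.Iterator;
        import java.util.List;

        // Illustrative API misuse: Iterator.next() is only safe after hasNext() has
        // returned true; violating this implicit usage constraint raises
        // NoSuchElementException on an empty collection.
        public class IteratorMisuseExample {

            // Misuse: silently assumes the list is never empty.
            static String firstNameUnsafe(List<String> names) {
                Iterator<String> it = names.iterator();
                return it.next();                      // throws NoSuchElementException if empty
            }

            // Correct usage: the hasNext() check enforces the API's usage constraint.
            static String firstNameSafe(List<String> names) {
                Iterator<String> it = names.iterator();
                return it.hasNext() ? it.next() : "<none>";
            }

            public static void main(String[] args) {
                System.out.println(firstNameSafe(List.of()));    // prints "<none>"
                System.out.println(firstNameUnsafe(List.of()));  // crashes with NoSuchElementException
            }
        }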

    Secure fingerprinting on sound foundations

    The rapid development and advancement of digital technologies open a variety of opportunities to consumers and content providers for using and trading digital goods. In this context, the Internet in particular has gained major ground as a worldwide platform for exchanging and distributing digital goods. Besides all its possibilities and advantages, digital technology can be misused to breach copyright regulations: unauthorized use and illegal distribution of intellectual property cause authors and content providers considerable loss. Protection of intellectual property has therefore become one of the major challenges of our information society. Fingerprinting is a key technology in copyright protection of intellectual property. Its goal is to deter people from copyright violation by making it possible to provably identify the source of illegally copied and redistributed content. As one of its focuses, this thesis considers the design and construction of various fingerprinting schemes and presents the first explicit, secure and reasonably efficient construction for a fingerprinting scheme which fulfills advanced security requirements such as collusion-tolerance, asymmetry, anonymity and direct non-repudiation. Crucial for the security of such schemes is a careful study of the underlying cryptographic assumptions. In the case of the fingerprinting scheme presented here, these are mainly assumptions related to discrete logarithms. The study and analysis of these assumptions is a further focus of this thesis. Based on the first thorough classification of assumptions related to discrete logarithms, this thesis gives novel insights into the relations between these assumptions. In particular, depending on the underlying probability space, we present new results on the reducibility between some of these assumptions as well as on their reduction efficiency.
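
    As background for readers unfamiliar with such assumptions, the following LaTeX fragment states the standard discrete logarithm (DL) and computational Diffie-Hellman (CDH) assumptions and one elementary reduction between them; it is a textbook formulation given only for orientation, not the classification developed in the thesis.

        \documentclass{article}
        \usepackage{amsmath,amssymb}
        \begin{document}
        % Standard formulations, given for background only.
        Let $G = \langle g \rangle$ be a cyclic group of prime order $q$ with generator $g$.

        \emph{Discrete logarithm (DL) assumption:} for every efficient algorithm $\mathcal{A}$,
        \[
          \Pr\bigl[\, x \leftarrow \mathbb{Z}_q \;:\; \mathcal{A}(G, q, g, g^{x}) = x \,\bigr]
          \quad \text{is negligible.}
        \]

        \emph{Computational Diffie-Hellman (CDH) assumption:} for every efficient algorithm $\mathcal{A}$,
        \[
          \Pr\bigl[\, x, y \leftarrow \mathbb{Z}_q \;:\; \mathcal{A}(G, q, g, g^{x}, g^{y}) = g^{xy} \,\bigr]
          \quad \text{is negligible.}
        \]

        % Elementary reduction: an efficient DL solver yields an efficient CDH solver,
        % since recovering $x$ from $g^{x}$ allows computing $(g^{y})^{x} = g^{xy}$.
        % Hence hardness of CDH implies hardness of DL; reductions of this kind, and
        % their efficiency, are the object of the classification mentioned above.
        \end{document}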

    An Investigation into Trust and Reputation Frameworks for Autonomous Underwater Vehicles

    As Autonomous Underwater Vehicles (AUVs) become more technically capable and economically feasible, they are being increasingly used in a great many defence, commercial and environmental applications. These applications are tending towards independent, autonomous, ad-hoc, collaborative behaviour of teams or fleets of AUV platforms. This convergence of research experiences in the Underwater Acoustic Network (UAN) and Mobile Ad-hoc Network (MANET) fields, along with the increasing Level of Automation (LOA) of such platforms, creates unique challenges for securing the operation and communication of these networks. The question of security and reliability of operation in networked systems has usually been resolved by having a centralised coordinating agent manage shared secrets and monitor for misbehaviour. However, in the sparse, noisy and constrained communications environment of UANs, the communications overheads and single-point-of-failure risk of this model are challenged (particularly when faced with capable attackers). As such, more lightweight, distributed, experience-based systems of “Trust” have been proposed to dynamically model and evaluate the “trustworthiness” of nodes within a MANET across the network, to prevent or isolate the impact of malicious, selfish, or faulty misbehaviour. Previously, these models have monitored actions purely within the communications domain. Moreover, the vast majority rely on only one type of observation (metric) to evaluate trust: successful packet forwarding. In these cases, motivated actors may use this limited scope of observation either to behave unfairly without repercussions in other domains/metrics, or to make another, fair, node appear to be operating unfairly. This thesis is primarily concerned with the application of terrestrial-MANET trust frameworks to the UAN space. Considering the massive theoretical and practical differences in the communications environment, these frameworks must be reassessed for suitability to the marine realm. We find that current single-metric Trust Management Frameworks (TMFs) do not perform well even in a best-case scaling of the marine network, due to sparse and noisy observation metrics, and while basic multi-metric, communications-only frameworks perform better than their single-metric forms, this performance is still not at a reliable level. We propose, demonstrate (through simulation) and integrate the use of physical observational metrics for trust assessment, in tandem with metrics from the communications realm, improving the safety, security, reliability and integrity of autonomous UANs. Three main novelties are demonstrated in this work: trust evaluation using metrics from the physical domain (movement, distribution, etc.); demonstration of the failings of communications-based trust evaluation in sparse, noisy, delay-prone and non-linear UAN environments; and the deployment of trust assessment across multiple domains, e.g. the physical and communications domains. The latter contribution includes the generation and optimisation of cross-domain metric composition, or “synthetic domains”, as a performance improvement method.
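
    As a rough, simplified sketch of cross-domain trust assessment in the spirit described above (our own illustration, not the framework developed in the thesis), the snippet below combines a communications metric (packet delivery ratio) with a physical metric (deviation from an expected position) into a single weighted trust score per node; all weights, metrics and names are assumptions made for the example.

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Simplified sketch of multi-domain trust assessment: each observed node gets a
        // trust score in [0, 1] computed as a weighted combination of a communications
        // metric (packet delivery ratio) and a physical metric (normalised deviation
        // from its expected position). Weights and metrics are illustrative only.
        public class MultiDomainTrustSketch {

            record Observation(double packetsDelivered, double packetsExpected,
                               double positionErrorMetres, double maxToleratedErrorMetres) {}

            // Communications-domain metric: fraction of expected packets actually forwarded.
            static double deliveryRatio(Observation o) {
                return o.packetsExpected() == 0 ? 1.0
                        : Math.min(1.0, o.packetsDelivered() / o.packetsExpected());
            }

            // Physical-domain metric: 1 when on station, falling to 0 at the tolerated error.
            static double positionConformance(Observation o) {
                return Math.max(0.0, 1.0 - o.positionErrorMetres() / o.maxToleratedErrorMetres());
            }

            // Weighted cross-domain trust score in [0, 1].
            static double trust(Observation o, double commsWeight, double physicalWeight) {
                double total = commsWeight + physicalWeight;
                return (commsWeight * deliveryRatio(o) + physicalWeight * positionConformance(o)) / total;
            }

            public static void main(String[] args) {
                Map<String, Observation> fleet = new LinkedHashMap<>();
                fleet.put("auv-1", new Observation(95, 100, 10, 200));   // well behaved
                fleet.put("auv-2", new Observation(40, 100, 20, 200));   // selfish forwarder
                fleet.put("auv-3", new Observation(97, 100, 350, 200));  // off station, comms look fine

                fleet.forEach((name, obs) ->
                        System.out.printf("%s: trust = %.2f%n", name, trust(obs, 0.5, 0.5)));
            }
        }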