755 research outputs found

    An Iterative and Toolchain-Based Approach to Automate Scanning and Mapping Computer Networks

    As today's organizational computer networks continually evolve and grow more complex, finding potential vulnerabilities and conducting security audits has become a crucial element in securing these networks. The first step in auditing a network is reconnaissance: mapping it to obtain a comprehensive overview of its structure. The growing complexity, however, makes this task increasingly laborious, all the more because mapping (as opposed to plain scanning) still involves a great deal of manual work. The concept proposed in this paper therefore automates the scanning and mapping of unknown and non-cooperative computer networks in order to find security weaknesses or verify access controls. It further supports audits by allowing documented networks to be compared with actual ones, detecting unauthorized network devices, and evaluating access control methods through delta scans. It uses a novel approach that augments data from iteratively chained existing scanning tools with context, using dedicated analytics modules to assess a network's topology instead of just generating a list of scanned devices. It further contains a visualization model that provides a clear, lucid topology map and a dedicated graph for comparative analysis. The goal is to provide maximum insight with a minimum of a priori knowledge. Comment: 7 pages, 6 figures
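
    The iterative chaining described above can be illustrated with a minimal, hypothetical sketch: each scan feeds newly discovered subnets back into a work queue, and the results are collected into a simple topology structure. The scan_subnet() stub stands in for the external scanning tools the paper chains together; it is not the authors' toolchain.

```python
# Minimal sketch of the iterative scan-and-map loop described above.
# scan_subnet() is a hypothetical stub standing in for the external scanning
# tools that the toolchain chains together; it is hard-coded so the sketch
# runs without any scanner installed.
from collections import deque

def scan_subnet(subnet: str) -> dict:
    """Stub: return hosts found in `subnet` and further subnets they reveal."""
    fake_results = {
        "10.0.0.0/24": {"hosts": ["10.0.0.1", "10.0.0.5"], "linked": ["10.0.1.0/24"]},
        "10.0.1.0/24": {"hosts": ["10.0.1.1"], "linked": []},
    }
    return fake_results.get(subnet, {"hosts": [], "linked": []})

def map_network(seed: str) -> dict:
    """Iteratively chain scans: newly discovered subnets are queued and scanned."""
    topology: dict[str, list[str]] = {}
    queue, seen = deque([seed]), {seed}
    while queue:
        subnet = queue.popleft()
        result = scan_subnet(subnet)
        topology[subnet] = result["hosts"]   # context: hosts grouped per subnet
        for nxt in result["linked"]:         # feed discoveries back into the loop
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return topology

if __name__ == "__main__":
    print(map_network("10.0.0.0/24"))
```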

    BEAT: An Open-Source Web-Based Open-Science Platform

    With the increased interest in computational sciences, machine learning (ML), pattern recognition (PR) and big data, governmental agencies, academia and manufacturers are overwhelmed by the constant influx of new algorithms and techniques promising improved performance, generalization and robustness. Sadly, result reproducibility is often an overlooked feature of original research publications, competitions and benchmark evaluations. The main reasons behind this gap arise from natural complications in research and development in this area: the distribution of data may be a sensitive issue; software frameworks are difficult to install and maintain; test protocols may involve a potentially large set of intricate steps that are difficult to handle. Given the rising complexity of research challenges and the constant increase in data volume, the conditions for achieving reproducible research in the domain are also increasingly difficult to meet. To bridge this gap, we built an open platform for research in computational sciences related to pattern recognition and machine learning, to help with the development, reproducibility and certification of results obtained in the field. By making use of such a system, academic, governmental or industrial organizations enable users to easily and socially develop processing toolchains, re-use data, algorithms and workflows, and compare results from distinct algorithms and/or parameterizations with minimal effort. This article presents the platform and discusses some of its key features, uses and limitations. We give an overview of a currently operational prototype and provide design insights. Comment: References to papers published on the platform incorporate
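
    As a rough illustration of the toolchain idea (not the platform's actual experiment format), the sketch below chains a few processing blocks declaratively and executes them in order; the block names and the run() helper are invented for the example.

```python
# Illustrative sketch of a declarative processing toolchain in the spirit of
# the platform described above. The block names and the run() helper are
# invented; the platform's real experiment/toolchain format is not shown here.
from typing import Any, Callable

toolchain: list[tuple[str, Callable[[Any], Any]]] = [
    ("load",      lambda _: [1.0, 2.0, 4.0]),                 # data source block
    ("normalize", lambda xs: [x / max(xs) for x in xs]),      # preprocessing block
    ("score",     lambda xs: sum(xs) / len(xs)),              # evaluation block
]

def run(blocks: list[tuple[str, Callable[[Any], Any]]]) -> Any:
    """Execute blocks in order, passing each block's output to the next one."""
    data: Any = None
    for name, fn in blocks:
        data = fn(data)
        print(f"{name}: {data}")
    return data

if __name__ == "__main__":
    run(toolchain)   # comparing runs = re-running with swapped blocks/parameters
```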

    Trustworthy Transparency by Design

    Individuals lack oversight over the systems that process their data. This can lead to discrimination and hidden biases that are hard to uncover. Recent data protection legislation tries to tackle these issues, but it is inadequate: it does not prevent data misuse, and it stifles sensible use cases for data. We think the conflict between data protection and increasingly data-based systems should be solved differently. When access to data is given, all usages should be made transparent to the data subjects. This enables their data sovereignty, allowing individuals to benefit from sensible data usage while addressing the potentially harmful consequences of data misuse. We contribute to this with a technical concept and an empirical evaluation. First, we conceptualize a transparency framework for software design, incorporating research on user trust and experience. Second, we instantiate and empirically evaluate the framework in a focus group study over three months, centering on the user perspective. Our transparency framework enables developing software that incorporates transparency in its design. The evaluation shows that it satisfies usability and trustworthiness requirements. The provided transparency is experienced as beneficial, and participants feel empowered by it. This shows that our framework enables Trustworthy Transparency by Design.
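
    A minimal sketch of the core transparency idea follows, assuming a simple usage log that data subjects can query; the class and field names are illustrative and not the paper's actual framework API.

```python
# Minimal sketch of the "all usages are made transparent" idea: every access to
# a data subject's data is logged at access time and can be queried by that
# subject. Class and field names are illustrative, not the paper's framework API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class UsageRecord:
    subject_id: str
    purpose: str
    accessed_by: str
    timestamp: str

@dataclass
class TransparencyLog:
    records: list[UsageRecord] = field(default_factory=list)

    def log_usage(self, subject_id: str, purpose: str, accessed_by: str) -> None:
        """Record one data usage event the moment the data is accessed."""
        self.records.append(UsageRecord(
            subject_id, purpose, accessed_by,
            datetime.now(timezone.utc).isoformat()))

    def report_for(self, subject_id: str) -> list[UsageRecord]:
        """Give the data subject a complete view of how their data was used."""
        return [r for r in self.records if r.subject_id == subject_id]

if __name__ == "__main__":
    log = TransparencyLog()
    log.log_usage("alice", "billing", "invoice-service")
    print(log.report_for("alice"))
```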

    Towards a method to quantitatively measure toolchain interoperability in the engineering lifecycle: A case study of digital hardware design

    The engineering lifecycle of cyber-physical systems is becoming more challenging than ever. Multiple engineering disciplines must be orchestrated to produce both a virtual and a physical version of the system. Each engineering discipline uses its own methods and tools, generating different types of work products that must be consistently linked together and reused throughout the lifecycle. Requirements, logical/descriptive and physical/analytical models, 3D designs, test case descriptions, product lines, ontologies, evidence argumentations, and many other work products are continuously produced and integrated to implement the technical engineering and technical management processes established in standards such as ISO/IEC/IEEE 15288:2015, "Systems and software engineering - System life cycle processes". Toolchains are then created as sets of collaborative tools to provide an executable version of the required technical processes. In this engineering environment, there is a need for technical interoperability that enables tools to easily exchange data and invoke operations among themselves under different protocols, formats, and schemas. However, this automation of tasks and lifecycle processes does not come free of charge. Although enterprise integration patterns, shared and standardized data schemas, and business process management tools are being used to implement toolchains, in many cases the integration of tools within a toolchain is implemented through point-to-point connectors or by applying some architectural style, such as a communication bus, to ease data exchange and operation invocation. In this context, the ability to measure the current and expected degree of interoperability becomes relevant: 1) to understand the implications of defining a toolchain (the need for different protocols, formats, schemas and tool interconnections) and 2) to measure the effort to implement the desired toolchain. To improve the management of the engineering lifecycle, a method is defined: 1) to measure the degree of interoperability within a technical engineering process implemented with a toolchain and 2) to estimate the effort to transition from an existing toolchain to another. A case study in the field of digital hardware design, comprising 6 different technical engineering processes and 7 domain engineering tools, is conducted to demonstrate and validate the proposed method. The work leading to these results has received funding from the H2020-ECSEL Joint Undertaking (JU) under grant agreement No 826452, "Arrowhead Tools for Engineering of Digitalisation Solutions", and from specific national programs and/or funding authorities. Funding for APC: Universidad Carlos III de Madrid (Read & Publish Agreement CRUE-CSIC 2023).
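
    The notion of a measurable degree of interoperability can be illustrated with a hedged sketch: score each pair of tools by how many integration layers (protocol, format, schema) they already share and average over the toolchain. The tool names, layers and scoring below are invented for the example; the paper defines its own measurement method.

```python
# Hedged sketch of a pairwise interoperability score. The paper defines its own
# measurement method; this only illustrates scoring tool connections by how many
# integration layers (protocol, format, schema) they already share. The tool
# names and layer values are invented for the example.
from itertools import combinations

tools = {
    "requirements": {"protocol": "http", "format": "reqif", "schema": "oslc"},
    "simulation":   {"protocol": "http", "format": "fmi",   "schema": "custom"},
    "synthesis":    {"protocol": "file", "format": "fmi",   "schema": "custom"},
}

LAYERS = ("protocol", "format", "schema")

def pair_score(a: dict, b: dict) -> float:
    """Fraction of integration layers on which two tools already agree."""
    return sum(a[k] == b[k] for k in LAYERS) / len(LAYERS)

def toolchain_score(tool_specs: dict) -> float:
    """Average pairwise score: a rough current degree of interoperability."""
    pairs = list(combinations(tool_specs.values(), 2))
    return sum(pair_score(a, b) for a, b in pairs) / len(pairs)

if __name__ == "__main__":
    print(f"toolchain interoperability: {toolchain_score(tools):.2f}")
```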

    Big Data Management Towards Impact Assessment of Level 3 Automated Driving Functions

    As industrial research in automated driving is rapidly advancing, it is of paramount importance to analyze field data from extensive road tests. This thesis presents research work done in L3Pilot, the first comprehensive test of automated driving functions (ADFs) on public roads in Europe. L3Pilot is now completing the testing of ADFs in vehicles from 13 companies. The tested functions are mainly of Society of Automotive Engineers (SAE) automation level 3, with some of level 4. The collaboration among several organizations led to the design and development of a toolchain for processing and managing experimental data that can be shared among all the vehicle manufacturers to answer a set of 100+ research questions (RQs) about the evaluation of ADFs at various levels, from technical system functioning to overall impact assessment. The toolchain was designed to support a coherent, robust workflow based on the Field opErational teSt supporT Action (FESTA), a well-established reference methodology for automotive piloting. Key challenges included ensuring methodological soundness and data validity while protecting the vehicle manufacturers' intellectual property. Through this toolchain, the project set up what could become a reference architecture for managing research data in automated vehicle tests. In the first step of the workflow, the methodology partners captured the quantitative requirements of each RQ in terms of the relevant data needed from the tests. L3Pilot did not intend to share the original vehicular signal time series, both for confidentiality reasons and because of the enormous amount of data that would have had to be shared. As the factual basis for quantitatively answering the RQs, a set of performance indicators (PIs) was defined. The source vehicular signals were translated from their proprietary formats into the common data format (CDF), defined by L3Pilot to support efficient processing by multiple partners' tools as well as data quality checking. The subsequent performance indicator (PI) computation step consists of synthesizing the vehicular time series into statistical summaries stored in the project-shared database, the Consolidated Database (CDB). Computation of the PIs is segmented by experimental condition, road type and driving scenario, as required to answer the RQs. The supported analysis covers both objective data from vehicular sensors and subjective data from user questionnaires (test drivers and passengers). The overall L3Pilot toolchain allowed setting up a data management process involving several partners (vehicle manufacturers, research institutions, suppliers, and developers) with different perspectives and requirements. The system was deployed and used by all the relevant partners at the pilot sites. The experience highlights the importance of the reference methodology to theoretically inform and coherently manage all the steps of the project, and the need for effective and efficient tools to support the everyday work of all the involved research teams, from vehicle manufacturers to data analysts.
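
    The PI computation step can be sketched as a reduction of raw time series into statistical syntheses per experimental condition, road type and driving scenario, so that only aggregates leave the manufacturer. The column names and example values below are illustrative, not the project's actual CDF.

```python
# Sketch of the PI computation step: raw per-trip time series are reduced to
# statistical syntheses per experimental condition, road type and driving
# scenario, so that only aggregates (not raw signals) are shared. The column
# names and example values are illustrative, not the project's actual CDF.
import pandas as pd

signals = pd.DataFrame({
    "condition": ["ADF on", "ADF on", "ADF off", "ADF off"],
    "road_type": ["motorway", "motorway", "motorway", "urban"],
    "scenario":  ["lane keeping", "lane keeping", "lane keeping", "cut-in"],
    "speed_kmh": [118.0, 112.0, 122.0, 48.0],
    "thw_s":     [1.8, 1.7, 1.6, 1.3],      # time headway in seconds
})

# One row per (condition, road type, scenario): the statistical synthesis that
# would be stored in the shared database instead of the raw time series.
performance_indicators = (
    signals
    .groupby(["condition", "road_type", "scenario"])
    .agg(speed_mean=("speed_kmh", "mean"),
         speed_std=("speed_kmh", "std"),
         thw_mean=("thw_s", "mean"))
    .reset_index()
)

print(performance_indicators)
```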

    Artificial Intelligence Advancements for Digitising Industry

    In the digital transformation era, when flexibility and know-how in manufacturing complex products become a critical competitive advantage, artificial intelligence (AI) is one of the technologies driving the digital transformation of industry and industrial products. These highly complex products, based on multi-dimensional requirements, need flexible and adaptive manufacturing lines and novel components, e.g., dedicated CPUs, GPUs, FPGAs, TPUs and neuromorphic architectures, that support AI operations at the edge with reliable sensors and specialised AI capabilities. The shift towards AI-driven applications in industrial sectors enables new and innovative industrial and manufacturing models. New process management approaches appear and become part of the core competences of organizations and their networks of manufacturing sites. In this context, bringing AI from the cloud to the edge, promoting silicon-born AI components by advancing Moore's law, and accelerating the adoption of edge processing in different industries through reference implementations become priorities for digitising industry. This article gives an overview of the ECSEL AI4DI project, which aims to apply AI-based technologies, methods and algorithms at the edge, integrated with the Industrial Internet of Things (IIoT) and robotics, to enhance industrial processes based on repetitive tasks, focusing on replacing process identification and validation methods with intelligent technologies across the automotive, semiconductor, machinery, food and beverage, and transportation industries.

    Multi-Platform Generative Development of Component & Connector Systems using Model and Code Libraries

    Component-based software engineering aims to reduce software development effort by reusing established components as building blocks of complex systems. Defining components in general-purpose programming languages restricts their reuse to platforms supporting these languages and complicates component composition with implementation details. The vision of model-driven engineering is to reduce the gap between developer intention and implementation details by lifting abstract models to primary development artifacts and systematically transforming them into executable systems. For sufficiently complex systems, the transformation from abstract models to platform-specific implementations requires augmentation with platform-specific components. We propose a model-driven mechanism to transform platform-independent logical component & connector architectures into platform-specific implementations by combining model and code libraries. This mechanism allows commitment to a specific platform to be postponed and thus increases the reuse of software architectures and components. Comment: 10 pages, 4 figures, 1 listing
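
    The separation the abstract describes can be sketched as follows: a platform-independent component & connector model is kept free of implementation detail, and a generation step binds each logical component to an entry in a platform-specific code library. All names in the sketch (SensorReader, Logger, the posix/rtos libraries) are illustrative, not the proposed tooling.

```python
# Sketch of separating a logical component & connector model from its
# platform-specific realization: the logical components stay platform
# independent, and a generation step binds each one to an entry in a
# platform-specific code library. All names (SensorReader, Logger, the
# posix/rtos libraries) are illustrative, not the proposed tooling.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass(frozen=True)
class Component:
    name: str
    in_ports: tuple[str, ...]
    out_ports: tuple[str, ...]

# Platform-independent logical architecture: two components and one connector.
sensor = Component("SensorReader", in_ports=(), out_ports=("value",))
logger = Component("Logger", in_ports=("value",), out_ports=())
connectors = [("SensorReader.value", "Logger.value")]

# Platform-specific code libraries: one implementation per target platform.
implementations: Dict[str, Dict[str, Callable[[], str]]] = {
    "posix": {"SensorReader": lambda: "read /dev/sensor0",
              "Logger":       lambda: "write to syslog"},
    "rtos":  {"SensorReader": lambda: "read ADC register",
              "Logger":       lambda: "write to UART"},
}

def generate(platform: str) -> None:
    """Bind each logical component to the chosen platform's code library."""
    for comp in (sensor, logger):
        print(f"{comp.name} -> {implementations[platform][comp.name]()}")

if __name__ == "__main__":
    generate("posix")   # commitment to a platform is postponed until this call
```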