131 research outputs found

    Praktische Übungen unter z/OS = Practical Exercises under z/OS


    INTER-ENG 2020

    These proceedings contain research papers accepted for presentation at the 14th International Conference Inter-Eng 2020, “Interdisciplinarity in Engineering”, held on 8–9 October 2020 in Târgu Mureș, Romania. The conference is a leading international professional and scientific forum for engineers and scientists to present research works, contributions, recent developments, and current practices in engineering, continuing a tradition of important scientific events held at the Faculty of Engineering and Information Technology of the George Emil Palade University of Medicine, Pharmacy, Science, and Technology of Târgu Mureș, Romania. The Inter-Eng conference started from the observation that in the 21st century, the era of high technology, a harmonious society cannot be achieved without new approaches to research. The theme of the conference, proposing a new approach to Industry 4.0, was the development of a new generation of smart factories based on the digitalization of manufacturing and assembly processes, covering advanced manufacturing technology, lean manufacturing, sustainable manufacturing, additive manufacturing, and manufacturing tools and equipment. The conference slogan was “Europe’s future is digital: a broad vision of the Industry 4.0 concept beyond direct manufacturing in the company”.

    Viiteraamistik turvariskide haldamiseks plokiahela abil = A Reference Framework for Managing Security Risks Using Blockchain

    Various programs (e.g., OWASP), threat models (e.g., STRIDE), security risk management models (e.g., ISSRM), and regulations (e.g., GDPR) exist to communicate and reduce security threats and to build secure software. However, security threats continuously evolve because the traditional technology infrastructure does not implement security measures by design. Blockchain appears to mitigate traditional applications’ security threats. Although blockchain-based applications are considered less vulnerable, they have not become a silver bullet against different security threats. Moreover, the blockchain domain is constantly evolving, providing new techniques and often interchangeable design concepts, resulting in conceptual ambiguity and confusion in treating security threats effectively. Overall, we address the problem of traditional applications’ security risk management (SRM) using blockchain as a countermeasure, and the SRM of blockchain-based applications. We start by surveying how blockchain mitigates the security threats of traditional applications; the outcome is a blockchain-based reference model (BbRM) that adheres to the SRM domain model. Next, we present an upper-level reference ontology (ULRO) as a foundation ontology and provide two instantiations of the ULRO. The first instantiation includes Corda as a permissioned blockchain and a financial case. The second instantiation includes permissionless blockchain components and a healthcare case. Both ontology representations help in the SRM of traditional and blockchain-based applications. Furthermore, we built a web-based ontology parsing tool, OwlParser. These contributions resulted in an ontology-based security reference framework for managing security risks using blockchain. The framework is dynamic, supports the iterative process of SRM, and potentially lessens the security threats of traditional and blockchain-based applications.
    https://www.ester.ee/record=b551352
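
    As a purely illustrative sketch (not taken from the thesis), an upper-level reference ontology of the kind described above could be expressed in OWL/RDF along the following lines. The namespace, the class names (Asset, Threat, Vulnerability, Countermeasure), the property names, and the Corda-style financial example are assumptions based only on the SRM concepts mentioned in the abstract.

```python
# Minimal, illustrative ULRO-style ontology sketch using rdflib.
# All names below are assumptions for illustration, not the thesis' vocabulary.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

ULRO = Namespace("http://example.org/ulro#")   # hypothetical namespace

g = Graph()
g.bind("ulro", ULRO)

# Upper-level SRM concepts (ISSRM-style): assets, threats, countermeasures.
for cls in ("Asset", "Threat", "Vulnerability", "Countermeasure"):
    g.add((ULRO[cls], RDF.type, OWL.Class))

# Relations between the SRM concepts.
g.add((ULRO.threatens, RDF.type, OWL.ObjectProperty))
g.add((ULRO.threatens, RDFS.domain, ULRO.Threat))
g.add((ULRO.threatens, RDFS.range, ULRO.Asset))

g.add((ULRO.mitigates, RDF.type, OWL.ObjectProperty))
g.add((ULRO.mitigates, RDFS.domain, ULRO.Countermeasure))
g.add((ULRO.mitigates, RDFS.range, ULRO.Threat))

# One illustrative instantiation: a permissioned (Corda-style) ledger acting
# as a countermeasure against tampering with financial records.
g.add((ULRO.FinancialRecord, RDF.type, ULRO.Asset))
g.add((ULRO.DataTampering, RDF.type, ULRO.Threat))
g.add((ULRO.PermissionedLedger, RDF.type, ULRO.Countermeasure))
g.add((ULRO.DataTampering, ULRO.threatens, ULRO.FinancialRecord))
g.add((ULRO.PermissionedLedger, ULRO.mitigates, ULRO.DataTampering))

print(g.serialize(format="turtle"))
```

    A parser such as the OwlParser tool mentioned above would consume this kind of OWL/RDF serialization; the sketch merely shows how SRM concepts and a blockchain countermeasure could be related.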

    Engineering Agile Big-Data Systems

    To be effective, data-intensive systems require extensive ongoing customisation to reflect changing user requirements, organisational policies, and the structure and interpretation of the data they hold. Manual customisation is expensive, time-consuming, and error-prone. In large complex systems, the value of the data can be such that exhaustive testing is necessary before any new feature can be added to the existing design. In most cases, the precise details of requirements, policies, and data will change during the lifetime of the system, forcing a choice between expensive modification and continued operation with an inefficient design. Engineering Agile Big-Data Systems outlines an approach to dealing with these problems in software and data engineering, describing a methodology for aligning these processes throughout product lifecycles. It discusses tools which can be used to achieve these goals and, in a number of case studies, shows how the tools and methodology have been used to improve a variety of academic and business systems.

    Data Spaces

    This open access book aims to educate data space designers to understand what is required to create a successful data space. It explores cutting-edge theory, technologies, methodologies, and best practices for data spaces for both industrial and personal data and provides the reader with a basis for understanding the design, deployment, and future directions of data spaces. The book captures the early lessons and experience in creating data spaces and arranges these contributions into three parts covering design, deployment, and future directions, respectively. The first part explores the design space of data spaces; its chapters detail the organisational design for data spaces, data platforms, data governance, federated learning, personal data sharing, data marketplaces, and hybrid artificial intelligence for data spaces. The second part describes the use of data spaces within real-world deployments; its chapters are co-authored with industry experts and include case studies of data spaces in sectors including Industry 4.0, food safety, FinTech, health care, and energy. The third and final part details future directions for data spaces, including challenges and opportunities for common European data spaces and privacy-preserving techniques for trustworthy data sharing. The book is of interest to two primary audiences: first, researchers interested in data management and data sharing, and second, practitioners and industry experts engaged in data-driven systems where the sharing and exchange of data within an ecosystem are critical.

    Logging Statements Analysis and Automation in Software Systems with Data Mining and Machine Learning Techniques

    Log files are widely used to record runtime information of software systems, such as the timestamp of an event, the name or ID of the component that generated the log, and parts of the state of a task execution. The rich information in logs enables system developers (and operators) to monitor the runtime behavior of their systems and track down system problems in development and production settings. With the ever-increasing scale and complexity of modern computing systems, the volume of logs is rapidly growing. For example, eBay reported that the rate of log generation on their servers was in the order of several petabytes per day in 2018 [17]. Therefore, the traditional way of log analysis, which largely relies on manual inspection (e.g., searching for error/warning keywords or grep), has become an inefficient, labor-intensive, error-prone, and outdated task. The growth of logs has driven the emergence of automated tools and approaches for log mining and analysis. In parallel, the embedding of logging statements in the source code is a manual and error-prone task, and developers often forget to add a logging statement in the software's source code. To address this logging challenge, many efforts have aimed to automate logging statements in the source code, and many tools have been proposed to perform large-scale log file analysis using machine learning and data mining techniques. However, the current logging process is still mostly manual, and thus proper placement and content of logging statements remain challenges. To overcome these challenges, methods that aim to automate log placement and content prediction, i.e., "where and what to log", are of high interest. In addition, approaches that can automatically mine and extract insights from large-scale logs are also well sought after. Thus, in this research, we focus on predicting log statements, and for this purpose, we perform an experimental study on open-source Java projects. We introduce a log-aware code-clone detection method to predict the location and description of logging statements. Additionally, we incorporate natural language processing (NLP) and deep learning methods to further enhance the performance of the log statements' description prediction. We also introduce deep-learning-based approaches for the automated analysis of software logs. In particular, we analyze execution logs and extract natural language characteristics of logs to enable the application of natural language models for automated log file analysis. Then, we propose automated tools for analyzing log files and measuring the information gain from logs for different log analysis tasks such as anomaly detection. We then continue our NLP-enabled approach by leveraging state-of-the-art language models, i.e., Transformers, to perform automated log parsing.
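
    To make the log-parsing idea concrete, below is a minimal sketch (not the method used in this research) of template-style log parsing, in which variable fields are masked so that raw log lines collapse into reusable templates; the masking regexes and example log lines are illustrative assumptions.

```python
# Minimal, illustrative sketch of template-style log parsing: variable
# fields (IPs, hex IDs, numbers) are masked so that log lines reduce to
# reusable templates. Real parsers (e.g., Drain or Transformer-based
# models) are far more sophisticated; the regexes here are assumptions.
import re
from collections import Counter

MASKS = [
    (re.compile(r"\b\d{1,3}(\.\d{1,3}){3}\b"), "<IP>"),
    (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<HEX>"),
    (re.compile(r"\b\d+\b"), "<NUM>"),
]

def to_template(line: str) -> str:
    """Reduce a raw log message to a template by masking variable parts."""
    for pattern, token in MASKS:
        line = pattern.sub(token, line)
    return line.strip()

# Illustrative log lines only.
logs = [
    "Connection from 10.0.0.5 closed after 120 ms",
    "Connection from 10.0.0.7 closed after 98 ms",
    "Worker 42 failed with code 0x1F",
]

templates = Counter(to_template(line) for line in logs)
for template, count in templates.most_common():
    print(f"{count:3d}  {template}")
```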

    DUNE Offline Computing Conceptual Design Report

    This document describes the conceptual design for the Offline Software and Computing for the Deep Underground Neutrino Experiment (DUNE). The goals of the experiment include 1) studying neutrino oscillations using a beam of neutrinos sent from Fermilab in Illinois to the Sanford Underground Research Facility (SURF) in Lead, South Dakota, 2) studying astrophysical neutrino sources and rare processes, and 3) understanding the physics of neutrino interactions in matter. We describe the development of the computing infrastructure needed to achieve the physics goals of the experiment by storing, cataloging, reconstructing, simulating, and analyzing approximately 30 PB of data per year from DUNE and its prototypes. Rather than prescribing particular algorithms, our goal is to provide resources that are flexible and accessible enough to support creative software solutions and advanced algorithms as HEP computing evolves. We describe the physics objectives, organization, use cases, and proposed technical solutions.
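
    As a rough back-of-the-envelope illustration (not taken from the report), approximately 30 PB per year corresponds to an average sustained data rate on the order of 1 GB/s:

```python
# Back-of-the-envelope estimate (not from the report): what average
# sustained data rate does ~30 PB per year correspond to?
PB = 10**15                          # petabyte in bytes (decimal convention)
seconds_per_year = 365.25 * 24 * 3600

volume_bytes = 30 * PB
avg_rate_gb_per_s = volume_bytes / seconds_per_year / 1e9

print(f"~{avg_rate_gb_per_s:.2f} GB/s average sustained rate")
# ≈ 0.95 GB/s, i.e. of order 1 GB/s before any replication or reprocessing
```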

    Estudio de la aparamenta, máquinas y demás equipos que componen una subestación eléctrica instalada para evacuar la energía generada en una central fotovoltaica = Study of the switchgear, machines and other equipment that make up an electrical substation installed in order to transfer power generated in a photovoltaic plant

    This Bachelor's thesis consists of a study of all the systems that make up an electrical substation intended to deliver photovoltaic energy to the grid. It covers the high- and medium-voltage switchgear and the protection and control systems, as well as the auxiliary services and the other relevant equipment. First, the basic concepts of substations are defined in order to identify the components of the installation under study, and each of the elements and pieces of equipment involved is then analysed. The equipment studied is classified according to the substation system to which it belongs: the 220 kV system, the power transformer, the 30 kV system, the protection and control system, and the auxiliary services. Each piece of equipment is studied and analysed, explaining the functions it fulfils within the substation, the elements that compose it, and its technical characteristics. Throughout the work, products from different manufacturers are analysed, including Arteche, ABB, Siemens, SEL, GE Grid Solutions, A-EBERLE, Gedelsa, Genesal Energy, and IMEFY. Finally, after studying all the components of the substation model, an example configuration of the set of equipment studied is presented, thereby establishing a practical substation model. The aim is to show an example of the application of all the equipment working together, with this model serving as a reference substation for its intended purpose: delivering to the grid the energy generated by photovoltaic plants.
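
    As a purely illustrative sketch (not part of the thesis), the subsystem breakdown described above could be captured as a simple inventory structure, for example to drive an equipment checklist; all subsystem groupings and equipment names below are assumptions drawn loosely from the abstract.

```python
# Illustrative only: the substation subsystems described in the abstract,
# modelled as a simple inventory structure. Equipment lists are examples
# of typical switchgear, not the thesis' actual bill of materials.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Subsystem:
    name: str
    voltage_kv: Optional[float]          # None where no single voltage level applies
    equipment: List[str] = field(default_factory=list)

substation = [
    Subsystem("220 kV system", 220.0,
              ["circuit breakers", "disconnectors", "instrument transformers", "surge arresters"]),
    Subsystem("Power transformer", None, ["220/30 kV power transformer (assumed ratio)"]),
    Subsystem("30 kV system", 30.0, ["medium-voltage switchgear", "busbars"]),
    Subsystem("Protection and control", None, ["protection relays", "control units"]),
    Subsystem("Auxiliary services", None, ["auxiliary transformer", "emergency generator", "DC supply"]),
]

for s in substation:
    level = f"{s.voltage_kv:g} kV" if s.voltage_kv is not None else "n/a"
    print(f"{s.name:<24} voltage: {level:<8} equipment: {', '.join(s.equipment)}")
```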