4 research outputs found

    A Reference Model for Common Understanding of Capabilities and Skills in Manufacturing

    Full text link
    In manufacturing, many use cases of Industry 4.0 require vendor-neutral and machine-readable information models to describe, implement and execute resource functions. Such models have been researched under the terms capabilities and skills. Standardization of such models is required but currently not available. This paper presents a reference model developed jointly by members of various organizations in a working group of the Plattform Industrie 4.0. This model covers definitions of the most important aspects of capabilities and skills. It can be seen as a basis for further standardization efforts.
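    To make the idea of machine-readable capability and skill descriptions concrete, here is a minimal Python sketch; the class names, attributes, and identifiers are illustrative assumptions, not the normative definitions of the reference model:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: class names and attributes are assumptions,
# not the normative definitions from the reference model.

@dataclass
class Capability:
    """Vendor-neutral description of a resource function (what a resource can do)."""
    identifier: str                                  # e.g. an IRI, for machine readability
    description: str
    properties: dict = field(default_factory=dict)   # e.g. {"max_payload_kg": 5}

@dataclass
class Skill:
    """Executable implementation of a capability on a concrete resource."""
    identifier: str
    implements: Capability    # the capability this skill realizes
    interface: str            # e.g. "OPC UA method" or "REST endpoint"

pick = Capability("urn:example:cap:pick", "Pick a part", {"max_payload_kg": 5})
robot_pick = Skill("urn:example:skill:robot1-pick", pick, "OPC UA method")
print(robot_pick.implements.identifier)   # urn:example:cap:pick
```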

    Model-Driven System Configuration

    No full text
    Large-scale projects in Production Systems Engineering take place in multidisciplinary environments where engineers from different domains work together in a combined effort. Specialized engineering tools, deeply rooted in these domains, determine the Engineering Processes due to their limited integration and connectivity. Tool Integration Platforms, like the Engineering Service Bus, seamlessly integrate processes and tools; nevertheless, they need to be tailored to implement the customer's specific processes and tools. Today, application integrators perform this customization manually, which is tedious and often error-prone. The customization of Engineering Processes and the configuration of their connection to software services are particularly costly and cumbersome. This work aims at answering to what extent the customization process for Engineering Processes can be improved using a more sophisticated approach than the manual one. It therefore investigates how variants of Engineering Processes can be mapped to service variants and configured adequately by a (semi-)automated method. To this end, it first examines how concepts of Variability Modeling can model Engineering Processes and software systems so that variants of either domain can be mapped onto each other. The research approach first investigates related work. Second, variability in the Engineering Processes of industry partners and in the services of the Engineering Service Bus is examined. Afterwards, variability models for Engineering Processes and Engineering Service Bus services are developed. Based on these, an approach for mapping Engineering Process variants to service variants is proposed. Finally, the solution approach is evaluated on a real-world example of an industry partner using a prototype. As results, the thesis proposes an approach to define variability models based on the Business Process Model and Notation (BPMN) language and Feature Modeling; the main result is a method to map process templates to Feature Models. The evaluation shows that the solution is feasible and that it significantly reduces the effort and complexity of the manual approach. Moreover, it increases the number of quality assurance mechanisms. The solution approach was evaluated on a small sample and showed its superiority over the manual approach. The author expects it to work even better on more extensive examples, which remains to be proven in future work.
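    As a rough illustration of the mapping idea, the following hypothetical Python sketch encodes a tiny feature model, validates a feature selection, and derives the corresponding service variants; all feature and service names are invented for illustration and not taken from the thesis:

```python
# Hypothetical sketch of the mapping idea: features from a feature model
# are mapped to Engineering Service Bus service variants, and a valid
# feature selection yields the services to configure. All names are
# invented for illustration.

# Feature model expressed as feature -> set of required features
feature_requires = {
    "data_exchange": set(),
    "signal_check": {"data_exchange"},
    "version_control": {"data_exchange"},
}

# Which service realizes each feature (assumed names)
feature_to_service = {
    "data_exchange": "ExchangeService",
    "signal_check": "SignalCheckService",
    "version_control": "VersioningService",
}

def derive_services(selected: set) -> list:
    """Validate a feature selection and map it to service variants."""
    for feature in selected:
        missing = feature_requires[feature] - selected
        if missing:
            raise ValueError(f"Feature {feature!r} requires {missing}")
    return sorted(feature_to_service[feature] for feature in selected)

print(derive_services({"data_exchange", "signal_check"}))
# ['ExchangeService', 'SignalCheckService']
```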

    Precise Data Identification Services for Long Tail Research Data: Paper - iPRES 2016 - Swiss National Library, Bern

    No full text
    While sophisticated research infrastructures assist scientists in managing massive volumes of data, the so-called long tail of research data frequently suffers from a lack of such services. This is mostly due to the complexity caused by the variety of data to be managed and a lack of easily standardisable procedures in highly diverse research settings. Yet, as even domains in this long tail of research data are increasingly data-driven, scientists need efficient means to communicate precisely which version and subset of data was used in a particular study, to enable reproducibility and comparability of results and to foster data re-use. This paper presents three implementations of systems supporting such data identification services for comma-separated value (CSV) files, a dominant format for data exchange in these settings. The implementations are based on the recommendations of the Working Group on Dynamic Data Citation of the Research Data Alliance (RDA). They provide implicit change tracking of all data modifications, while precise subsets are identified via the respective subsetting process. This enhances the reproducibility of experiments and allows efficient sharing of specific subsets of data even in highly dynamic data settings.
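    A minimal Python sketch of the timestamped change-tracking idea underlying such services follows; the storage layout and field names are assumptions for illustration, not one of the three implementations described in the paper:

```python
import time

# Sketch of timestamped change tracking: rows (standing in for parsed
# CSV rows) are never physically removed; instead each row carries
# valid_from / valid_to timestamps, so any past subset can be re-derived.

class VersionedCsvStore:
    def __init__(self):
        self.rows = []

    def insert(self, record: dict):
        self.rows.append({**record, "valid_from": time.time(), "valid_to": None})

    def delete(self, predicate):
        now = time.time()
        for row in self.rows:
            if row["valid_to"] is None and predicate(row):
                row["valid_to"] = now   # mark as deleted, keep the data

    def subset(self, predicate, as_of: float):
        """Re-executable subset: rows matching predicate that were live at as_of."""
        return [r for r in self.rows
                if r["valid_from"] <= as_of
                and (r["valid_to"] is None or r["valid_to"] > as_of)
                and predicate(r)]

store = VersionedCsvStore()
store.insert({"sensor": "s1", "value": 3.2})
t = time.time()
store.delete(lambda r: r["sensor"] == "s1")
# The subset as of time t still reproduces the later-deleted row:
print(store.subset(lambda r: r["sensor"] == "s1", as_of=t))
```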

    Precisely and Persistently Identifying and Citing Arbitrary Subsets of Dynamic Data

    No full text
    Precisely identifying arbitrary subsets of data so that these can be reproduced is a daunting challenge in data-driven science, the more so if the underlying data source is dynamically evolving. Yet an increasing number of settings exhibit exactly those characteristics. Ever-larger amounts of data are continuously ingested from a range of sources (be it sensor values, online questionnaires, documents, etc.), with error correction and quality improvement processes adding to the dynamics. Yet, for studies to be reproducible, for decision-making to be transparent, and for meta studies to be performed conveniently, having a precise identification mechanism to reference, retrieve, and work with such data is essential. The Research Data Alliance (RDA) Working Group on Dynamic Data Citation has published 14 recommendations that are centered around time-stamping and versioning evolving data sources and identifying subsets dynamically via persistent identifiers that are assigned to the queries selecting the respective subsets. These principles are generic and work for virtually any kind of data. In the past few years, numerous repositories around the globe have implemented these recommendations and deployed solutions. We provide an overview of the recommendations, reference implementations, and pilot systems deployed, and then analyze lessons learned from these implementations. This article provides a basis for institutions and data stewards considering adding this functionality to their data systems.
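    The query-based identification can be sketched as follows; the PID scheme, normalization, and query store below are illustrative assumptions rather than any deployed implementation:

```python
import hashlib
import time

# Sketch of query-based identification: a subsetting query is normalized,
# timestamped, and hashed, and the result is stored under a mock
# persistent identifier so the exact subset can be re-retrieved later
# against the versioned data source.

query_store = {}   # pid -> {"query": ..., "timestamp": ...}

def cite_query(query, executed_at=None):
    """Assign a mock persistent identifier to a normalized, timestamped query."""
    executed_at = executed_at if executed_at is not None else time.time()
    normalized = " ".join(query.lower().split())             # trivial normalization
    digest = hashlib.sha256(
        f"{normalized}@{executed_at}".encode()).hexdigest()[:12]
    pid = f"ex:query/{digest}"                               # placeholder PID scheme
    query_store[pid] = {"query": normalized, "timestamp": executed_at}
    return pid

def resolve(pid):
    """Return the query text and timestamp needed to re-execute the subset."""
    return query_store[pid]

pid = cite_query("SELECT * FROM readings WHERE sensor = 's1'")
print(pid, resolve(pid))
```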