
    Rethink Digital Health Innovation: Understanding Socio-Technical Interoperability as Guiding Concept

    This dissertation searches for a theoretical framework for developing complex digital health innovations in such a way that they have better prospects of actually arriving in everyday care practice. Although there is no shortage of demand for, or ideas about, digital health innovations, the flood of solutions successfully established in practice fails to materialize. This insufficient diffusion success of a developed solution - often pathologized as 'pilotitis' - becomes especially apparent when the planned innovation comes with greater ambition and complexity. Experienced critics will immediately raise heretical counter-questions: for example, what exactly is meant by complex digital health innovations, and whether it is even possible to find a universal formula that can guarantee the successful diffusion of digital health innovations. Both questions are not only legitimate, but ultimately lead to the two research strands that I explicitly pursue in this dissertation. In a first block, I delineate those digital health innovations that currently receive particular attention in literature and practice due to their high potential for improving care and their resulting complexity. More precisely, I examine dominant objectives and the challenges that accompany them. Within this research strand, four objectives crystallize: 1. supporting continuous, collaborative care processes across diverse healthcare providers (also known as inter-organizational care pathways); 2. actively involving patients in their own care processes (also known as patient empowerment or patient engagement); 3. strengthening cross-sector collaboration between research and care practice, up to and including learning health systems; and 4. establishing data-centered value creation for healthcare, driven by the increasing availability of valid data, new processing methods (keyword: artificial intelligence) and numerous usage opportunities. The focus of this dissertation is therefore less on self-contained, clearly delimitable innovations (e.g., a symptom diary app for documenting complaints). Rather, this dissertation addresses those innovation endeavors that pursue one or more of the above objectives, add a further technological puzzle piece to complex information system landscapes, and thus contribute, in interplay with diverse other IT systems, to improving healthcare and/or its organization.
    In engaging with these objectives and the associated challenges of system development, the problem of fragmented healthcare IT landscapes moved into focus. This denotes the unfortunate state in which different information and application systems cannot interact with one another as desired. The result is interruptions of information flows and care processes, which must be compensated elsewhere through error-prone additional effort (e.g., duplicate documentation). To counter these limitations of effectiveness and efficiency, precisely these IT system silos must be dismantled. All of the above objectives serve this defragmenting effect by seeking to bring together 1. different healthcare providers, 2. care teams and patients, 3. research and care, or 4. diverse data sources and modern analysis technologies. But here a complex circularity arises. On the one hand, the digital health innovations addressed in this work seek ways to defragment information system landscapes. On the other hand, their limited success rate is rooted, among other things, in precisely the existing fragmentation they seek to resolve. This insight opens the second research strand of this work, which engages intensively with the property of 'interoperability' and examines how this property should play a central role for innovation endeavors in the digital health domain. Simply put, interoperability describes the ability of two or more systems to accomplish shared tasks together. It thus represents the core concern of the identified objectives and is the linchpin whenever a developed solution is to be integrated into a concrete target environment. From a technically dominated perspective, this is about ensuring valid, performant and secure communication scenarios, so that the aforementioned breaks in information flow between technical subsystems are dismantled. A purely technical understanding of interoperability, however, does not suffice to capture the variety of diffusion barriers facing digital health innovations: the lack of adequate reimbursement options within the statutory framework, for instance, or a poor fit with the specific care process are not purely technical problems. Rather, a basic stance of information systems research comes into play here, which understands information systems - including those of healthcare - as socio-technical systems and always considers technology in connection with the people who use it, are influenced by it, or organize it. If a digital health innovation that promises added value in line with the above objectives is to be integrated into an existing healthcare information system landscape, it must be 'interoperable' from technical as well as non-technical points of view. The necessity of interoperability is recognized in research, politics and practice, and positive movements of the domain toward more interoperability are perceptible. However, a technical understanding still dominates, and the potential of this property as a guiding concept for innovation management has so far remained largely untapped. This is exactly where the main contribution of this dissertation comes in: it proposes a socio-technical conceptualization and contextualization of interoperability for future digital health innovations. Based on literature and expert input, a framework is developed - the Digital Health Innovation Interoperability Framework - that is intended to support innovators and innovation sponsors in increasing the probability of diffusion into practice. Many insights and messages are connected with this framework, which I would like to summarize for this prologue as follows:
    1. To align the development of digital health innovations as well as possible with successful integration into a specific target environment, realizing a novel value proposition and ensuring socio-technical interoperability are the two interrelated main tasks of an innovation process. 2. Ensuring interoperability is a management task to be actively owned, and it is influenced by project-specific conditions as well as by external and internal dynamics. 3. Socio-technical interoperability in the context of digital health innovations can be defined across seven interdependent levels: political and regulatory conditions; contractual conditions; care and business processes; usage; information; applications; IT infrastructure. 4. To ensure interoperability at each of these levels, strategies must be defined in a differentiated manner; these strategies can be located on a continuum between compatibility requirements on the side of the innovation and motivating adaptations on the side of the target environment. 5. Striving for more interoperability promotes both the sustainable success of the individual digital health innovation and the defragmentation of existing information system landscapes, and thus contributes to improving healthcare. Admittedly, the last of these five messages is more a conviction than the result of scientific proof. Nevertheless, I regard this insight, personal though it may be, as a maxim of the domain to which I feel I belong: the development of IT systems for healthcare.
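
    To make messages 3 and 4 above concrete, the following minimal sketch, written in Python purely for illustration, shows one possible way to represent the seven interoperability levels and per-level strategies on the compatibility/adaptation continuum; the names, the continuum encoding and the assessment helper are invented here and are not part of the dissertation.

```python
# A hypothetical rendering (not from the dissertation) of the framework's
# seven interoperability levels and per-level strategies on the
# compatibility/adaptation continuum.
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    POLICY_AND_REGULATION = 1
    CONTRACTS = 2
    CARE_AND_BUSINESS_PROCESSES = 3
    USAGE = 4
    INFORMATION = 5
    APPLICATIONS = 6
    IT_INFRASTRUCTURE = 7

@dataclass
class LevelStrategy:
    level: Level
    # 0.0 = fully adapt the innovation to the target environment (compatibility);
    # 1.0 = motivate the target environment to adapt to the innovation.
    continuum_position: float
    rationale: str

def assess(strategies: list[LevelStrategy]) -> list[Level]:
    """Return levels still lacking an explicit strategy (message 4:
    interoperability must be managed per level, not globally)."""
    covered = {s.level for s in strategies}
    return [lv for lv in Level if lv not in covered]

strategies = [LevelStrategy(Level.INFORMATION, 0.2,
                            "adopt the target environment's data standards")]
print(assess(strategies))  # six levels still need an explicit strategy
```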

    Systemic Circular Economy Solutions for Fiber Reinforced Composites

    This open access book provides an overview of the work undertaken within the FiberEUse project, which developed solutions enhancing the profitability of composite recycling and reuse in value-added products, with a cross-sectorial approach. Glass and carbon fiber reinforced polymers, or composites, are increasingly used as structural materials in many manufacturing sectors such as transport, construction and energy, due to their lower weight and better corrosion resistance compared to metals. However, composite recycling remains a challenge, since no significant added value in the recycling and reprocessing of composites has yet been demonstrated. FiberEUse developed innovative solutions and business models towards sustainable Circular Economy solutions for post-use composite products. Three strategies are presented, namely mechanical recycling of short fibers, thermal recycling of long fibers, and modular car part design for sustainable disassembly and remanufacturing. The validation of the FiberEUse approach within eight industrial demonstrators shows the potential for new Circular Economy value chains for composite materials.

    Measuring the impact of COVID-19 on hospital care pathways

    Care pathways in hospitals around the world reported significant disruption during the recent COVID-19 pandemic, but measuring the actual impact is more problematic. Process mining can be useful for hospital management to measure the conformance of real-life care to what might be considered normal operations. In this study, we aim to demonstrate that process mining can be used to investigate process changes associated with complex disruptive events. We studied perturbations to accident and emergency (A&E) and maternity pathways in a UK public hospital during the COVID-19 pandemic. Coincidentally, the hospital had implemented a Command Centre approach for patient-flow management, affording an opportunity to study both the planned improvement and the disruption due to the pandemic. Our study proposes and demonstrates a method for measuring and investigating the impact of such planned and unplanned disruptions affecting hospital care pathways. We found that during the pandemic, both A&E and maternity pathways had measurable reductions in the mean length of stay and a measurable drop in the percentage of pathways conforming to normative models. There were no distinctive patterns in the monthly mean values of length of stay or conformance throughout the phases of the installation of the hospital's new Command Centre approach. Due to a deficit in the available A&E data, the findings for A&E pathways could not be interpreted.
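
    As a rough, hypothetical illustration of the kind of analysis described above, the following sketch uses the open-source pm4py process mining library to compute the mean length of stay and token-based-replay conformance against a normative model discovered from pre-pandemic traces; the file name, the pandemic cutoff date and the column names are assumptions, not details of the study.

```python
# A hedged sketch (not the study's pipeline) of length-of-stay and
# conformance measurement with pm4py; assumes a tz-aware XES event log.
import pandas as pd
import pm4py

log = pm4py.read_xes("ae_pathways.xes")        # hypothetical event log
df = pm4py.convert_to_dataframe(log)

# Mean length of stay per case: last event minus first event.
los = (df.groupby("case:concept:name")["time:timestamp"]
         .agg(lambda t: t.max() - t.min()))
print("mean length of stay:", los.mean())

# Conformance against a normative model discovered from pre-pandemic traces.
pre = df[df["time:timestamp"] < pd.Timestamp("2020-03-01", tz="UTC")]
net, im, fm = pm4py.discover_petri_net_inductive(pre)
fitness = pm4py.fitness_token_based_replay(df, net, im, fm)
print(fitness)  # includes the share of traces fitting the normative model
```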

    Capability-Based Routes for Autonomous Vehicles

    The pursuit of vehicle automation is an ongoing trend in the automotive industry. Particularly challenging is the goal of introducing driverless autonomous vehicles (AVs) into road traffic. To realize this vision, a targeted development of autonomous driving functions is essential. However, a targeted development process is only possible if the driving functions are tailored as appropriately and completely as possible to the operational design domain (ODD). Regardless of use case, all AVs have one thing in common: driving at least one route from A to B, whether simple or complex. For operational purposes, it is therefore necessary to ensure that the driving requirements (DRs) of the potential routes within the ODD do not exceed the driving capabilities (DCs) of the AVs. Currently, there is no approach that accomplishes the identification of exceeded capabilities. This work presents a method for the route-based specification of DRs and DCs for AVs. It addresses the core research question of how to identify routes whose DRs do not exceed the DCs of AVs. An initial analysis reveals the dependencies between a route and its DRs. The scenery defined in the ODD is thereby found to be a fundamental basis for the specification of behavioral requirements as part of the DRs. In combination with the applicable traffic rules, the scenery elements define the behavioral limits for AVs. These limits are extracted from the scenery and classified as behavioral demands using an analysis of these combinations. To enable a route-based specification of DRs, the behavioral demands are modeled as behavior spaces and transformed into a generic map representation, the Behavior-Semantic Scenery Description (BSSD). Based on the BSSD, a method is developed that generates behavioral requirements from the route-constrained concatenation of behavior spaces. As a result, in addition to the method itself, the associated behavioral requirements become available as a basis for the route-based specification of DRs and DCs. Constraints for the specification are defined by the developed concept for the matching of DRs and DCs. It is shown that the DRs depend strongly on the geometry and properties of the scenery elements, so that equal behavioral requirements do not necessarily imply equal DRs. These dependencies are used for the specification, enabling the definition of matching criteria for a selection of DRs and corresponding DCs. To realize the matching, a capability-based route search is developed and implemented. The route search incorporates all elaborated results of the work, enabling the whole approach to be evaluated by applying it to a real road network. The evaluation shows that the identification of feasible routes for AVs based on the scenery is possible, and it reveals which hurdles, arising from identified deficits, still have to be overcome.
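
    The following is a minimal, hypothetical sketch, not the BSSD implementation, of the matching idea described above: edges of a road graph carry driving requirements, and the route search is restricted to edges whose requirements do not exceed the vehicle's capabilities. All attribute names and values are invented.

```python
# A capability-based route search in the spirit described above (invented
# attributes; not the thesis code): keep only edges whose DRs are covered
# by the AV's DCs, then run an ordinary shortest-path search.
import networkx as nx

road = nx.DiGraph()
road.add_edge("A", "B", requirements={"max_speed": 50, "unprotected_turn": 0})
road.add_edge("B", "C", requirements={"max_speed": 100, "unprotected_turn": 1})
road.add_edge("A", "C", requirements={"max_speed": 70, "unprotected_turn": 0})

capabilities = {"max_speed": 80, "unprotected_turn": 0}  # the AV's DCs

def feasible(data, dc=capabilities):
    """An edge is feasible iff every DR is covered by the matching DC."""
    return all(dc.get(k, 0) >= req for k, req in data["requirements"].items())

feasible_graph = nx.DiGraph(
    (u, v, d) for u, v, d in road.edges(data=True) if feasible(d))
print(nx.shortest_path(feasible_graph, "A", "C"))  # ['A', 'C']; B-C excluded
```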

    A Formal Engineering Approach for Interweaving Functional and Security Requirements of RESTful Web APIs

    RESTful Web API adoption has become ubiquitous, with REST APIs proliferating in almost all domains as modern web applications embrace the micro-service architecture. This vibrant and expanding adoption of APIs has caused an increasing amount of data to be funneled through systems that require proper access management to ensure that web assets are secured. A RESTful API provides data using the HTTP protocol over the network, interacting with databases and other services, and must preserve its security properties. Currently, practitioners face two major challenges in developing high-quality secure RESTful APIs. First, REST is not a protocol; rather, it is a set of guidelines that define how web resources can be designed and accessed over HTTP endpoints. These guidelines stipulate how related resources should be structured using hierarchical URIs and how specific well-defined actions on those resources should be represented using different HTTP verbs. While security has always been critical in the design of RESTful APIs, there are no clear formal models utilizing a secure-by-design approach that interweaves both the functional and security requirements. The second challenge is how to effectively utilize a model-driven approach for constructing precise requirements and design specifications, so that the security of a RESTful API is treated as a concern that cuts across functionality rather than as a set of individual isolated operations. This thesis proposes a novel technique that encourages a model-driven approach to specifying and verifying an API's functional and security requirements with the practical formal method SOFL (Structured Object-Oriented Formal Language). Our proposed approach provides a generic six-step model-driven approach for designing security-aware APIs by utilizing the concepts of domain models, domain primitives, the Ecore metamodel and SOFL. The first step involves generating a flat file with the API's resource listings. In this step, we extract resource definitions from input RESTful API documentation written in RAML, using an existing RAML parser. The output of this step is a flat file representing the API resources as defined in the RAML input file. This step is fully automated. The second step involves the automatic construction of an API resource graph that serves as a blueprint for creating the target API domain model. The input for this step is the flat file generated in step 1 and the output is a directed graph (digraph) of API resources. We leverage an algorithm we created that takes a list of lists of API resource nodes and the defined API root resource node as input, and constructs a digraph highlighting all the API resources as output. In step 3, we use the generated digraph as a guide to manually define the API's initial domain model as the target output, with an aggregate root corresponding to the root node of the input digraph and the rest of the nodes corresponding to domain model entities. In effect, the digraph generated in step 2 is a bare-bones representation of the target domain model; what is missing from the domain model at this stage is the distinction between containment and reference relationships between entities. The resulting domain model describes the entire ecosystem of the modeled API in the form of the Domain-Driven Design concepts of aggregates, aggregate root, entities, entity relationships, value objects and aggregate boundaries.
    The fourth step, which takes our newly defined domain model as input, involves a threat modeling process using Attack Defense Trees (ADTrees) to identify potential security vulnerabilities in our API domain model and their countermeasures. Countermeasures that can enforce secure constructs on the attributes and behavior of their associated domain entities are modeled as domain primitives. Domain primitives are distilled versions of value objects with proper invariants, and these invariants enforce security constraints on the behavior of their associated entities in our API domain model. The output of this step is a complete refined domain model, with the additional security invariants from the threat modeling process defined as domain primitives. This fourth step achieves our first interweaving of functional and security requirements in an implicit manner. The fifth step involves creating an Ecore metamodel that describes the structure of our API domain model. In this step, we rely on the refined domain model as input and create, as output, an Ecore metamodel to which our refined domain model corresponds. Specifically, this step encompasses the structural modeling of our target RESTful API: the structural model describes the possible resource types, their attributes and relations, as well as their interfaces and representations. The sixth and final step involves behavioral modeling. The input for this step is the Ecore metamodel from step 5 and the output is a formal, security-aware RESTful API specification in the SOFL language. Our goal here is to define RESTful API behaviors that consist of actions corresponding to their respective HTTP verbs, i.e., GET, POST, PUT, DELETE and PATCH. For example, a CreateAction creates a new resource, an UpdateAction provides the capability to change the values of attributes, and a ReturnAction allows for response definition, including the representation and all metadata. To achieve behavioral modeling, we transform our API methods into SOFL processes, taking advantage of the expressive nature of SOFL processes to define the modeled API behaviors. We achieve the interweaving of functional and security requirements by injecting boolean formulas into the postconditions of SOFL processes. To verify whether the interweaved functional and security requirements implement all expected functions correctly and satisfy the desired security constraints, we can optionally perform specification testing. Since implicit specifications do not prescribe algorithms for implementation but are instead expressed as predicate expressions involving pre- and postconditions, we can substitute all the variables involved in a process with concrete values of their types and evaluate the results in the form of the truth values true or false. When conducting specification testing, we apply the SOFL process animation technique to obtain the set of concrete values of output variables for each process functional scenario. We analyse the test results by comparing the evaluation results with an analysis criterion, i.e., a predicate expression representing the properties to be verified. If the evaluation results are consistent with the predicate expression, the analysis shows consistency between the process specification and its associated requirement. We generate the test cases for both input and output variables based on the user requirements.
    The test cases generated are usually based on test targets, which are predicate expressions such as the pre- and postconditions of a process. When testing for conformance of a process specification to its associated service operation, we only need to observe the execution results of the process by providing concrete input values to all of its functional scenarios and analyzing their defining conditions relative to the user requirements. We present an empirical case study validating the practicality and usability of our model-driven formal engineering approach by applying it to the development of a Salon Booking System. A total of 32 services covering the functionalities provided by the Salon Booking System API were developed. We defined process specifications for the API services with their respective security requirements, which were injected in the threat modeling and behavioral modeling phases of our approach. We tested for the interweaving of functional and security requirements in the specifications generated by our approach by conducting tests relative to the original RAML specifications. Failed tests occurred in cases where an injected security measure, such as the requirement of object-level access control, was not respected, i.e., object-level access control was not checked. Our generated SOFL specification correctly rejects such a case by returning an appropriate error message, while the original RAML specification incorrectly dictates that such a request be accepted, because it is not aware of the measure. We further demonstrate a technique for generating SOFL specifications from a domain model via model-to-text transformation, which semi-automates the generation of the SOFL formal specification in step 6 of our proposed approach. The technique allows for the isolation of the dynamic and static sections of the generated specifications, enabling it to preserve the static sections of the target specifications while updating the dynamic sections in response to changes in the underlying domain model representing the RESTful API under design. Specifically, our contribution is the provision of a systematic model-driven formal engineering approach for the design and development of secure RESTful web APIs. The proposed approach offers a six-step methodology covering both the structural and behavioral modeling of APIs with a focus on security. The most distinctive merit of the model-to-text transformation is its utilization of the API's domain model, as well as the metamodel to which the domain model corresponds, as the foundation for generating formal SOFL specifications that represent the API's functional and security requirements.
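
    As a hedged illustration of step 2, not the thesis code, the following sketch constructs a digraph of API resources from a flat listing of hierarchical resource paths such as one parsed out of RAML; the resource paths and the root are invented.

```python
# A hypothetical take on step 2: build a digraph of API resources whose
# edges follow the hierarchical URI structure,
# e.g. /salons -> /salons/{id} -> /salons/{id}/bookings.
import networkx as nx

def build_resource_digraph(resource_paths, root="/"):
    g = nx.DiGraph()
    g.add_node(root)
    for path in resource_paths:
        parts = [p for p in path.strip("/").split("/") if p]
        parent = root
        for i in range(len(parts)):
            node = "/" + "/".join(parts[: i + 1])
            g.add_edge(parent, node)   # parent-to-child containment edge
            parent = node
    return g

g = build_resource_digraph(["/salons/{id}/bookings", "/customers/{id}"])
print(list(nx.dfs_preorder_nodes(g, "/")))
```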

    The Systems Engineering Approach as a Modelling Paradigm of the Agri-food Supply-chain

    The agri-food supply-chain represents a complex System-of-Systems (SoS), because it crosses several other sectors and involves many different actors. The Systems Engineering (SE) approach helps to identify boundaries as a line of demarcation between the system itself and its greater context, including the operating environment, without neglecting any aspect. Model-Based Systems Engineering (MBSE) was used due to its capacity to produce more readable and compact documentation than other models. It uses the Systems Modeling Language (SysML) to describe the structure, the behaviours, the requirements and the constraints of the system. The modelling method developed to design and implement a model is based on one of the main purposes of the agri-food SoS: the need to track and trace useful information in order to realize a traceable and sustainable system. The agri-food supply-chain SoS was analysed through an iterative procedure model, developed in the open-source tool Papyrus, that has the system requirements at its centre. The requirement diagram is composed of consumer requirements, business requirements, legislation requirements and environmental requirements. The typical structure of the requirement package in SysML has the parent-child relationship built in and responds in an appropriate manner to the need for system traceability. Papyrus can directly validate the designed model without complex simulations, especially at a high level of abstraction, where some implementation parameters necessary for simulating the technical phase would be missing. This type of validation allowed what had been developed to be checked better and faster than through project simulations. This work presents the main results of the SE approach applied to the agri-food supply-chain and represents a starting point for the choice of the technical traceability solution.
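
    As a purely illustrative aside, the parent-child containment that gives the SysML requirement package its traceability can be mimicked in a few lines of Python; the requirement names below are invented, and the real artifact is of course a graphical Papyrus/SysML model rather than code.

```python
# A minimal, hypothetical illustration of the parent-child requirement
# structure underlying a SysML requirement package.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    rid: str
    text: str
    children: list["Requirement"] = field(default_factory=list)

    def trace(self, prefix=""):
        """Walk the hierarchy: the basis of requirement traceability."""
        yield prefix + self.rid + ": " + self.text
        for child in self.children:
            yield from child.trace(prefix + "  ")

root = Requirement("REQ-1", "The supply chain shall be traceable", [
    Requirement("REQ-1.1", "Consumer requirements"),
    Requirement("REQ-1.2", "Business requirements"),
    Requirement("REQ-1.3", "Legislation requirements"),
    Requirement("REQ-1.4", "Environmental requirements"),
])
print("\n".join(root.trace()))
```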

    Digital Twins of production systems - Automated validation and update of material flow simulation models with real data

    To achieve good economic efficiency and sustainability, production systems must be operated with high productivity over long periods of time. This poses great challenges for manufacturing companies, especially in times of increased volatility, triggered, for example, by technological upheavals in mobility as well as political and societal change, since the requirements placed on the production system are constantly changing. The frequency of necessary adaptation decisions and subsequent optimization measures increases, so the need for ways to evaluate scenarios and possible system configurations grows. A powerful tool for this is material flow simulation, whose use is currently limited, however, by its laborious manual creation and its temporally limited, project-based use. Longer-term use accompanying the system life cycle is currently hindered by the labor-intensive maintenance of the simulation model, i.e., the manual adaptation of the model whenever the real system changes. The goal of the present work is to develop and implement a concept, including the required methods, for automating the maintenance of the simulation model and its adaptation to reality. To this end, the available real data are used, which are increasingly present due to trends such as Industrie 4.0 and digitalization in general. The vision pursued by this work is a Digital Twin of the production system that, fed by this data input, represents a realistic image of the system at any point in time and can be used for the realistic evaluation of scenarios. For this purpose, the required overall concept was designed and the mechanisms for the automatic validation and updating of the model were developed. The focus lay, among other things, on developing algorithms for detecting changes in the structure and processes of the production system, as well as on investigating the influence of the available data. The developed components were successfully applied to a real use case at Robert Bosch GmbH and increased the realism of the Digital Twin, which was then successfully used for production planning and optimization. The potential of localization data for creating Digital Twins of production systems was demonstrated in the experimental environment of the learning factory of the wbk Institut fĂŒr Produktionstechnik.
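
    One ingredient of such automatic validation is detecting structural change between the model and the real system. The following hypothetical sketch, not the dissertation's algorithms, compares the directly-follows relations of a simulation model's material flow with those observed in real event data; the station names and the log are invented.

```python
# Detect structural drift by diffing directly-follows relations
# (station A feeds station B) between model and real event data.
def directly_follows(traces):
    """Extract the set of directly-follows pairs from a list of traces."""
    pairs = set()
    for trace in traces:
        pairs.update(zip(trace, trace[1:]))
    return pairs

model_flows = directly_follows([["saw", "mill", "wash", "assemble"]])
real_log = [["saw", "mill", "assemble"],            # washing step removed
            ["saw", "mill", "deburr", "assemble"]]  # a new station appeared
real_flows = directly_follows(real_log)

print("flows no longer observed:", model_flows - real_flows)
print("new flows to add to the model:", real_flows - model_flows)
```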

    Machine Learning Algorithm for the Scansion of Old Saxon Poetry

    Several scholars have designed tools to perform the automatic scansion of poetry in many languages, but none of these tools deal with Old Saxon or Old English. This project aims to be a first attempt at creating a tool for these languages. We implemented a Bidirectional Long Short-Term Memory (BiLSTM) model to perform the automatic scansion of Old Saxon and Old English poems. Since this model uses supervised learning, we manually annotated the Heliand manuscript and used the resulting corpus as the labeled dataset to train the model. The evaluation of the algorithm's performance reached an accuracy of 97% and a weighted average of 99% for precision, recall and F1 score. In addition, we tested the model with some verses from the Old Saxon Genesis and some from The Battle of Brunanburh, and we observed that the model predicted almost all Old Saxon metrical patterns correctly but misclassified the majority of the Old English input verses.
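
    For illustration, a minimal BiLSTM sequence tagger of the general kind described above can be sketched in PyTorch as follows; the dimensions, the toy label set and the random data are invented and do not reproduce the thesis architecture or features.

```python
# A minimal, hypothetical BiLSTM tagger: per-token label scores over a verse.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, n_labels, emb_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_labels)  # both directions concatenated

    def forward(self, token_ids):              # (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))  # (batch, seq_len, 2*hidden)
        return self.out(h)                     # per-token label scores

# Toy training step: tokens of a verse mapped to invented metrical labels.
model = BiLSTMTagger(vocab_size=1000, n_labels=4)
tokens = torch.randint(1, 1000, (8, 12))  # batch of 8 "verses", 12 tokens each
labels = torch.randint(0, 4, (8, 12))
loss = nn.CrossEntropyLoss()(model(tokens).reshape(-1, 4), labels.reshape(-1))
loss.backward()
print(float(loss))
```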

    Continuous Rationale Management

    Continuous Software Engineering (CSE) is a software life cycle model open to frequent changes in requirements or technology. During CSE, software developers continuously make decisions on the requirements and design of the software or the development process. They establish essential decision knowledge, which they need to document and share so that it supports the evolution and changes of the software. The management of decision knowledge is called rationale management. Rationale management provides an opportunity to support the change process during CSE; however, it is not well integrated into CSE. The overall goal of this dissertation is to provide workflows and tool support for continuous rationale management. The dissertation contributes an interview study with practitioners from industry, which investigates rationale management problems, current practices, and features to support continuous rationale management that practitioners find beneficial. Problems of rationale management in practice are threefold: First, documenting decision knowledge is intrusive in the development process and an additional effort. Second, the high amount of distributed decision knowledge documentation is difficult to access and use. Third, the documented knowledge can be of low quality, e.g., outdated, which impedes its use. The dissertation contributes a systematic mapping study on recommendation and classification approaches to treat the rationale management problems. The major contribution of this dissertation is a validated approach for continuous rationale management consisting of the ConRat life cycle model extension and the comprehensive ConDec tool support. To reduce intrusiveness and additional effort, ConRat integrates rationale management activities into existing workflows, such as requirements elicitation, development, and meetings. ConDec integrates into standard development tools instead of providing a separate tool. ConDec enables lightweight capturing and use of decision knowledge from various artifacts and reduces the developers' effort through automatic text classification, recommendation, and nudging mechanisms for rationale management. To enable access and use of distributed decision knowledge documentation, ConRat defines a knowledge model of decision knowledge and other artifacts. ConDec instantiates the model as a knowledge graph and offers interactive knowledge views with useful tailoring, e.g., transitive linking. To operationalize high quality, ConRat introduces the rationale backlog, the definition of done for knowledge documentation, and metrics for intra-rationale completeness and for decision coverage of requirements and code. ConDec implements these agile concepts for rationale management and a knowledge dashboard. ConDec also supports consistent changes through change impact analysis. The dissertation shows the feasibility, effectiveness, and user acceptance of ConRat and ConDec in six case study projects in an industrial setting. In addition, it comprehensively analyses the rationale documentation created in the projects. The validation indicates that ConRat and ConDec benefit CSE projects. Based on the dissertation, continuous rationale management should become a standard part of CSE, like automated testing or continuous integration.
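
    As a hedged sketch of one such metric, the following illustrates how a decision coverage measure over a knowledge graph might be computed: the share of requirements linked, within a given link distance, to at least one documented decision. The graph, the element naming and the distance threshold are invented; ConDec itself is tool support integrated into development tools, not this code.

```python
# A hypothetical decision-coverage metric over a small knowledge graph.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("REQ-1", "WI-1"), ("WI-1", "DEC-1"),  # requirement -> work item -> decision
    ("REQ-2", "WI-2"),                     # requirement without any decision
])

def decision_coverage(graph, max_dist=2):
    """Fraction of requirements reaching a decision within max_dist links."""
    reqs = [n for n in graph if n.startswith("REQ")]
    covered = [
        r for r in reqs
        if any(n.startswith("DEC")
               for n in nx.single_source_shortest_path_length(graph, r, max_dist))
    ]
    return len(covered) / len(reqs)

print(decision_coverage(g))  # 0.5: REQ-2 lacks a linked decision
```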

    Validation and Verification of Safety-Critical Systems in Avionics

    This research addresses the issues of verification and validation of safety-critical systems. Safety-critical systems such as avionics systems are complex embedded systems, composed of several hardware and software components whose integration requires verification and testing in compliance with the Radio Technical Commission for Aeronautics standards and their supplements (RTCA DO-178C). Avionics software requires certification before its deployment into an aircraft system, and testing is mandatory for certification. Until now, the avionics industry has relied on expensive manual testing and is searching for better (quicker and less costly) solutions. This research investigates formal verification and automatic test case generation approaches to enhance the quality of avionics software systems, ensure their conformity to the standard, and provide artifacts that support their certification. The contributions of this thesis are model-based automatic test case generation approaches that satisfy the MC/DC criterion, and bidirectional requirement traceability between low-level requirements (LLRs) and test cases. In the first contribution, we integrate model-based verification of properties and automatic test case generation in a single framework. The system is modeled as an extended finite state machine (EFSM) that supports both the verification of properties and automatic test case generation, modeling the control and dataflow aspects of the system. For verification, we model the system and some properties and ensure that the properties are correctly propagated to the implementation via mandatory testing. For testing, we extended an existing test case generation approach with the MC/DC criterion to satisfy the RTCA DO-178C requirements. Both local test cases for each component and global test cases for their integration are generated. The second contribution is a model-checking-based approach for automatic test case generation. In the third contribution, we developed an EFSM-based approach that uses constraint solving to handle test case feasibility and addresses bidirectional requirements traceability between LLRs and test cases. Traceability elements are determined at a low level of granularity, and then identified, linked to their source artifact, created, stored, and retrieved for several purposes. Requirements traceability has been extensively studied, but not at the proposed low level of granularity.
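
    As a self-contained illustration of the MC/DC criterion central to the generated test cases, not the thesis tooling, the following sketch finds, for each condition of a decision such as an EFSM transition guard, a pair of test vectors that differ only in that condition and flip the decision outcome.

```python
# MC/DC in miniature: each condition must be shown to independently
# affect the decision's outcome. The guard below is an invented example.
from itertools import product

def mcdc_pairs(decision, n_conditions):
    """Per condition index, return one pair of vectors demonstrating that
    the condition independently affects the decision outcome."""
    pairs = {}
    for vec in product([False, True], repeat=n_conditions):
        for i in range(n_conditions):
            flipped = list(vec)
            flipped[i] = not flipped[i]
            if decision(*vec) != decision(*flipped):
                pairs.setdefault(i, (vec, tuple(flipped)))
    return pairs

# Example guard of an EFSM transition: (a and b) or c
guard = lambda a, b, c: (a and b) or c
for cond, (v1, v2) in sorted(mcdc_pairs(guard, 3).items()):
    print(f"condition {cond}: {v1} vs {v2}")
```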
    • 

    corecore