6 research outputs found

    Multi-Agent Systems

    This Special Issue "Multi-Agent Systems" gathers original research articles reporting results in the steadily growing area of agent-oriented computing and multi-agent systems technologies. After more than 20 years of academic research on multi-agent systems (MASs), agent-oriented models and technologies have been promoted as the most suitable candidates for the design and development of distributed and intelligent applications in complex and dynamic environments. With respect to both their quality and range, the papers in this Special Issue represent a meaningful sample of the most recent advances in the field of agent-oriented models and technologies. In particular, the 17 contributions cover agent-based modeling and simulation, situated multi-agent systems, socio-technical multi-agent systems, and semantic technologies applied to multi-agent systems. It is striking how even this limited portion of MAS research already highlights the most relevant uses of agent-based models and technologies, as well as their most appreciated characteristics. We are thus confident that the readers of Applied Sciences will appreciate the growing role that MASs will play in the design and development of the next generation of complex intelligent systems. This Special Issue has been converted into a yearly series, for which a new call for papers is already available on the Applied Sciences journal's website: https://www.mdpi.com/journal/applsci/special_issues/Multi-Agent_Systems_2019
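
    As a purely illustrative aside (not taken from any paper in this Special Issue; all class and function names are hypothetical), the following minimal Python sketch shows the basic shape of an agent-based simulation: a population of reactive agents that repeatedly observe a shared quantity and adjust their own state.

        import random

        class Agent:
            """A minimal reactive agent: it senses a shared observation and
            nudges its own state toward the observed group mean."""
            def __init__(self, state: float):
                self.state = state

            def step(self, observed_mean: float) -> None:
                # Move a small fraction of the way toward the group mean,
                # plus a little noise so the dynamics stay non-trivial.
                self.state += 0.1 * (observed_mean - self.state) + random.uniform(-0.05, 0.05)

        def simulate(num_agents: int = 20, ticks: int = 50) -> list[float]:
            agents = [Agent(random.random()) for _ in range(num_agents)]
            means = []
            for _ in range(ticks):
                mean = sum(a.state for a in agents) / len(agents)
                for a in agents:
                    a.step(mean)
                means.append(mean)
            return means

        if __name__ == "__main__":
            trajectory = simulate()
            print(f"final group mean: {trajectory[-1]:.3f}")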

    Domain- and Quality-aware Requirements Engineering for Law-compliant Systems

    Title in German translation: Domänen- und qualitätsgetriebene Anforderungserhebung für gesetzeskonforme Systeme. The long-known credo of requirements engineering states that it is challenging to build the right system if you do not know what 'right' is. There is strong evidence that this credo exactly defines and describes the necessity of requirements engineering. For example, fixing a defect in software that is already fielded is reported to be up to eighty times more expensive than fixing the corresponding requirements defect early on. In general, conducting sufficient requirements engineering has been shown to be a crucial success factor for software development projects. Throughout the progression from the initial stakeholders' wishes regarding the system-to-be to a specification for that system, requirements engineers have to undergo a complex decision process that turns those wishes into the final specification. Indeed, decision making is considered an inherent part of requirements engineering. In this thesis, we try to understand which activities and information are needed for selecting requirements, what the challenges are, what an ideal solution for selecting requirements would look like, and where the current state of the art falls short of that ideal solution. Within this thesis we identify the information necessary for an informed requirements selection, present a process in which one collects all the necessary information, highlight the challenges to be addressed by this process and its activities, and discuss a selection of methods with which the activities of the process can be conducted. All the collected information is then used for an automated requirements selection based on an optimization model, which is also part of the contribution of this thesis. As we identified two major gaps in the state of the art with respect to the proposed process and its activities, we also present two novel methods, one for context elicitation and one for legal compliance requirements elicitation, to close these gaps as part of the main contribution. Our solution for context elicitation enables a domain-specific context establishment based on patterns for different domains. The context patterns allow a structured elicitation and documentation of the relevant stakeholders and technical entities for a system-to-be. Both the documentation, in the form of graphical pattern instances and textual template instances, and the method for collecting the necessary information are explicitly given in each context pattern.
Additionally, we provide the means necessary to derive new context patterns and to extend the context pattern language presented in this thesis. Our solution for legal compliance requirements elicitation is a pattern-based and guided method that lets one identify the relevant laws for a system-to-be, described in terms of its functional requirements, and that intertwines the functional requirements with the corresponding legal requirements. This method relies on the collaboration of requirements engineers and legal experts, and bridges the gap between their distinct worlds. Our process is exemplified using a running example from the domain of service-oriented architectures. Additionally, we present the results of applying (parts of) the process to real-life cases from the smart grid and voting system domains, as well as all other results from the scientific means we used to ground and validate the proposed solutions.
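
    The thesis defines its own optimization model; purely as a hedged illustration of what an automated, optimization-based requirements selection can look like, the Python sketch below selects the subset of candidate requirements that maximizes aggregated stakeholder value under an effort budget (a simple 0/1 knapsack view; all requirement names, values, and efforts are hypothetical).

        from itertools import combinations
        from typing import NamedTuple

        class Requirement(NamedTuple):
            name: str
            value: int   # aggregated stakeholder value (hypothetical scale)
            effort: int  # estimated implementation effort (hypothetical units)

        def select_requirements(reqs: list[Requirement], budget: int) -> list[Requirement]:
            """Brute-force 0/1 selection: maximize total value subject to an effort budget.
            Fine for a handful of requirements; a real model would use an ILP solver."""
            best, best_value = [], 0
            for k in range(len(reqs) + 1):
                for subset in combinations(reqs, k):
                    effort = sum(r.effort for r in subset)
                    value = sum(r.value for r in subset)
                    if effort <= budget and value > best_value:
                        best, best_value = list(subset), value
            return best

        if __name__ == "__main__":
            candidates = [
                Requirement("encrypt-data-at-rest", value=8, effort=5),
                Requirement("audit-logging", value=6, effort=3),
                Requirement("role-based-access", value=7, effort=4),
                Requirement("export-to-pdf", value=3, effort=2),
            ]
            for r in select_requirements(candidates, budget=9):
                print(r.name)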

    Data-driven conceptual modeling: how some knowledge drivers for the enterprise might be mined from enterprise data

    As organizations perform their business, they analyze, design and manage a variety of processes, represented in models of different scope and complexity. Specifying these processes requires a certain level of modeling competence. However, this requirement is often not matched by the capability of the person(s) responsible for defining and modeling an organization's or enterprise's operation. On the other hand, an enterprise typically collects records of the events that occur during the operation of its processes. Records such as the start and end of tasks in a process instance, state transitions of objects affected by process execution, and the messages exchanged during process execution are maintained in enterprise repositories as various logs, such as event logs, process logs, effect logs, and message logs. Furthermore, the volume of data generated by enterprise process execution has grown manyfold in just a few years. On top of this, models are often considered the dashboard view of an enterprise: they represent an abstraction of the underlying reality of an enterprise and serve as knowledge drivers through which an enterprise can be managed. Data-driven extraction offers the capability to mine these knowledge drivers from enterprise data and to leverage the mined models to establish the set of enterprise data that conforms with the desired behaviour. This thesis aims to generate models, or knowledge drivers, from enterprise data to enable a dashboard view of the enterprise and to provide support for analysts. The rationale for this starts from the requirement to improve an existing process or to create a new one; models can also serve as a collection of effectors through which an organization or an enterprise can be managed. The enterprise data referred to above are process logs, effect logs, message logs, and invocation logs. The approach in this thesis is to mine these logs to generate process, requirements, and enterprise architecture models, and to show how goals get fulfilled based on the collected operational data. The research question has been formulated as: is it possible to derive the knowledge drivers from the enterprise data that represent the running operation of the enterprise, or, in other words, is it possible to use the data available in the enterprise repository to generate the knowledge drivers? Chapter 2 reviews the literature that provides the background knowledge needed to explore this research question. Chapter 3 presents how process semantics can be mined. Chapter 4 suggests a way to extract a requirements model. Chapter 5 presents a way to discover the underlying enterprise architecture, and Chapter 6 presents a way to mine how goals get orchestrated. The overall findings are discussed in Chapter 7 to derive some conclusions.
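
    As a minimal illustration of mining a knowledge driver from such logs (not taken from the thesis; the log format and names are assumptions), the Python sketch below derives a directly-follows relation from a toy event log, a common first step in process discovery.

        from collections import defaultdict

        # A toy event log: one list of activity names per process instance (case).
        # Real logs would also carry timestamps, resources, and case attributes.
        event_log = [
            ["receive_order", "check_stock", "ship_goods", "send_invoice"],
            ["receive_order", "check_stock", "back_order", "ship_goods", "send_invoice"],
            ["receive_order", "check_stock", "ship_goods", "send_invoice"],
        ]

        def directly_follows(log: list[list[str]]) -> dict[tuple[str, str], int]:
            """Count how often activity b directly follows activity a across all cases."""
            counts: dict[tuple[str, str], int] = defaultdict(int)
            for trace in log:
                for a, b in zip(trace, trace[1:]):
                    counts[(a, b)] += 1
            return counts

        if __name__ == "__main__":
            for (a, b), n in sorted(directly_follows(event_log).items()):
                print(f"{a} -> {b}: {n}")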

    A hybrid model checking approach to analysing rule conformance applied to HIPAA privacy rules

    Many of today's computing systems must show evidence of conformance to rules. The rules may come from business protocol choices or from multi-jurisdictional sources. Examples are the rules derived from the regulations in the Health Insurance Portability and Accountability Act (HIPAA), protecting the privacy of patient information, and the Family Educational Rights and Privacy Act (FERPA), protecting the privacy of student education records. The rules impose additional requirements on already complex systems, and rigorous analysis is needed to show that any system implementing the rules exhibits conformance. If the analysis finds that a rule is not satisfied, we adjudge that the system fails conformance analysis and that it contains a fault; this fault must be located in the system and fixed. The exhaustive analysis performed by model checking makes it suitable for showing that systems satisfy conformance rules. Conformance rules may be viewed in two, sometimes overlapping, categories: process-aware conformance rules that dictate process sequencing, and data-aware conformance rules that dictate acceptable system states. Where conformance rules relate to privacy, the analysis performed in model checking requires examining fine-grained structural details of the system state to show conformance to data-aware conformance rules. The analysis of these rules may make model checking intractable due to a state space explosion when there are too many system states or too many details in a system state. To overcome this intractable complexity, various abstraction techniques have been proposed that produce a smaller abstracted system state model that is more amenable to model checking. These abstraction techniques are not useful when the abstractions hide the details necessary to verify conformance, and if non-conformance occurs, the abstraction may not allow isolation of the fault. In this dissertation, we introduce a Hybrid Model Checking Approach (HMCA) to analyse a system for both process- and data-aware conformance rules without abstracting the details away from the system's detailed process and data models. Model checking requires an analysable model of the system under analysis, called a program graph, and a representation of the rules that can be checked on the program graph. In our approach, we use connections between a process-oriented model (e.g. a Unified Modelling Language (UML) activity model) and a data-oriented model (e.g. a UML class model) to create a unified paths-and-states system model, which we represent as a UML state machine. The rule-relevant part of the state machine, along with a graph-oriented formalism of the rules, are the inputs to HMCA. The model checker uses an exhaustive unfolding of the program graph to produce a transition system showing all the program graph's reachable paths and states. It is in creating this transition system that intractable complexity is encountered. In HMCA, we use a divide-and-conquer approach that applies a slicing technique to the program graph to semi-automatically produce the transition system, analysing each slice individually and composing its result with the results from the other slices. Our ability to construct the transition system from the slices relieves a traditional model checker of that step. We then return to model checking techniques to verify whether the transition system satisfies the rules.
Since the analysis involves examining system states, if any of the rules are not satisfied, we can isolate the specific location of the fault from the details contained in the slices. We demonstrate our technique on an instance of a medical research system whose requirements include the privacy rules mandated by HIPAA. Our technique found seeded faults for common mistakes in logic that led to non-conformance, as well as underspecification leading to conflicts of interest in personnel relationships.
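
    HMCA itself operates on UML models, slices, and a graph-oriented rule formalism; purely as a minimal sketch of the underlying idea, the Python example below unfolds a tiny hand-written program graph into its reachable transition system and flags states that violate a simple data-aware rule (all states, transitions, and the rule are hypothetical, not drawn from the dissertation).

        from collections import deque
        from typing import NamedTuple

        class State(NamedTuple):
            step: str
            consent_given: bool
            data_disclosed: bool

        def successors(s: State) -> list[State]:
            """A tiny hand-written program graph: which states follow each state."""
            if s.step == "start":
                return [s._replace(step="ask_consent", consent_given=True),
                        s._replace(step="skip_consent")]
            if s.step in ("ask_consent", "skip_consent"):
                return [s._replace(step="disclose", data_disclosed=True)]
            return []  # "disclose" is terminal

        def violates_rule(s: State) -> bool:
            # Data-aware conformance rule: data may only be disclosed with consent.
            return s.data_disclosed and not s.consent_given

        def check() -> list[State]:
            """Exhaustively unfold the reachable state space (the transition system)
            and collect every reachable state that violates the rule."""
            init = State("start", consent_given=False, data_disclosed=False)
            seen, frontier, violations = {init}, deque([init]), []
            while frontier:
                s = frontier.popleft()
                if violates_rule(s):
                    violations.append(s)
                for nxt in successors(s):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
            return violations

        if __name__ == "__main__":
            for v in check():
                print("rule violated in state:", v)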

    Requirements engineering: foundation for software quality


    A method for developing Reference Enterprise Architectures

    Industrial change forces enterprises to constantly adjust their organizational structures in order to stay competitive. In this regard, research acknowledges the potential of Reference Enterprise Architectures (REAs). This thesis proposes REAM, a method for developing REAs. After contrasting organizations' needs with the approaches available in the current knowledge base, this work identifies the absence of method support for REA development. By proposing REAM, the author aims to close this research gap and evaluates the method's utility by applying REAM in different naturalistic settings.