
    Konzeption einer Komponentenarchitektur für prozessorientierte OLTP- & OLAP-Anwendungssysteme (Design of a Component Architecture for Process-Oriented OLTP & OLAP Application Systems)

    Process-oriented data warehouse (DWH) systems provide, in contrast to classical DWH systems, not only decision-support data on the results of business processes but also data on their execution. They target two main scenarios: the first aims at providing multidimensional, process-related data that can support the design of processes; the second aims at data provisioning and decision-making with low latency and targets control actions in running process instances. To support both scenarios, this paper proposes a component-based architecture concept for process-oriented OLTP & OLAP application systems. Besides realizing the functions of a process-oriented DWH system, the architecture concept also covers their integration with the functions of operational subsystems as well as functions for automated decision-making. Further requirements addressed by the concept are the timely, need-driven information supply of heterogeneous user groups and flexible adaptability to changes in business processes.
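
    The separation of concerns that the architecture concept calls for can be pictured with a minimal sketch; the component and method names below are hypothetical illustrations, not the components proposed in the paper:

    # Minimal sketch (hypothetical names): an operational component emits
    # process events; a process-oriented DWH component stores them and
    # serves both scenarios described above.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ProcessEvent:
        instance_id: str
        activity: str
        duration_ms: int

    @dataclass
    class ProcessOrientedDWH:
        events: List[ProcessEvent] = field(default_factory=list)

        def ingest(self, event: ProcessEvent) -> None:
            # Scenario 2: the event is available for low-latency
            # decision-making as soon as it is ingested.
            self.events.append(event)

        def average_duration(self, activity: str) -> float:
            # Scenario 1: multidimensional, process-related analysis,
            # reduced here to a single aggregate for brevity.
            ds = [e.duration_ms for e in self.events if e.activity == activity]
            return sum(ds) / len(ds) if ds else 0.0

    class DecisionComponent:
        """Automated decision-making on running process instances."""
        def __init__(self, dwh: ProcessOrientedDWH, threshold_ms: int) -> None:
            self.dwh = dwh
            self.threshold_ms = threshold_ms

        def should_escalate(self, event: ProcessEvent) -> bool:
            return event.duration_ms > self.threshold_ms

    dwh = ProcessOrientedDWH()
    decider = DecisionComponent(dwh, threshold_ms=500)
    event = ProcessEvent("inst-1", "approve_order", 720)
    dwh.ingest(event)
    print(decider.should_escalate(event))          # True -> steer the instance
    print(dwh.average_duration("approve_order"))   # 720.0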

    A Process for Identifying Predictive Correlation Patterns in Service Management Systems

    By using the remote functions of a modern IT service management system infrastructure, it is possible to analyze huge amounts of logfile data from complex technical equipment. This enables a service provider to predict failures of connected equipment before they happen. The problem most providers face in this context is finding a needle in a haystack: the amount of data obtained is too large to be analyzed manually. This report describes a process for finding suitable predictive patterns in log files for the detection of upcoming critical situations. The identification process may serve as a hands-on guide: it describes how to combine statistical methods, data mining algorithms, and expert knowledge in the service management domain. The process was developed in a research project currently being carried out within the Siemens Healthcare service organization. The project deals with two main aspects: first, the identification of predictive patterns in existing service data, and second, the architecture of an autonomous agent able to correlate such patterns. This paper summarizes the results of the first project challenge. The identification process was tested successfully in a proof of concept for several Siemens Healthcare products.
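
    The core idea of the identification process, scanning historical logs for events that precede failures more often than chance would suggest, can be illustrated with a small sketch; the event names, window size, and scoring below are hypothetical, not taken from the project:

    # Sketch (hypothetical event names and window size): score how often a
    # candidate event appears in the window before a failure, compared with
    # how often it appears overall -- a crude measure of predictive value.
    from datetime import datetime, timedelta
    from typing import List, Tuple

    Log = List[Tuple[datetime, str]]  # (timestamp, event code)

    def predictive_score(log: Log, candidate: str, failure: str,
                         window: timedelta) -> float:
        failures = [t for t, e in log if e == failure]
        hits = [t for t, e in log if e == candidate]
        if not hits:
            return 0.0
        # Fraction of candidate occurrences falling in a pre-failure window.
        in_window = sum(1 for h in hits
                        if any(f - window <= h < f for f in failures))
        return in_window / len(hits)

    t0 = datetime(2024, 1, 1)
    log = [(t0, "WARN_FAN"), (t0 + timedelta(minutes=30), "TUBE_FAILURE"),
           (t0 + timedelta(hours=5), "WARN_FAN"),
           (t0 + timedelta(hours=5, minutes=20), "TUBE_FAILURE"),
           (t0 + timedelta(hours=9), "INFO_BOOT")]
    print(predictive_score(log, "WARN_FAN", "TUBE_FAILURE", timedelta(hours=1)))  # 1.0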

    Literature Survey of Performance Benchmarking Approaches of BPEL Engines

    Despite the popularity of BPEL engines for orchestrating complex and executable processes, there are still only a few approaches available that help find the most appropriate engine for individual requirements. One of the more crucial factors for such a middleware product in industry is its performance characteristics. Multiple studies in industry and academia test the performance of BPEL engines, differing in focus and method. We aim to compare the methods used in these approaches and to provide guidance for further research in this area. Based on the related work in the field of performance testing, we created a process-engine-specific comparison framework, which we used to evaluate and classify nine different approaches found through a systematic literature survey. With the results of this status quo analysis in mind, we derived directions for further research.
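
    What such performance tests measure can be pictured with a minimal load-driver sketch; the endpoint, payload, and metrics below are hypothetical stand-ins, not the framework of any surveyed approach:

    # Minimal load-driver sketch (hypothetical endpoint and payload): invoke
    # a deployed process n times, report throughput and median latency.
    import statistics
    import time
    import urllib.request

    def benchmark(url: str, payload: bytes, n: int = 100) -> None:
        latencies = []
        start = time.perf_counter()
        for _ in range(n):
            t = time.perf_counter()
            req = urllib.request.Request(url, data=payload,
                                         headers={"Content-Type": "text/xml"})
            urllib.request.urlopen(req).read()
            latencies.append(time.perf_counter() - t)
        elapsed = time.perf_counter() - start
        print(f"throughput: {n / elapsed:.1f} req/s")
        print(f"median latency: {statistics.median(latencies) * 1000:.1f} ms")

    # Usage (hypothetical engine endpoint and SOAP message):
    # benchmark("http://localhost:8080/ode/processes/Hello", b"<soap:Envelope/>")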

    Symbolic Object Code Analysis

    Current software model checkers quickly reach their limits when applied to verifying pointer safety properties in source code that includes function pointers and inlined assembly. This paper introduces an alternative technique for checking pointer safety violations, called Symbolic Object Code Analysis (SOCA), which is based on bounded symbolic execution, incorporates path-sensitive slicing, and employs the SMT solver Yices as its execution and verification engine. Extensive experimental results for a prototypical SOCA Verifier, using the Verisec suite and almost 10,000 Linux device driver functions as benchmarks, show that SOCA performs competitively with current source-code model checkers and that it scales well when applied to real operating-system code and pointer safety issues. SOCA effectively explores semantic niches of software that current software verifiers do not reach.
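
    The core mechanism, executing one path symbolically and handing pointer-safety queries to an SMT solver, can be sketched as follows; Z3 stands in here for Yices, which the paper actually uses, and the tiny path condition is invented:

    # Sketch of a bounded-symbolic-execution pointer-safety query. Z3 is a
    # stand-in for Yices; the "program" is a hypothetical single path.
    from z3 import BitVec, BitVecVal, Solver, sat

    ptr = BitVec("ptr", 32)      # symbolic pointer operand from object code
    idx = BitVec("idx", 32)      # symbolic index

    s = Solver()
    # Path condition collected along one path: the branch `if (idx < 16)`
    # was taken, and ptr = buf + idx with buf a concrete base address.
    s.add(idx >= 0, idx < 16)
    s.add(ptr == BitVecVal(0x1000, 32) + idx)

    # Pointer-safety query: can the dereferenced pointer be NULL here?
    s.push()
    s.add(ptr == 0)
    print("NULL deref possible" if s.check() == sat else "safe on this path")
    s.pop()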

    A Service Description Framework for Service Ecosystems

    Recently, service orientation has strongly influenced the way enterprise applications are built. Service ecosystems, which provide the means to trade services between companies like goods, are an evolution of service orientation. To allow service ordering, discovery, selection, and consumption, a common way to describe services is a necessity. This paper discusses existing approaches to describing certain service aspects. Finally, a Service Description Framework for service ecosystems is proposed and exemplified.
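
    A common service description would bundle several such aspects into one structure; the sketch below, with invented aspect names, illustrates the idea rather than the framework proposed in the paper:

    # Sketch of a multi-aspect service description (aspect names are
    # hypothetical illustrations, not the paper's framework).
    from dataclasses import dataclass

    @dataclass
    class FunctionalAspect:
        operation: str
        input_type: str
        output_type: str

    @dataclass
    class CommercialAspect:
        price_per_call: float
        currency: str

    @dataclass
    class QualityAspect:
        availability: float      # e.g. 0.999
        max_latency_ms: int

    @dataclass
    class ServiceDescription:
        name: str
        functional: FunctionalAspect
        commercial: CommercialAspect
        quality: QualityAspect

    svc = ServiceDescription(
        name="CreditCheck",
        functional=FunctionalAspect("checkCredit", "CustomerId", "Score"),
        commercial=CommercialAspect(0.05, "EUR"),
        quality=QualityAspect(0.999, 200),
    )
    print(svc.name, svc.quality.availability)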

    A Generalised Theory of Interface Automata, Component Compatibility and Error

    Interface theories allow systems designers to reason about the composability and compatibility of concurrent system components. Such theories often extend both de Alfaro and Henzinger's Interface Automata and Larsen's Modal Transition Systems, which leads, however, to several issues that are undesirable in practice: an unintuitive treatment of specified unwanted behaviour, a binary compatibility concept that does not scale to multi-component assemblies, and compatibility guarantees that are insufficient for software product lines. In this paper we show that communication mismatches are central to all these problems and that, thus, the ability to represent such errors semantically is an important feature of an interface theory. Accordingly, we present the error-aware interface theory EMIA, in which the above shortcomings are remedied by introducing explicit fatal error states. In addition, we prove via a Galois insertion that EMIA is a conservative generalisation of the established MIA (Modal Interface Automata) theory.
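
    The notion of a communication mismatch can be made concrete with a small sketch: in a product of two automata, an output the partner is not ready to receive is an error, which EMIA would record as an explicit fatal error state. The automata and the simplified composition below are invented illustrations, not the EMIA semantics:

    # Sketch of a communication-mismatch check in an interface-automata
    # style. Only a's outputs are checked; the symmetric case is omitted
    # for brevity.
    from typing import Dict, Set, Tuple

    # An automaton: states, initial state, transitions (state, action) ->
    # state, with outputs suffixed "!" and inputs suffixed "?".
    Automaton = Tuple[Set[str], str, Dict[Tuple[str, str], str]]

    def mismatches(a: Automaton, b: Automaton) -> Set[Tuple[str, str, str]]:
        """Return (state_a, state_b, action) triples where a sends an
        output that b is not ready to receive."""
        _, a0, ta = a
        _, b0, tb = b
        errors, frontier, seen = set(), [(a0, b0)], set()
        while frontier:
            sa, sb = frontier.pop()
            if (sa, sb) in seen:
                continue
            seen.add((sa, sb))
            for (s, act), nxt in ta.items():
                if s != sa or not act.endswith("!"):
                    continue
                partner = (sb, act[:-1] + "?")
                if partner in tb:
                    frontier.append((nxt, tb[partner]))
                else:
                    errors.add((sa, sb, act))  # a fatal error state in EMIA terms
        return errors

    client: Automaton = ({"c0", "c1"}, "c0", {("c0", "req!"): "c1"})
    server: Automaton = ({"s0"}, "s0", {})   # never accepts "req?"
    print(mismatches(client, server))        # {('c0', 's0', 'req!')}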

    The COCOMO Models in the Light of Agile Software Development

    Effort estimations are important for making economic and strategic decisions in software development. Various publications propagate the Constructive Cost Model (COCOMO) as an algorithmic cost model based on formulas with objective variables for estimations in classical software development. Work on agile software development (AS), in contrast, refers to experience-based estimation methods and subjective variables. Due to the weak operationalization in the agile context, statements about concrete cause-and-effect relationships are difficult to make. In addition, classical and agile studies focus one-sidedly on their own research field, with the consequence that the applicability of COCOMO variables in AS is unclear. If such details were known, operationalized variables from COCOMO could also be used in AS. This would make it possible to conceptualize concrete causal dependencies in a scientific investigation; these findings would in turn allow an optimization of the development process. To identify variables, a qualitative, descriptive study with a literature review and an evaluation of the sources is carried out. First results show both differences and commonalities between the two worlds. A large number of COCOMO variables can be used in AS; to what extent depends on the objective and subjective shares of each variable. Variables with an experience-based background, such as Analyst Capability (ACAP) and Programmer Capability (PCAP), transfer well to AS because they match person-related characteristics. Variables from the process and tool environment, in contrast, are less transferable, since AS explicitly rejects a focus on such project features. A re-use of variables is thus possible in principle, provided the given conditions are taken into account.
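
    For readers unfamiliar with COCOMO's algorithmic form, the following sketch shows the COCOMO II effort equation with ACAP and PCAP as effort multipliers; the calibration constants follow the commonly cited COCOMO II calibration, and the project figures are invented for illustration:

    # COCOMO II effort equation: PM = A * Size^E * product(EM_i), with Size
    # in KSLOC and EM_i effort multipliers such as ACAP and PCAP. A = 2.94
    # and base exponent B = 0.91 follow the published COCOMO II calibration;
    # the project numbers below are invented.
    from math import prod

    def cocomo_ii_effort(ksloc: float, scale_factors_sum: float,
                         effort_multipliers: dict) -> float:
        A, B = 2.94, 0.91
        E = B + 0.01 * scale_factors_sum
        return A * ksloc ** E * prod(effort_multipliers.values())

    effort_pm = cocomo_ii_effort(
        ksloc=50,
        scale_factors_sum=18.0,            # sum of the five scale factors
        effort_multipliers={"ACAP": 0.85,  # highly capable analysts
                            "PCAP": 0.88}, # highly capable programmers
    )
    print(f"estimated effort: {effort_pm:.1f} person-months")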

    Applying Business Process Management Systems - a Case Study

    Business Process Management Systems (BPMS) aim to support the Business Process Management paradigm and to ease legacy application integration. Often, they rely on a Service-oriented Architecture (SoA). However, do such systems really meet real-world requirements? This paper introduces and discusses a set of criteria that are important for BPMS and applies them, based on a case study, to compare tools from three major vendors: IDS Scheer, Oracle, and Intalio.

    Visibility in Information Spaces and in Geographic Environments. Post-Proceedings of the KI'11 Workshop (October 4th, 2011, TU Berlin, Germany)

    In these post-proceedings of the workshop "Visibility in Information Spaces and in Geographic Environments", a selection of research papers is presented in which the topic of visibility is addressed in different contexts. Visibility governs information selection in geographic environments as well as in information spaces and in cognition. Users of social media navigate in information spaces and, at the same time, as embodied agents, move in geographic environments. Both activities follow a similar type of information economy, in which decisions by individuals or groups require highly selective filtering to avoid information overload. In this context, visibility refers to the fact that in social processes some actors, topics, or places are more salient than others. Formal notions of visibility include the centrality measures from social network analysis and the plethora of web page ranking methods. Recently, comparable approaches have been proposed to analyse activities in geographic environments: Place Rank, for instance, describes the social visibility of urban places based on the temporal sequence of tourist visit patterns. The workshop aimed to bring together researchers from AI, Geographic Information Science, Cognitive Science, and other disciplines who are interested in understanding how the different forms of visibility in information spaces and geographic environments relate to one another and how results from basic research can be used to improve spatial search engines, geo-recommender systems, or location-based social networks.
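
    The formal notions of visibility named above can be made concrete with a small sketch that computes degree centrality and PageRank on an invented toy graph:

    # Sketch: two formal visibility measures on a toy "who links to whom"
    # graph (invented data) -- both are named in the abstract as formal
    # notions of visibility.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edges_from([
        ("blog_a", "museum"), ("blog_b", "museum"),
        ("museum", "park"), ("blog_a", "park"),
    ])

    print(nx.degree_centrality(G))  # salience by raw connectivity
    print(nx.pagerank(G))           # salience by recursive link structure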

    Static Analysis Rules of the BPEL Specification: Tagging, Formalization and Tests

    In 2007, OASIS finalized its Web Services Business Process Execution Language 2.0 (BPEL) specification, which defines an XML-based language for orchestrations of Web Services. As validating BPEL processes against the official BPEL XML schema leaves room for a plethora of static errors, the specification contains 94 static analysis rules covering all static errors. According to the specification, any violations of these rules are to be reported by a standard-conformant engine at deployment time. When a violation is not detected during deployment, the error becomes detectable only at runtime, making it expensive to find and fix. Due to the large number of rules, we created a tag system to categorize them, allowing easier reasoning about the rule set. Next, we formalized the static analysis rules and derived test cases from these formalizations, with the aim of evaluating the degree to which BPEL engines support static analysis. Hence, this work lays the foundation for assessing the static analysis capabilities of BPEL engines.
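
    How a formalized rule turns into an automated deployment-time check can be sketched as follows; the rule shown, that every variable referenced in an <assign> must be declared, is an illustrative stand-in rather than a quoted SA rule, and the process snippet is invented:

    # Sketch of one static-analysis check (an illustrative rule in the
    # spirit of the specification, not a quoted SA rule): every variable
    # referenced by <from>/<to> in an <assign> must be declared.
    import xml.etree.ElementTree as ET

    BPEL_NS = "{http://docs.oasis-open.org/wsbpel/2.0/process/executable}"

    process = ET.fromstring("""
    <process xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
      <variables>
        <variable name="input"/>
      </variables>
      <assign>
        <copy><from variable="input"/><to variable="output"/></copy>
      </assign>
    </process>""")

    declared = {v.get("name") for v in process.iter(f"{BPEL_NS}variable")}
    referenced = {e.get("variable") for e in process.iter()
                  if e.tag in (f"{BPEL_NS}from", f"{BPEL_NS}to")}
    undeclared = referenced - declared
    print("violations:", undeclared)   # {'output'} -- caught at deployment time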