65 research outputs found

    Towards Connecting Online Interfacing and Internal Core Business Processes

    Nowadays, organisations tend to do more business online by enabling their business processes to interact with customers, suppliers, etc., via different online channels. At the same time, their core business processes, such as production and engineering, may still remain inside the organisation. As a consequence, an organisation must rely on the collaboration between these two types of business processes to conduct its business, and this collaboration raises issues such as multiple instance correlation, process views, and process evolution for the business process management (BPM) of the organisation. This paper reports our research in progress on these issues. It first identifies the requirements for fully supporting such collaboration, and then presents a framework to illustrate how the collaboration can be facilitated using the latest BPM technologies. This framework provides a reference architecture for incorporating online interfacing and internal core business processes.
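
    One of the issues named above, multiple instance correlation, lends itself to a small sketch. The sketch below is an illustrative assumption, not the paper's framework: it only shows the basic mechanic of routing messages that arrive over online channels to the matching internal process instance via a shared correlation key.

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        /** Illustrative only: key-based correlation of online messages to core processes. */
        public class CorrelationRouter {

            /** An internal (core) process instance able to receive external events. */
            public interface ProcessInstance {
                void onMessage(String payload);
            }

            // Correlation key (e.g. an order number) -> running internal instance.
            private final Map<String, ProcessInstance> instances = new ConcurrentHashMap<>();

            /** A core process instance registers itself under its correlation key. */
            public void register(String correlationKey, ProcessInstance instance) {
                instances.put(correlationKey, instance);
            }

            /** Called by any online channel (web shop, B2B gateway, ...). */
            public void route(String correlationKey, String payload) {
                ProcessInstance target = instances.get(correlationKey);
                if (target == null) {
                    throw new IllegalStateException("No instance for key " + correlationKey);
                }
                target.onMessage(payload);
            }
        }

    A real BPM engine would persist this mapping and evaluate correlation conditions from the process definition instead of relying on a single hard-coded key.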

    Semantic Platform for building coherent net of smart services

    The information infrastructure of modern cities has developed incredibly fast over the last decades. Improvements to all kinds of services have been impatiently demanded by end users in all domains, who were forced to keep up to date so as not to lose ground in their spheres of interest. As a result, the majority of services used now are high-quality services that meet the extended requirements of end users; they can justifiably be called smart services. The problem is that there are a lot of services, but they are not compatible with each other and can hardly be considered elements of complicated business processes. This leads to the creation of new services with duplicated functionality. The observed dynamics of the service market and its short-term prediction clearly show that in the near future it will be impossible to satisfy all requests for new services, and the service infrastructure will become overheated. At the level of enterprises the problem is commonly solved by means of an enterprise service bus; at the level of the WWW, by building and broadly applying semantic web services. At the level of cities there are still no solutions that allow building complex logical structures on top of existing services. The most obvious way to integrate services is their unification. Even this simple solution is unimplementable, for two reasons. First, it requires huge resources, proportional to the total number of services. Second, it can affect the functionality of the services, which is inadmissible for end users. So one can say that, at the level of the city, integration solutions based on an enterprise service bus are too lightweight, while Internet-oriented solutions such as semantic web services are too heavyweight. In this paper we propose a platform for agile service integration that allows linking services using semantic technologies. The platform neither generates additional requirements for services nor imposes any restrictions on them. It supports linking services and, thus, building a net of services. Furthermore, it can reveal possible links between services, which can enrich the service infrastructure. Semantic technologies form the base of the integration platform. The services and their peculiar features are described in the platform ontology using the OWL language. The OWL description of the services clarifies reasonable cases and ways of using them. A similar approach is used for describing the application logic of complex services. The processes of service interaction are defined in ontologies as well. For logic description, BPEL is used.
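
    The link-revealing capability can be pictured with plain code. Everything below is assumed for illustration (the platform itself works on OWL descriptions): a service is reduced to the sets of concepts it consumes and produces, and a candidate link exists wherever one service's outputs cover another's inputs.

        import java.util.List;
        import java.util.Set;

        /** Illustrative sketch: discover candidate links in a net of services. */
        public class LinkDiscovery {

            /** A service abstracted to the concepts it consumes and produces. */
            public record Service(String name, Set<String> inputs, Set<String> outputs) {}

            /** s1 -> s2 is a candidate link if s1's outputs cover s2's inputs. */
            static boolean canFeed(Service s1, Service s2) {
                return s1.outputs().containsAll(s2.inputs());
            }

            public static void main(String[] args) {
                List<Service> net = List.of(
                    new Service("GeoCoder", Set.of("Address"), Set.of("Coordinates")),
                    new Service("RoutePlanner", Set.of("Coordinates"), Set.of("Route")));
                for (Service a : net)
                    for (Service b : net)
                        if (a != b && canFeed(a, b))
                            System.out.println(a.name() + " -> " + b.name());
            }
        }

    On this toy net the only discovered link is GeoCoder -> RoutePlanner; a semantic reasoner working over the OWL descriptions would additionally match concepts through subclass and equivalence relations rather than by string equality.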

    A Process Modelling Framework Based on Point Interval Temporal Logic with an Application to Modelling Patient Flows

    This thesis considers an application of a temporal theory to describe and model the patient journey in the hospital accident and emergency (A&E) department. The aim is to introduce a generic but dynamic method applicable to any setting, including healthcare. Constructing a consistent process model can be instrumental in streamlining healthcare issues. Current process modelling techniques used in healthcare, such as flowcharts, unified modelling language activity diagrams (UML AD), and business process modelling notation (BPMN), are intuitive but imprecise. They cannot fully capture the complexities of the types of activities and the full extent of temporal constraints to an extent where one could reason about the flows. Formal approaches such as Petri nets have also been reviewed to investigate their applicability to modelling processes in the healthcare domain. Additionally, current modelling standards offer no formal mechanism for scheduling patient flows, so healthcare relies on the critical path method (CPM) and the program evaluation review technique (PERT), which also have limitations, e.g. the finish-start barrier. It is imperative to specify the temporal constraints between the start and/or end of a process, e.g., the beginning of a process A precedes the start (or end) of a process B. However, these approaches fail to provide a mechanism for handling such temporal situations. A formal representation, if provided, can assist in effective knowledge representation and quality enhancement concerning a process. It would also help in uncovering the complexities of a system and assist in modelling it in a consistent way, which is not possible with the existing modelling techniques. The above issues are addressed in this thesis by proposing a framework that provides a knowledge base to model patient flows accurately, based on point interval temporal logic (PITL), which treats points and intervals as primitives. These objects constitute the knowledge base for the formal description of a system. With the aid of the inference mechanism of the temporal theory presented here, the exhaustive temporal constraints derived from the components of the proposed axiomatic system serve as a knowledge base. The proposed methodological framework adopts a model-theoretic approach in which a theory is developed and considered as a model, while the corresponding instance is considered as its application. Using this approach assists in identifying the core components of the system and their precise operation, representing a real-life domain deemed suitable to the process modelling issues specified in this thesis. Thus, I have evaluated the modelling standards for their most-used terminologies and constructs to identify their key components. This also assists in the generalisation of the critical terms (of the process modelling standards) based on their ontology. The proposed set of generalised terms serves as an enumeration of the theory and subsumes the core modelling elements of the process modelling standards. The catalogue presents a knowledge base for the business and healthcare domains, and its components are formally defined (semantics). Furthermore, a resolution theorem proof is used to show the structural features of the theory (model) and to establish that it is sound and complete. After establishing that the theory is sound and complete, the next step is to provide the instantiation of the theory. This is achieved by mapping the core components of the theory to their corresponding instances.
    Additionally, a formal graphical tool termed point graph (PG) is used to visualise the cases of the proposed axiomatic system. PG facilitates modelling and scheduling patient flows and enables analysing existing models for possible inaccuracies and inconsistencies, supported by a reasoning mechanism based on PITL. Following that, a transformation is developed to map the core modelling components of the standards into the extended PG (PG*) based on the semantics presented by the axiomatic system. A real-life case (from the King’s College hospital accident and emergency (A&E) department’s trauma patient pathway) is considered to validate the framework. It is divided into three patient flows that depict the journey of a patient with significant trauma, arriving at A&E, undergoing a procedure and subsequently being discharged. The staff relied upon UML AD and BPMN to model the patient flows. An evaluation of their representation is presented to show the shortfalls of the modelling standards when modelling patient flows. The last step is to model these patient flows using the developed approach, which is supported by enhanced reasoning and scheduling.
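
    A small example of the endpoint style of constraint involved (the notation below is an assumed, conventional encoding, not necessarily the thesis's axiomatic syntax): identify a process A with the interval [A^-, A^+], with A^- < A^+, so that inter-process constraints become orderings of points.

        \[
        \begin{aligned}
        A \text{ precedes } B \;&\iff\; A^{+} < B^{-} \\
        A \text{ starts before } B \text{ starts} \;&\iff\; A^{-} < B^{-} \\
        A \text{ overlaps } B \;&\iff\; A^{-} < B^{-} < A^{+} < B^{+}
        \end{aligned}
        \]

    Only the first of these is expressible under the finish-start barrier of CPM/PERT; the other two are the kind of point-level constraints a point-interval logic admits directly.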

    Interoperability of Enterprise Software and Applications


    Graph-based Pattern Matching and Discovery for Process-centric Service Architecture Design and Integration

    Process automation and application integration initiatives are often complex and involve significant resources in large organisations. The increasing adoption of service-based architectures to solve integration problems and the widely accepted practice of utilising patterns as a medium to reuse design knowledge motivated this work. A pattern-based framework and techniques providing automation and structure to address the process and application integration problem are proposed. The framework is a layered architecture providing modelling and traceability support to the different abstraction layers of the integration problem. To define new services - the building blocks of the integration solution - the framework includes techniques to identify process patterns in concrete process models. Graphs and graph morphisms provide a formal basis to represent patterns and their relation to models. A family of graph-based algorithms supports automation during matching and discovery of patterns in layered process service models. The framework and techniques are demonstrated in a case study. The algorithms implementing the pattern matching and discovery techniques are investigated through a set of experiments from an empirical evaluation. Observations from interviews conducted with practitioners provide suggestions to enhance the proposed techniques and direct future work regarding analysis tasks in process integration initiatives.
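
    The matching problem itself can be stated compactly: a pattern occurs in a model if there is a morphism mapping pattern nodes to model nodes that preserves labels and edges. The brute-force backtracking search below is a sketch under my own assumptions (directed labelled graphs, injective mappings); the thesis's algorithms address the same problem, not necessarily in this way.

        import java.util.*;

        /** Illustrative brute-force search for a label- and edge-preserving morphism. */
        public class PatternMatcher {

            /** Directed labelled graph: node id -> label, edges as (src, dst) pairs. */
            public record Graph(Map<Integer, String> labels, Set<List<Integer>> edges) {}

            /** Returns a mapping pattern-node -> model-node, or null if none exists. */
            static Map<Integer, Integer> match(Graph pattern, Graph model) {
                return extend(new ArrayList<>(pattern.labels().keySet()), 0,
                              new HashMap<>(), pattern, model);
            }

            static Map<Integer, Integer> extend(List<Integer> nodes, int i,
                    Map<Integer, Integer> map, Graph p, Graph m) {
                if (i == nodes.size()) return map;          // all pattern nodes mapped
                int pn = nodes.get(i);
                for (int mn : m.labels().keySet()) {
                    if (map.containsValue(mn)) continue;    // keep the mapping injective
                    if (!p.labels().get(pn).equals(m.labels().get(mn))) continue;
                    map.put(pn, mn);
                    if (edgesPreserved(map, p, m)) {
                        Map<Integer, Integer> r = extend(nodes, i + 1, map, p, m);
                        if (r != null) return r;
                    }
                    map.remove(pn);                         // backtrack
                }
                return null;
            }

            /** Every pattern edge between already-mapped nodes must exist in the model. */
            static boolean edgesPreserved(Map<Integer, Integer> map, Graph p, Graph m) {
                for (List<Integer> e : p.edges()) {
                    Integer a = map.get(e.get(0)), b = map.get(e.get(1));
                    if (a != null && b != null && !m.edges().contains(List.of(a, b)))
                        return false;
                }
                return true;
            }
        }

    Exhaustive search like this is exponential in the worst case, which is precisely why structured, layered matching techniques of the kind the thesis proposes are worthwhile in practice.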

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed in every step. The discussion also covers language and tool support and challenges arising from the transformation.
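
    As a hint of the technology stack involved, the fragment below shows a standard-SOA call to a WSDL-described service using the javax-era JAX-WS Dispatch API. The service name, namespace, WSDL location and payload are invented placeholders; only the API calls themselves are standard.

        import java.io.StringReader;
        import java.net.URL;
        import javax.xml.namespace.QName;
        import javax.xml.transform.Source;
        import javax.xml.transform.stream.StreamSource;
        import javax.xml.ws.Dispatch;
        import javax.xml.ws.Service;

        public class CrossSystemCall {
            public static void main(String[] args) throws Exception {
                // All names and URLs below are hypothetical placeholders.
                URL wsdl = new URL("http://erp.example.com/orders?wsdl");
                QName service = new QName("http://erp.example.com/", "OrderService");
                QName port = new QName("http://erp.example.com/", "OrderPort");

                // Dynamic, proxy-free invocation of the WSDL-described operation.
                Dispatch<Source> dispatch = Service.create(wsdl, service)
                        .createDispatch(port, Source.class, Service.Mode.PAYLOAD);

                Source request = new StreamSource(new StringReader(
                        "<createOrder xmlns='http://erp.example.com/'><id>42</id></createOrder>"));
                Source response = dispatch.invoke(request); // synchronous SOAP round trip
                System.out.println("Received response: " + response);
            }
        }

    In a BPEL-driven setup, calls of this shape are what the process engine issues on the orchestration's behalf for each invoke activity.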

    Supporting Distributed Geo-Processing: A Framework for Managing Multi-Accuracy Spatial Data

    In recent years many countries have developed a Spatial Data Infrastructure (SDI) to manage their geographical information. Large SDIs require new, effective techniques to continuously integrate spatial data coming from different sources and characterized by different quality levels. This need is recognized in the scientific literature and is known as the data integration or information fusion problem. A specific aspect of spatial data integration concerns the matching and alignment of object geometries. Existing methods mainly perform the integration by simply aligning the less accurate database with the more accurate one, assuming that the latter always contains a better representation of the relevant geometries. Following this approach, spatial entities are merged together in a sub-optimal manner, causing distortions that potentially reduce the overall database quality. This thesis deals with the problem of spatial data integration in a highly-coupled SDI whose members have already adhered to a common global schema; hence it focuses on the geometric integration problem, assuming that some schema matching operations have already been performed. 
    In particular, the thesis initially proposes a model for representing spatial data together with its quality characteristics, producing a multi-accuracy spatial database; it then defines a novel integration process that takes the different positional accuracies of the involved source databases into account. The main goal of such a process is to preserve the coherence and consistency of the integrated data and, when possible, to enhance its accuracy. The proposed multi-accuracy spatial data model and the related integration technique represent the basis for a framework able to support distributed geo-processing in an SDI context. The problem of implementing such long-running distributed computations is also treated from a practical perspective by evaluating the applicability of existing workflow technologies. This evaluation leads to the definition of an ideal software solution, whose characteristics are discussed in the last chapters by considering the design of the proposed integration process as a motivating example.
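
    One way to picture accuracy-aware merging, as opposed to snapping the less accurate geometry onto the more accurate one, is an inverse-variance weighted average of homologous points. This formula is my illustrative assumption, not the integration process defined in the thesis:

        /** Illustrative: merge two homologous points by positional accuracy. */
        public class AccuracyWeightedMerge {

            /** sigma = positional standard deviation in metres. */
            public record Point(double x, double y, double sigma) {}

            /** Inverse-variance weighted average; an assumed stand-in for the real process. */
            static Point merge(Point a, Point b) {
                double wa = 1.0 / (a.sigma() * a.sigma());
                double wb = 1.0 / (b.sigma() * b.sigma());
                double x = (wa * a.x() + wb * b.x()) / (wa + wb);
                double y = (wa * a.y() + wb * b.y()) / (wa + wb);
                // The merged point is at least as accurate as the better source.
                double sigma = Math.sqrt(1.0 / (wa + wb));
                return new Point(x, y, sigma);
            }

            public static void main(String[] args) {
                Point cadastral = new Point(100.0, 200.0, 0.5); // more accurate source
                Point regional  = new Point(101.2, 199.1, 2.0); // less accurate source
                System.out.println(merge(cadastral, regional)); // lands near the cadastral point
            }
        }

    With the sample values the merged x is about 100.07, pulled only about 7 cm away from the more accurate source, and the resulting sigma (about 0.49 m) is slightly better than either input.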

    Programming and parallelising applications for distributed infrastructures

    The last decade has witnessed unprecedented changes in parallel and distributed infrastructures. Due to the diminished gains in processor performance from increasing clock frequency, manufacturers have moved from uniprocessor architectures to multicores; as a result, clusters of computers have incorporated such new CPU designs. Furthermore, the ever-growing need of scientific applications for computing and storage capabilities has motivated the appearance of grids: geographically-distributed, multi-domain infrastructures based on sharing of resources to accomplish large and complex tasks. More recently, clouds have emerged by combining virtualisation technologies, service-orientation and business models to deliver IT resources on demand over the Internet. The size and complexity of these new infrastructures pose a challenge for programmers to exploit them. On the one hand, some of the difficulties are inherent to concurrent and distributed programming themselves, e.g. dealing with thread creation and synchronisation, messaging, data partitioning and transfer, etc. On the other hand, other issues are related to the singularities of each scenario, like the heterogeneity of Grid middleware and resources or the risk of vendor lock-in when writing an application for a particular Cloud provider. In the face of such a challenge, programming productivity - understood as a trade-off between programmability and performance - has become crucial for software developers. There is a strong need for high-productivity programming models and languages, which should provide simple means for writing parallel and distributed applications that can run on current infrastructures without sacrificing performance. In that sense, this thesis contributes Java StarSs, a programming model and runtime system for developing and parallelising Java applications on distributed infrastructures. The model has two key features: first, the user programs in a fully-sequential standard-Java fashion - no parallel construct, API call or pragma must be included in the application code; second, it is completely infrastructure-unaware, i.e. programs do not contain any details about deployment or resource management, so that the same application can run on different infrastructures with no changes. The only requirement for the user is to select the application tasks, which are the model's unit of parallelism. Tasks can be either regular Java methods or web service operations, and they can handle any data type supported by the Java language, namely files, objects, arrays and primitives. For the sake of simplicity of the model, Java StarSs shifts the burden of parallelisation from the programmer to the runtime system. The runtime is responsible for modifying the original application to make it create asynchronous tasks and synchronise data accesses from the main program. Moreover, the implicit inter-task concurrency is automatically discovered as the application executes, thanks to a data dependency detection mechanism that integrates all the Java data types. This thesis provides a fairly comprehensive evaluation of Java StarSs on three different distributed scenarios: Grid, Cluster and Cloud. For each of them, a runtime system was designed and implemented to exploit their particular characteristics as well as to address their issues, while keeping the infrastructure unawareness of the programming model. 
    The evaluation compares Java StarSs against state-of-the-art solutions, both in terms of programmability and performance, and demonstrates how the model can bring remarkable productivity to programmers of parallel distributed applications.
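
    The programming style the abstract describes can be sketched as follows. This is a reconstruction, not verbatim Java StarSs code: the main program stays plain sequential Java, and a method such as countWords is the kind of unit the user would select as a task, letting the runtime spawn its invocations asynchronously and detect the dependencies on the partial result files.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;

        /** Plain sequential Java; with StarSs-style tasks the loop iterations could run in parallel. */
        public class WordCountApp {

            /** Candidate task: each call reads one file and writes one partial result. */
            static void countWords(String inFile, String outFile) throws IOException {
                String text = Files.readString(Path.of(inFile));
                long n = text.isBlank() ? 0 : text.trim().split("\\s+").length;
                Files.writeString(Path.of(outFile), Long.toString(n));
            }

            public static void main(String[] args) throws IOException {
                // No parallel construct, API call or pragma: the runtime would replace
                // these calls with asynchronous tasks, track the file dependencies
                // itself, and synchronise any later read of the partial results.
                for (int i = 0; i < args.length; i++) {
                    countWords(args[i], "partial" + i + ".txt");
                }
            }
        }

    Because each iteration reads and writes different files, a dependency-tracking runtime can legitimately execute all the calls concurrently without any change to this source.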

    An approach to cross-domain situation-based context management and highly adaptive services in pervasive environments

    The concept of context-awareness is widely used in mobile and pervasive computing to reduce explicit user input and customization through the increased use of implicit input. It is considered to be the cornerstone technique for developing pervasive computing applications that are flexible, adaptable, and capable of acting autonomously on behalf of the user. This requires the applications to take advantage of the context in order to infer the user’s objective and relevant environmental features. However, context-awareness introduces various software engineering challenges, such as the need to provide developers with middleware infrastructure to acquire the context information available in distributed domains, to reason about contextual situations that span one or more domains, and to provide tools that facilitate building context-aware adaptive services. The separation of concerns is a promising approach in the design of such applications, where the core logic is designed and implemented separately from the context handling and adaptation logic. In this respect, the aim of this dissertation is to introduce a unified approach for developing such applications, together with a software infrastructure for efficient context management, that address these software engineering challenges and facilitate the design and implementation tasks associated with such context-aware services. The approach is based around a set of new conceptual foundations, including a context modelling technique that describes context at different levels of abstraction, a domain-based context management middleware architecture, cross-domain contextual situation recognition, and a generative mechanism for context-aware service adaptation. A prototype tool has been built as an implementation of the proposed unified approach. Case studies have been conducted to illustrate and evaluate the approach in terms of its effectiveness and applicability in real-life application scenarios to provide users with personalized services.
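
    A toy rendering of the situation-based idea (the names and structure are assumptions, not the dissertation's middleware): context items from several domains are pooled, a situation is a predicate over that pool, and a service registers an adaptation to run whenever the situation is recognised.

        import java.util.*;
        import java.util.function.Predicate;

        /** Illustrative only: cross-domain situations triggering service adaptation. */
        public class SituationManager {

            // domain -> (context attribute -> value), e.g. "home" -> {"occupancy": "empty"}
            private final Map<String, Map<String, String>> context = new HashMap<>();
            private final List<Rule> rules = new ArrayList<>();

            record Rule(String name, Predicate<Map<String, Map<String, String>>> situation,
                        Runnable adaptation) {}

            /** A service registers how it adapts when a situation is recognised. */
            public void onSituation(String name,
                    Predicate<Map<String, Map<String, String>>> situation, Runnable adaptation) {
                rules.add(new Rule(name, situation, adaptation));
            }

            /** Implicit input: sensors and domain sources push context updates here. */
            public void update(String domain, String attribute, String value) {
                context.computeIfAbsent(domain, d -> new HashMap<>()).put(attribute, value);
                for (Rule r : rules)
                    if (r.situation().test(context)) r.adaptation().run();
            }

            public static void main(String[] args) {
                SituationManager mgr = new SituationManager();
                // A situation spanning two domains: the user left home while heating is on.
                mgr.onSituation("away-and-heating",
                    ctx -> "empty".equals(ctx.getOrDefault("home", Map.of()).get("occupancy"))
                        && "on".equals(ctx.getOrDefault("hvac", Map.of()).get("state")),
                    () -> System.out.println("Adapt: lower thermostat"));
                mgr.update("hvac", "state", "on");
                mgr.update("home", "occupancy", "empty"); // triggers the adaptation
            }
        }

    Keeping the predicate and the adaptation separate from the service's core logic mirrors the separation of concerns the dissertation argues for.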