    An Audience Centred Approach to Business Process Reengineering

    This paper describes a method for process modelling which is designed to provide guidance to the business process modeller. The method has evolved from our experience of attempting to apply software process modelling approaches to business processes. A major influence on the method has been our observation that a pragmatic approach to notation selection is required in order to maintain a meaningful dialogue with end-users. Business process modelling methods typically fall into two camps. General methods attempt to describe the managerial activities which surround the modelling itself (Coulson-Thomas, 1994; GISIP, 1995). Specific methods, on the other hand, tend to concentrate on the details of a particular notational approach. However, as with programming languages or design methods, no single notational approach is best suited to all problems. Ideally, the process modeller should be able to incorporate the appropriate notational approach into some coherent generic modelling method. This paper addresses the needs of the modeller at the detailed level without prescribing a specific notation. It does so by describing categories of modelling activities which the modeller should undertake within process modelling, and by suggesting how notations may be used within these categories. Our method is generally applicable, and is illustrated here by models of processes within the construction industry.

    Instances and connectors: issues for a second generation process language

    This work is supported by UK EPSRC grants GR/L34433 and GR/L32699. Over the past decade a variety of process languages have been defined, used and evaluated. It is now possible to consider second-generation languages based on this experience. Rather than develop a second-generation wish list, this position paper explores two issues: instances and connectors. Instances concern the relationship between a process model as a description and the, possibly multiple, enacting instances which are created from it. Connectors concern concurrency control and achieving a higher level of abstraction in how parts of a model interact. We believe that these issues are key to developing systems which can effectively support business processes, and that they have not received sufficient attention within the process modelling community. Through exploring these issues we also illustrate our approach to designing a second-generation process language.
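
    The distinction the paper draws can be made concrete in a few lines of code. The sketch below is purely illustrative and is not the paper's process language: a hypothetical ProcessModel is a static description, instantiate creates multiple enacting instances from it, and a Connector abstracts the concurrency control through which running parts of a model interact.

    ```python
    # Illustrative sketch only: one process model description can spawn
    # many enacting instances, and a "connector" mediates how concurrent
    # parts interact. All names here are hypothetical.
    import queue
    import threading

    class ProcessModel:
        """A process model as a static description."""
        def __init__(self, name, steps):
            self.name = name
            self.steps = steps          # ordered list of step callables

        def instantiate(self, case_id):
            """Create one enacting instance from this description."""
            return ProcessInstance(self, case_id)

    class ProcessInstance:
        def __init__(self, model, case_id):
            self.model, self.case_id = model, case_id

        def enact(self, connector):
            for step in self.model.steps:
                step(self.case_id, connector)

    class Connector:
        """Abstracts concurrency control between instance parts:
        here, simply a thread-safe message queue."""
        def __init__(self):
            self._q = queue.Queue()

        def send(self, msg):
            self._q.put(msg)

        def receive(self):
            return self._q.get()

    # One description, several concurrently enacting instances
    # interacting through a shared connector.
    model = ProcessModel("review", [lambda cid, c: c.send(f"{cid} done")])
    conn = Connector()
    threads = [threading.Thread(target=model.instantiate(i).enact, args=(conn,))
               for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print([conn.receive() for _ in range(3)])   # three instance completions
    ```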

    Embedding requirements within the model driven architecture.

    The Model Driven Architecture (MDA) is offered as one way forward in software systems modelling to connect software design with the business domain. The general focus of the MDA is the development of software systems by performing transformations between software design models, and the automatic generation of application code from those models. Software systems are provided by developers whose experience and models are not always in line with those of other stakeholders, which presents a challenge for the community. A review of the available literature finds that, whilst many models and notations are available, those that are strongly supported by the MDA may not be best suited to non-technical stakeholders. In addition, the MDA does not explicitly consider requirements and specification. This research begins by investigating the adequacy of the MDA requirements phase and examining the feasibility of incorporating a requirements definition, focusing specifically upon model transformations. MDA artefacts were found to better serve the software community, and requirements were not appropriately integrated within the MDA; significant extension upstream is required in order to sufficiently accommodate the business user in terms of a requirements definition. Therefore, an extension to the MDA framework is offered that directly addresses Requirements Engineering (RE), including the distinction of analysis from design and highlighting the importance of specification. This extension is suggested to further the utility of the MDA by making it accessible to a wider audience upstream, enabling specification to be a direct output from business-user involvement in the requirements phase of the MDA. To demonstrate applicability, this research illustrates the framework extension with the provision of a method, and discusses the use of the approach in both academic and commercial settings. The results suggest that such an extension is academically viable in facilitating the move from analysis into the design of software systems, accessible for business use, and beneficial in industry by allowing the client to be involved in producing models sufficient for use in the development of software systems using MDA tools and techniques.
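
    The core idea of a model-to-model transformation, which the thesis extends upstream to requirements, can be sketched in miniature. The toy metamodel below (Requirement, ClassModel, and the mapping rule in transform) is hypothetical and stands in for a transformation from a requirements definition to a platform-independent design model; it is not the method proposed in the thesis.

    ```python
    # Minimal sketch, assuming a toy metamodel: a requirements-level
    # model (actor goals) is transformed into a platform-independent
    # design model (classes with operations). Names are illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class Requirement:                 # requirements-definition element
        actor: str
        goal: str                      # e.g. "place order"

    @dataclass
    class ClassModel:                  # PIM-level element
        name: str
        operations: list = field(default_factory=list)

    def transform(requirements):
        """Requirements -> PIM transformation: one class per actor,
        one operation per goal (a stand-in for an MDA mapping rule)."""
        classes = {}
        for r in requirements:
            cls = classes.setdefault(r.actor, ClassModel(r.actor.title()))
            cls.operations.append(r.goal.replace(" ", "_"))
        return list(classes.values())

    reqs = [Requirement("customer", "place order"),
            Requirement("customer", "track order"),
            Requirement("clerk", "approve refund")]
    for c in transform(reqs):
        print(c.name, c.operations)
    # Customer ['place_order', 'track_order']
    # Clerk ['approve_refund']
    ```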

    Coded cooperative diversity with low complexity encoding and decoding algorithms.

    One of the main concerns in designing wireless communication systems is to provide sufficiently large data rates while respecting implementation complexity, which is often constrained by the limited battery power and signal processing capability of the devices. In this thesis, low-complexity encoding and decoding algorithms are therefore investigated for systems with transmission diversity, particularly receiver diversity and cooperative diversity, and design guidelines are given for achieving a good trade-off between implementation complexity and performance. Order-statistics-based list decoding techniques for linear binary block codes of small to medium block length are investigated to reduce the complexity of coded systems. The original order statistics decoding (OSD) is generalized by assuming segmentation of the most reliable independent positions of the received bits; the segmentation is shown to overcome several drawbacks of the original OSD. The complexity of the OSD is further reduced by assuming a partial ordering of the received bits in order to avoid the highly complex Gauss elimination. The bit error rate (BER) performance and decoding complexity trade-offs of the proposed decoding algorithms are studied by computer simulations. Numerical examples show that, in some cases, the proposed decoding schemes are superior to the original OSD in terms of both BER performance and decoding complexity. The complexity of the order-statistics-based list decoding algorithms for linear block codes and binary block turbo codes (BTC) is further reduced by employing highly reliable cyclic redundancy check (CRC) bits; the results show that sending CRC bits for many segments is the most effective technique for reducing the complexity. The coded cooperative diversity is compared with the conventional receiver coded diversity in terms of the pairwise error probability and the overall BER. Expressions for the pairwise error probabilities are obtained analytically and verified by computer simulations. The performance of cooperative diversity is found to depend strongly on the relay location. Using analytical as well as extensive numerical results, the geographical areas of relay locations are obtained, for small to medium signal-to-noise ratio (SNR) values, within which cooperative coded diversity outperforms receiver coded diversity. For sufficiently large SNR values, however, or if path-loss attenuations are not considered, receiver coded diversity always outperforms cooperative coded diversity. These results have important implications for the deployment of next-generation cellular systems supporting both cooperative and receiver diversity.
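
    For readers unfamiliar with OSD, a minimal order-0 variant (hard re-encoding of the most reliable independent positions, including the Gauss elimination step that the thesis works to avoid) can be sketched as follows. This is the textbook baseline, not the segmented or partially ordered schemes proposed in the thesis.

    ```python
    # A minimal order-0 OSD sketch for a binary linear block code.
    import numpy as np

    def osd_order0(G, y):
        """G: k x n generator matrix over GF(2); y: received soft values
        (BPSK: bit 0 -> +1, bit 1 -> -1). Returns a codeword estimate."""
        k, n = G.shape
        perm = np.argsort(-np.abs(y))          # most reliable positions first
        Gp = G[:, perm].copy()
        # Gauss elimination to find the k most reliable independent columns.
        pivots, row = [], 0
        for col in range(n):
            if row == k:
                break
            r = row + np.argmax(Gp[row:, col])
            if Gp[r, col] == 0:
                continue                        # dependent column, skip it
            Gp[[row, r]] = Gp[[r, row]]
            for other in range(k):
                if other != row and Gp[other, col]:
                    Gp[other] ^= Gp[row]
            pivots.append(col)
            row += 1
        # Hard decisions on the most reliable independent positions,
        # then re-encode them through the systematised generator.
        hard = (y[perm] < 0).astype(np.uint8)
        cand = (hard[pivots] @ Gp) % 2
        c = np.empty(n, dtype=np.uint8)
        c[perm] = cand                          # undo the permutation
        return c

    # Example: (7,4) Hamming code, noisy all-zero codeword.
    G = np.array([[1,0,0,0,1,1,0],
                  [0,1,0,0,1,0,1],
                  [0,0,1,0,0,1,1],
                  [0,0,0,1,1,1,1]], dtype=np.uint8)
    rng = np.random.default_rng(0)
    y = 1.0 + 0.5 * rng.standard_normal(7)      # all-zero codeword sent
    print(osd_order0(G, y))                     # usually [0 0 0 0 0 0 0]
    ```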

    DescribeX: A Framework for Exploring and Querying XML Web Collections

    This thesis introduces DescribeX, a powerful framework capable of describing arbitrarily complex XML summaries of web collections, providing support for more efficient evaluation of XPath workloads. DescribeX permits the declarative description of document structure using all axes and language constructs in XPath, and generalizes many of the XML indexing and summarization approaches in the literature. DescribeX supports the construction of heterogeneous summaries in which different document elements sharing a common structure can be declaratively defined and refined by means of path regular expressions on axes, or axis path regular expressions (AxPREs). DescribeX can significantly help in understanding both the structure of complex, heterogeneous XML collections and the behaviour of XPath queries evaluated on them. Experimental results demonstrate the scalability of DescribeX summary refinements and stabilizations (the key enablers for tailoring summaries) on multi-gigabyte web collections. A comparative study suggests that using a DescribeX summary created from a given workload can produce query evaluation times orders of magnitude better than existing summaries. DescribeX's lightweight approach of combining summaries with a file-at-a-time XPath processor can be a very competitive alternative, in terms of performance, to conventional fully-fledged XML query engines that provide DB-like functionality such as security, transaction processing, and native storage. (PhD thesis, University of Toronto, 2008, 163 pages.)
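
    The simplest summary that DescribeX generalizes is a partition of elements by their incoming label path (roughly an AxPRE of p* on the parent axis). The sketch below computes such a partition with Python's standard library; real DescribeX supports arbitrary axis path regular expressions and is far more general.

    ```python
    # A hedged sketch of the simplest structural summary: one summary
    # node per distinct root-to-element label path, with the size of
    # each equivalence class. This is only the intuition behind AxPREs.
    import xml.etree.ElementTree as ET
    from collections import defaultdict

    def label_path_summary(xml_text):
        """Map each distinct root-to-element label path to the number
        of elements in its equivalence class."""
        root = ET.fromstring(xml_text)
        summary = defaultdict(int)
        stack = [(root, "/" + root.tag)]
        while stack:
            node, path = stack.pop()
            summary[path] += 1
            for child in node:
                stack.append((child, path + "/" + child.tag))
        return dict(summary)

    doc = """<dblp>
      <article><author>A</author><title>X</title></article>
      <article><author>B</author><author>C</author></article>
    </dblp>"""
    for path, count in sorted(label_path_summary(doc).items()):
        print(count, path)
    # 1 /dblp
    # 2 /dblp/article
    # 3 /dblp/article/author
    # 1 /dblp/article/title
    ```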

    On the evaluation of quantum instruments with a consideration to measurements in trapped ion systems

    Trapped ion chains have shown promise in their application as quantum simulators. However, the close proximity of ions in the trap means that operations such as state detection cause loss of coherence in other ions, owing to imperfect beam addressing and the absorption of photons scattered during neighbouring-ion fluorescence. In the first part of my thesis I consider a ytterbium-171 ion trap and calculate the probability of photon absorption by a neighbouring ion. I additionally show how to use the Lindblad master equation to simulate the dynamics of an ion under an attenuated beam, and review how the detection fidelity is determined from histograms of photon detection counts. I then turn to the more abstract setting of non-destructive quantum measurements, which are instrumental for fault-tolerant quantum computers. These measurements have both a quantum and a classical output and hence must be described by a quantum instrument, as opposed to the typical representation of measurements through POVMs. It therefore becomes necessary to develop methods for evaluating quantum instruments. In this thesis I show that the process fidelity and the diamond distance, as figures of merit on unitary channels, may be generalized to figures of merit on quantum instruments. I show that these figures of merit adequately capture the errors associated with non-destructive measurements, and that they provide upper and lower bounds for each other. Several examples are also given of computing the figures of merit on quantum instruments under various noise models.
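
    One natural way to make such a figure of merit computable, sketched below under stated assumptions, is to view an instrument as a channel rho -> sum_k E_k(rho) (x) |k><k| (quantum output plus a classical register) and compare noisy and ideal instruments via the Uhlmann fidelity of their Choi states. The instrument, error model, and rate eps below are hypothetical, and the thesis's precise definitions may differ in detail.

    ```python
    # Hedged numerical sketch: fidelity between the Choi states of an
    # ideal and a noisy measurement instrument.
    import numpy as np

    def psd_sqrt(A):
        """Square root of a positive semidefinite Hermitian matrix."""
        w, V = np.linalg.eigh(A)
        return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

    def fidelity(rho, sigma):
        """Uhlmann fidelity (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
        s = psd_sqrt(rho)
        return float(np.real(np.trace(psd_sqrt(s @ sigma @ s))) ** 2)

    def choi_of_instrument(instrument, d):
        """Choi state of rho -> sum_k E_k(rho) (x) |k><k|, where
        instrument[k] lists the Kraus operators of outcome k."""
        n = len(instrument)
        J = np.zeros((d * d * n, d * d * n), dtype=complex)
        I = np.eye(d)
        for k, kraus_ops in enumerate(instrument):
            ek = np.zeros(n)
            ek[k] = 1.0
            for K in kraus_ops:
                # v = sum_i |i> (x) K|i> (x) |k>
                v = sum(np.kron(I[i], np.kron(K[:, i], ek)) for i in range(d))
                J += np.outer(v, v.conj())
        return J / d                  # normalize to a Choi *state*

    # Ideal Z-basis measurement vs one whose outcome flips with a
    # hypothetical readout error rate eps.
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    ideal = [[P0], [P1]]
    eps = 0.05
    noisy = [[np.sqrt(1 - eps) * P0, np.sqrt(eps) * P1],
             [np.sqrt(1 - eps) * P1, np.sqrt(eps) * P0]]
    F = fidelity(choi_of_instrument(ideal, 2), choi_of_instrument(noisy, 2))
    print(round(F, 4))                # 0.95 here, i.e. 1 - eps
    ```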

    Mechanised Uniform Interpolation for Modal Logics K, GL, and iSL

    The uniform interpolation property of a given logic can be understood as the definability of propositional quantifiers. We mechanise the computation of these quantifiers and prove it correct in the Coq proof assistant for three modal logics, namely: (1) the modal logic K, for which a pen-and-paper proof exists; (2) Gödel-Löb logic GL, for which our formalisation clarifies an important point in an existing, but incomplete, sequent-style proof; and (3) intuitionistic strong Löb logic iSL, for which ours is the first proof-theoretic construction of uniform interpolants. Our work also yields verified programs that allow one to compute the propositional quantifiers on any formula in these logics.
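
    The definability claim is easiest to see in classical propositional logic, where the existential propositional quantifier is expressible as E p. A == A[p := True] or A[p := False]. The sketch below checks this semantically; it is only the base-case intuition, since the modal logics K, GL, and iSL require the sequent-style constructions verified in the Coq development.

    ```python
    # The classical-propositional base case of propositional quantifiers:
    # (E p. A) is definable as A[p := True] or A[p := False].
    from itertools import product

    def exists(p, formula):
        """Semantic propositional quantifier: (E p. formula)(env) holds
        iff formula holds under env extended with some value for p."""
        return lambda env: (formula({**env, p: True})
                            or formula({**env, p: False}))

    # A = (p or q) and (not p or r);  E p. A  is equivalent to  q or r.
    A = lambda e: (e["p"] or e["q"]) and (not e["p"] or e["r"])
    Ep_A = exists("p", A)
    for q, r in product([False, True], repeat=2):
        assert Ep_A({"q": q, "r": r}) == (q or r)
    print("E p. (p or q) and (not p or r)  ==  q or r")
    ```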