
    On the Modeling of Correct Service Flows with BPEL4WS

    Frameworks for composing Web Services offer a promising approach for realizing enterprise-wide and cross-organizational business applications. With BPEL4WS a powerful composition language exists. BPEL implementations allow orchestrating complex, stateful interactions among Web Services in a process-oriented way. One important task in this context is to ensure that the respective flow specifications can be processed correctly, i.e., that there will be no bad surprises (e.g., deadlocks, or invocation of service operations with missing input data) at runtime. In this paper we subdivide BPEL schemes into different classes and discuss to what extent instances of these classes can be analyzed for the absence of control flow errors and inconsistencies. Altogether, our work shall contribute to a more systematic evolution of the BPEL standard rather than overloading it with too many features.
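    One class of control flow errors the abstract mentions, deadlock, can be illustrated on a drastically simplified model of a BPEL flow: activities connected by control links, where the WS-BPEL specification requires the link graph to be acyclic. The sketch below is purely illustrative (the activity names and graph encoding are hypothetical, not taken from the paper) and checks for a cycle of control links, i.e., a set of activities that all wait on each other and can never complete.

    ```python
    # Hypothetical sketch: cycle detection in the control-link graph of a
    # simplified BPEL <flow>. A cycle means mutually waiting activities,
    # i.e., a structural deadlock.

    def has_control_cycle(links):
        """links: dict mapping each activity name to the list of activities
        its outgoing control links target. Returns True if the link graph
        contains a cycle (the flow can never complete)."""
        WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
        color = {a: WHITE for a in links}

        def visit(a):
            color[a] = GRAY
            for b in links.get(a, []):
                if color.get(b, WHITE) == GRAY:
                    return True               # back edge: cycle found
                if color.get(b, WHITE) == WHITE and visit(b):
                    return True
            color[a] = BLACK
            return False

        return any(color[a] == WHITE and visit(a) for a in links)

    # A receive -> transform -> reply pipeline is fine...
    ok = {"receive": ["transform"], "transform": ["reply"], "reply": []}
    # ...but two activities whose links wait on each other deadlock.
    bad = {"a": ["b"], "b": ["a"]}
    print(has_control_cycle(ok), has_control_cycle(bad))  # False True
    ```

    Real BPEL analysis must additionally handle join conditions, dead-path elimination, and data dependencies (the "missing input data" case above), which is what makes the class-by-class analysis in the paper necessary.
    
    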

    Case Study on Design Management: Inefficiencies and Possible Remedies

    Delivering better products with reduced lead times and fewer resources has become the primary focus of design management. The aim of this work is to revisit typical design management inefficiencies and to discuss possible remedies for these problems. To this end, a case study and interviews with seven Estonian architects were carried out. The data obtained were analyzed within the framework of the transformation-flow-value theory of production. Despite its failure to deliver customer value, a single-minded transformation view of operations has been the dominant approach in design management and processes, leading to inefficiencies in design practices.

    Mixed-signal CNN array chips for image processing

    Due to their local connectivity and wide functional capabilities, cellular nonlinear networks (CNNs) are excellent candidates for the implementation of image processing algorithms using VLSI analog parallel arrays. However, the design of general-purpose, programmable CNN chips with the dimensions required for practical applications poses many challenging problems for analog designers. This is basically because a large silicon area means high development cost, large spatial deviations of design parameters, and low production yield. CNN designers must address several issues to maintain a sufficient accuracy level and production yield together with a reasonably low development cost in the design of large CNN chips. This paper outlines some of these major issues and their solutions.

    Customisable e-training programmes based on trainees' profiles

    Dissertation presented at Faculdade de Ciências e Tecnologia of Universidade Nova de Lisboa to obtain the Master degree in Electrical and Computer Engineering.
    Online training (e-training) is a major driver in promoting the development of competencies and knowledge in enterprises. A lack of customisable e-training programmes based on trainees' profiles, and of continuous maintenance of the training materials, prevents the sustainability of industrial training deployment. This dissertation presents a training strategy and a methodology for building training courses, with the purpose of providing trainee-oriented industrial training development. The training strategy intends to facilitate the management of all the training components and tasks, so as to build a training structure focused on a specific planned objective. The methodology for building e-training courses proposes to create customisable training materials more easily, enabling various organizations to participate actively in their production. Additionally, a customisable training programme framework is presented. It is supported by an ontology-based model able to support adaptable training contents and orchestration services, facilitating the efficiency and acceptance of e-training programme delivery.

    Exploratory Mediation Analysis with Many Potential Mediators

    Social and behavioral scientists are increasingly employing technologies such as fMRI, smartphones, and gene sequencing, which yield 'high-dimensional' datasets with more columns than rows. There is increasing interest, but little substantive theory, in the role the variables in these data play in known processes. This necessitates exploratory mediation analysis, for which structural equation modeling is the benchmark method. However, this method cannot perform mediation analysis with more variables than observations. One option is to run a series of univariate mediation models, which incorrectly assumes independence of the mediators. Another option is regularization, but the available implementations may lead to high false positive rates. In this paper, we develop a hybrid approach which uses components of both filtering and regularization: the 'Coordinate-wise Mediation Filter'. It performs filtering conditional on the other selected mediators. We show through simulation that it improves performance over existing methods. Finally, we provide an empirical example, showing how our method may be used for epigenetic research.
    Comment: R code and package are available online as supplementary material at https://github.com/vankesteren/cmfilter and https://github.com/vankesteren/ema_simulation.
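    The idea of filtering conditional on the other selected mediators can be sketched in a toy form. The implementation below is a hypothetical simplification in the spirit of the abstract, not the authors' method (their R package is linked above): each candidate mediator is scored by the product of its a-path (mediator on exposure) and b-path (outcome on mediator, conditional on the currently selected mediators), and the selected set is updated coordinate by coordinate until it stabilizes. The threshold rule is an assumed stand-in for a proper significance or regularization criterion.

    ```python
    # Hypothetical sketch of coordinate-wise mediation filtering.
    # Selection rule and threshold are illustrative simplifications.
    import numpy as np

    def cw_mediation_filter(x, M, y, threshold=0.1, max_iter=10):
        """Iteratively select columns of M as mediators of x -> y.
        Candidate j is scored by a_j * b_j: a_j regresses M[:, j] on x;
        b_j regresses y on M[:, j], x, and the other selected mediators.
        Keep j while |a_j * b_j| > threshold."""
        n, p = M.shape
        selected = set()
        for _ in range(max_iter):
            changed = False
            for j in range(p):
                others = sorted(selected - {j})
                # a-path: M_j ~ x (with intercept)
                a = np.linalg.lstsq(
                    np.column_stack([x, np.ones(n)]), M[:, j], rcond=None)[0][0]
                # b-path: y ~ M_j + x + other selected mediators
                Z = np.column_stack([M[:, j], x, M[:, others], np.ones(n)])
                b = np.linalg.lstsq(Z, y, rcond=None)[0][0]
                keep = abs(a * b) > threshold
                if keep and j not in selected:
                    selected.add(j); changed = True
                elif not keep and j in selected:
                    selected.discard(j); changed = True
            if not changed:
                break
        return sorted(selected)

    # Simulated example: one true mediator (column 0) among noise columns.
    rng = np.random.default_rng(0)
    n = 500
    x = rng.normal(size=n)
    true_med = 0.8 * x + 0.5 * rng.normal(size=n)
    M = np.column_stack([true_med, rng.normal(size=(n, 4))])
    y = 0.8 * true_med + 0.5 * rng.normal(size=n)
    sel = cw_mediation_filter(x, M, y)
    print(sel)  # column 0 (the true mediator) should be selected
    ```

    Conditioning the b-path on the other selected mediators is what distinguishes this from running independent univariate mediation models, which is the failure mode the abstract criticizes.
    
    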

    TransForm: Formally Specifying Transistency Models and Synthesizing Enhanced Litmus Tests

    Memory consistency models (MCMs) specify the legal ordering and visibility of shared memory accesses in a parallel program. Traditionally, instruction set architecture (ISA) MCMs assume that relevant program-visible memory ordering behaviors only result from shared memory interactions that take place between user-level program instructions. This assumption fails to account for virtual memory (VM) implementations that may result in additional shared memory interactions between user-level program instructions and both 1) system-level operations (e.g., address remappings and translation lookaside buffer invalidations initiated by system calls) and 2) hardware-level operations (e.g., hardware page table walks and dirty bit updates) during a user-level program's execution. These additional shared memory interactions can impact the observable memory ordering behaviors of user-level programs. Thus, memory transistency models (MTMs) have been coined as a superset of MCMs to additionally articulate VM-aware consistency rules. However, no prior work has enabled formal MTM specifications, nor methods to support their automated analysis. To fill the above gap, this paper presents the TransForm framework. First, TransForm features an axiomatic vocabulary for formally specifying MTMs. Second, TransForm includes a synthesis engine to support the automated generation of litmus tests enhanced with MTM features (i.e., enhanced litmus tests, or ELTs) when supplied with a TransForm MTM specification. As a case study, we formally define an estimated MTM for Intel x86 processors, called x86t_elt, that is based on observations made by an ELT-based evaluation of an Intel x86 MTM implementation from prior work and available public documentation. 
Given x86t_elt and a synthesis bound as input, TransForm's synthesis engine successfully produces a set of ELTs, including relevant ELTs from prior work.
    Comment: *This is an updated version of the TransForm paper that features updated results reflecting performance optimizations and software bug fixes. 14 pages, 11 figures, Proceedings of the 47th Annual International Symposium on Computer Architecture (ISCA).