
    From Design to Production Control Through the Integration of Engineering Data Management and Workflow Management Systems

    At a time when many companies are under pressure to reduce "times-to-market", the management of product information from the early stages of design through assembly to manufacture and production has become increasingly important. Similarly, in the construction of high-energy physics devices, the collection of (often evolving) engineering data is central to the subsequent physics analysis. Traditionally, design engineers in industry have employed Engineering Data Management systems (also called Product Data Management systems) to coordinate and control access to documented versions of product designs. However, these systems provide control only at the collaborative design level and are seldom used beyond design. Workflow management systems, on the other hand, are employed in industry to coordinate and support the more complex and repeatable work processes of the production environment. Commercial workflow products cannot support the highly dynamic activities found both in the design stages of product development and in rapidly evolving workflow definitions. The integration of Product Data Management with Workflow Management can provide support for product development from initial CAD/CAM collaborative design through to the support and optimisation of production workflow activities. This paper investigates this integration, proposes a philosophy for the support of product data throughout the full development and production lifecycle, and demonstrates its usefulness in the construction of CMS detectors.
    Comment: 18 pages, 13 figures
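
    The core of the integration the abstract describes is the link between a versioned design artefact (the PDM side) and the production activities that consume it (the WfMS side). The sketch below is a minimal illustration of that linkage under invented names (not the paper's actual schema): each production step records exactly which design version it was executed against, giving traceability from design through production.

```python
# Hypothetical sketch of PDM/WfMS integration: versioned design documents
# linked to production workflow activities for end-to-end traceability.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DesignVersion:          # PDM side: an immutable, documented design revision
    part_id: str
    version: int
    cad_document: str

@dataclass
class ProductionActivity:     # WfMS side: a repeatable production step
    name: str
    design: DesignVersion     # the link carrying design data into production
    status: str = "pending"

@dataclass
class ProductionWorkflow:
    activities: list = field(default_factory=list)

    def execute(self):
        for act in self.activities:
            # A real system would dispatch work; here we only record which
            # design version each step used.
            act.status = "done"
            print(f"{act.name}: built against {act.design.part_id} v{act.design.version}")

v2 = DesignVersion("sensor-mount", 2, "mount_v2.step")
wf = ProductionWorkflow([ProductionActivity("machine", v2),
                         ProductionActivity("assemble", v2)])
wf.execute()
```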

    Designing Reusable Systems that Can Handle Change - Description-Driven Systems: Revisiting Object-Oriented Principles

    In the age of the Cloud and so-called Big Data, systems must be increasingly flexible, reconfigurable and adaptable to change, in addition to being developed rapidly. As a consequence, designing systems to cater for evolution is becoming critical to their success. To be able to cope with change, systems must have the capability of reuse and the ability to adapt as and when necessary to changes in requirements. Allowing systems to be self-describing is one way to facilitate this. To address the issues of reuse in designing evolvable systems, this paper proposes a so-called description-driven approach to systems design. This approach enables new versions of data structures and processes to be created alongside the old, thereby providing a history of changes to the underlying data models and enabling the capture of provenance data. The efficacy of the description-driven approach is exemplified by the CRISTAL project. CRISTAL is based on description-driven design principles; it uses versions of stored descriptions to define various versions of data which can be stored in diverse forms. This paper discusses the need for capturing holistic system description when modelling large-scale distributed systems.
    Comment: 8 pages, 1 figure and 1 table. Accepted by the 9th Int Conf on the Evaluation of Novel Approaches to Software Engineering (ENASE'14), Lisbon, Portugal, April 2014
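
    The central mechanism here is that data items are defined by stored, versioned descriptions, and new description versions are added alongside old ones rather than replacing them. The following is an illustrative sketch of that idea with invented names (it is not CRISTAL's actual API): each item records which description version it conforms to, which is the hook for provenance capture.

```python
# Illustrative description-driven model: versioned descriptions define the
# shape of data items; old versions remain valid alongside new ones.
class DescriptionRegistry:
    def __init__(self):
        self._versions = {}                      # name -> list of field sets

    def describe(self, name, fields):
        self._versions.setdefault(name, []).append(set(fields))
        return len(self._versions[name])         # 1-based version number

    def new_item(self, name, version, **data):
        fields = self._versions[name][version - 1]
        if set(data) != fields:
            raise ValueError(f"{name} v{version} expects fields {fields}")
        # Each item records the description version it conforms to (provenance).
        return {"_described_by": (name, version), **data}

reg = DescriptionRegistry()
v1 = reg.describe("Detector", ["id", "location"])
old = reg.new_item("Detector", v1, id="D1", location="cavern")
v2 = reg.describe("Detector", ["id", "location", "calibration"])   # added alongside v1
new = reg.new_item("Detector", v2, id="D2", location="surface", calibration=0.98)
print(old["_described_by"], new["_described_by"])   # ('Detector', 1) ('Detector', 2)
```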

    Managing healthcare workflows in a multi-agent system environment

    Whilst Multi-Agent System (MAS) architectures appear to offer a more flexible model for designers and developers of complex, collaborative information systems, implementing real-world business processes that can be delegated to autonomous agents is still a relatively difficult task. Although a range of agent tools and toolkits exist, there still remains the need to move the creation of models nearer to code generation, in order that the development path be more rigorous and repeatable. In particular, it is essential that complex organisational process workflows are captured and expressed in a way that MAS can successfully interpret. Using a complex social care system as an exemplar, we describe a technique whereby a business process is captured, expressed, verified and specified in a suitable format for a healthcare MAS.
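
    The capture-verify-delegate pipeline the abstract describes can be illustrated with a small sketch. The workflow format and role names below are hypothetical (the paper's actual notation is not reproduced here); the point is that a declarative process specification can be structurally verified and then handed step by step to agents playing the required roles.

```python
# Hedged sketch: a declarative care-process spec, a structural check, and
# delegation of each ready task to the agent playing the required role.
workflow = {
    "assess_patient": {"role": "assessor",     "requires": []},
    "plan_care":      {"role": "care_planner", "requires": ["assess_patient"]},
    "deliver_care":   {"role": "care_worker",  "requires": ["plan_care"]},
}

def verify(wf):
    # A MAS interpreter needs the spec to be well-formed before delegation:
    # every dependency must name a task that exists in the spec.
    for task, spec in wf.items():
        for dep in spec["requires"]:
            assert dep in wf, f"{task} depends on unknown task {dep}"

def delegate(wf, agents):
    done = set()
    while len(done) < len(wf):
        for task, spec in wf.items():
            if task not in done and all(d in done for d in spec["requires"]):
                print(f"agent {agents[spec['role']]} performs {task}")
                done.add(task)

verify(workflow)
delegate(workflow, {"assessor": "A1", "care_planner": "A2", "care_worker": "A3"})
```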

    Model Exploration Using OpenMOLE - a workflow engine for large scale distributed design of experiments and parameter tuning

    OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high-level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. In this work, we briefly expose the strong assets of OpenMOLE and demonstrate its efficiency at exploring the parameter set of an agent simulation model. We perform a multi-objective optimisation on this model using computationally expensive Genetic Algorithms (GA). OpenMOLE hides the complexity of designing such an experiment thanks to its DSL, and transparently distributes the optimisation process. The example shows how an initialisation of the GA with a population of 200,000 individuals can be evaluated in one hour on the European Grid Infrastructure.
    Comment: IEEE High Performance Computing and Simulation conference 2015, Jun 2015, Amsterdam, Netherlands
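
    The pattern being exploited is that every individual in a GA population can be evaluated independently, so the workflow engine can farm the evaluations out to distributed resources. The sketch below is not the OpenMOLE DSL (which, per the abstract, is Scala-based); it is a plain-Python illustration of that delegation pattern, with an invented stand-in model and local processes in place of grid jobs.

```python
# Illustration of distributed parameter exploration: independent model runs
# for one GA population are mapped onto parallel workers.
from concurrent.futures import ProcessPoolExecutor
import random

def run_model(params):
    # Stand-in for an expensive agent-based simulation; returns two
    # objective values for a multi-objective optimisation.
    x, y = params
    return (x - 1) ** 2 + y ** 2, x ** 2 + (y + 1) ** 2

if __name__ == "__main__":
    population = [(random.uniform(-2, 2), random.uniform(-2, 2)) for _ in range(1000)]
    # A workflow engine would delegate this map to grid or cluster jobs;
    # locally, a process pool plays the same role.
    with ProcessPoolExecutor() as pool:
        fitnesses = list(pool.map(run_model, population))
    print(f"evaluated {len(fitnesses)} individuals; "
          f"best objective 1 = {min(f[0] for f in fitnesses):.3f}")
```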

    Reports Of Conferences, Institutes, And Seminars

    This quarter's column offers coverage of multiple sessions from the 2016 Electronic Resources & Libraries (ER&L) Conference, held April 3–6, 2016, in Austin, Texas. Topics in serials acquisitions dominate the column, including reports on altmetrics, cost per use, demand-driven acquisitions, and scholarly communications and the use of subscription agents; ERMS, access, and knowledgebases are also featured.

    Transparent Orchestration of Task-based Parallel Applications in Containers Platforms

    This paper presents a framework to easily build and execute parallel applications in container-based distributed computing platforms in a user-transparent way. The proposed framework is a combination of the COMP Superscalar (COMPSs) programming model and runtime, which provides a straightforward way to develop task-based parallel applications from sequential codes, and container management platforms that ease the deployment of applications in computing environments (such as Docker, Mesos or Singularity). This framework provides scientists and developers with an easy way to implement parallel distributed applications and deploy them in a one-click fashion. We have built a prototype which integrates COMPSs with different container engines in different scenarios: i) a Docker cluster, ii) a Mesos cluster, and iii) Singularity in an HPC cluster. We have evaluated the overhead in the building, deployment and execution phases of two benchmark applications, compared to a Cloud testbed based on KVM and OpenStack and to the usage of bare-metal nodes. We have observed an important gain in comparison to cloud environments during the building and deployment phases, which enables better adaptation of resources with respect to the computational load. In contrast, we detected an extra overhead during execution, mainly due to multi-host Docker networking.
    This work is partly supported by the Spanish Government through Programa Severo Ochoa (SEV-2015-0493), by the Spanish Ministry of Science and Technology through the TIN2015-65316 project, by the Generalitat de Catalunya under contracts 2014-SGR-1051 and 2014-SGR-1272, and by the European Union through the Horizon 2020 research and innovation programme under grant 690116 (EUBra-BIGSEA project). Results presented in this paper were obtained using the Chameleon testbed supported by the National Science Foundation.
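
    The key idea in task-based models like COMPSs is that ordinary sequential code is annotated so the runtime can intercept calls, build a dependency graph, and schedule tasks on distributed (possibly containerised) workers. The sketch below is a self-contained imitation of that idea, not the real COMPSs/PyCOMPSs API: a locally defined decorator and a thread pool stand in for the runtime so the example runs without COMPSs installed.

```python
# Self-contained imitation of a task-based programming model: annotated
# functions become asynchronous tasks while the code keeps its sequential shape.
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor()

def task(fn):
    # A real runtime would track data dependencies and place tasks on
    # remote workers; here each call simply returns a future.
    def wrapper(*args):
        return pool.submit(fn, *args)
    return wrapper

@task
def increment(x):
    return x + 1

@task
def add(a, b):
    return a + b

# Written as ordinary sequential code; execution happens in parallel.
futures = [increment(i) for i in range(4)]
total = add(futures[0].result(), futures[1].result())
print(total.result())   # 3
```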

    Implementation of context-aware workflows with Multi-agent Systems

    Systems in Ambient Intelligence (AmI) need to manage workflows that represent users’ activities. These workflows can be quite complex, as they may involve multiple participants, both physical and computational, playing different roles. Their execution implies monitoring the development of the activities in the environment, and taking the actions necessary for them and the workflow to reach a certain end. The context-aware approach supports the development of these applications with respect to event processing and information management issues. Modeling the actors in these context-aware workflows, where complex decisions and interactions must be considered, can be achieved with multi-agent systems. Agents are autonomous entities with sophisticated and flexible behaviors, which are able to adapt to complex and evolving environments, and to collaborate to reach common goals. This work presents architectural patterns to integrate agents on top of an existing context-aware architecture. This provides an additional abstraction layer on top of context-aware systems, where knowledge management is performed by agents. This approach improves the flexibility of AmI systems and facilitates their design. A case study on guiding users in buildings to their meetings illustrates this approach.
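
    The layering the abstract proposes can be made concrete with a small sketch (all names invented, not the paper's actual architecture): a context layer publishes events from the environment, and an agent subscribed to them applies its own knowledge to decide how to advance the workflow, as in the guiding-to-a-meeting case study.

```python
# Minimal sketch of agents layered on a context-aware architecture:
# the bus delivers context events; the agent decides how to advance its workflow.
class ContextBus:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, agent):
        self._subscribers.append(agent)

    def publish(self, event):
        for agent in self._subscribers:
            agent.perceive(event)

class GuideAgent:
    def __init__(self, user, target_room):
        self.user, self.target = user, target_room

    def perceive(self, event):
        # Agent-level knowledge management on top of raw context events.
        if event.get("user") == self.user:
            if event["location"] == self.target:
                print(f"{self.user} arrived at {self.target}: workflow complete")
            else:
                print(f"{self.user} at {event['location']}: guide towards {self.target}")

bus = ContextBus()
bus.subscribe(GuideAgent("alice", "room-3A"))
bus.publish({"user": "alice", "location": "lobby"})
bus.publish({"user": "alice", "location": "room-3A"})
```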