
    Making Strategic Supply Chain Capacity Planning more Dynamic to cope with Hyperconnected and Uncertain Environments

    Public and private organizations face many uncertainties when planning the future of their supply chains. In addition, the network of stakeholders is now intensely interconnected and dynamic, revealing new collaboration opportunities at a tremendous pace. In such a context, organizations must rethink most of their supply chain planning decision support systems. This is the case for strategic supply chain capacity planning systems, which should ensure that supply chains have enough resources to profitably produce and deliver products on time, whatever hazards and disruptions occur. Unfortunately, most existing systems cannot satisfactorily address this new reality. To solve this issue, this paper develops a decision support system designed to make strategic supply chain capacity planning more dynamic and able to cope with hyperconnected and uncertain environments. To validate this decision support system, two industrial experiments were conducted with two European pharmaceutical and cosmetics companies.

    Reference catalogue for ICT services in healthcare : model for ICT service management, controlling and benchmarking : version 1.0

    Translation of the German original

    Feeds as Query Result Serializations

    Many Web-based data sources and services are available as feeds, a model that provides consumers with a loosely coupled way of interacting with providers. The current feed model is limited in its capabilities, however: though it is simple to implement and scales well, it cannot be transferred to a wider range of application scenarios. This paper conceptualizes feeds as a way to serialize query results, describes the hardcoded query semantics currently implied by such a perspective, and surveys the ways in which extensions of this hardcoded model have been proposed or implemented. Our generalized view of feeds as query result serializations has implications for the applicability of feeds as a generic Web service for any collection that provides access to individual information items. As one interesting and compelling class of applications, we describe a simple way in which a query-based approach to feeds can be used to support location-based services.
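    As an illustration of this query-result view of feeds, the following minimal Python sketch serializes a hypothetical location-based query into a feed URL and reads returned Atom entries back as result items; the endpoint and its lat/lon/radius parameters are assumptions, not part of the paper.

# Minimal sketch: treating a feed as the serialization of a query result.
# The endpoint and its lat/lon/radius parameters are hypothetical; the Atom
# parsing below uses only the Python standard library.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def feed_query_url(base, **params):
    """Serialize a query (here: a location-based filter) into a feed URL."""
    return f"{base}?{urlencode(params)}"

def parse_feed_entries(atom_xml):
    """Read the feed back as a list of result items (title, updated)."""
    root = ET.fromstring(atom_xml)
    return [
        (e.findtext(f"{ATOM_NS}title"), e.findtext(f"{ATOM_NS}updated"))
        for e in root.findall(f"{ATOM_NS}entry")
    ]

url = feed_query_url("https://example.org/places/feed",
                     lat=46.95, lon=7.45, radius_km=2)
print(url)  # the query the feed provider would answer

sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Places nearby</title>
  <entry><title>Cafe A</title><updated>2024-01-01T10:00:00Z</updated></entry>
  <entry><title>Museum B</title><updated>2024-01-01T09:30:00Z</updated></entry>
</feed>"""
print(parse_feed_entries(sample))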

    SP2MN: a Software Process Meta-Modeling Language

    In the last two decades, software process modeling has been an area of interest in both academia and industry. Software process modeling aims at defining and representing software processes in the form of models. A software process model is the medium that allows better understanding, management, and control of the software process. Software process meta-modeling, in turn, provides standard metamodels from which customized software process models for the specific project at hand can be defined by instantiation. Several software process modeling and meta-modeling languages have been introduced to formalize software process models. Nonetheless, none of them offers a compatible yet precise language that includes all the concepts and information necessary for software process modeling. This paper presents the Software Process Meta-Modeling and Notation (SP2MN), a meta-modeling language that provides simple and expressive graphical notations for software process modeling. SP2MN has been evaluated using the well-known ISPW-6 process example, a standard benchmark problem for software process modeling, and has proved to be a valid and expressive software process modeling language.
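    To make the instantiation idea concrete, here is a minimal Python sketch of generic process-metamodel classes instantiated for one illustrative ISPW-6-style step; the class names, attributes, and step contents are assumptions and do not reproduce the actual SP2MN notation.

# Minimal sketch of metamodel instantiation: generic metamodel classes
# (hypothetical names, not the actual SP2MN notation) instantiated for one
# illustrative ISPW-6-style step.
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str

@dataclass
class Artifact:
    name: str

@dataclass
class Activity:
    name: str
    performed_by: Role
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

# Instantiating the metamodel yields a concrete process model.
engineer = Role("Software Engineer")
modify_code = Activity(
    name="Modify Code",
    performed_by=engineer,
    inputs=[Artifact("Design Document"), Artifact("Source Code")],
    outputs=[Artifact("Modified Source Code")],
)
print(modify_code)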

    iFloW: an integrated logistics software system for inbound supply chain traceability

    Visibility plays an important role in supply chain management. Such visibility is important not only for better planning, but especially for real-time execution related to the traceability of goods. In inbound supply chain management, logistics planners need to trace raw materials from the moment they are requested in order to properly plan a plant's production. The iFloW (Inbound Logistics Tracking System) integrates logistics providers' IT applications and Global Positioning System (GPS) technology to track and trace incoming freights. The Estimated Time of Arrival (ETA) is updated in real time, allowing an improved materials planning process. This paper presents the iFloW project and describes how these issues are addressed and validated in a real pilot project. This research is sponsored by the Portugal Incentive System for Research and Technological Development PEst-UID/CEC/00319/2013 and by project in co-promotion no. 36265/2013 (Project HMIExcel, 2013–2015).
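    For illustration only, a minimal Python sketch of a real-time ETA update from a GPS fix follows; the haversine distance, constant average speed, and coordinates are assumptions, not the iFloW implementation.

# Minimal sketch of a real-time ETA update from a GPS fix; the haversine
# distance and constant-speed assumption are illustrative, not the iFloW logic.
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def update_eta(gps_lat, gps_lon, plant_lat, plant_lon, avg_speed_kmh=60.0):
    """Recompute the ETA every time a new GPS fix arrives for the freight."""
    remaining_km = haversine_km(gps_lat, gps_lon, plant_lat, plant_lon)
    return datetime.utcnow() + timedelta(hours=remaining_km / avg_speed_kmh)

# Example: freight currently near Porto, plant near Braga (approximate coordinates).
print(update_eta(41.15, -8.61, 41.55, -8.42))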

    SIMULATION ANALYSIS OF USMC HIMARS EMPLOYMENT IN THE WESTERN PACIFIC

    As a result of a renewed focus on great power competition, the United States Marine Corps is currently undergoing a comprehensive force redesign. In accordance with the Commandant's Planning Guidance and Force Design 2030, this redesign includes an increase of 14 rocket artillery batteries while divesting 14 cannon artillery batteries. These changes necessitate study of tactics and capabilities for rocket artillery against a peer threat in the Indo-Pacific region. This thesis implements an efficient design of experiments to simulate over 1.6 million Taiwan invasions using a stochastic, agent-based combat model. Varying tactics and capabilities as inputs, the model returns measures of effectiveness that serve as the responses in metamodels, which are then analyzed for critical factors, interactions, and change points. The analysis provides insight into the principal factors affecting lethality and survivability for ground-based rocket fires. The major findings from this study include the need for increasingly distributed artillery formations and highly mobile launchers that can emplace and displace quickly, as well as the inadequacy of the unitary warheads currently employed by HIMARS units. Solutions robust to adversary actions and simulation variability can inform wargames and future studies as the Marine Corps continues to adapt in preparation for potential peer conflict.
    Captain, United States Marine Corps. Approved for public release. Distribution is unlimited.
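    The simulate-then-metamodel workflow can be sketched in a few lines of Python; the two factors, the toy combat model, and the least-squares metamodel below are illustrative assumptions, not the thesis's actual design of experiments or agent-based simulation.

# Minimal sketch of the simulate-then-metamodel workflow: a random space-filling
# design over two notional factors, a stand-in "combat model", and a linear
# regression metamodel fitted to the responses. Factor names and the toy model
# are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Design of experiments: 200 design points over two scaled factors.
n = 200
dispersion = rng.uniform(0.0, 1.0, n)         # how distributed the batteries are
displacement_min = rng.uniform(2.0, 30.0, n)  # minutes to displace after firing

def toy_combat_model(dispersion, displacement_min, rng):
    """Stand-in for the stochastic agent-based model: returns launcher survival."""
    base = 0.4 + 0.4 * dispersion - 0.01 * displacement_min
    return np.clip(base + rng.normal(0.0, 0.05, dispersion.shape), 0.0, 1.0)

survival = toy_combat_model(dispersion, displacement_min, rng)

# Metamodel: ordinary least squares on the two factors plus an intercept.
X = np.column_stack([np.ones(n), dispersion, displacement_min])
coef, *_ = np.linalg.lstsq(X, survival, rcond=None)
print(dict(zip(["intercept", "dispersion", "displacement_min"], coef.round(3))))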

    Metodología dirigida por modelos para las pruebas de un sistema distribuido multiagente de fabricación

    Market pressures have pushed manufacturing companies to reduce costs while improving their products, specializing in the activities where they can add value and collaborating with specialists from other areas for the rest. These distributed manufacturing systems bring new challenges, since it is difficult to integrate the various information systems and organize them coherently. This has led researchers to propose a variety of abstractions, architectures, and specifications that try to tackle this complexity. Among them, holonic manufacturing systems have received special attention: they view companies as networks of holons, entities that are simultaneously composed of and part of several other holons. Until now, holons have been implemented for manufacturing control as self-aware intelligent agents, but their learning curve and the difficulty of integrating them with traditional systems have hindered their adoption in industry. Moreover, their emergent behaviour may not be desirable when tasks must meet certain guarantees, as happens in business-to-business or business-to-customer relationships and in high-level plant management operations. This thesis proposes a more flexible view of the holon concept, allowing it to sit on a broader spectrum of intelligence levels, and argues that business holons are better implemented as services: software components that can be reused through standard technologies from any part of the organization. These services are usually organized into coherent catalogues, known as Service Oriented Architectures (SOA). A successful SOA initiative can yield important benefits, but it is not a trivial undertaking. For this reason, many SOA methodologies have been proposed in the literature, but none of them explicitly covers the need to test the services. Considering that the goal of SOA is to increase software reuse across the organization, this is a major gap: having high-quality services is crucial for a successful SOA. The main objective of this thesis is therefore to define an extended methodology that helps users test the services that implement their business holons. After considering the available options, the model-driven methodology SODM was taken as a starting point and largely rewritten with the open-source Epsilon framework, allowing users to model their partial knowledge about the expected performance of the services. This partial knowledge is exploited by several new performance requirement inference algorithms, which extract the specific requirements of each service. While the throughput (requests per second) inference algorithm is simple, the time limit inference algorithm went through numerous revisions before reaching the desired level of functionality and performance. After a first formulation based on linear programming, it was replaced with a simple ad hoc graph traversal algorithm and later with a much faster and more advanced incremental algorithm. The incremental algorithm produces equivalent results in much less time, even on large models.
    To get more out of the models, this thesis also proposes a general approach for generating test artefacts for multiple technologies from the models annotated by the algorithms. To assess the feasibility of this approach, it was implemented for two possible uses: reusing unit tests written in Java as performance tests, and generating complete performance test projects with The Grinder framework for any Web Service described with the standard Web Services Description Language. The complete methodology is finally applied successfully to a case study based on a rectified ceramic tile manufacturing area of a Spanish group of companies. The case study starts from a high-level description of the business and ends with the implementation of part of one of the holons and the generation of performance tests for one of its Web Services. With its support for both designing and implementing performance tests of the services, it can be concluded that SODM+T helps users gain greater confidence in their implementations of the business holons observed in their companies.
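    As a rough illustration of time-limit inference, the following Python sketch propagates a global time limit down a service-invocation tree in proportion to per-node weights; the service names, weights, and the proportional heuristic are assumptions and do not reproduce the incremental algorithm described above.

# Minimal sketch of propagating a global time limit down a service-invocation
# tree in proportion to per-node weights. This is an illustrative heuristic,
# not the thesis's incremental inference algorithm.
from dataclasses import dataclass, field

@dataclass
class ServiceNode:
    name: str
    weight: float = 1.0
    children: list = field(default_factory=list)

def infer_time_limits(node, limit_s, out=None):
    """Assign each service a share of its parent's limit, weighted by `weight`."""
    out = {} if out is None else out
    out[node.name] = limit_s
    total = sum(c.weight for c in node.children)
    for child in node.children:
        infer_time_limits(child, limit_s * child.weight / total, out)
    return out

process = ServiceNode("HandleOrder", children=[
    ServiceNode("CheckStock", weight=1.0),
    ServiceNode("PlanProduction", weight=3.0, children=[
        ServiceNode("ScheduleKiln", weight=1.0),
    ]),
])
print(infer_time_limits(process, limit_s=2.0))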

    Generating Smart Glasses-based Information Systems with BPMN4SGA: A BPMN Extension for Smart Glasses Applications

    Although smart glasses allow hands-free interaction with information systems and can enhance business processes, their adoption in businesses remains difficult. Implementation challenges arise from specific hardware constraints, e.g. limited computational power, limited battery life, and small screen size, as well as privacy issues caused by the camera. In addition, few programmers specialize in developing smart glasses-based applications that overcome these challenges. We address this issue with a generation tool for smart glasses-based information systems. A BPMN extension for smart glasses applications allows their abstract specification. Specified processes are then fed into a model-driven software development approach that transforms them directly into smart glasses applications. This paper covers the design and development of the abstract and concrete syntax of the BPMN extension and presents the architecture for generating smart glasses-based information systems with the newly developed BPMN extension.
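    A minimal Python sketch of one model-to-text step follows: reading a hypothetical smart-glasses annotation from a BPMN task and emitting a skeleton screen definition; the sg:displayType attribute and its namespace are assumptions, not BPMN4SGA's actual concrete syntax.

# Minimal sketch of a model-to-text step: read a hypothetical smart-glasses
# annotation from a BPMN task and emit a skeleton screen definition. The
# sg:displayType attribute and its namespace are illustrative only.
import xml.etree.ElementTree as ET

BPMN = "{http://www.omg.org/spec/BPMN/20100524/MODEL}"
SG = "{http://example.org/bpmn4sga}"  # hypothetical extension namespace

bpmn_xml = """<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
                           xmlns:sg="http://example.org/bpmn4sga">
  <process id="picking">
    <userTask id="scan_item" name="Scan item" sg:displayType="barcodeScanner"/>
    <userTask id="confirm" name="Confirm pick" sg:displayType="confirmationCard"/>
  </process>
</definitions>"""

root = ET.fromstring(bpmn_xml)
for task in root.iter(f"{BPMN}userTask"):
    display = task.get(f"{SG}displayType", "textCard")
    print(f"screen {task.get('id')}: widget={display}, title='{task.get('name')}'")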

    Development and validation of a disaster management metamodel (DMM)

    Disaster Management (DM) is a diffuse area of knowledge, with many complex features interconnecting the physical and social views of the world. Many international and national bodies create knowledge models to enable knowledge sharing and effective DM activities, but these are often narrow in focus and deal with specific disaster types. We analyze thirty such models and find that many DM activities are common even when the triggering events vary. We then create a unified view of DM in the form of a metamodel, applying a metamodelling process to ensure that this metamodel is complete and consistent. We validate it and present a representational layer that unifies and shares knowledge and combines and matches different DM activities according to different disaster situations.
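    As a small illustration, the following Python sketch represents DM activities against disaster phases so that activities from different source models can be combined and matched; the class names and catalogue entries are assumptions, not the actual DMM concepts.

# Minimal sketch of matching DM activities from different source models to a
# disaster situation via standard DM phases. Classes and entries are illustrative.
from dataclasses import dataclass
from enum import Enum

class Phase(Enum):
    MITIGATION = "mitigation"
    PREPAREDNESS = "preparedness"
    RESPONSE = "response"
    RECOVERY = "recovery"

@dataclass(frozen=True)
class Activity:
    name: str
    phase: Phase
    source_model: str  # which of the analyzed models contributed it

def match(activities, phase):
    """Select the activities relevant to the current disaster situation."""
    return [a for a in activities if a.phase is phase]

catalogue = [
    Activity("Evacuate affected area", Phase.RESPONSE, "model A"),
    Activity("Restore utilities", Phase.RECOVERY, "model B"),
    Activity("Stockpile supplies", Phase.PREPAREDNESS, "model A"),
]
print([a.name for a in match(catalogue, Phase.RESPONSE)])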

    Dynamic variability support in context-aware workflow-based systems

    Workflow-based systems are becoming increasingly complex and dynamic. Besides the large sets of process variants to be managed, process variants need to be context sensitive in order to accommodate new user requirements and intrinsic complexity. This paradigm shift forces us to defer decisions to run time, where process variants must be customized and executed based on a recognized context. However, few efforts have focused on dynamic variability for process families. This dissertation proposes an approach for variant-rich workflow-based systems that can take context data into account while deferring process configuration to run time. Whereas existing early process variability approaches, such as Worklets, VxBPEL, or Provop, handle run-time reconfiguration, ours resolves variants at execution time and supports the multiple binding required for dynamic environments. Finally, unlike specialized reconfiguration solutions for some workflow-based systems, our approach allows automated decision making, enabling different run-time resolution strategies that intermix constraint solving and feature models. We achieve these results through a simple extension to BPMN that adds primitives for process variability constructs. We show that this is enough to efficiently model process variability while preserving separation of concerns. We implemented our approach in the LateVa framework and evaluated it using both synthetic and real-world scenarios. LateVa achieves reasonable performance for run-time resolution, which means it can facilitate practical adoption in context-aware and variant-rich workflow-based systems.
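    For illustration, a minimal Python sketch of run-time variant resolution follows: context data is checked against simple per-variant constraints to bind a fragment at a variation point; the variant names and constraint style are assumptions, not the LateVa framework's API.

# Minimal sketch of deferring a variant decision to run time: context data is
# checked against simple feature constraints to pick a process fragment for a
# variation point. Names and the constraint style are illustrative only.
VARIANTS = {
    "express_shipping": {"requires": {"priority_order": True}},
    "standard_shipping": {"requires": {}},
}

def resolve(variation_point, variants, context):
    """Return the first variant whose required features hold in the context."""
    for name, spec in variants.items():
        if all(context.get(f) == v for f, v in spec["requires"].items()):
            return name
    raise LookupError(f"no variant satisfies context at {variation_point}")

# At execution time the recognized context drives the binding:
print(resolve("shipping", VARIANTS, {"priority_order": True}))   # express_shipping
print(resolve("shipping", VARIANTS, {"priority_order": False}))  # standard_shipping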