
    Service Performance Assessment: A PI Toolset Methodology for VEs

    Part 2: Knowledge-Based Services. The service sector is becoming increasingly relevant in building successful collaborative economies. In this environment, Virtual Enterprises (VEs) are forcing a change in the way traditional manufacturing systems are managed, so measuring service performance plays an important role in turning company strategic goals into reality. Performance Indicators (PIs) are a supporting tool for assessing service efficiency and effectiveness; determining the most significant activities to control and measure through proper PIs therefore becomes essential. This paper presents a PI Toolset and tests it on an industrial use case. The PI Toolset was developed to support VEs in selecting significant activities, managing governance processes, and designing and implementing specific PIs tied to the use-case objectives. Finally, a lessons-learnt approach is adopted to highlight the strengths and weaknesses of both the proposed methodology and its tools.
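
    The paper does not publish the PI Toolset's internal data model, so the sketch below is purely illustrative: a minimal Python representation (hypothetical PerformanceIndicator class and attainment ratio) of the kind of indicator such a toolset would design and track for a VE activity.

    ```python
    # Illustrative only: the PerformanceIndicator structure and the attainment
    # metric are our assumptions, not the paper's published toolset.
    from dataclasses import dataclass

    @dataclass
    class PerformanceIndicator:
        name: str        # e.g. "On-time service delivery"
        activity: str    # the governed VE activity this PI measures
        target: float    # strategic target value
        actual: float    # measured value for the reporting period

        def attainment(self) -> float:
            """Ratio of measured performance to target (1.0 = target met)."""
            return self.actual / self.target

    pi = PerformanceIndicator("On-time service delivery", "Order fulfilment", 0.95, 0.91)
    print(f"{pi.name}: {pi.attainment():.1%} of target")
    ```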

    Non-stationary demand forecasting by cross-sectional aggregation

    In this paper the relative effectiveness of top-down (TD) versus bottom-up (BU) approaches is compared for cross-sectionally forecasting aggregate and sub-aggregate demand. We assume that the sub-aggregate demand follows a non-stationary Integrated Moving Average (IMA) process of order one and that a Single Exponential Smoothing (SES) procedure is used to extrapolate future requirements. Such demand processes are often encountered in practice, and SES is one of the standard estimators used in industry (in addition to being the optimal estimator for an IMA process). Theoretical variances of forecast error are derived for the BU and TD approaches in order to contrast the relevant forecasting performances. The theoretical analysis is supported by an extensive numerical investigation at both the aggregate and sub-aggregate levels, in addition to empirical validation of our findings on a real dataset from a European superstore. The results demonstrate that the benefit of cross-sectional forecasting is greater in a non-stationary environment than in a stationary one. Valuable insights are offered to demand planners, and the paper closes with an agenda for further research in this area. © 2015 Elsevier B.V. All rights reserved.
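
    For readers unfamiliar with the modeling assumptions, here is a short sketch, in our own notation rather than the paper's, of the assumed sub-aggregate demand process and the SES estimator; the pairing is natural because SES is the minimum-MSE estimator for an IMA(1,1) process when its smoothing constant is chosen appropriately.

    ```latex
    % Sub-aggregate demand as an IMA(1,1) process, forecast by SES:
    \begin{align}
      d_t &= d_{t-1} + \varepsilon_t - \theta\,\varepsilon_{t-1},
            \qquad \varepsilon_t \sim \text{i.i.d.}(0, \sigma^2) \\
      \hat{d}_{t+1} &= \alpha\, d_t + (1 - \alpha)\,\hat{d}_t
            \qquad \text{(single exponential smoothing)}
    \end{align}
    % SES attains the minimum mean-squared forecast error for this process
    % when \alpha = 1 - \theta, which is why the paper pairs IMA demand
    % with an SES estimator.
    ```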

    Erroma eta Jerusalem

    Dialect: text in Labourdin Basque. 19th century -- Period: late Modern Basque. Digitization: Vitoria-Gasteiz, FundaciĂłn Sancho el Sabio, 2008. Carton

    Digital Twin for Smart Cities: An Enabler for Large-Scale Enterprise Interoperability

    In a context of increasingly connected production systems and ambient intelligence, the digital twin is an approach that is becoming increasingly popular for helping to control and pilot such systems. The interest of the digital twin lies in meeting the need for modeling and piloting as close as possible to the physical system and in better anticipating its behavior. In this context, how should digital twins be composed to model a system of systems in which each system already has its own digital twin? This paper examines this question from the perspective of digital twins for smart cities. The position adopted here is the concept of Digital Industrial Territories, a middleware for large-scale interoperability between the digital twins of enterprises involved in multiple supply chains (energy, transport, health, etc.). © 2022 CEUR-WS. All rights reserved.
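
    As a rough illustration of what such middleware-mediated composition could look like, here is a minimal Python sketch; the TerritoryBus class, topic names, and payloads are all hypothetical, standing in for the far richer interoperability services a Digital Industrial Territory would provide.

    ```python
    # Hypothetical sketch: a shared publish/subscribe "territory" bus through
    # which independently built digital twins exchange events without knowing
    # each other's implementations.
    from collections import defaultdict
    from typing import Callable

    class TerritoryBus:
        """Middleware shared by the digital twins composed into a system of systems."""
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[dict], None]):
            self._subscribers[topic].append(handler)

        def publish(self, topic: str, event: dict):
            for handler in self._subscribers[topic]:
                handler(event)

    # Each enterprise twin talks only to the bus, never to another twin directly.
    bus = TerritoryBus()
    bus.subscribe("energy/load", lambda e: print("transport twin reacts to", e))
    bus.publish("energy/load", {"district": "A", "kw": 1250})
    ```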

    A Simulation Based Approach to Digital Twin’s Interoperability Verification & Validation

    The digital twins of production systems are one of the pillars of the Industry of the Future. Despite numerous ongoing research and development initiatives, the verification and validation of the digital twin remains a major scientific obstacle. This work proposes a simulation-based approach to achieve this goal: supporting Digital Twin verification and validation through a dedicated framework. A simulation model is used in place of the real-world system to ensure that the digital twin behaves as expected and to assess its interoperability with the system to be twinned. The simulation model is then replaced by the real-world system, which interoperates with the verified and validated digital twin. With such an approach, the interoperability middleware, i.e. the IoT between the system and its digital twin, can also be modeled, simulated, verified and validated. Consequently, an optimized solution can be built for an entire value chain, from the system to its digital twin and conversely. © 2022 CEUR-WS. All rights reserved.
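
    A minimal sketch of the swap this approach relies on, with all interfaces assumed rather than taken from the paper: the twin binds to an abstract endpoint, so a simulation model can stand in for the real-world system during verification and be replaced afterwards.

    ```python
    # Assumed interfaces for illustration: SystemEndpoint abstracts whatever
    # the digital twin is connected to (simulation first, real system later).
    from abc import ABC, abstractmethod

    class SystemEndpoint(ABC):
        @abstractmethod
        def read_state(self) -> dict: ...
        @abstractmethod
        def apply(self, command: dict) -> None: ...

    class SimulatedSystem(SystemEndpoint):
        """Stand-in used while verifying and validating the digital twin."""
        def __init__(self):
            self.state = {"temp": 20.0}
        def read_state(self):
            return dict(self.state)
        def apply(self, command):
            self.state["temp"] += command.get("delta", 0.0)

    def verify_twin(endpoint: SystemEndpoint) -> bool:
        # Toy check: the mirrored state must track the (simulated) system.
        before = endpoint.read_state()["temp"]
        endpoint.apply({"delta": 1.5})
        return abs(endpoint.read_state()["temp"] - (before + 1.5)) < 1e-9

    # The same check later runs unchanged against the real-world endpoint.
    assert verify_twin(SimulatedSystem())
    ```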

    Business models for distributed-simulation orchestration and risk management

    Nowadays, industries are implementing heterogeneous systems from different domains, backgrounds, and operating systems. Manufacturing systems are becoming more and more complex, which forces engineers to manage complexity in several respects. Technical complexity brings interoperability, risk-management, and hazard issues that must be taken into consideration, from business model design to technical implementation. To resolve the complexities and incompatibilities between heterogeneous components, several distributed-simulation and cosimulation standards and tools can be used for data exchange and interconnection. High-level architecture (HLA) and the functional mockup interface (FMI) are the main international standards for distributed and cosimulation; HLA is mainly used in the academic and defense domains, while FMI is mostly used in industry. In this article, we propose an HLA/FMI implementation with a connection to an external business process-modeling tool called Papyrus. Papyrus is configured as a master federate that orchestrates the subsimulations based on the above standards. The developed framework is integrated with external heterogeneous components through an FMI interface and was built with the aim of bringing interoperability to a system used in a power generation company.
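
    The master-federate loop at the heart of such an orchestration can be sketched as follows. This is a conceptual toy, not the article's implementation: a real setup would wrap FMUs through an FMI library and HLA federates through an RTI, which the hypothetical Component class below does not do.

    ```python
    # Conceptual sketch of a master "federate" stepping two sub-simulations in
    # lockstep, mimicking the Papyrus-as-orchestrator role described above.
    class Component:
        def __init__(self, name):
            self.name, self.t = name, 0.0
        def do_step(self, t, dt):
            self.t = t + dt  # a real FMU would integrate its model here
        def outputs(self):
            return {f"{self.name}.t": self.t}

    def orchestrate(components, stop_time, dt):
        t = 0.0
        while t < stop_time:
            for c in components:   # the master grants each component a step
                c.do_step(t, dt)
            t += dt                # the shared simulation clock advances
        return {k: v for c in components for k, v in c.outputs().items()}

    print(orchestrate([Component("grid"), Component("turbine")], 1.0, 0.25))
    ```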

    Demand forecasting by temporal aggregation

    Demand forecasting performance is subject to the uncertainty underlying the time series an organization is dealing with. There are many approaches that may be used to reduce uncertainty and thus improve forecasting performance. One intuitively appealing such approach is to aggregate demand in lower-frequency “time buckets.” This approach is referred to as temporal aggregation, and in this article we investigate its impact on forecasting performance. We assume that the nonaggregated demand follows either a moving average process of order one or a first-order autoregressive process, and that a single exponential smoothing (SES) procedure is used to forecast demand. These demand processes are often encountered in practice and SES is one of the standard estimators used in industry. Theoretical mean-squared-error expressions are derived for the aggregated and nonaggregated demand to contrast the relevant forecasting performances. The theoretical analysis is supported by an extensive numerical investigation and experimentation with an empirical dataset. The results indicate that the performance improvement achieved through aggregation is a function of the aggregation level, the smoothing constant, and the process parameters. Valuable insights are offered to practitioners, and the article closes with an agenda for further research in this area.
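
    A compact numerical sketch of the kind of experiment described above (our own construction, not the article's code): simulate AR(1) demand, aggregate it into non-overlapping buckets of size m, and compare one-step SES forecast errors on each series. Varying m, alpha, and phi illustrates the dependence on aggregation level, smoothing constant, and process parameters that the article analyzes.

    ```python
    import numpy as np

    def ses(series, alpha):
        """One-step-ahead SES forecasts; f[i] predicts series[i]."""
        f = np.empty_like(series)
        f[0] = series[0]
        for i in range(1, len(series)):
            f[i] = alpha * series[i - 1] + (1 - alpha) * f[i - 1]
        return f

    rng = np.random.default_rng(0)
    phi, n, m, alpha = 0.5, 12_000, 3, 0.2

    d = np.empty(n)
    d[0] = 0.0
    for t in range(1, n):               # AR(1): d_t = phi * d_{t-1} + eps_t
        d[t] = phi * d[t - 1] + rng.normal()

    agg = d.reshape(-1, m).sum(axis=1)  # non-overlapping temporal aggregation

    def mse(x, a):
        f = ses(x, a)
        return float(np.mean((x[1:] - f[1:]) ** 2))

    print("SES MSE, nonaggregated series:", mse(d, alpha))
    print("SES MSE, aggregated series:   ", mse(agg, alpha))
    ```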

    Knowledge formalization in experience feedback processes : an ontology-based approach

    Because of the current trend toward integration and interoperability of industrial systems, their size and complexity continue to grow, making it more difficult to analyze, understand, and solve the problems that arise in their organizations. Continuous improvement methodologies are powerful tools for understanding and solving problems, controlling the effects of changes, and finally capitalizing knowledge about changes and improvements. These tools require suitably representing the knowledge relating to the system concerned. Consequently, knowledge management (KM) is an increasingly important source of competitive advantage for organizations. In particular, the capitalization and sharing of knowledge resulting from experience feedback play an essential role in the continuous improvement of industrial activities. In this paper, the contribution deals with semantic interoperability and relates to the structuring and formalization of an experience feedback (EF) process aimed at transforming information or understanding gained through experience into explicit knowledge. The reuse of such knowledge has proved to have a significant impact on achieving the missions of companies. However, the means of describing the knowledge objects of an experience generally remain informal. Based on an experience feedback process model and conceptual graphs, this paper takes a domain ontology as a framework for the clarification of explicit knowledge and know-how, the aim of which is to obtain lessons-learned descriptions that are significant, correct, and applicable.
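
    To make the ontology-based formalization concrete, here is a small hedged sketch in Python using rdflib; the class and property names (Experience, Context, Lesson, yieldsLesson) and the namespace are our illustrative assumptions, not the paper's actual domain ontology or its conceptual-graph notation.

    ```python
    # Illustrative fragment of a domain ontology for experience feedback (EF).
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    EF = Namespace("http://example.org/ef#")  # hypothetical namespace
    g = Graph()
    g.bind("ef", EF)

    # Core EF concepts: an Experience, captured in a Context, yields a Lesson.
    for cls in (EF.Experience, EF.Context, EF.Lesson):
        g.add((cls, RDF.type, RDFS.Class))
    g.add((EF.yieldsLesson, RDF.type, RDF.Property))
    g.add((EF.yieldsLesson, RDFS.domain, EF.Experience))
    g.add((EF.yieldsLesson, RDFS.range, EF.Lesson))

    # One capitalized experience and its lesson-learned description.
    g.add((EF.exp42, RDF.type, EF.Experience))
    g.add((EF.exp42, EF.yieldsLesson, EF.lesson42))
    g.add((EF.lesson42, RDFS.comment,
           Literal("Check torque settings after every line changeover.")))

    print(g.serialize(format="turtle"))
    ```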

    Should a Sentinel Node Biopsy Be Performed in Patients with High-Risk Breast Cancer?

    A negative sentinel lymph node (SLN) biopsy spares many breast cancer patients the complications associated with lymph node irradiation or additional surgery. However, patients at high risk for nodal involvement based on clinical characteristics may remain at unacceptably high risk of axillary disease even after a negative SLN biopsy result. A Bayesian nomogram was designed to combine the probability of axillary disease prior to nodal biopsy with customized test characteristics for an SLN biopsy, and to provide the probability of axillary disease despite a negative SLN biopsy. Users may individualize the sensitivity of the SLN biopsy based on factors known to modify the sensitivity of the procedure. This tool may be useful in identifying patients who should have expanded upfront exploration of the axilla or comprehensive axillary irradiation.
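
    The post-test probability such a nomogram reports follows from Bayes' theorem; the sketch below uses our own notation and assumes, for simplicity, that false-positive SLN findings are negligible (specificity ≈ 1).

    ```latex
    % p = pre-biopsy probability of axillary disease, s = SLN biopsy sensitivity.
    \[
      P(\text{disease} \mid \text{negative SLN}) \;=\;
      \frac{p\,(1 - s)}{p\,(1 - s) + (1 - p)}
    \]
    % Worked example (illustrative numbers): p = 0.40 and s = 0.90 give
    % 0.04 / 0.64 = 0.0625, i.e. a residual risk of about 6%, which the
    % clinician can weigh against further axillary surgery or irradiation.
    ```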

    A Governance Framework for Mitigating Risks and Uncertainty in Collaborative Business Processes

    The development of collaborative business processes relies mostly on software services spanning multiple organizations. Uncertainty related to shared assets and the risk of Intellectual Property infringement are therefore major concerns that hamper the development of inter-enterprise collaboration. This paper proposes a governance framework to enhance trust and assurance in such a collaborative context, coping with the impacts of Cloud infrastructure. First, a collaborative security requirements engineering approach analyzes asset-sharing relations in the business process to identify risks and uncertainties, and thus elicits partners’ security requirements and profiles. Then, a ‘due usage’-aware policy model supports negotiation between the asset provider’s requirements and the consumer’s profile. The enforcement mechanism adapts to dynamic business processes and Cloud infrastructures to provide end-to-end protection of shared assets.
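
    A minimal sketch of the negotiation step between a provider's 'due usage' requirements and a consumer's declared profile; the policy fields and class names below are assumptions made for illustration, not the paper's actual policy model.

    ```python
    # Hypothetical 'due usage' policy matching: the provider's requirements
    # are checked against the consumer's declared usage profile.
    from dataclasses import dataclass, field

    @dataclass
    class UsagePolicy:                       # asset provider's requirements
        allowed_purposes: set = field(default_factory=set)
        max_retention_days: int = 0
        allow_cloud_storage: bool = False

    @dataclass
    class ConsumerProfile:                   # consumer's declared intentions
        purpose: str
        retention_days: int
        stores_in_cloud: bool

    def compliant(policy: UsagePolicy, profile: ConsumerProfile) -> bool:
        """True if the intended usage satisfies the provider's policy."""
        return (profile.purpose in policy.allowed_purposes
                and profile.retention_days <= policy.max_retention_days
                and (policy.allow_cloud_storage or not profile.stores_in_cloud))

    policy = UsagePolicy({"co-engineering"}, max_retention_days=30)
    print(compliant(policy, ConsumerProfile("co-engineering", 14, stores_in_cloud=False)))  # True
    ```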
