
    5G & SLAs: Automated proposition and management of agreements towards QoS enforcement

    Efficient Service Level Agreement (SLA) management and anticipation of Service Level Objective (SLO) breaches are mandatory to guarantee the required service quality in software-defined and 5G networks. To create an operational Network Service, it must be associated with the network-related parameters that reflect the corresponding quality levels. These parameters are included in policies, but since SLAs usually target business users, mechanisms are needed to bridge this abstraction gap. In this paper, a generic black-box approach is used to map high-level requirements expressed by users in SLAs to low-level network parameters included in policies, enabling Quality of Service (QoS) enforcement by triggering the required policies and managing the infrastructure accordingly. In addition, a mechanism for determining the importance of different QoS parameters is presented, used mainly to recommend "relevant" QoS metrics in the SLA template.
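
    As a rough illustration of the metric-recommendation idea, the sketch below ranks QoS metrics by tree-ensemble feature importance. The paper describes a generic black-box approach, so this is only one plausible realization, not the authors' method; the metric names and the synthetic quality score are assumptions.

```python
# Minimal sketch: ranking QoS metrics by importance for SLA template
# recommendation. A tree-ensemble feature importance is one plausible
# black-box realization; metric names and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
metrics = ["latency_ms", "jitter_ms", "packet_loss_pct", "throughput_mbps"]
X = rng.random((500, len(metrics)))   # observed low-level network metrics
# synthetic user-perceived quality score, driven mostly by latency and loss
y = 5.0 - 3.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.standard_normal(500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranking = sorted(zip(metrics, model.feature_importances_),
                 key=lambda kv: kv[1], reverse=True)
for name, score in ranking:
    print(f"{name}: {score:.3f}")     # most "relevant" metrics first
```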

    Bibliographic Review on Distributed Kalman Filtering

    In recent years, a compelling need has arisen to understand the effects of distributed information structures on estimation and filtering. In this paper, a bibliographic review on distributed Kalman filtering (DKF) is provided. The paper contains a classification of the different approaches and methods applied to DKF. The applications of DKF are also discussed and explained separately, and a comparison of the different approaches is briefly carried out. The focus of contemporary research is also addressed, with emphasis on the practical applications of the techniques. An exhaustive list of publications, linked directly or indirectly to DKF in the open literature, is compiled to provide an overall picture of the different developing aspects of this area.
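
    For readers unfamiliar with the basic mechanics, the following minimal sketch shows one step of a consensus-style DKF for a scalar state: each node runs a local Kalman update on its own measurement, then averages estimates with its neighbours. The dynamics, noise levels and consensus weights are illustrative assumptions, and this is just one of the many DKF variants such a review covers.

```python
# Minimal sketch of one consensus-based DKF step for a scalar state.
# All numerical values are assumptions for illustration.
import numpy as np

A, Q, H, R = 1.0, 0.01, 1.0, 0.25            # dynamics and noise (assumed)
W = np.array([[0.50, 0.25, 0.25],            # row-stochastic consensus
              [0.25, 0.50, 0.25],            # weights over a 3-node network
              [0.25, 0.25, 0.50]])

def local_kf_step(x, P, z):
    # Predict, then correct with the node's own measurement z.
    x_pred, P_pred = A * x, A * P * A + Q
    K = P_pred * H / (H * P_pred * H + R)    # Kalman gain
    return x_pred + K * (z - H * x_pred), (1 - K * H) * P_pred

x, P = np.zeros(3), np.ones(3)               # per-node estimates
z = np.array([1.1, 0.9, 1.05])               # noisy local measurements
for i in range(3):
    x[i], P[i] = local_kf_step(x[i], P[i], z[i])
x = W @ x                                    # consensus: blend with neighbours
print(x)
```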

    Challenges Emerging from Future Cloud Application Scenarios

    The cloud computing paradigm encompasses several key differentiating elements and technologies, tackling a number of inefficiencies, limitations and problems that have been identified in the distributed and virtualized computing domain. Nonetheless, as is the case for all emerging technologies, its adoption has introduced new challenges and complexities. In this paper we present key application areas and capabilities of future scenarios that are not tackled by current advancements, and we highlight specific requirements and goals for progress in the cloud computing domain. We discuss these requirements and goals across different focus areas of cloud computing, ranging from cloud service and application integration, development environments and abstractions, to interoperability and related aspects such as legislation. The future application areas and their requirements are also mapped to the aforementioned focus areas in order to highlight their dependencies and their potential for moving cloud technologies forward and contributing towards wider adoption.

    A Smart Products Lifecycle Management (sPLM) Framework - Modeling for Conceptualization, Interoperability, and Modularity

    Autonomy and intelligence have been built into many of today's mechatronic products, taking advantage of low-cost sensors and advanced data analytics technologies. Designing product intelligence (enabled by analytics capabilities) is no longer a trivial or optional part of product development. This research addresses the challenges raised by the new data-driven design paradigm for smart products development, in which the product itself and its smartness must be carefully co-constructed. A smart product can be seen as a specific composition and configuration of the physical components that form its body and the analytics models that implement its intelligence, evolving along its lifecycle stages. Based on this view, the contribution of this research is to expand the "Product Lifecycle Management (PLM)" concept, traditionally applied to physical products, to data-based products. As a result, a Smart Products Lifecycle Management (sPLM) framework is conceptualized based on a high-dimensional Smart Product Hypercube (sPH) representation and decomposition. First, the sPLM addresses interoperability issues by developing a Smart Component data model that uniformly represents and composes the physical component models created by engineers and the analytics models created by data scientists. Second, the sPLM implements an NPD3 process model that incorporates a formal data analytics process into the new product development (NPD) process model, in order to support the transdisciplinary information flows and team interactions between engineers and data scientists. Third, the sPLM addresses issues related to product definition, modular design, product configuration, and lifecycle management of analytics models by adapting the theoretical frameworks and methods for traditional product design and development. An sPLM proof-of-concept platform has been implemented to validate the concepts and methodologies developed throughout the research. The sPLM platform provides a shared data repository to manage the product-, process-, and configuration-related knowledge for smart products development, as well as a collaborative environment that facilitates transdisciplinary collaboration between product engineers and data scientists.
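
    A minimal sketch of the Smart Component idea follows: one record that composes a physical component with the analytics models attached to it. The field names are illustrative assumptions, not the actual sPLM schema.

```python
# Sketch of a Smart Component record pairing a physical component
# (engineer-owned) with analytics models (data-scientist-owned).
# Field names are hypothetical, not the sPLM data model.
from dataclasses import dataclass, field

@dataclass
class AnalyticsModel:
    name: str                 # e.g. "anomaly_detector"
    inputs: list[str]         # sensor channels the model consumes
    version: str = "0.1"      # analytics models evolve over the lifecycle

@dataclass
class SmartComponent:
    part_number: str          # link to the physical (CAD/BOM) model
    sensors: list[str] = field(default_factory=list)
    analytics: list[AnalyticsModel] = field(default_factory=list)

motor = SmartComponent(
    part_number="MTR-1002",
    sensors=["vibration_x", "temperature"],
    analytics=[AnalyticsModel("anomaly_detector", ["vibration_x"])],
)
print(motor)
```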

    Development of a Dynamic Performance Management Framework for Naval Ship Power System using Model-Based Predictive Control

    Medium-Voltage Direct-Current (MVDC) power systems are considered the trending technology for future All-Electric Ships (AES) to produce, convert and distribute electrical power. With the wide employment of high-frequency power electronics converters and motor drives in DC systems, accurate and fast assessment of system dynamic behavior, as well as optimization of system transient performance, have become serious concerns for system-level studies, high-level control design and power management algorithm development. The proposed technique presents a coordinated and automated approach to determining the system adjustment strategy for naval power systems in order to improve transient performance and prevent potential instability following a system contingency. In contrast with conventional design schemes that rely heavily on human operators and pre-specified rules and set points, we focus on developing the capability to automatically and efficiently detect and react to system state changes following disturbances and/or damage, by incorporating different system components to formulate an overall system-level solution. To achieve this objective, we propose a generic model-based predictive management framework that can be applied to a variety of Shipboard Power System (SPS) applications to meet stringent performance requirements under different operating conditions. The proposed technique is shown to effectively prevent the system from instability caused by known and unknown disturbances, with little or no human intervention, under a variety of operating conditions. The management framework proposed in this dissertation is designed on the concept of Model Predictive Control (MPC): a numerical approximation of the actual system is used to predict future system behavior based on the current states and candidate control input sequences; based on the predictions, the optimal control solution is chosen and applied as the current control input. The effectiveness and efficiency of the proposed framework can be evaluated conveniently against a series of performance criteria such as fitness, robustness and computational overhead. An automated system modeling, analysis and synthesis software environment is also introduced in this dissertation to facilitate rapid implementation of the proposed performance management framework in various testing scenarios.
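
    The receding-horizon logic described above can be sketched compactly: roll a numerical model forward for each candidate input sequence, score the predictions, and apply the first input of the best sequence. The toy scalar plant, cost weights and quantized input set below are assumptions, not the dissertation's shipboard model.

```python
# Minimal receding-horizon MPC sketch: predict with a model, score
# candidate input sequences, apply the first input of the best one.
# Plant, reference and cost are toy assumptions.
import itertools
import numpy as np

a, b = 0.9, 0.5                      # scalar plant x+ = a*x + b*u (assumed)
ref, horizon = 1.0, 3
candidates = [-1.0, 0.0, 1.0]        # quantized control inputs

def predicted_cost(x, seq):
    # Roll the model forward, accumulating tracking + control-effort cost.
    cost = 0.0
    for u in seq:
        x = a * x + b * u
        cost += (x - ref) ** 2 + 0.01 * u ** 2
    return cost

x = 0.0
for step in range(10):
    best = min(itertools.product(candidates, repeat=horizon),
               key=lambda seq: predicted_cost(x, seq))
    u = best[0]                      # receding horizon: apply first input only
    x = a * x + b * u                # "actual" system response (here: the model)
    print(f"step {step}: u={u:+.1f} x={x:.3f}")
```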

    On systematic approaches for interpreted information transfer of inspection data from bridge models to structural analysis

    In conjunction with improved methods for monitoring damage and degradation processes, interest in the reliability assessment of reinforced concrete bridges has been increasing in recent years. Automated image-based inspections of the structural surface provide valuable data from which to extract quantitative information about deterioration, such as crack patterns. However, the knowledge gain results from processing this information in a structural context, i.e. relating the damage artifacts to building components; this enables the transfer to structural analysis. This approach sets two further requirements: availability of structural bridge information and standardized storage for interoperability with subsequent analysis tools. Since the large datasets involved can only be processed efficiently in an automated manner, this work targets the implementation of the complete workflow from damage and building data to structural analysis. First, domain concepts are derived from the back-end tasks: structural analysis, damage modeling, and life-cycle assessment. The common interoperability format, the Industry Foundation Classes (IFC), and the processes in these domains are then assessed. The need for user-controlled interpretation steps is identified, and the developed prototype therefore allows interaction at subsequent model stages. The latter has the advantage that interpretation steps can be separated into either a structural analysis model, a damage information model, or a combination of both. This approach to damage information processing from the perspective of structural analysis is then validated in different case studies.
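
    One elementary interpretation step, relating detected damage artifacts to building components, can be sketched as a spatial containment query. In the actual workflow the component geometry would come from the IFC model; the bounding boxes and crack records below are assumptions.

```python
# Sketch: assigning detected crack artifacts to bridge components by
# spatial containment. Component extents would come from the IFC model;
# the boxes and crack records here are hypothetical.
from dataclasses import dataclass

@dataclass
class Box:
    xmin: float
    xmax: float
    ymin: float
    ymax: float
    def contains(self, x: float, y: float) -> bool:
        return self.xmin <= x <= self.xmax and self.ymin <= y <= self.ymax

components = {"girder_1": Box(0, 10, 0, 1), "pier_2": Box(4, 6, -5, 0)}
cracks = [(2.5, 0.4, 0.3), (5.1, -2.0, 0.6)]   # (x, y, width_mm) from imaging

for x, y, w in cracks:
    owner = next((n for n, b in components.items() if b.contains(x, y)), None)
    print(f"crack at ({x}, {y}), width {w} mm -> component: {owner}")
```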

    D5.2: Digital-Twin Enabled multi-physics simulation and model matching

    This deliverable reports on the actions developed and the results obtained concerning Digital-Twin-enabled multi-physics simulations and model matching. Enabling meaningful simulations within new human-infrastructure interfaces such as digital twins is paramount. Accessing the power of simulation opens manifold new ways for observation, understanding, analysis and prediction of the numerous scenarios an asset may face; as a result, managers gain countless ways of acquiring synthetic data for eventually making better, more informed decisions. The MatchFEM tool is conceived as a fundamental part of this endeavour. From a broad perspective, the tool is aimed at contextualizing information between multi-physics simulations and wider information constructs such as digital twins, in which 3D geometries, measurements, simulations, and asset management coexist. This report provides guidance for generating comprehensive, adequate initial conditions of the assets to be used during their life span on a DT basis. From a more specific focus, this deliverable presents a set of exemplary recommendations, in the form of a white paper, for the development of DT-enabled load tests of assets. The deliverable also belongs to a wider suite of documents in WP5 of the Ashvin project in which measurements, models and assessments are described thoroughly.
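
    Model matching can be read as parameter calibration: adjust simulation parameters until the predicted responses reproduce the measurements. The sketch below does this with a least-squares fit on a toy one-parameter model; it stands in for, and is not, the MatchFEM multi-physics solver.

```python
# Sketch of model matching as parameter calibration: tune a stiffness
# parameter so simulated deflections fit load-test measurements.
# The one-parameter "model" and the data are toy assumptions.
import numpy as np
from scipy.optimize import least_squares

loads = np.array([10.0, 20.0, 30.0])           # applied test loads (kN)
measured = np.array([0.52, 1.01, 1.49])        # measured deflections (mm)

def simulate(stiffness, p):
    return p / stiffness                        # stand-in for the FE solve

res = least_squares(lambda k: simulate(k[0], loads) - measured, x0=[15.0])
print(f"matched stiffness: {res.x[0]:.2f} kN/mm")
```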

    Monitoring, Modeling, and Hybrid Simulation: An Integrated Bayesian-based Approach to High-fidelity Fragility Analysis

    Fragility functions are one of the key technical ingredients in seismic risk assessment. The derivation of fragility functions has been extensively studied in the past; however, large uncertainties still exist, mainly due to limited collaboration between the interdependent components involved in the course of fragility estimation. This research aims to develop a systematic Bayesian-based framework to estimate high-fidelity fragility functions by integrating monitoring, modeling, and hybrid simulation, with the final goal of improving the accuracy of seismic risk assessment to support both pre- and post-disaster decision-making. In particular, this research addresses the following five aspects of the problem: (1) monitoring with wireless smart sensor networks to facilitate efficient and accurate pre- and post-disaster data collection; (2) new modeling techniques, including innovative system identification strategies and model updating, to enable accurate structural modeling; (3) hybrid simulation as an advanced numerical-experimental simulation tool to generate highly realistic and accurate response data for structures subject to earthquakes; (4) Bayesian updating as a systematic way of incorporating hybrid simulation data to generate composite fragility functions with higher fidelity; and (5) the implementation of an integrated fragility analysis approach as part of a seismic risk assessment framework. This research not only delivers an extensible and scalable framework for high-fidelity fragility analysis and reliable seismic risk assessment, but also provides advances in wireless smart sensor networks, system identification, and pseudo-dynamic testing in civil engineering applications. Financial support for this research was provided in part by the National Science Foundation under NSF Grants No. CMS-060043, CMMI-0724172, CMMI-0928886, and CNS-1035573.
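
    The Bayesian-updating ingredient can be illustrated with a standard lognormal fragility model: given binary damage outcomes (e.g. from hybrid simulation runs), a grid posterior is computed over the median capacity. The fixed dispersion, flat prior and synthetic outcomes below are assumptions, not the dissertation's data.

```python
# Sketch of Bayesian updating of a lognormal fragility function,
# P(failure | IM) = Phi(ln(IM/theta)/beta), from binary outcomes.
# Dispersion beta, the prior grid and the data are assumptions.
import numpy as np
from scipy.stats import norm

beta = 0.4                                     # log-std, assumed known
ims = np.array([0.2, 0.3, 0.5, 0.8, 1.0])      # intensity measures tested
failed = np.array([0, 0, 1, 1, 1])             # observed binary outcomes

thetas = np.linspace(0.1, 2.0, 400)            # candidate median capacities
prior = np.ones_like(thetas)                   # flat prior over the grid

def pfail(theta):
    return norm.cdf(np.log(ims / theta) / beta)

# Bernoulli likelihood of the observed outcomes for each candidate theta.
like = np.array([np.prod(np.where(failed, pfail(t), 1 - pfail(t)))
                 for t in thetas])
post = prior * like
post /= np.trapz(post, thetas)                 # normalize the posterior
print(f"posterior mean of median capacity: {np.trapz(thetas * post, thetas):.3f}")
```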