
    A framework for evaluating the impact of communication on performance in large-scale distributed urban simulations

    A primary motivation for employing distributed simulation is to enable the execution of large-scale simulation workloads that cannot be handled by the resources of a single stand-alone computing node. To make execution possible, the workload is distributed among multiple computing nodes connected to one another via a communication network. The execution of a distributed simulation involves alternating phases of computation and communication to coordinate the co-operating nodes and ensure correctness of the resulting simulation outputs. Reliably estimating the execution performance of a distributed simulation can be difficult due to non-deterministic execution paths involved in alternating computation and communication operations. However, performance estimates are useful as a guide for the simulation time that can be expected when using a given set of computing resources. Performance estimates can support decisions to commit time and resources to running distributed simulations, especially where significant amounts of funds or computing resources are necessary. Various performance estimation approaches are employed in the distributed computing literature, including the influential Bulk Synchronous Parallel (BSP) and LogP models. Different approaches make various assumptions that render them more suitable for some applications than for others. Actual performance depends on characteristics inherent to each distributed simulation application. An important aspect of these individual characteristics is the dynamic relationship between the communication and computation phases of the distributed simulation application. This work develops a framework for estimating the performance of distributed simulation applications, focusing mainly on aspects relevant to the dynamic relationship between communication and computation during distributed simulation execution. The framework proposes a meta-simulation approach based on the Multi-Agent Simulation (MAS) paradigm. 
Using the approach proposed by the framework, meta-simulations can be developed to investigate the performance of specific distributed simulation applications. The proposed approach enables the comparison of various what-if scenarios. This ability is useful for comparing the effects of various parameters and strategies, such as the number of computing nodes, the communication strategy, and the workload-distribution strategy. The proposed meta-simulation approach can also aid a search for optimal parameters and strategies for specific distributed simulation applications. The framework is demonstrated by implementing a meta-simulation based on case studies from the Urban Simulation domain
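The BSP model cited in the abstract gives a concrete flavour of such performance estimates: each superstep is charged the slowest node's computation time, plus a communication term proportional to the data exchanged, plus a barrier-synchronisation latency. A minimal sketch of that cost formula; the parameter values (g, L, and the per-node workloads) are purely illustrative assumptions, not figures from the work above:

```python
# Sketch of the per-superstep cost in the Bulk Synchronous Parallel (BSP)
# model: T = max(w_i) + g*h + L, where w_i is node i's local computation,
# h the maximum words sent or received by any node, g the cost per word,
# and L the barrier synchronisation latency.

def bsp_superstep_cost(work_per_node, h, g, L):
    """Cost of one BSP superstep: the slowest local computation,
    plus the h-relation communication cost, plus the barrier latency."""
    return max(work_per_node) + g * h + L

# Four nodes with an imbalanced workload (in cycles), each exchanging up
# to h = 500 words, on a hypothetical machine with g = 4 and L = 1000.
work = [10_000, 12_000, 9_500, 11_000]
cost = bsp_superstep_cost(work, h=500, g=4, L=1000)
print(cost)  # 12000 + 4*500 + 1000 = 15000
```

The `max` term is what makes workload-distribution strategy matter: balancing `work` lowers the estimate even when total computation is unchanged, which is exactly the kind of what-if comparison the meta-simulation approach targets.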

    Aerospace Medicine and Biology: A continuing bibliography with indexes, supplement 171

    This bibliography lists 186 reports, articles, and other documents introduced into the NASA scientific and technical information system in August 1977

    On power system automation: a Digital Twin-centric framework for the next generation of energy management systems

    The ubiquitous digital transformation also influences power system operation. Emerging real-time applications in information (IT) and operational technology (OT) provide new opportunities to address the increasingly demanding power system operation imposed by the progressing energy transition. This IT/OT convergence is epitomised by the novel Digital Twin (DT) concept. By integrating sensor data into analytical models and aligning the model states with the observed system, a power system DT can be created. As a result, a validated high-fidelity model is derived, which can be applied within the next generation of energy management systems (EMS) to support power system operation. By providing a consistent and maintainable data model, the modular DT-centric EMS proposed in this work addresses several key requirements of modern EMS architectures. It increases situational awareness in the control room, enables the implementation of model maintenance routines, and facilitates automation approaches, while raising confidence in operational decisions deduced from the validated model. This gain in trust contributes to the digital transformation and enables a higher degree of power system automation. By considering operational planning and power system operation processes, a direct link to practice is ensured. The feasibility of the concept is examined by numerical case studies. The electrical power system is in the process of an extensive transformation. Driven by the energy transition towards renewable energy resources, many conventional power plants in Germany have already been decommissioned or will be decommissioned within the next decade. Among other things, these changes lead to increased utilisation of power transmission equipment and an increasing number of complex dynamic phenomena. 
Operating the system closer to its physical boundaries increases susceptibility to disturbances and reduces the time span available to react to critical contingencies and perturbations. In consequence, the task of operating the power system will become increasingly demanding. As some reactions to disturbances may be required within timeframes that exceed human capabilities, these developments are intrinsic drivers to enable a higher degree of automation in power system operation. This thesis proposes a framework to create a modular Digital Twin-centric energy management system. It enables the provision of validated and trustworthy models built from knowledge about the power system derived from physical laws and from process data. As the interaction of information and operational technologies is combined in the concept of the Digital Twin, it can serve as a framework for future energy management systems, including novel applications for power system monitoring and control that consider power system dynamics. To provide a validated high-fidelity dynamic power system model, high-resolution, time-synchronised phasor measurements are applied for validation and parameter estimation. This increases trust in the underlying power system model as well as confidence in operational decisions derived from advanced analytic applications such as online dynamic security assessment. By providing an appropriate, consistent, and maintainable data model, the framework addresses several key requirements of modern energy management system architectures, while enabling the implementation of advanced automation routines and control approaches. Future energy management systems can provide increased observability based on the proposed architecture, whereby the situational awareness of human operators in the control room can be improved. 
In further development stages, cognitive systems that are able to learn from the data provided, e.g., machine-learning-based analytical functions, can be applied. The framework thus enables a higher degree of power system automation, as well as the deployment of assistance and decision-support functions for power system operation. The framework represents a contribution to the digital transformation of power system operation and facilitates a successful energy transition. The feasibility of the concept is examined by case studies in the form of numerical simulations to provide a proof of concept. The electrical power system is undergoing an extensive transformation process. Driven by the progressing energy transition and the increasing use of renewable energy sources, many conventional power plants in Germany have already been decommissioned or will be decommissioned in the coming years. Among other things, these changes lead to higher equipment utilisation and reduced system inertia, and thus to a growing number of complex dynamic phenomena in the electrical power system. Operating the system closer to its physical limits furthermore increases its susceptibility to disturbances and shortens the time span available to react to critical events and faults. As a consequence, the task of operating the power grid becomes more demanding. Especially where reaction times beyond human capabilities are required, the aforementioned changes are intrinsic drivers towards a higher degree of automation in grid and system operation. Emerging real-time applications in information and operational technologies, together with a growing volume of high-resolution sensor data, enable new approaches to the design and operation of cyber-physical systems. 
A promising approach recently discussed in this context is the concept of the so-called Digital Twin. Since the interplay of information and operational technologies is united in the Digital Twin concept, it can serve as the foundation for a future control system architecture and for novel control technology applications. This thesis develops a framework that makes a Digital Twin usable within a novel modular control system architecture for the task of monitoring and controlling future power systems. Complementing the existing functions of modern grid control systems, the concept supports the representation of grid dynamics on the basis of a dynamic network model. To enable a faithful representation of the grid dynamics, time-synchronised phasor measurements are used for model validation and model parameter estimation. This increases the significance of security analyses as well as the trust in the models from which operational decisions are generated. By providing a validated, consistent, and maintainable data model based on physical laws and on process data acquired during operation, the proposed architecture addresses several key requirements of modern grid control systems. The framework thus enables a higher degree of automation of grid operation, as well as the deployment of decision-support functions up to trustworthy assistance systems based on cognitive systems. These functions can increase operational security and represent an important contribution to the digital transformation of grid operation and to the successful realisation of the energy transition. 
The proposed concept is examined on the basis of numerical simulations, with its fundamental feasibility demonstrated by means of case studies
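The model-alignment step described in this abstract — fitting model parameters to synchronised phasor measurements so that the Digital Twin tracks the observed system — can be sketched in miniature as a least-squares estimate of a single line parameter. All names and numeric values below are illustrative assumptions, not material from the thesis:

```python
# Sketch of parameter estimation from paired PMU samples: fit a line
# reactance X in the single-parameter model dV = X * I by least squares,
# which for one parameter reduces to X = sum(I*dV) / sum(I*I).

def estimate_reactance(currents, voltage_drops):
    """Least-squares fit of dV = X * I over paired measurement samples."""
    num = sum(i * dv for i, dv in zip(currents, voltage_drops))
    den = sum(i * i for i in currents)
    return num / den

# Simulated measurements for an assumed true reactance of 0.5 p.u.,
# with small additive measurement noise on the voltage drop.
currents = [1.0, 2.0, 3.0, 4.0]
noise = [0.01, -0.02, 0.015, -0.005]
drops = [0.5 * i + n for i, n in zip(currents, noise)]

x_hat = estimate_reactance(currents, drops)
print(round(x_hat, 3))  # → 0.5 (recovers the assumed true value)
```

A real Digital Twin would estimate many coupled parameters of a dynamic model rather than one static reactance, but the principle is the same: the validated model is the one whose parameters minimise the mismatch against the time-synchronised measurements.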

    Methodologies synthesis

    This deliverable deals with the modelling and analysis of interdependencies between critical infrastructures, focussing attention on two interdependent infrastructures studied in the context of CRUTIAL: the electric power infrastructure and the information infrastructures supporting management, control and maintenance functionality. The main objectives are: 1) investigate the main challenges to be addressed for the analysis and modelling of interdependencies, 2) review the modelling methodologies and tools that can be used to address these challenges and support the evaluation of the impact of interdependencies on the dependability and resilience of the service delivered to the users, and 3) present the preliminary directions investigated so far by the CRUTIAL consortium for describing and modelling interdependencies

    Performance of Computer Systems; Proceedings of the 4th International Symposium on Modelling and Performance Evaluation of Computer Systems, Vienna, Austria, February 6-8, 1979

    These proceedings are a collection of contributions to computer system performance, selected by the usual refereeing process from papers submitted to the symposium, as well as a few invited papers representing significant novel contributions made during the last year. They represent the thrust and vitality of the subject as well as its capacity to identify important basic problems and major application areas. The main methodological problems appear in the underlying queueing theoretic aspects, in the deterministic analysis of waiting time phenomena, in workload characterization and representation, in the algorithmic aspects of model processing, and in the analysis of measurement data. Major areas for applications are computer architectures, data bases, computer networks, and capacity planning. The international importance of the area of computer system performance was well reflected at the symposium by participants from 19 countries. The mixture of participants was also evident in the institutions which they represented: 35% from universities, 25% from governmental research organizations, but also 30% from industry and 10% from non-research government bodies. This proves that the area is reaching a stage of maturity where it can contribute directly to progress in practical problems