
    Децентралізована модель керування GRID-системою

    This work considers an approach to organizing a decentralized management model for a GRID system, based on a resource-access method built on P2P (Peer-to-Peer) technology, which represents a promising platform for executing large-scale, resource-intensive tasks.

    Commercial-off-the-shelf simulation package interoperability: Issues and futures

    Commercial-Off-The-Shelf Simulation Packages (CSPs) are widely used in industry to simulate discrete-event models. Interoperability of CSPs requires the use of distributed simulation techniques. The literature presents many examples of achieving CSP interoperability using bespoke solutions. However, for the wider adoption of CSP-based distributed simulation it is essential that, first and foremost, a standard for CSP interoperability be created, and secondly, that this standard be adhered to by the CSP vendors. This advanced tutorial covers an emerging standard for CSP interoperability. It gives an overview of the standard and presents case studies that implement some of its proposals. Furthermore, interoperability is discussed in relation to large and complex models developed using CSPs that require large amounts of computing resources. It is hoped that this tutorial will inform the simulation community of the issues associated with CSP interoperability, the importance of these standards, and their future.

    Learning Education: An ‘Educational Big Data’ approach for monitoring, steering and assessment of the process of continuous improvement of education

    Changing regulations, pedagogy, and didactics worldwide have caused the educational system to change significantly. The arrival of Web 2.0 and other technologies has also had a significant impact on the way we educate and assess education. Web 2.0 applications increase cooperation between stakeholders in education and have led to the phenomenon of ‘Learning Education’: educational stakeholders (i.e. teachers, students, policy-makers, partners, etc.) learning from each other in order to ultimately improve education. The developments within the interactive Internet (Web 2.0) have enabled innovative and sophisticated strategies for monitoring, steering, and assessing this ‘learning of education’. These developments give teachers possibilities to enhance their education with digital applications, but also to monitor, steer, and assess their own behavior. This can draw on multiple sources, for example questionnaires, interviews, and panel research, but also more innovative sources like big social data and network interactions. In this article we use the term ‘educational big data’ for these sources and use them for monitoring, steering, and assessing developments within education, according to the Plan, Do, Check, Act (PDCA) principle. We specifically analyze the Check phase and describe it with the Learning Education Check Framework (LECF). We operationalize the LECF with a Learning Education Check System (LECS), which is capable of guiding itself and changing direction in response to changing trends in education and educational practice. The system supports data-driven decision making within the learning-education process. In this article we therefore elaborate the LECF and propose and describe a paper-based concept of the LECS, driven by educational big data. Besides that, we show the possibilities, reliability, and validity of measuring ‘Educational Big Data’ within an educational setting.

    Using Simulation Systems for Decision Support

    This chapter describes the use of simulation systems for decision support of real operations, which is the most challenging application domain in the discipline of modeling and simulation. To this end, the systems must be integrated as services into the operational infrastructure. To support discovery, selection, and composition of services, they need to be annotated in technical, syntactic, semantic, pragmatic, dynamic, and conceptual categories. The systems themselves must be complete and validated. The data must be obtainable, preferably via common protocols shared with the operational infrastructure. Agents and automated forces must produce situation-adequate behavior. If these requirements for simulation systems and their annotations are fulfilled, decision-support simulation can contribute significantly to the situational awareness of the decision maker, up to the cognitive level.
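The six annotation categories the abstract lists can be pictured as a simple metadata record attached to each simulation service. The record type, field names, and example values below are assumptions for illustration only, not a published annotation schema:

```python
from dataclasses import dataclass, field

# Hypothetical annotation record for a simulation service, covering the six
# categories named in the abstract (technical, syntactic, semantic,
# pragmatic, dynamic, conceptual). All field names and values are invented
# for illustration.

@dataclass
class ServiceAnnotation:
    technical: dict = field(default_factory=dict)   # e.g. protocol, endpoint
    syntactic: dict = field(default_factory=dict)   # e.g. message formats
    semantic: dict = field(default_factory=dict)    # e.g. meaning of fields
    pragmatic: dict = field(default_factory=dict)   # e.g. intended use/context
    dynamic: dict = field(default_factory=dict)     # e.g. update behavior
    conceptual: dict = field(default_factory=dict)  # e.g. underlying model

weather_service = ServiceAnnotation(
    technical={"protocol": "HTTPS", "endpoint": "/forecast"},
    syntactic={"format": "XML"},
    semantic={"visibility": "meters at ground level"},
    pragmatic={"use": "mission planning, not real-time control"},
    dynamic={"update_rate": "every 6 hours"},
    conceptual={"model": "grid-based numerical weather prediction"},
)
print(weather_service.syntactic["format"])  # XML
```

A discovery service could match a request against such records category by category, refusing a composition when, say, the pragmatic entries conflict even though the formats line up.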

    Composable M&S web services for net-centric applications

    Service-oriented architectures promise easier integration of functionality, in the form of web services, into operational systems than interface-driven, system-oriented approaches. Although the Extensible Markup Language (XML) enables a new level of interoperability among heterogeneous systems, XML alone does not solve all the interoperability problems users contend with when integrating services into operational systems. To manage the basic challenges of service interoperation, we developed the Levels of Conceptual Interoperability Model (LCIM) to enable a layered approach and gradual solution improvements. Furthermore, as a first step we developed methods of model-based data engineering (MBDE) for semantically consistent service integration. These methods have been applied in the U.S. in collaboration with industry, resulting in proofs of concept. The results are directly applicable in net-centric and net-enabled environments.
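The layered structure of the LCIM can be sketched as an ordered enumeration; the level names follow the published model, while the helper function is an illustrative assumption that captures the abstract's point about XML:

```python
from enum import IntEnum

class LCIMLevel(IntEnum):
    """Levels of Conceptual Interoperability Model (LCIM), lowest to highest."""
    NONE = 0        # stand-alone systems, no interoperability
    TECHNICAL = 1   # a communication link exists (bits and bytes flow)
    SYNTACTIC = 2   # a common data format, e.g. XML, is agreed upon
    SEMANTIC = 3    # the meaning of exchanged data is shared
    PRAGMATIC = 4   # the use and context of the data are understood
    DYNAMIC = 5     # state changes over time are accounted for
    CONCEPTUAL = 6  # the underlying conceptual models are aligned

def xml_alone_suffices_for(level: LCIMLevel) -> bool:
    # XML addresses syntactic interoperability but not the layers above it,
    # which is why XML alone does not solve all integration problems.
    return level <= LCIMLevel.SYNTACTIC

print(xml_alone_suffices_for(LCIMLevel.SEMANTIC))  # False
```

The ordering matters: each level presupposes the ones below it, which is what makes gradual, layer-by-layer solution improvements possible.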

    Coalition Battle Management Language (C-BML) Study Group Final Report

    Interoperability across Modeling and Simulation (M&S) and Command and Control (C2) systems continues to be a significant problem for today's warfighters. M&S is well established in military training, but it could also be a valuable asset for planning and mission rehearsal if M&S and C2 systems were able to exchange information, plans, and orders more effectively. To better support the warfighter with M&S-based capabilities, an open, standards-based framework is needed that establishes operational and technical coherence between C2 and M&S systems.

    Master/worker parallel discrete event simulation

    The execution of parallel discrete event simulation across metacomputing infrastructures is examined. A master/worker architecture for parallel discrete event simulation is proposed, providing robust execution under a dynamic set of services with system-level support for fault tolerance, semi-automated client-directed load balancing, portability across heterogeneous machines, and the ability to run codes on idle or time-sharing clients without significant interaction by users. Research questions and challenges associated with the work distribution paradigm, the targeted computational domain, performance metrics, and the intended class of applications are analyzed and discussed. A portable web-services approach to master/worker parallel discrete event simulation is proposed and evaluated, with subsequent optimizations to increase the efficiency of large-scale simulation execution through distributed master-service design and intrinsic overhead reduction. New techniques are proposed and examined for addressing the challenges of optimistic parallel discrete event simulation across metacomputing infrastructures, such as rollbacks and message unsending, using an inherently different computation paradigm based on master services and time windows. Results indicate that a master/worker approach utilizing loosely coupled resources is a viable means for high-throughput parallel discrete event simulation by enhancing existing computational capacity or providing alternate execution capability for less time-critical codes.
    Ph.D. Committee Chair: Fujimoto, Richard; Committee Member: Bader, David; Committee Member: Perumalla, Kalyan; Committee Member: Riley, George; Committee Member: Vuduc, Richard
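The master/worker pattern described above can be sketched in a few lines: a master partitions the simulation's time horizon into windows, workers pull windows from a shared queue, and unfinished windows can simply be re-queued if a worker disappears, which is what makes the scheme tolerant of client churn. This is a deliberately minimal illustration, not the dissertation's actual web-services system, and the window contents are a stand-in for real event processing:

```python
import queue
import threading

def worker(tasks: queue.Queue, results: queue.Queue) -> None:
    """Pull time windows from the task queue until a poison pill arrives."""
    while True:
        window = tasks.get()
        if window is None:          # poison pill: shut this worker down
            tasks.task_done()
            return
        start, end = window
        # Stand-in for executing the discrete events falling in [start, end).
        events_processed = end - start
        results.put((window, events_processed))
        tasks.task_done()

def run_master(num_workers: int, horizon: int, window_size: int):
    """Partition [0, horizon) into windows and farm them out to workers."""
    tasks: queue.Queue = queue.Queue()
    results: queue.Queue = queue.Queue()
    threads = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(num_workers)]
    for t in threads:
        t.start()
    for start in range(0, horizon, window_size):
        tasks.put((start, min(start + window_size, horizon)))
    tasks.join()                    # block until every window is processed
    for _ in threads:
        tasks.put(None)             # one poison pill per worker
    for t in threads:
        t.join()
    return sorted(results.queue)

print(run_master(num_workers=4, horizon=100, window_size=25))
```

In an optimistic setting the windows additionally bound how far a rollback can propagate, since the master only commits results up to a window boundary; that bookkeeping is omitted here.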

    Resource-constraint And Scalable Data Distribution Management For High Level Architecture

    In this dissertation, we present an efficient algorithm, called the P-Pruning algorithm, for the data distribution management problem in the High Level Architecture. The High Level Architecture (HLA) presents a framework for modeling and simulation within the Department of Defense (DoD) and forms the basis of the IEEE 1516 standard. The goal of this architecture is to allow multiple simulations to interoperate and to facilitate the reuse of simulation components. Data Distribution Management (DDM) is one of the six components in HLA; it is responsible for limiting and controlling the data exchanged in a simulation and reducing the processing requirements of federates. DDM is also an important problem in the parallel and distributed computing domain, especially in large-scale distributed modeling and simulation applications where control over data exchange among the simulated entities is required. We present a performance-evaluation simulation study of the P-Pruning algorithm against three techniques: the region-matching, fixed-grid, and dynamic-grid DDM algorithms. The P-Pruning algorithm is faster than these three algorithms because it avoids the quadratic computation step they involve. The simulation results show that the P-Pruning DDM algorithm uses memory at run-time more efficiently and requires fewer multicast groups than the three algorithms. To increase the scalability of the P-Pruning algorithm, we develop a resource-efficient enhancement and present a performance-evaluation study of it in a memory-constrained environment. The Memory-Constraint P-Pruning algorithm deploys I/O-efficient data structures for optimized memory access at run-time. The simulation results show that the Memory-Constraint P-Pruning DDM algorithm is faster than the P-Pruning algorithm and utilizes memory at run-time more efficiently. It is suitable for high-performance distributed simulation applications, as it improves the scalability of the P-Pruning algorithm by several orders of magnitude in terms of the number of federates. We analyze the computational complexity of the P-Pruning algorithm using average-case analysis. We have also extended the P-Pruning algorithm to a three-dimensional routing space. In addition, we present the P-Pruning algorithm for dynamic conditions, where the distribution of federates changes at run-time. The dynamic P-Pruning algorithm tracks changes in federate regions and rebuilds all the affected multicast groups. We have also integrated the P-Pruning algorithm with FDK, an implementation of the HLA architecture. The integration involves the design and implementation of the communicator module for mapping federate interest regions. We provide a modular overview of the P-Pruning algorithm's components and describe the functional flow for creating multicast groups during simulation. We investigate the deficiencies in the DDM implementation under FDK and suggest an approach to overcome them using the P-Pruning algorithm. We have enhanced FDK from its existing HLA 1.3 specification by using the IEEE 1516 standard for the DDM implementation. We provide the system-setup instructions and communication routines for running the integrated system on a network of machines. We also describe the implementation details involved in integrating the P-Pruning algorithm with FDK and report our experiences.
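The core DDM task of turning federate interest regions into multicast groups can be illustrated with the fixed-grid baseline that the P-Pruning algorithm is evaluated against (the abstract does not give P-Pruning's internals, so this sketch shows only the comparison technique, with an assumed cell size and two-dimensional routing space):

```python
from collections import defaultdict

CELL = 10  # grid cell size along each routing-space dimension (assumed)

def cells_for_region(region):
    """Yield grid-cell coordinates overlapped by an axis-aligned region."""
    (x0, y0), (x1, y1) = region
    for cx in range(int(x0) // CELL, int(x1) // CELL + 1):
        for cy in range(int(y0) // CELL, int(y1) // CELL + 1):
            yield (cx, cy)

def build_multicast_groups(federate_regions):
    """Map each grid cell to the set of federates whose region covers it."""
    groups = defaultdict(set)
    for fed, region in federate_regions.items():
        for cell in cells_for_region(region):
            groups[cell].add(fed)
    # Only cells shared by two or more federates need a multicast group;
    # pruning singleton cells keeps the group count down.
    return {cell: feds for cell, feds in groups.items() if len(feds) > 1}

regions = {
    "F1": ((0, 0), (15, 15)),    # overlaps F2 in cell (1, 1)
    "F2": ((12, 12), (25, 25)),
    "F3": ((40, 40), (45, 45)),  # isolated: no group needed
}
print(build_multicast_groups(regions))  # {(1, 1): {'F1', 'F2'}}
```

The weakness the abstract alludes to is visible here: a brute-force region-matching approach instead compares every pair of regions, which is quadratic in the number of federates, and a fixed grid can over-approximate overlap when regions only touch a shared cell.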