
    A Stochastic Model of Plausibility in Live-Virtual-Constructive Environments

    Distributed live-virtual-constructive simulation promises a number of benefits for the test and evaluation community, including reduced costs, access to simulations of limited-availability assets, the ability to conduct large-scale multi-service test events, and recapitalization of existing simulation investments. However, geographically distributed systems are subject to fundamental state consistency limitations that make assessing the data quality of live-virtual-constructive experiments difficult. This research presents a data quality model based on the notion of plausible interaction outcomes. This model explicitly accounts for the lack of absolute state consistency in distributed real-time systems and offers system designers a means of estimating data quality and fitness for purpose. Experiments with World of Warcraft player trace data validate the plausibility model and exceedance probability estimates. Additional experiments with synthetic data illustrate the model's use in ensuring fitness for purpose of live-virtual-constructive simulations and estimating the quality of data obtained from live-virtual-constructive experiments.
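
    The abstract does not give the model's equations, but its central quantity, an exceedance probability for state discrepancy, can be illustrated with a minimal empirical sketch. Everything below (the lognormal error sample, the 2 m threshold, the function name) is a made-up stand-in for real trace data, not the paper's model:

    ```python
    import numpy as np

    def exceedance_probability(discrepancies, threshold):
        """Fraction of samples whose state discrepancy (e.g., positional
        error between a sender's true state and a receiver's estimate)
        exceeds a plausibility threshold."""
        d = np.asarray(discrepancies, dtype=float)
        return float(np.mean(d > threshold))

    # Hypothetical positional-error samples (metres); a real study would
    # use measured live-virtual-constructive or game-trace data instead.
    rng = np.random.default_rng(seed=1)
    errors = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)
    print(f"P(error > 2.0 m) = {exceedance_probability(errors, 2.0):.4f}")
    ```

    A designer could compare such an estimate against an application-specific plausibility threshold to judge whether an experiment's data are fit for purpose.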

    Application and support for high-performance simulation

    High-performance simulation that supports sophisticated simulation experimentation and optimization can require non-trivial amounts of computing power. Advanced distributed computing techniques and systems found in areas such as High Performance Computing (HPC), High Throughput Computing (HTC), grid computing, cloud computing, and e-Infrastructures are needed to effectively provide the computing power required for the high-performance simulation of large and complex models. Simulation has a long tradition of translating and adopting advances in distributed computing, as shown by contributions from the parallel and distributed simulation community. This special issue brings together a contemporary collection of work showcasing original research in the advancement of simulation theory and practice with distributed computing. The special issue is divided into two parts. This first part focuses on research in high-performance simulation that supports a range of applications, including the study of epidemics, social networks, urban mobility, and real-time embedded and cyber-physical systems. Compared to other simulation techniques, agent-based modeling and simulation is relatively new; however, it is increasingly being used to study large-scale problems. Agent-based simulations present challenges for high-performance simulation because they can be complex and computationally demanding, so it is not surprising that this special issue includes several articles on the high-performance simulation of such systems.

    High-performance simulation and simulation methodologies

    The realization of high-performance simulation necessitates sophisticated simulation experimentation and optimization; this often requires non-trivial amounts of computing power. Distributed computing techniques and systems found in areas such as High Performance Computing (HPC), High Throughput Computing (HTC), e-Infrastructures, and grid and cloud computing can provide the capacity required to execute large and complex simulations. This extends the long tradition of adopting advances in distributed computing in simulation, as evidenced by contributions from the parallel and distributed simulation community. There has arguably been a recent acceleration of innovation in distributed computing tools and techniques, and this special issue presents the opportunity to showcase recent research that assimilates these advances in simulation. It brings together a contemporary collection of work showcasing original research in the advancement of simulation theory and practice with distributed computing. The special issue has two parts. The first part (published in the preceding issue of the journal) included seven studies in high-performance simulation supporting applications such as the study of epidemics, social networks, urban mobility, and real-time embedded and cyber-physical systems. This second part focuses on original research in high-performance simulation that supports a range of methods, including DEVS, Petri nets, and DES. Of the four papers in this issue, the manuscript by Bergero et al. (2013), which was submitted, reviewed, and accepted for the special issue, was published in an earlier issue of SIMULATION at the author's request for early publication.

    Approximation Algorithm for Estimating Distances in Distributed Virtual Environments

    This article deals with the issue of guaranteeing properties in Distributed Virtual Environments (DVEs) without a server and without global knowledge of the system state, and therefore only by exchanging messages. This issue is particularly relevant for online games, which operate in a fully distributed framework and for which network resources such as bandwidth are the critical resources. In the context of games, players typically need to know the distance between their character and other characters, at least approximately. All players share the same position estimation algorithm but, in general, do not know the current positions of others. We provide a synchronized distributed algorithm Alc that guarantees, at any time, that the estimated distance d_est between any pair of characters A and B is a (1 + ε)-approximation of the current distance d_act. Our result is twofold: (1) we prove that if characters move randomly on a d-dimensional grid, or follow a random continuous movement in up to three dimensions, the number of messages of Alc is optimal up to a constant factor; (2) in a more practical setting, we also observe that the number of messages of Alc on actual game traces is much lower than that of the standard algorithm, which sends actual positions at a fixed frequency.
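
    The abstract does not give Alc's update rule, so the sketch below shows only the general bandwidth-saving idea behind this family of algorithms: a node rebroadcasts its position only after drifting more than a budget delta from the last value it sent, which bounds every peer's estimation error by the sum of the two drift budgets. All names here are illustrative; the paper's algorithm additionally adapts the threshold to achieve the relative (1 + ε) guarantee rather than this absolute bound:

    ```python
    import math

    class Node:
        """A character in a serverless DVE that rebroadcasts its position
        only when it drifts more than `delta` from the last value sent."""

        def __init__(self, name, pos, delta):
            self.name = name
            self.pos = pos            # true position (x, y)
            self.last_sent = pos      # what peers currently believe
            self.delta = delta        # drift budget before rebroadcasting
            self.views = {}           # peer name -> last position received
            self.messages = 0

        def move(self, new_pos, peers):
            self.pos = new_pos
            if math.dist(self.pos, self.last_sent) > self.delta:
                self.last_sent = self.pos
                self.messages += 1
                for p in peers:       # broadcast one position update
                    p.views[self.name] = self.pos

        def estimated_distance(self, other):
            # Uses the last broadcast value (shared knowledge) so every
            # node computes the same estimate; the error is at most
            # self.delta + other.delta from the true distance.
            return math.dist(self.last_sent, self.views[other.name])
    ```

    With the drift budgets chosen proportionally to the current estimated distance, the same structure yields a relative rather than absolute error bound, which is the regime the paper analyzes.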

    Dense agent-based HPC simulation of cell physics and signaling with real-time user interactions

    Introduction: Distributed simulations of complex systems have to date focused on scalability and correctness rather than interactive visualization. Interactive visual simulations have particular advantages for exploring the emergent behaviors of complex systems. Interpreting simulations of complex systems such as cancer cell tumors is a challenge and can be greatly assisted by built-in real-time user interaction and subsequent visualization.

    Methods: We explore this approach using a multi-scale model that couples a cell physics model with a cell signaling model. This paper presents a novel communication protocol for real-time user interaction and visualization with a large-scale distributed simulation, with minimal impact on performance. Specifically, we explore how optimistic synchronization can be used to enable real-time user interaction and visualization in a densely packed parallel agent-based simulation while maintaining scalability and determinism. We also describe the software framework created and the distribution strategy for the models used. The key features of the High-Performance Computing (HPC) simulation evaluated are scalability, deterministic verification, speed of real-time user interactions, and deadlock avoidance.

    Results: We use two commodity HPC systems, ARCHER (118,080 CPU cores) and ARCHER2 (750,080 CPU cores), where we simulate up to 256 million agents (one million cells) using up to 21,953 computational cores, and record a response-time overhead of ≃350 ms from issued user events.

    Discussion: The approach is viable and can underpin transformative technologies offering immersive simulations, such as Digital Twins. The framework explained in this paper is not limited to the models used and can be adapted to systems biology models that use similar standards (physics models using agent-based interactions, and signaling pathways using SBML) and to other interactive distributed simulations.
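
    The paper's protocol is not reproduced in the abstract; the toy below only illustrates the optimistic-synchronization idea (in the Time Warp style) that it builds on: simulate ahead, snapshot state each step, and roll back and replay when a user event arrives with a timestamp in the simulated past. Class and field names are invented for illustration:

    ```python
    import copy

    class OptimisticWorker:
        """Toy Time Warp-style worker: advances optimistically, keeps
        per-step snapshots, and rolls back on straggler user events."""

        def __init__(self, state, advance):
            self.state = state
            self.advance = advance            # one step of the actual model
            self.now = 0
            self.snapshots = {0: copy.deepcopy(state)}

        def step(self):
            self.advance(self.state)
            self.now += 1
            self.snapshots[self.now] = copy.deepcopy(self.state)

        def user_event(self, t, apply_event):
            if t < self.now:                  # straggler: roll back, replay
                target = self.now
                self.state = copy.deepcopy(self.snapshots[t])
                self.now = t
                apply_event(self.state)
                while self.now < target:
                    self.step()
            else:
                apply_event(self.state)

    # Example with a trivial "model" that counts ticks.
    w = OptimisticWorker({"ticks": 0}, lambda s: s.update(ticks=s["ticks"] + 1))
    for _ in range(5):
        w.step()
    w.user_event(2, lambda s: s.update(poked=True))   # arrives "late"
    print(w.state)                                    # {'ticks': 5, 'poked': True}
    ```

    A production system would bound snapshot memory with fossil collection and coordinate rollbacks across ranks; the paper's contribution is doing this at scale with deterministic results.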

    Characterizing the Effects of Local Latency on Aim Performance in First Person Shooters

    Real-time games such as first-person shooters (FPS) are sensitive to even small amounts of lag. The effects of network latency have been studied, but less is known about local latency -- that is, the lag caused by local sources such as input devices, displays, and the application. While local latency is important to gamers, we do not know how it affects aiming performance and whether we can reduce its negative effects. To explore these issues, we measured local latency in a variety of real-world gaming systems and carried out a controlled study focusing on targeting and tracking activities in an FPS game with varying degrees of local latency. In addition, we tested the ability of a lag compensation technique (based on aim assistance) to mitigate the negative effects. To motivate the need for these studies, we also examined how aim in FPS differs from pointing in standard 2D tasks, showing significant differences in performance metrics. Our studies found real-world local latencies ranging from 23 to 243 ms, which cause significant and substantial degradation in performance (even for latencies as low as 41 ms). The studies also showed that our compensation technique worked well, reducing the problems caused by lag in the case of targeting and removing the problem altogether in the case of tracking. Our work shows that local latency is a real and substantial problem -- but game developers can mitigate it with appropriate compensation methods.
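
    The authors' compensation technique is only summarized above; as a rough illustration of latency-offsetting aim assistance, a generic "sticky target" scheme damps input gain near the target so that delayed visual feedback produces less overshoot. The function and its parameters are hypothetical, not the paper's implementation:

    ```python
    def assisted_delta(raw_dx, raw_dy, crosshair, target, radius, gain=0.5):
        """Scale down aim deltas while the crosshair is within `radius`
        of the target, reducing overshoot caused by delayed feedback."""
        dx, dy = target[0] - crosshair[0], target[1] - crosshair[1]
        near = dx * dx + dy * dy <= radius * radius
        scale = gain if near else 1.0
        return raw_dx * scale, raw_dy * scale

    # Outside the 40 px bubble nothing changes; inside it, the same flick
    # moves the crosshair half as far, masking some of the local lag.
    print(assisted_delta(12, -3, crosshair=(400, 300), target=(410, 305), radius=40))
    ```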

    Medical Device Interoperability With Provable Safety Properties

    Applications that can communicate with and control multiple medical devices have the potential to radically improve patient safety and the effectiveness of medical treatment. Medical device interoperability requires devices to have an open, standards-based interface that allows communication with any other device that implements the same interface. This will enable applications and functionality that can improve patient safety and outcomes. To build interoperable systems, we need to match the capabilities of the medical devices with the needs of the application. An application that requires heart rate as an input and provides a control signal to an infusion pump requires a source of heart rate and a pump that will accept the control signal. We present a means for devices to describe their capabilities and a methodology for automatically checking an application's device requirements against those capabilities. If such applications are going to be used for patient care, there needs to be convincing proof of their safety. The safety of a medical device is closely tied to its intended use and use environment. Medical device manufacturers create a hazard analysis of their device, where they explore the hazards associated with its intended use. We describe hazard analysis for interoperable devices and how to create system safety properties from these hazard analyses. The use environment of the application includes the application, connected devices, patient, and clinical workflow. The patient model is specific to each application and represents the patient's response to treatment. We introduce the Clinical Application Modeling Language (CAML), based on Extended Finite State Machines, and use model checking to test safety properties from the hazard analysis against the parallel composition of the application, patient model, clinical workflow, and the device models of connected devices.
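
    The capability-matching step lends itself to a small sketch. The descriptor format below (plain tag sets for the data a device provides and the control signals it accepts) is invented for illustration; the paper's interface descriptions are necessarily richer:

    ```python
    # Hypothetical device capability descriptors.
    DEVICES = {
        "hr-monitor-1": {"provides": {"heart_rate"}, "accepts": set()},
        "infusion-pump-7": {"provides": {"infusion_status"},
                            "accepts": {"flow_rate_command"}},
    }

    # What a closed-loop infusion app needs from its environment.
    APP = {"inputs": {"heart_rate"}, "outputs": {"flow_rate_command"}}

    def check(devices, app):
        """Return the app requirements not met by the connected devices."""
        provided = set().union(*(d["provides"] for d in devices.values()))
        accepted = set().union(*(d["accepts"] for d in devices.values()))
        return app["inputs"] - provided, app["outputs"] - accepted

    missing_in, missing_out = check(DEVICES, APP)
    print("OK" if not (missing_in or missing_out) else (missing_in, missing_out))
    ```

    An automated check of this shape is what lets an interoperability platform refuse to launch an application whose device requirements the connected set cannot satisfy.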

    Parallel implementation of a virtual reality system on a transputer architecture

    A virtual reality is a computer model of an environment, actual or imagined, presented to a user in as realistic a fashion as possible. Stereo goggles may be used to give the user a view of the modelled environment from within the environment, while a data-glove is used to interact with the environment. To simulate reality on a computer, the machine has to produce realistic images rapidly, a requirement that usually necessitates expensive equipment. This thesis presents an implementation of a virtual reality system on a transputer architecture. The system is general and is intended to support the development of various virtual environments. Its three main components are the output device drivers, the input device drivers, and the virtual world kernel, the last of which is responsible for simulating the virtual world. The rendering system is described in detail. Various methods for implementing the components of the graphics pipeline are discussed and then generalised to make use of the transputer's facilities for parallel processing. A number of different decomposition techniques are implemented and compared, with emphasis on the speed at which the world can be rendered and the interaction latency involved. In the best case, where almost linear speedup is obtained, a world containing over 250 polygons is rendered at 32 frames/second; the bandwidth of the transputer links is the major factor limiting speedup. A description is given of an input device driver that makes use of a Power Glove, along with techniques for overcoming the limitations of this device and for interacting with the virtual world. The virtual world kernel is designed to make extensive use of the parallel processing facilities provided by transputers, and it can support multiple worlds concurrently and multiple users interacting with these worlds. Two applications that were successfully implemented using this system are described. The design of the system is compared with other recently developed virtual reality systems, discussing features that are common or advantageous in each. The system described in this thesis compares favourably, particularly in its use of parallel processors.
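
    The thesis's decomposition schemes are specific to transputer links, but the image-space variant (splitting the frame into scanline bands, one per processor) translates directly into a short modern sketch. The shading function here is a placeholder, not the thesis's rasterizer, and all constants are illustrative:

    ```python
    from concurrent.futures import ProcessPoolExecutor

    WIDTH, HEIGHT, WORKERS = 320, 240, 4

    def shade(x, y):
        # Stand-in for rasterizing the world's polygons at pixel (x, y).
        return (x ^ y) & 0xFF

    def render_band(bounds):
        """Render one horizontal band of scanlines, the unit of work one
        processor would own under image-space decomposition."""
        y0, y1 = bounds
        return y0, [[shade(x, y) for x in range(WIDTH)] for y in range(y0, y1)]

    def render_frame():
        step = HEIGHT // WORKERS
        bands = [(i * step, (i + 1) * step) for i in range(WORKERS)]
        frame = [None] * HEIGHT
        with ProcessPoolExecutor(max_workers=WORKERS) as pool:
            for y0, rows in pool.map(render_band, bands):
                frame[y0:y0 + len(rows)] = rows
        return frame

    if __name__ == "__main__":
        print(len(render_frame()), "scanlines rendered")
    ```

    On a transputer, the gather step at the end would run over the serial links, which is consistent with the thesis's observation that link bandwidth, not compute, limits speedup.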

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, such as hybrid broadband broadcast (HBB) delivery and the modeling of users' perception (i.e., Quality of Experience, or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects of, and reviews the most relevant contributions within, the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also to approach the challenges behind ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute those experiences.

    Sensing and Signal Processing in Smart Healthcare

    In the last decade, we have witnessed the rapid development of electronic technologies that are transforming our daily lives. Such technologies are often integrated with various sensors that facilitate the collection of human motion and physiological data, and are equipped with wireless communication modules such as Bluetooth, radio frequency identification, and near-field communication. In smart healthcare applications, designing ergonomic and intuitive human–computer interfaces is crucial, because a system that is not easy to use creates a huge obstacle to adoption and may significantly reduce the efficacy of the solution. Signal and data processing is another important consideration in smart healthcare applications, because it must ensure high accuracy with a high level of confidence in order for the applications to be useful to clinicians in making diagnosis and treatment decisions. This Special Issue is a collection of 10 articles selected from a total of 26 contributions. These contributions span the areas of signal processing and smart healthcare systems and come mostly from authors in Europe, including Italy, Spain, France, Portugal, Romania, Sweden, and the Netherlands. Authors from China, Korea, Taiwan, Indonesia, and Ecuador are also included.