
    Frameworks for Component-based Simulation

    The need to reduce the development costs of simulation models has led to recent efforts to set simulation standards that foster model reuse and interoperability. Specifically, the High Level Architecture (HLA) is a new simulation standard supported by the US Defense Modeling and Simulation Office (DMSO). It has been adopted as the standard technical architecture for all US Department of Defense simulations. In the meantime, the commercial sector has made successful efforts in developing enabling technologies for distributed computing, namely the Common Object Request Broker Architecture (CORBA) by the Object Management Group (OMG). CORBA is a large and complex set of specifications and protocols that uses the object-oriented paradigm to achieve distributed object-oriented computing environments that allow object interoperability and reuse. When used as an infrastructure for simulation model reuse and interoperability, both HLA and CORBA exhibit merits and limitations. Since HLA and CORBA were developed independently, a need exists for a comparative evaluation of the two architectures as a basis for component-based simulation. In this paper, both HLA and CORBA are presented in the context of component-based simulation model development and interoperability, and the two architectures are compared against four criteria related to their conceptual foundation and design.

    Automated design of robust discriminant analysis classifier for foot pressure lesions using kinematic data

    In recent years, the use of motion tracking systems for the acquisition of functional biomechanical gait data has received increasing interest due to the richness and accuracy of the measured kinematic information. However, costs frequently restrict the number of subjects employed, and this makes the dimensionality of the collected data far higher than the number of available samples. This paper applies discriminant analysis algorithms to the classification of patients with different types of foot lesions, in order to establish an association between foot motion and lesion formation. With primary attention to small sample size situations, we compare different types of Bayesian classifiers and evaluate their performance with various dimensionality reduction techniques for feature extraction, as well as search methods for the selection of raw kinematic variables. Finally, we propose a novel integrated method which fine-tunes the classifier parameters and selects the most relevant kinematic variables simultaneously. Performance comparisons are carried out using robust resampling techniques such as the bootstrap 632+ estimator and k-fold cross-validation. Results from experiments with subjects suffering from pathological plantar hyperkeratosis show that the proposed method can lead to approximately 96% correct classification rates with fewer than 10% of the original features.
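
    The evaluation protocol described above (dimensionality reduction, a discriminant classifier, and k-fold cross-validation on a small sample) can be illustrated with a minimal sketch. The sketch below uses scikit-learn with synthetic data and PCA as a stand-in reduction step; it is not the authors' pipeline, and the data shapes, labels, and parameters are assumptions chosen only for illustration.

        # Minimal sketch (not the authors' pipeline): a linear discriminant
        # classifier after PCA, evaluated with stratified k-fold cross-validation.
        # The synthetic data below stands in for the kinematic measurements.
        import numpy as np
        from sklearn.pipeline import Pipeline
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import StratifiedKFold, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 200))    # 40 subjects, 200 kinematic variables (p >> n)
        y = rng.integers(0, 2, size=40)   # binary lesion labels (synthetic)

        # Reduce dimensionality before classification to cope with the small sample size.
        clf = Pipeline([
            ("pca", PCA(n_components=10)),
            ("lda", LinearDiscriminantAnalysis()),
        ])

        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        scores = cross_val_score(clf, X, y, cv=cv)
        print(f"mean cross-validated accuracy: {scores.mean():.2f}")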

    Parallel simulation techniques for telecommunication network modelling

    In this thesis, we consider the application of parallel simulation to the performance modelling of telecommunication networks. A largely automated approach was first explored, using a parallelizing compiler to speed up the simulation of simple models of circuit-switched networks. This yielded reasonable results for relatively little effort compared with other approaches. However, more complex simulation models of packet- and cell-based telecommunication networks, requiring the use of discrete event techniques, need an alternative approach. A critical review of parallel discrete event simulation indicated that a distributed model components approach using conservative or optimistic synchronization would be worth exploring. Experiments were therefore conducted using simulation models of queuing networks and Asynchronous Transfer Mode (ATM) networks to explore the potential speed-up possible with this approach. Specifically, it is shown that these techniques can be used successfully to speed up the execution of useful telecommunication network simulations. A detailed investigation demonstrated that conservative synchronization performs very well for applications with good lookahead properties and sufficient message traffic density and, given such properties, will significantly outperform optimistic synchronization. Optimistic synchronization, however, gives reasonable speed-up for models with a wider range of such properties and can be optimized for speed-up and memory usage at run time. Thus, it is confirmed as being more generally applicable, particularly as model development is somewhat easier than for conservative synchronization. This has to be balanced against the more difficult task of developing and debugging an optimistic synchronization kernel and the application models.
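
    The conservative synchronization mechanism referred to above depends on each logical process computing a "safe" timestamp bound from the clocks of its incoming channels, which are advanced by real or null messages that account for the sender's lookahead. A minimal sketch of that rule follows; the class and method names are illustrative assumptions rather than part of any particular simulation kernel.

        # Minimal sketch of conservative (Chandy-Misra-Bryant style) synchronization:
        # a logical process may only execute events whose timestamps do not exceed
        # the minimum clock over its incoming channels. Null messages (timestamp
        # only) advance those clocks using the sender's lookahead.
        import heapq

        class LogicalProcess:
            def __init__(self, name):
                self.name = name
                self.event_queue = []     # pending (timestamp, event) pairs
                self.channel_clocks = {}  # last timestamp seen on each incoming channel

            def receive(self, channel, timestamp, event=None):
                self.channel_clocks[channel] = timestamp
                if event is not None:
                    heapq.heappush(self.event_queue, (timestamp, event))

            def safe_time(self):
                # Lower bound on the timestamp of any future incoming event.
                return min(self.channel_clocks.values(), default=0.0)

            def process_safe_events(self):
                bound = self.safe_time()
                while self.event_queue and self.event_queue[0][0] <= bound:
                    timestamp, event = heapq.heappop(self.event_queue)
                    print(f"{self.name}: executing {event} at t={timestamp}")

        lp = LogicalProcess("switch-A")
        lp.receive("from-B", 4.0, "cell arrival")
        lp.receive("from-C", 2.5)          # null message: only advances the channel clock
        lp.process_safe_events()           # nothing executed yet: safe time is 2.5
        lp.receive("from-C", 6.0)          # later null message raises the bound
        lp.process_safe_events()           # now the t=4.0 event is safe to execute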

    On Consistency and Network Latency in Distributed Interactive Applications: A Survey—Part I

    This paper is the first part of a two-part survey of the research carried out on consistency and latency in distributed interactive applications (DIAs) in recent decades. Part I reviews the terminology associated with DIAs and offers definitions for consistency and latency. Related issues such as jitter and fidelity are also discussed. Furthermore, the various consistency maintenance mechanisms that researchers have used to improve consistency and reduce latency effects are considered. These mechanisms are grouped into one of three categories, namely time management, information management and system architectural management. This paper presents the techniques associated with the time management category; examples of such mechanisms include time warp, lock-step synchronisation and predictive time management. The remaining two categories are presented in Part II of the survey.
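
    Predictive time management, one of the mechanisms listed above, is commonly realised through dead reckoning: each host extrapolates remote entity state from the last received update, and the owning host transmits a correction only when its prediction error exceeds a threshold. The sketch below illustrates this idea; the state fields and threshold are assumptions for illustration, not taken from the surveyed systems.

        # Minimal illustration of predictive time management (dead reckoning).
        # Remote hosts extrapolate position from the last update; the owner
        # sends a new update only when the shared prediction drifts too far.
        from dataclasses import dataclass

        @dataclass
        class EntityState:
            t: float   # time of the last update (seconds)
            x: float   # position at time t
            v: float   # velocity at time t

        def predict(state: EntityState, now: float) -> float:
            """First-order extrapolation used by every host."""
            return state.x + state.v * (now - state.t)

        def owner_should_send_update(state: EntityState, true_x: float,
                                     now: float, threshold: float = 0.5) -> bool:
            """The owning host compares the shared prediction with the true position."""
            return abs(true_x - predict(state, now)) > threshold

        shared = EntityState(t=0.0, x=0.0, v=2.0)
        print(predict(shared, now=1.5))                               # remote view: 3.0
        print(owner_should_send_update(shared, true_x=3.8, now=1.5))  # True -> send correction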

    Reversible Computation: Extending Horizons of Computing

    This open access State-of-the-Art Survey presents the main recent scientific outcomes in the area of reversible computation, focusing on those that emerged during COST Action IC1405 "Reversible Computation - Extending Horizons of Computing", a European research network that operated from May 2015 to April 2019. Reversible computation is a new paradigm that extends the traditional forwards-only mode of computation with the ability to execute in reverse, so that computation can run backwards as easily and naturally as forwards. It aims to deliver novel computing devices and software, and to enhance existing systems by equipping them with reversibility. There are many potential applications of reversible computation, including languages and software tools for reliable and recovery-oriented distributed systems, as well as revolutionary reversible logic gates and circuits, but these can only be realized and have a lasting effect if firm conceptual and theoretical foundations are established first.
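
    The paradigm can be made concrete with a very small example: a computation built from information-preserving steps has an exact inverse, so it can be run backwards state by state. The sketch below is a toy illustration of that property, not an example drawn from the surveyed literature.

        # Toy illustration of reversible computation: each step preserves all
        # information about its inputs, so it has an exact inverse and the
        # whole computation can be undone step by step.
        def step_forward(x: int, y: int) -> tuple[int, int]:
            return x, y + x          # nothing about (x, y) is lost

        def step_backward(x: int, y: int) -> tuple[int, int]:
            return x, y - x          # exact inverse of step_forward

        state = (3, 10)
        forward = step_forward(*state)       # (3, 13)
        restored = step_backward(*forward)   # (3, 10): the original state is recovered
        assert restored == state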