
    Avoiding Pitfalls in Undergraduate Simulation Courses

    Simulation development has historically been a specialized skill performed by engineers with graduate-level training and industry experience. However, advances in computing technology, coupled with the rise of model-based systems engineering, have dramatically increased the use of simulations, such that most engineers now require a working knowledge of modeling and simulation (M&S). As such, an increasing number of undergraduate engineering programs now require students to complete a simulation course. These courses are intended to reinforce foundational engineering knowledge while also teaching students the M&S tools they will need in industry. Yet a number of pitfalls are associated with teaching M&S to undergraduate students. The first major pitfall is focusing on the tool or software without properly teaching the underlying methodologies; this can leave students fixated on the software and limit their broader knowledge of M&S. The second pitfall is the use of contrived, academic tutorials as course projects, which keeps students from fully understanding the simulation design process. The third and fourth pitfalls are covering verification and validation only superficially and failing to build on material taught in other courses. Finally, the fifth pitfall is over-reliance on group projects and tests rather than individual projects. These pitfalls were identified during academic years (AYs) 2017 and 2018 in different undergraduate simulation courses at the United States Military Academy. The combat modeling course adapted its structure and content in AY2019 to address these pitfalls, yielding several lessons learned that are applicable to the broader simulation education community. In general, students gained a broader understanding of M&S and submitted higher-quality work. Additionally, course-end feedback showed an overall increase in M&S knowledge, and many students chose to use M&S to support their honors theses and capstone projects, a trend not seen in past years.

    Developing Executable Digital Models with Model-Based Systems Engineering – An Unmanned Aerial Vehicle Surveillance Scenario Example

    The increasing complexity of modern systems causes inconsistencies in the iterative exchange loops of the system design process and, in turn, demands higher-quality system organization and optimization techniques. A transition from document-centric systems engineering to Model-Based Systems Engineering (MBSE) is being documented in the literature of various industries to address these issues. This study investigates how MBSE can be used as a starting point for developing digital twins (DT). Specifically, the adoption of MBSE for realizing DT has been investigated through literature reviews that identify the most prevalent methodologies and tools used to enhance and validate existing and future systems. An MBSE-enabled template for virtual model development was executed to create executable models, which can serve as a research testbed for DT and for system and system-of-systems optimization. This study explores the feasibility of the MBSE-enabled template by creating and simulating a surveillance system that monitors and reports on the health status and performance of an armored fighting vehicle via an Unmanned Aerial Vehicle (UAV). The objective of the template is to demonstrate how executable SysML diagrams establish a collaborative working environment across multiple platforms to better convey system behavior, modifications, and analytics to various system stakeholders.
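
    To make the idea of an executable behavioral model concrete, the sketch below simulates a UAV that polls an armored vehicle's health and reports its status. It is a minimal, hypothetical Python illustration of the scenario's behavior, not the paper's SysML-based template; the class names, degradation model, and alert threshold are invented for the example.

```python
# Minimal, hypothetical sketch of an executable surveillance behavior in the
# spirit of the scenario above; it is not the paper's SysML-based template.
import random

class ArmoredVehicle:
    """Monitored asset exposing a simple health value (0-100)."""
    def __init__(self):
        self.health = 100.0

    def degrade(self):
        # Invented wear model: lose a small random amount of health each step.
        self.health = max(0.0, self.health - random.uniform(0.0, 5.0))

class SurveillanceUAV:
    """UAV that polls the vehicle and reports its health status."""
    def __init__(self, vehicle, threshold=50.0):
        self.vehicle = vehicle
        self.threshold = threshold  # assumed alert threshold, not from the paper

    def report(self, step):
        status = "DEGRADED" if self.vehicle.health < self.threshold else "NOMINAL"
        print(f"t={step:02d}  health={self.vehicle.health:5.1f}  status={status}")

if __name__ == "__main__":
    vehicle = ArmoredVehicle()
    uav = SurveillanceUAV(vehicle)
    for step in range(10):  # ten simulated surveillance passes
        vehicle.degrade()
        uav.report(step)
```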

    How is M&S Interoperability Different From Other Interoperability Domains?

    During every standards workshop or event, examples of working interoperability solutions are used to motivate 'plug and play' standards for M&S as well, such as standardized batteries for electronics or the use of XML to exchange data between heterogeneous systems. While these are successful applications of standards, they are off the mark regarding M&S interoperability. The challenge of M&S is that the product to be made interoperable is not the service or the system alone, but the model behind it as well. The paper shows that the alignment of conceptualizations is the real problem, one that is not yet addressed by current interoperability standards.
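
    A small illustration of the distinction drawn above: two simulations can exchange syntactically valid XML yet still disagree on what the exchanged value means, because their underlying conceptualizations differ. The element names and the kilometres-versus-miles mismatch below are invented for the example.

```python
# Two simulations exchange well-formed XML (syntactic interoperability) yet
# still disagree on meaning; the element names and units are invented here.
import xml.etree.ElementTree as ET

# Simulation A reports a detection range and means kilometres.
message = "<entity id='tank-1'><detectionRange>4.0</detectionRange></entity>"

# Simulation B parses the XML without any error...
entity = ET.fromstring(message)
value = float(entity.find("detectionRange").text)

# ...but B's model assumes the value is in miles, so it silently converts.
range_as_b_understands_km = value * 1.609
print(f"Sender meant {value} km; receiver treats it as {range_as_b_understands_km:.2f} km")
```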

    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, requiring a new protocol to be built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempt at formal verification, and an Internet culture in which expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design, especially the software-defined networking (SDN) paradigm, offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods and a survey of its applications to networking. (Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials.)
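
    As a flavor of the kind of property such formal approaches can check on a network's data plane, the toy sketch below verifies loop freedom over a static next-hop table. It is a hypothetical illustration, not a technique taken from the survey; real tools reason over symbolic packet headers and far larger state spaces.

```python
# Toy loop-freedom check over a static next-hop table; a hypothetical
# illustration of a data-plane property, not a technique from the survey.
def has_forwarding_loop(next_hop, source):
    """Follow next-hop entries from `source`; return True if any node repeats."""
    visited = set()
    node = source
    while node in next_hop:
        if node in visited:
            return True   # revisited a node: forwarding loop
        visited.add(node)
        node = next_hop[node]
    return False          # reached a node with no entry (delivered or dropped)

# Example: A -> B -> C -> A is a loop; D -> E terminates.
table = {"A": "B", "B": "C", "C": "A", "D": "E"}
print(has_forwarding_loop(table, "A"))  # True
print(has_forwarding_loop(table, "D"))  # False
```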

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite their often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
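
    As a worked instance of the second example property quoted above, the snippet below checks the failure-probability bound under an assumed independent per-sensor failure probability of 0.03; that figure is hypothetical and not taken from the report.

```python
# Worked check of the quoted bound, assuming each sensor fails independently
# with probability 0.03 (a hypothetical figure, not taken from the report).
P_SENSOR_FAIL = 0.03
BOUND = 0.001

p_both_fail = P_SENSOR_FAIL ** 2  # independence: multiply the probabilities
print(f"P(both sensors fail) = {p_both_fail:.4f}")
print("Bound satisfied" if p_both_fail < BOUND else "Bound violated")
# 0.03 * 0.03 = 0.0009 < 0.001, so the property holds; a probabilistic model
# checker such as PRISM automates this over far richer system models.
```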

    A Survey of Agent-Based Modeling Practices (January 1998 to July 2008)

    In the 1990s, Agent-Based Modeling (ABM) began gaining popularity and represents a departure from the more classical simulation approaches. This departure, its recent development and its increasing application by non-traditional simulation disciplines indicate the need to continuously assess the current state of ABM and identify opportunities for improvement. To begin to satisfy this need, we surveyed and collected data from 279 articles from 92 unique publication outlets in which the authors had constructed and analyzed an agent-based model. From this large data set we establish the current practice of ABM in terms of year of publication, field of study, simulation software used, purpose of the simulation, acceptable validation criteria, validation techniques and completeness of the simulation description. Based on the current practice we discuss six improvements needed to advance ABM as an analysis tool. These improvements include the development of ABM-specific tools that are independent of software, the development of ABM as an independent discipline with a common language that extends across domains, the establishment of expectations for ABM that match their intended purposes, the requirement of complete descriptions of the simulation so others can independently replicate the results, the requirement that all models be completely validated, and the development and application of statistical and non-statistical validation techniques specifically for ABM. Keywords: Agent-Based Modeling, Survey, Current Practices, Simulation Validation, Simulation Purpose.
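
    For readers unfamiliar with the style of simulation the survey covers, the following deliberately minimal agent-based model (random-walk agents on a lattice) illustrates the basic agent/step structure; it is a generic sketch and is not drawn from any of the 279 surveyed articles.

```python
# A deliberately minimal agent-based model (random-walk agents on a lattice),
# included only to illustrate the agent/step structure the survey examines;
# it is not drawn from any of the 279 surveyed articles.
import random

class WalkerAgent:
    """Agent that takes one unit step in a random direction each tick."""
    def __init__(self):
        self.x, self.y = 0, 0

    def step(self):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        self.x += dx
        self.y += dy

def run_model(num_agents=100, ticks=50, seed=1):
    # A fixed seed supports independent replication, one of the survey's concerns.
    random.seed(seed)
    agents = [WalkerAgent() for _ in range(num_agents)]
    for _ in range(ticks):
        for agent in agents:
            agent.step()
    # A simple aggregate an analyst might compare against theory for validation.
    return sum(abs(a.x) + abs(a.y) for a in agents) / num_agents

print(f"Mean Manhattan distance after 50 ticks: {run_model():.2f}")
```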

    Proceedings, MSVSCC 2011

    Proceedings of the 5th Annual Modeling, Simulation & Visualization Student Capstone Conference held on April 14, 2011 at VMASC in Suffolk, Virginia. 186 pp

    Prediction And Allocation Of Live To Virtual Communication Bridging Resources

    This document summarizes a research effort focused on improving live-to-virtual (L-V) communication systems. The purpose of this work is to address a significant challenge facing the tactical communications training community through the development of the Live-to-Virtual Relay Radio Prediction Algorithm and the implementation of the algorithm in an Integrated Live-to-Virtual Communications Server prototype device. The motivation for the work and the challenges of integrating live and virtual communications are presented. Details surrounding the formulation of the prediction algorithm and a description of the prototype system, hardware, and software architectures are shared. Experimental results from discrete event simulation analysis and prototype functionality testing accompany recommendations for future investigation. If the methods and technologies summarized are implemented, an estimated equipment savings of 25%-53% and an estimated cost savings of $150,000.00 to $630,000.00 per site are anticipated. Thus, a solution to a critical tactical communications training problem is presented through the research discussed.
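
    The savings estimate quoted above can be read as a simple scaling of per-site equipment counts and costs. The sketch below shows that arithmetic with purely hypothetical figures; the radio count and unit cost are invented, and only the 25%-53% reduction range comes from the summary.

```python
# Hypothetical illustration of how an equipment-reduction percentage scales to
# per-site cost savings; radio count and unit cost are invented, and only the
# 25%-53% reduction range comes from the summary above.
RELAY_RADIOS_PER_SITE = 20      # assumed current relay radios per site
UNIT_COST = 30_000.00           # assumed cost per relay radio, USD
REDUCTION_RANGE = (0.25, 0.53)  # equipment savings range quoted above

for fraction in REDUCTION_RANGE:
    radios_saved = round(RELAY_RADIOS_PER_SITE * fraction)
    savings = radios_saved * UNIT_COST
    print(f"{fraction:.0%} reduction: ~{radios_saved} radios, ~${savings:,.2f} saved per site")
```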

    The Use Of Pc Based Simulation Systems In The Training Of Army Infantry Officers - An Evaluation Of The Rapid Decision Trainer

    This research considers two modes of training Army infantry officers in initial training to conduct a platoon live fire exercise. Leaders from groups that trained with the current classroom training methods were compared to leaders from groups whose training was augmented with a PC-based training system known as the Rapid Decision Trainer (RDT). The RDT was developed by the US Army Research Development and Engineering Command to aid in training the tactical decision making and troop leading procedures of officers in the initial levels of training to become rifle platoon leaders. The RDT allows the leader in training to run through platoon-level operations in a simulated combat environment prior to live execution. The focus of the system is on leadership tasks and decision making in areas such as unit movement, internal unit communication and contingency planning, and other dismounted infantry operations. Over the past year, some Infantry Officer Basic Course (IOBC) platoons at Ft. Benning have used the RDT in an experimental manner. Anecdotal evidence suggests that the system is beneficial in training IOBC officers. The Army Research Institute (ARI) conducted a preliminary evaluation of the RDT in March 2005 (Beal 2005). However, no quantitative measures were used in that evaluation, only subjective evaluations by the users, and there were no formal evaluations by the training cadre. This experiment continues the work of ARI and uses qualitative and quantitative data from both users and the evaluating cadre. The effectiveness of the RDT was evaluated by measuring leader behaviors and personal preferences. Three measurement approaches were used: (1) quantitative performance measures of leader actions, (2) qualitative situational awareness and evaluations of inclusion for the non-leader players, and (3) a qualitative evaluation of the system's usability and effectiveness by system users. Analysis reveals statistically significant findings that challenge the current norms.
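
    The quantitative portion of such an evaluation typically reduces to comparing a performance measure between the classroom-only and RDT-augmented groups. The snippet below sketches that comparison with a two-sample t-test; the scores, the measure, and the choice of test are hypothetical and not taken from the study.

```python
# Hypothetical two-sample comparison of a leader-performance measure between
# the classroom-only and RDT-augmented groups; all scores are invented.
from scipy import stats

classroom_scores = [62, 70, 65, 68, 71, 60, 66, 69]  # classroom-only leaders
rdt_scores = [74, 78, 70, 76, 80, 72, 75, 77]        # RDT-augmented leaders

t_stat, p_value = stats.ttest_ind(rdt_scores, classroom_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 0.05 level")
```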