    Stochastic simulation of a Commander's decision cycle (SSIM CODE)

    This thesis develops a stochastic representation of a tactical commander's decision cycle and applies the model within the high-resolution combat simulation Combined Arms Analysis Tool for the 21st Century (Combat XXI). Combat XXI is a Joint Army-Marine Corps effort to replace the Combined Arms and Support Task Force Evaluation Model (CASTFOREM), a legacy combat simulation. It is a non-interactive, high-resolution, analytical combat simulation focused on tactical combat, developed by the U.S. Army TRADOC Analysis Center-White Sands Missile Range (TRAC-WSMR) and the Marine Corps Combat Development Command (MCCDC), and it models land and amphibious warfare for applications in the research, development and acquisition, and advanced concepts requirements domains. Stochastic decision-making enhances Command and Control (C2) decision processes in Combat XXI. The stochastic simulation of a commander's decision cycle (SSIM CODE) addresses variability in decision-making due to uncertainty, chance, and the commander's attributes. A Bayesian network representation of a conditional probability model for a commander's decision cycle is implemented in SSIM CODE. This thesis develops, applies, and evaluates the effectiveness of SSIM CODE.
    http://archive.org/details/stochasticsimula109452518
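    As a rough illustration of the kind of conditional-probability decision model SSIM CODE implements, the following Python sketch samples a tiny Bayesian-network-style decision cycle. The nodes, states, probabilities, and the `aggressiveness` attribute are illustrative assumptions, not the thesis' actual model.

```python
import random

def sample(dist):
    """Draw one state from a {state: probability} distribution."""
    r, acc = random.random(), 0.0
    for state, p in dist.items():
        acc += p
        if r <= acc:
            return state
    return state  # guard against floating-point rounding

def decision_cycle(aggressiveness=0.6):
    # P(intel quality) -- root node
    intel = sample({"good": 0.7, "poor": 0.3})
    # P(perceived threat | intel quality) -- conditional node
    threat = sample({"high": 0.8, "low": 0.2} if intel == "poor"
                    else {"high": 0.4, "low": 0.6})
    # P(decision | threat, commander attribute)
    p_attack = aggressiveness if threat == "low" else aggressiveness * 0.5
    decision = sample({"attack": p_attack, "defend": 1.0 - p_attack})
    return intel, threat, decision

# Repeated runs exhibit decision variability from chance and attributes.
outcomes = [decision_cycle()[2] for _ in range(10_000)]
print("P(attack) ~", outcomes.count("attack") / len(outcomes))
```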

    Big data analytics tools for improving the decision-making process in agrifood supply chain

    Introduction: In the interest of ensuring long-term food security and safety in the face of changing circumstances, it is necessary to understand and take into consideration the environmental, social, and economic aspects of food and beverage production in relation to consumer demand. Moreover, globalization has raised the problems of long supply chains, information asymmetry, counterfeiting, difficulty in tracing and tracking the origin of products, and numerous related issues such as consumer well-being and healthcare costs. Emerging technologies drive new socio-economic approaches: they enable governments and individual agricultural producers to collect and analyze an ever-increasing amount of environmental, agronomic, and logistic data, and they give consumers and quality-control authorities easy access to all necessary information at short notice. Aim: The research concerns the study of ways to improve the production process by reducing information asymmetry, making information available to interested parties in a reasonable time, analyzing data about production processes while considering the environmental impact of production in terms of ecology, economy, food safety, and food quality, building opportunities for stakeholders to make informed decisions, and simplifying the control of quality, counterfeiting, and fraud. Therefore, the aim of this work is to study current supply chains, identify their weaknesses and necessities, investigate emerging technologies, their characteristics, and their impacts on supply chains, and provide useful recommendations to industry, governments, and policymakers.

    Proposal of a complementary method of data compression by discrete event methodology applied at a low level of abstraction

    Advisors: Edson Moschim, Yuzo Iano. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. This work implements a model based on discrete events applied at a low level of abstraction in a telecommunication system, called the hybrid method, using the Simulink® simulation environment of the Matlab® software. With the objective of improving the transmission of information in telecommunication systems and contributing to the study area, a bit pre-coding process is proposed in the simulation environment, based on applying discrete events to the signal before the modulation process. The proposal takes a different approach from the usual technique: signal transmission on the channel is carried out in the discrete domain, with discrete entities implemented in the bit-generation process and an emphasis on the zero bit. The simulation considers advanced modulation formats for signal transmission over an AWGN channel. The results show improvements of 9 to 34% in memory utilization and computational performance, as well as in simulation time. The extension of these results therefore has a strong impact on improving methods performed in higher layers, since the proposal acts on the physical layer.
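    For orientation, the following Python sketch reproduces the baseline channel model the proposal builds on: BPSK transmission over an AWGN channel with an empirical bit-error-rate estimate. The discrete-event pre-coding stage itself is omitted (it would sit between the source and the modulator), and all parameters are illustrative assumptions rather than the dissertation's Simulink implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits = 1_000_000
ebn0_db = 6.0

bits = rng.integers(0, 2, n_bits)        # source bits
symbols = 2 * bits - 1                   # BPSK mapping: 0 -> -1, 1 -> +1
ebn0 = 10 ** (ebn0_db / 10)
noise_std = np.sqrt(1 / (2 * ebn0))      # unit symbol energy assumed
received = symbols + noise_std * rng.normal(size=n_bits)
decided = (received > 0).astype(int)     # hard decision at the receiver

ber = np.mean(decided != bits)
print(f"Eb/N0 = {ebn0_db} dB -> BER ~ {ber:.2e}")  # theory: Q(sqrt(2*Eb/N0))
```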

    A Study of Executable Model Based Systems Engineering from DODAF Using Simulink

    Diagrams and visuals often cannot adequately capture a complex system’s architecture for analysis. The Department of Defense Architectural Framework (DoDAF), written to follow the Unified Modeling Language (UML), is a collection of mandated common architectural products for interoperability among the DoD components. In this study, DoDAF products from as-is Remotely Piloted Aircraft (RPA) Satellite Communication (SATCOM) systems have been utilized for the creation of executable architectures as part of an Executable Model Based Systems Engineering (EMBSE) process. EMBSE was achieved using Simulink, a software tool for modeling, simulating and analyzing dynamic systems. This study has demonstrated that DoDAF products can be created and executed following the rules of UML for analysis. It has also shown that DoDAF products can be utilized to build analysis models. Furthermore, these analysis models and executable architectures have been presented to a panel of experts on the topic. The comments and study results show a desire for executable architectures as well as their viability as presented in Simulink. This study concludes there is a need, a use and a method to implement objective analysis using EMBSE from DoDAF products in Simulink for current and future DoD systems
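    To make the notion of an executable architecture concrete, the sketch below turns a DoDAF-style operational thread into timed message passing between components, so that end-to-end latency can be measured. It is a minimal stand-in, not the study's Simulink models; the component names, message path, and delays are assumptions.

```python
import heapq

LINKS = {                                # component -> (next hop, delay in s)
    "RPA":           ("SATCOM", 0.25),
    "SATCOM":        ("GroundStation", 0.25),
    "GroundStation": (None, 0.0),        # end of the operational thread
}

def run(message_times):
    """Execute the thread for messages injected at the given times."""
    events = [(t, "RPA") for t in message_times]   # (time, component)
    heapq.heapify(events)
    completions = []
    while events:
        t, comp = heapq.heappop(events)
        nxt, delay = LINKS[comp]
        if nxt is None:
            completions.append(t)                  # thread complete
        else:
            heapq.heappush(events, (t + delay, nxt))
    return completions

starts = [0.0, 1.0, 2.0]
done = run(starts)
print("end-to-end latencies:", [round(t - s, 2) for t, s in zip(done, starts)])
```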

    Application of Executable Architectures in Early Concept Evaluation

    This research explores use of executable architectures to guide design decisions in the early stages of system development. Decisions made early in the system development cycle determine a majority of the total lifecycle costs as well as establish a baseline for long term system performance and thus it is vital to program success to choose favorable design alternatives. The development of a representative architecture followed the Architecture Based Evaluation Process as it provides a logical and systematic order of events to produce an architecture sufficient to document and model operational performance. In order to demonstrate the value in the application of executable architectures for trade space decisions, three variants of a fictional unmanned aerial system were developed and simulated. Four measures of effectiveness (MOEs) were selected for evaluation. Two parameters of interest were varied at two levels during simulation to create four test case scenarios against which to evaluate each variant. Analysis of the resulting simulation demonstrated the ability to obtain a statistically significant difference in MOE performance for 10 out of 16 possible test case-MOE combinations. Additionally, for the given scenarios, the research demonstrated the ability to make a conclusive selection of the superior variant for additional development
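    The statistical comparison described above can be sketched as follows: each variant is simulated repeatedly under a test case, and a Welch's t-test decides whether an MOE differs significantly between two variants. The stand-in `simulate_moe` model and its distributions are illustrative assumptions, not the research's actual simulation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_moe(variant, n_reps=30):
    """Stand-in for a full simulation run returning one MOE sample per rep."""
    mean = {"A": 0.80, "B": 0.84, "C": 0.79}[variant]   # hypothetical MOEs
    return rng.normal(loc=mean, scale=0.05, size=n_reps)

a, b = simulate_moe("A"), simulate_moe("B")
t, p = stats.ttest_ind(a, b, equal_var=False)           # Welch's t-test
print(f"MOE mean A={a.mean():.3f}, B={b.mean():.3f}, p={p:.4f}")
if p < 0.05:
    print("statistically significant difference between variants")
```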

    Network coding for computer networking

    Conventional communication networks route data packets in a store-and-forward mode: a router buffers received packets and forwards them intact towards their intended destination. Network Coding (NC) generalises this method by allowing the router to perform algebraic operations on the packets before forwarding them. The purpose of NC is to improve network performance towards its maximum capacity, also known as the max-flow min-cut bound. NC has become very well established in the field of information theory; however, practical implementations in real-world networks are yet to be explored. In this thesis, new implementations of NC are brought forward. The effect of NC on flow and error control protocols and on queuing over computer networks is investigated by establishing and designing a mathematical and simulation framework. One goal of the investigation is to understand how the NC technique can reduce the number of packets required to acknowledge the reception of those sent over the network while error-control schemes are employed. Another goal is to control queuing stability by reducing the number of packets required to convey a set of information. A custom-built simulator based on SimEvents® has been developed in order to model several scenarios within this approach. The work in this thesis is divided into two key parts. The objective of the first part is to study the performance of communication networks employing error control protocols when NC is adopted. In particular, two main Automatic Repeat reQuest (ARQ) schemes are invoked, namely Stop-and-Wait (SW) and Selective Repeat (SR) ARQ. Results show that in unicast point-to-point communication, the proposed NC scheme offers an increase in throughput over traditional SW ARQ of between 2.5% and 50.5% at each link, with negligible decoding delay. Additionally, in a Butterfly network, SR ARQ employing NC achieves a throughput gain of between 22% and 44% over traditional SR ARQ when the number of incoming links to the intermediate node varies between 2 and 5. Moreover, in an extended Butterfly network, NC offers a throughput increase of up to 48% under an error-free scenario and 50% in the presence of errors. Despite the extensive research on synchronous NC performance in various fields, little has been said about its queuing behaviour. One assumption is that packets are served following a Poisson distribution: packets from different streams are coded prior to being served and then exit through only one stream. This study determines the arrival distribution that coded packets follow at the serving node, which in general leads to the study of queuing systems of type G/M/1. Hence, the objective of the second part of this study is twofold: to determine the distribution of the coded packets, and to estimate the waiting time faced by coded packets before their serving process completes. Results show that NC brings a new solution for queuing stability, as evidenced by the small waiting time coded packets spend in the intermediate node's queue before being served. This work is further enhanced by studying server utilization in traditional routing and NC scenarios. An NC-based M/M/1 queue with finite capacity K is also analysed to investigate the packet loss probability for both scenarios. Based on the results achieved, the use of NC in error-prone and long-propagation-delay networks is recommended. Additionally, since the work provides an insightful prediction of the queuing behaviour of particular networks, employing synchronous NC can bring a solution for the stability of systems with packet-controlled sources and limited input buffers.
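    The core coding operation behind these results can be shown in a few lines: an intermediate node XORs packets from two incoming streams into one coded packet for the bottleneck link, and each sink recovers the missing packet by XORing the coded packet with the packet it already holds. The packet contents below are illustrative, and equal packet lengths are assumed.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"ALPHA-PACKET-01!"        # from source 1
p2 = b"BRAVO-PACKET-02!"        # from source 2

coded = xor_bytes(p1, p2)       # single coded packet on the bottleneck link

# Sink 1 already received p1 on a side link, sink 2 received p2:
assert xor_bytes(coded, p1) == p2
assert xor_bytes(coded, p2) == p1
print("both sinks decoded successfully")
```

    One coded transmission thus serves both sinks where traditional routing would need two, which is the source of the throughput gains reported above.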

    A Game-Theoretic Decision-Making Framework for Engineering Self-Protecting Software Systems

    The targeted and destructive nature of the strategies attackers use to break down a software system requires mitigation approaches with dynamic awareness. Making the right decision when facing today's sophisticated and dynamic attacks is one of the most challenging aspects of engineering self-protecting software systems. The challenge is due to: (i) the need to satisfy various security and non-security quality goals, and their inherent conflicts with each other, when selecting a countermeasure; (ii) the proactive and dynamic nature of security attacks, which makes their detection and consequently their mitigation challenging; and (iii) the incorporation of uncertainties such as the intention and strategy of the adversary attacking the software system. These factors motivate the need for a decision-making engine that facilitates adaptive security from a holistic view of the software system and the attacker. Inspired by game theory, this research models the interactions between the attacker and the software system as a two-player game. Using game-theoretic techniques, the self-protecting software system is able to: (i) fuse the strategies of attackers into the decision-making model, and (ii) refine the strategies in dynamic attack scenarios by utilizing what it has learned from the system's and adversary's interactions. This PhD research devises a novel framework with three phases: (i) modeling quality/malicious goals in order to quantify them in the decision-making engine, (ii) designing game-theoretic techniques that build the decision model based on the satisfaction level of quality/malicious goals, and (iii) realizing the decision-making engine in a working software system. The framework aims to exhibit a plug-and-play capability for adopting a game-theoretic technique that suits the security goals and requirements of the software. To illustrate this plug-and-play capability, three decision-making engines have been designed and developed, each addressing a different challenge in adaptive security: (i) incentive-based ("IBSP"), (ii) learning-based ("MARGIN"), and (iii) uncertainty-based ("UBSP"). For each engine a game-theoretic approach is taken, considering the security requirements and the input information. IBSP maps the quality goals and the incentives of the attacker to the interdependencies among defense and attack strategies. MARGIN protects the software system against dynamic attacker strategies. UBSP handles adversary-type uncertainty. The evaluations of these game-theoretic approaches show the benefits of the proposed framework in terms of the satisfaction of the security and non-security goals of the software system.
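    As a small illustration of the game-theoretic core (not any of the IBSP, MARGIN, or UBSP engines themselves), the following sketch models the attacker and defender as a zero-sum matrix game and lets the defender learn a mixed countermeasure strategy by fictitious play, i.e. by best-responding to the opponent's empirical strategy each round. The payoff matrix and action sets are invented for illustration.

```python
import numpy as np

#              attacks:  DoS   SQLi  Phish   (defender utility)
payoff = np.array([[ 0.9, -0.4, -0.2],   # defend: rate-limit
                   [-0.5,  0.8, -0.3],   # defend: sanitize inputs
                   [-0.2, -0.4,  0.7]])  # defend: user training

d_counts = np.zeros(3)                   # defender action history
a_counts = np.zeros(3)                   # attacker action history
for _ in range(5000):
    a_mix = (a_counts + 1) / (a_counts.sum() + 3)   # smoothed empirical mix
    d = int(np.argmax(payoff @ a_mix))              # defender best reply
    d_mix = (d_counts + 1) / (d_counts.sum() + 3)
    a = int(np.argmin(d_mix @ payoff))              # attacker best reply
    d_counts[d] += 1
    a_counts[a] += 1

print("defender mixed strategy ~", np.round(d_counts / d_counts.sum(), 2))
```

    In zero-sum games the empirical frequencies of fictitious play converge to an equilibrium mixture, which is one simple way a decision engine can hedge across countermeasures rather than commit to a single one.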

    Scalability of a Scenario- and Template-Based Simulation of Electric/Electronic Architectures in Reactive Environments (Skalierbarkeit einer Szenarien- und Template-basierten Simulation von Elektrik/Elektronik-Architekturen in reaktiven Umgebungen)

    The automotive industry is undergoing a transformation. Future vehicles will be electric, autonomous, connected, shared, and regularly updated. One consequence is strong growth in the software of future vehicles, driven above all by the implementation of autonomous driving behavior and manufacturer-specific operating systems. Powerful central computers are required to execute this software safely. In addition, the growing need for security mechanisms against cyberattacks, the introduction of power electronics, and the necessary guarantee of fail-safety increase the complexity of developing automotive electric/electronic architectures (E/E architectures). In power electronics this is due, for example, to the galvanic isolation required between the high-voltage and low-voltage networks to protect the occupants; furthermore, the use of permanent-magnet synchronous machines requires the safe dimensioning and design of the corresponding drive circuits. Cyberattacks, in turn, demand mechanisms for defense and for guaranteeing information security, including preventive firewalls and proactive intrusion detection systems. Fail-safety is enabled by component or information redundancy; to initiate the corresponding failover measures, implementing suitable monitoring may also be necessary. In the course of this transformation, E/E architecture models are growing and becoming more strongly interconnected. E/E architects therefore face more design decisions, solutions have more degrees of freedom, and consequences are harder to assess. Yet verifiably correct decisions must be made as early as possible in the development process, and the introduction of early testing in future approval processes adds further weight to this requirement. Existing work has shown that a simulation integrated into E/E architecture development tools offers E/E architects added value in reaching design decisions early. This thesis, by contrast, investigates the limits of the scalability of such a simulation, using industrially relevant use cases. An existing approach for the automated synthesis of simulation models from PREEvision E/E architecture models is extended and adapted to the requirements of large-scale models. To this end, simulators are first examined for their suitability for industrial use, based on selection criteria defined in this thesis and on synthetic, scalable benchmarks. Concepts that address increasing the scalability of an E/E architecture simulation are then investigated. Besides performance, the aspects of scalability include applicability and validatability, both of which are influenced by the emergence of generated models. As a solution, this thesis uses executable scenario models for the state-dependent generation of stimuli and the reactive evaluation of signal values. Their interfaces make it possible to identify exactly those model components of the E/E architecture that are relevant to a use case, which together form the so-called "System of Interest".
    In this way the size of the simulation model can be reduced. In addition, parameterizable, pre-validated, and performance-optimized submodels, so-called "templates", are used during generation. Besides a manual assignment of templates to E/E architecture model components via the Template And Layer Integration Architecture (TALIA) used in this thesis, specific components at the wiring-harness level, such as batteries, connectors, or cables, already have standard templates assigned, so simulation models can be generated without manual behavior modeling and the associated validation. To enable the use of standard templates, a hardware-centric mapping is pursued: the real-world physical E/E architecture forms the basis of the generated simulation models, software models are additionally integrated via the models of the electronic control units (ECUs), and the scenario models likewise become part of the simulation models after generation. Different E/E architecture levels are thus integrated, producing hybrid simulation models. For the evaluation, use cases for simulation are derived from possible design-decision questions and selected for further consideration based on defined criteria; such questions arise in technology selection, component dimensioning, and optimization. The use cases determine the required test model, consisting of the System of Interest to be evaluated and the test-bench model, realized as a scenario model. Since the test model forms the basis of the simulation model and thus determines its complexity, the scalability of the E/E architecture simulation can be assessed from the use cases. In particular, this thesis investigates the influence of emergent model properties on scalability.
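    The template mechanism lends itself to a small illustration. The following Python sketch is independent of PREEvision and TALIA; all component types, parameters, and behavior strings are illustrative assumptions. It shows the core idea: pre-validated standard templates are assigned to architecture components by type, so a simulation model can be generated without per-component behavior modeling.

```python
# Standard templates: component type -> parameterizable behavior stub.
STANDARD_TEMPLATES = {
    "battery":   lambda p: f"voltage source, {p['voltage']} V",
    "cable":     lambda p: f"series resistance, {p['milliohm']} mOhm",
    "connector": lambda p: f"contact resistance, {p['milliohm']} mOhm",
}

# Toy architecture model (stand-in for an E/E architecture graph).
architecture = [
    {"name": "HV_Batt", "type": "battery",   "voltage": 400},
    {"name": "Cable_1", "type": "cable",     "milliohm": 12},
    {"name": "Plug_X",  "type": "connector", "milliohm": 3},
]

def generate_simulation_model(components):
    """Instantiate the default template of each component by its type."""
    model = []
    for comp in components:
        template = STANDARD_TEMPLATES[comp["type"]]
        model.append((comp["name"], template(comp)))
    return model

for name, behavior in generate_simulation_model(architecture):
    print(f"{name}: {behavior}")
```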

    Multilevel Runtime Verification for Safety and Security Critical Cyber Physical Systems from a Model Based Engineering Perspective

    Advanced embedded system technology is one of the key driving forces behind the rapid growth of Cyber-Physical System (CPS) applications. A CPS consists of multiple coordinating and cooperating components, which are often software-intensive and interact with each other to achieve unprecedented tasks. Such highly integrated CPSs have complex interaction failures, attack surfaces, and attack vectors that must be protected and secured against. This dissertation advances the state of the art by developing a multilevel runtime monitoring approach for safety and security critical CPSs, with monitors at each level of processing and integration. Given that computation and data-processing vulnerabilities may exist at multiple levels in an embedded CPS, solutions present at the levels where the faults or vulnerabilities originate are beneficial for the timely detection of anomalies. Further, the increasing functional and architectural complexity of critical CPSs has significant safety and security implications for operation. These challenges are creating a need for new methods in which there is a continuum between design-time assurance and runtime or operational assurance. Towards this end, this dissertation explores Model Based Engineering methods by which design assurance can be carried forward to the runtime domain, creating a shared responsibility for reducing the overall risk associated with the system in operation. A synergistic combination of verification & validation at design time and runtime monitoring at multiple levels is therefore beneficial in assuring the safety and security of critical CPSs. Furthermore, the multilevel runtime monitor framework is realized in hardware using a stream-based runtime verification language.
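    A stream-based runtime monitor of the kind mentioned above can be sketched in a few lines: the monitor consumes an event stream and checks a bounded-response property. The property ("every request is granted within 3 steps"), the event names, and the bound are illustrative assumptions, and the sketch is software-only rather than the dissertation's hardware realization.

```python
def monitor(stream, bound=3):
    """Yield verdicts for a bounded-response property over an event stream."""
    pending = []                          # step indices of open requests
    for i, event in enumerate(stream):
        if event == "request":
            pending.append(i)
        elif event == "grant" and pending:
            pending.pop(0)                # oldest open request is served
        # violation: the oldest pending request has exceeded the bound
        if pending and i - pending[0] >= bound:
            yield f"step {i}: request from step {pending.pop(0)} not granted in time"
    for j in pending:
        yield f"end of trace: request from step {j} never granted"

trace = ["request", "idle", "grant", "request", "idle", "idle", "idle"]
for verdict in monitor(trace):
    print(verdict)
```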

    A METHODOLOGY FOR THE MODULARIZATION OF OPERATIONAL SCENARIOS FOR MODELLING AND SIMULATION

    As military operating environments and potential global threats rapidly evolve, the military planning processes required to maintain international security and national defense increase in complexity and involve unavoidable uncertainties. The challenges in the field are diverse, from the reemergence of long-term strategic competition and the destabilizing effects of rogue regimes to asymmetric non-state threats such as terrorism and international crime. Military forces are expected to handle increased multi-role, multi-mission demands because of the interconnected character of these threats. The objective of this thesis is to enhance system-of-systems analysis capabilities by considering diverse operational requirements and operational ways in a parameterized fashion within the Capabilities Based Assessment process. These assessments require an open-ended exploratory approach to means and ways, situated in the early stages of planning and acquisition processes. To better reflect these increased demands, the integration of multi-scenario capabilities into a process with low-fidelity modelling and simulation is of particular interest: it allows a large number of feasible alternatives, spanning a diverse set of dimensions and parameters, to be considered in a timely manner. A methodology has been devised as an enhanced Capabilities Based Assessment approach that provides a formalized process for the consideration and infusion of operational scenarios and properly constrains the design space prior to computational analysis. In this context, operational scenarios are a representative set of statements and conditions that address a defined problem and include testable metrics for analyzing performance and effectiveness. The scenario formalization uses an adjusted elementary-definition approach to decompose, define, and recompose operational scenarios into standardized architectures, allowing their rapid infusion into simulation environments and enabling the conjoint consideration of diverse operational requirements. Pursuant to this process, discrete event simulations are employed as a low-fidelity approach that reflects the elementary structure of the scenarios. In addition, the exploration of the design and options space is formalized, including the collection of alternative approaches within different materiel and non-materiel dimensions and the subsequent analysis of their relationships prior to the creation of combinatorial test cases. In the course of this thesis, the devised methodology as a whole and the two developed augmentations to the Capabilities Based Assessment are tested and validated in a series of experiments. As an overall case study, the decision-making process surrounding the deployment of vertical airlift assets of varying type and quantity for Humanitarian Aid and Disaster Relief operations is used. A demonstration experiment exercises the entire methodology to test its suitability for handling a variety of different scenarios, as well as a comprehensive set of materiel and non-materiel parameters. Based on a mission statement and performance targets, the status quo could be evaluated and alternative options for the required performance improvements could be presented. The methodology created in this thesis enables the Capabilities Based Assessment and general defense acquisition considerations to be approached initially in a more open and less constrained manner. This capability is provided through low-fidelity modelling and simulation that enables the evaluation of a large number of alternatives. Advancing the state of the art, the presented methodology removes subject-matter-expert and operator-driven constraints, allowing the discovery of solutions that would not be considered in a traditional process. It will support not only defense acquisition analysts and decision-makers but also policy planners, through its ability to revise and analyze cases rapidly.
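    The combinatorial exploration step can be illustrated briefly: design dimensions (materiel and non-materiel) are crossed into test cases with `itertools.product`, and each case is scored by a stand-in for the low-fidelity discrete event simulation. All dimensions, levels, and the scoring stub are illustrative assumptions, not the thesis' actual case study values.

```python
import itertools

dimensions = {
    "airframe":   ["heavy-lift", "medium-lift"],    # materiel
    "quantity":   [2, 4],                           # materiel
    "basing":     ["ship", "forward airfield"],     # non-materiel
    "crew_ratio": [1.0, 1.5],                       # non-materiel
}

def evaluate(case):
    """Stand-in for a discrete event simulation returning one MOE."""
    tons_per_day = 20 if case["airframe"] == "heavy-lift" else 12
    return tons_per_day * case["quantity"] * case["crew_ratio"]

names = list(dimensions)
cases = [dict(zip(names, combo))
         for combo in itertools.product(*dimensions.values())]
ranked = sorted(cases, key=evaluate, reverse=True)
print(f"{len(cases)} test cases; best alternative:", ranked[0])
```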