
    Human Performance Modeling: Analysis of the Effects of Manned-Unmanned Teaming on Pilot Workload and Mission Performance

    Due to the advent of autonomous technology, coupled with the extreme expense of manned aircraft, the Department of Defense (DoD) has increased interest in developing affordable, expendable Unmanned Aerial Vehicles (UAVs) to serve as autonomous wingmen for jet fighters in mosaic warfare. Like a mosaic that forms a whole picture out of smaller pieces, battlefield commanders can utilize disaggregated capabilities, such as Manned-Unmanned Teaming (MUM-T), to operate in contested environments. With a single pilot controlling both the UAVs and the manned aircraft, pilots may struggle to manage all systems if the system design does not maintain a steady-state level of workload. To understand the potential effects of MUM-T on the pilot’s cognitive workload, an Improved Performance Research Integration Tool (IMPRINT) Pro pilot workload model was developed. The model predicts the cognitive workload of the pilot in a simulated environment when interacting with both the cockpit and multiple UAVs, providing insight into the effect of Human-Agent Interactions (HAI) and increasing autonomous control abstraction on the pilot’s cognitive workload and mission performance. This research concluded that workload peaks occur for the pilot during periods of high communications load, and that this communication may be degraded or delayed during air-to-air engagements. Nonetheless, autonomous control of the UAVs through a combination of Vector Steering, Pilot Directed Engagements, and Tactical Battle Management would enable pilots to successfully command up to three UAVs as well as their own aircraft against four enemy targets, while maintaining acceptable pilot cognitive workload in an air-to-air mission scenario.
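    The kind of workload prediction described above can be illustrated with a minimal sketch (this is not IMPRINT itself): summing per-channel resource demands of concurrently active tasks and flagging a channel that exceeds a red-line threshold. The task names, demand values, and the threshold below are hypothetical.

```python
# VACP-style resource channels (visual, auditory, cognitive, psychomotor)
CHANNELS = ("visual", "auditory", "cognitive", "psychomotor")
OVERLOAD_THRESHOLD = 7.0  # assumed per-channel red-line, for illustration

# (task, {channel: demand}) pairs active at the same simulated instant
active_tasks = [
    ("fly own aircraft",      {"visual": 3.7, "cognitive": 1.2, "psychomotor": 2.6}),
    ("radio call to wingman", {"auditory": 4.2, "cognitive": 4.6}),
    ("monitor UAV status",    {"visual": 3.0, "cognitive": 4.6}),
]

def channel_load(tasks):
    """Total demand per channel across all concurrently active tasks."""
    load = {ch: 0.0 for ch in CHANNELS}
    for _, demands in tasks:
        for ch, value in demands.items():
            load[ch] += value
    return load

load = channel_load(active_tasks)
overloaded = [ch for ch, v in load.items() if v > OVERLOAD_THRESHOLD]
# the cognitive channel spikes exactly when communication tasks overlap,
# mirroring the abstract's finding that workload peaks track comms load
print(load, overloaded)
```

    In this toy instant, only the cognitive channel exceeds the threshold, which is the kind of peak the model attributes to overlapping communication tasks.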

    Real-time futures graph tracking visualization and analysis tool

    Thesis (M.Eng.) -- Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 34-35). A hybrid envisionment is a novel representation of a simulated state graph, specifying all possible states and transitions of the system, characterized by both qualitative and quantitative state variables. The Deep Green project creates a hybrid envisionment, called a futures graph, to depict all possible occurrences and outcomes of a combat engagement between friendly and enemy units on a battlefield. During combat, AI state estimation techniques are utilized to efficiently track the state of the battle in a futures graph, giving the commander an up-to-date analysis of what is taking place on the battlefield and how the battle could turn out. Because state estimation of highly complex hybrid envisionments is a relatively unexplored and novel process, it is important to ensure that it is handled efficiently and accurately enough for use in the field. This paper explores an approach for discerning the behavior of state estimation through the use of an analysis suite. By accompanying Deep Green state estimation with the analysis suite developed, estimation techniques could be benchmarked and analyzed over various implementations through both numerical and graphical metrics. The metrics generated greatly helped to improve the estimation algorithm over the course of its development. by Joseph M. Fahey. M.Eng.
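    The core idea of tracking a battle state in a graph of possible futures can be sketched as a belief distribution over graph nodes, updated with a predict/reweight cycle. The graph, states, and observation likelihoods below are invented for illustration and are not the thesis's actual futures graph.

```python
# state -> list of possible successor states (a tiny invented futures graph)
transitions = {
    "engage":           ["engage", "friendly_advance", "friendly_retreat"],
    "friendly_advance": ["engage", "enemy_retreat"],
    "friendly_retreat": ["regroup"],
    "enemy_retreat":    ["enemy_retreat"],
    "regroup":          ["engage"],
}

def step(belief, observation_likelihood):
    """One predict/update cycle: spread belief along edges, then reweight
    by how well each state explains the latest battlefield observation."""
    predicted = {}
    for state, p in belief.items():
        succs = transitions[state]
        for s in succs:  # uniform transition probabilities, for simplicity
            predicted[s] = predicted.get(s, 0.0) + p / len(succs)
    updated = {s: p * observation_likelihood.get(s, 0.0)
               for s, p in predicted.items()}
    total = sum(updated.values())
    return {s: p / total for s, p in updated.items() if p > 0}

belief = {"engage": 1.0}
# an observation strongly consistent with friendly units advancing
belief = step(belief, {"friendly_advance": 0.7, "engage": 0.1,
                       "friendly_retreat": 0.1})
print(max(belief, key=belief.get))  # → friendly_advance
```

    An analysis suite like the one described would benchmark such an estimator, e.g. by checking how quickly the belief concentrates on the true branch of the graph.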

    Energy-efficient hardware design based on high-level synthesis

    This dissertation describes research activities broadly concerning the area of high-level synthesis (HLS), and more specifically, the HLS-based design of energy-efficient hardware (HW) accelerators. HW accelerators, mostly implemented on FPGAs, are integral to the heterogeneous architectures employed in modern high performance computing (HPC) systems due to their ability to speed up execution while dramatically reducing the energy consumption of computationally challenging portions of complex applications. Hence, the first activity concerned an HLS-based approach to directly execute OpenCL code on an FPGA instead of its traditional GPU-based counterpart. Modern FPGAs offer considerable computational capabilities while consuming significantly less power than high-end GPUs. Several different implementations of the K-Nearest Neighbor algorithm were considered on both FPGA- and GPU-based platforms and their performance was compared. FPGAs were generally more energy-efficient than the GPUs in all the test cases. We were also able to obtain a faster (in terms of execution time) FPGA implementation by using an FPGA-specific OpenCL coding style and utilizing suitable HLS directives. The second activity targeted the development of a methodology complementing HLS to automatically derive power optimization directives (also known as "power intent") from a system-level design description and use them to drive the design steps after HLS, producing a directive file written in the Common Power Format (CPF) to achieve power shut-off (PSO) in the case of an ASIC design. The proposed LP-HLS methodology reduces design effort by enabling designers to infer low-power information from the system-level description of a design rather than at the RTL level. 
This methodology required a SystemC description of a generic power management module to describe the design context of a HW module also modeled in SystemC, along with the development of a tool to automatically produce the CPF file to accomplish PSO. Several test cases were considered to validate the proposed methodology, and the results demonstrated its ability to correctly extract the low-power information and apply it to achieve power optimization in the backend flow.
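    For reference, the K-Nearest Neighbor computation that the first activity accelerates can be stated in a few lines. This plain-Python sketch only shows the algorithm itself (the dissertation's implementations were OpenCL on FPGA/GPU); the dataset is made up.

```python
import math

def knn_classify(query, points, labels, k=3):
    """Label a query point by majority vote among its k nearest neighbors."""
    # the distance loop below is the hot spot that FPGA pipelines and
    # GPU threads parallelize in the accelerated OpenCL versions
    dists = [(math.dist(query, p), lbl) for p, lbl in zip(points, labels)]
    dists.sort(key=lambda t: t[0])
    votes = {}
    for _, lbl in dists[:k]:
        votes[lbl] = votes.get(lbl, 0) + 1
    return max(votes, key=votes.get)

points = [(0.0, 0.0), (0.1, 0.2), (0.9, 1.0), (1.0, 0.8), (0.2, 0.1)]
labels = ["A", "A", "B", "B", "A"]
print(knn_classify((0.15, 0.15), points, labels))  # → A
```

    The distance computation is embarrassingly parallel, which is why both FPGA pipelining and GPU threading map onto it so naturally.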

    Executable Architectures and their Application to a Geographically Distributed Air Operations Center

    Integrated Architectures and Network Centric Warfare represent two central concepts in the Department of Defense's (DoD) on-going transformation. The true power of integrated architectures is brought to bear when they are combined with simulation to move beyond a static representation and create an executable architecture. This architecture can then be used to experiment with system configurations and parameter values to guide employment decisions. The process of developing and utilizing an executable architecture will be employed to assess an Air Operations Center (AOC). This thesis applies and expands upon the methodology of Dr. Alexander Levis, former Chief Scientist of the Air Force, to the static architecture representing the Aerospace Operations Center (AOC). Using Colored Petri Nets and other simulation tools, an executable architecture for the AOC's Air Tasking Order (ATO) production thread was developed. These models were then used to compare the performance of a current, forward-deployed AOC configuration to three other potential configurations that utilize a network centric environment to deploy a portion of the AOC and provide reach-back capabilities to the non-deployed units. Performance was measured by the amount of time required to execute the ATO cycle under each configuration. Communication requirements were analyzed for each configuration and stochastic delays were modeled for all transactions in which requirements could not be met due to the physical configuration of the AOC elements. All four configurations were found to exhibit statistically different behavior with regard to ATO cycle time.
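    The executable-architecture idea can be illustrated with a minimal (uncolored) Petri net executor: places hold tokens, and a transition fires when its input places are sufficiently marked. The places and transitions below are an invented miniature of an ATO-production-style flow, not the thesis's actual Colored Petri Net models.

```python
# current marking: tokens per place (invented example)
marking = {"targets_nominated": 2, "guidance_issued": 1,
           "ato_drafted": 0, "ato_published": 0}

# transition -> (tokens consumed from input places, tokens produced)
transitions = {
    "draft_ato":   ({"targets_nominated": 1, "guidance_issued": 1},
                    {"ato_drafted": 1, "guidance_issued": 1}),
    "publish_ato": ({"ato_drafted": 1}, {"ato_published": 1}),
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(marking[p] >= n for p, n in inputs.items())

def fire(name):
    inputs, outputs = transitions[name]
    for p, n in inputs.items():
        marking[p] -= n
    for p, n in outputs.items():
        marking[p] += n

# run until no transition can fire (a timed version would attach delays
# to transitions to measure cycle time, as the thesis does)
while True:
    ready = [t for t in transitions if enabled(t)]
    if not ready:
        break
    fire(ready[0])

print(marking)
```

    Attaching stochastic delays to transition firings turns this token game into the kind of simulation used to compare ATO cycle times across AOC configurations.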

    Text-based Adventures of the Golovin AI Agent

    The domain of text-based adventure games has recently been established as a new challenge: creating an agent that both understands natural language and acts intelligently in text-described environments. In this paper, we present our approach to tackling the problem. Our agent, named Golovin, takes advantage of the limited game domain. We use genre-related corpora (including fantasy books and decompiled games) to create language models suited to this domain. Moreover, we embed mechanisms that allow us to specify, and separately handle, important tasks such as fighting opponents, managing inventory, and navigating the game map. We validated the usefulness of these mechanisms by measuring the agent's performance on a set of 50 interactive fiction games. Finally, we show that our agent plays at a level comparable to the winner of last year's Text-Based Adventure AI Competition.
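    A toy sketch of the underlying idea, ranking candidate game commands with a genre-derived language model, might score commands by corpus bigram counts. The "corpus" and commands here are invented; Golovin's real models were built from fantasy books and decompiled games.

```python
from collections import Counter

# a miniature stand-in for a genre corpus
corpus = ("take the lamp . open the door . kill the troll with the sword . "
          "go north . open the chest . take the sword").split()
bigrams = Counter(zip(corpus, corpus[1:]))

def score(command):
    """Sum of bigram counts: higher means more corpus-like phrasing."""
    words = command.split()
    return sum(bigrams[(a, b)] for a, b in zip(words, words[1:]))

candidates = ["open the door", "door the open", "take the sword"]
best = max(candidates, key=score)
print(best)
```

    A real agent would combine such language-model scores with task-specific handlers (combat, inventory, navigation) rather than rely on phrasing alone.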

    Images delegitimized and discouraged: explicitly political art and the arbitrariness of the unspeakable

    While the increasing interest in contemporary art from Turkey has centered on explicitly political works, discussions of the limitations of freedom of expression have likewise come under the spotlight, not least with regard to Turkey's EU candidacy. In contrast to the attempts at complete suppression marking the 1980 coup d'etat and its aftermath, current censorship mechanisms aim to delegitimize and discourage artistic expressions (and their circulation) that can be construed as threatening the territorial integrity and sovereignty of the Turkish state, and to turn their producers into targets. This article investigates selected images produced in the contemporary art world between 2005 and 2008 that were taken to transcend the limits of what constitutes tolerable depictions of Turkey's socio-political realities. It examines current modalities of censorship in the visual arts and the different actors involved in silencing efforts. The cases show that within these fields of delimitation there are considerable contingencies: the domain of the unspeakable remains unclearly mapped. I argue that it is because of, not despite, this arbitrariness that delegitimizing interventions are successful, in that they (a) create incentives for self-censorship, and (b) produce defenses of artistic freedom that, by highlighting the autonomy of art, to some extent consolidate a conceptual separation of art from politics.

    A Model-Based Holistic Power Management Framework: A Study on Shipboard Power Systems for Navy Applications

    The recent development of Integrated Power Systems (IPS) for shipboard application has opened the horizon to introduce new technologies that address the increasing power demand along with the associated performance specifications. At the same time, the Shipboard Power System (SPS) features system components with multiple dynamic characteristics and requires stringent regulation, posing a challenge for efficient system-level management. Shipboard power management needs to support survivability, reliability, autonomy, and economy as the key design considerations. To address these multiple issues for an increasing system load and to embrace future technologies, an autonomic power management framework is required to maintain the system-level objectives. To address the lack of an efficient management scheme, a generic model-based holistic power management framework is developed for naval SPS applications. The relationships between the system parameters are introduced in the form of models to be used by the model-based predictive controller for achieving the various power management goals. An intelligent diagnostic support system is developed to support the decision-making capabilities of the main framework. Naïve Bayes' theorem is used to classify the status of the SPS to help dispatch the appropriate controls. A voltage control module is developed and implemented on a real-time test bed to verify the computation time. Variants of limited look-ahead controls (LLC) are used throughout the dissertation to support the management framework design. Additionally, ARIMA prediction is embedded in the approach to forecast the environmental variables in the system design. The developed generic framework binds the multiple functionalities in the form of overall system modules. Finally, the dissertation develops a distributed controller using the Interaction Balance Principle to solve the interconnected subsystem optimization problem. 
The LLC approach is used at the local level, and the conjugate gradient method coordinates all the lower-level controllers to achieve the overall optimal solution. This novel approach provides better computing performance, more flexibility in design, and improved fault handling. A case study demonstrates the applicability of the method and compares it with the centralized approach. In addition, several measures to characterize the performance of the distributed controls approach are studied.
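    The Naïve Bayes classification step mentioned above, inferring system status from observed symptoms, can be sketched in a few lines. The features, probabilities, and classes here are illustrative, not from the dissertation's data.

```python
from math import prod

# P(class) and P(symptom | class), assumed estimated elsewhere (hypothetical)
priors = {"normal": 0.7, "fault": 0.3}
likelihoods = {
    "normal": {"voltage_low": 0.05, "freq_deviation": 0.10, "breaker_open": 0.02},
    "fault":  {"voltage_low": 0.80, "freq_deviation": 0.60, "breaker_open": 0.50},
}

def classify(observed):
    """Pick the class maximizing P(class) * prod of P(symptom | class),
    assuming symptoms are conditionally independent given the class."""
    scores = {
        c: priors[c] * prod(likelihoods[c][s] for s in observed)
        for c in priors
    }
    return max(scores, key=scores.get)

print(classify(["voltage_low", "freq_deviation"]))  # → fault
```

    In the framework, such a classification of SPS status would feed the dispatch of the appropriate control actions.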

    Application of Real Options Theory to Software-intensive System Acquisitions

    Proceedings Paper (for Acquisition Research Program). In the Department of Defense (DoD), the typical outcome of a software acquisition program has been massive cost escalation, slipped delivery dates, and major cuts to the planned software functionality made to guarantee program success. To counter this dilemma, the DoD put forth a new weapons acquisition policy in 2003 based on an evolutionary acquisition approach to foster increased efficiency while building flexibility into the acquisition process. However, the evolutionary acquisition approach often relies on the spiral development process, which assumes end-state requirements are known at the inception of the development process, a misrepresentation of reality in the acquisition of DoD software-intensive weapons systems. This article presents a framework to address requirements uncertainty as it relates to software acquisition. The framework is based on Real Options theory and aims at mitigating risks associated with requirement volatility based on the technology objectives and constraints put forth by the customer at the acquisition decision-making level. Naval Postgraduate School Acquisition Research Program. Approved for public release; distribution is unlimited.
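    The flavor of valuation that Real Options analysis brings to staged acquisitions can be shown with a one-period binomial sketch of a "defer" option: the right, but not the obligation, to invest once requirements uncertainty resolves. All numbers (payoffs, probabilities, rates) are invented for the example.

```python
def defer_option_value(up_value, down_value, cost, p_up, risk_free):
    """Value today of the right (not obligation) to invest next period."""
    payoff_up = max(up_value - cost, 0.0)      # invest only if worthwhile
    payoff_down = max(down_value - cost, 0.0)  # otherwise walk away
    expected = p_up * payoff_up + (1 - p_up) * payoff_down
    return expected / (1 + risk_free)          # discount one period

# requirements firm up next period: the capability turns out to be worth
# either 150 or only 60 against a development cost of 100
option = defer_option_value(150.0, 60.0, 100.0, p_up=0.5, risk_free=0.05)
naive_npv = 0.5 * 150.0 + 0.5 * 60.0 - 100.0  # commit-now expected value
print(round(option, 2), naive_npv)
```

    Because the downside branch can be abandoned, the deferral option is worth considerably more than the commit-now expected value, which is the core argument for building flexibility into acquisition decisions under requirement volatility.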