
    AI-AUGMENTED DECISION SUPPORT SYSTEMS: APPLICATION IN MARITIME DECISION MAKING UNDER CONDITIONS OF METOC UNCERTAINTY

    The ability of a human to overlay information from disparate sensor systems or remote databases into a common operational picture can enhance rapid decision making and implementation in a complex environment. This thesis focuses on operational uncertainty as a function of meteorological and oceanographic (METOC) effects on maritime route planning. Using an existing decision support system (DSS) with artificial intelligence (AI) algorithms developed by the New Jersey Institute of Technology and the University of Connecticut, cognitive load and time to decision were assessed for users of an AI-augmented DSS that accounts for METOC conditions and their effects, and for users of a baseline, 'as is' DSS. Scenario uncertainty was presented to the user as the relative number of Pareto-optimal routes between two locations. Key results were (a) users of an AI-augmented DSS with a simplified interface completed assigned tasks in significantly less time than users of an information-dense, complex-interface AI-augmented DSS; (b) users of the simplified, AI-augmented DSS arrived at decisions with lower cognitive load than users of either the baseline DSS or the complex-interface AI-augmented DSS; and (c) users relied mainly on quantitative data presented in tabular form to make route decisions. The differences found in user performance and cognitive load across levels of AI augmentation and interface complexity serve as a starting point for further exploration into maximizing the potential of human-machine teaming. Office of Naval Research. Major, United States Marine Corps. Approved for public release; distribution is unlimited.
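The abstract frames scenario uncertainty in terms of Pareto-optimal routes, i.e. routes not dominated on all objectives by any other candidate. A minimal sketch of that idea, assuming two illustrative objectives (transit time and a METOC risk score) and hypothetical route data not drawn from the thesis:

```python
def pareto_optimal(routes):
    """Return the non-dominated subset: a route is kept unless some other
    route is no worse on both objectives and strictly better on at least
    one (lower transit time, lower METOC risk)."""
    front = []
    for r in routes:
        dominated = any(
            o["time"] <= r["time"] and o["risk"] <= r["risk"]
            and (o["time"] < r["time"] or o["risk"] < r["risk"])
            for o in routes
        )
        if not dominated:
            front.append(r)
    return front

# Illustrative candidates only; times in hours, risk on a 0-1 scale.
candidates = [
    {"name": "great-circle", "time": 72, "risk": 0.8},
    {"name": "coastal",      "time": 90, "risk": 0.3},
    {"name": "storm-avoid",  "time": 80, "risk": 0.4},
    {"name": "slow-coastal", "time": 95, "risk": 0.5},  # dominated by "coastal"
]

for route in pareto_optimal(candidates):
    print(route["name"])
# prints: great-circle, coastal, storm-avoid
```

The more such mutually non-dominated routes a scenario offers, the less a single "best" answer exists, which is one plausible reading of why the front's size serves as a proxy for user-facing uncertainty.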

    Indiana Jones and the Joystick of Doom: understanding the past via computer games

    In 1997 Janet Murray published "Hamlet on the Holodeck: The Future of Narrative in Cyberspace", which forecast the computer as a future platform for interactive drama. Yet a great deal of recent literature has focused on the failure rather than the success of virtual environments (particularly three-dimensional ones) as an engaging medium of entertainment and education. In this article I will discuss three key problems in designing virtual environments that in some way depict the values of past cultures. The first problem is how to create a feeling of immersion or of presence in a virtual environment - how we make the past come alive for people so that they feel they are transported "there". This goal is often seen as limited by technical constraints such as the speed of the Internet or network connection, limited processing power, or the computer's capacity to render a large number of objects on screen in real time, which are thought to impede the production of realistic virtual scenes. By contrast, this article emphasises the need to foster engagement not through realism but through interaction. Secondly, our idea of what reality is may be at odds with understanding the past or a distant place from a local perspective. What does reality mean when we are trying to recreate and understand cultural perspectives? Is it useful, desirable or even possible to interact with digital reconstructions of different cultures in a meaningful way? Culture understood from the distance of a hotel or guidebook is obviously not the same as the culture that guides, constrains and nourishes a local inhabitant. I would like to bring the same distinction to culture experienced through virtual environments, and argue that a virtual traveler is not the same as a virtual tourist. 
Despite, or perhaps because, they have a goal to pursue, and have more constraints and more direct immersion in the local way of doing things, people who travel rather than tour arguably have richer and more interesting experiences. Thirdly, if we do manage to create an engaging and believable virtual environment, will the novelty or entertainment value actually interfere with the cultural understanding gained by the users? In virtual heritage environments this is particularly evident in the conflict between individual freedom to explore and the more pragmatic need to convey historical information. We may, for example, create an entertaining game, but will it allow us to convey varying levels of historical accuracy in reconstructing the past?

    Energy Efficient Hardware Design for Securing the Internet-of-Things

    The Internet of Things (IoT) is a rapidly growing field that holds potential to transform our everyday lives by placing tiny devices and sensors everywhere. The ubiquity and scale of IoT devices require them to be extremely energy efficient, and given their physical exposure to malicious agents, security is a critical challenge within those constrained resources. This dissertation presents energy-efficient hardware designs for IoT security. First, this dissertation presents a lightweight Advanced Encryption Standard (AES) accelerator design. By analyzing the algorithm, a novel method is discovered that manipulates two internal steps to eliminate storage registers and replaces flip-flops with latches to save area. The proposed AES accelerator achieves state-of-the-art area and energy efficiency. Second, the inflexibility and high Non-Recurring Engineering (NRE) costs of Application-Specific Integrated Circuits (ASICs) motivate a more flexible solution. This dissertation presents a reconfigurable cryptographic processor, called Recryptor, which achieves performance and energy improvements for a wide range of security algorithms across public-key and secret-key cryptography and hash functions. The proposed design employs in-memory and near-memory computing circuit techniques and is more resilient to power analysis attacks. In addition, a simulator for in-memory computation is proposed. It is costly to design and evaluate a new architecture like in-memory computing at the register-transfer level (RTL), so a C-based simulator is designed to enable fast design space exploration and large workload simulations. Elliptic curve arithmetic and Galois counter mode are evaluated in this work. Lastly, an error-resilient register circuit, called iRazor, is designed to tolerate unpredictable variations in the manufacturing process, operating temperature, and voltage of VLSI systems. 
When integrated into an ARM processor, this adaptive approach outperforms competing industrial techniques such as frequency binning and canary circuits in performance and energy. PHD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147546/1/zhyiqun_1.pd
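The abstract does not name the two internal AES steps that are manipulated, but a well-known enabler of such reorderings is that SubBytes (a byte-wise substitution) commutes with ShiftRows (a byte permutation). A toy sketch of that property, using a stand-in substitution table rather than the real AES S-box:

```python
# Demonstration that a byte-wise substitution commutes with a byte
# permutation such as ShiftRows: applying the two steps in either order
# yields the same state. This commutativity is what allows a hardware
# design to reorder the steps without changing the cipher's output.
SBOX = [(b * 7 + 3) % 256 for b in range(256)]  # stand-in table, not AES's S-box

def sub_bytes(state):
    # substitute every byte independently
    return [[SBOX[b] for b in row] for row in state]

def shift_rows(state):
    # row i is rotated left by i positions, as in AES
    return [row[i:] + row[:i] for i, row in enumerate(state)]

# an arbitrary 4x4 byte state
state = [[(r * 4 + c) * 17 % 256 for c in range(4)] for r in range(4)]

assert shift_rows(sub_bytes(state)) == sub_bytes(shift_rows(state))
print("SubBytes and ShiftRows commute on this state")
```

Whether this is the exact reordering the dissertation exploits is an assumption; the general point stands for any byte-wise step composed with any byte permutation.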

    Survey and Analysis of Production Distributed Computing Infrastructures

    This report has two objectives. First, we describe a set of the production distributed infrastructures currently available, so that the reader has a basic understanding of them. This includes explaining why each infrastructure was created and made available and how it has succeeded and failed. The set is not complete, but we believe it is representative. Second, we describe the infrastructures in terms of their use, which is a combination of how they were designed to be used and how users have found ways to use them. Applications are often designed and created with specific infrastructures in mind, with both an appreciation of the existing capabilities provided by those infrastructures and an anticipation of their future capabilities. Here, the infrastructures we discuss were often designed and created with specific applications in mind, or at least specific types of applications. The reader should understand how the interplay between the infrastructure providers and the users leads to such usages, which we call usage modalities. These usage modalities are really abstractions that exist between the infrastructures and the applications; they influence the infrastructures by representing the applications, and they influence the applications by representing the infrastructures.

    Framework for a Comprehensive Education Data System in California: Unlocking the Power of Data to Continually Improve Public Education

    Recommends ways to improve student achievement by using data to drive decision-making, sharing best practices, encouraging innovation, and supporting improvement through professional development. Details how to expand and enhance current data systems.

    European and National Contexts in Research

    This electronic collection, "European and National Contexts in Research. Technology," presents the work of young researchers in geodesy and cartography, chemical technology and mechanical engineering, information technology, civil engineering, and radio engineering. It is intended for professionals in education, research, and industry, and will also be useful to undergraduate, graduate, and postgraduate university students.

    A framework for evolving grid computing systems.

    Grid computing was born in the 1990s, when researchers were looking for a way to share expensive computing resources and experiment equipment. Grid computing is becoming increasingly popular because it promotes the sharing of distributed resources that may be heterogeneous in nature, and it enables scientists and engineering professionals to solve large-scale computing problems. In reality, there are already huge numbers of grid computing facilities distributed around the world, each one having been created to serve a particular group of scientists, such as weather forecasters, or a group of users, such as stock markets. However, the need to extend the functionalities of current grid systems lends itself to the consideration of grid evolution. This allows many disjoint grids to be combined into a single powerful grid that can operate as one vast computational resource, and it allows grid environments to be flexible, to change, and to evolve. The rationale for grid evolution is the current rapid and increasing advances in both software and hardware. Evolution means adding or removing capabilities. This research defines grid evolution as adding new functions and/or equipment and removing unusable resources that affect the performance of some nodes. This thesis produces a new technique for grid evolution, allowing it to be seamless and to operate at run time. Within grid computing, evolution is an integration of software and hardware and can be of two distinct types, external and internal. Internal evolution occurs inside the grid boundary, by migrating special resources such as application software from node to node inside the grid, while external evolution occurs between grids. This thesis develops a framework for grid evolution that insulates users from the complexities of grids. 
This framework has at its core a resource broker together with a grid monitor, to cope with internal and external evolution, advance reservation, fault tolerance, monitoring of the grid environment, increased resource utilisation and high availability of grid resources. The starting point for the present framework is when the grid receives a job whose requirements do not exist on the required node, which triggers grid evolution. If the grid has all the requirements scattered across its nodes, internal evolution ensues: the grid migrates the required resources to the required node in order to satisfy the job's requirements. If the grid does not have these resources, external evolution enables the grid either to collect them from other grids (permanent evolution) or to send the job to other grids for execution (just-in-time evolution). Finally, a simulation tool called EVOSim has been designed, developed and tested. It is written in Oracle 10g and has been used to create four grids, each with a different setup, including different nodes, application software, data and policies. Experiments were done by submitting jobs to the grid at run time, and then comparing the results and analysing the performance of those grids that use the approach of evolution against those that do not. The results of these experiments have demonstrated that these features significantly improve the performance of grid environments and provide excellent scheduling results, with fewer rejected jobs.
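The decision path described above (run locally, migrate within the grid, or evolve externally) can be sketched as a small dispatch function. All names and the set-based resource model here are illustrative assumptions, not the thesis's actual broker API:

```python
def schedule(job_reqs, node_resources, grid_resources, other_grids):
    """Decide which evolution path (if any) a job triggers.

    job_reqs, node_resources and grid_resources are sets of resource
    names; other_grids maps a grid name to its resource set. Returns a
    label for the path taken.
    """
    missing = job_reqs - node_resources
    if not missing:
        return "run locally"                  # no evolution needed
    if missing <= grid_resources:
        return "internal evolution"           # migrate resources within the grid
    for name, resources in other_grids.items():
        if missing <= resources:
            # the thesis distinguishes permanent evolution (pull resources
            # in from the donor grid) from just-in-time evolution (send
            # the job out); both start from this match
            return f"external evolution via {name}"
    return "reject"                           # no grid can satisfy the job

print(schedule({"matlab"}, {"matlab"}, {"matlab"}, {}))          # run locally
print(schedule({"matlab"}, set(), {"matlab"}, {}))               # internal evolution
print(schedule({"matlab"}, set(), set(), {"gridB": {"matlab"}})) # external evolution via gridB
```

The ordering encodes the framework's stated preference: only when a node lacks resources does the grid look inward, and only when the whole grid lacks them does it look outward.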