
    Brain-wave measures of workload in advanced cockpits: The transition of technology from laboratory to cockpit simulator, phase 2

    The present Phase 2 small business innovation research study was designed to address issues related to scalp-recorded event-related potential (ERP) indices of mental workload and to transition this technology from the laboratory to cockpit simulator environments for use as a systems engineering tool. The project involved five main tasks: (1) Two laboratory studies confirmed the generality of the ERP indices of workload obtained in the Phase 1 study and revealed two additional ERP components related to workload. (2) A task analysis of flight scenarios and pilot tasks in the Advanced Concepts Flight Simulator (ACFS) defined cockpit events (i.e., displays, messages, alarms) that would be expected to elicit ERPs related to workload. (3) Software was developed to support ERP data analysis. An existing ARD-proprietary package of ERP data analysis routines was upgraded, new graphics routines were developed to enhance interactive data analysis, and routines were developed to compare alternative single-trial analysis techniques using simulated ERP data. (4) Working in conjunction with NASA Langley research scientists and simulator engineers, preparations were made for an ACFS validation study of ERP measures of workload. (5) A design specification was developed for a general purpose, computerized, workload assessment system that can function in simulators such as the ACFS.
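    The abstract mentions comparing single-trial analysis techniques on simulated ERP data. As a rough illustration of what such simulated data can look like, the following Python sketch generates noisy single-trial epochs containing a P300-like component and recovers it by averaging. All parameters (latency, amplitude, noise level, sampling rate) are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hypothetical sketch: simulate single-trial ERPs as a P300-like Gaussian
# bump plus background EEG noise, then recover the component by averaging.
# Parameters are illustrative only, not taken from the study.

FS = 250                       # sampling rate, Hz (assumed)
T = np.arange(0, 1.0, 1 / FS)  # 1-second epoch after the eliciting event

def simulate_trial(amplitude_uv=10.0, latency_s=0.3, noise_uv=20.0):
    """One synthetic trial: P300-like bump plus Gaussian noise (microvolts)."""
    component = amplitude_uv * np.exp(-((T - latency_s) ** 2) / (2 * 0.05 ** 2))
    return component + np.random.normal(0.0, noise_uv, T.size)

trials = np.array([simulate_trial() for _ in range(100)])
average_erp = trials.mean(axis=0)  # noise shrinks roughly as 1/sqrt(n_trials)

# A crude workload-related index: peak amplitude in the 250-500 ms window
window = (T >= 0.25) & (T <= 0.5)
print(f"Estimated P300 peak: {average_erp[window].max():.1f} uV")
```

    Because averaging across n trials shrinks the noise by roughly 1/sqrt(n), single-trial techniques of the kind the study compares are considerably harder than averaged-ERP analysis, which is what makes simulated data with a known ground-truth component useful for evaluating them.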

    Aerospace medicine and biology. A continuing bibliography with indexes, supplement 206, May 1980

    This bibliography lists 169 reports, articles, and other documents introduced into the NASA scientific and technical information system in April 1980.

    Research and Technology

    Langley Research Center is engaged in the basic and applied research necessary for the advancement of aeronautics and space flight, generating advanced concepts for the accomplishment of related national goals, and providing research advice, technological support, and assistance to other NASA installations, other government agencies, and industry. Highlights of major accomplishments and applications are presented.

    Moving from Data-Constrained to Data-Enabled Research: Experiences and Challenges in Collecting, Validating and Analyzing Large-Scale e-Commerce Data

    Widespread e-commerce activity on the Internet has led to new opportunities to collect vast amounts of micro-level market and nonmarket data. In this paper we share our experiences in collecting, validating, storing and analyzing large Internet-based data sets in the areas of online auctions, music file sharing and online retailer pricing. We demonstrate how such data can advance knowledge by facilitating sharper and more extensive tests of existing theories and by offering observational underpinnings for the development of new theories. Just as experimental economics pushed the frontiers of economic thought by enabling the testing of numerous theories of economic behavior in the environment of a controlled laboratory, we believe that observing, often over extended periods of time, real-world agents participating in market and nonmarket activity on the Internet can lead us to develop and test a variety of new theories. Internet data gathering is not controlled experimentation: we cannot randomly assign participants to treatments or determine event orderings. It does, however, offer potentially large data sets with repeated observation of individual choices and actions, and automated data collection holds promise for greatly reduced cost per observation. Our methods rely on technological advances in automated data collection agents. Significant challenges remain in developing appropriate sampling techniques, integrating data from heterogeneous sources in a variety of formats, constructing generalizable processes, and understanding legal constraints. Despite these challenges, the early evidence from those who have harvested and analyzed large amounts of e-commerce data points toward a significant leap in our ability to understand the functioning of electronic commerce.
    Comment: Published at http://dx.doi.org/10.1214/088342306000000231 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
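    The paper does not publish its collection agents, but the following Python sketch shows the general shape of such an automated agent. The endpoint URL, response format and field names are placeholders I introduce for illustration, not details from the paper.

```python
import csv
import time
import requests

# Hypothetical sketch of an automated data-collection agent: poll a
# (made-up) auction endpoint at fixed intervals, minimally validate each
# record, and append it to a local CSV for later analysis.

ENDPOINT = "https://example.com/api/auctions"  # placeholder URL
FIELDS = ["auction_id", "current_bid", "n_bids", "observed_at"]

def fetch_snapshot():
    """One observation: download the current state of all listed auctions."""
    response = requests.get(ENDPOINT, timeout=30)
    response.raise_for_status()
    return response.json()  # assumed to be a list of dicts

def validate(record):
    """Drop records with missing keys or non-positive bids."""
    return all(k in record for k in FIELDS[:3]) and record["current_bid"] > 0

def collect(hours=24, interval_s=600, path="auctions.csv"):
    """Repeated observation over time, the core of the data-gathering idea."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        for _ in range(int(hours * 3600 / interval_s)):
            stamp = time.strftime("%Y-%m-%dT%H:%M:%S")
            for rec in filter(validate, fetch_snapshot()):
                rec["observed_at"] = stamp
                writer.writerow({k: rec[k] for k in FIELDS})
            time.sleep(interval_s)
```

    The interval-based polling loop is what makes repeated observation of individual choices and actions cheap relative to manual collection; the validation step reflects the paper's emphasis that validating heterogeneous Internet data is a challenge in its own right.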

    Collaborative Edge Computing in Mobile Internet of Things

    The proliferation of Internet-of-Things (IoT) devices has opened a plethora of opportunities for smart networking, connected applications and data-driven intelligence. The large distribution of IoT devices within a finite geographical area and the pervasiveness of wireless networking present an opportunity for such devices to collaborate. Centralized decision systems have so far dominated the field, but they are starting to lose relevance in the wake of the heterogeneity of the device pool. This thesis is driven by three key hypotheses: (i) in solving complex problems, it is possible to harness unused compute capabilities of the device pool instead of always relying on centralized infrastructures; (ii) when possible, collaborating with neighbors to identify security threats scales well in large environments; (iii) given the abundance of data from a large pool of devices with possible privacy constraints, collaborative learning drives scalable intelligence. This dissertation defines three frameworks for these hypotheses: collaborative computing, collaborative security and collaborative privacy intelligence. The first framework, Opportunistic Collaboration among IoT Devices for Workload Execution, profiles applications and matches resource grants to requests using blockchain to put excess capacity at the edge to good use. The evaluation results show app execution latency comparable to the centralized edge and outstanding resource utilization at the edge. The second framework, Integrity Threat Identification for Distributed IoT, uses a new spatio-temporal algorithm based on the Local Outlier Factor (LOF), uniquely using mean and variance collaboratively across spatial and temporal dimensions to identify potential threats. Evaluation results on a real-world underground sensor dataset (Thoreau) show good accuracy and efficiency. The third framework, Collaborative Privacy Intelligence, aims to understand privacy invasion by reverse engineering a user's privacy model from sensor data, and scores the level of intrusion along various dimensions of privacy. By having sensors track activities and learning rule books from the collective insights, we are able to predict one's privacy attributes and states with reasonable accuracy. As the Edge gains more prominence with computation moving closer to the data source, the above frameworks will drive key solutions and research in areas of Edge federation and collaboration.
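    The thesis extends LOF with collaborative mean/variance statistics across spatial and temporal dimensions; that extension is not reproduced here, but a minimal sketch of the underlying LOF step it builds on, applied to synthetic sensor readings arranged as (x, y, time, value) feature vectors, looks like this:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Hedged sketch: vanilla LOF on synthetic sensor readings. The normal pool
# and the tampered readings (same location/time profile, shifted value) are
# fabricated for illustration, not drawn from the Thoreau dataset.

rng = np.random.default_rng(0)
normal = rng.normal(loc=[5, 5, 12, 20], scale=[2, 2, 4, 1], size=(200, 4))
tampered = rng.normal(loc=[5, 5, 12, 35], scale=[2, 2, 4, 1], size=(5, 4))
readings = np.vstack([normal, tampered])  # rows: (x, y, time, value)

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(readings)       # -1 flags potential threats
scores = -lof.negative_outlier_factor_   # higher = more anomalous

flagged = np.where(labels == -1)[0]
print(f"Flagged {flagged.size} readings as integrity threats: {flagged}")
```

    LOF is a density-based detector: a reading is suspicious when its local neighborhood is much sparser than those of its neighbors, which is why a sensor reporting values inconsistent with nearby sensors at similar times stands out even without a global threshold.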

    Learning dynamic algorithm portfolios

    Algorithm selection can be performed using a model of runtime distribution, learned during a preliminary training phase. There is a trade-off between the performance of model-based algorithm selection and the cost of learning the model. In this paper, we treat this trade-off in the context of bandit problems. We propose a fully dynamic and online algorithm selection technique, with no separate training phase: all candidate algorithms are run in parallel, while a model incrementally learns their runtime distributions. A redundant set of time allocators uses the partially trained model to propose machine time shares for the algorithms. A bandit problem solver mixes the model-based shares with a uniform share, gradually increasing the impact of the best time allocators as the model improves. We present experiments with a set of SAT solvers on a mixed SAT-UNSAT benchmark, and with a set of solvers for the Auction Winner Determination problem.
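    A minimal sketch of the share-mixing idea follows; the function names, the toy model-based allocator and the exploration schedule are illustrative assumptions, not the paper's exact method.

```python
# Sketch: blend a uniform time share with a model-based share, letting the
# model's influence grow as more problems are solved. The k/(k+t) schedule
# is a placeholder for the paper's bandit-solver mixing.

def uniform_share(n):
    return [1.0 / n] * n

def model_share(estimated_runtimes):
    """Toy model-based allocator: favor algorithms with low expected runtime."""
    inv = [1.0 / max(r, 1e-9) for r in estimated_runtimes]
    total = sum(inv)
    return [v / total for v in inv]

def mixed_share(estimated_runtimes, problems_seen, k=10.0):
    """Exploration weight decays as the runtime model sees more problems."""
    eps = k / (k + problems_seen)
    u = uniform_share(len(estimated_runtimes))
    m = model_share(estimated_runtimes)
    return [eps * ui + (1 - eps) * mi for ui, mi in zip(u, m)]

# Early on the blend stays near uniform, so every solver keeps running...
print(mixed_share([2.0, 8.0, 30.0], problems_seen=1))
# ...later the model's preference for the fast solver dominates.
print(mixed_share([2.0, 8.0, 30.0], problems_seen=200))
```

    The uniform component guarantees every candidate keeps receiving some machine time, so a solver the partially trained model underrates is never starved out before the model can correct itself.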

    Query-Based Learning for Aerospace Applications

    Models of real-world applications often include a large number of parameters with a wide dynamic range, which contributes to the difficulties of neural network training. Creating the training data set for such applications becomes costly, if not impossible. To overcome this challenge, one can employ an active learning technique known as query-based learning (QBL) to add performance-critical data to the training set during the learning phase, thereby efficiently improving the overall learning/generalization. The performance-critical data can be obtained using an inverse mapping called network inversion (discrete network inversion and continuous network inversion) followed by oracle query. This paper investigates the use of both inversion techniques for QBL, and introduces an original heuristic to select the inversion target values for the continuous network inversion method. Efficiency and generalization were further enhanced by employing node decoupled extended Kalman filter (NDEKF) training and a causality index (CI) as a means to reduce the input search dimensionality. The benefits of the overall QBL approach are experimentally demonstrated in two aerospace applications: a classification problem with a large input space and a control distribution problem.
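    A hedged sketch of continuous network inversion, the gradient-based step the QBL loop relies on: the weights of a toy one-layer sigmoid "network" are frozen and gradient descent is performed on the input until the output reaches a chosen inversion target. The target of 0.5 is a common boundary choice used here for illustration; the paper's heuristic for selecting targets is not reproduced.

```python
import numpy as np

# Network inversion sketch: descend on the INPUT (weights frozen) until the
# output matches an inversion target near the decision boundary, then hand
# that input to the oracle for labeling. A one-layer logistic model stands
# in for a trained network to keep the example short.

rng = np.random.default_rng(1)
w, b = rng.normal(size=3), 0.2  # frozen, "trained" weights (fabricated)

def forward(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))  # sigmoid output in (0, 1)

def invert(target=0.5, lr=0.5, steps=200):
    """Find an input whose network output matches the inversion target."""
    x = rng.normal(size=3)  # random starting point
    for _ in range(steps):
        y = forward(x)
        # d(loss)/dx for loss = 0.5 * (y - target)^2, via the chain rule:
        grad_x = (y - target) * y * (1 - y) * w
        x -= lr * grad_x
    return x

query = invert(target=0.5)  # boundary point to ask the oracle about
print(f"output at query point: {forward(query):.3f}")  # close to 0.5
```

    In the full QBL loop, the oracle's label for each such query point is added to the training set and the network is retrained, concentrating new data where the model is least certain.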