    The Y'Barbo Legend and Early Spanish Settlement

    Implementation of a data management software system for SSME test history data

    The implementation of a software system for managing Space Shuttle Main Engine (SSME) test/flight historical data is presented. The software system uses the database management system RIM7 for primary data storage and routine data management, but includes several FORTRAN programs, described here, which provide customized access to the RIM7 database. The consolidation, modification, and transfer of data from the database THIST to the RIM7 database THISRM is discussed. The RIM7 utility modules for generating some standard reports from THISRM and performing some routine updating and maintenance are briefly described. The FORTRAN accessing programs described include programs for initial loading of large data sets into the database, capturing data from files for database inclusion, and producing specialized statistical reports which cannot be provided by the RIM7 report generator utility. An expert system tutorial, constructed using the expert system shell product INSIGHT2, is described. Finally, a potential expert system, which would analyze data in the database, is outlined. This system could also use INSIGHT2 and would take advantage of RIM7's compatibility with the microcomputer database system RBase 5000.
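
    As an illustration of the kind of customized database access the abstract describes, here is a minimal Python sketch with SQLite standing in for RIM7 and invented table and column names; it is not the original FORTRAN tooling, only a hedged stand-in for bulk-loading test-history records and producing a simple per-engine statistical report.

```python
# Minimal sketch (not the original RIM7/FORTRAN system): bulk-load hypothetical
# SSME test-history records into a relational table and print a simple
# per-engine statistical summary. Table and column names are invented.
import sqlite3

records = [
    # (test_id, engine_id, duration_s, cutoff_reason)
    ("A1-001", "E2011", 520.0, "planned"),
    ("A1-002", "E2011", 8.5,   "redline"),
    ("A1-003", "E2015", 300.0, "planned"),
]

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE test_history (
    test_id TEXT PRIMARY KEY, engine_id TEXT,
    duration_s REAL, cutoff_reason TEXT)""")
con.executemany("INSERT INTO test_history VALUES (?, ?, ?, ?)", records)

# Specialized report: per-engine test counts and mean duration.
for engine_id, n, mean_s in con.execute(
        "SELECT engine_id, COUNT(*), AVG(duration_s) "
        "FROM test_history GROUP BY engine_id"):
    print(f"{engine_id}: {n} tests, mean duration {mean_s:.1f} s")
```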

    Use of an expert system data analysis manager for space shuttle main engine test evaluation

    The ability to articulate, collect, and automate the application of the expertise needed for the analysis of space shuttle main engine (SSME) test data would be of great benefit to NASA liquid rocket engine experts. This paper describes a project whose goal is to build a rule-based expert system that incorporates such expertise. Experiential expertise, collected directly from the experts currently involved in SSME data analysis, is used to build a rule base to identify engine anomalies similar to those analyzed previously. Additionally, an alternate method of expertise capture is being explored. This method would generate rules inductively, based on calculations made using a theoretical model of the SSME's operation. The latter rules would be capable of diagnosing anomalies which may not have appeared before, but whose effects can be predicted by the theoretical model.
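
    A toy sketch of the rule-based idea, not the project's actual rule base: hypothetical sensor features and thresholds are matched against hand-written rules to label candidate anomalies. All feature names, thresholds, and anomaly labels below are invented for illustration; a real rule base would be elicited from SSME analysts or induced from a theoretical engine model.

```python
# Toy rule-based anomaly identifier in the spirit of the expert-system
# approach described above. Sensor names, thresholds, and anomaly labels
# are hypothetical.

RULES = [
    # (anomaly label, predicate over a dict of test-data features)
    ("turbopump bearing wear",
     lambda d: d["hpotp_vibration_g"] > 4.0 and d["speed_rpm"] > 27000),
    ("coolant channel blockage",
     lambda d: d["mcc_coolant_dp_psi"] < 150 and d["chamber_temp_R"] > 6500),
]

def diagnose(features):
    """Return the labels of all rules whose conditions the test data satisfy."""
    return [label for label, cond in RULES if cond(features)]

test_slice = {"hpotp_vibration_g": 4.6, "speed_rpm": 27800,
              "mcc_coolant_dp_psi": 210, "chamber_temp_R": 6100}
print(diagnose(test_slice))   # -> ['turbopump bearing wear']
```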

    A Monte Carlo study of Weibull reliability analysis for space shuttle main engine components

    The incorporation of a number of additional capabilities into an existing Weibull analysis computer program and the results of a Monte Carlo computer simulation study to evaluate the usefulness of the Weibull methods using samples with a very small number of failures and extensive censoring are discussed. Since the censoring mechanism inherent in the Space Shuttle Main Engine (SSME) data is hard to analyze, it was decided to use a random censoring model, generating censoring times from a uniform probability distribution. Some of the statistical techniques and computer programs that are used in the SSME Weibull analysis are described. The previously documented methods were supplemented by adding computer calculations of approximate (using iterative methods) confidence intervals for several parameters of interest. These calculations are based on a likelihood ratio statistic which is asymptotically a chi-squared statistic with one degree of freedom. The assumptions built into the computer simulations are described, along with the simulation program and the techniques used in it. Simulation results are tabulated for various combinations of Weibull shape parameters and the numbers of failures in the samples.
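
    A minimal sketch of the simulation setup the abstract describes, with illustrative parameter values: Weibull failure times are right-censored by uniformly distributed censoring times, and the shape and scale parameters are then fit by maximum likelihood. SciPy's general-purpose optimizer stands in for the study's own code, and the likelihood-ratio confidence intervals are not reproduced.

```python
# Sketch of a Monte Carlo replication step: small Weibull sample, uniform
# random right-censoring, maximum-likelihood fit of shape and scale.
# Sample size and parameter values are illustrative, not the study's.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
shape_true, scale_true, n = 2.0, 100.0, 10

failure = scale_true * rng.weibull(shape_true, n)   # true failure times
censor = rng.uniform(0.0, 150.0, n)                 # random censoring times
t = np.minimum(failure, censor)                     # observed times
observed = failure <= censor                        # True where a failure is seen

def neg_log_lik(params):
    beta, eta = np.exp(params)                      # keep parameters positive
    z = (t / eta) ** beta
    ll = np.sum(observed * (np.log(beta / eta) + (beta - 1) * np.log(t / eta)))
    ll -= np.sum(z)                                 # survivor term for all points
    return -ll

fit = minimize(neg_log_lik, x0=np.log([1.0, np.median(t)]))
beta_hat, eta_hat = np.exp(fit.x)
print(f"shape ~ {beta_hat:.2f}, scale ~ {eta_hat:.2f}, "
      f"{observed.sum()} failures out of {n}")
```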

    Faster Convex Optimization: Simulated Annealing with an Efficient Universal Barrier

    This paper explores a surprising equivalence between two seemingly distinct convex optimization methods. We show that simulated annealing, a well-studied random walk algorithm, is directly equivalent, in a certain sense, to the central path interior point algorithm for the entropic universal barrier function. This connection has several benefits. First, we are able to improve the state-of-the-art time complexity for convex optimization under the membership oracle model. We improve the analysis of the randomized algorithm of Kalai and Vempala by utilizing tools developed by Nesterov and Nemirovskii that underlie the central path following interior point algorithm. We are able to tighten the temperature schedule for simulated annealing, which gives an improved running time, reducing it by a factor of the square root of the dimension in certain instances. Second, we get an efficient randomized interior point method with an efficiently computable universal barrier for any convex set described by a membership oracle. Previously, efficiently computable barriers were known only for particular convex sets.
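
    A simplified sketch of simulated annealing for convex minimization under a membership oracle, not the paper's algorithm or analysis: Metropolis moves target the Boltzmann density exp(-c·x / T) inside the feasible set, with a geometric cooling schedule. The problem instance, random walk, and schedule are illustrative only; the paper's approach uses hit-and-run sampling and a carefully tuned temperature schedule.

```python
# Simplified simulated-annealing sketch: minimize a linear objective over a
# convex set given only by a membership oracle (here, the unit ball).
import numpy as np

rng = np.random.default_rng(1)

def member(x):
    """Membership oracle: here, the Euclidean unit ball."""
    return np.linalg.norm(x) <= 1.0

c = np.array([1.0, 2.0, -0.5])      # minimize c.x over the feasible set
x = np.zeros(3)                     # feasible starting point
T, cooling, step = 1.0, 0.995, 0.2

for _ in range(5000):
    proposal = x + step * rng.standard_normal(3)
    if member(proposal):
        delta = c @ proposal - c @ x
        # Metropolis rule for the Boltzmann density exp(-c.x / T)
        if delta <= 0 or rng.random() < np.exp(-delta / T):
            x = proposal
    T *= cooling                    # geometric temperature schedule

print("annealed point:", np.round(x, 3))
print("objective:", round(float(c @ x), 3), "(optimum is -|c| ~ -2.291)")
```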

    A Collaborative Mechanism for Crowdsourcing Prediction Problems

    Machine Learning competitions such as the Netflix Prize have proven reasonably successful as a method of "crowdsourcing" prediction tasks. But these competitions have a number of weaknesses, particularly in the incentive structure they create for the participants. We propose a new approach, called a Crowdsourced Learning Mechanism, in which participants collaboratively "learn" a hypothesis for a given prediction task. The approach draws heavily from the concept of a prediction market, where traders bet on the likelihood of a future event. In our framework, the mechanism continues to publish the current hypothesis, and participants can modify this hypothesis by wagering on an update. The critical incentive property is that a participant will profit an amount that scales according to how much her update improves performance on a released test set. Comment: Full version of the extended abstract which appeared in NIPS 201
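
    A toy sketch of the incentive property, not the paper's formal mechanism: the mechanism holds a published hypothesis, a participant wagers on an update, and her payoff scales with how much the update improves loss on the released test set. The loss function, data, and payoff scaling below are invented for illustration.

```python
# Toy crowdsourced-learning mechanism: payoff proportional to the relative
# loss improvement an update achieves on a released test set.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 3))                       # released test set
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(200)

def loss(w):
    """Mean squared error of hypothesis w on the released test set."""
    return float(np.mean((X @ w - y) ** 2))

def trade(current, update, wager):
    """Apply a wagered update; payoff scales with relative loss improvement."""
    gain = (loss(current) - loss(update)) / loss(current)
    payoff = wager * max(gain, -1.0)    # a participant can lose at most her wager
    return update, payoff

h = np.zeros(3)                         # initially published hypothesis
h, p1 = trade(h, np.array([0.8, -1.5, 0.0]), wager=5.0)
h, p2 = trade(h, np.array([1.0, -2.0, 0.5]), wager=5.0)
print("payoffs:", round(p1, 2), round(p2, 2))
```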

    Rate of Price Discovery in Iterative Combinatorial Auctions

    We study a class of iterative combinatorial auctions which can be viewed as subgradient descent methods for the problem of pricing bundles to balance supply and demand. We provide concrete convergence rates for auctions in this class, bounding the number of auction rounds needed to reach clearing prices. Our analysis allows for a variety of pricing schemes, including item, bundle, and polynomial pricing, and the respective convergence rates confirm that more expressive pricing schemes come at the cost of slower convergence. We consider two models of bidder behavior. In the first model, bidders behave stochastically according to a random utility model, which includes standard best-response bidding as a special case. In the second model, bidders behave arbitrarily (even adversarially), and meaningful convergence relies on properly designed activity rules.
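
    A minimal sketch of an item-pricing iterative auction viewed as a subgradient method, under simplifying assumptions not taken from the paper (additive bidder valuations, unit supply, diminishing step sizes): each round, bidders best-respond to posted prices and each item's price moves in the direction of its excess demand.

```python
# Item-pricing auction as subgradient steps on prices: price rises while an
# item is over-demanded and stops once demand matches supply. Toy instance.
import numpy as np

values = np.array([[8.0, 5.0, 1.0],    # bidder 0's value for items A, B, C
                   [6.0, 7.0, 2.0],
                   [3.0, 4.0, 9.0]])
supply = np.ones(3)                    # one unit of each item
prices = np.zeros(3)
step = 0.5

for t in range(200):
    demand = (values > prices).sum(axis=0)   # additive bidders take profitable items
    excess = demand - supply                 # subgradient direction
    prices = np.maximum(prices + step / (t + 1) * excess, 0.0)

print("approximate clearing prices:", np.round(prices, 2))
print("demand at final prices:", (values > prices).sum(axis=0))
```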

    Fighting Bandits with a New Kind of Smoothness

    We define a novel family of algorithms for the adversarial multi-armed bandit problem, and provide a simple analysis technique based on convex smoothing. We prove two main results. First, we show that regularization via the \emph{Tsallis entropy}, which includes EXP3 as a special case, achieves the $\Theta(\sqrt{TN})$ minimax regret. Second, we show that a wide class of perturbation methods achieve a near-optimal regret as low as $O(\sqrt{TN \log N})$ if the perturbation distribution has a bounded hazard rate. For example, the Gumbel, Weibull, Fréchet, Pareto, and Gamma distributions all satisfy this key property. Comment: In Proceedings of NIPS, 201
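
    For concreteness, here is a sketch of standard EXP3, the special case the abstract attributes to Tsallis-entropy regularization; the paper's broader Tsallis and perturbation-based algorithms are not reproduced, and the losses and parameters below are illustrative.

```python
# Standard EXP3 for the adversarial multi-armed bandit: exponential weights
# over importance-weighted loss estimates.
import numpy as np

rng = np.random.default_rng(3)
N, T = 5, 5000
eta = np.sqrt(np.log(N) / (T * N))     # a standard step-size choice
weights = np.ones(N)
total_loss = 0.0

for t in range(T):
    probs = weights / weights.sum()
    arm = rng.choice(N, p=probs)
    losses = rng.uniform(0.0, 1.0, N)  # stand-in for adversarially chosen losses
    losses[2] *= 0.5                   # arm 2 is best on average in this toy run
    total_loss += losses[arm]
    est = losses[arm] / probs[arm]     # importance-weighted loss estimate
    weights[arm] *= np.exp(-eta * est)
    weights /= weights.max()           # rescale for numerical stability

print("average loss:", round(total_loss / T, 3), "(best arm's mean loss is 0.25)")
```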

    Probing expert anticipation with the temporal occlusion paradigm: Experimental investigations of some methodological issues

    Copyright @ 2005 Human Kinetics. Two experiments were conducted to examine whether the conclusions drawn regarding the timing of anticipatory information pick-up from temporal occlusion studies are influenced by whether (a) the viewing period is of variable or fixed duration and (b) the task is a laboratory-based one with simple responses or a natural one requiring a coupled, interceptive movement response. Skilled and novice tennis players either made pencil-and-paper predictions of service direction (Experiment 1) or attempted to hit return strokes (Experiment 2) to tennis serves while their vision was temporally occluded in either a traditional progressive mode (where more information was revealed in each subsequent occlusion condition) or a moving window mode (where the visual display was only available for a fixed duration, with this window shifted to different phases of the service action). Conclusions regarding the timing of information pick-up were generally consistent across display mode and across task setting, lending support to the veracity and generalisability of findings regarding perceptual expertise in existing laboratory-based progressive temporal occlusion studies. This study is funded by the Australian Institute of Sport Tennis program.