
    The influence of radiation shielding on reusable nuclear shuttle design

    Alternate reusable nuclear shuttle configurations were synthesized and evaluated. Particular attention was given to design factors which reduced tank exposure to direct and scattered radiation, increased payload-engine separation, and improved self-shielding by the LH2 propellant. The most attractive RNS concept in terms of cost effectiveness consists of a single conical aft bulkhead tank with a high fineness ratio. Launch is accomplished by the INT-21 with the tank positioned in the inverted attitude. The NERVA engine is delivered to orbit separately, where final stage assembly and checkout are accomplished. This approach is consistent with NERVA definition criteria and required operating procedures to support an economically viable nuclear shuttle transportation program in the post-1980 period.
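    The dose-reduction levers named in the abstract follow from simple point-source attenuation: dose falls off with the square of the payload-engine separation and exponentially with the thickness of LH2 between the engine and the payload. The sketch below only illustrates that relationship; the removal coefficient and distances are arbitrary assumptions, not values from the study.

        import math

        def relative_dose(separation_m, lh2_thickness_m, mu_lh2_per_m=0.16):
            """Uncollided-dose figure of merit for a point source behind the tank:
            inverse-square falloff with payload-engine separation multiplied by
            exponential attenuation through the LH2 column. mu_lh2_per_m is an
            assumed effective removal coefficient, not a measured value."""
            return math.exp(-mu_lh2_per_m * lh2_thickness_m) / separation_m ** 2

        # Doubling the separation and keeping a full LH2 column between the
        # engine and the payload both reduce the uncollided dose sharply.
        base = relative_dose(separation_m=30.0, lh2_thickness_m=10.0)
        improved = relative_dose(separation_m=60.0, lh2_thickness_m=20.0)
        print(f"dose ratio, improved vs. base: {improved / base:.3f}")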

    ATTac-2000: An Adaptive Autonomous Bidding Agent

    The First Trading Agent Competition (TAC) was held from June 22nd to July 8th, 2000. TAC was designed to create a benchmark problem in the complex domain of e-marketplaces and to motivate researchers to apply unique approaches to a common task. This article describes ATTac-2000, the first-place finisher in TAC. ATTac-2000 uses a principled bidding strategy that includes several elements of adaptivity. In addition to its success at the competition, isolated empirical results are presented indicating the robustness and effectiveness of ATTac-2000's adaptive strategy.

    Reinforcement Learning: A Survey

    This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word ``reinforcement.'' The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning. (Comment: see http://www.jair.org/ for any accompanying file.)
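    As a minimal illustration of the trial-and-error loop and the exploration-exploitation trade-off the survey discusses, the sketch below implements tabular Q-learning with an epsilon-greedy policy. The environment interface (reset, step, actions) and all parameter values are assumptions made for the sketch, not details from the paper.

        import random
        from collections import defaultdict

        def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
            """Tabular Q-learning with an epsilon-greedy exploration policy.
            env is assumed to expose reset() -> state, step(action) ->
            (next_state, reward, done), and a list env.actions of discrete
            actions; this interface is an assumption for the sketch."""
            Q = defaultdict(float)  # (state, action) -> estimated value
            for _ in range(episodes):
                state, done = env.reset(), False
                while not done:
                    # Explore with probability epsilon, otherwise exploit.
                    if random.random() < epsilon:
                        action = random.choice(env.actions)
                    else:
                        action = max(env.actions, key=lambda a: Q[(state, a)])
                    next_state, reward, done = env.step(action)
                    # One-step temporal-difference update propagates delayed reward.
                    best_next = 0.0 if done else max(Q[(next_state, a)] for a in env.actions)
                    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
                    state = next_state
            return Q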

    Contingent planning under uncertainty via stochastic satisfiability

    We describe a new planning technique that efficiently solves probabilistic propositional contingent planning problems by converting them into instances of stochastic satisfiability (SSAT) and solving these instances instead. We make fundamental contributions in two areas: the solution of SSAT problems and the solution of stochastic planning problems. This is the first work extending the planning-as-satisfiability paradigm to stochastic domains. Our planner, ZANDER, can solve arbitrary, goal-oriented, finite-horizon partially observable Markov decision processes (POMDPs). An empirical study comparing ZANDER to seven other leading planners shows that its performance is competitive on a range of problems.
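    The planning-as-satisfiability idea can be made concrete with a toy stochastic satisfiability (SSAT) instance: existential (choice) variables are set so as to maximize the probability that a CNF goal formula holds, while randomized (chance) variables are averaged over. The evaluator and instance below are an invented illustration, not one of ZANDER's actual encodings.

        def ssat_value(prefix, clauses, assignment=None):
            """Evaluate a small SSAT instance by recursion over the prefix.
            prefix: list of (kind, var) pairs, kind 'E' (existential, maximized)
            or 'R' (randomized, assumed probability 0.5 of True, averaged).
            clauses: CNF as a list of lists of signed ints (DIMACS-style literals)."""
            assignment = assignment or {}
            if not prefix:
                sat = all(any(assignment.get(abs(l)) == (l > 0) for l in c) for c in clauses)
                return 1.0 if sat else 0.0
            (kind, var), rest = prefix[0], prefix[1:]
            vals = [ssat_value(rest, clauses, {**assignment, var: b}) for b in (True, False)]
            return max(vals) if kind == 'E' else 0.5 * (vals[0] + vals[1])

        # Toy instance: choose x1 before the chance variable x2 is resolved;
        # the goal holds if x1 or x2 is True.
        prefix = [('E', 1), ('R', 2)]
        clauses = [[1, 2]]
        print(ssat_value(prefix, clauses))  # 1.0: setting x1 True always satisfies the goal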

    Multinode reconfigurable pipeline computer

    A multinode parallel-processing computer is made up of a plurality of interconnected, large-capacity nodes, each including a reconfigurable pipeline of functional units such as Integer Arithmetic Logic Processors, Floating Point Arithmetic Processors, Special Purpose Processors, etc. The reconfigurable pipeline of each node is connected to a multiplane memory by a Memory-ALU Switch NETwork (MASNET). The reconfigurable pipeline includes three basic substructures formed from functional units which have been found to be sufficient to perform the bulk of all calculations. The MASNET controls the flow of signals from the memory planes to the reconfigurable pipeline and vice versa. The nodes are connectable together by an internode data router (hyperspace router) so as to form a hypercube configuration. The capability of the nodes to conditionally configure the pipeline at each tick of the clock, without requiring a pipeline flush, permits many powerful algorithms to be implemented directly.
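    The hypercube interconnect mentioned above has a convenient addressing property: two nodes are neighbors exactly when their binary addresses differ in a single bit. The sketch below illustrates that topology under the usual addressing assumption; it does not model the hyperspace router itself.

        def hypercube_neighbors(node, dimension):
            """Return the neighbor addresses of `node` in a `dimension`-cube:
            flipping each of the `dimension` address bits yields one neighbor."""
            return [node ^ (1 << bit) for bit in range(dimension)]

        # In a 4-dimensional hypercube (16 nodes), node 5 (0101) links to 4 others.
        print(hypercube_neighbors(5, 4))  # [4, 7, 1, 13]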

    The Coming Age of Parallel-Processing Supercomputer

    It is anticipated that the needs of scientific computation will dramatically outpace the performance of general-purpose supercomputers over the next decade. These needs will, however, be addressed by an emerging class of parallel-processing supercomputers (PPS). The Princeton University Navier-Stokes Computer (NSC) is a PPS geared toward simulating complex flows. It has a projected speed and capacity two orders of magnitude beyond those of current supercomputers. The architecture of the NSC and a discussion of a working prototype are presented.

    Create a translational medicine knowledge repository - Research downsizing, mergers and increased outsourcing have reduced the depth of in-house translational medicine expertise and institutional memory at many pharmaceutical and biotech companies: how will they avoid relearning old lessons?

    Pharmaceutical industry consolidation and overall research downsizing threaten the ability of companies to benefit from their previous investments in translational research, as key leaders with the most knowledge of the successful use of biomarkers and translational pharmacology models are laid off or accept their severance packages. Two recently published books may help to preserve this type of knowledge, but much of this information is not in the public domain. Here we propose the creation of a translational medicine knowledge repository where companies can submit their translational research data and access similar data from other companies in a precompetitive environment. This searchable repository would become an invaluable resource for translational scientists and drug developers and could accelerate new drug development while reducing its cost.