61 research outputs found

    RCRA Citizen Suits and Restitution: The Eighth Circuit's Full Cort Press Strangles Equity's Traditional Remedial Play

    Congress creates a federal right of action for private citizens in two ways. First, Congress can expressly grant this right in the statute's language. Second, Congress can implicitly create a right of action. In Cort v. Ash, the Supreme Court set forth a method of analyzing a statute to determine whether Congress implied a private right of action. This Note addresses Furrer v. Brown, a recent decision highlighting the Eighth Circuit's confusion over the distinction between finding an implicit right of action and determining the available remedies for an existing right of action.

    Prompting for a conversation: How to control a dialog model?

    Dialog modelling faces a difficult trade-off. Models are trained on a large amount of text, yet their responses need to be limited to the desired scope and style of a dialog agent. Because the datasets used to achieve the former contain language that is not compatible with the latter, pre-trained dialog models are fine-tuned on smaller curated datasets. However, the fine-tuning process robs them of the ability to produce diverse responses, eventually reducing them to dull conversation partners. In this paper we investigate whether prompting can mitigate the above trade-off. Specifically, we experiment with conditioning the prompt on the query, rather than training a single prompt for all queries. Following the intuition that freezing the pre-trained language model will conserve its expressivity, we find that compared to fine-tuning, prompting can achieve a higher BLEU score and substantially improve the diversity and novelty of the responses.
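    The core idea can be sketched as follows: instead of learning one shared soft prompt, a small trainable network maps each query to its own prompt vectors, which are prepended to the input embeddings of a frozen pre-trained LM. The sketch below assumes a GPT-2 backbone from Hugging Face transformers; the module and variable names (QueryConditionedPrompt, prompt_net, the 10-vector prompt length) are illustrative assumptions, not the paper's implementation.

    import torch
    import torch.nn as nn
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    class QueryConditionedPrompt(nn.Module):
        """Maps a mean-pooled query embedding to n_prompt soft prompt vectors."""
        def __init__(self, d_model, n_prompt=10):
            super().__init__()
            self.n_prompt = n_prompt
            self.proj = nn.Sequential(
                nn.Linear(d_model, d_model),
                nn.Tanh(),
                nn.Linear(d_model, n_prompt * d_model),
            )

        def forward(self, query_embeds):
            pooled = query_embeds.mean(dim=1)                   # (batch, d_model)
            return self.proj(pooled).view(-1, self.n_prompt, query_embeds.size(-1))

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.requires_grad_(False)                 # freeze the LM to conserve its expressivity
    prompt_net = QueryConditionedPrompt(model.config.n_embd)    # only this part is trained

    query = tokenizer("How was your weekend?", return_tensors="pt")
    reply = tokenizer(" It was lovely, thanks for asking!", return_tensors="pt")
    q_emb = model.transformer.wte(query.input_ids)
    r_emb = model.transformer.wte(reply.input_ids)
    prompt = prompt_net(q_emb)                                  # (1, n_prompt, d_model)
    inputs_embeds = torch.cat([prompt, q_emb, r_emb], dim=1)

    # Compute the LM loss only on reply tokens; prompt and query positions are masked out.
    ignore = torch.full((1, prompt.size(1) + q_emb.size(1)), -100, dtype=torch.long)
    labels = torch.cat([ignore, reply.input_ids], dim=1)
    loss = model(inputs_embeds=inputs_embeds, labels=labels).loss
    loss.backward()                             # gradients reach only the prompt network

    Because the backbone's weights never change, the query-conditioned prompt steers each response toward the desired scope and style while the model retains the diverse language it learned during pre-training.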

    Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems

    Natural language generation (NLG) is a critical component of spoken dialogue systems and has a significant impact on both usability and perceived quality. Most NLG systems in common use employ rules and heuristics and tend to generate rigid and stylised responses without the natural variation of human language. They are also not easily scaled to systems covering multiple domains and languages. This paper presents a statistical language generator based on a semantically controlled Long Short-Term Memory (LSTM) structure. The LSTM generator can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross-entropy training criterion, and language variation can be easily achieved by sampling from output candidates. An objective evaluation in two differing test domains showed that the proposed method, despite using fewer heuristics, improved performance compared to previous methods. Human judges scored the LSTM system higher on informativeness and naturalness and overall preferred it to the other systems. (Comment: to appear in EMNLP 201)
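    As a rough illustration of the semantic conditioning mechanism described above, the sketch below implements a single SC-LSTM-style cell in PyTorch: the dialogue act is represented as a feature vector that a learned reading gate gradually consumes as tokens are generated, and the remaining dialogue-act content feeds into the cell state. Layer names and sizes are assumptions for illustration, not the authors' exact formulation.

    import torch
    import torch.nn as nn

    class SCLSTMCell(nn.Module):
        def __init__(self, input_size, hidden_size, da_size):
            super().__init__()
            self.gates = nn.Linear(input_size + hidden_size, 4 * hidden_size)  # i, f, o, g
            self.read_gate = nn.Linear(input_size + hidden_size, da_size)      # reading gate r_t
            self.da_to_cell = nn.Linear(da_size, hidden_size, bias=False)      # injects the DA into the cell

        def forward(self, x, h, c, d):
            z = torch.cat([x, h], dim=-1)
            i, f, o, g = self.gates(z).chunk(4, dim=-1)
            i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
            r = self.read_gate(z).sigmoid()          # how much of the DA to "read" at this step
            d = r * d                                # remaining, not-yet-expressed DA features
            c = f * c + i * g + self.da_to_cell(d).tanh()
            h = o * c.tanh()
            return h, c, d                           # d shrinks toward zero over the sentence

    # Toy usage: step through a 3-token sequence with a 5-dimensional dialogue-act vector.
    cell = SCLSTMCell(input_size=8, hidden_size=16, da_size=5)
    h = torch.zeros(1, 16); c = torch.zeros(1, 16); d = torch.ones(1, 5)
    for x in torch.randn(3, 1, 8):
        h, c, d = cell(x, h, c, d)

    In the full model this cell is unrolled over the sentence with a cross-entropy loss on the next token, and sampling from the output distribution provides the natural variation that rule-based generators lack.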

    Reward Shaping with Recurrent Neural Networks for Speeding up On-Line Policy Learning in Spoken Dialogue Systems

    Statistical spoken dialogue systems have the attractive property of being able to be optimised from data via interactions with real users. However, in the reinforcement learning paradigm the dialogue manager (agent) often requires significant time to explore the state-action space to learn to behave in a desirable manner. This is a critical issue when the system is trained on-line with real users, where learning costs are expensive. Reward shaping is one promising technique for addressing these concerns. Here we examine three recurrent neural network (RNN) approaches for providing reward shaping information in addition to the primary (task-orientated) environmental feedback. These RNNs are trained on returns from dialogues generated by a simulated user and attempt to diffuse the overall evaluation of the dialogue back down to the turn level to guide the agent towards good behaviour faster. In both simulated and real user scenarios these RNNs are shown to increase policy learning speed. Importantly, they do not require prior knowledge of the user's goal. (Comment: accepted for publication in SigDial 201)
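    A minimal sketch of that idea follows, under assumed feature sizes and a toy stand-in for the simulated-user dialogues: an LSTM reads turn-level features and is trained so that its per-turn outputs sum to the overall dialogue return; those per-turn predictions are then added to the sparse environmental reward as the shaping signal.

    import torch
    import torch.nn as nn

    class TurnRewardRNN(nn.Module):
        """Predicts a per-turn share of the final dialogue return."""
        def __init__(self, feat_size, hidden_size=64):
            super().__init__()
            self.rnn = nn.LSTM(feat_size, hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, 1)

        def forward(self, turn_feats):                     # (batch, turns, feat_size)
            out, _ = self.rnn(turn_feats)
            return self.head(out).squeeze(-1)              # (batch, turns)

    # Toy stand-in for dialogues generated by a simulated user: (turn features, return).
    simulated_dialogues = [(torch.randn(5, 32), torch.tensor(8.0)) for _ in range(4)]

    model = TurnRewardRNN(feat_size=32)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for turn_feats, dialogue_return in simulated_dialogues:
        per_turn = model(turn_feats.unsqueeze(0))
        loss = (per_turn.sum() - dialogue_return) ** 2     # per-turn outputs should sum to the return
        opt.zero_grad(); loss.backward(); opt.step()

    # During on-line learning, the agent's reward at turn t becomes
    # r_env[t] + per_turn[t], giving informative turn-level feedback
    # without requiring knowledge of the user's goal.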

    Grid generation: A view from the trenches

    This paper presents 'a view from the trenches' on CFD grid generation from a Pratt & Whitney perspective. We anticipate that other organizations have similar views. We focus on customer expectations and the consequent requirements. We enunciate a vision for grid generation and discuss issues that developers must recognize.

    Status and Design Concepts for the Hydrogen On-Orbit Storage and Supply Experiment

    This paper studies concepts for the Hydrogen On-Orbit Storage and Supply Experiment (HOSS). HOSS is a space flight experiment whose objectives are to demonstrate stable gas supply for storage and direct-gain solar-thermal thruster designs, and to evaluate and compare the low-g performance of active and passive pressure control via a thermodynamic vent system (TVS) suitable for solar-thermal upper stages. This paper shows that the necessary experimental equipment for HOSS can be accommodated in a small hydrogen dewar of 36 to 80 liters. Thermal designs for these dewars that meet the on-orbit storage requirements can be achieved. Furthermore, ground-hold insulation and shielding concepts are achieved that enable storing initially subcooled liquid hydrogen in these small dewars without venting for more than 144 hours.

    Non-Autoregressive Text Generation with Pre-trained Language Models

    Non-autoregressive generation (NAG) has recently attracted great attention due to its fast inference speed. However, the generation quality of existing NAG models still lags behind their autoregressive counterparts. In this work, we show that BERT can be employed as the backbone of a NAG model for greatly improved performance. Additionally, we devise two mechanisms to alleviate two common problems of vanilla NAG models: the inflexibility of a pre-fixed output length and the conditional independence of individual token predictions. To further strengthen the speed advantage of the proposed model, we propose a new decoding strategy, ratio-first, for applications where the output length can be approximately estimated beforehand. For a comprehensive evaluation, we test the proposed model on three text generation tasks: text summarization, sentence compression and machine translation. Experimental results show that our model significantly outperforms existing non-autoregressive baselines and achieves competitive performance with many strong autoregressive models. In addition, we conduct extensive analysis experiments to reveal the effect of each proposed component.
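    To make the ratio-first idea concrete, here is a rough sketch of one-step parallel decoding with a BERT masked-LM backbone: when the output length is expected to be roughly a fixed ratio of the source length, only the first ceil(ratio * n) positions are filled with [MASK] tokens and predicted in a single forward pass. The ratio value, the masking layout, and the use of the off-the-shelf bert-base-uncased checkpoint are illustrative assumptions; the paper's model is fine-tuned for the target task, so this only shows the mechanics.

    import math
    import torch
    from transformers import BertForMaskedLM, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

    source = "the quick brown fox jumped over the lazy dog near the river bank"
    src_ids = tokenizer(source, return_tensors="pt").input_ids      # (1, n)
    n = src_ids.size(1)
    ratio = 0.5                                                     # assumed target/source length ratio
    tgt_len = math.ceil(ratio * n)

    # Append tgt_len [MASK] positions and predict them all in one parallel step.
    masks = torch.full((1, tgt_len), tokenizer.mask_token_id)
    input_ids = torch.cat([src_ids, masks], dim=1)
    with torch.no_grad():
        logits = model(input_ids=input_ids).logits                  # (1, n + tgt_len, vocab)

    # No left-to-right loop: every masked position is filled simultaneously.
    pred_ids = logits[0, n:].argmax(dim=-1)
    print(tokenizer.decode(pred_ids))

    Because none of the tgt_len predictions waits for the others, inference time does not grow with output length the way autoregressive decoding does; the ratio-first restriction simply avoids decoding positions that would almost certainly be padding.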

    Rocket Engine Plume Diagnostics at Stennis Space Center

    The Stennis Space Center (SSC) has been at the forefront of the development and application of exhaust plume spectroscopy to rocket engine health monitoring since 1989. Various spectroscopic techniques, such as emission, absorption, FTIR, LIF, and CARS, have been considered for application at the engine test stands. By far the most successful technology has been exhaust plume emission spectroscopy. In particular, its application to Space Shuttle Main Engine (SSME) ground test health monitoring has been invaluable in various engine testing and development activities at SSC since 1989. On several occasions, plume diagnostic methods have successfully detected a problem with one or more components of an engine long before any other sensor indicated a problem. More often, they provide corroboration for a failure mode, if any occurred during an engine test. This paper gives a brief overview of our instrumentation and computational systems for rocket engine plume diagnostics at SSC. Some examples of successful application of exhaust plume spectroscopy (emission as well as absorption) to SSME testing are presented. Our on-going plume diagnostics technology development projects and future requirements are discussed.