
    Design and flight testing of a nullable compressor face rake

    A compressor face rake with an internal valve arrangement to permit nulling was designed, constructed, and tested in the laboratory and in flight at the NASA Flight Research Center. When actuated by the pilot in flight, the nullable rake allowed the transducer zero shifts to be determined and subsequently removed during data reduction. Design details, the fabrication technique, the principle of operation, brief descriptions of associated digital zero-correction programs and the qualification tests, and test results are included. Sample flight data show that the zero shifts were large and unpredictable but could be measured in flight with the rake. The rake functioned reliably and as expected during 25 hours of operation under flight environmental conditions and temperatures from 230 K (-46 F) to greater than 430 K (314 F). The rake was nulled approximately 1000 times. The in-flight zero-shift measurement technique, as well as the rake design, was successful and should be useful in future applications, particularly where accurate measurements of both steady-state and dynamic pressures are required under adverse environmental conditions.
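
    A minimal sketch of the zero-correction step described above, assuming a simple array layout: readings taken while the rake is nulled estimate each transducer's zero offset, which is then subtracted from the flight data during reduction. The function and variable names are illustrative, not taken from the NASA zero-correction programs.

        import numpy as np

        def correct_zero_shift(pressures, null_readings):
            """Remove per-transducer zero shifts from rake pressure data.

            pressures     : (n_samples, n_transducers) raw flight pressures
            null_readings : (n_null_samples, n_transducers) samples recorded
                            while the rake valve was in the nulled position
            """
            # The mean nulled reading estimates each transducer's zero offset.
            zero_offsets = null_readings.mean(axis=0)
            return pressures - zero_offsets

        # Synthetic example: a 0.5 kPa zero shift on the second transducer.
        raw = np.array([[101.3, 101.8], [101.4, 101.9]])
        nulled = np.array([[0.0, 0.5], [0.0, 0.5]])
        print(correct_zero_shift(raw, nulled))  # second column corrected by 0.5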

    Hybrid Propulsion Technology Program

    Future launch systems of the United States will require improvements in booster safety, reliability, and cost. In order to increase payload capabilities, performance improvements are also desirable. The hybrid rocket motor (HRM) offers the potential for improvements in all of these areas. The designs are presented for two sizes of hybrid boosters, a large 4.57 m (180 in.) diameter booster duplicating the Advanced Solid Rocket Motor (ASRM) vacuum thrust-time profile and a smaller 2.44 m (96 in.), one-quarter thrust level booster. The large booster would be used in tandem, while eight small boosters would be used to achieve the same total thrust. These preliminary designs were generated as part of the NASA Hybrid Propulsion Technology Program. This program is the first phase of an eventual three-phase program culminating in the demonstration of a large subscale engine. The initial trade and sizing studies resulted in preferred motor diameters, operating pressures, nozzle geometry, and fuel grain systems for both the large and small boosters. The data were then used for specific performance predictions in terms of payload and the definition and selection of the requirements for the major components: the oxidizer feed system, nozzle, and thrust vector system. All of the parametric studies were performed using realistic fuel regression models based upon specific experimental data.
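
    As a rough illustration of the kind of empirical fuel regression model mentioned above, the sketch below uses the classical power-law correlation r = a * G_ox^n, where G_ox is the oxidizer mass flux and a, n are fitted to experimental data. The coefficient values and port geometry are placeholders, not figures from the NASA program.

        import math

        def regression_rate(g_ox, a=0.1, n=0.6):
            """Empirical regression rate r = a * G_ox**n in mm/s (placeholder coefficients)."""
            return a * g_ox ** n

        def burn_port(d0, mdot_ox, dt, steps):
            """Crude explicit integration of port diameter growth during a burn."""
            d = d0
            for _ in range(steps):
                g_ox = mdot_ox / (math.pi * d ** 2 / 4.0)      # oxidizer mass flux, kg/(m^2 s)
                d += 2.0 * regression_rate(g_ox) * 1e-3 * dt   # regression on both walls, mm -> m
            return d

        # Example: 0.5 m initial port, 40 kg/s oxidizer flow, 60 s burn.
        print(burn_port(d0=0.5, mdot_ox=40.0, dt=1.0, steps=60))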

    Evaluating rater effects in the context of ethical reasoning essay assessment: An application of the many-facets Rasch measurement model

    Performance assessments are often desired because of their potential for alignment between the assessment and reality. However, due to the rater-mediated nature of scoring (Eckes, 2015), performance assessments have psychometric challenges that cannot be ignored in testing and assessment work. Specifically, performance assessment scores are prone to rater effects, or systematic differences in how raters evaluate performance assessment products (Myford & Wolfe, 2003). The purpose of this project was to evaluate ethical reasoning essay scores for rater effects. The Many-Facets Rasch Measurement (MFRM) model was used to evaluate ethical reasoning essay scores for rater leniency/severity effects, restriction of range, and rater leniency/severity by rubric element interaction effects. Individual rater leniency/severity effects were observed in this sample of raters, as was an interaction effect between rater leniency/severity and rubric element. Moreover, a restriction of range effect was observed, with scores restricted primarily to the lower end of the rubric score categories. To provide a preliminary explanation for differences in rater leniency/severity, the relationship between raters’ knowledge of ethical reasoning and their leniency/severity was evaluated. No relationship between raters’ knowledge of ethical reasoning and their leniency/severity was observed in this study. Based on these findings, recommendations are made for rater training. Specifically, ethical reasoning program coordinators may consider using the MFRM analysis during rating to identify individual raters who are exhibiting rater effects. Program coordinators may then work with individual raters on additional training and rubric calibration to mitigate individual rater effects. Additionally, recommendations are made regarding the statistical adjustment of student scores to mitigate rater leniency/severity effects in the ethical reasoning scores. Though score adjustment is attractive if the goal is to mitigate rater leniency/severity effects, it has implications for inferences made from scores. Future research may focus on further identifying causes of rater effects, as well as methods for mitigating rater effects.
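
    For reference, a common three-facet rating-scale formulation of the MFRM (after Linacre; see also Eckes, 2015) is sketched below. The notation is the conventional one and is not necessarily the exact parameterization used in this study.

        \[
        \ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = \theta_n - \delta_i - \lambda_j - \tau_k
        \]
        % \theta_n  : ethical reasoning proficiency of student n
        % \delta_i  : difficulty of rubric element i
        % \lambda_j : severity of rater j
        % \tau_k    : threshold of score category k relative to k-1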

    Single Parameter Combinatorial Auctions with Partially Public Valuations

    We consider the problem of designing truthful auctions when the bidders' valuations have a public and a private component. In particular, we consider combinatorial auctions where the valuation of an agent i for a set S of items can be expressed as v_i f(S), where v_i is a private single parameter of the agent, and the function f is publicly known. Our motivation for studying this problem is two-fold: (a) Such valuation functions arise naturally in the case of ad-slots in broadcast media such as television and radio. For an ad shown in a set S of ad-slots, f(S) is, say, the number of unique viewers reached by the ad, and v_i is the valuation per unique viewer. (b) From a theoretical point of view, this factorization of the valuation function simplifies the bidding language and renders the combinatorial auction more amenable to better approximation factors. We present a general technique, based on maximal-in-range mechanisms, that converts any α-approximation non-truthful algorithm (α ≤ 1) for this problem into Ω(α/log n)- and Ω(α)-approximate truthful mechanisms which run in polynomial time and quasi-polynomial time, respectively.
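
    A brief sketch of the allocation problem implied by the factored valuations above, stated under the usual social-welfare objective (an assumption of this sketch, not the paper's exact formulation): the auctioneer chooses disjoint sets of items S_1, ..., S_m for the m bidders so as to maximize

        \[
        \sum_{i=1}^{m} v_i \, f(S_i),
        \]

    where each v_i is reported privately and f is publicly known. A maximal-in-range mechanism optimizes exactly over a fixed, restricted family of allocations, which, combined with suitable payments, yields truthfulness; the restriction is the source of the logarithmic loss quoted above for the polynomial-time mechanism.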

    The effect of high-temperature sterilization on the Solo papaya


    Tidal Synchronization and Differential Rotation of Kepler Eclipsing Binaries

    Few observational constraints exist for the tidal synchronization rate of late-type stars, despite its fundamental role in binary evolution. We visually inspected the light curves of 2278 eclipsing binaries (EBs) from the Kepler Eclipsing Binary Catalog to identify those with starspot modulations, as well as other types of out-of-eclipse variability. We report rotation periods for 816 EBs with starspot modulations, and find that 79% of EBs with orbital periods less than ten days are synchronized. However, a population of short period EBs exists with rotation periods typically 13% slower than synchronous, which we attribute to the differential rotation of high latitude starspots. At 10 days, there is a transition from predominantly circular, synchronized EBs to predominantly eccentric, pseudosynchronized EBs. This transition period is in good agreement with the predicted and observed circularization period for Milky Way field binaries. At orbital periods greater than about 30 days, the amount of tidal synchronization decreases. We also report 12 previously unidentified candidate δ Scuti and γ Doradus pulsators, as well as a candidate RS CVn system with an evolved primary that exhibits starspot occultations. For short period contact binaries, we observe a period-color relation, and compare it to previous studies. As a whole, these results represent the largest homogeneous study of tidal synchronization of late-type stars. Comment: Accepted for publication in the Astronomical Journal. EB rotation periods and classifications available at https://github.com/jlurie/decatur/blob/master/decatur/data/final_catalog.cs
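
    A hypothetical illustration of the comparison behind the synchronization statistics above: the ratio of the starspot rotation period to the orbital period. The tolerance below is an arbitrary choice for this sketch, not the criterion used in the paper.

        def classify_rotation(p_rot, p_orb, tol=0.05):
            """Flag a binary as synchronized if its spot rotation period is within tol of the orbital period."""
            ratio = p_rot / p_orb
            if abs(ratio - 1.0) <= tol:
                return "synchronized"
            return "subsynchronous" if ratio > 1.0 else "supersynchronous"

        # A spot period ~13% longer than the orbital period, as for the slowly
        # rotating population noted above, is flagged as subsynchronous here.
        print(classify_rotation(p_rot=5.65, p_orb=5.0))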

    CMS Software Distribution on the LCG and OSG Grids

    The efficient exploitation of worldwide distributed storage and computing resources available in the grids requires a robust, transparent and fast deployment of experiment-specific software. The approach followed by the CMS experiment at CERN to enable Monte-Carlo simulations, data analysis and software development in an international collaboration is presented. The current status and future improvement plans are described. Comment: 4 pages, 1 figure, latex with hyperref

    HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest among cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. In addition, we discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources. Comment: 15 pages, 9 figures

    Optimal Scheduling Using Branch and Bound with SPIN 4.0

    The use of model checkers to solve discrete optimisation problems is appealing. A model checker can first be used to verify that the model of the problem is correct. Subsequently, the same model can be used to find an optimal solution for the problem. This paper describes how to apply the new PROMELA primitives of SPIN 4.0 to search effectively for the optimal solution. We show how Branch-and-Bound techniques can be added to the LTL property that is used to find the solution. The LTL property is dynamically changed during the verification. We also show how the syntactical reordering of statements and/or processes in the PROMELA model can improve the search even further. The techniques are illustrated using two running examples: the Travelling Salesman Problem and a job-shop scheduling problem.
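
    As a language-neutral illustration of the Branch-and-Bound idea applied to the paper's first running example, the sketch below prunes any partial tour whose cost already meets or exceeds the best complete tour found so far. The paper itself encodes the bound in a dynamically updated LTL property checked by SPIN over a PROMELA model; this Python sketch is not that encoding.

        import math

        def tsp_branch_and_bound(dist):
            """Exact TSP by depth-first search with cost-based pruning."""
            n = len(dist)
            best = {"cost": math.inf, "tour": None}

            def search(city, visited, cost, tour):
                if cost >= best["cost"]:              # bound: prune dominated branches
                    return
                if len(visited) == n:                 # complete tour: close the cycle
                    total = cost + dist[city][tour[0]]
                    if total < best["cost"]:
                        best["cost"], best["tour"] = total, tour + [tour[0]]
                    return
                for nxt in range(n):                  # branch: extend with each unvisited city
                    if nxt not in visited:
                        search(nxt, visited | {nxt}, cost + dist[city][nxt], tour + [nxt])

            search(0, {0}, 0, [0])
            return best

        print(tsp_branch_and_bound([[0, 2, 9, 10],
                                    [1, 0, 6, 4],
                                    [15, 7, 0, 8],
                                    [6, 3, 12, 0]]))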