
    The light of a new age

    Given here is the address of NASA Administrator Daniel S. Goldin to the Association of Space Explorers. Mr. Goldin's remarks address why we should go to Mars, a subject he approaches by first answering the question, "What would it mean if we decided today not to go to Mars?" After discussing the meaning of Columbus' voyage to America, he answers that if we decide not to go to Mars, our generation will truly achieve a first in human history: we will be the first to stop at a frontier. After noting that the need to explore is intrinsic to life itself, Mr. Goldin presents several reasons why we should go to the Moon and to Mars. One reason is economic, another is to increase our scientific knowledge, and yet another is to further the political evolution of humankind through the international cooperation required to build settlements on the Moon and Mars. He concludes by expanding on the idea that this nation has never been one to shrink from a challenge.

    Remarks by NASA administrator Daniel S. Goldin

    The text of a brief speech addressing the technical and social benefits of Space Station Freedom is presented.

    Opening Remarks

    In these opening remarks to a symposium reflecting on forty years of U.S. human spaceflight, NASA Administrator Daniel Goldin reviews the impact that Alan Shepard had on him personally, on NASA, and on the whole idea of manned spaceflight. Mr. Goldin cites Shepard as an example of both the past and the future of manned spaceflight.

    Rationalizations and mistakes: optimal policy with normative ambiguity

    Behavior that appears to violate neoclassical assumptions can often be rationalized by incorporating an optimization cost into decision-makers' utility functions. Depending on the setting, these costs may reflect either an actual welfare loss for the decision-maker who incurs them or a convenient (but welfare-irrelevant) modeling device. We consider how the resolution of this normative ambiguity shapes optimal policy in a number of contexts, including default options, inertia in health plan selection, take-up of social programs, programs that encourage moving to a new neighborhood, and tax salience.
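    A minimal sketch, in assumed notation rather than the paper's, of the kind of model the abstract points to: a decision-maker with consumption utility u(x) and default d opts out only when the gain exceeds an as-if optimization cost gamma.

    % Illustrative notation: u(x) is consumption utility, d the default,
    % \gamma \geq 0 the as-if cost of choosing something other than d.
    \[
      x^*(\gamma) =
      \begin{cases}
        \arg\max_x u(x) & \text{if } \max_x u(x) - u(d) > \gamma,\\
        d & \text{otherwise.}
      \end{cases}
    \]
    % Realized welfare is u(x^*(\gamma)) - \gamma\,\mathbf{1}[x^*(\gamma) \neq d]
    % if the cost is welfare-relevant, and u(x^*(\gamma)) alone if it is a pure
    % modeling device; the two readings can rank policies differently.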

    Revealed-preference analysis with framing effects

    In many settings, decision makers’ behavior is observed to vary on the basis of seemingly arbitrary factors. Such framing effects cast doubt on the welfare conclusions drawn from revealed-preference analysis. We relax the assumptions underlying that approach to accommodate settings in which framing effects are present. Plausible restrictions of varying strength permit either partial or point identification of preferences for the decision makers who choose consistently across frames. Recovering population preferences requires understanding the empirical relationship between decision makers’ preferences and their sensitivity to the frame. We develop tools for studying this relationship and illustrate them with data on automatic enrollment into pension plans.
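    A rough formalization, again in assumed notation, of the consistency restriction the abstract relies on:

    % Illustrative notation: c_i(A, f) is individual i's choice from menu A
    % under frame f.
    \[
      i \text{ is frame-consistent on } A \iff
      c_i(A, f) = c_i(A, f') \quad \text{for all frames } f, f'.
    \]
    % The usual revealed-preference inference (x chosen with y available
    % implies x \succsim_i y) is retained only for frame-consistent choosers;
    % for frame-sensitive choosers, weaker restrictions deliver partial
    % identification of preferences.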

    Optimal defaults with normative ambiguity

    Default effects are pervasive, but the reason they arise is often unclear. We study optimal policy when the planner does not know whether an observed default effect reflects a welfare-relevant preference or a mistake. Within a broad class of models, we find that determining optimal policy is impossible without resolving this ambiguity. Depending on the resolution, optimal policy tends in opposite directions: either minimizing the number of non-default choices or inducing active choice. We show how these considerations depend on whether active choosers make mistakes when selecting among non-default options. We illustrate our results using data on pension contribution defaults.
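    The opposing prescriptions can be seen in a toy simulation; the functional forms and numbers below are illustrative assumptions, not the paper's model.

    import numpy as np

    rng = np.random.default_rng(0)
    prefs = rng.normal(0.06, 0.03, 10_000)  # ideal contribution rates
    gamma = 0.01                            # as-if cost of opting out

    def welfare(default, cost_is_welfare_relevant):
        # Agents stay at the default unless opting out (reaching their ideal
        # rate at cost gamma) beats the loss from staying put.
        loss_stay = np.abs(prefs - default)
        opts_out = loss_stay > gamma
        cost_out = gamma if cost_is_welfare_relevant else 0.0
        return -np.where(opts_out, cost_out, loss_stay).mean()

    grid = np.linspace(0.0, 0.12, 121)
    for relevant in (True, False):
        best = max(grid, key=lambda d: welfare(d, relevant))
        print(f"cost welfare-relevant={relevant}: best default = {best:.3f}")

    # If the cost is a real welfare loss, the best default sits near the mass
    # of preferences, so few opt out; if it is welfare-irrelevant, an extreme
    # default that pushes nearly everyone into active choice does better.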

    The REVERE project: Experiments with the application of probabilistic NLP to systems engineering

    Despite natural language’s well-documented shortcomings as a medium for precise technical description, its use in software-intensive systems engineering remains inescapable. This poses many problems for engineers who must derive problem understanding and synthesise precise solution descriptions from free text. This is true both for the largely unstructured textual descriptions from which system requirements are derived, and for more formal documents, such as standards, which impose requirements on system development processes. This paper describes experiments that we have carried out in the REVERE project to investigate the use of probabilistic natural language processing techniques to provide systems engineering support.
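    In the same spirit (though not the REVERE tool itself), a few lines of off-the-shelf probabilistic NLP can already profile a requirements text; the sample sentences below are invented.

    # Requires NLTK with its sentence tokenizer and perceptron-tagger models
    # downloaded on first use.
    import nltk

    TEXT = ("The controller shall log every fault within 50 ms. "
            "Operators may override the interlock manually. "
            "The system must not lose telemetry data.")

    # Surface candidate obligation statements via modal keywords.
    sentences = nltk.sent_tokenize(TEXT)
    obligations = [s for s in sentences
                   if {"shall", "must"} & {w.lower() for w in nltk.word_tokenize(s)}]

    # Probabilistic POS tagging, then a frequency profile of the nouns as
    # candidate domain terms.
    tagged = nltk.pos_tag(nltk.word_tokenize(TEXT))
    nouns = nltk.FreqDist(w.lower() for w, tag in tagged if tag.startswith("NN"))

    print("candidate requirements:", obligations)
    print("frequent domain nouns:", nouns.most_common(5))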

    Assessing the Amazon Cloud Suitability for CLARREO's Computational Needs

    In this document we compare the performance of Amazon Web Services (AWS), also known as the Amazon Cloud, with the CLARREO (Climate Absolute Radiance and Refractivity Observatory) cluster and assess its suitability for the computational needs of the CLARREO mission. A benchmark executable that processes one month and one year of PARASOL (Polarization and Anisotropy of Reflectances for Atmospheric Sciences coupled with Observations from a Lidar) data was used. With the optimal AWS configuration, data-processing times comparable to those of the CLARREO cluster were achieved. The assessment of alternatives to the CLARREO cluster continues, and several options, such as a NASA-based cluster, are being considered.
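    A minimal sketch of such a wall-clock comparison; the benchmark binary and its arguments are placeholders, not the mission's actual executable.

    import subprocess, time

    def timed_run(cmd):
        # Wall-clock time of one benchmark invocation.
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True)
        return time.perf_counter() - t0

    # Hypothetical benchmark binary processing one month of PARASOL data.
    elapsed = timed_run(["./process_parasol", "--range", "2006-01"])
    print(f"one-month run: {elapsed / 3600:.2f} h")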

    Quantifying the Uncertainty of Imputed Demographic Disparity Estimates: The Dual-Bootstrap

    Measuring average differences in an outcome across racial or ethnic groups is a crucial first step for equity assessments, but researchers often lack access to data on individuals' races and ethnicities with which to calculate them. A common solution is to impute the missing race or ethnicity labels using proxies and then use those imputations to estimate the disparity. Conventional standard errors mischaracterize the resulting estimate's uncertainty because they treat the imputation model as given and fixed, rather than as an unknown object that must be estimated with uncertainty. We propose a dual-bootstrap approach that explicitly accounts for this measurement uncertainty and thus enables more accurate statistical inference, which we demonstrate via simulation. In addition, we adapt our approach to the commonly used Bayesian Improved Surname Geocoding (BISG) imputation algorithm, where direct bootstrapping is infeasible because the underlying Census Bureau data are unavailable. In simulations, we find that measurement uncertainty is generally insignificant for BISG except in particular circumstances; bias, not variance, is likely the predominant source of error. We apply our method to quantify the uncertainty of prevalence estimates of common health conditions by race using data from the American Family Cohort.
    Comment: 31 pages; 7 figures; CRIW Race, Ethnicity, and Economic Statistics for the 21st Century, Spring 202
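    A compact sketch of the dual-bootstrap mechanics as the abstract describes them: both the reference data that fit the imputation model and the analysis sample are resampled, so the imputation model's own estimation error propagates into the standard error. The data, proxy model, and estimator below are toy stand-ins, not the paper's.

    import numpy as np

    rng = np.random.default_rng(1)

    def fit_imputer(ref):
        # Toy proxy model: P(group A | proxy value), estimated from reference data.
        return {v: ref["group_a"][ref["proxy"] == v].mean()
                for v in np.unique(ref["proxy"])}

    def disparity(sample, probs):
        # Difference in weighted mean outcome, weighting by imputed P(group A);
        # proxy values unseen in the resampled reference data fall back to 0.5.
        w = np.array([probs.get(v, 0.5) for v in sample["proxy"]])
        return (np.average(sample["y"], weights=w)
                - np.average(sample["y"], weights=1.0 - w))

    def dual_bootstrap_se(ref, sample, B=500):
        n_ref, n = len(ref["proxy"]), len(sample["proxy"])
        draws = []
        for _ in range(B):
            i = rng.integers(0, n_ref, n_ref)   # resample the reference data ...
            j = rng.integers(0, n, n)           # ... and the analysis sample
            probs = fit_imputer({k: v[i] for k, v in ref.items()})
            draws.append(disparity({k: v[j] for k, v in sample.items()}, probs))
        return float(np.std(draws))

    # Toy data: a 5-valued proxy, a binary group label, a continuous outcome.
    ref = {"proxy": rng.integers(0, 5, 2000), "group_a": rng.integers(0, 2, 2000)}
    sample = {"proxy": rng.integers(0, 5, 800), "y": rng.normal(0.0, 1.0, 800)}
    print("dual-bootstrap SE:", dual_bootstrap_se(ref, sample))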