
    Photoheliograph study for the Apollo telescope mount


    The Pioneer maser signal anomaly: Possible confirmation of spontaneous photon blueshifting

    The novel physics methodology of subquantum kinetics predicted in 1980 that photons should blueshift their frequency at a rate that varies directly with negative gravitational potential. The rate of blueshifting for photons traveling between Earth and Jupiter was estimated to average approximately (1.3 +/- 0.65) X 10^-18 s^-1, or (1.1 +/- 0.6) X 10^-18 s^-1 for signals traveling a roundtrip distance of 65 AU through the outer solar system. A proposal was made in 1980 to test this blueshifting effect by transponding a maser signal over a 10 AU round-trip distance between two spacecraft. This blueshift prediction has more recently been corroborated by observations of maser signals transponded to the Pioneer 10 spacecraft. These measurements indicate a frequency shifting of approximately (2.28 +/- 0.4) X 10^-18 s^-1, which lies within 2 sigma of the subquantum kinetics prediction and which cannot be accounted for in terms of known forces acting on the craft. This blueshifting phenomenon implies the existence of a new source of energy which is able to account for the luminosities of red dwarf and brown dwarf stars and planets, and their observed sharing of a common mass-luminosity relation. (Comment: 20 pages, 3 figures, 2 tables)
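    As a quick plausibility check on the abstract's "within 2 sigma" claim, the quoted prediction and measurement can be compared directly. Combining the two uncertainties in quadrature is a standard assumption on our part, not something stated in the paper.

```python
import math

# Figures quoted in the abstract (units: s^-1)
predicted, sigma_pred = 1.1e-18, 0.6e-18    # 65 AU round-trip prediction
measured, sigma_meas = 2.28e-18, 0.4e-18    # Pioneer 10 observation

# Combine the two uncertainties in quadrature (our assumption)
combined_sigma = math.hypot(sigma_pred, sigma_meas)
n_sigma = abs(measured - predicted) / combined_sigma
print(f"discrepancy = {n_sigma:.2f} sigma")  # prints "discrepancy = 1.64 sigma"
```

    The discrepancy lands under 2 sigma, consistent with the abstract's statement.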

    Programming and Reasoning with Partial Observability

    Computer programs are increasingly being deployed in partially observable environments. A partially observable environment is an environment whose state is not completely visible to the program, but from which the program receives partial observations. Developers typically deal with partial observability by writing a state estimator that, given observations, attempts to deduce the hidden state of the environment. In safety-critical domains, to formally verify safety properties developers may write an environment model. The model captures the relationship between observations and hidden states and is used to prove the software correct. In this paper, we present a new methodology for writing and verifying programs in partially observable environments. We present belief programming, a programming methodology where developers write an environment model that the program runtime automatically uses to perform state estimation. A belief program dynamically updates and queries a belief state that captures the possible states the environment could be in. To enable verification, we present Epistemic Hoare Logic, which reasons about the possible belief states of a belief program the same way that classical Hoare logic reasons about the possible states of a program. We develop these concepts by defining a semantics and a program logic for a simple core language called BLIMP. In a case study, we show how belief programming could be used to write and verify a controller for the Mars Polar Lander in BLIMP. We present an implementation of BLIMP called CBLIMP and evaluate it to determine the feasibility of belief programming.
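    The belief-state update at the heart of this approach can be sketched in a few lines. This is an illustrative Python toy, not the paper's BLIMP language: the environment model (`step`) and observation predicate (`consistent`) are invented for the example.

```python
def belief_update(belief, step, consistent, obs):
    """Advance every state the environment could be in under the model,
    then keep only the successors consistent with the new observation."""
    successors = {step(s) for s in belief}
    return {s for s in successors if consistent(s, obs)}

# Toy model: a hidden altitude that drops by 1 each tick; the only sensor
# reports whether the altitude is below 3 (a partial observation).
belief = {4, 5, 6}                  # states the environment could be in
step = lambda s: s - 1
consistent = lambda s, low: (s < 3) == low

belief = belief_update(belief, step, consistent, obs=False)
print(sorted(belief))  # [3, 4, 5] -- this observation ruled nothing out
```

    Epistemic Hoare Logic, as described above, would then reason over all belief states reachable through such updates rather than over individual concrete states.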

    An assessment of inductive coupling roadway powered vehicles

    The technical concept underlying the roadway powered vehicle system is the combination of an electrical power source embedded in the roadway and a vehicle-mounted power pickup that is inductively coupled to the roadway power source. The feasibility of such a system, implemented on a large scale, was investigated. Factors considered included current and potential transportation modes and requirements, economics, energy, technology, and social and institutional issues. These factors interrelate in highly complex ways, and a firm understanding of each of them does not yet exist. The study therefore was structured to manipulate known data in equally complex ways to produce a schema of options and useful questions that can form a basis for further, harder research. A dialectical inquiry technique was used in which two adversary teams, mediated by a third-party team, debated each factor and its interrelationship with the whole of the known information on the topic.

    Geothermal probabilistic cost study

    A tool to quantify the risks of geothermal projects, the Geothermal Probabilistic Cost Model (GPCM), is presented. The GPCM was used to evaluate a geothermal reservoir for a binary-cycle electric plant at Heber, California. Three institutional aspects of geothermal risk that can shift the risk among different agents were analyzed: the leasing of geothermal land, contracting between the producer and the user of the geothermal heat, and insurance against faulty performance.
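    A probabilistic cost model of this general kind is typically exercised by Monte Carlo sampling. The sketch below is a hedged illustration of that idea only; the distributions, cost figures, and structure are invented and do not reproduce the GPCM.

```python
import random

random.seed(0)  # reproducible illustration

def sampled_cost():
    """One scenario: a lognormal drilling cost plus a 20% chance of a
    dry-well penalty (all figures illustrative, in $M)."""
    drilling = random.lognormvariate(1.0, 0.3)
    penalty = 2.0 if random.random() < 0.2 else 0.0
    return drilling + penalty

samples = sorted(sampled_cost() for _ in range(10_000))
median = samples[len(samples) // 2]
p90 = samples[int(0.9 * len(samples))]
print(f"median ${median:.2f}M, 90th percentile ${p90:.2f}M")
```

    Risk-shifting arrangements such as leasing, contracting, or insurance can then be compared by how they reshape this cost distribution for each agent.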

    Publications of the Jet Propulsion Laboratory, July 1968 through June 1969

    Annotated bibliography on space exploration, materials, and physical science.

    What should DOE do to help establish voluntary consensus standards for measuring and rating the performance of PV modules?

    In response to concern expressed by the photovoltaics community over progress toward the establishment and issuance of consensus standards on photovoltaic performance measurements, a review of the status of and progress in developing these standards was conducted. It examined the roles of manufacturers, consumers, and the national laboratories funded by the U.S. Department of Energy (DOE) in supporting this effort. This was done by means of a series of discussions with knowledgeable members of the photovoltaic community. Results of these interviews are summarized, and a new approach to managing support of standards activity is recommended that responds to specific problems found in the performance measurement standards area. The study concludes that there is a positive role to be played by the U.S. Department of Energy in establishing collector performance measurement standards. It recommends that DOE continue to provide direct financial support for selected committees and for research at national laboratories, and that management of the activity be restructured to increase the authority and responsibility of the consensus committees.

    Engineering Resilient Space Systems

    Several distinct trends will influence space exploration missions in the next decade. Destinations are becoming more remote and mysterious, science questions more sophisticated, and, as mission experience accumulates, the most accessible targets are visited, advancing the knowledge frontier to more difficult, harsh, and inaccessible environments. This leads to new challenges including: hazardous conditions that limit mission lifetime, such as high radiation levels surrounding interesting destinations like Europa or toxic atmospheres of planetary bodies like Venus; unconstrained environments with navigation hazards, such as free-floating active small bodies; multielement missions required to answer more sophisticated questions, such as Mars Sample Return (MSR); and long-range missions, such as Kuiper belt exploration, that must survive equipment failures over the span of decades. These missions will need to be successful without a priori knowledge of the most efficient data collection techniques for optimum science return. Science objectives will have to be revised ‘on the fly’, with new data collection and navigation decisions on short timescales. Yet, even as science objectives are becoming more ambitious, several critical resources remain unchanged. Since physics imposes insurmountable light-time delays, anticipated improvements to the Deep Space Network (DSN) will only marginally improve the bandwidth and communications cadence to remote spacecraft. Fiscal resources are increasingly limited, resulting in fewer flagship missions, smaller spacecraft, and less subsystem redundancy. As missions visit more distant and formidable locations, the job of the operations team becomes more challenging, seemingly inconsistent with the trend of shrinking mission budgets for operations support. How can we continue to explore challenging new locations without increasing risk or system complexity? 
    These challenges are present, to some degree, for the entire Decadal Survey mission portfolio, as documented in Vision and Voyages for Planetary Science in the Decade 2013–2022 (National Research Council, 2011), but are especially acute for the following mission examples, identified in our recently completed KISS Engineering Resilient Space Systems (ERSS) study:
    1. A Venus lander, designed to sample the atmosphere and surface of Venus, would have to perform science operations as components and subsystems degrade and fail;
    2. A Trojan asteroid tour spacecraft would spend significant time cruising to its ultimate destination (essentially hibernating to save on operations costs), then upon arrival would have to act as its own surveyor, finding new objects and targets of opportunity as it approaches each asteroid, requiring response on short notice; and
    3. An MSR campaign would not only be required to perform fast reconnaissance over long distances on the surface of Mars, interact with an unknown physical surface, and handle degradations and faults, but would also contain multiple components (launch vehicle, cruise stage, entry and landing vehicle, surface rover, ascent vehicle, orbiting cache, and Earth return vehicle) that dramatically increase the need for resilience to failure across the complex system.
    The concept of resilience and its relevance and application in various domains was a focus during the study, with several definitions of resilience proposed and discussed. While there was substantial variation in the specifics, a common conceptual core emerged: adaptation in the presence of changing circumstances. These changes were couched in various ways (anomalies, disruptions, discoveries), but they all ultimately had to do with changes in underlying assumptions.
    Invalid assumptions, whether due to unexpected changes in the environment or an inadequate understanding of interactions within the system, may cause unexpected or unintended system behavior. A system is resilient if it continues to perform its intended functions in the presence of invalid assumptions. Our study focused on areas of resilience that we felt needed additional exploration and integration, namely system and software architectures and capabilities, and autonomy technologies. (While also an important consideration, resilience in hardware is being addressed in multiple other venues, including two other KISS studies.) The study consisted of two workshops, separated by a seven-month focused study period. The first workshop (Workshop #1) explored the ‘problem space’ as an organizing theme, and the second workshop (Workshop #2) explored the ‘solution space’. In each workshop, focused discussions and exercises were interspersed with presentations from participants and invited speakers. The study period between the two workshops was organized as part of the synthesis activity during the first workshop. The study participants, after spending the initial days of the first workshop discussing the nature of resilience and its impact on future science missions, decided to split into three focus groups, each with a particular thrust, to explore specific ideas further and develop material needed for the second workshop. The three focus groups and areas of exploration were:
    1. Reference missions: address and refine the resilience needs by exploring a set of reference missions;
    2. Capability survey: collect, document, and assess current efforts, both inside and outside NASA, to develop capabilities and technology that could be used to address the documented needs; and
    3. Architecture: analyze the impact of architecture on system resilience, and provide principles and guidance for architecting greater resilience in our future systems.
    The key product of the second workshop was a set of capability roadmaps pertaining to the three reference missions, selected for their representative coverage of the types of space missions envisioned for the future. From these three roadmaps, we have extracted several common capability patterns that would be appropriate targets for near-term technical development: one focused on graceful degradation of system functionality, a second focused on data understanding for science and engineering applications, and a third focused on hazard avoidance and environmental uncertainty. Continuing work is extending these roadmaps to identify candidate enablers of the capabilities from the following three categories: architecture solutions, technology solutions, and process solutions. The KISS study allowed a collection of diverse and engaged engineers, researchers, and scientists to think deeply about the theory, approaches, and technical issues involved in developing and applying resilience capabilities. The conclusions summarize the varied and disparate discussions that occurred during the study, and include new insights about the nature of the challenge and potential solutions:
    1. There is a clear and definitive need for more resilient space systems. During our study period, the key scientists and engineers we engaged to understand potential future missions confirmed the scientific and risk-reduction value of greater resilience in the systems used to perform these missions.
    2. Resilience can be quantified in measurable terms: project cost, mission risk, and quality of science return. To consider resilience properly in the set of engineering trades performed during the design, integration, and operation of space systems, the benefits and costs of resilience need to be quantified. We believe, based on the work done during the study, that appropriate metrics to measure resilience must relate to risk, cost, and science quality/opportunity. Additional work is required to explicitly tie design decisions to these first-order concerns.
    3. There are many existing basic technologies that can be applied to engineering resilient space systems. Through the discussions during the study, we found many varied approaches and research efforts that address the various facets of resilience, some within NASA and many more beyond. Examples from civil architecture, Department of Defense (DoD) / Defense Advanced Research Projects Agency (DARPA) initiatives, ‘smart’ power grid control, cyber-physical systems, software architecture, and application of formal verification methods for software were identified and discussed. The variety and scope of related efforts is encouraging and presents many opportunities for collaboration and development, and we expect many collaborative proposals and joint research efforts as a result of the study.
    4. Use of principled architectural approaches is key to managing complexity and integrating disparate technologies. The main challenge inherent in considering highly resilient space systems is that the increase in capability can result in an increase in complexity, with all of the risks and costs associated with more complex systems. What is needed is a better way of conceiving space systems that enables incorporation of capabilities without increasing complexity. We believe principled architecting approaches provide the needed means to convey a unified understanding of the system to primary stakeholders, thereby controlling complexity in the conception and development of resilient systems, and enabling the integration of disparate approaches and technologies. A representative architectural example is included in Appendix F.
    5. Developing trusted resilience capabilities will require a diverse yet strategically directed research program. Despite the interest in, and benefits of, deploying resilient space systems, to date there has been a notable lack of meaningful demonstrated progress in systems capable of working in hazardous, uncertain situations. The roadmaps completed during the study, and documented in this report, provide the basis for a real funded plan that considers the required fundamental work and evolution of needed capabilities.
    Exploring space is a challenging and difficult endeavor. Future space missions will require more resilience in order to perform the desired science in new environments under constraints of development and operations cost, acceptable risk, and communications delays. Development of space systems with resilient capabilities has the potential to expand the limits of possibility, revolutionizing space science by enabling as yet unforeseen missions and breakthrough science observations. Our KISS study provided an essential venue for the consideration of these challenges and goals. Additional work and future steps are needed to realize the potential of resilient systems; this study provided the necessary catalyst to begin this process.

    A Decision-Theoretic Approach to Measuring Security

    The question “is this system secure?” is notoriously difficult to answer. The question implies that there is a system-wide property called “security,” which we can measure with some meaningful threshold of sufficiency. In this concept paper, we discuss the difficulty of measuring security sufficiency, either directly or through a proxy such as the number of known vulnerabilities. We propose that the question can be better addressed by measuring confidence and risk in the decisions that depend on security. A novelty of this approach is that it integrates use of both subjective information (e.g. expert judgment) and empirical data. We investigate how this approach uses well-known methods from the discipline of decision-making under uncertainty to provide a more rigorous and usable measure of security sufficiency.
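    The shift from “is it secure?” to “what do our decisions risk?” can be made concrete with a small expected-loss calculation. Everything below (the Beta-Bernoulli blend of expert judgment with incident data, and all the numbers) is an invented illustration in the spirit of the abstract, not the paper's actual method.

```python
def expected_loss(p_compromise, loss_if_compromised, mitigation_cost):
    """Expected cost of a deployment decision: fixed mitigation spend
    plus the risk-weighted loss from a compromise."""
    return mitigation_cost + p_compromise * loss_if_compromised

# Blend subjective and empirical information with a simple
# Beta-Bernoulli update (illustrative numbers throughout).
prior_a, prior_b = 1, 9       # expert judgment: roughly 10% chance/year
incidents, years = 2, 20      # empirical data: 2 incidents in 20 system-years
p = (prior_a + incidents) / (prior_a + prior_b + years)

print(f"P(compromise) = {p:.2f}, "
      f"expected loss = ${expected_loss(p, 1_000_000, 50_000):,.0f}")
```

    Two candidate security postures can then be compared by expected loss rather than by a binary secure/insecure verdict.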

    Assessing satellite-derived land product quality for Earth system science applications: results from the CEOS LPV sub-group

    The value of satellite-derived land products for science applications and research is dependent upon the known accuracy of the data. CEOS (Committee on Earth Observation Satellites), the space arm of the Group on Earth Observations (GEO), plays a key role in coordinating the land product validation process. The Land Product Validation (LPV) sub-group of the CEOS Working Group on Calibration and Validation (WGCV) aims to address the challenges associated with the validation of global land products. This paper provides an overview of LPV sub-group focus area activities, which cover seven terrestrial Essential Climate Variables (ECVs). The contribution will enhance coordination of the scientific needs of the Earth system communities with global LPV activities.