Computer simulation of on-orbit manned maneuvering unit operations
Simulation of spacecraft on-orbit operations is discussed in reference to Martin Marietta's Space Operations Simulation laboratory's use of computer software models to drive a six-degree-of-freedom moving base carriage and two target gimbal systems. In particular, key simulation issues and related computer software models associated with providing real-time, man-in-the-loop simulations of the Manned Maneuvering Unit (MMU) are addressed, with special attention given to how effectively these models and motion systems simulate the MMU's actual on-orbit operations. The weightless effects of the space environment require the development of entirely new devices for locomotion. Since access to space is very limited, it is necessary to design, build, and test these new devices within the physical constraints of Earth using simulators. The simulation method discussed here is the technique of using computer software models to drive a Moving Base Carriage (MBC) that is capable of providing simultaneous six-degree-of-freedom motions. This method, utilized at Martin Marietta's Space Operations Simulation (SOS) laboratory, provides the ability to simulate the operation of manned spacecraft, provides the pilot with proper three-dimensional visual cues, and allows training of on-orbit operations. The purpose here is to discuss significant MMU simulation issues, the related models that were developed in response to these issues, and how effectively these models simulate the MMU's actual on-orbit operations.
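The core loop such a simulation relies on can be sketched in a few lines: integrate weightless rigid-body translational dynamics from thruster commands and feed the resulting pose to the motion platform each real-time frame. All numbers and names below are illustrative assumptions, not actual SOS laboratory parameters, and the sketch covers only translation, not the full six degrees of freedom.

```python
import numpy as np

MASS = 100.0            # kg, assumed MMU-plus-crew mass (illustrative)
THRUST = 8.0            # N, assumed per-axis thruster force (illustrative)
DT = 0.05               # s, assumed real-time frame period

pos = np.zeros(3)       # position command sent to the moving base (m)
vel = np.zeros(3)       # current velocity (m/s)

def step(cmd):
    """Advance one frame. cmd: per-axis thruster command in {-1, 0, +1}."""
    global pos, vel
    acc = THRUST * np.asarray(cmd, dtype=float) / MASS  # F = m a; weightless, so no gravity term
    vel = vel + acc * DT                                # explicit Euler integration
    pos = pos + vel * DT
    return pos                                          # pose command for the carriage

for _ in range(20):     # one second of +x translation command
    p = step([1, 0, 0])
```

In a real man-in-the-loop setup, `cmd` would come from the pilot's hand controllers each frame and `pos` would drive the carriage servos, with the software model supplying the closed-loop dynamics between them.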
Visual Communications on the Road in Arkansas: Analysis of Secondary Students' Videos
In the summer of 2010, the Visual Communications on the Road in Arkansas: Creative Photo and Video Projects to Promote Agriculture program was initiated. The program consisted of a two-week agricultural communications curriculum that would be taught by agricultural science teachers in Arkansas. The curriculum was composed of lessons about photography, writing, and videography, and the program introduced students to digital photography and videography equipment and the proper uses of that equipment. Once the curriculum was taught in secondary schools, a mobile classroom unit—consisting of a travel trailer, photography and videography equipment, and laptop computers equipped with editing software—would visit the school to assist students with the creation of short promotional videos about agriculture. The student-created videos were used as a hands-on extension of the curriculum learned in the classroom. Completed videos were posted to YouTube and then analyzed to assess student application of competencies taught in the curriculum. The researchers created a coding sheet to systematically assess all posted videos, and inter- and intrarater reliability were maintained. An analysis of data gathered from the video assessment showed that secondary students were able to effectively apply many of the techniques taught in the curriculum through the agricultural videos they created. Additional findings and recommendations for application and future research are presented.
Steps Towards Precise Ar/Ar Chronologies for Fluid-Rock Interaction Throughout the Solar System
Importance Sampling: Intrinsic Dimension and Computational Cost
The basic idea of importance sampling is to use independent samples from a proposal measure in order to approximate expectations with respect to a target measure. It is key to understand how many samples are required in order to guarantee accurate approximations. Intuitively, some notion of distance between the target and the proposal should determine the computational cost of the method. A major challenge is to quantify this distance in terms of parameters or statistics that are pertinent for the practitioner. The subject has attracted substantial interest from within a variety of communities. The objective of this paper is to overview and unify the resulting literature by creating an overarching framework. A general theory is presented, with a focus on the use of importance sampling in Bayesian inverse problems and filtering. Comment: Statistical Science
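The basic scheme described above can be sketched in a few lines of self-normalised importance sampling. The target, proposal, and test function here are illustrative choices, not taken from the paper; the effective sample size is one common diagnostic of the target-proposal distance the abstract alludes to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: target N(0, 1), proposal N(0, 2^2).
# Estimate E_target[x^2], whose true value is 1.
def log_target(x):
    return -0.5 * x**2              # unnormalised log-density

def log_proposal(x):
    return -0.5 * (x / 2.0)**2      # unnormalised log-density

n = 100_000
x = rng.normal(0.0, 2.0, size=n)            # independent samples from the proposal
logw = log_target(x) - log_proposal(x)      # unnormalised log-weights
w = np.exp(logw - logw.max())
w /= w.sum()                                # self-normalised weights

estimate = np.sum(w * x**2)                 # importance sampling estimate of E[x^2]
ess = 1.0 / np.sum(w**2)                    # effective sample size diagnostic
```

Self-normalisation lets both densities be unnormalised, since the constants cancel; when the proposal is far from the target, the effective sample size collapses well below `n`, which is exactly the computational-cost phenomenon the paper quantifies.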
Well-Posedness And Accuracy Of The Ensemble Kalman Filter In Discrete And Continuous Time
The ensemble Kalman filter (EnKF) is a method for combining a dynamical model with data in a sequential fashion. Despite its widespread use, there has been little analysis of its theoretical properties. Many of the algorithmic innovations associated with the filter, which are required to make a usable algorithm in practice, are derived in an ad hoc fashion. The aim of this paper is to initiate the development of a systematic analysis of the EnKF, in particular to do so in the small ensemble size limit. The perspective is to view the method as a state estimator, and not as an algorithm which approximates the true filtering distribution. The perturbed observation version of the algorithm is studied, without and with variance inflation. Without variance inflation, well-posedness of the filter is established; with variance inflation, accuracy of the filter, with respect to the true signal underlying the data, is established. The algorithm is considered in discrete time, and also for a continuous time limit arising when observations are frequent and subject to large noise. The underlying dynamical model, and the assumptions about it, are sufficiently general to include the Lorenz '63 and '96 models, together with the incompressible Navier-Stokes equation on a two-dimensional torus. The analysis is limited to the case of complete observation of the signal with additive white noise. Numerical results are presented for the Navier-Stokes equation on a two-dimensional torus for both complete and partial observations of the signal with additive white noise.
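The perturbed-observation EnKF with multiplicative variance inflation, in the completely observed, additive-white-noise setting the analysis assumes, can be sketched on a toy scalar model. The linear dynamics and all parameter values below are illustrative assumptions, not the Lorenz or Navier-Stokes models studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scalar signal v_{j+1} = a v_j + model noise, observed
# completely with additive white noise of standard deviation gamma.
a, sigma, gamma = 0.9, 0.5, 0.2
inflation = 1.05        # multiplicative variance inflation factor
N = 50                  # ensemble size
steps = 100

v = 1.0                                   # true signal
ens = rng.normal(0.0, 1.0, size=N)        # initial ensemble

for _ in range(steps):
    # Truth and its noisy complete observation
    v = a * v + sigma * rng.normal()
    y = v + gamma * rng.normal()

    # Forecast: push the ensemble through the model, then inflate its spread
    ens = a * ens + sigma * rng.normal(size=N)
    m = ens.mean()
    ens = m + inflation * (ens - m)

    # Perturbed-observation analysis step (observation operator = identity)
    C = ens.var()                         # forecast covariance (scalar here)
    K = C / (C + gamma**2)                # Kalman gain
    y_pert = y + gamma * rng.normal(size=N)   # each member gets a perturbed observation
    ens = ens + K * (y_pert - ens)

estimate = ens.mean()                     # state estimate tracking the true signal
```

Viewing the ensemble mean as a state estimator, as the paper does, the relevant question is how `estimate` tracks `v` over time, rather than how well the ensemble spread matches the true filtering distribution.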
Overcoming the false-minima problem in direct methods: Structure determination of the packaging enzyme P4 from bacteriophage φ13
The problems encountered during the phasing and structure determination of the packaging enzyme P4 from bacteriophage φ13 using the anomalous signal from selenium in a single-wavelength anomalous dispersion (SAD) experiment are described. The oligomeric state of P4 in the virus is a hexamer (with sixfold rotational symmetry) and it crystallizes in space group C2, with four hexamers in the crystallographic asymmetric unit. Current state-of-the-art ab initio phasing software yielded solutions consisting of 96 atoms arranged as sixfold symmetric clusters of Se atoms. However, although these solutions showed high correlation coefficients, indicating that the substructure had been solved, the resulting phases produced uninterpretable electron-density maps. Only after further analysis were correct solutions found (also of 96 atoms), leading to the eventual identification of the positions of 120 Se atoms. Here, it is demonstrated how the difficulties in finding a correct phase solution arise from an intricate false-minima problem. © 2005 International Union of Crystallography - all rights reserved
MCMC Methods for Functions: Modifying Old Algorithms to Make Them Faster
Many problems arising in applications result in the need to probe a probability distribution for functions. Examples include Bayesian nonparametric statistics and conditioned diffusion processes. Standard MCMC algorithms typically become arbitrarily slow under the mesh refinement dictated by nonparametric description of the unknown function. We describe an approach to modifying a whole range of MCMC methods which ensures that their speed of convergence is robust under mesh refinement. In the applications of interest the data is often sparse and the prior specification is an essential part of the overall modeling strategy. The algorithmic approach that we describe is applicable whenever the desired probability measure has density with respect to a Gaussian process or Gaussian random field prior, and to some useful non-Gaussian priors constructed through random truncation. Applications are shown in density estimation, data assimilation in fluid mechanics, subsurface geophysics and image registration. The key design principle is to formulate the MCMC method for functions. This leads to algorithms which can be implemented via minor modification of existing algorithms, yet which show enormous speed-up on a wide range of applied problems.