Autodocumentation
Automated documentation systems with expanded capabilities for reusing existing programs are considered; such systems allow a relatively inexpensive determination of program capabilities. A theoretical approach to such a system is proposed. Tree structure variations are examined, along with assignment statements, conditional branches, loops, and an automated analyzer.
Atom-atom ionization mechanism in Argon-Xenon mixtures
The atom-atom ionization process occurring in high-purity argon-xenon mixtures has been investigated by means of a conventional shock tube employing a microwave probe to monitor the electron-generation rate. All tests were conducted at approximately atmospheric pressure and at temperatures in the range between 5000° and 9000°K, corresponding to a neutral-particle density of 7.0 X 10^(17) cm^(-3). The cross-sectional slope constant for xenon ionized by collision with an argon atom is 1.8 X 10^(-20) cm^2/eV ±20%, equal to that for xenon ionized by collision with another xenon atom. The data for the reaction of argon ionizing xenon are consistent with an activation energy of 8.315 eV, equal to that of the xenon-xenon atom-atom ionization process. No data were obtained for xenon ionizing argon. Good correlation was obtained between the cross sections for electron elastic momentum exchange derived from the microwave experiment and those obtained from beam experiments. The argon-xenon ionization cross section implies that, for atom-atom processes in the noble gases at pressures ~1 atm and temperatures ~2/3 eV, the ionization cross section is independent of the electronic structure of the projectile atom.
The Securities and Exchange Commission and Accounting Principles
In this thesis we address the problem of optimal code generation for irregular architectures such as Digital Signal Processors (DSPs). Code generation consists mainly of three interrelated optimization tasks: instruction selection (with resource allocation), instruction scheduling, and register allocation. These tasks have been shown to be NP-hard for most architectures and most situations. A common approach to code generation is to solve each task separately, i.e., in a decoupled manner, which is easier from a software engineering point of view. Phase-decoupled compilers produce good code quality for regular architectures, but if applied to DSPs the resulting code is of significantly lower performance due to strong interdependences between the different tasks. We developed a novel method for fully integrated code generation at the basic block level, based on dynamic programming. It handles the most important tasks of code generation in a single optimization step and produces an optimal code sequence. Our dynamic programming algorithm is applicable to small, yet not trivial problem instances with up to 50 instructions per basic block if data locality is not an issue, and up to 20 instructions if we take data locality with optimal scheduling of data transfers on irregular processor architectures into account. For larger problem instances we have developed heuristic relaxations. In order to obtain a retargetable framework we developed a structured architecture specification language, xADML, which is based on XML. We implemented such a framework, called OPTIMIST, which is parameterized by an xADML architecture specification. The thesis further provides an Integer Linear Programming formulation of fully integrated optimal code generation for VLIW architectures with a homogeneous register file. Where it terminates successfully, the ILP-based optimizer mostly works faster than the dynamic programming approach; on the other hand, it fails for several larger examples where dynamic programming still provides a solution. Hence, the two approaches complement each other. In particular, we show how the dynamic programming approach can be used to precondition the ILP formulation. As far as we know from the literature, this is the first time that the main tasks of code generation have been solved optimally in a single and fully integrated optimization step that additionally considers data placement in register sets and optimal scheduling of data transfers between different register sets.
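The abstract gives no algorithmic detail, so the following is only a minimal, illustrative sketch of the general subset-based dynamic programming idea over a basic block, here applied to a toy objective (peak register pressure) on a hypothetical dependency DAG. All names (deps, live_after, best_peak) and the cost model are assumptions made for illustration; they are not taken from the thesis or from the OPTIMIST framework, whose state space (instruction selection, time profiles, register files, data transfers) is far richer.

# Illustrative sketch only (not the thesis' algorithm): dynamic programming
# over subsets of already-emitted instructions of one basic block, choosing
# an instruction order that minimizes peak register pressure.
from functools import lru_cache

# Hypothetical basic block: instruction -> set of operands it consumes.
deps = {
    "a": set(), "b": set(),
    "c": {"a", "b"},
    "d": {"c"}, "e": {"c", "b"},
}
all_instrs = frozenset(deps)

def live_after(scheduled: frozenset) -> int:
    """Values computed so far that a not-yet-emitted instruction still needs."""
    return sum(1 for i in scheduled
               if any(i in deps[j] for j in all_instrs - scheduled))

@lru_cache(maxsize=None)
def best_peak(scheduled: frozenset) -> int:
    """Minimal achievable peak register pressure from this state to the end."""
    if scheduled == all_instrs:
        return 0
    ready = [i for i in all_instrs - scheduled if deps[i] <= scheduled]
    return min(max(live_after(scheduled | {i}), best_peak(scheduled | {i}))
               for i in ready)

print(best_peak(frozenset()))  # prints 2 for this toy DAG

The key point the sketch shares with the integrated approach is that the state is the set of instructions handled so far, so all orderings reaching the same set are compared and only the best continuation is kept, instead of fixing the phase ordering in advance.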
Atom-atom ionization cross sections of the noble gases-Argon, Krypton, and Xenon
An experimental investigation of the initial phase of shock-produced ionization in argon, krypton, and xenon has been conducted in order to elucidate the atom-atom ionization reaction and to determine the atom-atom ionization cross sections. A high-purity shock tube was employed to heat these gases to temperatures in the range from 5000° to 9000°K at neutral particle densities of 4.4 X 10^(17), 7.0 X 10^(17), and 13.3 X 10^(17) cm^(-3), and impurity levels of approximately 10^(-6). A K-band (24-GHz) microwave system, situated so that the microwave-beam propagation direction was normal to the shock tube, monitored the ionization relaxation process occurring immediately after the passage of the shock front. Electron density was calculated from the microwave data using a plane-wave, plane-plasma-slab interaction theory corrected for near-field effects associated with the coupling of the microwave energy to the plasma. These data, adjusted to compensate for the effects of shock attenuation, verified that the dominant electron-generation process involves a two-step, atom-atom ionization reaction, the first step (excitation to the first excited states) being rate determining. The quadratic dependence on neutral density associated with this reaction was experimentally demonstrated (with an uncertainty of ±15%). The cross section, characterized as having a constant slope from threshold (the first excited energy level) represented by the cross-sectional slope constant C, was found to be equal to 1.2 X 10^(-19)±15% cm^2/eV, 1.4 X 10^(-19)±15% cm^2/eV, and 1.8 X 10^(-20)±15% cm^2/eV for argon, krypton, and xenon, respectively. The electron-atom elastic momentum-exchange cross sections derived from the microwave data correlated quite well with Maxwell-averaged beam data, the agreement for the case of argon being ±20%; krypton, ±30%; and xenon, within a factor of 2.
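Written out, the constant-slope characterization described above corresponds to a cross section that rises linearly from threshold (symbols chosen here for illustration; this is only a restatement of the abstract's definition):

sigma(E) = C (E - E_1) for E >= E_1, and sigma(E) = 0 for E < E_1,

where E_1 is the energy of the first excited level (the threshold) and C is the cross-sectional slope constant quoted above, e.g. C = 1.2 X 10^(-19) cm^2/eV for argon.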
Accounting for Obsolescence: An Evaluation of Current NIPA Practice
This work raises questions about what obsolescence is and whether it is properly accounted for in BEA's methodology.
Computational chemistry
With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials, with and without the presence of gases. Computational chemistry also has applications in studying catalysis and the properties of polymers, all of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.
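The abstract does not spell out the governing equations; in the quantum-chemical setting they are typically the time-independent electronic Schrödinger equation, stated here only as context:

H \Psi(r_1, ..., r_N) = E \Psi(r_1, ..., r_N),

where H is the electronic Hamiltonian for the N electrons of the gas or material, and approximate solutions for E and \Psi yield quantities of the kind listed above (transition probabilities, bond-dissociation energies, reaction rates).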
A simplified grid technique for determining scan lines generated by the Tiros scanning radiometer
Grid method for constructing scan lines generated by Tiros scanning radiometer
Remote geochemical sensing of asteroids
Remote geochemical measurements with gamma-ray spectrometers and X-ray fluorescence spectrometers are discussed. These instruments have proved themselves in lunar orbit, and seem best suited to determining the elemental content of asteroid surfaces.
Symmetric path integrals for stochastic equations with multiplicative noise
A Langevin equation with multiplicative noise is an equation schematically of the form dq/dt = -F(q) + e(q) xi, where e(q) xi is Gaussian white noise whose amplitude e(q) depends on q itself. I show how to convert such equations into path integrals. The definition of the path integral depends crucially on the convention used for discretizing time, and I specifically derive the correct path integral when the convention used is the natural, time-symmetric one: time derivatives are (q_t - q_{t-\Delta t}) / \Delta t and coordinates are (q_t + q_{t-\Delta t}) / 2. [This is the convention that permits standard manipulations of calculus on the action, like naive integration by parts.] It has sometimes been assumed in the literature that a Stratonovich Langevin equation can be quickly converted to a path integral by treating time as continuous but using the rule \theta(t=0) = 1/2. I show that this prescription fails when the amplitude e(q) is q-dependent.
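Written out as a discrete update, the time-symmetric (mid-point) convention named in the abstract reads (a display-form restatement, not the paper's full construction):

(q_t - q_{t-\Delta t}) / \Delta t = -F(\bar q) + e(\bar q) \xi_t,  with  \bar q = (q_t + q_{t-\Delta t}) / 2,

where \xi_t is discretized Gaussian white noise and the force F and amplitude e are both evaluated at the mid-point coordinate \bar q; the path integral is then built from this discretization before taking \Delta t to zero.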