
    Phased models for evaluating the performability of computing systems

    A phase-by-phase modelling technique is introduced to evaluate a fault-tolerant system's ability to execute different sets of computational tasks during different phases of the control process. Intraphase processes are allowed to differ from phase to phase. The probabilities of interphase state transitions are specified by interphase transition matrices. Based on constraints imposed on the intraphase and interphase transition probabilities, various iterative solution methods are developed for calculating system performability.
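
    As a minimal sketch of how such a phased model can be evaluated numerically (the phase count, step counts, and all matrix entries below are invented for illustration; this is a simple discrete-time Markov version of the idea, not the paper's iterative methods):

```python
import numpy as np

# Hypothetical two-phase mission: within each phase the system evolves
# under an intraphase transition matrix; at the phase boundary an
# interphase matrix maps end-of-phase states to start-of-next-phase states.
intraphase = [
    np.array([[0.95, 0.05],    # phase 1: P(stay operational), P(fail)
              [0.00, 1.00]]),  # failure is absorbing within a phase
    np.array([[0.90, 0.10],    # phase 2 has a higher per-step failure rate
              [0.00, 1.00]]),
]
interphase = [
    np.array([[0.99, 0.01],    # reconfiguration risk at the phase boundary
              [0.00, 1.00]]),
]
steps_per_phase = [10, 5]      # discrete time steps spent in each phase

p = np.array([1.0, 0.0])       # start fully operational
for k, (P, n) in enumerate(zip(intraphase, steps_per_phase)):
    p = p @ np.linalg.matrix_power(P, n)   # intraphase evolution
    if k < len(interphase):
        p = p @ interphase[k]              # interphase transition
print(f"P(operational at mission end) = {p[0]:.4f}")
```

The same loop extends directly to more phases, more states, and nontrivial interphase matrices.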

    Two-temperature coronal flow above a thin disk

    We extended the disk corona model (Meyer & Meyer-Hofmeister 1994; Meyer, Liu, & Meyer-Hofmeister 2000a) to the inner region of galactic nuclei by including different temperatures in ions and electrons as well as Compton cooling. We found that the mass evaporation rate, and hence the fraction of accretion energy released in the corona, depends strongly on the rate of incoming mass flow from the outer edge of the disk: a larger rate leads to more Compton cooling, less efficient evaporation, and a weaker corona. We also found a strong dependence on the viscosity, higher viscosity leading to an enhanced mass flow in the corona and therefore more evaporation of gas from the disk below. If accretion rates are taken in units of the Eddington rate, our results become independent of the mass of the central black hole. The model predicts weaker contributions to the hard X-rays for objects with higher accretion rates, such as narrow-line Seyfert 1 galaxies (NLS1s), in agreement with observations. For luminous active galactic nuclei (AGN), strong Compton cooling in the innermost corona is so efficient that a large amount of additional heating is required to maintain the corona above the thin disk. Comment: 17 pages, 6 figures. Accepted by ApJ

    Modelling the impact of the ‘Fast Track’ land reform policy on Zimbabwe’s maize sector

    This paper analyzes the impact of the ‘fast track’ land reform policy on maize production in Zimbabwe by constructing a partial equilibrium model that depicts what could have happened if no further policy shifts had taken place after 2000. The resimulated baseline model was used to make projections based on the trends of exogenous variables in 2000; that is, the model generated an artificial data set describing what the maize market would have looked like under the pre-2000 policy conditions. The ‘fast track’ land reform policy was then assessed against the performance of this baseline under a range of “what if” assumptions. Commercial area harvested was 39 % less than what could have been harvested in 2001, with the shortfall growing to 80.57 % by 2007. Results showed total maize production was 61.85 % and 43.88 % less than what could have been produced in the 2002 and 2005 droughts, respectively, implying that those droughts would have been less severe had the ‘fast track’ land reform not been implemented. The ‘fast track’ land reform therefore had a negative effect on maize production, and the econometric model system developed provides a basis for analyzing and understanding the effects of the FTLRP on the maize market. Keywords: ‘fast track’ land reform programme, partial equilibrium model, maize, Zimbabwe, Crop Production/Industries, Land Economics/Use

    Properties of Intercalated 2H-NbSe2, 4Hb-TaS2 and 1T-TaS2

    The layered compounds 2H-NbSe2, 4Hb-TaS2, and 1T-TaS2 have been intercalated with organic molecules, and the resulting crystal structure, heat capacity, conductivity, and superconductivity have been studied. The coordination in the disulfide layers was found to be unchanged in the product phase. Resistance minima appear and the superconducting transition temperature is reduced in the NbSe2 complex. Conversely, superconductivity is induced in the 4Hb-TaS2 complex. Corresponding evidence of a large change of the density of states, negative for 2H-NbSe2 and positive for 4Hb-TaS2, was also observed upon intercalation. The transport properties of all the intercalation complexes show a pronounced dependence upon the coordination of the transition metal.

    Performability evaluation of the SIFT computer

    Performability modeling and evaluation techniques are applied to the SIFT computer as it might operate in the computational environment of an air transport mission. User-visible performance of the total system (SIFT plus its environment) is modeled as a random variable taking values in a set of levels of accomplishment. These levels are defined in terms of four attributes of total system behavior: safety, no change in mission profile, no operational penalties, and no economic penalties. The base model is a stochastic process whose states describe the internal structure of SIFT as well as relevant conditions of the environment. Base model state trajectories are related to accomplishment levels via a capability function which is formulated in terms of a 3-level model hierarchy. Performability evaluation algorithms are then applied to determine the performability of the total system for various choices of computer and environment parameter values. Numerical results of those evaluations are presented and, in conclusion, some implications of this effort are discussed.

    Impact of Electrostatic Forces in Contact Mode Scanning Force Microscopy

    In this contribution we address the question to what extent surface charges affect contact-mode scanning force microscopy measurements. We therefore designed samples on which we could generate localized electric field distributions near the surface as and when required. We performed a series of experiments in which we varied the load of the tip, the stiffness of the cantilever, and the hardness of the sample surface. It turned out that an electrostatic interaction between tip and surface charges could be detected only for soft cantilevers, irrespective of the surface properties, i.e. basically regardless of its hardness. We explain these results through a model based on the alteration of the tip-sample potential by the additional electric field between the charged tip and the surface charges.

    Heavy Meson Production in NN Collisions with Polarized Beam and Target -- A new facility for COSY

    The study of near-threshold meson production in pp and pd collisions involving polarized beams and polarized targets offers the rare opportunity to gain insight into short-range features of the nucleon-nucleon interaction. The Cooler Synchrotron COSY at FZ-Jülich is a unique environment in which to perform such studies. Measurements of polarization observables require a cylindrically symmetrical detector capable of measuring the momenta and the directions of outgoing charged hadrons. The wide energy range of COSY leads to momenta of outgoing protons to be detected in a single meson production reaction between 300 and 2500 MeV/c. Scattering angles of the protons to be covered extend to about 45° in the laboratory system. An azimuthal angular coverage of the device around 98% seems technically achievable. The required magnetic spectrometer could consist of a superconducting toroid providing fields around 3 T. Comment: 6 pages, 1 figure, submitted to Czechoslovak Journal of Physics

    Optimal uncertainty quantification for legacy data observations of Lipschitz functions

    We consider the problem of providing optimal uncertainty quantification (UQ), and hence rigorous certification, for partially-observed functions. We present a UQ framework within which the observations may be small or large in number, and need not carry information about the probability distribution of the system in operation. The UQ objectives are posed as optimization problems, the solutions of which are optimal bounds on the quantities of interest; we consider two typical settings, namely parameter sensitivities (McDiarmid diameters) and output deviation (or failure) probabilities. The solutions of these optimization problems depend non-trivially (even non-monotonically and discontinuously) upon the specified legacy data. Furthermore, the extreme values are often determined by only a few members of the data set; in our principal physically-motivated example, the bounds are determined by just 2 out of 32 data points, and the remainder carry no information and could be neglected without changing the final answer. We propose an analogue of the simplex algorithm from linear programming that uses these observations to offer efficient and rigorous UQ for high-dimensional systems with high-cardinality legacy data. These findings suggest natural methods for selecting optimal (maximally informative) next experiments. Comment: 38 pages
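
    The McDiarmid-diameter setting mentioned above admits a short illustration (a sketch only: the oscillation values are invented, and this shows the classical McDiarmid concentration bound rather than the authors' optimal-bound optimization):

```python
import math

def mcdiarmid_bound(osc, t):
    """Upper bound on P(f(X) - E[f(X)] >= t) from McDiarmid's inequality,
    given componentwise oscillations osc[i] = sup |f(..,x_i,..) - f(..,x_i',..)|."""
    d_sq = sum(o * o for o in osc)          # squared McDiarmid diameter
    return math.exp(-2.0 * t * t / d_sq)    # exp(-2 t^2 / D^2)

# hypothetical oscillations for a function of three independent inputs
osc = [0.5, 0.2, 0.1]
print(mcdiarmid_bound(osc, 0.4))
```

The squared diameter D^2 aggregates the componentwise sensitivities, which is why a few dominant inputs (like the 2 out of 32 data points in the abstract) can control the bound.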

    Plane-wave based electronic structure calculations for correlated materials using dynamical mean-field theory and projected local orbitals

    The description of realistic strongly correlated systems has recently advanced through the combination of density functional theory in the local density approximation (LDA) and dynamical mean-field theory (DMFT). This LDA+DMFT method is able to treat both strongly correlated insulators and metals. Several interfaces between LDA and DMFT have been used, such as (N-th order) Linear Muffin Tin Orbitals or Maximally Localized Wannier Functions. Such schemes, however, are either complex to use or rely on additional simplifications (e.g., the atomic sphere approximation). We present an alternative implementation of LDA+DMFT which keeps the precision of the Wannier implementation but is lighter. It relies on the projection of localized orbitals onto a restricted set of Kohn-Sham states to define the correlated subspace. The method is implemented within the Projector Augmented Wave (PAW) and the Mixed Basis Pseudopotential (MBPP) frameworks. This opens the way to electronic structure calculations within LDA+DMFT for more complex structures with the precision of an all-electron method. We present an application to two correlated systems, namely SrVO3 and beta-NiS (a charge-transfer material), including ligand states in the basis set. The results are compared to calculations done with Maximally Localized Wannier functions, and the physical features appearing in the orbitally resolved spectral functions are discussed. Comment: 15 pages, 17 figures

    The Interstellar N/O Abundance Ratio: Evidence for Local Infall?

    Sensitive measurements of the interstellar gas-phase oxygen abundance have revealed a slight oxygen deficiency (∼15%) toward stars within 500 pc of the Sun as compared to more distant sightlines. Recent FUSE observations of the interstellar gas-phase nitrogen abundance indicate larger variations, but no trends with distance were reported due to the significant measurement uncertainties for many sightlines. By considering only the highest quality (≥5σ) N/O abundance measurements, we find an intriguing trend in the interstellar N/O ratio with distance. Toward the seven stars within ∼500 pc of the Sun, the weighted mean N/O ratio is 0.217 ± 0.011, while for the six stars further away the weighted mean value (N/O = 0.142 ± 0.008) is curiously consistent with the current Solar value (N/O = 0.138 +0.20/−0.18). It is difficult to imagine a scenario invoking environmental (e.g., dust depletion, ionization, etc.) variations alone that explains this abundance anomaly. Is the enhanced nitrogen abundance localized to the Solar neighborhood or evidence of a more widespread phenomenon? If it is localized, then recent infall of low-metallicity gas in the Solar neighborhood may be the best explanation. Otherwise, the N/O variations may be best explained by large-scale differences in the interstellar mixing processes for AGB stars and Type II supernovae. Comment: accepted for publication in the Astrophysical Journal Letters
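
    The weighted means quoted above are standard inverse-variance averages; as a minimal sketch (the ratio values and uncertainties below are made up for illustration, not the paper's data):

```python
def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its 1-sigma uncertainty."""
    weights = [1.0 / s ** 2 for s in sigmas]          # w_i = 1 / sigma_i^2
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    return mean, total ** -0.5                        # err = (sum w_i)^(-1/2)

# hypothetical N/O ratios toward three sightlines
ratios = [0.21, 0.23, 0.22]
errors = [0.02, 0.03, 0.025]
mean, err = weighted_mean(ratios, errors)
print(f"N/O = {mean:.3f} +/- {err:.3f}")
```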