
    Dynamic Programming for General Linear Quadratic Optimal Stochastic Control with Random Coefficients

    We are concerned with the linear-quadratic optimal stochastic control problem with random coefficients. Under suitable conditions, we prove that the value field $V(t,x,\omega)$, $(t,x,\omega)\in [0,T]\times R^n\times \Omega$, is quadratic in $x$ and has the form $V(t,x)=\langle K_t x, x\rangle$, where $K$ is an essentially bounded, nonnegative, symmetric matrix-valued adapted process. Using the dynamic programming principle (DPP), we prove that $K$ is a continuous semi-martingale of the form $K_t=K_0+\int_0^t dk_s+\sum_{i=1}^d\int_0^t L_s^i\, dW_s^i$, $t\in [0,T]$, with $k$ a continuous process of bounded variation and $E\left[\left(\int_0^T|L_s|^2\, ds\right)^p\right]<\infty$ for all $p\ge 2$, and that $(K, L)$ with $L:=(L^1, \cdots, L^d)$ is a solution to the associated backward stochastic Riccati equation (BSRE), whose generator is highly nonlinear in the unknown pair of processes. Uniqueness is also proved via a localized completion of squares, in a self-contained manner, for a general BSRE. The existence and uniqueness of an adapted solution to a general BSRE was initially posed by the French mathematician J. M. Bismut (1976, 1978) and was solved by the author (2003) via the stochastic maximum principle, with a viewpoint of stochastic flow for the associated stochastic Hamiltonian system. The present paper is its companion, and gives the second, but more comprehensive, adapted solution to a general BSRE via the DPP. Further extensions to the jump-diffusion control system and to the general nonlinear control system are possible. Comment: 16 pages.
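
    For reference, in the conventional formulation of this problem, with state equation $dX_t=(A_tX_t+B_tu_t)\,dt+\sum_{i=1}^d(C^i_tX_t+D^i_tu_t)\,dW^i_t$, running cost weights $Q_t$, $R_t$ and terminal weight $G$ (none of these coefficient symbols appear in the abstract, so the display below is only an indicative sketch of the standard form), the associated BSRE reads
    $$dK_t=-\Big[A_t^\top K_t+K_tA_t+Q_t+\sum_{i=1}^d\big((C^i_t)^\top K_tC^i_t+(C^i_t)^\top L^i_t+L^i_tC^i_t\big)-\mathcal{M}_t\mathcal{N}_t^{-1}\mathcal{M}_t^\top\Big]dt+\sum_{i=1}^d L^i_t\,dW^i_t,\qquad K_T=G,$$
    where $\mathcal{M}_t:=K_tB_t+\sum_i\big((C^i_t)^\top K_t+L^i_t\big)D^i_t$ and $\mathcal{N}_t:=R_t+\sum_i(D^i_t)^\top K_tD^i_t$; the quadratic dependence of the drift on $(K,L)$ through $\mathcal{M}_t\mathcal{N}_t^{-1}\mathcal{M}_t^\top$ is the highly nonlinear generator referred to above.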

    Comparing Income Distributions Between Economies That Reward Innovation And Those That Reward Knowledge

    In this paper, we develop an optimal control model of labor allocation in two types of economy: one for innovative workers and one for knowledge workers. In both economies, workers allocate time between learning and discovering new knowledge. Both markets consist of a continuum of heterogeneous agents distinguished by their learning ability. Workers are rewarded for the knowledge they possess in the knowledge economy, and only for the new knowledge they create in the innovative economy. We show that, at steady state, while human capital accumulation is higher in the knowledge economy, the rate of knowledge creation is not necessarily higher in the innovative economy. We also prove that when the cost of learning is sufficiently high, the distribution of net wage income in the knowledge economy dominates that in the innovative economy in the first degree.
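
    The final claim concerns first-degree (first-order stochastic) dominance between the two wage distributions, which can be checked empirically by comparing quantiles. The Python sketch below, with purely hypothetical lognormal wage samples, only illustrates that comparison; it is not code or data from the paper.

        import numpy as np

        def first_degree_dominates(sample_a, sample_b):
            """Return True if sample_a first-degree dominates sample_b, i.e. every
            interior quantile of sample_a is at least the matching quantile of
            sample_b; tails below 1% and above 99% are trimmed for robustness."""
            probs = np.linspace(0.01, 0.99, 99)
            return bool(np.all(np.quantile(sample_a, probs) >= np.quantile(sample_b, probs)))

        # Hypothetical net-wage samples for the two economies (illustration only).
        rng = np.random.default_rng(0)
        knowledge_wages = rng.lognormal(mean=1.1, sigma=0.4, size=10_000)
        innovative_wages = rng.lognormal(mean=1.0, sigma=0.4, size=10_000)
        print(first_degree_dominates(knowledge_wages, innovative_wages))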

    Localization of a spin-orbit coupled Bose-Einstein condensate in a bichromatic optical lattice

    We study the localization of a noninteracting and weakly interacting Bose-Einstein condensate (BEC) with spin-orbit coupling loaded in a quasiperiodic bichromatic optical lattice potential, using the numerical solution and a variational approximation of a binary mean-field Gross-Pitaevskii equation with two pseudo-spin components. We confirm the existence of stationary localized states in the presence of the spin-orbit and Rabi couplings for an equal distribution of atoms in the two components. We find that the interplay between the spin-orbit and Rabi couplings favors localization or delocalization of the BEC depending on the phase difference between the components. We also study the oscillation dynamics of the localized states for an initial population imbalance between the two components.
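
    A quasi-one-dimensional binary mean-field model of the kind described above is commonly written as the coupled Gross-Pitaevskii equations (in dimensionless form; the parametrization used in the paper may differ, so this display is only indicative)
    $$i\,\partial_t\psi_{1,2}=\Big[-\tfrac12\,\partial_x^2 \mp i\gamma\,\partial_x+V(x)+g|\psi_{1,2}|^2+g_{12}|\psi_{2,1}|^2\Big]\psi_{1,2}+\Gamma\,\psi_{2,1},$$
    with $\gamma$ the spin-orbit-coupling strength, $\Gamma$ the Rabi coupling, $g$ and $g_{12}$ the intra- and inter-component nonlinearities, and a quasiperiodic bichromatic lattice such as $V(x)=A_1\sin^2(k_1x)+A_2\sin^2(k_2x)$ with incommensurate $k_1$, $k_2$.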

    Experimental analysis of computer system dependability

    This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: the design phase, the prototype phase, and the operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by a discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools, including FIAT, FERRARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
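
    To make the importance-sampling remark concrete (this is an illustration, not code from any of the surveyed tools), the Python sketch below estimates a small failure probability of a hypothetical two-component series system by drawing lifetimes from artificially accelerated exponential distributions and reweighting each sample with the likelihood ratio of the true to the biased densities.

        import numpy as np

        def failure_prob_importance_sampling(rates, mission_time, accel=1000.0,
                                             n=100_000, seed=1):
            """Estimate P(any component fails before mission_time) for independent
            exponential lifetimes with the given failure rates.  Lifetimes are drawn
            with rates scaled by `accel`, and each sample is weighted by the
            likelihood ratio of the true joint density to the biased one."""
            rng = np.random.default_rng(seed)
            rates = np.asarray(rates, dtype=float)
            biased = accel * rates
            lifetimes = rng.exponential(1.0 / biased, size=(n, len(rates)))
            # Likelihood ratio: prod_i (r_i e^{-r_i t_i}) / (b_i e^{-b_i t_i}).
            weights = np.prod((rates / biased) * np.exp(-(rates - biased) * lifetimes),
                              axis=1)
            failed = lifetimes.min(axis=1) < mission_time
            return float(np.mean(weights * failed))

        # Example: two components with rare failures over a one-hour mission.
        print(failure_prob_importance_sampling(rates=[1e-4, 2e-4], mission_time=1.0))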

    Seismic analysis of 70 Ophiuchi A: A new quantity proposed

    The basic intent of this paper is to model 70 Ophiuchi A using the latest asteroseismic observations as complementary constraints and to determine the fundamental parameters of the star. Additionally, we propose a new quantity to lift the degeneracy between the initial chemical composition and stellar age. Using the Yale stellar evolution code (YREC7), we construct a series of stellar evolutionary tracks for the mass range $M = 0.85$--$0.93\,M_{\odot}$ with different compositions $Y_{i}$ (0.26--0.30) and $Z_{i}$ (0.017--0.023). Along these tracks, we select a grid of stellar model candidates that fall within the error box in the HR diagram and calculate their theoretical frequencies and the large and small frequency separations using Guenther's stellar pulsation code. Following the asymptotic formula of stellar $p$-modes, we define a quantity $r_{01}$ which is correlated with stellar age, and we test it against the theoretical adiabatic frequencies of many models. Many detailed models of 70 Ophiuchi A are listed in Table 3. By combining all non-asteroseismic observations available for 70 Ophiuchi A with these seismological data, we conclude that Model 60, Model 125 and Model 126, listed in Table 3, are presently the optimum models. Meanwhile, we predict that the radius of this star is about 0.860--0.865 $R_{\odot}$ and the age is about 6.8--7.0 Gyr, with mass 0.89--0.90 $M_{\odot}$. Additionally, we show that the new quantity $r_{01}$ can be a useful indicator of stellar age. Comment: 23 pages, 5 figures, accepted by New Astronomy.
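
    For orientation, the asymptotic relation for low-degree $p$-modes underlying such an analysis is $\nu_{n,\ell}\simeq\Delta\nu\,(n+\ell/2+\epsilon)-\ell(\ell+1)D_0$, with the large and small frequency separations $\Delta\nu_\ell(n)=\nu_{n,\ell}-\nu_{n-1,\ell}$ and $\delta\nu_{02}(n)=\nu_{n,0}-\nu_{n-1,2}$. One ratio built from $\ell=0$ and $\ell=1$ frequencies that is commonly used as an age-sensitive diagnostic is $r_{01}(n)=\big(\nu_{n-1,0}-4\nu_{n-1,1}+6\nu_{n,0}-4\nu_{n,1}+\nu_{n+1,0}\big)/\big(8(\nu_{n,1}-\nu_{n-1,1})\big)$; the quantity $r_{01}$ defined in the paper is its own construction and need not coincide with this form.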

    Uniform fractional factorial designs

    The minimum aberration criterion has been frequently used in the selection of fractional factorial designs with nominal factors. For designs with quantitative factors, however, level permutation of factors can alter their geometrical structures and statistical properties. In this paper, uniformity is used to further distinguish fractional factorial designs, beyond the minimum aberration criterion. We show that minimum aberration designs have low discrepancies on average. An efficient method for constructing uniform minimum aberration designs is proposed, and optimal designs with 27 and 81 runs are obtained for practical use. These designs have good uniformity and are effective for studying quantitative factors. Comment: Published at http://dx.doi.org/10.1214/12-AOS987 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
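
    Uniformity of a factorial design is typically measured by a discrepancy of its points after mapping the factor levels into the unit cube. The Python sketch below computes the centered $L_2$-discrepancy of Hickernell for a small three-level design; the particular discrepancy and constructions used in the paper may differ, and the example array is only illustrative.

        import numpy as np

        def centered_l2_discrepancy(design, levels):
            """Squared centered L2-discrepancy of a design whose entries are integer
            levels 0, ..., levels-1, mapped to (0, 1) by (2*level + 1) / (2*levels)."""
            x = (2.0 * np.asarray(design, dtype=float) + 1.0) / (2.0 * levels)
            n, s = x.shape
            d = np.abs(x - 0.5)
            term1 = (13.0 / 12.0) ** s
            term2 = (2.0 / n) * np.sum(np.prod(1.0 + 0.5 * d - 0.5 * d**2, axis=1))
            dij = np.abs(x[:, None, :] - x[None, :, :])
            pairwise = 1.0 + 0.5 * d[:, None, :] + 0.5 * d[None, :, :] - 0.5 * dij
            term3 = np.sum(np.prod(pairwise, axis=2)) / n**2
            return term1 - term2 + term3

        # A 9-run, 3-factor, 3-level orthogonal array written with levels 0-2.
        oa_9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
                         [1, 0, 1], [1, 1, 2], [1, 2, 0],
                         [2, 0, 2], [2, 1, 0], [2, 2, 1]])
        print(centered_l2_discrepancy(oa_9, levels=3))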

    NASA ground terminal communication equipment automated fault isolation expert systems

    This paper describes prototype expert systems that diagnose the Distribution and Switching Systems I and II (DSS1 and DSS2), the Statistical Multiplexers (SM), and the Multiplexer and Demultiplexer systems (MDM) at the NASA Ground Terminal (NGT). A system-level fault isolation expert system monitors the activities of a selected data stream, verifies that the fault exists in the NGT, and identifies the faulty equipment. Equipment-level fault isolation expert systems are then invoked to isolate the fault to the Line Replaceable Unit (LRU) level. Input, and sometimes output, data stream activities for the equipment are available. The system-level fault isolation expert system compares the equipment input and output status for a data stream and performs loopback tests (if necessary) to isolate the faulty equipment. The equipment-level fault isolation system utilizes the process of elimination and/or the maintenance personnel's fault isolation experience stored in its knowledge base. The DSS1, DSS2 and SM fault isolation systems, using knowledge of the current equipment configuration and the equipment circuitry, issue a set of test connections according to predefined rules; the faulty component or board is then identified by the expert system by analyzing the test results. The MDM fault isolation system correlates the failure symptoms with the faulty component based on maintenance personnel experience, so the faulty component can be determined from the observed failure symptoms. The DSS1, DSS2, SM, and MDM equipment simulators are implemented in PASCAL. The DSS1 fault isolation expert system was converted from VP-Expert to the C language and integrated into the NGT automation software for offline switch diagnoses. Potentially, the NGT fault isolation algorithms can be used for the DSS1, SM, and MDM located at Goddard Space Flight Center (GSFC).
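
    The system-level isolation step described above amounts to walking the equipment chain of a data stream and flagging the first unit whose input status is good but whose output status is bad. The Python sketch below is purely illustrative; the equipment names and status flags are hypothetical and not taken from the NGT software.

        # Monitored input/output status (True = good) for each unit in the chain.
        stream_status = [
            ("DSS1", {"input_ok": True,  "output_ok": True}),
            ("SM",   {"input_ok": True,  "output_ok": False}),  # fault manifests here
            ("MDM",  {"input_ok": False, "output_ok": False}),  # downstream symptom only
        ]

        def isolate_faulty_equipment(chain):
            """Return the first unit with a good input but a bad output; a loopback
            test (not modeled here) would confirm the fault is inside the NGT."""
            for name, status in chain:
                if status["input_ok"] and not status["output_ok"]:
                    return name
            return None  # no fault isolated at the system level

        print(isolate_faulty_equipment(stream_status))  # -> SM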

    Generation of spin current and polarization under dynamic gate control of spin-orbit interaction in low-dimensional semiconductor systems

    Based on the Keldysh formalism, the Boltzmann kinetic equation and the drift-diffusion equation have been derived for studying spin polarization flow and spin accumulation under the effect of a time-dependent Rashba spin-orbit interaction in a semiconductor quantum well. The time-dependent Rashba interaction is provided by time-dependent electric gates of appropriate shapes. Several examples of spin manipulation by gates are considered. Mechanisms and conditions for obtaining a stationary spin density and an induced rectified DC spin current are studied. Comment: 10 pages, 3 figures, RevTeX.
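
    The gate-controlled coupling referred to above is the Rashba term of the two-dimensional electron Hamiltonian, written in standard notation (the abstract itself does not spell this out) as
    $$H(t)=\frac{\hbar^2k^2}{2m^*}+\alpha(t)\,(\sigma_xk_y-\sigma_yk_x),$$
    where $\sigma_{x,y}$ are Pauli matrices, $\mathbf{k}$ is the in-plane wave vector, and the Rashba constant $\alpha(t)$ follows the time-dependent gate voltage.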