
    Robust H∞ control for a class of nonlinear discrete time-delay stochastic systems with missing measurements

    This is the postprint version of the article; the official published version can be obtained from the link. Copyright 2009 Elsevier Ltd. This paper is concerned with the problem of robust H∞ output feedback control for a class of uncertain discrete-time delayed nonlinear stochastic systems with missing measurements. The parameter uncertainties enter all the system matrices, the time-varying delay is unknown with given lower and upper bounds, the nonlinearities satisfy sector conditions, and the missing measurements are described by a binary switching sequence that obeys a conditional probability distribution. The problem addressed is the design of an output feedback controller such that, for all admissible uncertainties, the resulting closed-loop system is exponentially stable in the mean square for the zero disturbance input and also achieves a prescribed H∞ performance level. By using the Lyapunov method and stochastic analysis techniques, sufficient conditions are first derived to guarantee the existence of the desired controllers, and the controller parameters are then characterized in terms of linear matrix inequalities (LMIs). A numerical example is exploited to show the usefulness of the results obtained. This paper was not presented at any IFAC meeting. It was recommended for publication in revised form by Associate Editor Dragan Nešić under the direction of Editor Hassan K. Khalil. This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the U.K. under Grant GR/S27658/01, the City University of Hong Kong under Grant 7001992, the Royal Society of the U.K. under an International Joint Project, the Natural Science Foundation of Jiangsu Province of China under Grant BK2007075, the National Natural Science Foundation of China under Grant 60774073, and the Alexander von Humboldt Foundation of Germany.
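    The LMI-based conditions above rest on a Lyapunov argument. As a minimal sketch of that idea, assuming a simplified deterministic discrete-time system x_{k+1} = A x_k (no delay, noise, uncertainty, or missing measurements, and with an invented matrix A), stability can be certified by a positive definite solution P of the discrete Lyapunov equation:

```python
# Sketch only: certify stability of x_{k+1} = A x_k by solving the discrete
# Lyapunov equation A^T P A - P = -Q for a positive definite P. The matrix A
# below is invented for illustration; the paper's actual conditions are far
# more general (stochastic, delayed, uncertain) and are solved as LMIs.
import numpy as np

def discrete_lyapunov(A, Q, n_terms=200):
    """Approximate P = sum_k (A^T)^k Q A^k, which solves A^T P A - P = -Q
    whenever the spectral radius of A is below 1 (Schur stability)."""
    P = np.zeros_like(Q, dtype=float)
    Ak = np.eye(A.shape[0])
    for _ in range(n_terms):
        P += Ak.T @ Q @ Ak
        Ak = Ak @ A
    return P

A = np.array([[0.5, 0.1],
              [0.0, 0.8]])   # Schur-stable: eigenvalues 0.5 and 0.8
Q = np.eye(2)
P = discrete_lyapunov(A, Q)

# A symmetric positive definite P satisfying the equation certifies stability.
assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.allclose(A.T @ P @ A - P, -Q, atol=1e-8)
```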

    Comparing benefits from many possible computed tomography lung cancer screening programs: Extrapolating from the National Lung Screening Trial using comparative modeling

    Background: The National Lung Screening Trial (NLST) demonstrated that in current and former smokers aged 55 to 74 years, with at least 30 pack-years of cigarette smoking history, who had quit smoking no more than 15 years ago, 3 annual computed tomography (CT) screens reduced lung cancer-specific mortality by 20% relative to 3 annual chest X-ray screens. We compared the benefits achievable with 576 lung cancer screening programs that varied CT screen number and frequency, ages of screening, and eligibility based on smoking. Methods and Findings: We used five independent microsimulation models with lung cancer natural history parameters previously calibrated to the NLST to simulate life histories of the US cohort born in 1950 under all 576 programs. 'Efficient' (within-model) programs prevented the greatest number of lung cancer deaths, compared to no screening, for a given number of CT screens. Among 120 'consensus efficient' (identified as efficient across models) programs, the average starting age was 55 years, the stopping age was 80 or 85 years, the average minimum pack-years was 27, and the maximum years since quitting was 20. Among consensus efficient programs, 11% to 40% of the cohort was screened, and 153 to 846 lung cancer deaths were averted per 100,000 people. In all models, annual screening based on age and smoking eligibility in NLST was not efficient; continuing screening to age 80 or 85 years was more efficient. Conclusions: Consensus results from five models identified a set of efficient screening programs that include annual CT lung cancer screening using criteria like NLST eligibility but extended to older ages. Guidelines for screening should also consider harms of screening and individual patient characteristics.
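    The 'efficient program' criterion above is a Pareto-frontier selection: a program is efficient if no other program averts at least as many deaths using no more CT screens. A minimal sketch of that selection, using invented program tuples rather than model outputs:

```python
# Sketch only: Pareto-frontier selection over (screens, deaths averted).
# The program names and numbers below are invented for illustration; the
# study compared 576 programs across five calibrated microsimulation models.
from typing import List, Tuple

def efficient_programs(programs: List[Tuple[str, float, float]]) -> List[str]:
    """programs: (name, CT screens per 100k, deaths averted per 100k).
    Return names of programs not dominated by any other program."""
    frontier = []
    for name, screens, averted in programs:
        dominated = any(
            s2 <= screens and a2 >= averted and (s2 < screens or a2 > averted)
            for n2, s2, a2 in programs if n2 != name
        )
        if not dominated:
            frontier.append(name)
    return frontier

demo = [
    ("A", 200_000, 150.0),
    ("B", 300_000, 400.0),
    ("C", 300_000, 250.0),   # dominated by B: same screens, fewer deaths averted
    ("D", 500_000, 850.0),
]
print(efficient_programs(demo))   # → ['A', 'B', 'D']
```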

    Recent searches for solar axions and large extra dimensions

    We analyze the data from two recent experiments designed to search for solar axions within the context of multidimensional theories of the Kaluza-Klein type. In these experiments, axions were supposed to be emitted from the solar core, in M1 transitions between the first excited state and the ground state of 57Fe and 7Li. Because of the high multiplicity of axionic Kaluza-Klein states which couple with the strength of ordinary QCD axions, we obtain much more stringent experimental limits on the four-dimensional Peccei-Quinn breaking scale f_{PQ}, compared with the solar QCD axion limit. Specifically, for the 57Fe experiment, f_{PQ}>1x10^6 GeV in theories with two extra dimensions and a higher-dimensional gravitational scale M_H of order 100 TeV, and f_{PQ}>1x10^6 GeV in theories with three extra dimensions and M_H of order 1 TeV (to be compared with the QCD axion limit, f_{PQ}>8x10^3 GeV). For the 7Li experiment, f_{PQ}>1.4x10^5 GeV and 3.4x10^5 GeV, respectively (to be compared with the QCD axion limit, f_{PQ}>1.9x10^2 GeV). It is an interesting feature of our results that, in most cases, the obtained limit on f_{PQ} cannot be coupled with the mass of the axion, which is essentially set by the (common) radius of the extra dimensions.Comment: 4 pages, revtex 4, minor changes, version accepted by PR

    System Size and Energy Dependence of Jet-Induced Hadron Pair Correlation Shapes in Cu+Cu and Au+Au Collisions at sqrt(s_NN) = 200 and 62.4 GeV

    We present azimuthal angle correlations of intermediate transverse momentum (1-4 GeV/c) hadrons from dijets in Cu+Cu and Au+Au collisions at sqrt(s_NN) = 62.4 and 200 GeV. The away-side dijet induced azimuthal correlation is broadened, non-Gaussian, and peaked away from \Delta\phi=\pi in central and semi-central collisions in all the systems. The broadening and peak location are found to depend upon the number of participants in the collision, but not on the collision energy or beam nuclei. These results are consistent with sound or shock wave models, but pose challenges to Cherenkov gluon radiation models.Comment: 464 authors from 60 institutions, 6 pages, 3 figures, 2 tables. Submitted to Physical Review Letters. Plain text data tables for the points plotted in figures for this and previous PHENIX publications are (or will be) publicly available at http://www.phenix.bnl.gov/papers.htm

    Fitting the integrated Spectral Energy Distributions of Galaxies

    Fitting the spectral energy distributions (SEDs) of galaxies is an almost universally used technique that has matured significantly in the last decade. Model predictions and fitting procedures have improved significantly over this time, attempting to keep up with the vastly increased volume and quality of available data. We review here the field of SED fitting, describing the modelling of ultraviolet to infrared galaxy SEDs, the creation of multiwavelength data sets, and the methods used to fit model SEDs to observed galaxy data sets. We touch upon the achievements and challenges in the major ingredients of SED fitting, with a special emphasis on describing the interplay between the quality of the available data, the quality of the available models, and the best fitting technique to use in order to obtain a realistic measurement as well as realistic uncertainties. We conclude that SED fitting can be used effectively to derive a range of physical properties of galaxies, such as redshift, stellar masses, star formation rates, dust masses, and metallicities, with care taken not to over-interpret the available data. Yet there still exist many issues such as estimating the age of the oldest stars in a galaxy, finer details of dust properties and dust-star geometry, and the influences of poorly understood, luminous stellar types and phases. The challenge for the coming years will be to improve both the models and the observational data sets to resolve these uncertainties. The present review will be made available on an interactive, moderated web page (sedfitting.org), where the community can access and change the text. The intention is to expand the text and keep it up to date over the coming years.Comment: 54 pages, 26 figures, Accepted for publication in Astrophysics & Space Science
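    The core fitting step the review describes, comparing model SEDs to observed photometry, can be sketched as a chi-squared scan over a template grid with an analytic best-fit normalisation. The templates and fluxes below are synthetic toys, not real model libraries:

```python
# Sketch only: chi-squared SED fitting over a small template grid. For a
# linear scale factor a (e.g. a stellar-mass normalisation), the
# chi2-minimising value has a closed form. All numbers here are invented.
import numpy as np

def fit_sed(obs_flux, obs_err, model_grid):
    """Return (best_index, best_scale, best_chi2) over a grid of templates."""
    best = (None, None, np.inf)
    for i, model in enumerate(model_grid):
        w = 1.0 / obs_err**2
        a = np.sum(w * obs_flux * model) / np.sum(w * model**2)  # analytic scale
        chi2 = np.sum(w * (obs_flux - a * model) ** 2)
        if chi2 < best[2]:
            best = (i, a, chi2)
    return best

rng = np.random.default_rng(0)
true_model = np.array([1.0, 2.0, 4.0, 3.0, 1.5])        # toy template fluxes
obs = 2.5 * true_model + rng.normal(0, 0.05, 5)         # "observed" photometry
grid = [np.ones(5), true_model, np.array([4.0, 3.0, 2.0, 1.0, 0.5])]
idx, scale, chi2 = fit_sed(obs, np.full(5, 0.05), grid)
assert idx == 1                  # the matching template wins
assert abs(scale - 2.5) < 0.1    # and recovers the true normalisation
```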

    Global Search for New Physics with 2.0/fb at CDF

    Data collected in Run II of the Fermilab Tevatron are searched for indications of new electroweak-scale physics. Rather than focusing on particular new physics scenarios, CDF data are analyzed for discrepancies with the standard model prediction. A model-independent approach (Vista) considers gross features of the data, and is sensitive to new large cross-section physics. Further sensitivity to new physics is provided by two additional algorithms: a Bump Hunter searches invariant mass distributions for "bumps" that could indicate resonant production of new particles; and the Sleuth procedure scans for data excesses at large summed transverse momentum. This combined global search for new physics in 2.0/fb of ppbar collisions at sqrt(s)=1.96 TeV reveals no indication of physics beyond the standard model.Comment: 8 pages, 7 figures. Final version which appeared in Physical Review D Rapid Communications
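    The Bump Hunter idea, scanning a mass spectrum for localised excesses over the neighbouring background, can be sketched as a sliding-window comparison against sideband bins. The histogram, window size, and threshold below are invented; real analyses use proper background fits and trials-factor corrections:

```python
# Sketch only: flag windows of a binned mass spectrum whose count exceeds the
# sideband expectation by more than `threshold` Poisson standard deviations.
# The spectrum is synthetic; this is not the CDF Bump Hunter implementation.
import math

def bump_hunt(counts, window=3, threshold=5.0):
    """Return centre indices of windows with a significant excess over the
    average of the two adjacent sidebands (simple Poisson approximation)."""
    hits = []
    half = window // 2
    for i in range(half + window, len(counts) - half - window):
        signal = sum(counts[i - half:i + half + 1])
        side = (counts[i - half - window:i - half]
                + counts[i + half + 1:i + half + 1 + window])
        bkg = sum(side) * (window / len(side))   # sideband-scaled expectation
        sigma = math.sqrt(bkg) if bkg > 0 else 1.0
        if (signal - bkg) / sigma > threshold:
            hits.append(i)
    return hits

spectrum = [50] * 30
spectrum[15] += 100           # injected resonance-like excess at bin 15
print(bump_hunt(spectrum))    # → [14, 15, 16]  (windows overlapping the bump)
```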

    Observation of Orbitally Excited B_s Mesons

    We report the first observation of two narrow resonances consistent with states of orbitally excited (L=1) B_s mesons using 1 fb^{-1} of ppbar collisions at sqrt{s} = 1.96 TeV collected with the CDF II detector at the Fermilab Tevatron. We use two-body decays into K^- and B^+ mesons reconstructed as B^+ \to J/\psi K^+, J/\psi \to \mu^+ \mu^- or B^+ \to \bar{D}^0 \pi^+, \bar{D}^0 \to K^+ \pi^-. We deduce the masses of the two states to be m(B_{s1}) = 5829.4 +- 0.7 MeV/c^2 and m(B_{s2}^*) = 5839.7 +- 0.7 MeV/c^2.Comment: Version accepted and published by Phys. Rev. Lett.

    Shrinking a large dataset to identify variables associated with increased risk of Plasmodium falciparum infection in Western Kenya

    Large datasets are often not amenable to analysis using traditional single-step approaches. Here, our general objective was to apply imputation techniques, principal component analysis (PCA), elastic net and generalized linear models to a large dataset in a systematic approach to extract the most meaningful predictors for a health outcome. We extracted predictors of Plasmodium falciparum infection from a large covariate dataset with a limited number of observations, using data from the People, Animals, and their Zoonoses (PAZ) project to demonstrate these techniques: data collected from 415 homesteads in western Kenya contained over 1500 variables describing the health, environment, and social factors of the humans, livestock, and the homesteads in which they reside. The wide, sparse dataset was simplified to 42 predictors of P. falciparum malaria infection, and wealth rankings were produced for all homesteads. The 42 predictors make biological sense and are supported by previous studies. The systematic data-mining approach used here would make many large datasets more manageable and informative for decision-making processes and health policy prioritization.
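    The reduction pipeline described above (imputation followed by elastic-net shrinkage of a wide covariate set) can be sketched with scikit-learn on simulated stand-ins for the PAZ covariates; the data, dimensions, and penalty settings here are illustrative only:

```python
# Sketch only: impute missing covariate values, standardise, then let an
# elastic net shrink uninformative coefficients to zero, leaving a short
# predictor list. All data below are simulated, not the PAZ dataset.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import ElasticNet
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n, p = 415, 60                            # homesteads x candidate covariates
X = rng.normal(size=(n, p))
X[rng.random((n, p)) < 0.1] = np.nan      # sparse/missing entries, as in PAZ
# Outcome truly depends only on the first two covariates:
y = (1.5 * np.nan_to_num(X[:, 0]) - 2.0 * np.nan_to_num(X[:, 1])
     + rng.normal(0, 0.5, n))

model = make_pipeline(
    SimpleImputer(strategy="median"),       # step 1: imputation
    StandardScaler(),
    ElasticNet(alpha=0.1, l1_ratio=0.5),    # step 2: sparse selection
)
model.fit(X, y)
coef = model.named_steps["elasticnet"].coef_
selected = np.flatnonzero(np.abs(coef) > 1e-6)
print(len(selected), "predictors retained out of", p)
assert {0, 1} <= set(selected)   # the truly informative covariates survive
```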

    Search for the associated production of a b quark and a neutral supersymmetric Higgs boson which decays to tau pairs

    We report results from a search for production of a neutral Higgs boson in association with a b quark. We search for Higgs decays to tau pairs with one tau subsequently decaying to a muon and the other to hadrons. The data correspond to 2.7 fb^{-1} of ppbar collisions recorded by the D0 detector at sqrt(s) = 1.96 TeV. The data are found to be consistent with background predictions. The result allows us to exclude a significant region of parameter space of the minimal supersymmetric model.Comment: Submitted to Phys. Rev. Lett.