
    Open clusters: probes of galaxy evolution and bench tests of stellar models

    Open clusters are the only example of single-age, single initial chemical composition populations in the Galaxy, and they play an important role in the study of the formation and evolution of the Galactic disk. In addition, they have traditionally been employed to test theoretical stellar evolution models. A brief review of constraints/tests of white dwarf models/progenitors and rotating star models based on observations of Galactic open clusters is presented, also introducing recent contributions from asteroseismic analyses. Comment: Proc. of the workshop "Asteroseismology of stellar populations in the Milky Way" (Sesto, 22-26 July 2013), Astrophysics and Space Science Proceedings (eds. A. Miglio, L. Girardi, P. Eggenberger, J. Montalban)

    An optimization-based control strategy for energy efficiency of discrete manufacturing systems

    In order to reduce global energy consumption and avoid the highest power peaks during operation of manufacturing systems, an optimization-based controller for selectively switching peripheral devices on/off in a test bench that emulates the energy consumption of a periodic system is proposed. First, energy consumption models for the test-bench devices are obtained from data using subspace identification methods. Next, a control strategy is designed based on both optimization and a receding horizon approach, considering the energy consumption models, operating constraints, and the real processes performed by the peripheral devices. Thus, a control policy based on dynamical models of the peripheral devices is proposed to reduce the energy consumption of manufacturing systems without sacrificing productivity. Afterward, the proposed strategy is validated on the test bench and compared to a typical rule-based control scheme commonly used for these manufacturing systems. Based on the obtained results, reductions of nearly 7% could be achieved, improving energy efficiency by minimizing the energy costs related to the nominal power purchased. Peer reviewed. Postprint (author's final draft).
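    The receding-horizon on/off policy described above can be sketched as a brute-force optimization over a short binary switching horizon: only the first decision of the cheapest feasible plan is applied, then the optimization is repeated. All device parameters, rates, and constraint bounds below are invented for illustration; the paper's actual controller uses identified consumption models rather than these toy dynamics.

    ```python
    from itertools import product

    # Hypothetical peripheral device feeding a buffer that the main process
    # drains at a fixed rate; all names and numbers are illustrative.
    P_ON = 5.0                # power draw while the device is on [kW]
    FILL_RATE = 2.0           # buffer units added per step while on
    DRAIN_RATE = 1.0          # buffer units consumed per step by the process
    B_MIN, B_MAX = 0.0, 10.0  # operating constraints on the buffer level
    HORIZON = 4               # prediction horizon length [steps]

    def receding_horizon_step(buffer_level):
        """Return the first on/off decision of the cheapest feasible plan."""
        best_cost, best_first = None, 1  # default to 'on' if nothing is feasible
        for plan in product([0, 1], repeat=HORIZON):
            b, cost, feasible = buffer_level, 0.0, True
            for u in plan:
                b += u * FILL_RATE - DRAIN_RATE
                cost += u * P_ON
                if not (B_MIN <= b <= B_MAX):
                    feasible = False
                    break
            if feasible and (best_cost is None or cost < best_cost):
                best_cost, best_first = cost, plan[0]
        return best_first

    # Closed-loop simulation: apply only the first move, then re-optimize.
    b, energy = 5.0, 0.0
    for _ in range(8):
        u = receding_horizon_step(b)
        b += u * FILL_RATE - DRAIN_RATE
        energy += u * P_ON
    ```

    The device stays off while the buffer can absorb the drain, so total energy stays well below the always-on cost of `8 * P_ON`, which is the qualitative effect the controller exploits.
    
    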

    Getting started in probabilistic graphical models

    Probabilistic graphical models (PGMs) have become a popular tool for computational analysis of biological data in a variety of domains. But what exactly are they, and how do they work? How can we use PGMs to discover patterns that are biologically relevant? And to what extent can PGMs help us formulate new hypotheses that are testable at the bench? This note sketches out some answers and illustrates the main ideas behind the statistical approach to biological pattern discovery. Comment: 12 pages, 1 figure
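    As a minimal illustration of the kind of inference a PGM supports, consider a two-node discrete Bayesian network, Pathway -> Gene; the structure and all probabilities below are invented for this sketch, not taken from the note.

    ```python
    # Two binary nodes: Pathway (active / inactive) -> Gene (expressed / not).
    # All probabilities are made-up illustrative numbers.
    p_pathway = 0.3                          # P(Pathway = active)
    p_gene_given = {True: 0.9, False: 0.1}   # P(Gene = expressed | Pathway)

    def posterior_pathway(gene_expressed: bool) -> float:
        """P(Pathway = active | Gene observation) via Bayes' rule."""
        joint_active = p_pathway * (
            p_gene_given[True] if gene_expressed else 1 - p_gene_given[True])
        joint_inactive = (1 - p_pathway) * (
            p_gene_given[False] if gene_expressed else 1 - p_gene_given[False])
        return joint_active / (joint_active + joint_inactive)
    ```

    Observing the gene expressed raises the belief that the pathway is active above its prior, and observing it silent lowers it below the prior, which is the basic pattern-discovery move the note describes.
    
    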

    Prudence and Robustness as Explanations for Precautionary Savings; an Evaluation

    This paper evaluates approximation methods for making the numerical solution of overlapping-generations models with aggregate risk manageable. The paper starts with a model in which households maximize expected utility over their life cycle. Instantaneous utility is characterized by constant relative risk aversion. Prudence, a characteristic of the utility function, leads to precautionary saving. The first-order conditions include expectations. With a single source of uncertainty, numerical integration of the expectation term is not prohibitive. Because of its accuracy, the numerical integration results are used as a benchmark. Taylor series approximations can lead to the same results, depending on the linearization point. A linear-quadratic approximation of the household model is evaluated subsequently. Alternatively, precautionary saving effects can result from robust decision making. This approach leads to linear policy functions and gives a rather good approximation of the benchmark model, although not as good as the Taylor series approximation.
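    A minimal sketch of how an expectation term over normally distributed risk can be integrated numerically, here with standard 3-point Gauss-Hermite quadrature checked against a closed form; the household model itself is not reproduced, and the test function is an illustrative stand-in.

    ```python
    import math

    # 3-point Gauss-Hermite quadrature (weight e^{-x^2}); these nodes and
    # weights are standard textbook values.
    GH_NODES = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]
    GH_WEIGHTS = [math.sqrt(math.pi) / 6,
                  2 * math.sqrt(math.pi) / 3,
                  math.sqrt(math.pi) / 6]

    def expect_normal(f, mu=0.0, sigma=1.0):
        """E[f(X)] for X ~ N(mu, sigma^2), via change of variables x -> mu + sigma*sqrt(2)*t."""
        return sum(w * f(mu + sigma * math.sqrt(2) * x)
                   for x, w in zip(GH_NODES, GH_WEIGHTS)) / math.sqrt(math.pi)

    # Accuracy check against a known closed form: E[exp(s*Z)] = exp(s^2 / 2)
    # for Z ~ N(0, 1); with s = 0.1 the 3-point rule is already very close.
    approx = expect_normal(lambda z: math.exp(0.1 * z))
    exact = math.exp(0.005)
    ```

    This accuracy is why such quadrature results can serve as the benchmark against which Taylor-series and linear-quadratic approximations are judged.
    
    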

    Bench Plot and Mixed Effects Models: First steps toward a comprehensive benchmark analysis toolbox

    Benchmark experiments produce data in a very specific format: the observations are drawn from the performance distributions of the candidate algorithms on resampled data sets. In this paper we introduce new visualisation techniques and show how formal test procedures can be used to evaluate the results. This is the first step towards a comprehensive toolbox of exploratory and inferential analysis methods for benchmark experiments.
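    A toy version of that data format and of a formal comparison might look as follows; the algorithm names and performance distributions are simulated stand-ins, and a simple paired t-statistic replaces the paper's visualisation and mixed-effects machinery.

    ```python
    import random

    random.seed(0)

    # Benchmark-experiment data: one performance value per algorithm per
    # resampled data set (B resamples). Names and distributions are invented.
    B = 50
    perf = {
        "algo_A": [random.gauss(0.80, 0.02) for _ in range(B)],
        "algo_B": [random.gauss(0.78, 0.02) for _ in range(B)],
    }

    # Paired differences across the B resamples, then a paired t-statistic
    # as one example of a formal test procedure on benchmark data.
    diffs = [a - b for a, b in zip(perf["algo_A"], perf["algo_B"])]
    mean_diff = sum(diffs) / B
    var_diff = sum((d - mean_diff) ** 2 for d in diffs) / (B - 1)
    t_stat = mean_diff / (var_diff / B) ** 0.5
    ```

    Each row of such a data set is one resample, each column one candidate algorithm, which is exactly the structure the visualisation and inference tools consume.
    
    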

    Redox stress defines the small artery vasculopathy of hypertension: how do we bridge the bench-to-bedside gap?

    Although convincing experimental evidence demonstrates the importance of vascular reactive oxygen and nitrogen species (RONS), oxidative stress, and perturbed redox signaling as causative processes in the vasculopathy of hypertension, this has not translated to the clinic. We discuss this bench-to-bedside disparity and the urgency of progressing vascular redox pathobiology from experimental models to patients by studying disease-relevant human tissues. It is only through such approaches that the unambiguous role of vascular redox stress will be defined, so that mechanism-based therapies can be developed in a personalized and precise manner to prevent, slow, or reverse the progression of small-vessel disorders and consequent hypertension.

    Hot-bench simulation of the active flexible wing wind-tunnel model

    Two simulations, one batch and one real-time, of an aeroelastically scaled wind-tunnel model were developed. The wind-tunnel model was a full-span, free-to-roll model of an advanced fighter concept. The batch simulation was used to generate and verify the real-time simulation and to test candidate control laws prior to implementation. The real-time simulation supported hot-bench testing of a digital controller, which was developed to actively control the elastic deformation of the wind-tunnel model. Time scaling was required for hot-bench testing. The wind-tunnel model, the mathematical models for the simulations, the techniques employed to reduce the hot-bench time-scale factors, and the verification procedures are described.

    Forecast evaluation of small nested model sets

    We propose two new procedures for comparing the mean squared prediction error (MSPE) of a benchmark model to the MSPEs of a small set of alternative models that nest the benchmark. Our procedures compare the benchmark to all the alternative models simultaneously rather than sequentially, and do not require re-estimation of models as part of a bootstrap procedure. Both procedures adjust MSPE differences in accordance with Clark and West (2007); one procedure then examines the maximum t-statistic, the other computes a chi-squared statistic. Our simulations examine the proposed procedures and two existing procedures that do not adjust the MSPE differences: a chi-squared statistic and White's (2000) reality check. In these simulations, the two statistics that adjust MSPE differences have the most accurate size, and the procedure that looks at the maximum t-statistic has the best power. We illustrate our procedures by comparing forecasts of different models for U.S. inflation. JEL Classification: C32, C53, E37. Keywords: inflation forecasting, multiple model comparisons, out-of-sample prediction, testing.
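    A sketch of the Clark-West (2007) MSPE adjustment and the maximum-t comparison over a small set of nesting alternatives; the forecast data and model names below are invented for illustration, and the critical-value computation is omitted.

    ```python
    # Clark-West adjusted MSPE difference for one benchmark/alternative pair:
    # adj_t = e1_t^2 - e2_t^2 + (f1_t - f2_t)^2, then a t-statistic on its mean.
    def clark_west_t(y, f1, f2):
        """t-statistic on the CW-adjusted MSPE difference (f1 = benchmark forecasts)."""
        n = len(y)
        adj = [(yt - a) ** 2 - (yt - b) ** 2 + (a - b) ** 2
               for yt, a, b in zip(y, f1, f2)]
        mean = sum(adj) / n
        var = sum((x - mean) ** 2 for x in adj) / (n - 1)
        return mean / (var / n) ** 0.5

    # Invented example: a zero-forecast benchmark against two alternatives,
    # compared simultaneously by taking the maximum t-statistic.
    y = [0.1, -0.2, 0.3, 0.0, 0.2, -0.1, 0.15, 0.05]
    f_bench = [0.0] * 8
    alts = {"alt_small": [0.05, -0.1, 0.2, 0.05, 0.1, -0.05, 0.1, 0.0],
            "alt_large": [0.5, -0.6, 0.8, 0.4, 0.6, -0.5, 0.7, 0.3]}
    max_t = max(clark_west_t(y, f_bench, f) for f in alts.values())
    ```

    The `(f1 - f2)^2` term is what removes the noise penalty a nesting model pays for estimating extra parameters, which is why the adjusted statistics have more accurate size than their unadjusted counterparts.
    
    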