
    The statistical mechanics of a polygenic character under stabilizing selection, mutation and drift

    By exploiting an analogy between population genetics and statistical mechanics, we study the evolution of a polygenic trait under stabilizing selection, mutation, and genetic drift. This requires us to track only four macroscopic variables, instead of the distribution of all the allele frequencies that influence the trait. These macroscopic variables are the expectations of the trait mean, of its square, of the genetic variance, and of a measure of heterozygosity; they are derived from a generating function that is in turn obtained by maximizing an entropy measure. These four macroscopics are enough to accurately describe the dynamics of the trait mean and of its genetic variance (and in principle of any other quantity). Unlike previous approaches based on an infinite series of moments or cumulants, which had to be truncated arbitrarily, our calculations provide a well-defined approximation procedure. We apply the framework to abrupt and gradual changes in the optimum, as well as to changes in the strength of stabilizing selection. Our approximations are surprisingly accurate, even for systems with as few as 5 loci. We find that when the effects of drift are included, the expected genetic variance is hardly altered by directional selection, even though it fluctuates in any particular instance. We also find hysteresis, showing that even after averaging over the microscopic variables, the macroscopic trajectories retain a memory of the underlying genetic states. Comment: 35 pages, 8 figures.
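    For orientation only, the kind of model the abstract refers to can be summarized as follows; the symbols (locus effects, allele frequencies, optimum, selection strength) are our own shorthand and not necessarily the paper's parameterization.

```latex
% Minimal sketch, assuming a diallelic n-locus additive trait; notation is illustrative.
% Trait mean and genetic variance in terms of allele frequencies p_i and effects \gamma_i:
\bar{z} = \sum_{i=1}^{n} \gamma_i \,(2p_i - 1),
\qquad
v_g = 2 \sum_{i=1}^{n} \gamma_i^{2}\, p_i (1 - p_i)

% Gaussian stabilizing selection of strength s towards an optimum z_o:
W(z) \propto \exp\!\left[ -\tfrac{s}{2}\,(z - z_o)^{2} \right]

% The four macroscopics are ensemble expectations of quantities of this kind
% (the last one standing in for a heterozygosity-like measure):
\langle \bar{z} \rangle, \quad
\langle \bar{z}^{2} \rangle, \quad
\langle v_g \rangle, \quad
\Big\langle \textstyle\sum_i p_i (1 - p_i) \Big\rangle
```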

    Linking Image and Text with 2-Way Nets

    Linking two data sources is a basic building block in numerous computer vision problems. Canonical Correlation Analysis (CCA) achieves this by utilizing a linear optimizer in order to maximize the correlation between the two views. Recent work makes use of non-linear models, including deep learning techniques, that optimize the CCA loss in some feature space. In this paper, we introduce a novel, bi-directional neural network architecture for the task of matching vectors from two data sources. Our approach employs two tied neural network channels that project the two views into a common, maximally correlated space using the Euclidean loss. We show a direct link between the correlation-based loss and the Euclidean loss, enabling the use of the Euclidean loss for correlation maximization. To overcome common Euclidean regression optimization problems, we adapt well-known techniques to our problem, including batch normalization and dropout. We show state-of-the-art results on a number of computer vision matching tasks, including MNIST image matching and sentence-image matching on the Flickr8k, Flickr30k and COCO datasets. Comment: 14 pages, 2 figures, 6 tables.
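    As a rough illustration of the general idea (two projection channels trained with a Euclidean matching loss, regularized with batch normalization and dropout), here is a minimal PyTorch sketch; the layer sizes, encoder structure, and training details are assumptions and not the paper's 2-Way Net architecture.

```python
# Illustrative sketch only: a pair of projection networks trained with a Euclidean
# (MSE) matching loss between views, in the spirit of the abstract. Dimensions and
# the exact tying scheme are assumptions, not the paper's 2-Way Net.
import torch
import torch.nn as nn

class ViewEncoder(nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dim, dropout=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),   # batch normalization, as mentioned in the abstract
            nn.ReLU(),
            nn.Dropout(dropout),          # dropout, as mentioned in the abstract
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

image_enc = ViewEncoder(in_dim=4096, hidden_dim=1024, out_dim=256)
text_enc = ViewEncoder(in_dim=300, hidden_dim=1024, out_dim=256)
opt = torch.optim.Adam(list(image_enc.parameters()) + list(text_enc.parameters()), lr=1e-4)

def matching_loss(img_batch, txt_batch):
    """Euclidean loss between the two projected views of matching pairs."""
    return nn.functional.mse_loss(image_enc(img_batch), text_enc(txt_batch))

# Toy training step on random data standing in for image features and text embeddings.
img = torch.randn(32, 4096)
txt = torch.randn(32, 300)
loss = matching_loss(img, txt)
opt.zero_grad(); loss.backward(); opt.step()
```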

    Reference service effectiveness


    Cross Hedging with Single Stock Futures

    This study evaluates the efficiency of cross hedging with the new single stock futures (SSF) contracts recently introduced in the United States. We use matched sample estimation techniques to select SSF contracts that will reduce the basis risk of cross hedging and will yield the most efficient hedging portfolio. Employing multivariate matching techniques with cross-sectional matching characteristics, we can improve hedging efficiency while overcoming the dependence of the spot-futures correlation on the sample period and its length. Overall, we find that the best hedging performance is achieved through a portfolio that is hedged with market index futures and an SSF matched by both historical return correlation and cross-sectional matching characteristics. We also find it preferable to retain the chosen SSF contracts for the whole out-of-sample period but to re-estimate the optimal hedge ratio for each rolling window.
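    A minimal sketch of the mechanics behind rolling hedge-ratio re-estimation, assuming the classical minimum-variance hedge ratio h* = Cov(spot returns, futures returns) / Var(futures returns); the window length, variable names, and synthetic data are illustrative and do not reproduce the study's matched-sample procedure.

```python
# Rolling minimum-variance hedge ratio, re-estimated on each window, applied
# out of sample on the next observation. Synthetic data for illustration only.
import numpy as np
import pandas as pd

def rolling_hedge_ratio(spot_ret: pd.Series, fut_ret: pd.Series, window: int = 60) -> pd.Series:
    """Rolling minimum-variance hedge ratio of spot returns on futures returns."""
    cov = spot_ret.rolling(window).cov(fut_ret)
    var = fut_ret.rolling(window).var()
    return cov / var

rng = np.random.default_rng(0)
fut = pd.Series(rng.normal(0, 0.01, 500))                # futures returns
spot = 0.8 * fut + pd.Series(rng.normal(0, 0.005, 500))  # correlated spot returns
h = rolling_hedge_ratio(spot, fut, window=60)
hedged = spot - h.shift(1) * fut          # hedge applied with yesterday's ratio
print(hedged.var(), spot.var())           # hedged variance should be lower
```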

    Entry-exit, learning, and productivity change : evidence from Chile

    This paper applies econometric techniques from the efficiency frontiers literature and the panel data literature to construct plant-specific, time-variant technical efficiency indices for surviving, exiting, and entering cohorts. These are then used to compare productivity growth rates across plant cohorts and to examine the net effect of plant turnover and learning patterns on manufacturing-wide productivity growth. The analysis is based on plant-level panel data from Chile covering the period 1979-86. For several reasons, these data provide an excellent basis for inference. First, they include all Chilean manufacturing plants with at least 10 workers. Second, from 1974 to 1979 Chile underwent sweeping reform programs to liberalize its trade regime, privatize state firms, and deregulate markets. The author finds that plant turnover and different learning patterns across cohorts are important drivers of Chilean manufacturing-wide productivity change. In particular: the evidence supports the hypothesis that competitive pressures force less efficient producers to fail more often than others; the ratio of skilled to unskilled labor is higher, and rising more rapidly, among incumbents and entrants than among exiting plants; and although the economy-wide recession affected the productivity of each cohort to different degrees, productivity increases steadily over the sample period. Topics: Environmental Economics & Policies; Economic Theory & Research; Health Monitoring & Evaluation; Banks & Banking Reform; Industrial Management.
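    As a loose illustration of how plant-level technical efficiency indices can be read off a production frontier, here is a corrected-OLS (COLS) sketch on synthetic data; it stands in for, and is much simpler than, the panel-data frontier estimator with time-varying efficiency used in the paper.

```python
# Hedged sketch: corrected OLS on a Cobb-Douglas form, giving each plant a
# technical efficiency index in (0, 1]. Synthetic cross-section for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 200
log_labor = rng.normal(3.0, 0.5, n)
log_capital = rng.normal(5.0, 0.8, n)
inefficiency = np.abs(rng.normal(0, 0.3, n))             # one-sided inefficiency term
log_output = 1.0 + 0.6 * log_labor + 0.3 * log_capital - inefficiency + rng.normal(0, 0.05, n)

# OLS of log output on log inputs
X = np.column_stack([np.ones(n), log_labor, log_capital])
beta, *_ = np.linalg.lstsq(X, log_output, rcond=None)
residuals = log_output - X @ beta

# Shift the fitted function up so it envelopes the data (the "corrected" step),
# then read technical efficiency off as the distance below the frontier.
technical_efficiency = np.exp(residuals - residuals.max())   # 1 = on the frontier
print(technical_efficiency.mean())
```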

    Uncertainty estimation of wind power forecasts: Comparison of Probabilistic Modelling Approaches

    Short-term wind power forecasting tools providing “single-valued” (spot) predictions are nowadays widely used. However, end-users may require additional information on the uncertainty associated with future wind power production in order to perform functions such as reserve estimation, unit commitment, or trading in electricity markets more efficiently. Several models for on-line uncertainty estimation have been proposed in the literature, and new products from numerical weather prediction systems (ensemble predictions) have recently become available, which has increased the modelling possibilities. In order to provide efficient on-line uncertainty estimation, choices have to be made on which model and modelling architecture should be preferred. Towards this goal, we propose a classification of the different approaches and modelling architectures for probabilistic wind power forecasting. A comparison is then carried out on representative models using real data from several wind farms.
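    One common route from spot forecasts to uncertainty information is quantile regression on forecast features. The sketch below uses scikit-learn's gradient boosting with a quantile loss on synthetic data, purely as an example of the class of probabilistic models under discussion, not as any specific model compared in the paper.

```python
# Hedged sketch: predictive quantiles of wind power production given a forecast
# feature (here, a synthetic forecast wind speed). Model and features are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
X = rng.uniform(0, 15, size=(2000, 1))                         # forecast wind speed [m/s]
power = np.clip((X[:, 0] / 12) ** 3, 0, 1)                     # idealized power curve
y = np.clip(power + rng.normal(0, 0.1 * (1 + power), 2000), 0, 1)  # noisy production [p.u.]

quantile_models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
    for q in (0.1, 0.5, 0.9)
}

x_new = np.array([[8.0]])
interval = {q: float(m.predict(x_new)[0]) for q, m in quantile_models.items()}
print(interval)   # lower bound, median, upper bound of the predictive distribution
```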

    Evaluation of an uncertainty reduction methodology based on Iterative Sensitivity Analysis (ISA) applied to naturally fractured reservoirs

    History matching for naturally fractured reservoirs is challenging because of the complexity of flow behavior in the fracture-matrix combination. Calibrating these models in a history-matching procedure normally requires integration with geostatistical techniques (Big Loop, where the history matching is integrated with reservoir modeling) for proper model characterization. In problems involving complex reservoir models, it is common to apply techniques such as sensitivity analysis to evaluate and identify the most influential attributes, so that efforts can focus on what most impacts the response. Conventional Sensitivity Analysis (CSA), in which a subset of attributes is fixed at a unique value, may over-reduce the search space so that it might not be properly explored. An alternative is Iterative Sensitivity Analysis (ISA), in which CSA is applied multiple times throughout the iterations. ISA follows three main steps: (a) CSA identifies Group i of influential attributes (i = 1, 2, 3, …, n); (b) the uncertainty of Group i is reduced, with the other attributes held at fixed values; and (c) the process returns to step (a) and repeats. Conducting CSA multiple times allows the identification of influential attributes hidden by the high uncertainty of the most influential attributes. In this work, we assess three methods: Method 1 (ISA), Method 2 (CSA), and Method 3 (no sensitivity analysis, i.e., varying all uncertain attributes over a larger search space). Results showed that the number of simulation runs for Method 1 dropped by 24% compared to Method 3 and by 12% compared to Method 2 to reach a similar matching quality of acceptable models. In other words, Method 1 reached a similar quality of results with fewer simulations. Therefore, ISA can perform as well as CSA while demanding fewer simulations. All three methods identified the same five most influential attributes out of the initial 18. Even with many uncertain attributes, only a small percentage is responsible for most of the variability of the responses, and their identification is essential for efficient history matching. For the case presented in this work, few fracture attributes were responsible for most of the variability of the responses.
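    A toy sketch of the ISA loop described above, using a synthetic misfit function; the attribute names, the one-at-a-time sensitivity measure, the relative cutoff, and the grid-search calibration step are all illustrative assumptions, not the reservoir workflow used in the study.

```python
# Iterative sensitivity analysis on a toy history-matching misfit: step (a) finds the
# currently influential attributes, step (b) calibrates them with the rest fixed,
# step (c) repeats, letting weaker attributes surface in later iterations.
import numpy as np

TRUE = {"frac_perm": 0.7, "frac_spacing": 0.3, "matrix_perm": 0.5,
        "porosity": 0.4, "aquifer": 0.6, "skin": 0.2}
WEIGHTS = {"frac_perm": 10.0, "frac_spacing": 5.0, "matrix_perm": 2.0,
           "porosity": 0.5, "aquifer": 0.2, "skin": 0.1}

def misfit(values):
    """Toy objective: weighted squared error to the 'true' model."""
    return sum(WEIGHTS[k] * (values[k] - TRUE[k]) ** 2 for k in values)

def run_csa(free, fixed, base=0.5, rel_threshold=0.2):
    """One-at-a-time scan of each free attribute over [0, 1], others held constant.
    'Influential' = response range above a fraction of the largest range seen."""
    ranges = {}
    for name in free:
        responses = [misfit({**{k: base for k in free}, **fixed, name: v})
                     for v in np.linspace(0.0, 1.0, 11)]
        ranges[name] = max(responses) - min(responses)
    cutoff = rel_threshold * max(ranges.values())
    return [n for n, r in ranges.items() if r >= cutoff]

def reduce_uncertainty(name, free, fixed, base=0.5):
    """Calibrate a single attribute by grid search, the others frozen."""
    grid = np.linspace(0.0, 1.0, 101)
    vals = [misfit({**{k: base for k in free}, **fixed, name: v}) for v in grid]
    return float(grid[int(np.argmin(vals))])

attributes = list(TRUE)        # all six start uncertain
fixed = {}
while True:
    free = [a for a in attributes if a not in fixed]
    if not free:
        break
    influential = run_csa(free, fixed)            # step (a)
    for name in influential:                      # step (b)
        fixed[name] = reduce_uncertainty(name, free, fixed)
    # step (c): loop again over the remaining uncertain attributes
print(fixed)
```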

    Energy storage sizing for wind power: impact of the autocorrelation of day-ahead forecast errors

    Availability of day-ahead production forecasts is an important step towards better dispatchability of wind power production. However, the stochastic nature of forecast errors prevents a wind farm operator from holding a firm production commitment. In order to mitigate the deviation from the commitment, an energy storage system connected to the wind farm is considered. One statistical characteristic of day-ahead forecast errors has a major impact on storage performance: errors are significantly correlated over several hours. We thus use a data-fitted autoregressive model that captures this correlation to quantify its impact on storage sizing. With a Monte Carlo approach, we study the behavior and the performance of an energy storage system (ESS) using the autoregressive model as an input. The ability of the storage system to meet a production commitment is statistically assessed for a range of capacities, using a mean absolute deviation criterion. By parametrically varying the correlation level, we show that disregarding correlation can lead to an underestimation of the required storage capacity by an order of magnitude. Finally, we compare the results obtained from the model and from field data to validate the model.
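    A minimal sketch of the kind of experiment described, assuming an AR(1) model for the per-unit forecast error and a simple energy balance for the storage; the coefficient values, storage rules, and deviation criterion are illustrative, not the paper's fitted model.

```python
# Monte Carlo evaluation of a storage system against a production commitment,
# with autocorrelated forecast errors. The AR(1) innovation is scaled so that the
# marginal error variance stays fixed while the correlation level phi varies.
import numpy as np

def mean_abs_deviation(capacity, phi=0.8, sigma=0.1, hours=24 * 365, seed=0):
    """Mean absolute deviation from commitment for a given storage capacity (per unit)."""
    rng = np.random.default_rng(seed)
    err = np.zeros(hours)
    for t in range(1, hours):
        err[t] = phi * err[t - 1] + rng.normal(0.0, sigma * np.sqrt(1 - phi ** 2))
    soc = capacity / 2.0                       # start half full
    deviation = np.zeros(hours)
    for t in range(hours):
        # storage absorbs surpluses (err > 0) and covers deficits (err < 0) when it can
        soc_new = np.clip(soc + err[t], 0.0, capacity)
        deviation[t] = err[t] - (soc_new - soc)   # what the storage could not absorb
        soc = soc_new
    return np.abs(deviation).mean()

for phi in (0.0, 0.8):                          # uncorrelated vs strongly correlated errors
    for cap in (1, 5, 20):
        print(phi, cap, round(mean_abs_deviation(cap, phi=phi), 4))
```

    Holding the marginal error variance fixed while varying phi isolates the effect of correlation alone, which is the comparison that drives the sizing conclusion in the abstract.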

    Modeling of pulsed laser guide stars for the Thirty Meter Telescope project

    The Thirty Meter Telescope (TMT) has been designed to include an adaptive optics system and an associated laser guide star (LGS) facility to correct for the image distortion due to Earth's atmospheric turbulence and achieve diffraction-limited imaging. We have calculated the response of mesospheric sodium atoms to a pulsed laser that has been proposed for use in the LGS facility, including modeling of the atomic physics, the light-atom interactions, and the effects of the geomagnetic field and atomic collisions. This particular pulsed laser format is shown to provide a photon return comparable to that of a continuous-wave (cw) laser of the same average power; both the cw and pulsed lasers have the potential to satisfy the TMT design requirements for photon return flux. Comment: 16 pages, 20 figures.