
    Optical Properties of MgF2 / MgF2 / Glass and MgF2 / TiO2 / Glass

    MgF2 thin films with a thickness of 93 nm were deposited on MgF2/glass and TiO2/glass thin layers by the resistive evaporation method under ultra-high vacuum (UHV) conditions, with a rotating pre-layer for the first sample and normal deposition for the second. Optical properties were measured with a spectrophotometer over the spectral range 300-1100 nm. The optical constants, namely the real part of the refractive index (n), the imaginary part of the refractive index (k), the real and imaginary parts of the dielectric function (ε1 and ε2, respectively), and the absorption coefficient (α), were obtained from Kramers-Kronig analysis of the reflectivity curves. The band-gap energy was also estimated for these films. When citing this document, use the following link: http://essuir.sumdu.edu.ua/handle/123456789/3554
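
    For reference, Kramers-Kronig analysis of normal-incidence reflectivity is conventionally carried out as follows (a standard formulation given for context; the paper's exact numerical procedure and sign conventions may differ). The phase θ(ω) of the reflected wave is recovered from the measured reflectance R(ω), and the optical constants then follow from the Fresnel relation:

```latex
\theta(\omega) = -\frac{\omega}{\pi}\,\mathrm{P}\!\int_{0}^{\infty}
    \frac{\ln R(\omega')}{\omega'^{2}-\omega^{2}}\,d\omega' , \qquad
n = \frac{1-R}{1+R-2\sqrt{R}\cos\theta} , \qquad
k = \frac{2\sqrt{R}\sin\theta}{1+R-2\sqrt{R}\cos\theta}
```

    from which ε1 = n^2 - k^2, ε2 = 2nk, and α = 4πk/λ, the quantities reported in the abstract.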

    Dynamic Active Earth Pressure Against Retaining Walls

    The equations of equilibrium expressed along the stress characteristics are transformed onto the Zero Extension Line (ZEL) directions. The new dynamic equilibrium equations are then applied to a simple ZEL field (composed of Rankine, Goursat, and Coulomb zones) behind retaining walls. Integration of the differential equilibrium equations along the assumed field boundary then provides the final equations for the active static (Kast) and dynamic (Kady) earth pressure coefficients, which are functions of the friction and dilation angles of the soil and the friction angle of the wall surface. Numerical evaluation of Kast and Kady indicates that these coefficients are not sensitive to wall roughness for practical values of the backfill friction angle between 35° and 45°. In this range, the coefficients can be approximated by Kast = tan²(π/4 - φ/2) and Kady = tan(π/4 - ν/2).
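
    As a quick numerical check of the closed-form approximations quoted above (a minimal sketch; the formulas are exactly those stated in the abstract, with φ the soil friction angle and ν the dilation angle, both taken in degrees here):

```python
import math

def k_ast(phi_deg: float) -> float:
    """Approximate static active earth pressure coefficient, Kast = tan^2(pi/4 - phi/2)."""
    phi = math.radians(phi_deg)
    return math.tan(math.pi / 4 - phi / 2) ** 2

def k_ady(nu_deg: float) -> float:
    """Approximate dynamic active earth pressure coefficient, Kady = tan(pi/4 - nu/2)."""
    nu = math.radians(nu_deg)
    return math.tan(math.pi / 4 - nu / 2)

# Practical range quoted in the abstract: backfill friction angles of 35-45 degrees.
for phi in (35.0, 40.0, 45.0):
    print(f"phi = {phi:.0f} deg  ->  Kast ~ {k_ast(phi):.3f}")
print(f"nu  = 10 deg  ->  Kady ~ {k_ady(10.0):.3f}")   # example dilation angle (assumed value)
```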

    SiGMa: Simple Greedy Matching for Aligning Large Knowledge Bases

    The Internet has enabled the creation of a growing number of large-scale knowledge bases in a variety of domains containing complementary information. Tools for automatically aligning these knowledge bases would make it possible to unify many sources of structured knowledge and answer complex queries. However, the efficient alignment of large-scale knowledge bases still poses a considerable challenge. Here, we present Simple Greedy Matching (SiGMa), a simple algorithm for aligning knowledge bases with millions of entities and facts. SiGMa is an iterative propagation algorithm which leverages both the structural information from the relationship graph and flexible similarity measures between entity properties in a greedy local search, thus making it scalable. Despite its greedy nature, our experiments indicate that SiGMa can efficiently match some of the world's largest knowledge bases with high precision. We provide additional experiments on benchmark datasets which demonstrate that SiGMa can outperform state-of-the-art approaches both in accuracy and efficiency. Comment: 10 pages + 2 pages appendix; 5 figures; initial preprint
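
    The abstract gives no pseudocode, but the core idea of a greedy, score-driven matcher that combines property similarity with structural evidence from already-aligned neighbours can be sketched roughly as follows (a hypothetical illustration, not the authors' implementation; the scoring function, data structures, and parameter names are assumptions):

```python
import heapq

def greedy_align(neighbours1, neighbours2, prop_sim, alpha=0.5, seed_pairs=()):
    """Greedy alignment sketch: repeatedly commit the highest-scoring candidate pair,
    then push new candidates that are neighbours of the freshly matched entities.

    neighbours1/neighbours2 : dict entity -> set of neighbouring entities (ids must be orderable)
    prop_sim : function (e1, e2) -> property similarity in [0, 1]
    alpha : weight mixing property similarity and structural overlap
    seed_pairs : initial high-confidence matches used to bootstrap the search
    """
    matched = dict(seed_pairs)                 # entity in KB1 -> entity in KB2
    matched_rev = {v: k for k, v in matched.items()}

    def score(e1, e2):
        # Structural part: fraction of e1's matched neighbours whose match is a neighbour of e2.
        nbrs = neighbours1.get(e1, set())
        hits = sum(1 for n in nbrs if matched.get(n) in neighbours2.get(e2, set()))
        struct = hits / max(len(nbrs), 1)
        return alpha * prop_sim(e1, e2) + (1 - alpha) * struct

    heap = []
    def push_neighbours(e1, e2):
        # Candidates are generated only around confirmed matches, keeping the search local.
        for n1 in neighbours1.get(e1, set()):
            for n2 in neighbours2.get(e2, set()):
                if n1 not in matched and n2 not in matched_rev:
                    heapq.heappush(heap, (-score(n1, n2), n1, n2))

    for e1, e2 in list(matched.items()):
        push_neighbours(e1, e2)

    while heap:
        neg_s, e1, e2 = heapq.heappop(heap)
        if e1 in matched or e2 in matched_rev or -neg_s <= 0:
            continue
        matched[e1] = e2
        matched_rev[e2] = e1
        push_neighbours(e1, e2)                # propagate: new matches create new candidates
    return matched
```

    The actual SiGMa algorithm uses richer, carefully engineered similarity measures over the relationship and property graphs; the sketch only conveys the propagate-from-confirmed-matches structure that keeps the greedy search local and scalable.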

    Neural adaptive sequential Monte Carlo

    Sequential Monte Carlo (SMC), or particle filtering, is a popular class of methods for sampling from an intractable target distribution using a sequence of simpler intermediate distributions. Like other importance sampling-based methods, performance is critically dependent on the proposal distribution: a bad proposal can lead to arbitrarily inaccurate estimates of the target distribution. This paper presents a new method for automatically adapting the proposal using an approximation of the Kullback-Leibler divergence between the true posterior and the proposal distribution. The method is very flexible: it is applicable to any parameterized proposal distribution and supports both online and batch variants. We use the new framework to adapt powerful proposal distributions with rich parameterizations based upon neural networks, leading to Neural Adaptive Sequential Monte Carlo (NASMC). Experiments indicate that NASMC significantly improves inference in a non-linear state-space model, outperforming adaptive proposal methods including the Extended Kalman and Unscented Particle Filters. Experiments also indicate that improved inference translates into improved parameter learning when NASMC is used as a subroutine of Particle Marginal Metropolis-Hastings. Finally, we show that NASMC is able to train a latent variable recurrent neural network (LV-RNN), achieving results that compete with the state-of-the-art for polyphonic music modelling. NASMC can be seen as bridging the gap between adaptive SMC methods and recent work in scalable, black-box variational inference.
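
    A minimal sketch of the adaptation idea (not the authors' code; the toy model, the Gaussian proposal with a linear mean, and all hyperparameters below are assumptions for illustration): because the inclusive KL divergence KL(p || q_phi) has gradient -E_p[grad_phi log q_phi], the proposal parameters can be updated using the self-normalised importance weights that the filter already produces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear state-space model (assumed for illustration):
#   x_t = 0.5 x_{t-1} + 25 x_{t-1} / (1 + x_{t-1}^2) + v_t,  v_t ~ N(0, 1)
#   y_t = 0.05 x_t^2 + e_t,                                   e_t ~ N(0, 1)
def transition(x):
    return 0.5 * x + 25 * x / (1 + x ** 2)

def simulate(T=100):
    x, y = np.zeros(T), np.zeros(T)
    for t in range(1, T):
        x[t] = transition(x[t - 1]) + rng.normal()
        y[t] = 0.05 * x[t] ** 2 + rng.normal()
    return x, y

# Adaptive Gaussian proposal q_phi(x_t | x_{t-1}, y_t) = N(a*f(x_{t-1}) + b*y_t + c, sigma^2).
phi, sigma, lr = np.array([1.0, 0.0, 0.0]), 1.0, 1e-3   # phi = [a, b, c]

def smc_step(particles, logw, y, phi):
    """One SMC step with the adaptive proposal; also returns the NASMC-style gradient
    estimate sum_i w_i * d/dphi log q_phi(x_i), i.e. the weighted proposal score."""
    N = len(particles)
    feats = np.stack([transition(particles), np.full(N, y), np.ones(N)], axis=1)
    mean = feats @ phi
    new = mean + sigma * rng.normal(size=N)

    log_prior = -0.5 * (new - transition(particles)) ** 2     # transition density (up to const)
    log_lik = -0.5 * (y - 0.05 * new ** 2) ** 2               # observation density (up to const)
    log_q = -0.5 * ((new - mean) / sigma) ** 2
    logw = logw + log_prior + log_lik - log_q
    w = np.exp(logw - logw.max()); w /= w.sum()

    grad = (w[:, None] * ((new - mean) / sigma ** 2)[:, None] * feats).sum(axis=0)

    idx = rng.choice(N, size=N, p=w)                           # multinomial resampling
    return new[idx], np.zeros(N), grad

x_true, y_obs = simulate()
particles, logw = np.zeros(500), np.zeros(500)
for t in range(1, len(y_obs)):
    particles, logw, grad = smc_step(particles, logw, y_obs[t], phi)
    phi += lr * grad                                           # ascend E_p[log q_phi]
print("adapted proposal parameters [a, b, c]:", phi)
```

    In the paper the proposal mean and variance come from a neural network rather than a hand-specified linear form, but the weighted-score gradient update is the same mechanism.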

    Climate change impact, adaptation, and mitigation in temperate grazing systems: a review

    Managed temperate grasslands occupy about 25% of the world's land area, corresponding to 70% of global agricultural land, and are an important source of food for the global population. This review examines the impacts of climate change on managed temperate grasslands and grassland-based livestock, and the effectiveness of adaptation and mitigation options and their interactions. The paper clarifies that moderately elevated atmospheric CO2 (eCO2) enhances photosynthesis, although this effect may be restricted by variations in rainfall and temperature, shifts in plant growing seasons, and nutrient availability. Differing responses of plant functional types and their photosynthetic pathways to the combined effects of climate change may result in compositional changes in plant communities, although more research is required to clarify the specific responses. We also consider how other interacting factors, such as progressive nitrogen limitation (PNL) of soils under eCO2, may affect animal-environment interactions and the associated production. In addition to observed and modelled declines in grassland productivity, changes in forage quality are expected. The health and productivity of grassland-based livestock are expected to decline through direct and indirect effects of climate change. Livestock enterprises are themselves a significant source of global greenhouse gas (GHG) emissions (about 14.5%), so climate risk management partly consists of developing and applying effective mitigation measures. Overall, our findings indicate complex impacts that will vary by region, with more negative than positive effects. Both gains and losses can therefore be expected for grassland managers in different circumstances, so climate change impacts need to be analysed, and potential adaptation and mitigation strategies developed, at local and regional levels.

    Q-Prop: Sample-efficient policy gradient with an off-policy critic

    Model-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is its high sample complexity. Batch policy gradient methods offer stable learning, but at the cost of high variance, which often requires large batches. TD-style methods, such as off-policy actor-critic and Q-learning, are more sample-efficient but biased, and often require costly hyperparameter sweeps to stabilize. In this work, we aim to develop methods that combine the stability of policy gradients with the efficiency of off-policy RL. We present Q-Prop, a policy gradient method that uses a Taylor expansion of the off-policy critic as a control variate. Q-Prop is both sample-efficient and stable, and effectively combines the benefits of on-policy and off-policy methods. We analyze the connection between Q-Prop and existing model-free algorithms, and use control variate theory to derive two variants of Q-Prop with conservative and aggressive adaptation. We show that, on OpenAI Gym's MuJoCo continuous control environments, conservative Q-Prop provides substantial gains in sample efficiency over trust region policy optimization (TRPO) with generalized advantage estimation (GAE) and improved stability over deep deterministic policy gradient (DDPG), the respective state-of-the-art on-policy and off-policy methods.
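
    For orientation, the control-variate construction described above takes roughly the following form (notation as I recall it from the paper; treat this as a sketch rather than the exact estimator): the critic Q_w is expanded to first order around the mean action μ_θ(s), the expansion is subtracted inside the likelihood-ratio term, and its analytic expectation is added back.

```latex
\bar{A}_w(s,a) \;=\; \nabla_a Q_w(s,a)\big|_{a=\mu_\theta(s)}^{\top}\bigl(a-\mu_\theta(s)\bigr),
\qquad
\nabla_\theta J(\theta) \;\approx\;
\mathbb{E}_{\pi}\!\Bigl[\nabla_\theta \log \pi_\theta(a\mid s)\,
    \bigl(\hat{A}(s,a)-\eta\,\bar{A}_w(s,a)\bigr)\Bigr]
\;+\; \eta\,\mathbb{E}_{s}\!\Bigl[\nabla_a Q_w(s,a)\big|_{a=\mu_\theta(s)}\,\nabla_\theta \mu_\theta(s)\Bigr]
```

    Here η controls how strongly the control variate is used; the conservative and aggressive variants mentioned in the abstract differ in how η is chosen from the estimated agreement between the Monte Carlo advantage and its Taylor-expansion surrogate.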

    On the 'independence of trials-assumption' in geometric distribution

    In this note, it is shown through an example that the assumption of independent Bernoulli trials in the geometric experiment may, unexpectedly, fail to be satisfied. The example can serve as a suitable and useful classroom activity for students in an introductory probability course.
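
    The note's specific counterexample is not reproduced in the abstract, but the general point is easy to demonstrate: if the "trials" are dependent (for instance, draws without replacement from a finite population), the waiting time until the first success no longer follows a geometric distribution even when the marginal success probability of each draw is unchanged. A hypothetical illustration (not the paper's example):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims = 100_000

# Dependent trials: draw without replacement from an urn with 2 'success' and 8 'failure' balls.
# Independent trials: each draw succeeds with probability 0.2 (the same marginal probability).
def waiting_time_without_replacement():
    urn = np.array([1] * 2 + [0] * 8)
    rng.shuffle(urn)
    return int(np.argmax(urn)) + 1          # position of the first success

dep = np.array([waiting_time_without_replacement() for _ in range(n_sims)])
ind = rng.geometric(0.2, size=n_sims)

print("P(X = 1):", (dep == 1).mean(), "vs geometric", (ind == 1).mean())   # both ~0.2
print("P(X > 5):", (dep > 5).mean(), "vs geometric", (ind > 5).mean())     # tails differ markedly
print("max waiting time:", dep.max(), "vs geometric", ind.max())           # capped at 9 vs unbounded
```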

    Bayesian inference on random simple graphs with power law degree distributions

    We present a model for random simple graphs with power law (i.e., heavy-tailed) degree distributions. To attain this behavior, the edge probabilities in the graph are constructed from Bertoin–Fujita–Roynette–Yor (BFRY) random variables, which have been recently utilized in Bayesian statistics for the construction of power law models in several applications. Our construction readily extends to capture the structure of latent factors, similarly to stochastic block models, while maintaining its power law degree distribution. The BFRY random variables are well approximated by gamma random variables in a variational Bayesian inference routine, which we apply to several network datasets for which power law degree distributions are a natural assumption. By learning the parameters of the BFRY distribution via probabilistic inference, we are able to automatically select the appropriate power law behavior from the data. In order to further scale our inference procedure, we adopt stochastic gradient ascent routines where the gradients are computed on minibatches (i.e., subsets) of the edges in the graph. J. Lee and S. Choi were partly supported by an Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korean government (MSIP) (No. 2014-0-00147, Basic Software Research in Human-level Lifelong Machine Learning (Machine Learning Center)) and by Naver, Inc. C. Heaukulani undertook this work in part while a visiting researcher at the Hong Kong University of Science and Technology, who along with L. F. James was funded by grant rgc-hkust 601712 of the Hong Kong Special Administrative Region. EPSRC Grant EP/N014162/1; ATI Grant EP/N510129/
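
    The abstract does not spell out the generative construction, but a common way to build a simple graph from per-node positive weights, and the one the description suggests, is to give node i a sociability weight w_i and connect i and j independently with probability p_ij = 1 - exp(-w_i * w_j), so that heavy-tailed weights yield heavy-tailed degrees. A rough sketch along those lines, using the gamma approximation to the BFRY weights mentioned above (the edge-probability form and all parameter values are assumptions, not the paper's specification):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Node sociability weights. The paper uses BFRY random variables; as a stand-in we use the
# gamma approximation mentioned in the abstract (true BFRY weights have a power law tail).
w = rng.gamma(shape=0.3, scale=1.0, size=n)

# Assumed link function: connect i and j with probability 1 - exp(-w_i * w_j), i < j.
p = 1.0 - np.exp(-np.outer(w, w))
upper = np.triu(rng.random((n, n)) < p, k=1)        # sample each undirected edge once
adj = upper | upper.T

degrees = adj.sum(axis=1)
for d in (1, 5, 20, 50):
    print(f"fraction of nodes with degree >= {d}: {(degrees >= d).mean():.3f}")
```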

    Interpolated policy gradient: Merging on-policy and off-policy gradient estimation for deep reinforcement learning

    Off-policy model-free deep reinforcement learning methods that use previously collected data can improve sample efficiency over on-policy policy gradient techniques. On the other hand, on-policy algorithms are often more stable and easier to use. This paper examines, both theoretically and empirically, approaches to merging on- and off-policy updates for deep reinforcement learning. Theoretical results show that off-policy updates with a value function estimator can be interpolated with on-policy policy gradient updates whilst still satisfying performance bounds. Our analysis uses control variate methods to produce a family of policy gradient algorithms, with several recently proposed algorithms being special cases of this family. We then provide an empirical comparison of these techniques with the remaining algorithmic details fixed, and show how different mixings of off-policy gradient estimates with on-policy samples contribute to improvements in empirical performance. The final algorithm provides a generalization and unification of existing deep policy gradient techniques, has theoretical guarantees on the bias introduced by off-policy updates, and improves on state-of-the-art model-free deep RL methods on a number of OpenAI Gym continuous control benchmarks.
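
    Schematically (my paraphrase of the interpolation idea, not the paper's exact estimator), the family trades off a likelihood-ratio gradient computed from fresh on-policy samples against a deterministic critic-based gradient computed from replayed off-policy data, with a mixing coefficient ν in [0, 1]:

```latex
\nabla_\theta J(\theta) \;\approx\;
(1-\nu)\,\mathbb{E}_{s,a\sim\pi_\theta}\!\Bigl[\nabla_\theta \log \pi_\theta(a\mid s)\,\hat{A}(s,a)\Bigr]
\;+\;
\nu\,\mathbb{E}_{s\sim\beta}\!\Bigl[\nabla_\theta \hat{Q}_w\bigl(s,\mu_\theta(s)\bigr)\Bigr]
```

    where β is the off-policy state distribution from the replay buffer and Q̂_w is the learned critic; the bias introduced by the second term is what the performance bounds control, with ν = 0 recovering a purely on-policy policy gradient and ν = 1 a purely off-policy, critic-driven update.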