
    Pathways to Coastal Resiliency: the Adaptive Gradients Framework

Current and future climate-related coastal impacts such as catastrophic and repetitive flooding, increasing hurricane intensity, and sea level rise necessitate a new approach to developing and managing coastal infrastructure. Traditional “hard” or “grey” engineering solutions are proving both expensive and inflexible in the face of a rapidly changing coastal environment. Hybrid solutions that incorporate natural, nature-based, structural, and non-structural features may better achieve a broad set of goals such as ecological enhancement, long-term adaptation, and social benefits, but broad consideration and uptake of these approaches have been slow. One barrier to the widespread implementation of hybrid solutions is the lack of a relatively quick but holistic evaluation framework that places these broader environmental and societal goals on equal footing with the more traditional goal of exposure reduction. To respond to this need, the Adaptive Gradients Framework was developed and pilot-tested as a qualitative, flexible, and collaborative process guide for organizations to understand, evaluate, and potentially select more diverse kinds of infrastructural responses. These responses would ideally include natural, nature-based, and regulatory/cultural approaches, as well as hybrid designs combining multiple approaches. It enables rapid expert review of project designs based on eight metrics called “gradients”: exposure reduction, cost efficiency, institutional capacity, ecological enhancement, adaptation over time, greenhouse gas reduction, participatory process, and social benefits. The framework was conceptualized and developed in three phases: relevant factors and barriers were collected from practitioners and experts by survey; these factors were ranked by importance and used to develop the initial framework; and several case studies were iteratively evaluated before the framework was finalized for implementation.
The article presents the framework and a pilot test of its application, along with resources that would enable wider application of the framework by practitioners and theorists.
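A rapid expert review across the eight gradients could be tallied as in the following sketch. The 0–5 rating scale, the reviewer scores, and the simple-mean aggregation are illustrative assumptions, not part of the published framework; only the eight gradient names come from the abstract.

```python
# Hypothetical tally of expert ratings across the framework's eight
# "gradients". Scale (0-5) and mean aggregation are illustrative choices.

GRADIENTS = [
    "exposure reduction", "cost efficiency", "institutional capacity",
    "ecological enhancement", "adaptation over time",
    "greenhouse gas reduction", "participatory process", "social benefits",
]

def summarize(ratings):
    """Average each gradient's expert ratings (dict: gradient -> list of scores)."""
    missing = set(GRADIENTS) - set(ratings)
    if missing:
        raise ValueError(f"unrated gradients: {sorted(missing)}")
    return {g: sum(r) / len(r) for g, r in ratings.items()}

# Example: three hypothetical reviewers score a hybrid living-shoreline design.
scores = summarize({g: [3, 4, 4] for g in GRADIENTS})
```

Keeping each gradient's score separate, rather than collapsing to one number, mirrors the framework's intent of weighing ecological and social goals alongside exposure reduction.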

    A Neural Network model with Bidirectional Whitening

    We present a new model and algorithm that performs efficient natural gradient descent for multilayer perceptrons. Natural gradient descent was originally proposed from the viewpoint of information geometry, and it performs steepest-descent updates on manifolds in a Riemannian space. In particular, we extend the approach taken by the "Whitened Neural Networks" model: we apply the whitening process not only in the feed-forward direction, as in the original model, but also in the back-propagation phase. The efficacy of this "Bidirectional Whitened Neural Networks" model is demonstrated on a handwritten digit recognition dataset (MNIST).
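As a rough illustration of the whitening idea the model builds on, the sketch below whitens a layer's inputs so their covariance becomes approximately the identity (ZCA whitening via eigendecomposition). This shows only the feed-forward direction; the paper's contribution of whitening the back-propagated signals as well is not reproduced here.

```python
import numpy as np

def whitening_matrix(X, eps=1e-5):
    """Return W such that (X - mean) @ W has (approximately) identity covariance."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(X)
    vals, vecs = np.linalg.eigh(cov)          # eigendecomposition of covariance
    return vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T  # ZCA form

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4)) @ rng.normal(size=(4, 4))  # correlated "activations"
W = whitening_matrix(X)
Xw = (X - X.mean(axis=0)) @ W                 # whitened activations
```

Whitening in this sense makes the local curvature closer to isotropic, which is why plain gradient steps on whitened quantities approximate natural gradient steps.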

    Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles

    We present a canonical way to turn any smooth parametric family of probability distributions on an arbitrary search space X into a continuous-time black-box optimization method on X, the information-geometric optimization (IGO) method. Invariance as a design principle minimizes the number of arbitrary choices. The resulting IGO flow conducts the natural gradient ascent of an adaptive, time-dependent, quantile-based transformation of the objective function. It makes no assumptions on the objective function to be optimized. The IGO method produces explicit IGO algorithms through time discretization. It naturally recovers versions of known algorithms and offers a systematic way to derive new ones. The cross-entropy method is recovered in a particular case, and can be extended into a smoothed, parametrization-independent maximum likelihood update (IGO-ML). For Gaussian distributions on R^d, IGO is related to natural evolution strategies (NES) and recovers a version of the CMA-ES algorithm. For Bernoulli distributions on {0,1}^d, we recover the PBIL algorithm. From restricted Boltzmann machines, we obtain a novel algorithm for optimization on {0,1}^d. All these algorithms are unified under a single information-geometric optimization framework. Thanks to its intrinsic formulation, the IGO method achieves invariance under reparametrization of the search space X, under a change of parameters of the probability distributions, and under increasing transformations of the objective function. Theory strongly suggests that IGO algorithms have minimal loss in diversity during optimization, provided the initial diversity is high. First experiments using restricted Boltzmann machines confirm this insight. Thus IGO seems to provide, from information theory, an elegant way to spontaneously explore several valleys of a fitness landscape in a single run.
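The Bernoulli case can be sketched concretely. The step below performs a discretized IGO update for Bernoulli distributions on {0,1}^d, which the abstract says recovers PBIL; the top-quartile quantile weighting, step size, and sample count are illustrative choices, not the paper's exact scheme.

```python
import numpy as np

def igo_bernoulli_step(p, f, n_samples=100, lr=0.1, rng=None):
    """One discretized IGO (PBIL-like) step on Bernoulli parameters p, maximizing f."""
    rng = rng or np.random.default_rng()
    X = (rng.random((n_samples, len(p))) < p).astype(float)   # sample bitstrings
    fitness = np.array([f(x) for x in X])
    # Quantile-based selection weights: top quarter gets equal weight, rest zero.
    w = (fitness >= np.quantile(fitness, 0.75)).astype(float)
    w /= w.sum()
    # For Bernoulli parameters, the natural-gradient step moves p toward
    # the weighted mean of the selected samples.
    return (1 - lr) * p + lr * (w @ X)

# Maximize the OneMax objective (number of ones) on {0,1}^10.
rng = np.random.default_rng(1)
p = np.full(10, 0.5)
for _ in range(50):
    p = igo_bernoulli_step(p, f=np.sum, rng=rng)
```

Because only fitness ranks (via the quantile) enter the update, the step is invariant under increasing transformations of the objective, one of the invariances the abstract emphasizes.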

    Sequential Recommendation with Self-Attentive Multi-Adversarial Network

    Recently, deep learning has made significant progress in the task of sequential recommendation. Existing neural sequential recommenders typically adopt a generative approach trained with Maximum Likelihood Estimation (MLE). When context information (called a factor) is involved, it is difficult to analyze when and how each individual factor affects the final recommendation performance. For this purpose, we take a new perspective and introduce adversarial learning to sequential recommendation. In this paper, we present a Multi-Factor Generative Adversarial Network (MFGAN) for explicitly modeling the effect of context information on sequential recommendation. Specifically, our proposed MFGAN has two kinds of modules: a Transformer-based generator that takes user behavior sequences as input to recommend the possible next items, and multiple factor-specific discriminators that evaluate the generated sub-sequence from the perspectives of different factors. To learn the parameters, we adopt the classic policy gradient method and use the reward signal of the discriminators to guide the learning of the generator. Our framework is flexible enough to incorporate multiple kinds of factor information, and is able to trace how each factor contributes to the recommendation decision over time. Extensive experiments conducted on three real-world datasets demonstrate the superiority of our proposed model over state-of-the-art methods in terms of effectiveness and interpretability.
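The training signal described above can be illustrated with a toy sketch: a generator policy is updated by REINFORCE, with discriminator scores supplying the reward. The softmax policy over five "items", the two stand-in discriminators, and the reward averaging are assumptions for illustration, not the MFGAN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.zeros(5)          # generator's policy parameters over 5 items
target = 3                    # item the toy discriminators consider plausible

def discriminators(item):
    """Two factor-specific critics, each scoring the chosen item in [0, 1]."""
    return [1.0 if item == target else 0.2,
            0.9 if item == target else 0.1]

for _ in range(500):
    probs = np.exp(logits) / np.exp(logits).sum()
    item = rng.choice(5, p=probs)                 # generator samples next item
    reward = np.mean(discriminators(item))        # aggregate discriminator rewards
    grad = -probs
    grad[item] += 1.0                             # grad of log pi(item)
    logits += 0.1 * reward * grad                 # REINFORCE ascent step

probs = np.exp(logits) / np.exp(logits).sum()
```

Because each discriminator contributes its own reward term, one can in principle inspect the per-factor signals over training, which is the interpretability angle the abstract highlights.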

    European Mixed Forests: definition and research perspectives

    Aim of study: We aim at (i) developing a reference definition of mixed forests in order to harmonize comparative research in mixed forests and (ii) briefly reviewing the research perspectives in mixed forests. Area of study: The definition is developed in Europe but can be tested worldwide. Material and methods: Review of existing definitions of mixed forests and a literature review encompassing the dynamics, management, and economic valuation of mixed forests. Main results: A mixed forest is defined as a forest unit, excluding linear formations, where at least two tree species coexist at any developmental stage, sharing common resources (light, water, and/or soil nutrients). The presence of each of the component species is normally quantified as a proportion of the number of stems or of basal area, although volume, biomass, or canopy cover, as well as proportions by occupied stand area, may be used for specific objectives. A variety of structures and patterns of mixtures can occur, and the interactions between the component species and their relative proportions may change over time. The research perspectives identified are (i) species interactions and responses to hazards, (ii) the concept of maximum density in mixed forests, (iii) conversion of monocultures to mixed-species forests, and (iv) economic valuation of ecosystem services provided by mixed forests. Research highlights: The definition is considered a high-level one which encompasses previous attempts to define mixed forests. Current fields of research indicate that gradient studies, experimental design approaches, and model simulations are key topics providing new research opportunities. The networking in this study has been supported by COST Action FP1206 EuMIXFOR.
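Quantifying species proportions by basal area, one of the metrics the definition names, can be sketched as follows. The diameter-at-breast-height (DBH) values and species in the sample plot are made-up illustration data.

```python
import math

def basal_area(dbh_cm):
    """Cross-sectional stem area (m^2) at breast height from diameter in cm."""
    return math.pi * (dbh_cm / 200.0) ** 2   # dbh/2 in cm -> radius in m

def species_proportions(stems):
    """stems: list of (species, dbh_cm). Returns {species: share of total basal area}."""
    totals = {}
    for sp, dbh in stems:
        totals[sp] = totals.get(sp, 0.0) + basal_area(dbh)
    grand = sum(totals.values())
    return {sp: ba / grand for sp, ba in totals.items()}

# Hypothetical plot: two beech and two spruce stems.
plot = [("beech", 32), ("beech", 28), ("spruce", 40), ("spruce", 35)]
shares = species_proportions(plot)
```

Note how the basal-area metric weights large-diameter stems more heavily than a simple stem count would, which is why the definition lists both as alternative quantifications.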

    Building Program Vector Representations for Deep Learning

    Deep learning has made significant breakthroughs in various fields of artificial intelligence. Its advantages include the ability to capture highly complicated features and weak involvement of human engineering. However, it is still virtually impossible to use deep learning to analyze programs, since deep architectures cannot be trained effectively with pure back-propagation. In this pioneering paper, we propose the "coding criterion" to build program vector representations, which are the premise of deep learning for program analysis. Our representation learning approach directly makes deep learning a reality in this new field. We evaluate the learned vector representations both qualitatively and quantitatively. We conclude, based on the experiments, that the coding criterion is successful in building program representations. To evaluate whether deep learning is beneficial for program analysis, we feed the representations to deep neural networks, and achieve higher accuracy in the program classification task than "shallow" methods such as logistic regression and the support vector machine. This result confirms the feasibility of deep learning for program analysis and gives primary evidence of its success in this new field. We believe deep learning will become an outstanding technique for program analysis in the near future.