21,812 research outputs found

    Water Consumption and Long-Run Urban Development: The Case of Milan

    Analyses of long-run consumption series are rare in the literature. We study the evolution of water consumption in Milan over the twentieth century. The objective is twofold: on one side, the univariate analysis assesses the impact of relevant socio-economic and environmental changes on water consumption in Milan and verifies whether consumers have deeply rooted consumption habits; on the other side, the multivariate analysis identifies the socio-economic factors that are relevant in explaining the evolution of consumption. Results indicate that water users have well-entrenched consumption habits and that population, climate and economic structure behave more similarly, in Euclidean terms, to water consumption than do other economic and social variables.
    Keywords: Urban consumption, Long-run, Development, Environmental changes
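
    The closing claim about similarity "in Euclidean terms" can be read as comparing standardized series by Euclidean distance. The sketch below illustrates that reading on made-up data; the series, names and numbers are illustrative assumptions, not the paper's data.

        import numpy as np

        # Illustrative only: synthetic annual series standing in for the paper's data.
        rng = np.random.default_rng(0)
        n_years = 100
        water = np.cumsum(rng.normal(1.0, 0.5, n_years))            # water consumption (synthetic)
        candidates = {
            "population": water * 0.9 + rng.normal(0, 2, n_years),  # tracks consumption closely
            "unrelated series": np.cumsum(rng.normal(0, 1, n_years)),
        }

        def standardize(x):
            return (x - x.mean()) / x.std()

        target = standardize(water)
        for name, series in candidates.items():
            # Euclidean distance between z-scored series: smaller means "behaves more similarly"
            dist = np.linalg.norm(standardize(series) - target)
            print(f"{name}: Euclidean distance to water consumption = {dist:.2f}")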

    A Preference-guided Multiobjective Evolutionary Algorithm based on Decomposition

    Multiobjective evolutionary algorithms based on decomposition (MOEA/Ds) represent a class of widely employed problem solvers for multicriteria optimization problems. In this work, we investigate the adaptation of these methods to incorporate preference information prior to the optimization, so that the search process can be biased towards a Pareto-optimal region that better satisfies the aspirations of a decision-making entity. Incorporating the Preference-based Adaptive Region-of-interest (PAR) framework into the MOEA/D requires only the modification of the reference points used within the scalarization function, which in principle allows straightforward use in more sophisticated versions of the base algorithm. Experimental results on the UF benchmark set suggest gains in diversity within the region of interest, without significant losses in convergence.
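
    A minimal sketch of the mechanism described above: MOEA/D scores candidates with a scalarization (here the weighted Tchebycheff function) built from a weight vector and a reference point, and a preference-guided variant can bias the search just by replacing that reference point. The aspiration values and the substitution rule below are assumptions for illustration, not the published PAR definition.

        import numpy as np

        def tchebycheff(f, w, z):
            """Weighted Tchebycheff scalarization used by MOEA/D subproblems."""
            return float(np.max(w * np.abs(f - z)))

        f = np.array([0.6, 0.3])             # objective vector of one candidate solution
        w = np.array([0.5, 0.5])             # weight vector of one subproblem
        z_ideal = np.array([0.0, 0.0])       # usual reference point: the ideal point
        z_aspiration = np.array([0.4, 0.2])  # preference-derived reference point (assumed)

        # Same solution, same weights; only the reference point changes, which is
        # the single modification the abstract says the PAR framework requires.
        print("standard MOEA/D scalarization  :", tchebycheff(f, w, z_ideal))
        print("preference-guided scalarization:", tchebycheff(f, w, z_aspiration))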

    ENNigma: A Framework for Private Neural Networks

    The increasing concerns about data privacy and the stringent enforcement of data protection laws are placing growing pressure on organizations to secure large datasets. The challenge of ensuring data privacy becomes even more complex in the domains of Artificial Intelligence and Machine Learning due to their requirement for large amounts of data. While approaches like differential privacy and secure multi-party computation allow data to be used with some privacy guarantees, they often compromise data integrity or accessibility as a tradeoff. In contrast, this is not the case for encryption-based strategies. While basic encryption only protects data during transmission and storage, Homomorphic Encryption (HE) is able to preserve data privacy during its processing on a centralized server. Despite its advantages, the computational overhead HE introduces is notably challenging when integrated into Neural Networks (NNs), which are already computationally expensive. In this work, we present a framework called ENNigma, which is a Private Neural Network (PNN) that uses HE for data privacy preservation. Unlike some state-of-the-art approaches, ENNigma guarantees data security throughout every operation, maintaining this guarantee even if the server is compromised. The impact of this privacy preservation layer on NN performance is minimal, with the only major drawback being its computational cost. Several optimizations were implemented to maximize the efficiency of ENNigma, leading to occasional computational time reductions above 50%. In the context of the Network Intrusion Detection System application domain, particularly within the sub-domain of Distributed Denial of Service attack detection, several models were developed and employed to assess ENNigma’s performance in a real-world scenario. These models demonstrated comparable performance to non-private NNs while also achieving the two-and-a-half-minute inference latency mark. This suggests that our framework is approaching a state where it can be effectively utilized in real-time applications. The key takeaway is that ENNigma represents a significant advancement in the field of PNNs, as it ensures data privacy with minimal impact on NN performance. While it is not yet ready for real-world deployment due to its computational complexity, this framework serves as a milestone toward realizing fully private and efficient NNs.
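
    The abstract does not describe ENNigma's internals or its HE back end, so the sketch below only illustrates the general pattern such a PNN relies on, using the TenSEAL library as a stand-in: a CKKS-encrypted input is sent to the server, which evaluates a linear layer and an HE-friendly polynomial activation directly on the ciphertext, and only the secret-key holder can decrypt the result. The choice of library, parameters and weights are all assumptions made for illustration.

        import tenseal as ts  # stand-in HE library; the abstract does not name ENNigma's back end

        # Client side: build a CKKS context and encrypt one toy feature vector.
        context = ts.context(
            ts.SCHEME_TYPE.CKKS,
            poly_modulus_degree=8192,
            coeff_mod_bit_sizes=[60, 40, 40, 60],
        )
        context.global_scale = 2 ** 40
        context.generate_galois_keys()

        x = [0.2, -1.3, 0.7, 0.05]          # e.g. flow statistics for DDoS detection (made up)
        enc_x = ts.ckks_vector(context, x)  # ciphertext sent to the server

        # Server side: one neuron with plaintext weights; the input stays encrypted throughout.
        w = [0.5, -0.1, 0.8, 0.3]
        enc_linear = enc_x.dot(w)           # encrypted dot product
        enc_out = enc_linear * enc_linear   # square activation: polynomial, hence HE-friendly

        # Client side: only the holder of the secret key can read the prediction.
        print("decrypted activation:", enc_out.decrypt()[0])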

    The MOEADr Package – A Component-Based Framework for Multiobjective Evolutionary Algorithms Based on Decomposition

    Multiobjective Evolutionary Algorithms based on Decomposition (MOEA/D) represent a widely used class of population-based metaheuristics for the solution of multicriteria optimization problems. We introduce the MOEADr package, which offers many of these variants as instantiations of a component-oriented framework. This approach contributes to easier reproducibility of existing MOEA/D variants from the literature, as well as to faster development and testing of new composite algorithms. The package offers a standardized, modular implementation of MOEA/D based on this framework, designed to provide researchers and practitioners with a standard way to discuss and express MOEA/D variants. In this paper we introduce the design principles behind the MOEADr package, as well as its current components. Three case studies are provided to illustrate the main aspects of the package.
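
    MOEADr itself is an R package and its actual interface is not reproduced here; the Python sketch below only illustrates the component-oriented idea the abstract describes, with decomposition, aggregation, variation and update realized as interchangeable functions plugged into one main loop. Every function name and parameter is an assumption made for this illustration.

        import numpy as np

        def sld_weights(h):
            """Decomposition component: simplex-lattice weight vectors for two objectives."""
            return np.array([[i / h, 1 - i / h] for i in range(h + 1)])

        def weighted_sum(f, w, z):
            """Aggregation component: weighted sum (z unused, kept for a common signature)."""
            return float(np.dot(w, f))

        def tchebycheff(f, w, z):
            """Alternative aggregation component, swappable without touching the main loop."""
            return float(np.max(w * np.abs(f - z)))

        def moead(objective, n_var, weights, aggregation, generations=100, seed=0):
            """Minimal MOEA/D skeleton: one solution per weight vector, neighborhood of size 3."""
            rng = np.random.default_rng(seed)
            pop = rng.random((len(weights), n_var))
            fit = np.array([objective(x) for x in pop])
            z = fit.min(axis=0)                                   # reference point
            for _ in range(generations):
                for i in range(len(weights)):
                    nbrs = np.clip([i - 1, i, i + 1], 0, len(weights) - 1)
                    parent = pop[rng.choice(nbrs)]
                    child = np.clip(parent + rng.normal(0, 0.1, n_var), 0, 1)  # variation component
                    fc = objective(child)
                    z = np.minimum(z, fc)
                    for j in nbrs:                                # update component
                        if aggregation(fc, weights[j], z) < aggregation(fit[j], weights[j], z):
                            pop[j], fit[j] = child, fc
            return pop, fit

        # Toy biobjective problem; swapping `tchebycheff` for `weighted_sum` yields another variant.
        problem = lambda x: np.array([x[0], 1 - np.sqrt(x[0]) + x[1]])
        pop, fit = moead(problem, n_var=2, weights=sld_weights(10), aggregation=tchebycheff)
        print(np.round(fit[:3], 3))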

    Mathematical modelling of plant signalling networks

    During the last two decades, molecular genetic studies and the completion of the sequencing of the Arabidopsis thaliana genome have increased knowledge of hormonal regulation in plants. These signal transduction pathways act in concert through gene regulatory and signalling networks whose main components have begun to be elucidated. Our understanding of the resulting cellular processes is hindered by the complex, and sometimes counter-intuitive, dynamics of the networks, which may be interconnected through feedback controls and cross-regulation. Mathematical modelling provides a valuable tool to investigate such dynamics and to perform in silico experiments that may not be easily carried out in a laboratory. In this article, we first review general methods for modelling gene and signalling networks and their application in plants. We then describe specific models of hormonal perception and cross-talk in plants. This sub-cellular analysis paves the way for more comprehensive mathematical studies of hormonal transport and signalling in a multi-scale setting.
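
    As a concrete, if generic, illustration of the kind of model the review surveys, the sketch below integrates a two-component hormone signalling motif with negative feedback as a system of ordinary differential equations. The motif, rate laws and parameter values are assumptions chosen for illustration, not taken from any specific model in the article.

        import numpy as np
        from scipy.integrate import solve_ivp

        def signalling(t, y, k_h=1.0, k_r=0.8, d_h=0.3, d_r=0.5):
            """Hormone H induces a repressor R; R feeds back to dampen hormone production."""
            H, R = y
            dH = k_h / (1.0 + R) - d_h * H   # hormone production repressed by R
            dR = k_r * H - d_r * R           # repressor induced by the hormone
            return [dH, dR]

        sol = solve_ivp(signalling, (0.0, 50.0), [0.1, 0.0], dense_output=True)
        t = np.linspace(0.0, 50.0, 6)
        for ti, (H, R) in zip(t, sol.sol(t).T):
            print(f"t={ti:5.1f}  hormone={H:.3f}  repressor={R:.3f}")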

    Accounting for Uncertainty Affecting Technical Change in an Economic-Climate Model

    The key role of technological change in the decline of energy and carbon intensities of aggregate economic activities is widely recognized. This has focused attention on the issue of developing endogenous models for the evolution of technological change. With a few exceptions, this is done in a deterministic framework, even though technological change is a dynamic process that is uncertain by nature. Indeed, the two main vectors through which technological change may be conceptualized, learning through R&D investments and learning-by-doing, both evolve and accumulate in a stochastic manner. How misleading are climate strategies designed without accounting for such uncertainty? The main idea underlying this research is to assess and discuss the effect of endogenizing this uncertainty on optimal R&D investment trajectories and carbon emission abatement strategies. To do so, we use an implicit stochastic programming version of the FEEM-RICE model, first described in Bosetti, Carraro and Galeotti (2005). The comparative advantage of taking a stochastic programming approach is estimated using the expected-value approach and the worst-case scenario approach as benchmarks. It appears that accounting for uncertainty and irreversibility would affect both the optimal level of investment in R&D, which should be higher, and emission reductions, which should be contained in the early periods. Indeed, waiting and investing in R&D appears to be the most cost-effective hedging strategy.
    Keywords: Stochastic Programming, Uncertainty and Learning, Endogenous Technical Change
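
    A toy numerical sketch of the benchmarking idea described above: it contrasts choosing an R&D level that minimizes expected cost over damage scenarios (the stochastic-programming view) with choosing one that is optimal for the mean damage parameter (the expected-value benchmark). The cost function, scenario values and probabilities are made up and bear no relation to FEEM-RICE; with a damage term that is convex in the uncertain parameter, the stochastic solution invests more in R&D, in line with the abstract's finding.

        import numpy as np

        damage = np.array([0.5, 2.0, 6.0])   # possible damage coefficients (illustrative scenarios)
        prob = np.array([0.5, 0.3, 0.2])     # scenario probabilities (illustrative)

        def total_cost(rd, d):
            """R&D expenditure plus climate damages, which R&D knowledge reduces; convex in d."""
            return rd ** 2 + (d * np.exp(-rd)) ** 2

        grid = np.linspace(0.0, 5.0, 501)

        # Stochastic-programming view: minimize expected cost across scenarios.
        expected_cost = [float(np.dot(prob, total_cost(rd, damage))) for rd in grid]
        rd_stochastic = grid[int(np.argmin(expected_cost))]

        # Expected-value benchmark: optimize against the mean damage coefficient only.
        mean_damage = float(np.dot(prob, damage))
        rd_expected_value = grid[int(np.argmin([total_cost(rd, mean_damage) for rd in grid]))]

        print(f"R&D level, stochastic programming:  {rd_stochastic:.2f}")
        print(f"R&D level, expected-value approach: {rd_expected_value:.2f}")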

    Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) Network

    Because of their effectiveness in broad practical applications, LSTM networks have received a wealth of coverage in scientific journals, technical blogs, and implementation guides. However, in most articles, the inference formulas for the LSTM network and its parent, the RNN, are stated axiomatically, while the training formulas are omitted altogether. In addition, the technique of "unrolling" an RNN is routinely presented without justification throughout the literature. The goal of this paper is to explain the essential RNN and LSTM fundamentals in a single document. Drawing from concepts in signal processing, we formally derive the canonical RNN formulation from differential equations. We then propose and prove a precise statement, which yields the RNN unrolling technique. We also review the difficulties with training the standard RNN and address them by transforming the RNN into the "Vanilla LSTM" network through a series of logical arguments. We provide all equations pertaining to the LSTM system together with detailed descriptions of its constituent entities. Albeit unconventional, our choice of notation and the method for presenting the LSTM system emphasizes ease of understanding. As part of the analysis, we identify new opportunities to enrich the LSTM system and incorporate these extensions into the Vanilla LSTM network, producing the most general LSTM variant to date. The target reader has already been exposed to RNNs and LSTM networks through numerous available resources and is open to an alternative pedagogical approach. A Machine Learning practitioner seeking guidance for implementing our new augmented LSTM model in software for experimentation and research will find the insights and derivations in this tutorial valuable as well.
    Comment: 43 pages, 10 figures, 78 references
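
    Since the abstract centers on the Vanilla LSTM equations and the unrolling of an RNN, a minimal numpy sketch of one LSTM forward step, unrolled over a short sequence, is given below. It uses the common textbook gate equations, not necessarily the paper's notation or its augmented variant, and all sizes and random values are illustrative.

        import numpy as np

        # Standard Vanilla LSTM cell:
        #   f_t = sigmoid(W_f [h_{t-1}, x_t] + b_f)   forget gate
        #   i_t = sigmoid(W_i [h_{t-1}, x_t] + b_i)   input gate
        #   g_t = tanh   (W_g [h_{t-1}, x_t] + b_g)   candidate cell state
        #   o_t = sigmoid(W_o [h_{t-1}, x_t] + b_o)   output gate
        #   c_t = f_t * c_{t-1} + i_t * g_t
        #   h_t = o_t * tanh(c_t)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def lstm_step(x_t, h_prev, c_prev, W, b):
            """One time step. W has shape (4*H, H+D); b has shape (4*H,)."""
            H = h_prev.shape[0]
            z = W @ np.concatenate([h_prev, x_t]) + b
            f = sigmoid(z[0:H])
            i = sigmoid(z[H:2*H])
            g = np.tanh(z[2*H:3*H])
            o = sigmoid(z[3*H:4*H])
            c = f * c_prev + i * g
            h = o * np.tanh(c)
            return h, c

        # "Unrolling": apply the same cell (shared W, b) at every step of a sequence.
        rng = np.random.default_rng(0)
        D, H, T = 3, 5, 4                          # input size, hidden size, sequence length
        W, b = rng.normal(0, 0.1, (4*H, H+D)), np.zeros(4*H)
        h, c = np.zeros(H), np.zeros(H)
        for x_t in rng.normal(size=(T, D)):
            h, c = lstm_step(x_t, h, c, W, b)
        print("final hidden state:", np.round(h, 3))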

    Carbon Capture and Sequestration: How Much Does this Uncertain Option Affect Near-Term Policy Choices?

    One of the main issues on the climate policy agenda, the timing of abatement efforts, hinges on the uncertainties of climate change risks and technological evolution. We use a stochastic optimization framework to explore these two features jointly. First, we embed in the model the potential future large-scale availability of Carbon Capture and Storage (CCS) technologies. While non-CCS mitigation that reduces fossil energy use is modelled as exerting inertia on the economic system, mainly due to the durability of capital in energy systems and to technology lock-in and lock-out phenomena, the implementation of CCS technologies is modelled as implying less resilience of the system to changes in policy direction. Second, climate uncertainty is related in the model to the atmospheric temperature response to an increase in GHG concentrations. Performing different simulation experiments, we find that the environmental target, derived from a cost-benefit analysis, should be more ambitious when CCS is included in the picture. Moreover, the possible future availability of CCS is not a reason to significantly reduce near-term optimal abatement efforts. Finally, the availability of better information on the climate cycle is in general more valuable than better information on the CCS technological option.
    Keywords: Climate change, Uncertainty, Sequestration, Cost-benefit analysis

    Migration, Unemployment and Net Benefits of Inbound Tourism in a Developing Country

    International tourism is increasingly viewed as one of the best opportunities for sustainable economic and social development in developing countries. There is also increasing concern among public policy makers as to whether mass-tourism coastal resorts can play a catalytic role in overall economic development and improve the real income of their communities. In this paper, we present a general equilibrium model that explicitly takes into consideration specific features of some developing countries (e.g. coastal tourism, a dual labour market, unemployment, migration, competition between agriculture and tourism for land) to analyse the ways in which an inbound tourism boom affects this kind of country, and in particular its real income. We define the conditions under which an inbound tourism boom makes the residents of developing countries worse off.
    Keywords: Economic impacts, General equilibrium model, Inbound tourism, Migration, Unemployment, Developing countries