199 research outputs found

    Public Infrastructure, Non-Cooperative Investments and Endogenous Growth

    This paper develops a two-country general equilibrium model with endogenous growth in which governments behave strategically in the provision of productive infrastructure. Public capital enters both national and foreign production as an external input and is financed by a flat tax on income. In the private sector, firms and households take public policy as given when making their decisions. For arbitrary constant tax rates, the dynamic analysis reveals two important features. Firstly, under constant returns, the two countries' growth rates differ during the transition but are identical on the balanced growth path. Secondly, due to the infrastructure externality, and relaxing constant returns to scale, a country with decreasing returns can experience sustained growth provided that the other grows at a positive constant rate. We then endogenize tax rates. It is shown that both a Markov Perfect Equilibrium (MPE) and a Centralized Solution (CS) exist, even when the parameters allow for endogenous growth and therefore explosive paths for the state variables. Nash growth rates are compared with the centralized rates. We show that cooperation in infrastructure provision does not necessarily lead to higher growth for each country. We also show that, in some configurations of households' preferences and initial conditions, cooperation would call for a slowdown in the initial stages of development, whereas strategic investments would not. Lastly, depending also on the configuration of preferences, we show that cooperation can increase or decrease the gap between countries' growth rates.
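The cross-country externality described in the abstract can be sketched with a production form like the following (our notation and functional form, assumed for illustration; the abstract does not give the paper's exact specification):

```latex
% Country i's output uses private capital K_i, its own public capital G_i,
% and the foreign public capital G_j as external inputs:
Y_i = A_i \, K_i^{\alpha_i} \, G_i^{\beta_i} \, G_j^{\gamma_i},
\qquad j \neq i,
% with public investment financed by a flat income tax \tau_i:
\dot{G}_i = \tau_i Y_i .
```

Under such a specification, constant returns in the reproducible inputs is the knife-edge case consistent with a common balanced growth path, while a country with decreasing returns can still grow if the foreign public capital it uses as an input grows at a constant rate.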

    End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes

    Meta-Bayesian optimisation (meta-BO) aims to improve the sample efficiency of Bayesian optimisation by leveraging data from related tasks. While previous methods successfully meta-learn either a surrogate model or an acquisition function independently, joint training of both components remains an open challenge. This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures. We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data. Early on, we notice that training transformer-based neural processes from scratch with RL is challenging due to insufficient supervision, especially when rewards are sparse. We formalise this claim with a combinatorial analysis showing that the widely used notion of regret as a reward signal exhibits a logarithmic sparsity pattern in trajectory lengths. To tackle this problem, we augment the RL objective with an auxiliary task that guides part of the architecture to learn a valid probabilistic model as an inductive bias. We demonstrate that our method achieves state-of-the-art regret results against various baselines in experiments on standard hyperparameter optimisation tasks, and it also outperforms others in the real-world problems of mixed-integer programming tuning, antibody design, and logic synthesis for electronic design automation.
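The logarithmic sparsity claim can be illustrated with a toy simulation (our illustration, not the paper's combinatorial analysis): if a non-zero reward is issued only when the incumbent best observation improves, and the observed values are i.i.d., the expected number of non-zero rewards over a length-T trajectory is the harmonic number H_T ≈ ln T.

```python
import math
import random

def count_records(T, rng):
    """Count steps at which a new best value appears -- the only steps
    that would carry a non-zero improvement-based reward."""
    best = float("-inf")
    records = 0
    for _ in range(T):
        x = rng.random()
        if x > best:
            best, records = x, records + 1
    return records

rng = random.Random(0)
T, trials = 1000, 2000
avg = sum(count_records(T, rng) for _ in range(trials)) / trials
harmonic = sum(1.0 / k for k in range(1, T + 1))  # H_T ~ ln T + 0.5772
print(f"average improving steps: {avg:.2f}; H_T = {harmonic:.2f}")
```

So only roughly ln T of the T steps carry learning signal, which is the kind of sparsity the paper's auxiliary probabilistic-model objective is meant to compensate for.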

    Geosimulation and Multicriteria Modelling of Residential Land Development in the City of Tehran: A Comparative Analysis of Global and Local Models

    Conventional models for simulating land-use patterns are insufficient for addressing the complex dynamics of urban systems. A new generation of urban models, inspired by research on cellular automata and multi-agent systems, has been proposed to address the drawbacks of conventional modelling. This new generation of urban models is called geosimulation. Geosimulation attempts to model macro-scale patterns using micro-scale urban entities such as vehicles, homeowners, and households. The urban entities are represented by agents in geosimulation modelling. Each type of agent has different preferences and priorities and shows different behaviours. In the land-use modelling context, the behaviour of an agent is its ability to evaluate the suitability of parcels of land using a number of factors (criteria and constraints) and to choose the best land(s) for a specific purpose. Multicriteria analysis provides a set of methods and procedures that can be used in geosimulation modelling to describe the behaviours of agents. There are three main objectives of this research. First, a framework for integrating multicriteria models into geosimulation procedures is developed to simulate residential development in the City of Tehran. Specifically, the local form of multicriteria models is used as a method for modelling agents' behaviours. Second, the framework is tested in the context of residential land development in Tehran between 1996 and 2006. The empirical research is focused on identifying the spatial patterns of land suitability for residential development, taking into account the preferences of three groups of actors (agents): households, developers, and local authorities. Third, a comparative analysis of the results of the geosimulation-multicriteria models is performed.
A number of global and local geosimulation-multicriteria models (scenarios) of residential development in Tehran are defined, and the results obtained by these scenarios are evaluated and examined. The output of each geosimulation-multicriteria model is compared to the results of the other models and to the actual pattern of land use in Tehran. The analysis is focused on comparing the results of the local and global geosimulation-multicriteria models. Accuracy measures and spatial metrics are used in the comparative analysis. The results suggest that, in general, the local geosimulation-multicriteria models perform better than the global methods.
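Agent behaviour of this kind is commonly operationalized as a weighted linear combination of normalized criteria plus Boolean constraints. The sketch below uses hypothetical criteria names and weights, not those of the Tehran study:

```python
def suitability(parcel, weights, constraints):
    """Score a parcel: 0.0 if any constraint fails, otherwise the
    weighted sum of normalized criterion values (each in [0, 1])."""
    if not all(ok(parcel) for ok in constraints):
        return 0.0
    return sum(w * parcel[name] for name, w in weights.items())

# Hypothetical parcel and agent preferences (weights sum to 1).
parcel = {"slope": 0.9, "access": 0.7, "price": 0.4, "protected": False}
weights = {"slope": 0.3, "access": 0.5, "price": 0.2}
constraints = [lambda p: not p["protected"]]

print(round(suitability(parcel, weights, constraints), 2))  # prints 0.7
```

Different agent types (households, developers, local authorities) would carry different weight vectors, and a local multicriteria model would additionally let the weights vary across locations rather than holding one global set for the whole study area.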

    The Automatic Statistician: A Relational Perspective

    Department of Computer Engineering
    Gaussian Processes (GPs) provide a general and analytically tractable way of capturing complex, time-varying, nonparametric functions. The time-varying parameters of GPs can be explained as a composition of base kernels such as linear, smoothness, or periodicity, since covariance kernels are closed under addition and multiplication. The Automatic Bayesian Covariance Discovery (ABCD) system constructs natural-language descriptions of time-series data by treating unknown time-series data nonparametrically using GPs. Unfortunately, learning a composite covariance kernel from a single time-series dataset often results in less informative kernels instead of qualitatively distinct descriptions. We address this issue by proposing a relational kernel learning method which can model relationships between sets of data and find shared structure among the time-series datasets. We show that the shared structure can help learn more accurate models for sets of regression problems, using synthetic data, US top market capitalization stock data, and US house sales index data.
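The closure of covariance kernels under addition and multiplication is what makes this compositional search possible. A minimal self-contained sketch (illustrative base kernels and hyperparameters, not ABCD's implementation):

```python
import math

# Base covariance kernels on scalar inputs.
def linear(x, y, c=0.0):
    return (x - c) * (y - c)

def periodic(x, y, period=1.0, length=1.0):
    return math.exp(-2.0 * math.sin(math.pi * abs(x - y) / period) ** 2 / length ** 2)

def rbf(x, y, length=1.0):
    return math.exp(-0.5 * (x - y) ** 2 / length ** 2)

# Sums and products of kernels are again valid kernels, so a structure
# such as "linear trend plus locally smooth periodicity" is itself a kernel:
def composite(x, y):
    return linear(x, y) + rbf(x, y) * periodic(x, y)

print(composite(0.3, 0.8))
```

ABCD searches over such compositions and renders the winning structure as a natural-language description; the relational extension proposed here would share components of the composition across the related time series rather than fitting each series independently.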

    Highly Automated Formal Verification of Arithmetic Circuits

    This dissertation investigates the problems of two distinctive formal verification techniques for verifying large-scale multiplier circuits and proposes two approaches to overcome some of these problems. The first technique is equivalence checking based on recurrence relations, while the second is the symbolic computation technique based on the theory of Gröbner bases. This investigation demonstrates that approaches based on symbolic computation have better scalability and more robustness than state-of-the-art equivalence checking techniques for the verification of arithmetic circuits. Based on this conclusion, the thesis leverages the symbolic computation technique to verify floating-point designs. It proposes a new algebraic equivalence checking approach: in contrast to classical combinational equivalence checking, the proposed technique is capable of checking the equivalence of two circuits which have different architectures of arithmetic units as well as control logic parts, e.g., floating-point multipliers.
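The symbolic-computation style of verification models each gate as a polynomial and checks that the word-level specification polynomial vanishes modulo the gate relations. A toy version for a half adder (our example; real flows use Gröbner-basis reduction rather than exhaustive evaluation):

```python
from itertools import product

# Gate polynomials over {0, 1}: XOR(a, b) = a + b - 2ab, AND(a, b) = ab.
def half_adder(a, b):
    s = a + b - 2 * a * b  # sum bit
    c = a * b              # carry bit
    return s, c

# Word-level specification: a + b = 2c + s, i.e. the specification
# polynomial 2c + s - (a + b) must vanish for all Boolean inputs.
for a, b in product((0, 1), repeat=2):
    s, c = half_adder(a, b)
    assert 2 * c + s - (a + b) == 0
print("specification polynomial vanishes on all inputs")
```

For multiplier-scale circuits this check is carried out symbolically: the specification polynomial is reduced by repeated division against the gate polynomials, and reaching zero proves correctness without enumerating inputs, which is where the scalability advantage comes from.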

    MaxSAT Evaluation 2022 : Solver and Benchmark Descriptions

    Non peer reviewed

    Adaptive Design and Flexible Approval of Clinical Trials

    Dose-finding clinical trials are among the most critical cornerstones of the healthcare system. In this broad research area there are many decision-making problems that are extremely challenging to address, yet even a small improvement may yield significant benefits to society. Dose-finding clinical trials are extremely expensive and require multiple time-consuming and complicated R&D phases. Despite all the costs and the long time these trials need to conclude (on average over ten years for each new drug/technology), fewer than 15% of them end in a new approved drug entering the market. This problem is exacerbated further for drugs that target rare diseases, where the costs of testing subjects are higher, sampling budgets are more restricted by the number of available patients, and the chances of success and expected payoffs are much lower. In this dissertation, we first propose a new information-based objective function to guide adaptive dose allocation as part of Phase II clinical studies, and we demonstrate its merit under small sampling budgets. Then, we redesign Phase II clinical trials with the goal of personalizing this process and finding different target doses, with certain efficacy levels, for different groups of patients. Finally, we move on to the FDA side of the approval process and analyze flexible approval policies that the FDA can apply to Phase III clinical trials. The motivation for this work is based on recent studies suggesting that a flexible approval process can incentivize the research and development of new drugs for rare diseases. We hope that the results of our research help clinicians, pharmaceutical companies, and the FDA better understand the consequences of their decisions, while leading to potentially more effective treatment/dose specification for the patients who are in need of new drugs.
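As a toy illustration of adaptive dose allocation (not the dissertation's information-based objective): model each dose's response rate with a Beta-Bernoulli posterior and allocate the next cohort to the dose whose posterior is most uncertain. The dose labels and outcomes below are made up.

```python
doses = ["low", "mid", "high"]
posterior = {d: [1, 1] for d in doses}  # Beta(alpha, beta) prior per dose

def update(dose, responded):
    """Conjugate update after observing one patient outcome (0 or 1)."""
    a, b = posterior[dose]
    posterior[dose] = [a + responded, b + 1 - responded]

def beta_variance(a, b):
    return a * b / ((a + b) ** 2 * (a + b + 1))

# Observed cohort outcomes so far (hypothetical data).
for dose, outcome in [("low", 0), ("mid", 1), ("mid", 1), ("high", 0)]:
    update(dose, outcome)

# Next cohort goes to the dose with the widest posterior, i.e. where one
# more observation is expected to be most informative.
next_dose = max(doses, key=lambda d: beta_variance(*posterior[d]))
print(next_dose)
```

An information-based criterion would replace posterior variance with, for example, expected entropy reduction, but it belongs to the same allocate-where-uncertain family of rules, which is why such designs can do well under small sampling budgets.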