1,358 research outputs found

    Mapping poverty at multiple geographical scales

    Poverty mapping is a powerful tool to study the geography of poverty. The choice of spatial resolution is central, as poverty measures defined at a coarser level may mask heterogeneity at finer levels. We introduce a small area multi-scale approach integrating survey and remote sensing data that leverages information at different spatial resolutions and accounts for hierarchical dependencies, preserving the coherence of the estimates. We map poverty rates by proposing a Bayesian Beta-based model equipped with a new benchmarking algorithm that accounts for the double-bounded support. A simulation study shows the effectiveness of our proposal, and an application to Bangladesh is discussed. Comment: 22 pages, 7 figures.
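
    As an illustration of the kind of area-level Bayesian Beta regression the abstract describes, the sketch below fits poverty rates with a logit-linear mean and a precision parameter, then applies a naive benchmarking adjustment; the covariates, priors, national rate, and the clipping-based benchmarking step are assumptions for illustration, not the authors' multi-scale model or their benchmarking algorithm.

        # Illustrative area-level Beta regression for poverty rates (PyMC; not the paper's model).
        import numpy as np
        import pymc as pm

        rng = np.random.default_rng(0)
        n_areas = 50
        X = rng.normal(size=(n_areas, 2))                      # e.g. remote-sensing covariates per area
        true_mu = 1 / (1 + np.exp(-(-1.0 + X @ np.array([0.5, -0.3]))))
        y = rng.beta(true_mu * 30, (1 - true_mu) * 30)         # synthetic survey poverty rates in (0, 1)

        with pm.Model():
            alpha = pm.Normal("alpha", 0.0, 1.0)
            beta = pm.Normal("beta", 0.0, 1.0, shape=2)
            phi = pm.Gamma("phi", 2.0, 0.1)                    # precision of the Beta likelihood
            mu = pm.Deterministic("mu", pm.math.invlogit(alpha + pm.math.dot(X, beta)))
            pm.Beta("rate", alpha=mu * phi, beta=(1 - mu) * phi, observed=y)
            idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

        # Naive benchmarking illustration: rescale posterior-mean rates so their
        # population-weighted mean matches a reliable national estimate, clipped to [0, 1].
        post_mu = idata.posterior["mu"].mean(dim=("chain", "draw")).values
        weights = np.full(n_areas, 1.0 / n_areas)              # placeholder population shares
        national_rate = 0.25                                   # placeholder national direct estimate
        benchmarked = np.clip(post_mu * national_rate / np.sum(weights * post_mu), 0.0, 1.0)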

    A decision support methodology to enhance the competitiveness of the Turkish automotive industry

    Three levels of competitiveness affect the success of business enterprises in a globally competitive environment: the competitiveness of the company, the competitiveness of the industry in which the company operates, and the competitiveness of the country where the business is located. This study analyses the competitiveness of the automotive industry from a national competitiveness perspective, using a methodology based on Bayesian Causal Networks. First, we structure the competitiveness problem of the automotive industry through a synthesis of expert knowledge in the light of the World Economic Forum's competitiveness indicators. Second, we model the relationships among the variables identified in the problem structuring stage and analyse these relationships using a Bayesian Causal Network. Third, we develop policy suggestions under various scenarios to enhance the national competitive advantages of the automotive industry. We present an analysis of the Turkish automotive industry as a case study. It is possible to generalise the policy suggestions developed for the Turkish automotive industry to the automotive industries of other developing countries where country and industry competitiveness levels are similar to those of Turkey.
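
    As a toy illustration of the Bayesian Causal Network machinery the study relies on, the sketch below builds a two-node discrete network with pgmpy and runs a scenario query; the variable names, states, and probabilities are invented for illustration and are not taken from the paper, and the class names follow the pgmpy 0.1.x API (newer releases may rename them).

        # Toy two-node Bayesian network for scenario analysis (invented variables and numbers).
        from pgmpy.models import BayesianNetwork
        from pgmpy.factors.discrete import TabularCPD
        from pgmpy.inference import VariableElimination

        model = BayesianNetwork([("infrastructure", "industry_competitiveness")])

        cpd_infra = TabularCPD("infrastructure", 2, [[0.6], [0.4]])   # P(weak), P(strong)
        cpd_comp = TabularCPD(
            "industry_competitiveness", 2,
            [[0.8, 0.3],    # P(low  | weak), P(low  | strong)
             [0.2, 0.7]],   # P(high | weak), P(high | strong)
            evidence=["infrastructure"], evidence_card=[2],
        )
        model.add_cpds(cpd_infra, cpd_comp)
        assert model.check_model()

        # Scenario: competitiveness given strong infrastructure.
        infer = VariableElimination(model)
        print(infer.query(["industry_competitiveness"], evidence={"infrastructure": 1}))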

    UNCERTAINTY QUANTIFICATION OF PETROLEUM RESERVOIRS: A STRUCTURED BAYESIAN APPROACH

    This thesis proposes a systematic Bayesian approach for uncertainty quantification, with an application to petroleum reservoirs. First, we demonstrated the potential of additional misfit functions, based on specific events in reservoir management, to improve the representation of reservoir behaviour and the quality of probabilistic forecasting. Water breakthrough and productivity deviation were selected and provided insights into discontinuities in the simulation data when compared to the use of traditional misfit functions (e.g. production rate, BHP) alone. Second, we designed and implemented a systematic methodology for uncertainty reduction that combines reservoir simulation and emulation techniques under the Bayesian History Matching for Uncertainty Reduction (BHMUR) approach. Flexibility, repeatability and scalability are the main features of this high-level structure, which incorporates innovations such as phases of evaluation and multiple emulation techniques. This workflow potentially makes the practice of BHMUR more standardised across applications. It was applied to a complex case study, with 26 uncertainties, outputs from 25 wells and 11+ years of historical data based on a hypothetical reality, resulting in the construction of 115 valid emulators and a small fraction of the original search space appropriately considered non-implausible by the end of the uncertainty reduction process. Third, we expanded methodologies for critical steps in the BHMUR practice: (1) extension of the statistical formulation to two-class emulators; (2) efficient selection of a combination of outputs to emulate; (3) validation of emulators based on multiple criteria; and (4) accounting for systematic and random errors in observed data. Finally, a critical step in the BHMUR approach is the quantification of model discrepancy, which accounts for imperfect models aiming to represent a real physical system. We proposed a methodology to quantify the model discrepancy originating from errors in target data that are set as boundary conditions in a numerical simulator. Its application demonstrated that model discrepancy depends on both time and location in the input space, which is a central finding to guide the BHMUR practice in studies based on real fields.
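
    A minimal numpy sketch of the implausibility measure that Bayesian History Matching typically uses to rule out regions of the input space is given below; the emulator, observation, and discrepancy variances and the cutoff of 3 are placeholder values, not quantities taken from the thesis.

        # Implausibility of a candidate input x for a single emulated output.
        import numpy as np

        def implausibility(emulator_mean, emulator_var, observed, obs_var, discrepancy_var):
            """|E[f(x)] - z| standardised by all acknowledged sources of uncertainty."""
            return np.abs(emulator_mean - observed) / np.sqrt(
                emulator_var + obs_var + discrepancy_var
            )

        # Placeholder numbers: inputs with I(x) below a cutoff (3 is a common choice) are non-implausible.
        I = implausibility(emulator_mean=105.0, emulator_var=4.0,
                           observed=100.0, obs_var=2.0, discrepancy_var=1.0)
        print(I, "non-implausible" if I < 3.0 else "implausible")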

    Quantificação de incertezas em reservatórios de petróleo: uma abordagem Bayesiana estruturada

    Advisors: Denis José Schiozer and Camila Caiado; Ian Vernon; Michael Goldstein; Guilherme Daniel Avansi. Thesis (doctorate) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica, and Durham University. Abstract: This thesis proposes a systematic Bayesian approach for the uncertainty quantification of petroleum reservoirs. In the first paper, we demonstrated the potential of additional objective functions, based on specific events in the reservoir management phase, to improve the representation of reservoir behaviour and the quality of probabilistic forecasting. Water breakthrough and productivity deviation were selected, providing an understanding of discontinuities in the numerical model and in the simulation data when compared with the exclusive use of traditional objective functions (e.g. production rate). In the second paper, we defined and implemented a systematic methodology for uncertainty reduction that combines reservoir simulation and emulation techniques in a Bayesian History Matching for Uncertainty Reduction (BHMUR) approach. Flexibility, repeatability and scalability are the main features of this general structure, which incorporates innovations such as evaluation phases and multiple emulation techniques. This procedure potentially makes the practice of BHMUR more standardised across applications. We applied it to a case study with 26 uncertain attributes, production data from 25 wells and 11+ years of historical production data based on a hypothetical reality, resulting in the construction of 115 validated emulators and a small fraction of the search space appropriately considered non-implausible by the end of the uncertainty reduction process. In the third paper, we expanded methodologies for critical stages of the BHMUR practice: (1) extension of the BHMUR statistical formulation to accommodate classifier-type emulators; (2) effective selection of a combination of production data for emulation; (3) validation of emulators based on multiple criteria; and (4) accounting for systematic and random errors in observed data. In the last paper, we assessed a critical step for the BHMUR practice, the quantification of model discrepancy, which accounts for the representation of physical systems by imperfect models. We proposed a methodology to quantify the model discrepancy originating from errors in measured data supplied to the numerical simulator as boundary conditions (targets). Applying the methodology demonstrated that model discrepancy depends simultaneously on time and on position in the search space: an important finding to guide the uncertainty quantification process in case studies based on real petroleum reservoirs. Doctorate; Doctor of Petroleum Sciences and Engineering, area of Reservoirs and Management. Funding: 206985/2017-7, CNPQ, FUNCAM.

    Pushing the frontier: three essays on Bayesian Stochastic Frontier modelling

    This thesis presents three essays on Bayesian Stochastic Frontier models for cost and production functions, linking the fields of productivity and efficiency measurement and spatial econometrics, with applications to energy economics and aggregate productivity. The thesis opens with a literature review chapter highlighting the advances and gaps in the stochastic frontier literature. Chapter 3 discusses the measurement of aggregate efficiency in electricity consumption in transition economies in a cost frontier framework. The underlying model is extended to a Spatial Autoregressive model with efficiency spillovers in Chapter 4, showing good performance in simulations. The model is applied to aggregate productivity in European countries, leading to evidence of convergence between eastern and western economies over time, as in the previous chapter regarding efficiency in electricity consumption. Finally, Chapter 5 proposes a spatial model which allows for dependence in the structure of the inefficiency component while accounting for unobserved heterogeneity. This approach is applied to New Zealand electricity distribution networks, finding some evidence of efficiency spillovers between the firms. All essays explore the performance of the models using simulations and discuss the utility of the approaches in small samples. The thesis concludes with a summary of findings and future paths of research.
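
    To make the composed-error structure behind a stochastic frontier concrete, the following numpy sketch simulates a log-linear production frontier with a symmetric noise term and a one-sided inefficiency term; the coefficients, variances, and the half-normal inefficiency distribution are illustrative choices, not the specifications estimated in the thesis.

        # Simulated stochastic production frontier: log y = const + X @ beta + v - u (illustrative).
        import numpy as np

        rng = np.random.default_rng(1)
        n = 500
        log_inputs = rng.normal(size=(n, 2))               # e.g. log labour, log capital
        beta = np.array([0.6, 0.3])
        v = rng.normal(0.0, 0.2, size=n)                   # symmetric noise
        u = np.abs(rng.normal(0.0, 0.4, size=n))           # half-normal inefficiency, u >= 0
        log_output = 1.0 + log_inputs @ beta + v - u

        # Technical efficiency of each firm relative to the frontier.
        efficiency = np.exp(-u)
        print(round(efficiency.mean(), 3))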

    Understanding the costs of urban transportation using causal inference methods

    With urbanisation on the rise, the need to transport the population within cities in an efficient, safe and sustainable manner has increased tremendously. In serving the growing demand for urban travel, one of the key policy questions for decision makers is whether to invest more in road infrastructure or in public transportation. As both of these solutions require substantial spending of public money, understanding their costs continues to be a major area of research. This thesis aims to improve our understanding of the technology underlying the costs of operating public and private modes of urban travel and to provide new empirical insights using large-scale datasets and causal econometric modelling techniques. The thesis provides empirical and theoretical contributions to three different strands of the transportation literature. Firstly, by assessing the relative costs of a group of twenty-four metro systems across the world over the period 2004 to 2016, this thesis models the cost structure of these metros and quantifies the important external sources of cost-efficiency. The main methodological development is to control for confounding from observed and unobserved characteristics of metro operations through the application of dynamic panel data methods. Secondly, the thesis quantifies the travel efficiency arising from increasing the provision of road-based urban travel. A crucial pre-condition of this analysis is a reliable characterisation of the technology describing congestion in a road network. In pursuit of this goal, this study develops novel causal econometric models describing the vehicular flow-density relationship, both for a highway section and for an urban network, using large-scale traffic detector data and non-parametric instrumental variables estimation. Our model is unique in that we control for bias from unobserved confounding, for instance differences in driving behaviour. As an important intermediate research outcome, this thesis also provides a detailed account of the economic theory underlying the link between the flow-density relationship and the corresponding production function for travel in a highway section and in an urban road network. Finally, the influence of density economies in metros is investigated further using large-scale smart card and train location data from the Mass Transit Railway network in Hong Kong. This thesis delivers novel station-based causal econometric models to understand how passenger congestion delays arise in metro networks at higher passenger densities. The model aims to provide metro operators with a tool to predict likely occurrences of a problem in the network well in advance and to put appropriate control measures in place to minimise the impact of delays and improve overall system reliability. The empirical results from this thesis have important implications for the appraisal of transportation investment projects.
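
    For context on the flow-density relationship the thesis characterises, here is a minimal sketch of the classic Greenshields fundamental diagram; the free-flow speed and jam density are placeholder values, and the thesis itself estimates this relationship non-parametrically with instrumental variables rather than assuming this parametric form.

        # Greenshields fundamental diagram (illustrative parametric form, placeholder parameters).
        import numpy as np

        v_free = 100.0     # free-flow speed, km/h (placeholder)
        k_jam = 120.0      # jam density, vehicles/km (placeholder)

        density = np.linspace(0.0, k_jam, 200)
        speed = v_free * (1.0 - density / k_jam)           # speed falls linearly with density
        flow = density * speed                             # flow peaks at half the jam density

        i = flow.argmax()
        print("capacity ~", round(flow[i]), "veh/h at density", round(density[i]), "veh/km")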

    Predictive modelling and uncertainty quantification of UK forest growth

    Forestry in the UK is dominated by coniferous plantations. Sitka spruce (Picea sitchensis) and Scots pine (Pinus sylvestris) are the most prevalent species and are mostly grown in single-age monoculture stands. The forest strategies for Scotland, England, and Wales all include efforts to achieve further afforestation. The aim of this afforestation is to provide a multi-functional forest with a broad range of benefits. Due to the time scale involved in forestry, accurate forecasts of stand productivity (along with clearly defined uncertainties) are essential to forest managers. These can be provided by a range of approaches to modelling forest growth. In this project, model comparison, Bayesian calibration, and data assimilation methods were all used to improve forecasts, and the understanding of their uncertainty, for the two most important conifers in UK forestry. Three different forest growth models were compared in simulating the growth of Scots pine: a yield table approach, the process-based 3PGN model, and a Stand Level Dynamic Growth (SLeDG) model. Predictions were compared graphically over the typical productivity range for Scots pine in the UK, and the strengths and weaknesses of each model were considered. All three produced similar growth trajectories. The greatest difference between models was in volume and biomass in unthinned stands, where the yield table predicted a much larger range than the other two models. Future advances in data availability and computing power should allow for greater use of process-based models, but in the interim more flexible dynamic growth models may be more useful than static yield tables for providing predictions that extend to non-standard management prescriptions and estimates of early growth and yield. A Bayesian calibration of the SLeDG model was carried out for both Sitka spruce and Scots pine in the UK for the first time. Bayesian calibration allows both model structure and parameters to be assessed simultaneously in a probabilistic framework, providing a model with which forecasts and their uncertainty can be better understood and quantified using posterior probability distributions. Two different structures for including local productivity in the model were compared with a Bayesian model comparison, and a complete calibration of the more probable model structure was then completed. Example forecasts from the calibration were compatible with existing yield tables for both species. This method could be applied to other species or other model structures in the future. Finally, data assimilation was investigated as a way of reducing forecast uncertainty. Data assimilation assumes that neither observations nor models provide a perfect description of a system, but that combining them may provide the best estimate. SLeDG model predictions and LiDAR measurements for sub-compartments within Queen Elizabeth Forest Park were combined with an Ensemble Kalman Filter. Uncertainty in all of the state variables was reduced following the second data assimilation. However, errors in stand delineation and estimated stand yield class may have increased the observational uncertainty, reducing the efficacy of the method for reducing overall uncertainty.
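
    A compact numpy sketch of the Ensemble Kalman Filter analysis step used in the data assimilation described above follows; the state variables, observation operator, error variances, and ensemble size are placeholders rather than the project's actual configuration.

        # Ensemble Kalman Filter analysis step (placeholder state and observation setup).
        import numpy as np

        rng = np.random.default_rng(2)
        n_state, n_obs, n_ens = 3, 1, 50        # e.g. state = (top height, basal area, volume)

        X_f = rng.normal([20.0, 30.0, 250.0], [2.0, 3.0, 25.0], size=(n_ens, n_state)).T
        H = np.array([[0.0, 0.0, 1.0]])         # observe stand volume only (assumption)
        R = np.array([[400.0]])                 # observation error variance
        y = np.array([265.0])                   # e.g. a LiDAR-derived volume estimate

        # Forecast-ensemble covariance and Kalman gain.
        P_f = np.cov(X_f)
        K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)

        # Update each ensemble member against a perturbed observation.
        Y_pert = y[:, None] + rng.normal(0.0, np.sqrt(R[0, 0]), size=(n_obs, n_ens))
        X_a = X_f + K @ (Y_pert - H @ X_f)
        print(X_a.mean(axis=1))                 # analysis ensemble mean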

    Potential and limitations of the ISBSG dataset in enhancing software engineering research: A mapping review

    Context: The International Software Benchmarking Standards Group (ISBSG) maintains a software development repository with over 6000 software projects. This dataset makes it possible to estimate a project's size, effort, duration, and cost. Objective: The aim of this study was to determine how, and to what extent, ISBSG has been used by researchers from 2000, when the first papers were published, until June of 2012. Method: A systematic mapping review was used as the research method, applied to over 129 papers obtained after the filtering process. Results: The papers were published in 19 journals and 40 conferences. Thirty-five percent of the papers published between 2000 and 2011 have received at least one citation in journals, and only five papers have received six or more citations. The effort variable is the focus of 70.5% of the papers, 22.5% center their research on a variable other than effort, and 7% do not consider any target variable. Additionally, in as many as 70.5% of papers, effort estimation is the research topic, followed by dataset properties (36.4%). The most frequent methods are Regression (61.2%), Machine Learning (35.7%), and Estimation by Analogy (22.5%). ISBSG is used as the only support in 55% of the papers, while the remaining papers use complementary datasets. ISBSG Release 10 is used most frequently, with 32 references. Finally, some benefits and drawbacks of the usage of ISBSG have been highlighted. Conclusion: This work presents a snapshot of the existing usage of ISBSG in software development research. ISBSG offers a wealth of information regarding practices from a wide range of organizations, applications, and development types, which constitutes its main potential. However, a data preparation process is required before any analysis. Lastly, the potential of ISBSG to develop new research is also outlined. Fernández Diego, M.; González-Ladrón-De-Guevara, F. (2014). Potential and limitations of the ISBSG dataset in enhancing software engineering research: A mapping review. Information and Software Technology, 56(6), 527-544. doi:10.1016/j.infsof.2014.01.003

    Financial Development, Structure and Growth: New Data, Method and Results

    The existing weight of evidence suggests that financial structure (the classification of a financial system as bank-based versus market-based) is irrelevant for economic growth. This contradicts the common belief that the institutional structure of a financial system matters. We re-examine this issue in a Bayesian framework using a novel dataset covering 69 countries over 1989-2011. Our results are consistent with that belief - a market-based system is relevant - with sizable economic effects for high-income but not for middle- and low-income countries. Our findings provide a counterexample to the weight of evidence. We also identify a regime shift in 2008.
    JEL Classification Codes: G0, O4, O16
    http://www.grips.ac.jp/list/jp/facultyinfo/leon_gonzalez_roberto
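
    One simple way to express the income-group-dependent effect reported above is a Bayesian regression with a financial-structure by income-group interaction; the sketch below, in PyMC on simulated data, only illustrates that idea and is not the authors' model, dataset, or estimation method.

        # Illustrative Bayesian growth regression with a structure x income-group interaction.
        import numpy as np
        import pymc as pm

        rng = np.random.default_rng(3)
        n = 200
        market_based = rng.integers(0, 2, n)    # 1 = market-based financial structure (simulated)
        high_income = rng.integers(0, 2, n)     # 1 = high-income country (simulated)
        growth = 0.02 + 0.01 * market_based * high_income + rng.normal(0.0, 0.02, n)

        with pm.Model():
            b0 = pm.Normal("intercept", 0.0, 0.1)
            b_mkt = pm.Normal("market_based", 0.0, 0.1)
            b_int = pm.Normal("market_x_high_income", 0.0, 0.1)
            sigma = pm.HalfNormal("sigma", 0.05)
            mu = b0 + b_mkt * market_based + b_int * market_based * high_income
            pm.Normal("growth", mu, sigma, observed=growth)
            idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

        print(float(idata.posterior["market_x_high_income"].mean()))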