46 research outputs found

    Incorporating Environmental Variability Into Assessment and Management of American Lobster (Homarus americanus)

    Get PDF
    The American lobster (Homarus americanus) supports one of the most valuable fisheries in the United States. A growing body of literature recognizes the importance of environmental variables in regulating this species' biogeography and population dynamics. However, current lobster stock assessment and management do not explicitly consider the impact of environmental variables such as water temperature and treat spatiotemporal variability in the lobster's environment as random background noise. Furthermore, while climate-induced changes in marine ecosystems continue to impact the productivity of lobster fisheries, studies that model lobster response to altered environmental conditions associated with climate change are lacking. As such, evaluating changes in lobster biogeography and population dynamics, as well as explicitly incorporating quantified lobster responses to altered environmental conditions into the species' stock assessment, will be critical for effective lobster fisheries management in a changing environment. This dissertation research developed a modeling framework to assess environmental variability and incorporate it into the assessment and management of American lobster stocks in the Gulf of Maine, Georges Bank, and southern New England. The framework consists of: 1) a qualitative bioclimate envelope model to quantify the spatiotemporal variability in the availability of suitable lobster habitat; 2) a statistical climate-niche model to quantify the spatiotemporal variability of lobster distribution; and 3) a process-based, size-structured population assessment model to incorporate the effects of environmental variables such as water temperature on lobster population dynamics. The framework was used to predict climate-driven changes in lobster habitat suitability and distribution, and to determine whether incorporating environmental effects can better inform historical recruitment, especially in years when recruitment was very low or very high. The first component of the framework is a qualitative bioclimate envelope model that evaluates the spatiotemporal variability of suitable lobster habitat based on four environmental variables (bottom temperature, bottom salinity, depth, and bottom substrate type). The bioclimate envelope model was applied to lobsters in Long Island Sound and inshore Gulf of Maine waters. In Long Island Sound, an examination of the temporal change in annual median habitat suitability values identified possible time blocks when habitat conditions were extremely poor and revealed a statistically significant decreasing trend in the availability of suitable habitat for juveniles during spring from 1978 to 2012. In the Gulf of Maine, a statistically significant increasing trend in habitat suitability was observed for both sexes and both stages (juvenile and adult) during spring (April–June), but not during fall (September–November). The second component of the framework is a statistical niche model that quantifies the effects of environmental variables on lobster abundance and distribution. The statistical niche model was used to estimate the spatiotemporal variation of lobster shell disease in Long Island Sound, and to quantify environmental effects on season-, sex-, and size-specific lobster distributions in the Gulf of Maine.
In Long Island Sound, the statistical niche model found that the spatial distribution of shell disease prevalence was strongly influenced by interactive latitude and longitude effects, which possibly indicates a geographic origin of shell disease. In the Gulf of Maine, the statistical niche model indicated that the impacts of bottom temperature and salinity on lobster distribution were more pronounced during spring, and it predicted significantly higher lobster abundance under a warm climatology scenario. The third component of the framework is a size-structured population model that can incorporate environmental effects to inform recruitment dynamics. The size-structured population model was applied to the Gulf of Maine/Georges Bank lobster stock, where climate-driven habitat suitability for lobster recruits was used to inform the recruitment index. The performance of this assessment model was evaluated by comparing relevant assessment outputs such as recruitment, annual fishing mortality, and the magnitude of retrospective biases. The assessment model with an environment-explicit recruitment function estimated higher recruitment and lower fishing mortality in the early 2000s and late 2010s. Retrospective patterns were also reduced when the environmentally driven recruitment model was used. This dissertation research is novel in that it provides a comprehensive framework that can quantify the impacts of environmental variability on lobster biogeography and population dynamics at high spatial and temporal resolution. The modeling approaches developed in this study address the need to move beyond the assumption of an environment at equilibrium and demonstrate the importance of considering environmental variability in the assessment and management of the lobster fisheries. This dissertation is dedicated to increasing the breadth of knowledge about the dynamics of lobster populations and ecosystems and represents a novel first step towards sustainable management of this species given the expected changes in the Northwest Atlantic ecosystem.
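The bioclimate envelope component described above rates each location by how suitable its bottom temperature, salinity, depth, and substrate are for lobster. A minimal sketch of that idea in Python is given below, assuming (hypothetically) that each variable is first mapped to a 0–1 suitability index and the indices are then combined with a geometric mean; the breakpoint values and the combination rule are illustrative placeholders, not the dissertation's fitted curves.

```python
import numpy as np

# Hypothetical suitability curves: each maps an environmental variable to [0, 1].
# The breakpoints below are illustrative placeholders, not the dissertation's values.
def suitability_temperature(t_bottom):
    """Triangular suitability peaking at an assumed optimum of ~12 degC."""
    return np.clip(1.0 - np.abs(t_bottom - 12.0) / 8.0, 0.0, 1.0)

def suitability_salinity(s_bottom):
    """Assumed plateau above ~25 PSU, declining linearly below it."""
    return np.clip((s_bottom - 15.0) / 10.0, 0.0, 1.0)

def suitability_depth(depth_m):
    """Assumed preference for shallower water, vanishing beyond ~300 m."""
    return np.clip(1.0 - depth_m / 300.0, 0.0, 1.0)

def habitat_suitability(t_bottom, s_bottom, depth_m, substrate_index):
    """Combine per-variable indices into one HSI via a geometric mean.

    substrate_index is assumed to be a pre-scored value in [0, 1]
    (e.g. rocky or cobble bottoms scored higher than mud for lobster).
    """
    parts = np.array([
        suitability_temperature(t_bottom),
        suitability_salinity(s_bottom),
        suitability_depth(depth_m),
        substrate_index,
    ])
    return parts.prod() ** (1.0 / len(parts))

# Example: one grid cell in a hypothetical spring survey.
print(habitat_suitability(t_bottom=10.5, s_bottom=31.0, depth_m=80.0, substrate_index=0.8))
```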

    Sampling methods for solving Bayesian model updating problems: A tutorial

    Get PDF
    This tutorial paper reviews the use of advanced Monte Carlo sampling methods in the context of Bayesian model updating for engineering applications. Markov Chain Monte Carlo, Transitional Markov Chain Monte Carlo, and Sequential Monte Carlo methods are introduced, applied to different case studies, and their performance is compared. For each of these methods, numerical implementations and their settings are provided. Three case studies of increasing complexity and challenge are presented, showing the advantages and limitations of each of the sampling techniques under review. The first case study addresses parameter identification for a spring-mass system under a static load. The second case study presents a 2-dimensional bi-modal posterior distribution, and the aim is to observe the performance of each of these sampling techniques in sampling from such a distribution. Finally, the last case study presents the stochastic identification of the model parameters of a complex and non-linear numerical model based on experimental data. The case studies presented in this paper consider the recorded data set as a single piece of information, which is used to make inferences and estimations on time-invariant model parameters.
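As a concrete illustration of the first case study's setting, the sketch below updates the stiffness of a linear spring from noisy static-displacement measurements with a plain random-walk Metropolis-Hastings sampler, one of the MCMC variants reviewed in the tutorial. The load, noise level, prior bounds, and step size are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a static load F on a linear spring, displacement d = F / k.
# The "measured" data and noise level below are synthetic placeholders.
F = 10.0                    # applied load [N]
k_true = 2.5e3              # true stiffness [N/m], used only to simulate data
sigma = 2e-4                # assumed measurement noise std [m]
d_obs = F / k_true + sigma * rng.normal(size=20)   # synthetic displacement records

def log_prior(k):
    # Uniform prior on an assumed plausible stiffness range.
    return 0.0 if 1e3 < k < 5e3 else -np.inf

def log_likelihood(k):
    # Gaussian likelihood of the observed displacements given stiffness k.
    return -0.5 * np.sum(((d_obs - F / k) / sigma) ** 2)

def log_posterior(k):
    lp = log_prior(k)
    return lp + log_likelihood(k) if np.isfinite(lp) else -np.inf

# Random-walk Metropolis-Hastings over the stiffness parameter.
n_steps, step = 20_000, 50.0
k = 2.0e3
samples = []
for _ in range(n_steps):
    k_prop = k + step * rng.normal()
    if np.log(rng.uniform()) < log_posterior(k_prop) - log_posterior(k):
        k = k_prop
    samples.append(k)

samples = np.array(samples[5_000:])   # discard burn-in
print(f"posterior mean k ~ {samples.mean():.1f} N/m, std ~ {samples.std():.1f}")
```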

    A multiscale strategy for fouling prediction and mitigation in gas turbines

    Get PDF
    Gas turbines are one of the primary sources of power for both aerospace and land-based applications. Precisely for this reason, they are often forced to operate in harsh environmental conditions, which involve the ingestion of particles by the engine. The main implications of this problem are often underestimated. The particulate in the airflow ingested by the machine can deposit on or erode its internal surfaces, altering their aerodynamic geometry and entailing performance degradation and, possibly, a reduction in engine life. This issue affects both the compressor and the turbine sections and can occur in either land-based or aeronautical turbines. For the former, the problem can be mitigated (but not eliminated) by installing filtration systems. In the aerospace field, filtration systems cannot be used. Volcanic eruptions and sand or dust storms can send particulate to aircraft cruising altitudes. Aircraft operating in remote locations or at low altitudes can also be subject to particle ingestion, especially in desert environments. The aim of this work is to propose different methodologies capable of mitigating the effects of fouling or predicting the performance degradation that it generates. For this purpose, both the hot and cold engine sections are considered. Concerning the turbine section, new design guidelines are presented. This is because, for this specific component, the time scales of failure events due to hot deposition can be of the order of minutes, which makes any predictive model inapplicable. In this respect, design optimization techniques were applied to find the HPT vane geometry that is least sensitive to fouling phenomena. After that, machine learning methods were adopted to obtain a design map that can be useful in the first steps of the design phase. Moreover, a numerical uncertainty quantification analysis demonstrated that a deterministic optimization is not sufficient to face highly aleatory phenomena such as fouling, which suggests the use of robust design techniques to address this issue. On the other hand, with respect to the compressor section, the research was mainly focused on building a predictive maintenance tool. This is because the time scales of failure events due to cold deposition are longer than those for the hot section, hence the main challenge for this component is the optimization of the washing schedule. There are several studies in the literature focused on this issue, but almost all of them are data-based rather than physics-based. The innovative strategy proposed here is a mixture of physics-based and data-based methodologies. In particular, a reduced-order model has been developed to predict the behaviour of the whole engine as the degradation proceeds. For this purpose, a gas path code that uses the components' characteristic maps has been created to simulate the gas turbine. A map variation technique has been used to take into account the fouling effects on each engine component. In particular, fouling coefficients have been derived as functions of the engine architecture, its operating conditions, and the contaminant characteristics. For this purpose, both experimental and computational results have been used; for the latter, efforts have been made to develop a new numerical deposition/detachment model.
Gas turbines are one of the main sources of power for both aeronautical and land-based applications. Precisely for this reason, they are often forced to operate in far-from-clean environments, which leads to the ingestion of solid contaminants by the engine. The main implications of this problem are often underestimated. The solid particles in the airflow ingested by the engine during operation can deposit on or erode the internal surfaces of the machine, altering its aerodynamics and thus degrading performance and, very likely, shortening its useful life. This problem affects both the compressor and the turbine sections, and it arises in land-based as well as aeronautical applications. For the former, the issue can be mitigated (but not eliminated) by installing filtration systems at the machine inlet. For aeronautical applications, filtration systems cannot be used. This implies that particulate present at high altitudes, for instance due to catastrophic events such as volcanic eruptions, or at low altitudes, as in desert environments, enters the gas turbine freely. The main aim of this thesis is to propose different methodologies to mitigate the effects of fouling or to predict the degradation it causes in gas turbines. To this end, both the compressor and the turbine sections have been considered. Regarding the turbine section, new design guidelines are presented, aimed at finding the geometry that is as insensitive as possible to fouling. The results obtained are then processed with machine learning techniques, yielding a design map that can be useful in the early stages of the design of these components. Moreover, since the analysis conducted up to this point is deterministic, the main sources of uncertainty are analysed using uncertainty quantification techniques. This shows that a deterministic analysis is too simplistic, and that it is advisable to move towards robust design to tackle this class of problems. On the other hand, regarding the compressor section, the research focused mainly on the construction of a predictive tool, because the time scale of degradation due to "cold" deposition is much longer than that of the "hot" section. The strategy proposed in this thesis is a combination of physics-based and data-driven models. In particular, a reduced-order model has been developed to predict the behaviour of an engine subject to degradation from particle ingestion over an entire flight mission. To do so, a so-called gas-path code has been created, which models the individual components of the machine through their characteristic maps. These maps are modified, following deposition, through appropriate degradation coefficients. Such coefficients must be properly estimated to obtain a correct prediction of events, and to this end a strategy is proposed that uses both experimental and computational methods to generate an algorithm whose purpose is to provide these coefficients as output.
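The map-variation idea described above, where clean component characteristic maps are rescaled by fouling coefficients that grow with exposure, can be sketched roughly as follows. The analytic map, the exponential degradation law, and the per-hour rates are all assumptions made for illustration; in the framework above the coefficients would come from the experimental and CFD deposition results.

```python
import numpy as np

def clean_compressor_map(corrected_speed, beta):
    """Stand-in for a clean-engine characteristic map lookup.

    Returns (corrected mass flow, isentropic efficiency) for a given corrected
    speed and beta-line position. The analytic form below is a placeholder for
    an interpolated manufacturer map.
    """
    flow = 20.0 * corrected_speed * (0.8 + 0.2 * beta)
    eff = 0.85 - 0.1 * (beta - 0.5) ** 2
    return flow, eff

def fouled_compressor_map(corrected_speed, beta, hours, flow_rate=-1.5e-3, eff_rate=-8e-4):
    """Apply map-variation (scaling) coefficients that grow with operating hours.

    flow_rate and eff_rate are illustrative per-hour degradation rates; in the
    framework described above they would be functions of engine architecture,
    operating condition, and contaminant properties.
    """
    flow, eff = clean_compressor_map(corrected_speed, beta)
    k_flow = np.exp(flow_rate * hours)   # multiplicative fouling coefficient on flow capacity
    k_eff = np.exp(eff_rate * hours)     # multiplicative fouling coefficient on efficiency
    return flow * k_flow, eff * k_eff

# Example: same operating point, clean vs. after 200 h without washing.
print(clean_compressor_map(0.95, 0.6))
print(fouled_compressor_map(0.95, 0.6, hours=200.0))
```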

    Advances in approximate Bayesian computation and trans-dimensional sampling methodology

    Full text link
    Bayesian statistical models continue to grow in complexity, driven in part by a few key factors: the massive computational resources now available to statisticians; the substantial gains made in sampling methodology and algorithms such as Markov chain Monte Carlo (MCMC), trans-dimensional MCMC (TDMCMC), sequential Monte Carlo (SMC), adaptive algorithms, stochastic approximation methods, and approximate Bayesian computation (ABC); and the development of more realistic models for real-world phenomena, as demonstrated in this thesis for financial models and telecommunications engineering. Sophisticated statistical models are increasingly proposed for practical solutions to real-world problems in order to better capture salient features of increasingly complex data. With sophistication comes a parallel requirement for more advanced and automated statistical computational methodologies. The key focus of this thesis revolves around innovation related to the following three significant Bayesian research questions. 1. How can one develop practically useful Bayesian models and corresponding computationally efficient sampling methodology when the likelihood model is intractable? 2. How can one develop methodology to automate Markov chain Monte Carlo sampling approaches so as to efficiently explore the support of a posterior distribution defined across multiple Bayesian statistical models? 3. How can these sophisticated Bayesian modelling frameworks and sampling methodologies be utilized to solve practically relevant and important problems in the research fields of financial risk modeling and telecommunications engineering? This thesis is split into three bodies of work represented in three parts. Each part contains journal papers with novel statistical models and sampling methodological developments. The coherent link between the parts is that the novel sampling methodologies developed in Part I are utilized in Part II and Part III. The papers contained in each part make progress at addressing the core research questions posed. Part I of this thesis presents generally applicable statistical sampling methodologies that are utilized and extended in the subsequent two parts. In particular, it presents novel developments in statistical methodology pertaining to likelihood-free (ABC) and TDMCMC methodology. The TDMCMC methodology focuses on several aspects of automation in the between-model proposal construction, including approximation of the optimal between-model proposal kernel via a conditional path sampling density estimator. This methodology is then explored for several novel Bayesian model selection applications, including cointegrated vector autoregression (CVAR) models and mixture models in which there is an unknown number of mixture components. The second area relates to the development of ABC methodology, with particular focus on SMC Samplers methodology in an ABC context via Partial Rejection Control (PRC). In addition to novel algorithmic development, key theoretical properties are also studied for the classes of algorithms developed. This methodology is then developed for a highly challenging and practically significant application relating to multivariate Bayesian α-stable models. Part II then focuses on novel statistical model development in the areas of financial risk and non-life insurance claims reserving.
In each of the papers in this part the focus is on two aspects: foremost, the development of novel statistical models to improve the modeling of risk and insurance; and then the associated problem of how to fit and sample from such statistical models efficiently. In particular, novel statistical models are developed for Operational Risk (OpRisk) under a Loss Distributional Approach (LDA) and for claims reserving in actuarial non-life insurance modelling. In each case the models developed include an additional level of complexity, which adds flexibility to the model in order to better capture salient features observed in real data. The additional complexity comes at the cost that standard fitting and sampling methodologies are generally not applicable; as a result, one is required to develop and apply the methodology from Part I. Part III focuses on novel statistical model development in the area of statistical signal processing for wireless communications engineering. Statistical models are developed or extended for two general classes of wireless communications problems: the first relates to detection of transmitted symbols and joint channel estimation in Multiple Input Multiple Output (MIMO) systems coupled with Orthogonal Frequency Division Multiplexing (OFDM); the second relates to co-operative wireless communications relay systems in which the key focus is on detection of transmitted symbols. Both areas require the advanced sampling methodology developed in Part I to find solutions to these real-world engineering problems.
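The likelihood-free setting at the heart of the ABC work above can be illustrated with the most basic variant of the idea, plain ABC rejection, shown below on a toy Gaussian-mean problem. The thesis itself develops far more sophisticated SMC Samplers with Partial Rejection Control and α-stable applications; the prior, summary statistic, tolerance, and data here are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Likelihood-free setting: we can simulate data given theta but cannot
# evaluate the likelihood. Everything below is an illustrative toy problem,
# not the alpha-stable application from the thesis.
y_obs = rng.normal(loc=3.0, scale=1.0, size=100)   # "observed" data

def simulate(theta, n=100):
    return rng.normal(loc=theta, scale=1.0, size=n)

def summary(y):
    # Summary statistic used in place of the full data set.
    return y.mean()

def abc_rejection(n_samples=500, epsilon=0.1):
    """Plain ABC rejection: keep prior draws whose simulated summaries
    fall within epsilon of the observed summary."""
    accepted = []
    s_obs = summary(y_obs)
    while len(accepted) < n_samples:
        theta = rng.uniform(-10.0, 10.0)          # draw from a flat prior
        if abs(summary(simulate(theta)) - s_obs) < epsilon:
            accepted.append(theta)
    return np.array(accepted)

posterior = abc_rejection()
print(f"ABC posterior mean ~ {posterior.mean():.2f}, sd ~ {posterior.std():.2f}")
```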

    Statistical Modelling

    Get PDF
    The book collects the proceedings of the 19th International Workshop on Statistical Modelling, held in Florence in July 2004. Statistical modelling is an important cornerstone in many scientific disciplines, and the workshop has provided a rich environment for the cross-fertilization of ideas from different disciplines. The volume consists of four invited lectures, 48 contributed papers, and 47 posters. The contributions are arranged in sessions: Statistical Modelling; Statistical Modelling in Genomics; Semi-parametric Regression Models; Generalized Linear Mixed Models; Correlated Data Modelling; Missing Data, Measurement Error and Survival Analysis; Spatial Data Modelling; and Time Series and Econometrics.

    Probabilistic Methods for Model Validation

    Get PDF
    This dissertation develops a probabilistic method for validation and verification (V&V) of uncertain nonlinear systems. The existing systems-control literature on model and controller V&V either deals with linear systems with norm-bounded uncertainties, or considers nonlinear systems in set-based and moment-based frameworks. These existing methods address model invalidation or falsification, rather than assessing the quality of a model with respect to measured data. In this dissertation, an axiomatic framework for model validation is proposed in a probabilistically relaxed sense that, instead of simply invalidating a model, seeks to quantify its "degree of validation". To develop this framework, novel algorithms for uncertainty propagation are proposed for both deterministic and stochastic nonlinear systems in continuous time. For the deterministic flow, we compute the time-varying joint probability density functions over the state space by solving the Liouville equation via the method of characteristics. For the stochastic flow, we propose an approximation algorithm that combines the method-of-characteristics solution of the Liouville equation with the Karhunen-Loève expansion of the process noise, thus enabling an indirect solution of the Fokker-Planck equation governing the evolution of the joint probability density functions. The efficacy of these algorithms is demonstrated for risk assessment in Mars entry-descent-landing and for nonlinear estimation. Next, the V&V problem is formulated in terms of Monge-Kantorovich optimal transport, naturally giving rise to a metric, the Wasserstein metric, on the space of probability densities. It is shown that the resulting computation leads to solving a linear program at each time of measurement availability, and computational complexity results for the same are derived. Probabilistic guarantees, in the average and worst-case senses, are given for the validation oracle resulting from the proposed method. The framework is demonstrated for nonlinear robustness verification of F-16 flight controllers subject to probabilistic uncertainties. Frequency-domain interpretations of the proposed framework are derived for linear systems, and its connections with existing nonlinear model validation methods are pointed out. In particular, we show that the asymptotic Wasserstein gap between two single-output linear time-invariant systems excited by Gaussian white noise is the difference between their average gains, up to a scaling by the strength of the input noise. A geometric interpretation of this result allows us to propose an intrinsic normalization of the Wasserstein gap, which in turn allows us to compare it with classical systems-theoretic metrics like the ν-gap. Next, it is shown that the optimal transport map can be used to automatically refine the model. This model refinement formulation leads to solving a non-smooth convex optimization problem. Examples are given to demonstrate how proximal operator splitting based computation enables numerically solving the same. This method is applied to finite-time feedback control of probability density functions and to data-driven modeling of dynamical systems.
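The central computation described above, measuring the gap between a model-predicted density and a data-inferred density as a Wasserstein distance obtained from a linear program, can be sketched for discrete densities as follows. The support points, masses, and squared-distance cost are illustrative choices for the example, not the dissertation's F-16 or Mars-entry setups.

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein_lp(x_model, p_model, x_data, p_data):
    """Discrete Monge-Kantorovich problem posed as a linear program.

    x_model, x_data : support points (scalar states here)
    p_model, p_data : probability masses summing to one
    Returns the 2-Wasserstein distance between the two discrete densities.
    """
    n, m = len(p_model), len(p_data)
    # Squared-distance cost matrix, flattened row-major for linprog.
    cost = (x_model[:, None] - x_data[None, :]) ** 2
    c = cost.ravel()

    # Marginal constraints: rows of the transport plan sum to p_model,
    # columns sum to p_data.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([p_model, p_data])

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return np.sqrt(res.fun)

# Example: model-predicted vs. measured state densities on small grids.
x_m = np.array([0.0, 1.0, 2.0, 3.0])
p_m = np.array([0.1, 0.4, 0.4, 0.1])
x_d = np.array([0.5, 1.5, 2.5])
p_d = np.array([0.2, 0.5, 0.3])
print(f"W2 gap between model and data: {wasserstein_lp(x_m, p_m, x_d, p_d):.3f}")
```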

    Air quality and resource development: a risk assessment in the Hunter Region of Australia

    Get PDF