Uncertainty analysis of 100-year flood maps under climate change scenarios
Floods are natural hazards that throughout history have had, and still have, major adverse impacts on people's lives, the economy, and the environment. One useful tool for flood management is the flood map, which identifies flood-prone areas and can be used by insurance companies, local authorities, and land planners for rescue planning and for taking appropriate action against flood hazards.
Flood maps are typically developed with flood inundation modeling tools such as 2D hydrodynamic models. However, flood maps are often generated from a single deterministic model outcome, without considering the uncertainty that arises from different sources and propagates through the modeling process.
Moreover, the increasing number of flood events in recent decades, combined with the effects of global climate change, calls for accurate and safe flood maps in which uncertainty has been accounted for.
Therefore, in this thesis the uncertainty of 100-year flood maps under three scenarios (present, and future RCP4.5 and RCP8.5) is assessed through intensive Monte Carlo simulations. The uncertainty introduced by the model input data, namely the roughness coefficient, the runoff coefficient, and the precipitation intensity (which itself incorporates three sources of uncertainty: the RCP scenario, the climate model, and the probability distribution function), is propagated through a surrogate hydrodynamic/hydrologic model developed from a physical 2D model. The results obtained from this study challenge the use of deterministic flood maps and recommend probabilistic approaches for developing safe and reliable flood maps. Furthermore, they show that the main source of uncertainty is the precipitation, and in particular the selected probability distribution rather than the selected RCP scenario or climate model.
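As a rough illustration of the kind of Monte Carlo uncertainty propagation described in this abstract, the sketch below draws the three uncertain inputs from assumed distributions and pushes them through a toy surrogate model. The function `surrogate_flood_depth`, the parameter ranges, and the chosen distributions are all hypothetical stand-ins, not the thesis's actual surrogate or data.

```python
import numpy as np

rng = np.random.default_rng(42)

def surrogate_flood_depth(roughness, runoff, precip):
    # Hypothetical cheap surrogate standing in for the 2D hydrodynamic model:
    # depth grows with rainfall and runoff, and falls with surface roughness.
    return 0.8 * precip * runoff / (1.0 + roughness)

n = 10_000
roughness = rng.uniform(0.03, 0.10, n)  # Manning's-n-like range (assumed)
runoff = rng.beta(5, 2, n)              # runoff coefficient in (0, 1) (assumed)
precip = rng.gumbel(60.0, 15.0, n)      # 100-year rainfall intensity, mm/h (assumed)

# Propagate the sampled inputs and summarise the output distribution
# instead of reporting a single deterministic depth.
depths = surrogate_flood_depth(roughness, runoff, precip)
p5, p50, p95 = np.percentile(depths, [5, 50, 95])
print(f"median depth {p50:.1f}, 90% interval [{p5:.1f}, {p95:.1f}]")
```

The point of the sketch is the workflow, not the numbers: a probabilistic flood map would report a quantile band per cell rather than one value.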
NASA SBIR abstracts of 1990 phase 1 projects
The research objectives of the 280 projects placed under contract in the National Aeronautics and Space Administration (NASA) 1990 Small Business Innovation Research (SBIR) Phase 1 program are described. The basic document consists of edited, non-proprietary abstracts of the winning proposals submitted by small businesses in response to NASA's 1990 SBIR Phase 1 Program Solicitation. The abstracts are presented under the 15 technical topics within which Phase 1 proposals were solicited. Each project was assigned a sequential identifying number from 001 to 280, in order of its appearance in the body of the report. The document also includes appendixes that provide additional information about the SBIR program and permit cross-referencing of the 1990 Phase 1 projects by company name, location by state, principal investigator, NASA field center responsible for management of each project, and NASA contract number.
Dynamics of Macrosystems; Proceedings of a Workshop, September 3-7, 1984
There is an increasing awareness of the important and pervasive role that instability and random, chaotic motion play in the dynamics of macrosystems. Further research in the field should aim at providing useful tools, and therefore the motivation should come from important questions arising in specific macrosystems. Such systems include biochemical networks, genetic mechanisms, biological communities, neural networks, cognitive processes, and economic structures. This list may seem heterogeneous, but there are similarities between evolution in the different fields. It is not surprising that mathematical methods devised in one field can also be used to describe the dynamics of another.
IIASA is attempting to make progress in this direction. With this aim in view, the workshop was held at Laxenburg over the period 3-7 September 1984. These Proceedings cover a broad canvas, ranging from specific biological and economic problems to general aspects of dynamical systems and evolutionary theory.
Common metrics for cellular automata models of complex systems
The creation and use of models are critical not only to the scientific process, but also to life in general. Selected features of a system are abstracted into a model that can then be used to gain knowledge of the workings of the observed system and even anticipate its future behaviour. A key feature of the modelling process is the identification of commonality. This allows previous experience of one model to be used in a new or unfamiliar situation. This recognition of commonality between models allows standards to be formed, especially in areas such as measurement. How everyday physical objects are measured is built on an ingrained acceptance of their underlying commonality.
Complex systems, often with their layers of interwoven interactions, are harder to model and, therefore, to measure and predict. Indeed, the inability to compute and model a complex system, except at a localised and temporal level, can be seen as one of its defining attributes. Establishing commonality between complex systems provides the opportunity to find common metrics. This work looks at two-dimensional cellular automata, which are widely used as a simple modelling tool for a variety of systems. This has led to a very diverse range of systems using a common modelling environment based on a lattice of cells, providing a possible common link between systems that could be exploited to find a common metric yielding information on a diverse range of systems. An enhancement of a categorisation of cellular automata model types used for biological studies is proposed and expanded to include other disciplines. The thesis outlines a new metric, the C-Value, created by the author. This metric, based on the connectedness of the active elements on the cellular automata grid, is then tested with three models built to represent three of the four categories of cellular automata model types. The results show that the C-Value is a good indicator of the gathering of active cells on a grid into a single, compact cluster and, when correlated with the mean density of active cells on the lattice, of whether their distribution is random. This provides a range that defines the disordered and ordered states of a grid. The use of the C-Value in a localised context shows potential for identifying patterns of clusters on the grid.
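A connectedness-based grid metric in the spirit of the one described above can be sketched as follows. `cluster_stats` is a simple illustrative measure (4-connected cluster count versus active-cell count on a 2D lattice) and is not the thesis's actual C-Value.

```python
import numpy as np

def cluster_stats(grid):
    """Count 4-connected clusters of active cells on a 2D lattice.

    A crude connectedness measure: fewer clusters relative to the
    number of active cells means the active cells have gathered
    into compact groups rather than being scattered.
    """
    grid = np.asarray(grid, dtype=bool)
    seen = np.zeros_like(grid)
    rows, cols = grid.shape
    clusters = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] and not seen[r, c]:
                clusters += 1
                # Flood-fill this cluster so its cells are not recounted.
                stack = [(r, c)]
                seen[r, c] = True
                while stack:
                    i, j = stack.pop()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols \
                                and grid[ni, nj] and not seen[ni, nj]:
                            seen[ni, nj] = True
                            stack.append((ni, nj))
    return clusters, int(grid.sum())

compact = [[1, 1, 0, 0],
           [1, 1, 0, 0],
           [0, 0, 0, 0],
           [0, 0, 0, 0]]
scattered = [[1, 0, 1, 0],
             [0, 0, 0, 0],
             [1, 0, 1, 0],
             [0, 0, 0, 0]]
print(cluster_stats(compact))    # one compact cluster of four cells: (1, 4)
print(cluster_stats(scattered))  # four isolated active cells: (4, 4)
```

The same cell count gives very different cluster counts for the two grids, which is the kind of signal a connectedness metric exploits to separate compact gatherings from random distributions.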
Collective analysis of multiple high-throughput gene expression datasets
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.

Modern technologies have resulted in the production of numerous high-throughput biological datasets. However, the pace of development of capable computational methods has not kept up with the pace of generation of new high-throughput datasets. Amongst the most popular biological high-throughput datasets are gene expression datasets (e.g. microarray datasets). This work targets this aspect by proposing a suite of computational methods that can analyse multiple gene expression datasets collectively. The focal method in this suite is the unification of clustering results from multiple datasets using external specifications (UNCLES). This method applies clustering to multiple heterogeneous datasets which measure the expression of the same set of genes separately, and then combines the resulting partitions in accordance with one of two types of external specifications: type A identifies the subsets of genes that are consistently co-expressed in all of the given datasets, while type B identifies the subsets of genes that are consistently co-expressed in one subset of datasets while being poorly co-expressed in another. This broadens the types of questions which can be addressed by computational methods, because existing clustering, consensus clustering, and biclustering methods are inapplicable to the aforementioned objectives. Moreover, in order to assist in setting some of the parameters required by UNCLES, the M-N scatter plots technique is proposed. These methods, and earlier, less mature versions of them, have been validated and applied to numerous real datasets from the biological contexts of budding yeast, bacteria, human red blood cells, and malaria. In collaboration with biologists, these applications have led to various biological insights.
In yeast, the role of the poorly understood gene CMR1 in the cell cycle has been further elucidated. A novel subset of poorly understood yeast genes has also been discovered, with an expression profile consistently negatively correlated with the well-known ribosome biogenesis genes. Bacterial data analysis has identified two clusters of negatively correlated genes. Analysis of data from human red blood cells has produced some hypotheses regarding the regulation of the pathways producing such cells, while malarial data analysis is still at a preliminary stage. Taken together, this thesis provides an original integrative suite of computational methods which scrutinise multiple gene expression datasets collectively to address previously unresolved questions, and provides the results and findings of many applications of these methods to real biological datasets from multiple contexts.

Funded by the National Institute for Health Research (NIHR) and the Brunel College of Engineering, Design and Physical Science.
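The type A idea described above (consistent co-expression across all datasets) can be illustrated with a much-simplified correlation check. The function `consistently_coexpressed`, the threshold, and the toy data are hypothetical; they stand in for, rather than implement, UNCLES.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_corr(expr):
    # expr: genes x samples expression matrix -> gene-gene correlation matrix.
    return np.corrcoef(expr)

def consistently_coexpressed(datasets, gene_a, gene_b, threshold=0.7):
    """Type-A-style check (a simplification, not UNCLES itself):
    are two genes strongly correlated in *every* dataset?"""
    return all(pairwise_corr(d)[gene_a, gene_b] >= threshold for d in datasets)

# Toy data: 5 genes measured in 3 heterogeneous datasets with different
# sample counts. Genes 0 and 1 share a profile everywhere; the rest are noise.
datasets = []
for n_samples in (8, 12, 10):
    base = rng.normal(size=n_samples)
    expr = np.vstack([
        base + 0.1 * rng.normal(size=n_samples),   # gene 0
        base + 0.1 * rng.normal(size=n_samples),   # gene 1
        rng.normal(size=n_samples),                # gene 2
        rng.normal(size=n_samples),                # gene 3
        rng.normal(size=n_samples),                # gene 4
    ])
    datasets.append(expr)

print(consistently_coexpressed(datasets, 0, 1))  # co-expressed in all datasets: True
print(consistently_coexpressed(datasets, 0, 4))  # noise gene fails the check
```

Note that this check operates on one gene pair at a time; UNCLES instead combines whole clustering partitions, which is what makes it applicable where plain clustering or biclustering is not.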
The Sum of the Parts: Large-Scale Modeling in Systems Biology
Systems biologists often distance themselves from reductionist approaches and formulate their aim as understanding living systems “as a whole.” Yet, it is often unclear what kind of reductionism they have in mind, and in what sense their methodologies would offer a superior approach. To address these questions, we distinguish between two types of reductionism which we call “modular reductionism” and “bottom-up reductionism.” Much knowledge in molecular biology has been gained by decomposing living systems into functional modules or through detailed studies of molecular processes. We ask whether systems biology provides novel ways to recompose these findings in the context of the system as a whole via computational simulations. As an example of computational integration of modules, we analyze the first whole-cell model of the bacterium M. genitalium. Secondly, we examine the attempt to recompose processes across different spatial scales via multi-scale cardiac models. Although these models rely on a number of idealizations and simplifying assumptions as well, we argue that they provide insight into the limitations of reductionist approaches. Whole-cell models can be used to discover properties arising at the interfaces of dynamically coupled processes within a biological system, thereby making more apparent what is lost through decomposition. Similarly, multi-scale modeling highlights the relevance of macroscale parameters and models and challenges the view that living systems can be understood “bottom-up.” Specifically, we point out that system-level properties constrain lower-scale processes. Thus, large-scale modeling reveals how living systems at the same time are more and less than the sum of the parts.
Part of a special issue, Ontologies of Living Beings, guest-edited by A. M. Ferner and Thomas Prade.
Extracting circadian clock information from a single time point assay
A working internal circadian clock allows a healthy organism to keep time in order to anticipate transitions between night and day, allowing the temporal optimisation and control of internal processes. The internal circadian clock is regulated by a set of core genes that form a tightly coupled oscillator system. These oscillators are autonomous and robust to noise, but can be slowly reset by external signals that are processed by the master clock in the brain.
In this thesis we explore the robustness of a tightly coupled oscillator model of the circadian clock, and show that its deterministic and stochastic forms are both significantly robust to noise. Using a simple linear algebra approach to rhythmicity detection, we show that a small set of circadian clock genes are rhythmic and synchronised in mouse tissues, and rhythmic and synchronised across a group of human individuals. These sets of tightly regulated, robust oscillators are the genes that we use to define the expected behaviour of a healthy circadian clock. We use these “time fingerprints” to design a model, dubbed “Time-Teller”, that can be used to tell the time from single time point samples of the mouse or human transcriptome.
The dysfunction of the molecular circadian clock is implicated in several major diseases, and there is significant evidence that disrupted circadian rhythm is a hallmark of many cancers. Convincing results showing the dysfunction of the circadian clock in solid tumours are lacking, due to the difficulties of studying circadian rhythms in tumours within living mammals. Instead of developing biological assays to study this, we take advantage of the design of Time-Teller, using its underlying features to build a metric, Θ, that indicates dysfunction of the circadian clock. We use Time-Teller to explore the clock function of samples from existing, publicly available tumour transcriptome data.
Although multiple algorithms have been published with the aim of “time-telling” using transcriptome data, none of them has been reported to be able to tell the time of single samples, or to provide metrics of clock dysfunction in single samples. Time-Teller is presented in this thesis as an algorithm that both tells the time of a single time-point sample and provides a measure of clock function for that sample.
In a case study, we use the clock function metric, Θ, as a retrospective prognostic marker for breast cancer using data from a completed clinical trial. Θ is shown to correlate with many prognostic markers of breast cancer, and we show how it could also be a predictive marker for treatment efficacy and patient survival.
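The core idea of telling the time from a single transcriptome sample can be caricatured as matching one observation against reference rhythmic profiles. The gene phases, `expected_profile`, and `tell_time` below are hypothetical illustrations, not the actual Time-Teller algorithm.

```python
import numpy as np

# Reference "fingerprint": expected expression of three clock genes over 24 h,
# each a cosine with a known peak phase (hypothetical genes and phases).
phases = np.array([0.0, 8.0, 16.0])  # peak times in hours

def expected_profile(t):
    # Expected expression of the reference genes at clock time t (hours).
    return np.cos(2 * np.pi * (t - phases) / 24.0)

def tell_time(sample, grid=np.arange(0, 24, 0.1)):
    """Pick the clock time whose expected profile best matches one sample."""
    errors = [np.sum((expected_profile(t) - sample) ** 2) for t in grid]
    return grid[int(np.argmin(errors))]

# One noisy single-time-point measurement taken at (unknown) true time 5.0 h.
true_t = 5.0
sample = expected_profile(true_t) + 0.05 * np.random.default_rng(1).normal(size=3)
print(tell_time(sample))  # recovers a time close to 5.0
```

Because the three reference cosines have distinct phases, the triple of expected values is unique for each time of day, so a single well-measured sample suffices; a clock-dysfunction score could then be derived from how poorly the best match fits.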