Efficient treatment and quantification of uncertainty in probabilistic seismic hazard and risk analysis
The main goals of this thesis are the development of a computationally efficient framework for stochastic treatment of various important uncertainties in probabilistic seismic hazard and risk assessment, its application to a newly created seismic risk model of Indonesia, and the analysis and quantification of the impact of these uncertainties on the distribution of estimated seismic losses for a large number of synthetic portfolios modeled after real-world counterparts.
The treatment and quantification of uncertainty in probabilistic seismic hazard and risk analysis has already been identified as an area that could benefit from increased research attention.
Furthermore, it has become evident that the lack of research considering the development and application of suitable sampling schemes to increase the computational efficiency of the stochastic simulation represents a bottleneck for applications where model runtime is an important factor.
In this research study, the development and state of the art of probabilistic seismic hazard and risk analysis are first reviewed, and opportunities for improved treatment of uncertainties are identified.
A newly developed framework for the stochastic treatment of portfolio location uncertainty as well as ground motion and damage uncertainty is presented.
The framework is then optimized with respect to computational efficiency.
Amongst other techniques, a novel variance reduction scheme for portfolio location uncertainty is developed.
Furthermore, in this thesis, several well-known variance reduction schemes, such as Quasi-Monte Carlo, Latin Hypercube Sampling, and MISER (locally adaptive recursive stratified sampling), are applied for the first time to seismic hazard and risk assessment.
The effectiveness and applicability of each scheme is analyzed.
Several chapters of this monograph describe the theory, implementation and some exemplary applications of the framework.
To conduct these exemplary applications, a seismic hazard model for Indonesia was developed and used for the analysis and quantification of loss uncertainty for a large collection of synthetic portfolios.
As part of this work, the new framework was integrated into a probabilistic seismic hazard and risk assessment software suite developed and used by Munich Reinsurance Group.
Furthermore, those parts of the framework that deal with location and damage uncertainties are also used by the flood and storm natural catastrophe model development groups at Munich Reinsurance for their risk models.
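The variance reduction idea behind schemes such as Latin Hypercube Sampling can be illustrated with a minimal sketch (a toy model, not the thesis framework; the "loss" function and all parameters below are invented for illustration): each input dimension is divided into equal-probability strata, and every stratum is sampled exactly once.

```python
import random

def latin_hypercube(n, d, rng=random):
    """One Latin Hypercube sample of n points in d dimensions on [0,1)^d."""
    cols = []
    for _ in range(d):
        # one point per stratum [k/n, (k+1)/n), strata order shuffled
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(k + rng.random()) / n for k in perm])
    return list(zip(*cols))  # n points, each a d-tuple

def estimate_mean(points, f):
    return sum(f(p) for p in points) / len(points)

# toy "loss" function: loss grows with a ground-motion factor u[0]
# and a damage factor u[1] (both uniform on [0,1) here, purely for
# illustration; the true mean of u[0]*u[1] is 0.25)
loss = lambda u: u[0] * u[1]

random.seed(1)
pts = latin_hypercube(1000, 2)
est = estimate_mean(pts, loss)
```

Because each marginal is evenly stratified, the estimator variance contributed by the additive part of the integrand vanishes, which is why such schemes can cut the number of samples needed in stochastic hazard and risk simulations.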
Particle Engineering: New Strategies for Increasing the Efficiency of Inhalation Powders
Administration of drugs via the respiratory tract lacks efficiency despite the variety of inhalers and formulations available. In the case of dry powder inhaler (DPI) formulations, scientists attribute this lack of efficiency to the variety of properties of the powder particles and the devices. Matching these interdependent properties is complex, making it difficult to determine the effect of any single influencing factor. With the aim of resolving this complexity, this thesis presents novel strategies for the development of DPI formulations. For the purpose of controlling the properties of individual particles and investigating them in isolation from each other, this thesis presents the use of Additive Manufacturing (AM). As a technique that enables the production of customised objects, AM is shown to allow applications at the DPI particle level and at a level between device and formulation. The latter level of application involves the manufacturing of structurally complex objects that are inserted in inhaler devices as freely levitating dispersing aids (DAs). In addition, this thesis explores manufacturing of tailored microstructures as a novel particle engineering approach. AM is presented as a technical solution for controlling and adjusting particle size, design, and chemical composition, which can be used to decipher the complexities of developing enhanced DPI formulations. For particle design thinking and engineering processes, this thesis presents the use of Numerical Simulation (NS) as a complementary tool. Applying Discrete Element Method (DEM) modelling allows for simulating the formation and dispersion of drug-carrier agglomerates. Simulating corresponding patterns provides insights into the influence of particle morphology on carrier loading and particle detachment. In summary, this thesis presents AM and NS as novel strategies to engineer and evaluate particles for use in DPI formulations.
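The kind of contact mechanics a DEM simulation resolves can be loosely illustrated with a one-dimensional two-particle collision using a linear spring-dashpot contact law (all masses, sizes, and stiffnesses below are invented round numbers, not values from this work; real drug-carrier agglomerate simulations are three-dimensional, many-particle, and include adhesion models):

```python
def dem_collision(m=1e-9, r=1e-6, k=100.0, c=1e-6, v0=1.0,
                  dt=1e-9, steps=20000):
    """Particle 1 moves right at v0 towards stationary particle 2;
    while the spheres overlap, a linear spring-dashpot force acts."""
    x1, x2 = 0.0, 3e-6          # centre positions [m]
    v1, v2 = v0, 0.0            # velocities [m/s]
    for _ in range(steps):
        overlap = (2 * r) - (x2 - x1)
        if overlap > 0:                      # particles in contact
            rel_v = v1 - v2                  # approach speed
            f = k * overlap + c * rel_v      # spring + dashpot force
        else:
            f = 0.0
        # explicit time integration (tiny dt keeps this stable here)
        v1 -= f / m * dt
        v2 += f / m * dt
        x1 += v1 * dt
        x2 += v2 * dt
    return v1, v2

v1, v2 = dem_collision()
# nearly elastic equal-mass collision: particle 1 stops,
# particle 2 carries almost all of the incoming velocity
```

Because the contact force is applied equally and oppositely to both particles at each step, momentum is conserved; the small dashpot term dissipates a little kinetic energy, mimicking inelasticity.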
Efficient spike-sorting of multi-state neurons using inter-spike intervals information
We demonstrate the efficacy of a new spike-sorting method based on a Markov Chain Monte Carlo (MCMC) algorithm by applying it to real data recorded from Purkinje cells (PCs) in young rat cerebellar slices. This algorithm is unique in its capability to estimate and make use of the firing statistics as well as the spike amplitude dynamics of the recorded neurons. PCs exhibit multiple discharge states, giving rise to multimodal inter-spike interval (ISI) histograms and to correlations between successive ISIs. The amplitude of the spikes generated by a PC in an "active" state decreases, a feature typical of many neurons from both vertebrates and invertebrates. These two features constitute a major and recurrent problem for all presently available spike-sorting methods. We first show that a Hidden Markov Model with three log-normal states provides a flexible and satisfying description of the complex firing of single PCs. We then incorporate this model into our previous MCMC-based spike-sorting algorithm (Pouzat et al., 2004, J. Neurophys. 91, 2910-2928) and test the new algorithm on multi-unit recordings of bursting PCs. We show that our method successfully classifies the bursty spike trains fired by PCs by using an independent single-unit recording from a patch-clamp pipette.
Comment: 25 pages, to be published in Journal of Neuroscience Methods
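The multi-state firing model can be illustrated with a small generative sketch: a three-state Markov chain whose states emit log-normally distributed ISIs (all parameter values below are invented for illustration, not fitted values from the study):

```python
import random, math

# Illustrative 3-state log-normal ISI model. Each hidden state k emits
# ISIs ~ logNormal(MU[k], SIGMA[k]); state transitions follow a Markov
# chain, producing a multimodal ISI histogram and correlated
# successive ISIs.
MU    = [math.log(0.005), math.log(0.05), math.log(0.5)]  # "burst", "tonic", "pause"
SIGMA = [0.3, 0.4, 0.5]
P = [[0.90, 0.09, 0.01],   # transition matrix: states are "sticky",
     [0.10, 0.85, 0.05],   # which induces serial correlation
     [0.05, 0.15, 0.80]]   # between successive ISIs

def sample_isis(n, state=0, rng=random):
    isis, states = [], []
    for _ in range(n):
        isis.append(rng.lognormvariate(MU[state], SIGMA[state]))
        states.append(state)
        state = rng.choices([0, 1, 2], weights=P[state])[0]
    return isis, states

random.seed(0)
isis, states = sample_isis(5000)
```

Because the chain is sticky, successive ISIs are correlated and the pooled ISI histogram is multimodal, reproducing the two features that defeat spike sorters which assume a single renewal process per neuron.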
Traffic and performance evaluation for optical networks. An Investigation into Modelling and Characterisation of Traffic Flows and Performance Analysis and Engineering for Optical Network Architectures.
The convergence of multiservice heterogeneous networks, ever-increasing Internet applications such as peer-to-peer networking, and the growing number of users and services demand more efficient bandwidth allocation in optical networks. In this context, new architectures and protocols are needed, in conjunction with cost-effective quantitative methodologies, in order to provide an insight into the performance aspects of the next and future generation Internet.
This thesis reports an investigation, based on efficient simulation methodologies, in order to assess existing high-performance algorithms and to propose new ones. The analysis of the traffic characteristics of an OC-192 link (9953.28 Mbps) is initially conducted, a requirement arising from the discovery of self-similar, long-range dependent properties in network traffic, and the suitability of the GE distribution for modelling interarrival times of bursty traffic in short time scales is presented. Subsequently, using a heuristic approach, the self-similar properties of the GE/G/∞ queue are presented, providing a method to generate self-similar traffic that takes into consideration burstiness in small time scales. A description of the state of the art in optical networking is then given, providing a deeper insight into the current technologies, protocols and architectures in the field and creating the motivation for more research into the promising switching technique of Optical Burst Switching (OBS). An investigation into the performance impact of various burst assembly strategies on an OBS edge node's mean buffer length is conducted. Realistic traffic characteristics are considered, based on the analysis of the OC-192 backbone traffic traces. In addition, the effect of burstiness in the small time scales on mean assembly time and burst size distribution is investigated. A new Dynamic OBS Offset Allocation Protocol is devised, and favourable comparisons are carried out between the proposed OBS protocol and the Just Enough Time (JET) protocol in terms of mean queue length, blocking and throughput. Finally, the research focuses on the simulation methodologies employed throughout the thesis using the Graphics Processing Unit (GPU) of a commercial NVidia GeForce 8800 GTX, which was initially designed for gaming computers.
Parallel generators of optical bursts are implemented and simulated in the Compute Unified Device Architecture (CUDA) and compared with simulations run on a general-purpose CPU, proving the GPU to be a cost-effective platform that can significantly speed up calculations, making simulations of more complex and demanding networks easier to develop.
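One common parameterisation of the GE (Generalised Exponential) interarrival distribution can be sketched as follows (a schematic, not code from the thesis): with mean 1/ν and squared coefficient of variation C², an interarrival is zero with probability 1 − τ, where τ = 2/(C² + 1), capturing batched (bursty) arrivals, and is otherwise exponential with rate τν.

```python
import random

def ge_interarrival(rate, scv, rng=random):
    """Sample one interarrival time from a GE distribution with mean
    1/rate and squared coefficient of variation scv. For scv > 1 this
    yields bursty, batched arrivals: with probability 1 - tau the
    interarrival is zero (an arrival in the same batch), otherwise it
    is exponential with rate tau * rate."""
    tau = 2.0 / (scv + 1.0)
    if rng.random() >= tau:
        return 0.0              # same-batch arrival
    return rng.expovariate(tau * rate)

random.seed(2)
samples = [ge_interarrival(rate=1.0, scv=9.0) for _ in range(200000)]
mean = sum(samples) / len(samples)          # should approach 1/rate
zero_frac = sum(s == 0.0 for s in samples) / len(samples)  # 1 - tau
```

With scv = 9, τ = 0.2, so roughly 80% of interarrivals are zero, i.e. arrivals cluster into batches, which is what makes the GE distribution a convenient model for burstiness in short time scales.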
Development of a Random Time-Frequency Access Protocol for M2M Communication
This thesis focuses on the design and development of the random time-frequency access protocol in Machine-to-Machine (M2M) communication systems and covers different aspects of the data collision problem in these systems. The randomisation algorithm, used to access channels in the frequency domain, represents the key factor that affects data collisions. This thesis presents a new randomisation algorithm for the channel selection process for M2M technologies. The new algorithm is based on a uniform randomisation distribution and is called the Uniform Randomisation Channel Selection Technique (URCST). This new channel selection algorithm improves system performance and provides a low probability of collision with minimum complexity, power consumption, and hardware resources. Also, URCST is a general randomisation technique which can be utilised by different M2M technologies. The analysis presented in this research confirms that using URCST improves system performance for different M2M technologies, such as Weightless-N and Sigfox, with a massive number of devices. The thesis also provides a rigorous and flexible mathematical model for the random time-frequency access protocol which can precisely describe the performance of different M2M technologies. This model covers various scenarios with multiple groups of devices that employ different transmission characteristics like the number of connected devices, the number of message copies, the number of channels, the payload size, and transmission time. In addition, new and robust simulation testbeds have been built and developed in this research to evaluate the performance of different M2M technologies that utilise the random time-frequency access protocol. These testbeds cover the channel histogram, the probability of collisions, and the mathematical model. 
The testbeds were designed to support the multiple message copies approach with various groups of devices that are connected to the same base station and employ different transmission characteristics. Utilising the newly developed channel selection algorithm, mathematical model, and testbeds, the research offers a detailed and thorough analysis of the performance of Weightless-N and Sigfox in terms of the message lost ratio (MLR) and power consumption. The analysis shows some useful insights into the performance of M2M systems. For instance, while using multiple message copies improves the system performance, it might degrade the reliability of the system as the number of devices increases beyond a specific limit. Therefore, increasing the number of message copies can be disadvantageous to M2M communication performance.
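The core collision mechanism of a random time-frequency access protocol can be sketched with a toy Monte Carlo model (the device, channel, and slot counts below are illustrative, not Weightless-N or Sigfox parameters): each device independently picks one channel and one time slot uniformly at random, and a message is lost if any other device picks the same time-frequency resource.

```python
import random

def simulate_mlr(n_devices, channels, slots, trials, rng=random):
    """Estimate the message lost ratio for a tagged device under
    uniform random time-frequency access (single copy per message)."""
    lost = 0
    for _ in range(trials):
        picks = [(rng.randrange(channels), rng.randrange(slots))
                 for _ in range(n_devices)]
        # the tagged device's message is lost if its resource is reused
        lost += picks.count(picks[0]) > 1
    return lost / trials

def analytic_mlr(n_devices, channels, slots):
    # probability at least one of the other N-1 devices picks the
    # tagged device's (channel, slot) resource
    return 1.0 - (1.0 - 1.0 / (channels * slots)) ** (n_devices - 1)

random.seed(3)
sim = simulate_mlr(n_devices=50, channels=20, slots=10, trials=20000)
exact = analytic_mlr(50, 20, 10)
```

The simulated loss ratio converges to the closed-form expression 1 − (1 − 1/(C·T))^(N−1), the kind of baseline a mathematical model of the access protocol must reproduce before adding multiple message copies and heterogeneous device groups.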