    Fast Solvers for Unsteady Thermal Fluid Structure Interaction

    We consider time-dependent thermal fluid-structure interaction. The respective models are the compressible Navier-Stokes equations and the nonlinear heat equation. A partitioned coupling approach via a Dirichlet-Neumann method and a fixed-point iteration is employed. As a reference solver, a previously developed, efficient, time-adaptive, higher-order time integration scheme is used. To improve upon this, we work on reducing the number of fixed-point coupling iterations. Thus, widely used vector extrapolation methods for convergence acceleration of the fixed-point iteration are tested first. In particular, Aitken relaxation, minimal polynomial extrapolation (MPE) and reduced rank extrapolation (RRE) are considered. Second, we explore the idea of extrapolation based on data from the time integration and derive such methods for SDIRK2. While the vector extrapolation methods have no beneficial effect, the time-integration-based extrapolation methods reduce the number of fixed-point iterations further, by up to a factor of two, with linear extrapolation performing better than quadratic.
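    A minimal, generic sketch of such a dynamically relaxed fixed-point loop (Aitken relaxation) is given below; the map g and the toy contraction used to exercise it are illustrative stand-ins for the actual Dirichlet-Neumann coupling of fluid and structure solvers, not the authors' implementation.

        import numpy as np

        def aitken_fixed_point(g, x0, omega0=0.5, tol=1e-10, max_iter=100):
            """Solve x = g(x) by fixed-point iteration with dynamic Aitken relaxation.

            In a Dirichlet-Neumann coupling, g would map interface data through
            the fluid solver and then the structure solver."""
            x = np.asarray(x0, dtype=float)
            omega, r_old = omega0, None
            for k in range(max_iter):
                r = g(x) - x                          # coupling residual
                if np.linalg.norm(r) < tol:
                    return x, k
                if r_old is not None:
                    dr = r - r_old
                    omega = -omega * np.dot(r_old, dr) / np.dot(dr, dr)  # Aitken update
                x = x + omega * r                     # relaxed interface update
                r_old = r
            return x, max_iter

        # Toy usage on a linear contraction in R^2
        A = np.array([[0.5, 0.2], [0.1, 0.6]])
        b = np.array([1.0, 2.0])
        x_star, iters = aitken_fixed_point(lambda x: A @ x + b, np.zeros(2))
        print(iters, x_star)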

    Large area Czochralski silicon

    The overall cost effectiveness of the Czochralski process for producing large-area silicon was determined. The feasibility of growing several 12 cm diameter crystals sequentially at 12 cm/h during a single furnace run, and the subsequent slicing of the ingot with a multiblade slurry saw, were investigated. The goal of the wafering process was a slice thickness of 0.25 mm with minimal kerf. A slice + kerf of 0.56 mm was achieved on 12 cm diameter crystal using both 400 grit B4C and SiC abrasive slurries. Crystal growth experiments were performed at 12 cm diameter in a commercially available puller with both 10 and 12 kg melts. Several modifications to the puller hot zone were required to achieve stable crystal growth over the entire crystal length and to prevent loss of crystallinity a few centimeters down the crystal. The maximum practical growth rate for 12 cm crystal in this puller design was 10 cm/h, with 12 to 14 cm/h being the absolute maximum range, at which melt freeze occurred.
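    To put the wafering figures in perspective, the reported slice + kerf value implies the following back-of-the-envelope yield; this is illustrative arithmetic only, assuming the 0.25 mm target slice thickness was met.

        slice_thickness_mm = 0.25                 # target wafer thickness
        slice_plus_kerf_mm = 0.56                 # achieved slice + kerf
        kerf_mm = slice_plus_kerf_mm - slice_thickness_mm        # ~0.31 mm lost to the saw
        wafers_per_cm = 10.0 / slice_plus_kerf_mm                # ~17.9 wafers per cm of ingot
        silicon_used = slice_thickness_mm / slice_plus_kerf_mm   # ~45% of the ingot ends up as wafer
        print(kerf_mm, wafers_per_cm, silicon_used)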

    Theoretical Implications of Directionally Asymmetric Transparency

    Transparent segments have been a well known challenge for accounts of patterns of long distance agreement, such as vowel and consonant harmony. Two standard ways to account for transparency are autosegmental feature spreading with underspecification (e.g. Kiparsky 1981; Steriade 1987) and Agreement by Correspondence (ABC; Walker 2000; Walker & Rose 2004; Hansson 2001). Both, however, fail to derive the multiple instances of transparency encountered in Tsilhqút'ín (Cook 1993; 2013). Here, non-retracted dorsals act both as transparent and as opaque to the process of vowel retraction, depending on which side of the trigger, a retracted sibilant, they are located. On the other hand, both retracted and non-retracted dorsals are transparent in sibilant harmony, in which sibilants are forced to agree in retraction. I propose a superset approach that combines feature spreading and underspecification with ABC: All dorsals are transparent in sibilant harmony, because they are outside the correspondence relation. At the first step of the derivation, non-retracted dorsals are not specified for retraction, allowing them to be transparent to regressive retraction. At a later step, they are negatively specified and hence able to block progressive retraction.
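    The directional asymmetry can be illustrated with a toy two-step simulation; the segment representation and feature bookkeeping below are hypothetical simplifications of the proposal, not its formalization.

        def spread_retraction(word, trigger, step):
            """Spread [+retracted] from word[trigger]; step is -1 (regressive) or +1 (progressive).
            Segments unspecified for the feature (None) are transparent; consonants
            specified [-retracted] are opaque and stop the spreading."""
            i = trigger + step
            while 0 <= i < len(word):
                kind, retracted = word[i]
                if retracted is None:          # underspecified -> transparent, keep going
                    i += step
                    continue
                if kind != "vowel" and retracted is False:
                    break                      # specified [-retracted] consonant blocks
                if kind == "vowel":
                    word[i] = (kind, True)     # target vowel undergoes retraction
                i += step

        # Step 1: non-retracted dorsals are unspecified, hence transparent to regressive retraction
        word = [("vowel", False), ("dorsal", None), ("sibilant", True)]
        spread_retraction(word, trigger=2, step=-1)
        print(word)    # the initial vowel is retracted across the dorsal

        # Step 2: dorsals are later specified [-retracted] and block progressive retraction
        word = [("sibilant", True), ("dorsal", False), ("vowel", False)]
        spread_retraction(word, trigger=0, step=+1)
        print(word)    # the final vowel stays unretracted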

    Molecular Vibrational Dynamics in Solution Investigated by Stationary and Time-Resolved Infrared Spectroscopy

    Within this thesis, time-resolved pump-probe and two-dimensional mid-infrared spectroscopy were applied to investigate ultrafast processes in the condensed phase. To this end, the υ3 stretching vibration of the cyanate anion or of carbon dioxide was selected as a vibrational probe for studies of their aqueous solutions under isobaric heating from 303 K up to 603 K. Herein, the time constants of the vibrational energy relaxation and the underlying mechanism were unraveled. For cyanate and carbon dioxide in aqueous solution, a solvent-assisted sequential intramolecular vibrational redistribution was considered. Within this mechanism, the vibrational excess energy is redistributed intramolecularly from the initially excited υ3 into the bending mode. From there, the energy is transferred to a resonant mode of the solvent, where the excess energy is subsequently redistributed within the solvent. The time constants of the intermolecular energy transfer to the solvent's resonant mode were obtained by time-resolved pump-probe spectroscopy. In the case of the CO2/H2O system, the time constant of the intramolecular energy redistribution was obtained from the correlation time used for the simulation of the temperature-dependent stationary infrared spectra based on the Kubo-Anderson general stochastic theory. The vibrational energy relaxation mechanisms of the investigated OCN– and CO2 aqueous solutions were then classified within, and discussed in the context of, the series of other pseudohalide anions investigated earlier. Two-dimensional mid-IR spectroscopy was applied to investigate the intramolecular dynamics of trans-4-methoxybut-3-en-2-one and its rotamers in C2Cl4. The υ(C=C) and υ(C=O) stretching vibrations were used as vibrational probes. The two-dimensional infrared spectra were then analyzed to assign the cross peaks to the corresponding vibrational modes of the different rotamers of this molecule. Furthermore, the conformational dynamics were investigated. The occurrence of additional cross peaks at the earliest delay time of 600 fs was attributed to a dimerization of two rotamers in solution.
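    The Kubo-Anderson treatment mentioned above can be sketched numerically: the frequency fluctuations are modeled by an exponentially decaying correlation function with amplitude Δ and correlation time τc, and the stationary line shape follows from the Fourier transform of exp(-g(t)). All parameter values below are illustrative placeholders, not fitted values from this work.

        import numpy as np

        omega0 = 2170.0                        # band center / cm^-1 (placeholder)
        delta = 10.0                           # fluctuation amplitude / cm^-1 (placeholder)
        tau_c = 1.0                            # correlation time / ps (placeholder)
        to_rad_ps = 2 * np.pi * 0.0299792458   # converts cm^-1 to rad/ps

        t = np.linspace(0.0, 50.0, 4000)       # time axis / ps
        x = t / tau_c
        g = (delta * to_rad_ps) ** 2 * tau_c ** 2 * (np.exp(-x) + x - 1.0)  # Kubo lineshape function

        freqs = np.linspace(omega0 - 60.0, omega0 + 60.0, 601)              # cm^-1
        spectrum = np.array([np.trapz(np.cos((w - omega0) * to_rad_ps * t) * np.exp(-g), t)
                             for w in freqs])
        spectrum /= spectrum.max()             # normalized absorption profile vs. freqs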

    Frequent Prescribed Fires Can Reduce Risk of Tick-borne Diseases

    Recently, a two-year study found that long-term prescribed fire significantly reduced tick abundance at sites with varying burn regimes (burned surrounded by burned areas [BB], burned surrounded by unburned areas [BUB], and unburned surrounded by burned areas [UBB]). In the current study, these ticks were tested for pathogens to more directly investigate the impacts of long-term prescribed burning on human disease risk. A total of 5,103 ticks (4,607 Amblyomma americanum, 76 Amblyomma maculatum, 383 Ixodes scapularis, two Ixodes brunneus, and 35 Dermacentor variabilis) were tested for Borrelia spp., Rickettsia spp., Ehrlichia spp., and Anaplasma phagocytophilum. Long-term prescribed fire did not significantly affect pathogen prevalence, except that A. americanum from burned habitats had a significantly lower prevalence of Rickettsia (8.7% and 4.6% for BUB and UBB sites, respectively) compared to ticks from control sites (unburned, surrounded by unburned [UBUB]; 14.6%). However, during the warm season (spring/summer), encounter rates with ticks infected with pathogenic bacteria were significantly lower (by 98%) at burned sites than at UBUB sites. Thus, despite there being no differences in pathogen prevalence between burned and UBUB sites, the risk of pathogen transmission is lower at sites subjected to long-term burning due to lower encounter rates with infected ticks.
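    The last point is worth unpacking: the rate of encountering infected ticks is roughly the product of the tick encounter rate and the pathogen prevalence, so a large drop in tick abundance can cut risk sharply even when prevalence is unchanged. The numbers below are made up purely to illustrate this arithmetic and are not the study's data.

        ticks_per_hour_unburned = 10.0     # hypothetical encounter rate at UBUB sites
        ticks_per_hour_burned = 0.25       # hypothetical encounter rate at burned sites
        prevalence_unburned = 0.10         # similar prevalence in both treatments
        prevalence_burned = 0.08

        rate_unburned = ticks_per_hour_unburned * prevalence_unburned
        rate_burned = ticks_per_hour_burned * prevalence_burned
        print(f"infected-tick encounters reduced by {1 - rate_burned / rate_unburned:.0%}")  # 98%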

    CANADA’S GRAIN HANDLING AND TRANSPORTATION SYSTEM: A GIS-BASED EVALUATION OF POLICY CHANGES

    Western Canada is in a post-Canadian Wheat Board single-desk market, in which grain handlers face policy, allocation, and logistical changes to the transportation of grain. This research examines the rail transportation problem of allocating wheat from Prairie to port position, offering a new allocation system that fits the evolving environment of Western Canada's grain market. Optimization and analysis of the transport of wheat by rail are performed using geographic information system software as well as spatial and historical data. The studied transportation problem seeks to minimize the cost of time rather than looking purely at locational costs or closest proximity to port. Through optimization, three major bottlenecks are found to constrain the transportation problem: 1) an allocation preference towards the Thunder Bay and Vancouver ports, 2) the inefficiency of small-capacity trains, and 3) a mismatched distribution of supply and demand between the Class 1 railway firms. Through analysis of counterfactual policies and a scaled sensitivity analysis of the transportation problem, the rail grain transport system is found to be dynamic and time efficient, specifically when larger train capacities are utilized, open access to rail is offered, and supplies are more readily available. Even under the current circumstances of reduced grain movement and inefficiencies, there are policies and logistics that can be implemented to provide grain handlers in Western Canada with the transportation needed to fulfill their export demands.
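    A stripped-down version of such a time-minimizing allocation can be written as a classical transportation linear program; the origins, ports, transit times, and tonnages below are hypothetical placeholders, and the thesis model is of course far richer (train capacities, railway firms, open-access scenarios).

        import numpy as np
        from scipy.optimize import linprog

        origins = ["Elevator A", "Elevator B"]          # hypothetical Prairie delivery points
        ports = ["Vancouver", "Thunder Bay"]
        hours = np.array([[36.0, 52.0],                 # transit time to each port
                          [28.0, 60.0]])
        supply = np.array([400.0, 300.0])               # kt of wheat available at each origin
        demand = np.array([450.0, 250.0])               # kt of wheat required at each port

        c = hours.ravel()                               # minimize total tonne-hours
        A_eq, b_eq = [], []
        for i in range(len(origins)):                   # everything at an origin is shipped
            row = np.zeros_like(c)
            row[i * len(ports):(i + 1) * len(ports)] = 1.0
            A_eq.append(row); b_eq.append(supply[i])
        for j in range(len(ports)):                     # every port receives its demand
            col = np.zeros_like(c)
            col[j::len(ports)] = 1.0
            A_eq.append(col); b_eq.append(demand[j])

        res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
        print(res.x.reshape(len(origins), len(ports)))  # optimal tonnage on each origin-port lane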

    Essays in Statistics

    This thesis comprises several contributions to the field of mathematical statistics, particularly with regard to computational issues of Bayesian statistics and functional data analysis. The first two chapters are concerned with computational Bayesian approaches that allow one to generate samples from an approximation to the posterior distribution in settings where the likelihood function of some statistical model of interest is unknown. This has led to a class of Approximate Bayesian Computation (ABC) methods whose performance depends on the ability to effectively summarize the information content of the data sample by a lower-dimensional vector of summary statistics. Ideally, these statistics are sufficient for the parameter of interest. However, it is difficult to establish sufficiency in a straightforward way if the likelihood of the model is unavailable. In Chapter 1 we propose an indirect approach to selecting sufficient summary statistics for ABC methods that borrows its intuition from the indirect estimation literature in econometrics. More precisely, we introduce an auxiliary statistical model that is large enough to contain the structural model of interest. Summary statistics are then identified in this auxiliary model and mapped to the structural model of interest. We show sufficiency of these statistics for Indirect ABC methods based on parameter estimates (ABC-IP), likelihood functions (ABC-IL) and scores (ABC-IS) of the auxiliary model. A detailed simulation study investigates the performance of each proposal and compares it to a traditional, moment-based ABC approach. In particular, the ABC-IL and ABC-IS algorithms are shown to perform better than both standard ABC and the ABC-IP method. In Chapter 2 we extend the notion of Indirect ABC methods by proposing an efficient way of weighting the individual entries of the vector of summary statistics obtained from the score-based Indirect ABC approach (ABC-IS). In particular, the weighting matrix is given by the inverse of the asymptotic covariance matrix of the score vector of the auxiliary model and allows us to appropriately assess the distance between the true posterior distribution and the approximation based on the ABC-IS method. We illustrate the performance gain in a simulation study. An empirical application then applies the weighted ABC-IS method to the problem of estimating a continuous-time stochastic volatility model based on non-Gaussian Ornstein-Uhlenbeck processes. We show how a suitable auxiliary model can be constructed and confirm estimation results from concurring Bayesian estimation approaches suggested in the literature. In Chapter 3 we consider the problem of sampling from high-dimensional probability distributions that exhibit multiple, well-separated modes. Such distributions arise frequently, for instance, in the Bayesian estimation of macroeconomic DSGE models. Standard Markov Chain Monte Carlo (MCMC) methods, such as the Metropolis-Hastings algorithm, are prone to getting trapped in local neighborhoods of the target distribution, thus severely limiting the use of these methods in more complex models. We suggest the use of a Sequential Markov Chain Monte Carlo approach to overcome these difficulties and investigate its finite-sample properties. The results show that Sequential MCMC methods clearly outperform standard MCMC approaches in a multimodal setting and can recover both the location and the mixture weights in a 12-dimensional mixture model.
    Moreover, we provide a detailed comparison of the effects that different choices of tuning parameters have on the approximation to the true sampling distribution. These results can serve as valuable guidelines when applying this method to more complex economic models, such as the (Bayesian) estimation of Dynamic Stochastic General Equilibrium models. Chapters 4 and 5 study the statistical problem of prediction from a functional perspective. In many statistical applications, data are becoming available at ever increasing frequencies, and it has thus become natural to think of discrete observations as realizations of a continuous function, say over the course of one day. However, as functions are, generally speaking, infinite-dimensional objects, the statistical analysis of such functional data is intrinsically different from standard multivariate techniques. In Chapter 4 we consider prediction in functional additive models of first-order autoregressive type for a time series of functional observations. This is a generalization of the functional linear models commonly considered in the literature and has two advantages when applied in a functional time series setting. First, it allows us to introduce a very general notion of time dependence for functional data in this modeling framework. In particular, it is rooted in the correlation structure of the functional principal component scores and even allows for long memory behavior in the score series across the time dimension. Second, prediction in this modeling framework is straightforward to implement, as it only concerns conditional means of scalar random variables, and we suggest a k-nearest neighbors classification scheme. The theoretical contributions of this paper are twofold. In a first step, we verify the applicability of functional principal components analysis under our notion of time dependence and obtain precise rates of convergence for the mean function and the covariance operator associated with the observed sample of functions. In a second step, we derive precise rates of convergence of the mean squared error for the proposed predictor, taking into account both the effect of truncating the infinite series expansion at some finite integer L and the effect of estimating the covariance operator and the associated eigenelements from a sample of N curves. In Chapter 5 we investigate the performance of functional models in a forecasting study of ground-level ozone-concentration surfaces over the geographical domain of Germany. Our perspective thus differs from the literature on spatially distributed functional processes (which are considered to be (univariate) functions of time that show spatial dependence) in that we consider smooth surfaces defined over some spatial domain that are sampled consecutively over time. In particular, we treat discrete observations that are sampled both over a spatial domain and over time as noisy realizations of some time series of smooth bivariate functions. In a first step we therefore discuss how smooth functions can be reconstructed from such noisy measurements through a finite element spline smoother defined over a triangulation of the spatial domain. In a second step we consider two forecasting approaches for functional time series. The first is a functional linear model of first-order autoregressive type, whereas the second is the non-parametric extension to functional additive models discussed in Chapter 4.
    Both approaches are applied to predicting ground-level ozone concentration measured over the spatial domain of Germany and are shown to yield similar predictions.
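    As a point of reference for the ABC chapters summarized above, a bare-bones ABC rejection sampler with a single summary statistic can be written in a few lines; the toy model (a normal location parameter), the prior, and the tolerance are illustrative and much simpler than the indirect, auxiliary-model statistics developed in Chapters 1 and 2.

        import numpy as np

        rng = np.random.default_rng(0)
        observed = rng.normal(loc=2.0, scale=1.0, size=50)   # pretend data
        s_obs = observed.mean()                              # summary statistic

        def abc_rejection(n_draws=20000, tol=0.05):
            accepted = []
            for _ in range(n_draws):
                theta = rng.normal(0.0, 5.0)                 # draw from a wide prior
                sim = rng.normal(theta, 1.0, size=observed.size)
                if abs(sim.mean() - s_obs) < tol:            # keep draws with a close summary
                    accepted.append(theta)
            return np.array(accepted)

        posterior_sample = abc_rejection()
        print(posterior_sample.mean(), posterior_sample.std())   # approximate posterior mean / sd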
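    Similarly, the score-based prediction idea of Chapters 4 and 5 can be caricatured as: estimate principal components from the observed curves, then forecast the next curve's scores from what followed the most similar past score vectors. The simulated curves and the plain nearest-neighbour averaging rule below are illustrative stand-ins for the functional additive model and its estimators.

        import numpy as np

        rng = np.random.default_rng(1)
        N, P, L = 200, 100, 3                       # curves, grid points, retained components
        grid = np.linspace(0.0, 1.0, P)
        curves = np.zeros((N, P))
        for t in range(1, N):                       # crude functional AR(1)-type simulation
            curves[t] = 0.6 * curves[t - 1] + rng.normal() * np.sin(2 * np.pi * rng.integers(1, 4) * grid)

        mean = curves.mean(axis=0)
        centered = curves - mean
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        components = Vt[:L]                         # estimated eigenfunctions on the grid
        scores = centered @ components.T            # score series, one row per curve

        def predict_next_scores(scores, k=5):
            """Average the successors of the k past score vectors closest to the latest one."""
            dists = np.linalg.norm(scores[:-1] - scores[-1], axis=1)
            nearest = np.argsort(dists)[:k]
            return scores[nearest + 1].mean(axis=0)

        forecast = mean + predict_next_scores(scores) @ components   # predicted next curve
        print(forecast.shape)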