105 research outputs found

    Diffusion-limited reactions and mortal random walkers in confined geometries

    Motivated by the diffusion-reaction kinetics on interstellar dust grains, we study a first-passage problem of mortal random walkers in a confined two-dimensional geometry. We provide an exact expression for the encounter probability of two walkers, which is evaluated in limiting cases and checked against extensive kinetic Monte Carlo simulations. We analyze the continuum limit, which is approached very slowly, with corrections that vanish logarithmically with the lattice size. We then examine the influence of the shape of the lattice on the first-passage probability, focusing on the aspect ratio dependence: distorting the lattice always reduces the encounter probability of two walkers and can lead to a crossover to the behavior of a genuinely one-dimensional random walk. The nature of this transition is also explained qualitatively. (Comment: 18 pages, 16 figures)
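
    As a purely illustrative companion to this abstract (not the authors' code): a minimal Python sketch that estimates the encounter probability of two mortal walkers on a confined square lattice by direct Monte Carlo sampling, rather than the kinetic Monte Carlo used in the paper. The lattice size, death probability, random starting positions and simple wall handling are assumptions chosen only to make the sketch runnable.

```python
import random

def encounter_probability(L=20, p_death=0.01, trials=20000, seed=0):
    """Estimate the probability that two mortal random walkers on an L x L
    lattice meet before either one dies. All parameter values are
    illustrative assumptions, not the paper's settings."""
    rng = random.Random(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    hits = 0
    for _ in range(trials):
        # start the two walkers at independent random sites
        a = [rng.randrange(L), rng.randrange(L)]
        b = [rng.randrange(L), rng.randrange(L)]
        while True:
            # each walker dies independently with probability p_death per step
            if rng.random() < p_death or rng.random() < p_death:
                break
            for w in (a, b):
                dx, dy = rng.choice(moves)
                # clamp to the lattice so the walk stays confined
                # (a simple wall treatment chosen for the sketch)
                w[0] = min(max(w[0] + dx, 0), L - 1)
                w[1] = min(max(w[1] + dy, 0), L - 1)
            if a == b:  # encounter: both walkers occupy the same site
                hits += 1
                break
    return hits / trials

print(encounter_probability())
```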

    GPCR-mediated glucose sensing system regulates light-dependent fungal development and mycotoxin production

    Microorganisms sense environmental fluctuations in nutrients and light and coordinate their growth and development accordingly. Despite their critical roles in fungi, only a few G-protein coupled receptors (GPCRs) have been characterized. The Aspergillus nidulans genome encodes 86 putative GPCRs. Here, we characterize a carbon starvation-induced, GPCR-mediated glucose sensing mechanism in A. nidulans. It involves two class V (gprH and gprI) and one class VII (gprM) GPCRs, which in response to glucose promote cAMP signalling, germination and hyphal growth, while negatively regulating sexual development in a light-dependent manner. We demonstrate that GprH regulates sexual development by influencing the activity of VeA, a key light-dependent regulator of fungal morphogenesis and secondary metabolism. We show that GprH and GprM are light-independent negative regulators of sterigmatocystin biosynthesis. Additionally, we reveal epistatic interactions between the three GPCRs in regulating sexual development and sterigmatocystin production. In conclusion, GprH, GprM and GprI constitute a novel carbon starvation-induced glucose sensing mechanism that functions upstream of cAMP-PKA signalling to regulate fungal development and mycotoxin production.

    Ego-Splitting and the Transcendental Subject. Kant’s Original Insight and Husserl’s Reappraisal

    In this paper, I contend that there are at least two essential traits that commonly define being an I: self-identity and self-consciousness. I argue that they bear quite an odd relation to each other, in the sense that self-consciousness seems to jeopardize self-identity. My main concern is to elucidate this issue within the scope of the transcendental philosophies of Immanuel Kant and Edmund Husserl. In the first section, I shall briefly consider Kant's own rendition of the problem of the Ego-splitting. My reading of the Kantian texts reveals that Kant himself was aware of this phenomenon but eventually deemed it an unexplainable fact. The second part of the paper tackles the same problematic from the standpoint of Husserlian phenomenology. What Husserl's extensive analyses of this topic bring to light is that the phenomenon of the Ego-splitting constitutes the bedrock not only of his thought but also of every philosophy that works within the framework of transcendental thinking.

    Pointer states for primordial fluctuations in inflationary cosmology

    Primordial fluctuations in inflationary cosmology acquire classical properties through decoherence when their wavelengths become larger than the Hubble scale. Although decoherence is effective, it is not complete, so a significant part of primordial correlations remains up to the present moment. We address the issue of the pointer states that provide a classical basis for the fluctuations under the influence of an environment (other fields). Applying methods from the quantum theory of open systems (the Lindblad equation), we show that this basis is given by narrow Gaussians that approximate eigenstates of the field amplitudes. We calculate both the von Neumann and the linear entropy of the fluctuations. Their ratio to the maximal entropy per field mode defines a degree of partial decoherence in the entropy sense. We also determine the time of partial decoherence that makes the Wigner function positive everywhere; for super-Hubble modes during inflation, it is virtually independent of the coupling to the environment and is only slightly larger than the Hubble time. On the other hand, assuming a representative environment (a photon bath), the decoherence time for sub-Hubble modes is finite only if some real dissipation exists. (Comment: 32 pages, 2 figures; matches published version: discussion expanded, references added, conclusions unchanged)
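
    To make the entropy measures mentioned above concrete (an illustration, not taken from the paper): a short numpy sketch evaluating the von Neumann entropy, the linear entropy and their ratio to the maximal entropy for an assumed, partially decohered two-level density matrix standing in for a field mode.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S_vN = -Tr(rho ln rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))

def linear_entropy(rho):
    """S_lin = 1 - Tr(rho^2), a simple proxy for the von Neumann entropy."""
    return float(1.0 - np.trace(rho @ rho).real)

# Illustrative partially decohered two-level state (not the paper's field mode):
# off-diagonal coherences suppressed by an assumed decoherence factor d in [0, 1].
d = 0.3
rho = np.array([[0.6, 0.5 * d],
                [0.5 * d, 0.4]])

S_vn = von_neumann_entropy(rho)
S_lin = linear_entropy(rho)
S_max = np.log(2)                          # maximal entropy of a two-level system
print(S_vn, S_lin, S_vn / S_max)           # ratio: degree of partial decoherence
```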

    Influence of different interpolation techniques on the determination of the critical conditions for the onset of dynamic recrystallisation

    Accurate modeling of dynamic recrystallization (DRX) is highly important for forming processes like hot rolling and forging. To correctly predict the overall level of dynamic recrystallization reached, it is vital to determine and model the critical conditions that mark the start of DRX. For the determination of the critical conditions, a criterion has been proposed by Poliak and Jonas. It states that the onset of DRX can be detected from an inflection point in the work hardening rate plotted as a function of flow stress, the work hardening rate being the derivative of the flow stress with respect to strain. Flow curves are in general measured at a certain sampling rate, yielding tabular stress-strain data that are not continuously differentiable, and measured flow curves inevitably contain jitter. Hence, flow curves need to be interpolated and smoothed before the work hardening rate and the further derivatives necessary for evaluating the criterion of Poliak and Jonas can be computed. In this paper, the polynomial interpolation originally proposed by Poliak and Jonas is compared to a new approach based on radial basis functions with a thin plate spline kernel, which combines surface interpolation of various flow curves and smoothing in a single step. It is shown for different steel grades that the interpolation method used has a crucial influence on the resulting critical conditions for DRX, and that a simultaneous evaluation by surface interpolation might yield consistent critical conditions over a range of testing temperatures. © (2013) Trans Tech Publications, Switzerland
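
    The following sketch illustrates the criterion described above under stated assumptions; it is not the authors' implementation. A synthetic flow curve is smoothed with a one-dimensional thin-plate-spline RBF (scipy's Rbf, whereas the paper interpolates a surface over several curves), the work hardening rate θ = dσ/dε is computed numerically, and the critical stress is taken from the inflection point of a cubic polynomial fit of θ(σ), in the spirit of Poliak and Jonas.

```python
import numpy as np
from scipy.interpolate import Rbf

# Synthetic noisy flow curve (illustrative only, not measured data).
rng = np.random.default_rng(1)
eps = np.linspace(0.02, 0.8, 200)
sigma = 120 * (1 - np.exp(-8 * eps)) - 25 * np.clip(eps - 0.35, 0, None) ** 2
sigma = sigma + rng.normal(0.0, 0.5, eps.size)

# Smooth the measured points with a thin-plate-spline RBF (1D here; the paper
# interpolates a surface over several flow curves in a single step).
rbf = Rbf(eps, sigma, function='thin_plate', smooth=1.0)
eps_f = np.linspace(eps[0], eps[-1], 500)
sig_f = rbf(eps_f)

# Work hardening rate theta = d(sigma)/d(epsilon)
theta = np.gradient(sig_f, eps_f)

# Restrict to the hardening regime up to the stress peak (theta > 0)
m = theta > 0
sig_h, theta_h = sig_f[m], theta[m]

# Poliak-Jonas style evaluation: fit a cubic polynomial theta(sigma) and take
# the inflection point d^2(theta)/d(sigma)^2 = 0 as the critical stress.
a3, a2, a1, a0 = np.polyfit(sig_h, theta_h, 3)
sigma_c = -a2 / (3 * a3)
print("critical stress for DRX onset ~", sigma_c)
```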

    Comparison of semi-empirical and dislocation density based material equations for fast modeling of multistage hot working of steel

    Work hardening in metal forming can be modeled using different types of material models. To capture the material response to plastic deformation, semi-empirical models developed by, e.g., Sellars and others compete with the physically based (or internal variable) models developed by, e.g., Bergström or Kocks and Mecking. The more physical nature of the internal variable models means that they typically consist of complex systems of differential equations built upon a multitude of parameters (often > 20) to describe the various specific phenomena involved. The question thus open to debate is whether these internal variable models are more of an advantage or a burden from an application point of view. In order to shed light on this question, a direct comparison between semi-empirical and internal variable material models is drawn. Both model types are assessed using the following categories: model complexity, effort of model calibration, performance in compression tests, and applicability to hot rolling. The general quality of the models, fitted to the same high-manganese steel, is demonstrated by double-hit compression tests. Additionally, the material models are used to predict flow stress, recrystallized fractions and roll forces in a typical hot strip rolling schedule, i.e. a complex multistage hot working operation. A difference in the best approach and the necessary effort was exposed during model calibration. Validation trials reveal good agreement between both model types. When modeling a complex rolling operation, however, profound differences in the microstructure predictions become apparent.
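
    A minimal sketch of the two model families named above, with made-up parameter values and simplified equation forms (a one-variable Kocks-Mecking dislocation density law versus a purely fitted saturating flow curve); it is not the calibrated model pair compared in the paper.

```python
import numpy as np

# --- Internal-variable (Kocks-Mecking type) model: one dislocation density ---
def km_flow_stress(strain, sigma0=50.0, alpha=0.3, G=45e3, b=2.5e-10,
                   k1=2.0e8, k2=12.0, rho0=1e12):
    """Integrate d(rho)/d(eps) = k1*sqrt(rho) - k2*rho and map rho to stress
    via sigma = sigma0 + alpha*M*G*b*sqrt(rho). Stress in MPa, rho in 1/m^2;
    all values are illustrative assumptions."""
    M = 3.06                                       # Taylor factor
    rho = rho0
    sigma = []
    for de in np.diff(strain, prepend=0.0):
        rho += (k1 * np.sqrt(rho) - k2 * rho) * de  # storage minus recovery
        sigma.append(sigma0 + alpha * M * G * b * np.sqrt(rho))
    return np.array(sigma)

# --- Semi-empirical (Sellars type) model: purely fitted saturating curve ---
def sellars_flow_stress(strain, sigma_sat=180.0, sigma_y=60.0, c=8.0):
    """sigma = sigma_sat - (sigma_sat - sigma_y) * exp(-c * eps), in MPa."""
    return sigma_sat - (sigma_sat - sigma_y) * np.exp(-c * strain)

eps = np.linspace(0.0, 0.6, 61)
print(km_flow_stress(eps)[-1], sellars_flow_stress(eps)[-1])
```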

    Influence of microstructure representation on flow stress and grain size prediction in through-process modeling of AA5182 sheet production

    Integrated computational materials engineering is a modern approach for developing new materials and optimizing complete process chains. In the simulation of a process chain, material models play a central role, as they capture the response of the material to external process conditions. While much effort is put into their development and improvement, less attention is paid to their implementation. This is problematic because the representation of the microstructure in the model has a decisive influence on modeling accuracy and calculation speed. The aim of this article is to analyze the influence of different microstructure representation concepts on the prediction of flow stress and microstructure evolution when using the same set of material equations. Scalar, tree-based and cluster-based concepts are compared for a multi-stage rolling process of an AA5182 alloy. It was found that the implementation influences the predicted flow stress and grain size, in particular in the regime of coupled hardening and softening. © 2012 TMS
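
    As an assumption-laden illustration of why the representation matters (not the article's implementation): the same partial-recrystallization step applied once to scalar averaged state variables and once to a small population of grain classes. Because the stress-dislocation relation is nonlinear, the two representations predict different flow stresses.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GrainClass:
    fraction: float      # volume fraction of this class
    grain_size: float    # mean grain size in micrometres
    rho: float           # dislocation density in 1/m^2

def flow_stress(rho, sigma0=40.0, k=6e-6):
    """Taylor-type relation sigma = sigma0 + k*sqrt(rho) (MPa, illustrative)."""
    return sigma0 + k * rho ** 0.5

def recrystallize_scalar(grain_size, rho, X, d_rex=15.0, rho_rex=1e12):
    """Scalar representation: mix the state variables into single averages."""
    return ((1 - X) * grain_size + X * d_rex,
            (1 - X) * rho + X * rho_rex)

def recrystallize_classes(classes: List[GrainClass], X, d_rex=15.0, rho_rex=1e12):
    """Class-based representation: keep deformed and recrystallized material
    as separate populations instead of averaging their state variables."""
    new = [GrainClass(c.fraction * (1 - X), c.grain_size, c.rho) for c in classes]
    new.append(GrainClass(X, d_rex, rho_rex))
    return new

X = 0.4                                  # assumed recrystallized fraction after one pass
d_avg, rho_avg = recrystallize_scalar(60.0, 5e14, X)
classes = recrystallize_classes([GrainClass(1.0, 60.0, 5e14)], X)

sigma_scalar = flow_stress(rho_avg)
sigma_classes = sum(c.fraction * flow_stress(c.rho) for c in classes)
print(sigma_scalar, sigma_classes)       # the two representations disagree
```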

    A new FE-model for the investigation of bond formation and failure in roll bonding processes

    Roll bonding is a process in which two or more different materials are joined permanently during rolling. A typical industrial application is the manufacturing of aluminum sheets for heat exchangers in cars, where the solder is joined onto a base layer by roll bonding. From a modelling point of view, the challenge is to describe the bond formation and failure of the different material layers within an FE process model. Most methods established today either tie the different layers together or treat them as completely separate. The problem with both assumptions is that they cannot describe the influence of tangential stresses, which occur in addition to the normal stresses within the roll gap and can cause layer shifting. To overcome these restrictions, this paper presents a 2D FE-model that integrates an adapted contact formulation able to join two bodies that are completely separate at the start of the simulation. The contact formulation is contained in a user subroutine that models bond formation by adhesion depending on material flow and load. Additionally, already established bonds can fail if the deformation conditions are detrimental. This FE-model is then used to investigate the process boundaries of the first passes of a typical rolling schedule in terms of achievable height reductions. The results show that passes with unfavorable height reduction introduce tensile and shear stresses that can lead to incomplete bonding or can even destroy the bond entirely. It is expected that, with adequate calibration, the developed FE-model can be used to identify conditions that are favorable for bond formation in roll bonding prior to production and hence can lead to shorter rolling schedules with higher robustness.
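
    A hedged sketch, in Python rather than an FE user subroutine, of the kind of per-contact-point bond-state update the abstract describes: bond strength grows under normal pressure combined with fresh surface creation, and an established bond fails under tensile or excessive tangential loading. All thresholds and functional forms are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class BondState:
    bonded: bool = False
    strength: float = 0.0     # current bond strength in MPa (assumed scale)

def update_bond(state: BondState, p_n: float, surface_expansion: float,
                tau_t: float, sigma_n: float,
                p_min: float = 150.0, growth: float = 0.2,
                strength_max: float = 80.0) -> BondState:
    """One increment of bond formation/failure at a contact point.

    p_n               normal contact pressure (MPa), positive in compression
    surface_expansion relative new-surface creation in this increment
    tau_t, sigma_n    tangential traction and normal stress (tension > 0), MPa
    """
    # Bond formation: adhesion grows only under sufficient pressure together
    # with fresh surface exposed by plastic elongation.
    if p_n > p_min and surface_expansion > 0.0:
        state.strength = min(strength_max,
                             state.strength + growth * p_n * surface_expansion)
        state.bonded = state.strength > 0.0

    # Bond failure: detrimental conditions (tension or excessive shear)
    # destroy an already established bond.
    if state.bonded and (sigma_n > state.strength or abs(tau_t) > state.strength):
        state.bonded = False
        state.strength = 0.0
    return state

s = BondState()
s = update_bond(s, p_n=300.0, surface_expansion=0.05, tau_t=5.0, sigma_n=-50.0)
print(s)   # bond starts to form under the assumed favourable conditions
```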

    Using data sampling and inverse optimization for the reduction of the experimental effort in the characterization of hot working behaviour for a case hardening steel

    For the full characterization of the hot working behaviour of a given material, a large number of laboratory experiments have to be performed. The experiments themselves are time consuming and the required specimen material can be quite expensive. With the increasing versatility of testing machines, such as dilatometers with easily variable temperatures, rethinking the classical approaches to materials characterization becomes expedient. In this paper, a new technique for reducing the experimental effort is presented, using a 25MoCrS4 case hardening steel as an example. To analyse the potential for reducing the experimental effort, the classical approach of a full experimental test matrix is chosen: 55 flow curves at temperatures between 700 and 1200°C and strain rates from 0.01 to 100/s are experimentally determined. Then a semi-empirical model for strain hardening and dynamic recrystallization is fitted using an automated routine for parameter determination, taking all available flow curves into account. Subsequently, the number of flow curves used to fit the model parameters is gradually reduced, and the model accuracy obtained with the reduced experimental data is compared to the initial fit. The natural decrease in accuracy when using less data is weighed against the gain due to the reduced experimental effort. In addition, the optimal distribution of the sampling points in the experimental matrix for a reduced number of experiments is discussed. It is shown that less than a quarter of the full matrix is sufficient to reach accuracies comparable to those obtained with the full matrix. Using the vertices of the full experimental matrix and a symmetrical distribution of the remaining data points allows a drastic reduction of the experimental effort while maintaining the initial accuracy. The results suggest that it might be possible to reduce the costs and effort for material characterization by 50-80%.
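
    An illustrative sketch of this reduced-sampling idea (not the authors' routine): a simple sinh-type peak-stress model is fitted by least squares once to a full synthetic 5 x 5 test matrix and once to only the four vertices plus the centre point, and the accuracy over the full matrix is compared. The model form, parameter values and noise level are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

R = 8.314  # J/(mol K)

def peak_stress(params, T, rate):
    """Illustrative sinh-type (Zener-Hollomon) law for the peak flow stress in MPa."""
    A, n, alpha, Q = params
    Z = rate * np.exp(Q / (R * T))                # Zener-Hollomon parameter
    return np.arcsinh((Z / A) ** (1.0 / n)) / alpha

# Synthetic "full test matrix": 5 temperatures x 5 strain rates (placeholder data,
# mimicking the 700-1200 degC and 0.01-100 1/s ranges mentioned in the abstract)
T_grid = np.linspace(973.0, 1473.0, 5)            # K
rate_grid = np.logspace(-2, 2, 5)                 # 1/s
TT, RR = np.meshgrid(T_grid, rate_grid)
true_params = (1e13, 5.0, 0.012, 310e3)
rng = np.random.default_rng(0)
sigma_meas = peak_stress(true_params, TT, RR) * (1 + rng.normal(0, 0.02, TT.shape))

def fit(mask):
    """Fit the four model parameters using only the test conditions in mask."""
    res = least_squares(
        lambda p: peak_stress(p, TT[mask], RR[mask]) - sigma_meas[mask],
        x0=(1e12, 4.0, 0.010, 280e3),
        bounds=([1e10, 2.0, 1e-3, 1e5], [1e16, 8.0, 0.05, 5e5]),
        x_scale=(1e12, 1.0, 0.01, 1e5))
    return res.x

full = np.ones_like(TT, dtype=bool)
reduced = np.zeros_like(full)                     # vertices plus centre of the matrix
for i, j in [(0, 0), (0, -1), (-1, 0), (-1, -1), (2, 2)]:
    reduced[i, j] = True

for name, mask in [("full matrix (25 tests)", full), ("reduced (5 tests)", reduced)]:
    p = fit(mask)
    rmse = np.sqrt(np.mean((peak_stress(p, TT, RR) - sigma_meas) ** 2))
    print(name, "-> RMSE over full matrix:", round(float(rmse), 2), "MPa")
```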
    • …