
    A study of blow-ups in the Keller-Segel model of chemotaxis

    We study the Keller-Segel model of chemotaxis and develop a composite particle-grid numerical method with adaptive time stepping that allows us to accurately resolve singular solutions. The numerical findings (in two dimensions) are then compared with analytical predictions regarding the formation and interaction of singularities, obtained via analysis of the stochastic differential equations associated with the Keller-Segel model.
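    For orientation, one common parabolic-parabolic form of the minimal Keller-Segel system reads as follows (a sketch; the paper's exact formulation and scaling are not quoted here):

        \partial_t \rho = \Delta\rho - \chi\,\nabla\cdot(\rho\,\nabla c), \qquad
        \partial_t c = \Delta c - c + \rho,

    where \rho is the cell density, c the chemoattractant concentration and \chi > 0 the chemotactic sensitivity. In two dimensions, the closely related parabolic-elliptic variant is known to blow up in finite time once the total mass exceeds the critical value 8\pi/\chi, which is the singularity-formation regime at issue here.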

    Assessment of the effectiveness of a risk-reduction measure on pluvial flooding and economic loss in Eindhoven, the Netherlands

    Copyright © 2013 The Authors. Published by Elsevier Ltd. (open access). Presented at the 12th International Conference on Computing and Control for the Water Industry (CCWI2013).
    Cities are increasingly assessing and reducing pluvial flood risk, and quantitative assessment of the effectiveness of risk-reduction measures is required. We use hydraulic simulation combined with GIS-based financial analysis to assess the pluvial flood risk for Eindhoven, The Netherlands. Analysis is carried out for four scenarios: two rainfall events, each with and without separation of the combined sewer-stormwater network. Flooding statistics show how the risk-reduction measure impacts local flooding, and financial analysis demonstrates the saving it produces: expected annual damage is reduced by c. €130,500. City authorities are thus better equipped to make cost-benefit decisions regarding implementation of pluvial flood risk-reduction measures.
    Funding: EC FP7 project PREPARED: Enabling Change.
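    As a hedged illustration of how an expected-annual-damage figure of this kind can be obtained from per-event losses and return periods (the study's exact financial method is not reproduced here; the damages and return periods below are invented placeholders):

        # Hedged sketch: estimating Expected Annual Damage (EAD) by integrating
        # per-event damage over annual exceedance probability. All numbers are
        # illustrative placeholders, not values from the study.
        import numpy as np

        return_periods = np.array([2.0, 5.0, 10.0])   # years (hypothetical)
        damages = np.array([1.0e6, 2.5e6, 4.0e6])     # EUR per event (hypothetical)

        p = 1.0 / return_periods                      # annual exceedance probabilities
        order = np.argsort(p)                         # integrate over ascending p
        ead = np.trapz(damages[order], p[order])      # trapezoidal rule, EUR/year
        print(f"EAD ~ EUR {ead:,.0f} per year")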

    Assessing Financial Loss due to Pluvial Flooding and the Efficacy of Risk-Reduction Measures in the Residential Property Sector

    The final publication is available at Springer via http://dx.doi.org/10.1007/s11269-014-0833-6.
    A novel quantitative risk assessment for residential properties at risk of pluvial flooding in Eindhoven, The Netherlands, is presented. A hydraulic model of Eindhoven was forced with low-return-period rainfall events (2-, 5- and 10-year design rainfalls). Three scenarios were analysed for each event: a baseline and two risk-reduction scenarios. GIS analysis identified the areas where risk-reduction measures had the greatest impact. Financial loss calculations were carried out using fixed-threshold and probabilistic approaches. Under the fixed-threshold assessment, per-event Expected Annual Damage (EAD) reached €38.2m, with reductions of up to €454,000 resulting from the risk-reduction measures. Present costs of flooding reach €1.43bn when calculated over a 50-year period, and all net-present-value figures for the risk-reduction measures are negative. The probabilistic assessment yielded EAD values up to more than double those of the fixed-threshold analysis, which suggested positive net present value. To the best of our knowledge, the probabilistic method based on the distribution of doorstep heights has not previously been applied to pluvial flood risk assessment. Although this work suggests poor net present value for the risk-reduction measures, indirect impacts of flooding, damage to infrastructure and the potential impacts of climate change were omitted. This work represents a useful first step in helping Eindhoven prepare for future pluvial flooding. The analysis is based on software and tools already available at the municipality, eliminating the need for software upgrades or training, and the approach is generally applicable to similar cities.
    Funding: European Commission Seventh Framework Program (EC FP7).
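    The net-present-value finding follows from discounting avoided damages against the up-front cost of the measure. A minimal sketch of that arithmetic, assuming an invented capital cost, EAD reduction and discount rate (none of these figures are from the paper):

        # Hedged sketch: NPV of a risk-reduction measure over the paper's 50-year
        # horizon. Capital cost, avoided damage and discount rate are invented.
        capital_cost = 2.0e6        # EUR, up-front cost (hypothetical)
        avoided_ead = 120_000.0     # EUR/year of avoided damage (hypothetical)
        rate = 0.04                 # annual discount rate (hypothetical)
        horizon = 50                # years, matching the assessment period

        pv_benefits = sum(avoided_ead / (1 + rate) ** t for t in range(1, horizon + 1))
        npv = pv_benefits - capital_cost
        print(f"PV of avoided damage: EUR {pv_benefits:,.0f}; NPV: EUR {npv:,.0f}")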

    The Influence of Specimen Thickness on the High Temperature Corrosion Behavior of CMSX-4 during Thermal-Cycling Exposure

    CMSX-4 is a single-crystalline Ni-base superalloy designed for use at very high temperatures and under high mechanical loads. Its excellent corrosion resistance is due to the formation of an external alumina scale, which, however, can become less protective under thermal-cycling conditions. The metallic substrate together with its superficial oxide scale has to be considered as a composite subject to high stresses. Factors such as the differing coefficients of thermal expansion of oxide and substrate during temperature changes, or growth stresses, affect the integrity of the oxide scale. This must also be strongly influenced by the thickness of the oxide scale and of the substrate, as well as by the ability to relieve such stresses, e.g., by creep deformation. In order to quantify these effects, thin-walled specimens of different thicknesses (t = 100–500 µm) were prepared. Discontinuous measurements of their mass changes were carried out under thermal-cycling conditions at a hot dwell temperature of 1100 °C for up to 300 thermal cycles. Thin-walled specimens revealed a much lower oxide-spallation rate than thick-walled specimens, but might show a premature depletion of scale-forming elements. In order to determine which of these competing factors is more detrimental to a component's lifetime, the degradation by internal precipitation was studied using scanning electron microscopy (SEM) in combination with energy-dispersive X-ray spectroscopy (EDS). Additionally, a recently developed statistical spallation model was applied to the experimental data [D. Poquillon and D. Monceau, Oxidation of Metals, 59, 409–431 (2003)]. The model describes the overall mass change caused by oxide-scale spallation during thermal-cycling exposure and is a useful simulation tool for oxide-scale spallation processes, accounting for variations in specimen geometry. The evolution of the net mass change with the number of thermal cycles appears to be strongly dependent on the sample thickness.
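    As a loose illustration of the kind of cycle-by-cycle bookkeeping such statistical spallation models perform, here is a toy simulation; it is not the Poquillon-Monceau model, and the rate constant, spallation probability and spalled fraction are invented:

        # Toy cyclic-oxidation sketch: parabolic scale growth during each hot
        # dwell plus random partial spallation on cooling. NOT the published
        # Poquillon-Monceau model; all parameter values are invented, and the
        # metal fraction of the spalled scale is ignored for simplicity.
        import random

        kp = 4.0e-4     # parabolic rate constant, (mg/cm^2)^2 per cycle (invented)
        p_spall = 0.05  # per-cycle probability of a spallation event (invented)
        frac = 0.2      # fraction of adherent scale lost per event (invented)

        oxide = 0.0     # adherent scale mass per area, mg/cm^2
        uptake = 0.0    # cumulative oxygen uptake, mg/cm^2
        spalled = 0.0   # cumulative spalled scale mass, mg/cm^2
        random.seed(0)

        for cycle in range(300):                 # 300 cycles, as in the study
            grown = (oxide**2 + kp) ** 0.5       # parabolic growth during the dwell
            uptake += grown - oxide
            oxide = grown
            if random.random() < p_spall:        # cooling-induced spallation
                lost = frac * oxide
                oxide -= lost
                spalled += lost

        net = uptake - spalled                   # what gravimetry would record
        print(f"net mass change after 300 cycles: {net:+.3f} mg/cm^2")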

    Consistent alpha-cluster description of the 12C (0^+_2) resonance

    The near-threshold 12C (0^+_2) resonance provides a unique possibility for fast helium burning in stars, as predicted by Hoyle to explain the observed abundance of elements in the Universe. Properties of this resonance are calculated within the framework of the alpha-cluster model, whose two-body and three-body effective potentials are tuned to describe the alpha-alpha scattering data, the energies of the 0^+_1 and 0^+_2 states, and the 0^+_1-state root-mean-square radius. The extremely small width of the 0^+_2 state, the 0^+_2 → 0^+_1 monopole transition matrix element, and the transition radius are found to be in remarkable agreement with the experimental data. The 0^+_2-state structure is described as a system of three alpha particles oscillating between a ground-state-like configuration and an elongated chain configuration whose probability exceeds 0.9.
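    Schematically, the three-alpha Hamiltonian in such cluster models has the generic structure (a sketch only; the paper's specific potential forms are not reproduced here):

        H = \sum_{i=1}^{3} T_i + \sum_{i<j} V_{\alpha\alpha}(r_{ij})
            + V_{3\alpha}(r_{12}, r_{23}, r_{31}),

    where the pairwise potential V_{\alpha\alpha} is constrained by the alpha-alpha scattering data and the three-body term V_{3\alpha} is tuned to reproduce the 0^+_1 and 0^+_2 energies and the ground-state radius, as the abstract describes.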

    Teratoma formation of human embryonic stem cells in three-dimensional perfusion culture bioreactors

    Teratoma formation in mice is today the most stringent test for pluripotency available for human pluripotent cells, as chimera formation and tetraploid complementation cannot be performed with human cells. The teratoma assay could also be applied to assessing the safety of human pluripotent cell-derived cell populations intended for therapeutic applications. In our study we examined the spontaneous differentiation behaviour of human embryonic stem cells (hESCs) in a perfused 3D multi-compartment bioreactor system and compared it with the differentiation of hESCs and human induced pluripotent stem cells (hiPSCs) cultured in vitro as embryoid bodies and in vivo in an experimental mouse model of teratoma formation. Results from biochemical, histological/immunohistological and ultrastructural analyses revealed that hESCs cultured in bioreactors formed tissue-like structures containing derivatives of all three germ layers. Comparison with the embryoid bodies and the teratomas revealed a high degree of similarity between the tissues formed in the bioreactor and those in the teratomas, at the histological as well as the transcriptional level, as detected by comparative whole-genome RNA expression profiling. The 3D culture system represents a novel in vitro model that permits stable long-term cultivation, spontaneous multi-lineage differentiation and tissue formation of pluripotent cells comparable to in vivo differentiation. Such a model is of interest, e.g., for the development of novel cell differentiation strategies. In addition, the 3D in vitro model could be used for teratoma studies and pluripotency assays in a fully defined, controlled environment, as an alternative to in vivo mouse models.

    Consensus clustering in complex networks

    The community structure of complex networks reveals both their organization and hidden relationships among their constituents. Most community detection methods currently available are not deterministic, and their results typically depend on the specific random seeds, initial conditions and tie-break rules adopted for their execution. Consensus clustering is used in data analysis to generate stable results out of a set of partitions delivered by stochastic methods. Here we show that consensus clustering can be combined with any existing method in a self-consistent way, enhancing considerably both the stability and the accuracy of the resulting partitions. This framework is also particularly suitable for monitoring the evolution of community structure in temporal networks. An application of consensus clustering to a large citation network of physics papers demonstrates its capability to keep track of the birth, death and diversification of topics.
    Comment: 11 pages, 12 figures. Published in Scientific Reports.
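    A minimal sketch of the consensus-clustering loop described above, using KMeans as a stand-in for any stochastic clustering or community-detection method (the threshold tau and the number of runs are free choices, not values from the paper):

        # Hedged sketch: run a stochastic clustering many times, average the
        # co-assignment (consensus) matrix, prune weak entries below tau, and
        # iterate on the consensus matrix until all runs agree.
        import numpy as np
        from sklearn.cluster import KMeans

        def co_assignment(M, k, runs=50):
            n = len(M)
            D = np.zeros((n, n))
            for seed in range(runs):
                labels = KMeans(n_clusters=k, n_init=1, random_state=seed).fit_predict(M)
                D += labels[:, None] == labels[None, :]
            return D / runs                      # D[i, j] = co-assignment frequency

        def consensus_cluster(X, k, tau=0.5, max_iter=20):
            D = co_assignment(X, k)
            for _ in range(max_iter):
                D[D < tau] = 0.0                 # drop weak co-assignments
                D_new = co_assignment(D, k)      # re-cluster the consensus matrix
                if np.all((D_new == 0) | (D_new == 1)):
                    break                        # stable: every run agrees
                D = D_new
            return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(D_new)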

    Kernel Spectral Clustering and applications

    In this chapter we review the main literature related to kernel spectral clustering (KSC), an approach to clustering cast within a kernel-based optimization setting. KSC represents a least-squares support vector machine formulation of spectral clustering described by a weighted kernel PCA objective. Just as in the classifier case, the binary clustering model is expressed by a hyperplane in a high-dimensional space induced by a kernel. In addition, multi-way clustering can be obtained by combining a set of binary decision functions via an Error Correcting Output Codes (ECOC) encoding scheme. Because of its model-based nature, the KSC method encompasses three main steps: training, validation and testing. In the validation stage, model selection is performed to obtain tuning parameters such as the number of clusters present in the data. This is a major advantage over classical spectral clustering, where the determination of the clustering parameters is unclear and relies on heuristics. Once a KSC model is trained on a small subset of the entire data, it is able to generalize well to unseen test points. Beyond the basic formulation, sparse KSC algorithms based on the Incomplete Cholesky Decomposition (ICD) and on L_0, L_1, L_0 + L_1 and Group Lasso regularization are reviewed; in that respect, we show how it is possible to handle large-scale data. Two possible ways to perform hierarchical clustering and a soft clustering method are also presented. Finally, real-world applications such as image segmentation, power load time-series clustering, document clustering and big data learning are considered.
    Comment: chapter contribution to the book "Unsupervised Learning Algorithms".
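    A much-simplified sketch of the workflow described above: learn a spectral embedding on a small training subset, then assign unseen points via kernel evaluations against that subset, reading cluster membership off an ECOC-style sign pattern (the kernel bandwidth sigma and the codeword handling are simplifications, not the chapter's exact formulation):

        # Hedged sketch of kernel spectral clustering with out-of-sample
        # extension. Simplified stand-in for KSC's weighted kernel PCA
        # formulation; sigma and the codeword handling are free choices.
        import numpy as np
        from scipy.linalg import eig
        from scipy.spatial.distance import cdist

        def rbf(A, B, sigma=1.0):
            return np.exp(-cdist(A, B, "sqeuclidean") / (2 * sigma**2))

        def ksc_train(X_train, k, sigma=1.0):
            K = rbf(X_train, X_train, sigma)
            d_inv = 1.0 / K.sum(axis=1)              # inverse vertex degrees
            vals, vecs = eig(d_inv[:, None] * K)     # eigenvectors of D^-1 K
            order = np.argsort(-vals.real)
            return vecs[:, order[1:k]].real          # k-1 informative directions

        def ksc_assign(X_train, alpha, X_test, sigma=1.0):
            E = rbf(X_test, X_train, sigma) @ alpha  # projections sum_i alpha_i K(x_i, x)
            codes = (E > 0).astype(int)              # ECOC-style sign encoding
            # full KSC keeps the k most frequent codewords as the codebook;
            # here every distinct sign pattern simply gets its own label
            _, labels = np.unique(codes, axis=0, return_inverse=True)
            return labels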