
    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The contributions collected in this volume have either been published or presented in international conferences, seminars, workshops and journals after the dissemination of the fourth volume in 2015, or they are new. The contributions in each part of this volume are ordered chronologically. The first part of this book presents theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR based on set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of the (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes.
Because more applications of DSmT have emerged since the appearance of the fourth book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of Support Vector Machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents contributions related to belief functions in general, published or presented since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of a belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
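The PCR5 rule mentioned above admits a compact two-source formulation: masses of non-conflicting pairs of focal elements are combined conjunctively, while each conflicting product is redistributed back to the two elements involved, proportionally to their masses. A minimal Python sketch follows (an illustration of the rule only, not the Matlab codes shipped with the book; the dict-of-frozensets representation is an assumption for illustration):

```python
def pcr5(m1, m2):
    """Two-source PCR5 combination of basic belief assignments.

    m1, m2: dicts mapping focal elements (frozensets) to masses summing to 1.
    """
    m = {}
    for X, a in m1.items():
        for Y, b in m2.items():
            Z = X & Y
            if Z:
                # conjunctive part: the product mass goes to the intersection
                m[Z] = m.get(Z, 0.0) + a * b
            elif a + b > 0:
                # conflicting product a*b is split back to X and Y,
                # proportionally to their masses (the PCR5 principle)
                m[X] = m.get(X, 0.0) + a * a * b / (a + b)
                m[Y] = m.get(Y, 0.0) + a * b * b / (a + b)
    return m
```

For example, with m1 = {A: 0.6, A∪B: 0.4} and m2 = {B: 0.3, A∪B: 0.7}, the conflicting product 0.18 is split as 0.12 to A and 0.06 to B, and the combined masses still sum to one.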

    Bluff-body aerodynamics and transfer functions for non-catching precipitation measurement instruments

    Starting from the old and simple technique of using a graduated cylinder to collect and manually measure precipitation, numerous advances have been made in in-situ precipitation gauges. After decades of scarce innovation, a new family of in-situ precipitation gauges was developed. They are called Non-Catching Gauges (NCG) since they can measure precipitation and its microphysical and dynamic characteristics without the need to collect hydrometeors. The attention that NCGs are gathering today is quite notable, even if they represent only a small fraction of the total precipitation gauges deployed. Their use in the field is bound to grow continuously over time, due to several advantages, discussed in this work, that such instruments present over more traditional ones. However, their major disadvantage is their increased complexity, the effects of which are highlighted in the literature through evidence of calibration and correction issues. Various field intercomparison experiments showed evidence of significant biases in NCG measurements. The goal of this work is to investigate the two main sources of bias producing the largest impact on precipitation measurements. The first source of bias evaluated in this work is instrument calibration. Several attempts at developing a calibration procedure have been presented both in the scientific literature and by manufacturers. Nevertheless, those methods are hardly traceable to international standards and, in most cases, lack a suitable reference measure to compare against the instrumental output. In this work, a fully traceable calibration procedure is proposed, in analogy with the one already existing for catching-type gauges. This requires drops of known diameter and fall velocity to be released over the instrument sensing area.
For this reason, the Calibrated Rainfall Generator (CRG) was developed, able to release single drops on demand and measure them independently just before they reach the instrument sensing area. Detachment of drops is obtained using an electrostatic system, while their diameter and fall velocity are measured by means of a photogrammetric approach. The Thies Laser Precipitation Monitor (LPM) was tested using the CRG, considering two different output telegrams. The first provides the raw measurement of each drop sensed by the instrument, while the second provides the Particle Size and fall Velocity Distribution (PSVD) matrix. Both telegrams show a tendency to underestimate the drop diameter that increases with decreasing drop size, while errors in the fall velocity measurements have a less definite trend. Furthermore, tests also show a large standard deviation of the measurements, significantly higher than that of the reference measurements. The underestimation of drop size and fall velocity is also reflected in the Rainfall Intensity (RI) measurements provided by the instrument, with a resulting underestimation that decreases with increasing precipitation intensity. The difference between the two telegrams is large and can only be explained by differences in the instrument's internal processing. The second instrument tested using the CRG is the Biral VPF-750, a light-scatter gauge. Results show a tendency to underestimate both the drop diameter and the fall velocity. In the first case, the error decreases with increasing drop size, similarly to the Thies LPM. However, the error in the fall velocity is considerably higher and instead increases with increasing drop size. In terms of RI, the instrument shows a strong underestimation that, due to the opposite trends observed for drop diameter and fall velocity, is almost constant with precipitation intensity.
Both instruments show significant biases, corroborated by field intercomparison results from the literature, which are often larger than 10% for the investigated variables. This means that neither gauge can be classified according to the guidelines proposed in this work for the development of a standard calibration procedure, derived from those already existing for Catching-type Gauges (CG). The second source of bias is wind, a well-established source of environmental error for traditional CGs that also affects NCGs. The wind-induced bias is investigated using a numerical approach, combining Computational Fluid Dynamics (CFD) and Lagrangian Particle Tracking (LPT) models. Two different CFD models were tested, the first providing a time-independent steady-state solution, while the other is fully time-dependent. Both were compared against wind tunnel results, showing good agreement with the experimental data and proving their ability to capture the complex aerodynamic response of instruments impacted by the wind. The Thies LPM is first chosen as a test instrument, being representative of the typical NCGs currently deployed in the field. CFD simulations show that wind direction is the primary factor determining the aerodynamic disturbance close to the instrument sensing area. Similar results were found for the OTT Parsivel2, another widely deployed NCG. For wind flow parallel to the laser beam, strong disturbance close to the gauge sensing area is observed, while wind perpendicular to the laser beam produces minimal flow disturbance. The wind-induced bias is also investigated for the Vaisala WXT-520, an impact disdrometer. This gauge is smaller and has a more regular shape compared to the optical disdrometers, but its measuring principle is based on the detection of the drop kinetic energy, while size and fall velocity are obtained indirectly.
CFD simulations show limited disturbance close to the sensing area of the instrument and a negligible dependency on wind direction (due to a more radially symmetric geometry). The instrument body further provides minimal shielding of the sensing area. A strong updraft, however, occurs upstream of the instrument for all wind directions, significantly affecting the fall velocity of the smaller and lighter drops. Using these results, three different LPT models are also tested. The first is an uncoupled model based on the time-independent CFD results and is used to evaluate the instrument performance for all wind speeds and directions considered. The other two models, due to their high computational requirements, are applied only to a selected number of combinations of wind speed and direction for the Thies LPM. Results show good agreement and allow concluding that the significant increase in computational burden of the latter two models does not significantly improve the accuracy of the results. However, the one-way coupled model highlights the role of turbulence, which may have a significant impact on the instrumental performance when strong recirculation is present near the sensing area. For the two other gauges, only the uncoupled LPT model in combination with the time-independent CFD model is used, this being the best compromise between numerical accuracy and computational cost. Results of the LPT model are presented in terms of variation in the retrieval of precipitation microphysical properties, Catch Ratios (CR), Collection Efficiency (CE) and Radar Retrieval Efficiency (RRE). For the three gauges considered, it is shown that the fall velocity of smaller hydrometeors close to the instrument sensing area is strongly affected by wind and is, in general, reduced. A significant wind-induced bias is also evident in the Drop Size Distribution (DSD) measured by the gauges.
Optical gauges may report a significantly lower number of small hydrometeors even at moderate wind speed, due to the gauge body partially shielding the sensing area. The impact gauge DSD is also strongly influenced by wind, since hydrometeors with high kinetic energy are sensed as having a large diameter. The DSD is therefore shifted towards larger diameters and the instrument tends to overestimate the number of hydrometeors of all sizes. This suggests that the different shapes of the DSD function reported in the field by different instruments may be due, at least partially, to wind-induced biases. In terms of integral precipitation characteristics, wind direction is the primary factor determining the performance of optical gauges in windy conditions. For wind parallel to the laser beam, the instrument senses less and less precipitation with increasing wind speed, with no hydrometeors even reaching the sensing area in some configurations. On the other hand, when the wind is perpendicular to the laser beam, the instrument performs similarly at all wind speeds, with CR and CE values close to one and only a moderate amount of overcatch observed at high wind speed. Only for the OTT Parsivel2 is a non-negligible overcatch also evident for wind coming at a 45° angle with respect to the beam direction. For the Vaisala WXT-520, the Kinetic Catch Ratio (KCR) and Kinetic Collection Efficiency (KCE) are defined as substitutes for the CR and CE. At low wind speed, the KCR is below unity, due to the reduction in fall velocity produced by the updraft. However, with increasing wind speed, the kinetic energy of hydrometeors carried by the wind increases considerably, overcoming the reduction caused by the updraft close to the gauge. For this reason, KCR values become much higher than unity, especially for small hydrometeors.
The increase in kinetic energy is reflected in increased KCE values, which are close to unity at low wind speed but rapidly grow with increasing wind speed. Wind direction instead has very limited influence on the measurements. In terms of RRE, optical gauges present limited bias for all combinations of wind speed and direction, except for the highest wind speed and flow parallel to the laser beam. This is because a large portion of the radar reflectivity factor (dBZ) is due to medium and large hydrometeors, which are less influenced by wind. In the case of the impact disdrometer, instead, the RRE behaves very similarly to the CE, with values that increase with increasing wind speed. This is due to the shift toward larger diameters in the DSD that occurs when the hydrometeors' kinetic energy is increased by wind.
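The link between a PSVD matrix and the reported rainfall intensity can be made concrete with a small sketch: the water volume of every sensed drop is summed, converted to an equivalent depth over the sensing area, and scaled to a rate. The bin layout, units (diameters in mm, area in m², interval in s) and the function name are illustrative assumptions, not the Thies or Biral telegram formats:

```python
import math

def rainfall_intensity(psvd, diameters_mm, area_m2, dt_s):
    """Integrate a particle-count matrix into a rainfall intensity (mm/h).

    psvd[i]: list of drop counts for diameter class i (velocity classes),
    diameters_mm[i]: representative diameter of class i in mm.
    """
    # total water volume in mm^3, each drop treated as a sphere
    vol_mm3 = sum(sum(counts) * math.pi / 6.0 * d**3
                  for d, counts in zip(diameters_mm, psvd))
    # mm^3 of water spread over area_m2 gives a depth in mm (factor 1e-6)
    depth_mm = vol_mm3 * 1e-6 / area_m2
    return depth_mm * 3600.0 / dt_s  # depth per interval -> mm/h
```

A single 2 mm drop over a 0.005 m² sensing area in one minute, for instance, corresponds to roughly 0.05 mm/h.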

    Lattice Boltzmann Methods for Partial Differential Equations

    Lattice Boltzmann methods provide a robust and highly scalable numerical technique in modern computational fluid dynamics. Besides the discretization procedure, the relaxation principles form the basis of any lattice Boltzmann scheme and render the method a bottom-up approach, which obstructs its development for approximating broad classes of partial differential equations. This work introduces a novel coherent mathematical path to jointly approach the topics of constructability, stability, and limit consistency for lattice Boltzmann methods. A new constructive ansatz for lattice Boltzmann equations is introduced, which highlights the concept of relaxation in a top-down procedure starting at the targeted partial differential equation. Modular convergence proofs are used at each step to identify the key ingredients of relaxation frequencies, equilibria, and moment bases in the ansatz, which determine linear and nonlinear stability as well as consistency orders of relaxation and space-time discretization. For the latter, conventional techniques are employed and extended to determine the impact of the kinetic limit at the very foundation of lattice Boltzmann methods. To computationally analyze nonlinear stability, extensive numerical tests are enabled by combining the intrinsic parallelizability of lattice Boltzmann methods with the platform-agnostic and scalable open-source framework OpenLB. Through upscaling the number and quality of computations, large variations in the parameter spaces of classical benchmark problems are considered for the exploratory indication of methodological insights. Finally, the introduced mathematical and computational techniques are applied for the proposal and analysis of new lattice Boltzmann methods. 
Based on stabilized relaxation, limit consistent discretizations, and consistent temporal filters, novel numerical schemes are developed for approximating initial value problems and initial boundary value problems as well as coupled systems thereof. In particular, lattice Boltzmann methods are proposed and analyzed for temporal large eddy simulation, for simulating homogenized nonstationary fluid flow through porous media, for binary fluid flow simulations with higher order free energy models, and for the combination with Monte Carlo sampling to approximate statistical solutions of the incompressible Euler equations in three dimensions.
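The relaxation principle the work builds on can be illustrated with the smallest possible example, a D1Q2 BGK scheme for the 1D diffusion equation (a toy sketch in plain NumPy, unrelated to the OpenLB API): populations are relaxed toward their equilibria at rate omega, then streamed to neighbouring nodes, and the zeroth moment recovers the diffusing density.

```python
import numpy as np

def lbm_diffusion_1d(rho0, omega, steps):
    """D1Q2 BGK lattice Boltzmann scheme for 1D diffusion (periodic domain).

    Nominal diffusivity is (1/omega - 1/2) in lattice units.
    """
    w = 0.5                              # equal weights for the two velocities
    f = np.stack([w * rho0, w * rho0])   # f[0] moves right, f[1] moves left
    for _ in range(steps):
        rho = f[0] + f[1]                # zeroth moment: the density
        feq = np.stack([w * rho, w * rho])
        f += omega * (feq - f)           # BGK collision: relax toward feq
        f[0] = np.roll(f[0], 1)          # streaming to neighbouring nodes
        f[1] = np.roll(f[1], -1)
    return f[0] + f[1]
```

Both the collision and the streaming step conserve mass exactly, so an initial point pulse spreads out while its integral stays constant, mirroring the behaviour of the continuous diffusion equation.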

    A unifying mathematical definition enables the theoretical study of the algorithmic class of particle methods

    Mathematical definitions provide a precise, unambiguous way to formulate concepts. They also provide a common language between disciplines. Thus, they are the basis for a well-founded scientific discussion. In addition, mathematical definitions allow for deeper insights into the defined subject based on mathematical theorems that are incontrovertible under the given definition. Besides their value in mathematics, mathematical definitions are indispensable in other sciences like physics, chemistry, and computer science. In computer science, they help to derive the expected behavior of a computer program and provide guidance for the design and testing of software. Therefore, mathematical definitions can be used to design and implement advanced algorithms. One class of widely used algorithms in computer science is the class of particle-based algorithms, also known as particle methods. Particle methods can solve complex problems in various fields, such as fluid dynamics, plasma physics, or granular flows, using diverse simulation methods, including Discrete Element Methods (DEM), Molecular Dynamics (MD), Reproducing Kernel Particle Methods (RKPM), Particle Strength Exchange (PSE), and Smoothed Particle Hydrodynamics (SPH). Despite the increasing use of particle methods driven by improved computing performance, the relation between these algorithms remains formally unclear. In particular, particle methods lack a unifying mathematical definition and precisely defined terminology. This prevents the determination of whether an algorithm belongs to the class and what distinguishes the class. Here we present a rigorous mathematical definition of particle methods and demonstrate its importance by applying it to several canonical algorithms and to algorithms not previously recognized as particle methods. Furthermore, we base proofs of theorems about parallelizability and computational power on it and use it to develop scientific computing software.
Our definition unifies, for the first time, the so far loosely connected notions of particle methods. Thus, it marks the necessary starting point for a broad range of joint formal investigations and applications across fields.
Contents:
1 Introduction: 1.1 The Role of Mathematical Definitions; 1.2 Particle Methods; 1.3 Scope and Contributions of this Thesis
2 Terminology and Notation
3 A Formal Definition of Particle Methods: 3.1 Introduction; 3.2 Definition of Particle Methods (3.2.1 Particle Method Algorithm; 3.2.2 Particle Method Instance; 3.2.3 Particle State Transition Function); 3.3 Explanation of the Definition of Particle Methods (3.3.1 Illustrative Example; 3.3.2 Explanation of the Particle Method Algorithm; 3.3.3 Explanation of the Particle Method Instance; 3.3.4 Explanation of the State Transition Function); 3.4 Conclusion
4 Algorithms as Particle Methods: 4.1 Introduction; 4.2 Perfectly Elastic Collision in Arbitrary Dimensions; 4.3 Particle Strength Exchange; 4.4 Smoothed Particle Hydrodynamics; 4.5 Lennard-Jones Molecular Dynamics; 4.6 Triangulation refinement; 4.7 Conway's Game of Life; 4.8 Gaussian Elimination; 4.9 Conclusion
5 Parallelizability of Particle Methods: 5.1 Introduction; 5.2 Particle Methods on Shared Memory Systems (5.2.1 Parallelization Scheme; 5.2.2 Lemmata; 5.2.3 Parallelizability; 5.2.4 Time Complexity; 5.2.5 Application); 5.3 Particle Methods on Distributed Memory Systems (5.3.1 Parallelization Scheme; 5.3.2 Lemmata; 5.3.3 Parallelizability; 5.3.4 Bounds on Time Complexity and Parallel Scalability); 5.4 Conclusion
6 Turing Powerfulness and Halting Decidability: 6.1 Introduction; 6.2 Turing Machine; 6.3 Turing Powerfulness of Particle Methods Under a First Set of Constraints; 6.4 Turing Powerfulness of Particle Methods Under a Second Set of Constraints; 6.5 Halting Decidability of Particle Methods; 6.6 Conclusion
7 Particle Methods as a Basis for Scientific Software Engineering: 7.1 Introduction; 7.2 Design of the Prototype; 7.3 Applications, Comparisons, Convergence Study, and Run-time Evaluations; 7.4 Conclusion
8 Results, Discussion, Outlook, and Conclusion: 8.1 Problem; 8.2 Results; 8.3 Discussion; 8.4 Outlook; 8.5 Conclusion
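The particle state transition function defined in Chapter 3 can be caricatured as a generic interact-then-evolve loop over a particle collection. The sketch below illustrates the idea only; the names, the fixed pairwise sweep, and the list representation are assumptions for illustration, not the thesis' formal notation:

```python
def particle_state_transition(particles, interact, evolve, stop):
    """Generic particle-method loop: pairwise interactions, then
    per-particle evolution, iterated until a stopping condition holds."""
    while not stop(particles):
        # interaction stage: every particle is updated against every other
        for i in range(len(particles)):
            for j in range(len(particles)):
                if i != j:
                    particles[i] = interact(particles[i], particles[j])
        # evolution stage: each particle advances independently
        particles = [evolve(p) for p in particles]
    return particles
```

Instantiating interact, evolve and stop differently recovers very different algorithms (SPH time stepping, cellular automata, even Gaussian elimination, as the thesis shows), which is exactly the unifying point of the definition.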

    Elements of Ion Linear Accelerators, Calm in the Resonances, and Other Tales

    The main part of this book, Elements of Linear Accelerators, outlines in Part 1 a framework for non-relativistic linear accelerator focusing and accelerating channel design, simulation, optimization and analysis where space charge is an important factor. Part 1 is the most important part of the book; grasping the framework is essential to fully understand and appreciate the elements within it, and the myriad application details of the following Parts. The treatment concentrates on all linacs, large or small, intended for high-intensity, very low beam loss, factory-type application. The Radio-Frequency Quadrupole (RFQ) is especially developed as a representative and the most complicated linac form (from dc to bunched and accelerated beam), extending to practical design of long, high-energy linacs, including space charge resonances and beam halo formation, and some challenges for future work. A practical method is also presented for designing Alternating-Phase-Focused (APF) linacs with long sequences and high energy gain. Full open-source software is available. The following part, Calm in the Resonances and Other Tales, contains eyewitness accounts of nearly 60 years of participation in accelerator technology. (September 2023) The LINACS codes are released at no cost and, as always, with fully open-source coding (p. 2 & Ch. 19.10). Comment: 652 pages; some hundreds of figures, all images (there is no data in the figures).

    Towards a unified multiphysics framework applied to reactive bubbly flows

    Historically, CFD codes implementing arbitrary Lagrangian-Eulerian interface tracking (ALE-IT) for the simulation of bubbly flows were highly customised for this specific application. However, taking a broader look at multiphase flows in general, it is noticeable that they resemble other multiphysics systems in their basic structure. Analogous to Fluid-Structure Interaction (FSI) or Conjugate Heat Transfer (CHT) problems, multiphase systems also involve several regions with distinct physical properties that interact with each other across interfaces. Exploiting these structural similarities, a novel unified multiphysics framework for OpenFOAM named multiRegionFoam is introduced, which incorporates the ALE-IT and is tested for its application to reactive bubbly flows. In this context, the parallelisability of the new framework is addressed in particular, since interface-coupled multiphysics simulations in general, and simulations of reactive bubbly flows in particular, are very computationally intensive. Furthermore, a convergence control for the Dirichlet-Neumann algorithm based on interface residuals is implemented and tested.
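A Dirichlet-Neumann coupling with an interface-residual stopping criterion can be sketched generically: one region is solved with the interface value prescribed (returning a flux), the other with the flux prescribed (returning a value), and the loop stops once the interface residual drops below tolerance. The function names, the scalar interface unknown, and the fixed under-relaxation below are illustrative assumptions, not the multiRegionFoam API:

```python
def dirichlet_neumann(solve_dirichlet, solve_neumann, u0,
                      tol=1e-8, max_iter=100, relax=0.5):
    """Partitioned Dirichlet-Neumann iteration on a scalar interface value.

    solve_dirichlet: interface value -> interface flux (region 1 solve)
    solve_neumann:   interface flux  -> interface value (region 2 solve)
    Returns the converged interface value and the iteration count.
    """
    u = u0
    for k in range(max_iter):
        flux = solve_dirichlet(u)        # region 1 with Dirichlet data
        u_new = solve_neumann(flux)      # region 2 with Neumann data
        residual = abs(u_new - u)        # interface residual drives the stop
        u = u + relax * (u_new - u)      # under-relaxed interface update
        if residual < tol:
            return u, k + 1
    return u, max_iter
```

With linear surrogate solvers the iteration is a contraction and converges to the coupled solution; monitoring the interface residual (rather than a fixed iteration count) is what the abstract's convergence control refers to.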

    Numerical-laboratory modelling of waves interacting with dams and rigid-flexible plates

    Fluid-Structure Interaction (FSI) is relevant for a range of mechanical processes, including wave impacts on offshore and coastal structures, wind-excited vibrations of tall buildings, fluttering of bridges and blood flow in arteries. Within the FSI phenomenon, Wave-Structure Interaction (WSI) involves wave impacts on dams, flood protection barriers, wave energy converters, seawalls, breakwaters, oil and gas platforms and offshore wind turbines. These structures are often challenged by extreme waves, e.g. tsunamis generated by landslides, rockfalls and iceberg calving, potentially leading to structural damage under exceptional conditions. For structures undergoing non-negligible deformations, referred to herein as Wave-Flexible Structure Interaction (WFSI), the physical processes are even more complex. Unfortunately, accurate predictions of wave effects, e.g. forces, on rigid and flexible structures are still challenging, and laboratory models often involve scale effects. This thesis explored a range of WSI phenomena based on the numerical model solids4foam, along with small-scale laboratory experiments. Two-Dimensional (2D) and Three-Dimensional (3D) tsunamis impacting dams were investigated first. The numerical wave loading agreed with predictions based on an existing approach, and new empirical equations for wave run-up and overtopping of dams were proposed. The dynamic pressures were also investigated and correlated with new semi-theoretical equations. New insight into 3D effects, including dam curvature and asymmetrical wave impacts, was provided for selected cases. The combination of both these effects resulted in up to 32% larger run-ups compared to the 2D predictions. 2D wave impacts on offshore and onshore plates of different stiffnesses were then modelled, along with selected 3D tests. The plate stiffness had a negligible effect on the upwave forces for the majority of these tests.
However, the offshore flexible plates resulted in up to 40% smaller total forces compared to the rigid ones, due to increased downwave water depths following the plate deformations. For the onshore tests, the time series of the wave loading were characterised by two force peaks, consistent with previous studies. The second force peaks were up to 3.3 times larger than the first peaks. New semi-theoretical equations were proposed to predict the onshore wave forces and run-ups of a plate as a function of the offshore wave energy. Finally, a systematic investigation of the scaling approaches and scale effects for wave impacts on rigid and flexible plates was conducted based on numerical modelling supported by small-scale laboratory tests. The WFSI governing parameters were derived and successfully validated based on the numerical results. A number of simulations, involving non-breaking and breaking wave impacts, were then conducted for the prototypes and up to 40 times smaller models. These were scaled according to the scaling approaches (i) precise Froude (fluid and plate properties scaled), (ii) traditional Froude-Cauchy (fluid properties unscaled, plate properties scaled), (iii) traditional Froude (fluid and plate properties unscaled) and (iv) a new WFSI approach (partial conservation of the WFSI governing parameters). No scale effects were observed for (i). Non-breaking waves were correctly scaled by (ii); however, up to 132% scale effects were observed in the breaking wave pressures due to the unscaled fluid properties. Further, the plate displacements were underestimated by up to 98% by (iii). The new approach (iv) successfully predicted non-breaking wave impacts, with less than 4.3% deviations for the maximum wave forces and plate displacements. In conclusion, the findings of this PhD thesis are intended to enhance the physical understanding of WSI to support the design and laboratory modelling of a range of offshore and onshore structures.
Future studies should address a number of further aspects, such as the 3D effects on tsunami impacts and the role of air compressibility in WFSI. Also, the WFSI governing parameters and the new scaling approach should be further validated using numerical and laboratory experiments.
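The Froude scaling underlying approaches (i)-(iii) preserves the Froude number Fr = v/√(gL), which fixes the factor by which each quantity maps from prototype to model. A minimal sketch of these kinematic and dynamic scale factors (geometric scale only; the precise-Froude and WFSI approaches additionally adjust fluid and plate properties, which a scale-factor table cannot capture):

```python
import math

def froude_scale(length_ratio, prototype):
    """Map prototype quantities to model scale under Froude similarity.

    length_ratio: L_prototype / L_model, e.g. 40.0 for a 1:40 model.
    prototype: dict with keys "length", "velocity", "time", "force".
    """
    lam = length_ratio
    return {
        "length": prototype["length"] / lam,
        # Fr = v / sqrt(g L) preserved with g unchanged -> v ~ sqrt(L)
        "velocity": prototype["velocity"] / math.sqrt(lam),
        "time": prototype["time"] / math.sqrt(lam),
        # forces scale with rho * g * L^3 (rho unchanged)
        "force": prototype["force"] / lam**3,
    }
```

For a 1:40 model, a 10 m/s prototype velocity becomes about 1.58 m/s and forces shrink by a factor of 64,000, which is why unscaled plate stiffness (approach iii) so strongly distorts the measured displacements.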

    Metallurgical Process Simulation and Optimization

    Metallurgy involves the art and science of extracting metals from their ores and modifying the metals for use. With thousands of years of development, many interdisciplinary technologies have been introduced into this traditional and large-scale industry. In modern metallurgical practices, modelling and simulation are widely used to provide solutions in the areas of design, control, optimization, and visualization, and are becoming increasingly significant in the progress of digital transformation and intelligent metallurgy. This Special Issue (SI), entitled “Metallurgical Process Simulation and Optimization”, has been organized as a platform to present the recent advances in the field of modelling and optimization of metallurgical processes, which covers the processes of electric/oxygen steel-making, secondary metallurgy, (continuous) casting, and processing. Eighteen articles have been included that concern various aspects of the topic

    A hydrodynamical perspective on the turbulent transport of bacteria in rivers

    The transport of bacteria in turbulent river-like environments is addressed, where bacterial populations are frequently encountered attached to solids. This transport mode is investigated by studying the transient settling of heavy particles in turbulent channel flows featuring sediment beds. A numerical method is used to fully resolve the turbulence and the finite-size particles, enabling the assessment of the complex interplay between flow structures, suspended solids and the river sediment.