
    Lattice Boltzmann Modelling of Droplet Dynamics on Fibres and Meshed Surfaces

    Fibres and fibrous materials are ubiquitous in nature and industry, and their interactions with liquid droplets are often key to their use and function. These structures can be employed as-is or combined to construct more complex mesh structures. To optimise the effectiveness of such structures, it is therefore essential to study the wetting interactions between droplets and solids. In this work, I use the lattice Boltzmann method (LBM) to systematically study three cases: droplets wetting, spreading, and moving across fibres, and droplets impacting mesh structures. First, I focus on partially wetting droplets moving along a fibre. For the so-called clamshell morphology, I find three possible dynamic regimes upon varying the droplet Bond number and the fibre radius: compact, breakup, and oscillation. For small Bond numbers, in the compact regime, the droplet reaches a steady state, and its velocity scales linearly with the driving body force. For higher Bond numbers, in the breakup regime, satellite droplets form trailing the initial moving droplet, which occurs more readily for smaller fibre radii. Finally, in the oscillation regime (favoured in the midrange of fibre radii), the droplet shape periodically extends and contracts along the fibre. Beyond the commonly known fully wetting and partial wetting states, there exists the pseudo-partial wetting state (where a spherical cap and a thin film coexist), which few numerical methods are able to simulate. I implement long-range interactions between the fluid and the solid in LBM to realise this wetting state. The robustness of this approach is shown by simulating a number of scenarios. I start by simulating droplets in the fully, partial, and pseudo-partial wetting states on flat surfaces, followed by pseudo-partially wetting droplets spreading on grooved surfaces and fibre structures. I also explore the effects of the key parameters of the long-range interactions. 
For the dynamics demonstration, I simulate droplets in the pseudo-partial wetting state moving along a fibre in both the barrel and clamshell morphologies at different droplet volumes and fibre radii. Finally, I focus on the dynamics of droplets impacting square mesh structures. I systematically vary the impact point, trajectory, and velocity. To rationalise the results, I find it useful to consider whether the droplet trajectory is dominated by orthogonal or diagonal movement. The former leads to a lower incident rate and a more uniform interaction time distribution, while the latter is typically characterised by more complex, less predictable droplet trajectories. Then, focussing on one impact point, I compare the dynamics of droplets impacting a single-layer structure and equivalent double-layer structures. From a water-capturing perspective (given the same effective pore size), a double-layer structure performs slightly worse. A double-layer structure also generally leads to shorter interaction times than a single-layer structure.
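The regime classification above can be illustrated with a toy calculation. This sketch is not from the thesis: the Bond number definition Bo = ρgr₀²/γ is one common choice, and the regime thresholds are hypothetical placeholders (in the work itself the boundaries depend on both the Bond number and the fibre radius):

```python
# Toy regime classification for a droplet on a fibre. The Bond number
# definition and the thresholds are illustrative assumptions, not thesis values.
def bond_number(rho, g, r0, gamma):
    """Ratio of the driving body force to surface tension for a droplet
    of density rho, radius r0, and surface tension gamma."""
    return rho * g * r0**2 / gamma

def classify_regime(bo, low=0.5, high=2.0):
    """Hypothetical one-parameter regime map: compact at small Bo,
    oscillation in the midrange, breakup at high Bo."""
    if bo < low:
        return "compact"
    if bo < high:
        return "oscillation"
    return "breakup"

# A 1 mm water droplet under gravity:
bo = bond_number(rho=1000.0, g=9.81, r0=1e-3, gamma=0.072)
print(f"Bo = {bo:.3f} -> {classify_regime(bo)}")
```

Since the oscillation regime is favoured in the midrange of fibre radii, a faithful map would be two-dimensional in (Bo, fibre radius); the one-dimensional thresholds above are purely illustrative.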

    Development of Reactor Multiphysics framework to analyze the effect of crossflow and dynamic gHTC for Depletion and REA transient

    Department of Nuclear Engineering.
The aim of this research is to create a Multiphysics coupling framework called MPCORE (Multi-Physics CORE) to analyze the behavior of nuclear reactors. This framework couples fuel performance (FP) with neutron kinetics (NK) and thermal hydraulics (TH) modules for depletion and transient analysis. Coupling the FP code allows for accurate modeling of dynamic gap heat transfer for each pin. Converging all modules together provides more meaningful insight into the variation of reactor parameters. Depletion studies with Multiphysics parameters are essential to understand safety parameters throughout a nuclear reactor's life. The study investigates the passive response of the reactor core to reactivity insertions caused by rod ejection accidents (REA). Most coupling frameworks only couple NK with TH, but this research also includes FP and uses two-way coupling between the TH and FP modules to examine the impact on critical safety parameters. The adaptive time-step feature of MPCORE reduces execution time, and the framework performs in-memory data transfer between modules. Verification and validation of the MPCORE coupled modules (RAST-K for NK, CTH1D/CTF for TH, and FRAPI for FP) has been performed for single-assembly, 3x3 mini-core, and whole-core problems. The performance of the TH module is evaluated with and without crossflow for transient calculations in whole-core problems. The effect of dynamic and static gap heat transfer coefficient models on the FP module is quantified for assembly, mini-core, and whole-core transient problems. The difference between one-way and two-way coupling of the FP and TH modules is quantified for whole-core depletion problems. The study compares safety parameters such as departure from nucleate boiling ratio, linear power, fuel enthalpy, fuel centerline temperature, cladding outer surface temperature, coolant temperature, and cladding hydrogen concentration across the different models. 
A best-estimate coupling framework has been developed and tested for uncertainty quantification (UQ) studies on assembly and mini-core problems. Random sampling and Latin hypercube sampling options are available for UQ studies in MPCORE. With dynamic gap conductance, the standard deviation of several parameters increases, owing to case-to-case differences in gap heat transfer.
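Latin hypercube sampling, one of the two UQ sampling options mentioned above, stratifies each input dimension so that every stratum is sampled exactly once. A minimal self-contained sketch (the generic algorithm, not MPCORE's implementation):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Latin hypercube sample on [0, 1)^n_dims: each dimension is split into
    n_samples equal strata and each stratum is used exactly once per dimension."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        # One uniform point inside each stratum, in shuffled stratum order.
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        for i in range(n_samples):
            samples[i][d] = strata[i]
    return samples

pts = latin_hypercube(n_samples=5, n_dims=2)
# Each dimension has exactly one point in each fifth of [0, 1):
for d in range(2):
    print(sorted(int(p[d] * 5) for p in pts))  # [0, 1, 2, 3, 4]
```

Compared with plain random sampling, this guarantees coverage of each input's marginal range even at small sample counts, which is why it is a common choice for expensive coupled-physics UQ runs.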

    On the direct stability assessment of parametric rolling with CFD: Preparation for the Second Generation Intact Stability Criteria

    Parametric rolling is a phenomenon in which a ship starts to oscillate uncontrollably with increasing amplitude. It is caused by the surrounding waves parametrically amplifying the naturally occurring small rolling motions. The phenomenon can be dangerous, but the irregular nature of the sea makes it rare. The International Maritime Organization is drafting a new type of assessment for ship stability in wave-related phenomena, known as the Second Generation Intact Stability Criteria (SGISC); parametric rolling is one of its focuses. The ship design is studied at three levels of gradually increasing computational complexity. The final level, referred to as Direct Stability Assessment (DSA), models the phenomenon as accurately as possible. One possible method for this phase is Computational Fluid Dynamics (CFD) simulation, in which the fluid flow surrounding the ship is numerically resolved from the Navier-Stokes equations. This allows the ship's response to waves to be accurately resolved without making any assumptions about the design. This thesis studies the possibility of using CFD in the DSA phase for the parametric roll phenomenon. First, a literature survey was conducted on the hydrodynamics of the phenomenon, CFD, and the SGISC. Then, multiple CFD simulations were conducted, including the actual DSA simulation as well as supplementary simulations for other phases of the assessment. It was found that CFD is a valuable tool for studying vulnerability to parametric rolling. However, the computational power required to directly study parametric rolling may still be too high for a typical shipyard.
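As background for the amplification mechanism described above: in the simplest textbook models (not specific to this thesis), parametric rolling is described by a damped Mathieu-type equation in which the roll restoring term is modulated at the wave encounter frequency, with the principal resonance near twice the natural roll frequency. A sketch with illustrative parameter values:

```python
import math

def simulate_roll(we, w0=0.5, zeta=0.02, h=0.4, phi0=0.01, dt=0.01, t_end=600.0):
    """Integrate phi'' + 2*zeta*w0*phi' + w0**2*(1 + h*cos(we*t))*phi = 0
    with semi-implicit Euler and return the peak |phi| reached.
    All parameter values here are illustrative, not from the thesis."""
    phi, dphi, t, peak = phi0, 0.0, 0.0, abs(phi0)
    while t < t_end:
        ddphi = -2.0 * zeta * w0 * dphi - w0**2 * (1.0 + h * math.cos(we * t)) * phi
        dphi += ddphi * dt
        phi += dphi * dt
        peak = max(peak, abs(phi))
        t += dt
    return peak

# Principal parametric resonance: an encounter frequency near twice the
# natural roll frequency (we = 2*w0) makes the roll amplitude grow,
# while a detuned encounter frequency leaves the motion damped.
peak_res = simulate_roll(we=1.0)
peak_det = simulate_roll(we=1.7)
print(peak_res > 100 * peak_det)
```

CFD-based DSA replaces the hand-tuned restoring-term modulation of such models with the actual wave-induced hydrodynamics, which is precisely why it needs no assumptions about the design.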

    Charge and heat transport in ionic conductors

    Transport coefficients relate the off-equilibrium flow of locally conserved quantities, such as charge, energy, and momentum, to gradients of intensive thermodynamic variables in the linear regime. Despite their mathematical formalization dating back to the middle of the last century, when Green and Kubo developed linear response theory, some conceptual subtleties were only recently understood through the formulation of the gauge-invariance and convective-invariance principles. In a nutshell, these invariance principles suggest that transport coefficients are mostly independent of the microscopic definition of the densities and currents. In this thesis, we analyze the consequences of gauge and convective invariances on the charge and heat-transport properties of ionic conductors. The combination of gauge invariance with Thouless' theorem on charge quantization reconciles Faraday's picture of ionic charge transport---whereby each atom carries a well-defined integer charge---with a rigorous quantum-mechanical definition of atomic oxidation states. The latter are topological invariants depending on the paths traced by the coordinates of nuclei in the atomic configuration space. When some general topological conditions are relaxed, we show that oxidation states lose their meaning, and charge can be adiabatically transported across macroscopic distances without a net ionic displacement. This allows for a classification of the different regimes of ionic transport in terms of the topological properties of the electronic structure of the conducting material. Invariance principles also allow one to compute thermal conductivity in multicomponent materials such as ionic conductors through equilibrium molecular dynamics simulations. In particular, heat management is of paramount importance in solid-state electrolytes, solid materials relevant for the production of next-generation batteries, where ionic conduction is mediated by diffusing vacancies and defects. 
The aforementioned conceptual difficulties in the theory of thermal transport are the root cause of the lack of systematic exploration of such properties in solid-state electrolytes. We showcase the ability of the invariance principles, together with state-of-the-art data-analysis techniques, to overcome these issues in the paradigmatic example of the Li-ion conductor Li3ClO. We provide a simple rationale for the temperature and vacancy-concentration dependence of its thermal conductivity, which can be interpreted as resulting from the interplay of a crystalline component and a contribution from the effective disorder generated by ionic diffusion.
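The equilibrium molecular dynamics route mentioned above is the Green-Kubo method: a transport coefficient is proportional to the time integral of the equilibrium autocorrelation function of the corresponding flux. A schematic sketch on a synthetic current with exponential memory (the prefactor, which for heat transport would involve V/(k_B T²), is set to 1 here):

```python
import math, random

def autocorrelation(j, max_lag):
    """Time-origin-averaged <J(0) J(t)> for lags 0..max_lag-1."""
    n = len(j)
    return [sum(j[i] * j[i + lag] for i in range(n - lag)) / (n - lag)
            for lag in range(max_lag)]

def green_kubo(j, dt, max_lag, prefactor=1.0):
    """Transport coefficient ~ prefactor * integral of <J(0)J(t)> dt,
    using the trapezoidal rule on the sampled autocorrelation function."""
    acf = autocorrelation(j, max_lag)
    return prefactor * dt * (sum(acf) - 0.5 * (acf[0] + acf[-1]))

# Synthetic flux with exponential memory (AR(1) process): its ACF is
# exp(-t/tau), so the Green-Kubo integral should approach tau * <J^2> = tau.
rng = random.Random(1)
tau, dt = 5.0, 1.0
a = math.exp(-dt / tau)
j = [rng.gauss(0.0, 1.0)]
for _ in range(50_000):
    j.append(a * j[-1] + math.sqrt(1.0 - a * a) * rng.gauss(0.0, 1.0))
kappa = green_kubo(j, dt, max_lag=60)
print(kappa)  # close to tau = 5 for this synthetic series
```

In practice the fluxes come from molecular dynamics trajectories, and the gauge- and convective-invariance principles discussed in the abstract guarantee that different microscopic definitions of the flux yield the same integral.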

    SuperCDMS HVeV Run 2 Low-Mass Dark Matter Search, Highly Multiplexed Phonon-Mediated Particle Detector with Kinetic Inductance Detector, and the Blackbody Radiation in Cryogenic Experiments

    There is ample evidence of dark matter (DM), a phenomenon responsible for ≈ 85% of the matter content of the Universe that cannot be explained by the Standard Model (SM). One of the most compelling hypotheses is that DM consists of beyond-SM particle(s) that are nonluminous and nonbaryonic. So far, numerous efforts have been made to search for particle DM, and yet none has yielded an unambiguous observation of DM particles. We present in Chapter 2 the SuperCDMS HVeV Run 2 experiment, where we search for DM in the mass ranges of 0.5–10⁴ MeV/c² for electron-recoil DM and 1.2–50 eV/c² for the dark photon and the axion-like particle (ALP). SuperCDMS utilizes cryogenic crystals as detectors to search for DM interactions with the crystal atoms. The interaction is detected in the form of recoil energy mediated by phonons. In the HVeV project, we look for electron recoils, where we enhance the signal by the Neganov-Trofimov-Luke effect under high-voltage biases. The technique enabled us to detect quantized e⁻h⁺ creation at a 3% ionization energy resolution. Our work is the first DM search analysis considering charge trapping and impact ionization effects for solid-state detectors. We report our results as upper limits for the assumed particle models as functions of DM mass. Our results exclude the DM-electron scattering cross section, the dark photon kinetic mixing parameter, and the ALP axioelectric coupling above 8.4 × 10⁻³⁴ cm², 3.3 × 10⁻¹⁴, and 1.0 × 10⁻⁹, respectively. Currently every SuperCDMS detector is equipped with a few phonon sensors based on transition-edge sensor (TES) technology. In order to improve phonon-mediated particle detectors' background rejection performance, we are developing highly multiplexed detectors utilizing kinetic inductance detectors (KIDs) as phonon sensors. This work is detailed in Chapters 3 and 4. 
We have improved our previous KID and readout line designs, which enabled us to produce our first Ø3" detector with 80 phonon sensors. The detector yielded a frequency placement accuracy of 0.07%, indicating our capability of implementing hundreds of phonon sensors in a typical SuperCDMS-style detector. We detail our fabrication technique for simultaneously employing Al and Nb for the KID circuit. We explain our signal model, which includes extracting the RF signal, calibrating the RF signal into pair-breaking energy, and detecting pulses. We summarize our noise conditions and develop models for the different noise sources. We combine the signal and noise models into an energy resolution model for KID-based phonon-mediated detectors. From this model, we propose strategies to further improve future detectors' energy resolution and introduce our ongoing implementations. Blackbody (BB) radiation is one of the plausible sources of the low-energy background currently preventing low-threshold DM experiments from searching lower DM mass ranges. In Chapter 5, we present our study of such backgrounds for cryogenic experiments. We have developed physical models and, based on these models, simulation tools for BB radiation propagation as photons or waves. We have also developed a theoretical model for BB photons' interaction with semiconductor impurities, which is one of the possible channels for generating the leakage current background in SuperCDMS-style detectors. We have planned an experiment to calibrate our simulation and leakage current generation model. For the experiment, we have developed a specialized "mesh TES" photon detector inspired by cosmic microwave background experiments. We present its sensitivity model, the radiation source developed for the calibration, and the general plan of the experiment.
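The importance of shielding cryogenic detectors from warmer surfaces can be illustrated by integrating the Planck photon radiance above an energy threshold. This is a toy estimate, not the propagation models developed in the thesis, and the 0.1 eV impurity-related threshold is hypothetical:

```python
import math

H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m / s
KB = 1.380649e-23     # Boltzmann constant, J / K
EV = 1.602176634e-19  # J per eV

def photon_flux_above(e_min_ev, temp_k, steps=20_000):
    """Hemispherical blackbody photon flux (photons / m^2 / s) with photon
    energy above e_min_ev, by midpoint integration of the Planck photon
    radiance over frequency."""
    nu_min = e_min_ev * EV / H
    nu_max = max(100.0 * KB * temp_k / H, 10.0 * nu_min)  # tail negligible beyond
    dnu = (nu_max - nu_min) / steps
    total = 0.0
    for i in range(steps):
        nu = nu_min + (i + 0.5) * dnu
        x = H * nu / (KB * temp_k)
        if x > 700.0:           # exp would overflow; contribution is ~0
            continue
        total += (2.0 * math.pi * nu**2 / C**2) / math.expm1(x) * dnu
    return total

# Photons above a (hypothetical) 0.1 eV threshold are abundant at room
# temperature but essentially absent from a 4 K environment:
warm = photon_flux_above(0.1, 300.0)
cold = photon_flux_above(0.1, 4.0)
print(warm > 1e15, cold < 1.0)
```

The exponential suppression of above-threshold photons at cryogenic temperatures is what makes even small light leaks from warmer stages a plausible source of leakage-current background.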

    A CFD methodology for mass transfer of soluble species in incompressible two-phase flows: modelling and applications

    Continuous flow chemistry is an interesting technology that makes it possible to overcome many of the scalability limitations of classical batch reactor designs. This approach is particularly relevant for photochemistry and electrochemistry, as new optimal solutions can be designed to limit, for example, issues related to light penetration, reactor fouling, excessive distance between electrodes, and the management of hazardous compounds, whilst keeping productivity high. Such devices often operate in a two-phase regime, where the appearance of a gas in the form of a disperse bubbly flow can be either a desirable feature (e.g. when the gas is needed for the reaction) or the result of a spontaneous reaction (e.g. in electrochemistry). Such systems are complicated flows in which many bubbles populate the reactor at the same time and deform under the effect of several forces, such as surface tension, buoyancy, and pressure and viscous stresses. Owing to the solubility of the gas in the liquid solvent, the disperse phase exchanges mass with the liquid (where the reactions generally occur) and the volume of the bubbles changes accordingly. This physics is mainly a convection-dominated process that occurs at very small length scales (within the concentration boundary layer, which is generally thinner than the hydrodynamic one), and numerical tools for routine design rely on simplifying assumptions (reduced-order methods) to model this region. However, such approaches often lead to errors in the prediction of the mass transfer rate, and a fully resolved method is generally needed to capture the physics at the interface. This last approach comes with a high computational cost (which makes it unsuitable for common design processes) but can be employed in simplified scenarios to explore fundamental physics and derive correlation formulae to be used in reduced-order models. 
For the above reasons, this work aims at developing a high-fidelity numerical simulation framework for the study of mass transfer of soluble species in two-phase systems. The numerical modelling of these processes has several challenges, such as the small characteristic spatial scales and the discontinuities in both concentration and velocity profiles at the interface. All these points need to be properly taken into account to obtain an accurate solution at the gas-liquid interface. In this thesis, a new methodology, based on a two-scalar approach for the transport of species, is combined with a geometric Volume of Fluid method in the open-source software Basilisk (http://basilisk.fr/). A new algorithm is proposed for the treatment of the interfacial velocity jump, which consists of redistributing the mass transfer term from the interfacial cells to the neighbouring pure-gas cells, in order to ensure conservation of mass during the advection of the interface. This step is a crucial point of the methodology, since it makes it possible to describe the velocity field near the interface accurately and, consequently, to capture the distribution of species within the concentration boundary layer. The solver is extensively validated against analytical, experimental and numerical benchmarks, which include suspended bubbles in both super- and under-saturated solutions, the Stefan problem for a planar interface, dissolving rising bubbles, and competing mass transfer of mixtures in mixed super- and under-saturated liquids. Finally, the methodology is used for the study of real applications, namely the growth of electrochemically generated bubbles on a planar electrode and the mass transfer of a single bubble in a Taylor-Couette device. The effects of the main parameters that characterise these systems (e.g. contact angle, current density and rotor speed) on the growth/dissolution rate of bubbles are investigated. 
Although these systems must necessarily be simplified to allow for direct numerical simulations, these examples show that the insight gained into the fundamental physics is valuable and can be used to develop reduced-order models.
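Of the validation benchmarks listed above, the planar Stefan problem has a closed-form similarity solution: the interface advances as s(t) = 2λ√(Dt), with λ fixed by a transcendental condition. A sketch of the classical one-phase form (the exact benchmark configuration in the thesis may differ):

```python
import math

def stefan_lambda(stefan_number, tol=1e-12):
    """Solve sqrt(pi) * lam * exp(lam**2) * erf(lam) = St for lam by bisection
    (classical one-phase Stefan problem growth coefficient)."""
    def f(lam):
        return math.sqrt(math.pi) * lam * math.exp(lam**2) * math.erf(lam) - stefan_number
    lo, hi = 1e-9, 5.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def interface_position(t, diffusivity, stefan_number):
    """Planar interface position s(t) = 2 * lam * sqrt(D * t)."""
    return 2.0 * stefan_lambda(stefan_number) * math.sqrt(diffusivity * t)

# The square-root-of-time growth is the signature a solver must reproduce:
s1 = interface_position(1.0, diffusivity=1e-9, stefan_number=0.5)
s4 = interface_position(4.0, diffusivity=1e-9, stefan_number=0.5)
print(abs(s4 / s1 - 2.0) < 1e-9)
```

Benchmarks like this isolate the interfacial mass-transfer treatment from hydrodynamic effects, which is what makes them useful for validating the velocity-jump redistribution step described above.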

    Development of a Moving Front Kinetic Monte Carlo Algorithm to Simulate Moving Interface Systems

    Moving interfaces play vital roles in a wide variety of natural, technological, and industrial processes, including solids dissolution, capillary action, sessile droplet spreading, and superhydrophobicity. In each of these systems, the fundamental process behaviour depends entirely on the interface and on the underlying physics governing its movement. As a result, there is significant interest in developing models to capture the behaviour of these moving interface systems across a wide variety of applications. However, the simulation techniques used to model moving interfaces are limited in their application: molecular-level models cannot simulate interface behaviour over large spatial and temporal scales, whereas large-scale modelling techniques cannot account for the nanoscale processes that govern the interface behaviour or for the molecular-scale fluctuations and deviations in the interface. Furthermore, methods developed to bridge the gap between the two scales are prone to error-induced force imbalances at the interface that can result in fictitious behaviour. To overcome these challenges, this study developed a novel kinetic Monte Carlo (kMC)-based modelling technique, referred to as Moving Front kMC (MFkMC), to efficiently capture the molecular-scale events and forces governing moving interface behaviour over large length and time scales. This framework was designed to capture the movement of transiently varying interfaces in a kinetic-like manner, so that the movement can be described using Monte Carlo sampling. The MFkMC algorithm accomplishes this by evaluating the behaviour of the interfacial molecules and assigning kinetic Monte Carlo-style rate equations that describe the transition probability that a molecule advances into the neighbouring phase, displacing an interfacial molecule from the opposing phase and thus changing the interface. 
The proposed algorithm was subsequently used to capture the moving interface behaviour within crystal dissolution, capillary rise, and sessile droplet spreading on both smooth and superhydrophobic surfaces. The individual system models for each application were used to analyze the behaviour within each application and to tackle challenges within each field. The MFkMC modelling method was initially used to capture crystal dissolution for applications in pharmaceutical drug delivery. The developed model was designed to predict the dissolution of a wide variety of crystalline minerals, regardless of their composition and crystal structure. The MFkMC approach was compared against a standard kMC model of the same system to validate the MFkMC approach and highlight its advantages and limitations. The proposed framework was used to explore ways of enhancing crystal dissolution processes by assessing the variability from environmental uncertainties and by performing robust optimization to improve the dissolution performance. The approach was used to simulate calcium carbonate dissolution within the human gastrointestinal system. Polynomial chaos expansions (PCEs) were used to propagate the parametric uncertainty through the kMC model. Robust optimization was subsequently performed to determine the crystal design parameters that achieve target dissolution specifications using low-order PCE coefficient models (LPCMs). The results showcased the applicability of the kMC crystal dissolution model and the need to account for dissolution uncertainty within key biological applications. The MFkMC approach was additionally used to capture capillary rise in cavities of different shapes. The proposed model was adapted to capture the movement of a fluid-fluid interface, such as the moving interface present in capillary action studies, using kMC type approaches based on the forces acting locally upon the interface. 
The proposed force balance-based MFkMC (FB-MFkMC) expressions were subsequently coupled with capillary action force balance equations to capture capillary rise within any axisymmetric cavity. The developed model was validated against known analytical models that capture capillary rise dynamics in perfect cylinders. Furthermore, the resulting multiscale model was used to analyze capillary rise within axisymmetric cavities of irregular shape and in cylinders subject to surface roughness. These studies highlighted that the FB-MFkMC algorithm can capture the macroscale behaviour of a system subject to molecular-level irregularities such as surface roughness, and that phenomena such as roughness can significantly affect moving interface behaviour and must be accounted for. MFkMC was furthermore extended to capture sessile droplet spreading on a smooth surface. The developed approach adapted the capillary action FB-MFkMC model to capture the spreading behaviour of a droplet based on the force balance acting upon the droplet interface, built from analytical inertial and capillary expressions from the literature. This study also derived a new semi-empirical expression for the viscous damping force acting on the droplet. The developed viscous force term depends on a fitted parameter c, whose value was observed to vary solely with the droplet liquid, as captured predominantly by the droplet Ohnesorge number. The proposed approach was subsequently validated against data obtained both from conducted experiments and from the literature, supporting the robustness of the framework. The predictive capabilities of the developed model were further inspected to provide insights into the sessile droplet system behaviour. The developed FB-MFkMC model was additionally modified to capture sessile droplet spreading on pillared superhydrophobic surfaces (SHSs). 
These adjustments included developing the Periodic Unit (PU) method for capturing periodic SHS pillar arrays and accommodating the changes necessary to capture droplet spreading across the gaps between the pillars (i.e., Cassie-mode wetting). The proposed SHS-based FB-MFkMC (SHS-MFkMC) model was furthermore adapted to accommodate spontaneous Cassie-to-Wenzel (C2W) droplet transitions on the solid surface. The capabilities of the full SHS-MFkMC model to capture both radial sessile droplet spread and spontaneous C2W transitions were compared to experimental results from the literature. Furthermore, a sensitivity analysis was conducted to assess the effects of the various system parameters on the model performance and to compare them with the expected system results.
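The rate-equation sampling at the heart of MFkMC follows the standard rejection-free kMC recipe: pick an event with probability proportional to its rate, then advance time by an exponentially distributed increment. A generic sketch (the rates here are placeholders, not the thesis' force-balance expressions):

```python
import math, random

def kmc_step(rates, rng):
    """One rejection-free kMC step: choose event i with probability
    rates[i] / sum(rates), then advance time by an exponential deviate
    with mean 1 / sum(rates)."""
    total = sum(rates)
    r = rng.random() * total
    cumulative, chosen = 0.0, len(rates) - 1
    for i, rate in enumerate(rates):
        cumulative += rate
        if r < cumulative:
            chosen = i
            break
    dt = -math.log(rng.random()) / total
    return chosen, dt

rng = random.Random(42)
rates = [1.0, 3.0]        # placeholder rates: event 1 is 3x as likely
counts, t = [0, 0], 0.0
for _ in range(20_000):
    i, dt = kmc_step(rates, rng)
    counts[i] += 1
    t += dt
print(counts[1] / counts[0])  # close to 3
```

In MFkMC the event list would hold, for each interfacial molecule, the rate of advancing into the neighbouring phase; the exponential time increment is what lets the method reach timescales far beyond molecular dynamics.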

    Computational modelling of large-scale fire spread through informal settlements

    Informal settlements are a global phenomenon and are characterised by low-quality construction and dense layouts, generally as a result of a lack of application of formal building regulations. They may be known by other names, such as slums or shantytowns, or exist in more novel contexts, for example refugee camps and homeless tented communities. However, a consistent feature of informal settlements in any context is their exposure to fire risk. Fire risk presents both as risk of fire ignition – how often fires occur – and risk of fire spread. Where there is a high risk of fire spread, a localised fire in an individual home may swiftly develop to the scale of tens or hundreds of homes. In some cases this may result in injury and loss of life, but the predominant issue is the extensive scale of property loss to communities that already exist in impoverished and precarious circumstances. There is a vital need to understand and quantify fire spread risk factors in order to develop mitigation measures that may protect informal settlements from large-scale fires. In recent years, a growing set of experiments has contributed to knowledge of how fires in informal structures, particularly of the type found in Cape Town’s expansive informal settlements, grow and spread. This knowledge is rooted in fundamental concepts of fire science and engineering, such as compartment fire dynamics, heat transfer, and material ignition. However, it is practically challenging and cost-prohibitive to scale experimentation to settlement-scale fire spread, where multiple adjacent structures may be involved in the fire. As such, it is proposed that computational tools be developed to simulate fire spread through settlements, such that mitigation measures can be quantitatively tested and refined at settlement scale, thus reducing the need for costly experimentation. 
Large-scale urban fire spread models have existed since the 1950s but have largely focussed on the phenomenon of post-earthquake fire spread. Additionally, it is only in the last 20 years that modellers have moved away from simplistic empirical models towards dynamic ‘physics-based’ models that attempt to explicitly define the fire behaviour. Previous models have achieved this to varying degrees of success, with many failing to conceptualise the underlying fire behaviour in a physically realistic manner even if at settlement-scale they may visually appear to be representative of fire spread. Intrinsic to this failure is a distinct lack of robust model validation. However, one model – of post-earthquake fire spread in Japan – sets itself apart with a clear underpinning of well-reasoned fire behaviour at the key unit of analysis for any urban fire model, the compartment fire. This thesis presents the adoption and integration of this model for application in informal settlements, both contextualising it to a new physical domain and proposing improved methods for modelling key aspects of fire behaviour. The thesis is underpinned by particularly strong focus on local-level validation, showing that the model accurately captures key fire characteristics at the level of individual compartments before even considering outcomes at settlement scale. The first area of focus is on the compartment fire, the fundamental unit of analysis of the model. It is contextualised for informal settlements by adapting fuel loads and boundary conditions to align with values from experiments. Additionally, a new fuel-oxygen mixing model is implemented where previously over-efficient combustion was being assumed within the compartment. In combination with timestep refinement, this also contributed to stabilisation of the compartment fire, where previously the model destabilised due to its inability to resolve gas flows and the neutral plane for ventilation-controlled fires. 
The result is a stable compartment fire model that can sustain ventilation-controlled conditions and compares well with experimental results on key metrics – temperature, heat release rate, and oxygen concentration. New phenomena of cardboard lining ‘flashover’ and externally heated accelerated fire growth are also implemented in the model for the first time. Subsequently, the external thermal environment – external flaming, radiative heat transfer, and ignition – is investigated. Prior external flame models are found to be rigid, context-specific, and insensitive to wind. Given that informal settlement fires are known to be sensitive to wind and highly driven by flame impingement and high local levels of radiation, it is crucial to model the external venting flame as accurately as possible. Computational fluid dynamics (CFD) models are used to derive new correlations for external venting flame dimensions. Ignition is then modelled as dependent on both radiation and flame impingement, where previously these had been treated as two independent mechanisms. The model is also adapted to reflect the dynamic and intermittent nature of external flames, such that probability is built into flame dimensions, based on data from experiments and CFD models. The new method is appropriately sensitive to wind speed and direction, to a degree not yet achieved in prior urban fire spread models. Development of both the compartment fire and external flame submodels was conducted at the local scale, modelling only one or two compartments at a time. Finally, the model is examined in a domain of 20 structures (as per a previous experiment) to assess how the newly implemented submodels affect fire spread at the multi-structure level. This first verifies the efficacy of the new model in capturing the characteristics of fire spread in informal settlements compared to the original model. It also shows the dynamic response of the model to wind-driven conditions. 
Additionally, this process helps to uncover where there are still errors or inconsistent conceptualisations of fire spread behaviour, particularly associated with the model’s reading and mapping of the spatial environment. Thus, it provides the basis and scope for continued adaptation of the model to increase its robustness. Overall, interrogation of previous ‘physics-based’ models uncovered a lack of robust validation, which inspired the approach taken in this research of developing the model at high resolution. Specific submodels had to be carefully examined and developed to ensure they were first stable and physically reasonable and, second, contextually appropriate for informal settlements. Though the research stopped short of application to entire informal settlement domains, the underlying model functionality has been updated with a significantly more detailed and robust representation of real fire behaviour than has previously been applied in any large urban fire spread context. Future application to informal settlements or adaptation to other urban environments should be much expedited as a result.
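The radiation-driven ignition mechanism discussed above can be caricatured with a textbook point-source model: received flux falls off as the inverse square of distance, and ignition occurs above a critical flux. All values below are illustrative assumptions, not the thesis' CFD-derived flame correlations:

```python
import math

def incident_flux_kw_m2(q_fire_kw, distance_m, radiative_fraction=0.3):
    """Point-source radiation model: flux = chi_r * Q / (4 * pi * d^2).
    A textbook simplification; the thesis derives flame-specific correlations."""
    return radiative_fraction * q_fire_kw / (4.0 * math.pi * distance_m**2)

def ignites(q_fire_kw, distance_m, critical_flux_kw_m2=10.0):
    """Hypothetical piloted-ignition criterion: incident flux at the target
    exceeds a critical value (the 10 kW/m^2 here is illustrative)."""
    return incident_flux_kw_m2(q_fire_kw, distance_m) >= critical_flux_kw_m2

# A 2 MW compartment fire: a neighbouring dwelling at 1 m is at risk in this
# toy model, while one at 5 m is not.
print(ignites(2000.0, 1.0), ignites(2000.0, 5.0))
```

The steep distance dependence is why dense layouts dominate spread risk, and why the thesis' coupling of radiation with flame impingement (rather than treating either alone) matters at the short separations typical of informal settlements.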

    Elements of Ion Linear Accelerators, Calm in the Resonances, Other Tales

    The main part of this book, Elements of Linear Accelerators, outlines in Part 1 a framework for non-relativistic linear accelerator focusing and accelerating channel design, simulation, optimization, and analysis where space charge is an important factor. Part 1 is the most important part of the book; grasping the framework is essential to fully understand and appreciate the elements within it and the myriad application details of the following Parts. The treatment concentrates on linacs, large or small, intended for high-intensity, very-low-beam-loss, factory-type application. The Radio-Frequency Quadrupole (RFQ) is developed in particular detail as a representative and the most complicated linac form (taking the beam from dc to bunched and accelerated), extending to the practical design of long, high-energy linacs, including space charge resonances and beam halo formation, and some challenges for future work. A practical method is also presented for designing Alternating-Phase-Focused (APF) linacs with long sequences and high energy gain. Full open-source software is available: the LINACS codes are released at no cost and, as always, with fully open-source coding (September 2023; p. 2 & Ch. 19.10). The following part, Calm in the Resonances and Other Tales, contains eyewitness accounts of nearly 60 years of participation in accelerator technology. Comment: 652 pages; some hundreds of figures, all images (there is no data in the figures).