
    A Drift-Kinetic Analytical Model for SOL Plasma Dynamics at Arbitrary Collisionality

    A drift-kinetic model to describe the plasma dynamics in the scrape-off layer region of tokamak devices at arbitrary collisionality is derived. Our formulation is based on a gyroaveraged Lagrangian description of the charged particle motion and the corresponding drift-kinetic Boltzmann equation, which includes a full Coulomb collision operator. Using a Hermite-Laguerre velocity space decomposition of the gyroaveraged distribution function, a set of equations to evolve the coefficients of the expansion is presented. By evaluating the moments of the Coulomb collision operator explicitly, distribution functions arbitrarily far from equilibrium can be studied at arbitrary collisionality. A fluid closure in the high-collisionality limit is presented, and the corresponding fluid equations are compared with previously derived fluid models.
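
    A hedged sketch of the kind of expansion referenced above (the notation here is assumed, not copied from the paper): the gyroaveraged distribution function is written on a Hermite-Laguerre basis,

        \[
        f(v_\parallel, \mu) \;=\; \sum_{p,j \ge 0} \frac{N^{pj}}{\sqrt{2^{p}\, p!}}\;
        H_p\!\left(\frac{v_\parallel - u_\parallel}{v_{\mathrm{th}}}\right)
        L_j\!\left(\frac{\mu B}{T_\perp}\right) f_M,
        \]

    where f_M is a shifted Maxwellian, H_p and L_j are Hermite and Laguerre polynomials, and the coefficients N^{pj} are the quantities the model evolves; low-order coefficients map onto density, parallel velocity, and temperatures, so truncating at small (p, j) recovers a fluid description, consistent with the high-collisionality closure mentioned above.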

    A pure-jump market-making model for high-frequency trading

    We propose a new market-making model which incorporates a number of realistic features relevant for high-frequency trading. In particular, we model the dependency structure of prices and order arrivals with novel self- and cross-exciting point processes. Furthermore, instead of assuming the bid and ask prices can be adjusted continuously by the market maker, we formulate the market maker's decisions as an optimal switching problem. Moreover, the risk of overtrading is taken into consideration by allowing each order to have a different size, and the market maker can make use of market orders, which are treated as impulse controls, to get rid of excessive inventory. Because of the stochastic intensities of the cross-exciting point processes, the optimality condition cannot be formulated as a classical Hamilton-Jacobi-Bellman quasi-variational inequality (HJBQVI), so we extend the framework of constrained forward-backward stochastic differential equations (CFBSDE) to solve our optimal control problem.
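
    For readers unfamiliar with self-exciting point processes, the sketch below (illustrative parameter values, not the paper's model, which is multivariate and cross-exciting) simulates a univariate Hawkes process with an exponential kernel via Ogata's thinning algorithm:

        import numpy as np

        def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
            # Ogata's thinning for a univariate Hawkes process with intensity
            # lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).
            rng = np.random.default_rng(seed)
            events, t = [], 0.0

            def intensity(s):
                return mu + alpha * np.exp(-beta * (s - np.asarray(events))).sum()

            while True:
                lam_bar = intensity(t)               # upper bound: the kernel only
                t += rng.exponential(1.0 / lam_bar)  # decays until the next event
                if t >= horizon:
                    return np.asarray(events)
                if rng.uniform() < intensity(t) / lam_bar:  # thinning (accept) step
                    events.append(t)

        arrivals = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=100.0)

    With alpha/beta < 1 the process is stationary; each accepted arrival temporarily raises the intensity, producing the bursts of clustered activity that make such processes natural models of dependent high-frequency order flow.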

    A framework for information dissemination in social networks using Hawkes processes

    We define in this paper a general Hawkes-based framework to model information diffusion in social networks. The proposed framework takes into consideration the hidden interactions between users as well as the interactions between contents and social networks, and can also accommodate dynamic social networks and various temporal effects of the diffusion, which provides a complete analysis of the hidden influences in social networks. This framework can be combined with topic modeling, for which modified collapsed Gibbs sampling and variational Bayes techniques are derived. We provide an estimation algorithm based on nonnegative tensor factorization techniques which, together with a dimensionality reduction argument, is able to discover, in addition, the latent community structure of the social network. Finally, we provide numerical examples from real-life networks: the Game of Thrones and MemeTracker datasets.
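
    As a minimal sketch of the tensor-factorization idea (not the authors' estimation algorithm; the toy tensor and rank are assumptions), a rank-R nonnegative CP decomposition of a three-way tensor, e.g. user x content x time, can be computed with multiplicative updates:

        import numpy as np

        def khatri_rao(U, V):
            # Column-wise Kronecker product: (J, R) and (K, R) -> (J*K, R).
            return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

        def ntf_cp(X, rank, iters=300, eps=1e-9, seed=0):
            # Nonnegative CP: X[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r],
            # fitted by Lee-Seung-style multiplicative updates (factors stay >= 0).
            rng = np.random.default_rng(seed)
            F = [rng.random((n, rank)) for n in X.shape]
            for _ in range(iters):
                for m in range(3):
                    Xm = np.moveaxis(X, m, 0).reshape(X.shape[m], -1)  # mode-m unfolding
                    U, V = (F[i] for i in range(3) if i != m)          # remaining factors
                    KR = khatri_rao(U, V)
                    F[m] *= (Xm @ KR) / (F[m] @ (KR.T @ KR) + eps)
            return F

        X = np.random.default_rng(1).random((8, 5, 6))  # toy user x content x time tensor
        A, B, C = ntf_cp(X, rank=3)

    In the diffusion setting, the factor matrices would be read as user, content, and temporal profiles; clustering the rows of the user factor is one way a latent community structure can emerge from such a decomposition.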

    Information dissemination and opinion dynamics in social networks

    Our aim in this Ph.D. thesis is to study the diffusion of information as well as the opinion dynamics of users in social networks. Information diffusion models explore the paths taken by information transmitted through a social network, in order to understand and analyze the relationships between users in the network, leading to a better comprehension of human relations and dynamics. This thesis rests on both sides of information diffusion: first, developing mathematical theories and models to study the relationships between people and information, and second, creating tools to better exploit the hidden patterns in these relationships. The theoretical tools developed here are opinion dynamics models and information diffusion models, with which we study the flow of information among users in social networks; the practical tools are a novel community detection algorithm and a novel trend detection algorithm.

    We start by introducing an opinion dynamics model in which agents interact with each other about several distinct opinions/contents. In our framework, agents do not exchange all their opinions with each other; they communicate about randomly chosen opinions at each time. Using stochastic approximation algorithms, we show that under mild assumptions this opinion dynamics algorithm converges as time increases, with a limiting behavior governed by how users choose which opinions to broadcast at each time. We then develop a community detection algorithm that is a direct application of this opinion dynamics model when agents broadcast the content they appreciate the most: communities are formed as groups of users that mostly appreciate the same content. This algorithm, which is distributed by nature, has the remarkable property that the discovered communities can be studied from a solid mathematical standpoint. In addition to this theoretical advantage over heuristic community detection methods, the presented algorithm accommodates weighted networks and parametric and nonparametric versions, with the discovery of overlapping communities obtained as a byproduct at no mathematical overhead.

    In a second part, we define a general framework to model information diffusion in social networks. The proposed framework takes into consideration not only the hidden interactions between users but also the interactions between contents and multiple social networks, and it accommodates dynamic networks and various temporal effects of the diffusion. This framework can be combined with topic modeling, for which several estimation techniques based on nonnegative tensor factorization are derived; together with a dimensionality reduction argument, these techniques also discover the latent community structure of the users in the social networks.

    At last, we use one instance of the previous framework to develop a trend detection algorithm designed to find trendy topics in a social network. Taking into consideration the interaction between users and topics, we formally define trendiness and derive trend indices for each topic being disseminated in the social network. These indices take into consideration the distance between the real broadcast intensity and the maximum expected broadcast intensity, as well as the social network topology. The proposed trend detection algorithm uses stochastic control techniques to calculate the trend indices; it is fast and aggregates all the information of the broadcasts into a simple one-dimensional process, thus reducing its complexity and the quantity of data needed for detection. To the best of our knowledge, this is the first trend detection algorithm based solely on the individual performance of topics.
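
    A deliberately minimal sketch of the broadcast mechanism described above (a toy model with hypothetical parameters, not the thesis's algorithm or its convergence analysis): agents hold scores over K contents, a random agent broadcasts its preferred content, its neighbours partially adopt the broadcast score, and labelling each agent by its final preferred content yields the communities.

        import numpy as np

        def opinion_gossip(adj, K, steps=20000, gamma=0.05, seed=0):
            # adj: symmetric 0/1 adjacency matrix; W[i, k]: agent i's score for content k.
            rng = np.random.default_rng(seed)
            n = adj.shape[0]
            W = rng.random((n, K))
            for _ in range(steps):
                i = rng.integers(n)
                k = int(W[i].argmax())                        # broadcast the preferred content
                nbrs = np.flatnonzero(adj[i])
                W[nbrs, k] += gamma * (W[i, k] - W[nbrs, k])  # neighbours move toward it
            return W.argmax(axis=1)                           # label = preferred content

        rng = np.random.default_rng(1)
        A = rng.random((50, 50)) < 0.1
        A = np.triu(A, 1); adj = (A | A.T).astype(int)        # random symmetric toy graph
        labels = opinion_gossip(adj, K=4)

    Agents that end up preferring the same content form one community, mirroring the definition used in the thesis.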

    High repetition rate laser driven proton source and a new method of enhancing acceleration

    In the past few decades, with the increasing availability of high-intensity laser systems, laser ion acceleration has evolved into a mature and promising source for experiments with energetic ions; the highest laser-driven proton energies have now reached nearly 100 MeV. Most applications require a stable ion source with a high repetition rate, and the methods and strategies for realizing such repetitive laser-ion sources vary dramatically, in particular with respect to the employed target technology. In view of our research group's interest in solid thin targets, the first focus of this PhD work was an automated target positioning system that is employed in various research topics. A pilot study with one thousand targets was conducted with the nano-Foil Target Positioning System at the ATLAS 300 at the Laboratory for Extreme Photonics (LEX Photonics), which delivered laser pulses with a pulse energy of up to 6 J and a pulse duration of 25 fs. Through real-time monitoring of various parameters of the laser pulses and targets, we evaluated the stability of the proton source at a repetition rate of 0.5 Hz. During this study we artificially varied the controllable parameters and studied their impact on the proton yield. While scientifically interesting, the results did not clearly reveal a basis for stabilizing the proton source; spatio-temporal contrast fluctuations, which cannot yet be monitored shot-to-shot, likely contribute. The demand for high repetition rates poses challenges to optimization strategies that rely on targets more complicated than plain foils. Among the currently favored methods, which are reviewed in this thesis, are mass-limited targets (MLTs): their lateral size is comparable to the laser focus diameter, so the energized electrons remain confined to a microscopically small volume, which increases the acceleration fields and hence the ion energy. However, the rapid positioning of MLTs is experimentally challenging. In order to find alternatives, we tested the generation of transient micro-targets by manipulating an initially plain foil with a Laguerre-Gaussian (LG) pre-pulse. This pre-pulse was introduced in the frontend of the CLAPA 200 TW laser at Peking University and passed through a spiral phase plate (SPP) before being sent back into the laser chain with a 1.7 ns advance on the main laser pulse. In the far field, i.e. in the focus on the target, the LG pre-pulse produces a donut-shaped intensity distribution and initiates a ring-shaped plasma that is left to expand; the main laser pulse is focused on the center of this ring. The experimental results revealed a doubling of the proton energy under the right pre-pulse intensity conditions. The evolution of the ring pre-plasma expansion is modeled, and the interaction of the main pulse with the transient micro-plasma is studied by particle-in-cell simulations. The simulation results recover the experimental observations, in particular the proton energy increase in the relevant parameter range. Our understanding is in line with the expectation that energetic electrons remain concentrated around the central part of the quasi-isolated micro-target, even though the target is not fully isolated but is surrounded by a low-density plasma by the time of the laser-plasma interaction at peak intensity.
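
    For orientation, the standard expression for the focal intensity of a Laguerre-Gaussian mode LG_0^l with l != 0 (a textbook formula, not one quoted from the thesis) shows why the pre-pulse focus is ring-shaped:

        \[
        I(r) \;\propto\; \left(\frac{2 r^{2}}{w_0^{2}}\right)^{|\ell|}
        \exp\!\left(-\frac{2 r^{2}}{w_0^{2}}\right),
        \]

    which vanishes on axis and peaks on a ring of radius r = w_0 sqrt(|l|/2). The annular pre-plasma seeded on this ring expands during the 1.7 ns delay, while the foil centre, where the main pulse is later focused, remains comparatively intact.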

    A QED framework for nonlinear and singular optics

    The theory of quantum electrodynamics is employed in the description of linear and nonlinear optical effects. We study the effects of using a two-energy-level approximation to simplify expressions obtained from perturbation theory, equivalent to truncating the completeness relation; applying a two-level model without regard for its domain of validity, however, may deliver misleading results. A new theorem on the expectation values of analytical operator functions imposes additional constraints on any atom or molecule modelled as a two-level system, and we introduce measures designed to indicate occasions when the two-level approximation may be valid. Analysis of the optical angular momentum operator delivers a division into spin and orbital parts satisfying electric-magnetic democracy, and determines a new compartmentalisation of the optical angular momentum. An analysis of the recently rediscovered optical chirality, and of its corresponding flux, delivers results proportional to the helicity and spin angular momentum in monochromatic beams. A new polarisation basis is introduced to determine the maximum values that an infinite family of helicity- and spin-type optical measures may take, disproving recent claims of ‘superchiral light’. A theoretical description of recent experiments relates helicity- and spin-type measures to the circular differential response of molecules, and shows that nodal enhancements to circular dichroism relate only to the photon number-phase uncertainty relation and do not signify ‘superchiral’ regions. The six-wave mixing of optical vortex input in nonlinear media demonstrates the quantum entanglement of pairs of optical vortex modes; the probability for each possible output pair displays a combinatorial weighting associated with Pascal's triangle. Finally, a quantum electrodynamic analysis of the effect of a second body on absorption is extended by integrating over all possible positions of the mediator molecules, modelling a continuous medium; this provides links with both the molecular and bulk properties of materials.
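
    As context for the chirality results (these are the conventional Lipkin/Tang-Cohen definitions, not expressions copied from the thesis), the optical chirality density and its flux read

        \[
        C = \frac{\epsilon_0}{2}\,\mathbf{E}\cdot(\nabla\times\mathbf{E})
          + \frac{1}{2\mu_0}\,\mathbf{B}\cdot(\nabla\times\mathbf{B}),
        \qquad
        \mathbf{F} = \frac{1}{2\mu_0}\left[\mathbf{E}\times(\nabla\times\mathbf{B})
          - \mathbf{B}\times(\nabla\times\mathbf{E})\right],
        \]

    and for a monochromatic beam these reduce, up to factors of the frequency, to quantities proportional to the optical helicity and the spin angular momentum respectively, which is the proportionality the abstract refers to.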

    A study of impact breakage of single rock specimen using discrete element method

    Comminution is a critical stage of mineral processing which aims to reduce the size of ore particles through breakage, thereby increasing the likelihood of liberating valuable minerals. However, comminution is highly energy-intensive, and an understanding of the key breakage mechanisms has been identified as an important factor in improving the efficiency of the process. Several factors, such as pre-existing cracks, mineralogical composition, and ore shape and size, are known to affect ore breakage behaviour. To investigate breakage mechanisms it is important to be able to determine how individual factors influence the breakage behaviour of rock specimens; however, isolating and investigating individual factors under experimental conditions is challenging and typically impractical. Numerical techniques such as the Bonded Particle Model-Discrete Element Method (BPM-DEM) have been developed as a means of investigating, in isolation, the effects of different factors on ore breakage behaviour under closely controlled breakage conditions using synthetic rock specimens. This study investigates how individual factors influence rock specimen breakage using BPM-DEM numerical methods. Numerical simulations were conducted using ESyS-Particle 2.3.5, an open-source discrete element method (DEM) software package which uses Python-based libraries to generate geometries and simulations and a C++ engine for mathematical computations. Empirical calibration relationships were developed to relate microstructural model parameters to the macroscopic mechanical properties that are typically obtained from standard geotechnical breakage experiments. The robustness of the model was evaluated by considering the sensitivity of fracture measures to variation in model resolution, size dependency, and the macroscopic mechanical properties (Young's modulus and uniaxial compressive strength) of the numerical specimens. A comparative study of single rock specimen breakage using the current BPM-DEM and the laboratory SILC experiments carried out by Barbosa et al. (2019) was conducted, examining the measured fracture force and fracture patterns at different sizes for both cylindrical and spherical synthetic rock specimens. Furthermore, the model was used to study, in isolation, the influence of pre-existing cracks and of differing mineralogical compositions on measurable breakage properties. Numerical rock specimens with pre-existing cracks were constructed using a microcrack approach, while a unique approach based on the insertion of "seed points" was developed and demonstrated to construct numerical rock specimens with varying mineralogical compositions. Results from the numerical simulations showed that a high model resolution with a sufficiently large number of DEM spheres exhibited the least deviation and error with respect to fracture measures and was therefore considered numerically stable. The dependency of fracture measurements on specimen size showed the expected increase in measured fracture force with specimen size. The variation of the macroscopic Young's modulus and uniaxial compressive strength against the fracture measures showed that the locus of these mechanical properties against the fracture measure can be used to specify a calibration relationship.

    Results of the comparative study showed that, for both cylindrical and spherical rock specimens, the DEM consistently predicted the fragment patterns as well as the increase in measured fracture force with specimen size. The investigation of the effect of pre-existing cracks revealed that an increasing number of pre-existing cracks in a rock specimen required a lower fracture force and consequently produced less new fracture surface area. For the binary-phase mineralogical composition in the study, the fracture force decreased with an increasing concentration of the softer component, owing to the increased proportion of weakness in the specimen. It was concluded that, with an appropriate calibration exercise and a realistic specification of material properties from the evaluation study, the DEM is sufficient to act as a "virtual laboratory" for isolating and studying the individual effects of factors that influence ore breakage. These results highlight two important points. Firstly, this study unravels some of the possible causes of inefficiency in comminution practices, whereby significant amounts of energy can be expended for minimal gains in liberation due to pre-weakening and mineralogical composition. Secondly, it emphasises some of the causes of the variation observed during ore characterisation on a laboratory breakage device, attributable to pre-weakening and mineralogical composition.
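
    As a rough, hypothetical illustration of the bonded-particle idea, far simpler than ESyS-Particle's 3-D physics and not its API (all parameters are assumptions), the sketch below drives a 1-D chain of bonded particles with an end impact and irreversibly breaks any bond whose tensile strain exceeds a threshold:

        import numpy as np

        def bonded_chain_impact(n=20, k=1e4, m=1.0, eps_break=1e-3, v0=-2.0,
                                dt=1e-4, steps=4000):
            # 1-D chain of unit-spaced particles joined by linear-elastic bonds.
            x = np.arange(n, dtype=float)          # positions (rest spacing = 1)
            v = np.zeros(n); v[-1] = v0            # impact: end particle driven inward
            bonded = np.ones(n - 1, dtype=bool)    # bond i joins particles i and i+1
            for _ in range(steps):
                ext = np.diff(x) - 1.0             # bond extension = strain here
                bonded &= ext < eps_break          # irreversible tensile failure
                fb = k * ext * bonded              # Hooke's law on intact bonds only
                f = np.zeros(n)
                f[:-1] += fb                       # a stretched bond pulls i rightward,
                f[1:] -= fb                        # and i+1 leftward
                v += (f / m) * dt                  # semi-implicit Euler step
                x += v * dt
            return int(bonded.sum()), n - 1        # surviving bonds out of n - 1

        survived, total = bonded_chain_impact()

    The compression wave launched by the impact reflects from the free end as a tension wave, which is what snaps bonds in this toy; a real BPM adds shear and contact interactions, 3-D packing, and the calibrated microparameters discussed above.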