Fairness in overloaded parallel queues
Maximizing throughput for heterogeneous parallel server queues has received
considerable attention from the research community, and the stability region
for such systems is well understood. However, many real-world systems have
periods where they are temporarily overloaded. Under such scenarios, the
unstable queues often starve one another of limited resources. This work
examines what happens during periods of temporary overload. Specifically, we
look at how to distribute stress fairly. We explore the dynamics of the queue
workloads under the MaxWeight scheduling policy during long periods of stress
and discuss how to tune this policy to achieve a target fairness ratio across
these workloads.
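The MaxWeight policy mentioned above serves, at each decision epoch, the queue with the largest product of backlog and service rate. A minimal discrete-time sketch (the Bernoulli arrival/service model and all parameter values are illustrative assumptions, not taken from the paper):

```python
import random

def maxweight_schedule(queue_lengths, service_rates):
    """Pick the queue with the largest weight q_i * mu_i (MaxWeight)."""
    weights = [q * mu for q, mu in zip(queue_lengths, service_rates)]
    return max(range(len(weights)), key=lambda i: weights[i])

def simulate(arrival_rates, service_rates, steps=10000, seed=0):
    """Discrete-time parallel queues, one server, MaxWeight scheduling."""
    rng = random.Random(seed)
    q = [0] * len(arrival_rates)
    for _ in range(steps):
        # Bernoulli arrivals to each queue
        for i, lam in enumerate(arrival_rates):
            if rng.random() < lam:
                q[i] += 1
        # Serve the MaxWeight queue; service completes with prob mu_i
        i = maxweight_schedule(q, service_rates)
        if q[i] > 0 and rng.random() < service_rates[i]:
            q[i] -= 1
    return q

# Overloaded example: total arrival rate exceeds what one server can drain,
# so backlogs grow; the ratio of the final backlogs reflects how MaxWeight
# distributes the overload stress across queues.
print(simulate([0.4, 0.5], [0.6, 0.3]))
```

Under overload, tuning the weights (e.g. scaling each `q_i * mu_i` term) changes which queue absorbs the growing backlog, which is the fairness knob the abstract refers to.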
A fast direct numerical simulation method for characterising hydraulic roughness
We describe a fast direct numerical simulation (DNS) method that promises to
directly characterise the hydraulic roughness of any given rough surface, from
the hydraulically smooth to the fully rough regime. The method circumvents the
unfavourable computational cost associated with simulating high-Reynolds-number
flows by employing minimal-span channels (Jimenez & Moin 1991).
Proof-of-concept simulations demonstrate that flows in minimal-span channels
are sufficient for capturing the downward velocity shift, that is, the Hama
roughness function, predicted by flows in full-span channels. We consider two
sets of simulations, first with modelled roughness imposed by body forces, and
second with explicit roughness described by roughness-conforming grids. Owing
to the minimal cost, we are able to conduct DNSs with increasing roughness
Reynolds numbers while maintaining a fixed blockage ratio, as is typical in
full-scale applications. The present method promises a practical, fast and
accurate tool for characterising hydraulic resistance directly from
profilometry data of rough surfaces.
Comment: Published in the Journal of Fluid Mechanics.
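The Hama roughness function mentioned above is the downward shift ΔU⁺ of the mean velocity profile relative to the smooth-wall log law U⁺ = (1/κ)ln(y⁺) + A. As an illustrative sketch (the constants κ, A and the averaging over the log region are standard conventions, not values from this paper):

```python
import math

def hama_roughness_function(u_plus_rough, y_plus, kappa=0.4, A=5.0):
    """Estimate Delta U+ as the mean downward shift of a rough-wall
    velocity profile from the smooth-wall log law
    U+ = (1/kappa) * ln(y+) + A, over points in the log region."""
    shifts = [(1.0 / kappa) * math.log(yp) + A - up
              for up, yp in zip(u_plus_rough, y_plus)]
    return sum(shifts) / len(shifts)
```

Matching this shift between minimal-span and full-span channels is what the proof-of-concept simulations in the abstract demonstrate.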
An Electronic Market-Maker
This paper presents an adaptive learning model for market-making under the reinforcement learning framework. Reinforcement learning is a learning technique in which agents aim to maximize the long-term accumulated reward. No knowledge of the market environment, such as the order-arrival or price process, is assumed. Instead, the agent learns from real-time market experience and develops explicit market-making strategies, achieving multiple objectives, including maximizing profits and minimizing the bid-ask spread. The simulation results show initial success in bringing learning techniques to building market-making algorithms.
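The abstract does not specify the agent's state, action, or reward design. As a purely illustrative sketch of the general idea, a single-state tabular Q-learning agent that chooses a half-spread each step, where wider quotes earn more per fill but fill less often (the action set, fill model, and learning parameters are all assumptions for this toy):

```python
import random

ACTIONS = [1, 2, 3]           # hypothetical half-spreads, in ticks
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def fill_prob(half_spread):
    # Assumed toy order-arrival model: tighter quotes fill more often
    return max(0.0, 1.0 - 0.3 * half_spread)

def train(episodes=5000, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}   # single-state Q table
    for _ in range(episodes):
        # epsilon-greedy action selection
        a = rng.choice(ACTIONS) if rng.random() < EPS else max(q, key=q.get)
        # reward: the half-spread earned if the quote fills, else nothing
        reward = float(a) if rng.random() < fill_prob(a) else 0.0
        q[a] += ALPHA * (reward + GAMMA * max(q.values()) - q[a])
    return q

q = train()
print(max(q, key=q.get))   # the half-spread the agent learns to prefer
```

A real market-making agent would condition on inventory and order-book state and penalize spread and inventory risk in the reward, per the multiple objectives the abstract describes.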
Systemic Risk and Hedge Funds
Systemic risk is commonly used to describe the possibility of a series of correlated defaults among financial institutions---typically banks---that occur over a short period of time, often caused by a single major event. However, since the collapse of Long Term Capital Management in 1998, it has become clear that hedge funds are also involved in systemic risk exposures. The hedge-fund industry has a symbiotic relationship with the banking sector, and many banks now operate proprietary trading units that are organized much like hedge funds. As a result, the risk exposures of the hedge-fund industry may have a material impact on the banking sector, resulting in new sources of systemic risk. In this paper, we attempt to quantify the potential impact of hedge funds on systemic risk by developing a number of new risk measures for hedge funds and applying them to individual and aggregate hedge-fund returns data. These measures include: illiquidity risk exposure, nonlinear factor models for hedge-fund and banking-sector indexes, logistic regression analysis of hedge-fund liquidation probabilities, and aggregate measures of volatility and distress based on regime-switching models. Our preliminary findings suggest that the hedge-fund industry may be heading into a challenging period of lower expected returns, and that systemic risk is currently on the rise.
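One of the measures listed above, logistic regression of liquidation probabilities, models a binary liquidation indicator as a logistic function of risk covariates. A self-contained sketch fit by gradient ascent on entirely synthetic data (the covariate, coefficients, and sample are illustrative, not the paper's):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit P(liquidation | x) = sigmoid(b0 + b1*x) by gradient ascent
    on the mean log-likelihood."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = sum(y - sigmoid(b0 + b1 * x) for x, y in zip(xs, ys)) / n
        g1 = sum((y - sigmoid(b0 + b1 * x)) * x for x, y in zip(xs, ys)) / n
        b0 += lr * g0
        b1 += lr * g1
    return b0, b1

# Synthetic data: higher "illiquidity exposure" x raises liquidation odds
random.seed(0)
xs = [random.uniform(-2, 2) for _ in range(200)]
ys = [1 if random.random() < sigmoid(-1.0 + 2.0 * x) else 0 for x in xs]
b0, b1 = fit_logistic(xs, ys)
print(round(b0, 2), round(b1, 2))
```

In the paper's setting the covariates would be fund characteristics and return-based risk measures rather than a single synthetic exposure.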
Kinetic-energy systems, density scaling, and homogeneity relations in density-functional theory
We examine the behavior of the Kohn-Sham kinetic energy T_s[ρ] and the interacting kinetic energy T[ρ] under homogeneous density scaling, ρ(r)→ζρ(r). Using convexity arguments, we derive simple inequalities and scaling constraints for the kinetic energy. We also demonstrate that a recently derived homogeneity relation for the kinetic energy [S. B. Liu and R. G. Parr, Chem. Phys. Lett. 278, 341 (1997)] does not hold in real systems, due to nonsmoothness of the kinetic-energy functional. We carry out a numerical study of the density scaling of T_s[ρ] using ab initio densities, and find that it exhibits an effective homogeneity close to 5/3. We also explore alternative reference systems for the kinetic energy which have fewer particles than the true N-particle interacting system. However, we conclude that the Kohn-Sham reference system is the only viable choice for accurate calculation, as it contains the necessary physics.
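The effective homogeneity referred to above can be written as the logarithmic derivative of the functional under the scaling ρ(r)→ζρ(r); the Thomas-Fermi example below is an illustration of why 5/3 is a natural reference value, not a result quoted from the paper:

```latex
% Effective homogeneity of T_s under homogeneous density scaling
\rho(\mathbf r) \to \zeta\,\rho(\mathbf r), \qquad
k_{\mathrm{eff}} \;=\; \frac{1}{T_s[\rho]}
  \left.\frac{\partial T_s[\zeta\rho]}{\partial \zeta}\right|_{\zeta=1}.
% Illustration: for the Thomas--Fermi functional
%   T_{\mathrm{TF}}[\rho] = C_F \int \rho^{5/3}(\mathbf r)\,d\mathbf r,
% one has T_{\mathrm{TF}}[\zeta\rho] = \zeta^{5/3}\,T_{\mathrm{TF}}[\rho],
% hence k_{\mathrm{eff}} = 5/3 exactly.
```

The abstract's numerical finding is that ab initio densities give T_s[ρ] an effective homogeneity close to this Thomas-Fermi value.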
A new chemical concept: Shape chemical potentials
Within the density functional formalism, we introduce the shape chemical potential μ_i^(n) for subsystems, which in the limiting case of point subsystems becomes a local chemical potential μ^(n)(r). It describes the electron-withdrawing/donating ability of specified density fragments. The shape chemical potential does not equalize between subsystems, and provides a powerful new method to identify and describe local features of molecular systems. We explore the formal properties of μ_i^(n), especially with respect to discontinuities, and reconcile our results with Sanderson's principle. We also perform preliminary calculations on model systems of atoms in molecules and atomic shell structure, demonstrating how μ_i^(n) and μ^(n)(r) identify and characterize chemical features as regions of different shape chemical potential. We present arguments that shell structure, and other chemical features, are never obtainable within Thomas-Fermi-type theories.