47,754 research outputs found

    The fine-structure of volatility feedback I: multi-scale self-reflexivity

    We attempt to unveil the fine structure of volatility feedback effects in the context of general quadratic autoregressive (QARCH) models, which assume that today's volatility can be expressed as a general quadratic form of the past daily returns. The standard ARCH or GARCH framework is recovered when the quadratic kernel is diagonal. The calibration of these models on US stock returns reveals several unexpected features. The off-diagonal (non-ARCH) coefficients of the quadratic kernel are found to be highly significant both in-sample and out-of-sample, but all these coefficients turn out to be one order of magnitude smaller than the diagonal elements. This confirms that daily returns play a special role in the volatility feedback mechanism, as postulated by ARCH models. The feedback kernel exhibits a surprisingly complex structure, incompatible with models proposed so far in the literature. Its spectral properties suggest the existence of volatility-neutral patterns of past returns. The diagonal part of the quadratic kernel is found to decay as a power-law of the lag, in line with the long memory of volatility. Finally, QARCH models suggest some violations of Time Reversal Symmetry in financial time series, which are indeed observed empirically, although of much smaller amplitude than predicted. We speculate that a faithful volatility model should include both ARCH feedback effects and a stochastic component.
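
    To make the quadratic feedback structure concrete, here is a minimal sketch of how a QARCH volatility prediction could be evaluated. The baseline `s2`, the linear term `L`, the kernel matrix `K`, and the function name `qarch_volatility` are illustrative assumptions rather than quantities taken from the paper; a purely diagonal `K` reduces the expression to a standard ARCH(q) recursion.

```python
import numpy as np

def qarch_volatility(returns, s2, L, K):
    """Evaluate a QARCH prediction of today's squared volatility.

    sigma_t^2 = s2 + sum_tau L[tau] * r[t-tau]
                   + sum_{tau, tau'} K[tau, tau'] * r[t-tau] * r[t-tau']

    `returns` holds the q most recent daily returns, ordered from the
    most recent (lag 1) to the oldest (lag q).  A diagonal K recovers
    the familiar ARCH(q) model; off-diagonal terms mix different lags.
    """
    r = np.asarray(returns)
    return s2 + L @ r + r @ K @ r

# Toy example with q = 3 lags (all numbers are illustrative).
q = 3
rng = np.random.default_rng(0)
past_returns = rng.normal(scale=0.01, size=q)   # last three daily returns
L = np.zeros(q)                                 # no linear (leverage) term
K = np.diag([0.10, 0.05, 0.02])                 # diagonal kernel -> ARCH(3)
K[0, 1] = K[1, 0] = 0.005                       # small off-diagonal feedback
print(qarch_volatility(past_returns, s2=1e-5, L=L, K=K))
```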

    Adaptive MBER space-time DFE assisted multiuser detection for SDMA systems

    In this contribution we propose a space-time decision feedback equalization (ST-DFE) assisted multiuser detection (MUD) scheme for multiple-antenna-aided space-division multiple-access systems. A minimum bit error rate (MBER) design is invoked for the MUD, which is shown to be capable of improving the achievable bit error rate performance over that of the minimum mean square error (MMSE) design. An adaptive MBER ST-DFE-MUD is proposed using the least bit error rate (LBER) algorithm, which is demonstrated to consistently outperform the least mean square (LMS) algorithm, while achieving a lower computational complexity than the LMS algorithm for the binary signalling scheme. Simulation results demonstrate that the MBER ST-DFE-MUD is more robust to channel estimation errors, as well as to error propagation imposed by decision feedback errors, than the MMSE ST-DFE-MUD.
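
    As a rough illustration of the sample-by-sample adaptation idea, the sketch below contrasts an LMS update with a simplified least-bit-error-rate style stochastic-gradient update for a real-valued BPSK linear detector. The kernel width `rho`, the step sizes, the data model, and the function names are assumptions made for this illustration only; the actual ST-DFE-MUD in the paper operates on complex space-time observation vectors with a decision-feedback structure.

```python
import numpy as np

def lms_update(w, x, b, mu=0.01):
    """Classic LMS: step along the gradient of the squared error e = b - w^T x."""
    e = b - w @ x
    return w + mu * e * x

def lber_update(w, x, b, mu=0.01, rho=0.3):
    """Simplified LBER-style update for BPSK (b in {-1, +1}).

    The update nudges w along an estimated gradient of the bit error
    probability, obtained by placing a Gaussian kernel of width rho on
    the signed decision variable b * (w^T x).  This is a sketch of the
    idea, not the exact recursion used in the paper.
    """
    y = b * (w @ x)
    kernel = np.exp(-y**2 / (2.0 * rho**2)) / (np.sqrt(2.0 * np.pi) * rho)
    return w + mu * kernel * b * x

# Toy usage on synthetic data (illustrative only).
rng = np.random.default_rng(1)
w_lms = np.zeros(4)
w_lber = np.zeros(4)
h = rng.normal(size=4)                      # fictitious channel signature
for _ in range(1000):
    b = rng.choice([-1.0, 1.0])
    x = b * h + 0.3 * rng.normal(size=4)    # noisy received vector
    w_lms = lms_update(w_lms, x, b)
    w_lber = lber_update(w_lber, x, b)
```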

    A neural model of border-ownership from kinetic occlusion

    Camouflaged animals that have very similar textures to their surroundings are difficult to detect when stationary. However, when an animal moves, humans readily see a figure at a different depth than the background. How do humans perceive a figure breaking camouflage, even though the texture of the figure and its background may be statistically identical in luminance? We present a model that demonstrates how the primate visual system performs figure–ground segregation in extreme cases of breaking camouflage based on motion alone. Border-ownership signals develop as an emergent property in model V2 units whose receptive fields lie near kinetically defined borders that separate the figure and background. Model simulations support border-ownership as a general mechanism by which the visual system performs figure–ground segregation, regardless of whether figure–ground boundaries are defined by luminance or motion contrast. The gradient of motion- and luminance-related border-ownership signals explains the perceived depth ordering of the foreground and background surfaces. Our model predicts that V2 neurons which are sensitive to kinetic edges are selective to border-ownership (magnocellular B cells). A distinct population of model V2 neurons is selective to border-ownership in figures defined by luminance contrast (parvocellular B cells). B cells in model V2 receive feedback from neurons in V4 and MT with larger receptive fields to bias border-ownership signals toward the figure. We predict that neurons in V4 and MT sensitive to kinetically defined figures play a crucial role in determining whether the foreground surface accretes, deletes, or produces a shearing motion with respect to the background. This work was supported in part by CELEST (NSF SBE-0354378 and OMA-0835976), the Office of Naval Research (ONR N00014-11-1-0535) and the Air Force Office of Scientific Research (AFOSR FA9550-12-1-0436).
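
    As a loose, purely illustrative reading of the competition-plus-feedback idea, the toy sketch below shows two hypothetical border-ownership units on opposite sides of an edge whose competition is biased by feedback from a larger-receptive-field figure signal. All names and numbers are invented for illustration; they are not the model's actual equations.

```python
def border_ownership(edge_drive, figure_feedback_left, figure_feedback_right,
                     gain=1.0):
    """Toy competition between two border-ownership (B) units.

    Both units receive the same edge signal; feedback from units with
    larger receptive fields (standing in for V4/MT figure signals)
    biases the competition toward the side containing the figure.
    Returns normalized ownership signals for the left and right sides.
    """
    left = edge_drive * (1.0 + gain * figure_feedback_left)
    right = edge_drive * (1.0 + gain * figure_feedback_right)
    total = left + right
    return left / total, right / total

# A kinetically defined edge with the figure on the left.
own_left, own_right = border_ownership(edge_drive=1.0,
                                       figure_feedback_left=0.8,
                                       figure_feedback_right=0.1)
print(own_left, own_right)   # ownership is assigned mostly to the left side
```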

    A general class of Lagrangian smoothed particle hydrodynamics methods and implications for fluid mixing problems

    Various formulations of smoothed particle hydrodynamics (SPH) have been proposed, intended to resolve certain difficulties in the treatment of fluid mixing instabilities. Most have involved changes to the algorithm which either introduce artificial correction terms or violate what is arguably the greatest advantage of SPH over other methods: manifest conservation of energy, entropy, momentum and angular momentum. Here, we show how a class of alternative SPH equations of motion (EOM) can be derived self-consistently from a discrete particle Lagrangian, guaranteeing manifest conservation, in a manner which tremendously improves the treatment of these instabilities and of contact discontinuities. Saitoh & Makino recently noted that the volume element used to discretize the EOM does not need to explicitly invoke the mass density (as in the ‘standard’ approach); we show how this insight, and the resulting degree of freedom, can be incorporated into the rigorous Lagrangian formulation that retains ideal conservation properties and includes the ‘∇h’ terms that account for variable smoothing lengths. We derive a general EOM for any choice of volume element (particle ‘weights’) and method of determining smoothing lengths. We then specialize this to a ‘pressure–entropy formulation’ which resolves problems in the traditional treatment of fluid interfaces. Implementing this in a new version of the GADGET code, we show that it leads to good performance in mixing experiments (e.g. Kelvin–Helmholtz and ‘blob’ tests), and that conservation is maintained even in strong shock/blastwave tests, where formulations without manifest conservation produce large errors. The new formulation also improves the treatment of subsonic turbulence and lessens the need for large kernel particle numbers. The code changes are trivial and entail no additional numerical expense. This provides a general framework for the self-consistent derivation of different ‘flavours’ of SPH.
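
    As a rough sketch of what a pressure-entropy discretization looks like in practice, the snippet below evaluates a smoothed pressure for each particle from particle masses and entropic functions using a cubic spline kernel, i.e. P̄_i = [Σ_j m_j A_j^(1/γ) W_ij]^γ. The function names and the fixed smoothing length are illustrative simplifications; the full EOM in the paper also carries the ‘∇h’ correction terms and adaptive smoothing lengths.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 3D cubic spline (M4) kernel with compact support 2h."""
    q = r / h
    sigma = 1.0 / (np.pi * h**3)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def smoothed_pressure(pos, mass, entropy, h, gamma=5.0 / 3.0):
    """Pressure-entropy estimate P_i = [sum_j m_j A_j^(1/gamma) W_ij]^gamma."""
    n = len(mass)
    pbar = np.empty(n)
    for i in range(n):
        r_ij = np.linalg.norm(pos - pos[i], axis=1)
        w_ij = cubic_spline_kernel(r_ij, h)
        pbar[i] = np.sum(mass * entropy**(1.0 / gamma) * w_ij)**gamma
    return pbar

# Tiny random particle set (illustrative only).
rng = np.random.default_rng(2)
pos = rng.uniform(size=(100, 3))
mass = np.full(100, 1.0 / 100)
entropy = np.ones(100)            # entropic function A_i = P_i / rho_i^gamma
print(smoothed_pressure(pos, mass, entropy, h=0.1)[:5])
```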

    Minimum Bit-Error Rate Design for Space-Time Equalisation-Based Multiuser Detection

    A novel minimum bit-error rate (MBER) space–time equalization (STE)-based multiuser detector (MUD) is proposed for multiple-receive-antenna-assisted space-division multiple-access systems. It is shown that the MBER-STE-aided MUD significantly outperforms the standard minimum mean-square error design in terms of the achievable bit-error rate (BER). Adaptive implementations of the MBER STE are considered, and both block-data-based and sample-by-sample adaptive MBER algorithms are proposed. The latter, referred to as the least BER (LBER) algorithm, is compared with the most popular adaptive algorithm, the least mean square (LMS) algorithm. It is shown that, in the case of binary phase-shift keying, the computational complexity of the LBER-STE is about half of that required by the classic LMS-STE. Simulation results demonstrate that the LBER algorithm performs consistently better than the classic LMS algorithm, both in terms of convergence speed and steady-state BER performance.
    Index Terms: Adaptive algorithm, minimum bit-error rate (MBER), multiuser detection (MUD), space–time processing.
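
    To give a flavour of the block-data-based MBER design mentioned above, the sketch below estimates the BER of a linear BPSK detector over a block of training samples with a Gaussian kernel (Parzen window) estimator, which is the kind of smooth BER surrogate an MBER optimizer can minimise. The kernel width `rho`, the synthetic data model, and the function names are assumptions made for this illustration, not the paper's exact formulation.

```python
import math
import numpy as np

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def kernel_ber_estimate(w, X, b, rho=0.3):
    """Parzen-window BER estimate of a linear BPSK detector y = w^T x.

    Each training sample contributes Q(b_k * y_k / (rho * ||w||)),
    i.e. a smoothed indicator of whether that sample is misclassified.
    Minimising this smooth surrogate over w is the block-data-based
    MBER idea (sketch only).
    """
    y = X @ w
    scale = rho * np.linalg.norm(w)
    return np.mean([q_function(bk * yk / scale) for bk, yk in zip(b, y)])

# Toy block of training data (illustrative only).
rng = np.random.default_rng(3)
h = rng.normal(size=4)                          # fictitious signature vector
b = rng.choice([-1.0, 1.0], size=200)
X = np.outer(b, h) + 0.3 * rng.normal(size=(200, 4))
w_init = np.linalg.lstsq(X, b, rcond=None)[0]   # MMSE-style starting point
print(kernel_ber_estimate(w_init, X, b))
```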

    A Neural Model of Surface Perception: Lightness, Anchoring, and Filling-in

    This article develops a neural model of how the visual system processes natural images under variable illumination conditions to generate surface lightness percepts. Previous models have clarified how the brain can compute the relative contrast of images from variably illuminated scenes. How the brain determines an absolute lightness scale that "anchors" percepts of surface lightness to use the full dynamic range of neurons remains an unsolved problem. Lightness anchoring properties include articulation, insulation, configuration, and area effects. The model quantitatively simulates these and other lightness data, such as discounting the illuminant, the double brilliant illusion, lightness constancy and contrast, Mondrian contrast constancy, and the Craik-O'Brien-Cornsweet illusion. The model also clarifies the functional significance for lightness perception of anatomical and neurophysiological data, including gain control at retinal photoreceptors and spatial contrast adaptation at the negative feedback circuit between the inner segment of photoreceptors and interacting horizontal cells. The model retina can hereby adjust its sensitivity to input intensities ranging from dim moonlight to dazzling sunlight. At later model cortical processing stages, boundary representations gate the filling-in of surface lightness via long-range horizontal connections. Variants of this filling-in mechanism run 100-1000 times faster than the diffusion mechanisms of previous biological filling-in models, and show how filling-in can occur at realistic speeds. A new anchoring mechanism called the Blurred-Highest-Luminance-As-White (BHLAW) rule helps simulate how surface lightness becomes sensitive to the spatial scale of objects in a scene. The model is also able to process natural images under variable lighting conditions. Air Force Office of Scientific Research (F49620-01-1-0397); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); Office of Naval Research (N00014-01-1-0624).
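
    One way to read the BHLAW rule computationally is sketched below: blur the luminance image at a chosen spatial scale, take the highest blurred luminance as the "white" anchor, and normalize lightness against it. The Gaussian blur, the clipping, and the function name are assumptions for illustration; the paper's actual anchoring mechanism is embedded in the full filling-in circuit.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bhlaw_lightness(luminance, blur_sigma=5.0):
    """Illustrative Blurred-Highest-Luminance-As-White anchoring.

    The anchor is the maximum of a blurred copy of the luminance image,
    so isolated bright pixels smaller than the blur scale do not set the
    white point, whereas larger bright regions do.
    """
    blurred = gaussian_filter(luminance.astype(float), sigma=blur_sigma)
    anchor = blurred.max()
    return np.clip(luminance / anchor, 0.0, 1.0)

# Toy image: dim background, a tiny bright speck, and a larger bright patch.
img = np.full((64, 64), 0.2)
img[10, 10] = 1.0                 # small highlight, smaller than the blur scale
img[40:55, 40:55] = 0.8           # extended bright surface
lightness = bhlaw_lightness(img, blur_sigma=5.0)
print(lightness[45, 45], lightness[0, 0])   # anchor set by the patch, not the speck
```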