
    Price Jump Prediction in Limit Order Book

    A limit order book provides information on available limit order prices and their volumes. Based on these quantities, we give an empirical result on the relationship between the bid-ask liquidity balance and the trade sign, and we show that the liquidity balance at the best bid/best ask is quite informative for predicting the direction of the next market order. Moreover, we define a price jump as the arrival of a sell (buy) market order that is executed at a price smaller (larger) than the best bid (best ask) price prevailing just after the preceding market order arrival. Features are then extracted relating to limit order volumes, limit order price gaps, market order information and limit order event information. Logistic regression is applied to predict the price jump from the limit order book's features. LASSO logistic regression is introduced to perform variable selection, which allows us to highlight the importance of different features in predicting the future price jump. To remove intraday seasonality, the analysis is based on two separate datasets: a morning dataset and an afternoon dataset. Based on an analysis of the forty largest French stocks of the CAC 40, we find that the trade sign and the market order size, as well as the liquidity at the best bid (best ask), are consistently informative for predicting the incoming price jump. Comment: 16 pages
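    As a rough, hypothetical sketch of the modeling step described above (not the authors' code or data), the snippet below fits an L1-penalized (LASSO) logistic regression to synthetic order-book-style features such as best bid/ask volumes, trade sign and market order size; all feature names, data and parameters are illustrative assumptions.

    ```python
    # Hypothetical sketch: LASSO logistic regression for price-jump prediction
    # from limit-order-book-style features. Data and feature names are synthetic
    # placeholders, not the authors' dataset.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    X = np.column_stack([
        rng.exponential(1.0, n),   # volume at best bid
        rng.exponential(1.0, n),   # volume at best ask
        rng.choice([-1, 1], n),    # sign of the last trade
        rng.exponential(1.0, n),   # size of the last market order
    ])
    # Synthetic label loosely tied to order size, trade sign and liquidity imbalance.
    logit = 0.8 * X[:, 3] + 0.5 * X[:, 2] - 0.6 * (X[:, 0] - X[:, 1])
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

    # The L1 penalty performs the variable selection highlighted in the abstract.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    model.fit(X, y)
    print(dict(zip(["bid_vol", "ask_vol", "trade_sign", "mo_size"], model.coef_[0])))
    ```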

    Exponential Weight Functions for Quasi-Proportional Auctions

    In quasi-proportional auctions, the allocation is shared among bidders in proportion to their weighted bids. The auctioneer selects a bid weight function, and bidders know the weight function when they bid. In this note, we analyze how weight functions that are exponential in the bid affect bidder behavior. We show that exponential weight functions have a pure-strategy Nash equilibrium, we characterize bids at an equilibrium, and we compare it to an equilibrium for power weight functions. Comment: 16 pages, 16 figures
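    For concreteness, a quasi-proportional auction allocates to bidder i the share w(b_i) / sum_j w(b_j). The minimal sketch below evaluates these shares for an exponential weight function w(b) = exp(alpha * b); the bids and the parameter alpha are illustrative assumptions, not values from the paper.

    ```python
    # Illustrative only: allocation shares in a quasi-proportional auction with
    # an exponential weight function w(b) = exp(alpha * b).
    import math

    def allocation(bids, alpha=1.0):
        weights = [math.exp(alpha * b) for b in bids]
        total = sum(weights)
        return [w / total for w in weights]

    bids = [1.0, 0.8, 0.5]              # hypothetical bids
    print(allocation(bids, alpha=2.0))  # a larger alpha concentrates the allocation on the top bid
    ```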

    Revenue-Maximizing Mechanism Design for Quasi-Proportional Auctions

    In quasi-proportional auctions, each bidder receives a fraction of the allocation equal to the weight of their bid divided by the sum of the weights of all bids, where each bid's weight is determined by a weight function. We study the relationship between the weight function, bidders' private values, the number of bidders, and the seller's revenue in equilibrium. It has been shown that if one bidder has a much higher private value than the others, then a nearly flat weight function maximizes revenue: essentially, threatening the bidder with the highest valuation with having to share the allocation maximizes the revenue. We show that as bidders' private values approach parity, steeper weight functions maximize revenue by making the quasi-proportional auction more like a winner-take-all auction. We also show that steeper weight functions maximize revenue as the number of bidders increases. For flatter weight functions, there is known to be a unique pure-strategy Nash equilibrium. We show that a pure-strategy Nash equilibrium also exists for steeper weight functions, and we give lower bounds for bids at an equilibrium. For a special case that includes the two-bidder auction, we show that the pure-strategy Nash equilibrium is unique, and we show how to compute the revenue at equilibrium. We also show that selecting a weight function based on private value ratios and the number of bidders is necessary for a quasi-proportional auction to produce more revenue than a second-price auction.
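    As a hedged numerical illustration (not the paper's analysis), the sketch below shows how a power weight function w(b) = b**p concentrates the allocation on the highest bid as p grows, which is the winner-take-all tendency discussed above; the bids and exponents are arbitrary placeholders.

    ```python
    # Illustrative: with power weights w(b) = b**p, larger p pushes the
    # quasi-proportional allocation toward winner-take-all.
    def shares(bids, p):
        weights = [b ** p for b in bids]
        total = sum(weights)
        return [w / total for w in weights]

    bids = [1.0, 0.9, 0.8]  # hypothetical bids with nearly equal values
    for p in (0.5, 1.0, 4.0, 16.0):
        print(p, [round(s, 3) for s in shares(bids, p)])
    ```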

    Beverage Bloggers: A Developing Relationship Between Wine Blogger Expertise and Twitter Followers

    This pilot study examines how beverage bloggers' beverage experience and certified wine knowledge influence their wine destination recommendations on Twitter. In the context of social media, the study specifically examines the role of Twitter as a microblog in promoting wine destinations. Bloggers' food and beverage experience and wine credentials were collected through a survey and correlated with their wine destination recommendations, travel habits and geographic home. This exploratory study finds that different levels of wine credentials influence bloggers' recommendations of both international and domestic wine destinations. The analysis also shows that the number of wine credentials a blogger possesses influences the number of followers they have on Twitter.

    Integrated RF-photonic Filters via Photonic-Phononic Emit-Receive Operations

    The creation of high-performance narrowband filters is of great interest for many RF-signal processing applications. To this end, numerous schemes for electronic, MEMS-based, and microwave photonic filters have been demonstrated. Filtering schemes based on microwave photonic systems offer superior flexibility and tunability to traditional RF filters. However, these optical-based filters are typically limited to GHz bandwidths and often have large RF insertion losses, posing challenges for integration into high-fidelity radiofrequency circuits. In this article, we demonstrate a novel type of microwave filter that combines the attractive features of microwave photonic filters with high-Q phononic signal processing using a photonic-phononic emit-receive process. Through this process, an RF signal encoded on a guided optical wave is transduced onto a GHz-frequency acoustic wave, where it may be filtered through shaping of acoustic transfer functions before being re-encoded onto a spatially separate optical probe. This emit-receive functionality, realized in an integrated silicon waveguide, produces MHz-bandwidth band-pass filtering while supporting the low RF insertion losses necessary for high dynamic range in a microwave photonic link. We also demonstrate record-high internal efficiency for emit-receive operations of this type, and show that the emit-receive operation is uniquely suitable for the creation of serial filter banks with minimal loss of fidelity. This photonic-phononic emitter-receiver represents a new method for low-distortion signal processing in an integrated all-silicon device.
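    As a loose illustration of the narrowband response described above (not a model of the actual device), a single acoustic resonance can be approximated by a Lorentzian band-pass filter with a GHz-scale centre frequency and MHz-scale bandwidth; the parameter values below are assumptions chosen only to show the scale separation.

    ```python
    # Rough sketch: a Lorentzian band-pass response with an assumed GHz-scale
    # centre frequency and MHz-scale bandwidth. Parameters are placeholders,
    # not measured device values.
    import numpy as np

    f0 = 5.0e9   # assumed centre frequency (Hz)
    bw = 5.0e6   # assumed 3 dB bandwidth (Hz)
    f = np.linspace(f0 - 50e6, f0 + 50e6, 2001)
    H = 1.0 / (1.0 + 2j * (f - f0) / bw)   # Lorentzian transfer function
    power_db = 20 * np.log10(np.abs(H))
    print(power_db[::500])                 # strong suppression away from the passband
    ```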

    Consistent Bounded-Asynchronous Parameter Servers for Distributed ML

    In distributed ML applications, shared parameters are usually replicated among computing nodes to minimize network overhead. Therefore, a proper consistency model must be carefully chosen to ensure the algorithm's correctness and provide high throughput. Existing consistency models used in general-purpose databases and modern distributed ML systems are either too loose to guarantee correctness of the ML algorithms or too strict, and thus fail to fully exploit the computing power of the underlying distributed system. Many ML algorithms fall into the category of \emph{iterative convergent algorithms}, which start from a randomly chosen initial point and converge to optima by iteratively repeating a set of procedures. We have found that many such algorithms are robust to a bounded amount of inconsistency and still converge correctly. This property allows distributed ML to relax strict consistency models to improve system performance while theoretically guaranteeing algorithmic correctness. In this paper, we present several relaxed consistency models for asynchronous parallel computation and theoretically prove their algorithmic correctness. The proposed consistency models are implemented in a distributed parameter server and evaluated in the context of a popular ML application: topic modeling. Comment: Corrected Title
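    As a hedged sketch of the kind of bounded consistency described above (not the paper's parameter server), the snippet below mimics a bounded-staleness read rule: a worker may keep using cached parameters only while its clock is within a fixed staleness bound of the slowest worker. Class and method names are hypothetical.

    ```python
    # Hypothetical sketch of a bounded-staleness read rule; not the paper's system.
    class BoundedStalenessCache:
        def __init__(self, num_workers, staleness):
            self.clocks = [0] * num_workers
            self.staleness = staleness

        def tick(self, worker):
            """Worker finishes one iteration and advances its clock."""
            self.clocks[worker] += 1

        def can_read_cached(self, worker):
            """Cached reads are allowed while this worker is within the
            staleness bound of the slowest worker's clock."""
            return self.clocks[worker] - min(self.clocks) <= self.staleness

    cache = BoundedStalenessCache(num_workers=3, staleness=2)
    cache.tick(0); cache.tick(0); cache.tick(0)
    print(cache.can_read_cached(0))  # False: worker 0 is 3 ticks ahead of the slowest
    ```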

    State Space LSTM Models with Particle MCMC Inference

    Long Short-Term Memory (LSTM) is one of the most powerful sequence models. Despite its strong performance, however, it lacks the interpretability of state space models. In this paper, we present a way to combine the best of both worlds by introducing State Space LSTM (SSL) models, which generalize the earlier work \cite{zaheer2017latent} on combining topic models with LSTMs. However, unlike \cite{zaheer2017latent}, we do not make any factorization assumptions in our inference algorithm. We present an efficient sampler based on the sequential Monte Carlo (SMC) method that draws from the joint posterior directly. Experimental results confirm the superiority and stability of this SMC inference algorithm on a variety of domains.
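    As a minimal, hedged illustration of SMC inference in a state space model, the sketch below runs a plain bootstrap particle filter on a toy linear-Gaussian model; it only conveys the propagate, weight, and resample idea and is not the SSL sampler from the paper. All model parameters are placeholders.

    ```python
    # Bootstrap particle filter on a toy linear-Gaussian state space model.
    # Illustrates the SMC idea only; not the paper's SSL sampler.
    import numpy as np

    rng = np.random.default_rng(1)
    T, N = 50, 500                                   # time steps, particles
    x_true, y = np.zeros(T), np.zeros(T)
    for t in range(1, T):
        x_true[t] = 0.9 * x_true[t - 1] + rng.normal(0, 0.5)
        y[t] = x_true[t] + rng.normal(0, 0.3)

    particles = rng.normal(0, 1, N)
    estimates = []
    for t in range(1, T):
        particles = 0.9 * particles + rng.normal(0, 0.5, N)  # propagate
        logw = -0.5 * ((y[t] - particles) / 0.3) ** 2        # log-likelihood weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))       # filtering mean
        particles = rng.choice(particles, size=N, p=w)       # resample
    print(estimates[-5:])
    ```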

    A silicon Brillouin laser

    Brillouin laser oscillators offer powerful and flexible dynamics as the basis for mode-locked lasers, microwave oscillators, and optical gyroscopes in a variety of optical systems. However, Brillouin interactions are exceedingly weak in conventional silicon photonic waveguides, stifling progress towards silicon-based Brillouin lasers. The recent advent of hybrid photonic-phononic waveguides has revealed Brillouin interactions to be one of the strongest and most tailorable nonlinearities in silicon. Here, we harness these engineered nonlinearities to demonstrate Brillouin lasing in silicon. Moreover, we show that this silicon-based Brillouin laser enters an intriguing regime of dynamics, in which optical self-oscillation produces phonon linewidth narrowing. Our results provide a platform to develop a range of applications for monolithic integration within silicon photonic circuits. Comment: Updated after publication on June 8, 201

    Primitives for Dynamic Big Model Parallelism

    When training large machine learning models with many variables or parameters, a single machine is often inadequate: the model may be too large to fit in memory, and training can take a long time even with stochastic updates. A natural recourse is to turn to distributed cluster computing in order to harness additional memory and processors. However, naive, unstructured parallelization of ML algorithms can make inefficient use of distributed memory while failing to obtain proportional convergence speedups, or can even result in divergence. We develop STRADS, a framework of primitives for dynamic model parallelism, in order to explore partitioning and update scheduling of model variables in distributed ML algorithms, thus improving their memory efficiency while presenting new opportunities to speed up convergence without compromising inference correctness. We demonstrate the efficacy of model-parallel algorithms implemented in STRADS versus popular implementations for Topic Modeling, Matrix Factorization and Lasso.
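    As an illustrative sketch of the scheduling idea (not STRADS itself), the snippet below partitions model parameters into blocks and greedily groups non-conflicting blocks into rounds that can be updated in parallel; the block names and the conflict relation are hypothetical.

    ```python
    # Hypothetical sketch of model-parallel scheduling: parameter blocks that do
    # not conflict (e.g. do not touch the same data columns) are updated in the
    # same round. Illustrative of the idea only, not STRADS.
    def schedule(blocks, conflicts):
        """Greedily group blocks into rounds of mutually non-conflicting blocks."""
        rounds = []
        for b in blocks:
            for r in rounds:
                if all((b, o) not in conflicts and (o, b) not in conflicts for o in r):
                    r.append(b)
                    break
            else:
                rounds.append([b])
        return rounds

    blocks = ["w0", "w1", "w2", "w3"]
    conflicts = {("w0", "w1"), ("w2", "w3")}  # assumed pairs sharing features
    print(schedule(blocks, conflicts))        # [['w0', 'w2'], ['w1', 'w3']]
    ```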

    Supervised Dimensionality Reduction for Big Data

    To solve key biomedical problems, experimentalists now routinely measure millions or billions of features (dimensions) per sample, with the hope that data science techniques will be able to build accurate data-driven inferences. Because sample sizes are typically orders of magnitude smaller than the dimensionality of these data, valid inferences require finding a low-dimensional representation that preserves the discriminating information (e.g., whether the individual suffers from a particular disease). Existing linear and nonlinear dimensionality reduction methods either are not supervised, scale poorly in big data regimes, lack theoretical guarantees, or are "black-box" methods unsuitable for many applications. We introduce "Linear Optimal Low-rank" projection (LOL), which extends principal components analysis by incorporating, rather than ignoring, class labels, and which facilitates straightforward generalizations. We prove, and substantiate with both synthetic and real data benchmarks, that LOL leads to an improved data representation for subsequent classification, while maintaining computational efficiency and scalability. Using multiple brain imaging datasets consisting of >150 million features, and several genomics datasets with >500,000 features, LOL achieves state-of-the-art classification accuracy while only requiring a few minutes on a standard desktop computer. Comment: 6 figures
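    As a hedged sketch of a supervised low-rank projection in the spirit of the abstract (combining class-mean differences with principal directions and orthonormalizing), not the authors' LOL implementation, the snippet below projects labelled data to d dimensions; the construction details are assumptions.

    ```python
    # Hedged sketch of a supervised low-rank projection: stack class-mean
    # difference directions with top principal directions, then orthonormalize.
    # Details are assumed; this is not the authors' LOL code.
    import numpy as np

    def supervised_projection(X, y, d):
        classes = np.unique(y)
        means = np.array([X[y == c].mean(axis=0) for c in classes])
        delta = (means[1:] - means[0]).T                # class-mean difference directions
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        A = np.hstack([delta, Vt[: max(d - delta.shape[1], 0)].T])
        Q, _ = np.linalg.qr(A)                          # orthonormal basis
        return Q[:, :d]

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 50)), rng.normal(0.5, 1, (100, 50))])
    y = np.array([0] * 100 + [1] * 100)
    P = supervised_projection(X, y, d=5)
    print((X @ P).shape)  # (200, 5): low-dimensional representation for a classifier
    ```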