
    A brief history of plastic surgery

    Historically, plastic surgery has been practiced for thousands of years, going back to primitive methods seen in India around 800 B.C. At that time, plastic surgery procedures consisted of skin grafts performed on those who had suffered skin-damaging injuries. Ancient doctors developed methods to suture the skin in ways that helped prevent scarring, and performed reconstructive operations on ears and noses lost in war or as punishment for a crime. The Romans were also practicing plastic surgery by the first century B.C. Their culture greatly admired the beauty of the naked body, prompting them to improve or eliminate the appearance of any bodily defect or deformity; their procedures included breast reduction and scar removal. When citing this document, use the following link: http://essuir.sumdu.edu.ua/handle/123456789/2601

    Many-body correlations in one-dimensional optical lattices with alkaline-earth(-like) atoms

    We explore the rich nature of correlations in the ground state of ultracold atoms trapped in state-dependent optical lattices. In particular, we consider interacting fermionic ytterbium or strontium atoms, realizing a two-orbital Hubbard model with two spin components. We analyze the model in a one-dimensional setting, with the experimentally relevant hierarchy of tunneling and interaction amplitudes, by means of exact diagonalization and matrix product state approaches, and study the correlation functions in the density, spin, and orbital sectors as functions of the variable densities of atoms in the ground and metastable excited states. We show that in certain ranges of densities these atomic systems demonstrate strong density-wave, ferro- and antiferromagnetic, as well as antiferro-orbital correlations. Comment: 8 pages, 5 figures
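
    None of the paper's code is reproduced here, but the exact diagonalization workflow it mentions can be illustrated on a much smaller cousin of its model: a hedged numpy sketch of a single-band, two-site Hubbard dimer (not the two-orbital ytterbium/strontium Hamiltonian; t and U are placeholder values), diagonalized in the half-filling sector to read off a density-density correlation.

```python
import numpy as np

def jw_annihilation_ops(n_modes):
    """Jordan-Wigner fermionic annihilation operators for n_modes modes."""
    I = np.eye(2)
    Z = np.diag([1.0, -1.0])            # sign string enforcing anticommutation
    a = np.array([[0.0, 1.0],           # |0><1|: empties an occupied mode
                  [0.0, 0.0]])
    ops = []
    for j in range(n_modes):
        factors = [Z] * j + [a] + [I] * (n_modes - j - 1)
        op = factors[0]
        for f in factors[1:]:
            op = np.kron(op, f)
        ops.append(op)
    return ops

# Two-site spin-1/2 Hubbard dimer; mode order: (0,up), (0,dn), (1,up), (1,dn)
t, U = 1.0, 4.0                          # placeholder tunneling and interaction
c = jw_annihilation_ops(4)
n = [ci.conj().T @ ci for ci in c]       # occupation-number operators

H = np.zeros((16, 16))
for s in (0, 1):                         # hopping, one term per spin component
    hop = c[0 + s].conj().T @ c[2 + s]
    H -= t * (hop + hop.conj().T)
for site in (0, 1):                      # on-site repulsion U * n_up * n_dn
    H += U * n[2 * site] @ n[2 * site + 1]

# Particle number is conserved, so diagonalize in the half-filling (N = 2) sector
occ = np.array([bin(b).count("1") for b in range(16)])
sector = np.where(occ == 2)[0]
evals, evecs = np.linalg.eigh(H[np.ix_(sector, sector)])
gs = np.zeros(16)
gs[sector] = evecs[:, 0]

# Connected density-density correlation between the two sites
n0, n1 = n[0] + n[1], n[2] + n[3]
corr = gs @ n0 @ n1 @ gs - (gs @ n0 @ gs) * (gs @ n1 @ gs)
print(f"E0 = {evals[0]:.4f}, <n0 n1> - <n0><n1> = {corr:.4f}")
```

    At these placeholder values the ground-state energy is (U - sqrt(U^2 + 16 t^2)) / 2, about -0.828, which serves as a quick correctness check for the sketch.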

    C++ Design Patterns for Low-latency Applications Including High-frequency Trading

    This work aims to bridge the existing knowledge gap in the optimisation of latency-critical code, specifically focusing on high-frequency trading (HFT) systems. The research culminates in three main contributions: the creation of a Low-Latency Programming Repository, the optimisation of a market-neutral statistical arbitrage pairs trading strategy, and the implementation of the Disruptor pattern in C++. The repository serves as a practical guide and is enriched with rigorous statistical benchmarking, while the trading strategy optimisation led to substantial improvements in speed and profitability. The Disruptor pattern showed a significant performance enhancement over traditional queuing methods. Evaluation metrics include speed, cache utilisation, and statistical significance, among others. Techniques like cache warming and constexpr showed the most significant gains in latency reduction. Future directions involve expanding the repository, testing the optimised trading algorithm in a live trading environment, and integrating the Disruptor pattern with the trading algorithm for comprehensive system benchmarking. The work is oriented towards academics and industry practitioners seeking to improve performance in latency-sensitive applications.
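
    The thesis's C++ source is not quoted in this listing, so the sketch below is a hedged single-producer/single-consumer Python rendering of the Disruptor's core idea: a pre-allocated ring buffer coordinated through monotonically increasing sequence counters instead of a locking queue. All names and sizes are illustrative, not the paper's implementation.

```python
class RingBuffer:
    """Minimal single-producer/single-consumer Disruptor-style ring buffer.

    Slots are pre-allocated so the hot path never allocates; producer and
    consumer coordinate only through two monotonically increasing sequence
    counters, mimicking the Disruptor's claim/publish protocol.
    """

    def __init__(self, size=1024):
        assert size & (size - 1) == 0, "size must be a power of two"
        self.mask = size - 1
        self.slots = [None] * size      # pre-allocated storage
        self.write_seq = 0              # next slot the producer will claim
        self.read_seq = 0               # next slot the consumer will read

    def publish(self, event):
        # Busy-spin until a slot is free (the real Disruptor offers wait strategies)
        while self.write_seq - self.read_seq >= len(self.slots):
            pass
        self.slots[self.write_seq & self.mask] = event
        self.write_seq += 1             # publishing = advancing the sequence

    def consume(self):
        while self.read_seq >= self.write_seq:
            pass                        # spin until an event is published
        event = self.slots[self.read_seq & self.mask]
        self.read_seq += 1
        return event

buf = RingBuffer(8)
buf.publish({"symbol": "ABC", "px": 101.25})   # producer side
print(buf.consume())                            # consumer side
```

    Under CPython's GIL each counter has a single writer, which is enough for a sketch; a production C++ version would use std::atomic sequence counters with explicit memory ordering and a pluggable wait strategy in place of the busy-spin.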

    From Deep Filtering to Deep Econometrics

    Calculating true volatility is an essential task for option pricing and risk management, but it is made difficult by market microstructure noise. Particle filtering has been proposed to solve this problem, as it has favorable statistical properties, but it relies on assumptions about the underlying market dynamics. Machine learning methods have also been proposed, but they lack interpretability and often lag in performance. In this paper we implement the SV-PF-RNN, a hybrid neural network and particle filter architecture. Our SV-PF-RNN is designed specifically with stochastic volatility estimation in mind. We then show that it can improve on the performance of a basic particle filter.
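
    The abstract leaves the baseline filter unspecified, so as a point of reference, here is a hedged numpy sketch of the basic bootstrap particle filter that such hybrids are typically compared against, applied to the canonical log-volatility model h_t = mu + phi (h_{t-1} - mu) + sigma_eta eta_t, y_t = exp(h_t / 2) eps_t. All parameter values are placeholders.

```python
import numpy as np

def sv_bootstrap_filter(y, mu=-1.0, phi=0.97, sig_eta=0.15, n_particles=2000, seed=0):
    """Bootstrap particle filter for a basic stochastic-volatility model.

    State:       h_t = mu + phi * (h_{t-1} - mu) + sig_eta * eta_t,  eta_t ~ N(0,1)
    Observation: y_t = exp(h_t / 2) * eps_t,                         eps_t ~ N(0,1)
    Returns the filtered posterior mean of h_t at each step.
    """
    rng = np.random.default_rng(seed)
    # Initialize particles from the stationary distribution of h
    h = mu + sig_eta / np.sqrt(1 - phi**2) * rng.standard_normal(n_particles)
    h_mean = np.empty(len(y))
    for t, y_t in enumerate(y):
        # Propagate through the state transition (the "bootstrap" proposal)
        h = mu + phi * (h - mu) + sig_eta * rng.standard_normal(n_particles)
        # Weight by the observation likelihood N(y_t; 0, exp(h)), up to a constant
        log_w = -0.5 * (h + y_t**2 * np.exp(-h))
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        h_mean[t] = w @ h
        # Multinomial resampling to fight weight degeneracy
        h = rng.choice(h, size=n_particles, p=w)
    return h_mean

# Simulate a path and filter it (placeholder parameters)
rng = np.random.default_rng(1)
T, mu, phi, sig = 500, -1.0, 0.97, 0.15
h_true = np.empty(T); h_true[0] = mu
for t in range(1, T):
    h_true[t] = mu + phi * (h_true[t - 1] - mu) + sig * rng.standard_normal()
y = np.exp(h_true / 2) * rng.standard_normal(T)
h_hat = sv_bootstrap_filter(y)
print("RMSE of filtered log-volatility:", np.sqrt(np.mean((h_hat - h_true) ** 2)))
```

    The SV-PF-RNN presumably replaces or augments parts of this loop with learned components; the sketch only fixes the baseline it is measured against.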

    Transformers versus LSTMs for electronic trading

    With the rapid development of artificial intelligence, the long short-term memory (LSTM) network, one kind of recurrent neural network (RNN), has been widely applied to time series prediction. Like the RNN, the Transformer is designed to handle sequential data. As the Transformer achieved great success in Natural Language Processing (NLP), researchers became interested in its performance on time series prediction, and many Transformer-based solutions for long time series forecasting have appeared recently. However, when it comes to financial time series prediction, LSTM is still the dominant architecture. Therefore, the question this study aims to answer is whether Transformer-based models can be applied to financial time series prediction and beat LSTM. To answer this question, various LSTM-based and Transformer-based models are compared on multiple financial prediction tasks based on high-frequency limit order book data. A new LSTM-based model called DLSTM is built, and a new architecture for the Transformer-based model is designed to adapt it to financial prediction. The experimental results show that Transformer-based models have only a limited advantage in absolute price sequence prediction, while LSTM-based models show better and more robust performance on difference sequence prediction, such as price difference and price movement.
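
    The DLSTM architecture itself is not described in the abstract; purely to illustrate the task setup (a window of limit order book features mapped to the next price difference), here is a hedged PyTorch sketch of a generic LSTM baseline on synthetic data, with all shapes and names invented for the example.

```python
import torch
import torch.nn as nn

class LOBLSTM(nn.Module):
    """Generic LSTM baseline: a window of order-book features -> next price difference."""
    def __init__(self, n_features=40, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # predict from the last hidden state

# Synthetic stand-in for order-book snapshots (real inputs would be normalized levels)
batch, window, n_features = 32, 100, 40
x = torch.randn(batch, window, n_features)
dy = torch.randn(batch, 1)              # next-step price difference target

model = LOBLSTM(n_features)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.MSELoss()(model(x), dy)
loss.backward()
opt.step()
print(f"one training step done, MSE = {loss.item():.4f}")
```

    Predicting differences rather than absolute prices, as the study's results favor, also keeps the target roughly stationary, which is one reason recurrent baselines remain hard to beat on this task.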

    Applying Deep Learning to Calibrate Stochastic Volatility Models

    Stochastic volatility models, in which the volatility is itself a stochastic process, can capture most of the essential stylized facts of implied volatility surfaces and give more realistic dynamics of the volatility smile/skew. However, they come with the significant drawback that they take too long to calibrate. Alternative calibration methods based on Deep Learning (DL) techniques have recently been used to build fast and accurate solutions to the calibration problem. Huge and Savine developed a Differential Machine Learning (DML) approach, in which machine learning models are trained on samples of not only features and labels but also the differentials of labels with respect to features. The present work applies the DML technique to price vanilla European options (i.e. the calibration instruments), more specifically puts, when the underlying asset follows a Heston model, and then calibrates the model on the trained network. DML allows for fast training and accurate pricing, and the trained neural network dramatically reduces the Heston calibration's computation time. In this work we also introduce different regularisation techniques and apply them notably in the case of DML, comparing their performance in reducing overfitting and improving the generalisation error. The DML performance is also compared to that of classical DL (without differentiation) in the case of feed-forward neural networks. We show that DML outperforms DL. The complete code for our experiments is provided in the GitHub repository: https://github.com/asridi/DML-Calibration-Heston-Mode
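
    Huge and Savine's differential training objective is simple to state: fit the labels and their differentials with respect to the features at once. The hedged PyTorch sketch below shows that combined loss on a synthetic stand-in; the Heston pricer and its pathwise differentials from the paper are replaced by a toy closed-form function, and all network sizes and weights are placeholders.

```python
import torch
import torch.nn as nn

# Stand-in for simulated (features, payoff, pathwise-differential) triples;
# in the paper's setting these would come from a Heston Monte Carlo pricer.
def make_batch(n):
    x = 2 * torch.rand(n, 2)                               # toy (spot, vol) placeholders
    y = (x[:, :1] ** 2) * torch.sin(x[:, 1:])              # synthetic "price"
    dydx = torch.stack([2 * x[:, 0] * torch.sin(x[:, 1]),  # analytic differentials
                        x[:, 0] ** 2 * torch.cos(x[:, 1])], dim=1)
    return x, y, dydx

net = nn.Sequential(nn.Linear(2, 64), nn.Softplus(),
                    nn.Linear(64, 64), nn.Softplus(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
lam = 1.0                                                  # weight on the differential term

for step in range(500):
    x, y, dydx = make_batch(256)
    x.requires_grad_(True)
    pred = net(x)
    # Differentials of predictions w.r.t. inputs, kept in the graph for training
    grad = torch.autograd.grad(pred.sum(), x, create_graph=True)[0]
    loss = nn.functional.mse_loss(pred, y) + lam * nn.functional.mse_loss(grad, dydx)
    opt.zero_grad(); loss.backward(); opt.step()
print(f"final combined loss: {loss.item():.5f}")
```

    The differential term acts as a strong regulariser: the network is forced to match the target's shape, not just its values, which is where the reported gains over plain DL come from.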

    Derivatives Sensitivities Computation under Heston Model on GPU

    This report investigates the computation of option Greeks for European and Asian options under the Heston stochastic volatility model on the GPU. We first implemented the exact simulation method proposed by Broadie and Kaya and used it as a baseline for precision and speed. We then proposed a novel method for computing Greeks using the Milstein discretisation scheme on the GPU. Our results show that the proposed method provides a speed-up of up to 200x compared to the exact simulation implementation and that it can be used for both European and Asian options. However, the accuracy of the GPU method for estimating Rho is inferior to that of the CPU method. Overall, our study demonstrates the potential of GPUs for computing derivative sensitivities with numerical methods.
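
    The report's GPU kernels are not reproduced here, but the discretisation half of the method can be sketched on the CPU: a hedged numpy implementation of a full-truncation Milstein step for the Heston variance, with a bump-and-revalue Delta under common random numbers (the report may compute Greeks differently; all parameter values are placeholders). The same vectorised loop maps naturally onto CuPy or CUDA.

```python
import numpy as np

def heston_call_mc(s0, v0, r, kappa, theta, xi, rho, K, T, n_steps, n_paths, rng):
    """European call under Heston: log-Euler for the asset, Milstein for the variance."""
    dt = T / n_steps
    s = np.full(n_paths, float(s0))
    v = np.full(n_paths, float(v0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        vp = np.maximum(v, 0.0)                      # full truncation of the variance
        sq = np.sqrt(vp)
        s *= np.exp((r - 0.5 * vp) * dt + sq * np.sqrt(dt) * z1)
        dw = np.sqrt(dt) * z2
        # Milstein correction for dv = kappa(theta - v)dt + xi sqrt(v) dW is xi^2/4 (dW^2 - dt)
        v += kappa * (theta - vp) * dt + xi * sq * dw + 0.25 * xi**2 * (dw**2 - dt)
    return np.exp(-r * T) * np.maximum(s - K, 0.0).mean()

params = dict(v0=0.04, r=0.03, kappa=1.5, theta=0.04, xi=0.5, rho=-0.7,
              K=100.0, T=1.0, n_steps=252, n_paths=100_000)
h = 0.5                                              # spot bump for central differences

# Common random numbers: re-seed identically so the bump isolates the Delta
price = heston_call_mc(100.0, rng=np.random.default_rng(42), **params)
up    = heston_call_mc(100.0 + h, rng=np.random.default_rng(42), **params)
down  = heston_call_mc(100.0 - h, rng=np.random.default_rng(42), **params)
print(f"price = {price:.4f}, Delta ~ {(up - down) / (2 * h):.4f}")
```

    Re-using the same random stream across the bumped valuations keeps the Monte Carlo noise out of the finite difference, which is what makes bump-and-revalue Greeks usable at these path counts.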