5,873 research outputs found

    Domain-specific implementation of high-order Discontinuous Galerkin methods in spherical geometry

    In recent years, domain-specific languages (DSLs) have achieved significant success in large-scale efforts to reimplement existing meteorological models in a performance-portable manner. The dynamical cores of these models are based on finite difference and finite volume schemes, and existing DSLs are generally limited to supporting only these numerical methods. In the meantime, there have been numerous attempts to use high-order Discontinuous Galerkin (DG) methods for atmospheric dynamics, which are currently largely unsupported in mainstream DSLs. In order to link these developments, we present two domain-specific languages which extend the existing GridTools (GT) ecosystem to high-order DG discretization. The first is a C++-based DSL called G4GT, which, despite being no longer supported, gave us the impetus to implement extensions to the subsequent Python-based production DSL called GT4Py to support the operations needed for DG solvers. As a proof of concept, the shallow water equations in spherical geometry are implemented in both DSLs, thus providing a blueprint for the application of domain-specific languages to the development of global atmospheric models. We believe this is the first GPU-capable DSL implementation of DG in spherical geometry. The results demonstrate that a DSL designed for finite difference/volume methods can be successfully extended to implement a DG solver, while preserving the performance portability of the DSL. ISSN:0010-4655, ISSN:1879-294
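The GT4Py/G4GT implementations themselves are not shown in this listing. Purely as a hedged illustration of the element-local operations a DG solver performs (apply the inverse mass matrix, accumulate volume and numerical-flux terms), here is a minimal plain-NumPy sketch, not the DSL, of a modal P1 DG discretisation of 1D linear advection with an upwind flux; all names and numbers are illustrative, not from the paper.

```python
import numpy as np

# Minimal 1D modal DG(P1) solver for u_t + a u_x = 0 with an upwind flux.
# Each element spans the reference interval [-1, 1] (width h = 2), so the
# element mass matrix for the Legendre basis {1, x} is diag(2, 2/3).

def dg_rhs(u0, u1, a=1.0):
    """Semi-discrete right-hand side for the modal coefficients (u0, u1)."""
    # Upwind numerical flux for a > 0: take the trace from the left.
    left = np.roll(u0 + u1, 1)   # left neighbour's value at its right edge
    right = u0 + u1              # this element's value at its right edge
    du0 = -a * (right - left) / 2.0                      # apply M^{-1} = 1/2
    du1 = (2.0 * a * u0 - a * (right + left)) * 3.0 / 2.0  # apply M^{-1} = 3/2
    return du0, du1

K = 16
x = np.linspace(0.0, 2 * np.pi, K, endpoint=False)
u0 = 1.0 + np.sin(x)   # cell means
u1 = np.zeros(K)       # slopes

mass_before = 2.0 * u0.sum()
dt = 0.01
for _ in range(100):   # forward Euler, fine for a short demo run
    du0, du1 = dg_rhs(u0, u1)
    u0, u1 = u0 + dt * du0, u1 + dt * du1
mass_after = 2.0 * u0.sum()
```

Forward Euler and element width h = 2 keep the sketch short; total mass (the sum of cell means times element width) is conserved exactly because the periodic upwind fluxes telescope.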

    Applications of Deep Learning Models in Financial Forecasting

    In financial markets, deep learning techniques sparked a revolution, reshaping conventional approaches and amplifying predictive capabilities. This thesis explored the applications of deep learning models to unravel insights and methodologies aimed at advancing financial forecasting. The crux of the research problem lies in the applications of predictive models within financial domains, characterised by high volatility and uncertainty. This thesis investigated the application of advanced deep-learning methodologies in the context of financial forecasting, addressing the challenges posed by the dynamic nature of financial markets. These challenges were tackled by exploring a range of techniques, including convolutional neural networks (CNNs), long short-term memory networks (LSTMs), autoencoders (AEs), and variational autoencoders (VAEs), along with approaches such as encoding financial time series into images. Through analysis, methodologies such as transfer learning, convolutional neural networks, long short-term memory networks, generative modelling, and image encoding of time series data were examined. These methodologies collectively offered a comprehensive toolkit for extracting meaningful insights from financial data. The present work investigated the practicality of a deep learning CNN-LSTM model within the Directional Change framework to predict significant DC events—a task crucial for timely decision-making in financial markets. Furthermore, the potential of autoencoders and variational autoencoders to enhance financial forecasting accuracy and remove noise from financial time series data was explored. Leveraging their capacity within financial time series, these models offered promising avenues for improved data representation and subsequent forecasting. To further contribute to financial prediction capabilities, a deep multi-model was developed that harnessed the power of pre-trained computer vision models.
    This innovative approach aimed to predict the VVIX, utilising the cross-disciplinary synergy between computer vision and financial forecasting. By integrating knowledge from these domains, novel insights into the prediction of market volatility were provided.
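The thesis's CNN-LSTM model operates within the Directional Change framework; as a hedged sketch (not the thesis code), here is how DC events are typically detected: a trend is confirmed as reversed once the price moves by a threshold theta away from the running extremum.

```python
def dc_events(prices, theta=0.02):
    """Return (index, direction) confirmation points of Directional Change
    events: reversals of at least `theta` from the running extremum."""
    events = []
    ext = prices[0]   # running extremum since the last confirmed event
    mode = 'up'       # the initial trend direction is a convention
    for i, p in enumerate(prices):
        if mode == 'up':
            if p > ext:
                ext = p                       # new high extends the uptrend
            elif p <= ext * (1 - theta):
                events.append((i, 'down'))    # downturn confirmed
                mode, ext = 'down', p
        else:
            if p < ext:
                ext = p                       # new low extends the downtrend
            elif p >= ext * (1 + theta):
                events.append((i, 'up'))      # upturn confirmed
                mode, ext = 'up', p
    return events
```

With theta = 0.02, a drop from a high of 102 to 99 (about 2.9%) confirms a downturn, and a later rise from 98 to 101 (about 3.1%) confirms an upturn.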

    Predicting depression and suicidal tendencies by analyzing online activities using machine learning in android devices

    Artificial Intelligence (AI) has brought about a profound transformation in the realm of technology, with Machine Learning (ML) within AI playing a crucial role in today's healthcare systems. Advanced systems with intellectual abilities resembling those of humans are being created and utilized to carry out intricate tasks. Applications like object recognition, classification, Optical Character Recognition (OCR), and Natural Language Processing (NLP), among others, have started producing impressive results with algorithms trained on the huge datasets readily available these days. Keeping in view the socio-economic implications of the pandemic threat posed to the world by COVID-19, this research aims at improving the quality of life of people suffering from mild depression by timely diagnosing the symptoms using AI in Android devices, especially phones. In cases of severe depression, which is highly likely to lead to suicide, valuable lives can also be saved if adequate help can be dispatched to such patients in time. This can be achieved through automatic analysis of users' data, including text messages, emails, voice calls and internet search history, among other mobile phone activities, using text mining/text analytics, the process of deriving meaningful information from natural language text. Machine learning models analyse the users' behaviour continuously from text and voice communications, identifying whether negative tendencies emerge in the behaviour over a certain period of time, and use this information to make inferences about the mental health state of the patient and instantly request appropriate healthcare before it is too late. In this research, an Android application capable of performing the aforementioned tasks in real-time has been developed and tested for various performance features, with an average accuracy of 95%.
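The trained models and the Android app are not reproduced here. Purely as a toy illustration of the pipeline the abstract describes (score each message, track a rolling window, flag sustained negative tendencies), consider the sketch below; the lexicon, window size and threshold are invented placeholders, and a real system would use a trained classifier with clinical validation.

```python
from collections import deque

# Toy lexicon -- a placeholder for a trained ML text classifier.
NEGATIVE = {"hopeless", "worthless", "tired", "alone", "sad"}

def message_score(text):
    """Fraction of words in a message that match the negative lexicon."""
    words = text.lower().split()
    return sum(w.strip(".,!?") in NEGATIVE for w in words) / max(len(words), 1)

def sustained_negativity(messages, window=3, threshold=0.2):
    """Flag when the rolling mean score over `window` messages exceeds threshold."""
    recent = deque(maxlen=window)
    for msg in messages:
        recent.append(message_score(msg))
        if len(recent) == window and sum(recent) / window > threshold:
            return True
    return False
```

The rolling window is what makes the flag respond to tendencies over a period of time rather than to a single negative message.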

    Language integrated relational lenses

    Relational databases are ubiquitous. Such monolithic databases accumulate large amounts of data, yet applications typically only work on small portions of the data at a time. A subset of the database defined as a computation on the underlying tables is called a view. Querying views is helpful, but it is also desirable to update them and have these changes be applied to the underlying database. This view update problem has been the subject of much previous work, but support by database servers is limited and only rarely available. Lenses are a popular approach to bidirectional transformations, a generalization of the view update problem in databases to arbitrary data. However, perhaps surprisingly, lenses have seldom actually been used to implement updatable views in databases. Bohannon, Pierce and Vaughan propose an approach to updatable views called relational lenses. However, to the best of our knowledge this proposal has not been implemented or evaluated prior to the work reported in this thesis. This thesis proposes programming language support for relational lenses. Language-integrated relational lenses support expressive and efficient view updates, without relying on updatable view support from the database server. By integrating relational lenses into the programming language, application development becomes easier and less error-prone, avoiding the impedance mismatch of having two programming languages. Integrating relational lenses into the language poses additional challenges. As defined by Bohannon et al., relational lenses completely recompute the database, making them inefficient as the database scales. The other challenge is that some parts of the well-formedness conditions are too general for implementation. Bohannon et al. specify predicates using possibly infinite abstract sets and define the type checking rules using relational algebra.
    Incremental relational lenses equip relational lenses with change-propagating semantics that map small changes to the view into (potentially) small changes to the source tables. We prove that our incremental semantics are functionally equivalent to the non-incremental semantics, and our experimental results show orders of magnitude improvement over the non-incremental approach. This thesis introduces a concrete predicate syntax, shows how the required checks are performed on these predicates, and shows that they satisfy the abstract predicate specifications. We discuss trade-offs between static predicates that are fully known at compile time and dynamic predicates that are only known during execution, and introduce hybrid predicates taking inspiration from both approaches. This thesis adapts the typing rules for relational lenses from sequential composition to a functional style of sub-expressions. We prove that any well-typed functional relational lens expression can derive a well-typed sequential lens. We use these additions to relational lenses as the foundation for two practical implementations: an extension of the Links functional language and a library written in Haskell. The second implementation demonstrates how type-level computation can be used to implement relational lenses without changes to the compiler. These two implementations attest to the possibility of turning relational lenses into a practical language feature.
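Relational lenses proper carry well-formedness conditions and change-propagating semantics beyond this listing's scope. As a minimal, hedged sketch of the underlying lens idea only (a `get` computing a view and a `put` translating view updates back to the source), here is a toy `select` lens over a list-of-dicts "table"; this is not the thesis's Links or Haskell implementation, and all names and data are invented.

```python
# Toy asymmetric `select` lens: expose the rows matching a predicate as an
# updatable view while keeping the non-matching rows of the source intact.

def select_lens(pred):
    def get(table):
        """Compute the view: the rows satisfying the predicate."""
        return [row for row in table if pred(row)]

    def put(table, new_view):
        """Translate a view update back: keep hidden rows, replace visible ones."""
        hidden = [row for row in table if not pred(row)]
        return hidden + list(new_view)

    return get, put

get, put = select_lens(lambda r: r["city"] == "Edinburgh")
table = [{"name": "Ada", "city": "Edinburgh"}, {"name": "Bob", "city": "London"}]
view = get(table)
updated = put(table, [{"name": "Ada Lovelace", "city": "Edinburgh"}])
```

The round-trip behaves as expected here: putting back an updated view whose rows still satisfy the predicate leaves the hidden rows untouched, and `get` on the result returns exactly the new view.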

    Path-sensitive Type Analysis with Backward Analysis for Quality Assurance of Dynamic Typed Language Code

    Precise and fast static type analysis for dynamically typed languages is very difficult, mainly because the lack of static type information makes it hard to approximate all possible values of a variable. In practice, existing static type analysis methods are either imprecise or slow. In this paper, we propose a novel method to improve the precision of static type analysis for Python code, in which a backward analysis is used to obtain path-sensitivity. By doing so, our method aims to obtain more precise static type information, which contributes to the overall improvement of static analysis. To show the effectiveness of our method, we conducted a preliminary experiment comparing our method's implementation and an existing analysis tool with respect to precision and time efficiency. The results show that our method provides more precise type analysis with fewer false positives than the existing static type analysis tool. They also show that our proposed method increases the analysis time, but it remains within the range of practical use.
    Comment: 9 pages with 6 figures
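The paper's actual analysis works over Python control-flow graphs; as a heavily simplified, hypothetical model of why path-sensitivity helps, consider this toy: each path carries type facts for a variable `x`, a requirement propagated backward from the use site prunes infeasible paths, and only the surviving types are joined. All names here are invented for illustration.

```python
# Toy model of path-sensitive type refinement for a dynamic language.

def join(t1, t2):
    """Least upper bound in a flat type lattice with top element 'Any'."""
    return t1 if t1 == t2 else "Any"

def analyse(paths, required):
    """Each path is a list of (var, type) facts. Keep only paths whose fact
    for 'x' is consistent with the type `required` at the use site, then
    join the surviving types for 'x'."""
    feasible = [dict(p) for p in paths
                if dict(p).get("x", required) == required]
    if not feasible:
        return None  # no path reaches the use site with a compatible type
    t = required
    for env in feasible:
        t = join(t, env.get("x", required))
    return t
```

A path-insensitive analysis would join the facts from all paths first, yielding the imprecise top type "Any" even when the use-site constraint rules one path out.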

    PMP: Privacy-Aware Matrix Profile against Sensitive Pattern Inference

    Recent rapid development of sensor technology has allowed massive fine-grained time series (TS) data to be collected, setting the foundation for the development of data-driven services and applications. During the process, data sharing is often involved to allow third-party modelers to perform specific time series data mining (TSDM) tasks based on the needs of the data owner. The high resolution of TS brings new challenges in protecting privacy. While meaningful information in high-resolution TS shifts from concrete point values to local shape-based segments, numerous studies have found that long shape-based patterns could contain more sensitive information and may potentially be extracted and misused by a malicious third party. However, the privacy issue for TS patterns is surprisingly seldom explored in the privacy-preserving literature. In this work, we consider a new privacy-preserving problem: preventing malicious inference on long shape-based patterns while preserving short segment information for utility task performance. To mitigate the challenge, we investigate an alternative approach of sharing the Matrix Profile (MP), which is a non-linear transformation of the original data and a versatile data structure that supports many data mining tasks. We found that while the MP can prevent concrete shape leakage, the canonical correlation in the MP index can still reveal the location of sensitive long patterns. Based on this observation, we design two attacks, named Location Attack and Entropy Attack, to extract the pattern location from the MP. To further protect the MP from these two attacks, we propose a Privacy-Aware Matrix Profile (PMP) via perturbing the local correlation and breaking the canonical correlation in the MP index vector. We evaluate our proposed PMP against baseline noise-adding methods through quantitative analysis and real-world case studies to show the effectiveness of the proposed method.
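Fast MP algorithms (STOMP/SCRIMP) and the actual PMP perturbation are not reproduced here. As a hedged, brute-force sketch of what a Matrix Profile is, and of why its index can leak pattern locations, consider this quadratic-time NumPy toy; the planted series is invented for illustration.

```python
import numpy as np

def znorm(w):
    """Z-normalise a window (constant windows are only mean-centred)."""
    s = w.std()
    return (w - w.mean()) / s if s > 0 else w - w.mean()

def matrix_profile(ts, m):
    """Brute-force matrix profile: distance from each length-m window to its
    nearest neighbour outside a trivial-match exclusion zone."""
    n = len(ts) - m + 1
    subs = np.array([znorm(ts[i:i + m]) for i in range(n)])
    mp = np.full(n, np.inf)
    mp_index = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if abs(i - j) >= m // 2:  # skip trivial self-matches
                d = np.linalg.norm(subs[i] - subs[j])
                if d < mp[i]:
                    mp[i], mp_index[i] = d, j
    return mp, mp_index

# A shape repeated at positions 0 and 8 yields near-zero profile values there,
# and mp_index reveals *where* the matching pattern lies -- the kind of
# location leakage the Location/Entropy attacks exploit.
ts = np.array([0, 1, 0, -1, 0, 5, 6, 7, 0, 1, 0, -1, 0], dtype=float)
mp, mp_index = matrix_profile(ts, m=5)
```

Sharing `mp` alone hides concrete shapes, but the index vector still points each window at the location of its nearest match.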

    Model-independent bubble wall velocities in local thermal equilibrium

    Accurately determining bubble wall velocities in first-order phase transitions is of great importance for the prediction of gravitational wave signals and the matter-antimatter asymmetry. However, it is a challenging task which typically depends on the underlying particle physics model. Recently, it has been shown that assuming local thermal equilibrium can provide a good approximation when calculating the bubble wall velocity. In this paper, we provide a model-independent determination of bubble wall velocities in local thermal equilibrium. Our results show that, under the reasonable assumption that the sound speeds in the plasma are approximately uniform, the hydrodynamics can be fully characterized by four quantities: the phase strength α_n, the ratio of the enthalpies in the broken and symmetric phases Ψ_n, and the sound speeds in both phases, c_s and c_b. We provide a code snippet that allows for a determination of the wall velocity and energy fraction in local thermal equilibrium in any model. In addition, we present a fit function for the wall velocity in the case c_s = c_b = 1/√3.
    Comment: 34 pages, 6 figures; v2 matches published version
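The paper's code snippet and fit function are not reproduced in this listing. As a hedged sketch of the kind of hydrodynamic matching involved, here is the standard bag-model junction condition (the special case c_s = c_b = 1/√3, following Espinosa et al. 2010) relating the wall-frame fluid velocities just in front of (v_+) and behind (v_-) the wall for a transition strength α_+; the paper's contribution is precisely the generalisation beyond this bag case.

```python
import math

def v_plus(v_minus, alpha_plus, branch=-1):
    """Bag-model matching condition across the bubble wall: wall-frame fluid
    velocity in front of the wall, given the velocity behind it and the
    transition strength alpha_plus. branch=-1 picks the deflagration-type
    root, branch=+1 the detonation-type root. The discriminant can turn
    negative for unphysical parameter combinations."""
    a = v_minus / 2.0 + 1.0 / (6.0 * v_minus)
    disc = a * a + alpha_plus ** 2 + (2.0 / 3.0) * alpha_plus - 1.0 / 3.0
    return (a + branch * math.sqrt(disc)) / (1.0 + alpha_plus)
```

In the α_+ → 0 limit there is no transition and the deflagration-branch root reduces to v_+ = v_-, which serves as a sanity check.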

    Automated identification and behaviour classification for modelling social dynamics in group-housed mice

    Mice are often used in biology as exploratory models of human conditions, due to their similar genetics and physiology. Unfortunately, research on behaviour has traditionally been limited to studying individuals in isolated environments and over short periods of time. This can miss critical time-effects, and, since mice are social creatures, bias results. This work addresses this gap in research by developing tools to analyse the individual behaviour of group-housed mice in the home-cage over several days and with minimal disruption. Using data provided by the Mary Lyon Centre at MRC Harwell we designed an end-to-end system that (a) tracks and identifies mice in a cage, (b) infers their behaviour, and subsequently (c) models the group dynamics as functions of individual activities. In support of the above, we also curated and made available a large dataset of mouse localisation and behaviour classifications (IMADGE), as well as two smaller annotated datasets for training/evaluating the identification (TIDe) and behaviour inference (ABODe) systems. This research constitutes the first of its kind in terms of the scale and challenges addressed. The data source (side-view single-channel video with clutter and no identification markers for mice) presents challenging conditions for analysis, but has the potential to give richer information while using industry standard housing. A Tracking and Identification module was developed to automatically detect, track and identify the (visually similar) mice in the cluttered home-cage using only single-channel IR video and coarse position from RFID readings. Existing detectors and trackers were combined with a novel Integer Linear Programming formulation to assign anonymous tracks to mouse identities. This utilised a probabilistic weight model of affinity between detections and RFID pickups. 
    The next task was the implementation of the Activity Labelling module, which classifies the behaviour of each mouse while handling occlusion, to avoid giving unreliable classifications when the mice cannot be observed. Two key aspects of this were (a) careful feature selection, and (b) judicious balancing of the errors of the system in line with their repercussions for our setup. Given these sequences of individual behaviours, we analysed the interaction dynamics between mice in the same cage by collapsing the group behaviour into a sequence of interpretable latent regimes, using both static and temporal (Markov) models. Using a permutation matrix, we were able to automatically assign mice to roles in the HMM, fit a global model to a group of cages, and analyse abnormalities in data from a different demographic.
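The thesis's Integer Linear Programming formulation and probabilistic weight model are not reproduced here. Since a cage holds only a handful of mice, the same optimal one-to-one track-to-identity assignment can be illustrated by brute force over permutations; the affinity numbers below are invented placeholders for scores accumulated from detections co-occurring with RFID pickups.

```python
from itertools import permutations

# affinity[t][m]: affinity between anonymous track t and mouse identity m
# (toy numbers standing in for detection/RFID co-occurrence weights).
affinity = [
    [0.9, 0.1, 0.0],
    [0.2, 0.7, 0.1],
    [0.0, 0.3, 0.8],
]

def assign_tracks(affinity):
    """Maximise total affinity over one-to-one track/identity assignments."""
    n = len(affinity)
    best = max(permutations(range(n)),
               key=lambda perm: sum(affinity[t][perm[t]] for t in range(n)))
    return list(best)

assignment = assign_tracks(affinity)
```

For n mice this enumerates n! assignments, which is fine for cage-sized groups; an ILP or Hungarian-algorithm solver achieves the same optimum at scale.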

    Emotion Recognition for Human-Centered Conversational Agents

    This thesis proposes a study on Emotion Recognition in Conversation (ERC), with a chatbot as reference case study, to enhance conversational agents' ability to understand and respond appropriately to human emotion. The study consists of two phases. The first involves the use of several baselines and the implementation of EmoBERTa to explore aspects of the task, such as preprocessing, balancing techniques and context modelling, tested on an ERC benchmark dataset. The results reveal that punctuation provides key information for the task, that balancing techniques can provide marginal improvements if appropriately selected, and that context can provide additional information, suggesting that a non-static context construction could be beneficial. In the second phase, the effectiveness of a few-shot learning method, SetFit, is explored in the context of ERC to address the scarcity of real labelled data. An incompatibility between the given context definition and the architecture employed by this method called for an adaptation, which proved to be ineffective. The performance of the SetFit method and fine-tuning are compared in a limited data regime. Finally, the study explores the capabilities of a model trained on a specific ERC dataset to adapt to limited data from a different domain using transfer learning and fine-tuning, with inconclusive results. The findings and insights from this work can lay the groundwork for future developments and studies in the growing field of emotion-aware conversational agents and the application of few-shot learning to this task.

    Computer Vision-Based Hand Tracking and 3D Reconstruction as a Human-Computer Input Modality with Clinical Application

    The recent pandemic has impeded patients with hand injuries from connecting in person with their therapists. To address this challenge and improve hand telerehabilitation, we propose two computer vision-based technologies, photogrammetry and augmented reality, as alternative and affordable solutions for visualization and remote monitoring of hand trauma without costly equipment. In this thesis, we extend the application of 3D rendering and a virtual reality-based user interface to hand therapy. We compare the performance of four popular photogrammetry software packages in reconstructing a 3D model of a synthetic human hand from videos captured through a smartphone. The visual quality, reconstruction time and geometric accuracy of the output model meshes are compared. Reality Capture produces the best result, with its output mesh having the lowest error (1 mm) and a total reconstruction time of 15 minutes. We developed an augmented reality app using MediaPipe algorithms that extracts hand key points, finger joint coordinates and angles in real-time from hand images or live stream media. We conducted a study to investigate its input variability and validity as a reliable tool for remote assessment of finger range of motion. The intraclass correlation coefficient between DIGITS and in-person measurement is 0.767–0.81 for finger extension and 0.958–0.857 for finger flexion. Finally, we developed and surveyed the usability of a mobile application that collects patient data (medical history, self-reported pain levels and hand 3D models) and transfers them to therapists. These technologies can improve hand telerehabilitation, aid clinicians in monitoring hand conditions remotely, and support decisions on appropriate therapy, medication, and hand orthoses.
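The DIGITS app itself is not shown here. As a hedged sketch of the underlying geometry (the thesis's MediaPipe pipeline extracts hand landmarks, from which joint angles follow), here is how the angle at a joint keypoint b can be computed from its two neighbouring landmarks a and c; landmarks are simplified to 2D tuples, whereas MediaPipe actually provides 21 3D hand landmarks.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at keypoint b, formed by the segments b->a and b->c
    (e.g. a finger's PIP joint with its neighbouring landmarks)."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    cosv = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp against rounding
    return math.degrees(math.acos(cosv))
```

A right angle and a fully extended (180°) configuration serve as sanity checks; flexion/extension ranges of motion follow by tracking this angle over time.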