5,702 research outputs found
Task-aware Adaptive Learning for Cross-domain Few-shot Learning
Although existing few-shot learning methods yield promising results for in-domain queries, they still suffer from weak cross-domain generalization: the limited support data demands effective knowledge transfer, and domain shift makes that transfer harder. To address this challenge, prior work improves adaptation by introducing task-specific parameters that are directly estimated and optimized for each task. However, adding a fixed number of extra parameters ignores the diverse domain shifts between target tasks and the source domain, which limits efficacy. In this paper, we first observe that the appropriate task-specific parameter configuration depends on the target task: too many task-specific parameters can overfit, too few can under-adapt, and the optimal configuration varies across test tasks. Based on these findings, we propose the Task-aware Adaptive Network (TA2-Net), which is trained with reinforcement learning to adaptively estimate the optimal task-specific parameter configuration for each test task; it learns, for example, that tasks with large domain shift typically need more task-specific parameters for adaptation. We evaluate our model on Meta-Dataset, and empirical results show that it outperforms existing state-of-the-art methods
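The adaptive-configuration idea above can be pictured with a small sketch: a policy network looks at a task embedding and decides how many task-specific adapter modules to attach, and is updated with a REINFORCE-style objective. This is a minimal illustration under assumed shapes and module names (Adapter, ConfigPolicy); it is not the authors' TA2-Net implementation, and the reward below is a placeholder for the query accuracy a real method would measure after adapting on the support set.

```python
# Illustrative sketch (not the authors' code): choosing how many task-specific
# adapter layers to attach per task, with a REINFORCE-style policy update.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """A small residual task-specific module applied to frozen backbone features."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(dim, dim // 4), nn.ReLU(), nn.Linear(dim // 4, dim))

    def forward(self, x):
        return x + self.proj(x)

class ConfigPolicy(nn.Module):
    """Maps a task embedding to a distribution over adapter counts 0..max_adapters."""
    def __init__(self, dim, max_adapters):
        super().__init__()
        self.head = nn.Linear(dim, max_adapters + 1)

    def forward(self, task_embedding):
        return torch.distributions.Categorical(logits=self.head(task_embedding))

dim, max_adapters = 64, 4
policy = ConfigPolicy(dim, max_adapters)
adapters = nn.ModuleList([Adapter(dim) for _ in range(max_adapters)])
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# One toy "episode": support features summarise the task; the reward stands in
# for query accuracy after adaptation (in a real method the adapters themselves
# would also be fitted on the support set).
support_feats = torch.randn(25, dim)          # e.g. 5-way 5-shot support embeddings
query_feats = torch.randn(75, dim)
task_embedding = support_feats.mean(dim=0)

dist = policy(task_embedding)
k = dist.sample()                              # number of adapters for this task
adapted = query_feats
for adapter in list(adapters)[: int(k)]:
    adapted = adapter(adapted)

reward = torch.rand(())                        # placeholder for query accuracy
loss = -dist.log_prob(k) * reward              # REINFORCE objective
optimizer.zero_grad()
loss.backward()
optimizer.step()
```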
Applications of Deep Learning Models in Financial Forecasting
In financial markets, deep learning techniques have sparked a revolution, reshaping conventional approaches and amplifying predictive capabilities. This thesis explored the application of deep learning models to uncover insights and methodologies aimed at advancing financial forecasting.
The crux of the research problem lies in applying predictive models to financial domains characterised by high volatility and uncertainty. The thesis investigated advanced deep-learning methodologies for financial forecasting, addressing the challenges posed by the dynamic nature of financial markets. These challenges were tackled with a range of techniques, including convolutional neural networks (CNNs), long short-term memory networks (LSTMs), autoencoders (AEs), and variational autoencoders (VAEs), along with approaches such as encoding financial time series as images. Transfer learning, generative modelling, and image encoding of time series data were also examined; together, these methodologies offered a comprehensive toolkit for extracting meaningful insights from financial data.
The present work investigated the practicality of a deep-learning CNN-LSTM model within the Directional Change (DC) framework for predicting significant DC events, a task crucial for timely decision-making in financial markets. It also explored the potential of autoencoders and variational autoencoders to denoise financial time series and enhance forecasting accuracy, offering promising avenues for improved data representation and subsequent prediction. To further contribute to financial prediction capabilities, a deep multi-model was developed that harnessed pre-trained computer vision models to predict the VVIX, exploiting the cross-disciplinary synergy between computer vision and financial forecasting. By integrating knowledge from these domains, novel insights into the prediction of market volatility were provided
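As a rough illustration of the kind of model the thesis describes, the sketch below pairs a 1D convolution with an LSTM to classify fixed-length windows of a price series (for example, whether a significant directional-change event follows). The layer sizes, window length, and synthetic data are assumptions for illustration, not the thesis's architecture or data.

```python
# Illustrative CNN-LSTM sketch for binary classification of price-series windows
# (e.g. "significant DC event follows" vs "not"); sizes and data are placeholders.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        # 1D convolution extracts local patterns along the time axis.
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # The LSTM models longer-range temporal structure over the conv features.
        self.lstm = nn.LSTM(input_size=16, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, features)
        z = self.conv(x.transpose(1, 2))  # -> (batch, channels, time/2)
        z = z.transpose(1, 2)             # -> (batch, time/2, channels)
        _, (h, _) = self.lstm(z)
        return self.head(h[-1])           # one logit per window

model = CNNLSTM()
x = torch.randn(8, 60, 1)                 # 8 windows of 60 time steps
y = torch.randint(0, 2, (8, 1)).float()   # placeholder event labels
loss = nn.BCEWithLogitsLoss()(model(x), y)
loss.backward()
```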
DIC-Transformer: interpretation of plant disease classification results using image caption generation technology
Disease image classification systems play a crucial role in identifying disease categories in the field of agricultural diseases. However, current plant disease image classification methods can only predict the disease category and do not offer explanations for the characteristics of the predicted disease images. To address this limitation, this paper employed image caption generation technology to produce distinct descriptions for different plant disease categories. A two-stage model called DIC-Transformer, which encompasses three tasks (detection, interpretation, and classification), was proposed. In the first stage, Faster R-CNN, with the Swin Transformer as its backbone, was used to detect the diseased area and generate the feature vector of the diseased image. In the second stage, the model used a Transformer to generate image captions and then produced an image feature vector, weighted by text features, to improve the performance of image classification in the subsequent classification decoder. Additionally, a dataset of images and textual descriptions for agricultural diseases (ADCG-18) was compiled; it contains images of 18 diseases together with descriptive information about their characteristics. Using ADCG-18, the DIC-Transformer was compared with 11 existing classical caption generation methods and 10 image classification models. The caption evaluation indicators include BLEU-1–4, CIDEr-D, and ROUGE; the values of BLEU-1, CIDEr-D, and ROUGE were 0.756, 450.51, and 0.721, which are 0.01, 29.55, and 0.014 higher than those of the highest-performing comparison model, Fc. The classification evaluation metrics include accuracy, recall, and F1 score, with accuracy at 0.854, recall at 0.854, and F1 score at 0.853, which are 0.024, 0.078, and 0.075 higher than those of the highest-performing comparison model, MobileNetV2. These results indicate that the DIC-Transformer outperforms the comparison models in both classification and caption generation
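A minimal sketch of the second-stage idea as described above: caption (text) features attend over detected region (image) features so that the classifier sees a text-weighted image representation. The dimensions, the single cross-attention layer, and the 18-class head are illustrative assumptions rather than the paper's implementation.

```python
# Illustrative sketch: weighting image region features by caption text features
# via cross-attention before classification. Shapes and names are assumptions.
import torch
import torch.nn as nn

dim, n_regions, n_tokens, n_classes = 256, 10, 20, 18

region_feats = torch.randn(1, n_regions, dim)   # e.g. detector outputs for lesion regions
text_feats = torch.randn(1, n_tokens, dim)      # e.g. caption decoder hidden states

# Cross-attention: text tokens query the image regions.
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
weighted, attn_weights = attn(query=text_feats, key=region_feats, value=region_feats)

# Pool the text-weighted image representation and classify into 18 disease classes.
classifier = nn.Linear(dim, n_classes)
logits = classifier(weighted.mean(dim=1))
print(logits.shape)  # torch.Size([1, 18])
```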
Dataflow Programming and Acceleration of Computationally-Intensive Algorithms
The volume of unstructured textual information continues to grow due to recent technological advancements. This has resulted in exponential growth of information generated in various formats, including blogs, posts, social networking, and enterprise documents. Numerous Enterprise Architecture (EA) documents are also created daily, such as reports, contracts, agreements, frameworks, architecture requirements, designs, and operational guides. Processing and computing this massive amount of unstructured information necessitates substantial computing capabilities and the implementation of new techniques. It is critical to manage this unstructured information through a centralized knowledge management platform. Knowledge management is the process of managing information within an organization; it involves creating, collecting, organizing, and storing information in a way that makes it easily accessible and usable. The research involved the development of a textual knowledge management system, and two use cases were considered for extracting textual knowledge from documents. The first case study focused on the safety-critical documents of a railway enterprise. Safety is of paramount importance in the railway industry, and several EA documents, including manuals, operational procedures, and technical guidelines, contain critical information. Digitalization of these documents is essential for analysing the vast amount of textual knowledge they contain, in order to improve the safety and security of railway operations. A case study was conducted between the University of Huddersfield and the Railway Safety Standard Board (RSSB) to analyse EA safety documents using natural language processing (NLP). A graphical user interface was developed that includes various document processing features such as semantic search, document mapping, text summarization, and visualization of key trends. For the second case study, open-source data was utilized and textual knowledge was extracted. Several features were also developed, including kernel distribution, analysis of key trends, and sentiment analysis of words (such as unique, positive, and negative words) within the documents. Additionally, a heterogeneous framework was designed using CPUs/GPUs and FPGAs to analyse the computational performance of document mapping
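As a toy illustration of the semantic-search feature mentioned above, the sketch below ranks a handful of document snippets against a query by cosine similarity. TF-IDF is used here as a lightweight stand-in for whatever embedding model the real system employs, and the example documents and query are invented.

```python
# Illustrative sketch of a document-search feature: rank documents against a query
# by cosine similarity. TF-IDF stands in for an embedding-based semantic model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Operational procedure for track maintenance and worker safety.",
    "Technical guideline for signalling equipment inspection.",
    "Incident report template and escalation process.",
]
query = "safety rules for maintenance staff"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)
query_vec = vectorizer.transform([query])

scores = cosine_similarity(query_vec, doc_matrix).ravel()
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```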
Gender bias in transformers: A comprehensive review of detection and mitigation strategies
Gender bias in artificial intelligence (AI) has emerged as a pressing concern with profound implications for individuals’ lives. This paper presents a comprehensive survey that explores gender bias in Transformer models from a linguistic perspective. While the existence of gender bias in language models has been acknowledged in previous studies, there remains a lack of consensus on how to measure and evaluate this bias effectively. Our survey critically examines the existing literature on gender bias in Transformers, shedding light on the diverse methodologies and metrics employed to assess bias. Several limitations of current approaches to measuring gender bias in Transformers are identified, including the use of incomplete or flawed metrics, inadequate dataset sizes, and a lack of standardization in evaluation methods. Furthermore, our survey delves into the potential ramifications of gender bias in Transformers for downstream applications, including dialogue systems and machine translation. We underscore the importance of fostering equity and fairness in these systems by emphasizing the need for heightened awareness and accountability in developing and deploying language technologies. This paper serves as a comprehensive overview of gender bias in Transformer models, providing novel insights and offering valuable directions for future research in this critical domain
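One simple probe from the family of measurements this literature discusses compares a masked language model's scores for gendered fill-ins of role templates; a sketch is below. The model choice, templates, and target words are illustrative assumptions, not the survey's evaluation protocol.

```python
# Illustrative bias probe: compare a masked LM's probabilities for gendered
# pronouns in role templates. Model and templates are placeholders.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "[MASK] is a nurse.",
    "[MASK] is an engineer.",
]

for template in templates:
    # Restrict the fill-mask predictions to the two target pronouns and compare scores.
    scores = {r["token_str"]: r["score"] for r in fill(template, targets=["he", "she"])}
    print(template, scores)
```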
Cross-frame feature-saliency mutual reinforcing for weakly supervised video salient object detection
Sound Event Detection by Exploring Audio Sequence Modelling
Everyday sounds in real-world environments are a powerful source of information by which humans can interact with their environments. Humans can infer what is happening around them by listening to everyday sounds; at the same time, automatically recognising, understanding, and interpreting everyday sounds is a challenging task for a computer algorithm in a smart device. Sound event detection (SED) is the process of transcribing an audio recording into sound event tags with onset and offset time values, which involves both classification and segmentation of sound events in the given recording. SED has numerous applications in everyday life, including security and surveillance, automation, healthcare monitoring, multimedia information retrieval, and assisted living technologies. SED is to everyday sounds what automatic speech recognition (ASR) is to speech and automatic music transcription (AMT) is to music. The fundamental questions in designing a sound recognition system are which portion of a sound event the system should analyse, and what proportion of a sound event it should process, in order to claim a confident detection of that particular event. While the classification of sound events has improved considerably in recent years, the temporal segmentation of sound events has not improved to the same extent. The aim of this thesis is to propose and develop methods that improve the segmentation and classification of everyday sound events in SED models. In particular, the thesis explores the segmentation of sound events by investigating audio sequence encoding-based and audio sequence modelling-based methods, in an effort to improve overall sound event detection performance. In the first phase of the thesis, efforts are directed towards improving sound event detection by explicitly conditioning the audio sequence representations of an SED model using sound activity detection (SAD) and onset detection. To achieve this, we propose multi-task learning-based SED models in which SAD and onset detection serve as auxiliary tasks for the SED task. The next part of the thesis explores self-attention-based audio sequence modelling, which aggregates audio representations based on temporal relations within and between sound events, scored on the basis of the similarity of sound event portions in audio event sequences. We propose SED models that include memory-controlled, adaptive, dynamic, and source separation-induced self-attention variants, with the aim of improving overall sound recognition
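A minimal sketch of the multi-task idea from the first phase: a shared recurrent encoder over a log-mel feature sequence feeds a frame-level SED head and an auxiliary sound activity detection (SAD) head, trained jointly with a weighted sum of losses. The feature dimensions, class count, and loss weight are placeholders, not the thesis's configuration.

```python
# Illustrative multi-task sketch: shared encoder with a frame-level SED head and
# an auxiliary sound-activity-detection (SAD) head. Sizes and the loss weight
# are placeholders.
import torch
import torch.nn as nn

class MultiTaskSED(nn.Module):
    def __init__(self, n_mels=64, n_classes=10, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(n_mels, hidden, batch_first=True, bidirectional=True)
        self.sed_head = nn.Linear(2 * hidden, n_classes)  # per-frame event activity
        self.sad_head = nn.Linear(2 * hidden, 1)          # per-frame any-sound activity

    def forward(self, x):                  # x: (batch, frames, n_mels)
        h, _ = self.encoder(x)
        return self.sed_head(h), self.sad_head(h)

model = MultiTaskSED()
x = torch.randn(4, 200, 64)                                # 4 clips, 200 frames of log-mel features
sed_target = torch.randint(0, 2, (4, 200, 10)).float()     # placeholder frame-level event labels
sad_target = sed_target.max(dim=-1, keepdim=True).values   # "active" if any event is active

sed_logits, sad_logits = model(x)
bce = nn.BCEWithLogitsLoss()
loss = bce(sed_logits, sed_target) + 0.5 * bce(sad_logits, sad_target)  # auxiliary SAD loss
loss.backward()
```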
Accessibility at Film Festivals: Guidelines for Inclusive Subtitling
In today's media-dominated world, the imperative for accessibility has never been greater, and ensuring that audiovisual experiences cater to individuals with sensory disabilities has become a pressing concern. One of the key initiatives in this endeavour is inclusive subtitling (IS), a practice rooted in the broader contexts of subtitling for the deaf and hard of hearing (SDH/CC), audiovisual translation studies (AVTS), media accessibility studies (MAS), and the evolving field of Deaf studies (DS). This study aims to offer a comprehensive exploration of how inclusive subtitling contributes to fostering accessible and inclusive audiovisual experiences, with a particular focus on its implications within the unique environment of film festivals. To gain a holistic perspective of inclusive subtitling, it is essential to examine its lineage in relation to analogous practices, which is the focus of the first chapter. Inclusive subtitling is an extension of SDH/CC, designed for individuals with hearing impairments, and SDH/CC, in turn, is a nuanced variation of traditional subtitling extensively explored within the realm of AVTS. To encapsulate the diverse techniques and modalities aimed at making audiovisual content universally accessible, the study adopts the term "Audiovisual Accessibility" (AVA). The second chapter explores the interconnection of accessibility studies (AS), AVTS, and MAS, highlighting their symbiotic relationship and their role in framing inclusive subtitles within these fields. These interconnections are pivotal in shaping a framework for the practice of inclusive subtitling, enabling a comprehensive examination of its applicability and research implications. The third chapter delves into Deaf studies and the evolution of Deafhood, which hinges on the history and culture of Deaf individuals. This chapter elucidates the distinction between ‘deafness’ as a medical construct and ‘Deafhood’ as a cultural identity, crucial to the understanding of audiovisual accessibility and its intersection with the Deaf community's perspectives. In the fourth chapter, the focus turns to the exploration of film festivals, with a specific emphasis on the crucial role of subtitles in enhancing accessibility, particularly when films are presented in their original languages. The chapter marks a critical point, highlighting the inherent connection between subtitles and the immersive nature of film festivals that aspire to promote inclusivity in the cinematic experience. The emphasis on inclusivity extends to the evolution of film festivals, giving rise to more advanced forms, including accessible film festivals and Deaf film festivals. At the core of the chapter is a thorough examination of the corpus, specifically the SDH/CC of films spanning the 2020 to 2023 editions of two highly significant film festivals, namely BFI Flare and the London Film Festival. The corpus serves as the foundation upon which my research unfolds, providing a nuanced understanding of the role subtitles play in film festival contexts. The main chapter, chapter five, thoroughly analyses the technical and linguistic aspects of inclusive subtitling, drawing insights from the Inclusive Subtitling Guidelines, a two-version document I devised, and offering real-world applications supported by a case study at an Italian film festival and another case study of the short film Pure, with the relevant inclusive subtitles file annexed.
In conclusion, the research sets the stage for a comprehensive exploration of inclusive subtitling's role in ensuring accessible and inclusive audiovisual experiences, particularly within film festivals. It underscores the importance of accessibility in the world of audiovisual media and highlights the need for inclusive practices to cater to diverse audiences
Photographer-guided attributes for underwater image aesthetics
Automated aesthetic assessment of photographs is an active research area with applications in image editing and retrieval. Many factors have been suggested as important in making an image ‘good’ or ‘aesthetically pleasing’. However, there is no consensus in the literature on a definitive set of attributes that contribute to image aesthetics for underwater images, which include features specific to an aquatic environment. In this research we interview underwater photographers and apply thematic analysis to their responses with the aim of determining which attributes are important for an aesthetically pleasing underwater image. The results define a set of nine key attributes (i.e. Aesthetics, Aquatic features, Colour, Composition, Image precision, Lighting, Novelty, Subject(s), and Technical competence). These findings will guide future work in automated assessment of underwater image aesthetics