    Undergraduate Catalog of Studies, 2023-2024

    Integrated Generative Adversarial Networks and Deep Convolutional Neural Networks for Image Data Classification A Case Study for COVID-19

    Convolutional Neural Networks (CNNs) are widely used in automated image classification systems and can exploit the spatial and temporal correlations inherent in a dataset. This study applies state-of-the-art deep learning to image data classification, focusing on the difficulties raised by the COVID-19 pandemic. To improve the accuracy and robustness of COVID-19 image classification, it introduces a methodology that combines the strengths of Deep Convolutional Neural Networks (DCNNs) and Generative Adversarial Networks (GANs). The proposed approach mitigates the shortage of labelled COVID-19 images, a common limitation in related research, and improves the model's ability to distinguish COVID-19-related patterns from healthy lung images. A case study is conducted on a sizable dataset of chest X-ray images covering COVID-19 cases, other respiratory conditions, and healthy lungs; trained on this dataset, the integrated model outperforms conventional DCNN-based techniques in classification accuracy. To address the imbalance in the dataset, the GAN produces synthetic images, and deep features are extracted from every image. The model is evaluated with a range of metrics, including accuracy, precision, recall, and F1-score, providing a thorough picture of its performance in real-world scenarios.
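
    The abstract above describes pairing a GAN, used to synthesise extra chest X-rays for the under-represented COVID-19 class, with a DCNN classifier. The following is a minimal PyTorch sketch of that kind of pipeline; the generator and classifier architectures, the 64x64 image size, and the augment_minority helper are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: GAN-based augmentation of a minority class followed by a small
# DCNN classifier. Architectures and hyperparameters are illustrative only.
import torch
import torch.nn as nn

LATENT = 100  # assumed latent dimension for the generator


class Generator(nn.Module):
    """Toy DCGAN-style generator producing 64x64 grayscale X-ray-like images."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 1, 4, 4, 0), nn.Tanh(),  # -> 1 x 64 x 64
        )

    def forward(self, z):
        return self.net(z)


class DCNNClassifier(nn.Module):
    """Small convolutional classifier for COVID-19 / other respiratory / healthy."""

    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(64 * 16 * 16, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def augment_minority(generator, n_synthetic, minority_label):
    """Draw synthetic minority-class images from an (already trained) generator."""
    with torch.no_grad():
        z = torch.randn(n_synthetic, LATENT, 1, 1)
        images = generator(z)
    labels = torch.full((n_synthetic,), minority_label, dtype=torch.long)
    return images, labels  # append these to the real training set


# Example: synthesise 500 extra images for the COVID-19 class (label 0).
fake_images, fake_labels = augment_minority(Generator(), 500, minority_label=0)
logits = DCNNClassifier()(fake_images[:8])  # forward pass on a small batch
```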

    Computational techniques to interpret the neural code underlying complex cognitive processes

    Advances in large-scale neural recording technology have significantly improved our capacity to elucidate the neural code underlying complex cognitive processes. This thesis investigated two research questions in rodent models. First, what is the role of the hippocampus in memory, and specifically what neural code contributes to spatial memory and navigational decision-making? Second, how is social cognition represented in the medial prefrontal cortex at the level of individual neurons? The thesis begins by investigating memory and social cognition in healthy and diseased states using non-invasive methods (i.e. fMRI and animal behavioural studies). The main body of the thesis then develops our fundamental understanding of the neural mechanisms underpinning these cognitive processes by applying computational techniques to analyse stable large-scale neural recordings. To achieve this, tailored calcium imaging and behaviour preprocessing pipelines were developed and optimised for social interaction and spatial navigation experiments. In parallel, a review was conducted on methods for multivariate/neural population analysis. A comparison of multiple neural manifold learning (NML) algorithms identified that non-linear algorithms such as UMAP are more adaptable across datasets of varying noise and behavioural complexity. The review also illustrates how NML can be applied to disease states in the brain and introduces secondary analyses that can enhance or characterise a neural manifold. Lastly, the preprocessing and analytical pipelines were combined to investigate the neural mechanisms involved in social cognition and spatial memory. The social cognition study explored how neural firing in the medial prefrontal cortex changed as a function of the social dominance paradigm, the "Tube Test". The univariate analysis identified an ensemble of behaviourally tuned neurons that fire preferentially during specific behaviours, such as "pushing" or "retreating", for the animal's own behaviour and/or the competitor's behaviour. Furthermore, in dominant animals the neural population exhibited greater average firing than in subordinate animals. Next, to investigate spatial memory, a spatial recency task was used in which rats learnt to navigate towards one of three reward locations and then recall the rewarded location of the session. During the task, over 1000 neurons were recorded from the hippocampal CA1 region of five rats over multiple sessions. Multivariate analysis revealed that the sequence of neurons encoding an animal's spatial position leading up to a rewarded location was also active in the decision period before the animal navigated to that location. This result suggests that prospective replay of neural sequences in hippocampal CA1 could provide a mechanism by which decision-making is supported.
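
    The review summarised above singles out non-linear manifold learning methods such as UMAP for neural population analysis. Below is a minimal sketch of embedding a timepoints-by-neurons activity matrix with UMAP and checking whether behaviour can be decoded from the resulting manifold; the array shapes, parameter values, and the k-NN decoding step are illustrative assumptions rather than the pipeline used in the thesis.

```python
# Hedged sketch: neural manifold learning with UMAP plus a simple secondary
# analysis (behaviour decoding). Shapes and parameters are illustrative only.
import numpy as np
import umap  # pip install umap-learn
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Assumed inputs: `activity` is a (n_timepoints, n_neurons) matrix of calcium
# activity; `behaviour_label` is an integer behaviour code per timepoint.
rng = np.random.default_rng(0)
activity = rng.normal(size=(5000, 200))
behaviour_label = rng.integers(0, 3, size=5000)

# Non-linear manifold learning: embed the population activity in 3-D.
reducer = umap.UMAP(n_neighbors=30, min_dist=0.1, n_components=3, random_state=0)
embedding = reducer.fit_transform(activity)  # shape (5000, 3)

# Secondary analysis: how well does the low-dimensional manifold separate
# behaviours? Cross-validated k-NN decoding gives a simple readout.
knn = KNeighborsClassifier(n_neighbors=15)
score = cross_val_score(knn, embedding, behaviour_label, cv=5).mean()
print(f"behaviour decoded from the 3-D manifold with accuracy ~{score:.2f}")
```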

    Graduate Catalog of Studies, 2023-2024

    Flood dynamics derived from video remote sensing

    Flooding is by far the most pervasive natural hazard, and the human impacts of floods are expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models. Hydraulic models are traditionally built with accurate terrain, flow, and bathymetric data and are often calibrated and validated against observed data to obtain meaningful and actionable predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast, high-resolution video datasets. The parallel evolution of computing capabilities, coupled with advances in artificial intelligence, is enabling data to be processed at unprecedented scales and complexities, allowing meaningful insights to be gleaned from datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high-resolution topographic data. In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry, highlighting the value of these data in mitigating the limitations of the traditional data sources used in parameterising two-dimensional hydraulic models. This finding inspired the subsequent chapter, in which river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate, contextually informed flood segmentation, this chapter demonstrates the potential value of satellite video for validating two-dimensional hydraulic model simulations. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographical data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is then used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications for flood modelling science.
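
    LSPIV, referenced above, estimates surface velocity by cross-correlating interrogation windows between consecutive frames. The sketch below illustrates that idea for a single window using OpenCV template matching; the window sizes, frame rate, ground sampling distance, and the lspiv_velocity helper are assumptions for illustration, not the processing chain used in the thesis.

```python
# Hedged sketch: LSPIV-style surface velocity for one interrogation window via
# normalised cross-correlation between two frames. The window size, frame
# interval (dt, seconds) and ground sampling distance (gsd, metres per pixel)
# are illustrative assumptions.
import cv2
import numpy as np


def lspiv_velocity(frame_a, frame_b, window=64, search=96, dt=1 / 25, gsd=0.5):
    """Return (vx, vy) in m/s for a window centred on the middle of the frame."""
    h, w = frame_a.shape
    cy, cx = h // 2, w // 2
    # Interrogation window in frame A.
    templ = frame_a[cy - window // 2: cy + window // 2,
                    cx - window // 2: cx + window // 2]
    # Larger search region in frame B, centred on the same location.
    region = frame_b[cy - search // 2: cy + search // 2,
                     cx - search // 2: cx + search // 2]
    # Normalised cross-correlation; the peak location gives the displacement.
    corr = cv2.matchTemplate(region, templ, cv2.TM_CCOEFF_NORMED)
    _, _, _, peak = cv2.minMaxLoc(corr)
    dx = peak[0] - (search - window) // 2  # pixel displacement in x
    dy = peak[1] - (search - window) // 2  # pixel displacement in y
    return dx * gsd / dt, dy * gsd / dt


# Synthetic usage: frame B is frame A shifted 3 pixels to the right, so the
# recovered displacement should be (3, 0) pixels.
a = np.random.randint(0, 255, (480, 640), np.uint8)
b = np.roll(a, 3, axis=1)
print(lspiv_velocity(a, b))
```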

    Applications of Deep Learning Models in Financial Forecasting

    In financial markets, deep learning techniques have sparked a revolution, reshaping conventional approaches and amplifying predictive capabilities. This thesis explored the application of deep learning models to develop insights and methodologies aimed at advancing financial forecasting. The crux of the research problem lies in applying predictive models within financial domains characterised by high volatility and uncertainty. The thesis investigated advanced deep learning methodologies in the context of financial forecasting, addressing the challenges posed by the dynamic nature of financial markets. These challenges were tackled by exploring a range of techniques, including convolutional neural networks (CNNs), long short-term memory networks (LSTMs), autoencoders (AEs), and variational autoencoders (VAEs), along with approaches such as encoding financial time series as images. Together, methodologies such as transfer learning, CNNs, LSTMs, generative modelling, and image encoding of time series data offer a comprehensive toolkit for extracting meaningful insights from financial data. The present work investigated the practicality of a deep learning CNN-LSTM model within the Directional Change (DC) framework to predict significant DC events, a task crucial for timely decision-making in financial markets. Furthermore, the potential of autoencoders and variational autoencoders to enhance forecasting accuracy and remove noise from financial time series was explored; leveraging their representational capacity, these models offer promising avenues for improved data representation and subsequent forecasting. To further contribute to financial prediction capabilities, a deep multi-model approach was developed that harnessed the power of pre-trained computer vision models to predict the VVIX, exploiting the cross-disciplinary synergy between computer vision and financial forecasting. By integrating knowledge from these domains, novel insights into the prediction of market volatility were provided.
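
    One strand of the work summarised above is a CNN-LSTM model for predicting significant Directional Change events. The sketch below shows a generic CNN-LSTM binary classifier over sliding windows of engineered features; the window length, feature count, and layer sizes are illustrative assumptions, not the configuration used in the thesis.

```python
# Hedged sketch: a CNN-LSTM classifier over sliding windows of price-derived
# features, of the kind that could flag significant Directional Change events.
# Layer sizes, window length and feature count are illustrative assumptions.
import torch
import torch.nn as nn


class CNNLSTM(nn.Module):
    def __init__(self, n_features=5, hidden=64):
        super().__init__()
        # 1-D convolution over time extracts local patterns from each window.
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # The LSTM summarises the convolved sequence; its last hidden state
        # feeds a sigmoid head giving P(significant DC event).
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):  # x: (batch, window_len, n_features)
        z = self.conv(x.transpose(1, 2))       # -> (batch, 32, window_len // 2)
        out, _ = self.lstm(z.transpose(1, 2))  # -> (batch, window_len // 2, hidden)
        return torch.sigmoid(self.head(out[:, -1]))


# Example: 60-step windows of 5 engineered DC features.
model = CNNLSTM()
probs = model(torch.randn(8, 60, 5))  # -> (8, 1) event probabilities
```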

    Assessing generalisability of deep learning-based polyp detection and segmentation methods through a computer vision challenge

    Polyps are well-known cancer precursors identified by colonoscopy. However, variability in their size, appearance, and location makes polyp detection challenging. Moreover, colonoscopy surveillance and polyp removal are highly operator-dependent procedures performed in a highly complex organ topology, and there is a high missed-detection rate and incomplete removal of colonic polyps. To assist in clinical procedures and reduce miss rates, automated machine learning methods for detecting and segmenting polyps have been developed in recent years. However, the major drawback of most of these methods is their limited ability to generalise to out-of-sample, unseen datasets from different centres, populations, modalities, and acquisition systems. To test this rigorously, we, together with expert gastroenterologists, curated a multi-centre, multi-population dataset acquired from six different colonoscopy systems and challenged computational expert teams to develop robust automated detection and segmentation methods in a crowd-sourced endoscopic computer vision challenge. This work puts forward rigorous generalisability tests and assesses the usability of the devised deep learning methods in dynamic, real clinical colonoscopy procedures. We analyse the results of the four top-performing teams for the detection task and the five top-performing teams for the segmentation task. Our analyses demonstrate that the top-ranking teams concentrated mainly on accuracy over the real-time performance required for clinical applicability. We further dissect the devised methods and provide an experiment-based hypothesis that reveals the need for improved generalisability to tackle the diversity present in multi-centre datasets and routine clinical procedures.
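
    A generalisability assessment of the kind described above typically compares segmentation quality on centres seen during training against held-out, unseen centres. The sketch below shows one simple way to compute per-centre Dice scores for such a comparison; the data layout and helper names are assumptions, and this is not the challenge's official evaluation code.

```python
# Hedged sketch: per-centre Dice scores for probing a generalisability gap
# between seen and unseen colonoscopy centres. Data layout is an assumption.
import numpy as np


def dice(pred, gt, eps=1e-7):
    """Dice coefficient between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return (2.0 * np.logical_and(pred, gt).sum() + eps) / (pred.sum() + gt.sum() + eps)


def per_centre_dice(predictions, ground_truths, centres):
    """Average Dice per acquisition centre; inputs are parallel sequences."""
    scores = {}
    for p, g, c in zip(predictions, ground_truths, centres):
        scores.setdefault(c, []).append(dice(p, g))
    return {c: float(np.mean(v)) for c, v in scores.items()}


# The generalisability gap can then be read off as the drop from centres seen
# during training to the held-out centre(s), e.g.
#   gap = mean(seen_centre_scores) - mean(unseen_centre_scores)
```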

    ScribFormer: Transformer Makes CNN Work Better for Scribble-based Medical Image Segmentation

    Most recent scribble-supervised segmentation methods adopt a CNN framework with an encoder-decoder architecture. Despite its multiple benefits, this framework generally captures only short-range feature dependencies, because convolutional layers have local receptive fields, which makes it difficult to learn global shape information from the limited supervision provided by scribble annotations. To address this issue, this paper proposes ScribFormer, a new CNN-Transformer hybrid solution for scribble-supervised medical image segmentation. The proposed ScribFormer model has a triple-branch structure, i.e., a hybrid of a CNN branch, a Transformer branch, and an attention-guided class activation map (ACAM) branch. Specifically, the CNN branch collaborates with the Transformer branch to fuse the local features learned by the CNN with the global representations obtained from the Transformer, which effectively overcomes the limitations of existing scribble-supervised segmentation methods. Furthermore, the ACAM branch unifies the shallow and deep convolutional features to further improve the model's performance. Extensive experiments on two public datasets and one private dataset show that ScribFormer outperforms state-of-the-art scribble-supervised segmentation methods and even achieves better results than fully supervised segmentation methods. The code is released at https://github.com/HUANGLIZI/ScribFormer
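
    The core of the design described above is the fusion of local CNN features with global Transformer representations. The sketch below outlines such a two-branch fusion for per-pixel prediction (the ACAM branch is omitted for brevity); it is an illustrative outline under assumed layer sizes, not the released ScribFormer code linked above.

```python
# Hedged sketch: fusing a CNN branch (local features) with a Transformer branch
# (global context) for per-pixel prediction. Illustrative only; the ACAM branch
# and the actual ScribFormer architecture are not reproduced here.
import torch
import torch.nn as nn


class CNNTransformerFusion(nn.Module):
    def __init__(self, channels=64, n_classes=2):
        super().__init__()
        # CNN branch: local features with a small receptive field.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Transformer branch: global dependencies over flattened spatial tokens.
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Fusion head producing per-pixel class logits.
        self.head = nn.Conv2d(2 * channels, n_classes, 1)

    def forward(self, x):  # x: (B, 1, H, W)
        local = self.cnn(x)  # (B, C, H, W)
        b, c, h, w = local.shape
        tokens = local.flatten(2).transpose(1, 2)       # (B, H*W, C)
        global_feat = self.transformer(tokens)          # (B, H*W, C)
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
        fused = torch.cat([local, global_feat], dim=1)  # local + global fusion
        return self.head(fused)                         # (B, n_classes, H, W)


# Example forward pass on a 64x64 single-channel image.
logits = CNNTransformerFusion()(torch.randn(1, 1, 64, 64))
```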

    Capsule networks with residual pose routing

    Capsule networks (CapsNets) are known to be difficult to scale to the deeper architectures desirable for high performance in the deep learning era, owing to their complex capsule routing algorithms. In this article, we present a simple yet effective capsule routing algorithm based on residual pose routing. Specifically, the higher-layer capsule pose is obtained by an identity mapping on the adjacent lower-layer capsule pose. Such residual pose routing has two advantages: 1) it reduces the routing computation complexity, and 2) it avoids vanishing gradients thanks to its residual learning framework. On top of that, we explicitly reformulate the capsule layers by building a residual pose block; stacking multiple such blocks results in a deep residual CapsNet (ResCaps) with a ResNet-like architecture. Results on MNIST, AffNIST, SmallNORB, and CIFAR-10/100 show the effectiveness of ResCaps for image classification. Furthermore, we successfully extend residual pose routing to large-scale real-world applications, including 3-D object reconstruction and classification, and 2-D saliency dense prediction. The source code has been released at https://github.com/liuyi1989/ResCaps
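
    The central idea above is that the higher-layer capsule pose is obtained by an identity mapping of the lower-layer pose, so a residual connection replaces iterative routing. The sketch below illustrates that idea with a stack of residual pose blocks; the capsule count, pose dimension, and residual transform are illustrative assumptions, not the released ResCaps code linked above.

```python
# Hedged sketch of residual pose routing: the higher-layer pose is the
# lower-layer pose (identity mapping) plus a learned residual transform.
# Capsule count and pose dimension are illustrative assumptions.
import torch
import torch.nn as nn


class ResidualPoseBlock(nn.Module):
    def __init__(self, pose_dim=16):
        super().__init__()
        # Learned residual applied to every capsule pose vector.
        self.residual = nn.Sequential(
            nn.Linear(pose_dim, pose_dim), nn.ReLU(),
            nn.Linear(pose_dim, pose_dim),
        )

    def forward(self, pose):  # pose: (batch, n_capsules, pose_dim)
        # Identity mapping plus residual: no iterative routing, and gradients
        # flow through the skip connection, so blocks can be stacked deeply.
        return pose + self.residual(pose)


# Stacking blocks yields a ResNet-like deep capsule pose pathway.
deep_routing = nn.Sequential(*[ResidualPoseBlock() for _ in range(6)])
poses = deep_routing(torch.randn(8, 32, 16))  # (8, 32, 16)
```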

    Development and assessment of learning-based vessel biomarkers from CTA in ischemic stroke
