
    Decomposition and classification of electroencephalography data


    Unsupervised methods for large-scale, cell-resolution neural data analysis

    In order to keep up with the volume of data, as well as the complexity of experiments and models in modern neuroscience, we need scalable and principled analytic programmes that take into account the scientific goals and the challenges of biological experiments. This work focuses on algorithms that tackle problems throughout the whole data analysis process. I first investigate how best to transform two-photon calcium imaging microscopy recordings (sets of contiguous images) into an easier-to-analyse matrix containing the time courses of individual neurons. For this I first estimate how the true fluorescence signal is transformed by tissue artefacts and the microscope setup, by learning the parameters of a realistic physical model from recorded data. Next, I describe how individual neural cell bodies may be segmented from the images, based on a cost function tailored to neural characteristics. Finally, I describe an interpretable non-linear dynamical model of neural population activity, which provides immediate scientific insight into complex system behaviour and may spawn a new way of investigating stochastic non-linear dynamical systems. I hope the algorithms described here will not only be integrated into the analytic pipelines of neural recordings, but will also show that algorithmic design should be informed by communication with the broader community, understanding and tackling the challenges inherent in experimental biological science.
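    A rough sketch of the first step described above (turning an imaging movie into per-neuron time courses) is given below, using off-the-shelf non-negative matrix factorisation as a stand-in for the thesis's physically informed model and tailored cost function; all names and parameters are illustrative.

```python
# Sketch: factorise a calcium-imaging movie into spatial footprints and
# per-neuron time courses with plain NMF. The thesis uses a tailored
# physical model and cost function; this shows only the generic idea.
import numpy as np
from sklearn.decomposition import NMF

def extract_time_courses(movie, n_neurons):
    """movie: non-negative array of shape (n_frames, height, width)."""
    n_frames = movie.shape[0]
    Y = movie.reshape(n_frames, -1)              # frames x pixels matrix
    model = NMF(n_components=n_neurons, init="nndsvda", max_iter=500)
    C = model.fit_transform(Y)                   # (n_frames, n_neurons) time courses
    A = model.components_                        # (n_neurons, n_pixels) footprints
    return C, A.reshape(n_neurons, *movie.shape[1:])

# Example with synthetic data
movie = np.random.rand(100, 32, 32)
C, A = extract_time_courses(movie, n_neurons=5)
```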

    Latent variable regression and applications to planetary seismic instrumentation

    The work presented in this thesis is framed by the concept of latent variables, a modern data analytics approach. A latent variable represents a component extracted from a dataset which is not directly measured. The concept is first applied to combat the problem of ill-posed regression through the promising method of partial least squares (PLS). In this context the latent variables within a data matrix are extracted through an iterative algorithm that uses cross-covariance as its optimisation criterion. This work first extends the PLS algorithm, using adaptive and recursive techniques, for online, non-stationary data applications. The standard PLS algorithm is further generalised for complex-, quaternion- and tensor-valued data. In doing so it is shown that the multidimensional algebras facilitate physically meaningful representations, demonstrated through smart-grid frequency estimation and image-classification tasks. The second part of the thesis uses this knowledge to inform a performance analysis of the MEMS microseismometer implemented for the InSight mission to Mars. This is given in terms of the sensor's intrinsic self-noise, which is estimated from experimental data with a co-located instrument. The standard coherence and the proposed delta noise estimators are analysed with respect to practical issues. The implementation of algorithms for the alignment, calibration and post-processing of the data then enabled a definitive self-noise estimate, validated with data acquired in an ultra-quiet, deep-space environment. A method for the decorrelation of the microseismometer's output from its thermal response is proposed. To this end, a novel sensor fusion approach based on the Kalman filter is developed for a full-band transfer-function correction, in contrast to the traditional, ill-posed frequency-division method. This algorithm was applied to experimental data, determining the thermal model coefficients while validating the sensor's performance at tidal frequencies (1e-5 Hz) and in extreme environments at -65 °C. This thesis therefore provides a definitive view of the latent variables perspective, achieved through the general algorithms developed for regression with multidimensional data and the bespoke application to seismic instrumentation.
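    For readers unfamiliar with PLS, a minimal sketch of the standard real-valued, NIPALS-style extraction that the thesis generalises follows: at each step the weight vector is proportional to the cross-covariance between X and y, and both are deflated before the next latent variable is extracted. The adaptive, recursive and complex-, quaternion- and tensor-valued extensions are not shown.

```python
# Sketch of standard PLS1 (NIPALS-style): extract latent variables that
# maximise cross-covariance between X and y, deflating after each one.
import numpy as np

def pls_nipals(X, y, n_components):
    X, y = X.copy(), y.astype(float).copy()
    T, W = [], []
    for _ in range(n_components):
        w = X.T @ y                      # weight vector, proportional to cross-covariance
        w /= np.linalg.norm(w)
        t = X @ w                        # latent variable (score)
        T.append(t); W.append(w)
        p = X.T @ t / (t @ t)            # X loading
        X = X - np.outer(t, p)           # deflate X
        y = y - t * (t @ y) / (t @ t)    # deflate y
    return np.column_stack(T), np.column_stack(W)

# Example: two latent variables from synthetic data
X = np.random.randn(200, 10)
y = X @ np.random.randn(10) + 0.1 * np.random.randn(200)
T, W = pls_nipals(X, y, n_components=2)
```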

    Methods for Photoacoustic Image Reconstruction Exploiting Properties of Curvelet Frame

    The Curvelet frame is of special significance for photoacoustic tomography (PAT) due to its sparsifying and microlocalisation properties. In this PhD project, we explore methods for image reconstruction in PAT with flat sensor geometry using Curvelet properties. This thesis makes five distinct contributions: (i) We investigate the formulation of the forward, adjoint and inverse operators for PAT in the Fourier domain. We derive a one-to-one map between wavefront directions in image and data spaces in PAT. Combining the Fourier operators with the wavefront map allows us to create the appropriate PAT operators for solving limited-view problems due to limited angular sensor sensitivity. (ii) We devise the concept of a wedge-restricted Curvelet transform, a modification of the standard Curvelet transform, which allows us to formulate a tight frame of wedge-restricted Curvelets on the range of the PAT forward operator for PAT data representation. We consider details specific to PAT data, such as symmetries and time oversampling, and their consequences. We further adapt the wedge-restricted Curvelet to decompose the wavefronts into visible and invisible parts in the data domain as well as in the image domain. (iii) We formulate a two-step approach based on the recovery of the complete volume of the photoacoustic data from the subsampled data followed by the acoustic inversion, and a one-step approach where the photoacoustic image is directly recovered from the subsampled data. The wedge-restricted Curvelet is used as the sparse representation of the photoacoustic data in the two-step approach. (iv) We discuss a joint variational approach that incorporates Curvelet sparsity in the photoacoustic image domain and spatio-temporal regularisation via an optical flow constraint to achieve improved results for dynamic PAT reconstruction. (v) We consider the limited-view problem due to limited angular sensitivity of the sensor (see (i) for the formulation of the corresponding fast operators in the Fourier domain). We propose a complementary information learning approach based on splitting the problem into visible and invisible singularities. We perform a sparse reconstruction of the visible Curvelet coefficients using compressed sensing techniques and propose a tailored deep neural network architecture to recover the invisible coefficients.
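    The sparsity-promoting recovery at the heart of the two-step approach can be illustrated with a generic iterative soft-thresholding (ISTA) loop. The sketch below assumes a 1-D signal, a random subsampling mask and a DCT as a stand-in for the wedge-restricted Curvelet frame, so it shows the general mechanism rather than the thesis's operators.

```python
# Sketch: ISTA for min_x 0.5*||M x - b||^2 + lam*||Psi x||_1, with the DCT
# standing in for the wedge-restricted Curvelet frame.
import numpy as np
from scipy.fft import dct, idct

def ista(b, mask, lam=0.05, step=1.0, n_iter=200):
    """b: zero-filled subsampled measurements; mask: boolean sampling pattern."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        grad = mask * (mask * x - b)                  # gradient of the data term
        z = dct(x - step * grad, norm="ortho")        # analysis (sparse domain)
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
        x = idct(z, norm="ortho")                     # synthesis (signal domain)
    return x

# Example: recover a DCT-sparse signal from 50% random samples
t = np.linspace(0.0, 1.0, 256)
signal = np.cos(2 * np.pi * 5 * t)
mask = np.random.rand(256) < 0.5
x_rec = ista(mask * signal, mask)
```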

    Irish Machine Vision and Image Processing Conference Proceedings 2017


    Deep invariant feature learning for remote sensing scene classification

    Image classification, as the core task in the computer vision field, has proceeded at a breakneck pace. This is largely attributable to the recent growth of deep learning techniques, which have surpassed conventional statistical methods on a plethora of benchmarks and can even outperform humans in specific image classification tasks. Despite deep learning exceeding alternative techniques, it has apparent disadvantages that prevent it from being deployed for general-purpose use. Specifically, deep learning always requires a considerable amount of well-annotated data to circumvent the problems of over-fitting and the lack of prior knowledge. However, manually labelled data is expensive to acquire and cannot incorporate as much variation as the real world. Consequently, deep learning models usually fail when they confront variations underrepresented in the training data. This is the main reason why deep learning models are barely satisfactory in challenging image recognition tasks containing nuisance variations, such as Remote Sensing Scene Classification (RSSC). The classification of remote sensing scene images is the procedure of assigning semantic labels to satellite images that contain complicated variations in texture and appearance. Algorithms for effectively understanding and recognising remote sensing scene images have the potential to be employed in a broad range of applications, such as urban planning, Land Use and Land Cover (LULC) determination, natural hazard detection, vegetation mapping and environmental monitoring. This inspires us to design frameworks that can automatically predict the precise labels of satellite images. In our research project, we mine and define the challenges in the RSSC community compared with general scene image recognition tasks. Specifically, we summarise the problems from the following perspectives: 1) visual-semantic ambiguity: the discrepancy between visual features and semantic concepts; 2) variations: the intra-class diversity and inter-class similarity; 3) cluttered background; 4) the small size of the training set; 5) unsatisfactory classification accuracy on large-scale datasets. To address the aforementioned challenges, we explore a way to dynamically expand the capability of incorporating prior knowledge by transforming the input data, so that we can learn globally invariant second-order features from the transformed data to improve the performance of RSSC tasks. First, we devise a recurrent transformer network (RTN) to progressively discover the discriminative regions of input images and learn the corresponding second-order features. The model is optimised using a pairwise ranking loss so that localising discriminative parts and learning the corresponding features reinforce each other. Second, we observed that existing remote sensing image datasets lack ontological structures; therefore, a multi-granularity canonical appearance pooling (MG-CAP) model is proposed to automatically discover the implied hierarchical structures of datasets and produce covariance features containing multi-grained information. Third, we explore a way to improve the discriminative power of second-order features. To accomplish this, we present a covariance feature embedding (CFE) model that improves the distinctiveness of covariance pooling by using suitable matrix normalisation methods and a low-norm cosine similarity loss to accurately measure the distances between high-dimensional features. Finally, we improved the performance of RSSC while using fewer model parameters: an invariant deep compressible covariance pooling (IDCCP) model is presented to boost classification accuracy for RSSC tasks, and we proved the generalisability of the IDCCP model using group theory and manifold optimisation techniques. All of the proposed frameworks can be optimised in an end-to-end manner and are well supported by GPU acceleration. We conduct extensive experiments on well-known remote sensing scene image datasets to demonstrate the significant improvements of our proposed methods in comparison with state-of-the-art approaches.
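    A minimal sketch of the second-order (covariance) pooling that underlies these models follows, with matrix square-root normalisation as used across the covariance-pooling literature; it is the generic building block, not the exact MG-CAP, CFE or IDCCP formulation.

```python
# Sketch: covariance pooling of CNN feature maps with matrix square-root
# normalisation, the generic second-order feature described above.
import numpy as np
from scipy.linalg import sqrtm

def covariance_pool(features):
    """features: (n_locations, n_channels) activations for one image."""
    f = features - features.mean(axis=0)
    cov = f.T @ f / (features.shape[0] - 1)                 # channel covariance
    cov = sqrtm(cov + 1e-6 * np.eye(cov.shape[0])).real     # square-root normalise
    return cov[np.triu_indices_from(cov)]                   # vectorise upper triangle

feats = np.random.rand(49, 64)          # e.g. a 7x7 spatial grid of 64 channels
descriptor = covariance_pool(feats)     # fixed-length second-order descriptor
```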

    Probabilistic Learning by Demonstration from Complete and Incomplete Data

    In recent years we have observed a convergence of the fields of robotics and machine learning, initiated by technological advances bringing AI closer to the physical world. A prerequisite, however, for successful applications is to formulate reliable and precise offline algorithms requiring minimal tuning, fast and adaptive online algorithms, and effective ways of rectifying corrupt demonstrations. In this work we aim to address some of those challenges. We begin by employing two offline algorithms for the purpose of Learning by Demonstration (LbD): a Bayesian non-parametric approach, able to infer the optimal model size without compromising the model's descriptive power, and a Quantum Statistical extension to the mixture model, able to achieve high precision for a given model size. We explore the efficacy of those algorithms in several one- and multi-shot LbD applications, achieving very promising results in terms of speed and accuracy. Acknowledging that more realistic robotic applications also require more adaptive algorithmic approaches, we then introduce an online learning algorithm for quantum mixtures based on online EM. The method exhibits high stability and precision, outperforming well-established online algorithms, as demonstrated on several regression benchmark datasets and a multi-shot trajectory LbD case study. Finally, aiming to account for data corruption due to sensor failures or occlusions, we propose a model for automatically rectifying damaged sequences in an unsupervised manner. In our approach we take into account the sequential nature of the data, the redundancy manifesting itself among repetitions of the same task, and the potential for knowledge transfer across different tasks. We have devised a temporal factor model, with each factor modelling a single basic pattern in time and collectively forming a dictionary of fundamental trajectories shared across sequences. We have evaluated our method on a number of real-life datasets.
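    A minimal sketch of classical online EM for a Gaussian mixture follows, with decaying-step stochastic updates of the weights and means as each sample arrives; the thesis's quantum mixture variant modifies how the responsibilities are computed, so the code below is illustrative only.

```python
# Sketch: online EM for an isotropic Gaussian mixture. Sufficient statistics
# are updated with a decaying step size as each observation arrives.
import numpy as np

class OnlineGMM:
    def __init__(self, means, var=1.0):
        self.mu = np.asarray(means, dtype=float)    # (K, D) component means
        self.var = var                              # shared isotropic variance
        self.pi = np.full(len(self.mu), 1.0 / len(self.mu))
        self.t = 0

    def update(self, x):
        self.t += 1
        step = self.t ** -0.6                       # decaying learning rate
        # E-step: responsibilities under isotropic Gaussians
        d2 = ((self.mu - x) ** 2).sum(axis=1)
        r = self.pi * np.exp(-0.5 * d2 / self.var)
        r /= r.sum()
        # M-step: stochastic update of mixing weights and means
        self.pi += step * (r - self.pi)
        self.mu += step * r[:, None] * (x - self.mu)

gmm = OnlineGMM(means=np.random.randn(3, 2))
for x in np.random.randn(1000, 2):
    gmm.update(x)
```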

    Single Pixel Polarimetric Imaging through Scattering Media

    Compared to pure intensity-based imaging techniques, polarimetric imaging can provide additional information, particularly about an imaged object's compositional, morphological and microstructural properties. The value of polarimetric imaging has already been demonstrated in various applications, such as early glaucoma detection and cancer discrimination. Its applicability to practical in vivo imaging situations is limited, however, as the object of interest is often located behind a scattering layer, such as biological tissue, which scrambles both the spatial and the polarimetric information about the object that is carried by the propagating light. As such, this work set out to find a means of conducting polarimetric imaging through scattering media. Under the assumption that it is possible to illuminate the object plane with the required spatial patterns, single-pixel cameras can enable imaging in scattering environments and were hence thoroughly investigated in this thesis as a route to polarimetric imaging through scattering media. A theoretical model for single-pixel polarimetric imaging was first developed, and conditions under which the proposed method is feasible were identified and verified using 2D coupled line dipole simulations. The proposed method was further tested through experiments conducted using an in-house custom-built setup composed of off-the-shelf components. To mitigate noise and to ensure that the obtained polarimetric image was physical, a constrained least squares algorithm was proposed and implemented. Experiments with various test objects hidden behind scattering phantoms showed that single-pixel polarimetric imaging was able to successfully reconstruct the polarimetric images of the hidden object, whereas a spatially resolved detector in the same configuration produced an image that bore no resemblance to the test object. Further experiments conducted with the same test objects hidden behind chicken breast slices were, unfortunately, unable to recover an accurate polarimetric image of the hidden object. Additional investigations identified two factors that had likely affected the image reconstruction: spatial inhomogeneity and the temporally varying transmittance of the chicken breast, both of which were unaccounted for in the data processing. On the basis of the experiments and simulations conducted in this work, single-pixel polarimetric imaging was found to be a feasible approach for polarimetric imaging through scattering media. Finally, further improvements to establish single-pixel polarimetric imaging as a practical technique are discussed.
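    The measurement model and the constrained reconstruction can be sketched as follows: the single-pixel detector records one number per illumination pattern, and the image is recovered by constrained least squares. Non-negativity stands in here for the richer physical (polarimetric) constraints used in the thesis, and all sizes and the noise level are arbitrary.

```python
# Sketch: single-pixel imaging as constrained least squares. Each row of
# `patterns` is one illumination pattern; `measurements` holds one detector
# reading per pattern. Non-negativity is a stand-in physical constraint.
import numpy as np
from scipy.optimize import nnls

n_pix, n_patterns = 16 * 16, 400
patterns = np.random.rand(n_patterns, n_pix)        # illumination patterns
truth = np.random.rand(n_pix)                       # hidden object (synthetic)
measurements = patterns @ truth + 0.01 * np.random.randn(n_patterns)

recon, residual = nnls(patterns, measurements)      # least squares with x >= 0
image = recon.reshape(16, 16)
```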

    Numerical investigation of bone adaptation to exercise and fracture in Thoroughbred racehorses

    Third metacarpal bone (MC3) fracture has a massive welfare and economic impact on horse racing, representing 45% of all fatal lower limb fractures, which in themselves represent more than 80% of reasons for death or euthanasia on UK racecourses. Most of these fractures occur due to the accumulation of tissue fatigue as a result of repetitive loading rather than a specific traumatic event. Despite considerable research in the field, including the application of various diagnostic methods, it remains a challenge to accurately predict fracture risk and prevent this type of injury. The objective of this thesis is to develop computational tools to quantify bone adaptation and resistance to fracture, thereby providing the basis for a viable and robust solution. Recent advances in subject-specific finite element model generation, for example computed tomography imaging and efficient segmentation algorithms, have significantly improved the accuracy of finite element modelling. Numerical analysis techniques are widely used to enhance understanding of fracture in bones and provide better insight into the relationships between load transfer and bone morphology. This thesis proposes a finite element based framework allowing for integrated simulation of bone remodelling under specific loading conditions, followed by the evaluation of its fracture resistance. Accurate representations of bone geometry and heterogeneous material properties are obtained from calibrated computed tomography scans. The material mapping between CT-scan data and discretised geometries for the finite element method is carried out using Moving Least Squares approximation and L2-projection. This is then used for numerical investigations and assessment of density gradients at the common site of fracture. Bone is able to adapt its density to changes in external conditions; this property is one of the most important mechanisms for the development of resistance to fracture. Therefore, a finite element approach for simulating adaptive bone changes (also called bone remodelling) is proposed. The implemented method is based on a phenomenological model of the macroscopic behaviour of bone grounded in the thermodynamics of open systems. Numerical results showed that the proposed technique has the potential to accurately simulate the long-term bone response to specified training conditions and also improve possible treatment options for bone implants. Assessment of the fracture risk was conducted with crack propagation analysis. The potential of two different approaches was investigated: the smeared phase-field approach and the discrete configurational mechanics approach. The popular phase-field method represents a crack by a smooth damage variable, leading to a phase-field approximation of the variational formulation for brittle fracture. A robust solution scheme was implemented using a monolithic scheme with arc-length control. In the configurational mechanics approach, the driving forces and fracture energy release rate are expressed in terms of nodal quantities, enabling a fully implicit formulation for modelling the evolving crack front. The approach was extended for the first time to capture the influence of heterogeneous density distribution. The outcomes of this study showed that discrete and smeared crack approximations are capable of predicting crack paths in three-dimensional heterogeneous bodies with comparable results. However, due to the necessity of using significantly finer meshes, the phase-field approach was found to be less numerically efficient. Finally, the current state of the framework's development was assessed using numerical simulations of bone adaptation and subsequent fracture propagation, including analysis of an equine metacarpal bone. Numerical convergence was demonstrated for all examples, and the use of singularity elements proved to further improve the rate of convergence. It was shown that bone adaptation history and bone density distribution influence both fracture resistance and the resulting crack path. The promising results of this study offer a novel framework to simulate changes in bone structure in response to exercise and to quantify the likelihood of fracture.
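    A minimal sketch of a phenomenological remodelling law of the kind described, in which density evolves towards equilibrium between a mechanical stimulus (strain energy per unit mass) and a reference value, is given below; the update rule and all coefficients are illustrative, not the thesis's formulation.

```python
# Sketch: explicit time stepping of a phenomenological bone-remodelling law,
# d(rho)/dt = rate * (U / rho - psi_ref), with physiological density bounds.
# All coefficients are illustrative.
import numpy as np

def remodel(rho, strain_energy, psi_ref=0.25, rate=1.0, dt=0.1,
            rho_min=0.1, rho_max=2.0):
    """One explicit step; rho and strain_energy are per-material-point arrays."""
    stimulus = strain_energy / rho - psi_ref        # deviation from equilibrium
    rho = rho + dt * rate * stimulus
    return np.clip(rho, rho_min, rho_max)           # enforce density bounds

rho = np.full(100, 1.0)                             # initial density at 100 points
U = 0.5 * np.random.rand(100)                       # strain energy density
for _ in range(50):
    rho = remodel(rho, U)                           # density adapts to loading
```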