
    Information selection and fusion in vision systems

    Handling the enormous amounts of data produced by data-intensive imaging systems, such as multi-camera surveillance systems and microscopes, is technically challenging. While image and video compression help to manage the data volumes, they do not address the basic problem of information overflow. In this PhD we tackle the problem in a more drastic way. We select information of interest to a specific vision task, and discard the rest. We also combine data from different sources into a single output product, which presents the information of interest to end users in a suitable, summarized format. We treat two types of vision systems. The first type is conventional light microscopes. During this PhD, we have exploited for the first time the potential of the curvelet transform for image fusion for depth-of-field extension, allowing us to combine the advantages of multi-resolution image analysis for image fusion with increased directional sensitivity. As a result, the proposed technique clearly outperforms state-of-the-art methods, both on real microscopy data and on artificially generated images. The second type is camera networks with overlapping fields of view. To enable joint processing in such networks, inter-camera communication is essential. Because of infrastructure costs, power consumption for wireless transmission, etc., transmitting high-bandwidth video streams between cameras should be avoided. Fortunately, recently designed 'smart cameras', which have on-board processing and communication hardware, allow distributing the required image processing over the cameras. This permits compactly representing useful information from each camera. We focus on representing information for people localization and observation, which are important tools for statistical analysis of room usage, quick localization of people in case of building fires, etc. To further save bandwidth, we select which cameras should be involved in a vision task and transmit observations only from the selected cameras. We provide an information-theoretically founded framework for general-purpose camera selection based on the Dempster-Shafer theory of evidence. Applied to tracking, it allows tracking people using a dynamic selection of as few as three cameras with the same accuracy as when using up to ten cameras.
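    As a rough illustration of the evidence-fusion step behind this kind of camera selection, the sketch below combines two mass functions with Dempster's rule of combination, the core operation of the Dempster-Shafer theory named in the abstract. The frame of discernment, the mass values, and all function names are illustrative assumptions, not the thesis' actual implementation.

```python
# Minimal sketch of Dempster's rule of combination, as used conceptually for
# evidence-based camera selection. The frame of discernment, the mass
# assignments, and all names below are illustrative assumptions only.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    # Normalize by the non-conflicting mass
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Hypothetical example: two cameras report evidence about whether a person
# occupies zone A or zone B of a room.
A, B = frozenset({"A"}), frozenset({"B"})
theta = A | B  # full frame of discernment (ignorance)

cam1 = {A: 0.6, theta: 0.4}          # camera 1 mostly supports zone A
cam2 = {A: 0.3, B: 0.5, theta: 0.2}  # camera 2 leans towards zone B

print(combine(cam1, cam2))
```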

    Creating a platform for the democratisation of Deep Learning in microscopy

    One of the major technological success stories of the last decade has been the advent of deep learning (DL), which has touched almost every aspect of modern life after a breakthrough performance in an image recognition challenge in 2012. The bioimaging community quickly recognised the prospect of automatically making sense of image data with near-human performance as potentially ground-breaking. In the decade since, hundreds of publications have used this technology to tackle many problems related to image analysis, such as labelling or counting cells, identifying cells or organelles of interest in large image datasets, or removing noise or improving the resolution of images. However, the adoption of DL tools in large parts of the bioimaging community has been slow, and many tools have remained in the hands of developers. In this project, I have identified key barriers which have prevented many bioimage analysts and microscopists from accessing existing DL technology in their field and have, in collaboration with colleagues, developed the ZeroCostDL4Mic platform, which aims to address these barriers. This project is inspired by the observation that the most significant impact technology can have in science is when it becomes ubiquitous, that is, when its use becomes essential to address the community's questions. This work represents one of the first attempts to make DL tools accessible in a transparent, code-free, and affordable manner for bioimage analysis, to unlock the full potential of DL via its democratisation for the bioimaging community.

    A fast and accurate basis pursuit denoising algorithm with application to super-resolving tomographic SAR

    $L_1$ regularization is used for finding sparse solutions to an underdetermined linear system. As sparse signals are widely expected in remote sensing, this type of regularization scheme and its extensions have been widely employed in many remote sensing problems, such as image fusion, target detection, and image super-resolution, and have led to promising results. However, solving such sparse reconstruction problems is computationally expensive, which limits their practical use. In this paper, we propose a novel, efficient algorithm for solving the complex-valued $L_1$-regularized least squares problem. Taking high-dimensional tomographic synthetic aperture radar (TomoSAR) as a practical example, we carried out extensive experiments, with both simulated and real data, to demonstrate that the proposed approach retains the accuracy of second-order methods while speeding up the processing by one to two orders of magnitude. Although we have chosen TomoSAR as the example, the proposed method can be applied to general spectral estimation problems. Comment: 11 pages, IEEE Transactions on Geoscience and Remote Sensing.
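    The paper's accelerated solver is not reproduced here; as a reference point, the sketch below implements plain iterative soft-thresholding (ISTA) for the complex-valued $L_1$-regularized least squares problem min_x ||Ax - y||_2^2 + lam*||x||_1, the kind of first-order baseline such work typically builds on. The problem sizes, the measurement matrix, and the regularization weight are arbitrary assumptions, not TomoSAR data.

```python
# Minimal ISTA sketch for complex-valued L1-regularized least squares:
#   minimize  ||A x - y||_2^2 + lam * ||x||_1
# This is a generic first-order baseline, NOT the accelerated algorithm
# proposed in the paper; all sizes and parameters are illustrative.
import numpy as np

def soft_threshold_complex(z, tau):
    """Complex soft-thresholding: shrink the magnitude, keep the phase."""
    mag = np.abs(z)
    return np.where(mag > tau, (1 - tau / np.maximum(mag, 1e-12)) * z, 0.0)

def ista(A, y, lam, n_iter=500):
    # Step size from the Lipschitz constant of the data-term gradient.
    L = 2.0 * np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        grad = 2.0 * A.conj().T @ (A @ x - y)
        x = soft_threshold_complex(x - grad / L, lam / L)
    return x

# Toy example: sparse complex reflectivity profile observed through a random
# complex measurement matrix (a stand-in for a TomoSAR steering matrix).
rng = np.random.default_rng(0)
n, m = 32, 128
A = (rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))) / np.sqrt(2 * n)
x_true = np.zeros(m, dtype=complex)
x_true[[10, 70]] = [2.0 + 1j, -1.5]
y = A @ x_true + 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
x_hat = ista(A, y, lam=0.05)
print(np.flatnonzero(np.abs(x_hat) > 0.1))  # indices of the recovered scatterers
```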

    Automatic Neuron Detection in Calcium Imaging Data Using Convolutional Networks

    Calcium imaging is an important technique for monitoring the activity of thousands of neurons simultaneously. As calcium imaging datasets grow in size, automated detection of individual neurons is becoming important. Here we apply a supervised learning approach to this problem and show that convolutional networks can achieve near-human accuracy and superhuman speed. Measured by precision and recall relative to ground-truth annotation by a human expert, accuracy is superior to that of the popular PCA/ICA method. These results suggest that convolutional networks are an efficient and flexible tool for the analysis of large-scale calcium imaging data. Comment: 9 pages, 5 figures, 2 ancillary files; minor changes for camera-ready version. Appears in Advances in Neural Information Processing Systems 29 (NIPS 2016).
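    The abstract does not specify the network architecture; as a generic illustration of the supervised approach, the sketch below defines a small fully convolutional network in PyTorch that maps a summary image of a calcium movie to a per-pixel neuron probability map. The layer sizes, the loss, and the toy data are assumptions, not the published model.

```python
# Minimal sketch of a fully convolutional network for pixel-wise neuron
# detection in a calcium-imaging summary image. Architecture, sizes, and
# training details are illustrative assumptions, not the published model.
import torch
import torch.nn as nn

class NeuronDetector(nn.Module):
    def __init__(self, in_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=1),  # per-pixel logit: neuron vs. background
        )

    def forward(self, x):
        return self.net(x)

# Toy forward/backward pass on a random 128x128 "mean image" with a random mask
# standing in for a human expert's annotation.
model = NeuronDetector()
image = torch.randn(1, 1, 128, 128)
mask = (torch.rand(1, 1, 128, 128) > 0.95).float()
loss = nn.BCEWithLogitsLoss()(model(image), mask)
loss.backward()
print(float(loss))
```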

    Learning associations between clinical information and motion-based descriptors using a large scale MR-derived cardiac motion atlas

    The availability of large-scale databases containing imaging and non-imaging data, such as the UK Biobank, represents an opportunity to improve our understanding of healthy and diseased bodily function. Cardiac motion atlases provide a space of reference in which the motion fields of a cohort of subjects can be directly compared. In this work, a cardiac motion atlas is built from cine MR data from the UK Biobank (~6000 subjects). Two automated quality control strategies are proposed to reject subjects with insufficient image quality. Based on the atlas, three dimensionality reduction algorithms are evaluated to learn data-driven cardiac motion descriptors, and statistical methods are used to study the association between these descriptors and non-imaging data. Results show a positive correlation between the atlas motion descriptors and body fat percentage, basal metabolic rate, hypertension, smoking status, and alcohol intake frequency. The proposed descriptors identify changes in cardiac function due to these known cardiovascular risk factors better than ejection fraction, the most commonly used descriptor of cardiac function. In conclusion, this work represents a framework for further investigation of the factors influencing cardiac health. Comment: 2018 International Workshop on Statistical Atlases and Computational Modeling of the Heart.
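    To make the atlas-analysis idea concrete, the sketch below reduces per-subject motion fields to low-dimensional descriptors with PCA and tests their association with a non-imaging variable. The data are simulated and the variable name is a stand-in; the paper evaluates three dimensionality reduction algorithms on UK Biobank data, none of which are reproduced here.

```python
# Minimal sketch of the atlas-analysis idea: reduce per-subject motion fields
# to low-dimensional descriptors and test their association with a
# non-imaging variable. Data, dimensions, and variable names are simulated
# assumptions, not UK Biobank data.
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n_subjects, n_motion_features = 200, 3000  # e.g. flattened 3D displacement fields

# Simulated motion matrix with one latent mode tied to a risk factor.
risk_factor = rng.normal(size=n_subjects)  # e.g. body fat percentage (hypothetical)
mode = rng.normal(size=n_motion_features)
motion = np.outer(risk_factor, mode) + rng.normal(size=(n_subjects, n_motion_features))

# Learn data-driven motion descriptors (plain PCA here, purely for illustration).
descriptors = PCA(n_components=10).fit_transform(motion)

# Association between each descriptor and the non-imaging variable.
for k in range(descriptors.shape[1]):
    r, p = pearsonr(descriptors[:, k], risk_factor)
    if p < 0.05:
        print(f"component {k}: r={r:+.2f}, p={p:.1e}")
```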

    Fast wide-volume functional imaging of engineered in vitro brain tissues

    The need for in vitro models that mimic the human brain to replace animal testing and allow high-throughput screening has driven scientists to develop new tools that reproduce tissue-like features on a chip. Three-dimensional (3D) in vitro cultures are emerging as an unmatched platform that preserves the complexity of cell-to-cell connections within a tissue, improves cell survival, and boosts neuronal differentiation. In this context, new and flexible imaging approaches are required to monitor the functional states of 3D networks. Herein, we propose an experimental model based on 3D neuronal networks in an alginate hydrogel, a tunable wide-volume imaging approach, and an efficient denoising algorithm to resolve, down to single-cell resolution, the 3D activity of hundreds of neurons expressing the calcium sensor GCaMP6s. Furthermore, we implemented a 3D co-culture system mimicking the contiguous interfaces of distinct brain tissues, such as the cortical-hippocampal interface. The analysis of the network activity of single and layered neuronal co-cultures revealed cell-type-specific activities and an organization of neuronal subpopulations that changed between the two culture configurations. Overall, our experimental approach represents a simple, powerful, and cost-effective platform for developing and monitoring living 3D layered brain tissue on chip structures with high resolution and high throughput.
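    The abstract's denoising and imaging pipeline is not reproduced here; as a small illustration of one standard downstream step in GCaMP6s analysis, the sketch below converts raw fluorescence traces to dF/F0 activity using a running-percentile baseline. The window length, percentile, and simulated traces are assumptions, not the authors' algorithm or data.

```python
# Minimal sketch of extracting activity from GCaMP6s fluorescence traces as
# dF/F0 with a running-percentile baseline. Window length, percentile, and
# the simulated traces are illustrative assumptions only.
import numpy as np
from scipy.ndimage import percentile_filter

def dff(traces, window=200, baseline_percentile=10):
    """traces: (n_cells, n_frames) raw fluorescence -> dF/F0 of the same shape."""
    f0 = percentile_filter(traces, percentile=baseline_percentile,
                           size=(1, window), mode="nearest")
    return (traces - f0) / np.maximum(f0, 1e-9)

# Toy data: 5 simulated cells, 1000 frames, slow baseline drift plus one transient.
rng = np.random.default_rng(1)
t = np.arange(1000)
baseline = 100 + 5 * np.sin(t / 300.0)
traces = np.tile(baseline, (5, 1)) + rng.normal(0, 1, (5, 1000))
traces[2, 400:420] += 40  # a calcium transient in cell 2
activity = dff(traces)
print(activity[2].max().round(2))  # peak dF/F0 of the cell with the transient
```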