28 research outputs found

    Studying the Chlorophyll Fluorescence in Cyanobacteria with Membrane Computing Techniques

    In this paper, we report a pioneering study of the decrease in chlorophyll fluorescence produced by the reduction of MTT (a dimethyl thiazolyl diphenyl tetrazolium salt), monitored using an epifluorescence microscope coupled to automated image analysis in the framework of P systems. This analysis has been performed by a family of tissue P systems working on the images as data input. Junta de Andalucía P08-TIC-04200; Ministerio de Economía y Competitividad TIN2012-3743

    Skeletonizing Images by Using Spiking Neural P Systems

    Skeletonizing an image means representing a shape with a small amount of information by converting the initial image into a more compact representation while keeping its meaningful features. In this paper we use spiking neural P systems to solve this problem. Based on such devices, parallel software has been implemented on the GPU architecture. Some real-world applications and open lines for future research are also presented. Ministerio de Ciencia e Innovación TIN2008-04487-E; Ministerio de Ciencia e Innovación TIN-2009-13192; Junta de Andalucía P08-TIC-0420
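    The paper's spiking neural P system and its GPU implementation are not reproduced here, but the underlying task can be illustrated with a classical sequential baseline that parallel skeletonization methods are commonly compared against: Zhang-Suen morphological thinning. This is an illustrative sketch of the task, not the authors' algorithm:

    ```python
    import numpy as np

    def zhang_suen_skeleton(img):
        """Iteratively thin a binary image (1 = foreground) to a one-pixel-wide skeleton."""
        img = img.astype(np.uint8).copy()
        changed = True
        while changed:
            changed = False
            for step in (0, 1):            # the two Zhang-Suen sub-iterations
                to_delete = []
                for y in range(1, img.shape[0] - 1):
                    for x in range(1, img.shape[1] - 1):
                        if img[y, x] != 1:
                            continue
                        # 8-neighbours, clockwise from the north pixel (P2..P9)
                        p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                             img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                        b = sum(p)  # number of foreground neighbours
                        # number of 0 -> 1 transitions around the neighbourhood
                        a = sum(p[i] == 0 and p[(i+1) % 8] == 1 for i in range(8))
                        if step == 0:
                            cond = p[0]*p[2]*p[4] == 0 and p[2]*p[4]*p[6] == 0
                        else:
                            cond = p[0]*p[2]*p[6] == 0 and p[0]*p[4]*p[6] == 0
                        if 2 <= b <= 6 and a == 1 and cond:
                            to_delete.append((y, x))
                for y, x in to_delete:     # delete simultaneously after the scan
                    img[y, x] = 0
                    changed = True
        return img
    ```

    The deletions in each sub-iteration are applied only after the full scan, which is what makes the rule massively parallelisable and is the property that P-system and GPU formulations exploit.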

    Single View 3D Reconstruction using Deep Learning

    One of the major challenges in the field of Computer Vision has been the reconstruction of a 3D object or scene from a single 2D image. While there are many notable examples, traditional methods for single view reconstruction often fail to generalise due to the presence of many brittle hand-crafted engineering solutions, limiting their applicability to real world problems. Recently, deep learning has taken over the field of Computer Vision and "learning to reconstruct" has become the dominant technique for addressing the limitations of traditional methods when performing single view 3D reconstruction. Deep learning allows our reconstruction methods to learn generalisable image features and monocular cues that would otherwise be difficult to engineer through ad-hoc hand-crafted approaches. However, it can often be difficult to efficiently integrate the various 3D shape representations within the deep learning framework. In particular, 3D volumetric representations can be adapted to work with Convolutional Neural Networks, but they are computationally expensive and memory inefficient when using local convolutional layers. Also, the successful learning of generalisable feature representations for 3D reconstruction requires large amounts of diverse training data. In practice, this is challenging for 3D training data, as it entails a costly and time-consuming manual data collection and annotation process. Researchers have attempted to address these issues by utilising self-supervised learning and generative modelling techniques; however, these approaches often produce suboptimal results when compared with models trained on larger datasets. This thesis addresses several key challenges incurred when using deep learning for "learning to reconstruct" 3D shapes from single view images.
We observe that it is possible to learn a compressed representation for multiple categories of the 3D ShapeNet dataset, improving the computational and memory efficiency when working with 3D volumetric representations. To address the challenge of data acquisition, we leverage deep generative models to "hallucinate" hidden or latent novel viewpoints for a given input image. Combining these images with depths estimated by a self-supervised depth estimator and the known camera properties allows us to reconstruct textured 3D point clouds without any ground truth 3D training data. Furthermore, we show that it is possible to improve upon the previous self-supervised monocular depth estimator by adding a self-attention mechanism and a discrete volumetric representation, significantly improving accuracy on the KITTI 2015 dataset and enabling the estimation of uncertainty in depth predictions. Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 202
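    The memory inefficiency of dense volumetric representations that the abstract mentions comes from cubic growth in grid resolution. A quick back-of-the-envelope sketch, assuming a dense single-channel float32 voxel grid (real networks store many feature channels per layer, which multiplies these figures):

    ```python
    def voxel_grid_megabytes(resolution, channels=1, bytes_per_value=4):
        """Memory footprint of a dense voxel grid at a cubic resolution (float32 by default)."""
        return resolution ** 3 * channels * bytes_per_value / 1024 ** 2

    # Doubling the resolution multiplies memory by 8
    for r in (32, 64, 128, 256):
        print(f"{r}^3 grid: {voxel_grid_megabytes(r):8.3f} MB")
    ```

    A single 256^3 occupancy grid already costs 64 MB before any feature channels or batch dimension, which is why compressed representations of the kind the thesis studies matter.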

    Variational 3D Reconstruction from Stereo Image Pairs and Stereo Image Sequences

    This work deals with 3D reconstruction and 3D motion estimation from stereo images using variational methods that are based on dense optical flow. In the first part of the thesis, we will investigate a novel application for dense optical flow, namely the estimation of the fundamental matrix of a stereo image pair. By exploiting the high interdependency between the recovered stereo geometry and the established image correspondences, we propose a coupled refinement of the fundamental matrix and the optical flow as a second contribution, thereby improving the accuracy of both. As opposed to many existing techniques, our joint method does not solve for the camera pose and scene structure separately, but recovers them in a single optimisation step. True to our principle of joint optimisation, we further couple the dense 3D reconstruction of the scene to the estimation of its 3D motion in the final part of this thesis. This is achieved by integrating spatial and temporal information from multiple stereo pairs in a novel model for scene flow computation.
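    The thesis refines fundamental-matrix estimation jointly with dense optical flow; the standard building block it improves upon, estimating F from a set of point correspondences, can be sketched with the classical normalized 8-point algorithm. This is the textbook method, not the variational joint optimisation described above:

    ```python
    import numpy as np

    def _normalize(pts):
        # Translate the centroid to the origin, scale mean distance to sqrt(2)
        c = pts.mean(axis=0)
        d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
        s = np.sqrt(2) / d
        T = np.array([[s, 0, -s * c[0]],
                      [0, s, -s * c[1]],
                      [0, 0, 1]])
        ph = np.column_stack([pts, np.ones(len(pts))])
        return ph @ T.T, T

    def eight_point(x1, x2):
        """Estimate F such that x2_h^T @ F @ x1_h = 0 for correspondences x1 <-> x2."""
        p1, T1 = _normalize(x1)
        p2, T2 = _normalize(x2)
        # Each correspondence contributes one linear constraint on the 9 entries of F
        A = np.column_stack([
            p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
            p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
            p1[:, 0], p1[:, 1], np.ones(len(p1))])
        _, _, Vt = np.linalg.svd(A)
        F = Vt[-1].reshape(3, 3)          # null-space solution
        U, S, Vt = np.linalg.svd(F)       # project onto rank-2 matrices
        F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
        F = T2.T @ F @ T1                 # undo the normalization
        return F / np.linalg.norm(F)
    ```

    With dense optical flow, every pixel supplies a correspondence, so the linear system is massively overdetermined; the thesis's contribution is to iterate between this geometric estimate and the flow itself rather than treating the flow as fixed input.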

    Enhancing Registration for Image-Guided Neurosurgery

    Pharmacologically refractory temporal lobe epilepsy and malignant glioma brain tumours are examples of pathologies that are clinically managed through neurosurgical intervention. The aims of neurosurgery are, where possible, to perform a resection of the surgical target while minimising morbidity to critical structures in the vicinity of the resected brain area. Image-guidance technology aims to assist this task by displaying a model of brain anatomy to the surgical team, which may include an overlay of surgical planning information derived from preoperative scanning such as the segmented resection target and nearby critical brain structures. Accurate neuronavigation is hindered by brain shift, the complex and non-rigid deformation of the brain that arises during surgery, which invalidates assumed rigid geometric correspondence between the neuronavigation model and the true shifted positions of relevant brain areas. Imaging using an interventional MRI (iMRI) scanner in a next-generation operating room can serve as a reference for intraoperative updates of the neuronavigation. An established clinical image processing workflow for iMRI-based guidance involves the correction of relevant imaging artefacts and the estimation of deformation due to brain shift based on non-rigid registration. The present thesis introduces two refinements aimed at enhancing the accuracy and reliability of iMRI-based guidance. A method is presented for the correction of magnetic susceptibility artefacts, which affect diffusion and functional MRI datasets, based on simulating magnetic field variation in the head from structural iMRI scans.
Next, a method is presented for estimating brain shift using discrete non-rigid registration and a novel local similarity measure equipped with an edge-preserving property, which is shown to improve the accuracy of the estimated deformation in the vicinity of the resected area for a number of cases of surgery performed for the management of temporal lobe epilepsy and glioma.
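    The thesis's edge-preserving local similarity measure is not spelled out in this abstract. As a point of reference, a plain local normalized cross-correlation (LNCC), a common local similarity measure in non-rigid medical image registration, can be sketched as follows. This is illustrative only: the naive sliding-window loop and uniform weighting are simplifications, and this is not the novel measure the thesis proposes.

    ```python
    import numpy as np

    def local_ncc(a, b, radius=2):
        """Mean local normalized cross-correlation between images a and b.

        Each (2*radius+1)^2 window yields one correlation in [-1, 1];
        higher values mean the windows are more similar up to a local
        affine intensity change, which is what makes LNCC robust to the
        smooth intensity variations typical of MRI.
        """
        h, w = a.shape
        vals = []
        for y in range(radius, h - radius):
            for x in range(radius, w - radius):
                pa = a[y-radius:y+radius+1, x-radius:x+radius+1].ravel()
                pb = b[y-radius:y+radius+1, x-radius:x+radius+1].ravel()
                pa = pa - pa.mean()          # remove local mean intensity
                pb = pb - pb.mean()
                denom = np.linalg.norm(pa) * np.linalg.norm(pb)
                if denom > 1e-12:            # skip constant windows
                    vals.append(float(pa @ pb / denom))
        return float(np.mean(vals)) if vals else 0.0
    ```

    In a discrete registration framework, a measure like this scores candidate displacements per control point; an edge-preserving variant would additionally down-weight contributions across intensity boundaries such as the resection margin.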