    3D exemplar-based image inpainting in electron microscopy

    A common problem in electron microscopy (EM) is the unavailability of data, which causes artefacts in reconstructions. The goal of this thesis is to artificially generate the missing data using exemplar-based inpainting (EBI). We implement an accelerated 3D version tailored to EM applications, which reduces reconstruction times from days to minutes. We develop intelligent sampling strategies to find optimal data as input for reconstruction methods, and we investigate approaches to reduce electron dose and acquisition time; sparse sampling followed by inpainting is the most promising approach. Because common evaluation measures can lead to misinterpretation of results in EM and distort subsequent analysis, we propose application-driven metrics and demonstrate them in a segmentation task. A further application of our technique is the artificial generation of projections in tilt-based EM: EBI generates the missing projections so that the full angular range is covered, and the resulting reconstructions show significantly enhanced resolution, which facilitates further analysis of the samples. In conclusion, EBI proves promising as an additional data-generation step to tackle the unavailability of data in EM, as evaluated in selected applications. Enhancing adaptive sampling methods and refining EBI, especially with their mutual influence in mind, promotes higher throughput in EM at lower electron dose without compromising quality.
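
    The thesis uses an accelerated 3D EBI variant; purely as a rough illustration of the underlying idea, the following minimal 2D sketch performs greedy boundary filling with sum-of-squared-differences patch matching (a simplified Criminisi-style scheme). The patch size, the simplified priority rule, and the exhaustive search are assumptions for illustration and not the thesis' implementation.

```python
import numpy as np

def exemplar_inpaint(image, mask, patch=7):
    """Fill pixels where mask is True by copying best-matching patches from known regions."""
    img = image.astype(float).copy()
    known = ~mask                       # True where the pixel value is available
    half = patch // 2
    H, W = img.shape

    def patch_at(arr, y, x):
        return arr[y - half:y + half + 1, x - half:x + half + 1]

    while not known.all():
        # fill-front: unknown pixels (away from the border) with at least one known neighbour
        ys, xs = np.where(~known)
        front = [(y, x) for y, x in zip(ys, xs)
                 if half <= y < H - half and half <= x < W - half
                 and known[y - 1:y + 2, x - 1:x + 2].any()]
        if not front:
            break
        # simplified priority: prefer the target patch with the most already-known pixels
        y0, x0 = max(front, key=lambda p: patch_at(known, p[0], p[1]).sum())
        target, valid = patch_at(img, y0, x0), patch_at(known, y0, x0)

        # exhaustive search for the most similar fully-known source patch (SSD on known pixels)
        best, best_err = None, np.inf
        for y in range(half, H - half):
            for x in range(half, W - half):
                if not patch_at(known, y, x).all():
                    continue
                err = ((patch_at(img, y, x) - target)[valid] ** 2).sum()
                if err < best_err:
                    best, best_err = (y, x), err
        if best is None:
            break

        # copy the unknown part of the best source patch and mark it as known
        src = patch_at(img, best[0], best[1])
        fill = ~valid
        target[fill] = src[fill]   # writes into img via the view
        valid[fill] = True         # writes into known via the view
    return img
```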

    Exploiting Spatio-Temporal Coherence for Video Object Detection in Robotics

    This paper proposes a method to enhance video object detection for indoor environments in robotics. Concretely, it exploits knowledge about the camera motion between frames to propagate previously detected objects to successive frames. The proposal relies on planar homography to propose regions of interest in which to find objects, and on recursive Bayesian filtering to integrate observations over time. It is evaluated on six virtual indoor environments, covering the detection of nine object classes over a total of ∼7k frames. Results show that our proposal improves recall and F1-score by factors of 1.41 and 1.27, respectively, and achieves a significant reduction of the object categorization entropy (58.8%) compared to a two-stage video object detection method used as baseline, at the cost of a small time overhead (120 ms) and a precision loss (0.92).
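
    The two ingredients named in the abstract are standard ones: warping previous detections into the current frame with a planar homography derived from the known camera motion, and fusing per-class evidence over time with a recursive Bayesian update. The sketch below shows both in toy form; the function names, the box representation, and the simple multiplicative fusion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import cv2

def propagate_box(box, H):
    """Warp an axis-aligned box (x1, y1, x2, y2) with homography H and re-fit a box."""
    x1, y1, x2, y2 = box
    corners = np.float32([[x1, y1], [x2, y1], [x2, y2], [x1, y2]]).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
    return (*warped.min(axis=0), *warped.max(axis=0))

def bayes_update(prior, likelihood):
    """Recursive Bayesian fusion of class probabilities (both arrays sum to 1)."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

# toy usage: previous belief over 3 classes, new detector scores on the propagated ROI
belief = np.array([0.5, 0.3, 0.2])
scores = np.array([0.7, 0.2, 0.1])
belief = bayes_update(belief, scores)
```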

    Anisotropy Across Fields and Scales

    This open access book focuses on processing, modeling, and visualization of anisotropy information, which is often addressed by employing sophisticated mathematical constructs such as tensors and other higher-order descriptors. It also discusses adaptations of such constructs to problems encountered in seemingly dissimilar areas of medical imaging, physical sciences, and engineering. Featuring original research contributions as well as insightful reviews for scientists interested in handling anisotropy information, it covers topics such as pertinent geometric and algebraic properties of tensors and tensor fields, challenges faced in processing and visualizing different types of data, statistical techniques for data processing, and specific applications like mapping white-matter fiber tracts in the brain. The book helps readers grasp the current challenges in the field and provides information on the techniques devised to address them. Further, it facilitates the transfer of knowledge between different disciplines in order to advance the research frontiers in these areas. This multidisciplinary book presents, in part, the outcomes of the seventh in a series of Dagstuhl seminars devoted to visualization and processing of tensor fields and higher-order descriptors, which was held in Dagstuhl, Germany, on October 28–November 2, 2018.

    Reduction of Limited Angle Artifacts in Medical Tomography via Image Reconstruction

    Artifacts are unwanted effects in tomographic images that do not reflect the nature of the object. Their widespread occurrence makes their reduction, and if possible removal, an important subject in the development of tomographic image reconstruction algorithms. Limited-angle artifacts are caused by limited angular measurements, which constrain the available tomographic information. This thesis focuses on reducing these artifacts via image reconstruction in two cases of incomplete measurements: (1) the gaps left after the removal of high-density objects such as dental fillings, screws and implants in computed tomography (CT), and (2) partial-ring scanner configurations in positron emission tomography (PET). To include knowledge about the measurement and noise, prior terms were used within the reconstruction methods. Careful consideration was given to the trade-off between image blurring and noise reduction when reconstructing low-dose measurements. Development of reconstruction methods is an incremental process, starting with testing on simple phantoms and moving towards more clinically relevant ones by modeling the respective physical processes involved. In this work, phantoms were constructed to ensure that the proposed reconstruction methods addressed the limited-angle problem. The reconstructed images were assessed qualitatively and quantitatively in terms of noise reduction, edge sharpness and contrast recovery. Maximum a posteriori (MAP) estimation with the median root prior (MRP) was selected for the reconstruction of limited-angle measurements. MAP with MRP successfully reduced the artifacts caused by limited-angle data in various datasets, tested on the reconstruction of both list-mode and projection data. In all cases, its performance was superior to conventional reconstruction methods such as MAP with a total-variation (TV) prior, maximum likelihood expectation maximization (MLEM) and filtered backprojection (FBP). MAP with MRP was also more robust with respect to parameter selection than MAP with the TV prior. This thesis demonstrates the wide-ranging applicability of MAP with MRP in medical tomography, especially in low-dose imaging. Furthermore, we emphasize the importance of developing and testing reconstruction methods with application-specific phantoms, keeping the properties and limitations of the measurements in mind.
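
    As a rough sketch of the kind of reconstruction the abstract favours, the following implements MLEM with a median root prior in the classic one-step-late style (Alenius & Ruotsalainen). The dense system matrix, the 3x3 median neighbourhood, and the penalty weight are placeholder assumptions; the thesis' list-mode and low-dose handling are not reproduced here.

```python
import numpy as np
from scipy.ndimage import median_filter

def mlem_mrp(A, y, shape, n_iter=50, beta=0.3, eps=1e-8):
    """MLEM with a median root prior penalising deviation from a local median.

    A: system matrix (n_bins x n_voxels), y: measured counts, shape: 2D image shape.
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0]) + eps        # sensitivity image (backprojection of ones)
    for _ in range(n_iter):
        proj = A @ x + eps                        # forward projection of the current estimate
        x_em = x * (A.T @ (y / proj)) / sens      # standard MLEM multiplicative update
        med = median_filter(x_em.reshape(shape), size=3).ravel() + eps
        x = x_em / (1.0 + beta * (x_em - med) / med)   # median root prior penalty
    return x.reshape(shape)
```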

    Advances in Image Processing, Analysis and Recognition Technology

    For many decades, researchers have been trying to make computers analyze images as effectively as the human visual system does. For this purpose, many algorithms and systems have been created. The process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life: they improve particular activities and provide handy tools, sometimes purely for entertainment, but quite often significantly increasing our safety. The range of practical implementations of image processing algorithms is particularly wide. Moreover, the rapid growth of computational power has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues remain, creating a need for the development of novel approaches.

    Non-contact measures to monitor hand movement of people with rheumatoid arthritis using a monocular RGB camera

    Hand movements play an essential role in a person’s ability to interact with the environment. In hand biomechanics, the range of joint motion is a crucial metric for quantifying changes due to degenerative pathologies such as rheumatoid arthritis (RA). RA is a chronic condition in which the immune system mistakenly attacks the joints, particularly those in the hands. Optoelectronic motion capture systems are the gold-standard tools for quantifying such changes but are difficult to adopt outside laboratory settings. Deep learning applied to standard video data can capture RA participants in their natural environments, potentially supporting objectivity in remote consultation. The three main research aims of this thesis were: 1) to assess the extent to which current deep learning architectures, validated for quantifying the motion of other body segments, can be applied to hand kinematics using monocular RGB cameras; 2) to localise where in videos the hand motions of interest are to be found; and 3) to assess the validity of 1) and 2) for determining disease status in RA. First, hand kinematics for twelve healthy participants, captured with OpenPose, were benchmarked against those captured using an optoelectronic system, showing acceptable instrument errors below 10°. Then, a gesture classifier was tested to segment video recordings of twenty-two healthy participants, achieving an accuracy of 93.5%. Finally, OpenPose and the classifier were applied to videos of RA participants performing hand exercises to determine disease status. The inferred disease activity agreed with the in-person ground truth in nine out of ten instances, outperforming virtual consultations, which agreed in only six out of ten. These results suggest that this approach is more effective than disease activity estimated by human experts during video consultations. This work lays the foundation for a tool that RA participants can use to monitor their disease activity from home.
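
    A small sketch of the post-processing this pipeline implies: turning 2D hand keypoints (e.g., from OpenPose) into a joint angle whose range of motion can then be tracked over time. The keypoint choice and the angle definition below are illustrative assumptions rather than the thesis' exact protocol.

```python
import numpy as np

def joint_angle(p_prox, p_joint, p_dist):
    """Angle (in degrees) at p_joint formed by the proximal and distal keypoints."""
    u = np.asarray(p_prox, dtype=float) - np.asarray(p_joint, dtype=float)
    v = np.asarray(p_dist, dtype=float) - np.asarray(p_joint, dtype=float)
    cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

# toy usage: an index-finger MCP angle from three 2D keypoints (x, y) in pixels
print(joint_angle((120, 200), (150, 180), (185, 185)))
```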

    On incorporating inductive biases into deep neural networks

    A machine learning (ML) algorithm can be interpreted as a system that learns to capture patterns in data distributions. Before the modern deep learning era, the use of structured representations and strong inductive bias, emulating the human brain, was prevalent in building ML models, partly due to expensive computational resources and the limited availability of data. On the contrary, armed with increasingly cheaper hardware and abundant data, deep learning has made unprecedented progress during the past decade, showcasing incredible performance on a diverse set of ML tasks. In contrast to classical ML models, deep learning seeks to minimize structured representations and inductive bias when learning, implicitly favoring the flexibility of learning over manual intervention. Despite the impressive performance, attention is being drawn towards enhancing the (relatively) weaker areas of deep models, such as learning with limited resources, robustness, minimal overhead to realize simple relationships, and the ability to generalize learned representations beyond the training conditions, which were (arguably) the forte of classical ML. Consequently, a recent hybrid trend is surfacing that aims to blend structured representations and substantial inductive bias into deep models, with the hope of improving them. Based on the above motivation, this thesis investigates methods to improve the performance of deep models using inductive bias and structured representations across multiple problem domains. To this end, we inject a priori knowledge into deep models in the form of enhanced feature extraction techniques, geometrical priors, engineered features, and optimization constraints. In particular, we show that by leveraging prior knowledge about the task at hand and the structure of the data, the performance of deep learning models can be significantly elevated. We begin by exploring equivariant representation learning. In general, real-world observations are prone to fundamental transformations (e.g., translation, rotation), and deep models typically demand expensive data augmentation and a large number of filters to tackle such variance. In comparison, carefully designed equivariant filters possess this ability by nature. Hence, we propose a novel volumetric convolution operation that can convolve arbitrary functions in the unit ball (B³) while preserving rotational equivariance by projecting the input data onto the Zernike basis. We conduct extensive experiments and show that our formulations can be used to construct significantly cheaper ML models. Next, we study generative modeling of 3D objects and propose a principled approach to synthesize 3D point clouds in the spectral domain by obtaining a structured representation of 3D points as functions on the unit sphere (S²). Using prior knowledge about the spectral moments and the output data manifold, we design an architecture that can maximally utilize the information in the inputs and generate high-resolution point clouds with minimal computational overhead. Finally, we propose a framework to build normalizing flows (NF) based on increasing triangular maps and Bernstein-type polynomials. Compared to existing NF approaches, our framework offers favorable characteristics for fusing inductive bias into the model, i.e., theoretical upper bounds on the approximation error, robustness, higher interpretability, suitability for compactly supported densities, and the ability to employ higher-degree polynomials without training instability. Most importantly, we present a constructive universality proof, which permits us to analytically derive the optimal model coefficients for known transformations without training.
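
    A hedged 1D illustration of the "increasing Bernstein-type polynomial" ingredient: a monotone map on [0, 1] obtained by ordering the Bernstein coefficients, together with the log-derivative a flow needs for the change of variables. This is a toy univariate sketch under my own parameterization, not the framework's multivariate triangular construction or its universality result.

```python
import numpy as np
from scipy.special import comb

def bernstein_flow(x, raw_theta):
    """Map x in [0, 1] through an increasing Bernstein polynomial; returns (y, log|dy/dx|)."""
    theta = np.cumsum(np.exp(raw_theta))      # strictly increasing coefficients
    theta = theta / theta[-1]                 # keep the output within (0, 1]
    n = len(theta) - 1
    k = np.arange(n + 1)
    basis = comb(n, k) * x[:, None] ** k * (1 - x[:, None]) ** (n - k)
    y = basis @ theta
    # derivative of a Bernstein polynomial: n * sum_k (theta_{k+1} - theta_k) * B_{k, n-1}(x)
    k1 = np.arange(n)
    dbasis = comb(n - 1, k1) * x[:, None] ** k1 * (1 - x[:, None]) ** (n - 1 - k1)
    dy = n * (dbasis @ np.diff(theta))
    return y, np.log(dy)

# toy usage: push 5 uniform samples through a degree-8 monotone map
x = np.random.rand(5)
y, logdet = bernstein_flow(x, np.zeros(9))
```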