
    Motion Correction of Whole-Body PET Data with a Joint PET-MRI Registration Functional

    Respiratory motion is known to degrade image quality in PET imaging. The acquisition time of several minutes per bed position inevitably leads to a blurring effect due to organ motion. Much research has been devoted to motion correction of PET data. As whole-body PET-MRI has recently become available, the anatomical data provided by MRI are a promising source of motion information. Current PET-MRI-based motion correction approaches, however, do not take into account the information provided by the PET data, even though it may add valuable additional information that increases the robustness and precision of motion estimation. In this work we propose a registration functional that is capable of performing motion detection in gated data of two modalities simultaneously. Evaluation is performed using phantom data. We demonstrate that performing a joint registration of both modalities improves registration accuracy and PET image quality.
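
    As an illustrative sketch of the underlying idea (the function names, similarity terms, and weights below are assumptions, not the authors' implementation), a joint registration functional can couple both modalities through a single deformation by summing one data term per modality and a smoothness regularizer, e.g. in Python:

        import numpy as np

        def ssd(a, b):
            # Sum of squared differences between two gated volumes.
            return float(np.sum((a - b) ** 2))

        def joint_registration_cost(pet_ref, pet_warped, mri_ref, mri_warped,
                                    displacement, alpha=1.0, beta=1.0, gamma=0.1):
            # Hypothetical joint functional: both modalities share one
            # deformation, so their data terms are summed; a simple
            # gradient-magnitude penalty keeps the displacement field
            # (a dense, at least 2-D array) smooth.
            data_pet = ssd(pet_ref, pet_warped)
            data_mri = ssd(mri_ref, mri_warped)
            smoothness = sum(float(np.sum(g ** 2)) for g in np.gradient(displacement))
            return alpha * data_pet + beta * data_mri + gamma * smoothness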

    Comparison of two 3D tracking paradigms for freely flying insects

    In this paper, we discuss and compare state-of-the-art 3D tracking paradigms for flying insects such as Drosophila melanogaster. If two cameras are employed to estimate the trajectories of these visually identical objects, calculating stereo and temporal correspondences leads to an NP-hard assignment problem. Currently, two types of approaches are discussed in the literature: probabilistic approaches and global correspondence selection approaches. Both have advantages and limitations in terms of accuracy and complexity. Here, we present algorithms for both paradigms. The probabilistic approach utilizes the Kalman filter for temporal tracking, while the correspondence selection approach calculates the trajectories based on an overall cost function. Limitations of both approaches are addressed by integrating a third camera to verify the consistency of the stereo pairings and to reduce the complexity of the global selection. Furthermore, a novel greedy optimization scheme is introduced for the correspondence selection approach. We compare both paradigms on synthetic data with available ground truth. Results show that the global selection is more accurate, while the previously proposed tracking-by-matching (probabilistic) approach is causal and feasible for longer tracking periods and very high target densities. We further demonstrate that our extended global selection scheme outperforms current correspondence selection approaches in tracking accuracy and tracking time.
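
    A minimal sketch of the greedy idea (hypothetical; the paper's actual cost function and optimization scheme are not reproduced here): given a matrix of pairing costs between detections seen by two cameras, repeatedly commit to the cheapest remaining pairing until every detection is used at most once.

        import numpy as np

        def greedy_correspondence_selection(cost):
            # cost[i, j]: cost of pairing detection i (camera 1) with
            # detection j (camera 2). Greedily accept the cheapest remaining
            # pairing; a fast surrogate for the NP-hard exact assignment.
            pairs, used_rows, used_cols = [], set(), set()
            for flat in np.argsort(cost, axis=None):
                i, j = np.unravel_index(flat, cost.shape)
                if i in used_rows or j in used_cols:
                    continue
                pairs.append((int(i), int(j)))
                used_rows.add(i)
                used_cols.add(j)
            return pairs

        # Example: three detections per camera.
        print(greedy_correspondence_selection(np.random.rand(3, 3)))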

    Dissecting Early Differentially Expressed Genes in a Mixture of Differentiating Embryonic Stem Cells

    The differentiation of embryonic stem cells is initiated by a gradual loss of pluripotency-associated transcripts and the induction of differentiation genes. Accordingly, the detection of differentially expressed genes at the early stages of differentiation could assist the identification of causal genes that either promote or inhibit differentiation. Previous methods that identify differentially expressed genes by comparing different cell types inevitably include a large portion of genes that respond to, rather than regulate, the differentiation process. We demonstrate, through the use of biological replicates and a novel statistical approach, that gene expression data obtained without prior separation of cell types are informative for detecting differentially expressed genes at the early stages of differentiation. Applying the proposed method to the differentiation of murine embryonic stem cells, we identified and experimentally verified Smarcad1 as a novel regulator of pluripotency and self-renewal. We formalized this approach as a statistical test that is generally applicable to other differentiation processes.
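
    For illustration only: the sketch below runs a plain per-gene two-sample t-test across biological replicates of the unseparated mixture at two early time points; it is a generic stand-in, not the authors' novel test statistic.

        import numpy as np
        from scipy import stats

        def replicate_based_de_scan(expr_t0, expr_t1):
            # expr_t0, expr_t1: arrays of shape (n_replicates, n_genes)
            # measured on the cell mixture without prior separation.
            t, p = stats.ttest_ind(expr_t0, expr_t1, axis=0)
            return t, p

        # Toy data: 4 replicates, 1000 genes, first 10 genes shifted.
        rng = np.random.default_rng(0)
        t0 = rng.normal(size=(4, 1000))
        t1 = rng.normal(size=(4, 1000))
        t1[:, :10] += 2.0
        t, p = replicate_based_de_scan(t0, t1)
        print((p < 0.01).sum(), "candidate genes")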

    Comparative study of unsupervised dimension reduction techniques for the visualization of microarray gene expression data

    BACKGROUND: Visualization of DNA microarray data in two- or three-dimensional spaces is an important exploratory analysis step for detecting quality issues and generating new hypotheses. Principal Component Analysis (PCA) is a widely used linear method to define the mapping between the high-dimensional data and its low-dimensional representation. During the last decade, many new nonlinear methods for dimension reduction have been proposed, but it is still unclear how well these methods capture the underlying structure of microarray gene expression data. In this study, we assessed the performance of the PCA approach and of six nonlinear dimension reduction methods, namely Kernel PCA, Locally Linear Embedding, Isomap, Diffusion Maps, Laplacian Eigenmaps, and Maximum Variance Unfolding, in terms of visualization of microarray data. RESULTS: A systematic benchmark, consisting of Support Vector Machine classification, cluster validation, and noise evaluations, was applied to ten microarray and several simulated datasets. Significant differences between PCA and most of the nonlinear methods were observed in two- and three-dimensional target spaces. With an increasing number of dimensions and an increasing number of differentially expressed genes, all methods showed similar performance. PCA and Diffusion Maps were less sensitive to noise than the other nonlinear methods. CONCLUSIONS: Locally Linear Embedding and Isomap showed superior performance on all datasets. In very low-dimensional representations and with few differentially expressed genes, these two methods preserve more of the underlying structure of the data than PCA and are thus favorable alternatives for the visualization of microarray data.
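
    Most of the compared methods are available in scikit-learn, so a minimal version of such a benchmark can be sketched as follows (synthetic stand-in data; Diffusion Maps and Maximum Variance Unfolding are not part of scikit-learn and are omitted here):

        from sklearn.datasets import make_classification
        from sklearn.decomposition import PCA, KernelPCA
        from sklearn.manifold import Isomap, LocallyLinearEmbedding, SpectralEmbedding
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        # Toy stand-in for a microarray matrix: 100 samples x 500 "genes".
        X, y = make_classification(n_samples=100, n_features=500,
                                   n_informative=20, random_state=0)

        methods = {
            "PCA": PCA(n_components=2),
            "Kernel PCA": KernelPCA(n_components=2, kernel="rbf"),
            "Isomap": Isomap(n_components=2),
            "LLE": LocallyLinearEmbedding(n_components=2),
            "Laplacian Eigenmaps": SpectralEmbedding(n_components=2),
        }

        for name, method in methods.items():
            Z = method.fit_transform(X)  # 2D embedding of the samples
            score = cross_val_score(SVC(), Z, y, cv=5).mean()
            print(f"{name}: mean SVM accuracy = {score:.2f}")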

    Dynamic Programming Based Segmentation in Biomedical Imaging

    Many applications in biomedical imaging demand the automatic detection of lines, contours, or boundaries of bones, organs, vessels, and cells. The aim is to support expert decisions in interactive applications or to serve as part of a processing pipeline for automatic image analysis. Biomedical images often suffer from noisy data and fuzzy edges, so there is a need for robust methods for contour and line detection. Dynamic programming is a popular technique that satisfies these requirements in many ways. This work gives a brief overview of approaches and applications that utilize dynamic programming to solve problems in the challenging field of biomedical imaging.
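
    A classic instance of this technique is minimal-cost path extraction: given a cost image in which low values mark likely boundary pixels, dynamic programming accumulates costs row by row and then backtracks the optimal contour. A minimal sketch (not tied to any specific application from the overview):

        import numpy as np

        def dp_boundary(cost):
            # cost: 2D array; low values mark likely boundary pixels.
            # Accumulate the minimal path cost from the top row downwards,
            # allowing steps to the same or an adjacent column.
            rows, cols = cost.shape
            acc = cost.astype(float).copy()
            for r in range(1, rows):
                left = np.r_[np.inf, acc[r - 1, :-1]]
                right = np.r_[acc[r - 1, 1:], np.inf]
                acc[r] += np.minimum(acc[r - 1], np.minimum(left, right))
            # Backtrack from the cheapest endpoint in the bottom row.
            path = [int(np.argmin(acc[-1]))]
            for r in range(rows - 2, -1, -1):
                c = path[-1]
                lo = max(c - 1, 0)
                path.append(lo + int(np.argmin(acc[r, lo:min(c + 2, cols)])))
            return path[::-1]  # boundary column index for each row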

    GERoMe: a Method for Evaluating Stability of Graph Extraction Algorithms Without Ground Truth

    The extraction of graph structures in Euclidean vector space is a topic of interest with applications in many fields, such as the analysis of vascular networks in the biomedical domain. While a number of approaches have been proposed to tackle the problem of graph extraction, quantitative evaluation of those algorithms remains a challenging task: in many cases, manual generation of ground truth for real-world data is time-consuming, error-prone, and thus not feasible. While tools for generating synthetic datasets with corresponding ground truth exist, the resulting data often do not reflect the complexity in morphology and topology that real-world scenarios show. As a complementary or even alternative approach, we propose GERoMe, the graph extraction robustness measure, which provides a means of quantifying the stability of algorithms that extract (multi-)graphs with associated node positions from non-graph structures. Our method takes edge-associated properties into consideration and does not necessarily require ground truth data, although available ground truth information can be incorporated to additionally evaluate the correctness of the graph extraction algorithm. We evaluate the behavior of the proposed graph similarity measure and demonstrate the usefulness and applicability of our method in an exemplary study on both synthetic and real-world data.
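
    A toy sketch of the general idea (illustrative only; the actual measure also incorporates edges and edge-associated properties, and extract_graph and perturb are user-supplied placeholders): perturb the input repeatedly, extract node positions each time, and average the pairwise similarities of the results.

        import numpy as np
        from scipy.spatial.distance import cdist

        def node_position_similarity(pos_a, pos_b, radius=2.0):
            # Symmetrised fraction of nodes with a counterpart within
            # `radius` in the other graph; a deliberately simple surrogate.
            d = cdist(pos_a, pos_b)
            return 0.5 * ((d.min(axis=1) <= radius).mean()
                          + (d.min(axis=0) <= radius).mean())

        def stability_score(volume, extract_graph, perturb, n_trials=10, seed=0):
            # Robustness without ground truth: average pairwise similarity
            # of graphs extracted from independently perturbed inputs.
            rng = np.random.default_rng(seed)
            graphs = [extract_graph(perturb(volume, rng)) for _ in range(n_trials)]
            return float(np.mean([node_position_similarity(graphs[i], graphs[j])
                                  for i in range(n_trials)
                                  for j in range(i + 1, n_trials)]))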

    Replayed video attack detection based on motion blur analysis

    Face presentation attacks are the main threats to face recognition systems, and many presentation attack detection (PAD) methods have been proposed in recent years. Although these methods have achieved significant performance against some specific intrusion modes, replayed video attacks remain difficult to address, because replayed fake faces contain a variety of liveness signals, such as eye blinking and facial expression changes. Replayed video attacks occur when attackers try to invade biometric systems by presenting face videos in front of the camera, typically shown on a liquid-crystal display (LCD) screen. Due to the smearing effects and movement of the LCD, videos captured from real and replayed fake faces exhibit different motion blurs, reflected mainly in blur intensity variation and blur width. Based on these observations, we propose a motion blur analysis-based method to deal with the replayed video attack problem. We first present a 1D convolutional neural network (CNN) for describing motion blur intensity variation in the time domain, consisting of a series of 1D convolutional and pooling filters. Then, a local similar pattern (LSP) feature is introduced to extract blur width. Finally, features extracted from the 1D CNN and LSP are fused to detect replayed video attacks. Extensive experiments on two standard face PAD databases, i.e., Replay-Attack and OULU-NPU, indicate that our proposed method based on motion blur analysis significantly outperforms state-of-the-art methods and shows excellent generalization capability.
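
    As a hedged illustration of the first component (layer sizes and the sequence length are assumptions, not the architecture from the paper), a 1D CNN over a per-frame blur intensity sequence can be sketched in PyTorch:

        import torch
        import torch.nn as nn

        class MotionBlurIntensityNet(nn.Module):
            # Illustrative 1D CNN for blur intensity variation over time.
            def __init__(self, n_classes=2):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(1, 16, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Conv1d(16, 32, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),
                )
                self.classifier = nn.Linear(32, n_classes)

            def forward(self, x):                  # x: (batch, 1, n_frames)
                z = self.features(x).squeeze(-1)   # -> (batch, 32)
                return self.classifier(z)          # real vs. replayed logits

        # Example: a batch of 4 blur intensity sequences, 64 frames each.
        logits = MotionBlurIntensityNet()(torch.randn(4, 1, 64))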

    Statistical Permutation-based Artery Mapping (SPAM): a novel approach to evaluate imaging signals in the vessel wall

    BACKGROUND: Cardiovascular diseases are the leading cause of death worldwide. A prominent cause of cardiovascular events is atherosclerosis, a chronic inflammation of the arterial wall that leads to the formation of so-called atherosclerotic plaques. There is a strong clinical need for new, non-invasive vascular imaging techniques that identify high-risk plaques, which might escape detection with conventional methods based on the assessment of luminal narrowing. In this context, molecular imaging strategies based on fluorescent tracers and fluorescence reflectance imaging (FRI) seem well suited to assess molecular and cellular activity. However, such an analysis demands a precise and standardized analysis method that is oriented to reproducible anatomical landmarks, ensuring that equivalent regions are compared across different subjects. METHODS: We propose a novel method, Statistical Permutation-based Artery Mapping (SPAM). Our approach is especially useful for understanding complex and heterogeneous regional processes during the course of atherosclerosis. It involves three steps: (I) standardisation with an additional intensity normalization, (II) permutation testing, and (III) cluster enhancement. Although permutation testing and cluster enhancement are well established in functional magnetic resonance imaging, to the best of our knowledge these strategies have not yet been applied in cardiovascular molecular imaging. RESULTS: We tested our method on FRI images of murine aortic vessels in order to find recurring patterns in atherosclerotic plaques across multiple subjects. We demonstrate that our pixel-wise, cluster-enhanced testing approach is feasible and useful for analysing tracer distributions in FRI data sets of aortic vessels. CONCLUSIONS: We expect our method to be a useful tool within the field of molecular imaging of atherosclerotic plaques, since cluster-enhanced permutation testing is a powerful approach for finding significant differences in tracer distributions in inflamed atherosclerotic vessels.
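
    A simplified sketch of steps (II) and (III) (illustrative; the threshold and the cluster-mass statistic below are assumptions rather than the exact SPAM pipeline):

        import numpy as np
        from scipy import ndimage

        def cluster_enhanced_permutation_test(group_a, group_b,
                                              n_perm=1000, thresh=1.0, seed=0):
            # group_a, group_b: standardized artery maps of shape
            # (n_subjects, H, W). Pixel-wise group differences are
            # thresholded, connected clusters are labelled, and the largest
            # cluster mass is compared against a permutation null.
            rng = np.random.default_rng(seed)
            data = np.concatenate([group_a, group_b])
            n_a = len(group_a)

            def max_cluster_mass(diff):
                labels, n = ndimage.label(np.abs(diff) > thresh)
                if n == 0:
                    return 0.0
                masses = ndimage.sum(np.abs(diff), labels, index=range(1, n + 1))
                return float(np.max(masses))

            observed = max_cluster_mass(group_a.mean(0) - group_b.mean(0))
            null = []
            for _ in range(n_perm):
                idx = rng.permutation(len(data))
                null.append(max_cluster_mass(
                    data[idx[:n_a]].mean(0) - data[idx[n_a:]].mean(0)))
            p = (1 + sum(m >= observed for m in null)) / (n_perm + 1)
            return observed, p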