3,090 research outputs found

    Joint deprojection of Sunyaev-Zeldovich and X-ray images of galaxy clusters

    We present two non-parametric deprojection methods aimed at recovering the three-dimensional density and temperature profiles of galaxy clusters from spatially resolved thermal Sunyaev-Zeldovich (tSZ) and X-ray surface brightness maps, thus avoiding the use of X-ray spectroscopic data. In both methods, clusters are assumed to be spherically symmetric and are modeled with an onion-skin structure. The first method follows a direct geometrical approach. The second is based on the maximization of a single joint (tSZ and X-ray) likelihood function, which allows one to fit the two signals simultaneously using a Markov Chain Monte Carlo approach. These techniques are tested against a set of cosmological simulations of clusters, with and without instrumental noise. We project each cluster along the three orthogonal directions defined by the principal axes of the moment of inertia tensor. This enables us to check for any bias in the deprojection associated with the cluster elongation along the line of sight. After averaging over the three projection directions, we find an overall good reconstruction, with a small (<~10 per cent) overestimate of the gas density profile. This turns into a comparable overestimate of the gas mass within the virial radius, which we ascribe to the presence of residual gas clumping. Apart from this small bias, the reconstruction has an intrinsic scatter of about 5 per cent, which is dominated by gas clumpiness. Cluster elongation along the line of sight biases the deprojected temperature profile upwards at r<~0.2r_vir and downwards at larger radii. A comparable bias is also found in the deprojected density profile. Overall, this turns into a systematic underestimate of the gas mass, up to 10 per cent. (Abridged)
    Comment: 17 pages, 15 figures, accepted by MNRAS
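
The direct geometrical approach can be illustrated with a minimal onion-peeling sketch (this is an illustration, not the authors' code; the function names and the assumption of a shared shell/annulus grid are ours): each annulus of the surface-brightness map is a known geometric mixture of the 3D shells along the line of sight, so the shell emissivities follow from a triangular linear system.

```python
import numpy as np

def sphere_cyl_volume(r, R):
    """Volume of a sphere of radius r lying within projected radius R."""
    R = min(R, r)
    return (4.0 / 3.0) * np.pi * (r**3 - (r**2 - R**2) ** 1.5)

def onion_peel(edges, surface_brightness):
    """Recover shell emissivities from annular surface brightness.

    edges: the n+1 shell/annulus boundary radii (shared grid, edges[0] = 0),
    surface_brightness: flux per unit area in each of the n annuli.
    """
    n = len(edges) - 1
    # V[i, j]: volume of shell j intersected by the cylinder of annulus i
    V = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            V[i, j] = (sphere_cyl_volume(edges[j + 1], edges[i + 1])
                       - sphere_cyl_volume(edges[j + 1], edges[i])
                       - sphere_cyl_volume(edges[j], edges[i + 1])
                       + sphere_cyl_volume(edges[j], edges[i]))
    area = np.pi * (edges[1:]**2 - edges[:-1]**2)   # annulus areas
    # SB = (V @ emissivity) / area  ->  solve the (triangular) system
    return np.linalg.solve(V, surface_brightness * area)
```

Because an inner shell never contributes to an outer annulus, V is upper triangular and the system can equivalently be peeled from the outside in.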

    Fuzzy-based Propagation of Prior Knowledge to Improve Large-Scale Image Analysis Pipelines

    Many automatically analyzable scientific questions are well-posed and offer a variety of a priori information about the expected outcome. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to it. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept for the estimation and propagation of uncertainty involved in image analysis operators. This allows the use of simple processing operators that are suitable for analyzing large-scale 3D+t microscopy images without compromising result quality. Building on fuzzy set theory, we transform available prior knowledge into a mathematical representation and use it extensively to enhance the result quality of various processing operators. All presented concepts are illustrated on a typical bioimage analysis pipeline comprised of seed point detection, segmentation, multiview fusion and tracking. Furthermore, the functionality of the proposed approach is validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploiting prior knowledge to improve the result quality of image analysis pipelines. In particular, the automated analysis of terabyte-scale microscopy data will benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. The generality of the concept, however, makes it applicable to practically any other field with processing strategies arranged as linear pipelines.
    Comment: 39 pages, 12 figures
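
A minimal sketch of the fuzzy-prior idea (illustrative only; the trapezoidal membership, the product t-norm, and the size parameters are our assumptions, not the paper's specific design): prior knowledge about an expected property, such as object size, is encoded as a fuzzy membership function and fused with a raw detector score so that downstream stages receive a calibrated confidence.

```python
def trapezoid(x, a, b, c, d):
    """Fuzzy membership: 0 below a, ramps up on [a, b], 1 on [b, c], ramps down on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def combined_confidence(detector_score, size, expected=(5, 8, 12, 20)):
    """Fuse a raw detection score with prior knowledge about object size.

    The product t-norm propagates both sources of uncertainty to the
    next pipeline stage; the 'expected' size range is a hypothetical prior.
    """
    return detector_score * trapezoid(size, *expected)
```

A detection whose size falls outside the prior range is suppressed regardless of its raw score, which is exactly how prior knowledge makes simple operators more selective.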

    ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Process

    Neural radiance fields (NeRFs) have gained popularity across various applications. However, they face challenges in the sparse view setting, lacking sufficient constraints from volume rendering. Reconstructing and understanding a 3D scene from sparse and unconstrained cameras is a long-standing problem in classical computer vision with diverse applications. While recent works have explored NeRFs in sparse, unconstrained view scenarios, their focus has been primarily on enhancing reconstruction and novel view synthesis. Our approach takes a broader perspective by posing the question: "from where has each point been seen?" -- which gates how well we can understand and reconstruct it. In other words, we aim to determine the origin or provenance of each 3D point and its associated information under sparse, unconstrained views. We introduce ProvNeRF, a model that enriches a traditional NeRF representation by incorporating per-point provenance, modeling likely source locations for each point. We achieve this by extending implicit maximum likelihood estimation (IMLE) to stochastic processes. Notably, our method is compatible with any pre-trained NeRF model and the associated training camera poses. We demonstrate that modeling per-point provenance offers several advantages, including uncertainty estimation, criteria-based view selection, and improved novel view synthesis, compared to state-of-the-art methods. Please visit our project page at https://provnerf.github.io
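
To give a feel for IMLE itself (a toy 1-D version under our own assumptions; it is unrelated to the paper's NeRF provenance model and its generator is just an affine map of Gaussian noise): for each data point, find the nearest generated sample and nudge the generator so that this sample moves toward the data point.

```python
import numpy as np

rng = np.random.default_rng(0)

def imle_step(data, theta, n_samples=64, lr=0.1):
    """One implicit maximum likelihood estimation (IMLE) step.

    Generator: x = theta[0] + theta[1] * z, with z ~ N(0, 1).
    For every data point, the nearest generated sample is pulled toward it
    via a gradient step on the squared distance.
    """
    z = rng.standard_normal(n_samples)
    samples = theta[0] + theta[1] * z
    grad = np.zeros(2)
    for x in data:
        j = np.argmin(np.abs(samples - x))      # nearest generated sample
        err = samples[j] - x                    # signed pull toward x
        grad += np.array([err, err * z[j]])     # d(err^2 / 2) / d theta
    return theta - lr * grad / len(data)
```

Unlike GAN training, every data point attracts a sample, so no mode of the data distribution can be dropped; extending this per-sample matching to function-valued samples is the stochastic-process generalization the paper builds on.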

    Monocular Vision SLAM for Indoor Aerial Vehicles

    This paper presents a novel indoor navigation and ranging strategy using a monocular camera. The proposed algorithms are integrated with simultaneous localization and mapping (SLAM), with a focus on indoor aerial vehicle applications. We experimentally validate the proposed algorithms using a fully self-contained micro aerial vehicle (MAV) with on-board image processing and SLAM capabilities. The range measurement strategy is inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals. The navigation strategy assumes an unknown, GPS-denied environment which is representable via corner-like feature points and straight architectural lines. Experimental results show that the system is limited only by the capabilities of the camera and the availability of good corners.

    3D Motion Analysis via Energy Minimization

    This work deals with 3D motion analysis from stereo image sequences for driver assistance systems. It consists of two parts: the estimation of motion from the image data and the segmentation of moving objects in the input images. The content can be summarized by the technical term machine visual kinesthesia: the sensation or perception and cognition of motion. The first three chapters discuss the importance of motion information for driver assistance systems, for machine vision in general, and for the estimation of ego motion. The next two chapters deal with motion perception, analyzing the apparent movement of pixels in image sequences for both monocular and binocular camera setups. The obtained motion information is then used to segment moving objects in the input video. Thus, one can clearly identify the thread from analyzing the input images to describing them by means of stationary and moving objects. Finally, I present possibilities for future applications based on the contents of this thesis. Previous work is presented in the respective chapters. Although the overarching issue of motion estimation from image sequences is related to practice, there is nothing as practical as a good theory (Kurt Lewin). Several problems in computer vision are formulated as intricate energy minimization problems. In this thesis, motion analysis in image sequences is thoroughly investigated, showing that splitting an originally complex problem into simplified sub-problems yields improved accuracy, increased robustness, and a clear and accessible approach to state-of-the-art motion estimation techniques. In Chapter 4, optical flow is considered. Optical flow is commonly estimated by minimizing a combined energy consisting of a data term and a smoothness term. These two parts are decoupled, yielding a novel and iterative approach to optical flow.
    The derived Refinement Optical Flow framework is a clear and straightforward approach to computing the apparent image motion vector field, and it currently yields some of the most accurate motion estimation results in the literature. Much as this is an engineering approach of fine-tuning precision to the last detail, it helps to gain better insight into the problem of motion estimation. This profoundly contributes to state-of-the-art research in motion analysis, in particular by facilitating the use of motion estimation in a wide range of applications. In Chapter 5, scene flow is rethought. Scene flow denotes the three-dimensional motion vector field for every image pixel, computed from a stereo image sequence. Again, decoupling the commonly coupled estimation of three-dimensional position and three-dimensional motion yields an approach to scene flow estimation with more accurate results and a considerably lower computational load. It results in a dense scene flow field and enables additional applications based on the dense three-dimensional motion vector field, which are to be investigated in the future. One such application is the segmentation of moving objects in an image sequence. Detecting moving objects within the scene is one of the most important features to extract from image sequences of a dynamic environment. This is presented in Chapter 6. Scene flow and the segmentation of independently moving objects are only first steps towards machine visual kinesthesia. Throughout this work, I present possible future work to improve the estimation of optical flow and scene flow. Chapter 7 additionally presents an outlook on future research for driver assistance applications. But there is much more to the full understanding of the three-dimensional dynamic scene. This work is meant to inspire the reader to think outside the box and contribute to the vision of building perceiving machines.
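
The decoupling of data and smoothness terms can be sketched with a toy Horn-Schunck-style alternation (our simplification, not the thesis's Refinement Optical Flow implementation): each iteration first smooths the current flow field, then projects it pointwise onto the brightness-constancy constraint.

```python
import numpy as np

def box_filter(a, r):
    """Mean filter with window (2r+1)^2 using edge padding."""
    p = np.pad(a, r, mode="edge")
    out = np.zeros_like(a, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

def refine_flow(Ix, Iy, It, n_iter=50, radius=1):
    """Decoupled optical-flow estimation (toy version).

    Alternates a smoothness step (local averaging of the flow field)
    with a pointwise data step enforcing Ix*u + Iy*v + It ~ 0.
    """
    u = np.zeros_like(Ix, dtype=float)
    v = np.zeros_like(Ix, dtype=float)
    for _ in range(n_iter):
        u_bar = box_filter(u, radius)            # smoothness step
        v_bar = box_filter(v, radius)
        # data step: closed-form correction toward brightness constancy
        t = (Ix * u_bar + Iy * v_bar + It) / (1.0 + Ix**2 + Iy**2)
        u = u_bar - Ix * t
        v = v_bar - Iy * t
    return u, v
```

Because each half-step has a closed-form solution, the alternation is simple and fast, which is the practical appeal of decoupling the joint energy.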

    Filamentary Accretion Flows in the Embedded Serpens South Protocluster

    One puzzle in understanding how stars form in clusters is the source of mass -- is all of the mass in place before the first stars are born, or is there an extended period when the cluster accretes material which can continuously fuel the star formation process? We use a multi-line spectral survey of the southern filament associated with the Serpens South embedded cluster-forming region to determine whether mass is accreting from the filament onto the cluster, and whether the accretion rate is significant. Our analysis suggests that material is flowing along the filament's long axis at a rate of ~30 Msol/Myr (inferred from the N2H+ velocity gradient along the filament), and radially contracting onto the filament at ~130 Msol/Myr (inferred from HNC self-absorption). These accretion rates are sufficient to supply mass to the central cluster at a rate similar to the current star formation rate in the cluster. Filamentary accretion flows may therefore be very important in the ongoing evolution of this cluster.
    Comment: 19 pages, 8 figures, 2 tables; accepted for publication in Ap
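
The longitudinal-flow estimate rests on a simple relation: for a linear velocity gradient dv/dl along a filament of mass M, the mass flow rate is Mdot = M * dv/dl. A minimal sketch of the unit bookkeeping (the filament mass and gradient below are hypothetical illustrative numbers, not values from the paper):

```python
def filament_flow_rate(filament_mass_msun, velocity_gradient_kms_per_pc):
    """Mass flow rate along a filament, Mdot = M * dv/dl, in Msun/Myr.

    Unit conversion: 1 km/s per pc = 1 km/s / 3.0857e13 km, and
    1 Myr = 3.156e13 s, so 1 km/s/pc ~ 1.0227 Myr^-1.
    """
    KMS_PER_PC_TO_INV_MYR = 1.0227
    return filament_mass_msun * velocity_gradient_kms_per_pc * KMS_PER_PC_TO_INV_MYR
```

With a hypothetical ~100 Msun filament and a ~0.3 km/s/pc gradient, this yields ~30 Msun/Myr, the order of magnitude of the quoted longitudinal rate.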

    Hopfield Networks in Relevance and Redundancy Feature Selection Applied to Classification of Biomedical High-Resolution Micro-CT Images

    We study filter-based feature selection methods for classification of biomedical images. For feature selection, we use two filters: a relevance filter, which measures the usefulness of individual features for target prediction, and a redundancy filter, which measures similarity between features. As a selection method that combines relevance and redundancy, we try out a Hopfield network. We experimentally compare selection methods, running unitary redundancy and relevance filters, against a greedy algorithm with redundancy thresholds [9], the min-redundancy max-relevance integration [8,23,36], and our Hopfield network selection. We conclude that, on the whole, Hopfield selection was one of the most successful methods, outperforming min-redundancy max-relevance when more features are selected.
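
The Hopfield formulation can be sketched as follows (our minimal reading of the idea, not the paper's network; the energy weighting lam and the asynchronous update schedule are assumptions): selection states s_i in {0, 1} minimize an energy that rewards relevance and penalizes pairwise redundancy, E(s) = -relevance . s + (lam/2) * s^T R s.

```python
import numpy as np

def hopfield_select(relevance, redundancy, lam=1.0, n_sweeps=20):
    """Binary feature selection by descending a Hopfield-style energy.

    A feature is switched on iff its relevance outweighs its accumulated
    redundancy with the currently selected features (self-redundancy is
    excluded from the local field).
    """
    n = len(relevance)
    s = np.zeros(n)
    for _ in range(n_sweeps):
        changed = False
        for i in range(n):
            field = relevance[i] - lam * (redundancy[i] @ s - redundancy[i, i] * s[i])
            new = 1.0 if field > 0 else 0.0
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:       # a fixed point is a local energy minimum
            break
    return s
```

On a toy example with two nearly duplicate high-relevance features and one weak independent feature, the network keeps one of the duplicates and the independent feature, which is the intended relevance-redundancy trade-off.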

    Quantitative analysis of microscopy

    Particle tracking is an essential tool for the study of the dynamics of biological processes. These dynamics unfold in three-dimensional (3D) space, as the biological structures themselves are 3D. The focus of this thesis is the development of single particle tracking methods for analysing the dynamics of biological processes through the use of image processing techniques. First, a novel particle tracking method that works with two-dimensional (2D) image data is introduced. This method uses the theory of Haar-like features for particle detection, and trajectory linking is achieved using a combination of three Kalman filters within an interacting multiple models framework. The trajectory linking process utilises an extended state space variable which better describes the morphology and intensity profiles of the particles under investigation at their current position. This tracking method is validated using both 2D synthetically generated images and 2D experimentally collected images, and it is shown to outperform 14 other state-of-the-art methods. Next, this method is used to analyse the dynamics of fluorescently labelled particles using a live-cell fluorescence microscopy technique, specifically spt-PALM, a variant of the super-resolution (SR) method PALM. From this application, conclusions are drawn about the organisation of the proteins under investigation at the cell membrane. A second particle tracking method is then introduced, which is highly efficient and capable of working with both 2D and 3D image data. This method uses a novel Haar-inspired feature for particle detection, drawing inspiration from the type of particles to be detected, which are typically circular in 2D image space and spherical in 3D image space.
    Trajectory linking in this method utilises a global nearest neighbour methodology, incorporating both motion models to describe the motion of the particles under investigation and a further extended state space variable describing many more aspects of the particles to be linked. This method is validated using a variety of both 2D and 3D synthetic image data. Its performance is compared with 14 other state-of-the-art methods, showing it to be one of the best overall performing methods. Finally, analysis tools are developed to study an SR image restoration method developed by our research group, referred to as Translation Microscopy (TRAM) [1]. TRAM can be implemented on any standard microscope and delivers an improvement in resolution of up to 7-fold. However, the results from TRAM and other SR imaging methods require specialised tools for validation and analysis. Tools have been developed to validate that TRAM performs correctly, using a specially designed ground truth. Furthermore, analysis of results on a biological sample corroborates other published results based on the size of biological structures, showing again that TRAM performs as expected. EPSRC
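
The motion-model component of such trackers can be illustrated with a single constant-velocity Kalman filter (a minimal sketch under our own noise assumptions; the thesis combines several such filters in an interacting multiple models framework and adds appearance state, which is not reproduced here).

```python
import numpy as np

DT = 1.0
F = np.array([[1, 0, DT, 0],
              [0, 1, 0, DT],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only (x, y) position is observed

def kalman_step(x, P, z, q=1e-3, r=1e-2):
    """One predict/update cycle of a constant-velocity Kalman filter.

    x: state (x, y, vx, vy); P: state covariance; z: measured (x, y);
    q, r: assumed process and measurement noise variances.
    """
    # predict
    x = F @ x
    P = F @ P @ F.T + q * np.eye(4)
    # update
    S = H @ P @ H.T + r * np.eye(2)          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

The predicted position H @ F @ x is what a nearest-neighbour linker would compare against candidate detections in the next frame, with the innovation covariance S providing a natural gating distance.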