184 research outputs found

    Extremal Regions Detection Guided by Maxima of Gradient Magnitude


    MTrack: Automated Detection, Tracking, and Analysis of Dynamic Microtubules

    Microtubules are polar, dynamic filaments fundamental to many cellular processes. In vitro reconstitution approaches with purified tubulin are essential to elucidate different aspects of microtubule behavior. To date, deriving data from fluorescence microscopy images by manually creating and analyzing kymographs is still commonplace. Here, we present MTrack, implemented as a plug-in for the open-source platform Fiji, which automatically identifies and tracks dynamic microtubules with sub-pixel resolution using advanced object recognition. MTrack provides automatic data interpretation, yielding relevant parameters of microtubule dynamic instability together with population statistics. The application of our software produces unbiased and comparable quantitative datasets in a fully automated fashion. This helps the experimentalist achieve higher reproducibility at higher throughput on a user-friendly platform. We use simulated and real data to benchmark our algorithm and show that it reliably detects, tracks, and analyzes dynamic microtubules and achieves sub-pixel precision even at low signal-to-noise ratios.

    V.K. was supported by the IRI Life Sciences postdoc fellowship in the labs of S.R. and S.P. C.H. and S.R. acknowledge funding by the IRI Life Sciences (Humboldt-Universität zu Berlin, Excellence Initiative/DFG). W.H. was supported by the Alliance Berlin Canberra “Crossing Boundaries: Molecular Interactions in Malaria”, which is co-funded by a grant from the Deutsche Forschungsgemeinschaft (DFG) for the International Research Training Group (IRTG) 2290 and the Australian National University. S.P. was supported by the MDC Berlin.
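    The abstract does not spell out MTrack's sub-pixel scheme, but a minimal way to push a peak estimate below pixel resolution is three-point parabolic interpolation around the brightest sample of an intensity profile (e.g. along a microtubule tip). The sketch below is illustrative only, not MTrack's actual algorithm; `subpixel_peak` and the synthetic profile are assumptions for the example:

```python
import math

def subpixel_peak(profile, i):
    """Refine integer peak index i to sub-pixel precision by fitting
    a parabola through the three samples around the maximum."""
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    denom = y0 - 2.0 * y1 + y2
    if denom == 0:
        return float(i)  # flat top: no refinement possible
    return i + 0.5 * (y0 - y2) / denom

# Noise-free Gaussian spot centred at x = 2.3 pixels
profile = [math.exp(-((x - 2.3) ** 2) / 2.0) for x in range(6)]
peak = max(range(1, 5), key=lambda i: profile[i])  # integer argmax = 2
refined = subpixel_peak(profile, peak)             # ~2.25, within 0.05 px
```

    Even this noise-free toy shows why sub-pixel refinement matters: the integer argmax alone is off by 0.3 px, the refined estimate by under 0.05 px.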

    Characterness: An indicator of text in the wild

    Text in an image provides vital information for interpreting its contents, and text in a scene can aid a variety of tasks from navigation to obstacle avoidance and odometry. Despite its value, however, detecting general text in images remains a challenging research problem. Motivated by the need to consider the widely varying forms of natural text, we propose a bottom-up approach to the problem, which reflects the characterness of an image region. In this sense, our approach mirrors the move from saliency detection methods to measures of objectness. In order to measure the characterness, we develop three novel cues that are tailored for character detection and a Bayesian method for their integration. Because text is made up of sets of characters, we then design a Markov random field model so as to exploit the inherent dependencies between characters. We experimentally demonstrate the effectiveness of our characterness cues as well as the advantage of Bayesian multicue integration. The proposed text detector outperforms state-of-the-art methods on several benchmark scene text detection data sets. We also show that our measurement of characterness is superior to state-of-the-art saliency detection models when applied to the same task. © 2013 IEEE
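    The Bayesian multicue integration can be sketched as a naive-Bayes fusion: the posterior that a region is a character is proportional to the prior times the product of per-cue likelihoods. The cue names and the hand-tuned likelihoods below are hypothetical stand-ins (the paper learns its cue distributions from data):

```python
def characterness(cues, likelihoods, prior=0.5):
    """Fuse per-region cue values into a posterior under a
    naive-Bayes independence assumption:
    P(char | cues) ∝ P(char) * Π_i P(cue_i | char)."""
    p_char, p_bg = prior, 1.0 - prior
    for name, value in cues.items():
        like_char, like_bg = likelihoods[name](value)
        p_char *= like_char
        p_bg *= like_bg
    return p_char / (p_char + p_bg)

# Hypothetical likelihoods: each returns (P(v | char), P(v | background))
toy_likelihoods = {
    "stroke_width_var": lambda v: (0.8, 0.3) if v < 0.2 else (0.2, 0.7),
    "edge_density":     lambda v: (0.7, 0.4) if v > 0.5 else (0.3, 0.6),
}
score = characterness(
    {"stroke_width_var": 0.1, "edge_density": 0.8}, toy_likelihoods
)  # ≈ 0.82: region looks character-like
```

    The per-region posteriors would then feed the Markov random field as unary potentials, with pairwise terms encoding the dependencies between neighbouring characters.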

    A mask-based approach for the geometric calibration of thermal-infrared cameras

    Accurate and efficient thermal-infrared (IR) camera calibration is important for advancing computer vision research within the thermal modality. This paper presents an approach for geometrically calibrating individual and multiple cameras in both the thermal and visible modalities. The proposed technique can be used to correct for lens distortion and to simultaneously reference both visible and thermal-IR cameras to a single coordinate frame. The most popular existing approach for the geometric calibration of thermal cameras uses a printed chessboard heated by a flood lamp and is comparatively inaccurate and difficult to execute. Additionally, software toolkits provided for calibration are either unsuitable for this task or require substantial manual intervention. A new geometric mask with high thermal contrast that does not require a flood lamp is presented as an alternative calibration pattern. Calibration points on the pattern are then accurately located using a clustering-based algorithm which utilizes the maximally stable extremal region detector. This algorithm is integrated into an automatic end-to-end system for calibrating single or multiple cameras. The evaluation shows that using the proposed mask achieves a mean reprojection error up to 78% lower than that using a heated chessboard. The effectiveness of the approach is further demonstrated by using it to calibrate two multiple-camera multiple-modality setups. Source code and binaries for the developed software are provided on the project Web site.
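    The reported 78% improvement refers to mean reprojection error: project the known calibration points through the estimated camera model and average their distance to the detected image positions. A minimal sketch with an ideal pinhole model (distortion omitted; the intrinsics and points below are made-up values, not the paper's data):

```python
import math

def project(point, f, cx, cy):
    """Ideal pinhole projection of a camera-frame 3-D point
    (lens distortion omitted for brevity)."""
    X, Y, Z = point
    return (f * X / Z + cx, f * Y / Z + cy)

def mean_reprojection_error(points3d, detected2d, f, cx, cy):
    """Mean Euclidean distance between projected calibration points
    and their detected image positions."""
    total = 0.0
    for p3, (u, v) in zip(points3d, detected2d):
        pu, pv = project(p3, f, cx, cy)
        total += math.hypot(u - pu, v - pv)
    return total / len(points3d)

# Hypothetical intrinsics and two pattern points one metre from the camera
f, cx, cy = 500.0, 320.0, 240.0
pts3d = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0)]
detected = [(321.0, 240.0), (370.0, 241.0)]  # each detection off by 1 px
err = mean_reprojection_error(pts3d, detected, f, cx, cy)  # 1.0 px
```

    In a real pipeline the projection would include the estimated distortion model, and the error would be averaged over all pattern points in all calibration images.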

    Robust Feature Matching in Long-Running Poor-Quality Videos


    Cell segmentation methods for label-free contrast microscopy: review and comprehensive comparison

    Because of its non-destructive nature, label-free imaging is an important strategy for studying biological processes. However, routine microscopic techniques like phase contrast or differential interference contrast (DIC) suffer from shadow-cast artifacts, making automatic segmentation challenging. The aim of this study was to compare the segmentation efficacy of the published steps of the segmentation workflow (image reconstruction, foreground segmentation, cell detection (seed-point extraction), and cell (instance) segmentation) on a dataset of the same cells imaged with multiple contrast microscopic modalities.
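    Of the surveyed workflow steps, seed-point extraction is the easiest to make concrete: one common strategy is to take strict local intensity maxima above a threshold as one seed per cell. The sketch below is a generic illustration of that idea, not any specific method from the comparison:

```python
def seed_points(image, threshold):
    """Extract seed points as strict local maxima (8-neighbourhood)
    above an intensity threshold; each seed would initialise one
    cell instance in a later segmentation step."""
    h, w = len(image), len(image[0])
    seeds = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = image[y][x]
            if v < threshold:
                continue
            neigh = [image[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0)]
            if all(v > n for n in neigh):
                seeds.append((x, y))
    return seeds

# Toy 5x5 image with two bright cells and a dim artifact between them
image = [
    [0, 0, 0, 0, 0],
    [0, 5, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 7, 0],
    [0, 0, 0, 0, 0],
]
seeds = seed_points(image, 2)  # [(1, 1), (3, 3)]
```

    On real phase-contrast or DIC data, the image would first pass through the reconstruction and foreground-segmentation steps; raw local maxima on artifact-laden images produce spurious seeds.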

    Graph-based Spatial Motion Tracking Using Affine-covariant Regions

    This thesis considers the task of spatial motion reconstruction from image sequences using a stereoscopic camera setup. In a variety of fields, such as flow analysis in physics or the measurement of oscillation characteristics and damping behavior in mechanical engineering, efficient and accurate methods for motion analysis are of great importance. This work discusses each algorithmic step of the motion reconstruction problem using a set of freely available image sequences. The presented concepts and evaluation results are of a generic nature and may thus be applied to a multitude of applications in various fields where motion can be observed by two calibrated cameras. The first step in the processing chain of a motion reconstruction algorithm is concerned with the automated detection of salient locations (features or regions) within each image of a given sequence. In this thesis, detection is performed directly on the natural texture of the observed objects instead of using artificial marker elements (as with many currently available methods). As one of the major contributions of this work, five well-known detection methods from the contemporary literature are compared to each other with regard to several performance measures, such as localization accuracy or robustness under perspective distortions. The given results extend the available literature on the topic and facilitate the well-founded selection of appropriate detectors according to the requirements of specific target applications. In the second step, both spatial and temporal correspondences have to be established between features extracted from different images. With the former, two images taken at the same time instant but with different cameras are considered (stereo reconstruction), while with the latter, correspondences are sought between temporally adjacent images from the same camera (monocular feature tracking).
With most classical methods, an observed object is either spatially reconstructed at a single time instant, yielding a set of three-dimensional coordinates, or its motion is analyzed separately within each camera, yielding a set of two-dimensional trajectories. A major contribution of this thesis is a concept for the unification of both stereo reconstruction and monocular tracking. Based on sets of two-dimensional trajectories from each camera of a stereo setup, the proposed method uses a graph-based approach to find correspondences not between single features but between entire trajectories. Thereby, the influence of locally ambiguous correspondences is mitigated significantly. The resulting spatial trajectories contain both the three-dimensional structure and the motion of the observed objects at the same time. To the best of the author's knowledge, a similar concept does not yet exist in the literature. In a detailed evaluation, the superiority of the new method is demonstrated.
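    The core idea of matching entire trajectories rather than single features can be sketched with a toy cost: for a rectified stereo rig, corresponding points lie on the same image row, so a trajectory pair can be scored by its mean vertical disparity over common frames and matched one-to-one. The greedy assignment below is a simplified stand-in for the thesis's graph-based correspondence search, and all names are assumptions:

```python
def trajectory_cost(traj_a, traj_b):
    """Cost between two 2-D trajectories (frame -> (x, y)) over their
    common frames: mean |y_a - y_b|, i.e. how badly the pair violates
    the epipolar constraint of a rectified stereo rig."""
    common = set(traj_a) & set(traj_b)
    if not common:
        return float("inf")
    return sum(abs(traj_a[t][1] - traj_b[t][1]) for t in common) / len(common)

def match_trajectories(left, right):
    """Greedy one-to-one assignment of whole trajectories by
    increasing cost."""
    pairs = sorted(
        (trajectory_cost(a, b), i, j)
        for i, a in enumerate(left) for j, b in enumerate(right)
    )
    used_l, used_r, matches = set(), set(), []
    for cost, i, j in pairs:
        if cost == float("inf") or i in used_l or j in used_r:
            continue
        matches.append((i, j))
        used_l.add(i)
        used_r.add(j)
    return matches

# Two tracked points seen by both cameras over frames 0 and 1
left = [{0: (10, 5), 1: (12, 5)}, {0: (40, 20), 1: (41, 21)}]
right = [{0: (35, 20), 1: (36, 21)}, {0: (8, 5), 1: (9, 5)}]
matches = match_trajectories(left, right)  # [(0, 1), (1, 0)]
```

    Aggregating the cost over all common frames is what suppresses locally ambiguous matches: a single frame where two features coincide barely moves the per-trajectory mean.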

    Robust surface modelling of visual hull from multiple silhouettes

    Reconstructing depth information from images is one of the actively researched themes in computer vision, and its applications involve most vision research areas from object recognition to realistic visualisation. Amongst other useful vision-based reconstruction techniques, this thesis extensively investigates the visual hull (VH) concept for volume approximation and its robust surface modelling when various views of an object are available. Assuming that multiple images are captured from a circular motion, projection matrices are generally parameterised in terms of a rotation angle from a reference position in order to facilitate the multi-camera calibration. However, this assumption is often violated in practice, i.e., a pure planar rotation with an accurately known rotation angle is hardly realisable. To address this problem, this thesis first proposes a calibration method for approximate circular motion. With these modified projection matrices, the resulting VH is represented by a hierarchical tree structure of voxels from which surfaces are extracted by the Marching cubes (MC) algorithm. However, the surfaces may have unexpected artefacts caused by a coarse volume reconstruction, the topological ambiguity of the MC algorithm, and imperfect image processing or calibration results. To avoid this sensitivity, this thesis proposes a robust surface construction algorithm which initially classifies local convex regions from imperfect MC vertices and then aggregates local surfaces constructed by the 3D convex hull algorithm. Furthermore, this thesis also explores the use of wide-baseline images to refine a coarse VH using an affine-invariant region descriptor. This improves the quality of the VH when only a small number of initial views is given. In conclusion, the proposed methods achieve a 3D model with enhanced accuracy, and robust surface modelling is retained when silhouette images are degraded by practical noise.
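    The defining property of the visual hull is easy to state in code: a voxel survives only if every camera projects it inside that camera's silhouette. The sketch below uses two made-up orthographic views for brevity; the thesis works with calibrated perspective projection matrices and a hierarchical voxel tree rather than a flat grid:

```python
from itertools import product

def visual_hull(voxels, projections, silhouettes):
    """Carve a voxel set: keep a voxel only if every camera projects
    it inside that camera's silhouette."""
    return [
        v for v in voxels
        if all(proj(v) in sil for proj, sil in zip(projections, silhouettes))
    ]

# Two hypothetical orthographic views of a 3x3x3 voxel grid:
# a top view dropping z and a front view dropping y.
voxels = list(product(range(3), repeat=3))
projections = [lambda v: (v[0], v[1]), lambda v: (v[0], v[2])]
silhouettes = [{(1, 1)}, {(1, 1)}]
hull = visual_hull(voxels, projections, silhouettes)  # [(1, 1, 1)]
```

    The surviving voxel set is by construction a superset of the true object, which is why the thesis then needs the robust surface extraction and wide-baseline refinement steps to tighten the approximation.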

    Contributions to the Completeness and Complementarity of Local Image Features

    Doctoral thesis in Informatics Engineering presented to the Faculty of Sciences and Technology of the University of Coimbra.

    Local image feature detection (or extraction, if we want to use a more semantically correct term) is a central and extremely active research topic in the field of computer vision. Reliable solutions to prominent problems such as matching, content-based image retrieval, object (class) recognition, and symmetry detection often make use of local image features. It is widely accepted that a good local feature detector is one that efficiently retrieves distinctive, accurate, and repeatable features in the presence of a wide variety of photometric and geometric transformations. However, these requirements are not always the most important. In fact, not all applications require the same properties from a local feature detector. We can distinguish three broad categories of applications according to the required properties. The first category includes applications in which the semantic meaning of a particular type of features is exploited. For instance, edge or even ridge detection can be used to identify blood vessels in medical images or watercourses in aerial images. Another example in this category is the use of blob extraction to identify blob-like organisms in microscopic images. A second category includes tasks such as matching, tracking, and registration, which mainly require distinctive, repeatable, and accurate features. Finally, a third category comprises applications such as object (class) recognition, image retrieval, scene classification, and image compression. For this category, it is crucial that features preserve the most informative image content (robust image representation), while requirements such as repeatability and accuracy are of less importance. Our research work is mainly focused on the problem of providing a robust image representation through the use of local features.
The limited number of types of features that a local feature extractor responds to might be insufficient to provide the so-called robust image representation. It is fundamental to analyze the completeness of local features, i.e., the amount of image information preserved by local features, as well as the often neglected complementarity between sets of features. The major contributions of this work come in the form of two substantially different local feature detectors aimed at providing considerably robust image representations. The first algorithm is an information theoretic-based keypoint extraction that responds to complementary local structures that are salient (highly informative) within the image context. This method represents a new paradigm in local feature extraction, as it introduces context-awareness principles. The second algorithm extracts Stable Salient Shapes, a novel type of regions, which are obtained through a feature-driven detection of Maximally Stable Extremal Regions (MSER). This method provides compact and robust image representations and overcomes some of the major shortcomings of MSER detection. We empirically validate the methods by investigating the repeatability, accuracy, completeness, and complementarity of the proposed features on standard benchmarks. In light of these results, we discuss the applicability of both methods.
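    The notion of "salient within the image context" can be made concrete with Shannon self-information: quantise local patches, estimate each patch's probability from its frequency in the same image, and score it by -log2 p. This is only a minimal illustration of the context-awareness principle, not the thesis's actual extractor:

```python
import math
from collections import Counter

def self_information(patches):
    """Score each (quantised) local patch by -log2 p(patch), with p
    estimated from patch frequencies in the same image: structures
    that are rare in this context are the most informative."""
    counts = Counter(patches)
    total = len(patches)
    return [-math.log2(counts[p] / total) for p in patches]

# Seven repeats of a flat texture and one rare corner-like patch
patches = ["flat"] * 7 + ["corner"]
scores = self_information(patches)  # corner scores -log2(1/8) = 3 bits
```

    Because the probabilities come from the image itself, the same patch can be salient in one image and uninformative in another, which is exactly the context-dependence the first contribution introduces.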