
    A multi-scale filament extraction method: getfilaments

    Far-infrared imaging surveys of Galactic star-forming regions with Herschel have shown that a substantial part of the cold interstellar medium appears as a fascinating web of omnipresent filamentary structures. This highly anisotropic ingredient of the interstellar material further complicates the already difficult problem of systematically detecting and measuring dense cores in strongly variable but (relatively) isotropic backgrounds. Observational evidence that stars form in dense filaments creates severe problems for automated source extraction methods, which must reliably distinguish sources not only from fluctuating backgrounds and noise but also from the filamentary structures themselves. A previous paper presented the multi-scale, multi-wavelength source extraction method getsources, based on a fine spatial scale decomposition of images and the filtering of irrelevant scales. In this paper, a multi-scale, multi-wavelength filament extraction method, getfilaments, is presented that solves this problem, substantially improving the robustness of source extraction with getsources in filamentary backgrounds. The main difference is that the filaments extracted by getfilaments are now subtracted by getsources from detection images during source extraction, greatly reducing the chances of contaminating catalogs with spurious sources. The intimate physical relationship between forming stars and filaments seen in Herschel observations demands that accurate filament extraction methods remove the contribution of sources, and that accurate source extraction methods be able to remove underlying filamentary structures. Source extraction with getsources now also provides researchers with clean images of filaments, free of sources, noise, and isotropic backgrounds. Comment: 15 pages, 19 figures, to be published in Astronomy & Astrophysics; language polished for better readability
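The abstract describes a fine spatial scale decomposition as the basis of getsources and getfilaments. The actual algorithm is not reproduced here; as a minimal sketch of the underlying idea, the following numpy code (all function names are illustrative, not the authors') splits an image into single-scale components by successive smoothing and differencing, so that structure at irrelevant scales can be filtered out while the decomposition remains exactly invertible:

```python
import numpy as np

def smooth(img, size):
    """Box-smooth an image by averaging over a size-wide neighbourhood
    (a stand-in for the smoothing kernel an actual pipeline would use)."""
    kernel = np.ones(size) / size
    # separable convolution: rows first, then columns ('same' keeps the shape)
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    return out

def decompose(img, scales):
    """Split an image into single-scale components plus a residual background.

    Each component holds structure between two successive smoothing scales;
    summing all components and the residual recovers the input exactly.
    """
    components, current = [], img.astype(float)
    for s in scales:
        smoothed = smooth(current, s)
        components.append(current - smoothed)  # structure finer than scale s
        current = smoothed
    return components, current  # current = large-scale background

# toy image: flat background plus one compact "source"
img = np.zeros((32, 32))
img[16, 16] = 100.0
parts, background = decompose(img, scales=[3, 9])
```

In such a decomposition a compact source dominates the fine-scale components, while an elongated filament persists across intermediate scales, which is what makes separating the two tractable.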

    High-level programming of stencil computations on multi-GPU systems using the SkelCL library

    The implementation of stencil computations on modern, massively parallel systems with GPUs and other accelerators currently relies on manually tuned coding using low-level approaches like OpenCL and CUDA. This makes the development of stencil applications a complex, time-consuming, and error-prone task. We describe how stencil computations can be programmed in our SkelCL approach, which combines high-level programming abstractions with competitive performance on multi-GPU systems. SkelCL extends the OpenCL standard by three high-level features: 1) pre-implemented parallel patterns (a.k.a. skeletons); 2) container data types for vectors and matrices; 3) an automatic data (re)distribution mechanism. We introduce two new SkelCL skeletons that specifically target stencil computations, MapOverlap and Stencil, and we describe their use for particular application examples, discuss their efficient parallel implementation, and report experimental results on systems with multiple GPUs. Our evaluation of three real-world applications shows that stencil code written with SkelCL is considerably shorter than, and offers competitive performance to, hand-tuned OpenCL code.
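SkelCL's actual C++/OpenCL API is not reproduced here; as an illustration of what a MapOverlap-style skeleton abstracts away, this hypothetical numpy sketch applies a user-supplied stencil function to every cell together with its halo, handling the boundary with a fixed policy (all names, including `map_overlap`, are illustrative and not SkelCL's):

```python
import numpy as np

def map_overlap(f, grid, radius=1):
    """Apply f to each cell's (2*radius+1)^2 neighbourhood.

    This mimics what a MapOverlap-style skeleton hides from the user:
    halo/boundary handling and the per-cell loop, which a GPU skeleton
    library would execute in parallel across devices.
    """
    padded = np.pad(grid, radius, mode="edge")  # boundary policy: replicate edges
    out = np.empty_like(grid, dtype=float)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = f(window)
    return out

# example stencil: one 4-point Jacobi step for the heat equation
def jacobi(w):
    return 0.25 * (w[0, 1] + w[2, 1] + w[1, 0] + w[1, 2])

grid = np.zeros((8, 8))
grid[0, :] = 100.0          # hot top boundary
result = map_overlap(jacobi, grid)
```

The point of the skeleton approach is that only `jacobi` is application-specific; the neighbourhood access pattern, boundary handling, and parallelisation strategy are provided once by the library.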

    Extracting curve-skeletons from digital shapes using occluding contours

    Curve-skeletons are compact and semantically relevant shape descriptors, able to summarize both the topology and the pose of a wide range of digital objects. Most state-of-the-art algorithms for their computation are sensitive to the type of geometric primitives used and to the sampling frequency. In this paper we introduce a formally sound and intuitive definition of curve-skeleton, and then propose a novel method for skeleton extraction that relies on the visual appearance of the shape. To achieve this result we inspect the properties of occluding contours, showing how information about the symmetry axes of a 3D shape can be inferred from a small set of its planar projections. The proposed method is fast, insensitive to noise, capable of working with different shape representations, resolution insensitive, and easy to implement.

    Covariate factor mitigation techniques for robust gait recognition

    The human gait is a discriminative feature capable of recognising a person by their unique walking manner. Currently, gait recognition is based on videos captured in a controlled environment. These videos contain challenges, termed covariate factors, which affect the natural appearance and motion of gait, e.g. carrying a bag, clothing, shoe type and time. However, gait recognition has yet to achieve robustness to these covariate factors. To achieve enhanced robustness, it is essential to address the existing limitations of gait recognition. Specifically, this thesis develops an understanding of how covariate factors behave while a person is in motion and of the impact covariate factors have on the natural appearance and motion of gait. Enhanced robustness is achieved by combining novel gait representations with novel covariate factor detection and removal procedures. Having addressed the limitations regarding covariate factors, this thesis achieves the goal of robust gait recognition. Using a skeleton representation of the human figure, the Skeleton Variance Image condenses a skeleton sequence into a single compact 2D gait representation that expresses the natural gait motion. In addition, a covariate factor detection and removal module is used to maximise the mitigation of covariate factor effects. By establishing the average pixel distribution within training (covariate-factor-free) representations, a comparison against test (covariate factor) representations achieves effective covariate factor detection. The corresponding difference can effectively remove covariate factors which occur at the boundary of, and hidden within, the human figure.
    The Engineering and Physical Sciences Research Council (EPSRC)
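The detection step described above, comparing a test representation against the average pixel distribution of covariate-free training representations, can be sketched as follows. This is a hedged numpy illustration of the idea only, not the thesis's implementation; the threshold value, the random toy data, and the choice to replace flagged pixels with the training average are all assumptions made for the example:

```python
import numpy as np

# toy "gait representations": 2D arrays with values in [0, 1]
rng = np.random.default_rng(0)
train = rng.random((20, 16, 16)) * 0.1 + 0.5    # covariate-factor-free set
test = train[0].copy()
test[4:8, 4:8] = 1.0                            # simulated carried bag

mean_rep = train.mean(axis=0)   # average pixel distribution of the training set

def remove_covariates(rep, mean_rep, threshold=0.3):
    """Flag pixels that deviate strongly from the covariate-free average
    and replace them, approximating a detection-and-removal step."""
    deviation = np.abs(rep - mean_rep)
    covariate_mask = deviation > threshold       # detection
    cleaned = np.where(covariate_mask, mean_rep, rep)  # removal
    return cleaned, covariate_mask

cleaned, mask = remove_covariates(test, mean_rep)
```

The simulated bag produces deviations far above the natural variation of the training set, so thresholding the difference isolates exactly the covariate region.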

    Affine Registration of label maps in Label Space

    Two key aspects of coupled multi-object shape analysis and atlas generation are the choice of representation and the subsequent registration methods used to align the sample set. For example, a typical brain image can be labeled into three structures: grey matter, white matter and cerebrospinal fluid. Many manipulations such as interpolation, transformation, smoothing, or registration need to be performed on these images before they can be used in further analysis. Current techniques for such analysis tend to trade off performance between the two tasks, performing well for one task but developing problems when used for the other.
    This article proposes to use a representation that is both flexible and well suited for both tasks. We propose to map object labels to vertices of a regular simplex, e.g. the unit interval for two labels, a triangle for three labels, a tetrahedron for four labels, etc. This representation, which is routinely used in fuzzy classification, is ideally suited for representing and registering multiple shapes. On closer examination, this representation reveals several desirable properties: algebraic operations may be done directly, label uncertainty is expressed as a weighted mixture of labels (probabilistic interpretation), interpolation is unbiased toward any label or the background, and registration may be performed directly.
    We demonstrate these properties by using label space in a gradient descent based registration scheme to obtain a probabilistic atlas. While straightforward, this iterative method is very slow, could get stuck in local minima, and depends heavily on the initial conditions. To address these issues, two fast methods are proposed which serve as coarse registration schemes, following which the iterative descent method can be used to refine the results. Further, we derive an analytical formulation for direct computation of the "group mean" from the parameters of pairwise registration of all the images in the sample set. We show results on richly labeled 2D and 3D data sets.
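The simplex mapping described above can be sketched briefly. A minimal numpy version, assuming the standard basis vectors of R^n as the (regular, mutually equidistant) simplex vertices, shows the unbiased-interpolation property: blending two label maps in label space yields a weighted mixture of labels rather than favouring any one label or the background. Function names are illustrative, not the paper's:

```python
import numpy as np

def simplex_vertices(n_labels):
    """Mutually equidistant vertices for n labels: the standard basis of
    R^n (a regular simplex, matching the fuzzy-classification view)."""
    return np.eye(n_labels)

def to_label_space(label_map, n_labels):
    """Map an integer label image to simplex (label-space) coordinates."""
    return simplex_vertices(n_labels)[label_map]

def interpolate(a, b, t):
    """Linear interpolation in label space; the result is a weighted
    mixture of labels, unbiased toward any particular label."""
    return (1 - t) * a + t * b

labels_a = np.array([[0, 1], [2, 2]])
labels_b = np.array([[1, 1], [2, 0]])
A = to_label_space(labels_a, 3)
B = to_label_space(labels_b, 3)
mid = interpolate(A, B, 0.5)     # soft (probabilistic) label image
hard = mid.argmax(axis=-1)       # decode back to discrete labels
```

Because the blend stays on the simplex (weights sum to one), it reads directly as a per-pixel label probability, which is what makes registration and atlas averaging well behaved in this representation.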

    Skeletonization methods for image and volume inpainting
