
    On the segmentation and classification of hand radiographs

    This research is part of a wider project to build predictive models of bone age using hand radiograph images. We examine ways of finding the outline of a hand from an X-ray as the first stage in segmenting the image into constituent bones. We assess a variety of algorithms, including contouring, which has not previously been used in this context. We introduce a novel ensemble algorithm for combining outlines using two voting schemes, a likelihood ratio test and dynamic time warping (DTW). Our goal is to minimize the human intervention required; hence we investigate alternative ways of training a classifier to determine whether an outline is in fact correct. We evaluate outlining and classification on a set of 1370 images. We conclude that ensembling with DTW improves the performance of all outlining algorithms, that the contouring algorithm used with the DTW ensemble performs best of those assessed, and that the most effective classifier of hand outlines assessed is a random forest applied to outlines transformed into principal components.
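    As a rough illustration of two ingredients mentioned above, the sketch below (not the authors' code) compares two resampled outlines with a plain dynamic time warping distance and trains a random forest on PCA-transformed outlines; the array shapes, scikit-learn usage, and parameter values are assumptions made for the example.

```python
# Illustrative sketch only: DTW distance between two hand outlines, and a random
# forest trained on outlines projected onto principal components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def dtw_distance(a, b):
    """Plain O(n*m) DTW between two outline point sequences (n x 2 and m x 2 arrays)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def train_outline_classifier(outlines, labels, n_components=20):
    """outlines: (n_samples, n_points, 2) array of resampled outlines;
    labels: 1 = correct outline, 0 = failed outline."""
    X = outlines.reshape(len(outlines), -1)           # flatten each outline
    pca = PCA(n_components=n_components).fit(X)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(pca.transform(X), labels)
    return pca, clf
```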

    Subspace Representations and Learning for Visual Recognition

    Pervasive and affordable sensor and storage technology enables the acquisition of an ever-rising amount of visual data. The ability to extract semantic information by interpreting, indexing and searching visual data is impacting domains such as surveillance, robotics, intelligence, human-computer interaction, navigation, healthcare, and several others. This further stimulates the investigation of automated extraction techniques that are more efficient and more robust against the many sources of noise affecting the already complex visual data, which carries the semantic information of interest. We address the problem by designing novel visual data representations, based on learning data subspace decompositions that are invariant to noise while being informative for the task at hand. We use this guiding principle to tackle several visual recognition problems, including detection and recognition of human interactions from surveillance video, face recognition in unconstrained environments, and domain generalization for object recognition.

    By interpreting visual data with a simple additive noise model, we consider the subspaces spanned by the model portion (model subspace) and the noise portion (variation subspace). We observe that decomposing the variation subspace against the model subspace gives rise to the so-called parity subspace. Decomposing the model subspace against the variation subspace instead gives rise to what we name the invariant subspace. We extend the use of kernel techniques for the parity subspace. This enables modeling the highly non-linear temporal trajectories describing human behavior, and performing detection and recognition of human interactions. In addition, we introduce supervised low-rank matrix decomposition techniques for learning the invariant subspace for two other tasks. We learn invariant representations for face recognition from grossly corrupted images, and we learn object recognition classifiers that are invariant to the so-called domain bias.

    Extensive experiments using the benchmark datasets publicly available for each of the three tasks show that learning representations based on subspace decompositions invariant to the sources of noise leads to results comparable to or better than the state of the art.
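    The following toy sketch (not the paper's supervised decomposition) illustrates the underlying idea of splitting data into a low-rank "model" part and a residual "variation" part, here with a plain SVD; the rank, matrix shapes, and noise level are made up for the example.

```python
# Toy illustration: split a data matrix into a low-rank "model" component and a
# residual "variation" component using a truncated SVD.
import numpy as np

def subspace_split(D, rank):
    """D: (features, samples) data matrix; returns (model_part, variation_part, model_basis)."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    U_model = U[:, :rank]                   # orthonormal basis of the model subspace
    model = U_model @ U_model.T @ D         # projection of the data onto that subspace
    variation = D - model                   # residual spanning the variation directions
    return model, variation, U_model

# Example: 100-dimensional observations = rank-3 signal + small additive noise
rng = np.random.default_rng(0)
signal = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 500))
D = signal + 0.1 * rng.normal(size=(100, 500))
model, variation, basis = subspace_split(D, rank=3)
print(np.linalg.norm(variation) / np.linalg.norm(D))   # small residual fraction
```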

    The Diversity of Diffuse Lyα Nebulae around Star-Forming Galaxies at High Redshift

    We report the detection of diffuse Lyα emission, or Lyα halos (LAHs), around star-forming galaxies at z ≈ 3.78 and 2.66 in the NOAO Deep Wide-Field Survey Boötes field. Our samples consist of a total of ∼1400 galaxies, within two separate regions containing spectroscopically confirmed galaxy overdensities. They provide a unique opportunity to investigate how the LAH characteristics vary with host galaxy large-scale environment and physical properties. We stack Lyα images of different samples defined by these properties and measure their median LAH sizes by decomposing the stacked Lyα radial profile into a compact galaxy-like and an extended halo-like component. We find that the exponential scale-length of LAHs depends on UV continuum and Lyα luminosities, but not on Lyα equivalent widths or galaxy overdensity parameters. The full samples, which are dominated by low UV-continuum luminosity Lyα emitters (M_UV ≳ −21), exhibit LAH sizes of 5–6 kpc. However, the most UV- or Lyα-luminous galaxies have more extended halos with scale-lengths of 7–9 kpc. The stacked Lyα radial profiles decline more steeply than recent theoretical predictions that include the contributions from gravitational cooling of infalling gas and from low-level star formation in satellites. On the other hand, the LAH extent matches what one would expect for photons produced in the galaxy and then resonantly scattered by gas in an outflowing envelope. The observed trends of LAH sizes with host galaxy properties suggest that the physical conditions of the circumgalactic medium (covering fraction, HI column density, and outflow velocity) change with halo mass and/or star-formation rates.
    Comment: published in ApJ, minor proof corrections applied
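    A minimal sketch of the profile decomposition step described above, assuming the stacked radial surface-brightness profile is available as a 1-D array: fit a compact plus an extended exponential component and read off the two scale-lengths. The synthetic data and parameter values below are placeholders, not measurements from the paper.

```python
# Illustrative two-component exponential fit to a stacked radial profile.
import numpy as np
from scipy.optimize import curve_fit

def two_exponentials(r, c_gal, r_gal, c_halo, r_halo):
    """Surface brightness = compact galaxy-like term + extended halo-like term."""
    return c_gal * np.exp(-r / r_gal) + c_halo * np.exp(-r / r_halo)

r = np.linspace(0.5, 40.0, 60)                                  # radius in kpc
truth = two_exponentials(r, 10.0, 1.5, 0.5, 8.0)
profile = truth * (1 + 0.05 * np.random.default_rng(1).normal(size=r.size))

popt, _ = curve_fit(two_exponentials, r, profile,
                    p0=[5.0, 1.0, 0.1, 5.0], bounds=(0, np.inf))
print("scale-lengths: %.1f kpc (compact), %.1f kpc (halo)" % (popt[1], popt[3]))
```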

    Fully-Automatic Multiresolution Idealization for Filtered Ion Channel Recordings: Flickering Event Detection

    We propose a new model-free segmentation method, JULES, which combines recent statistical multiresolution techniques with local deconvolution for the idealization of ion channel recordings. The multiresolution criterion takes into account scales down to the sampling rate, enabling the detection of flickering events, i.e., events on small temporal scales, even below the filter frequency. For such small scales the deconvolution step allows for a precise determination of dwell times and, in particular, of amplitude levels, a task which is not possible with common thresholding methods. This is confirmed theoretically and in a comprehensive simulation study. In addition, JULES can be applied as a preprocessing method for a refined hidden Markov analysis. Our new methodology allows us to show that gramicidin A flickering events have the same amplitude as the slow gating events. JULES is available as the R function jules in the package clampSeg.
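    The real method is the R function jules in the clampSeg package. The toy Python sketch below is not JULES; it only illustrates the generic idea of idealizing a noisy trace into piecewise-constant segments by recursive splitting, with the minimum segment length and split threshold chosen arbitrarily.

```python
# Toy idealization: recursively split a trace wherever the standardized difference
# of left/right means is large, then report each segment's level and extent.
import numpy as np

def idealize(trace, start=0, min_len=5, z_thresh=6.0, segments=None):
    if segments is None:
        segments = []
    n = len(trace)
    if n < 2 * min_len:
        segments.append((start, start + n, float(np.mean(trace))))
        return segments
    best_k, best_z = None, 0.0
    for k in range(min_len, n - min_len):
        left, right = trace[:k], trace[k:]
        pooled = np.sqrt(np.var(left) / len(left) + np.var(right) / len(right)) + 1e-12
        z = abs(np.mean(left) - np.mean(right)) / pooled
        if z > best_z:
            best_k, best_z = k, z
    if best_z < z_thresh:
        segments.append((start, start + n, float(np.mean(trace))))
        return segments
    idealize(trace[:best_k], start, min_len, z_thresh, segments)
    idealize(trace[best_k:], start + best_k, min_len, z_thresh, segments)
    return segments
```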

    Detecting and tracking people in real-time

    The problem of detecting and tracking people in images and video has been the subject of a great deal of research, but it remains a challenging task. Being able to detect and track people would have an impact on a number of fields, such as driverless vehicles, automated surveillance, and human-computer interaction. The difficulties that must be overcome include coping with variations in appearance between different people, changes in lighting, and the need to detect people across multiple scales. As well as having high accuracy, it is desirable for a technique to evaluate an image with low latency between receiving the image and producing a result. This thesis explores methods for detecting and tracking people in images and video. Techniques are implemented on a desktop computer, with an emphasis on low latency.

    The problem of detection is examined first. The well-established integral channel features detector is introduced and reimplemented, and various novelties are introduced in the features used by the detector. Results are given to quantify the accuracy and speed of the developed detectors on the INRIA person dataset. The method is further extended by examining the prospect of using multiple classifiers in conjunction. It is shown that using a classifier together with a version of the same classifier reflected about the vertical axis can improve performance. A novel method for clustering images of people to find modes of appearance is also presented. This involves using boosting classifiers to map a set of images to vectors, to which K-means clustering is applied. Boosting classifiers are then trained on these clustered datasets to create sets of multiple classifiers, and it is demonstrated that these sets of classifiers can be evaluated on images with only a small increase in running time over single classifiers.

    The problem of single-target tracking is addressed using the mean shift algorithm. Mean shift tracking works by finding the best colour match for a target from frame to frame. A novel form of mean shift tracking through scale is developed, and the problem of multiple-target tracking is addressed by using boosting classifiers in conjunction with Kalman filters. Tests are carried out on the CAVIAR dataset, which gives representative examples of surveillance scenarios, to show the performance of the proposed approaches.
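    As a concrete reference point for the colour-based mean shift tracking described above, the sketch below follows the standard OpenCV mean-shift recipe (hue histogram, back-projection, cv2.meanShift); the video path and initial window are placeholders, and this is not the thesis implementation.

```python
# Minimal colour-based mean shift tracker using OpenCV.
import cv2

cap = cv2.VideoCapture("surveillance.avi")          # placeholder video path
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 120                      # placeholder initial target window
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    ret, track_window = cv2.meanShift(back_proj, track_window, term_crit)
    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:                # Esc to quit
        break
```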

    Multiloop functional renormalization group approach to quantum spin systems

    Renormalization group methods are well-established tools for the (numerical) investigation of the low-energy properties of correlated quantum many-body systems, allowing one to capture their scale-dependent nature. The functional renormalization group (FRG) continuously evolves a microscopic model action into an effective low-energy action as a function of decreasing energy scales via an exact functional flow equation, which is then approximated by some truncation scheme to facilitate computation. Here, we report on our transcription of a recently developed multiloop truncation approach for electronic FRG calculations to the pseudo-fermion functional renormalization group (pf-FRG) for interacting quantum spin systems. We discuss in detail the conceptual intricacies of the flow equations generated by the multiloop truncation, as well as essential refinements to the integration scheme for the resulting integro-differential equations. To benchmark our approach, we analyze antiferromagnetic Heisenberg models on the pyrochlore, simple cubic, and face-centered cubic lattices, discussing the convergence of physical observables for higher-loop calculations and comparing with existing results where available. Combined, these methodological refinements systematically improve the pf-FRG approach, making it one of the numerical tools of choice when exploring frustrated quantum magnetism in higher spatial dimensions.
    Comment: 22 pages, 9 figures
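    The central object here is a flow equation integrated from a microscopic (UV) cutoff down to low energies. The toy below is not the pf-FRG; it only shows the generic numerical structure by integrating a single one-loop coupling flow, dg/dΛ = -g²/Λ, with an analytic cross-check. All numbers are illustrative.

```python
# Toy RG flow: evolve one coupling g as the cutoff Lambda is lowered.
import numpy as np
from scipy.integrate import solve_ivp

def flow(Lambda, g):
    # one-loop-style flow equation dg/dLambda = -g^2 / Lambda
    return [-g[0] ** 2 / Lambda]

Lambda_uv, Lambda_ir, g_uv = 100.0, 0.1, 0.1
sol = solve_ivp(flow, (Lambda_uv, Lambda_ir), [g_uv], dense_output=True, rtol=1e-8)

# Analytic solution for comparison: g(Lambda) = g_uv / (1 - g_uv * ln(Lambda_uv / Lambda))
Lambda = 1.0
print(sol.sol(Lambda)[0], g_uv / (1 - g_uv * np.log(Lambda_uv / Lambda)))
```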

    A survey of exemplar-based texture synthesis

    Exemplar-based texture synthesis is the process of generating, from an input sample, new texture images of arbitrary size that are perceptually equivalent to the sample. The two main approaches are statistics-based methods and patch re-arrangement methods. In the first class, a texture is characterized by a statistical signature; a random sampling conditioned on this signature then produces genuinely different texture images. The second class boils down to a clever "copy-paste" procedure, which stitches together large regions of the sample. Hybrid methods try to combine ideas from both approaches to avoid their respective shortcomings. The recent approaches using convolutional neural networks fit into this classification, some being statistical and others performing patch re-arrangement in feature space. They produce impressive syntheses on various kinds of textures. Nevertheless, we found that most real textures are organized at multiple scales, with global structures revealed at coarse scales and highly varying details at finer ones. Thus, when confronted with large natural images of textures, the results of state-of-the-art methods degrade rapidly, and the problem of modeling them remains wide open.
    Comment: v2: Added comments and typo fixes. New section added to describe FRAME. New method presented: CNNMR
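    To make the "copy-paste" idea concrete, the bare-bones sketch below tiles an output image with patches drawn at random from the exemplar; real patch re-arrangement methods additionally select each patch to match its already-synthesized neighbourhood and blend the seams. All names and sizes are illustrative.

```python
# Naive patch "copy-paste" synthesis: tile the output with random exemplar patches.
import numpy as np

def naive_patch_synthesis(exemplar, out_h, out_w, patch=32, seed=0):
    """exemplar: (H, W) or (H, W, C) array with H, W >= patch."""
    rng = np.random.default_rng(seed)
    h, w = exemplar.shape[:2]
    out = np.zeros((out_h, out_w) + exemplar.shape[2:], dtype=exemplar.dtype)
    for y in range(0, out_h, patch):
        for x in range(0, out_w, patch):
            py = rng.integers(0, h - patch + 1)
            px = rng.integers(0, w - patch + 1)
            tile = exemplar[py:py + patch, px:px + patch]
            th = min(patch, out_h - y)          # crop the tile at the output border
            tw = min(patch, out_w - x)
            out[y:y + th, x:x + tw] = tile[:th, :tw]
    return out

# Usage: texture = naive_patch_synthesis(sample, 512, 512), with sample an (H, W, 3) uint8 array.
```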