81 research outputs found

    Estimation of Muscle Fiber Orientation in Ultrasound Images Using Revoting Hough Transform (RVHT)

    2008-2009 · Academic research: refereed · Publication in refereed journal · Accepted Manuscript / Published

    A review of Hough transform and line segment detection approaches

    In a wide range of image processing and computer vision problems, line segment detection is one of the most critical challenges. For more than three decades, researchers have contributed to building more robust and accurate algorithms with faster performance. In this paper we review the main approaches, in particular the Hough transform (HT) and its extensions, which are among the best-known techniques for detecting straight lines in a digital image. This paper is based on extensive practical research and is organised into two main parts. In the first part, the HT and its major research directions and limitations are discussed. In the second part, state-of-the-art line segment detection techniques are reviewed and categorised into three main groups with fundamentally distinctive characteristics. Their relative advantages and disadvantages are compared and summarised in a table.
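    The classical HT this review surveys votes each edge pixel into a (rho, theta) accumulator; peaks in the accumulator correspond to lines. A minimal sketch (illustrative, not from the paper; function and variable names are assumptions):

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Accumulate (rho, theta) votes for a binary edge image."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))            # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)                     # edge-pixel coordinates
    for theta_idx, theta in enumerate(thetas):
        # rho = x*cos(theta) + y*sin(theta), offset so indices are non-negative
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(acc, (rhos, np.full_like(rhos, theta_idx)), 1)
    return acc, thetas, diag

# A horizontal row of edge pixels should vote heavily at theta near 90 degrees.
img = np.zeros((50, 50), dtype=np.uint8)
img[25, 5:45] = 1
acc, thetas, diag = hough_lines(img)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
print(np.degrees(thetas[theta_idx]), rho_idx - diag)   # peak near theta = 90, rho = 25
```

    All 40 collinear pixels land in one accumulator bin, which is exactly the peak-detection property the HT's extensions (randomized, probabilistic, etc.) refine.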

    A nonlinear detection algorithm for periodic signals in gravitational wave detectors

    We present an algorithm for the detection of periodic sources of gravitational waves with interferometric detectors that is based on a special symmetry of the problem: the contributions to the phase modulation of the signal from the earth rotation are exactly equal and opposite at any two instants of time separated by half a sidereal day; the corresponding statement holds for the contributions from the earth orbital motion for half a sidereal year, assuming a circular orbit. The addition of phases through multiplications of the shifted time series gives a demodulated signal; specific attention is given to the reduction of noise mixing resulting from these multiplications. We discuss the statistics of this algorithm for all-sky searches (which include a parameterization of the source spin-down), in particular its optimal sensitivity as a function of required computational power. Two specific examples of all-sky searches (broad-band and narrow-band) are explored numerically, and their performances are compared with the stack-slide technique (P. R. Brady, T. Creighton, Phys. Rev. D, 61, 082001). Comment: 9 pages, 3 figures, to appear in Phys. Rev.
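    The core trick can be seen in a toy example (illustrative only, not the authors' pipeline): if the phase modulation satisfies phi(t + T/2) = -phi(t), then multiplying the signal by a copy shifted by T/2 adds the two phases, so the modulation cancels in the sum-frequency term and a clean tone appears at twice the carrier frequency.

```python
import numpy as np

fs, f0, T = 1000.0, 50.0, 2.0            # sample rate, carrier, modulation period (toy values)
t = np.arange(0, 4 * T, 1 / fs)
phi = 3.0 * np.sin(2 * np.pi * t / T)    # antisymmetric: phi(t + T/2) = -phi(t)
s = np.cos(2 * np.pi * f0 * t + phi)     # phase-modulated signal

shift = int(fs * T / 2)                  # samples in half a modulation period
prod = s[:-shift] * s[shift:]            # multiply time-shifted copies: phases add

# The sum-frequency term of the product is an unmodulated tone at 2*f0.
spec = np.abs(np.fft.rfft(prod))
freqs = np.fft.rfftfreq(prod.size, 1 / fs)
print(freqs[spec.argmax()])              # spectral peak near 2*f0 = 100 Hz
```

    The difference-frequency term still carries twice the modulation (this is the "noise mixing" the abstract says needs care), but it is spread over many bins while the demodulated tone stays concentrated.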

    Text Segmentation in Web Images Using Colour Perception and Topological Features

    The research presented in this thesis addresses the problem of Text Segmentation in Web images. Text is routinely created in image form (headers, banners etc.) on Web pages, as an attempt to overcome the stylistic limitations of HTML. This text, however, has a potentially high semantic value in terms of indexing and searching for the corresponding Web pages. As current search engine technology does not allow for text extraction and recognition in images, the text in image form is ignored. Moreover, it is desirable to obtain a uniform representation of all visible text of a Web page (for applications such as voice browsing or automated content analysis). This thesis presents two methods for text segmentation in Web images using colour perception and topological features. The nature of Web images and the problems they pose for text segmentation are described, and a study is performed to assess the magnitude of the problem and establish the need for automated text segmentation methods. Two segmentation methods are subsequently presented: the Split-and-Merge segmentation method and the Fuzzy segmentation method. Although approached in a distinctly different way in each method, the safe assumption that a human being should be able to read the text in any given Web image is the foundation of both methods’ reasoning. This anthropocentric character of the methods and the use of topological features of connected components comprise the underlying working principles of the methods. An approach for classifying the connected components resulting from the segmentation methods as either characters or parts of the background is also presented.
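    The final classification step operates on connected components of the segmented image. A minimal sketch of that substrate (hypothetical code; the thesis's actual features are based on colour perception and topology, not just the pixel count used here):

```python
from collections import deque
import numpy as np

def connected_components(mask):
    """Label 4-connected components of a 2-D boolean mask via BFS."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                current += 1                       # start a new component
                labels[sy, sx] = current
                q = deque([(sy, sx)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels, current

mask = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 1],
                 [0, 0, 0, 1],
                 [0, 1, 0, 1]], dtype=bool)
labels, n = connected_components(mask)
print(n)   # 3 four-connected components
```

    Each labelled component would then be scored (by colour, shape, and topology in the thesis) as either a character or background.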

    Experimental comparison of noise and resolution for 2k and 4k storage phosphor radiography systems

    Peer reviewed. Full text: http://deepblue.lib.umich.edu/bitstream/2027.42/134792/1/mp8656.pd

    Feature-Based Textures

    This paper introduces feature-based textures, a new image representation that combines features and texture samples for high-quality texture mapping. Features identify boundaries within a texture where samples change discontinuously. They can be extracted from vector graphics representations, or explicitly added to raster images to improve sharpness. Texture lookups are then interpolated from samples while respecting these boundaries. We present results from a software implementation of this technique demonstrating quality, efficiency, and low memory overhead.
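    A toy 1-D analogue of the lookup rule (hypothetical, not the paper's code): interpolate between texture samples as usual, but when the interpolation span straddles a feature boundary, use only the sample on the query's side, so the discontinuity stays sharp instead of blurring into grey.

```python
import numpy as np

samples = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])  # step edge between samples 2 and 3
boundary = 2.5                                       # feature: location of the discontinuity

def lookup(u):
    """Boundary-respecting linear texture lookup at coordinate u."""
    i = int(np.floor(u))
    frac = u - i
    # If the span [i, i+1] crosses the feature boundary, do not blend across it.
    if i < boundary < i + 1:
        return samples[i] if u < boundary else samples[i + 1]
    return (1 - frac) * samples[i] + frac * samples[min(i + 1, len(samples) - 1)]

print(lookup(2.4), lookup(2.6))   # 0.0 1.0 -- sharp edge, no grey blend
```

    Ordinary bilinear filtering would return 0.4 and 0.6 at those coordinates; clamping to the boundary is what keeps the feature crisp under magnification.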

    Development and Validation of an Algorithm for the Digitization of ECG Paper Images

    The electrocardiogram (ECG) signal describes the heart’s electrical activity, allowing the detection of several health conditions, including cardiac system abnormalities and dysfunctions. Nowadays, most patient medical records are still paper-based, especially those made in past decades. The importance of collecting digitized ECGs is twofold: firstly, all medical applications can be easily implemented with an engineering approach if the ECGs are treated as signals; secondly, paper ECGs can deteriorate over time, so a correct evaluation of the patient’s clinical evolution is not always guaranteed. The goal of this paper is the realization of an automatic conversion algorithm from paper-based ECGs (images) to digital ECG signals. The algorithm involves a digitization process tested on an image set of 16 subjects, including subjects with pathologies. The quantitative analysis of the digitization method is carried out by evaluating the repeatability and reproducibility of the algorithm. The digitization accuracy is evaluated both on the entire signal and on six ECG time parameters (R-R peak distance, QRS complex duration, QT interval, PQ interval, P-wave duration, and heart rate). Results demonstrate the algorithm’s efficiency, with an average Pearson correlation coefficient of 0.94, and measurement errors of the ECG time parameters are always less than 1 mm. Due to the promising experimental results, the algorithm could be embedded into a graphical interface, becoming a measurement and collection tool for cardiologists.
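    The heart of any such digitization pipeline is recovering one amplitude sample per image column from the binarised trace. A rough sketch under simple assumptions (the paper's algorithm and its grid calibration are more involved; the scale constants below are illustrative):

```python
import numpy as np

def trace_to_signal(binary, mm_per_px=0.2, mv_per_mm=0.1):
    """binary: 2-D bool array, True where ECG trace ink is.
    Returns one amplitude (in mV) per image column."""
    h, w = binary.shape
    signal = np.full(w, np.nan)
    for x in range(w):
        ys = np.nonzero(binary[:, x])[0]
        if ys.size:
            # mean row of the ink in this column, flipped so up = positive
            signal[x] = (h - 1 - ys.mean()) * mm_per_px * mv_per_mm
    return signal

# Synthetic strip: a flat baseline with one extra "R peak" pixel in column 10.
img = np.zeros((40, 20), dtype=bool)
img[30, :] = True        # baseline ink at row 30
img[20, 10] = True       # peak ink in column 10
sig = trace_to_signal(img)
print(sig.argmax())      # 10 -- the peak column
```

    Real scans would first need skew correction, gridline removal, and calibration against the printed 1 mV reference pulse before a column-wise readout like this is meaningful.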
