35 research outputs found

    Classical methods of left ventricular contour extraction and preprocessing of echocardiographic images: a review

    A main objective of digital processing of echocardiographic images is to improve the signal-to-noise ratio of the video images acquired from the ultrasound equipment, along with contour extraction to obtain cardiac parameters. We review and compare the methods proposed in the current literature for both noise removal and contour extraction in echocardiographic images. It is shown that classical methods do not render good contours, and that a different approach to contour extraction algorithms is needed.
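As one representative of the classical preprocessing steps such a review surveys (a hedged sketch, not any specific method from the paper), median filtering suppresses impulse-like noise while preserving edges better than linear smoothing:

```python
import numpy as np
from scipy import ndimage as ndi

# Toy "frame": a flat background with a single noise spike.
frame = np.zeros((9, 9))
frame[4, 4] = 1.0

# A 3x3 median filter replaces each pixel by the median of its
# neighbourhood, so an isolated spike is removed entirely.
denoised = ndi.median_filter(frame, size=3)
print(denoised[4, 4])   # -> 0.0
```

Unlike a mean filter, the median does not smear the spike into its neighbours, which is why median-type filters are a common baseline for speckle-corrupted ultrasound frames.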

    Red blood cell segmentation and classification method using MATLAB

    Red blood cells (RBCs) are the most important kind of blood cell, and their examination is a vital step in the early detection of diseases such as malaria and anemia, before suitable follow-up treatment can proceed. Some human diseases can be revealed by counting red blood cells: the RBC count provides vital information that helps in diagnosing many of a patient’s illnesses. The conventional method of RBC diagnosis from blood smears uses a light microscope operated by a pathologist; this is time-consuming and laborious. In this project, an automated RBC counting method is proposed to reduce both the time required and the risk of wrongly identified RBCs. First, the image undergoes pre-processing, which involves global thresholding. RBC counting then proceeds with two different algorithms: watershed segmentation based on the distance transform, and artificial neural network (ANN) classification with a fitting application based on regression. Before applying the ANN classification, feature-extraction data are obtained using moment invariants. Weaknesses and constraints remain due to the images themselves, such as color similarity, weak edge boundaries, overlapping cells, and image quality; further study is needed to handle these issues and to produce a robust analysis approach for medical diagnosis. This project builds a better solution and helps improve current methods so that they are more capable, robust, and effective for any blood-cell sample analyzed. Finally, a comparison was conducted on 20 blood-sample images taken from the medical electronics laboratory at Universiti Tun Hussein Onn Malaysia (UTHM). The proposed method was tested on these images, and the effectiveness and reliability of each counting method were demonstrated.
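The distance-transform watershed step can be sketched as follows, on synthetic data (this is a minimal illustration of the technique, not the paper's MATLAB pipeline, which also includes global thresholding and the ANN classifier):

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic binary "smear": two overlapping disks standing in for
# touching cells (toy data, not a real micrograph).
yy, xx = np.mgrid[0:80, 0:80]
mask = ((yy - 40) ** 2 + (xx - 28) ** 2 < 225) | \
       ((yy - 40) ** 2 + (xx - 52) ** 2 < 225)

# Distance transform: each cell interior becomes a peak.
dist = ndi.distance_transform_edt(mask)

# Markers from the local maxima of the distance map.
peaks = (dist == ndi.maximum_filter(dist, size=11)) & mask
markers, n_cells = ndi.label(peaks)
print(n_cells)   # counts the cells despite the overlap

# Watershed on the inverted distance map separates the touching cells.
markers[~mask] = n_cells + 1                       # background seed
seg = ndi.watershed_ift((dist.max() - dist).astype(np.uint8), markers)
seg[seg == n_cells + 1] = 0                        # drop the background label
```

Counting the labelled markers gives the cell count even where cells touch, which is exactly the failure mode of naive connected-component counting.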

    A parallel algorithm for determining two-dimensional object positions using incomplete information about their boundaries

    Extraction of two-dimensional object locations using current techniques is a computationally intensive process. In this paper, a parallel algorithm is presented that can specify the location of objects from edge streaks produced by an edge operator. Best-first searches are carried out in a number of non-interacting, localized edge-streak spaces. The outcome of each search is a hypothesis. Each edge streak votes for a single hypothesis; it may also take part in the formation of other hypotheses. A poll of the votes determines the stronger hypotheses. The algorithm can be used as a front end to a visual pattern recognition system in which features are extracted from the hypothesized object boundary or from the area it localizes. Experimental results from a biomedical domain are presented.
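The voting stage can be sketched as a toy poll (names and data are hypothetical): each edge streak's localized best-first search yields one hypothesis, every streak casts a single vote, and the poll keeps the hypotheses supported by enough streaks:

```python
from collections import Counter

def poll(streak_votes, min_votes=2):
    # Tally one vote per streak; return hypotheses ordered by support,
    # keeping only those with at least min_votes backers.
    tally = Counter(streak_votes)
    return [h for h, v in tally.most_common() if v >= min_votes]

# Hypotheses here are just candidate object centres (toy data).
votes = [(40, 40), (40, 40), (10, 12), (40, 40), (10, 12), (7, 7)]
print(poll(votes))   # -> [(40, 40), (10, 12)]
```

Because each streak space is searched independently, the searches parallelize naturally, and only the cheap tallying step is global.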

    Counting of RBCs and WBCs in noisy normal blood smear microscopic images

    This work focuses on the segmentation and counting of peripheral blood smear particles, which play a vital role in medical diagnosis. Our approach draws on several powerful processing techniques. First, the method used for denoising a blood smear image is based on the bivariate wavelet. Second, image edge preservation uses the Kuwahara filter. Third, a new binarization technique is introduced by merging the Otsu and Niblack methods. We have also proposed an efficient step-by-step procedure to determine solid binary objects by merging modified binary and edged images with modified Chan-Vese active contours. The separation of White Blood Cells (WBCs) from Red Blood Cells (RBCs) into two sub-images, based on an estimate of the size of the RBC (the blood’s dominant particle), is a critical step; using granulometry, we obtain an approximation of the RBC size. The proposed separation algorithm is an iterative mechanism based on morphological theory, saturation amount, and RBC size. A primary aim of this work is to introduce an accurate mechanism for counting blood smear particles, accomplished with the Immersion Watershed algorithm, which counts red and white blood cells separately. To evaluate the capability of the proposed framework, experiments were conducted on normal blood smear images. The framework was compared to other published approaches and found to have lower complexity and better performance in its constituent steps; hence, it has a better overall performance.
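The granulometry step can be sketched as a pattern spectrum: open the binary image with disks of increasing radius, record the surviving foreground area, and read off the dominant particle radius at the sharpest area drop (synthetic data and disk-shaped structuring elements assumed; this is not the authors' implementation):

```python
import numpy as np
from scipy import ndimage as ndi

def disk(r):
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    return (yy ** 2 + xx ** 2) <= r ** 2

def dominant_radius(mask, max_radius=10):
    # Pattern spectrum: area surviving an opening by a disk of radius r.
    areas = [int(ndi.binary_opening(mask, structure=disk(r)).sum())
             for r in range(max_radius + 1)]
    drops = -np.diff(areas)          # area lost going from radius r to r+1
    return int(np.argmax(drops))     # particles of this radius vanish next

# Three radius-6 "cells" and one radius-3 distractor.
mask = np.zeros((100, 100), dtype=bool)
yy, xx = np.mgrid[0:100, 0:100]
for cy, cx, r in [(20, 20, 6), (20, 60, 6), (60, 40, 6), (80, 80, 3)]:
    mask |= ((yy - cy) ** 2 + (xx - cx) ** 2) <= r ** 2

print(dominant_radius(mask))   # -> 6
```

The estimated dominant radius then drives the WBC/RBC separation threshold, since white cells are markedly larger than the dominant red-cell size.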

    Development of an instrument for evaluation of interferograms

    A system for the evaluation of interference patterns was developed and evaluated. A picture-analysis system, based on a computer with a television digitizer, was used for digitizing and processing interferograms.

    Using a disk operator to convert raster images of engineering drawings to vector images

    Computer Science

    Image morphological processing

    Mathematical morphology, with applications in image processing and analysis, has become increasingly important in today's technology. Mathematical morphological operations, which are based on set theory, can extract object features using suitably shaped structuring elements. Morphological filters are combinations of morphological operations that transform an image into a quantitative description of its geometrical structure in terms of structuring elements. Important applications of morphological operations include shape description, shape recognition, nonlinear filtering, industrial parts inspection, and medical image processing. In this dissertation, basic morphological operations, their properties, and fuzzy morphology are reviewed. Existing techniques for corner and edge detection are presented. A new approach to corner detection using regulated mathematical morphology is presented and shown to be more efficient on binary images than the existing morphology-based asymmetric closing for corner detection. A new class of morphological operations called sweep mathematical morphological operations is developed. The theoretical framework for the representation, computation, and analysis of sweep morphology is presented. The basic sweep morphological operations, sweep dilation and sweep erosion, are defined and their properties are studied. It is shown that considering only the boundaries and performing operations on the boundaries can substantially reduce the computation. Various applications of this new class of operations are discussed, including the blending of swept surfaces with deformations, image enhancement, edge linking, and shortest-path planning for rotating objects. Sweep mathematical morphology is an efficient tool for geometric modeling and representation: sweep dilation/erosion provides a natural representation of sweep motion in manufacturing processes.
A set of grammatical rules that govern the generation of objects belonging to the same group is defined, and Earley's parser serves in the screening process to determine whether a pattern is part of the language. Finally, a summary and directions for future research are provided.
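The two set-theoretic operations the dissertation builds on can be illustrated with a toy implementation over sets of pixel coordinates (illustrative only; it assumes the structuring element contains its origin, so erosion candidates can be restricted to the image set):

```python
def dilate(A, B):
    # Minkowski sum: A (+) B = {a + b : a in A, b in B}
    return {(ay + by, ax + bx) for (ay, ax) in A for (by, bx) in B}

def erode(A, B):
    # A (-) B = {p : p + b in A for every b in B}
    return {p for p in A
            if all((p[0] + by, p[1] + bx) in A for (by, bx) in B)}

square = {(y, x) for y in range(3) for x in range(3)}       # 3x3 block
cross = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}          # structuring element

print(erode(square, cross))                  # -> {(1, 1)}
print(dilate(erode(square, cross), cross))   # opening: only the cross survives
```

Erosion followed by dilation with the same structuring element is the opening used throughout the morphological filters described above: features in which the structuring element does not fit are removed.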

    An ultra-fast user-steered image segmentation paradigm: live wire on the fly


    An opportunist and cooperative approach for low level vision

    This paper presents a new approach to the design of cooperative segmentation systems. The idea is to make cooperation an integral part of the decision process. Each segmentation method is then given the ability to make adaptive decisions, to postpone difficult decisions, and to solve pending problems by requesting and accumulating information. For this purpose, each method is implemented as an incremental process that can interrupt itself at any time to request, opportunistically, the cooperation of other processes: "child" processes are then created at certain locations in the image. A child process is created each time a "complex" situation is encountered, to gather more information which, collected at the "parent" level, is used to take better, more reliable decisions. The processes are controlled by a scheduler, as in a multi-task operating system; the scheduler initiates processes at different locations in the image. Depth-first or breadth-first control strategies may be implemented, in which cooperation requests are treated immediately or deferred; breadth-first control means the simplest problems are analysed before the more complex ones. The potential of the approach is illustrated on a variety of examples.
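The scheduler idea can be sketched as a toy (all names hypothetical): a "process" is a callable returning its result and any child processes it spawns; a FIFO queue gives breadth-first control (simple, shallow problems first) while a LIFO stack gives depth-first control:

```python
from collections import deque

def run(root, breadth_first=True):
    pending, results = deque([root]), []
    while pending:
        task = pending.popleft() if breadth_first else pending.pop()
        result, children = task()
        results.append(result)
        pending.extend(children)      # deferred cooperation requests
    return results

# A toy process tree: the root spawns two children, one of which
# spawns a grandchild where its decision was too ambiguous.
grandchild = lambda: ("grandchild", [])
child_a = lambda: ("child_a", [grandchild])
child_b = lambda: ("child_b", [])
root = lambda: ("root", [child_a, child_b])

print(run(root, breadth_first=True))    # ['root', 'child_a', 'child_b', 'grandchild']
print(run(root, breadth_first=False))   # ['root', 'child_b', 'child_a', 'grandchild']
```

Switching between the two traversal orders only changes which end of the pending queue is served, which mirrors the immediate-versus-deferred handling of cooperation requests described above.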