1,127 research outputs found

    Research on robust salient object extraction in image

    Classification: new ; MEXT report number: Kou no. 2641 ; Degree type: Doctor of Engineering ; Date conferred: 2008/3/15 ; Waseda University degree record number: Shin 480

    Processing of CW Doppler images to extract velocity profile

    The present work aims to find a good way to automatically trace the velocity profile of vessels shown in continuous wave (CW) Doppler spectrograms, replacing traditional manual tracing, which is easy to perform but has many drawbacks. Different methods of pre-processing this kind of image to prepare for the edge detection step are presented. Various techniques, taken from the literature or newly created, are tested on a set of 18 CW Doppler spectrograms. The main purpose is to determine the best strategy to put into practice, keeping in mind that the goal is to obtain the maximum velocity envelope, or the mean velocity in the vessel. Two main strategies are tested: either a mild pre-processing before edge detection, followed by an edge-linking step to fill the gaps in the extracted contour, or a stronger pre-processing that guarantees a continuous contour without further operations. A comparison of the results shows that the two approaches are somewhat complementary in their pros and cons. In future work, the strengths of the two strategies should be combined into a hybrid method that would guarantee a good compromise between continuity of edges and mathematically based detection of boundaries
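The envelope-extraction idea above can be illustrated with a toy implementation: threshold each time column of the spectrogram, take the highest super-threshold frequency bin as the envelope sample, and fill gaps by interpolation (a crude stand-in for the edge-linking step described in the abstract). The function names and the per-column thresholding rule are illustrative assumptions, not the thesis's actual algorithms.

```python
import numpy as np

def max_velocity_envelope(spectrogram, threshold):
    """For each time column, return the highest frequency-bin index whose
    intensity exceeds `threshold`; columns with no such bin yield -1 (a gap)."""
    n_freq, n_time = spectrogram.shape
    envelope = np.full(n_time, -1, dtype=int)
    for t in range(n_time):
        above = np.flatnonzero(spectrogram[:, t] > threshold)
        if above.size:
            envelope[t] = above.max()
    return envelope

def link_gaps(envelope):
    """Edge-linking analogue: fill gap columns (-1) by linear interpolation
    between the nearest valid neighbours."""
    env = envelope.astype(float)
    valid = env >= 0
    if not valid.any():
        return env
    idx = np.arange(env.size)
    env[~valid] = np.interp(idx[~valid], idx[valid], env[valid])
    return env
```

On a toy 5×4 spectrogram with one bright bin per column and one empty column, the first function yields the bin indices with a -1 gap, and the second interpolates the gap from its neighbours.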

    Computer-aided diagnosis of complications of total hip replacement X-ray images

    Hip replacement surgery has experienced a dramatic evolution in recent years, supported by the latest developments in many areas of technology and surgical procedures. Unfortunately, the complications that follow hip replacement surgery remain the most challenging dilemma faced by both patients and medical experts. This thesis presents a novel approach to segmenting the prosthesis in a total hip replacement (THR) X-ray image using an Active Contour Model (ACM) that is initialized via an automatically detected seed point within the enarthrosis region of the prosthesis. The circular area is detected via a Fast, Randomized Circle Detection Algorithm. Experimental results are provided to compare the performance of the proposed ACM-based approach with popular thresholding-based approaches. Further, an approach to automatically detect the Obturator Foramen using an ACM is also presented. Based on an analysis of how medical experts detect loosening and subsidence of a prosthesis and the presence of infections around the prosthesis area, this thesis presents novel computational analysis concepts to identify the key feature points of the prosthesis that are required to detect all three types of complications. Initially, key points along the prosthesis boundary are determined by measuring the curvature of the prosthesis surface. By traversing the edge pixels, starting from one end of the boundary of a detected prosthesis, the curvature values are determined and used to locate key points of the prosthesis surface and their relative positioning. After the key points are detected, pixel value gradients across the boundary of the prosthesis are computed along the boundary to determine the presence of subsidence, loosening and infections.
Experimental results and analysis are presented to show that the presence of subsidence is determined by identifying dark pixels around the convex bend closest to the stem area of the prosthesis and away from it. The presence of loosening is determined by the additional presence of dark regions just outside the two straight-line edges of the stem area of the prosthesis. The presence of infection is indicated by dark areas around the tip of the stem of the prosthesis. All three complications are thus determined by a single process in which only the detailed analysis differs. The experimental results presented show the effectiveness of all proposed approaches, which are also compared and validated against ground truth recorded manually with expert user input
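The curvature-based key-point idea can be sketched minimally: approximate curvature by the turning angle between successive boundary segments and flag points where that angle is large (candidate corners, such as the convex bend near the stem). The function names and the π/4 threshold are illustrative assumptions, not the thesis's actual procedure.

```python
import numpy as np

def turning_angles(boundary):
    """Discrete curvature proxy: absolute turning angle at each interior
    point of an ordered (x, y) boundary polyline."""
    pts = np.asarray(boundary, dtype=float)
    v1 = pts[1:-1] - pts[:-2]    # incoming segment vectors
    v2 = pts[2:] - pts[1:-1]     # outgoing segment vectors
    a1 = np.arctan2(v1[:, 1], v1[:, 0])
    a2 = np.arctan2(v2[:, 1], v2[:, 0])
    d = a2 - a1
    # wrap to (-pi, pi] so straight runs give ~0
    d = (d + np.pi) % (2 * np.pi) - np.pi
    return np.abs(d)

def key_points(boundary, min_angle=np.pi / 4):
    """Indices of boundary points whose turning angle exceeds `min_angle`
    (candidate corners along the traversed prosthesis boundary)."""
    ang = turning_angles(boundary)
    return [i + 1 for i, a in enumerate(ang) if a > min_angle]
```

On an L-shaped polyline the straight runs give near-zero angles and the single corner is flagged as the only key point.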

    A RE Methodology to achieve Accurate Polygon Models and NURBS Surfaces by Applying Different Data Processing Techniques

    The scope of this work is to present a reverse engineering (RE) methodology to achieve accurate polygon models for 3D printing or additive manufacturing (AM) applications, as well as NURBS (Non-Uniform Rational B-Splines) surfaces for advanced machining processes. The accuracy of the 3D models generated by this RE process depends on the data acquisition system, the scanning conditions and the data processing techniques. To carry out this study, workpieces of different materials and geometries were selected, using X-ray computed tomography (XRCT) and a laser scanner (LS) as data acquisition systems for scanning purposes. This work then focuses on the data processing step in order to assess the accuracy of applying different processing techniques. Special attention is given to the XRCT data processing step; for that reason, the models generated from the LS point-cloud processing step were used as a reference to perform the deviation analysis. Nonetheless, the proposed methodology can be applied to both data inputs: 2D cross-sectional images and point clouds. Finally, the target outputs of this data processing chain were evaluated with respect to their respective reverse engineering applications, highlighting the promising future of the proposed methodology. This research was funded by the Department of Economic Development, Sustainability and Environment of the Basque Government through the KK-2020/00094 (INSPECTA) research project, and by the Spanish Ministry of Science and Innovation through the ALASURF project (PID2019-109220RB-I00)
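Deviation analysis of the kind described reduces, at its simplest, to measuring how far each scanned point lies from a reference model. The following is a minimal brute-force sketch against a reference point set; it assumes the reference is also sampled as points (a real RE workflow would typically measure point-to-mesh distances instead), and the function name is illustrative.

```python
import numpy as np

def deviation_stats(test_points, reference_points):
    """Brute-force deviation analysis: for each scanned point, the distance
    to its nearest reference point; returns (mean, max) deviation."""
    test = np.asarray(test_points, dtype=float)
    ref = np.asarray(reference_points, dtype=float)
    # pairwise distance matrix, shape (n_test, n_ref)
    d = np.linalg.norm(test[:, None, :] - ref[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return nearest.mean(), nearest.max()
```

For large clouds one would replace the dense pairwise matrix with a k-d tree lookup; the brute-force version keeps the idea visible.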

    Localizing Polygonal Objects in Man-Made Environments

    Object detection is a significant challenge in Computer Vision and has received a lot of attention in the field. One such challenge addressed in this thesis is the detection of polygonal objects, which are prevalent in man-made environments. Shape analysis is an important cue for detecting these objects. We propose a contour-based object detection framework to deal with the related challenges, including how to efficiently detect polygonal shapes and how to exploit them for object detection. First, we propose an efficient component tree segmentation framework for stable region extraction and a multi-resolution line segment detection algorithm, which form the basis of our detection framework. Our component tree segmentation algorithm explores the optimal threshold for each branch of the component tree, achieving a significant improvement over image thresholding segmentation and performance comparable to more sophisticated methods at a fraction of the computation time. Our line segment detector overcomes several inherent limitations of the Hough transform and achieves performance comparable to state-of-the-art line segment detectors, while better capturing dominant structures and remaining more stable under low-quality imaging conditions. Second, we propose a global shape analysis measurement for simple polygon detection and use it to develop an approach for real-time landing site detection in unconstrained man-made environments. Since the task of detecting landing sites must be performed in a few seconds or less, existing methods are often limited to simple local intensity and edge variation cues. By contrast, we show how to efficiently take into account the potential sites' global shape, which is a critical cue in man-made scenes. Our method relies on the component tree segmentation algorithm and a new shape regularity measure to look for polygonal regions in video sequences.
In this way we enforce both temporal consistency and geometric regularity, resulting in reliable and consistent detections. Third, we propose a generic contour-grouping-based object detection approach that explores promising cycles in a line fragment graph. Previous contour-based methods are limited to additive scoring functions; in this thesis, we propose an approximate search approach that eliminates this restriction. Given a weighted line fragment graph, we prune its cycle space by removing cycles containing weak nodes or weak edges, until the upper bound of the cycle space is less than the threshold defined by the cyclomatic number. Object contours are then detected as maximally scoring elementary circuits in the pruned cycle space. Furthermore, we propose another, more efficient algorithm, which reconstructs the graph by grouping the strongest edges iteratively until the number of cycles reaches the upper bound. Our approximate search approaches can be used with any cycle scoring function. Moreover, unlike other contour-grouping approaches, ours does not rely on a greedy strategy for finding multiple candidates and is capable of finding multiple candidates that share common line fragments. We demonstrate that our approach significantly outperforms the state-of-the-art
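As a rough illustration of a global shape regularity measure (the thesis's own measure is not reproduced here), the classical isoperimetric quotient 4πA/PÂČ scores compact polygons close to 1 and elongated or ragged ones much lower; a landing-site detector could threshold such a score on candidate regions.

```python
import numpy as np

def shape_regularity(vertices):
    """Isoperimetric quotient 4*pi*A / P^2 in (0, 1]: a simple global
    shape-regularity score (an illustrative stand-in, not the thesis's
    measure). Compact regular polygons score high, elongated ones low."""
    v = np.asarray(vertices, dtype=float)
    x, y = v[:, 0], v[:, 1]
    # shoelace formula for the polygon area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    # perimeter as the sum of consecutive edge lengths
    perim = np.linalg.norm(v - np.roll(v, -1, axis=0), axis=1).sum()
    return 4 * np.pi * area / perim ** 2
```

A unit square scores π/4 ≈ 0.785, while a 10×1 rectangle of the same family scores roughly a third of that, which is the discriminative behaviour such a measure needs.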

    Liver segmentation using 3D CT scans.

    Master of Science in Computer Science. University of KwaZulu-Natal, Durban, 2018. Abstract available in PDF file

    Segmentation of striatal brain structures from high resolution pet images

    Dissertation presented at the Faculty of Science and Technology of the New University of Lisbon in fulfillment of the requirements for the Master's degree in Electrical Engineering and Computers. We propose and evaluate fully automatic segmentation methods for the extraction of striatal brain surfaces (caudate, putamen, ventral striatum and white matter) from high resolution positron emission tomography (PET) images. In the preprocessing steps, both the right and the left striata were segmented from the high resolution PET images. This segmentation was achieved by delineating the brain surface, finding the plane that maximizes the reflective symmetry of the brain (the mid-sagittal plane) and, finally, extracting the right and left striata from the two hemisphere images. The delineation of the brain surface and the extraction of the striata were achieved using the DSM-OS (Surface Minimization – Outer Surface) algorithm. The segmentation of striatal brain surfaces from the striatal images can be separated into two sub-processes: the construction of a graph (named the "voxel affinity matrix") and the clustering of that graph. The voxel affinity matrix was built using a set of image features that accurately informs the clustering method about the relationships between image voxels. The features defining the similarity of pairwise voxels were spatial connectivity, intensity values, and Euclidean distances. The clustering process is treated as a graph partition problem using two methods, a spectral one (multiway normalized cuts) and a non-spectral one (weighted kernel k-means). The normalized cuts algorithm relies on the computation of the graph eigenvalues to partition the graph into connected regions. However, this method fails when applied to high resolution PET images due to the high computational requirements arising from the image size.
The weighted kernel k-means, on the other hand, iteratively classifies a given data set, with the aid of the image features, into a predefined number of clusters. The weighted kernel k-means and the normalized cuts algorithm are mathematically similar. After finding the optimal initial parameters of the weighted kernel k-means for this type of image, no further tuning is necessary for subsequent images. Our results showed that the putamen and ventral striatum were accurately segmented, while the caudate and white matter appeared to be merged into the same cluster. The putamen was divided into anterior and posterior areas. All the experiments resulted in the same type of segmentation, validating the reproducibility of our results
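A voxel affinity matrix of the kind described, combining spatial connectivity, intensity similarity and Euclidean distance, can be sketched as a Gaussian similarity zeroed outside a connectivity radius. The function name, parameter names and kernel widths are illustrative assumptions, not the dissertation's actual construction.

```python
import numpy as np

def voxel_affinity(coords, intensities, sigma_i=1.0, sigma_d=1.0, radius=1.5):
    """Toy voxel affinity matrix: Gaussian similarity in intensity and in
    Euclidean distance, set to zero for voxel pairs beyond the
    spatial-connectivity radius. Diagonal is zeroed by convention."""
    c = np.asarray(coords, dtype=float)
    I = np.asarray(intensities, dtype=float)
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=2)
    di = np.abs(I[:, None] - I[None, :])
    W = np.exp(-(di / sigma_i) ** 2) * np.exp(-(d / sigma_d) ** 2)
    W[d > radius] = 0.0          # enforce spatial connectivity
    np.fill_diagonal(W, 0.0)
    return W
```

The resulting symmetric matrix is exactly the input expected by graph-partitioning clusterers such as normalized cuts or weighted kernel k-means; for real PET volumes it would be stored sparsely, since the radius cut-off leaves most entries zero.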

    Vehicle license plate detection and recognition

    "December 2013.""A Thesis presented to the Faculty of the Graduate School at the University of Missouri In Partial Fulfillment of the Requirements for the Degree Master of Science."Thesis supervisor: Dr. Zhihai He.In this work, we develop a license plate detection method using a SVM (Support Vector Machine) classifier with HOG (Histogram of Oriented Gradients) features. The system performs window searching at different scales and analyzes the HOG feature using a SVM and locates their bounding boxes using a Mean Shift method. Edge information is used to accelerate the time consuming scanning process. Our license plate detection results show that this method is relatively insensitive to variations in illumination, license plate patterns, camera perspective and background variations. We tested our method on 200 real life images, captured on Chinese highways under different weather conditions and lighting conditions. And we achieved a detection rate of 100%. After detecting license plates, alignment is then performed on the plate candidates. Conceptually, this alignment method searches neighbors of the bounding box detected, and finds the optimum edge position where the outside regions are very different from the inside regions of the license plate, from color's perspective in RGB space. This method accurately aligns the bounding box to the edges of the plate so that the subsequent license plate segmentation and recognition can be performed accurately and reliably. The system performs license plate segmentation using global alignment on the binary license plate. A global model depending on the layout of license plates is proposed to segment the plates. This model searches for the optimum position where the characters are all segmented but not chopped into pieces. At last, the characters are recognized by another SVM classifier, with a feature size of 576, including raw features, vertical and horizontal scanning features. 
Our character recognition results show that 99% of the digits are successfully recognized, while the letters achieve a recognition rate of 95%. The license plate recognition system was then incorporated into an embedded system for parallel computing. Several TS7250 boards and an auxiliary board are used to simul
 Includes bibliographical references (pages 67-73)
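The window-searching step can be sketched, at a single scale, as a loop that scores each window with a plug-in classifier. Here a mean-intensity scorer stands in for the thesis's HOG+SVM classifier, and the window size, step and score threshold are illustrative assumptions.

```python
import numpy as np

def sliding_window_detect(image, score_fn, win=(2, 4), step=1, min_score=0.5):
    """Single-scale sliding-window detection sketch: score every window
    position with `score_fn` (a stand-in for a HOG+SVM classifier) and
    return the best-scoring bounding box (top, left, h, w), or None if no
    window beats `min_score`."""
    h, w = image.shape
    wh, ww = win
    best, best_box = min_score, None
    for top in range(0, h - wh + 1, step):
        for left in range(0, w - ww + 1, step):
            s = score_fn(image[top:top + wh, left:left + ww])
            if s > best:
                best, best_box = s, (top, left, wh, ww)
    return best_box
```

A multi-scale search repeats this over an image pyramid; edge-based pruning, as in the abstract, would simply skip windows whose edge density is too low before calling the (expensive) scorer.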
    • 

    corecore