
    A topological sampling theorem for robust boundary reconstruction and image segmentation

    Existing theories on shape digitization impose strong constraints on admissible shapes, and require error-free data. Consequently, these theories are not applicable to most real-world situations. In this paper, we propose a new approach that overcomes many of these limitations. It assumes that segmentation algorithms represent the detected boundary by a set of points whose deviation from the true contours is bounded. Given these error bounds, we reconstruct boundary connectivity by means of Delaunay triangulation and α-shapes. We prove that this procedure is guaranteed to result in topologically correct image segmentations under certain realistic conditions. Experiments on real and synthetic images demonstrate the good performance of the new method and confirm the predictions of our theory.
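The α-shape filtering step described above can be sketched as follows. This is a minimal illustration, assuming SciPy is available for the Delaunay triangulation; the paper's error-bound analysis and topological guarantees are not reproduced, and the function name is our own:

```python
import numpy as np
from collections import Counter
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    """Boundary edges of an alpha-shape: keep Delaunay triangles whose
    circumradius is at most alpha, then return the edges used by exactly
    one kept triangle (the reconstructed boundary)."""
    tri = Delaunay(points)
    kept = []
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        # side lengths, then circumradius R = abc / (4 * area) via Heron's formula
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(a - c)
        lc = np.linalg.norm(a - b)
        s = (la + lb + lc) / 2.0
        area2 = s * (s - la) * (s - lb) * (s - lc)
        if area2 <= 0:
            continue  # degenerate triangle
        radius = la * lb * lc / (4.0 * np.sqrt(area2))
        if radius <= alpha:
            kept.append(simplex)
    # an edge shared by two kept triangles is interior; count occurrences
    edges = Counter()
    for simplex in kept:
        for i, j in ((0, 1), (1, 2), (0, 2)):
            edges[tuple(sorted((simplex[i], simplex[j])))] += 1
    return [e for e, n in edges.items() if n == 1]
```

For points sampled on a circle (plus its center so the triangulation is non-degenerate), the recovered boundary is exactly the rim when α exceeds the fan triangles' circumradius, and empty when α is too small.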

    A simulated shape recognition system using feature extraction

    A simulated shape recognition system using feature extraction was built as an aid for designing robot vision systems. The simulation allows the user to study the effects of image resolution and feature selection on the performance of a vision system that tries to identify unknown 2-D objects. Performance issues that can be studied include identification accuracy and recognition speed as functions of resolution and of the size and makeup of the feature set. Two approaches to feature selection were studied, as was a nearest neighbor classification algorithm based on Mahalanobis distances. Using a pool of ten objects and twelve features, the system was tested by performing studies of hypothetical visual recognition tasks.
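A nearest neighbor classifier based on Mahalanobis distances, as mentioned above, can be sketched in a few lines. This is a generic illustration of the technique, not the abstract's actual system; the function name and the single shared covariance matrix are our assumptions:

```python
import numpy as np

def mahalanobis_nn(x, class_means, cov):
    """Assign feature vector x to the class whose mean has the smallest
    Mahalanobis distance d(x, m)^2 = (x - m)^T C^{-1} (x - m),
    where C is a covariance matrix shared by all classes."""
    cov_inv = np.linalg.inv(cov)
    dists = []
    for m in class_means:
        diff = x - m
        dists.append(float(diff @ cov_inv @ diff))
    return int(np.argmin(dists))
```

With an identity covariance this reduces to ordinary Euclidean nearest-mean classification; a non-identity covariance down-weights features with large variance.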

    Automated Analysis of Metacarpal Cortical Thickness in Serial Hand Radiographs

    To understand the roles of various genes that influence skeletal bone accumulation and loss, accurate measurement of bone mineralization is needed. However, it is a challenging task to accurately assess bone growth over a person's lifetime. Traditionally, manual analysis of hand radiographs has been used to quantify bone growth, but these measurements are tedious and may be impractical for a large-scale growth study. The aim of this project was to develop a tool to automate the measurement of metacarpal cortical bone thickness in standard hand-wrist radiographs of humans aged 3 months to 70+ years that would be more accurate, precise and efficient than manual radiograph analysis. The task was divided into two parts: development of automatic analysis software and the implementation of the routines in a Graphical User Interface (GUI). The automatic analysis was to ideally execute without user intervention, but we anticipated that not all images would be successfully analyzed. The GUI, therefore, provides the interface for the user to execute the program, review results of the automated routines, make semi-automated and manual corrections, view the quantitative results and growth trend of the participant and save the results of all analyses. The project objectives were attained. Of a test set of about 350 images from participants in a large research study, automatic analysis was successful in approximately 75% of the reasonable quality images and manual intervention allowed the remaining 25% of these images to be successfully analyzed. For images of poorer quality, including many that the Lifespan Health Research Center (LHRC) clients would not expect to be analyzed successfully, the inputs provided by the user allowed approximately 80% to be analyzed, but the remaining 20% could not be analyzed with the software. The developed software tool provides results that are more accurate and precise than those from manual analyses.
Measurement accuracy, as assessed by phantom measurements, was approximately 0.5% and interobserver and intraobserver agreement were 92.1% and 96.7%, respectively. Interobserver and intraobserver correlation values for automated analysis were 0.9674 and 0.9929, respectively, versus 0.7000 and 0.7820 for manual analysis. The automated analysis process is also approximately 87.5% more efficient than manual image analysis and automatically generates an output file containing over 160 variables of interest. The software is currently being used successfully to analyze over 17,000 images in a study of human bone growth.
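One plausible core of such a measurement is extracting cortical widths from an intensity profile sampled across the metacarpal shaft, where the bright cortex flanks a darker medullary cavity. The sketch below illustrates that idea only; the thesis's actual routines are not public, and the function name and thresholding scheme are our assumptions:

```python
import numpy as np

def cortical_widths(profile, threshold):
    """Given a 1-D intensity profile across the metacarpal shaft, return
    the widths (in samples) of the two thickest above-threshold runs,
    taken here as the lateral and medial cortex."""
    above = profile > threshold
    runs, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                  # a bright run begins
        elif not flag and start is not None:
            runs.append(i - start)     # a bright run ends
            start = None
    if start is not None:
        runs.append(len(above) - start)
    runs.sort(reverse=True)
    return runs[:2]                    # two thickest runs = the two cortices
```

In practice the widths would be converted from samples to millimeters using the radiograph's pixel spacing, and the threshold chosen adaptively per image.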

    Histogram Equalization with Filtering Techniques for Enhancement of Low Quality Microscopic Blood Smear Images

    This paper presents image enhancement and filtering techniques for microscopic blood smear images, in order to improve images of low quality characterized by blur, diminished true color of the cells, unclear boundaries and low contrast between the cells and the background. We therefore propose a histogram equalization (HE) technique followed by filtering techniques such as the median filter. HE adjusts the contrast based on pixel intensity values, so that image quality can be assessed through the image histogram, as shown in the results, while filtering and a gamma correction parameter remove noise from the images and help distinguish the background from the foreground (cells) to obtain clear borders. These techniques were applied to 46 blood samples. The proposed method successfully improves the readability of the cells in low-quality blood smear images, meaning the enhanced images contain more information, which supports correct disease detection and data analysis.
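The two pointwise steps above, histogram equalization and gamma correction, can be sketched with NumPy alone (a median filter would typically come from an image library such as `scipy.ndimage.median_filter`). This is a generic illustration of the standard techniques, not the paper's exact pipeline:

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit grayscale image:
    map each gray level through the normalized cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero CDF value
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

def gamma_correct(img, gamma):
    """Gamma correction on an 8-bit image: out = 255 * (in / 255) ** gamma.
    gamma < 1 brightens dark regions; gamma > 1 darkens them."""
    return np.round(255.0 * (img / 255.0) ** gamma).astype(np.uint8)
```

After equalization, the darkest and brightest levels present in the image are stretched to 0 and 255, which is what raises cell-to-background contrast.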

    Color image quality measures and retrieval

    The focus of this dissertation is mainly on color images, especially images with lossy compression. Issues related to color quantization, color correction, color image retrieval and color image quality evaluation are addressed. A no-reference color image quality index is proposed. A novel color correction method applied to low bit-rate JPEG images is developed. A novel method for content-based image retrieval based upon combined feature vectors of shape, texture, and color similarities has been suggested. In addition, an image-specific color reduction method has been introduced, which allows a 24-bit JPEG image to be shown on an 8-bit color monitor with a 256-color display. The reduction in download and decode time mainly comes from the smart encoder incorporating the proposed color reduction method after the color-space conversion stage. To summarize, the methods that have been developed can be divided into two categories: one is visual representation, and the other is image quality measurement. Three algorithms are designed for visual representation: (1) An image-based visual representation for color correction on low bit-rate JPEG images. Previous studies on color correction are mainly on color image calibration among devices. Little attention was paid to the compressed image, whose color distortion is evident in low bit-rate JPEG images. In this dissertation, a lookup table algorithm is designed based on the loss of PSNR at different compression ratios. (2) A feature-based representation for content-based image retrieval. It is a concatenated vector of color, shape, and texture features from a region of interest (ROI). (3) An image-specific 256-color (8-bit) reproduction for color reduction from 16 million colors (24 bits). By inserting the proposed color reduction method into a JPEG encoder, the image size and transmission time are further reduced. This smart encoder also enables its decoder to spend less time decoding.
Three algorithms are designed for image quality measures (IQMs): (1) A referenced IQM based upon an image representation in a very low-dimensional domain. Previous studies on IQMs are based on high-dimensional domains, including the spatial and frequency domains. In this dissertation, a low-dimensional IQM based on random projection is designed, preserving the accuracy of the IQM in the high-dimensional domain. (2) A no-reference image blurring metric. Based on the edge gradient, the degree of image blur can be measured. (3) A no-reference color IQM based upon colorfulness, contrast and sharpness.
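A gradient-based no-reference sharpness score of the kind described in (2) can be sketched as follows. This is a generic illustration (mean squared gradient magnitude), not the dissertation's specific metric; the function name is our own:

```python
import numpy as np

def gradient_energy(img):
    """No-reference sharpness score: mean squared magnitude of the
    vertical and horizontal intensity gradients. Blurring an image
    spreads its edges and lowers this value."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)          # derivatives along rows, then columns
    return float(np.mean(gx ** 2 + gy ** 2))
```

Because blur redistributes a fixed intensity step over more pixels, the squared (rather than absolute) gradient is what makes the score drop for blurred images.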

    Analysis of the Different Medicinal Leaf with Fractal Dimension

    Fractal analysis has been applied to describe various aspects of the complexity of plant morphology. In this work we determined the fractal dimension of leaves from various species (Peepal, Castor oil and papaya leaves) in order to characterize the structure/architecture of these leaves. The present study analyzes the shapes of medicinal leaves in terms of fractal geometry using image processing techniques. The results are very informative.
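The standard way to estimate the fractal dimension of a binarized leaf image is box counting: count the boxes of size s that contain any foreground pixel, then fit the slope of log N(s) against log(1/s). A minimal sketch (the paper does not specify its exact estimator, so this is the textbook method):

```python
import numpy as np

def box_counting_dimension(binary, box_sizes):
    """Estimate the box-counting dimension of a 2-D boolean image:
    the slope of log N(s) versus log(1/s), where N(s) is the number
    of s-by-s boxes containing at least one foreground pixel."""
    counts = []
    for s in box_sizes:
        h, w = binary.shape
        hs, ws = h - h % s, w - w % s          # trim so boxes tile exactly
        blocks = binary[:hs, :ws].reshape(hs // s, s, ws // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return float(slope)
```

A filled region yields a dimension near 2 and a thin line near 1; leaf outlines typically fall between, which is what makes the dimension a shape descriptor.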

    PointHuman: Reconstructing Clothed Human from Point Cloud of Parametric Model

    It is very difficult to reconstruct the clothed 3D human body from a single RGB image, because a 2D image lacks the information needed to represent the 3D body, especially when it is clothed. To solve this problem, we introduce a priority scheme for the spatial information of different body parts and propose the PointHuman network. PointHuman combines the spatial features of a parametric human body model with implicit functions, which have no expressive restrictions. In the PointHuman reconstruction framework, we use Point Transformer to extract semantic spatial features of the parametric human body model to regularize the implicit function of the neural network, which extends the generalization ability of the network to complex human poses and various styles of clothing. Moreover, considering the ambiguity of depth information, we estimate the depth of the parametric model after converting it to a point cloud, and obtain an offset depth value. The offset depth value improves the consistency between the parametric model and the neural implicit function, as well as the accuracy of the reconstructed human models. Finally, we optimize the recovery of the parametric model from a single image and propose a depth perception method. This method further improves the estimation accuracy of the parametric model and ultimately the effectiveness of human reconstruction. Our method achieves competitive performance on the THuman dataset.

    Modeling and applications of the focus cue in conventional digital cameras

    The focus of digital cameras plays a fundamental role in both the quality of the acquired images and the perception of the imaged scene. This thesis studies the focus cue in conventional cameras with focus control, such as cellphone cameras, photography cameras, webcams and the like. A deep review of the theoretical concepts behind focus in conventional cameras reveals that, despite its usefulness, the widely known thin lens model has several limitations for solving different focus-related problems in computer vision. In order to overcome these limitations, the focus profile model is introduced as an alternative to classic concepts such as the near and far limits of the depth of field. The new concepts introduced in this dissertation are exploited to solve diverse focus-related problems, such as efficient image capture, depth estimation, visual cue integration and image fusion. The results obtained through an exhaustive experimental validation demonstrate the applicability of the proposed models.
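For reference, the classic thin-lens quantities that the focus profile model is proposed as an alternative to can be computed in a few lines. This sketch uses the standard textbook formulas for the near and far depth-of-field limits (not anything specific to the thesis); the function name is our own:

```python
def dof_limits(f, N, c, s):
    """Classic thin-lens depth-of-field limits.
    f: focal length, N: f-number, c: acceptable circle of confusion,
    s: focus distance (all in the same length unit, e.g. mm).
    Returns (near, far); far is infinite once the focus distance reaches
    the hyperfocal distance H = f**2 / (N * c) + f."""
    H = f * f / (N * c) + f
    near = s * (H - f) / (H + s - 2 * f)
    far = float('inf') if s >= H else s * (H - f) / (H - s)
    return near, far
```

For a 50 mm lens at f/8 with c = 0.03 mm focused at 5 m, this gives roughly 3.4 m to 9.5 m of acceptable sharpness; note the model yields only two hard limits, which is precisely the kind of binary in/out-of-focus description the focus profile is meant to refine.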