
    Erosion Based Visibility Preprocessing

    This paper presents a novel method for computing visibility in 2.5D environments. It is based on a novel theoretical result: the visibility from a region can be conservatively estimated by computing the visibility from a point using appropriately "shrunk" occluders and occludees. We show how approximate yet conservative shrunk objects can be computed efficiently in an urban environment. Applying this theorem yields a tighter potentially visible set (PVS) than the original method it builds on. Finally, the theoretical implications of the theorem are discussed; we believe it can open new research directions.
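
    A minimal sketch of the paper's core idea, assuming the 2.5D scene is flattened to 2D building footprints and a circular view region; the use of shapely and all names below are illustrative choices, not the authors' implementation:

    ```python
    # Conservative from-region visibility via occluder erosion (sketch).
    from shapely.geometry import Point, Polygon, LineString

    def shrink_occluder(footprint: Polygon, region_radius: float) -> Polygon:
        # Erode the occluder footprint by the radius of the view region.
        return footprint.buffer(-region_radius)

    def conservatively_occluded(center: Point, target: Point, shrunk: Polygon) -> bool:
        if shrunk.is_empty:  # occluder too thin to survive erosion
            return False
        sight_line = LineString([center, target])
        # The sight line must pass through the eroded interior, not merely
        # graze its boundary, for the cull to remain conservative.
        return sight_line.intersection(shrunk).length > 0
    ```

    Erosion by the region radius is the conservative direction: if the eroded occluder blocks the sight line from the region's center, the original occluder blocks it from every viewpoint in the region.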

    Study of augmentations on historical manuscripts using TrOCR

    Historical manuscripts are an essential source of original content. For many reasons, it is hard to recognize these manuscripts as text. This thesis used a state-of-the-art handwritten text recognizer, TrOCR, to recognize a 16th-century manuscript. TrOCR uses a vision transformer to encode the input images and a language transformer to decode them back into text. We showed that carefully preprocessed images and well-designed augmentations can improve the performance of TrOCR, and we suggest an ensemble of augmented models to achieve even better performance.
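
    As a hedged illustration of the recognition setup (not the thesis code), the sketch below runs a pretrained TrOCR checkpoint from Hugging Face with two simple augmentations of the kind the thesis studies; the checkpoint name, input file, and transform parameters are assumptions:

    ```python
    # Running a pretrained TrOCR checkpoint with simple augmentations (sketch).
    from transformers import TrOCRProcessor, VisionEncoderDecoderModel
    from torchvision import transforms
    from PIL import Image

    processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
    model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

    augment = transforms.Compose([
        transforms.RandomRotation(2),            # slight skew, as on scanned folios
        transforms.ColorJitter(brightness=0.2),  # uneven lighting and ink fading
    ])

    image = augment(Image.open("manuscript_line.png").convert("RGB"))  # assumed input
    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    generated_ids = model.generate(pixel_values)
    print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
    ```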

    PREMATURE INFANT BLOOD VESSEL SEGMENTATION OF RETINAL IMAGES BASED ON HYBRID METHOD FOR THE DETERMINATION OF TORTUOSITY

    For retinal blood vessel segmentation, we used a method based on morphological operations. The output of this process is a binary retinal image containing the main blood vessels. This paper uses a dataset of 2,800 images acquired with the RetCam3 device. Before applying the image processing, 30 images with diagnosed pre-plus disease were selected and divided into two groups: low-contrast and good-contrast images. The next part of the analysis identifies and displays blood vessels exhibiting tortuosity, a symptom of retinopathy of prematurity (ROP). Clinical physicians evaluate tortuosity by visual comparison of retinal images. For this reason, we propose a model that automatically indicates the tortuosity of the retinal blood vessels by thresholding their curvature.
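
    A small sketch of the two stages described above, morphology-based vessel extraction and an arc-to-chord tortuosity index; the structuring-element size and the decision threshold are illustrative assumptions, not values from the paper:

    ```python
    # Vessel segmentation by morphology plus a simple tortuosity index (sketch).
    import numpy as np
    from skimage import filters, morphology

    def segment_vessels(gray: np.ndarray) -> np.ndarray:
        # Vessels are dark on a brighter fundus: black top-hat highlights them.
        enhanced = morphology.black_tophat(gray, morphology.disk(8))
        binary = enhanced > filters.threshold_otsu(enhanced)
        return morphology.remove_small_objects(binary, min_size=64)

    def tortuosity(centerline: np.ndarray) -> float:
        # centerline: (N, 2) ordered points along one extracted vessel.
        arc = np.sum(np.linalg.norm(np.diff(centerline, axis=0), axis=1))
        chord = np.linalg.norm(centerline[-1] - centerline[0])
        return arc / max(chord, 1e-9)

    # A vessel could be flagged as tortuous when the index exceeds a
    # clinically tuned threshold, e.g. tortuosity(centerline) > 1.1.
    ```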

    Improved Stroke Detection at Early Stages Using Haar Wavelets and Laplacian Pyramid

    Stroke is the third leading cause of death in the world, yet few methods exist for its early detection, so a method for doing so is needed. This study proposes a combined method to detect two types of stroke simultaneously: Haar wavelets to detect hemorrhagic stroke and the Laplacian pyramid to detect ischemic stroke. The stages of this study consist of preprocessing (phases 1 and 2), Haar wavelets, the Laplacian pyramid, and image quality enhancement. Preprocessing removes the skull, reduces noise, improves contrast, and removes everything outside the brain region; image enhancement is then performed. Next, the Haar wavelet is used to extract hemorrhagic regions, while the Laplacian pyramid extracts ischemic regions. The final stage computes Grey Level Co-occurrence Matrix (GLCM) features for classification: the visualization results are further processed for feature extraction using GLCM with 12 features and then with 4 features. SVM and KNN are used for classification, and performance is measured by accuracy. The dataset comprises 45 hemorrhagic and ischemic images, split into 28 images for testing and 17 for training. The final results show that the highest accuracy achieved with SVM is 82% and with KNN is 88%.
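
    To make the feature stage concrete, here is a hedged sketch of the Haar decomposition and the GLCM features feeding a classifier; the distances, angles, and feature subset are illustrative choices, not the study's exact configuration:

    ```python
    # Haar decomposition and GLCM texture features for classification (sketch).
    import numpy as np
    import pywt
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.svm import SVC

    def haar_bands(image: np.ndarray):
        # Single-level 2D Haar decomposition of a preprocessed CT slice.
        approx, (horiz, vert, diag) = pywt.dwt2(image, "haar")
        return approx, horiz, vert, diag

    def glcm_features(region: np.ndarray) -> np.ndarray:
        # region: extracted area as an 8-bit image.
        glcm = graycomatrix(region, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        props = ("contrast", "homogeneity", "energy", "correlation")
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    # With X as stacked feature vectors and y as hemorrhagic/ischemic labels:
    # accuracy = SVC(kernel="rbf").fit(X_train, y_train).score(X_test, y_test)
    ```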

    A fully automatic nerve segmentation and morphometric parameter quantification system for early diagnosis of diabetic neuropathy in corneal images

    Diabetic Peripheral Neuropathy (DPN) is one of the most common complications of diabetes and can affect the cornea. An accurate analysis of the nerve structures can assist the early diagnosis of this disease. This paper proposes a robust, fast and fully automatic nerve segmentation and morphometric parameter quantification system for corneal confocal microscope images. The segmentation part consists of three main steps. First, a preprocessing step is applied to enhance the visibility of the nerves and remove noise using anisotropic diffusion filtering, specifically a Coherence filter followed by Gaussian filtering. Second, morphological operations are applied to remove unwanted objects in the input image, such as epithelial cells and small nerve segments. Finally, an edge detection step is applied to detect all the nerves in the input image; in this step, an efficient algorithm for connecting discontinuous nerves is proposed. In the morphometric parameter quantification part, a number of features are extracted, including nerve thickness, tortuosity and length, which may be used for the early diagnosis of diabetic polyneuropathy and when planning laser-assisted in situ keratomileusis (LASIK) or photorefractive keratectomy (PRK). The performance of the proposed segmentation system is evaluated against manually traced ground-truth images on a database of 498 corneal sub-basal nerve images (238 normal and 260 abnormal). In addition, the robustness and efficiency of the proposed system in extracting morphometric features with clinical utility were evaluated on 919 images taken from healthy subjects and diabetic patients with and without neuropathy. We demonstrate rapid (13 seconds/image), robust and effective automated corneal nerve quantification. The proposed system can be deployed as a clinical tool to support the expertise of ophthalmologists and save clinician time in a busy clinical setting.
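
    The following sketch mirrors the three segmentation steps at a very coarse level, with a Frangi vesselness filter standing in for the paper's coherence-plus-Gaussian preprocessing and a morphological closing standing in for its nerve-reconnection algorithm; all parameters are assumptions:

    ```python
    # Curvilinear-structure segmentation and length measurement (sketch).
    import numpy as np
    from skimage import filters, morphology

    def segment_nerves(gray: np.ndarray) -> np.ndarray:
        enhanced = filters.frangi(gray)  # highlight thin curvilinear nerves
        binary = enhanced > filters.threshold_otsu(enhanced)
        # Remove small objects such as epithelial cells and specks.
        binary = morphology.remove_small_objects(binary, min_size=100)
        # Crude stand-in for the paper's nerve-reconnection algorithm:
        # a morphological closing bridges small gaps along a nerve.
        return morphology.binary_closing(binary, morphology.disk(3))

    def total_nerve_length(binary: np.ndarray, um_per_px: float) -> float:
        # Skeleton pixel count approximates the total nerve length.
        return float(morphology.skeletonize(binary).sum()) * um_per_px
    ```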

    A CAD System for the Detection of Clustered Microcalcification in Digitized Mammogram Film

    Clusters of microcalcifications in mammograms are an important early sign of breast cancer. This report presents a computer-aided diagnosis (CAD) system for the automatic detection of clustered microcalcifications in digitized mammograms. The main objective of this study is to present an approach for microcalcification detection in mammography images. The literature review covers the techniques used in image processing, segmentation, feature extraction and neural networks for detecting microcalcifications. The proposed system consists of two main steps. The first step is image preprocessing and segmentation, which improves and enhances image quality. The second step is feature extraction, which analyzes the image and concludes whether the case is malignant or benign. The MATLAB implementation still needs improvement, since its output did not meet the author's expectations, especially in feature extraction.
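
    As a hedged illustration of the two steps (not the report's MATLAB code), the sketch below enhances bright spots with a white top-hat and then groups detections into clusters; the kernel size, cluster radius, and minimum count are assumptions:

    ```python
    # Microcalcification detection and naive clustering (sketch).
    import numpy as np
    from skimage import filters, measure, morphology

    def detect_spots(gray: np.ndarray) -> np.ndarray:
        # Microcalcifications are small bright spots: white top-hat isolates them.
        spots = morphology.white_tophat(gray, morphology.disk(5))
        return spots > filters.threshold_otsu(spots)

    def clustered(binary: np.ndarray, radius: float = 50.0, min_count: int = 3):
        cents = np.array([r.centroid for r in measure.regionprops(measure.label(binary))])
        # Report a spot when at least min_count detections fall within radius.
        return [c for c in cents
                if (np.linalg.norm(cents - c, axis=1) < radius).sum() >= min_count]
    ```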

    Mesh-based 3D Textured Urban Mapping

    In the era of autonomous driving, urban mapping represents a core step in letting vehicles interact with the urban context. Successful mapping algorithms proposed in the last decade build the map by leveraging data from a single sensor. The focus of the system presented in this paper is twofold: the joint estimation of a 3D map from lidar data and images, based on a 3D mesh, and its texturing. Indeed, even though most surveying vehicles for mapping are equipped with both cameras and lidar, existing mapping algorithms usually rely on either images or lidar data; moreover, both image-based and lidar-based systems often represent the map as a point cloud, while a continuous textured mesh representation would be useful for visualization and navigation purposes. In the proposed framework, we combine the accuracy of the 3D lidar data with the dense appearance information carried by the images, estimating a visibility-consistent map from the lidar measurements and refining it photometrically through the acquired images. We evaluate the proposed framework on the KITTI dataset and show the performance improvement with respect to two state-of-the-art urban mapping algorithms and two widely used surface reconstruction algorithms from computer graphics.
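
    One ingredient of such a pipeline, meshing a lidar cloud, can be sketched with Open3D's Poisson reconstruction; the input file name and depth parameter are assumptions, and the paper's visibility-consistent estimation and photometric refinement are not reproduced here:

    ```python
    # Meshing a lidar point cloud with Poisson reconstruction (sketch).
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("lidar_scan.ply")  # assumed input file
    pcd.estimate_normals()                           # Poisson needs oriented normals
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)
    o3d.io.write_triangle_mesh("urban_mesh.ply", mesh)
    ```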

    Conservative occlusion culling for urban visualization using a slice-wise data structure

    In this paper, we propose a framework for urban visualization using a conservative from-region visibility algorithm based on occluder shrinking. The visible geometry in a typical urban walkthrough mainly consists of partially visible buildings. Occlusion-culling algorithms whose granularity is buildings process these partially visible buildings as if they were completely visible. To address the problem of partial visibility, we propose a data structure, called the slice-wise data structure, that represents buildings in terms of slices parallel to the coordinate axes. We observe that the visible parts of the objects usually have simple shapes, and this observation establishes the basis for occlusion culling in which the occlusion granularity is individual slices. The proposed slice-wise data structure has minimal storage requirements. We also propose shrinking general 3D occluders in a scene to find volumetric occlusion. Empirical results show that a significant increase in frame rates and a decrease in the number of processed polygons can be achieved using the proposed slice-wise occlusion culling compared to an occlusion-culling method whose granularity is individual buildings.
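
    A toy sketch of the slice-wise idea: each building stores axis-parallel slices, and culling marks individual slices rather than whole buildings; the field names and the occlusion callback are illustrative, not the paper's implementation:

    ```python
    # Slice-wise visibility bookkeeping (toy sketch).
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Slice:
        lo: float            # extent of the slice along its axis
        hi: float
        visible: bool = True

    @dataclass
    class Building:
        slices: List[Slice] = field(default_factory=list)

        def cull(self, occluded: Callable[[Slice], bool]) -> None:
            # occluded(s) is the from-region test against shrunk occluders.
            for s in self.slices:
                s.visible = not occluded(s)

        def render_set(self) -> List[Slice]:
            # Only visible slices are sent to the renderer, so a partially
            # visible building costs only its visible slices.
            return [s for s in self.slices if s.visible]
    ```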

    Surfaces from the visual past : recovering high-resolution terrain data from historic aerial imagery for multitemporal landscape analysis

    Historic aerial images are invaluable aids to archaeological research. Often collected with large-format, photogrammetric-quality cameras, these images are potential archives of multidimensional data that can be used to recover information about historic landscapes lost to modern development. However, a lack of camera information for many historic images, coupled with physical degradation of their media, has often made it difficult to compute geometrically rigorous 3D content from such imagery. While advances in photogrammetry and computer vision over the last two decades have made it possible to extract accurate and detailed 3D topographical data from high-quality digital images from uncalibrated or unknown cameras, the target source material for these algorithms is normally digital content and thus not negatively affected by the passage of time. In this paper, we present refinements to a computer-vision-based workflow for the extraction of 3D data from historic aerial imagery, using readily available software, specific image preprocessing techniques and in-field measurement observations to mitigate some shortcomings of archival imagery and improve extraction of historical digital elevation models (hDEMs) for use in landscape archaeological research. We apply the developed method to a series of historic image sets and modern topographic data covering a period of over 70 years in western Sicily (Italy) and evaluate the outcome. The resulting series of hDEMs forms a temporal data stack that is compared with modern high-resolution terrain data using a geomorphic change detection approach, providing a quantification of landscape change through time, in extent and depth, and of the impact of this change on archaeological resources.
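
    The geomorphic-change-detection core reduces to differencing co-registered DEMs and masking changes below a detectability threshold; a minimal sketch, with file names and the 0.5 m threshold as assumptions:

    ```python
    # DEM differencing for geomorphic change detection (sketch).
    import numpy as np
    import rasterio

    with rasterio.open("hdem_1943.tif") as old, rasterio.open("dem_2020.tif") as new:
        dod = new.read(1) - old.read(1)           # DEM of difference, metres
        cell_area = abs(old.res[0] * old.res[1])  # square metres per cell

    min_detectable = 0.5                          # below this, treat as noise
    change = np.where(np.abs(dod) > min_detectable, dod, 0.0)
    eroded_volume = -change[change < 0].sum() * cell_area
    print(f"net erosion volume: {eroded_volume:.1f} m^3")
    ```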