136 research outputs found

    Restoration of deteriorated text sections in ancient document images using a tri-level semi-adaptive thresholding technique

    The proposed research aims to restore deteriorated text sections affected by stain markings, ink seepage and document ageing in ancient document images, challenges that confront document enhancement. A tri-level semi-adaptive thresholding technique is developed in this paper to overcome these issues. The primary focus, however, is on removing deteriorations that obscure text sections. The proposed algorithm comprises three levels of degradation removal as well as pre- and post-enhancement processes. Level-wise degradation removal uses a global thresholding approach, whereas pseudo-colouring uses local thresholding procedures. Experiments on palm leaf and DIBCO document images reveal decent performance in removing ink/oil stains whilst retaining obscured text sections. On the DIBCO and palm leaf datasets, the system also showed its efficacy in removing common deteriorations such as uneven illumination, show-through, discolouration and writing marks. The proposed technique compares favourably with other thresholding-based benchmark techniques, producing an average F-measure of 65.73 and precision of 93% on the DIBCO datasets, and 55.24 and 94% on the palm leaf datasets. Subjective analysis shows the robustness of the proposed model towards the removal of stain degradations, with a qualitative score of 3 for 45% of samples, indicating degradation removal with fairly readable text.
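    The abstract contrasts a global thresholding approach (level-wise degradation removal) with local thresholding (pseudo-colouring). As a generic sketch of that global-versus-local distinction, not a reproduction of the authors' tri-level algorithm, the two binarization styles might be contrasted as follows (the simple mean statistics are illustrative stand-ins for the paper's actual thresholding criteria):

    ```python
    import numpy as np

    def global_threshold(img, t=None):
        """Binarize with a single threshold for the whole image (global)."""
        if t is None:
            t = img.mean()  # simple global statistic; Otsu is a common alternative
        return (img > t).astype(np.uint8)

    def local_mean_threshold(img, win=15, offset=0.0):
        """Binarize each pixel against the mean of its local window (adaptive).

        Local thresholds can follow spatially varying degradations (stains,
        uneven illumination) that a single global threshold cannot.
        """
        pad = win // 2
        padded = np.pad(img.astype(float), pad, mode="reflect")
        out = np.zeros(img.shape, dtype=np.uint8)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                window = padded[i:i + win, j:j + win]
                out[i, j] = img[i, j] > window.mean() - offset
        return out
    ```

    A global threshold is cheap and works when foreground and background intensities are well separated across the whole page; the local variant trades speed for robustness to spatially varying degradation.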

    The use of machine learning/deep learning in PET/CT interpretation to aid in outcome prediction in lymphoma

    Lymphoma is a haematopoietic malignancy comprising two broad categories: Hodgkin lymphoma (HL) and non-Hodgkin lymphoma (NHL). These categories can be further split into subtypes, with classical HL (cHL) and diffuse large B cell lymphoma (DLBCL) being the commonest. The gold-standard imaging modality for staging and response assessment in cHL and DLBCL is 2-deoxy-2-[fluorine-18]fluoro-D-glucose (FDG) positron emission tomography/computed tomography (PET/CT), with patients having a worse prognosis if they do not demonstrate a complete metabolic response (CMR). However, approximately 15% of patients will relapse even after CMR. Therefore, if patients who are likely to relapse could be identified, it may be possible to stratify treatment early to improve patient outcomes. The aim of this project is to develop and test image-derived predictive models based on the baseline PET/CT to risk-stratify patients pre-treatment.

    Entropy in Image Analysis III

    Image analysis can be applied to rich and assorted scenarios; the aim of this recent research field is therefore not only to mimic the human visual system. Image analysis is one of the main methods by which computers interpret visual data today, and thanks to artificial intelligence there is a growing body of evidence that they will eventually be able to manage it in a totally unsupervised manner. The articles published in this book clearly point to such a future.

    Image Processing and Analysis for Preclinical and Clinical Applications

    Radiomics is one of the most successful branches of research in the field of image processing and analysis, as it provides valuable quantitative information for personalized medicine. It has the potential to discover features of disease that cannot be appreciated with the naked eye, in both preclinical and clinical studies. In general, all quantitative approaches based on biomedical images, such as positron emission tomography (PET), computed tomography (CT) and magnetic resonance imaging (MRI), have a positive clinical impact on the detection of biological processes and diseases as well as on predicting response to treatment. This Special Issue, “Image Processing and Analysis for Preclinical and Clinical Applications”, addresses some gaps in this field to improve the quality of research in the clinical and preclinical environment. It consists of fourteen peer-reviewed papers covering a range of topics and applications related to biomedical image processing and analysis.

    Radiogenomics in non-small-cell lung cancer

    Non-small-cell lung cancer (NSCLC) is the most frequently encountered form of lung cancer and itself consists of a spectrum of subtypes. NSCLC is a lethal, heterogeneous solid tumor with an extensive array of molecular features. The condition has become a notable example of precision medicine as interest in the topic continues to expand. The ultimate goal of the current research is to use specific genes as biomarkers for its prognosis, timely diagnosis, and personalized therapy, all of which are facilitated by the use of evolving next-generation sequencing (NGS) techniques that permit the simultaneous detection of a large number of genetic abnormalities.
    Known mutations of a number of genes, such as EGFR, ALK, and KRAS, already influence treatment decisions, and new key genes and molecular signatures are being investigated for their prognostic value as well as their potential contribution to immunotherapy and to the treatment of recurrence due to resistance to existing therapies. The sample types utilized for NGS studies, such as fine-needle aspirates, formalin-fixed paraffin-embedded tissue, and cell-free DNA, each have their own advantages and disadvantages that must be taken into account.

    Improved Otsu and Kapur approach for white blood cells segmentation based on LebTLBO optimization for the detection of Leukemia.

    The diagnosis of leukemia involves the detection of abnormal characteristics of blood cells by a trained pathologist. Currently, this is done manually by observing the morphological characteristics of white blood cells in microscopic images. Though some equipment-based and chemical-based tests are available, the use and adoption of automated computer-vision-based systems is still an issue. Certain software frameworks exist in the literature; however, they have not been adopted commercially, so there is a need for an automated, software-based framework for the detection of leukemia. In software-based detection, segmentation is the first critical stage, as it outputs the region of interest for further accurate diagnosis. Therefore, this paper explores an efficient hybrid segmentation approach for a more effective leukemia diagnosis system. A popular publicly available database, the acute lymphoblastic leukemia image database (ALL-IDB), is used in this research. First, the images are pre-processed and segmented using multilevel thresholding with the Otsu and Kapur methods. To further optimize segmentation performance, the learning enthusiasm-based teaching-learning-based optimization (LebTLBO) algorithm is employed. Different metrics are used to measure system performance, and a comparative analysis of the proposed methodology against existing benchmark methods is carried out. The proposed approach proves better than earlier techniques in terms of PSNR and similarity index, and the results show a significant improvement in the performance measures when the thresholding is optimized with the LebTLBO technique.
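    Otsu's method, the first of the two thresholding criteria named above, selects the grey level that maximizes the between-class variance of the foreground/background split. A minimal single-threshold sketch of that criterion (the paper extends it to multilevel thresholding and tunes it with LebTLBO, neither of which is reproduced here) might look like:

    ```python
    import numpy as np

    def otsu_threshold(img):
        """Exhaustive-search Otsu: return the grey level that maximizes
        the between-class variance w0*w1*(mu0 - mu1)^2 of the split."""
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        total = hist.sum()
        sum_all = np.dot(np.arange(256), hist)  # total intensity mass
        best_t, best_var = 0, -1.0
        w0 = 0.0    # cumulative background weight (pixel count)
        sum0 = 0.0  # cumulative background intensity sum
        for t in range(256):
            w0 += hist[t]
            if w0 == 0:
                continue
            w1 = total - w0
            if w1 == 0:
                break
            sum0 += t * hist[t]
            mu0 = sum0 / w0                   # background mean
            mu1 = (sum_all - sum0) / w1       # foreground mean
            var_between = w0 * w1 * (mu0 - mu1) ** 2
            if var_between > best_var:
                best_var, best_t = var_between, t
        return best_t
    ```

    Kapur's criterion follows the same exhaustive-search pattern but maximizes the sum of the entropies of the two classes instead of the between-class variance; the multilevel versions search over tuples of thresholds, which is the combinatorial search LebTLBO is used to accelerate.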

    Candidate generation and validation techniques for pedestrian detection in thermal (infrared) surveillance videos.

    Doctoral Degree. University of KwaZulu-Natal, Durban. Video surveillance systems have become prevalent. Factors responsible for this prevalence include, but are not limited to, rapid advancements in technology, reductions in the cost of surveillance systems and changes in user demand. Research in video surveillance is largely driven by rising global security needs, which in turn increase the demand for proactive systems that monitor persistently. Persistent monitoring is a challenge for most video surveillance systems because they depend on visible-light cameras. Visible-light cameras depend on the presence of external light and can easily be undermined by over-, under-, or non-uniform illumination. Thermal infrared cameras have been considered as alternatives because they measure the intensity of infrared energy emitted from objects and so can function persistently. Many proposed approaches reuse methods developed for visible footage, but these tend to underperform on infrared images because thermal footage has different characteristics from visible footage. This thesis aims to increase the accuracy of pedestrian detection in thermal infrared surveillance footage by incorporating strategies into existing visible-image processing frameworks for IR pedestrian detection, without the need to initially assume a model for the image distribution. To this end, two novel techniques for candidate generation were formulated. The first is an entropy-based histogram modification algorithm that incorporates an energy-loss strategy to iteratively modify the histogram of an image for background elimination and pedestrian retention. The second is a background subtraction method featuring a strategy for building a reliable background image without needing to use the whole video frame. Furthermore, pedestrian detection involves simultaneously solving several sub-tasks while adapting each with IR-specific adaptations.
    Therefore, a novel semi-supervised single model for pedestrian detection was formulated that eliminates the need for separate candidate generation and validation modules by integrating region and boundary properties of the image with motion patterns, such that all the fine-tuning and adjustment happens during energy minimization. Performance evaluations were carried out on four publicly available benchmark surveillance datasets consisting of footage taken under a wide variety of weather conditions and from different perspectives.
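    The second candidate-generation technique above maintains a reliable background image for subtraction. As a hedged, generic illustration of running-average background subtraction (not the thesis's specific strategy, which avoids using whole video frames), the core update and differencing steps might be sketched as:

    ```python
    import numpy as np

    def update_background(bg, frame, alpha=0.05):
        """Exponential running average: slowly absorb scene changes
        into the background estimate (alpha controls adaptation speed)."""
        return (1 - alpha) * bg + alpha * frame

    def foreground_mask(bg, frame, k=3.0):
        """Flag pixels that deviate strongly from the background estimate.

        In thermal footage warm pedestrians are typically brighter than
        the background, so a simple statistical gate on the absolute
        difference already isolates candidate regions.
        """
        diff = np.abs(frame.astype(float) - bg)
        thresh = diff.mean() + k * diff.std()
        return diff > thresh
    ```

    In a real pipeline the mask would feed the candidate-validation stage; only background pixels (mask False) would normally be fed back into `update_background` so that pedestrians are not absorbed into the background model.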

    Multiscale Pore Network Modeling of Hierarchical Media with Applications to Improved Oil and Gas Recovery

    For complex geological materials such as carbonates and tight sandstones, which have pores at several scales, conventional relationships are not adequate to quantify transport properties. It therefore becomes important to study these complex rocks at the pore scale and apply the relevant physics to compute transport properties. However, with the current state of imaging technology it is not possible to obtain realistic images of a rock with pores spanning several orders of magnitude in a single image, so modeling tools are needed that can handle images with unresolved porosity. A pore network can be extracted such that the transport properties of the visible voids are calculated, while the interplay between micro- and macro-porosity is studied by modeling the unresolved pores as effective continua. In this work, we first generate three-phase multiscale artificial images using PoreSpy, then devise a method of network extraction on these three-phase images in a single step, creating a multiscale pore network model with OpenPNM. Additionally, 3D three-phase segmentations of real carbonate images were prepared, on which the developed algorithm was successfully tested. A cubic grid is applied to the microporous region, which becomes the mesh for the continuum simulation, with each element endowed with effective properties. The macropores are then stitched together with the continuum scale, creating a hybrid hierarchical pore network that possesses information at several scales. The multiscale pore network algorithm developed in this work is fast and robust and has been tested on several 2D and 3D artificial and real rock images. Porosity, permeability, and formation factor have been calculated on the resulting pore networks and validated against the real sandstone and carbonate images.
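    Pore network models of this kind reduce flow through the resolved voids to a resistor-network problem: each throat is assigned a hydraulic conductance (Hagen-Poiseuille for a cylindrical throat) and pressure is solved from mass balance at each pore. A toy sketch for a simple chain of pores (illustrative only; the actual work extracts full 3D networks with PoreSpy and OpenPNM) might be:

    ```python
    import numpy as np

    def throat_conductance(r, L, mu=1e-3):
        """Hagen-Poiseuille hydraulic conductance of a cylindrical throat
        of radius r and length L, filled with fluid of viscosity mu."""
        return np.pi * np.asarray(r) ** 4 / (8.0 * mu * np.asarray(L))

    def chain_network_flow(radii, lengths, p_in, p_out, mu=1e-3):
        """Pores connected in a chain by throats: fix the pressure at both
        ends, impose mass balance at each interior pore (a linear system),
        and return the resulting volumetric flow rate."""
        g = throat_conductance(radii, lengths, mu)  # one conductance per throat
        n = len(g) - 1                              # number of interior pores
        A = np.zeros((n, n))                        # mass-balance matrix
        b = np.zeros(n)
        for i in range(n):
            A[i, i] = g[i] + g[i + 1]
            if i > 0:
                A[i, i - 1] = -g[i]
            if i < n - 1:
                A[i, i + 1] = -g[i + 1]
        b[0] += g[0] * p_in
        b[-1] += g[-1] * p_out
        p = np.linalg.solve(A, b)                   # interior pore pressures
        return g[0] * (p_in - p[0])                 # flow through first throat
    ```

    In a hierarchical network of the kind described above, the same linear system simply gains extra unknowns for the continuum-grid elements, whose effective conductances stand in for the unresolved micropores.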