149 research outputs found

    Modified Canny Detector-based Active Contour for Segmentation

    In the present work, an integrated modified Canny detector and active contour were proposed for automated medical image segmentation. Since the traditional Canny detector (TCD) detects only edge pixels, which are insufficient for labelling the image, a shape feature was extracted with the proposed modified Canny detector (MCD) to select the initial region of interest (IROI) as an initial mask for the active contour without edges (ACWE). This procedure overcomes the drawback of manually initializing the mask location and shape in the traditional ACWE, which is sensitive to the shape of the region of interest (ROI); the proposed method instead selects the initial location and shape of the IROI using the MCD. A post-processing stage was also applied to further clean and smooth the ROI. The proposed system achieves a practical computational time of less than 5 minutes, significantly less than that required by the traditional ACWE. The results demonstrated the ability of the proposed method to segment medical images with an average Dice coefficient of 87.54%.
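
    The initialization idea described above can be sketched in code. The following is a minimal illustration, not the authors' exact MCD pipeline: it assumes scikit-image and SciPy are available, uses a stock test image in place of a medical scan, and approximates the shape-feature step by simply closing and filling the Canny edge map before seeding scikit-image's morphological Chan-Vese (ACWE) implementation.

```python
# Hedged sketch: Canny edges -> filled initial region -> ACWE evolution.
# Stand-ins: skimage's test image and morphological_chan_vese; the paper's
# modified Canny detector and shape feature are approximated by a simple
# dilate-and-fill step.
import numpy as np
from scipy import ndimage as ndi
from skimage import data, feature, morphology, segmentation

image = data.camera().astype(float) / 255.0           # placeholder image

# 1) Edge pixels from a traditional Canny detector (TCD).
edges = feature.canny(image, sigma=2.0)

# 2) Build an initial region of interest (IROI) from the open edge map:
#    bridge gaps, fill holes, keep the largest connected component.
closed = morphology.binary_dilation(edges, morphology.disk(3))
filled = ndi.binary_fill_holes(closed)
labels, _ = ndi.label(filled)
sizes = np.bincount(labels.ravel())
sizes[0] = 0                                           # ignore background
init_mask = labels == sizes.argmax()

# 3) Use the IROI as the initial level set for active contour without edges.
seg = segmentation.morphological_chan_vese(image, 100, init_level_set=init_mask)
```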

    Juxta-Vascular Pulmonary Nodule Segmentation in PET-CT Imaging Based on an LBF Active Contour Model with Information Entropy and Joint Vector

    The accurate segmentation of pulmonary nodules is an important preprocessing step in the computer-aided diagnosis of lung cancer. However, existing segmentation methods can suffer from edge leakage and cannot segment juxta-vascular pulmonary nodules accurately. To address this problem, a novel automatic segmentation method based on an LBF active contour model with information entropy and a joint vector is proposed in this paper. The method extracts the area of interest around pulmonary nodules using the standard uptake value (SUV) in Positron Emission Tomography (PET) images, and automatic threshold iteration is used to construct a rough initial contour. The SUV information entropy and the gray-value joint vector of Positron Emission Tomography-Computed Tomography (PET-CT) images are calculated to drive the evolution of the contour curve; at the edge of the pulmonary nodule the evolution stops, yielding an accurate segmentation. Experimental results show that the method achieves a 92.35% average Dice similarity coefficient, 2.19 mm Hausdorff distance, and a 3.33% false-positive rate relative to manual segmentation. Compared with existing methods, the proposed method segments juxta-vascular pulmonary nodules in PET-CT images more accurately and efficiently.
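
    The automatic threshold iteration used to build the rough initial contour can be written compactly. The sketch below assumes a NumPy environment and a synthetic SUV map in place of real PET data, and uses the classic two-class mean-midpoint iteration; the paper's exact scheme may differ.

```python
import numpy as np

def iterative_threshold(suv, tol=1e-3, max_iter=100):
    """Classic automatic threshold iteration: start from the global mean and
    repeatedly set the threshold to the midpoint of the two class means."""
    t = suv.mean()
    for _ in range(max_iter):
        low, high = suv[suv <= t], suv[suv > t]
        if low.size == 0 or high.size == 0:
            break
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < tol:
            t = t_new
            break
        t = t_new
    return t

# suv_map: hypothetical 2-D array of standard uptake values from a PET slice.
suv_map = np.random.gamma(shape=2.0, scale=1.5, size=(128, 128))
rough_contour_mask = suv_map > iterative_threshold(suv_map)
```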

    Automatic Detection and Classification of Breast Tumors in Ultrasonic Images Using Texture and Morphological Features

    Due to the severe presence of speckle noise, poor image contrast and irregular lesion shapes, it is challenging to build a fully automatic detection and classification system for breast ultrasound images. In this paper, a novel and effective computer-aided method covering region-of-interest (ROI) generation, segmentation and classification of breast tumors is proposed, without any manual intervention. By incorporating local texture and position features, an ROI is first detected using a self-organizing map neural network. A modified Normalized Cut approach that considers weighted neighborhood gray values is then used to partition the ROI into clusters and obtain an initial boundary, and a region-fitting active contour model adjusts the few inaccurate initial boundaries to produce the final segmentation. Finally, three texture and five morphological features are extracted from each breast tumor, and a highly efficient Affinity Propagation clustering performs the malignant/benign classification on an existing database without any training process. The proposed system is validated on 132 cases (67 benign and 65 malignant), with its performance compared to traditional methods such as level-set segmentation, artificial neural network classifiers, and so forth. Experimental results show that the proposed system, which needs no training procedure or manual interference, performs best in the detection and classification of ultrasonic breast tumors while having the lowest computational complexity.
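
    The final clustering step lends itself to a short illustration. The sketch below assumes scikit-learn and a random placeholder feature matrix (one row per tumor, 3 texture + 5 morphological features); it shows how Affinity Propagation assigns cluster labels without a predefined number of clusters or any training labels, which is the property the paper relies on.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: one row per tumor,
# 3 texture + 5 morphological features as described above.
rng = np.random.default_rng(0)
features = rng.normal(size=(132, 8))

# Affinity Propagation needs no predefined number of clusters and no training
# labels; exemplars emerge from the pairwise similarity matrix.
X = StandardScaler().fit_transform(features)
ap = AffinityPropagation(damping=0.9, random_state=0).fit(X)
print("clusters found:", len(ap.cluster_centers_indices_))
print("labels for first 10 tumors:", ap.labels_[:10])
```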

    Comparative analysis and implementation of structured edge active contour

    This paper proposes a modified Chan-Vese model for image segmentation. The approach uses the linear structure tensor (LST) as input to the variant model; the structure tensor is a matrix representation of partial-derivative information. In the proposed model, the original image is used as an information channel for computing the structure tensor, and a Difference of Gaussians (DoG) serves as a feature-enhancement step that yields a less blurred image than the original. The LST is further modified by adding intensity information to enhance the orientation information, and an Active Contour Model (ACM) is finally used to segment the images. The proposed algorithm is tested on various images, including images with intensity inhomogeneity, and the results are shown. The results are also compared with other algorithms such as Chan-Vese, Bhattacharyya, Gabor-based Chan-Vese and a novel structure-tensor-based model, and it is verified that the accuracy of the proposed model is the best. The biggest advantage of the proposed model is clear edge enhancement.
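
    As an illustration of the ingredients named above, the sketch below computes a linear structure tensor, a Difference of Gaussians channel, and an intensity-augmented feature stack using only NumPy and SciPy; how the original paper weights and combines these channels before the Chan-Vese evolution is an assumption here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def linear_structure_tensor(image, sigma=1.5):
    """Smoothed outer product of the image gradient: J = G_sigma * (grad I grad I^T)."""
    Iy, Ix = np.gradient(image.astype(float))
    Jxx = gaussian_filter(Ix * Ix, sigma)
    Jxy = gaussian_filter(Ix * Iy, sigma)
    Jyy = gaussian_filter(Iy * Iy, sigma)
    return Jxx, Jxy, Jyy

def difference_of_gaussians(image, sigma_low=1.0, sigma_high=2.0):
    """Band-pass enhancement: a mildly blurred image minus a more blurred one."""
    return gaussian_filter(image, sigma_low) - gaussian_filter(image, sigma_high)

image = np.random.rand(128, 128)          # placeholder for a test image
Jxx, Jxy, Jyy = linear_structure_tensor(image)
dog = difference_of_gaussians(image)

# One possible intensity-augmented feature stack fed to the Chan-Vese model;
# the exact combination used in the paper is an assumption here.
feature_stack = np.stack([Jxx, Jxy, Jyy, dog, image], axis=-1)
```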

    Image texture analysis of transvaginal ultrasound in monitoring ovarian cancer

    Ovarian cancer has the highest mortality rate of all gynaecologic cancers and is the fifth most common cancer in UK women. It has been dubbed "the silent killer" because of its non-specific symptoms. Amongst the various imaging modalities, ultrasound is considered the main modality for ovarian cancer triage. As with other imaging modalities, the main issue is that the interpretation of the images is subjective and observer dependent. In order to overcome this problem, texture analysis was considered for this study. Advances in medical imaging, computer technology and image processing have collectively ramped up the interest of many researchers in texture analysis. While a number of successful uses of texture analysis techniques have been reported, to my knowledge it has yet to be applied to characterise an ovarian lesion from a B-mode image. The concept of applying texture analysis in the medical field is not to replace the conventional method of interpreting images but simply to aid clinicians in making their diagnoses. Five categories of textural features were considered in this study: grey-level co-occurrence matrix (GLCM), run-length matrix (RLM), gradient, auto-regressive (AR) and wavelet. Prior to image classification, the robustness of each textural feature, that is, how well it tolerates variation arising from the image acquisition and texture extraction process, was first evaluated. This includes random variation caused by the ultrasound system and the operator during image acquisition, as well as the influence of region of interest (ROI) size, ROI depth, scanner gain setting, and the 'calliper line'. Scanning reliability was evaluated using a tissue-equivalent phantom as well as in a clinical environment. Additionally, the reliability of the ROI delineation procedure for clinical images was evaluated; an image enhancement technique and a semi-automatic segmentation tool were employed in order to improve this procedure. The results of the study indicated that two of the five textural feature categories, GLCM and wavelet, were robust; these two features were therefore used for image classification. To extract textural features from the clinical images, two ROI delineation approaches were introduced: (i) the textural features were extracted from the whole area of the tissue of interest, and (ii) the anechoic area within the normal and malignant tissues was excluded from feature extraction. The results revealed that the second approach outperformed the first: there is a significant difference in the GLCM and wavelet features between the three groups of normal tissue, cysts, and malignant tissue. Receiver operating characteristic (ROC) curve analysis was carried out to determine the discriminatory ability of the textural features, which was found to be satisfactory. The principal conclusion is that GLCM and wavelet features can potentially be used in computer-aided diagnosis (CAD) tools to help clinicians in the diagnosis of ovarian cancer.
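
    The two feature families that proved robust, GLCM and wavelet, can be extracted with standard libraries. The sketch below assumes scikit-image and PyWavelets and uses a random placeholder ROI; the specific offsets, angles, wavelet and sub-band statistics used in the thesis are assumptions.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

# roi: hypothetical 8-bit grey-level ROI delineated from a B-mode image.
roi = (np.random.rand(64, 64) * 255).astype(np.uint8)

# GLCM features: grey-level co-occurrence at several offsets and angles.
glcm = graycomatrix(roi, distances=[1, 2], angles=[0, np.pi / 4, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
glcm_features = {p: graycoprops(glcm, p).mean()
                 for p in ("contrast", "homogeneity", "energy", "correlation")}

# Wavelet features: energy of the detail sub-bands from a 2-level decomposition.
coeffs = pywt.wavedec2(roi.astype(float), wavelet="db4", level=2)
wavelet_energy = [float(np.sum(np.square(band)))
                  for level in coeffs[1:] for band in level]

print(glcm_features)
print(wavelet_energy)
```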

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been used intensively in recent decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information, too much to be fully exploited by radiologists and physicians; therefore, the design of computer-aided diagnostic (CAD) systems, which can serve as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for lung cancer patients; lung cancer remains the leading cause of cancer-related death in the USA, with approximately 224,210 new cases and 159,260 related deaths in 2014. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of lung cancer is complex, and nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect, and hence prevent, lung injury at an early stage would therefore have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissues may be affected and suffer decreased functionality as a side effect of radiation therapy. This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases allow their elasticity, ventilation, and texture features to be estimated, providing discriminatory descriptors that can be used for early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed from three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. The dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from the radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of co-aligning the intra-patient scans; this step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters so that the functionality features of the lung fields can be extracted accurately. The developed registration framework also supports the evaluation and gated control of the radiotherapy through motion-estimation analysis before and after the therapy dose. Finally, the radiation-induced lung injury detection framework is introduced, combining the two previous image processing and analysis steps with feature estimation and classification. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that can accurately model the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are calculated from the deformation fields obtained from the 4D-CT lung registration, which map lung voxels between successive CT scans in the respiratory cycle; they describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues' elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in a classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
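
    The ventilation and elasticity descriptors mentioned above can be derived from a deformation field's gradient. The NumPy sketch below uses a random placeholder field and a Green-Lagrange strain formulation; the dissertation's exact feature definitions may differ.

```python
import numpy as np

def ventilation_and_strain(u):
    """u: deformation field of shape (3, Z, Y, X) mapping voxels between
    successive respiratory phases (placeholder data below).
    Returns the Jacobian determinant (local volume change, a ventilation
    surrogate) and the maximal principal strain per voxel."""
    # Spatial gradient of each displacement component: grad[i, j] = d u_i / d x_j.
    grad = np.stack([np.stack(np.gradient(u[i]), axis=0) for i in range(3)], axis=0)
    # Deformation gradient F = I + grad(u), arranged as (..., 3, 3) for linalg calls.
    F = np.eye(3) + np.moveaxis(grad, (0, 1), (-2, -1))
    jacobian = np.linalg.det(F)                      # >1 expansion, <1 compression
    # Green-Lagrange strain E = (F^T F - I) / 2; its largest eigenvalue per voxel
    # summarizes the tissue's elastic deformation.
    E = 0.5 * (np.swapaxes(F, -2, -1) @ F - np.eye(3))
    max_strain = np.linalg.eigvalsh(E)[..., -1]
    return jacobian, max_strain

u = np.random.normal(scale=0.05, size=(3, 20, 64, 64))  # hypothetical field
jac, strain = ventilation_and_strain(u)
```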

    Developments and investigations in thyroid imaging: 124I PET/CT, 3D ultrasound, and fusion of nuclear medicine and sonographic images

    Despite its already high standard, established thyroid imaging still has limiting factors, so methodological and technical innovations appear both sensible and necessary. This habilitation thesis presents the development and evaluation of new concepts for thyroid diagnostics in three areas: *Low-activity 124I PET with low-dose CT (i) surpasses the spatial resolution of conventional scintigraphy and improves the detectability of small structures and anatomical details; (ii) the CT dataset acquired in parallel yields additional information about the thyroid and its relationship to neighbouring organs; and (iii) pre-therapeutic uptake measurements become possible as part of the preparation for radioiodine therapy. *3D ultrasound (3D-US) enables (i) a gapless scan of the thyroid and (ii) complete digital archiving of the examination volume in the PACS. On cross-sectional imaging workstations this provides the advantages of (iii) a second reading and (iv) side-by-side comparison with previous 3D-US studies and other cross-sectional modalities; in addition, (v) subsequent data processing is possible. *Incorporating ultrasound into the concept of fusion or hybrid imaging has shown that morphological sonographic information can be spatially registered with and visually overlaid on functional nuclear medicine image data. The clinical potential of these methods on the one hand, and the limitations described on the other, have implications for the future. First, the devices and techniques require further development and their integration into information systems must be optimized; beyond that, the methods must evolve towards time-saving, simple usability in order to enable an efficient clinical workflow and to conserve staff resources.

    Virtual reality simulators: an aid in surgical training

    In recent years, the need for training in laparoscopy has driven the creation of surgical simulators of different designs and varying complexity, many of which are now commercially available. Each has its own design, structure and training programme. The latest evolution is the use of virtual reality, which mimics real actions and exercises the various skills acquired during training courses and surgical experience in the operating field. Safe and efficient training is necessary both during a surgical residency and in continuing education, and virtual reality simulation can offer an almost unlimited number of surgical scenarios. The latest generation of virtual reality surgical simulators provides graduated training paths that guide the resident from acquiring fine manual skills on individual tasks up to the complete ("full task") procedure of an operation, for example a cholecystectomy. In this study we tested the validity of a gradual, step-by-step acquisition of manual technique compared with practising directly on the complete procedure, using a virtual reality simulator, the LapMentor® (Simbionix, Israel). Residents in general surgery with no previous laparoscopic experience achieved better results on the complete laparoscopic cholecystectomy procedure when they progressed step by step during the course than those who performed the full-task procedure directly. Our study confirms that solid experience and knowledge of basic technical skills in laparoscopic training improve performance on the complete procedure.