
    Biomimetic Design for Efficient Robotic Performance in Dynamic Aquatic Environments - Survey

    This manuscript is a review of published articles on edge detection. It first provides theoretical background and then reviews a wide range of edge detection methods in different categories. The review also studies the relationships between categories and presents evaluations regarding their application, performance, and implementation. Structurally, edge detection methods are a combination of image smoothing and image differentiation, plus post-processing for edge labelling. Image smoothing involves filters that reduce noise, regularize the numerical computation, and provide a parametric representation of the image that works as a mathematical microscope, allowing the image to be analyzed at different scales and increasing the accuracy and reliability of edge detection. Image differentiation provides information about intensity transitions in the image, which is necessary to represent the position, strength, and orientation of edges. Edge labelling calls for post-processing to suppress false edges, link dispersed ones, and produce a uniform contour of objects.
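    To make the three-stage structure concrete, the following is a minimal sketch (not taken from the review) of the smoothing, differentiation, and edge-labelling pipeline using SciPy; the kernel width and threshold are illustrative assumptions.

```python
# Minimal smoothing -> differentiation -> labelling pipeline; sigma and thresh
# are illustrative assumptions, not values from the review.
import numpy as np
from scipy import ndimage

def simple_edge_map(image: np.ndarray, sigma: float = 1.5, thresh: float = 0.1) -> np.ndarray:
    """Smooth, differentiate, and label edges in a grayscale image (float array)."""
    # 1. Image smoothing: the Gaussian filter suppresses noise and sets the analysis scale.
    smoothed = ndimage.gaussian_filter(image, sigma=sigma)
    # 2. Image differentiation: first derivatives give edge position, strength and orientation.
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    magnitude = np.hypot(gx, gy)
    # 3. Edge labelling (post-processing): keep only strong responses to suppress false edges.
    return magnitude > thresh * magnitude.max()
```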

    A novel edge detection method based on an efficient Gaussian binomial filter

    Most basic and recent image edge detection methods are based on exploiting spatial high-frequency content to efficiently localize boundaries and image discontinuities. These approaches are highly sensitive to noise, and their performance decreases as the noise level increases. This research suggests a novel and robust approach to edge detection based on a binomial Gaussian filter. We propose a Gaussian-filter-based scheme that employs low-pass filtering to reduce noise and gradient-based image differentiation to recover edges. The results presented illustrate that the proposed approach outperforms basic edge detection methods. The global scheme can be implemented efficiently and at high speed using the proposed binomial Gaussian filter.
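    As a rough illustration of the idea, the sketch below approximates the Gaussian low-pass stage with normalized binomial coefficients and recovers edges with a simple gradient; the filter order and threshold are assumptions for illustration, not the paper's exact design.

```python
# Binomial approximation of a Gaussian low-pass filter followed by
# gradient-based edge recovery; order and thresh are illustrative assumptions.
import numpy as np
from scipy.signal import convolve2d

def binomial_kernel(order: int = 4) -> np.ndarray:
    """Normalized 1-D binomial coefficients (a row of Pascal's triangle) ~ a Gaussian."""
    k = np.array([1.0])
    for _ in range(order):
        k = np.convolve(k, [1.0, 1.0])
    return k / k.sum()

def binomial_edges(image: np.ndarray, order: int = 4, thresh: float = 0.2) -> np.ndarray:
    b = binomial_kernel(order)
    # Separable binomial smoothing (rows then columns) acts as the Gaussian low-pass stage.
    smoothed = convolve2d(convolve2d(image, b[None, :], mode="same"),
                          b[:, None], mode="same")
    # Central-difference gradients recover edges from the smoothed image.
    gy, gx = np.gradient(smoothed)
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()
```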

    Edge Detection via Edge-Strength Estimation Using Fuzzy Reasoning and Optimal Threshold Selection Using Particle Swarm Optimization

    An edge is a set of connected pixels lying on the boundary between two regions of an image that differ in pixel intensity. Accordingly, several gradient-based edge detectors have been developed that measure local changes in gray value; a pixel is declared an edge pixel if the change is significant. However, the minimum intensity change that may be considered significant remains an open question. Therefore, it makes sense to calculate the edge-strength at every pixel on the basis of the intensity gradient at that pixel. This edge-strength gives a measure of the potential of a pixel to be an edge pixel. In this paper, we propose to use a set of fuzzy rules to estimate the edge-strength. This is followed by selecting a threshold; only pixels having edge-strength above the threshold are considered edge pixels. The threshold is selected such that the overall probability of error in identifying edge pixels, that is, the sum of the probability of misdetection and the probability of false alarm, is minimized. This minimization is achieved via particle swarm optimization (PSO). Experimental results demonstrate the effectiveness of the proposed edge detection method over some other standard gradient-based methods.
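    The sketch below condenses the two stages into toy form: a single sigmoid membership stands in for the paper's full fuzzy rule base, and a small particle swarm searches for the threshold minimizing a user-supplied error cost; all constants and the cost-function interface are illustrative assumptions.

```python
# Toy edge-strength estimation plus PSO threshold search; the membership shape,
# PSO constants, and cost(threshold) interface are illustrative assumptions.
import numpy as np
from scipy import ndimage

def edge_strength(image, sigma=1.0, midpoint=0.2, steepness=25.0):
    """Edge-strength in [0, 1] from a sigmoid membership over the gradient magnitude."""
    g = ndimage.gaussian_gradient_magnitude(image, sigma=sigma)
    g = g / (g.max() + 1e-12)
    return 1.0 / (1.0 + np.exp(-steepness * (g - midpoint)))

def pso_threshold(cost, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize cost(threshold) over [0, 1] with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, n_particles)   # particle positions (candidate thresholds)
    v = np.zeros(n_particles)                # particle velocities
    pbest, pbest_cost = x.copy(), np.array([cost(t) for t in x])
    gbest = pbest[pbest_cost.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, 1.0)
        costs = np.array([cost(t) for t in x])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()]
    return gbest
```

    A final edge map would then be obtained as edge_strength(img) > pso_threshold(cost_fn), where cost_fn combines the misdetection and false-alarm probabilities estimated against a reference edge map.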

    Edge-Detection-Based Segmentation of Cow Images Using the Canny Edge Detection Algorithm

    Abstract. The price of cattle is generally agreed upon through bargaining rather than being based on the weight of the cow being sold; most people rely on rough estimates. Formulas exist to calculate the weight, but they require the chest circumference and body length, which must be measured manually. In practice this is not easy because the cow is difficult to control, so a tool that can take these measurements easily is needed. This article represents the early stage of research to determine the weight of cows from acquired cow images, focusing on segmentation and image processing to determine the best edge detection approach for use in subsequent work. The acquired images were processed using five edge detection scenarios. The evaluation shows that Scenario 3 (Median Blur and Canny) gives the best result, with an MSE of 230.051 and a PSNR of 24.524 dB.
    Keywords: Edge Detection, Canny, Segmentation, Cow, Image Processing
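    A rough reconstruction of the best-performing scenario (median blur followed by Canny) and an MSE/PSNR evaluation is sketched below with OpenCV; the kernel size, Canny thresholds, and the choice of the original image as the reference are assumptions rather than the paper's exact settings.

```python
# Median blur + Canny with MSE/PSNR evaluation; all parameters and the choice
# of reference image are illustrative assumptions.
import cv2
import numpy as np

def median_canny(gray: np.ndarray, ksize: int = 5, low: int = 50, high: int = 150):
    """Median-blur an 8-bit grayscale image, run Canny, and report MSE/PSNR vs. the input."""
    blurred = cv2.medianBlur(gray, ksize)        # suppress salt-and-pepper noise
    edges = cv2.Canny(blurred, low, high)        # binary edge map (0 or 255)
    mse = float(np.mean((gray.astype(np.float64) - edges.astype(np.float64)) ** 2))
    psnr = float(cv2.PSNR(gray, edges))
    return edges, mse, psnr
```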

    A New Robust Multi-Focus Image Fusion Method

    In today's digital era, multi-focus image fusion is a critical problem in computational image processing and has emerged as a significant research subject in the field of information fusion. The primary objective of multi-focus image fusion is to merge the graphical information from several images with different focus points into a single image with no loss of information. We provide a robust image fusion method that can combine two or more degraded input images into a single clear output image carrying the detailed information of the fused inputs; the in-focus content from each input image is combined to create the fused output. As is widely acknowledged, the two key components of image fusion are the activity-level measurement and the fusion rule. In most common fusion methods, such as wavelet-based approaches, the activity-level values are computed in either the spatial domain or the transform domain: local filters extract high-frequency characteristics, and the brightness information computed from the different source images is compared under hand-crafted rules to produce brightness/focus maps. The focus map then provides integrated clarity information, which is useful for a variety of multi-focus fusion problems, such as fusion across several modalities. Designing both components by hand, however, is difficult. Consequently, this paper offers a strategy for achieving good fusion performance: a Convolutional Neural Network (CNN) is trained on both high-quality and blurred image patches to represent the mapping. The main advantage of this idea is that the CNN model can jointly provide both the activity-level measurement and the fusion rule, overcoming the limitations of previous fusion procedures. Multi-focus image fusion is demonstrated on microscopic images, medical imaging, and computer visualization tasks. Its benefits include improved image information, greater precision in target detection and identification, face recognition, a reduced workload, and enhanced system consistency.
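    For intuition, the sketch below shows the classical focus-map idea using a local Laplacian-energy activity measure in place of the trained CNN described above; the window size and the hard pixel-wise selection rule are illustrative assumptions.

```python
# Focus-map-based fusion of two multi-focus images using Laplacian energy as
# the activity-level measure (a classical stand-in for the CNN); the window
# size is an illustrative assumption.
import numpy as np
from scipy import ndimage

def fuse_multifocus(img_a: np.ndarray, img_b: np.ndarray, window: int = 9) -> np.ndarray:
    """Fuse two grayscale multi-focus images by picking the locally sharper pixel."""
    def activity(img):
        # The squared Laplacian summed over a local window approximates the
        # 'activity level' (sharpness) at each pixel.
        lap = ndimage.laplace(img.astype(np.float64))
        return ndimage.uniform_filter(lap ** 2, size=window)
    focus_map = activity(img_a) >= activity(img_b)   # True where img_a is sharper
    return np.where(focus_map, img_a, img_b)
```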

    Microcalcifications Detection Using Image And Signal Processing Techniques For Early Detection Of Breast Cancer

    Breast cancer has become a severe health problem around the world, and early diagnosis is an important factor in surviving the disease. The earliest signs of potential breast cancer distinguishable by current screening techniques are microcalcifications (MCs). MCs are small crystals of calcium apatite, ranging from single crystals of 0.1 mm to 0.5 mm up to groups a few centimeters in diameter. They are the first indication of breast cancer in more than 40% of all breast cancer cases, making their detection critical. This dissertation proposes several segmentation techniques for detecting and isolating point microcalcifications: Otsu's method, balanced histogram thresholding, the iterative method, maximum entropy, moment preserving, and a genetic algorithm. These methods were applied to medical images to detect microcalcifications. Results from the application of these techniques are presented and their efficiency for early detection of breast cancer is discussed. The dissertation also explains the theories and algorithms underlying these techniques.
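    As a brief illustration of the simplest of the listed techniques, the sketch below applies Otsu's method to a grayscale patch with scikit-image; this is an implementation choice for illustration, not the dissertation's code.

```python
# Otsu thresholding of a grayscale mammogram patch; using scikit-image here is
# an illustrative implementation choice.
import numpy as np
from skimage.filters import threshold_otsu

def segment_microcalcifications(patch: np.ndarray) -> np.ndarray:
    """Binary mask of bright candidate microcalcifications via Otsu's global threshold."""
    t = threshold_otsu(patch)   # threshold maximizing between-class variance
    return patch > t            # bright structures above the threshold are candidates
```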

    Study and Development of Some Novel Image Segmentation Techniques

    Some fuzzy-technique-based segmentation methods are studied and implemented, and some fuzzy c-means (FCM) clustering based segmentation algorithms are developed in this thesis to suppress high and low uniform random noise. Fuzzy-rule-based segmentation methods were not developed because they are application dependent. In many cases, real-life images are affected by noise, and FCM clustering based segmentation does not give good results under such conditions. Various extensions of the FCM method for segmentation exist in the literature, but most of them modify the objective function and hence change the basic FCM algorithm available in MATLAB toolboxes. Efforts have therefore been made to develop FCM algorithms that achieve better segmentation without modifying the objective function. The fuzzy-technique-based segmentation methods that are studied and developed are summarized here.
    (A) Fuzzy edge detection based segmentation: Two fuzzy edge detection methods are studied and implemented for segmentation: (i) FIS based edge detection and (ii) the fast multilevel fuzzy edge detector (FMFED). (i) The fuzzy inference system (FIS) based edge detector consists of fuzzy inference rules defined such that the FIS output ("edges") is high only for pixels belonging to edges in the input image. Robustness to contrast and lighting variations was also taken into consideration while developing these rules. The output of the FIS based edge detector is compared with the results of the existing Sobel, LoG and Canny edge detectors. The algorithm is seen to be application dependent and time consuming. (ii) Fast multilevel fuzzy edge detector: To realise fast and accurate detection of edges, the FMFED algorithm is proposed. It first enhances the image contrast by means of a fast multilevel fuzzy enhancement algorithm using a simple transformation function based on two image thresholds. Second, the edges are extracted from the enhanced image by a two-stage edge detector that identifies edge candidates from local characteristics of the image and then determines the true edge pixels using an operator based on the extrema of the gradient values. Finally, the edge image is segmented by morphological edge linking.
    (B) FCM based segmentation: Three fuzzy clustering based segmentation methods are developed: (i) contrast-limited adaptive histogram equalization fuzzy c-means (CLAHEFCM), (ii) modified spatial fuzzy c-means (MSFCM), and (iii) neighbourhood attraction fuzzy c-means (NAFCM). (i) CLAHEFCM: This proposed algorithm presents a color segmentation process for low-contrast or unevenly illuminated images. It first enhances the contrast of the image using contrast-limited adaptive histogram equalization. The color space is then divided into a fixed number of clusters; the image is converted from the RGB color space to the LAB color space before clustering, which is performed with the fuzzy c-means algorithm. The image is segmented based on the color of a region, that is, areas having the same color are grouped together, and each pixel is assigned to the cluster to which it belongs the most. The method has been applied to a number of color test images and is observed to give good segmentation results. (ii) MSFCM: The proposed algorithm divides the color space into a fixed number of clusters and converts the image from RGB to LAB color space before clustering. A robust segmentation technique based on an extension of the traditional FCM clustering algorithm is proposed, in which the spatial information of each pixel is taken into consideration to obtain a noise-free segmentation result. The image is segmented based on the color of a region, and each pixel is assigned to the cluster to which it belongs the most. The method has been applied to color test images and its performance compared with FCM and FCM-based methods to show its superiority; it is observed to be an efficient and easy method for segmenting noisy images. (iii) NAFCM: A new algorithm based on IFCM neighbourhood attraction is used without changing the distance function of FCM, thereby avoiding an extra neural-network optimization step for adjusting the parameters of the distance function; it is called neighbourhood attraction FCM (NAFCM). During clustering, each pixel attempts to attract its neighbouring pixels towards its own cluster. This neighbourhood attraction depends on two factors: the pixel intensities (feature attraction) and the spatial position of the neighbours (distance attraction), which also depends on the neighbourhood structure. The NAFCM algorithm is tested on a synthetic image (chapter 6, figures 6.3-6.6) and a number of skin tumor images, and is observed to produce excellent clustering results under high-noise conditions when compared with other FCM-based clustering methods.
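    For reference, the sketch below implements the standard FCM update that the methods above build on, without the CLAHE, spatial, or neighbourhood-attraction extensions; the cluster count, fuzzifier, and stopping rule are illustrative assumptions.

```python
# Plain fuzzy c-means on flattened pixel features; n_clusters, m, and the
# stopping criteria are illustrative assumptions.
import numpy as np

def fcm(pixels: np.ndarray, n_clusters: int = 3, m: float = 2.0,
        max_iter: int = 100, tol: float = 1e-5, seed: int = 0) -> np.ndarray:
    """Fuzzy c-means on an (n_pixels, n_features) array; returns a hard label per pixel."""
    rng = np.random.default_rng(seed)
    n = pixels.shape[0]
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)                          # memberships sum to 1
    for _ in range(max_iter):
        um = u ** m
        centers = (um.T @ pixels) / um.sum(axis=0)[:, None]    # membership-weighted centres
        dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        new_u = 1.0 / (dist ** (2.0 / (m - 1.0)))              # standard FCM membership update
        new_u /= new_u.sum(axis=1, keepdims=True)
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return u.argmax(axis=1)
```

    For a color image, the pixels argument would be the LAB image reshaped to (height*width, 3), and the returned labels reshaped back to (height, width) to give the segmentation map.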

    AN AUTOMATED, DEEP LEARNING APPROACH TO SYSTEMATICALLY & SEQUENTIALLY DERIVE THREE-DIMENSIONAL KNEE KINEMATICS DIRECTLY FROM TWO-DIMENSIONAL FLUOROSCOPIC VIDEO

    Total knee arthroplasty (TKA), also known as total knee replacement, is a surgical procedure to replace damaged parts of the knee joint with artificial components. It aims to relieve pain and improve knee function. TKA can improve knee kinematics and reduce pain, but it may also cause altered joint mechanics and complications. Proper patient selection, implant design, and surgical technique are important for successful outcomes. Kinematics analysis plays a vital role in TKA by evaluating knee joint movement and mechanics. It helps assess surgery success, guides implant and technique selection, informs implant design improvements, detects problems early, and improves patient outcomes. However, evaluating the kinematics of patients using conventional approaches presents significant challenges. The reliance on 3D CAD models limits applicability, as not all patients have access to such models. Moreover, the manual and time-consuming nature of the process makes it impractical for timely evaluations. Furthermore, the evaluation is confined to laboratory settings, limiting its feasibility in various locations. This study aims to address these limitations by introducing a new methodology for analyzing in vivo 3D kinematics using an automated deep learning approach. The proposed methodology involves several steps, starting with image segmentation of the femur and tibia using a robust deep learning approach. Subsequently, 3D reconstruction of the implants is performed, followed by automated registration. Finally, efficient knee kinematics modeling is conducted. The final kinematics results showed potential for reducing workload and increasing efficiency. The algorithms demonstrated high speed and accuracy, which could enable real-time TKA kinematics analysis in the operating room or clinical settings. Unlike previous studies that relied on sponsorships and limited patient samples, this algorithm allows the analysis of any patient, anywhere, and at any time, accommodating larger subject populations and complete fluoroscopic sequences. Although further improvements can be made, the study showcases the potential of machine learning to expand access to TKA analysis tools and advance biomedical engineering applications
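    The automated registration step can be pictured with the skeletal sketch below, in which a 6-DOF implant pose is adjusted until its projected silhouette overlaps the deep-learning segmentation mask; project_silhouette is a hypothetical helper (no renderer is shown), and the Dice objective and Nelder-Mead optimizer are assumptions rather than the study's actual method.

```python
# Skeletal 2D/3D registration: optimize a 6-DOF pose so the projected implant
# silhouette matches the segmentation mask. `project_silhouette` is a
# hypothetical helper; the Dice objective and optimizer choice are assumptions.
import numpy as np
from scipy.optimize import minimize

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-9)

def register_pose(seg_mask: np.ndarray, implant_model, project_silhouette,
                  pose0=np.zeros(6)) -> np.ndarray:
    """Find the translation+rotation vector maximizing silhouette/segmentation overlap."""
    def cost(pose):
        rendered = project_silhouette(implant_model, pose, seg_mask.shape)
        return 1.0 - dice(rendered, seg_mask)              # minimize 1 - Dice overlap
    result = minimize(cost, pose0, method="Nelder-Mead")   # derivative-free pose search
    return result.x
```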