19 research outputs found

    Retrieval System for Patent Images

    Patent information and images play important roles in describing the novelty of an invention. However, current patent collections do not support image retrieval, and patent images have become almost unsearchable. This paper presents a short review of existing research work and open challenges in the patent image retrieval domain. The review shows that the image feature extraction step is critical for successfully matching query and database images. To improve the current feature extraction step in patent image retrieval, we propose a patent image retrieval approach based on the Affine-SIFT technique. A comparison with existing feature extraction techniques is presented to assess the potential of the proposed approach.
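    Affine-SIFT is named but not detailed in the abstract. As a rough, hedged illustration of the general idea, the Python/OpenCV sketch below simulates a few affine views of a patent drawing and pools SIFT descriptors from each view; the tilt/rotation grid and all parameters are assumptions, not the authors' configuration.

    # Hedged sketch of Affine-SIFT-style feature extraction (OpenCV).
    # The tilt/rotation grid is illustrative, not the authors' exact setup.
    import cv2
    import numpy as np

    def affine_sift_features(gray):
        """Simulate a few affine views of a drawing and pool SIFT descriptors."""
        sift = cv2.SIFT_create()
        keypoints, descriptors = [], []
        for tilt in (1.0, 2.0):                  # 1.0 keeps the original view
            for angle in (0, 45, 90, 135):       # in-plane rotations of the simulated camera
                h, w = gray.shape
                M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
                view = cv2.warpAffine(gray, M, (w, h))
                if tilt != 1.0:                  # tilt approximated by axis compression
                    view = cv2.GaussianBlur(view, (3, 3), 0.8 * tilt)
                    view = cv2.resize(view, (w, max(1, int(h / tilt))))
                kp, desc = sift.detectAndCompute(view, None)
                if desc is not None:
                    keypoints.extend(kp)
                    descriptors.append(desc)
        return keypoints, np.vstack(descriptors) if descriptors else None

    # Query and database descriptors could then be matched with a brute-force
    # matcher and a ratio test to rank candidate patent images.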

    TPCNN: Two-path convolutional neural network for tumor and liver segmentation in CT images using a novel encoding approach

    Automatic liver and tumor segmentation in CT images is crucial in numerous clinical applications, such as postoperative assessment, surgical planning, and pathological diagnosis of hepatic diseases. However, considerable difficulties remain due to the fuzzy boundaries, irregular shapes, and complex tissues of the liver. In this paper, a simple but powerful strategy based on a cascade convolutional neural network is presented for liver and tumor segmentation that addresses these challenges. First, the input image is normalized using the Z-score algorithm; the normalized image provides more information about the boundaries of the tumor and liver. In addition, the Local Direction of Gradient (LDOG), a novel encoding algorithm, is proposed to highlight key features inside the image. The proposed encoding is highly effective in recognizing the border of the liver, even in regions close to touching organs. A cascade CNN structure that extracts both local and semi-global features is then used, taking the original image and the two derived images as input. Rather than using a complex deep CNN model with many hyperparameters, we employ a simple but effective model to decrease training and testing time. Our technique outperforms state-of-the-art works in terms of segmentation accuracy and efficiency.
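    The Z-score normalization step is standard, but LDOG is the authors' own encoding and is not defined in the abstract. The NumPy sketch below shows the Z-score step and a generic gradient-direction quantization as a hedged stand-in for the encoding idea; the bin count and other parameters are assumptions.

    # Sketch of the preprocessing idea: Z-score normalization of a CT slice plus a
    # generic gradient-direction encoding (a stand-in for LDOG, which is not
    # specified in the abstract). NumPy only; parameters are illustrative.
    import numpy as np

    def z_score_normalize(ct_slice):
        """Shift the slice to zero mean and unit variance."""
        mu, sigma = ct_slice.mean(), ct_slice.std()
        return (ct_slice - mu) / (sigma + 1e-8)

    def gradient_direction_encoding(ct_slice, n_bins=8):
        """Quantize the local gradient direction into n_bins integer codes per pixel."""
        gy, gx = np.gradient(ct_slice.astype(np.float64))
        angle = np.arctan2(gy, gx)                             # range (-pi, pi]
        codes = ((angle + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
        return codes

    # The original slice, the normalized slice, and the encoded slice would then
    # form the multi-channel input to the cascade CNN described above.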

    Lung infection segmentation for COVID-19 pneumonia based on a cascade convolutional network from CT images

    The COVID-19 pandemic is a global, national, and local public health emergency that has caused significant outbreaks in all countries and regions, affecting both males and females around the world. Automated detection of lung infections and their boundaries from medical images offers great potential to augment patient treatment and healthcare strategies for tackling COVID-19 and its impacts. Detecting this disease from lung CT scan images is perhaps one of the fastest ways to diagnose patients. However, finding infected tissues and segmenting them from CT slices faces numerous challenges, including similar adjacent tissues, vague boundaries, and erratic infections. To overcome these problems, we propose a two-route convolutional neural network (CNN) that extracts global and local features to detect and classify COVID-19 infection from CT images. Each pixel of the image is classified as normal or infected tissue. To improve classification accuracy, we use two different strategies, fuzzy c-means clustering and local directional pattern (LDN) encoding, to represent the input image in different ways, which allows the network to find more complex patterns in the image. To overcome overfitting problems due to the small number of samples, an augmentation approach is utilized. The results demonstrate that the proposed framework achieved a precision of 96%, a recall of 97%, F-score, an average surface distance (ASD) of 2.8 ± 0.3 mm, and a volume overlap error (VOE) of 5.6 ± 1.2%.
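    Fuzzy c-means clustering is one of the two input-representation strategies named above. A minimal NumPy sketch of fuzzy c-means on pixel intensities is given below; the cluster count, fuzzifier m, and iteration budget are assumptions, not the paper's settings.

    # Minimal fuzzy c-means sketch on CT pixel intensities (NumPy only).
    # Cluster count, fuzzifier m, and iteration count are assumptions.
    import numpy as np

    def fuzzy_c_means(values, n_clusters=3, m=2.0, n_iter=50, eps=1e-8):
        """Return soft memberships (N x C) and cluster centers for 1-D intensity data."""
        values = values.reshape(-1, 1).astype(np.float64)               # (N, 1)
        rng = np.random.default_rng(0)
        u = rng.random((values.shape[0], n_clusters))
        u /= u.sum(axis=1, keepdims=True)                               # memberships sum to 1
        for _ in range(n_iter):
            um = u ** m
            centers = (um.T @ values) / (um.sum(axis=0)[:, None] + eps)  # (C, 1)
            dist = np.abs(values - centers.T) + eps                      # (N, C)
            inv = dist ** (-2.0 / (m - 1.0))
            u = inv / inv.sum(axis=1, keepdims=True)                     # standard FCM update
        return u, centers

    # Example: memberships, centers = fuzzy_c_means(ct_slice.ravel())
    # The membership maps can be reshaped to the slice size and used as extra
    # input channels alongside the LDN-encoded image.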

    A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos.

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variations in aerial videos; among these, the spatiotemporal saliency detection approach has reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method based on spatiotemporal saliency and discriminative online learning is proposed to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions and is extracted by frame differencing combined with the Sauvola local adaptive thresholding algorithm. Spatial saliency is used to represent the target appearance details in candidate moving regions: SLIC superpixel segmentation together with color and moment features is used to compute the feature uniqueness and spatial compactness saliency measurements. Because this is a time-consuming process, a parallel algorithm was developed to optimize the saliency detection and distribute it across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on the spatiotemporal saliency; this sample model is incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrate that the proposed visual tracking method is effective and computationally efficient compared with state-of-the-art methods.
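    The temporal-saliency step combines frame differencing with Sauvola local adaptive thresholding. The sketch below illustrates that step with scikit-image's threshold_sauvola; the window size and k are illustrative values, not those used in the study.

    # Sketch of the temporal-saliency step: frame differencing followed by Sauvola
    # local adaptive thresholding (scikit-image). Window size and k are illustrative.
    import numpy as np
    from skimage.filters import threshold_sauvola

    def temporal_saliency(prev_frame, curr_frame, window_size=25, k=0.2):
        """Return a binary map of candidate moving-target regions for two grayscale frames."""
        diff = np.abs(curr_frame.astype(np.float64) - prev_frame.astype(np.float64))
        thresh = threshold_sauvola(diff, window_size=window_size, k=k)   # per-pixel threshold
        return diff > thresh                                             # True = candidate motion

    # The resulting candidate mask would then be refined with the spatial saliency
    # computed from SLIC superpixels, color, and moment features.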

    Expression of tolerogenic dendritic cells in the small intestinal tissue of patients with celiac disease

    Tolerogenic dendritic cells (tolDCs) play an important role in the regulation of inflammation in autoimmune diseases such as celiac disease (CeD). Dendritic cells express CD207, CD11c, and CD103 on their surface. In addition to these markers, tolDCs can express the immune-regulating enzyme indoleamine 2,3-dioxygenase (IDO). This study aimed to determine the mRNA and protein expression of the CD11c, CD103, and CD207 markers, as well as IDO gene expression, in the intestinal tissue of CeD patients compared with healthy individuals. Duodenal biopsies were collected from 60 CeD patients and 60 controls. Total RNA was extracted, and gene expression analysis was performed using the SYBR® Green real-time PCR method. Additionally, biopsy specimens were paraffin-embedded, and protein expression of CD11c+, CD207+, and CD103+ was evaluated using immunohistochemistry (IHC). Gene expression levels of CD11c (P = 0.045), CD103 (P < 0.001), CD207 (P < 0.001), and IDO (P = 0.01) were significantly increased in CeD patients compared with the control group. However, only CD103 protein expression was found to be significantly higher in CeD patients than in the control group (P < 0.001). The results of this study showed that the expression levels of the CD11c, CD103, CD207, and IDO markers were higher in CeD patients than in controls, indicating the effort of dendritic cells to counterbalance the gliadin-triggered abnormal immune responses in CeD patients.
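    The abstract does not state how relative mRNA expression was quantified; the widely used 2^(-ΔΔCt) method is sketched below purely as an illustration, with hypothetical Ct values and a hypothetical reference gene.

    # Illustrative 2^(-delta-delta Ct) relative-expression calculation. The abstract
    # does not specify the quantification method; this is only the common approach,
    # shown with hypothetical Ct values and a hypothetical reference gene.
    def fold_change(ct_target_patient, ct_ref_patient, ct_target_control, ct_ref_control):
        delta_ct_patient = ct_target_patient - ct_ref_patient
        delta_ct_control = ct_target_control - ct_ref_control
        return 2 ** (-(delta_ct_patient - delta_ct_control))

    # Hypothetical example for one marker in a CeD biopsy vs. a control biopsy:
    # fold_change(24.1, 18.0, 26.3, 18.2)  # > 1 means higher expression in the patient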

    Visual comparison for moving target detection methods.

    The first row shows the original images, the second row the frame-difference method, and the third row our proposed method.

    Segmented sub-regions using SLIC.

    (a) A candidate mask (CM) region; (b) sub-region generation based on the proposed parallel SLIC segmentation algorithm.