
    Face Recognition Under Varying Illumination

    This study is the result of a successful joint venture with my adviser, Prof. Dr. Muhittin Gökmen. I am thankful to him for his continuous assistance in preparing this project. Special thanks to the assistants of the Computer Vision Laboratory for their steady support and help with many topics related to the project.

    CVFC: Attention-Based Cross-View Feature Consistency for Weakly Supervised Semantic Segmentation of Pathology Images

    Histopathology image segmentation is the gold standard for diagnosing cancer and can indicate cancer prognosis. However, histopathology image segmentation requires high-quality masks, so many studies now use image-level labels to achieve pixel-level segmentation and reduce the need for fine-grained annotation. To address this problem, we propose CVFC, an attention-based cross-view feature consistency end-to-end pseudo-mask generation framework. Specifically, CVFC is a three-branch joint framework composed of two ResNet38 branches and one ResNet50 branch, where each independent branch integrates multi-scale feature maps to generate a class activation map (CAM); in each branch, the CAM is resized through down-sampling and expansion; the middle branch projects the feature matrix into query and key feature spaces and generates a feature-space perception matrix through a connection layer and inner product to adjust and refine the CAM of each branch; finally, a feature consistency loss and a feature cross loss optimize the parameters of CVFC in a co-training mode. Extensive experiments yield an IoU of 0.7122 and an fwIoU of 0.7018 on the WSSS4LUAD dataset, outperforming HistoSegNet, SEAM, C-CAM, WSSS-Tissue, and OEEM. Comment: Submitted to BIBM202
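    Since the abstract only sketches the cross-view refinement step, the following PyTorch fragment is a rough, hypothetical illustration of the general idea it describes (two branches producing CAMs, a query/key affinity computed across views, and a consistency loss). Module names, channel sizes, and the loss form are assumptions, not the authors' code.

```python
# Hypothetical sketch of a cross-view CAM refinement step (not the CVFC code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossViewRefiner(nn.Module):
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.classifier = nn.Conv2d(channels, num_classes, kernel_size=1)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor):
        b, _, h, w = feat_a.shape
        cam_a = self.classifier(feat_a)               # (B, K, H, W) class activation map
        cam_b = self.classifier(feat_b)
        q = self.query(feat_a).flatten(2)             # (B, C', HW)
        k = self.key(feat_b).flatten(2)               # (B, C', HW)
        affinity = torch.softmax(q.transpose(1, 2) @ k, dim=-1)   # (B, HW, HW)
        # refine branch A's CAM using the affinity computed across the two views
        cam_b_flat = cam_b.flatten(2)                 # (B, K, HW)
        cam_a_refined = (cam_b_flat @ affinity.transpose(1, 2)).view(b, -1, h, w)
        return cam_a, cam_b, cam_a_refined

def consistency_loss(cam_a, cam_a_refined):
    """L1 consistency between a branch's CAM and its cross-view refinement (assumed form)."""
    return F.l1_loss(torch.sigmoid(cam_a), torch.sigmoid(cam_a_refined))
```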

    Multi-Head Attention Mechanism Learning for Cancer New Subtypes and Treatment Based on Cancer Multi-Omics Data

    Due to the high heterogeneity and clinical characteristics of cancer, there are significant differences in multi-omics data and clinical features among subtypes of different cancers. Therefore, the identification and discovery of cancer subtypes are crucial for the diagnosis, treatment, and prognosis of cancer. In this study, we propose a generalizable framework based on attention mechanisms for unsupervised contrastive learning (AMUCL) to analyze cancer multi-omics data for the identification and characterization of cancer subtypes. The AMUCL framework includes an unsupervised multi-head attention mechanism that deeply extracts multi-omics data features. Importantly, a decoupled contrastive learning model (DMACL) based on a multi-head attention mechanism is proposed to learn multi-omics data features, cluster them, and identify new cancer subtypes. This unsupervised contrastive learning method clusters subtypes by calculating the similarity between samples in the feature space and the sample space of multi-omics data. Compared to 11 other deep learning models, the DMACL model achieved a C-index of 0.002, a Silhouette score of 0.801, and a Davies-Bouldin score of 0.38 on a single-cell multi-omics dataset. On a cancer multi-omics dataset, the DMACL model obtained a C-index of 0.016, a Silhouette score of 0.688, and a Davies-Bouldin score of 0.46, and produced the most reliable cancer subtype clustering results for each type of cancer. Finally, we used the DMACL model in the AMUCL framework to reveal six cancer subtypes of AML. By analyzing the GO functional enrichment, subtype-specific biological functions, and GSEA of AML, we further enhanced the interpretability of cancer subtype analysis based on the generalizable AMUCL framework.
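    A minimal sketch of the two ingredients the abstract names, a multi-head attention encoder over multi-omics features and a contrastive objective between two views, is given below. The token layout, embedding sizes, and the use of a standard NT-Xent loss (as a stand-in for the decoupled loss in DMACL) are assumptions; embeddings could then be clustered, e.g. with sklearn.cluster.KMeans, to obtain candidate subtypes.

```python
# Hypothetical sketch: multi-head attention encoder + contrastive loss (not the AMUCL code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionEncoder(nn.Module):
    def __init__(self, in_dim: int, embed_dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.proj = nn.Linear(in_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.head = nn.Linear(embed_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_omics, in_dim) -- one token per omics layer (an assumption)
        h = self.proj(x)
        h, _ = self.attn(h, h, h)                     # multi-head self-attention
        return F.normalize(self.head(h.mean(dim=1)), dim=-1)   # (batch, embed_dim)

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Standard NT-Xent contrastive loss between two views of the same samples."""
    z = torch.cat([z1, z2], dim=0)                    # (2N, d)
    sim = z @ z.t() / tau
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))             # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```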

    Understanding CNNs from excitations

    For instance-level explanation, in order to reveal the relations between high-level semantics and detailed spatial information, this paper proposes a novel cognitive approach to neural networks, named PANE. Under the guidance of PANE, a novel saliency map representation method, named IOM, is proposed for CNN-like models. We compare IOM with eight state-of-the-art saliency map representation methods. The experimental results show that IOM far outperforms the baselines. This work may bring a new perspective to understanding deep neural networks.
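    The abstract does not describe how IOM is computed, so no attempt is made to reproduce it here. As a generic point of comparison only, the sketch below shows a standard gradient-times-activation saliency map for a CNN, one common way of relating class-level semantics back to spatial locations; the layer choice and model are illustrative assumptions.

```python
# Generic gradient-times-activation saliency (a common baseline, not the paper's IOM).
import torch

def activation_saliency(model, layer, image: torch.Tensor, class_idx: int):
    feats = {}
    def hook(_, __, output):
        feats['act'] = output
        output.retain_grad()                  # keep the gradient of this activation
    handle = layer.register_forward_hook(hook)
    score = model(image.unsqueeze(0))[0, class_idx]
    score.backward()
    handle.remove()
    act = feats['act']                        # (1, C, H, W)
    saliency = (act * act.grad).sum(dim=1).clamp(min=0)   # (1, H, W)
    return saliency / (saliency.max() + 1e-8)

# Illustrative usage with a pretrained ResNet (hypothetical choices):
# from torchvision.models import resnet50, ResNet50_Weights
# model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()
# sal = activation_saliency(model, model.layer4, torch.randn(3, 224, 224), class_idx=0)
```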

    Designing of Intelligent Parking Lot Based On MQTT

    With the development of the economy and the improvement of people's living standards, cars have become an inseparable part of daily life, and the contradiction between the limited number of parking spaces and the increasing demand for parking is becoming more and more prominent. It is therefore necessary to design an intelligent parking system. This paper analyzes the drawbacks of traditional parking systems and designs the main functions and solutions of an intelligent parking lot. It presents the overall architecture of the system and discusses the key technologies: ZigBee networking, the MQTT protocol, Node.js, and mobile client technology. The paper proposes an effective approach to the urban parking problem.
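    As a hedged illustration of the MQTT part of such a system, the sketch below has a sensor node publish per-space occupancy while a backend subscribes to the lot's topics. The broker address, topic layout, payload format, and the paho-mqtt (>= 2.0) dependency are assumptions, not taken from the paper.

```python
# Minimal publish/subscribe sketch for an MQTT-based parking lot (illustrative only).
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"    # hypothetical broker address
TOPIC = "parking/lot1/space/+"   # hypothetical topic: one subtopic per parking space

def on_message(client, userdata, msg):
    space_id = msg.topic.rsplit("/", 1)[-1]
    state = json.loads(msg.payload)
    print(f"space {space_id}: {'occupied' if state['occupied'] else 'free'}")

# Backend: subscribe to all spaces in the lot and update availability.
backend = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
backend.on_message = on_message
backend.connect(BROKER, 1883)
backend.subscribe(TOPIC)
backend.loop_start()

# Sensor node (e.g. behind a ZigBee gateway): publish occupancy for one space.
sensor = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
sensor.connect(BROKER, 1883)
sensor.publish("parking/lot1/space/17", json.dumps({"occupied": True}), qos=1)
```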

    Local line derivative pattern for face recognition

    In this paper, we propose a novel face descriptor for face recognition, named Local Line Derivative Pattern (LLDP). High-order derivative images in two directions are obtained by convolving the original images with Sobel masks. A revised binary coding function is proposed, together with three standards for arranging the weights. Based on these standards, the weights of a line neighborhood in the two directions are arranged. The LLDP labels in the two directions are calculated with the proposed binary coding function and weights. The labeled image is divided into blocks, from which spatial histograms are extracted separately and concatenated into an overall histogram used as the feature for recognition. Experiments on the FERET and Extended Yale B databases show the superior performance of the proposed LLDP compared to other existing LBP-based methods. The results show that LLDP is robust against expression, illumination, and aging variations.
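    Because the abstract does not give the paper's specific line-neighborhood weights or coding function, the sketch below only illustrates the general pipeline it describes: Sobel derivative images, a per-pixel binary code, and concatenated block histograms. The 3x3 LBP-style coding and the 8x8 block grid are stand-in assumptions, not the LLDP definition.

```python
# Rough pipeline sketch: derivative images + local binary coding + block histograms.
import numpy as np
import cv2

def derivative_images(gray: np.ndarray):
    dx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # horizontal derivative image
    dy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # vertical derivative image
    return dx, dy

def binary_code(img: np.ndarray) -> np.ndarray:
    """8-neighbour sign coding around each pixel (standard LBP as a stand-in)."""
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    code = np.zeros_like(center, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (oy, ox) in enumerate(offsets):
        neighbour = img[1+oy:h-1+oy, 1+ox:w-1+ox]
        code |= (neighbour >= center).astype(np.uint8) << bit
    return code

def block_histogram(code: np.ndarray, grid=(8, 8)) -> np.ndarray:
    """Concatenate 256-bin histograms of non-overlapping blocks."""
    hists = []
    for rows in np.array_split(code, grid[0], axis=0):
        for block in np.array_split(rows, grid[1], axis=1):
            hists.append(np.bincount(block.ravel(), minlength=256))
    return np.concatenate(hists).astype(np.float32)

# Illustrative usage on a grayscale face image:
# face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
# dx, dy = derivative_images(face)
# descriptor = np.hstack([block_histogram(binary_code(d)) for d in (dx, dy)])
```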