22 research outputs found

    Genetic Alterations in Primary Gastric Carcinomas Correlated with Clinicopathological Variables by Array Comparative Genomic Hybridization

    Genetic alterations have been recognized as an important event in the carcinogenesis of gastric cancer (GC). We conducted high-resolution bacterial artificial chromosome array comparative genomic hybridization to elucidate genomic alterations in greater detail and to establish patterns of DNA copy-number changes associated with distinct clinical variables in GC. Our results showed correlations between novel amplified or deleted regions and clinical status. Copy-number gains were frequently detected at 1p, 5p, 7q, 8q, 11p, 16p, 20p, and 20q, and losses at 1p, 2q, 4q, 5q, 7q, 9p, 14q, and 18q. Losses at 4q23, 9p23, 14q31.1, or 18q21.1, as well as a gain at 20q12, were correlated with tumor-node-metastasis (TNM) stage. Losses at 9p23 or 14q31.1 were associated with lymph node status. Metastasis was associated with losses at 4q23 or 4q28.2, and differentiation with losses at 4q15.2, 4q21.21, 4q28.2, or 14q31.1. A notable finding of this study is that losses at 4q or 14q could be employed in evaluating the metastatic status of GC. Our results provide a resource for the molecular cytogenetic events in GC, as well as clues in the hunt for genes associated with GC.
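    As a minimal illustration of how gains and losses like those above are typically derived from array-CGH data, the sketch below calls copy-number states from log2 tumor/reference ratios. The ±0.25 thresholds and the sample ratios are assumptions for illustration; the study's actual calling criteria are not stated in the abstract.

```python
# Sketch: calling copy-number gains/losses from array-CGH log2 ratios.
# The +/-0.25 thresholds are a common convention, assumed here; the
# study's actual calling criteria are not given in the abstract.
import numpy as np

def call_copy_number(log2_ratios, gain_thr=0.25, loss_thr=-0.25):
    """Label each probe as 'gain', 'loss', or 'neutral'."""
    ratios = np.asarray(log2_ratios)
    calls = np.full(ratios.shape, "neutral", dtype=object)
    calls[ratios > gain_thr] = "gain"
    calls[ratios < loss_thr] = "loss"
    return calls

# Hypothetical log2(tumor/reference) ratios for BAC clones along 20q
print(call_copy_number([0.05, 0.41, 0.62, -0.33, 0.10]))
# -> ['neutral' 'gain' 'gain' 'loss' 'neutral']
```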

    Identification of novel candidate target genes, including EPHB3, MASP1 and SST at 3q26.2–q29 in squamous cell carcinoma of the lung

    Background: The underlying genetic alterations driving squamous cell carcinoma (SCC) and adenocarcinoma (AC) carcinogenesis are largely unknown.
    Methods: High-resolution array CGH was performed to identify differences in the patterns of genomic imbalances between SCC and AC of non-small cell lung cancer (NSCLC).
    Results: On a genome-wide profile, SCCs showed a higher frequency of gains than ACs (P = 0.067). More specifically, statistically significant differences were observed across the histologic subtypes for gains at 2q14.2, 3q26.2–q29, 12p13.2–p13.33, and 19p13.3, as well as losses at 3p26.2–p26.3, 16p13.11, and 17p11.2 in SCC, and gains at 7q22.1 and losses at 15q22.2–q25.2 in AC (P < 0.05). The most striking difference between SCC and AC was gain at 3q26.2–q29, occurring in 86% (19/22) of SCCs but in only 21% (3/14) of ACs. Many significant genes at 3q26.2–q29 previously linked to a specific histology, such as EVI1, MDS1, PIK3CA, and TP73L, were observed in SCC (P < 0.05). In addition, we identified the following possible target genes (> 30% of patients) at 3q26.2–q29: LOC389174 (3q26.2), KCNMB3 (3q26.32), EPHB3 (3q27.1), MASP1 and SST (3q27.3), LPP and FGF12 (3q28), and OPA1, KIAA022, LOC220729, LOC440996, LOC440997, and LOC440998 (3q29), all of which were significantly targeted in SCC (P < 0.05). Among these genes, high-level amplifications were detected for EPHB3 at 3q27.1 and for MASP1 and SST at 3q27.3 (18%, 18%, and 14%, respectively). Quantitative real-time PCR confirmed that the candidate genes detected by array CGH were overexpressed in SCCs.
    Conclusion: Using whole-genome array CGH, we identified significant differences in chromosomal signatures between the SCC and AC subtypes of NSCLC. The newly identified candidate target genes may prove to be highly attractive molecular markers for the classification of NSCLC histologic subtypes and may contribute to understanding the pathogenesis of squamous cell carcinoma of the lung.
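    As a sanity check on the headline comparison, the 3q26.2–q29 gain frequencies reported above (19/22 SCCs vs. 3/14 ACs) can be tested with Fisher's exact test; a minimal sketch follows, using only the counts given in the abstract.

```python
# Sketch: Fisher's exact test on the 3q26.2-q29 gain frequencies
# reported in the abstract (19/22 SCCs vs. 3/14 ACs).
from scipy.stats import fisher_exact

table = [[19, 22 - 19],  # SCC: gain present / absent
         [3, 14 - 3]]    # AC:  gain present / absent
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.5f}")
```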

    Cutaneous Metastatic Rectal Adenocarcinoma in Zosteriform Distribution


    Adult Multiple Myofibromas on an Atrophic Patch on the Thigh


    Visual Positioning System Based on 6D Object Pose Estimation Using Mobile Web

    Recently, the demand for location-based services using mobile devices in indoor spaces without a global positioning system (GPS) has increased. However, to the best of our knowledge, no existing solution is fully applicable to indoor positioning and navigation while ensuring real-time mobility on mobile devices, in the way that global navigation satellite system (GNSS) solutions do outdoors. Indoor single-shot image positioning using smartphone cameras does not require a dedicated infrastructure and offers the advantages of low cost and a large potential market owing to the popularity of smartphones. However, existing methods and systems based on smartphone cameras and image algorithms encounter various limitations when implemented in indoor environments. To address this, we designed an indoor visual positioning system (VPS) for mobile devices that can locate users in indoor scenes. The proposed method uses a smartphone camera to detect objects in a single image in a web environment and calculates the location of the smartphone to find the user in an indoor space. The system is inexpensive because it integrates deep learning and computer vision algorithms and does not require additional infrastructure. We present a novel method that detects 3D model objects from single-shot RGB data, estimates the 6D pose and position of the camera, and corrects errors based on voxels. To this end, a popular convolutional neural network (CNN) is adapted for real-time pose estimation, handling the full 6D pose to estimate the location and orientation of the camera. The estimated camera position is mapped to a voxel address to determine a stable user position. Our VPS provides the user with indoor information as a 3D augmented reality (AR) model. The voxel address optimization approach with camera 6D pose estimation from RGB images in a mobile web environment achieves better real-time performance and accuracy than current state-of-the-art methods that use RGB-D or point-cloud data.
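    The voxel-addressing step can be pictured with the short sketch below: a continuous camera position from the 6D pose estimator is quantized to an integer voxel index, and a majority vote over recent frames stabilizes the reported user position. The grid size and voting window are illustrative assumptions, not values from the paper.

```python
# Sketch: quantizing estimated camera positions to voxel addresses and
# stabilizing the user position by majority vote over recent frames.
# VOXEL_SIZE and the window length are assumptions for illustration.
from collections import Counter, deque

VOXEL_SIZE = 0.5           # meters per voxel edge (assumed)
recent = deque(maxlen=10)  # sliding window of recent voxel addresses

def voxel_address(position, voxel_size=VOXEL_SIZE):
    """Quantize a continuous (x, y, z) position to integer voxel indices."""
    return tuple(int(c // voxel_size) for c in position)

def stable_position(position):
    """Append the latest address and return the most frequent recent voxel."""
    recent.append(voxel_address(position))
    return Counter(recent).most_common(1)[0][0]

# Hypothetical per-frame camera positions from the 6D pose estimator
for pos in [(3.12, 1.40, 7.18), (3.17, 1.41, 7.31), (3.05, 1.39, 7.43)]:
    print(stable_position(pos))  # all three map to voxel (6, 2, 14)
```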

    Graph Convolutional Network for 3D Object Pose Estimation in a Point Cloud

    Graph Neural Networks (GNNs) are neural networks that learn representations of nodes and of the edges connecting them to other nodes while maintaining the graph structure. Graph Convolutional Networks (GCNs), a representative GNN method, utilize conventional Convolutional Neural Networks (CNNs), in the context of computer vision, to process data supported by graphs. This paper proposes a one-stage GCN approach for 3D object detection and pose estimation that structures the non-linearly distributed points of a point cloud into a graph. Our network provides the details required to analyze, generate, and estimate bounding boxes by spatially structuring the input data into graphs. Our method proposes a keypoint attention mechanism that aggregates the relative features between points to estimate the category and pose of the object to which the vertices of the graph belong, and supports nine-degrees-of-freedom multi-object pose estimation. In addition, to avoid gimbal lock in 3D space, we use quaternion rotations instead of Euler angles. Experimental results showed that memory usage and efficiency could be improved by aggregating point features from the point cloud and their neighbors in a graph structure. Overall, the system achieved performance comparable to state-of-the-art systems.
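    Since the abstract singles out quaternions over Euler angles to avoid gimbal lock, here is a minimal sketch of the standard Z-Y-X Euler-to-quaternion conversion; the angle convention is an assumption, as the paper's exact rotation parameterization is not given in the abstract.

```python
# Sketch: converting Z-Y-X (yaw-pitch-roll) Euler angles to a unit
# quaternion (w, x, y, z). Quaternions stay well defined at pitch = 90
# degrees, where Euler angles hit gimbal lock.
import math

def euler_to_quaternion(yaw, pitch, roll):
    """Standard Z-Y-X Euler angles (radians) -> unit quaternion."""
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    return (cr * cp * cy + sr * sp * sy,   # w
            sr * cp * cy - cr * sp * sy,   # x
            cr * sp * cy + sr * cp * sy,   # y
            cr * cp * sy - sr * sp * cy)   # z

# At the gimbal-lock configuration the quaternion is still unambiguous.
print(euler_to_quaternion(0.0, math.pi / 2, 0.0))  # ~(0.707, 0, 0.707, 0)
```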