28 research outputs found


    Co-expression of apoptin (VP3) and antibacterial peptide cecropin B mutant (ABPS1) genes induce higher rate of apoptosis in HepG2 and A375 cell lines

    The antibacterial peptide cecropin B mutant (ABPS1) gene has broad antibacterial and antiproliferative properties. Apoptin (VP3), a chicken anaemia virus-encoded protein, is known to induce apoptosis in human transformed cells. To explore drug combinations in human tumor cells, the apoptin and ABPS1 eukaryotic expression vectors pIRES2-EGFP-apoptin and pIRES2-EGFP-ABPS1 were constructed, and their effects, individually and in combination, were studied in HepG2 and A375 cells. The vectors pIRES2-EGFP-ABPS1 and pIRES2-EGFP-apoptin were transfected into HepG2 and A375 tumor cells by the lipofectamine-mediated DNA transfection procedure. At 48 h post-transfection, the apoptotic rates obtained by flow cytometry and the morphological changes of the tumor cells observed under light and scanning electron microscopy were significant. In contrast to the controls, the microvilli on the surface of the transfected cells were disrupted, decreased, or had even disappeared; the cell membrane was injured and intracellular substances leaked out. Furthermore, our results indicate that the apoptotic rates for apoptin (27.32% in HepG2 and 9.34% in A375 cells) were higher than those for ABPS1 (23.79% in HepG2 and 8.33% in A375 cells). Moreover, co-expression of apoptin and ABPS1 gave still higher apoptotic rates of 27.66% and 10.33% in HepG2 and A375 cells, respectively. However, the apoptotic rates obtained in HepG2 cells treated with apoptin alone and with apoptin plus ABPS1 were closely similar, whereas this was not the case in A375 cells. The results of the present study therefore show that the combination of apoptin and ABPS1 has a synergistic effect in HepG2 and A375 cell lines. Key words: Apoptin, ABPS1, apoptosis, co-expression, HepG2, A375

    Dense Connectivity Based Two-Stream Deep Feature Fusion Framework for Aerial Scene Classification

    Aerial scene classification is an active and challenging problem in high-resolution remote sensing imagery understanding. Deep learning models, especially convolutional neural networks (CNNs), have achieved prominent performance in this field, and the extraction of deep features from the layers of a CNN model is widely used in CNN-based methods. Although CNN-based approaches have obtained great success, there is still plenty of room to further increase classification accuracy; in particular, fusion with other features has great potential to improve aerial scene classification. We therefore propose two effective architectures based on the idea of feature-level fusion. The first architecture, a texture coded two-stream deep architecture, uses a raw RGB network stream and a mapped local binary patterns (LBP) coded network stream to extract two different sets of features and fuses them using a novel deep feature fusion model. The second architecture, a saliency coded two-stream deep architecture, employs a saliency coded network stream as the second stream and fuses it with the raw RGB network stream using the same feature fusion model. For validation and comparison, the proposed architectures are evaluated in comprehensive experiments on three publicly available remote sensing scene datasets. The classification accuracies of the saliency coded two-stream architecture with our feature fusion model reach 97.79% and 98.90% on the UC-Merced dataset (50% and 80% training samples), 94.09% and 95.99% on the Aerial Image Dataset (AID) (20% and 50% training samples), and 85.02% and 87.01% on the NWPU-RESISC45 dataset (10% and 20% training samples), outperforming state-of-the-art methods.
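    The two-stream idea above can be sketched in a few lines: each stream maps an input image to a deep feature vector, and the two vectors are combined at the feature level. The sketch below is a minimal illustration only, not the paper's networks or fusion model: random projections stand in for the CNN streams, and simple normalise-and-concatenate stands in for the learned fusion model.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_stream_features(images, dim=512):
    # Placeholder for a pretrained CNN stream: in the paper each stream
    # (raw RGB vs. mapped-LBP or saliency coded input) yields a deep
    # feature vector per image. Here a fixed random projection followed
    # by a ReLU-like nonlinearity simulates that.
    W = rng.standard_normal((images.shape[1], dim)) / np.sqrt(images.shape[1])
    return np.maximum(images @ W, 0.0)

def fuse(f_rgb, f_coded):
    # Simple feature-level fusion: L2-normalise each stream, then
    # concatenate. The paper's actual fusion model is more elaborate.
    norm = lambda f: f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-12)
    return np.concatenate([norm(f_rgb), norm(f_coded)], axis=1)

images = rng.random((4, 256))          # 4 toy "images", flattened
f_rgb = extract_stream_features(images)    # stream 1: raw RGB
f_coded = extract_stream_features(images)  # stream 2: coded input
fused = fuse(f_rgb, f_coded)
print(fused.shape)                     # (4, 1024)
```

    Concatenation is only the simplest feature-level fusion; the paper's contribution is a learned deep fusion model in its place, but the data flow (two streams in, one fused vector out per image) is the same.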

    A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification

    One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification; a well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, two pretrained convolutional neural networks (CNNs) are used as feature extractors to learn deep features from the original aerial image and from the aerial image processed by saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, an extreme learning machine (ELM) classifier performs the final classification on the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: the UC-Merced dataset with 21 scene categories, the WHU-RS dataset with 19 scene categories, the AID dataset with 30 scene categories, and the NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture achieves a significant classification accuracy improvement over state-of-the-art references.
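    The final stage above uses an extreme learning machine: a single-hidden-layer network whose input-to-hidden weights are random and fixed, so only the output weights need fitting, and those are obtained in closed form by least squares. Below is a minimal ELM sketch on toy features (not the paper's fused CNN features); the class name, dimensions, and toy labels are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

class ELM:
    """Minimal extreme learning machine: random fixed hidden layer,
    output weights solved in closed form via least squares."""
    def __init__(self, n_in, n_hidden):
        self.W = rng.standard_normal((n_in, n_hidden))  # never trained
        self.b = rng.standard_normal(n_hidden)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y, n_classes):
        T = np.eye(n_classes)[y]                  # one-hot targets
        H = self._hidden(X)
        # Solve H @ beta ~= T in the least-squares sense.
        self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

X = rng.random((60, 8))                           # toy feature vectors
y = (X.sum(axis=1) > 4).astype(int)               # easy 2-class labels
elm = ELM(n_in=8, n_hidden=64).fit(X, y, n_classes=2)
pred = elm.predict(X).argmax(axis=1)
acc = (pred == y).mean()
print(acc)
```

    Because the hidden layer is never trained, fitting reduces to a single `lstsq` call, which is why ELM is a popular lightweight classifier on top of precomputed deep features.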

    Aerial Scene Classification via Multilevel Fusion Based on Deep Convolutional Neural Networks


    E-DBPN: Enhanced Deep Back-Projection Networks for Remote Sensing Scene Image Superresolution


    Experimental Evidence of Precipitation of All 12 Variants in a Single β Grain in Titanium Alloys

    The effect of a change in the local composition of the β matrix on the precipitation of the α phase has been investigated by electron backscatter diffraction (EBSD), to gain more insight into the nucleation and variant selection of α plates, using a Ti-5.04Al/Ti-1.52Mo (at.%) diffusion couple. The results showed that a composition gradient formed from one side of the diffusion couple to the other after diffusion annealing. Following a secondary heat treatment, it was found for the first time that all 12 variants formed in a single β grain in the diffusion zone of the Ti-5.04Al/Ti-1.52Mo diffusion couple, which indicates that the change in the local composition of the β matrix significantly weakened the α variant selection behavior.

    Comparative Study of Two Insulinlike Proteases in Cryptosporidium parvum

    Cryptosporidium parvum is a common protozoan pathogen responsible for moderate-to-severe diarrhea in humans and animals. The small genome of C. parvum has 22 genes encoding insulinlike proteases (INS) with diverse sequences, suggesting that members of the protein family may have different biological functions in the life cycle. In this study, two members of the INS family, CpINS-4 and CpINS-6, which share the Zn2+-binding motif "HXXEH" but differ in their number of functional domains, were expressed in Escherichia coli and used to generate polyclonal antibodies. Both the recombinant and the native CpINS-4 and CpINS-6 proteins were spliced into multiple fragments. The antibodies generated recognized their respective recombinant and native proteins and the spliced products, but had minimal cross-reactivity with each other. Anti-CpINS-4 antibodies reacted with the middle region of sporozoites and merozoites, while anti-CpINS-6 antibodies showed the highest reactivity to the apical region. Polyclonal anti-CpINS-4 antibodies produced a 36% reduction in parasite load in HCT-8 cultures at 24 h, while those against CpINS-6, which lacks one of the functional domains, failed to do so. The genes encoding both CpINS-4 and CpINS-6 showed their highest expression during the invasion phase of in vitro C. parvum culture. These data suggest that CpINS-4 and CpINS-6 may be expressed in different organelles and play different biological roles in the life cycle of C. parvum.