22 research outputs found

    Multiscale simulations of growth-dominated Sb2Te phase-change material for non-volatile photonic applications

    Full text link
    Chalcogenide phase-change materials (PCMs) are widely applied in electronic and photonic applications, such as non-volatile memory and neuro-inspired computing. Doped Sb2Te alloys are now gaining increasing attention for on-chip photonic applications due to their growth-driven crystallization features. However, it remains unknown whether Sb2Te also forms a metastable crystalline phase upon nanosecond crystallization in devices, similar to the case of nucleation-driven Ge-Sb-Te alloys. Here, we carry out ab initio simulations to understand the changes in optical properties of amorphous Sb2Te upon crystallization and post-annealing. During the continuous transformation process, changes in the dielectric function are highly wavelength-dependent from the visible-light range towards the telecommunication band. Our finite-difference time-domain simulations based on the ab initio input reveal key differences in device output for color-display and photonic-memory applications upon tellurium ordering. Our work serves as an example of how multiscale simulations of materials can guide practical photonic phase-change applications. Comment: 16 pages, 8 figures

    EFAFN: An Efficient Feature Adaptive Fusion Network with Facial Feature for Multimodal Sarcasm Detection

    No full text
    Sarcasm often manifests itself in implicit language and exaggerated expressions, for instance an elongated word, a sarcastic phrase, or a change of tone. Most recent research on sarcasm detection has been based on text and image information. In this paper, we argue that most image data input to a sarcasm detection model is redundant, for example complex background information and foreground information irrelevant to sarcasm detection. Since facial details convey emotional changes and social characteristics, we should pay more attention to the image data of the face area. We therefore treat text, audio, and face images as three modalities and propose a multimodal deep-learning model to tackle this problem. Our model extracts the text, audio, and face-region image features and then uses our proposed feature fusion strategy to fuse these three modal features into one feature vector for classification. To enhance the model's generalization ability, we use the IMGAUG image-augmentation tool to augment the public sarcasm detection dataset MUStARD. Experiments show that although a simple supervised method is already effective, using our feature fusion strategy and image features from face regions further improves the F1 score from 72.5% to 79.0%.
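The fusion step described above, merging three modality features into one vector, can be sketched as a weighted concatenation. This is a minimal illustrative baseline, not the paper's EFAFN architecture; the feature dimensions and weights here are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for per-modality features; in the paper these would come from
# text, audio, and face-region encoders (dimensions here are hypothetical).
text_feat = rng.standard_normal(128)
audio_feat = rng.standard_normal(64)
face_feat = rng.standard_normal(256)

def fuse(text, audio, face, weights=(1.0, 1.0, 1.0)):
    """Weighted concatenation: a simple fusion baseline that merges the
    three modality features into one vector for classification."""
    wt, wa, wf = weights
    return np.concatenate([wt * text, wa * audio, wf * face])

fused = fuse(text_feat, audio_feat, face_feat)
print(fused.shape)  # → (448,)
```

A learned fusion network would replace the fixed weights with trainable, input-dependent ones, but the input/output shapes stay the same.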

    Method for enhancing single-trial P300 detection by introducing the complexity degree of image information in rapid serial visual presentation tasks.

    No full text
    Using the electroencephalogram (EEG) generated while humans view images is a new direction in image retrieval technology. A P300 component is induced in the EEG when subjects see their point of interest in a target image under the rapid serial visual presentation (RSVP) experimental paradigm. We detected the single-trial P300 component to determine whether a subject was interested in an image. In practice, the latency and amplitude of the P300 component may vary with experimental parameters such as target probability and stimulus semantics. Thus, we proposed a novel method, the Target Recognition using Image Complexity Priori (TRICP) algorithm, in which image information is introduced into the calculation of the interest score in the RSVP paradigm. The method combines information from the image and the EEG to enhance the accuracy of single-trial P300 detection over traditional single-trial P300 detection algorithms. We defined an image complexity parameter based on features from different layers of a convolutional neural network (CNN). We used the TRICP algorithm to compute image complexity, quantify the effect of images of different complexity on the P300 component, and train specialized classifiers according to image complexity. We compared TRICP with the HDCA algorithm; results show that TRICP performs significantly better (Wilcoxon signed-rank test, p < 0.05). Thus, the proposed method can also be used in other visual-task-related single-trial event-related potential detection tasks.
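The idea of an image-complexity prior modulating an EEG interest score can be sketched roughly as follows. The paper's actual complexity measure (from CNN layer features) and combination rule are not given here; the entropy proxy, the mixing weight `alpha`, and the linear combination are all assumptions for illustration only.

```python
import numpy as np

def image_complexity(feature_maps):
    """Illustrative complexity proxy: mean activation entropy across CNN
    feature maps. The paper derives complexity from multi-layer CNN
    features; this stand-in only sketches the shape of such a computation."""
    entropies = []
    for fm in feature_maps:
        p = np.abs(fm).ravel()
        p = p / p.sum()
        entropies.append(float(-(p * np.log(p + 1e-12)).sum()))
    return float(np.mean(entropies))

rng = np.random.default_rng(0)
layers = [rng.random((8, 8)), rng.random((4, 4))]  # stand-in CNN activations
ic = image_complexity(layers)

# Assumed combination form: temper the single-trial EEG interest score
# with the image-complexity prior (eeg_score and alpha are hypothetical).
eeg_score = 0.8
alpha = 0.2
final_score = (1 - alpha) * eeg_score + alpha * ic
```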

    Human epididymis protein 4, a novel potential biomarker for diagnostic and prognosis monitoring of lung cancer

    No full text
    Abstract Objective This study aimed to explore the application value of human epididymis protein 4 (HE4) in diagnosing and monitoring the prognosis of lung cancer. Methods First, TCGA (The Cancer Genome Atlas) databases were used to analyze WAP four-disulfide core domain 2 (WFDC2) gene expression levels in lung cancer tissues. Then, a total of 160 individuals were enrolled, categorized into three groups: the lung cancer group (n = 80), the benign lesions group (n = 40), and the healthy controls group (n = 40). Serum HE4 levels and other biomarkers were quantified using an electro-chemiluminescent immunoassay. Additionally, the expression of HE4 in tissues was analyzed through immunohistochemistry (IHC). In vitro cultures of human bronchial epithelial (HBE) cells and various lung cancer cell lines (SPC/PC9/A549/H520) were used to detect HE4 levels via western blot (WB). Results Analysis of the TCGA and UALCAN (The University of Alabama at Birmingham Cancer Data Analysis Portal) databases showed that WFDC2 gene expression levels were upregulated in lung cancer tissues (p < 0.01). Compared with the control group and the benign group, HE4 was significantly higher in the serum of patients with lung cancer (p < 0.001). Receiver operating characteristic (ROC) analysis confirmed that HE4 had better diagnostic efficacy than classical markers in the differential diagnosis of lung cancer and benign lesions and had the highest diagnostic value in lung adenocarcinoma (area under the ROC curve [AUC] = 0.826). HE4 increased in early lung cancer and positively correlated with poor prognosis (p < 0.001). Moreover, the results of WB and IHC revealed that the expression of HE4 was increased in lung cancer cells (SPC/A549/H520) and lung cancer tissues but decreased in PC9 cells, which lack EGFR exon 19 (p < 0.05). Conclusion Serum HE4 emerges as a promising novel biomarker for the diagnosis and prognosis assessment of lung cancer.
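The AUC reported in the ROC analysis above has a simple probabilistic reading: it equals the probability that a randomly chosen case scores higher than a randomly chosen control (the Mann-Whitney statistic). A minimal sketch, with purely hypothetical serum values, not the study's data:

```python
import numpy as np

def auc_mann_whitney(pos, neg):
    """AUC as the Mann-Whitney statistic: fraction of (case, control)
    pairs where the case scores higher, counting ties as half."""
    pos = np.asarray(pos, float)[:, None]
    neg = np.asarray(neg, float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# Hypothetical serum HE4 values for illustration only (not study data).
cancer = [150, 120, 98, 210, 175]
benign = [60, 85, 72, 110, 90]
print(auc_mann_whitney(cancer, benign))  # → 0.96
```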

    The brain activity varied between 400 ms and 600 ms across different image complexities (HIC, MIC, and LIC).

    No full text
    The brain activity varied between 400 ms and 600 ms across different image complexities (HIC, MIC, and LIC).

    Values of the area under the receiver operating characteristic curve (AUC) for all subjects under the two algorithms; the time window is 50 ms.

    No full text
    Values of the area under the receiver operating characteristic curve (AUC) for all subjects under the two algorithms; the time window is 50 ms.

    Rapid serial visual presentation (RSVP) paradigm.

    No full text
    The RSVP sequence consisted of 25 blocks (a total of 2400 images, i.e., 25 target categories with 300 images and 175 nontarget categories with 2100 images), and each block comprised one target category (12 images) and seven nontarget categories (84 images). The images are presented in 25 blocks, with a distinct target category in each block. The target categories of each block are shown in Table 1 (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0184713#pone.0184713.t001). Each image is presented for 200 ms. (The image is similar but not identical to the original image and is shown for illustrative purposes only.)
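The block structure described above (12 target plus 84 nontarget images per block, shown at 200 ms each) can be sketched as a simple sequence generator. The image identifiers and the seeded shuffle are illustrative assumptions, not the study's stimulus code.

```python
import random

def make_rsvp_block(target_imgs, nontarget_imgs, n_target=12,
                    n_nontarget=84, seed=0):
    """Assemble one RSVP block as described: 12 target and 84 nontarget
    images, shuffled; each image is then presented for 200 ms."""
    rng = random.Random(seed)
    block = list(target_imgs[:n_target]) + list(nontarget_imgs[:n_nontarget])
    rng.shuffle(block)
    return block

# Hypothetical image identifiers for a single block.
targets = [f"target_{i}" for i in range(12)]
nontargets = [f"nontarget_{i}" for i in range(84)]
block = make_rsvp_block(targets, nontargets)
print(len(block))  # → 96
```

Repeating this for 25 blocks, each with a distinct target category, yields the full 2400-image sequence.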

    Effect of IC on ERP signal.

    No full text
    ERPs stimulated by the (A) high-complexity target image, (B) high-complexity nontarget image, (C) medium-complexity target image, (D) medium-complexity nontarget image, (E) low-complexity target image, and (F) low-complexity nontarget image. (G) Trial-averaged target ERP waveforms calculated from (A), (C), and (E), where the red, blue, and green lines indicate the ERP components evoked by high-, medium-, and low-complexity images. Similarly, (H) shows the corresponding nontarget ERP waveforms.

    System overview.

    No full text
    First, the sample data are divided into three equal parts according to image complexity (IC), yielding EEG data induced by high-, medium-, and low-complexity images. We trained the classifiers separately on these data sets. During testing, we first determined the complexity category (high, medium, or low) of the test image. Then, we calculated the interest score of the EEG induced by the test image using the corresponding classifier. The results were combined with weights to obtain the final decision score. (The image is similar but not identical to the original image and is shown for illustrative purposes only.)
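The pipeline above, one classifier per complexity bin, scoring a test epoch with the bin-matched classifier, can be sketched with simulated data. The linear mean-difference scorer and the simulated epochs are stand-ins; the paper's actual classifiers and combination weights are not specified here.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n_trials, n_feat, shift):
    """Simulated EEG epochs: targets are mean-shifted from nontargets.
    Purely illustrative stand-in data, not the study's recordings."""
    X = rng.standard_normal((2 * n_trials, n_feat))
    y = np.r_[np.ones(n_trials), np.zeros(n_trials)]
    X[:n_trials] += shift
    return X, y

def train_scorer(X, y):
    """Minimal linear scorer: project onto the normalized target-minus-
    nontarget mean difference (a stand-in for each per-bin classifier)."""
    w = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    return w / np.linalg.norm(w)

# One classifier per image-complexity bin, as in the pipeline above.
scorers = {}
for ic_bin in ("low", "medium", "high"):
    X, y = make_data(100, 16, shift=0.5)
    scorers[ic_bin] = train_scorer(X, y)

def interest_score(epoch, ic_bin, weight=1.0):
    """Score a test epoch with the classifier matching its complexity bin;
    `weight` stands in for the complexity-dependent combination weight."""
    return weight * float(epoch @ scorers[ic_bin])
```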