
    Filter Bank Common Spatial Pattern Algorithm on BCI Competition IV Datasets 2a and 2b

    The Common Spatial Pattern (CSP) algorithm is an effective and popular method for classifying 2-class motor imagery electroencephalogram (EEG) data, but its effectiveness depends on the subject-specific frequency band. This paper presents the Filter Bank Common Spatial Pattern (FBCSP) algorithm to optimize the subject-specific frequency band for CSP on Datasets 2a and 2b of the Brain-Computer Interface (BCI) Competition IV. Dataset 2a comprised 4-class, 22-channel EEG data from 9 subjects, and Dataset 2b comprised 2-class EEG data recorded from 3 bipolar channels for 9 subjects. Multi-class extensions to FBCSP are also presented to handle the 4-class EEG data in Dataset 2a, namely the Divide-and-Conquer (DC), Pair-Wise (PW), and One-Versus-Rest (OVR) approaches. Two feature selection algorithms are also presented to select discriminative CSP features on Dataset 2b, namely the Mutual Information-based Best Individual Feature (MIBIF) algorithm and the Mutual Information-based Rough Set Reduction (MIRSR) algorithm. Single-trial classification accuracies are reported using 10 × 10-fold cross-validation on the training data and session-to-session transfer on the evaluation data of both datasets. Disclosure of the test data labels after BCI Competition IV showed that FBCSP performed best among the submitted algorithms, yielding mean kappa values of 0.569 and 0.600 across all subjects in Datasets 2a and 2b, respectively.
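
    As a rough illustration of the FBCSP pipeline summarized above (band-pass filter bank, per-band CSP, mutual-information-based feature selection, then a classifier), the following Python sketch uses NumPy, SciPy and scikit-learn. The band edges, the number of CSP filter pairs, and the LDA classifier are assumptions for illustration, not the competition submission.

```python
# Minimal FBCSP sketch (assumed implementation, not the authors' code):
# band-pass filter bank -> per-band CSP -> log-variance features ->
# mutual-information feature selection -> linear classifier.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh
from sklearn.feature_selection import mutual_info_classif
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def bandpass(trials, lo, hi, fs=250, order=4):
    """Zero-phase band-pass filter over (n_trials, n_channels, n_samples) EEG."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, trials, axis=-1)

def csp_filters(trials, labels, n_pairs=2):
    """Solve the 2-class CSP generalized eigenvalue problem."""
    covs = []
    for c in np.unique(labels):
        X = trials[labels == c]
        covs.append(np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0))
    # Eigenvectors of cov_1 w = lambda (cov_1 + cov_2) w, sorted by eigenvalue.
    vals, vecs = eigh(covs[0], covs[0] + covs[1])
    idx = np.argsort(vals)
    picks = np.r_[idx[:n_pairs], idx[-n_pairs:]]        # most discriminative pairs
    return vecs[:, picks].T

def fbcsp_features(trials, labels, bands, fs=250):
    """Log-variance CSP features concatenated over the filter bank."""
    feats = []
    for lo, hi in bands:
        Xb = bandpass(trials, lo, hi, fs)
        W = csp_filters(Xb, labels)
        Z = np.einsum("fc,ncs->nfs", W, Xb)              # spatially filtered trials
        feats.append(np.log(Z.var(axis=-1)))
    return np.hstack(feats)

# Hypothetical usage on arrays X (n_trials, n_channels, n_samples), y (n_trials,):
# bands = [(f, f + 4) for f in range(4, 40, 4)]
# F = fbcsp_features(X, y, bands)
# mi = mutual_info_classif(F, y)
# best = np.argsort(mi)[-4:]                             # MIBIF-style selection
# clf = LinearDiscriminantAnalysis().fit(F[:, best], y)
```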

    A Novel Two-Layer DAG-based Reactive Protocol for IoT Data Reliability in Metaverse

    Many applications, e.g., digital twins, rely on sensing data from Internet of Things (IoT) networks, which is used to infer event(s) and initiate actions that affect an environment. This gives rise to concerns about data integrity and provenance. One possible solution is to employ blockchain. However, blockchain has high resource requirements, making it unsuitable for resource-constrained IoT devices. To this end, this paper proposes a novel approach, called two-layer directed acyclic graph (2LDAG), whereby IoT devices store only a digital fingerprint of the data generated by their neighbors. Further, it proposes a novel proof-of-path (PoP) protocol that allows an operator or digital twin to verify data in an on-demand manner. Simulation results show that 2LDAG has storage and communication costs that are, respectively, two and three orders of magnitude lower than those of traditional blockchains and blockchains that use a DAG structure. Moreover, 2LDAG achieves consensus even when 49% of nodes are malicious.
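
    The sketch below illustrates, in Python, the neighbour-fingerprint idea that underlies 2LDAG: a device pushes only a hash of each reading to its neighbours, and a verifier later checks a claimed path against those stored fingerprints on demand. The class and function names are hypothetical, and the real PoP protocol is considerably more involved.

```python
# Illustrative sketch of neighbour fingerprinting and on-demand verification
# (not the paper's protocol): devices keep SHA-256 digests of neighbours' data.
import hashlib

def fingerprint(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

class Device:
    def __init__(self, name: str):
        self.name = name
        self.readings: list[bytes] = []                     # own sensor data
        self.neighbour_prints: dict[str, list[str]] = {}    # neighbour name -> digests

    def sense(self, payload: bytes, neighbours: list["Device"]) -> None:
        """Store the reading locally and push only its fingerprint to neighbours."""
        self.readings.append(payload)
        for n in neighbours:
            n.neighbour_prints.setdefault(self.name, []).append(fingerprint(payload))

def verify_path(claimed: list[tuple[Device, bytes]], witnesses: list[Device]) -> bool:
    """On-demand check: every claimed reading must match a fingerprint held by a witness."""
    for device, payload in claimed:
        h = fingerprint(payload)
        if not any(h in w.neighbour_prints.get(device.name, []) for w in witnesses):
            return False
    return True

# Hypothetical usage:
# a, b, c = Device("a"), Device("b"), Device("c")
# a.sense(b"temp=21.5", neighbours=[b, c])
# assert verify_path([(a, b"temp=21.5")], witnesses=[b, c])
# assert not verify_path([(a, b"temp=99.9")], witnesses=[b, c])
```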

    TinyKG: Memory-Efficient Training Framework for Knowledge Graph Neural Recommender Systems

    There has been an explosion of interest in designing Knowledge Graph Neural Networks (KGNNs), which achieve state-of-the-art performance and provide great explainability for recommendation. The promising performance mainly results from their ability to capture high-order proximity messages over knowledge graphs. However, training KGNNs at scale is challenging due to high memory usage. In the forward pass, automatic differentiation engines (e.g., TensorFlow/PyTorch) generally need to cache all intermediate activation maps in order to compute gradients in the backward pass, which leads to a large GPU memory footprint. Existing work addresses this problem with multi-GPU distributed frameworks. Nonetheless, this poses a practical challenge for deploying KGNNs in memory-constrained environments, especially for industry-scale graphs. Here we present TinyKG, a memory-efficient GPU-based training framework for KGNNs for recommendation tasks. Specifically, TinyKG uses exact activations in the forward pass while storing a quantized version of the activations in GPU buffers. During the backward pass, these low-precision activations are dequantized back to full-precision tensors in order to compute gradients. To reduce quantization errors, TinyKG applies a simple yet effective quantization algorithm to compress the activations that is unbiased with low variance. As a result, the training memory footprint of KGNNs is greatly reduced with negligible accuracy loss. To evaluate TinyKG, we conduct comprehensive experiments on real-world datasets. We find that TinyKG with INT2 quantization reduces the memory footprint of activation maps by 7×, with only a 2% loss in accuracy, allowing KGNNs to be deployed on memory-constrained devices.
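
    The following PyTorch sketch shows the general technique the abstract describes: caching a stochastically quantized copy of an activation in the forward pass and dequantizing it to compute gradients in the backward pass. The per-tensor 2-bit scheme and the ReLU example are assumptions for illustration, not the TinyKG implementation.

```python
# Sketch of low-bit activation caching (assumed, not the TinyKG code).
import torch

def quantize(x: torch.Tensor, bits: int = 2):
    """Unbiased per-tensor stochastic quantization to `bits` bits."""
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo).clamp_min(1e-8) / levels
    q = (x - lo) / scale
    q = torch.floor(q + torch.rand_like(q))       # stochastic rounding keeps E[q] unbiased
    return q.to(torch.uint8), scale, lo

def dequantize(q, scale, lo):
    return q.float() * scale + lo

class QuantizedReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        y = torch.relu(x)
        q, scale, lo = quantize(y)                 # cache a 2-bit copy instead of y itself
        ctx.save_for_backward(q, scale, lo)
        return y

    @staticmethod
    def backward(ctx, grad_out):
        q, scale, lo = ctx.saved_tensors
        y = dequantize(q, scale, lo)               # approximate activation
        return grad_out * (y > 0).float()          # ReLU gradient from the approximation

# Usage inside a model: replace `torch.relu(h)` with `QuantizedReLU.apply(h)`.
```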

    Estimation of diameter growth parameters for Cryptomeria Plantations in Taiwan Using the Local Yield Table Construction System

    We applied the Local Yield Table Construction System (LYCS), a computer program that estimates stand growth under various density control strategies, to Cryptomeria plantations in Taiwan. Parameters of the growth model were estimated from permanent-plot data on Cryptomeria stands stored in a database at the Experimental Forest of National Taiwan University. The diameter at breast height (DBH) and the number of trees measured in the permanent plots were used to estimate the DBH growth-rate curve, the effects of stand density on diameter growth, growth in terms of DBH, and the diameter distribution. The estimated stand growth could be fitted to the values observed in the permanent plots. Based on these results, yield tables for various stand density control strategies can now be constructed for Cryptomeria stands in Taiwan.
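
    The abstract does not give the LYCS growth equations, so the Python snippet below is only a generic illustration of estimating diameter-growth parameters from plot measurements, here by fitting a Mitscherlich-type DBH curve to hypothetical data.

```python
# Generic growth-curve fit for illustration only (not the LYCS model):
# dbh(t) = a * (1 - exp(-b * t))**c fitted to permanent-plot measurements.
import numpy as np
from scipy.optimize import curve_fit

def mitscherlich(age, a, b, c):
    """Asymptotic growth curve: a = asymptotic DBH, b = rate, c = shape."""
    return a * (1.0 - np.exp(-b * age)) ** c

# Hypothetical plot data: stand age (years) and mean DBH (cm).
age = np.array([10, 15, 20, 25, 30, 40, 50, 60], dtype=float)
dbh = np.array([8.2, 13.5, 17.9, 21.4, 24.1, 28.0, 30.3, 31.8])

params, _ = curve_fit(mitscherlich, age, dbh, p0=[35.0, 0.05, 1.0], maxfev=10000)
print("a, b, c =", params)
print("predicted DBH at age 70:", mitscherlich(70.0, *params))
```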

    Hessian-aware Quantized Node Embeddings for Recommendation

    Graph Neural Networks (GNNs) have achieved state-of-the-art performance in recommender systems. Nevertheless, searching and ranking over a large item corpus usually incurs high latency, which limits the widespread deployment of GNNs in industry-scale applications. To address this issue, many methods compress user/item representations into a binary embedding space to reduce space requirements and accelerate inference. They also use the Straight-Through Estimator (STE) to prevent vanishing gradients during back-propagation. However, the STE often causes a gradient mismatch problem, leading to sub-optimal results. In this work, we present the Hessian-aware Quantized GNN (HQ-GNN) as an effective solution for discrete user/item representations that enable fast retrieval. HQ-GNN is composed of two components: a GNN encoder for learning continuous node embeddings and a quantization module for compressing the full-precision embeddings into low-bit ones. Consequently, HQ-GNN benefits from both lower memory requirements and faster inference compared to vanilla GNNs. To address the gradient mismatch problem in the STE, we further consider the quantization errors and their second-order derivatives for better stability. Experimental results on several large-scale datasets show that HQ-GNN achieves a good balance between latency and performance.
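
    The baseline behaviour that HQ-GNN refines is the straight-through estimator (STE) used to binarize continuous embeddings. A minimal PyTorch sketch of that baseline is given below; the Hessian-aware correction itself is not specified in the abstract and is therefore not reproduced here.

```python
# Sketch of the plain STE for binarizing node embeddings (assumed baseline).
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, emb):
        ctx.save_for_backward(emb)
        return torch.sign(emb)                    # {-1, +1} codes used for fast retrieval

    @staticmethod
    def backward(ctx, grad_out):
        (emb,) = ctx.saved_tensors
        # Plain STE passes the gradient through inside a clip region; this is the
        # step HQ-GNN refines with quantization-error / second-order terms.
        return grad_out * (emb.abs() <= 1).float()

# Usage on continuous GNN node embeddings `z` of shape (n_nodes, dim):
# codes = BinarizeSTE.apply(z)                    # binary embeddings for retrieval
```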

    Sharpness-Aware Graph Collaborative Filtering

    Graph Neural Networks (GNNs) have achieved impressive performance in collaborative filtering. However, GNNs tend to yield inferior performance when the distributions of training and test data are not well aligned. Moreover, training GNNs requires optimizing non-convex neural networks with an abundance of local and global minima, which may differ widely in their performance at test time. Thus, it is essential to choose the minima carefully. Here we propose an effective training scheme, called gSAM, based on the principle that flatter minima generalize better than sharper ones. To achieve this, gSAM regularizes the flatness of the weight-loss landscape by forming a bi-level optimization: the outer problem conducts standard model training, while the inner problem helps the model jump out of sharp minima. Experimental results show the superiority of gSAM.
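
    A generic sharpness-aware minimization (SAM) step is sketched below in PyTorch to make the bi-level idea concrete: the inner problem perturbs the weights uphill along the gradient, and the outer problem descends using the gradient computed at that perturbed point. The exact gSAM formulation may differ.

```python
# Generic SAM step (illustrative; not the paper's gSAM code).
import torch

def sam_step(model, loss_fn, batch, base_opt, rho=0.05):
    """Inner step: perturb weights toward higher loss; outer step: descend from there."""
    loss_fn(model, batch).backward()

    # Inner problem: move each parameter rho * g / ||g|| uphill (the "sharp" direction).
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    norm = torch.norm(torch.stack([g.norm() for g in grads]))
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (norm + 1e-12)
            p.add_(e)
            eps.append(e)
    model.zero_grad()

    # Outer problem: the gradient at the perturbed point drives the actual update.
    loss_fn(model, batch).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)                          # restore the original weights
    base_opt.step()
    base_opt.zero_grad()
```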

    The ‘Singapore Fever’ in China: policy mobility and mutation

    The ‘Singapore Model’ constitutes only the second explicit attempt by the Communist Party of China (CPC) to learn from a foreign country, following Mao Zedong’s pledge in the early 1950s to model ‘China’s tomorrow’ on the experience of the Soviet Union. This paper critically evaluates policy transfers from Singapore to China in the post-Mao era. It re-examines how this Sino-Singaporean regulatory engagement came about historically following Deng Xiaoping’s visit to Singapore in 1978, and offers a careful re-reading of the degree to which actual policy borrowing by China could transcend different state ideologies, abstract ideas and subjective attitudes. Particular focus is placed on the effects of CPC cadre training in Singapore universities and on policy mutation within two government-to-government projects, namely the Suzhou Industrial Park and the Tianjin Eco-City. The paper concludes that the ‘Singapore Model’, as applied in post-Mao China, casts institutional reform as an open-ended process of policy experimentation and adaptation that is fraught with tension and resistance.

    A study on joint modeling and data augmentation of multi-modalities for audio-visual scene classification

    In this paper, we propose two techniques, joint modeling and data augmentation, to improve system performance for audio-visual scene classification (AVSC). We employ pre-trained networks trained only on image datasets to extract video embeddings, whereas the audio embedding models are trained from scratch. We explore different neural network architectures for joint modeling to effectively combine the video and audio modalities. Moreover, data augmentation strategies are investigated to increase the size of the audio-visual training set. For the video modality, the effectiveness of several operations in RandAugment is verified. An audio-video joint mixup scheme is proposed to further improve AVSC performance. Evaluated on the development set of TAU Urban Audio-Visual Scenes 2021, our final system achieves the best accuracy, 94.2%, among all single AVSC systems submitted to DCASE 2021 Task 1b.
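
    A plausible reading of the audio-video joint mixup is that a single mixing coefficient is shared across the audio features, the video features, and the labels of a batch, as in the PyTorch sketch below; the paper's exact scheme may differ.

```python
# Sketch of an audio-video joint mixup with one shared mixing coefficient.
import torch

def joint_mixup(audio, video, labels, alpha=0.2):
    """audio: (B, Da), video: (B, Dv), labels: (B, n_classes) one-hot or soft targets."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(audio.size(0))
    mixed_audio = lam * audio + (1 - lam) * audio[perm]
    mixed_video = lam * video + (1 - lam) * video[perm]
    mixed_labels = lam * labels + (1 - lam) * labels[perm]
    return mixed_audio, mixed_video, mixed_labels
```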

    Treatment effects of rhBMP‐2 on invasiveness of oral carcinoma cell lines

    Objective: To determine whether recombinant human bone morphogenetic protein‐2 (rhBMP‐2) has biological effects on the invasiveness of human oral squamous cell carcinoma (OSCCA) cell lines. Study Design: Laboratory investigation using six human OSCCA cell lines, three with baseline gene expression of BMP‐2 and three without. Methods: The invasiveness of each cell line was measured using a Matrigel invasion assay with or without stimulation by rhBMP‐2. A tumor metastasis quantitative PCR array was used to establish whether findings from the invasion assay correlated with changes in gene expression. Results: There was a significant increase in tumor cell invasion in response to rhBMP‐2 in all BMP‐2‐positive cell lines but no change in the cell lines that did not express the BMP‐2 gene. Quantitative PCR revealed that changes in gene expression differed distinctly according to baseline BMP‐2 expression and favored a more metastatic genotype in the BMP‐2‐positive cells. Conclusions: Recombinant human BMP‐2 has an adverse biological effect on the invasiveness of human OSCCA cell lines in vitro. This adverse effect depends on the baseline gene expression of BMP‐2. Changes in the expression of genes involved in tumor metastasis correlated with the invasion assay findings. These data raise concern about the safe application of rhBMP‐2 for reconstruction of bone defects in oral cancer patients.

    A three-dimensional actively spreading bone repair material based on cell spheroids can facilitate the preservation of tooth extraction sockets

    Introduction: Achieving successful reconstruction of alveolar bone morphology remains a challenge because of the irregularity and complex microenvironment of tooth sockets. Biological materials, including hydroxyapatite and collagen, are used for alveolar ridge preservation; however, the healing effect is often unsatisfactory. Methods: Inspired by superwetting biomimetic materials, we constructed a 3D actively spreading bone repair material. It consists of a photocurable polyether F127 diacrylate (F127DA) hydrogel loaded with mixed spheroids of mesenchymal stem cells (MSCs) and vascular endothelial cells (ECs). Results: Biologically, cells in the spheroids were able to spread and migrate outwards and possessed both osteogenic and angiogenic potential. ECs also enhanced the osteogenic differentiation of MSCs. Mechanically, the excellent physical properties of the F127DA hydrogel ensured that it could be injected directly into the tooth socket and stabilized after light curing. In vivo experiments showed that the MSC-EC-F127DA system promoted bone repair and preserved the shape of the alveolar ridge within a short time. Discussion: In conclusion, the novel photocurable, injectable MSC-EC-F127DA hydrogel system achieved three-dimensional tissue infiltration and shows considerable therapeutic potential for complex oral bone defects.