204 research outputs found

    EVote – The revolution of Vote

    Get PDF
    This report focuses on an independently developed application intended primarily to support group voting solutions. The application was built with Android development tools, and Google Firebase was also used in the development process.

    The general linear hypothesis testing problem for multivariate functional data with applications

    Full text link
    As technology continues to advance at a rapid pace, the prevalence of multivariate functional data (MFD) has expanded across diverse disciplines, spanning biology, climatology, finance, and numerous other fields of study. Although MFD are encountered in various fields, the development of methods for testing hypotheses on mean functions, especially the general linear hypothesis testing (GLHT) problem for such data, has been limited. In this study, we propose and study a new global test for the GLHT problem for MFD, which includes the one-way FMANOVA, post hoc, and contrast analysis as special cases. The asymptotic null distribution of the test statistic is shown to be a chi-squared-type mixture that depends on the eigenvalues of the heteroscedastic covariance functions. The distribution of the chi-squared-type mixture can be well approximated by a three-cumulant matched chi-squared approximation whose approximation parameters are estimated from the data. By incorporating an adjustment coefficient, the proposed test performs effectively irrespective of the correlation structure in the functional data, even when dealing with a relatively small sample size. Additionally, the proposed test is shown to be root-n consistent, that is, it has nontrivial power against a local alternative. Simulation studies and a real data example demonstrate the finite-sample performance and broad applicability of the proposed test.
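
    The approximation step described above can be sketched concretely. The following Python snippet is a minimal illustration of a three-cumulant matched chi-squared approximation for a chi-squared-type mixture T = sum_i lambda_i * A_i with A_i i.i.d. chi2(1), matching beta0 + beta1 * chi2(d) to the first three cumulants of T. The eigenvalues `lam` and the observed statistic `t_obs` are placeholders; the paper's estimators of the eigenvalues and its adjustment coefficient are not reproduced here.

```python
import numpy as np
from scipy.stats import chi2

def three_cumulant_chi2_approx(lam):
    """Approximate T = sum_i lam_i * A_i, A_i ~ iid chi2(1),
    by beta0 + beta1 * chi2(d), matching the first three cumulants.
    Cumulants of T: kappa_r = 2**(r-1) * (r-1)! * sum(lam**r)."""
    lam = np.asarray(lam, dtype=float)
    k1 = np.sum(lam)             # mean
    k2 = 2.0 * np.sum(lam**2)    # variance
    k3 = 8.0 * np.sum(lam**3)    # third cumulant
    beta1 = k3 / (4.0 * k2)
    d = 8.0 * k2**3 / k3**2
    beta0 = k1 - beta1 * d
    return beta0, beta1, d

def approx_pvalue(t_obs, lam):
    """Right-tail p-value under the matched chi-squared approximation."""
    beta0, beta1, d = three_cumulant_chi2_approx(lam)
    return chi2.sf((t_obs - beta0) / beta1, df=d)

# toy usage with made-up eigenvalues
lam = np.array([3.0, 1.5, 0.8, 0.2])
print(approx_pvalue(t_obs=12.0, lam=lam))
```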

    Two-sample Behrens–Fisher problems for high-dimensional data: a normal reference F-type test

    Full text link
    The problem of testing the equality of mean vectors for high-dimensional data has been intensively investigated in the literature. However, most of the existing tests impose strong assumptions on the underlying group covariance matrices, which may not be satisfied or can hardly be checked in practice. In this article, an F-type test for two-sample Behrens–Fisher problems for high-dimensional data is proposed and studied. When the two samples are normally distributed and the null hypothesis is valid, the proposed F-type test statistic is shown to be an F-type mixture, a ratio of two independent chi-square-type mixtures. Under some regularity conditions and the null hypothesis, it is shown that the proposed F-type test statistic and the above F-type mixture have the same normal and non-normal limits. It is then justified to approximate the null distribution of the proposed F-type test statistic by that of the F-type mixture, resulting in the so-called normal reference F-type test. Since the F-type mixture is a ratio of two independent chi-square-type mixtures, we employ the Welch–Satterthwaite chi-square approximation to the distributions of the numerator and the denominator of the F-type mixture, respectively, resulting in an approximate F-distribution whose degrees of freedom can be consistently estimated from the data. The asymptotic power of the proposed F-type test is established. Two simulation studies are conducted and show that, in terms of size control, the proposed F-type test outperforms two existing competitors. The proposed F-type test is also illustrated by a real data example.
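
    As a rough illustration of the normal-reference construction described above, the sketch below applies the Welch–Satterthwaite two-cumulant matching separately to the numerator and denominator chi-square-type mixtures and reads an approximate p-value off an F distribution. The eigenvalue vectors `lam_num` and `lam_den` are placeholders; in the paper the degrees of freedom are estimated consistently from the data, which this sketch does not attempt.

```python
import numpy as np
from scipy.stats import f as f_dist

def ws_df(lam):
    """Welch--Satterthwaite approximate degrees of freedom for a
    chi-square-type mixture sum_i lam_i * chi2(1): match the first
    two cumulants of the mixture with beta * chi2(d)."""
    lam = np.asarray(lam, dtype=float)
    return np.sum(lam) ** 2 / np.sum(lam**2)

def f_type_pvalue(f_obs, lam_num, lam_den):
    """Approximate right-tail p-value for an F-type mixture: the ratio of
    two independent chi-square-type mixtures, each normalized by its mean,
    is approximated by an F(d1, d2) distribution."""
    d1, d2 = ws_df(lam_num), ws_df(lam_den)
    return f_dist.sf(f_obs, d1, d2)

# toy usage with placeholder eigenvalues of the two covariance structures
print(f_type_pvalue(f_obs=2.4,
                    lam_num=[2.0, 1.0, 0.5],
                    lam_den=[1.8, 0.9, 0.4]))
```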

    Robust Core-Periphery Constrained Transformer for Domain Adaptation

    Full text link
    Unsupervised domain adaptation (UDA) aims to learn transferable representations across domains. Recently, a few UDA works have successfully applied Transformer-based methods and achieved state-of-the-art (SOTA) results. However, it remains challenging when there exists a large domain gap between the source and target domains. Inspired by humans' exceptional ability to transfer knowledge from familiar to uncharted domains, we apply an organizational structure that is universally present in human functional brain networks, i.e., the core-periphery principle, to design the Transformer and improve its UDA performance. In this paper, we propose a novel brain-inspired robust core-periphery constrained transformer (RCCT) for unsupervised domain adaptation, which brings a large margin of performance improvement on various datasets. Specifically, in RCCT, the self-attention operation across image patches is rescheduled by an adaptively learned weighted graph with a Core-Periphery structure (CP graph), where the information communication and exchange between image patches is manipulated and controlled by the connection strength, i.e., the edge weight of the learned weighted CP graph. Besides, since the data in domain adaptation tasks can be noisy, we intentionally add perturbations to the patches in the latent space to improve model robustness and ensure that robust learned weighted core-periphery graphs are generated. Extensive evaluations are conducted on several widely tested UDA benchmarks. Our proposed RCCT consistently performs best compared to existing works, reaching 88.3% on Office-Home, 95.0% on Office-31, 90.7% on VisDA-2017, and 46.0% on DomainNet. Comment: Core-Periphery, ViT, unsupervised domain adaptation.
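
    To make the rescheduling idea concrete, the following Python sketch shows one way a weighted Core-Periphery graph could modulate patch-to-patch attention scores. It is an illustration only: the toy CP graph built here is hand-crafted, whereas RCCT learns the weighted graph adaptively, and the latent-space perturbation scheme is omitted.

```python
import torch
import torch.nn.functional as F

def cp_weighted_attention(q, k, v, cp_graph):
    """Self-attention over patch tokens where the score between patches i and j
    is rescaled by the edge weight cp_graph[i, j] of a weighted Core-Periphery
    graph, so communication strength follows the graph's connection strength.

    q, k, v: (num_patches, dim) tensors; cp_graph: (num_patches, num_patches)
    with large weights on core-core edges and small weights elsewhere."""
    dim = q.size(-1)
    scores = (q @ k.t()) / dim**0.5   # standard scaled dot-product scores
    scores = scores * cp_graph        # modulate scores by CP edge weights
    attn = F.softmax(scores, dim=-1)
    return attn @ v

# toy usage: 6 patches, the first 2 treated as "core" nodes
n, d = 6, 16
q = k = v = torch.randn(n, d)
core = torch.tensor([1., 1., 0., 0., 0., 0.])
cp_graph = torch.clamp(core[:, None] + core[None, :], max=1.0)  # edges touching a core node get weight 1
cp_graph = cp_graph + 0.1 * (1 - cp_graph)                      # weak periphery-periphery links
out = cp_weighted_attention(q, k, v, cp_graph)
print(out.shape)
```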

    Seeding Rate and Row-Spacing Effects on Seed Yield and Yield Components of Leymus chinensis (Trin.) Tzvel.

    Get PDF
    Chinese sheepgrass (Leymus chinensis (Trin.) Tzvel.) is widely distributed in the eastern portion of the Inner Mongolian Plateau and the Songnen Grassland of China. This grass is highly salt-, cold- and drought-tolerant and has been a major source of forage for cows and other ruminants in China (Gao et al. 2012). Seed yield of this grass is very low under native conditions because of its low heading percentage and low percentage of seed set (Wang et al. 2010). The Hexi Corridor, located in China’s northwestern Gansu Province, is the seed production center of China because of its dry, sunny climate and favorable irrigation conditions. Our field study was conducted to determine the optimum seeding rate and row spacing for seed production of Chinese sheepgrass in the Hexi Corridor, where this grass has not previously been grown.

    Exploring the Influence of Information Entropy Change in Learning Systems

    Full text link
    In this work, we explore the influence of entropy change in deep learning systems by adding noise to the inputs/latent features. The applications in this paper focus on deep learning tasks within computer vision, but the proposed theory can be further applied to other fields. Noise is conventionally viewed as a harmful perturbation in various deep learning architectures, such as convolutional neural networks (CNNs) and vision transformers (ViTs), as well as in different learning tasks such as image classification and transfer learning. However, this paper aims to rethink whether this conventional proposition always holds. We demonstrate that specific noise can boost the performance of various deep architectures under certain conditions. Using information entropy to define the complexity of a task, we theoretically prove the enhancement gained from positive noise, which reduces this task complexity, and experimentally show significant performance gains on large image datasets such as ImageNet. We categorize noise into two types, positive noise (PN) and harmful noise (HN), based on whether the noise helps reduce the complexity of the task. Extensive experiments with CNNs and ViTs show performance improvements from proactively injecting positive noise, where we achieved an unprecedented top-1 accuracy of over 95% on ImageNet. Both theoretical analysis and empirical evidence confirm that the presence of positive noise can benefit the learning process, while the traditionally perceived harmful noise indeed impairs deep learning models. The different roles of noise offer new explanations for deep models on specific tasks and provide a new paradigm for improving model performance. Moreover, this reminds us that we can influence the performance of learning systems via information entropy change. Comment: Information Entropy, CNN, Transformer.
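
    The noise-injection idea can be illustrated with a small wrapper module. The sketch below adds Gaussian noise to latent features during training; `sigma` is a placeholder hyperparameter, and whether a given noise level behaves as positive or harmful noise in the paper's sense is an empirical question this sketch does not settle.

```python
import torch
import torch.nn as nn

class NoisyLatent(nn.Module):
    """Wrap a feature extractor and inject additive Gaussian noise into its
    latent features during training, to probe how the resulting entropy
    change affects learning (illustrative only)."""
    def __init__(self, backbone, sigma=0.1):
        super().__init__()
        self.backbone = backbone
        self.sigma = sigma

    def forward(self, x):
        z = self.backbone(x)
        if self.training and self.sigma > 0:
            z = z + self.sigma * torch.randn_like(z)  # perturb latent features
        return z

# toy usage with a stand-in backbone
model = NoisyLatent(nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128)), sigma=0.05)
model.train()
feats = model(torch.randn(4, 3, 32, 32))
print(feats.shape)
```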

    The nestin-expressing and non-expressing neurons in rat basal forebrain display different electrophysiological properties and project to hippocampus

    Get PDF
    Background: Nestin-immunoreactive (nestin-ir) neurons have been identified in the medial septal/diagonal band complex (MS/DBB) of the adult rat and human, but the significance of nestin expression in functional neurons is not clear. This study investigated the electrophysiological properties and neurochemical phenotypes of nestin-expressing (nestin+) neurons using whole-cell recording combined with single-cell RT-PCR, to explore the significance of nestin expression in functional MS/DBB neurons. Retrograde labelling and immunofluorescence were used to investigate the nestin+ neuron-related circuit in the septo-hippocampal pathway.

    Results: Single-cell RT-PCR showed that 87.5% (35/40) of nestin+ cells expressed choline acetyltransferase mRNA (ChAT+), whereas only 44.3% (35/79) of ChAT+ cells expressed nestin mRNA. Furthermore, none of the nestin+ cells expressed glutamic acid decarboxylase 67 (GAD67) or vesicular glutamate transporter (VGLUT) mRNA. All of the recorded nestin+ cells were excitable and demonstrated slow-firing properties, which were distinct from those of GAD67 or VGLUT mRNA-positive neurons. These results show that MS/DBB cholinergic neurons can be divided into nestin-expressing cholinergic neurons (NEChs) and nestin non-expressing cholinergic neurons (NNChs). Interestingly, NEChs had higher excitability and received stronger spontaneous excitatory synaptic inputs than NNChs. Retrograde labelling combined with choline acetyltransferase and nestin immunofluorescence showed that both NEChs and NNChs project to the hippocampus.

    Conclusions: These results suggest that there are two parallel cholinergic septo-hippocampal pathways that may have different functions. The significance of nestin expression in functional neurons is discussed.

    On-chip integrated process-programmable sub-10 nm thick molecular devices switching between photomultiplication and memristive behaviour

    Get PDF
    Molecular devices constructed from sub-10 nm thick molecular layers are promising candidates for a new generation of integratable nanoelectronic applications. Here, we report integrated molecular devices based on ultrathin copper phthalocyanine/fullerene hybrid layers with microtubular soft-contacts, which exhibit process-programmable functionality switching between photomultiplication and memristive behaviour. The local electric field at the interface between the polymer bottom electrode and the enclosed molecular channels modulates the ionic-electronic charge interaction and hence determines the transition of the device function. When ions are not driven into the molecular channels at a low interface electric field, photogenerated holes are trapped as electronic space charges, resulting in photomultiplication with a high external quantum efficiency. Once mobile ions are polarized and accumulated as ionic space charges in the molecular channels at a high interface electric field, the molecular devices show ferroelectric-like memristive switching with remarkable resistive ON/OFF and rectification ratios.

    Water-Soluble N-Acetyl-L-cysteine-Capped CdTe Quantum Dots Application for Hg(II) Detection

    Get PDF
    A simple, rapid, and specific method for Hg(II) detection has been proposed based on the fluorescence change of N-acetyl-L-cysteine-capped CdTe quantum dots (QDs). The presence of Hg(II) ions quenches the fluorescence of the QDs at 565 nm and meanwhile produces a new peak in the 700–860 nm wavelength range. The linear response range is 20–430 nM, with a detection limit of 8.0 nM Hg(II). The position of the new peak was found to be independent of the size of the QDs. Furthermore, the mechanism of the quenching of the QD fluorescence by Hg(II) and the appearance of the new peak in the near-infrared region were also discussed and deduced from the ultraviolet absorption, fluorescence, and X-ray photoelectron spectra.
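
    For context on figures of merit such as the reported 8.0 nM detection limit, the snippet below sketches a generic linear fluorescence-quenching calibration and a 3-sigma/slope detection-limit estimate. The concentrations, signals, and blank standard deviation are invented placeholders, not data from this work, and the abstract does not state the authors' exact procedure.

```python
import numpy as np

# Generic sketch of a linear fluorescence-quenching calibration and a
# 3*sigma/slope detection-limit estimate; all numbers below are placeholders.
conc_nM = np.array([20, 100, 200, 300, 430], dtype=float)  # Hg(II) within the reported linear range
signal = np.array([0.95, 0.78, 0.57, 0.36, 0.10])          # hypothetical normalized QD emission at 565 nm

slope, intercept = np.polyfit(conc_nM, signal, 1)          # least-squares calibration line
sigma_blank = 0.002                                        # hypothetical blank standard deviation
lod_nM = 3 * sigma_blank / abs(slope)                      # common 3*sigma/slope definition
print(f"slope = {slope:.4e} per nM, LOD ~ {lod_nM:.1f} nM")
```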

    Core-Periphery Principle Guided Redesign of Self-Attention in Transformers

    Full text link
    Designing more efficient, reliable, and explainable neural network architectures is critical to studies based on artificial intelligence (AI) techniques. Previous studies, by post-hoc analysis, have found that the best-performing artificial neural networks (ANNs) surprisingly resemble biological neural networks (BNNs), which indicates that ANNs and BNNs may share some common principles to achieve optimal performance in either machine learning or cognitive/behavioral tasks. Inspired by this phenomenon, we proactively instill organizational principles of BNNs to guide the redesign of ANNs. We leverage the Core-Periphery (CP) organization, which is widely found in human brain networks, to guide the information communication mechanism in the self-attention of the vision transformer (ViT) and name this novel framework CP-ViT. In CP-ViT, the attention operation between nodes is defined by a sparse graph with a Core-Periphery structure (CP graph), where the core nodes are redesigned and reorganized to play an integrative role and serve as a center for the periphery nodes to exchange information. We evaluated the proposed CP-ViT on multiple public datasets, including medical image datasets (INbreast) and natural image datasets. Interestingly, by incorporating the BNN-derived principle (CP structure) into the redesign of ViT, our CP-ViT outperforms other state-of-the-art ANNs. In general, our work advances the state of the art in three aspects: 1) this work provides novel insights for brain-inspired AI: we can utilize the principles found in BNNs to guide and improve ANN architecture design; 2) we show that there exist sweet spots of CP graphs that lead to CP-ViTs with significantly improved performance; and 3) the core nodes in CP-ViT correspond to task-related, meaningful, and important image patches, which can significantly enhance the interpretability of the trained deep model. Comment: Core-periphery, functional brain networks, ViT.
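
    The sparse CP-graph idea can be sketched as an attention mask. In the toy example below, core-core and core-periphery edges are kept while periphery-periphery edges are removed, so core tokens act as integrative hubs; the choice of core nodes here is arbitrary, whereas CP-ViT searches for sweet-spot CP graphs and ties core nodes to task-relevant patches, which this sketch does not reproduce.

```python
import torch

def build_cp_mask(num_patches, num_core):
    """Binary Core-Periphery graph over patch tokens: an edge exists iff at
    least one endpoint is a core node, so periphery-periphery edges are
    removed and core nodes serve as hubs. Core selection here is just the
    first `num_core` indices (illustrative only)."""
    is_core = torch.zeros(num_patches, dtype=torch.bool)
    is_core[:num_core] = True
    edges = is_core[:, None] | is_core[None, :]
    edges.fill_diagonal_(True)   # keep self-loops
    return edges

def masked_attention_scores(scores, cp_mask):
    """Apply the CP graph as a sparse attention mask: scores on missing
    (periphery-periphery) edges are set to -inf before the softmax."""
    return scores.masked_fill(~cp_mask, float("-inf")).softmax(dim=-1)

# toy usage: 8 patch tokens, 3 core nodes
scores = torch.randn(8, 8)
attn = masked_attention_scores(scores, build_cp_mask(8, 3))
print(attn.sum(dim=-1))  # each row sums to 1
```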