
    Sparse Functional Boxplots for Multivariate Curves

    This paper introduces the sparse functional boxplot and the intensity sparse functional boxplot as practical exploratory tools. Besides handling complete functional data, they can be applied to sparse univariate and multivariate functional data. The sparse functional boxplot, built on the functional boxplot, displays sparseness proportions within the 50% central region. The intensity sparse functional boxplot indicates the relative intensity of fitted sparse point patterns in the central region. The two-stage functional boxplot, a derivative of the functional boxplot designed to detect outliers, is furthermore extended to its sparse form. We also contribute improvements to sparse data fitting and to depth notions for sparse multivariate functional data. In a simulation study, we evaluate the goodness of the data fitting and several depth proposals for sparse multivariate functional data, and compare outlier detection between the sparse functional boxplot and its two-stage version. The practical applications of the sparse functional boxplot and intensity sparse functional boxplot are illustrated with two public health datasets. Supplementary materials and code are available for readers to apply our visualization tools and replicate the analysis. (Comment: 33 pages, 7 figures)
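
    To make the central idea concrete, here is a minimal Python sketch (not the authors' implementation) of the sparseness proportion displayed by the sparse functional boxplot: rank curves by a depth, keep the 50% deepest, and report the fraction of unobserved values at each time point. The simple pointwise depth used below is an assumption for brevity; the paper relies on proper (multivariate) functional depths.

```python
import numpy as np

def sparseness_in_central_region(curves):
    """curves: (n_curves, n_times) array, NaN marks unobserved points.

    Returns the per-time proportion of missing values among the 50%
    deepest curves (a rough stand-in for the depth used in the paper).
    """
    n, _ = curves.shape
    depths = np.zeros(n)
    for i in range(n):
        obs = ~np.isnan(curves[i])
        d = []
        for t in np.where(obs)[0]:
            col = curves[:, t]
            col = col[~np.isnan(col)]
            # How central is this value among the curves observed at time t?
            below = np.mean(col <= curves[i, t])
            above = np.mean(col >= curves[i, t])
            d.append(min(below, above))
        depths[i] = np.mean(d) if d else 0.0
    # Central region = 50% deepest curves.
    central = np.argsort(depths)[-(n // 2):]
    # Sparseness proportion at each time point within the central region.
    return np.mean(np.isnan(curves[central]), axis=0)
```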

    Global Depths for Irregularly Observed Multivariate Functional Data

    Two frameworks for multivariate functional depth based on multivariate depths are introduced in this paper. The first framework is multivariate functional integrated depth, and the second involves multivariate functional extremal depth, an extension of the extremal depth for univariate functional data. In each framework, global and local multivariate functional depths are proposed. The properties of the population multivariate functional depths and the consistency of the finite-sample depths to their population versions are established. In addition, finite-sample depths under irregularly observed time grids are estimated. As a by-product, the simplified sparse functional boxplot and the simplified intensity sparse functional boxplot are proposed for visualization without data reconstruction. A simulation study demonstrates the advantages of global multivariate functional depths over local multivariate functional depths in outlier detection and running time for big functional data. An application of our frameworks to cyclone track data demonstrates the excellent performance of our global multivariate functional depths. (Comment: 29 pages, 6 figures)
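
    As an illustration of the integrated-depth framework (one of the two proposed), the sketch below computes a multivariate functional integrated depth on a regular grid by averaging a cross-sectional multivariate depth over time. The choice of spatial depth and the uniform weighting are assumptions made for brevity; the paper's estimators for irregularly observed grids are more involved.

```python
import numpy as np

def spatial_depth(x, sample):
    """Spatial depth of a point x (shape (p,)) w.r.t. a sample (n, p)."""
    diffs = x - sample
    norms = np.linalg.norm(diffs, axis=1)
    keep = norms > 1e-12                      # drop the point itself / exact ties
    units = diffs[keep] / norms[keep, None]
    return 1.0 - np.linalg.norm(units.mean(axis=0))

def integrated_depth(data):
    """data: (n_curves, n_times, p) regularly observed multivariate curves.

    Integrated depth of each curve = average of its cross-sectional
    multivariate (here: spatial) depth over the common time grid.
    """
    n, T, _ = data.shape
    depths = np.zeros(n)
    for t in range(T):
        cross_section = data[:, t, :]
        depths += np.array([spatial_depth(x, cross_section) for x in cross_section])
    return depths / T
```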

    A transcription factor for cold sensation!

    The ability to feel hot and cold is critical for animals and human beings to survive in the natural environment. Unlike that of other sensations, the physiology of cold sensation is mostly unknown. In the present study, we use genetically modified mice that do not express nerve growth factor-inducible B (NGFIB) to investigate the possible role of NGFIB in cold sensation. We found that genetic deletion of NGFIB selectively affected behavioral responses to cold stimuli, while behavioral responses to noxious heat or mechanical stimuli remained normal. Furthermore, behavioral responses remained reduced or blocked in NGFIB knockout mice even after repetitive application of cold stimuli. Our results provide strong evidence that NGFIB is the first transcription factor shown to determine the ability of animals to respond to cold stimulation.

    How to Test the Randomness from the Wireless Channel for Security?

    We revisit the traditional framework of wireless secret key generation, where two parties leverage the wireless channel randomness to establish a secret key. The essence of the framework is to quantize channel randomness into bit sequences for key generation. Conducting randomness tests on such bit sequences has been a common practice to provide confidence that they are indeed random. Interestingly, despite different settings in the tests, existing studies interpret the results in the same way: passing the tests means that the bit sequences are indeed random. In this paper, we investigate how to properly test the wireless channel randomness to ensure sufficient security strength and key generation efficiency. In particular, we define an adversary model that leverages the imperfect randomness of the wireless channel to search for the generated key, and we create a guideline for setting up randomness testing and privacy amplification to eliminate security loss and achieve an efficient key generation rate. We use theoretical analysis and comprehensive experiments to reveal that common practice misuses randomness testing and privacy amplification, leading to (i) no assurance of key security strength and (ii) a low key generation rate. After revision following our guideline, the security loss can be eliminated and the key generation rate increased significantly.
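
    For context, the common-practice pipeline that the paper scrutinizes looks roughly like the following sketch: quantize channel measurements (here synthetic RSSI values, purely illustrative) into bits and apply a NIST SP 800-22 style frequency (monobit) test. Passing such a test is exactly the evidence the paper argues is insufficient on its own.

```python
import numpy as np
from math import erfc, sqrt

def quantize_rssi(rssi, guard=0.0):
    """Single-threshold quantizer: 1 above median + guard, 0 below
    median - guard; samples inside the guard band are dropped."""
    med = np.median(rssi)
    bits = []
    for r in rssi:
        if r > med + guard:
            bits.append(1)
        elif r < med - guard:
            bits.append(0)
    return np.array(bits, dtype=int)

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test; returns the p-value.
    p >= 0.01 is the usual pass criterion."""
    n = len(bits)
    s = np.sum(2 * bits - 1)          # map {0,1} -> {-1,+1} and sum
    return erfc(abs(s) / sqrt(2.0 * n))

# Example with synthetic measurements (placeholders, not real channel data).
rng = np.random.default_rng(0)
rssi = rng.normal(-60.0, 3.0, size=4096)
bits = quantize_rssi(rssi, guard=0.5)
print(f"{len(bits)} bits, monobit p-value = {monobit_test(bits):.3f}")
```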

    When Federated Learning Meets Pre-trained Language Models' Parameter-Efficient Tuning Methods

    With increasing privacy concerns over data, recent studies have made significant progress in applying federated learning (FL) to privacy-sensitive natural language processing (NLP) tasks. Much of the literature suggests that fully fine-tuning pre-trained language models (PLMs) in the FL paradigm can mitigate the data heterogeneity problem and close the performance gap with centralized training. However, large PLMs bring the curse of prohibitive communication overhead and local model adaptation costs to the FL system. To this end, we introduce various parameter-efficient tuning (PETuning) methods into federated learning. Specifically, we provide a holistic empirical study of representative PLM tuning methods in FL. The experimental results cover the analysis of data heterogeneity levels, data scales, and different FL scenarios. The overall communication overhead can be significantly reduced by locally tuning and globally aggregating lightweight model parameters while maintaining acceptable performance in various FL settings. To facilitate research on PETuning in FL, we also develop a federated tuning framework, FedPETuning, which allows practitioners to conveniently exploit different PETuning methods under the FL training paradigm. The source code is available at https://github.com/iezhuozhuo/FedETuning/tree/deltaTuning.
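
    The communication saving comes from exchanging only the lightweight PETuning parameters. The sketch below illustrates that idea with a weighted FedAvg restricted to the trainable (e.g., adapter or LoRA) parameters; the function names are illustrative and do not reflect the FedPETuning API.

```python
from typing import Dict, List
import torch

def extract_petuning_params(model: torch.nn.Module) -> Dict[str, torch.Tensor]:
    """Pick out only the trainable (unfrozen) parameters, i.e. the PETuning
    modules, for upload to the server; the frozen PLM backbone stays local."""
    return {n: p.detach().clone() for n, p in model.named_parameters() if p.requires_grad}

def aggregate_petuning_params(
    client_updates: List[Dict[str, torch.Tensor]],
    client_sizes: List[int],
) -> Dict[str, torch.Tensor]:
    """Weighted FedAvg restricted to the lightweight PETuning parameters.
    Communication cost scales with the adapter size, not the full PLM."""
    total = sum(client_sizes)
    keys = client_updates[0].keys()
    return {
        k: sum(w * u[k] for w, u in zip(client_sizes, client_updates)) / total
        for k in keys
    }
```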

    Parrot-Trained Adversarial Examples: Pushing the Practicality of Black-Box Audio Attacks against Speaker Recognition Models

    Audio adversarial examples (AEs) have posed significant security challenges to real-world speaker recognition systems. Most black-box attacks still require certain information from the speaker recognition model to be effective (e.g., continuous probing or knowledge of similarity scores). This work aims to push the practicality of black-box attacks by minimizing the attacker's knowledge about the target speaker recognition model. Although it is not feasible for an attacker to succeed with completely zero knowledge, we assume that the attacker only knows a short (a few seconds) speech sample of the target speaker. Without any probing to gain further knowledge about the target model, we propose a new mechanism, called parrot training, to generate AEs against the target model. Motivated by recent advancements in voice conversion (VC), we propose to use this single short speech sample to generate additional synthetic speech samples that sound like the target speaker, called parrot speech. We then use these parrot speech samples to train a parrot-trained (PT) surrogate model for the attacker. Under a joint transferability and perception framework, we investigate different ways to generate AEs on the PT model (called PT-AEs) so that they transfer to a black-box target model with high probability while retaining good human perceptual quality. Real-world experiments show that the resultant PT-AEs achieve attack success rates of 45.8%-80.8% against open-source models in the digital-line scenario and 47.9%-58.3% against smart devices, including Apple HomePod (Siri), Amazon Echo, and Google Home, in the over-the-air scenario.
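
    Once a PT surrogate model is available, AEs can be crafted on it with standard white-box techniques and then transferred. The sketch below shows a generic PGD-style perturbation against a hypothetical surrogate speaker-embedding model; it is a simplified baseline for intuition, not the paper's joint transferability-perception procedure, and all names and budgets are assumptions.

```python
import torch

def pgd_audio_ae(surrogate, audio, target_emb, eps=0.002, alpha=5e-4, steps=100):
    """PGD-style attack on a surrogate speaker-embedding model: nudge the
    waveform so its embedding moves toward the target speaker's embedding,
    under an L-infinity budget eps. `surrogate` maps a waveform tensor to an
    embedding; it is a placeholder, not the paper's API."""
    delta = torch.zeros_like(audio, requires_grad=True)
    for _ in range(steps):
        emb = surrogate(audio + delta)
        loss = torch.nn.functional.cosine_similarity(emb, target_emb, dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend similarity to target
            delta.clamp_(-eps, eps)              # stay within the perturbation budget
            delta.grad.zero_()
    return (audio + delta).detach()
```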

    Colorimetric sensing of copper(II) based on catalytic etching of gold nanoparticles

    Based on the catalytic etching of gold nanoparticles (AuNPs), a label-free colorimetric probe was developed for the detection of Cu2+ in aqueous solutions. AuNPs were first stabilized by hexadecyltrimethylammonium bromide in NH3-NH4Cl (0.6 M/0.1 M) solution. Thiosulfate (S2O3^2-) ions were then introduced, and the AuNPs were gradually dissolved by dissolved oxygen. With the further addition of Cu2+, Cu(NH3)4^2+ oxidized the AuNPs to produce Au(S2O3)2^3- and Cu(S2O3)3^5-, while the latter was oxidized back to Cu(NH3)4^2+ by dissolved oxygen. The dissolution rate of the AuNPs was thereby remarkably promoted, with Cu2+ acting as the catalyst. The process continued owing to the sufficient supply of dissolved oxygen, and the AuNPs were rapidly etched. Meanwhile, a visible color change from red to colorless was observed. Subsequent tests confirmed this non-aggregation-based method as a sensitive (LOD = 5.0 nM, or 0.32 ppb) and selective (at least 100-fold over other metal ions except Pb2+ and Mn2+) way to detect Cu2+ (linear range 10-80 nM). Moreover, our results show that the color change induced by 40 nM Cu2+ can easily be observed by the naked eye, which is particularly suitable for fast on-site investigations. (C) 2013 Elsevier B.V. All rights reserved.
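
    As a quick sanity check on the reported detection limit, converting 5.0 nM Cu2+ to a mass concentration (assuming a dilute aqueous solution, 1 L ≈ 1 kg, and M(Cu) = 63.55 g/mol) indeed gives about 0.32 ppb:

```python
# Convert the reported LOD of 5.0 nM Cu(2+) to ppb (µg/L).
lod_nM = 5.0
molar_mass_cu = 63.55                                 # g/mol
lod_ug_per_L = lod_nM * 1e-9 * molar_mass_cu * 1e6    # mol/L -> g/L -> µg/L
print(f"LOD ≈ {lod_ug_per_L:.2f} ppb")                # ≈ 0.32 ppb
```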

    FACTUAL: A Benchmark for Faithful and Consistent Textual Scene Graph Parsing

    Textual scene graph parsing has become increasingly important in various vision-language applications, including image caption evaluation and image retrieval. However, existing scene graph parsers that convert image captions into scene graphs often suffer from two types of errors. First, the generated scene graphs fail to capture the true semantics of the captions or the corresponding images, resulting in a lack of faithfulness. Second, the generated scene graphs are highly inconsistent, with the same semantics represented by different annotations. To address these challenges, we propose a novel dataset obtained by re-annotating the captions in Visual Genome (VG) with a new intermediate representation called FACTUAL-MR. FACTUAL-MR can be directly converted into faithful and consistent scene graph annotations. Our experimental results clearly demonstrate that a parser trained on our dataset outperforms existing approaches in terms of faithfulness and consistency. This improvement leads to a significant performance boost in both image caption evaluation and zero-shot image retrieval tasks. Furthermore, we introduce a novel metric for measuring scene graph similarity, which, when combined with the improved scene graph parser, achieves state-of-the-art (SOTA) results on multiple benchmark datasets for the aforementioned tasks. The code and dataset are available at https://github.com/zhuang-li/FACTUAL. (Comment: 9 pages, ACL 2023 Findings)
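
    For a sense of how scene-graph consistency can be scored, the sketch below compares two parses represented as (subject, predicate, object) triple sets using an exact-match F1. This is a simple baseline for illustration only, not the similarity metric introduced in the paper.

```python
from typing import Set, Tuple

Triple = Tuple[str, str, str]   # (subject, predicate, object)

def triple_f1(pred: Set[Triple], gold: Set[Triple]) -> float:
    """Exact-match F1 between two scene graphs given as triple sets."""
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Inconsistent annotation ("bike" vs "bicycle") lowers the score.
pred = {("girl", "ride", "bike"), ("bike", "has", "basket")}
gold = {("girl", "ride", "bicycle"), ("bike", "has", "basket")}
print(triple_f1(pred, gold))   # 0.5
```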