
    ARPA Whitepaper

    We propose a secure computation solution for blockchain networks. The correctness of computation is verifiable even under a malicious-majority condition using an information-theoretic Message Authentication Code (MAC), and privacy is preserved using secret sharing. With a state-of-the-art multiparty computation (MPC) protocol deployed as a layer-2 solution, our privacy-preserving computation cryptographically guarantees data security on the blockchain while offloading the heavy computation to a few nodes. This breakthrough has several implications for the future of decentralized networks. First, secure computation can be used to support Private Smart Contracts, where consensus is reached without exposing the information in the public contract. Second, it enables data to be shared and used in a trustless network without disclosing the raw data while it is in use, safely separating data ownership from data usage. Last but not least, the computation and verification processes are separated, which can be viewed as computational sharding; this effectively makes transaction-processing speed scale linearly with the number of participating nodes. Our objective is to deploy our secure computation network as a layer-2 solution to any blockchain system. Smart Contracts will be used as the bridge linking the blockchain and computation networks, and as verifiers to ensure that outsourced computation is completed correctly. To achieve this, we first develop a general MPC network with advanced features: 1) Secure Computation, 2) Off-chain Computation, 3) Verifiable Computation, and 4) Support for dApps' needs such as privacy-preserving data exchange
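
    To make the two primitives named above concrete, here is a minimal sketch (in Python) of additive secret sharing combined with a SPDZ-style information-theoretic MAC. The field size and helper names are illustrative assumptions, not ARPA's actual protocol; in a real MPC deployment the MAC key itself is secret-shared rather than held in one place.

        # Minimal sketch (not ARPA's actual protocol): additive secret sharing
        # over a prime field with a SPDZ-style information-theoretic MAC.
        import secrets

        P = 2**61 - 1                  # a Mersenne prime; real systems pick larger fields
        ALPHA = secrets.randbelow(P)   # global MAC key (secret-shared in practice)

        def share(x, n):
            """Split x into n additive shares, each tagged with a MAC share."""
            xs = [secrets.randbelow(P) for _ in range(n - 1)]
            xs.append((x - sum(xs)) % P)
            mac = (ALPHA * x) % P      # information-theoretic MAC on x
            ms = [secrets.randbelow(P) for _ in range(n - 1)]
            ms.append((mac - sum(ms)) % P)
            return list(zip(xs, ms))

        def reconstruct(shares):
            """Recombine shares and verify the MAC; raises on tampering."""
            x = sum(s for s, _ in shares) % P
            mac = sum(m for _, m in shares) % P
            if mac != (ALPHA * x) % P:
                raise ValueError("MAC check failed: result was tampered with")
            return x

        # Addition is local: parties add value and MAC shares component-wise.
        a, b = 12345, 67890
        sa, sb = share(a, 3), share(b, 3)
        sc = [((xa + xb) % P, (ma + mb) % P) for (xa, ma), (xb, mb) in zip(sa, sb)]
        assert reconstruct(sc) == (a + b) % P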

    Discordance between cosmogenic nuclide concentrations in amalgamated sands and individual fluvial pebbles in an arid zone catchment

    Based on cosmogenic 10Be and 26Al analyses of 15 individual detrital quartz pebbles (16–21 mm) and cosmogenic 10Be in amalgamated medium sand (0.25–0.50 mm), all collected from the outlet of the upper Gaub River catchment in Namibia, quartz pebbles yield a substantially lower average denudation rate than that yielded by the amalgamated sand sample. 10Be and 26Al concentrations in the 15 individual pebbles span nearly two orders of magnitude (0.22 ± 0.01 to 20.74 ± 0.52 × 10⁶ 10Be atoms g⁻¹ and 1.35 ± 0.09 to 72.76 ± 2.04 × 10⁶ 26Al atoms g⁻¹, respectively) and yield average denudation rates of ∼0.7 m Myr⁻¹ (10Be) and ∼0.9 m Myr⁻¹ (26Al). In contrast, the amalgamated sand yields an average 10Be concentration of 0.77 ± 0.03 × 10⁶ atoms g⁻¹ and an associated mean denudation rate of 9.6 ± 1.1 m Myr⁻¹, an order of magnitude greater than the rates obtained from the pebbles. The inconsistency between the 10Be and 26Al in the pebbles and the 10Be in the amalgamated sand is likely due to the combined effect of differential sediment sourcing and longer sediment transport times for the pebbles compared to the sand-sized grains. The amalgamated sands leaving the catchment are an aggregate of grains originating from all quartz-bearing rocks in all parts of the catchment; thus, the cosmogenic nuclide inventories of these sands record the overall average lowering rate of the landscape. The pebbles originate from quartz vein outcrops throughout the catchment, and the episodic erosion of the latter means that the pebbles will have higher nuclide inventories than the surrounding bedrock and soil, and therefore also higher than the amalgamated sand grains. The order-of-magnitude grain-size bias observed in the Gaub has important implications for using cosmogenic nuclide abundances in depositional surfaces because in arid environments like our study catchment, pebble-sized clasts yield substantially underestimated palaeo-denudation rates. Our results highlight the importance of carefully considering geomorphology and grain size when interpreting cosmogenic nuclide data in depositional surfaces
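
    For reference, the conversion from a measured nuclide concentration to a denudation rate uses the standard steady-state erosion model; the symbols below are the conventional ones (P₀: surface production rate in atoms g⁻¹ yr⁻¹; λ: decay constant, ∼5.0 × 10⁻⁷ yr⁻¹ for 10Be; Λ: attenuation length, ∼160 g cm⁻²; ρ: rock density), not the paper's site-specific calibration:

        N = \frac{P_0}{\lambda + \rho\,\varepsilon/\Lambda}
        \qquad\Longrightarrow\qquad
        \varepsilon = \frac{\Lambda}{\rho}\left(\frac{P_0}{N} - \lambda\right)

    The inverse dependence of ε on N is why the pebbles' high nuclide inventories translate into apparently low denudation rates.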

    Can NLI Provide Proper Indirect Supervision for Low-resource Biomedical Relation Extraction?

    Two key obstacles in biomedical relation extraction (RE) are the scarcity of annotations and the prevalence of instances without explicitly pre-defined labels due to low annotation coverage. Existing approaches, which treat biomedical RE as a multi-class classification task, often generalize poorly in low-resource settings and cannot make selective predictions on unknown cases, instead guessing from seen relations, which hinders their applicability. We present NBR, which reformulates biomedical RE as natural language inference (NLI), providing indirect supervision. By converting relations to natural-language hypotheses, NBR can exploit semantic cues to alleviate annotation scarcity. By incorporating a ranking-based loss that implicitly calibrates abstinent instances, NBR learns a clearer decision boundary and abstains on uncertain instances. Extensive experiments on three widely used biomedical RE benchmarks, namely ChemProt, DDI and GAD, verify the effectiveness of NBR in both full-set and low-resource regimes. Our analysis demonstrates that indirect supervision benefits biomedical RE even when a domain gap exists, and that combining NLI knowledge with biomedical knowledge leads to the best performance gains. Comment: 16 pages; ACL 2023; code in https://github.com/luka-group/NLI_as_Indirect_Supervisio
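
    A minimal sketch of the NLI reformulation, using an off-the-shelf MNLI model via Hugging Face's zero-shot pipeline. The hypothesis wordings, example sentence, and abstention threshold are illustrative assumptions, not NBR's actual templates or its ranking-based training:

        from transformers import pipeline

        # An off-the-shelf NLI model stands in for NBR's fine-tuned scorer.
        nli = pipeline("zero-shot-classification", model="roberta-large-mnli")

        premise = "Aspirin inhibits cyclooxygenase activity."  # illustrative sentence
        # Each candidate relation is verbalized as a natural-language hypothesis.
        hypotheses = [
            "The chemical inhibits the protein.",
            "The chemical activates the protein.",
            "The chemical upregulates the protein.",
        ]
        result = nli(premise, candidate_labels=hypotheses, hypothesis_template="{}")

        # NBR trains with a ranking loss that calibrates abstention; a simple
        # score threshold stands in for that here.
        best_label, best_score = result["labels"][0], result["scores"][0]
        print(best_label if best_score > 0.5 else "abstain (no known relation)")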

    Detecting Small Query Graphs in A Large Graph via Neural Subgraph Search

    Recent advances have shown the success of using reinforcement learning and search to solve NP-hard graph-related tasks, such as the Traveling Salesman Problem, Graph Edit Distance computation, etc. However, it remains unclear how one can efficiently and accurately detect the occurrences of a small query graph in a large target graph, which is a core operation in graph database search, biomedical analysis, social group finding, etc. This task, called Subgraph Matching, essentially performs a subgraph-isomorphism check between a query graph and a large target graph. One promising approach to this classical problem is the "learning-to-search" paradigm, where a reinforcement learning (RL) agent with a learned policy guides a search algorithm to quickly find the solution without any solved instances for supervision. However, for the specific task of Subgraph Matching, although the query graph supplied by the user as input is usually small, the target graph is often orders of magnitude larger. This poses challenges for neural network design and can lead to solution and reward sparsity. In this paper, we propose NSUBS with two innovations to tackle these challenges: (1) a novel encoder-decoder neural network architecture to dynamically compute the matching information between the query and the target graphs at each search state; (2) a novel look-ahead loss function for training the policy network. Experiments on six large real-world target graphs show that NSUBS can significantly improve subgraph matching performance
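
    A minimal sketch of the search loop such a learned policy would guide: a backtracking subgraph-isomorphism search in which the candidate-ordering function is pluggable. The degree heuristic below is only a stand-in for NSUBS's trained encoder-decoder policy, and the helper names are assumptions:

        import networkx as nx

        def policy(mapping, query, target, candidates):
            # Stand-in for the learned policy: try high-degree target nodes first.
            return sorted(candidates, key=target.degree, reverse=True)

        def match(query, target, mapping=None):
            """Return one query->target node mapping preserving all query edges."""
            mapping = mapping or {}
            if len(mapping) == len(query):          # every query node is matched
                return mapping
            q = next(n for n in query if n not in mapping)  # next unmatched node
            candidates = [t for t in target
                          if t not in mapping.values()
                          and target.degree(t) >= query.degree(q)]  # degree pruning
            for t in policy(mapping, query, target, candidates):
                # t is feasible if already-mapped neighbors of q stay adjacent.
                if all(target.has_edge(mapping[u], t)
                       for u in query.neighbors(q) if u in mapping):
                    result = match(query, target, {**mapping, q: t})
                    if result is not None:
                        return result
            return None                              # dead end: backtrack

        query = nx.cycle_graph(3)      # a triangle
        target = nx.complete_graph(5)  # K5 contains many triangles
        print(match(query, target))    # e.g. {0: 4, 1: 3, 2: 2}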

    A generic neural network model to estimate populational neural activity for robust neural decoding

    BACKGROUND: Robust and continuous neural decoding is crucial for reliable and intuitive neural-machine interactions. This study developed a novel generic neural network model that can continuously predict finger forces based on decoded populational motoneuron firing activities. METHODS: We implemented convolutional neural networks (CNNs) to learn the mapping from high-density electromyogram (HD-EMG) signals of forearm muscles to populational motoneuron firing frequency. We first extracted the spatiotemporal features of EMG energy and frequency maps to improve learning efficiency, given that EMG signals are intrinsically stochastic. We then established a generic neural network model by training on the populational neuron firing activities of multiple participants. Using a regression model, we continuously predicted individual finger forces in real time. We compared the force-prediction performance with two state-of-the-art approaches: a neuron-decomposition method and a classic EMG-amplitude method. RESULTS: Our results showed that the generic CNN model outperformed the subject-specific neuron-decomposition method and the EMG-amplitude method, as demonstrated by a higher correlation coefficient between the measured and predicted forces and a lower force-prediction error. In addition, the CNN model showed more stable force-prediction performance over time. CONCLUSIONS: Overall, our approach provides a generic and efficient continuous neural decoding method for real-time, robust human-robot interactions
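
    A generic sketch of such a CNN regressor in PyTorch. The layer sizes, the channel layout (one EMG energy map plus one frequency map per frame), and the collapse of the two-stage pipeline (EMG → firing frequency → force) into a single regression are illustrative assumptions, not the paper's architecture:

        import torch
        import torch.nn as nn

        class EMGForceCNN(nn.Module):
            def __init__(self, in_channels=2, n_fingers=1):
                # in_channels = 2: one energy map and one frequency map per frame
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(32, n_fingers)   # regression: predicted force

            def forward(self, x):                      # x: (batch, 2, rows, cols)
                return self.head(self.features(x).flatten(1))

        model = EMGForceCNN()
        emg_maps = torch.randn(8, 2, 8, 16)   # batch of 8x16-electrode grid frames
        forces = model(emg_maps)              # (8, 1) predicted finger forces
        loss = nn.MSELoss()(forces, torch.zeros_like(forces))  # training criterion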

    Evidence for rapid paraglacial formation of rock glaciers in southern Norway from 10Be surface-exposure dating

    We evaluate the timing and environmental controls on past rock-glacier activity at Øyberget, upper Ottadalen, southern Norway, using in situ 10Be surface-exposure dating on (1) boulders belonging to relict rock-glacier lobes at c. 530 m asl, (2) bedrock and boulder surfaces at the Øyberget summit (c. 1200 m asl), and (3) bedrock at an up-valley site (c. 615 m asl). We find that the rock-glacier lobes became inactive around 11.1 ± 1.2 ka, coeval with the timing of summit deglaciation (11.2 ± 0.7 ka). This is slightly older than previously published Schmidt-hammer surface-exposure ages. The timing does not match known climatic conditions promoting rock-glacier formation in the early Holocene; hence we infer that lobe formation resulted from enhanced debris supply and burial of residual ice during and soon after deglaciation. The results demonstrate that rock glaciers may form over a relatively short period (hundreds rather than thousands of years) under non-permafrost conditions, possibly indicating a paraglacial type of process
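
    For context, exposure ages of this kind follow from the standard surface-exposure relation, assuming no erosion and no inherited nuclides (N: measured 10Be concentration; P₀: local production rate; λ: 10Be decay constant, ∼5.0 × 10⁻⁷ yr⁻¹):

        N(t) = \frac{P_0}{\lambda}\left(1 - e^{-\lambda t}\right)
        \qquad\Longrightarrow\qquad
        t = -\frac{1}{\lambda}\,\ln\!\left(1 - \frac{\lambda N}{P_0}\right)

    For Holocene-age surfaces λt ≪ 1, so t ≈ N/P₀ to a good approximation.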

    Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models

    We investigate security concerns with the emergent instruction-tuning paradigm, in which models are trained on crowdsourced datasets with task instructions to achieve superior performance. Our studies demonstrate that an attacker can inject backdoors by issuing very few malicious instructions (~1000 tokens) and control model behavior through data poisoning, without even needing to modify data instances or labels themselves. Through such instruction attacks, the attacker can achieve over a 90% attack success rate across four commonly used NLP datasets. As an empirical study of instruction attacks, we systematically evaluate their unique properties, such as poison transfer, where poisoned models transfer to 15 diverse generative datasets in a zero-shot manner; instruction transfer, where attackers can directly apply a poisoned instruction to many other datasets; and resistance of the poison to continual finetuning. Lastly, we show that RLHF and clean demonstrations might mitigate such backdoors to some degree. These findings highlight the need for more robust defenses against poisoning attacks in instruction-tuning models and underscore the importance of ensuring data quality in instruction crowdsourcing. Comment: NAACL 202

    Implementation-Oblivious Transparent Checkpoint-Restart for MPI

    This work presents experience with traditional use cases of checkpointing on a novel platform. A single codebase (MANA) transparently checkpoints production workloads for the major available MPI implementations: "develop once, run everywhere". The new platform enables application developers to compile their application against any of the available standards-compliant MPI implementations, and to test each MPI implementation for performance or other features. Comment: 17 pages, 4 figure
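
    To illustrate the "develop once, run everywhere" idea at the source level, here is a minimal MPI program with no checkpointing logic of its own; a transparent layer such as MANA checkpoints and restarts it from the outside. The sketch uses Python's mpi4py for brevity (an assumption for illustration; MANA targets MPI applications generally), and runs unchanged under any standards-compliant MPI:

        # Minimal MPI program; contains no checkpoint/restart calls of its own.
        # Launch with any MPI implementation, e.g.: mpiexec -n 4 python sum_ranks.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # A simple collective: sum the ranks across all processes.
        total = comm.allreduce(rank, op=MPI.SUM)
        if rank == 0:
            print(f"sum of ranks over {comm.Get_size()} processes = {total}")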