90 research outputs found

    An acoustic metamaterial lens for acoustic point-to-point communication in air

    Full text link
    Acoustic metamaterials have become a novel and effective way to control sound waves and design acoustic devices. In this study, we design a 3D acoustic metamaterial lens (AML) to achieve point-to-point acoustic communication in air: any acoustic source (e.g. a speaker) enclosed by such an AML produces an acoustic image where the acoustic wave is focused (i.e. the field intensity is at a maximum and a listener can receive the information), while the acoustic field at other spatial positions is low enough that listeners hear almost nothing. Unlike a conventional elliptical reflective mirror, the acoustic source can be moved around inside the proposed AML. Numerical simulations verify the performance of the proposed AML.

    Who uses Fintech products: evidence from the pay-on-demand market

    Get PDF
    This thesis examines a novel Fintech lending product in Australia named pay-on-demand. This product claims to let its users access their wages in advance, at a flat “5% transaction fee” without any hidden cost. Using the transaction data of customers at a major Australian bank, I find that, on average, pay-on-demand users are more financially constrained than the general public. They tend to live in poorer socioeconomic regions, earn lower incomes, have less savings, and lack alternative access to credit due to past delinquency records and a damaged credit reputation. The product is predominantly accessed by younger males. Almost half of the users additionally pay an unpaid payment fee, a cost imposed by the bank when a direct debit request fails, and a hidden cost of using pay-on-demand. An average user of pay-on-demand is charged unpaid payment fees 2.24 times a month, which, given the average loan size of $250.73, increases the effective annual rate by 54%. On average, users who come from poorer socioeconomic regions, hold lower savings balances, earn lower wages, carry higher credit risk, or are in hardship are more likely to pay an unpaid payment fee. Finally, I find that users with lower savings balances use pay-on-demand more frequently, although they borrow less in total because they are deemed riskier by pay-on-demand lenders. These users pay more unpaid payment fees, which worsens their financial status and forces them to borrow more from pay-on-demand to guard against future cash-flow mismatches. Overall, the results highlight the importance of a strict underwriting procedure. Pay-on-demand is excluded from the responsible lending criteria, so lenders do not perform a credit check. If a credit check were performed, constrained borrowers would be better off, because they could save on unpaid payment fees and their financial resilience would improve.
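
    The arithmetic behind such cost figures can be sketched as below. The 5% flat fee, the $250.73 average advance, and the 2.24 unpaid payment fees per month come from the abstract; the $10 fee amount, the fortnightly repayment cycle, and the one-advance-per-cycle simplification are assumptions made purely for illustration, and the sketch does not attempt to reproduce the thesis's 54% figure.

        # Minimal sketch of the effective-annual-rate arithmetic described above.
        # Assumed values are marked; they are not taken from the thesis.
        ADVANCE = 250.73                 # average loan size (from the abstract)
        FLAT_FEE = 0.05 * ADVANCE        # "5% transaction fee" (from the abstract)
        UNPAID_FEE = 10.00               # assumed bank dishonour fee per failed debit
        UNPAID_PER_MONTH = 2.24          # average frequency reported in the abstract
        CYCLE_DAYS = 14                  # assumed repayment period (one pay cycle)

        def effective_annual_rate(cost_per_cycle, principal, cycle_days):
            """Compound the per-cycle cost rate over a year."""
            periods = 365.0 / cycle_days
            return (1 + cost_per_cycle / principal) ** periods - 1

        # Flat fee only.
        print(f"{effective_annual_rate(FLAT_FEE, ADVANCE, CYCLE_DAYS):.0%}")

        # Flat fee plus the unpaid payment fees attributable to one pay cycle.
        unpaid_per_cycle = UNPAID_PER_MONTH * UNPAID_FEE * CYCLE_DAYS / 30.4
        print(f"{effective_annual_rate(FLAT_FEE + unpaid_per_cycle, ADVANCE, CYCLE_DAYS):.0%}")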

    Towards Code Watermarking with Dual-Channel Transformations

    Full text link
    The expansion of the open source community and the rise of large language models have raised ethical and security concerns about the distribution of source code, such as misconduct involving copyrighted code, distribution without proper licenses, or misuse of the code for malicious purposes. Hence it is important to track the ownership of source code, for which watermarking is a major technique. Yet, drastically different from natural language, source code watermarking requires far stricter and more complicated rules to preserve both the readability and the functionality of the source code. We therefore introduce SrcMarker, a watermarking system that unobtrusively encodes ID bitstrings into source code without affecting the usage and semantics of the code. To this end, SrcMarker performs transformations on an AST-based intermediate representation that enables unified transformations across different programming languages. The core of the system utilizes learning-based embedding and extraction modules to select rule-based transformations for watermarking. In addition, a novel feature-approximation technique is designed to tackle the inherent non-differentiability of rule selection, seamlessly integrating the rule-based transformations and learning-based networks into an interconnected system that enables end-to-end training. Extensive experiments demonstrate the superiority of SrcMarker over existing methods in various watermarking requirements.
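
    To make the idea of rule-based, semantics-preserving transformations concrete, the toy sketch below encodes a bitstring by choosing between equivalent renderings of the same statements. It only illustrates the principle; SrcMarker itself works on an AST-based intermediate representation and uses learned networks (with feature approximation) to select the transformations, and the example rules here are made up.

        # Toy illustration (not the SrcMarker implementation): each rule offers two
        # semantically equivalent renderings of a statement, and the chosen variant
        # carries one watermark bit.
        RULES = [
            # (variant encoding bit 0,      variant encoding bit 1)
            ("i += 1",                      "i = i + 1"),
            ("if not found:",               "if found == False:"),
            ("total = total + x",           "total += x"),
        ]

        def embed(bits):
            """Render one statement per rule, picking the variant for each bit."""
            assert len(bits) == len(RULES)
            return [RULES[k][bit] for k, bit in enumerate(bits)]

        def extract(lines):
            """Recover the bitstring by checking which variant appears."""
            return [0 if line == RULES[k][0] else 1 for k, line in enumerate(lines)]

        watermarked = embed([1, 0, 1])
        print(watermarked)           # ['i = i + 1', 'if not found:', 'total += x']
        print(extract(watermarked))  # [1, 0, 1]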

    Hufu: A Modality-Agnostic Watermarking System for Pre-Trained Transformers via Permutation Equivariance

    Full text link
    With the blossoming of deep learning models and services, it has become imperative to safeguard valuable model parameters from being stolen. Watermarking is considered an important tool for ownership verification. However, current watermarking schemes are customized for different models and tasks and are hard to integrate into a unified intellectual property protection service. We propose Hufu, a modality-agnostic watermarking system for pre-trained Transformer-based models that relies on the permutation equivariance property of Transformers. Hufu embeds a watermark by fine-tuning the pre-trained model on a set of specifically permuted data samples, and the embedded model essentially contains two sets of weights -- one for normal use and the other for watermark extraction, which is triggered on permuted inputs. The permutation equivariance ensures minimal interference between these two sets of model weights and thus high fidelity on downstream tasks. Since our method depends only on the model itself, it is naturally modality-agnostic, task-independent, and trigger-sample-free. Extensive experiments on state-of-the-art vision Transformers, BERT, and GPT2 demonstrate Hufu's superiority in meeting watermarking requirements including effectiveness, efficiency, fidelity, and robustness, showing its great potential to be deployed as a uniform ownership verification service for various Transformers.
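
    The permutation-equivariance property Hufu relies on can be checked numerically: for a self-attention layer without positional encoding, permuting the input tokens permutes the outputs in exactly the same way. The NumPy sketch below verifies this for a single randomly initialized attention layer; it illustrates only the property, not Hufu's embedding or extraction procedure.

        # Minimal numerical check (not Hufu itself) of permutation equivariance:
        # attention(P @ X) == P @ attention(X) for a permutation matrix P.
        import numpy as np

        rng = np.random.default_rng(0)
        n_tokens, d = 5, 8
        X = rng.standard_normal((n_tokens, d))
        Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

        def self_attention(X):
            Q, K, V = X @ Wq, X @ Wk, X @ Wv
            scores = Q @ K.T / np.sqrt(d)
            weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
            return weights @ V

        P = np.eye(n_tokens)[rng.permutation(n_tokens)]   # random permutation matrix
        print(np.allclose(P @ self_attention(X), self_attention(P @ X)))  # True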

    Temporal Knowledge Graph Completion: A Survey

    Full text link
    Knowledge graph completion (KGC) predicts missing links and is crucial for real-world knowledge graphs, which widely suffer from incompleteness. KGC methods assume the knowledge graph is static, but this assumption can lead to inaccurate predictions because many facts in a knowledge graph change over time. Recently, emerging methods have shown improved predictive results by further incorporating the timestamps of facts, a task known as temporal knowledge graph completion (TKGC). With this temporal information, TKGC methods can learn the dynamic evolution of the knowledge graph that KGC methods fail to capture. In this paper, for the first time, we summarize the recent advances in TKGC research. First, we detail the background of TKGC, including the problem definition, benchmark datasets, and evaluation metrics. Then, we summarize existing TKGC methods based on how the timestamps of facts are used to capture the temporal dynamics. Finally, we conclude the paper and present future research directions for TKGC.
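
    As a concrete illustration of the problem setting, the sketch below represents a temporal fact as a quadruple (head, relation, tail, timestamp) and scores it with a simple translation-style function that also embeds the timestamp. This scoring function is just one example of the many TKGC model families such a survey covers, and the entity, relation, and timestamp names and embeddings here are made up for illustration.

        # Illustrative sketch of a temporal knowledge-graph fact and one simple
        # timestamp-aware score (a TTransE-style translation). Embeddings here are
        # random; training would fit them so that true quadruples score lower.
        import numpy as np

        rng = np.random.default_rng(0)
        dim = 16
        entities   = {e: rng.standard_normal(dim) for e in ("Obama", "USA")}
        relations  = {r: rng.standard_normal(dim) for r in ("presidentOf",)}
        timestamps = {t: rng.standard_normal(dim) for t in ("2009", "2020")}

        def score(head, relation, tail, time):
            """Lower is more plausible: || h + r + tau - t ||."""
            h, r = entities[head], relations[relation]
            t, tau = entities[tail], timestamps[time]
            return np.linalg.norm(h + r + tau - t)

        # A temporal fact is a quadruple; static KGC would drop the timestamp.
        print(score("Obama", "presidentOf", "USA", "2009"))
        print(score("Obama", "presidentOf", "USA", "2020"))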

    Curriculum Temperature for Knowledge Distillation

    Full text link
    Most existing distillation methods ignore the flexible role of the temperature in the loss function and fix it as a hyper-parameter determined by an inefficient grid search. In general, the temperature controls the discrepancy between two distributions and can faithfully determine the difficulty level of the distillation task. Keeping a constant temperature, i.e., a fixed level of task difficulty, is usually sub-optimal for a growing student during its progressive learning stages. In this paper, we propose a simple curriculum-based technique, termed Curriculum Temperature for Knowledge Distillation (CTKD), which controls the task difficulty level during the student's learning process through a dynamic and learnable temperature. Specifically, following an easy-to-hard curriculum, we gradually increase the distillation loss w.r.t. the temperature, leading to increased distillation difficulty in an adversarial manner. As an easy-to-use plug-in technique, CTKD can be seamlessly integrated into existing knowledge distillation frameworks and brings general improvements at a negligible additional computation cost. Extensive experiments on CIFAR-100, ImageNet-2012, and MS-COCO demonstrate the effectiveness of our method. Our code is available at https://github.com/zhengli97/CTKD.
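
    A minimal sketch of the adversarial, learnable temperature idea is given below, assuming the common KL-divergence distillation loss and a gradient-reversal layer: the student minimizes the loss while the reversed gradient pushes the temperature to increase it, making the task harder over time. This is a simplified illustration rather than the released CTKD code, which additionally schedules the adversarial strength with an easy-to-hard curriculum.

        # Simplified sketch (not the official CTKD code) of a learnable distillation
        # temperature trained adversarially via gradient reversal.
        import torch
        import torch.nn.functional as F

        class GradReverse(torch.autograd.Function):
            @staticmethod
            def forward(ctx, x):
                return x.view_as(x)
            @staticmethod
            def backward(ctx, grad):
                return -grad            # the temperature receives the flipped gradient

        temperature = torch.nn.Parameter(torch.tensor(4.0))

        def kd_loss(student_logits, teacher_logits):
            T = GradReverse.apply(temperature)
            p_teacher = F.softmax(teacher_logits / T, dim=-1)
            log_p_student = F.log_softmax(student_logits / T, dim=-1)
            return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

        # Toy usage: both the student and `temperature` would go into the optimizer;
        # the reversed gradient drives the temperature toward a harder task.
        student_logits = torch.randn(8, 100, requires_grad=True)
        teacher_logits = torch.randn(8, 100)
        kd_loss(student_logits, teacher_logits).backward()
        print(temperature.grad)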

    Controllable group delay in a θ-shaped microfiber resonator with coupled-resonator-induced transparency

    Get PDF
    The control of light velocity is theoretically and experimentally demonstrated in a θ-shaped microfiber resonator with coupled-resonator-induced transparency. By adjusting the structure parameters, group delays from -60 ps to 200 ps are achieved in the all-fiber resonator.

    Ellipse Fitting Based Approach for Extended Object Tracking

    Get PDF
    With the increase of sensor resolution, traditional object tracking technology, which ignores the object's physical extension, gradually becomes inappropriate. Extended object tracking (EOT) technology can obtain more information about the object by jointly estimating both the centroid's dynamic state and the physical extension of the object. The random matrix based approach is a promising method for EOT; it uses an ellipse/ellipsoid to describe the physical extension of the object. In order to reduce the physical extension estimation error when the object maneuvers, the relationship between the ellipse/ellipsoid and a symmetric positive definite matrix is analyzed first. On this basis, an ellipse/ellipsoid fitting based approach (EFA) for EOT is proposed, built on the measurement model and the centroid's dynamic model of the random matrix based EOT approach. Simulation results show that EFA is effective: its physical extension estimation error is lower than those of random matrix based approaches when the object maneuvers, and its estimation error for the centroid's dynamic state is also lower.
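
    The ellipse/SPD-matrix relationship analyzed in such random matrix approaches can be illustrated with a short sketch: an extension matrix X defines the ellipse (z - c)^T X^(-1) (z - c) = 1, and its eigendecomposition recovers the semi-axis lengths and orientation that an ellipse-fitting step estimates. The numbers below (semi-axes 4 and 2, 30-degree rotation) are assumed purely for illustration and are not taken from the paper.

        # Sketch of the ellipse <-> symmetric positive definite (SPD) matrix link:
        # the SPD extension matrix X defines (z - c)^T X^{-1} (z - c) = 1, and its
        # eigendecomposition gives the semi-axis lengths and orientation.
        import numpy as np

        # Assumed example extension: semi-axes 4 and 2, rotated by 30 degrees.
        theta = np.deg2rad(30.0)
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        X = R @ np.diag([4.0**2, 2.0**2]) @ R.T        # SPD extension matrix

        eigvals, eigvecs = np.linalg.eigh(X)           # ascending eigenvalues
        semi_axes = np.sqrt(eigvals)                   # [2.0, 4.0]
        major = eigvecs[:, -1]                         # direction of the major axis
        orientation = np.degrees(np.arctan2(major[1], major[0])) % 180.0
        print(semi_axes, orientation)                  # ~[2. 4.] 30.0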