
    Enhanced super-Heisenberg scaling precision by nonlinear coupling and postselection

    In quantum precision metrology, the famous Heisenberg-limit result, a precision scaling as $1/N$ (with $N$ the number of probes), can be surpassed by considering nonlinear coupling in the measurement. In this work, we consider the most practice-relevant case of quadratic nonlinear coupling and show that the metrological precision can be enhanced from the $1/N^{3/2}$ super-Heisenberg scaling to $1/N^2$ simply by employing a pre- and post-selection (PPS) technique, without using any expensive quantum resources such as entangled probe states.
    Comment: 6 pages, 4 figures
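    For orientation, a brief LaTeX sketch of the precision scalings referred to in this abstract; the standard quantum limit is included for comparison, and the labels are ours, not the paper's:

        % \delta\varphi: uncertainty of the estimated parameter, N: number of probes
        \delta\varphi_{\mathrm{SQL}} \sim \frac{1}{\sqrt{N}}, \qquad
        \delta\varphi_{\mathrm{Heisenberg}} \sim \frac{1}{N}, \qquad
        \delta\varphi_{\mathrm{quadratic}} \sim \frac{1}{N^{3/2}}, \qquad
        \delta\varphi_{\mathrm{quadratic+PPS}} \sim \frac{1}{N^{2}}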

    Quantum-coherence-free precision metrology by means of difference-signal amplification

    The novel weak-value-amplification (WVA) scheme of precision metrology is deeply rooted in the quantum nature of destructive interference between the pre- and post-selection states. An alternative version, termed joint WVA (JWVA), which employs the difference signal between the post-selection accepted and rejected results, has been shown to achieve even better sensitivity (two orders of magnitude higher) under certain technical limitations (e.g. misalignment errors). In this work, after erasing the quantum coherence, we analyze the difference-signal amplification (DSA) technique, which serves as a classical counterpart of the JWVA, and show that a similar amplification effect can be achieved. We obtain a simple expression for the amplified signal, characterize the precision, and point out the optimal working regime. We also discuss how to implement the post-selection of a classical mixed state. The proposed classical DSA technique holds technical advantages similar to those of the JWVA and may find interesting applications in practice.
    Comment: 7 pages, 5 figures. arXiv admin note: text overlap with arXiv:2207.0366
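    As a purely illustrative aside, the Python sketch below shows the generic idea of forming a difference signal between two post-selection channels (accepted and rejected): a common-mode offset cancels while the parameter-dependent part survives. The model, numbers, and estimator are our own toy construction, not the amplified-signal expression derived in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        d = 0.01          # small parameter to estimate (arbitrary toy units)
        offset = 0.05     # common-mode offset shared by both channels
        n = 200_000       # readings per channel

        # Toy model: the accepted/rejected channels see the signal with opposite
        # sign on top of a shared offset and Gaussian readout noise.
        accepted = offset + d + rng.normal(0.0, 0.5, n)
        rejected = offset - d + rng.normal(0.0, 0.5, n)

        # The difference signal cancels the common-mode offset; only the
        # parameter-dependent part remains.
        estimate = 0.5 * (accepted.mean() - rejected.mean())
        print(f"true d = {d:.3f}, estimated d = {estimate:.3f}")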

    A General-Purpose Transferable Predictor for Neural Architecture Search

    Understanding and modelling the performance of neural architectures is key to Neural Architecture Search (NAS). Performance predictors have seen widespread use in low-cost NAS and achieve high ranking correlations between predicted and ground-truth performance on several NAS benchmarks. However, existing predictors are often designed around network encodings specific to a predefined search space and are therefore not generalizable to other search spaces or new architecture families. In this paper, we propose a general-purpose neural predictor for NAS that can transfer across search spaces by representing any given candidate Convolutional Neural Network (CNN) as a Computation Graph (CG) of primitive operators. We further combine our CG network representation with Contrastive Learning (CL) and propose a graph representation learning procedure that leverages the structural information of unlabeled architectures from multiple families to train CG embeddings for our performance predictor. Experimental results on NAS-Bench-101, 201 and 301 demonstrate the efficacy of our scheme, as we achieve a strong positive Spearman Rank Correlation Coefficient (SRCC) on every search space, outperforming several Zero-Cost Proxies, including Synflow and Jacov, which are also generalizable predictors across search spaces. Moreover, when using our proposed general-purpose predictor in an evolutionary neural architecture search algorithm, we can find high-performance architectures on NAS-Bench-101 and find a MobileNetV3 architecture that attains 79.2% top-1 accuracy on ImageNet.
    Comment: Accepted to SDM 2023; version includes supplementary material; 12 pages, 3 figures, 6 tables
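    As a small aside on how such predictors are typically scored, here is a minimal Python sketch of the Spearman Rank Correlation Coefficient (SRCC) metric named in the abstract; the accuracy values are invented purely for illustration.

        import numpy as np
        from scipy.stats import spearmanr

        # Hypothetical predictor outputs vs. ground-truth accuracies for a
        # handful of sampled architectures (made-up numbers).
        predicted = np.array([0.71, 0.68, 0.74, 0.62, 0.69])
        ground_truth = np.array([0.935, 0.921, 0.941, 0.902, 0.918])

        # What matters for search is ranking, so predictors are commonly
        # evaluated with a rank correlation rather than a regression error.
        srcc, _ = spearmanr(predicted, ground_truth)
        print(f"SRCC = {srcc:.3f}")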

    AIO-P: Expanding Neural Performance Predictors Beyond Image Classification

    Evaluating neural network performance is critical to deep neural network design but is a costly procedure. Neural predictors provide an efficient solution by treating architectures as samples and learning to estimate their performance on a given task. However, existing predictors are task-dependent, predominantly estimating neural network performance on image classification benchmarks. They are also search-space dependent; each predictor is designed to make predictions for a specific architecture search space with predefined topologies and sets of operations. In this paper, we propose a novel All-in-One Predictor (AIO-P), which aims to pretrain neural predictors on architecture examples from multiple, separate computer vision (CV) task domains and multiple architecture spaces, and then transfer to unseen downstream CV tasks or neural architectures. We describe our proposed techniques for general graph representation, efficient predictor pretraining and knowledge infusion, as well as methods to transfer to downstream tasks/spaces. Extensive experimental results show that AIO-P can achieve Mean Absolute Error (MAE) and Spearman's Rank Correlation (SRCC) below 1% and above 0.5, respectively, on a breadth of target downstream CV tasks with or without fine-tuning, outperforming a number of baselines. Moreover, AIO-P can directly transfer to new architectures not seen during training, accurately rank them, and serve as an effective performance estimator when paired with an algorithm designed to preserve performance while reducing FLOPs.
    Comment: AAAI 2023 Oral Presentation; version includes supplementary material; 16 pages, 4 figures, 22 tables
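    To make the last point concrete, below is a generic Python sketch of pairing a performance predictor with a FLOPs budget: rank candidates by predicted score and keep only those that fit the budget. The function, names, and toy candidates are our own illustration, not AIO-P's actual search procedure.

        from typing import Callable, Dict, List

        def select_under_flops_budget(
            candidates: List[Dict],
            predict_score: Callable[[Dict], float],
            flops_of: Callable[[Dict], float],
            flops_budget: float,
        ) -> Dict:
            """Return the highest-scoring candidate whose FLOPs fit the budget."""
            feasible = [c for c in candidates if flops_of(c) <= flops_budget]
            if not feasible:
                raise ValueError("no candidate fits the FLOPs budget")
            return max(feasible, key=predict_score)

        # Toy usage with made-up candidates and a stand-in predictor.
        cands = [{"id": "a", "flops": 300e6, "score": 0.71},
                 {"id": "b", "flops": 450e6, "score": 0.74},
                 {"id": "c", "flops": 520e6, "score": 0.76}]
        best = select_under_flops_budget(cands,
                                         predict_score=lambda c: c["score"],
                                         flops_of=lambda c: c["flops"],
                                         flops_budget=500e6)
        print(best["id"])  # -> "b"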

    GENNAPE: Towards Generalized Neural Architecture Performance Estimators

    Predicting neural architecture performance is a challenging task that is crucial to neural architecture design and search. Existing approaches either rely on neural performance predictors, which are limited to modeling architectures in a predefined design space involving specific sets of operators and connection rules and cannot generalize to unseen architectures, or resort to zero-cost proxies, which are not always accurate. In this paper, we propose GENNAPE, a Generalized Neural Architecture Performance Estimator, which is pretrained on open neural architecture benchmarks and aims to generalize to completely unseen architectures through combined innovations in network representation, contrastive pretraining, and a fuzzy clustering-based predictor ensemble. Specifically, GENNAPE represents a given neural network as a Computation Graph (CG) of atomic operations, which can model an arbitrary architecture. It first learns a graph encoder via Contrastive Learning to encourage network separation by topological features, and then trains multiple predictor heads, which are soft-aggregated according to the fuzzy membership of a neural network. Experiments show that GENNAPE pretrained on NAS-Bench-101 achieves superior transferability to 5 different public neural network benchmarks, including NAS-Bench-201, NAS-Bench-301, and the MobileNet and ResNet families, with no or minimal fine-tuning. We further introduce 3 challenging newly labelled neural network benchmarks: HiAML, Inception and Two-Path, whose architectures concentrate in narrow accuracy ranges. Extensive experiments show that GENNAPE can correctly discern high-performance architectures in these families. Finally, when paired with a search algorithm, GENNAPE can find architectures that improve accuracy while reducing FLOPs on three families.
    Comment: AAAI 2023 Oral Presentation; includes supplementary materials with more details on the introduced benchmarks; 14 pages, 6 figures, 10 tables
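    The "soft-aggregated according to the fuzzy membership" step can be pictured with a short Python sketch: per-cluster predictor heads are weighted by how strongly an architecture's embedding belongs to each cluster. The softmax-over-distances membership used here is our assumption for illustration, not necessarily GENNAPE's exact membership function, and the numbers are made up.

        import numpy as np

        def soft_aggregate(head_predictions, embedding, centroids, temperature=1.0):
            """Combine per-cluster predictor heads using fuzzy membership weights
            derived from the distance of the embedding to each cluster centroid."""
            dists = np.linalg.norm(centroids - embedding, axis=1)   # shape (K,)
            weights = np.exp(-dists / temperature)
            weights /= weights.sum()                                 # fuzzy memberships
            return float(np.dot(weights, head_predictions))          # weighted prediction

        # Toy usage: three cluster heads, a 4-d architecture embedding.
        heads = np.array([0.72, 0.68, 0.75])
        emb = np.array([0.1, -0.3, 0.7, 0.2])
        cents = np.array([[0.0, 0.0, 1.0, 0.0],
                          [1.0, 1.0, 0.0, 0.0],
                          [0.2, -0.2, 0.5, 0.3]])
        print(round(soft_aggregate(heads, emb, cents), 3))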

    Guests mediated supramolecule-modified gold nanoparticles network for mimic enzyme application

    Supramolecule-mediated porous metal nanostructures are attractive materials because of their specific properties and wide range of applications. Here, we describe a simple and general strategy for building Au networks based on the guest-induced 3D assembly of sulfonatocalix[4]arene (pSC4)-modified Au nanoparticles (Au-NPs) into aggregates through host-guest interactions. Different guest molecules induce different porous network structures, which in turn differ in their ability to oxidize glucose. Among the three guests studied, the hexamethylenediamine-pSC4-Au-NP network shows high sensitivity, a wide linear range and good stability. Surface characterization and electrochemical measurements of glassy carbon electrodes modified with the Au-NP networks show that the resulting networks exhibit good porosity, high surface area, and increased conductance and electron transfer for electrocatalysis. The synthesized nanostructures afford fast transport of glucose and ensure contact with a large reaction surface owing to the high surface area. The fabricated sensor provides a platform for developing more stable and efficient glucose sensors based on supramolecule-mediated Au-NP networks.