257 research outputs found

    PIDS: Joint Point Interaction-Dimension Search for 3D Point Cloud

    The interaction and dimension of points are two important axes in designing point operators to serve hierarchical 3D models. Yet, these two axes are heterogeneous and challenging to explore fully. Existing works craft point operators under a single axis and reuse the crafted operator in all parts of 3D models. This overlooks the opportunity to better combine point interactions and dimensions by exploiting the varying geometry/density of 3D point clouds. In this work, we establish PIDS, a novel paradigm that jointly explores point interactions and point dimensions to serve semantic segmentation on point cloud data. We establish a large search space that jointly considers versatile point interactions and point dimensions, supporting point operators with various geometry/density considerations. The enlarged search space with heterogeneous search components calls for a better ranking of candidate models. To achieve this, we improve search-space exploration by leveraging predictor-based Neural Architecture Search (NAS), and enhance prediction quality by assigning unique encodings to heterogeneous search components based on their priors. We thoroughly evaluate the networks crafted by PIDS on two semantic segmentation benchmarks, showing ~1% mIoU improvement on SemanticKITTI and S3DIS over state-of-the-art 3D models. Comment: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2023: 1298-130
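    The encode-then-predict idea behind predictor-based NAS over heterogeneous components can be sketched with toy stand-ins. The component names, measured scores, and the k-nearest-neighbor predictor below are illustrative assumptions, not the paper's actual search space or predictor:

    ```python
    # Hypothetical search-space components (names are illustrative, not from the paper).
    INTERACTIONS = ["fixed", "attentive", "geometry-aware"]   # point-interaction choices
    DIMENSIONS = [32, 64, 128]                                # point-dimension choices

    def encode(candidate):
        """One-hot encode the heterogeneous components separately, so interaction
        choices and dimension choices each get their own unique encoding."""
        interaction, dim = candidate
        vec = [1.0 if interaction == i else 0.0 for i in INTERACTIONS]
        vec += [1.0 if dim == d else 0.0 for d in DIMENSIONS]
        return vec

    def knn_predict(query, trained, k=3):
        """Toy accuracy predictor: average the scores of the k nearest
        already-evaluated candidates (a stand-in for a learned predictor)."""
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        q = encode(query)
        nearest = sorted(trained, key=lambda t: dist(encode(t[0]), q))[:k]
        return sum(score for _, score in nearest) / len(nearest)

    # A few (candidate, measured-mIoU) pairs stand in for evaluated architectures.
    history = [(("fixed", 32), 0.58), (("attentive", 64), 0.63),
               (("geometry-aware", 128), 0.66), (("attentive", 128), 0.65)]

    pool = [(i, d) for i in INTERACTIONS for d in DIMENSIONS]
    ranked = sorted(pool, key=lambda c: knn_predict(c, history), reverse=True)
    print(ranked[0])  # highest predicted candidate is evaluated next
    ```

    The key point is that each heterogeneous axis contributes its own slice of the encoding, so the predictor can rank candidates across both axes at once.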

    UWB-INS Fusion Positioning Based on a Two-Stage Optimization Algorithm

    Ultra-wideband (UWB) is a carrier-less communication technology that transmits data using narrow, non-sinusoidal pulses on the nanosecond scale. A UWB positioning system locates the target with a multilateration algorithm, and its accuracy is seriously degraded by non-line-of-sight (NLOS) error. Existing NLOS error-compensation methods lack multidimensional consideration. To combine the advantages of various methods, a two-stage UWB-INS fusion localization algorithm is proposed. In the first stage, an NLOS signal filter is designed based on a support vector machine (SVM). In the second stage, the results of UWB and the Inertial Navigation System (INS) are fused with a Kalman filter. The two-stage fusion algorithm substantially improves the positioning system: it raises localization accuracy by 79.8% in the NLOS environment and by 36% in the line-of-sight (LOS) environment.
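    The two-stage structure can be sketched in one dimension with toy stand-ins: a simple signal-strength threshold replaces the paper's SVM classifier in stage 1, and a scalar Kalman filter fuses INS displacements with the surviving UWB fixes in stage 2. All thresholds, noise parameters, and measurements below are made up for illustration:

    ```python
    def is_nlos(rss_dbm, threshold=-85.0):
        # Stage 1 (stand-in for the SVM filter): weak received signal -> flag NLOS.
        return rss_dbm < threshold

    def kalman_fuse(x, p, ins_delta, uwb_pos, q=0.01, r=0.25):
        """One predict/update cycle of a scalar Kalman filter.
        x, p: state estimate and variance; ins_delta: INS displacement;
        uwb_pos: UWB position fix, or None when rejected as NLOS."""
        x, p = x + ins_delta, p + q          # predict with INS
        if uwb_pos is not None:              # update with UWB when line-of-sight
            k = p / (p + r)
            x, p = x + k * (uwb_pos - x), (1 - k) * p
        return x, p

    x, p = 0.0, 1.0
    # (INS displacement, UWB fix, received signal strength in dBm) per step;
    # the middle step is an NLOS outlier that stage 1 should reject.
    measurements = [(0.5, 0.52, -80.0), (0.5, 3.00, -95.0), (0.5, 1.55, -78.0)]
    for ins_delta, uwb_pos, rss in measurements:
        uwb = None if is_nlos(rss) else uwb_pos
        x, p = kalman_fuse(x, p, ins_delta, uwb)
    print(round(x, 2))  # estimate stays near the true 1.5 m despite the outlier
    ```

    The design point is that the outlier never reaches the filter's update step, so the estimate coasts on INS through NLOS stretches instead of being dragged toward a reflected-path range.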

    Farthest Greedy Path Sampling for Two-shot Recommender Search

    Weight-sharing Neural Architecture Search (WS-NAS) provides an efficient mechanism for developing end-to-end deep recommender models. However, in complex search spaces, distinguishing between superior and inferior architectures (or paths) is challenging. This challenge is compounded by the limited coverage of the supernet and the co-adaptation of subnet weights, which restrict the exploration and exploitation capabilities inherent to weight-sharing mechanisms. To address these challenges, we introduce Farthest Greedy Path Sampling (FGPS), a new path-sampling strategy that balances path quality and diversity. FGPS enhances path diversity to facilitate more comprehensive supernet exploration, while emphasizing path quality to ensure the effective identification and utilization of promising architectures. By incorporating FGPS into a Two-shot NAS (TS-NAS) framework, we derive high-performance architectures. Evaluations on three Click-Through Rate (CTR) prediction benchmarks demonstrate that our approach consistently achieves superior results, outperforming both manually designed and most NAS-based models. Comment: 9 pages, 5 figures
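    The quality-plus-diversity selection can be sketched as a farthest-point-style greedy pick over a quality-filtered pool. The path encodings, quality scores, and the Hamming-distance choice below are illustrative assumptions, not the paper's actual formulation:

    ```python
    # Toy sketch of the FGPS idea: keep only high-quality paths, then greedily
    # pick the path farthest (Hamming distance over operator choices) from
    # everything picked so far.

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def farthest_greedy(pool, k):
        """pool: list of (path, quality) pairs. Keep the best half by quality,
        then select k paths maximizing the minimum distance to the chosen set."""
        pool = sorted(pool, key=lambda p: p[1], reverse=True)[: max(k, len(pool) // 2)]
        chosen = [pool[0][0]]                       # seed with the best path
        while len(chosen) < k:
            best = max(pool, key=lambda p: min(hamming(p[0], c) for c in chosen))
            if best[0] in chosen:                   # no diversity left to add
                break
            chosen.append(best[0])
        return chosen

    # Each path is a tuple of per-block operator choices with a made-up quality.
    paths = [((0, 0, 0), 0.91), ((0, 0, 1), 0.90), ((1, 1, 1), 0.88),
             ((1, 0, 1), 0.70), ((0, 1, 1), 0.86), ((1, 1, 0), 0.85)]
    picked = farthest_greedy(paths, 3)
    print(picked)
    ```

    Note how the second pick jumps to the path most different from the seed rather than the next-best scorer, which is the diversity half of the trade-off.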

    Up-regulation on cytochromes P450 in rat mediated by total alkaloid extract from Corydalis yanhusuo

    BACKGROUND: Yanhusuo (Corydalis yanhusuo W.T. Wang; YHS) is a well-known traditional Chinese herbal medicine that has been used in China for treating pain, including chest pain, epigastric pain, and dysmenorrhea. Its alkaloid ingredients, including tetrahydropalmatine, are reported to inhibit cytochrome P450 (CYP) activity in vitro. The present study aimed to assess the potential of total alkaloid extract (TAE) from YHS to affect the activity and mRNA levels of five cytochromes P450 (CYPs) in rat. METHODS: Rats were administered TAE from YHS (0, 6, 30, and 150 mg/kg, daily) for 14 days; alanine aminotransferase (ALT) levels in serum were assayed, and hematoxylin and eosin-stained liver sections were prepared for light microscopy. The effects of TAE on the activity and mRNA levels of the five CYPs were quantitated with cocktail probe drugs using a liquid chromatography/tandem mass spectrometry (LC-MS/MS) method and reverse transcription-polymerase chain reaction (RT-PCR), respectively. RESULTS: In general, serum ALT levels showed no significant changes, and the histopathology appeared largely normal compared with that of control rats. At the 30 and 150 mg/kg TAE dosages, an increase in liver CYP2E1 and CYP3A1 enzyme activity was observed. Moreover, the mRNA levels of CYP2E1 and CYP3A1 in the rat liver, lung, and intestine were significantly up-regulated by TAE from 6 and 30 mg/kg, respectively. Furthermore, treatment with TAE (150 mg/kg) enhanced the activities and mRNA levels of CYP1A2 and CYP2C11 in rats. However, the activity and mRNA level of CYP2D1 remained unchanged. CONCLUSIONS: These results suggest that TAE-induced CYP activity in the rat liver results from elevated CYP mRNA levels. Co-administration of prescriptions containing YHS should take into account a potential herb (drug)–drug interaction mediated by the induction of CYP2E1 and CYP3A1 enzymes.

    LEASGD: an Efficient and Privacy-Preserving Decentralized Algorithm for Distributed Learning

    Distributed learning systems enable training large-scale models over large amounts of data in significantly shorter time. In this paper, we focus on decentralized distributed deep learning systems and aim to achieve differential privacy with a good convergence rate and low communication cost. To this end, we propose a new learning algorithm, LEASGD (Leader-Follower Elastic Averaging Stochastic Gradient Descent), driven by a novel leader-follower topology and a differential privacy model. We provide a theoretical analysis of the convergence rate and of the trade-off between performance and privacy in the private setting. Experimental results show that LEASGD outperforms the state-of-the-art decentralized learning algorithm DPSGD by achieving steadily lower loss within the same number of iterations and by reducing communication cost by 30%. In addition, LEASGD spends a smaller differential privacy budget and reaches higher final accuracy than DPSGD in the private setting.
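    The leader-follower elastic-averaging update can be sketched on a one-dimensional quadratic loss. The worker count, elastic coefficient, learning rate, and Gaussian noise scale below are illustrative stand-ins, not the paper's settings or its formal privacy mechanism:

    ```python
    import random

    # Minimal sketch: each worker takes an SGD step on f(w) = (w - 2)^2, is
    # elastically pulled toward the current leader (lowest-loss worker), and
    # receives a Gaussian perturbation standing in for the DP noise.

    def grad(w):
        return 2.0 * (w - 2.0)            # gradient of (w - 2)^2

    def leasgd_step(workers, lr=0.1, rho=0.3, noise=0.01):
        leader = min(workers, key=lambda w: (w - 2.0) ** 2)   # lowest-loss worker
        out = []
        for w in workers:
            elastic = rho * (leader - w)                      # pull toward leader
            dp = random.gauss(0.0, noise)                     # DP-style perturbation
            out.append(w - lr * grad(w) + elastic + dp)
        return out

    random.seed(0)
    workers = [0.0, 1.0, 4.0]            # followers start far from the optimum
    for _ in range(50):
        workers = leasgd_step(workers)
    print([round(w, 2) for w in workers])  # all workers end near w* = 2
    ```

    The elastic term is what distinguishes this from plain noisy SGD: followers are attracted to the best current iterate, so the ensemble contracts even when individual noisy steps wander.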

    AutoShrink: A Topology-aware NAS for Discovering Efficient Neural Architecture

    Resource constraints are important when deploying Deep Neural Networks (DNNs) on mobile and edge devices. Existing works commonly adopt a cell-based search approach, which limits the flexibility of network patterns in learned cell structures. Moreover, due to the topology-agnostic nature of existing works, both cell-based and node-based, the search process is time-consuming and the found architectures may be sub-optimal. To address these problems, we propose AutoShrink, a topology-aware Neural Architecture Search (NAS) for searching efficient building blocks of neural architectures. Our method is node-based and can therefore learn flexible network patterns in cell structures within a topological search space. Directed Acyclic Graphs (DAGs) are used to abstract DNN architectures, and the cell structure is progressively optimized through edge shrinking. As the search space intrinsically shrinks while edges are progressively removed, AutoShrink explores a more flexible search space with even less search time. We evaluate AutoShrink on image classification and language tasks by crafting ShrinkCNN and ShrinkRNN models. ShrinkCNN achieves up to 48% parameter reduction and saves 34% Multiply-Accumulates (MACs) on ImageNet-1K with accuracy comparable to that of state-of-the-art (SOTA) models. Both ShrinkCNN and ShrinkRNN are crafted within 1.5 GPU hours, which is 7.2x and 6.7x faster than the crafting time of SOTA CNN and RNN models, respectively.
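    The edge-shrinking loop can be sketched on a small DAG cell: repeatedly drop the lowest-scoring edge, but only if input and output stay connected. The edge scores and the connectivity rule below are toy assumptions; the actual method scores edges by their measured impact on the architecture:

    ```python
    # Toy sketch of progressive edge shrinking on a 4-node DAG cell.

    def reachable(edges, src, dst, n_nodes):
        """Depth-first search: is dst reachable from src over the given edges?"""
        adj = {u: [] for u in range(n_nodes)}
        for u, v in edges:
            adj[u].append(v)
        stack, seen = [src], {src}
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return False

    def shrink(edge_scores, n_nodes, src=0, dst=3, budget=2):
        """Drop the weakest removable edge until the budget is met or every
        remaining removal would disconnect the cell's input from its output."""
        edges = dict(edge_scores)
        while len(edges) > budget:
            for e, _ in sorted(edges.items(), key=lambda kv: kv[1]):
                trial = {k: v for k, v in edges.items() if k != e}
                if reachable(trial.keys(), src, dst, n_nodes):
                    edges = trial
                    break
            else:
                break                  # no edge can be removed safely
        return sorted(edges)

    # Made-up importance scores for edges (node_u, node_v) in a 4-node cell.
    scores = {(0, 1): 0.9, (0, 2): 0.2, (1, 2): 0.5, (1, 3): 0.1, (2, 3): 0.8}
    kept = shrink(scores, n_nodes=4)
    print(kept)
    ```

    Here the loop stops above its nominal budget because removing any further edge would break input-to-output connectivity, illustrating why the search space shrinks safely rather than arbitrarily.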