57 research outputs found

    A 0.1–5.0 GHz flexible SDR receiver with digitally assisted calibration in 65 nm CMOS

    Get PDF
    A 0.1–5.0 GHz flexible software-defined radio (SDR) receiver with digitally assisted calibration is presented, employing a zero-IF/low-IF reconfigurable architecture for both wideband and narrowband applications. The receiver is composed of a main path based on a current-mode mixer for low noise, a high-linearity sub-path based on a voltage-mode passive mixer for out-of-band rejection, and a harmonic rejection (HR) path with vector gain calibration. A dual-feedback LNA with an "8"-shaped nested inductor structure, a cascode inverter-based TCA with Miller feedback compensation, and a class-AB fully differential op-amp with Miller feed-forward compensation and the QFG technique are proposed. Digitally assisted calibration methods for HR, IIP2, and image rejection (IR) are presented to maintain high performance over PVT variations. The receiver is implemented in 65 nm CMOS with a 5.4 mm² core area, consuming 9.6–47.4 mA from a 1.2 V supply. The main path is measured with +5 dBm/+5 dBm IB-IIP3/OB-IIP3 and +61 dBm IIP2. The sub-path achieves +10 dBm/+18 dBm IB-IIP3/OB-IIP3 and +62 dBm IIP2, as well as 10 dB of RF filtering rejection at 10 MHz offset. The HR path reaches +13 dBm/+14 dBm IB-IIP3/OB-IIP3 and 62/66 dB 3rd/5th-order harmonic rejection, a 30–40 dB improvement from the calibration. The measured sensitivity satisfies the requirements of DVB-H, LTE, 802.11g, and ZigBee.
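    As a rough illustration of what digitally assisted image-rejection calibration means in a zero-IF/low-IF receiver, the sketch below corrects I/Q gain and phase mismatch directly from baseband samples via Gram-Schmidt orthogonalization. This is a generic textbook technique with hypothetical names, offered only as a sketch of the principle, not the calibration loop used on this chip.

```python
import numpy as np

def iq_imbalance_correct(i, q):
    """Generic digital I/Q imbalance correction (Gram-Schmidt style).

    Illustrative only: estimates phase and gain mismatch from the statistics
    of the baseband samples and returns a corrected Q branch, which improves
    image rejection. Not the calibration method used in the paper.
    """
    i = i - np.mean(i)                      # remove DC offsets
    q = q - np.mean(q)
    rho = np.dot(i, q) / np.dot(i, i)       # phase-mismatch estimate
    q_orth = q - rho * i                    # make Q orthogonal to I
    gain = np.sqrt(np.dot(i, i) / np.dot(q_orth, q_orth))
    return i, gain * q_orth                 # gain-matched, orthogonal I/Q
```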

    Rethinking SIGN Training: Provable Nonconvex Acceleration without First- and Second-Order Gradient Lipschitz

    Full text link
    Sign-based stochastic methods have gained attention due to their ability to achieve robust performance while using only the sign of the gradient for parameter updates. However, the current convergence analysis of sign-based methods relies on strong first- and second-order gradient Lipschitz assumptions, which may not hold in practical tasks such as deep neural network training that involve high non-smoothness. In this paper, we revisit sign-based methods and analyze their convergence under more realistic assumptions of first- and second-order smoothness. We first establish the convergence of the sign-based method under a weak first-order Lipschitz condition. Motivated by this weaker condition, we propose a relaxed second-order condition that still allows for nonconvex acceleration in sign-based methods. Based on our theoretical results, we gain insights into the computational advantages of the recently developed LION algorithm. In distributed settings, we prove that this nonconvex acceleration persists with linear speedup in the number of nodes when fast communication-compression gossip protocols are utilized. The novelty of our theoretical results lies in the fact that they are derived under much weaker assumptions, thereby expanding the provable applicability of sign-based algorithms to a wider range of problems.
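    For readers unfamiliar with the family of methods being analyzed, the sketch below shows a generic momentum-based sign update in the spirit of signSGD/LION. It is a minimal sketch of the update rule only, with hyperparameter names and defaults chosen here, not the exact algorithm studied in the paper.

```python
import torch

def sign_momentum_step(params, states=None, lr=1e-4, beta=0.9):
    """One generic sign-based update (signSGD-with-momentum flavor).

    Each parameter moves by a fixed step `lr` in the direction of the sign of
    a momentum-smoothed gradient, discarding gradient magnitudes. `states`
    holds one momentum buffer per parameter and is returned so the caller can
    pass it back in on the next step.
    """
    params = list(params)
    if states is None:
        states = [torch.zeros_like(p) for p in params]
    with torch.no_grad():
        for p, m in zip(params, states):
            if p.grad is None:
                continue
            m.mul_(beta).add_(p.grad, alpha=1.0 - beta)  # EMA of gradients
            p.add_(torch.sign(m), alpha=-lr)             # sign-only step
    return states
```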

    Hard Sample Aware Network for Contrastive Deep Graph Clustering

    Full text link
    Contrastive deep graph clustering, which aims to divide nodes into disjoint groups via contrastive mechanisms, is a challenging research area. Among recent works, hard sample mining-based algorithms have attracted considerable attention for their promising performance. However, we find that existing hard sample mining methods have the following two problems. 1) In the hardness measurement, important structural information is overlooked in the similarity calculation, degrading the representativeness of the selected hard negative samples. 2) Previous works merely focus on hard negative sample pairs while neglecting hard positive sample pairs. Nevertheless, samples within the same cluster but with low similarity should also be carefully learned. To solve these problems, we propose a novel contrastive deep graph clustering method dubbed Hard Sample Aware Network (HSAN), which introduces a comprehensive similarity measure criterion and a general dynamic sample weighting strategy. Concretely, in our algorithm, the similarities between samples are calculated by considering both the attribute embeddings and the structure embeddings, better revealing sample relationships and assisting hardness measurement. Moreover, under the guidance of carefully collected high-confidence clustering information, our proposed weight modulating function first recognizes the positive and negative samples and then dynamically up-weights the hard sample pairs while down-weighting the easy ones. In this way, our method can mine not only the hard negative samples but also the hard positive samples, thus further improving the discriminative capability of the samples. Extensive experiments and analyses demonstrate the superiority and effectiveness of our proposed method.
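    The abstract describes the weight modulating function only at a high level; the sketch below shows one plausible reading of such dynamic pair weighting. The modulation form, names, and parameters here are assumptions for illustration, not HSAN's actual function.

```python
import torch

def modulate_pair_weights(sim, same_cluster, confident, alpha=1.0):
    """Plausible dynamic pair-weighting sketch (not HSAN's exact function).

    sim          : (N, N) pairwise similarities in [0, 1]
    same_cluster : (N, N) bool, True where a pair shares a high-confidence
                   pseudo-label (positive pair), False otherwise (negative)
    confident    : (N, N) bool, True where both samples are high-confidence

    Among high-confidence pairs, hard ones (positives with low similarity,
    negatives with high similarity) are up-weighted above 1 and easy ones are
    down-weighted below 1; low-confidence pairs keep weight 1.
    """
    hardness = torch.where(same_cluster, 1.0 - sim, sim)        # in [0, 1]
    weights = torch.ones_like(sim)
    weights[confident] = (2.0 * hardness[confident]) ** alpha   # modulate
    return weights
```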

    2023 Low-Power Computer Vision Challenge (LPCVC) Summary

    Full text link
    This article describes the 2023 IEEE Low-Power Computer Vision Challenge (LPCVC). Since 2015, LPCVC has been an international competition devoted to tackling the challenge of computer vision (CV) on edge devices. Most CV researchers focus on improving accuracy, at the expense of ever-growing model sizes. LPCVC balances accuracy with resource requirements: winners must achieve high accuracy with short execution time when their CV solutions run on an embedded device, such as a Raspberry Pi or an Nvidia Jetson Nano. The vision problem for the 2023 LPCVC is segmentation of images acquired by Unmanned Aerial Vehicles (UAVs, also called drones) after disasters. The 2023 LPCVC attracted 60 international teams that submitted 676 solutions during the one-month submission window. This article explains the setup of the competition and highlights the winners' methods, which improve accuracy and shorten execution time. (LPCVC 2023 website: https://lpcv.ai)

    Robust graph regularized unsupervised feature selection

    No full text
    Recent research indicates the critical importance of preserving the local geometric structure of data in unsupervised feature selection (UFS), and the well-studied graph Laplacian is usually deployed to capture this property. We observe that, because it relies on a squared l2-norm, the conventional graph Laplacian is sensitive to noisy data, leading to unsatisfactory data processing performance. To address this issue, we propose a unified UFS framework via feature self-representation and robust graph regularization, with the aim of reducing the sensitivity to outliers from the following two aspects: i) an l2,1-norm is used to characterize the feature representation residual matrix; and ii) an l1-norm-based graph Laplacian regularization term is adopted to preserve the local geometric structure of the data. In this way, the proposed framework is able to reduce the effect of noisy data on feature selection. Furthermore, the proposed l1-norm-based graph Laplacian is readily extensible and can be easily integrated into other UFS methods and machine learning tasks in which the local geometric structure of the data should be preserved. As demonstrated on ten challenging benchmark data sets, our algorithm significantly and consistently outperforms state-of-the-art UFS methods in the literature, suggesting the effectiveness of the proposed UFS framework.
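    To make the described objective more concrete, one plausible instantiation combining a robust self-representation residual, an l1-based graph term, and a row-sparsity regularizer is sketched below. The exact formulation, symbols, and trade-off weights (alpha, beta) are assumptions, not taken from the paper.

```latex
% Hypothetical objective: X is the n x d data matrix, W a d x d feature
% self-representation matrix, s_ij the graph affinity between samples i and j.
\min_{\mathbf{W}}\;
  \underbrace{\lVert \mathbf{X} - \mathbf{X}\mathbf{W} \rVert_{2,1}}_{\text{robust self-representation residual}}
  \;+\; \alpha \underbrace{\sum_{i,j} s_{ij}\,
        \lVert (\mathbf{X}\mathbf{W})_{i\cdot} - (\mathbf{X}\mathbf{W})_{j\cdot} \rVert_{1}}_{\ell_1\text{ graph regularization}}
  \;+\; \beta\,\lVert \mathbf{W} \rVert_{2,1}
```

    In self-representation-based UFS, features would then typically be ranked by the l2-norms of the rows of W, though the ranking rule used in the paper is not stated in the abstract.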

    Study on the Effect of Spoiler Columns on the Heat Dissipation Performance of S-Type Runner Water-Cooling Plates

    No full text
    To address the low heat dissipation efficiency of the conventional S-type runner water-cooling plate used for the fan converter IGBT module, two new water-cooling plates were designed with rectangular and elliptical spoiler column structures in the S-shaped runner. The heat dissipation performance, the flow of the cooling water, and the pressure drop of the different spoiler column structures were compared using Fluent software for simulation and experiment. The comparative results show that, relative to the control water-cooling plate without spoiler columns in the flow channel, the spoiler column structures significantly improved the heat dissipation performance of the water-cooling plate. At an inlet velocity of 2 m/s, the highest temperature inside the water-cooling plate with the rectangular spoiler column structure was 12.25 °C lower than that of the control plate, and the highest temperature inside the plate with the elliptical structure was 12.40 °C lower than that of the control plate. The elliptical spoiler column structure obstructed the water flow less than the rectangular one, so the cooling water flowed more readily through the elliptical-structure plate: at an inlet velocity of 2 m/s, 282 L more cooling water flowed through the former than through the latter in half an hour. Comparing the pressure drops in the design group, the plate with the rectangular spoiler column structure showed a pressure drop of 40,988.3 Pa, while the plate with the elliptical structure showed 25,576.6 Pa, a difference of 15,411.7 Pa, indicating that the energy loss inside the latter is smaller. To further explore the relationship between heat dissipation and energy consumption for the two plates, the comprehensive evaluation index η was calculated: ηb = 26.2 and ηc = 31.6, so ηb was significantly smaller than ηc. The overall performance of the water-cooling plate with the elliptical spoiler column structure was superior.
