
    Linear magnetoresistance in metals: guiding center diffusion in a smooth random potential

    We predict that guiding center (GC) diffusion yields a linear and non-saturating (transverse) magnetoresistance in 3D metals. Our theory is semi-classical and applies in the regime where the transport time is much greater than the cyclotron period, and for weak disorder potentials which are slowly varying on a length scale much greater than the cyclotron radius. Under these conditions, orbits with small momenta along the magnetic field B are squeezed and dominate the transverse conductivity. When disorder potentials are stronger than the Debye frequency, linear magnetoresistance is predicted to survive up to room temperature and beyond. We argue that magnetoresistance from GC diffusion explains the recently observed giant linear magnetoresistance in 3D Dirac materials.
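    The semiclassical regime invoked in this abstract can be stated with the textbook cyclotron quantities (standard definitions; the symbols for the transport time and the disorder correlation length below are our notation, not taken from the paper):

    \[
    \omega_c = \frac{eB}{m}, \qquad r_c = \frac{m v_\perp}{eB},
    \]

    so the stated conditions read \(\tau \gg 2\pi/\omega_c\) (transport time \(\tau\) much greater than the cyclotron period) and \(\xi \gg r_c\) (disorder correlation length \(\xi\) much greater than the cyclotron radius). Both conditions are favored by strong fields B, since \(\omega_c \propto B\) while \(r_c \propto 1/B\).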

    FloWaveNet : A Generative Flow for Raw Audio

    Most modern text-to-speech architectures use a WaveNet vocoder for synthesizing high-fidelity waveform audio, but there have been limitations, such as high inference time, in its practical application due to its ancestral sampling scheme. The recently suggested Parallel WaveNet and ClariNet have achieved real-time audio synthesis capability by incorporating inverse autoregressive flow for parallel sampling. However, these approaches require a two-stage training pipeline with a well-trained teacher network and can only produce natural sound by using probability distillation along with auxiliary loss terms. We propose FloWaveNet, a flow-based generative model for raw audio synthesis. FloWaveNet requires only a single-stage training procedure and a single maximum likelihood loss, without any additional auxiliary terms, and it is inherently parallel due to the characteristics of generative flow. The model can efficiently sample raw audio in real-time, with clarity comparable to previous two-stage parallel models. The code and samples for all models, including our FloWaveNet, are publicly available.
    Comment: 9 pages, ICML'201
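    The single maximum-likelihood objective mentioned above rests on the change-of-variables formula for normalizing flows. A minimal sketch with a toy affine flow (illustrative only; the function name and setup are not from the FloWaveNet codebase):

    ```python
    import numpy as np

    def affine_flow_log_likelihood(x, scale, shift):
        """log p(x) for x = z * exp(scale) + shift, with prior z ~ N(0, 1).

        log p(x) = log N(z; 0, 1) + log |dz/dx|, where dz/dx = exp(-scale).
        """
        z = (x - shift) * np.exp(-scale)                  # inverse transform
        log_prior = -0.5 * (z ** 2 + np.log(2 * np.pi))   # standard-normal density
        log_det = -scale                                  # log-Jacobian of x -> z
        return np.sum(log_prior + log_det)

    x = np.array([0.5, -1.2, 2.0])
    ll = affine_flow_log_likelihood(x, scale=0.0, shift=0.0)
    # With scale=0 and shift=0 the flow is the identity, so this is just the
    # standard-normal log-density of x.
    ```

    Training a flow maximizes this exact log-likelihood directly, which is why no teacher network or distillation stage is needed; sampling runs the transform forward in parallel.
    
    
    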

    Toward Robustness in Multi-label Classification: A Data Augmentation Strategy against Imbalance and Noise

    Multi-label classification poses challenges due to imbalanced and noisy labels in training data. We propose a unified data augmentation method, named BalanceMix, to address these challenges. Our approach includes two samplers for imbalanced labels, generating minority-augmented instances with high diversity. It also refines multi-labels at the label-wise granularity, categorizing noisy labels as clean, re-labeled, or ambiguous for robust optimization. Extensive experiments on three benchmark datasets demonstrate that BalanceMix outperforms existing state-of-the-art methods. We release the code at https://github.com/DISL-Lab/BalanceMix.
    Comment: This paper was accepted at AAAI 2024. We upload the full version of our paper on arXiv due to the page limit of AAAI.
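    One way to oversample minority labels in the spirit of the two-sampler idea above is to weight each instance by the rarity of its rarest positive label. A hypothetical sketch (names and details are our assumptions, not the BalanceMix implementation):

    ```python
    import numpy as np

    def minority_sampling_weights(Y):
        """Y: (n_samples, n_labels) binary multi-label matrix.

        Returns a sampling distribution where an instance's weight is the
        inverse frequency of its rarest positive label, so instances that
        carry rare labels are drawn more often.
        """
        label_freq = Y.sum(axis=0) / len(Y)             # per-label frequency
        label_freq = np.clip(label_freq, 1e-8, None)    # guard against zeros
        # Frequency of each instance's rarest positive label (inf if no positives).
        rarest = np.where(Y > 0, label_freq, np.inf).min(axis=1)
        weights = 1.0 / rarest
        return weights / weights.sum()                  # normalize to sum to 1

    Y = np.array([[1, 0], [1, 0], [1, 0], [0, 1]])      # label 1 is the minority
    w = minority_sampling_weights(Y)
    # The single instance with the rare label gets the largest probability.
    ```

    Drawing mini-batches from `w` (e.g., via `np.random.choice`) then exposes the model to minority labels far more often than uniform sampling would.
    
    
    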

    Data Collection and Quality Challenges in Deep Learning: A Data-Centric AI Perspective

    Data-centric AI is at the center of a fundamental shift in software engineering where machine learning becomes the new software, powered by big data and computing infrastructure. Here, software engineering must be rethought so that data becomes a first-class citizen on par with code. One striking observation is that a significant portion of the machine learning process is spent on data preparation. Without good data, even the best machine learning algorithms cannot perform well. As a result, data-centric AI practices are now becoming mainstream. Unfortunately, many datasets in the real world are small, dirty, biased, and even poisoned. In this survey, we study the research landscape for data collection and data quality primarily for deep learning applications. Data collection is important because recent deep learning approaches need less feature engineering but instead need large amounts of data. For data quality, we study data validation, cleaning, and integration techniques. Even if the data cannot be fully cleaned, we can still cope with imperfect data during model training using robust model training techniques. In addition, while bias and fairness have been less studied in traditional data management research, these issues become essential topics in modern machine learning applications. We thus study fairness measures and unfairness mitigation techniques that can be applied before, during, or after model training. We believe that the data management community is well poised to solve these problems.
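    The data-validation step mentioned above can be as simple as checking a column's type, missing values, and plausible range before training. A minimal sketch (the schema format and function name here are assumptions for illustration, not a specific library's API):

    ```python
    def validate_column(values, dtype=float, allow_missing=False, lo=None, hi=None):
        """Return a list of (index, message) pairs for rows violating the schema."""
        errors = []
        for i, v in enumerate(values):
            if v is None:
                if not allow_missing:
                    errors.append((i, "missing value"))
                continue
            if not isinstance(v, dtype):
                errors.append((i, f"expected {dtype.__name__}, got {type(v).__name__}"))
                continue
            if lo is not None and v < lo:
                errors.append((i, f"below minimum {lo}"))
            if hi is not None and v > hi:
                errors.append((i, f"above maximum {hi}"))
        return errors

    ages = [34.0, None, -5.0, 200.0]
    errs = validate_column(ages, dtype=float, lo=0.0, hi=130.0)
    # errs flags the missing value, the negative age, and the implausible 200.
    ```

    Running such checks before training catches dirty rows early, when fixing them is cheap; production systems typically extend this idea with learned schemas and distribution-drift checks.
    
    
    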

    End-functionalized glycopolymers as mimetics of chondroitin sulfate proteoglycans

    Glycosaminoglycans are sulfated polysaccharides that play important roles in fundamental biological processes, such as cell division, viral invasion, cancer and neuroregeneration. The multivalent presentation of multiple glycosaminoglycan chains on proteoglycan scaffolds may profoundly influence their interactions with proteins and subsequent biological activity. However, the importance of this multivalent architecture remains largely unexplored, and few synthetic mimics exist for probing and manipulating glycosaminoglycan activity. Here, we describe a new class of end-functionalized ring-opening metathesis polymerization (ROMP) polymers that mimic the native-like, multivalent architecture found on chondroitin sulfate (CS) proteoglycans. We demonstrate that these glycopolymers can be readily integrated with microarray and surface plasmon resonance technology platforms, where they retain the ability to interact selectively with proteins. ROMP-based glycopolymers are part of a growing arsenal of chemical tools for probing the functions of glycosaminoglycans and for studying their interactions with proteins.