333 research outputs found

    Reassigned spectrogram and its time-domain conversion

    Audio information accounts for a large portion of existing digital data, and researchers are constantly developing new methods and tools for audio analysis. Short-time Fourier transform-based analysis is one of the most prominent tools because of its linearity; however, it cannot resolve the inaccurate localization of energy in both time and frequency, which matters especially for audio-editing applications. The reassigned spectrogram method, developed by Auger and Flandrin in 1995, achieves more accurate energy localization by mapping the discretely sampled signal into the continuous domain and reassigning the energy of each analysis bin to its center of mass. This method, nevertheless, is non-linear, and it is very difficult to synthesize the analysis data back into a time-domain representation. This thesis introduces mechanisms to solve this problem and summarizes the test results.
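    The reassignment step described above can be sketched in NumPy: auxiliary STFTs taken with a time-weighted window and a differentiated window yield per-bin corrections that move each bin's energy to its local center of mass, following Auger and Flandrin's formulas. This is only an illustration of the analysis direction, not the thesis's synthesis mechanism; the function name and parameters are hypothetical.

    ```python
    import numpy as np

    def reassigned_times_freqs(x, sr, n_fft=512, hop=128):
        """Sketch of spectrogram reassignment (Auger & Flandrin, 1995).
        Returns reassigned time/frequency estimates per (frame, bin)."""
        n = np.arange(n_fft)
        h = np.hanning(n_fft)                 # analysis window h(t)
        th = (n - n_fft / 2) / sr * h         # time-weighted window t*h(t)
        dh = np.gradient(h) * sr              # approximate derivative dh/dt
        starts = list(range(0, len(x) - n_fft + 1, hop))
        Sh, Sth, Sdh = [], [], []
        for s in starts:
            seg = x[s:s + n_fft]
            Sh.append(np.fft.rfft(seg * h))
            Sth.append(np.fft.rfft(seg * th))
            Sdh.append(np.fft.rfft(seg * dh))
        Sh, Sth, Sdh = map(np.array, (Sh, Sth, Sdh))
        eps = 1e-12                           # guard against empty bins
        t_grid = (np.array(starts) + n_fft / 2) / sr      # frame centers (s)
        f_grid = np.fft.rfftfreq(n_fft, 1 / sr)           # bin centers (Hz)
        # center-of-mass corrections per time-frequency bin
        pwr = np.abs(Sh) ** 2 + eps
        t_hat = t_grid[:, None] + np.real(Sth * np.conj(Sh) / pwr)
        f_hat = f_grid[None, :] - np.imag(Sdh * np.conj(Sh) / pwr) / (2 * np.pi)
        return t_hat, f_hat, np.abs(Sh)
    ```

    For an impulse, every bin's reassigned time collapses onto the impulse's true position rather than the frame center, which is the sharpened localization the abstract refers to.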

    Creating a Strong Group Culture

    This book review of The Culture Code summarizes three significant skills that contribute to creating a highly successful group: building safety, sharing vulnerability, and establishing purpose. Readers are encouraged to appreciate the book for two reasons. First, it shows that there are multiple approaches to building a strong group culture. Second, culture is one of the significant factors that policymakers should consider. By reading The Culture Code, readers can better understand how culture can help society become more inclusive and dynamic.

    Model-Assisted Online Optimization of Gain-Scheduled PID Control Using NSGA-II Iterative Genetic Algorithm

    In the practical control of nonlinear valve systems, PID control, as a model-free method, continues to play a crucial role thanks to its simple structure and performance-oriented tuning process. To improve control performance, advanced gain-scheduling methods schedule the PID gains based on the operating conditions and/or the tracking error. However, determining the scheduled gains is a major challenge, as PID gains need to be determined at each operating condition. In this paper, a model-assisted online optimization method based on the modified Non-Dominated Sorting Genetic Algorithm II (NSGA-II) is proposed to obtain an optimal gain-scheduled PID controller. Model-assisted offline optimization through computer-in-the-loop simulation provides the initial scheduled gains for an online algorithm, which then uses the iterative NSGA-II algorithm to automatically schedule and tune the PID gains by searching the parameter space online. In summary, the proposed approach yields a PID controller optimized through both model-assisted learning based on prior model knowledge and model-free online learning. The approach is demonstrated on a nonlinear valve system, obtaining optimal PID control gains for a given scheduled-gain structure. The performance improvement of the optimized gain-scheduled PID control is demonstrated by comparing it with fixed-gain controllers under multiple operating conditions.
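    The scheduled-gain structure the abstract describes can be sketched as a PID controller whose gains are interpolated between operating points. The schedule table is exactly what the paper's NSGA-II procedure would optimize; the class name, the operating points, and the gain values below are purely illustrative, and the NSGA-II search itself is not reproduced.

    ```python
    import numpy as np

    class GainScheduledPID:
        """Minimal sketch of a gain-scheduled PID controller: (Kp, Ki, Kd)
        are linearly interpolated over a table of operating points."""

        def __init__(self, ops, kp, ki, kd, dt):
            # ops must be sorted ascending; kp/ki/kd are gains at each op point
            self.ops, self.kp, self.ki, self.kd, self.dt = ops, kp, ki, kd, dt
            self.integ = 0.0
            self.prev_err = 0.0

        def step(self, setpoint, measurement):
            err = setpoint - measurement
            # schedule gains on the current operating condition (setpoint here)
            kp = np.interp(setpoint, self.ops, self.kp)
            ki = np.interp(setpoint, self.ops, self.ki)
            kd = np.interp(setpoint, self.ops, self.kd)
            self.integ += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return kp * err + ki * self.integ + kd * deriv
    ```

    Driving a toy first-order plant (y' = -y + u, a stand-in for the valve dynamics) with this controller shows the integral action pulling the output onto a setpoint that lies between two operating points:

    ```python
    pid = GainScheduledPID(ops=[0.0, 1.0, 2.0],
                           kp=[1.5, 2.0, 2.5],
                           ki=[0.8, 1.0, 1.2],
                           kd=[0.0, 0.0, 0.0], dt=0.01)
    y = 0.0
    for _ in range(2000):          # 20 s of simulation
        u = pid.step(1.5, y)       # setpoint 1.5, between op points 1.0 and 2.0
        y += 0.01 * (-y + u)       # forward-Euler plant update
    ```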

    A Scaling Algorithm for Weighted f-Factors in General Graphs

    We study the maximum weight perfect f-factor problem on any general simple graph G = (V, E, w) with positive integral edge weights w, and n = |V|, m = |E|. Given a function f: V → ℤ₊ on the vertices, a perfect f-factor is a generalized matching in which every vertex u is matched to exactly f(u) different edges. The previous best results on this problem have running time O(m·f(V)) [Gabow 2018] or Õ(W·(f(V))^2.373) [Gabow and Sankowski 2013], where W is the maximum edge weight and f(V) = Σ_{u ∈ V} f(u). In this paper, we present a scaling algorithm for this problem with running time Õ(m·n^{2/3}·log W). Previously this bound was only known for bipartite graphs [Gabow and Tarjan 1989]. The advantage is that the running time is independent of f(V); consequently it breaks the Ω(mn) barrier for large f(V), even for the unweighted f-factor problem in general graphs.
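    The perfect f-factor definition above is easy to check directly: a chosen edge set is a perfect f-factor exactly when every vertex u is incident to f(u) of the chosen edges. A small verifier (not the paper's algorithm, which finds the maximum-weight such set) makes the definition concrete; the function name is illustrative.

    ```python
    from collections import Counter

    def is_perfect_f_factor(edges, f):
        """Check whether a set of chosen edges is a perfect f-factor:
        every vertex u is incident to exactly f(u) chosen edges."""
        deg = Counter()
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        # every vertex meets its quota, and no vertex outside f is used
        return all(deg[u] == k for u, k in f.items()) and set(deg) <= set(f)
    ```

    For example, a 4-cycle is a perfect f-factor for f ≡ 2 (the ordinary 2-factor case), while removing any edge breaks the quota at two vertices.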

    You Only Condense Once: Two Rules for Pruning Condensed Datasets

    Dataset condensation is a crucial tool for enhancing training efficiency by reducing the size of the training dataset, particularly in on-device scenarios. However, these scenarios have two significant challenges: 1) the varying computational resources available on the devices require a dataset size different from the pre-defined condensed dataset, and 2) the limited computational resources often preclude the possibility of conducting additional condensation processes. We introduce You Only Condense Once (YOCO) to overcome these limitations. On top of one condensed dataset, YOCO produces smaller condensed datasets with two embarrassingly simple dataset pruning rules: Low LBPE Score and Balanced Construction. YOCO offers two key advantages: 1) it can flexibly resize the dataset to fit varying computational constraints, and 2) it eliminates the need for extra condensation processes, which can be computationally prohibitive. Experiments validate our findings on networks including ConvNet, ResNet and DenseNet, and datasets including CIFAR-10, CIFAR-100 and ImageNet. For example, our YOCO surpassed various dataset condensation and dataset pruning methods on CIFAR-10 with ten Images Per Class (IPC), achieving 6.98-8.89% and 6.31-23.92% accuracy gains, respectively. The code is available at: https://github.com/he-y/you-only-condense-once.
    Comment: Accepted by NeurIPS 202
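    The Balanced Construction rule above can be sketched as class-balanced pruning: keep the same number of examples per class, ranked by a per-example score. The generic `scores` argument below stands in for the paper's LBPE score (not reimplemented here), and the function name, the keep-lowest ordering, and the parameters are assumptions for illustration only.

    ```python
    import numpy as np

    def balanced_prune(scores, labels, keep_per_class):
        """Sketch of class-balanced dataset pruning: keep the
        keep_per_class lowest-scoring examples of every class,
        so the pruned subset stays balanced across classes."""
        keep = []
        for c in np.unique(labels):
            idx = np.where(labels == c)[0]
            order = idx[np.argsort(scores[idx])]  # ascending: low score first
            keep.extend(order[:keep_per_class].tolist())
        return sorted(keep)
    ```

    Because the budget is applied per class rather than globally, resizing the condensed dataset to a smaller device budget cannot starve any class, which is the point of the rule.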
    • …