
    A Nearly-Linear Time Algorithm for Linear Programs with Small Treewidth: A Multiscale Representation of Robust Central Path

    Arising from structural graph theory, treewidth has become a focus of study in fixed-parameter tractable algorithms in various communities, including combinatorics, integer-linear programming, and numerical analysis. Many NP-hard problems are known to be solvable in $\widetilde{O}(n \cdot 2^{O(\mathrm{tw})})$ time, where $\mathrm{tw}$ is the treewidth of the input graph. Analogously, many problems in P should be solvable in $\widetilde{O}(n \cdot \mathrm{tw}^{O(1)})$ time; however, due to the lack of appropriate tools, only a few such results are currently known. [Fom+18] conjectured this to hold as broadly as all linear programs; in our paper, we show this is true: given a linear program of the form $\min_{Ax=b,\ \ell \leq x \leq u} c^{\top} x$ and a width-$\tau$ tree decomposition of a graph $G_A$ related to $A$, we show how to solve it in time $\widetilde{O}(n \cdot \tau^2 \log(1/\varepsilon))$, where $n$ is the number of variables and $\varepsilon$ is the relative accuracy. Combined with recent techniques in vertex-capacitated flow [BGS21], this leads to an algorithm with $\widetilde{O}(n \cdot \mathrm{tw}^2 \log(1/\varepsilon))$ run-time. Besides being the first of its kind, our algorithm has a run-time nearly matching the fastest run-time for solving the sub-problem $Ax=b$ (under the assumption that no fast matrix multiplication is used). We obtain these results by combining recent techniques in interior-point methods (IPMs), sketching, and a novel representation of the solution under a multiscale basis similar to the wavelet basis.
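    The abstract does not define the graph $G_A$ precisely; a common convention (assumed here) is the interaction graph on the variables, with an edge between two variables whenever they appear in a common constraint. The sketch below, which is not the paper's algorithm, builds that graph from a constraint matrix and uses networkx's min-degree heuristic to obtain an upper bound on its treewidth, the quantity $\tau$ (or $\mathrm{tw}$) in the running-time bounds above.

    ```python
    # Minimal sketch (not the paper's code): build a graph G_A from the
    # constraint matrix A and estimate its treewidth with a greedy heuristic.
    # Assumption: G_A has one vertex per variable (column of A), with an edge
    # between two variables whenever some row of A uses both of them.
    import numpy as np
    import networkx as nx
    from networkx.algorithms import approximation as approx

    def variable_interaction_graph(A):
        """Graph on the columns of A; edge (i, j) iff some row uses both i and j."""
        m, n = A.shape
        G = nx.Graph()
        G.add_nodes_from(range(n))
        for row in A:
            support = np.flatnonzero(row)
            for a in range(len(support)):
                for b in range(a + 1, len(support)):
                    G.add_edge(int(support[a]), int(support[b]))
        return G

    if __name__ == "__main__":
        # A path-structured LP: each constraint couples consecutive variables,
        # so the interaction graph is a path and the width bound should be 1.
        n = 8
        A = np.zeros((n - 1, n))
        for k in range(n - 1):
            A[k, k], A[k, k + 1] = 1.0, -1.0
        G_A = variable_interaction_graph(A)
        width, decomposition = approx.treewidth_min_degree(G_A)
        print("estimated treewidth upper bound:", width)   # expect 1
        print("number of bags:", decomposition.number_of_nodes())
    ```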

    DFedADMM: Dual Constraints Controlled Model Inconsistency for Decentralized Federated Learning

    To address the communication burden of federated learning (FL), decentralized federated learning (DFL) discards the central server and establishes a decentralized communication network, in which each client communicates only with its neighboring clients. However, existing DFL methods still suffer from two major challenges that have not been fundamentally addressed: local inconsistency and local heterogeneous overfitting. To tackle these issues, we propose novel DFL algorithms, DFedADMM and its enhanced version DFedADMM-SAM, to enhance the performance of DFL. The DFedADMM algorithm employs primal-dual optimization (ADMM), using dual variables to control the model inconsistency arising from decentralized heterogeneous data distributions. The DFedADMM-SAM algorithm further improves on DFedADMM by employing a Sharpness-Aware Minimization (SAM) optimizer, which uses gradient perturbations to generate locally flat models and searches for models with uniformly low loss values to mitigate local heterogeneous overfitting. Theoretically, we derive convergence rates of $\mathcal{O}\big(\frac{1}{\sqrt{KT}}+\frac{1}{KT(1-\psi)^2}\big)$ and $\mathcal{O}\big(\frac{1}{\sqrt{KT}}+\frac{1}{KT(1-\psi)^2}+\frac{1}{T^{3/2}K^{1/2}}\big)$ in the non-convex setting for DFedADMM and DFedADMM-SAM, respectively, where $1-\psi$ denotes the spectral gap of the gossip matrix. Empirically, extensive experiments on the MNIST, CIFAR10, and CIFAR100 datasets demonstrate that our algorithms exhibit superior performance in terms of both generalization and convergence speed compared to existing state-of-the-art (SOTA) optimizers in DFL. (Comment: 24 pages.)
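    The abstract does not state the update rules, so the sketch below only illustrates the general pattern it describes: a decentralized, ADMM-style local step in which a dual variable penalizes drift from the neighborhood (gossip) average, with an optional SAM-style gradient perturbation. The toy quadratic objective, the hyper-parameters rho, lr, and sam_radius, and the fully connected gossip matrix are all illustrative assumptions, not the authors' configuration.

    ```python
    # Schematic sketch of a DFedADMM/SAM-style round on a toy problem.
    # Local objective per client i: f_i(x) = 0.5 * ||x - t_i||^2.
    import numpy as np

    rng = np.random.default_rng(0)
    n_clients, dim = 4, 3
    targets = rng.normal(size=(n_clients, dim))                # heterogeneous local optima
    models = np.zeros((n_clients, dim))
    duals = np.zeros((n_clients, dim))                         # one dual variable per client
    gossip = np.full((n_clients, n_clients), 1.0 / n_clients)  # fully connected example
    rho, lr, sam_radius = 1.0, 0.1, 0.05                       # illustrative hyper-parameters

    def local_grad(i, x):
        return x - targets[i]

    for rnd in range(200):
        neighbor_avg = gossip @ models                         # communication / mixing step
        new_models = models.copy()
        for i in range(n_clients):
            x = models[i].copy()
            for _ in range(5):                                 # K local steps
                g = local_grad(i, x)
                # SAM-style step: re-evaluate the gradient at a perturbed point.
                eps = sam_radius * g / (np.linalg.norm(g) + 1e-12)
                g = local_grad(i, x + eps)
                # Augmented-Lagrangian-style gradient: loss + dual + proximal term.
                g = g + duals[i] + rho * (x - neighbor_avg[i])
                x -= lr * g
            new_models[i] = x
            duals[i] += rho * (x - neighbor_avg[i])            # dual ascent on the consensus gap
        models = new_models

    print("consensus residual:", np.linalg.norm(models - models.mean(0)))
    print("distance to average optimum:", np.linalg.norm(models.mean(0) - targets.mean(0)))
    ```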

    User independent Emotion Recognition with Residual Signal-Image Network

    User-independent emotion recognition from large-scale physiological signals is a difficult problem. Many advanced methods exist, but they are evaluated on relatively small datasets with only dozens of subjects. Here, we propose Res-SIN, a novel end-to-end framework that uses Electrodermal Activity (EDA) signal images to classify human emotion. We first apply convex optimization-based EDA decomposition (cvxEDA) to decompose the signals and mine the static and dynamic emotion changes. Then, we transform the decomposed signals into images so that they can be effectively processed by CNN frameworks. Res-SIN combines individual emotion features and external emotion benchmarks to accelerate convergence. We evaluate our approach on the PMEmo dataset, currently the largest emotional dataset containing music and EDA signals. To the best of the authors' knowledge, our method is the first attempt to classify large-scale subject-independent emotion using 7962 EDA signal recordings from 457 subjects. Experimental results demonstrate the reliability of our model, and the binary classification accuracies of 73.65% and 73.43% on the arousal and valence dimensions, respectively, can serve as a baseline.
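    The abstract says decomposed EDA signals are converted to images for a CNN but does not give the exact transformation. The sketch below substitutes a simple moving-average baseline for the cvxEDA decomposition (cvxEDA itself solves a convex program) and reshapes the normalized tonic and phasic components into a two-channel square image; the sampling rate, window, and image size are illustrative assumptions.

    ```python
    # Illustrative EDA signal-to-image pipeline (not the Res-SIN code).
    import numpy as np

    def decompose_eda(signal, win=64):
        """Crude tonic/phasic split: tonic = moving average, phasic = residual."""
        kernel = np.ones(win) / win
        tonic = np.convolve(signal, kernel, mode="same")
        phasic = signal - tonic
        return tonic, phasic

    def to_image(component, side=32):
        """Min-max normalize and reshape a 1-D component into a side x side image."""
        x = component[: side * side]
        x = (x - x.min()) / (x.max() - x.min() + 1e-12)
        return x.reshape(side, side)

    if __name__ == "__main__":
        fs, seconds = 32, 32                     # 32 Hz for 32 s -> 1024 samples
        t = np.arange(fs * seconds) / fs
        eda = 2.0 + 0.1 * t + 0.3 * np.exp(-((t - 10) ** 2) / 2)   # drift + one response
        tonic, phasic = decompose_eda(eda)
        image = np.stack([to_image(tonic), to_image(phasic)])      # 2-channel CNN input
        print(image.shape)                       # (2, 32, 32)
    ```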

    MAFW: A Large-scale, Multi-modal, Compound Affective Database for Dynamic Facial Expression Recognition in the Wild

    Dynamic facial expression recognition (FER) databases provide important data support for affective computing and its applications. However, most FER databases are annotated with a few basic, mutually exclusive emotional categories and contain only one modality, e.g., videos. Such monotonous labels and single modalities cannot accurately capture human emotions or fulfill real-world applications. In this paper, we propose MAFW, a large-scale multi-modal compound affective database with 10,045 video-audio clips in the wild. Each clip is annotated with a compound emotional category and a couple of sentences describing the subjects' affective behaviors in the clip. For the compound emotion annotation, each clip is categorized into one or more of 11 widely used emotions, i.e., anger, disgust, fear, happiness, neutral, sadness, surprise, contempt, anxiety, helplessness, and disappointment. To ensure high-quality labels, we filter out unreliable annotations with an Expectation-Maximization (EM) algorithm, obtaining 11 single-label emotion categories and 32 multi-label emotion categories. To the best of our knowledge, MAFW is the first in-the-wild multi-modal database annotated with compound emotions and emotion-related captions. Additionally, we propose a novel Transformer-based expression snippet feature learning method that recognizes compound emotions by leveraging the expression-change relations among different emotions and modalities. Extensive experiments on the MAFW database show the advantages of the proposed method over other state-of-the-art methods for both uni- and multi-modal FER. Our MAFW database is publicly available at https://mafw-database.github.io/MAFW. (Comment: This paper has been accepted by ACM MM'22.)
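    The abstract mentions EM-based filtering of unreliable annotations without detailing the model. The sketch below shows one standard choice, a one-coin Dawid-Skene-style EM that jointly estimates per-annotator reliability and per-clip label posteriors, after which low-confidence clips could be dropped; it is illustrative, not the MAFW pipeline.

    ```python
    # One-coin Dawid-Skene-style EM for cleaning multi-annotator labels (sketch).
    # votes[i, j] = label given by annotator j to clip i, or -1 if not annotated.
    import numpy as np

    def em_label_aggregation(votes, n_classes, n_iters=50):
        n_items, n_annotators = votes.shape
        # Initialize label posteriors with per-item vote frequencies.
        post = np.zeros((n_items, n_classes))
        for i in range(n_items):
            for j in range(n_annotators):
                if votes[i, j] >= 0:
                    post[i, votes[i, j]] += 1.0
        post /= post.sum(axis=1, keepdims=True)
        reliability = np.full(n_annotators, 0.8)      # P(annotator labels correctly)

        for _ in range(n_iters):
            # M-step: reliability = expected fraction of correct votes.
            for j in range(n_annotators):
                mask = votes[:, j] >= 0
                reliability[j] = np.mean(post[mask, votes[mask, j]]) if mask.any() else 0.5
            # E-step: recompute label posteriors from reliabilities.
            log_post = np.zeros((n_items, n_classes))
            for i in range(n_items):
                for j in range(n_annotators):
                    if votes[i, j] >= 0:
                        p_wrong = (1 - reliability[j]) / (n_classes - 1)
                        probs = np.full(n_classes, p_wrong)
                        probs[votes[i, j]] = reliability[j]
                        log_post[i] += np.log(probs + 1e-12)
            post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
            post /= post.sum(axis=1, keepdims=True)
        return post, reliability

    if __name__ == "__main__":
        votes = np.array([[0, 0, 1], [1, 1, 1], [2, 0, 2], [0, -1, 0]])
        post, rel = em_label_aggregation(votes, n_classes=3)
        keep = post.max(axis=1) > 0.9                 # drop clips with uncertain labels
        print(post.round(2), rel.round(2), keep)
    ```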

    Online Streaming Video Super-Resolution with Convolutional Look-Up Table

    Online video streaming faces fundamental limitations on transmission bandwidth and computational capacity, and super-resolution is a promising potential solution. However, applying existing video super-resolution methods to online streaming is non-trivial. Existing video codecs and streaming protocols (e.g., WebRTC) dynamically change the video quality both spatially and temporally, which leads to diverse and dynamic degradations. Furthermore, online streaming imposes strict latency requirements that make most existing methods inapplicable. As a result, this paper focuses on the rarely explored problem setting of online streaming video super-resolution. To facilitate research on this problem, we construct a new benchmark dataset named LDV-WebRTC based on a real-world online streaming system. Leveraging this benchmark, we propose a novel method specifically for online video streaming, which combines convolution and Look-Up Table (LUT) components into a hybrid model that achieves a better performance-latency trade-off. To tackle the changing degradations, we propose a mixture-of-expert-LUT module, in which a set of LUTs specialized for different degradations is built and adaptively combined to handle the degradation at hand. Experiments show that our method achieves 720P video SR at around 100 FPS, significantly outperforming existing LUT-based methods and offering competitive performance compared to efficient CNN-based methods.
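    The abstract describes a mixture-of-expert-LUT module in which LUTs specialized for different degradations are adaptively combined, but not the LUT structure itself. The sketch below uses simple per-pixel value LUTs and hand-set blending weights purely to illustrate the adaptive combination; a real LUT-based SR model would use multi-dimensional LUTs over pixel neighborhoods and a learned degradation estimator.

    ```python
    # Simplified illustration of blending several "expert" LUTs per frame.
    import numpy as np

    def build_expert_luts():
        levels = np.arange(256, dtype=np.float32) / 255.0
        gentle = 255.0 * levels ** 0.9           # expert for mild degradation
        strong = 255.0 * levels ** 0.6           # expert for heavy degradation (boosts darks)
        return np.stack([gentle, strong])        # shape (n_experts, 256)

    def apply_moe_lut(frame, luts, weights):
        """Blend expert LUT outputs for a frame with softmax weights."""
        w = np.exp(weights - weights.max())
        w /= w.sum()
        outputs = luts[:, frame]                 # (n_experts, H, W) via integer indexing
        return np.tensordot(w, outputs, axes=1).clip(0, 255).astype(np.uint8)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        frame = rng.integers(0, 256, size=(720, 1280), dtype=np.int64)
        luts = build_expert_luts()
        # In the real system the weights would come from a degradation estimator;
        # here they are fixed by hand for illustration.
        out = apply_moe_lut(frame, luts, weights=np.array([2.0, 0.5]))
        print(out.shape, out.dtype)
    ```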

    Stress in Regulation of GABA Amygdala System and Relevance to Neuropsychiatric Diseases

    The amygdala is an almond-shaped nucleus located deep and medially within the temporal lobe and is thought to play a crucial role in the regulation of emotional processes. GABAergic neurotransmission inhibits the amygdala and prevents us from generating inappropriate emotional and behavioral responses. Stress may cause a reduction of the GABAergic interneuronal network and the development of neuropsychiatric diseases. In this review, we summarize recent evidence on the possible mechanisms underlying GABAergic control of the amygdala and its interaction with acute and chronic stress. Taken together, this work may contribute to future progress in finding new approaches to reverse the stress-induced attenuation of GABAergic neurotransmission in the amygdala.

    Sensors and Data Processing Techniques for Future Medicine

    A variety of innovative, high-precision sensors have been developed and become available for versatile applications. Such sensors, when combined with artificial-intelligence data processing techniques, can have a major impact on healthcare technologies. That is, a system can screen for symptoms such as infection, cardiovascular failure, and major depressive disorder, much as experienced physicians diagnose with a stethoscope and percussion.