
    Deep Reinforcement Learning Empowered Activity-Aware Dynamic Health Monitoring Systems

    In smart healthcare, health monitoring uses diverse tools and technologies to analyze patients' real-time biosignal data, enabling immediate actions and interventions. Existing monitoring approaches were designed on the premise that medical devices track several health metrics concurrently, tailored to their designated functional scope. This means they report all relevant health values within that scope, which can result in excess resource use and the collection of extraneous data from monitoring irrelevant health metrics. In this context, we propose the Dynamic Activity-Aware Health Monitoring strategy (DActAHM), a novel framework based on Deep Reinforcement Learning (DRL) and the SlowFast model that balances optimal monitoring performance against cost efficiency by tailoring monitoring to users' activities. Specifically, DActAHM uses the SlowFast model to efficiently identify individual activities and captures these results for further processing. It then refines health-metric monitoring in response to the identified activity through a DRL framework. Extensive experiments comparing DActAHM against three state-of-the-art approaches show that it achieves a 27.3% higher gain than the best-performing baseline, which fixes monitoring actions over the timeline.
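
    The abstract does not spell out the implementation, but its core loop (an activity recognizer labels the current context, and a DRL agent then decides which health metrics to monitor, trading monitoring utility against sensing cost) can be illustrated with a minimal tabular Q-learning sketch. The activity labels, metric sets, utility table, and cost constant below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal illustrative sketch (not the paper's implementation): a tabular
# Q-learning agent that picks which health metrics to monitor for each
# activity reported by the recognizer.
rng = np.random.default_rng(0)

ACTIVITIES = ["resting", "walking", "running"]           # assumed activity labels
METRIC_SETS = [("heart_rate",),                          # assumed monitoring actions
               ("heart_rate", "spo2"),
               ("heart_rate", "spo2", "respiration")]

# Assumed utility of each metric set per activity (rows: activities, cols: sets).
UTILITY = np.array([[0.40, 0.45, 0.45],
                    [0.50, 0.80, 0.90],
                    [0.55, 0.90, 1.20]])
COST_PER_METRIC = 0.15                                   # assumed sensing/energy cost

Q = np.zeros((len(ACTIVITIES), len(METRIC_SETS)))
alpha, eps = 0.1, 0.1                                    # learning rate, exploration rate

for _ in range(5000):
    s = rng.integers(len(ACTIVITIES))                    # activity from the recognizer
    a = rng.integers(len(METRIC_SETS)) if rng.random() < eps else int(Q[s].argmax())
    reward = UTILITY[s, a] - COST_PER_METRIC * len(METRIC_SETS[a]) + rng.normal(0, 0.05)
    Q[s, a] += alpha * (reward - Q[s, a])                # one-step (bandit-style) update

for s, activity in enumerate(ACTIVITIES):
    print(f"{activity}: monitor {METRIC_SETS[int(Q[s].argmax())]}")
```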

    Multi-Resource Allocation for On-Device Distributed Federated Learning Systems

    This work proposes a distributed multi-resource allocation scheme for minimizing the weighted sum of latency and energy consumption in an on-device distributed federated learning (FL) system. Each mobile device in the system engages in the model training process within the specified area and allocates its computation and communication resources for deriving and uploading parameters, respectively, so as to minimize the system objective subject to the computation/communication budget and a target latency requirement. In particular, mobile devices are connected via wireless TCP/IP architectures. Exploiting the structure of the optimization problem, we decompose it into two convex sub-problems. Drawing on Lagrangian duality and harmony search techniques, we characterize the globally optimal solution through closed-form solutions to all sub-problems, which give qualitative insights into the multi-resource tradeoff. Numerical simulations validate the analysis and assess the performance of the proposed algorithm.
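
    As a concrete illustration of the kind of closed-form result such a decomposition yields, the sketch below minimizes a weighted latency-plus-energy objective over a single device's CPU frequency under a commonly used computation model, where latency is C/f and energy is kappa*C*f^2. The model, the constants, and the projection onto the budget are assumptions for illustration; the paper's full formulation and its harmony-search step are not reproduced.

```python
# Illustrative sketch under an assumed local-computation model (not the paper's
# exact formulation):
#   latency  T(f) = C / f             (C: CPU cycles per local update, f: CPU frequency)
#   energy   E(f) = kappa * C * f**2  (dynamic CMOS power model)
# Objective: w_t * T(f) + w_e * E(f), with f limited by the device budget f_max.
def optimal_cpu_freq(C, kappa, w_t, w_e, f_max):
    # First-order condition: -w_t * C / f**2 + 2 * w_e * kappa * C * f = 0
    f_star = (w_t / (2.0 * w_e * kappa)) ** (1.0 / 3.0)
    return min(f_star, f_max)  # project onto the computation budget

f = optimal_cpu_freq(C=1e9, kappa=1e-28, w_t=0.5, w_e=0.5, f_max=2e9)
print(f"closed-form CPU frequency: {f / 1e9:.2f} GHz")
```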

    Neural topic modeling with bidirectional adversarial training

    Recent years have witnessed a surge of interest in using neural topic models for automatic topic extraction from text, since they avoid the complicated mathematical derivations required for model inference in traditional topic models such as Latent Dirichlet Allocation (LDA). However, these models typically either assume an improper prior (e.g., Gaussian or logistic normal) over the latent topic space or cannot infer the topic distribution for a given document. To address these limitations, we propose a neural topic modeling approach, called the Bidirectional Adversarial Topic (BAT) model, which represents the first attempt to apply bidirectional adversarial training to neural topic modeling. BAT builds a two-way projection between the document-topic distribution and the document-word distribution. It uses a generator to capture the semantic patterns in texts and an encoder for topic inference. Furthermore, to incorporate word-relatedness information, BAT is extended to the Bidirectional Adversarial Topic model with Gaussian (Gaussian-BAT). To verify the effectiveness of BAT and Gaussian-BAT, three benchmark corpora are used in our experiments. The experimental results show that BAT and Gaussian-BAT obtain more coherent topics, outperforming several competitive baselines. Moreover, when performing text clustering based on the extracted topics, our models outperform all baselines, with the larger improvement achieved by Gaussian-BAT, which gains nearly 6% in accuracy.
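
    A minimal sketch of the bidirectional adversarial setup described above, with an encoder mapping documents to topic distributions, a generator mapping Dirichlet-sampled topics to word distributions, and a discriminator judging joint (topic, word) pairs, is given below. The dimensions, network widths, and training step are assumptions for illustration and are not taken from the authors' released implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of bidirectional adversarial training for topic modeling
# (assumed dimensions and a single training step; not the released BAT code).
V, K, H = 2000, 50, 256          # vocabulary size, number of topics, hidden units

encoder = nn.Sequential(nn.Linear(V, H), nn.LeakyReLU(0.1),
                        nn.Linear(H, K), nn.Softmax(dim=-1))    # doc-word -> doc-topic
generator = nn.Sequential(nn.Linear(K, H), nn.LeakyReLU(0.1),
                          nn.Linear(H, V), nn.Softmax(dim=-1))  # doc-topic -> doc-word
discriminator = nn.Sequential(nn.Linear(K + V, H), nn.LeakyReLU(0.1),
                              nn.Linear(H, 1))                  # scores (topic, word) pairs

opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
opt_ge = torch.optim.Adam(list(generator.parameters()) + list(encoder.parameters()), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(bow):  # bow: (batch, V) normalized bag-of-words vectors
    batch = bow.size(0)
    theta_fake = torch.distributions.Dirichlet(torch.full((K,), 0.1)).sample((batch,))
    real_pair = torch.cat([encoder(bow), bow], dim=-1)              # inferred topics + real words
    fake_pair = torch.cat([theta_fake, generator(theta_fake)], -1)  # prior topics + generated words

    # Discriminator: tell real pairs from generated pairs.
    d_loss = bce(discriminator(real_pair.detach()), torch.ones(batch, 1)) + \
             bce(discriminator(fake_pair.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Encoder and generator: make the two joint distributions indistinguishable.
    g_loss = bce(discriminator(fake_pair), torch.ones(batch, 1)) + \
             bce(discriminator(real_pair), torch.zeros(batch, 1))
    opt_ge.zero_grad(); g_loss.backward(); opt_ge.step()
    return d_loss.item(), g_loss.item()

print(train_step(torch.rand(8, V).softmax(dim=-1)))  # smoke test with random documents
```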

    Generative AI-enabled Quantum Computing Networks and Intelligent Resource Allocation

    Quantum computing networks enable scalable collaboration and secure information exchange among multiple classical and quantum computing nodes while executing large-scale generative AI computation tasks and advanced quantum algorithms. They overcome limitations such as the number of qubits and the coherence time of entangled pairs, and they offer advantages for generative AI infrastructure, including enhanced noise reduction through distributed processing and improved scalability by connecting multiple quantum devices. However, efficient resource allocation in quantum computing networks is a critical challenge due to factors such as qubit variability and network complexity. In this article, we propose an intelligent resource allocation framework for quantum computing networks that improves network scalability while minimizing resource costs. To achieve scalability, we formulate the resource allocation problem as a stochastic program that accounts for the uncertain fidelities of qubits and entangled pairs. Furthermore, we introduce state-of-the-art reinforcement learning (RL) algorithms, ranging from generative learning to quantum machine learning, to solve the proposed stochastic resource allocation problem efficiently. Finally, we optimize resource allocation in heterogeneous quantum computing networks supporting quantum generative learning applications and propose a multi-agent RL-based algorithm that learns the optimal resource allocation policies without prior knowledge.
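
    To make the stochastic-programming idea concrete, the sketch below uses a sample average approximation: qubit fidelities are drawn from assumed Beta distributions, and a small enumeration picks the allocation of qubits to nodes that maximizes expected task fidelity minus a weighted resource cost. The fidelity model, cost vector, and distribution parameters are illustrative assumptions, not the article's formulation.

```python
import numpy as np

# Illustrative sample-average-approximation sketch of allocating a task's qubits
# across quantum nodes with uncertain qubit fidelity (assumed model, not the
# article's formulation). Node n has per-qubit fidelity ~ Beta(a_n, b_n) and a cost.
rng = np.random.default_rng(1)

costs = np.array([1.0, 0.6, 0.3])            # assumed per-qubit resource costs
fid_params = [(40, 2), (20, 3), (8, 3)]      # assumed Beta parameters per node
SCENARIOS = 2000
samples = np.column_stack([rng.beta(a, b, SCENARIOS) for a, b in fid_params])

def expected_score(allocation, lam=0.2):
    """Sample average of task fidelity (product over assigned qubits) minus weighted cost."""
    task_fidelity = np.prod(samples ** allocation, axis=1)   # fidelity multiplies per qubit used
    return task_fidelity.mean() - lam * np.dot(costs, allocation)

# Enumerate the small set of ways to place 4 qubits on the 3 nodes and keep the best.
best = max(((i, j, 4 - i - j) for i in range(5) for j in range(5 - i)),
           key=lambda alloc: expected_score(np.array(alloc)))
print("best qubit allocation per node:", best)
```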

    Functional interaction of Parkinson's disease-associated LRRK2 with members of the dynamin GTPase superfamily

    Mutations in LRRK2 cause autosomal dominant Parkinson's disease (PD). LRRK2 encodes a multi-domain protein containing GTPase and kinase domains, and putative protein-protein interaction domains. Familial PD mutations alter the GTPase and kinase activity of LRRK2 in vitro. LRRK2 is suggested to regulate a number of cellular pathways, although the underlying mechanisms are poorly understood. To explore such mechanisms, it has proved informative to identify LRRK2-interacting proteins, some of which serve as LRRK2 kinase substrates. Here, we identify common interactions of LRRK2 with members of the dynamin GTPase superfamily. LRRK2 interacts with dynamins 1-3, which mediate membrane scission in clathrin-mediated endocytosis, and with dynamin-related proteins that mediate mitochondrial fission (Drp1) and fusion (mitofusins and OPA1). LRRK2 partially co-localizes with endosomal dynamin-1 or with mitofusins and OPA1 at mitochondrial membranes. The subcellular distribution and oligomeric complexes of dynamin GTPases are not altered by modulating LRRK2 in mouse brain, whereas mature OPA1 levels are reduced in G2019S PD brains. LRRK2 enhances mitofusin-1 GTP binding, whereas dynamin-1 and OPA1 serve as modest substrates of LRRK2-mediated phosphorylation in vitro. While dynamin GTPase orthologs are not required for LRRK2-induced toxicity in yeast, LRRK2 functionally interacts with dynamin-1 and mitofusin-1 in cultured neurons. LRRK2 attenuates neurite shortening induced by dynamin-1 by reducing its levels, whereas LRRK2 rescues impaired neurite outgrowth induced by mitofusin-1, potentially by reversing excessive mitochondrial fusion. Our study elucidates novel functional interactions of LRRK2 with dynamin-superfamily GTPases that implicate LRRK2 in the regulation of membrane dynamics important for endocytosis and mitochondrial morphology.

    The AAA+ ATPase Thorase Regulates AMPA Receptor-Dependent Synaptic Plasticity and Behavior

    The synaptic insertion or removal of AMPA receptors (AMPARs) plays critical roles in the regulation of synaptic activity, reflected in the expression of long-term potentiation (LTP) and long-term depression (LTD). The cellular events underlying this important process in learning and memory are still being revealed. Here we describe and characterize the AAA+ ATPase Thorase, which regulates the surface expression of AMPARs. In an ATPase-dependent manner, Thorase mediates the internalization of AMPARs by disassembling the AMPAR-GRIP1 complex. Following genetic deletion of Thorase, the internalization of AMPARs is substantially reduced, leading to increased amplitudes of miniature excitatory postsynaptic currents, enhancement of LTP, and elimination of LTD. These molecular events are expressed as deficits in learning and memory in Thorase-null mice. This study identifies an AAA+ ATPase that plays a critical role in regulating the surface expression of AMPARs and thereby regulates synaptic plasticity, learning, and memory.