1,993 research outputs found
Machine Unlearning in Contrastive Learning
Machine unlearning is a complex process that requires a model to remove the influence of specific training data while keeping the loss of accuracy to a minimum. Despite numerous studies on machine unlearning in recent years, most have focused on supervised learning models, leaving contrastive learning models relatively underexplored. Convinced that self-supervised learning holds promise that rivals or even surpasses supervised learning, we investigate machine unlearning methods centered on contrastive learning models. In this study, we introduce a novel gradient constraint-based approach that trains the model to effectively achieve unlearning. Our method requires only a small number of training epochs and the identification of the data to be unlearned. Remarkably, our approach performs well not only on contrastive learning models but also on supervised learning models, demonstrating its versatility across learning paradigms.
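The abstract does not spell out the exact constraint, so the following is only a minimal sketch of one way a gradient constraint-based unlearning step could look for a contrastive model: the gradient on the forget batch is projected so that (to first order) it does not increase the loss on retained data, and the model then ascends the projected direction. The function names, projection rule, and hyperparameters are illustrative assumptions, not the authors' method.

```python
import torch

def unlearning_step(model, contrastive_loss, forget_batch, retain_batch,
                    optimizer, ascent_scale=1.0):
    """One illustrative gradient-constrained unlearning step (assumed scheme).

    Ascend the contrastive loss on the forget batch, but project that
    gradient to be orthogonal to the retain-batch gradient so the update
    does not, to first order, hurt accuracy on retained data.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradient of the loss on data to be forgotten (direction we will ascend).
    g_forget = torch.autograd.grad(contrastive_loss(model, forget_batch), params)
    # Gradient of the loss on retained data (direction we must not increase).
    g_retain = torch.autograd.grad(contrastive_loss(model, retain_batch), params)

    f = torch.cat([g.flatten() for g in g_forget])
    r = torch.cat([g.flatten() for g in g_retain])

    # Remove the component of the forget gradient that lies along the retain gradient.
    f_proj = f - (torch.dot(f, r) / (r.norm() ** 2 + 1e-12)) * r

    # Write the negated projected gradient back and step: gradient *ascent*
    # on the forget loss, constrained with respect to retained data.
    offset = 0
    for p in params:
        n = p.numel()
        p.grad = -ascent_scale * f_proj[offset:offset + n].view_as(p)
        offset += n
    optimizer.step()
    optimizer.zero_grad()
```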
Sub-megahertz frequency stabilization of a diode laser by digital laser current modulation
Digital laser current modulation (DLCM) is a convenient laser stabilization scheme whose major advantages are simplicity and low cost of implementation. However, there is a tradeoff between the SNR of the error signal and the laser linewidth due to the direct laser frequency modulation. In this paper, we demonstrate that DLCM can reduce the FWHM linewidth of a tunable diode laser down to 500 kHz using the modulation transfer spectrum of the D2 line of a 6Li atomic vapor. For this purpose, a theoretical model is provided to analyze the DLCM-based modulation transfer spectrum. From this analysis, we experimentally explore the effect of the modulation on the DLCM spectrum to minimize the laser linewidth. Our results show that optimized DLCM can stabilize a diode laser into the sub-megahertz regime without requiring acousto-optic or electro-optic modulators.
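As a toy illustration of how digital current modulation can yield a lock error signal, consider the sketch below: the laser frequency is toggled between two points around the line, and differencing the two detector samples approximates the derivative of the line shape, which crosses zero at line center. The Lorentzian line model, linewidth, and modulation depth are illustrative assumptions and not the model or values from the paper.

```python
import numpy as np

def lorentzian(detuning_hz, fwhm_hz):
    """Normalized Lorentzian absorption profile (toy spectroscopic line)."""
    half = fwhm_hz / 2.0
    return half**2 / (detuning_hz**2 + half**2)

def dlcm_error_signal(detuning_hz, mod_depth_hz, fwhm_hz):
    """Error signal from digital (two-point) laser frequency modulation.

    The laser is toggled between detuning +/- mod_depth; the demodulated
    signal is the difference of the two samples, which approximates the
    derivative of the line and crosses zero at line center.
    """
    return (lorentzian(detuning_hz + mod_depth_hz, fwhm_hz)
            - lorentzian(detuning_hz - mod_depth_hz, fwhm_hz))

if __name__ == "__main__":
    # Illustrative numbers only: a 6 MHz-wide line probed with 1 MHz modulation depth.
    for d in np.linspace(-20e6, 20e6, 9):
        print(f"{d/1e6:+6.1f} MHz -> error {dlcm_error_signal(d, 1e6, 6e6):+.3f}")
```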
Anonymous Expression in an Online Community for Women in China
Gender issues faced by women range from workplace harassment to domestic violence. While publicly disclosing these issues on social media can be hard, some women may be inclined to express themselves anonymously. Using qualitative content analysis, we studied such an anonymous community for women on Chinese social media where discussions of gender issues take place. By observing anonymous experiences contributed by female users and made publicly available by an influencer, we identified 20 commonly discussed issues, with cheating partners, controlling parents, and age anxiety taking the lead. By describing the anonymously expressed social challenges faced by women in China, in the context of Chinese culture and expectations about gender, we aim to motivate policies and platform designs that accommodate the needs of the affected population.
Optimizing a Convolutional Neural Network Accelerator
In recent years, convolutional neural networks (CNNs) have been widely used in many image-related machine learning algorithms because of their high accuracy for image recognition. As CNNs involve an enormous number of computations, it is necessary to accelerate them with hardware accelerators such as FPGAs, GPUs, and ASIC designs. However, CNN accelerators face a critical problem: the large time and power consumption caused by off-chip memory access. Here, we describe two methods to optimize a CNN accelerator, reducing data precision and data reuse, which improve accelerator performance within a limited on-chip buffer. Three factors that influence data reuse are proposed and analyzed: loop execution order, reuse strategy, and parallelism strategy. Based on this analysis, we enumerate all legal design possibilities and find the optimal hardware design with low off-chip memory access and a low buffer size. In this way, we can effectively improve the performance and reduce the power consumption of the accelerator.
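The design-space search described above can be illustrated with a toy enumeration: for each tile size and reuse choice, estimate off-chip (DRAM) accesses and on-chip buffer size under a simplified cost model, and keep the best point that fits the buffer budget. The layer shape, candidate tile sizes, and cost model below are illustrative assumptions, not the paper's actual accelerator model.

```python
from itertools import product

# Toy convolution layer shape (illustrative numbers, not from the paper).
IN_CH, OUT_CH, FMAP, KERNEL = 64, 64, 56, 3

def estimate_design(tile_in, tile_out, tile_fmap, reuse):
    """Rough off-chip access and buffer estimates for one tiled design point.

    reuse = "input"  : keep input tiles resident, re-fetch weights per fmap tile.
    reuse = "weight" : keep weight tiles resident, re-fetch inputs per output tile.
    """
    # On-chip buffer: one input tile + one weight tile + one output tile (in words).
    buffer_words = (tile_in * tile_fmap * tile_fmap
                    + tile_in * tile_out * KERNEL * KERNEL
                    + tile_out * tile_fmap * tile_fmap)

    n_out_tiles = -(-OUT_CH // tile_out)          # ceil division
    n_fmap_tiles = (-(-FMAP // tile_fmap)) ** 2

    input_words = IN_CH * FMAP * FMAP
    weight_words = IN_CH * OUT_CH * KERNEL * KERNEL
    if reuse == "input":
        dram_words = input_words + weight_words * n_fmap_tiles
    else:
        dram_words = weight_words + input_words * n_out_tiles
    return dram_words, buffer_words

def search(buffer_budget_words):
    """Enumerate legal design points and return the one with fewest DRAM accesses."""
    best = None
    for tile_in, tile_out, tile_fmap, reuse in product(
            (8, 16, 32), (8, 16, 32), (7, 14, 28), ("input", "weight")):
        dram, buf = estimate_design(tile_in, tile_out, tile_fmap, reuse)
        if buf <= buffer_budget_words and (best is None or dram < best[0]):
            best = (dram, buf, tile_in, tile_out, tile_fmap, reuse)
    return best

if __name__ == "__main__":
    print(search(buffer_budget_words=64 * 1024))
```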
Graph Attention-based MADRL for Access Control and Resource Allocation in Wireless Networked Control Systems
Wireless networked control systems (WNCS) offer great potential for revolutionizing industrial automation by enabling wireless coordination between sensors, decision centers, and actuators. However, inefficient access control and resource allocation in WNCS are two critical factors that limit closed-loop performance and control stability, especially when spectral and energy resources are limited. In this paper, we first analyze the optimal scheduling condition for maintaining control stability of a WNCS and then formulate a long-term optimization problem that jointly optimizes the access policy of edge devices and the grant policy and resource allocation at the edge server. We employ Lyapunov optimization to decompose the long-term optimization problem into a sequence of independent sub-problems, and propose a heterogeneous attention graph-based multi-agent deep reinforcement learning algorithm that jointly optimizes the access and resource allocation policies. By leveraging the attention mechanism to project the graph representations of heterogeneous agents into a unified space, our proposed algorithm facilitates coordination among heterogeneous agents, thereby enhancing overall system performance. Simulation results demonstrate that our proposed framework outperforms several benchmarks, validating its effectiveness.
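To make the attention-based projection into a unified space concrete, here is a minimal sketch of a heterogeneous graph attention layer in the spirit described above: each agent type (e.g., edge device, edge server) gets its own encoder into a shared embedding space, and masked attention over neighbors aggregates the messages. The dimensions, agent types, and layer structure are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeteroGraphAttention(nn.Module):
    """Toy heterogeneous graph attention layer (illustrative, not the paper's model).

    Each agent type has its own encoder that projects observations into a
    shared embedding space; scaled dot-product attention over neighbors then
    aggregates the embeddings for each agent.
    """

    def __init__(self, obs_dims, embed_dim=64):
        super().__init__()
        # One projection per agent type, e.g. {"device": 10, "server": 20}.
        self.encoders = nn.ModuleDict(
            {agent_type: nn.Linear(dim, embed_dim) for agent_type, dim in obs_dims.items()})
        self.query = nn.Linear(embed_dim, embed_dim)
        self.key = nn.Linear(embed_dim, embed_dim)
        self.value = nn.Linear(embed_dim, embed_dim)
        self.scale = embed_dim ** -0.5

    def forward(self, obs, types, adjacency):
        """obs: list of per-agent observation tensors; types: list of agent types;
        adjacency: [N, N] 0/1 mask of which agents can exchange messages."""
        # Project heterogeneous observations into the unified embedding space.
        h = torch.stack([self.encoders[t](o) for o, t in zip(obs, types)])  # [N, D]

        # Masked attention over neighboring agents.
        scores = (self.query(h) @ self.key(h).T) * self.scale               # [N, N]
        scores = scores.masked_fill(adjacency == 0, float("-inf"))
        attn = F.softmax(scores, dim=-1)
        return attn @ self.value(h)                                          # [N, D]

if __name__ == "__main__":
    layer = HeteroGraphAttention({"device": 10, "server": 20})
    obs = [torch.randn(10), torch.randn(10), torch.randn(20)]
    types = ["device", "device", "server"]
    adjacency = torch.ones(3, 3)
    print(layer(obs, types, adjacency).shape)  # torch.Size([3, 64])
```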