95 research outputs found

    BSN++: Complementary Boundary Regressor with Scale-Balanced Relation Modeling for Temporal Action Proposal Generation

    Generating human action proposals in untrimmed videos is an important yet challenging task with wide applications. Current methods often suffer from noisy boundary locations and the inferior quality of the confidence scores used for proposal retrieval. In this paper, we present BSN++, a new framework which exploits a complementary boundary regressor and relation modeling for temporal proposal generation. First, we propose a novel boundary regressor based on the complementary characteristics of the starting and ending boundary classifiers. Specifically, we utilize a U-shaped architecture with nested skip connections to capture rich contexts and introduce a bi-directional boundary matching mechanism to improve boundary precision. Second, to account for the proposal-proposal relations ignored in previous methods, we devise a proposal relation block which includes two self-attention modules operating on the position and channel dimensions, respectively. Furthermore, we find that data imbalance problems inevitably exist in the positive/negative proposals and temporal durations, which harm model performance on the tail of the distribution. To relieve this issue, we introduce a scale-balanced re-sampling strategy. Extensive experiments are conducted on two popular benchmarks, ActivityNet-1.3 and THUMOS14, which demonstrate that BSN++ achieves state-of-the-art performance. Comment: Submitted to AAAI21. arXiv admin note: substantial text overlap with arXiv:2007.0988
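
    The proposal relation block described above pairs a position-wise and a channel-wise self-attention module. Below is a minimal PyTorch sketch of such a block, assuming a BMN-style proposal confidence map of shape (batch, channels, duration, time); the layer names, reduction ratio, and fusion by summation are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a proposal relation block with position- and channel-wise
# self-attention over a (B, C, D, T) proposal map. Shapes and hyperparameters
# are assumptions for illustration only.
import torch
import torch.nn as nn


class PositionAttention(nn.Module):
    """Self-attention over the D*T proposal positions of a (B, C, D, T) map."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, d, t = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, D*T, C/r)
        k = self.key(x).flatten(2)                      # (B, C/r, D*T)
        v = self.value(x).flatten(2)                    # (B, C, D*T)
        attn = torch.softmax(q @ k, dim=-1)             # (B, D*T, D*T)
        out = (v @ attn.transpose(1, 2)).view(b, c, d, t)
        return self.gamma * out + x


class ChannelAttention(nn.Module):
    """Self-attention over the C feature channels."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, d, t = x.shape
        feat = x.flatten(2)                                         # (B, C, D*T)
        attn = torch.softmax(feat @ feat.transpose(1, 2), dim=-1)   # (B, C, C)
        out = (attn @ feat).view(b, c, d, t)
        return self.gamma * out + x


class ProposalRelationBlock(nn.Module):
    """Fuse position- and channel-attended features by summation."""
    def __init__(self, channels):
        super().__init__()
        self.pos = PositionAttention(channels)
        self.chn = ChannelAttention()

    def forward(self, x):
        return self.pos(x) + self.chn(x)


if __name__ == "__main__":
    x = torch.randn(2, 128, 32, 100)  # (batch, channels, duration, time)
    print(ProposalRelationBlock(128)(x).shape)  # torch.Size([2, 128, 32, 100])
```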

    Data-constrained MHD simulation for the eruption of a filament-sigmoid system in solar active region 11520

    The separation of a filament and sigmoid is observed during an X1.4 flare on July 12, 2012 in solar active region 11520, but the corresponding change in the magnetic field is not clear. We construct a data-constrained magnetohydrodynamic simulation of the filament-sigmoid system with the flux rope insertion method and a magnetic flux eruption code, which produces a magnetic field evolution that may explain the separation of the low-lying filament and the high-lying hot channel (sigmoid). The initial state of the magnetic model contains a magnetic flux rope with a hyperbolic flux tube, a null point structure, and overlying confining magnetic fields. We find that magnetic reconnection at the null point makes the right footpoint of the sigmoid move from one positive magnetic polarity (P1) to another (P3). Tether-cutting reconnection then occurs at the hyperbolic flux tube and quickly cuts off the connection between the low-lying filament and the high-lying sigmoid. In the end, the high-lying sigmoid erupts and grows into a coronal mass ejection, while the low-lying filament stays stable. The observed double J-shaped flare ribbons, semi-circular ribbon, and brightenings of several loops are reproduced in the simulation, where the eruption of the magnetic flux rope includes the impulsive acceleration and propagation phases. Comment: This paper has been accepted for publication in the Ap
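
    As an aside on the null point structure mentioned above, the toy sketch below locates candidate magnetic null points on a gridded field by searching for near-zero local minima of |B|. It is only an illustration under an assumed synthetic linear field; it is not the flux rope insertion method or the magnetic flux eruption code used by the authors.

```python
# Toy sketch: find candidate magnetic null points on a discretized B field as
# grid cells where |B| is a local minimum and nearly zero. The synthetic test
# field and the threshold are assumptions for illustration.
import numpy as np
from scipy.ndimage import minimum_filter


def candidate_nulls(bx, by, bz, rel_threshold=1e-3):
    """Return grid indices where |B| is a local minimum and nearly zero."""
    bmag = np.sqrt(bx**2 + by**2 + bz**2)
    local_min = bmag == minimum_filter(bmag, size=3)
    nearly_zero = bmag < rel_threshold * bmag.max()
    return np.argwhere(local_min & nearly_zero)


if __name__ == "__main__":
    # Divergence-free linear field B = (x, y, -2z) has a single null at the origin.
    x, y, z = np.meshgrid(np.linspace(-1, 1, 41),
                          np.linspace(-1, 1, 41),
                          np.linspace(-1, 1, 41), indexing="ij")
    print(candidate_nulls(x, y, -2 * z))  # expect index [20, 20, 20]
```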

    ChatEDA: A Large Language Model Powered Autonomous Agent for EDA

    The integration of a complex set of Electronic Design Automation (EDA) tools to enhance interoperability is a critical concern for circuit designers. Recent advancements in large language models (LLMs) have showcased their exceptional capabilities in natural language processing and comprehension, offering a novel approach to interfacing with EDA tools. This paper introduces ChatEDA, an autonomous agent for EDA empowered by a large language model, AutoMage, complemented by EDA tools serving as executors. ChatEDA streamlines the design flow from the Register-Transfer Level (RTL) to the Graphic Data System Version II (GDSII) by effectively managing task planning, script generation, and task execution. Through comprehensive experimental evaluations, ChatEDA has demonstrated its proficiency in handling diverse requirements, and our fine-tuned AutoMage model has exhibited superior performance compared to GPT-4 and other similar LLMs.
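
    The plan, generate, and execute loop described above can be sketched as follows. The `llm` callable, the JSON plan format, and the shell-script execution are hypothetical placeholders; they are not AutoMage's actual interface or the authors' EDA tool wrappers.

```python
# Minimal sketch of a ChatEDA-style agent loop: task planning, script
# generation, and task execution. The `llm` callable and the plan/script
# formats are assumptions, not the AutoMage API.
import json
import subprocess
from typing import Callable, List


def run_agent(request: str, llm: Callable[[str], str]) -> None:
    # 1) Task planning: decompose the request into an ordered list of tool tasks.
    plan_prompt = (
        "Decompose this RTL-to-GDSII request into an ordered JSON list of "
        f"tool tasks: {request}"
    )
    tasks: List[str] = json.loads(llm(plan_prompt))

    for task in tasks:
        # 2) Script generation: ask the model for a runnable script for this task.
        script = llm(f"Write a shell script that performs this EDA task: {task}")

        # 3) Task execution: run the generated script and report the outcome.
        result = subprocess.run(["bash", "-c", script],
                                capture_output=True, text=True)
        status = "ok" if result.returncode == 0 else "failed"
        print(f"[{status}] {task}")
```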