143 research outputs found

    Exploring the characteristics of issue-related behaviors in GitHub using visualization techniques


    Study on microstructure mechanism of sandstone based on complex network theory

    Rock contains a large number of micro-pores of varied shapes and complex structure. To explore the topological structure of the sandstone seepage network, structural information of sandstones with different porosities is extracted through X-ray CT (computed tomography) scanning, image-processing techniques, and complex network methods. The results show that the sandstone seepage network has the scale-free property: a small number of pores with many throat connections are vital to the overall connectivity of the network, while the network remains strongly robust against random errors. This research can serve as a reference for cross-scale studies of porous seepage and for multi-disciplinary applications of complex network theory.
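    The abstract describes analysing the pore-throat network with complex network measures (scale-free degree distribution, robustness to random errors). The snippet below is a minimal illustrative sketch of that kind of analysis, not the paper's code: it assumes the pore network has already been extracted from the CT images as an edge list, and the `pore_throat_edges` data is made up for demonstration.

    ```python
    # Sketch: topological analysis of a pore-throat network with networkx.
    # The edge list is hypothetical; in practice it would come from
    # segmented CT images (node = pore, edge = throat).
    import random
    import networkx as nx

    pore_throat_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 4), (3, 4), (4, 5)]
    G = nx.Graph(pore_throat_edges)

    # Degree distribution: a scale-free network shows a heavy tail, i.e.
    # a few highly connected pores and many poorly connected ones.
    degrees = sorted((d for _, d in G.degree()), reverse=True)
    print("degree sequence:", degrees)

    # Robustness to random error: remove a fraction of pores at random and
    # measure how the largest connected component shrinks.
    def largest_component_fraction(graph, removal_fraction, seed=0):
        rng = random.Random(seed)
        g = graph.copy()
        k = int(removal_fraction * g.number_of_nodes())
        g.remove_nodes_from(rng.sample(list(g.nodes()), k))
        if g.number_of_nodes() == 0:
            return 0.0
        return len(max(nx.connected_components(g), key=len)) / graph.number_of_nodes()

    print("largest component after removing 20% of pores:",
          largest_component_fraction(G, 0.2))
    ```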

    TMac: Temporal Multi-Modal Graph Learning for Acoustic Event Classification

    Audiovisual data is everywhere in this digital age, which raises higher requirements for the deep learning models developed on it. Handling the information in multi-modal data well is the key to a better audiovisual model. We observe that such audiovisual data naturally has temporal attributes, such as the time information of each frame in a video. More concretely, the data is inherently multi-modal, with both audio and visual cues proceeding in strict chronological order. This indicates that temporal information is important in multi-modal acoustic event modeling, for both intra- and inter-modal relations. However, existing methods deal with the features of each modality independently and simply fuse them together, which neglects the mining of temporal relations and thus leads to sub-optimal performance. With this motivation, we propose a Temporal Multi-modal graph learning method for Acoustic event Classification, called TMac, which models such temporal information via graph learning techniques. In particular, we construct a temporal graph for each acoustic event, dividing its audio and video data into multiple segments. Each segment can be considered a node, and the temporal relationships between nodes can be considered timestamps on their edges. In this way, we can smoothly capture the dynamic information both within and across modalities. Several experiments demonstrate that TMac outperforms other SOTA models. Our code is available at https://github.com/MGitHubL/TMac.
    Comment: This work has been accepted by ACM MM 2023 for publication.
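    The abstract's core idea is the temporal graph: segments of the audio and video streams become nodes, with timestamped intra- and inter-modal edges. The sketch below is an illustrative reconstruction of that construction under assumed parameters (`num_segments`, `segment_duration`); it is not the authors' implementation, which is available at the linked repository.

    ```python
    # Sketch: building a temporal multi-modal graph for one acoustic event.
    # Nodes are (modality, segment) pairs; edges carry time gaps.
    import networkx as nx

    num_segments = 4          # assumed number of segments per clip
    segment_duration = 1.0    # assumed segment length in seconds

    G = nx.DiGraph()

    # One node per (modality, segment index), annotated with its start time.
    for t in range(num_segments):
        G.add_node(("audio", t), start=t * segment_duration)
        G.add_node(("video", t), start=t * segment_duration)

    # Intra-modal edges: consecutive segments of the same modality,
    # with the time gap stored on the edge (the timestamp relation).
    for modality in ("audio", "video"):
        for t in range(num_segments - 1):
            G.add_edge((modality, t), (modality, t + 1), delta_t=segment_duration)

    # Inter-modal edges: link audio and video segments that co-occur in time.
    for t in range(num_segments):
        G.add_edge(("audio", t), ("video", t), delta_t=0.0)
        G.add_edge(("video", t), ("audio", t), delta_t=0.0)

    print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
    ```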