Study on microstructure mechanism of sandstone based on complex network theory
Rock contains a large number of micro-pores of varied shapes and complex structure. Structural information about sandstones of different porosities is extracted through X-ray CT (Computed Tomography) scanning, image processing techniques, and complex network methods to explore the topological structure of the sandstone seepage network. The results show that the sandstone seepage network has the scale-free property. A small fraction of pores with many throat connections plays a vital role in the overall connectivity of the seepage network, while the network remains strongly robust against random errors. This research can provide a reference for cross-scale research on porous-media seepage and for multi-disciplinary applications of complex network theory.
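The scale-free and robustness claims above can be illustrated with a small graph sketch. This is a hypothetical stand-in, not the paper's pipeline: a Barabási-Albert graph substitutes for a CT-extracted pore-throat network, with pores as nodes and throats as edges, and we compare connectivity loss under random node failure versus targeted removal of high-degree pores.

```python
import random
import networkx as nx

random.seed(0)

# Stand-in for a CT-extracted pore network: BA graphs are scale-free,
# mirroring the degree distribution the abstract reports for sandstone.
G = nx.barabasi_albert_graph(n=200, m=2, seed=0)
N = G.number_of_nodes()

def largest_component_fraction(graph):
    """Fraction of the original pores still in the largest connected cluster."""
    if graph.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(graph)) / N

# Random error: remove 20 pores chosen uniformly at random.
G_rand = G.copy()
G_rand.remove_nodes_from(random.sample(list(G_rand.nodes), 20))

# Targeted attack: remove the 20 pores with the most throat connections.
G_targ = G.copy()
hubs = sorted(G_targ.degree, key=lambda kv: kv[1], reverse=True)[:20]
G_targ.remove_nodes_from([n for n, _ in hubs])

# Connectivity stays high under random error (robustness) but degrades
# when the few highly connected pores are removed.
print(largest_component_fraction(G_rand))
print(largest_component_fraction(G_targ))
```

The asymmetry between the two removal schemes is exactly the scale-free signature the abstract describes: a minority of hub pores carries the network's connectivity.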
Single-cell Multi-view Clustering via Community Detection with Unknown Number of Clusters
Single-cell multi-view clustering enables the exploration of cellular
heterogeneity within the same cell from different views. Despite the
development of several multi-view clustering methods, two primary challenges
persist. Firstly, most existing methods treat the information from both
single-cell RNA (scRNA) and single-cell Assay of Transposase Accessible
Chromatin (scATAC) views as equally significant, overlooking the substantial
disparity in data richness between the two views. This oversight frequently
leads to a degradation in overall performance. Additionally, the majority of
clustering methods necessitate manual specification of the number of clusters
by users. However, for biologists dealing with cell data, precisely determining
the number of distinct cell types poses a formidable challenge. To this end, we
introduce scUNC, an innovative multi-view clustering approach tailored for
single-cell data, which seamlessly integrates information from different views
without the need for a predefined number of clusters. The scUNC method
comprises several steps: initially, it employs a cross-view fusion network to
create an effective embedding, which is then utilized to generate initial
clusters via community detection. Subsequently, the clusters are automatically
merged and optimized until no further clusters can be merged. We conducted a
comprehensive evaluation of scUNC using three distinct single-cell datasets.
The results show that scUNC outperforms the baseline methods.
TMac: Temporal Multi-Modal Graph Learning for Acoustic Event Classification
Audiovisual data is ubiquitous in the digital age, which places higher demands on the deep learning models developed for it. Handling the information in multi-modal data well is the key to a better audiovisual model.
We observe that these audiovisual data naturally have temporal attributes, such
as the time information for each frame in the video. More concretely, such data
is inherently multi-modal according to both audio and visual cues, which
proceed in a strict chronological order. This indicates that temporal information
is important in multi-modal acoustic event modeling at both the intra- and
inter-modal levels. However, existing methods handle each modality's features
independently and simply fuse them together, which neglects the mining of
temporal relations and thus leads to sub-optimal performance. With this
motivation, we propose a Temporal Multi-modal graph learning method for
Acoustic event Classification, called TMac, by modeling such temporal
information via graph learning techniques. In particular, we construct a
temporal graph for each acoustic event, dividing its audio data and video data
into multiple segments. Each segment can be considered as a node, and the
temporal relationships between nodes can be considered as timestamps on their
edges. In this way, we can smoothly capture the dynamic information both
within and across modalities. Several experiments demonstrate that TMac
outperforms other state-of-the-art models. Our code is available at
https://github.com/MGitHubL/TMac.
Comment: This work has been accepted by ACM MM 2023 for publication.
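The temporal-graph construction described above can be sketched in a few lines. This is a hedged illustration, not TMac's code: each audio and video segment becomes a node, intra-modal edges follow chronological order within each stream, and inter-modal edges align audio and video segments at the same time step, with timestamps stored as edge attributes.

```python
import networkx as nx

num_segments = 4  # hypothetical number of segments per acoustic event
G = nx.DiGraph()

# Each segment of each modality is a node.
for t in range(num_segments):
    G.add_node(("audio", t))
    G.add_node(("video", t))

# Intra-modal edges: strict chronological order within each stream.
for t in range(num_segments - 1):
    G.add_edge(("audio", t), ("audio", t + 1), timestamp=t)
    G.add_edge(("video", t), ("video", t + 1), timestamp=t)

# Inter-modal edges: align audio and video segments at the same time.
for t in range(num_segments):
    G.add_edge(("audio", t), ("video", t), timestamp=t)

print(G.number_of_nodes(), G.number_of_edges())
```

With 4 segments this yields 8 nodes and 10 timestamped edges; a graph neural network can then propagate information along both edge types to capture intra- and inter-modal dynamics.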