
    GTRL: An Entity Group-Aware Temporal Knowledge Graph Representation Learning Method

    Temporal Knowledge Graph (TKG) representation learning embeds entities and event types into a continuous low-dimensional vector space by integrating temporal information, which is essential for downstream tasks, e.g., event prediction and question answering. Existing methods stack multiple graph convolution layers to model the influence of distant entities, leading to the over-smoothing problem. To alleviate this problem, recent studies incorporate reinforcement learning to obtain paths that help model the influence of distant entities. However, due to the limited number of hops, these studies fail to capture correlations between entities that are far apart or even unreachable. To this end, we propose GTRL, an entity Group-aware Temporal knowledge graph Representation Learning method. GTRL is the first work to incorporate entity group modeling, capturing correlations between entities while stacking only a finite number of layers. Specifically, an entity group mapper is proposed to generate entity groups from entities in a learnable way. Based on the entity groups, an implicit correlation encoder is introduced to capture implicit correlations between any pair of entity groups. In addition, hierarchical GCNs are exploited to accomplish message aggregation and representation updating on the entity group graph and the entity graph. Finally, GRUs are employed to capture the temporal dependency in TKGs. Extensive experiments on three real-world datasets demonstrate that GTRL achieves state-of-the-art performance on the event prediction task, outperforming the best baseline by an average of 13.44%, 9.65%, 12.15%, and 15.12% in MRR, Hits@1, Hits@3, and Hits@10, respectively. Comment: Accepted by TKDE, 16 pages, 9 figures.
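
    A rough, hypothetical sketch of the entity-group idea (not the authors' implementation): entities are softly assigned to groups, the groups attend to one another in place of the implicit correlation encoder, group context is scattered back to the entities, and a GRU carries the state across timestamps. All class names, shapes, and the attention choice are assumptions; the hierarchical GCNs over the entity and entity-group graphs are omitted.

```python
import torch
import torch.nn as nn

class GroupAwareTKGLayer(nn.Module):
    """Hypothetical sketch of a group-aware TKG layer, not GTRL itself."""
    def __init__(self, dim, num_groups):
        super().__init__()
        self.assign = nn.Linear(dim, num_groups)  # entity group mapper (assumed form)
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.gru = nn.GRUCell(dim, dim)           # temporal dependency across steps

    def forward(self, h, h_prev):
        # h, h_prev: (num_entities, dim) embeddings at current / previous timestamp
        s = torch.softmax(self.assign(h), dim=-1)   # (E, G) soft group assignment
        g = s.t() @ h                               # (G, dim) entity-group embeddings
        # every group attends to every other group: pairwise group correlations
        g_ctx, _ = self.attn(g[None], g[None], g[None])
        h_ctx = s @ g_ctx[0]                        # scatter group context to entities
        return self.gru(h + h_ctx, h_prev)          # GRU update over time

layer = GroupAwareTKGLayer(dim=32, num_groups=4)
h_t = layer(torch.randn(100, 32), torch.zeros(100, 32))  # 100 entities
```

    Note how a correlation between two distant entities can be captured within a single layer: both entities map into groups, and all group pairs interact directly through the attention step.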

    EE3P: Event-based Estimation of Periodic Phenomena Properties

    We introduce a novel method for measuring properties of periodic phenomena with an event camera, a device that asynchronously reports brightness changes at independently operating pixels. The approach assumes that, for a fast periodic phenomenon, any spatial window in which it occurs generates a very similar set of events at time differences corresponding to the period of the motion. To estimate the frequency, we compute correlations of spatio-temporal windows in the event space. The period is calculated from the time differences between the peaks of the correlation responses. The method is contactless, requires no markers, and does not need distinguishable landmarks. We evaluate the proposed method on three instances of periodic phenomena: (i) light flashes, (ii) vibration, and (iii) rotational speed. In all experiments, our method achieves a relative error lower than 0.04%, which is within the error margin of the ground-truth measurements. Comment: 9 pages, 55 figures, accepted and presented at CVWW24, published in Proceedings of the 27th Computer Vision Winter Workshop, 2024.
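
    A minimal sketch of the correlation idea for a single spatial window, under stated assumptions: bin the event timestamps into counts, autocorrelate the counts, and read the period off the spacing of the correlation peaks. The bin width, the peak threshold, and the name estimate_period are illustrative choices; the paper correlates full spatio-temporal windows in the event space rather than a single count signal.

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_period(event_times_us, bin_us=10.0):
    """Estimate the period (in microseconds) of a periodic event stream."""
    t = np.sort(np.asarray(event_times_us))
    edges = np.arange(t[0], t[-1] + bin_us, bin_us)
    counts, _ = np.histogram(t, bins=edges)
    counts = counts - counts.mean()                    # remove the DC offset
    corr = np.correlate(counts, counts, mode="full")[len(counts) - 1:]
    peaks, _ = find_peaks(corr, height=0.2 * corr[0])  # correlation peaks
    lags = np.diff(np.concatenate(([0], peaks)))       # peak-to-peak spacing
    return np.median(lags) * bin_us

# toy usage: a 1 kHz flashing light observed for 0.1 s (timestamps in us)
rng = np.random.default_rng(0)
events = np.concatenate([k * 1000 + rng.normal(0, 20, 50) for k in range(100)])
print(estimate_period(events))  # ~1000 us, i.e. ~1 kHz
```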

    Human Processing of Short Temporal Intervals as Revealed by an ERP Waveform Analysis

    To clarify the time course over which the human brain processes information about durations up to ∼300 ms, we reanalyzed the data previously reported by Mitsudo et al. (2009) using a multivariate analysis method. Event-related potentials were recorded from 19 scalp electrodes on 11 (nine original and two additional) participants while they judged whether two neighboring empty time intervals – called t1 and t2 and marked by three tone bursts – had equal durations. There was also a control condition in which the participants were presented with the same temporal patterns but without a judgment task. In the present reanalysis, we sought to visualize how the temporal patterns were represented in the brain over time. A correlation matrix across channels was calculated for each temporal pattern. Geometric separations between the correlation matrices were calculated and subjected to multidimensional scaling. We performed such analyses for a moving 100-ms time window after the t1 presentations. In windows centered less than 100 ms after the t2 presentation, the analyses revealed local maxima of categorical separation between temporal patterns of perceptually equal versus perceptually unequal durations, both in the judgment condition and in the control condition. Such categorization of the temporal patterns was prominent only in narrow temporal regions. The analysis indicated that the participants determined whether the two neighboring time intervals were of equal duration mostly within 100 ms after the presentation of the temporal patterns. This very fast brain activity was related to the perception of elementary temporal patterns without explicit judgments, which is consistent with the findings of Mitsudo et al. and in line with the processing time hypothesis proposed by Nakajima et al. (2004). The correlation matrix analysis turned out to be an effective tool for grasping the brain's overall responses to temporal patterns.
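
    A minimal sketch of the reanalysis pipeline, assuming the epochs are arranged as (patterns, channels, samples): compute a channel-by-channel correlation matrix per temporal pattern within a window, take Frobenius distances between the matrices as the geometric separations, and embed those separations with multidimensional scaling. The window length, the Frobenius distance, and the array shapes are assumptions; the actual analysis slides a 100-ms window after the t1 presentations.

```python
import numpy as np
from sklearn.manifold import MDS

def pattern_separations(eeg, win=50):
    """eeg: (patterns, channels, samples). Returns pairwise distances between
    the per-pattern channel correlation matrices within the first win samples."""
    mats = [np.corrcoef(epoch[:, :win]) for epoch in eeg]  # one matrix per pattern
    n = len(mats)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d[i, j] = np.linalg.norm(mats[i] - mats[j])    # Frobenius separation
    return d

eeg = np.random.randn(6, 19, 100)  # 6 temporal patterns, 19 electrodes (toy data)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(pattern_separations(eeg))
```

    In a moving-window analysis, the same computation would be repeated for each 100-ms window and the categorical separation tracked over time.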

    CNN-AIDED FACTOR GRAPHS WITH ESTIMATED MUTUAL INFORMATION FEATURES FOR SEIZURE DETECTION

    We propose convolutional neural network (CNN)-aided factor graphs, assisted by mutual information features estimated by a neural network, for seizure detection. Specifically, we use neural mutual information estimation to evaluate the correlation between different electroencephalogram (EEG) channels as features. We then use a 1D-CNN to extract additional features from the EEG signals and use both sets of features to estimate the probability of a seizure event. Finally, learned factor graphs are employed to capture the temporal correlation in the signal. Both the neural mutual information features and the 1D-CNN features are used to learn the factor nodes. We show that the proposed method achieves state-of-the-art performance using 6-fold leave-four-patients-out cross-validation.
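
    A sketch of the feature-combination step only, with every layer size assumed: a 1D-CNN extracts features from a raw multichannel EEG window, these are concatenated with precomputed pairwise mutual-information features, and a linear head yields a per-window seizure probability. The learned factor graph that captures temporal correlation across windows is omitted here.

```python
import torch
import torch.nn as nn

class SeizureNet(nn.Module):
    """Hypothetical sketch: combines 1D-CNN features with MI features."""
    def __init__(self, n_channels=19, n_mi_feats=171):  # 19*18/2 channel pairs
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                    # pool over time
        )
        self.head = nn.Linear(16 + n_mi_feats, 1)

    def forward(self, eeg, mi_feats):
        # eeg: (batch, channels, samples); mi_feats: (batch, n_mi_feats)
        f = self.cnn(eeg).squeeze(-1)                   # (batch, 16) CNN features
        return torch.sigmoid(self.head(torch.cat([f, mi_feats], dim=-1)))

net = SeizureNet()
p = net(torch.randn(4, 19, 256), torch.randn(4, 171))  # per-window probabilities
```

    In the full method, such per-window probabilities would feed the learned factor nodes, so that factor-graph inference can exploit the temporal correlation across windows.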

    Quantum image rain removal: second-order photon number fluctuation correlations in the time domain

    Falling raindrops are usually considered purely negative factors for traditional optical imaging because they generate not only rain streaks but also rain fog, resulting in a decrease in the visual quality of images. However, this work demonstrates that the image degradation caused by falling raindrops can be eliminated by the raindrops themselves. The temporal second-order correlation properties of the photon number fluctuation introduced by falling raindrops have a remarkable attribute: rain streak photons and rain fog photons produce no stable second-order photon number correlation, while this stable correlation exists for photons that do not interact with raindrops. This fundamental difference indicates that the noise caused by falling raindrops can be eliminated by measuring the second-order photon number fluctuation correlation in the time domain. Simulation and experimental results demonstrate that the rain removal effect of this method is even better than that of deep learning methods when the integration time of each measurement event is short. This highly efficient quantum rain removal method can be used independently or integrated into deep learning algorithms to provide front-end processing and high-quality inputs for deep learning. Comment: 5 pages, 7 figures.
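
    As a loose illustration of a time-domain fluctuation-correlation measurement (not the paper's actual optical scheme): for each pixel, correlate its photon-number fluctuation across frames with a reference fluctuation; pixels dominated by rain-streak or rain-fog photons should show no stable correlation and average toward zero. The frame-mean reference and the estimator below are assumptions.

```python
import numpy as np

def fluctuation_correlation(frames):
    """frames: (T, H, W) photon counts. Returns a per-pixel second-order
    fluctuation correlation image <dI(x, y) * dR> over the T frames."""
    frames = np.asarray(frames, dtype=float)
    d_i = frames - frames.mean(axis=0)    # per-pixel fluctuation dI
    ref = frames.mean(axis=(1, 2))        # reference intensity per frame
    d_r = ref - ref.mean()                # reference fluctuation dR
    return np.einsum('t,thw->hw', d_r, d_i) / len(frames)

frames = np.random.poisson(5.0, size=(200, 32, 32))  # toy photon-count stack
image = fluctuation_correlation(frames)
```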