
    Multi-behavior Recommendation with SVD Graph Neural Networks

    Graph Neural Networks (GNNs) have been extensively employed in recommender systems, offering users personalized recommendations and yielding remarkable results. Recently, GNNs incorporating contrastive learning have shown promising performance on the sparse-data problem of recommendation systems. However, existing contrastive learning methods still have limitations in addressing the cold-start problem and resisting noise, especially in multi-behavior recommendation. To mitigate these issues, this work proposes MB-SVD, a GNN-based multi-behavior recommendation model that uses Singular Value Decomposition (SVD) graphs to enhance performance. In particular, MB-SVD considers user preferences under different behaviors, improving recommendation effectiveness while better addressing the cold-start problem. The model introduces a multi-behavior contrastive learning paradigm to discern the intricate interconnections among heterogeneous forms of user behavior, and generates SVD graphs that automatically distill crucial multi-behavior self-supervised information for robust graph augmentation. Furthermore, the SVD-based framework reduces embedding dimensions and computational load. Thorough experiments demonstrate the strong performance of the proposed MB-SVD approach on multi-behavior recommendation tasks across diverse real-world datasets.
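    The core idea of using a truncated SVD of the interaction graph as a denoised, augmented view for contrastive learning can be sketched as follows. This is a minimal NumPy illustration only; the matrix sizes, the behavior names, and the `rank` value are invented for the example and are not taken from the paper.

    ```python
    import numpy as np

    def svd_augmented_graph(interactions, rank=2):
        """Low-rank reconstruction of a user-item interaction matrix.

        Keeping only the top singular values retains the dominant
        collaborative signal and suppresses noisy edges, giving an
        augmented graph view for contrastive learning.
        """
        U, s, Vt = np.linalg.svd(interactions, full_matrices=False)
        return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

    # Toy multi-behavior data: one binary interaction matrix per behavior.
    rng = np.random.default_rng(0)
    behaviors = {b: (rng.random((6, 8)) < 0.3).astype(float)
                 for b in ("view", "cart", "buy")}
    augmented = {b: svd_augmented_graph(m, rank=2)
                 for b, m in behaviors.items()}
    ```

    Each augmented matrix has rank at most 2, so it acts as a smoothed view of the original graph rather than a copy of it.
    
    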

    Determining the maximum admissible short-duration overload of a transformer

    The aim of this work is to develop a computer program that helps determine whether a short-duration overload of a transformer is admissible. The tool combines Excel and Matlab: input data are entered in an Excel sheet and transferred to a Matlab program that performs the relevant calculations; once execution finishes, the results are written back to another Excel sheet. The program is built on the thermal model of the IEC 60076-7 standard (2010 edition), and it also implements equations for calculating loss of life and bubble formation in the transformer. The program is able to evaluate both long-duration and short-duration overloads. The tool must be able to couple to the program developed by a previous author, extracting the significant temperatures, which are then combined with the temperatures obtained from each of the existing bubble-formation models to finally determine whether water vapour appears in the transformer. The results chapter presents the verification and analysis of each of the maximum-admissible-overload criteria defined in IEC 60076-7 [1]. This work therefore constitutes the final part completing the goal of the Electrical Engineering Department of Universidad Carlos III of evaluating the loading capacity of a power transformer.
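    The loss-of-life part of the IEC 60076-7 thermal model mentioned above reduces to integrating a relative ageing rate over the hot-spot temperature record. The ageing-rate formulas below are the ones published in IEC 60076-7; the function names and the one-hour sampling step are assumptions of this sketch, not part of the standard.

    ```python
    import math

    def relative_ageing_rate(hot_spot_c, thermally_upgraded=False):
        """Relative ageing rate V per IEC 60076-7.

        Non-upgraded paper:       V = 2**((theta_h - 98) / 6)
        Thermally upgraded paper: V = exp(15000/383 - 15000/(theta_h + 273))
        V = 1 at the reference hot-spot temperature (98 C / 110 C).
        """
        if thermally_upgraded:
            return math.exp(15000.0 / 383.0 - 15000.0 / (hot_spot_c + 273.0))
        return 2.0 ** ((hot_spot_c - 98.0) / 6.0)

    def loss_of_life_hours(hot_spot_series_c, step_hours=1.0):
        """Insulation life consumed over a sampled hot-spot record."""
        return sum(relative_ageing_rate(t) for t in hot_spot_series_c) * step_hours
    ```

    For non-upgraded paper the ageing rate doubles for every 6 K above 98 °C, which is why short overloads at high hot-spot temperatures dominate the life consumption.
    
    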

    Lifelong Spectral Clustering

    Over the past decades, spectral clustering (SC) has become one of the most effective clustering algorithms. However, most previous studies focus on spectral clustering with a fixed task set, and cannot incorporate a new spectral clustering task without access to previously learned tasks. In this paper, we explore spectral clustering in a lifelong machine learning framework, i.e., Lifelong Spectral Clustering (L2SC). Its goal is to efficiently learn a model for a new spectral clustering task by selectively transferring previously accumulated experience from a knowledge library. Specifically, the knowledge library of L2SC contains two components: 1) an orthogonal basis library, capturing latent cluster centers among the clusters in each pair of tasks; and 2) a feature embedding library, embedding the feature manifold information shared among multiple related tasks. When a new spectral clustering task arrives, L2SC first transfers knowledge from both the basis library and the feature library to obtain an encoding matrix, and then redefines the library bases over time to maximize performance across all clustering tasks. Meanwhile, a general online update formulation is derived to alternately update the basis library and the feature library. Finally, empirical experiments on several real-world benchmark datasets demonstrate that our L2SC model effectively improves clustering performance compared with other state-of-the-art spectral clustering algorithms.
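    The single-task building block that L2SC extends, spectral clustering via the graph Laplacian, can be sketched as follows. This is a minimal two-cluster illustration using the Fiedler vector; it is not the L2SC algorithm itself, which additionally maintains the basis and feature libraries across tasks.

    ```python
    import numpy as np

    def spectral_bipartition(adjacency):
        """Two-way spectral clustering via the Fiedler vector.

        Builds the unnormalized Laplacian L = D - A and splits the
        nodes by the sign of the eigenvector belonging to the
        second-smallest eigenvalue.
        """
        degree = np.diag(adjacency.sum(axis=1))
        laplacian = degree - adjacency
        _, eigvecs = np.linalg.eigh(laplacian)  # eigenvalues ascending
        fiedler = eigvecs[:, 1]
        return (fiedler > 0).astype(int)

    # Two 3-node cliques joined by a single bridge edge (2, 3).
    A = np.zeros((6, 6))
    for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
        A[i, j] = A[j, i] = 1.0
    labels = spectral_bipartition(A)
    ```

    The sign pattern of the Fiedler vector cuts the graph across its weakest connection, so the two cliques land in different clusters.
    
    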

    Metadata as a Methodological Commons: From Aboutness Description to Cognitive Modeling

    Metadata is data about data, generated mainly for resource organization and description, facilitating the finding, identifying, selecting, and obtaining of information. With the advancement of technology, the acquisition of metadata has gradually become a critical step in data modeling and system operation, leading to the formation of a methodological commons. A series of general operations has been developed to achieve structured description, semantic encoding, and machine-understandable information, including entity definition, relation description, object analysis, attribute extraction, ontology modeling, data cleaning, disambiguation, alignment, mapping, relating, enriching, importing, exporting, service implementation, registry and discovery, monitoring, etc. These operations are not only necessary elements of semantic technologies (including linked data) and knowledge graph technology, but have also developed into the common operations and primary strategy for building independent, knowledge-based information systems. In this paper, this series of metadata-related methods is collectively referred to as the 'metadata methodological commons', whose many best practices are reflected in the various standard specifications of the Semantic Web. In the future construction of a multi-modal metaverse based on Web 3.0, it will play an important role, for example, in building digital twins through knowledge models, or in supporting the modeling of the entire virtual world. Manual description and coding obviously cannot adapt to UGC (User Generated Content) and AIGC (AI Generated Content)-based content production in the metaverse era. Automatic semantic formalization must be considered a sure way to adapt the metadata methodological commons to the needs of the AI era.
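    The description and discovery operations listed above can be made concrete with a toy triple-based metadata store. This is a pure-Python illustration; the prefixed identifiers merely imitate the style of Dublin Core and FOAF vocabularies and do not implement any standard API.

    ```python
    # A minimal in-memory triple store: each metadata statement is a
    # (subject, predicate, object) triple, the basic unit of resource
    # description in linked-data practice.
    triples = set()

    def describe(subject, predicate, obj):
        """Entity/relation description: record one metadata statement."""
        triples.add((subject, predicate, obj))

    def query(subject=None, predicate=None, obj=None):
        """Finding/discovery: pattern matching with None as a wildcard."""
        return [t for t in triples
                if (subject is None or t[0] == subject)
                and (predicate is None or t[1] == predicate)
                and (obj is None or t[2] == obj)]

    # Illustrative statements about a document and its (made-up) author.
    describe("doc:1", "dc:title", "Lifelong Spectral Clustering")
    describe("doc:1", "dc:creator", "author:1")
    describe("author:1", "foaf:name", "Example Author")
    ```

    Alignment, mapping, and enrichment then become operations over such triple sets, e.g. rewriting predicates from one vocabulary to another.
    
    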