A knowledge graph-supported information fusion approach for multi-faceted conceptual modelling
It has become progressively more evident that a single data source cannot comprehensively capture the variability of a multi-faceted concept, such as product design, driving behaviour or human trust, which has diverse semantic orientations. Multi-faceted conceptual modelling is therefore often conducted on multi-sourced data covering the indispensable aspects, and information fusion is frequently applied to cope with the high dimensionality and data heterogeneity; the intra-facet relationships must also be considered. In this context, a knowledge graph (KG), which can aggregate the relationships of multiple aspects through semantic associations, is exploited to support multi-faceted conceptual modelling on heterogeneous, semantically rich data. First, rules of fault mechanism are extracted from an existing domain knowledge repository, and node attributes are extracted from multi-sourced data. Through abstraction and tokenisation of the knowledge repository and the concept-centric data, the fault-mechanism rules are symbolised and integrated with the node attributes, which serve as the entities of a concept-centric knowledge graph (CKG). Subsequently, the process data are transformed into a stack of temporal graphs under the CKG backbone. Lastly, a graph convolutional network (GCN) model extracts temporal and attribute-correlation features from the graphs, and a temporal convolutional network (TCN) is built for conceptual modelling using these features. The effectiveness of the proposed approach, and the close synergy between the KG-supported approach and multi-faceted conceptual modelling, are demonstrated and substantiated in a case study using real-world data.
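The graph-to-feature pipeline this abstract describes (graph snapshots, then GCN features, then temporal convolution) can be sketched in miniature. Everything below is illustrative: the toy path graph, the features, and the unit weights are invented; the GCN layer follows the standard symmetric-normalisation formulation rather than the paper's exact model; and a single causal 1-D convolution stands in for a full TCN.

```python
import math

def matmul(a, b):
    """Plain list-of-lists matrix multiply."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: add self-loops, symmetrically normalise
    the adjacency, propagate node features, apply ReLU."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a_hat]
    a_norm = [[a_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
              for i in range(n)]
    h = matmul(a_norm, feats)
    return [[max(0.0, v) for v in row] for row in matmul(h, weight)]

def causal_conv(seq, kernel):
    """Causal 1-D convolution, the building block of a TCN: each output
    step depends only on the current and past inputs."""
    k = len(kernel)
    padded = [0.0] * (k - 1) + list(seq)
    return [sum(padded[t + i] * kernel[i] for i in range(k))
            for t in range(len(seq))]

# Toy run: a 3-node path graph observed over 4 time steps.
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
weight = [[1.0], [1.0]]  # 2 input features -> 1 output feature
snapshots = [[[t + i, t * 0.5] for i in range(3)] for t in range(4)]
summary = [sum(row[0] for row in gcn_layer(adj, x, weight)) / 3 for x in snapshots]
smoothed = causal_conv(summary, [0.5, 0.5])  # temporal feature over the stack
```

The per-snapshot GCN output is pooled into one scalar per time step purely to keep the temporal stage readable; the paper's model extracts far richer features.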
Text classification supervised algorithms with term frequency inverse document frequency and global vectors for word representation: a comparative study
Over the course of the previous two decades, the quantity of digitally stored text documents has risen sharply. Text categorization is the automated organization of those documents into a set of predefined categories so that they may be preserved and retrieved more efficiently. Identifying appropriate structures, architectures, and methods for text classification remains a challenge for researchers, given the significant impact of this task on content management, contextual search, opinion mining, product review analysis, spam filtering, and text sentiment mining. This study analyzes the generic categorization strategy and examines supervised machine learning approaches and their ability to capture complex models and nonlinear data interactions. Among these methods are k-nearest neighbors (KNN), support vector machines (SVM), and ensemble learning algorithms, assessed with various evaluation techniques. Thereafter, the constraints of each technique and its applicability to real-life situations are evaluated.
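Of the representations the study compares, term frequency-inverse document frequency is simple enough to sketch from scratch. The toy spam/ham corpus and the nearest-neighbour comparison below are invented for illustration, and the query is left as raw term counts rather than being TF-IDF-weighted, a deliberate simplification.

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weights for a list of tokenised documents.
    TF is the raw count in the document; IDF = log(N / df), where N is the
    number of documents and df the number of documents containing the term."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    return [{term: count * math.log(n / df[term])
             for term, count in Counter(doc).items()}
            for doc in docs]

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "cheap pills buy now".split(),         # spam
    "buy cheap watches now".split(),       # spam
    "meeting agenda for tuesday".split(),  # ham
]
vecs = tf_idf(docs)
# 1-NN style: compare a new message against every document in the corpus.
query = {"buy": 1, "pills": 1}
sims = [cosine(query, v) for v in vecs]
```

The nearest document (highest cosine) supplies the predicted label, which is the essence of the KNN classifier the study evaluates; terms that appear in every document get IDF = log(1) = 0 and drop out.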
Systematic mapping of software engineering management with an agile approach
The agile approach has generated a wide variety of strategies to successfully manage
various software projects worldwide. In addition, it is safe to say that software projects have benefited from the well-known agile methods. In this sense, this article seeks to demonstrate how the agile approach is applied in the areas of software engineering management. To do this, this study performs a systematic mapping to identify the main trends in software engineering management with an agile approach. A total of 1137 articles were identified, of which 165 are relevant for the purposes of this study. These indicate that early value delivery, a key principle of agility, continues to be the main trend in the use of agile methods. However, there are also strong trends focused on key points of software engineering management, such as optimizing quality management, requirements specification, and risk management, and improving team communication and coordination. These results will enable new lines of research for each key point of software engineering management affected by the agile approach.
E-learning in the Cloud Computing Environment: Features, Architecture, Challenges and Solutions
Constantly and consistently improving the quality and quantity of the educational system is essential. E-learning has emerged from the rapid cycle of change and the expansion of new technologies. Advances in information technology have increased network bandwidth and data access speed while reducing data storage costs. In recent years, the implementation of cloud computing in educational settings has garnered the interest of major companies, leading to substantial investments in this area. Cloud computing, the provision of hosting services over the Internet, is predicted to be the next generation of information technology architecture and offers great potential to enhance productivity and reduce costs. Cloud service providers offer their processing and memory resources to users, who pay for what they use and can access these resources for their calculations and processing anytime and anywhere, converting processing power into a tool with constant access capabilities. Using cloud computing in a system that supports remote education has its own set of characteristics and requires a unique strategy. Thanks to cloud computing, students can access a wide variety of instructional engineering materials at any time and from any location, and they can share their materials with other community members. The use of cloud computing in e-learning thus offers several advantages, such as unlimited computing resources, high scalability, and reduced costs.
An improvement in the quality of teaching and learning is achieved through flexible cloud computing, which offers a variety of resources for educators and students. In light of this, the current research presents cloud computing technology as a suitable and superior option for e-learning systems.
Statistical analysis of grouped text documents
The topic of this thesis is statistical models for the analysis of textual data, emphasizing contexts in which text samples are grouped.
When dealing with text data, the first issue is to process it, making it computationally and methodologically compatible with the mathematical and statistical methods produced and continually developed by the scientific community. The thesis therefore first reviews existing methods for analytically representing and processing textual datasets, including Vector Space Models, distributed representations of words and documents, and contextualized embeddings. This review standardizes a notation that, even within the same representation approach, appears highly heterogeneous in the literature.
Two domains of application are then explored: social media and cultural tourism. Regarding the former, a study is proposed on self-presentation among diverse groups of individuals on the StockTwits platform, where finance and stock markets are the dominant topics. The proposed methodology integrated various types of data, both textual and categorical. This study revealed insights into how people present themselves online and found recurring patterns of behaviour within groups of users.
Regarding the latter, the thesis delves into a study conducted as part of the "Data Science for Brescia - Arts and Cultural Places" project, in which a language model was trained to classify online reviews written in Italian into four distinct semantic areas related to cultural attractions in the city of Brescia. The proposed model allows attractions to be identified in text documents even when they are not explicitly mentioned in the document metadata, opening the possibility of expanding the database of these cultural attractions with new sources, such as social media platforms, forums, and other online spaces.
Lastly, the thesis presents a methodological study examining the group-specificity of words, analyzing various group-specificity estimators proposed in the literature. The study considered grouped text documents with both an outcome variable and a group variable. Its contribution is the proposal to model the corpus of documents as a multivariate distribution, enabling the simulation of corpora of text documents with predefined characteristics. The simulation provided valuable insights into the relationship between groups of documents and words. Furthermore, all of its results can be freely explored through a web application, whose components are also described in this manuscript.
In conclusion, this thesis has been conceived as a collection of papers. It aims to contribute to the field with both applications and methodological proposals, and each study presented here suggests paths for future research to address the challenges in the analysis of grouped textual data.
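The final study's idea (model the corpus as a multivariate distribution, simulate grouped corpora with known structure, then score how specific a word is to a group) can be sketched with multinomial sampling and one common smoothed log-odds estimator. The vocabulary, group probabilities, and estimator choice below are illustrative and are not the thesis's exact setup.

```python
import math
import random

def simulate_corpus(group_probs, docs_per_group, doc_len, rng):
    """Draw grouped documents from per-group multinomial word distributions.
    Each group has its own probability vector over a shared vocabulary."""
    corpus = []
    for g, probs in enumerate(group_probs):
        vocab = list(range(len(probs)))
        for _ in range(docs_per_group):
            corpus.append((g, rng.choices(vocab, weights=probs, k=doc_len)))
    return corpus

def log_odds_specificity(corpus, group, term, alpha=0.5):
    """Smoothed log-odds-ratio of `term` in `group` versus the rest of the
    corpus: positive means over-represented, negative under-represented."""
    in_g = out_g = in_tot = out_tot = 0
    for g, doc in corpus:
        c = doc.count(term)
        if g == group:
            in_g, in_tot = in_g + c, in_tot + len(doc)
        else:
            out_g, out_tot = out_g + c, out_tot + len(doc)
    p_in = (in_g + alpha) / (in_tot + 2 * alpha)
    p_out = (out_g + alpha) / (out_tot + 2 * alpha)
    return math.log(p_in / (1 - p_in)) - math.log(p_out / (1 - p_out))

rng = random.Random(42)
# 3-word vocabulary; word 0 is much more likely in group 0 by construction.
corpus = simulate_corpus([[0.8, 0.1, 0.1], [0.1, 0.45, 0.45]],
                         docs_per_group=20, doc_len=50, rng=rng)
score = log_odds_specificity(corpus, group=0, term=0)
```

Because the generating distribution is known, the simulation can check whether an estimator recovers the planted group structure, which is the methodological point of the study.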
Mobile Recommendation System to Provide Emotional Support and Promote Active Aging for Older Adults in the Republic of Panama
Aging brings with it physical and cognitive changes that can lead to health problems such as chronic disease and cognitive impairment. Technology is a fundamental ally in improving the quality of life of older adults by enabling accurate and early diagnosis. In this context, we present a mobile application designed to provide emotional support and guidance, thus contributing to the well-being of this demographic group. Our study was based on quantitative research methods, using an experimental approach with a sample of users aged between 60 and 80 years. The results showed that 93.3% of users found the app to be a useful resource for adopting a healthier lifestyle. The app provides specific recommendations, such as breathing exercises to reduce anxiety, recreational activities, exercises tailored to physical ability, and meditation practices. These features have been shown to improve the well-being of older adults by providing a personalized approach to the challenges of aging.
Computational Argumentation-based Chatbots: a Survey
The article archived on this institutional repository is a preprint; it has not been certified by peer review.
Chatbots are conversational software applications designed to interact dialectically with users for a plethora of different purposes. Surprisingly, these colloquial agents have only recently been coupled with computational models of argument (i.e. computational argumentation), whose aim is to formalise, in a machine-readable format, the ordinary exchange of information that characterises human communication. Chatbots may employ argumentation to different degrees and in a variety of manners. The present survey sifts through the literature to review papers concerning this kind of argumentation-based bot, drawing conclusions about the benefits and drawbacks that this approach entails in comparison with standard chatbots, while also envisaging possible future development and integration with Transformer-based architectures and state-of-the-art Large Language Models.
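As a concrete taste of the formalism such bots build on, the grounded extension of a Dung-style abstract argumentation framework (the set of arguments a maximally sceptical agent can safely assert) is computable as the least fixpoint of the characteristic function. The toy framework below is invented and is not drawn from any surveyed system.

```python
def grounded_extension(args, attacks):
    """Grounded extension of an abstract argumentation framework (args,
    attacks): starting from the empty set, repeatedly accept every argument
    all of whose attackers are themselves attacked by the current set."""
    attacks = set(attacks)
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}
    extension = set()
    while True:
        defended = {a for a in args
                    if all(any((s, b) in attacks for s in extension)
                           for b in attackers[a])}
        if defended == extension:   # least fixpoint reached
            return extension
        extension = defended

# a attacks b, b attacks c: 'a' is unattacked, and it defends 'c' from 'b'.
ext = grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})  # {'a', 'c'}
```

An argumentation-based chatbot could use such a computation to decide which of its claims survive the user's counter-arguments, though the surveyed systems vary widely in the semantics they adopt.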
Towards Integration of Artificial Intelligence into Medical Devices as a Real-Time Recommender System for Personalised Healthcare: State-of-the-Art and Future Prospects
In the era of big data, artificial intelligence (AI) algorithms have the potential to revolutionize healthcare by improving patient outcomes and reducing healthcare costs. AI algorithms have frequently been used in healthcare for predictive modelling, image analysis and drug discovery. Moreover, as recommender systems, these algorithms have shown promising impact on personalized healthcare provision. A recommender system learns the behaviour of the user and predicts (recommends) their current preferences based on their previous preferences. Implementing AI as a recommender system improves this prediction accuracy and mitigates the cold-start and data-sparsity problems. However, most of the methods and algorithms are tested in simulated settings, which cannot recapitulate the influencing factors of the real world. This review article systematically reviews prevailing methodologies in recommender systems and discusses AI algorithms as recommender systems specifically in the field of healthcare. It also discusses the most cutting-edge academic and practical contributions in the literature, identifies performance evaluation metrics, challenges in the implementation of AI as a recommender system, and the acceptance of AI-based recommender systems by clinicians. The findings of this article direct researchers and professionals to comprehend currently developed recommender systems and the future of medical devices integrated with real-time recommender systems for personalized healthcare.
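A minimal sketch of the classical idea the article starts from (predict current preferences from previous ones) is user-based collaborative filtering: score unseen items by the similarity-weighted ratings of other users. The wellness-activity ratings below are invented, and the unnormalised weighted-sum scoring is a simplification of what production systems do.

```python
import math

def cosine(u, v):
    """Cosine similarity between two users' sparse rating dicts."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(r * r for r in u.values()))
    nv = math.sqrt(sum(r * r for r in v.values()))
    return dot / (nu * nv)

def recommend(target, ratings, top_n=1):
    """User-based collaborative filtering: rank the items the target user
    has not rated by the similarity-weighted ratings of everyone else."""
    scores = {}
    for their in ratings.values():
        sim = cosine(target, their)
        for item, r in their.items():
            if item not in target:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical activity ratings (1-5) from two existing users.
ratings = {
    "u1": {"yoga": 5, "walking": 4, "swimming": 5},
    "u2": {"yoga": 1, "cycling": 5},
}
target = {"yoga": 5, "walking": 5}
best = recommend(target, ratings)  # ['swimming']: u1 is the most similar user
```

A new user with an empty rating dict gets zero similarity to everyone, which is precisely the cold-start problem the article says AI-based approaches help mitigate.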
AI Lifecycle Zero-Touch Orchestration within the Edge-to-Cloud Continuum for Industry 5.0
Industry 5.0 is a new phase of industrialization that places the worker at the center of the production process and uses new technologies to increase prosperity beyond jobs and growth; advances in human-centered artificial intelligence (HCAI) systems are central to it. HCAI presents new objectives that were unreachable by either humans or machines alone, but this also comes with a new set of challenges. Our proposed method addresses them through the knowlEdge architecture, which enables human operators to implement AI solutions using a zero-touch framework. It relies on containerized AI model training and execution, supported by a robust data pipeline and rounded off with human feedback and evaluation interfaces. The result is a platform built from a number of components, spanning all major areas of the AI lifecycle. We outline both the architectural concepts and implementation guidelines and explain how they advance HCAI systems and Industry 5.0. In this article, we also address the problems we encountered while implementing these ideas within the edge-to-cloud continuum. Further improvements to our approach may enhance the use of AI in Industry 5.0 and strengthen trust in AI systems.
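The zero-touch lifecycle idea (stages chained without manual intervention, closed by a human-feedback gate) can be caricatured in a few lines. The stage names, artifact fields, and feedback hook below are hypothetical and do not reflect the knowlEdge architecture's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Pipeline:
    """Minimal zero-touch lifecycle sketch: registered stages run in order,
    passing an artifact dict along, with no manual steps in between."""
    stages: list = field(default_factory=list)

    def stage(self, fn: Callable) -> Callable:
        """Decorator registering a lifecycle stage."""
        self.stages.append(fn)
        return fn

    def run(self, artifact: dict) -> dict:
        for fn in self.stages:
            artifact = fn(artifact)
        return artifact

pipeline = Pipeline()

@pipeline.stage
def train(cfg):
    # Stand-in for containerized model training on the data pipeline.
    return {**cfg, "model": "trained", "accuracy": 0.91}

@pipeline.stage
def evaluate(art):
    art["approved"] = art["accuracy"] >= 0.9  # automated quality gate
    return art

@pipeline.stage
def human_feedback(art):
    # Placeholder for the operator evaluation interface; here it simply
    # ratifies the automated decision.
    art["deployed"] = art["approved"]
    return art

result = pipeline.run({"dataset": "edge-sensor-stream"})
```

The point of the sketch is the control flow, not the stages themselves: the operator interacts only through the feedback hook, while training, evaluation, and deployment chain automatically.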