
    JKarma: A Highly-Modular Framework for Pattern-Based Change Detection on Evolving Data

    Pattern-based change detection (PBCD) describes a class of change detection algorithms for evolving data. Unlike conventional solutions, PBCD looks for changes exhibited by the patterns over time and therefore works on an abstract representation of the data, avoiding a search for changes on the raw data. Moreover, PBCD provides arguments for the validity of its results, because the patterns mirror the changes that have occurred and thereby serve as a form of evidence. However, existing solutions differ in data representation, mining algorithm, and change identification strategy, which can be regarded as the main modules of a general architecture, so that any PBCD task can be designed by plugging custom implementations into those modules. This is what we propose in this paper through jKarma, a highly modular framework for designing and performing PBCD.
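    The abstract does not show jKarma's API, so the following is only a minimal Python sketch of the kind of plug-in architecture it describes: a pluggable pattern miner and a pluggable change-identification strategy behind a generic PBCD loop. All class and function names here are invented for illustration.

```python
# Illustrative PBCD skeleton with pluggable modules; names are hypothetical,
# not jKarma's actual API.
from abc import ABC, abstractmethod
from itertools import combinations


class PatternMiner(ABC):
    @abstractmethod
    def mine(self, block):
        """Return the set of patterns describing one block of the stream."""


class ChangeDetector(ABC):
    @abstractmethod
    def score(self, old_patterns, new_patterns):
        """Return a dissimilarity score between two pattern sets."""


class FrequentItemsetMiner(PatternMiner):
    def __init__(self, min_support=0.3):
        self.min_support = min_support

    def mine(self, block):
        # Naive frequent-itemset mining over a list of transactions (sets).
        counts = {}
        for transaction in block:
            for size in (1, 2):
                for combo in combinations(sorted(transaction), size):
                    counts[combo] = counts.get(combo, 0) + 1
        threshold = self.min_support * len(block)
        return {p for p, c in counts.items() if c >= threshold}


class JaccardChangeDetector(ChangeDetector):
    def score(self, old_patterns, new_patterns):
        union = old_patterns | new_patterns
        if not union:
            return 0.0
        return 1.0 - len(old_patterns & new_patterns) / len(union)


def pbcd(blocks, miner, detector, threshold=0.5):
    """Flag the indices of blocks whose patterns drift from the previous block."""
    changes, previous = [], None
    for i, block in enumerate(blocks):
        patterns = miner.mine(block)
        if previous is not None and detector.score(previous, patterns) > threshold:
            changes.append(i)
        previous = patterns
    return changes


if __name__ == "__main__":
    stream = [
        [{"a", "b"}, {"a", "b", "c"}, {"a"}],
        [{"a", "b"}, {"a", "c"}, {"b", "c"}],
        [{"x", "y"}, {"x", "z"}, {"y", "z"}],  # drifted block
    ]
    print(pbcd(stream, FrequentItemsetMiner(), JaccardChangeDetector()))  # [2]
```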

    Reference System Element Identification Atlas – methods and tools to identify reference system elements in product engineering

    Companies target innovations and successful new products. One major challenge is to increase efficiency and decrease the risk involved in developing such products. We aim to reach these goals by improving the reusability of existing knowledge elements extracted, for example, from already existing (sub-)systems or their documentation. These elements are called reference system elements and are meant to be the starting point for product development projects. Based on a systematic literature review, complemented by an expert workshop and an analysis of established methods and tools in product engineering, we developed the Reference System Elements Identification Atlas to support the identification of suitable reference system elements. Within the Atlas, we collected 30 methods and tools for identifying reference system elements and allocated them to the knowledge spaces they search. All 30 methods and tools were grouped into five clusters: creativity methods, data analysis methods, market/competition analysis methods, similarity methods, and trend analysis methods. We observed that, in the literature, methods and tools are rarely related explicitly to the identification of reference system elements. We believe the Reference System Elements Identification Atlas provides valuable support for collecting suitable reference system elements as the starting point in product engineering.

    Fine-grained Spatio-Temporal Distribution Prediction of Mobile Content Delivery in 5G Ultra-Dense Networks

    5G networks have greatly accelerated the growth of mobile users and novel applications, and with skyrocketing user requests for large amounts of popular content, the resulting content delivery services (CDSs) place a heavy load on mobile service providers. As a key mission in intelligent network management, understanding and predicting the distribution of CDSs benefits many modern network services, such as resource provisioning and proactive content caching for content delivery networks. However, the revolution in ubiquitous network architectures led by ultra-dense networks (UDNs) makes this task extremely challenging. Specifically, conventional methods suffer from insufficient spatial precision, a lack of generalizability, and the complex multi-feature dependencies of user requests, making them unreliable for CDS prediction under 5G UDNs. In this paper, we adopt a series of encoding and sampling methods to model the CDSs of both known and unknown areas at a tailored fine-grained level. Moreover, we design a spatio-temporal-social multi-feature extraction framework for CDS hotspot prediction, in which a novel edge-enhanced graph convolution block encodes dynamic CDS networks based on social relationships and spatial features. We also introduce Long Short-Term Memory (LSTM) to further capture temporal dependencies. Extensive performance evaluations with real-world measurement data collected from two mobile content applications demonstrate the effectiveness of the proposed solution, which improves the prediction area under the curve (AUC) by 40.5% compared with state-of-the-art proposals at a spatial granularity of 76 m, even when up to 80% of the areas are unknown.
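    As a rough illustration of the modelling pattern outlined above (graph convolution over per-area features followed by an LSTM over time), here is a minimal PyTorch sketch. The dimensions, the plain normalized-adjacency convolution, and the sigmoid hotspot head are assumptions for this example, not the paper's edge-enhanced architecture.

```python
# Minimal graph-convolution + LSTM hotspot predictor; wiring is illustrative only.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """One propagation step: mix neighbour features via a row-normalized adjacency."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (areas, in_dim), adj: (areas, areas)
        return torch.relu(self.linear(adj @ x))


class HotspotPredictor(nn.Module):
    def __init__(self, feat_dim, hidden_dim=32):
        super().__init__()
        self.gcn = GraphConv(feat_dim, hidden_dim)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x_seq, adj):
        # x_seq: (time, areas, feat_dim) -> spatial encoding per step, then LSTM per area
        spatial = torch.stack([self.gcn(x, adj) for x in x_seq])  # (time, areas, hidden)
        per_area = spatial.permute(1, 0, 2)                       # (areas, time, hidden)
        out, _ = self.lstm(per_area)
        return torch.sigmoid(self.head(out[:, -1]))               # hotspot prob. per area


if __name__ == "__main__":
    areas, time_steps, feat_dim = 16, 8, 4
    adj = torch.softmax(torch.rand(areas, areas), dim=1)          # toy normalized adjacency
    x_seq = torch.rand(time_steps, areas, feat_dim)
    print(HotspotPredictor(feat_dim)(x_seq, adj).shape)           # torch.Size([16, 1])
```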

    A Knowledge Graph Framework for Dementia Research Data

    Dementia research encompasses diverse data modalities, including advanced imaging, deep phenotyping, and multi-omics analysis. However, integrating these disparate data sources has historically posed a significant challenge, obstructing the unification and comprehensive analysis of collected information. In recent years, knowledge graphs have emerged as a powerful tool to address such integration issues by consolidating heterogeneous data sources into a structured, interconnected network of knowledge. In this context, we introduce DemKG, an open-source framework designed to facilitate the construction of a knowledge graph that integrates dementia research data. It comprises three core components: a KG-builder that integrates diverse domain ontologies and data annotations, an extensions ontology providing terms tailored for dementia research, and a versatile transformation module for incorporating study data. In contrast with other current solutions, our framework provides a stable foundation by leveraging established ontologies and community standards, and it simplifies study data integration while delivering solid ontology design patterns, broadening its usability. Furthermore, the modular approach of its components enhances flexibility and scalability. We showcase how DemKG might aid and improve multi-modal data investigations through a series of proof-of-concept scenarios focused on relevant Alzheimer’s disease biomarkers.
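    DemKG's actual ontology terms are not given in the abstract; the following rdflib sketch only illustrates the general idea of expressing study data as knowledge-graph triples and querying them, using an invented example.org namespace and invented property names.

```python
# Toy knowledge-graph construction and query with rdflib; namespace and
# predicates are placeholders, not DemKG's extensions ontology.
from rdflib import Graph, Literal, Namespace, RDF, XSD

EX = Namespace("https://example.org/demkg/")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

participant = EX["participant/001"]
observation = EX["observation/001-abeta42"]

g.add((participant, RDF.type, EX.StudyParticipant))
g.add((observation, RDF.type, EX.BiomarkerObservation))
g.add((observation, EX.aboutParticipant, participant))
g.add((observation, EX.biomarker, Literal("CSF amyloid-beta 42")))
g.add((observation, EX.value, Literal(620.0, datatype=XSD.double)))

# Retrieve all biomarker observations recorded for this participant.
query = """
PREFIX ex: <https://example.org/demkg/>
SELECT ?obs ?marker ?value WHERE {
  ?obs ex:aboutParticipant <https://example.org/demkg/participant/001> ;
       ex:biomarker ?marker ;
       ex:value ?value .
}
"""
for row in g.query(query):
    print(row.obs, row.marker, row.value)
```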

    Does Form follow Function? Connecting Function Modelling and Geometry Modelling for Design Space Exploration

    The aerospace industry, representative of industries developing complex products, faces challenges from changes in user behaviour, legislation, and environmental policy. Meeting these challenges will require the development of radically new products. Radically new technologies and solutions need to be explored, investigated, and integrated into existing aerospace component architectures. The currently available design space exploration (DSE) methods, mainly based around computer-aided design (CAD) modelling, do not provide sufficient support for this exploration. These methods often lack a representation of the product’s architecture in relation to its design rationale (DR); they do not illustrate how form follows function. Hence, relations between different functions and solutions, as well as how novel ideas relate to the legacy design, are not captured. In particular, the connection between a product’s function and the embodiment of its solution is not captured in the applied product modelling approaches and therefore cannot be used in the product development process. To alleviate this situation, this thesis presents a combined function and geometry modelling approach with automated generation of CAD models for variant concepts. The approach builds on enhanced function-means (EF-M) modelling to represent the design space and the legacy design’s position in it. EF-M is also used to capture novel design solutions and reference them to the legacy design’s architecture. A design automation (DA) approach based on modularisation of the CAD model, which in turn is based on the functional decomposition of the product concepts, is used to capture geometric product information. A combined function-geometry object model captures the relations between functions, solutions, and geometry, allowing CAD models of concepts based on alternative solutions to be generated. The function- and geometry-exploration (FGE) approach has been developed and tested in collaboration with an aerospace manufacturing company, and a proof-of-concept tool implementing the approach has been realised. The approach has been validated for decomposition, innovation, and embodiment of new concepts in multiple studies involving three different aerospace suppliers. Application of FGE provides knowledge capture and representation, connecting the teleological and geometric aspects of the product. Furthermore, it supports the exploration of increasingly novel solutions, enabling coverage of a wider area of the design space. The connection between the modelling domains addresses a research gap concerning the “integration of function architectures with CAD models”. While the FGE approach has been tested in laboratory environments as well as in applied product development projects, further development is needed to refine CAD integration and the user experience, and to integrate additional modelling domains.
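    As a much-simplified illustration of a combined function-geometry object model in the spirit of EF-M modelling, the sketch below links functional requirements, alternative design solutions, and the CAD modules that embody them, and then assembles the module list for one chosen concept. The classes and the concept-generation helper are invented for this example, not the thesis's implementation.

```python
# Toy function-geometry object model: functions point to alternative solutions,
# each solution points to a parametric CAD module and to its sub-functions.
from dataclasses import dataclass, field


@dataclass
class DesignSolution:
    name: str
    cad_module: str                                       # identifier of the CAD module
    sub_functions: list = field(default_factory=list)     # functions this solution requires


@dataclass
class FunctionalRequirement:
    name: str
    alternatives: list = field(default_factory=list)      # alternative DesignSolutions


def generate_concept(root: FunctionalRequirement, choices: dict) -> list:
    """Walk the function-means tree and collect the CAD modules of the chosen solutions."""
    solution = choices[root.name]
    modules = [solution.cad_module]
    for sub in solution.sub_functions:
        modules += generate_concept(sub, choices)
    return modules


# Toy design space: one function with a legacy and a novel solution.
transfer_load = FunctionalRequirement("transfer load")
legacy = DesignSolution("welded bracket", cad_module="bracket_welded.prt")
novel = DesignSolution("integrated flange", cad_module="flange_integrated.prt")
transfer_load.alternatives = [legacy, novel]

print(generate_concept(transfer_load, {"transfer load": novel}))  # ['flange_integrated.prt']
```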

    Split Federated Learning for 6G Enabled-Networks: Requirements, Challenges and Future Directions

    Sixth-generation (6G) networks are expected to intelligently support a wide range of smart services and innovative applications. Such a context urges heavy usage of Machine Learning (ML) techniques, particularly Deep Learning (DL), to foster innovation and ease the deployment of intelligent network functions and operations that can fulfill the various requirements of the envisioned 6G services. Specifically, collaborative ML/DL deploys a set of distributed agents that collaboratively train learning models without sharing their data, thus improving data privacy and reducing the time and communication overhead. This work provides a comprehensive study of how collaborative learning can be effectively deployed over 6G wireless networks. In particular, our study focuses on Split Federated Learning (SFL), a recently emerged technique that promises better performance than existing collaborative learning approaches. We first provide an overview of three emerging collaborative learning paradigms, namely federated learning, split learning, and split federated learning, as well as of 6G networks, their main vision, and the timeline of key developments. We then highlight the need for split federated learning in the upcoming 6G networks across every aspect, including 6G technologies (e.g., intelligent physical layer, intelligent edge computing, zero-touch network management, intelligent resource management) and 6G use cases (e.g., smart grid 2.0, Industry 5.0, connected and autonomous systems). Furthermore, we review existing datasets along with frameworks that can help in implementing SFL for 6G networks. We finally identify key technical challenges, open issues, and future research directions related to SFL-enabled 6G networks.
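    To make the SFL idea concrete, here is a toy PyTorch sketch of one round: each client runs the front part of the model locally and sends only the intermediate activations to the server, the server completes the forward and backward pass, and the client-side weights are then federated-averaged. The model split, learning rate, and single-layer averaging are assumptions chosen for brevity, not a prescription from the survey.

```python
# One toy split federated learning round: split forward/backward + FedAvg of client parts.
import copy
import torch
import torch.nn as nn

client_front = [nn.Linear(8, 16) for _ in range(3)]   # one front model per client
server_back = nn.Sequential(nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.MSELoss()

for front in client_front:                             # one local step per client
    x, y = torch.rand(4, 8), torch.rand(4, 1)          # private data stays on the client
    smashed = front(x)                                 # client forward -> activations only
    pred = server_back(smashed)                        # server completes the forward pass
    loss = loss_fn(pred, y)
    loss.backward()                                    # gradients flow back to the client part
    with torch.no_grad():                              # plain SGD update on both sides
        for p in list(front.parameters()) + list(server_back.parameters()):
            p -= 0.01 * p.grad
            p.grad = None

# Federated averaging of the client-side (front) weights.
avg_state = copy.deepcopy(client_front[0].state_dict())
for key in avg_state:
    avg_state[key] = torch.stack([c.state_dict()[key] for c in client_front]).mean(0)
for front in client_front:
    front.load_state_dict(avg_state)
```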

    PADL: A Modeling and Deployment Language for Advanced Analytical Services

    In the smart city context, Big Data analytics plays an important role in processing the data collected through IoT devices. The analysis of the information gathered by sensors favors the generation of specific services and systems that not only improve the quality of life of the citizens, but also optimize the city resources. However, the difficulties of implementing this entire process in real scenarios are manifold, including the huge number and heterogeneity of the devices, their geographical distribution, and the complexity of the necessary IT infrastructures. For this reason, the main contribution of this paper is the PADL description language, which has been specifically tailored to assist in the definition and operationalization phases of the machine learning life cycle. It provides annotations that serve as an abstraction layer from the underlying infrastructure and technologies, hence facilitating the work of data scientists and engineers. Due to its proficiency in the operationalization of distributed pipelines over edge, fog, and cloud layers, it is particularly useful in the complex and heterogeneous environments of smart cities. For this purpose, PADL contains functionalities for the specification of monitoring, notifications, and actuation capabilities. In addition, we provide tools that facilitate its adoption in production environments. Finally, we showcase the usefulness of the language by showing the definition of PADL-compliant analytical pipelines over two use cases in a smart city context (flood control and waste management), demonstrating that its adoption is simple and beneficial for the definition of information and process flows in such environments. This work was partially supported by the SPRI–Basque Government through their ELKARTEK program (3KIA project, ref. KK-2020/00049). Aitor Almeida’s participation was supported by the FuturAAL-Ego project (RTI2018-101045-A-C22) granted by the Spanish Ministry of Science, Innovation and Universities. Javier Del Ser also acknowledges funding support from the Consolidated Research Group MATHMODE (IT1294-19), granted by the Department of Education of the Basque Government.
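    PADL's actual syntax is not reproduced in the abstract, so the sketch below only illustrates the kind of annotations it describes: per-step placement across edge, fog, and cloud layers, plus monitoring and actuation hooks. The keys, the flood-control step names, and the toy validator are all invented for this example.

```python
# Hypothetical, PADL-inspired pipeline descriptor plus a minimal placement check.
pipeline = {
    "name": "flood-control",
    "steps": [
        {"id": "ingest",   "placement": "edge",
         "monitor": {"metric": "latency_ms", "alert_above": 50}},
        {"id": "features", "placement": "fog"},
        {"id": "predict",  "placement": "cloud",
         "actuate": {"on": "risk_high", "call": "open-sluice"}},
    ],
}

ALLOWED_LAYERS = {"edge", "fog", "cloud"}


def validate(spec: dict) -> list:
    """Return a list of human-readable problems; an empty list means the spec is deployable."""
    problems = []
    for step in spec.get("steps", []):
        if step.get("placement") not in ALLOWED_LAYERS:
            problems.append(f"step {step.get('id')}: unknown placement {step.get('placement')!r}")
    return problems


print(validate(pipeline))  # []
```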

    Visual and linguistic processes in deep neural networks: A cognitive perspective

    When people describe an image, there are complex visual and linguistic processes at work. For instance, speakers tend to look at an object right before mentioning it, but not every time. Similarly, during a conversation, speakers can refer to an entity multiple times, using expressions that evolve in the common ground. In this thesis, I develop computational models of such visual and linguistic processes, drawing inspiration from theories and findings in cognitive science and psycholinguistics. This work, in which I aim to capture the intricate relationship between non-linguistic modalities and language within deep artificial neural networks, contributes to the line of research into multimodal Natural Language Processing. The thesis consists of two parts: (1) modeling human gaze in language use (production and comprehension), and (2) modeling communication strategies in referential tasks in visually grounded dialogue. In the first part, I delve into enhancing image description generation models using eye-tracking data; evaluating the variation in human signals while describing images; and predicting human reading behavior in the form of eye movements. In the second part, I build models for quantifying, generating, resolving, and adapting utterances in referential tasks situated within visual and conversational contexts. The outcomes advance our understanding of human visuo-linguistic processes by revealing the intricate strategies at play in such processes, and they point to the importance of accounting for them when developing and utilizing multimodal models. The findings shed light on how advancements in artificial intelligence could contribute to advancing research on crossmodal processes in humans, and vice versa.
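    As a rough illustration of one way eye-tracking signals can condition an image description model, the sketch below pools image-region features with weights derived from fixation durations. This fusion scheme is invented for the example and is not one of the specific models developed in the thesis.

```python
# Gaze-weighted pooling of image-region features; purely illustrative.
import torch


def gaze_pooled_features(region_feats: torch.Tensor, fixation_ms: torch.Tensor) -> torch.Tensor:
    """region_feats: (regions, dim); fixation_ms: (regions,) total fixation time per region."""
    weights = torch.softmax(fixation_ms, dim=0)           # longer-fixated regions weigh more
    return (weights.unsqueeze(1) * region_feats).sum(0)   # (dim,) gaze-conditioned image code


if __name__ == "__main__":
    feats = torch.rand(36, 512)                           # e.g. 36 detected regions, 512-dim each
    fixations = torch.tensor([0.0] * 35 + [800.0])        # one heavily fixated region
    print(gaze_pooled_features(feats, fixations).shape)   # torch.Size([512])
```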