
    Domain Adaptive Transfer Learning for Fault Diagnosis

    Thanks to the digitization of industrial assets in fleets, the ambitious goal of transferring fault diagnosis models from one machine to another has raised great interest. Solving these domain adaptive transfer learning tasks has the potential to save large efforts on manually labeling data and modifying models for new machines in the same fleet. Although data-driven methods have shown great potential in fault diagnosis applications, their ability to generalize to new machines and new working conditions is limited by their tendency to overfit the training set. One promising solution to this problem is domain adaptation, which aims to improve model performance on the new target machine. Inspired by its successful implementation in computer vision, we introduce Domain-Adversarial Neural Networks (DANN) to our context, along with two other popular methods from previous fault diagnosis research. We then carefully justify the applicability of these methods in realistic fault diagnosis settings, and offer a unified experimental protocol for a fair comparison between domain adaptation methods for fault diagnosis problems.
    Comment: Presented at the 2019 Prognostics and System Health Management Conference (PHM 2019) in Paris, France
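    The core mechanism of DANN is a gradient reversal layer between the feature extractor and the domain classifier: it is the identity in the forward pass but flips (and scales) gradients in the backward pass, so the features are pushed to become domain-invariant. A minimal sketch of that layer, detached from any deep learning framework (the class name and the lambda value are illustrative, not the paper's):

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; scales gradients by -lambda in the
    backward pass, so the feature extractor learns to *confuse* the
    domain classifier -- the core trick behind DANN."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # flip the domain-loss gradient

grl = GradientReversal(lam=0.5)
features = np.array([1.0, -2.0, 3.0])
out = grl.forward(features)                     # identical to features
grad = grl.backward(np.array([0.2, 0.2, 0.2]))  # -0.5 * incoming grads
```

    In a real framework this would be a custom autograd operation; the point here is only the sign flip that turns domain-classification loss into a domain-confusion signal for the features.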

    Are you in a Masquerade? Exploring the Behavior and Impact of Large Language Model Driven Social Bots in Online Social Networks

    As the capabilities of Large Language Models (LLMs) emerge, they not only assist in accomplishing traditional tasks within more efficient paradigms but also stimulate the evolution of social bots. Researchers have begun exploring the implementation of LLMs as the driving core of social bots, enabling more efficient and user-friendly completion of tasks like profile completion, social behavior decision-making, and social content generation. However, there is currently a lack of systematic research on the behavioral characteristics of LLM-driven social bots and their impact on social networks. We have curated data from Chirper, a Twitter-like social network populated by LLM-driven social bots, and embarked on an exploratory study. Our findings indicate that: (1) LLM-driven social bots possess enhanced individual-level camouflage while exhibiting certain collective characteristics; (2) these bots have the ability to exert influence on online communities through toxic behaviors; (3) existing detection methods are applicable to the activity environment of LLM-driven social bots but may be subject to certain limitations in effectiveness. Moreover, we have organized the data collected in our study into the Masquerade-23 dataset, which we have publicly released, thus addressing the data void in the subfield of LLM-driven social bot behavior datasets. Our research outcomes provide primary insights for the research and governance of LLM-driven social bots within the research community.
    Comment: 18 pages, 7 figures

    Semantics-based platform for context-aware and personalized robot interaction in the internet of robotic things

    Robots are moving from well-controlled lab environments to the real world, where an increasing number of environments have been transformed into smart sensorized IoT spaces. Users will expect these robots to adapt to their preferences and needs, and even more so for social robots that engage in personal interactions. In this paper, we present declarative ontological models and a middleware platform for building services that generate interaction tasks for social robots in smart IoT environments. The platform implements a modular, data-driven workflow that allows developers of interaction services to determine the appropriate time, content and style of human-robot interaction tasks by reasoning on semantically enriched IoT sensor data. The platform also abstracts the complexities of scheduling, planning and execution of these tasks, and can automatically adjust parameters to the personal profile and current context. We present motivational scenarios in three environments: a smart home, a smart office and a smart nursing home, detail the interfaces and executional paths in our platform, and present a proof-of-concept implementation. (C) 2018 Elsevier Inc. All rights reserved.

    Learning To Scale Up Search-Driven Data Integration

    A recent movement to tackle the long-standing data integration problem is a compositional and iterative approach, termed “pay-as-you-go” data integration. Under this model, the objective is to immediately support queries over “partly integrated” data, and to enable the user community to drive integration of the data that relate to their actual information needs. Over time, data will be gradually integrated. While the pay-as-you-go vision has been well-articulated for some time, only recently have we begun to understand how it can be manifested into a system implementation. One branch of this effort has focused on enabling queries through keyword search-driven data integration, in which users pose queries over partly integrated data encoded as a graph, receive ranked answers generated from data and metadata that are linked at query time, and provide feedback on those answers. From this user feedback, the system learns to repair bad schema matches or record links. Many real-world issues of uncertainty and diversity in search-driven integration remain open. Such tasks in search-driven integration require a combination of human guidance and machine learning. The challenge is how to make maximal use of limited human input. This thesis develops three methods to scale up search-driven integration, through learning from expert feedback: (1) active learning techniques to repair links from small amounts of user feedback; (2) collaborative learning techniques to combine users’ conflicting feedback; and (3) debugging techniques to identify where data experts could best improve integration quality. We implement these methods within the Q System, a prototype of search-driven integration, and validate their effectiveness over real-world datasets.
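    The "maximal use of limited human input" point is the classic active-learning query-selection problem: rather than asking the expert about random record links, ask about the links the model is least sure of. A minimal sketch of uncertainty sampling under that framing (the function name, score map, and link ids are illustrative, not the thesis's actual API):

```python
import numpy as np

def pick_links_for_feedback(link_scores, k=2):
    """Uncertainty sampling: choose the candidate record links whose
    predicted match probability is closest to 0.5, i.e. where a single
    expert label carries the most information.

    link_scores: dict mapping link id -> predicted match probability."""
    ids = list(link_scores)
    uncertainty = [abs(link_scores[i] - 0.5) for i in ids]
    order = np.argsort(uncertainty)  # most uncertain first
    return [ids[j] for j in order[:k]]

scores = {"a-b": 0.95, "c-d": 0.52, "e-f": 0.10, "g-h": 0.48}
queries = pick_links_for_feedback(scores, k=2)  # near-0.5 links win
```

    Confident links ("a-b", "e-f") are left alone; the expert's limited attention goes to the ambiguous ones, which is the sense in which small amounts of feedback can repair many links.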

    International organisation of ocean programs: Making a virtue of necessity

    The needs of climate prediction reveal a sharp contrast between existing networks for observation of the atmosphere and those for the ocean. Even the largest and longest-serving ocean data networks were created for their value to a specific user (usually with a defence, fishing or other maritime purpose), and the major compilations of historical data have needed extensive scientific input to reconcile the differences and deficiencies of the various sources. Vast amounts of such data remain inaccessible or unusable. Observations for research purposes have generally been short-lived and funded on the basis of single initiatives. Even major programs such as FGGE, TOGA and WOCE have been driven by the dedicated interest of a surprisingly small number of individuals, and have been funded from a wide variety of temporary allocations. Recognising the global scale of ocean observations needed for climate research, international cooperation and coordination is an unavoidable necessity, resulting in the creation of such bodies as the Committee for Climatic Changes and the Ocean (CCCO), with the tasks of: (1) defining the scientific elements of research and ocean observation which meet the needs of climate prediction and amelioration; (2) translating these elements into terms of programs, projects or requirements that can be understood and participated in by individual nations and marine agencies; and (3) the sponsorship of specialist groups to facilitate the definition of research programs, the implementation of cooperative international activity and the dissemination of results.

    Efficient Generation of Multimodal Fluid Simulation Data

    Applying the representational power of machine learning to the prediction of complex fluid dynamics has been a relevant subject of study for years. However, the amount of available fluid simulation data does not match the notoriously high requirements of machine learning methods. Researchers have typically addressed this issue by generating their own datasets, preventing a consistent evaluation of their proposed approaches. Our work introduces a generation procedure for synthetic multi-modal fluid simulation datasets. By leveraging a GPU implementation, our procedure is also efficient enough that no data needs to be exchanged between users, except for the configuration files required to reproduce the dataset. Furthermore, our procedure supports multiple modalities (generating both geometry and photorealistic renderings) and is general enough to be applied to various tasks in data-driven fluid simulation. We then employ our framework to generate a set of thoughtfully designed benchmark datasets, which attempt to span specific fluid simulation scenarios in a meaningful way. The properties of our contributions are demonstrated by evaluating recently published algorithms for neural fluid simulation and fluid inverse rendering tasks using our benchmark datasets. Our contribution aims to fulfill the community's need for standardized benchmarks, fostering research that is more reproducible and robust than previous endeavors.
    Comment: 10 pages, 7 figures
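    The "exchange only configuration files" idea rests on deterministic generation: if the procedure is seeded, the same small config regenerates bit-identical data on every machine, so the dataset itself never has to be shipped. A toy sketch of that pattern (the field names and scene structure are illustrative, not the paper's actual schema):

```python
import json
import random

def generate_scene(config):
    """Deterministically regenerate a toy simulation scene from a config
    dict. Because the generator is seeded, two users with the same
    config produce identical data without exchanging the data itself."""
    rng = random.Random(config["seed"])  # seeded, reproducible RNG
    return [
        {"x": rng.uniform(0, config["box"]), "y": rng.uniform(0, config["box"])}
        for _ in range(config["particles"])
    ]

cfg = json.loads('{"seed": 7, "box": 1.0, "particles": 4}')
a = generate_scene(cfg)
b = generate_scene(cfg)  # same config -> identical scene
```

    The paper's GPU procedure generates far richer output (geometry plus renderings), but the reproducibility contract is the same: the config file is the dataset.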

    Towards a framework for developing visual analytics in supply chain environments

    Visual Analytics (VA) has shown to be of significant importance for Supply Chain (SC) analytics. However, SC partners still face challenges incorporating it into their data-driven decision-making activities. A conceptual framework for the development and deployment of a VA system provides an abstract, platform-independent model for the whole process of VA, covering requirement specification, data collection and pre-processing, visualization recommendation, visualization specification and implementation, and evaluation. In this paper, we propose such a framework based on three main aspects: 1) Business view, 2) Asset view, and 3) Technology view. Each of these views covers a set of steps to facilitate the development and maintenance of the system in its context. The framework follows a consistent process structure that comprises activities, tasks, and people. The final output of the whole process is the VA system as a deliverable. This facilitates the alignment of VA activities with business processes and decision-making activities. We demonstrate the framework's applicability using an actual usage scenario and leave the implementation of the system for future work.