45 research outputs found

    Robust Multimodal Failure Detection for Microservice Systems

    Proactive failure detection of instances is vitally important to microservice systems because an instance failure can propagate to the whole system and degrade its performance. Over the years, many single-modal (i.e., metrics, logs, or traces) anomaly detection methods have been proposed. However, they tend to miss a large number of failures and generate numerous false alarms because they ignore the correlation of multimodal data. In this work, we propose AnoFusion, an unsupervised failure detection approach, to proactively detect instance failures through multimodal data for microservice systems. It applies a Graph Transformer Network (GTN) to learn the correlation of the heterogeneous multimodal data and integrates a Graph Attention Network (GAT) with a Gated Recurrent Unit (GRU) to address the challenges introduced by dynamically changing multimodal data. We evaluate the performance of AnoFusion on two datasets, demonstrating that it achieves F1-scores of 0.857 and 0.922, respectively, outperforming state-of-the-art failure detection approaches.
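    As an illustration of this kind of architecture, the sketch below wires a simple attention layer over modality nodes into a GRU that scores a window of multimodal telemetry. It is a minimal stand-in with invented dimensions, not AnoFusion's implementation; in particular, it uses plain single-head attention rather than the paper's GTN and GAT.

```python
# Illustrative multimodal failure-detection skeleton: attention over
# modality nodes (metrics, logs, traces), then a GRU over time.
# All dimensions and module names are hypothetical.
import torch
import torch.nn as nn

class SimpleGraphAttention(nn.Module):
    """Single-head attention over a fully connected graph of modality nodes."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (batch, nodes, dim)
        attn = torch.softmax(self.q(x) @ self.k(x).transpose(1, 2)
                             / x.size(-1) ** 0.5, dim=-1)
        return attn @ self.v(x)                # aggregated node features

class MultimodalDetector(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.gat = SimpleGraphAttention(dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                      # x: (batch, time, modalities, dim)
        b, t, m, d = x.shape
        fused = self.gat(x.reshape(b * t, m, d)).mean(dim=1)  # fuse modalities
        out, _ = self.gru(fused.reshape(b, t, d))             # temporal model
        return torch.sigmoid(self.score(out[:, -1]))          # anomaly score

x = torch.randn(4, 10, 3, 32)          # 3 modalities: metrics, logs, traces
print(MultimodalDetector()(x).shape)   # torch.Size([4, 1])
```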

    Electronic Regulation of Data Sharing and Processing Using Smart Ledger Technologies for Supply-Chain Security

    Traditional centralized data storage and processing solutions manifest limitations with regard to overall operational cost and the security and auditability of data. One of the biggest issues with existing solutions is the difficulty of keeping track of who has had access to the data and how the data may have changed over its lifetime, while providing a secure and easy-to-use mechanism to share the data between different users. The ability to electronically regulate data sharing within and across different organizational entities in the supply chain (SC) is an open issue that is only partially addressed by existing legal and regulatory compliance frameworks. In this article, we present Cydon, a decentralized data management platform that executes bespoke distributed applications utilizing a novel search-and-retrieve algorithm leveraging metadata attributes. Cydon utilizes a smart distributed ledger to offer an immutable audit trail and transaction history for all levels of data access and modification within an SC and for all data flows within the environment. Results suggest that Cydon provides authorized and fast access to secure distributed data and avoids single points of failure by securely distributing encrypted data across different nodes while maintaining an "always-on" chain of custody.
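    The core audit-trail idea can be illustrated with a hash-chained log that is searchable by metadata attributes. The sketch below is a hypothetical toy, not Cydon's API; the class, fields, and example records are invented.

```python
# Hypothetical hash-chained audit trail with metadata-attribute search,
# illustrating an immutable record of data access in a supply chain.
import hashlib, json, time

class AuditLedger:
    def __init__(self):
        self.entries = []

    def append(self, actor, action, metadata):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"actor": actor, "action": action, "metadata": metadata,
                  "ts": time.time(), "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)

    def verify(self):
        """Recompute the chain; any tampered entry breaks verification."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev or e["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

    def search(self, **attrs):
        """Retrieve entries whose metadata matches all given attributes."""
        return [e for e in self.entries
                if all(e["metadata"].get(k) == v for k, v in attrs.items())]

ledger = AuditLedger()
ledger.append("supplier-a", "write", {"sku": "X1", "stage": "shipping"})
ledger.append("auditor", "read", {"sku": "X1", "stage": "customs"})
print(ledger.verify(), ledger.search(stage="shipping"))
```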

    RetouchingFFHQ: A Large-scale Dataset for Fine-grained Face Retouching Detection

    The widespread use of face retouching filters on short-video platforms has raised concerns about the authenticity of digital appearances and the impact of deceptive advertising. To address these issues, there is a pressing need to develop advanced face retouching detection techniques. However, the lack of large-scale and fine-grained face retouching datasets has been a major obstacle to progress in this field. In this paper, we introduce RetouchingFFHQ, a large-scale and fine-grained face retouching dataset that contains over half a million conditionally retouched images. RetouchingFFHQ stands out from previous datasets due to its large scale, high quality, fine granularity, and customization. By including four typical types of face retouching operations and different retouching levels, we extend binary face retouching detection into a fine-grained, multi-retouching-type, and multi-retouching-level estimation problem. Additionally, we propose a Multi-granularity Attention Module (MAM) as a plugin for CNN backbones for enhanced cross-scale representation learning. Extensive experiments using different baselines as well as our proposed method on RetouchingFFHQ show decent performance on face retouching detection. With the proposed new dataset, we believe there is great potential for future work to tackle the challenging problem of real-world fine-grained face retouching detection.
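    To suggest how a cross-scale attention plugin can sit on a CNN backbone, the sketch below pools a feature map at several granularities and uses the pooled descriptors to re-weight channels. It is an assumption-laden stand-in for the general idea, not the paper's MAM; the scales and layer sizes are invented.

```python
# Illustrative multi-scale attention plugin for a CNN backbone:
# pool at several granularities, derive channel weights, re-weight features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAttention(nn.Module):
    def __init__(self, channels, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.fc = nn.Sequential(
            nn.Linear(channels * len(scales), channels),
            nn.ReLU(),
            nn.Linear(channels, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                     # x: (batch, channels, H, W)
        descs = []
        for s in self.scales:                 # pool at several granularities
            pooled = F.adaptive_avg_pool2d(x, s)            # (B, C, s, s)
            descs.append(pooled.mean(dim=(2, 3)))           # (B, C)
        weights = self.fc(torch.cat(descs, dim=1))          # channel weights
        return x * weights[:, :, None, None]  # re-weight backbone features

feat = torch.randn(2, 64, 32, 32)             # a backbone feature map
print(MultiScaleAttention(64)(feat).shape)    # torch.Size([2, 64, 32, 32])
```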

    ProSpect: Expanded Conditioning for the Personalization of Attribute-aware Image Generation

    Personalizing generative models offers a way to guide image generation with user-provided references. Current personalization methods can invert an object or concept into the textual conditioning space and compose new natural sentences for text-to-image diffusion models. However, representing and editing specific visual attributes such as material, style, and layout remains a challenge, leading to a lack of disentanglement and editability. To address this, we propose a novel approach that leverages the step-by-step generation process of diffusion models, which generate images from low- to high-frequency information, providing a new perspective on representing, generating, and editing images. We develop the Prompt Spectrum Space P*, an expanded textual conditioning space, and a new image representation method called ProSpect. ProSpect represents an image as a collection of inverted textual token embeddings encoded from per-stage prompts, where each prompt corresponds to a specific generation stage (i.e., a group of consecutive steps) of the diffusion model. Experimental results demonstrate that P* and ProSpect offer stronger disentanglement and controllability than existing methods. We apply ProSpect in various personalized attribute-aware image generation applications, such as image/text-guided material/style/layout transfer and editing, achieving previously unattainable results from a single image input without fine-tuning the diffusion models.
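    The per-stage prompt idea can be sketched as follows: the denoising trajectory is partitioned into consecutive stages, and each stage is conditioned on its own prompt embedding. The stage boundaries and the embed() stub below are hypothetical placeholders, not the authors' code.

```python
# Toy sketch: map each diffusion timestep to a generation stage and pick
# that stage's prompt embedding as the conditioning signal.
import numpy as np

def stage_index(t, total_steps, n_stages):
    """Map a timestep (total_steps-1 .. 0) to a stage; early steps shape
    low-frequency layout, late steps add high-frequency detail."""
    frac = 1.0 - t / max(total_steps - 1, 1)
    return min(int(frac * n_stages), n_stages - 1)

def embed(prompt):
    # Stand-in for a text encoder; returns a fake embedding for illustration.
    rng = np.random.default_rng(abs(hash(prompt)) % 2**32)
    return rng.standard_normal(8)

stage_prompts = ["room layout", "wooden material", "oil-painting style"]
total_steps, n_stages = 50, len(stage_prompts)

for t in [49, 30, 10, 0]:                     # a few denoising steps
    s = stage_index(t, total_steps, n_stages)
    cond = embed(stage_prompts[s])            # per-stage conditioning vector
    print(f"step {t:2d} -> stage {s} ('{stage_prompts[s]}'), dim {cond.shape[0]}")
```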

    Towards Digital Twin Implementation for Assessing Production Line Performance and Balancing

    The optimization of production processes has always been one of the cornerstones for manufacturing companies, aiming to increase productivity while minimizing the related costs. In the Industry 4.0 era, innovative technologies that were perceived as out of reach until a few years ago have become accessible to everyone. The massive introduction of these technologies directly into factories allows resources (machines and humans) to be interconnected and the entire production chain to be kept under control, thanks to the collection and analysis of real production data supporting the decision-making process. This article proposes a methodological framework that, through the use of Industrial Internet of Things (IoT) devices, in particular wearable sensors, and simulation tools, supports the analysis of production line performance parameters by considering both experimental and numerical data, allowing continuous monitoring of line balancing and performance as production demand varies. A case study concerning a manual task of a real manufacturing production line is presented to demonstrate the applicability and effectiveness of the proposed procedure.
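    The following is a small worked example of the line-balancing arithmetic such a framework could apply to cycle times collected from wearable sensors; the station names, times, and demand figure are invented for illustration.

```python
# Worked example: bottleneck cycle time, throughput, balance efficiency,
# and a takt-time check against the current production demand.
station_times = {"assembly": 42.0, "wiring": 55.0, "testing": 38.0,
                 "packaging": 30.0}          # seconds per unit, per station

cycle_time = max(station_times.values())     # bottleneck station paces the line
throughput = 3600.0 / cycle_time             # units per hour

# Classic line-balance efficiency: total work content divided by
# (number of stations * bottleneck cycle time).
efficiency = sum(station_times.values()) / (len(station_times) * cycle_time)

demand_per_hour = 60                          # current production demand
takt_time = 3600.0 / demand_per_hour          # seconds available per unit

print(f"cycle time {cycle_time:.0f}s, throughput {throughput:.1f} units/h")
print(f"balance efficiency {efficiency:.1%}")
print("line keeps up with demand" if cycle_time <= takt_time
      else "bottleneck exceeds takt time, rebalancing needed")
```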

    Colored Petri net modelling and evaluation of drone inspection methods for distribution networks

    The UAV industry is developing rapidly, and drones are increasingly used for monitoring industrial facilities. When designing such systems, operating companies have to find a configuration of multiple drones that is near-optimal in terms of cost while achieving the required monitoring quality. Stochastic influences such as failures and maintenance have to be taken into account. Model-based systems engineering supplies tools and methods to solve such problems. This paper presents a method to model and evaluate such UAV systems with coloured Petri nets. It supports a modular view of typical setup elements and different types of UAVs and is based on UAV application standards. The model can be easily adapted to the most popular flight tasks and allows for estimating the monitoring frequency and determining the most appropriate grouping and configuration of UAVs, monitoring schemes, air times, and maintenance periods. An important advantage is the ability to consider drone maintenance processes. Thus, the methodology will be useful in the conceptual design phase of UAV systems, in monitoring planning, and in the selection of UAVs for specific monitoring tasks.
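    Evaluating the coloured Petri net itself requires a dedicated tool such as CPN Tools; as a rough stand-in, the hypothetical Monte Carlo sketch below estimates the same kind of quantity such a model targets, namely inspections per day for a fleet subject to random failures and maintenance downtime. All parameters are invented.

```python
# Monte Carlo stand-in: mean inspections per day for a drone fleet with
# random in-flight failures that ground a drone for a repair period.
import random

def simulate(days=1000, drones=3, flights_per_day=4,
             p_failure=0.02, repair_days=2, seed=0):
    rng = random.Random(seed)
    down = [0] * drones                      # remaining repair days per drone
    inspections = 0
    for _ in range(days):
        for d in range(drones):
            if down[d] > 0:                  # drone in maintenance
                down[d] -= 1
                continue
            for _ in range(flights_per_day):
                inspections += 1
                if rng.random() < p_failure: # failure grounds the drone
                    down[d] = repair_days
                    break
    return inspections / days                # mean inspections per day

print(f"{simulate():.2f} inspections/day with 3 drones")
```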

    Charting Past, Present, and Future Research in the Semantic Web and Interoperability

    Huge advances in peer-to-peer systems and attempts to develop the semantic web have revealed a critical issue in information systems across multiple domains: the absence of semantic interoperability. Today, businesses operating in a digital environment require increased supply-chain automation, interoperability, and data governance. While research on the semantic web and interoperability has recently received much attention, few studies investigate the relationship between these two concepts in depth. To address this knowledge gap, this study conducts a review and bibliometric analysis of 3511 Scopus-registered papers on the semantic web and interoperability published over the past two decades. The publications were analyzed using a variety of bibliometric indicators, such as publication year, journal, authors, countries, and institutions. Keyword co-occurrence and co-citation networks were utilized to identify the primary research hotspots and group the relevant literature. The findings indicate the dominance of conference papers as a means of disseminating knowledge and the substantial contribution of developed nations to the semantic web field. In addition, the keyword co-occurrence network analysis reveals a significant emphasis on semantic web languages, sensors and computing, graphs and models, and linking and integration techniques. Based on the co-citation clustering, the Internet of Things, semantic web services, ontology mapping, building information modeling, bioinformatics, education and e-learning, and semantic web languages were identified as the primary themes contributing to the flow of knowledge and the growth of the semantic web and interoperability field. Overall, this review substantially contributes to the literature and increases scholars' and practitioners' awareness of the current knowledge composition and future research directions of the semantic web field.
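    The keyword co-occurrence step of such an analysis reduces to counting how often keyword pairs appear together across papers; the fabricated example below illustrates the computation.

```python
# Build keyword co-occurrence counts: one count per pair per paper.
from itertools import combinations
from collections import Counter

papers = [
    ["semantic web", "ontology", "interoperability"],
    ["semantic web", "linked data", "interoperability"],
    ["ontology", "linked data", "IoT"],
]

cooccur = Counter()
for keywords in papers:
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccur[(a, b)] += 1                 # one co-occurrence per paper

for (a, b), n in cooccur.most_common(3):     # strongest keyword links
    print(f"{a} -- {b}: {n}")
```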

    Feature-Model-Guided Online Learning for Self-Adaptive Systems

    A self-adaptive system can modify its own structure and behavior at runtime based on its perception of the environment, of itself, and of its requirements. To develop a self-adaptive system, software developers codify knowledge about the system and its environment, as well as how adaptation actions impact the system. However, the codified knowledge may be insufficient due to design-time uncertainty, and thus a self-adaptive system may execute adaptation actions that do not have the desired effect. Online learning is an emerging approach to address design-time uncertainty by employing machine learning at runtime. Online learning accumulates knowledge at runtime by, for instance, exploring not-yet-executed adaptation actions. We address two specific problems with respect to online learning for self-adaptive systems. First, the number of possible adaptation actions can be very large. Existing online learning techniques randomly explore the possible adaptation actions, which can lead to slow convergence of the learning process. Second, the possible adaptation actions can change as a result of system evolution. Existing online learning techniques are unaware of these changes and thus do not explore new adaptation actions but explore adaptation actions that are no longer valid. We propose using feature models to give structure to the set of adaptation actions and thereby guide the exploration process during online learning. Experimental results involving four real-world systems suggest that considering the hierarchical structure of feature models may speed up convergence by 7.2% on average. Considering the differences between feature models before and after an evolution step may speed up convergence by 64.6% on average. [...]
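    The guided-exploration idea can be made concrete with a toy feature tree: probe one child per parent feature, and prune the whole subtree when the probe scores poorly, instead of exploring actions at random. The feature tree, threshold, and reward function below are invented for illustration.

```python
# Toy feature-model-guided exploration: the tree structure lets the learner
# discard a poor subtree after one probe rather than sampling it repeatedly.
import random

feature_tree = {
    "compression": ["gzip", "lz4", "zstd"],
    "caching": ["lru", "lfu"],
    "routing": ["round_robin", "least_loaded"],
}

def explore(evaluate, threshold=0.3, seed=0):
    rng = random.Random(seed)
    results = {}
    for parent, children in feature_tree.items():
        sample = rng.choice(children)        # probe the subtree once
        if evaluate(parent, sample) < threshold:
            continue                         # prune the poor subtree early
        for child in children:               # promising: explore it fully
            results[(parent, child)] = evaluate(parent, child)
    return max(results, key=results.get) if results else None

_rng = random.Random(1)
def fake_evaluate(parent, child):
    # Stand-in for measuring an adaptation's runtime effect on a quality goal.
    return _rng.random()

print(explore(fake_evaluate))                # best surviving adaptation action
```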