122 research outputs found

    Intent‐Driven Orchestration: Enforcing Service Level Objectives for Cloud Native Deployments

    No full text
    The introduction of microservices and functions using serverless deployment styles for cloud-native applications will trigger a shift in the orchestration paradigm towards an intent-driven model. In this model, we shift from imperatively declaring an object’s state to declaring a set of desired intents. Intent-driven orchestration (IDO) enables the management of applications through their service level objectives (SLOs) while minimizing the overhead for service owners and administrators. By letting service owners express the desired target key performance indicator (KPI) objectives for their service components, instead of declaratively defining the required state and resources, we improve ease of use and abstract away the underlying platforms. By adding a planning component to a Kubernetes-based orchestration stack, we demonstrate the feasibility of translating service objectives into actionable decisions. As this new architecture component introduces more autonomy into the control plane, we also define a means to evaluate the results of planning.
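
    The following minimal Python sketch illustrates the general idea of translating a declared intent (a target KPI such as p99 latency) into an actionable scaling decision. It is not the paper's planner; the Intent dataclass, plan_action function, and tolerance band are illustrative assumptions.

        from dataclasses import dataclass

        @dataclass
        class Intent:
            """Desired SLO for one service component."""
            kpi: str            # e.g. "p99_latency_ms"
            target: float       # desired upper bound on the KPI
            replicas: int       # currently allocated replicas

        def plan_action(intent: Intent, observed: float, tolerance: float = 0.1) -> str:
            """Compare the observed KPI with the declared target and plan one step.

            Returns "scale_out", "scale_in", or "none"; a real planner would
            translate this into a change on the Kubernetes objects.
            """
            if observed > intent.target * (1 + tolerance):
                return "scale_out"      # SLO violated: add capacity
            if observed < intent.target * (1 - tolerance) and intent.replicas > 1:
                return "scale_in"       # comfortable headroom: release resources
            return "none"               # within the tolerance band, do nothing

        # Example: a 200 ms p99 latency objective observed at 260 ms.
        print(plan_action(Intent("p99_latency_ms", 200.0, replicas=3), observed=260.0))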

    Safety-critical computer vision: an empirical survey of adversarial evasion attacks and defenses on computer vision systems

    No full text
    Given the growing prominence of production-level AI and the threat of adversarial attacks that can poison a machine learning model against a certain label, evade classification, or reveal sensitive information about the model and its training data, adversaries pose fundamental problems for machine learning systems. Furthermore, much research has focused on the inverse relationship between robustness and accuracy, which is particularly problematic for real-time and safety-critical systems, since these are governed by legal constraints under which software changes must be explainable and every change must be thoroughly tested. While many defenses have been proposed, they are often computationally expensive and tend to reduce model accuracy. We have therefore conducted a large survey of attacks and defenses and present a simple, practical framework for analyzing any machine learning system from a safety-critical perspective, using adversarial noise to find the upper bound of the failure rate. Using this method, we conclude that all tested configurations of the ResNet architecture fail to meet any reasonable definition of ‘safety-critical’ even on small-scale benchmark data. We examine state-of-the-art defenses and attacks against computer vision systems, focusing on safety-critical applications in autonomous driving, industrial control, and healthcare. By testing combinations of attacks and defenses, their efficacy, and their run-time requirements, we provide substantial empirical evidence that modern neural networks consistently fail to meet established safety-critical standards by a wide margin.
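
    A minimal sketch of the kind of adversarial stress test the survey describes: perturb inputs with FGSM-style noise at a fixed budget and report the resulting misclassification rate. The model and data below are random stand-ins, not the ResNet architectures or benchmark datasets evaluated in the paper, and the epsilon budget is an arbitrary assumption.

        import torch
        import torch.nn as nn

        def fgsm_failure_rate(model, x, y, eps=0.03):
            """Return the fraction of inputs misclassified after an FGSM perturbation."""
            x = x.clone().requires_grad_(True)
            loss = nn.functional.cross_entropy(model(x), y)
            loss.backward()
            x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
            preds = model(x_adv).argmax(dim=1)
            return (preds != y).float().mean().item()

        # Stand-in model and data (replace with a trained classifier and a benchmark set).
        model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
        x = torch.rand(64, 3, 32, 32)
        y = torch.randint(0, 10, (64,))
        print(f"adversarial failure rate: {fgsm_failure_rate(model, x, y):.2%}")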

    MadFed: enhancing federated learning with marginal-data model fusion

    No full text
    As the demand for intelligent applications at the network edge grows, so does the need for effective federated learning (FL) techniques. However, FL often relies on local datasets that are not independently and identically distributed (non-IID) across end devices, which can result in considerable performance degradation. Prior solutions, such as model-driven approaches based on knowledge distillation, meta-learning, and transfer learning, have provided some reprieve, but their performance suffers under heterogeneous local datasets and highly skewed data distributions. To address these challenges, this study introduces the MArginal Data fusion FEDerated Learning (MadFed) approach, a groundbreaking fusion of model- and data-driven methodologies. By utilizing marginal data, MadFed mitigates the skewness of the data distribution, improves the maximum achievable accuracy, and reduces communication costs. Furthermore, the study demonstrates that fusing marginal data can significantly improve performance even with minimal data entries, such as a single entry. For instance, it provides up to a 15.4% accuracy increase and 70.4% communication cost savings when combined with established model-driven methodologies, whereas relying solely on those model-driven methodologies can result in poor performance, especially with highly skewed datasets. Significantly, MadFed extends its effectiveness across various FL algorithms and offers a unique method to augment the label sets of end devices, thereby enhancing the utility and applicability of federated learning in real-world scenarios. The proposed approach is not only efficient but also adaptable and versatile, promising broader application and potential for widespread adoption in the field.
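
    A rough sketch of the intuition of fusing a few "marginal" data entries into a skewed local dataset before standard federated averaging; MadFed's actual fusion mechanism is not reproduced here, and the function names, shapes, and the per-label cap k are illustrative assumptions.

        import numpy as np

        def augment_with_marginal_data(local_x, local_y, marginal_x, marginal_y, k=1):
            """Add up to k marginal samples per label that is missing from the local set."""
            missing = set(marginal_y.tolist()) - set(local_y.tolist())
            for label in missing:
                idx = np.where(marginal_y == label)[0][:k]
                local_x = np.concatenate([local_x, marginal_x[idx]])
                local_y = np.concatenate([local_y, marginal_y[idx]])
            return local_x, local_y

        def fedavg(client_weights, client_sizes):
            """Standard FedAvg: weighted average of client model parameters."""
            total = sum(client_sizes)
            return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

        # Example: a client that only holds label 0 gains a single entry of label 1.
        x, y = augment_with_marginal_data(
            np.zeros((10, 4)), np.zeros(10, dtype=int),
            np.ones((5, 4)), np.ones(5, dtype=int), k=1)
        print(sorted(set(y.tolist())))   # [0, 1]

        # Aggregation is weighted toward the larger client.
        print(fedavg([np.array([1.0, 2.0]), np.array([3.0, 4.0])], client_sizes=[30, 10]))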

    The 6G Computing Continuum (6GCC): Meeting the 6G computing challenges

    Get PDF
    6G systems, such as Large Intelligent Surfaces, will require distributed, complex, and coordinated decisions throughout a very heterogeneous and cell-free infrastructure. This will require a fundamentally redesigned software infrastructure accompanied by massively distributed and heterogeneous computing resources, vastly different from current wireless networks. To address these challenges, in this paper we propose and motivate the concept of a 6G Computing Continuum (6GCC) together with two research testbeds intended to advance the rate and quality of research. The 6G Computing Continuum is an end-to-end compute and software platform for realizing Large Intelligent Surfaces and their tenant users and applications. The first testbed, implemented on a Large Intelligent Surfaces testbed, addresses the challenges of orchestrating shared computational resources in the wireless domain. The second, simulation-based testbed is intended to address scalability and global-scale orchestration challenges.

    Reinforced Transformer Learning for VSI-DDoS Detection in Edge Clouds

    No full text
    Edge-driven software applications, often deployed as online services in the cloud-to-edge continuum, lack significant protection for services and infrastructures against emerging cyberattacks. The Very-Short Intermittent Distributed Denial of Service (VSI-DDoS) attack is one of the biggest factors diminishing the Quality of Service (QoS) and Quality of Experience (QoE) for users at the edge. Unlike conventional DDoS attacks, these attacks appear in the traffic for only a very short time (on the order of a few milliseconds) while presenting users with a seemingly legitimate service experience. To provide protection, we propose a novel and efficient approach for detecting VSI-DDoS attacks using reinforced transformer learning, which mitigates the tail latency and service availability problems in edge clouds. In the presence of such attacks, users' demands for ultra-low-latency and high-throughput services deployed at the edge can never be met; moreover, the very-short intermittent requests sent towards the target services impose longer delays on users' responses. Combining a transformer with deep reinforcement learning accelerates detection performance under adverse conditions by adapting to the dynamic and most discernible patterns of attacks (e.g., multiplicative temporal dependency, attack dynamism). Extensive experiments with testbed and benchmark datasets demonstrate that the proposed approach is suitable, effective, and efficient for detecting VSI-DDoS attacks in edge clouds. The results outperform state-of-the-art methods with 0.9%-3.2% higher accuracy on both datasets.
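
    The sketch below shows only the transformer-encoder part of such a detector, classifying short windows of traffic features as benign or attack; the reinforcement learning component of the paper's design is omitted, and the class name, feature count, and model dimensions are illustrative assumptions.

        import torch
        import torch.nn as nn

        class TrafficWindowClassifier(nn.Module):
            """Transformer encoder over a window of per-millisecond traffic features."""
            def __init__(self, n_features=8, d_model=32, n_heads=4, n_layers=2):
                super().__init__()
                self.embed = nn.Linear(n_features, d_model)
                layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, n_layers)
                self.head = nn.Linear(d_model, 2)   # benign vs. VSI-DDoS

            def forward(self, x):                   # x: (batch, window_len, n_features)
                h = self.encoder(self.embed(x))
                return self.head(h.mean(dim=1))     # pool over time, then classify

        # Example: score a batch of 16 traffic windows of 50 time steps each.
        model = TrafficWindowClassifier()
        scores = model(torch.rand(16, 50, 8))
        print(scores.shape)                         # torch.Size([16, 2])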

    An ICN-Based Data Marketplace Model Based on a Game Theoretic Approach Using Quality-Data Discovery and Profit Optimization

    No full text
    In the age of data and machine learning, massive amounts of data produced throughout our society can be rapidly delivered to various applications through a broad spectrum of cloud services. However, these applications have vastly different data quality requirements and willingness-to-pay (WTP), creating a general and complex problem of matching consumer quality requirements and budgets with providers' data quality and prices. This paper proposes an Information-Centric Networking (ICN)-based data marketplace to foster a quality-data trading service that addresses this challenge. We embed a WTP mechanism into an ICN-based data broker service running on cloud computing, so that a data consumer can request its desired data with a data name and a quality requirement. By specifying nominal WTPs, data consumers can acquire data of the desired quality within the range of their maximum nominal WTP. At the same time, the data broker can offer data of suitable quality based on a profit-optimized price and the proposed service quality, using ground-truth accuracy obtained by training on the data. We demonstrate that the data broker's profit can be almost doubled by using the optimal data size and budget determined through a one-leader, multiple-follower Stackelberg game. These results show that a value-added data brokering service can profitably facilitate data trading.
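
    A toy numerical sketch of the one-leader, multiple-follower structure: the broker (leader) picks a unit price, each consumer (follower) then buys as much data as its nominal WTP and budget allow, and the broker keeps the price that maximizes its profit. The demand and cost models below are illustrative assumptions, not the paper's game formulation.

        def consumer_demand(price, wtp, budget, max_size=100.0):
            """Follower best response: buy within budget, but only if the price
            does not exceed the consumer's nominal willingness-to-pay."""
            if price > wtp:
                return 0.0
            return min(max_size, budget / price)

        def broker_profit(price, consumers, unit_cost=0.2):
            """Leader profit: revenue minus a linear cost of serving the data."""
            sold = sum(consumer_demand(price, wtp, b) for wtp, b in consumers)
            return (price - unit_cost) * sold

        def optimal_price(consumers, candidates):
            """Leader move: evaluate candidate prices against the followers'
            best responses and keep the most profitable one."""
            return max(candidates, key=lambda p: broker_profit(p, consumers))

        consumers = [(1.0, 10.0), (2.0, 25.0), (3.0, 40.0)]    # (nominal WTP, budget)
        prices = [0.25 + 0.05 * i for i in range(60)]          # price grid to search
        best = optimal_price(consumers, prices)
        print(best, round(broker_profit(best, consumers), 2))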

    MicroSplit: Efficient Splitting of Microservices on Edge Clouds

    No full text
    Edge cloud systems reduce the latency between users and applications by offloading computations to a set of small-scale computing resources deployed at the edge of the network. However, since edge resources are constrained, they can become saturated and bottlenecked under increased load, resulting in an exponential increase in response times or in failures. In this paper, we argue that an application can be split between the edge and the cloud, allowing for better performance than full migration to the cloud while releasing precious resources at the edge. We model an application's internal call graph as a directed acyclic graph (DAG) and use this model to develop MicroSplit, a tool for efficiently splitting microservices between constrained edge resources and large-scale distant backend clouds. MicroSplit analyzes the dependencies between the microservices of an application and, using the Louvain method for community detection (a popular algorithm from network science), decides how to split the microservices between the constrained edge and distant data centers. We test MicroSplit with four microservice-based applications in various realistic cloud-edge settings. Our results show that MicroSplit migrates up to 60% of an application's microservices with only a slight increase in mean response time compared to running entirely on the edge, and with a latency reduction of up to 800% compared to migrating the entire application to the cloud. Compared to other state-of-the-art methods, MicroSplit reduces the total number of services on the edge by up to five times, with minimal reduction in response times.
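
    As an illustration of the core idea, the sketch below builds a small weighted call graph and groups microservices with the Louvain community-detection method via networkx (louvain_communities, available in recent networkx releases). The call graph and the "keep the frontend's community on the edge" rule are illustrative assumptions, not MicroSplit's actual placement logic.

        import networkx as nx

        # Hypothetical call graph: edge weights are call frequencies between services.
        calls = [("frontend", "cart", 50), ("frontend", "catalog", 40),
                 ("cart", "payment", 30), ("catalog", "search", 45),
                 ("payment", "bank-adapter", 25), ("search", "ranking", 35)]
        g = nx.DiGraph()
        g.add_weighted_edges_from(calls)

        # Louvain community detection groups tightly coupled services together.
        communities = nx.community.louvain_communities(
            g.to_undirected(), weight="weight", seed=42)

        # Toy placement rule: keep the community containing the user-facing service
        # on the edge and offload the remaining communities to the distant cloud.
        edge_services = next(c for c in communities if "frontend" in c)
        cloud_services = [s for c in communities if c is not edge_services for s in c]
        print("edge:", sorted(edge_services))
        print("cloud:", sorted(cloud_services))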

    Model-based Stream Processing Auto-scaling in Geo-Distributed Environments

    Get PDF
    Data stream processing is an attractive paradigm for analyzing IoT data at the edge of the Internet before transmitting processed results to a cloud. However, the relative scarcity of fog computing resources combined with the non-stationary properties of the workloads makes it impossible to allocate a static set of resources to each application. We propose Gesscale, a resource auto-scaler which guarantees that a stream processing application maintains a sufficient maximum sustainable throughput to process its incoming data without undue delay, while not using more resources than strictly necessary. Gesscale bases its decisions about when to rescale and which geo-distributed resource(s) to add or remove on a performance model that gives precise predictions of the future maximum sustainable throughput after reconfiguration. We show that this auto-scaler uses 17% fewer resources, generates 52% fewer reconfigurations, and processes more input data than baseline auto-scalers based on threshold triggers or on a simpler performance model. Index terms: stream processing, auto-scaling, fog computing.
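
    A simplified sketch of model-based scaling decisions: given a performance model that predicts the maximum sustainable throughput (MST) for a candidate allocation, pick the smallest allocation whose predicted MST covers the observed input rate plus a safety margin. The near-linear model with a per-node penalty, and all constants below, are placeholders rather than Gesscale's fitted model.

        def predicted_mst(n_nodes, per_node_mst=10_000, network_penalty=0.05):
            """Placeholder performance model: near-linear scaling with a small
            per-node penalty for geo-distributed communication."""
            return n_nodes * per_node_mst * (1 - network_penalty) ** (n_nodes - 1)

        def plan_allocation(input_rate, max_nodes=32, margin=1.2):
            """Return the smallest node count whose predicted MST covers the
            input rate times a safety margin."""
            needed = input_rate * margin
            for n in range(1, max_nodes + 1):
                if predicted_mst(n) >= needed:
                    return n
            return max_nodes

        # Example: plan for an observed input rate of 25,000 records/s.
        print(plan_allocation(input_rate=25_000))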

    Towards Soft Circuit Breaking in Service Meshes via Application-agnostic Caching

    Full text link
    Service meshes factor out code dealing with inter-microservice communication, such as circuit breaking. Circuit breaking actuation is currently limited to an "on/off" switch, i.e., a tripped circuit breaker returns an application-level error indicating service unavailability to the calling microservice. This paper proposes a soft circuit breaker actuator, which returns cached data instead of an error. The overall resilience of a cloud application is improved if constituent microservices return stale data instead of no data at all. While caching is widely employed for serving web service traffic, it is rarely used in inter-microservice communication. Microservice responses are highly dynamic, which requires carefully choosing adaptive time-to-live caching algorithms. We evaluate our approach through two experiments. First, we quantify the trade-off between traffic reduction and data staleness using a purpose-built service, thereby identifying algorithm configurations that keep data staleness at about 3% or less while reducing network load by up to 30%. Second, we quantify the network load reduction on Hipster Shop, a microservice benchmark by Google Cloud; our approach results in caching of about 80% of requests. The results show the feasibility and efficiency of our approach, which encourages implementing caching as a circuit breaking actuator in service meshes.
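
    A minimal sketch of the soft circuit-breaking idea: wrap an upstream call so that fresh-enough cached data is served without calling upstream, and stale cached data is served instead of an error when the circuit is open or the call fails. The fixed TTL and failure threshold stand in for the adaptive time-to-live policies discussed in the paper, and all names are illustrative.

        import time

        class SoftCircuitBreaker:
            """Serve cached (possibly stale) data instead of an error on failure."""

            def __init__(self, call, ttl_seconds=5.0, failure_threshold=3):
                self.call = call
                self.ttl = ttl_seconds
                self.failure_threshold = failure_threshold
                self.failures = 0
                self.cache = {}            # key -> (value, timestamp)

            def request(self, key):
                cached = self.cache.get(key)
                now = time.monotonic()
                # Serve fresh-enough cached data without calling upstream at all
                # (this is what reduces inter-service traffic).
                if cached and now - cached[1] <= self.ttl:
                    return cached[0]
                if self.failures < self.failure_threshold:
                    try:
                        value = self.call(key)
                        self.cache[key] = (value, now)
                        self.failures = 0
                        return value
                    except Exception:
                        self.failures += 1
                # Soft circuit breaking: fall back to stale cached data instead
                # of returning a service-unavailable error.
                if cached:
                    return cached[0]
                raise RuntimeError("service unavailable and no cached response")

        # Example: a flaky upstream that succeeds once, then fails.
        responses = iter([{"price": 10}, RuntimeError("upstream down")])
        def flaky(_key):
            item = next(responses)
            if isinstance(item, Exception):
                raise item
            return item

        breaker = SoftCircuitBreaker(flaky, ttl_seconds=0.0)
        print(breaker.request("catalog"))   # fresh response from upstream, cached
        print(breaker.request("catalog"))   # upstream fails; stale cache is served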

    mck8s: An orchestration platform for geo-distributed multi-cluster environments

    Get PDF
    Following the adoption of cloud computing, the proliferation of cloud data centers across multiple regions, and the emergence of computing paradigms such as fog computing, there is a need for integrated and efficient management of geo-distributed clusters. Geo-distributed deployments suffer from resource fragmentation, as the resources in certain locations are over-allocated while others are under-utilized. Orchestration platforms such as Kubernetes and Kubernetes Federation offer the conceptual models and building blocks that can be used to build integrated solutions addressing this resource fragmentation challenge. In this work, we propose mck8s, an orchestration platform for multi-cluster applications on multiple geo-distributed Kubernetes clusters. It offers controllers that automatically place, scale, and burst multi-cluster applications across multiple geo-distributed Kubernetes clusters, allocating the requested resources to all incoming applications while making efficient use of resources. We designed mck8s to be easy to use by development and operations teams by adopting Kubernetes' design principles and manifest files. We evaluated mck8s on a geo-distributed experimental testbed in Grid'5000. Our results show that mck8s balances the resource allocation across multiple clusters and reduces the fraction of pending pods to 6%, as opposed to 65% for Kubernetes Federation under the same workload.
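
    A toy sketch of the multi-cluster placement decision: given each cluster's free capacity, place an application's requested pods on the cluster with the most free resources and burst the remainder to the next-best clusters when no single cluster can host everything. The cluster names, capacities, and greedy rule are illustrative assumptions, not mck8s's controller logic.

        def place_pods(requested_pods, cpu_per_pod, clusters):
            """Greedy placement: fill the cluster with the most free CPU first,
            then burst the remaining pods to the next-best clusters."""
            placement = {}
            remaining = requested_pods
            for name, free_cpu in sorted(clusters.items(), key=lambda kv: -kv[1]):
                if remaining == 0:
                    break
                fit = min(remaining, int(free_cpu // cpu_per_pod))
                if fit > 0:
                    placement[name] = fit
                    remaining -= fit
            if remaining:
                raise RuntimeError(f"{remaining} pods pending: not enough capacity")
            return placement

        # Hypothetical free CPU (in cores) per geo-distributed cluster.
        clusters = {"paris": 6.0, "nancy": 10.0, "lyon": 4.0}
        print(place_pods(requested_pods=16, cpu_per_pod=1.0, clusters=clusters))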
