
    Migrating Constant Bandwidth Servers on Multi-Cores

    This paper introduces a novel admission test for partitioned CBS reservations on multi-core systems that, upon the arrival of a new reservation, is capable of better exploiting the CPU capacity in cases in which tasks have only recently left the CPU (for example, due to termination or migration to a different CPU). This is particularly useful in highly dynamic scenarios (with frequent arrivals of new tasks or departures of existing ones), or when adaptive and possibly power-aware partitioning techniques (in which task migrations are triggered quite often to re-balance the workload among the available cores) are used.
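
    The conventional baseline such a test improves on is easy to state. Below is a minimal, hypothetical sketch (not the paper's test): per-core utilization-based admission under partitioned EDF, in which a departed reservation's bandwidth stays accounted until its 0-lag time, which is exactly the pessimism a smarter test can reduce.

        # Minimal, hypothetical sketch (not the paper's test): a departed
        # reservation keeps consuming admission bandwidth until its 0-lag time.
        def admit(core, new_reservation, now):
            # Reclaim bandwidth of departed reservations past their 0-lag time.
            core.departed = [r for r in core.departed if r.zero_lag > now]
            util = sum(r.runtime / r.period for r in core.active)
            util += sum(r.runtime / r.period for r in core.departed)
            util += new_reservation.runtime / new_reservation.period
            return util <= 1.0  # exact EDF utilization bound on one core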

    Towards a Holistic Cloud System with End-to-End Performance Guarantees

    Computing technologies are undergoing a relentless evolution on both the hardware and software sides, incorporating new mechanisms for low-latency networking, virtualization, operating systems, hardware acceleration, smart service orchestration, serverless computing, hybrid private-public Cloud solutions and others. Therefore, Cloud infrastructures are becoming increasingly attractive for deploying a wider and wider range of applications, including those with increasingly stringent timing constraints, such as the emerging use case of deploying time-critical applications. However, despite the availability of a number of public Cloud offerings, and of products (or open-source suites) for deploying in-house private Cloud infrastructures, there are still no readily available solutions for managing time-critical software components with predictable end-to-end timing requirements in the range of hundreds or even tens of milliseconds. The goal of this discussion is to present the multi-domain challenges associated with orchestrating a holistic Cloud system with end-to-end guarantees, which is the subject of my current PhD investigations.

    RT-Kubernetes - Containerized Real-Time Cloud Computing

    This paper presents RT-Kubernetes, a software architecture with the ability to deploy real-time software components within containers in cloud infrastructures. The deployment of containers with guaranteed CPU scheduling is obtained by using a hierarchical real-time scheduler based on the Linux SCHED_DEADLINE policy. Preliminary experimental results provide evidence that this new framework succeeds in providing timeliness guarantees in the target responsiveness range, while achieving strong temporal isolation among containers co-located on the same physical hosts.
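
    For reference, the underlying policy can be requested from user space with the sched_setattr system call. The following is a minimal sketch, assuming Linux on x86_64 (where sched_setattr is syscall number 314) and sufficient privileges; RT-Kubernetes' own container-level interface is not shown in the abstract.

        import ctypes, os

        class SchedAttr(ctypes.Structure):
            _fields_ = [
                ("size",           ctypes.c_uint32),
                ("sched_policy",   ctypes.c_uint32),
                ("sched_flags",    ctypes.c_uint64),
                ("sched_nice",     ctypes.c_int32),
                ("sched_priority", ctypes.c_uint32),
                ("sched_runtime",  ctypes.c_uint64),  # all times in ns
                ("sched_deadline", ctypes.c_uint64),
                ("sched_period",   ctypes.c_uint64),
            ]

        SCHED_DEADLINE = 6
        NR_SCHED_SETATTR = 314  # x86_64 syscall number

        def set_deadline(runtime_ms, deadline_ms, period_ms):
            libc = ctypes.CDLL(None, use_errno=True)
            attr = SchedAttr(size=ctypes.sizeof(SchedAttr),
                             sched_policy=SCHED_DEADLINE,
                             sched_runtime=runtime_ms * 1_000_000,
                             sched_deadline=deadline_ms * 1_000_000,
                             sched_period=period_ms * 1_000_000)
            # pid 0 = calling thread; flags = 0
            if libc.syscall(NR_SCHED_SETATTR, 0, ctypes.byref(attr), 0) != 0:
                err = ctypes.get_errno()
                raise OSError(err, os.strerror(err))

        # e.g. guarantee this thread 10 ms of CPU time every 100 ms:
        # set_deadline(10, 100, 100)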

    Container-Based Real-Time Scheduling in the Linux Kernel

    In recent years, there has been a growing interest in supporting component-based software development of complex real-time embedded systems. Techniques such as machine virtualisation have emerged as interesting mechanisms to enhance the security of these platforms, while real-time scheduling techniques have been proposed to guarantee temporal isolation of different virtualised components sharing the same physical resources. This combination also highlighted critical issues due to the overheads introduced by hypervisors, particularly for low-end embedded devices, prompting a deeper investigation of solutions based on lightweight virtualisation alternatives, such as containers. In this context, this paper proposes to use a real-time deadline-based scheduling policy built into the Linux kernel to provide temporal scheduling guarantees to different co-located containers. The proposed solution extends the SCHED_DEADLINE scheduling policy to schedule Linux control groups, allowing user threads to be scheduled with fixed priorities inside the control group scheduled by SCHED_DEADLINE. The proposed mechanism can be configured via control groups, and it is compatible with commonly used tools such as LXC, Docker and similar. This solution is compatible with existing hierarchical real-time scheduling analysis, and some experiments demonstrate consistency between theory and practice.
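
    The hierarchical analysis mentioned is commonly phrased in terms of the periodic resource model: a reservation supplying a budget Q every period P provides at least lsbf(t) = max(0, (Q/P)(t - 2(P - Q))) units of CPU time in any window of length t. Below is a minimal sketch of a fixed-priority schedulability check against this bound; the task model and candidate-point selection are simplified for illustration.

        import math

        def lsbf(t, Q, P):
            # Linear supply bound function of a periodic resource (Q every P).
            return max(0.0, (Q / P) * (t - 2 * (P - Q)))

        def demand(i, t, tasks):
            # tasks sorted by decreasing priority; each task is (C, T, D)
            C, T, D = tasks[i]
            return C + sum(math.ceil(t / Tj) * Cj for Cj, Tj, Dj in tasks[:i])

        def schedulable(tasks, Q, P):
            for i, (C, T, D) in enumerate(tasks):
                # candidate points: the deadline and higher-priority releases
                points = {D}
                for Cj, Tj, Dj in tasks[:i]:
                    points.update(k * Tj for k in range(1, math.floor(D / Tj) + 1))
                if not any(demand(i, t, tasks) <= lsbf(t, Q, P) for t in points):
                    return False
            return True

        # e.g. tasks (C=1, T=10, D=10) and (C=2, T=20, D=20)
        # inside a reservation of Q=4 every P=8:
        # print(schedulable([(1, 10, 10), (2, 20, 20)], 4, 8))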

    Heuristic partitioning of real-time tasks on multi-processors

    This paper tackles the problem of admitting real-time tasks onto a symmetric multi-processor platform where a partitioned EDF-based scheduler is used. We propose to combine a well-known utilization-based test for the first-fit partitioning strategy with a simple heuristic based on the number of tasks and exact knowledge of the utilization of the few biggest tasks. This results in an effective and efficient test that improves on the state of the art in terms of admitted tasks, as shown by extensive tests performed on task sets generated using the widely adopted randfixedsum algorithm.
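
    For reference, the plain first-fit strategy with the exact single-core EDF utilization bound can be sketched as follows (a minimal illustration; the paper's combined heuristic test is not reproduced here).

        # Minimal sketch: first-fit partitioning under EDF. Tasks are (C, T)
        # pairs; a task fits on a core while total utilization stays <= 1.
        def first_fit(tasks, n_cores):
            cores = [0.0] * n_cores       # utilization already placed per core
            placement = {}
            for idx, (C, T) in enumerate(tasks):
                u = C / T
                for core, load in enumerate(cores):
                    if load + u <= 1.0:   # exact EDF test on a single core
                        cores[core] += u
                        placement[idx] = core
                        break
                else:
                    return None           # admission fails
            return placement

        # e.g. first_fit([(2, 5), (3, 10), (1, 4)], 2)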

    Online Sensitivity Optimization in Differentially Private Learning

    Training differentially private machine learning models requires constraining an individual’s contribution to the optimization process. This is achieved by clipping the 2-norm of their gradient at a predetermined threshold prior to averaging and batch sanitization. The choice of threshold adversely influences optimization in two opposing ways: it either exacerbates the bias due to excessive clipping at lower values, or increases the sanitization noise at higher values. The optimal choice significantly hinges on factors such as the dataset and model architecture, and even varies within the same optimization run, demanding meticulous tuning usually accomplished through a grid search. In order to circumvent the privacy expense incurred by hyperparameter tuning, we present a novel approach to dynamically optimizing the clipping threshold. We treat this threshold as an additional learnable parameter, establishing a clean relationship between the threshold and the cost function. This allows us to optimize the former with gradient descent, with minimal repercussions on the overall privacy analysis. Our method is thoroughly assessed against alternative fixed and adaptive strategies across diverse datasets, tasks, model dimensions, and privacy levels. Our results indicate that it performs comparably or better in the evaluated scenarios, given the same privacy requirements.
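
    For context, the standard clipping-and-sanitization step with a fixed threshold C, the quantity the paper instead learns during training, looks roughly as follows (a minimal NumPy sketch, not the paper's implementation).

        import numpy as np

        def sanitize(per_sample_grads, C, noise_multiplier, rng):
            # per_sample_grads: array of shape (batch, dim)
            norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
            # scale each gradient so its 2-norm is at most C
            clipped = per_sample_grads * np.minimum(1.0, C / np.maximum(norms, 1e-12))
            mean = clipped.mean(axis=0)
            # Gaussian noise calibrated to the sensitivity C of the clipped sum
            noise = rng.normal(0.0, noise_multiplier * C / len(clipped),
                               size=mean.shape)
            return mean + noise

        # rng = np.random.default_rng(0)
        # g = rng.normal(size=(32, 10))          # toy per-sample gradients
        # step = sanitize(g, C=1.0, noise_multiplier=1.1, rng=rng)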

    Period Estimation for Linux-based Edge Computing Virtualization with Strong Temporal Isolation

    Virtualization of edge nodes is paramount to avoid their under-exploitation, allowing applications from different tenants to share the underlying computing platform. Nevertheless, enabling different applications to share the same hardware may expose them to uncontrolled mutual timing interference, as well as timing-related security attacks. Strong timing isolation through SCHED_DEADLINE reservations is an interesting solution to facilitate the safe and secure sharing of the processing platform; however, SCHED_DEADLINE reservations require proper parameter tuning that can be hard to achieve, especially in highly dynamic environments characterized by workloads that need to be served without accurate prior information about their timing. This paper presents an approach for estimating the periods of SCHED_DEADLINE reservations based on a spectral analysis of the activation pattern of the workload running in the reservation, which can be used to assign and refine reservation parameters in edge systems.
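
    A minimal sketch of the underlying idea: bin the activation timestamps into a 0/1 signal, take its FFT, and read the period off the dominant peak. The bin width and estimator details here are illustrative, not the paper's.

        import numpy as np

        def estimate_period(activations_s, bin_s=0.001):
            # activations_s: sorted activation timestamps in seconds
            n_bins = int(np.ceil(activations_s[-1] / bin_s)) + 1
            signal = np.zeros(n_bins)
            signal[(np.asarray(activations_s) / bin_s).astype(int)] = 1.0
            spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
            freqs = np.fft.rfftfreq(n_bins, d=bin_s)
            dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
            return 1.0 / dominant

        # e.g. a 10 ms periodic task with some activation jitter:
        # ts = np.arange(0, 1.0, 0.01) + np.random.normal(0, 0.0005, 100)
        # print(estimate_period(np.sort(ts)))   # ~0.01 s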

    Noisy Neighbors: Efficient membership inference attacks against LLMs

    The potential of transformer-based LLMs risks being hindered by privacy concerns, due to their reliance on extensive datasets that may include sensitive information. Regulatory measures like the GDPR and CCPA call for robust auditing tools to address potential privacy issues, with Membership Inference Attacks (MIA) being the primary method for assessing LLMs’ privacy risks. Unlike traditional MIA approaches, which often require computationally intensive training of additional models, this paper introduces an efficient methodology that generates noisy neighbors for a target sample by adding stochastic noise in the embedding space, requiring only that the target model be operated in inference mode. Our findings demonstrate that this approach closely matches the effectiveness of employing shadow models, showing its usability in practical privacy auditing scenarios.
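
    A minimal sketch of the idea, assuming a Hugging Face-style causal language model that accepts inputs_embeds (the noise scale, neighbor count, and the paper's calibration are not reproduced): a sample whose loss is clearly lower than that of its noisy neighbors is flagged as a likely training member.

        import torch

        def noisy_neighbor_score(model, input_ids, n_neighbors=8, sigma=0.01):
            # Embed the target sample once, then perturb in embedding space.
            embed = model.get_input_embeddings()(input_ids)   # (1, seq, dim)

            def lm_loss(inputs_embeds):
                out = model(inputs_embeds=inputs_embeds, labels=input_ids)
                return out.loss.item()

            with torch.no_grad():
                target = lm_loss(embed)
                neighbors = [
                    lm_loss(embed + sigma * torch.randn_like(embed))
                    for _ in range(n_neighbors)
                ]
            # positive score: the neighbors lose more than the target does,
            # hinting that the target was seen during training
            return sum(neighbors) / n_neighbors - target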

    Operating System Noise in the Linux Kernel

    As modern network infrastructure moves from hardware-based to software-based using Network Function Virtualization, a new set of requirements is placed on operating system developers. By using the real-time kernel options and the advanced CPU isolation features common to HPC use cases, Linux is becoming a central building block of this new architecture, which aims to enable a new set of low-latency networked services. Tuning Linux for these applications is not an easy task, as it requires a deep understanding of the Linux execution model and a mix of user-space tooling and tracing features. This paper discusses the internal aspects of Linux that influence Operating System Noise from a timing perspective. It also presents Linux’s osnoise tracer, an in-kernel tracer that enables the measurement of the Operating System Noise as observed by a workload, and the tracing of the sources of the noise, in an integrated manner, facilitating the analysis and debugging of the system. Finally, this paper presents a series of experiments demonstrating both Linux’s ability to deliver low OS noise (on the order of single-digit μs) and the ability of the proposed tool to provide precise information about the root causes of timing-related OS noise problems.
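
    For reference, the tracer is driven through tracefs. A minimal sketch follows, with paths and knob names as described in the kernel's osnoise-tracer documentation; the values are illustrative, root privileges are required, and the rtla(1) tool offers a friendlier front end to the same tracer.

        import time

        TRACEFS = "/sys/kernel/tracing"

        def write(path, value):
            with open(f"{TRACEFS}/{path}", "w") as f:
                f.write(value)

        write("current_tracer", "osnoise")
        write("osnoise/runtime_us", "950000")   # measure 950 ms ...
        write("osnoise/period_us", "1000000")   # ... out of every 1 s
        write("tracing_on", "1")
        time.sleep(5)                           # let the workload run
        write("tracing_on", "0")

        with open(f"{TRACEFS}/trace") as f:
            print(f.read())                     # per-period noise summaries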

    Analyzing Declarative Deployment Code with Large Language Models

    In the cloud-native era, developers have at their disposal an unprecedented landscape of services to build scalable distributed systems. The DevOps paradigm emerged as a response to the increasing need for better automation, capable of dealing with the complexity of modern cloud systems. For instance, Infrastructure-as-Code tools provide a declarative way to define, track, and automate changes to the infrastructure underlying a cloud application. Assuring the quality of this part of a code base is of the utmost importance. However, learning to produce robust deployment specifications is not an easy feat, and it is time-consuming for domain experts to conduct code reviews and transfer the appropriate knowledge to novice members of the team. Given the abundance of data generated throughout the DevOps cycle, machine learning (ML) techniques seem a promising way to tackle this problem. In this work, we propose an approach based on Large Language Models to analyze declarative deployment code and automatically provide QA-related recommendations to developers, so that they can benefit from established best practices and design patterns. We developed a prototype of the proposed ML pipeline and empirically evaluated our approach on a collection of Kubernetes manifests exported from a repository of internal projects at Nokia Bell Labs.
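
    A minimal, hypothetical sketch of such a pipeline (the prompt wording, the call_llm client, and the listed best practices are illustrative placeholders, not the paper's implementation):

        import yaml  # PyYAML, assumed available

        PROMPT = """You are reviewing a Kubernetes manifest for QA issues.
        Point out violations of common best practices (missing resource limits,
        missing liveness/readiness probes, a `latest` image tag, ...) and
        suggest concrete fixes.

        Manifest:
        {manifest}
        """

        def review_manifest(path, call_llm):
            # call_llm: any callable mapping a prompt string to a model
            # response string (hypothetical interface).
            with open(path) as f:
                text = f.read()
            list(yaml.safe_load_all(text))  # fail fast on malformed YAML
            return call_llm(PROMPT.format(manifest=text))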