
    Operating System Noise in the Linux Kernel

    As modern network infrastructure moves from hardware-based to software-based using Network Function Virtualization, a new set of requirements is raised for operating system developers. By using the real-time kernel options and advanced CPU isolation features common to HPC use-cases, Linux is becoming a central building block for this new architecture, which aims to enable a new set of low-latency networked services. Tuning Linux for these applications is not an easy task, as it requires a deep understanding of the Linux execution model and of the mix of user-space tooling and tracing features. This paper discusses the internal aspects of Linux that influence the Operating System Noise from a timing perspective. It also presents Linux’s osnoise tracer, an in-kernel tracer that enables the measurement of the Operating System Noise as observed by a workload and the tracing of the sources of the noise, in an integrated manner, facilitating the analysis and debugging of the system. Finally, this paper presents a series of experiments demonstrating both Linux’s ability to deliver low OS noise (on the order of single-digit μs) and the ability of the proposed tool to provide precise information about the root cause of timing-related OS noise problems.
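
    As a rough illustration of how the osnoise tracer described above is driven from user space, the following Python sketch selects the tracer through tracefs and reads back the trace buffer. It is a minimal sketch, assuming a recent kernel with the osnoise tracer built in, tracefs mounted at /sys/kernel/tracing, root privileges, and the runtime/period knob names documented for recent kernels; the exact file names should be checked against the running kernel.

        # Minimal sketch: enable the osnoise tracer via tracefs and read the trace.
        # Paths and knob names are assumptions based on recent kernel documentation.
        from pathlib import Path

        TRACEFS = Path("/sys/kernel/tracing")

        def enable_osnoise(runtime_us: int = 1_000_000, period_us: int = 1_000_000) -> None:
            # Select the osnoise tracer and set how long the measurement thread
            # runs within each period (the values here are illustrative defaults).
            (TRACEFS / "current_tracer").write_text("osnoise\n")
            (TRACEFS / "osnoise" / "runtime_us").write_text(f"{runtime_us}\n")
            (TRACEFS / "osnoise" / "period_us").write_text(f"{period_us}\n")
            (TRACEFS / "tracing_on").write_text("1\n")

        def read_trace() -> str:
            # The per-CPU osnoise summary lines end up in the regular trace buffer.
            return (TRACEFS / "trace").read_text()

        if __name__ == "__main__":
            enable_osnoise()
            print(read_trace())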

    Priority-Driven Differentiated Performance for NoSQL Database-As-a-Service

    Designing data stores for native Cloud Computing services brings a number of challenges, especially if the Cloud Provider wants to offer database services capable of controlling the response time for specific customers. These requests may come from heterogeneous data-driven applications with conflicting responsiveness requirements. For instance, a batch processing workload does not require the same level of responsiveness as a time-sensitive one. Their coexistence may interfere with the responsiveness of the time-sensitive workload, such as online video gaming, virtual reality, and cloud-based machine learning. This paper presents a modification to the popular MongoDB NoSQL database to enable differentiated per-user/request performance on a priority basis by leveraging CPU scheduling and synchronization mechanisms available within the Operating System. This is achieved with minimally invasive changes to the source code and without affecting the performance and behavior of the database when the new feature is not in use. The proposed extension has been integrated with the access-control model of MongoDB for secure and controlled access to the new capability. Extensive experimentation with realistic workloads demonstrates how the proposed solution reduces the response times for high-priority users/requests, with respect to lower-priority ones, in scenarios with mixed-priority clients accessing the data store.
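
    The abstract does not detail how the modified MongoDB maps priorities onto the Operating System, so the following Python fragment is only a hypothetical illustration of the kind of per-thread CPU scheduling control a server could apply when handling requests of different priority classes; the class names and numeric values are made up, and the real-time call requires CAP_SYS_NICE.

        # Hypothetical illustration (not the paper's MongoDB patch): mapping a
        # request's priority class onto Linux scheduling attributes for the
        # thread that serves it. Class names and numeric values are made up.
        import os
        import threading

        def apply_priority(priority_class: str) -> None:
            if priority_class == "high":
                # Real-time round-robin for latency-sensitive requests
                # (requires CAP_SYS_NICE); pid 0 targets the calling thread.
                os.sched_setscheduler(0, os.SCHED_RR, os.sched_param(10))
            else:
                # Best-effort with a positive nice value for batch requests.
                os.setpriority(os.PRIO_PROCESS, 0, 10)

        def handle_request(priority_class: str, work) -> None:
            apply_priority(priority_class)
            work()

        threading.Thread(target=handle_request, args=("high", lambda: None)).start()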

    Analyses of risks associated with radiation exposure from past major solar particle events

    Radiation exposures and cancer induction/mortality risks were investigated for several major solar particle events (SPEs). The SPEs included are: February 1956, November 1960, August 1972, October 1989, and the September, August, and October 1989 events combined. The three 1989 events were treated as one, since all three could affect a single lunar or Mars mission. A baryon transport code was used to propagate particles through aluminum and tissue shield materials. A free-space environment was utilized for all calculations. Results show that the 30-day blood-forming organs (BFO) limit of 25 rem was surpassed by all five events with 10 g/sq cm of shielding. The BFO limit is based on a depth dose at 5 cm of tissue, while a more detailed shield distribution of the BFOs was utilized here; a comparison between the two shows that the 5 cm depth value is slightly higher than the BFO dose. The annual limit of 50 rem was exceeded by the August 1972, October 1989, and three combined 1989 events with 5 g/sq cm of shielding. Cancer mortality risks ranged from 1.5 to 17 percent at 1 g/sq cm and from 0.5 to 1.1 percent behind 10 g/sq cm of shielding for the five events; these ranges correspond to a 45-year-old male. It is shown that secondary particles account for about one third of the total risk at 10 g/sq cm of shielding. Utilizing a computerized Space Shuttle shielding model to represent a typical spacecraft configuration in free space during the August 1972 SPE, average crew doses exceeded the BFO dose limit.

    The effect of longitudinal rails on an air cavity stepped planing hull

    The use of ventilated hulls is rapidly expanding. However, experimental and numerical analyses are still very limited, particularly for high-speed vessels and for stepped planing hulls. In this work, the authors present a comparison between towing tank tests and CFD analyses carried out on a single-stepped planing hull provided with forced ventilation on the bottom. The boat has the same geometry as those presented by the authors in other works, but with the addition of longitudinal rails. In particular, the study addresses the effect of the rails on the bottom of the hull in terms of drag and wetted surface assessment. The computational methodology is based on the URANS equations with multiphase models for high-resolution interface capture between air and water. The tests were performed at seven velocities and six airflow rates, plus the no-air-injection condition. Compared to flat-bottomed hulls, a higher incidence of numerical ventilation and air–water mixing effects was observed. At the same time, no major differences were noted in terms of the ability to drag the flow aft at low speeds. Results in terms of drag reduction, wetted surface, and its shape are discussed.

    Predictive auto-scaling with OpenStack Monasca

    Cloud auto-scaling mechanisms are typically based on reactive automation rules that scale a cluster whenever some metric, e.g., the average CPU usage among instances, exceeds a predefined threshold. Tuning these rules becomes particularly cumbersome when scaling up a cluster involves non-negligible times to bootstrap new instances, as frequently happens in production cloud services. To deal with this problem, we propose an architecture for auto-scaling cloud services based on the status into which the system is expected to evolve in the near future. Our approach leverages time-series forecasting techniques, like those based on machine learning and artificial neural networks, to predict the future dynamics of key metrics, e.g., resource consumption metrics, and to apply a threshold-based scaling policy to them. The result is a predictive automation policy that is able, for instance, to automatically anticipate peaks in the load of a cloud application and trigger appropriate scaling actions ahead of time to accommodate the expected increase in traffic. We prototyped our approach as an open-source OpenStack component that relies on, and extends, the monitoring capabilities offered by Monasca, resulting in the addition of predictive metrics that can be leveraged by orchestration components like Heat or Senlin. We show experimental results using a recurrent neural network and a multi-layer perceptron as predictors, compared with a simple linear regression and a traditional non-predictive auto-scaling policy. The proposed framework, however, allows for easy customization of the prediction policy as needed.
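
    The core idea, applying the usual threshold rule to a forecast of the metric rather than to its current value, can be sketched in a few lines. The snippet below uses a plain linear extrapolation as a stand-in for the RNN/MLP predictors evaluated in the paper; the horizon and thresholds are illustrative, and the real component operates on Monasca metrics rather than an in-memory list.

        # Sketch of a predictive threshold policy: forecast the metric a few
        # steps ahead and apply the scale-up/scale-down rule to the forecast.
        import numpy as np

        def forecast_linear(history: list[float], horizon: int) -> float:
            # Fit a straight line to the recent samples and extrapolate ahead.
            t = np.arange(len(history))
            slope, intercept = np.polyfit(t, np.array(history), deg=1)
            return slope * (len(history) - 1 + horizon) + intercept

        def scaling_decision(cpu_history, horizon=5, up=80.0, down=20.0):
            predicted = forecast_linear(cpu_history, horizon)
            if predicted > up:
                return "scale_up"
            if predicted < down:
                return "scale_down"
            return "hold"

        # Example: rising load that has not yet crossed the threshold now,
        # but is predicted to cross it within the horizon.
        print(scaling_decision([40, 45, 52, 58, 65, 71, 77]))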

    Thermographic analysis during tensile tests and fatigue assessment of S355 steel

    Structural S355 steel is widely applied in various sectors. Its fatigue properties are of fundamental importance and extremely time-consuming to assess. The aim of this research activity is to apply the Static Thermographic Method during tensile tests and to correlate the temperature trend with the fatigue properties of the same steel. The Digital Image Correlation (DIC) and Infrared Thermography (IR) techniques were used during all static tests. The Digital Image Correlation technique allowed the detection of displacements and strains, and thus the evaluation of the mechanical properties of the material. Traditional fatigue tests were also performed in order to evaluate the stress versus number of cycles to failure curve of the same steel. The value of the fatigue limit obtained by the traditional procedure was compared with the values predicted by means of the Static Thermographic Method (STM) from tensile tests. The predicted values are in good agreement with the experimental values of fatigue life.
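
    As a simplified illustration of the reasoning behind the Static Thermographic Method, the sketch below fits the initial, thermoelastic (linear) part of the temperature-versus-stress curve recorded during a tensile test and reports the stress at which the measured trend departs from it; the fit window, tolerance, and data handling are assumptions, not the paper's actual procedure.

        # Simplified illustration of the STM idea: fit the initial linear
        # (thermoelastic) portion of the temperature vs. stress data and find
        # the first point that deviates from that trend beyond a tolerance.
        import numpy as np

        def deviation_stress(stress, delta_t, linear_fraction=0.4, tol=0.05):
            stress = np.asarray(stress, dtype=float)
            delta_t = np.asarray(delta_t, dtype=float)
            n0 = max(2, int(len(stress) * linear_fraction))
            # Linear fit over the portion assumed to be purely thermoelastic.
            slope, intercept = np.polyfit(stress[:n0], delta_t[:n0], deg=1)
            expected = slope * stress + intercept
            # The first sample whose deviation exceeds the tolerance gives the
            # stress level taken as the estimate related to the fatigue limit.
            deviating = np.nonzero(np.abs(delta_t - expected) > tol)[0]
            return stress[deviating[0]] if deviating.size else None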

    An Evaluation of Adaptive Partitioning of Real-Time Workloads on Linux

    This paper provides an open implementation and an experimental evaluation of an adaptive partitioning approach for scheduling real-time tasks on symmetric multicore systems. The proposed technique is based on combining partitioned EDF scheduling with an adaptive migration policy that moves tasks across processors only when strictly needed to respect their temporal constraints. The implementation of the technique within the Linux kernel, via modifications to the SCHED_DEADLINE code base, is presented. Extensive experimentation has been conducted by applying the technique on a real multicore platform with several randomly generated synthetic task sets. The obtained experimental results highlight that the approach exhibits promising performance in scheduling real-time workloads on a real system, with a greatly reduced number of migrations compared to the original global EDF available in SCHED_DEADLINE.
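
    To convey the adaptive-partitioning idea at a high level, the sketch below keeps each task on one CPU and considers a migration only when its current CPU can no longer accommodate it, using a simple per-CPU utilization bound as the admission test. This is a conceptual sketch, not the SCHED_DEADLINE implementation, whose in-kernel checks are more involved.

        # High-level sketch of the adaptive-partitioning policy: tasks stay on
        # their current CPU and migrate only when that CPU can no longer host
        # them. The per-CPU utilization bound is a simplification of the real
        # SCHED_DEADLINE admission checks.
        from dataclasses import dataclass, field

        @dataclass
        class Task:
            name: str
            runtime: float  # worst-case execution time C
            period: float   # period / relative deadline T

            @property
            def utilization(self) -> float:
                return self.runtime / self.period

        @dataclass
        class Cpu:
            tasks: list = field(default_factory=list)

            @property
            def utilization(self) -> float:
                return sum(t.utilization for t in self.tasks)

        def place(task, cpus, current=None):
            # Try the CPU the task already runs on first; migrate only if needed.
            order = ([current] if current is not None else []) + \
                    [i for i in range(len(cpus)) if i != current]
            for i in order:
                if cpus[i].utilization + task.utilization <= 1.0:  # EDF bound per CPU
                    cpus[i].tasks.append(task)
                    return i
            raise RuntimeError("no CPU can accommodate the task")

        cpus = [Cpu(), Cpu()]
        place(Task("audio", runtime=2.0, period=10.0), cpus)       # lands on CPU 0
        place(Task("control", runtime=9.0, period=10.0), cpus, 0)  # migrated to CPU 1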

    Efficient Formal Verification for the Linux Kernel

    Formal verification of the Linux kernel has been receiving increasing attention in recent years, with the development of many models, from memory subsystems to the synchronization primitives of the real-time kernel. The effort in developing formal verification methods is justified considering the large code base, the complexity of the synchronization required in a monolithic kernel, and the support for multiple architectures, along with the usage of Linux in critical systems, from high-frequency trading to self-driving cars. Despite recent developments in the area, none of the proposed approaches are suitable and flexible enough to be applied efficiently to a running kernel. Aiming to fill this gap, this paper proposes a formal verification approach for the Linux kernel based on automata models. It presents a method to auto-generate verification code from an automaton, which can be integrated into a module and dynamically added into the kernel for efficient on-the-fly verification of the system, using in-kernel tracing features. Finally, a set of experiments demonstrates the verification of three models, along with a performance analysis of the impact of the verification, in terms of latency and throughput of the system, showing the efficiency of the approach.
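
    The verification step itself can be conveyed with a small sketch: an automaton stored as a transition table that consumes trace events and flags any event not allowed in the current state. In the kernel the corresponding logic is auto-generated code attached to tracing hooks; the tiny example automaton below is hypothetical.

        # Conceptual sketch of runtime verification against an automaton:
        # each observed event either advances the state or is reported as a
        # violation. The "wakeup" automaton below is a made-up example.
        class Monitor:
            def __init__(self, initial, transitions):
                self.state = initial
                self.transitions = transitions  # {(state, event): next_state}

            def process(self, event) -> bool:
                key = (self.state, event)
                if key not in self.transitions:
                    print(f"violation: event '{event}' not allowed in state '{self.state}'")
                    return False
                self.state = self.transitions[key]
                return True

        wakeup = Monitor("sleeping", {
            ("sleeping", "wakeup"): "runnable",
            ("runnable", "schedule_in"): "running",
            ("running", "schedule_out"): "runnable",
            ("running", "sleep"): "sleeping",
        })

        for ev in ["wakeup", "schedule_in", "sleep"]:  # a valid event sequence
            wakeup.process(ev)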

    Untangling the intricacies of thread synchronization in the PREEMPT-RT linux kernel

    This article proposes an automata-based model for describing and validating the behavior of threads in the Linux PREEMPT-RT kernel on a single-core system. The automata model defines the events and how they influence the timeline of threads' execution, comprising the preemption control, interrupt handlers, interrupt control, scheduling, and locking. This article also presents the extension of the Linux trace features that enables tracing of the kernel events used in the modeling. The model and the tracing tool were used, initially, to validate the model, but preliminary results were enough to point to two problems in the Linux kernel. Finally, the analysis of the events involved in the activation of the highest-priority thread is presented in terms of necessary and sufficient conditions, describing the delays incurred in this operation at the same granularity used by kernel developers, and showing how it is possible to take advantage of the model for analyzing the thread wake-up latency without any need to inspect the corresponding kernel code.

    Single-mode regime in large-mode-area rare-earth-doped rod-type PCFs

    In this paper, large-mode-area, double-cladding, rare-earth-doped photonic crystal fibers are investigated in order to understand how the refractive index distribution and the mode competition given by the amplification can assure single-mode propagation. Fibers with different core diameters, i.e., 35, 60, and 100 μm, are considered. The analysis of the mode effective index, overlap, effective area, gain, and power evolution along the doped fiber provides clear guidelines on the fiber physical characteristics to be matched in the fabrication process to obtain a truly or effectively single-mode output beam.
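
    For reference, two of the quantities analyzed, the mode effective area and the overlap with the doped region, are commonly defined as follows (standard definitions under the usual conventions, given here as an assumption; the paper's exact figures of merit may differ):

        A_{\mathrm{eff}} = \frac{\left( \iint |E(x,y)|^{2}\,\mathrm{d}x\,\mathrm{d}y \right)^{2}}{\iint |E(x,y)|^{4}\,\mathrm{d}x\,\mathrm{d}y},
        \qquad
        \Gamma = \frac{\iint_{\mathrm{doped}} |E(x,y)|^{2}\,\mathrm{d}x\,\mathrm{d}y}{\iint |E(x,y)|^{2}\,\mathrm{d}x\,\mathrm{d}y}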