ReTiF: A declarative real-time scheduling framework for POSIX systems
This paper proposes a novel framework providing a declarative interface to the real-time process scheduling services available in an operating system kernel. The main idea is to let applications declare their temporal requirements or characteristics without needing to know exactly which underlying scheduling algorithms the system offers. The proposed framework can handle such a set of heterogeneous requirements by configuring the platform and partitioning the requests among the available cores, so as to exploit the various scheduling disciplines available in the kernel and match application requirements in the best possible way. The framework is realized with a modular architecture in which different plugins independently handle specific real-time scheduling features. The architecture is designed to make its behavior easier to customize and to ease support for other operating systems through the introduction and configuration of additional plugins.
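To make the declarative idea concrete, here is a minimal Python sketch (not the actual ReTiF API; all names are hypothetical) of how declared temporal requirements might be matched to whatever scheduling policies the kernel happens to offer:

```python
# Illustrative sketch, not the ReTiF interface: an application declares its
# temporal requirements and a matcher picks the strongest available policy.
from dataclasses import dataclass

@dataclass
class Requirements:
    period_us: int        # activation period
    runtime_us: int       # worst-case budget per period
    hard: bool = False    # whether deadline misses are unacceptable

# Policies the kernel is assumed to offer, in decreasing strength.
AVAILABLE = ["SCHED_DEADLINE", "SCHED_FIFO", "SCHED_OTHER"]

def match_policy(req: Requirements) -> str:
    """Map declared requirements to the best suitable scheduling policy."""
    if req.hard and "SCHED_DEADLINE" in AVAILABLE:
        return "SCHED_DEADLINE"   # full runtime/deadline/period reservation
    if "SCHED_FIFO" in AVAILABLE:
        return "SCHED_FIFO"       # fixed priority derived from the period
    return "SCHED_OTHER"          # best-effort fallback

print(match_policy(Requirements(period_us=10_000, runtime_us=2_000, hard=True)))
```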
Assessment of Damage Evolution in Sandwich Composite Material Subjected to Repeated Impacts by Means of Optical Measurements
Abstract In the last decade, sandwich composite materials have seen increasing use in the design of racing boats. The main reasons are their high strength-to-weight ratio, low density, excellent durability, and versatility. Knowledge of the impact response is very important when designing racing boats. The aim of the present study is to investigate the impact energy absorption capability of a sandwich composite material used for offshore vessels in the UIM (Union Internationale Motonautique) Championship. The material analysed in this study is a sandwich manufactured with the hand lay-up technique. In the first phase, the damage caused by a single impact was assessed with an optical measurement technique. In a second phase, the damage evolution due to repeated impacts was analysed with the same technique.
Analyzing Declarative Deployment Code with Large Language Models
In the cloud-native era, developers have at their disposal an unprecedented landscape of services for building scalable distributed systems. The DevOps paradigm emerged in response to the growing need for better automation, capable of dealing with the complexity of modern cloud systems. For instance, Infrastructure-as-Code tools provide a declarative way to define, track, and automate changes to the infrastructure underlying a cloud application. Assuring the quality of this part of a code base is of utmost importance. However, learning to produce robust deployment specifications is no easy feat, and it is time-consuming for domain experts to conduct code reviews and transfer the appropriate knowledge to novice members of the team. Given the abundance of data generated throughout the DevOps cycle, machine learning (ML) techniques seem a promising way to tackle this problem. In this work, we propose an approach based on Large Language Models to analyze declarative deployment code and automatically provide QA-related recommendations to developers, so that they can benefit from established best practices and design patterns. We developed a prototype of our proposed ML pipeline and empirically evaluated our approach on a collection of Kubernetes manifests exported from a repository of internal projects at Nokia Bell Labs.
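As a hypothetical illustration of the kind of QA recommendations such a pipeline might emit for a Kubernetes manifest, the following Python sketch uses a few hand-written best-practice rules as a stand-in for the LLM (the manifest and rules are invented for the example):

```python
# Hypothetical sketch: best-practice recommendations for a Kubernetes
# manifest. Simple hand-written rules stand in for the paper's LLM pipeline.
import yaml  # PyYAML

MANIFEST = """
apiVersion: apps/v1
kind: Deployment
metadata: {name: demo}
spec:
  template:
    spec:
      containers:
      - name: app
        image: demo:latest
"""

def recommend(doc: dict) -> list[str]:
    tips = []
    for c in doc["spec"]["template"]["spec"]["containers"]:
        if c["image"].endswith(":latest"):
            tips.append(f"{c['name']}: pin the image tag instead of ':latest'")
        if "resources" not in c:
            tips.append(f"{c['name']}: set resource requests and limits")
        if "livenessProbe" not in c:
            tips.append(f"{c['name']}: add a liveness probe")
    return tips

for tip in recommend(yaml.safe_load(MANIFEST)):
    print("recommendation:", tip)
```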
XPySom: High-performance self-organizing maps
In this paper, we introduce XPySom, a new open-source Python implementation of the well-known Self-Organizing Maps (SOM) technique. It is designed to achieve high performance on a single node, exploiting widely available Python libraries for vector processing on multi-core CPUs and GP-GPUs. We present results from an extensive experimental evaluation of XPySom against widely used open-source SOM implementations, showing that it outperforms the available alternatives. Our experiments on the Extended MNIST open data set show speed-ups of about 7x with multi-core acceleration and about 100x with GP-GPU acceleration over the best open-source multi-core implementations we could find, while achieving the same accuracy in terms of quantization error.
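The quantization error mentioned above is a standard SOM quality metric; a minimal NumPy sketch of how it is computed (the codebook here is a random stand-in, not XPySom's API) might look like:

```python
# Minimal sketch of the quantization-error metric: the mean distance between
# each sample and its best-matching unit (BMU) in the SOM codebook.
import numpy as np

def quantization_error(codebook: np.ndarray, data: np.ndarray) -> float:
    """codebook: (rows, cols, dim) SOM weights; data: (n, dim) samples."""
    units = codebook.reshape(-1, codebook.shape[-1])             # flatten grid
    dists = np.linalg.norm(data[:, None, :] - units[None], axis=-1)
    return float(dists.min(axis=1).mean())                       # mean BMU distance

rng = np.random.default_rng(0)
som = rng.random((10, 10, 4))          # stand-in for a trained 10x10 SOM
print(quantization_error(som, rng.random((256, 4))))
```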
Comparative Evaluation of Kernel Bypass Mechanisms for High-performance Inter-container Communications
This work presents a framework for evaluating the performance of various virtual switching solutions, each widely adopted on Linux to provide virtual network connectivity to containers in high-performance scenarios, such as Network Function Virtualization (NFV). We present results from the use of this framework for a quantitative comparison of software-based and hardware-accelerated virtual switches on a real platform with respect to a number of key metrics, namely network throughput, latency, and scalability.
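One of the key metrics above, latency, is commonly measured as a round-trip time between two containers across the virtual switch; a minimal illustrative probe (not the framework's own tooling; the peer address is hypothetical and must run a UDP echo service) could be:

```python
# Illustrative latency probe, not the paper's framework: measure UDP
# round-trip times to an echo endpoint reachable through the virtual switch.
import socket, statistics, time

def udp_rtt_us(addr=("10.0.0.2", 9000), samples=1000):  # hypothetical peer
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts = []
    for _ in range(samples):
        t0 = time.perf_counter_ns()
        sock.sendto(b"ping", addr)
        sock.recvfrom(64)                      # echoed back by the peer
        rtts.append((time.perf_counter_ns() - t0) / 1000)
    return statistics.median(rtts), max(rtts)

median, worst = udp_rtt_us()
print(f"median {median:.1f} us, worst {worst:.1f} us")
```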
Extending OpenStack Monasca for Predictive Elasticity Control
Traditional auto-scaling approaches are conceived as reactive automations, typically triggered when predefined thresholds are breached by resource consumption metrics. Managing such rules at scale is cumbersome, especially when resources require non-negligible time to be instantiated. This paper introduces an architecture for predictive cloud operations, which enables orchestrators to apply time-series forecasting techniques to estimate the evolution of relevant metrics and to take decisions based on the predicted state of the system. In this way, they can anticipate load peaks and trigger appropriate scaling actions in advance, so that new resources are available when needed. The proposed architecture is implemented in OpenStack, extending the monitoring capabilities of Monasca by injecting short-term forecasts of standard metrics. We use our architecture to implement predictive scaling policies leveraging linear regression, autoregressive integrated moving average, feed-forward, and recurrent neural network (RNN) models. We then evaluate their performance on a synthetic workload, comparing them to a traditional reactive policy. To assess the ability of the different models to generalize to unseen patterns, we also evaluate them on traces from a real content delivery network (CDN) workload. The RNN model exhibits the best overall performance in terms of prediction error, observed client-side response latency, and forecasting overhead. The implementation of our architecture is open-source.
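The simplest of the predictors listed above, linear regression, already conveys the idea of injecting a short-term forecast as a new metric; a minimal sketch (metric name and horizon are illustrative, not Monasca's actual configuration) follows:

```python
# Minimal sketch of the idea: fit a short sliding window of a CPU metric and
# inject the value predicted h steps ahead as a new "forecast" metric.
import numpy as np

def forecast(window: np.ndarray, horizon: int = 5) -> float:
    """Least-squares linear trend over the window, extrapolated ahead."""
    slope, intercept = np.polyfit(np.arange(len(window)), window, deg=1)
    return float(slope * (len(window) - 1 + horizon) + intercept)

cpu = np.array([41, 44, 48, 53, 57, 63, 68, 74], dtype=float)  # toy samples
print("predicted cpu utilization =", forecast(cpu))            # label illustrative
```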
A Framework for Comparative Evaluation of High-Performance Virtualized Networking Mechanisms
This paper presents an extension to a software framework designed to evaluate the efficiency of different software-based and hardware-accelerated virtual switches, each commonly adopted on Linux to provide virtual network connectivity to containers in high-performance scenarios, such as Network Function Virtualization (NFV). We present results from the use of our tools, showing the performance of multiple high-performance networking frameworks on a specific platform and comparing the collected data for various key metrics, namely throughput, latency, and scalability, with respect to the required computational power.
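Relating throughput to the required computational power suggests an efficiency metric such as forwarding rate per fully-used CPU core; a small illustrative computation (the figures are invented) might be:

```python
# Illustrative efficiency metric: packets forwarded per second, normalized by
# the CPU time actually consumed by the switching path during the run.
def throughput_per_core(rx_packets: int, duration_s: float,
                        cpu_seconds: float) -> float:
    pps = rx_packets / duration_s           # raw forwarding rate
    cores = cpu_seconds / duration_s        # average cores kept busy
    return pps / cores                      # rate per fully-used core

# e.g. ~148.8 Mpps sustained for 10 s while consuming 25 s of CPU time
print(f"{throughput_per_core(1_488_000_000, 10.0, 25.0) / 1e6:.1f} Mpps/core")
```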
Thermographic Analysis During Tensile Tests and Fatigue Assessment of S355 Steel
Abstract Structural S355 steel is widely applied in various sectors. Its fatigue properties are of fundamental importance, yet extremely time-consuming to assess. The aim of this research activity is to apply the Static Thermographic Method (STM) during tensile tests and to correlate the temperature trend with the fatigue properties of the same steel. The Digital Image Correlation (DIC) and Infrared Thermography (IR) techniques were used during all static tests. Digital Image Correlation allowed the detection of displacements and strain, and thus the evaluation of the mechanical properties of the material. Traditional fatigue tests were also performed to obtain the stress versus number-of-cycles-to-failure curve of the same steel. The fatigue limit obtained by the traditional procedure was compared with the values predicted by the Static Thermographic Method from the tensile tests. The predicted values are in good agreement with the experimental values of fatigue life.
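A common way to exploit the temperature trend in thermographic fatigue assessment is to locate the point where the temperature-versus-stress curve changes slope; a minimal sketch of that idea with invented toy data (not the paper's measurements or procedure) is:

```python
# Sketch of the slope-change idea behind thermographic fatigue assessment:
# fit two lines to a toy temperature-vs-stress curve and take their
# intersection as the estimated fatigue limit.
import numpy as np

stress = np.array([100, 150, 200, 250, 300, 350, 400, 450.0])  # MPa (toy)
temp   = np.array([0.1, 0.15, 0.2, 0.25, 0.9, 1.6, 2.3, 3.0])  # delta-T K (toy)

best = None
for k in range(2, len(stress) - 1):                 # try each breakpoint
    a1, b1 = np.polyfit(stress[:k], temp[:k], 1)    # low-stress branch
    a2, b2 = np.polyfit(stress[k:], temp[k:], 1)    # high-stress branch
    resid = (np.sum((np.polyval((a1, b1), stress[:k]) - temp[:k]) ** 2)
             + np.sum((np.polyval((a2, b2), stress[k:]) - temp[k:]) ** 2))
    if best is None or resid < best[0]:
        best = (resid, (b2 - b1) / (a1 - a2))       # the two lines cross here

print(f"estimated fatigue limit ~ {best[1]:.0f} MPa")
```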
Predictive auto-scaling with OpenStack Monasca
Cloud auto-scaling mechanisms are typically based on reactive automation rules that scale a cluster whenever some metric, e.g., the average CPU usage among instances, exceeds a predefined threshold. Tuning these rules becomes particularly cumbersome when scaling up a cluster involves non-negligible times to bootstrap new instances, as frequently happens in production cloud services. To deal with this problem, we propose an architecture for auto-scaling cloud services based on the status into which the system is expected to evolve in the near future. Our approach leverages time-series forecasting techniques, like those based on machine learning and artificial neural networks, to predict the future dynamics of key metrics, e.g., resource consumption metrics, and applies a threshold-based scaling policy to them. The result is a predictive automation policy that is able, for instance, to automatically anticipate peaks in the load of a cloud application and trigger appropriate scaling actions ahead of time to accommodate the expected increase in traffic. We prototyped our approach as an open-source OpenStack component that relies on, and extends, the monitoring capabilities offered by Monasca, adding predictive metrics that can be leveraged by orchestration components such as Heat or Senlin. We show experimental results using a recurrent neural network and a multi-layer perceptron as predictors, compared with a simple linear regression and a traditional non-predictive auto-scaling policy. The proposed framework also allows for easy customization of the prediction policy as needed.
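The core of such a predictive policy is simply a threshold rule evaluated on the forecast rather than on the observed metric; a minimal sketch (thresholds and the scale-in safeguard are illustrative choices, not the paper's policy) is:

```python
# Sketch of a threshold rule applied to the *predicted* metric instead of the
# observed one, so scaling fires before the load peak actually arrives.
def scale_decision(predicted_cpu: float, observed_cpu: float,
                   up: float = 80.0, down: float = 30.0) -> str:
    if predicted_cpu > up:        # a peak is expected soon: add capacity now
        return "scale-out"
    if predicted_cpu < down and observed_cpu < down:
        return "scale-in"         # shrink only when both agree it is safe
    return "hold"

print(scale_decision(predicted_cpu=85.0, observed_cpu=62.0))  # -> scale-out
```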
The ALTCRISS project on board the International Space Station
The ALTCRISS project aims to perform a long-term survey of the radiation environment on board the International Space Station. Measurements are being performed with active and passive devices in different locations and orientations within the Russian segment of the station. The goal is a detailed evaluation of the differences in particle fluence and nuclear composition due to the different shielding materials and to the attitude of the station. The Sileye-3/Alteino detector is used to identify nuclei up to iron in the energy range above 60 MeV/n. Several passive dosimeters (TLDs, CR39) are also placed at the same location as the Sileye-3 detector. Polyethylene shielding is periodically interposed in front of the detectors to evaluate its effectiveness against the nuclear component of the cosmic radiation. The project was submitted to ESA in reply to the 2004 Announcement of Opportunity in the Life and Physical Sciences, and data taking began in December 2005. Dosimeters and data cards are rotated every six months: so far, three launches of dosimeters and data cards have been performed, with returns at the end of expeditions 12 and 13.
Comment: Accepted for publication in Advances in Space Research. http://dx.doi.org/10.1016/j.asr.2007.04.03