141 research outputs found

    Workload characterization: a survey

    Full text link

    The Resource Usage Aware Backfilling

    Full text link
    Abstract. Job scheduling policies for HPC centers, especially backfilling-based policies, have been studied extensively in recent years, almost always by means of simulation tools. All existing simulators use the runtime (either estimated or real) provided in the workload as the basis of their simulations. In our previous work we analyzed the impact on system performance of the resource sharing (memory bandwidth) of running jobs by adding a new resource model to the Alvio simulator. Based on these studies we proposed the LessConsume and LessConsume Threshold resource selection policies, both oriented to reducing the saturation of shared resources and thereby increasing system performance. The results showed that both resource allocation policies can improve system performance by taking into account where jobs are finally allocated. Building on the LessConsume Threshold resource selection policy, we propose a new backfilling strategy: the Resource Usage Aware Backfilling job scheduling policy. This is a backfilling-based scheduling policy in which the algorithms that decide which job is executed, and how jobs are backfilled, are driven by different Threshold configurations. This backfilling variant considers how the shared resources are used by the scheduled jobs: rather than backfilling the first job that can be moved to the run queue based on job arrival time or job size, it looks ahead to the next queued jobs and tries to allocate those that would experience a lower penalized runtime caused by resource sharing saturation. In the paper we demonstrate how the exchange of scheduling information between the local resource manager and the scheduler can substantially improve system performance when resource sharing is considered. We show that it achieves response time performance close to that of Shortest Job First Backfilling with First Fit (oriented to improving the start time of allocated jobs) while providing a qualitative improvement in the number of killed jobs and in the percentage of penalized runtime.
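
    The following is a minimal, hypothetical Python sketch of the look-ahead selection step described in this abstract; the Candidate fields, the saturation penalty model, and the penalized_runtime/pick_backfill_job names are illustrative assumptions, not the Alvio simulator's actual code or API.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Candidate:
        job_id: int
        estimated_runtime: float      # seconds, as provided in the workload trace
        bandwidth_demand: float       # fraction of a node's memory bandwidth the job needs
        fits_backfill_window: bool    # can start now without delaying the head-of-queue reservation

    def penalized_runtime(c: Candidate, free_bandwidth: float, threshold: float) -> float:
        """Apply a saturation penalty when the job's demand exceeds the bandwidth
        still available and the excess is above the (LessConsume-style) threshold."""
        saturation = max(0.0, c.bandwidth_demand - free_bandwidth)
        penalty = saturation if saturation > threshold else 0.0
        return c.estimated_runtime * (1.0 + penalty)

    def pick_backfill_job(queue: List[Candidate],
                          free_bandwidth: float,
                          threshold: float = 0.15) -> Optional[Candidate]:
        """Among the jobs that fit in the backfill window, return the one with the
        lowest penalized runtime instead of the first one by arrival time or size."""
        eligible = [c for c in queue if c.fits_backfill_window]
        if not eligible:
            return None
        return min(eligible, key=lambda c: penalized_runtime(c, free_bandwidth, threshold))

    # Example: three queued jobs, 40% of the node's memory bandwidth still free.
    queue = [
        Candidate(1, 3600.0, bandwidth_demand=0.7, fits_backfill_window=True),
        Candidate(2, 4000.0, bandwidth_demand=0.3, fits_backfill_window=True),
        Candidate(3, 1800.0, bandwidth_demand=0.9, fits_backfill_window=False),
    ]
    print(pick_backfill_job(queue, free_bandwidth=0.4).job_id)   # job 2: no saturation penalty

    The only difference from plain first-fit backfilling in this sketch is the final min() over the penalized runtime of the eligible jobs rather than taking the first eligible job in queue order.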

    Induction of neurotrophin expression via human adult mesenchymal stem cells: implication for cell therapy in neurodegenerative diseases.

    Get PDF
    In animal models of neurological disorders such as cerebral ischemia, Parkinson's disease, and spinal cord lesions, transplantation of mesenchymal stem cells (MSCs) has been reported to improve functional outcome. Three mechanisms have been suggested for the effects of the MSCs: transdifferentiation of the grafted cells with replacement of degenerating neural cells, cell fusion, and neuroprotection of the dying cells. Here we demonstrate that a restricted number of cells with differentiated astroglial features can be obtained from human adult MSCs (hMSCs), both in vitro using different induction protocols and in vivo after transplantation into the developing mouse brain. We then examined the in vitro differentiation capacity of the hMSCs in coculture with slices of neonatal brain cortex. In this condition the hMSCs did not show any neuronal transdifferentiation but expressed the low-affinity (NGFRp75) and high-affinity (trkC) neurotrophin receptors and released nerve growth factor (NGF) and neurotrophin-3 (NT-3). The same neurotrophin expression was demonstrated 45 days after the intracerebral transplantation of hMSCs into nude mice, with surviving astroglial cells. These data further confirm the limited capability of adult hMSCs to differentiate into neurons, whereas they differentiated into astroglial cells. Moreover, the secretion of neurotrophic factors combined with activation of the specific receptors of transplanted hMSCs demonstrated an alternative mechanism for neuroprotection of degenerating neurons. These findings further define the transplantation potential of hMSCs for treating neurological disorders.

    Formazione e certificazione informatica nelle scuole superiori

    Get PDF
    This article presents the results of a statistical survey and of a set of interviews with students on the topic of computer science training and certification in secondary schools in three regions: Lazio, Lombardy, and Puglia.

    Analyzing PICL trace data with MEDEA

    Full text link

    A Methodological Framework for AI-Assisted Security Assessments of Active Directory Environments

    No full text
    The pervasiveness of complex technological infrastructures and services, coupled with the continuously evolving threat landscape, poses new and sophisticated security risks. These risks are mostly associated with many diverse vulnerabilities related to software or hardware security flaws, misconfigurations, and operational weaknesses. In this scenario, timely assessment and mitigation of the security risks affecting technological environments are of paramount importance. To cope with these compelling issues, we propose an AI-assisted methodological framework aimed at evaluating whether the target environment is vulnerable or safe. The framework is based on the combined application of graph-based and machine learning techniques. More precisely, the components of the target, together with their vulnerabilities, are represented by graphs whose analysis identifies the attack paths associated with potential security threats. Machine learning techniques classify these paths and provide the security assessment of the target. The experimental evaluation of the proposed framework was performed on 220 artificially generated Active Directory environments, half of which were injected with vulnerabilities. The results of the classification process were generally good. For example, the F1-score obtained by the Random Forest classifier for the assessment of vulnerable networks was equal to 0.91. These results suggest that our approach could be applied to automate the security assessment procedures of complex networked environments.
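
    A hedged Python sketch of the two-stage idea described in this abstract (a graph of the target environment, attack-path features, Random Forest classification) follows; the toy graph generator, the feature set, and the train/test split are illustrative assumptions, not the paper's actual dataset or feature engineering.

    import networkx as nx
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    def path_features(g, source, target):
        """Summarize the attack paths from an entry node to a high-value target."""
        paths = list(nx.all_simple_paths(g, source, target, cutoff=5))
        if not paths:
            return [0, 0.0, 0.0]
        lengths = [len(p) - 1 for p in paths]
        return [len(paths), float(np.mean(lengths)), float(min(lengths))]

    def toy_environment(vulnerable, rng):
        """Toy AD-like graph; 'vulnerable' ones get extra edges standing in for misconfigurations."""
        n = 20
        g = nx.gnp_random_graph(n, 0.18 if vulnerable else 0.08,
                                seed=int(rng.integers(1 << 30)), directed=True)
        return nx.relabel_nodes(g, {0: "user", n - 1: "domain_admin"})

    rng = np.random.default_rng(42)
    X, y = [], []
    for label in (0, 1) * 110:                                  # 220 environments, half vulnerable
        g = toy_environment(bool(label), rng)
        X.append(path_features(g, "user", "domain_admin"))
        y.append(label)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("F1 on the toy data:", round(f1_score(y_te, clf.predict(X_te)), 2))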

    PERF (Performance Evaluation of Complex Systems: Techniques, Methodologies and Tools)

    No full text
    The aim of this Project is to study the foundations of performance evaluation, i.e., reliability, efficacy, efficiency, safety, and other Quality of Service attributes, and to advance the state of the art of its methods and tools so as to make them able to tackle the complexity of the applications and systems driving the Information Society and easy to use even by non-specialists. The research activities of the 15 research groups involved in the Project will focus on measurement techniques and workload characterization, on formalisms for modeling quantitative and qualitative aspects of complex systems, and on solution methods for analytical, numerical and simulation models. In particular, the research is organized into five Work Packages. The activities carried out within these Work Packages deal with workload models; measurement techniques; models based on formalisms such as single queues, queueing networks, Markovian and non-Markovian stochastic processes, timed Petri nets, stochastic non-Markovian Petri nets, process algebras and fault trees; methodologies and tools to predict, at design time, the performance of the system under study; exact closed-form and iterative solutions; approximate solutions; simulation models of complex systems; and distributed simulations. Within each activity, new techniques and methodologies will be proposed. Tools for performance evaluation will be developed as an integral part of such activities.
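
    As a small illustration of the exact and iterative solution techniques for queueing networks mentioned above, the following Python sketch implements Mean Value Analysis for a closed product-form network of single-server stations; the service demands and example values are assumptions chosen for illustration, not a Project deliverable.

    def mva(service_demands, n_customers):
        """Exact Mean Value Analysis for a closed product-form network of
        single-server FCFS stations. service_demands[k] is the total service
        demand (visit ratio x service time, in seconds) at station k."""
        queue = [0.0] * len(service_demands)        # mean queue lengths with 0 customers
        resp, throughput = list(service_demands), 0.0
        for n in range(1, n_customers + 1):
            # An arriving customer sees the mean queue left by n-1 customers.
            resp = [d * (1.0 + q) for d, q in zip(service_demands, queue)]
            throughput = n / sum(resp)              # Little's law applied to the whole network
            queue = [throughput * r for r in resp]  # Little's law applied per station
        return queue, resp, throughput

    # Example: a CPU (0.05 s) and two disks (0.08 s and 0.04 s) with 10 circulating jobs.
    queues, resp_times, x = mva([0.05, 0.08, 0.04], 10)
    print(f"throughput = {x:.2f} jobs/s, bottleneck utilization = {x * 0.08:.2f}")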