
    A differentiated proposal of three dimension I/O performance characterization model focusing on storage environments

    The I/O bottleneck remains a central issue in high-performance environments. Cloud computing, high-performance computing (HPC), and big data environments share many underlying difficulties in delivering data at the rate requested by high-performance applications. This increases the possibility of bottlenecks throughout the application feeding process, caused by the underlying hardware devices located in the storage system layer. In recent years, many researchers have proposed solutions to improve the I/O architecture through different approaches. Some take advantage of hardware devices, while others focus on sophisticated software approaches. However, due to the complexity of dealing with high-performance environments, creating solutions to improve I/O performance in both software and hardware is challenging and offers researchers many opportunities. Classifying these improvements along different dimensions allows researchers to understand how they have been built over the years and how the field progresses. It also allows future efforts to be directed to research topics that have developed at a lower rate, balancing the overall development process. This research presents a three-dimension characterization model for classifying research works on I/O performance improvements for large-scale storage computing facilities. The classification model can also be used as a guideline framework to summarize research, providing an overview of the current scenario. We also used the proposed model to perform a systematic literature mapping covering ten years of research on I/O performance improvements in storage environments. This study classified hundreds of distinct research works, identifying which hardware devices, software, and storage systems received the most attention over the years, which proposal elements were most researched, and where these elements were evaluated. To justify the importance of this model and the development of solutions targeting I/O performance improvements, we evaluated a subset of these improvements using a real and complete experimentation environment, Grid5000. Analyses over different scenarios using a synthetic I/O benchmark demonstrate how throughput and latency behave when performing different I/O operations using distinct storage technologies and approaches.
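
    The abstract does not include the benchmark itself, but the measurement it describes is easy to picture. Below is a minimal Python sketch of a synthetic sequential-write probe reporting the two parameters the study analyzes, throughput and latency; the file path, block size, and data volume are illustrative assumptions, not values from the thesis, and a production benchmark (e.g. fio) would also bypass the page cache.

```python
import os
import time

def sequential_write_benchmark(path, block_size=1 << 20, total_bytes=1 << 28):
    """Write total_bytes in block_size chunks, timing each write+fsync.

    path, block_size, and total_bytes are hypothetical parameters chosen
    for illustration only.
    """
    block = os.urandom(block_size)
    latencies = []
    start = time.perf_counter()
    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            t0 = time.perf_counter()
            f.write(block)
            f.flush()
            os.fsync(f.fileno())  # force the block to the storage device
            latencies.append(time.perf_counter() - t0)
            written += block_size
    elapsed = time.perf_counter() - start
    throughput_mib = (written / elapsed) / (1 << 20)
    avg_latency_ms = 1000 * sum(latencies) / len(latencies)
    return throughput_mib, avg_latency_ms

if __name__ == "__main__":
    tp, lat = sequential_write_benchmark("/tmp/io_probe.bin")
    print(f"throughput: {tp:.1f} MiB/s, mean write latency: {lat:.3f} ms")
```

    Repeating this probe across storage technologies (local disk, SSD, a parallel file system) is the kind of comparison the abstract refers to, although the study's actual methodology may differ.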

    Draft Function Allocation Framework and Preliminary Technical Basis for Advanced SMR Concepts of Operations


    Architectural Support for Software Performance in Continuous Software Engineering: A Systematic Mapping Study

    The continuous software engineering paradigm is gaining popularity in modern development practices, where the interleaving of design and runtime activities is induced by the continuous evolution of software systems. In this context, performance assessment is not easy, but recent studies have shown that architectural models evolving with the software can support this goal. In this paper, we present a mapping study aimed at classifying existing scientific contributions that deal with architectural support for performance-targeted continuous software engineering. We applied the systematic mapping methodology to an initial set of 215 potentially relevant papers and selected 66 primary studies, which we analyzed to characterize and classify the current state of research. This classification helps to focus on the main aspects being considered in this domain and, above all, on the emerging findings and implications for future research.

    Supervisory Control System Architecture for Advanced Small Modular Reactors


    Managing contamination delay to improve Timing Speculation architectures

    Timing Speculation (TS) is a widely known method for realizing better-than-worst-case systems. Aggressive clocking, realizable by TS, enables systems to operate beyond specified safe frequency limits to effectively exploit data-dependent circuit delay. However, the range of aggressive clocking for performance enhancement under TS is restricted by short paths. In this paper, we show that increasing the lengths of the circuit's short paths increases the effectiveness of TS, leading to performance improvement. We also propose an algorithm to efficiently add delay buffers to selected short paths while keeping the area penalty down. We present our algorithm's results for the ISCAS-85 suite and show that it is possible to increase the circuit contamination delay by up to 30% without affecting the propagation delay. We also explore the possibility of increasing short-path delays further by relaxing the constraint on propagation delay, and we analyze the performance impact.
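
    As a rough illustration of the idea (not the paper's algorithm), the sketch below greedily pads edges of a combinational DAG with buffer delay: a candidate pad is accepted only if it raises the contamination (shortest-path) delay without pushing the propagation (longest-path) delay past its original value. The graph, delay values, and greedy selection rule are all assumptions made for illustration.

```python
from collections import defaultdict

def arrival_times(order, preds, gate_delay, pad):
    """Earliest/latest signal arrival per node; pad[(u, v)] is extra
    buffer delay inserted on edge (u, v). order is a topological order."""
    earliest, latest = {}, {}
    for n in order:
        if not preds[n]:  # primary input
            earliest[n] = latest[n] = gate_delay[n]
        else:
            earliest[n] = gate_delay[n] + min(earliest[p] + pad[(p, n)] for p in preds[n])
            latest[n] = gate_delay[n] + max(latest[p] + pad[(p, n)] for p in preds[n])
    return earliest, latest

def pad_short_paths(order, edges, gate_delay, outputs, buf_delay=1):
    """Greedy short-path padding: keep buffering whichever edge best raises
    the contamination delay while the propagation delay stays unchanged."""
    preds = defaultdict(list)
    for u, v in edges:
        preds[v].append(u)
    pad = {e: 0 for e in edges}
    _, latest = arrival_times(order, preds, gate_delay, pad)
    t_prop = max(latest[o] for o in outputs)  # bound that must not grow
    while True:
        earliest, _ = arrival_times(order, preds, gate_delay, pad)
        cont = min(earliest[o] for o in outputs)
        best = None
        for e in edges:
            pad[e] += buf_delay  # tentatively buffer this edge
            e2, l2 = arrival_times(order, preds, gate_delay, pad)
            pad[e] -= buf_delay
            if max(l2[o] for o in outputs) <= t_prop:
                gain = min(e2[o] for o in outputs) - cont
                if gain > 0 and (best is None or gain > best[1]):
                    best = (e, gain)
        if best is None:
            return pad, cont, t_prop
        pad[best[0]] += buf_delay

# Tiny example: the b -> out wire is the short path and absorbs the buffers.
order = ["a", "b", "g1", "out"]
edges = [("a", "g1"), ("b", "out"), ("g1", "out")]
gate_delay = {"a": 0, "b": 0, "g1": 3, "out": 1}
pad, cont, prop = pad_short_paths(order, edges, gate_delay, outputs=["out"])
print(pad, cont, prop)  # contamination delay rises to match prop = 4
```

    The paper's algorithm targets real netlists and area cost, so its selection strategy is certainly more refined than this exhaustive greedy loop; the sketch only shows why padding short paths widens the safe window for aggressive clocking.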

    Quality Time: A simple online technique for quantifying multicore execution efficiency

    In order to increase utilization, multicore processors share memory resources among an increasing number of cores. This sharing leads to memory interference, which in turn leads to a non-uniform degradation in the execution of concurrent applications, even in the presence of fairness mechanisms. Many utilities rely on application CPU Time both for measuring resource usage and for inferring application progress. These utilities are therefore directly affected by the distorting effects of multicore interference on the representativeness of CPU Time as a proxy for progress. This makes reasoning about myriad properties, from fairness to QoS to throughput optimality, very difficult in consolidated environments such as IaaS. We introduce the notion of Quality Time, which provides a measure of application progress analogous to CPU Time's measure of resource usage, and we propose a simple online sampling-based technique to approximate Quality Time with high accuracy. We have implemented three user-space tools called Qtime, Qtop, and Qplacer. Qtime can attach to an application to calculate its Quality Time online, Qtop is a dashboard that monitors the Quality Times of all applications on the system, and Qplacer leverages Quality Time information to find better application placements and improve overall system quality. With Quality Time, we are able to reduce the error in inferring execution efficiency from 150.3% to 25.1% in the worst case and from 30.0% to 7.5% on average. Qplacer can increase average system throughput by 3.2% when compared to static application placement.
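
    The abstract does not spell out the estimator, but one plausible reading is to scale CPU Time by the ratio of the observed instruction rate to a solo, interference-free baseline, so that a CPU-second spent making slow progress counts for less. The sketch below implements that assumed rule only; the sample values and baseline are hypothetical, and the paper's actual estimator may differ.

```python
def quality_time(samples, baseline_ips):
    """Accumulate an assumed Quality Time from periodic samples.

    Each sample is (cpu_seconds_delta, instructions_delta) for one
    sampling interval; baseline_ips is instructions per CPU-second
    measured in a solo, interference-free run of the same application.
    """
    q = 0.0
    for cpu_dt, insns in samples:
        if cpu_dt <= 0:
            continue
        observed_ips = insns / cpu_dt
        # CPU seconds are discounted when interference slows progress.
        q += cpu_dt * (observed_ips / baseline_ips)
    return q

# Example: an app is billed 4 CPU-seconds, but memory interference halved
# its instruction rate in the last two intervals (all numbers invented).
baseline = 2.0e9  # solo instructions per CPU-second
samples = [(1.0, 2.0e9), (1.0, 2.0e9), (1.0, 1.0e9), (1.0, 1.0e9)]
print(quality_time(samples, baseline))  # 3.0 quality-seconds vs 4.0 CPU-seconds
```

    A tool like the paper's Qtime would obtain the per-interval counters online, e.g. from hardware performance counters, rather than from a precomputed list as here.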

    An Integrated Framework for Staffing and Shift Scheduling in Hospitals

    Over the years, one of the main concerns confronting hospital management has been optimising staffing and scheduling decisions. The consequences of inappropriate staffing can adversely impact hospital performance, patient experience, and staff satisfaction alike. A comprehensive review of the literature (more than 1300 journal articles) is presented in a new taxonomy of three dimensions: problem contextualisation, solution approach, and evaluation perspective and uncertainty. Utilising Operations Research methods, solutions can make a positive contribution in underpinning staffing and scheduling decisions. However, there are still opportunities to integrate decision levels, incorporate practitioners' views in solution architectures, consider the impact of staff behaviour, and offer comprehensive applied frameworks. Practitioners' perspectives have been collated through an extensive exploratory study in Irish hospitals. A preliminary questionnaire indicated the need for effective staffing and scheduling decisions before semi-structured interviews took place with twenty-five managers (fourteen Directors and eleven head nurses) across eleven major acute Irish hospitals (about 50% of healthcare service deliverers). Thematic analysis produced five key themes: demand for care, staffing and scheduling issues, organisational aspects, management concern, and technology enablement, in addition to other factors that can contribute to the problem, such as coordination, environment complexity, understaffing, variability, and lack of decision support. A multi-method approach including data analytics, modelling and simulation, machine learning, and optimisation has been employed to deliver an adequate staffing and shift scheduling framework; a minimal example of the optimisation component is sketched below. A comprehensive portfolio of critical factors regarding patients, staff, and hospitals is included in the decision. The framework was piloted in the Emergency Department of one of the leading and busiest university hospitals in Dublin (Tallaght Hospital). Solutions resulting from the framework (i.e. new shifts, staff workload balance, increased demands) showed significant improvement in all key performance measures (e.g. patient waiting time, staff utilisation). The hospital's management team endorsed the solution framework and is currently discussing enablers to implement the recommendations.
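
    As a hint of what the optimisation component of such a framework might look like, here is a minimal shift-covering integer program in PuLP. The nurses, shifts, demand figures, and workload cap are invented for illustration and are far simpler than the thesis's portfolio of patient, staff, and hospital factors.

```python
# Minimal shift-covering ILP sketch (illustrative data, not from the thesis).
import pulp

nurses = ["n1", "n2", "n3", "n4"]
shifts = ["early", "late", "night"]
demand = {"early": 2, "late": 2, "night": 1}  # required staff per shift
max_shifts_per_nurse = 2                      # workload-balance cap

# Binary decision variable: x[n][s] = 1 if nurse n works shift s.
x = pulp.LpVariable.dicts("assign", (nurses, shifts), cat="Binary")
model = pulp.LpProblem("shift_scheduling", pulp.LpMinimize)

# Objective: use as few assignments as possible (a stand-in for cost).
model += pulp.lpSum(x[n][s] for n in nurses for s in shifts)

# Cover the care demand on every shift.
for s in shifts:
    model += pulp.lpSum(x[n][s] for n in nurses) >= demand[s]

# Balance workload: cap the shifts any one nurse works.
for n in nurses:
    model += pulp.lpSum(x[n][s] for s in shifts) <= max_shifts_per_nurse

model.solve(pulp.PULP_CBC_CMD(msg=False))
for n in nurses:
    print(n, [s for s in shifts if x[n][s].value() == 1])
```

    In the thesis's multi-method design, the demand figures fed into a model like this would come from the data-analytics and forecasting components rather than being fixed constants.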