
    Capture and analysis of the NFS workload of an ISP email service

    Master's thesis in Information Security (Segurança Informática), Universidade de Lisboa, Faculdade de Ciências, 2009. The aims of this thesis are to capture a real-world NFS workload of an ISP email service, convert the traces to a more useful and flexible format, and analyze the characteristics of the workload. No published work has previously analyzed a large-scale, real-world ISP email workload. A new study will help us understand how such a workload differs from previously studied network file system workloads and what characterizes a real-world email workload. The storage traces are analyzed to find properties that future systems should support or exploit. In this thesis, we provide an in-depth explanation of how we captured data at high rates, which involves several challenges; we identify the bottlenecks we faced and explain how we circumvented them. Due to the large size of the captured workload and the limited available storage, we needed to convert the traces to a more compact and flexible format so that we could analyze the workload efficiently. We describe the challenges of analyzing large datasets and the techniques we used. Since the workload contains sensitive information about users' mailboxes, we had to anonymize it; we describe what needed to be anonymized and how it was done. This was an important step in getting permission from the ISP to publish the anonymized traces, which will be available for free download. We also performed several analyses that demonstrate unique characteristics of the studied workload, such as the periodic nature of file system activity, the file size distribution of all accessed files, the sequentiality of accessed data, and the most common types of attachments found in a typical mailbox.
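    As a hedged illustration of the anonymization step described above (the thesis's actual trace format and field names are not given here, so the record layout, field names, and key below are all assumptions), a keyed hash can map sensitive identifiers to stable pseudonyms while leaving timing, operation types, offsets, and sizes intact:

```python
import hashlib
import hmac

# Hypothetical key; in practice it would be generated randomly and kept private.
SECRET_KEY = b"replace-with-a-random-secret-key"

def pseudonym(token: str) -> str:
    """Map a sensitive token to a stable pseudonym via HMAC-SHA256.

    A keyed hash (rather than a plain hash) resists dictionary attacks on
    low-entropy fields such as usernames, while the same input always maps
    to the same output, preserving the relationships the analysis needs.
    """
    return hmac.new(SECRET_KEY, token.encode(), hashlib.sha256).hexdigest()[:16]

def anonymize_record(record: dict) -> dict:
    """Anonymize one parsed NFS operation record; field names are assumed."""
    out = dict(record)
    for field in ("filehandle", "uid", "filename"):  # sensitive identifiers
        if field in out:
            out[field] = pseudonym(str(out[field]))
    return out  # timestamps, operation types, offsets, and sizes are kept

# Example: a single READ operation before anonymization.
op = {"ts": 1247436000.25, "op": "READ", "filehandle": "0x1f2e3d4c",
      "uid": "alice", "offset": 4096, "count": 8192}
print(anonymize_record(op))
```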

    Dynamic service chain composition in virtualised environment

    Network Function Virtualisation (NFV) has improved the flexibility of network service provisioning and reduced the time to market of new services. NFV leverages virtualisation technology to decouple the software implementation of network appliances from the physical devices on which they run. However, with the emergence of this paradigm, providing data centre applications with adequate network performance becomes challenging: virtualised environments can cause network congestion, decrease throughput, and hurt the end-user experience. Moreover, applications usually communicate through sequences of virtual network functions (VNFs), aka service chains, for policy enforcement and for performance and security enhancement, which increases management complexity at the network level. To address this problem, existing studies have proposed high-level approaches to VNF chaining and placement that improve service chain performance, but they treat VNFs as homogeneous entities regardless of their specific characteristics. They overlook the VNFs' distinct behaviour under traffic load and how their underlying implementations determine resource usage. Our research aims to fill this gap by identifying particular patterns in production, widely used VNFs and by proposing a categorisation that helps reduce network latency along the chains. Based on experimental evaluation, we classify firewalls, NATs, IDS/IPSs, and flow monitors into I/O-bound and CPU-bound functions. The former category is mainly sensitive to throughput, in packets per second, while the performance of the latter is primarily affected by network bandwidth, in bits per second. We then correlate the VNF category with the characteristics of the traversing traffic, which dictates how the service chains are composed. We propose a heuristic called Natif, for VNF-Aware VNF insTantIation and traFfic distribution, to reconcile the discrepancy in VNF requirements based on the category they belong to and, ultimately, to reduce network latency. We deployed Natif in an OpenStack-based environment and compared it to a network-aware VNF composition approach. Our results show a decrease in latency of around 188% on average without sacrificing throughput.
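    To make the categorisation idea concrete, here is a minimal sketch (Natif's actual algorithm is not reproduced here; the capacity figures, names, and scaling rule are illustrative assumptions) of scaling each VNF in a chain along the dimension its category marks as binding: packet rate for I/O-bound functions, bit rate for CPU-bound ones:

```python
import math
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    IO_BOUND = "io"    # sensitive to packet rate (packets per second)
    CPU_BOUND = "cpu"  # sensitive to bit rate (bits per second)

@dataclass
class VNF:
    name: str
    max_pps: float  # assumed per-instance packet-rate capacity
    max_bps: float  # assumed per-instance bit-rate capacity
    category: Category

def instances_needed(vnf: VNF, offered_pps: float, offered_bps: float) -> int:
    """Scale a VNF out along the dimension its category marks as binding."""
    if vnf.category is Category.IO_BOUND:
        load = offered_pps / vnf.max_pps   # I/O-bound: packet rate dominates
    else:
        load = offered_bps / vnf.max_bps   # CPU-bound: bandwidth dominates
    return max(1, math.ceil(load))

# Hypothetical chain and offered traffic (numbers are illustrative only).
chain = [
    VNF("firewall", max_pps=1.2e6, max_bps=9e9, category=Category.IO_BOUND),
    VNF("ids",      max_pps=2.0e6, max_bps=2e9, category=Category.CPU_BOUND),
]
for vnf in chain:
    print(vnf.name, instances_needed(vnf, offered_pps=1.5e6, offered_bps=4e9))
# firewall -> 2 instances (pps-bound), ids -> 2 instances (bps-bound)
```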

    Assert(!Defined(Sequential I/O))

    The term sequential I/O is widely used in systems research with the intuitive understanding that it means consecutive access. A survey of the literature, though, shows that this intuitive understanding has translated into numerous, inconsistent definitions. Since sequential I/O is such a fundamental concept in systems research, we believe a sequentiality metric should allow us to compare access patterns in a meaningful way. We explore access properties that could be incorporated into potential metrics for sequential I/O, including access size, gaps between accesses, multi-stream behaviour, and inter-arrival time. We then analyze hundreds of large-scale storage traces and discuss how the potential metrics compare. Interestingly, we find that I/O traces considered highly sequential by one metric can be highly random according to another. We further demonstrate that many plausible metrics are weakly correlated, though metrics weighted by size are more consistent. While there may not be a single metric for sequential I/O that is best in all cases, we believe systems researchers should more carefully consider, and state, which definition they use.
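    A small sketch makes the disagreement concrete. The two metrics below are hypothetical (not the paper's exact definitions): both count accesses whose offset continues the previous one, but one weights by access count and the other by bytes, and the same trace scores near 1.0 on the first and near 0 on the second:

```python
from typing import List, Tuple

Access = Tuple[int, int]  # (offset, size) in bytes; single stream assumed

def seq_fraction_by_count(trace: List[Access]) -> float:
    """Fraction of accesses whose offset continues the previous access."""
    hits = sum(1 for prev, cur in zip(trace, trace[1:])
               if cur[0] == prev[0] + prev[1])
    return hits / max(1, len(trace) - 1)

def seq_fraction_by_bytes(trace: List[Access]) -> float:
    """The same property, but weighting each access by its size in bytes."""
    hits = sum(cur[1] for prev, cur in zip(trace, trace[1:])
               if cur[0] == prev[0] + prev[1])
    total = sum(size for _, size in trace[1:])
    return hits / max(1, total)

# 100 consecutive 4 KiB reads followed by one far-away 100 MiB read:
trace = [(i * 4096, 4096) for i in range(100)] + [(10**12, 100 * 2**20)]
print(f"by count: {seq_fraction_by_count(trace):.2f}")  # ~0.99 -> "sequential"
print(f"by bytes: {seq_fraction_by_bytes(trace):.4f}")  # ~0.0039 -> "random"
```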

    An insight in cloud computing solutions for intensive processing of remote sensing data

    The investigation of Earth's surface deformation phenomena provides critical insights into several processes of great interest for science and society, especially from the perspective of further understanding the Earth system and the impact of human activities. The study of ground deformation can help in understanding the geophysical dynamics behind natural hazards such as earthquakes, volcanoes, and landslides. In this context, microwave space-borne Earth Observation (EO) techniques are very powerful instruments for estimating ground deformation. In particular, the Small BAseline Subset (SBAS) technique is regarded as one of the key methods for its ability to investigate surface deformation affecting large areas of the Earth with centimeter- to millimeter-level accuracy in different scenarios (volcanoes, tectonics, landslides, anthropogenically induced land motions). The current remote sensing scenario is characterized by the availability of huge archives of radar data, which will grow further with the advent of the Sentinel-1 satellites. The effective exploitation of this large amount of data requires both adequate computing resources and advanced algorithms able to properly exploit such facilities. In this work we concentrate on the use of the P-SBAS algorithm (a parallel version of SBAS) on cloud and HPC infrastructures to investigate the effectiveness of such technologies for EO applications. In particular, we demonstrate that cloud computing solutions represent a valid alternative for scientific applications and a promising research scenario: in all the experiments we conducted, the results of Parallel Small Baseline Subset (P-SBAS) processing show that cloud technologies are fully competitive in performance with an in-house HPC cluster solution.

    Building Computing-As-A-Service Mobile Cloud System

    The last five years have witnessed the proliferation of smart mobile devices, the explosion of mobile applications, and the rapid adoption of cloud computing in business, governmental, and educational IT deployments. There is also a growing trend of combining mobile computing and cloud computing into a new computing paradigm. This thesis envisions a future of mobile computing shaped primarily by three trends. First, cloud servers equipped with high-speed multi-core processors have become mainstream, while ARM-based servers are gaining popularity and virtualization on ARM systems is attracting wide attention. Second, high-speed Internet access is pervasive and highly available, so mobile devices can connect to the cloud anytime and anywhere. Third, cloud computing is reshaping the way computing resources are used: the classic pay/scale-as-you-go model allows hardware resources to be optimally allocated and well managed. These three trends lend credence to a new mobile computing model that combines a resource-rich cloud with less powerful mobile devices. In this model, mobile devices run a core virtualization hypervisor with virtualized phone instances, allowing pervasive access to more powerful, highly available virtual phone clones in the cloud. The centralized cloud, backed by rich computing and memory resources, hosts the virtual phone clones and repeatedly synchronizes their data changes with the virtual phone instances running on mobile devices. Users can flexibly isolate different computing environments. In this dissertation, we explore the opportunity of leveraging cloud resources for mobile computing for the purposes of energy saving, performance augmentation, and secure computing-environment isolation. We propose a framework that allows mobile users to seamlessly leverage the cloud to augment the computing capability of mobile devices and that makes it simpler for application developers to run their smartphone applications in the cloud without tedious application partitioning. The framework is built on virtualization on both the server side and the mobile devices, and has three building blocks: agile virtual machine deployment, efficient virtual resource management, and seamless mobile augmentation. We present the design, implementation, and evaluation of these three components and demonstrate the feasibility of the proposed mobile cloud model.

    Electromagnetic Radiation

    The application of electromagnetic radiation in modern life is one of the fastest-developing technologies. In this timely book, the authors comprehensively treat two integrated aspects of electromagnetic radiation: theory and application. It covers a wide range of practical topics, including medical treatment, telecommunication systems, and radiation effects. The sections are clearly presented and include state-of-the-art examples, making the book an indispensable reference for electromagnetic radiation applications.

    Fifth NASA Goddard Conference on Mass Storage Systems and Technologies

    This document contains copies of those technical papers received in time for publication prior to the Fifth Goddard Conference on Mass Storage Systems and Technologies, held September 17-19, 1996, at the University of Maryland University Conference Center in College Park, Maryland. As one of an ongoing series, this conference continues to serve as a unique medium for the exchange of information on topics relating to the ingestion and management of substantial amounts of data and the attendant problems involved. This year's discussion topics include storage architecture, database management, data distribution, file system performance and modeling, and optical recording technology. Also included is a paper on Application Programming Interfaces (APIs) for a Physical Volume Repository (PVR) as defined in Version 5 of the Institute of Electrical and Electronics Engineers (IEEE) Reference Model (RM), as well as papers on specific archives and storage products.

    The Third NASA Goddard Conference on Mass Storage Systems and Technologies

    This report contains copies of nearly all of the technical papers and viewgraphs presented at the Goddard Conference on Mass Storage Systems and Technologies held in October 1993. The conference served as an informational exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems involved. Discussion topics include the necessary use of computers in solving today's immensely complex problems, the need for greatly increased storage densities in both optical and magnetic recording media, currently popular storage media and the risk factors of magnetic media storage, and data archiving standards, including a talk on the current status of the IEEE Storage Systems Reference Model (RM). Additional topics address system performance, data storage system concepts, communications technologies, data distribution systems, data compression, and error detection and correction.