
    VXA: A Virtual Architecture for Durable Compressed Archives

    Data compression algorithms change frequently, and obsolete decoders do not always run on new hardware and operating systems, threatening the long-term usability of content archived using those algorithms. Re-encoding content into new formats is cumbersome, and highly undesirable when lossy compression is involved. Processor architectures, in contrast, have remained comparatively stable over recent decades. VXA, an archival storage system designed around this observation, archives executable decoders along with the encoded content it stores. VXA decoders run in a specialized virtual machine that implements an OS-independent execution environment based on the standard x86 architecture. The VXA virtual machine strictly limits access to host system services, making decoders safe to run even if an archive contains malicious code. VXA's adoption of a "native" processor architecture instead of type-safe language technology allows reuse of existing "hand-optimized" decoders in C and assembly language, and permits decoders access to performance-enhancing architecture features such as vector processing instructions. The performance cost of VXA's virtualization is typically less than 15% compared with the same decoders running natively. The storage cost of archived decoders, typically 30-130 KB each, can be amortized across many archived files sharing the same compression method.
    Comment: 14 pages, 7 figures, 2 tables
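    The sketch below is purely illustrative of the idea described in the abstract: an archive stores one decoder per compression method and many files that reference it, so the decoder's storage cost is amortized. The names (ArchivedDecoder, VxaArchive, run_in_sandbox, etc.) are invented for this example and do not reflect the actual VXA on-disk format or API; the sandboxed x86 execution step is only a placeholder.

```python
# Hypothetical archive layout in the spirit of VXA, not the real VXA format.
from dataclasses import dataclass, field


@dataclass
class ArchivedDecoder:
    """An x86 decoder binary stored once per compression method."""
    method: str          # e.g. "bzip2", "jpeg"
    x86_binary: bytes    # self-contained decoder, typically 30-130 KB


@dataclass
class ArchivedFile:
    """A compressed payload that references a decoder by method name."""
    name: str
    method: str
    payload: bytes


@dataclass
class VxaArchive:
    decoders: dict[str, ArchivedDecoder] = field(default_factory=dict)
    files: list[ArchivedFile] = field(default_factory=list)

    def add_decoder(self, decoder: ArchivedDecoder) -> None:
        # One decoder per method, amortized over every file using that method.
        self.decoders.setdefault(decoder.method, decoder)

    def add_file(self, file: ArchivedFile) -> None:
        if file.method not in self.decoders:
            raise ValueError(f"no archived decoder for method {file.method!r}")
        self.files.append(file)

    def extract(self, name: str) -> bytes:
        file = next(f for f in self.files if f.name == name)
        decoder = self.decoders[file.method]
        # In VXA the decoder runs inside an isolated x86 virtual machine with
        # no access to host OS services; here we only show the lookup step.
        return run_in_sandbox(decoder.x86_binary, file.payload)


def run_in_sandbox(decoder_binary: bytes, payload: bytes) -> bytes:
    """Placeholder for the sandboxed VM execution described in the paper."""
    raise NotImplementedError("sandboxed x86 execution is out of scope here")
```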

    Evaluation of load balancing approaches for Erlang concurrent application in cloud systems

    Cloud systems provide computing environments delivered as services: PaaS (platform as a service), SaaS (software as a service), and IaaS (infrastructure as a service). They allow multiple users to consume computing services through a browser, an alternative service model that shifts local computing workloads to a remote site. Cloud virtualization is a further characteristic: it delivers virtual computing services that imitate the functionality of physical computing resources, and it underpins elastic load balancing management for a flexible, on-demand service model. Virtualization lets organizations achieve high levels of reliability, accessibility, and scalability by executing applications on multiple resources simultaneously. In this paper we use a queuing model to study flexible load balancing and evaluate performance metrics such as mean queue length, throughput, mean waiting time, utilization, and mean traversal time. The model accounts for the arrival of concurrent applications following an Erlang distribution. Simulation results for these performance metrics are reported, and they indicate that in cloud systems both fairness and load balancing must be given significant consideration.
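    As a minimal sketch of how the listed metrics relate to one another, the code below computes mean queue length, throughput, mean waiting time, utilization, and mean traversal time for a simple M/M/c queue via the Erlang C formula. This is an assumption-laden simplification for illustration only: the paper's model considers Erlang-distributed arrivals and is evaluated by simulation, and the function name and parameters here are invented.

```python
# Closed-form M/M/c metrics (Erlang C); illustrative, not the paper's model.
from math import factorial


def mmc_metrics(arrival_rate: float, service_rate: float, servers: int) -> dict[str, float]:
    a = arrival_rate / service_rate          # offered load
    rho = a / servers                        # per-server utilization
    if rho >= 1.0:
        raise ValueError("system is unstable: utilization must be < 1")

    # Erlang C: probability that an arriving job has to wait.
    partial_sum = sum(a**k / factorial(k) for k in range(servers))
    last_term = a**servers / factorial(servers)
    p_wait = last_term / ((1 - rho) * partial_sum + last_term)

    wq = p_wait / (servers * service_rate - arrival_rate)  # mean waiting time
    w = wq + 1.0 / service_rate                            # mean traversal (sojourn) time
    lq = arrival_rate * wq                                 # mean queue length (Little's law)
    return {
        "utilization": rho,
        "throughput": arrival_rate,   # equals the arrival rate in steady state
        "mean_waiting_time": wq,
        "mean_traversal_time": w,
        "mean_queue_length": lq,
    }


if __name__ == "__main__":
    # Example: 8 VMs, 10 requests/s arriving, each VM serving 1.5 requests/s.
    print(mmc_metrics(arrival_rate=10.0, service_rate=1.5, servers=8))
```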

    Cloud computing resource scheduling and a survey of its evolutionary approaches

    A disruptive technology that is fundamentally transforming how computing services are delivered, cloud computing offers information and communication technology users a new level of convenience: resources consumed as services over the Internet. Because the cloud provides a finite pool of virtualized, on-demand resources, scheduling them optimally has become an essential and rewarding research topic, and a trend toward Evolutionary Computation (EC) algorithms is emerging rapidly. By analyzing the cloud computing architecture, this survey first presents a two-level taxonomy of cloud resource scheduling. It then paints a landscape of the scheduling problem and its solutions, and, following the taxonomy, systematically surveys state-of-the-art approaches. Looking forward, challenges and potential future research directions are identified, including real-time scheduling, adaptive dynamic scheduling, large-scale scheduling, multiobjective scheduling, and distributed and parallel scheduling. At the dawn of Industry 4.0, cloud computing scheduling for cyber-physical integration in the presence of big data is also discussed. Research in this area is only in its infancy, but with the rapid fusion of information and data technology, more exciting and agenda-setting topics are likely to emerge on the horizon.
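    To make the EC family of schedulers the survey discusses more concrete, here is a small sketch of a genetic algorithm that assigns tasks to virtual machines to minimize makespan. It is a generic illustration under assumed inputs: the task lengths, VM speeds, and all algorithm parameters (population size, mutation rate, selection scheme) are invented and do not come from any method covered in the survey.

```python
# Toy genetic algorithm for task-to-VM scheduling; illustrative only.
import random

TASK_LENGTHS = [12.0, 7.5, 30.0, 4.0, 18.0, 9.0, 22.0, 6.5]   # task sizes (e.g. MI)
VM_SPEEDS = [2.0, 1.0, 1.5]                                    # VM speeds (e.g. MIPS)


def makespan(assignment: list[int]) -> float:
    """Completion time of the busiest VM under a task-to-VM assignment."""
    loads = [0.0] * len(VM_SPEEDS)
    for task, vm in enumerate(assignment):
        loads[vm] += TASK_LENGTHS[task] / VM_SPEEDS[vm]
    return max(loads)


def evolve(generations: int = 200, pop_size: int = 30, mutation_rate: float = 0.1) -> list[int]:
    n_tasks, n_vms = len(TASK_LENGTHS), len(VM_SPEEDS)
    population = [[random.randrange(n_vms) for _ in range(n_tasks)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=makespan)             # lower makespan = fitter
        survivors = population[: pop_size // 2]   # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, n_tasks)    # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < mutation_rate:   # random reassignment mutation
                child[random.randrange(n_tasks)] = random.randrange(n_vms)
            children.append(child)
        population = survivors + children
    return min(population, key=makespan)


if __name__ == "__main__":
    best = evolve()
    print("best assignment:", best, "makespan:", round(makespan(best), 2))
```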