2,281 research outputs found

    VXA: A Virtual Architecture for Durable Compressed Archives

    Full text link
    Data compression algorithms change frequently, and obsolete decoders do not always run on new hardware and operating systems, threatening the long-term usability of content archived using those algorithms. Re-encoding content into new formats is cumbersome, and highly undesirable when lossy compression is involved. Processor architectures, in contrast, have remained comparatively stable over recent decades. VXA, an archival storage system designed around this observation, archives executable decoders along with the encoded content it stores. VXA decoders run in a specialized virtual machine that implements an OS-independent execution environment based on the standard x86 architecture. The VXA virtual machine strictly limits access to host system services, making decoders safe to run even if an archive contains malicious code. VXA's adoption of a "native" processor architecture instead of type-safe language technology allows reuse of existing "hand-optimized" decoders in C and assembly language, and permits decoders access to performance-enhancing architecture features such as vector processing instructions. The performance cost of VXA's virtualization is typically less than 15% compared with the same decoders running natively. The storage cost of archived decoders, typically 30-130KB each, can be amortized across many archived files sharing the same compression method. Comment: 14 pages, 7 figures, 2 tables
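    The archive layout this abstract describes lends itself to a small illustration. Below is a minimal Python sketch, with hypothetical names (ArchivedDecoder, run_in_sandbox) rather than the actual VXA on-disk format or API: each stored file records which bundled x86 decoder image can reconstruct it, extraction hands that image and the encoded bytes to a sandboxed VM instead of a host-installed codec, and a single decoder entry is shared by every file using the same compression method.

```python
# Minimal sketch of the VXA idea (hypothetical names, not the actual format):
# the archive is self-describing because it carries its own decoders.
from dataclasses import dataclass, field

@dataclass
class ArchivedDecoder:
    name: str          # e.g. "jpeg-decoder"
    x86_image: bytes   # decoder code executed only inside the sandboxed VM

@dataclass
class ArchivedFile:
    path: str
    encoded: bytes
    decoder: str       # name of the bundled decoder that reproduces the original bytes

@dataclass
class Archive:
    decoders: dict[str, ArchivedDecoder] = field(default_factory=dict)
    files: list[ArchivedFile] = field(default_factory=list)

    def extract(self, f: ArchivedFile, run_in_sandbox) -> bytes:
        # run_in_sandbox stands in for the virtual machine: it runs the decoder
        # image with no host OS access beyond the input bytes it is given and
        # the decoded output it returns.
        decoder = self.decoders[f.decoder]
        return run_in_sandbox(decoder.x86_image, f.encoded)
```

    Because decoders is keyed by name, the 30-130KB cost of one decoder image is paid once per compression method, not once per archived file.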

    50 years of isolation

    Get PDF
    The traditional means of isolating applications from each other is the “process” abstraction provided by the operating system. However, as applications now consist of multiple fine-grained components, the traditional process abstraction model is proving insufficient for ensuring this isolation. Statistics indicate that a high percentage of software failures occur due to the propagation of component failures. These observations are further bolstered by the attempts of modern Internet browser developers, for example, to adopt multi-process architectures in order to increase robustness. Therefore, a fresh look at the available options for isolating program components is necessary, and this paper provides an overview of previous and current research in this area.
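    As a concrete illustration of the multi-process designs mentioned above, here is a minimal, hypothetical Python sketch (not from the paper): a component runs in its own OS process, so a failure inside it surfaces to the caller as an error value instead of bringing down the hosting application.

```python
import multiprocessing as mp
import queue

def render_page(html: str, out: mp.Queue) -> None:
    # Stand-in for a fine-grained component that may fail on its own.
    if "trigger-bug" in html:
        raise SystemExit(1)            # simulated component crash
    out.put(f"<rendered {len(html)} bytes>")

def isolated_call(target, *args, timeout: float = 5.0):
    """Run a component in a child process; contain its failure instead of propagating it."""
    out: mp.Queue = mp.Queue()
    p = mp.Process(target=target, args=(*args, out))
    p.start()
    p.join(timeout)
    if p.is_alive():                   # hung component: kill it and report failure
        p.terminate()
        p.join()
        return None
    if p.exitcode != 0:
        return None                    # crashed component: the parent keeps running
    try:
        return out.get(timeout=1.0)
    except queue.Empty:
        return None

if __name__ == "__main__":
    print(isolated_call(render_page, "<p>hello</p>"))   # "<rendered 12 bytes>"
    print(isolated_call(render_page, "trigger-bug"))    # None; the caller survives
```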

    Classification of Existing Virtualization Methods Used in Telecommunication Networks

    Full text link
    This article surveys existing methods for virtualizing different kinds of resources. The positive and negative aspects of each method are analyzed, and the prospects of each approach are noted. An attempt is also made to classify virtualization methods according to their application domain, which makes it possible to identify the weaknesses that need to be optimized. Comment: 4 pages, 3 figures

    A family of droids -- Android malware detection via behavioral modeling: static vs dynamic analysis

    Full text link
    Following the increasing popularity of mobile ecosystems, cybercriminals have increasingly targeted them, designing and distributing malicious apps that steal information or cause harm to the device's owner. Aiming to counter them, detection techniques that model Android malware using either static or dynamic analysis have been proposed. While the pros and cons of these analysis techniques are known, they are usually compared in the context of their limitations, e.g., static analysis cannot capture runtime behaviors, full code coverage is usually not achieved during dynamic analysis, etc. In this paper, by contrast, we analyze the performance of static and dynamic analysis methods for Android malware detection and compare them in terms of their detection performance, using the same modeling approach. To this end, we build on MaMaDroid, a state-of-the-art detection system that relies on static analysis to create a behavioral model from the sequences of abstracted API calls. Then, aiming to apply the same technique in a dynamic analysis setting, we modify CHIMP, a platform recently proposed to crowdsource human inputs for app testing, in order to extract API call sequences from the traces produced while executing the app on a CHIMP virtual device. We call this system AuntieDroid and instantiate it using both automated (Monkey) and user-generated inputs. We find that combining both static and dynamic analysis yields the best performance, with F-measure reaching 0.92. We also show that static analysis is at least as effective as dynamic analysis, depending on how apps are stimulated during execution, and, finally, investigate the reasons for inconsistent misclassifications across methods. Accepted manuscript.
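    To make the shared modeling approach concrete, here is a minimal sketch (an assumed simplification, not the authors' code) of Markov-chain feature extraction over abstracted API calls: adjacent calls in a trace become transitions, and the row-normalized transition probabilities form the app's feature vector, regardless of whether the call sequence was obtained statically or from a dynamic run. The abstraction states below are illustrative, not the actual MaMaDroid state set.

```python
# Markov-chain features from a sequence of abstracted API calls (sketch).
from collections import Counter
from itertools import pairwise   # Python 3.10+

STATES = ["android", "java", "google", "self-defined", "obfuscated"]  # illustrative

def markov_features(call_sequence: list[str]) -> list[float]:
    """Row-normalized transition probabilities between abstracted call states."""
    counts = Counter(pairwise(call_sequence))   # (src, dst) -> occurrence count
    features = []
    for src in STATES:
        row = [counts[(src, dst)] for dst in STATES]
        total = sum(row) or 1                   # avoid division by zero for unused states
        features.extend(c / total for c in row)
    return features                             # length == len(STATES) ** 2

# Example: a short abstracted trace from one app; the resulting vector feeds a
# standard classifier (e.g. a random forest).
trace = ["self-defined", "android", "android", "java", "self-defined", "android"]
x = markov_features(trace)
```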

    Development of a virtualization systems architecture course for the information sciences and technologies department at the Rochester Institute of Technology (RIT)

    Get PDF
    Virtualization is a revolutionary technology that has changed the way computing is performed in data centers. By converting traditionally siloed computing assets into shared pools of resources, virtualization provides a considerable number of advantages, such as more efficient use of physical server resources, more efficient use of datacenter space, reduced energy consumption, simplified system administration, simplified backup and disaster recovery, and a host of others. Because of these advantages, companies and organizations of various sizes have either migrated their workloads to virtualized environments or are considering virtualization of their workloads. According to the Gartner Magic Quadrant for x86 Server Virtualization Infrastructure 2013, roughly two-thirds of x86 server workloads are virtualized [1]. The need for virtualization solutions has increased the demand for qualified professionals to plan, design, implement, and maintain virtualized infrastructures of different scales. Although universities are the main source of education for IT professionals, the field of information technology is so dynamic and changes so rapidly that not all universities can keep pace with the change. As a result, reflecting the latest technology used in the information technology industry in university curricula is a significant advantage for information technology programs. Taking into consideration the trend toward virtualization in computing environments and the great demand for virtualization professionals in the industry, the faculty of the Information Sciences and Technologies department at RIT decided to prepare a graduate course in the master's program in Networking and System Administration entitled "Virtualization Systems Architecture," which better prepares students to find a career in the field of enterprise computing. This research is composed of five chapters. It starts by briefly going through the history of computer virtualization, exploring when and why it came into existence and how it evolved. The second chapter goes through the challenges in virtualizing the x86 platform architecture and the solutions used to overcome those challenges. In the third chapter, various types of hypervisors are described, and the advantages and disadvantages of each are discussed. In the fourth chapter, the architecture and features of the two leading virtualization solutions are explored. In the final chapter, the research goes through the contents of the Virtualization Systems Architecture course.