
    Optimizing JavaScript Engines for Modern-day Workloads

    In recent years, web-based applications have seen a tremendous increase in popularity and usage. Applications such as presentation software and word processors, which were traditionally considered desktop applications, are being ported to the web by compiling them to JavaScript. Since JavaScript is the de facto language of the web, JavaScript engine performance significantly affects the overall web application experience. JavaScript, initially intended solely as a client-side scripting language for web browsers, is now also being used to implement server-side web applications (Node.js) that have traditionally been written in languages like Java. Web application developers expect C-like performance out of their applications, so there is a need to re-evaluate the optimization strategies implemented in modern-day engines. Thesis statement: I propose that by using run-time and ahead-of-time profiling and type specialization techniques, it is possible to improve the performance of JavaScript engines to cater to the needs of modern-day workloads. In this dissertation, we present an improved synergistic type specialization strategy for optimized JavaScript code execution, implemented on top of a research JavaScript engine called MuscalietJS. Our technique combines type feedback and type inference so that the two reinforce and augment each other. We then present a novel deoptimization strategy that enables type-specialized code generation on top of typed, stack-based virtual machines like the CLR. We also describe a server-side offline profiling technique that collects profile information for web applications, helping client-side JavaScript engines (running in the browser) avoid deoptimizations and improve application performance. Finally, we describe a technique to improve the performance of server-side JavaScript code by making use of intelligent profile caching and two new type stability heuristics.
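
    The interplay of type feedback, type specialization, and deoptimization described above can be illustrated with a small, hypothetical sketch (this is not MuscalietJS code): specialized code guards on the operand types observed during profiling and falls back to a generic path when the guard fails.

```typescript
// Illustrative sketch only (hypothetical, not MuscalietJS internals):
// a "specialized" add generated after type feedback observed
// number + number, guarded by runtime type checks that trigger a
// simulated deoptimization back to the generic path when they fail.

type Value = number | string;

// Generic path: handles every case, but slowly.
function addGeneric(a: Value, b: Value): Value {
  if (typeof a === "number" && typeof b === "number") return a + b;
  return String(a) + String(b);
}

// Counter standing in for type feedback collected at run time.
let observedNumberNumber = 0;

function addProfiled(a: Value, b: Value): Value {
  if (typeof a === "number" && typeof b === "number") observedNumberNumber++;
  return addGeneric(a, b);
}

// "Specialized" version emitted once feedback says operands are numbers.
// The typeof guards model the checks a type-specializing JIT would emit;
// failing them models a deoptimization to the generic implementation.
function addSpecialized(a: Value, b: Value): Value {
  if (typeof a === "number" && typeof b === "number") {
    return a + b; // fast path: no string handling, no generic dispatch
  }
  console.log("deopt: unexpected operand types, falling back");
  return addGeneric(a, b);
}

// Usage: warm up with profiling, then switch to the specialized code.
for (let i = 0; i < 1000; i++) addProfiled(i, i + 1);
const add = observedNumberNumber > 900 ? addSpecialized : addGeneric;
console.log(add(2, 3));   // fast path
console.log(add(2, "3")); // triggers the simulated deoptimization
```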

    Actionable Program Analyses for Improving Software Performance

    Nowadays, we have greater expectations of software than ever before, coupled with constant pressure to run the same programs on smaller and cheaper machines. To meet this demand, application performance has become an essential concern in software development. Unfortunately, many applications still suffer from performance issues: coding or design errors that lead to performance degradation. However, finding performance issues is a challenging task: there is limited knowledge on how performance issues are discovered and fixed in practice, and current performance profilers report only where resources are spent, not where resources are wasted. The goal of this dissertation is to investigate actionable performance analyses that help developers optimize their software by applying relatively simple code changes. To understand causes and fixes of performance issues in real-world software, we first present an empirical study of 98 issues in popular JavaScript projects. The study illustrates the prevalence of simple and recurring optimization patterns that lead to significant performance improvements. Then, to help developers optimize their code, we propose two actionable performance analyses that suggest optimizations based on reordering opportunities and method inlining. In this work, we focus on optimizations with four key properties. First, the optimizations are effective, that is, the changes suggested by the analysis lead to statistically significant performance improvements. Second, the optimizations are exploitable, that is, they are easy to understand and apply. Third, the optimizations are recurring, that is, they are applicable across multiple projects. Fourth, the optimizations are out-of-reach for compilers, that is, compilers cannot guarantee that a code transformation preserves the original semantics. To reliably detect optimization opportunities and measure their performance benefits, the code must be executed with sufficient test inputs. The last contribution complements state-of-the-art test generation techniques by proposing a novel automated approach for generating effective tests for higher-order functions. We implement our techniques in practical tools and evaluate their effectiveness on a set of popular software systems. The empirical evaluation demonstrates the potential of actionable analyses in improving software performance through relatively simple optimization opportunities.
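
    As a hedged illustration of the reordering pattern referred to above (the concrete projects and suggested changes from the study are not shown here), swapping the operands of a short-circuiting condition so that the cheaper, more selective check runs first avoids redundant work without changing semantics, provided both operands are side-effect free.

```typescript
// Hypothetical example of a reordering opportunity (not taken from the
// studied projects): both conditions are side-effect free, so swapping
// them preserves semantics while letting the cheap check short-circuit
// the expensive one.

function isRelevantSlow(line: string, pattern: RegExp, enabled: boolean): boolean {
  // Expensive check first: the regex runs even when the feature is disabled.
  return pattern.test(line) && enabled;
}

function isRelevantFast(line: string, pattern: RegExp, enabled: boolean): boolean {
  // Cheap, highly selective check first: when `enabled` is false the
  // regex never runs at all.
  return enabled && pattern.test(line);
}

// Usage sketch: with the flag mostly off, the reordered version skips
// nearly all regex evaluations.
const pattern = /error|warn/i;
const lines = Array.from({ length: 100_000 }, (_, i) => `log line ${i}`);
let hits = 0;
for (const line of lines) {
  if (isRelevantFast(line, pattern, false)) hits++;
}
console.log(hits); // 0, and no regex work was done
```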

    JavaScript runtime performance analysis: Node and Bun

    Online services are seeing growing demand across a variety of use cases, and the functionality of web applications is at a premium. With every new application, additional functionality is implemented with more and more complicated logic. While newer technologies are being invented to increase computing power, it is also necessary to improve on existing machines to create a smoother experience when using those applications. Node.js, a JavaScript runtime, has been a reliable name in the industry and can generally satisfy the needs of online applications. However, its performance has been found to be uneven in applications that demand the highest levels of performance. There have been a few attempts to outperform Node.js, and the most recent promising one is Bun. The purpose of this thesis is to compare the performance of Node.js and Bun. The comparison is carried out with different use cases that cover memory usage, execution time, response time, and request throughput. In all cases, multiple samples were taken to get a precise picture of the factors that affect performance. The outcome of the thesis shows that Bun is significantly faster compared to Node.js. However, the method of this thesis gives a one-sided view of the differences between Node.js and Bun; when considering adopting Bun, other factors such as security, compatibility, and reliability should also be taken into account.
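
    A minimal sketch of the kind of measurement behind such a comparison is shown below (the thesis' actual workloads and harness are not reproduced here). It relies only on APIs available in both Node.js and Bun: performance.now() and process.memoryUsage(); the workload itself is a stand-in.

```typescript
// Stand-in CPU-bound task; the real benchmarks also covered response
// time and request throughput of HTTP servers.
function workload(): number {
  let sum = 0;
  for (let i = 0; i < 5_000_000; i++) sum += Math.sqrt(i);
  return sum;
}

const samples: number[] = [];
for (let run = 0; run < 10; run++) {  // multiple samples, as in the thesis
  const start = performance.now();
  workload();
  samples.push(performance.now() - start);
}

const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
const rssMiB = process.memoryUsage().rss / (1024 * 1024);

console.log(`mean execution time: ${mean.toFixed(2)} ms over ${samples.length} runs`);
console.log(`resident set size:   ${rssMiB.toFixed(1)} MiB`);
```

    Running the same script under `node` and under `bun` and comparing the reported means is the essence of the execution-time comparison; memory, response time, and throughput need their own workloads.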

    Ontology Based Personalized Search Engine

    An ontology is a representation of knowledge as hierarchies of concepts within a domain, using a shared vocabulary to denote the types, properties, and inter-relationships of those concepts [1][2]. Ontologies are often equated with classification hierarchies of classes, class definitions, and their relations, but ontologies need not be limited to these forms. Ontologies are also not limited to conservative definitions, i.e., definitions in the traditional logic sense that only introduce terminology and do not add any knowledge about the world (Enderton, 1972). To specify a conceptualization, axioms must be proposed that constrain the interpretation of the defined terms [3]. Ontologies are frameworks for organizing information and are collections of URIs: a systematic arrangement of all important categories of objects and concepts within a particular field and the relationships between them. Search engines are commonly used for information retrieval from the web. The ontology-based personalized search engine (OPSE) captures a user's preferences in the form of concepts by mining the data the user has previously clicked. Search results should be provided according to the user profile and user interests so that highly relevant results are returned; to do this, user profiles need to be maintained. Location information is also important for search, so OPSE classifies concepts into content concepts and location concepts. User locations (gathered during user registration) are used to supplement the location concepts in OPSE. Ontology-based user profiles are used to organize user preferences and adapt a personalized ranking function so that relevant documents are retrieved with a suitable ranking. A client-server architecture is used for the design of the ontology-based personalized search engine. The design involves collecting and storing client clickthrough data; functionalities such as re-ranking and concept extraction are performed on the server side. As an additional requirement, the privacy issue can be addressed by restricting the information in the user profile that is exposed to the personalized search engine server, controlled by privacy parameters. A prototype of OPSE will be developed on the web platform. Ontology-based personalized search engines can significantly improve the precision of results.
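
    A hedged sketch of a concept-based re-ranking step along the lines described above follows; the names and weighting scheme are illustrative assumptions, not the OPSE implementation.

```typescript
interface Result {
  url: string;
  baseScore: number;  // relevance score from the underlying search engine
  concepts: string[]; // content/location concepts extracted for the page
}

// User profile: concept -> weight, learned from previously clicked results.
type Profile = Map<string, number>;

function personalizedScore(result: Result, profile: Profile): number {
  // Sum the profile weights of the concepts the result covers ...
  let conceptScore = 0;
  for (const c of result.concepts) conceptScore += profile.get(c) ?? 0;
  // ... and blend with the base score; 0.5 is an arbitrary mixing factor.
  return result.baseScore + 0.5 * conceptScore;
}

function rerank(results: Result[], profile: Profile): Result[] {
  return [...results].sort(
    (a, b) => personalizedScore(b, profile) - personalizedScore(a, profile)
  );
}

// Usage: a user whose clicks indicate interest in "hotel" near "helsinki"
// sees those results promoted above a higher base-scored result.
const profile: Profile = new Map([["hotel", 2.0], ["helsinki", 1.5]]);
const results: Result[] = [
  { url: "a.example", baseScore: 1.0, concepts: ["flight"] },
  { url: "b.example", baseScore: 0.8, concepts: ["hotel", "helsinki"] },
];
console.log(rerank(results, profile).map(r => r.url)); // ["b.example", "a.example"]
```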

    Managing Client-Side Memory Consumption in Web Applications (Web-sovelluksen asiakaspuolen muistinkulutuksen hallinta)

    Today web browsers are used more and more as an application runtime environment, in addition to their use and origins as document viewers. At the same time, web application architecture is undergoing changes; for instance, functionality is being moved from the backend into the client, following the so-called thick-client architecture. Currently it is quite easy to create client-side web applications that do not manage their memory allocations, and there has not been much focus on the memory usage of client-side applications, for various reasons. However, client-side web applications are now widely being built, and some of them are expected to run for extended periods. Such longevity requires successful memory management, and from a performance point of view it is also beneficial that the application manages its memory well. The client-side behaviour of the application is developed with JavaScript, which has automatically managed memory allocation. However, like all abstractions, automatically managed memory is a leaky abstraction over an undecidable problem. In this thesis we aim to find out what it takes to create client-side applications that successfully manage their memory allocations. We take a look at the tools available for investigating memory issues during application development. We also developed a memory diagnostics module in order to diagnose an application instance's memory usage while it is in use. The diagnostics module developed during this thesis was used successfully to monitor the application's memory usage over time, and with the data it provided we were able to identify memory issues in our demo application. However, the Web platform currently does not enable a cross-browser, standards-based solution for diagnosing a web application's memory usage.
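
    A minimal sketch of the kind of in-page memory diagnostics module described above is shown below. It samples the non-standard, Chromium-only performance.memory API (the class and method names are assumptions for illustration), which is exactly why the abstract notes that a cross-browser, standards-based solution is not currently possible.

```typescript
interface MemorySample {
  timestamp: number;
  usedJSHeapSize: number;
}

class MemoryDiagnostics {
  private samples: MemorySample[] = [];
  private timer: ReturnType<typeof setInterval> | undefined;

  start(intervalMs = 10_000): void {
    const mem = (performance as any).memory; // non-standard Chromium extension
    if (!mem) {
      console.warn("performance.memory not available in this browser");
      return;
    }
    this.timer = setInterval(() => {
      this.samples.push({
        timestamp: Date.now(),
        usedJSHeapSize: mem.usedJSHeapSize,
      });
    }, intervalMs);
  }

  stop(): MemorySample[] {
    if (this.timer !== undefined) clearInterval(this.timer);
    return this.samples;
  }
}

// Usage: sample heap usage while the application runs, then inspect the
// trend; a heap that only grows over time hints at retained garbage.
const diagnostics = new MemoryDiagnostics();
diagnostics.start(5_000);
```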

    XTribe: a web-based social computation platform

    In the last few years the Web has progressively acquired the status of an infrastructure for social computation that allows researchers to coordinate the cognitive abilities of human agents in online communities so as to steer the collective user activity towards predefined goals. This general trend is also triggering the adoption of web games as a very interesting laboratory for running experiments in the social sciences and whenever the contribution of human beings is crucially required for research purposes. Nowadays, while the number of online users has been steadily growing, there is still a need for systematization in the approach to the web as a laboratory. In this paper we present Experimental Tribe (XTribe in short), a novel general-purpose web-based platform for web gaming and social computation. Ready to use and already operational, XTribe aims at drastically reducing the effort required to develop and run web experiments. XTribe has been designed to speed up the implementation of those general aspects of web experiments that are independent of the specific experiment content. For example, XTribe takes care of user management by handling registration and profiles, and in the case of multi-player games it provides the necessary user grouping functionalities. XTribe also provides communication facilities to easily achieve both bidirectional and asynchronous communication. From a practical point of view, researchers are left with only the task of designing and implementing the game interface and logic of their experiment, over which they maintain full control. Moreover, XTribe acts as a repository of different scientific experiments, realizing a sort of showcase that stimulates users' curiosity, enhances their participation, and helps researchers in recruiting volunteers. Comment: 11 pages, 2 figures, 1 table; 2013 Third International Conference on Cloud and Green Computing (CGC), Sept. 30-Oct. 2, 2013, Karlsruhe, Germany.
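
    A purely illustrative sketch of the division of labour described above follows: the platform handles users, grouping, and message delivery, while the researcher only implements game logic that reacts to messages. The endpoint and message shape below are hypothetical and are not XTribe's actual API.

```typescript
interface ExperimentMessage {
  player: string;
  action: string;
  payload?: unknown;
}

// Bidirectional, asynchronous channel to the experiment manager
// (hypothetical endpoint, for illustration only).
const channel = new WebSocket("wss://experiment-manager.example/session");

// The researcher-supplied part: game logic reacting to incoming events.
channel.onmessage = (event: MessageEvent) => {
  const msg: ExperimentMessage = JSON.parse(event.data);
  if (msg.action === "your-turn") {
    // ... compute the player's move here ...
    channel.send(JSON.stringify({ player: msg.player, action: "guess", payload: "word" }));
  }
};

channel.onopen = () => {
  channel.send(JSON.stringify({ player: "p1", action: "join" }));
};
```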

    Web Tracking: Mechanisms, Implications, and Defenses

    This article surveys the existing literature on the methods currently used by web services to track users online, as well as their purposes, implications, and possible user defenses. A significant majority of the reviewed articles and web resources are from the years 2012-2014. Privacy seems to be the Achilles' heel of today's web. Web services make continuous efforts to obtain as much information as they can about the things we search, the sites we visit, the people we contact, and the products we buy. Tracking is usually performed for commercial purposes. We present five main groups of methods used for user tracking, based on sessions, client storage, client cache, fingerprinting, or yet other approaches. A special focus is placed on mechanisms that use web caches, operational caches, and fingerprinting, as they tend to employ particularly creative methodologies. We also show how users can be identified on the web and associated with their real names, e-mail addresses, phone numbers, or even street addresses. We show why tracking is used and its possible implications for users (price discrimination, assessing financial credibility, determining insurance coverage, government surveillance, and identity theft). For each of the tracking methods, we present possible defenses. Apart from describing the methods and tools used to keep personal data from being tracked, we also present several tools that were used for research purposes: their main goal is to discover how and by which entity users are being tracked on their desktop computers or smartphones, provide this information to the users, and visualize it in an accessible and easy-to-follow way. Finally, we present currently proposed future approaches to tracking users and show that they can potentially pose significant threats to users' privacy. Comment: 29 pages, 212 references.
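
    A hedged sketch of the fingerprinting family of methods the survey covers follows: a handful of browser properties are combined and hashed into an identifier that is fairly stable across visits, without any cookie or other client-side storage. Real fingerprinters use many more signals (canvas, fonts, audio, WebGL); this is only an illustration of the principle.

```typescript
async function simpleFingerprint(): Promise<string> {
  // A few readily available, fairly stable browser/device properties.
  const signals = [
    navigator.userAgent,
    navigator.language,
    String(screen.width),
    String(screen.height),
    String(screen.colorDepth),
    Intl.DateTimeFormat().resolvedOptions().timeZone,
  ].join("|");

  // Hash the concatenated signals with the standard Web Crypto API.
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Usage: the resulting hex string can be sent to a server and used to
// recognize the same browser on later visits, without storing anything
// on the client.
simpleFingerprint().then((id) => console.log("fingerprint:", id));
```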

    HW-SW co-design techniques for modern programming languages

    Modern programming languages raise the level of abstraction, hide the details of computer systems from programmers, and provide many convenient features. Such strong abstraction from the details of computer systems, with runtime support for many convenient features, increases the productivity of programmers. These benefits, however, come with performance overheads. First, many modern programming languages use a dynamic type system, which incurs the overhead of profiling program execution and generating specialized code in the middle of execution. Second, such specialized code constantly adds the overhead of dynamic type checks. Third, most modern programming languages use automatic memory management, which incurs memory overheads due to metadata and delayed reclamation as well as execution-time overheads due to garbage collection operations. This thesis makes three contributions to address the overheads of modern programming languages. First, it describes the enhancements to compilers of dynamic scripting languages necessary to enable sharing of compilation results across executions. These compilers have been developed with little consideration for reusing optimization effort across executions, since doing so is considered difficult due to the dynamic nature of the languages. As a first step toward enabling the reuse of compilation results for dynamic scripting languages, it focuses on inline caching (IC), one of the fundamental optimization techniques for dynamic type systems. Second, it describes a HW-SW co-design technique to further improve IC operations. While the first proposal focuses on expensive IC miss handling during JavaScript initialization, the second proposal accelerates IC hit operations to improve overall performance. Lastly, it describes how to exploit common sharing patterns of programs to reduce the overheads of reference counting for garbage collection; it minimizes atomic operations in reference counting by biasing each object to a specific thread.
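
    An illustrative sketch of the inline caching (IC) idea the thesis builds on is shown below; it is a software model of the concept, not the engine or hardware mechanism from the thesis. A property access site remembers the last "shape" (hidden class) it saw and the slot offset of the property for that shape; a hit skips the generic lookup entirely.

```typescript
interface Shape {
  offsets: Map<string, number>; // property name -> slot index
}

interface JsObject {
  shape: Shape;
  slots: unknown[];
}

class InlineCache {
  private cachedShape: Shape | null = null;
  private cachedOffset = 0;
  hits = 0;
  misses = 0;

  load(obj: JsObject, prop: string): unknown {
    if (obj.shape === this.cachedShape) {
      this.hits++;                 // fast path: one compare + one load
      return obj.slots[this.cachedOffset];
    }
    this.misses++;                 // slow path: generic lookup, then cache
    const offset = obj.shape.offsets.get(prop);
    if (offset === undefined) return undefined;
    this.cachedShape = obj.shape;
    this.cachedOffset = offset;
    return obj.slots[offset];
  }
}

// Usage: objects sharing a shape make the access site monomorphic,
// so almost every lookup after the first is a cache hit.
const pointShape: Shape = { offsets: new Map([["x", 0], ["y", 1]]) };
const ic = new InlineCache();
for (let i = 0; i < 1000; i++) {
  const p: JsObject = { shape: pointShape, slots: [i, i * 2] };
  ic.load(p, "x");
}
console.log(ic.hits, ic.misses); // 999 hits, 1 miss
```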

    Streaming-Based Progressive Enhancement of Websites for Slow and Error-Prone Networks

    This thesis aims to improve the loading times of web pages by streaming their content in a non-render-blocking way. At the beginning of the thesis, a large-scale analysis was performed, spanning all downloadable pages of the top 10,000 web pages according to the Tranco list. This analysis gathered data about the render-blocking properties of web page resources, including HTML, JavaScript, and CSS. It further gathered data about code coverage, giving insight into how much of the render-blocking code is actually used, so that the structural optimization potential could be determined. Less render-blocking code will, in turn, lead to faster loading times, since less data is required to display the page. The analysis showed that there is significant optimization potential left: on average, modern web pages consist of a combined 86.7% JavaScript and CSS, the rest being HTML. Both JavaScript and CSS are loaded mostly render-blocking, with 91.8% of JavaScript and 89.47% of CSS loaded in this way. Furthermore, only 40.8% of JavaScript and 15.9% of CSS is used before render. This shows that, on average, web pages have significant room for improvement. The concept developed based on the results of this analysis aims to load web pages in a new way by streaming all render-blocking content. The related work showed that multiple sub-techniques are required first, which were conceptualized next. First, an optimization and splitting tool for CSS is proposed, called Essential. This is followed by an optimization framework concept for JavaScript, consisting of Waiter and AUTRATAC. Lastly, a backward-compatible approach was developed which allows splitting HTML and streaming all content to a client. The evaluation showed that the streamed web page loads significantly faster when comparing FCP, content "above the fold," and the total transfer time of all render-blocking resources of the document. For example, the case study determined that the streamed page could reduce the time until FCP by 83.3% at 2 Mbps and the time until the last render-blocking data is transferred by up to 70.4% at 2 Mbps. Furthermore, existing streaming methods were compared, determining that WebSockets meet the requirements for streaming web page content. Lastly, an anonymous online user questionnaire showed that 85% of users preferred this new style of loading pages.
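
    A minimal client-side sketch of the streaming idea evaluated above follows: remaining content is delivered incrementally over a WebSocket and appended to the document as it arrives, so the initial HTML can stay small and non-blocking. The endpoint and message format are assumptions for illustration; this is not the Essential, Waiter, or AUTRATAC tooling itself.

```typescript
const stream = new WebSocket("wss://example.org/page-stream");

stream.onmessage = (event: MessageEvent) => {
  // Each message is assumed to carry one self-contained HTML fragment.
  const fragment = document
    .createRange()
    .createContextualFragment(event.data as string);
  document.body.appendChild(fragment); // progressively enhances the page
};

stream.onopen = () => {
  // Tell the server which page's remaining content to stream.
  stream.send(JSON.stringify({ page: location.pathname }));
};

stream.onclose = () => {
  console.log("all streamed content received");
};
```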