1,137 research outputs found

    Performance scalability analysis of JavaScript applications with web workers

    Get PDF
    Web applications are getting closer to the performance of native applications by taking advantage of new standards-based technologies. The recent HTML5 standard includes, among others, the Web Workers API, which allows JavaScript applications to execute on multiple threads, or workers. However, the internals of the browser's JavaScript virtual machine do not expose a direct relation between workers, the threads running in the browser, and the utilization of logical cores in the processor. As a result, developers do not know how performance actually scales across environments, and therefore what the optimal number of workers is for parallel JavaScript code. This paper presents the first performance scalability analysis of parallel web apps with multiple workers. We focus on two case studies representative of different worker execution models. Our analyses show performance scaling on different parallel processor microarchitectures and on the three major web browsers on the market. In addition, we study the impact of co-running applications on web app performance. The results provide insights for future approaches to automatically finding the number of workers that gives the best tradeoff between performance and resource usage, preserving system responsiveness and user experience, especially in environments with unexpected changes in system workload. Peer reviewed. Postprint (author's final draft).
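    The starting point the paper critiques can be sketched as follows: a static worker count derived from the logical core count. `pickWorkerCount` is a hypothetical helper, not code from the paper; the paper's point is precisely that this static choice behaves differently across browsers, microarchitectures and system loads.

    ```javascript
    // Browsers expose the logical core count as navigator.hardwareConcurrency;
    // this sketch falls back to Node's os module so it can also run outside a browser.
    const logicalCores =
      typeof navigator !== "undefined" && navigator.hardwareConcurrency
        ? navigator.hardwareConcurrency
        : require("os").cpus().length;

    // Leave one core free for the main (UI) thread, and never drop below one worker.
    function pickWorkerCount(cores, reservedCores = 1) {
      return Math.max(1, cores - reservedCores);
    }

    const n = pickWorkerCount(logicalCores);
    console.log(`would spawn ${n} workers on ${logicalCores} logical cores`);
    // In a browser one would then create the pool:
    //   const workers = Array.from({ length: n }, () => new Worker("task.js"));
    ```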

    Dynamic web worker pool management for highly parallel JavaScript web applications

    Get PDF
    JavaScript web applications are improving in performance mainly thanks to the new standards included in HTML5. Among others, the Web Workers API allows multithreaded JavaScript web apps to exploit parallel processors. However, developers have difficulty determining the minimum number of web workers that provides the highest performance. And even if they find this optimal number, it is a static value configured at the beginning of the execution. Because users tend to run other applications in the background, the estimated number of web workers can become non-optimal: it may overload or underutilize the system. In this paper, we propose a solution for highly parallel web apps that dynamically adapts the number of running web workers to the actually available resources, avoiding the hassle of estimating a static optimal number of threads. The solution consists of including a web worker pool and a simple management algorithm in the web app. The results show that, even with co-running applications, our approach dynamically enables a number of web workers close to the optimum. Our proposal, which is independent of the web browser, overcomes both the lack of knowledge of the underlying processor architecture and dynamic changes in resource availability. Peer reviewed. Postprint (author's final draft).
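    The management policy described above can be sketched as a feedback loop: grow the pool while measured throughput keeps improving, shrink it when throughput degrades (e.g. because co-running applications take cores away). `WorkerPool` and its 5% hysteresis thresholds are illustrative assumptions, not the paper's actual algorithm.

    ```javascript
    // Simulation of an adaptive pool-sizing policy; in a real web app `size`
    // would drive how many Worker objects are kept alive, and `adapt` would be
    // called periodically with the tasks-per-second observed since the last call.
    class WorkerPool {
      constructor(maxWorkers) {
        this.maxWorkers = maxWorkers;
        this.size = 1;            // start conservatively with one worker
        this.lastThroughput = 0;  // throughput observed at the previous size
      }

      adapt(throughput) {
        if (throughput > this.lastThroughput * 1.05 && this.size < this.maxWorkers) {
          this.size += 1;         // adding workers still pays off
        } else if (throughput < this.lastThroughput * 0.95 && this.size > 1) {
          this.size -= 1;         // the system got busier; back off
        }
        this.lastThroughput = throughput;
        return this.size;
      }
    }

    const pool = new WorkerPool(8);
    console.log(pool.adapt(100)); // throughput improved → grow to 2
    console.log(pool.adapt(190)); // still improving → grow to 3
    console.log(pool.adapt(150)); // degraded → shrink back to 2
    ```

    The hysteresis band (no change within ±5%) keeps the pool from oscillating when throughput merely jitters.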

    Accelerated Mobile Pages from JavaScript as Accelerator Tool for Web Service on E-Commerce in the E-Business

    Get PDF
    E-commerce is a sub-part of e-business that includes all kinds of functions and business activities using electronic data, with the main purpose of increasing corporate profits. One strategy is to automate existing tasks in e-commerce by utilizing web service facilities, which saves significant time. The use of smartphones as a primary means of information and communication forces web service providers to improve their services and facilities, for example with websites that open quickly and run lightly on smartphone devices. This paper discusses the benefits of Accelerated Mobile Pages as an accelerator tool for the JavaScript programming language, using XML, HTML and XHTML as well as SOAP, WSDL and NuSOAP, running over both the HTTP and HTTPS protocols. Using Accelerated Mobile Pages as an accelerator tool for e-commerce in e-business can directly improve web service performance. This is most noticeable when a site is accessed from a smartphone with limited resources: website access feels fast, easy and light.

    Social media analytics: a survey of techniques, tools and platforms

    Get PDF
    This paper is written for (social science) researchers seeking to analyze the wealth of social media now available. It presents a comprehensive review of software tools for social networking media, wikis, really simple syndication feeds, blogs, newsgroups, chat and news feeds. For completeness, it also includes introductions to social media scraping, storage, data cleaning and sentiment analysis. Although principally a review, the paper also provides a methodology and a critique of social media tools. Analyzing social media, in particular Twitter feeds for sentiment analysis, has become a major research and business activity due to the availability of web-based application programming interfaces (APIs) provided by Twitter, Facebook and news services. This has led to an ‘explosion’ of data services, software tools for scraping and analysis, and social media analytics platforms. It is also a research area undergoing rapid change and evolution due to commercial pressures and the potential for using social media data for computational (social science) research. Using a simple taxonomy, this paper provides a review of leading software tools and how to use them to scrape, cleanse and analyze the spectrum of social media. In addition, it discusses the requirements of an experimental computational environment for social media research and presents, as an illustration, the system architecture of a social media (analytics) platform built by University College London. The principal contribution of this paper is to provide an overview (including code fragments) for scientists seeking to utilize social media scraping and analytics either in their research or business. The data retrieval techniques presented in this paper were valid at the time of writing (June 2014), but they are subject to change since social media data scraping APIs are rapidly evolving.
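    The sentiment-analysis step the survey introduces can be sketched with a minimal lexicon-based scorer applied to already-scraped posts. The tiny word lists here are illustrative assumptions; real pipelines use curated lexicons (such as AFINN) or trained classifiers.

    ```javascript
    // Count positive minus negative lexicon hits per post.
    const POSITIVE = new Set(["good", "great", "love", "happy"]);
    const NEGATIVE = new Set(["bad", "awful", "hate", "sad"]);

    function sentimentScore(text) {
      let score = 0;
      // Tokenize into lowercase words, ignoring punctuation.
      for (const word of text.toLowerCase().match(/[a-z']+/g) || []) {
        if (POSITIVE.has(word)) score += 1;
        if (NEGATIVE.has(word)) score -= 1;
      }
      return score;
    }

    // Tally a batch of (already scraped) posts.
    const posts = ["I love this!", "awful service, very sad", "ok I guess"];
    console.log(posts.map(sentimentScore)); // [1, -2, 0]
    ```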

    Energy-Efficient Acceleration of Asynchronous Programs.

    Full text link
    Asynchronous or event-driven programming has become the dominant programming model in the last few years. In this model, computations are posted as events to an event queue, from where they are processed asynchronously by the application. A huge fraction of computing systems built today use asynchronous programming. All the Web 2.0 JavaScript applications (e.g., Gmail, Facebook) use asynchronous programming. There are now more than two million mobile applications available between the Apple App Store and Google Play, all written using asynchronous programming. Distributed servers (e.g., Twitter, LinkedIn, PayPal) built using actor-based languages (e.g., Scala) and platforms such as node.js rely on asynchronous events for scalable communication. Internet-of-Things (IoT), embedded systems, sensor networks, desktop GUI applications, etc., all rely on the asynchronous programming model. Despite the ubiquity of asynchronous programs, their unique execution characteristics have been largely ignored by conventional processor architectures, which have remained heavily optimized for synchronous programs. Asynchronous programs are characterized by short events executing varied tasks. This results in a large instruction footprint with little cache locality, severely degrading cache performance. Also, event execution has few repeatable patterns, causing poor branch prediction. This thesis proposes novel processor optimizations that exploit the unique execution characteristics of asynchronous programs for performance and energy efficiency. These optimizations are designed to make the underlying hardware aware of discrete events and thereafter exploit the latent Event-Level Parallelism present in these applications. Through speculative pre-execution of future events, cache addresses and branch outcomes are recorded and later used to improve cache and branch predictor performance.
A hardware instruction prefetcher specialized for asynchronous programs is also proposed as a comparative design direction. PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120780/1/gauravc_1.pd
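    The execution model the thesis targets can be sketched as a queue of short handlers drained one at a time. `EventLoop` here is an illustrative simulation of the software model, not the thesis's hardware mechanism.

    ```javascript
    // Computations are posted as events; each event runs to completion before
    // the next one starts, and an event may post further events.
    class EventLoop {
      constructor() { this.queue = []; }
      post(handler) { this.queue.push(handler); }  // enqueue a computation
      drain() {
        const trace = [];
        while (this.queue.length > 0) {
          const event = this.queue.shift();        // take the oldest event...
          trace.push(event());                     // ...and run it to completion
        }
        return trace;
      }
    }

    const loop = new EventLoop();
    loop.post(() => "parse");
    loop.post(() => "layout");
    // Events that post further events are what make the instruction footprint
    // across consecutive events so varied and hard to predict.
    loop.post(() => { loop.post(() => "paint"); return "script"; });
    console.log(loop.drain()); // ["parse", "layout", "script", "paint"]
    ```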

    Reducing Memory in Software-Based Thread-Level Speculation for JavaScript Virtual Machine Execution of Web Applications

    Full text link

    Online interactive thematic mapping: Applications and techniques for socio-economic research

    Get PDF
    Recent advances in public sector open data and online mapping software are opening up new possibilities for interactive mapping in research applications. Increasingly there are opportunities to develop advanced interactive platforms with exploratory and analytical functionality. This paper reviews tools and workflows for the production of online research mapping platforms, alongside a classification of the interactive functionality that can be achieved. A series of mapping case studies from government, academia and research institutes are reviewed. The conclusions are that online cartography's technical hurdles are falling due to open data releases, open source software and cloud services innovations. The data exploration functionality of these new tools is powerful and complements the emerging fields of big data and open GIS. International data perspectives are also increasingly feasible. Analytical functionality for web mapping is currently less developed, but promising examples can be seen in areas such as urban analytics. For more presentational research communication applications, there has been progress in story-driven mapping that draws on data journalism approaches capable of connecting with very large audiences.

    JSOPT: A Framework for Optimization of JavaScript on Web Browsers

    Full text link

    Leveraging Grammars For OpenMP Development in Supercomputing Environments

    Get PDF
    This thesis proposes a solution to streamline the process of using supercomputing resources on Southern Methodist University's ManeFrame II supercomputer. A large segment of the research community that uses ManeFrame II belongs outside the computer science department and the Lyle School of Engineering. While these users know how to apply computation to their field, their knowledge does not necessarily extend to the suite of tools and the operating system required to use ManeFrame II. To solve this, the thesis proposes an interface that lets those with little knowledge of Linux and SLURM use the supercomputing resources that SMU's Center for Scientific Computation provides. OpenMP is a compiler extension for C, C++ and Fortran that enables multithreaded binaries through in-code directives. With knowledge of OpenMP, researchers are already able to split their code into multiple threads of execution. However, because of the complexity of Linux and SLURM, using OpenMP on the supercomputer can be problematic. This thesis focuses on the use of ANTLR, a programming language recognition tool. ANTLR allows directives to be inserted into code, from which the tool generates batch files compatible with the supercomputer's scheduling software, SLURM. With the batch file, the user can then submit their code to the supercomputer. Additional tools around this core piece of software provide a usable interface. To make the tool accessible to those without a software background, the proposed forward-facing solution is a user interface where users upload their code and receive a batch file they can use to run it. This eliminates the need for a new user to download, compile and run the ANTLR distribution to generate a batch file. By abstracting these complexities away behind a web interface, the solution can generate a batch submission file for the user. Additional tooling assists the user in finding empty nodes for code execution, testing the compilation of their code on the supercomputer, and running a timed sample of their code to ensure that OpenMP leads to a speedup in execution time.
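    The kind of SLURM batch file such a tool would generate for an OpenMP program can be sketched as below. The job name, partition defaults, resource numbers and program name are illustrative assumptions, not values from the thesis.

    ```bash
    #!/bin/bash
    #SBATCH --job-name=omp_example
    #SBATCH --nodes=1                 # OpenMP is shared-memory: one node
    #SBATCH --ntasks=1                # a single process...
    #SBATCH --cpus-per-task=8         # ...with eight CPUs for its threads
    #SBATCH --time=00:10:00

    # Match the OpenMP thread count to the CPUs SLURM actually allocated.
    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

    # Compile with OpenMP enabled, then run.
    gcc -fopenmp -O2 my_program.c -o my_program
    ./my_program
    ```

    The key detail is deriving `OMP_NUM_THREADS` from `SLURM_CPUS_PER_TASK`, so the program never oversubscribes its allocation.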