13 research outputs found

    JavaScript Dead Code Identification, Elimination, and Empirical Assessment

    Web apps are built using a combination of HTML, CSS, and JavaScript. When building modern web apps, it is common practice to rely on third-party libraries and frameworks so as to improve developers' productivity and code quality. Alongside these benefits, the adoption of such libraries introduces JavaScript dead code, i.e., code implementing unused functionalities. The cost of downloading and parsing dead code can negatively affect the loading time and resource usage of web apps. The goal of our study is twofold. First, we present Lacuna, an approach for automatically detecting and eliminating JavaScript dead code from web apps. The proposed approach supports both static and dynamic analyses, is extensible, and can be applied to any JavaScript code base without imposing constraints on coding style or on the use of specific JavaScript constructs. Second, leveraging Lacuna, we conduct an experiment to empirically evaluate the run-time overhead of JavaScript dead code in terms of energy consumption, performance, network usage, and resource usage in the context of mobile web apps. We applied Lacuna four times to 30 mobile web apps independently developed by third-party developers, each time eliminating dead code according to a different optimization level provided by Lacuna. Afterwards, each version of each web app was executed on an Android device while collecting measures to assess the potential run-time overhead caused by dead code. Experimental results highlight, among other things, that the removal of JavaScript dead code has a positive impact on the loading time of mobile web apps, while significantly reducing the number of bytes transferred over the network.
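
    To make the notion of dead code concrete, the following TypeScript sketch (an invented illustration, not an example from the paper and not Lacuna's output) shows a bundled utility module of which only one function is ever reached from the app's entry point; the other function is still downloaded and parsed by the browser:

        // utils.ts -- a small "library" shipped with the page (hypothetical example)
        export function formatPrice(cents: number): string {
          return `$${(cents / 100).toFixed(2)}`;
        }

        export function formatDate(iso: string): string {
          // Never called anywhere in the app: dead code that is nonetheless
          // downloaded, parsed, and kept in memory by the browser.
          return new Date(iso).toLocaleDateString();
        }

        // app.ts -- the only entry point; formatDate is unreachable from here
        import { formatPrice } from "./utils";
        console.log(formatPrice(1999)); // "$19.99"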

    The state of the art in measurement-based experiments on the mobile web

    Context: Nowadays, the majority of worldwide Web traffic comes from mobile devices, as we tend to rely primarily on the browsers installed on our smartphones and tablets (e.g., Chrome for Android, Safari for iOS) for accessing online services. A market of such a large scale leads to extremely fierce competition, where it is of paramount importance that the developed mobile Web apps are of high quality, e.g., in terms of performance, energy consumption, security, and usability. In order to objectively assess the quality of mobile Web apps, practitioners and researchers conduct experiments based on the measurement of run-time metrics such as battery discharge, CPU and memory usage, number and type of network requests, etc. Objective: The objective of this work is to identify, classify, and evaluate the state of the art of conducting measurement-based experiments on the mobile Web. Specifically, we focus on (i) which metrics are employed during experimentation, how they are measured, and how they are analyzed; (ii) the platforms chosen to run the experiments; (iii) what subjects are used; and (iv) the tools and environments under which the experiments are run. Method: We apply the systematic mapping methodology. Starting from a search process that identified 786 potentially relevant studies, we selected a set of 33 primary studies following a rigorous selection procedure. We defined and applied a classification framework to them to extract data and gather relevant insights. Results: This work contributes (i) a classification framework for measurement-based experiments on the mobile Web; (ii) a systematic map of current research on the topic; and (iii) a discussion of emergent findings and challenges, and the resulting implications for future research. Conclusion: This study provides a rigorous and replicable map of the state of the art of conducting measurement-based experiments on the mobile Web. Its results can benefit researchers and practitioners by presenting common techniques, empirical practices, and tools to properly conduct measurement-based experiments on the mobile Web.
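
    Several of the run-time metrics surveyed here (e.g., loading time, number of requests, bytes transferred) can be sampled directly in the browser. The TypeScript sketch below only illustrates that kind of measurement, using the standard Navigation Timing and Resource Timing APIs; it is not taken from any of the primary studies:

        // Sample common mobile-web performance metrics from inside the page.
        const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];

        const pageLoadTimeMs = nav.loadEventEnd - nav.startTime; // full page load
        const timeToFirstByteMs = nav.responseStart - nav.startTime;

        const resources = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
        const transferredBytes = resources.reduce((sum, r) => sum + r.transferSize, 0);

        console.log({
          pageLoadTimeMs,
          timeToFirstByteMs,
          requestCount: resources.length,
          transferredBytes,
        });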

    An extensible approach for taming the challenges of JavaScript dead code elimination

    JavaScript is becoming the de facto programming language of the Web. Large-scale web applications (web apps) written in JavaScript are commonplace nowadays, with big technology players (e.g., Google, Facebook) using it in their core flagship products. Today, it is common practice to reuse existing JavaScript code, usually in the form of third-party libraries and frameworks. While this practice helps speed up development, it comes with the risk of bringing in dead code, i.e., JavaScript code that is never executed but is still downloaded from the network and parsed in the browser. This overhead can negatively impact the overall performance and energy consumption of the web app. In this paper we present Lacuna, an approach for JavaScript dead code elimination in which existing JavaScript analysis techniques are applied in combination. The proposed approach supports both static and dynamic analyses, is extensible, and is independent of the specificities of the JavaScript analysis techniques used. Lacuna can be applied to any JavaScript code base without imposing constraints on the developer, e.g., on their coding style or on the use of specific JavaScript features (e.g., modules). Lacuna has been evaluated on a suite of 29 publicly available web apps, comprising 15,946 JavaScript functions and built with different JavaScript frameworks (e.g., Angular, Vue.js, jQuery). Despite being a prototype, Lacuna obtained promising results in terms of analysis execution time and precision.
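
    The combination of analysis techniques can be pictured as follows: each analyzer reports the set of functions it considers reachable, and a function is treated as dead only when no analyzer reaches it. The TypeScript sketch below illustrates this conservative combination under invented names; it is not Lacuna's actual code:

        // Combine the output of independent analyzers (e.g., a static call-graph
        // builder and a dynamic execution trace). All names are illustrative.
        type FunctionId = string;

        interface Analyzer {
          name: string;
          reachableFunctions(allFunctions: FunctionId[]): Set<FunctionId>;
        }

        function findDeadFunctions(allFunctions: FunctionId[], analyzers: Analyzer[]): FunctionId[] {
          // A function is reported as dead only if every analyzer agrees it is
          // unreachable, which keeps the combined result conservative.
          const alive = new Set<FunctionId>();
          for (const a of analyzers) {
            for (const fn of a.reachableFunctions(allFunctions)) alive.add(fn);
          }
          return allFunctions.filter((fn) => !alive.has(fn));
        }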

    Software engineering techniques for statically analyzing mobile apps: research trends, characteristics, and potential for industrial adoption

    Mobile platforms are rapidly and continuously changing, with support for new sensors, APIs, and programming abstractions. Static analysis is gaining a growing interest, allowing developers to predict properties about the run-time behavior of mobile apps without executing them. Over the years, literally hundreds of static analysis techniques have been proposed, ranging from structural and control-flow analysis to state-based analysis. In this paper, we present a systematic mapping study aimed at identifying, evaluating, and classifying characteristics, trends, and potential for industrial adoption of existing research in static analysis of mobile apps. Starting from over 12,000 potentially relevant studies, we applied a rigorous selection procedure resulting in 261 primary studies along a time span of 9 years. We analyzed each primary study according to a rigorously-defined classification framework. The results of this study give a solid foundation for assessing existing and future approaches for static analysis of mobile apps, especially in terms of their industrial adoptability. Researchers and practitioners can use the results of this study to (i) identify existing research/technical gaps to target, (ii) understand how approaches developed in academia can be successfully transferred to industry, and (iii) better position their (past and future) approaches for static analysis of mobile apps.

    Social, Legal, Ethical, Empathetic, and Cultural Rules: Compilation and Reasoning

    The rise of AI-based and autonomous systems is raising concerns and apprehension due to the potential negative repercussions of their behavior or decisions. These systems must be designed to comply with the human contexts in which they will operate. To this end, Townsend et al. (2022) introduce the concept of SLEEC (social, legal, ethical, empathetic, or cultural) rules, which aim to facilitate the formulation, verification, and enforcement of the rules that AI-based and autonomous systems should obey. They lay out a methodology to elicit such rules and to let philosophers, lawyers, domain experts, and others formulate them in natural language. To enable their effective use in AI systems, it is necessary to translate these rules systematically into a formal language that supports automated reasoning. In this study, we first conduct a linguistic analysis of the SLEEC rule pattern, which justifies the translation of SLEEC rules into classical logic. We then investigate the computational complexity of reasoning about SLEEC rules and show how logic programming frameworks can be employed to implement SLEEC rules in practical scenarios. The result is a readily applicable strategy for implementing AI systems that conform to norms expressed as SLEEC rules.
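
    As an invented example of the kind of translation discussed (not a rule taken from the paper), a SLEEC-style rule such as "when the user asks for help, the robot must call a supervisor, unless the user has opted out of human contact" could be rendered in classical propositional logic roughly as:

        \mathit{askHelp} \land \lnot\mathit{optedOut} \rightarrow \mathit{callSupervisor}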

    Permission issues in open-source Android apps: An exploratory study

    Permissions are one of the most fundamental components for protecting an Android user's privacy and security. Unfortunately, developers frequently misuse permissions by requiring too many or too few permissions, or by not adhering to permission best practices. These permission-related issues can negatively impact users in a variety of ways, ranging from a poor user experience to severe privacy and security implications. To advance the understanding of permission-related issues during the app development process, we conducted an empirical study of 574 GitHub repositories of open-source Android apps. We analyzed the occurrences of four types of permission-related issues across the lifetime of the apps. Our findings reveal that (i) permission-related issues are a frequent phenomenon in Android apps, (ii) the majority of issues are fixed within a few days after their introduction, (iii) permission-related issues can frequently linger inside an app for an extended period of time, sometimes as long as several years, before being fixed, and (iv) both project newcomers and regular contributors exhibit the same behaviour in terms of the number of introduced and fixed permission-related issues per commit.
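
    One of the issue types studied, over-permissioning, can be illustrated with a small check that compares the permissions declared in AndroidManifest.xml against those a team believes the app actually needs. The TypeScript sketch below is only illustrative; the file path, the regular expression, and the "needed" list are assumptions, and this is not the detection approach used in the study:

        // Flag declared Android permissions that may be unnecessary.
        import { readFileSync } from "node:fs";

        const manifest = readFileSync("app/src/main/AndroidManifest.xml", "utf8");
        const declared = [...manifest.matchAll(/<uses-permission[^>]*android:name="([^"]+)"/g)]
          .map((m) => m[1]);

        // Permissions the team has agreed the app genuinely needs (illustrative).
        const needed = new Set([
          "android.permission.INTERNET",
          "android.permission.CAMERA",
        ]);

        const suspicious = declared.filter((p) => !needed.has(p));
        if (suspicious.length > 0) {
          console.warn("Declared permissions that may be unnecessary:", suspicious);
        }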

    User-centric Android flexible permissions

    Privacy in mobile apps is a fundamental aspect to be considered, particularly with regard to meeting end user expectations. Due to the rigidity of the Android permission model, desirable trade-offs between privacy and functionality are not possible: end users are confined to a secondary role, with the only option of choosing between privacy and functionality. This work proposes a user-centric approach to the flexible management of Android permissions that empowers end users to specify the desired level of permissions on a per-feature basis.

    Enhancing Trustability of Android Applications via User-Centric Flexible Permissions

    The Android OS is experiencing a growing market share globally and is becoming the mobile platform of choice for an increasing number of users. People rely on Android mobile devices for surfing the web, purchasing products, or taking part in social networks. The large amount of personal information that is exchanged makes privacy an important concern. As a result, the trustability of mobile apps is a fundamental aspect to be considered, particularly with regard to meeting the expectations of end users. The rigidity of the Android permission model confines end users to a secondary role, offering only the choice between privacy and functionality. In this paper, we aim at improving the trustability of Android apps by proposing a user-centric approach to the flexible management of Android permissions. The proposed approach empowers end users to selectively grant permissions by specifying (i) the desired level of permission granularity and (ii) the specific features of the app in which the chosen permission levels are granted. Four experiments have been designed, conducted, and reported to evaluate the approach, considering performance, usability, and acceptance from both the end user's and the developer's perspectives. The results confirm confidence in the approach.
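
    The per-feature idea can be sketched in isolation as a user-editable policy that maps each app feature to the permission level the user is willing to grant before that feature runs. The TypeScript sketch below is hypothetical: the levels, feature names, and policy shape are invented, and the authors' approach targets the Android permission system itself rather than an in-app lookup table:

        // A user-centric, per-feature permission policy (illustrative only).
        type PermissionLevel = "denied" | "coarse" | "exact";

        const userPolicy: Record<string, PermissionLevel> = {
          "store-locator": "coarse",      // approximate location is enough here
          "fitness-tracking": "exact",    // precise location allowed for this feature
          "ad-personalization": "denied",
        };

        function locationLevelFor(feature: string): PermissionLevel {
          // Default to the most privacy-preserving option when the user has not decided.
          return userPolicy[feature] ?? "denied";
        }

        console.log(locationLevelFor("store-locator")); // "coarse"
        console.log(locationLevelFor("photo-sharing")); // "denied"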

    Clinical risk factors and features on computed tomography angiography in high-risk carotid artery plaque in patients with type 2 diabetes

    Background: High-risk carotid artery plaque (HRP) is associated with a markedly increased risk of ischemic stroke. The aims of this study were: 1) to examine the prevalence of HRP in a cohort of asymptomatic adults with type 2 diabetes (T2D); 2) to investigate the relationship between HRP, established cardiovascular risk factors, and the computed tomography angiography (CTA) profile; and 3) to assess whether the presence of HRP is associated with an increased risk of major adverse cardiovascular events (MACE). Methods: This was a retrospective cohort study of asymptomatic T2D patients who underwent carotid endarterectomy (CEA) from January 2018 to July 2021. Each carotid atherosclerotic plaque (CAP) was assessed for the presence of ulceration, lipids, fibrosis, thrombotic deposits, hemorrhage, neovascularization, and inflammation. A CAP presenting at least five of these histological features was defined as HRP (Group A); in all other cases it was defined as a mild-to-moderate heterogeneous, non-HRP plaque (Group B). CTA features, including the presence of the rim sign (thin peripheral adventitial calcification <2 mm with internal soft plaque >=2 mm), NASCET percent diameter stenosis, maximum plaque thickness, ulceration, calcification, and intraluminal thrombus, were recorded. Univariate and multivariate binary logistic regression were used to evaluate possible predictors of HRP, while multivariable Cox proportional hazards regression was used to assess independent predictors of MACE. Results: One hundred eighty-five asymptomatic patients (mean age 73 +/- 8 years, 131 men) undergoing carotid endarterectomy were included. Of these, 124 (67%) had HRP and 61 (33%) did not. Diabetic complications (OR 2.4, 95% CI: 1.1-5.1, P=0.01), NASCET stenosis >= 75% (OR 2.4, 95% CI: 1.2-3.7, P=0.02), and the carotid rim sign (OR 4.3, 95% CI: 3.9-7.3, P<0.001) were independently associated with HRP. However, HRP was not associated with a higher risk of MACE (freedom from MACE at 5 years: HRP 83.4% vs. non-HRP 87.8%, P=0.72) or with reduced survival (5-year survival estimates: HRP 96.4% vs. non-HRP 94.6%, P=0.76). Conclusions: A high prevalence of HRP (67%) was observed in asymptomatic, elderly T2D patients. Independent predictors of HRP were diabetic complications, NASCET stenosis >= 75%, and the carotid rim sign (OR 4.3, 95% CI: 3.9-7.3, P<0.001). HRP was not associated with an increased risk of MACE during a mean follow-up of 39 +/- 24 years.