
    Enabling Additional Parallelism in Asynchronous JavaScript Applications

    JavaScript is a single-threaded programming language, so asynchronous programming is practiced out of necessity to ensure that applications remain responsive in the presence of user input or interactions with file systems and networks. However, many JavaScript applications execute in environments that do exhibit concurrency by, e.g., interacting with multiple or concurrent servers, or by using file systems managed by operating systems that support concurrent I/O. In this paper, we demonstrate that JavaScript programmers often schedule asynchronous I/O operations suboptimally, and that reordering such operations may yield significant performance benefits. Concretely, we define a static side-effect analysis that can be used to determine how asynchronous I/O operations can be refactored so that asynchronous I/O-related requests are made as early as possible, and so that the results of these requests are awaited as late as possible. While our static analysis is potentially unsound, we have not encountered any situations where it suggested reorderings that change program behavior. We evaluate the refactoring on 20 applications that perform file- or network-related I/O. For these applications, we observe speedups ranging between 0.99% and 53.6% (8.1% on average) for the tests that execute refactored code.
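
    As an illustration of the refactoring described above, consider the following hedged sketch (file names and function names are hypothetical, not taken from the paper's benchmarks): issuing both reads before awaiting either lets the operating system overlap the I/O.

        // Before: each request is awaited immediately, serializing the reads.
        import { promises as fs } from "fs";

        async function loadSequential(): Promise<string[]> {
          const a = await fs.readFile("a.json", "utf8"); // second read cannot start yet
          const b = await fs.readFile("b.json", "utf8");
          return [a, b];
        }

        // After: requests are made as early as possible and awaited as late
        // as possible, the reordering the paper's side-effect analysis enables.
        async function loadReordered(): Promise<string[]> {
          const aPending = fs.readFile("a.json", "utf8"); // request issued immediately
          const bPending = fs.readFile("b.json", "utf8"); // issued before awaiting a
          return [await aPending, await bPending];        // results awaited late
        }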

    Static Analysis for Asynchronous JavaScript Programs

    Asynchrony has become an inherent element of JavaScript, in an effort to improve the scalability and performance of modern web applications. To this end, JavaScript provides programmers with a wide range of constructs and features for developing code that performs asynchronous computations, including but not limited to timers, promises, and non-blocking I/O. However, the data flow imposed by asynchrony is implicit and not always well understood by developers, who introduce many asynchrony-related bugs into their programs. Worse, few tools and techniques are available for analyzing and reasoning about such asynchronous applications. In this work, we address this issue by designing and implementing one of the first static analysis schemes capable of dealing with almost all the asynchronous primitives of JavaScript up to the 7th edition of the ECMAScript specification. Specifically, we introduce the callback graph, a representation for capturing data flow between asynchronous code. We exploit the callback graph to design a more precise analysis that respects the execution order between different asynchronous functions. We parameterize our analysis with a novel context-sensitivity flavor, yielding multiple analysis variations for building the callback graph. We performed a number of experiments on a set of hand-written and real-world JavaScript programs. Our results show that our analysis can be applied to medium-sized programs, achieving 79% precision on average. The findings further suggest that analysis sensitivity is beneficial for the vast majority of the benchmarks: it improves precision by up to 28.5%, achieving 88% precision on average, without significantly sacrificing performance.
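
    A minimal example of the implicit data flow such an analysis must capture (the function is a hypothetical stub, not drawn from the paper's benchmarks): the callback graph would record that the second callback consumes the first callback's result and therefore runs after it.

        // Stub standing in for real asynchronous I/O.
        function fetchUser(id: number): Promise<{ name: string }> {
          return Promise.resolve({ name: "user" + id });
        }

        fetchUser(42)
          .then(user => user.name)           // callback 1: receives the resolved user
          .then(name => console.log(name));  // callback 2: consumes callback 1's result
        // A callback graph edge callback1 -> callback2 lets the analysis respect
        // this execution order instead of merging the two callbacks' effects.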

    A Trusted Infrastructure for Symbolic Analysis of Event-Driven Web Applications

    We introduce a trusted infrastructure for the symbolic analysis of modern event-driven Web applications. This infrastructure consists of reference implementations of the DOM Core Level 1, DOM UI Events, JavaScript Promises and the JavaScript async/await APIs, all underpinned by a simple Core Event Semantics which is sufficiently expressive to describe the event models underlying these APIs. Our reference implementations are trustworthy in that three follow the appropriate standards line-by-line and all are thoroughly tested against the official test suites, passing all the applicable tests. Using the Core Event Semantics and the reference implementations, we develop JaVerT.Click, a symbolic execution tool for JavaScript that, for the first time, supports reasoning about JavaScript programs that use multiple event-related APIs. We demonstrate the viability of JaVerT.Click by proving both the presence and absence of bugs in real-world JavaScript code.
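
    The following hedged sketch (element id and URL are made up) shows the kind of API mix JaVerT.Click targets: a DOM UI event handler whose body uses Promises through async/await, so the analysis must consider events that may be dispatched while the handler is suspended.

        const button = document.getElementById("save");

        button?.addEventListener("click", async () => {
          // The handler suspends at the await; other events can fire meanwhile,
          // which is exactly the interleaving a symbolic analysis must explore.
          const response = await fetch("/save", { method: "POST" });
          console.log(response.ok ? "saved" : "failed");
        });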

    Understanding asynchronous code

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 61-64). JavaScript on the web is difficult to debug due to its asynchronous and dynamic nature. Traditional debuggers are often little help because the language's idioms rely heavily on non-linear control flow via function pointers. The aim of this work is to create a debugging interface that helps users understand complicated control flow in languages like JavaScript. This thesis presents a programming editor extension called Theseus that uses program tracing to provide real-time in-editor feedback so that programmers can answer questions quickly as they write new code and interact with their application. Theseus augments the call graph with semantic edges that allow users to make intuitive leaps through program traces, such as from the start of an asynchronous network request to its response. Participants in lab and classroom studies found Theseus to be a usable replacement for traditional breakpoint and logging tools, though no significant difference was found in their ability to complete programming tasks. By Thomas Lieber. S.M.
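
    A small sketch of the problem Theseus addresses (URL and names are illustrative only): by the time the response callback runs, the stack that issued the request is gone, so a conventional breakpoint inside the callback shows no connection to the code that registered it; Theseus's semantic edges restore that link in the trace.

        import * as https from "https";

        function startRequest(url: string): void {
          https.get(url, response => {
            // A semantic edge would connect this callback back to the
            // https.get call that registered it.
            console.log("status:", response.statusCode);
          });
        }

        startRequest("https://example.com");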

    The viability of Dart for full-stack development: a case study (A viabilidade de Dart em desenvolvimento full stack: um estudo de caso)

    In 2012, Google released the Dart language which, more recently, thanks to Flutter, has received a boost in popularity and is often referred to as a full-stack language/ecosystem suitable for developing both front-end and back-end solutions. However, aside from Flutter for mobile, Dart usage is still quite low when it comes to developing enterprise-level solutions. In this dissertation, we investigate the suitability of Dart for developing a full-stack solution, with a special focus on its back-end support. With that in mind, we set up a typical scenario involving both a mobile and a web front end, each communicating with a back-end server via a REST endpoint. For performance comparison, we deployed an equivalent back-end server developed with Spring Boot, a popular Java-based solution, which served as the reference. The main result is that a full-stack system can be developed with just the Dart/Flutter ecosystem and that, in our scenario, this system's performance surpassed Spring Boot's. From a developer's perspective, Dart's off-the-shelf asynchronous constructs (e.g., streams, Futures) are a clear improvement over the corresponding mechanisms in Java/Spring Boot, since they avoid the asynchronous configuration and annotations that typical Java solutions require. However, despite some interesting projects arising, once Google's own packages/resources are excluded, most third-party packages either depend on out-of-date dependencies due to compatibility issues or have been abandoned entirely. This had an impact during the development stage, as it led to unplanned constraints when choosing the packages and/or frameworks used. Mestrado em Engenharia de Computadores e Telemática

    Knowledge-driven architecture composition

    Service interoperability for embedded devices is a mandatory feature for dynamically changing Internet-of-Things (IoT) and Industry 4.0 software platforms. Service interoperability is achieved on a technical, syntactic, and semantic level. If service interoperability is achieved on all layers, the plug-and-play functionality known from USB storage sticks or printer drivers becomes feasible. As a result, micro-batch-size production, individualized automation solutions, or job-order production become affordable. However, interoperability at the semantic layer is still a problem for the maturing class of IoT systems.

    Current solutions for achieving semantic integration of IoT devices' heterogeneous services include standards, machine-understandable service descriptions, and the implementation of software adapters. Standardization bodies such as the VDMA tackle the problem by providing a reference software architecture and an information meta-model for building up domain standards. For instance, the universal machine technology interface (UMATI) facilitates data exchange between machines, components, and installations, and their integration into a customer- and user-specific IT ecosystem for mechanical engineering and plant construction worldwide. Automated component integration approaches fill the gap for software interfaces that do not rely on a global standard: they translate required into provided software interfaces according to the architectural styles needed (e.g., client-server, layered, publish-subscribe, or cloud-based), using additional component descriptions. Interoperability at the semantic layer is achieved by relying on a shared domain vocabulary (e.g., an ontology) and service description (e.g., SAWSDL) used by all devices involved. If such service descriptions are available, together with machine-understandable knowledge of how to integrate software components on the functional and behavioral level, plug-and-play scenarios are feasible.

    Neither standards nor formal service descriptions can be applied effectively to IoT systems, however, because both rely on the assumption that the semantic domain is completely known when they are written down. This assumption is hard to maintain given the ever-increasing number of independently developed and connected IoT devices (an estimated 30.73 billion in 2020 and 75.44 billion in 2025). If standards are applied in IoT systems, they must be updated continuously so that they contain the most recent domain knowledge agreed upon centrally and ahead of application. Although formal descriptions of concrete integration contexts can be produced in a decentralized manner, they still rely on the assumption that the knowledge, once written down, is complete; hence, if an interoperable service from a new device becomes available that was not considered in the initial integration context, the formal descriptions must likewise be updated continuously. Both the formalization effort and keeping standards up to date result in too much additional engineering effort. Consequently, practitioners fall back on implementing software adapters manually, but this tedious solution hardly scales with the increasing number of IoT devices.

    In this work, we introduce a novel engineering method that explicitly allows for an incomplete semantic domain description without losing the ability to integrate IoT systems automatically. Dropping the completeness claim requires managing incomplete integration knowledge. By sharing integration knowledge centrally, we assist the system integrator in automating software adapter generation, and, beyond existing approaches, we enable semantic integration for services by making integration knowledge reusable. We show empirically, in a study with students, that integration effort can be lowered in a home automation context.
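
    As a minimal sketch of the adapter idea discussed above (all interface and topic names are hypothetical), a required client-server style interface can be mapped onto a device's provided publish-subscribe interface so the integrating system never sees the mismatch:

        interface RequiredThermometer {            // what the integrating system needs
          readCelsius(): Promise<number>;
        }

        interface ProvidedSensor {                 // what the IoT device offers
          subscribe(topic: string, cb: (value: number) => void): void;
        }

        // Generated or hand-written adapter translating between the two styles.
        function adapt(sensor: ProvidedSensor): RequiredThermometer {
          return {
            readCelsius: () =>
              new Promise<number>(resolve =>
                sensor.subscribe("temperature", fahrenheit =>
                  resolve((fahrenheit - 32) * 5 / 9))), // semantic unit translation
          };
        }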

    Integration of an Automatic Fault Localization Tool in an IDE and its Evaluation

    Debugging is one of the most demanding and error-prone tasks in software development. Addressing bugs has become ever more expensive as software complexity and size have increased. As a result, several researchers have attempted to improve developers' debugging experience and efficiency by automating as much of the process as possible. Existing automated fault-finding tools assist developers in detecting bugs, but they are not yet widely available to software engineers; making such tools available to developers can save debugging time and increase productivity. Accordingly, the main goal of this dissertation is to incorporate an automatic fault localization tool into an Integrated Development Environment (IDE). The selected IDE was Visual Studio Code, a source-code editor developed by Microsoft for Windows, Linux, and macOS. Visual Studio Code is one of the most widely used IDEs and is known for its flexible API, which allows nearly every aspect of it to be customized. The chosen automatic fault localization tool was FLACOCO, a recent fault localization tool that supports up to the most recent versions of Java. This document also contains a full overview of several fault localization methodologies and tools, as well as an explanation of the complete planning and development process of the resulting Visual Studio Code extension. After development and deployment were completed, an evaluation was carried out. The extension was evaluated through a user study in which thirty Java professionals took part. The test had two parts: in the first, users used the extension to complete two debugging tasks in previously unknown projects; in the second, they filled out a satisfaction questionnaire for further analysis. The results show that the extension was a success, with the system being rated positively in all areas; nevertheless, it may be revised in light of the questionnaire responses, with the suggestions received being considered for future work.
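
    FLACOCO belongs to the spectrum-based fault localization family, which ranks program entities by suspiciousness scores computed from test coverage. The sketch below shows the standard Ochiai metric commonly used in that family; it is an illustration of the technique, not FLACOCO's actual implementation.

        interface Spectrum {
          failedCovering: number;  // failing tests that execute the line
          failedTotal: number;     // all failing tests in the suite
          passedCovering: number;  // passing tests that execute the line
        }

        // Ochiai: e_f / sqrt(totalFailed * (e_f + e_p))
        function ochiai(s: Spectrum): number {
          const denom = Math.sqrt(s.failedTotal * (s.failedCovering + s.passedCovering));
          return denom === 0 ? 0 : s.failedCovering / denom;
        }

        // A line executed by every failing test and few passing tests scores high:
        console.log(ochiai({ failedCovering: 3, failedTotal: 3, passedCovering: 1 })); // ~0.87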

    An architectural framework for expert identification based on social network analysis

    Social network analysis has been widely used in different application contexts. For example, in Global Software Development (GSD), where multiple developers with diverse skills and knowledge are involved, social network models help in understanding how these developers collaborate. Finding experts who can help address critical elements or issues in a project is a challenging and critical task. This is especially true in the context of Global Software Development projects, where developers with specific skills and knowledge often need to be identified. In this sense, searching for essential members is a valuable task, as they are fundamental to the evolution of the network. This work proposes an architectural framework for expert identification as a hybrid solution that combines syntactic and semantic analysis of social networks. We seek to address research challenges related to designing recommendation systems for analyzing social structures in the Global Software Development context. In this solution, we define a model of the social network capable of capturing collaboration between developers, incorporate strategies for temporal analysis of the network, explore the network using machine learning algorithms, propose an ontology to enrich the data semantically, and adopt a performance-oriented approach for high-volume social network analysis. We conducted four case studies using data extracted from GitHub to evaluate the proposed approach, as well as a more extensive dataset for the performance studies. The case studies provide evidence that the proposed method can identify specialists, highlighting their expertise and importance to the evolution of the social network. CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
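
    One simple ingredient of such a framework, shown here as a hedged sketch with made-up data, is ranking developers in a collaboration graph by degree centrality, i.e., by the number of collaboration links each developer has:

        // Undirected collaboration edges mined from, e.g., shared repositories.
        const collaborations: [string, string][] = [
          ["ana", "bruno"], ["ana", "carla"], ["bruno", "carla"], ["ana", "davi"],
        ];

        const degree = new Map<string, number>();
        for (const [a, b] of collaborations) {
          degree.set(a, (degree.get(a) ?? 0) + 1);
          degree.set(b, (degree.get(b) ?? 0) + 1);
        }

        // Sort descending: the best-connected developers are expert candidates.
        const ranked = [...degree.entries()].sort((x, y) => y[1] - x[1]);
        console.log(ranked); // [["ana", 3], ["bruno", 2], ["carla", 2], ["davi", 1]]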

    Performance comparison between a distributed particle swarm algorithm and a centralised algorithm

    Particle Swarm Optimisation (PSO) is a particular form of swarm intelligence, which is itself an innovative intelligent paradigm for solving optimisation problems. PSO is generally used to find a global optimum of a single optimisation function. This typically occurs on one node (machine), but there has been a significant body of research into creating distributed implementations of the PSO algorithm. Such research has often focused on the creation and performance of the distributed implementation in isolation, or compared it to other distributed algorithms. This research aims to bridge a gap in the existing literature by testing a distributed implementation of a PSO algorithm against a centralised implementation and investigating what gains, if any, there are in using a distributed implementation over a centralised one. The focus is primarily on the time taken for the algorithm to successfully find a global minimum of a specific fitness function, but other aspects are examined over the course of the study.
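
    Both implementations in such a study share the canonical PSO update rule, sketched below for a single particle (the coefficients are common textbook values, not necessarily those used in this work):

        // v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v
        const w = 0.7, c1 = 1.5, c2 = 1.5; // inertia and acceleration coefficients

        function step(x: number[], v: number[], pbest: number[], gbest: number[]): void {
          for (let i = 0; i < x.length; i++) {
            const r1 = Math.random(), r2 = Math.random();
            v[i] = w * v[i]
                 + c1 * r1 * (pbest[i] - x[i])   // pull toward the particle's best
                 + c2 * r2 * (gbest[i] - x[i]);  // pull toward the swarm's best
            x[i] += v[i]; // move the particle along its updated velocity
          }
        }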