1,103 research outputs found

    Mapping System Level Behaviors with Android APIs via System Call Dependence Graphs

    Due to Android's open-source nature and low barriers to entry for developers, millions of developers and third-party organizations have been attracted to the Android ecosystem. However, over 90 percent of mobile malware is found to target Android. Although Android provides multiple security features and layers to protect user data and system resources, over-privileged applications still appear in the Google Play Store and in third-party Android app stores in the wild. In this paper, we propose an approach to map system-level behaviors to Android APIs, based on the observation that system-level behaviors cannot be avoided while sensitive Android APIs can be evaded. To the best of our knowledge, our approach is the first to map system-level behaviors to Android APIs through System Call Dependence Graphs. The study also shows that our approach can effectively identify potential permission abuse with almost negligible performance impact. Comment: 14 pages, 6 figures
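The core idea above, detecting a system-level behavior as a data-flow path through a dependence graph of syscalls, can be sketched minimally. This is a hypothetical illustration only: the `SCDG` class, the syscall names, and the network-read-to-file-write "behavior" pattern are invented for demonstration and are not taken from the paper.

```python
# Hypothetical sketch: a System Call Dependence Graph (SCDG) as an
# adjacency map, with a behavior detected as reachability between syscalls.
# All names and the example pattern are illustrative, not the paper's API.
from collections import defaultdict

class SCDG:
    def __init__(self):
        # syscall -> set of syscalls that depend on its output
        self.edges = defaultdict(set)

    def add_dependence(self, producer, consumer):
        self.edges[producer].add(consumer)

    def reachable(self, start):
        """All syscalls transitively dependent on `start`."""
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            for nxt in self.edges[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    def matches_behavior(self, source, sink):
        """A system-level behavior holds if data flows from source to sink."""
        return sink in self.reachable(source)

# Example: data read from the network ends up written to a file --
# the kind of flow an over-privileged app might exhibit.
g = SCDG()
g.add_dependence("recvfrom", "read")
g.add_dependence("read", "write")
print(g.matches_behavior("recvfrom", "write"))  # True
```

Because the graph captures dependences between syscalls rather than API calls, the behavior cannot be hidden by swapping one sensitive API for another.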

    Advanced Strategies for Precise and Transparent Debugging of Performance Issues in In-Memory Data Store-Based Microservices

    The rise of microservice architectures has revolutionized application design, fostering adaptability and resilience. These architectures facilitate scaling and encourage collaborative efforts among specialized teams, streamlining deployment and maintenance. Critical to this ecosystem is the demand for low latency, prompting the adoption of cloud-based structures and in-memory data storage. This shift optimizes data access times, supplanting direct disk access and driving the adoption of non-relational databases. Despite their benefits, microservice architectures present challenges in system performance and debugging, particularly as complexity grows. Performance issues can readily cascade through components, jeopardizing user satisfaction and service quality. Existing monitoring approaches often require code instrumentation, demanding extensive developer involvement. Recent strategies like proxies and service meshes aim to enhance tracing transparency but introduce added configuration complexity. Our solution introduces a new framework that transparently integrates heterogeneous microservices, enabling the creation of tailored tools for fine-grained performance debugging, especially for in-memory data store-based microservices. This approach leverages transparent user-level tracing, employing a two-level abstraction analysis model to pinpoint key performance influencers. It harnesses system tracing and advanced analysis to provide visualization tools for identifying intricate performance issues. In a performance-centric landscape, this approach offers a promising solution to ensure peak efficiency and reliability for in-memory data store-based cloud applications.
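The two-level idea described above, coarse request spans with finer data-store operation spans nested inside, can be sketched with a minimal span tracer. This is a simplified stand-in, not the paper's framework: the `Tracer` class and span names are assumptions made for illustration.

```python
# Minimal sketch of user-level span tracing with two abstraction levels:
# level 1 = request handling, level 2 = in-memory data store operations.
# The Tracer class and names are illustrative, not the paper's actual tool.
import time
from contextlib import contextmanager

class Tracer:
    def __init__(self):
        self.events = []   # (depth, name, duration_seconds)
        self._depth = 0

    @contextmanager
    def span(self, name):
        self._depth += 1
        start = time.perf_counter()
        try:
            yield
        finally:
            self.events.append((self._depth, name,
                                time.perf_counter() - start))
            self._depth -= 1

tracer = Tracer()
with tracer.span("handle_request"):      # level 1: the request
    with tracer.span("cache_get"):       # level 2: the data-store op
        time.sleep(0.001)

for depth, name, dur in tracer.events:
    print("  " * depth, name, f"{dur * 1000:.2f} ms")
```

A real transparent tracer would attach such spans without modifying application code (e.g., via library interposition), which is precisely the instrumentation burden the abstract argues should be lifted from developers.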

    The Requirements Editor RED


    Network analysis of large scale object oriented software systems

    PhD thesis. The evolution of software engineering knowledge, technology, tools, and practices has seen progressive adoption of new design paradigms. Currently, the predominant design paradigm is object-oriented design. Despite the advocated and demonstrated benefits of object-oriented design, there are known limitations of static software analysis techniques for object-oriented systems, and there are many current and legacy object-oriented software systems that are difficult to maintain using the existing reverse engineering techniques and tools. Consequently, there is renewed interest in dynamic analysis of object-oriented systems, and the emergence of large and highly interconnected systems has fuelled research into the development of new scalable techniques and tools to aid program comprehension and software testing. In dynamic analysis, a key research problem is efficient interpretation and analysis of large volumes of precise program execution data to facilitate efficient handling of software engineering tasks. Some of the techniques employed to improve the efficiency of analysis are inspired by empirical approaches developed in other fields of science and engineering that face comparable data analysis challenges. This research focuses on the application of empirical network analysis measures to dynamic analysis data of object-oriented software. The premise of this research is that the methods that contribute significantly to the object collaboration network's structural integrity are also important for delivery of the software system's function. This thesis makes two key contributions. First, a definition is proposed for the concept of the functional importance of methods of object-oriented software. Second, the thesis proposes and validates a conceptual link between object collaboration networks and the properties of a network model with power-law connectivity distribution.
Results from empirical software engineering experiments on JHotdraw and Google Chrome are presented. The results indicate that the five standard centrality-based network measures considered can be used to predict functionally important methods with a significant level of accuracy. The search for the functional importance of software elements is an essential starting point for program comprehension and software testing activities. The proposed definition and application of network analysis have the potential to improve the efficiency of post-release software engineering activities by facilitating rapid identification of potentially functionally important methods in object-oriented software. These results, with some refinement, could be used to perform change impact prediction and a host of other potentially beneficial applications to improve software engineering techniques.
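To make the centrality idea concrete, here is a minimal sketch of ranking methods of an object collaboration network by degree centrality, one standard centrality measure of the kind the thesis evaluates. The call graph below is invented for demonstration; the thesis's actual data comes from dynamic analysis of JHotdraw and Google Chrome.

```python
# Illustrative sketch: degree centrality over an invented object
# collaboration network. Method names are hypothetical examples.
from collections import defaultdict

calls = [  # (caller, callee) pairs observed during execution
    ("Main.run", "Editor.open"),
    ("Editor.open", "Doc.load"),
    ("Editor.open", "View.draw"),
    ("Doc.load", "View.draw"),
    ("Tool.click", "View.draw"),
    ("Main.run", "View.draw"),
]

degree = defaultdict(int)
for caller, callee in calls:
    degree[caller] += 1   # out-degree contribution
    degree[callee] += 1   # in-degree contribution

# Methods with the highest degree are candidates for functional importance.
ranking = sorted(degree.items(), key=lambda kv: kv[1], reverse=True)
print(ranking[0][0])  # View.draw, the most connected method here
```

In practice, other standard measures (betweenness, closeness, eigenvector centrality) rank nodes differently, which is why comparing several measures against known functionally important methods, as the thesis does, is the interesting experiment.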

    Development of a performance analysis environment for parallel pattern-based applications

    One of the challenges the scientific community faces nowadays is the parallel processing of data. Every day we produce ever more overwhelming amounts of data, to the point that these data volumes grow exponentially and cannot be processed as we would like. The importance of this processing stems mainly from the need in the scientific domain to make new advances, or simply to discover new algorithms capable of solving increasingly complex experiments whose resolution was infeasible years ago with the resources then available. As a consequence, the internal architecture of computers has changed in order to increase their computing capacity and thus cope with the need for massive data processing. The scientific community has therefore implemented different pattern-based parallel programming frameworks in order to run experiments faster and more efficiently. Unfortunately, using these programming paradigms is not a simple task, since it requires expertise and programming skills. This is further complicated when developers are not aware of the program's internal behaviour, which leads to unexpected results in certain parts of the code. Inevitably, a need arises to develop tools that help this community analyze the performance and results of their experiments. Hence, this bachelor thesis presents the development of a performance analysis environment for parallel pattern-based applications as a solution to that problem. Specifically, this environment is composed of two commonly used techniques, profiling and tracing, which have been added to the GrPPI framework.
    In this way, users can obtain a general assessment of their applications' performance and act according to the results obtained.
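The idea of attaching profiling to a parallel pattern can be sketched briefly. GrPPI itself is a C++ framework, so the following is only a simplified Python analogue: the `profiled_map` function and its `profile` parameter are assumptions invented to illustrate instrumenting a map pattern transparently, not GrPPI's interface.

```python
# Simplified Python analogue of adding profiling to a pattern-based
# parallel framework (GrPPI is C++; this only illustrates the idea of
# instrumenting a parallel map pattern without changing user code).
import time
from concurrent.futures import ThreadPoolExecutor

def profiled_map(func, items, workers=4, profile=None):
    """Parallel map pattern; if `profile` is a dict, record per-item timing."""
    def timed(x):
        start = time.perf_counter()
        result = func(x)
        if profile is not None:
            profile.setdefault(func.__name__, []).append(
                time.perf_counter() - start)
        return result
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(timed, items))

stats = {}
out = profiled_map(lambda x: x * x, range(8), profile=stats)
print(out)                      # [0, 1, 4, 9, 16, 25, 36, 49]
print(len(stats["<lambda>"]))   # 8 timing samples collected
```

The pattern's result is unchanged whether profiling is on or off, which is the property that lets users enable the analysis environment without altering their application logic.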