143 research outputs found

    Program analysis to support quality assurance techniques for web applications

    As web applications occupy an increasingly important role in the day-to-day lives of millions of people, testing and analysis techniques that ensure that these applications function with a high level of quality are becoming even more essential. However, many software quality assurance techniques are not directly applicable to modern web applications. Certain characteristics, such as the use of HTTP and generated object programs, can make it difficult to identify software abstractions used by traditional quality assurance techniques. More generally, many of these abstractions are implemented differently in web applications, and the lack of techniques to identify them complicates the application of existing quality assurance techniques to web applications. This dissertation describes the development of program analysis techniques for modern web applications and shows that these techniques can be used to improve quality assurance. The first part of the research focuses on the development of a suite of program analysis techniques that identifies useful abstractions in web applications. The second part of the research evaluates whether these program analysis techniques can be used to successfully adapt traditional quality assurance techniques to web applications, improve existing web application quality assurance techniques, and develop new techniques focused on web application-specific issues. The work in quality assurance techniques focuses on improving three different areas: generating test inputs, verifying interface invocations, and detecting vulnerabilities. The evaluations of the resulting techniques show that the use of the program analyses results in significant improvements in existing quality assurance techniques and facilitates the development of new useful techniques.
    Ph.D. Committee Chair: Orso, Alessandro; Committee Member: Giffin, Jon; Committee Member: Harrold, Mary Jean; Committee Member: Rugaber, Spencer; Committee Member: Tip, Fran
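    As a rough illustration of the kind of interface abstraction such analyses recover, the sketch below scans Java servlet sources for request.getParameter calls to list the parameters each servlet accepts. The directory layout, regex, and function names are assumptions made for this example; this is not the dissertation's technique.

        # Illustrative sketch (not the dissertation's analysis): recover a web
        # application's accepted interface, the set of request parameters each
        # servlet reads, by scanning Java sources for request.getParameter(...).
        import re
        from pathlib import Path

        PARAM_CALL = re.compile(r'\.getParameter\(\s*"([^"]+)"\s*\)')

        def accepted_interface(servlet_dir):
            """Map each servlet source file to the parameter names it reads."""
            interfaces = {}
            for source in Path(servlet_dir).glob("**/*.java"):
                params = set(PARAM_CALL.findall(source.read_text(errors="ignore")))
                if params:
                    interfaces[source.name] = params
            return interfaces

        # Parameter names recovered this way can seed test-input generation or be
        # checked against the arguments that calling pages actually supply.
        for servlet, params in accepted_interface("src/main/java").items():
            print(servlet, sorted(params))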

    LOW-COST OPEN-SOURCE GMAW-BASED METAL 3-D PRINTING: MONITORING, SLICER, OPTIMIZATION, AND APPLICATIONS

    Low-cost and open-source gas metal arc welding (GMAW)-based 3-D printing has been demonstrated, yet the electrical design and software were not developed enough to enable widespread adoption. This thesis provides three novel technical improvements, based on the application of mechatronic and software theory, that when combined demonstrate the ability to perform distributed digital manufacturing of steel and aluminum parts at the small and medium enterprise (SME) scale. First, low-cost metal inert gas welders contain none of the power monitoring needed to tune GMAW 3-D printers. To obtain data about power and energy usage during printing, an integrated monitoring system was developed to measure current (I) and voltage (V) in real time. The new design of this monitoring system integrates an open-source microcontroller and free and open-source software on the open-source metal 3-D printer to record the data. Second, the primary obstacle to the diffusion of this technology was that existing slicing software, which determines the toolpath of the printhead, was optimized for polymer 3-D printing and is inappropriate for printed parts made from metal due to their mechanical strength. Previous prints were accomplished by manually designing the toolpath, which is not practical for real use by an extended user base. To overcome this problem, the free and open-source slicing software CuraEngine was forked into MOSTMetalCura, which supports the needs of GMAW-based metal 3-D printing. The new slicer calculates the optimized wire feed rate from the printing speed, bead width, layer height, and material diameter. Previous studies have shown that GMAW-based metal 3-D printing is capable of fabricating parts with good layer adhesion and porosity. However, this preliminary work lacked demonstrations of real-world applications. Finally, in this work, the practical applications of open-source GMAW-based metal 3-D printing are demonstrated for both developing-world and developed-world applications, including: 1) fixing an existing part by adding a 3-D metal feature, 2) creating a product using the substrate as part of the component, 3) 3-D printing useful objects in high resolution, 4) producing near-net-shape objects, and 5) making an integrated product using a combination of steel and polymer 3-D printing. The results show that low-cost and open-source GMAW-based metal 3-D printing is ready for distributed manufacturing by SMEs and adequate for a wide range of applications.
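    The wire-feed-rate setting described above follows from conservation of deposited volume. Below is a minimal Python sketch of that relation, assuming a rectangular bead cross-section and solid wire; the exact model implemented in MOSTMetalCura may differ.

        # Hedged sketch: wire feed rate from print speed, bead width, layer height
        # and wire (material) diameter, assuming volume conservation and a
        # rectangular bead cross-section. MOSTMetalCura's actual formula may differ.
        import math

        def wire_feed_rate(print_speed, bead_width, layer_height, wire_diameter):
            """All lengths in mm, speeds in mm/s; returns the wire feed rate in mm/s."""
            bead_area = bead_width * layer_height            # deposited cross-section
            wire_area = math.pi * (wire_diameter / 2) ** 2   # wire cross-section
            return print_speed * bead_area / wire_area

        # Example: 8 mm/s travel, 4 mm bead, 2 mm layer, 0.9 mm wire
        print(round(wire_feed_rate(8, 4, 2, 0.9), 1), "mm/s")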

    Improving Data Flow Tracking for Data Usage Control Systems

    This thesis provides a new, hybrid approach in the field of Distributed Data Usage Control (DUC) to track the flow of data inside applications. A combination of static information flow analysis and dynamic data flow tracking makes it possible to selectively and modularly track only those program locations that are actually relevant for a flow of data. This ensures the portability of a monitored application at a low performance overhead. Beyond that, DUC systems benefit from the present approach because it reduces over-approximation in data flow tracking and thus provides a more precise basis for enforcing data usage restrictions.
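    A highly simplified sketch of the hybrid idea: the static information flow analysis marks the program locations that can actually propagate monitored data, and the runtime monitor tracks taint only at those locations. All names and the taint representation below are illustrative assumptions, not the thesis implementation.

        # Illustrative sketch (names are assumptions, not the thesis implementation):
        # the static pass yields the set of program locations that can propagate
        # monitored data; the dynamic monitor then tracks taint only at those
        # locations and leaves the rest of the program uninstrumented.
        RELEVANT_LOCATIONS = {"copy_field"}   # assumed output of the static analysis

        tainted_objects = set()               # ids of values carrying tracked data

        def track(location, result, inputs=()):
            """Dynamic tracking step, run only for statically relevant locations."""
            if location in RELEVANT_LOCATIONS and any(id(x) in tainted_objects for x in inputs):
                tainted_objects.add(id(result))
            return result

        sensitive = "customer record"
        tainted_objects.add(id(sensitive))                          # data source
        copy = track("copy_field", sensitive.upper(), [sensitive])  # relevant: tracked
        label = track("format_log", "ok", [sensitive])              # irrelevant: skipped
        print(id(copy) in tainted_objects, id(label) in tainted_objects)  # True False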

    An investigation into the unsoundness of static program analysis : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Palmerston North, New Zealand

    Static program analysis is widely used in many software applications such as security analysis, compiler optimisation, program verification and code refactoring. In contrast to dynamic analysis, static analysis can perform a whole-program analysis without the need to run the program under analysis. While it provides full program coverage, one of the main issues with static analysis is imprecision, i.e., the potential to report false positives due to overestimating actual program behaviours. For many years, research in static program analysis has focused on reducing such imprecision while improving scalability. However, static program analysis may also miss some critical parts of the program, resulting in program behaviours not being reported. A typical example of this is the case of dynamic language features, where certain behaviours are hard to model due to their dynamic nature. The term "unsoundness" has been used to describe those missed program behaviours. Compared to static analysis, dynamic analysis has the advantage of obtaining precise results, as it only captures what has been executed at run time. However, dynamic analysis is also limited to the observed program executions. This thesis investigates the unsoundness issue in static program analysis. We first investigate causes of unsoundness in terms of Java dynamic language features and identify potential usage patterns of such features. We then report the results of a number of empirical experiments we conducted in order to identify and categorise the sources of unsoundness in state-of-the-art static analysis frameworks. Finally, we quantify and measure the level of unsoundness in static analysis in the presence of dynamic language features. The models developed in this thesis can be used by static analysis frameworks and tools to improve their soundness.
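    To make the notion of unsoundness concrete, the sketch below (a Python analogue of the Java dynamic-feature cases studied in the thesis, not the thesis's tooling) builds a naive static call list from a snippet's AST and misses a call made through dynamic dispatch.

        # Illustrative sketch of unsoundness (not the thesis's tooling): a naive
        # static call-graph pass over Python's AST records only direct calls, so
        # the dynamic dispatch below is silently missed.
        import ast

        SNIPPET = """
        def direct():
            pass

        def reflective():
            pass

        def main():
            direct()                       # found by the static pass
            globals()["reflective"]()      # missed: dynamic dispatch
        """

        calls = {node.func.id
                 for node in ast.walk(ast.parse(SNIPPET))
                 if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)}

        print("statically discovered callees:", calls)   # {'direct', 'globals'}
        # 'reflective' never appears as a callee even though it can run at run time.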

    A compiler level intermediate representation based binary analysis system and its applications

    Analyzing and optimizing programs from their executables has received considerable attention recently in the research community. There has been a tremendous amount of activity in executable-level research targeting varied applications such as security vulnerability analysis, untrusted code analysis, malware analysis, program testing, and binary optimizations. The vision of this dissertation is to advance the field of static analysis of executables and bridge the gap between source-level analysis and executable analysis. The main thesis of this work is scalable static binary rewriting and analysis using a compiler-level intermediate representation, without relying on the presence of metadata such as debug or symbol information. In spite of a significant overlap in the overall goals of several source-code methods and executable-level techniques, several sophisticated transformations that are well understood and implemented in source-level infrastructures have yet to become available in executable frameworks. It is well known that a standalone executable without any metadata is less amenable to analysis than the source code. Nonetheless, we believe that one of the prime reasons behind the limitations of existing executable frameworks is that they define their own intermediate representations (IRs), which are significantly more constrained than the IR used in a compiler. Intermediate representations used in existing binary frameworks lack high-level features such as an abstract stack, variables, and symbols, and are even machine-dependent in some cases. This severely limits the application of well-understood compiler transformations to executables and necessitates new research to make them applicable. In the first part of this dissertation, we present techniques to convert binaries to the same high-level intermediate representation that compilers use. We propose methods to segment the flat address space in an executable containing undifferentiated blocks of memory. We demonstrate the inadequacy of existing variable identification methods for their promotion to symbols and present our methods for symbol promotion. We also present methods to convert the physically addressed stack in an executable to an abstract stack. The proposed methods are practical since they do not employ symbolic, relocation, or debug information, which are usually absent in deployed executables. We have integrated our techniques with a prototype x86 binary framework called SecondWrite that uses LLVM as the IR. The robustness of the framework is demonstrated by handling executables totaling more than a million lines of source code, including several real-world programs. In the next part of this work, we demonstrate that several well-known source-level analysis frameworks, such as symbolic analysis, have limited effectiveness in the executable domain since executables typically lack higher-level semantics such as program variables. The IR should have a precise memory abstraction for an analysis to effectively reason about memory operations. Our work on recovering a compiler-level representation addresses this limitation by recovering several kinds of higher-level semantic information from executables. In the next part of this work, we propose methods to handle the scenarios in which such semantics cannot be recovered.
    First, we propose a hybrid static-dynamic mechanism for recovering a precise and correct memory model in executables in the presence of executable-specific artifacts such as indirect control transfers. Next, the enhanced memory model is employed to define a novel symbolic analysis framework for executables that can perform the same types of program analysis as source-level tools. Existing frameworks fail to simultaneously maintain a correct representation and a precise memory model, and they ignore memory-allocated variables when defining symbolic analysis mechanisms. We show that our framework is robust and efficient, and that it significantly improves the performance of various traditional analyses, such as global value numbering, alias analysis, and dependence analysis, for executables. Finally, the underlying representation and analysis framework is employed for two separate applications. First, the framework is extended to define a novel static analysis framework, DemandFlow, for identifying information flow security violations in program executables. Unlike existing static vulnerability detection methods for executables, DemandFlow analyzes memory locations in addition to symbols, thus improving the precision of the analysis. DemandFlow proposes a novel demand-driven mechanism to identify and precisely analyze only those program locations and memory accesses which are relevant to a vulnerability, thus enhancing scalability. DemandFlow uncovers six previously undiscovered format string and directory traversal vulnerabilities in popular FTP and Internet Relay Chat clients. Next, the framework is extended to implement a platform-specific optimization for embedded processors. Several embedded systems provide the facility of locking one or more lines in the cache. We devise the first method in the literature that employs instruction cache locking as a mechanism for improving the average-case run time of general embedded applications. We demonstrate that the optimal solution for instruction cache locking can be obtained in polynomial time. Since our scheme is implemented inside a binary framework, it successfully addresses the portability concern by enabling the implementation of cache locking at the time of deployment, when all the details of the memory hierarchy are available.
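    As a rough, hedged illustration of the symbol-promotion idea mentioned above, the sketch below groups direct memory references with constant addresses into candidate global variables that could then become IR symbols. The addresses, sizes, and naming scheme are made up for the example; SecondWrite's actual algorithms are considerably more involved.

        # Hedged illustration of symbol promotion (not SecondWrite's algorithm):
        # direct memory operands with constant addresses in the data section are
        # partitioned into candidate global variables, each of which could then
        # be promoted to a named symbol in the IR. The values below are made up.
        accesses = [  # (constant address, access size in bytes)
            (0x804A010, 4), (0x804A010, 4), (0x804A014, 8), (0x804A020, 4),
        ]

        def promote_symbols(accesses):
            """Assign one synthetic symbol per distinct constant address."""
            symbols = {}
            for addr, size in accesses:
                name, seen = symbols.get(addr, (f"g_{addr:x}", 0))
                symbols[addr] = (name, max(seen, size))  # widen to the largest access
            return symbols

        for addr, (name, size) in sorted(promote_symbols(accesses).items()):
            print(f"{name}: {size}-byte global at {addr:#x}")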

    4G/5G cellular networks metrology and management

    The proliferation of sophisticated applications and services comes with diverse performance requirements as well as exponential traffic growth on both the uplink and the downlink. Cellular networks such as 4G and 5G are evolving to support this diverse and huge amount of data. This thesis targets the strengthening of advanced cellular network supervision and management techniques, taking the traffic explosion and its diversity as two of the main challenges in these networks. The first contribution tackles the integration of intelligence in cellular networks through the estimation of users' instantaneous uplink throughput at small time granularities. A real-time 4G testbed is deployed for this purpose, providing an exhaustive benchmark of eNB metrics, and accurate estimations are achieved. The second contribution addresses real-time 5G slicing of radio resources in a multi-cell system. For that, two exact optimization models are proposed. Because of their long solution times, heuristics are developed and evaluated against the optimal models. The results are promising, with both heuristics strongly supporting real-time RAN slicing.
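    As a hedged illustration of the first contribution's estimation idea, the sketch below fits a simple linear estimator of instantaneous uplink throughput from per-interval eNB radio metrics. The chosen features, units, and sample values are assumptions for the example, not the thesis's model or data.

        # Illustrative sketch (features, units, and sample values are assumptions,
        # not the thesis's model or data): fit a linear estimator of instantaneous
        # uplink throughput from per-interval eNB metrics such as allocated PRBs
        # and the MCS index.
        import numpy as np

        # toy samples: [allocated PRBs, MCS index] -> measured uplink throughput (Mb/s)
        X = np.array([[10, 5], [25, 10], [50, 16], [75, 20], [100, 24]], dtype=float)
        y = np.array([3.1, 11.8, 34.0, 58.5, 86.0])

        A = np.hstack([X, np.ones((len(X), 1))])      # add an intercept column
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # ordinary least squares

        def estimate_uplink_mbps(prbs, mcs):
            """Estimate instantaneous uplink throughput for one scheduling interval."""
            return float(coef[0] * prbs + coef[1] * mcs + coef[2])

        print(round(estimate_uplink_mbps(60, 18), 1), "Mb/s")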

    MILAS: ModernIzing Legacy Applications towards Service Oriented Architecture (SOA) and Software as a Service (SaaS)

    MILAS is about modernizing legacy applications and lifting them to an SOA and SaaS level. MILAS describes a methodology for how this can best be done. The methodology is based on standards and modelling capabilities. In particular, MILAS covers the modelling of services and how services can be modelled from legacy components.