9 research outputs found

    A Concurrent Perspective on Smart Contracts

    In this paper, we explore remarkable similarities between multi-transactional behaviors of smart contracts in cryptocurrencies such as Ethereum and classical problems of shared-memory concurrency. We examine two real-world examples from the Ethereum blockchain and analyze how they are vulnerable to bugs closely reminiscent of those that often occur in traditional concurrent programs. We then elaborate on the relation between observable contract behaviors and well-studied concurrency topics, such as atomicity, interference, synchronization, and resource ownership. The described contracts-as-concurrent-objects analogy provides a deeper understanding of potential threats to smart contracts, indicates better engineering practices, and enables the application of existing state-of-the-art formal verification techniques. Comment: 15 pages.
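
    A minimal Java sketch (not taken from the paper; the Account class and its fields are hypothetical) of the shared-memory check-then-act race onto which the contracts-as-concurrent-objects analogy maps contract-level interference:

        // Hypothetical illustration: two threads racing on a check-then-act withdrawal,
        // the shared-memory analogue of two transactions interleaving calls to one contract.
        public class Account {
            private long balance = 100;

            // Non-atomic: another thread can withdraw between the check and the update,
            // so both withdrawals may succeed and drive the balance negative.
            public void withdrawUnsafe(long amount) {
                if (balance >= amount) {   // check
                    balance -= amount;     // act (the state may have changed in between)
                }
            }

            // Making the whole check-then-act atomic closes the interference window.
            public synchronized void withdrawSafe(long amount) {
                if (balance >= amount) {
                    balance -= amount;
                }
            }

            public static void main(String[] args) throws InterruptedException {
                Account a = new Account();
                Thread t1 = new Thread(() -> a.withdrawUnsafe(80));
                Thread t2 = new Thread(() -> a.withdrawUnsafe(80));
                t1.start(); t2.start();
                t1.join(); t2.join();
                System.out.println("balance = " + a.balance);  // -60 under an unlucky interleaving
            }
        }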

    Staccato: A Bug Finder for Dynamic Configuration Updates


    Study and Refactoring of Android Asynchronous Programming

    Asynchronous programming is a core part of mobile development because it keeps apps responsive. Android provides several async constructs that developers can use. However, developers can still choose inappropriate async constructs, which results in memory leaks, lost results, and wasted energy. Fortunately, refactoring tools can eliminate these problems by transforming async code to use the appropriate constructs. In this paper we conducted a formative study on a corpus of 611 widely-used Android apps to map the asynchronous landscape of Android apps, understand how developers retrofit asynchrony, and learn about barriers encountered by developers. Based on this study, we designed, implemented, and evaluated ASYNCDROID, a refactoring tool which enables Android developers to transform existing improperly-used async constructs into correct constructs. Our empirical evaluation shows that ASYNCDROID is applicable, accurate, and saves developers effort. We submitted 30 refactoring patches, and developers consider the refactorings useful.
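
    A minimal Java sketch, not produced by ASYNCDROID (MyActivity and loadData are hypothetical names), of one async-construct misuse the abstract alludes to, the memory leak from a non-static inner AsyncTask, and a common remedy:

        // Hypothetical illustration of an AsyncTask memory leak and one common fix.
        import android.app.Activity;
        import android.os.AsyncTask;
        import java.lang.ref.WeakReference;

        public class MyActivity extends Activity {

            // Problematic: a non-static inner task holds an implicit reference to the
            // Activity, so a rotation while it runs leaks the whole Activity.
            class LeakyTask extends AsyncTask<Void, Void, String> {
                @Override protected String doInBackground(Void... params) { return loadData(); }
                @Override protected void onPostExecute(String result) { setTitle(result); }
            }

            // Safer: a static task that only keeps a WeakReference to the Activity.
            static class SafeTask extends AsyncTask<Void, Void, String> {
                private final WeakReference<MyActivity> ref;
                SafeTask(MyActivity activity) { ref = new WeakReference<>(activity); }
                @Override protected String doInBackground(Void... params) {
                    MyActivity a = ref.get();
                    return a != null ? a.loadData() : null;
                }
                @Override protected void onPostExecute(String result) {
                    MyActivity a = ref.get();
                    if (a != null && result != null) a.setTitle(result);
                }
            }

            String loadData() { return "data"; }   // stand-in for a slow I/O call
        }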

    Automated refactoring for Java concurrency

    In the multicore era, programmers exploit concurrent programming to gain performance and responsiveness benefits. However, concurrent programs are difficult to write: the programmer has to balance two conflicting forces, thread safety and performance. To make concurrent programming easier, modern programming languages provide many kinds of concurrent constructs, such as threads, asynchronous tasks, and concurrent collections. Despite the existence of these constructs, we know little about how developers use them, and although existing API documentation teaches developers how to use them, developers can still misuse and underuse them. In this dissertation, we study the use, misuse, and underuse of two types of commonly used Java concurrent constructs: Java concurrent collections and Android async constructs. Our studies show that even though concurrent constructs are widely used in practice, developers still misuse and underuse them, causing semantic and performance bugs. We propose and develop a refactoring toolset to help developers correctly use concurrent constructs. The toolset is composed of three automated refactorings: (i) detecting and fixing misuses of Java concurrent collections, (ii) retrofitting concurrency for existing sequential Android code via a basic Android async construct, and (iii) converting inappropriately used basic Android async constructs into more appropriate enhanced constructs for Android apps. Refactorings (i) and (iii) aim to fix misused constructs, while refactoring (ii) aims to eliminate underuses.
    First, we cataloged nine commonly misused check-then-act idioms of Java concurrent collections and showed the correct usage of each idiom. We implemented the detection strategies in a tool, CTADetector, that finds and fixes misused check-then-act idioms. We applied CTADetector to 28 widely used open-source Java projects (comprising 6.4 million lines of code) that use Java concurrent collections. CTADetector discovered and fixed 60 bugs. These bugs were confirmed by developers and the fixes were accepted.
    Second, we conducted a formative study on how a basic Android async construct, AsyncTask, is used, misused, and underused in Android apps. Based on the study, we designed, developed, and evaluated Asynchronizer, an automated refactoring tool that enables developers to retrofit concurrency into Android apps. The refactoring uses a points-to static analysis to determine whether it is safe to apply. We applied Asynchronizer to perform 123 refactorings in 19 widely used Android apps; their developers accepted 40 refactorings in 7 projects.
    Third, we conducted a formative study on a corpus of 611 widely-used Android apps to map the asynchronous landscape of Android apps, understand how developers retrofit concurrency in Android apps, and learn about barriers encountered by developers. Based on this study, we designed, implemented, and evaluated AsyncDroid, a refactoring tool which enables Android developers to transform existing improperly-used async constructs into correct constructs. We submitted 45 refactoring patches generated by AsyncDroid in 7 widely used Android projects, and developers accepted 15 of them.
    Finally, we released all tools as open-source plugins for the widely used Eclipse IDE, which has millions of Java users. We also integrated CTADetector and AsyncDroid with ShipShape, a static analysis platform developed by Google. Google envisions ShipShape becoming a widely used platform: any app developer who wants to check code quality, for example before submitting an app to the app store, would run ShipShape on her code base. We expect that by contributing new async analyzers to ShipShape, millions of app developers would benefit by being able to execute our analyses and transformations on their code.
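
    A minimal Java sketch (not Asynchronizer's actual output; ReportScreen, fetchReport, and showReport are hypothetical names) of the kind of retrofit that refactoring (ii) performs: moving a long-running call off the UI thread with a basic AsyncTask.

        // Hypothetical illustration of retrofitting concurrency into sequential Android code.
        import android.os.AsyncTask;

        public class ReportScreen {
            // Before: runs on the UI thread and freezes the app until the call returns.
            void refreshBlocking() {
                String report = fetchReport();   // slow network/disk call
                showReport(report);
            }

            // After: the slow call runs in doInBackground; the UI update stays on the UI thread.
            // (Refactoring (iii) would further convert such basic constructs when inappropriate.)
            void refreshAsync() {
                new AsyncTask<Void, Void, String>() {
                    @Override protected String doInBackground(Void... params) { return fetchReport(); }
                    @Override protected void onPostExecute(String report) { showReport(report); }
                }.execute();
            }

            String fetchReport() { return "report"; }   // stand-in for a slow I/O call
            void showReport(String report) { /* update views */ }
        }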

    Immutable data types in concurrent programming on basis of Clojure language

    Concurrent programming addresses problems in which different resources must be shared between multiple threads. In the simplest case this means sharing processor time, but modern multi-core processors add a new dimension in which shared access to memory becomes the central problem. This work concludes that concurrent programming in Java inherits most of its problems from the direct incorporation of the shared-memory model. Because Java allows shared memory to be accessed without properly applied mutual exclusion, it can produce hard-to-detect software bugs. Moreover, Java's lock-based mutual exclusion can introduce further hard-to-detect problems; most notoriously, a program can deadlock when lock acquisition is not consistently ordered. It is also difficult to compose separate thread-safe atomic operations into a new atomic operation using locks, because combining multiple method calls requires additional, complex synchronization.
    The main focus of this work is on the innovations provided by the Clojure language. Clojure, a Lisp-inspired functional language implemented on top of the Java platform, introduces a more restricted set of rules for sharing data between threads. Most importantly, all data sharing must be expressed explicitly, which arguably reduces the set of possible programming errors. Clojure offers two ways to share data: asynchronously through agents, or synchronously through either software transactional memory (STM), for operations that must update multiple values in one atomic operation, or simpler atomic updates when only a single value is shared. Clojure's STM provides a syntactically simple way to combine multiple separate atomic operations into a new atomic operation, simply by wrapping them in a transaction, and it can further reduce programming errors by verifying at runtime that no updates are performed outside a transaction. Nevertheless, it does not free the programmer from correctly identifying which sets of operations must execute atomically.
    Clojure's approach to concurrent programming relies heavily on immutable data structures. Immutability lets complex data structures be treated as simple values whose state does not change outside the control of the reference holder, so it is important that Clojure provide a rich set of data structures following this principle. One such data structure is Persistent Vector, Clojure's random-access list, whose internal design and performance are examined in this work. In summary, it is a bit-mapped trie with a high branching factor that defers additions to the end of the vector by collecting new elements in a tail buffer before pushing them into the trie as a whole. Persistent Vector can share the bulk of its internal structure with previous versions, which makes it an efficient immutable data structure. Measurements show that it offers addition and iteration performance comparable to Java's ArrayList, while updates by index are about two orders of magnitude slower. The performance of lookups by index could not be determined conclusively, probably because JIT compilation made the ArrayList lookups hard to measure reliably; the measurements nevertheless suggest that the lookup-by-index difference between Persistent Vector and ArrayList is similar to the update-by-index difference. A few performance enhancements were also evaluated, and it was concluded that addition performance could be improved roughly twofold by using a thread-confined flag that allows further sharing of the tail buffer between versions. Arguably, the relatively good addition and iteration performance makes Persistent Vector suitable for a range of practical tasks, for example holding a list of records loaded from a database that is iterated over to build a web page. Because properly benchmarking parallel operations is hard, such tests were not included in this work; measuring the performance of sharing a persistent vector between multiple threads is suggested as future work.
    In summary, Clojure shows that concurrent programming can be made relatively safe when a set of design principles is changed. Arguably, the difficulty of concurrent programming in Java will not improve unless its memory-access principles are substantially reconsidered.
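
    A minimal Java sketch, not from the thesis (the two account locks are hypothetical), of the lock-ordering deadlock the abstract cites as a hazard of Java's lock-based mutual exclusion; Clojure's STM sidesteps the issue by not exposing locks to the programmer.

        // Hypothetical illustration: two transfers acquiring the same locks in opposite order.
        public class TransferDeadlock {
            static final Object accountA = new Object();
            static final Object accountB = new Object();

            public static void main(String[] args) {
                // Thread 1 locks A then B; thread 2 locks B then A. If each grabs its first
                // lock before the other's second, both block forever (a deadlock).
                new Thread(() -> {
                    synchronized (accountA) {
                        pause(50);
                        synchronized (accountB) { System.out.println("A->B transfer"); }
                    }
                }).start();
                new Thread(() -> {
                    synchronized (accountB) {
                        pause(50);
                        synchronized (accountA) { System.out.println("B->A transfer"); }
                    }
                }).start();
                // Fix: impose a global lock order (always lock accountA before accountB),
                // or avoid explicit locks altogether, as Clojure's STM does.
            }

            static void pause(long ms) {
                try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
        }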

    Reusable Concurrent Data Types

    This paper addresses the fundamental challenge of building Concurrent Data Types (CDT) that are reusable and scalable at the same time. We do so by proposing the abstraction of Polymorphic Transactions (PT): a new programming abstraction that offers different compatible transactions that can run concurrently in the same application. We outline the commonality of the problem in various object-oriented languages and implement PT and a reusable package in Java. With PT, annotating a sequential ADT guarantees that a novice programmer obtains an atomic and deadlock-free CDT, while letting an advanced programmer leverage the application semantics to get higher performance. We compare our polymorphic synchronization against transaction-based, lock-based, and lock-free synchronizations on SPARC and x86-64 architectures, and we integrate our methodology into a travel reservation benchmark. Although our reusable CDTs are sometimes less efficient than non-composable handcrafted CDTs from the JDK, they outperform all reusable Java CDTs.

    Check-then-Act Misuse of Java Concurrent Collections

    Concurrent collections provide thread-safe, highly scalable operations and are widely used in practice. However, programmers can misuse these collections by composing one operation that checks a condition (e.g., whether the collection contains an element) with another operation that acts on this condition (e.g., inserting the element into the collection). Unless the whole composition is atomic, the program contains an atomicity-violation bug. In this paper we present the first empirical study of check-then-act idioms of Java concurrent collections in a large corpus of open-source applications. We catalog nine commonly misused check-then-act idioms and show the correct usage of each. We quantitatively and qualitatively analyze 28 widely used open-source Java projects that use Java concurrent collections, comprising 6.4M lines of code. We classify the commonly used idioms, the ones that are most error-prone, and the evolution of the programs with respect to misused idioms. We found 282 buggy instances. We reported 155 to the developers, who examined 90 of them; the developers confirmed 60 as new bugs and accepted our patch. This shows that check-then-act idioms are commonly misused in practice, and correcting them is important.
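
    A minimal Java sketch of the kind of idiom the study targets (the class and key names are illustrative, not taken from the paper's corpus): a non-atomic check-then-act composition on a ConcurrentHashMap and an atomic replacement.

        // Illustrative check-then-act misuse on a concurrent collection, and a fix.
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        public class CheckThenActDemo {
            static final ConcurrentMap<String, Integer> hits = new ConcurrentHashMap<>();

            // Buggy: each operation is thread-safe, but the composition is not.
            // Two threads can both see "absent" and both put, losing an initialization;
            // the later get/put pair is likewise a non-atomic read-modify-write.
            static void recordUnsafe(String key) {
                if (!hits.containsKey(key)) {   // check
                    hits.put(key, 0);           // act
                }
                hits.put(key, hits.get(key) + 1);
            }

            // Correct: use the collection's atomic compound operation instead.
            static void recordSafe(String key) {
                hits.merge(key, 1, Integer::sum);   // atomic "insert or increment"
            }

            public static void main(String[] args) {
                recordSafe("page");
                recordSafe("page");
                System.out.println(hits);   // {page=2}
            }
        }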

    Explainable, Security-Aware and Dependency-Aware Framework for Intelligent Software Refactoring

    As software systems continue to grow in size and complexity, their maintenance becomes more challenging and costly. Even for the most technologically sophisticated and competent organizations, building and maintaining high-performing software applications with high-quality code is an extremely challenging and expensive endeavor. Software refactoring is widely recognized as the key component for maintaining high-quality software by restructuring existing code and reducing technical debt. However, refactoring is difficult to achieve and often neglected due to several limitations in existing refactoring techniques that reduce their effectiveness. These limitations include, but are not limited to, detecting refactoring opportunities, recommending specific refactoring activities, and explaining the recommended changes. Existing techniques focus mainly on quality metrics such as coupling, cohesion, and the Quality Metrics for Object Oriented Design (QMOOD). This work identifies many other factors that assist and facilitate different maintenance activities for developers:
    1. To structure the refactoring field and existing research results, this dissertation provides the most scalable and comprehensive systematic literature review to date, analyzing the results of 3183 research papers on refactoring covering the last three decades. Based on this survey, we created a taxonomy to classify the existing research, identified research trends, and highlighted gaps in the literature for further research.
    2. To draw attention to what the current refactoring research focus should be from the developers' perspective, we carried out the first large-scale refactoring study on the most popular online Q&A forum for developers, Stack Overflow. We collected and analyzed posts to identify what developers ask about refactoring and the challenges practitioners face when refactoring software systems.
    3. To improve the detection of refactoring opportunities in terms of quality and security in the context of mobile apps, we designed a framework that recommends the files to be refactored based on user reviews. We also considered the detection of refactoring opportunities in the context of web services, proposing a machine learning-based approach that helps service providers and subscribers predict quality of service at the lowest cost. Furthermore, to help developers accurately assess the quality of their software systems and decide whether code should be refactored, we propose a clustering-based approach that automatically identifies the preferred benchmark to use for the quality assessment of a project.
    4. Regarding the refactoring generation process, we proposed techniques that enhance the change operators and seeding mechanism by using the history of applied refactorings and incorporating refactoring dependencies, in order to improve the quality of refactoring solutions. We also introduced the security aspect when generating refactoring recommendations, by investigating the possible impact of improving different quality attributes on a set of security metrics and finding the best trade-off between them. In another approach, we recommend refactorings that prioritize fixing quality issues in security-critical files, improve quality attributes, and remove code smells.
    All the above contributions were validated at large scale on thousands of open-source and industry projects in collaboration with industry partners and the open-source community. The contributions of this dissertation are integrated into a cloud-based refactoring framework that is currently used by practitioners.
    Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/171082/1/Chaima Abid Final Dissertation.pdf