
    On Evaluating Commercial Cloud Services: A Systematic Review

    Background: Cloud Computing is booming in industry, with many competing providers and services, so evaluation of commercial Cloud services is necessary. However, the existing evaluation studies are relatively chaotic, and there is considerable confusion and a gap between practice and theory in Cloud services evaluation. Aim: To help relieve this chaos, this work synthesizes the existing evaluation implementations to outline the state of the practice and to identify research opportunities in Cloud services evaluation. Method: Based on a conceptual evaluation model comprising six steps, the Systematic Literature Review (SLR) method was employed to collect relevant evidence and investigate Cloud services evaluation step by step. Results: This SLR identified 82 relevant evaluation studies. The data collected from these studies represent the current practical landscape of Cloud services evaluation and can in turn be reused to facilitate future evaluation work. Conclusions: Evaluation of commercial Cloud services has become a worldwide research topic. Some findings of this SLR identify research gaps in Cloud services evaluation (e.g., the Elasticity and Security evaluation of commercial Cloud services could be a long-term challenge), while others point to trends in the adoption of commercial Cloud services (e.g., compared with PaaS, IaaS seems more suitable for customers and is particularly important in industry). The SLR itself also confirms some previous experiences and reveals new Evidence-Based Software Engineering (EBSE) lessons.

    Intertextuality in scientific publications (L'intertextualité dans les publications scientifiques)

    The IEEE bibliographic database contains a number of confirmed duplications, with the copied originals indicated. This corpus is used to test an authorship attribution method. Combining intertextual distance with a sliding window and various classification techniques makes it possible to identify these duplications with a very low risk of error. The experiment also shows that several factors blur the identity of the scientific author, in particular research collectives of varying composition and a strong dose of intertextuality that is accepted or even sought.
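    The abstract names the ingredients of the method (intertextual distance, a sliding window, classifiers) but not the exact formulas. The sketch below is only a rough illustration of the idea, under stated assumptions: it slides a fixed-size token window over a document and flags windows whose word-frequency profile is unusually close to a reference text. The distance used here is a simple total-variation measure, not necessarily the intertextual distance of the study, and the class name, window size and threshold are invented for the example.

    import java.util.*;

    /**
     * Illustrative sketch only: a simple relative-frequency distance between
     * token windows, slid over a document to flag passages that are unusually
     * close to a reference text (possible duplication). The actual study likely
     * uses a different distance measure and proper classifiers.
     */
    public class SlidingWindowDistance {

        static Map<String, Double> relativeFrequencies(List<String> tokens) {
            Map<String, Double> freq = new HashMap<>();
            for (String t : tokens) freq.merge(t, 1.0, Double::sum);
            freq.replaceAll((word, count) -> count / tokens.size());
            return freq;
        }

        /** Total-variation style distance in [0, 1]; 0 means identical profiles. */
        static double distance(List<String> a, List<String> b) {
            Map<String, Double> fa = relativeFrequencies(a);
            Map<String, Double> fb = relativeFrequencies(b);
            Set<String> vocab = new HashSet<>(fa.keySet());
            vocab.addAll(fb.keySet());
            double d = 0.0;
            for (String t : vocab) {
                d += Math.abs(fa.getOrDefault(t, 0.0) - fb.getOrDefault(t, 0.0));
            }
            return d / 2.0;
        }

        /** Slide a half-overlapping window over the document and report close matches. */
        static void flagCloseWindows(List<String> document, List<String> reference,
                                     int windowSize, double threshold) {
            for (int start = 0; start + windowSize <= document.size(); start += windowSize / 2) {
                List<String> window = document.subList(start, start + windowSize);
                double d = distance(window, reference);
                if (d < threshold) {
                    System.out.printf("tokens %d-%d: distance %.3f (possible duplication)%n",
                            start, start + windowSize, d);
                }
            }
        }

        public static void main(String[] args) {
            List<String> doc = Arrays.asList(
                    "the quick brown fox jumps over the lazy dog the quick brown fox".split(" "));
            List<String> ref = Arrays.asList(
                    "the quick brown fox jumps over a sleepy dog".split(" "));
            flagCloseWindows(doc, ref, 8, 0.5);
        }
    }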

    Software developers’ reasoning behind adoption and use of software development methods – a systematic literature review

    When adopting and using a Software Development Method (SDM), it is important to stay true to the philosophy of the method; otherwise, software developers might execute activities that do not lead to the intended outcomes. Currently, no overview of SDM research addresses software developers’ reasoning behind adopting and using SDMs. Accordingly, this paper surveys existing SDM research to scrutinize the current knowledge base on the types of reasoning software developers apply when adopting and using SDMs. We executed a systematic literature review and analyzed the existing research in two steps. First, we classified papers according to the type of reasoning addressed regarding SDM adoption and use: rational, irrational, or non-rational. Second, we performed a thematic synthesis across these three types of reasoning to characterize the existing research in more detail. We identified 28 studies addressing software developers’ reasoning and derived five research themes. Building on these themes, we framed four future research directions with four broad research questions, which can be used as a basis for future research.

    Mutation Testing Advances: An Analysis and Survey


    Traceability Links Recovery among Requirements and BPMN models

    Thesis by compendium. Throughout this document, I present the results of the research carried out in the context of my PhD studies. During that research, I studied the process of Traceability Links Recovery between natural language requirements and industrial software models. More precisely, due to their popularity and extensive usage, I focused on Traceability Links Recovery between natural language requirements and Business Process Models, also known as BPMN models. The work pursued two main objectives: (1) the development of Traceability Links Recovery techniques between natural language requirements and BPMN models, and (2) the validation and analysis of the results obtained by the developed techniques in industrial domain case studies. The results of the research have been published in forums, conferences, and journals specialized in the topics and context of the research. This thesis introduces the topics, context, and objectives of the research, presents the academic publications that resulted from the work, and discusses the outcomes of the investigation.
    Lapeña Martí, R. (2020). Traceability Links Recovery among Requirements and BPMN models [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/149391
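    The abstract above does not describe the recovery techniques themselves. As a point of reference only, a common baseline for traceability links recovery is information retrieval: build TF-IDF vectors from the requirement text and from the labels of BPMN elements, then rank candidate links by cosine similarity. The sketch below implements that generic baseline; it is not the technique developed in the thesis, and every identifier and example text in it is invented.

    import java.util.*;

    /**
     * Generic IR baseline for traceability links recovery (NOT the thesis's
     * technique): rank candidate (requirement, BPMN element) pairs by cosine
     * similarity of TF-IDF vectors built from their text.
     */
    public class TfIdfTraceability {

        static List<String> tokenize(String text) {
            return Arrays.asList(text.toLowerCase().split("\\W+"));
        }

        static Map<String, Double> tfIdf(List<String> tokens, Map<String, Double> idf) {
            Map<String, Double> vec = new HashMap<>();
            for (String t : tokens) vec.merge(t, 1.0, Double::sum);
            vec.replaceAll((term, tf) -> tf * idf.getOrDefault(term, 0.0));
            return vec;
        }

        static double cosine(Map<String, Double> a, Map<String, Double> b) {
            double dot = 0, na = 0, nb = 0;
            for (Map.Entry<String, Double> e : a.entrySet()) {
                dot += e.getValue() * b.getOrDefault(e.getKey(), 0.0);
                na += e.getValue() * e.getValue();
            }
            for (double v : b.values()) nb += v * v;
            return (na == 0 || nb == 0) ? 0.0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
        }

        public static void main(String[] args) {
            // Toy corpus: requirement texts and BPMN task labels (invented examples).
            Map<String, String> requirements = Map.of(
                    "REQ-1", "The clerk shall register the customer order",
                    "REQ-2", "The system shall notify the warehouse about shipment");
            Map<String, String> bpmnTasks = Map.of(
                    "Task-A", "Register order",
                    "Task-B", "Notify warehouse");

            // Inverse document frequency computed over all artifacts.
            List<List<String>> docs = new ArrayList<>();
            requirements.values().forEach(t -> docs.add(tokenize(t)));
            bpmnTasks.values().forEach(t -> docs.add(tokenize(t)));
            Map<String, Double> idf = new HashMap<>();
            for (List<String> d : docs)
                for (String term : new HashSet<>(d))
                    idf.merge(term, 1.0, Double::sum);
            idf.replaceAll((term, df) -> Math.log(docs.size() / df));

            // Rank candidate links by similarity.
            for (Map.Entry<String, String> req : requirements.entrySet()) {
                Map<String, Double> rv = tfIdf(tokenize(req.getValue()), idf);
                for (Map.Entry<String, String> task : bpmnTasks.entrySet()) {
                    double sim = cosine(rv, tfIdf(tokenize(task.getValue()), idf));
                    System.out.printf("%s -> %s : %.3f%n", req.getKey(), task.getKey(), sim);
                }
            }
        }
    }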

    Human face detection techniques: A comprehensive review and future research directions

    Face detection, which is an effortless task for humans, is complex for machines. The recent proliferation of computational resources is paving the way for rapid advances in face detection technology. Many cleverly designed algorithms have been proposed to detect faces, but little attention has been paid to surveying the available algorithms comprehensively. This paper provides a fourfold discussion of face detection algorithms. First, we explore a wide variety of available face detection algorithms in five steps: history, working procedure, advantages, limitations, and use in fields other than face detection. Second, we include a comparative evaluation of the different algorithms within each method. Third, we provide detailed comparisons among the algorithms to give an all-inclusive outlook. Lastly, we conclude this study with several promising research directions to pursue. Earlier survey papers on face detection algorithms are limited to technical details and popularly used algorithms. In our study, however, we cover detailed technical explanations of face detection algorithms and various recent sub-branches of neural networks. We present detailed comparisons among the algorithms, both overall and within sub-branches, discuss their strengths and limitations, and survey their uses beyond face detection.
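    As a concrete taste of one classical detector family that surveys of this kind typically cover, the Viola-Jones Haar cascade, the snippet below runs OpenCV's pretrained frontal-face cascade through the OpenCV Java bindings (assuming OpenCV 3/4 with its native library available). It is purely illustrative and is not tied to any particular algorithm discussed in the paper; the image path and cascade file path are assumptions.

    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfRect;
    import org.opencv.core.Rect;
    import org.opencv.imgcodecs.Imgcodecs;
    import org.opencv.imgproc.Imgproc;
    import org.opencv.objdetect.CascadeClassifier;

    /**
     * Minimal Viola-Jones (Haar cascade) face detection using OpenCV's Java
     * bindings. Illustrative of one classical detector family only.
     */
    public class HaarFaceDetection {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);   // requires the OpenCV native library

            Mat image = Imgcodecs.imread("people.jpg");      // assumed input image
            Mat gray = new Mat();
            Imgproc.cvtColor(image, gray, Imgproc.COLOR_BGR2GRAY);

            // Pretrained frontal-face cascade shipped with OpenCV (path is an assumption).
            CascadeClassifier cascade =
                    new CascadeClassifier("haarcascade_frontalface_default.xml");

            MatOfRect faces = new MatOfRect();
            cascade.detectMultiScale(gray, faces);           // multi-scale sliding-window detection

            for (Rect r : faces.toArray()) {
                System.out.printf("face at (%d, %d) size %dx%d%n", r.x, r.y, r.width, r.height);
            }
        }
    }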

    Towards a more productive Java EE ecosystem (Produktiivsema Java EE ökosüsteemi poole)

    The electronic version of the thesis does not include the publications. The dissertation is based on four publications: three peer-reviewed research articles and one patent.
    Since Tim Berners-Lee's famous proposal of the World Wide Web in 1990 and the introduction of the Common Gateway Interface in 1993, the world of online web applications has been booming. In the nineties the Java language and platform became the first choice for web development in and out of the enterprise. But by the mid-2000s the platform was in crisis: newcomers like PHP, Ruby and Python had picked up the flag as the most productive platforms, with Java left for conservative enterprises. However, this was not because those languages and platforms were significantly better than Java. Rather, the issue was that innovation in the Java ecosystem was slow, due to the way the platform was managed. Large vendors dominated the space, committees designed standards, and the brightest minds were moving to other JVM languages like Scala, Groovy or JRuby. In this context we started our investigations in 2005. Our goals were to address some of the more gaping holes in the Java ecosystem and bring it on par with the languages touted as more productive. The first effort was to design a better web framework, called “Aranea”. At that point in time Java had more than thirty actively developed web frameworks, and many of them were used simultaneously in the same projects. We decided to focus on two key issues: ease of reuse and framework interoperability. To solve the first issue we created a self-contained component model that allowed the construction of both simple and sophisticated systems using a simple object protocol and hierarchical aggregation in the style of the Composite design pattern. This allowed one to capture every aspect of reuse in a dedicated component, be it a part of framework functionality, a repeating UI component or a whole UI process backed by complex logic. These could be mixed and matched almost indiscriminately, subject to the rules expressed in the interfaces they implemented. To solve the second issue we proposed adapters between the component model and the various models of other frameworks. We later implemented some of those adapters, both local and remote, allowing one to almost effortlessly capture and mix different web application components together, no matter what the underlying implementation may be. The next issue that we focused on was the data access layer. At that point the most popular ways of accessing data in the Java community were either embedded SQL strings or the Object-Relational Mapping tool “Hibernate”. Both approaches had severe disadvantages. Embedded SQL strings exposed developers to typographical errors, lack of abstraction, very late validation and the dangers of dynamic string concatenation. Hibernate/ORM introduced a layer of abstraction notorious for the level of misunderstanding and the production performance issues it caused.
    We believed that SQL is the right way to access the data in a relational database, as it expresses exactly the data that is needed without much overhead. Instead of embedding it into strings, we decided to embed it using the constructs of the Java language, thus creating an embedded DSL. As one of the goals was to provide extensive compile-time validation, we made extensive use of Java Generics and code generation to provide the maximum possible static safety. We also built some basic SQL extensions into the language that provided a better interface between Java structures and relational queries, as well as allowing effortless further extension and ease of abstraction. Our work on the SQL DSL made us believe that building type safe embedded DSLs could be of great use to the Java community. We embarked on building two more experimental DSLs, one for generating and manipulating Java classes on-the-fly and the other for parsing and generating XML. These experiments exposed some common patterns, including restricting DSL syntax, collecting type safe history and using type safe metadata. Applying those patterns to different domains helps encode a truly type safe DSL in the Java language. Our final and largest effort concentrated on a major disadvantage of the Java platform compared to the dynamically-typed language platforms. Namely, while in PHP or Ruby on Rails one could edit any line of code and see the result immediately, the Java application servers would force one to “build” and “deploy”, which for larger applications could take minutes or even tens of minutes. Initial investigation revealed that the claims of fast code reloading were not quite solid across the board: dynamically-typed languages would typically destroy state and recreate the application, just like the Java application servers. The crucial difference was that they did it quickly, and the productivity of development was a major concern for their language and framework designers. However, as we investigated the issue deeper on the Java side, we came up with a novel and practical way of reloading code on the JVM, which we developed and released as the product “JRebel”. We made use of the fact that Java bytecode is a very high-level encoding of the Java language, which is easy to modify at load time. This allowed us to insert a layer of indirection between the call sites, methods and method bodies that was versatile enough to manage multiple versions of code and redirect calls to the latest version at runtime. Over the years there have been some developments in a similar fashion, but unlike them we engineered JRebel to run on the stock JVM and to have no visible impact on the application's functional or non-functional behaviour. The latter was the hardest, as the layer of indirection both introduces numerous compatibility problems and adds performance overhead. To overcome those limitations we had to integrate deeply with many levels of the JVM and to use compiler techniques to remove the layer of indirection where possible. Today, more than seven years after this research began, our work on web frameworks has proven to be a successful platform for exploring and testing experimental ideas, though it has not found widespread use in industry. The work on type safe DSLs was more influential, as it directly shaped subsequent research on the topic and elements of it found their way into the latest JPA standard specification. The biggest impact on the software industry has come from the dynamic code reloading solution, which is now widely used in the Java community and is used daily by more than 3000 organizations worldwide.
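    To make the type safe SQL DSL idea more concrete, the toy sketch below shows how column metadata carrying a Java type parameter lets the compiler reject ill-typed comparisons. It is a minimal sketch of the general technique under invented names, not the DSL actually built in the thesis; a real DSL would also bind parameters rather than inline values into the SQL string.

    /**
     * Toy illustration of a type safe embedded SQL DSL: column metadata carries
     * a Java type parameter, so ill-typed comparisons are rejected at compile
     * time instead of failing at runtime. All names are invented for the example.
     */
    public class TypedSqlSketch {

        /** A column of row type R holding values of type T. */
        static final class Column<R, T> {
            final String name;
            Column(String name) { this.name = name; }
        }

        /** A predicate whose operand types were checked when it was built. */
        static final class Condition {
            final String sql;
            Condition(String sql) { this.sql = sql; }
        }

        /** Only a value of the column's own type T is accepted by the compiler. */
        static <R, T> Condition eq(Column<R, T> column, T value) {
            // Real DSLs bind parameters instead of inlining values; kept simple here.
            return new Condition(column.name + " = '" + value + "'");
        }

        static String select(String table, Condition where) {
            return "SELECT * FROM " + table + " WHERE " + where.sql;
        }

        // Table metadata, typically produced by code generation from the schema.
        static final class Person { }
        static final Column<Person, String>  NAME = new Column<>("name");
        static final Column<Person, Integer> AGE  = new Column<>("age");

        public static void main(String[] args) {
            System.out.println(select("person", eq(NAME, "Alice"))); // compiles and runs
            System.out.println(select("person", eq(AGE, 42)));       // compiles and runs
            // eq(AGE, "forty-two");  // rejected at compile time: String is not Integer
        }
    }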
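    The code reloading mechanism is described above as a layer of indirection between call sites, methods and method bodies, installed by rewriting bytecode at class-load time. The plain-Java registry below is only a conceptual analogue of that indirection, with invented names: every call goes through a lookup of the latest registered method version, so a newly reloaded body takes effect without restarting the application. It is not how JRebel is implemented.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    /**
     * Conceptual analogue of a call-site indirection layer for code reloading:
     * call sites never invoke a method body directly but look up the latest
     * registered version at runtime.
     */
    public class ReloadableMethods {

        // methodId -> latest implementation version
        private static final Map<String, Function<Object[], Object>> LATEST =
                new ConcurrentHashMap<>();

        static void redefine(String methodId, Function<Object[], Object> body) {
            LATEST.put(methodId, body);              // "reload": swap in a new method body
        }

        static Object invoke(String methodId, Object... args) {
            return LATEST.get(methodId).apply(args); // every call goes through the indirection
        }

        public static void main(String[] args) {
            redefine("Greeter.greet", a -> "Hello, " + a[0]);
            System.out.println(invoke("Greeter.greet", "world"));   // Hello, world

            // Developer edits the method; the new version takes effect without a restart.
            redefine("Greeter.greet", a -> "Hi there, " + a[0] + "!");
            System.out.println(invoke("Greeter.greet", "world"));   // Hi there, world!
        }
    }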

    Higher Order Mutation Testing

    Mutation testing is a fault-based software testing technique that has been studied widely for over three decades. To date, work in this field has focused largely on first order mutants, because it is believed that higher order mutation testing is too computationally expensive to be practical. This thesis argues that some higher order mutants are potentially better able to simulate real-world faults and to reveal insights into programming bugs than the restricted class of first order mutants. The thesis proposes a higher order mutation testing paradigm that combines valuable higher order mutants and non-trivial first order mutants for mutation testing. To overcome the exponential increase in the number of higher order mutants, a search process is proposed that seeks fit mutants (both first and higher order) in the space of all possible mutants. A fault-based higher order mutant classification scheme is introduced: based on different types of fault interaction, it classifies higher order mutants into four categories: expected, worsening, fault masking and fault shifting. A search-based approach is then proposed for locating subsuming and strongly subsuming higher order mutants. These mutants are a subset of the fault masking and fault shifting classes of higher order mutants that are more difficult to kill than their constituent first order mutants. Finally, a hybrid test data generation approach is introduced, which combines dynamic symbolic execution and search-based software testing to generate strongly adequate test data that kills first and higher order mutants.
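    To illustrate the difference between first order and higher order mutants, and the fault masking category in particular, the invented example below shows two first order mutants that every test kills, while the second order mutant combining them masks both faults completely and behaves like the original program. Roughly speaking, the subsuming higher order mutants sought in the thesis are the partially masking variants that remain killable, but only by a subset of the tests that kill their constituent first order mutants.

    /**
     * Invented example (not from the thesis) of first order vs higher order
     * mutants and of fault masking.
     */
    public class MutationExample {

        // Original program under test: returns its input unchanged.
        static int original(int x) {
            int stored = x + 3;     // encode
            return stored - 3;      // decode
        }

        // First order mutant 1: '+ 3' mutated to '+ 4'. Killed by every input.
        static int fom1(int x) {
            int stored = x + 4;
            return stored - 3;
        }

        // First order mutant 2: '- 3' mutated to '- 4'. Killed by every input.
        static int fom2(int x) {
            int stored = x + 3;
            return stored - 4;
        }

        // Second (higher) order mutant combining both changes: the faults mask
        // each other, so its behaviour matches the original and no test kills it.
        static int hom(int x) {
            int stored = x + 4;
            return stored - 4;
        }

        public static void main(String[] args) {
            int input = 7;
            int expected = original(input);
            System.out.println("fom1 killed: " + (fom1(input) != expected)); // true
            System.out.println("fom2 killed: " + (fom2(input) != expected)); // true
            System.out.println("hom killed:  " + (hom(input) != expected));  // false
        }
    }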