
    Enhancement of the usability of SOA services for novice users

    Recently, the automation of service integration has provided a significant advantage in delivering services to novice users. This art of integrating various services is known as service composition; its main purpose is to simplify the development process for web applications and to facilitate the reuse of services. It is one of the paradigms that delivers services to end-users (i.e., service provisioning) through the outsourcing of web content, and it requires users to share and reuse services in more collaborative ways. Most service composers are effective at integrating web content, but they do not enable universal access across different groups of users. This is because existing content aggregators require complex interactions to create web applications (e.g., the Web Service Business Process Execution Language (WS-BPEL)); as a result, not all users are able to use such tools. This trend demands changes in the web tools that end-users use to gain and share information, so this research uses mashups as a service composition technique to allow novice users to integrate publicly available Service Oriented Architecture (SOA) services with minimal active web application development. Mashups, platforms that integrate disparate web Application Programming Interfaces (APIs) to create user-defined web applications, present a great opportunity for service provisioning. However, their usability for novice users remains unvalidated: mashup tools are not easy to use, as they require basic programming skills, which makes the process of designing and creating mashups difficult. Mashup tools access heterogeneous web content through public web APIs, and integrating them becomes complex because the APIs are tailored by different vendors. Moreover, the design of mashup editors is unnecessarily complex; as a result, users do not know where to start when creating mashups. This research addresses the gap between mashup tools and usability by designing and implementing a semantically enriched mashup tool that discovers, annotates, and composes APIs to improve the utilization of SOA services by novice users. The researchers analyzed existing mashup tools to identify the challenges and weaknesses experienced by novice mashup users. The findings from this requirement analysis formed the system usability requirements that informed the design and implementation of the proposed mashup tool. The proposed architecture comprises three layers: composition, annotation, and discovery. The researchers developed a simple mashup tool, the soa-Services Provisioner (SerPro), that allows novice users to create web applications flexibly, and validated its usability and effectiveness. The proposed mashup tool enhanced the usability of SOA services: data analysis and results showed that it was usable by novice users, scoring 72.08 on the System Usability Scale (SUS). Finally, this research discusses its limitations and future work for further improvement.
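    For readers unfamiliar with the SUS score reported above: SUS is a standard ten-item questionnaire with a fixed scoring rule. The sketch below shows that standard scoring; the example responses are hypothetical and are not the study's data.

```python
# Minimal sketch of standard System Usability Scale (SUS) scoring.
# The example responses are hypothetical, not the study's data.

def sus_score(responses):
    """Compute a SUS score from ten 1-5 Likert responses.

    Odd-numbered items contribute (response - 1); even-numbered items
    contribute (5 - response). The sum is scaled by 2.5 to give 0-100.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Example: one hypothetical participant's answers.
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3]))  # -> 77.5
```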

    Automatic generation of semantic Mashups in web portals

    The Web has become an important source of information created by independent providers. Web portals provide a unified point of access to content, data, services, and web applications located throughout the enterprise. However, Web users often have too little time to use the available information resources effectively. This thesis proposes a mashup framework that automatically mashes up web portal content with related background information. The background information is derived from information web services that are composed by an evolutionary algorithm.
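    The thesis's evolutionary algorithm is not detailed in this abstract; purely as an illustration of the general idea, the sketch below evolves a subset of hypothetical services toward a toy fitness function. All service names, relevance values, and the fitness definition are assumptions.

```python
import random

# Illustrative-only genetic algorithm that evolves a service composition
# (a bit-vector selecting from a pool of hypothetical services). The
# fitness function is a toy stand-in for the thesis's actual objective.

SERVICES = ["weather", "news", "maps", "stocks", "events", "translate"]
RELEVANCE = [0.9, 0.7, 0.4, 0.2, 0.8, 0.3]  # assumed relevance to portal content

def fitness(genome):
    # Reward relevant services, penalize composition size.
    score = sum(r for g, r in zip(genome, RELEVANCE) if g)
    return score - 0.15 * sum(genome)

def evolve(pop_size=30, generations=50, mutation_rate=0.1):
    pop = [[random.randint(0, 1) for _ in SERVICES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(SERVICES))         # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]                         # bit-flip mutation
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return [s for g, s in zip(best, SERVICES) if g]

print(evolve())  # e.g. ['weather', 'news', 'events']
```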

    Leveraging Identifier Naming Structures in Source Code and Bug Reports to Localize Relevant Bugs

    When bugs are found in source code, bug reports are created that contain relevant information for developers to locate and fix the bug. In large source code repositories, it can be difficult and time-consuming for developers to manually analyze bug reports to locate a bug. The discovery of patterns between bug reports and source files has led to the creation of automated tools using various techniques. Automated bug localization techniques can reduce the manual effort required of developers by ranking the most probable locations of the bug using textual information from bug reports and source code. Although these approaches offer some assistance, the lexical mismatch between bug reports and source code makes it difficult to accurately locate the buggy source code file(s) using Information Retrieval (IR) techniques. Our research proposes a technique that takes advantage of the lexical and structural patterns observed in source code identifier names to help offset the mismatch between bug reports and their related source code files. Our observations reveal lexical and structural naming trends for different identifier types in source code. Using two open-source projects, ElasticSearch and RxJava, we collected frequencies for the identifier patterns observed across each project and applied those frequencies to matched word occurrences in bug reports across our evaluation data set to modify the significance of each word. Based on the observations from this empirical analysis, we developed a method that modifies the significance of a word by altering its weight in the Term Frequency-Inverse Document Frequency (TF-IDF) vectorization of the bug report. The idea behind this approach is that if we encounter a word perceived to be significant based on our identifier pattern frequency data, we can weight that word in the bug report vectorization to increase the cosine similarity score between the bug report and source file vectors. This work expands and improves upon previous work by Gharibi et al. [1], who propose a multi-component approach that uses token matching, stack traces, semantic similarity, and a revised vector space model (rVSM). Specifically, our approach modifies the rVSM component and is evaluated on the same three open-source software projects: AspectJ, SWT, and ZXing. The results of our approach are comparable to those of Gharibi et al., with improvements in some cases, and our work outperforms many existing bug localization approaches. Top@N, Mean Reciprocal Rank (MRR), and Mean Average Precision (MAP) are the metrics used to evaluate and rank our work against other approaches, revealing some improvement in bug localization across the three projects.
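    The paper's exact weighting scheme is not reproduced here; the sketch below illustrates the general idea under assumed pattern frequencies: boost the TF-IDF weights of bug-report words that match observed identifier patterns, then rank source files by cosine similarity. The documents, words, and boost factors are hypothetical.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical weights for identifier-naming patterns; stand-ins for the
# frequency statistics the paper mines from project histories.
PATTERN_WEIGHT = {"parser": 1.5, "index": 1.3, "shard": 1.4}

source_files = [
    "class ShardIndexParser handles shard index parsing",
    "class QueryBuilder builds search queries",
]
bug_report = "crash when the shard index parser reads an empty segment"

vectorizer = TfidfVectorizer()
file_vecs = vectorizer.fit_transform(source_files).toarray()
report_vec = vectorizer.transform([bug_report]).toarray()[0]

# Boost TF-IDF weights of report words that match observed patterns.
for word, weight in PATTERN_WEIGHT.items():
    idx = vectorizer.vocabulary_.get(word)
    if idx is not None:
        report_vec[idx] *= weight

scores = cosine_similarity([report_vec], file_vecs)[0]
for i in np.argsort(scores)[::-1]:           # most similar file first
    print(f"{scores[i]:.3f}  {source_files[i]}")
```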

    Service composition based on social relations between things in IoT

    With the rapid development of service-oriented computing applications and the Social Internet of Things (SIoT), it is becoming more and more difficult for end-users to find relevant services with which to create value-added composite services in this big data environment. This work therefore proposes S-SCORE (Social Service Composition based on Recommendation), an approach for interactive web service composition in the SIoT ecosystem for end-users. The main contribution of this work is a novel recommendation approach that discovers and suggests trustworthy, personalized web services suitable for composition. The first proposed recommendation model addresses the problem of information overload, discovering services and providing personalized suggestions for users without sacrificing recommendation accuracy. To validate the performance of our approach, seven variant algorithms from different approaches (popularity-based, user-based, and item-based) were compared on the MovieLens 20M dataset. The experiments show that our model improves recommendation accuracy by 12%, the highest score among the compared methods, and outperforms the compared models in diversity over all lengths of recommendation lists. The second proposed approach is a novel recommendation mechanism for service composition that suggests trustworthy, personalized web services suitable for composition. The recommendation process consists of offline and online stages. In the offline stage, two similarity computation models are presented: first, an improved user similarity model filters the set of advisors for an active user; then, a new service collaboration model based on the functional and non-functional features of services provides a set of collaborators for the active service. The online phase predicts ratings of candidate services using a hybrid algorithm based on collaborative filtering. The proposed method considerably improves prediction accuracy, achieving the lowest Mean Absolute Error (MAE) and the highest coverage among the compared traditional collaborative filtering-based prediction approaches.
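    The hybrid prediction algorithm itself is not given in the abstract; as background, the sketch below shows the basic user-based collaborative filtering prediction step that such approaches build on. The rating matrix is a toy, not the evaluation data.

```python
import numpy as np

# Toy user-service rating matrix (0 = unrated); hypothetical data.
# Rows are users, columns are candidate services.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    mask = (a > 0) & (b > 0)                 # co-rated services only
    if not mask.any():
        return 0.0
    return float(a[mask] @ b[mask] /
                 (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])))

def predict(user, item, k=2):
    """User-based CF: weighted average of the k most similar raters."""
    sims = [(cosine_sim(R[user], R[v]), v)
            for v in range(len(R)) if v != user and R[v, item] > 0]
    top = sorted(sims, reverse=True)[:k]
    num = sum(s * R[v, item] for s, v in top)
    den = sum(abs(s) for s, _ in top)
    return num / den if den else 0.0

print(predict(user=0, item=2))   # predicted rating for an unrated service
```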

    Testing of Neural Networks

    Research in neural networks is becoming more popular each year. Research has introduced different ways to utilize neural networks, but an important aspect is missing: testing. Only 16 papers strictly address testing neural networks, with a majority of them focusing on deep neural networks and a small part on recurrent neural networks. Testing recurrent neural networks is just as important as testing deep neural networks, as they are used in products like autonomous vehicles, so there is a need to ensure that recurrent neural networks are of high quality, reliable, and behave correctly. The few existing research papers on testing recurrent neural networks focus only on the LSTM and GRU architectures, but more recurrent architectures exist, such as MGU, UGRNN, and Delta-RNN. This means we need to determine whether existing test metrics work for these architectures or whether new testing metrics must be introduced. This paper has two objectives. First, we perform a comparative analysis of the 16 papers on testing neural networks: we define the testing metrics and analyze features such as code availability, programming languages, and related testing software concepts. We then perform a case study with the neuron coverage test metric, conducting an experiment using unoptimized RNN models trained by a tool within EXAMM, an RNN framework, and optimized RNN models trained and optimized using ANTS. We compare the neuron coverage outputs under the assumption that the optimized models will perform better.
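    Neuron coverage has a widely used definition: the fraction of neurons whose scaled activation exceeds a threshold on at least one test input. The sketch below computes it for a tiny random feed-forward network as a stand-in for the RNN models the study evaluates; the network, threshold, and inputs are assumptions.

```python
import numpy as np

# Minimal sketch of the neuron coverage metric: the fraction of neurons
# whose scaled activation exceeds a threshold on at least one input.
# The tiny random network is a stand-in for the study's RNN models.

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(4, 2))

def forward_activations(x):
    h1 = np.maximum(0, x @ W1)               # ReLU hidden layer
    h2 = np.maximum(0, h1 @ W2)
    return np.concatenate([h1, h2])

def neuron_coverage(test_inputs, threshold=0.25):
    covered = None
    for x in test_inputs:
        acts = forward_activations(x)
        # Scale activations to [0, 1] per input before thresholding.
        span = acts.max() - acts.min()
        scaled = (acts - acts.min()) / span if span else acts
        hits = scaled > threshold
        covered = hits if covered is None else covered | hits
    return covered.mean()

inputs = rng.normal(size=(20, 8))
print(f"neuron coverage: {neuron_coverage(inputs):.2%}")
```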

    Why did you clone these identifiers? Using Grounded Theory to understand Identifier Clones

    Developers spend most of their time comprehending source code, with some studies estimating this activity takes between 58% and 70% of a developer's time. To improve the readability of source code, and therefore the productivity of developers, it is important to understand which aspects of static code analysis and syntactic code structure hinder the understandability of code. Identifiers are a main source of code comprehension due to their large volume and their role as implicit documentation of a developer's intent. Despite the critical role that identifiers play during program comprehension, there are no regulated naming standards for developers to follow when picking identifier names. Our research supports previous work aimed at understanding what makes a good identifier name, and which practices to follow when picking names, by exploring a phenomenon that occurs during identifier naming: identifier clones. Identifier clones are two or more identifiers that are declared using the same name. This is an important yet unexplored phenomenon in which developers intentionally give the same name to two or more identifiers in separate parts of a system. We must study identifier clones to understand their impact on program comprehension and to better understand the nature of identifier naming. To accomplish this, we conducted an empirical study of identifier clones detected in open-source engineered software systems and propose a taxonomy of identifier clones with categories that explain why they are introduced into systems and whether they represent naming antipatterns.
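    While the study itself is qualitative, the detection step it relies on is mechanical; the sketch below shows one minimal way to surface identifier clones by grouping declared identifiers by name. The declaration list is hypothetical; a real pipeline would extract declarations with a parser.

```python
from collections import defaultdict

# Minimal sketch of identifier clone detection: group declared
# identifiers by name and report names declared in more than one place.
# The (file, name) pairs are hypothetical.

declarations = [
    ("parser/Lexer.java", "buffer"),
    ("net/Socket.java", "buffer"),
    ("parser/Lexer.java", "tokenize"),
    ("util/Cache.java", "buffer"),
]

by_name = defaultdict(list)
for path, name in declarations:
    by_name[name].append(path)

for name, sites in by_name.items():
    if len(sites) > 1:                       # clone: same name, many sites
        print(f"{name!r} declared in {len(sites)} files: {sites}")
```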

    Query-Time Data Integration

    Today, data is collected at ever-increasing scale and variety, opening up enormous potential for new insights and data-centric products. However, in many cases the volume and heterogeneity of new data sources preclude up-front integration using traditional ETL processes and data warehouses. In some cases, it is even unclear if and in what context the collected data will be utilized. Therefore, there is a need for agile methods that defer the effort of integration until the usage context is established. This thesis introduces Query-Time Data Integration as an alternative concept to traditional up-front integration. It aims at enabling users to issue ad-hoc queries on their own data as if all potential other data sources were already integrated, without declaring specific sources and mappings to use. Automated data search and integration methods are then coupled directly with query processing on the available data. The ambiguity and uncertainty introduced through fully automated retrieval and mapping methods is compensated for by answering those queries with ranked lists of alternative results. Each result is based on different data sources or query interpretations, allowing users to pick the result most suitable to their information need. To this end, this thesis makes three main contributions. Firstly, we introduce a novel method for Top-k Entity Augmentation, which constructs a top-k list of consistent integration results from a large corpus of heterogeneous data sources. It improves on the state of the art by producing a set of individually consistent but mutually diverse alternative solutions, while minimizing the number of data sources used. Secondly, based on this novel augmentation method, we introduce the DrillBeyond system, which is able to process Open World SQL queries, i.e., queries referencing arbitrary attributes not defined in the queried database. The original database is then augmented at query time with Web data sources providing those attributes. Its hybrid augmentation/relational query processing enables the use of ad-hoc data search and integration in data analysis queries, and improves both performance and quality compared to using separate systems for the two tasks. Finally, we studied the management of large-scale dataset corpora such as data lakes or Open Data platforms, which serve as data sources for our augmentation methods. We introduce Publish-time Data Integration as a new technique for data curation systems managing such corpora, which aims at improving the individual reusability of datasets without requiring up-front global integration. This is achieved by automatically generating metadata and format recommendations, allowing publishers to enhance their datasets with minimal effort. Collectively, these three contributions form the foundation of a Query-time Data Integration architecture that enables ad-hoc data search and integration queries over large heterogeneous dataset collections.
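    The thesis's augmentation algorithm is more sophisticated than can be shown here; as a rough illustration of the stated goals (consistent results, mutual diversity, few sources), the sketch below greedily builds k alternative results while penalizing sources that were already used. Source names, coverage scores, and values are all hypothetical.

```python
# Rough greedy illustration of top-k entity augmentation: build k
# alternative results for a requested attribute, preferring high-scoring
# sources while penalizing sources already used, so the k results stay
# mutually diverse. Sources and scores are hypothetical.

SOURCES = {
    "web_table_17": {"coverage": 0.9, "values": {"Germany": 83.2}},
    "open_data_4":  {"coverage": 0.8, "values": {"Germany": 83.1}},
    "wiki_list_2":  {"coverage": 0.6, "values": {"Germany": 82.9}},
}

def top_k_augmentations(entity, k=2, diversity_penalty=0.5):
    used = set()
    results = []
    for _ in range(k):
        def score(name):
            base = SOURCES[name]["coverage"]
            return base - (diversity_penalty if name in used else 0.0)
        candidates = [n for n in SOURCES if entity in SOURCES[n]["values"]]
        if not candidates:
            break
        best = max(candidates, key=score)
        used.add(best)
        results.append((best, SOURCES[best]["values"][entity]))
    return results

# Two alternative augmentations for the same entity, from diverse sources.
print(top_k_augmentations("Germany"))
```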

    Taxonomy of Software Readability Changes

    Software readability has emerged as an important software quality metric, and numerous pieces of research have highlighted its importance. Developers generally spend a large amount of their time reading and understanding existing code rather than writing new code; by creating more readable code, engineers can limit the mental load required to understand specific code segments. With this importance established, research has examined how to measure readability, how to create more readable software, and how to improve readability. While some research has examined the changes developers make, its use of automatic source code analysis may miss some aspects of these changes. This study therefore conducted a manual review of software readability commits to identify the changes developers tend to make. We identified 1,782 potential readability commits from 800 open-source Java projects by mining keyword patterns in commit messages. These commits were then reviewed by human reviewers to identify the changes made by the developers. The reviewers' observations were examined for trends, from which several categories were established; these categories were further reviewed for additional trends, yielding a taxonomy of readability changes. Overall, this research examined 314 changes from 194 commits across 154 unique projects. This study shows developers' actions when improving software readability, identifying the common trends of method extraction, identifier renaming, and code formatting, which are supported by existing research. In addition, it presents less-observed trends, such as code removal and keyword modification, which were not seen in other research. Overall, this work provides a taxonomy of the trends seen, identifying high-level trends as well as subgroups within those trends.
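    The study's keyword patterns are not listed in this abstract; the sketch below shows the general shape of such commit-message mining with an assumed keyword list standing in for the patterns the study actually used.

```python
import re

# Minimal sketch of the keyword-pattern mining step: flag commits whose
# messages suggest a readability change. The keyword list is an assumed
# stand-in for the study's actual patterns.

READABILITY_PATTERN = re.compile(
    r"\b(readab\w*|clean\s*up|simplif\w*|renam\w*|reformat\w*)\b",
    re.IGNORECASE,
)

commits = [
    ("a1b2c3", "Improve readability of the parser loop"),
    ("d4e5f6", "Fix NPE in session handler"),
    ("0789ab", "Rename tmp to requestBuffer and reformat imports"),
]

candidates = [(sha, msg) for sha, msg in commits
              if READABILITY_PATTERN.search(msg)]
for sha, msg in candidates:
    print(sha, msg)
```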

    Mashup Ecosystems: Integrating Web Resources on Desktop and Mobile Devices

    The Web is increasingly used as an application platform, and its recent development has introduced software ecosystems where different actors collaborate. This collaboration is international from day one, and it evolves and grows rapidly. In web ecosystems, applications are provided as services, and interdependencies between ecosystem parts can vary from very strong and obvious to loose and recondite. Mashups -- web application hybrids that combine resources from different services into an integrated system with increased value from the user's perspective -- exploit the services of the Web and create ecosystems where end-users, mashup authors, and service providers collaborate. The term "resources" is used here in a broad sense, and it can refer to a user's local data, the infinite content of the Web, and even executable code. This dissertation presents mashups as a new breed of web applications intended for parsing web content into an easily accessed form on both regular desktop computers and mobile devices. Constantly evolving web technologies and new web services open up unforeseen possibilities for mashup development. However, developing mashups with current methods and tools for existing deployment environments is challenging: first, the Web as an application platform faces numerous shortcomings; second, web application development practices in general are still immature; and third, the development of mashups has additional requirements that need to be addressed. In addition, mobility poses even more challenges for mashup authoring. This dissertation describes and addresses numerous issues regarding mashup ecosystems and client-side mashup development. To achieve this, we have implemented technical research artifacts including mashup ecosystems and different kinds of mashup compositions. The artifacts were developed with numerous runtime environments and tools and targeted at different end-user platforms. This has allowed us to evaluate the methods, tools, and practices used during implementation. As a result, this dissertation identifies the fundamental challenges of mashup ecosystems and describes how service providers and mashup ecosystem authors can address these challenges in practice. In addition, an example implementation of a specialized multimedia mashup ecosystem for mobile devices is described. To address mashup development issues, this dissertation introduces practical guidelines and a reference architecture that can be applied when mashups are created with traditional web development tools. Moreover, environments that can be used on mobile devices to create mashups with access to both web and local resources are introduced. Finally, a novel approach to web software development -- creating software as a mashup -- is introduced, and a realization of this concept is described.