23 research outputs found

    Web Service Mashup Middleware with Partitioning of XML Pipelines

    Full text link

    Quality of Web Mashups: A Systematic Mapping Study

    Full text link
    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-04244-2_8
    Web mashups are a new generation of applications based on the composition of ready-to-use, heterogeneous components. They are gaining momentum thanks to their lightweight composition approach, which represents a new opportunity for companies to leverage past investments in SOA, Web services, and public APIs. Although several studies address mashup development, no systematic mapping study has been reported on how quality issues are handled. This paper reports a systematic mapping study on which quality aspects of Web mashups have been addressed, and on how product quality-aware approaches have been defined and validated. The aim of this study is to provide a background against which future research activities can be appropriately developed. A total of 38 research papers were included from an initial set of 187 papers. Our results provide findings on how the most relevant product quality characteristics have been addressed in different artifacts and stages of the development process. They were also useful for detecting research gaps, such as the need for more controlled experiments and for more quality-aware mashup development proposals covering characteristics that are important for the Web domain but have so far been neglected, such as Usability and Reliability.
    This work is funded by the MULTIPLE project (TIN2009-13838), the Senescyt program (scholarships 2011), and the Erasmus Mundus Programme of the European Commission under the Transatlantic Partnership for Excellence in Engineering - TEE Project.
    Cedillo Orellana, IP.; Fernández Martínez, A.; Insfrán Pelozo, CE.; Abrahao Gonzales, SM. (2013). Quality of Web Mashups: A Systematic Mapping Study. In: Current Trends in Web Engineering. Springer. 66-78. https://doi.org/10.1007/978-3-319-04244-2_8

    Un método de evaluación de usabilidad de mashups basado en la composicionalidad de sus componentes

    Full text link
    Mashups are a new generation of Web applications that integrate third-party components from the Web. Usability is a very important factor in these applications since, as for any Web application, it determines their success. Although several methods have been proposed to evaluate the usability of Web applications in general, they do not cover the specific characteristics inherent to the compositionality of mashups. This master's thesis provides a method for evaluating the usability of mashups in accordance with their compositional characteristics. The method is composed of a usability model and an evaluation process that provides guidelines on how the usability model can be used to perform specific evaluations. Both the usability model and the evaluation process are aligned with the latest ISO/IEC 25000 standard for software product quality evaluation (SQuaRE), and both take into account the compositional nature of mashups. The usability model breaks down the concept of usability into sub-characteristics, attributes and generic metrics. These metrics are applied to the final product composition, and the evaluation method can be applied at any stage of the life cycle of this kind of application (i.e., during component selection, during the composition process, or when the product is ready for use). To support the foundation of this proposal, an in-depth study of the state of the art was performed, comprising two systematic mappings: the first covers the quality evaluation of mashups, whereas the second covers the compositionality features involved in mashups.
    The results obtained were used as input to define both our Mashup Usability Model and our usability evaluation process. Finally, we developed three case studies to show the feasibility of our approach. These case studies show in detail how the proposed evaluation method is carried out using our Mashup Usability Model. The results show that our approach is able to detect usability problems which, once corrected, yield more usable mashups.
    Cedillo Orellana, IP. (2013). Un método de evaluación de usabilidad de mashups basado en la composicionalidad de sus componentes. http://hdl.handle.net/10251/37955
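A metric aggregation in the spirit of the compositional evaluation described above might be sketched as follows. This is a minimal illustration only: the metric names, weights, problem threshold, and the weighted-average aggregation are assumptions for the sake of the example, not the thesis' actual usability model.

```python
# Illustrative sketch: score generic usability metrics per component, then
# aggregate over the composed mashup and flag low-scoring components.
# Metric names, weights, and threshold are hypothetical.

def component_score(metrics, weights):
    """Weighted average of a component's metric values (all in [0, 1])."""
    total = sum(weights.values())
    return sum(metrics[m] * w for m, w in weights.items()) / total

def mashup_usability(components, weights, threshold=0.6):
    """Score each component, aggregate, and report potential usability problems."""
    scores = {name: component_score(m, weights) for name, m in components.items()}
    problems = [name for name, s in scores.items() if s < threshold]
    overall = sum(scores.values()) / len(scores)
    return overall, problems

# Toy mashup with two third-party components.
components = {
    "map_widget":  {"learnability": 0.9, "operability": 0.8},
    "feed_widget": {"learnability": 0.4, "operability": 0.5},
}
weights = {"learnability": 2, "operability": 1}
overall, problems = mashup_usability(components, weights)
```

Because the metrics are attached to individual components, such an evaluation can run at component-selection time as well as on the finished composition, which is the property the thesis emphasizes.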

    Cost-Based Optimization of Integration Flows

    Get PDF
    Integration flows are increasingly used to specify and execute data-intensive integration tasks between heterogeneous systems and applications. There are many application areas, such as real-time ETL and data synchronization between operational systems. Due to the increasing amount of data, highly distributed IT infrastructures, and strong requirements for data consistency and up-to-date query results, many instances of integration flows are executed over time. Because of this high load, and because synchronous source systems block while waiting, the performance of the central integration platform is crucial for an IT infrastructure. To meet these high performance requirements, we introduce the concept of cost-based optimization of imperative integration flows, which relies on incremental statistics maintenance and inter-instance plan re-optimization. As a foundation, we introduce the concept of periodical re-optimization, including novel cost-based optimization techniques that are tailor-made for integration flows. Furthermore, we refine periodical re-optimization into on-demand re-optimization in order to overcome the problems of many unnecessary re-optimization steps and of adaptation delays, during which we miss optimization opportunities. This approach ensures low optimization overhead and fast workload adaptation.
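The on-demand re-optimization idea in this abstract can be sketched in a few lines: statistics are maintained incrementally per executed flow instance, and re-optimization is triggered only when observed costs drift far enough from the estimate the current plan was built on. All class, attribute, and threshold names below are hypothetical illustrations, not the dissertation's actual interfaces.

```python
# Minimal sketch of on-demand re-optimization driven by cost drift.
# A single scalar cost per flow instance is assumed for simplicity.

class FlowOptimizer:
    def __init__(self, drift_threshold=0.5):
        self.drift_threshold = drift_threshold  # relative drift that triggers re-optimization
        self.estimated_cost = None              # cost estimate behind the current plan
        self.observed = []                      # incrementally maintained statistics
        self.reoptimizations = 0

    def record_execution(self, cost):
        """Incrementally update statistics after each flow instance."""
        self.observed.append(cost)
        if self.estimated_cost is None:
            self.estimated_cost = cost
            return
        avg = sum(self.observed) / len(self.observed)
        drift = abs(avg - self.estimated_cost) / self.estimated_cost
        if drift > self.drift_threshold:        # re-optimize only on demand
            self.reoptimize(avg)

    def reoptimize(self, new_estimate):
        """Re-derive the plan under current statistics (plan search elided)."""
        self.estimated_cost = new_estimate
        self.observed = []
        self.reoptimizations += 1

# Stable workload causes no re-optimization; a cost shift triggers one.
opt = FlowOptimizer(drift_threshold=0.5)
for cost in [10, 10, 10, 30, 30]:
    opt.record_execution(cost)
```

Compared with a purely periodical scheme, the trigger fires only when the statistics actually diverge, which is what keeps the optimization overhead low.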

    Colaboración P2P para ambientes de desarrollo de usuario final basados en navegadores web

    Get PDF
    The browser as we know it is evolving, and collaboration on the Web is taking a central role. Exploring an alternative to the client-server interaction model through a P2P architecture allows a user to connect directly with a peer, with the premise of sharing content, decentralizing information, and taking advantage of Web technologies. The proposal is to offer a P2P middleware and a framework to extend the browser's functionality through the use of WebExtensions.
    Facultad de Informática

    4Sensing - decentralized processing for participatory sensing data

    Get PDF
    Dissertation presented in the context of the Master's in Computer Engineering, as a partial requirement for obtaining the degree of Master in Computer Engineering.
    Participatory sensing is a new application paradigm, stemming from both technical and social drivers, which is currently gaining momentum as a research domain. It leverages the growing adoption of mobile phones equipped with sensors, such as camera, GPS and accelerometer, enabling users to collect and aggregate data covering a wide area without incurring the costs associated with a large-scale sensor network. Related research in participatory sensing usually proposes an architecture based on a centralized back-end. Centralized solutions raise a set of issues. On one side, there are the implications of having a centralized repository hosting privacy-sensitive information. On the other side, this centralized model has financial costs that can discourage grassroots initiatives. This dissertation focuses on the data management aspects of a decentralized infrastructure for the support of participatory sensing applications, leveraging the body of work on participatory sensing and related areas, such as wireless and internet-wide sensor networks, peer-to-peer data management and stream processing. It proposes a framework covering a common set of data management requirements - from data acquisition, to processing, storage and querying - with the goal of lowering the barrier for the development and deployment of applications. Alternative architectural approaches - RTree, QTree and NTree - are proposed and evaluated experimentally in the context of a case-study application, SpeedSense, supporting the monitoring and prediction of traffic conditions through the collection of speed and location samples in an urban setting, using GPS-equipped mobile phones.
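The data-acquisition and aggregation step that a SpeedSense-like application needs can be illustrated as follows: GPS speed samples, already mapped to road segments, are aggregated into per-segment averages that a traffic-monitoring query can consume. The segment identifiers and the in-memory aggregation are illustrative assumptions; the dissertation distributes this processing over the peer-to-peer tree architectures it evaluates.

```python
# Toy aggregation of participatory speed samples into per-segment means.
from collections import defaultdict

def aggregate_speeds(samples):
    """samples: iterable of (segment_id, speed_kmh) -> {segment_id: mean speed}."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for segment, speed in samples:
        sums[segment] += speed
        counts[segment] += 1
    return {seg: sums[seg] / counts[seg] for seg in sums}

# Two samples for segment A1, one for B7.
samples = [("A1", 50.0), ("A1", 30.0), ("B7", 90.0)]
means = aggregate_speeds(samples)
```

In a decentralized deployment, partial sums and counts (rather than raw samples) can be merged between peers, since both combine associatively.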

    Managing Event-Driven Applications in Heterogeneous Fog Infrastructures

    Get PDF
    The steady increase in digitalization propelled by the Internet of Things (IoT) has led to a deluge of generated data at an unprecedented pace. The promise of data-driven decision-making is thereby a major innovation driver in a myriad of industries. Based on the widely used event processing paradigm, event-driven applications make it possible to analyze data in the form of event streams in order to extract relevant information in a timely manner. Most recently, graphical flow-based approaches in no-code event processing systems have been introduced to significantly lower technological entry barriers. This empowers non-technical citizen technologists to create event-driven applications composed of multiple interconnected event-driven processing services. Still, today’s event-driven applications are focused on centralized cloud deployments that come with inevitable drawbacks, especially in IoT scenarios that require fast results, are limited by the available bandwidth, or are bound by privacy and security regulations. Despite recent advances in the area of fog computing, which mitigate these shortcomings by extending the cloud and moving certain processing closer to the event source, these approaches are hardly established in existing systems. Inherent fog computing characteristics, especially the heterogeneity of resources, alongside novel application management demands, particularly geo-distribution and dynamic adaptation, pose challenges that are currently insufficiently addressed and hinder the transition to a next generation of no-code event processing systems. The contributions of this thesis enable citizen technologists to manage event-driven applications in heterogeneous fog infrastructures along the application life cycle. To this end, an approach for holistic application management is proposed that abstracts citizen technologists from the underlying technicalities.
    This makes it possible to evolve present event processing systems and advances the democratization of event-driven application management in fog computing. The individual contributions of this thesis are summarized as follows:
    1. A model, manifested in a geo-distributed system architecture, to semantically describe characteristics specific to node resources, event-driven applications and their management, blending the application-centric and infrastructure-centric realms.
    2. Concepts for geo-distributed deployment and operation of event-driven applications, alongside strategies for flexible event stream management.
    3. A methodology to support the evolution of event-driven applications, including methods to dynamically reconfigure, migrate and offload individual event-driven processing services at run-time.
    The contributions are introduced, applied and evaluated along two scenarios from the manufacturing and logistics domains.
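One management decision named above, placing a processing service on a heterogeneous fog or cloud node under a latency bound, can be sketched as a simple feasibility-plus-cost rule. The node attributes and the greedy selection below are assumptions for illustration only; the thesis' semantic resource model and adaptation methods are far richer.

```python
# Hypothetical placement sketch: pick the cheapest node that satisfies a
# service's latency and memory requirements.

def place_service(service, nodes):
    """Return the name of the cheapest feasible node, or None if none fits."""
    feasible = [n for n in nodes
                if n["latency_ms"] <= service["max_latency_ms"]
                and n["free_mem_mb"] >= service["mem_mb"]]
    if not feasible:
        return None  # a full system would trigger offloading or adaptation here
    return min(feasible, key=lambda n: n["cost"])["name"]

service = {"max_latency_ms": 20, "mem_mb": 256}
nodes = [
    {"name": "cloud-1", "latency_ms": 80, "free_mem_mb": 8192, "cost": 1},  # too far
    {"name": "fog-gw",  "latency_ms": 5,  "free_mem_mb": 512,  "cost": 3},  # fits
    {"name": "fog-dev", "latency_ms": 4,  "free_mem_mb": 128,  "cost": 2},  # too small
]
chosen = place_service(service, nodes)
```

Re-running such a rule when node statistics change is one simple way to realize the run-time reconfiguration and migration the contributions describe.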

    Sharing Semantic Resources

    Get PDF
    The Semantic Web is an extension of the current Web in which information, so far created for human consumption, becomes machine readable, “enabling computers and people to work in cooperation”. To turn this vision into reality, several challenges remain open, among which the most important is sharing meaning formally represented with ontologies or, more generally, with semantic resources. This long-term Semantic Web goal has many convergences with activities in the field of Human Language Technology, and in particular with the development of Natural Language Processing applications, where there is a great need for multilingual lexical resources. For instance, one of the most important lexical resources, WordNet, is also commonly regarded and used as an ontology. Nowadays, another important phenomenon is the explosion of social collaboration, and Wikipedia, the largest encyclopedia in the world, is the object of research as an up-to-date, comprehensive semantic resource. The main topic of this thesis is the management and exploitation of semantic resources in a collaborative way, trying to use already available resources such as Wikipedia and WordNet. This work presents a general environment able to turn the vision of shared and distributed semantic resources into reality, and describes a distributed three-layer architecture that enables rapid prototyping of cooperative applications for developing semantic resources.

    User-centered semantic dataset retrieval

    Get PDF
    Finding relevant research data is an increasingly important but time-consuming task in daily research practice. Several studies report difficulties in dataset search; e.g., scholars retrieve only partially pertinent data, and important information cannot be displayed in the user interface. Overcoming these problems has motivated a number of research efforts in computer science, such as text mining and semantic search. In particular, the emergence of the Semantic Web opens a variety of novel research perspectives. Motivated by these challenges, the overall aim of this work is to analyze the current obstacles in dataset search and to propose and develop a novel semantic dataset search. The studied domain is biodiversity research, a domain that explores the diversity of life, habitats and ecosystems. This thesis has three main contributions: (1) We evaluate the current situation in dataset search in a user study, and we compare a semantic search with a classical keyword search to explore the suitability of Semantic Web technologies for dataset search. (2) We generate a question corpus and develop an information model to determine which scientific topics scholars in biodiversity research are interested in. Moreover, we analyze the gap between current metadata and scholarly search interests, and we explore whether metadata and user interests match. (3) We propose and develop an improved dataset search based on three components: (A) a text mining pipeline enriching metadata and queries with semantic categories and URIs, (B) a retrieval component with a semantic index over categories and URIs, and (C) a user interface that enables search within categories and search including further hierarchical relations. Following user-centered design principles, we ensure user involvement in various user studies during the development process.
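Components (A) and (B) of the improved dataset search can be illustrated together: both metadata and queries are enriched with semantic categories, and a semantic index over those categories lets a query match a dataset even without keyword overlap. The toy keyword-to-category map below stands in for the actual text-mining pipeline, and all names and labels are assumptions for illustration.

```python
# Sketch of category enrichment plus a semantic index over categories.

CATEGORY_MAP = {  # stands in for the text-mining/NER enrichment step
    "beetle": "ORGANISM", "insect": "ORGANISM",
    "soil": "ENVIRONMENT", "habitat": "ENVIRONMENT",
}

def enrich(text):
    """Return the set of semantic categories mentioned in a text."""
    return {CATEGORY_MAP[w] for w in text.lower().split() if w in CATEGORY_MAP}

def build_index(datasets):
    """Semantic index: category label -> set of dataset ids."""
    index = {}
    for ds_id, metadata in datasets.items():
        for cat in enrich(metadata):
            index.setdefault(cat, set()).add(ds_id)
    return index

def search(query, index):
    """Match datasets via shared categories rather than shared keywords."""
    hits = set()
    for cat in enrich(query):
        hits |= index.get(cat, set())
    return sorted(hits)

datasets = {"d1": "beetle abundance survey", "d2": "soil moisture series"}
index = build_index(datasets)
results = search("insect diversity", index)  # no keyword overlap with d1
```

Here "insect" and "beetle" share the category ORGANISM, so the query retrieves the beetle dataset, which a plain keyword search would miss.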