    Interactive visual exploration of a large spatio-temporal dataset: Reflections on a geovisualization mashup

    Exploratory visual analysis is useful for the preliminary investigation of large structured, multifaceted spatio-temporal datasets. This process requires the selection and aggregation of records by time, space and attribute, the ability to transform data and the flexibility to apply appropriate visual encodings and interactions. We propose an approach inspired by geographical 'mashups' in which freely-available functionality and data are loosely but flexibly combined using de facto exchange standards. Our case study combines MySQL, PHP and the LandSerf GIS to allow Google Earth to be used for visual synthesis and interaction with encodings described in KML. This approach is applied to the exploration of a log of 1.42 million requests made of a mobile directory service. Novel combinations of interaction and visual encoding are developed including spatial 'tag clouds', 'tag maps', 'data dials' and multi-scale density surfaces. Four aspects of the approach are informally evaluated: the visual encodings employed, their success in the visual exploration of the dataset, the specific tools used and the 'mashup' approach. Preliminary findings will be beneficial to others considering using mashups for visualization. The specific techniques developed may be more widely applied to offer insights into the structure of multifarious spatio-temporal data of the type explored here.
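
    As a rough illustration of the kind of glue code such a mashup relies on (the paper's own pipeline uses MySQL, PHP and LandSerf), the Python sketch below aggregates logged requests by place and writes KML placemarks that a viewer such as Google Earth could load. The database layout and field names are hypothetical.

```python
# Minimal sketch (not the authors' PHP/MySQL pipeline): aggregate request
# records by location and emit KML placemarks for a viewer such as Google
# Earth. The "requests" table and its columns are hypothetical.
import sqlite3
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def requests_to_kml(db_path: str) -> str:
    """Aggregate logged requests per place and return a KML document string."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT place_name, lon, lat, COUNT(*) AS n "
        "FROM requests GROUP BY place_name, lon, lat"
    ).fetchall()
    con.close()

    kml = ET.Element("kml", {"xmlns": KML_NS})
    doc = ET.SubElement(kml, "Document")
    for name, lon, lat, n in rows:
        pm = ET.SubElement(doc, "Placemark")
        ET.SubElement(pm, "name").text = f"{name} ({n} requests)"
        point = ET.SubElement(pm, "Point")
        # KML expects "lon,lat[,alt]" coordinate ordering.
        ET.SubElement(point, "coordinates").text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")
```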

    Programming patterns and development guidelines for Semantic Sensor Grids (SemSorGrid4Env)

    The web of Linked Data holds great potential for the creation of semantic applications that can combine self-describing structured data from many sources including sensor networks. Such applications build upon the success of an earlier generation of 'rapidly developed' applications that utilised RESTful APIs. This deliverable details experience, best practice, and design patterns for developing high-level web-based APIs in support of semantic web applications and mashups for sensor grids. Its main contributions are a proposal for combining Linked Data with RESTful application development summarised through a set of design principles; and the application of these design principles to Semantic Sensor Grids through the development of a High-Level API for Observations. These are supported by implementations of the High-Level API for Observations in software, and example semantic mashups that utilise the API.
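
    A minimal sketch of the style of API the deliverable argues for, combining RESTful resources with Linked Data conventions: each observation is addressable by URI and returned as a self-describing JSON-LD document. Flask, the URI layout and the vocabulary terms are illustrative assumptions here, not the project's actual High-Level API.

```python
# Hedged sketch of a Linked-Data-flavoured REST resource for sensor
# observations; Flask, the URI layout and the vocabulary are illustrative
# assumptions, not the deliverable's actual API.
from flask import Flask, jsonify, url_for

app = Flask(__name__)

# Hypothetical in-memory store standing in for a sensor archive.
OBSERVATIONS = {
    "obs-1": {"sensor": "sensor-42", "property": "airTemperature",
              "value": 18.3, "unit": "Cel", "time": "2011-06-01T12:00:00Z"},
}

@app.route("/observations/<obs_id>")
def get_observation(obs_id):
    obs = OBSERVATIONS.get(obs_id)
    if obs is None:
        return jsonify({"error": "not found"}), 404
    # Each resource is self-describing: it carries its own URI (@id),
    # a context for its terms, and links to related resources.
    doc = {
        "@context": {"ssn": "http://purl.oclc.org/NET/ssnx/ssn#"},
        "@id": url_for("get_observation", obs_id=obs_id, _external=True),
        "@type": "ssn:Observation",
        "ssn:observedProperty": obs["property"],
        "ssn:observationResult": {"value": obs["value"], "unit": obs["unit"]},
        "ssn:observedBy": {"@id": f"/sensors/{obs['sensor']}"},
        "time": obs["time"],
    }
    resp = jsonify(doc)
    resp.mimetype = "application/ld+json"  # advertise the JSON-LD media type
    return resp
```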

    Implementation and Deployment of a Library of the High-level Application Programming Interfaces (SemSorGrid4Env)

    The high-level API service is designed to support rapid development of thin web applications and mashups beyond the state of the art in GIS, while maintaining compatibility with existing tools and expectations. It provides a fully configurable API, while maintaining a separation of concerns between domain experts, service administrators and mashup developers. It adheres to REST and Linked Data principles, and provides a novel bridge between standards-based (OGC O&M) and Semantic Web approaches. This document discusses the background motivations for the HLAPI (including experiences gained from any previously implemented versions), before moving on to specific details of the final implementation, including configuration and deployment instructions, as well as a full tutorial to assist mashup developers with using the exposed observation data.
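
    From the mashup developer's side, consuming such an API might look like the hedged sketch below: a plain HTTP request asking for JSON-LD and reading the linked resources it returns. The base URL, query parameters and response shape are hypothetical placeholders, not the documented HLAPI interface.

```python
# Hedged client-side sketch of consuming a High-Level-API-style observations
# endpoint from a mashup. The base URL, query parameters and JSON-LD shape
# are invented; only the general REST + Linked Data style follows the text.
import json
import urllib.parse
import urllib.request

BASE = "http://example.org/hlapi"  # placeholder host

def fetch_observations(feature: str, prop: str) -> list:
    """Request observations for a feature/property pair as JSON-LD."""
    query = urllib.parse.urlencode({"feature": feature, "property": prop})
    req = urllib.request.Request(
        f"{BASE}/observations?{query}",
        headers={"Accept": "application/ld+json"},
    )
    with urllib.request.urlopen(req) as resp:
        doc = json.load(resp)
    # Linked Data idiom: each member carries its own @id, which a mashup
    # can dereference for more detail.
    return doc.get("@graph", [])

if __name__ == "__main__":
    for obs in fetch_observations("tidegauge-12", "waveHeight"):
        print(obs.get("@id"), obs.get("value"))
```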

    Towards a Model of Determinants of Web Services Platform Adoption by Complementers

    The recent surge of interest in web services has called attention to the increasingly intense competition between owners of the platforms on which these services run. Given that widely adopted operating systems and middleware platforms have yielded sizable economic returns for their owners, many web services platform owners are aggressively pursuing strategies that can give them a competitive advantage and, it is hoped, similarly sizable returns. A review of the broader literature on software platform competition reveals widespread acceptance of network effect theory as an explanatory framework. Network effect theory posits that the value of a software platform to a potential user is associated positively with the number of existing users of the platform (who generate direct network effects) and the number of developers of complementary software applications (who generate indirect network effects) (see, e.g., Katz and Shapiro, 1986; Zhu et al., 2006). Users realize direct network effects when, for example, they share compatible files with other users (Gao and Iyer, 2006; Lin and Kulatilaka, 2006) or participate in 'trading communities' (Zhu et al., 2006). Indirect network effects are realized through the availability of useful, innovative and compatible software applications (Lin and Kulatilaka, 2006). Users of widely adopted software platforms also gain value from the reduced likelihood of being 'stranded with a failed and unsupported platform' and consequent switching costs (Gallaugher and Wang, 2002, p. 306). In the presence of network effects, then, software platform owners pursue strategies that will secure them an 'installed base' of users and complementers that is sufficiently large to attract more and more new users (Shapiro and Varian, 1998; Suarez, 2005). While one set of strategies is aimed at promoting adoption by new users, another set emphasizes the value generated for users by indirect network effects and aims instead at promoting adoption by complementers. (This distinction reflects the idea that platform markets are two-sided, with (end) users populating one side and complementers populating the other.) There appears to be considerably more research on strategies for increasing user adoption (see Gallaugher and Wang (2002), von Westarp (2003) and Zhu and Iansiti (2007) for reviews) than on complementer adoption strategies. Nonetheless, three studies of the latter merit mentioning here. First, in their study of the U.S. video game industry from 1976 to 2002, Venkatraman and Lee (2003) find that platform dominance (i.e., largest installed base), together with complementers' path dependency and level of experience with platform architecture, largely determine platform adoption by complementers. Second, in his investigation of how software platform owners maintain a balance between 'adoption and appropriation', West (2003) concludes that software platform owners who disclose some proprietary code will attract more complements (thereby fostering innovation), but cautions against disclosing any code that confers a competitive advantage. Finally, Cusumano and Gawer's (2002) landmark study of Intel's platform management strategies culminated in the endorsement of four 'levers' for platform leadership, with one of these levers aimed at managing relations with 'external complementers'. Specific strategies include building a consensus on technical specifications and standards, handling potential conflicts of interest and letting complementers keep any intellectual property they develop on the platform. Both West (2003) and Cusumano and Gawer (2002) also underscore the importance of providing complementers with an interface to connect to the platform. Beyond West's (2003, p. 1260) suggestion that software platform owners 'create and evolve application programming interfaces (APIs)', though, the varied ways in which these APIs might influence a complementer's choice to adopt have not been sufficiently explored by these or other authors. The research-in-progress described in the following section aims to bolster the somewhat scant literature on software platform adoption by complementers. More specifically, the research design that follows outlines a proposed investigation of the determinants of complementer adoption of geo-mapping web services platforms. The reasons for including independent variables are discussed, and some methodological details are introduced. The paper concludes with a brief discussion of anticipated outcomes of the study.

    Enabling Innovation across the Enterprise through Mashup-oriented Collaboration Environments

    Nowadays enterprise collaboration is becoming essential for valuable innovation and competitive advantage. This collaboration must be taken a step further, from purely technical collaboration to the collective, smart exploitation of global intelligence. The Future Internet is expected to be composed of a mesh of interoperable Web Services accessed from all over the Web. This vision has not yet materialised, since global user-service interaction is still an open issue. This paper states our vision with regard to the next-generation front-end web technology that will enable integrated access to services, contents and things in the Future Internet. This approach will enable the massive deployment of services over the Internet in a user-centric fashion. With this in mind, we present the rationale behind EzWeb, a reference architecture and implementation of an open Enterprise 2.0 Collaboration Platform that empowers its users to co-produce and share instant applications.

    Enhancement of the usability of SOA services for novice users

    Recently, the automation of service integration has provided a significant advantage in delivering services to novice users. The art of integrating various services is known as Service Composition; its main purpose is to simplify the development process for web applications and to facilitate the reuse of services. It is one of the paradigms that enables the delivery of services to end-users (i.e. service provisioning) through the outsourcing of web content, and it requires users to share and reuse services in more collaborative ways. Most service composers are effective at enabling the integration of web content, but they do not enable universal access across different groups of users. This is because currently existing content aggregators require complex interactions in order to create web applications (e.g., Web Service Business Process Execution Language (WS-BPEL)); as a result, not all users are able to use such web tools. This trend demands changes in the web tools that end-users use to gain and share information. Hence this research uses Mashups as a service composition technique to allow novice users to integrate publicly available Service Oriented Architecture (SOA) services with minimal active web application development. Mashups, being platforms that integrate disparate web Application Programming Interfaces (APIs) to create user-defined web applications, present a great opportunity for service provisioning. However, their usability for novice users remains unvalidated, since Mashup tools are not easy to use: they require basic programming skills, which makes the process of designing and creating Mashups difficult. Mashup tools access heterogeneous web content using public web APIs, and integrating these APIs becomes complex because they are tailored by different vendors. Moreover, the design of Mashup editors is unnecessarily complex; as a result, users do not know where to start when creating Mashups. This research addresses the gap between Mashup tools and usability by designing and implementing a semantically enriched Mashup tool to discover, annotate and compose APIs, and thereby improve the utilization of SOA services by novice users. The researchers conducted an analysis of existing Mashup tools to identify challenges and weaknesses experienced by novice Mashup users. The findings from the requirement analysis formulated the system usability requirements that informed the design and implementation of the proposed Mashup tool. The proposed architecture addresses three layers: composition, annotation and discovery. The researchers developed a simple Mashup tool, referred to as soa-Services Provisioner (SerPro), that allows novice users to create web applications flexibly. Its usability and effectiveness were validated. The proposed Mashup tool enhanced the usability of SOA services: data analysis showed that it was usable by novice users, scoring 72.08 on the System Usability Scale (SUS). Furthermore, this research discusses the research limitations and future work for further improvements.
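
    To make the reported figure concrete, the sketch below shows the standard SUS calculation: ten items answered on a 1-5 scale, with odd-numbered items scored as the response minus 1 and even-numbered items as 5 minus the response, the raw sum then multiplied by 2.5. This is the textbook formula, not code from the study itself.

```python
# Standard System Usability Scale (SUS) computation, shown only to make the
# reported score of 72.08 concrete; the example answers are invented.
def sus_score(responses):
    """responses: ten answers on a 1-5 scale, in questionnaire order."""
    if len(responses) != 10:
        raise ValueError("SUS uses exactly 10 items")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered (positively worded) items contribute r - 1,
        # even-numbered (negatively worded) items contribute 5 - r.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scales the 0-40 raw sum to a 0-100 score

def mean_sus(all_responses):
    """Average the per-participant scores, as usability studies report."""
    scores = [sus_score(r) for r in all_responses]
    return sum(scores) / len(scores)

# Example: one participant's answers, giving a score of 72.5.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 3, 4, 2]))
```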

    Semantic annotation of Web APIs with SWEET

    Recent technology developments in the area of services on the Web are marked by the proliferation of Web applications and APIs. The development and evolution of applications based on Web APIs is, however, hampered by the lack of automation that can be achieved with current technologies. In this paper we present SWEET - Semantic Web sErvices Editing Tool - a lightweight Web application for creating semantic descriptions of Web APIs. SWEET directly supports the creation of mashups by enabling the semantic annotation of Web APIs, thus contributing to the automation of the service discovery, composition and invocation tasks. Furthermore, it enables the development of composite SWS-based applications on top of Linked Data.
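
    For flavour, here is a hedged sketch of the kind of machine-readable description such an annotation tool produces, emitted as RDF with rdflib. The 'wapi' vocabulary and the example geocoding API are invented for illustration; they are not SWEET's actual annotation model or output format.

```python
# Hedged sketch of describing a Web API operation as RDF, in the spirit of
# semantic annotation tools like SWEET. The "wapi" vocabulary and the example
# API are invented for illustration.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

WAPI = Namespace("http://example.org/vocab/webapi#")   # hypothetical vocabulary
EX = Namespace("http://example.org/apis/geocoder#")    # hypothetical API

g = Graph()
g.bind("wapi", WAPI)

service = EX.GeocodingAPI
operation = EX.geocodeAddress

g.add((service, RDF.type, WAPI.WebAPI))
g.add((service, WAPI.hasOperation, operation))
g.add((operation, RDF.type, WAPI.Operation))
g.add((operation, WAPI.httpMethod, Literal("GET")))
g.add((operation, WAPI.addressTemplate, Literal("/geocode?q={address}")))
# Linking an input parameter to a shared concept is what enables automated
# discovery and composition across independently described APIs.
g.add((operation, WAPI.hasInput, URIRef("http://dbpedia.org/resource/Address")))

print(g.serialize(format="turtle"))
```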

    Mining and quality assessment of mashup model patterns with the crowd: A feasibility study

    Pattern mining, that is, the automated discovery of patterns from data, is a mathematically complex and computationally demanding problem that is generally not manageable by humans. In this article, we focus on small datasets and study whether it is possible to mine patterns with the help of the crowd by means of a set of controlled experiments on a common crowdsourcing platform. We specifically concentrate on mining model patterns from a dataset of real mashup models taken from Yahoo! Pipes and cover the entire pattern mining process, including pattern identification and quality assessment. The results of our experiments show that a sensible design of crowdsourcing tasks may indeed enable the crowd to identify patterns from small datasets (40 models). The results, however, also show that the design of tasks for the assessment of the quality of patterns to decide which patterns to retain for further processing and use is much harder (our experiments fail to elicit assessments from the crowd that are similar to those by an expert). The problem is relevant in general to model-driven development (e.g., UML, business processes, scientific workflows), in that reusable model patterns encode valuable modeling and domain knowledge, such as best practices, organizational conventions, or technical choices, that modelers can benefit from when designing their own models.
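
    For intuition about what a mashup model pattern amounts to, the sketch below counts component pairs that recur across a handful of toy models, a crude automated baseline for the identification step that the crowd performs in the paper. The example models and the support threshold are invented; this is not the paper's crowdsourcing protocol.

```python
# Illustrative baseline only: approximate "mashup model patterns" as component
# pairs that co-occur in several models. The example models are invented.
from collections import Counter
from itertools import combinations

# Each model is the set of component types used in one (hypothetical) pipe.
models = [
    {"fetch-feed", "filter", "sort", "output"},
    {"fetch-feed", "filter", "union", "output"},
    {"fetch-feed", "sort", "output"},
    {"yql", "filter", "output"},
]

def frequent_pairs(models, min_support=2):
    """Return component pairs appearing together in at least min_support models."""
    counts = Counter()
    for components in models:
        for pair in combinations(sorted(components), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

for pair, support in sorted(frequent_pairs(models).items(), key=lambda kv: -kv[1]):
    print(pair, "appears in", support, "models")
```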